I use this quite frequently. Thanks for the heads up that Amazon is ignoring that fact and randomly assigning some attribute.
That’s that for ever going into the “Improve Listing Quality” dashboard…
Its database is every book that has ever been published and the entire World Wide Web.
Versions prior to 4 did not search the web. If you weren't experimenting with 4, check it out and see if you like it better.
None of my carefully crafted detail pages have improved as a result of their meddling. It’s too much work to keep up on the never-better changes and even worse to get them corrected. This platform is draining, like an energy vampire.*
Wrong materials, like yours; titles missing the most important word(s) that actually tell what the item is; and quantities added to single-item listings.
I know one of the AI sources is Retail's wholesale supplier database. I've seen titles changed to exactly what is on a few brands' wholesale price lists, which were meant for utility, not marketing.
Common sense is endangered.
Sadly, I suspect that someone is going to suffer, needlessly, due to yet another case of Amabots Gone Wild:
https://sellercentral.amazon.com/seller-forums/discussions/t/a7e319c1-14de-492e-925a-6d9216ecb9b5
I just blasted all my contacts and maybe something will shake loose. Almost all are out for the weekend though.
Hoping the OP has REALLY GOOD liability insurance in place and paid for.
Emet, who I generally consider to be one of the better SMEs (“Subject Matter Experts”), has just responded over in the NSFE:
I am not encouraged.
I AM, however, encouraged by this update posted by the NSFE discussion’s OP a few hours ago:
Nature_s_Fusions_Nut
Just wanted to update. I was finally able to get the case escalated to the Brands escalation team, and they unmerged them. This was even after getting responses from the SAS team and Seller Support, plus an escalation from Seller Support, all saying it wasn't possible.
In the world of science, it has led to many embarrassing errors in scientific journals where the authors relied on erroneous information generated by AI. Wiley, for example, recently dumped a number of journals after discovering such embarrassing errors. A more personal anecdote: My husband gave a talk honoring his former student at her retirement event. He shared with the crowd that he “learned” a lot about the student he didn’t know from AI…and proceeded to quote laughable “facts” about her personal life and career that everyone knew to be ridiculously false.
Well, we both know that within the strictures of science, that is no science at all.
And while I'm a huge advocate of the philosophy of science, and therefore hold that the natural sciences encompass the highest level of Truth as we understand it today, with the greatest amount of explanatory power, I think what you're referring to is the large number of submissions by paper mills through Wiley's Hindawi portfolio, an acquisition they made in 2021.
Such errors are bound to happen wherever there is an incentive to produce, regardless of AI. The various educational institutions are rife with error and error-prone even without AI, thanks to political factions, donor influence, favoritism, exorbitant costs, etc.
I forget the original context of what I said, but the Hindawi bit was found using AI
My point was simply that you can't rely on ChatGPT / AI for accuracy, and the inaccuracies seem to multiply rapidly without sufficient safeguards.
How would that differ from people using wiki information without corroborating it? Where does the onus lie? An LLM simply uses a fancy algorithm to summarize, reword, and present vast amounts of data that is readily available on the internet. So it's a very complex search algorithm, only as good as the prompts it receives, and it still fails much of the time.
How would that differ from people using wiki information without corroborating it?
But schools and other institutions specifically prohibit using, say, Wikipedia as a primary source and/or without corroboration. It is clearly marked and self-contained, obviously suspect, and well known to be an ineffective standalone source.
“AI” is none of those things. Primary-source generators (or heck, even basic bloggers relying on AI) using AI apps as a shortcut for their own work, without independent corroboration, PLUS Google's new AI-generated search results (no click-through to a source required), further complicate the entire issue.
It’s a (bleepin’) flaming mess.
Awesome wordsmithing to Tull.
PLUS Google’s new AI-based search-generated-results
Yikes all the way around
eBay is pushing AI hard, with a single button to generate AI descriptions. They are nothing but fluff, add nothing to the listings, and might even attribute features to items that don't have said features. It'll be interesting to see who eats the returns when AI says an item can do something it can't. Guess it's the seller's fault for pushing the button.
I found an AI chatbot that teaches Japanese. OMG, it’s so cool. I want to learn conversational Japanese, and this one helps. If I don’t know what they are saying, I ask, and she answers. Who knows if I’m learning incorrect verbiage, but I know at least the basic sentences are correct. LOL.
I sent a friendly note to a new Chinese-speaking family member using Google Translate. It worked OK, except it rendered signing my name “Sue” as taking legal action.
The eBay AI descriptions are so bad they are embarrassing. Everything is a “must have” and other blather. I open them to see if they offer anything interesting or persuasive, and may wind up using bits and pieces at times, taking care to delete the info I don't trust.