‘Dangerous nonsense’: AI-authored books about ADHD for sale on Amazon


Amazon is selling books marketed at people seeking techniques to manage their ADHD that claim to offer expert advice but appear to have been written by a chatbot such as ChatGPT.

Amazon's marketplace has been flooded with works produced by artificial intelligence that are easy and cheap to publish but can include misleading or dangerous information, such as shoddy travel guides and mushroom foraging books that encourage risky tasting.

A number of books have appeared on the online retailer's site offering ADHD guides that also appear to have been written by chatbots. The titles include Navigating ADHD in Men: Thriving with a Late Diagnosis; Men with Adult ADHD: Highly Effective Techniques to Master Focus, Time Management and Overcome Anxiety; and Men with Adult ADHD Diet and Fitness.

Samples from eight books were examined for the Guardian by Originality.ai, a US company that detects AI-produced content. The company said each one scored 100% on its AI detection measure, meaning its systems were highly confident the books were written by a chatbot.

Experts said online marketplaces were a "wild west" owing to the lack of regulation around AI-created works, and that dangerous misinformation risked spreading as a result.

Michael Cook, a computer science researcher at King's College London, said generative AI systems were known to give dangerous advice, for example around ingesting toxic substances, mixing dangerous chemicals or ignoring health guidelines.

As such, it was "frustrating and depressing to see AI-authored books increasingly appearing on digital marketplaces", particularly on health and medical topics, which could result in misdiagnosis or worsen conditions, he said.

"Generative AI systems like ChatGPT may have been trained on many medical textbooks and articles, but they've also been trained on pseudoscience, conspiracy theories and fiction," said Cook.

"Nor can they be relied upon to critically analyse or reliably reproduce the knowledge they have previously read; it's not as simple as having the AI 'remember' things it has seen in its training data. Generative AI systems should not be allowed to deal with sensitive or dangerous topics without the oversight of an expert," he added.

However, Cook noted that Amazon's business model incentivised this kind of practice, because it profited "every time" people bought a book, whether that work was "trustworthy or not", while the companies whose generators created the products bore no responsibility.

Prof Shannon Vallor, director of the Centre for Technomoral Futures at the University of Edinburgh, said Amazon bore "an ethical responsibility not to knowingly facilitate harm to its customers and to society", although it would be "absurd" to hold a bookseller responsible for the content of every book it sells.

Problems were arising because the guardrails that previously operated in the publishing industry, such as reputational concerns and the vetting of authors and manuscripts, had been completely upended by AI, she noted.

This was compounded by a "wild west" regulatory environment in which there were no "meaningful consequences for those who enable harms", fuelling a "race to the bottom", Vallor said.

Currently, there is no legislation requiring AI-authored books to be labelled as such. Copyright law applies only if a specific author's content has been reproduced, although Vallor noted that tort law should impose "basic duties of care and due diligence".

The Advertising Standards Authority said AI-authored books must not be advertised in a way that gives the misleading impression they were written by a human, allowing people who had seen such books to submit a complaint.

Richard Wordsworth was hoping to learn more about his recent adult ADHD diagnosis when his father recommended a book he had found on Amazon after searching for "ADHD adult men".

When Wordsworth sat down to read it, "it immediately seemed strange", he said. The book opened with a quote from the conservative psychologist Jordan Peterson, then offered a series of random anecdotes, as well as historical inaccuracies.

Some advice was actively harmful, Wordsworth observed. For example, one chapter discussing emotional dysregulation warned that friends and family do not "forgive the emotional damage you inflict. The pain and hurt caused by impulsive anger leave lasting scars."

When Wordsworth looked up the author, he spotted a headshot that appeared to be AI-generated, as well as a lack of qualifications. He searched several other titles on Amazon's marketplace and was shocked to come across warnings that his condition was "catastrophic" and that he was "four times more likely to die significantly earlier".

He immediately felt "upset", as did his highly educated father. "If he can be taken in by this sort of book, anyone could be, and well-intentioned, desperate people are having their heads filled with dangerous nonsense by scammers while Amazon takes its cut," Wordsworth said.

A spokesperson for Amazon said: "We have content guidelines governing which books can be listed for sale and we have proactive and reactive methods that help us detect content that violates our guidelines, whether AI-generated or not. We invest time and resources to ensure our guidelines are followed and remove books that do not adhere to those guidelines.

"We continue to enhance our protections against non-compliant content, and our process and guidelines will keep evolving as we see changes in publishing."
