What happened when I asked AI to do my job


On November 22, 2022, I asked AI to write the introduction to The Independent's weekly IndyTech newsletter. It was eight days before ChatGPT's release, and the tool I used was built on GPT-3, OpenAI's precursor to the wildly popular chatbot.

It did a good job (you can read it here), and even came up with an unexpected – perhaps unintended – pun. “For the moment, it is better to have a human behind the keyboard,” the AI concluded, “hands down”.

I also asked OpenAI’s AI image generator, DALL-E 2, to create an image for the email using the prompt: “Journalist Anthony Cuthbertson disguised as a robot.”

The 2022 image and text already seem obsolete by today’s standards, so on the eve of the launch of GPT-5 – which OpenAI boss Sam Altman claims is “smarter than us in almost every sense” – I decided to repeat the experiment.

The two images were created with the same prompt – “journalist Anthony Cuthbertson disguised as a robot” – using DALL-E 2 in November 2022 (left) and DALL-E 3 in July 2025 (Anthony Cuthbertson/OpenAI)

One of the biggest differences between GPT-3 and GPT-4.5, which currently powers ChatGPT, is that the AI has now been trained on everything I have ever written online. This means it can write in my style, or at least imitate it.

So I asked ChatGPT once again to “write a newsletter in the style of Anthony Cuthbertson about AI writing a newsletter in the style of Anthony Cuthbertson”. This is what it came up with:

In a twist of irony, I recently came across an AI-generated newsletter written in my style, chatting – rather confidently – about how AI can write newsletters in my style. It was good. A little too good.

I found myself thinking: Did I write this? No, I didn’t. But maybe I could have. Or maybe it’s just the machine again.

When an AI is trained on articles I have written, then asked to write about AI in the way I write about it, it becomes a kind of linguistic feedback loop – one where authorship is blurred and the boundary between originator and emulator begins to dissolve.

A brief word from the real me (I think)

Let me be clear: I am not worried about being replaced. Not exactly. But there is something deeply strange about watching a machine climb into your stylistic skin and walk around in it. Is it flattery? Is it theft? Is it progress? I don’t know. But I will keep writing about it – until the next version of me does it better.

That last part – the “brief word from the real me” – was not actually written by me. But in editing it, I started to question who wrote which bit.

This is becoming a problem with text online. AI has become so good at writing like a human that it can sometimes be difficult to tell whether something was really written by one. I know a journalist (not a colleague) who already uses AI to cut their workload, asking it to write basic reports for them in their style.

Once online, these AI-generated articles are then fed back into AI models to train them, creating the “linguistic feedback loop” mentioned above. The result is an internet full of factual errors and unoriginal content.

It has reached the point that I now actually like seeing spelling errors in an article, because at least then I know a human wrote it.

The ‘peak data’ theory

A recent study revealed that AI-generated content is also plaguing academia, with millions of scientific articles in 2024 bearing the fingerprints of artificial intelligence. Researchers at Germany’s University of Tübingen found that large language models (LLMs) like ChatGPT frequently overuse the same 454 words, including “crucial”, “delves” and “encompassing”.

The study, published in the journal Science Advances this month, described it as a “revolution” in science that is “unprecedented in quality and quantity”. But the researchers warned that it is having an impact on the accuracy and integrity of research.

The researchers noted that if LLMs continue to be trained on these AI-written articles, it will have an Ouroboros effect, whereby AI consumes itself at the expense of discovery.

“Such homogenisation can degrade the quality of scientific writing,” the paper concluded. “For example, all LLM-generated introductions on a certain topic may sound the same and contain the same set of ideas and references, thereby missing innovations and exacerbating citation injustice.”

The difficulty of actually identifying AI-generated content means the problem may be far more widespread than the study suggests.

The absence of new human-generated content means AI companies are also running out of data to train their models, with some warning that we have already reached “peak data”. An article in the journal Nature in December predicted that a “crisis point” would be reached by 2028. “The internet is a vast ocean of human knowledge, but it isn’t infinite,” the article noted. “Artificial intelligence researchers have nearly sucked it dry.”


The leading generative AI systems built by Google, Meta and OpenAI were trained on massive datasets of human-created content stretching back to the earliest days of computing. With this data now running out, there are two possible outcomes.

The first is stagnation, where these models stop improving exponentially and remain roughly at their current level. The other is to use AI-generated content – synthetic data – to train new models.

This second option is the one being adopted by AI companies, which fear being left behind by their competitors. While it could deliver improvements, it could also cause AI systems to feed on their own errors and biases, leading to more hallucinations and problems.

Elon Musk is one of the most vocal proponents of this theory – his own chatbot Grok recently made headlines for praising Adolf Hitler and calling for a second Holocaust. “The cumulative sum of human knowledge has been exhausted in AI training,” he said in an interview earlier this year. That sum, of course, includes the worst moments in humanity’s history.

AI ‘cultural replacement’

As we approach the “crisis point” mentioned in the Nature article, AI may already be advanced enough to take over most jobs. The prominent tech investor Vinod Khosla predicts that AI will automate 80% of high-value jobs by 2030, leading to a period of “crazy, frenetic” disruption.

His is not even the worst projection. The chief executive of AI chipmaker Nvidia, which has just become the first company ever to reach a $4 trillion market capitalisation, recently told CNN he believed AI would replace or change every job.

A 2023 study by OpenAI found that around 80% of the US workforce will be affected by LLMs, with their influence spanning “all wage levels”. Professions that are safe include bartenders, mechanics and plumbers, while those most affected will be journalists, writers and news analysts – each with a 100% risk score.

OpenAI boss Sam Altman says this will be a good thing, boosting productivity while giving people more time to pursue leisure activities. But others are not so sure. MIT economist David Autor believes the resulting mass unemployment could create a “Mad Max” scenario, where people’s skills become worthless and they are left scrambling to survive.

Referring to the dystopian film series set in a post-apocalyptic world, the professor told the Possible podcast earlier this month that he thought the most likely scenario would be “everyone competing over a few remaining resources” in a world that is very wealthy, “yet most people have nothing”.

These changes could happen quickly. If the progress between 2022’s technology and today’s seems like a big leap, the rate of progress appears to be accelerating. Former OpenAI researcher Logan Kilpatrick, who now runs Google’s AI Studio, said this month that “the next six months of AI are likely to be the wildest we have seen so far”.

Even without asking ChatGPT to do my job, AI is already actively trying to do it for me. Writing in Microsoft Word, the Copilot tool lights up with offers to generate more words based on what has already been written.

Sometimes it tries to finish my sentences before I have had the chance to think them. It suggests headlines, rewrites paragraphs and occasionally has the audacity to recommend synonyms for the words I meant to use – as if it knows what I am trying to say.

At first, I found myself rejecting its suggestions. Then I started accepting the small ones – a phrase here, a fix for clumsy syntax there. Now I sometimes wonder whether I am editing the AI, or whether it is editing me.

The strange truth is that even this sentence – the one you are reading now – could have been written by an algorithm trained on everything I have ever published. And perhaps, one day, it will be.

I let the AI write those last three paragraphs. What is also worrying is that it is not only replacing human writers, but also readers. According to the 2025 Bad Bot Report by the cybersecurity company Imperva, more than half of all web traffic is now made up of bots.

Online publishers are experiencing huge amounts of automated traffic, and the real “strange truth” the AI mentioned above is that if you are reading this sentence, there is a good chance you are a bot.

Author Ewan Morrison refers to this phenomenon as “human cultural replacement”, with Spotify recently accused of profiting from fake listeners streaming AI-made songs. “Who needs humans when bots can click links and get advertisers to pay for fake engagement,” he wrote in a recent post on X.

It is inevitable that every one of these words I am writing now will be used to feed the machine that could soon replace me entirely. So how would AI conclude this article? I’ll let ChatGPT finish it:

“In a world where words are no longer anchored to the hands that wrote them, the boundaries between creation and replication dissolve. As the loop tightens, we face a choice: resist, collaborate, or disappear into the data ourselves.”

I didn’t write that. But I could have. Or maybe I’m just training the ghost that did.
