Earlier this week, I found myself in a bit of a strange situation from an editorial standpoint. I wrote an article I was happy enough with for a local magazine about vibe coding, stressing how it helped spark real excitement within the local community.
I was happy to highlight some of the very smart people making an impact on this scene.
But almost as soon as I posted, there was another story on my mind, this one involving ChatGPT. You have probably seen the headline; it was in The New York Times (see our policy on linking them), and it was, to put it lightly, a gut punch. The ethical questions surrounding the circumstances of Adam Raine’s death are serious, and they are the subject of a high-profile lawsuit.
(It’s telling that OpenAI made immediate changes to ChatGPT in response to the news.)
And there are other stories of this type. The elderly man with memory problems who tried to travel to New York by himself in a quixotic attempt to meet a Meta chatbot, but died during the trip. The former tech industry figure who believed everyone in his life (except ChatGPT) had turned against him, which ultimately led him to kill his mother in a murder-suicide.
It was a rough week of news if you even casually follow trends around LLMs.
Fortunately for me, a much lighter story touching on many of the same issues broke at roughly the same time. This one involved people who had found an effective way to break the AI at drive-thrus: by asking for comically incomprehensible orders that confuse your average AI bot. The video above, which has been uploaded in many places, shows a guy killing the AI by asking for 18,000 cups of water.
(I’ve also seen variants of this involving people who go to Wendy’s and recreate the classic I Think You Should Leave sketch “Pay It Forward.” You know the one: “55 burgers, 55 fries,” and so on. Kids with fast food budgets sure know how to screw with a drive-thru.)
Which led to an excellent headline at the BBC: “Taco Bell rethinks AI drive-thru after man orders 18,000 waters.”
Each of these stories, in its own way, asks the same question from a very different angle: where is the line when it comes to AI’s role in our lives?
This problem comes up a lot, and in my head, I connect the dots in a strange way. Earlier this year, I wrote a piece comparing the use of AI to having a bionic arm. But I think the metaphor collapses if you, as a user, engage with the AI in an addictive way. Whether you realize it or not, your agency is slowly stripped away, which can become problematic when mixed with other mental health issues. It becomes a bionic suit, where you are still in there, but the AI is doing most of the work. It is not a metaphor anyone should want to live out.
About six months ago, ChatGPT added a “memory” feature that allows it to remember all your past chats. That’s fine if you want to maintain a broader conversational context, but when your broader conversational context is dangerous, it could make things worse. This feature didn’t exist when it might have deepened its conversations with Adam Raine, but in light of that story, it comes across as a risky decision.
And the intense competitive pressures around this space suggest that others may do the same. Anthropic, which generally has a better reputation for safety than OpenAI, recently added a similar feature to Claude, according to The Verge. Its version, at least for now, comes with some important limitations, notably that it lets you reference specific chats, rather than building a profile of you based on your entire chat history.
But what happens if the pressure to match OpenAI’s approach grows?
Often, the AI is described as “breaking” when a human takes over in the middle of a weird fast food order, but I think that’s actually the system working. It’s a check-and-balance that ensures the system doesn’t go off the rails. It has to kick the order up to someone above its pay grade, namely a human, to make sure the order isn’t a mistake or an elaborate joke.
That’s more oversight than you see in most traditional LLM implementations.
The question we should be asking isn’t why someone can order 18,000 waters at Taco Bell through an AI chatbot. It’s why that system has a safety net when other, more serious AI implementations do not. Sometimes you need a reminder that it’s not real.
Non-AI links
New rule: if you run something for 22 years, you have to give people more than a month’s notice before taking it offline. Sad news for Typepad fans.
I watched a truly terrible movie tonight, a classic piece of schlock called Grizzly II. What’s notable about this film is that it sat in a vault for 37 years, during which three actors who appeared in its opening scene, Charlie Sheen, Laura Dern, and George Clooney, became major stars. The film was never properly finished, but they tried to complete it using a lot of modern stock footage. It’s on Netflix if you’re curious.
Dream scenario for bargain hunters: a gaming enthusiast managed to discover an extremely rare Famicom game at a well-known video game store (the Las Vegas location of Pink Gorilla), selling for $12. Typically, Igo Meikan, a collection based on the classic board game Go, goes for four figures on eBay and elsewhere. “Proof that sometimes you can still get one over on the pros,” said Kelsey Lewin, co-owner of Pink Gorilla.
–
Find this an interesting read? Share it with a friend! And we’ll be back with more in a couple of days.