The ethics of AI jobs: Are $100M salaries worth the societal risk?


It’s a good time to be a highly sought-after AI engineer. To lure top researchers away from OpenAI and other competitors, Meta has reportedly offered compensation packages totaling more than $100 million. The best AI engineers are now paid like football superstars.

Few people will ever have to grapple with the question of whether to work on Mark Zuckerberg’s “superintelligence” venture in exchange for enough money to never have to work again. (Bloomberg columnist Matt Levine noted that this is a bit of a fundamental challenge for Zuckerberg: if you pay someone enough to retire after a single month, they might simply leave after a single month, right? You need some kind of elaborate compensation structure to ensure they can become fabulously rich without just retiring.)

Most of us can only dream of having that problem. But many of us have at some point had to weigh whether to take an ethically dubious job (denying insurance claims? shilling cryptocurrency? making mobile games more habit-forming?) to pay the bills.

For those who work in AI, this ethical dilemma is supercharged to the point of absurdity. AI is an extraordinarily high-stakes technology, for good and for ill, with leaders in the field warning that it could kill us all. The small number of people talented enough to help bring about superintelligent AI can dramatically alter the technology’s trajectory. Is it even possible for them to do so ethically?

AI really is going to be a big deal

On the one hand, the leading AI companies offer workers the potential to earn unfathomable wealth while also contributing to very meaningful social good, including productivity tools that could accelerate medical breakthroughs and technological discovery, and enable more people to code, design, and do any other work that can be done on a computer.

On the other hand, well, it is hard for me to argue that the “waifu engineer” role xAI is now hiring for (a position responsible for making Grok’s risqué anime “companion” even more habit-forming) has any social benefit at all. In fact, I fear the rise of such bots will be to society’s lasting detriment, encouraging delusional beliefs in vulnerable users with mental illness.

Far more worryingly, the researchers racing to build powerful AI “agents” (systems that can independently write code, make purchases online, interact with people, and hire subcontractors for tasks) are running into plenty of signs that these AIs could intentionally deceive humans and even take dramatic, hostile action against us. In tests, AIs have tried to blackmail their creators or send a copy of themselves to servers where they could operate more freely.

For now, AIs exhibit this behavior only when given prompts precisely designed to push them to their limits. But as a growing number of AI agents populate the world, anything that can happen under the right circumstances, however rare, probably will happen sometimes.

In recent years, the consensus among AI experts has shifted from “hostile AIs trying to kill us is completely implausible” to “hostile AIs only try to kill us in carefully designed scenarios.” Bernie Sanders, not exactly a tech hype man, is now the latest politician to warn that as independent AIs grow more powerful, they could take power from humans. It is a “doomsday scenario,” as he called it, but it is no longer a fringe position.

And whether or not AIs themselves ever decide to kill or harm us, they could fall into the hands of people who will. Experts fear that AI will make it much easier for malevolent individuals to engineer plagues or plan acts of mass violence, and for states to achieve a degree of surveillance over their citizens that they have long dreamed of but never before been able to attain.


In principle, many of these risks could be mitigated if labs designed and adhered to solid safety plans, responding swiftly to signs of frightening behavior among AIs in the wild. Google, OpenAI, and Anthropic do have safety plans, which do not seem fully adequate but are much better than nothing. In practice, though, mitigation often falls by the wayside in the face of intense competition between AI labs. Several labs have weakened their safety plans as their models approached pre-specified performance thresholds. Meanwhile, xAI, the creator of Grok, pushes out releases without any apparent safety planning.

Worse, even labs that start out deeply and sincerely committed to developing AI responsibly have often changed course later because of the enormous financial incentives in the field. That means even if you take a job at Meta, OpenAI, or Anthropic with the best of intentions, all of your effort toward building a good AI outcome could be redirected toward something else entirely.

So should you take the job?

I have been watching this industry unfold for seven years now. While I am generally a techno-optimist who wants to see humanity dream up and invent new things, my optimism has been tempered by AI companies openly admitting that their products could kill us all, then racing ahead with precautions that seem wholly inadequate to those stakes. Increasingly, it feels like the AI race is barreling toward a cliff.

Given all of this, I don’t think it is ethical to work at a frontier AI lab unless you have thought very carefully about the risks your work will bring closer to fruition, and you have a specific, defensible reason why your contributions will make the situation better, not worse. Or, you have an ironclad case that humanity doesn’t need to worry about AI at all, in which case, please publish it so the rest of us can check your work!

When large sums of money are at stake, it is easy to deceive yourself. But I would not go so far as to claim that everyone working in frontier AI is engaged in self-deception. Some of the work documenting what AI systems are capable of and probing how they “think” is enormously valuable. The safety and alignment teams at DeepMind, OpenAI, and Anthropic have done and are doing good work.

But anyone pushing for a plane to take off while convinced it has a 20 percent chance of crashing would be wildly irresponsible, and I see little difference in trying to build superintelligence as fast as possible.

A hundred million dollars, after all, is not worth hastening the death of your loved ones or the end of human freedom. In the end, it is only worth it if you can not just get rich off AI, but also help make it go well.

It may be hard to imagine anyone turning down breathtaking riches simply because it is the right thing to do in the face of theoretical future risks, but I know quite a few people who have done exactly that. I expect there will be more in the years to come, as more absurdities like Grok’s recent MechaHitler debacle cross from science fiction into reality.

And ultimately, whether the future turns out well for humanity may depend on whether we can persuade some of the richest people in history to notice something their paychecks depend on them not noticing: that their work could be really, really bad for the world.
