The world is moving toward an “AI-first” paradigm, in which artificial intelligence (AI) becomes not only omnipresent but fundamental to business, technology, and daily life. It is used across industries and demographics to diagnose diseases, detect fraud, write software, optimize operations, and support decision-making. But it also spreads disinformation, enables surveillance without consent, and reinforces inequality when it is built on biased datasets.
As AI quietly infiltrates our daily existence, a larger question looms: who decides how it enters our lives? Who gets to decide how it is created and deployed? And do we have a say?
“Technology should not happen to people. It should happen with and through people,” said Divya Siddarth, founder and executive director of the Collective Intelligence Project, a technology nonprofit that champions more democratic processes for AI.
Having spent almost a decade in Silicon Valley and at Microsoft Research, Siddarth knows how consequential these technology decisions are, and how little public input is factored into them.
“I don’t think we need democratic input on every technology decision, such as the user interface of Google Sheets or the look of a MacBook desktop, but when a technology is truly transformative, the risks of not involving collective input become much more serious,” said Siddarth.
In 2025, more than 378 million people worldwide are expected to use AI tools, a number projected to reach 730 million by 2030, according to the industry group Edge AI and Vision Alliance.
Studies show that, while almost 80% of people around the world think AI will significantly affect their lives over the next decade, more than 85% believe they have no say in how these technologies evolve. It is a pattern reminiscent of the moment social media began to permeate our lives, reshaping communication and society, often without guardrails or consent, said Siddarth.
“We have seen major harms from social media, for example, with children using these platforms or in countries where their impact on elections was ignored,” said Siddarth. “Many of these problems could have been avoided if the voices and experiences of everyday people had been part of the decision-making process. History has shown that listening to those voices is not the default, but it should be.”
But when everyday people neither build AI nor fully understand how it works, how can they be involved in the process?
“Of course, I can’t just walk up to someone on the street and ask how we should evaluate Claude 3.7 for its societal impact. That’s not realistic,” said Siddarth. “What we can do is identify the parts of the technology stack where decisions are about values, not just technical specifications. Those are the points where people should have a say.”
Some of the most promising examples of collective input into technology are happening in places like Taiwan, where, despite some of the highest levels of foreign disinformation in the world, public comment shapes national AI policy.
“In Taiwan, we helped develop policies on freedom of expression and the use of AI in electoral campaigns,” said Siddarth. “We collected input from hundreds of thousands of people, asking them: ‘Where is the line between protecting freedom of expression and fighting disinformation?’ Thanks to that public input, we reached a point where platforms are now required to downrank false content, to share that data across platforms, and to be fully transparent about how their algorithms work.”
In the latest round of its Global Dialogues initiative, which asks people around the world how they interact with and are affected by AI, the Collective Intelligence Project surveyed around 1,000 participants in 63 countries, soliciting feedback on some of AI’s most pressing issues, including chatbot-human relationships. The results show that while many respondents expressed discomfort with the idea of AI replacing intimate human relationships, a significant number made exceptions for areas such as palliative care or mental health support.
“There is a lot of confusion around these issues, but I don’t think you need deep technical knowledge of AI to weigh in,” said Siddarth. “Before we find ourselves in a society we don’t recognize, or don’t like, we need to understand what people want. Where are we comfortable with this technology, and where are we not?”
The general public’s lack of AI literacy should not be used as an excuse to ignore public feedback, said Siddarth. “For example, if your elderly parent spends hours talking to a chatbot, what would you want to know about the company that built it, or about its personality?” she said. “These are the kinds of questions people can answer without needing to be AI experts.”
There is a strong parallel between AI governance and data privacy regulation, because both involve technologies that reshape our lives, largely without our direct input.
“I was fresh out of college and very gung-ho about giving people direct control over their data: you should know what’s happening and make choices about where your data goes,” said Siddarth. “A month into the job, I realized that nobody wanted that. Even I don’t want to spend my days thinking about where my data packets go or who sees what. What I want is a trustworthy intermediary, someone who has my best interests at heart. I just don’t want to be that person.”
Ordinary people can bring more grounded and practical concerns, which are often overlooked in these debates, said Siddarth.
“In early 2023, we ran a project with OpenAI to understand which AI risks people actually worry about,” she said. “The public conversation was portrayed as completely polarized: AI was either going to destroy us or save us. But when we talked to everyday people, what came through most wasn’t fear of doom. It was fear of disempowerment. People said things like, ‘I don’t need to be involved in building this thing day to day. I just don’t understand our cybersecurity risks, or things like that.’”
As conversations about AI governance grow, it is also easy to fall into extremes. “I think there is a danger in saying things like, ‘The people making decisions aren’t acting in the public interest, so let’s hand over direct democratic control,’” said Siddarth. “Or, ‘Let’s get the government involved in everything.’ In reality, what we need is a better intermediate layer.”
Making some form of democracy work for AI governance means finding intelligent ways to organize and use public input. The Collective Intelligence Project designs systems that find what Siddarth calls “surprising agreement” across divides.
“You want to surface ideas that people with different perspectives all value, not just platitudes like ‘children should be happy,’ but specific, actionable consensus,” she said.
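For readers who want a concrete picture of how “surprising agreement” can be surfaced, the sketch below shows one simple approach in the spirit of bridging-based ranking used by deliberation tools such as Polis: score each statement by its approval rate in the least supportive opinion group, so only ideas endorsed across every perspective rank highly. This is an illustrative sketch only, not the Collective Intelligence Project’s actual system; the groups, votes, and scoring rule here are invented for the example.

```python
# Illustrative sketch only: one simple way to surface "surprising agreement"
# across groups with different perspectives. Not CIP's actual method; the
# data, group labels, and scoring rule are invented for this example.

from collections import defaultdict

# votes[statement] = list of (group, approved) pairs. In practice, groups
# might come from clustering participants by their overall voting patterns.
votes = {
    "Platforms should disclose how ranking algorithms work": [
        ("group_a", True), ("group_a", True), ("group_b", True), ("group_b", True),
    ],
    "Ban all political speech online": [
        ("group_a", True), ("group_a", True), ("group_b", False), ("group_b", False),
    ],
}

def bridging_score(ballots):
    """Score a statement by its approval rate in its *least* supportive group,
    so only ideas endorsed across every perspective rank highly."""
    approvals = defaultdict(list)
    for group, approved in ballots:
        approvals[group].append(approved)
    rates = [sum(v) / len(v) for v in approvals.values()]
    return min(rates)  # consensus requires support from all groups

# Rank statements by cross-group consensus rather than raw popularity.
for statement, ballots in sorted(votes.items(), key=lambda kv: -bridging_score(kv[1])):
    print(f"{bridging_score(ballots):.2f}  {statement}")
```

Taking the minimum across groups, rather than the overall average, is what filters out statements that are merely popular with one side: a proposal loved by one group and rejected by another scores zero, while genuinely cross-cutting ideas rise to the top.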
The push for collective input must come at every level: from internal champions in government, in technology companies, and in AI labs, and from independent voices working together, said Siddarth. That includes organizations with credible neutrality that can work across institutions and prioritize collective input.
As the Collective Intelligence Project expands its Global Dialogues program, Siddarth hopes to see a future in which no one feels shut out of the technologies that shape their lives.
“I would love to see a world where you could go to a remote corner of the earth and ask someone whether they felt they had a say in the technology they use, or that is used around them, and they would answer, ‘Yes, I know what’s going on, and I’m involved,’” she said. “That, to me, would be success. It’s ambitious, but that’s what I’m aiming for.”