Fostering Effective Policy for a Brave New AI World: A Conversation with Rishi Bommasani


In the nine years since Rishi Bommasani began his academic career, the AI field has evolved rapidly. His research has expanded in turn, from a technical focus on building AI to exploring how to manage its societal and economic implications.

Now a senior research scholar at the Stanford Institute for Human-Centered AI (HAI), Bommasani recently joined 19 other researchers on a paper published in Science laying out a vision for evidence-based AI policy.

In the conversation that follows, Bommasani describes where he believes he has had the greatest impact so far, his predictions for AI policy, and the questions about AI governance he still hopes to answer.

How has your research path changed with the evolution of AI?

Today, AI is widely deployed, and the path from research to deployment is much shorter than in most other fields. Since the start of my PhD in 2020, my work has shifted from building and evaluating AI to governing it. It has become difficult for academia to build cutting-edge models because of the capital required, but we still need academic leadership. It has become increasingly important to think about how we can bridge the gap between AI research and AI policy.

When you think about what you have accomplished so far, which work do you consider the most impactful?

Overall, I think my work has shown that academia can pursue large-scale multidisciplinary collaborations to have a more direct impact on AI public policy and governance.

Two examples come to mind. The first is a paper I co-authored in 2021 in which we coined the term “foundation models.” This framing has endured dramatic changes in the field and became central both to the EU AI Act, the world's first comprehensive AI law, and to President Biden's executive order on AI.

The second example is my work to help bridge the gap between AI research and policy. Our evidence-based policy paper in Science was a nice capstone for a collection of work around AI governance. I have also played a more hands-on role by directly advising European and American leaders. The European Commission appointed me an independent chair to oversee the implementation of the EU AI Act. And following the Biden executive order, I helped lead consensus-building efforts on how to approach large open-source language models, which included facilitating a private workshop with the White House and the National Telecommunications and Information Administration (NTIA).

The resulting paper, whose 26 authors spanned 17 organizations, along with the official comment we led with Princeton colleagues and a more policy-focused companion piece in Science, reflected the growing consensus around the idea of marginal risk. We urged policymakers to ask not only whether open models could be misused, but whether they introduced risks that were new or greater than those of existing technologies such as search engines. This work informed the NTIA's final report and shaped the American approach to open models, which continues under the Trump administration's AI Action Plan.

What other avenues have you explored for governing AI beyond public policy?

Public policy is only one approach to governing the many increasingly important AI companies. There are also commercial and market incentives that are interesting to explore as AI adoption increases.

For example, in the United States we have not really regulated digital technology, whether search engines, browsers, social media platforms, or digital advertising. People in tech are used to having little government intervention. If you are trying to change things, you may be better off if you can realign commercial incentives without introducing regulation. Market-based approaches can be preferable to regulatory approaches because they are more agile, if we believe governments are slow, and more durable, since they can survive changes in administration.

Your most recent paper explores the need for “evidence-based policy.” Why is defining “evidence” in policymaking sometimes problematic?

We need to use credible evidence in policymaking. But what evidence counts as credible? And how can we surface good evidence?

In public health policy, evidence is generally observational. In economic policy, we allow more theoretical approaches such as forecasting. We need to create a standard for AI evidence that balances real-world data and theory to inform policy decisions.

Then, how can we shape incentives to generate more credible evidence faster? For example, policy could support third-party AI testing. In cybersecurity, companies sometimes give “white hat” hackers a legal safe harbor while they search for vulnerabilities in order to improve systems. But social media companies have pushed back against researchers who try to circumvent the controls of AI models to surface potential harms. What if we had safe harbors for third-party evaluation in the AI space?

What makes AI governance particularly difficult compared with developing policies and guardrails for other technologies?

General-purpose technologies like AI, the Internet, and electricity matter enormously to society; we can all feel this reality. Yet they are very difficult to understand, especially in real time, let alone to govern effectively. What is clear is that these technologies don't just create a new technological niche with some companies building the technology and some consumers using it: they fundamentally change how society as a whole functions. Our society before and after the Internet is very different.

AI implicates an incredibly wide portfolio of risks: bias, surveillance, privacy violations, copyright infringement, child sexual abuse material, cyberattacks, concentration of power, geopolitical tension, economic disruption, and more. While every technology we have built throughout history presents risks, this particular mix of risks is unique.

And so how to make progress on safety and security is complicated. For example, I can tell you that we are steadily making self-driving cars safer, but I can't tell you whether we are making language models safer. Some of the questions about AI also have older counterparts that we still haven't resolved. It's not as if we fully solved Internet privacy problems, or racial bias in hiring, for example, and with AI we now have new versions of these privacy and bias problems to mitigate.

What do you think of the state of AI policy so far?

Right now, most AI policy ideas are quite speculative. Few have been implemented, and we have very little clear signal as to whether a policy succeeds in producing better outcomes.

But I see two good trends. The first is that, around the world, we have government institutions thinking about AI, such as the Center for AI Standards and Innovation in the United States. We have a real cohort of people in government, some from technical backgrounds, who are determined to understand the technology and its impact.

Second, many more nongovernmental and academic groups are now studying AI governance, which is maturing the field. Whether we will collectively come up with good ideas and implement them is uncertain, but having more people thinking about this seriously is progress.

What has been the value of interdisciplinary centers like Stanford HAI in your work?

Almost always, interdisciplinary work will not be immediately valued by the underlying scholarly communities. But Stanford scholars generally believe it is worth reaching outside your discipline to pursue large-scale societal impact.

HAI and similar groups across campus facilitate this kind of interaction. That 2021 paper that coined the term “foundation models” is a great example. The project brought together more than 100 researchers from 10 different departments at Stanford and introduced me to legal scholars, economists, and political scientists with whom I still collaborate today.

It is clear that AI will intersect with every part of society and raise fundamental questions. Will we, as researchers, choose to keep pace with all the ways AI interacts with the world? Will we build bridges and work together on these problems?
