UC Berkeley Law events address concerns at the intersection of AI and racial justice


By Andrew Cohen

Two recent events brought top experts to UC Berkeley Law to address known concerns and troubling uncertainties about the impact of artificial intelligence (AI).

The school's annual law and technology symposium, presented by the Berkeley Center for Law & Technology and the Berkeley Technology Law Journal, focused on centering racial justice in the development of AI policy. Four days earlier, a panel sponsored by the law school's Edley Center on Law & Democracy and the Goldman School of Public Policy asked whether AI will deepen inequities or help advance a more inclusive and participatory democracy.

At the symposium, UC Berkeley Law professors Daniel A. Farber, Andrea Roth, Colleen Chien, and Osagie K. Obasogie moderated panels on AI's impact on racial equity in environmental policy, the criminal legal system, labor justice, and health care, respectively.

Samuelson Law, Technology & Public Policy Clinic staff attorney Juliana Devries ’17 described problems with certain AI technologies in the criminal justice space.

Devries described some of the perils of machine learning and AI-generated evidence in criminal cases, and how clinic students address them in the parole and probation context. Noting that nearly 25% of American prisoners are incarcerated for violating a condition of probation or parole — and that 500,000 adults in the United States are subject to electronic monitoring — she highlighted concerns about the reliability of the technologies that often flag such violations.

“These are complex technologies used against people who are unlikely to be able to defend themselves against these allegations,” she said. “In some states, people aren’t even entitled to a lawyer to challenge them. Smartphone applications increasingly used for monitoring have serious accuracy and bias problems, and facial recognition technology has been shown to produce false and biased results, often with the highest error rates for Black women.”

Calling for more transparency in the development of AI products, Devries said that more resources for understanding technical disclosures could help — such as an in-house engineer in public defender offices — but that this is “pretty far from the reality of our criminal legal system.”

Nicole Ozer ’03, a key figure in the development of the California Electronic Communications Privacy Act and the state’s Reader Privacy Act, discussed a range of legal work and strategy to defend and advance rights and safety. The founding director of the ACLU of Northern California’s Technology and Civil Liberties program, she has worked for more than two decades to strengthen rights, justice, and democracy in the digital age, including by developing the organization’s national privacy campaign and designing innovative local surveillance reform strategies now used across the country.

Ozer discussed a case against a face surveillance company, brought on behalf of racial justice and immigrants’ rights activists, now proceeding in California state court (she worked on an amicus brief in the case focused on California’s constitutional right to privacy). She also highlighted an ACLU case brought in Illinois against the same company, which had allegedly captured more than 10 billion faceprints from online photos of people worldwide, and the resulting settlement that permanently bars the company from making its faceprint database available to most businesses and other private entities.

“At the heart of all of this, really, is power,” said Ozer, now executive director of the new Center for Constitutional Democracy at UC Law San Francisco. “We are often up against the most powerful forces — the biggest companies, the government — and people’s rights and interests are usually the underdog in these fights. So we have to work smarter, more strategically, and more collaboratively if we want to make sure that AI and other new technologies work for people and advance rights, equity, and justice.”

Changing work as we know it

The labor-focused panel amplified the uncertainty around AI’s impact on employment and how Europe has taken a stricter approach to regulating it than the United States. UC Berkeley Law professor Diana S. Reddy noted that while 70% of American workers worry about AI potentially replacing them, the same share say they would welcome it handling parts of their work.

Diana S. Reddy, a UC Berkeley Law professor and labor expert, thinks concerns about AI could fuel an increase in unionization among American workers.

“In the past, automation has been limited to specific uses and designed for specific tasks,” she said. “But AI is capable of responding in real time to changing inputs and taking on a wide variety of roles, and employers have a history of using technological innovations to reduce labor costs.”

While union membership among American workers has fallen from around 20% in 1983 to just 10% today, Reddy sees a potential resurgence of interest in unions as a way for workers to have a say in how AI is used in their jobs. However, she noted that current labor laws apply only to traditional employees, and that many companies have used technology to classify workers as contractors rather than employees. This, she said, is “a fundamental end run around our existing legal infrastructure” that disproportionately harms people of color and other marginalized groups.

“If AI significantly displaces human workers, it’s not just short-term job loss; it’s potentially permanent replacement,” Reddy said. “That would allow a greater concentration of accumulated wealth in companies rather than workers, fueling skyrocketing inequality. So it’s not just unemployment; the risk here is a major social and economic shift.”

She also expressed concern about AI regulatory decisions falling to the states instead of the federal government. Historically, Reddy added, piecemeal regulation in labor and employment law has led employers to flee to places with fewer protections for workers.

“Companies have long billed themselves as job creators, arguing that they deserve deference because helping businesses means helping workers,” she said. “But to the extent that companies use technology to replace workers — to the extent that they cease to be job creators — that could prompt radical changes in how we think about regulating them if they are no longer participating in creating wealth for all of us.”

Where technology and democracy intersect

At the Tech Policy for a Just Future: AI, Racial Equity, and Democracy event, moderated by Edley Center on Law & Democracy Executive Director Catherine E. Lhamon, George Washington University law professor Spencer Overton and Brennan Center for Justice Vice President of Elections Lawrence Norden cited areas of both concern and optimism.

Drawing on his forthcoming article “Ethnonationalism by Algorithm,” Overton argued that AI is at the heart of American identity and democracy, and is designed by and used to benefit “members of the dominant racial, ethnic, or cultural group while attempting to exclude or assimilate others.”

George Washington University law professor Spencer Overton discusses efforts to deter the elimination of discrimination in AI algorithms. Photo by Amanda Ye

Citing reports of AI producing racially disparate results in fields ranging from health care decisions to criminal justice to mortgage lending, he framed them as part of a backlash against America’s demographic shift from 15% people of color in 1965 to more than 40% today.

“This backlash is not unique to America; we also see rising nativism in Brazil and India and other places,” he said. “Racial diversity is no longer considered a public good, and I believe that outlook also shapes our government’s approach to AI governance.”

Overton explained how the Trump administration rescinded a Biden administration executive order requiring federal agencies to limit discrimination resulting from AI algorithms, and described a current push to eliminate perceived impediments to AI development, along with proposals to withhold federal funding from states that regulate AI.

“If you refine your AI to reduce bias, the federal government won’t buy it,” he said. “That deters innovation and keeps people from trying to improve AI. These policies are being operationalized across the government.”

Norden called transparency from AI developers “incredibly important” and data privacy “critical.” He said AI could help draw fairer district maps, locate polling places closer to public transit, and achieve similar goals, but that the growing power of the executive branch, entrenched by the Supreme Court, along with the disproportionate power of certain technology companies, has hampered such progress.

“Social media has had a huge impact on how we see the world, and I think AI will be many times that,” Norden said. “We see how companies shift with the political winds, and many AI and social media companies that once talked about wanting to protect our elections have walked away from the positions [that previously existed to help with that work].”

Lamenting that people working to protect democracy and civil rights are often too siloed, he stressed that technology is integral to the future of democracy — and that AI must be better understood.

“There is a lot of potential for AI to be a great equalizer,” Norden said. “But that won’t happen if companies have no incentive to make it a priority.”
