On Wednesday, October 8, the University of Massachusetts Amherst College of Social and Behavioral Sciences hosted a webinar featuring Alexander Macgillivray, former U.S. Deputy Chief Technology Officer in the Biden administration, who spoke about his experience helping shape U.S. policy on artificial intelligence.
Macgillivray’s talk, “The Past and Future of AI and Regulation: A Practitioner’s Perspective,” served as the keynote event for the UMass Social Sciences Series, “AI and Us: Rethinking Research, Work, and Society.” The series aimed to bring together prominent voices from campus and industry for lectures, workshops and interactive events exploring the role of AI in the social sciences.
He began his talk by describing the Blueprint for an AI Bill of Rights, the first major AI regulation project he worked on in the Biden administration. Completed in October 2022, before the release of ChatGPT, it aimed to ensure that AI would be safe, effective and responsible.
The blueprint is intended to be a guide to protecting society from the threats that artificial intelligence may pose to democracy, fairness and vital resources. The five principles that guided its design, as explained by the Office of Science and Technology Policy, are: safe and effective systems, protections against algorithmic discrimination, data privacy, notice and explanation, and human alternatives, consideration and fallback.
Macgillivray said there was immediately a lot of interest in how the regulations would be implemented, and the bill seemed to resonate with people’s feelings about AI.
According to an IEEE article, other AI researchers agree that the choice of language in the bill makes it clear that AI governance is an important civil rights issue that merits expanded protections.
Macgillivray then discussed President Biden’s Executive Order 14110, which defined the administration’s AI policy goals. By that point, Macgillivray was no longer working in government and was observing the administration’s approach from the outside.
“From the outside, you could see that the Biden administration was struggling with how to encourage the benefits while reducing the harms,” Macgillivray said. He said the order attempted to address several questions at once: the harms, risks and benefits generated by the new technology.
Macgillivray further explained that while the Biden administration had set restrictions on where AI chips could be exported and where training of large-scale AI models could take place, those restrictions have since been eased under the Trump administration.
This was done by rescinding the Biden administration’s framework for artificial intelligence exports, the AI Diffusion Rule, which aimed to regulate global transactions in advanced AI technology and create licensing requirements for exports.
The Trump administration rescinded the rule in May 2025, just before the framework was set to take effect. The U.S. Department of Commerce said the reversal was necessary because the rule “would have damaged U.S. diplomatic relations with dozens of countries by demoting them to second-tier status.”
Executive Orders 14179 and 14141, both issued in January 2025, focus on increasing infrastructure investment, reducing regulations on power plants and data centers, and limiting government use of ideologically biased AI.
Macgillivray said that although the executive order limiting the use of ideologically biased AI was issued, it has not, to his knowledge, been properly implemented. “Although the government continues, I think, to use Elon Musk’s AI, which has a bunch of things hard-coded in it to make sure it’s ideologically consistent with Elon.”
Grok is a generative AI chatbot created by Musk’s company xAI. In September 2025, the Trump administration approved the Grok for Government deal, making the chatbot available to all federal agencies, according to the General Services Administration.
The chatbot has received backlash because, according to a New York Times analysis, Musk has programmed the AI to reflect his own political priorities.
According to Macgillivray, an unproductive trend in AI policy is that people predict a single future for AI and then tailor their policy proposals only to that narrow outcome.
“If you believe in a different scenario, you end up having a conversation that is sort of disjointed because the basic assumptions behind the policy proposals are just radically different,” he said. To combat this, he suggested considering the full range of possible futures for AI and crafting policy solutions that can adapt to any of them.
Macgillivray also suggested that focusing on concrete harms can keep policy proposals from resting on speculative assumptions. He stressed how important it is that policymakers and government officials understand the technology as deeply as possible, and that the people who understand it contribute as much as they can.
“We need to attract talent into government. We need to train them,” Macgillivray said. “The government itself needs to test these technologies, try to build them responsibly and see what works and what doesn’t, so we can regulate more effectively.”
Pearl Davis can be reached at [email protected].