The Church of AI


“We will not use the Taser today. We have guests with us,” Frank* said, referring to the undisclosed “punishment” mentioned for those who have not completed the required reading before coming. Berkeley Rationalist Reading Groups occur weekly, and each event post alludes to this mysterious penalty.

A Brief History of the Rationalists

Although not officially a church, the Rationalists started as an online community around a blog called Overcoming Bias in 2006. In their early years, rationalists strove to create discussions on questions of philosophy, ethics, and technology. Within three years, the blog had become an active network of hundreds of readers and contributors.

After those three years, growing interest in rationalism prompted a chorus of new blogs to spring up from Overcoming Bias, forming what is now called the “rationalist blogosphere.” One of those blogs, LessWrong, focused on deeper questions about how to think rationally and the future of technology. Since then, LessWrong has become the largest blog in the rationalist blogosphere, with over 170,000 registered users.

LessWrong founder Eliezer Yudkowsky co-founded the Singularity Institute for Artificial Intelligence (SIAI), which became the Machine Intelligence Research Institute (MIRI) in 2013. SIAI’s original goal was to develop artificial intelligence, but in 2003, “Yudkowsky realized that there would actually be a problem in aligning AI that is smarter than humans with human values,” according to the MIRI website. MIRI is currently based in Berkeley as a non-profit organization.

Yudkowsky is also the author of “The Sequences,” a nearly 2,400-page collection of essays detailing rationalist philosophy. Although “The Sequences” covers a wide variety of topics, including abstract concepts like “superexponential conceptual space,” its main goal is to teach readers how to “live life rationally.” According to Yudkowsky, rationality is a state of thinking free from cognitive biases.

Courtesy of Time magazine: Eliezer Yudkowsky.

Lighthaven

The Rationalists have seen tremendous growth with the recent rise of AI, so much so that they have put nearly $3 million toward purchasing a permanent property that will serve as their de facto headquarters.

Courtesy of Lighthaven: The Bayes House building, within the Lighthaven property.

Called “Lighthaven,” the property was formerly a hotel. It is now managed by Lightcone Infrastructure, a non-profit arm of the Rationalists. According to their website, Lightcone’s mission is to preserve the human race: “We may not even survive the century. To increase our chances, we are building services and infrastructure for the people who help humanity get through this crucial time.”

Courtesy of Lighthaven: Outside the Bayes House building.

Lighthaven also hosts a weekly book club, where members discuss portions of “The Sequences” or “highlights” from LessWrong discussions. However, the required reading for October 28, 2025, was not about AGI or the apocalypse – it was a “Guide to Rationalist Interior Design.”

Although the movement is best known for debating the future of humanity, many members apply Yudkowsky’s principles to ordinary problems, approaching 401(k) management or dieting with in-depth statistical analysis.

The decorating guide explains in detail how to optimize everything, from bulb brightness (1,600+ lumens is a good start) to the brand of air purifier (Coway or Blueair are best). The writing is surprisingly blunt: “For fuck’s sake, please remove the plastic covers from the filters before operating them,” it says of air purifiers.

During meetings, participants first divide into small groups to discuss the reading.

“Poor lighting that most people can ignore really bothers me,” said network engineer Matt Kinkele, who was at the event. “I feel crappy spending all day working under crappy fluorescent lights. I don’t like going to the grocery store because the lighting is bad. […] I like going to rationalist houses because it’s nice.”

“A lot of the lighting in my home is designed with a very rationalistic and critical approach,” Matt added. “In many other social contexts, people will say to me, ‘Matt, you’re thinking too much about this.’ In this context, people mean it as a compliment. People say, ‘Wow, you put a lot of thought into that light bulb. Well done.’ That doesn’t happen in many other places.”

Another group quickly drifted off topic. One participant argued that humanity should universally embrace “meat tubes,” to the agreement of the rest of the group. The ultimate goal, they said, should be to pump all kinds of food through pipes, just like water, eliminating the inconvenience of traveling to the store.

Ironically, the person who ordered dinner forgot to select the “delivery” option and had to drive to pick it up. The groups then converged on the kitchen for pizza and more discussion.

Vesta Kassayan / M-A Chronicle: A dining area in the kitchen.

Rationalist beliefs vary enormously. “I’ve had rationalist friends who had mental breakdowns wondering if they were killing phytoplankton,” Carson* said.

“[Some] Rationalists have extreme beliefs and they try to convince each other that they are right,” said Elijah Ravitz-Campbell. “There is almost certainly someone in this reading group who has a P(doom) of 95% or more – they think it’s the end of the world.”

Matt has a red circular pin that he carries with him everywhere. On it, in white letters: “P(doom) > 25%”.

Elijah, who describes himself as “your stereotypical Redditor,” explained that he, like many other members of the group, does not share such extreme views, but still finds value in the practical side of the philosophy while enjoying the socialization the discussions offer. “There are a lot of people here who have extreme ideas,” Elijah said. “Eliezer Yudkowsky is, and has been for over a decade, convinced that AI is very, very dangerous and will probably kill us all.”

In addition to “The Sequences,” Yudkowsky wrote “Harry Potter and the Methods of Rationality,” a fan-fiction novel set in an alternate universe in which Harry Potter is raised by an Oxford biochemistry professor and becomes a wizard with a rationalist worldview. Each of the 122 chapters is titled after the rationalist lesson it teaches, such as “Testing Multiple Hypotheses” and “Comparing Reality to Its Alternatives.” “It’s a lot of fun. I have to say, to my shame, that I really like it and I think it’s pretty good,” Elijah said.

“In a broad sense, rationalism is probably 40% AI and 60% everything else,” Elijah said. “Rationalism is largely secular, maybe sometimes in a self-religious way, presenting itself as an alternative to that, like you don’t need to go to religion.”

“Maybe part of the religious thing is there’s a strong feeling of trying to save the world,” attendee Austin added.

“Effective altruists, yes,” Elijah said.

“I think the diversity of personal ideologies that you see in the rationalist community is greater than in any other group of individuals. There are people here with whom I have no ideological or moral overlap, and we have fantastic debates,” Matt added.

Courtesy of hpmor.com: The cover of “Harry Potter and the Methods of Rationality.”

How to Survive Until the Apocalypse

Although LessWrong offers a multitude of articles on topics ranging from sexual dimorphism to control systems, one notable article covers a more central tenet of the rationalist movement: How to Survive Until AGI. AGI stands for “artificial general intelligence” and refers to AI that is essentially as intelligent as, or more intelligent than, humans. The author estimates that AGI will arrive within the next 20 years and discusses simple strategies to maximize one’s chances of survival between now and then.

The article focuses on the number of “micromorts” – a unit of risk representing a one-in-a-million chance of death – involved in common activities. It advises against obvious things like hard drugs and mountaineering, and notes that a sport like paragliding carries 74 micromorts per takeoff, while skiing carries only 0.7 micromorts per day.
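As a rough back-of-the-envelope illustration of what those numbers mean (the 100-takeoff count below is a hypothetical chosen for arithmetic convenience, not a figure from the article), micromorts convert to cumulative risk like this:

\[
1 \text{ micromort} = 10^{-6} \text{ probability of death}
\]
\[
P(\text{death per paragliding takeoff}) = 74 \times 10^{-6} \approx \frac{1}{13{,}500}
\]
\[
P(\text{surviving } 100 \text{ takeoffs}) = (1 - 74 \times 10^{-6})^{100} \approx 0.9926
\]

In other words, 100 takeoffs add up to roughly a 0.74% chance of death, while 100 days of skiing add up to about 0.007%.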

Polyamory

Other articles venture into topics such as polyamory, defined as being in multiple romantic relationships simultaneously, with one article addressing the question: “Why are so many rationalists polyamorous?” The author states that “anecdotally, the most common justifications [they] hear for monogamy are related to jealousy.” They add: “Jealousy is just an emotion, and rationalists have a tradition of distrusting emotions.”

They then compare the monogamy debate to the famous Prisoner’s Dilemma and its associated game theory, stating that “monogamy is a zero-sum game. Each person has a partner, and once that partner is chosen, they are removed from the dating pool for everyone else. There is no sharing, coordination, or exchange. There are no strategies that can be optimized. In other words, this is of no interest to rationalists.”

Non-monogamy, or polyamory, on the other hand, is described as a positive-sum game. “Nonmonogamy allows parties, for example, to go on a date with one partner while the other partner is busy, spend time with multiple partners at the same time, and coordinate to compensate for libido imbalances. Parties rarely expect exactly the same from their partners, so there are usually great opportunities for emotional arbitrage.”
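For readers unfamiliar with the game-theory terms the article leans on, the distinction can be written out as follows (the payoff numbers are purely illustrative and not drawn from the article):

\[
\text{Zero-sum: } u_A + u_B = c, \text{ so any gain for one player is an equal loss for the other.}
\]
\[
\text{Positive-sum: } u_A + u_B \text{ is not fixed, so coordination can move both players from } (1, 1) \text{ to } (2, 2).
\]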

A comment on the post states that “rationality started with a polyamorous founder.” Ben Pace, organizer of the Berkeley reading group, responded to the comment by adding, “I think it’s a bad approach to polyamory to constantly feel angry/jealous/threatened by what’s happening in your romantic relationships, but to continue to practice ignoring it until you become numb to that part of yourself.”

More posts and discussions can be found at lesswrong.com, covering topics such as effective altruism, artificial intelligence and its threat to humanity, transhumanism, artificial food substitutes, open borders, and the Slate Star Codex.
