Opinion: If AI is going to kill us all, let’s make it do some work before we die

At last Wednesday’s meeting of the Bloomington city council, Dave Rollo used his time for councilmember reports at the start of the meeting to deliver eight minutes of extemporaneous remarks about the dangers of artificial general intelligence (AGI).

Bloomington city councilmember Dave Rollo talks about artificial intelligence at the council’s June 21, 2023 meeting. The image is a screengrab from the CATS recording, after it was uploaded to the city’s YouTube channel.

It’s worth contrasting artificial general intelligence with artificial intelligence developed to perform narrow tasks—like converting the audio from councilmember speeches to text, or sharpening out-of-focus photographs.

In his remarks, Rollo cited AI researcher Eliezer Yudkowsky’s recent op-ed in TIME magazine, “Pausing AI Developments Isn’t Enough. We Need to Shut it All Down.”

Yudkowsky is not exactly enthusiastic about AI. From the TIME op-ed: “[T]he most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’”

About a moratorium, Yudkowsky writes: “If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth.”

Yudkowsky doesn’t think a moratorium on AGI research is enough. He wants to shut it down.

Rollo said last Wednesday that after the summer break, he might be presenting the city council with a resolution on the topic. The idea of the resolution would be to call on U.S. representatives to push for an international moratorium on AGI research.

The summer break ends with the council’s next regular meeting, which is set for July 26.

For my part, I don’t think of AI tools like ChatGPT as something that will eventually kill me. Instead, I think of them as showing some potential for saving me some work, or making my work more efficient.

As an example, I gave ChatGPT the headline I wrote for this column and asked it to come up with 10 alternatives that convey the same idea. Inside of about two seconds it produced this list:

  1. Counting on AI: Let Me Benefit from Its Power Before the End
  2. Surviving the AI Threat: Making the Most of Its Assistance Before the Apocalypse
  3. My Deal with AI: Leveraging Its Potential Before It Takes Over
  4. Preparing Myself: Embracing AI’s Efficiency Amidst the Looming Doom
  5. My Productivity Pact: Optimizing AI’s Help in the Face of Extinction
  6. Embracing the AI Shadow: Maximizing Its Potential in My Final Days
  7. AI’s Offering: Letting It Assist Me Until Humanity’s Last Breath
  8. My AI Journey: Navigating Its Dangers while Exploiting Its Capabilities
  9. My AI Alliance: Utilizing Its Skills Before Our Inevitable End
  10. My Final To-Do List: Harnessing AI’s Efficiency Before My Time Runs Out

I think my original headline is better than any of those, so I kept it.

Still, I think there’s some value in having 10 alternatives to consider, to cover the possibility that there’s not some obvious better headline.

Maybe a better headline could be written—by some crotchety old reporter who works in a small town newsroom somewhere in America. I know of a guy like that, but it would take more than two seconds to track him down again.

ChatGPT is also great for entertainment. You can make it try to tell you jokes. Many of its attempts at humor are weird, and just not funny. But here’s one that I think counts as legitimate political satire:

Prompt: Tell me a joke that starts: “How is the Little 500 different from a Bloomington city council meeting?”

ChatGPT: How is the Little 500 different from a Bloomington city council meeting? At the Little 500, crashes are expected, while at the city council, they’re just a metaphor for the entire process.

Anyhow, I think bicycles might be a fair enough comparison to AI. In 1980, Steve Jobs drew that analogy—between two-wheeled pedal-powered vehicles and personal computers: “When we invented the personal computer, we created a new kind of bicycle…a new man-machine partnership…a new generation of entrepreneurs.”

If a personal computer is like a standard issue Little 500 single-speed Roadmaster, then maybe AI is more like the bicycle that 2023 Race Across America winner Isa Pulver rode through Bloomington last week.

I am glad that our local representatives are aware of some important issues of our time and take the time to daylight them to the public.

I don’t think a resolution like the one Rollo has in mind can do much harm, and it might do a little bit of good.

But with all due respect to Rollo, I am not sure that a resolution like the one he has in mind should occupy more than a few additional minutes of the local public’s time.

If Rollo decides to sponsor such a resolution, I hope he does not ask the council attorneys to spend their time on it.

I hope he asks some AI language model to write the resolution.

Whatever text it produces will probably count as a good enough draft for him to work from.

Here’s what ChatGPT produced, when I asked it to write the resolution that Rollo described in his remarks last Wednesday:

Title: Resolution Urging Immediate Action to Address the Risks Posed by Artificial General Intelligence (AGI)

WHEREAS, the Bloomington City Council recognizes the rapid advancement of large language models, specifically artificial general intelligence (AGI), which possess the ability to surpass human intelligence and perform tasks traditionally undertaken by individuals with higher education;

WHEREAS, AGI poses a significant social disrupter and may result in job displacement, impacting both the workforce and higher education institutions;

WHEREAS, AGI models, including but not limited to Bing and ChatGPT, continue to evolve by learning from vast amounts of human knowledge obtained from the internet;

WHEREAS, the exact workings of AGI remain unclear, and its designers lack full comprehension of its evolving capabilities and potential consequences;

WHEREAS, concerns have been raised by technical experts and notable figures within the field, as evidenced by a petition signed by over 1,800 individuals, warning of the inherent dangers associated with AGI and calling for a temporary halt to its research and development;

WHEREAS, recent reports indicate that leading CEOs from major corporations express alarm over the potential for AI to drive humans to extinction within five to ten years;

WHEREAS, it is vital for the Bloomington City Council to take prompt action and join the global effort to address the risks and ensure the safe development and deployment of AGI;

NOW, THEREFORE, BE IT RESOLVED, that the Bloomington City Council:

  1. Calls upon local representatives and relevant authorities to familiarize themselves with the potential threats and consequences associated with AGI by engaging with experts in the field, such as Eliezer Yudkowsky, to gather knowledge and insight on AGI’s impact on society.
  2. Urges representatives to take immediate action by proposing a moratorium on AGI research and development, with the aim of comprehensively assessing the risks, establishing appropriate safety measures, and defining ethical guidelines for its implementation.
  3. Encourages representatives to collaborate at the national and international level to advocate for a coordinated approach among countries involved in AGI research, ensuring a collective commitment to addressing the containment, alignment, and ethical concerns associated with AGI.
  4. Requests local representatives to actively support initiatives aimed at fostering public awareness and understanding of AGI, promoting responsible discourse, and engaging with the community to address concerns and provide accurate information.
  5. Expresses the Bloomington City Council’s commitment to actively monitor and evaluate advancements in AGI and its potential impact on society, advocating for policies and regulations that prioritize human values and the well-being of all individuals.

BE IT FURTHER RESOLVED, that the Bloomington City Council shall transmit copies of this resolution to local representatives, relevant state and federal agencies, and organizations engaged in AGI research, urging their attention and prompt action to address the concerns raised herein.

ADOPTED this __ day of _________, 2023.

City Council President__________


Video: Bloomington city councilmember Dave Rollo on AGI (June 21, 2023)

5 thoughts on “Opinion: If AI is going to kill us all, let’s make it do some work before we die”

  1. Excellent example of the Dunning-Kruger effect. Likely reinforced by confirmation bias.

  2. What would be the bad effects of focusing on other priorities, leaving AGI aside for a time? Lord knows, efforts for sustainability, reduced waste, human relationships, increased equity and quality of life (rather than surveillance and mechanization and less human contact) could use improvement and evolution.

    1. Sorry I wasn’t clearer. “Leaving AGI focus” meant not furthering its hold on our systems and approving David’s resolution in order to focus on issues that deserve attention such as increased equity and quality of life and sustainability. Why are we spending tax dollars on surveillance and mechanization and less human contact?

  3. I’m retired from a career mostly working on the fringes of AI. There’s tremendous confusion among experts as well as the public about how much of a threat AI is, and when and why.

    To respond to Cynthia Bretheim, real AGI, much less superintelligence, won’t exist for quite a while, so I agree, we should put worrying about it aside. But plain AI — current less-than-human intelligence — is a serious threat NOW. I’m thinking mostly of deepfakes and scams. For example, I’ve read two articles by journalists who used computer-generated replicas of their own voices to get past their bank’s voice recognition, and one about a mother who said a man used AI software to generate her daughter’s voice in a horrifying kidnapping scam. And what’s going to happen when highly realistic videos appear shortly before the 2024 election showing President Biden and his Republican opponent doing completely outrageous things?

    As Daniel said, AI risks are very real, and the flippant tone of the article ignores serious concerns. The world we’re living in now with alien intelligence of IQ 75 is plenty dangerous, thanks to miscreants with human IQ 110 — and we can’t afford to wait to do something about that!
