Today’s Oppenheimer Dilemma Isn’t What You Think It Is

Almost half a decade ago, I concluded my Outperform keynote with the following two slides. Was I being too optimistic?

Outperform 2019 Keynote Conclusion Slide 1

In the 20th century, the most terrifying technology was nuclear weapons. We have learned to live with it: today, nuclear power exists, we have learned to harness it to generate clean energy, and we have learned to contain the damage it can cause. So that’s the last hundred years.

Outperform 2019 Keynote Conclusion Slide 2

In the next hundred years, the 21st century, the most terrifying technology may be AI and robotics. And I’m sure that we, as a human race, will learn to cope with it and live in coexistence with it as well. So maybe with the help of AI, we can actually become better humans. And I still have faith in humanity.

In case you are interested, here is the complete recording of my 2019 keynote.


Since one of my undergraduate majors at UC Berkeley was Physics, I couldn’t help but be drawn to the recent movie Oppenheimer, and I enjoyed it thoroughly. Not only was it nostalgic for me, it also evoked many complex emotions: excitement and disgust; it made me angry, and it made me cry. More importantly, it reinforced the moral dilemma I recognized long ago as we develop advanced AI systems. As we’ve seen in recent geopolitics, anything can be weaponized by an adversary: everything from the economy, to our interpersonal relationships (our social networks), to our freedom and values, let alone something as powerful as AI.

However, this is NOT the Oppenheimer Dilemma I want to discuss today, as the moral dilemma of AI development is already a widely discussed topic. In fact, this very topic was one thread of discussion at the Public Lecture just hosted by UC Berkeley Extensions. Additionally, two timely and reputable articles on this topic were brought to my attention shortly after the debut of Oppenheimer.

It’s easy to lose sight of the bigger picture beyond our immediate sphere of influence and become complacent working in the comfort of our homes. While I don’t think a pause or slowdown in advanced AI research would do anything to delay the imminent technological singularity, I do think some form of regulation is necessary, as with any new technology. We shouldn’t repeat the mistake we made with social media. That said, even if we do slow down, our adversaries will still plunge ahead, rendering all our efforts useless.

Our Oppenheimer Dilemma

The problem is systemic. The same competitive dynamics that drove the nuclear arms race have created today’s intelligence arms race. So are we down to choosing the lesser of two evils (either we win or our adversaries win), or is there a third option that we just can’t see yet? This is our Oppenheimer dilemma, and it goes beyond the moral dilemma of building AI with unknown potential and consequences. It’s more challenging! It’s about how we can avoid an AI doomsday without breaking our current system (e.g., our current way of life and our current pace of AI innovation). The alternative would be not only painfully disruptive but also extremely unpopular.

Clearly, this is too big a problem, and I do not have a solution (or I wouldn’t be here). But I believe there are some principles that can serve as good guidelines as we move forward:

  1. Don’t slow down AI research; instead, expand its scope and rigor to include more societal constraints as part of the research objectives.
  2. Develop AI that aligns with our human values, to ensure the AI serves our people and society.
  3. Upgrade the moral and ethical standards of our society, so AI can learn from the better data we generate.

The reality of any arms race is ugly. The truth is that slowing down our AI research is not really an option, given our adversaries’ relentless pursuit of winning this AI race. By incorporating societal constraints, the AI we develop will gain more public acceptance and avoid regulatory backlash. This will require close collaboration and communication between the scientists and engineers building the AI and other professionals, including economists, sociologists, psychologists, anthropologists, ethicists, regulators, and more. This more comprehensive approach to AI development will attract more collaboration, investment, and talent, leading to an acceleration of AI innovation.

The inevitable singularity doesn’t have to become an AI apocalypse

But wouldn’t this also accelerate the approach of the AI singularity? Yes, it would! As we’ve already seen from Oppenheimer’s nuclear arms race, when the race is on, it’s unstoppable. But the singularity doesn’t have to be an apocalypse. By developing AI that respects human rights, privacy, and social values, we ensure that AI technologies serve people and benefit society while staying at the forefront of progress. These efforts include reducing bias and discrimination in AI algorithms, resulting in fairness and equity in AI-based decision-making systems. Because AI tends to amplify the inherent biases in the training data, accelerating AI development without a close alignment with human values will inadvertently increase social tensions by creating more bias, distrust, and greater inequality. Therefore, accelerating AI development by adversaries who do not respect human rights will only accelerate their own demise. It’s a ticking time bomb!

Lastly, as we continue to advance our AI technologies, we must also advance ourselves, the users of these AI tools. Like any tool, AI itself is neutral; it’s neither inherently good nor evil. It is often the human actors employing these tools who dictate whether they are used for good or harm. However, since AI systems continuously learn from the feedback data they receive, data generated by humans, it is crucial that we set a good example for AI to learn from. So an effective way to guard against the misuse or adverse consequences of unchecked AI applications is for us to upgrade our own moral and ethical standards.

So I am optimistic! And 4+ years after that keynote, I am still optimistic today. One of the main takeaways from my 2019 keynote was that the technological singularity is not avoidable, but an AI doomsday is.

And the best way to ensure that AI doesn’t destroy us all is for us to learn how not to kill each other when we encounter conflict. The best way to prevent the inevitable singularity from turning into an AI apocalypse is for us to become better humans as a race.