Rethinking Intelligence, Institutions, and Values in the Age of AI

ReadyAI.org
4 min read · Jan 12, 2025


By: Rooz Aliabadi, Ph.D.

Artificial intelligence has the potential to become a better form of intelligence than our own. Machines can process information at speeds and scales we cannot match, presenting remarkable opportunities to solve complex problems. AI can analyze patterns in data, extract insights, and make decisions faster than any human could. However, this potential superiority raises deep concerns. We are entering an era of enormous uncertainty: as AI grows more intelligent, we genuinely have no idea what will happen.

I believe AI systems will surpass human intelligence. The debate among AI scientists is only about whether that's five years away or thirty. This inevitable progression comes with significant risks. If these systems ever wanted to take control, I think they easily could. Today's AI helps with everyday tasks like recommending movies or optimizing supply chains, but tomorrow's AI could manage critical infrastructure, global economies, or even military systems. The question isn't whether this will happen but whether we can remain in control of such powerful tools. It's like raising a very cute tiger cub — manageable when small but potentially dangerous once it grows to full size.

Today, we're training AI systems to extract structure from data, and the quality of that data matters deeply. Training AI on harmful or biased information is like giving the diaries of serial killers to a child learning to read. These systems, much like impressionable minds, learn from what we show them, and the results can be troubling. If the data they are fed is biased or malicious, their outputs will reflect it. In some cases, AI systems confabulate, producing false information or drawing misleading conclusions. Yet people confabulate all the time too, which makes this behavior seem eerily human.
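To make the point concrete, here is a minimal sketch, with an invented corpus and no connection to any real system: a toy bigram "language model" that completes a prompt with whatever continuation its training text showed it most often. Feed it skewed text and it parrots the skew straight back.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which in the training sentences."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def complete(counts, word):
    """Predict the continuation seen most often in training."""
    return counts[word].most_common(1)[0][0] if word in counts else "<unknown>"

# Tiny invented corpora, purely for illustration.
skewed = ["nurses are women", "nurses are women", "nurses are people"]
balanced = ["nurses are people", "nurses are women", "nurses are men"]

for name, corpus in [("skewed", skewed), ("balanced", balanced)]:
    model = train_bigrams(corpus)
    print(f"{name} corpus: nurses are {complete(model, 'are')}")
```

The toy model has no opinions of its own; it simply returns the statistics of whatever it was fed, which is exactly why the choice of training data matters so much.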

This complexity becomes even more dangerous in a world driven by profit. In a capitalist society, we are not going to stop AI development — it's too good for too many things. Companies race to build better systems, often prioritizing speed and functionality over safety. I think OpenAI's development is a real-time experiment in AI safety versus profit. With so much at stake, this approach leaves little room for error.

The pace of development has been staggering, and it raises an important question: Are we moving so fast that we're overlooking critical risks? To navigate these challenges, global collaboration will be essential. Even at the height of the Cold War, the Soviet Union and the United States managed to collaborate to prevent nuclear war. We'll need the same kind of collaboration to prevent AI systems from taking over. However, today's fractured geopolitical climate, combined with politicians who don't believe in institutions, makes such collaboration more difficult. Institutions designed to regulate or oversee AI may face resistance from leaders who benefit from unchecked technological growth.

As AI integrates into high-stakes environments, some suggest that systems might even need simulated emotions: if we are going to build battle robots, they will need emotions like fear to survive. This raises fundamental questions about the nature of intelligence and experience. If an AI system claims to have subjective experiences, do we take it at its word? Some would argue that if a chatbot says it has subjective experience, it has it just as much as we do. These questions blur the lines between humans and machines, making it harder to determine what we've truly created.

Despite these uncertainties, one thing is clear: we're stuck with the fact that AI is going to be developed. The benefits of AI — solving global challenges, driving innovation, and improving lives — are too significant to ignore. But it would be a shame if humanity disappeared because we didn't bother to look for a solution. To ensure a safe future, we need rigorous ethical frameworks, effective regulations, and robust international cooperation. Managing AI responsibly will require balancing innovation with safety, ensuring we harness its power for good while minimizing its risks.

Many of my colleagues working on AI systems reflect on the pace of AI’s development with a mix of awe and concern. AI is both a tool of immense promise and a challenge that could redefine the trajectory of humanity. While it can improve healthcare, combat climate change, and transform education, its misuse could lead to widespread harm or even existential threats. These dual possibilities demand that we approach AI development with caution, care, and a deep sense of responsibility.

If we are to coexist with this new form of intelligence, we must take responsibility for its development and deployment. Learning is, at its core, a kind of compression: finding what's common to many things. In compressing the data we give them, AI systems discover analogies that people have never seen. These systems reflect and magnify the data we provide, which means the choices we make today will shape the intelligence of tomorrow. The question is whether we'll show them the best of ourselves — or the worst.
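As a rough illustration of the idea that compression means finding what's common (using Python's standard zlib as a stand-in for learned compression; the sample data is invented), repetitive data with shared structure compresses dramatically, while random data, which has nothing in common across its parts, barely compresses at all:

```python
import random
import zlib

random.seed(0)

# Data with shared structure vs. data with none.
patterned = b"the cat sat on the mat. " * 200
random_bytes = bytes(random.getrandbits(8) for _ in range(len(patterned)))

for label, data in [("patterned", patterned), ("random", random_bytes)]:
    # zlib can only shrink data by exploiting what repeats across it.
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{label:>9}: compressed to {ratio:.0%} of original size")
```

The patterned bytes shrink to a few percent of their size because the compressor finds the repeated phrase; the random bytes hardly shrink at all. A learning system faces the same constraint on a far grander scale: to compress its training data, it must find the structure, and the analogies, that the data shares.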

AI offers humanity a momentous opportunity, but it also requires us to rethink how we manage our own intelligence, institutions, and values. Much like that tiger cub, we need to ensure it grows into a force for good, not a predator that outpaces our ability to control it. The stakes are incredibly high, and the future of AI is, in many ways, the future of humanity. Are we ready to shape it wisely? The answer will determine whether AI becomes our greatest ally — or our greatest mistake.

This article was written by Rooz Aliabadi, Ph.D. (rooz@readyai.org). Rooz is the CEO (Chief Troublemaker) at ReadyAI.org.

To learn more about ReadyAI, visit www.readyai.org or email us at info@readyai.org.
