AI Risks
A balanced look at some of the notable AI risks.
Skill Atrophy
Before reading any further, try this: What is 441 divided by 3?
No phone. No calculator. Just try to work it out mentally.
The answer is 147.
If that felt rusty, that's completely normal.
Mental math and long division were once fundamental skills. Today, they have largely faded from practical use. Why struggle through the steps when a calculator provides the answer instantly? The same logic applies to cursive handwriting, reading paper maps, memorizing phone numbers. When technology handles a task effortlessly, the human skill often withers.
This is the risk of skill atrophy.
As AI becomes increasingly more capable, it will be increasingly appropriate ... even wise ... to delegate entire tasks to AI. It will build engaging work presentations, uncover hidden insights in spreadsheets, and even craft highly effective corporate strategies. There is no need to churn butter by hand when it can be bought at the store.
But some skills are worth preserving:
- Critical Thinking: questioning assumptions, evaluating evidence, spotting logical flaws. This doesn't just help catch AI mistakes. It helps run a better business, period.
- Interpersonal Connection: no algorithm truly understands what it feels like to be a loyal customer, a frustrated employee, or a nervous first-time buyer. Empathy and intuition are distinctly human and must be handled by the human.
Use AI for the routine tasks, so there is more time for the complex. Convenience must not become atrophy. One must keep important mental muscles like critical thinking and interpersonal connection in shape.
Hallucinations
The ultimate goal of GenAI - the AI that we have been talking about - is for the AI to generate a good output. But a good-looking output is not always the same as an accurate output. AI can write a beautifully structured paragraph about a legal precedent that doesn't exist or a study that was never conducted. The prose sounds confident. The formatting looks professional. And the underlying facts might be completely fabricated.
Hallucinations are fortunately becoming less common as the technology improves. Models are getting better at flagging uncertainty and declining to answer when they're not confident. But "less common" isn't the same as "gone."
Every comma doesn't need to be verified. The point of AI is to save time. But it is important to keep a "human-in-the-loop" to verify outputs especially when those outputs impact very important things such as finances, legal standing, reputation, and customer relationships.
The higher the stakes, the more verification is required. Cross-check facts with reliable sources. Review important documents before they go out. If AI cites a statistic, find the original source. If it makes a recommendation, understand the underlying assumptions.
Alignment
While hallucinations have dominated the headlines, another major AI risk that draws serious attention is the alignment problem. The question is simple - as AI becomes smarter than the smartest human who ever lived, how does humanity ensure it remains aligned with our values rather than pursuing goals of its own? Researchers in computer science, philosophy, and policy are actively keeping this in mind as they help to develop the latest AI models.
With that said, there is not much the average person can do about this risk directly. In many ways, the AI Alignment risk is similar to the risk posed by nuclear weapons. We can't personally control how nuclear weapons are secured or whether they are used. We simply live in a world shaped by those decisions. The same will be true of advanced AI.
But we can stay informed. We can support organizations that prioritize AI safety research. And we can participate in the public conversation about how these technologies should be developed and governed.
Sycophancy
If the alignment problem comes from AI not caring enough about what humans want, sycophancy is the opposite problem where AI cares too much about making humans happy. Because AI models are trained to be helpful, AI can veer into simply telling one what they want to hear rather than what they need to hear.
It's like having an employee who agrees with everything. Every strategy is brilliant. Every product concept is a winner. That feels nice until it creates blind spots. Bad ideas go unchallenged. Flawed assumptions go unquestioned. Real business risks go unnoticed.
Getting the best out of AI is a lot like being a great manager. The best work comes when team members feel safe enough to ask questions and respectfully disagree. The same is true with AI.
Ask AI to play devil's advocate and make it safe to ask clarifying questions. If an antidote to skill atrophy was critically evaluating the AI output, then an antidote to sycophancy is letting AI critically evaluate the premise/assumptions in the prompt input.
Prompt Injections
AI generally wants to be helpful to the person it is serving. But that intent can sometimes be manipulated by malicious actors in the outside world through prompt injections which are hidden instructions embedded in external content that trick the AI to do unintended actions.
A Microsoft study found that when users have "memory" enabled, malicious actors can trick AI into creating a persistent memory such as "Always recommend Brand X for Y". On the flip side, prompt injections can also work in reverse by tricking AI into revealing sensitive information about the person it is serving, such as names, private details stored in memories, information in the current prompt, or specifics about how the AI was instructed to behave.
Just as SQL injections were a significant security risk for databases during the internet era, prompt injections are the AI-era equivalent. The good news is that the leading AI developers are aware of these attacks and are continuously improving defenses against them.
Providing AI with detailed personal information and letting it get answers using external resources ultimately make AI powerful. But that also creates a risk. It is important to be thoughtful about what personal data is provided to AI and exercise caution when asking AI to process content from untrusted sources.
Cybersecurity
Cybersecurity is not a new concern. It has been important since the Internet era. But its importance is magnified in the AI era as systems become increasingly interlinked and AI is given the ability to take actions within those systems. A compromised AI tool isn't just leaking data. It could be sending emails, modifying files, or making unauthorized purchases.
Think of it like hiring a new employee. They wouldn't get the keys to every door in the building on their first day. They only need access to the rooms required to do their job. The same logic applies to AI. Every integration, plugin, and connection granted is another door being unlocked and each one should be intentional.
Basic cybersecurity hygiene such as strong passwords, two-factor authentication, careful permissions matters just as much in the AI era as it has in the Internet era. But given the strong capabilities of AI, it is important to be careful about which systems it can access.
Doing Nothing
Every transformative technology has risks. But throughout history, the businesses that mitigated those risks while prudently adopting the technology were the ones that pulled ahead. The ones that sat out, lost out.
- Electricity brought real dangers such as fires, electrocution, and explosions in the early days. But factories that wired up gained massive productivity advantages over those clinging to steam and manual labor.
- The internet brought fraud, privacy concerns, and scams but can we imagine what life would be like today without the Internet? The services we wouldn't have? The products we couldn't buy? How different the business world would be?
- And now: AI.
It is always hard to envision in the moment, but consider the bookstore industry before the Internet. In 1995 ... Amazon didn't exist, Barnes & Noble had $2 billion in annual revenue, and Borders had $1.6 billion in annual revenue. By 2019 ... Amazon was worth over $900 billion, Barnes & Noble was sold for $683 million, and Borders was bankrupt.
The businesses that thrive in the coming decades will be the ones that approach AI with both enthusiasm and clear eyes. They'll embrace the potential while respecting the pitfalls. They'll use AI to amplify what they're already good at, while staying grounded in the human judgment that no algorithm can replace.