The real cost of artificial intelligence

How to manage the cyber security risks around AI

Michael Nizich
09/05/2023


Very recently, the term artificial intelligence (AI) has found its way into nearly every discussion at almost every level. Not so long ago, however, AI and machine learning (ML) algorithms were, to many people, simply the stuff of science fiction or the occasional news report on an ongoing research project. Times have certainly changed, and with the global release of ChatGPT from OpenAI in November 2022, we have all been thrust into a new world of technological opportunity. Unfortunately, this new world also brings real risk: the threat of these technologies being put to nefarious use by cyber attackers, threat agents and nation-state actors.

So how does the introduction of powerful, widely available AI technology threaten our security and create vulnerabilities that did not previously exist?

AI can be described as an autonomous, self-evolving system that can recognize and learn from unknown and unpredictable data patterns. AI systems can continuously evolve, learn and improve their domain adaptation and self-organization after they are designed. While this creates many opportunities for self-improving systems, it also creates the risk that adversaries will turn such systems against their original intentions.

As with any discussion, there are multiple perspectives, so let's consider two: the benefits that AI brings to society, and the real costs and risks that mainstream AI creates for us all.

On the positive side, these technologies are starting to improve our lives in myriad ways, from simplifying our shopping to enhancing our healthcare experiences. Their value to businesses has also become undeniable: nearly 80 percent of executives at companies deploying AI recently stated that they are already seeing moderate value from it. We can see firsthand how these technologies have progressed in areas like banking, marketing and entertainment, as demonstrated by readily available technologies like Alexa, Siri and Google.

“The risks that AI poses to our daily lives including privacy violations, discrimination, accidents and manipulation of political systems, are more than enough to prompt caution.”

On the negative side, the risks that AI poses to our daily lives, including privacy violations, discrimination, accidents and manipulation of political systems, are more than enough to prompt caution. AI is also creating more serious vulnerabilities, up to and including the loss of human life or the compromise of national security. These can arise simply from bugs and flaws in the human-written ML algorithms that control an AI system. For an organization or an individual, the consequences can be life-altering: revenue losses, fines and penalties, criminal investigation and diminished public trust.

Increasingly, electronic systems use AI to perform better, applying predictive methods to anticipate future conditions and react more efficiently. This opens a new vulnerability, because cyber attackers can now focus on attacking the AI components of a system rather than the system itself. This is critical because new system components such as AI integrations generally have not been tested for security vulnerabilities as thoroughly as systems that have been in use for some time.

Previous generations of cyber attacks were aimed mostly at stealing data and disabling systems. Emerging attack methods, however, focus on changing the behavior of a system by targeting its AI components. These attacks could become more prevalent as more critical systems adopt AI functionality in a race to maintain a competitive advantage in the marketplace.

On March 22, 2023, an open letter was published, signed by more than 33,000 people including some of the most recognizable names in business and politics, urging a six-month pause on giant AI experiments until we know more about the consequences that may lie in wait for us. Perhaps we should all take a moment.
