What every CISO should know about AI security threats in 2023


The “rise of the machines” is on everyone’s mind these days, thanks to recent AI breakthroughs and the popularity of ChatGPT. However, the AI apocalypse is not an immediate threat. If you’re a security professional, you’ve got more urgent priorities when it comes to managing AI risks.

Don’t get me wrong, I’m glad intelligent minds are working on safeguards for out-of-control AI that could drastically impact our future, but Chief Information Security Officers (CISOs) have more pressing concerns.

AI poses a number of immediate risks, from opaque algorithms that produce biased results to the weaponization of the technology by cybercriminals.

Let’s take a realistic look at both the threats and opportunities that AI delivers. We’ll also explore ways to mitigate those risks, protect ourselves, and use this brave new technology to its full potential.

Opportunities presented by AI

If you’re a CISO or an executive looking to harness AI technology to achieve a competitive advantage, that’s great!

Artificial Intelligence can help:

  • Spark innovation in product development and marketing efforts
  • Aid in decision-making by analyzing large volumes of data
  • Enhance customer support by anticipating user needs
  • Improve operational efficiency with AI-guided suggestions
  • Enhance cybersecurity efforts by spotting security holes

All these areas represent opportunities for AI to transform business for the better, but they don’t come without risks.

4 Security risks presented by AI (and how to address them)

Every rapid advance in technology presents uncertainties. Here are some of the primary, immediate considerations that security leaders should keep in mind when implementing, employing, or creating AI systems.  

Risk #1: Reliance on opaque algorithms and big data sets

AI uses algorithms formed through trial and error, drawing from vast pools of data and doubling down on what it gets right. The trouble is that we don't always have a clear understanding of how the algorithms work; we just see the results.

The reliance on opaque algorithms and big data sets in AI systems can lead to:

  •  Lack of transparency
  •  Potential biases and discrimination
  •  Privacy concerns

What does this look like in the real world? Here are a few examples of AI gone wrong.

Russian Tank Fallacy: The name of this AI fallacy is based on an urban legend, but the principle is accurate, so the name has stuck. The story claims the U.S. military trained AI to spot Russian tanks, and it distinguished them from U.S. tanks with 100% accuracy—right up until they put it into practice.

The story goes that instead of noticing substantive differences, the AI noticed the consistently cloudy weather in the photographs of Russian tanks. All the U.S. tanks were photographed on sunny days, so it assumed that Russian tanks came with cloudy weather.

Again, while the story is likely apocryphal, we still use it to describe a common issue with AI. AI might notice characteristics within a data set that have nothing to do with qualities that distinguish one class of data from another.
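
To make the failure mode concrete, here is a minimal sketch in Python using scikit-learn with entirely synthetic data (the "shape" and "brightness" features are hypothetical stand-ins, not a real pipeline): a classifier trained on data where a nuisance feature tracks the label learns the shortcut, then falls apart once that correlation disappears.

```python
# A minimal sketch of the "Russian tank" failure mode: the training data
# contains a nuisance feature ("brightness", standing in for weather) that
# perfectly correlates with the label, so the model learns the shortcut.
# All features and data here are synthetic illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

def make_data(confounded: bool):
    y = rng.integers(0, 2, n)                   # 0 = class A, 1 = class B
    shape = y + rng.normal(0, 2.0, n)           # weak but genuine signal
    if confounded:
        brightness = y + rng.normal(0, 0.1, n)  # "cloudy vs. sunny" tracks label
    else:
        brightness = rng.normal(0.5, 1.0, n)    # in the field: no correlation
    return np.column_stack([shape, brightness]), y

X_train, y_train = make_data(confounded=True)
X_field, y_field = make_data(confounded=False)

model = LogisticRegression().fit(X_train, y_train)
print("training accuracy:", model.score(X_train, y_train))  # near-perfect
print("field accuracy:   ", model.score(X_field, y_field))  # barely above chance
```

The gap between training accuracy and field accuracy is the tell: the model learned the weather, not the tank.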

Bigoted AI: A serious (and 100% factual) implication of this problem arises when using AI to predict human behavior. MIT recently published an article about predictive policing based on AI. AI uses algorithms to determine which factors make someone more likely to commit a crime, and immutable characteristics such as race, age, and gender come into play.

Needless to say, the AI cannot take into consideration the complex sociological factors that influence correlations between those characteristics and an increased likelihood to commit a crime. All it sees are the correlations, and rather than striving for an ideal of blind justice, it makes bigoted policing suggestions.

The same problem arises when AI attempts to predict recidivism rates and guide parole decisions. We put someone’s future in the hands of an opaque algorithm, and that’s not the ideal of justice that any decent person strives for.

Privacy: Drawing from large data sets might mean processing data that users never consented to having collected in the first place. Add poor security practices to the mix, and companies open themselves up to all sorts of potential violations.

Risk mitigation advice

Stay up to date on the latest best practices, which are constantly evolving.

New controls and standards will be required to mitigate these risks, and they are currently in development. Review the NIST Artificial Intelligence Risk Management Framework 1.0, released in January 2023, to learn more about the latest standards and techniques.

Risk #2: Trustworthiness

Trustworthiness in AI refers to an algorithm’s ability to produce valuable, actionable, and meaningful results. There are a number of factors that can impact the trustworthiness of the information that AI provides, from the quality of the data to unchecked human bias.

The following principles form the building blocks of trustworthiness when dealing with AI.

Validation and reliability: Ensure that data is as accurate and reliable as possible; otherwise, the algorithm will have nothing helpful to work with. As the old saying goes: "Garbage in, garbage out."

Accountability: Clearly define the roles of anyone involved in AI work, identifying what is required of them to ensure trustworthiness and maintain security.

Transparency: Even though AI algorithms can be a mystery in some respects, it’s important to understand (to the best of your ability) what is occurring and why AI produces a specific set of results.

Explainability: Explainability feeds into transparency. AI models should ideally be able to explain why they make the decisions they make. For example, if a self-driving car brakes when there is a road hazard ahead, we want to know why it made that decision. Did it brake to avoid the hazard, or for some other reason? That information is vital.

Interpretability: Interpretability describes our ability to predict how a change in the algorithm will affect future AI actions. We should strive for a high degree of interpretability. 
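
As a hedged illustration of what explainability can look like in practice, the sketch below uses scikit-learn's permutation importance to ask which inputs actually drive a model's decisions, echoing the braking example above. The feature names and data are hypothetical assumptions; production systems need far richer explanation tooling.

```python
# A minimal sketch of one explainability technique: permutation importance.
# Shuffling one feature at a time and measuring the drop in accuracy shows
# which inputs the model actually relies on. Features are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 500
# Hypothetical sensor inputs for a braking decision: distance to an object,
# closing speed, and an irrelevant feature (ambient temperature).
distance = rng.uniform(0, 100, n)
speed = rng.uniform(0, 30, n)
temperature = rng.uniform(-10, 40, n)
X = np.column_stack([distance, speed, temperature])
y = (distance / np.maximum(speed, 1e-6) < 2.0).astype(int)  # brake if time-to-impact < 2s

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["distance", "speed", "temperature"], result.importances_mean):
    print(f"{name:12s} importance: {score:.3f}")
# Expect distance and speed to dominate; temperature should sit near zero.
```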

Human bias management: Human bias can make its way into AI systems, and we're not just talking about the correlation vs. causation problem we discussed earlier with the Russian Tank Fallacy. Biased training data, built by human beings, can shape the way AI "sees" the world.

Risk mitigation advice

Make note of the building blocks of trustworthiness, and integrate them into your processes when developing or consuming AI technology. Use these principles when communicating with your IT team and make sure they are aware of the new standards and frameworks that are being developed. 

Risk #3: Security (in AI and through AI)

Cybersecurity matters in two directions: ensuring that AI systems themselves are secure, and using AI tools to augment security.

Security in AI: If you plan to use AI, it's important to understand how it was developed. Are the data sets pristine? Can you trace them back to where they originated? Who is providing the oversight and writing the algorithms?

Security through AI: You can use AI toolsets to augment your security efforts. These tools can spot patterns that suggest security threats, but it’s important to remember that correlation does not necessarily imply causation. For example, AI might flag repeated traffic from a single IP in a high-risk country as a security threat, even though the user is a real person with no harmful intent.
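
As a rough sketch of security through AI, the example below uses scikit-learn's IsolationForest to surface anomalous traffic records for analyst review. The features and thresholds are illustrative assumptions; precisely because correlation is not causation, flagged records are leads for a human, not verdicts.

```python
# A minimal anomaly-detection sketch: flag unusual traffic records for human
# review. Features (request rate, failed logins, bytes out) are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
# Synthetic "normal" traffic: modest request rates, few failed logins.
normal = np.column_stack([
    rng.poisson(20, 500),      # requests per minute
    rng.poisson(1, 500),       # failed logins per hour
    rng.normal(5, 2, 500),     # MB transferred out
])
# A few suspicious records: bursty requests and many failed logins.
suspicious = np.array([[400, 50, 80], [350, 40, 60]])
traffic = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0).fit(traffic)
flags = detector.predict(traffic)   # -1 = anomaly, 1 = normal
print("records flagged for analyst review:", np.where(flags == -1)[0])
```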

When used correctly, AI has the ability to correlate extraordinary amounts of data and recognize new patterns that a human could never pick up on. AI will help augment the efforts of humans to provide better security in a variety of ways, assuming we use it wisely and recognize its limitations.

Risk mitigation advice

Security risk mitigation begins with education, which includes the building blocks of trustworthiness mentioned earlier.

In terms of using AI to enhance security, many AI tools currently in development collect data from the dark web, not just from the visible or surface web. By scraping and correlating dark web data, they may be able to provide better threat awareness, such as early indicators of ransomware activity.

Risk #4: Weaponization of AI

AI can help cyber criminals take advantage of unsuspecting users in a variety of ways, including:

  •  Creating phishing lures that are free of grammatical mistakes
  •  Spoofing accounts or email addresses to produce what appear to be realistic conversations at a massive scale
  •  Supercharging Ransomware as a Service (RaaS) activities to infiltrate business systems

In short, AI can help bad actors find and exploit new vulnerabilities at unprecedented speed and scale. That's why it's important to stay one step ahead.

Risk mitigation advice

General users need to be thoroughly educated on the threats. AI is not infallible and there are ways to spot issues, even when receiving phishing messages from sophisticated systems. Training will help end users spot the dangers and avoid leaking Personally Identifiable Information (PII), Intellectual Property (IP), and other sensitive company information.

The path forward for IT Teams

Educating your IT staff is critical when consuming or developing AI. IT teams should fall back on many existing best practices for secure software, including:

  • Patch management
  • A secure software development life cycle (SDLC) for AI development
  • Transferring data in secure formats when using APIs (see the sketch after this list)
  • Multi-factor authentication (MFA)
  • Data encryption at rest and in transit
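
As one small, hypothetical illustration of the secure-transfer item above, this sketch posts a record to an API over HTTPS with certificate verification enforced, encrypting the sensitive field at the application layer using the cryptography library's Fernet. The endpoint, token, and key handling are placeholder assumptions, not a production design.

```python
# A minimal sketch of transferring data securely to an API: TLS with
# certificate verification, plus application-layer encryption of a sensitive
# field. The URL, token, and key management are placeholder assumptions.
import requests
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice: load from a secrets manager
cipher = Fernet(key)

record = {"user_id": 42, "ssn": "000-00-0000"}  # hypothetical PII
payload = {
    "user_id": record["user_id"],
    "ssn": cipher.encrypt(record["ssn"].encode()).decode(),  # encrypt before sending
}

response = requests.post(
    "https://api.example.com/v1/records",         # placeholder endpoint
    json=payload,
    headers={"Authorization": "Bearer <token>"},  # placeholder credential
    verify=True,    # never disable TLS certificate verification
    timeout=10,
)
response.raise_for_status()
```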

The CIA triad (Confidentiality, Integrity, and Availability) should remain top of mind, guiding all your team’s cybersecurity efforts. Together, we can harness the productive power of AI while strategically managing the new threats it introduces to the cybersecurity world.

Learn, Connect, and Strategize in Hollywood, FL

Sign up to learn more about our Next CISO Masterminds Summit Event, an invitation-only event for cybersecurity executives, and receive a free brochure with the event details.
