Key Takeaways:

  • Generative AI offers businesses increased workforce efficiency but introduces unique risks and exacerbates existing risks.
  • NIST has published a generative AI profile of its AI Risk Management Framework (NIST AI 600-1) that targets the unique risks posed by generative AI and gives organizations guidance on how to address them.
  • A SOC 2 report can be used to show customers and business partners adherence to the framework, and thus adherence to generative AI best practices.

Research and regulatory updates continue to press organizations to ensure they are using AI in a safe and secure manner. While generative AI can significantly improve products, customer experience, and employee efficiency, implementing it carries inherent risks.

If your organization has added generative AI to its service offerings, or is looking to add it, you need to know what these risks are and how to mitigate them. By incorporating the unique risks associated with generative AI into a new or existing SOC 2 Type 2 report, you show customers directly that your AI controls have been vetted by a third party for design and operating effectiveness. New risks can also mean additional compliance burdens; including AI in an existing SOC 2 report reduces the need to take on a separate compliance effort, such as ISO 42001.

A Quick Overview of Generative AI

Artificial intelligence became a buzzword when it entered the public consciousness around 2022 with the rise of ChatGPT. While AI as a concept has been around since 1950, when Alan Turing proposed the Turing Test, generative AI is relatively new and is what most people mean when they discuss AI today. Generative AI uses Large Language Models (LLMs), trained on immense amounts of data, to interpret human input and generate human-like responses. Its value has already been recognized by a significant number of businesses: a McKinsey & Company survey noted that 92% of companies plan to increase their AI investments over the next three years. Generative AI is here to stay, and the associated risks need to be addressed by any company that wants to stay ahead.

Current State of AI Audits

As AI has grown in presence and capability, particularly with the rise of agentic AI, public concern has grown as well. As the general public's enthusiasm for AI declines, demand for accountability and audits around AI is sure to increase. Frameworks exist, such as the NIST profile we'll discuss in the next section, and the governing bodies behind audits and assessments are also working to address AI risks head on. HITRUST has released two new service offerings for evaluating and/or certifying AI, and CSA STAR is in the process of developing its own program for AI security assurance. While an independent assessment against a framework such as the NIST AI framework could be performed, it would not carry the same weight as an engagement backed by a governing body, such as a SOC 2 Type 2 report issued under AICPA attestation standards.

While the AICPA has produced various articles around AI governance, an AI-specific framework has not been announced. However, that does not mean an organization cannot use its SOC 2 Type 2 Report to address the same risks.

NIST AI 600-1 – Generative AI Profile of the AI Risk Management Framework

NIST AI 600-1 is a generative AI profile of the NIST AI Risk Management Framework (AI RMF), addressing specific risks associated with generative AI. Released in July 2024, this profile provides organizations with a structured approach to identifying and managing the unique challenges posed by generative AI technologies. It serves as a companion resource to the broader AI RMF, offering targeted guidance for developing and deploying generative AI systems. By adopting the Generative AI Profile, organizations can better align their risk management practices with the nuanced demands of generative AI applications. The profile identifies twelve risks that are unique to generative AI or exacerbated by it. Below, we'll look in depth at three of those risks that most closely align with the SOC 2 Trust Services Criteria (TSCs), but all twelve can be mapped to the various TSCs.

Data Privacy

Data privacy is a significant challenge with generative AI, especially regarding Large Language Models (LLMs) and their potential to expose sensitive information from training data. If sensitive information is included in the training data, the AI may regurgitate it as part of a response. The AI may also infer sensitive information about an individual from the data it was trained on. If that inferred information is accurate PII, its exposure is a privacy problem; if it is inaccurate and presented to or about the individual in question, that creates its own harm.
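As an illustration only, and not a control prescribed by NIST AI 600-1 or discussed in this article, the sketch below (with hypothetical patterns and function names) shows the kind of screening step that could flag obvious PII before a record reaches a model, whether in training data or in a prompt. Real deployments would rely on dedicated PII-detection tooling.

    import re

    # Minimal, hypothetical PII screen: flags obvious SSN and email patterns
    # before a record is used for model training or placed into a prompt.
    # This only illustrates catching sensitive data before it can reach the
    # model, where it could later be regurgitated or used for inference.
    PII_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"),
    }

    def screen_record(text: str) -> list[str]:
        """Return the names of the PII patterns found in a record."""
        return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

    record = "Contact Jane Doe at jane.doe@example.com, SSN 123-45-6789."
    findings = screen_record(record)
    if findings:
        # Quarantine the record instead of feeding it to the training pipeline.
        print(f"Record rejected; possible PII detected: {findings}")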

Because the source of the data privacy risk is the LLM itself, it is imperative to understand and vet the source of the LLM used when building or integrating generative AI into a system. By treating the LLM provider as a critical vendor that must be vetted to the fullest extent possible, you take the first step toward identifying harmful practices that could result in data leakage. Reviewing the provider's compliance reports and change management documentation can surface such issues before they affect your environment.

Confabulation/Hallucination

Confabulations, more commonly referred to as AI hallucinations, occur when generative AI is confidently incorrect. They stem from how the models are designed: outputs are generated to approximate the statistical distribution of the training data, not to verify facts. The problem is exacerbated when the generative AI includes a plausible-sounding justification with its output, which can further mislead the person reviewing it.

The impact of a hallucination depends largely on how the output is used. For a chatbot, the risk is that it presents an individual with factually incorrect information and the individual then makes decisions based on it. The consequences could be as minor as using the wrong amount of seasoning in a dish or as serious as an incorrect medical diagnosis. Some limited studies of hallucination rates and impacts have been published, but additional research is warranted.

Controls that mitigate hallucination risk largely focus on the outputs. For example, if AI output feeds a downstream process, having an individual with knowledge of the subject matter review that output before it moves along the processing flow can keep hallucinations out of the final result.
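As a minimal sketch of that kind of review gate, and assuming a hypothetical review workflow (the function and class names below are ours, not from any standard), the idea looks roughly like this:

    from dataclasses import dataclass

    @dataclass
    class ReviewDecision:
        approved: bool
        reviewer: str
        comments: str = ""

    def request_human_review(ai_output: str, topic: str) -> ReviewDecision:
        """Hypothetical hook: route the output to a subject-matter expert,
        e.g. via a ticketing queue or review UI. Replace with your own workflow."""
        raise NotImplementedError("Integrate with your organization's review process")

    def release_with_review_gate(ai_output: str, topic: str) -> str:
        """Only release AI-generated content downstream after explicit approval."""
        decision = request_human_review(ai_output, topic)
        if not decision.approved:
            raise ValueError(f"Output rejected by {decision.reviewer}: {decision.comments}")
        return ai_output  # approved; safe to continue along the processing flow

The design point is simply that the AI output cannot continue through the processing flow until a knowledgeable reviewer has explicitly approved it.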

Information Security

NIST highlights two primary information security risks: lowered barriers to, and increased automation of, offensive capabilities, and an expanded attack surface. Examples of attacks involving AI include:

  • Attackers can use generative AI to identify vulnerabilities and then automate their exploitation.
  • An attacker could modify the prompt posed to the AI in a way that changes the response or the actions the AI takes.
  • An attacker could inject a prompt into data the AI retrieves, effectively hijacking the response. This is of particular concern for agentic AI with access to sensitive information.
  • An attacker could compromise the training data or other model artifacts before an organization acquires the AI, altering the system's outputs.

Mitigating these types of attacks maps to established security best practices. By implementing logical access controls over the generative AI system, particularly wherever the prompt is handled, an organization can reduce the risk of an attacker manipulating the prompt. The same approach should be taken to protect the training data from unauthorized access and manipulation. Where an organization acquires its LLM from a third party, confirming that the third party has implemented sufficient logical access and change management controls helps ensure the model was not compromised before acquisition.
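To make "logical access controls where the prompt is handled" concrete, here is a minimal sketch with hypothetical role names and message structure: it restricts who may change the system prompt and keeps untrusted user input separate from trusted instructions.

    # Illustrative only: hypothetical roles and message structure, not a
    # complete defense against prompt injection.
    AUTHORIZED_PROMPT_EDITORS = {"ai-platform-admin"}  # hypothetical role name

    SYSTEM_PROMPT = "You are a support assistant. Never reveal internal data."

    def update_system_prompt(new_prompt: str, requester_role: str) -> str:
        """Logical access control over the prompt: only authorized roles may change it."""
        if requester_role not in AUTHORIZED_PROMPT_EDITORS:
            raise PermissionError(f"Role '{requester_role}' may not modify the system prompt")
        return new_prompt

    def build_messages(user_input: str) -> list[dict]:
        """Keep trusted instructions and untrusted user input in separate messages
        so user-supplied text is never concatenated into the system prompt."""
        return [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_input},
        ]

Separating the system prompt from user input does not by itself eliminate prompt injection, but it supports the access and change management controls described above.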

SOC 2 Type 2 and Generative AI Risks

Because SOC 2 Type 2 reports are the standard way for service organizations across many industries to demonstrate compliance with information security best practices, bringing AI components into the scope of an existing SOC 2 Type 2 is a low-cost way to provide customers with reasonable assurance about the security and compliance of those components. If clients are asking how your organization addresses the risks associated with generative AI, a SOC 2 report that covers those risks is an excellent way to convey what controls are in place.

The five TSCs are broad enough that controls can be developed to address these risks. Whether that takes the form of mapping controls and criteria to the NIST AI framework or a full SOC 2 Type 2 + NIST AI 600-1 assessment, the result is a third-party attestation that your controls, including those addressing AI, are designed and operating effectively. Note that auditors cannot provide reasonable assurance on the output of an AI, only on the controls around it. Below, we've mapped the NIST AI 600-1 unique risks to SOC 2 criteria to demonstrate that such a mapping is feasible. An organization would still need to design and implement controls that specifically address the unique risks generative AI poses.

If your organization needs assistance with designing controls, implementing controls, or needs a SOC 2 Type 2 that addresses existing controls around Generative AI, reach out to us.

NIST AI 600-1 unique risks mapped to the SOC 2 TSCs:

  • CBRN Information or Capabilities
    SOC TSCs: CC1.1, CC8.1, CC9.2
    Note: While the risk is not specifically mentioned within the TSCs, implementing change management controls and third-party risk management controls on the LLMs could help mitigate the risk.

  • Confabulation
    SOC TSCs: CC1.1, PI1.1, PI1.2, PI1.3, PI1.4

  • Dangerous, Violent, or Hateful Content
    SOC TSCs: CC1.1, CC8.1, CC9.2
    Note: While the risk is not specifically mentioned within the TSCs, implementing change management controls and third-party risk management controls on the LLMs could help mitigate the risk.

  • Data Privacy
    SOC TSCs: CC6 (all criteria), CC8.1, CC9.2, P2.1, P3.1, P4.1

  • Environmental Impacts
    SOC TSCs: A1.2

  • Harmful Bias/Homogenization
    SOC TSCs: CC1.1, CC8.1, CC9.2, PI1.2, PI1.4

  • Human-AI Configuration
    SOC TSCs: CC1.4, CC1.5, PI1.2, PI1.4

  • Information Integrity
    SOC TSCs: CC6.1, CC6.2, CC6.3, CC6.6, CC6.8, CC8.1, PI1.1, PI1.2

  • Information Security
    SOC TSCs: CC2.1, CC6 (all criteria), CC8.1

  • Intellectual Property
    SOC TSCs: CC1.1, CC8.1, CC9.2
    Note: While the risk is not specifically mentioned within the TSCs, implementing change management controls and third-party risk management controls on the LLMs could help mitigate the risk.

  • Obscene, Degrading, and/or Abusive Content
    SOC TSCs: CC1.1, CC8.1, CC9.2

  • Value Chain and Component Integration
    SOC TSCs: CC1.1, CC8.1, CC9.2

Content provided by Andrew Stansfield, LBMC Cybersecurity Manager. He can be reached at [email protected].

Keep Up with AI Risk and Compliance Trends

Generative AI is changing quickly, and so are the risks and regulatory expectations. Whether you're bringing AI into your work or fine-tuning what you already have, it's important to stay current. Check out our cybersecurity newsletter: it's packed with expert insights, practical tips, and the latest news on cybersecurity strategies. These resources can help lighten your audit load and demonstrate your commitment to adopting secure AI practices.
