AI layers - addenda
Annex: related resources - suggested reading
There is no shortage of papers portraying the big picture linking AI and (cyber)security. They line up issues across the board, leaving open the question of who needs to pick up what to address the concerns raised. We briefly describe four relevant references in this area:
The US National Institute of Standards and Technology – NIST
Based on: Generative Artificial Intelligence Profile
The document identifies a number of risks, including the following ten:
- Confabulation: known colloquially as "hallucinations" or "fabrications"
- Dangerous, Violent, or Hateful Content: eased production of and access to violent, inciting, radicalizing, or threatening content
- Data Privacy: unauthorized use, disclosure, de-anonymization
- Harmful Bias and undesired homogeneity that skews system or model outputs
- Human-AI Configuration: inappropriately anthropomorphizing GAI systems
- Information Integrity: not distinguishing fact from opinion or fiction, or not acknowledging uncertainties
- Information Security: Lowered barriers for offensive cyber capabilities
- Intellectual Property: replication of copyrighted, trademarked, or licensed content; exposure of trade secrets; or plagiarism or illegal replication.
- Obscene, Degrading, and/or Abusive Content: Eased production of and access to obscene, degrading, and/or abusive imagery, including synthetic child sexual abuse material and nonconsensual intimate images of adults.
- Value Chain and Component Integration: Non-transparent or untraceable integration of upstream third-party components
Based on: AI Risk Management Framework
This NIST framework presents the requirements for trustworthiness: systems should be valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed.

These properties sit at the top of the risk control stack; they are not a concern for every AI system. Two relatively new items are included:
- Explainable and Interpretable
- Fair – with harmful bias managed
These two properties apply to other systems as well, yet they are of particular interest in the current wave of AI systems.
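To make "explainable and interpretable" a little more concrete, the following minimal Python sketch (our own illustration, not taken from the NIST documents; the toy model and feature names are invented) probes a scoring function by perturbing one input at a time, a crude form of sensitivity-based explanation:

```python
# Toy "model": a weighted sum, purely for illustration.
def risk_score(features: dict[str, float]) -> float:
    weights = {"failed_logins": 0.6, "bytes_out": 0.3, "hour_of_day": 0.1}
    return sum(weights[name] * value for name, value in features.items())

# Crude sensitivity analysis: how much does the score move when each
# feature is nudged by `delta`? Larger moves mean larger influence.
def explain(features: dict[str, float], delta: float = 1.0) -> dict[str, float]:
    base = risk_score(features)
    return {
        name: risk_score({**features, name: value + delta}) - base
        for name, value in features.items()
    }

if __name__ == "__main__":
    sample = {"failed_logins": 5.0, "bytes_out": 2.0, "hour_of_day": 3.0}
    print(explain(sample))
    # -> approximately {'failed_logins': 0.6, 'bytes_out': 0.3, 'hour_of_day': 0.1}
```

A real system would use a dedicated explainability technique, but the principle is the same: the system can report which inputs drove its output.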
The Open Worldwide Application Security Project - OWASP
Based on: OWASP Top 10 for Large Language Model Applications
OWASP has created a top 10 of application security concerns for LLM applications, alongside a generic technical architecture for such a system. The ten concerns are:
- Prompt Injection
- Insecure Output Handling
- Training Data Poisoning
- Model Denial of Service
- Supply Chain Vulnerabilities
- Sensitive Information Disclosure
- Insecure Plugin Design
- Excessive Agency
- Overreliance
- Model Theft
These ten items stay close to the typical OWASP focus, namely security in (AI) projects during the build/acquire life cycle.
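As a minimal illustration of two of these concerns, the Python sketch below (our own, not part of the OWASP document; the action names and formats are invented) treats model output as untrusted: it escapes output before rendering it as HTML (Insecure Output Handling) and allow-lists the actions a model may trigger (Excessive Agency):

```python
import html
import re

# Hypothetical allow-list: the only "plugin" actions the model may trigger.
ALLOWED_ACTIONS = {"search_docs", "get_weather"}

def render_llm_output(raw: str) -> str:
    """Escape model output before inserting it into an HTML page, so a
    prompt-injected <script> payload is displayed rather than executed."""
    return html.escape(raw)

def dispatch_action(llm_request: str) -> str:
    """Parse a model-proposed action of the form 'action:argument' and
    refuse anything outside the allow-list, limiting the model's agency."""
    match = re.fullmatch(r"(\w+):(.{0,200})", llm_request.strip())
    if not match or match.group(1) not in ALLOWED_ACTIONS:
        return "refused: action not permitted"
    action, argument = match.groups()
    return f"ok: would run {action}({argument!r})"

if __name__ == "__main__":
    # Output from a (simulated) injected prompt:
    poisoned = "<script>document.location='https://evil.example'</script>"
    print(render_llm_output(poisoned))           # rendered inert
    print(dispatch_action("delete_user:42"))     # refused: not on the allow-list
    print(dispatch_action("search_docs:ENISA"))  # ok
```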
Deloitte
Based on: The CISO's Guide to Generative AI
In "The CISO's Guide to Generative AI", Deloitte shows the probable positive impact of AI on security solutions. It lists advantages for prediction, interpretation, simulation, automation, detection, interaction, and creation. This long list implies that all security disciplines can be impacted in a positive way. More specifically, the document states that AI can be deeply impactful in enhancing cyber infrastructure and detect-and-respond capabilities.
The message is that security solutions can profit from AI, and indeed must, to compete with malicious actors who are equally adopting AI for attacks.
ENISA – European Union Agency for Cybersecurity
Based on: Multilayer Framework For Good Cybersecurity Practices For AI
- Governance layering
The document identifies three layers to build security for AI:
- The ground layer consists of the general cybersecurity foundations
- On top of that, generic but AI-specific elements are added
- The top layer addresses sectorial AI
This layering differs from the layering we describe for building AI applications.
- ICT layering
As far as ICT goes, AI systems are layered no differently than other IT systems: infrastructure, data, applications, processes, … The demands on storage, processing, and communication, however, are not the same.
- Types of AI systems
The ENISA document considers the following types of AI, based on the area in which it is applied: computer vision, expert systems, machine learning, multi-agent systems, natural language processing, robotics, and speech recognition.
Distinguishing between multiple types is not just an academic exercise; it can actually help to provide the right security for the right use case. Not all AI systems are built the same way, and these differences impact the risks and the required controls.
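As a toy illustration of this last point, the sketch below (our own; the control names are illustrative and not drawn from the ENISA framework) keys a control catalogue by AI system type, combining a generic baseline with type-specific checks, which mirrors ENISA's idea of layering AI-specific practices on top of general foundations:

```python
# Hypothetical mapping from ENISA AI system type to example controls.
# Control names are illustrative only, not drawn from the framework.
BASELINE = ["asset inventory", "access control", "logging and monitoring"]

TYPE_SPECIFIC = {
    "computer vision": ["adversarial-example robustness testing"],
    "machine learning": ["training data provenance and poisoning checks"],
    "natural language processing": ["prompt injection filtering"],
}

def controls_for(ai_type: str) -> list[str]:
    """General cybersecurity baseline plus type-specific additions."""
    return BASELINE + TYPE_SPECIFIC.get(ai_type, [])

print(controls_for("computer vision"))
# -> ['asset inventory', 'access control', 'logging and monitoring',
#     'adversarial-example robustness testing']
```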