Why "how likely × how bad" is the only risk formula that matters, and where vulnerabilities actually come from.
Vulnerability = a weakness that could be exploited. Threat = someone or something that might exploit it. Risk = the combination: Risk = Likelihood × Impact. A vulnerability with no realistic threat is low risk; a vulnerability with both motivated threats AND serious impact is critical risk. Vulnerabilities come in three flavours: technical (bugs, misconfiguration, outdated software), human (phishing, weak passwords, poor training), and process/physical (no offboarding, unlocked rooms, weak vendor management). Good security picks defences by risk, not by fear or checklist.
11.1 The Key Terms
These three words get used interchangeably in everyday speech, but in security they mean different things and mixing them up in an exam loses marks.
Asset – something you want to protect: data, system, reputation. Example: a customer database.
Vulnerability – a weakness in the asset or its surroundings. Example: the database server is running outdated software with a known exploit.
Threat – anyone or anything that could exploit the vulnerability. Example: cybercriminals who want to steal customer data and sell it.
Risk – the combination: how likely is the threat to exploit this vulnerability, and how bad would that be? Example: high risk when motivated attackers, valuable data, and a known exploit all line up.
Exploit – the specific method or code that turns a vulnerability into an actual attack. Example: a public Metasploit module that attacks the outdated database software.
Control – something you put in place to reduce risk. Example: patch the database, add a firewall, monitor for unusual queries.
LEARN THIS SENTENCE: "The asset has a vulnerability that a threat could exploit, creating a risk, which we reduce by applying controls." Examiners love this chain of reasoning – use the specific words, in this order, when explaining a scenario.
11.2 The Risk Formula
The cleanest way to reason about risk – and the formulation used throughout Chapter 16's design framework:
Risk = Likelihood × Impact
Likelihood = how probable is it that the threat exploits this vulnerability? (Influenced by: existence of an exploit, attacker motivation, exposure.)
Impact = if it does happen, how bad are the consequences? (Data loss, financial cost, legal penalty, reputation damage, human safety.)
Why both factors matter
A vulnerability on a public-facing login page (high likelihood, high impact) = critical. A vulnerability in an internal system that's air-gapped (low likelihood, same impact) = much lower risk. The asset is the same; what differs is exposure.
Every control you add can target either side of the equation:
Controls that reduce likelihood: MFA, patching, firewalls, user training, good passwords
Controls that reduce impact: backups, encryption at rest, network segmentation, incident response plans
A good design uses both sides. This is called defence in depth – layers reduce likelihood (harder to get in) AND layers reduce impact (damage is limited if someone does get in).
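The two sides of the equation can be made concrete with a toy score. This is a minimal sketch: the 1–3 scales, scenario names, and ratings below are invented for illustration, not taken from the chapter.

```python
# Toy illustration of Risk = Likelihood x Impact on a 1-3 scale.
# The scenarios and ratings are invented for demonstration.

def risk_score(likelihood: int, impact: int) -> int:
    """Multiply the two factors; a higher score means a more urgent risk."""
    return likelihood * impact

# Public-facing login page with a known exploit: both factors high.
internet_facing = risk_score(likelihood=3, impact=3)

# The same vulnerability on an air-gapped internal system: the impact
# is unchanged, but low exposure drags the likelihood down.
air_gapped = risk_score(likelihood=1, impact=3)

# Defence in depth attacks both factors: a likelihood control (MFA,
# patching) plus an impact control (backups, segmentation).
after_controls = risk_score(likelihood=2, impact=2)

print(internet_facing, air_gapped, after_controls)  # 9 3 4
```

Note how the air-gapped case scores a third of the internet-facing one with the identical vulnerability: only the exposure changed.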
11.3 The Risk Matrix – A Visual Tool
A risk matrix plots likelihood on one axis and impact on the other, producing risk ratings by where each risk sits. It's the standard way to communicate risk to non-technical stakeholders.
The 3×3 matrix is simple but powerful. Larger matrices (5×5) are used where more granularity is needed. Either way, the logic is the same.
HOW TO USE IT: For each identified risk, place it on the matrix. Start with the top-right corner – these are the "fix today" risks. Risks in the top-left or bottom-right might justify mitigation. Bottom-left can often be accepted as a cost of doing business. Organisations that try to address every risk equally run out of budget before addressing the important ones. In exam terms: risk prioritisation is the difference between working security and security theatre.
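Mechanically, a 3×3 matrix is just a lookup table. The sketch below assumes illustrative treatment labels ("accept", "monitor", "fix today", and so on); they follow common convention, not the chapter's exact matrix wording.

```python
# A 3x3 risk matrix as a plain lookup table. Treatment labels are
# illustrative conventions, not the chapter's exact matrix.

MATRIX = {
    # (likelihood, impact) -> suggested response
    ("low", "low"): "accept",          # bottom-left: cost of doing business
    ("low", "medium"): "monitor",
    ("low", "high"): "mitigate",       # rare but serious: worth reducing impact
    ("medium", "low"): "monitor",
    ("medium", "medium"): "mitigate",
    ("medium", "high"): "fix soon",
    ("high", "low"): "mitigate",       # frequent but minor: reduce likelihood
    ("high", "medium"): "fix soon",
    ("high", "high"): "fix today",     # top-right: the priority corner
}

def rate(likelihood: str, impact: str) -> str:
    """Look up the response for a risk placed on the matrix."""
    return MATRIX[(likelihood, impact)]

print(rate("high", "high"))   # fix today
print(rate("low", "low"))     # accept
```

The point of the table form is communication: a non-technical stakeholder can see at a glance why one risk gets budget this week and another is consciously accepted.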
11.4 The Three Families of Vulnerability
Vulnerabilities come from three very different sources. A serious assessment covers all three.
Technical vulnerabilities
Weaknesses in code, configuration, or architecture.
Software bugs – programmer errors; often decades-old patterns that keep reappearing.
Misconfiguration – default passwords, cloud storage left public, permissive firewall rules. Cause: admin oversight and "get it working first, secure it later" shortcuts.
Outdated software – running Windows 7 in 2026, unpatched servers. Cause: legacy systems, lack of patch management, dependencies.
Design flaws – weak or absent encryption, plaintext passwords, no authentication. Cause: security not considered at design time; budget and schedule pressure.
Supply chain – a library your software depends on has a vulnerability. Cause: modern apps use hundreds of dependencies, and any one being compromised affects you.
Human vulnerabilities
Weaknesses in people, training, and human behaviour. Remember from Chapter 10: "the squishy meat at the keyboard" – in exam terms, a human vulnerability.
Social engineering susceptibility – falls for phishing, gives information to callers claiming to be IT support, clicks suspicious links.
Awareness gaps – doesn't recognise spoofed email, posts sensitive information on social media, works on sensitive files in public.
Insider risks – disgruntled staff, financial pressure, coercion.
Habit-based – autopiloting through security warnings, reusing credentials across systems.
WHY HUMAN VULNS ARE HARDEST TO FIX: You can patch software with a click. You can't "patch" a person. Training helps but doesn't eliminate risk – humans are variable and attack techniques evolve. That's why modern defence-in-depth assumes that some users will click and designs controls to limit what happens next: email filtering catches most phishing; MFA means stolen passwords alone aren't enough; least privilege limits the damage if they do get in.
Process and physical vulnerabilities
Weaknesses in how the organisation operates.
Poor process – no offboarding (ex-employees keep access), no change control, no backup testing, no incident response plan.
Vendor/supply chain – a third-party contractor with weak security has access to your systems (this was the Medibank pattern).
Physical – unlocked server rooms, unsecured network jacks, unattended workstations, tailgating into buildings.
Policy gaps – no password policy, no data handling policy, no acceptable-use policy.
Shadow IT – staff using personal apps and accounts for work (Dropbox for files, personal email for business), outside IT's control and monitoring.
11.5 Where Vulnerabilities Come From – The CVE System
Publicly known technical vulnerabilities are catalogued in the CVE (Common Vulnerabilities and Exposures) database. Each gets an ID like CVE-2024-12345. When a researcher or vendor discovers a vulnerability, it goes through coordinated disclosure:
Researcher finds the bug and reports it privately to the vendor
Vendor investigates and develops a patch (usually 30โ90 days)
Patch is released. CVE ID is published.
Details become public (sometimes immediately, sometimes after a delay)
Attackers start scanning for unpatched systems. Defenders scramble to apply the patch.
The gap between "patch released" and "attackers exploiting it at scale" is typically hours to days. This is why patch management is so critical.
Zero-day vulnerabilities
A zero-day is a vulnerability being exploited in the wild before a patch exists – the defenders had "zero days" to prepare. Nation-state actors often use zero-days against high-value targets. They're dangerous because normal defences (patching) don't apply – you need compensating controls like network segmentation, monitoring, and restricted privileges.
Log4Shell (CVE-2021-44228): A critical vulnerability in the widely-used Java logging library Log4j, disclosed December 2021. Rated 10.0/10.0 on CVSS. Exploitation was trivial โ a single log entry could run attacker code. Because Log4j was used in tens of thousands of applications, virtually every large organisation spent weeks scrambling to patch. Nation-state actors and cybercriminals were exploiting it within hours of disclosure. The ACSC issued emergency guidance. This is the archetypal supply-chain vulnerability: no one made a mistake in their own code, but their software depended on an affected library.
11.6 OWASP Top 10 – Web App Vulnerabilities
The Open Worldwide Application Security Project (OWASP) publishes a Top 10 list of the most common critical vulnerabilities in web applications, updated every few years. You don't need to memorise all 10, but know the framework exists and a few examples:
Broken Access Control – users can access things they shouldn't, e.g. changing a URL to see someone else's profile. An authorisation failure.
Cryptographic Failures – data isn't encrypted properly or at all, e.g. passwords stored in plaintext, data sent over HTTP.
Injection – user input is treated as code, e.g. SQL injection (Chapter 10), command injection, cross-site scripting.
Security Misconfiguration – default credentials, overly permissive settings, unnecessary features enabled.
Vulnerable Components – using outdated libraries with known CVEs (Log4Shell territory).
HOW THIS RELATES TO CIA AND AAA: Most OWASP Top 10 categories map directly to AuthN/AuthZ failures (from Chapter 8). "Broken Access Control" = authorisation failure. "Authentication Failures" = authentication failure. "Injection" typically violates Confidentiality and Integrity. The frameworks all weave together – that's the sign they're actually useful.
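The Injection category is easy to demonstrate with Python's built-in sqlite3 module. This is a minimal sketch: the table, data, and payload are invented for the demo, showing the same lookup built unsafely (string splicing) and safely (a parameterised query).

```python
# Sketch of the Injection category: the same query built unsafely and
# safely. The table, data, and payload are invented for the demo.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "nobody' OR '1'='1"  # classic injection payload

# VULNERABLE: the input is spliced into the SQL string, so the payload
# becomes part of the query and the WHERE clause matches every row.
unsafe = conn.execute(
    f"SELECT secret FROM users WHERE name = '{user_input}'"
).fetchall()

# SAFE: the ? placeholder keeps the input as data, never code, so the
# payload is just a weird username that matches nothing.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # [('s3cret',)] - confidentiality breached
print(safe)    # [] - payload treated as a literal string
```

One line of difference separates "user input treated as code" from "user input treated as data", which is why parameterised queries are the standard control for this whole category.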
11.7 Vulnerability Management – The Discipline
Vulnerability management is the ongoing process of finding, prioritising, and fixing vulnerabilities. It's not a one-off project – it's a continuous cycle.
Mature security teams run this cycle weekly or monthly. Consumer software (Windows, Mac) runs a similar cycle for users via automatic updates.
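The prioritisation step of the cycle can be sketched as a sort over findings, reusing the Risk = Likelihood × Impact score from section 11.2. The CVE IDs and ratings below are placeholders, not real database entries.

```python
# Sketch of the prioritisation step in the cycle: score each finding
# and fix the worst first. IDs and ratings are invented placeholders.

findings = [
    {"id": "CVE-XXXX-0001", "likelihood": 3, "impact": 3},  # internet-facing
    {"id": "CVE-XXXX-0002", "likelihood": 1, "impact": 2},  # internal only
    {"id": "CVE-XXXX-0003", "likelihood": 2, "impact": 3},
]

# Risk = Likelihood x Impact, as in section 11.2.
for f in findings:
    f["risk"] = f["likelihood"] * f["impact"]

# Patch queue: highest risk first.
queue = [f["id"] for f in sorted(findings, key=lambda f: f["risk"], reverse=True)]
print(queue)  # ['CVE-XXXX-0001', 'CVE-XXXX-0003', 'CVE-XXXX-0002']
```

Real teams do the same thing with CVSS scores plus local context (is the system exposed? does it hold sensitive data?) rather than raw severity alone.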
11.8 Risk Treatment – Your Four Options
Once a risk is identified and assessed, you have four options. Most organisations mix them:
Mitigate – apply controls to reduce likelihood or impact. When to use: most common; the default for meaningful risks.
Transfer – shift the risk to someone else via insurance, outsourcing, or vendor contracts. When to use: another party is better placed to handle it, or financial protection is needed.
Avoid – stop doing the activity that creates the risk. When to use: the risk is too high and the activity isn't critical, e.g. don't store card numbers at all; use a payment provider instead.
Accept – acknowledge the risk and consciously do nothing. When to use: the cost of a control exceeds the potential impact; common for low-impact risks. Must be documented.
TRAP: "Accept" is a legitimate option, but it must be a deliberate decision, documented, and approved by someone with authority. Doing nothing because nobody thought about it is NOT acceptance – it's negligence, and under the Privacy Act and NDB scheme, that can constitute a failure to take reasonable steps.
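One way to see how the four options interact is a toy decision helper. The thresholds and check order below are invented purely for illustration; real treatment decisions need human judgement, documentation, and sign-off, as the trap above stresses.

```python
# Toy decision helper for the four treatment options. Thresholds and
# ordering are invented for illustration, not a real methodology.

def choose_treatment(risk: int, control_cost: int,
                     critical_activity: bool, insurable: bool) -> str:
    if risk >= 8 and not critical_activity:
        return "avoid"        # too risky, and we can simply stop doing it
    if control_cost > risk:
        return "accept"       # only as a documented, approved decision
    if insurable and control_cost > risk // 2:
        return "transfer"     # e.g. insurance or a payment provider
    return "mitigate"         # the default for meaningful risks

print(choose_treatment(9, 3, critical_activity=False, insurable=False))  # avoid
print(choose_treatment(2, 6, critical_activity=True, insurable=False))   # accept
print(choose_treatment(6, 4, critical_activity=True, insurable=True))    # transfer
print(choose_treatment(9, 2, critical_activity=True, insurable=False))   # mitigate
```

Notice that "accept" only triggers when the control costs more than the risk it removes; silent inaction never appears as an option.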
11.9 Putting It Together
The full pipeline, from asset to treatment:
Identify assets – what do we need to protect?
Identify vulnerabilities – technical, human, process.
Identify threats – who would exploit these and why? (Chapter 9's actors)
Assess risks – Likelihood × Impact for each combination.
Prioritise – highest risks first.
Decide treatment – mitigate, transfer, avoid, or accept.
Apply controls – matched to the chosen treatment.
Monitor and verify – did the controls work? Are new risks emerging?
THE EXAM-READY STRUCTURE: Any scenario asking you to "assess risks and propose controls" should follow this order. Identify assets first, then vulns, then threats, then assess, then prioritise, then treat. A response that just jumps to "install a firewall" skips the reasoning that earns marks.
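The middle of the pipeline (assess, prioritise, decide treatment) can be sketched end to end. The assets, vulnerabilities, threats, and ratings below are invented examples chosen only to make the mechanics visible.

```python
# End-to-end sketch of steps 4-6 of the pipeline. All assets,
# vulnerabilities, threats, and ratings are invented examples.

register = [
    # (asset, vulnerability, threat, likelihood, impact)
    ("customer DB", "unpatched server", "cybercriminals", 3, 3),
    ("staff laptops", "phishing susceptibility", "criminal gangs", 3, 2),
    ("archive share", "weak passwords", "opportunists", 1, 1),
]

# Steps 4-5: assess each combination (Risk = Likelihood x Impact)
# and sort highest risk first.
assessed = sorted(
    ((l * i, asset, vuln) for asset, vuln, _threat, l, i in register),
    reverse=True,
)

# Step 6: decide treatment - serious risks get mitigated; trivial ones
# are consciously accepted and documented (never silently ignored).
plan = [
    (asset, vuln, "mitigate" if risk >= 4 else "accept (documented)")
    for risk, asset, vuln in assessed
]
for entry in plan:
    print(entry)
```

A real risk register carries far more context per row (owner, treatment deadline, residual risk), but the ordering logic is exactly this.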
11.10 Quiz Time
A small library has a router that hasn't been updated in 5 years. The vulnerability is real but obscure. Is this a critical risk?
Probably not critical, but meaningful. Likelihood: moderate – the router is internet-exposed, so automated scans will eventually find it, and older router CVEs often have public exploits. Impact: moderate – a compromised router enables MITM and network-wide attack, but library assets are limited (no payment data, no sensitive PII beyond library accounts). Risk rating: probably High (not Critical, but action should be prioritised). Treatment: firmware update (mitigate), or replace (avoid continued use). Taking no action = accepting a risk that's only high because of an easy-to-fix technical issue, which would be hard to defend if exploited.
Explain the difference between "threat" and "vulnerability" using an unlocked front door as an analogy.
The unlocked door is the vulnerability – a weakness in your home's security. The burglar is the threat – someone who might exploit it. The risk depends on both: a house in a quiet rural area (low threat likelihood) might have low risk even with an unlocked door; the same door on an inner-city street (high threat likelihood) is high risk. Mitigation: lock the door (reduces likelihood); have insurance (transfers the impact); move to a safer area (avoids the threat); or accept the risk (if contents are low-value and you never leave the house).
A company does annual vulnerability scanning. Why might that not be enough?
Because new vulnerabilities are disclosed every day. A CVE published the day after your annual scan could go unaddressed for 364 days. Between scans, your systems are invisible to your own security team. Good practice: continuous or at least monthly scanning, plus subscribing to vendor security alerts so critical patches are applied within days of release. Annual scanning might satisfy a compliance checkbox but doesn't match the speed of modern threats.
A cloud storage bucket was set to "public" accidentally, exposing customer data. Classify the vulnerability.
This is a technical vulnerability (security misconfiguration, specifically) with strong process and human components:
– Technical: the bucket is publicly readable when it shouldn't be.
– Process: no change-control review caught the setting; no automated scanning detected public buckets; no least-privilege default.
– Human: whoever deployed it either didn't know the setting or misjudged it.
This is exactly how many real data breaches happen – Accenture, Verizon, Microsoft, and countless others have had "cloud bucket left public" incidents. Impact: Confidentiality breach (likely NDB-reportable if personal info is exposed). Treatment: immediate remediation (private), review of other buckets, automated policies preventing public buckets, training.