Chapter 11 · Part B

Vulnerabilities & Risk Core

Why "how likely ร— how bad" is the only risk formula that matters, and where vulnerabilities actually come from.
Vulnerability = a weakness that could be exploited. Threat = someone or something that might exploit it. Risk = the combination: Risk = Likelihood ร— Impact. A vulnerability with no realistic threat is low risk; a vulnerability with both motivated threats AND serious impact is critical risk. Vulnerabilities come in three flavours: technical (bugs, misconfiguration, outdated software), human (phishing, weak passwords, poor training), and process/physical (no offboarding, unlocked rooms, weak vendor management). Good security picks defences by risk, not by fear or checklist.

11.1 The Key Terms

These three words get used interchangeably in everyday speech, but in security they mean different things and mixing them up in an exam loses marks.

Term | Definition | Example
Asset | Something you want to protect — data, system, reputation | Customer database
Vulnerability | A weakness in the asset or its surroundings | Database server is running outdated software with a known exploit
Threat | Anyone or anything that could exploit the vulnerability | Cybercriminals who want to steal customer data and sell it
Risk | The combination — how likely is the threat to exploit this vulnerability, and how bad would that be? | High risk: motivated attackers + valuable data + known exploit
Exploit | The specific method or code that turns a vulnerability into an actual attack | Public "Metasploit" module that attacks the outdated database software
Control | Something you put in place to reduce risk | Patch the database; add a firewall; monitor for unusual queries
LEARN THIS SENTENCE: "The asset has a vulnerability that a threat could exploit, creating a risk, which we reduce by applying controls." Examiners love this chain of reasoning — use the specific words, in this order, when explaining a scenario.

11.2 The Risk Formula

The cleanest way to reason about risk โ€” and the formulation used throughout Chapter 16's design framework:

Risk = Likelihood × Impact

Likelihood = how probable is it that the threat exploits this vulnerability? (influenced by: existence of exploit, attacker motivation, exposure)
Impact = if it does happen, how bad are the consequences? (data loss, financial cost, legal penalty, reputation damage, human safety)
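The formula can be sanity-checked in a few lines of code. A minimal sketch, not from the chapter: the 1–3 scale and the `risk_score` name are my own.

```python
# Hedged sketch: Risk = Likelihood x Impact on a 1-3 scale,
# where 1 = Low, 2 = Medium, 3 = High (an invented convention).

def risk_score(likelihood: int, impact: int) -> int:
    """Multiply the two factors; both must be 1 (Low), 2 (Medium) or 3 (High)."""
    if likelihood not in (1, 2, 3) or impact not in (1, 2, 3):
        raise ValueError("likelihood and impact must be 1, 2 or 3")
    return likelihood * impact

# A flaw on a public-facing login page: high likelihood, high impact.
print(risk_score(3, 3))  # 9
# The same flaw on an air-gapped internal system: low likelihood, same impact.
print(risk_score(1, 3))  # 3
```

Note how the second call shows the point made below: the asset and impact are unchanged, but lower exposure drops the score.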

Why both factors matter

A vulnerability on a public-facing login page (high likelihood, high impact) = critical. A vulnerability in an internal system that's air-gapped (low likelihood, same impact) = much lower risk. The asset is the same; what differs is exposure.

Every control you add can target either side of the equation:

— Reduce likelihood: patching, firewalls, MFA, and training make exploitation less probable.
— Reduce impact: backups, least privilege, and network segmentation limit the damage if exploitation happens.

A good design uses both sides. This is called defence in depth — layers reduce likelihood (harder to get in) AND layers reduce impact (damage is limited if someone does get in).

11.3 The Risk Matrix — A Visual Tool

A risk matrix plots likelihood on one axis and impact on the other, producing risk ratings by where each risk sits. It's the standard way to communicate risk to non-technical stakeholders.

Risk Matrix — Likelihood × Impact

Likelihood ↓ \ Impact → | Low | Medium | High
High | MEDIUM | HIGH | CRITICAL
Medium | LOW | MEDIUM | HIGH
Low | LOW | LOW | MEDIUM

Action priority — Critical: fix now; High: prioritise; Medium: planned; Low: accept or monitor.
The 3×3 matrix is simple but powerful. Larger matrices (5×5) are used where more granularity is needed. Either way — the logic is the same.
HOW TO USE IT: For each identified risk, place it on the matrix. Start with the top-right corner — these are the "fix today" risks. Risks in the top-left or bottom-right might justify mitigation. Bottom-left can often be accepted as a cost of doing business. Organisations that try to address every risk equally run out of budget before addressing the important ones. In exam terms: risk prioritisation is the difference between working security and security theatre.
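The 3×3 matrix can also be expressed as a lookup table. A minimal sketch assuming the Low/Medium/High labels used in this section; the `RATINGS` dict and `rate` function are invented names.

```python
# Hedged sketch: the 3x3 risk matrix as a (likelihood, impact) lookup.
# Cell values mirror the matrix in this section.

RATINGS = {
    ("low",    "low"):    "LOW",
    ("low",    "medium"): "LOW",
    ("low",    "high"):   "MEDIUM",
    ("medium", "low"):    "LOW",
    ("medium", "medium"): "MEDIUM",
    ("medium", "high"):   "HIGH",
    ("high",   "low"):    "MEDIUM",
    ("high",   "medium"): "HIGH",
    ("high",   "high"):   "CRITICAL",
}

def rate(likelihood: str, impact: str) -> str:
    """Return the matrix cell for a given likelihood/impact pair."""
    return RATINGS[(likelihood.lower(), impact.lower())]

print(rate("High", "High"))    # CRITICAL -> fix now
print(rate("Low", "Medium"))   # LOW -> accept or monitor
```

A dict keyed on both axes keeps the "no formula, just a table" spirit of the matrix: ratings are a policy choice, not a calculation.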

11.4 The Three Families of Vulnerability

Vulnerabilities come from three very different sources. A serious assessment covers all three.

Technical vulnerabilities

Weaknesses in code, configuration, or architecture.

Type | Examples | How they arise
Software bugs | Buffer overflows, injection flaws, memory corruption | Programmer errors; often decades-old patterns that keep reappearing
Misconfiguration | Default passwords, cloud storage left public, permissive firewall rules | Admin oversight, "get it working first, secure it later" shortcuts
Outdated software | Running Windows 7 in 2026, unpatched servers | Legacy systems; lack of patch management; dependencies
Design flaws | Weak or absent encryption, plaintext passwords, no authentication | Security not considered at design time; budget/schedule pressure
Supply chain | A library your software depends on has a vulnerability | Modern apps use 100s of dependencies; any one being compromised affects you

Human vulnerabilities

Weaknesses in people, training, and human behaviour. Remember from Chapter 10 — "the squishy meat at the keyboard" (in exam terms: human vulnerability).

Type | Examples
Social engineering susceptibility | Falls for phishing, gives info to callers claiming to be IT support, clicks suspicious links
Weak credentials | Reused passwords, dictionary passwords, shared accounts
Poor awareness | Doesn't recognise spoofed email, posts sensitive info on social media, works on sensitive files in public
Insider risks | Disgruntled staff, financial pressure, coercion
Habit-based | Autopiloting through security warnings, reusing credentials across systems
WHY HUMAN VULNS ARE HARDEST TO FIX: You can patch software with a click. You can't "patch" a person. Training helps but doesn't eliminate risk — humans are variable and attack techniques evolve. That's why modern defence-in-depth assumes that some users will click and designs controls to limit what happens next: email filtering catches most phishing; MFA means stolen passwords alone aren't enough; least privilege limits the damage if they do get in.

Process and physical vulnerabilities

Weaknesses in how the organisation operates.

Type | Examples
Poor process | No offboarding (ex-employees keep access), no change control, no backup testing, no incident response plan
Vendor/supply chain | Third-party contractor with weak security has access to your systems (this was the Medibank pattern)
Physical | Unlocked server rooms, unsecured network jacks, unattended workstations, tailgating into buildings
Policy gaps | No password policy, no data handling policy, no acceptable-use policy
Shadow IT | Staff using personal apps/accounts for work (Dropbox for files, personal email for business), outside IT's control and monitoring

11.5 Where Vulnerabilities Come From — The CVE System

Publicly-known technical vulnerabilities are catalogued in the CVE (Common Vulnerabilities and Exposures) database. Each gets an ID like CVE-2024-12345. When a researcher or vendor discovers a vulnerability, it goes through coordinated disclosure:

  1. Researcher finds the bug and reports it privately to the vendor
  2. Vendor investigates and develops a patch (usually 30–90 days)
  3. Patch is released. CVE ID is published.
  4. Details become public (sometimes immediately, sometimes after a delay)
  5. Attackers start scanning for unpatched systems. Defenders scramble to apply the patch.

The gap between "patch released" and "attackers exploiting it at scale" is typically hours to days. This is why patch management is so critical.
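Patch management ultimately reduces to knowing what you run and comparing it against what is known-vulnerable. A toy sketch of that comparison; the "feed", package names, and version numbers below are all invented, not real CVE data.

```python
# Illustrative only: vulnerability scanning at its simplest is a set lookup.
# KNOWN_VULNERABLE stands in for a real CVE feed; every entry is made up.

KNOWN_VULNERABLE = {
    # package -> set of affected versions (hypothetical, not real CVEs)
    "examplelib": {"1.0", "1.1"},
    "otherlib":   {"2.3"},
}

INSTALLED = {"examplelib": "1.1", "otherlib": "2.4"}

def find_unpatched(installed: dict) -> list:
    """Return packages whose installed version appears in the vulnerable set."""
    return [pkg for pkg, ver in installed.items()
            if ver in KNOWN_VULNERABLE.get(pkg, set())]

print(find_unpatched(INSTALLED))  # ['examplelib']
```

Real scanners add version-range matching, severity scores, and an inventory step, but the hours-to-days race described above starts the moment an entry lands in the feed.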

Zero-day vulnerabilities

A zero-day is a vulnerability being exploited in the wild before a patch exists — the defenders had "zero days" to prepare. Nation-state actors often use zero-days against high-value targets. They're dangerous because normal defences (patching) don't apply — you need compensating controls like network segmentation, monitoring, and restricted privileges.

Log4Shell (CVE-2021-44228): A critical vulnerability in the widely-used Java logging library Log4j, disclosed December 2021. Rated 10.0/10.0 on CVSS. Exploitation was trivial — a single log entry could run attacker code. Because Log4j was used in tens of thousands of applications, virtually every large organisation spent weeks scrambling to patch. Nation-state actors and cybercriminals were exploiting it within hours of disclosure. The ACSC issued emergency guidance. This is the archetypal supply-chain vulnerability: no one made a mistake in their own code, but their software depended on an affected library.

11.6 OWASP Top 10 — Web App Vulnerabilities

The Open Worldwide Application Security Project (OWASP) publishes a Top 10 list of the most common critical vulnerabilities in web applications, updated every few years. You don't need to memorise all 10, but know the framework exists and a few examples:

Category | Plain-English description
Broken Access Control | Users can access things they shouldn't — e.g., changing a URL to see someone else's profile. An authorisation failure.
Cryptographic Failures | Data isn't encrypted properly or at all — e.g., passwords stored in plaintext, data sent over HTTP.
Injection | User input is treated as code — e.g., SQL injection (Chapter 10), command injection, cross-site scripting.
Security Misconfiguration | Default credentials, overly permissive settings, unnecessary features enabled.
Vulnerable Components | Using outdated libraries with known CVEs (Log4Shell territory).
Identification and Authentication Failures | Weak password policies, no MFA, broken session management — authentication failures.
HOW THIS RELATES TO CIA AND AAA: Most OWASP Top 10 categories map directly to AuthN/AuthZ failures (from Chapter 8). "Broken Access Control" = authorisation failure. "Authentication Failures" = authentication failure. "Injection" typically violates Confidentiality and Integrity. The frameworks all weave together — that's the sign they're actually useful.
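The Injection category is the easiest to show in code. A minimal sketch using Python's standard sqlite3 module; the table and data are invented, but the unsafe/safe contrast is the standard parameterised-query fix.

```python
# Sketch: why "user input treated as code" is the injection problem,
# and how parameterised queries treat it as data instead.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")
conn.execute("INSERT INTO users VALUES ('bob', 'user')")

user_input = "alice' OR '1'='1"  # classic injection payload

# VULNERABLE: string-built SQL; the payload becomes part of the query
rows_bad = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()  # the OR '1'='1' clause matches every row

# SAFE: parameterised query; the payload is just a (non-matching) string
rows_good = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(rows_bad), len(rows_good))  # 2 0
```

The fix costs nothing at runtime; the difference is purely whether the database is ever asked to parse attacker-controlled text as SQL.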

11.7 Vulnerability Management — The Discipline

Vulnerability management is the ongoing process of finding, prioritising, and fixing vulnerabilities. It's not a one-off project — it's a continuous cycle.

The Vulnerability Management Cycle:

  1. DISCOVER — scans, audits, pen tests, CVE feeds
  2. ASSESS — likelihood × impact; prioritise by risk
  3. REMEDIATE — patch, configure, or apply a compensating control
  4. VERIFY — re-scan to confirm the fix worked

...repeat forever — new vulns appear every day. This cycle runs continuously in mature organisations. Not fixing is accepted risk; the decision is deliberate, not accidental.
Mature security teams run this cycle weekly or monthly. Consumer software (Windows, Mac) runs a similar cycle for users via automatic updates.
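The assess/remediate/verify steps of the cycle can be sketched over a toy list of findings. The finding IDs, scores, and "fix by flagging" behaviour are invented for illustration; a real run would call scanners and ticketing systems.

```python
# Hedged sketch of one pass through the cycle. Step 1 (DISCOVER) is assumed
# to have produced this list; everything in it is made up.
findings = [
    {"id": "VULN-1", "likelihood": 3, "impact": 3, "fixed": False},
    {"id": "VULN-2", "likelihood": 1, "impact": 2, "fixed": False},
]

# 2. ASSESS: prioritise by likelihood x impact, highest first
findings.sort(key=lambda f: f["likelihood"] * f["impact"], reverse=True)

# 3. REMEDIATE: work the list in priority order (here, just flag as fixed)
for f in findings:
    f["fixed"] = True

# 4. VERIFY: re-check that every finding is resolved before closing the pass
assert all(f["fixed"] for f in findings)
print([f["id"] for f in findings])  # ['VULN-1', 'VULN-2']
```

The point of the sort is the chapter's point: remediation order comes from risk, not from discovery order.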

11.8 Risk Treatment — Your Four Options

Once a risk is identified and assessed, you have four options. Most organisations mix them:

Option | What it means | When to use
Mitigate | Apply controls to reduce likelihood or impact | Most common — default for meaningful risks
Transfer | Shift the risk to someone else — insurance, outsourcing, vendor contracts | When another party is better placed to handle it or when financial protection is needed
Avoid | Stop doing the activity that creates the risk | When the risk is too high and the activity isn't critical — e.g., don't store card numbers at all, use a payment provider instead
Accept | Acknowledge the risk and do nothing (consciously) | When cost of control exceeds potential impact — common for low-impact risks. Must be documented.
TRAP: "Accept" is a legitimate option, but it must be a deliberate decision, documented, and approved by someone with authority. "We didn't do anything" because nobody thought about it is NOT acceptance — it's negligence, and under the Privacy Act and NDB scheme, that can constitute a failure to take reasonable steps.
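The four options can be caricatured as a simple decision rule. This is only a sketch: the thresholds and the `choose_treatment` name are invented, and real decisions weigh far more than two numbers.

```python
# Hedged sketch: picking a treatment from a risk score (1-9, as in a 3x3
# matrix), a rough control cost, and a rough potential loss. All invented.

def choose_treatment(risk: int, control_cost: float, potential_loss: float,
                     activity_essential: bool) -> str:
    if risk >= 6 and not activity_essential:
        return "avoid"      # stop doing the risky thing entirely
    if control_cost > potential_loss:
        return "accept"     # control costs more than the loss: document and approve
    if control_cost > 0.5 * potential_loss:
        return "transfer"   # insurance or outsourcing may be cheaper
    return "mitigate"       # the default for meaningful risks

# A critical risk on an essential activity with a cheap fix -> mitigate
print(choose_treatment(risk=9, control_cost=1_000,
                       potential_loss=50_000, activity_essential=True))
```

Note that "accept" only appears as an explicit return value: the sketch has no path where a risk silently falls through undecided, which is exactly the trap described above.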

11.9 Putting It Together

The full pipeline, from asset to treatment:

  1. Identify assets — what do we need to protect?
  2. Identify vulnerabilities — technical, human, process
  3. Identify threats — who would exploit these and why? (Chapter 9's actors)
  4. Assess risks — Likelihood × Impact for each combination
  5. Prioritise — highest risks first
  6. Decide treatment — mitigate, transfer, avoid, or accept
  7. Apply controls — matched to the chosen treatment
  8. Monitor and verify — did the controls work? Are new risks emerging?
THE EXAM-READY STRUCTURE: Any scenario asking you to "assess risks and propose controls" should follow this order. Identify assets first, then vulns, then threats, then assess, then prioritise, then treat. A response that just jumps to "install a firewall" skips the reasoning that earns marks.
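The pipeline's assess/prioritise/treat steps can be walked through on a toy risk register. The assets, vulnerabilities, and the mitigate-vs-accept threshold below are invented examples, not a template.

```python
# Hedged sketch of steps 4-6 of the pipeline over an invented register.
register = [
    # (asset, vulnerability, threat, likelihood, impact) -- all made up
    ("customer database", "outdated software", "cybercriminals", 3, 3),
    ("staff inboxes", "phishing susceptibility", "scammers", 3, 2),
    ("archive server", "weak password", "opportunists", 1, 2),
]

# Steps 4-5. Assess and prioritise: Likelihood x Impact, highest first
assessed = sorted(register, key=lambda r: r[3] * r[4], reverse=True)

# Step 6. Decide treatment: mitigate meaningful risks, accept (documented)
# the small ones. The threshold of 4 is an arbitrary illustration.
for asset, vuln, threat, lk, im in assessed:
    plan = "mitigate" if lk * im >= 4 else "accept (documented)"
    print(f"{asset}: {vuln} (score {lk * im}) -> {plan}")
```

Even this toy version enforces the exam-ready order: no control is named until the asset, vulnerability, threat, and score are on the table.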

11.10 Quiz Time

A small library has a router that hasn't been updated in 5 years. The vulnerability is real but obscure. Is this a critical risk?
Probably not critical, but meaningful. Likelihood: moderate — the router is internet-exposed, so automated scans will eventually find it, and older router CVEs often have public exploits. Impact: moderate — a compromised router enables MITM and network-wide attack, but library assets are limited (no payment data, no sensitive PII beyond library accounts). Risk rating: probably High (not Critical, but action should be prioritised). Treatment: firmware update (mitigate), or replace (avoid continued use). Taking no action = accepting a risk that's only high because of an easy-to-fix technical issue, which would be hard to defend if exploited.
Explain the difference between "threat" and "vulnerability" using an unlocked front door as an analogy.
The unlocked door is the vulnerability — a weakness in your home's security. The burglar is the threat — someone who might exploit it. The risk depends on both: a house in a quiet rural area (low threat likelihood) might have low risk even with an unlocked door; the same door on an inner-city street (high threat likelihood) is high risk. Mitigation: lock the door (reduces likelihood); have insurance (transfers the impact); move to a safer area (avoids the threat); or accept the risk (if contents are low-value and you never leave the house).
A company does annual vulnerability scanning. Why might that not be enough?
Because new vulnerabilities are disclosed every day. A CVE published the day after your annual scan could go unaddressed for 364 days. Between scans, your systems are invisible to your own security team. Good practice: continuous or at least monthly scanning, plus subscribing to vendor security alerts so critical patches are applied within days of release. Annual scanning might satisfy a compliance checkbox but doesn't match the speed of modern threats.
A cloud storage bucket was set to "public" accidentally, exposing customer data. Classify the vulnerability.
This is a technical vulnerability (security misconfiguration, specifically) with strong process and human components:
— Technical: the bucket is publicly readable when it shouldn't be.
— Process: no change-control review caught the setting; no automated scanning detected public buckets; no least-privilege default.
— Human: whoever deployed it either didn't know the setting or misjudged it.
This is exactly how many real data breaches happen — Accenture, Verizon, Microsoft, and countless others have had "cloud bucket left public" incidents. Impact: Confidentiality breach (likely NDB-reportable if personal info is exposed). Treatment: immediate remediation (private), review of other buckets, automated policies preventing public buckets, training.
โ† Previous
10. Attacks & How They Work