1.1
Compare and contrast different types of social engineering
techniques.
Phishing: Communication designed to trick a person
into revealing sensitive information to the attacker, often by asking
them to visit a fraudulent website.
Smishing: SMS phishing. Users are asked (via SMS)
to click a link, call a number, or contact an email address, which then
requests private data from the user.
Vishing: Voice phishing. Scam calls, sometimes
automated/recorded, which make false claims in order to gain access to
private information such as credit card numbers.
Spam: Irrelevant and unsolicited messages sent to a
person.
Spam Over Instant Messaging (SPIM): Spam delivered
over instant messaging platforms rather than email.
Spear phishing: Phishing which targets a specific
organisation or individual.
Dumpster diving: Retrieving discarded materials,
such as documents or hardware, and using the information they contain to
plan or stage an attack.
Shoulder surfing: Looking over someone's shoulder,
without them knowing, to obtain private information.
Pharming: The redirection of traffic from a
legitimate website to a fraudulent one.
Tailgating: Following a person into a restricted
area.
Eliciting information: The use of casual
conversation in order to extract information from people.
Whaling: Spear phishing attacks directed
specifically at senior executives or other high-profile targets.
Prepending: Adding characters to the start of a URL
(e.g. https://www.ggoogle.com/) to make a fraudulent address appear
legitimate if not scrutinised closely.
Identity fraud: Impersonation, or a person
pretending to be someone they are not.
Invoice scams: Sending invoices for work that has
not been done.
Credential harvesting: A cyberattack with the
purpose of gaining access to stored login credentials.
Reconnaissance: Preliminary research.
Hoax: A deception that seems like it could be real,
but isn't. For example, a fake email from a bank, or a fake voicemail
from a government department.
Impersonation: An attacker pretending to be someone
they are not.
Watering hole attack: Compromising a trusted
website or service (the watering hole), in order to exploit all who
visit or use the website or service.
Typosquatting: Similar to prepending. The
registration of domain names similar to well-known domain names, but
one character off (e.g. https://www.bingg.com/), in the hope that a
user will mistype the web address and not notice they have done so.
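One way to catch typosquatted domains is to compare them against a list of trusted domains using string similarity. The sketch below uses Python's standard-library difflib; the trusted-domain list and similarity threshold are assumptions invented for this example.

```python
# Illustrative sketch: flag lookalike domains using string similarity.
# The trusted-domain list and threshold are assumptions for this example.
from difflib import SequenceMatcher

TRUSTED = ["bing.com", "google.com", "paypal.com"]

def looks_like_typosquat(domain, threshold=0.85):
    """Return True if `domain` is suspiciously similar to (but not equal
    to) a trusted domain."""
    for trusted in TRUSTED:
        if domain == trusted:
            return False  # exact match: legitimate
        if SequenceMatcher(None, domain, trusted).ratio() >= threshold:
            return True   # one-character-off lookalike
    return False

print(looks_like_typosquat("bingg.com"))  # → True (lookalike of bing.com)
print(looks_like_typosquat("bing.com"))   # → False (exact trusted domain)
```

Real brand-protection tooling uses far richer checks (homoglyphs, keyboard-adjacency models, newly registered domain feeds), but the edit-distance idea is the core of it.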
Pretexting: The establishment of a false story or
context in order to convince a person to provide information.
Influence campaigns: Campaigns which attempt to
sway public opinion on public issues. Often done through the coordinated
usage of large numbers of social media profiles.
Social Engineering Principles
Authority: The attacker pretending to be a figure
of authority (i.e. employer, law enforcement).
Intimidation: The use of threats to get someone to
perform an action.
Consensus: Socially pressuring someone into
performing an action by stating that everyone or many others have agreed
that the action is the right thing to do.
Scarcity: Convincing someone to act immediately
by falsely claiming the existence of a limited or temporary opportunity
(e.g. you can have a million dollars if you visit this link in the next
15 minutes).
Familiarity: Establishing a friendship, or
trustworthiness, with a person, to get them to do what you want.
Trust: The attacker establishing trust with the
victim by saying, for example, that they are from the IT
department.
Urgency: Similar to scarcity, convincing someone to
act immediately by imposing time pressure.
1.2
Given a scenario, analyse potential indicators to determine the type of
attack.
Malware
Ransomware: Malware that silently encrypts files
and demands a ransom before they are decrypted.
Trojans: A program in disguise. It claims to do one
thing, yet in fact does another.
Worms: Malware that has the ability to
self-reproduce and spread through and across networks.
Potentially unwanted programs (PUPs): A program
that may be unwanted, despite a user consenting to download and install
it.
Fileless virus: Malware that uses legitimate built-in
tools, such as shell scripts or PowerShell, to execute an attack without
writing an executable to disk.
Command and control server: A computer controlled
by an attacker which is used to send commands to systems compromised by
malware and receive stolen data.
Bots: A computer program that runs in an automated
fashion; compromised machines running malicious bots can be grouped into
a botnet under an attacker's control.
Cryptomalware: Malware that mines cryptocurrency on
a user's device without their knowledge.
Logic bombs: Malware that sits idly, and activates
only once specific conditions are met.
Spyware: Malware that gathers personal information
and forwards it to a malicious actor.
Keyloggers: A program that records all keystrokes
on a device.
Remote access trojan (RAT): Malware that provides
an intruder with remote access to a computer.
Rootkit: Malware that allows an unauthorised user to
remotely gain root access to a computer without detection.
Backdoor: An undocumented way of gaining access to
a computer system.
Password attacks
Spraying: Trying a few commonly used passwords against a large number of user accounts, avoiding the account lockouts that repeated guesses against a single account would trigger.
Dictionary: An attempt to access an account by
systematically entering every word in a dictionary as a password.
Brute force: The process of submitting many
passwords or passphrases in the hopes of eventually guessing correctly.
Offline: Not rate limited. Requires password
hash.
Online: Rate limited and subject to other security
mechanisms.
Rainbow table: A precomputed table that maps hash
outputs back to plaintext passwords for a specific hashing algorithm,
allowing password hashes to be reversed without hashing every candidate
at attack time.
Plaintext/unencrypted: An attack that intercepts or obtains passwords transmitted or stored without encryption, for example via packet sniffing, a MITM attack, or breaching a database that stores passwords in plaintext.
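The offline attacks above can be sketched in a few lines. Below is a minimal dictionary attack against an unsalted SHA-256 password hash; the wordlist and the "stolen" hash are invented for this example.

```python
# Minimal sketch of an offline dictionary attack against an unsalted
# SHA-256 password hash. The wordlist and "stolen" hash are invented here.
import hashlib

def sha256_hex(password):
    return hashlib.sha256(password.encode()).hexdigest()

wordlist = ["letmein", "password", "hunter2", "qwerty"]
target_hash = sha256_hex("hunter2")  # stands in for a hash from a breach dump

def dictionary_attack(target, words):
    # Offline attack: no rate limiting, so candidates can be hashed as
    # fast as the hardware allows.
    for candidate in words:
        if sha256_hex(candidate) == target:
            return candidate
    return None

print(dictionary_attack(target_hash, wordlist))  # → hunter2
```

This is also why salting matters: a per-user salt forces the attacker to redo the whole wordlist per hash and defeats precomputed rainbow tables.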
Physical attacks
Malicious Universal Serial Bus (USB) cable: A USB
cable that presents itself to a computer as an HID (human interface
device, such as a keyboard), and is therefore able to start executing
commands once connected.
Malicious flash drive: The flash drive equivalent
of a malicious USB cable.
Card cloning: The creation of a physical card from
stolen credit card details.
Skimming: Copying credit card information, usually
done by adding specialised hardware to credit card readers/EFTPOS machines/ATMs.
Adversarial artificial intelligence (AI)
Tainted training data for machine learning (ML):
Training data that has been deliberately poisoned with harmful or
misleading samples so that the resulting model misbehaves.
Cloud based vs on premises attacks: Attacks aimed at services hosted in the cloud versus those hosted on premises require different approaches. On-premises deployments give you full control (and full responsibility), whereas cloud providers generally provide robust,
large-scale security.
Cryptographic attacks
Birthday: Attempts to find a hash collision in
accordance with the birthday problem.
Collision: An attempt to find two input strings of
a hash function that produce the same result.
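The birthday bound behind both attacks can be computed directly. This sketch evaluates the probability that at least two of n random values drawn from a space of size N coincide:

```python
# Sketch of the birthday problem: the probability that at least two of n
# random values drawn from a space of size N collide.
def collision_probability(n, space):
    p_unique = 1.0
    for i in range(n):
        p_unique *= (space - i) / space
    return 1 - p_unique

# Classic result: with 23 people and 365 possible birthdays, the chance
# of a shared birthday already exceeds 50%.
print(round(collision_probability(23, 365), 3))  # → 0.507

# The same maths is why an m-bit hash yields a collision after roughly
# 2**(m/2) attempts, far fewer than the 2**m needed to hit a chosen value.
```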
Downgrade: An attack in which the attacker forces a
network channel to switch to an unprotected or less secure data
transmission standard.
1.3
Given a scenario, analyse potential indicators associated with
application attacks.
Privilege escalation: Gaining elevated access to a
system or resources via a bug, design flaw, or configuration
oversight.
Cross site scripting: Cross-Site Scripting (XSS) is a security vulnerability that occurs when an attacker injects malicious scripts into web applications viewed by other users. This exploit allows attackers to execute scripts in the victim's web browser, potentially stealing sensitive information, manipulating user sessions, defacing websites, or spreading malware.
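The standard defence against XSS is output encoding. The sketch below uses Python's standard-library html module; the page template and payload are invented for illustration.

```python
# Sketch of an XSS sink and the standard fix: HTML-escape user input
# before rendering it into a page. The template and payload are invented.
import html

user_comment = '<script>steal(document.cookie)</script>'

unsafe_page = f"<p>{user_comment}</p>"               # script would execute
safe_page   = f"<p>{html.escape(user_comment)}</p>"  # rendered as inert text

print(unsafe_page)
print(safe_page)  # → <p>&lt;script&gt;steal(document.cookie)&lt;/script&gt;</p>
```

Modern template engines escape by default; XSS typically appears where that escaping is bypassed or where input lands in a context (attribute, URL, script block) that needs different encoding.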
Injections
SQL: The injection of SQL commands via the frontend
of a website.
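A minimal demonstration with Python's built-in sqlite3 shows both the attack and the fix (parameterised queries). The table and credentials are invented for this example.

```python
# Sketch of SQL injection using Python's built-in sqlite3. The table and
# credentials are invented for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # DANGEROUS: user input is pasted straight into the SQL string.
    query = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
    return db.execute(query).fetchall()

def login_safe(name, password):
    # Parameterised query: input is treated as data, never as SQL.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return db.execute(query, (name, password)).fetchall()

# Classic payload turns the WHERE clause into a tautology:
print(login_vulnerable("alice", "' OR '1'='1"))  # → [('alice', 's3cret')]
print(login_safe("alice", "' OR '1'='1"))        # → []
```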
DLL: By altering a DLL file, arbitrary code
execution capabilities can be acquired.
LDAP: LDAP is used as an authentication database.
By injecting a malicious request, authentication details can be made
available.
XML: Sending an XML file that has been tampered
with or maliciously crafted to a device or program that has not been
secured properly.
Pointer/object dereference: If an attacker can make
an application dereference a null pointer (pointing to a section of
memory where nothing exists rather than to the application's data), the
application typically crashes, resulting in a denial of service.
Directory traversal: An attack that uses input such
as ../ sequences to access files and directories outside the directory
the application intended to expose.
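The usual server-side defence is to resolve the requested path and verify it stays inside the intended base directory. This is a sketch only; the base directory and paths are invented, and real servers layer further checks (symlink handling, allow-lists) on top.

```python
# Sketch of a directory traversal check: normalise the requested path and
# confirm it stays inside the intended base directory. Paths are invented.
import os

BASE_DIR = "/var/www/files"

def is_safe_path(requested):
    # Resolve ".." components in the joined path before comparing.
    full = os.path.normpath(os.path.join(BASE_DIR, requested))
    return os.path.commonpath([BASE_DIR, full]) == BASE_DIR

print(is_safe_path("reports/2023.pdf"))  # → True
print(is_safe_path("../../etc/passwd"))  # → False
```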
Buffer overflows: Buffers are memory storage
regions that temporarily hold data while it is being transferred from
one location to another. A buffer overflow (or buffer overrun) occurs
when the volume of data exceeds the storage capacity of the memory
buffer. As a result, the program attempting to write the data to the
buffer overwrites adjacent memory locations. If attackers know the
memory layout of a program, they can intentionally feed input that the
buffer cannot store, and overwrite areas that hold executable code,
replacing it with their own code.
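Python itself is memory-safe, so the overrun can only be modelled, not reproduced. The sketch below represents a C-style stack frame as a flat list: an 8-byte buffer sits next to a saved return address, and an unchecked copy of 9 bytes clobbers it. The memory layout and values are invented for illustration.

```python
# Simulated buffer overflow: Python is memory-safe, so this models a
# C-style stack frame as a flat list. Layout and values are invented.
def write_to_buffer(memory, start, size, data):
    # BUG: copies len(data) bytes with no bounds check against `size`.
    for i, byte in enumerate(data):
        memory[start + i] = byte

# "Stack": an 8-byte buffer followed by a saved return address slot.
stack = ["."] * 8 + ["RET_ADDR"]
write_to_buffer(stack, start=0, size=8, data=list("AAAAAAAAA"))  # 9 bytes

print(stack[8])  # → A  (the return-address slot was overwritten)
```

In a real exploit the attacker overwrites that slot with the address of their own code rather than a filler byte; bounds-checked copies (or memory-safe languages) prevent the overrun entirely.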
Race conditions: A race condition is an undesirable
situation that occurs when a device or system attempts to perform two or
more operations at the same time, but because of the nature of the
device or system, the operations must be completed in the proper sequence to
be done correctly.
Time of check/time of use:
A Time of Check/Time of Use (TOCTOU) race condition is a specific type of race condition in software that arises when there's a time gap between the validation of a resource's status (Time of Check) and its subsequent use (Time of Use). This time gap creates a window of opportunity for an attacker to manipulate or change the resource after it has been checked but before it is used.
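The race can be simulated deterministically by performing the attacker's swap inside the check-to-use window. The filenames here are invented; in a real attack the change happens concurrently, often replacing the file with a symlink to a sensitive target.

```python
# Deterministic simulation of a TOCTOU race: the file is removed in the
# window between the existence check and the open. Filenames are invented.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "report.txt")
with open(path, "w") as f:
    f.write("legitimate data")

# --- Time of check ---
assert os.path.exists(path)

# --- Attacker acts inside the race window ---
os.remove(path)  # (a real attacker might swap in a symlink instead)

# --- Time of use: the earlier check no longer holds ---
try:
    _ = open(path).read()
    outcome = "read ok"
except FileNotFoundError:
    outcome = "race lost: file changed between check and use"

print(outcome)
```

The robust pattern is to drop the separate check entirely: open the file once, handle the error, and perform all subsequent operations on the open handle rather than on the path.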
Replay attack
Session replays: A replay attack is a form of
network attack in which valid data transmission is maliciously or
fraudulently repeated or delayed. This is carried out either by the
originator or by an adversary who intercepts the data and re-transmits
it.
Integer overflow: If a program performs a
calculation and the true answer is larger than the available space, it
may result in an integer overflow. These integer overflows can cause the
program to use incorrect numbers and respond in unintended ways, which
can then be exploited by attackers.
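Python integers are arbitrary precision, so the wraparound a fixed-width C integer exhibits has to be emulated. The helper below models a 32-bit signed int:

```python
# Sketch of 32-bit signed integer overflow. Python ints are arbitrary
# precision, so we emulate the wraparound a C int32_t would exhibit.
def to_int32(n):
    n &= 0xFFFFFFFF  # keep only the low 32 bits
    return n - 0x100000000 if n & 0x80000000 else n

INT32_MAX = 2**31 - 1  # 2147483647

print(to_int32(INT32_MAX + 1))   # → -2147483648 (wraps negative)
print(to_int32(1000 * 3000000))  # → -1294967296 (silent wrong answer)
```

The danger is that the wrapped value is then used for something security-relevant, e.g. a buffer size calculation that comes out small, enabling a buffer overflow.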
Request forgeries
Server side: A web security vulnerability that
allows an attacker to induce the server-side application to make
requests to an unintended location.
Cross site:
Cross-Site Request Forgery (CSRF) is a security vulnerability that takes advantage of the trust relationship between a user's browser and a particular website. It exploits the fact that a website trusts that actions initiated by the user's browser are legitimate and intended by the user.
API attacks: An attack that targets an API.
Resource exhaustion: A type of attack that uses up
the available resources on a device so that an application or service
is no longer accessible to others.
Memory leak: Memory reserved for an application is
never returned back to the system and the application or OS eventually
crashes.
SSL stripping: An attack in which HTTPS
communication is downgraded to HTTP, usually via a MITM attack.
Driver manipulation
Shimming: A shim is something you would use to fit
into the gap that’s created between two different objects. Operating
systems contain shims, such as the Windows compatibility layer, and
these shims use a shim cache. This cache can be tampered with in
order to introduce malware.
Refactoring: Rewriting or restructuring malware (for example, a malicious driver) so that it behaves the same but no longer matches known signatures, evading detection.
Pass the hash: A replay attack in which the hash of
a password, sent for authentication across a network, is captured and
replayed at a later time.
1.4
Given a scenario, analyse potential indicators associated with network
attacks.
Wireless
Evil twin: An evil twin is a fraudulent Wi-Fi
access point that appears to be legitimate but is set up to eavesdrop on
wireless communications.
Rogue access point: A rogue access point is a
wireless access point that has been installed on a secure network
without explicit authorization from a local network administrator,
whether added by a well-meaning employee or by a malicious
attacker.
Bluesnarfing:
In a bluesnarfing attack, the attacker exploits vulnerabilities in Bluetooth-enabled devices, typically mobile phones, smartphones, or other portable devices. By exploiting security flaws in the Bluetooth protocol or the device's software, the attacker gains unauthorized access to the device's data, contacts, messages, emails, photos, and sometimes even the device's control.
Bluejacking: Bluejacking is the sending of
unsolicited messages over Bluetooth to Bluetooth-enabled devices.
Disassociation: A type of DoS attack in which the
attacker breaks the wireless connection between the victim device and
the access point.
Jamming: An attack in which an attacker
intentionally transmits interfering signals on a wireless
network.
Radio frequency identification: A MITM attack
against an RFID system uses a hardware device to capture and decode the
RFID signal between the victim’s card and a card reader. The malicious
device then decodes the information and transmits it to the attacker so
they can replay the code and gain access to the building.
Near-field communication (NFC): An eavesdropping
type of attack where communication between two NFC devices can be
intercepted and read.
Initialisation vector (IV): An initialisation
vector is like a dynamic salt for WLAN packets. A poorly implemented IV
can make an encryption standard vulnerable (e.g. WEP).
Man in the middle attack (on path attack): A man in
the middle (MITM) attack is a general term for when a perpetrator
positions themselves in a conversation between a user and an
application.
Layer 2 attacks
ARP poisoning: ARP Poisoning consists of abusing
the weaknesses in ARP to corrupt the MAC-to-IP mappings of other devices
on the network.
MAC flooding: An attacker floods a switch with frames carrying fake source MAC addresses, overloading the switch's MAC address table and causing it to enter a fail-open or fail-closed mode depending on its configuration.
MAC cloning: A MAC spoofing/cloning attack is where
the intruder sniffs the network for valid MAC addresses and attempts to
act as one of the valid MAC addresses.
Domain name system (DNS)
Domain hijacking: Domain hijacking or domain theft
is the act of changing the registration of a domain name without the
permission of its original registrant, or by abuse of privileges on
domain hosting and registrar software systems.
DNS poisoning: DNS poisoning is the act of entering
false information into a DNS cache, so that DNS queries return an
incorrect response and users are directed to the wrong websites.
URL redirection:
In the context of the DNS (Domain Name System), a URL redirection attack refers to a malicious technique where an attacker manipulates the DNS resolution process to redirect users from legitimate websites to fraudulent or malicious websites. This attack involves compromising the DNS records associated with a legitimate domain name.
Distributed denial of service (DDoS): A distributed
denial-of-service (DDoS) attack is a malicious attempt to disrupt the
normal traffic of a targeted server, service or network by overwhelming
the target or its surrounding infrastructure with a flood of Internet
traffic.
1.5
Explain different threat actors, vectors, and intelligence sources.
Actors and threats
Advanced persistent threat (APT): An advanced
persistent threat (APT) is a broad term used to describe an attack
campaign in which an intruder, or team of intruders, establishes an
illicit, long-term presence on a network in order to mine highly
sensitive data.
Insider threats: An insider threat is a security
risk that originates from within the targeted organisation. It typically
involves a current or former employee or business associate who has
access to sensitive information or privileged accounts within the
network of an organisation, and who misuses this access.
State actors: Nation-states are frequently the most
sophisticated threat actors, with dedicated resources and personnel, and
extensive planning and coordination. Some nation-states have operational
relationships with private sector entities and organised criminals.
Hacktivists: In Internet activism, hacktivism is
the use of computer-based techniques such as hacking as a form of civil
disobedience to promote a political agenda or social change.
Script kiddies: A script kiddie, skiddie, or skid
is a relatively unskilled individual who uses scripts or programs, such
as a web shell, developed by others to attack computer systems and
networks and deface websites.
Criminal syndicates: Cyber crime organisations are
groups of hackers, programmers and other tech bandits who combine their
skills and resources to commit major crimes that might not otherwise be
possible.
Hackers
Authorised: These are people who are hired to examine
a network, try to gain access, find the weak points, and then help
resolve those weak points to make the network even stronger.
Unauthorised: At the other end of the spectrum is a
hacker who is simply malicious. They’re looking to cause problems,
gain access to your data, and cause as much mayhem as
possible.
Semi-authorised: Hackers who may be looking for
vulnerabilities, but don’t necessarily act on those vulnerabilities.
This is a hacker who is more of a researcher and trying to find access
to someone’s network without necessarily taking advantage of that
access.
Shadow IT: Shadow IT is the use of information
technology systems, devices, software, applications, and services
without explicit IT department approval.
Competitors: Commercial competitors.
Vectors
Direct access: If an attacker has direct access to
the hardware that is running an operating system, then they have a lot
of attack vectors available to them. They will find a way into that
operating system if they have physical access.
Wireless: Attackers can gain access to a network
by exploiting incorrectly configured wireless networks, or by
installing a rogue access point.
Email: Attacks via email can come through the
sending of phishing links, the sending of malicious email attachments,
or various social engineering methods.
Supply chain: There’s an entire supply chain
designed to provide you with products, and many different manufacturers
and entities are connected with that supply chain. Each one of those
steps along the way is an attack vector.
Social media: Social media provides a wealth of
information to potential attackers.
Removable media: A flash drive can be used to copy
sensitive information from a computer.
Cloud: Cloud services are generally public facing,
and therefore provide an opportunity for exploitation.
Threat intelligence sources
OSINT: Open-source intelligence is the collection
and analysis of data gathered from open sources to produce actionable
intelligence.
Closed/proprietary: Paid-access threat intelligence databases.
Vulnerability databases: Large databases that
compile information, coming from many different researchers. The
researchers will find a vulnerability, they’ll report that into the
Vulnerability Database, and then they will publish that database to
everyone.
Public/private information sharing centers:
Organisations such as Information Sharing and Analysis Centers (ISACs)
share threat intelligence; some are public, while others are private or
invite-only.
Dark web: There are a number of communication
channels available on the dark web, and these forums can also be a
valuable tool to use in your search for intelligence against the
attackers.
Automated Indicator Sharing (AIS): A way to
automate the process of moving threat information between organisations
over the internet.
Structured Threat Information eXpression (STIX): To
be able to transfer this data, there needs to be a standardized format
for these threats, and the standardized format is called STIX. This is a
Structured Threat Information eXpression, that includes information such
as motivations, abilities, capabilities, and response information.
Trusted Automated eXchange of Intelligence Information
(TAXII): In order to securely exchange this information, you
need some type of trusted transport, and that trusted transport is
TAXII. The TAXII standard is used to transfer STIX data between
organisations.
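STIX 2.x objects are exchanged as JSON, so consuming them amounts to parsing structured records. The object below is a simplified, hand-written example loosely modelled on a STIX 2.1 indicator, not pulled from any real feed, and real indicators carry more required fields (timestamps, valid_from, and so on).

```python
# Illustrative parsing of a simplified, STIX 2.1-style indicator object.
# The object below is hand-written for this example, not from a real feed.
import json

stix_json = """
{
  "type": "indicator",
  "spec_version": "2.1",
  "id": "indicator--d81f86b9-975b-4c0b-875b-000000000000",
  "name": "Known C2 server",
  "pattern": "[ipv4-addr:value = '203.0.113.7']",
  "pattern_type": "stix"
}
"""

indicator = json.loads(stix_json)
if indicator["type"] == "indicator":
    print(indicator["name"], "->", indicator["pattern"])
```

A TAXII client's job is then just transport: fetch batches of such objects from a collection endpoint over authenticated HTTPS and feed them into local defences.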
Threat maps: There are a number of threat maps that
you can view on the internet, that give you a perspective of different
types of attacks, and how often these attacks are occurring throughout
the day. These threat maps are often created from real-time data pulled
from many different sources.
File/code repositories: There are a number of file
or code repositories on the internet that can give you even more
intelligence about what to protect yourself against. Locations like
GitHub are sometimes used by the hackers, to be able to put together the
tools that they then use to attack your network.
Research sources
Vendor websites: If you’re interested in knowing
the threats associated with an operating system or an application, you
should start with the companies that wrote the operating system or the
application.
Vulnerability feeds: There are many different
resources for these threat feeds, for example, the US Department of
Homeland Security, the FBI, the SANS Internet Storm Center, VirusTotal
Intelligence, and other feeds as well.
Conferences: Researchers will present information
at these conferences that can help explain things that they found that
are new, trends that may be occurring in the industry or information
about the latest hacks. This is also a good place where you can learn
from people who’ve gone through these attacks. Very often there’s some
lessons that can be taken away and their stories can help give you ideas
of how to protect your network even better.
Academic journals: These are usually periodicals or
online resources that are written by industry experts. They usually
provide information about existing security technologies and evaluate
which types of security technologies may be better than others.
Request for comments (RFC): RFCs are a way to track
and formalize a set of standards that anyone on the internet can
use.
Social media: There is a wealth of security
information on social media. For example, the large hacker groups will
often put information on Twitter that will describe recent
vulnerabilities they’ve discovered or recent attacks that they’ve
completed. There’s also a number of resources on Twitter that can give
you details about new vulnerabilities that are discovered or new attacks
that may be occurring.
1.6
Explain the security concerns associated with various types of
vulnerabilities.
Cloud-based vs on-premises vulnerabilities:
Generally, cloud providers implement large-scale, effective security. An
advantage/disadvantage of on-premises deployments is that everything is
under your control and is your responsibility.
Zero-day: A zero-day is a computer-software
vulnerability previously unknown to those who should be interested in
its mitigation, like the vendor of the target software.
Weak configurations
Open permissions: Incorrectly configured hosts or
services can have their permissions set in a way that allows attackers
to gain access and exploit vulnerabilities.
Unsecure root accounts: Root accounts with insecure
passwords are an issue. Direct root login should typically be disabled.
Errors: Over-descriptive application error messages
can provide information to potential attackers that they would not
otherwise have.
Weak encryption: Data that has been encrypted
insufficiently is essentially unencrypted.
Unsecure protocols: Use protocols with encryption.
Don’t use outdated wireless protocols such as WEP or the original WPA.
Default settings: Default settings can be insecure.
Always change default usernames and passwords.
Open ports and services: Close ports and disable
services that aren’t in use.
Third party risks
Vendor management
System integration: Third party vendors
(i.e. contractors) may need to have access to internal systems and
information to be able to carry out their tasks. This poses a security
threat.
Lack of vendor support: We may rely on third party
vendors to provide services to us. These vendors may choose to stop
supporting their proprietary services, or the vendor itself may go out
of business, leaving the user of the service in a bad position.
Outsourced code development: This is the same as
using a third party vendor. You will need to have an isolated, shared
location in which code can be accessed by staff and the third party.
Outsourced code needs to be audited for security purposes.
Data storage: Stored data, especially data that is
remotely stored, needs to be secured and encrypted. Data transfers
should be over an encrypted protocol.
Improper or weak patch management: As new security
exploits are discovered, devices need their firmware and operating
systems patched, and applications need to be updated. If these patches
are not released by the manufacturer, or not applied for any reason,
the vulnerabilities remain exposed.
Legacy platforms: Hardware and software that has
reached its end of life will no longer receive security patches. It’s up
to the individual or organisation whether they want to put the extra
effort into manually maintaining a secure environment for these devices
or applications to run in.
Impacts
Data breaches/loss/exfiltration: Attackers can
steal or delete sensitive data. This can lead to issues such as identity
theft.
Identity theft: Becomes easy once a person's leaked
identity data is made available.
Financial: Bank accounts can be and have been hacked.
SWIFT can be and has been hacked. There are many more examples of
financial harm resulting from cybersecurity incidents.
Reputation: Companies that suffer breaches lose the
public's trust.
Availability loss: Cybersecurity attacks can cause
downtime for online services, resulting in other negative effects.
1.7
Summarise the techniques used in security assessments.
Threat hunting
Intelligence fusion: Combining threat intelligence
from many sources with the data an organisation's own security systems
collect. A successful threat hunting program is based on an
environment's data fertility: an organisation must first have enterprise
security systems in place collecting data, and the information gathered
from them provides valuable clues for threat hunters.
Threat feeds: Continuously updated streams of threat
intelligence data (indicators, attacker infrastructure, etc.) that can
direct hunting efforts.
Advisories and bulletins: Notices published by vendors
and security organisations describing newly discovered vulnerabilities
and threats.
Maneuver: Thinking like the attacker and anticipating
their movement through the network, so defences can be repositioned
before the attack progresses.
Vulnerability scans: Vulnerability scans are
designed to look at systems to see if potential vulnerabilities might
exist in an operating system, a network device, or an application,
rather than exploit those vulnerabilities.
False positives: A report of a vulnerability
existing on a device, when in fact it doesn’t.
False negatives: A report of a vulnerability not
existing on a device, when in fact it does.
Log reviews: The reviewing of log files to gain
information.
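A log review can be as simple as counting failed logins per source address to spot brute-force attempts. The log lines, their format, and the alert threshold below are all invented for this sketch.

```python
# Sketch of a simple log review: count failed logins per source IP to
# spot brute-force attempts. Log lines and threshold are invented.
from collections import Counter

log_lines = [
    "sshd: Failed password for root from 198.51.100.9",
    "sshd: Failed password for admin from 198.51.100.9",
    "sshd: Accepted password for alice from 192.0.2.10",
    "sshd: Failed password for root from 198.51.100.9",
]

# The source IP is the last whitespace-separated field of each line.
failures = Counter(
    line.rsplit(" ", 1)[-1] for line in log_lines if "Failed password" in line
)
suspects = [ip for ip, count in failures.items() if count >= 3]

print(suspects)  # → ['198.51.100.9']
```

SIEM platforms automate exactly this kind of aggregation and correlation across many log sources, in real time and at scale.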
Credentialed vs non-credentialed: A credentialed
scan would be a scan with the permission levels of a credentialed user,
whilst a non-credentialed scan would be a scan with the permission
levels of a guest or somebody external to the network.
Intrusive vs non-intrusive: Non-intrusive scans
simply identify a vulnerability and report on it so you can fix it.
Intrusive scans attempt to exploit a vulnerability when it is
found.
Application: Individual applications can be tested
for vulnerabilities.
Web application: Web applications can be tested for
vulnerabilities.
Network: It’s common to run a vulnerability scan on a
whole network (i.e. all devices on the network).
CVE/CVSS: Common Vulnerabilities and Exposures
(CVE) is a catalog of known security threats. The catalog is sponsored
by the United States Department of Homeland Security (DHS), and threats
are divided into two categories: vulnerabilities and exposures. The
Common Vulnerability Scoring System (CVSS) provides a numerical (0-10)
representation of the severity of an information security
vulnerability.
Configuration review: A review of a device's
configuration.
Syslog/SIEM: In computing, syslog is a standard for
message logging. It allows separation of the software that generates
messages, the system that stores them, and the software that reports and
analyses them. Security information and event management (SIEM) is a
field within the field of computer security, where software products and
services combine security information management and security event
management. They provide real-time analysis of security alerts generated
by applications and network hardware.
Review reports: SIEM software can create reports
which can then be reviewed.
Packet capture: Raw packet captures can be
ingested into a SIEM system to provide additional information.
User behaviour analysis: There are tools to
actively examine the way in which people use the network and devices
upon it.
Sentiment analysis: The examination of public
sentiment towards an organisation or company.
SOAR: SOAR stands for Security Orchestration
Automation and Response. The goal of SOAR is to take processes in
security that were manual or tedious and automate them so that all of it
is done at the speed of the computer. This typically relates to
automatically configuring security rules, permissions, or application
configurations in response to certain circumstances.
1.8 Explain
the techniques used in penetration testing.
Penetration testing: A penetration test, or pen
test, is an attempt to evaluate the security of an IT infrastructure by
safely trying to exploit vulnerabilities.
Known environment: A penetration test in which the
tester is familiar with the environment.
Unknown environment: A penetration test in which
the tester is unfamiliar with the environment.
Partially known environment: A penetration test in
which the tester is partially familiar with the environment.
Rules of engagement: Predefined rules defining the
scope and boundaries of the penetration test.
Lateral movement: Moving from device to device
within a network.
Privilege escalation: Privilege escalation can be
defined as an attack that involves gaining illicit access of elevated
rights, or privileges, beyond what is intended or entitled for a
user.
Persistence: Persistence is a technique used to
maintain a connection with target systems after interruptions that can
cut off their access. In this context, persistence includes access and
configuration to maintain the initial foothold of the systems.
Cleanup: Cleaning up after the penetration test.
This involves, for example, securely removing all executables, scripts,
and temporary files from a compromised system, and returning system
settings and application configuration parameters to their original
values.
Bug bounty: A reward offered to a person who
identifies an error or vulnerability in a computer program or
system.
Pivoting: Pivoting is a method of accessing a
machine that we have no way of accessing, through an intermediary. The
attacker compromises a visible host and then pivots using the
compromised host to attack other clients from within the network.
Passive and active reconnaissance: Passive
reconnaissance involves gathering data in a way that would not be
noticed by the victim, for example pulling information from social media
pages. Active reconnaissance involves directly interacting with the
network, or target environment, in order to gather information.
War flying: Using a drone to gather information
about WAPs.
War driving: Using a road vehicle to gather
information about WAPs.
Footprinting: Footprinting is the process of
gathering the blueprint of a particular system or a network and the
devices that are attached to the network under consideration.
White team: The meta team, which oversees the red
and the blue team.
Purple team: A team with members from the red and
blue team. Instead of competing with each other, they share information
about what they find on the network.
2.0 Architecture and Design
2.1
Explain the importance of security concepts in an enterprise
environment.
Configuration management:
Diagrams: Document the layout of the network by creating
and maintaining network diagrams.
Baseline configurations: A “standard” configuration
which can be checked against at will.
Standard naming conventions: Be consistent.
IP schema: Decide on a schema and stick to it.
Data sovereignty: Data sovereignty is the idea that
data are subject to the laws and governance structures of the nation
where they are collected.
Data protection:
DLP: Data loss prevention (DLP) is a part of a
company’s overall security strategy that focuses on detecting and
preventing the loss, leakage or misuse of data through breaches,
exfiltration transmissions and unauthorised use. A comprehensive DLP
solution provides the information security team with complete visibility
into all data on the network, including data in use, data in motion, and
data at rest.
Masking: Data masking or data obfuscation is the
process of modifying sensitive data in such a way that it is of no or
little value to unauthorised intruders while still being usable by
software or authorised personnel.
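As a minimal illustration (the function name and card number are hypothetical), masking a payment card number might keep only the last four digits readable while the rest is obscured:

```python
def mask_pan(pan: str, visible: int = 4) -> str:
    # Replace all but the last `visible` characters with asterisks
    return "*" * (len(pan) - visible) + pan[-visible:]

masked = mask_pan("4111111111111111")
```

The masked value is still usable for display or matching on the last four digits, but worthless to an intruder.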
Encryption: Encryption in cyber security is the
conversion of data from a readable format into an encoded format.
Encrypted data can only be read or processed after it’s been decrypted.
Encryption is the basic building block of data security.
At rest: Data on a storage device. Data at rest
should be encrypted.
In transit: Data that’s in the process of being
moved across a network. The data should be encrypted and transported via
an encrypted protocol.
Tokenisation: Tokenisation, when applied to data
security, is the process of substituting a sensitive data element with a
non-sensitive equivalent, referred to as a token, that has no intrinsic
or exploitable meaning or value.
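A toy sketch of the idea (class and method names are hypothetical): the token is random, so it carries no exploitable relationship to the original value, and only the vault can map it back:

```python
import secrets

class TokenVault:
    """Toy token vault: maps random tokens to real values (illustrative only)."""
    def __init__(self):
        self._vault = {}  # token -> sensitive value, held server-side only

    def tokenise(self, value: str) -> str:
        token = secrets.token_hex(8)  # random token, no mathematical link to the value
        self._vault[token] = value
        return token

    def detokenise(self, token: str) -> str:
        # Only a party with access to the vault can recover the original value
        return self._vault[token]
```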
Rights management: Digital rights management (DRM)
is the management of legal access to digital content.
Geographical considerations: Things to be
considered include the risk of environmental damage to hardware, and the
laws (particularly privacy related) of the region in which data is
stored.
Response and recover controls: Once an attack has
been identified, the two most important things to do are to document the
timeline of events, and limit the impact of the attack by isolating
resources.
SSL/TLS inspection: You can inspect the contents of
SSL/TLS packets if you have a valid certificate. This may be something
worth considering implementing, depending on your threat level.
Hashing: A hash function is any function that can
be used to map data of arbitrary size to fixed-size values. Hashing is
one-way - once data is hashed, the original data can’t be retrieved from
the hash.
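Both properties (arbitrary-size input, fixed-size output) are easy to see with Python's built-in hashlib:

```python
import hashlib

# A short input and a long input both map to a 256-bit (64 hex char) digest;
# the original data cannot be recovered from either digest
short_digest = hashlib.sha256(b"hello").hexdigest()
long_digest = hashlib.sha256(b"a much, much longer input" * 1000).hexdigest()
```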
API considerations: The security of API endpoints
should be highly prioritised. APIs can be compromised by MITM attacks,
weak authentication requirements, injection, and other techniques.
Site resiliency
Hot site: Hot sites are essentially mirrors of an
entire datacenter's infrastructure, with the environment live and ready
to be switched to at a moment's notice.
Cold site: A cold site is an empty space
(warehouse), without any server-related equipment installed. It provides
power and cooling.
Warm site: A warm site is a space with server
equipment installed, but data is not actively mirrored to it from the
production site. Data would need to be transferred to the warm site in
the event of an outage at the production site.
Deception and disruption
Honeypots: In computer terminology, a honeypot is a
computer security mechanism set to detect, deflect, or, in some manner,
counteract attempts at unauthorized use of information systems.
Generally, a honeypot consists of data (for example, in a network site)
that appears to be a legitimate part of the site which contains
information or resources of value to attackers. It is actually isolated,
monitored, and capable of blocking or analyzing the attackers. This is
similar to police sting operations, colloquially known as “baiting” a
suspect.
Honeyfiles: Honeyfiles are bait files intended for
hackers to access. The files reside on a file server, and the server
sends an alarm when a honeyfile is accessed.
Honeynets: A honeynet is a decoy network that
contains one or more honeypots.
Fake telemetry: Fake telemetry can be added to
malware to fool anti-malware ML algorithms.
DNS sinkhole: A DNS sinkhole, also known as a
sinkhole server, Internet sinkhole, or Blackhole DNS is a DNS server
that has been configured to hand out non-routable addresses for a
certain set of domain names. Computers that use the sinkhole fail to
access the real site.
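The behaviour can be sketched in a few lines (the blocklist domains and addresses below are hypothetical): blocked names get a non-routable answer, everything else is passed to the real resolver.

```python
SINKHOLE_IP = "0.0.0.0"                               # non-routable answer
BLOCKLIST = {"malware.example", "tracker.example"}    # hypothetical bad domains

def resolve(domain: str, upstream_lookup) -> str:
    # Hand out the sinkhole address for blocked names, else ask upstream
    if domain in BLOCKLIST:
        return SINKHOLE_IP
    return upstream_lookup(domain)
```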
2.2
Summarise virtualisation and cloud computing concepts
Cloud models:
IaaS: Infrastructure as a service is a cloud
computing service model by means of which computing resources are hosted
in a public, private, or hybrid cloud.
PaaS: Platform as a service (PaaS) is a category
of cloud computing services that allows customers to provision,
instantiate, run, and manage a modular bundle comprising a computing
platform and one or more applications, without the complexity of
building and maintaining the infrastructure typically associated with
developing and launching the application(s); and to allow developers to
create, develop, and package such software bundles.
SaaS: Software as a service (AKA service as a
software substitute) is a software licensing and delivery model in which
software is licensed on a subscription basis and is centrally
hosted.
XaaS: “Anything as a service” (XaaS) describes a
general category of services related to cloud computing and remote
access. It recognizes the vast number of products, tools, and
technologies that are now delivered to users as a service over the
internet.
Public: Public cloud is a cloud deployment model
where computing resources are owned and operated by a provider and
shared across multiple tenants via the Internet.
Community: Community cloud computing refers to a
shared cloud computing service environment that is targeted to a limited
set of organisations or employees.
Private: Managed Private Cloud refers to a
principle in software architecture where a single instance of the
software runs on a server, serves a single client organisation, and is
managed by a third party. The third-party provider is responsible for
providing the hardware for the server, and also for preliminary
maintenance.
Hybrid: A hybrid cloud is a computing environment
that combines an on-premises datacenter (also called a private cloud)
with a public cloud, allowing data and applications to be shared between
them.
Cloud service providers: Companies that provide
cloud computing services and resources.
MSP/MSSP: A Managed Service Provider (MSP) delivers
network, application, database and other general IT support and
services. A Managed Security Service Provider (MSSP) is exclusively
focused on providing cybersecurity services.
Fog computing: Fog computing is a decentralised
computing infrastructure in which data, compute, storage and
applications are located somewhere between the data source and the
cloud.
Edge computing: Edge computing is a distributed
computing paradigm that brings computation and data storage closer to
the sources of data. This is expected to improve response times and save
bandwidth. It is an architecture rather than a specific technology.
Thin client: In computer networking, a thin client
is a simple (low-performance) computer that has been optimised for
establishing a remote connection with a server-based computing
environment.
Containers: A lightweight, standalone, executable
package of software that includes everything needed to run an
application.
Microservices: A microservice architecture is an
architectural pattern that arranges an application as a collection of
loosely-coupled, fine-grained services, communicating through
lightweight protocols.
Infrastructure as code:
Software-defined networking (SDN): Software-defined
networking technology is an approach to network management that enables
dynamic, programmatically efficient network configuration in order to
improve network performance and monitoring, making it more like cloud
computing than traditional network management.
Software-defined visibility (SDV): A framework that
allows customers, security partners, managed service providers and
others to automate detection, reaction and response to threats and
programmatically adapt security policies to network changes.
Serverless: Serverless computing is a cloud
computing execution model in which the cloud provider allocates machine
resources on demand, taking care of the servers on behalf of their
customers. “Serverless” is a misnomer in the sense that servers are
still used by cloud service providers to execute code for
developers.
Services integration: Service Integration and
Management (SIAM) is an approach to managing multiple suppliers of
services (business services as well as information technology services)
and integrating them to provide a single business-facing IT
organisation. It aims at seamlessly integrating interdependent services
from various internal and external service providers into end-to-end
services in order to meet business requirements.
Resource policies: A resource access policy
specifies which users are allowed or denied access to a set of protected
resources.
Transit gateway: A transit gateway is a network
transit hub that you can use to interconnect your virtual private clouds
(VPCs) and on-premises networks.
Virtualisation:
VM sprawl avoidance: VM sprawl is a phenomenon that
occurs when the number of virtual machines (VMs) on a network reaches a
point where administrators can no longer manage them effectively.
Escape protection: Virtual machine escape is a
security exploit that enables a hacker/cracker to gain access to the
primary hypervisor and its created virtual machines.
2.3
Summarise secure application development, deployment, and automation
concepts.
Environment:
Development: This is where code is written and
software development takes place. This environment usually consists of a
server that is shared by several developers working together on the same
project.
Test: An environment which provides automated or
non-automated testing of new and/or changed code.
Staging: This environment seeks to mirror a
production environment as closely as possible. The purpose of this
environment is to test software on a near-production level but in a
non-production environment.
Production: The “live” environment, and the one
which end-users see.
Quality assurance (QA): Quality Assurance is a much
wider topic than Testing because it covers more than just the outputs of
software delivery (the end product); it also covers the inputs (how the
product is being developed), in order to improve the likelihood of a
positive outcome. QA is a proactive process that works out ways to
prevent possible bugs in the process of software development.
Provisioning and deprovisioning: Provisioning is
the process of making something available. For example, if you are
provisioning an application then you’re probably going to deploy a web
server and/or a database server. This comes with security
considerations. Deprovisioning is the diametric opposite of
provisioning.
Integrity measurement: Measuring the integrity of
the software: making sure that it does what it should, can be tested,
has security features, lacks security vulnerabilities, can be understood
and followed logically, and can be upgraded without introducing new
errors.
Secure coding techniques:
Normalisation: Ensuring that data structures and
formats are standardised.
Stored procedures: The storing of procedures, such
as database calls, on the server itself, rather than having the client
send the call (which can be vulnerable to injection).
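The injection risk mentioned above can be demonstrated with a parameterised query, which, like a stored procedure, keeps user input from being interpreted as SQL. A sketch using Python's built-in sqlite3 (table and data are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: concatenation lets the input rewrite the query, matching every row
bad = conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()

# Safe: the placeholder treats the input as a literal value, matching nothing
good = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
```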
Obfuscation/camouflage: Obfuscation is a way to
take something that normally is very easy to understand and make it so
that it is very difficult to understand (i.e. minified JavaScript).
Code reuse/dead code: Code reuse can be problematic
if the code being reused is not current and has security
vulnerabilities.
Server-side vs client-side execution and
validation: Client side execution/validation can be faster,
however server side is considered safer.
Memory management: If memory is not managed
correctly, buffer overflows etc. can lead to arbitrary code
execution.
Use of third-party libraries and SDKs: Involves
trusting the author(s) of the third-party libraries not to include
malicious code.
Data exposure: Data should be encrypted wherever
possible.
OWASP: The Open Web Application Security Project
(OWASP) is an online community that produces freely-available articles,
methodologies, documentation, tools, and technologies in the field of
web application security.
Software diversity: A method in which each compiled
application is compiled to a slightly different binary form, to enhance
security.
Automation/scripting:
Automated courses of action: Many problems can be
pre-planned for, as well as a set of automated responses to those
problems.
Continuous monitoring: The continuous monitoring
for a certain circumstance or event, such as a drive becoming full.
Continuous validation: Continuous validation is a
method that lets you constantly monitor new code, testing it against
criteria for functionality, security, and performance.
Continuous integration: Developers practicing
continuous integration merge their changes back to the main branch as
often as possible. The developer’s changes are validated by creating a
build and running automated tests against the build. By doing so, you
avoid integration challenges that can happen when waiting for release
day to merge changes into the release branch. Continuous integration
puts a great emphasis on testing automation to check that the
application is not broken whenever new commits are integrated into the
main branch.
Continuous delivery: Continuous delivery is an
extension of continuous integration since it automatically deploys all
code changes to a testing and/or production environment after the build
stage.
Continuous deployment: Continuous deployment goes
one step further than continuous delivery. With this practice, every
change that passes all stages of your production pipeline is released to
your customers. There’s no human intervention, and only a failed test
will prevent a new change to be deployed to production.
Elasticity: The ability of a computing system to
automatically provision and release resources to match momentary
demand. If the workload increases, more resources are allocated to the
system; conversely, resources are removed from the system when the
workload decreases.
Scalability: Scalability consists of the ability of
a system to be responsive as the demand (load) increases over time.
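Elasticity can be sketched as a toy autoscaling rule (function name, thresholds, and bounds are all hypothetical): provision just enough instances to cover the current load, within sensible limits.

```python
import math

def desired_instances(current_load: float, capacity_per_instance: float,
                      min_n: int = 1, max_n: int = 10) -> int:
    # Allocate just enough instances to cover the load, clamped to bounds
    needed = math.ceil(current_load / capacity_per_instance)
    return max(min_n, min(max_n, needed))
```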
Version control: Version control systems are
software tools that help software teams manage changes to source code
over time (i.e. Git, Subversion).
2.4
Summarise authentication and authorisation design concepts.
Authentication methods:
Directory services: A central database that stores
usernames, passwords, computers, printers, and other devices that might
be connected to the network. This database is distributed across
multiple devices, and those databases communicate with each other and
send replication data so that every database is always up to date with
the latest information. LDAP is a common protocol for querying
directory services.
Federation: With federated authentication, user
access and authentication are managed centrally. All user identities are
managed in one database called the user directory.
Attestation: Attestation is the mechanism in which
software verifies the authenticity and integrity of the hardware and
software of a device.
Technologies:
TOTP: Time-based one-time password (TOTP) is a
computer algorithm that generates a one-time password (OTP) that uses
the current time as a source of uniqueness. As an extension of the
HMAC-based one-time password algorithm (HOTP), it has been adopted as
IETF standard RFC 6238.
HOTP: HMAC-based one-time password (HOTP) is a
one-time password (OTP) algorithm based on HMAC. In cryptography, an HMAC
(sometimes expanded as either keyed-hash message authentication code or
hash-based message authentication code) is a specific type of message
authentication code (MAC) involving a cryptographic hash function and a
secret cryptographic key. As with any MAC, it may be used to
simultaneously verify both the data integrity and authenticity of a
message.
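Both algorithms are short enough to sketch with Python's standard library. The sketch below follows RFC 4226/RFC 6238 with the common defaults (HMAC-SHA1, 6 digits, 30-second step); the `at` parameter is added here purely to make the function testable against the RFC vectors:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the big-endian 8-byte counter (RFC 4226)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, digits: int = 6, step: int = 30, at: float = None) -> str:
    # TOTP is just HOTP with the counter derived from Unix time (RFC 6238)
    now = time.time() if at is None else at
    return hotp(secret, int(now // step), digits)
```

With the RFC test secret (the ASCII bytes of "12345678901234567890"), the outputs match the published test vectors.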
SMS: SMS 2FA is a type of authentication often used
next to the standard password during Two-Factor Authentication (2FA) or
Multi-Factor Authentication (MFA). SMS 2FA involves sending a short
one-time password (OTP) to the user via text message. The user must
enter the one-time password into the log-in form to prove their identity
and gain access to their account.
Token key: This authentication method uses a
pseudo-random token generator to create what would seem to be a random
set of numbers that are used during the login process. This might take
the form of a physical device, like this token generator that would fit
on a keyring, or it may be an app on a smartphone.
Static codes: An alphanumeric sequence that is
static and does not change, used for authentication purposes (i.e. a
password).
Authentication applications: An application on a
smartphone can receive a push notification with information from a
server providing details (i.e. a code) required to authenticate.
Push notifications: See above.
Phone call: Users can receive a phone call on a
previously registered phone number which plays back a numeric or
alphanumeric code, which they have to submit to authenticate.
Smart card authentication: This type of
authentication requires a physical smart card to be present in the
device in which authentication is being requested.
Biometrics:
Fingerprint: Self explanatory.
Retina: Self explanatory.
Iris: Self explanatory.
Facial: Self explanatory.
Voice: Self explanatory.
Vein: Self explanatory.
Gait analysis: Self explanatory.
Efficacy rates: The rate of effectiveness, derived
from other statistics.
False acceptance: The rate of unauthorised users
who are falsely granted access by the biometric authentication
system.
False rejection: The rate of authorised users who
are falsely denied access by the biometric authentication system.
Crossover error rate: The rate of both false
acceptance and false rejection, combined and expressed as a single
number.
Multifactor authentication (MFA): When we are
authenticating into a system, there are a set of factors that we would
use. Those three factors are something you know, something you have, and
something you are. You can add on to those factors, some attributes.
Those attributes would be somewhere you are, something you can do,
something you exhibit, and someone you know. An authentication factor is
comparing a characteristic to what you know is associated with an
individual. An authentication attribute is a bit more fluid. It may not
necessarily directly be associated with an individual, but we can
include these with other authentication factors to help prove someone’s
identity.
Factors:
Something you know: The authentication factor of
something you know is something that’s in your brain, and only you
happen to know what this particular value is. One of the most common
things that we know is a password.
Something you have: This is usually a device or
some type of system that is near where you happen to be. Something like
a smart card or a YubiKey for example, would be something that we have
with us.
Something you are: This is a biometric factor, so this might
be a fingerprint, an iris scan, or perhaps a voice print.
Attributes:
Somewhere you are: One of the authentication
attributes that doesn’t necessarily identify a specific individual but
can help with the authentication process, is somewhere you are. This
would provide an authentication factor based on where you might happen
to be geographically. Authentication approval can be locked down to
certain countries, states, cities, or buildings.
Something you can do: A good example of something
you can do might be your signature. The way that you write your
signature is something that’s very unique to you and it’s very difficult
for someone else to be able to replicate that. These attributes may seem
very similar to biometrics, but biometrics can provide us with
characteristics that are very specific to an individual, whereas
something you can do is a much broader description of a
characteristic.
Something you exhibit: Something that you do
unwillingly or unknowingly (i.e. the way you walk).
Someone you know: A web of trust can be used to aid
authentication, such as is done with certificate authorities.
Authentication, authorisation, and accounting:
Authentication: Identification, proving who you are. Authorisation:
Determining the level of access you are permitted to have. Accounting:
Keeping track of authentications, and the actions of authenticated
users.
Cloud vs. on premises requirements: Authenticating
with cloud provider services typically involves using a centralised,
cloud based platform that can be accessed from anywhere, and may include
API integrations. Security features/levels may be toggleable. On-prem
authentication can involve additional or alternate requirements, as you
would be physically present in the location to which you are trying to
gain access.
2.5 Given
a scenario, implement cybersecurity resilience.
Redundancy:
Geographic dispersal: Geographic dispersal of
hardware greatly reduces the chance of failure caused by natural
disasters or power outages.
Disk:
RAID levels:
0: Striped (no redundancy)
1: Mirrored
5: Striping with parity
6: Striping with double parity
10: Striped and mirrored
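Usable capacity follows directly from those layouts. A rough sketch (assuming equal-sized disks and ignoring filesystem overheads; the function name is hypothetical):

```python
def usable_tb(level: int, n_disks: int, disk_tb: float) -> float:
    # RAID 0 stripes everything; 1 mirrors; 5/6 lose one/two disks to parity;
    # 10 mirrors each striped half
    if level == 0:
        return n_disks * disk_tb
    if level == 1:
        return disk_tb
    if level == 5:
        return (n_disks - 1) * disk_tb
    if level == 6:
        return (n_disks - 2) * disk_tb
    if level == 10:
        return (n_disks // 2) * disk_tb
    raise ValueError("unsupported RAID level")
```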
Multipath: A networking method in which redundancy
is built in. If one part of the network fails, another path is
available.
Network:
Load balancers: In computing, load balancing is the
process of distributing a set of tasks over a set of resources, with the
aim of making their overall processing more efficient. Load balancing
can optimise the response time and avoid unevenly overloading some
compute nodes while other compute nodes are left idle. Frequently used
for web servers.
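The simplest distribution strategy, round robin, can be sketched in a few lines (the server pool below is hypothetical):

```python
import itertools

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # hypothetical back-end pool
next_server = itertools.cycle(servers)

# Each incoming request is handed to the next server in turn
assignments = [next(next_server) for _ in range(6)]
```

Real load balancers add health checks and weighting, but the core idea is the same rotation.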
NIC teaming: NIC teaming is the process of
combining multiple network cards together for performance, load
balancing, and redundancy reasons. Use NIC teaming to group two or more
physical NICs into a single logical network device called a bond.
Power:
UPS: An uninterruptible power supply (UPS) is an
electrical apparatus that provides emergency power to a load when the
input power source or mains power fails. It is essentially a big
battery.
Generator: In electricity generation, a generator
is a device that converts motive power into electric power for use in an
external circuit. These can be used during a mains power outage when
uptime is critical.
Dual supply: Servers can have multiple power
supplies installed, so that if one fails, power continues to be
delivered to the device.
Managed PDUs: A power distribution unit, or PDU, is
usually a device that provides multiple power sources (sockets). Some
PDUs have monitoring capabilities.
Replication:
SAN: A storage area network or storage network is a
computer network which provides access to consolidated, block-level data
storage. SANs are primarily used to access data storage devices, such as
disk arrays and tape libraries from servers so that the devices appear
to the operating system as direct-attached storage. They can be used for
backup/replication purposes.
VM: Virtual machines can be used for a variety of
purposes. They are able to be snapshotted and replicated easily,
providing redundancy.
On-premises vs. cloud:
Backup types:
Full: A full backup is exactly what the name
implies: It is a full copy of your entire data set. Although full
backups arguably provide the best protection, most organisations don’t
use them on a daily basis because they are time-consuming and often
require a lot of disk or tape capacity.
Incremental: Incremental backups only back up the
data that has changed since the previous backup.
Snapshot: A snapshot is an instantaneous
"picture" of your device's file system at a certain point in time. This
picture captures the entire file system as it was when the snapshot was
taken. When a snapshot is used to restore the server, the server will
revert to exactly how it was at the time of the snapshot. Snapshots are
designed for short-term storage.
Differential: A differential backup is similar to
an incremental backup in that it starts with a full backup and
subsequent backups only contain data that has changed. The difference in
incremental vs. differential backup is that, while an incremental backup
only includes the data that has changed since the previous backup, a
differential backup contains all of the data that has changed since the
last full backup. Suppose that you wanted to create a full backup on
Monday and differential backups for the rest of the week. Tuesday’s
backup would contain all of the data that has changed since Monday. It
would, therefore, be identical to an incremental backup at that point.
On Wednesday, however, the differential backup would back up any data
that had changed since Monday as well.
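That Monday-to-Wednesday example can be sketched to show exactly what each scheme captures (the day labels and file names are hypothetical):

```python
# Files changed on each day after Monday's full backup
days = ["Tue", "Wed"]
changes = {"Tue": {"report.doc"}, "Wed": {"budget.xls"}}

def incremental(day: str) -> set:
    # Only what changed since the previous (daily) backup
    return changes.get(day, set())

def differential(day: str) -> set:
    # Everything changed since the last full backup
    out = set()
    for d in days[: days.index(day) + 1]:
        out |= changes.get(d, set())
    return out
```

On Tuesday the two are identical; by Wednesday the differential backup also carries Tuesday's changes.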
Tape: Tape backup is the practice of periodically
copying data from a primary storage device to a tape cartridge so the
data can be recovered if there is a hard disk crash or failure. Magnetic
tape is well-suited for archiving because of its high capacity, low cost
and durability. Tape is a linear recording system that is not good for
random access.
Disk: Disk backup, or disk-based backup, is a data
backup and recovery method that backs data up to hard disk storage.
Copy: Similar to a snapshot. A copy or an image of
a system that is an exact duplicate of a system at a particular point in
time.
NAS: A NAS device is a storage device connected to
a network that allows storage and retrieval of data from a central
location for authorised network users and varied clients. NAS devices
are flexible and scale out, meaning that as you need additional storage,
you can add to what you have. NAS is like having a private cloud in the
office. It’s faster, less expensive and provides all the benefits of a
public cloud on site, giving you complete control.
SAN: See above. A storage area network can also
serve as a backup target, providing consolidated block-level storage
that appears to the operating system as direct-attached storage.
Cloud: Cloud backup, also known as online backup or
remote backup, is a strategy for sending a copy of a physical or virtual
file/directory/filesystem or database to a secondary, off-site location
for preservation in case of equipment failure or catastrophe.
Image: Instead of backing up individual files on a
system, image backups involve backing up everything that is on a
computer and creating an exact duplicate or replica of that entire file
system. Similar to a snapshot.
Online vs. offline: An offline backup is a backup
to a device that is taken offline once that backup has been transferred
to it. An online backup is a backup to a device that is constantly
online and accessible.
Offsite storage: Backup to a storage device that is
located offsite.
Distance considerations: Latency &
bandwidth.
Non-persistence: The constant tearing down and
bringing up of applications and services on a server. Cloud based
environments are constantly in motion.
Revert to known state: The ability to roll a system
back to a previously saved, known-good state, for example by restoring
a snapshot or restore point.
Last known good configuration: A boot option (notably
in older versions of Windows) that starts the system using the most
recent configuration that successfully booted.
Live boot media: Booting from trusted external media
(e.g. a USB drive or DVD), giving a known-good operating system that is
independent of anything installed on the machine.
High availability: High availability (HA) is the
ability of a system to operate continuously without failing for a
designated period of time. HA works to ensure a system meets an
agreed-upon operational performance level. In information technology
(IT), a widely held but difficult-to-achieve standard of availability is
known as five-nines availability, which means the system or product is
available 99.999% of the time.
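The downtime budget implied by an availability target is simple arithmetic; five nines leaves roughly 5.26 minutes of downtime per year:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def max_downtime_minutes(availability_pct: float) -> float:
    # The yearly downtime budget implied by an availability target
    return (1 - availability_pct / 100) * MINUTES_PER_YEAR
```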
Scalability: The ability of a computer application
or product (hardware or software) to continue to function well when it
(or its context) is changed in size or volume in order to meet a user
need. Typically, the rescaling is to a larger size or volume. The
rescaling can be of the product itself (for example, a line of computer
systems of different sizes in terms of storage, RAM, and so forth) or in
the scalable object’s movement to a new context (for example, a new
operating system).
Restoration order: Things often have to be restored
in a certain order. For example, if you’re in a situation where you have
to rebuild an application instance, you need to make sure that you
perform that restoration in the correct order. The different application
components will probably need to be restored in a very particular
order.
Diversity: Don’t put all your eggs in one
basket.
2.6
Explain the security implications of embedded and specialised
systems.
Embedded systems: An embedded system is a computer
system — a combination of a computer processor, computer memory, and
input/output peripheral devices — that has a dedicated function within a
larger mechanical or electronic system (e.g. traffic light controllers,
digital watches, or a medical imaging system).
Raspberry Pi: Raspberry Pi is a series of small
single-board computers (SBCs) developed in the United Kingdom by the
Raspberry Pi Foundation in association with Broadcom. It is widely used
in many areas, such as for weather monitoring, because of its low cost,
modularity, and open design. It is typically used by computer and
electronic hobbyists, due to its adoption of the HDMI and USB
standards.
FPGA: A field-programmable gate array (FPGA) is an
integrated circuit designed to be configured by a customer or a designer
after manufacturing – hence the term field-programmable. FPGAs contain
an array of programmable logic blocks, and a hierarchy of reconfigurable
interconnects allowing blocks to be wired together. Logic blocks can be
configured to perform complex combinational functions, or act as simple
logic gates like AND and XOR. In most FPGAs, logic blocks also include
memory elements. FPGAs have a remarkable role in embedded system
development due to their capability to start system software development
simultaneously with hardware, enable system performance simulations at a
very early phase of the development, and allow various system trials and
design iterations before finalizing the system architecture.
Arduino: Arduino is an open-source hardware and
software company, project, and user community that designs and
manufactures single-board microcontrollers and microcontroller kits for
building digital devices. Its hardware products are licensed under a CC
BY-SA license, while software is licensed under the GNU Lesser General
Public License (LGPL) or the GNU General Public License (GPL),
permitting the manufacture of Arduino boards and software distribution
by anyone.
Supervisory Control and Data Acquisition (SCADA)/Industrial
Control Systems (ICS): SCADA/ICS systems are used in industrial
settings. The systems are used to control industrial equipment, like
manufacturing, farming, and many other types of equipment. A good
security implementation is critical for SCADA/ICS systems.
Internet of Things (IoT): The Internet of things
describes physical objects with sensors, processing ability, software,
and other technologies that connect and exchange data with other devices
and systems over the Internet or other communications networks. Examples
include thermometers, doorbells, wearables, kitchen appliances, laundry
appliances, and TVs. They’re often developed with little consideration
for security, often have a short support lifespan from the manufacturer,
and may never receive updates. IoT devices should be placed on a
separate network/VLAN from everything else.
Sensors: IoT devices may record sensor data such as
health information (e.g. heart rate), which may be inherently valuable
and sensitive to the user.
Smart devices: A smart device is an electronic
device, generally connected to other devices or networks via different
wireless protocols that can operate to some extent interactively and
autonomously.
Wearables: Smart watches, fitness trackers,
etc.
Facility automation: Smart buildings, i.e. lights,
temperature control, door locking etc.
Weak defaults: Many smart/IoT devices come with weak
default credentials and settings, because, to put it bluntly, the
companies behind the devices don’t care about your privacy and just want
to make money.
Specialised:
Medical systems: Many medical systems these days
are connected to the internet. Their security is paramount.
Vehicles: Many vehicles are now connected to the
internet. If hacked, the attackers could potentially interfere with the
car while it’s driving.
Aircraft: Aircraft contain many sensors which
communicate with one another. If these were manipulated in any way, the
results could be catastrophic.
Smart meters: Internet connected water/electricity
meters.
VoIP: Voice over Internet Protocol (VoIP) has
replaced analogue phone lines, and can be jerry-rigged to do a variety
of tasks other than facilitate phone calls.
HVAC: Heating, Ventilation, and Air Conditioning
(HVAC) systems can have networking capabilities. Hacking, and sabotage,
of HVAC systems can lead to bad outcomes, especially in environments
such as datacenters.
Drones: Drones are increasingly used for commercial
purposes, and it could damage a corporation’s bottom line if they were
hacked.
Multifunction printer (MFP): An MFP (multi-function
printer), multi-functional, all-in-one (AIO), or multi-function device
(MFD), is an office machine which incorporates the functionality of
multiple devices in one, so as to have a smaller footprint in a home or
small business setting (the SOHO market segment), or to provide
centralized document management/distribution/production in a
large-office setting. A typical MFP may act as a combination of some or
all of the following devices: email, fax, photocopier, printer, scanner.
When disposing of old printers with local storage, one should keep in
mind that confidential documents (print, scan, copy jobs) are
potentially still unencrypted on the printer’s local storage and can be
undeleted.
Surveillance systems: Surveillance systems and CCTV
cameras may monitor areas where sensitive actions take place, so the
security of the recordings, and of the network on which they reside, is
important.
System on Chip: A system on a chip is an integrated
circuit that integrates most or all components of a computer or other
electronic system. SoC security is critical as more and more personal
computing needs are controlled by a chip with the prevalence of Internet
of Things.
Communication considerations:
5G: In telecommunications, 5G is the
fifth-generation technology standard for broadband cellular networks,
which cellular phone companies began deploying worldwide in 2019, and is
the planned successor to the 4G networks which provide connectivity to
most current cellphones. In addition to 5G being faster than existing
networks, 5G has higher bandwidth and can thus connect more devices,
improving the quality of Internet services in crowded areas.
Due to the increased bandwidth, it is expected the networks will
increasingly be used as general internet service providers (ISPs) for
laptops and desktop computers, competing with existing ISPs such as
cable internet, and also will make possible new applications in
internet-of-things (IoT) and machine-to-machine areas.
Narrow-band: Narrowband signals are signals that
occupy a narrow range of frequencies or that have a small fractional
bandwidth. It’s very common to be able to send communication over these
bands across a very long distance.
Baseband radio: Baseband refers to a single-channel
digital system, where that single channel is used to communicate with
devices on a network. Broadband, by contrast, is wide-bandwidth data
transmission that uses an analogue carrier frequency to carry multiple
digital signals or channels. Since baseband communication uses a single
frequency, anything going over the link uses all of the bandwidth on
that connection.
SIM cards: A Subscriber Identity Module (SIM) card
is an integrated circuit (IC) intended to securely store the
international mobile subscriber identity (IMSI) number and its related
key, which are used to identify and authenticate subscribers on mobile
telephony devices (such as mobile phones and computers).
Zigbee: Zigbee is an IEEE 802.15.4-based
specification for a suite of high-level communication protocols used to
create personal area networks with small, low-power digital radios, such
as for home automation, medical device data collection, and other
low-power low-bandwidth needs, designed for small scale projects which
need wireless connection. Hence, Zigbee is a low-power, low data rate,
and close proximity (i.e., personal area) wireless ad hoc network.
Constraints:
Power: Embedded devices are generally very low
power and may depend upon batteries.
Compute: Embedded devices typically have a
relatively tiny amount of computing power, compared to general purpose
CPUs.
Network: The networking capabilities of embedded
devices are in part defined by their physical location, and many
embedded devices are placed in unusual locations.
Crypto: Embedded devices typically have no or very
little cryptographic hardware.
Inability to patch: Once they are deployed, it is
difficult to patch or update an embedded device. They are generally
headless.
Authentication: Often, embedded devices do not have
any authentication mechanisms in place.
Cost: Embedded devices are typically low cost,
accounting for the constraints in other areas.
Implied trust: No access to firmware/blobs/OS/HDL
source code/schematics etc. Completely dependent on the goodwill of the
manufacturer.
2.7
Explain the importance of physical security controls.
Bollards/barricades: Bollards and barricades are
used to restrict access to physical areas, particularly from
vehicles.
Access control vestibules: An access control
vestibule usually consists of a door providing access to the vestibule,
in which another door exists, providing access to the restricted
area.
Badges: Badges usually provide an ID mechanism and
can be used to prove who you are. They can also be RFID enabled to allow
access through doors.
Alarms: Fire alarms and security alarms are
important for obvious reasons.
Signage: Signage communicates what is and isn’t
expected in a certain area.
Cameras:
Motion recognition: Motion recognition cameras can
record only when motion is detected, saving storage space due to a
limited amount of footage being captured.
Object detection: This can include face scanning
technology.
Closed-circuit television (CCTV): Closed-circuit
television, also known as video surveillance, is the use of video
cameras to transmit a signal to a specific place, on a limited set of
monitors.
Industrial camouflage: Disguising a building which
contains important equipment as an inconspicuous warehouse or other type
of building.
Personnel:
Guards: Security guards still play an important
role in security, despite advances in technology and automation.
Robot sentries: Robots are starting to replace
humans doing sentry duties, freeing the humans up for more important
tasks.
Reception: Receptionists can keep track of who
enters and exits a building.
Two-person integrity/control: A security control in
which two people are required to gain access to a building or restricted
area/resource.
Locks:
Biometrics: Locks which require biometric data to
unlock.
Electronic: A keyless lock, usually requiring a
pin-code to unlock.
Physical: A traditional lock, requiring a key.
Cable locks: Cable locks are versatile, and include
the Kensington locks commonly compatible with laptops.
USB data blocker: A USB cable that transmits only
power, and not data. Useful if you do not trust the host device.
Lighting: Proper lighting in and around a building
helps prevent intruders.
Fencing: Fencing prevents people from intruding on
private property.
Fire suppression: Good fire suppression protects
from accidental mishaps and arsonists.
Sensors: These are self explanatory.
Motion
Noise
Proximity
Moisture
Cards (RFID)
Temperature
Drones: Drones can be used to monitor areas for
suspicious activity. They can also be fitted with auxiliary sensors to
receive and transmit other types of information.
Visitor logs: Visitor logs record the name of
everyone who has gained access to a facility.
Faraday cages: Faraday cages are enclosures used to
block electromagnetic fields.
Air gap: An air gap, air wall, air gapping or
disconnected network is a network security measure employed on one or
more computers to ensure that a secure computer network is physically
isolated from unsecured networks, such as the public Internet or an
unsecured local area network.
Demilitarised zone (DMZ) AKA screened subnet: In
computer security, a DMZ or demilitarised zone is a physical or logical
subnetwork that contains and exposes an organization’s external-facing
services to an untrusted, usually larger, network such as the
Internet.
Protected cable distribution: Having all cables and
networking equipment physically isolated and unable to be accessed other
than by those who are authorised to do so.
Secure areas:
Air gap: An air gap, air wall, air gapping or
disconnected network is a network security measure employed on one or
more computers to ensure that a secure computer network is physically
isolated from unsecured networks, such as the public Internet or an
unsecured local area network.
Vault: Vaults are secure rooms where important
information, such as backups, can be stored.
Safe: Safes are like vaults, but don’t take up an
entire room.
Hot aisle: An aisle in a data center into which air
that has passed through the servers (and has therefore been heated) is
guided, before being pushed back into a cooling system.
Cold aisle: An aisle in a data center through which
cooled air travels before being drawn into the servers.
Secure data destruction:
Burning: Documents that are no longer needed can be
lit on fire.
Shredding: They can also be shredded.
Pulping: Or pulped.
Pulverising: Or pulverised.
Degaussing: Degaussing is the process of reducing
or eliminating an unwanted magnetic field (or data) stored on tape and
disk media such as computer and laptop hard drives, diskettes, reels,
cassettes and cartridge tapes. When exposed to the powerful magnetic
field of a degausser, the magnetic data on a tape or hard disk is
neutralized, or erased.
Third party solutions: Some third parties provide
data destruction services.
2.8 Summarise
the basics of cryptographic concepts.
Digital signatures: A message is signed with the
sender’s private key and can be verified by anyone who has access to the
sender’s public key. This verification proves that the sender had access
to the private key, and therefore is very likely to be the person
associated with the public key. It also proves that the signature was
prepared for that exact message, since verification will fail for any
other message one could devise without using the private key.
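The sign-then-verify flow can be sketched with a toy RSA key. The tiny numbers below are purely illustrative (real keys are at least 2048 bits), and the message is a made-up example:

```python
import hashlib

# Toy RSA key (illustrative only; real keys are >= 2048 bits).
p, q = 61, 53
n = p * q        # modulus: 3233
e = 17           # public exponent
d = 2753         # private exponent: e*d = 1 (mod lcm(p-1, q-1))

def sign(message: bytes) -> int:
    """Hash the message, then apply the private key to the digest."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Apply the public key to the signature and compare with the digest."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

sig = sign(b"pay alice 5")
assert verify(b"pay alice 5", sig)
```

Verification only needs the public values (n, e), which is why anyone holding the public key can check the signature.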
Key length: Key length (a.k.a. key size) is the
number of bits of a key used to encrypt a message. The length on its own
is not a measure of how secure the ciphertext is. However, for secure
ciphers, the longer the key the stronger the encryption.
Key stretching: Techniques that make a weak key or
password more resistant to brute-force attacks by running it through a
deliberately slow or repeated function (e.g. PBKDF2 or bcrypt).
Salting: In cryptography, a salt is random data
that is used as an additional input to a one-way function that hashes
data, a password or passphrase. A new salt is randomly generated for
each password. Typically, the salt and the password (or its version
after key stretching) are concatenated and fed to a cryptographic hash
function, and the output hash value (but not the original password) is
stored with the salt in a database. Hashing allows later authentication
without keeping and therefore risking exposure of the plaintext password
if the authentication data store is compromised.
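A minimal sketch of salted password hashing using the standard library's PBKDF2 (the salt size and iteration count are illustrative choices):

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Return (salt, derived_key); a fresh random salt is generated per password."""
    if salt is None:
        salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, key

salt, stored = hash_password("hunter2")
# Same password and salt reproduce the stored key (this is how a login check works);
# a fresh random salt yields a completely different key for the same password.
assert hash_password("hunter2", salt)[1] == stored
assert hash_password("hunter2")[1] != stored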
Hashing: Hashing is the process of transforming any
given key or a string of characters into another value. This is usually
represented by a shorter, fixed-length value or key that represents and
makes it easier to find or employ the original string. A hash function
generates new values according to a mathematical hashing algorithm,
known as a hash value or simply a hash. To prevent the conversion of
hash back into the original key, a good hash always uses a one-way
hashing algorithm.
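For example, with SHA-256 the output is always the same fixed size, and a one-character change to the input produces an unrelated digest:

```python
import hashlib

h1 = hashlib.sha256(b"password").hexdigest()
h2 = hashlib.sha256(b"Password").hexdigest()

# SHA-256 always yields 256 bits (64 hex characters), regardless of input size.
assert len(h1) == len(h2) == 64
# A single changed character produces a completely different digest.
assert h1 != h2
```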
Key exchange: One of the logistical challenges we
have is the need to be able to share keys between two people so that you
can then perform an encryption. Some methods include:
Out of band exchange: Exchanging keys via a method
other than over a computer network.
Asymmetric encryption: Symmetric encryption keys
can be transferred securely using asymmetric encryption.
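Another classic approach is a Diffie-Hellman exchange, where both parties derive the same secret without ever transmitting it. A toy sketch over a small prime group (illustrative only; real deployments use 2048-bit or larger groups, or elliptic curves):

```python
import secrets

p = 4294967291          # 2**32 - 5, a prime modulus (toy-sized)
g = 5                   # generator, adequate for a sketch

a = secrets.randbelow(p - 2) + 1    # Alice's private value, kept secret
b = secrets.randbelow(p - 2) + 1    # Bob's private value, kept secret
A = pow(g, a, p)                    # Alice transmits A in the clear
B = pow(g, b, p)                    # Bob transmits B in the clear

# Each side combines its own secret with the other's public value;
# both arrive at the same shared key, which never crossed the wire.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```

The shared value can then be used as (or fed into a derivation of) a symmetric key.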
Elliptic-curve cryptography: Elliptic-curve
cryptography is an approach to public-key cryptography based on the
algebraic structure of elliptic curves over finite fields. ECC allows
smaller keys compared to non-EC cryptography to provide equivalent
security.
Perfect forward secrecy: Perfect forward secrecy
means that a piece of an encryption system automatically and frequently
changes the keys it uses to encrypt and decrypt information, such that
if the latest key is compromised, it exposes only a small portion of the
user’s sensitive data. Encryption tools with perfect forward secrecy
switch their keys as frequently as every message in text-based
conversation, every phone call in the case of encrypted calling apps, or
every time a user loads or reloads an encrypted web page in his or her
browser.
Quantum:
Computing: Whilst classical computing uses binary
bits, that are either 1 or 0, quantum computing uses qubits, which exist
in more than one state at the same time (and so can somehow be 1 and 0
simultaneously). Some of the implications of quantum computing include
the potential rendering of existing cryptography methods as useless,
obsolete, and insecure.
Communications: Entanglement is integral to quantum
computing power. Pairs of qubits can be made to become entangled. This
means that the two qubits then exist in a single state. In such a state,
changing one qubit directly affects the other in a manner that’s
predictable.
Post-quantum: In cryptography, post-quantum
cryptography (sometimes referred to as quantum-proof, quantum-safe or
quantum-resistant) refers to cryptographic algorithms (usually
public-key algorithms) that are thought to be secure against a
cryptanalytic attack by a quantum computer. The problem with currently
popular algorithms is that their security relies on one of three hard
mathematical problems: the integer factorization problem, the discrete
logarithm problem or the elliptic-curve discrete logarithm problem. All
of these problems could be easily solved on a sufficiently powerful
quantum computer running Shor’s algorithm.
Ephemeral: A key that’s not permanent, such as a
session key.
Modes of operation:
Authenticated: Authenticated Encryption (AE) and
Authenticated Encryption with Associated Data (AEAD) are forms of
encryption which simultaneously assure the confidentiality and
authenticity of data.
Unauthenticated: Encryption that provides
confidentiality but does not also assure the authenticity or integrity
of the data.
Counter: In cryptography, Galois/Counter Mode (GCM)
is a mode of operation for symmetric-key cryptographic block ciphers
which is widely adopted for its performance. GCM throughput rates for
state-of-the-art, high-speed communication channels can be achieved with
inexpensive hardware resources. The operation is an authenticated
encryption algorithm designed to provide both data authenticity
(integrity) and confidentiality. GCM is defined for block ciphers with a
block size of 128 bits. Galois Message Authentication Code (GMAC) is an
authentication-only variant of the GCM which can form an incremental
message authentication code. Both GCM and GMAC can accept initialization
vectors of arbitrary length. Different block cipher modes of operation
can have significantly different performance and efficiency
characteristics, even when used with the same block cipher. GCM can take
full advantage of parallel processing and implementing GCM can make
efficient use of an instruction pipeline or a hardware pipeline.
Blockchain: A blockchain is a type of distributed
ledger technology (DLT) that consists of a growing list of records,
called blocks, that are securely linked together using cryptography.
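The cryptographic linking can be sketched as follows (toy blocks only, with no consensus or proof-of-work; the transaction string is a made-up example):

```python
import hashlib
import json

def block_hash(data: str, prev: str) -> str:
    """Hash a block's contents together with its predecessor's hash."""
    return hashlib.sha256(json.dumps([data, prev]).encode()).hexdigest()

def make_block(data: str, prev: str) -> dict:
    return {"data": data, "prev": prev, "hash": block_hash(data, prev)}

genesis = make_block("genesis", "0" * 64)
b1 = make_block("tx: alice pays bob 5", genesis["hash"])

# Each block commits to its predecessor's hash, so altering an earlier
# block changes its hash and invalidates every block after it.
assert b1["prev"] == genesis["hash"]
assert block_hash("tampered", genesis["prev"]) != genesis["hash"]
```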
Public ledgers: A distributed ledger is the
consensus of replicated, shared, and synchronised digital data that is
geographically spread (distributed) across many sites, countries, or
institutions. In contrast to a centralised database, a distributed
ledger does not require a central administrator, and consequently does
not have a single (central) point-of-failure.
Cipher suites:
Stream: Stream ciphers encrypt the digits
(typically bytes), or letters (in substitution ciphers) of a message one
at a time.
Advantages:
Speed of transformation: Algorithms are linear in time and
constant in space.
Low error propagation: An error in encrypting one symbol
will likely not affect subsequent symbols.
Disadvantages:
Low diffusion: All information of a plaintext symbol is
contained in a single ciphertext symbol.
Susceptibility to insertions/modifications: An active
interceptor who breaks the algorithm might insert spurious text that
looks authentic.
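The one-symbol-at-a-time XOR idea can be sketched with a toy keystream (SHA-256 run in counter mode here is illustrative, not a vetted cipher):

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream generator: hash the key with an incrementing counter."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_stream(key: bytes, data: bytes) -> bytes:
    """XOR each byte of the data with the keystream; encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

ct = xor_stream(b"key", b"attack at dawn")
assert xor_stream(b"key", ct) == b"attack at dawn"
```

Note how encryption and decryption are the same operation, and how flipping one ciphertext byte flips exactly one plaintext byte, which is the low-diffusion/insertion weakness described above.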
Block: Block ciphers take a number of bits and
encrypt them as a single unit, padding the plaintext so that it is a
multiple of the block size.
Advantages:
High diffusion: Information from one plaintext symbol is
diffused into several ciphertext symbols.
Immunity to tampering: Difficult to insert symbols without
detection.
Disadvantages:
Slowness of encryption
Error propagation: An error in one symbol may corrupt the
entire block.
Symmetric vs. asymmetric:
Symmetric: Symmetric encryption is a type of
encryption that uses the same key to encrypt and decrypt data. Both the
sender and the recipient have identical copies of the key, which they
keep secret and don’t share with anyone.
Asymmetric: Asymmetric encryption uses two keys — a
public key (that anyone can access) to encrypt information and a private
key to decrypt information. Public-key cryptography, or asymmetric
cryptography, is the field of cryptographic systems that use pairs of
related keys. Each key pair consists of a public key and a corresponding
private key. Key pairs are generated with cryptographic algorithms based
on mathematical problems termed one-way functions. Security of
public-key cryptography depends on keeping the private key secret; the
public key can be openly distributed without compromising security. In a
public-key encryption system, anyone with a public key can encrypt a
message, yielding a ciphertext, but only those who know the
corresponding private key can decrypt the ciphertext to obtain the
original message. For example, a journalist can publish the public key
of an encryption key pair on a web site so that sources can send secret
messages to them in ciphertext. Only the journalist who knows the
corresponding private key can decrypt the ciphertext to obtain the
sources’ messages—an eavesdropper reading email on its way to the
journalist can’t decrypt the ciphertext. However, public-key encryption
doesn’t conceal metadata like what computer a source used to send a
message, when they sent it, or how long it is. Public-key encryption on
its own also doesn’t tell the recipient anything about who sent a
message—it just conceals the content of a message in a ciphertext that
can only be decrypted with the private key.
Lightweight cryptography: Cryptographic functions
that require relatively little computational horsepower to implement and
run.
Steganography: The practice of concealing messages
or information within other non-secret text or data. For example,
malicious code can be hidden in seemingly normal audio, video, or image
files.
Homomorphic encryption: Homomorphic encryption is a
form of encryption that permits users to perform computations on its
encrypted data without first decrypting it.
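As a tiny illustration of the idea, unpadded ("textbook") RSA happens to be multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product. The toy key below is illustrative and not secure in practice:

```python
# Toy RSA key (textbook-sized parameters; illustrative only, not secure).
n, e, d = 3233, 17, 2753

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

c1, c2 = enc(6), enc(7)
# Multiplying the ciphertexts multiplies the underlying plaintexts,
# without ever decrypting the individual values.
assert dec((c1 * c2) % n) == 6 * 7
```

Fully homomorphic schemes extend this to arbitrary computations, at considerable computational cost.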
Limitations:
Speed: Computationally expensive encryption can be
slow to implement.
Size: If your block size is 16 bytes and you’re
encrypting some data that is 8 bytes in size, you have to fill in the
other remaining 8 bytes so that you have a full 16 bytes to be able to
encrypt, effectively doubling (unnecessarily) the size of the data.
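The padding overhead described above can be sketched in the style of PKCS#7, assuming 16-byte blocks:

```python
def pad(data: bytes, block: int = 16) -> bytes:
    """PKCS#7-style padding: append 1..block bytes, each equal to the pad length."""
    padlen = block - (len(data) % block)
    return data + bytes([padlen]) * padlen

def unpad(data: bytes) -> bytes:
    """Strip as many trailing bytes as the final byte indicates."""
    return data[: -data[-1]]

padded = pad(b"8 bytes!")       # 8 data bytes grow to a full 16-byte block
assert len(padded) == 16
assert unpad(padded) == b"8 bytes!"
```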
Weak keys: Old cryptographic methods and smaller
key sizes are vulnerable to brute forcing.
Time: Encryption takes time.
Longevity: As time progresses and computing power
increases, old and/or present secure cryptographic methods may become
compromised.
Predictability: Cryptography relies heavily on
randomness, so a predictable RNG is a huge vulnerability.
Resource vs. security constraints: Very weak
devices (such as IoT devices) may have to sacrifice security due to
their limited processing power.
3.0 Implementation
3.1 Given a
scenario, implement secure protocols.
Protocols:
DNSSEC: DNSSEC strengthens authentication in DNS
using digital signatures based on public key cryptography. With DNSSEC,
it’s not DNS queries and responses themselves that are cryptographically
signed, but rather DNS data itself is signed by the owner of the data.
Every DNS zone has a public/private key pair. The zone owner uses the
zone’s private key to sign DNS data in the zone and generate digital
signatures over that data. The zone’s public key, however, is published
in the zone itself for anyone to retrieve. Any recursive resolver that
looks up data in the zone also retrieves the zone’s public key, which it
uses to validate the authenticity of the DNS data.
SSH: ssh (Secure Shell) is a program for logging
into a remote machine and for executing commands on a remote machine. It
is intended to provide secure encrypted communications between two
untrusted hosts over an insecure network. X11 connections, arbitrary TCP
ports and UNIX-domain sockets can also be forwarded over the secure
channel.
S/MIME: S/MIME (Secure/Multipurpose Internet Mail
Extensions) is a standard for public key encryption and signing of MIME
data. Multipurpose Internet Mail Extensions (MIME) is an Internet
standard that extends the format of email messages to support text in
character sets other than ASCII, as well as attachments of audio, video,
images, and application programs. Message bodies may consist of multiple
parts, and header information may be specified in non-ASCII character
sets.
SRTP: The Secure Real-time Transport Protocol
(SRTP) is a profile for Real-time Transport Protocol (RTP) intended to
provide encryption, message authentication and integrity, and replay
attack protection to the RTP data in both unicast and multicast
applications. The Real-time Transport Protocol (RTP) is a network
protocol for delivering audio and video over IP networks. RTP is used in
communication and entertainment systems that involve streaming media,
such as telephony, video teleconference applications including WebRTC,
television services and web-based push-to-talk features.
LDAPS: The Lightweight Directory Access Protocol
(LDAP) is an open, vendor-neutral, industry standard application
protocol for accessing and maintaining distributed directory information
services over an Internet Protocol (IP) network. A common use of LDAP is
to provide a central place to store usernames and passwords. This allows
many different applications and services to connect to the LDAP server
to validate users. LDAPS is LDAP over SSL/TLS.
FTPS: FTPS is an extension to the commonly used
File Transfer Protocol (FTP) that adds support for the Transport Layer
Security (TLS) and, formerly, the Secure Sockets Layer (SSL)
cryptographic protocols.
SFTP: The SSH File Transfer Protocol is a network
protocol that provides file access, file transfer, and file management
over any reliable data stream. It was designed by the Internet
Engineering Task Force (IETF) as an extension of the Secure Shell
protocol (SSH) version 2.0 to provide secure file transfer
capabilities.
SNMPv3: Simple Network Management Protocol (SNMP)
is an Internet Standard protocol for collecting and organising
information about managed devices on IP networks and for modifying that
information to change device behaviour. Devices that typically support
SNMP include cable modems, routers, switches, servers, workstations,
printers, and more. SNMPv3 adds the authentication, integrity, and
encryption that earlier SNMP versions lacked.
HTTPS: Hypertext transfer protocol secure (HTTPS)
is the secure version of HTTP, which is the primary protocol used to
send data between a web browser and a website. HTTPS is encrypted in
order to increase security of data transfer.
IPSec: In computing, Internet Protocol Security
(IPsec) is a secure network protocol suite that authenticates and
encrypts packets of data to provide secure encrypted communication
between two computers over an Internet Protocol network. It is used in
virtual private networks (VPNs).
Authentication Header (AH): Provides integrity and
origin authentication for IP packets, but no encryption.
Encapsulation Security Payloads (ESP): Provides
confidentiality (encryption) and, optionally, integrity and
authentication for the packet payload.
Tunnel/transport: In transport mode, only the
payload of the IP packet is protected; in tunnel mode, the entire
original IP packet is encapsulated and protected, as is common for
site-to-site VPNs.
POP/IMAP: POP and IMAP are both protocols used for
retrieving email from an email server so you can read messages on your
device. POP stands for Post Office Protocol, and is the older of the
two. It was created in 1984 as a way to download emails from a remote
server. IMAP, or Internet Message Access Protocol, was designed in 1986.
Instead of simply retrieving emails, it was created to allow remote
access to emails stored on a remote server.
Use cases:
Voice and video: SRTP instead of plain RTP.
Time synchronisation: NTP, ideally with
authentication (e.g. NTS).
Email and web: S/MIME, IMAPS/POP3S, and HTTPS
instead of their plaintext counterparts.
File transfer: SFTP or FTPS instead of FTP.
Directory services: LDAPS instead of plain LDAP.
Remote access: SSH instead of Telnet; VPNs such as
IPsec.
Domain name resolution: DNSSEC.
Routing and switching: Manage devices over
SSH/SNMPv3 rather than Telnet or SNMPv1/v2.
Network address allocation: DHCP has no secure
variant, so mitigations such as DHCP snooping on switches are used.
Subscription services: Automated feeds and updates
should be retrieved over HTTPS.
3.2
Given a scenario, implement host or application security solutions.
Endpoint protection:
Antivirus/anti-malware: Antivirus software
(abbreviated to AV software), also known as anti-malware, is a computer
program used to prevent, detect, and remove malware. Antivirus software
was originally developed to detect and remove computer viruses, hence
the name. However, with the proliferation of other malware, antivirus
software started to protect from other computer threats. In particular,
modern antivirus software can protect users from malicious browser
helper objects (BHOs), browser hijackers, ransomware, keyloggers,
backdoors, rootkits, trojan horses, worms, malicious LSPs, diallers,
fraud tools, adware, and spyware. Some products also include protection
from other computer threats, such as infected and malicious URLs, spam,
scam and phishing attacks, online identity (privacy), online banking
attacks, social engineering techniques, advanced persistent threat
(APT), and botnet DDoS attacks.
Endpoint detection and response (EDR): Endpoint
detection and response (EDR) is a cybersecurity technology that
continually monitors an “endpoint” (e.g. mobile phone, laptop,
Internet-of-Things device) to mitigate malicious cyber threats. Endpoint
detection and response technology is used to identify suspicious
behavior and Advanced Persistent Threats on endpoints in an environment,
and alert administrators accordingly. It does this by collecting and
aggregating data from endpoints and other sources. That data may or may
not be enriched by additional cloud analysis. EDR solutions are
primarily an alerting tool rather than a protection layer but functions
may be combined depending on the vendor.
DLP: Data loss prevention (DLP) software detects
potential data breaches/data ex-filtration transmissions and prevents
them by monitoring, detecting and blocking sensitive data while in use
(endpoint actions), in motion (network traffic), and at rest (data
storage).
Next Generation Firewall (NGFW): A next-generation
firewall (NGFW) is a part of the third generation of firewall
technology, combining a traditional firewall with other network device
filtering functions, such as an application firewall using in-line deep
packet inspection (DPI) or an intrusion prevention system (IPS). Other
techniques might also be employed, such as TLS/SSL encrypted traffic
inspection, website filtering, QoS/bandwidth management, antivirus
inspection and third-party identity management integration (i.e. LDAP,
RADIUS, Active Directory).
Host Based Intrusion Prevention System (HIPS): A
host-based intrusion prevention system monitors a single host for
suspicious activity and, unlike a HIDS, can actively block detected
threats.
Host Based Intrusion Detection System (HIDS): A
host-based intrusion detection system (HIDS) is an intrusion detection
system that is capable of monitoring and analyzing the internals of a
computing system as well as the network packets on its network
interfaces, similar to the way a network-based intrusion detection
system (NIDS) operates. This was the first type of intrusion detection
software to have been designed, with the original target system being
the mainframe computer where outside interaction was infrequent.
Host Based Firewall: A host-based firewall is
firewall software that is installed directly on a computer (rather than
a network).
Boot integrity:
UEFI: The Unified Extensible Firmware Interface is
a publicly available specification that defines a software interface
between an operating system and platform firmware.
Measured boot: Windows 8 introduced a new feature
called Measured Boot, which measures each component, from firmware up
through the boot start drivers, stores those measurements in the Trusted
Platform Module (TPM) on the machine, and then makes available a log
that can be tested remotely to verify the boot state of the client.
Boot attestation: The device reports its boot
measurements (e.g. from the TPM) to a remote verifier, which checks that
the device booted with trusted software.
Secure boot: Verifies, during the boot process, the
software which is expected to run on the device. Secure boot propagates
trust from one software component to another, building a chain of
trust.
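The measurement idea behind measured boot can be sketched as a hash chain in the style of a TPM PCR extend operation (toy register; the component names are hypothetical):

```python
import hashlib

def extend(register: bytes, component: bytes) -> bytes:
    """PCR-style extend: fold the component's hash into the running register."""
    return hashlib.sha256(register + hashlib.sha256(component).digest()).digest()

boot_chain = (b"firmware-v1", b"bootloader-v2", b"kernel-v3")  # hypothetical

pcr = b"\x00" * 32
for component in boot_chain:
    pcr = extend(pcr, component)

# Measuring the same components in the same order always reproduces the value,
# so a remote verifier can compare it against a known-good measurement.
expected = b"\x00" * 32
for component in boot_chain:
    expected = extend(expected, component)
assert pcr == expected
```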
Database:
Tokenisation: Tokenisation, when applied to data
security, is the process of substituting a sensitive data element with a
non-sensitive equivalent, referred to as a token, that has no intrinsic
or exploitable meaning or value. The token is a reference that maps back
to the sensitive data through a tokenisation system.
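A minimal tokenisation sketch, assuming a simple in-memory vault (a real tokenisation system keeps the vault in a separate, hardened service):

```python
import secrets

# The vault maps meaningless tokens back to sensitive values; the token
# itself has no exploitable meaning.
vault = {}

def tokenise(sensitive):
    token = secrets.token_hex(8)  # random, no intrinsic value
    vault[token] = sensitive
    return token

def detokenise(token):
    return vault[token]

token = tokenise("4111 1111 1111 1111")
assert detokenise(token) == "4111 1111 1111 1111"
assert token != "4111 1111 1111 1111"  # token reveals nothing about the card
```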
Salting: In cryptography, a salt is random data
that is used as an additional input to a one-way function that hashes
data, a password or passphrase. A new salt is randomly generated for
each password. Typically, the salt and the password (or its version
after key stretching) are concatenated and fed to a cryptographic hash
function, and the output hash value (but not the original password) is
stored with the salt in a database. Hashing allows later authentication
without keeping and therefore risking exposure of the plaintext password
if the authentication data store is compromised.
Hashing: A hash function is any function that can
be used to map data of arbitrary size to fixed-size values. Hashing is
one-way - once data is hashed, the original data can’t be retrieved from
the hash.
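Salting and hashing together can be sketched as follows. The single SHA-256 round is for illustration only; real systems should use a slow key-derivation function such as PBKDF2, bcrypt, or Argon2 (Python's `hashlib.pbkdf2_hmac` is a stdlib option):

```python
import hashlib
import os

def hash_password(password, salt=None):
    # A fresh random salt per password; store (salt, digest), never the
    # plaintext password.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

def verify_password(password, salt, stored_digest):
    return hash_password(password, salt)[1] == stored_digest

salt, digest = hash_password("hunter2")
assert verify_password("hunter2", salt, digest)
assert not verify_password("wrong-guess", salt, digest)
```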
Application security:
Input validation: Input validation is performed to
ensure only properly formed data is entering the workflow in an
information system, preventing malformed data from persisting in the
database and triggering malfunction of various downstream components.
This commonly occurs with web forms, where malicious input data could be
used to exploit SQL systems.
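A minimal allow-list validation sketch; the username pattern here is an illustrative assumption, not a standard:

```python
import re

# Accept only input matching a strict pattern, rather than trying to
# block known-bad strings.
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,20}")

def valid_username(value):
    return bool(USERNAME_RE.fullmatch(value))

assert valid_username("alice_01")
assert not valid_username("alice'; DROP TABLE users;--")
```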
Secure cookies: Secure cookies are a type of HTTP
cookie that have the Secure attribute set, which limits the scope of the
cookie to “secure” channels. When a cookie has the Secure attribute, the
user agent will include the cookie in an HTTP request only if the
request is transmitted over a secure channel.
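As an illustration, such a header might look like this (the cookie name and value are placeholders; HttpOnly and SameSite are commonly paired with Secure):

```
Set-Cookie: session=abc123; Secure; HttpOnly; SameSite=Strict
```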
HTTP headers: HTTP headers let the client and the
server pass additional information with an HTTP request or response. An
HTTP header consists of its case-insensitive name followed by a colon
(:), then by its value. Whitespace before the value is ignored.
Code signing: Code signing is a method of putting a
digital signature on a program, file, software update or executable, so
that its authenticity and integrity can be verified upon installation
and execution. Like a wax seal, it guarantees to the recipient who the
author is, and that it hasn’t been opened and tampered with.
Whitelist/allow list: A list of items explicitly
allowed.
Blacklist/block list/deny list: A list of items
explicitly denied.
Secure coding practices:
Input validation: Input validation is the process
of testing input received by the application for compliance against a
standard defined within the application, such as validating data input
via a HTML form to ensure there are no SQL injection attacks present. https://cheatsheetseries.owasp.org/cheatsheets/Input_Validation_Cheat_Sheet.html
Output encoding: Encoding and escaping are
defensive techniques meant to stop injection attacks. Encoding (commonly
called “Output Encoding”) involves translating special characters into
some different but equivalent form that is no longer dangerous in the
target interpreter, for example translating the < character into the
&lt; string when writing to an HTML page. https://cheatsheetseries.owasp.org/cheatsheets/Web_Service_Security_Cheat_Sheet.html#output-encoding
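For example, Python's stdlib `html.escape` performs exactly this translation:

```python
import html

# Encode user-controlled text before writing it into an HTML page so it
# is displayed as text rather than interpreted as markup.
user_input = "<script>alert('xss')</script>"
encoded = html.escape(user_input)
assert encoded == "&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;"
```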
Session management: In order to keep the
authenticated state and track the user's progress within the web
application, applications provide users with a session identifier
(session ID or token) that is assigned at session creation time, and is
shared and exchanged by the user and the web application for the
duration of the session (it is sent on every HTTP request). The session
ID is a name=value pair. The name used by the session ID should not be
extremely descriptive nor offer unnecessary details about the purpose
and meaning of the ID. The session ID must be long enough to prevent
brute force attacks. The session ID must be unpredictable (random
enough) to prevent guessing attacks, where an attacker is able to guess
or predict the ID of a valid session through statistical analysis
techniques. The session ID content (or value) must be meaningless to
prevent information disclosure attacks, where an attacker is able to
decode the contents of the ID and extract details of the user, the
session, or the inner workings of the web application.
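Generating an ID that meets these properties can be sketched with Python's `secrets` module; the 32-byte length is an illustrative choice:

```python
import secrets

def new_session_id():
    # 32 random bytes (~256 bits of entropy), URL-safe encoded: long,
    # unpredictable, and meaningless to an attacker.
    return secrets.token_urlsafe(32)

sid = new_session_id()
assert len(sid) >= 32            # long enough to resist brute force
assert sid != new_session_id()   # practically never repeats
```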
Static code analysis: Static analysis, also called
static code analysis, is a method of computer program debugging that is
done by examining the code without executing the program. The process
provides an understanding of the code structure and can help ensure that
the code adheres to industry standards. Static analysis is used in
software engineering by software development and quality assurance
teams. Automated tools can assist programmers and developers in carrying
out static analysis. The software will scan all code in a project to
check for vulnerabilities while validating the code.
Manual code review: Manual code review involves a
human looking at source code, line by line, to find
vulnerabilities.
Dynamic code analysis: Dynamic code analysis – also
called Dynamic Application Security Testing (DAST) – is designed to test
a running application for potentially exploitable vulnerabilities. DAST
tools identify both compile-time and runtime vulnerabilities, such as
configuration errors that only appear within a realistic execution
environment. A DAST tool uses a dictionary of known vulnerabilities and
malicious inputs to “fuzz” an application.
Fuzzing: In programming and software development,
fuzzing or fuzz testing is an automated software testing technique that
involves providing invalid, unexpected, or random data as inputs to a
computer program.
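A toy fuzzer can be sketched as follows; `parse_command` and its bug are invented for illustration:

```python
import random

def parse_command(text):
    # Hypothetical function under test: returns the first word of a
    # command line. Bug: it assumes there is always at least one word.
    return text.split()[0]

def crashes(func, data):
    """Return True if func raises an exception on this input."""
    try:
        func(data)
        return False
    except Exception:
        return True

random.seed(1)
# Boundary cases plus random printable strings as fuzz inputs.
corpus = ["", " ", "\t"] + [
    "".join(chr(random.randrange(32, 127)) for _ in range(random.randrange(1, 10)))
    for _ in range(500)
]
crashing = [s for s in corpus if crashes(parse_command, s)]
assert "" in crashing  # the fuzzer found the empty-input bug
```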
Hardening:
Open ports and services: It’s best practice to only
open ports to the internet that are essential.
Registry: The Windows Registry can be monitored and
hardened to identify and protect from exploits.
Disk encryption: FDE should be used where
possible.
OS: Operating systems should be kept up to date.
Patch management:
Third party updates: Occur on Windows/macOS as
updaters are bundled with applications.
Auto update: Should be used on workstations.
Self-encrypting drive (SED)/Full-disk encryption
(FDE): Self-encrypting drives (SEDs) encrypt data as it is
written to the disk.
OPAL: OPAL is a set of specifications for
self-encrypting drives developed by the Trusted Computing Group.
Hardware root of trust: Root of trust is a concept
that starts a chain of trust needed to ensure computers boot with
legitimate code. If the first piece of code executed has been verified
as legitimate, those credentials are trusted by the execution of each
subsequent piece of code. Hardware root of trust typically involves
encryption keys or digital certificates being built into hardware in a
way that they can’t be altered.
Trusted Platform Module (TPM): Trusted Platform
Module is an international standard for a secure cryptoprocessor, a
dedicated microcontroller designed to secure hardware through integrated
cryptographic keys. The term can also refer to a chip conforming to the
standard.
Sandboxing: Sandboxing is a cybersecurity practice
where you run, observe, and analyze code in a safe, isolated
environment on a network that mimics end-user operating environments.
Sandboxing is designed to prevent threats from getting on the network
and is frequently used to inspect untested or untrusted code.
3.3 Given a
scenario, implement secure network designs.
Load balancing: Load balancing is the process of
distributing network traffic across multiple servers. This ensures no
single server bears too much demand. By spreading the work evenly, load
balancing improves application responsiveness. It also increases
availability of applications and websites for users.
Active/active: In Active/Active mode, two or more
servers aggregate the network traffic load, and working as a team, they
distribute it to the network servers. The load balancers can also
remember information requests from users and keep this information in
cache.
Active/passive: In an active-passive configuration,
the server load balancer recognises a failed node and redirects traffic
to the next available node.
Scheduling: There are various load balancing
methods available, and each method uses a particular criterion to
schedule incoming traffic. Some of the common load balancing methods are
as follows:
Round robin: In this method, an incoming request is
routed to each available server in a sequential manner.
Weighted round robin: Here, a static weight is
preassigned to each server and is used with the round robin method to
route an incoming request.
Least connection: This method reduces the overload
of a server by assigning an incoming request to a server with the lowest
number of connections currently maintained.
Weighted least connection: In this method, a weight
is added to a server depending on its capacity. This weight is used with
the least connection method to determine the load allocated to each
server.
Fixed weighted: In this method, the weight of each
server is preassigned and most of the requests are routed to the server
with the highest priority. If the server with the highest priority
fails, the server that has the second highest priority takes over the
services.
Weighted response: Here, the response time from
each server is used to calculate its weight.
Source IP hash: In this method, an IP hash is used
to find the server that must attend to a request.
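A few of these scheduling methods can be sketched in a few lines; the server addresses and connection counts are placeholders:

```python
import itertools

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # hypothetical backends

# Round robin: route each request to the next server in sequence.
rr = itertools.cycle(servers)
assert [next(rr) for _ in range(4)] == ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.1"]

# Least connection: route to the server with the fewest active connections.
active = {"10.0.0.1": 12, "10.0.0.2": 3, "10.0.0.3": 7}
def least_connection(active):
    return min(active, key=active.get)
assert least_connection(active) == "10.0.0.2"

# Source IP hash: the same client consistently reaches the same server.
def source_ip_hash(client_ip, servers):
    return servers[hash(client_ip) % len(servers)]
assert source_ip_hash("203.0.113.9", servers) == source_ip_hash("203.0.113.9", servers)
```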
Virtual IP: A virtual IP address (VIP) is an IP
address that does not correspond to a physical network interface. Uses
for VIPs include network address translation (especially, one-to-many
NAT), fault-tolerance, and mobility.
Persistence: Session Persistence (sometimes called
sticky sessions) involves directing a user’s requests to one application
or backend web server for the duration of a “session.” The session is
the time it takes a user to complete a transaction or task that might
include multiple requests.
Network segmentation:
VLAN: In essence, a VLAN is a collection of devices
or network nodes that communicate with one another as if they made up a
single LAN, when in reality they exist in one or several LAN segments.
In a technical sense, a segment is separated from the rest of the LAN by
a bridge, router, or switch, and is typically used for a particular
department. This means that when a workstation broadcasts packets, they
reach all other workstations on the VLAN but none outside it.
DMZ (Demilitarised Zone): In computer security, a
DMZ or demilitarized zone (sometimes referred to as a perimeter network
or screened subnet) is a physical or logical subnetwork that contains
and exposes an organization’s external-facing services to an untrusted,
usually larger, network such as the Internet. The purpose of a DMZ is to
add an additional layer of security to an organization’s local area
network (LAN): an external network node can access only what is exposed
in the DMZ, while the rest of the organization’s network is
firewalled.
East-west traffic: In computer networking,
east-west traffic is network traffic among devices within a specific
data center. The other direction of traffic flow is north-south traffic,
data flowing from or to a system physically residing outside the data
center.
Intranet: An intranet can be understood as a
private extension of the internet confined to an organisation.
Extranet: While an intranet connects employees
inside an organisation, an extranet connects employees to external
parties. An extranet is defined as: a controlled private network
allowing customers, partners, vendors, suppliers and other businesses to
gain information, typically about a specific company or educational
institution, and do so without granting access to the organization’s
entire network. In simpler words, an intranet is for your employees, and
an extranet is for external stakeholders.
Zero trust: Zero trust assumes there is no implicit
trust granted to assets or user accounts based solely on their physical
or network location (i.e., local area networks versus the internet) or
based on asset ownership (enterprise or personally owned).
Authentication and authorisation (both subject and device) are discrete
functions performed before a session to an enterprise resource is
established. Zero trust is a response to enterprise network trends that
include remote users, bring your own device (BYOD), and cloud-based
assets that are not located within an enterprise-owned network boundary.
Zero trust focuses on protecting resources (assets, services, workflows,
network accounts, etc.), not network segments, as the network location
is no longer seen as the prime component to the security posture of the
resource.
Virtual Private Network (VPN): A virtual private
network (VPN) extends a private network across a public network and
enables users to send and receive data across shared or public networks
as if their computing devices were directly connected to the private
network. The benefits of a VPN include increases in functionality,
security, and management of the private network. It provides access to
resources that are inaccessible on the public network and is typically
used for remote workers. Encryption is common, although not an inherent
part of a VPN connection.
Always-on: Always On VPN is Microsoft’s technology
for Windows 10 clients that replaces DirectAccess and provides secure
remote access for clients. As implied in the name, the VPN connection is
“always on” and is connected as soon as the internet connection is
established.
Split vs. full tunnel: Full tunnel means using your
VPN for all your traffic, whereas split tunneling means sending part of
your traffic through a VPN and part of it through the open network. This
means that full tunneling is more secure than split tunneling because it
encrypts all your traffic rather than just some of it.
Remote access vs. site to site: A remote access VPN
connects remote users from any location to a corporate network. A
site-to-site VPN, meanwhile, connects individual networks to each
other.
IPSec: In computing, Internet Protocol Security is
a secure network protocol suite that authenticates and encrypts packets
of data to provide secure encrypted communication between two computers
over an Internet Protocol network. It is used in virtual private
networks. The initial IPv4 suite was developed with few security
provisions. As a part of the IPv4 enhancement, IPsec is a layer 3 OSI
model or internet layer end-to-end security scheme. In contrast, while
some other Internet security systems in widespread use operate above the
network layer, such as Transport Layer Security (TLS) that operates
above the transport layer and Secure Shell (SSH) that operates at the
application layer, IPsec can automatically secure applications at the
internet layer.
SSL/TLS: SSL stands for Secure Sockets Layer and,
in short, it’s the standard technology for keeping an internet
connection secure and safeguarding any sensitive data that is being sent
between two systems, preventing criminals from reading and modifying any
information transferred, including potential personal details. It does
this by making sure that any data transferred between users and sites,
or between two systems remain impossible to read. It uses encryption
algorithms to scramble data in transit, preventing hackers from reading
it as it is sent over the connection. TLS (Transport Layer Security) is
just an updated, more secure, version of SSL.
HTML5: HTML5 is a markup language used for
structuring and presenting content on the World Wide Web. The HyperText
Markup Language or HTML is the standard markup language for documents
designed to be displayed in a web browser.
L2TP: In computer networking, Layer 2 Tunneling
Protocol (L2TP) is a tunneling protocol used to support virtual private
networks (VPNs) or as part of the delivery of services by ISPs. It uses
encryption (‘hiding’) only for its own control messages (using an
optional pre-shared secret), and does not provide any encryption or
confidentiality of content by itself. Rather, it provides a tunnel for
Layer 2 (which may be encrypted), and the tunnel itself may be passed
over a Layer 3 encryption protocol such as IPsec.
DNS: The Domain Name System (DNS) is the
hierarchical and distributed naming system used to identify computers
reachable through the Internet or other Internet Protocol (IP) networks.
The resource records contained in the DNS associate domain names with
other forms of information. These are most commonly used to map
human-friendly domain names to the numerical IP addresses computers need
to locate services and devices using the underlying network protocols,
but have been extended over time to perform many other functions as
well. The Domain Name System has been an essential component of the
functionality of the Internet since 1985. DNSSEC strengthens
authentication in DNS using digital signatures based on public key
cryptography.
Network Access Control (NAC): Network access
control (NAC) is an approach to computer security that attempts to unify
endpoint security technology (such as antivirus, host intrusion
prevention, and vulnerability assessment), user or system authentication
and network security enforcement. Network access control is a computer
networking solution that uses a set of protocols to define and implement
a policy that describes how to secure access to network nodes by devices
when they initially attempt to access the network. NAC might
integrate the automatic remediation process (fixing non-compliant nodes
before allowing access) into the network systems, allowing the network
infrastructure such as routers, switches and firewalls to work together
with back office servers and end user computing equipment to ensure the
information system is operating securely before interoperability is
allowed. A basic form of NAC is the 802.1X standard.
Example: When a computer connects to a computer
network, it is not permitted to access anything unless it complies with
a business defined policy; including anti-virus protection level, system
update level and configuration. While the computer is being checked by a
pre-installed software agent, it can only access resources that can
remediate (resolve or update) any issues. Once the policy is met, the
computer is able to access network resources and the Internet, within
the policies defined by the NAC system. NAC is mainly used for endpoint
health checks, but it is often tied to Role-based Access. Access to the
network will be given according to the profile of the person and the
results of a posture/health check. For example, in an enterprise the HR
department could access only HR department files if both the role and
the endpoint meets anti-virus minimums.
Agent and agentless: The fundamental idea behind
NAC is to allow the network to make access control decisions based on
intelligence about end-systems, so the manner in which the network is
informed about end-systems is a key design decision. A key difference
among NAC systems is whether they require agent software to report
end-system characteristics, or whether they use scanning and network
inventory techniques to discern those characteristics remotely. As NAC
has matured, software developers such as Microsoft have adopted the
approach, providing their network access protection (NAP) agent as part
of their Windows 7, Vista and XP releases; however, beginning with
Windows 10, Microsoft no longer supports NAP. There are also NAP
compatible agents for Linux and Mac OS X that provide equal intelligence
for these operating systems.
Out of band management: Out-of-band (OOB)
management is a networking term which refers to accessing and managing
network infrastructure at remote locations, and doing it through a
separate management plane from the production network.
Port security:
Broadcast storm prevention: A broadcast storm or
broadcast radiation is the accumulation of broadcast and multicast
traffic on a computer network. Extreme amounts of broadcast traffic
constitute a “broadcast storm”. It can consume sufficient network
resources so as to render the network unable to transport normal
traffic. Routers and firewalls can be configured to detect and prevent
maliciously induced broadcast storms. Broadcast storm control is a
feature of many managed switches in which the switch intentionally
ceases to forward all broadcast traffic if the bandwidth consumed by
incoming broadcast frames exceeds a designated threshold. Although this
does not resolve the root broadcast storm problem, it limits broadcast
storm intensity and thus allows a network manager to communicate with
network equipment to diagnose and resolve the root problem.
BPDU guard: BPDU Guard shuts down (error-disables) a switch port if a
Bridge Protocol Data Unit is received on a port where none is expected,
such as an access port, protecting the spanning tree topology from
rogue or misconfigured switches.
Loop prevention: The Spanning Tree Protocol (STP)
is a network protocol that builds a loop-free logical topology for
Ethernet networks. The basic function of STP is to prevent bridge loops
and the broadcast radiation that results from them. Spanning tree also
allows a network design to include backup links providing fault
tolerance if an active link fails.
DHCP snooping: In computer networking, DHCP
snooping is a series of techniques applied to improve the security of a
DHCP infrastructure. DHCP servers allocate IP addresses to clients on a
LAN. DHCP snooping can be configured on LAN switches to exclude rogue
DHCP servers and remove malicious or malformed DHCP traffic. In
addition, information on hosts which have successfully completed a DHCP
transaction is accrued in a database of bindings which may then be used
by other security or accounting features.
MAC filtering: In computer networking, MAC
Filtering refers to a security access control method whereby the MAC
address assigned to each network card is used to determine access to the
network.
Network appliances:
Jump servers: A jump server, jump host or jump box
is a system on a network used to access and manage devices in a separate
security zone. A jump server is a hardened and monitored device that
spans two dissimilar security zones and provides a controlled means of
access between them. The most common example is managing a host in a DMZ
from trusted networks or computers.
Proxy servers: In computer networking, a proxy
server is a server application that acts as an intermediary between a
client requesting a resource and the server providing that resource.
Forward: A forward proxy is an Internet-facing
proxy used to retrieve data from a wide range of sources (in most cases
anywhere on the Internet).
Reverse: A reverse proxy (or surrogate) is a proxy
server that appears to clients to be an ordinary server. Reverse proxies
forward requests to one or more ordinary servers that handle the
request. The response from the proxy server is returned as if it came
directly from the original server, leaving the client with no knowledge
of the original server.
NIDS/NIPS: Network-based Intrusion Detection
System/Network-based Intrusion Prevention System
Signature based: Compares packets against a database
of known attack signatures and acts accordingly when a match is found.
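A minimal signature-matching sketch; the signatures shown are illustrative byte patterns, not real IDS rules (which also match on offsets, protocol fields, and more):

```python
# Flag payloads containing known-bad byte patterns.
SIGNATURES = {
    b"' OR 1=1": "SQL injection probe",
    b"/etc/passwd": "path traversal attempt",
}

def inspect(payload):
    return [name for sig, name in SIGNATURES.items() if sig in payload]

assert inspect(b"GET /index.html") == []
assert inspect(b"GET /view?f=../../etc/passwd") == ["path traversal attempt"]
```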
Heuristic/behaviour: This analyses typical
behaviour on the system, and detects/prevents anything outside of that
normal range.
Inline vs. passive: This refers to the location of
the NIDS/NIPS device within the network. Is it off to the side receiving
traffic via a port mirror (passive) or does traffic need to flow through
it to reach a gateway/host (inline)?
HSM: A hardware security module (HSM) is a physical
computing device that safeguards and manages digital keys, performs
encryption and decryption functions for digital signatures, strong
authentication and other cryptographic functions. These modules
traditionally come in the form of a plug-in card or an external device
that attaches directly to a computer or network server. A hardware
security module contains one or more secure cryptoprocessor chips.
Sensors: A sensor can be anything from a network
tap to a firewall log; it is something that collects information about
your network and can be used to make judgement calls about your
network’s security.
Collectors: Network collectors collect and store
network traffic information.
Aggregators: An aggregator consolidates data from multiple collectors
or sensors into a single feed; in practice the term is often used
interchangeably with collector.
Firewalls: A firewall is a network security device
that monitors incoming and outgoing network traffic and permits or
blocks data packets based on a set of security rules.
Web Application Firewall (WAF): A WAF or web
application firewall helps protect web applications by filtering and
monitoring HTTP traffic between a web application and the Internet. It
typically protects web applications from attacks such as cross-site
forgery, cross-site-scripting (XSS), file inclusion, and SQL injection,
among others. A WAF is a protocol layer 7 defense (in the OSI model),
and is not designed to defend against all types of attacks. This method
of attack mitigation is usually part of a suite of tools which together
create a holistic defense against a range of attack vectors. A WAF is a
type of reverse-proxy, protecting the server from exposure by having
clients pass through the WAF before reaching the server.
NGFW: A next-generation firewall (NGFW) is a
security appliance that processes network traffic and applies rules to
block potentially dangerous traffic. NGFWs evolve and expand upon the
capabilities of traditional firewalls. They do all that firewalls do,
but more powerfully and with additional features, such as: packet
filtering, stateful inspection, VPN awareness, deep packet inspection,
and more.
Stateless: Stateless firewalls don’t remember any
previous state of data packets. They filter packets passing through the
firewall in real time according to a rule list configured on the
firewall itself. Rules could be anything from the destination or source
address, or anything in the header of the packet contents, and this will
determine whether the traffic is permitted into the network, or denied
access. This type of firewall is also known as a packet filtering
firewall.
Stateful: Just as its name suggests, a stateful
firewall remembers the state of the data that’s passing through the
firewall, and can filter according to deeper information than its
stateless friend. It will monitor all the parts of a traffic stream,
including TCP connection stages, status updates, and previous packet
activity. After a type of traffic has been approved, it will be added to
a kind of database (known as a state table or a connection table) so
that the stateful firewall works to make intelligent decisions about
these kinds of packets in the future. This type of firewall is also
called a dynamic packet filtering firewall, and an example is the
Microsoft Defender Firewall, often the default choice for PC users.
Unified Threat Management (UTM): Unified threat
management (UTM) is an approach to information security where a single
hardware or software installation provides multiple security functions.
This contrasts with the traditional method of having point solutions for
each security function. UTM simplifies information-security management
by providing a single management and reporting point for the security
administrator rather than managing multiple products from different
vendors. UTM appliances have been gaining popularity since 2009, partly
because the all-in-one approach simplifies installation, configuration
and maintenance. Such a setup saves time, money and people when compared
to the management of multiple security systems. Instead of having
several single-function appliances, all needing individual familiarity,
attention and support, network administrators can centrally administer
their security defenses from one computer. Some of the prominent UTM
brands are Cisco, Fortinet, Sophos, Netgear, FortiGate, Huawei,
WiJungle, SonicWall and Check Point. UTMs are now typically called
next-generation firewalls.
NAT gateway: Network address translation (NAT) is a
method of mapping an IP address space into another by modifying network
address information in the IP header of packets while they are in
transit across a traffic routing device. The majority of network address
translators map multiple private hosts to one publicly exposed IP
address. In a typical configuration, a local network uses one of the
designated private IP address subnets (RFC 1918). A router in that
network (which acts as the NAT gateway) has a private address of that
address space. The router is also connected to the internet with a
public address, typically assigned by an ISP. As traffic passes from the
local network to the internet, the source address in each packet is
translated on the fly from a private address to the public address. The
router tracks basic data about each active connection (particularly the
destination address and port). When a reply returns to the router, it
uses the connection tracking data it stored during the outbound phase to
determine the private address on the internal network to which to
forward the reply.
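The translation table the router keeps can be sketched as follows; the addresses and port numbers are illustrative (198.51.100.0/24 and 192.168.0.0/16 are documentation/private ranges):

```python
# One-to-many NAT sketch: the gateway rewrites the private source
# address/port and remembers the mapping so replies can be forwarded back.
PUBLIC_IP = "198.51.100.1"   # hypothetical ISP-assigned address
nat_table = {}               # public port -> (private ip, private port)
next_port = 40000

def outbound(private_ip, private_port):
    """Translate an outgoing connection; return the source seen externally."""
    global next_port
    public_port = next_port
    next_port += 1
    nat_table[public_port] = (private_ip, private_port)
    return PUBLIC_IP, public_port

def inbound(public_port):
    """Look up where to forward a reply on the internal network."""
    return nat_table[public_port]

src = outbound("192.168.1.10", 51515)
assert src == ("198.51.100.1", 40000)
assert inbound(40000) == ("192.168.1.10", 51515)
```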
Content/URL filter: URL filtering restricts what
web content users can access. It does this by blocking certain URLs from
loading. URL filtering bases its filtering policies on a database that
classifies URLs by topic and by “blocked” or “allowed” status. Typically
a company will not develop this database internally, relying instead on
the vendor providing the filtering service. However, most vendors allow
customers to customise which URLs are blocked or allowed. URL filtering takes
place at the application layer of the OSI.
Open source vs. proprietary: Open source software
can be scrutinised, and any vulnerabilities are likely to be exposed and
made public quickly (due to the open nature of the source code).
Proprietary software may have undiscovered bugs that are known by some
(potential attackers) but not by others (users and/or proprietors of the
software).
Hardware vs. software: Hardware networking
appliances can take computing load off traditional servers and computing
equipment. Software based solutions are much more dynamic, and can be
easily changed, updated, upgraded, and reconfigured.
Appliance vs. host-based vs. virtual: An appliance is a dedicated
hardware device, a host-based solution is software installed on an
individual endpoint, and a virtual solution runs as a virtual machine
or instance, often in the cloud.
Access Control List (ACL): In computer security, an
access-control list (ACL) is a list of permissions associated with a
system resource (object). An ACL specifies which users or system
processes are granted access to objects, as well as what operations are
allowed on given objects. Each entry in a typical ACL specifies a
subject and an operation. For instance, if a file object has an ACL that
contains (Alice: read,write; Bob: read), this would give Alice
permission to read and write the file and give Bob permission only to
read it.
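The example ACL above can be sketched directly:

```python
# The ACL from the text: (Alice: read,write; Bob: read).
acl = {"Alice": {"read", "write"}, "Bob": {"read"}}

def allowed(subject, operation, acl):
    # Unknown subjects get no access by default.
    return operation in acl.get(subject, set())

assert allowed("Alice", "write", acl)
assert allowed("Bob", "read", acl)
assert not allowed("Bob", "write", acl)
assert not allowed("Carol", "read", acl)
```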
Route security: Routing protocols such as OSPF and BGP should be
configured with authentication so that attackers cannot inject
malicious routes or hijack traffic.
Quality of Service (QoS): In the field of computer
networking and other packet-switched telecommunication networks, quality
of service refers to traffic prioritisation and resource reservation
control mechanisms rather than the achieved service quality. Quality of
service is the ability to provide different priorities to different
applications, users, or data flows, or to guarantee a certain level of
performance to a data flow. Quality of service is particularly important
for the transport of traffic with special requirements. In particular,
developers have introduced Voice over IP technology to allow computer
networks to become as useful as telephone networks for audio
conversations, as well as supporting new applications with even stricter
network performance requirements.
Implications of IPv6: IPv6’s vast address space makes network scanning
far less practical and removes the need for NAT, and IPsec support is
built into the protocol suite; however, it also introduces new attack
surface (e.g. rogue router advertisements), and security tooling must
handle both protocols during dual-stack operation.
Port spanning/port mirroring: Port mirroring is
used on a network switch to send a copy of network packets seen on one
switch port (or an entire VLAN) to a network monitoring connection on
another switch port.
Port taps: A network tap is a system that monitors
events on a local network. A tap is typically a dedicated hardware
device, which provides a way to access the data flowing across a
computer network. The network tap has (at least) three ports: an A port,
a B port, and a monitor port. A tap inserted between A and B passes all
traffic through unimpeded in real time, but also copies that same data
to its monitor port, enabling a third party to listen.
File integrity monitors: File integrity monitoring,
or FIM, is a technology that monitors and detects file changes that
could be indicative of a cyberattack.
3.4
Given a scenario, install and configure wireless security settings.
Cryptographic protocols:
WPA2: Wi-Fi Protected Access (WPA), Wi-Fi Protected
Access II (WPA2), and Wi-Fi Protected Access 3 (WPA3) are the three
security and security certification programs developed after 2000 by the
Wi-Fi Alliance to secure wireless computer networks. Ratified in 2004,
WPA2 replaced WPA. WPA2, which requires testing and certification by the
Wi-Fi Alliance, implements the mandatory elements of IEEE 802.11i. In
particular, it includes mandatory support for CCMP, an AES-based
encryption mode. From March 13, 2006, to June 30, 2020, WPA2
certification was mandatory for all new devices to bear the Wi-Fi
trademark.
WPA3: In January 2018, the Wi-Fi Alliance announced
WPA3 as a replacement to WPA2. Certification began in June 2018, and
WPA3 support has been mandatory since July 2020 for devices bearing the
“Wi-Fi CERTIFIED™” logo.
CCMP: Counter Mode Cipher Block Chaining Message
Authentication Code Protocol (Counter Mode CBC-MAC Protocol) or CCM mode
Protocol (CCMP) is an encryption protocol designed for Wireless LAN
products that implements the standards of the IEEE 802.11i amendment to
the original IEEE 802.11 standard. CCMP is an enhanced data
cryptographic encapsulation mechanism designed for data confidentiality
and based upon the Counter Mode with CBC-MAC (CCM mode) of the Advanced
Encryption Standard (AES) standard. It was created to address the
vulnerabilities presented by Wired Equivalent Privacy (WEP), a dated,
insecure protocol. CCMP is the standard encryption protocol for use with
the Wi-Fi Protected Access II (WPA2) standard and is much more secure
than the Wired Equivalent Privacy (WEP) protocol and Temporal Key
Integrity Protocol (TKIP) of Wi-Fi Protected Access (WPA).
SAE: In cryptography, Simultaneous Authentication
of Equals (SAE) is a password-based authentication and
password-authenticated key agreement method. SAE is a variant of the
Dragonfly Key Exchange defined in RFC 7664, based on Diffie–Hellman key
exchange using finite cyclic groups which can be a primary cyclic group
or an elliptic curve. Diffie–Hellman key exchange on its own provides no
authentication mechanism; SAE addresses this by deriving the resulting
key from a pre-shared key and the MAC addresses of both
peers. In January 2018, the Wi-Fi
Alliance announced WPA3 as a replacement to WPA2. The WPA3 standard
replaces the pre-shared key (PSK) exchange with Simultaneous
Authentication of Equals as defined in IEEE 802.11-2016 resulting in a
more secure initial key exchange in personal mode.
Authentication protocols:
EAP: Extensible Authentication Protocol (EAP) is an
authentication framework frequently used in network and internet
connections. It is defined in RFC 3748, which made RFC 2284 obsolete,
and is updated by RFC 5247. EAP is used to pass authentication
information between the supplicant (i.e. host/workstation/client) and
the authentication server.
PEAP: The Protected Extensible Authentication
Protocol, also known as Protected EAP or simply PEAP, is a protocol that
encapsulates EAP within a potentially encrypted and authenticated
Transport Layer Security (TLS) tunnel. PEAP was jointly developed by
Cisco Systems, Microsoft, and RSA Security.
EAP-FAST: Flexible Authentication via Secure
Tunneling (EAP-FAST; RFC 4851) is a protocol proposal by Cisco Systems
as a replacement for LEAP. The protocol was designed to address the
weaknesses of LEAP while preserving the “lightweight” implementation.
Use of server certificates is optional in EAP-FAST. EAP-FAST uses a
Protected Access Credential (PAC) to establish a TLS tunnel in which
client credentials are verified.
EAP-TLS: EAP Transport Layer Security (EAP-TLS),
defined in RFC 5216, is an IETF open standard that uses the Transport
Layer Security (TLS) protocol, and is well-supported among wireless
vendors. EAP-TLS is the original, standard wireless LAN EAP
authentication protocol.
EAP-TTLS: EAP Tunneled Transport Layer Security
(EAP-TTLS) is an EAP protocol that extends TLS. It was co-developed by
Funk Software and Certicom and is widely supported across platforms.
Microsoft did not incorporate native support for the EAP-TTLS protocol
in Windows XP, Vista, or 7. Supporting TTLS on these platforms requires
third-party Encryption Control Protocol (ECP) certified software.
Microsoft Windows added EAP-TTLS support in Windows 8; support appeared
in Windows Phone version 8.1. The client can, but does
not have to, be authenticated via a CA-signed PKI certificate to the
server. This greatly simplifies the setup procedure since a certificate
is not needed on every client. After the server is securely
authenticated to the client via its CA certificate and optionally the
client to the server, the server can then use the established secure
connection (“tunnel”) to authenticate the client.
IEEE 802.1X: IEEE 802.1X is an IEEE Standard for
port-based Network Access Control (PNAC). It is part of the IEEE 802.1
group of networking protocols. It provides an authentication mechanism
to devices wishing to attach to a LAN or WLAN. IEEE 802.1X defines the
encapsulation of the Extensible Authentication Protocol (EAP) over wired
IEEE 802 networks and over 802.11 wireless networks, which is known as
“EAP over LAN” or EAPOL.
RADIUS: Remote Authentication Dial-In User Service
(RADIUS) is a networking protocol that provides centralized
authentication, authorization, and accounting (AAA) management for users
who connect and use a network service. RADIUS was developed by
Livingston Enterprises in 1991 as an access server authentication and
accounting protocol. It was later brought into IEEE 802 and IETF
standards. RADIUS is a client/server protocol that runs in the
application layer, and can use either TCP or UDP. Network access
servers, which control access to a network, usually contain a RADIUS
client component that communicates with the RADIUS server. RADIUS is
often the back-end of choice for 802.1X authentication. A RADIUS server
is usually a background process running on UNIX or Microsoft
Windows.
Methods:
PSK vs. Enterprise vs. Open: PSK mode uses one
shared key (password) which all users use to connect to the wireless
network. WPA Enterprise provides the security needed for wireless
networks in business environments; it is more complicated to set up, but
offers individualised and centralised control over access to your Wi-Fi
network, with users presenting their own login credentials when they
connect. Open networks require no authentication at all, and traffic on
them is typically unencrypted.
WPS: Wi-Fi Protected Setup (WPS) is a feature
supplied with many routers. It is designed to make the process of
connecting to a secure wireless network from a computer or other device
easier. Basically, you push a button, and then you can connect to the
network from your device without a password.
Captive portals: A captive portal is a web page
accessed with a web browser that is displayed to newly connected users
of a Wi-Fi or wired network before they are granted broader access to
network resources. Captive portals are commonly used to present a
landing or log-in page which may require authentication, payment,
acceptance of an end-user license agreement, acceptable use policy,
survey completion, or other valid credentials that both the host and
user agree to adhere to. Depending on the feature set of the gateway,
websites or TCP ports can be white-listed so that the user would not
have to interact with the captive portal in order to use them. The MAC
address of attached clients can also be used to bypass the login process
for specified devices.
Installation considerations:
Site surveys: Surveys to determine the layout of a
building/site and the site factors that may affect wireless
networking.
Heat maps: A WiFi heatmap is a visual
representation of the wireless signal coverage and strength of an
area.
WiFi analysers: Devices or software that can
analyse the WiFi signals in the area.
Channel overlaps: Occurs when multiple wireless
networks broadcast on the same wireless channel.
WAP placement: Should be done logically and
strategically.
Controller and access point security: WiFi
controllers and access points should be physically isolated from
threats.
3.5 Given a
scenario, implement secure mobile solutions.
Connection methods and receivers:
Cellular: A mobile broadband modem, also known as
wireless modem or cellular modem, is a type of modem that allows a
personal computer or a router to receive wireless Internet access via a
mobile broadband connection instead of using telephone or cable
television lines. Phones connect to cell towers which separate the
mobile network into individual cells. IMSI (International Mobile
Subscriber Identity) catchers (AKA fake/false cell towers or stingrays)
can be set up to intercept communications over a mobile network.
WiFi: WiFi is subject to multiple potential
threats, including evil twin attacks, jamming/interference, and rogue
APs.
Bluetooth: Threats related to Bluetooth include
bluesnarfing, eavesdropping, and the sending of unwanted files including
malware.
NFC: A subset of RFID. NFC is used for
close-proximity data exchange. It can be complemented with RFID
capabilities to extend the range of an NFC tag. Apple Pay, Google Pay,
credit/debit cards and POS terminals, access control systems, and some
passports all use NFC. NFC systems are subject to MITM attacks, RFID
skimming, and signal duplication/replay attacks.
Infrared: TV remote controls and garage door
openers use IR. It’s easy to find programmable IR transmitters that can
be used on property not owned by the person with the transmitter.
USB: Because USB is used so broadly, there are many
ways a USB device could be used maliciously. One threat posed by USB
storage devices is the automated delivery of payloads via hotplugging a
device capable of keystroke injection.
Point to point: Self explanatory. Risks are the same as the
underlying technology (i.e. WiFi, Bluetooth).
Point to multipoint: Self explanatory. Risks are
the same as the underlying technology (i.e. WiFi, Bluetooth).
GPS: GPS connectivity can be abused at the
application level - someone could install a program to constantly
monitor and report your whereabouts.
RFID: Similar to NFC, but communication is limited
to simplex.
Mobile Device Management (MDM): Mobile device
management is the administration of mobile devices, such as smartphones,
tablet computers, and laptops. MDM is usually implemented with the use
of a third-party product that has management features for particular
vendors of mobile devices.
Application management: Applications can be
remotely installed, removed, updated, downgraded, and queried.
Installable applications may be limited to a certain set, specified by
the MDM administrator. Certain applications may also be prevented from
being installed on the user's device.
Content management: Content management refers to
the managing of data on the mobile device. For example, policies could
be set to retrieve a certain resource from a certain location, rather
than the default. It also refers to the management of the location and
way in which data is stored on the device.
Remote wipe: Devices can be wiped and restored
remotely.
Geofencing: Features on the mobile device can be
enabled/disabled while the reported location of the mobile device is
outside a certain, admin defined area.
Geolocation: Mobile devices can be queried for
their exact location.
Screen locks: Minimum time without activity before
the device is locked can be dictated via MDM.
Push notifications: Arbitrary push notifications
may be able to be sent to mobile devices enrolled in the MDM scheme.
Passwords and pins: Certain unlocking requirements
can be implemented, such as using complex characters in a password with
a minimum mandatory length, or disabling PINs altogether.
Biometrics: Requirement of biometrics to unlock MDM
devices or access certain features can be implemented.
Context aware authentication: This refers to
implementing a required combination of factors (such as time, location,
and battery power remaining) to coincide in order to allow, disallow, or
in some way alter some functionality (i.e. an app may only be available
under specific conditions).
Containerisation: All modern mobile OSes natively
support and enforce application containerisation.
Storage segmentation: This refers to storage
partitioning. A certain partition of storage on the mobile device may be
configured as read only, for example.
Full device encryption: All modern mobile OSes
natively support FDE. Data on the storage is encrypted at rest.
Mobile devices:
MicroSD HSMs: A hardware security module (HSM) is a
physical computing device that safeguards and manages digital keys,
performs encryption and decryption functions for digital signatures,
strong authentication and other cryptographic functions. An HSM can be
implemented via a device that fits into the microSD slot of a mobile
device.
MDM/UEM: Mobile Device Management/Unified Endpoint
Management refers to software suites that enable centralised control and
management of mobile devices.
MAM: Mobile Application Management. Refers to
hosting a repository of approved/patched/updated apps, through which
devices enrolled in the MDM program source their downloads.
SEAndroid: Security Enhancements (SE) for Android
was an NSA-led project that created and released an open source
reference implementation of how to enable and apply SELinux (Security
Enhanced Linux - a Linux kernel security module that provides a
mechanism for supporting access control security policies, including
mandatory access controls) to Android, made the case for adopting
SELinux into mainline Android, and worked with the Android Open Source
Project (AOSP) to integrate the changes into mainline Android. As a
result, SELinux is now a core part of Android.
Enforcement and monitoring of:
Third party application stores: iOS does not allow
access to third party application stores. Android does; however, this may
be disabled through MDM policies.
Rooting/Jailbreaking: Rooting or jailbreaking a
phone refers to gaining root or administrator access to the underlying
system. This inherently introduces a greater general security risk for
the device and is probably frowned upon by employers who provide their
employees with mobile devices. A rooted device could potentially reverse
the effects of policies implemented through MDM.
Sideloading: Sideloading is the practice of
installing software on a device without using the approved app store or
software distribution channel.
Custom firmware: This typically only applies to
Android devices. Because Android is open source, people modify and
redistribute AOSP. These custom versions are known as custom ROMs or
custom firmware. Use/installation of custom ROMs may be disallowed or
made impossible via MDM.
Carrier unlocking: Some countries, carriers, and
mobile phone plans lock the device to a certain carrier/network. They
usually provide carrier unlocking - allowing you to use the device with
any carrier/network - for a fee, or for free after a certain time
period.
Firmware OTA updates: This simply refers to the
process of phones receiving OS/firmware/software updates from the
manufacturer (or carrier). These updates may be enforced or disabled via
MDM.
Camera use: In certain situations, camera use may
not be permitted. MDM policies can be used to enforce this, based on
things such as location and proximity to a WiFi gateway for
example.
SMS/MMS/RCS: SMS/MMS/RCS has been, and continues to
be, used as an attack vector. It can be disabled via MDM.
External media: External media that gets connected
to mobile devices can potentially contain malware. This
functionality can be disabled via MDM.
USB OTG: USB On-The-Go is a specification first
used in late 2001 that allows USB devices, such as tablets or
smartphones, to act as a host, allowing other USB devices, such as USB
flash drives, digital cameras, mice, or keyboards, to be attached to
them. It may be the method via which external media is connected to a
mobile device. Connecting external devices to mobile devices introduces
security risks. This functionality can be disabled via MDM.
Recording microphone: Certain high security
environments may disallow the use of audio recording devices. MDM
policies can be set up to disable the use of the microphone completely
or if certain conditions are met.
GPS tagging: This refers to the addition of
metadata containing the current location to a file, such as when a photo
is taken.
WiFi direct/ad hoc: Connecting to devices via
ad-hoc WiFi, like cheap Chinese robot vacuums or IoT devices, is
insecure, as you don’t have the security benefits of having a router
between your mobile device and the device you’re connecting to.
Tethering/Hotspot: This refers to the
wired/wireless connection of a mobile device to another device such as a
laptop, in order to make use of the mobile device’s internet connection
on the other device (laptop). Employees on a locked down corporate
network may do this to work around a URL block or something similar.
This functionality can be disabled, or routed in a custom way, via
MDM.
Payment methods: Apple Pay/Google Pay can be
disabled or restricted via MDM if required.
Deployment models:
BYOD: Bring Your Own Device.
COPE: Corporate Owned, Personally Enabled.
CYOD: Choose Your Own Device (from a preselected
list of devices).
COBO: Corporate Owned, Business Only.
VDI: Virtual Desktop Infrastructure. Desktop
virtualisation is a software technology that separates the desktop
environment and associated application software from the physical client
device that is used to access it. AKA thin clients.
3.6
Given a scenario, apply cybersecurity solutions to the cloud.
Cloud security controls:
High availability across zones: Zones in this sense
refers to separate geographic regions, each containing a data center (or
part of a data center) operated by the cloud service provider. This
brings advantages in terms of stability, speed, latency, and
availability independent of the location of the end user. In terms of
availability, if a zone goes down, there will always be a backup, and
thus availability is ensured.
Resource policies: This refers to policies
dictating who gets access to cloud resources, and what they get access
to.
Secrets management: This refers to the management
of secret API keys, passwords, certificates and other sensitive data
required to use cloud services. Access to secrets can be managed and
logged, and audits should occur at a designated frequency.
Integration and auditing: Any appliance or service
that has the potential to be implicated in a security risk, such as
firewalls, routers, switches, VPNs, servers, and secret managers, should be
audited frequently to uncover security breaches and
vulnerabilities.
Storage:
Permissions: Data can be configured in a way so
that it is inaccessible to the public. Access can also be configured in
a more specific way, specifically permitting who can access certain
data, and how much they can access.
Encryption: Data stored in the cloud should be
encrypted.
Replication: Cloud services typically handle data
replication. Data replication is the process by which data residing on a
physical/virtual server(s) or cloud instance (primary instance) is
continuously replicated or copied to a secondary server(s) or cloud
instance (standby instance).
High availability: High availability is a quality
of computing infrastructure that allows it to continue functioning, even
when some of its components fail. This is important for mission-critical
systems that cannot tolerate interruption in service, and any downtime
can cause damage or result in financial loss. Most cloud providers
provide SLAs defining the level of accepted downtime (typically 0.01%)
before they are held liable.
Network:
Virtual networks: Cloud providers enable the setup
and use of entire virtual networks to connect your virtual
infrastructure. This includes virtual switches, routers, firewalls, and
pretty much any other network appliance you can think of.
Public and private subnets: Virtual servers can be
exposed to the public via a public IP address, or be restricted to a
private subnet accessible only via certain other hosts.
Segmentation: Due to the ease of spinning up and
pulling down servers in the cloud, it’s typical to segment functionality
between servers. For example, you may have a server solely for running a
database, and another which runs an application. Whilst the application
and database work together and are used together, they are virtually
(and perhaps physically) segmented.
API inspection and integration: API calls are an
attack vector and should be monitored. Monitoring of API requests may be
able to be integrated into a SIEM dashboard.
Compute:
Security groups: Levels of security can be defined
according to any number of factors, including IP address.
Dynamic resource allocation: AKA elasticity and
scalability, this refers to the ability of cloud compute instances to
scale their resource pool up or down (with a financial consequence) as
utilisation increases/decreases.
Instance awareness: There are different types of
“instances” (virtual servers) available to be chosen from with most
cloud service providers.
VPC endpoint: VPC endpoints allow traffic to flow
between a VPC (Virtual Private Cloud) and other resources hosted by the
same cloud provider, without ever leaving the cloud provider's network
(i.e. reaching the internet).
Container security: Containers in this context
refers to LXC, LXD, Docker, and Podman containers, etc.
Solutions:
CASB: A Cloud Access Security Broker is on-premises
or cloud based software that sits between cloud service users and cloud
applications, and monitors all activity and enforces security
policies.
Application security: As well as all of the other
security considerations, applications themselves need to be correctly
configured in order to be secure.
Next generation SWG: A secure web gateway protects
an organisation from online security threats and infections by enforcing
company policy and filtering Internet-bound traffic. A secure web
gateway is an on-premise or cloud-delivered network security service.
Sitting between users and the Internet, secure web gateways provide
advanced network protection by inspecting web requests against company
policy to ensure malicious applications and websites are blocked and
inaccessible. A secure web gateway includes essential security
technologies such as URL filtering, application control, data loss
prevention, antivirus, and https inspection to provide organisations
with strong web security.
Firewall considerations:
Cost: Physical firewalls are expensive. Virtual
firewalls are free or cheaper.
Cloud native controls vs. third-party solutions:
Third-party controls often have advantages, such as: they provide a
consistent interface regardless of the cloud service provider; they may
be able to interact with multiple cloud service providers
simultaneously; and they may provide enhanced reporting mechanisms and
functionality.
3.7
Given a scenario, implement identity and account management
controls.
Identity:
IdP: Identity Provider. A system or entity that
enables authentication; facilitates single sign-on to cloud services,
among other applications.
Attributes: This refers to the attributes or data a
user provides when signing up or authenticating with a
service (i.e. name, email address, date of birth).
Certificates: PKI can be used to identify an
individual via a digital certificate.
SSH keys: SSH keys use PKI methods to authenticate
individuals.
Smart cards: Smart cards are physical devices that,
when connected to a computer, authenticate the user as being who they
say they are.
Account types:
User account: An account associated with an
individual.
Shared and generic accounts/credentials: An account
associated with more than one person.
Guest accounts: An account specifically designated
for guests to use.
Service accounts: An account used by an application
or service, rather than a person, to interact with the operating system
or other services.
Account policies:
Password complexity: Refers to the types of
characters (i.e. letters, numbers, symbols) in a password. May also
refer to the length of the password.
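A complexity policy like the one described can be sketched as a small check. The specific thresholds here (12-character minimum; at least one letter, digit, and symbol) are illustrative assumptions, not a standard:

```python
import re

def meets_policy(password, min_length=12):
    """Check an example complexity policy: length plus character classes."""
    return (
        len(password) >= min_length
        and re.search(r"[A-Za-z]", password) is not None   # at least one letter
        and re.search(r"\d", password) is not None          # at least one digit
        and re.search(r"[^A-Za-z0-9]", password) is not None  # at least one symbol
    )
```

Real-world guidance (e.g. NIST SP 800-63B) now favours length and breach-list checks over forced character classes, so treat this purely as a demonstration of the mechanism.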
Password reuse: Policies may be enforced which
prevent users from using passwords that they have previously used (in
the past x days).
Network location: Users may be prevented from
accessing resources or logging in according to their current reported
location.
Geofencing: Geofencing refers to creating a set of
rules that allow certain behaviour while the user is within a defined
geographic area, and disallow that behaviour while the user is outside
of that area.
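The allow/deny idea behind geofencing can be sketched with a simple latitude/longitude bounding box. Real systems typically use radii or polygons, and the coordinate values below are hypothetical:

```python
# Hypothetical fence roughly covering an office area (illustrative values).
OFFICE_FENCE = {
    "lat_min": -33.92, "lat_max": -33.80,
    "lon_min": 151.10, "lon_max": 151.30,
}

def inside_fence(lat, lon, fence=OFFICE_FENCE):
    """Return True if the reported location falls within the fence."""
    return (fence["lat_min"] <= lat <= fence["lat_max"]
            and fence["lon_min"] <= lon <= fence["lon_max"])
```

A policy engine would then allow the fenced behaviour when `inside_fence` is true and disallow it otherwise.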
Geotagging: Refers to the tagging of files with
location metadata, such as when someone takes a photo. This may be
disabled via MDM or similar tools.
Geolocation: Refers to the ability of an
administrator to query the location of a device.
Time-based logins: Users may be restricted to
logging in or accessing resources only during a certain time
period.
Access policies: All-inclusive term that defines
the degree of access granted to use a particular resource, data,
systems, or facilities.
Account permissions: Sets of attributes that
network administrators assign to users and groups to define what they
can do to resources.
Account audits: It’s common in most environments to
perform periodic audits to make sure that all of the policies configured
are indeed being used on the systems.
Impossible travel time/risky login: If someone logs
in from Sydney, and then five minutes later their traffic starts coming
from the Bahamas, something may be wrong.
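The Sydney-to-Bahamas example can be expressed as a speed check: compute the great-circle distance between two consecutive logins and flag the pair if the implied speed exceeds what an airliner could manage. The 900 km/h threshold is an illustrative assumption:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

def is_impossible_travel(loc1, loc2, hours_apart, max_speed_kmh=900):
    """Flag a login pair whose implied travel speed is implausible."""
    distance = haversine_km(*loc1, *loc2)
    return distance / max(hours_apart, 1e-9) > max_speed_kmh
```

Sydney to Nassau in five minutes implies a speed of well over 100,000 km/h, so the pair is flagged; Sydney to Melbourne in two hours is not.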
Lockout: This refers to enforcing policies such as
“after 3 incorrect password attempts, you won’t be able to attempt to
log in for 30 minutes” and similar.
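The "3 attempts, 30 minutes" policy above can be sketched as a small tracker. This is a toy in-memory model, not a production mechanism (which would also need persistence and protection against enumeration):

```python
import time

class LockoutPolicy:
    """Toy account-lockout tracker: N failures locks the account for a period."""

    def __init__(self, max_attempts=3, lockout_seconds=30 * 60):
        self.max_attempts = max_attempts
        self.lockout_seconds = lockout_seconds
        self.failures = {}       # username -> consecutive failure count
        self.locked_until = {}   # username -> unlock timestamp

    def is_locked(self, user, now=None):
        now = time.time() if now is None else now
        return self.locked_until.get(user, 0) > now

    def record_failure(self, user, now=None):
        now = time.time() if now is None else now
        self.failures[user] = self.failures.get(user, 0) + 1
        if self.failures[user] >= self.max_attempts:
            self.locked_until[user] = now + self.lockout_seconds
            self.failures[user] = 0  # reset counter once the lock is applied

    def record_success(self, user):
        self.failures.pop(user, None)  # successful login clears the counter
```

The `now` parameter exists only so the behaviour can be exercised deterministically.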
Disablement: Refers to the complete disabling of
user accounts, if necessary.
3.8
Given a scenario, implement authentication and authorisation
solutions.
Authentication management:
Password keys: This refers to a physical device
like a YubiKey or OnlyKey, which can provide an additional factor in a
multifactor authentication based login process.
Password vaults: This refers to a software based
password manager (i.e. KeePass, BitWarden).
TPM: Trusted Platform Module is an international
standard for a secure cryptoprocessor, a dedicated microcontroller
designed to secure hardware through integrated cryptographic keys. The
term can also refer to a chip conforming to the standard. A TPM protects
its own keys.
HSM: A hardware security module (HSM) is a
dedicated crypto processor that is specifically designed for the
protection of the crypto key lifecycle. A HSM protects foreign
keys.
Knowledge based authentication: This refers to
login processes involving questions such as “what was the name of your
first pet?” or “what is your mother's maiden name?”.
Authentication/authorisation:
EAP: Extensible Authentication Protocol. A
framework that is utilised by other authentication protocols.
CHAP: Challenge Handshake Authentication Protocol.
A step up from PAP: the password is never sent over the link; instead,
the client proves knowledge of it by returning a hash computed over a
server-issued challenge and the shared secret.
PAP: Password Authentication Protocol. A basic
method of authentication, unencrypted.
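The contrast between PAP (password in the clear) and CHAP (challenge-response) can be sketched using CHAP's actual computation: per RFC 1994, the response is MD5 over the message identifier, the shared secret, and the challenge. This is a toy illustration, not a PPP implementation; MD5 appears only for fidelity to the RFC:

```python
import hashlib
import os

def chap_response(identifier, secret, challenge):
    """CHAP response per RFC 1994: MD5(identifier || secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Server side: issue a random challenge; the peer answers with the hash,
# and the server verifies it against its own copy of the shared secret.
secret = b"shared-secret"          # hypothetical shared secret
challenge = os.urandom(16)
identifier = 1

response = chap_response(identifier, secret, challenge)           # peer
assert response == chap_response(identifier, secret, challenge)   # server
```

Note what crosses the wire: the challenge and the hash, never the secret itself; that is the step up from PAP.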
802.1X: IEEE 802.1X, an IEEE Standard for
Port-Based Network Access Control (PNAC), provides protected
authentication for secure network access for devices attempting to
connect and authenticate to a LAN or WLAN.
RADIUS: Remote Authentication Dial In User Service.
RADIUS is a client-server protocol and software that enables remote
access servers to communicate with a central server to authenticate
dial-in users and authorise their access to the requested system or
service.
SSO: Single Sign On. A security mechanism whereby a
user needs to log in only once and their credentials are valid
throughout the entire enterprise network, granting them access to
various resources without the need to use a different set of credentials
or continually re-identify themselves.
SAML: Security Assertion Markup Language. A format
for a client and server to exchange authentication and authorisation
data securely. SAML defines three roles for making this happen:
principal, identity provider (IdP), and service provider.
TACACS+: Terminal Access Controller Access Control
System Plus. A proprietary protocol developed by Cisco to support AAA in
a network with many routers and switches. It is similar to RADIUS in
function, but uses TCP port 49 by default and separates authorisation,
authentication, and accounting into different parts.
OAuth: Standard that enables users to access Web
sites using credentials from other Web services, such as Amazon or
Google, without compromising or sharing those credentials. OAuth is
about authorisation (i.e. to grant access to functionality/data/etc.
without having to deal with the original authentication).
OpenID: Authentication protocol that enables users
to log into Web sites using credentials established with other services,
such as Google or Amazon. OpenID is about authentication (i.e. proving
who you are).
Kerberos: An authentication standard designed to
allow different operating systems and applications to authenticate each
other. Kerberos uses time stamps and a Ticket-Granting Service as
mechanisms to provide authentication and access to different
resources.
Access control schemes:
ABAC: Attribute Based Access Control. An access
control model based upon identifying information about a resource, such
as subject (like name, clearance, department) or object (data type,
classification, color). Access to resources is granted or denied based
on preset rules concerning those attributes.
Role based access control: An access control model
based upon the definition of specific roles that have specific rights
and privileges assigned to them. Rights and permissions are not assigned
on an individual basis; rather, individuals must be assigned to a role
by an administrator.
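The role-based model above can be sketched in a few lines: permissions attach to roles and users attach to roles, never permissions to users directly. The role, user, and permission names are hypothetical:

```python
# Permissions are assigned to roles, not to individuals.
ROLE_PERMISSIONS = {
    "auditor": {"read_logs"},
    "admin": {"read_logs", "manage_users"},
}

# Individuals are assigned to roles by an administrator.
USER_ROLES = {
    "alice": {"admin"},
    "bob": {"auditor"},
}

def has_permission(user, permission):
    """A user holds a permission only through one of their roles."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )
```

Revoking Bob's access requires only removing him from the `auditor` role; no per-user permission bookkeeping is needed, which is the main administrative advantage of RBAC.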
Rule based access control: An access control model
in which access to different resources is strictly controlled on the
basis of specific rules configured and applied to the resource. Rules
may entail time of day, originating host, and type of action conditions.
Rule-based access control models are typically seen on network security
devices such as routers and firewalls.
MAC: Mandatory Access Control. A security model in
which every resource is assigned a label that defines its security
level. If the user lacks the security level assigned to a resource, the
user cannot get access to that resource. MAC is typically found only in
highly secure systems.
DAC: Discretionary Access Control. An authorisation
model based on the idea that there is an owner of a resource who may, at
his or her discretion, assign access to that resource. DAC is considered
much more flexible than mandatory access control (MAC).
Conditional access: Access to resources based on
certain conditions. A superset of the access control schemes listed
above.
Privileged access management: A centralised method
of handling elevated (privileged) access for human users to resources.
Filesystem permissions: The native capabilities of
a filesystem to control access and enforce permission-based rules.
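As a minimal sketch of such native capabilities on POSIX systems, the snippet below sets a file to owner-read-only (mode 0o400) and reads the permission bits back. It assumes a POSIX filesystem; on other platforms `chmod` semantics differ:

```python
import os
import stat
import tempfile

# Create a scratch file, restrict it to owner-read-only, and inspect
# the resulting permission bits.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o400)                        # r-------- (owner read only)
mode = stat.S_IMODE(os.stat(path).st_mode)   # extract just the permission bits
os.remove(path)
```

The same mechanism underlies the `chmod`/`chown` commands and, in extended form, POSIX ACLs.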
3.9 Given
a scenario, implement public key infrastructure.
Public Key Infrastructure (PKI): Asymmetric
encryption uses two keys — a public key (that anyone can access) to
encrypt information and a private key to decrypt information. Public-key
cryptography, or asymmetric cryptography, is the field of cryptographic
systems that use pairs of related keys. Each key pair consists of a
public key and a corresponding private key. Key pairs are generated with
cryptographic algorithms based on mathematical problems termed one-way
functions. Security of public-key cryptography depends on keeping the
private key secret; the public key can be openly distributed without
compromising security. In a public-key encryption system, anyone with a
public key can encrypt a message, yielding a ciphertext, but only those
who know the corresponding private key can decrypt the ciphertext to
obtain the original message. For example, a journalist can publish the
public key of an encryption key pair on a web site so that sources can
send secret messages to them in ciphertext. Only the journalist who
knows the corresponding private key can decrypt the ciphertext to obtain
the sources’ messages—an eavesdropper reading email on its way to the
journalist can’t decrypt the ciphertext. However, public-key encryption
doesn’t conceal metadata like what computer a source used to send a
message, when they sent it, or how long it is. Public-key encryption on
its own also doesn’t tell the recipient anything about who sent a
message—it just conceals the content of a message in a ciphertext that
can only be decrypted with the private key.
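The public/private key relationship can be illustrated with textbook RSA using deliberately tiny primes. This is purely to show the roles of the two keys; real systems use 2048-bit or larger keys generated by a vetted cryptographic library:

```python
# Textbook RSA with tiny primes (the classic 61/53 example), purely to
# illustrate how the public and private keys relate.
p, q = 61, 53
n = p * q                 # modulus, part of both keys
phi = (p - 1) * (q - 1)   # Euler's totient of n
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: modular inverse of e mod phi

message = 42              # must be smaller than n
ciphertext = pow(message, e, n)    # anyone can encrypt with (e, n)
recovered = pow(ciphertext, d, n)  # only the private key holder decrypts
assert recovered == message
```

Only the pair (e, n) is published; the scheme's security rests on keeping d secret, which for large n is protected by the difficulty of factoring n back into p and q (the one-way function mentioned above).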
Key management: Keys should have an expiration
date. Keys that are no longer needed should be revoked.
CA: A certificate authority or certification
authority is an entity that stores, signs, and issues digital
certificates. A digital certificate certifies the ownership of a public
key by the named subject of the certificate.
Intermediate CA: A CA that is signed by a superior
CA (e.g., a Root CA or another Intermediate CA) and signs CAs (e.g.,
another Intermediate or Subordinate CA). The Intermediate CA exists in
the middle of a trust chain between the Trust Anchor, or Root, and the
subscriber certificate issuing Subordinate CAs.
RA: Registration Authority. An additional element
often used in larger organisations to help offset the workload of the
certificate authority. The RA assists by accepting user requests and
verifying their identities before passing along the request to the
certificate authority.
CRL: Certificate Revocation List. An electronic
file, published by a certificate authority, that shows all certificates
that have been revoked by that CA.
Certificate attributes: Certificates have many
attributes. The common X.509 certificate fields include the following:
Subject
DNS
Issuer
Validity
Key Size
Signature Algorithm
Serial Number
SAN
Policies
DACL (Discretionary Access Control List)
OCSP: Online Certificate Status Protocol (OCSP). A
security protocol used to check the revocation status of an individual
digital certificate in real time, as a lighter-weight alternative to
downloading a full certificate revocation list (CRL).
CSR: Certificate Signing Request. In public key
infrastructure (PKI) systems, a certificate signing request (also CSR or
certification request) is a message sent from an applicant to a
certificate authority of the public key infrastructure in order to apply
for a digital identity certificate. It usually contains the public key
for which the certificate should be issued, identifying information
(such as a domain name) and a proof of authenticity including integrity
protection (e.g., a digital signature).
CN: Common Name, also known as the Fully Qualified
Domain Name (FQDN). The Common Name (AKA CN) represents the server name
protected by the SSL certificate. The certificate is valid only if the
request hostname matches the certificate common name. Most web browsers
display a warning message when connecting to an address that does not
match the common name in the certificate. In the case of a single-name
certificate, the common name consists of a single host name
(e.g. example.com, www.example.com), or a wildcard name in case of a
wildcard certificate (e.g. *.example.com). The common name is
technically represented by the commonName field in the X.509 certificate
specification.
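A rough sketch of how a client might compare a request hostname against a certificate common name, including the single-label wildcard rule. Real TLS stacks implement this with many more edge cases (see RFC 6125); the hostnames below are illustrative:

```python
# Toy hostname-vs-common-name check. A wildcard such as *.example.com
# matches exactly one additional label: www.example.com but not
# a.b.example.com, and not the bare example.com.
def matches(cn: str, hostname: str) -> bool:
    if cn.startswith("*."):
        suffix = cn[1:]                    # ".example.com"
        prefix = hostname[: -len(suffix)]  # candidate leftmost label
        return (hostname.endswith(suffix)
                and prefix != ""
                and "." not in prefix)
    return cn == hostname

print(matches("*.example.com", "www.example.com"))   # True
print(matches("*.example.com", "a.b.example.com"))   # False
print(matches("example.com", "example.com"))         # True
```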
SAN: Subject Alternative Name. Allows multiple
hosts (websites, IP addresses) to be protected by a single
certificate.
Expiration: Certificates expire. The maximum
validity of publicly trusted TLS certificates has been progressively
shortened: 825 days (roughly 27 months) from 2018, and 398 days (about
13 months) since September 2020. If you try to visit a website with an
expired certificate, your browser will issue a warning.
Types of certificates:
Wildcard: In computer networking, a wildcard
certificate is a public key certificate which can be used with multiple
sub-domains of a domain.
Subject alternative name: A SAN or subject
alternative name is a structured way to indicate all of the domain names
and IP addresses that are secured by the certificate.
Code signing: Code Signing is a method of using an
X.509 certificate to place a digital signature on a file, program, or
software update which guarantees that the file or software has not been
tampered with or compromised. It’s a means of providing an added level
of assurance to the user that the item is authentic and safe to
use.
Self-signed: Self-signed certificates are public
key certificates that are not issued by a certificate authority
(CA).
Machine/computer: Device certificates identify
specific devices and their permissions. They are typically used when a
device has only a single user (hence no need for numerous user
certificates) or when an IoT device has no users.
Email: S/MIME (Secure/Multipurpose Internet Mail
Extension) is a certificate that allows users to digitally sign their
email communications as well as encrypt the content and attachments
included in them. Not only does this authenticate the identity of the
sender to the recipient, but it also protects the integrity of the email
data before it is transmitted across the internet. In a nutshell, an
S/MIME email certificate allows you to: encrypt your emails so that only
your intended recipient can access the content of the message; and
digitally sign your emails so the recipient can verify that the email
was, in fact, sent by you and not a phisher posing as you. Email
encryption works by using asymmetric encryption.
User: User certificates specify which resources a
given user can have access to. They are sometimes used on devices that
several users share. When different users log in, their profile and
certificate are automatically loaded, granting them access to their
required information. This is critical if different users of the same
device need access to different resources.
Root: A root certificate is a public key
certificate that identifies a root certificate authority (CA). Root
certificates are self-signed (and it is possible for a certificate to
have multiple trust paths, say if the certificate was issued by a root
that was cross-signed) and form the basis of an X.509-based public key
infrastructure (PKI).
Domain validation: The process of proving domain
ownership to a certificate authority. SSL certificate authorities can
ask for email verification, file based verification, or can check the
website’s web registrar’s information to validate the domain.
Extended validation: An Extended Validation
Certificate (EV) is a certificate conforming to X.509 that proves the
legal entity of the owner and is signed by a certificate authority key
that can issue EV certificates. EV certificates can be used in the same
manner as any other X.509 certificates, including securing web
communications with HTTPS and signing software and documents. Unlike
domain-validated certificates and organisation-validation certificates,
EV certificates can be issued only by a subset of certificate
authorities (CAs) and require verification of the requesting entity’s
legal identity before certificate issuance.
Certificate formats:
DER: Distinguished Encoding Rules. A binary
encoding of a certificate, commonly used on Java platforms.
PEM: Privacy Enhanced Mail. A Base64 (ASCII)
encoding of a DER certificate, delimited by -----BEGIN CERTIFICATE-----
and -----END CERTIFICATE----- lines; the format most commonly issued by
CAs.
PFX: Personal Information Exchange. A binary
container format (the predecessor of PKCS#12) that can bundle a
certificate with its private key; common on Windows.
.cer: Certificate file extension, used for
certificates in either DER or PEM encoding.
P12: A P12 file is a digital certificate container
in the PKCS#12 (Public Key Cryptography Standards #12) format, which can
store a certificate together with its password-protected private
key.
P7B: PKCS#7 format. Stores certificates and full
certification-path chains, but does not store private keys.
Concepts:
Online vs. offline CA: As opposed to a standard
online CA, which issues certificates over the internet, an offline root
certificate authority is a certificate authority (as defined in the
X.509 standard and RFC 5280) which has been isolated from network
access, and is often kept in a powered-down state. Because the
consequences of a compromised root CA are so great (up to and including
the need to re-issue each and every certificate in the PKI), all root
CAs must be kept safe from unauthorized access. A common method to
ensure the security and integrity of a root CA is to keep it in an
offline state. It is only brought online when needed for specific,
infrequent tasks, typically limited to the issuance or re-issuance of
certificates authorising intermediate CAs.
Stapling: OCSP Stapling builds on OCSP, the Online
Certificate Status Protocol, an internet protocol that checks the
validity status of a certificate in real time. When a user makes an
https:// connection with
your web server, their browser normally performs an OCSP check with the
CA that issued the SSL certificate to confirm that the certificate has
not been revoked. In some cases, this may create a momentary delay in
the SSL handshake. OCSP Stapling improves performance by positioning a
digitally-signed and time-stamped version of the OCSP response directly
on the web server. This stapled OCSP response is then refreshed at
predefined intervals set by the CA. The stapled OCSP response allows the
web server to include the OCSP response within the initial SSL
handshake, without the need for the user to make a separate external
connection to the CA. OCSP Stapling is outlined in RFC 6066.
Pinning: Certificate pinning restricts which
certificates are considered valid for a particular website, limiting
risk. Instead of allowing any trusted certificate to be used, operators
“pin” the certificate authority (CA) issuer(s), public keys or even
end-entity certificates of their choice. Clients connecting to that
server will treat all other certificates as invalid and refuse to make
an HTTPS connection.
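The idea behind pinning can be sketched as a fingerprint comparison; the byte strings below stand in for real DER-encoded certificates:

```python
import hashlib

# Pinning sketch: the client stores a known-good SHA-256 fingerprint
# and refuses any presented certificate whose hash differs, even if
# that certificate chains to a trusted CA.
pinned_fingerprint = hashlib.sha256(b"server-cert-der-bytes").hexdigest()

def connection_allowed(presented_cert: bytes) -> bool:
    return hashlib.sha256(presented_cert).hexdigest() == pinned_fingerprint

print(connection_allowed(b"server-cert-der-bytes"))  # True
print(connection_allowed(b"some-other-cert"))        # False
```

The trade-off is operational: if the pinned key is rotated without updating clients, legitimate connections fail too.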
Trust models:
Direct Trust: Trust between two communicating parties is
established directly.
Hierarchical Trust: In the hierarchical trust model, every
certificate is issued by a trusted third party called a Certificate
Authority (CA). If one trusts the CA, one automatically trusts the
certificates that CA issues.
Indirect Trust: Trust between two communicating parties is
established through a trusted third party.
Key escrow: Key escrow is an arrangement in which
the keys needed to decrypt encrypted data are held in escrow so that,
under certain circumstances, an authorised third party may gain access
to those keys. An escrow is a contractual arrangement in which a third
party (the stakeholder or escrow agent) receives and disburses money or
property for the primary transacting parties, with the disbursement
dependent on conditions agreed to by the transacting parties.
Certificate chaining: A certificate chain is an
ordered list of certificates, containing an SSL/TLS Certificate and
Certificate Authority (CA) Certificates, that enable the receiver to
verify that the sender and all CA’s are trustworthy. The chain or path
begins with the SSL/TLS certificate, and each certificate in the chain
is signed by the entity identified by the next certificate in the
chain.
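The ordering property of a certificate chain can be sketched with plain dictionaries standing in for X.509 certificates. The names are illustrative, and real validation also checks signatures, validity periods, and extensions, not just subject/issuer ordering:

```python
# Chain-ordering sketch: each certificate's issuer must equal the
# subject of the next certificate in the chain, ending at a
# self-signed root.
chain = [
    {"subject": "www.example.com", "issuer": "Example Intermediate CA"},
    {"subject": "Example Intermediate CA", "issuer": "Example Root CA"},
    {"subject": "Example Root CA", "issuer": "Example Root CA"},  # root
]

def chain_is_ordered(chain):
    for cert, parent in zip(chain, chain[1:]):
        if cert["issuer"] != parent["subject"]:
            return False
    root = chain[-1]
    return root["subject"] == root["issuer"]  # root is self-signed

print(chain_is_ordered(chain))  # True
```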
4.0 Operations and Incident
Response
4.1
Given a scenario, use the appropriate tool to assess organisational
security.
Network reconnaissance and discovery:
tracert/traceroute: This program attempts to trace
the route an IP packet would follow to some internet host by launching
probe packets with a small ttl (time to live) then listening for an ICMP
“time exceeded” reply from a gateway. https://linux.die.net/man/8/traceroute
dig: dig (domain information groper) is a flexible
tool for interrogating DNS name servers. It performs DNS lookups and
displays the answers that are returned from the name server(s) that were
queried. https://linux.die.net/man/1/dig
ipconfig/ifconfig: ifconfig is used to configure
the kernel-resident network interfaces. ipconfig is the Windows
alternative. https://linux.die.net/man/8/ifconfig
nmap: Nmap (“Network Mapper”) is an open source
tool for network exploration and security auditing. It was designed to
rapidly scan large networks, although it works fine against single
hosts. Nmap uses raw IP packets in novel ways to determine what hosts
are available on the network, what services (application name and
version) those hosts are offering, what operating systems (and OS
versions) they are running, what type of packet filters/firewalls are in
use, and dozens of other characteristics. The output from Nmap is a list
of scanned targets, with supplemental information on each depending on
the options used. Key among that information is the “interesting ports
table”. That table lists the port number and protocol, service name,
and state. The state is either open, filtered, closed, or unfiltered. https://linux.die.net/man/1/nmap
ping/pathping: ping uses the ICMP protocol’s
mandatory ECHO_REQUEST datagram to elicit an ICMP ECHO_RESPONSE from a
host or gateway. The PathPing command is a command-line network utility
supplied in Windows 2000 and beyond that combines the functionality of
ping with that of tracert. https://linux.die.net/man/8/ping
hping: hping is an open-source packet generator and
analyser for the TCP/IP protocol created by Salvatore Sanfilippo (also
known as Antirez). It is one of the common tools used for security
auditing and testing of firewalls and networks, and was used to exploit
the idle scan scanning technique (also invented by the hping author),
and now implemented in the Nmap Security Scanner. https://linux.die.net/man/8/hping3
netstat: Netstat prints information about the Linux
networking subsystem. Note: This program is obsolete. Replacement for
netstat is ss. Replacement for netstat -r is ip route. Replacement for
netstat -i is ip -s link. Replacement for netstat -g is ip maddr. https://linux.die.net/man/8/netstat
netcat: The nc (or netcat) utility is used for just
about anything under the sun involving TCP or UDP. It can open TCP
connections, send UDP packets, listen on arbitrary TCP and UDP ports, do
port scanning, and deal with both IPv4 and IPv6. https://linux.die.net/man/1/nc
IP scanners:
arp: Arp manipulates the kernel’s ARP cache in
various ways. The primary options are clearing an address mapping entry
and manually setting up one. For debugging purposes, the arp program
also allows a complete dump of the ARP cache. https://linux.die.net/man/8/arp
route: Route manipulates the kernel’s IP routing
tables. Its primary use is to set up static routes to specific hosts or
networks via an interface after it has been configured with the ifconfig
program. When the add or del options are used, route modifies the
routing tables. Without these options, route displays the current
contents of the routing tables. https://linux.die.net/man/8/route
curl: curl is a tool to transfer data from or to a
server, using one of the supported protocols (HTTP, HTTPS, FTP, FTPS,
SCP, SFTP, TFTP, DICT, TELNET, LDAP or FILE). The command is designed to
work without user interaction. https://linux.die.net/man/1/curl
theHarvester: theHarvester is a command-line tool
written in Python that acts as a wrapper for a variety of search engines
and is used to find email accounts, subdomain names, virtual hosts, open
ports / banners, and employee names related to a domain from different
public sources (such as search engines and PGP key servers). In recent
versions, the authors added the capability of doing DNS brute force,
reverse IP resolution, and Top-Level Domain (TLD) expansion. https://github.com/laramies/theHarvester
sn1per: Sn1per is a web penetration testing
framework used for information gathering and vulnerabilities
assessments. The framework has a premium and a community version. https://github.com/1N3/Sn1per
scanless: A Python 3 command-line utility and
library for using websites that can perform port scans on your behalf.
https://github.com/vesche/scanless
dnsenum: Dnsenum is a multithreaded perl script to
enumerate DNS information of a domain and to discover non-contiguous ip
blocks. The main purpose of Dnsenum is to gather as much information as
possible about a domain. https://github.com/fwaeytens/dnsenum
grep: grep searches the named input FILEs (or
standard input if no files are named, or if a single hyphen-minus (-) is
given as file name) for lines containing a match to the given PATTERN.
By default, grep prints the matching lines. https://linux.die.net/man/1/grep
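The same line-matching behaviour as `grep -n error logfile` can be reproduced in Python with the `re` module; the log lines are fabricated:

```python
import re

# Emulate `grep -n error` over an in-memory log: report each matching
# line together with its 1-based line number.
log = """service started
disk error on /dev/sda
retrying
disk error on /dev/sdb
"""
pattern = re.compile(r"error")
matches = [(i, line) for i, line in enumerate(log.splitlines(), 1)
           if pattern.search(line)]
for lineno, line in matches:
    print(f"{lineno}:{line}")
# 2:disk error on /dev/sda
# 4:disk error on /dev/sdb
```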
chmod: chmod changes the file mode bits of each
given file according to mode, which can be either a symbolic
representation of changes to make, or an octal number representing the
bit pattern for the new mode bits. https://linux.die.net/man/1/chmod
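The equivalent of `chmod 640 file` (owner read/write, group read, others nothing) can be done from Python on a POSIX system, here against a throwaway temp file:

```python
import os
import stat
import tempfile

# Create a scratch file and apply mode 640, then read the mode bits
# back to confirm. chmod is not affected by the umask, so the result
# is exactly 0o640 on POSIX systems.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o640)
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o640
os.remove(path)
```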
logger: Logger makes entries in the system log. It
provides a shell command interface to the syslog system log module. https://linux.die.net/man/1/logger
Shell and script environments:
SSH: The Secure Shell Protocol (SSH) is a
cryptographic network protocol for operating network services securely
over an unsecured network. Its most notable applications are remote
login and command-line execution. SSH applications are based on a
client–server architecture, connecting an SSH client instance with an
SSH server.
PowerShell: PowerShell is a task automation and
configuration management program from Microsoft, consisting of a
command-line shell and the associated scripting language. Initially a
Windows component only, known as Windows PowerShell, it was made
open-source and cross-platform on 18 August 2016 with the introduction
of PowerShell Core. https://microsoft.com/powershell
Python: Python is a high-level, general-purpose
programming language. Its design philosophy emphasises code readability
with the use of significant indentation. https://www.python.org/
OpenSSL: OpenSSL is a software library for
applications that secure communications over computer networks against
eavesdropping or need to identify the party at the other end. It is
widely used by Internet servers, including the majority of HTTPS
websites. OpenSSL contains an open-source implementation of the SSL and
TLS protocols. The core library, written in the C programming language,
implements basic cryptographic functions and provides various utility
functions. https://www.openssl.org/
Packet capture and relay:
tcpreplay: Replay network traffic stored in pcap
files. The basic operation of tcpreplay is to resend all packets from
the input file(s) at the speed at which they were recorded, or a
specified data rate, up to as fast as the hardware is capable.
Optionally, the traffic can be split between two interfaces, written to
files, filtered and edited in various ways, providing the means to test
firewalls, NIDS and other network devices. https://linux.die.net/man/1/tcpreplay
tcpdump: Tcpdump prints out a description of the
contents of packets on a network interface that match the boolean
expression. It can also be run with the -w flag, which causes it to save
the packet data to a file for later analysis, and/or with the -r flag,
which causes it to read from a saved packet file rather than to read
packets from a network interface. In all cases, only packets that match
expression will be processed by tcpdump. https://linux.die.net/man/8/tcpdump
Wireshark: Wireshark is a free and open-source
packet analyser. It is used for network troubleshooting, analysis,
software and communications protocol development, and education.
Originally named Ethereal, the project was renamed Wireshark in May 2006
due to trademark issues. Wireshark is cross-platform, using the Qt
widget toolkit in current releases to implement its user interface, and
using pcap to capture packets; it runs on Linux, macOS, BSD, Solaris,
some other Unix-like operating systems, and Microsoft Windows. There is
also a terminal-based (non-GUI) version called TShark. Wireshark, and
the other programs distributed with it such as TShark, are free
software, released under the terms of the GNU General Public License
version 2 or any later version. https://www.wireshark.org/
WinHex: WinHex is a proprietary commercial disk
editor and universal hexadecimal editor (hex editor) used for data
recovery and digital forensics. http://www.winhex.com/winhex/
FTK imager: This tool saves an image of a hard disk
in one file or in segments that may be later on reconstructed. It
calculates MD5 and SHA1 hash values and can verify the integrity of the
data imaged is consistent with the created forensic image. The forensic
image can be saved in several formats, including DD/raw, E01, and AD1.
https://www.exterro.com/ftk-imager
Autopsy: Autopsy is computer software that makes it
simpler to deploy many of the open source programs and plugins used in
The Sleuth Kit. The Sleuth Kit (TSK) is a library and collection of
Unix- and Windows-based utilities for extracting data from disk drives
and other storage so as to facilitate the forensic analysis of computer
systems. It forms the foundation for Autopsy, a better known tool that
is essentially a graphical user interface to the command line utilities
bundled with The Sleuth Kit. http://www.autopsy.com/
4.2
Summarise the importance of policies, processes, and procedures for
incident response.
Incident response plans: A Cyber Incident Response
Plan is essentially a guide or a set of steps that your business will
follow in the event of a cyberattack. It is a document that spells out
the actions that need to be taken to minimise the damage and protect
your business data during and after the attack.
Incident response procedures:
Preparation: In this phase of incident response
planning, you have to ensure that all employees have a certain degree of
awareness about cybersecurity and a basic level of incident response
training in dealing with a cyber crisis. Everyone also has to be aware
of their roles and responsibilities in case of a cyber event.
Identifying critical assets and crown jewels and conducting incident
response testing also form an integral part of this incident response
phase.
Identification: This phase in incident response
planning, as the name suggests, is about identifying if you’ve been
breached or if any of your systems have been compromised. In case a
breach is indeed discovered, you should focus on answering questions
such as:
Who discovered the breach?
What is the extent of the breach?
Is it affecting operations?
What could be the source of the compromise etc.
Containment: This incident response phase involves
everything you can do to mitigate damage once you’re already under a
cyber-attack. In this phase of the incident response plan, you need to
consider what can be done to contain the effects of the breach. Which
systems can be taken offline? Can and should anything be deleted safely?
What is the short term strategy? What is the long term strategy to deal
with the effects of the attack?
Eradication: Phase 4 of the cyber incident response
plan is all about understanding what caused the breach in the first
place and dealing with it in real time. The incident response process in
this phase will involve patching vulnerabilities in the system, removing
malicious software, updating old software versions etc. Basically this
phase involves doing whatever is required to ensure that all malicious
content is wiped clean from your systems.
Recovery: As the name suggests, this phase of the
incident response plan is concerned with getting the affected systems
back online after an attack or an incident. This phase of the cyber
incident response plan is critical because it tests, monitors and
verifies the affected systems. Without proper recovery, it would be very
difficult to avoid another similar incident in the future.
Lessons learned: We might go out on a limb and say
that this is one of the most important phases in the incident response
plan. Yes, everyone can and will get breached. However, it is how we
deal with the breach and what we learn from it that makes all the
difference. In this phase, it is vital to gather all members of the
Incident Response team together and discuss what happened. It’s like a
retrospective on the attack. You can evaluate what happened, why it
happened and what was done to contain the situation. But most
importantly, in this phase, the business must discuss if something could
have been done differently. Were there any gaps in the incident response
plan? Was there a department or stakeholder who could have responded
faster or differently? This phase is all about learning from the attack
in order to ensure that it doesn’t happen again and if it does, the
situation is handled even better.
Exercises:
Tabletop: A tabletop exercise (or TTX) provides
hands-on training for your cybersecurity team. The exercise walks the
team through a realistic attack scenario, and you can measure the team’s
performance and the
strength of your incident response plan through the process. You then
can make adjustments, always striving for a perfect response.
Walkthroughs: Same as a TTX.
Simulations: A simulated, fully interactive
cybersecurity situation and response exercise.
Attack frameworks:
MITRE ATT&CK: The Adversarial Tactics,
Techniques, and Common Knowledge or MITRE ATT&CK is a guideline for
classifying and describing cyberattacks and intrusions. It was created
by the Mitre Corporation and released in 2013. https://attack.mitre.org/
The Diamond Model of Intrusion Analysis: The
Diamond Model of Intrusion Analysis is an approach employed by several
information security professionals to authenticate and track cyber
threats. According to this approach, every incident can be depicted as a
diamond. This methodology underlines the relationships and
characteristics of four components of the diamond—adversary, capability,
infrastructure, and victim. These four core elements are connected to
delineate the relationship between each other which can be analytically
examined to further uncover insights and gain knowledge of malicious
activities.
Cyber Kill Chain: The cyber kill chain is a series
of steps that trace stages of a cyberattack from the early
reconnaissance stages to the exfiltration of data. Lockheed Martin
derived the kill chain framework from a military model – originally
established to identify, prepare to attack, engage, and destroy the
target. The eight stages of the Cyber Kill Chain are: reconnaissance,
intrusion, exploitation, privilege escalation, lateral movement,
obfuscation/anti-forensics, denial of service, and exfiltration.
Stakeholder management: Identifying the people and
groups affected by an incident and keeping them informed. Maintaining
good relationships with stakeholders before, during, and after an
incident makes response coordination far easier.
Communication plan: Cybersecurity incidents require
careful coordination between the incident response team and a variety of
internal and external stakeholders. An incident response communication
plan is a crucial component of an organisation’s broader incident
response plan that provides guidance and direction to these
communication efforts.
Disaster recovery plan: Disaster recovery plans
(DRP) seek to quickly redirect available resources into restoring data
and information systems following a disaster. A disaster can be
classified as a sudden event, including an accident or natural disaster,
that creates wide scoping, detrimental damage. When DRPs are properly
designed and executed they enable the efficient recovery of critical
systems and help an organization avoid further damage to
mission-critical operations. Benefits include minimising recovery time
and possible delays, preventing potential legal liability, improving
security, and avoiding potentially damaging last minute decision making
during a disaster.
Business continuity plan: Business Continuity
Planning (BCP) is the process of creating preventive and recovery
systems to deal with potential cyber threats to an organisation or to
ensure business process continuity in the wake of a cyberattack.
Continuity Of Operations Planning (COOP): The same
as a BCP.
Incident response team: A computer emergency
response team (CERT) is an expert group that handles computer security
incidents. Alternative names for such groups include computer emergency
readiness team and computer security incident response team (CSIRT). A
more modern representation of the CSIRT acronym is Cyber Security
Incident Response Team.
Retention policies: Policies relating to data
retention.
4.3
Given an incident, utilise appropriate data sources to support an
investigation.
Vulnerability scan output: There are many different
vulnerability scanners, and therefore many variations of scan
outputs.
SIEM (Security Information and Event Management)
dashboards: Dashboards are an integral component of any
effective SIEM solution. After log data is aggregated from different
sources, a SIEM solution prepares the data for analysis after
normalisation. The outcomes of this analysis are presented in the form
of actionable insights through dashboards.
Sensor: e.g. NetFlow sensors.
Sensitivity: Can be tuned.
Trends: Can trigger notifications.
Alerts: Can be configured to trigger due to a
variety of circumstances.
Correlation: Can occur automatically or manually,
to build a bigger picture of the scope and details of a certain
event.
Log files:
Application: Applications may keep their own log
files.
Security: Encapsulates all other sources.
Web: Web servers keep log files.
DNS: DNS servers keep log files. DNS sinkholes
provide log files.
Authentication: Authentication events are typically
recorded in log files.
Dump files: Can be created on demand (e.g. memory
dumps).
VoIP and call managers: Have log files detailing
participants, among other information.
syslog/rsyslog/syslog-ng: Syslog is a standard for
sending and receiving notification messages – in a particular format –
from various network devices. Syslog offers a central repository for
logs from multiple sources. Rsyslog is an open-source software utility
used on UNIX and Unix-like computer systems for forwarding log messages
in an IP network. syslog-ng is a free and open-source implementation of
the syslog protocol for Unix and Unix-like systems. It extends the
original syslogd model with content-based filtering, rich filtering
capabilities, flexible configuration options and adds important features
to syslog, like using TCP for transport.
journalctl: Journalctl is a utility for querying
and displaying logs from journald, systemd’s logging service. Since
journald stores log data in a binary format instead of a plaintext
format, journalctl is the standard way of reading log messages processed
by journald.
NXLog: NXLog is a proprietary multi-platform log
collection and centralization tool that offers log processing features,
including log enrichment and log forwarding. In concept NXLog is similar
to syslog-ng or Rsyslog but it is not limited to UNIX and syslog
only.
Bandwidth monitors: Bandwidth is a fundamental
network statistic and one that is almost universal no matter what device
you’re connecting to. Bandwidth monitors monitor bandwidth usage.
Metadata: Data that describes and gives
information about other data.
Email: An email header is a collection of metadata
that documents the path by which the email got to you. You may find a
deluge of information in the header or just the basics. Information
includes from, to, subject, return path, reply-to, envelope-to, date,
received, DKIM signature, message-id, MIME version, content-type, and
message body.
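Python's standard `email` module can pull this metadata out of a raw message; the addresses and header values below are made up:

```python
from email import message_from_string

# Parse the header metadata of a raw RFC 822-style message. Only the
# headers are metadata; everything after the blank line is the body.
raw = """From: alice@example.com
To: bob@example.com
Subject: Quarterly report
Message-ID: <1234@example.com>
MIME-Version: 1.0

Body text here.
"""
msg = message_from_string(raw)
print(msg["From"])     # alice@example.com
print(msg["Subject"])  # Quarterly report
```

During an investigation, the `Received:` headers (one added by each mail server in transit) are especially useful for reconstructing the path a message actually took.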
Mobile: Smartphones collect large amounts of
metadata about practically anything that occurs on or with them.
Web: Browser fingerprinting, IP address, OS type
etc.
File: File metadata may include personal
information such as your name and email address.
Netflow/sFlow: Flow-based technologies that export
summaries of IP network traffic for monitoring and analysis. sFlow,
short for “sampled flow”, is an industry standard for packet export at
Layer 2 of the OSI model, originally developed by InMon Corp.
Netflow: NetFlow is a feature that was introduced
on Cisco routers around 1996 that provides the ability to collect IP
network traffic as it enters or exits an interface.
sFlow: sFlow is a multi-vendor, packet sampling
technology used to monitor network devices including routers, switches,
host devices and wireless access points.
IPFIX: Internet Protocol Flow Information Export
(IPFIX) is an IETF protocol, as well as the name of the IETF working
group defining the protocol. It was created based on the need for a
common, universal standard of export for Internet Protocol flow
information from routers, probes and other devices that are used by
mediation systems, accounting/billing systems and network management
systems to facilitate services such as measurement, accounting and
billing. The IPFIX standard defines how IP flow information is to be
formatted and transferred from an exporter to a collector. The IPFIX
standards requirements were outlined in the original RFC 3917. Cisco
NetFlow Version 9 was the basis for IPFIX.
Protocol analyser output: Output from a network
protocol analyser such as Wireshark or SolarWinds.
4.4
Given an incident, apply mitigation techniques or controls to secure an
environment.
Reconfigure endpoint security solutions:
Application approved list: By providing control of
the applications running on the endpoint, the IT security team can
create a more secure and stable environment. An approved list defines
the set of applications that may be installed; any application not on
the list cannot be installed.
Application blocklist/deny list: An application
blocklist is a list of applications that are specifically unable to be
installed on endpoints.
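The difference between the two approaches can be sketched as follows (a
minimal Python illustration with made-up application names, not any
vendor's API):

```python
# Hypothetical policy data for illustration only.
APPROVED = {"firefox", "libreoffice", "7zip"}   # only these may be installed
BLOCKED = {"torrent-client", "keygen-tool"}     # these may never be installed

def allowlist_decision(app: str) -> bool:
    """Default-deny: permit only applications on the approved list."""
    return app in APPROVED

def blocklist_decision(app: str) -> bool:
    """Default-allow: permit anything not explicitly blocked."""
    return app not in BLOCKED
```

Note the failure modes differ: an allow list blocks unknown software by
default, while a block list permits it.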
Quarantine: If your endpoint security software does
recognize an application that seems to have malicious software, then it
can remove that from the system and place it into a quarantine area.
This might be a folder that’s on the existing system where no
applications are allowed to run. Later, the IT security team can look
into the quarantine folder and perform additional analysis of that
software.
Configuration changes:
Firewall rules: Next-generation firewalls (NGFWs) allow rules to be set
by application, port, protocol, user, and many other factors. This
functionality should be taken full advantage of.
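First-match rule evaluation, common to most firewalls, can be sketched
like this (an illustrative rule set, not a real NGFW configuration
format):

```python
# Rules are evaluated top to bottom; the first match wins.
RULES = [
    {"action": "allow", "protocol": "tcp", "port": 443},   # permit HTTPS
    {"action": "allow", "protocol": "tcp", "port": 22},    # permit SSH
    {"action": "deny",  "protocol": "any", "port": None},  # explicit final deny
]

def evaluate(protocol: str, port: int) -> str:
    """Return the action of the first rule that matches."""
    for rule in RULES:
        proto_ok = rule["protocol"] in ("any", protocol)
        port_ok = rule["port"] is None or rule["port"] == port
        if proto_ok and port_ok:
            return rule["action"]
    return "deny"  # fail closed if no rule matches
```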
MDM: MDM software allows policies to be set
regarding mobile device usage, based on things like the physical
location of the device, and can be used to enforce things like certain
login procedures.
DLP: Data Loss Prevention functionality actively
identifies and blocks the transfer of PII from a specific endpoint
and/or across an entire network.
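A toy illustration of pattern-based PII detection, the core idea behind
many DLP filters (the patterns below are deliberately simplistic
examples, not production detection rules):

```python
import re

# Simplistic example patterns; real DLP systems use far richer detection.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN format
    "card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),  # 16-digit card-like number
}

def contains_pii(text: str) -> list[str]:
    """Return the names of PII patterns found in outgoing text."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]
```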
Content filter/URL filter: Content can be blocked.
For example, all images can be blocked on an endpoint or network. Types
of content, such as adult content, can also be blocked. Specific URLs
can also be blocked.
Update or revoke certificates: Certificates can be
deployed to trusted devices, and revoked when needed.
Isolation: The concept of isolation is one where we
can move a device into an area or virtual area/configuration where it
has limited or no access to other resources.
Containment: One way to prevent the spread of
malicious software is to prevent the software from having anywhere to
go. This can be done by sandboxing applications, and/or sandboxing
devices (isolation).
Segmentation: The segmentation of networks, to
prevent unauthorised access of data.
SOAR: Security Orchestration, Automation, and
Response.
Runbooks: A runbook consists of a series of
conditional steps to perform actions, such as data enrichment, threat
containment, and sending notifications, automatically as part of the
incident response or security operations process. This automation helps
to accelerate the assessment, investigation, and containment of threats
to speed up the overall incident response process. Runbooks can also
include human decision-making elements as required, depending on the
particular steps needed within the process and the amount of automation
the organisation is comfortable using. Like playbooks, runbooks can also
be used to automatically assign tasks that will be carried out by a
human analyst; however, most runbooks are primarily action-based.
Playbooks: A playbook is a linear style checklist
of required steps and actions required to successfully respond to
specific incident types and threats.
4.5 Explain the
key aspects of digital forensics.
Documentation/evidence:
Legal hold: A legal hold is a process that an
organisation uses to preserve all forms of potentially relevant
information when litigation is pending or reasonably anticipated.
Video: Video information such as screen recordings
can be valuable to digital forensics.
Admissibility: One concern regarding the data that
you collect is how admissible that data might be in a court of law. Not
all data you collect is something that can be used in a legal
environment. And the laws are different depending on where you might be.
The important part is that you collect the data with a set of standards,
which would allow that data to be used in a court of law if
necessary.
Chain of custody: Chain of custody, in legal
contexts, is the chronological documentation or paper trail that records
the sequence of custody, control, transfer, analysis, and disposition of
materials, including physical or electronic evidence.
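A chain-of-custody record can be kept as a simple append-only log; a
minimal sketch (illustrative structure only, not any forensic tool's
actual format):

```python
import hashlib
from datetime import datetime, timezone

# Each transfer is logged with who, when, and a hash of the evidence,
# so later tampering with the evidence is detectable.
custody_log = []

def record_transfer(evidence: bytes, from_person: str, to_person: str) -> dict:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "from": from_person,
        "to": to_person,
        "sha256": hashlib.sha256(evidence).hexdigest(),
    }
    custody_log.append(entry)
    return entry
```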
Timeline of sequence of events:
Time stamps: Record when events occurred and are essential for
reconstructing an accurate sequence of events.
Time offset: The time zone offset between local
time and the recorded time of an event.
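Normalising a recorded local time to UTC using its offset can be done as
follows (the log entry and offset are hypothetical; forensic timelines
are usually built in UTC):

```python
from datetime import datetime, timezone, timedelta

local_offset = timezone(timedelta(hours=-5))               # a log written in UTC-5
logged = datetime(2024, 1, 1, 9, 30, tzinfo=local_offset)  # 09:30 local time
utc_time = logged.astimezone(timezone.utc)                 # 09:30 + 5h = 14:30 UTC
```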
Tags: Tags are keywords you assign to files. This
may also refer to physical tags attached to physical devices.
Reports: Once all data has been gathered, a report
needs to be made providing a description of the events that have
occurred.
Event logs: Event logs record what happened on a system and when, making
them valuable evidence.
Interviews: Interviews can allow a person to ask
questions and get information about what a person saw when a particular
security event occurred.
Acquisition:
Order of volatility: Data should be acquired in
order of its volatility (i.e. take an image of the RAM, CPU registers,
and CPU cache first, because these are volatile and will not persist
through a reboot).
Disk: Non volatile although may be encrypted via
FDE.
RAM: Volatile.
Swap/pagefile: Volatile.
OS: Volatile.
Device: Volatile.
Firmware: Non volatile although may be
encrypted.
Snapshot: A snapshot captures the state of a system (such as a virtual
machine) at a point in time and should be taken early in acquisition.
Cache: Volatile.
Network: ARP cache etc. are volatile. Packet
captures should be initiated.
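The acquisition order above can be expressed as a simple ranking (the
ranks are illustrative, most volatile first):

```python
# Lower rank = more volatile = acquire first.
VOLATILITY_RANK = {
    "cpu registers/cache": 1,
    "ram": 2,
    "swap/pagefile": 3,
    "network state (ARP cache, connections)": 4,
    "disk": 5,
    "firmware": 6,
}

# Sort the sources into the order in which they should be acquired.
acquisition_order = sorted(VOLATILITY_RANK, key=VOLATILITY_RANK.get)
```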
Artifacts: Leftover bits of information in caches
and logs. May be valuable.
On premises vs. cloud:
Right-to-audit clauses: A right-to-audit clause is
a clause in a contract specifying if and how security audits of data can
take place regarding a specific product (such as a cloud storage
service).
Regulatory/jurisdiction: Regulations can vary
according to jurisdiction. The jurisdiction of the potential auditor,
the user of the service, and the data itself need to be taken into
account.
Data breach notification laws: Many jurisdictions
have laws or regulations that state, if any consumer data happens to be
breached, then the consumers must be informed of that situation.
Integrity: Data collected for evidence needs to
have its integrity verified and verifiable.
Hashing: Hashing is the process of transforming any
given key or a string of characters into another value. This is usually
represented by a shorter, fixed-length value or key that represents and
makes it easier to find or employ the original string. A hash function
generates new values according to a mathematical hashing algorithm,
known as a hash value or simply a hash. To prevent the conversion of
hash back into the original key, a good hash always uses a one-way
hashing algorithm.
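A quick illustration with SHA-256: the digest has a fixed length, and
any change to the input yields a completely different value (the inputs
are arbitrary examples):

```python
import hashlib

digest_a = hashlib.sha256(b"evidence.img").hexdigest()
digest_b = hashlib.sha256(b"evidence.imh").hexdigest()  # one byte different
```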
Checksums: Checksums are similar to hashes. A
checksum (such as CRC32) is designed to detect accidental changes: if
one byte changes, the checksum changes. A checksum is not safe against
malicious changes, because it is easy to create a file with a particular
checksum. A hash function maps some data to other data and is often used
to speed up comparisons or to build a hash table; not all hash functions
are secure, and a hash does not necessarily change when the data
changes.
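A small illustration of a checksum detecting accidental corruption
(arbitrary example data):

```python
import zlib

# CRC32 detects accidental corruption: flipping a single byte changes
# the checksum, but CRC32 is not collision-resistant against tampering.
original = b"important log data"
corrupted = b"important log dbta"  # one byte changed in transit

crc_original = zlib.crc32(original)
crc_corrupted = zlib.crc32(corrupted)
```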
Provenance: The source of the data, i.e. where it originated.
Preservation: It’s important when working with data
as evidence that we are able to preserve this information and to verify
that nothing has changed with this information while it’s been stored.
We commonly will take the original source of data and create a copy of
that data, often imaging storage drives or copying everything that might
be on a mobile device.
E-discovery: Electronic discovery (sometimes known
as e-discovery, ediscovery, eDiscovery, or e-Discovery) is the
electronic aspect of identifying, collecting and producing
electronically stored information (ESI) in response to a request for
production in a law suit or investigation. ESI includes, but is not
limited to, emails, documents, presentations, databases, voicemail,
audio and video files, social media, and web sites. After data is
identified by the parties on both sides of a matter, potentially
relevant documents (including both electronic and hard-copy materials)
are placed under a legal hold – meaning they cannot be modified,
deleted, erased or otherwise destroyed. Potentially relevant data is
collected and then extracted, indexed and placed into a database. At
this point, data is analysed to cull or segregate the clearly
non-relevant documents and e-mails. The data is then hosted in a secure
environment and made accessible to reviewers who code the documents for
their relevance to the legal matter (contract attorneys and paralegals
are often used for this phase of the document review).
Data recovery: In computing, data recovery is a
process of salvaging deleted, inaccessible, lost, corrupted, damaged, or
formatted data from secondary storage, removable media or files, when
the data stored in them cannot be accessed in a usual way.
Non-repudiation: If we can ensure that the
information that we’ve received is exactly what was sent and we can
verify the person who sent it, then we have what’s called
non-repudiation.
Strategic intelligence/counterintelligence:
Gathering evidence can also be done by using strategic intelligence.
This is when we are focusing on a domain and gathering threat
information about that domain. We might want to look at business
information, geographic information, or details about a specific
country. If we’re the subject of someone’s strategic intelligence, we
may want to prevent that intelligence from occurring. And instead, we
would perform strategic counterintelligence or CI. With CI, we would
identify someone trying to gather information on us. And we would
attempt to disrupt that process. And then we would begin gathering our
own threat intelligence on that foreign operation.
5.0 Governance, Risk, and
Compliance
5.1 Compare and
contrast various types of controls.
Category:
Managerial: This is a control that focuses on the
design of the security or the policy implementation associated with the
security. We might have a set of security policies for our organisation
or set of standard operating procedures that everyone is expected to
follow.
Operational: These are controls that are managed by
people. If we have security guards posted at the front doors or we have
an awareness program to let people know that phishing is a significant
concern, these would be operational controls.
Technical: We can use our own systems to prevent
some of these security events from occurring, these would be technical
controls. If you’ve implemented antivirus on your workstations or
there’s a firewall connecting you to the internet, you could consider
these technical controls.
Control type:
Preventive: This would be something that prevents
access to a particular area. Something like locks on a door or a
security guard would certainly prevent access as would a firewall,
especially if we have a connection to the internet.
Detective: A detective control type commonly
identifies and is able to record that a security event has occurred, but
it may not be able to prevent access.
Corrective: A corrective control is designed to
mitigate any damage that occurred because of a security event.
Deterrent: A deterrent may not stop an intrusion
from occurring but it may deter someone from performing an
intrusion.
Compensating: A compensating control attempts to
recover from an intrusion by compensating for the issues that were left
behind. If someone cut the power to our data center, we could have
backup power systems or generators that would compensate for that lack
of power.
Physical: A physical control type is something we
would have in the real world that would prevent the security event, like
a fence or a door lock.
5.2
Explain the importance of applicable regulations, standards or
frameworks that impact organisational security posture.
Regulations, standards, and legislation:
GDPR: The GDPR codifies and unifies data privacy
laws across all European Union member countries. Penalties for
non-compliance with the provisions of the GDPR regarding collecting and
using personal data are potentially devastating. Personal data is
defined as any information related to a natural person that can be used
to directly or indirectly identify that person. The provisions of the
GDPR for keeping the personal data of customers secure and regarding the
legal collection and use of that data by businesses are straightforward
and basic common sense, but the penalties laid out for violations are
significant.
National, territory, or state laws: Most nations,
territories, and states have their own laws regarding privacy, digital
security, data security, and related topics.
PCI DSS: The Payment Card Industry Data Security
Standard (PCI DSS) is a set of requirements intended to ensure that all
companies that process, store, or transmit credit card information
maintain a secure environment. It was launched on September 7, 2006, to
manage PCI security standards and improve account security throughout
the transaction process. An independent body created by Visa,
MasterCard, American Express, Discover, and JCB, the PCI Security
Standards Council (PCI SSC) administers and manages the PCI DSS.
Interestingly, the payment brands and acquirers are responsible for
enforcing compliance, rather than the PCI SSC.
Key frameworks:
CIS: The Center for Internet Security (CIS)
benchmarks are a set of best-practice cybersecurity standards for a
range of IT systems and products. CIS Benchmarks provide the baseline
configurations to ensure compliance with industry-agreed cybersecurity
standards. The benchmarks are developed by CIS alongside communities of
cybersecurity experts within industry and research institutes. CIS
Benchmarks can be seen as frameworks to configure IT services and
products.
NIST RMF/CSF: The Cybersecurity Framework (CSF) was
created by The National Institute of Standards and Technology (NIST) as
a voluntary cybersecurity framework based on existing standards,
guidelines, and practices for organisations to better manage and reduce
cybersecurity risk. Although CSF was initially targeted at critical
infrastructure it has now become the de facto cybersecurity standard and
is being implemented universally in both the private and public sectors.
In contrast to the NIST CSF — originally aimed at critical
infrastructure and commercial organisations — the NIST RMF has always
been mandatory for use by federal agencies and organisations that handle
federal data and information.
ISO 27001/27002/27701/31000:
ISO/IEC 27001 is the world’s best-known standard for information
security management systems (ISMS) and their requirements. Additional
best practices in data protection and cyber resilience are covered by
more than a dozen standards in the ISO/IEC 27000 family. Together, they
enable organisations of all sectors and sizes to manage the security of
assets such as financial information, intellectual property, employee
data and information entrusted by third parties.
ISO/IEC 27002 is an information security standard published by the
International Organisation for Standardization (ISO) and by the
International Electrotechnical Commission (IEC), titled Information
security, cybersecurity and privacy protection — Information security
controls. ISO/IEC 27002 provides best practice recommendations on
information security controls for use by those responsible for
initiating, implementing or maintaining information security management
systems (ISMS). Information security is defined within the standard in
the context of the CIA triad: the preservation of confidentiality
(ensuring that information is accessible only to those authorized to
have access), integrity (safeguarding the accuracy and completeness of
information and processing methods) and availability (ensuring that
authorized users have access to information and associated assets when
required).
ISO/IEC 27701:2019 (formerly known as ISO/IEC 27552 during the
drafting period) is a privacy extension to ISO/IEC 27001. The design
goal is to enhance the existing Information Security Management System
(ISMS) with additional requirements in order to establish, implement,
maintain, and continually improve a Privacy Information Management
System (PIMS). The standard outlines a framework for Personally
Identifiable Information (PII) Controllers and PII Processors to manage
privacy controls to reduce the risk to the privacy rights of
individuals.
ISO 31000 is a family of standards relating to risk management
codified by the International Organisation for Standardization. ISO
31000:2018 provides principles and generic guidelines for managing the
risks faced by organisations, as these can have consequences in terms of
economic performance and professional reputation. ISO 31000 seeks to
provide a universally recognized paradigm
for practitioners and companies employing risk management processes to
replace the myriad of existing standards, methodologies and paradigms
that differed between industries, subject matters and regions. For this
purpose, the recommendations provided in ISO 31000 can be customised to
any organisation and its context.
SSAE SOC 2 Type I/II: Statement on Standards for
Attestation Engagements (SSAE) is a standard from the American Institute
of Certified Public Accountants (AICPA). The examinations and audits of
these Standards are known as Service Organisation Control (SOC) reports.
SOC 1: Controls over financial reporting. This is most relevant for
organisations that provide financial services, such as payroll, banking,
investments, capital management, etc.
SOC 2: Focuses on information security, privacy, integrity,
confidentiality, addressing both cybersecurity and business process
controls. The list of controls usually follows a selected framework,
taking into account additional requirements from partnering
businesses.
Cloud Security Alliance: Cloud Security Alliance
(CSA) is a not-for-profit organisation with the mission to “promote the
use of best practices for providing security assurance within cloud
computing, and to provide education on the uses of cloud computing to
help secure all other forms of computing.”
Cloud Control Matrix: The Cloud Security Alliance
Cloud Controls Matrix (CCM) is specifically designed to provide
fundamental security principles to guide cloud vendors and to assist
prospective cloud customers in assessing the overall security risk of a
cloud provider. The CSA CCM provides a controls framework that gives
detailed understanding of security concepts and principles that are
aligned to the Cloud Security Alliance guidance in 13 domains. The
foundations of the Cloud Security Alliance Controls Matrix rest on its
customised relationship to other industry-accepted security standards,
regulations, and controls frameworks such as the ISO 27001/27002, ISACA
COBIT, PCI, NIST, Jericho Forum and NERC CIP and will augment or provide
internal control direction for service organisation control reports
attestations provided by cloud providers.
Benchmarks/secure configuration guides:
Platform/vendor specific guides (related to correct
configuration and hardening/security) typically exist for:
Web servers
Operating systems
Application servers
Network infrastructure devices
5.3
Explain the importance of policies to organisational security.
Personnel:
Acceptable use policy: An acceptable use policy
(AUP) is a document stipulating constraints and practices that a user
must agree to for access to a corporate network, the internet or other
resources.
Job rotation: Job rotation is the concept of not
having one person in one position for a long period of time. The purpose
is to prevent a single individual from having too much control. Allowing
someone to have total control over certain assets can result in the
misuse of information, the possible modification of data, and
fraud.
Mandatory vacation: Mandatory vacations are an
administrative control that provides operational security by forcing
employees to take vacations. They reinforce job rotation principles,
with the added advantage that an employee covering the role may detect
any unethical activity that has occurred.
Separation of duties: Refers to the principle that
no user should be given enough privileges to misuse the system on their
own. For example, the person authorising a paycheck should not also be
the one who can prepare them.
Least privilege: The principle of least privilege
is a security concept in which a user is given the minimum levels of
access or permissions needed to perform their job.
Clean desk space: A clean desk policy (CDP) is a
corporate directive that specifies how employees should leave their work
space when they leave the office. CDPs are primarily used to ensure
important papers are not left out and to conform to data security
regulations.
Background checks: A background check is a process
a person or company uses to verify that an individual is who they claim
to be, and this provides an opportunity to check and confirm the
validity of someone’s criminal record, education, employment history,
and other activities from their past.
NDA: A non-disclosure agreement (NDA) is a legally
binding contract that establishes a confidential relationship. The party
or parties signing the agreement agree that sensitive information they
may obtain will not be made available to any others.
Social media analysis: In many jurisdictions there are no laws
prohibiting an employer from monitoring employees on social networking
sites.
Onboarding: Onboarding refers to the processes in
which new hires are integrated into an organisation.
Offboarding: Employee offboarding describes the
separation process when an employee leaves a company.
User training:
Gamification: Gamification in training is the act
of adding competitive game-based elements to training programs in order
to create a fun and engaging training environment while also increasing
learning engagement.
Capture the flag: Capture the Flag (CTF) in
computer security is an exercise in which “flags” are secretly hidden in
purposefully-vulnerable programs or websites. It can either be for
competitive or educational purposes. Competitors steal flags either from
other competitors (attack/defense-style CTFs) or from the organisers
(jeopardy-style challenges).
Phishing campaigns:
Phishing simulations: Anti-phishing and security
training solutions show employees the different types of attacks, how to
recognize the subtle clues and report suspicious emails to your IT
department. As part of the training, phishing simulations and other mock
attacks are typically used to test and reinforce good employee behavior.
Advanced solutions provide highly-variable attack simulations for
multiple vectors, including voice, text messages and found physical
media.
Computer Based Training (CBT): Computer Based
Training (CBT) is any course of instruction whose primary means of
delivery is a computer.
Role based training: Role-based training is the
term used to describe most learning activities centred around the
practical application of learned skills. For example, in a retail
setting, a role-based training exercise could be having new
cashiers-in-training operate a real cash register.
Diversity of training techniques: Using a variety of training techniques
helps keep learners engaged and accommodates different learning
styles.
Third-party risk management:
Vendors: Third-party vendor risk refers to any risk
incurred on an organisation by external parties like service providers,
vendors, suppliers, partners, or contractors. These external parties
pose a risk due to their access to internal company systems, data, and
other privileged information.
Supply chain: Risks to the supply chain range from
unpredictable natural events (such as tsunamis and pandemics) to
counterfeit products, and span quality, security, resiliency, and
product integrity.
Business partners: Are an inherent risk due to
their level of control and the amount of knowledge they have.
SLA: A service-level agreement is a commitment
between a service provider and a client. Particular aspects of the
service – quality, availability, responsibilities – are agreed between
the service provider and the service user.
MOU: A memorandum of understanding is a type of
agreement between two or more parties. It expresses a convergence of
will between the parties, indicating an intended common line of
action.
MSA: A master service agreement, sometimes known as
a framework agreement, is a contract reached between parties, in which
the parties agree to most of the terms that will govern future
transactions or future agreements. A master agreement delineates a
schedule of lower-level service agreements, permitting the parties to
quickly enact future transactions or agreements, negotiating only the
points specific to the new transactions and relying on the provisions in
the master agreement for common terms.
BPA: Business Partnership Agreement.
EOL: An end-of-life product is a product at the end
of the product lifecycle which prevents users from receiving updates,
indicating that the product is at the end of its useful life. At this
stage, a vendor stops the marketing, selling, or provision of parts,
services or software updates for the product.
EOSL: End of Service Life; the point after which the vendor provides no
support, updates, or parts of any kind, typically occurring after
EOL.
NDA: A non-disclosure agreement is a legal contract
or part of a contract between at least two parties that outlines
confidential material, knowledge, or information that the parties wish
to share with one another for certain purposes, but wish to restrict
access to.
Data:
Classification: Data classification is broadly
defined as the process of organising data by relevant categories so that
it may be used and protected more efficiently.
Governance: Data governance (DG) is the process of
managing the availability, usability, integrity and security of the data
in enterprise systems, based on internal data standards and policies
that also control data usage. Effective data governance ensures that
data is consistent and trustworthy and doesn’t get misused. It’s
increasingly critical as organisations face new data privacy regulations
and rely more and more on data analytics to help optimise operations and
drive business decision-making.
Retention: Data retention defines the policies of
persistent data and records management for meeting legal and business
data archival requirements.
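A retention policy check can be sketched as a simple age comparison (the
seven-year period and dates below are hypothetical):

```python
from datetime import datetime, timedelta, timezone

# Records older than the retention period are flagged for archival or
# deletion; the seven-year period is a made-up example policy.
RETENTION = timedelta(days=365 * 7)

def is_expired(created: datetime, now: datetime) -> bool:
    """True if the record has passed its retention period."""
    return now - created > RETENTION

now = datetime(2024, 1, 1, tzinfo=timezone.utc)
old_record = datetime(2015, 1, 1, tzinfo=timezone.utc)  # nine years old
```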
Credential policies:
Personnel: Unshared accounts (one account per
person); user (non-root) access.
Third party: Multifactor authentication; accounts
need to be audited regularly; no account sharing.
Service accounts: Accounts tied to a service; scope
should be limited to service requirements.
Administrator/root accounts: Should be disabled, or
at least not logged in to; for temporary use only.
Organisational policies:
Change management: Change management is a
systematic approach to dealing with the transition or transformation of
an organisation’s goals, processes or technologies.
Change control: Within quality management systems
and information technology systems, change control is a process — either
formal or informal — used to ensure that changes to a product or system
are introduced in a controlled and coordinated manner.
Asset management: Asset management is the process
of planning and controlling the acquisition, operation, maintenance,
renewal, and disposal of organisational assets.
5.4 Summarise
risk management processes and concepts.
Risk types:
External: Risk from external factors such as the
weather, the power company, or a third party contractor.
Internal: Risk from internal factors such as
employees.
Legacy systems: Legacy systems can have unpatched
security vulnerabilities.
Multiparty: The situation that arises when supplier
relationships combine to create interfaces and points of collaboration
that increase the attack surface.
IP theft: The theft of intellectual property, such as trade secrets,
designs, source code, or other proprietary business information.
Software compliance/licensing: Software licences
can be complex and may come with liability clauses/statements.
Risk management strategies:
Acceptance: Accepting risk, or risk acceptance,
occurs when a business or individual acknowledges that the potential
loss from a risk is not great enough to warrant spending money to avoid
it.
Avoidance: Risk avoidance is the elimination of
hazards, activities and exposures that can negatively affect an
organisation and its assets.
Transference: A risk management and control
strategy that involves the contractual shifting of a pure risk from one
party to another.
Cybersecurity insurance: Cybersecurity insurance,
also called cyber liability insurance or cyber insurance, is a contract
that an entity can purchase to help reduce the financial risks
associated with doing business online. In exchange for a monthly or
quarterly fee, the insurance policy transfers some of the risk to the
insurer.
Mitigation: Risk mitigation is the coordinated and economical
application of resources to minimize, monitor, and control the
probability or impact of unfortunate events.
Risk analysis:
Risk register: A risk register is a tool in risk
management and project management. It is used to identify potential
risks in a project or an organisation, sometimes to fulfill regulatory
compliance but mostly to stay on top of potential issues that can derail
intended outcomes.
Risk matrix/heat map: A risk matrix is a matrix
that is used during risk assessment to define the level of risk by
considering the category of probability or likelihood against the
category of consequence severity.
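A simple risk matrix can be computed from likelihood and impact scores
(the 1-3 scale and banding thresholds here are illustrative, not from
any standard):

```python
def risk_level(likelihood: int, impact: int) -> str:
    """likelihood and impact are scored 1 (low) to 3 (high)."""
    score = likelihood * impact
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"
```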
Risk control assessment: A risk and control
assessment is the process by which organisations assess and examine
operational risks and the effectiveness of controls used to
mitigate them. It’s one of the easiest and most effective tools in
the risk management arsenal, and the objective is simple: to provide
firms with reasonable assurance that all business objectives are going
to be met, and existing risk management protocols are sustainable and
robust.
Risk control self-assessment: Risk and control self
assessment (RCSA) is a process through which operational risks and the
effectiveness of controls are assessed and examined. The objective is to
provide reasonable assurance that all business objectives will be met.
One of the most popular approaches for conducting RCSA is to hold a
workshop where the stakeholders identify and assess risks and controls
in their respective areas of operations.
Risk awareness: Cyber security awareness is the
combination of both knowing and doing something to protect a business’s
information assets. When an enterprise’s employees are cyber security
aware, it means they understand what cyber threats are, the potential
impact a cyber-attack will have on their business and the steps required
to reduce risk and prevent cyber-crime infiltrating their online
workspace.
Inherent risk: The probability that a cybersecurity
event will occur in the absence of countermeasures.
Residual risk: Residual risk is what remains after
risk mitigation efforts have been implemented.
Risk appetite: Risk appetite is the level of risk
that an organisation is prepared to accept in pursuit of its objectives,
before action is deemed necessary to reduce the risk. It represents a
balance between the potential benefits of innovation and the threats
that change inevitably brings.
Regulations that affect risk posture: Often,
governments will set standards that need to be complied with that reduce
risk.
Risk assessment types:
Qualitative: Qualitative risk focuses on
identifying risks to measure both the likelihood of a specific risk
event occurring during the project life cycle and the impact it will
have on the overall schedule should it occur.
Quantitative: Quantitative risk analysis uses
verifiable data to analyse the effects of risk in terms of cost
overruns, scope creep, resource consumption, and schedule delays.
Likelihood of occurrence: Risk likelihood means the
possibility of a potential risk occurring, interpreted using qualitative
values such as low, medium, or high.
Impact: The impact assessment estimates the effects
of a risk event on a project objective.
Asset value: This type of risk analysis assigns
independent, objective, numeric monetary values to the elements of risk
assessment and the assessment of potential losses.
Single-loss expectancy (SLE): Single-loss
expectancy (SLE) is the monetary value expected from the occurrence of a
risk on an asset.
Annual-loss expectancy (ALE): The annualised loss
expectancy is the product of the annual rate of occurrence and the
single loss expectancy.
Annualised rate of occurrence (ARO): Refers to the
expected frequency with which a risk or a threat is expected to occur.
ARO is also commonly referred to as probability determination.
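The SLE/ARO/ALE definitions above reduce to two multiplications. The asset value, exposure factor, and incident frequency below are hypothetical figures chosen purely for illustration:

```python
# Quantitative risk calculation with made-up numbers: a $20,000 asset,
# an exposure factor of 25% (fraction of value lost per incident), and
# an expected two incidents per year.
asset_value = 20_000
exposure_factor = 0.25

sle = asset_value * exposure_factor   # single-loss expectancy
aro = 2                               # annualised rate of occurrence
ale = sle * aro                       # annualised loss expectancy

print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f}")   # SLE = $5,000, ALE = $10,000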
Disasters:
Environmental: An environmental disaster or
ecological disaster is defined as a catastrophic event regarding the
natural environment that is due to human activity.
Person made: Person-made disasters have an element
of human intent, negligence, or error involving a failure of a
person-made system, as opposed to natural disasters resulting from
natural hazards. Examples include crime, arson, civil
disorder, terrorism, war, biological/chemical threats, and
cyber-attacks.
Internal vs. external: Internal threats originate from within the
organisation, such as employees. External threats originate from
outside it, such as third-party contractors.
Business impact analysis:
Recovery Time Objective (RTO): The recovery time
objective (RTO) is the amount of real time a business has to restore its
processes at an acceptable service level after a disaster to avoid
intolerable consequences associated with the disruption.
Recovery Point Objective (RPO): Recovery point
objective (RPO) is defined as the maximum amount of data – as measured
by time – that can be lost after a recovery from a disaster, failure, or
comparable event before data loss will exceed what is acceptable to an
organisation.
Mean Time To Repair (MTTR): Mean time to repair is
a basic measure of the maintainability of repairable items. It
represents the average time required to repair a failed component or
device.
Mean Time Between Failures (MTBF): Mean time
between failures is the predicted elapsed time between inherent failures
of a mechanical or electronic system, during normal system operation.
MTBF can be calculated as the arithmetic mean time between failures of a
system.
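MTBF and MTTR are both arithmetic means, and together they give steady-state availability (MTBF / (MTBF + MTTR)). The maintenance records below are invented for the sketch:

```python
# Hypothetical maintenance records: hours of operation between failures,
# and hours spent repairing each failure.
uptimes = [400, 360, 440]   # operating time between failures (h)
repairs = [2, 4, 3]         # repair time per failure (h)

mtbf = sum(uptimes) / len(uptimes)    # mean time between failures
mttr = sum(repairs) / len(repairs)    # mean time to repair
availability = mtbf / (mtbf + mttr)   # steady-state availability

print(f"MTBF = {mtbf:.0f} h, MTTR = {mttr:.0f} h, "
      f"availability = {availability:.2%}")
```

Note how a small MTTR keeps availability high even with frequent failures, which is why both measures appear in business impact analysis rather than MTBF alone.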
Functional recovery plans: A set of processes and
procedures that can take an organisation from the very beginning of
resolving the issue all the way through to getting back up and
running.
Single point of failure: Single points of failure
(a part of a system that, if it fails, will stop the entire system from
working) should be identified and then remedied.
Disaster Recovery Plan (DRP): A disaster recovery
plan (DRP) is a formal document created by an organisation that contains
detailed instructions on how to respond to unplanned incidents such as
natural disasters, power outages, cyber attacks and any other disruptive
events.
Mission essential functions: Mission essential
functions (MEF) are those essential functions directly related to
accomplishing the organisation’s mission or goals. In most cases, these
functions are unique to each organisation and its mission.
Identification of critical systems:
Mission-essential functions and critical systems are those that are
essential to an organisation’s success.
Site risk assessment: A cyber security risk
assessment identifies the information assets that could be affected by a
cyber attack (such as hardware, systems, laptops, customer data and
intellectual property). It then identifies the risks that could affect
those assets.
5.5
Explain privacy and sensitive data concepts in relation to
security.
Organisational consequences of privacy and data
breaches:
Reputation damage: Data breaches have the potential
to ruin businesses. The reputation of organisations that suffer a data
breach is likely to be lower than those which have not suffered a data
breach.
Identity theft: Identity theft can impact
individuals substantially, through financial, medical, and other means.
Identity theft can also enable attackers to gain access to restricted
information and resources within an organisation, leading to a cyber
attack/data breach.
Fines: Companies and organisations that do not
employ proper security practices may be fined by their government for
failing to protect the data of their users and employees. This is
opportunistic, predatory behaviour from the government, because they’re
not the ones who’ve had their data or identity compromised - affected
users should be compensated instead. The government should enforce the
laws it puts in place and support organisations’ ability to comply with
them so that these situations do not occur in the first place.
IP theft: The theft of intellectual property such as trade secrets,
designs, or source code, which can cost an organisation its
competitive advantage.
Notifications of breaches: May be legislated to
occur within a certain time frame.
Escalation: Data breaches, once discovered, should
be escalated immediately to the correct person through a predefined
plan/policy/procedure. No blame should be laid on the user who does the
escalating.
Public notifications and disclosures: May
be legislated to occur within a certain time frame.
Data types:
Classifications:
Public: Public data is the least sensitive data
used by the company and would cause the least harm if disclosed. This
could be anything from data used for marketing to the number of
employees in the company.
Private: Private data is usually compartmental data
that might not do the company damage but must be kept private for other
reasons. Human resources data is one example of data that can be
classified as private.
Sensitive: Data that is to have the most limited
access and requires a high degree of integrity. This is typically data
that will do the most damage to the organisation should it be
disclosed.
Confidential: Data that might be less restrictive
within the company but might cause damage if disclosed.
Critical: An adjective, not a classification.
Proprietary: Data owned by a company.
PII: Personally Identifiable Information -
Information that can be used on its own or with other information to
identify, contact or locate a single person, or to identify an
individual in context.
Health information: Cybersecurity in healthcare
involves protecting electronic health information and assets from
unauthorised access, use, and disclosure.
Financial information: Financial institutions are
leading targets of cyber attacks. Banks are where the money is, and for
cybercriminals, attacking banks offers multiple avenues for profit
through extortion, theft, and fraud, while nation-states and hacktivists
also target the financial sector for political and ideological
leverage.
Government data: Governments collect and store vast
amounts of data about individuals and classified topics, which must be
secured.
Privacy enhancing technologies:
Data minimisation: Only take what you need. Get rid
of it once you’re done with it.
Data masking: Data masking or data obfuscation is
the process of modifying sensitive data in such a way that it is of no
or little value to unauthorised intruders while still being usable by
software or authorised personnel.
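A common masking example is hiding all but the last four digits of a card number. The function name and format below are assumptions for the sketch:

```python
# Minimal data-masking sketch: replace all but the last four digits of a
# (hypothetical) card number with asterisks. The masked value keeps its
# length and tail, so it stays usable for display and matching.
def mask_card(number: str) -> str:
    digits = number.replace(" ", "")
    return "*" * (len(digits) - 4) + digits[-4:]

print(mask_card("4111 1111 1111 1234"))   # ************1234
```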
Tokenisation: Tokenisation, when applied to data
security, is the process of substituting a sensitive data element with a
non-sensitive equivalent, referred to as a token, that has no intrinsic
or exploitable meaning or value. The token is a reference that maps back
to the sensitive data through a tokenisation system.
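The key property of a token is that it is random rather than derived from the data, so it cannot be reversed without the mapping. A toy in-memory vault (a real system would use a hardened tokenisation service) illustrates the round trip:

```python
import secrets

# Toy tokenisation vault: an in-memory dict stands in for the
# tokenisation system. Tokens are random, so they carry no exploitable
# meaning on their own; only the vault can map them back.
_vault: dict[str, str] = {}

def tokenise(sensitive: str) -> str:
    token = secrets.token_urlsafe(16)   # random, non-derivable token
    _vault[token] = sensitive
    return token

def detokenise(token: str) -> str:
    return _vault[token]

token = tokenise("4111-1111-1111-1234")
print(token)                 # random string, unrelated to the card number
print(detokenise(token))     # 4111-1111-1111-1234
```

This is what distinguishes tokenisation from encryption: there is no key that recovers the original value, only a lookup in the tokenisation system.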
Anonymisation: Data anonymisation is the process of
protecting private or sensitive information by erasing or encrypting
identifiers that connect an individual to stored data.
Pseudo-anonymisation: Pseudo-anonymisation is a
data management and de-identification procedure by which personally
identifiable information fields within a data record are replaced by one
or more artificial identifiers, or pseudonyms.
Roles and responsibilities: These roles and
responsibilities overlap and may be consolidated.
Data owners: These will be senior people within
your organisation who have signed up to be accountable for the quality
of a defined dataset. For example, you may have your Finance Director as
the Data Owner for finance data in your organisation. In order for this
role to have the authority it needs, it should be undertaken by senior
individuals. However, this level of seniority means that they are
"unlikely to have the time" to be involved in data quality
activities on a day-to-day basis.
Data controller: In GDPR and other privacy laws,
the data controller has the most responsibility when it comes to
protecting the privacy and rights of the data’s subject, such as the
user of a website. Simply put, the data controller controls the
procedures and purpose of data usage. In short, the data controller will
be the one to dictate how and why data is going to be used by the
organisation.
Data processor: A data processor simply processes
any data that the data controller gives them. The third-party data
processor does not own the data that they process nor do they control
it. This means that the data processor will not be able to change the
purpose and the means in which the data is used. Furthermore, data
processors are bound by the instructions given by the data
controller.
Data custodian/steward: The main difference between
a Data Owner and a Data Steward is that the latter is responsible for
the quality of a defined dataset on a day-to-day basis. For example, it is
likely that they will draft the data quality rules by which their data
is measured and the Data Owner will approve those rules.
Data protection officer (DPO): The primary role of
the data protection officer (DPO) is to ensure that their organisation
processes the personal data of its staff, customers, providers or any
other individuals (also referred to as data subjects) in compliance with
the applicable data protection rules.
Information life cycle: Information lifecycle
management is the consistent management of information from creation to
final disposition.
Impact assessment: A Privacy Impact Assessment is a
process which assists organisations in identifying and managing the
privacy risks arising from new projects, initiatives, systems,
processes, strategies, policies, business relationships etc.
Terms of agreement: Terms of service are the legal
agreements between a service provider and a person who wants to use that
service.
Privacy notice: A privacy policy is a statement
that explains in simple language how an organisation or agency handles
your personal information.