Encryption is one of the basic building blocks for any organization that holds sensitive data. Keeping sensitive data compliant with data privacy regulations also builds brand value, as your organization becomes less prone to data breaches. As we all know, the strength of encryption depends upon two critical factors:
Key Length
Security of the Keys
Key length is quantifiable and is determined by the chosen encryption algorithm, such as AES-128 or AES-256. The security of the keys, on the other hand, is a subjective matter: the better protected the keys are (private keys in asymmetric encryption, shared keys in symmetric encryption), the stronger the overall encryption landscape is.
In today’s article, we will compare Cloud-based HSMs and On-prem HSMs and try to answer what criteria a customer should use to choose the appropriate option for their organization’s crypto security.
As organizations accelerate their cloud journey to take advantage of the cloud’s scalability, flexibility, and cost-effectiveness, they must think in parallel about data security across their IT landscape. This makes encryption, and consequently HSMs, an inevitable component of an organization’s cybersecurity strategy. Based on the use cases, we can classify HSMs into two categories: Cloud-based HSMs and On-prem HSMs. Regarding this classification, be clear that the underlying cryptographic technology is the same in both cases; it is simply delivered via different methods.
On-prem HSMs are particularly useful when an organization wants complete control over its keys and policies, without any dependency on a Cloud Service Provider (CSP). However, this comes at a substantial upfront investment in hardware, skilled resources, management software licenses for the HSM cluster, and so on.
On-prem HSMs also make sense when an organization runs a secure application that is extremely sensitive to latency; by using an On-prem HSM only, the application avoids the round trip to the cloud. Another important use case is an application with intensive cryptographic operations, whether driven by security best practices, technological design, or performance requirements. On-prem HSMs also benefit organizations operating in countries with strict regulatory or compliance requirements on data localization, where the CSP may not have a local datacenter. Finally, they suit organizations with foreseeable workloads, where business requirements and transaction volumes are highly unlikely to exceed the capacity of the HSM in the near future.
On the other hand, Cloud-based HSMs offer the full advantages of the cloud in addition to the conventional features of HSMs. To dig deeper, we can further classify Cloud-based HSMs into two categories: Public Cloud HSM Services and Third-Party HSM Services. Some Public Cloud HSM Services offer single-tenant/dedicated or multi-tenant services (e.g. AWS, Azure), whereas others offer only multi-tenant services (e.g. GCP KMS, Oracle Key Vault); these services are best suited for organizations dependent upon a single CSP. With Third-Party HSM Services, you can leverage multi-cloud platforms managed through a central management portal (e.g. DPoD), so these services are best suited for organizations with multi-cloud strategies. They also offer use-case-based modular services, such as Key Vault, Oracle TDE (Transparent Data Encryption), and code/digital signing, to reduce data protection costs.
Cloud-based HSMs are extremely helpful for SMB (small and medium business) organizations that already have other IT service dependencies and for which the substantial upfront investment of an On-prem HSM may not be cost-effective. Another classic use case is an enterprise that wants to test or pilot multiple vendors’ HSM services with minimal upfront investment before committing to one. Cloud-based HSMs are also useful for organizations or departments with light workloads, where application performance and latency requirements are not stringent enough to require a dedicated On-prem HSM. This model suits smaller organizations that prefer the foreseeable, PAYG (pay-as-you-go) financial model offered by the CSP over the high initial capital investment required by an On-prem HSM. Organizations or departments with highly variable workloads that require elasticity (scaling the HSM infrastructure up and down) also come under the Cloud-based HSM umbrella.
Comparison at a Glance
Hardware
Cloud-based HSM: No hardware required
On-prem HSM: Multiple hardware units required, including for resiliency, HA, the management platform, etc.
Software Licenses
Cloud-based HSM: Included in the cost
On-prem HSM: Licenses may be required for each partition and for client software
Setup and Deployment
Cloud-based HSM: Easy, with CSP documentation
On-prem HSM: Complex and skill-dependent
Maintenance
Cloud-based HSM: Responsibility of the CSP
On-prem HSM: Responsibility of the organization
Management Overhead
Cloud-based HSM: Low, as it is a managed service from the CSP
On-prem HSM: High, as it is managed by the organization
SLA (Service Level Agreements)
Cloud-based HSM: Responsibility of the CSP
On-prem HSM: Responsibility of the organization
Operational Technical Knowledge
Cloud-based HSM: Medium, with the CSP’s documentation and vendor support
On-prem HSM: High, as it is managed by the organization
Total Cost of Ownership
On-prem HSM: High, specifically for a low number of partitions
*CSP: Cloud Service Provider
The HSM service is certainly a critical component when designing and deciding the data privacy measures for your organization’s PKI infrastructure. The decision between a Cloud-based HSM and an On-prem HSM is a function of TCO (total cost of ownership), the number and complexity of use cases, business, regulatory, and legal compliance requirements, foreseeable growth in the volume of sensitive data, divergent data sources, and the choice of business applications, to name a few. Cloud-based HSM services are becoming more popular as more and more organizations move to the cloud for its numerous benefits, but On-prem HSMs remain critical in the few cases where Cloud Service Providers (CSPs) hit limitations. To conclude, one thing remains consistently clear: the benefits offered by Public Key Infrastructure (PKI) can be completely undermined if private keys are compromised. Protecting and managing those keys is therefore a critical requirement for enterprise data security, and HSMs, whether On-prem or Cloud-based, are the best options today to fulfil that requirement.
Dipanshu Bhatnagar is a Principal Consultant, Cloud Security Specialty, at Encryption Consulting, working with PKIs, AWS Cloud cryptographic services and tools, and Google Cloud cryptographic services, and helping high-profile clients on their cloud journey with complete data privacy assurance.
In 2014, JPMorgan Chase suffered a massive cyber-attack in which the data of 76 million private customers and 7 million business customers was leaked. The attacker gained administrative rights due to non-functional two-factor authentication and was able to access user data. The web server and the web application were secured, but the database from which the data was copied remained unencrypted.
If Format Preserving Encryption had been used, this situation could have been mitigated. With FPE, there would not have been any change to the database schema, and the encryption could be integrated on the fly.
What is Format Preserving Encryption?
For basic information in regard to FPE, please refer to this link
To give you some context, Format Preserving Encryption, or FPE, is an encryption technique that preserves the format of the clear text while it remains encrypted. The cryptographic strength of FPE is lower than that of AES in its standard modes, but FPE is an important mechanism for encrypting data while preserving the data length. FPE ensures that while data remains encrypted, all programs, applications, and databases continue to function.
Why use Format Preserving Encryption?
Implementing a perfectly secure network is much harder than encrypting your data; encrypting the data is cheaper, easier, and more reliable. Many organizations run legacy infrastructure that may not be as secure, so protecting all of the data on the legacy network protects the data even if the network is compromised, and with FPE this change can be made with almost no impact to existing infrastructure. Even an organization with a robust infrastructure may face issues when its data is under audit: no one wants to reveal raw customer data that could put their reputation under siege. FPE can thus be used to de-identify data, removing all PII (Personally Identifiable Information) of customers, and it serves as an extra defence mechanism if data is breached.
As per NIST 800-38G:
Format-preserving encryption (FPE) is designed for data that is not necessarily binary. In particular, given any finite set of symbols, like the decimal numerals, a method for FPE transforms data that is formatted as a sequence of the symbols in such a way that the encrypted form of the data has the same format, including the length, as the original data. Thus, an FPE encrypted SSN would be a sequence of nine decimal digits.
So, if we encrypt a 16-digit credit card number, FPE returns another 16-digit value, and a 9-digit Social Security Number returns another 9-digit value. This cannot be achieved with other modes of encryption: with AES, an encrypted credit card number might look like 0B6X8rMr058Ow+z3Ju5wimxYERpomz402++zNozLhvw=, which is longer than 16 characters and contains more than just numbers. That kind of output would not work in most systems or databases with strict data types; if a column expects a 16-digit number, such output would not suffice and might even result in a system-wide crash.
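To make the idea concrete, here is a minimal Feistel-style sketch of format-preserving encryption over decimal strings. This is a toy of our own construction for illustration only: it is not NIST FF1/FF3, it is not secure for production use, and the function names, round count, and round function are all assumptions.

```python
import hashlib
import hmac

def _round_value(key: bytes, i: int, half: str, width: int) -> int:
    # Pseudorandom round function: HMAC-SHA256 of (round number, half),
    # reduced modulo 10**width so it fits in the other half's digit space.
    digest = hmac.new(key, f"{i}:{half}".encode(), hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % (10 ** width)

def fpe_encrypt(key: bytes, digits: str, rounds: int = 8) -> str:
    # Feistel network over a decimal string (length >= 2): the output has
    # the same length and alphabet (0-9) as the input.
    a, b = digits[: len(digits) // 2], digits[len(digits) // 2 :]
    for i in range(rounds):
        f = _round_value(key, i, b, len(a))
        a, b = b, str((int(a) + f) % (10 ** len(a))).zfill(len(a))
    return a + b

def fpe_decrypt(key: bytes, digits: str, rounds: int = 8) -> str:
    # Undo the rounds in reverse. Half-lengths swap every round, so the
    # final split depends on whether the round count is even or odd.
    n, u = len(digits), len(digits) // 2
    first = u if rounds % 2 == 0 else n - u
    a, b = digits[:first], digits[first:]
    for i in reversed(range(rounds)):
        f = _round_value(key, i, a, len(b))
        a, b = str((int(b) - f) % (10 ** len(b))).zfill(len(b)), a
    return a + b

# A 16-digit card number encrypts to another 16-digit number.
card = "4111111111111111"
token = fpe_encrypt(b"demo key", card)
recovered = fpe_decrypt(b"demo key", token)
```

Because the ciphertext is still 16 decimal digits, it can be stored in the same database column as the original value, which is exactly the property FPE provides.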
NIST SP 800-38G specifies approved methods for encrypting this sensitive data in databases, and these can be implemented in FIPS 140-2 validated modules. So if someone uses FPE in line with the standard, they can be reasonably confident of meeting the encryption-related expectations of regulations and standards such as HIPAA and PCI DSS.
Now, since we talked about why to use FPE regardless of using a legacy network, let us talk about FPE provided by Google Cloud Platform, and what benefit it provides over other platforms.
FPE By Google Cloud
Firstly, Google is currently the only major cloud provider offering FPE through its DLP API. Most organizations are transitioning to the cloud, and to make that transition happen securely, data should stay encrypted while in transit.
To do that, Google provides FPE under Cloud Data Loss Prevention. Using the DLP API, customers can encrypt their data with FPE and de-identify information using predefined infoTypes such as credit card numbers, phone numbers, etc. This encrypts the data and makes the transition to the cloud safer: data transferred from a datacenter to a cloud database maintains both its referential integrity and its format.
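As a sketch of what this looks like in practice, a de-identify request to the DLP API can specify the FPE transformation via the `cryptoReplaceFfxFpeConfig` field. The fragment below is illustrative only; field names follow the DLP v2 REST API as we understand it, and the project, key names, and wrapped key value are placeholders, so check the current Google Cloud DLP documentation before use.

```json
{
  "item": { "value": "My card number is 4111111111111111" },
  "inspectConfig": {
    "infoTypes": [ { "name": "CREDIT_CARD_NUMBER" } ]
  },
  "deidentifyConfig": {
    "infoTypeTransformations": {
      "transformations": [
        {
          "infoTypes": [ { "name": "CREDIT_CARD_NUMBER" } ],
          "primitiveTransformation": {
            "cryptoReplaceFfxFpeConfig": {
              "commonAlphabet": "NUMERIC",
              "cryptoKey": {
                "kmsWrapped": {
                  "wrappedKey": "BASE64_WRAPPED_KEY_PLACEHOLDER",
                  "cryptoKeyName": "projects/PROJECT_ID/locations/global/keyRings/RING/cryptoKeys/KEY"
                }
              }
            }
          }
        }
      ]
    }
  }
}
```

With `commonAlphabet` set to `NUMERIC`, the matched 16-digit card number is replaced by another 16-digit number, preserving the column format downstream.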
FPE is an encryption mechanism that keeps data encrypted while databases and applications remain functional. By preserving the format of the data, it allows legacy systems and networks to keep operating on encrypted values. GCP provides a DLP API that offers FPE through its platform, keeping all types of systems and programs functional and available, and improving data auditability by de-identifying the PII within the data.
The National Institute of Standards and Technology, also known as NIST, is a United States government laboratory that works to develop, test, and recommend best practices for federal agencies and other organizations in areas such as online security. Metrics, measurements, and regulations, like the Federal Information Processing Standards, are created by NIST to help strengthen the reliability and security of technologies being developed. All federal organizations are required to follow standards outlined by NIST in their specific field when they are dealing with confidential federal data. The standards and regulations set out by NIST are recognized internationally, meaning any organization that follows NIST’s standards for their business sector is trusted to be using the correct practices in their technology. NIST standards and regulations have been created for many Science, Technology, Engineering, and Mathematics (STEM) fields, from astrophysics to cybersecurity.
Why should you try and be compliant?
One of the many questions asked by organizations is: why should I comply with NIST’s standards and regulations? The main reason is the amount of testing put into the publications they release. Weeks, months, and sometimes years of testing go into the subjects NIST publications cover before they are released to the public. This ensures that the methods and practices proposed in the standards are the most up-to-date and effective methods available at the time of writing. The research is done by a team of professionals in their field, so the publications released to the public are extremely accurate, both informationally and technically.
Another reason to comply with NIST’s standards is the fact that it will make your organization’s infrastructure and new technologies much more secure. The goal of releasing NIST publications is to provide a more secure environment for both the government and companies in general. The more organizations that follow these standards, the fewer security breaches and vulnerabilities are available for exploitation by threat actors. Some regulations, like the Federal Information Processing Standards (FIPS), are required for work with the federal government. This means any company seeking federal work contracts will need to be FIPS 140-2 compliant, along with potentially needing to comply with other regulations, depending on the organization’s field.
Compliance can also provide your business with an edge over competitors. Those organizations that comply with federal security standards will appeal to customers over those businesses who don’t comply. Those same customers will trust your organization to produce an equally secure product or service in the future, winning your company future business with a recurring client. Some organizations will require compliance with specific regulations if a company wishes to be their vendor. One of these organizations is the United States federal government.
Who needs to be NIST compliant?
All contractors, vendors, subcontractors, and all federal agencies are required to be compliant with NIST standards and regulations if they wish to work with the United States federal government. This is due to the sensitive data that companies working with the government will be manipulating, storing, and processing. If the data is handled improperly, this could cause a security gap allowing threat actors access to information or services that are meant to be top secret. Certain organizations, as well as local governments, may require those companies wishing to work with them to comply with certain NIST standards and regulations as well.
How do you comply with regulations and standards?
One of the easiest ways to follow NIST regulations is to comply with the requirements set forth in the NIST publications. These requirements are specific to each publication, meaning following the requirements of one publication will not guarantee compliance with all NIST publications. To help your company comply with current and future publications created by the National Institute of Standards and Technology, you should utilize the Cybersecurity Framework, created by NIST. The NIST Cybersecurity Framework does not guarantee compliance with all current publications; rather, it is a set of uniform standards that can be applied to most companies.
The NIST Cybersecurity Framework was created to improve the cybersecurity of organizations to prevent data breaches and increase the strength of cybersecurity tactics used by organizations. By implementing a uniform set of standards, organizations following the Cybersecurity Framework will already understand the infrastructure and cybersecurity tactics used by other Cybersecurity Framework organizations. The Cybersecurity Framework is broken into 5 stages, called the Framework Core:
Identify – The Identify stage helps the rest of the Framework Core function properly. This stage provides transparency into the workings of the tools currently in use, while prioritizing actions for securing critical infrastructure. Companies implementing this stage will identify all of the software and systems that are critical to the organization’s infrastructure. This helps find unauthorized devices within the network, such as a worker’s phone that is accessing their email, which could be used as an attack vector by threat actors. Understanding the systems at play in your infrastructure helps identify where most of the secure data is kept, which can then be prioritized for protection. Not all data within an organization can be protected equally, so the most sensitive data is prioritized for protection. Asset management, risk assessment, and risk management strategy are all tasks that fall under the Identify stage.
Protect – The protect phase is focused on reducing the number of breaches and other cybersecurity events that occur in your infrastructure. It also handles mitigating the damage a breach will cause if it occurs. This could mean putting security systems in to prevent or detect data loss, such as intruder prevention systems, or other such cybersecurity tools. Identity access and management (IAM) control, training, and data security are just a few of the processes that fall under the protection umbrella.
Detect – This stage helps with the detection of an intruder once a breach occurs, as no security system is 100% secure. Once an attacker gets into your organization’s infrastructure, they must be detected and dealt with in a timely manner, so they do not have enough time to steal any data or compromise any client systems. The longer it takes to detect an intruder, the more data that could be compromised. Events, monitoring, and detection are all a part of the Detect stage.
Respond – The Respond stage deals with the response an organization has to a breach. These guidelines help with developing and implementing a plan to respond to a security breach. If the breach is not contained and the attacker is given free rein of an organization, the breach can become worse and worse. Response planning, communications, analysis, mitigation, and improvements are the steps implemented in the Respond phase.
Recover – The final stage, Recover, deals with the aftermath of a security breach. A plan for disaster recovery is created and implemented here. A back-up of all databases and infrastructure should be in place as part of the recovery plan. This stage includes recovery planning, communications, and improvements for the future.
Payment Card Industry Data Security Standards (PCI DSS) are a set of security standards formed in 2004 to secure credit and debit card transactions against data theft and fraud. PCI DSS is a set of compliance requirements for any business that stores, processes, or transmits payment card data.
Let’s suppose payment card data is stored, processed, or transmitted in a cloud environment. In that case, PCI DSS applies to that environment, and compliance involves validation of both the CSP’s infrastructure and the client’s usage of that environment.
PCI DSS Requirements:
Install and maintain a firewall configuration to protect cardholder data
Do not use vendor-supplied defaults for system passwords and other security parameters
Protect stored cardholder data
Encrypt transmission of cardholder data across an open, public network
Use and regularly update anti-virus software or programs
Develop and maintain secure systems and applications
Restrict access to cardholder data by business need to know
Assign a unique ID to each person with computer access
Restrict physical access to cardholder data
Track and monitor all access to network resources and cardholder data
Regularly test security systems and processes
Maintain a policy that addresses information security for all personnel
Keeping sensitive data, such as Personally Identifiable Information (PII), secure at every stage of its life is an important task for any organization. To simplify this process, standards, regulations, and best practices were created to better protect data. The Federal Information Processing Standards, or FIPS, are one of these standards. They were created by the National Institute of Standards and Technology (NIST) to protect government data and to ensure those working with the government comply with certain safety standards before they have access to data. FIPS has a number of standards released, but this article discusses FIPS 140-2.
What is FIPS 140-2?
FIPS 140-2 is a standard which handles cryptographic modules and the ones that organizations use to encrypt data-at-rest and data-in-motion. FIPS 140-2 has 4 levels of security, with level 1 being the least secure, and level 4 being the most secure:
FIPS 140-2 Level 1- Level 1 has the simplest requirements. It requires production-grade equipment and at least one tested encryption algorithm. This must be a working, approved encryption algorithm, not one that has never been authorized for use.
FIPS 140-2 Level 2- Level 2 raises the bar slightly, requiring all of Level 1’s requirements along with role-based authentication and tamper-evident physical devices. It should also be run on an Operating System that has been approved by Common Criteria at EAL2.
FIPS 140-2 Level 3- FIPS 140-2 level 3 is the level the majority of organizations comply with, as it is secure, but not made difficult to use because of that security. This level takes all of level 2’s requirements and adds tamper-resistant devices, a separation of the logical and physical interfaces that have “critical security parameters” enter or leave the system, and identity-based authentication. Private keys leaving or entering the system must also be encrypted before they can be moved to or from the system.
FIPS 140-2 Level 4- The most secure level of FIPS 140-2 takes all of Level 3’s requirements and additionally requires that the compliant device be tamper-active, erasing the contents of the device if certain environmental attacks are detected. Another focus of FIPS 140-2 Level 4 is that the Operating System used by the cryptographic module must be more secure than in earlier levels. If multiple users are using a system, the OS is held to an even higher standard.
Why is being FIPS 140-2 compliant important?
One of the many reasons to become FIPS compliant is the government’s requirement that any organization working with it must be FIPS 140-2 compliant. This requirement ensures government data handled by third-party organizations is stored and encrypted securely, with the proper levels of confidentiality, integrity, and authenticity. Companies that create cryptographic modules, such as nCipher or Thales, must become FIPS compliant if they want the vast majority of organizations, especially the government, to use their devices. Many organizations have adopted the policy of becoming FIPS 140-2 compliant, as it makes their organization and services appear more secure and trusted.
Another reason to be FIPS compliant is the rigorous testing that has gone into verifying the strength behind the requirements of FIPS 140-2. The requirements for each level of FIPS 140-2 have been selected after a variety of tests for confidentiality, integrity, non-repudiation, and authenticity. As the government has some of the most sensitive information in the nation, devices, services, and other products used by them must be at the highest level of security at all times. Using services or software without these tested methods in place could lead to a massive breach in security, causing problems for every person in the nation.
Who needs to be FIPS compliant?
The main organizations that are required to be FIPS 140-2 compliant are federal government organizations that either collect, store, share, transfer, or disseminate sensitive data, such as Personally Identifiable Information. All federal agencies, their contractors, and service providers must all be compliant with FIPS as well. Additionally, any systems deployed in a federal environment must also be FIPS 140-2 compliant. This includes the encryption systems utilized by Cloud Service Providers (CSPs), computer solutions, software, and other related systems. This means only those services, devices, and software that are FIPS compliant can even be considered for use by the federal government, which is one of the reasons so many technology companies want to ensure they are FIPS 140-2 compliant.
FIPS compliance is also recognized around the world as one of the best ways to ensure cryptographic modules are secure. Many organizations follow FIPS to ensure their own security is up to par with the government’s security. Many other organizations become FIPS 140-2 compliant to distribute their products and services in not only the United States, but also internationally. As FIPS is recognized around the world, any organization that possesses FIPS compliance will be seen as a trusted provider of services, products, and software. Some fields, such as manufacturing, healthcare, and financial sectors, along with local governments require FIPS 140-2 compliance as well.
Twofish is the successor to Blowfish and, like its predecessor, uses symmetric encryption, so only one key, of up to 256 bits, is necessary. It is one of the fastest encryption algorithms and is well suited to both hardware and software environments. When it was released, it was a finalist in the National Institute of Standards and Technology’s (NIST’s) competition to find a replacement for the Data Encryption Standard (DES) encryption algorithm. In the end, the Rijndael algorithm was selected over Twofish. Like Blowfish, Twofish is a symmetric block cipher.
Symmetric encryption is a process that uses a single key to both encrypt and decrypt information. The key is taken in, along with the plaintext information, by the encryption algorithm. This key encrypts the data into ciphertext, which cannot be understood unless it is decrypted. When the encrypted data is sent to the recipient of the data, the symmetric encryption key must also be sent, either with or after the ciphertext has been sent. This key can then be used to decrypt the data.
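The single-key property described above can be sketched with a toy stream cipher built from SHA-256. This construction is our own, purely for illustration of how the same key both encrypts and decrypts; it is not Twofish and must not be used for real security.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudorandom keystream by hashing (key, nonce, counter).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so one function serves as both
    # encryption and decryption: the hallmark of symmetric crypto.
    ks = keystream(key, nonce, len(data))
    return bytes(x ^ y for x, y in zip(data, ks))

key, nonce = b"shared secret key", b"unique-nonce"
ciphertext = xor_cipher(key, nonce, b"attack at dawn")
plaintext = xor_cipher(key, nonce, ciphertext)  # same key recovers the message
```

Note that the recipient needs the very same key (and nonce) to decrypt, which is why key distribution is the central challenge of symmetric encryption.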
Is Twofish secure?
A question many organizations ask is: if NIST did not choose Twofish to replace DES, is Twofish safe? The answer is yes, Twofish is extremely safe to use. The reason NIST did not select Twofish is that it was slower than the Rijndael encryption algorithm. One of the reasons Twofish is so secure is that it operates on 128-bit blocks with keys of 128 to 256 bits, which makes it practically impervious to brute-force attacks. The amount of processing power and time needed to brute-force even a 128-bit key makes whatever information is being decrypted unactionable, as it could take decades to decrypt one message.
This does not mean that Twofish is impervious to all attacks, however. Part of Twofish’s algorithm uses pre-computed, key-dependent substitution boxes to produce the ciphertext. Precomputing these values makes Twofish potentially vulnerable to side-channel attacks, but the key dependence of the substitution helps protect against them. Several attacks have been published against Twofish, but the creator of the algorithm, Bruce Schneier, argues these were not true cryptanalytic breaks. This means a practical break of the Twofish algorithm has not occurred yet.
What uses Twofish for encryption?
Though Twofish is not as commonly used as the Advanced Encryption Standard (AES), it still has many uses today. The most well-known products that use Twofish in their encryption methods are:
PGP (Pretty Good Privacy): PGP is an encryption program that can use Twofish to encrypt emails. The body of the email is encrypted, but the sender and subject are not.
GnuPG: GnuPG is an implementation of OpenPGP that lets users encrypt and sign data and communications. GnuPG uses key management systems and modules to access public key directories. These directories hold public keys published by other users on the Internet: anyone can use a published public key to encrypt a message that only the key’s owner can decrypt, or to verify a message signed with the owner’s private key.
TrueCrypt: TrueCrypt encrypts data on devices, with encryption methods that are transparent to the user. TrueCrypt works locally on the user’s computer, and automatically encrypts data when it leaves the local computer. An example would be a user sending a file from their local computer to an outside database. The file sent to the database would be encrypted as it leaves the local computer.
KeePass: KeePass is a password management software that encrypts passwords that are stored, and creates passwords using Twofish.
The Advanced Encryption Standard, or AES, is an encryption standard established by the National Institute of Standards and Technology (NIST) in 2001. The cipher utilized in AES is a block cipher from the Rijndael cipher family. When AES was created, three variants of the Rijndael block cipher were selected for use, to make AES even more flexible and secure. All three use a 128-bit block size, but the keys they each use are of different sizes: 128, 192, and 256 bits. AES is considered a symmetric block cipher, as only one key is used in the encryption process.
Symmetric encryption is a form of encryption that uses a single key for both encryption and decryption. Its counterpart, asymmetric encryption, uses two keys during the encryption and decryption process. One key is kept secret from everyone but the key’s creator, while the other key is a public key that can be viewed and utilized by anyone. Initially, AES was only used by the United States, but it has now been adopted worldwide as one of the most secure encryption algorithms.
Why was AES developed?
The Advanced Encryption Standard was created as a replacement for the Data Encryption Standard, or DES. DES was found to be increasingly vulnerable to brute-force attacks, and thus needed to be phased out. AES was originally created to protect sensitive government information, but its security and ease of implementation led the majority of organizations to adopt it in their encryption processes. Both public and private sector companies use AES now, as it protects against cyber-attacks like brute force. AES does, however, present an issue when exporting products encrypted with this algorithm.
The Bureau of Industry and Security (BIS) has a number of controls and regulations in place that make it difficult to export encryption products that use AES. Commercial encryption products are required by the BIS to obtain a license that allows the organization to export the product to several destinations, without needing to acquire a separate license for each destination. Certain embargoed countries cannot receive commercial encryption products from the United States at all; at the time of writing, these were Cuba, Iran, Iraq, Libya, North Korea, Sudan, and Syria.
Choosing the Rjindael cipher
To create the AES algorithm, a competition was held, which initially had 15 different encryption algorithms in the running. It was eventually narrowed down to just 5 finalists: MARS, RC6, Rijndael, Serpent, and Twofish.
These encryption algorithms were extensively analyzed by both NIST and the National Security Agency (NSA) to determine the most secure one to use in the Advanced Encryption Standard. After rigorous testing, the Rijndael cipher was selected. Support for keys of up to 256 bits gives the Rijndael cipher strong security, while maintaining its interoperability with existing hardware and software. Stronger ciphers exist, but they cannot be implemented into existing systems as easily as the Rijndael cipher can.
Understanding AES key size differences
A block cipher works by breaking the plaintext of the data being encrypted into blocks of equal size, which for AES is 128 bits. Each block is then encrypted through a series of substitution, shifting, and mixing operations using a key of a specific length. AES allows 128-, 192-, and 256-bit keys, and the bigger the key size, the more secure the encryption. With a 128-bit key, each block passes through 10 rounds of encryption; with 192 bits, 12 rounds; and with 256 bits, 14 rounds. Thus, 256-bit keys are the most secure, but for most encryption cases 128-bit keys are sufficient. The more sensitive the data, however, the larger the key size should be.
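The key-size-to-round-count mapping described above can be captured in a short sketch. The values come straight from the AES specification (FIPS 197); the function name here is just illustrative:

```python
AES_BLOCK_BITS = 128  # AES block size is fixed regardless of key size

# Number of encryption rounds applied per block, by key length in bits
ROUNDS_BY_KEY_BITS = {128: 10, 192: 12, 256: 14}

def aes_rounds(key_bits: int) -> int:
    """Return the number of AES rounds for a given key size in bits."""
    try:
        return ROUNDS_BY_KEY_BITS[key_bits]
    except KeyError:
        raise ValueError("AES keys must be 128, 192, or 256 bits") from None

for bits in (128, 192, 256):
    print(f"AES-{bits}: {aes_rounds(bits)} rounds on {AES_BLOCK_BITS}-bit blocks")
```

Any other key size is rejected, mirroring the fact that AES only defines these three variants.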
To give an example of the security of AES, consider how long it would take someone to crack data encrypted with a 256-bit key. To break one 16-byte block of data encrypted with AES-256 by brute force would take longer than the age of the universe with any foreseeable hardware. The total number of possible keys with a 256-bit key is 2^256, which makes cracking an AES-256 encrypted message by brute force virtually impossible. Even with a 128-bit key, the smallest size, there are still 2^128 possible keys, far more than any practical brute-force attack could exhaust.
Attacks on AES
Researchers continually attempt to break AES using viable methods, in order to stay one step ahead of attackers. If an attacker were to crack AES and keep it a secret, the world would continue to use AES believing it is completely secure. So far, a few different theoretical attacks have been proposed, including:
Related-key attack: A related-key attack examines how a cipher behaves under different but mathematically related keys. The cryptanalyst feeds the cipher the same plaintext encrypted under several related keys and looks for a mathematical relationship between the resulting ciphertexts, which can help recover the actual key's value. This attack method is not considered a serious threat to AES, however, as it is ineffective as long as the protocol is implemented correctly, with keys generated independently at random.
Known-key distinguishing attack: Researchers have successfully used a known key to analyze the inner workings of a reduced, 8-round version of the AES-128 algorithm. Because this was done on an 8-round variant rather than the full 10-round algorithm, it poses no threat to AES as actually deployed.
Side-channel attack: A side-channel attack exploits information that leaks from the physical implementation rather than from the algorithm itself. The attacker listens in on sound, timing information, electromagnetic emissions, or power consumption, and uses those measurements to draw inferences about the algorithm's internal state, which can then be used to break it. Such attacks can be stopped by fixing the source of the leak or by ensuring no exploitable pattern exists in the leaked information.
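Timing leaks are a common side channel in practice. A minimal sketch of the pattern and its fix, using Python's standard library: a naive byte comparison short-circuits at the first mismatch, so its running time can reveal how many leading bytes matched, while `hmac.compare_digest` runs in time independent of where the inputs differ:

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Returns at the first mismatching byte, so the comparison time
    # depends on how many leading bytes matched (a timing side channel).
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest compares in constant time with respect to
    # the contents, removing the timing pattern an attacker needs.
    return hmac.compare_digest(a, b)

print(constant_time_equal(b"expected-tag", b"expected-tag"))  # True
print(constant_time_equal(b"expected-tag", b"attacker-tag"))  # False
```

This is the "ensure no pattern exists" mitigation applied to one specific leak; power and electromagnetic channels require hardware-level countermeasures instead.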
Key compromise: Though not a direct attack on the AES algorithm itself, compromise of the key used for encryption defeats AES entirely. This is why proper key management and key security are vital to the IT infrastructure of any organization.
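Sound key management starts with generating keys from a cryptographically secure random source rather than a predictable one. A minimal sketch using Python's standard `secrets` module (the function name is illustrative):

```python
import secrets

def generate_aes_key(key_bits: int = 256) -> bytes:
    """Generate a random AES key using the OS CSPRNG via secrets."""
    if key_bits not in (128, 192, 256):
        raise ValueError("AES keys must be 128, 192, or 256 bits")
    # token_bytes draws from the operating system's secure random source
    return secrets.token_bytes(key_bits // 8)

key = generate_aes_key(256)
print(len(key))  # 32 bytes for a 256-bit key
```

Generation is only the first step; real deployments also need secure storage (for example in an HSM), rotation, and access control around the key.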
Quantum computing: Quantum computing, the successor to classical computing, is still in the process of being created and understood. Once fully realized, quantum computers would break widely used public-key cryptography outright via Shor's algorithm, while symmetric ciphers like AES would be weakened rather than broken: Grover's algorithm effectively halves the key length, so AES-256 is expected to remain secure against quantum search even as smaller key sizes lose their margin.
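The effect of Grover's algorithm on symmetric keys reduces to simple arithmetic: searching an n-bit key space takes on the order of 2^(n/2) quantum operations, so the effective security level is halved. A sketch:

```python
def grover_effective_bits(key_bits: int) -> int:
    # Grover's search needs ~2**(n/2) operations for an n-bit key,
    # so the effective security level against it is n/2 bits.
    return key_bits // 2

for bits in (128, 192, 256):
    print(f"AES-{bits}: ~{grover_effective_bits(bits)}-bit security "
          f"against quantum key search")
```

By this measure AES-128 drops to roughly 64-bit effective security, while AES-256 retains about 128 bits, which is why larger symmetric keys are the standard post-quantum recommendation.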
Who and what uses AES?
The majority of products, services, and organizations using symmetric encryption utilize AES. Most agencies and organizations in the United States government, including the NSA, use AES as well. Its proven strength and resistance to practical attacks mean the majority of companies looking for an encryption algorithm will choose AES. A number of secure communication and file transfer protocols use AES for encryption as well; HTTPS, through TLS, is just one example.