FIPS 140-3 is a U.S. government standard specifying security requirements for cryptographic modules, including hardware and software components. These security requirements are designed to ensure that cryptographic modules provide a minimum level of security for protecting sensitive information and transactions.

Some of the key security requirements specified in FIPS 140-3 include the following:

  1. Cryptographic module specification

    A detailed specification of the cryptographic module, including its physical and logical boundaries, interfaces, and security functions.

  2. Roles and services

    A definition of the roles and services provided by the cryptographic module, including key management, encryption, and authentication.

  3. Cryptographic algorithms

    A list of approved cryptographic algorithms that can be used by the cryptographic module, along with specific requirements for key length, block size, and other parameters.

  4. Key management

    Requirements for generating, storing, and protecting cryptographic keys, including key lifecycle management and destruction.

  5. Physical security

    Requirements for physical security measures to protect the cryptographic module from unauthorized access, tampering, and theft.

  6. Logical security

    Requirements for logical security measures to protect the cryptographic module from attacks such as malware, side-channel attacks, and unauthorized access.

  7. Security testing and validation

    Requirements for testing and validating the security of the cryptographic module, including vulnerability testing, penetration testing, and compliance testing.

General requirements for each level of FIPS 140-3

The table below summarizes the general requirements for each level of FIPS 140-3:

Level 1

  • The cryptographic module must use an approved algorithm and implement the algorithm correctly.
  • The module must have physical security mechanisms to prevent unauthorized access.
  • The module must have a power-on self-test to verify that the module is functioning correctly.

Level 2

  • All the requirements for Level 1, plus:
  • The module must have additional physical security mechanisms to detect and respond to unauthorized access.
  • The module must have a role-based authentication system to restrict access to authorized users only.
  • The module must have a tamper-evident mechanism to detect if it has been tampered with.

Level 3

  • All the requirements for Level 2, plus:
  • The module must have physical security mechanisms to prevent unauthorized modification of the module's firmware or software.
  • The module must have a trusted path for sensitive data to ensure authorized components only process data.
  • The module must have a mechanism for detecting and responding to environmental attacks, such as temperature or voltage variations.

Level 4

  • All the requirements for Level 3, plus:
  • The module must have physical security mechanisms to protect against the most sophisticated attacks, including invasive and non-invasive attacks.
  • The module must have a highly secure design and implementation with no known vulnerabilities.
  • The module must be resistant to side-channel attacks, which are attacks that exploit information leaked by the module during its normal operation.

Table 1: General requirements of FIPS 140-3 for each level

These are the general requirements for each level of FIPS 140-3; each level also carries many specific, quite detailed requirements. The purpose of the standard is to provide a framework for evaluating and certifying the security of cryptographic modules. The specific requirements for each level are designed to ensure that the module provides a certain level of protection against various attacks.

FIPS 140-3 Security requirements for each level

FIPS 140-3 is a security standard defining the requirements for cryptographic modules that protect sensitive data.

FIPS 140-3 Level 1

Level 1 of FIPS 140-3 provides the lowest level of security and is intended for use in low-risk applications. The security requirements for level 1 are as follows:

  1. Cryptographic module specification

    The module must have a concise specification describing its cryptographic functions, interfaces, and protocols.

  2. Roles, services, and authentication

    The module must define the roles, services, and authentication mechanisms required for secure operation.

  3. Physical security

    The module must have physical safeguards to protect against unauthorized access, theft, or tampering.

  4. Design assurance

    The module must have undergone a comprehensive design process, including security analysis and testing.

  5. Mitigation of attacks

    The module must have measures to prevent or mitigate attacks, such as software or hardware countermeasures.

  6. Self-tests

    The module must perform self-tests to ensure it is functioning correctly and to detect tampering attempts.

  7. Environmental design

    The module must be designed to operate in various environmental conditions, including temperature, humidity, and electromagnetic interference.

  8. Cryptographic key management

    The module must have secure key management processes to ensure that cryptographic keys are generated, stored, and used securely.

FIPS 140-3 Level 2

Level 2 of the FIPS 140-3 standard outlines the security requirements for a cryptographic module that provides moderate security assurance. The following are the specific security requirements for a cryptographic module to achieve FIPS 140-3 level 2:

  1. Physical security

    The module must be physically protected against unauthorized access, tampering, and theft. It should be designed to withstand environmental hazards like fire, water, and electromagnetic interference.

  2. Cryptographic key management

    The module must have strong key management mechanisms that ensure cryptographic keys' confidentiality, integrity, and availability. The module should protect keys against unauthorized access, modification, and destruction.

  3. Authentication and access control

    The module must authenticate users and restrict access to authorized users only. It should have strong password policies and mechanisms to prevent brute-force attacks.

  4. Audit logging and reporting

    The module must log all security-related events and provide reports to authorized users. The module should also have mechanisms to protect audit logs against tampering or destruction.

  5. Software security

    The module must be designed with secure software practices that minimize the risk of security vulnerabilities. The software should be tested for security flaws and vulnerabilities and undergo regular security updates and patches.

  6. Communication security

    The module must use secure communication protocols and encryption to protect data in transit. The module should also have mechanisms to prevent unauthorized access to data transmitted over networks.

  7. Cryptographic algorithms

    The module must use approved cryptographic algorithms and standards that NIST has validated. The module should also have mechanisms to prevent using weak or outdated cryptographic algorithms.

FIPS 140-3 Level 3

Level 3 of the FIPS 140-3 standard protects against unauthorized access to the cryptographic module and the sensitive information it handles. It is the second-highest level of FIPS 140-3. The following are the security requirements for level 3:

  1. Physical Security

    The cryptographic module must be physically protected against unauthorized access, tampering, theft, and damage. The module must also be designed to resist physical attacks, such as drilling, cutting, and probing. The module must be in a secure facility with access controls, video surveillance, and intrusion detection systems.

  2. Cryptographic Key Management

    The cryptographic module must have a strong key management system that ensures the secure generation, storage, distribution, and destruction of cryptographic keys. The key management system must use strong cryptographic algorithms, such as Advanced Encryption Standard (AES), and have key backup, recovery, and destruction mechanisms.

  3. Cryptographic Operations

    The cryptographic module must perform cryptographic operations securely and reliably. The module must use approved cryptographic algorithms and protocols, such as Transport Layer Security (TLS), Secure Sockets Layer (SSL), and IPsec. The module must also have error detection and correction mechanisms and handle cryptographic exceptions and failures.

  4. Self-Tests and Tamper Evidence

    The cryptographic module must have self-tests and tamper-evidence mechanisms that detect and prevent unauthorized modifications, tampering, or substitution of the module’s hardware or software. The self-tests must be run periodically and must include checks for the integrity and authenticity of the module’s firmware, hardware, and software.

  5. Design Assurance

    The cryptographic module must have a strong design assurance that ensures the module’s security requirements are met throughout the module’s lifecycle. An independent third-party evaluator must review and verify the module’s design. The module must be tested against a set of security requirements defined in the FIPS 140-3 standard. The design assurance also requires using secure coding practices, security testing, and security documentation.

  6. Security Management

    The cryptographic module must have a strong security management system that includes policies, procedures, and controls for managing the module’s security risks. The security management system must include mechanisms for auditing, monitoring, reporting security events, and responding to security incidents and vulnerabilities.

    FIPS 140-3 level 3 provides strong protection against physical and logical attacks and requires a high level of key management, cryptographic operations, self-tests, tamper evidence, design assurance, and security management. The security requirements for level 3 are designed to protect sensitive information and maintain the integrity and availability of the cryptographic module.

FIPS 140-3 Level 4

Level 4 is the highest level of security defined in the standard and is intended for applications where the consequences of a security failure are severe. The following are the key security requirements for FIPS 140-3 Level 4:

  1. Physical Security

    The cryptographic module must be housed in a tamper-evident, ruggedized container designed to resist physical attacks, such as drilling, cutting, or punching. The container must also have sensors to detect unauthorized access and trigger alarms.

  2. Cryptographic Key Management

    The module must have a strong, verifiable, and auditable key management system that ensures cryptographic keys’ secure generation, storage, distribution, and destruction. The module must also support key revocation and recovery.

  3. User Authentication

    The module must have a robust and secure user authentication mechanism, such as biometric or smart card-based authentication, to ensure only authorized personnel can access the module.

  4. Logical Security

    The module must have robust logical security mechanisms to prevent unauthorized access, tampering, or manipulation of the cryptographic module or its data. This includes secure boot, secure firmware updates, and secure communications protocols.

  5. Environmental Controls

    The module must operate in various environmental conditions, such as extreme temperatures, humidity, and electromagnetic interference. The module must also be able to withstand power surges and disruptions.

  6. Life Cycle Support

    The module must have a comprehensive life cycle support mechanism that includes regular security updates, vulnerability assessments, and secure disposal procedures.

    Overall, FIPS 140-3 Level 4 defines the highest level of security for cryptographic modules, and it is intended for applications where the consequences of a security failure are severe. The standard defines strict security requirements in physical security, cryptographic key management, user authentication, logical security, environmental controls, and life cycle support to ensure the highest level of protection for sensitive data and systems.

Requirements of Cryptographic algorithms for each level of FIPS 140-3

The FIPS 140-3 standard outlines four security levels (Levels 1-4), each specifying a different set of requirements for cryptographic algorithms.

Here are the cryptographic algorithms required for each level:

  • Level 1

    This is the lowest level of security, requiring only basic encryption and key management functions. The cryptographic algorithms required for this level include AES (128-bit), Triple-DES (112-bit), and SHA-1.

  • Level 2

    This level requires additional physical security features to protect against tampering or unauthorized access to the cryptographic module. The cryptographic algorithms required for this level include AES (128-bit and 192-bit), Triple-DES (168-bit), SHA-2 (256-bit), and HMAC.

  • Level 3

    This level requires the highest level of physical security to prevent unauthorized access and protect against attacks. The cryptographic algorithms required for this level include AES (256-bit), RSA (2048-bit), ECDSA (224-bit), SHA-2 (384-bit), and HMAC (with keys of at least 128 bits).

  • Level 4

    This is the highest level of security, requiring the most stringent physical and logical security features to protect against sophisticated attacks. The cryptographic algorithms required for this level include AES (256-bit), RSA (3072-bit), ECDSA (384-bit), SHA-3 (512-bit), and HMAC (with keys of at least 256 bits).
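As a quick illustration of the hash and HMAC primitives named above, they can be exercised with OpenSSL. This is a sketch only: the messages and the 256-bit HMAC key below are illustrative choices, not FIPS-mandated values.

```shell
# Illustrative sketch: SHA-2 digests and HMAC via OpenSSL (assumed available).
printf 'example message' | openssl dgst -sha384 -r     # SHA-2 (384-bit) digest

# HMAC-SHA-256 with a randomly generated 256-bit key
key=$(openssl rand -hex 32)
printf 'example message' | openssl dgst -sha256 -hmac "$key" -r
```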

Conclusion

In summary, FIPS 140-3 has been approved and launched as the latest standard for the security evaluation of cryptographic modules. It covers a broad spectrum of threats and vulnerabilities, defining security requirements from the initial design phase through the final operational deployment of a cryptographic module. FIPS 140-3 requirements are primarily based on two previously existing international standards: ISO/IEC 19790:2012, "Security Requirements for Cryptographic Modules," and ISO/IEC 24759:2017, "Test Requirements for Cryptographic Modules."

The FIPS 140-3 standard provides a framework for ensuring the security of cryptographic modules used in sensitive applications such as banking, healthcare, and government.

To learn more about FIPS 140-3, read: Knowing the new FIPS 140-3

Free Downloads

Datasheet of Encryption Consulting Services

Encryption Consulting is a customer focused cybersecurity firm that provides a multitude of services in all aspects of encryption for our clients.

Download
Encryption Services

About the Author

Parnashree Saha is a data protection senior consultant at Encryption Consulting LLC working with PKI, AWS cryptographic services, GCP cryptographic services, and other data protection solutions such as Vormetric, Voltage etc.


Why does strong encryption key management matter? Ask anyone in the security world and you will hear different answers, but they all share one theme: keep the key from being compromised.

As you know, encryption involves scrambling data so that only the intended party or organization can access it. This is accomplished using encryption keys, each of which contains a randomly generated string of bits. You can think of an encryption key like a password: just as you can access your bank account only if you have your password, you can decrypt your data only when you have the associated encryption key. As you encrypt more and more data, you accumulate more of these keys, and managing them properly is very important.
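The password analogy above can be sketched with OpenSSL. This is a minimal illustration, not a production key-management scheme; the file names and message are made up, and OpenSSL 1.1.1+ is assumed for the `-pbkdf2` option.

```shell
# Generate the "password": a random 256-bit key stored in a file.
openssl rand -hex 32 > data.key
echo "sensitive record" > plain.txt

# Without data.key, cipher.bin is unreadable; with it, the data comes back.
openssl enc -aes-256-cbc -pbkdf2 -salt -pass file:./data.key \
    -in plain.txt -out cipher.bin
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:./data.key \
    -in cipher.bin -out round.txt
cat round.txt     # prints: sensitive record
```

Lose `data.key` and `cipher.bin` stays scrambled for good, which is exactly why key management matters.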

Compromise of your encryption keys could lead to serious consequences, since a stolen key could be used to:

  • Extract or tamper with the data stored on the server and read encrypted documents or emails.
  • Sign applications or documents in your name.
  • Create phishing websites that impersonate your original websites.
  • Move through your corporate network while impersonating you.

Do you need to manage your encryption key?

In a word, yes. As stated in NIST SP 800-57 Part 1, Rev. 5:

Ultimately, the security of information protected by cryptography directly depends on the strength of the keys, the effectiveness of cryptographic mechanisms and protocols associated with the keys, and the protection provided to the keys. Secret and private keys need to be protected against unauthorized disclosure, and all keys need to be protected against modification.

Encryption key management is an essential part of any enterprise security strategy. Proper key management ensures that sensitive data is protected from unauthorized access and that access to encrypted data is granted only to authorized individuals. Here are 10 best practices for effective enterprise encryption key management:

Key Management Best Practices:

  1. Follow key generation best practices

    There are specific best practices that should be followed when generating encryption keys. When selecting cryptographic and key management algorithms for a given application, it is important to understand the application's objectives. Best practices include using a cryptographically strong random number generator, choosing an algorithm and key length of sufficient strength, and rotating keys regularly.
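A minimal sketch of these generation practices using OpenSSL (the file names are illustrative; the lengths follow common guidance of 128/256-bit symmetric keys and 3072-bit RSA):

```shell
# Symmetric keys from a cryptographically strong random number generator.
openssl rand -hex 16 > aes128.key     # 128-bit key
openssl rand -hex 32 > aes256.key     # 256-bit key

# An asymmetric key pair with a sufficient modulus length.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:3072 \
    -out rsa3072.key 2>/dev/null
```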

  2. Use a centralized key management system

    A centralized key management system is essential for effective key management in an enterprise setting. This system should be secure and allow easy management of keys across the organization.

  3. Use key-encrypting keys

    To ensure an extra level of security, consider using key-encrypting keys (KEKs) to protect your encryption keys. KEKs are used to encrypt and decrypt encryption keys, providing an additional layer of security.
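The KEK pattern can be sketched with OpenSSL as follows. The file names are illustrative, and a real deployment would typically keep the KEK in an HSM or key management service rather than on disk:

```shell
openssl rand -hex 32 > kek.key     # key-encrypting key
openssl rand -hex 32 > dek.key     # data-encryption key (to be protected)

# Wrap: encrypt the DEK under the KEK; only the wrapped copy is stored.
openssl enc -aes-256-cbc -pbkdf2 -salt -pass file:./kek.key \
    -in dek.key -out dek.wrapped

# Unwrap the DEK when it is needed again.
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:./kek.key \
    -in dek.wrapped -out dek.unwrapped
```

An attacker who obtains `dek.wrapped` alone learns nothing about the data key; compromising the data requires both layers.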

  4. Establish key access controls

    It is essential to have controls in place for who has access to your encryption keys. This includes establishing access controls for key generation, key storage, and key use.

  5. Centralize User roles and access

    Some businesses may use thousands of encryption keys, but not every employee needs access to them. Only individuals whose roles require it should have access to encryption keys. These roles should be defined in the centralized key management system so that authenticated users are granted access only to the encrypted data connected to their user profile.

    Additionally, make sure that no administrator or user has exclusive access to a key. This provides a backup plan in case a user forgets their login information or unexpectedly leaves the firm.

  6. Use key backup and recovery

    Proper key backup and recovery is essential to ensure that you can quickly restore access to your encrypted data in case of an emergency. This includes regularly backing up keys and having a clear plan in place for key recovery.

  7. Use key expiration

    Key expiration is a process in which keys are set to expire after a certain period of time. This ensures that keys are regularly rotated and that access to encrypted data is kept up to date.

  8. Use key revocation

    Key revocation is a process in which keys are invalidated and can no longer be used to access encrypted data. This is essential for ensuring that access to data is properly controlled and that unauthorized individuals are not using keys.

  9. Use Automation to Your Advantage

    For an enterprise or larger organization, relying solely on manual key management is time-consuming, expensive, and prone to mistakes. Automation is usually discussed in the context of certificate management, but it is not just for digital certificates: the smartest approach to encryption key management is to use automation to generate key pairs and to renew and rotate keys at set intervals.
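A hypothetical rotation sketch: each run generates a new timestamped key version and repoints a "current" name at it, while older versions remain available for decrypting older data. The file names and scheme are illustrative; in practice such a script would run on a schedule (e.g., cron) or be handled by a key management system.

```shell
# Generate a new key version and mark it as current; keep old versions.
ts=$(date +%Y%m%d%H%M%S)
openssl rand -hex 32 > "data.key.$ts"
ln -sf "data.key.$ts" data.key.current
```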

  10. Preparation to Handle Accidents

    Even when an administrator or security personnel implements the correct policies and controls to secure sensitive information, things can go wrong at any point, and the organization must be prepared for it. For example:

    • A user loses the credentials to their keys
    • An employee leaves or is fired from the company
    • A flawed encryption algorithm was used
    • Human error, such as accidentally publishing a private key to a public website

For such situations, always be prepared: identify the possibilities before they actually occur and take precautionary measures. Audit your security infrastructure on a regular basis to minimize such incidents.

Conclusion

By following these best practices, you can ensure that your enterprise encryption key management is effective and secure. Proper key management is essential for protecting your sensitive data and ensuring that only authorized individuals can access it.


The Active Directory Certificate Services (AD CS) server role is necessary to establish a public key infrastructure and provide your company with public key cryptography, digital certificate, and digital signature capabilities. AD CS provides customizable services for issuing and managing digital certificates in software security systems that use public key technologies.

The digital certificates that AD CS provides can be used to encrypt and digitally sign electronic documents and messages. These digital certificates can authenticate network computers, users, or device accounts. Digital certificates are used to provide the following:

  1. Confidentiality through encryption
  2. Integrity through digital signatures
  3. Authentication by associating certificate keys with a computer, user, or device account on a computer network

The Public Key Services container cannot be limited to any specific domain or domains; it is available to any client in the forest. Since the container is stored in the configuration naming context, its contents are replicated to all domain controllers in the current forest.

CN=Public Key Services, CN=Services, CN=Configuration, DC= {forest root domain}

The following are the sub-containers under the Public Key Services container:

  1. AIA
  2. CDP
  3. Certificate Templates
  4. Certification Authorities
  5. Enrollment Services
  6. KRA
  7. OID

Below are the descriptions of each container:

AIA

Clients use the authority information access (AIA) certificate extension to retrieve CA certificates from the AIA container in order to build a trusted certificate chain and to retrieve any cross-certificates issued by the CA. During installation, the new enterprise CA's certificate is automatically published to the AIA container. To install a CA certificate in this container programmatically, run the command below:

Certutil -dspublish -f <PathToCertFile.cer> SubCA

<PathToCertFile.cer> – the path to, and file name of, the certificate file.

CDP

The CDP container stores certificate revocation lists (CRLs). It contains all base CRLs and delta CRLs published in the forest.

You can install the certificate revocation list to the CDP container by running the certutil command.

Certutil -dspublish -f <PathToCRLFile.crl> <SubcontainerName>

How to add a CDP

The following command adds a CDP:

Add-CRLDistributionPoint [-InputObject] <CRLDistributionPoint[]> [-URI] <String[]> [<CommonParameters>]

Parameters

-InputObject <CRLDistributionPoint[]>  -> Specifies the CRLDistributionPoint object to which new CRL distribution points are added.

[-URI] <String[]>  -> This specifies new CRL file publishing distribution points for a particular CA.

<CommonParameters>  -> The cmdlet supports the common parameters: Debug (db), ErrorAction (ea), ErrorVariable (ev), InformationAction (infa), InformationVariable (iv), OutVariable (ov), OutBuffer (ob), PipelineVariable (pv), Verbose (vb), WarningAction (wa), and WarningVariable (wv).

CRL Publication options

  • %1 (ServerDNSName): The CA computer's Domain Name System (DNS) name
  • %2 (ServerShortName): The CA computer's NetBIOS name
  • %3 (CAName): The CA's logical name
  • %6 (ConfigDN): The Lightweight Directory Access Protocol (LDAP) path of the configuration naming context for the forest
  • %8 (CRLNameSuffix): The CRL's renewal extension
  • %9 (DeltaCRLAllowed): Indicates whether the CA supports delta CRLs
  • %10 (CDPObjectClass): Indicates that an object is a CDP object in AD DS
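As an illustration of how these variables compose a CRL distribution point, a simplified LDAP URL template (not necessarily the exact Windows default string, which includes additional variables) might look like:

```
ldap:///CN=%3%8,CN=%2,CN=CDP,CN=Public Key Services,CN=Services,%6%10
```

Here %3%8 expands to the CA's logical name plus the CRL renewal suffix, %2 to the server's NetBIOS name, %6 to the forest's configuration naming context, and %10 marks the object as a CDP object.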

Certificate Templates

Certificate templates automate certificate deployment by defining the common features shared by all certificates issued from a given template and by determining which users or computers have permission to enroll, or automatically enroll, for the certificate. An example would be a certificate template that automatically enrolls all domain users with valid email addresses for a secure email (S/MIME) certificate.

All certificate templates available in AD, whether published on an enterprise CA or not, are stored in the Certificate Templates container. If an enterprise CA publishes a certificate template, the value is written as an attribute on the CA object in the Enrollment Services container. By default, over 30 Microsoft predefined certificate templates are installed when building an enterprise CA.

Certification Authorities

The Certification Authorities container holds the trusted root CA certificate(s). During enterprise root CA creation, the certificate is automatically published to this container. In the case of an offline standalone CA, the PKI administrator must manually publish the offline root CA certificate using the certutil command.

Example: certutil -dspublish -f <Root.cer> RootCA

Enrollment Services

This container includes the certificates for the enterprise CAs that can issue certificates to users, machines, or services located within the forest. Only an Enterprise Admins member who installs an enterprise CA can add enterprise CA certificates to this container. The certificates cannot be added manually by using the Manage AD Containers dialog box.

KRA

This container contains the certificates of the key recovery agents for the forest. Key recovery agents must be configured to support key archival and recovery. Key recovery agent certificates can be added to this container automatically by enrolling with an enterprise CA; they cannot be added manually by using the Manage AD Containers dialog box.

OID

This container stores object identifiers (OIDs) registered in the enterprise. The OID container can hold object identifier definitions for custom application policies, issuance (certificate) policies, and certificate templates. A client that is a member of the Active Directory forest resolves object identifiers using both the OID container and its local OID database.

New OIDs should be registered via the Certificate Templates MMC snap-in (certtmpl.msc) by adding a new application or issuance (certificate) policy on the certificate template's Extensions tab.

Conclusion

Understanding each active directory certificate services container is vital for an enterprise PKI administrator. An Administrator must know the activities and necessity of each container while managing the enterprise PKI infrastructure.



The CipherTrust data security platform’s core management point is CipherTrust Manager. With the help of this market-leading enterprise key management solution, businesses can set up security policies, give granular access controls, and centrally manage encryption keys. The key lifecycle tasks managed by CipherTrust Manager include creation, rotation, destruction, import, and export. It also gives role-based access control to keys and policies, allows thorough auditing and reporting, and provides REST APIs that are easy for management and development. The physical and virtual form factors of CipherTrust Manager are FIPS 140-2 compliant up to level 3. Additionally, hardware security modules (HSM) like Thales Luna and Luna Cloud HSM can be used to anchor the CipherTrust Manager.

Are you still using an older version of CipherTrust Manager in your environment? Then it's time to upgrade to the latest version. The upgrade details below will help you upgrade your CipherTrust Manager yourself. This document covers basic system upgrade details for the Thales CipherTrust Manager; for more detailed instructions, please refer to the Thales System Upgrade Guide.

Prerequisites:

Prerequisites are important for planning and being ready for the upgrade. The following checks must be completed before upgrading the CipherTrust Manager:

  • Know your current software version of the CipherTrust Manager (CM) and the desired version. Example: your current version of CM is 2.0, and the desired version is 2.9.
  • Define the upgrade path. For the above example, the upgrade path would be 2.0 > 2.3 > 2.6 > 2.9. [NOTE: Thales tests upgrades from the three previous minor versions. Upgrades from other versions have not been tested and may not work correctly.]
  • Ensure you have access as ksadmin with an SSH key.
  • Take a system-level backup and ensure that you have downloaded the CM backup file and backup key. (This can be done via the CM web UI.)
  • Run the command df -h to ensure at least 12 GB of space is available (excluding the upgrade file).
  • SCP the upgrade file to the CipherTrust Manager (when using WinSCP, ensure that SCP is selected for the file protocol). The upgrade file can be transferred via the WinSCP application or with the following command:

scp -i <path_to_private_SSH_key> <upgrade_file_name> ksadmin@<ip>:.

[NOTE: Upgrade files for the desired version can be downloaded from the Thales Support portal, or you can open a ticket with Thales support to get the upgrade files.]

Upgrade Procedure:

  1. Log in as ksadmin via SSH.
  2. Run the following command to perform the upgrade: sudo /opt/keysecure/ks_upgrade.sh -f <archive_file_path>
  3. Once all the services are running, reboot the appliance: sudo reboot

[NOTE: The upgrade can also be performed over a serial connection as ksadmin.]

Post-upgrade Checks:

The following checks should be run after upgrading the CipherTrust Manager:

  1. Check that all services are running with the following command: sudo docker ps | wc -l
  2. Ensure the CipherTrust Manager services have started. From the ksadmin session, run “systemctl status keysecure.”
  3. Alternatively, you can visit the CipherTrust Manager web console or attempt to connect with the ksctl CLI

Known issue

There is a known issue in CipherTrust Manager instances upgraded from 2.6 and earlier, where network device names sometimes swap MAC addresses after reboot. This has been observed for network interfaces beginning with eth and bonded connections created from network interfaces beginning with eth. To avoid this, a connection for each network interface should be configured.

Conclusion

This document does not replace the standard Thales (SafeNet) documentation set for CipherTrust Manager; rather, it is an addendum designed to be used alongside that documentation. It is always a best practice to keep your security solution software current with major releases.

Sources: thalesdocs.com

Free Downloads

Datasheet of Encryption Consulting Services

Encryption Consulting is a customer focused cybersecurity firm that provides a multitude of services in all aspects of encryption for our clients.

Download
Encryption Services

About the Author

Parnashree Saha is a data protection senior consultant at Encryption Consulting LLC working with PKI, AWS cryptographic services, GCP cryptographic services, and other data protection solutions such as Vormetric, Voltage etc.


3DES is an encryption cipher derived from the original Data Encryption Standard (DES). First introduced in 1998, the algorithm was adopted primarily in finance and other private industries to encrypt data-at-rest and data-in-transit. It became prominent in the late nineties but has since fallen out of favor due to the rise of more secure algorithms, such as AES-256 and XChaCha20. Although it is deprecated through 2023, it is still implemented in some situations.

About Triple DES or 3DES

Triple DES, often referred to as the Triple Data Encryption Algorithm (TDEA), is specified in SP 800-67 and has two variations, known as two-key TDEA and three-key TDEA. Three-key TDEA is the stronger of the two variations. Below is the status of the 3DES algorithm used for encryption and decryption:

Algorithm                    Status
Two-key TDEA Encryption      Disallowed
Two-key TDEA Decryption      Legacy use
Three-key TDEA Encryption    Deprecated through 2023; disallowed after 2023
Three-key TDEA Decryption    Legacy use

*Deprecated: the algorithm may still be used, but you must accept the associated risk

*Disallowed: algorithm or key length not suitable for use anymore

Three-key TDEA encryption and decryption

Effective as of the final publication of this revision of SP 800-131A, encryption using three-key TDEA is deprecated through December 31, 2023, using the approved encryption modes. Note that SP 800-67 specifies a restriction on protecting no more than 2^20 data blocks using the same single key bundle. Three-key TDEA may continue to be used for encryption in existing applications but shall not be used for encryption in new applications. After December 31, 2023, three-key TDEA is disallowed for encryption unless specifically allowed by other NIST guidance. Decryption using three-key TDEA is allowed for legacy use.

How is Triple DES/3DES applied?

Triple DES is a type of encryption that applies the DES cipher three times to the same plaintext. It supports three keying options:

  • Keying option 1: all three keys are different.
  • Keying option 2: two keys are the same and one is different.
  • Keying option 3: all three keys are the same.
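The encrypt-decrypt-encrypt (EDE) structure behind these keying options can be illustrated with a toy stand-in for DES (XOR with the key). This is purely illustrative and must never be used for real encryption; it also shows why keying option 3 collapses to single DES:

```python
# Toy illustration of the TDEA encrypt-decrypt-encrypt (EDE) structure.
# XOR with the key stands in for the DES block function; this is NOT real DES.

def enc(block: int, key: int) -> int:   # stand-in for DES encryption
    return block ^ key

def dec(block: int, key: int) -> int:   # stand-in for DES decryption
    return block ^ key

def tdea_encrypt(block: int, k1: int, k2: int, k3: int) -> int:
    """C = E_k3(D_k2(E_k1(P))), the EDE construction."""
    return enc(dec(enc(block, k1), k2), k3)

def tdea_decrypt(block: int, k1: int, k2: int, k3: int) -> int:
    """P = D_k1(E_k2(D_k3(C)))"""
    return dec(enc(dec(block, k3), k2), k1)

p = 0x1234
# Keying option 1: three independent keys (the stronger variant)
c = tdea_encrypt(p, 0xA1, 0xB2, 0xC3)
assert tdea_decrypt(c, 0xA1, 0xB2, 0xC3) == p
# Keying option 3: k1 == k2 == k3 collapses to a single encryption, which is
# why it exists only for backward compatibility with single DES.
assert tdea_encrypt(p, 0xA1, 0xA1, 0xA1) == enc(p, 0xA1)
```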

Difference between 3DES and DES

DES is a symmetric-key algorithm that uses the same key for encryption and decryption processes. 3DES was developed as a more secure alternative because of DES’s small key length. 3DES or Triple DES was built upon DES to improve security. In 3DES, the DES algorithm is run three times with three keys; however, it is only considered secure if three separate keys are used.

Why is Triple DES/3DES no longer considered secure?

The Triple Data Encryption Algorithm (TDEA or 3DES) is being officially decommissioned, according to draft guidance published by NIST on July 19, 2018. According to the guidance, 3DES will be deprecated for all new applications following a period of public consultation, and its use will be prohibited after 2023.

Is DES still used?

The Data Encryption Standard, also known as DES, is no longer considered secure. While there are no known severe weaknesses in its internals, it is inherently flawed because its 56-bit key is too short. A German court recently declared DES to be “out-of-date and not secure enough,” and held a bank accountable for utilizing it.

AES replaced DES encryption

One of the primary objectives for the DES replacement algorithm from the National Institute of Standards and Technology (NIST) was that it be efficient in both software and hardware implementations. (Originally, DES was only practical in hardware implementations.) Performance analysis of the algorithms was carried out using Java and C reference implementations. AES was chosen in an open competition that included 15 candidate algorithms from research teams around the world, and the overall amount of resources dedicated to the process was enormous.

Finally, in October 2000, the National Institute of Standards and Technology (NIST) announced Rijndael as the proposed Advanced Encryption Standard (AES).

Differences between 3DES and AES encryption?

Both AES and 3DES, often known as Triple DES, are symmetric block ciphers, and both have served as data encryption standards, though the use of 3DES has become increasingly unpopular in recent years. The two share the same goals and objectives, yet they differ in several important ways.

Parameters of comparison   3DES                                                AES
Key Length                 168 bits (k1, k2, and k3) or 112 bits (k1 and k2)   128, 192, or 256 bits
Cipher Type                Symmetric block cipher                              Symmetric block cipher
Block Size                 64 bits                                             128 bits
Security                   Proven inadequate                                   Considered secure

Reference

nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-131Ar2.pdf



Cryptographic keys are a vital part of any security system. They do everything from data encryption and decryption to user authentication. The compromise of any cryptographic key could lead to the collapse of an organization’s entire security infrastructure, allowing the attacker to decrypt sensitive data, authenticate themselves as privileged users, or give themselves access to other sources of classified information. Luckily, proper management of keys and their related components can ensure the safety of confidential information.

Key management deals with the creation, exchange, storage, deletion, and renewal of keys. It means putting standards in place to ensure the security of cryptographic keys in an organization.

Types of Cryptographic keys:

Cryptographic keys are grouped into various categories based on their functions. Let’s talk about a few types:

  1. Master Key

    The master key is used only to encrypt other subordinate encryption keys. The master key always remains in a secure area in the cryptographic facility (e.g., hardware security module), and its length will typically be 128 – 256 bits, depending on the algorithm used.

  2. The Key Encryption Key (KEK)

    When a secret key or data encryption key is stored or transported, it must be “wrapped” with a KEK to ensure the confidentiality, integrity, and authenticity of the key. The KEK is also known as the “key wrapping key” or the “key transport key.”

  3. The Data Encryption Key (DEK)

    Depending on the scenario and requirements, data may be encrypted with symmetric or asymmetric keys. In the case of symmetric keys, an AES key with a key length of 128-256 bits is typically used. A key length of 1024 – 4096 bits is generally used for asymmetric keys with the RSA algorithm. In simpler terms, you encrypt your data with data encryption keys.

  4. Root Keys

    The Root Key is the topmost key of your PKI hierarchy, which is used to authenticate and sign digital certificates. The Root Key usually has a longer lifetime than other keys in the hierarchy. The private portion of the root key pair is stored securely in a FIPS 140-2 Level 3 compliant hardware security module.
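As a rough sketch of how such a key hierarchy can be built in software, the example below derives subordinate keys from a master key with HMAC-SHA256, in the spirit of the NIST SP 800-108 KDFs. The labels and helper name are ours; in production the master key stays inside an HSM and never leaves it:

```python
# Sketch of a key hierarchy: a master key derives purpose-bound subordinate
# keys via HMAC-SHA256 (in the spirit of NIST SP 800-108 KDFs). In a real
# deployment the master key lives inside an HSM.
import hashlib
import hmac
import secrets

master_key = secrets.token_bytes(32)  # 256-bit master key

def derive_key(parent: bytes, label: str) -> bytes:
    """Derive a 256-bit subordinate key bound to a purpose label."""
    return hmac.new(parent, label.encode(), hashlib.sha256).digest()

kek = derive_key(master_key, "key-encryption-key")
dek = derive_key(master_key, "data-encryption-key")

assert len(kek) == len(dek) == 32
assert kek != dek  # distinct purposes yield distinct keys
```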

Key length and algorithm

Choosing the right key length and algorithm is very important for the security of your cryptography environment. The key length must be aligned with the key algorithm in use. For any key (symmetric or asymmetric), the length is chosen based on several factors:

  • The key algorithm being used
  • The required security strength.
  • The amount of data being processed utilizing the key (e.g., bulk data)
  • The crypto period of the key
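For illustration, here is a sketch of generating symmetric keys at the common AES strengths with a CSPRNG. The helper name is ours; RSA key generation requires a dedicated crypto library and is not shown:

```python
# Sketch: generate symmetric keys at the strengths discussed above using
# Python's CSPRNG. The helper is illustrative, not a full key manager.
import secrets

def new_symmetric_key(bits: int) -> bytes:
    if bits not in (128, 192, 256):  # common AES key lengths
        raise ValueError("unsupported AES key length")
    return secrets.token_bytes(bits // 8)

aes128 = new_symmetric_key(128)
aes256 = new_symmetric_key(256)
assert len(aes128) == 16 and len(aes256) == 32
```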

Importance of Key Management:

Key Management forms the basis of all data security. Data is encrypted and decrypted via encryption keys, which means the loss or compromise of any encryption key would invalidate the data security measures put into place. Keys also ensure the safe transmission of data across an Internet connection. With authentication methods like code signing, attackers could pretend to be a trusted service like Microsoft while giving victims malware if they steal a poorly protected key. Keys comply with specific standards and regulations to ensure companies use best practices when protecting cryptographic keys. Well-protected keys should only be accessible by users who need them.

Key management systems are commonly used to ensure that the keys are:

  • Generated to the required key length and algorithm
  • Well protected (security architects generally prefer FIPS 140-2 compliant hardware security modules)
  • Managed and accessible only by authorized users
  • Rotated regularly
  • Deleted when no longer required
  • Audited regularly for their usage
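The rotation check in the list above is something a key management system automates. A minimal sketch, with a hypothetical 365-day crypto period and made-up key records:

```python
# Sketch: flag keys that are past their crypto period and due for rotation.
# The 365-day period and the key records below are hypothetical examples.
from datetime import date, timedelta

CRYPTO_PERIOD = timedelta(days=365)

def rotation_due(created: date, today: date) -> bool:
    """Return True when a key has reached the end of its crypto period."""
    return today - created >= CRYPTO_PERIOD

keys = {"dek-orders": date(2021, 1, 10), "dek-reports": date(2022, 6, 1)}
today = date(2022, 7, 1)
due = [name for name, created in keys.items() if rotation_due(created, today)]
assert due == ["dek-orders"]
```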

Centralized Key Management:

People often ask if it is mandatory to have a third-party key management solution to manage encryption keys centrally. In my opinion, it is not compulsory, but it is a good capability for your organization to have: a centralized key management system offers more efficiency than an application-specific KMS.

The benefits of a centralized key management system:

  • Reduces operation overhead
  • Reduces costs with automation
  • With automation, it reduces the risk of human errors
  • Automated key update and distribution to any end-point
  • Provides tamper-evident records for proof of compliance
  • High availability and scalability
  • Meets regulatory compliance
  • Simplifies your key management lifecycle

Compliance and Best Practices

Compliance standards and regulations place heavy demands on key management practices. Standards created by NIST and regulations like PCI DSS, FIPS, and HIPAA expect users to follow certain best practices to maintain the security of cryptographic keys used to protect sensitive data.

The following are important practices to ensure compliance with government regulations and standards:

  • The most important practice with cryptographic keys is never hard-coding key values anywhere. Hard-coding a key into open-source code, or code of any kind, instantly compromises the key. Anyone with access to that code now has access to the key value of one of your encryption keys, resulting in an insecure key.
  • The principle of least privilege is that users should only have access to keys necessary for their work. This assures only authorized users can access important cryptographic keys while tracking key usage. If a key is misused or compromised, only a handful of people have access to the key, so the suspect pool is narrowed down if the breach was within the organization.
  • HSMs are physical devices that store cryptographic keys and perform cryptographic operations on-premises. For an attacker to steal the keys from an HSM, they would need to physically remove the device from the premises, steal a quorum of access cards required to access the HSM, and bypass the encryption algorithm used to keep the keys secure. HSMs on the Cloud are also a viable key management storage method. Still, there is always the chance that the Cloud Service Provider’s security fails, allowing an attacker to access the keys stored therein.
  • Automation is a widely practiced method of ensuring keys do not go past their crypto period and become overused. Other portions of the key lifecycle can be automated, like creating new keys, backing up keys regularly, distributing keys, revoking keys, and destroying keys.
  • Creating and enforcing security policies relating to encryption keys is another way many organizations ensure the safety and compliance of their key management system. Security policies provide the methods everyone within an organization follows and create another method of tracking who can and has accessed specific keys.
  • Separating duties related to key Management is another important practice for any organization. An example of separation of duties is that one person is assigned to authorize the new user’s access to keys, another distributes the keys, and a third person creates the keys. With this method, the first person cannot steal the key during the distribution phase or learn the value during the generation phase of the key lifecycle.

Encryption Consulting Assessment

At Encryption Consulting, we ensure your system meets compliance standards and protects data with the best possible methods. We perform encryption assessments, including key Management and cloud key lifecycle management. We also write weekly blogs that can help you find the best practices for your key management needs and learn more about the different aspects of data security.

Free Downloads

Datasheet of Encryption Consulting Services

Encryption Consulting is a customer focused cybersecurity firm that provides a multitude of services in all aspects of encryption for our clients.

Download
Encryption Services

About the Author

Parnashree Saha is a data protection senior consultant at Encryption Consulting LLC working with PKI, AWS cryptographic services, GCP cryptographic services, and other data protection solutions such as Vormetric, Voltage etc.

Read time: 5 minutes

What is Data Loss Prevention?

Data Loss Prevention (DLP) is a set of processes used to ensure an organization’s sensitive data is not lost, misused, leaked, breached, or accessed by unauthorized users. Organizations use DLP to protect and secure data and comply with regulations. Organizations pass their sensitive data to partners, customers, remote employees, and other legitimate users through their network, and sometimes it may get intercepted by an unauthorized user.

Many organizations find it challenging to keep track of their data and lack effective data loss prevention practices. This results in a lack of visibility into what data leaves the organization and undermines data loss prevention efforts.

Why do you need Data Loss Prevention?

Data loss can be damaging for businesses of all sizes. The primary purpose of data loss prevention is to secure sensitive data and prevent data leakage /data breaches. Data loss prevention solutions are designed to monitor and filter data constantly. In addition to dealing with the data being used, stored, and transmitted within the network, data loss prevention applications ensure no harmful outside information enters the company network and that no sensitive information leaves the company network via an unauthorized user.

Organizations typically use DLP to:

  • Protect personal Identifiable Information (PII) data and comply with relevant regulations.
  • Protect intellectual property, which is critical for the organization.
  • Secure data on remote cloud systems or storage.
  • Enforce security in a BYOD environment.
  • Achieve data visibility.

Reasons why Data Loss Prevention is necessary for business:

  • Outside threats and attacks are increasing daily; hackers have become more sophisticated with time and finding new ways to access networks and sensitive data occurs very frequently. Organizations should actively look for new threats.
  • Insider threats are also a prime reason to use DLP. Disgruntled employees deliberately cause harm to the company by sharing the company’s sensitive data with unauthorized users or by trying to find assistance from outside to carry out the attacks. The Verizon 2021 Data Breach Investigations Report revealed that more than 20% of security incidents involved insiders.
  • Data loss can impact the financial health of your business. Data loss can also lead to loss of productivity, revenue, client trust and damage the company’s brand name and reputation. According to the IBM Cost of a Data Breach Report 2021, the global average data breach costs increased from $3.86 million to $4.2 million in 2021.
  • Organizations have welcomed the Bring Your Own Device (BYOD) approach on an immense scale. However, some industries or organizations have poorly deployed and maintained BYOD solutions. In this case, it is easier for employees to inadvertently share sensitive information through their personal devices.

Therefore, a data loss prevention strategy is crucial to secure your data, protect intellectual property, and comply with regulations. DLP systems ensure that your company’s sensitive data is not lost, mishandled, or accessed by unauthorized users.

Data Loss Prevention (DLP) best practices:

  1. Determine your data protection objective

    Define what you are trying to achieve with your data loss prevention program. Do you want to protect your intellectual property, gain better visibility into your data, or meet regulatory and compliance requirements? Having a clear objective will help your organization determine the appropriate DLP solution to include in your DLP strategy.

  2. Data classification and identification

    Identify the data critical for your business, such as client information, financial records, source code, etc., and classify it based on its criticality level.

  3. Data Security policies

    Define comprehensive data security rules and policies and establish them across your company’s network. DLP technologies help block sensitive data/information/files from being shared via unsecured sources.

  4. Access Management

    Access to and use of critical or sensitive data should be restricted or limited based on users’ roles and responsibilities. The DLP solution helps the system administrators assign the appropriate authorization controls to users depending upon the type of data users handle and their access level.

  5. Evaluate internal resources

    To execute the DLP strategy/program successfully, an organization needs personnel with DLP expertise, who can help the organization to implement the appropriate DLP solution, including DLP risk analysis, reporting, data breach response, and DLP training and awareness.

  6. Conduct an assessment

    Evaluating the types of data and their value to the organization is an essential step in implementing a DLP program. This includes identifying relevant data, wherever it is stored, and whether it is sensitive data (intellectual property, confidential information, etc.).

    Some DLP solutions can identify information assets by scanning the metadata of files and cataloging the result, or if necessary, analyze the content by opening the files. The next step is to evaluate the risk associated with each type of data if the data is leaked.

    Losing information about employee benefits programs carries a different level of risk than the loss of 1,000 patient medical files or 100,000 bank account numbers and passwords. Additional considerations include data exit points and the likely cost to the organization if the data is lost.

  7. Research for DLP vendors

    Establish your evaluation criteria while researching for a DLP vendor for your organization, such as:

    • Type of deployment architecture offered by the vendor.
    • Operating systems (Windows, Linux, etc.) the solution supports.
    • Does the vendor provide managed services?
    • Protecting structured or unstructured data, what’s your concern?
    • How do you plan to enforce data movement? (e.g., based on policies, events, or users)
    • Regulatory and compliance requirements for your organization.
    • What is the timeline to deploy the DLP solution?
    • Will you need additional staff or experts to manage the DLP solution?
  8. Define roles and responsibilities

    Define the roles and responsibilities of individuals involved in the DLP program. This will provide checks and balances during the deployment of the program.

  9. Define use cases

    Organizations often try to solve all use cases simultaneously. Instead, define an initial approach and set fast, measurable objectives, or narrow your focus to specific data types.
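The classification step described earlier (identifying sensitive data) can be sketched with simple pattern matching. Real DLP products use far more robust detection (checksums such as Luhn, contextual analysis), so the patterns below are illustrative only:

```python
# Sketch: a minimal content classifier of the kind DLP tools apply to data
# in motion. The regex patterns are simplified illustrations.
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text: str) -> set:
    """Return the set of sensitive-data categories detected in text."""
    return {name for name, rx in PATTERNS.items() if rx.search(text)}

doc = "Contact jane@example.com, SSN 123-45-6789."
assert classify(doc) == {"email", "ssn"}
assert classify("nothing sensitive here") == set()
```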

    Conclusion

    DLP solutions classify regulated, confidential, and business-critical data and identify violations of policies defined by organizations or within a predefined policy set, usually driven by regulatory compliance such as PCI DSS, HIPAA, or GDPR. When violations are identified, DLP enforces remediation with alerts to prevent end users from accidentally or deliberately sharing data that could put the organization at risk. DLP solutions monitor and control endpoint activities; protect data-at-rest, data-in-motion, and data-in-use; and provide reporting to meet compliance and auditing requirements.



    What is an IoT device?

    Before we jump into the issues and challenges, let’s get a better idea of IoT devices. An IoT device is any device with a sensor attached that transmits data from one object to another, or to people, with the help of the Internet. IoT devices include wireless sensors, software, actuators, and computing devices. Any device that connects to a network to access the Internet qualifies, so personal computers, cellphones, speakers, and even some outlets are considered IoT devices. Today, even cars and airplanes use IoT devices, meaning that if threat actors attack these devices, cars or airplanes could be hijacked or stolen. With such widespread use of IoT devices globally, authenticating and authorizing IoT devices within your organization’s network has become vital. Allowing unauthorized IoT devices onto your network can let threat actors leverage them to perform malware attacks within your organization.

    Need for IoT Security

    Security breaches in IoT devices can occur at any stage, including manufacturing, network deployment, and software updates. These vulnerabilities provide entry points for hackers to introduce malware into the IoT device and corrupt it. In addition, because all the devices are connected to the Internet (for example, through Wi-Fi), a flaw in one device might compromise the entire network, leading other devices to malfunction. Some key requirements for IoT security are:

    • Device security, such as device authentication through digital certificates and signatures.
    • Data security, including device authentication and data confidentiality and integrity.
    • To comply with regulatory requirements and requests to ensure that IoT devices meet the regulations set up by the industry within which they are used.

    IoT Security Challenges:

    1. Malware and Ransomware

      The number of malware and ransomware strains used to exploit IoT-connected devices will continue to rise in the coming years as the number of connected devices grows. While classic ransomware uses encryption to lock users out of devices and platforms entirely, hybrid malware/ransomware strains that combine multiple attack types are on the rise.

      The ransomware attacks could reduce or disable device functions while stealing user data. For example, a simple IP (Internet Protocol) camera can collect sensitive information from your house, office, etc.

    2. Data Security and Privacy

      Data privacy and security are the most critical issues in today’s interconnected world. Large organizations use various IoT devices, such as smart TVs, IP cameras, speakers, lighting systems, printers, etc., to constantly capture, send, store, and process data. All the user data is often shared or even sold to numerous companies, violating privacy and data security rights and creating public distrust.

      Before storing and disassociating IoT data payloads from information that might be used to identify users personally, the organization needs to establish dedicated compliance and privacy guidelines that redact and anonymize sensitive data. Mobile, web, cloud apps, and other services used to access, manage, and process data associated with IoT devices should comply with these guidelines. Data that has been cached but is no longer needed should be safely disposed of. If the data is saved, complying with various legal and regulatory structures will be the most challenging part.

    3. Brute Force Attacks

      According to government guidance, manufacturers should avoid selling IoT devices with default credentials, such as “admin” as both the username and password. However, these are only guidelines at this point, and there are no legal penalties in place to force manufacturers to stop this risky practice. In addition, almost all IoT devices are vulnerable to password hacking and brute-forcing because of weak credentials and login details.

      For this very reason, the Mirai malware successfully detected vulnerable IoT devices and compromised them using default usernames and passwords.

    4. Skill Gap

      Nowadays, organizations face a significant IoT skill gap that stops them from fully utilizing new prospects. As it is not always possible to hire a new team, setting up training programs is necessary. Adequate training workshops and hands-on activities should be set up to hack a specific smart gadget. The more knowledge your team members have in IoT, the more productive and secure your IoT will be.

    5. Lack of Updates and Weak Update Mechanism

      IoT products are designed with connectivity and ease of use in mind. They may be secure when purchased, but they become vulnerable when hackers find new security flaws or vulnerabilities. In addition, IoT devices become vulnerable over time if they are not fixed with regular updates.
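The default-credential weakness described under Brute Force Attacks can be screened for mechanically. A minimal sketch follows; the credential list is a tiny, hypothetical sample of the lists Mirai-style malware uses:

```python
# Sketch: screen device logins against a default-credential list, the same
# weakness Mirai exploited. The list below is a small hypothetical sample.
DEFAULT_CREDS = {
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "1234"),
}

def uses_default_credentials(username: str, password: str) -> bool:
    """Return True if the username/password pair is a known factory default."""
    return (username, password) in DEFAULT_CREDS

assert uses_default_credentials("admin", "admin")
assert not uses_default_credentials("admin", "S7rong!passphrase")
```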

    Top IoT Vulnerabilities

    The Open Web Application Security Project (OWASP) has published a list of the top IoT vulnerabilities, an excellent resource for manufacturers and users alike.

    1. Weak Password Protection

      Use of easily brute-forced, publicly available, or unchangeable credentials, including backdoors in firmware or client software that grants unauthorized access to deployed systems.

      Weak, guessable, default, and hardcoded credentials are the easiest way to hack and attack devices directly and launch further large-scale botnets and other malware.

      In 2018, California’s SB-327 IoT law was passed to prohibit the use of default credentials. This law aims to address the weak-password vulnerability.

    2. Insecure network services

      Unnecessary or unsafe network services running on the devices, particularly those exposed to the internet, jeopardize the confidentiality, integrity, authenticity, and availability of information, and open the risk of unauthorized remote control of IoT devices.

      Unsecured networks make it easy for cybercriminals to exploit weaknesses in protocols and services that run on IoT devices. Once they have exploited the network, attackers can compromise confidential or sensitive data transmitted between the user’s device and the server. Unsecured networks are especially vulnerable to Man-in-the-Middle (MITM) attacks, which steal device credentials and authentication as part of broader cyberattacks.

    3. Insecure Ecosystem Interfaces

      Insecure web, backend API, cloud, or mobile interfaces in the ecosystem outside of the device that allows compromise of the device or its related components. Common issues include a lack of authentication/authorization, lacking or weak encryption, and a lack of input and output filtering.

      Useful identification tools help the server distinguish legitimate devices from malicious users. Insecure ecosystem interfaces, such as application programming interfaces (APIs), web applications, and mobile devices, allow attackers to compromise devices. Organizations should implement authentication and authorization processes to authenticate users and protect their cloud and mobile interfaces.

    4. Insecure or Outdated Components

      Use of deprecated or insecure software components/libraries that could allow the device to be compromised. This includes insecure customization of operating system platforms, and the use of third-party software or hardware components from a compromised supply chain.

      The IoT ecosystem can be compromised by code and software vulnerabilities as well as legacy systems. Using unsafe or outdated components, such as open source or third-party software, can create security vulnerabilities that expand an organization’s attack surface.

    5. Lack of Proper Privacy Protection

      User’s personal information stored on the device or in the ecosystem that is used insecurely, improperly, or without permission.

      IoT devices often collect personal data that organizations must securely store and process in order to comply with various data privacy regulations. Failure to protect this data can result in fines, loss of reputation and loss of business. Failure to implement adequate security can lead to data leaks that jeopardize user privacy.

    6. Insecure Default Settings

      Devices or systems shipped with insecure default settings or lack the ability to make the system more secure by restricting operators from modifying configurations.

      IoT devices, like personal devices, come with hard-coded, default settings that allow for easy configuration. However, these default settings are very insecure and vulnerable to attackers. Once compromised, hackers can exploit vulnerabilities in a device’s firmware and launch broader attacks aimed at businesses.

    7. Lack of Physical Hardening

      Lack of physical hardening measures, allowing potential attackers to gain sensitive information that can help in a future remote attack or take local control of the device.

      The nature of IoT devices suggests that they are deployed in remote environments rather than in easy-to-manage, controlled scenarios. This makes it easy for attackers to target, disrupt, manipulate, or sabotage critical systems within an organization.

    8. Lack of secure update mechanisms

      Lack of ability to securely update the device. This includes lack of firmware validation on the device, lack of secure delivery (unencrypted in transit), lack of anti-rollback mechanisms, and lack of notifications of security changes due to updates.

      Unauthorized firmware and software updates pose a great threat to launch attacks against IoT devices.

    Conclusion

    How does Encryption Consulting’s PKI-as-a-Service help secure your IoT devices?

    Encryption Consulting LLC (EC) will completely offload your Public Key Infrastructure environment, building and managing the PKI (on-premises, PKI in the cloud, or a cloud-based hybrid PKI infrastructure) for your organization. Encryption Consulting will deploy and support your PKI using a fully developed and tested set of procedures and audited processes. Admin rights to your Active Directory will not be required, and control over your PKI and its associated business processes will always remain with you. Furthermore, following security best practices, the CA keys will be held in FIPS 140-2 Level 3 HSMs hosted either in your secure data center or in our Encryption Consulting data center in Dallas, Texas.


    Free Downloads

    Datasheet of Encryption Consulting Services

    Encryption Consulting is a customer focused cybersecurity firm that provides a multitude of services in all aspects of encryption for our clients.

    Download
    Encryption Services

    About the Author

    Parnashree Saha is a data protection senior consultant at Encryption Consulting LLC working with PKI, AWS cryptographic services, GCP cryptographic services, and other data protection solutions such as Vormetric, Voltage etc.


    What is a Wildcard Certificate?

    A wildcard certificate (like SSL/TLS) is a public key certificate that can protect several subdomains inside a domain and is usually acquired from a trustworthy public Certificate Authority (CA).

    Multiple subdomains for your website can benefit your business, but they can also be challenging to manage. Multiple SSL/TLS certificates to secure those subdomains increase their complexity, but a wildcard certificate can efficiently resolve this issue.

    Compared to managing individual certificates for your subdomains, a Wildcard certificate can save you time and money.

    In wildcard notation, the domain name is prefixed by an asterisk and a period. Wildcards are frequently used in SSL/TLS certificates to extend encryption to subdomains. A traditional SSL certificate is valid only for a single domain, such as www.domain.com. A *.domain.com wildcard certificate also protects cloud.domain.com, shop.domain.com, mobile.domain.com, and any other first-level subdomain; it does not cover deeper levels such as a.b.domain.com, nor the bare domain.com unless that name is listed separately in the certificate.
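
    To make that scope concrete, here is a minimal, illustrative Python sketch of single-label wildcard matching. The function name is hypothetical; real TLS clients implement the full rules of RFC 6125, which this toy version only approximates.

```python
def matches_wildcard(cert_name: str, hostname: str) -> bool:
    """Toy illustration of wildcard certificate name matching.

    Per RFC 6125, the leading '*' stands in for exactly one
    left-most DNS label -- no more, no less.
    """
    cert_name, hostname = cert_name.lower(), hostname.lower()
    if not cert_name.startswith("*."):
        return cert_name == hostname
    base = cert_name[2:]                      # e.g. "domain.com"
    labels = hostname.split(".")
    # the wildcard replaces exactly one label; the rest must equal the base
    return len(labels) == base.count(".") + 2 and ".".join(labels[1:]) == base

print(matches_wildcard("*.domain.com", "shop.domain.com"))   # True
print(matches_wildcard("*.domain.com", "a.b.domain.com"))    # False
print(matches_wildcard("*.domain.com", "domain.com"))        # False
```

    Note that neither the deeper subdomain nor the bare domain matches, which is exactly why wildcard certificates for the apex domain usually carry it as an extra Subject Alternative Name.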

    Why should you use Wildcard certificates?

    Wildcard certificates are easier to use as they allow organizations to use a single certificate for all subdomains.

    The following are some advantages of using wildcard certificates:

    • Secure any number of subdomains:

      Without having different SSL certificates for each subdomain, a single wildcard SSL certificate can cover as many subdomains as you want.

    • Straightforward Certificate Administration:

      Individual SSL certificates must be deployed and properly managed to secure an ever-growing number of public-facing domains, cloud workloads, and devices. A single wildcard certificate, by contrast, covers all of those subdomains at once, making certificate management simpler.

    • Cost-Cutting:

      A wildcard certificate costs more than an ordinary SSL certificate, but it is a cost-effective alternative to securing every subdomain with its own certificate.

    • Fast and Flexible Implementation:

      A wildcard certificate is a great way to build new sites on new subdomains that your existing certificate can cover. There’s no need to wait for a new SSL certificate, which saves your organization time and speeds up your time to market.

    Potential Security risks of Wildcard certificates

    When a wildcard certificate is reused across multiple subdomains hosted on various servers, the protections offered by SSL/TLS are weakened. A breach of any one of those servers can expose the certificate's private key, jeopardizing the confidentiality and integrity of traffic to every site where the certificate is used. An attacker who obtains the private key can decrypt, read, modify, and re-encrypt traffic, likely exposing sensitive information and enabling further targeted attacks.

    Wildcard certificates are frequently used to cover all domains with the same registered root, making administration straightforward. However, because the same private key is used across numerous systems, the freedom that comes with using wildcard certificates also comes with severe security risks:

    • Access To Private Keys:

      If the private key of a wildcard certificate gets compromised, the hacker can impersonate any domain for the wildcard certificate.

    • Fake Certificates

      Attackers can fool a certificate authority (CA) into issuing a wildcard certificate for a bogus organization. Once the attacker gets the fictitious company’s wildcard certificates, they can set up subdomains and phishing sites.

    • Certificate Management

      All subdomains will require a new certificate if the wildcard certificate is revoked.

    • Web Server Security

      If one server or subdomain is hacked, all subdomains may be hacked as well.

    • A single point of failure:

      The private key of a wildcard certificate is a single point of total compromise. If that key is compromised, all secure connections to all servers and subdomains listed in the certificate are compromised.

    Attackers can easily misuse wildcard certificates if an organization doesn’t have adequate security, control, or monitoring.

    Strategy to consider when using Wildcard Certificates

    • Limit the use of wildcard certificates to a specific purpose for better security control.
    • Hold a detailed discussion with the security team and leadership about the purpose of using a wildcard certificate:
      • Understand the security risks.
      • Will this decision be more efficient for your organization?
      • Are you planning to use a wildcard certificate to save time?
      • Are you trying to save money?
    • Keep an accurate and up-to-date inventory of certificates in your environment which includes documenting key length, hash algorithms, certificate expiry, certificate locations, and the certificate owner.
    • Ensure that private keys are stored and protected according to the industry’s best practices (i.e., using a certified HSM).
    • Automate certificate renewal, revocation, and provisioning processes to prevent unexpected expirations and outages.
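
    As a sketch of the inventory-and-renewal idea above, the snippet below flags certificates nearing expiry. The inventory dict, names, and dates are all hypothetical; a real deployment would pull notAfter values from the certificates themselves or from a certificate management system.

```python
from datetime import datetime, timezone

# hypothetical inventory: certificate name -> expiry (notAfter, UTC)
inventory = {
    "*.shop.example.com": datetime(2026, 3, 1, tzinfo=timezone.utc),
    "api.example.com": datetime(2024, 1, 15, tzinfo=timezone.utc),
}

def expiring_soon(inventory, now, days=30):
    """Return names of certificates that expire within `days` of `now`
    (or have already expired)."""
    return [name for name, not_after in inventory.items()
            if (not_after - now).days <= days]

now = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(expiring_soon(inventory, now))  # ['api.example.com']
```

    A check like this, run on a schedule and wired to alerting or automated renewal, is what prevents the unexpected outages the last bullet warns about.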

    Conclusion:

    No organization wants to put its brand in a position where leaking sensitive information is a piece of cake for attackers. Although wildcard certificates offer real benefits, make sure you use them consciously and strategically.



    PKI, the abbreviation for Public Key Infrastructure, is a set of roles, procedures, and policies needed to create, distribute, manage, use, and revoke digital certificates and manage public-key encryption. PKI is used to confirm the identity of a user by providing ownership of a private key. It is a trusted service to verify that a sender or receiver of data is exactly who they claim to be.

    PKI is built around components and procedures for managing the key pairs (public and private key pairs).

    A typical PKI is made up of the following components:

    1. Certificate Authority (CA): A trusted CA is the only entity in a PKI that can issue trusted digital certificates. The CA accepts certificate requests and verifies the information provided by applicants against its certificate management policy. If the information is legitimate, the CA signs the certificate with its private key and issues it to the applicant.
    2. Registration Authority (RA): The RA receives certificate signing requests, for initial enrollment or renewal, from users, servers, and other applications. The RA verifies the identity of the end-entity and forwards the request to the certificate authority (CA).
    3. Public Key: A public key can be distributed widely and does not require secure storage. Messages or data encrypted with it can be decrypted only with the corresponding private key.
    4. Private Key: The private key is used by the recipient to decrypt messages or data encrypted with its corresponding public key. This establishes ownership of the key pair and ensures the message is read only by the approved parties.
    5. Root Certificate Authority (Root CA): A certificate is considered valid when a trusted Root CA signs it. A Root CA is entrusted to verify identities and signs the root certificate that is distributed to users.
    6. Intermediate Certificate Authority: An Intermediate CA is also a trusted CA and is used as a chain between the root CA and the client certificate that the user enrolls for. Since the Root CA has signed and trusts the intermediate CA, certificates generated from the intermediate CA are also trusted.
    7. Hardware security module: A Hardware Security Module isn’t a mandatory component of a PKI, but it improves the security of the PKI when implemented. This device protects and manages digital keys and serves as the groundwork for building a secure enterprise PKI infrastructure. The HSM manages the complete lifecycle of cryptographic keys, including creation, rotation, deletion, auditing, and support for APIs to integrate with various applications.
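
    The chain of trust from Root CA through Intermediate CA to an end-entity certificate can be sketched as below. This is a toy model only: it uses HMAC as a stand-in for real asymmetric signatures, whereas actual PKI uses RSA or ECDSA, where verification needs only the issuer's public key, never its private key. All names and keys here are made up.

```python
import hashlib
import hmac

def toy_sign(issuer_key: bytes, subject: str) -> bytes:
    # stand-in for a CA signing a certificate; real CAs sign with
    # an RSA/ECDSA private key, ideally held inside an HSM
    return hmac.new(issuer_key, subject.encode(), hashlib.sha256).digest()

root_key = b"root-ca-key"                  # kept offline by the Root CA
intermediate_key = b"intermediate-ca-key"  # held by the Intermediate CA

# Root CA signs the Intermediate CA; the Intermediate CA signs the end-entity.
chain = [
    ("intermediate-ca", root_key),
    ("www.example.com", intermediate_key),
]
signatures = [toy_sign(key, name) for name, key in chain]

def verify_chain(chain, signatures):
    # a certificate is trusted only if every link back to the root verifies
    return all(
        hmac.compare_digest(sig, toy_sign(key, name))
        for (name, key), sig in zip(chain, signatures)
    )

print(verify_chain(chain, signatures))  # True
```

    Because the Root CA signed the Intermediate CA, certificates the intermediate issues inherit that trust, which is exactly why the root key is kept standalone and offline in the best practices below.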

    Now that we have a fair idea of the PKI components, let's talk about different PKI vendors and their best practices.

    Microsoft PKI

    Below are a few best practices recommended for using Microsoft PKI effectively.

    • Make a detailed plan of your PKI infrastructure before deployment.
    • Avoid installing AD CS on a domain controller.
    • The Root CA should be standalone and offline.
    • Do not issue end-entity certificates from a Root CA.
    • Enable auditing events for both the Root and Issuing CAs.
    • Secure the private key with an HSM (FIPS 140-2 Level 3).
    • Install an Enterprise CA only if your CA issues certificates to devices or users.
    • Do not use the default certificate templates.
    • The CRL distribution point should be highly available.
    • Publish the Root CA CRL to Active Directory.
    • The hash algorithm should be at least SHA-2 (SHA-256).
    • The end-entity certificate validity period should be at most 2 years.

    AWS Certificate Manager

    Here are the top 10 best practices we identified for AWS Certificate Manager (ACM):

    • ACM certificate expiry check: Ensure expired SSL/TLS certificates managed by ACM are removed. This eliminates the risk of deploying an invalid SSL/TLS certificate on front-facing resources, which can also cost the business credibility.
    • ACM Certificate validity check: Ensure requests that arrive during the SSL/TLS certificate issue or renewal process are validated regularly.
    • Root Certificate Authority (CA) usage: It is always a best practice to minimize the use of Root CA. Amazon recommends creating a separate account for Root CA.
    • Transport layer protection is vital to security. Use only TLS version 1.2 or above, and do not use SSL, as it is no longer secure.
    • Whenever you import certificates instead of using ACM-issued certificates, ensure the SSL/TLS private keys have high key strength to avoid a data breach.
    • Avoid wildcard domain certificates. Instead, issue a single-domain ACM certificate for each domain and subdomain, each with its own private key.
    • In ACM, allow imported certificates only from authenticated, trusted partners of your organization. When wildcard certificates are imported into ACM, the security risk is high because the user might hold an unencrypted copy of the certificate's private key.
    • Recommended best practice is to always use a Fully Qualified Domain Name (FQDN) in SSL/TLS ACM certificates.
    • To avoid misuse of generated certificates, perform frequent audits of AWS environment for trusted certificates and validate audit reports.
    • Turn on AWS CloudTrail and CloudWatch alarms: CloudTrail logging helps track the history of AWS API calls and monitor AWS deployments, and it can be integrated with applications to automate logging and monitoring. Enabling CloudWatch alarms provides notifications when configured metric thresholds are breached.

    AWS ACM Private CA (ACM PCA)

    Below are the recommended best practices that can help you use AWS ACM PCA more effectively.

    • AWS recommends documenting all your policies and practices for operating your CA, including CA hierarchy, architecture diagram, CA validation period policies, path length, etc.
      The CA structure and policies above can be captured in two documents known as Certificate Policy (CP) and Certificate Practice Statement (CPS). Refer to RFC 3647 for a framework for capturing important information about your CA operations.
    • Root CA should, in general, only be used to issue a certificate for intermediate CAs.
    • Creating the Root CA and Subordinate CA in two different AWS accounts is a recommended best practice.
    • The CA administrator role should be separate from users who need access only to issue end-entity certificates.
    • Turn on CloudTrail logging before you create and start operating a private CA. With CloudTrail, you can retrieve a history of AWS API calls for your account to monitor your AWS deployments.
    • It is a best practice to update the private key for your private CA periodically. You can update a key by importing a new CA certificate or replacing the private CA with a new CA.
    • Permanently delete unused private CAs.
    • ACM Private CA recommends using the Amazon S3 Block Public Access (BPA) feature on buckets that contain CRLs. This avoids unnecessarily exposing details of your private PKI to potential adversaries. BPA is an S3 best practice and is enabled by default on new buckets.

    Google Cloud Certificate Authority Services

    This topic outlines some of the best practices that can help you use Certificate Authority Service more effectively.

    • Role and access control: Individuals shouldn’t be assigned more than one role at any given time. Everyone holding an assigned role should be adequately briefed and trained on their responsibilities and security practices. If you want to assign a diverse set of permissions to an individual, it is recommended that you create a custom role using IAM.
    • In most cases, it is recommended to use the Enterprise tier to create a certificate authority (CA) pool that issues certificates to other CAs and end-entities.
    • While creating a CA pool, it is recommended to carefully consider the DevOps tier, as it does not support certificate revocation.
    • Secure CA signing keys by leveraging Cloud HSM.
    • Enable cloud audit logs to monitor access and use of Cloud HSM signing keys.
    • It is recommended not to import an existing external CA with issued certificates into CA service.
    • For a Root CA and Subordinate CA, it is recommended to use the largest key size available for that algorithm family.
      • For RSA, the largest supported key size is 4096 bits.
      • For ECDSA, the largest supported key size is 384 bits.

      (For subordinate CAs with a shorter lifetime, it is sufficient to use smaller key sizes, such as 2048 bits for RSA or 256 bits for ECDSA.)

    • It is recommended that the authors of the certificate template grant the CA service user role to the members in the organization who might use that certificate template.
    The following comparison summarizes how Microsoft PKI, AWS Certificate Manager (ACM), AWS ACM Private CA (ACM PCA), and Google Cloud Certificate Authority Service (CAS) handle key PKI categories.

    1. Root CA
       • Microsoft PKI: The Root CA is deployed on-premises and kept offline.
       • ACM: AWS Certificate Manager is a service with which you can easily provision, manage, and deploy public and private SSL/TLS certificates for use with AWS services and internal connected resources.
       • ACM PCA: The Root CA can be deployed in the AWS cloud, or the Issuing CA CSR can be signed by an external Root CA.
       • Google Cloud CAS: The Root CA can be deployed in Google Cloud CAS, or the Issuing CA CSR can be signed by an external Root CA.

    2. Certificate template
       • Microsoft PKI: Using the default certificate templates is not recommended; certificate templates can be configured.
       • ACM: Use AWS CloudFormation templates to issue private certificates through ACM.
       • ACM PCA: Supports four varieties of certificate template: Base templates (pre-defined, with no passthrough parameters allowed), CSRPassthrough templates (extend the corresponding base versions by allowing CSR passthrough), APIPassthrough templates (allow API passthrough), and APICSRPassthrough templates (allow both API and CSR passthrough).
       • Google Cloud CAS: A new certificate template can be created in each project and location.

    3. Key algorithm and key size
       • Microsoft PKI: Supports key sizes per NIST SP 800 standards; the minimum key size is 2048 bits. For any CA whose certificate expires more than 15 years in the future, the RSA key must be 4096 bits or greater or, if the CA key uses ECC, it must use the P-384 or P-521 curve.
       • ACM: Supports 2048-bit RSA (RSA_2048), 3072-bit RSA (RSA_3072), 4096-bit RSA (RSA_4096), Elliptic Prime Curve 256-bit (EC_prime256v1), and Elliptic Prime Curve 384-bit (EC_secp384r1).
       • ACM PCA: Supports RSA 2048, RSA 4096, ECDSA P256, and ECDSA P384 for private key generation. (This list applies only to certificates issued directly by ACM Private CA through its console, API, or command line.)
       • Google Cloud CAS: Supports 2048-bit RSA (RSA_2048), 3072-bit RSA (RSA_3072), 4096-bit RSA (RSA_4096), ECDSA P256, and ECDSA P384. For a new Root CA or a Subordinate CA expected to have a lifetime on the order of years, Google recommends the largest key size available for that algorithm family (4096 bits for RSA, 384 bits for ECDSA).

    4. Hashing algorithm
       • Microsoft PKI: It is recommended to use an advanced hashing algorithm (SHA-256 and above) for both new deployments and existing PKIs.
       • ACM: Certificates managed in ACM use RSA keys with a 2048-bit modulus and SHA-256. ACM does not currently manage other certificate types such as ECDSA certificates.
       • ACM PCA: Supports the signing algorithms SHA256WITHECDSA, SHA384WITHECDSA, SHA512WITHECDSA, SHA256WITHRSA, SHA384WITHRSA, and SHA512WITHRSA. (This list applies only to certificates issued directly by ACM Private CA through its console, API, or command line.)
       • Google Cloud CAS: Supports SHA256 and SHA384.

    5. RFC compliance
       • Microsoft PKI: CA certificates within the Microsoft IT PKI shall be X.509 Version 3 and shall conform to RFC 5280 (Internet X.509 Public Key Infrastructure Certificate and CRL Profile, May 2008). As applicable to the certificate type, certificates conform to the current version of the CA/Browser Forum Baseline Requirements for the Issuance and Management of Publicly Trusted Certificates.
       • ACM: AWS is responsible for protecting the infrastructure that runs AWS services in the AWS cloud. Third-party auditors regularly test and verify the effectiveness of this security as part of the AWS Compliance Programs (https://aws.amazon.com/compliance/programs/).
       • ACM PCA: Certain constraints appropriate to a private CA are enforced per RFC 5280; however, ACM Private CA does not enforce all constraints defined in RFC 5280.
       • Google Cloud CAS: Uses the ZLint tool to check that X.509 certificates are valid per RFC 5280. However, CA Service does not enforce all RFC 5280 requirements, and a CA created with CA Service can issue a non-compliant certificate.

    6. CRL (Certificate Revocation List) distribution point
       • Microsoft PKI: Deposits the CRL under LDAP (Lightweight Directory Access Protocol) and HTTP.
       • ACM: The AWS Support Center can help revoke a certificate in ACM; you need to raise a support ticket for this.
       • ACM PCA: Automatically deposits the CRL in the Amazon S3 bucket you designate.
       • Google Cloud CAS: CRL publication must be enabled on a CA pool for it to publish CRLs; it can be enabled when the CA pool is created.

    7. Storage of private keys
       • Microsoft PKI: It is recommended to store private keys in a FIPS 140-2 Level 3 compliant HSM.
       • ACM: Stores the certificate and its corresponding private key, using AWS Key Management Service (AWS KMS) to help protect the private key.
       • ACM PCA: By default, private CA keys are stored in AWS-managed hardware security modules (HSMs) that comply with FIPS PUB 140-2 Security Requirements for Cryptographic Modules.
       • Google Cloud CAS: CA keys are stored in Cloud HSM, which is FIPS 140-2 Level 3 validated and available in regions across the Americas, Europe, and Asia Pacific.

    8. Audit reports
       • Microsoft PKI: Auditing can be enabled on a CA in Windows Server to provide an audit log for all certificate services management tasks.
       • ACM: Integrated with AWS CloudTrail, a service that records actions taken by a user, role, or AWS service in ACM. CloudTrail is enabled by default on your AWS account.
       • ACM PCA: Audit reports list all the certificates that ACM Private CA has issued or revoked; the report is saved in a new or existing S3 bucket that you specify.
       • Google Cloud CAS: Cloud Audit Logs can be used, providing Admin Activity, Data Access, System Event, and Policy Denied audit logs for each Cloud project, folder, and organization.

    9. Best practices (high-level)
       • Microsoft PKI: Plan the PKI in detail before deployment; avoid installing AD CS on a domain controller; keep the Root CA standalone and offline; do not issue end-entity certificates from the Root CA; enable auditing on both the Root and Issuing CAs; secure private keys with a FIPS 140-2 Level 3 HSM.
       • ACM: Check certificate expiry and validity regularly; minimize use of the Root CA (Amazon recommends a separate account for it); use only TLS 1.2 or above, never SSL.
       • ACM PCA: Document the CA structure and policies; minimize use of the Root CA; give the Root CA its own AWS account; separate administrator and issuer roles; turn on CloudTrail logging; rotate the CA private key; delete unused CAs (AWS bills you for a CA until it is deleted).
       • Google Cloud CAS: Assign each individual a single role at a time and brief all role holders on their responsibilities and security practices; use custom IAM roles for diverse permission sets; in most cases, use the Enterprise tier for CA pools that issue certificates to other CAs and end-entities; consider the DevOps tier carefully, as it does not support certificate revocation.

    10. CA hierarchy
       • Microsoft PKI: In a hierarchical PKI (a typical deployment), there are generally three types of hierarchies: one-tier, two-tier, and three-tier.
       • ACM: A service with which you can easily provision, manage, and deploy public and private SSL/TLS certificates with AWS services and internal connected resources.
       • ACM PCA: You can design and create a hierarchy of certificate authorities with up to five levels.
       • Google Cloud CAS: When creating a Subordinate CA that chains up to an external Root CA, the properties in the CSR generated by CA Service must be preserved in the CA certificate signed by the external Root CA. The external CA can add extensions as long as it preserves the CSR's properties; for example, if the CSR contains a path length restriction, the signed Subordinate CA certificate must include the same restriction.

    11. Redundancy and disaster recovery
       • Microsoft PKI: Redundancy and disaster recovery plans should be completed during the design and implementation planning phase of a PKI deployment.
       • ACM: ACM itself does not have an SLA; the managed ACM Private CA service does.
       • ACM PCA: Available in multiple Regions, allowing you to create redundant CAs, and operates with a service level agreement (SLA) of 99.9% availability.
       • Google Cloud CAS: Available in multiple regions, which allows redundancy, and operates with an SLA of 99.9% availability.


