Reading Time : 10 minutes

In the world of information technology, change is constant. Technology evolves rapidly, so businesses must adapt to keep up with the latest advancements. One area that often requires attention is the management of digital certificates, which play a vital role in ensuring secure communications and data integrity. If your organization’s Microsoft Certificate Authority (CA) runs on Windows Server 2012 and 2012 R2, it’s time to start thinking about a migration strategy as soon as possible.

Microsoft will end support for Windows Server 2012 and 2012 R2 on October 10, 2023. After that date, Microsoft will no longer provide bug fixes, technical support, or remediation for any new problems that affect the reliability or usability of these servers.

How Serious Is the Risk?

Using Windows Server 2012 and 2012 R2 comes with several risks due to the operating systems reaching the end of their supported lifecycle. These risks can have serious implications for organizations, exposing them to compliance issues and cyber attacks. Environments running unsupported servers become prime targets for attackers. Without an upgrade strategy, the organization’s IT teams and management will bear responsibility for the jeopardy posed by unsupported servers.

Here are some of the key risks associated with using Windows 2012 and 2012 R2:

  • Security vulnerabilities

    As operating systems age, security vulnerabilities are discovered, and cybercriminals actively exploit them. Over time, the risk of security breaches and unauthorized access to your systems increases significantly. Without regular security updates and patches from Microsoft, any newly discovered vulnerabilities will remain unaddressed, exposing your system to potential attacks.

  • Lack of support

    When an operating system reaches its end of support, Microsoft no longer provides technical support or assistance for issues related to that version. This lack of support means that if you encounter any problems or face challenges with your Windows 2012 or 2012 R2 servers, you won’t be able to rely on Microsoft for help. This can lead to prolonged downtime, increased costs, and difficulty resolving critical issues.

  • Compliance violations

    Many industry regulations and standards, such as PCI DSS, HIPAA, and GDPR, require organizations to use supported software and regularly apply security updates. By running unsupported operating systems, you risk non-compliance with these regulations, which can result in penalties, legal liabilities, and damage to your organization’s reputation.

  • Limited feature enhancements

    With the end of support, there will be no new feature updates or enhancements for Windows 2012 and 2012 R2. This means you won’t benefit from the latest functionalities, improvements, or performance optimizations available in newer operating systems. Staying on outdated platforms can hinder your ability to leverage modern technologies and advancements that can drive efficiency and productivity.

  • Incompatibility with new software and hardware

    As software and hardware vendors release updates and new products, they increasingly focus on compatibility with the latest operating systems. Over time, you may encounter compatibility issues when installing or running newer software or hardware on Windows 2012 or 2012 R2. This can limit your ability to adopt new technologies and take advantage of the latest features and capabilities. For example, integration with Windows Hello.

The seriousness of these risks should not be underestimated. Running an unsupported operating system puts your organization’s security, stability, and compliance at significant risk. As time progresses, the risks associated with using Windows 2012 and 2012 R2 will only increase as cyber threats evolve and unsupported systems become more susceptible to attacks.

Planning and executing a migration to a supported operating system version on time is essential to mitigate these risks. This ensures that you can continue to receive security updates, access technical support, and maintain compliance with industry standards, while also taking advantage of the latest features and improvements offered by modern operating systems.

Is this the right time to assess your PKI needs?

Since we are at a forced decision point, it may be the right time to assess your current PKI environment and strategies to enhance your overall PKI infrastructure. Asking the right questions will help you understand your PKI environment’s as-is and to-be states.

To understand your organization’s approach to PKI (Public Key Infrastructure), it’s essential to ask the right questions. These questions will help you assess your current PKI architecture and make informed decisions about its future. Consider the following inquiries:

  1. Is it advisable to retain your existing PKI architecture, or would it be more beneficial to migrate?
  2. Have there been any changes in business use cases since the initial deployment of your PKI?
  3. Does your Microsoft PKI adequately support the evolving demands of PKI use cases?
  4. Are you familiar with your PKI architecture’s specific components and state, including its infrastructure, certificates, and dependencies?
  5. Should you explore the adoption of cloud-based PKI or PKI-as-a-Service solutions?
  6. Have you conducted thorough testing of your migration plan?
  7. Have you developed a contingency plan in the event of issues during the migration process?

By asking these pertinent questions, you’ll gain insights into your organization’s PKI landscape, identify areas for improvement, and make informed decisions regarding the future of your PKI architecture.


Steps to prepare for the migration:

To ensure your organization’s continued security and compliance, it’s crucial to migrate your Microsoft CA from Windows Server 2012 or 2012 R2 to a supported platform. Here are some steps you can take to prepare for the migration:

  1. Assess your current environment

    Begin by evaluating your existing CA infrastructure. Understand the scale of your certificate operations, including the number of certificates issued and the dependencies on the current CA. Identify any custom configurations or integrations that may need to be considered during the migration.

  2. Select a target platform

    Determine which version of Windows Server you plan to migrate to; Windows Server 2019 or the latest version available at the time of migration is recommended. Evaluate each version’s features, compatibility, and support lifecycle to make an informed decision.

  3. Plan the migration process

    Develop a detailed migration plan outlining the steps, potential risks, and timelines. Consider factors such as downtime requirements, certificate validity periods, and communication with stakeholders. To ensure a smooth transition, engage key stakeholders, including IT personnel, security teams, and application owners.

  4. Test the migration in a lab environment

    Before performing the actual migration, set up a test environment to simulate the migration process. This allows you to identify and address any potential issues or conflicts before migrating your production CA.

  5. Perform the migration

    Once you’ve completed thorough testing, execute the migration plan in your production environment. Follow best practices provided by Microsoft or seek assistance from qualified professionals to ensure a successful migration.

  6. Validate and monitor

    After the migration, validate the functionality of your new CA infrastructure. Test certificate issuance, revocation, and renewal processes to ensure everything is functioning as expected.
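As part of this validation, a simple expiry check across issued certificates can catch migration problems early. The sketch below is a minimal illustration (the certificate names and dates are hypothetical, not from any specific CA export); it flags certificates whose notAfter date falls within a chosen window:

```python
from datetime import datetime, timedelta

def expiring_soon(certs, now, days=30):
    """Return names of certificates whose notAfter date falls within `days` of `now`."""
    cutoff = now + timedelta(days=days)
    return [name for name, not_after in certs if not_after <= cutoff]

# Hypothetical inventory exported from the migrated CA
inventory = [
    ("web-server-tls", datetime(2023, 11, 1)),
    ("vpn-gateway", datetime(2025, 6, 15)),
]
flagged = expiring_soon(inventory, now=datetime(2023, 10, 15), days=30)
print(flagged)  # → ['web-server-tls']
```

A check like this can run on a schedule after cutover so any certificate the migration failed to renew surfaces before it expires.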

How Encryption Consulting Can Help Your PKI Migration Journey

Migrating your Public Key Infrastructure (PKI) can be complex, requiring careful planning and execution to ensure a smooth transition. In such situations, seeking assistance from a reputable Encryption Consulting firm can greatly benefit your PKI migration journey. Let’s explore how Encryption Consulting can help facilitate a successful migration process:

  1. Expertise and Experience

    Encryption Consulting specializes in cryptographic solutions, PKI, and encryption technologies. We have extensive experience in designing, implementing, and managing PKI infrastructures. Our expertise allows us to assess your organization’s specific requirements, identify potential challenges, and provide tailored solutions that align with industry best practices.

  2. Comprehensive Assessment

    Encryption Consulting can comprehensively assess your existing PKI architecture. We analyze the current state, evaluate its effectiveness, identify any vulnerabilities or inefficiencies, and provide recommendations for improvement. This assessment ensures that your migration plan is based on a thorough understanding of your PKI’s strengths and weaknesses.

  3. Migration Strategy and Planning

    Encryption Consulting can assist in formulating a migration strategy and creating a detailed plan tailored to your organization’s unique needs. We consider factors such as infrastructure dependencies, certificate lifecycles, compatibility issues, and downtime requirements. By leveraging our expertise, you can develop a well-structured migration roadmap that minimizes disruptions and ensures a seamless transition.

  4. Vendor Evaluation and Selection

    Choosing the right vendors and technologies is critical during PKI migration. Our team can help you evaluate different vendors, assess their solutions, and select the most suitable options for your organization. We have insights into the latest industry trends and can guide you in making informed decisions regarding hardware, software, or cloud-based PKI solutions.

  5. Implementation and Configuration

    Encryption Consulting plays a vital role in implementing your PKI migration plan. We have the technical expertise to set up and configure the new infrastructure, ensuring compatibility with existing systems and applications. You can avoid common pitfalls and ensure a successful implementation by leveraging our knowledge.

  6. Testing and Validation

    Encryption Consulting conducts rigorous testing and validation processes to ensure the migrated PKI infrastructure operates as intended. We verify certificate issuance, revocation, and renewal processes and validate interoperability with various systems and applications. This meticulous testing minimizes the risk of potential issues and ensures the stability and functionality of the new PKI environment.

  7. Training and Support

    Encryption Consulting provides training and support services to enable your organization’s IT staff to effectively manage the newly migrated PKI environment. We offer guidance on operational procedures, best practices, and ongoing maintenance tasks. This empowers your internal team to handle day-to-day PKI operations confidently.

  8. Continuous Monitoring and Maintenance

    PKI requires ongoing monitoring and maintenance to ensure its optimal performance and security. Encryption Consulting can provide continuous monitoring services to proactively identify and resolve any issues, monitor certificate validity, and implement necessary updates and patches. This helps to maintain the integrity and reliability of your PKI infrastructure.


Encryption Consulting brings invaluable expertise, experience, and specialized knowledge to your PKI migration journey. Our comprehensive assessment, strategic planning, implementation support, and ongoing maintenance services can significantly streamline the migration process and mitigate risks. By partnering with Encryption Consulting LLC, you can confidently navigate the complexities of PKI migration and achieve a secure and efficient PKI infrastructure for your organization.


About the Author

Parnashree Saha is a data protection senior consultant at Encryption Consulting LLC working with PKI, AWS cryptographic services, GCP cryptographic services, and other data protection solutions such as Vormetric, Voltage etc.


Code signing is a technique used to ensure the authenticity and integrity of software code. This technique involves digitally signing the code with a digital signature, which can be verified by the user to ensure that the code has not been modified or tampered with. Code signing is an essential security measure for any organization that develops or distributes software, as it helps to prevent unauthorized code from being installed on users’ systems. In this blog, we will explore the importance of code signing and its impact on your organization.

What is Code Signing?

Code signing is the process of digitally signing software code to ensure its authenticity and integrity. A digital signature is created by computing a cryptographic hash of the code and signing that hash with the publisher’s private key; anyone holding the corresponding public key can then verify the signature. This signature ensures that the code has not been modified or tampered with since it was signed.

Code signing is done using a digital certificate issued by a trusted certificate authority (CA). The digital certificate contains information about the signer and the code being signed, and it is used to verify the signature’s authenticity. When the code is installed on a user’s system, the system checks the digital signature against the certificate to ensure the code has not been tampered with.
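The hash-and-sign flow described above can be sketched in a few lines of Python using the widely used `cryptography` library (an assumption for illustration; production code signing uses platform tooling and a CA-issued certificate rather than a freshly generated key):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

# Throwaway key pair for the sketch; a real publisher's key is bound to a CA-issued certificate.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

code = b"print('hello from signed code')"

# Sign: the library hashes the code with SHA-256 and signs the digest with the private key.
signature = private_key.sign(code, padding.PKCS1v15(), hashes.SHA256())

# Verify: succeeds silently for untouched code ...
public_key.verify(signature, code, padding.PKCS1v15(), hashes.SHA256())

# ... and raises InvalidSignature if the code was modified after signing.
try:
    public_key.verify(signature, code + b" # tampered", padding.PKCS1v15(), hashes.SHA256())
    tamper_detected = False
except InvalidSignature:
    tamper_detected = True
```

The tamper check at the end is exactly what an operating system's loader does, at larger scale, before trusting signed code.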

Why is Code Signing Important?

Code signing is essential for several reasons. Firstly, it ensures the authenticity and integrity of software code. This means that users can trust that the code they are installing is from a trusted source and has not been tampered with. This is particularly important for software that is used to handle sensitive data, such as financial or healthcare data.

Secondly, code signing helps prevent malware and other malicious code from being installed on users’ systems. Malware often tries to disguise itself as legitimate software, and code signing helps to prevent this by providing users with a way to verify the authenticity of the code they are installing.

Finally, code signing is often a requirement for software vendors that want to distribute their software through trusted channels, such as app stores or enterprise software distribution platforms. These platforms often require that software be signed with a trusted digital certificate to ensure its authenticity and security.

How does Code Signing impact your organization?

Code signing can have a significant impact on your organization. Here are some of the key ways that code signing can impact your organization:

  • Enhanced Security

    Code signing can help to enhance the security of your organization’s software. Signing your code with a trusted digital certificate ensures that only legitimate code is installed on users’ systems. This can help prevent malware and other malicious code from infecting your organization’s network.

  • Compliance

    Many regulatory bodies require that software be signed with a trusted digital certificate to ensure its authenticity and security. Signing your software with a trusted digital certificate ensures that your organization complies with these regulations. This can help to avoid costly fines and legal issues.

  • Brand Protection

    Code signing can help to protect your organization’s brand. Signing your software with a trusted digital certificate ensures that users trust the software they are installing. This can help to protect your organization’s reputation and prevent damage to your brand.

  • Increased User Confidence

    Code signing can increase user confidence in your organization’s software. By signing your software with a trusted digital certificate, you can provide users with a way to verify the authenticity and integrity of the software they are installing. This can help to increase user confidence in your organization’s software and lead to increased adoption.

  • Reduced Support Costs

    Code signing can help to reduce support costs for your organization. By ensuring that only legitimate software is installed on users’ systems, you can reduce the likelihood of software-related issues. This can help to reduce the number of support requests your organization receives, leading to lower support costs.

To ensure that your organization maximizes the benefits of code signing, there are several best practices that can be followed:

  • Use a Trusted Certificate Authority

    To ensure that your digital certificate is trusted by users, it is important to use a trusted certificate authority (CA) to issue your certificate. Trusted CAs are widely recognized and trusted by most operating systems and web browsers, ensuring that users can verify the authenticity of your code.

  • Keep Your Private Key Secure

    Your private key is used to sign your software code, and it must always remain secure. If your private key is compromised, an attacker could use it to sign and distribute malicious code that appears to be from your organization. To prevent this, ensure that your private key is stored securely, and that only authorized personnel have access to it.

  • Sign All Code

    To ensure maximum security, it is important to sign all of your organization’s software code, including drivers, DLLs, and other executables. This ensures that users can verify the authenticity and integrity of all your code, not just the main executable.

  • Timestamp Your Signature

    When you sign your code, it is important to include a timestamp indicating when the code was signed. A trusted timestamp proves the signature was created while the certificate was still valid, so the signature can continue to be trusted even after the certificate expires, and it helps prevent attacks that rely on the compromise of older signed code.

  • Use Strong Passwords

    When generating your private key, it is important to use a strong password to prevent unauthorized access. A strong password should be long, complex, and unique and should be changed regularly to ensure maximum security.

  • Verify Your Signature

    Before distributing your code, it is important to verify your signature to ensure that it is valid. This ensures that users will be able to verify your signature when they install your software.

  • Revoke Certificates as Necessary

    If your private key is compromised or if you need to revoke a certificate for any reason, it is important to do so immediately. This ensures that users will not trust any code signed with the compromised or revoked certificate.

How can Encryption Consulting (EC) CodeSign Secure help you?

EC has developed the most efficient and user-friendly code-signing solution, CodeSign Secure. What makes us stand out in the market?

  • CodeSign Secure starts with virus scanning before commencing any type of signing process. It searches for any viruses or malware that may have been injected into the file before sending it away for the signing process.
  • CodeSign Secure uses client-side hashing, providing customers with an extra layer of security. Hashing a file at its origin helps maintain its integrity and gives the customer a clear view of the file before and after signing.
  • The key is never exposed while signing the file/code. The file is signed inside the HSM, and the keys are never exposed to the outside world.
  • Our organization provides role-based access control for code/file signing, granting users the correct access and privileges and ensuring that only those with the proper roles can access certificates and keys within the tool.
  • Timestamping your signed code avoids the risk of software expiring unexpectedly when the code signing certificate expires. When a code signing certificate expires, the validity of the software it signed also expires unless the software was timestamped when it was signed.
  • CodeSign Secure follows the latest NIST (National Institute of Standards and Technology) guidelines for code signing.
  • Key signing workflows are monitored and audited: certificates and keys are associated with specific applications, and every signing attempt is recorded in the tool’s logs, including the IP address and username. Users without valid credentials, or attempts with an expired key or certificate, are blocked.
  • Automated code signing can be enabled in SDLC processes: signing is available from a client tool, via APIs, or via the command line, so it is straightforward to integrate our tool into SDLC pipelines, including tools like Jenkins and Bamboo.
  • Signing from different build servers is compared: our tool checks that the code being signed is the most up-to-date version on the client’s build servers, ensuring an older or potentially malicious version of the code is not signed.
  • Compromised certificates can be revoked: when a certificate expires, it is automatically renewed, but if a certificate or key is found to be compromised, the key can be revoked so that signing can no longer occur with that key and certificate.
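Client-side hashing, as mentioned above, means the file’s digest is computed where the file lives and only that digest travels to the signing service, so the artifact itself never leaves the build machine. A minimal sketch of the hashing step (the temporary file stands in for a real build artifact):

```python
import hashlib
import os
import tempfile

def digest_file(path, chunk_size=65536):
    """Compute the SHA-256 digest of a file without loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo with a temporary file standing in for the build artifact
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"abc")
    artifact = tmp.name
digest = digest_file(artifact)
os.unlink(artifact)
print(digest)  # SHA-256("abc") → ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```

Only `digest` would be sent for signing; the signing service signs the digest and returns the signature, which is then attached to the local file.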


Code signing is an essential security measure for any organization that develops or distributes software. It helps to ensure the authenticity and integrity of software code, preventing unauthorized code from being installed on users’ systems. Code signing can significantly impact your organization, enhancing security, ensuring compliance, protecting your brand, increasing user confidence, and reducing support costs.

To ensure that your organization maximizes the benefits of code signing, it is important to follow best practices such as using a trusted certificate authority, keeping your private key secure, signing all code, timestamping your signature, using strong passwords, verifying your signature, and revoking certificates as necessary. Following these best practices ensures that your organization’s software is secure, trusted, and compliant with industry regulations.

EC’s CodeSign Secure offers a simple and efficient way to sign your code and protect your software, ensuring it complies with today’s security-conscious digital environment standards.



FIPS 140-3 is a U.S. government standard specifying security requirements for cryptographic modules, including hardware and software components. These security requirements are designed to ensure that cryptographic modules provide a minimum level of security for protecting sensitive information and transactions.

Some of the key security requirements specified in FIPS 140-3 include the following:

  1. Cryptographic module specification

    A detailed specification of the cryptographic module, including its physical and logical boundaries, interfaces, and security functions.

  2. Roles and services

    A definition of the roles and services provided by the cryptographic module, including key management, encryption, and authentication.

  3. Cryptographic algorithms

    A list of approved cryptographic algorithms that can be used by the cryptographic module, along with specific requirements for key length, block size, and other parameters.

  4. Key management

    Requirements for generating, storing, and protecting cryptographic keys, including key lifecycle management and destruction.

  5. Physical security

    Requirements for physical security measures to protect the cryptographic module from unauthorized access, tampering, and theft.

  6. Logical security

    Requirements for logical security measures to protect the cryptographic module from attacks such as malware, side-channel attacks, and unauthorized access.

  7. Security testing and validation

    Requirements for testing and validating the security of the cryptographic module, including vulnerability testing, penetration testing, and compliance testing.
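As an illustration of the “approved algorithms” requirement (item 3), the snippet below uses AES-256-GCM, a FIPS-approved algorithm at an approved key length, via the `cryptography` library (the library choice is an assumption for illustration; a FIPS deployment must also run a validated module build):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # 256-bit key satisfies approved key-length guidance
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # 96-bit nonce; must never be reused with the same key

plaintext = b"sensitive record"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)  # None = no additional authenticated data
recovered = aesgcm.decrypt(nonce, ciphertext, None)  # raises InvalidTag if ciphertext was altered
```

AES-GCM also provides the integrity protection the standard expects: any modification of the ciphertext causes decryption to fail rather than return corrupted plaintext.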

General requirements for each level of FIPS 140-3

The table below explains the general requirements for each level of FIPS 140-3.

General requirements

Level 1
  • The cryptographic module must use an approved algorithm and implement the algorithm correctly.
  • The module must have physical security mechanisms to prevent unauthorized access.
  • The module must have a power-on self-test to verify that the module is functioning correctly.

Level 2 (all the requirements for Level 1, plus)
  • The module must have additional physical security mechanisms to detect and respond to unauthorized access.
  • The module must have a role-based authentication system to restrict access to authorized users only.
  • The module must have a tamper-evident mechanism to detect if it has been tampered with.

Level 3 (all the requirements for Level 2, plus)
  • The module must have physical security mechanisms to prevent unauthorized modification of the module’s firmware or software.
  • The module must have a trusted path for sensitive data to ensure that only authorized components process the data.
  • The module must have a mechanism for detecting and responding to environmental attacks, such as temperature or voltage variations.

Level 4 (all the requirements for Level 3, plus)
  • The module must have physical security mechanisms to protect against the most sophisticated attacks, including invasive and non-invasive attacks.
  • The module must have a highly secure design and implementation with no known vulnerabilities.
  • The module must be resistant to side-channel attacks, which exploit information leaked by the module during its normal operation.

Table 1: General requirements of FIPS 140-3 for each level

These are the general requirements for each level of FIPS 140-3. However, there are many specific requirements for each level, and the requirements for each level are quite detailed. The purpose of the standard is to provide a framework for evaluating and certifying the security of cryptographic modules. The specific requirements for each level are designed to ensure that the module provides a certain level of protection against various attacks.

FIPS 140-3 Security requirements for each level

FIPS 140-3 is a security standard defining the requirements for cryptographic modules that protect sensitive data.

FIPS 140-3 Level 1

Level 1 of FIPS 140-3 provides the lowest level of security and is intended for use in low-risk applications. The security requirements for Level 1 are as follows:

  1. Cryptographic module specification

    The module must have a concise specification describing its cryptographic functions, interfaces, and protocols.

  2. Roles, services, and authentication

    The module must define the roles, services, and authentication mechanisms required for secure operation.

  3. Physical security

    The module must have physical safeguards to protect against unauthorized access, theft, or tampering.

  4. Design assurance

    The module must have undergone a comprehensive design process, including security analysis and testing.

  5. Mitigation of attacks

    The module must have measures to prevent or mitigate attacks, such as software or hardware countermeasures.

  6. Self-tests

    The module must perform self-tests to ensure it is functioning correctly and detect tampering attempts.

  7. Environmental design

    The module must be designed to operate in various environmental conditions, including temperature, humidity, and electromagnetic interference.

  8. Cryptographic key management

    The module must have secure key management processes to ensure that cryptographic keys are generated, stored, and used securely.
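The power-on self-test requirement (item 6 above) is commonly implemented as a known-answer test: at startup, run an algorithm on a fixed input and compare the result against a precomputed answer. A minimal sketch, using SHA-256 with the well-known NIST test input `"abc"`:

```python
import hashlib

# Known-answer test vector: SHA-256 of the standard NIST test input "abc"
KAT_INPUT = b"abc"
KAT_EXPECTED = "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"

def power_on_self_test():
    """Return True if the hash implementation produces the known answer."""
    return hashlib.sha256(KAT_INPUT).hexdigest() == KAT_EXPECTED

# A module refuses to enter its operational state if the self-test fails
if not power_on_self_test():
    raise RuntimeError("Cryptographic self-test failed; refusing to start")
```

Real modules run known-answer tests for every approved algorithm they implement, not just one hash, but the pass/fail gating shown here is the core idea.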

FIPS 140-3 Level 2

Level 2 of the FIPS 140-3 standard outlines the security requirements for a cryptographic module that provides moderate security assurance. The following are the specific security requirements for a cryptographic module to achieve FIPS 140-3 Level 2:

  1. Physical security

    The module must be physically protected against unauthorized access, tampering, and theft. It should be designed to withstand environmental hazards like fire, water, and electromagnetic interference.

  2. Cryptographic key management

    The module must have strong key management mechanisms that ensure cryptographic keys’ confidentiality, integrity, and availability. The module should protect keys against unauthorized access, modification, and destruction.

  3. Authentication and access control

    The module must authenticate users and restrict access to authorized users only. It should have strong password policies and mechanisms to prevent brute-force attacks.

  4. Audit logging and reporting

    The module must log all security-related events and provide reports to authorized users. The module should also have mechanisms to protect audit logs against tampering or destruction.

  5. Software security

    The module must be designed with secure software practices that minimize the risk of security vulnerabilities. The software should be tested for security flaws and vulnerabilities and undergo regular security updates and patches.

  6. Communication security

    The module must use secure communication protocols and encryption to protect data in transit. The module should also have mechanisms to prevent unauthorized access to data transmitted over networks.

  7. Cryptographic algorithms

    The module must use approved cryptographic algorithms and standards that NIST has validated. The module should also have mechanisms to prevent using weak or outdated cryptographic algorithms.
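In practice, the communication-security requirement (item 6 above) translates to enforcing modern TLS settings. With Python’s standard `ssl` module, a context can be pinned to TLS 1.2 or later with certificate verification enabled (a configuration sketch only, not a substitute for a validated FIPS module):

```python
import ssl

# The default context already enables certificate verification and hostname checking
context = ssl.create_default_context()

# Refuse legacy SSL/TLS protocol versions
context.minimum_version = ssl.TLSVersion.TLSv1_2

# These properties would then apply to any socket wrapped with this context,
# e.g. context.wrap_socket(sock, server_hostname="example.com")
print(context.verify_mode == ssl.CERT_REQUIRED)   # certificate chain must validate
print(context.check_hostname)                     # hostname must match the certificate
```

Pinning the minimum protocol version this way rejects connections that would negotiate SSLv3, TLS 1.0, or TLS 1.1, which fall outside approved configurations.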

FIPS 140-3 Level 3

Level 3 of the FIPS 140-3 standard protects against unauthorized access to the cryptographic module and the sensitive information it handles. It is the second-highest level of FIPS 140-3. The security requirements for Level 3 are as follows:

  1. Physical Security

    The cryptographic module must be physically protected against unauthorized access, tampering, theft, and damage. The module must also be designed to resist physical attacks, such as drilling, cutting, and probing. The module must be in a secure facility with access controls, video surveillance, and intrusion detection systems.

  2. Cryptographic Key Management

    The cryptographic module must have a strong key management system that ensures the secure generation, storage, distribution, and destruction of cryptographic keys. The key management system must use strong cryptographic algorithms, such as Advanced Encryption Standard (AES), and have key backup, recovery, and destruction mechanisms.

  3. Cryptographic Operations

    The cryptographic module must perform cryptographic operations securely and reliably. The module must use approved cryptographic algorithms and protocols, such as Transport Layer Security (TLS), Secure Sockets Layer (SSL), and IPsec. The module must also have error detection and correction mechanisms and handle cryptographic exceptions and failures.

  4. Self-Tests and Tamper Evidence

    The cryptographic module must have self-tests and tamper-evidence mechanisms that detect and prevent unauthorized modifications, tampering, or substitution of the module’s hardware or software. The self-tests must be run periodically and must include checks for the integrity and authenticity of the module’s firmware, hardware, and software.

  5. Design Assurance

    The cryptographic module must have a strong design assurance that ensures the module’s security requirements are met throughout the module’s lifecycle. An independent third-party evaluator must review and verify the module’s design. The module must be tested against a set of security requirements defined in the FIPS 140-3 standard. The design assurance also requires using secure coding practices, security testing, and security documentation.

  6. Security Management

    The cryptographic module must have a strong security management system that includes policies, procedures, and controls for managing the module’s security risks. The security management system must include mechanisms for auditing, monitoring, reporting security events, and responding to security incidents and vulnerabilities.

    FIPS 140-3 level 3 provides strong protection against physical and logical attacks and requires a high level of key management, cryptographic operations, self-tests, tamper evidence, design assurance, and security management. The security requirements for level 3 are designed to protect sensitive information and maintain the integrity and availability of the cryptographic module.

FIPS 140-3 Level 4

Level 4 is the highest level of security defined in the standard and is intended for applications where the consequences of a security failure are severe. The following are the key security requirements for FIPS 140-3 Level 4:

  1. Physical Security

    The cryptographic module must be housed in a tamper-evident, ruggedized container designed to resist physical attacks, such as drilling, cutting, or punching. The container must also have sensors to detect unauthorized access and trigger alarms.

  2. Cryptographic Key Management

    The module must have a strong, verifiable, and auditable key management system that ensures cryptographic keys’ secure generation, storage, distribution, and destruction. The module must also support key revocation and recovery.

  3. User Authentication

    The module must have a robust and secure user authentication mechanism, such as biometric or smart card-based authentication, to ensure only authorized personnel can access the module.

  4. Logical Security

    The module must have robust logical security mechanisms to prevent unauthorized access, tampering, or manipulation of the cryptographic module or its data. This includes secure boot, secure firmware updates, and secure communications protocols.

  5. Environmental Controls

    The module must operate in various environmental conditions, such as extreme temperatures, humidity, and electromagnetic interference. The module must also be able to withstand power surges and disruptions.

  6. Life Cycle Support

    The module must have a comprehensive life cycle support mechanism that includes regular security updates, vulnerability assessments, and secure disposal procedures.

    Overall, FIPS 140-3 Level 4 defines the highest level of security for cryptographic modules, and it is intended for applications where the consequences of a security failure are severe. The standard defines strict security requirements in physical security, cryptographic key management, user authentication, logical security, environmental controls, and life cycle support to ensure the highest level of protection for sensitive data and systems.

Requirements of Cryptographic algorithms for each level of FIPS 140-3

The FIPS 140-3 standard outlines four security levels (Levels 1-4), each specifying a different set of requirements for cryptographic algorithms.

Here are the cryptographic algorithms required for each level:

  • Level 1

    This is the lowest level of security, requiring only basic encryption and key management functions. The cryptographic algorithms required for this level include AES (128-bit), Triple-DES (112-bit), and SHA-1.

  • Level 2

    This level requires additional physical security features to protect against tampering or unauthorized access to the cryptographic module. The cryptographic algorithms required for this level include AES (128-bit and 192-bit), Triple-DES (168-bit), SHA-2 (256-bit), and HMAC.

  • Level 3

    This level requires the highest level of physical security to prevent unauthorized access and protect against attacks. The cryptographic algorithms required for this level include AES (256-bit), RSA (2048-bit), ECDSA (224-bit), SHA-2 (384-bit), and HMAC (with keys of at least 128 bits).

  • Level 4

    This is the highest level of security, requiring the most stringent physical and logical security features to protect against sophisticated attacks. The cryptographic algorithms required for this level include AES (256-bit), RSA (3072-bit), ECDSA (384-bit), SHA-3 (512-bit), and HMAC (with keys of at least 256 bits).
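As a quick reference, the per-level algorithm lists above can be captured in a small lookup table. This is an illustrative sketch of this article's summary only (the dictionary layout and algorithm labels are assumptions for the example, not text from the standard):

```python
# Minimal lookup of the cryptographic algorithms summarized above,
# keyed by FIPS 140-3 security level (1 = lowest, 4 = highest).
FIPS_140_3_ALGORITHMS = {
    1: {"AES-128", "Triple-DES-112", "SHA-1"},
    2: {"AES-128", "AES-192", "Triple-DES-168", "SHA-2-256", "HMAC"},
    3: {"AES-256", "RSA-2048", "ECDSA-224", "SHA-2-384", "HMAC-128"},
    4: {"AES-256", "RSA-3072", "ECDSA-384", "SHA-3-512", "HMAC-256"},
}

def algorithms_for_level(level: int) -> set:
    """Return the algorithm set required at a given security level."""
    if level not in FIPS_140_3_ALGORITHMS:
        raise ValueError("FIPS 140-3 defines levels 1 through 4 only")
    return FIPS_140_3_ALGORITHMS[level]
```

A lookup such as `algorithms_for_level(4)` then returns the Level 4 set, which can be handy when checking a module's claimed algorithms against its target level.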


In summary, FIPS 140-3 has been approved and launched as the latest standard for the security evaluation of cryptographic modules. It covers a large spectrum of threats and vulnerabilities, as it defines security requirements from the initial design phase through the final operational deployment of a cryptographic module. FIPS 140-3 requirements are primarily based on two previously existing international standards: ISO/IEC 19790:2012 “Security Requirements for Cryptographic Modules” and ISO/IEC 24759:2017 “Test Requirements for Cryptographic Modules”.

The FIPS 140-3 standard provides a framework for ensuring the security of cryptographic modules used in sensitive applications such as banking, healthcare, and government.

To know more about FIPS 140-3, read: Knowing the new FIPS 140-3

Free Downloads

Datasheet of Public Key Infrastructure

We have years of experience in consulting, designing, implementing & migrating PKI solutions for enterprises across the country.

Implementing & migrating PKI solutions for enterprises

About the Author

Parnashree Saha is a data protection senior consultant at Encryption Consulting LLC working with PKI, AWS cryptographic services, GCP cryptographic services, and other data protection solutions such as Vormetric, Voltage etc.

Read Time: 5 minutes

Why does strong encryption key management matter? Ask anyone in the security world and you will hear different responses; however, one thing is common to all of them: “Keep the key from getting compromised.”

As you know, encryption involves scrambling data so only the intended party or organization can access it. This process is accomplished by using encryption keys. Each key is a randomly generated string of bits. You can think of an encryption key as a password: you can access your bank account or any other account only if you have the password, and similarly, you can decrypt your data only when you have the associated encryption key. As you encrypt more and more data, you accumulate more of these keys, and managing them properly is very important.

Compromise of your encryption keys could lead to serious consequences since they could be used to:

  • Extract or tamper with the data stored on the server and read encrypted documents or emails
  • Sign applications or documents in your name
  • Create phishing websites impersonating your legitimate websites
  • Move through your corporate network while impersonating you

Do you need to manage your encryption key?

In a word, yes. As stated in NIST SP 800-57 Part 1, Rev. 5:

Ultimately, the security of information protected by cryptography directly depends on the strength of the keys, the effectiveness of cryptographic mechanisms and protocols associated with the keys, and the protection provided to the keys. Secret and private keys need to be protected against unauthorized disclosure, and all keys need to be protected against modification.

Encryption key management is an essential part of any enterprise security strategy. Proper key management ensures that sensitive data is protected from unauthorized access and that access to encrypted data is granted only to authorized individuals. Here are 10 best practices for effective enterprise encryption key management:

Key Management Best Practices:

  1. Follow key generation best practices

    There are specific best practices that should be followed when generating encryption keys. In selecting cryptographic and key management algorithms for a given application, it is important to understand its objectives. This includes using a strong random number generator, creating keys of sufficient length with an appropriate algorithm, and rotating keys regularly.

  2. Use a centralized key management system

    A centralized key management system is essential for effective key management in an enterprise setting. This system should be secure and allow easy management of keys across the organization.

  3. Use key-encrypting keys

    To ensure an extra level of security, consider using key-encrypting keys (KEKs) to protect your encryption keys. KEKs are used to encrypt and decrypt encryption keys, providing an additional layer of security.

  4. Establish key access controls

    It is essential to have controls in place for who has access to your encryption keys. This includes establishing access controls for key generation, key storage, and key use.

  5. Centralize User roles and access

    Some businesses may utilize thousands of encryption keys, but not every employee needs access to them. Therefore, only individuals whose roles require it should have access to encryption keys. These roles should be defined in the centralized key management system so that only authenticated users are issued access credentials for the encrypted data connected to their user profile.

    Additionally, make sure that no administrator or user has exclusive access to a key. This provides a backup plan in case a user forgets their login information or unexpectedly leaves the firm.

  6. Use key backup and recovery

    Proper key backup and recovery is essential to ensure that you can quickly restore access to your encrypted data in case of an emergency. This includes regularly backing up keys and having a clear plan in place for key recovery.

  7. Use key expiration

    Key expiration is a process in which keys are set to expire after a certain period of time. This ensures that keys are regularly rotated and that access to encrypted data is kept up to date.

  8. Use key revocation

    Key revocation is a process in which keys are invalidated and can no longer be used to access encrypted data. This is essential for ensuring that access to data is properly controlled and that unauthorized individuals are not using keys.

  9. Use Automation to Your Advantage

    An enterprise or larger organization relying solely on manual key management will find it time-consuming, expensive, and prone to mistakes. We have heard a lot about automation in the context of certificate management, but it is not just for digital certificates. The smartest approach to encryption key management is to use automation to generate key pairs and to renew and rotate keys at set intervals.

  10. Preparation to Handle Accidents

    Even when administrators and security personnel implement the correct policies and controls to secure sensitive information, things can go wrong at any point, and the organization must be prepared for it. For example:

    • A user loses the credentials to their keys
    • An employee leaves or is fired from the company
    • A flawed encryption algorithm was used
    • Human error, such as accidentally publishing a private key to a public website

One should always be prepared for such situations: identify the possibilities before they occur and take precautionary measures. Audit your security infrastructure regularly to minimize such incidents.
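Several of the practices above (strong generation, expiration, revocation, and rotation) can be sketched in a few lines of Python. This is an illustrative stand-in, not a production key manager; the `KeyRecord` class and the 90-day crypto period are assumptions made for the example:

```python
import secrets
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class KeyRecord:
    """Illustrative record tracking one symmetric key's lifecycle."""
    # 256-bit key drawn from a cryptographically strong RNG (practice 1)
    key: bytes = field(default_factory=lambda: secrets.token_bytes(32))
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    lifetime: timedelta = timedelta(days=90)  # assumed crypto period (practice 7)
    revoked: bool = False

    def is_usable(self, now: Optional[datetime] = None) -> bool:
        """A key may be used only while it is neither expired nor revoked."""
        now = now or datetime.now(timezone.utc)
        return not self.revoked and now < self.created + self.lifetime

def rotate(old: KeyRecord) -> KeyRecord:
    """Revoke the old key and issue a fresh one (practices 8 and 9)."""
    old.revoked = True
    return KeyRecord(lifetime=old.lifetime)
```

Calling `rotate` from a scheduler at a set interval is one way to automate expiration, revocation, and re-issuance rather than handling them manually.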


By following these best practices, you can ensure that your enterprise encryption key management is effective and secure. Proper key management is essential for protecting your sensitive data and ensuring that only authorized individuals can access it.

Free Downloads

Datasheet of Encryption Consulting Services

Encryption Consulting is a customer focused cybersecurity firm that provides a multitude of services in all aspects of encryption for our clients.

Encryption Services


To establish a public key infrastructure and provide your company with public key cryptography, digital certificates, and digital signature capabilities, the AD CS server role is necessary. AD CS provides customizable services for issuing and managing digital certificates in software security systems that employ public key technologies.

The digital certificates that AD CS provides can be used to encrypt and digitally sign electronic documents and messages. These digital certificates can authenticate network computers, users, or device accounts. Digital certificates are used to provide the following:

  1. Confidentiality through encryption
  2. Integrity through digital signatures
  3. Authentication by associating certificate keys with a computer, user, or device account on a computer network

The Public Key Services container cannot be limited to any specific domain or domains; it is available to any client in the forest. Since the container is stored in the configuration naming context, its content is replicated to all domain controllers in the forest:

CN=Public Key Services, CN=Services, CN=Configuration, DC= {forest root domain}

The following are the sub-containers under the Public Key Services container:

  1. AIA
  2. CDP
  3. Certificate Templates
  4. Certification Authorities
  5. Enrollment Services
  6. KRA
  7. OID

Below are the descriptions of each container:


AIA

To build a trusted certificate chain and retrieve any cross-certificates issued by the CA, clients can retrieve CA certificates from the AIA container by utilizing the Authority Information Access (AIA) certificate extension. The new enterprise CA’s certificate is automatically installed in the AIA container during installation. To publish a CA certificate to this container programmatically, run the command below:

Certutil -dspublish -f <PathToCertFile.cer> SubCA

<PathToCertFile.cer> – the actual path and file name of the certificate.


CDP

The CDP container stores certificate revocation lists. It contains all base CRLs and delta CRLs published in the forest.

You can publish a certificate revocation list to the CDP container by running the certutil command:

Certutil -dspublish -f <PathToCRLFile.crl> <SubcontainerName>

How to add a CDP

The command below adds a CDP:

Add-CRLDistributionPoint [-InputObject] <CRLDistributionPoint[]> [-URI] <String[]> [<CommonParameters>]


-InputObject <CRLDistributionPoint[]>: Specifies the CRLDistributionPoint object to which new CRL distribution points are added.

[-URI] <String[]>: Specifies new CRL file publishing distribution points for a particular CA.

<CommonParameters>: The cmdlet supports common parameters such as Debug (db), ErrorAction (ea), ErrorVariable (ev), InformationAction (infa), InformationVariable (iv), OutVariable (ov), OutBuffer (ob), PipelineVariable (pv), Verbose (vb), WarningAction (wa), and WarningVariable (wv).

CRL Publication options

%1 (ServerDNSName): The CA computer’s Domain Name System (DNS) name
%2 (ServerShortName): The CA computer’s NetBIOS name
%3 (CAName): The CA’s logical name
%6 (ConfigDN): The Lightweight Directory Access Protocol (LDAP) path of the forest’s configuration naming context
%8 (CRLNameSuffix): The CRL’s renewal extension
%9 (DeltaCRLAllowed): Indicates whether the CA supports delta CRLs
%10 (CDPObjectClass): Indicates that an object is a CDP object in AD DS
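To see how these replacement tokens are used, here is a small sketch that expands a hypothetical LDAP CDP template with the variables above. The template string and the example values are assumptions for illustration, not the exact defaults used by AD CS:

```python
# Illustrative expansion of a CDP URL template. The %-tokens mirror
# the CRL publication variables listed above; the values are examples.
CDP_VARIABLES = {
    "%1": "ca01.corp.example.com",       # ServerDNSName
    "%2": "CA01",                        # ServerShortName
    "%3": "Corp Issuing CA",             # CAName
    "%6": "DC=corp,DC=example,DC=com",   # ConfigDN (example value)
    "%8": "",                            # CRLNameSuffix (empty before renewal)
    "%9": "",                            # DeltaCRLAllowed
    "%10": "cRLDistributionPoint",       # CDPObjectClass
}

def expand_cdp_template(template: str) -> str:
    """Substitute each %-token with its value, longest token first
    so that %10 is not mistaken for %1 followed by a zero."""
    for token in sorted(CDP_VARIABLES, key=len, reverse=True):
        template = template.replace(token, CDP_VARIABLES[token])
    return template
```

Running it against a template such as `"ldap:///CN=%3%8,CN=%2,CN=CDP,CN=Public Key Services,CN=Services,%6"` yields a fully expanded LDAP distribution point path.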

Certificate Templates

Certificate templates are used to automate certificate deployment by defining the common settings shared by all certificates issued from that template and by determining which users or computers can enroll, or automatically enroll, for the certificate. An example would be a certificate template that automatically enrolls all domain users with valid email addresses for a secure email (S/MIME) certificate.

All certificate templates available in AD, whether published on an enterprise CA or not, are stored in the Certificate Templates container. If an enterprise CA publishes a certificate template, the value is written as an attribute on the CA object in the Enrollment Services container. By default, over 30 Microsoft predefined certificate templates are installed when building an enterprise CA.

Certification Authorities

The Certification Authorities container holds the trusted root CA certificate(s). During enterprise root CA creation, the certificate is automatically published to this container. In the case of an offline standalone root CA, the PKI administrator must manually publish the offline root CA certificate using the certutil command.

Example: certutil -dspublish -f <Root.cer> RootCA

Enrollment Services

This container includes the certificates of the enterprise CAs that can issue certificates to users, computers, or services in the forest. Only a member of Enterprise Admins who installs an enterprise CA can add enterprise CA certificates to this container. The certificates cannot be added manually using the Manage AD Containers dialog box.


KRA

This container holds the certificates of the key recovery agents for the forest. Key recovery agents must be configured to support key archival and recovery. Key recovery agent certificates can be added to this container automatically by enrolling with an enterprise CA; they cannot be added manually using the Manage AD Containers dialog box.


OID

This container stores the object identifiers (OIDs) registered in the enterprise. It can hold object identifier definitions for custom application policies, issuance (certificate) policies, and certificate templates. When a client is a member of the Active Directory forest, it uses the OID container, along with the local OID database, to resolve object identifiers.

New OIDs should be registered via the Certificate Templates (certtmpl.msc) MMC snap-in by adding a new application or issuance (certificate) policy on the certificate template’s Extensions tab.


Understanding each Active Directory Certificate Services container is vital for an enterprise PKI administrator, who must know the purpose and contents of each container while managing the enterprise PKI infrastructure.



Read time: 5 mins

The CipherTrust data security platform’s core management point is CipherTrust Manager. With the help of this market-leading enterprise key management solution, businesses can set up security policies, give granular access controls, and centrally manage encryption keys. The key lifecycle tasks managed by CipherTrust Manager include creation, rotation, destruction, import, and export. It also gives role-based access control to keys and policies, allows thorough auditing and reporting, and provides REST APIs that are easy for management and development. The physical and virtual form factors of CipherTrust Manager are FIPS 140-2 compliant up to level 3. Additionally, hardware security modules (HSM) like Thales Luna and Luna Cloud HSM can be used to anchor the CipherTrust Manager.

Are you still using the older version of CipherTrust Manager in your environment? Then it’s time to upgrade it to the latest version. The below upgrade details will help you upgrade your CipherTrust Manager all by yourself. This document covers basic system upgrade details for the Thales CipherTrust Manager. For more detailed instructions, please refer to the Thales System Upgrade Guide.


Prerequisites are important for planning and being ready for the upgrade. The following checks must be completed before upgrading the CipherTrust Manager:

  • Know your current software version of the CipherTrust Manager (CM) and the desired version of the CipherTrust Manager. Example: Your Current version of CM is 2.0, and the desired version is 2.9.
  • Define the upgrade path. For the above example, the upgrade path would be 2.0 > 2.3 > 2.6 > 2.9. [NOTE: Thales tests upgrades from the three previous minor versions. Upgrades from other versions are not tested and may not work correctly.]
  • Ensure you have access as ksadmin with an SSH key.
  • Take a system-level backup and ensure that you have downloaded the CM backup file and backup key. (This can be done via the CM Web UI).
  • Run the command df -h to ensure at least 12 GB of space is available (excluding the upgrade file).
  • SCP the upgrade file to the CipherTrust Manager (when using WinSCP, ensure that SCP is selected as the file protocol). The upgrade file can be transferred via the WinSCP application or with the following command:

scp -i <path_to_private_SSH_key> <upgrade_file_name> ksadmin@<ip>:.

[NOTE: Upgrade files for the desired version can be downloaded from the Thales Support portal, or you can open a ticket with Thales support to help you get the upgrade files.]

Upgrade Procedure:

  1. Log in as ksadmin via SSH
  2. Run the following command to upgrade: sudo /opt/keysecure/ -f <archive_file_path>
  3. Once all the services are running, reboot the appliance: sudo reboot

[NOTE: The upgrade can also be performed over a serial connection as ksadmin]

Post-upgrade Checks:

The following checks should be run after upgrading the CTM:

  1. Check that all services are running with the following command: sudo docker ps | wc -l
  2. Ensure the CipherTrust Manager services have started. From the ksadmin session, run “systemctl status keysecure”.
  3. Alternatively, you can visit the CipherTrust Manager web console or attempt to connect with the ksctl CLI.

Known issue

There is a known issue in CipherTrust Manager instances upgraded from 2.6 and earlier, where network device names sometimes swap MAC addresses after reboot. This has been observed for network interfaces beginning with eth and bonded connections created from network interfaces beginning with eth. To avoid this, a connection for each network interface should be configured.


This document does not replace the standard SafeNet documentation set for the CipherTrust Manager user guides; rather, it is an addendum designed to be used alongside that documentation. It is always a best practice to upgrade your security solution software with each major release.




Read time: 7 minutes

3DES is an encryption cipher derived from the original Data Encryption Standard (DES). First introduced in 1998, the algorithm was primarily adopted in finance and other private industries to encrypt data at rest and data in transit. It became prominent in the late nineties but has since fallen out of favor due to the rise of more secure algorithms, such as AES-256 and XChaCha20. Although it is deprecated and will be disallowed after 2023, it is still implemented in some situations.

About Triple DES or 3DES

Triple DES (often referred to as the Triple Data Encryption Algorithm, or TDEA) is specified in SP 800-67 and has two variations, known as two-key TDEA and three-key TDEA. Three-key TDEA is the stronger of the two variations. Below is the status of the 3DES algorithm for encryption and decryption:

Two-key TDEA Encryption: Disallowed
Two-key TDEA Decryption: Legacy use
Three-key TDEA Encryption: Deprecated through 2023; disallowed after 2023
Three-key TDEA Decryption: Legacy use

*Deprecated: the algorithm may be used, but the user must accept a specific risk

*Disallowed: the algorithm or key length is no longer suitable for use

Three-key TDEA encryption and decryption

Effective as of the final publication of this revision of SP 800-131A, encryption using three-key TDEA is deprecated through December 31, 2023, using the approved encryption modes. Note that SP 800-67 specifies a restriction on protecting no more than 2^20 data blocks using the same single key bundle. Three-key TDEA may continue to be used for encryption in existing applications but shall not be used for encryption in new applications. After December 31, 2023, three-key TDEA is disallowed for encryption unless specifically allowed by other NIST guidance. Decryption using three-key TDEA is allowed for legacy use.
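In practical terms this restriction is small: at 64 bits (8 bytes) per block, 2^20 blocks amount to only 8 MiB of data per key bundle. A quick check:

```python
# Volume of data covered by the SP 800-67 limit of 2**20 blocks
# per three-key TDEA key bundle, at 64 bits (8 bytes) per block.
BLOCK_BYTES = 8
MAX_BLOCKS = 2 ** 20

max_bytes = MAX_BLOCKS * BLOCK_BYTES
print(max_bytes, "bytes =", max_bytes // (1024 * 1024), "MiB")  # 8388608 bytes = 8 MiB
```

By comparison, AES operates on 128-bit blocks and has no comparably tight per-key data limit, which is one more reason it has displaced 3DES.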

How is Triple DES/3DES applied?

Triple DES is a type of encryption that applies the DES cipher three times to the same plaintext. It employs one of three key selection options:

  • in the first, all three keys are different
  • in the second, two keys are the same and one is different
  • in the third, all three keys are the same.

Difference between 3DES and DES

DES is a symmetric-key algorithm that uses the same key for encryption and decryption processes. 3DES was developed as a more secure alternative because of DES’s small key length. 3DES or Triple DES was built upon DES to improve security. In 3DES, the DES algorithm is run three times with three keys; however, it is only considered secure if three separate keys are used.
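The encrypt-decrypt-encrypt (EDE) construction behind 3DES can be illustrated with a toy cipher. XOR stands in for DES here purely to show the structure (for XOR, encryption and decryption are the same operation); note how repeating a key collapses EDE toward a single encryption, which is why a 3DES implementation keyed identically in all three stages behaves like single DES:

```python
def toy_e(key: int, block: int) -> int:
    """Toy 'encrypt': XOR stands in for DES encryption."""
    return block ^ key

def toy_d(key: int, block: int) -> int:
    """Toy 'decrypt': XOR is its own inverse."""
    return block ^ key

def toy_3des(k1: int, k2: int, k3: int, block: int) -> int:
    """The EDE structure of 3DES: E_k3(D_k2(E_k1(block)))."""
    return toy_e(k3, toy_d(k2, toy_e(k1, block)))
```

With k1 == k2, the first two stages cancel and only the k3 encryption remains; with all three keys equal, the result is a single encryption under that one key. This is why 3DES is only considered secure when three separate keys are used.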

Is Triple DES/3DES secure?

The Triple Data Encryption Algorithm (TDEA or 3DES) is being officially decommissioned, according to draft guidelines published by NIST on July 19, 2018. According to the guidelines, 3DES will be deprecated for all new applications following a period of public consultation, and its use will be disallowed after 2023.

Is DES no longer used?

The Data Encryption Standard, also known as DES, is no longer considered secure. While there are no known severe weaknesses in its internals, it is inherently flawed because its 56-bit key is too short. A German court recently declared DES to be “out-of-date and not secure enough,” and held a bank accountable for utilizing it.

AES replaced DES encryption

One of the primary objectives for the DES replacement algorithm from the National Institute of Standards and Technology (NIST) was that it be efficient in both software and hardware implementations. (Originally, DES was practical only in hardware implementations.) Performance analysis of the algorithms was carried out using Java and C reference implementations. AES was chosen in an open competition that included 15 candidates from research teams around the world, and the overall amount of resources dedicated to the process was enormous.

Finally, in October 2000, the National Institute of Standards and Technology (NIST) announced Rijndael as the proposed Advanced Encryption Standard (AES).

Differences between 3DES and AES encryption?

Both AES and 3DES, often known as Triple DES, are symmetric block ciphers, and both have served as data encryption standards, though the use of 3DES has become increasingly unpopular in recent years. Both have the same goals and objectives, yet there are significant differences between them.

Key length: 3DES – 168 bits (k1, k2, and k3) or 112 bits (k1 and k2); AES – 128, 192, or 256 bits
Cipher type: both are symmetric block ciphers
Block size: 3DES – 64 bits; AES – 128 bits
Security: 3DES – proven inadequate; AES – considered secure




Read time: 8 minutes

Cryptographic keys are a vital part of any security system. They do everything from data encryption and decryption to user authentication. The compromise of any cryptographic key could lead to the collapse of an organization’s entire security infrastructure, allowing the attacker to decrypt sensitive data, authenticate themselves as privileged users, or give themselves access to other sources of classified information. Luckily, proper Management of keys and their related components can ensure the safety of confidential information.

Key management deals with the creation, exchange, storage, deletion, and renewal of keys. It means putting certain standards in place to ensure the security of cryptographic keys in an organization.

Types of Cryptographic keys:

Cryptographic keys are grouped into various categories based on their functions. Let’s talk about a few types:

  1. Master Key

    The master key is used only to encrypt other subordinate encryption keys. The master key always remains in a secure area in the cryptographic facility (e.g., hardware security module), and its length will typically be 128 – 256 bits, depending on the algorithm used.

  2. The Key Encryption Key (KEK)

    When a secret key or data encryption key is transported, it must be “wrapped” with a KEK to ensure the confidentiality, integrity, and authenticity of the key. The KEK is also known as the “key wrapping key” or the “key transport key.”

  3. The Data Encryption Key (DEK)

    Depending on the scenario and requirements, data may be encrypted with symmetric or asymmetric keys. In the case of symmetric keys, an AES key with a key length of 128-256 bits is typically used. A key length of 1024 – 4096 bits is generally used for asymmetric keys with the RSA algorithm. In simpler terms, you encrypt your data with data encryption keys.

  4. Root Keys

    The Root Key is the topmost key of your PKI hierarchy, which is used to authenticate and sign digital certificates. The Root Key usually has a longer lifetime than other keys in the hierarchy. The private portion of the root key pair is stored securely in a FIPS 140-2 Level 3 compliant hardware security module.

Key length and algorithm

Choosing the right key length and algorithm is very important for the security of your cryptography environment. The key length must be aligned with the key algorithm in use. For any key (symmetric or asymmetric), the key length is chosen based on several factors:

  • The key algorithm being used
  • The required security strength.
  • The amount of data being processed utilizing the key (e.g., bulk data)
  • The crypto period of the key
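When aligning key length with the algorithm and required security strength, NIST SP 800-57 Part 1 pairs comparable key sizes across algorithm families. The sketch below records the commonly cited pairings; the dictionary layout is an assumption for the example, while the numbers come from SP 800-57:

```python
# Comparable key sizes (in bits) for a given security strength,
# following the NIST SP 800-57 Part 1 equivalence table.
COMPARABLE_STRENGTHS = {
    # strength: {symmetric AES key, RSA modulus, minimum ECC curve size}
    128: {"AES": 128, "RSA": 3072, "ECC": 256},
    192: {"AES": 192, "RSA": 7680, "ECC": 384},
    256: {"AES": 256, "RSA": 15360, "ECC": 512},
}
```

For example, protecting data at a 128-bit security strength calls for AES-128, an RSA modulus of at least 3072 bits, or an ECC curve of at least 256 bits.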

Importance of Key Management:

Key Management forms the basis of all data security. Data is encrypted and decrypted via encryption keys, which means the loss or compromise of any encryption key would invalidate the data security measures put into place. Keys also ensure the safe transmission of data across an Internet connection. With authentication methods like code signing, attackers who steal a poorly protected key could pretend to be a trusted service like Microsoft and deliver malware to victims. Key management must comply with specific standards and regulations to ensure companies use best practices when protecting cryptographic keys. Well-protected keys should only be accessible by users who need them.

Key management systems are commonly used to ensure that the keys are:

  • Generated to the required key length and algorithm
  • Well protected (security architects generally prefer FIPS 140-2 compliant hardware security modules)
  • Managed and accessible only by authorized users
  • Rotated regularly
  • Deleted when no longer required
  • Audited regularly for their usage

Centralized Key Management:

People often ask whether a third-party key management solution is mandatory for managing encryption keys centrally. In my opinion, no, it is not compulsory, but it is a good feature for your organization to have. A centralized key management system offers more efficiency than application-specific KMSs.

The benefits of a centralized key management system:

  • Reduces operational overhead
  • Reduces costs through automation
  • Reduces the risk of human error through automation
  • Automates key updates and distribution to any endpoint
  • Provides tamper-evident records for proof of compliance
  • Offers high availability and scalability
  • Helps meet regulatory compliance
  • Simplifies the key management lifecycle

Compliance and Best Practices

Compliance standards and regulations place significant demands on key management practices. Standards created by NIST, such as FIPS, and regulations like PCI DSS and HIPAA expect organizations to follow certain best practices to maintain the security of the cryptographic keys used to protect sensitive data.

The following are important practices to ensure compliance with government regulations and standards:

  • The most important practice with cryptographic keys is to never hard-code key values anywhere. Hard-coding a key into open-source code, or code of any kind, instantly compromises the key: anyone with access to that code now has access to the key value, rendering the key insecure.
  • The principle of least privilege means users should only have access to the keys necessary for their work. This ensures that only authorized users can access important cryptographic keys while their usage is tracked. If a key is misused or compromised, only a handful of people had access to it, which narrows the suspect pool if the breach came from within the organization.
  • HSMs are physical devices that store cryptographic keys and perform cryptographic operations on-premises. For an attacker to steal the keys from an HSM, they would need to physically remove the device from the premises, steal the quorum of access cards required to operate it, and defeat the mechanisms that keep the stored keys encrypted. Cloud HSMs are also a viable key storage method, though there is always a chance that the cloud service provider's security fails, allowing an attacker to access the keys stored there.
  • Automation is a widely practiced method of ensuring keys do not go past their crypto period and become overused. Other portions of the key lifecycle can be automated, like creating new keys, backing up keys regularly, distributing keys, revoking keys, and destroying keys.
  • Creating and enforcing security policies relating to encryption keys is another way many organizations ensure the safety and compliance of their key management system. Security policies provide the methods everyone within an organization follows and create another method of tracking who can and has accessed specific keys.
  • Separating duties related to key management is another important practice for any organization. For example, one person authorizes a new user's access to keys, another distributes the keys, and a third creates them. With this method, the first person cannot steal the key during the distribution phase or learn its value during the generation phase of the key lifecycle.
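The first practice above, never hard-coding keys, can be as simple as refusing to start without an externally supplied key. A sketch, with the environment-variable name chosen purely for illustration:

```python
import base64
import os

def load_data_encryption_key(env_var: str = "DATA_ENCRYPTION_KEY") -> bytes:
    """Load a key from the environment instead of embedding it in source code."""
    encoded = os.environ.get(env_var)
    if encoded is None:
        # Fail closed: never fall back to a key baked into the code.
        raise RuntimeError(f"{env_var} is not set; no key available")
    return base64.b64decode(encoded)
```

In production the key would typically come from a KMS or HSM rather than an environment variable, but the point is the same: the key value never appears anywhere in the code or the repository.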

Encryption Consulting Assessment

At Encryption Consulting, we ensure your system meets compliance standards and protects data with the best possible methods. We perform encryption assessments, including key management and cloud key lifecycle management. We also write weekly blogs that can help you find the best practices for your key management needs and learn more about the different aspects of data security.

Free Downloads

Datasheet of Encryption Consulting Services

Encryption Consulting is a customer focused cybersecurity firm that provides a multitude of services in all aspects of encryption for our clients.

Encryption Services

About the Author

Parnashree Saha is a data protection senior consultant at Encryption Consulting LLC working with PKI, AWS cryptographic services, GCP cryptographic services, and other data protection solutions such as Vormetric, Voltage etc.

Read time: 5 minutes

What is Data Loss Prevention?

Data Loss Prevention (DLP) is a set of processes used to ensure an organization's sensitive data is not lost, misused, leaked, breached, or accessed by unauthorized users. Organizations use DLP to protect and secure data and comply with regulations. Organizations pass their sensitive data to partners, customers, remote employees, and other legitimate users through their network, and sometimes it may get intercepted by an unauthorized user.

Many organizations find it challenging to keep track of their data and lack effective data loss prevention best practices. The result is little visibility into what data leaves the organization, which undermines data loss prevention.

Why do you need Data Loss Prevention?

Data loss can be damaging for businesses of all sizes. The primary purpose of data loss prevention is to secure sensitive data and prevent data leakage and data breaches. Data loss prevention solutions are designed to monitor and filter data constantly. Beyond dealing with the data being used, stored, and transmitted within the network, data loss prevention applications ensure that no harmful outside information enters the company network and that no sensitive information leaves it via an unauthorized user.

Organizations typically use DLP to:

  • Protect Personally Identifiable Information (PII) and comply with relevant regulations.
  • Protect intellectual property, which is critical for the organization.
  • Secure data on remote cloud systems or storage.
  • Enforce security in a BYOD environment.
  • Achieve data visibility.

Reasons why Data Loss Prevention is necessary for business:

  • Outside threats and attacks are increasing daily; hackers have become more sophisticated over time and frequently find new ways to access networks and sensitive data. Organizations should actively look out for new threats.
  • Insider threats are also a prime reason to use DLP. Disgruntled employees may deliberately harm the company by sharing its sensitive data with unauthorized users or by seeking outside assistance to carry out attacks. The Verizon 2021 Data Breach Investigations Report revealed that more than 20% of security incidents involved insiders.
  • Data loss can impact the financial health of your business. It can also lead to loss of productivity, revenue, and client trust, and damage the company's brand name and reputation. According to the IBM Cost of a Data Breach Report 2021, the global average cost of a data breach increased from $3.86 million to $4.2 million in 2021.
  • Organizations have welcomed the Bring Your Own Device (BYOD) approach on an immense scale. However, some industries or organizations have poorly deployed and maintained BYOD solutions, making it easier for employees to inadvertently share sensitive information through their personal devices.

Therefore, a data loss prevention strategy is crucial to secure your data, protect intellectual property, and comply with regulations. DLP systems ensure that your company’s sensitive data is not lost, mishandled, or accessed by unauthorized users.

Data Loss Prevention (DLP) best practices:

  1. Determine your data protection objective

    Define what you are trying to achieve with your data loss prevention program. Do you want to protect your intellectual property, gain better visibility into your data, or meet regulatory and compliance requirements? Having a clear objective will help your organization determine the appropriate DLP solution to include in your DLP strategy.

  2. Data classification and identification

    Identify the data critical to your business, such as client information, financial records, and source code, and classify it based on its criticality level.

  3. Data Security policies

    Define comprehensive data security rules and policies and establish them across your company's network. DLP technologies help block sensitive data and files from being shared via unsecured channels.

  4. Access Management

    Access to and use of critical or sensitive data should be restricted or limited based on users’ roles and responsibilities. The DLP solution helps the system administrators assign the appropriate authorization controls to users depending upon the type of data users handle and their access level.

  5. Evaluate internal resources

    To execute the DLP strategy successfully, an organization needs personnel with DLP expertise who can help implement the appropriate DLP solution, including DLP risk analysis, reporting, data breach response, and DLP training and awareness.

  6. Conduct an assessment

    Evaluating the types of data and their value to the organization is an essential step in implementing a DLP program. This includes identifying relevant data, wherever it is stored, and whether it is sensitive data—intellectual property, confidential information, etc.

    Some DLP solutions can identify information assets by scanning the metadata of files and cataloging the result, or if necessary, analyze the content by opening the files. The next step is to evaluate the risk associated with each type of data if the data is leaked.

    Losing information about employee benefits programs carries a different level of risk than the loss of 1,000 patient medical files or 100,000 bank account numbers and passwords. Additional considerations include data exit points and the likely cost to the organization if the data is lost.

  7. Research for DLP vendors

    Establish your evaluation criteria while researching for a DLP vendor for your organization, such as:

    • Type of deployment architecture offered by the vendor.
    • Operating systems (Windows, Linux, etc.) the solution supports.
    • Does the vendor provide managed services?
    • Is your primary concern protecting structured or unstructured data?
    • How do you plan to enforce data movement? (e.g., based on policies, events, or users)
    • Regulatory and compliance requirements for your organization.
    • What is the timeline to deploy the DLP solution?
    • Will you need additional staff or experts to manage DLP?

  8. Define roles and responsibilities

    Define the roles and responsibilities of the individuals involved in the DLP program. This will provide checks and balances during the deployment of the program.

  9. Define use cases

    Organizations often try to solve all use cases simultaneously. Instead, define an initial approach and set fast, measurable objectives, or narrow your focus to specific data types.


DLP solutions classify regulated, confidential, and business-critical data, and identify violations of policies defined by the organization or within a predefined policy set, usually driven by regulatory compliance such as PCI DSS, HIPAA, or GDPR. When violations are identified, DLP enforces remediation with alerts to prevent end users from accidentally or deliberately sharing data that could put the organization at risk. DLP solutions monitor and control endpoint activities, protect data at rest, in motion, and in use, and also provide reporting features to meet compliance and auditing requirements.
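At its simplest, the classification step works by matching content against known sensitive-data formats. A toy sketch in Python; the patterns below are deliberately simplified placeholders, and real DLP engines layer checksums (e.g., Luhn for card numbers), context, and machine learning on top of matching like this:

```python
import re

# Simplified detectors for illustration only.
PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan(text: str):
    """Return (label, match) pairs for sensitive data found in the text."""
    return [
        (label, m.group())
        for label, pattern in PATTERNS.items()
        for m in pattern.finditer(text)
    ]

print(scan("Contact: jane@example.com, SSN 123-45-6789"))
```

A real deployment would run detectors like these at the endpoints and network egress points described above, and feed the findings into the policy engine that decides whether to block, quarantine, or alert.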


    Read time: 5 minutes

    What is an IoT device?

    Before we jump into the issues and challenges, let's get a better idea of IoT devices. A device that has a sensor attached to it and transmits data from one object to another, or to people, with the help of the Internet is known as an IoT device. IoT devices include wireless sensors, software, actuators, and computer devices. An IoT device is any device that connects to a network to access the Internet, so personal computers, cellphones, speakers, and even some outlets are considered IoT devices. Today, even cars and airplanes use IoT devices, meaning that if these devices were attacked by threat actors, cars or airplanes could be hijacked or stolen. With such widespread use of IoT devices globally, authenticating and authorizing IoT devices within your organization's network has become vital. Allowing unauthorized IoT devices onto your network can let threat actors leverage them to perform malware attacks within your organization.

    Need for IoT Security

    Security breaches in IoT devices can occur at any point, including during manufacturing, network deployment, and software updates. These vulnerabilities provide entry points for hackers to introduce malware into an IoT device and corrupt it. In addition, because all the devices are connected to the Internet (for example, through Wi-Fi), a flaw in one device might compromise the entire network, leading other devices to malfunction. Some key requirements for IoT security are:

    • Device security, such as device authentication through digital certificates and signatures.
    • Data security, including device authentication and data confidentiality and integrity.
    • Compliance with regulatory requirements, ensuring that IoT devices meet the regulations set by the industry within which they are used.

    IoT Security Challenges:

    1. Malware and Ransomware

      The amount of malware and ransomware used to exploit IoT-connected devices will continue to rise in the coming years as the number of connected devices grows. While classic ransomware uses encryption to lock users out of devices and platforms entirely, hybrid strains that combine malware and ransomware to integrate multiple attacks are on the rise.

      Ransomware attacks could reduce or disable device functions while stealing user data. For example, a simple IP (Internet Protocol) camera can collect sensitive information from your house, office, etc.

    2. Data Security and Privacy

      Data privacy and security are the most critical issues in today’s interconnected world. Large organizations use various IoT devices, such as smart TVs, IP cameras, speakers, lighting systems, printers, etc., to constantly capture, send, store, and process data. All the user data is often shared or even sold to numerous companies, violating privacy and data security rights and creating public distrust.

      Organizations need to establish dedicated compliance and privacy guidelines that redact and anonymize sensitive data before it is stored, disassociating IoT data payloads from information that might be used to identify users personally. Mobile, web, and cloud apps and other services used to access, manage, and process data associated with IoT devices should comply with these guidelines. Data that has been cached but is no longer needed should be safely disposed of. If the data is saved, complying with the various legal and regulatory structures will be the most challenging part.

    3. Brute Force Attacks

      According to government guidance, manufacturers should avoid selling IoT devices with default credentials, such as "admin" as the username and password. However, these are only guidelines at this point, and there are no legal penalties in place to force manufacturers to stop this risky practice. In addition, many IoT devices are vulnerable to password hacking and brute forcing because of weak credentials and login details.

      For this very reason, the Mirai malware successfully identified vulnerable IoT devices and compromised them using default usernames and passwords.

    4. Skill Gap

      Nowadays, organizations face a significant IoT skill gap that stops them from fully utilizing new prospects. As it is not always possible to hire a new team, setting up training programs is necessary. Adequate training workshops and hands-on activities should be set up to hack a specific smart gadget. The more knowledge your team members have in IoT, the more productive and secure your IoT will be.

    5. Lack of Updates and Weak Update Mechanism

      IoT products are designed with connectivity and ease of use in mind. They may be secure when purchased, but they become vulnerable when hackers find new security flaws or vulnerabilities. In addition, IoT devices become vulnerable over time if they are not fixed with regular updates.
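Several of these challenges, particularly brute force against factory defaults, can at least be audited for in a device inventory. A minimal sketch; the credential list and the device record fields are illustrative, not from any real product:

```python
# Common factory-default credential pairs abused by IoT botnets
# such as Mirai (illustrative subset).
DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("user", "user"),
}

def audit_devices(devices):
    """Flag devices still running with factory-default credentials."""
    return [
        device["name"]
        for device in devices
        if (device["username"], device["password"]) in DEFAULT_CREDENTIALS
    ]

inventory = [
    {"name": "ip-camera-01", "username": "admin", "password": "admin"},
    {"name": "printer-02", "username": "svc_print", "password": "Tr0ub4dor&3"},
]
print(audit_devices(inventory))  # ['ip-camera-01']
```

An audit like this is only a starting point: flagged devices still need their credentials rotated and, ideally, certificate-based authentication put in place instead of passwords.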

    Top IoT Vulnerabilities

    The Open Web Application Security Project (OWASP) publishes a list of top IoT vulnerabilities, an excellent resource for manufacturers and users alike.

    1. Weak Password Protection

      Use of easily brute-forced, publicly available, or unchangeable credentials, including backdoors in firmware or client software that grant unauthorized access to deployed systems.

      Weak, guessable, default, and hardcoded credentials are the easiest way to hack and attack devices directly and launch further large-scale botnets and other malware.

      In 2018, California passed its SB-327 IoT law prohibiting the use of default credentials. This law aims to address weak-password vulnerabilities.

    2. Insecure network services

      Unnecessary or unsafe network services running on the devices, particularly those exposed to the internet, jeopardize the confidentiality, integrity/authenticity, or availability of information and open the risk of unauthorized remote control of IoT devices.

      Unsecured networks make it easy for cybercriminals to exploit weaknesses in protocols and services that run on IoT devices. Once they have exploited the network, attackers can compromise confidential or sensitive data transmitted between the user’s device and the server. Unsecured networks are especially vulnerable to Man-in-the-Middle (MITM) attacks, which steal device credentials and authentication as part of broader cyberattacks.

    3. Insecure Ecosystem Interfaces

      Insecure web, backend API, cloud, or mobile interfaces in the ecosystem outside of the device that allow compromise of the device or its related components. Common issues include a lack of authentication/authorization, lacking or weak encryption, and a lack of input and output filtering.

      Useful identification tools help the server distinguish legitimate devices from malicious users. Insecure ecosystem interfaces, such as application programming interfaces (APIs), web applications, and mobile devices, allow attackers to compromise devices. Organizations should implement authentication and authorization processes to authenticate users and protect their cloud and mobile interfaces.

    4. Insecure or Outdated Components

      Use of deprecated or insecure software components/libraries that could allow the device to be compromised. This includes insecure customization of operating system platforms, and the use of third-party software or hardware components from a compromised supply chain.

      The IoT ecosystem can be compromised by code and software vulnerabilities as well as legacy systems. Using unsafe or outdated components, such as open source or third-party software, can create security vulnerabilities that expand an organization’s attack surface.

    5. Lack of Proper Privacy Protection

      Users' personal information stored on the device or in the ecosystem that is used insecurely, improperly, or without permission.

      IoT devices often collect personal data that organizations must securely store and process in order to comply with various data privacy regulations. Failure to protect this data can result in fines, loss of reputation and loss of business. Failure to implement adequate security can lead to data leaks that jeopardize user privacy.

    6. Insecure Default Settings

      Devices or systems shipped with insecure default settings, or lacking the ability to make the system more secure by restricting operators from modifying configurations.

      IoT devices, like personal devices, come with hard-coded, default settings that allow for easy configuration. However, these default settings are very insecure and vulnerable to attackers. Once compromised, hackers can exploit vulnerabilities in a device’s firmware and launch broader attacks aimed at businesses.

    7. Lack of Physical Hardening

      Lack of physical hardening measures, allowing potential attackers to gain sensitive information that can help in a future remote attack or take local control of the device.

      The nature of IoT devices suggests that they are deployed in remote environments rather than in easy-to-manage, controlled scenarios. This makes it easy for attackers to target, disrupt, manipulate, or sabotage critical systems within an organization.

    8. Lack of secure update mechanisms

      Lack of the ability to securely update the device. This includes lack of firmware validation on the device, lack of secure delivery (unencrypted in transit), lack of anti-rollback mechanisms, and lack of notifications of security changes due to updates.

      Unauthorized firmware and software updates pose a great threat to launch attacks against IoT devices.
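A secure update mechanism needs at least two checks: integrity/authenticity of the image and anti-rollback on the version. The sketch below uses an HMAC as a stand-in for the asymmetric firmware signature a real device would verify, purely to illustrate the flow:

```python
import hashlib
import hmac

def verify_update(image: bytes, signature: bytes, key: bytes,
                  new_version: int, current_version: int) -> bool:
    """Accept an update only if it is authentic and not a rollback."""
    expected = hmac.new(key, image, hashlib.sha256).digest()
    # Constant-time comparison to avoid leaking the valid tag byte by byte
    if not hmac.compare_digest(expected, signature):
        return False          # firmware validation failed
    if new_version <= current_version:
        return False          # anti-rollback: refuse older or re-sent firmware
    return True

key = b"device-shared-secret"            # stand-in for a verification key
firmware = b"\x7fELF...firmware-v2"
sig = hmac.new(key, firmware, hashlib.sha256).digest()
print(verify_update(firmware, sig, key, new_version=2, current_version=1))  # True
```

In a real device the verification key would be a public key burned into hardware or protected firmware, so that only the manufacturer holding the private signing key can produce acceptable updates.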


    How does Encryption Consulting's PKI-as-a-Service help secure your IoT devices?

    Encryption Consulting LLC (EC) will completely offload your Public Key Infrastructure environment, building and managing the PKI (on-premises, in the cloud, or as a cloud-based hybrid PKI infrastructure) for your organization. Encryption Consulting will deploy and support your PKI using a fully developed and tested set of procedures and audited processes. Admin rights to your Active Directory will not be required, and control over your PKI and its associated business processes will always remain with you. Furthermore, as a security best practice, the CA keys will be held in FIPS 140-2 Level 3 HSMs hosted either in your secure data center or in our Encryption Consulting data center in Dallas, Texas.

