CodeSign Secure v3.02: Future of Code Signing with PQC

Developed by Encryption Consulting, CodeSign Secure is a centralized, secure, and scalable code signing platform built to help organizations sign code confidently without compromising on security or speed. We have designed it for modern DevOps environments, integrating with popular CI/CD pipelines (like Azure DevOps, Jenkins, GitLab, and so on), and enforcing strict access controls and approval workflows to keep your signing process clean and compliant.

Encryption Consulting’s CodeSign Secure continues to evolve with the release of version 3.02, a landmark update that integrates Post-Quantum Cryptography (PQC) support, making it one of the first enterprise code signing solutions to prepare for the quantum computing era. V3.02 includes built-in support for MLDSA (the Module-Lattice-Based Digital Signature Algorithm, standardized by NIST as ML-DSA) and LMS (Leighton-Micali Signatures), two NIST-recognized, quantum-resistant signature schemes.

Whether you’re already thinking about quantum readiness or just looking to strengthen your code signing infrastructure, CodeSign Secure v3.02 is built to help you stay ahead – securely and smartly.

Why PQC Matters in Code Signing

Let’s face it – quantum computing is no longer just a distant theory. It’s rapidly progressing, and while it’s not mainstream yet, it’s close enough that organizations can’t afford to ignore its impact on cybersecurity, especially when it comes to code signing.

Traditional digital signature algorithms like RSA and ECC have served us well for years, but they’re vulnerable to attack from powerful quantum computers. Once these machines become capable enough, they could break today’s widely used cryptographic keys in a fraction of the time it takes classical computers. That’s a big deal for code signing.

Code signatures are meant to prove that software hasn’t been tampered with and comes from a trusted source. If quantum computers can forge those signatures, it opens the door to serious threats, such as malware impersonating trusted apps, backdoors inserted into software updates, and more. This is where Post-Quantum Cryptography (PQC) comes in.

PQC algorithms are designed to be resistant to quantum attacks. By adopting PQC in code signing workflows, you’re not just reacting to a future problem—you’re proactively protecting your software supply chain for the long haul. It’s about future-proofing your trust model, especially for long-lived software like IoT firmware, embedded systems, and government-grade applications.

CodeSign Secure v3.02 supports PQC out of the box, giving organizations a head start in adapting to the next era of cryptography without sacrificing usability or performance. It’s a smart move now and a necessary one for the future.

New PQC Algorithms Supported in v3.02

MLDSA (44, 65, 87) and LMS

With version 3.02, our CodeSign Secure introduces support for cutting-edge Post-Quantum Cryptography (PQC) algorithms, giving your code signing process a serious upgrade in future-readiness. So, what’s new?

MLDSA (Module-Lattice-Based Digital Signature Algorithm) – 44, 65, 87

These are the three parameter sets of a quantum-safe digital signature algorithm based on lattice cryptography, corresponding to NIST security categories 2, 3, and 5, respectively. MLDSA is designed to provide strong security against quantum attacks while still being efficient enough for real-world applications.

  • MLDSA-44: Optimized for speed and smaller signatures, which is ideal for lightweight signing use cases.
  • MLDSA-65: A balanced option offering strong security with good performance.
  • MLDSA-87: The most secure variant, great for signing sensitive or high-value software assets.

By offering multiple MLDSA levels, our CodeSign Secure lets you choose the right balance between performance and protection, depending on your software and regulatory needs.
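
For a rough feel of how these parameter sets are exercised in code, here is a minimal sign-and-verify sketch using the open-source liboqs-python bindings. This is illustrative only (it is not the CodeSign Secure API), and the mechanism name depends on your liboqs build: newer releases expose "ML-DSA-65", while older ones use "Dilithium3".

# Minimal ML-DSA sign/verify sketch using the open-source liboqs-python bindings.
# Illustrative only; not the CodeSign Secure API. The mechanism name depends on
# the installed liboqs version ("ML-DSA-65" in newer builds, "Dilithium3" in older ones).
import oqs

ALGORITHM = "ML-DSA-65"   # or "ML-DSA-44" / "ML-DSA-87", depending on your needs
message = b"firmware-image-v1.2.3"

with oqs.Signature(ALGORITHM) as signer, oqs.Signature(ALGORITHM) as verifier:
    public_key = signer.generate_keypair()   # the private key stays inside the signer object
    signature = signer.sign(message)
    assert verifier.verify(message, signature, public_key)
    print(f"{ALGORITHM} signature verified; signature size: {len(signature)} bytes")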

LMS (Leighton-Micali Signatures) via PQSDK

LMS is a stateful hash-based signature scheme and one of the first PQC algorithms approved by NIST for digital signatures (in SP 800-208). It is especially well-suited for long-term integrity protection of firmware, IoT devices, and critical infrastructure systems. Our CodeSign Secure v3.02 integrates LMS via PQSDK, allowing you to easily sign software with quantum-resistant credentials that stay trustworthy for years to come.

What’s even better? You can use these PQC algorithms in hybrid mode too, i.e., signing with both classical (RSA/ECC) and quantum-safe keys, giving you the flexibility to transition smoothly without breaking compatibility.

Hybrid Signing Workflows: Supporting Classical and PQC Algorithms

Let’s be honest, most organizations aren’t going to ditch RSA or ECC overnight. And that’s totally okay. Transitioning to Post-Quantum Cryptography (PQC) is a journey, not a flip of a switch.

That’s why CodeSign Secure v3.02 is built to support hybrid signing workflows—letting you combine both classical (RSA/ECC) and quantum-safe (MLDSA/LMS) algorithms in the same signing process.

What does that mean for you? You can continue signing software with trusted classical keys for compatibility, while adding a layer of PQC protection for future-proofing. This dual-signature approach ensures your software remains verifiable both today and in a post-quantum world, no matter which type of verifier is checking the signature.
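
To make the dual-signature idea concrete, here is a minimal sketch of one possible hybrid policy: the artifact carries a classical ECDSA signature and an ML-DSA signature, and verification requires both to pass. It assumes the Python cryptography package and the liboqs-python bindings; the container layout and the "both must verify" rule are illustrative assumptions, not the CodeSign Secure implementation.

# Hybrid signing sketch: one classical (ECDSA P-256) and one quantum-safe (ML-DSA)
# detached signature over the same artifact; verification requires BOTH to pass.
# Assumes the `cryptography` package and liboqs-python; illustrative only.
import oqs
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

artifact = b"example application binary"
PQC_ALG = "ML-DSA-65"   # name depends on the installed liboqs version

# Classical signature
ec_private = ec.generate_private_key(ec.SECP256R1())
ec_public = ec_private.public_key()
classical_sig = ec_private.sign(artifact, ec.ECDSA(hashes.SHA256()))

# Quantum-safe signature
pqc_signer = oqs.Signature(PQC_ALG)
pqc_public = pqc_signer.generate_keypair()
pqc_sig = pqc_signer.sign(artifact)

def verify_hybrid(data: bytes) -> bool:
    """Accept the artifact only if the classical AND the PQC signatures verify."""
    try:
        ec_public.verify(classical_sig, data, ec.ECDSA(hashes.SHA256()))
    except InvalidSignature:
        return False
    with oqs.Signature(PQC_ALG) as verifier:
        return verifier.verify(data, pqc_sig, pqc_public)

print("hybrid signature valid:", verify_hybrid(artifact))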

Why it matters:

  • Smooth Transition: Adopt PQC at your own pace without breaking existing systems or user workflows.
  • Increased Security: Benefit from the strengths of both classical and quantum-resistant signatures.
  • Broad Compatibility: Maintain support for legacy tools and platforms while preparing for upcoming standards.

Whether you’re signing application binaries, firmware, or container images, CodeSign Secure handles hybrid signing cleanly and efficiently, integrated into your CI/CD pipelines and approval workflows like it’s always been there.

So, if you’re looking to prepare for tomorrow without disrupting today, hybrid signing is the bridge, and CodeSign Secure makes crossing it seamless.

Using MLDSA and LMS in CodeSign Secure v3.02: Step-by-Step Guide

MLDSA

Step 1. Go to our CodeSign Secure’s Signing Request Tab and click on Create MLDSA Key.

Step 2. Enter the necessary details in the form (the Key Parameter field selects the MLDSA algorithm parameter set – MLDSA 44, 65, or 87).

Step 3. After entering the necessary details, click on the Create button.

Step 4. Your MLDSA key will be created in the HSM.

Step 5. Now, we will create a digital signature using this MLDSA private key in the HSM. So, select Get MLDSA Signature and click on it.

Step 6. You’ll be prompted for the necessary details:

  • Key Alias is the key identifier you entered when creating the MLDSA key.
  • Enter the SHA-256 hash of the file you want to sign.
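
If you need to compute that hash locally, a small snippet like the one below will do it (Python standard library; the file name is just an example):

# Compute the SHA-256 hash of the file you want to sign (file name is an example).
import hashlib

with open("installer.exe", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print(digest)   # paste this value into the Get MLDSA Signature form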

Step 7. After entering the appropriate details, click on the Create button.

Step 8. A .sig file with the MLDSA Signature will be downloaded to your local system.

Step 9. Open the downloaded .sig file to see your MLDSA signature.

LMS via PQSDK

Step 1. Extract the PQSDK tarball for installation

tar -xvzf libpqsdk-*.tar.gz

Step 2. Set up the environment variables required for using PQSDK tools.

source setup.sh

Step 3. Generate an LMS private key using PQSDK

./pq-tool genkey --algo LMS --key-out lms.key

Step 4. Extract the LMS public key from the private key

./pq-tool getpub --key-in lms.key --pub-out lms.pub

Step 5. Sign a file (here, data.txt) using the LMS private key and output a signature file

./pq-tool sign --key-in lms.key --in data.txt --sig-out data.txt.sig

Step 6. Verify the LMS Signature of the signed file using the corresponding public key

./pq-tool verify --pub-in lms.pub --in data.txt --sig-in data.txt.sig

Step 7. Create and Load CodeSafe SEEWorld:

./csg-compile --input sign_lms.c --output sign_lms.see
./csg-load --load sign_lms.see --name sign-lms-world

  • csg-compile compiles the signing logic into a SEEWorld (secure enclave) module
  • csg-load loads the SEEWorld into the HSM under the given name

Step 8. Execute LMS signing of a sample file within the loaded CodeSafe SEEWorld environment.

./csg-exec --world sign-lms-world --sign data.txt --sig-out data.txt.sig

Step 9. Verify the signature created inside SEEWorld using the public key

./pq-tool verify --pub-in lms.pub --in data.txt --sig-in data.txt.sig

Use Cases of PQC-Based Code Signing

  • Firmware Signing: Firmware updates are a critical target for attackers, especially in IoT, automotive, and industrial systems. Post-quantum code signing ensures that even if quantum computers become a reality tomorrow, firmware updates pushed today stay secure and verifiable. Using PQC algorithms like LMS or ML-DSA helps future-proof these devices against quantum threats, all while maintaining trust in device integrity.
  • Enterprise Applications: Enterprise software often includes confidential logic and connects with sensitive systems. By using PQC-based code signing, organizations can safeguard their internal applications from tampering, even in a post-quantum world. It’s especially important for apps that handle identity, encryption, or compliance-sensitive operations, where traditional signatures might eventually become weak.
  • Long-Term Support (LTS) Software: Software that’s going to be maintained for a decade or more, like OS kernels, embedded device stacks, or regulated financial systems, needs to be protected for the long haul. PQC-based code signing ensures that those signatures will still be considered secure well into the future, even as cryptographic standards evolve. It’s a proactive move for any vendor offering long-term support.

Compliance and Regulatory Alignment with PQC Readiness

Staying ahead of compliance requirements is more than just ticking boxes; it’s about being proactive and future-ready. As regulatory bodies and standards organizations like NIST, ETSI, and NSA start recommending or mandating post-quantum cryptography (PQC), companies that adopt PQC early are putting themselves in a strong position.

For example, NIST has already published its first finalized PQC standards, and governments around the world are pushing agencies and vendors to start migrating. If your code signing process already supports algorithms like LMS or ML-DSA (the standardized form of Dilithium), you’re not just checking a security box; you’re aligning with emerging compliance frameworks that will soon be expected across industries.

Being PQC-ready also helps in audits. Whether it’s for SOC 2, ISO 27001, or sector-specific frameworks (like automotive’s UNECE WP.29 or healthcare’s HIPAA), showing that you’re using quantum-resistant signatures signals forward-thinking risk management and long-term cryptographic hygiene.

In short, PQC isn’t just a technical upgrade; it’s fast becoming a compliance expectation. Getting ahead now avoids the scramble later.

Conclusion

The shift toward post-quantum cryptography isn’t a distant concern; it’s already happening. Whether you’re signing firmware, securing enterprise applications, or delivering long-term support software, adopting PQC ensures your digital signatures stay secure in a post-quantum world.

CodeSign Secure is built with this future in mind. With native support for PQC algorithms like LMS and ML-DSA, integration with trusted HSMs, and seamless compatibility with your existing CI/CD pipelines, it delivers a robust, compliant, and forward-looking code signing solution.

By choosing our CodeSign Secure, you’re not just preparing for tomorrow’s threats; you’re leading with confidence today.

Field-Level Encryption: Ensuring Data Privacy and Security

With the rise of cyber-attacks, data breaches, and privacy concerns, organizations are looking for advanced solutions to safeguard sensitive information or more precision in data security beyond traditional encryption methods like full-disk encryption and column-level encryption.

Full-disk encryption encrypts the entire storage drive, ensuring that all data is protected while the system is powered off. Column-level encryption secures specific columns within a database, allowing for more targeted protection than full-disk encryption but still at a broader level than individual data fields. However, both serve the same purpose of protecting the data at rest.

While these approaches are effective in safeguarding large volumes of data, they often fall short in providing the granularity and precision that modern security requirements demand. Full-disk encryption, for instance, protects data only while the system is powered off; once booted and authenticated, all data becomes accessible to authorized (or compromised) users.

This is where field-level encryption steps in. This method of encryption adds an extra layer of protection to specific pieces of data, ensuring that only authorized parties can access or view sensitive fields in databases or systems. Let’s learn more about Field-Level Encryption (FLE), how it works, and the challenges that come with its implementation.

What is Field-Level Encryption?

Field-Level Encryption refers to the encryption of individual data fields within a database or storage system, instead of encrypting the entire dataset or storage container. Each field within a data record is encrypted using a unique encryption key, and only authorized users or systems with the correct decryption key can view or modify the contents of those specific fields.

Unlike traditional encryption methods that protect an entire file or database, field-level encryption focuses on protecting sensitive data within those files or databases. This ensures that only particular pieces of sensitive information, such as passwords, credit card numbers, or social security numbers, are encrypted, while other data in the same record remains unencrypted and accessible.

Encryption Method | Scope | Granularity | Use Case | Performance Impact
Full-Disk Encryption | Entire storage disk or drive | Low (entire disk) | Laptop encryption, device loss protection | Minimal (done at hardware level)
Column-Level Encryption | Specific database columns | Moderate (column level) | Encrypting SSNs, card numbers in a database | Moderate (depending on query complexity)
Field-Level Encryption | Individual fields within a database or file | High (field-specific) | Fine-grained control over sensitive personal data | Higher (due to encryption/decryption per field)

To better understand the concept, let’s consider a customer database where personal information such as names, addresses, and phone numbers is stored. While encrypting the entire database is a common practice, encrypting only sensitive fields such as credit card numbers, social security numbers, or other Personally Identifiable Information (PII) can reduce the amount of data that needs to be encrypted while still ensuring compliance with security standards.

How Does Field-Level Encryption Work?

Field-level encryption works by targeting specific pieces of data within a database. Each field that needs encryption is processed individually, and encryption keys are assigned to each field. Let’s break down the process:

  1. Encryption Key Assignment
    A unique encryption key is assigned to each field that requires protection. This key can be either a symmetric key (the same key used for both encryption and decryption) or an asymmetric key (a public-private key pair). This depends on the implementation and the level of security required. 
  2. Data Encryption
    Before sensitive data is stored, it is encrypted using the designated key for that field. For example, a user’s credit card number might be encrypted with a specific key, and this key would only be known to authorized users or systems. 
  3. Data Storage
    Once the data is encrypted, it is stored in the database alongside other unencrypted fields. The encrypted data is stored as unreadable ciphertext, which adds an additional layer of protection. 
  4. Data Decryption
    When authorized users or systems need to access the encrypted field, they must use the corresponding decryption key to convert the ciphertext back into its original readable form. Only those with the appropriate permissions or keys can decrypt and access the sensitive data. 
  5. Key Management
    The encryption and decryption process hinges on the management of keys. If the keys are compromised or lost, encrypted data may become inaccessible, which underscores the importance of robust key management systems.

This approach offers a more granular level of control over which data is protected, allowing organizations to protect only the most sensitive information while leaving other less sensitive data in an accessible format.
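
As a minimal sketch of this flow (using AES-256-GCM from the Python cryptography package, with an in-memory dictionary standing in for a real key management system), only the sensitive field is encrypted while the rest of the record stays readable:

# Field-level encryption sketch: only the sensitive field is encrypted; the rest
# of the record stays in plaintext. Uses AES-256-GCM from the `cryptography`
# package; the in-memory key dictionary stands in for a real KMS.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key_store = {"customer.card_number": AESGCM.generate_key(bit_length=256)}

def encrypt_field(field_name: str, value: str) -> bytes:
    key = key_store[field_name]                  # step 1: a key assigned to this field
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, value.encode(), field_name.encode())
    return nonce + ciphertext                    # steps 2-3: stored as unreadable ciphertext

def decrypt_field(field_name: str, blob: bytes) -> str:
    key = key_store[field_name]                  # step 4: only holders of the key can decrypt
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, field_name.encode()).decode()

record = {
    "name": "Alice Example",                                              # left in plaintext
    "card_number": encrypt_field("customer.card_number", "4111111111111111"),
}
print(decrypt_field("customer.card_number", record["card_number"]))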

Types of Field-Level Encryption

There are several different methods of implementing field-level encryption, each suitable for different use cases and security requirements. Some of the most common types include:

Transparent Field-Level Encryption

This type of encryption is typically implemented at the application or database layer. It is transparent because the encryption and decryption processes are automatically handled by the application or database engine. Users or applications do not need to manually encrypt or decrypt data; it is done behind the scenes without any intervention.

For instance, Microsoft SQL Server’s Always Encrypted feature allows sensitive data such as social security numbers or credit card numbers to be encrypted in the database while remaining accessible to authorized applications.

Manual Field-Level Encryption

In contrast to transparent encryption, manual field-level encryption requires explicit encryption and decryption operations to be performed by the application or user. This method offers more control over how the encryption is implemented, but may also increase complexity and development time.

For example, a fintech startup building a custom API to store customer bank account details might use a key management service such as AWS KMS (through its SDK) to manually encrypt and decrypt each account number before writing to or reading from the database.
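
A rough sketch of that manual approach might look like the following (assuming the boto3 SDK, configured AWS credentials, and an existing KMS key; the key ARN and field names are placeholders):

# Manual field-level encryption sketch using AWS KMS via boto3. Assumes boto3 is
# installed, AWS credentials are configured, and KEY_ID points at an existing
# KMS key; the ARN and field names below are placeholders.
import boto3

kms = boto3.client("kms")
KEY_ID = "arn:aws:kms:us-east-1:111122223333:key/example-key-id"   # placeholder

def encrypt_account_number(account_number: str) -> bytes:
    response = kms.encrypt(
        KeyId=KEY_ID,
        Plaintext=account_number.encode(),
        EncryptionContext={"field": "account_number"},   # binds the ciphertext to this field
    )
    return response["CiphertextBlob"]                    # store this value in the database

def decrypt_account_number(blob: bytes) -> str:
    response = kms.decrypt(
        CiphertextBlob=blob,
        EncryptionContext={"field": "account_number"},
    )
    return response["Plaintext"].decode()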

Key-Value Field-Level Encryption

This is a flexible approach in which each field is encrypted using its own unique key. For example, an e-commerce site might encrypt each customer’s credit card information using different encryption keys. This offers an added layer of security since even if one key is compromised, other data remains secure.

To explain it better, let’s consider a health-tech platform that stores patient medical records. They can generate a unique encryption key per patient, which is stored securely in a key management system to ensure that the compromise of one record does not affect others.

Field-Level Encryption with Tokenization

Tokenization is often used in conjunction with field-level encryption to further protect sensitive data. In this approach, the sensitive field (e.g., credit card number) is replaced with a token (a random value) that has no real meaning outside the system. The actual data is stored in an encrypted format, and the token is used for processing or referencing the data without revealing the sensitive information.

For instance, payment processors like Stripe use tokenization to replace customer credit card numbers with randomly generated tokens. These tokens are used during transactions, while the actual card data is encrypted and stored securely in PCI-compliant servers.
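
A simplified sketch of the token-plus-encryption pattern is shown below (the in-memory dictionary stands in for a secured, PCI-scoped token vault; this is a conceptual illustration, not how any particular payment processor is implemented):

# Tokenization sketch: the application stores and passes around a random token,
# while the real card number is kept encrypted in a separate vault. The dict
# below stands in for a secured, PCI-scoped token vault.
import secrets
from cryptography.fernet import Fernet

vault_key = Fernet.generate_key()
vault = {}   # token -> encrypted card number

def tokenize(card_number: str) -> str:
    token = "tok_" + secrets.token_urlsafe(16)   # meaningless outside this system
    vault[token] = Fernet(vault_key).encrypt(card_number.encode())
    return token

def detokenize(token: str) -> str:
    return Fernet(vault_key).decrypt(vault[token]).decode()

token = tokenize("4111111111111111")
print(token)               # safe to log, store, or pass to order-processing systems
print(detokenize(token))   # only the vault can recover the real number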

Regulations & How FLE Helps

To better understand how Field-Level Encryption (FLE) aligns with global data protection regulations, the table below outlines key laws, the types of sensitive data they cover, their stance on encryption, and how FLE specifically supports compliance. This comparison highlights the practical benefits of FLE in meeting legal, technical, and ethical standards for handling sensitive information.

Regulation | Sensitive Data Types | Encryption Requirement | How FLE Helps
GDPR (General Data Protection Regulation) | Names, emails, IPs, and location | Recommended | Supports data minimization and pseudonymization by encrypting only personal fields.
HIPAA (Health Insurance Portability and Accountability Act) | Health records, insurance info, treatments | Addressable | Encrypts ePHI at the field level and supports audit trails for access monitoring.
PCI-DSS (Payment Card Industry Data Security Standard) | PAN, CVV, cardholder info | Mandatory | Encrypts payment fields to reduce PCI scope and protect against breaches.
CCPA (California Consumer Privacy Act) | Personal identifiers, browsing behavior | Strongly encouraged | Secures key personal fields and simplifies compliance with data access/deletion rights.

Benefits of Field-Level Encryption

Field-Level Encryption (FLE) offers a highly targeted approach to secure sensitive information, which is particularly useful for organizations handling personal, financial, or regulated data. By applying encryption only where it’s truly needed, FLE strikes a balance between security, performance, and flexibility. Here are some of the benefits of FLE:

  1. Granular Data Protection
    One of the main benefits of field-level encryption is that it provides granular control over which data is encrypted. This allows organizations to focus their resources on protecting only the most sensitive pieces of data, reducing overhead and improving performance compared to encrypting entire databases. 
  2. Enhanced Security
    Encrypting individual fields means that even if a hacker gains unauthorized access to the database, they will only be able to see encrypted versions of sensitive fields. Without the decryption keys, accessing or tampering with that data becomes significantly harder. 
  3. Regulatory Compliance
    Many industries are governed by strict regulatory requirements, such as PCI DSS (Payment Card Industry Data Security Standard) or GDPR (General Data Protection Regulation). These regulations require organizations to implement strong data protection measures for sensitive data. Field-level encryption helps businesses meet these requirements by ensuring that PII and other sensitive data are encrypted at rest and during transit. 
  4. Reduced Risk of Data Breaches
    Field-level encryption minimizes the exposure of sensitive data. Even if a malicious actor is able to exploit vulnerabilities and access the database, the encrypted fields would remain unreadable without the proper keys. This significantly reduces the risk of a data breach. 
  5. Improved Data Integrity
    By encrypting individual fields, field-level encryption can also help to ensure the integrity of the data. Any unauthorized attempts to modify encrypted fields would result in unreadable or invalid data, signalling a potential breach or tampering attempt. 
  6. Flexibility in Data Access
    Since only selected fields are encrypted, organizations can still access and process non-sensitive data without significant delays or performance hits. This allows businesses to maintain operational efficiency while securing sensitive data.

Challenges of Field-Level Encryption 

Implementing field-level encryption offers strong data protection but comes with its own set of challenges. Below are some of the key difficulties organizations face when adopting this approach: 

  1. Key Management Complexity
    One of the major challenges of implementing field-level encryption is managing the encryption keys. Each field that is encrypted requires a unique key, and these keys must be securely stored and rotated regularly. Without proper key management practices, the security of the entire encryption system could be compromised. 
  2. Performance Overhead
    Encryption and decryption processes can introduce performance overhead, especially when dealing with large datasets. If the application or database frequently needs to access encrypted fields, it can result in slower read and write operations, which may affect system performance. 
  3. Complex Integration
    Integrating field-level encryption into an existing system can be complex, especially for legacy applications that were not originally designed with encryption in mind. Organizations may need to update their infrastructure, databases, and applications to support encryption at the field level. 
  4. Limited Querying Capabilities
    Encrypted data is typically stored as ciphertext, which makes it difficult to perform queries on sensitive fields without first decrypting them. This can impact the ability to run certain types of queries or generate reports that rely on sensitive data fields.

Use Cases for Field-Level Encryption

Field-level encryption is particularly useful in industries where sensitive data is regularly processed. Here are some common use cases:

  1. Financial Sector
    In banking and finance, customer data such as bank account numbers, credit card details, and transaction histories must be securely protected. Field-level encryption ensures that only authorized personnel can access this sensitive information, reducing the risk of financial fraud and data breaches. 
  2. Healthcare Industry
    Healthcare organizations must comply with regulations like HIPAA (Health Insurance Portability and Accountability Act), which mandates the protection of patient health records. Field-level encryption can be used to secure patient data, such as social security numbers, diagnoses, and medical history, while allowing other non-sensitive data to remain accessible for analysis and reporting. 
  3. E-Commerce
    In e-commerce, customer data such as credit card numbers and addresses must be protected to prevent fraud and identity theft. Field-level encryption ensures that sensitive data is encrypted before being stored or transmitted, while allowing the e-commerce platform to process orders and handle other non-sensitive information seamlessly. 
  4. Government and Public Sector
    Government agencies and public organizations often deal with classified or sensitive data. Field-level encryption can be used to protect classified information, ensuring that only authorized personnel have access to it while maintaining the integrity of the system. 

How can Encryption Consulting help?

At Encryption Consulting, we offer comprehensive Encryption Advisory Services designed to enhance your organization’s data security posture. Our services help you identify and address encryption-related vulnerabilities, strengthen cryptographic protocols, and ensure full compliance with industry regulations and standards.

Our Encryption Audit Service provides a thorough examination of your current encryption practices, uncovering gaps and weaknesses that could lead to data breaches or compliance issues. Through detailed assessments and expert analysis, we help you align your encryption strategy with best-in-class security practices.

We leverage a custom encryption assessment framework tailored to your specific environment, incorporating globally recognized standards such as NIST, FIPS 140-2, GDPR, and PCI DSS. This framework enables us to deliver precise, actionable recommendations that improve your cryptographic architecture, key management, and data protection mechanisms. 

Discover how our Encryption Advisory Services can secure your digital assets and future-proof your security infrastructure. For more information or to schedule a consultation, contact our team of professional advisors today.

Conclusion

Field-Level Encryption provides a powerful solution for protecting sensitive data, offering granular control over which data is encrypted and who can access it. By encrypting specific fields within a database, organizations can safeguard their data while maintaining efficiency and compliance with regulations. Despite challenges such as key management and performance overhead, the benefits of field-level encryption (enhanced security, regulatory compliance, and reduced risk of data breaches) make it an essential tool for modern data protection.

As cyber threats continue to evolve, field-level encryption will remain a crucial component in the fight against data breaches and privacy violations. Emerging trends such as homomorphic encryption, post-quantum algorithms, and encryption-as-a-service are shaping the future of FLE.

Public vs. Private Keys: Your Guide to Online Security and Privacy 

Imagine you send a message to a person on the other side of the world. You lock the message in a box with a key, and then send it across continents, hoping no one intercepts it. But there’s a problem—if anyone, anywhere, manages to get a copy of that key, they can unlock the box and read everything inside.

This was the unfortunate state of digital communication in the early days. Before the internet became the vast, interconnected web we know today, people relied on a single shared key to encrypt and decrypt messages—a method known as symmetric cryptography, where the same key is used for both encryption and decryption, such as in algorithms like AES (Advanced Encryption Standard) or DES (Data Encryption Standard).

It worked well enough in controlled environments, but as communication expanded globally, symmetric cryptography faced a major challenge: securely distributing the encryption key. There was always the risk of eavesdropping, where attackers could intercept the key during transmission. And as more people connected, the system became harder to manage—scaling securely across countless users was nearly impossible. So the question arose: how could you safely share a key with someone halfway across the globe without anyone intercepting it? 

As the popularity of the internet increased, sensitive information like bank details, corporate secrets, government intelligence, and private conversations began moving through exposed digital networks. This exposed the limitations of symmetric cryptography.

Then came a revolutionary idea that would save the future of cybersecurity: public and private key cryptography, also known as asymmetric cryptography. Instead of relying on a single secret key shared between sender and receiver, this new method uses two keys: one public, one private. The public key could be shared openly with anyone, while the private key remained safely guarded. Messages encrypted with the public key could only be decrypted with the corresponding private key, and vice versa.

Suddenly, it became possible to send confidential information over insecure networks without ever needing to exchange secret keys in advance. Techniques like RSA and ECC made this possible—mathematically complex systems that form the backbone of secure communication today. 

Now, let’s look at how public and private keys work together.  

How do private and public keys work together? 

Public and private keys work like a digital lock-and-key system, ensuring that online communication remains private, secure and trustworthy. They work in two main ways: 

  1. Encryption & Decryption – Keeping Information Private
    • Purpose: To ensure that the message is only received by the person it’s meant for.
    • The public key can be accessed by everyone (shared with others), but the private key is kept secret.
    • When you want to share a private message with someone:
      • You use their public key to encrypt (lock) the message.
      • Only their private key can decrypt (unlock) and read it.
    • Even if someone intercepts the message, they would require the private key to read it.
    • Algorithm examples: This principle is used in encryption algorithms like RSA-OAEP and ECIES (Elliptic Curve Integrated Encryption Scheme).
    • Important note: In practice, public key encryption is often used to securely exchange a symmetric key, which is then used for encrypting the actual data. This is how TLS (used in HTTPS) works—combining the efficiency of symmetric encryption with the security of asymmetric key exchange.
    • Real-life example: Secure websites (HTTPS), online banking, and messaging apps use this hybrid approach to keep your data safe.
  2. Digital Signatures – Proving Identity and Authenticity
    • Purpose: To prove that the message was sent by you and hasn’t been altered.
    • You use your private key to create a digital signature—a unique stamp on your message.
    • This process typically involves a hash function (which generates a fixed-size digest of the message) and a signature algorithm (like RSA or ECDSA) to encrypt the hash with your private key.
    • The person receiving your message can verify that signature using your public key.
    • If it checks out, they know:
      • That you sent the message (authenticity).
      • That the message has not been changed (integrity).
    • Real-life example: Software updates, digital contracts, and blockchain transactions use this to prevent tampering and confirm identity.
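
The following sketch walks through both uses described above with the Python cryptography package: RSA-OAEP for confidentiality and RSA-PSS for signatures. It is a minimal illustration, not a production protocol (real systems typically wrap a symmetric session key rather than the message itself).

# Minimal illustration of the two uses of a key pair, using the `cryptography`
# package: RSA-OAEP encryption and RSA-PSS digital signatures.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
public_key = private_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

# 1. Encryption & decryption: anyone with the public key can encrypt; only the
#    holder of the private key can decrypt.
ciphertext = public_key.encrypt(b"meet me at noon", oaep)
print(private_key.decrypt(ciphertext, oaep))

# 2. Digital signature: the private key signs; anyone with the public key verifies.
message = b"I, Alice, approve this transaction"
signature = private_key.sign(message, pss, hashes.SHA256())
public_key.verify(signature, message, pss, hashes.SHA256())   # raises InvalidSignature if tampered
print("signature verified")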

Public vs Private Keys

Category | Aspect | Public Key | Private Key
Components | RSA elements | Modulus (n), public exponent (e) | Modulus (n), private exponent (d)
Components | ECC elements | Curve name, public point (Q = d × G) | Curve name, private scalar (d)
Visibility | Visibility | Shared publicly | Kept secret
Visibility | Distribution | Freely distributed (e.g., with TLS certificates) | Never shared; kept secure
Usage | Purpose | Encrypt data, verify digital signatures | Decrypt data, create digital signatures
Usage | Who uses it | Anyone (recipients, verifiers) | Only the owner
Usage | Examples | Email encryption, website certificate validation | Digital signing, secure authentication
Storage | File formats | .crt, .cer, .pem | .key, .pem, .pfx
Storage | Typical storage location | Public repositories, certificates | Secure storage (HSMs, encrypted files)
Security | Role in security | Establishes trust and enables secure communication | Enables confidentiality and authentication
Security | If exposed | Trust may be reduced (e.g., impersonation) | Severe risk: allows full compromise of encrypted/signed content
Security | Ownership & control | Associated with an identity (e.g., domain, person) | Controlled exclusively by the key owner

Real-Life Examples

  1. Secure Web Browsing (HTTPS)

    When you visit a secure website like https://example.com:

    As part of its SSL/TLS certificate, the website sends you its public key. Using this key, your browser and the website perform a process called the TLS handshake to establish a secure communication channel. During this handshake, a session key is created using public key cryptography. This session key is then used to encrypt and decrypt all data exchanged between the browser and the website, ensuring fast and secure communication.

    While the website’s public key is used during the handshake to establish the session key, the actual data (like passwords) is encrypted using this session key, not the public key itself. Only the website’s private key can decrypt the session key securely, ensuring that your sensitive data can only be read by the real site.

  2. Cryptocurrencies (e.g., Bitcoin, Ethereum)

    The ownership of your crypto wallet is verified through your private key, which is used to sign transactions. Your public key, on the other hand, is used to derive your wallet address, which is shared publicly to receive funds.

    In the case of Bitcoin, the wallet address is a hashed version of your public key, and it’s the address others use to send you cryptocurrency. If someone knows your public key (or wallet address), they can send you crypto—but only your private key can sign and authorize transactions to spend that crypto, ensuring that only you can control the funds in your wallet.

  3. Email Encryption (PGP/GPG)

    With PGP (Pretty Good Privacy) or its open-source implementation, GPG (GNU Privacy Guard), you give your public key to friends, allowing them to encrypt emails sent to you. To encrypt the message, PGP/GPG first uses symmetric encryption to encrypt the actual message, ensuring efficient encryption of larger content. Then, it encrypts the session key used for symmetric encryption with your public key, leveraging asymmetric encryption for secure key exchange.

    When you receive the email, you use your private key to decrypt the session key, and with the session key, the message itself is decrypted. This ensures that only you, with your private key, can read the encrypted email.

  4. SSH Authentication (Server Access)

    SSH (Secure Shell) typically uses public key authentication with a challenge-response mechanism to securely authenticate users. In this model, your private key is stored on your computer, while the public key is stored on the server.

    When you attempt to connect, the server generates a random challenge (usually a large number or string), which is sent to your client. The client then uses the private key to sign the challenge. This signed response is sent back to the server. The server, having the public key, can verify the signature. If the server successfully verifies the response, it grants access to the user.

    This challenge-response model works as a secure proof of possession: even if an attacker intercepts the challenge, they cannot respond without having access to the private key. This method is significantly stronger than password-based authentication because the private key never leaves your device, reducing the risk of interception. Moreover, the challenge-response mechanism ensures that only someone with access to the correct private key can authenticate, making it highly resistant to brute force or phishing attacks. (A simplified simulation of this exchange appears after this list.)

  5. Software Updates & Signatures

    When developers release software updates, they sign the updates using a private key. This signature doesn’t encrypt the software itself; instead, it ensures the integrity and authenticity of the update, allowing you to verify that the software has not been tampered with and is indeed from the legitimate developer.

    Your device uses the developer’s public key to verify the signature against the update, checking that it matches the original software. If the signature is valid, you can be confident that the update is authentic and has not been altered during transmission.

    Common code-signing algorithms like RSA or ECDSA are used to create and verify these digital signatures, ensuring the security of the software distribution process.
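
Coming back to the SSH example above, here is a simplified simulation of the challenge-response exchange using Ed25519 from the Python cryptography package. It mirrors the idea of proving possession of a private key; it is not the actual SSH wire protocol.

# Simplified challenge-response sketch (conceptually similar to SSH public key
# authentication, but not the real SSH protocol). Uses Ed25519 from `cryptography`.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Client side: key pair; the public key was previously installed on the server.
client_private = Ed25519PrivateKey.generate()
server_known_public = client_private.public_key()

# Server side: issue a random challenge.
challenge = os.urandom(32)

# Client side: prove possession of the private key by signing the challenge.
response = client_private.sign(challenge)

# Server side: verify the response against the stored public key.
try:
    server_known_public.verify(response, challenge)
    print("access granted")
except InvalidSignature:
    print("access denied")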

Security Considerations

In public-key cryptography, key security is critical because a compromise can lead to data breaches, impersonation, and fraud. Here’s a breakdown of key security considerations:

  1. Protecting the Private Key

    The private key is the cornerstone of asymmetric cryptography—if it’s compromised, an attacker can decrypt sensitive data, forge signatures, or impersonate the legitimate user. Proper protection is therefore critical.

    Best Practices for Protection:
    • Hardware Security Modules (HSMs): Use HSMs—dedicated physical devices that securely generate, store, and manage cryptographic keys. Alternatives include Trusted Platform Modules (TPMs) and smart cards.
    • Encryption at Rest: Encrypt the private key file on disk using strong symmetric encryption algorithms like AES-256.
    • Strong Passphrases: Use strong, unique passphrases to protect the key file itself from unauthorized access. Note: the passphrase is not itself the encryption key; it is fed through a key-derivation function to produce the key that encrypts the private key file.
    • Access Controls: Enforce strict file permissions, use role-based access control (RBAC), and isolate key access to only those who truly need it.
    • Multi-Factor Authentication (MFA): Require MFA to access any system or application that utilizes private keys, adding a critical layer of defense.
    • Avoid Key Sharing: Never share private keys—even within internal teams. Each individual or service should use its own key pair.
    • Key Rotation and Expiration: Regularly rotate keys and define expiration policies to minimize the impact of potential key compromise.
  2. Ensuring Public Key Authenticity

    While public keys are meant to be shared, verifying who they belong to is crucial. Without proper verification, attackers can perform Man-in-the-Middle (MitM) attacks by substituting their own keys.

    Methods for Verifying Public Key Authenticity:
    • Digital Certificates and Public Key Infrastructure (PKI): PKI provides a centralized trust model. It uses X.509 certificates, which bind a public key to an identity (like a domain or person) and are digitally signed by a trusted Certificate Authority (CA). This CA signature can be verified by clients using a chain of trust that leads back to a root CA they already trust.

      Example: In HTTPS (TLS), a browser trusts that it’s talking to bank.com because the site’s certificate is signed by a known CA.

    • Web of Trust (Used in PGP/GPG): In contrast to PKI, the Web of Trust is a decentralized model. Users verify each other’s identities and sign each other’s public keys, creating a network of trust relationships.

      Example: Alice verifies Bob’s key and signs it. Carol may trust Bob’s key because she trusts Alice.

    • Certificate Pinning: Applications or browsers “pin” a specific public key or certificate authority. Only the pinned key (or a certificate signed by it) is accepted in future sessions. This prevents attackers from using fraudulent certificates—even if a trusted CA is compromised.

      Example: Mobile apps often use certificate pinning to prevent accepting spoofed certificates.

    • Key Fingerprints: A fingerprint is a short, unique hash of a public key. Two users can verify the fingerprint out-of-band (e.g., over a phone call or in person) to confirm the key’s authenticity.

      Example: During SSH setup, users may compare key fingerprints via a secure, trusted communication channel.
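
As a generic illustration of the idea, the sketch below hashes a canonical encoding of a public key to produce a short fingerprint. Note that real tools (ssh-keygen, GPG) compute fingerprints over their own key encodings, so their values will differ from this simplified version.

# Generic public-key fingerprint sketch: hash a canonical encoding of the public
# key and compare the short digest out-of-band. Real tools (ssh-keygen, GPG) use
# their own key encodings, so their fingerprints differ from this simplified form.
import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

public_key = Ed25519PrivateKey.generate().public_key()
encoded = public_key.public_bytes(
    serialization.Encoding.DER,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)
fingerprint = hashlib.sha256(encoded).hexdigest()
print(":".join(fingerprint[i:i + 2] for i in range(0, len(fingerprint), 2)))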

How Can Encryption Consulting Help?

At Encryption Consulting, we specialize in designing, implementing, and managing secure Public Key Infrastructure (PKI) solutions tailored to your organization’s needs. 

Whether you’re aiming to strengthen encryption, enable secure digital identities, or ensure compliance with industry standards, our PKI services provide the foundation for secure communication and digital trust. 

Our offerings include: 

  • PKI Assessments – Evaluate the current state of your PKI environment and identify areas for improvement.
  • PKI Architecture Design – Build scalable, secure, and standards-compliant PKI infrastructures.
  • Certificate Lifecycle Management – Automate and manage certificate issuance, renewal, and revocation.
  • PKI-as-a-Service (PKIaaS) – Fully managed PKI solutions hosted and operated by our experts.
  • Asymmetric Encryption Consulting – Guidance on the secure and effective use of public/private key cryptography.
  • Operational Resilience Planning – Ensure your key infrastructure is robust against evolving cyber threats.

Our expertise ensures that your use of public and private keys is not only technically sound but also operationally resilient, protecting your data, applications, and users in today’s threat landscape. 

Conclusion

Public and private key cryptography has become the backbone of modern digital security. It solves the core problem of trust in open networks—enabling secure communication, identity verification, and data integrity without the need to secretly exchange keys. From browsing secure websites and sending encrypted emails to managing cryptocurrency wallets and verifying software updates, public-private key pairs are quietly working behind the scenes to protect our digital lives. 

By understanding how these keys work and why protecting them is important, individuals and organizations can take informed steps to strengthen their cybersecurity. As online threats continue to grow, Public Key Infrastructure (PKI) remains one of the most powerful tools we have to ensure privacy, security, and authenticity in an increasingly connected world. 

Looking ahead, with the emergence of quantum computing, traditional public key algorithms may become vulnerable. Organizations should begin exploring post-quantum cryptography to prepare for the next era of cryptographic security.

How Code Signing Helps in the Software Development Cycle

As digital technologies grow, cyber threats are evolving at the same pace, making the security and integrity of software non-negotiable. Code signing plays a major role in securing software authenticity, preventing tampering, and establishing trust between developers and end-users. However, simply signing the code is not enough for organizations. They must adopt best practices to mitigate risks and maintain a secure software supply chain.

Before diving into the best practices, let’s first understand what Code Signing is, its role in DevOps and CI/CD pipelines, and how it helps prevent real-world cyberattacks. 

What is Code Signing, and how does it work?

Code signing is a security mechanism used to verify the authenticity and integrity of software code, scripts, or executables. It involves digitally signing software to confirm that it comes from a trusted source and has not been altered or tampered with since it was signed. 

The process begins with hashing, where the software is converted into a unique, fixed-length digest using a hash algorithm such as SHA-256. This hash is then encrypted with the developer’s private key (securely stored, often in a Hardware Security Module, or HSM) to create a digital signature, which is embedded into the software along with metadata such as the timestamp and certificate (public-key) details.

When a user downloads the software, their system decrypts the signature using the developer’s public key (from the attached certificate) and compares it with a freshly computed hash of the downloaded file. If they match, the file is verified as unaltered. 
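
Conceptually, that round trip looks like the sketch below (ECDSA via the Python cryptography package; in production the private key would live inside an HSM rather than in process memory, and the public key would be delivered in a certificate):

# Code signing round trip in miniature: hash the software, sign the digest with
# the publisher's private key, then verify on the user's side. Sketch only; real
# signing keys live in an HSM, and the public key arrives via a certificate.
import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, utils

software = b"...bytes of the installer or executable..."

# Publisher side
publisher_private = ec.generate_private_key(ec.SECP256R1())
digest = hashlib.sha256(software).digest()                     # hashing step
signature = publisher_private.sign(
    digest, ec.ECDSA(utils.Prehashed(hashes.SHA256()))         # sign the digest
)

# User side
publisher_public = publisher_private.public_key()
fresh_digest = hashlib.sha256(software).digest()               # recompute the hash locally
publisher_public.verify(
    signature, fresh_digest, ec.ECDSA(utils.Prehashed(hashes.SHA256()))
)   # raises InvalidSignature if the file was altered after signing
print("signature valid: software is unaltered and from the expected publisher")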

Code Signing Flow Diagram

Why Code Signing Matters in Software Development

As mentioned above, code signing plays an important role in ensuring software authenticity and preventing tampering. Let’s explore in more detail how code signing is crucial in software development.

Ensures Software Integrity 

Code signing guarantees that the software has not been altered since it was signed by the developer. When the code is signed, a cryptographic hash is generated and encrypted with the developer’s private key. If even a single byte in the file changes (due to malware injection or corruption), the hash verification fails, thereby alerting users that the software may be compromised. This prevents attackers from distributing modified versions of legitimate software, protecting both developers and end-users. 

Verifies Authenticity and Trustworthiness 

Without code signing, users have no reliable way to confirm whether software comes from a legitimate publisher. Code signing certificates issued by trusted Certificate Authorities (CAs) bind the software to a verified organization or developer. When users install signed software, their operating system displays the publisher’s name (e.g., “Microsoft Corporation” instead of “Unknown Publisher”), which increases trust and reduces security warnings. This is especially important for enterprise software, drivers, and financial applications where source verification is critical.

Unverified publisher vs Verified publisher

Issued by trusted Certificate Authorities (CAs), these certificates come in different types based on the level of validation and trust they offer. 

  1. Individual Validated (IV) Certificates: Designed for individual developers, IV certificates verify the person’s identity using official documents. While suitable for personal or small-scale projects, they offer basic trust and may still trigger warnings on certain systems.
  2.  Organization Validated (OV) Certificates: Issued to legally registered organizations after verifying their business credentials. OV certificates display the company name in the digital signature, offering a moderate level of trust for distributing both internal and public software. 
  3. Extended Validation (EV) Certificates: The highest assurance certificate, EV certificates involve thorough business and identity verification. They deliver instant reputation benefits, suppress security warnings, and prominently display the verified publisher’s name during installations — ideal for public, enterprise, and security-sensitive software. 

Prevents Malware and Supply Chain Attacks 

Cybercriminals often distribute malware by impersonating legitimate software or injecting malicious code into updates. Code signing mitigates this risk by ensuring that only properly signed and verified code executes. If an attacker attempts to modify a signed executable, the digital signature is broken, and the system blocks it. This is particularly vital in supply chain security, where attackers compromise software vendors to distribute trojanized updates, such as the SolarWinds attack. Code signing acts as a safeguard against such threats. 

Facilitates Secure Software Updates

Software updates are a common attack vector where hackers use unsigned updates to push malware. Code signing ensures that only the original publisher can issue updates. When an application checks for updates, it verifies the digital signature before installation, which helps prevent man-in-the-middle (MITM) attacks and unauthorized modifications.

Why Recent Cyber Attacks Prove Secure Code Signing is Essential

Numerous high-profile cyberattacks demonstrate how attackers exploit vulnerabilities in software supply chains, making code signing even more essential. In the past couple of years, there has been a steep rise of around 742% in next-generation software supply chain attacks.

Below are some real-world cyberattacks and the lessons learned over the past few years. 

3CX Supply Chain Attack (March 2023) 

The 2023 3CX supply chain attack marked a significant escalation in software supply chain compromises. Attackers infiltrated the company’s build system to distribute malicious updates that were digitally signed with 3CX’s legitimate certificates. This sophisticated attack was attributed to North Korean state-sponsored actors and affected over 600,000 organizations globally through trojanized versions of the 3CX desktop app.

The incident revealed critical vulnerabilities in build system security and demonstrated that even properly signed software updates can’t be blindly trusted. It underscores the need for multi-layered verification of software integrity, including rigorous checks of build environments and continuous monitoring for abnormal behaviour in signed applications.

Beyond such checks and monitoring, we require multiple layers of protection in addition to code signing. Some of these include:

Software Vulnerability Scanning

Software vulnerability scanners are tools that automatically check applications, code, and software environments for known security weaknesses. These scanners identify issues like outdated libraries, misconfigurations, insecure code patterns, and publicly known vulnerabilities (CVEs). By scanning software before release and during regular updates, developers can detect and fix security problems early, reducing the risk of attackers exploiting them later. Vulnerability scanning is a key part of modern DevSecOps practices, ensuring continuous security throughout the software lifecycle. 

Software Bill of Materials (SBOM)

An SBOM (Software Bill of Materials) is a detailed list of all the components, libraries, and dependencies used in a software application. It works like an ingredient list for software, making it easy to track what’s inside a program. By maintaining an SBOM, organizations can quickly identify if any third-party or open-source component they use has known vulnerabilities, outdated libraries, or licensing risks. It plays a vital role in improving software security, especially in managing and reducing risks from the software supply chain. 

MOVEit Transfer Exploitation (June 2023) 

The MOVEit attack by the Cl0p ransomware group showed how supply chain vulnerabilities can bypass code signing protections. While not a direct code-signing breach, the mass exploitation of this widely used file transfer solution allowed attackers to intercept software updates and patches in transit.  

This created a scenario where properly signed software could be swapped with malicious versions during delivery. The incident demonstrated that code signing alone is insufficient – organizations must also verify the integrity of distribution channels and implement checksum validation to ensure that signed packages remain unaltered after signing. 

Applied Materials Incident (February 2023) 

The semiconductor giant’s breach involved stolen credentials being used to access sensitive systems, potentially including code-signing infrastructure. While full details remain undisclosed, the attack demonstrated how social engineering and credential theft can bypass even strong code-signing protections.

This case also underscores the risks of integrating code signing into automated CI/CD pipelines without proper security controls. While automation boosts development speed and efficiency, it can also increase exposure if not managed securely. When signing processes are built into pipelines, attackers who compromise build servers or CI/CD environments can potentially push malicious code and have it automatically signed without triggering alerts. 

To mitigate this, organizations must treat code signing operations as high-trust actions. That means implementing multi-factor authentication, limiting access to signing credentials, using hardware security modules (HSMs) or cloud-based key management solutions, and adding manual approval steps or policy checks before any signing takes place. Automation should be balanced with strong access controls and monitoring to ensure security doesn’t get sacrificed for speed.  

SolarWinds Attack (December 2020) 

The SolarWinds attack was a sophisticated supply chain compromise in which Russian state-sponsored hackers infiltrated the company’s software development systems and secretly inserted malicious code into legitimate updates for SolarWinds’ Orion IT monitoring platform. These tampered updates were digitally signed with SolarWinds’ valid certificates and then distributed to approximately 18,000 customers, including government agencies and major corporations, enabling attackers to spy on victims undetected for months.

By exploiting trusted update mechanisms and misusing code signing, the attackers exposed critical weaknesses in software supply chain security, proving that even properly signed software can be weaponized if build environments are compromised. Code signing alone is therefore not enough; if the build environment is compromised, even signed software can be malicious. To counter such risks, the industry is increasingly turning to reproducible builds as a powerful defense.

A reproducible build ensures that every time source code is compiled, it produces the same binary output, making it possible to verify that what was built is exactly what was intended. This process is often combined with pre-build and post-build hash validation, where a hash of the source code is recorded before the build, and the hash of the output binary is checked after the build. Any mismatch between the expected and actual output can signal tampering or unauthorized changes. Together, reproducible builds and hash validation help detect build-time compromises and add an extra layer of trust and transparency to the software supply chain. 
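
A minimal sketch of that pre-build/post-build hash check follows; the file paths and the source of the expected hashes are placeholders (a real pipeline would fetch them from a signed manifest or an independent, reproducible rebuild):

# Pre-build / post-build hash validation sketch. Paths and expected hashes are
# placeholders; a real pipeline would obtain them from a signed manifest or an
# independent, reproducible rebuild of the same source.
import hashlib
import sys
from pathlib import Path

def sha256_of(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

expected_source_hash = "placeholder-recorded-before-the-build"
expected_artifact_hash = "placeholder-from-an-independent-rebuild"

if sha256_of("source.tar.gz") != expected_source_hash:
    sys.exit("source tree changed before the build: possible tampering")

if sha256_of("build/output.bin") != expected_artifact_hash:
    sys.exit("binary does not match the reproducible build: do not sign")

print("hashes match: safe to proceed to signing")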

Some Best Practices in Code Signing for Securing the Software Supply Chain

As stated earlier, improper code signing practices can introduce vulnerabilities that make applications susceptible to supply chain attacks, malware injections, and unauthorized modifications. Implementing high-end code-signing best practices helps organizations maintain trust, prevent tampering, and safeguard end-users from malicious threats. We will now be discussing some of the code-signing best practices that need to be implemented for secure software development. 

Secure Private Key Storage in HSM 

The foundation of secure code signing lies in protecting private keys from theft or misuse. Hardware Security Modules (HSMs) provide the highest level of protection by storing keys in specialized, tamper-resistant hardware that prevents extraction even if a server is compromised. These devices enforce strict access controls so that keys can be used for cryptographic operations but are never exposed in plaintext.  

Organizations handling sensitive software should use FIPS 140-2 Level 3-certified HSMs, as they not only secure keys but also perform all cryptographic operations internally, eliminating the risks associated with memory-scraping attacks. This approach has become even more clearly essential after incidents like the SolarWinds attack, where keeping signing keys confined to HSMs would have sharply limited what attackers inside compromised build systems could do with them. 

Enforce Multi-Factor Authentication and Approval Workflows 

While HSMs technically protect keys, procedural controls prevent unauthorized use. Implementing multi-factor authentication (MFA) for accessing signing systems ensures that stolen credentials alone can’t initiate signing operations.  

More importantly, establishing multi-person approval workflows where critical releases require authorization from multiple trusted team members creates accountability and reduces risks from both insider threats and credential compromise.  

These controls should be integrated directly into CI/CD pipelines, with clear audit logs showing who approved each signing event. For example, after the JetBrains TeamCity breach, organizations realized that automated signing without human oversight could let attackers freely sign malicious code once they infiltrated build systems. 
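
A minimal sketch of such a multi-person approval gate is shown below. The approver roster, the required approval count, and the logging destination are illustrative assumptions rather than a prescribed implementation.

import logging

TRUSTED_APPROVERS = {"alice@corp.example", "bob@corp.example", "carol@corp.example"}  # example roster

def authorize_release_signing(approvals: set[str], required: int = 2) -> None:
    """Allow signing only after enough distinct, trusted approvers have signed off."""
    valid = approvals & TRUSTED_APPROVERS
    if len(valid) < required:
        raise PermissionError(f"Signing blocked: {len(valid)} of {required} required approvals present")
    logging.info("Release signing approved by: %s", ", ".join(sorted(valid)))  # feeds the audit trail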

Implement Comprehensive Certificate Lifecycle Management

Effective code signing requires active management of certificates beyond their initial issuance. Organizations should issue short-lived certificates that expire after weeks rather than years, immediately revoke certificates at any sign of compromise (publishing revocation status through OCSP and CRLs), and systematically rotate keys to limit exposure windows.

The MOVEit attack showed how long-valid certificates can become liabilities for an organization when vulnerabilities emerge. Modern approaches like certificate transparency logs and automated monitoring tools can help detect suspicious certificate usage patterns before they lead to breaches. For enterprises, integrating these practices with existing PKI infrastructure ensures consistent policy enforcement across all development teams. 
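
One small, concrete piece of such lifecycle management is checking that a signing certificate's validity window actually matches policy. The sketch below uses the Python cryptography package; the 90-day limit and the file name are assumptions for illustration, not recommendations from any standard.

from datetime import timedelta
from cryptography import x509

MAX_VALIDITY = timedelta(days=90)  # example policy for short-lived signing certificates

with open("signing-cert.pem", "rb") as f:          # hypothetical certificate file
    cert = x509.load_pem_x509_certificate(f.read())

validity = cert.not_valid_after - cert.not_valid_before
if validity > MAX_VALIDITY:
    print(f"Policy violation: certificate is valid for {validity.days} days")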

Ensure Secure Signing Access Controls and Enforcement 

Implementing least-privilege RBAC (Role-Based Access Control) for code signing is essential to mitigate supply chain risks. Production signing should be restricted to authorized release engineers through mandatory multi-factor authentication, while developers receive limited permissions in the test environment.  

Automated policy enforcement must block high-risk actions, such as bulk signing of unusual file types, with all activities logged for audit. Modern solutions integrate these controls directly into CI/CD pipelines, combining technical restrictions with workflow approvals. This layered approach, when paired with HSM-protected keys, makes sure that stolen credentials alone can’t compromise signing operations, as demonstrated by post-SolarWinds security enhancements. Regular access reviews maintain both security and operational efficiency. 
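
As one possible shape for such automated policy enforcement, the sketch below blocks bulk requests and unusual file types and logs every decision. The allow-list, threshold, and function name are illustrative assumptions rather than a product feature.

import logging

ALLOWED_EXTENSIONS = {".exe", ".dll", ".msi", ".jar"}  # example allow-list of signable file types
BULK_THRESHOLD = 25                                    # example per-request limit

def enforce_signing_policy(files: list[str], requester: str) -> None:
    """Reject bulk or unusual signing requests and record every decision for audit."""
    if len(files) > BULK_THRESHOLD:
        logging.warning("Blocked bulk signing request (%d files) from %s", len(files), requester)
        raise PermissionError("Bulk signing requires manual approval")
    for name in files:
        if not any(name.lower().endswith(ext) for ext in ALLOWED_EXTENSIONS):
            logging.warning("Blocked unusual file type %s requested by %s", name, requester)
            raise PermissionError(f"File type not allowed for signing: {name}")
    logging.info("Approved signing request from %s for %d files", requester, len(files))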

Establish Continuous Monitoring and Incident Response 

Effective code signing security requires real-time monitoring of all signing activities through centralized logging and alerting. Security teams should monitor for abnormalities such as bulk signing requests, unusual file types being signed, or signing events arising from unexpected locations.  

Integration with SIEM (Security Information and Event Management) systems enables correlation with other security events, facilitating the detection of coordinated attacks. Prepared incident response plans are also equally important, as they outline important steps for key rotation, certificate revocation, and software recall in the event of breaches.  

The Okta breach demonstrated how a delayed response to credential compromises can exponentially increase damage, making rapid detection and containment capabilities critical for signing infrastructure. 

Encryption Consulting’s Role in Enhancing Software Security Through Code Signing 

Encryption Consulting strengthens software security through its CodeSign Secure solution, which enforces best code signing practices to build verifiable trust in software integrity. The platform automates secure signing workflows using HSM-protected keys, cryptographic validation, and granular access controls, all tightly integrated with CI/CD pipelines to ensure security doesn’t slow down development. 

Beyond signing, CodeSign Secure embraces a multi-layered approach to secure the entire software supply chain. This includes support for reproducible builds and pre/post-build hash validation to detect tampering during the build process, generation and management of Software Bills of Materials (SBOMs) to track component-level risks, and seamless integration with vulnerability scanners to identify known threats before code is released. 

By combining enterprise-grade automation with deep security expertise, Encryption Consulting helps organizations stay compliant, prevent supply chain attacks, and turn code signing into a strategic defense layer that protects both their software and brand in today’s high-risk digital environment. 

Conclusion 

Code signing is crucial in securing the software supply chain against tampering, malware, and unauthorized modifications. Real-world attacks have demonstrated that compromised signing processes can lead to devastating breaches, underscoring the importance of secure code signing in maintaining trust and compliance.  

By implementing advanced practices, such as HSM-protected keys, granular access controls, and automated policy enforcement, organizations can ensure software integrity throughout the entire software development lifecycle, from development to deployment.

Event ID 74 in AD CS – Decoded

Introduction

In a Microsoft PKI environment, Event ID 74 from the Microsoft-Windows-CertificationAuthority source signals a problem that shouldn’t be ignored. This event indicates that the Certificate Authority (CA) failed to publish a Base Certificate Revocation List (CRL) to one of its distribution points—usually an Active Directory location or a network share. 

This isn’t just a routine log entry; it’s essentially your CA saying, “I tried to share the latest list of revoked certificates, but I couldn’t.” And that’s a serious red flag for any organization that relies on certificate-based authentication and encryption. 

Why is CRL Publishing Critical? 

The Certificate Revocation List (CRL) is a cornerstone of Public Key Infrastructure (PKI). It tells systems which certificates should no longer be trusted, whether due to compromise, expiration, or manual revocation. Without an up-to-date CRL, clients might unknowingly trust a revoked certificate, which can open the door to security risks like man-in-the-middle attacks or unauthorized access. 

When the CA fails to publish a CRL—especially a base CRL—it can cause significant issues. Clients may start rejecting all certificates issued by that CA once the current CRL expires. This can disrupt secure communications, halt authentication processes, and cause downtime across services that rely on the CA. 

In short, CRL publishing isn’t just a background task; it’s a critical part of maintaining trust and uptime in any PKI-enabled environment. 

Understanding the context of Event ID 74 

What Triggers Event ID 74? 

Event ID 74 typically appears when Active Directory Certificate Services (AD CS) fails to publish a Base CRL to one of its configured locations. These locations are defined in the CA’s CRL Distribution Points (CDP) and can include LDAP paths, HTTP URLs, or local/network file paths. 

When the CA attempts to update the CRL and something goes wrong—like a permissions issue, missing Active Directory object, or network hiccup—it logs Event ID 74 with a detailed message. The event usually includes: 

  • The CA Key Identifier (essentially, which CA tried to publish), 
  • The target URL or location where the CRL was supposed to be published,
  • An error code (like 0x80072098 for insufficient access or 0x8007208d for object not found),
  • And sometimes the name of the host server that encountered the problem. 

So, while it looks like just another log event at first glance, the data it contains is crucial for pinpointing what exactly went wrong during the publishing process.

Severity of the Issue 

Interestingly, Microsoft classifies Event ID 74 as a “Low” severity issue in their official PKI guidance. And while that might be technically true in the short term, especially if other CRL publication points are still working, this label can be a bit misleading. 

In the real world, if CRL publishing consistently fails and no valid CRL is available when the current one expires, you’re looking at major service disruptions. Clients that rely on certificate validation could stop working. Applications that enforce revocation checking might start rejecting valid certificates simply because they can’t verify their status. 

So yes, the severity might be low at the moment the event is logged, but left unresolved, it can snowball into a high-impact availability issue. This is especially true in environments with strict revocation checking or short CRL validity periods. 

Common Causes of Event ID 74 

Permission Issues 

One of the most frequent culprits behind Event ID 74 is a simple but critical permissions problem. Specifically, the Certificate Authority doesn’t have the necessary rights to publish the CRL to the Active Directory. 

When this happens, you’ll typically see the error code: 0x80072098 – ERROR_DS_INSUFF_ACCESS_RIGHTS  

This means the CA tried to write an object in Active Directory (usually under the CDP or AIA containers) but was denied access. This often occurs if:

  • The CA server account doesn’t have Write permissions on the target directory object. 
  • The necessary security ACLs weren’t configured during the setup or were removed accidentally. 
  • AD replication or policy changes affected permission inheritance.

Checking and setting the right permissions on the CDP and AIA containers in AD is often all it takes to resolve this. 

Missing Directory Objects

Another common cause is when the Active Directory object the CA is trying to write doesn’t exist, which leads to 0x8007208D – ERROR_DS_OBJ_NOT_FOUND 

This can happen if: 

  • The CDP/AIA containers or child objects were never created. 
  • The expected LDAP path is wrong or outdated. 
  • Something in your environment, such as a failed migration or AD cleanup, removed these objects. 

In these cases, the CA is essentially trying to publish to a destination that no longer exists or was never properly set up in the first place.  
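
Both error codes discussed above are HRESULTs that wrap standard Win32 directory-service error numbers, so they can be decoded programmatically when triaging logs. A minimal Python illustration:

def win32_code(hresult: int) -> int:
    """Extract the underlying Win32 error number from an 0x8007xxxx HRESULT."""
    return hresult & 0xFFFF

print(win32_code(0x80072098))  # 8344 -> ERROR_DS_INSUFF_ACCESS_RIGHTS
print(win32_code(0x8007208D))  # 8333 -> ERROR_DS_OBJ_NOT_FOUND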

Network Connectivity Problems 

Sometimes the issue isn’t with AD or permissions at all; it’s the network. If the CA can’t reach a domain controller, or the domain controller can’t respond in time, CRL publishing fails. 

This is especially common in: 

  • Multi-site environments with slow WAN links. 
  • Misconfigured DNS. 
  • Or domain controllers that are overloaded or down. 

Even transient network hiccups can cause this event to pop up occasionally, so it’s worth checking connectivity and name resolution if the other causes don’t pan out. 

Investigating Event ID 74 

Reviewing Event Logs 

The first step in troubleshooting Event ID 74 is to head straight to the Event Viewer on your CA server. You’ll find the log under: Applications and Services Logs > Microsoft > Windows > CertificateServices-CA > Operational 

Look for Event ID 74 entries. Once you find one, take a close look at the message. It should include key details such as: 

  • The distribution point (e.g., LDAP or file path) 
  • The CA Key Identifier (to confirm which CA instance logged the event) 
  • The error code and possibly an accompanying text description 
  • The host that was involved in the operation 

This information helps you narrow down whether the issue is related to permissions, missing objects, or network-related problems. 

Key Parameters to Check 

While reviewing the event, pay special attention to the following: 

  • URL or Path: Is it pointing to the right location in the Active Directory or the correct file share? 
  • Error Code: Look up the exact error code. Codes like 0x80072098 (permissions) and 0x8007208d (object not found) are very telling. 
  • Server Host Name: Helpful if you’re troubleshooting in a multi-server CA hierarchy. 

Even if you’ve seen Event ID 74 before, don’t assume it’s the same root cause each time. The parameters can differ depending on the distribution point or what has changed in the environment.  

Sometimes, Event ID 74 shows up alongside other warnings or errors, like failed publication of the Authority Information Access (AIA) or delta CRLs. These can offer additional context or clues. 

If multiple publication errors appear together, that might suggest a broader issue, like a broken connection to Active Directory or a permissions misconfiguration across several containers. 

Pro tip: Also, check the Directory Services and DNS Client logs if you suspect domain controller or name resolution issues. 

Resolving Event ID 74 

Now that you know what triggered Event ID 74 and where to look, let’s talk about how to fix it. Depending on the root cause, the steps can vary, but here are the most common and effective resolutions: 

Step 1: Verifying Permissions 

If the error code is 0x80072098 (insufficient access), your CA likely doesn’t have the required permissions in Active Directory.  

Here’s how to fix it: 

  • Open Active Directory Sites and Services (enable “Show Services Node” if it’s hidden)
  • Navigate to: Services > Public Key Services > CDP and AIA containers. 
  • Right-click the container > Properties > Security tab. 
  • Make sure the CA’s computer account (or the group it belongs to) has Write permission on the relevant container. 

If necessary, use dsacls.exe or PowerShell to verify/set permissions at a more granular level. 

Step 2: Validating Directory Objects

If you see 0x8007208d (object not found), the CA is trying to publish an object that doesn’t exist. 

To resolve: 

  • Use adsiedit.msc to browse the CN=CDP and CN=AIA containers under CN=Public Key Services. 
  • Look for the expected publication points. 
  • If they’re missing, you may need to manually create them or reconfigure the CA to use valid, existing paths. 

Be especially cautious if you’re restoring or migrating a CA; objects might not have been recreated or re-linked properly.

Step 3: Checking Network Connectivity 

If everything seems correct on the CA and in Active Directory, check for network issues: 

  • Run ping, nslookup, or Test-ComputerSecureChannel to ensure the CA can contact the domain controller. 
  • Use tools like PortQry or Test-NetConnection to verify that the RPC and LDAP ports are open. 
  • Check time sync (Kerberos issues can sometimes present weird side effects) 

Slow links or DNS misconfiguration can silently break CRL publishing without throwing obvious errors, so don’t overlook the basics.

Step 4: Reviewing Configuration 

Open the Certification Authority console, right-click the CA, and select Properties. 

  • On the Extensions tab, review the CRL Distribution Points (CDP). 
  • Make sure the paths are valid and reachable and do not point to deprecated or incorrect containers. 
  • If you’ve recently renamed servers or restructured AD, these paths might be stale. 

After making changes, don’t forget to: 

  • Restart the CA service. 
  • Publish a new CRL manually using certutil -crl 
  • Confirm that it appears at the correct location. 

Conclusion

While Event ID 74 might not scream “critical” at first glance, ignoring it can quietly chip away at the reliability and trustworthiness of your PKI. When CRLs don’t publish properly, revocation checking fails—and that can bring down services, break authentication, and create serious security blind spots. 

The key takeaway? Address it early. Most of the time, it’s a straightforward fix—permissions, missing AD objects, or a misconfigured path. But left unchecked, it can escalate into something that’s much harder (and more painful) to untangle. 

That’s why proactive monitoring and regular maintenance of your AD CS setup is so important. Staying ahead of issues like this saves time, reduces risk, and keeps your security posture strong. 

And hey—if you’d rather not get deep into the weeds yourself, this is exactly the kind of thing Encryption Consulting’s PKI consultancy can help with. Whether it’s troubleshooting a single event or doing a full health check of your environment, having experts step in can make all the difference. Sometimes, a quick session with someone who’s seen these issues a hundred times is all it takes to get you back on track. 

Microsoft Introduces Powerful Enhancements to Active Directory Certificate Services (ADCS) in 2025 

Microsoft has unveiled substantial updates to Active Directory Certificate Services (ADCS), delivering critical improvements in scalability, performance, auditability, and security. These enhancements mark a significant evolution in enterprise certificate management, especially vital for organizations that rely on ADCS to support identity assurance, secure communications, and protect sensitive data through Public Key Infrastructure (PKI) services. 

Our blog explores the major features introduced in recent releases, offering a technical breakdown of their capabilities and implications for enterprise environments.

CRL Partitioning for Efficient Revocation at Scale 

Traditionally, managing Certificate Revocation Lists (CRLs) in large-scale environments has been inefficient and bandwidth-heavy. Clients validating a certificate were forced to download the entire CRL, even when only one revoked certificate needed verification. This legacy design posed scalability challenges for environments with high certificate turnover or limited network resources. 

The New Approach

Microsoft addresses this with CRL Partitioning, a long-requested feature that introduces smarter, more granular revocation handling: 

  • Partitioned CRLs: The monolithic CRL can now be divided into multiple partitions based on certificate serial numbers. 
  • Targeted Downloads: Clients request only the relevant partition of the CRL necessary for validating a certificate. 
  • Reduced Bandwidth and Latency: Significant improvements in performance for validation operations, particularly in environments with constrained bandwidth or high-frequency validations (e.g., load balancers, VPNs, or large user bases). 

CRL Partitioning is backward compatible and designed to coexist with existing mechanisms like Online Certificate Status Protocol (OCSP). Microsoft enables dual publishing, where both the monolithic CRL and its partitions are simultaneously available, ensuring seamless transition and operational continuity. 

Step-by-Step guide to enable CRL Partitioning

To enable CRL partitioning, follow these steps: 

  1. Run the following command to enable the CRL partitioning flag:
    certutil -setreg ca\CRLFlags +0x00400000
  2. Set the maximum number of partitions:
    This command defines how many CRL partitions you want to maintain (e.g., 10):
    certutil -setreg ca\CRLMaxPartitions 10
  3. Restart the CA Service:
    Run the following commands to apply the changes:
    net stop certsvc
    net start certsvc

The following image presents the properties of a partitioned CRL, showing the configured CRL distribution point for a specific partition, validating that clients can locate and download the appropriate CRL files.

The following image displays the Certification Authority management console, highlighting how issued certificates are now assigned to different CRL partition indexes, confirming the CRL partitioning feature is active.

Removal of 4KB Extension Limit: Unlocking Policy Flexibility 

Earlier versions of ADCS imposed a 4KB size limit on certificate extensions, restricting the complexity of certificate metadata and policy information that could be embedded within the certificate. 

With the removal of this limitation, organizations can now: 

  • Define complex certificate policies that reflect nuanced security or business requirements. 
  • Embed custom extensions, including advanced identity attributes or hardware-specific markers. 
  • Align with changing standards and future-ready cryptographic schemes, such as those needed for post-quantum cryptography. 

This enhancement brings ADCS in line with the capabilities of modern Certificate Authorities and paves the way for increased adoption in hybrid, cloud-native, and IoT-driven infrastructures. 

Run the following commands to add 0x1000 to the DBFlags registry key value and then restart ADCS:  

certutil -setreg DBFlags +0x1000  

net stop certsvc && net start certsvc  

Now, to verify the new limit, inspect the CA database schema (for example, with certutil -schema Ext) and check the MaxLength property of the ExtensionRawValue column in the output; after the change it should report a 16 KB limit instead of the previous 4 KB.  

ADCS Audit Logging Enhancements for Deeper Insight on Security Operations 

Security teams require detailed audit trails for digital certificate operations to support compliance, incident response, and forensic investigations. Recognizing this need, Microsoft has introduced enhanced audit logging in Windows Server 2025 for ADCS. 

Event ID | Event Summary
4886 | Certificate Services received a certificate request. Request ID: %1, Requester: %2, Attributes: %3
4887 | Certificate Services approved a certificate request and issued a certificate. Request ID: %1, Requester: %2, Attributes: %3, Disposition: %4, SKI: %5, Subject: %6
4888 | Certificate Services denied a certificate request. Request ID: %1, Requester: %2, Attributes: %3, Disposition: %4, SKI: %5, Subject: %6
4889 | Certificate Services set the status of a certificate request to pending. Request ID: %1, Requester: %2, Attributes: %3, Disposition: %4, SKI: %5, Subject: %6

New Fields in Event ID 4886 – Certificate Request Received  

Field Name | Description
Subject (from CSR) | Represents the subject value extracted from the Certificate Signing Request (CSR), if available.
SAN (from CSR) | Refers to the Subject Alternative Name (SAN) extension obtained from the CSR, if present.
Requested Template | Specifies the certificate template name as provided in the request—either as a version 2 template extension or a version 1 template property/attribute.
RequestOSVersion | Indicates the client’s operating system version using the szOID_OS_VERSION attribute. Refer to Section 2.2.2.7.1 of [MS-WCCE] for details. Note: Provided by the client; not used for making security decisions.
RequestCSPProvider | Details the Cryptographic Service Provider (CSP) used to generate the key pair, identified via the szOID_ENROLLMENT_CSP_PROVIDER attribute. Refer to Section 2.2.2.7.2 of [MS-WCCE]. Note: Client-provided; not intended for security decision-making.
RequestClientInfo | Captures supplementary client details through the szOID_REQUEST_CLIENT_INFO attribute. Refer to Section 2.2.2.7.4 of [MS-WCCE]. Note: Provided by the client; not used for security decisions.

New Fields in Event ID 4887 – Certificate Issued   

Field | Description
Subject Alternative Name | Contains the SAN extension values in the issued certificate, if present.
Certificate Template | Indicates the name of the certificate template used during issuance.
Serial Number | Shows the unique serial number assigned to the issued certificate.

New Fields in Event IDs 4886 through 4889 – Common Enhancements  

Field | Description
Authentication Service | Specifies the authentication service used in the request. Values may include “NTLM”, “Kerberos”, and “Schannel”, as defined by RPC authentication service constants.
Authentication Level | Represents the level of authentication applied in the request. Logged values can be “Default”, “None”, “Connect”, “Call”, “Packet”, “Integrity”, or “Privacy”, based on RPC standards.
DCOM or RPC | Indicates whether the request was made using “DCOM” or “RPC”. “RPC” is used for requests via protocols like [MS-ICPR]; otherwise, “DCOM” is recorded.

These expanded logs significantly boost visibility and enable:  

  • Anomaly detection, such as spotting suspicious SAN values or unauthorized issuance of high-privilege templates (e.g., Domain Controller certificates) 
  • Correlation of events across systems for root-cause analysis 
  • Audit readiness for frameworks like ISO 27001, NIST, PCI DSS, and others 

Security teams can now proactively monitor and build baselines for certificate issuance patterns, helping detect insider threats and misconfigurations early. 
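
As a sketch of what such baselining might look like once the new fields are exported to your SIEM, the snippet below flags issuance events (Event ID 4887) that used a high-privilege template for an unexpected requester. The field names mirror the tables above, while the template set, requester allow-list, and event export format are hypothetical.

HIGH_PRIVILEGE_TEMPLATES = {"DomainController", "EnrollmentAgent", "KeyRecoveryAgent"}  # example set
AUTHORIZED_REQUESTERS = {"CORP\\pki-svc", "CORP\\release-engineering"}                  # example allow-list

def flag_suspicious_issuance(events: list[dict]) -> list[dict]:
    """Return 4887 events where a sensitive template was issued to an unexpected requester."""
    return [
        event for event in events
        if event.get("Certificate Template") in HIGH_PRIVILEGE_TEMPLATES
        and event.get("Requester") not in AUTHORIZED_REQUESTERS
    ]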

Microsoft’s Updated ADCS Hardening Guidance 

Certificate Authorities (CAs) represent Tier 0 assets within an enterprise network, making their protection a top priority. Microsoft’s hardening guidelines for ADCS have been updated to reflect modern threat vectors and attack techniques. 

Key Recommendations for Securing Certificate Authorities

  1. Protect the CA’s private key with a Hardware Security Module (HSM)
    • Use FIPS 140-2 Level 3 compliant HSMs to enforce key isolation
  2. Minimize CA attack surface
    • Remove unused certificate templates
    • Enforce least privilege for template enrollment
    • Disable Supply in Request unless strictly controlled
  3. Harden network access
    • Enforce HTTPS for web enrollment services
    • Apply Extended Protection for Authentication
    • Implement mitigations from KB5005413 to reduce NTLM relay risk 
  4. Audit and rotate credentials
    • Regularly review the ACLs on templates and CA objects
    • Monitor enrollment activities for anomalies

Even minor missteps such as over-permissive templates or unmonitored role access can lead to privilege escalation or compromise of the CA. 

Key Recommendations for Securing Certificate Templates

Restrict Enroll and AutoEnroll Permissions  

Overly permissive certificate templates are one of the easiest paths for lateral movement and privilege escalation in a domain, because templates that allow any authenticated user or domain user to enroll can be misused to obtain higher privileges. Therefore, the following points should be implemented:  

  1. Remove broad groups such as “Authenticated Users” or “Domain Users” from Enroll or AutoEnroll permissions.  
  2. Assign certificate permissions only to a specific set of security groups or accounts, such as the IT security team or certain devices.   

Publish Only the Necessary Templates  

Certificate templates created for old projects or tests tend to pile up over time, even though they are never used. Every published certificate template is a potential attack vector: unused templates may carry outdated settings or weak permissions that give attackers an entry point. Therefore, the following practices are recommended:  

  1. Audit templates regularly to confirm each one’s purpose and whether it is still needed.  
  2. For high-sensitivity templates like Enrollment Agent or Key Recovery Agent, publish them only when needed, then disable them afterward.  
  3. During CA setup, use LoadDefaultTemplates=0 in the CAPolicy.inf file to prevent the default templates from auto-publishing.  

Securing the templates with the “Supply in request” option  

The “Supply in request” option allows requesters to specify the subject name they want on their certificate. However, it also allows an attacker to supply any name, so they could request a certificate for an identity such as a domain admin or a domain controller and then use it for impersonation. To prevent this, the following settings must be implemented:  

  1. Allow this option only for tightly controlled, high-privilege accounts.
  2. Enforce additional controlling layers, including the following: 
    • Manager approval workflows,
    • Authorized signatures,
    • Tight monitoring and review
  3. Implement enhanced auditing processes to keep track of who is requesting what, especially in the case of sensitive certificates.  

Operational Considerations for Enterprises

Organizations should begin planning now to adopt these enhancements. Key steps include: 

  • Update monitoring and SIEM tools to parse new audit events and fields 
  • Review certificate templates and remove or tighten access to sensitive ones 
  • Plan and test CRL partitioning in lab environments before production rollout 
  • Implement key storage policies aligned with hardware-backed trust models 
  • Audit NTLM usage in certificate issuance and authentication to prepare for deprecation 

How Encryption Consulting Can Help

Upgrading and hardening ADCS is complex but essential. At Encryption Consulting, we specialize in helping organizations like yours identify and mitigate security risks through tailored PKI Assessments. Our team of experts can provide a customized strategy to protect your PKI architecture from emerging threats, ensuring your data and infrastructure remain secure. Our full range of Public Key Infrastructure (PKI) services helps you safeguard your digital assets and enhance your organization’s overall security posture.  

For those seeking a hands-off solution, our PKI as a Service (PKIaaS) delivers all the benefits of PKI without the burden of in-house management. Our service delivers on four key parameters:  

  • Scalability: We will help your PKI infrastructure grow as your business expands.   
  • Cost Efficiency: We reduce overhead by offloading infrastructure maintenance.   
  • Security: We ensure your organization stays compliant and secure with up-to-date PKI management.   
  • Compliance: We ensure your solution meets all regulatory requirements.   

With Encryption Consulting’s PKIaaS, you can focus on your core business while we handle the complexities of PKI management.   

Let us provide the peace of mind that comes from knowing your digital trust and security needs are in expert hands. Reach out today at [email protected] to explore how we can help your organization stay secure against cyber threats.  

Ready to Secure and Modernize Your ADCS Environment?

Let us help you take full advantage of the latest ADCS advancements. Reach out to Encryption Consulting by dropping an email at [email protected] to explore how we can elevate your certificate infrastructure to meet the demands of today’s zero-trust world. 

Understanding Elliptic Curve Cryptography (ECC)

Imagine you’re sending a private message to a friend, shopping online, or transferring cryptocurrency. In each case, you trust that your secrets—your words, your credit card number, your digital coins—stay safe from attackers’ eyes through encryption. But how does that happen? 

Many applications require secure communication, data integrity, authentication, or non-repudiation. These often rely on the robust security provided by a concept called public-key cryptography, also known as asymmetric cryptography. 

Public-key cryptography plays an important role in many secure systems, enabling safe data exchange across the internet. Now, there are several important types of algorithms used in public-key cryptography, such as RSA, ECC, DSA, DH, and ElGamal, each relying on different mathematical principles to ensure security – for example, RSA uses factorization, ECC uses elliptic curves, and DH leverages discrete logarithms. In this blog, we will be learning and understanding one such algorithm called ECC or Elliptic Curve Cryptography. 

What is Elliptic Curve Cryptography or ECC? 

ECC is a type of public-key cryptography, a system where you have two keys: one you share with the world (public key) and one you keep as a secret (private key). However, what makes ECC special is its use of elliptic curves, which, in cryptographic contexts, are defined over finite fields by an equation like y² = x³ + ax + b. Mathematicians have been studying these curves for centuries. However, in 1985, two brilliant thinkers—Neal Koblitz and Victor S. Miller—discovered that they could use these curves to revolutionize cryptography, enabling stronger security with smaller key sizes compared to other methods.

The main advantage of Elliptic Curve Cryptography (ECC) lies in the mathematical problem it relies on for security. To better understand this concept, let’s look at the example below:

Imagine a graph with a special, curvy line on it – that’s our elliptic curve. Now, there are two people, Alice and Bob, who have agreed on a common starting point, G, on the curve. They also agree on a set of rules (specifically, scalar multiplication), which you can think of as a recipe to follow when moving along the curve.

Alice picks a secret number (let’s say ‘a’), and Bob picks their own secret number (‘b’). These are their private keys – they keep these numbers completely to themselves, like a secret code only they know.

Alice takes the agreed-upon starting point ‘G’ and follows the “recipe” ‘a’ times, performing scalar multiplication to compute aG. Imagine taking ‘a’ steps on the curve according to the rules. The point she ends up at is her public key, denoted as A (where A = aG). She can share this point ‘A’ with anyone. Similarly, Bob takes the same starting point ‘G’ and follows the “recipe” ‘b’ times, computing bG, ending up at his public key, denoted as B (where B = bG), which he can also share.

Now, Alice takes Bob’s public key ‘B’ and follows the same “recipe” she used with her secret number ‘a’ times, computing aB (which is a(bG) = abG). This leads her to a specific point on the curve. At the same time, Bob takes Alice’s public key ‘A’ and follows the “recipe” he used with his secret number ‘b’ times, computing bA (which is b(aG) = abG). This also leads him to the same point on the curve, the shared secret abG.

Due to the special mathematical properties of elliptic curves and the “recipe,” the point Alice arrives at will be exactly the same as the point Bob arrives at! This shared point is their shared secret: when two parties need to communicate securely over an insecure network, they can derive a key from it for efficient, secure data transmission or for authentication.

Now, even if someone is eavesdropping and sees the curve, the starting point ‘G’, Alice’s public key ‘A’, and Bob’s public key ‘B’, it’s incredibly difficult for them to figure out Alice’s secret number ‘a’ or Bob’s secret number ‘b’. This is because determining the number of steps taken to reach ‘A’ or ‘B’ from ‘G’ is a challenging mathematical problem known as the Elliptic Curve Discrete Logarithm Problem (ECDLP). Reversing this “secret walk” on the elliptic curve – finding out how many steps (‘a’ or ‘b’) were taken to get from ‘G’ to ‘A’ or ‘B’ – is computationally infeasible with current technology.

Essentially, the seemingly simple task of figuring out how many steps were taken to get from the starting point to the final point (that is, solving the ECDLP to recover the secret number ‘a’ or ‘b’) is computationally incredibly difficult. Even with powerful computers, trying to reverse this process on a properly chosen elliptic curve with a sufficiently large key size is so time-consuming that it’s practically impossible within any reasonable timeframe.

This makes ECC a powerful and efficient tool for securing our digital world, from websites and emails to cryptocurrencies and mobile devices.
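
The same Alice-and-Bob walk can be reproduced in a few lines with the Python cryptography package, which hides the curve arithmetic behind an ECDH exchange. This is a minimal sketch assuming the NIST P-256 curve; real protocols add authentication and derive the final key more carefully.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party picks a secret scalar (private key) on the agreed curve with base point G
alice_private = ec.generate_private_key(ec.SECP256R1())
bob_private = ec.generate_private_key(ec.SECP256R1())

# Only the public points A = aG and B = bG are exchanged
alice_public = alice_private.public_key()
bob_public = bob_private.public_key()

# Both sides land on the same point abG
alice_shared = alice_private.exchange(ec.ECDH(), bob_public)
bob_shared = bob_private.exchange(ec.ECDH(), alice_public)
assert alice_shared == bob_shared

# Derive a symmetric key from the shared point for actual encryption
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"demo").derive(alice_shared)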

ECC vs RSA

When it comes to public-key cryptography, two algorithms have long stood out as heavyweights: RSA (Rivest–Shamir–Adleman) and ECC (Elliptic Curve Cryptography). Both achieve the fundamental goal of secure communication through public and private key pairs, but they go about it in fundamentally different ways, leading to some crucial distinctions.

Feature | RSA | ECC
Security Based On | Factoring large composite numbers | Elliptic Curve Discrete Logarithm Problem
Key Size (in bits) | Larger for equivalent security | Smaller for equivalent security
Performance | Can be slower for comparable security | Generally faster for comparable security
Resource Usage | Higher (storage, bandwidth, processing) | Lower (storage, bandwidth, processing)
Adoption | Historically widespread, still common | Rapidly growing, especially in modern apps
Best Suited For | Wide range of applications, legacy systems | Mobile, IoT, modern secure protocols

Comparing the Security Strengths

When it comes to locking up your digital secrets, cryptography offers symmetric block ciphers and asymmetric-key algorithms. Symmetric block ciphers, like the Advanced Encryption Standard (AES), use a single key to both encrypt and decrypt data, while Asymmetric-key algorithms, like ECC and RSA, use a pair of keys: a public key to encrypt (or verify signatures) and a private key to decrypt (or sign).

Now, security strength is about how hard it is for an attacker to crack the system. For symmetric ciphers, strength is tied to the key size, since the only way to break them (absent design flaws) is by brute-forcing every possible key. Asymmetric algorithms, though, face different attacks (like solving discrete logarithms for ECC or factoring for RSA), so their key sizes don’t directly match symmetric ones. Notably, ECC’s advantage is most pronounced in constrained environments, such as mobile devices and IoT systems, where its smaller key sizes provide strong security with less computational overhead. So how do we measure their strengths on a common scale?

According to the National Institute of Standards and Technology (NIST), symmetric and asymmetric algorithms can be compared by their equivalent security levels or in bits of security (NIST SP 800-57 Part 1, Revision 5), which provides the following mappings to estimate the computational effort required for an attack. This allows security professionals to select an algorithm based on performance versus security needs.

Symmetric Key Size | ECC Key Size | RSA Key Size | Security Strength (Bits)
128 (AES-128) | 256–283 | 3072 | 128
192 (AES-192) | 384–511 | 7680 | 192
256 (AES-256) | 512+ | 15360 | 256

As you can see, ECC achieves the same security as AES or RSA with much smaller keys. A 256-bit ECC key matches AES-128’s strength, while RSA needs 3072 bits to keep up. Now, let’s learn about the whys and why-nots of using ECC in real-life applications in detail.

Pros and Cons of Using Elliptic Curve Cryptography

Like any cryptographic algorithm, ECC comes with its own set of strengths and weaknesses. Understanding the pros and cons is important for learning about its role in modern security and for making informed decisions about its use. 

Pros of ECC

  • Smaller Key Size: ECC can achieve the same level of security as older public-key systems like RSA with significantly shorter key lengths, which translates into reduced storage, lower bandwidth consumption, and faster computation.  
  • Mathematical Complexity: The underlying mathematics of elliptic curves provides a strong foundation for security, with no known sub-exponential-time attack against the Elliptic Curve Discrete Logarithm Problem (ECDLP) on a well-chosen curve, making it difficult for attackers to find shortcuts to break the encryption. 
  • Advanced Security Features: ECC enables advanced security features like Perfect Forward Secrecy (PFS) when used in protocols like ECDHE, enhancing the resilience of secure communication. 

Cons of ECC

  • Relatively New: Compared to the decades of analysis that RSA has undergone, ECC is a newer technology. While it has been extensively studied and is considered secure by experts, it hasn’t faced the same length of public scrutiny. 
  • Curve Selection is Critical: The security of ECC heavily relies on the choice of the elliptic curve parameters. Using weak or poorly chosen curves can lead to vulnerabilities.  
  • Less Widespread Legacy Support: While adoption is growing rapidly, some older systems or protocols might have limited or no support for ECC compared to the more universally supported RSA. 

These pros and cons should be considered in the context of specific use cases, system requirements, and security goals to determine whether ECC is the right choice for a given application. 

Applications of Elliptic Curve Cryptography

The robust security of ECC is used in many applications that you likely use every day. Here are some key areas where ECC plays a crucial role: 

  1. Digital Signatures and Code Signing: ECC provides a strong and efficient way to create digital signatures for documents, software, and firmware updates. These signatures ensure the authenticity of the sender and the integrity of the data, guaranteeing that it hasn’t been tampered with since it was signed (a minimal signing sketch follows this list).
  2. Securing the Web (HTTPS): When you browse a website with “https://” and see that padlock icon, ECC is often at work during the TLS/SSL handshake. Its smaller key sizes enable faster and more efficient establishment of secure connections between your browser and the web server, which is especially beneficial for mobile devices with limited processing power and bandwidth.
  3. Cryptocurrencies and Blockchain: ECC, particularly the ECDSA (Elliptic Curve Digital Signature Algorithm) variant, is a fundamental building block for most cryptocurrencies like Bitcoin and Ethereum. It’s used to:
    1. Secure wallets: Ensuring that only the owner of the private key can authorize transactions from their digital wallet.
    2. Verify transactions: Allowing the network to cryptographically verify the authenticity and integrity of every transaction recorded on the blockchain.
  4. Internet of Things (IoT) Devices: The resource-constrained nature of many IoT devices makes ECC an ideal choice for securing their communication and data. Its small key sizes and efficient operations are well-suited for devices with less processing power, memory, and battery life, ensuring secure data transmission from smart sensors to connected appliances.
  5. Secure Shell (SSH): For secure remote access to servers and systems, SSH often utilizes ECC for key exchange and authentication, providing a more efficient and equally secure alternative to traditional algorithms.
  6. Secure Key Exchange: ECC is also widely used in the Elliptic Curve Diffie-Hellman (ECDH) protocol for secure key exchange, enabling two parties to establish a shared secret key over an insecure channel, which is critical for encrypting subsequent communications.
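
For the digital signature use case in item 1, a minimal ECDSA sketch with the Python cryptography package looks like this; the P-256 curve and the sample payload are assumptions for illustration.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())   # P-256
public_key = private_key.public_key()

firmware = b"release-build-1.4.2"                        # sample payload to sign
signature = private_key.sign(firmware, ec.ECDSA(hashes.SHA256()))

# Verification raises InvalidSignature if the data or the signature was altered
public_key.verify(signature, firmware, ec.ECDSA(hashes.SHA256()))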

Encryption Consulting’s CodeSign Secure

CodeSign Secure helps software developers and organizations build trust with users. It ensures the authenticity and integrity of software releases by leveraging ECC for enhanced security and efficiency. 

CodeSign Secure uses ECC, specifically the Elliptic Curve Digital Signature Algorithm (ECDSA) with key types like P-256 and P-384, to create digital signatures that verify software hasn’t been tampered with. This provides high-speed signing and verification with top-tier security, protecting against threats like malware injection and unauthorized code modifications. But CodeSign Secure isn’t just about ECC – it’s a full-fledged solution to improve your organization’s security standards. 

It supports various hardware security modules (HSMs) that are compatible with standards like PKCS#11 and FIPS 140-3, which store ECC keys in tamper-proof hardware. It also allows the automation of signing workflows to reduce the chances of human error by integrating seamlessly into CI/CD pipelines like Jenkins, Azure DevOps, GitHub Actions, and many more. 

It ensures digital signatures meet strict industry requirements, reducing non-compliance risks. Detailed audit trails and logging reports track every operation. With its seamless scalability, CodeSign Secure is an ideal solution for businesses to strengthen and improve their software’s security. 

Conclusion

Elliptic Curve Cryptography stands as a powerful and increasingly vital algorithm due to its elegant mathematical foundation, which allows it to deliver strong protection with remarkable efficiency and significantly smaller key sizes compared to older algorithms. Its widespread adoption is reflected in numerous security standards, including TLS 1.3, FIDO2, and S/MIME.  

From securing our web browsing and mobile communications to strengthening the functionality of cryptocurrencies and ensuring the integrity of software through code signing, ECC’s influence is pervasive. Furthermore, while ECC provides robust security against current computational threats, researchers are actively investigating its potential evolution and the development of ECC-based strategies that could offer protection even in a post-quantum computing era. 

Understanding the CA/Browser Forum Code Signing Requirements  

In June 2023, the CA/Browser Forum rolled out a significant update to their code signing requirements, directly impacting developers, DevOps teams, and businesses that rely on publicly trusted Certificate Authorities (CAs) to sign and secure their software. These updates, which came into effect on June 1, 2023, mandate that code signing private keys be securely stored and protected in a certified Hardware Security Module (HSM). This change affects both Extended Validation (EV) and non-EV certificates. 

This blog will dive into the technical aspects of the new code signing requirements, outline the challenges organizations may face, and explain the necessary steps to comply with the updated standards. 

Why the HSM Requirement? 

The new requirements stem from the need for stronger security practices in software development. With cyberattacks becoming increasingly sophisticated, it is crucial to ensure the integrity of software from the moment it is signed. Protecting the code signing private keys in an HSM drastically reduces the risk of key compromise, as HSMs are designed to be tamper-resistant and provide a secure environment for cryptographic operations. 

Effective June 1, 2023, all certificate requesters must use an HSM that meets the following standards:

  • FIPS 140-2 Level 2 (or higher), or
  • Common Criteria EAL 4+ (or equivalent)

These certifications are widely recognized as industry standards for secure cryptographic modules, ensuring that private keys are protected in a hardware environment that cannot be bypassed or tampered with. 

Key Challenges Under the New Code Signing Requirements 

While the shift to using HSMs for code signing is a positive move for security, it introduces several technical challenges that need to be addressed: 

  1. Proving Compliance with the HSM Requirement 

    The first hurdle for organizations is proving that their private keys are securely stored within an HSM. This proof is necessary for obtaining code signing certificates from CAs. While many CAs offer USB-based HSMs as a solution, this can be cumbersome for businesses operating in the cloud or with large-scale code signing requirements. USB HSMs are not ideal for teams spread across different regions or those who require rapid, large-scale signing operations. 

    A more scalable option is to use network-based HSMs, which can either be on-premises or rented from cloud providers. Solutions like AWS CloudHSM, Azure Dedicated HSM, Google Cloud HSM, and nShield as a Service are all viable options. However, it’s essential to ensure that the HSM you choose supports the verification methods required by your CA, such as key attestation (i.e., confirming that the private key resides within the HSM). 

  2. Managing Key and Certificate Operations 

    Once your private keys are secured in an HSM, managing them becomes more complex. HSMs are designed to keep private keys within their secure environment, meaning that you cannot directly export keys for use in software. Instead, you need to interact with the HSM through an intermediary layer, such as PKCS#11 libraries, Windows CSP/KSP providers, or Java JCE providers. 

    These operations require specialized software and cryptographic tools. Additionally, you need to ensure that access permissions for the keys are tightly controlled. Each action performed on the HSM (e.g., key usage, certificate issuance) must be logged and audited to comply with security best practices. 

  3. Integrating Code Signing with HSMs 

    One of the most technical challenges is integrating HSMs with the various code signing tools used across platforms. Most organizations use third-party tools for code signing, such as: 

    • signtool (Windows) 
    • jarsigner (Java) 
    • codesign and productsign (macOS) 
    • rpmsign and debsign (Linux) 
    • cosign (containers) 

    When using an HSM, integrating these tools requires the use of platform-specific cryptographic service providers (CSPs). For example: 

    • Windows: Uses KSP (Key Storage Providers) or CSP (Cryptographic Service Providers). 
    • Java: Uses JCE (Java Cryptography Extension) providers
    • Linux: Relies on PKCS#11 libraries for hardware-based cryptography. 
    • macOS: Uses CTK (CryptoTokenKit). 

    These integrations can be tricky, as the signing tools need to be configured to interact with the HSM via the appropriate service provider. This often requires custom configuration and testing to ensure seamless operation. 

  4. Optimizing Performance in CI/CD Pipelines 

    For modern DevOps environments, speed and efficiency are paramount. Introducing an HSM into your code signing process can slow down the pipeline if not properly configured. A common approach many teams initially adopt is to upload the data to be signed, perform the signing operation, and then download the signed result. However, this method can be inefficient, consuming considerable bandwidth and causing delays. 

    To optimize this process, many organizations switch to client-side hashing. With client-side hashing, the code is hashed on the client side before being sent to the HSM for signing, reducing the amount of data transferred and speeding up the signing process. However, this method requires that both your cryptographic service provider and signing infrastructure support this feature. A rough example of this flow is sketched below. 
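
In the client-side hashing sketch below, only a SHA-256 digest leaves the build agent, and the signing service returns a detached signature. The endpoint URL, request format, and file name are hypothetical; a real integration would follow your signing vendor's API.

import base64, hashlib, json, urllib.request

# Hash locally so only a 32-byte digest crosses the network
with open("build/output.bin", "rb") as f:                  # hypothetical build artifact
    digest = hashlib.sha256(f.read()).digest()

request = urllib.request.Request(
    "https://signing.example.com/api/v1/sign-hash",         # hypothetical signing endpoint
    data=json.dumps({"alg": "SHA256", "hash": base64.b64encode(digest).decode()}).encode(),
    headers={"Content-Type": "application/json"},
)
signature = urllib.request.urlopen(request).read()          # detached signature returned by the service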

How Encryption Consulting Can Support Your Compliance Journey?

At Encryption Consulting, we specialize in helping organizations meet the CA/Browser Forum code signing requirements. Here’s how we can support you through this transition: 

  • HSM Integration: Whether you opt for USB-based or network-based HSMs, we can assist in selecting the best solution for your needs and guide you through the integration process. 
  • Key Management Solutions: We offer expertise in managing your private keys securely within HSMs, ensuring that all key and certificate operations are carried out securely and in compliance with industry standards. 
  • Code Signing Optimization: Our team can help integrate your code signing process with the necessary cryptographic service providers, ensuring that the process is smooth and secure across all platforms. 
  • CI/CD Pipeline Optimization: We can assist you in implementing efficient code signing strategies within your CI/CD pipeline to minimize any performance hits. 

Introducing CodeSign Secure: The Solution You Need 

To make your code signing process even more secure and efficient, we recommend CodeSign Secure. Our solution offers a seamless and automated approach to code signing that fully integrates with HSMs, helping you meet the latest CA/Browser Forum requirements without the complexity. CodeSign Secure simplifies key management, improves compliance, and ensures that your signing operations remain fast and efficient, even in large-scale DevOps environments. 

With CodeSign Secure, you get: 

  • Secure integration with hardware security modules 
  • Streamlined management of code signing certificates 
  • Enhanced performance for CI/CD pipelines 
  • Full compliance with the CA/Browser Forum code signing requirements 

Let us help you keep your software secure, compliant, and fast with CodeSign Secure. Reach out to Encryption Consulting today to learn how we can optimize your code signing process. 

Conclusion

The updated CA/Browser Forum code signing requirements mark a crucial shift towards stronger security practices within the software development lifecycle. While transitioning to HSM-based code signing comes with technical challenges, the benefits in terms of improved security and integrity are undeniable. By effectively managing HSM integration, optimizing key management workflows, and ensuring streamlined performance in your CI/CD pipeline, you can meet the new standards while maintaining development efficiency. 

Encryption Consulting is here to guide you through every step of the process, ensuring your code signing practices are secure and fully compliant with the latest industry standards. 

Key Insights from NIST’s Latest Report on Crypto-Agility 

If you have been keeping up with post-quantum cryptography (PQC), the latest release from the U.S. National Institute of Standards and Technology (NIST) is worth noting. NIST has published a draft Cybersecurity White Paper titled “Considerations for Achieving Crypto-Agility“, which outlines the practical challenges, trade-offs, and strategies that organizations must consider when transitioning cryptographic systems for the post-quantum era. 

The goal of this draft is to build a shared understanding of the challenges and to identify existing approaches related to crypto agility based on discussions NIST has held with various organizations and individuals. It also serves as a pre-read for an upcoming NIST-hosted virtual workshop, where the cryptographic community will further explore these issues and help shape the final version of the paper. 

You might be wondering, who really needs to focus on this? The answer is simple: pretty much everyone involved in cybersecurity. Whether you are designing protocols, managing IT systems, developing software or standards, building hardware, or shaping policy, the insights in this white paper are directly relevant to your role in ensuring secure and agile cryptographic systems for the future. 

What is Crypto-Agility? 

As defined by NIST, Cryptographic Agility refers to the capabilities needed to replace and adapt cryptographic algorithms in protocols, applications, software, hardware, and infrastructures without interrupting the flow of a running system in order to achieve resiliency. 

In simpler terms, crypto agility is the capability to switch quickly and seamlessly to stronger cryptographic algorithms when existing ones become vulnerable. It is essential because advancements in quantum computing could break current encryption methods, making it necessary to switch to quantum-resistant algorithms. Crypto agility ensures that systems can make this switch efficiently without needing to rebuild entire applications or infrastructures. 

This flexibility allows systems to adopt stronger algorithms and retire weak ones with minimal changes to applications or infrastructure. Achieving crypto agility often involves updates to APIs, software libraries, or hardware, as well as maintaining interoperability in protocols when introducing new cipher suites. 

More than just a technical goal, crypto agility is now a strategic necessity. It requires coordinated efforts from developers, IT administrators, policymakers, and security professionals to ensure systems remain secure, resilient, and future-ready in the face of evolving threats. 

Why Do These Cryptographic Transitions Take So Long? 

You might wonder: if we know a cryptographic algorithm is outdated, why not just replace it right away? In reality, it’s rarely that simple. A good example is Triple DES. It was meant to be a temporary patch for the aging DES algorithm. However, even after the more secure AES standard was introduced in 2001, Triple DES continued to be used and was only officially phased out in 2024. That is a 23-year transition for something that was meant to be temporary. 

The reason transitions like this take decades is that many systems were not built with flexibility in mind. This makes transitioning to stronger algorithms slow, complex, and expensive. 

  • Hardcoded Algorithms

    In many older systems, cryptographic algorithms are hardcoded directly into the application’s source code. This means swapping them out often requires rewriting and retesting the entire application. This makes updates time-consuming and risky.

  • Backward Compatibility and Interoperability Challenges

    Another major challenge during cryptographic transitions is the need to maintain backward compatibility. A good example is SHA-1, a widely used hash function that was once considered secure. Even though its weaknesses were discovered as early as 2005, SHA-1 continued to be used for years because many systems relied on it for digital signatures, authentication, and key derivation.

    Even after NIST urged agencies to stop using SHA-1 by 2010, support for it had to remain in certain protocols like TLS to preserve interoperability. As a result, known weak algorithms were kept in use far longer than recommended because not all systems could adapt in time.

    This example illustrates a key challenge: when applications lack crypto-agility and cannot adapt quickly, outdated algorithms end up being used longer than necessary just to maintain compatibility with older systems.

  • Constant Need for Transition

    Cryptographic security is not static. It needs to evolve as computing power increases. Take RSA, for example. When RSA was first approved for digital signatures in 2000, a 1024-bit modulus was required to provide at least 80 bits of security. However, due to advances in computing power and cryptanalysis, by 2013, the minimum recommended modulus size was increased to 2048 bits to maintain a security level of at least 112 bits.

    These transitions often need to happen during a device’s lifetime. If systems are not designed to support larger key sizes or stronger algorithms, they may become obsolete and require replacement. That is why planning for upgrades from the start is both practical and cost-effective.

    Since 2005, NIST SP 800-57 Part 1 has projected the need to transition to 128-bit security by 2031. In 2024, NIST Internal Report (IR) 8547 stated that 112-bit security for current public-key algorithms would be deprecated by 2031, enabling a direct transition to post-quantum cryptography.

  • Resource and Performance Challenges

    Cryptographic transitions often come with performance trade-offs, especially when moving to post-quantum algorithms. Many of these newer algorithms require larger public keys, signatures, or ciphertexts. For instance, an RSA signature at the 128-bit security level uses a 3072-bit key and produces a 384-byte signature, while the post-quantum ML-DSA-44 signature defined in FIPS 204 is significantly larger at 2420 bytes, more than six times the size.

    This increase in size affects not just storage and processing but also network bandwidth, slowing down communication and putting pressure on constrained environments like IoT devices or embedded systems. As a result, transitions can be slower than expected, adding yet another layer of complexity.

That is why crypto agility is essential. It provides a framework for building systems that can adapt to new algorithms more smoothly, even when those changes demand more from the underlying infrastructure.
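
As a small illustration of that adaptability, when key generation is driven by parameters rather than values hardcoded into an application, moving from a 2048-bit to a 3072-bit RSA key is a one-flag change (file names here are only examples):

# Key size is a parameter, not a property baked into the application
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out key-2048.pem
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:3072 -out key-3072.pem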

Making Security Protocols Crypto-Agile 

Security protocols rely on cryptographic algorithms to deliver key protections like confidentiality, integrity, authentication, and non-repudiation. For these protocols to work correctly, communicating parties must agree on a common set of cryptographic algorithms known as a cipher suite.  

Crypto agility in this context means the ability to easily switch from one cipher suite to another, more secure one, without breaking compatibility. To support this, protocols should be designed modularly, allowing new algorithms to be added and old ones phased out smoothly.  

In this section, we will explore the challenges and recommended practices for achieving crypto agility in security protocols.   

  • Clear Algorithm Identifiers

    Protocols should use unambiguous, versioned identifiers for algorithms or cipher suites to support crypto agility. For example, TLS 1.3 uses cipher suite identifiers that encapsulate combinations of algorithms, while Internet Key Exchange Protocol version 2 (IKEv2) negotiates each algorithm separately using distinct identifiers.

    Reusing names across variants or key sizes, such as labelling both AES-128 and AES-256 simply as “AES”, can create confusion and increase the risk of misconfigurations during transitions. Well-defined identifiers enable phased rollouts, support fallback mechanisms, and improve troubleshooting as systems evolve (a quick way to inspect TLS 1.3’s identifiers is shown after this list).

  • Timely Updates

    Standards Developing Organizations (SDOs) must be able to update mandatory or recommended algorithms before advances in cryptanalysis or computing weaken them, without needing to change the entire security protocol. One way to achieve this is by separating the core protocol specification from a companion document that lists supported algorithms, allowing updates to the algorithms without altering the protocol itself.

    It is important for SDOs to introduce new algorithms before the current ones become too weak and to provide a smooth transition. A delay in updating algorithms could lead to the prolonged use of outdated or insecure cryptographic methods.

  • Strict Deadlines

    Legacy algorithms must be retired on clear and enforceable schedules. Organizations should avoid letting timelines slip. At the same time, standards groups like the Internet Engineering Task Force (IETF) and the National Institute of Standards and Technology (NIST) should coordinate across systems to reduce fragmentation and ensure smooth interoperability during transitions.

  • Hybrid Cryptography

    Hybrid cryptography combines traditional algorithms like ECDSA with post-quantum algorithms such as ML-DSA. This approach supports crypto agility by enabling a smooth transition from classical cryptographic systems to quantum-resistant algorithms. Hybrid schemes are commonly used for signatures and key establishment, where both traditional and PQC public keys are certified either in one certificate or separately to balance compatibility and forward security.

    While hybrid schemes are essential for validating crypto agility in real-world deployments, they introduce challenges such as increased protocol complexity, larger payloads, layered encapsulation, and performance trade-offs, which need careful consideration, especially in resource-constrained environments.

  • Balancing Security and Simplicity

    Cipher suites should maintain consistent strength across all components. Mixing weak and strong primitives in one suite undermines overall security. Overly complex negotiation logic also increases the risk of bugs and downgrade attacks. Streamlined suites improve analysis, simplify testing, and support more reliable cryptographic transitions.
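
As noted under “Clear Algorithm Identifiers” above, TLS 1.3 exposes its suites under unambiguous names (for example, TLS_AES_256_GCM_SHA384). A quick way to list the identifiers your OpenSSL build enables is:

# List the TLS 1.3 cipher suites enabled in this OpenSSL build
openssl ciphers -s -tls1_3 -v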


Building Crypto-Agility for Applications 

Crypto APIs separate application logic from cryptographic implementations, allowing applications to perform encryption, signing, hashing, or key management without embedding cryptographic code directly. These operations are handled by dedicated libraries or providers. This abstraction makes it easier to switch between algorithms, such as AES-CCM and AES-GCM, without major changes to the application. Crypto APIs also enable seamless integration with protocols like TLS and IPsec, which rely on these interfaces for cryptographic operations.  
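
A small command-line illustration of this abstraction: OpenSSL’s tools sit on top of its EVP interface, so swapping the digest algorithm used on a file is a flag change rather than a code change (the file name is illustrative):

# Same interface, different algorithm underneath
openssl dgst -sha256 report.txt
openssl dgst -sha3-256 report.txt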

To support crypto agility, system designers must streamline the replacement of algorithms across software, hardware, and infrastructure. The cryptographic interface must be user-friendly and well-documented to reduce errors and support long-term security.  

NIST also explores in detail a few use cases for using crypto APIs, such as: 

  • Using an API in a Crypto Library Application

    A Cryptographic Service Provider (CSP) delivers algorithm implementations through a crypto API and may also manage protected key storage. Enterprise policies set by the Chief Information Security Officer (CISO) define which algorithms are allowed, and CSPs can enforce these rules at runtime. For instance, a policy might disallow encryption with Triple DES while still allowing decryption for backward compatibility. Application developers must ensure that cryptographic libraries can be updated efficiently to support the rapid adoption of newer, more secure primitives (a minimal configuration sketch of this kind of policy enforcement appears after this list).

    Applications using Crypto API

    An application refers to the end-user or process that executes cryptographic functions. The protocol defines the rules for communication and data transfer, ensuring that data is exchanged securely and consistently. Policy enforcement is typically handled through a cryptographic API, which implements the policies set by the Chief Information Security Officer (CISO) to determine which cryptographic algorithms are allowed. Providers are cryptographic service providers (CSPs) that offer supported algorithms through software libraries, hardware modules, or cloud-based services, based on the organizational requirements.

  • Using APIs in the Operating System Kernel

    Some security protocols, such as IPsec and disk encryption, need to operate in the operating system kernel, the core component of the operating system that is loaded early in the boot process and manages all system resources. To support crypto agility, the kernel must have access to a crypto API, but typically, only a subset of it is available based on the required operations.

    Since supported algorithms are often determined at build time, adding new ones post-build (e.g., via plugins) can be difficult. As a result, crypto agility is more limited at the kernel level compared to user-space applications. Some systems also perform self-tests during boot to ensure cryptographic functions work as expected, but long-term crypto agility depends on sound initial decisions aligned with evolving cryptographic standards.

  • Hardware Considerations for Crypto Agility

    Hardware-based cryptography may use dedicated chips like Trusted Platform Modules (TPMs), Hardware Security Modules (HSMs), or personal crypto tokens that securely store private keys and handle cryptographic operations. These devices offer strong security but are much harder to update than software. Some CPUs offer built-in instructions to accelerate specific crypto functions, improving efficiency but not necessarily agility.

    Achieving agility at this layer requires selecting strong, future-proof algorithms and close collaboration between developers and cryptographers to anticipate long-term needs.
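
One practical way the CISO-style algorithm policy mentioned earlier in this section can be enforced is at the library or provider level rather than in application code. The sketch below uses OpenSSL 3.x provider configuration as an illustration: by not activating the legacy provider, older algorithms such as single DES and RC4 simply remain unavailable to applications using this configuration (this is coarser than a per-operation rule like “decrypt-only”, which a fuller CSP policy might express):

# openssl.cnf provider policy sketch (OpenSSL 3.x)
openssl_conf = openssl_init

[openssl_init]
providers = provider_sect

[provider_sect]
default = default_sect
# The legacy provider (single DES, RC4, MD4, ...) is intentionally not activated.

[default_sect]
activate = 1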

Key Trade-Offs and Areas for Improvement 

NIST emphasizes that crypto agility is a shared responsibility between cryptographers, developers, implementers, and security practitioners. To be actionable, crypto-agility requirements must be customized to fit the specific needs and limitations of each system. This section outlines core challenges, key trade-offs, and areas that need further improvement.  

  • Resource Limitations

    Resource limitations are one of the most difficult challenges for achieving crypto agility. Protocols often need to support multiple cryptographic algorithms, but newer algorithms, like post-quantum ones, often require much larger keys, signatures, or ciphertexts than those they replace. These larger sizes can overwhelm existing protocol limits or hardware capabilities. Protocol designers must anticipate these demands to avoid shortsighted design decisions that hinder future transitions.

    Hardware limitations pose another challenge, as they may not have enough capacity to support multiple algorithms on a single platform. Algorithm designers should consider reusability and compatibility across different algorithms, ensuring that subroutines like hash functions can be shared to save hardware resources rather than relying on unique components that are rarely used.

    There is also a growing need to design algorithms based on diverse assumptions so that if one fails, others still provide security.

  • Agility-Aware Design

    Crypto APIs provide a practical means to substitute cryptographic algorithms without requiring a complete rewrite of application logic. This flexibility is essential when algorithms are deprecated due to emerging threats. However, achieving crypto agility at the kernel level is more challenging, as cryptographic functions are often hardcoded during kernel compilation, making post-deployment updates difficult.

    To address this, NIST recommends designing APIs and interfaces that do not assume fixed algorithms or key sizes. Protocols that support negotiation mechanisms, such as TLS cipher suites, should be used to allow flexibility in selecting cryptographic methods. Additionally, it is beneficial for standards to include a “Crypto Agility Considerations” section to guide developers in creating systems that can easily adapt to evolving cryptographic requirements over time. These practices help ensure that systems remain secure and adaptable as cryptographic needs change.

  • Complexity and Security Risks

    While crypto agility increases flexibility, it also introduces new complexity and risk. Supporting multiple algorithm options can lead to configuration errors, security bugs, and broader attack surfaces. For example, if cipher suite negotiation is not protected, attackers can downgrade to weaker algorithms. Similarly, exposing too many cryptographic options in libraries or APIs can increase the risk of security flaws. For enterprise IT administrators, it is necessary to ensure that the configuration is updated to reflect new security requirements.

    Also, transitioning from one algorithm to another is risky. Yet, most security evaluations today focus on static configurations and not on the dynamics of transitioning between algorithms. Future assessments should explicitly consider cryptographic transitions as part of their security posture.

  • Crypto Agility in Cloud

    Cloud environments provide scalability and flexibility for cryptographic operations but can limit crypto agility due to vendor lock-in. Developers often rely on provider-specific cryptographic APIs, hardware, or key management services, which can restrict algorithm or provider changes.

    Some CSPs offer access to external, application-specific HSMs to avoid lock-in, but this approach adds operational complexity. In addition, adopting confidential computing architectures can prevent cloud providers from accessing sensitive data or keying material by isolating the processing environment. However, the provider may still retain administrative control, including the ability to remove the application entirely. In some cloud environments, the cloud provider can delete keys from an HSM, even without direct access to those keys.

    Therefore, to support crypto agility, organizations must carefully evaluate the cryptographic controls, flexibility, and responsibilities in their chosen cloud environment.

  • Maturity Assessment for Crypto Agility

    To help organizations evaluate and improve their crypto agility, NIST highlights the Crypto Agility Maturity Model (CAMM). This model defines five levels:

    1. Level 0: Not possible
    2. Level 1: Possible
    3. Level 2: Prepared
    4. Level 3: Practiced
    5. Level 4: Sophisticated

    These levels assess how well a system or organization can handle crypto transitions. For example, a system that supports cryptographic modularity at Level 2 (“Prepared”) can replace individual cryptographic components without disrupting the rest of the system. Although CAMM is mostly descriptive today, it offers a valuable framework for guiding improvements. The concept of crypto agility maturity is still developing, and expanding the model with more precise metrics and broader applicability could further strengthen its value.

  • Strategic Planning for Managing Crypto Risk

    Crypto agility should be part of an organization’s long-term risk management strategy, not just a one-time effort. NIST recommends a proactive approach that blends governance, automation, and prioritization based on actual cryptographic risk.

    Crypto agility strategic plan for managing organization’s crypto risks

    Key steps include integrating crypto agility into governance policies; creating inventories to identify where and how cryptography is used; evaluating enterprise management tools to ensure they support crypto risk monitoring; prioritizing systems based on exposure to weak cryptography; and taking action to either migrate to stronger algorithms or deploy mitigation techniques like zero-trust controls.

    These activities should be cyclical, enabling organizations to evolve with new threats, technologies, and compliance mandates.

  • Standards, Regulations, and Compliance

    Standards and regulations often drive crypto-agility efforts by mandating the transition away from obsolete algorithms. For example, NIST SP 800-131A set a deadline to end support for Triple DES by the end of 2023.

    Compliance with these standards is crucial. However, successful transitions also require practical strategies to handle legacy data. This includes securely decrypting or re-signing data that was protected using deprecated algorithms. Protocols and software must be updated to reflect algorithm transitions.

  • Enforcing Crypto Policies

    Enforcing crypto-security policies is a critical yet difficult aspect of crypto agility. Organizations need to coordinate the transition from weak to strong algorithms in a way that avoids service disruptions. This requires synchronizing updates across systems, enforcing algorithm restrictions within protocols, and supporting secure configurations through APIs. This process demands close collaboration among cryptographers, developers, IT administrators, and policymakers. Effective crypto agility depends not just on technical solutions but also on shared visibility and communication across all stakeholders to ensure that systems remain secure and responsive in the face of evolving threats and regulatory requirements.
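
The inventory and visibility steps described above can start small. The sketch below, assuming PEM certificates under /etc/ssl/certs, reports each certificate’s signature algorithm and public-key size so weak or deprecated choices stand out for prioritization:

# Minimal certificate inventory pass (path and filter are illustrative)
for cert in /etc/ssl/certs/*.pem; do
  echo "== $cert"
  openssl x509 -in "$cert" -noout -text | grep -E "Signature Algorithm|Public-Key"
done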

How Can Encryption Consulting’s PQC Advisory Services Help? 

Navigating the transition to post-quantum cryptography requires careful planning, risk assessment, and expert guidance. At Encryption Consulting, we provide a structured approach to help organizations seamlessly integrate PQC into their security infrastructure. 

We begin by assessing your organization’s current encryption environment and validating the scope of your PQC implementation to ensure it aligns with industry best practices. This initial step helps establish a solid foundation for a secure and efficient transition. 

From there, we work with you to develop a comprehensive PQC program framework customized to your needs. This includes projections for external consultants and internal resources needed for a successful migration. As part of this process, we conduct in-depth evaluations of your on-premises, cloud, and SaaS environments, identifying vulnerabilities and providing strategic recommendations to mitigate quantum risks. 

To support implementation, our team provides project management estimates, delivers training for your internal teams, and ensures your efforts stay aligned with evolving regulatory standards. Once the new cryptographic solutions are in place, we perform post-deployment validation to confirm that the implementation is both effective and resilient. 

Conclusion

Cryptographic agility is not just a technical goal but a critical part of building resilient systems in a world of constant change. It requires collaboration between cryptographers, developers, implementers, and security professionals, all working together to stay ahead of emerging threats. As cryptographic standards shift and new risks appear, organizations must be prepared to adapt quickly and securely. By making agility a core part of system design and security planning, we can build a future where strong protection keeps pace with innovation. 

OpenSSL Signing with EC’s PKCS#11 Wrapper

Introduction

When it comes to securing digital communications or verifying the integrity of data, OpenSSL is one of the go-to tools for developers and security professionals. Whether you’re signing files, creating certificates, or verifying digital signatures, OpenSSL provides a flexible and powerful command-line interface to handle cryptographic operations.

But there’s a catch—by default, OpenSSL works with software-based keys stored on disk. While that may be fine for development or internal testing, it’s not ideal for production environments where the security of private keys is critical. This is where PKCS#11 steps in.

PKCS#11 is a standard API that lets software interact with cryptographic tokens like smart cards, USB tokens, and HSMs (Hardware Security Modules). These tokens are designed to store keys securely and perform operations like signing or encryption directly within the device—meaning your private key never leaves the hardware.

Now, integrating OpenSSL with a PKCS#11-compatible device isn’t always plug-and-play. That’s why we use a PKCS#11 wrapper, which acts as a bridge between OpenSSL and the hardware-backed cryptographic provider. The wrapper makes it possible to run familiar OpenSSL commands while offloading the actual signing operations to the HSM or token.

In this blog, I’ll walk you through how to perform OpenSSL-based signing using CodeSign Secure and Encryption Consulting’s PKCS#11 wrapper on both Ubuntu and Windows, covering setup, configuration, and execution. Whether you’re working in enterprise environments or building a secure signing process for your app, this setup will help you strengthen your cryptographic hygiene without a steep learning curve.

OpenSSL PKCS11 – Ubuntu

Installation on Client System

Step 1: Go to EC CodeSign Secure’s v3.01 Signing Tools section and download the PKCS11 Wrapper for Ubuntu.

Signing Tools

Step 2: After that, generate a P12 Authentication certificate from the System Setup > User > Generate Authentication Certificate dropdown.

P12 Certificate

Step 3: Go to your Ubuntu client system and edit the configuration files (ec_pkcs11client.ini and pkcs11properties.cfg) included in the PKCS11 Wrapper download.

edit config files


Prerequisites for Ubuntu System

Now, let’s install some prerequisites in your client system to run the PKCS11 Wrapper.

Step 1: Install OpenSSL and the PKCS#11 engine packages:

sudo apt install -y openssl libengine-pkcs11-openssl gnutls-bin xxd

install OpenSSL

Step 2: Create a config file for PKCS#11

create Config file

Step 3: Enter the appropriate details in the config file:

openssl_conf = openssl_init
[openssl_init]
engines = engine_section
[engine_section]
pkcs11 = pkcs11_section
[pkcs11_section]

#Path to the OpenSSL PKCS11 Engine
dynamic_path = "<Path to libpkcs11.so>"
MODULE_PATH = "<Path to ec_pkcs11client.so>"

appropriate details

The ec_pkcs11client.so path depends on where you store the file.

The libpkcs11.so path depends on your Linux distribution and version:

  • Ubuntu 18.04: /usr/lib/x86_64-linux-gnu/engines-1.1/libpkcs11.so
  • Ubuntu 20.04: /usr/lib/x86_64-linux-gnu/engines-1.1/libpkcs11.so
  • Ubuntu 22.04: /usr/lib/x86_64-linux-gnu/engines-3/libpkcs11.so
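
If you are not sure which path applies to your system, the package installed in Step 1 can tell you where the engine was placed:

dpkg -L libengine-pkcs11-openssl | grep libpkcs11.so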

Step 4: Set the environment variable pointing to the openssl.conf file

export OPENSSL_CONF=<path to openssl.conf>

Set the environment variable
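
Before moving on, it’s worth confirming that OpenSSL can actually load the engine through this configuration. A quick check (run in the same shell where you exported OPENSSL_CONF):

# Should list the pkcs11 engine and report it as available
openssl engine pkcs11 -t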

Perform Signing and Verification using PKCS11 Wrapper

Now that all the configurations and prerequisites are in place, let’s perform the signing operation first. 

The signing command will look something like this (ensure you run this command only inside the folder where your PKCS11 Wrapper is installed):

openssl pkeyutl -engine pkcs11 -sign -in <path of the file you want to sign> -inkey "pkcs11:object=<private key alias>;type=private" -keyform engine -out <path of the signed file>

For example: openssl pkeyutl -engine pkcs11 -sign -in testfile.txt -inkey "pkcs11:object=CertEnrollTest;type=private" -keyform engine -out readme.sign.sha256

After successfully signing the file, let’s verify it using this command:

openssl pkeyutl -engine pkcs11 -verify -in <path of the original file> -inkey "pkcs11:object=<private key alias>;type=private" -keyform engine -sigfile <path of the signed file>

For example: openssl pkeyutl -engine pkcs11 -verify -in testfile.txt -inkey "pkcs11:object=CertEnrollTest;type=private" -keyform engine -sigfile readme.sign.sha256

Signing and Verification
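
Depending on your key type and how the wrapper handles raw input, you may prefer the more common pkeyutl pattern of hashing first and then signing the digest. This is a sketch under that assumption (an RSA key and standard pkeyutl behavior; file names are illustrative):

# Hash the file, then sign the digest, telling pkeyutl which digest was used
openssl dgst -sha256 -binary testfile.txt > testfile.sha256
openssl pkeyutl -engine pkcs11 -sign -in testfile.sha256 -inkey "pkcs11:object=CertEnrollTest;type=private" -keyform engine -pkeyopt digest:sha256 -out readme.sign.sha256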

OpenSSL PKCS11 – Windows

Installation on Client System

Step 1: Go to EC CodeSign Secure v3.01’s Signing Tools section and download the PKCS11 Wrapper for Windows.

Signing Tools section

Step 2: After that, generate a P12 Authentication certificate from the System Setup > User > Generate Authentication Certificate dropdown.

P12 Authentication certificate

Step 3: Go to your Windows client system and edit the configuration files (ec_pkcs11client.ini and pkcs11properties.cfg) included in the PKCS11 Wrapper download.

edit the configuration files

Prerequisites for Windows System

Now, let’s install some prerequisites in your client system to run the PKCS11 Wrapper.

Step 1: Download and install OpenSSL on your system from here.

Download and install OpenSSL


Step 2: Manually compile the OpenSSL PKCS#11 engine (libp11 from OpenSC). The commands below use pacman, so they assume an MSYS2 shell:

pacman -S git pkg-config libtool autoconf automake make gcc openssl-devel
git clone https://github.com/OpenSC/libp11.git
cd libp11
autoreconf -fi
./configure --prefix=/usr/local
make && make install

Step 3: In the configuration file (openssl.cnf) located in C:\Program Files\Common Files\SSL, add these lines:
openssl_conf = openssl_init
[openssl_init]
engines = engine_section
[engine_section]
pkcs11 = pkcs11_section
[pkcs11_section]

#Path to the Compiled OpenSSL PKCS11 from OpenSC – libp11
dynamic_path = <Path to compiled libp11 pkcs11.dll>
MODULE_PATH = <Path to ec_pkcs11client.dll>

configuration file
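
If OpenSSL on Windows does not pick this file up automatically, you can point it at the configuration explicitly before running the signing commands (the path shown is the default location from Step 3):

set OPENSSL_CONF=C:\Program Files\Common Files\SSL\openssl.cnf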

Perform Signing and Verification using PKCS11 Wrapper

Now that all the configurations and prerequisites are in place, let’s perform the signing operation first. 

The signing command will look something like this (ensure you run this command only inside the folder where your PKCS11 Wrapper is installed):

openssl dgst -engine pkcs11 -keyform engine -sign "pkcs11:object=<private key alias>;type=public" -sha256 -out <path of signed file> <path of file to sign>

For example:

openssl dgst -engine pkcs11 -keyform engine -sign "pkcs11:object=CertEnrollTest;type=public" -sha256 -out test-signed.bin testfile.txt

After successfully signing the file, let’s verify it using this command:

openssl dgst -engine pkcs11 -keyform engine -verify "pkcs11:object=<private key alias>;type=public" -sha256 -signature <path of signed file> <path of original file>

For example:

openssl dgst -engine pkcs11 -keyform engine -verify "pkcs11:object=CertEnrollTest;type=public" -sha256 -signature test-signed.bin testfile.txt

Signing and Verification
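
If you need to verify on a machine that cannot reach the token, one option is to verify against an exported copy of the signing certificate’s public key. A minimal sketch, assuming you have exported that key to a hypothetical pubkey.pem:

openssl dgst -sha256 -verify pubkey.pem -signature test-signed.bin testfile.txt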

Security Considerations

When you’re dealing with cryptographic operations—especially digital signing—the protection of your private keys is absolutely non-negotiable. A leaked or compromised private key is pretty much a worst-case scenario because it opens the door to impersonation, unauthorized access, and a whole lot of trust issues.

So, let’s talk about a few best practices to help keep your keys (and your reputation) safe:

  • Never store private keys in plain files: It might be convenient, but storing private keys as flat files on disk (even with file system permissions) is risky. All it takes is a misconfigured backup system or a curious admin to accidentally expose them. Instead, use secure key storage mechanisms—preferably hardware-backed.
  • Use HSMs or secure tokens whenever possible: Hardware Security Modules (HSMs) and smart cards are built specifically for secure key storage. They don’t just store keys—they perform operations (like signing or decryption) inside the device, so the private key never actually leaves. This adds a strong layer of protection, especially against malware or insider threats.
  • Lock down access to your PKCS#11 modules: Make sure only authorized users or services can talk to the PKCS#11 interface. Use PINs, role-based access, and proper auditing to prevent abuse. If your wrapper or HSM vendor supports logging, enable it—you’ll want a clear trail of who accessed what and when.
  • Avoid leaving tokens unlocked: It’s tempting to script things and leave tokens unlocked to “keep things running,” but that’s risky. Instead, look into automated unlock mechanisms that still respect session isolation or tools that securely cache credentials for short durations with tight access controls.
  • Validate what you’re signing: It sounds obvious, but make sure you’re not signing random or malicious files by accident. Build in sanity checks or use pre-signing verification steps to ensure only approved content goes through your signing flow.
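
For that last point, even a small pre-signing gate helps. A minimal sketch, assuming a hypothetical approved-hashes.sha256 manifest produced by your release process:

# Refuse to sign anything whose hash is not in the approved manifest
actual=$(sha256sum testfile.txt | awk '{print $1}')
grep -q "$actual" approved-hashes.sha256 || { echo "testfile.txt is not approved for signing"; exit 1; }
echo "Approved, proceeding to sign"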

At the end of the day, cryptographic tools are only as secure as the way they’re used. PKCS#11 gives you the power to offload critical operations to hardware—but you’ve still got to respect the basics: protect your keys, limit access, and always think one step ahead of an attacker.

Conclusion

Signing with OpenSSL using PKCS#11 might seem a bit technical at first, but once you’ve set it up—on either Ubuntu or Windows—it’s a smooth and secure workflow. By integrating a PKCS#11 wrapper, you can shift sensitive signing operations away from the software layer and into secure hardware like HSMs or tokens. This not only protects your private keys but also helps you meet compliance standards and security best practices.

If you’re looking for a streamlined way to get started, EC’s PKCS#11 wrapper makes the integration process much simpler. It’s built to be reliable, cross-platform, and compatible with a wide range of HSMs—so you can focus more on securing your operations and less on dealing with low-level plumbing. In today’s threat landscape, protecting your keys is protecting your business. Whether you’re signing code, documents, or certificates, combining OpenSSL with a robust PKCS#11 wrapper like EC’s is a smart, future-proof move.