Agent vs. Agentless: Choosing Your Certificate Lifecycle Management Deployment 

Effective Certificate Lifecycle Management (CLM) is crucial for modern digital security. When implementing a CLM solution, a key decision is choosing between agent-based and agentless architectures, which impacts deployment, operations, and scalability. 

Agent-Based CLM Deployments 

An agent-based architecture involves installing a lightweight software component (agent) directly on each endpoint (e.g., servers, devices, VMs) requiring certificate management. These agents communicate with a central CLM platform, performing tasks like scanning, CSR generation, and automated installation locally. 
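
To make that division of labour concrete, here is a minimal, hypothetical sketch of the kind of check an agent might run locally: it inspects a certificate on disk and, if expiry is near, generates a key and CSR on the endpoint for the central platform to fulfil. The file path, host name, and workflow are illustrative only and do not represent any specific vendor's agent.

```python
# Hypothetical agent-side check: scan a local certificate and, if it is close
# to expiry, build a CSR locally for the central CLM platform to fulfil.
# The file path, host name, and endpoint workflow are placeholders.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

RENEWAL_THRESHOLD_DAYS = 30

def days_until_expiry(cert_path: str) -> int:
    with open(cert_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    # not_valid_after_utc requires cryptography >= 42; older versions expose not_valid_after
    remaining = cert.not_valid_after_utc - datetime.datetime.now(datetime.timezone.utc)
    return remaining.days

def build_csr(common_name: str):
    # Key generation and CSR creation happen locally, so the private key
    # never leaves the endpoint; only the CSR is sent to the platform.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, common_name)]))
        .sign(key, hashes.SHA256())
    )
    return key, csr.public_bytes(serialization.Encoding.PEM)

if days_until_expiry("/etc/pki/tls/certs/app.pem") < RENEWAL_THRESHOLD_DAYS:
    private_key, csr_pem = build_csr("app.internal.example.com")
    # The agent would now POST csr_pem over an outbound HTTPS connection to the
    # central CLM platform and install the certificate it receives in response.
```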

Key Advantages

  • Granular Control & Deep Visibility: Agents offer fine-grained control and access to local configurations, enabling proactive issue resolution. 
  • Real-time Monitoring: Continuous, real-time monitoring allows for immediate detection and remediation of certificate issues. 
  • Complex Environment Support: Ideal for diverse OS, legacy systems, or air-gapped networks. 
  • Enhanced Security: Provides endpoint-level security features like encrypted local private key storage. 
  • Cross-Network Capabilities: Agents can manage devices in segmented networks by initiating outbound connections. 
  • Automation: Automates processes directly on devices, reducing manual intervention. 

Disadvantages & Considerations

  • Deployment & Maintenance Overhead: Significant effort for installation, configuration, and ongoing updates across many endpoints. 
  • Resource Consumption: Agents consume CPU, memory, and disk space on endpoints. 
  • Change Management: Requires robust change management for rollouts and updates. 

Agentless CLM Deployments 

An agentless architecture eliminates endpoint software installation. A centralized CLM platform interacts remotely using existing network protocols, APIs, or standard certificate management protocols. 
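
For a concrete sense of the model, the sketch below performs a simple agentless check from a central host: it connects to each endpoint over TLS and reads the remaining certificate lifetime, with nothing installed on the endpoint. The host names are placeholders, and real platforms use richer mechanisms (ACME, EST, SSH, vendor APIs) than this toy example.

```python
# Minimal agentless expiry check: the central platform connects over TLS and
# reads each endpoint's certificate remotely; nothing is installed on the endpoint.
import socket
import ssl
from datetime import datetime, timezone

def cert_days_remaining(host: str, port: int = 443) -> int:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

for endpoint in ["app1.internal.example.com", "app2.internal.example.com"]:
    try:
        print(endpoint, cert_days_remaining(endpoint), "days remaining")
    except (OSError, ssl.SSLError) as exc:
        print(endpoint, "unreachable or TLS error:", exc)
```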

Key Advantages

  • Simplified Deployment & Scalability: No endpoint software reduces complexity, making it easy to deploy and scale in dynamic environments. 
  • Reduced Overhead & Cost Efficiency: Lower operational costs due to no agent development, deployment, or updates. 
  • Minimal Endpoint Resource Usage: All CLM tasks are performed on the central server, freeing endpoint resources. 
  • Broader Environment Support: Compatible with various platforms, including network appliances, IoT devices, and cloud infrastructure. 
  • Rapid Implementation: Beneficial for immediate deployment or environments with agent installation restrictions. 
  • Enhanced Automation: Centralizes and automates all certificate lifecycle processes. 

Disadvantages & Considerations

  • Limited Granularity: May offer less deep insight into highly specific local certificate stores compared to agents. 
  • Network Dependencies: Relies heavily on robust network connectivity and proper firewall rules. 
  • Security Risks: Potential for credential compromise or unauthorized access if not rigorously secured. 
  • Complexity of Remote Access: Configuring access permissions and protocol settings for diverse endpoints can be intricate. 

Hybrid Approaches: The Best of Both Worlds? 

Many large enterprises have a mix of legacy and modern infrastructure, making a purely agent-based or agentless approach impractical. A hybrid CLM deployment combines both models. 

How It Works

  • Strategic Deployment: Agents are deployed for critical, sensitive, or hard-to-reach systems needing deep visibility and real-time control. 
  • Agentless for Scale: Agentless capabilities manage scalable, dynamic environments like cloud resources, Kubernetes clusters, and network devices. 
  • Unified Platform: An ideal CLM solution supports both models from a single, centralized platform for holistic visibility. 

Making the Right Choice: Key Decision Factors

The best choice for your CLM deployment hinges on your organization’s unique infrastructure, security posture, and operational goals. 

Infrastructure Landscape

Your environment’s diversity is crucial. Highly heterogeneous setups, encompassing various operating systems and device types, often benefit from a hybrid approach or a robust agentless solution capable of broad protocol support. For large, dynamic, and modern environments, such as those with ephemeral containers, rapidly scaling cloud instances, or extensive cloud-native deployments, agentless solutions are typically favored due to their inherent agility and ease of management at scale.

Conversely, for large, complex, and traditional environments with legacy systems, diverse on-premises configurations, or highly specialized hardware, agent-based solutions often provide the necessary deep visibility and granular control. Don’t forget your network topologies; firewall rules, segmentation, and available bandwidth heavily influence the practicality and performance of remote access for agentless solutions. 

Security Posture & Compliance

Consider your organization’s risk tolerance regarding agent deployment (potential endpoint compromise if not properly secured) versus remote access risks (credential management, network exposure). Evaluate how each CLM model contributes to detailed audit trails and helps meet compliance requirements. Also, assess the importance of enforcing consistent certificate policies directly at the endpoint level, which agents are often better equipped to do, especially for local key protection. 

Operational Considerations  

Think about how well the CLM solution integrates with your existing tools like ITSM, SIEM, CMDB, and orchestration platforms; seamless integration reduces friction. Evaluate your team’s skillset and available resources—are they more adept at managing agents or configuring network settings and APIs for agentless solutions? Budget is another factor, but look beyond just licensing costs to the total cost of ownership, including operational overhead and maintenance. Finally, assess any potential performance overhead an agent might introduce on critical systems. 

Future-Proofing

Your CLM choice should align with your long-term strategy. If you’re embracing cloud adoption, select a solution that adapts seamlessly to hybrid and multi-cloud environments. For organizations with high DevOps/DevSecOps maturity, ensure the CLM solution integrates smoothly into CI/CD pipelines for automated and programmable certificate provisioning in fast-paced development cycles. 

Key Differences: Agent-Based vs. Agentless 

| Feature | Agent-Based | Agentless |
| --- | --- | --- |
| Setup Complexity | Requires installation on every endpoint; potential reboots. | Centralized and straightforward; no endpoint software needed. |
| Control Granularity | Device-level control; deep insight into local stores; proactive fixes. | Relies on native endpoint capabilities (SSH, APIs); less insight into local application configurations. |
| Compatibility | Suitable for diverse environments, but requires specific agent versions per OS. | Leverages standard protocols and certified integrations. |
| Scalability | Complex to scale due to per-endpoint installation and maintenance. | Highly scalable; ideal for dynamic, ephemeral environments. |
| Security | Encrypted local storage and endpoint-level policy enforcement, though the agent itself can be a target. | Depends on device-native protocols and secure credential management; focus is on central platform security. |
| Maintenance | Ongoing agent updates, patching, and configuration changes required. | Minimal; primarily managing the central CLM platform and integrations. |
| Network Dependencies | Can operate disconnected for periods; agents initiate outbound connections. | Highly dependent on network connectivity, routing, and firewall rules for inbound access. |
| Resource Consumption | Agents consume CPU, memory, and disk on endpoints, potentially impacting performance. | No local resource consumption on endpoints; all CLM tasks run on the central server. |
| Service Account Mgmt. | Needs separate account management/credentials for each agent; complex at scale. | Simplified through centralized credential rotation. |

How can Encryption Consulting Help? 

Encryption Consulting, through its CertSecure Manager CLM solution, effectively addresses the agent vs. agentless dilemma by providing a flexible and unified platform. This allows organizations to leverage agentless capabilities for modern, dynamic environments like cloud-native setups and DevOps. Simultaneously, CertSecure Manager supports agent-based deployments for complex, legacy on-premises systems or highly segmented networks, providing the granular control, deep visibility, and local key protection essential for these specific needs. This comprehensive hybrid approach ensures seamless, automated CLM across an entire, diverse IT landscape from a single pane of glass, optimizing both security and operational efficiency. 

Conclusion 

There’s no single “correct” answer for CLM deployment. Agent-based offers robust control and deep visibility, while agentless provides simplicity, scalability, and cost efficiency. For most enterprises, a hybrid approach will be the most effective, leveraging the strengths of both. 

The ultimate goal is robust automation and comprehensive visibility over your entire certificate landscape. By carefully evaluating your environment and choosing a CLM solution with flexible deployment, you build a resilient, proactive security posture against certificate-related outages, compliance failures, and breaches. 

Enterprise Guide to PQC Migration 

Think about the major shifts we have seen over the years. Every few decades, a new wave of technology arrives and reshapes everything around us. The internet connected people in ways never imagined, smartphones put the world in our pockets, and cloud computing changed how businesses operate. And now, it is quantum computing’s turn. However, unlike those past revolutions, this one will not start with a flashy product launch or a viral app. Instead, it will arrive quietly, the moment quantum computers become powerful enough to break the cryptographic systems that secure our digital world.  

Today, encryption protects everything from financial transactions and corporate secrets to personal messages and software updates by relying on mathematical problems that are very difficult for classical computers to solve, like factoring large prime numbers (RSA) or solving elliptic curve equations (ECC).

But quantum computers operate on entirely different principles. Once they reach a certain threshold, they will be able to solve these complex problems exponentially faster using algorithms like Shor’s. Tasks that would take classical machines thousands of years could be done in minutes or less. In fact, in December 2024, Google announced its 105-qubit Willow quantum processor, which performed a computation in under five minutes that would have taken the world’s fastest supercomputer an estimated 10 septillion years to solve. That is nearly a quadrillion times longer than the age of the universe.

This kind of advancement has caught the attention of researchers, governments, technology leaders, and cybersecurity experts. The systems we rely on today were never designed to withstand quantum threats, and patching them is not enough. We need a new foundation for digital security, one that can endure the power of quantum computation. This is where Post-Quantum Cryptography (PQC) comes in.

What is PQC? 

Post-Quantum Cryptography refers to a new generation of cryptographic algorithms that are designed to resist quantum attacks. These algorithms rely on mathematical problems like structured lattices, hash-based constructions, and multivariate equations that remain difficult for both classical and quantum computers. One of the growing concerns driving the adoption of PQC is the risk of “harvest now, decrypt later” attacks, where adversaries collect encrypted data today with the expectation that they will be able to decrypt it in the future once quantum computing capabilities become available. In fact, around 65 percent of organizations are already concerned about this threat, making it a critical motivation for beginning the transition to quantum-resistant solutions as early as possible.

To help everyone prepare, the U.S. National Institute of Standards and Technology (NIST) has been leading a global initiative to standardize post-quantum algorithms. After years of analysis, testing, and public review, these are the main algorithms that have been selected:

| Current Specification Name | Initial Specification Name | FIPS Name | Parameter Sets | Type |
| --- | --- | --- | --- | --- |
| ML-KEM | CRYSTALS-Kyber | FIPS 203 | Kyber512, Kyber768, Kyber1024 | Lattice-Based Cryptography |
| ML-DSA | CRYSTALS-Dilithium | FIPS 204 | Dilithium2, Dilithium3, Dilithium5 | Lattice-Based Cryptography |
| SLH-DSA | SPHINCS+ | FIPS 205 | SPHINCS+-128s, SPHINCS+-192s, SPHINCS+-256s | Hash-Based Cryptography |
| FN-DSA | FALCON | FIPS 206 | Falcon-512, Falcon-1024 | Lattice-Based Cryptography |
| HQC | HQC (Hamming Quasi-Cyclic) | Pending Standardization | HQC-128, HQC-192, HQC-256 | Code-Based Cryptography |

These standardized algorithms serve as the foundation for future-proof encryption and are now ready to be integrated into real-world systems.
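
As a small taste of what such integration looks like in code, the sketch below runs an ML-KEM key encapsulation using the open-source liboqs-python wrapper (`oqs`). This assumes liboqs is installed and built with ML-KEM support, and the algorithm identifier string varies across library versions (older releases expose it as `Kyber1024`); it is an illustration, not a production key-exchange implementation.

```python
# Key encapsulation with ML-KEM (FIPS 203) via the liboqs-python wrapper.
# Assumes the `oqs` package and an ML-KEM-enabled liboqs build are installed;
# the algorithm name string may differ in older library versions.
import oqs

ALG = "ML-KEM-1024"

with oqs.KeyEncapsulation(ALG) as receiver:
    with oqs.KeyEncapsulation(ALG) as sender:
        public_key = receiver.generate_keypair()                    # receiver publishes this
        ciphertext, shared_secret_sender = sender.encap_secret(public_key)
        shared_secret_receiver = receiver.decap_secret(ciphertext)  # receiver recovers the secret

assert shared_secret_sender == shared_secret_receiver
print("Established a", len(shared_secret_sender), "byte shared secret with", ALG)
```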

Roadmap to PQC migration 

Adopting post-quantum cryptography is not a one-time fix. It is a step-by-step process that touches every layer of an organization’s digital infrastructure. To make this transition easier, organizations can follow a well-defined PQC migration roadmap divided into four coordinated phases. While the exact sequence and timing may differ based on organizational needs, the phases are often closely connected and build on each other to ensure a smooth and effective transition. These four phases are:

PQC Migration Roadmap

Phase 1: Preparation 
This first step lays the groundwork. It involves setting clear migration goals, appointing a responsible lead, identifying key stakeholders, and aligning teams around a shared strategy. A solid foundation at this stage ensures smoother progress ahead. 

Phase 2: Baseline Assessment 
With leadership in place, the focus shifts to understanding the organization’s cryptographic environment. This includes creating a detailed inventory of cryptographic assets, such as digital certificates, algorithms, protocols, and keys, across your organization and prioritizing them based on sensitivity, expected lifespan, business needs, and exposure to quantum threats.

Phase 3: Planning and Implementation 
This is where strategy meets execution. Organizations begin designing, acquiring, and deploying PQC solutions. Whether these solutions are developed in-house or procured from partners, this phase focuses on integration, testing, and secure rollout across systems. 

Phase 4: Monitoring and Evaluation 

The final phase ensures that the cryptographic posture stays strong over time. It includes tracking progress, validating the effectiveness of implementations, aligning with compliance standards, and continuously adapting as quantum technology evolves. 

With the full PQC migration roadmap now outlined, let us take a closer look at each phase and explore the key activities involved in building a secure and future-ready migration path. 

Preparation 

The first phase focuses on establishing a strong foundation for PQC migration. This involves clarifying objectives, setting clear goals, assigning leadership, understanding the current cryptographic environment, and aligning all stakeholders around the migration effort. 

To build this foundation, the following key steps should be taken: 

Understand Why PQC Matters and Define Your Goals 

Start by determining whether your organization should begin migrating now or later. This depends on how long your data needs to remain secure, how complex your systems are, and how long the PQC migration process might take. While there is no exact timeline for when quantum computers will become a real threat, it is likely to happen sooner than many expect.

Even if your data is not at risk today, it may still be stored by attackers and decrypted in the future once quantum computers mature. Therefore, while defining the migration goals, one must consider factors like the organization’s attack surface, interdependence, sensitivity of the data, and the urgency of action. 

Assign a PQC Migration Lead 

Choose someone to lead the PQC migration process. This individual or team will be responsible for setting clear timelines, coordinating with vendors, managing internal communication, and pushing the effort forward. Since PQC impacts many parts of the organization, the lead must be capable of working across departments and communicating with both technical and business teams.

Alongside appointing a lead, it is also recommended to establish a dedicated PQC team. This team should assign clear responsibilities, oversee risk mapping, align efforts with the overall business strategy, and act as the nerve center for the PQC migration project. Together, the lead and the team play an active role in embedding PQC goals into all enterprise initiatives, ensuring that quantum readiness becomes a core part of the organization’s long-term security and technology planning while enabling a coordinated and future-ready approach. 

Review Existing Cryptographic Assets and Dependencies 

The next step is to understand your organization’s current cryptographic setup. Start by reviewing any existing inventories, risk assessments, and cryptographic bills of materials. These may have been created at different times for various purposes. While reviewing them, document where your data and assets are located, who is responsible for them, who has access, why they were created, and how they are used.

This thorough review provides a complete picture of your cryptographic environment and helps to identify any gaps. It also helps prevent duplication of effort during migration and can reveal if there are any existing infrastructures, platforms, or libraries that already support post-quantum cryptography. 

Identify Key Stakeholders and Begin Engagement 

Identifying and involving relevant stakeholders early in the process is essential. These may include system owners, application teams, infrastructure managers, compliance leaders, and third-party vendors. Engaging these groups early helps to assess solution readiness, understand potential integration challenges, and uncover practical considerations such as hardware limitations, cost implications, or software constraints.

Document the goals and scope of the migration effort, share clear summaries of planned changes, and monitor how well teams are aligned and responding. This communication builds internal support and reduces friction during implementation. 

Outcomes

By the end of this phase, the organization will have established a strong foundation for PQC migration by clearly defining its rationale for transitioning to post-quantum cryptography and setting a realistic timeline based on its unique risk profile and data sensitivity. A qualified migration lead and team are appointed to coordinate the effort, ensuring alignment across departments and with external partners.

Existing cryptographic inventories are thoroughly reviewed and documented, providing a clear view of the organization’s current cryptographic posture and readiness. Key stakeholders, including internal teams and external vendors, are identified and actively engaged through consistent communication, helping to align priorities, gather essential insights, and build broad-based support. As a result, the organization is equipped with the clarity, leadership, and stakeholder alignment necessary to confidently advance to the next phase of PQC migration. 

Baseline Assessment  

Building on the foundation established in Phase 1, the next phase focuses on developing a comprehensive understanding of the organization’s cryptographic environment. This includes identifying what cryptographic assets are in place, how critical they are, and what resources are needed to support further discovery and prioritization. The insights gained during this phase empower the migration lead and supporting teams to make informed decisions about where to begin implementation and how to allocate effort based on risk, data sensitivity, system dependencies, and urgency. 

To support this phase, the following activities are critical for understanding the organization’s current cryptographic landscape and potential risks: 

Define a Discovery Strategy and Allocate Resources 

In this step, the PQC migration lead checks if the existing inventory from Phase 1 is detailed enough or if further discovery of cryptographic assets and their usage is needed. Based on this assessment, a discovery plan is established, outlining whether to continue exploring cryptographic assets or to move forward with prioritization. As part of this process, it is important to map cryptographic assets to business risk in order to understand which systems or data are most critical and require immediate attention. The lead also figures out the budget and resources needed to support the next steps.  

At this stage, engaging with external partners or vendors who specialize in PQC readiness assessments, such as Encryption Consulting, can provide valuable guidance and significantly accelerate discovery efforts. These collaborations offer deeper visibility into cryptographic usage, help uncover hidden vulnerabilities, and validate internal assessments. A clear and realistic discovery plan, shaped by available resources, helps keep the process focused, avoids unnecessary effort, and sets the stage for smooth progress in later phases. 

Develop a Centralized Cryptographic Inventory 

With the discovery plan in place, the focus shifts to creating or refining a centralized inventory of cryptographic assets. This inventory is crucial for managing PQC migration at scale, identifying where legacy cryptography is used, and ensuring no critical systems are overlooked. The migration lead and their team work closely with system operators to document key technical details such as what algorithms are in use, which assets they protect, and how systems are architected. 

To support this effort, organizations may use automated tools that scan hardware, software, and embedded components to detect cryptographic algorithms and key usage. The choice of tools depends on the organization’s risk tolerance and the level of visibility needed. Additionally, teams also collect detailed information about each asset, including who manages it, the data it protects, and any protocols or interfaces it relies on. It is also important to flag unknowns, like offline or undocumented keys, to highlight blind spots that could impact PQC migration. Grouping assets by supplier helps prepare the organization for vendor coordination in the upcoming phases. 
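
As a simplified illustration of what such discovery tooling does, the sketch below walks a directory of PEM certificates and records the key algorithm, key size, and expiry of each, flagging quantum-vulnerable public keys. The directory path is a placeholder, and a real inventory would also need to cover keys, protocols, libraries, and embedded components.

```python
# Toy certificate inventory: record algorithm, key size, and expiry for each
# PEM certificate under a directory, flagging quantum-vulnerable key types.
import pathlib

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa

def describe(cert_path: pathlib.Path) -> dict:
    cert = x509.load_pem_x509_certificate(cert_path.read_bytes())
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        algo, size = "RSA", key.key_size
    elif isinstance(key, ec.EllipticCurvePublicKey):
        algo, size = f"ECC ({key.curve.name})", key.curve.key_size
    else:
        algo, size = type(key).__name__, None
    return {
        "file": str(cert_path),
        "subject": cert.subject.rfc4514_string(),
        "algorithm": algo,
        "key_size": size,
        "not_after": cert.not_valid_after_utc.isoformat(),  # cryptography >= 42
        "quantum_vulnerable": algo.startswith(("RSA", "ECC")),
    }

inventory = [describe(p) for p in pathlib.Path("/etc/pki/inventory").rglob("*.pem")]
for entry in inventory:
    print(entry)
```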

Identify and Prioritize Critical Cryptographic Assets 

With a clear inventory in place, the next step is to determine which assets are most critical and should be prioritized for PQC migration. The migration lead evaluates assets based on data sensitivity, expected lifespan, exposure to threats, complexity of dependencies, and the asset’s role in key business initiatives. Prioritizing assets by their impact on business operations ensures that migration efforts align with enterprise goals and risk tolerance. This evaluation helps to identify systems that need immediate PQC updates and those that can follow in later stages. The lead works with vendors and system operators to finalize these priorities. 

A detailed risk assessment may also be conducted to uncover security, operational, and compliance risks, especially considering threats from future quantum computers. Whether or not a formal risk assessment is done, the team should use timelines for migration urgency and asset lifespan to create a practical, prioritized migration plan that balances risk and resources. In cases where high-risk systems are clearly identified during discovery, early migration efforts may begin in this phase, especially when supported by vendor-ready solutions. This proactive approach helps reduce exposure while the broader planning continues. 

Outcomes

By the end of Phase 2, the organization has gained a structured and actionable understanding of its cryptographic landscape. The migration lead has developed a discovery plan, evaluated whether further inventorying is needed, and identified an appropriate budget to support it. A centralized and categorized inventory has been created, capturing essential technical details and exposing gaps in visibility.

The organization has also identified which cryptographic assets are more critical and established migration priorities based on data sensitivity, system lifespan, and quantum risk exposure. In cases where high-risk systems have been clearly identified, early migration efforts may also be initiated at this stage. Equipped with this clarity, the organization is ready to move into the planning and implementation phase with precision and purpose. 

Planning and Implementation 

This phase is where planning meets action. With a prioritized asset list in hand, your organization begins defining how to migrate each asset to post-quantum cryptography (PQC). This stage marks the full-scale enterprise rollout of PQC, including the start of decommissioning legacy cryptographic systems. Alongside these long-term PQC migration strategies, immediate security measures are put in place to reduce risk during the transition period. Budgeting, technical evaluations, vendor coordination, and solution deployment all come together in this phase to ensure the transition to PQC is both practical and aligned with the organization’s needs. 

To prepare for implementation, organizations should focus on the following coordinated planning and implementation efforts: 

Set a Migration Plan and Budget 

Using the prioritized list from Phase 2, the migration lead decides the next steps for each asset, whether to mitigate risk, start migration to PQC, or accept quantum risk for now. This includes coordinating with vendors to better understand each asset’s risk and requirements, especially if solutions will come from external providers.

The team estimates migration costs, identifies necessary resources, and sets a realistic budget. To plan effectively, it is important to outline what work needs to be done, when it will happen, and what each task might cost.  

Identify Suitable Solutions and Strengthen Short-Term Defenses  
Next, the focus shifts to finding the right technical solutions. The migration lead works with system operators to figure out what can be done in-house and what must be acquired from vendors. For each system, it is important to identify whether the migration can happen through software updates or if hardware upgrades will be required. The team should also verify whether available solutions comply with recognized PQC standards like those from NIST. Exploring agile cryptography options is encouraged to enable easier future updates and avoid vendor lock-in. 

Throughout this process, engagement with vendors remains critical. Key questions to address include: When will PQC-ready products be available? Will updates require hardware changes? What will they cost? How disruptive will implementation be? And will vendors provide a cryptographic bill of materials (CBOM) for visibility? 

While waiting for full PQC migration, implement short-term steps to reduce risk. This can include shortening certificate lifespans, increasing key sizes, modernizing protocols like TLS 1.3, tightening physical security, and adding extra data protection layers. Where commercial solutions are not yet available, internal development is initiated in parallel, supported by detailed planning, realistic timelines, and continuous market monitoring. 
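
As one small example of such an interim step, the sketch below builds a client-side TLS context that refuses to negotiate anything older than TLS 1.3; the host name is a placeholder.

```python
# Interim hardening example: require TLS 1.3 for an outbound connection.
import socket
import ssl

HOST = "internal-service.example.com"  # placeholder

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3   # reject TLS 1.2 and below

with socket.create_connection((HOST, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        print("Negotiated", tls.version(), "with cipher", tls.cipher()[0])
```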

Implement PQC Solutions 
With planning, budgeting, and technical evaluations in place, the organization moves to execution. The migration lead begins acquiring commercial PQC solutions or allocating resources for internal development based on the asset priorities established earlier. 

Finally, implementation begins. This includes installing PQC updates, deploying internal solutions, and applying both short-term and long-term mitigations. The migration lead manages timelines, tracks progress, and prepares for any disruptions that migration efforts might cause. If systems are deployed in phases, it is important to ensure forward and backward compatibility between legacy and updated components. As updates are rolled out, the organization updates its inventory to reflect the new cryptographic status of each system. Where applicable, this is also the time to begin decommissioning legacy cryptographic systems that are no longer needed, reducing long-term risk exposure. 

Outcomes

By the end of this phase, the organization transitions from planning to hands-on execution. A clear PQC migration plan and budget drive the process forward. The organization identifies the right solutions, determines how they will be implemented, and puts short-term protections in place for sensitive systems. This stage also marks the beginning of decommissioning legacy cryptographic systems, reducing long-term exposure to known vulnerabilities.

Vendors are engaged, compliance is documented, and agile cryptographic strategies are considered for long-term flexibility. PQC solutions are either acquired or developed and rolled out across systems. The organization’s inventory is updated throughout, offering a current and accurate view of progress and preparedness for the next phase. 

Monitoring and Evaluation 

The final phase focuses on sustaining momentum and ensuring long-term effectiveness. As PQC solutions are deployed, the organization shifts focus to tracking, validating, and maintaining the progress of its PQC migration. This includes ensuring cryptographic upgrades are effective, documenting compliance, measuring the success of migration efforts, and staying aligned with evolving industry standards. The migration lead also reassesses workforce readiness and creates ongoing processes to adopt crypto agility, which means the ability to quickly adapt cryptographic systems as new threats and technologies emerge. This approach ensures the organization remains resilient against future challenges.  

For this phase, the following operational steps are essential: 

Validate Implementation and Ensure Standards Alignment 

The first step in this phase is verifying that deployed PQC solutions are operating as intended. The migration lead works with system operators to validate that systems meet their cryptographic and operational requirements, such as forward and backward compatibility, and that any identified issues have been resolved. Once verified, the inventory should be updated to reflect the current status. 

Alongside technical validation, the organization must make sure its migration efforts align with relevant industry regulations. For example, healthcare providers may need to align with HIPAA, while organizations operating in the EU should reference NIS2. These standards encourage activities like regular risk assessments, system resilience, and future-proofing, and aligning with them strengthens both security posture and audit readiness. 

Track Progress and Support the Workforce 

The migration lead defines performance indicators that track how well PQC is being implemented. This might include metrics on how much sensitive data is now protected with post-quantum cryptography or how many systems have migrated. These indicators are tied to the migration strategy established earlier and may be benchmarked against NIST or NSA guidance to ensure relevance and consistency. 

At the same time, it is important to assess whether internal teams are ready to support and maintain PQC systems. If gaps are identified, targeted training should be provided to ensure teams have the necessary knowledge and skills to manage, operate, and troubleshoot post-quantum cryptographic solutions effectively. The migration lead works with system owners and operators to identify any skills gaps or workflow challenges, and helps develop targeted training, adjust responsibilities, or bring in new talent. This ensures that post-migration support is not only technically sound but also sustainable over time. 

Establish Continuous Monitoring and Future Preparedness 

PQC migration is not a one-time effort. The organization must stay agile by regularly monitoring the cryptographic landscape, updating its inventory, tracking compliance with new standards, and re-evaluating risk. The migration lead should establish a routine process for documenting changes, retiring obsolete systems, staying ahead of quantum developments, and maintaining long-term cryptographic resilience by adopting crypto agility. 

Outcomes

By the end of this phase, the organization has validated the success of its PQC migration and ensured alignment with applicable regulatory requirements. Progress is tracked through clear, measurable indicators, and the workforce is equipped to manage post-quantum systems effectively. Continuous monitoring processes are established to update inventories, adjust to new risks, and stay aligned with evolving standards.

With crypto agility embedded into its strategy, the organization is positioned to swiftly adapt to future cryptographic challenges and remain secure and adaptable in a rapidly changing threat landscape. 

How can Encryption Consulting help?  

If you are still wondering where and how to begin your post-quantum journey, Encryption Consulting is here to support you. You can count on us as your trusted partner, and we will guide you through every step with clarity, confidence, and real-world expertise.  

  • PQC Assessment

    We begin by helping you get a clear picture of your current cryptographic setup. This includes discovering and mapping out all your cryptographic assets, such as certificates, keys, and other cryptographic dependencies. We identify which systems are at risk from quantum threats and assess how ready your current setup is, including your PKI, HSMs, and applications. The result? A clear, prioritized action plan backed by a detailed cryptographic inventory report and a quantum risk impact analysis.

  • PQC Strategy & Roadmap

    We develop a step-by-step migration strategy that fits your business operations. This includes aligning your cryptographic policies with NIST and NSA guidelines, defining governance frameworks, and establishing crypto agility principles to ensure your systems can adapt over time. The outcome is a comprehensive PQC strategy, a crypto agility framework, and a phased migration roadmap designed around your specific priorities and timelines.

  • Vendor Evaluation & Proof of Concept

    Choosing the right tools and partners matters. We help you define requirements for RFPs or RFIs, shortlist the best-fit vendors for quantum-safe PQC algorithms, key management, and PKI solutions, and run proof-of-concept testing across your critical systems. You get a detailed vendor comparison report and recommendations to help you choose the best.

  • PQC Implementation

    Once the plan is in place, it is time to put it into action. Our team helps you seamlessly integrate post-quantum algorithms into your existing infrastructure, whether it is your PKI, enterprise applications, or broader security ecosystem. We also support hybrid cryptographic models combining classical and quantum-safe algorithms, ensuring everything runs smoothly across cloud, on-premises, and hybrid environments. Along the way, we validate interoperability, provide detailed documentation, and deliver hands-on training to make sure your team is fully equipped to manage and maintain the new system.

  • Pilot Testing & Scaling

    Before rolling out PQC enterprise-wide, we test everything in a safe, low-risk environment. This helps validate performance, uncover integration issues early, and fine-tune the approach before full deployment. Once everything is tested successfully, we support a smooth, scalable rollout, replacing legacy cryptographic algorithms step by step, minimizing disruption, and ensuring systems remain secure and compliant. We continue to monitor performance and provide ongoing optimization to keep your quantum defense strong, efficient, and future-ready.

Reach out to us at [email protected] and let us build a customized roadmap that aligns with your organization’s specific needs.  

Conclusion 

The quantum era is not coming; it is already on the horizon. While we may not know exactly when large-scale quantum computers will arrive, we do know that the time to prepare is now. Waiting until the last minute simply is not an option when your data and systems could be vulnerable for years. 

Transitioning to post-quantum cryptography may feel complex, but with the right plan and the right partners, it becomes manageable and even empowering. By following a phased approach, your organization can take control of the journey and build a future-ready security posture that keeps your most valuable assets protected. 

Whether you’re still figuring out where to start or you’re ready to dive into implementation, the most important thing is to take that first step and keep the momentum going. And if you are looking for a partner to walk that path with you, we are here. 

At Encryption Consulting, we are ready to help you move forward with clarity, confidence, and a plan that fits your goals. Let us get started and make sure your organization is secure, not just for today, but for what is next. 

Integrating CertSecure Manager with Azure Key Vault

CertSecure Manager is a powerful, enterprise-grade Certificate Lifecycle Management (CLM) solution developed by Encryption Consulting. It enables organizations to automate the discovery, issuance, renewal, and revocation of digital certificates across diverse environments. 

With its robust policy enforcement, RBAC controls, automation workflows, and third-party integrations, CertSecure Manager simplifies the complexities of PKI operations while improving security, compliance, and operational efficiency. 

One of the latest capabilities in CertSecure Manager is integration with Azure Key Vault (AKV), which allows issued certificates to be automatically uploaded into Azure Key Vault for secure, centralized storage. 

Why Integrate with Azure Key Vault? 

Azure Key Vault is Microsoft’s secure storage service for managing keys, secrets, and certificates. With integration into CertSecure Manager, organizations can: 

  • Store certificates securely right after issuance 
  • Automate the storage process in cloud environments 
  • Reduce manual operations and human errors 
  • Maintain full auditability and RBAC-based access control 

Integration Overview 

Once configured by an administrator, any authorized CertSecure Manager user (with “Generate Certificate with Private Key” permission) can choose to upload issued certificates to Azure Key Vault in just a few clicks, no separate logins or transfers required. 

Prerequisites for Admins

To enable Azure Key Vault integration, a system administrator must perform the following steps: 

Step 1: Register an Application in Microsoft Entra ID (Azure AD)

  1. Go to Microsoft Entra ID → App registrations.
  2. Click “New registration”.
    • Name: CertSecure_Manager_AKV
    • Supported account types: Single tenant
    • Redirect URI: Leave blank
  3. Click Register.

Step 2: Generate a Client Secret 

  1. Open the registered app → Go to Certificates & secrets.
  2. Under Client secrets, click “New client secret”.
    • Name it (e.g., AKVTesting)
    • Set an expiration (6 or 12 months)
  3. Save and copy the secret value immediately as it won’t be shown again.

Step 3: Note the Required Values for Integration 

You’ll need these values when registering the Azure Key Vault in CertSecure Manager (a short verification sketch follows Step 4):

| Value | Where to Find |
| --- | --- |
| Tenant ID | App → Overview → Directory (tenant) ID |
| Client ID | App → Overview → Application (client) ID |
| Client Secret | From Step 2 |

Step 4: Assign Access to Azure Key Vault 

You can assign access using either Role-Based Access Control (RBAC) or Access Policy. 

Option A: RBAC (Recommended) 

  1. Go to your Azure Key Vault → Access control (IAM). 
  2. Click Add → Add role assignment. 
  3. Role: Key Vault Certificates Officer 
  4. Assign to: your app (e.g., CertSecure_Manager_AKV) 
  5. Click Review + assign.

Option B: Access Policy (Legacy Method) 

  1. In your Key Vault, go to Access policies.
  2. Click Create.
  3. Permissions: Under Certificate permissions, select: Get, List, and Import
  4. Select the app as the principal.
  5. Click Review + Create.
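
To sanity-check the app registration and its Key Vault permissions end to end, the values from Step 3 can be exercised directly with the Azure SDK for Python. This is an illustrative sketch only, separate from CertSecure Manager's own upload workflow; the vault URL, certificate name, and file path are placeholders.

```python
# Illustrative check of the app registration and Key Vault access using the
# Azure SDK for Python (azure-identity, azure-keyvault-certificates).
# Vault URL, certificate name, and file path below are placeholders.
from azure.identity import ClientSecretCredential
from azure.keyvault.certificates import CertificateClient

credential = ClientSecretCredential(
    tenant_id="<Directory (tenant) ID from Step 3>",
    client_id="<Application (client) ID from Step 3>",
    client_secret="<Client secret value from Step 2>",
)

client = CertificateClient(
    vault_url="https://<your-key-vault-name>.vault.azure.net",
    credential=credential,
)

# Import a PFX bundle containing the certificate and its private key.
# This requires the Import permission or the Key Vault Certificates Officer role.
with open("issued-cert.pfx", "rb") as f:
    imported = client.import_certificate(
        certificate_name="certsecure-test-cert",
        certificate_bytes=f.read(),
    )

print("Imported certificate:", imported.name)
```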

Uploading Certificates to Azure Key Vault 

Once integrated, certificate uploads are simple and user-driven: 

  1. Navigate to Enrollment → Generate Certificate.
  2. Fill in the certificate request details.
  3. Click Generate Certificate.
  4. If Azure Key Vault is configured, a pop-up window appears:
    • Select the Azure Key Vault
    • Enter a unique certificate name
    • Choose the output format: PEM or PFX (see the format sketch after this list)
  5. Click Yes to proceed.
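
For context on the two output formats offered above, the sketch below bundles a PEM private key and certificate into a single password-protected PFX (PKCS#12) file using Python's cryptography library; the file names and passphrase are placeholders.

```python
# Bundle a PEM private key and certificate into a password-protected PFX (PKCS#12).
# File names and the passphrase are placeholders.
from cryptography import x509
from cryptography.hazmat.primitives.serialization import (
    BestAvailableEncryption,
    load_pem_private_key,
    pkcs12,
)

key = load_pem_private_key(open("app.key", "rb").read(), password=None)
cert = x509.load_pem_x509_certificate(open("app.pem", "rb").read())

pfx_bytes = pkcs12.serialize_key_and_certificates(
    name=b"app-cert",
    key=key,
    cert=cert,
    cas=None,                                                # no intermediate chain in this sketch
    encryption_algorithm=BestAvailableEncryption(b"placeholder-passphrase"),
)

with open("app.pfx", "wb") as f:
    f.write(pfx_bytes)
```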

Logging and Audit Trail 

A log entry will be recorded under: 

Misc → Logging → Certificate Management 

This entry will reflect the success or failure of the certificate upload, ensuring full traceability for compliance or troubleshooting. 

Conclusion 

The integration of CertSecure Manager with Azure Key Vault empowers organizations to streamline certificate handling in cloud environments while strengthening their security posture. By automating the upload of issued certificates directly into Azure, teams reduce manual overhead, improve operational efficiency, and maintain strict access controls through Azure’s RBAC or access policies. 

To learn more, contact our team for a tailored demonstration. 

Types of Certificates You Can Self-Sign And The Risks of Doing So

In the current cybersecurity landscape, digital certificates are like passports for machines and applications. They help establish trust, encrypt communications, and verify identities. But not all certificates are issued by a trusted authority; some are self-signed. And that brings us to a common question: What types of certificates can I self-sign? And more importantly, should I? 

What Is a Self-Signed Certificate? 

A self-signed certificate is a digital certificate that’s signed by the same entity that created it. In other words, you’re vouching for yourself. There’s no third-party Certificate Authority (CA) involved to validate your identity or confirm that your certificate can be trusted. Self-signed certificates can be useful in internal networks and development phases, but they pose significant risks when not properly managed. Security teams often lack visibility into their usage, location, and ownership, making it difficult to detect compromise or revoke them if breached.

Without validation from Certificate Authorities and proper PKI hygiene, including secure key storage, self-signed certificates can become serious vulnerabilities, especially in production environments. Mismanagement increases the risk of spoofing and exploitation, highlighting the need for strict oversight and control. 

What Types of Certificates Can You Self-Sign? 

While self-signed certificates often raise red flags, there are a few scenarios where they can be practical, if used with caution and proper oversight. 

  1. Development & Testing Environments

    In isolated dev/test setups, self-signed certificates are a quick and cost-free way to simulate secure connections. They allow developers to test SSL/TLS functionality without needing a Certificate Authority (CA). However, these certificates should never make their way into production. Even in test environments, it’s important to track and manage them to avoid accidental misuse.

  2. Internal Use & Temporary Projects

    Not all certificates need to be publicly trusted, especially when you’re working behind the scenes. In controlled environments, self-signed certificates can be a practical solution for internal tools and short-term projects.

    For example, internal applications like intranet portals or admin dashboards accessed only by authorized users may not require CA validation. In such cases, the absence of a trusted signature has minimal impact, provided the environment is secure and access is restricted. However, even internally, it’s wise to consider using an internal private PKI for better oversight and scalability.

Similarly, if you’re spinning up a quick internal demo or a short-lived app for a limited audience, self-signed certificates offer a fast and cost-effective way to enable secure connections. But convenience shouldn’t come at the cost of security. These certificates must still be tracked, managed, and decommissioned properly to avoid lingering vulnerabilities. 

When You Should Not Use Self-Signed Certificates 

While self-signed certificates have their place, there are several critical use cases where they simply don’t belong. Here’s where you should always opt for CA-issued certificates: 

  1. External-Facing Services

    When your digital services are exposed to the outside world, whether through a public website, a customer-facing app, or an external API, trust is everything. And self-signed certificates simply don’t deliver it.

    Browsers and client applications are built to reject or warn against self-signed certificates. If your website or app uses one, users will be greeted with alarming security alerts like “Your connection is not private.” These warnings not only disrupt the user experience but also erode trust in your brand. In many cases, users will abandon the session altogether.

    Beyond the optics, there’s a real security risk. Without validation from a trusted Certificate Authority (CA), self-signed certificates are vulnerable to spoofing and man-in-the-middle attacks. And if users bypass the warning, they may unknowingly expose sensitive data.

  2. Code Signing

    When distributing software, a code signing certificate proves your code is authentic and untampered. Self-signed certificates don’t offer this assurance and are typically rejected by operating systems and app stores. Users will see warnings that your software is from an unknown source, hardly reassuring.

Why Self-Signed Certificates Are Risky in Production 

While self-signed certificates are convenient, they come with a laundry list of risks, especially when used in production environments. 

  1.  They’re Not Trusted by Default

    Browsers, operating systems, and most applications don’t trust self-signed certificates. That’s why you see those scary “Your connection is not private” warnings when visiting a site with a self-signed cert. It’s not just annoying, it’s a red flag for users and a potential trust-breaker.

  2. No Revocation Mechanism

    With CA-issued certificates, you can revoke them if the private key is compromised. Self-signed certificates don’t have a built-in revocation mechanism like CRLs (Certificate Revocation Lists) or OCSP (Online Certificate Status Protocol). If something goes wrong, you’re stuck.

  3. Poor Visibility and Management

    Self-signed certificates often fly under the radar. They’re not tracked in centralized systems, which means they can expire without warning or be forgotten entirely. This opens the door to outages or, worse, security breaches.

  4. Compliance and Audit Issues

    Many regulatory frameworks, like PCI-DSS, HIPAA, and SOC 2, require the use of certificates issued by trusted CAs. Using self-signed certificates in production could put your organization out of compliance and at risk of penalties.

Best Practices for Using Self-Signed Certificates 

If you decide to use self-signed certificates, here are a few tips to do it safely (a short example follows the list): 

  1. Use Strong Cryptography: Stick with RSA 2048+ or ECC and use SHA-256 or better for hashing. 
  2. Set Short Expiry Periods: Don’t let self-signed certs live forever. Rotate them regularly. 
  3. Secure the Private Key: Store it in a secure location with strict access controls. 
  4. Track and Monitor: Maintain an inventory of all self-signed certificates and monitor their expiration dates. 
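
To make the first two tips concrete, here is a sketch that generates a short-lived self-signed certificate with an ECC P-384 key and a SHA-384 signature using Python's cryptography library. The subject name and file paths are placeholders, and in practice the private key should be encrypted or kept in an HSM or vault, per tip 3.

```python
# Generate a short-lived self-signed certificate with strong cryptography
# (ECC P-384 key, SHA-384 signature, 90-day validity). Names are placeholders.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

key = ec.generate_private_key(ec.SECP384R1())
subject = issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "dev.internal.example.com")])
now = datetime.datetime.now(datetime.timezone.utc)

cert = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(issuer)                                   # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=90))    # keep the lifetime short
    .sign(key, hashes.SHA384())
)

with open("dev-selfsigned.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
with open("dev-selfsigned.key", "wb") as f:                # restrict access to this file
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.NoEncryption(),                      # in practice, encrypt or use an HSM/vault
    ))
```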

How could Encryption Consulting help?   

Encryption Consulting’s CertSecure Manager is a comprehensive Certificate Lifecycle Management solution that covers all your digital certificate management requirements. 

You can use CertSecure Manager’s built-in certificate discovery feature to identify high-risk certificates, such as self-signed and wildcard certificates, across your environment, including network endpoints and certificate stores. It also provides insights into self-signed certificates issued by your internal or public CAs, using its certificate inventory and high-risk reporting tools. 

Use Encryption Consulting’s PKI-As-A-Service to simplify your PKI deployment with end-to-end certificate issuance, automated lifecycle management, policy enforcement, and seamless compliance with industry security standards, eliminating the need for using self-signed certificates. 

Additionally, Encryption Consulting’s advisory services could help your organization discover enterprise-grade data protection with end-to-end encryption strategies that enhance compliance, eliminate risk blind spots, and align security with your business objectives across cloud, on-prem, and hybrid environments. 

  • For more information related to CertSecure Manager please visit: 

    CertSecure Manager

  • For more information related to PKIaaS please visit: 

    PKI-as-a-Service | Managed PKI | Encryption Consulting 

  • For more information related to our products and services please visit:

    Encryption Advisory Services 

    Encryption Consulting

Conclusion 

Just because you can self-sign a certificate doesn’t mean you should. Self-signed certificates are fine in controlled environments or for internal use. But for anything public-facing or mission-critical, the risks far outweigh the benefits. Trust is the cornerstone of digital security, and trust is best established through a reputable Certificate Authority. 

Quantum-Proof with CNSA 2.0

It’s no secret that a cryptographically relevant quantum computer (CRQC) would have the power to break the widely used public-key cryptographic systems that currently underpin asymmetric key exchanges and digital signatures, with potentially devastating impact. National security systems (NSS) use public key cryptography as a critical component to protect the confidentiality, integrity, and authenticity of national security information. 

Anne Neuberger, the White House’s top cyber advisor, recently spoke about this at the Royal United Services Institute (RUSI) in London. She said that releasing these new algorithms was “a momentous moment” because it shows real progress in moving toward the next generation of cryptography.

She explained that the reason for this shift is the concern over CRQCs, which, as she put it, could break the encryption “that’s at the root of protecting both corporate and national security secrets.” 

Let’s understand what the Commercial National Security Algorithm Suite 2.0 (CNSA 2.0) is.

CNSA 2.0

The CNSA 2.0 (Commercial National Security Algorithm Suite 2.0) is the next-generation encryption standard developed by the NSA to protect the nation’s most sensitive systems, especially as quantum computing becomes a real threat. These algorithms are specially designed to be quantum-resistant, ensuring long-term security even in a future where traditional encryption could be broken.  

CNSA 2.0 will be mandatory for all National Security Systems (NSS) that use public-standard algorithms, whether they are newly designed or already deployed. Older encryption suites like Suite B or CNSA 1.0 will no longer be enough; all systems must transition to CNSA 2.0. As the NSA puts it, this move ensures systems are “secure today and resilient tomorrow.” 

What policies should be followed to meet NSS algorithm requirements?

The algorithms included in CNSA 2.0 were chosen from those standardized by NIST, the U.S. government’s official authority for commercial encryption. NSA selected the best-performing options for national security needs, not just strong, but optimized for mission-critical operations. The NSA has tested and analyzed these algorithms and believes they are strong enough to protect U.S. national security for the long haul.

The agency considers CNSA 2.0 fit to guard everything from battlefield comms to diplomatic cables. They are not leaving users in the dark. The NSA is working with the IETF to publish RFCs (Requests for Comments), technical guides that will help integrate these algorithms securely into real-world systems. The guiding principle here is that it’s not just about what algorithms you use, but how you use them. 

Even systems that are already up and running aren’t spared. Unless they receive a formal waiver, all fielded equipment must be upgraded in a timely manner. This rule is backed by key security memos like NSM-8 and NSM-10, and policies such as CNSSP 11 and CNSSP 15.

Different types of systems will follow different transition paths. Military-grade or “high-grade” devices will shift according to CJCSN 6510 and CNSSAM 01-07-NSM, while commercial equipment will stay on CNSA 1.0 for now but must switch over between 2025 and 2030, depending on the system. All quantum-resistant (QR) implementations will have to go through NIAP certification and NIST validation; no shortcuts allowed. 

This guidance is not only for government insiders. The NSA is making the CNSA 2.0 plan public so that vendors, industry partners, and anyone looking to interact with NSS can start preparing. That includes updating devices, software, and secure communication platforms.  

As the NSA moves forward with CNSA 2.0, there’s also a cleanup of technical terms. Algorithms like ML-KEM and ML-DSA are now the only approved names under the standard, replacing their pre-standard labels CRYSTALS-Kyber and CRYSTALS-Dilithium. Only the finalized versions, published as FIPS 203 and FIPS 204, are allowed. Any older or tweaked versions, even if labeled similarly, do not meet CNSA 2.0 compliance. 

The following table lists the algorithms and their functions, specifications, and parameters. 

| Algorithm | Function | Specification | Parameters |
| --- | --- | --- | --- |
| General Purpose Algorithms | | | |
| Advanced Encryption Standard (AES) | Symmetric block cipher for information protection | FIPS PUB 197 | Use 256-bit keys for all classification levels. |
| ML-KEM (previously CRYSTALS-Kyber) | Asymmetric algorithm for key establishment | FIPS PUB 203 | ML-KEM-1024 for all classification levels. |
| ML-DSA (previously CRYSTALS-Dilithium) | Asymmetric algorithm for digital signatures in any use case, including signing firmware and software | FIPS PUB 204 | ML-DSA-87 for all classification levels. |
| Secure Hash Algorithm (SHA) | Algorithm for computing a condensed representation of information | FIPS PUB 180-4 | Use SHA-384 or SHA-512 for all classification levels. |
| Algorithms Allowed in Specific Applications | | | |
| Leighton-Micali Signature (LMS) | Asymmetric algorithm for digitally signing firmware and software | NIST SP 800-208 | All parameters approved for all classification levels. LMS SHA-256/192 is recommended. |
| Xtended Merkle Signature Scheme (XMSS) | Asymmetric algorithm for digitally signing firmware and software | NIST SP 800-208 | All parameters approved for all classification levels. |
| Secure Hash Algorithm 3 (SHA3) | Algorithm used for computing a condensed representation of information as part of hardware integrity | FIPS PUB 202 | SHA3-384 or SHA3-512 allowed for internal hardware functionality only (e.g., boot-up integrity checks). |
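
As a small illustration of the general-purpose symmetric and hashing parameters above (256-bit AES and SHA-384), here is a sketch using Python's cryptography library; key handling is deliberately simplified and not suitable for production use.

```python
# AES-256-GCM encryption and SHA-384 hashing, matching the CNSA 2.0
# general-purpose parameters above. Key handling is simplified for illustration.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# AES with a 256-bit key (FIPS 197 parameters required by CNSA 2.0)
key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"sensitive payload", b"associated data")
plaintext = AESGCM(key).decrypt(nonce, ciphertext, b"associated data")

# SHA-384 digest (FIPS 180-4)
digest = hashes.Hash(hashes.SHA384())
digest.update(b"firmware image bytes")
print(plaintext, digest.finalize().hex())
```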

CNSA 2.0 Transition Timeframe and Deployment Milestones

The NSA aims to make all National Security Systems (NSS) quantum-resistant by 2035, aligning with the goals set out in National Security Memorandum-10 (NSM-10). While 2035 is the end goal, the transition begins much sooner, with key milestones outlined in the updated CNSSP 15 policy. 

For existing systems, any NSS that is already validated under a NIAP or CSfC profile will remain approved until the end of its validation period. No mandatory transition to CNSA 2.0 will be enforced before December 31, 2025. However, any NSS not already compliant with CNSA 1.0 has just six months from the release of the updated CNSSP 15 to become CNSA 2.0 compliant, or must submit a waiver request within 90 days. 

From January 1, 2027, all new acquisitions for NSS will need to support CNSA 2.0 algorithms, unless an exception is clearly noted. By December 31, 2030, all equipment and services that cannot support CNSA 2.0 must be phased out. And by December 31, 2031, the use of CNSA 2.0 algorithms becomes mandatory across the board, unless explicitly exempted. 

The transition won't happen overnight. NSA expects a hybrid period in which systems may use both CNSA 1.0 and CNSA 2.0, depending on the maturity of their respective components. New systems and upgrades should be designed to prefer CNSA 2.0, and as standards are finalized, these systems should eventually accept only CNSA 2.0 algorithms. 

Standard Development Organizations (SDOs) like IETF will continue to publish supporting material such as RFCs. NSA is actively collaborating with them to ensure these documents are available to guide protocol configurations and help vendors adopt quantum-resistant solutions effectively. 

NSA expects all equipment transitions to be completed by December 31, 2030, well ahead of the 2035 deadline, allowing secure systems to be future-ready in time for full-scale quantum threats. 


General Method for Transitioning to CNSA 2.0 Algorithms

Here are some points to consider when transitioning to CNSA 2.0 algorithms: 

  • Protection profiles will be updated by NIAP to clearly state that products must support CNSA 2.0 algorithms, based on NIST and other international standards. 
  • Any new hardware or software must comply with the new profiles. Older systems must also comply when they undergo their next update in order to remain NIAP certified. 
  • As soon as tested and validated CNSA 2.0 solutions are ready, they should become the default configuration in all eligible systems. 
  • The timeline for removing old, vulnerable algorithms will be guided by NIAP's protection profiles and NSM-10 tech refresh cycles. 
  • If legacy systems can’t be updated regularly, they’ll need an official waiver and a clear plan for future compliance to continue operating. 

Other CNSA 2.0 Requirements for NSS

The following timeline is part of NSA's broader effort to future-proof National Security Systems in anticipation of powerful quantum computers. The staged rollout gives industry and agencies time to adapt; a small lookup sketch follows the timeline figure below. 

  • Software and firmware signing must begin transitioning immediately, with CNSA 2.0 preferred by 2025 and exclusively used by 2030. 
  • Web browsers, servers, and cloud services must support and prefer CNSA 2.0 by 2025 and move to exclusive use by 2033. 
  • Traditional networking equipment like VPNs and routers should support and prefer CNSA 2.0 by 2026 and use it exclusively by 2030. 
  • Operating systems are expected to support and prefer CNSA 2.0 by 2027 and adopt it exclusively by 2033. 
  • Niche equipment, such as constrained devices and large PKI systems, should support and prefer CNSA 2.0 by 2030 and use it exclusively by 2033. 
  • Custom applications and legacy systems must be updated or replaced entirely to meet CNSA 2.0 standards by 2033. 
Figure: CNSA 2.0 transition timeline
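
To make the staged rollout above concrete, here is a minimal, illustrative lookup in Python. The category keys and the expected_posture helper are our own shorthand for the milestones listed above, not NSA terminology or an official tool.

```python
# Illustrative lookup of the CNSA 2.0 deployment milestones listed above.
# Category names and the helper are our own shorthand, not NSA terminology.
# (Custom applications and legacy systems are simply expected to meet CNSA 2.0 by 2033.)

MILESTONES = {  # category: (support-and-prefer by, exclusive use by)
    "software_firmware_signing":  (2025, 2030),
    "web_browsers_servers_cloud": (2025, 2033),
    "networking_equipment":       (2026, 2030),
    "operating_systems":          (2027, 2033),
    "niche_equipment":            (2030, 2033),
}

def expected_posture(category: str, year: int) -> str:
    """Rough guide to the expected CNSA 2.0 posture for a category in a given year."""
    prefer_by, exclusive_by = MILESTONES[category]
    if year >= exclusive_by:
        return "use CNSA 2.0 exclusively"
    if year >= prefer_by:
        return "support and prefer CNSA 2.0"
    return "plan the transition; CNSA 1.0 still acceptable"

print(expected_posture("networking_equipment", 2027))  # support and prefer CNSA 2.0
print(expected_posture("operating_systems", 2033))     # use CNSA 2.0 exclusively
```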

Let's dive into the technical details of hash-based signature algorithms and explore their significance in post-quantum cryptography (PQC). 

Understanding Hash-based Signatures in CNSA 2.0

Hash-based signature schemes are a key part of CNSA 2.0's quantum-resilient strategy for protecting long-lived software and firmware. These algorithms are suited to tasks where a signature must remain trusted for years, such as firmware signing.  

The two main hash-based algorithms approved by the NSA for National Security Systems (NSS) are: 

  • LMS (Leighton-Micali Signature Scheme) 
  • XMSS (eXtended Merkle Signature Scheme) 

Both of these are standardized by NIST in SP 800-208 and have been validated under the Federal Information Processing Standards (FIPS) program. NSA recommends using them particularly in systems that require stateful signature mechanisms, where each signature must be tracked to ensure a key isn't reused, a critical requirement for maintaining security over time. 

NSA prefers LMS with SHA-256 as defined in Section 4.2 of NIST SP 800-208. This specific configuration offers a good balance of performance and security and is highly suitable for embedded devices or hardware systems that may not receive frequent updates. 
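
Because LMS and XMSS are stateful, the signer must guarantee that no one-time leaf key is ever used twice. The sketch below is a minimal illustration of that bookkeeping in plain Python; it is not an LMS implementation, and in practice NSA expects this state to live inside validated hardware such as an HSM.

```python
# Simplified illustration of the state tracking a stateful (LMS/XMSS) signer must enforce.
# This is NOT an LMS implementation; real systems keep and advance this state inside an HSM.

class StatefulSignerState:
    def __init__(self, total_leaves: int):
        self.total_leaves = total_leaves  # number of one-time keys available in the tree
        self.next_index = 0               # next unused leaf index; must never repeat

    def reserve_leaf(self) -> int:
        """Atomically reserve the next one-time key index; never hand out the same index twice."""
        if self.next_index >= self.total_leaves:
            raise RuntimeError("Key exhausted: no one-time leaves remain; provision a new tree")
        index = self.next_index
        self.next_index += 1              # persist the advanced state BEFORE releasing the signature
        return index

state = StatefulSignerState(total_leaves=2**10)
leaf_index = state.reserve_leaf()         # use this index for exactly one signature
```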

On the other hand, HSS (Hierarchical Signature System) and XMSS^MT (multi-tree XMSS), though related to LMS and XMSS, are not allowed in CNSA 2.0. The NSA has explicitly stated that these multi-tree variants do not meet the required standards for NSS use. 

Another algorithm, SLH-DSA (also known as SPHINCS+), is not approved at all for use in NSS under CNSA 2.0, even though it is also hash-based. This ensures that only the most thoroughly assessed and implementation-ready algorithms are trusted for national security purposes. 

Use of SHA-3 and Hash Function Policy Under CNSA 2.0

In CNSA 2.0, the use of SHA-3 is allowed only under very limited circumstances. Specifically, vendors may use SHA3-384 or SHA3-512 within internal hardware components that do not interface with external systems. This includes use cases like secure boot or system integrity checks, where the cryptographic process stays entirely within a vendor-controlled environment. NSA has made this exception to help speed up the transition to post-quantum cryptography without disrupting established internal processes. 

However, SHA-3 is not approved as a general-purpose hash function in CNSA 2.0. Its use is only permitted when clearly defined by an approved cryptographic standard, such as within LMS as specified by NIST SP 800-208, or for very specific internal applications. SHAKE, another variant from the SHA-3 family, is likewise not allowed for broad cryptographic use. NSA's position is clear: relying on SHA-3 outside its approved use cases creates unnecessary complexity, increases the burden of interoperability testing, and may undermine the reliability of systems meant to follow strict national security standards.

The SHA-2 family, especially SHA-384 and SHA-512, continues to be the backbone of CNSA 2.0’s hash function policy. SHA-384 remains the standard, while SHA-512 is permitted where performance considerations justify it, though systems using SHA-512 must carefully evaluate potential interoperability impacts.  
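
As a small illustration, producing a CNSA 2.0-conformant digest needs nothing beyond Python's standard library; SHA-384 is the default choice, with SHA-512 as the allowed alternative. The sample payload below is purely illustrative.

```python
import hashlib

# CNSA 2.0 default hash is SHA-384; SHA-512 is permitted where performance justifies it.
firmware_image = b"example firmware payload"   # illustrative data
print(hashlib.sha384(firmware_image).hexdigest())
print(hashlib.sha512(firmware_image).hexdigest())
```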

If a cryptographic system incorporates SHA-3 or other hash variants as part of a defined function inside an NSA-approved algorithm, such as LMS or XMSS, this is permitted, but only within the boundaries of that algorithm's design. General or external application of SHA-3 outside these definitions remains non-compliant with CNSA 2.0. 

NSA has left the door open for the future inclusion of other NIST-approved algorithms, but only under very specific conditions: the algorithm must become widely adopted, meet NSA’s independent security assessments, and maintain compatibility with other systems. For now, the focus remains firmly on using the current suite of well-vetted algorithms to avoid fragmentation across systems that secure national security data.

Validation Requirements

When using LMS or XMSS, there are different validation steps depending on what the system does: 

  • If a system only verifies signatures, it must pass CAVP (Cryptographic Algorithm Validation Program) testing. 
  • If it also generates signatures (i.e., acts as the signer), it must be validated under CMVP (Cryptographic Module Validation Program). 

Signature generation is more sensitive because it involves managing the cryptographic state, which, if misused (for example, by reusing keys), can expose the system to attacks. That's why no waivers are allowed here: validation is mandatory. 

NSA emphasizes that signing and state management should ideally be implemented in hardware, such as an HSM (Hardware Security Module), to minimize human or software error. Even during backup operations, key states must be preserved to avoid any possibility of state reuse. 

Vendors who aren't part of NSS but are providing code or products that will interact with NSS are still expected to meet the same cryptographic quality standards. That means any code involved in signature verification must be capable of passing CAVP validation, even if the signer itself isn't part of NSS. 

For commercial product evaluations, the NSA does not expect signature generation to happen within the “Target of Evaluation” (TOE) — only signature verification. So, CAVP testing is enough to demonstrate CNSA 2.0 compliance in most vendor scenarios. 

Firmware often cannot be updated once deployed. So, choosing a quantum-resistant signature algorithm now, like LMS or XMSS, for firmware is crucial. NSA stresses starting the transition early, rather than waiting for other algorithms (like ML-DSA) to become validated. This ensures a long-term cryptographic root of trust is in place before the rest of the system even begins upgrading to post-quantum standards. 


Quantum Alternatives for NSS

To prepare National Security Systems (NSS) for the quantum era, the NSA has also addressed alternative cryptographic approaches and common questions, such as: 

  • Pre-shared keys (PSKs) may help reduce quantum threats, but their effectiveness can vary; organizations should consult NSA or CSfC guidance before relying on them. 
  • Quantum computers pose a far greater risk to public-key cryptography than to symmetric cryptography; symmetric algorithms with large key sizes (like those in CNSA 2.0) are still considered secure. 
  • NSA is acting now on quantum threats because National Security Systems (NSS) have long lifespans; systems built today may be in use for decades and need future-proof protection. 
  • Quantum Key Distribution (QKD) uses quantum physics to securely share encryption keys but does not offer complete cryptographic protection and is not considered a practical solution for NSS. 
  • NSA does not recommend using QKD for NSS and advises agencies not to invest in or deploy QKD systems without direct consultation. 
  • Quantum Random Number Generators (RNGs) use quantum effects to generate randomness; any RNG certified by appropriate standards is acceptable if implemented correctly. 

Adapting Hybrid Cryptography

The focus is on balancing strong protection with practical, standards-based implementation, with backward compatibility in mind. The following points outline the necessary considerations and NSA's guidance; a minimal hybrid key-derivation sketch follows the list. 

  • A hybrid solution combines multiple cryptographic algorithms (classical + quantum-resistant) to strengthen key exchange or authentication. 
  • NSA trusts CNSA 2.0 algorithms alone and does not require hybrid solutions for NSS security — though hybrid setups may be used for interoperability or technical limitations. 
  • Using hybrid cryptography can add complexity to implementation and testing, increasing the risk of bugs and configuration errors. 
  • Hybrid solutions may also slow down standardization efforts, since protocols must agree on how to combine and manage multiple algorithms. 
  • In cases like IKEv2 (a VPN protocol), NSA supports a hybrid solution due to technical constraints with public key size limits — a smaller key is used first, then a larger encrypted one. 
  • NSA does not support using hybrid or non-standard quantum-resistant solutions in mission systems unless specifically advised; such solutions may lead to incompatibility or inefficiency
  • Hybrid solutions with symmetric key overlays (like RFC 8773 or 8784) may be used in special cases, but these are exceptions, not the norm
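
To make the hybrid idea concrete, here is a minimal key-derivation sketch: a classical ECDH secret and a quantum-resistant KEM secret are fed into one KDF, so the derived key stays safe as long as either component holds. The mlkem_shared_secret value is a placeholder for an ML-KEM-1024 encapsulation from a PQC library, not a real API call, and the overall construction is illustrative rather than a profile NSA has endorsed.

```python
# Minimal hybrid key-derivation sketch (illustrative only): a classical ECDH secret and a
# quantum-resistant KEM secret are combined in a single KDF, so the session key remains
# safe as long as EITHER component is unbroken.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical component: ECDH over P-384
our_key = ec.generate_private_key(ec.SECP384R1())
peer_key = ec.generate_private_key(ec.SECP384R1())          # stands in for the remote peer
ecdh_secret = our_key.exchange(ec.ECDH(), peer_key.public_key())

# Post-quantum component: placeholder for an ML-KEM-1024 shared secret from a PQC library.
# os.urandom() is used here only so the sketch runs; it is NOT a real KEM call.
mlkem_shared_secret = os.urandom(32)

# Combine both inputs through one KDF; compromising one component alone is not enough.
session_key = HKDF(
    algorithm=hashes.SHA384(),
    length=32,
    salt=None,
    info=b"hybrid key establishment demo",
).derive(ecdh_secret + mlkem_shared_secret)
print(len(session_key))  # 32-byte session key
```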

How can Encryption Consulting help?

If you are wondering where and how to begin your post-quantum journey, Encryption Consulting is here to support you. You can count on us as your trusted partner, and we will guide you through every step with clarity, confidence, and real-world expertise.  

  • Cryptographic Discovery and Inventory

    This is the foundational phase where we build visibility into your existing cryptographic infrastructure. We identify which systems are at risk from quantum threats and assess how ready your current setup is, including your PKI, HSMs, and applications. The goal is to identify what cryptographic assets exist, where they are used, and how critical they are. This includes comprehensive scanning of certificates, cryptographic keys, algorithms, libraries, and protocols across your IT environment, covering endpoints, applications, APIs, network devices, databases, and embedded systems.

    We identify all systems (on-prem, cloud, hybrid) utilizing cryptography, such as authentication servers, HSMs, load balancers, and VPNs; gather key metadata like algorithm types, key sizes, expiration dates, issuance sources, and certificate chains; and build a detailed inventory database of all cryptographic components to serve as the baseline for risk assessment and planning.

  • PQC Assessment

    Once visibility is established, we conduct interviews with key stakeholders to assess the cryptographic landscape for quantum vulnerability and evaluate how prepared your environment is for PQC transition. Analyzing cryptographic elements for exposure to quantum threats, particularly those relying on RSA, ECC, and other soon-to-be-broken algorithms. Reviewing how Public Key Infrastructure and Hardware Security Modules are configured, and whether they support post-quantum algorithm integration. Analyzing applications for hardcoded cryptographic dependencies and identifying those requiring refactoring. Delivering a detailed report with an inventory of vulnerable cryptographic assets, risk severity ratings, and prioritization for migration.

  • PQC Strategy & Roadmap

    With risks identified, we work with you to develop a custom, phased migration strategy that aligns with your business, technical, and regulatory requirements. Creating a tailored PQC adoption strategy that reflects your risk appetite, industry best practices, and future-proofing needs. Designing systems and workflows to support easy switching of cryptographic algorithms as standards evolve. Updating security policies, key management procedures, and internal compliance rules to align with NIST and NSA (CNSA 2.0) recommendations. Crafting a step-by-step migration roadmap with short-, medium-, and long-term goals, broken down into manageable phases such as pilot, hybrid deployment, and full implementation.

  • Vendor Evaluation & Proof of Concept

    At this stage, we help you identify and test the right tools, technologies, and partners that can support your post-quantum goals. Helping you define technical and business requirements for RFIs/RFPs, including algorithm support, integration compatibility, performance, and vendor maturity. Identifying top vendors offering PQC-capable PKI, key management, and cryptographic solutions. Running PoC tests in isolated environments to evaluate performance, ease of integration, and overall fit for your use cases. Delivering a vendor comparison matrix and recommendation report based on real-world PoC findings.

  • Pilot Testing & Scaling

    Before full implementation, we validate everything through controlled pilots to ensure real-world viability and minimize business disruption. Testing the new cryptographic models in a sandbox or non-production environment, typically for one or two applications. Validating interoperability with existing systems, third-party dependencies, and legacy components. Gathering feedback from IT teams, security architects, and business units to fine-tune the plan. Once everything is tested successfully, we support a smooth, scalable rollout, replacing legacy cryptographic algorithms step by step, minimizing disruption, and ensuring systems remain secure and compliant. We continue to monitor performance and provide ongoing optimization to keep your quantum defense strong, efficient, and future-ready.

  • PQC Implementation

    Once the plan is in place, it is time to put it into action. This is the final stage where we execute the full-scale migration, integrating PQC into your live environment while ensuring compliance and continuity. Implementing hybrid models that combine classical and quantum-safe algorithms to maintain backward compatibility during transition. Rolling out PQC support across your PKI, applications, infrastructure, cloud services, and APIs. Providing hands-on training for your teams along with detailed technical documentation for ongoing maintenance. Setting up monitoring systems and lifecycle management processes to track cryptographic health, detect anomalies, and support future upgrades.

Transitioning to quantum-safe cryptography is a big step, but you do not have to take it alone. With Encryption Consulting by your side, you will have the guidance and expertise needed to build a resilient, future-ready security posture. 

Reach out to us at [email protected] and let us build a customized roadmap that aligns with your organization’s specific needs.  

Conclusion

In conclusion, the transition to CNSA 2.0 marks a critical step in securing National Security Systems against emerging quantum threats. With clear timelines, trusted algorithms, and structured guidance, NSA is laying the foundation for a future-ready cryptographic environment. Early adoption, cryptographic agility, and adherence to standards will be essential to ensure secure, interoperable, and resilient systems as organizations adopt post-quantum cryptographic capabilities.


Role of PKIaaS in Device Certificates

What Are Device Certificates in SPDM?

In the SPDM (Security Protocol and Data Model) framework defined by the DMTF, device certificates are X.509 certificates that establish the identity of a hardware component, such as a Root of Trust or a secure device module. They are installed directly onto hardware entities such as network cards, baseboard management controllers (BMCs), TPMs (Trusted Platform Modules), or secure firmware components.  

The primary function of these device certificates is to enable device authentication during SPDM handshakes. When one device wants to establish a secure connection with another, it requests the peer’s certificate. This allows the requesting device to verify the identity of the other by validating the certificate’s digital signature and checking its issuer against a trusted certificate authority. 

Beyond simple authentication, these certificates are also essential for establishing trust chains. A typical certificate presented by a device is not standalone; it is part of a chain of trust that extends back to a known and trusted Root Certificate Authority (CA). This hierarchy ensures that if the Root CA is trusted, then the device's certificate can also be trusted, provided the entire chain is intact and valid. 

Additionally, SPDM uses these certificates to support asymmetric cryptography for secure session establishment. Once identities are verified, the devices use public-private key pairs to agree on encryption keys, enabling confidential and tamper-proof communication. Algorithms like ECDSA or RSA are used to sign and verify messages as part of this process, ensuring the integrity and authenticity of the data being exchanged. 

In practice, SPDM-compliant device certificates are issued to specific components that form the security backbone of a platform. These include the Root of Trust for Measurement (RTM), which initiates secure boot processes, and other components like firmware, TPMs, or NICs that participate in attestation and secure communication. These certificates bind cryptographic keys to physical hardware, enabling trusted operations across the system. 

Cryptographic Requirements in SPDM

This section explains the detailed cryptographic and certificate requirements in SPDM as per DSP0274 v1.3.0, covering certificate format, key algorithms, signature schemes, validation rules, and related constraints. A consolidated certificate-issuance sketch follows the list. 

  1. X.509 Certificate Format Requirement

    SPDM mandates that all device certificates must be in X.509 version 3 format, encoded in DER (Distinguished Encoding Rules). This ensures that the certificates are structured in a globally recognized and parse-able format. X.509 v3 allows the inclusion of critical extensions like Subject Key Identifier (SKI) and Authority Key Identifier (AKI), which are vital for building and validating trust chains in SPDM communication.

  2. Certificate Chain Structure

    The certificate chain used in SPDM must follow a strict order, beginning with the leaf certificate (device certificate), followed by one or more intermediate certificates, and ending with a Root Certificate Authority (CA) certificate. The responder device sends this chain during the GET_CERTIFICATE SPDM command. The requester validates the chain by checking digital signatures step by step, from the leaf up to the Root CA, which it must already trust. This structure ensures that every device's identity can be traced back to a trusted origin.

  3. Key Usage and Extensions

    The leaf certificate (i.e., the device certificate) must include the Key Usage extension with the digitalSignature bit set. This explicitly authorizes the certificate holder to perform digital signatures — a critical function during the SPDM challenge-response authentication. Furthermore, both the device and intermediate certificates must include the Subject Key Identifier (SKI) and Authority Key Identifier (AKI) extensions. These extensions help link each certificate to its issuer and are essential for automated chain validation.

  4. Allowed Public Key Algorithms

    SPDM supports public key cryptography using both Elliptic Curve Cryptography (ECC) and RSA, but with strict limitations to ensure strong security. For ECC, SPDM allows curves such as secp256r1, secp384r1, and secp521r1. These curves are chosen for their balance between security and performance, especially in embedded or low-power hardware. For RSA, SPDM requires that the key length be at least 2048 bits, and that the RSA keys be used with RSASSA-PSS padding, which provides stronger resistance to signature forgery than older PKCS#1 v1.5 padding.

  5. Approved Signature Algorithms

    When a certificate signs another certificate (e.g., intermediate signing leaf, root signing intermediate), or when a device signs a challenge during authentication, the signature algorithm must be among those approved by SPDM. This includes ECDSA (Elliptic Curve Digital Signature Algorithm) for ECC-based certificates and RSASSA-PSS for RSA-based certificates. These signature schemes are selected for their widespread standardization, cryptographic strength, and compatibility with modern cryptographic libraries.

  6. Hash Functions Used

    SPDM relies on secure hash algorithms to support both signature generation and verification, as well as for creating transcript hashes used during session establishment. The accepted hash algorithms in SPDM are SHA-256, SHA-384, and SHA-512. The specific algorithm to be used is negotiated between the requester and responder during capability exchange. Weaker hash functions like SHA-1 are explicitly disallowed due to known vulnerabilities. The choice of hash function also determines which signature algorithm variant is used (e.g., ECDSA with SHA-384).

  7. Size and Encoding Constraints

    To avoid large payloads during SPDM message exchanges, certificate chains must comply with size constraints negotiated during the session setup. For instance, a requester might limit the maximum size of the certificate chain or maximum number of intermediate certificates it can accept. All certificates must also be encoded in DER (binary) format, as opposed to PEM (base64), to comply with SPDM transport and parsing rules.

  8. Private Key Requirements

    While SPDM doesn’t directly enforce how private keys are generated or stored, it implicitly expects that each device must securely generate and store its private key in a way that prevents extraction or tampering. This is crucial because the CHALLENGE_AUTH command in SPDM requires the device to sign a random nonce with its private key. If an attacker could gain access to that key, they could impersonate the device. Thus, use of TPMs, secure elements, or HSM-backed PKIaaS issuance is recommended for key generation and storage.

  9. Cryptographic Capability Negotiation

    Before SPDM communication begins, the requester and responder exchange their supported cryptographic capabilities via messages like NEGOTIATE_ALGORITHMS. This includes their preferred public key algorithms, hash functions, and measurement summary hash types. Only mutually supported combinations are used during the rest of the session. This dynamic negotiation makes SPDM flexible, yet ensures that only strong and standardized crypto is used.
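
The hedged sketch below (Python, using the widely available cryptography library) ties several of the requirements above together: an X.509 v3 leaf on secp384r1, signed with ECDSA and SHA-384, carrying the digitalSignature key usage plus SKI/AKI extensions, and serialized as DER. The issuer and subject names are illustrative; in a real SPDM deployment the device CA would sign a device-generated CSR rather than holding the device key.

```python
# Hedged sketch: issuing a leaf certificate that matches the constraints above using the
# Python "cryptography" library. Issuer/subject names are illustrative; in practice the
# device CA signs a CSR generated on the device itself.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

ca_key = ec.generate_private_key(ec.SECP384R1())      # issuing CA key (illustrative)
device_key = ec.generate_private_key(ec.SECP384R1())  # device key (normally lives in hardware)

issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Device Issuing CA")])
subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example-nic-serial-0001")])
now = datetime.datetime.utcnow()

leaf = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(issuer)
    .public_key(device_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    # Key Usage with only digitalSignature set, as required for SPDM leaf certificates
    .add_extension(
        x509.KeyUsage(
            digital_signature=True, content_commitment=False, key_encipherment=False,
            data_encipherment=False, key_agreement=False, key_cert_sign=False,
            crl_sign=False, encipher_only=False, decipher_only=False,
        ),
        critical=True,
    )
    # SKI/AKI extensions link the leaf to its issuer for automated chain validation
    .add_extension(x509.SubjectKeyIdentifier.from_public_key(device_key.public_key()), critical=False)
    .add_extension(x509.AuthorityKeyIdentifier.from_issuer_public_key(ca_key.public_key()), critical=False)
    .sign(ca_key, hashes.SHA384())  # ECDSA with SHA-384
)

der_bytes = leaf.public_bytes(serialization.Encoding.DER)  # SPDM transports DER, not PEM
print(leaf.version, len(der_bytes))
```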

Role of PKIaaS in SPDM Device Certificates

PKIaaS issues device certificates following the X.509 standards in a way that aligns with SPDM requirements. This includes using the right cryptographic algorithms (like ECC and RSA), the correct key sizes, and proper certificate formats and extensions (like Subject Key Identifier, Basic Constraints, and Authority Information Access). By doing so, it ensures that every device’s certificate is recognized, trusted, and verified during SPDM authentication flows. 

PKIaaS not only generates and signs these certificates, but also takes care of their renewal, revocation, and policy enforcement over time. This is crucial because many SPDM-enabled devices have long life cycles, such as servers and embedded systems, and their certificates need to remain valid, secure, and compliant without human intervention. PKIaaS provides secure interfaces and supports protocols for certificate issuance, such as SCEP (Simple Certificate Enrollment Protocol), EST (Enrollment over Secure Transport), ACME (Automatic Certificate Management Environment), or custom REST APIs; these allow devices or provisioning tools to programmatically request and install device certificates. 

Through PKIaaS, every device such as a TPM (Trusted Platform Module), a BMC, or a NIC (Network Interface Card) receives a unique, verifiable digital identity. This identity is used during SPDM interactions to prove that the device is genuine, uncompromised, and allowed to participate in the system. It enables a zero-trust model, where each device must prove its trustworthiness before any secure communication or action can take place. 

In short, PKIaaS ensures that SPDM-based security works reliably and at scale, by delivering the digital trust foundation needed for device-to-device authentication and encrypted communication in modern computing platforms.

PKIaaS workflow to support SPDM

Let's break down the PKIaaS workflow, from the initial request through obtaining and using the device certificate; a device-side sketch follows the steps: 

  1. CA Setup

    PKIaaS hosts a Root Certificate Authority (CA) and one or more Intermediate CAs, often backed by HSMs (Hardware Security Modules). These are responsible for issuing certificates securely.

  2. Key Generation and CSR

    Each device generates its own private-public key pair and creates a Certificate Signing Request (CSR) containing its public key and unique identifiers like serial number or device ID.

  3. Certificate Issuance

    The CSR is sent to PKIaaS using a secure protocol like EST, SCEP, or ACME. PKIaaS validates the request and issues a signed X.509 certificate with the exact format, extensions, and algorithms required by SPDM for authentication. This certificate complies with SPDM requirements (e.g., using ECC, including required X.509 extensions).

  4. Certificate Installation

    The device stores the signed certificate in its secure storage (such as flash memory or a TPM). It will later use this certificate in SPDM handshakes.

  5. Authentication During SPDM Communication

    During SPDM authentication, the device presents this certificate to prove its identity. The peer verifies the certificate chain using the Root CA provided by PKIaaS and confirms authenticity by checking a cryptographic signature.

  6. Lifecycle Management

    PKIaaS monitors the validity period of issued certificates, automatically renews them when they are close to expiration, and revokes any that are compromised. It also maintains audit logs for compliance and monitoring.
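
The hedged sketch below (Python with the cryptography library) illustrates the device side of steps 2 and 5: generating a key pair and CSR, and verifying that an issued certificate was signed by a trusted issuer. The device name is made up, and the enrollment transport (EST, SCEP, or ACME) is out of scope.

```python
# Device-side sketch of steps 2 and 5 above (illustrative, not an SPDM implementation).
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

# Step 2: the device generates its key pair and CSR (the private key should stay in a
# TPM or secure element; it is created in software here only for illustration).
device_key = ec.generate_private_key(ec.SECP384R1())
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "bmc-serial-12345")]))
    .sign(device_key, hashes.SHA384())
)
# The CSR would now be submitted to the PKIaaS endpoint over EST, SCEP, or ACME (step 3).

# Step 5: during SPDM authentication, the peer checks each link of the presented chain.
def signed_by(issuer_cert: x509.Certificate, child_cert: x509.Certificate) -> bool:
    """Return True if issuer_cert's (EC) public key produced child_cert's signature."""
    try:
        issuer_cert.public_key().verify(
            child_cert.signature,
            child_cert.tbs_certificate_bytes,
            ec.ECDSA(child_cert.signature_hash_algorithm),
        )
        return True
    except InvalidSignature:
        return False
```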


Benefits of Utilizing PKIaaS for SPDM

This section highlights how PKIaaS enhances the implementation of SPDM by simplifying certificate management and enforcing cryptographic compliance. It outlines key benefits such as scalability, automation, and secure device authentication across large hardware environments. 

  • Automated Certificate Lifecycle Management

    PKIaaS automates the issuance, renewal, and revocation of SPDM-compliant certificates. This reduces manual intervention, eliminates the risk of expired or misconfigured certificates, and ensures that secure communication can be maintained over a device’s entire lifecycle.

  • Scalability for Large Device Fleets

    SPDM is commonly used in environments with thousands of devices (servers, NICs, BMCs, etc.). PKIaaS provides the infrastructure to scale certificate operations securely and efficiently across all devices, even during manufacturing or deployment at scale.

  • Consistent Compliance with SPDM Standards

    PKIaaS enforces cryptographic profiles, key usages, and certificate extensions (like SKI, AKI) in line with SPDM specifications (Sections 6.1 and 6.2). This ensures all issued certificates are valid for use during SPDM authentication and validation flows.

  • Integration with Secure Hardware Modules

    PKIaaS can be integrated with HSMs, TPMs, and RoTs to issue certificates without exposing private keys. This aligns with SPDM’s design for tamper-resistant identity and secure boot validation.

  • Support for Future Crypto Migration (e.g., PQC)

    As SPDM evolves to adopt post-quantum cryptography (PQC), PKIaaS platforms can support hybrid or PQC algorithms, offering crypto agility without re-architecting the security framework.

  • Policy Enforcement and Auditing

    With built-in access controls, audit logs, and policy enforcement mechanisms, PKIaaS enables traceability and compliance, important for regulated industries deploying SPDM in critical infrastructure.

How can Encryption Consulting help?

Encryption Consulting (EC) provides the strategic, technical, and operational expertise required to plan, build, and manage a secure, scalable, and compliant PKIaaS platform. With deep domain experience in cryptographic infrastructure, Encryption Consulting helps enterprises at every stage of their PKIaaS journey, from architecture design to implementation, automation, and lifecycle governance. 

  1. CA Management: Deploy and maintain a highly available and compliant CA infrastructure to support diverse security needs. Handle certificate issuance, renewal, and revocation for all certificate types. Maintain strict security controls and industry compliance, including GDPR, eIDAS, and FIPS 140-3, while providing redundancy and high availability. 
  2. Policy Management: Define and enforce certificate policies, validity periods, and key usage rules across your organization. Ensure alignment with security frameworks by automating policy enforcement. Implement customizable certificate profiles with strict access controls. 
  3. Automated Enrollment: Enable seamless certificate requests and installations through automated enrollment protocols. Support SCEP, EST, and ACME for streamlined certificate issuance and renewal. Ensure secure, policy-driven enrollment with enterprise identity and access management. 

      Conclusion

      In conclusion, integrating PKIaaS with SPDM offers a foundation for secure, scalable, and standards-compliant device authentication across modern hardware platforms. By automating certificate lifecycle management and enforcing cryptographic policies aligned with SPDM specifications, PKIaaS not only simplifies deployment at scale but also strengthens the overall trust framework essential for platform integrity and secure boot mechanisms.

      What is OCP 2.6, and how does PKI play an important role in it?

      The Open Compute Project (OCP) is an initiative founded in 2011 by Facebook, with the goal of revolutionizing the way data centers are built. The project’s core mission is to share designs and best practices for data center hardware that improve efficiency, reliability, and cost-effectiveness. The OCP encourages an open-source approach, where designs for hardware like servers, storage systems, networking components, and other essential infrastructure elements are made publicly available for anyone to use, modify, and improve. 

      As the project has evolved, it has become a collaborative community, with contributions from major tech companies such as Microsoft, Meta, Google, HPE, and Dell. These companies and others join forces under the OCP umbrella to accelerate innovation and address the growing demands of data center operations in the modern cloud and enterprise environments. 

      OCP Datacenter NVMe SSD Specification

      One of the core areas of focus within OCP is storage. NVMe (Non-Volatile Memory Express) is a storage protocol designed to take full advantage of the speed and low latency of Solid-State Drives (SSDs). When used in data centers, NVMe SSDs are critical for delivering the high-performance storage needed for modern applications that demand rapid access to large volumes of data. “With the growing need for faster, more efficient data storage, NVMe technology is essential for modern data centers to handle the massive amount of data generated in real-time,” states the OCP Storage Group. 

      The OCP Datacenter NVMe SSD Specification sets the standard for these SSDs, providing guidelines on their design, performance, and compatibility for data center environments. The specification defines key aspects such as: 

      • Endurance: The number of read/write cycles an SSD can endure, ensuring that it can withstand the demands of high-use data center workloads. 
      • Performance Metrics: Benchmarks for throughput, latency, and IOPS (Input/Output Operations Per Second), ensuring that NVMe SSDs meet the high-performance needs of hyperscale data centers. 
      • Form Factor: Physical attributes, such as size and connectivity, to ensure compatibility with a variety of server systems. 
      • Reliability: Data protection measures, including error correction, wear leveling, and power loss protection to ensure continuous and safe operation. 

      Industry Collaboration and the Importance of Compliance

      The OCP Datacenter NVMe SSD Specification facilitates alignment between hyperscale companies and SSD manufacturers, creating a standardized set of expectations. This is crucial because, in the absence of such standards, data centers could face significant integration challenges, increased costs, and delays in deployment. According to the OCP, “The collaboration between industry leaders ensures that all players — from manufacturers to cloud providers are working off the same blueprint, driving faster innovation and higher quality.” 

      The Benefits of Compliance with OCP Specifications

      Let’s break down the benefits of compliance with OCP specifications.

      1. Efficiency in SSD Development: OCP provides a clear set of rules for designing SSDs. This helps manufacturers work more efficiently and save time and money when making new products. As a result, products can be made and delivered to the market faster. 
      2. Cost Savings: By using the same technology and standards across different companies, manufacturers can save money. These savings can then be passed on to their customers. Hyperscale companies (big data center operators) save a lot by avoiding the need to create custom, expensive solutions. 
      3. Improved Product Quality: When manufacturers follow OCP standards, they ensure their SSDs perform well and are reliable. This reduces the risk of failure, especially in busy data center environments where high performance is essential. 
      4. Faster Time-to-Market: Since OCP standards are already set, companies don’t have to go through the long and costly process of getting products individually certified. This allows them to release new products much more quickly. 
      5. Innovation and Flexibility: Even though OCP encourages standardization, it still allows companies to innovate and improve within the given framework. This means that companies can come up with new ideas while making sure their products are still reliable and compatible with the rest of the industry. 

      How OCP 2.6 Benefits Data Centers

      1. High-Performance Storage: The primary benefit of OCP 2.6-compliant NVMe SSDs is the high performance they provide. With low latency and high throughput, these SSDs are ideal for the massive data needs of modern hyperscale data centers that need to handle large volumes of data with minimal delays. “OCP 2.6 drives have been optimized to meet the speed and performance demands of data centers supporting AI, machine learning, and cloud computing,” states an expert from Google Cloud
      2. Better Scalability: OCP-compliant SSDs are designed to work seamlessly across multiple environments, supporting scalable storage solutions that can grow as data needs increase. “The scalability provided by OCP 2.6 is essential for organizations that require continuous growth in storage capacity without sacrificing performance,” says a representative from Intel
      3. Future-Proofing: As the industry moves toward more sophisticated storage technologies, OCP 2.6 ensures that data centers are equipped to handle future workloads by standardizing components that can support newer, faster storage protocols and features. “By aligning with OCP 2.6, data centers are prepared for the next generation of storage technology,” adds a senior engineer at NVIDIA.

      Role of PKI in OCP

      The importance of Public Key Infrastructure (PKI) in OCP environments cannot be overstated. As OCP promotes open-source hardware and standardized designs for data centers, PKI provides the foundation for security. In OCP’s open-source model, devices and services often come from various manufacturers and sources. PKI establishes trust by ensuring that each device and service is verified before it can join the network or communicate with other components. 

      Public Key Infrastructure (PKI) plays a critical role in securing communication and data within the Open Compute Project (OCP), especially when dealing with the large-scale, open-source hardware and software solutions that OCP promotes for data centers. PKI is a security framework that relies on cryptographic keys to encrypt data, authenticate users, and ensure data integrity. In OCP environments, PKI provides trust and security across various components like servers, storage devices, networking equipment, and cloud services

      In the context of OCP, where data centers often scale rapidly, maintaining a secure infrastructure is vital. PKI ensures that only authorized devices and users can access the system, preventing unauthorized access and data breaches. With the open nature of OCP, it is especially important to establish a trusted environment where every device is properly authenticated, and communications are secure. 


      Building PKI with OCP in Mind

      When building a PKI system for OCP environments, there are several steps to consider: 

      1. Define Certificate Authority (CA): The first step in implementing PKI is to establish a Certificate Authority (CA), which is responsible for issuing and managing digital certificates. These certificates authenticate devices and users, ensuring that only trusted parties can access sensitive systems. In OCP, where many devices may be connected and disconnected frequently, establishing a central CA is vital for maintaining security. 
      2. Automate Certificate Management: Given the scale of OCP data centers, automating certificate management is crucial. Devices and services may join or leave the network at any time. PKI solutions should be automated to handle the issuance, renewal, and revocation of certificates. Automated certificate management ensures certificates are always valid, minimizing the risk of security breaches due to expired certificates or human error. 
      3. Integrate PKI with OCP Hardware: In the OCP ecosystem, hardware is standardized and open, meaning PKI needs to integrate seamlessly with these devices. For example, devices like NVMe SSDs and servers should be equipped to request, store, and use certificates automatically upon connection. This integration ensures that hardware components can communicate securely and that any new device added to the network is authenticated and trusted. 
      4. Use Strong Encryption Algorithms: To maintain the integrity of PKI, strong encryption is required. Algorithms such as RSA, ECC (Elliptic Curve Cryptography), and AES should be used to ensure that data is securely transmitted and stored. This ensures the privacy and security of sensitive information across all devices and systems in the OCP data center
      5. Integrate Post-Quantum Cryptography (PQC): As the computing power of quantum computers grows, traditional encryption algorithms like RSA and ECC will become vulnerable. Post-Quantum Cryptography (PQC) offers quantum-resistant algorithms that will be essential to protecting data and ensuring security in a post-quantum world. Integrating PQC into the PKI system for OCP ensures long-term security by preparing the infrastructure for the future of encryption. Algorithms like Lattice-based cryptography and Hash-based signatures are being researched as potential candidates to replace current cryptographic systems, ensuring that OCP data centers are ready to handle future threats from quantum computing. 
      6. Monitor and Audit: PKI setups require continuous monitoring and auditing to ensure that the infrastructure remains secure. In an open-source OCP environment, it’s important to track and review access logs and certificate usage regularly. This helps identify and mitigate any potential security threats before they cause damage. 
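
      To illustrate the monitoring and automation steps above, here is a minimal expiry check in Python using the cryptography library. The file path and 30-day renewal window are illustrative policy choices, not OCP requirements.

```python
# Minimal expiry check using the Python "cryptography" library; the path and the 30-day
# renewal window are illustrative policy choices, not OCP requirements.
import datetime
from cryptography import x509

RENEWAL_WINDOW_DAYS = 30

def needs_renewal(pem_path: str) -> bool:
    """Return True if the certificate at pem_path expires within the renewal window."""
    with open(pem_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    remaining = cert.not_valid_after - datetime.datetime.utcnow()  # naive UTC datetimes
    return remaining < datetime.timedelta(days=RENEWAL_WINDOW_DAYS)

print(needs_renewal("device-cert.pem"))  # hypothetical certificate file
```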

      How can Encryption Consulting help?

      At Encryption Consulting, we don't just talk about standards; we build systems that put them into practice. Whether you're deploying Microsoft ADCS, leveraging cloud-based CAs like AWS Private CA or Azure Key Vault, using open-source solutions like EJBCA, or adopting our PKI-as-a-Service platform, we ensure your revocation infrastructure is secure and reliable. We design systems that deliver signed revocation data over HTTP without breaking validation, implement high-availability OCSP and CRL responders, and integrate revocation checks into CI/CD pipelines and Zero Trust environments. Our CertSecure Manager platform automates certificate lifecycle management, ensuring your operations run smoothly. Most importantly, we help you avoid circular trust loops by carefully validating any HTTPS endpoints independently. 

      Conclusion

      In conclusion, PKI is essential for ensuring the security and trustworthiness of OCP environments. By providing authentication and encryption, PKI helps protect sensitive data and ensures that only authorized users and devices can access critical systems. Building a PKI system with OCP in mind means automating certificate management, integrating with OCP hardware, and using strong encryption methods. This approach not only secures the environment but also ensures that OCP’s open-source hardware can scale effectively while maintaining high levels of security. 

      Lessons from a successful modern PKI Design

      When a PKI has been untouched for over a decade, the question is often not if improvements are needed, but where to begin. That was the case for one of the largest rural lifestyle retailers in the U.S., founded in 1938 and now operating more than 2,000 stores across 49 states. Serving millions of customers with critical products for home, land, pet, and animal care, the company relies heavily on its digital backbone to maintain operational trust and service integrity.

      Approach of PKI Design

      As part of a broader initiative to strengthen its IT security posture, the organization initiated a detailed assessment of its Public Key Infrastructure (PKI). The assessment quickly revealed that the existing infrastructure was exposed to several risks, including unchecked CA validity periods, irregular CRL publication intervals, and private keys stored on software-based machines. These findings highlighted the need for a structured approach. 

      Rather than patching things reactively, a custom roadmap was developed to address core risks and reduce the overall attack surface within the existing PKI environment. However, instead of jumping straight into implementation, the organization opted for the most deliberate and strategic step of designing the PKI. This approach allowed them to carefully lay out the architecture of the new environment, clearly defining the role of the existing PKI, identifying the right starting point for the transition, and establishing a structured migration plan. The goal was to bring clarity, simplify management, and build a sustainable PKI, one equipped with the right technologies and aligned with modern security standards. 

      After all, you don’t build a bank vault without blueprints, so why build a PKI without a design for it first?

      The PKI design was shaped through multiple in-depth working sessions and technical discussions with key stakeholders from across the organization. These sessions focused on establishing a clear trust hierarchy (Root and Issuing CAs), aligning the architecture with Active Directory Forest design, and streamlining certificate lifecycle management for various users, devices, and application scenarios. The resulting PKI architecture was customized to support the organization’s distributed AD domain structure, ensuring that each domain’s unique use cases and operational workflows were reflected in the final design. 

      Before diving into what the PKI design fixed, it is important to understand why a fix was needed in the first place and the specific challenges or limitations that were addressed in the workshops.

      Challenges

      The organization’s Public Key Infrastructure (PKI) had been operating on aging infrastructure, primarily hosted on Windows Server 2012 R2. Through in-depth workshop discussions and a comprehensive review of existing PKI policies, procedures, and standards, the following architectural and operational weaknesses were identified: 

      • The PKI trust chain was built around a single Root Certification Authority (CA) and multiple Issuing CAs. However, there was no formal backup or disaster recovery strategy in place for any of the CAs, leaving the environment vulnerable; in the event of a CA failure or data center outage, the organization had no reliable path to restore certificate issuance. 
      • The Root CA and Issuing CA private keys were stored without the protection of a Hardware Security Module (HSM). As one Senior Security Engineer put it, “That’s like locking the crown jewels in a filing cabinet.” The absence of HSM protection significantly increased the risk of key theft or compromise. 
      • The Root CA certificate had a validity period of over 20 years, far exceeding CA/B Forum recommendations. Such long-lived certificates expand the potential blast radius in case of key compromise and limit the organization’s ability to adapt cryptographically over time. 
      • The Active Directory Forest was still operating on a legacy Windows Server version that had reached End of Life (EOL) and End of Support (EOS), which restricted integration with newer PKI capabilities, such as certificate auto-enrollment, modern cryptographic templates, and policy-based issuance controls. 
      • CRL validity periods were inconsistently configured across the three Issuing CAs, with base CRLs set to more than 7 days and delta CRLs to more than 24 hours. This inconsistency created operational overhead and increased the risk of revocation-related errors going unnoticed without constant manual oversight. 
      • Multiple MDM platforms were being utilized to manage different device types, with Android, macOS, and iOS devices spread across distinct systems. This fragmented MDM setup introduced unnecessary complexity and inefficiencies in certificate deployment and endpoint trust. 
      • Within the security settings of all three Issuing CAs, multiple user accounts and groups had been granted certificate issuance and management rights, but without any clear role mapping or justification. This lack of access control governance introduced risk of misuse or misconfiguration. 
      • The PKI environment was assessed to have a weak governance framework. Certificate templates lacked proper documentation, defined ownership, and clear purpose mapping. In the absence of centralized oversight, the environment had accumulated redundant, outdated, and unused templates, increasing the likelihood of issuing misconfigured certificates. 


      Solution

      Following a series of whiteboarding sessions and technical workshops with key stakeholders, a new PKI architecture was designed to address the core challenges and establish a strong foundation for future security, scalability, and operational efficiency. The decisions outlined below reflect the outcomes of those sessions and collectively formed the basis of the organization’s PKI modernization strategy. 

      • To move away from outdated infrastructure, it was decided that Windows Server 2022 or higher would be used to pilot the new PKI setup. This approach allowed for the evaluation of compatibility with modern security features and validation of the architecture in a controlled environment prior to full-scale implementation. 
      • Two dedicated Issuing CAs, each aligned to distinct use cases, were planned to be hosted in the data center. This structure provided logical segmentation and supported tailored certificate issuance policies.  
      • In addition to the production environment, it was also decided to establish corresponding Issuing CAs in a Disaster Recovery (DR) environment, designed to ensure high availability. These DR CAs were intended for failover scenarios and were not issued certificates during normal operations but would be regularly tested and kept synchronized through manual configuration replication. 
      • The Root CA certificate validity was reduced to 10 years, while Issuing CAs were assigned a 5-year validity. End-entity certificates were configured as per the industry best practices. 
      • Industry best practices were enforced to configure Certificate Revocation List (CRL) publication and certificate status checking. For example, subordinate CAs were set to publish base CRLs every 7 days and delta CRLs every 24 hours, with a 2-day overlap period to minimize revocation-related disruptions during outages. CRLs and AIA extensions were distributed via HTTP (primary), LDAP (secondary), and OCSP to ensure real-time and reliable certificate status validation. 
      • A comprehensive overhaul of certificate templates was planned to eliminate redundancy, define clear ownership, and map each template to its intended use. This enabled stricter issuance policies and reduced operational ambiguity. Access to Issuing CAs was also restructured, ensuring that only authorized roles had the ability to issue or manage certificates, effectively closing gaps caused by undocumented or excessive user/group permissions. 
      • To address fragmentation and simplify certificate lifecycle management, the design recommended leveraging Microsoft Cloud PKI. This allowed seamless integration with existing MDM platforms while reducing dependency on on-premises infrastructure, ensuring a unified and scalable trust model anchored by a single source of truth. 

      Business Impact

      By addressing longstanding architectural gaps and operational inefficiencies, the organization was able to strengthen its digital trust foundation and align with modern security standards. The key business outcomes included: 

      • With private keys now secured in hardware security modules (HSMs), the risk of compromise was significantly reduced. The environment moved away from outdated infrastructure and legacy dependencies, enabling smoother integration with modern platforms and cloud-native services. 
      • Operational clarity improved through tighter governance: certificate templates were streamlined, access controls were properly mapped, and issuance processes became far more predictable and auditable. This not only reduced administrative overhead but also minimized the likelihood of errors or misconfigurations. 
      • Business continuity was strengthened with the introduction of disaster recovery CAs, ensuring that critical certificate issuance could continue even during outages. The overall trust model became more scalable, resilient, and easier to manage, allowing IT and security teams to focus on proactive improvements rather than firefighting issues in a fragmented setup. 
      • Most importantly, the organization established a future-ready PKI framework capable of supporting a growing number of users, devices, and applications, and of meeting the requirements of crypto-agility and automation initiatives across its various use cases.

      Conclusion

      This modernization journey underscores a simple reality: a secure PKI begins with smart design, not just reactive fixes. By prioritizing architecture, enforcing access controls, adopting HSM-backed key storage, and streamlining certificate governance, the organization built a scalable, resilient trust foundation. Most importantly, it positioned itself to support future growth, automation, and crypto-agility, proving that a well-planned PKI is not just a security upgrade, but a strategic enabler for the digital enterprise. 

      PCI DSS v4.0.1 Requirement on CBOM: A Quick Guide

      In today’s digital world, strong cryptography is the foundation of effective data protection. For industries that handle sensitive information like credit card data, implementing strong cryptographic controls is not optional but mandatory. With the release of PCI DSS v4.0, a new era of compliance has arrived, emphasizing flexibility, risk-based approaches, and deeper transparency. 

      Among the rising concepts that help achieve this transparency is the Cryptographic Bill of Materials (CBOM), a comprehensive inventory of all cryptographic components used within a system, application, or infrastructure. Assuming you're already familiar with PCI DSS and CBOM, we'll give only a quick overview of each and then jump directly to the requirement section. 

      What is PCI DSS?

      The Payment Card Industry Data Security Standard (PCI DSS) is a set of security standards designed to ensure that all companies that process, store, or transmit credit card information maintain a secure environment. It was established by major credit card companies like Visa, MasterCard, American Express, Discover, and JCB in 2004. 

      PCI DSS 4.0.1 is a minor revision to version 4.0, released by the PCI Security Standards Council (PCI SSC). It aims to address implementation feedback, fix typographical errors, and provide better clarity in controls, without changing the core intent of version 4.0. 

      Key Focus Areas in PCI DSS 4.0.1: 

      1. Enhanced flexibility in implementation 
      2. Emphasis on continuous security and monitoring 
      3. Stronger authentication requirements 
      4. Clearer guidance for cryptographic operations 
      5. Improved scoping and segmentation expectations 

      What is a Cryptographic Bill of Materials (CBOM)?

      A Cryptographic Bill of Materials (CBOM) is a comprehensive inventory of all cryptographic assets used within a system, application, or software environment. It is similar in concept to a Software Bill of Materials (SBOM), but it focuses specifically on cryptographic components and their dependencies. It includes: 

      1. Cryptographic libraries and modules (e.g., OpenSSL, BouncyCastle) 
      2. Algorithms in use (e.g., AES-256, RSA-2048) 
      3. Certificates and key pairs 
      4. Key management mechanisms 
      5. Hardware Security Modules (HSMs) or Trusted Platform Modules (TPMs) 

      CBOM and SBOM are often confused, but the distinction is straightforward: a Software Bill of Materials (SBOM) is an inventory of all software components that make up an application, while a Cryptographic Bill of Materials (CBOM) focuses only on cryptographic assets and their related dependencies, ignoring the rest of the software components. 

      Requirement 12.3.3

      “Cryptographic cipher suites and protocols in use are documented and reviewed at least once every 12 months”

      This requirement is about ensuring you don’t just deploy encryption and forget it. Instead, you must continuously track and assess your cryptographic environment, a principle at the heart of CBOM. 

      Let’s now break down the three sub-requirements and analyze how each aligns with CBOM best practices: 

      1. An up-to-date inventory of all cryptographic cipher suites and protocols in use, including purpose and where used. 

      What PCI DSS Expects: You must know exactly what cryptography is deployed in your environment, why it’s being used, and where it is being used, including in applications, systems, APIs, devices, and third-party services. 

      CBOM Relevance: This is the core of CBOM, i.e., having a structured, versioned inventory of all cryptographic components. 

      A strong CBOM should contain: 

      • Cipher suites (e.g., TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) 
      • Protocols (e.g., TLS 1.2, TLS 1.3, IPsec, SSH) 
      • Algorithms (e.g., RSA, ECDSA, AES, SHA-256) 
      • Key lengths and configurations 
      • Purpose (e.g., “used for REST API data in transit”) 
      • Where used (e.g., “web load balancer, SFTP server, mobile app backend”) 
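
      To make this concrete, here is a minimal sketch of what one inventory entry could look like if you track your CBOM in code. The field names and sample values are illustrative assumptions rather than a formal CBOM schema (standards such as CycloneDX define their own representation); the point is simply that every entry captures the cipher suite, protocol, algorithms, key length, purpose, and location listed above.

```python
from dataclasses import dataclass, asdict
from typing import List
import json

@dataclass
class CbomEntry:
    """One cryptographic asset in the inventory (illustrative fields, not a formal schema)."""
    cipher_suite: str       # e.g., TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    protocol: str           # e.g., TLS 1.2, TLS 1.3, IPsec, SSH
    algorithms: List[str]   # e.g., ["ECDHE", "RSA-2048", "AES-256-GCM", "SHA-384"]
    key_length: int         # key size in bits for the primary algorithm
    purpose: str            # why the crypto is used
    where_used: List[str]   # systems or services that rely on it

inventory = [
    CbomEntry(
        cipher_suite="TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
        protocol="TLS 1.2",
        algorithms=["ECDHE", "RSA-2048", "AES-256-GCM", "SHA-384"],
        key_length=2048,
        purpose="REST API data in transit",
        where_used=["web load balancer", "mobile app backend"],
    ),
]

# Serialize the inventory so it can be stored in a version-controlled repository.
print(json.dumps([asdict(entry) for entry in inventory], indent=2))
```

      Keeping the serialized inventory under version control also gives you the history and auditability that the remaining sub-requirements rely on.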

      2. Active monitoring of industry trends regarding continued viability of all cryptographic cipher suites and protocols in use. 

      What PCI DSS Expects: You’re not just documenting your crypto once; you’re continuously watching for deprecations, known attacks, and cryptanalysis research that could make today’s algorithms unsafe tomorrow. 

      CBOM Relevance: A CBOM is not static; it must be living and adaptive. That means: 

      • Monitoring sources like NIST, IETF, ISO, and security advisories 
      • Understanding when an algorithm is moving from “approved” to “discouraged” or “deprecated” 
      • Identifying exposure points in your CBOM that depend on soon-to-be-weak crypto 

      3. Documentation of a plan to respond to anticipated changes in cryptographic vulnerabilities 

      What PCI DSS Expects: If a cipher or protocol in use becomes vulnerable, what’s your plan? You should already have one before the weakness is exploited. 

      CBOM Relevance: CBOMs enable proactive remediation by: 

      • Helping you instantly locate where a deprecated algorithm is used 
      • Prioritizing remediation based on exposure and criticality 
      • Mapping crypto dependencies (e.g., “TLS 1.2 is used by our main login portal and 6 microservices”) 
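
      As a rough illustration of that “instantly locate” capability, the sketch below cross-references a small, self-contained CBOM (plain dictionaries here, so the snippet runs on its own) against an assumed watchlist of weakened algorithms and protocols. Both the watchlist and the criticality labels are placeholders; in practice they would be driven by advisories such as NIST SP 800-131A and your own risk ratings.

```python
# A minimal sketch: flag CBOM entries that depend on weakened crypto so
# remediation can be prioritized. Entries and the watchlist are illustrative.
cbom = [
    {"protocol": "TLS 1.0", "algorithms": ["RSA-1024", "SHA-1"],
     "where_used": ["legacy SFTP gateway"], "criticality": "high"},
    {"protocol": "TLS 1.2", "algorithms": ["ECDHE", "AES-256-GCM"],
     "where_used": ["main login portal", "6 microservices"], "criticality": "high"},
    {"protocol": "TLS 1.3", "algorithms": ["ECDHE", "AES-256-GCM"],
     "where_used": ["public marketing site"], "criticality": "low"},
]

# Assumed watchlist of deprecated or soon-to-be-weak crypto; keep it updated
# from advisories (NIST, IETF, vendor bulletins) rather than hardcoding it.
watchlist = {"TLS 1.0", "TLS 1.1", "SHA-1", "RSA-1024", "3DES"}

def exposed_entries(cbom, watchlist):
    """Return entries whose protocol or algorithms appear on the watchlist."""
    hits = []
    for entry in cbom:
        flagged = watchlist & ({entry["protocol"]} | set(entry["algorithms"]))
        if flagged:
            hits.append({**entry, "flagged": sorted(flagged)})
    # Highest-criticality exposures first, so remediation can be prioritized.
    return sorted(hits, key=lambda e: e["criticality"] != "high")

for hit in exposed_entries(cbom, watchlist):
    print(f'{hit["flagged"]} -> used by {hit["where_used"]} (criticality: {hit["criticality"]})')
```

      The output is exactly the input a remediation plan needs: which weak primitives are in use, where, and how urgent the replacement is.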

      Your plan might include: 

      • Timelines: e.g., deprecate TLS 1.0 within 30 days 
      • Fallbacks: Support TLS 1.3 with strong cipher negotiation 
      • Stakeholders: Who is responsible for testing and deploying the change 
      • Validation steps: Ensure cryptographic strength before go-live 


      How Can Organizations Implement CBOM?

      To comply with Requirement 12.3.3 and achieve real crypto visibility, organizations must operationalize CBOM as part of their security and compliance lifecycle. 

      1. Discovery and Inventory
        • Scan environments using tools like nmap, sslscan, or custom API hooks (see the minimal discovery sketch after this list)
        • Document all cipher suites, certificates, keys, algorithms, and libraries
      2. Classification and Context
        • Define the purpose of each cryptographic component
        • Link components to applications, services, APIs, or endpoints
      3. Version Control and Storage
        • Store the CBOM in a version-controlled repository
        • Track all changes, patches, and upgrades over time
      4. Validation and Verification
        • Regularly test configurations using automated tools
        • Integrate crypto validation into CI/CD pipelines
      5. Monitoring and Alerting
        • Subscribe to threat intelligence sources (e.g., NIST, IETF, CVE feeds)
        • Automate alerts for deprecated or insecure algorithms
      6. Governance and Ownership
        • Assign responsibility to cryptographic owners
        • Schedule annual reviews aligned with PCI DSS assessments
      7. Plan for Crypto Agility
        • Ensure systems are designed to easily switch ciphers and protocols
        • Maintain a retirement plan for outdated components
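
      As a starting point for the discovery step referenced in item 1, the minimal sketch below uses Python's standard ssl module to record the TLS version and cipher suite an endpoint actually negotiates. It is only a lightweight probe that assumes the endpoint speaks TLS on the given port; it does not replace dedicated scanners such as nmap or sslscan, and the host name shown is a placeholder.

```python
import socket
import ssl

def probe_tls(host: str, port: int = 443, timeout: float = 5.0) -> dict:
    """Connect to host:port and record the negotiated TLS version and cipher suite."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cipher_name, _, secret_bits = tls.cipher()
            return {
                "endpoint": f"{host}:{port}",
                "protocol": tls.version(),      # e.g., "TLSv1.3"
                "cipher_suite": cipher_name,    # e.g., "TLS_AES_256_GCM_SHA384"
                "secret_bits": secret_bits,
            }

# Placeholder host; replace with the endpoints in your own environment.
for host in ["example.com"]:
    print(probe_tls(host))
```

      Each result can then be enriched with purpose, ownership, and criticality details and folded into the CBOM inventory described earlier.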

      How can Encryption Consulting help?

      Navigating the complexities of PCI DSS v4.0.1, particularly emerging expectations around cryptographic transparency and the Cryptographic Bill of Materials (CBOM), requires more than just checkbox compliance. It calls for strategic alignment, deep technical understanding, and a clear action plan. 

      At Encryption Consulting, we specialize in delivering end-to-end compliance services tailored to your organization’s unique risk landscape. Our structured assessments help identify cryptographic assets, pinpoint gaps in documentation, and uncover risks related to undocumented or deprecated algorithms. From there, we develop an actionable, prioritized roadmap to help you achieve and maintain PCI DSS readiness, including preparation for future CBOM-related requirements. 
       
      Our approach covers these essential areas: 

      • Cryptographic Inventory & Discovery: We assess your environment to build a detailed cryptographic inventory, helping you identify keys, certificates, algorithms, and libraries across your systems. 
      • Gap Analysis Against PCI DSS and CBOM Readiness: Our assessments highlight where current practices may fall short of emerging expectations, including cryptographic lifecycle management. 
      • Roadmap for Remediation: We deliver a practical, phased roadmap with clear remediation steps, adopting best practices for sustainable compliance. 
      • Expert Guidance: Our consultants work closely with your team at every stage, providing clarity and ensuring alignment with both current PCI DSS controls and future CBOM requirements.

      Conclusion

      PCI DSS 4.0.1 Requirement 12.3.3 is more than a checkbox; it’s a strategic mandate to understand, monitor, and manage cryptographic risk. In a world where algorithms age quickly and attackers grow smarter, cryptographic transparency is non-negotiable. 

      A CBOM acts as a living blueprint of your cryptographic environment. It supports security teams, compliance auditors, developers, and executive stakeholders in making informed, risk-based decisions about cryptographic hygiene. 

      By implementing a CBOM: 

      • You’ll be better prepared to comply with PCI DSS 4.0.1 
      • You’ll reduce time to respond to crypto-related vulnerabilities 
      • You’ll elevate your overall cryptographic governance maturity 

      Streamlining PKI Operations: Integrating CertSecure Manager with DigiCert CA 

      The digital world moves quickly, and properly managing your digital certificates is non-negotiable. It’s essential for security, compliance, and uninterrupted business operations. CertSecure Manager by Encryption Consulting is a modern Certificate Lifecycle Management (CLM) solution that automates and simplifies certificate issuance, renewal, discovery, and governance across hybrid environments. 

      This blog walks you through how CertSecure Manager integrates seamlessly with DigiCert CA, enabling enterprises to centralize control while leveraging the scalability and reliability of DigiCert’s world-class public key infrastructure. 

      Why DigiCert? 

      DigiCert is a leading global certificate authority known for high-assurance digital certificates, robust validation processes, and trusted root infrastructure. Organizations relying on DigiCert for their public-facing or internal certificates can now extend their automation capabilities by integrating it with CertSecure Manager. 

      Integration Highlights 

      • Automated Certificate Issuance & Renewal 
        CertSecure Manager automates certificate issuance and renewal directly from DigiCert via secure API integrations. 
      • Policy Enforcement 
        Enforce consistent issuance policies across all endpoints. 
      • Unified Visibility 
        Centralized visibility into all DigiCert-issued certificates through CertSecure Manager’s dashboard. 
      • Smooth Provisioning 
        Automatically deploy and renew certificates on endpoints and servers. 
      • Reports 
        Detailed reports on certificate inventory, expiration trends, and compliance metrics for internal audits and external regulatory requirements. 

      How the Integration Works

      1. API Credential Setup

        Generate an API key from the DigiCert CertCentral console with the necessary permissions (a quick key-verification sketch follows these steps).

      2. Connector Configuration in CertSecure Manager
        • Go to the CertSecure Manager UI and download the DigiCert connector installer from Utilities > Connectors > Download.
        • Once the connector is configured with all the required details, you can complete the CA integration by navigating to Administration > CA Management, clicking Add CA, and entering the necessary information.
      3. Certificate Request Workflow
        • Users or systems request certificates via CertSecure Manager.
        • Requests follow the organization-defined approval workflows.
        • Upon approval, the platform calls DigiCert APIs to issue the certificate.
      4. Certificate Deployment & Tracking
        • Certificates can be automatically deployed to target devices.
        • CertSecure Manager tracks certificate metadata, monitors expiration, and automates renewal.
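
      Before configuring the connector, it can be useful to confirm that the API key generated in step 1 actually works. The sketch below (using the requests library) calls what we understand to be the CertCentral Services API's list-orders endpoint with the X-DC-DEVKEY header; treat the base URL, endpoint path, and response shape as assumptions and verify them against DigiCert's API documentation and the integration guide linked below.

```python
import os
import requests

# Assumed CertCentral Services API base URL and auth header; confirm both
# against DigiCert's official documentation before relying on them.
BASE_URL = "https://www.digicert.com/services/v2"
API_KEY = os.environ["DIGICERT_API_KEY"]  # the key generated in step 1

def list_orders(limit: int = 5) -> dict:
    """Fetch a few recent certificate orders to confirm the key and its permissions work."""
    response = requests.get(
        f"{BASE_URL}/order/certificate",
        headers={"X-DC-DEVKEY": API_KEY, "Content-Type": "application/json"},
        params={"limit": limit},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    orders = list_orders()
    print(f"API key accepted; received {len(orders.get('orders', []))} order records.")
```

      A successful response confirms the key and its permissions before you proceed with the connector and CA configuration in CertSecure Manager.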

      For a detailed guide on DigiCert Integration, visit DigiCert Integration Guide. Here, you will find information about prerequisites, API key generation, and integration with CertSecure Manager. 

      Supported Features 

      • SSL/TLS Certificate Issuance 
      • Certificate Renewal and Revocation 
      • Wildcard and SAN Certificates 
      • Role-Based Access Control 
      • Audit Logging and Reporting 
      • ITSM and CMDB Integration 

      Security and Compliance 

      • CertSecure Manager maintains audit logs and provides full traceability. 
      • Role-based permissions ensure only authorized users can request, approve, or revoke certificates. 


      Conclusion 

      Integrating CertSecure Manager with DigiCert CA brings together robust PKI infrastructure and intelligent certificate automation. This integration helps reduce manual effort, eliminate outages due to expired certificates, and enhance compliance across your organization’s digital assets. 

      For organizations seeking scalable, secure, and automated certificate management, this integration is a key step toward strengthening enterprise PKI posture.