Introduction
Modernizing your Public Key Infrastructure (PKI) is the most effective way to address the “Forge Later” risk in the TNFL framework. Old-school PKI is often too static and manual; modern PKI must be automated, short-lived, and identity-centric.
For detailed information on Trust Now, Forge Later (TNFL), please refer to the dedicated blog.
Following is a practical step-by-step PKI modernization plan emphasizing “how” to mitigate TNFL:
Phase 1: Cryptographic Discovery and Inventory
Before modifying any Certificate Authority (CA) or rotating keys, the organization must establish an authoritative map of its cryptographic landscape. TNFL is fundamentally a risk to authenticity; it targets signatures that must remain verifiable over long periods. Therefore, this phase focuses on identifying where classical signature algorithms (RSA/ECDSA) are embedded in “long-lived” certificates.
- Discover and locate certificates and keys across internal networks, cloud environments, Kubernetes clusters, load balancers, DevOps pipelines, and endpoints.
- For every discovered asset, document its survival requirement:
- Low Exposure: A 90-day internal TLS certificate carries minimal TNFL risk because it rotates before a “Forge Later” attack can be executed.
- High Exposure: Code-signing certificates, firmware roots, timestamping authorities, and legal documents may require validity for 10–30 years. These are your critical TNFL domains.
- Catalog non-upgradeable or constrained systems, such as legacy HSMs, smart cards, TPM-bound keys, IoT devices, industrial control systems, secure boot implementations, and hardware with embedded trust anchors.
- Any system that hardcodes RSA/ECDSA and cannot be patched for new algorithms is a structural liability. These must be segmented, protected by compensating controls, or prioritized for a hardware refresh.
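The survival-requirement triage above can be sketched as a small classifier. The thresholds below are illustrative assumptions, not policy; the 398-day figure mirrors the current maximum lifetime for public TLS certificates.

```python
from datetime import date

def tnfl_exposure(not_before: date, not_after: date) -> str:
    """Bucket a certificate's TNFL exposure by its total validity period."""
    lifetime_days = (not_after - not_before).days
    if lifetime_days <= 398:          # rotates away before "Forge Later" matures
        return "low"
    if lifetime_days <= 3 * 365:      # illustrative mid-tier threshold
        return "medium"
    return "high"                     # code signing, firmware roots, timestamping

# Example: a 90-day internal TLS cert vs. a 20-year firmware signing cert.
tls_cert = tnfl_exposure(date(2025, 1, 1), date(2025, 4, 1))
firmware_cert = tnfl_exposure(date(2025, 1, 1), date(2045, 1, 1))
```

In practice this classification would run over the full inventory from the discovery tooling, tagging each asset so the high-exposure tier can be prioritized first.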
Phase 2: Governance and Enrollment Readiness
PKI modernization must be governed by an explicit cryptographic policy. This includes defining which algorithms are approved, which systems require long-term survivability, and which certificate types must transition first.
- Define approved cryptographic standards and policies, including algorithms (e.g., transition from RSA-2048 to ML-DSA-65), a migration timeline, enforcement dates for classical-only issuance, and the criteria for hybrid or PQC issuance.
- Validate that all systems, including web servers, Java runtimes, and IoT stacks, can parse new Object Identifiers (OIDs), handle larger certificate sizes, and process extended trust chains. Governance ensures that enrollment and issuance changes do not occur blindly.
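A governance policy like this can be encoded as data rather than prose, so enforcement dates are checked mechanically at issuance time. The algorithm names and sunset dates below are hypothetical placeholders, not a recommendation.

```python
from datetime import date

# Hypothetical policy table: algorithm -> last date classical-only issuance
# is permitted (None means no sunset). Dates are illustrative assumptions.
ISSUANCE_POLICY = {
    "RSA-2048":   date(2026, 12, 31),
    "ECDSA-P256": date(2027, 6, 30),
    "ML-DSA-65":  None,               # approved PQC algorithm, no sunset
}

def issuance_allowed(algorithm: str, on: date) -> bool:
    """Return True if the policy still permits issuing with this algorithm."""
    if algorithm not in ISSUANCE_POLICY:
        return False                  # unapproved algorithms never issue
    sunset = ISSUANCE_POLICY[algorithm]
    return sunset is None or on <= sunset

rsa_ok_2026 = issuance_allowed("RSA-2048", date(2026, 6, 1))
rsa_ok_2028 = issuance_allowed("RSA-2048", date(2028, 1, 1))
```

The point of the table is that changing an enforcement date is a one-line policy edit, which the issuance pipeline picks up without code changes.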
Phase 3: Finalize PQC Migration Approach for PKI
Before standing up a new Post-Quantum Cryptography (PQC) Issuing CA, you must decide on a deployment model that dictates how legacy and modern systems will interact:
- Hybrid/Composite: Certificates where classical (RSA/ECC) and PQC (ML-DSA) signatures coexist. This provides “defense-in-depth,” ensuring security as long as at least one algorithm remains unbroken.
Hybrid typically means two separate certificates for the same subject and key-pair context:
- Certificate A → Classical algorithm (e.g., RSA-4096 or ECDSA P-384)
- Certificate B → PQC algorithm (e.g., ML-DSA)
Composite means one single certificate that carries a combined algorithm identifier, e.g., id-MLDSA65-RSA3072-PKCS15-SHA512.
- Parallel Hierarchies: Maintaining two distinct PKI stacks with two separate trust chains:
- Legacy PKI: Root → Intermediate → End-Entity, using RSA-4096
- PQC PKI: Root → Intermediate → End-Entity, using ML-DSA
- PQC-ready clients use the new chain, while legacy clients stay on the old one.
- Complete Replacement: A hard cutover to PQC. This is the cleanest architecture but carries the highest risk of “bricking” legacy devices that cannot parse new, larger PQC keys.
- You decommission the classical Root and Intermediates using RSA/ECC.
- You move entirely to a PQC Root and Intermediate using ML-DSA.
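The hybrid (two-certificate) model implies a simple server-side selection rule: serve whichever chain the client can verify. This sketch illustrates the idea; the algorithm names and chain labels are illustrative, not a protocol definition.

```python
# Sketch of chain selection under the "hybrid" model: the server holds a
# classical and a PQC certificate for the same subject and picks one based
# on the signature algorithms the client advertises.
def select_chain(client_algs: set[str]) -> str:
    """Prefer the PQC chain; fall back to classical for legacy clients."""
    if "ML-DSA-65" in client_algs:
        return "pqc-chain"            # Certificate B: ML-DSA
    if client_algs & {"RSA-4096", "ECDSA-P384"}:
        return "classical-chain"      # Certificate A: RSA/ECDSA
    return "no-trust"                 # client can verify neither chain

modern = select_chain({"ML-DSA-65", "ECDSA-P384"})
legacy = select_chain({"RSA-4096"})
```

Under the parallel-hierarchies model the same selection logic applies, except the two chains terminate at entirely separate roots.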
Once the model is selected, the mechanisms used to request and issue certificates must be overhauled to handle the unique physical characteristics of PQC, such as significantly larger key sizes and new metadata.
- Active Directory (ADCS): You must duplicate and update certificate templates to support PQC-ready key parameters. Auto-enrollment policies must be revised to target only compatible endpoints, preventing “enrollment loops” where a legacy machine tries—and fails—to install a PQC certificate it cannot process.
- DevOps & Automation: CSR (Certificate Signing Request) generation workflows must be re-tested. Many automated scripts have hardcoded buffer limits that will fail when presented with the larger payloads required for composite or hybrid requests.
- Protocol Support: For automated enrollment via ACME, EST, or SCEP, your CA endpoints must be updated to support new profiles. You must ensure that load balancers and firewalls do not drop these larger packets as “malformed” traffic.
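One way to catch the hardcoded-buffer failures described above is a pre-flight size estimate before submitting a CSR. The byte counts below are approximate public figures for ML-DSA-65 (FIPS 204) and RSA-2048; treat them, and the fixed overhead, as ballpark assumptions for capacity planning rather than exact DER encodings.

```python
# Approximate public-key and signature sizes in bytes (planning estimates).
APPROX_SIZES = {
    "RSA-2048":  {"pubkey": 270,  "signature": 256},
    "ML-DSA-65": {"pubkey": 1952, "signature": 3309},
}

def csr_fits(algorithms: list[str], buffer_limit: int, overhead: int = 512) -> bool:
    """Estimate whether a (possibly composite) CSR fits a fixed buffer."""
    total = overhead  # DER structure, subject, attributes (rough guess)
    for alg in algorithms:
        total += APPROX_SIZES[alg]["pubkey"] + APPROX_SIZES[alg]["signature"]
    return total <= buffer_limit

classical_ok = csr_fits(["RSA-2048"], buffer_limit=4096)
composite_ok = csr_fits(["RSA-2048", "ML-DSA-65"], buffer_limit=4096)
```

A classical CSR fits comfortably in a 4 KB buffer, while a composite RSA + ML-DSA request does not, which is exactly the class of silent failure the re-testing in this phase is meant to surface.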
Phase 4: Securing and Modernizing the Root CA
Modernizing your root is not an “in-place upgrade.” It requires the creation of a new, parallel trust anchor specifically engineered for the post-quantum era. You must set up a completely new Root CA designed for crypto-agility. This root will eventually sign your Issuing CAs using quantum-resistant algorithms.
- The root private key must be generated within a FIPS 140-3 Level 3 (or higher) Hardware Security Module (HSM). This standard is critical as it mandates identity-based authentication and physical tamper-resistance specifically tested for modern cryptographic modules.
- Generation must occur during a formally documented Key Ceremony. This involves “M-of-N” dual control (where multiple trusted officers must present physical keys to activate the HSM) and independent auditors to ensure the private key never leaves the secure hardware.
- Once generated, the Root CA must remain strictly offline and air-gapped. It is only powered on for high-integrity events, such as signing an Issuing CA’s certificate.
- The issuing CA will later generate a CSR using its PQC-capable key pair and submit it to the new root for signing. The root verifies the CSR and signs it with its private key, creating the trust chain.
- An end-entity (server or user) cannot use a PQC certificate if the device doesn’t already “know” the new root. This is the most common failure point in modernization. The new PQC Root certificate must be pushed to all trust stores across the enterprise before issuing any operational certificates.
- Windows: Deployment via Group Policy Objects (GPO).
- Mobile/Mac: Deployment via Mobile Device Management (MDM) profiles.
- Linux/Cloud: Integration into base images and configuration management (Ansible/Terraform).
- If you skip pre-distribution, systems will fail to validate the new PQC chain and force a fallback to the legacy classical root. This fallback leaves you perpetually dependent on RSA/ECDSA anchors that an attacker can “Forge Later”.
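The pre-distribution requirement can be enforced as an automated gate: before the Issuing CA goes live, confirm every managed endpoint already trusts the new root. The hostnames, placeholder certificate bytes, and fingerprint sets below are purely illustrative.

```python
import hashlib

# Stand-in for the new ML-DSA root's DER bytes; fingerprint computed from it.
NEW_PQC_ROOT_DER = b"placeholder bytes for the new PQC root certificate"
NEW_ROOT_FP = hashlib.sha256(NEW_PQC_ROOT_DER).hexdigest()

def ready_to_issue(endpoint_trust_stores: dict[str, set[str]]) -> list[str]:
    """Return endpoints still missing the new root (must be empty to proceed)."""
    return [host for host, fps in endpoint_trust_stores.items()
            if NEW_ROOT_FP not in fps]

# Example trust-store snapshots gathered via GPO/MDM/config-management reporting.
stores = {
    "web01":  {NEW_ROOT_FP, "fp-of-legacy-root"},
    "iot-gw": {"fp-of-legacy-root"},   # not yet updated, blocks go-live
}
missing = ready_to_issue(stores)
```

Gating issuance on an empty `missing` list is what prevents the silent fallback to the legacy classical root described above.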
Phase 5: Issuing CA Transition
The Issuing CA acts as the bridge between the high-security Root and the daily operational needs of the enterprise. This stage is critical for establishing crypto-agility in live traffic.
- The Issuing CA must generate its own PQC or hybrid key pair (e.g., using ML-DSA or a composite RSA + ML-DSA pair) within a dedicated Hardware Security Module (HSM).
- The Issuing CA produces a Certificate Signing Request (CSR). This request includes the public key and identity attributes, signed by its own private key to prove that the CA truly controls the keys it is registering.
- The CSR is then submitted to the offline Root CA (established in Phase 4). The Root signs the CSR, creating the Issuing CA certificate and officially linking it to the new quantum-resistant trust hierarchy.
- Once operational, the Issuing CA begins signing end-entity certificates. If you have selected a hybrid model, the CA must be configured with specific Certificate Templates that support dual signatures. In this mode, every certificate contains both a classical signature (for legacy systems) and a PQC signature (for modern systems).
Phase 6: Stress-Testing the Revocation and Handshake
Unlike classical PKI, PQC introduces significantly larger data payloads. You must validate that your infrastructure can validate these larger certificates without failing.
- PQC signatures make Certificate Revocation Lists (CRLs) much larger. Test your bandwidth to ensure CRL distribution points don’t bottleneck.
- Online Certificate Status Protocol (OCSP) responders must be tested for their ability to sign and deliver PQC-valid responses within the time limits required by modern browsers.
- Validate connections under TLS 1.3, where PQC key exchanges (such as ML-KEM) are integrated alongside classical mechanisms.
- Watch for “packet fragmentation” at the firewall. Because PQC keys are larger, the initial TLS “Client Hello” or “Server Hello” may exceed the standard MTU (Maximum Transmission Unit), causing network equipment to drop the connection.
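The MTU concern can be sized with back-of-the-envelope arithmetic. The per-certificate byte counts below are rough planning estimates (the ML-DSA-65 figure folds in an approximately 2 KB public key and 3.3 KB signature plus metadata), not wire-accurate measurements.

```python
MTU = 1500  # typical Ethernet MTU in bytes

# Rough end-entity/intermediate certificate sizes (planning estimates).
APPROX_CERT_BYTES = {
    "ECDSA-P256": 800,
    "RSA-2048":   1200,
    "ML-DSA-65":  7000,
}

def packets_for_chain(chain: list[str]) -> int:
    """Estimate how many MTU-sized packets the TLS certificate chain needs."""
    total = sum(APPROX_CERT_BYTES[c] for c in chain)
    return -(-total // MTU)   # ceiling division

classical_packets = packets_for_chain(["ECDSA-P256", "ECDSA-P256"])
pqc_packets = packets_for_chain(["ML-DSA-65", "ML-DSA-65"])
```

The jump from roughly two packets to around ten per handshake is why middleboxes that mishandle fragmentation, or drop “oversized” hellos as malformed, must be tested in this phase.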
The outcome of this phase is a fully operational, high-performance Issuing CA that actively provides quantum-resistant identities while being monitored for real-world network performance and stability.
Phase 7: Pilot Testing and Gradual CA Switch
The transition to PQC is not a “big bang” event; it is a series of calculated waves. Success depends on a Gradual CA Switch that allows you to monitor network impact before committing the entire enterprise.
- Select a small, representative subset of your environment to act as the pilot group. Choose non-critical internal web applications, a single Kubernetes namespace, or a specific branch office. Avoid high-volume production databases at this stage.
- Use this group to test Hybrid Certificates (Classical + PQC). Verify that legacy clients can still connect using the classical signature while modern clients successfully validate the PQC signature.
- Establish a baseline for:
- Handshake Latency: Measure the millisecond increase in TLS connection times.
- Fragmentation and Drops: Monitor for dropped connections caused by certificates exceeding the standard 1500-byte MTU.
- CPU/Memory Load: Track the impact on load balancers and web servers handling the more complex PQC algorithms.
- Once the pilot is stable, begin shifting your Issuing CAs. Instead of deleting the old CA, run it in parallel and slowly “drain” the legacy trust.
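The gradual “drain” can be implemented as percentage-based routing of issuance requests between the legacy and PQC issuing CAs. The wave schedule below is an illustrative assumption; real rollouts would tie each wave to the pilot metrics above.

```python
import random

# Hypothetical rollout schedule: (wave number, % of requests to the new CA).
ROLLOUT = [(1, 5), (2, 25), (3, 50), (4, 100)]

def pick_ca(wave: int, rng: random.Random) -> str:
    """Route an issuance request according to the current wave's percentage."""
    pct = dict(ROLLOUT)[wave]
    return "pqc-issuing-ca" if rng.randrange(100) < pct else "legacy-ca"

rng = random.Random(42)                      # deterministic for the example
wave4 = {pick_ca(4, rng) for _ in range(10)}  # final wave: everything on PQC
```

Because the routing decision lives outside the applications, rolling a wave back after a compatibility incident is a single configuration change rather than a redeployment.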
Phase 8: Full-Scale Rollout and Decommission
As you move into full-scale production, the focus shifts to ensuring 100% coverage.
- Update your CA configuration, such as Active Directory Certificate Services (ADCS) templates or CLM policies, to mandate the use of the new PQC-ready templates.
- Do not decommission the legacy CA immediately. Keep it in a “Read-Only” state for 6–12 months to allow for emergency rollbacks if an unforeseen compatibility issue arises in a long-lived legacy system.
- Maintain a live Cryptographic Bill of Materials (CBOM), building on the Phase 1 inventory: a continuous record of every algorithm and key length used across your environment. The CBOM should flag any “Shadow IT” or legacy systems that still use RSA-2048 in zones mandated for PQC. Use the CBOM to identify “Non-Agile” endpoints or devices that are hard-coded to a specific algorithm and cannot receive automated updates. These represent your remaining structural TNFL liabilities.
- Perform a “sweep” of the environment. Any remaining classical certificates should be flagged as high-risk TNFL vulnerabilities.
Phase 9: Continuous Monitoring and Logging
Integrate your PKI with monitoring tools, such as a SIEM, so that PQC-specific anomalies are continuously detected and logged.
- If a PQC-ready server suddenly “falls back” to a classical handshake, it may indicate a downgrade attack by an adversary trying to bypass your quantum-safe defenses.
- Centralize errors related to “Invalid OIDs” or “Signature Verification Failures,” which often signal that a client’s trust store is out of sync with your new PQC root.
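A first-cut SIEM correlation rule for the downgrade scenario might look like the sketch below. The hostnames, log fields, and the expectation that PQC-capable servers negotiate an ML-KEM exchange are all hypothetical assumptions.

```python
# Hosts that, per inventory, have PQC-capable TLS stacks (illustrative).
PQC_CAPABLE = {"app01", "app02"}

def downgrade_alerts(handshake_logs: list[dict]) -> list[str]:
    """Return hosts that should support PQC but negotiated classical-only."""
    return [log["host"] for log in handshake_logs
            if log["host"] in PQC_CAPABLE and log["kex"] != "ML-KEM-768"]

logs = [
    {"host": "app01",    "kex": "ML-KEM-768"},  # expected PQC/hybrid exchange
    {"host": "app02",    "kex": "X25519"},      # suspicious classical fallback
    {"host": "legacy01", "kex": "X25519"},      # legacy host, no alert
]
alerts = downgrade_alerts(logs)
```

The same pattern extends to the trust-store drift signal: correlate “Invalid OID” and signature-verification failures per client and alert when they cluster after a root rollout.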
Phase 10: Automation and Agility
Manual certificate management is a structural vulnerability. In a post-quantum world, the speed at which you can rotate keys is more important than the strength of the keys themselves.
True agility means your applications are not “hard-coded” to a specific algorithm.
- Use modular cryptographic libraries. Your applications should call for a “High-Security Signature” rather than “RSA-2048.” This allows you to update the backend algorithm (to ML-DSA) without touching the application code.
- Store your cryptographic requirements (key length, algorithm type) in central configuration files or a Certificate Lifecycle Management (CLM) tool. When standards change, you update the policy once, and the automation pushes it everywhere.
- Use the ACME (Automated Certificate Management Environment) protocol to handle the entire “Request → Validate → Issue → Install” loop without human eyes.
- Integrate your CLM with load balancers (F5, Citrix) and cloud providers. When a certificate is renewed, the CLM should automatically “bind” the new PQC certificate to the service and restart the listener.
- Test your system’s ability to rotate 1,000+ certificates in under 24 hours. This is your insurance policy if an early PQC algorithm is found to be flawed.
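The “call for a policy level, not an algorithm” idea reduces to a layer of indirection between applications and concrete algorithms. This minimal sketch assumes a hypothetical central policy table of the kind a CLM tool would hold; the policy names are invented for illustration.

```python
# Central mapping from abstract policy levels to concrete algorithms.
# Swapping RSA for ML-DSA is then a one-line policy change, with no
# application code modified.
SIGNATURE_POLICY = {
    "high-security-signature": "ML-DSA-65",   # was "RSA-2048" pre-migration
    "legacy-compat-signature": "RSA-2048",
}

def resolve_algorithm(policy_name: str) -> str:
    """Applications call this instead of hardcoding an algorithm name."""
    return SIGNATURE_POLICY[policy_name]

current = resolve_algorithm("high-security-signature")
```

If an early PQC algorithm is later found to be flawed, this indirection is what makes the “rotate 1,000+ certificates in under 24 hours” drill a configuration change followed by automated re-issuance, rather than an emergency code release.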
How Does CBOM Enable PKI Modernization to Prevent TNFL?
A Cryptographic Bill of Materials (CBOM) is the foundational “source of truth” required to navigate the complex transition to Post-Quantum Cryptography (PQC) and mitigate the risks of Trust Now, Forge Later (TNFL). For TNFL specifically, the concern is that attackers may not just decrypt past data, but potentially forge digital trust in the future by breaking signature schemes or exploiting weak validation chains. CBOM helps identify where trust is rooted and enforced, such as code signing certificates, root and intermediate CAs, embedded public keys in firmware, pinned certificates in applications, and device trust stores.
A CBOM provides evidence-based answers to questions such as:
- Which certificates, chains, and signature algorithms exist today (RSA/ECDSA, key sizes, curves, hashes)?
- Where signatures are validated (apps, devices, middleboxes, libraries) and what those validators can or can’t parse?
- Where “trust” is embedded (pinned certs, embedded roots, hardcoded public keys, firmware trust stores)?
- Which systems will “halt” when you introduce larger certs or new OIDs or new chains (common in PQC and composite/hybrid approaches)?
These are the places where a future cryptographic break would have long-term systemic impact. By mapping these dependencies, organizations can prioritize which trust anchors and signing systems must be made quantum-resilient first. The following describes the role of CBOM in modernizing PKI for PQC and mitigating TNFL:
- Cryptographic Discovery: Discovery goes beyond simple certificate counting. It identifies every touchpoint in the PKI ecosystem, including TLS/mTLS identities (Kubernetes, SPIFFE), Code Signing pipelines, and Trust Stores across various operating systems and devices. It also captures the specific versions of cryptographic libraries (like OpenSSL or BoringSSL) that dictate what a system can or cannot process.
- Build Inventory: The CBOM catalogues this data into a single inventory. It links assets, such as the app or device, to their specific crypto usage and their dependencies. This mapping is what turns a simple list into a viable modernization roadmap.
- Risk-Based Prioritization: CBOM enables organizations to rank migration waves by exposure and the lifetime of trust. For TNFL specifically, CBOM highlights high-stakes areas such as code signing and root integrity, where a future cryptographic break would be catastrophic, enabling teams to secure these “long-lived” assets first.
- Crypto-Agility and Governance: Finally, a CBOM is not a static document; it is a living control for Crypto-Agility. It provides continuous visibility into the environment, alerting security teams to non-compliant algorithms, such as RSA-1024 keys or “shadow” CAs.
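A CBOM query that flags non-compliant algorithms can be as simple as the sketch below, run over a flat inventory. The asset names, zone labels, and field names are illustrative; a real CBOM would carry far richer dependency data.

```python
# Minimal flat CBOM: each entry links an asset to its crypto usage and the
# policy zone it lives in (all values illustrative).
CBOM = [
    {"asset": "payments-api", "zone": "pqc-mandated", "algorithm": "ML-DSA-65"},
    {"asset": "legacy-erp",   "zone": "pqc-mandated", "algorithm": "RSA-2048"},
    {"asset": "intranet-web", "zone": "classical-ok", "algorithm": "RSA-2048"},
]
CLASSICAL = {"RSA-1024", "RSA-2048", "ECDSA-P256"}

def tnfl_violations(cbom: list[dict]) -> list[str]:
    """Assets still using classical crypto where policy mandates PQC."""
    return [e["asset"] for e in cbom
            if e["zone"] == "pqc-mandated" and e["algorithm"] in CLASSICAL]

violations = tnfl_violations(CBOM)
```

Running this kind of check continuously, rather than as a one-off audit, is what turns the CBOM from a static document into the living crypto-agility control described above.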
How can Encryption Consulting Help?
If you are wondering where and how to begin your post-quantum journey, Encryption Consulting’s CBOM Solution is here to make that path clear and practical.
We have already established that transitioning to post-quantum cryptography is not just about replacing algorithms. It requires visibility into your current cryptographic landscape, understanding where vulnerabilities exist, and building a roadmap that aligns with your business, compliance, and operational goals. That is where our CBOM (Cryptographic Bill of Materials) Solution plays a critical role.
Our CBOM solution helps you discover, inventory, and classify all cryptographic assets across your environment. We identify where classical algorithms such as RSA and ECC are being used, map certificate dependencies, analyze key usage, and highlight systems that may be vulnerable to future quantum threats. With this visibility, we guide you through:
- Assessing quantum risk exposure across applications, PKI, HSMs, and infrastructure
- Prioritizing systems for migration based on business impact and compliance needs
- Designing hybrid, composite, or parallel PKI strategies
- Developing a phased PQC migration roadmap aligned with NIST standards
- Implementing crypto-agile architectures to avoid future large-scale disruptions
Encryption Consulting combines deep PKI expertise, real-world deployment experience, and forward-looking quantum readiness strategy. You can count on us as your trusted partner to guide you step by step with clarity, confidence, and practical execution from cryptographic discovery to full post-quantum readiness.
Encryption Consulting also offers a high-assurance, flexible, and scalable Managed PKI and PKI-as-a-Service (PKIaaS) solution designed to simplify certificate management and strengthen your organization’s digital trust infrastructure.
Expert Guidance and PQC Readiness
Our team of PKI specialists supports your organization in designing and managing a crypto-agile PKI. We provide guidance on best practices, policy implementation, and operational strategy, enabling your team to focus on business priorities while ensuring a secure and adaptable PKI.
Cost and Operational Efficiency
By leveraging our PKI-as-a-Service, we help organizations reduce hardware, software, and maintenance costs while streamlining PKI management with expert support.
Scalable, High-Availability PKI
Our PKIaaS platform scales seamlessly for DevOps, cloud, and IoT environments. With a high-availability, single-tenant architecture, it supports millions of certificate endpoints and hybrid certificates, ensuring consistent performance without increasing operational risk.
Rapid Deployment and Integration
Deploy a fully managed PKI quickly across on-prem, cloud, or hybrid infrastructures. Automated provisioning, enrollment, and renewal seamlessly connect with your existing DevOps pipelines, identity systems, and Zero Trust architecture, ensuring a smooth transition to quantum-safe cryptography.
Automated Certificate Lifecycle
Simplify day-to-day PKI operations with fully automated certificate issuance, renewal, revocation, and rotation. We support protocols such as ACME, SCEP, EST, and WSTEP, ensuring secure, consistent, and scalable certificate provisioning across users, devices, and applications.
Policy-Driven Compliance
Centralized policy enforcement enables you to define and enforce certificate policies, including validity periods and key usage rules, across your organization. It allows you to integrate PQC capabilities and ensure alignment with security frameworks and compliance standards such as GDPR, HIPAA, PCI DSS, and NIST. Additionally, it supports customizable certificate profiles with strict access controls, ensuring secure and compliant certificate issuance.
Private, Secure CA Management
We provide a private, single-tenant Certificate Authority environment with strict access controls. Only authorized systems, devices, and users can request certificates, ensuring high assurance for all cryptographic operations.
Deployment Options That Fit Your Needs
We offer flexibility in how PKI is implemented:
- On-Premises: Deploy a fully managed PKI within your own infrastructure, keeping root and issuing CAs under your control while benefiting from our expert guidance.
- Cloud PKI (SaaS): Leverage a secure, cloud-hosted PKI to manage certificates and digital identities with minimal operational overhead.
- Managed PKIaaS: Get a fully customized, enterprise-grade PKI solution hosted in Encryption Consulting’s cloud with expert management, delivering maximum agility and post-quantum readiness, robust compliance, and seamless scalability without the operational burden.
With Encryption Consulting, your organization gains a PKI platform that’s not only reliable and secure but also ready to evolve as cryptographic standards advance. Rapid algorithm transitions and post-quantum preparedness become manageable, rather than disruptive.
Conclusion
Modernizing your PKI is no longer a “nice-to-have” infrastructure update; it is a critical defensive mechanism against the Trust Now, Forge Later (TNFL) threat. By transitioning from manual, static architectures to automated, crypto-agile systems, you effectively neutralize an attacker’s ability to weaponize your current identity against a quantum future.
The path to quantum resilience is a journey of visibility and velocity. It begins with a comprehensive discovery of your “frozen” identities (Phase 1) and culminates in a state where your organization can rotate a thousand keys in a single day (Phase 10). By following this 10-phase modernization plan, you aren’t just swapping algorithms; you are rebuilding the foundation of digital trust to ensure that your signatures and your liability remain firmly under your control in the Quantum Era.
