- What are NIST PQC Security Categories?
- What's Actually at Risk?
- Why the Traditional Security Scale Does Not Work With PQC
- The Core Idea of References for Quantum-Safe Algorithms
- The Five Security Categories
- The Grover Misconception and Why MAXDEPTH Changes Everything
- Understanding What "Breaking" Means
- Understanding the Three Practical Properties
- PQC Level Selection Framework
- How Can Encryption Consulting Help?
- Conclusion
Imagine a safe with a combination lock. Normally, the only way to open it is by trying different number combinations until you find the right one.
Some safes have very long combinations, so it would take millions of years to guess them. That’s why they’re considered secure. Now, imagine a future machine that can home in on the correct combination dramatically faster than trying the possibilities one by one. Safes that once seemed impossible to open could suddenly be unlocked in seconds.
The new tool is the quantum computer. While classical computers use bits as the basic unit of information, quantum computers use quantum bits (qubits), which can exist in superpositions of states. This allows quantum computers to solve certain classes of mathematical problems dramatically faster than classical computers.
The locks that become vulnerable in this scenario are RSA, Diffie-Hellman, and Elliptic Curve Cryptography (ECC), the three algorithms that secure the majority of encrypted traffic on the internet today.
For decades, security decisions were built around a shared understanding of how difficult these systems were to break. But quantum computing changes the rules entirely. The scale used to measure security strength no longer tells the whole story.
The challenge is even more unsettling because the threat isn’t fully here yet, but we already know it’s coming. That’s why the National Institute of Standards and Technology (NIST) spent nearly a decade preparing for it. Their goal was to design and standardize a new generation of cryptographic algorithms built to withstand the power of quantum computers.
This effort led to the development of Post-Quantum Cryptography (PQC). It refers to the new class of cryptographic algorithms specifically designed to remain secure even against quantum computers capable of running algorithms such as Shor’s algorithm, which could break traditional public-key cryptographic systems.
What are NIST PQC Security Categories?
PQC refers to cryptographic algorithms designed to remain secure against attacks from both classical and quantum computers. NIST defines five security categories (1–5), each anchored to a specific, well-understood reference problem that remains computationally hard even against quantum hardware. The categories are split across two problem types:
- Categories 1, 3, and 5 are anchored to brute-force key search against AES-128, AES-192, and AES-256, respectively; this is the attack model relevant to encryption and key exchange. Category 1 matches the security posture of today’s standard internet protocols; Category 3 is NIST’s recommended default for new deployments; Category 5 is designed for long-lived secrets, root certificate authorities, and high-assurance systems.
- Categories 2 and 4 are anchored to collision search against SHA-256 and SHA-384, respectively; this is the attack model relevant to digital signatures and hash-based schemes.
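This mapping is compact enough to capture in a few lines. Here is a minimal Python sketch for quick reference (the dictionary and its annotations are illustrative, but the category-to-primitive pairing follows NIST’s definitions):

```python
# NIST PQC security categories and the reference problems they are anchored to.
NIST_CATEGORIES = {
    1: ("key search", "AES-128", "baseline of today's internet protocols"),
    2: ("collision search", "SHA-256", "intermediate tier for signatures"),
    3: ("key search", "AES-192", "NIST's recommended default"),
    4: ("collision search", "SHA-384", "intermediate tier for signatures"),
    5: ("key search", "AES-256", "long-lived secrets, high assurance"),
}

for level, (attack, primitive, note) in NIST_CATEGORIES.items():
    print(f"Category {level}: at least as hard as {attack} on {primitive} ({note})")
```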
What’s Actually at Risk?
Most encrypted data today is protected by one of three algorithms: RSA, Diffie–Hellman, or ECC. All of them rely on the same principle, that security comes from mathematical problems that are easy to compute in one direction but extremely difficult to reverse.
For example:
- RSA relies on the difficulty of factoring very large numbers into their prime components.
- Diffie–Hellman depends on the hardness of the discrete logarithm problem.
- ECC relies on solving the elliptic curve discrete logarithm problem.
For classical computers, these problems are computationally infeasible at large scales. And this is what protects your TLS connections, SSH sessions, digital signatures, and encrypted emails today.
However, in 1994, mathematician Peter Shor demonstrated that a sufficiently powerful quantum computer could solve both factoring and discrete logarithm problems in polynomial time, exponentially faster than any known classical approach.
The practical implication is dramatic. Breaking RSA-2048 with classical computers would require roughly 2¹¹² operations, an amount so large that it’s effectively impossible with current technology. Shor’s algorithm brings the cost down to a polynomial number of quantum operations; published engineering estimates suggest that a sufficiently large fault-tolerant quantum computer could factor a 2048-bit RSA modulus in a matter of hours.
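To see why 2¹¹² classical operations is considered out of reach, a rough back-of-the-envelope calculation helps (the 10¹⁸ operations-per-second figure is an assumption, roughly an exascale machine dedicated entirely to the attack):

```python
# Back-of-the-envelope: time to perform 2^112 classical operations.
ops_needed = 2 ** 112          # approximate classical cost of breaking RSA-2048
ops_per_second = 10 ** 18      # assumed exascale-class attacker (hypothetical)
seconds_per_year = 31_557_600  # Julian year

years = ops_needed / (ops_per_second * seconds_per_year)
print(f"~{years:.1e} years")   # ~1.6e+08 years: far beyond any practical attack
```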
Therefore, if large-scale fault-tolerant quantum computers become practical, all three pillars of modern public-key cryptography, RSA, Diffie–Hellman, and ECC, will fall to Shor’s algorithm, and with them much of the internet’s existing public-key infrastructure.
Symmetric cryptography algorithms like AES or ChaCha20, which encrypt the data after keys are exchanged, do not rely on the same algebraic structures that Shor’s algorithm exploits.
There is a quantum algorithm that affects symmetric encryption known as Grover’s algorithm, which speeds up brute-force key searches. However, its advantage is only quadratic, not exponential.
In a simple analysis, this means the effective security of AES-128 drops from 2¹²⁸ operations to about 2⁶⁴ against a quantum attacker. That might sound alarming, but in practice it isn’t nearly as bad as it appears.
The immediate and far more urgent problem therefore lies on the public-key side of cryptography. If those algorithms become vulnerable, the internet needs replacements built on mathematical problems that even quantum computers cannot efficiently solve.
That field is known as PQC, and it’s exactly what NIST spent nearly a decade evaluating and standardizing. But choosing the right algorithms was only half the challenge. The harder problem was figuring out how to measure their security in the first place.
Why the Traditional Security Scale Does Not Work With PQC
For classical cryptography, measuring security is relatively straightforward. Researchers study the best-known attack against an algorithm, estimate how many operations that attack would require, and then take the log base 2 of that number. The result becomes the algorithm’s security level in bits.
Because different cryptographic systems can all be expressed using the same measurement, they can be compared directly. For example:
- AES-128 provides about 128 bits of security, meaning the best-known attack would require roughly 2¹²⁸ operations.
- ECDH using the P-256 curve provides roughly 128 bits of security, based on the difficulty of solving the elliptic-curve discrete logarithm problem.
- RSA-3072 also falls into approximately the 128-bit security range, based on the estimated difficulty of factoring a 3072-bit integer.
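The conversion itself is nothing more than a base-2 logarithm of attack cost, as this small sketch shows (the attack-cost figures are the approximate values cited above):

```python
import math

# Security level in bits = log2(operations needed by the best known attack).
attack_cost = {
    "AES-128 (key search)": 2 ** 128,
    "ECDH P-256 (discrete log)": 2 ** 128,  # approximate
    "RSA-3072 (factoring)": 2 ** 128,       # approximate
}

for name, ops in attack_cost.items():
    print(f"{name}: ~{math.log2(ops):.0f}-bit security")
```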
PQC does not yet have the same level of long-term confidence as classical cryptography, as it relies on relatively newer hardness assumptions such as Learning With Errors (LWE), syndrome decoding from error-correcting codes, and hash-based constructions. As a result, security estimates for these schemes continue to evolve through ongoing research and cryptanalysis.
However, as NIST’s standardization process unfolded, ongoing cryptanalysis began to challenge some of those assumptions. In multiple cases, security estimates had to be revised downward as researchers uncovered more efficient attacks. This wasn’t a failure of the process; it was the process working as intended, exposing weaknesses under sustained global scrutiny.
The most striking example was SIKE. Initially considered a promising candidate, SIKE was ultimately broken after researchers discovered a new classical attack that could recover private keys in a matter of hours on a standard laptop, without any quantum hardware involved. The result was decisive and led to its removal from consideration.
So, the first problem is that precise bit-security claims for novel PQC algorithms are only as reliable as the current state of cryptanalysis, which is still actively developing.
The second challenge comes from hardware uncertainty. Estimating resistance to quantum attacks requires predicting what future quantum computers will actually look like.
The third problem is even more fundamental. Classical and quantum operations are not comparable units. By current estimates, a quantum gate costs roughly a billion to a trillion times more than a classical gate in terms of hardware, energy, and time.
Faced with these uncertainties, NIST adopted a different approach. Instead of attempting to assign precise numerical security levels to new PQC algorithms, they chose to define security by comparison, anchoring new systems to the well-understood strength of existing classical cryptography.
The Core Idea of References for Quantum-Safe Algorithms
The framework built by NIST is based on a simple observation that even if we cannot reliably say that a post-quantum cryptographic algorithm has exactly X bits of quantum security, we can still determine whether it is harder or easier to break than a reference problem we already understand well.
Instead of chasing a precise measurement, NIST defines security floors. The reference problems chosen are AES key search for Categories 1, 3, and 5, and collision search on specific hash widths: SHA-256 for Category 2 and SHA-384 for Category 4. Both hashes belong to the SHA-2 family, but the categories do not reference the family as a whole; they pin to individual output lengths because each width delivers a distinct security floor.
SHA-256’s 256-bit output provides 128 bits of collision resistance (by the birthday paradox), which is the exact floor Category 2 requires. SHA3-256 from the newer SHA-3 family provides an equivalent guarantee and is formally cited by NIST alongside SHA-256 as an acceptable reference for Category 2. These symmetric primitives have been studied more extensively than almost any other cryptographic building block.
Thousands of researchers have analyzed them and the quantum attacks against them, in particular Grover’s algorithm, which affects the key-search side, and the Brassard–Høyer–Tapp algorithm, which affects the collision-search side. Because these reference problems are well-analyzed, they provide a stable anchor. So rather than saying “this algorithm has 143 bits of security,” NIST says something more practical:
This algorithm must be at least as hard to break as AES-192 under any attack we can realistically model.
That’s not a precise number. It’s a category. And because the reference problems themselves are well understood, the category remains meaningful even as quantum hardware evolves and new cryptanalysis emerges. The result is five security categories, each anchored to a specific reference task.
The Five Security Categories
To standardize how post-quantum algorithms are evaluated, NIST defined five security categories based on the estimated difficulty of breaking established cryptographic primitives like AES and SHA-2 under both classical and quantum attacks. Each category represents a different security strength, attack model, and long-term protection target, helping organizations choose appropriate algorithms for encryption, digital signatures, and long-term data security. Let’s examine each category in detail.
Category 1
Category 1 is anchored to brute-force key search against AES-128. An attacker attempting to break a Category 1 post-quantum algorithm must expend at least as much computational effort as trying every possible 128-bit AES key. When realistic circuit costs are included, this corresponds to roughly 2¹⁴³ classical gate operations.
In classical cryptographic terms, this level of security aligns roughly with RSA-3072 and the NIST P-256 curve used in modern ECC. This is the protection level behind most TLS connections across the internet today.
For organizations whose data needs to remain confidential for the next five to ten years, migrating from RSA-3072 to a Category 1 post-quantum algorithm maintains a comparable security posture.
The concrete algorithm operating at this level is ML-KEM-512, the smallest parameter set of the Kyber key-encapsulation mechanism. It uses an 800-byte public key and a 768-byte ciphertext. By comparison, Elliptic Curve Diffie-Hellman on P-256 accomplishes the same function with a 64-byte public key. That gap illustrates what many engineers informally call the post-quantum tax.
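The “tax” is easy to put in numbers. A small sketch using the byte counts above (the P-256 figure assumes an uncompressed point with the format byte stripped):

```python
# The "post-quantum tax" at Category 1: public-key and ciphertext sizes.
ML_KEM_512_PUBLIC_KEY = 800  # bytes
ML_KEM_512_CIPHERTEXT = 768  # bytes
ECDH_P256_PUBLIC_KEY = 64    # bytes (uncompressed x || y coordinates)

print(f"Public-key overhead: {ML_KEM_512_PUBLIC_KEY / ECDH_P256_PUBLIC_KEY:.1f}x")
print(f"Key-exchange bytes on the wire: {ML_KEM_512_PUBLIC_KEY + ML_KEM_512_CIPHERTEXT}")
# Output: 12.5x overhead, and 1568 bytes vs. roughly 128 bytes for P-256 ECDH.
```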
Category 2
Category 2 changes the reference problem from key search to collision search on SHA-256. This distinction is important because encryption and digital signatures are attacked in fundamentally different ways.
Breaking encryption is typically a key-search problem, where the attacker attempts to discover a specific secret key. Forging a digital signature, on the other hand, is often a collision problem. The attacker attempts to find two different inputs that produce the same hash value so that a malicious document can replace a legitimate one while still verifying against the signature.
These two attack types behave differently under quantum computing. Treating them as equivalent would evaluate signature schemes against the wrong threat model, which is why Category 2 exists.
For SHA-256, classical collision search requires about 2¹²⁸ operations, based on the birthday paradox. A quantum algorithm known as the Brassard–Høyer–Tapp algorithm reduces this to roughly 2⁸⁵ operations, although it requires an enormous amount of quantum memory of comparable scale. Hardware capable of supporting that attack does not currently exist and is not expected in the near future.
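Both figures follow directly from the hash output width n: the classical birthday bound scales as 2^(n/2), and Brassard–Høyer–Tapp scales as roughly 2^(n/3) in time while also demanding on the order of 2^(n/3) quantum memory. A short sketch:

```python
def collision_costs(n_bits: int) -> tuple[float, float]:
    """Approximate collision-search exponents for an n-bit hash."""
    classical = n_bits / 2  # birthday bound
    bht = n_bits / 3        # Brassard-Hoyer-Tapp time (needs ~2^(n/3) memory too)
    return classical, bht

for n in (256, 384):
    c, q = collision_costs(n)
    print(f"SHA-{n}: classical ~2^{c:.0f}, BHT ~2^{q:.0f}")
# SHA-256: classical ~2^128, BHT ~2^85 -- matching the figures above.
```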
Because of this, NIST positions Category 2 slightly below Category 3. In most plausible technological scenarios, SHA-256 collisions would become feasible somewhat earlier than AES-192 key recovery.
The signature scheme designed for this category is ML-DSA-44, the smallest parameter set of the Dilithium digital signature family.
Category 3
Category 3 is anchored to brute-force key search against AES-192, corresponding to roughly 2²⁰⁷ classical gate operations. At first glance, increasing the security level from Category 1 to Category 3 may appear modest because the key size increases by only 64 bits. In practice, that difference is enormous. A gap of 2⁶⁴ represents roughly 18 quintillion times more computational effort. If breaking a Category 1 system required every computer on Earth to run continuously for a billion years, breaking a Category 3 system would require 18 quintillion times more work.
In classical cryptographic terms, Category 3 corresponds roughly to RSA-7680 or the NIST P-384 elliptic-curve standard. The P-384 curve was historically used for SECRET-level classified communications under the NSA’s Suite B framework.
Category 3 is the level NIST recommends for most new deployments. It provides a comfortable security margin for a 10-to-20-year protection window and fits within the expected development timeline for large-scale quantum computers.
The principal algorithms operating at this level are ML-KEM-768 for key encapsulation and ML-DSA-65 for digital signatures. Their public keys are approximately 1184 bytes and 1952 bytes, respectively, making them the likely workhorse parameter sets for most real-world post-quantum deployments.
Category 4
Category 4 is the collision-search counterpart to Category 3. Instead of referencing AES key search, it is anchored to the collision resistance of SHA-384, which corresponds to approximately 2¹⁹² classical operations.
In practice, Category 4 serves primarily as an analytical framework rather than a commonly targeted deployment level. Categories 2 and 4 function as intermediate benchmarks used to evaluate schemes that rely heavily on hash-function security assumptions.
Category 5
Category 5 represents the highest security tier in the framework. It is anchored to brute-force key search against AES-256, corresponding to roughly 2²⁷² classical gate operations.
Under quantum attack using Grover’s algorithm, the cost becomes approximately 2²⁹⁸ / MAXDEPTH quantum gates, where MAXDEPTH represents the realistic limit on how long a quantum computation can run sequentially before noise and error correction overheads make further execution impossible.
Even under optimistic projections for quantum hardware development, this level of computation remains far beyond any plausible adversary. In classical terms, Category 5 corresponds roughly to RSA-15360 or the NIST P-521 curve. P-521 was the curve recommended for TOP SECRET communications under the Suite B cryptographic framework.
This category is appropriate for root certificate authorities, classified government communications, long-lived encryption keys, and systems designed to withstand the harvest-now, decrypt-later threat model where adversaries store encrypted data today in the hope of decrypting it once quantum computers become viable.
Several post-quantum algorithms operate at this level, including ML-KEM-1024, ML-DSA-87, and SLH-DSA-SHA2-256s.
| PQC Algorithm Family | Parameter Sets | Security Category |
|---|---|---|
| ML-DSA | ML-DSA-44 | 2 |
| ML-DSA | ML-DSA-65 | 3 |
| ML-DSA | ML-DSA-87 | 5 |
| SLH-DSA | SLH-DSA-SHA2-128[s/f] | 1 |
| SLH-DSA | SLH-DSA-SHAKE-128[s/f] | 1 |
| SLH-DSA | SLH-DSA-SHA2-192[s/f] | 3 |
| SLH-DSA | SLH-DSA-SHAKE-192[s/f] | 3 |
| SLH-DSA | SLH-DSA-SHA2-256[s/f] | 5 |
| SLH-DSA | SLH-DSA-SHAKE-256[s/f] | 5 |
| LMS, HSS | With SHA-256/192 | 3 |
| LMS, HSS | With SHAKE256/192 | 3 |
| LMS, HSS | With SHA-256 | 5 |
| LMS, HSS | With SHAKE256 | 5 |
| XMSS, XMSS^MT | With SHA-256/192 | 3 |
| XMSS, XMSS^MT | With SHAKE256/192 | 3 |
| XMSS, XMSS^MT | With SHA-256 | 5 |
| XMSS, XMSS^MT | With SHAKE256 | 5 |
| ML-KEM | ML-KEM-512 | 1 |
| ML-KEM | ML-KEM-768 | 3 |
| ML-KEM | ML-KEM-1024 | 5 |
The Grover Misconception and Why MAXDEPTH Changes Everything
This is the part that most coverage gets wrong, and getting it right changes how you think about the urgency and the risk.
The popular statement is: “Grover’s algorithm halves the bit security of symmetric cryptography. AES-128 drops to 64-bit security. AES-256 drops to 128-bit. So just double your key sizes.” This is technically true in one narrow sense and practically misleading in almost every sense that matters.
Grover’s algorithm provides a quadratic speedup. This part is real. But Grover’s search is inherently sequential; each iteration builds on the result of the last one. You cannot split it across multiple quantum processors the way you can split classical computation. If you try to parallelize by running K smaller Grover searches side by side, the runtime shrinks only by a factor of √K, so the total hardware cost grows quadratically with the speedup gained. The total resource cost, including hardware, energy, and time, climbs dramatically faster than the naive 2⁶⁴ estimate suggests.
NIST captures this through a parameter called “MAXDEPTH”, which is defined as the realistic maximum number of sequential quantum gate operations a quantum computer can execute in one computation before decoherence, errors, or practical constraints force a stop. Plausible values range from 2⁴⁰ (roughly what near-future quantum architectures could execute serially in a year, based on the hardware architectures studied when NIST wrote the evaluation criteria) through 2⁶⁴ and up to a theoretical ceiling of 2⁹⁶.
With this constraint, attacking AES-128 via Grover requires 2¹⁷⁰ / MAXDEPTH quantum gates. At MAXDEPTH 2⁴⁰, that’s 2¹³⁰. At MAXDEPTH 2⁶⁴, it’s 2¹⁰⁶. Neither of those numbers is 2⁶⁴. Neither of them is cheap.
For AES-256, the estimate is 2²⁹⁸ / MAXDEPTH. At MAXDEPTH 2⁶⁴, that’s 2²³⁴, a number that doesn’t get meaningfully threatened even if quantum hardware improves by many orders of magnitude beyond current projections.
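These figures are easy to verify. A small sketch using the gate-count exponents from NIST’s evaluation criteria (2¹⁷⁰ for AES-128, 2²⁹⁸ for AES-256):

```python
# Grover attack cost under the MAXDEPTH constraint: total gates ~ 2^e / MAXDEPTH.
GROVER_GATE_EXPONENT = {"AES-128": 170, "AES-256": 298}  # per NIST's criteria

for cipher, e in GROVER_GATE_EXPONENT.items():
    for depth_bits in (40, 64, 96):
        print(f"{cipher} at MAXDEPTH 2^{depth_bits}: ~2^{e - depth_bits} quantum gates")
# AES-128 lands at 2^130 / 2^106 / 2^74 -- never anywhere near the naive 2^64.
```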
So, when NIST says a Category 1 algorithm must resist attacks as costly as AES-128 key search, they’re not saying it has 64-bit quantum security. They’re saying it has something closer to 106 to 130 bits of quantum security, depending on hardware assumptions.
The cost ratio between a quantum gate and a classical gate is currently somewhere between 10⁹ and 10¹². A quantum attack on Category 1 that requires 2¹⁰⁶ quantum gates, where each gate costs a trillion times what a classical gate costs, may still be economically less viable than a classical brute-force attack for a long time.
NIST accounts for this by allowing security evaluations to weight quantum gates more expensively than classical gates in cost models. And importantly, for the highest security categories, NIST recommends assuming this cost disparity eventually disappears, that future quantum hardware becomes as cheap to run as classical hardware. Category 5 is designed to survive that scenario.
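Under stated assumptions, the economics can be made explicit. A sketch comparing a cost-weighted Category 1 quantum attack with classical AES-128 key search (the 10¹² gate-cost ratio is the top of the range quoted above, an assumption rather than a NIST constant):

```python
import math

quantum_gate_exponent = 106            # Grover on AES-128 at MAXDEPTH 2^64
cost_ratio_bits = math.log2(10 ** 12)  # assumed quantum/classical cost ratio, ~39.9 bits
classical_gate_exponent = 143          # classical AES-128 key search

weighted = quantum_gate_exponent + cost_ratio_bits
print(f"Quantum attack, cost-weighted: ~2^{weighted:.0f}")  # ~2^146
print(f"Classical brute force:         ~2^{classical_gate_exponent}")
# Under these assumptions, the quantum route is still the more expensive one.
```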
Understanding What “Breaking” Means
Security levels only make sense when you’re precise about what it means to “break” a cryptographic algorithm. This is why NIST defines security requirements very carefully, and these definitions directly shape how security categories should be interpreted.
For key encapsulation mechanisms (KEMs) and encryption schemes, the required notion of security is Indistinguishability under Adaptive Chosen Ciphertext Attack (IND-CCA2). In practical terms, this means that even if an attacker can request an enormous number of decryptions (up to 2⁶⁴) on ciphertexts of their choice, they still cannot distinguish which of two chosen plaintexts corresponds to a given challenge ciphertext.
This closely reflects real-world conditions, where attackers may observe traffic, manipulate connections, or probe systems through decryption queries. IND-CCA2 ensures that none of these capabilities provides a meaningful advantage.
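For intuition, the IND-CCA2 experiment for a KEM can be sketched as code. This is a simplified illustration only; the `kem_*` callables are hypothetical placeholders, and formal definitions handle the bookkeeping far more carefully:

```python
import secrets

def ind_cca2_game(attacker, kem_keygen, kem_encaps, kem_decaps) -> bool:
    """Simplified real-vs-random IND-CCA2 game for a KEM (illustrative sketch)."""
    pk, sk = kem_keygen()
    challenge_ct, real_key = kem_encaps(pk)       # challenge ciphertext + real key
    random_key = secrets.token_bytes(len(real_key))

    b = secrets.randbits(1)                       # hidden coin flip
    candidate_key = real_key if b == 1 else random_key

    # The attacker may query decapsulation freely -- except on the challenge itself.
    def decaps_oracle(ct):
        return None if ct == challenge_ct else kem_decaps(sk, ct)

    guess = attacker(pk, challenge_ct, candidate_key, decaps_oracle)
    return guess == b  # secure if no attacker wins with probability far above 1/2
```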
In more limited scenarios, such as purely ephemeral key exchange, where key pairs are generated fresh for each session and never reused, a weaker notion, Indistinguishability under Chosen Plaintext Attack (IND-CPA), may be acceptable. However, this relaxation only applies under strict deployment assumptions and should not be generalized.
For digital signatures, the standard is Existential Unforgeability under Chosen Message Attack (EUF-CMA). Under this definition, an attacker may request signatures on a large number of messages of their choosing (again up to 2⁶⁴) yet still cannot produce a valid signature on any new message. The term “existential” is important because the attacker doesn’t need to target a specific message; they succeed if they can forge any new valid message-signature pair. If even this minimal form of forgery is infeasible, the scheme is considered secure.
Finally, to meet a given PQC security category, an algorithm must uphold these guarantees against all relevant attack models: classical, quantum, or hybrid within the defined cost threshold. A scheme that is secure in the classical sense but vulnerable to a more efficient quantum attack below the required threshold does not qualify, regardless of its other strengths.
Understanding the Three Practical Properties
The five security categories are the headline. But NIST also evaluated three practical properties that determine whether an algorithm that passes its security definition in theory stays secure in real deployments.
1. Perfect Forward Secrecy (PFS)
Perfect forward secrecy ensures that even if a long-term private key is compromised in the future, past communications remain secure. This is achieved by generating fresh, ephemeral key pairs for every session and discarding them immediately after use.
In older systems like RSA-based key exchange, this is impractical due to slow key generation. In contrast, modern PQC schemes like ML-KEM generate keys in microseconds, making per-session forward secrecy effectively “free” and significantly improving real-world security.
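A per-session flow might look like the following sketch. The `mlkem` object and its `keygen`/`encaps`/`decaps` names are assumptions for illustration; real libraries (for example, liboqs bindings) expose equivalent operations under their own APIs:

```python
# Ephemeral ML-KEM exchange sketch: fresh keys per session give forward secrecy.
# `mlkem` is a hypothetical provider; keygen/encaps/decaps are assumed names.

def establish_session_key(mlkem) -> bytes:
    # Client: generate a brand-new key pair used for this session only.
    client_pk, client_sk = mlkem.keygen()

    # Server: encapsulate a fresh shared secret against the ephemeral public key.
    ciphertext, server_secret = mlkem.encaps(client_pk)

    # Client: recover the same secret, then discard the private key immediately.
    client_secret = mlkem.decaps(client_sk, ciphertext)
    del client_sk  # once gone, a future key compromise cannot expose this session

    assert client_secret == server_secret
    return client_secret
```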
2. Side-Channel Resistance
Side-channel resistance determines whether an algorithm remains secure when implemented on real hardware. Even if the math is sound, attackers can exploit variations in execution time, power consumption, or memory access patterns to extract secret keys. These attacks are well-documented in real systems. The primary defense is constant-time implementation, where execution behavior does not depend on secret values. NIST explicitly favored algorithms that can achieve this without major performance tradeoffs, as this is critical for secure deployment.
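A classic example of the problem and its fix, in Python: a plain `==` comparison on a secret value may return as soon as the first byte differs, leaking information through timing, whereas the standard library’s `hmac.compare_digest` takes time independent of the contents:

```python
import hmac

def verify_tag_leaky(expected: bytes, received: bytes) -> bool:
    # May short-circuit on the first mismatched byte: timing reveals progress.
    return expected == received

def verify_tag_constant_time(expected: bytes, received: bytes) -> bool:
    # Runs in time independent of where (or whether) the inputs differ.
    return hmac.compare_digest(expected, received)
```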
3. Multi-Key and Misuse Resistance
Real-world deployments involve large-scale systems and human error, so algorithms must remain secure under both conditions. Multi-key attack resistance ensures that security does not degrade significantly when an attacker targets many keys simultaneously, as in cloud environments.
Misuse resistance focuses on resilience against implementation mistakes such as nonce reuse, weak randomness, or incorrect state handling. Designs like stateless hashing (in SLH-DSA) and deterministic signing (in ML-DSA) reduce reliance on perfect implementation, helping maintain security even when things go wrong.
PQC Level Selection Framework
If you’re making actual deployment decisions, here’s the practical framework.
1. Match the security category to your risk horizon
If you’re replacing systems that currently operate at standard internet security levels (AES-128 with RSA-3072 or ECC P-256), targeting Category 1 or 2 maintains your existing baseline; treat that as the minimum. For infrastructure that is difficult to upgrade, such as embedded systems, HSMs, or long-lived PKI, Category 3 is the recommended default.
It provides a safety margin against future advances, especially in quantum computing. For highly sensitive data with a long confidentiality horizon (20+ years), or environments exposed to harvest-now-decrypt-later threats, Category 5 is the right choice. The performance overhead compared to Category 3 is relatively small and often justified by the higher assurance.
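That decision logic is simple enough to encode in a helper. A sketch whose thresholds mirror the guidance above (they are rules of thumb, not a NIST mandate):

```python
def recommended_category(confidentiality_years: int,
                         hard_to_upgrade: bool = False,
                         harvest_now_decrypt_later: bool = False) -> int:
    """Map a risk horizon to a NIST PQC security category (rule-of-thumb sketch)."""
    if confidentiality_years >= 20 or harvest_now_decrypt_later:
        return 5   # long-lived secrets, high assurance
    if confidentiality_years > 10 or hard_to_upgrade:
        return 3   # recommended default with a safety margin
    return 1       # maintains today's internet baseline; treat as the minimum

print(recommended_category(5))                                   # 1
print(recommended_category(8, hard_to_upgrade=True))             # 3
print(recommended_category(25, harvest_now_decrypt_later=True))  # 5
```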
2. Choose algorithms based on deployment context
For key exchange, ML-KEM is the standard choice across nearly all environments due to its balance of performance and security. For digital signatures, ML-DSA is the practical default where moderate increases in signature size are acceptable. In contrast, SLH-DSA is best suited for high-assurance scenarios, such as root certificate authorities or critical signing infrastructure, where long-term trust and conservative assumptions matter more than efficiency.
3. Design for cryptographic agility
NIST’s category system is intentionally structured as an upgrade path. As computational capabilities evolve, lower security levels may be deprecated, just as 80-bit and 112-bit security were phased out in the past. The key advantage of PQC standardization is that moving from one category to a higher one (e.g., Category 1 to 3) typically involves changing parameters within the same algorithm family, not replacing the algorithm entirely.
This allows systems built today to adapt smoothly in the future. True cryptographic agility means planning for that transition in advance, so upgrades are incremental rather than disruptive.
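In practice, agility usually means routing every algorithm choice through a single configuration point, so that a category upgrade is a one-line change. A sketch (the registry names are illustrative; the parameter-set-to-category mappings follow the table above):

```python
# Cryptographic agility sketch: one policy knob selects the parameter sets.
KEM_BY_CATEGORY = {1: "ML-KEM-512", 3: "ML-KEM-768", 5: "ML-KEM-1024"}
SIG_BY_CATEGORY = {2: "ML-DSA-44", 3: "ML-DSA-65", 5: "ML-DSA-87"}

POLICY = {"kem_category": 3, "sig_category": 3}  # upgrading = editing this line

def select_algorithms(policy: dict) -> dict:
    return {
        "kem": KEM_BY_CATEGORY[policy["kem_category"]],
        "sig": SIG_BY_CATEGORY[policy["sig_category"]],
    }

print(select_algorithms(POLICY))  # {'kem': 'ML-KEM-768', 'sig': 'ML-DSA-65'}
```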
How Can Encryption Consulting Help?
If you are wondering where and how to begin your post-quantum journey, Encryption Consulting is here to support you. You can count on us as your trusted partner, and we will guide you through every step with clarity, confidence, and real-world expertise.
We begin with a Cryptographic Discovery and Inventory, scanning your entire environment to identify certificates, keys, algorithms, and protocols across endpoints, applications, APIs, and infrastructure. This builds the baseline you need before any migration can begin.
From there, we conduct a PQC Assessment to evaluate your exposure to quantum threats, identify RSA- and ECC-dependent systems, and deliver a prioritized report of vulnerable assets with risk severity ratings.
With that clarity, we develop a PQC Strategy and Roadmap, a phased migration plan aligned to your risk appetite, regulatory requirements, and long-term security goals, including cryptographic agility so your systems can adapt as standards evolve.
We then support Vendor Evaluation and Pilot Testing, helping you select the right tools, run proof-of-concept tests, and validate interoperability before any full-scale rollout.
Finally, we manage Full Implementation, deploying hybrid classical and quantum-safe models, rolling out PQC across your PKI and infrastructure, and setting up monitoring for long-term cryptographic health.
CBOM Secure
Encryption Consulting’s CBOM Secure tool plays a key role in helping organizations prepare. Instead of dealing with spreadsheets, manual OpenSSL outputs, or scattered configuration files, our CBOM tool gives a clear view of crypto usage across environments. It shows which algorithms are in use, what needs to change for post-quantum security, and whether systems meet security goals. For organizations getting ready for board meetings, architecture choices, or compliance planning, our tool provides clarity and speed.
Our CBOM Secure is more than just a reporting tool; it also speeds up the process. It automates crypto inventories, checks TLS configurations, validates algorithms, and aligns policies, so teams can move from discovery to action without guessing. In future releases, Encryption Consulting plans to add automated fixes, cloud-native integrations, and policy enforcement to keep configurations in line with security standards at all times.
Now is a great time to get started: test PQC in a staging environment, map your current crypto usage, and begin creating internal policies. If your organization wants to pilot quantum-safe projects, give feedback, or help shape new features, we at Encryption Consulting encourage you to reach out. The earlier teams start, the easier the long-term work will be.
Conclusion
The most important insight behind NIST’s post-quantum security categories is that stability matters more than precision. Rather than chasing exact bit-security values, the framework anchors algorithms to reference problems we already understand deeply: key search against AES and collision search on SHA-256. As long as an algorithm is at least as hard to break as its reference task, the category holds.
For engineers and architects, the practical takeaway is straightforward: Category 1 preserves today’s internet security baseline, Category 3 is the recommended target for most new deployments, and Category 5 is the right choice for long-lived secrets and high-assurance systems.
NIST’s initial standardization of ML-KEM, ML-DSA, and SLH-DSA is not the end of the process. FN-DSA is still moving toward final standardization, and additional candidates from the fourth-round evaluation remain in the pipeline. As that pipeline matures and most deployments move through a hybrid classical and post-quantum transition period, the category framework provides the stable vocabulary needed to make those decisions coherently across protocols, infrastructure, and trust hierarchies.
