
Google Cloud Platform (GCP) – Introduction to Google Cloud HSM

Introduction

Google’s Cloud HSM service provides hardware-backed keys to Cloud KMS (Key Management Service). It gives customers the ability to manage and use cryptographic keys that are protected by fully managed Hardware Security Modules (HSMs). The Cloud HSM service is highly available and scales horizontally and automatically. Created keys are bound to the KMS region in which the key ring is defined. With Cloud HSM, the keys that users create and use cannot be materialized outside the cluster of HSMs belonging to the region specified at key creation time.

Using Cloud HSM, users can verifiably attest that their cryptographic keys are created and used exclusively within a hardware device. No application changes are required for existing Cloud KMS customers to use Cloud HSM. The Cloud HSM service is accessed using the same API and client libraries as the Cloud KMS software backend.
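
For example, the only difference between requesting a software-protected key and an HSM-protected key is the protection level on the key version template. The following is a minimal sketch using the google-cloud-kms Python client; the project, location, and key ring names are placeholders.

    # Minimal sketch: creating an HSM-protected key through the standard Cloud KMS API.
    # Assumes the google-cloud-kms package and an existing key ring.
    from google.cloud import kms_v1

    client = kms_v1.KeyManagementServiceClient()
    key_ring_name = client.key_ring_path("my-project", "us-east1", "my-key-ring")

    # The only change from a software-backed key is the protection level.
    crypto_key = {
        "purpose": kms_v1.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT,
        "version_template": {
            "algorithm": kms_v1.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION,
            "protection_level": kms_v1.ProtectionLevel.HSM,
        },
    }

    created = client.create_crypto_key(
        request={
            "parent": key_ring_name,
            "crypto_key_id": "hsm-backed-key",
            "crypto_key": crypto_key,
        }
    )
    print("Created key:", created.name)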

The Cloud HSM service uses HSMs that are FIPS 140-2 Level 3 validated and always run in FIPS mode. The FIPS standard specifies the cryptographic algorithms and random number generation used by the HSMs.

Provisioning and Handling of HSMs

Provisioning of HSMs is carried out in a lab equipped with numerous physical and logical safeguards, including multi-party authorization controls to help prevent single-actor compromise.
The following are Cloud HSM system-level invariants:

  1. Customer keys cannot be extracted as plaintext.
  2. Customer keys cannot be moved outside the region of origin.
  3. All configuration changes to provisioned HSMs are guarded through multiple security safeguards.
  4. Administrative operations are logged, adhering to separation of duties between Cloud HSM administrators and logging administrators.
  5. HSMs are designed to be protected from tampering, such as by the insertion of malicious hardware or software modifications, or unauthorized extraction of secrets, throughout the operational lifecycle.


Vendor-controlled firmware on Cloud HSM

HSM firmware is digitally signed by the HSM vendor. Google cannot create or update the HSM firmware. All firmware from the vendor is signed, including development firmware that is used for testing.

Google Cloud HSM Key Hierarchy

Cloud HSM wraps customer keys, and Cloud KMS keys wrap HSM keys, which are then stored in Google’s datastores.

Figure: Cloud HSM key hierarchy


Cloud HSM has a root key that controls the migration of key material inside the administrative domain of Cloud HSM.

The root key of Cloud HSM has two primary characteristics:

  1. The root key is generated on an HSM and, throughout its lifespan, never leaves the well-defined boundaries of the HSMs. Cloning between HSMs and HSM backups are allowed, but the root key stays within those boundaries.
  2. The root key is used as a wrapping key for the customer keys that HSMs use. Wrapped customer keys can be used on the HSM, but the HSM never returns an unwrapped customer key; HSMs can only use customer keys for cryptographic operations.
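
The wrap/unwrap pattern can be illustrated with standard AES key wrap (RFC 3394) from the Python cryptography package. This is only a conceptual sketch, not Cloud HSM's internal implementation: the point is that the wrapped form is safe to store outside the HSM, while only a holder of the root (wrapping) key can recover the customer key.

    # Conceptual sketch of root-key wrapping; not Cloud HSM's actual implementation.
    import os
    from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

    root_key = os.urandom(32)      # stands in for the HSM-resident root key
    customer_key = os.urandom(32)  # stands in for a customer key generated on the HSM

    wrapped = aes_key_wrap(root_key, customer_key)   # safe to store outside the HSM
    assert wrapped != customer_key

    # Only a holder of the root key (i.e. the HSM) can recover the customer key for use.
    assert aes_key_unwrap(root_key, wrapped) == customer_key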

Key storage

HSMs are not used as a permanent data storage solution for keys. HSMs only store keys while they are in use. Since HSM storage is constrained, HSM keys are encrypted and then stored in the Cloud KMS key datastore.
The Cloud KMS datastore is highly available, durable, and heavily protected. Some of its features are:

  1. Availability: Cloud KMS uses Google’s internal data storage, which is highly available, and also supports a number of Google’s critical systems.
  2. Durability: Cloud KMS uses authenticated encryption to store customer key material in the datastore. Additionally, all metadata is authenticated using a hash-based message authentication code (HMAC) to ensure it has not been altered or corrupted at rest. Every hour, a batch job scans all key material and metadata, verifies that the HMACs are valid, and checks that the key material can be decrypted successfully (a minimal sketch of this kind of check appears after this list).

    Cloud KMS uses several types of backups for the datastore:

    • By default, the datastore keeps a change history of every row for several hours. In an emergency, this lifetime can be extended to provide more time to remediate issues.
    • Every hour, the datastore records a snapshot. The snapshot can be validated and used for restoration, if needed. These snapshots are kept for four days.
    • Every day, a full backup is copied to disk and tape.
  3. Residency: Cloud KMS datastore backups reside in their associated Google Cloud region. These backups are all encrypted at-rest.
  4. Protection: At the Cloud KMS application layer, customer key material is encrypted before it is stored. Datastore engineers do not have access to plaintext customer key material. Additionally, the datastore encrypts all data it manages before writing to permanent storage. This means access to underlying storage layers, including disks or tape, would not allow access to even the encrypted Cloud KMS data without access to the datastore encryption keys. These datastore encryption keys are stored in Google’s internal KMS.
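
The durability check described in point 2 can be pictured with a few lines of Python. This is a purely illustrative sketch using only the standard library; the record layout and key are hypothetical, not Cloud KMS internals.

    import hmac
    import hashlib

    def tag(record: bytes, mac_key: bytes) -> bytes:
        """Compute the HMAC stored alongside a datastore record."""
        return hmac.new(mac_key, record, hashlib.sha256).digest()

    def verify(record: bytes, stored_tag: bytes, mac_key: bytes) -> bool:
        """Re-compute the HMAC and compare it in constant time."""
        return hmac.compare_digest(tag(record, mac_key), stored_tag)

    mac_key = b"\x00" * 32  # placeholder key
    metadata = b'{"key_id": "example", "state": "ENABLED"}'
    stored_tag = tag(metadata, mac_key)

    # Periodic scan: flag any row whose metadata no longer matches its HMAC.
    assert verify(metadata, stored_tag, mac_key)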

Conclusion

Google Cloud HSM is a cluster of FIPS 140-2 Level 3 certified Hardware Security Modules that lets customers host encryption keys and perform cryptographic operations on them. Although Cloud HSM behaves much like a traditional network-attached HSM, bringing HSMs to the cloud did require Google to make some implementation changes. Nevertheless, Cloud HSM is one of the best options on Google Cloud Platform for keeping data secure and private on tamper-resistant HSMs.

What’s trending in enterprise security?

Multi-Cloud, Hybrid Cloud Security: Options and Flexibility

Multi-cloud and hybrid cloud strategies are on the rise. The cloud is among the top three IT investment priorities for businesses, according to the newest Flexera survey. In fact, our own David Close, chief solutions architect at Futurex, wrote about how enterprises commonly use multiple clouds for diversification and to fulfill requirements and regulations in his article, Maintaining Control Over Your Security Infrastructure in a Multi-Cloud World.

“The movement toward broad acceptance of cloud-based encryption and key management will accelerate as more of the pieces come together,” adds Ryan Smith, vice president of global business development at Futurex, in his Help Net Security article outlining cryptographic trends. At Futurex, we have definitely seen organizations become more aggressive with the cloud, especially financial services organizations, which are moving toward payment processing in the cloud.

“Financial services is among the sectors looking to [the] cloud to secure workloads. Sophisticated cyberattacks pushed businesses to shape up cloud security strategies… Hybrid cloud is a popular approach as a way to balance security and cost,” echoes Katie Malone in CIO Dive.

Taken together, these observations point to several trends:

  1. The cloud will play a bigger role in financial services
  2. Increased cloud infrastructure deployments and spending across all industries
  3. Prioritization of security in the cloud
  4. Increased hybrid cloud use for cryptographic needs, such as payment processing
  5. More attention to encryption key management


The Importance of Cloud Security, Encryption Key Security

Cloud security continues to be one of the biggest issues concerning IT departments, with 96% of respondents in a recent survey, The State of Cloud Security 2020, expressing concerns. “A fundamental principle of enterprise security is robust key management and ensuring critical data is protected by well-managed encryption processes, wherever the data resides,” states Close.

It’s vital for enterprises to maintain control of their security infrastructure from end to end, a requirement that has become more complex with the advent of the cloud — and multi-cloud. Since encryption keys are what unlock data, enterprises must maintain control over the keys and have air-tight protections in place to keep them from being compromised in any way.

At the core of encryption is key management: hardware security modules (HSMs) are tasked with managing the lifecycle of encryption keys used across an organization’s entire estate of applications. Sophisticated key management solutions are essential to any cryptographic operation because encrypted information is only as secure as the encryption keys. If the keys are compromised, then so is the encrypted data. I wrote about this in detail in my recent article, Key Management with Acuity: On-Premises, Cloud, Hybrid, published in Infosecurity.

What About a Hybrid Approach?

When it comes to encryption key management and securing cryptographic infrastructures, there are several options for organizations: on-premises, cloud, or hybrid. Today, we have seen many organizations seeking a hybrid model. They like the combination of physically overseeing their own HSMs plus the accessibility and convenience of the cloud. A hybrid approach, using both on-premises HSMs and cloud HSMs, allows organizations to construct an elastic infrastructure model for scalability, backup, and failover.

In fact, Forrester’s research indicates that 74% of enterprises describe their strategy as hybrid/multi-cloud. A recent CISO Mag roundtable, Gearing for Greatness: The Future of India’s BFSI Ecosystem, gathered financial services organizations to weigh in on hybrid approaches to HSMs. Highlights of the webinar are here.

While there is no one-size-fits-all approach to securing your cryptographic infrastructure, there are increasingly more choices, especially as cloud providers give organizations greater flexibility, such as retaining control of the keys. Organizations can now shift from one cloud provider to another or embrace a multi-cloud strategy.

I think my colleague, David Close, says it best when he recommends, “Whether it’s managing workloads, handling spikes and surges, providing disaster recovery, holding data at rest, or satisfying audit requirements, having a robust key management system as part of your security infrastructure is ever-critical.”

Microsoft Azure Services – Azure Key Vault

More and more organizations are migrating to cloud service providers to gain the advantages of cloud computing, such as cost savings, security, flexibility, mobility, and sustainability. Of these, security is a critical aspect of any cloud service model, as it applies to every cloud offering that involves sensitive data.

Today we will discuss Microsoft Azure’s Key Vault service in this context.

First, consider the ordinary meaning of a vault: a treasury used to store your valuable items. Carried over to the Azure cloud, the term gives exactly the right impression, i.e., a treasury for your keys and secrets. It acts as central storage for all sensitive information, which can be stored secretly via encryption and retrieved/used based on permissions.
To elaborate further, the Microsoft Azure Key Vault service focuses on securing the following:

  1. Secret Management: The Azure Key Vault service can be used to securely store and control access to secrets, such as authentication keys, storage account keys, passwords, tokens, API keys, .pfx files, and other secrets.
  2. Key Management: The Azure Key Vault service can be used to manage the encryption keys used for data encryption.
  3. Certificate Management: The Azure Key Vault service enables you to provision, manage, and deploy SSL/TLS certificates seamlessly for use with Azure integrated services.

Security being the primary driving force of Azure Key Vault’s existence, Microsoft offers the following tiers based on key protection:

  1. Standard tier: uses software vaults for storing and managing cryptographic keys, secrets, certificates, and storage account keys. This is compliant with FIPS 140-2 Level 2 (vaults).
  2. Premium tier: uses a Managed HSM pool for storing and managing HSM-backed cryptographic keys. This is compliant with FIPS 140-2 Level 3 (managed HSM pools). A short sketch of requesting an HSM-backed key follows this list.
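
As a rough illustration of the difference between the tiers, the azure-keyvault-keys Python client lets you request an HSM-protected key simply by asking for hardware protection. The vault URL below is a placeholder, and a Premium-tier vault (or a Managed HSM) is required for the second call to succeed.

    # Hedged sketch: software-protected vs. HSM-protected keys via azure-keyvault-keys.
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.keys import KeyClient

    client = KeyClient(
        vault_url="https://my-vault.vault.azure.net",  # placeholder vault
        credential=DefaultAzureCredential(),
    )

    software_key = client.create_rsa_key("app-key-software", size=2048)
    hsm_key = client.create_rsa_key("app-key-hsm", size=2048, hardware_protected=True)

    print(software_key.name, software_key.key_type)  # RSA (software-protected)
    print(hsm_key.name, hsm_key.key_type)            # RSA-HSM (hardware-protected)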

Terminology used in Azure Key Vault:

Secret

A Secret is a small data blob (up to 10 KB in size) used in the authorization of users/applications with the help of a Key Vault. In a nutshell, Key Vault helps in mitigating the risk associated with the storage of secrets in a non-secure location.

Keys

Keys are also used in the authorization of users/applications to perform operations that invoke the cryptographic functions of the Key Vault. Unlike secrets, keys never leave the secure boundary of the Key Vault.

Key Vault Owner

An administrator who creates the Key Vault and authorizes the users/applications for various authentication specific operations.

Key Owner/Secret Owner/Vault Consumer

An administrator who owns the Key/Secret for a specific user/application and is responsible for Key/Secret creation in the Key Vault. Note that the Key Vault owner and Key/Secret owner roles might be handled by the same administrator, but that is not required.

Service Principal

An identity (user, group, or application) created for use with applications that need to access Azure resources.

Application Owner

An administrator who handles the application configuration, including authentication against Azure Active Directory and the Key Vault URIs the application uses.

Application

An application that authenticates itself to the Key Vault and uses its keys/secrets.

Access Policy

Statements that grant a service principal permissions to perform various operations on keys/secrets in the Key Vault.


Ways to access Keys and Secrets in a Key Vault:

  1. To access keys/secrets, users/applications must have a valid Azure Active Directory token representing a security principal with the appropriate permissions on the target Key Vault.
  2. Users/applications can use REST-based APIs or Windows PowerShell to retrieve secrets and keys (public keys only) from the Key Vault.

Steps to authenticate an application with the Key Vault:

  1. The application that needs authentication is registered with Azure Active Directory as a service principal.
  2. The Key Vault owner/administrator then creates a Key Vault and attaches access policies (ACLs) to the vault so that the application can access it.
  3. The application initiates a connection and authenticates itself against Azure Active Directory to obtain a token.
  4. The application then presents this token to the Key Vault to get access.
  5. The vault validates the token and grants access to the application on successful verification (a minimal sketch of this flow follows these steps).
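
A minimal sketch of this flow with the Azure SDK for Python is shown below: the registered service principal authenticates against Azure Active Directory (steps 3 and 4 are handled by the credential object), and the resulting token is presented to Key Vault on every call. All identifiers are placeholders.

    from azure.identity import ClientSecretCredential
    from azure.keyvault.secrets import SecretClient

    # Step 1: the application is registered in Azure AD; these values identify it.
    credential = ClientSecretCredential(
        tenant_id="<tenant-id>",
        client_id="<application-id>",
        client_secret="<client-secret>",
    )

    # Steps 3-5: the SDK obtains the AAD token and presents it to the vault.
    client = SecretClient(
        vault_url="https://my-vault.vault.azure.net",  # placeholder vault
        credential=credential,
    )

    client.set_secret("db-password", "S3cr3t!")       # requires a matching access policy
    print(client.get_secret("db-password").value)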


Finally, let’s discuss some of the benefits of using Azure Key Vault.

Benefits of using Azure Key Vault:

  1. Because keys saved in the vault are served via URIs, the risk of accidental exposure and of storing keys in non-secure locations is avoided.
  2. By design, even the vendor (Microsoft) cannot extract or see customer keys; hence keys are fully protected at the vendor level too.
  3. If your organization needs to meet security compliance requirements, Azure Key Vault is a good option, as the service is FIPS 140-2 Level 2 (vaults) / FIPS 140-2 Level 3 (managed HSM pools) compliant.
  4. Key usage details are logged, so the log data can be used for audit purposes in the event of a key compromise.

Conclusion

Azure Key Vault streamlines the secret, key, and certificate management process and enables you to maintain strict control over the secrets/keys that access and encrypt your data. It also expedites overall project delivery, since developers can quickly create keys for development and testing and then seamlessly migrate to production keys.

Transitioning to FIPS 140-3 – Timeline and Changes

FIPS 140 (“Federal Information Processing Standard”) is a series of security standards published by the U.S. government that specify security requirements for the evaluation of cryptographic modules. FIPS 140-3 is the newest version; this iteration of FIPS has necessary changes related to the design, implementation, and operation of a cryptographic module.

What is FIPS 140-3?

FIPS 140-3 is a standard developed by the National Institute of Standards and Technology (NIST) and Communications Security Establishment Canada (CSEC) to define the requirements to be satisfied by a cryptographic module to protect sensitive information.

FIPS 140-3 supersedes FIPS 140-2 and outlines updated federal security requirements for cryptographic modules. The new standards align with ISO/IEC 19790:2012(E) and include modifications of the Annexes that are allowed by the Cryptographic Module Validation Program (CMVP), as a validation authority.

FIPS 140-3 became effective September 22, 2019, permitting CMVP to begin accepting validation submissions under the new scheme beginning September 2020. The CMVP continues to validate cryptographic modules to Federal Information Processing Standard (FIPS) 140-2 Security Requirements for Cryptographic Modules until September 22, 2021.

Status of FIPS 140-2

FIPS 140-2 modules can remain active for 5 years after validation or until September 21, 2026, when the FIPS 140-2 validations will be moved to the historical list.  Even on the historical list, CMVP supports the purchase and use of these modules for existing systems. CMVP recommends purchasers consider all modules that appear on the Validated Modules Search Page and meet their requirements for the best selection of cryptographic modules, regardless of whether the modules are validated against FIPS 140-2 or FIPS 140-3.

Transition schedule from FIPS 140-2 to FIPS 140-3

The transition timeline is shown below:

Date | Activity
March 22, 2019 | FIPS 140-3 approved
September 22, 2019 | FIPS 140-3 effective date
 | Drafts of SP 800-140x (public comment closed 12-9-2019)
March 20, 2020 | Publication of SP 800-140x documents
May 20, 2020 | Updated CMVP Program Management Manual for FIPS 140-2
July 1, 2020 | Tester competency exam updated to include FIPS 140-3
September 21, 2020 | FIPS 140-3 Implementation Guidance; CMVP Management Manual for FIPS 140-3
September 22, 2020 | CMVP accepts FIPS 140-3 submissions
September 21, 2021 | CMVP stops accepting FIPS 140-2 submissions for new validation certificates
September 21, 2026 | Remaining FIPS 140-2 certificates are moved to the Historical List

Table: Transition schedule

FIPS 140-3 approved Cryptographic Algorithms:

A FIPS-approved algorithm is one that is either specified in a FIPS or NIST recommendation, or adopted by a FIPS or NIST recommendation (i.e., specified in an appendix or in a document referenced by the FIPS or NIST recommendation).

Block Cipher Algorithms:

Several block cipher algorithms have been specified for use by the Federal Government. The approval status of the block cipher encryption/decryption modes of operation is provided in the table below:

Algorithm | Status
Two-key TDEA encryption | Disallowed
Two-key TDEA decryption | Legacy use
Three-key TDEA encryption | Deprecated through 2023; disallowed after 2023
Three-key TDEA decryption | Legacy use
SKIPJACK encryption | Disallowed
SKIPJACK decryption | Legacy use
AES-128 encryption and decryption | Acceptable
AES-192 encryption and decryption | Acceptable
AES-256 encryption and decryption | Acceptable

Table: Approval Status of Symmetric Algorithms Used for Encryption and Decryption


Digital Signatures:

Digital signatures are used to provide assurance of origin authentication and data integrity. DSA, ECDSA and RSA are allowed, but only with certain parameters. The transition guidance gives a handy summary, shown below:

Digital Signature Process | Domain Parameters | Status
Digital signature generation | < 112 bits of security strength: DSA: (L, N) ≠ (2048, 224), (2048, 256), or (3072, 256); ECDSA: len(n) < 224; RSA: len(n) < 2048 | Disallowed
Digital signature generation | ≥ 112 bits of security strength: DSA: (L, N) = (2048, 224), (2048, 256), or (3072, 256); ECDSA or EdDSA: len(n) ≥ 224; RSA: len(n) ≥ 2048 | Acceptable
Digital signature verification | < 112 bits of security strength: DSA: (512 ≤ L < 2048) or (160 ≤ N < 224); ECDSA: 160 ≤ len(n) < 224; RSA: 1024 ≤ len(n) < 2048 | Legacy use
Digital signature verification | ≥ 112 bits of security strength: DSA: (L, N) = (2048, 224), (2048, 256), or (3072, 256); ECDSA and EdDSA: len(n) ≥ 224; RSA: len(n) ≥ 2048 | Acceptable
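
As a hedged illustration of parameter choices the table marks "Acceptable" for signature generation (RSA with len(n) ≥ 2048 and an approved hash), the snippet below uses the Python cryptography package. Note that the package itself is not a validated module; this only demonstrates the parameter selection.

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    # len(n) = 3072 >= 2048, i.e. at least 112 bits of security strength.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

    message = b"document to be signed"
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

    signature = private_key.sign(message, pss, hashes.SHA256())
    private_key.public_key().verify(signature, message, pss, hashes.SHA256())  # raises on failure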

Hash Functions:

A hash function takes a group of characters (called a key) and maps it to a value of a certain length (called a hash value or hash). The hash value is representative of the original string of characters but is normally smaller than the original.
A hash function is used to produce a condensed representation of its input, taking an input of arbitrary length and outputting a value with a predetermined length. Hash functions are used in the generation and verification of digital signatures, for key derivation, for random number generation, in the computation of message authentication codes, and for hash-only applications.
The Transition guidelines document summarizes when SHA-1, SHA-2 etc. can be used.

Hash Function | Use | Status
SHA-1 | Digital signature generation | Disallowed, except where specifically allowed by NIST protocol-specific guidance
SHA-1 | Digital signature verification | Legacy use
SHA-1 | Non-digital-signature applications | Acceptable
SHA-2 family (SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, and SHA-512/256) | All hash function applications | Acceptable
SHA-3 family (SHA3-224, SHA3-256, SHA3-384, and SHA3-512) | All hash function applications | Acceptable
TupleHash and ParallelHash | Purposes specified in SP 800-185 | Acceptable

Table: Approval Status of Hash Functions

FIPS 140-2 Vs. FIPS 140-3

Specification | FIPS 140-2 | FIPS 140-3
Cryptographic module | The FIPS 140-2 standard (issued 2001) was written with the idea that all modules were hardware modules; other module types (hybrid, software, and firmware) were added and defined later in the Implementation Guidance (IGs 1.9, 1.16, and 1.17). | FIPS 140-3 includes the hardware module, firmware module, software module, hybrid-software module, and hybrid-firmware module.
Cryptographic boundary | FIPS 140-2 IG 1.9 restricted hybrid modules to a FIPS 140-2 Level 1 validation. | There is no restriction on the level at which a hybrid module may be validated in the new standard.
Roles | The FIPS 140-2 standard (Section 4.3.1) requires that a module support both a crypto officer role and a user role; support of a maintenance role is optional. | FIPS 140-3 has the same three roles, but only the crypto officer role is required (Section 7.4.2); the user and maintenance roles are now optional.
Authentication | Level 1: no authentication requirements; Level 2: minimum role-based authentication; Level 3: identity-based authentication. | Similar to FIPS 140-2 at Security Levels 1-3; FIPS 140-3 adds Level 4, which requires multi-factor identity-based authentication.

Table: Comparison of FIPS 140-2 and FIPS 140-3

Summary:

FIPS 140-3 has now been approved and launched as the latest standard for the security evaluation of cryptographic modules. It covers a large spectrum of threats and vulnerabilities, since it defines security requirements from the initial design phase through the final operational deployment of a cryptographic module. The FIPS 140-3 requirements are primarily based on the two previously existing international standards ISO/IEC 19790:2012, “Security Requirements for Cryptographic Modules,” and ISO/IEC 24759:2017, “Test Requirements for Cryptographic Modules.”

FIPS 140-3 Timelines

Figure: FIPS 140-3 transition timeline

Sources

www.nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-131Ar2.pdf

www.csrc.nist.gov/projects/cryptographic-module-validation-program

www.csrc.nist.gov/Projects/fips-140-3-transition-effort

Data Loss Prevention in Cloud Computing – GCP’s DLP API

Organizations often have to detect, redact, and sometimes encrypt Personally Identifiable Information (PII) or other sensitive data, such as credit card numbers, to protect themselves against data exposure. If any part of the network is compromised, this acts as an additional safeguard that keeps the data redacted or encrypted. Google Cloud Platform’s Cloud Data Loss Prevention (DLP) API gives its clients a way to detect the presence of PII and other privacy-sensitive data in user-supplied, unstructured data streams, such as paragraphs of text, images, or audio recordings (which first need to be converted to text via the Speech-to-Text API).

The DLP API can classify and redact sensitive data. It supports several customizations, including regular expressions (regex), dictionaries, and other predefined detection rules. The DLP API takes text or images as input, and it also works on data already stored in Cloud Storage (GCS) and BigQuery.

Figure: Google Cloud DLP overview


The DLP API comes with language-specific SDKs, customization support, the ability to redact and to process files stored in Google Cloud Storage (GCS) and BigQuery, and support for images.

Features of DLP API

  1. The DLP API has over 120 pre-built detectors (infoType detectors), and organizations can create custom detectors for their specific use cases.
  2. After detecting sensitive data, the DLP API can redact, mask, tokenize, and transform text and images to ensure privacy (see the inspection sketch after this list).
  3. The DLP API is a managed service; GCP scales it according to the data input provided.
  4. The API’s classification results can be sent directly to BigQuery for detailed analysis, or exported to another environment.
  5. Cloud DLP handles data securely and undergoes multiple independent third-party audits to test data safety, privacy, and security.
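
The inspection workflow described in the feature list can be sketched with the google-cloud-dlp Python client as follows; the project ID, infoTypes, and sample text are illustrative only.

    from google.cloud import dlp_v2

    client = dlp_v2.DlpServiceClient()
    parent = "projects/my-project"  # placeholder project

    inspect_config = {
        "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "CREDIT_CARD_NUMBER"}],
        "min_likelihood": dlp_v2.Likelihood.POSSIBLE,
        "include_quote": True,
    }
    item = {"value": "Contact jane.doe@example.com, card 4111-1111-1111-1111"}

    response = client.inspect_content(
        request={"parent": parent, "inspect_config": inspect_config, "item": item}
    )
    for finding in response.result.findings:
        print(finding.info_type.name, finding.likelihood, finding.quote)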



DLP Proxy Architecture

One way to remove PII data is to route all queries and results through a module that parses, inspects, and logs the findings, and de-identifies those results using Cloud DLP, before returning the requested data or forwarding it to the next step. This module or service is termed a DLP Proxy.
The DLP proxy application accepts an SQL query as input, runs that query on the database, and then applies Cloud DLP to the results before returning them to the user requesting the data.

Architecture of DLP Proxy Application


Fig: The architecture of the DLP proxy application

Cloud DLP allows detailed configuration of what types of data to inspect for and how to transform the data based on these inspection findings or data structures (like field names). To simplify the creation and management of the configuration, organizations can use Cloud DLP templates. The DLP proxy application references both inspect and de-identify templates.

Cloud Audit Logs is an integrated logging service from Google Cloud Platform used in the architecture shown above. Cloud Audit Logs provides an audit trail of calls made to the DLP API. The audit log entries include information about who made the API call, which Cloud project it was run against, and details about the request, including if a template was used as part of the request. If you use the application’s configuration file to turn on auditing, Cloud Audit Logs records a summary of the inspection findings.

Cloud Key Management Service (Cloud KMS) is a cloud-hosted key management service from Google Cloud that lets you manage your cloud services’ cryptographic keys.

Cloud DLP methods for tokenization and date shifting use cryptography to generate replacement values. These cryptographic methods use a key to encrypt those values consistently to maintain referential integrity or, for reversible processes, to detokenize. You can directly provide this key to Cloud DLP when the call is made, or you can wrap it by using Cloud KMS. Wrapping your key in Cloud KMS provides another layer of access control and auditing, and is therefore the preferred method for production deployments.
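
The following is a hedged sketch of a de-identification call that uses a Cloud KMS-wrapped crypto key, the production pattern recommended above. The wrapped-key bytes and KMS key name are placeholders, and the transformation shown (deterministic tokenization with a surrogate infoType) is only one of the reversible options Cloud DLP offers.

    from google.cloud import dlp_v2

    client = dlp_v2.DlpServiceClient()
    parent = "projects/my-project"  # placeholder project

    deidentify_config = {
        "info_type_transformations": {
            "transformations": [{
                "primitive_transformation": {
                    "crypto_deterministic_config": {
                        "crypto_key": {
                            "kms_wrapped": {
                                "wrapped_key": b"<key bytes wrapped by Cloud KMS>",
                                "crypto_key_name": "projects/my-project/locations/global/keyRings/dlp/cryptoKeys/wrapping-key",
                            }
                        },
                        "surrogate_info_type": {"name": "EMAIL_TOKEN"},
                    }
                }
            }]
        }
    }

    response = client.deidentify_content(
        request={
            "parent": parent,
            "deidentify_config": deidentify_config,
            "inspect_config": {"info_types": [{"name": "EMAIL_ADDRESS"}]},
            "item": {"value": "Reach me at jane.doe@example.com"},
        }
    )
    print(response.item.value)  # email address replaced by a reversible token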

For production configurations, organizations should use the principle of least privilege when assigning permissions. The following diagram incorporates this principle.


Fig: The architecture of the DLP proxy application with least privilege

The preceding diagram shows how, in a typical production configuration, there are three personas with different roles and access to the raw data:

  1. Infrastructure admin: installs and configures the proxy and has access to the Cloud DLP proxy’s compute environment.
  2. Data analyst: accesses the client that connects to the DLP proxy.
  3. Security admin: classifies the data, creates the Cloud DLP templates, and configures Cloud KMS.

Conclusion

Google Cloud Platform’s Data Loss Prevention API provides a service that helps organizations manage sensitive data, including detecting, redacting, masking, and tokenizing it. This can help organizations comply with regulations such as the GDPR and reduce the risk of data exposure and data breaches.

To get hands-on experience on Google Cloud’s DLP API, try the website located here.

Certificate Extensions – Basic Constraints

Certificate extensions are an integral part of the certificate structure as per X.509 standard for public key certificates. This structure is expressed in a formal language called Abstract Syntax Notation One (ASN.1).

There are a number of certificate extensions in the structure of a certificate. In this article, we will discuss the Basic Constraints certificate extensions. The details corresponding to Basic Constraints can be found in the following RFC:
www.tools.ietf.org/html/rfc5280#section-4.2.1.9

The Basic Constraints certificate extension has the following ASN.1 structure as per the RFC standard:

id-ce-basicConstraints OBJECT IDENTIFIER ::=  { id-ce 19 }

   BasicConstraints ::= SEQUENCE {

        cA                      BOOLEAN DEFAULT FALSE,

        pathLenConstraint       INTEGER (0..MAX) OPTIONAL }

In the original X.500/X.509 structure it was not possible to identify the type of certificate subject/holder; later revisions of the X.509 standard improved the structure to identify the subject. The following fields can be found in the Basic Constraints structure:

  1. Subject Type
  2. Path Length

Subject type denotes the holder of the certificate and it can be of two types:

  1. CA Certificate

    The holder of this certificate is a CA

  2. End Entity

    The holder of this certificate is an End Entity like domain/organization etc.

Path Length (an optional field) denotes how many CA certificates may appear below a given CA in the chain. For example, a CA with a path length constraint of 0 cannot have any subordinate CAs beneath it; it can only issue certificates to end entities.
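
Besides the browser steps described below, Basic Constraints can be read programmatically. The following is a small sketch using the Python cryptography package; the certificate file name is hypothetical.

    from cryptography import x509

    with open("example-cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    bc = cert.extensions.get_extension_for_class(x509.BasicConstraints)
    print("CA:", bc.value.ca)                    # True for a CA certificate
    print("Path length:", bc.value.path_length)  # None means no limit (or an end entity)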

Let’s discuss Subject types as CA Certificates.

CA certificates are used for the following purposes:

  1. To sign certificates used in HTTPS and CRLs
  2. To validate/authenticate the signatures on issued certificates

Basic Constraints for a CA certificate can be found in the following way:

  1. Open any SSL/TLS-based URL in the browser
  2. Click on the padlock icon
  3. Click on “Certificate”
  4. Click on “Details”
  5. Select “Basic Constraints”


From the CA screenshot above, we can clearly see that the Subject Type is given as “CA” and there is no defined path length for the certificate. This means that the certificate holder is a CA and an unlimited number of certificates are allowed in the certificate chain under this CA.



The “certificate chain” for a certificate can be found in the following way:

  1. Open any SSL/TLS-based URL in the browser
  2. Click on the padlock icon
  3. Click on “Certificate”
  4. Click on “Certification Path”


From the certification chain in the screenshot, it is clear that there are 3 certificates in the chain. The first two certificates are CA certificates and the third is the end-entity certificate: CA certificates sit at the top of the chain, and the last certificate in the chain belongs to an end entity.

Let’s discuss Subject types as End Entity Certificates

End-entity certificates are used to authenticate end entities (users, devices, domains, etc.) to clients; examples include TLS, S/MIME, and encryption certificates. End-entity certificates can be examined in the following way:

  1. Open any SSL/TLS-based URL in the browser
  2. Click on the padlock icon
  3. Click on “Certificate”
  4. Click on “Details”


In the screenshot, we can observe that the “Subject Type” field is “End Entity” and the path length field is “None”, which implies that this certificate cannot be used to issue/sign other certificates. Also, if the Basic Constraints extension is not included in the certificate structure, the certificate is, by default, treated as an end-entity certificate.

Let’s discuss Path Length using the certificate chain diagram below.

In the certificate chain diagram, the first certificate specifies a path length constraint of 2, which means there can be a maximum of 2 CA certificates under this certificate, other than the end-entity certificate.

Next in the hierarchy, the second certificate specifies the path length constraint 1. This condition also holds true because there is only one CA certificate under this certificate.
The third certificate in the list specifies the path length constraint 0. This condition holds true as there is no CA certificate under this certificate.
The path length constraint restriction is not applicable to the final certificate, as it is an end entity certificate.

Conclusion

The Basic Constraints certificate extension is critical in preventing an end-entity certificate from issuing/signing other certificates, as doing so would violate the extension. Only certificates issued with a “Subject Type” of “CA” (within any “Path Length” constraint) are able to issue/sign other certificates.

Cloud Data Lake Security

As businesses continue to move their services onto platforms like Google Cloud Platform (GCP) and Amazon Web Services (AWS), the need to protect this data grows as well. This practice is referred to as Data Lake Protection. A Data Lake is a single place where all of an organization’s data is stored. Storing all information in one place has a multitude of advantages: data from different teams can be accessed and analyzed in one place to correlate findings and create better strategies, and IT infrastructure becomes much simpler to manage, as there is only one location where all data is stored. Processes from analysis to auditing become much more streamlined as well.


Data Lakes make the majority of processes and tasks much simpler, which is why Data Lake Protection should be the number one priority of organizations using them. The main goal when creating a Data Lake Protection plan is to limit access to data to only those who need it. This is called the principle of least privilege.

The way least privilege works is that access to one portion of the organization’s data is disabled for those who do not need it. For example, if Gary in sales wants to access the human resources records on Roger, he cannot, because the policies in place only allow him access to sales and sales-related data. One way many companies create a Data Lake is by migrating their data to the cloud.

Data Lakes on the Cloud

Cloud Service Providers (CSPs) like GCP, AWS, and Microsoft Azure provide an easy and inexpensive way of creating a Data Lake for any organization’s data. By migrating IT infrastructure, like databases, from on-premises to the cloud, a Data Lake is formed. Cloud Data Lakes are becoming more and more common on the cloud, as CSPs provide a variety of helpful tools to analyze and secure data. Encryption management can be left to the CSP, or the user can control it with Hardware Security Modules, encryption key management, and Google Cloud Functions.


Best Practices

To begin the process of protecting an organization’s Data Lake, there are best practices one should follow. These best practices are:

  1. Principle of Least Privilege: As previously noted, the principle of least privilege is the most important practice to maintain in a Data Lake. This principle ensures that data can only be accessed by those who need access to it. This stops everyone in an organization from having access to all the information in a Data Lake, such as Personally Identifiable Information (PII).
  2. Zoning: Many organizations divide their Data Lake information into different zones, to make granting access and permissions much easier. Organizations usually form four zones, called the temporal, raw, trusted, and refined zones. The temporal zone holds temporary data that does not require long-term storage. The raw zone holds data that is sensitive and unencrypted, before it has been processed and secured. The trusted zone holds data that has been deemed secure and is ready to be used in applications; anyone needing processed data, such as end users, will find it in the trusted zone of the Data Lake. The final zone, the refined zone, holds data that has been run through other applications and returned as a final output.
  3. Data Encryption: One important step in securing Data Lakes is the use of data encryption. By following compliance guidelines, such as the Federal Information Processing Standards (FIPS), the most advanced encryption algorithms can be selected for your Data Lake.
  4. SIEM Tool Use: Security Information and Event Management (SIEM) tools and software work to detect threats, ensure compliance, and manage any other security issues in an organization’s Data Lake. These tools help companies provide the highest level of Data Lake Protection possible by finding threats within the IT infrastructure before those threats can compromise data.

EC Data Lakes

A great way to begin protecting your organization’s Data Lake is by utilizing Encryption Consulting’s training sessions. At Encryption Consulting, we offer a variety of training services, including learning to use AWS’ Data Protection Service, GCP’s Key Management Services, and Microsoft Azure’s Key Vault. We can also help install and configure Hardware Security Modules to protect your data encryption keys.

Our Cloud Utility Functions, Cloud Data Protector and Bucket Protector, were created specifically with Cloud Data Lake Protection in mind. Cloud Data Protector encrypts on-premises data before it is sent to Google Cloud Platform. Bucket Protector works within Google Cloud Platform itself, to encrypt data as it is uploaded into buckets. Encryption advisory services and enterprise encryption platform implementation services are also offered.

Conclusion

As you can see, Data Lakes provide organizations with a multitude of benefits, from making processes simpler to cutting back on infrastructure costs. By creating zones, using SIEM tools and software, and following the principle of least privilege, your Data Lake will stay secure from attempted compromise. To learn more about the services Encryption Consulting can offer you, visit our website: www.encryptionconsulting.com.

Homomorphic Encryption – Basics

Organizations nowadays store data and perform computation on it in the cloud instead of handling everything themselves. Cloud Service Providers (CSPs) provide these services at an affordable cost and with low maintenance. To ensure compliance and retain privacy, organizations need to transfer the data in encrypted form, which ensures the confidentiality of the data in transit. However, once the data reaches the cloud, the CSP has to decrypt it to perform any operation or computation.

Decrypting the data at the CSP sacrifices its confidentiality, which may leave the organization non-compliant with data privacy regulations and standards such as GDPR, FIPS, PCI DSS, CCPA, etc.

What is Homomorphic Encryption?

Homomorphic Encryption makes it possible to perform computation while the data remains encrypted. This ensures the data stays confidential while it is being processed, allowing CSPs and other untrusted environments to accomplish their goals while the confidentiality of the data is retained.

Like other asymmetric schemes, homomorphic encryption encrypts data with a public key, and only the corresponding private key can decrypt it. But while the data is encrypted, operations can still be performed on it, which preserves confidentiality and helps organizations achieve compliance even when using untrusted environments.

Why do we need Homomorphic Encryption?

Data creation has increased tremendously in recent times, and data is sent to and stored in multiple environments belonging to other parties, such as CSPs or other third-party organizations. From startups to large organizations, everyone uses CSPs to store or process data, with tools such as BigQuery used for data processing.

CSPs do provide some control over the data customers store in their environments, but those controls vary by CSP. While users can encrypt data and store it with a CSP, computation on that data is limited. Thus, standard encryption is limited to data storage alone and does not enable any meaningful analysis of the data.

To be able to process data while ensuring data privacy, researchers are focusing on privacy-enabled computation. Homomorphic Encryption (HE) is one of the promising approaches in this direction.

Types of Homomorphic Encryption

Homomorphic Encryption allows computation on encrypted data without decrypting it. The mathematical operations that can be performed on the ciphertext differentiate the types of Homomorphic Encryption.
They are mainly of two types:

  1. Partial Homomorphic Encryption (PHE) (supports either addition/multiplication, but not both)
  2. Fully Homomorphic Encryption (FHE) (supports both addition and multiplication)

Partially homomorphic cryptosystems such as unpadded RSA (multiplicative) and Paillier (additive) each support a single type of operation. In 2009, Craig Gentry proposed the first FHE scheme, based on lattices. An FHE scheme supports both addition and multiplication of ciphertexts, as follows:

HE(a+b) = HE(a) + HE(b) and HE(a*b) = HE(a) * HE(b)

That is, the encryption of a sum (or product) of plaintexts corresponds to the addition (or multiplication) of the two ciphertexts.
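
A small sketch of additive homomorphism, using the third-party phe (python-paillier) package, is shown below; the API is that package's own and the values are arbitrary.

    from phe import paillier

    public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

    a, b = 17, 25
    enc_a = public_key.encrypt(a)
    enc_b = public_key.encrypt(b)

    enc_sum = enc_a + enc_b   # computed entirely on ciphertexts
    enc_scaled = enc_a * 3    # ciphertext times a plaintext scalar is also supported

    assert private_key.decrypt(enc_sum) == a + b
    assert private_key.decrypt(enc_scaled) == a * 3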


Applications

HE makes it possible to achieve privacy-preserving computation in almost any scenario in which data must be processed by an untrusted party.

Limitations and Drawbacks

Homomorphic Encryption computations are slow, and only a finite number of operations can be performed on the encrypted data. FHE-based computation is at least 10^6 times slower than computation on the plaintext.

Homomorphic Encryption is also not practical for multiple users. If multiple users need to access a database, a separate copy of the database must be created for every user, each encrypted with that user’s public key. This becomes impractical as the number of users or the size of the database grows.

Conclusion

Homomorphic Encryption in its current state is computationally expensive and practically inefficient. It can certainly be used to encrypt data while still allowing computations to be performed on it. HE enables privacy-preserving computation, which helps us work with untrusted environments while maintaining the data’s confidentiality. Check out Format Preserving Encryption if you are interested in privacy-preserving computation.

Fixing Expired SSL Certificates – Why and How?

Introduction

In today’s Internet world, the padlock icon in the browser is universally regarded as a symbol of trust. The padlock represents an active, valid SSL certificate, indicating that your website is secure and properly authenticated. Protecting your website is crucial for your organization’s reputation and for gaining customers’ trust.

SSL (Secure Sockets Layer) provides end-to-end security between the client and server by establishing a secure channel with the help of encryption. SSL exchanges cryptographic information on behalf of the client and server and forms a trust relationship between them to ensure the information exchanged is private and secure.

One of the most important aspects of the SSL certificate lifecycle is expiry. The expiry dates associated with a certificate play a critical role in providing assurance about the server’s security posture, and a valid server certificate presents the server’s unique identity to the browser.
Fixing expired certificates without prolonged delay is vital for any organization to avoid data theft or damage. Websites with missing or expired certificates are prone to attacks that can lead to serious consequences.

Why do certificates expire?

There have long been debates about why long-lasting certificates don’t exist.
The answer is very simple: security. Let me explain why and how.
A certificate enables two attributes: authentication and encryption.
The authentication attribute of a certificate validates and verifies the true identity of the end entity (i.e., the domain) by various means during the validation process. Based on the outcome of the validation process, a certificate for the end entity is issued to owner A.

Now, let’s assume the ownership of the end entity changes from owner A to owner B while the certificate issued to the end entity is still valid. The new owner B could misuse the certificate or domain in the name of owner A, as the certificate still contains information proprietary to owner A. It is therefore important for Certificate Authorities issuing trusted certificates to ensure that the information they use to authenticate domains and organizations is as up-to-date and accurate as possible; hence it is mandatory to associate an expiry date with a certificate.

Significance of expired SSL certificates

As mentioned earlier, every SSL certificate has a validity period associated with it. Once this period is over, the SSL certificate becomes invalid and the browser starts displaying a warning message on the webpage.
In general, the validity period of SSL certificates is 3 years or less. During this period, the certificate signifies that the information contained therein is accurate and up-to-date. This also manifests trustworthiness, legitimate ownership of the domain, security, and privacy on the platform.

It is important for your organization to monitor certificates regularly and renew them before they expire. In practice, Certificate Authority vendors send out notifications at regular intervals for certificates due to expire in the near future; otherwise, an expired certificate might result in an outage for business users and mission-critical applications. In addition, there are vendors that provide certificate lifecycle management solutions through their proprietary software; these solutions automate the overall certificate lifecycle management process, including renewal of the certificates. A simple expiry check is sketched below.
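
The check below is a minimal monitoring sketch in Python using only the standard library: it fetches a server's certificate over TLS and reports the days remaining until expiry. The host name is a placeholder.

    import socket
    import ssl
    import time

    def days_until_expiry(host: str, port: int = 443) -> int:
        """Return the number of days before the server certificate expires."""
        context = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        expires = ssl.cert_time_to_seconds(cert["notAfter"])
        return int((expires - time.time()) // 86400)

    print(days_until_expiry("www.example.com"))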


Fixing expired SSL certificate

Organizations should always be alerted before a certificate expires; however, that is not always the case. The following steps describe how to renew an expired certificate:

  1. Generating a New CSR (Certificate Signing Request)

    This can be generated on the platform of your SSL service provider or by contacting your SSL service provider.

  2. Selecting the appropriate SSL Certificate

    You need to select the appropriate SSL certificate as per your requirements. There are various certificates that carry different validation levels.

  3. Domain Validation

    Domain validation is needed in order to prove ownership of the domain by your organization. In general, there are three methods for domain validation:

    • Email validation
    • HTTP validation
    • DNS-based validation
  4. Installing the SSL Certificate

    Once the domain validation is completed, a new certificate is issued for your domain. Once the new certificate is received via email, you can go ahead and install the certificate on the server or appliance.

Note: In general, the SSL certificate can be renewed before the certificate is expired. If the certificate is already expired, then you might need to raise a new request to issue a new certificate.

Implications of an expired SSL certificate

When using an expired SSL certificate, there is a continuous risk to the encryption and mutual authentication of the website. Websites with expired certificates are prone to attacks by hackers; unsecured websites can be hacked and critical information may be leaked.
Browsers show a warning message for websites with expired certificates. This might result in the loss of business for your organization, as some prospective customers might choose not to initiate business communication with a site that is not secure.
In the Internet age, a secure online presence opens up many business opportunities with prospective customers; however, this requires your SSL certificates to be up-to-date to maintain that trust relationship.

Conclusion

Keeping SSL certificates active is crucial in maintaining authenticity and trustworthiness of your website. In addition to safeguarding the information, SSL certificates help to establish positive customer impacts. Understanding certificate expiration and why to fix expired certificates is important in enhancing the reputation of your brand and business.

Cloud Security Compliance Standards – PCI DSS and GDPR

Introduction

Customers and the Cloud Service Provider (CSP) share the responsibility for security and compliance. The organization therefore has the freedom to architect its security and compliance controls according to the services it uses from the CSP and the outcomes it intends to achieve. The CSP has the responsibility to provide its services securely and to provide physical security of the cloud.

If a customer opts for Software-as-a-Service, the CSP provides standard compliance, but the organization still has to check whether this meets the regulations and compliance levels it is striving to achieve. Not all cloud services (such as the different forms of databases) are created equal. Policies and procedures covering all security requirements and operational responsibilities should be agreed between the CSP and the client.

Let’s dive into particular compliance and regulations maintained within the industry.

PCI DSS on Cloud

Payment Card Industry Data Security Standard (PCI DSS) is a set of security standards formed in 2004 to secure credit and debit card transactions against data theft and fraud. PCI DSS compliance is a requirement for any business that handles payment card data.

If payment card data is stored, processed, or transmitted in a cloud environment, PCI DSS applies to that environment, and compliance will involve validation of both the CSP’s infrastructure and the client’s usage of that environment.

PCI DSS Requirement | Responsibility (IaaS) | Responsibility (PaaS) | Responsibility (SaaS)
Install and maintain a firewall configuration to protect cardholder data | Client and CSP | Client and CSP | CSP
Do not use vendor-supplied defaults for system passwords and other security parameters | Client and CSP | Client and CSP | CSP
Protect stored cardholder data | Client and CSP | Client and CSP | CSP
Encrypt transmission of cardholder data across open, public networks | Client | Client and CSP | CSP
Use and regularly update anti-virus software or programs | Client | Client and CSP | CSP
Develop and maintain secure systems and applications | Client and CSP | Client and CSP | Client and CSP
Restrict access to cardholder data by business need to know | Client and CSP | Client and CSP | Client and CSP
Assign a unique ID to each person with computer access | Client and CSP | Client and CSP | Client and CSP
Restrict physical access to cardholder data | CSP | CSP | CSP
Track and monitor all access to network resources and cardholder data | Client and CSP | Client and CSP | CSP
Regularly test security systems and processes | Client and CSP | Client and CSP | CSP
Maintain a policy that addresses information security for all personnel | Client and CSP | Client and CSP | Client and CSP

Table: Responsibility assignment for management of PCI DSS controls


GDPR

General Data Protection Regulation (GDPR) is the core of Europe’s digital privacy legislation. “The digital future of Europe can only be built on trust. With solid common standards for data protection, people can be sure they are in control of their personal information,” said Andrus Ansip, vice-president for the Digital Single Market, speaking when the reforms were agreed in December 2015.

GDPR applies to all companies that collect and process EU residents’ data. Non-EU companies need to appoint a GDPR representative and are held liable for the same fines and sanctions. The critical requirements of GDPR are:

  1. Lawful, fair, and transparent processing

  2. Limitation of purpose, data, and storage

    Collect only necessary information and discard any personal information after processing is complete

  3. Data subject rights

    A customer can ask what data an organization has on them and the intended use of the data.

  4. Consent

    Organizations must ask for the consent of the customer if personal data is processed beyond legitimate purposes. The customer can also remove consent anytime they wish.

  5. Personal data breaches

    Based on the severity and on regulatory requirements, the affected customers must be informed within 72 hours of identifying the breach.

  6. Privacy by Design

    Organizations should incorporate organizational and technical mechanisms to protect personal data in the design of new systems and processes

  7. Data Protection Impact Assessment

    Data Protection Impact Assessment should be conducted when initiating a new project, change, or product.

  8. Data transfers

    Organizations have to ensure personal data is protected and GDPR requirements are respected, even when processing is carried out by a third party.

  9. Data Protection Officer

    When there is significant personal data processing in an organization, the organization should assign a Data Protection Officer.

  10. Awareness and training

    Organizations must create awareness among employees about crucial GDPR requirements

To achieve GDPR compliance in the cloud, organizations need to take these additional steps:

  • Organizations should know the location where the data is stored and processed by CSP
  • Organizations should know which CSP and cloud apps meet their security standards. Organizations should take adequate security measures to protect personal data from loss, alteration, and unauthorized processing.
  • Organizations should have a data processing agreement with CSP and cloud apps they shall be using.
  • Organizations should collect only the data they need and should not process personal data any further.
  • Organizations should ensure that data processing agreement is respected, and personal data is not used for other purposes by CSP or cloud apps.
  • Organizations should be able to erase data at will from all data sources in CSP.

Conclusion

Regulations and compliance requirements depend on the country an organization operates in. It is essential to research the CSP and the regulations and compliance frameworks they follow. You can find more information about the CSPs on their respective websites.

If an organization fails to abide by the set of regulations applicable in the country or region where it operates, it may face fines and may lose the ability to operate in that country.