Read time: 20 minutes

Deploying Active Directory Certificate Services (AD CS) is a straightforward way for enterprises to build their PKI infrastructure. But it does have its shortcomings, such as:

  • Difficulty deploying across multiple regions
  • High latency on CDP and AIA points

To overcome this, organizations need to deploy region-specific PKI infrastructure, which can be harder to maintain and introduces complexity to the whole infrastructure.

But using Azure, organizations can deploy a PKI infrastructure that can be operated worldwide with low latency and high availability.

In this article, we will show you how to build your own PKI architecture on Azure.

If this is your first time deploying a PKI, I recommend following the ADCS Two Tier PKI Hierarchy Deployment guide first, as it is a more straightforward approach and also covers the basics.

Prerequisites

  • An Azure account where we will create Virtual Machines and blob storage
  • A custom domain name
  • An offline Windows Server VM, which will be our Root CA

Preparing CDP and AIA points

We will create blob storage that will act as the CDP/AIA points for our PKI infrastructure. We will also associate it with our custom domain so that the domain redirects to our blob.

Creating Azure Blob Storage

  • First, we need to log into our Azure account and navigate to Storage Accounts
  • We will be creating a new storage account, so click Create in the top left corner.
  • Provide the necessary details on the Basics tab. For Redundancy, I recommend at least Zone-redundant storage (ZRS)
  • On the Advanced tab, leave everything at the defaults and click Next

On the Networking tab, it is recommended to allow public access from selected virtual networks and IP addresses, and to select the virtual network where all the virtual machines will be deployed. If no virtual network exists, create one.

  • On the Data Protection tab, click Next.
  • On the Encryption tab, leave everything default and click Next.
  • Provide relevant tags and click Next.
  • On the Review tab, verify that everything looks good and click Create.
  • This will create the blob storage. Next, we will associate this blob storage with our custom domain and ensure it is accessible via HTTP.

Mapping a custom domain to Azure Blob Storage

For this step, you need a custom domain. Log in to your domain registrar and navigate to the DNS settings.

  • In DNS settings, navigate to DNS records and create a CNAME record.
  • Now we need to retrieve the hostname for your storage account. Navigate to Settings > Endpoints on the left pane and copy the URL under Static website. It should be something like
    https://pkitest.z13.web.core.windows.net/ (https://<storage account name>.z13.web.core.windows.net/)

    Remove the https:// and the trailing /. The result would look like
    pkitest.z13.web.core.windows.net, which is our hostname
  • Back in the DNS settings, for the host of the CNAME record, provide pkitest, and for its value, provide the hostname of the storage endpoint
  • Click to create the record
  • Navigate to Azure Storage account, click on Networking under Security + Networking and select Custom Domain on the tab above.
  • Provide the subdomain you created.
  • Click Save. After successful validation, you will get a validation notification
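Before moving on, you can confirm from any machine that the CNAME record has propagated. A minimal PowerShell check, assuming the names used in this walkthrough (substitute your own subdomain and storage endpoint):

# NameHost should show the storage endpoint, e.g. pkitest.z13.web.core.windows.net
Resolve-DnsName -Name "pkitest.encryptionconsulting.com" -Type CNAME |
    Select-Object Name, Type, NameHost

If the NameHost column shows your storage endpoint, the mapping is in place.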

Disabling secure transfer required

Because this blob will serve as a CDP/AIA point, we need HTTP access to it, which is why we must turn off secure transfer. If it remains enabled, HTTP access is not possible, and our PKI cannot use this blob as a CDP/AIA point.

  • Navigate to Configuration under Settings
  • Set Secure Transfer Required to Disabled
  • Click Save
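If you prefer to script this step, the Az PowerShell module exposes the same setting. A minimal sketch, assuming you are already signed in with Connect-AzAccount; the resource group name below is a placeholder, not from this walkthrough:

# Requires the Az.Storage module; "PKI-RG" is an assumed resource group name.
Set-AzStorageAccount -ResourceGroupName "PKI-RG" -Name "pkitest" -EnableHttpsTrafficOnly $false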

Testing Accessibility of Storage Account

This section will ensure our storage account is accessible via a custom domain.

  • First, we will create a container and upload a file to it
  • Navigate to Containers under Data Storage
  • On the top left corner, click + Container
  • Provide the name, set the public access level to Blob, and click Create
  • The container will be created
  • Click on its name and navigate inside it
  • On the top left corner, click Upload
  • Select any file for testing (preferably a PDF or TXT file)
  • Click Upload, and once uploaded, it should be available in the container
  • Now, we will try to access the file using a custom domain. The URL should be

http://<subdomain.customdomain>/<mycontainer>/<myblob>

So for us, the domain should be

http://pkitest.encryptionconsulting.com/pkitest/TestFile.pdf

Ensure the file opens over HTTP and that it displays or downloads the file.

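You can also script this check from PowerShell; a quick sketch using the URL from this walkthrough:

# A 200 status over plain HTTP confirms the custom domain and public access work.
Invoke-WebRequest -Uri "http://pkitest.encryptionconsulting.com/pkitest/TestFile.pdf" -Method Head |
    Select-Object StatusCode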

This concludes our section on preparing the CDP and AIA points. Next, we will begin creating our PKI. You may now delete the test file from the container, as it should only contain the certificates and CRLs.

Creating Domain Controller

This Step-by-Step guide uses an Active Directory Domain Services (AD DS) forest named encon.com. DC01 functions as the domain controller.

  • First, we will deploy a VM on Azure. While deploying, ensure:
    • The VMs are deployed on the same Virtual Network
    • If deployed in the same region, the subnet is the same
    • The public IP address is static
  • Once the VM is created, navigate to Networking under Settings and click on the Network Interface
  • Navigate to IP Configuration under Settings
  • Click on ipconfig1 in the menu and change the private IP assignment from Dynamic to Static
  • Click Save and go back to the VM

Provide other parameters as per your requirement and create the VM.

Configuring Network

Once the VM is created, log in and follow the steps below

  • Log in to DC01 as a local user
  • Click Start, type ncpa.cpl, and press ENTER
  • Click on Ethernet, and then click Properties under Activity
  • Double-click on Internet Protocol Version 4 (IPv4)
  • Only change the DNS server address: provide the private IPv4 address of DC01

For Alternate DNS, provide 8.8.8.8 or any other public DNS service you want.

  • Click OK and restart the VM from the portal
  • Once restarted, log in to DC01 as a local user
  • Click Start, type sysdm.cpl, and press ENTER
  • Change the PC name to DC01, and click Restart Now when prompted.
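The same network configuration can be scripted instead of using ncpa.cpl and sysdm.cpl. A sketch, assuming the interface alias is Ethernet and DC01's private IP is 10.0.0.4 (both are assumptions; check yours with Get-NetIPAddress):

# Point DNS at DC01's own private IP, with a public resolver as alternate.
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses ("10.0.0.4", "8.8.8.8")
# Rename the machine and reboot in one step.
Rename-Computer -NewName "DC01" -Restart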

Installing Active Directory Domain Services and Adding a new Forest

  • Open Server Manager. To do so, you can click the Server Manager icon in the toolbar or click Start, then click Server Manager.
  • Click Manage, and then click Add Roles and Features
  • Before you Begin, click Next
  • On Installation Type, click Next
  • On Server Selection, click Next
  • On Server Roles, choose Active Directory Domain Services, click Add Features, and then click Next
  • On Features, click Next
  • On AD DS, click Next
  • On Confirmation, click Install.
  • After installation, either:
    • Click Promote this server to a domain controller in the Add Roles and Features Wizard, or
    • Click Promote this server to a domain controller under Post-deployment Configuration in Notifications
  • On Deployment Configuration, choose to Add a new forest and provide the root domain name (“encon.com”)
  • On Domain Controller options, provide Directory Services Restore Mode password and click Next
  • Under DNS options, click Next
  • Under Additional options, click Next
  • Under Paths, click Next
  • Under Review options, click Next
  • Under Prerequisites check, click Install
  • Once installation completes, the remote connection will be terminated.
  • Log in to DC01 as encon\<username>
  • DC01 is now ready
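For reference, the same role installation and forest creation can be scripted; a minimal sketch of the steps above:

# Install the AD DS role, then promote this server to the first DC of a new forest.
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
# Prompts for the Directory Services Restore Mode password and reboots when finished.
Install-ADDSForest -DomainName "encon.com" -InstallDns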

Creating Offline Root CA

The standalone offline root CA should not be installed in the domain; it should not be connected to a network at all.

We will be creating this Root CA on-premises. I will create this on Proxmox, but you can use VMware or VirtualBox for this installation.

After installing Windows Server 2019, follow the steps below

  • Log onto CA01 as CA01\Administrator.
  • Click Start, click Run, and then type notepad C:\Windows\CAPolicy.inf and press ENTER.
  • When prompted to create a new file, click Yes.
  • Type in the following as contents of the file.

[Version]
Signature="$Windows NT$"

[Certsrv_Server]
RenewalKeyLength=2048 ; 4096 is recommended
RenewalValidityPeriod=Years
RenewalValidityPeriodUnits=20
AlternateSignatureAlgorithm=0

  • Click File and then Save to save the CAPolicy.inf file in the C:\Windows directory. Close Notepad

Installing Offline Root CA

  • Log onto CA01 as CA01\Administrator.
  • Click Start, and then click Server Manager.
  • Click Manage, and then click Add Roles and Features
  • On the Before You Begin page, click Next.
  • On the Select Server Roles page, select Active Directory Certificate Services, and then click Next.
  • On the Introduction to Active Directory Certificate Services page, click Next.
  • On the Select Role Services page, ensure that Certification Authority is selected, and then click Next.
  • On the Specify Setup Type page, ensure that Standalone is selected, and then click Next.
  • On the Specify CA Type page, ensure that Root CA is selected, and then click Next.
  • On the Set Up Private Key page, ensure that Create a new private key is selected, and then click Next.
  • Leave the defaults on the Configure Cryptography for CA page, and click Next.
  • On the Configure CA Name page, under Common name for this CA, clear the existing entry and type Encon Root CA. Click Next.
  • On the Set Validity Period page, under Select validity period for the certificate generated for this CA, clear the existing entry and type 20. Leave the selection box set to Years. Click Next.
  • Keep the default settings on the Configure Certificate Database page, and click Next.
  • Review the settings on the Confirm Installation Selections page and then click Install.
  • Review the information on the Installation Results page to verify that the installation is successful, and click Close.  
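The wizard steps above can also be expressed with the ADCSDeployment PowerShell module; a minimal sketch mirroring the choices made in this lab:

# Install the Certification Authority role service.
Install-WindowsFeature ADCS-Cert-Authority -IncludeManagementTools
# Configure a standalone root CA named "Encon Root CA" with a 20-year certificate.
Install-AdcsCertificationAuthority -CAType StandaloneRootCA `
    -CACommonName "Encon Root CA" `
    -ValidityPeriod Years -ValidityPeriodUnits 20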

Post Installation Configuration on Root CA

  • Ensure that you are logged on to CA01 as CA01\Administrator.
  • Open a command prompt. To do so, you can click Start, click Run, type cmd and then click OK.

To define the Active Directory Configuration Partition Distinguished Name, run the following command from an administrative command prompt:

Certutil -setreg CA\DSConfigDN "CN=Configuration,DC=encon,DC=com"

  • To define the CRL Period Units and CRL Period, run the following commands from an administrative command prompt:
    • Certutil -setreg CA\CRLPeriodUnits 52
    • Certutil -setreg CA\CRLPeriod "Weeks"
    • Certutil -setreg CA\CRLDeltaPeriodUnits 0

  • To define the CRL Overlap Period Units and CRL Overlap Period, run the following commands from an administrative command prompt:
    • Certutil -setreg CA\CRLOverlapPeriodUnits 12
    • Certutil -setreg CA\CRLOverlapPeriod "Hours"
  • To define the Validity Period Units for all certificates issued by this CA, run the following commands from an administrative command prompt. In this lab, the Enterprise Issuing CA should receive a 20-year lifetime for its CA certificate:
    • Certutil -setreg CA\ValidityPeriodUnits 20
    • Certutil -setreg CA\ValidityPeriod "Years"

Configuration of CDP and AIA points

There are multiple methods for configuring the Authority Information Access (AIA) and certificate revocation list distribution point (CDP) locations. The AIA points to the public key for the certification authority (CA). The CDP is where the certificate revocation list is maintained, which allows client computers to determine whether a certificate has been revoked. You can use the user interface (in the Properties of the CA object), certutil, or direct registry edits to configure them. This lab will use three locations for the AIA and three for the CDP.

Configuring AIA points

A certutil command is a quick and common method for configuring the AIA. The certutil command to set the AIA modifies the registry, so ensure that you run the command from a command prompt run as administrator. When you run the following certutil command, you will be configuring a static file system location, an HTTP location, and a Lightweight Directory Access Protocol (LDAP) location for the AIA. Run the following command:

certutil -setreg CA\CACertPublicationURLs "1:C:\Windows\system32\CertSrv\CertEnroll\%1_%3%4.crt\n2:http://pkitest.encryptionconsulting.com/pkitest/%1_%3%4.crt\n2:ldap:///CN=%7,CN=AIA,CN=Public Key Services,CN=Services,%6%11"

Note: You need to modify the http address on the AIA location. For this scenario, our http container address was http://pkitest.encryptionconsulting.com/pkitest/, which can vary for you.


Configuring the CDP Points

The certutil command to set the CDP modifies the registry, so ensure that you run the command from a command prompt run as administrator. Run the following command:

certutil -setreg CA\CRLPublicationURLs "1:C:\Windows\system32\CertSrv\CertEnroll\%3%8%9.crl\n2:http://pkitest.encryptionconsulting.com/pkitest/%3%8%9.crl\n10:ldap:///CN=%7%8,CN=%2,CN=CDP,CN=Public Key Services,CN=Services,%6%10"

Note: You need to modify the http address on the CDP location. For this scenario, our http container address was http://pkitest.encryptionconsulting.com/pkitest/, which can vary for you.

At an administrative command prompt, run the following commands to restart Active Directory Certificate Services and publish the CRL

net stop certsvc && net start certsvc

certutil -crl

Creating Issuing CA

Enterprise CAs must be joined to the domain. Before you install the Enterprise Issuing CA (CA02), you must first join the server to the domain. Then you can install the Certification Authority role service on the server.

  • First, we will deploy a VM on Azure. While deploying, ensure:
    • The VMs are deployed on the same Virtual Network
    • If deployed in the same region, the subnet is the same
    • The public IP address is static
  • Once the VM is created, navigate to Networking under Settings and click on the Network Interface
  • Navigate to IP Configuration under Settings
  • Click on ipconfig1 in the menu and change the private IP assignment from Dynamic to Static
  • Click Save and go back to the VM
  • Provide other parameters as per your requirement and create the VM.

Configuring Network

  • Log in to CA02 as a local user
  • Click Start, type ncpa.cpl, and press ENTER
  • Click on Ethernet, and then click Properties under Activity
  • Double-click on Internet Protocol Version 4 (IPv4)
  • Only change the DNS server address: provide the private IPv4 of DC01 (if both are in the same region) or the public IP address of DC01 (if they are in different regions)
  • Click OK and restart the VM from the Portal
  • Once restarted, log in to CA02 as a local user
  • Click Start, type sysdm.cpl, and press ENTER
  • Change the PC name to CA02 and provide the domain name (encon.com) under Domain. Provide domain credentials and wait until you get a success message
  • Click Restart Now when prompted.

Creating CAPolicy in Issuing CA

  • Log onto CA02 as an Encon user.
  • Click Start, click Run, and then type notepad C:\Windows\CAPolicy.inf and press ENTER.
  • When prompted to create a new file, click Yes.
  • Type in the following as contents of the file.

[Version]
Signature="$Windows NT$"
[PolicyStatementExtension]
Policies=InternalPolicy
[InternalPolicy]
OID= 1.2.3.4.1455.67.89.5
URL= http://pkitest.encryptionconsulting.com/pkitest/cps.txt
[Certsrv_Server]
RenewalKeyLength=2048
RenewalValidityPeriod=Years
RenewalValidityPeriodUnits=10
LoadDefaultTemplates=0
AlternateSignatureAlgorithm=0

Click File and then Save to save the CAPolicy.inf file in the C:\Windows directory. Close Notepad.

Publishing Root CA Certificates and CRLs in CA02

  • Log into CA01 as a local administrator
  • Navigate to C:\Windows\System32\CertSrv\CertEnroll
  • Copy the CRLs and certificates present
  • Paste the files into the C drive on CA02
Note: If you are using RDP, you can copy and paste directly

On CA02, to publish Encon Root CA Certificate and CRL in Active Directory, run the following commands at an administrative command prompt.

certutil -f -dspublish “C:\CA01_Encon Root CA.crt” RootCA
certutil -f -dspublish “C:\Encon Root CA.crl” CA01

To add the Encon Root CA certificate and CRL to the CA02.encon.com local store, run the following commands from an administrative command prompt.

certutil -addstore -f root “C:\CA01_Encon Root CA.crt”
certutil -addstore -f root “C:\Encon Root CA.crl”

Installing Issuing CA

Ensure you are logged in to CA02 as an Encon user

  • Click Start, and then click Server Manager.
  • Click Manage, and then click Add Roles and Features
  • Click Next on Before you Begin
  • On Installation Type, click Next
  • On Server Selection, click Next
  • On Server Roles, choose Active Directory Certificate Services, click on Add Features when prompted and click Next
  • On Features, click Next
  • On AD CS, click Next.
  • On Role Services, choose Certification Authority Web Enrollment, click Add Features when prompted, and click Next
  • On Web Server Role (IIS) and Role Services, click Next
  • On Confirmation, click Install

Configuration of Issuing CA

  • After installation, either:
    • Click Configure Active Directory Certificate Services on the destination server in the Add Roles and Features Wizard, or
    • Click Configure Active Directory Certificate Services in the Notification Center
  • On Credentials, click Next
  • Under Role Services, choose both Certification Authority and Certification Authority Web Enrollment
  • On Setup type, ensure Enterprise CA is chosen and click Next
  • On CA Type, choose Subordinate CA, and click Next
  • On Private Key, choose to Create a new private key
  • On Cryptography, leave defaults and click Next
  • On CA Name, provide the Common Name Encon Issuing CA and leave everything else at the default values.
  • On Certificate Request, ensure Save a certificate request to file is selected and click Next
  • On Certificate Database, click Next
  • On Confirmation after reviewing, Click Configure
  • Issuing CA should now be configured. Click Close.
  • After the Issuing CA is configured, a certificate request file will appear on the C drive. Copy this file to the C drive on the Root CA.
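For completeness, the role installation and CA configuration above can also be scripted; a sketch using the ADCSDeployment module, with the request file path matching this lab:

# Install the CA and Web Enrollment role services.
Install-WindowsFeature ADCS-Cert-Authority, ADCS-Web-Enrollment -IncludeManagementTools
# Configure an enterprise subordinate CA; the request file is what we copy to the Root CA.
Install-AdcsCertificationAuthority -CAType EnterpriseSubordinateCA `
    -CACommonName "Encon Issuing CA" `
    -OutputCertRequestFile "C:\CA02.encon.com_encon-CA02-CA.req"
Install-AdcsWebEnrollment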

Issue Encon Issuing CA Certificate

  • Copy Issuing CA req file to Root CA C drive
  • Open Command Prompt
  • Run the command
    certreq -submit “C:\CA02.encon.com_encon-CA02-CA.req”
  • Select Root CA from the Certification Authority List
  • Once a request is submitted, you will get a RequestID
  • Open Certificate Authority from Tools in Server Manager
  • Navigate to Pending Requests
  • Right Click on the RequestID that you got while submitting the request, click All Tasks, and click Issue
  • Once issued, navigate to the command prompt again and run
    certreq -retrieve 2 "C:\CA02.encon.com_Encon Issuing CA.crt"
    (replace 2 with the RequestID you received earlier)
  • Select Root CA from the Certification Authority List
  • Once retrieved, the successful message is displayed
  • Copy the issued certificate from Root CA to CA02
  • Login to CA02 as an Encon user and copy the certificate to the C drive
  • Open Certificate Authority from Tools in Server Manager
  • Right-click on Encon Issuing CA, click on All Tasks, and click Install CA Certificate

  • In the file dialog, navigate to the C drive and select All Files beside File name so the copied certificate is visible

  • Select the issued certificate and click Open
  • Right-click on Encon Issuing CA, click on All Tasks, and click Start Service

Post Installation Configuration on Issuing CA

Ensure that you are logged on to CA02 as Encon User

  • Open a command prompt. To do so, you can click Start, click Run, type cmd and then click OK.
  • To define the CRL Period Units and CRL Period, run the following commands from an administrative command prompt:
    • Certutil -setreg CA\CRLPeriodUnits 1
    • Certutil -setreg CA\CRLPeriod "Weeks"
    • Certutil -setreg CA\CRLDeltaPeriodUnits 1
    • Certutil -setreg CA\CRLDeltaPeriod "Days"
  • To define the CRL Overlap Period Units and CRL Overlap Period, run the following commands from an administrative command prompt:
    • Certutil -setreg CA\CRLOverlapPeriodUnits 12
    • Certutil -setreg CA\CRLOverlapPeriod "Hours"
  • To define the Validity Period Units for all certificates issued by this CA, run the following commands from an administrative command prompt. In this lab, certificates issued by the Enterprise Issuing CA will have a maximum validity of 5 years:
    • Certutil -setreg CA\ValidityPeriodUnits 5
    • Certutil -setreg CA\ValidityPeriod "Years"

Configuration of CDP and AIA points

There are multiple methods for configuring the Authority Information Access (AIA) and certificate revocation list distribution point (CDP) locations. The AIA points to the public key for the certification authority (CA). The CDP is where the certificate revocation list is maintained, which allows client computers to determine whether a certificate has been revoked. You can use the user interface (in the Properties of the CA object), certutil, or direct registry edits to configure them. This lab will use three locations for the AIA and three for the CDP.

Configuring AIA points

A certutil command is a quick and common method for configuring the AIA. The certutil command to set the AIA modifies the registry, so ensure that you run the command from a command prompt run as administrator. When you run the following certutil command, you will be configuring a static file system location, an HTTP location, and a Lightweight Directory Access Protocol (LDAP) location for the AIA. Run the following command:

certutil -setreg CA\CACertPublicationURLs "1:C:\Windows\system32\CertSrv\CertEnroll\%1_%3%4.crt\n2:http://pkitest.encryptionconsulting.com/pkitest/%1_%3%4.crt\n2:ldap:///CN=%7,CN=AIA,CN=Public Key Services,CN=Services,%6%11"

Note: You need to modify the http address on the AIA location. For this scenario, our http container address was http://pkitest.encryptionconsulting.com/pkitest/, which can vary for you.


Configuring the CDP Points

The certutil command to set the CDP modifies the registry, so ensure that you run the command from a command prompt run as administrator. Run the following command:

certutil -setreg CA\CRLPublicationURLs "65:C:\Windows\system32\CertSrv\CertEnroll\%3%8%9.crl\n2:http://pkitest.encryptionconsulting.com/pkitest/CertEnroll/%3%8%9.crl\n79:ldap:///CN=%7%8,CN=%2,CN=CDP,CN=Public Key Services,CN=Services,%6%10"

Note: You need to modify the http address on the CDP location. For this scenario, our http container address was http://pkitest.encryptionconsulting.com/pkitest/, which can vary for you. Also, as per the CDP point, the CertEnroll folder will exist inside the pkitest container in Azure Blob. This is because the folder will be recursively copied from the CertSrv folder to the blob storage

At an administrative command prompt, run the following commands to restart Active Directory Certificate Services and publish the CRL

net stop certsvc && net start certsvc

certutil -crl

Uploading Certificates and CRLs to the Blob storage

Per our CDP and AIA points, the certificates should be available in the blob storage in Azure. If we run PKIView.msc on the Issuing CA now, we will run into errors where the certificates or CRLs are not found.

To resolve this, we need to upload

  • Root CA certificates
  • Root CA CRL
  • Issuing CA Certificates

Issuing CA CRLs will be uploaded using a script we will run next.

To upload the files, copy them from their respective machines and keep them handy on your host machine. You can find these files at C:\Windows\System32\certsrv\CertEnroll on both Root CA and issuing CA.

Note: Do not copy the CRLs of Issuing CA.

Once the files are copied, follow the steps below:

  • Navigate to the storage account and click on the pkitest account you created
  • Click on Containers under Data Storage
  • Click on the pkitest container
  • Click Upload on the top left
  • Click the browse icon, select all the files that need to be uploaded, and click Open
  • Check Overwrite if files already exist, and then click Upload
  • After uploading, all the files should be available
  • Once the files are uploaded, navigate to CA02 and open PKIView.msc again. The CDP points of the Root and Issuing CA should now be available, but the AIA point will still show an error, as we haven't copied those files to the pkitest folder yet
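As an alternative to the portal upload above, the Az.Storage module can push the same files; a sketch assuming the files were staged under C:\CertFiles (a placeholder path) and a placeholder resource group name:

# Upload every staged certificate/CRL to the pkitest container.
$ctx = (Get-AzStorageAccount -ResourceGroupName "PKI-RG" -Name "pkitest").Context
Get-ChildItem "C:\CertFiles\*" | ForEach-Object {
    Set-AzStorageBlobContent -File $_.FullName -Container "pkitest" -Context $ctx -Force
}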

Script to copy Issuing CA CRLs

Before we begin, we need to download AzCopy. Once downloaded, extract the application into C:\ so it is accessible. We will be using this location in our script; change the script's path if you intend to store the application in a different location.

You will also need a folder to store the script. I recommend creating a folder on the C drive named AZCopyCode. Download the script from below and store it there. We will need to make some changes to make it work.

Note: This script was initially created by dstreefkerk. As of Windows Server 2022, it still works. I have made some changes and fixed a few bugs.

Code Changes

  • Navigate to the storage account, and click on the pkitest you created
  • Click on Containers under Data Storage
  • Click on the pkitest folder
  • Click on Shared Access Token under Settings. Provide appropriate permissions, and choose an expiry date (preferably one year)
  • Click Generate SAS token and copy Blob SAS Token
  • Open the code in notepad or your preferred code editor
  • Paste the SAS token for the variable $azCopyDestinationSASKey
  • Navigate to Properties under Settings, copy the URL, and paste it for $azCopyDestination
  • Change log and log archive locations if applicable.
  • Change AzCopy location on $azCopyBinaryPath if you stored azcopy in another location.
  • Once the changes are made, save the script as C:\AZCopyCode\Invoke-UpdateAzureBlobPKIStorage.ps1
  • Open PowerShell on CA02
  • Navigate to C:\AZCopyCode
  • Run .\Invoke-UpdateAzureBlobPKIStorage.ps1
  • Once it completes, it will show how many files were copied, with 100% shown and 0 failed
  • Open PKIView.msc, and now no errors should be visible
  • The overall PKI should be healthy.

Troubleshooting

For this scenario, suppose you get an error in PKIView.msc.

Copy the URL by right-clicking on the failing location and paste it into Notepad. It should look something like this:

http://pkitest.encryptionconsulting.com/pkitest/Encon%20Root%20CA.crl%20

If you try opening this in a browser, it will still give an error because of the trailing %20, which indicates a space at the end of the URL. To resolve this, the CDP and AIA points need to be corrected on the Root CA, and the Issuing CA needs to be recreated.

Automating the script

We will automate this script using Task Scheduler to run every week. You can tweak this as per your requirements.

  • Open Task Scheduler
  • Left-click on Task Scheduler (Local) and click Create Basic Task
  • Provide a name and description for the task
  • Configure the task trigger as Weekly
  • Select the date and time when the script will run
  • On Action, select Start a program and click Next
  • Under Start a Program, in Program/script, enter powershell.exe -File C:\AZCopyCode\Invoke-UpdateAzureBlobPKIStorage.ps1
  • Click Yes on the prompt (Task Scheduler will move the arguments for you)
  • Check Open the Properties dialog and click Finish
  • Once completed, the AZ Copy task should be available in the Task Scheduler Library
  • Right-click AZ Copy and click Run
  • Refresh and check the History tab. "Action Completed" should appear in History
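The same scheduled task can be registered from PowerShell; a sketch with an assumed day and time of Monday 6 AM:

# Run the AzCopy wrapper script weekly via Task Scheduler.
$action  = New-ScheduledTaskAction -Execute "powershell.exe" `
    -Argument "-ExecutionPolicy Bypass -File C:\AZCopyCode\Invoke-UpdateAzureBlobPKIStorage.ps1"
$trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Monday -At 6am
Register-ScheduledTask -TaskName "AZ Copy" -Action $action -Trigger $trigger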

Conclusion

This concludes our AD CS installation with Azure Blob Storage. Not only is this setup easier to manage, but we also achieve high availability using Azure Blob Storage. This will help organizations create a PKI that can operate worldwide with minimal latency and high performance, no matter where you are. If you face any issues, do remember to reach out to info@encryptionconsulting.com.


About the Author

Anish Bhattacharya is a Consultant at Encryption Consulting, working with PKIs, HSMs, creating Google Cloud applications, and working as a consultant with high-profile clients.


Read time: 9 minutes

Upgrading Vormetric Data Security Manager (DSM) can seem relatively straightforward, but if something goes wrong, data can be lost or corrupted. This article does not provide the steps you need to upgrade; rather, it covers what organizations need to be careful about, including a few technical and non-technical points you may not find in the upgrade guide. Readers can use this article as a checklist or as a base to develop a plan for upgrading their DSM. Every DSM upgrade can be different and unique to the organization, so it is best to get outside consultation for the planning that fits your environment.

Planning

Everything starts with good planning, and the DSM upgrade is no different. A full scan of the current environment is vital to assess any future roadblocks or challenges you might face.

  1. Keeping a note of all the agents, along with their agent versions and guardpoints

    When you upgrade the DSM, you can check the list of all the agents and whether all the guardpoints are up. If something doesn't match expectations, you may need to troubleshoot before proceeding. Agent versions can also highlight any agents in the environment that are incompatible with the DSM version you want to upgrade to. A compatibility matrix will help you determine which agents are compatible and which need to be upgraded. You can then develop the upgrade path you need. Thales has defined an upgrade path that must be followed to reach a particular version. For example, you cannot upgrade directly from 5.3 to 6.2 without first upgrading to the intermediate version 6.0.0.

  2. Production cutover

    Many organizations looking to upgrade already have a production DSM working in their environment. Rather than upgrade the existing production DSM, they would use another DSM to upgrade and conduct pilot testing before switching over to it as the new production DSM. Organizations should plan the cutover accordingly.

Planning can ensure the following steps are smooth and DSM can be upgraded to the desired version without hiccups.

Upgrading DSMs

Planning can help soothe the upgrade, but upgrading a production DSM can expose challenges you might not have expected. Some of the challenges we have faced are:

  1. Cutting over from old DSM to new DSM without agent registration
  2. Upgrading DSM to a particular version where agents remain compatible
  3. Configuring HA cluster with DSMs being on different subnets
  4. Upgrading DSMs can also take time, and without proper planning of the cutover, production can face downtime. With adequate planning, the DSM cutover can be seamless with no downtime.

Organizations should prepare exhaustively for upgrading DSMs, as a wrong move can cost them terabytes of unrecoverable data.

Migration to CipherTrust Manager

Thales announced that Vormetric DSM will reach end of life in June 2024; as a result, many organizations are also looking to migrate their DSMs to CipherTrust Manager.

Migrating to CTM can come with a unique wave of challenges, where some organizations might be concerned about the availability of policies, keys, user sets, process sets, and other configurations that already exist in their DSM. Other concerns organizations may have include:

  • Policies, keys, user sets, and process sets migration
  • Migration of hosts
  • High Availability configuration
  • Downtime and system disruption
  • Data Loss

If organizations are already on DSM 6.4.5 or 6.4.6, they can migrate to CipherTrust Manager and restore the following objects:

  1. Agent Keys, including versioned keys and KMIP accessible keys
  2. Vault Keys
  3. Bring Your Own Keys (BYOK) objects
  4. Domains
  5. CipherTrust Transparent Encryption (CTE) configuration

The objects that are not restored are:

  1. Most Key Management Interoperability (KMIP) keys except KMIP-accessible agent keys
  2. Keys associated with CipherTrust Cloud Key Manager (CCKM), including key material as a service (KMaaS) keys.
  3. DSM hostname and host configuration

When customers migrate, we recommend exhaustive pilot testing to ensure the migration goes smoothly without data loss. Customers should also conduct a cleanup of the DSM, removing all decommissioned servers and users before migrating to CipherTrust Manager. Hosts running VTE agents (below version 7.0) will need those agents upgraded to CipherTrust Transparent Encryption before migrating so that the migration can be seamless.

Benefits of migrating to CipherTrust Manager

  1. Improved browser-based UI
  2. No hypervisor limitation
  3. Fully featured remote CLI interface
  4. Embedded CipherTrust Cloud Key Manager (CCKM)
  5. Better Key Management capabilities with KMIP key material – integrates policy, logging, and management – bringing simplified and richer capabilities to KMIP
  6. Supports CTE agents (renamed from VTE agents) from version 7.0
  7. New API for better automation

Even though these improvements are for the CTM platform as a whole, there are also some specialized improvements organizations may want to consider:

  1. Password and PED Authentication

    Similar to the S series Luna HSM, Thales provides the choice of password or PIN Entry Device (PED) authentication mechanisms for CTM. This can provide improved security but is only available for k570 physical appliances.

  2. Multi-cloud deployment is also possible, where customers can deploy on AWS, Azure, GCP, VMware, Hyper-V, Oracle VM, and more. They can also form hybrid clusters of physical and virtual appliances for high-availability environments.

Conclusion

Upgrading DSMs can be challenging and requires thorough planning and troubleshooting, but migrating to CTM is another level entirely, as CTM is a different product. If organizations opt to migrate, they will need to develop new runbooks, standard operating procedures (SOPs), and architecture and design documents. With two years remaining until end of life, organizations should decide whether they plan to migrate to CTM or want to explore other options.

Encryption Consulting can help customers plan and also help in upgrading and migrating their existing DSMs. Being a certified partner with Thales and supporting Vormetric customers with their implementations and upgrading for years, Encryption Consulting can help plan a full-fledged migration to CipherTrust Manager. Encryption Consulting can also provide gap assessments and pre-planning to ensure customers do not face any data loss or serious challenges that would lead to downtime or other production issues. Contact us for more information.



Read time: 3 minutes, 44 seconds

The Domain Name System (DNS) is one of the best-known protocols on the Internet. Its main function is to translate human-readable domain names into their corresponding IP addresses. This is important, as all devices on the Internet regularly derive the IP address of a particular server from DNS. The translation happens through a process in which DNS queries are exchanged between the client and the DNS server, or resolver. The DNS tree is architected from the top down and is called the "DNS Hierarchy," depicted in Fig. 1.

Fig 1: DNS Hierarchy

There are two types of DNS resolvers:

  • Authoritative

    Authoritative name servers give answers in response to queries about IP addresses. They only respond to queries about domains they have been configured to answer for.

  • Recursive

    Recursive resolvers provide the proper IP address requested by the client. They perform the translation process themselves and return the final response to the client.

In this article, we will be focusing on the second type, recursive DNS resolvers.

DNS Cache Poisoning Attacks

Classic DNS cache poisoning attacks (around 2008) targeted a DNS resolver by having an off-path attacker fool a vulnerable DNS resolver into issuing a query to an upstream authoritative name server.

The attacker then attempts to inject rogue responses with the spoofed IP of the name server. If a rogue response arrives before any legitimate one and matches the "secrets" in the query, the resolver will accept and cache the rogue results.

The attacker would also need to guess the correct source/destination IP, source/destination port, and the query's transaction ID (TxID), which is 16 bits long. When the source and destination ports (i.e., 53) were fixed, the 16-bit TxID was the only randomness. Thus, an off-path attacker could brute-force all possible values with 65,536 responses, and a few optimizations, such as birthday attacks, can speed up the attack even further.
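A rough back-of-the-envelope count makes the stakes clear; the numbers follow directly from the bit widths above:

\text{TxID only: } 2^{16} = 65{,}536 \text{ possible responses}
\text{TxID and randomized source port: } 2^{16} \times 2^{16} = 2^{32} \approx 4.3 \times 10^{9}

This is why the source port randomization discussed below raises the attacker's cost so dramatically.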

Defenses against DNS Cache Poisoning attacks

Since then, several defenses have been promoted to mitigate the threat of DNS cache poisoning, and they effectively render the classic attack useless. We describe the deployed solutions below, which include randomization of:

  1. The source port – randomizing it is perhaps the most effective and widely deployed defense, as it increases the randomness from 16 bits to 32 bits. An off-path attacker now has to guess both the source port and the Transaction ID (TxID) together.

  2. Capitalization of letters in domain names (0x20 encoding) – The randomness can often depend on the number of letters, which can be quite effective, especially for longer domain names. It is a simple protocol change, but it has significant compatibility issues with authoritative name servers. Thus, the most popular public resolvers do not use 0x20 encoding. For example, Google DNS uses 0x20 encoding only for allowed name servers; Cloudflare has recently disabled 0x20 encoding.

  3. Choice of name servers (server IP addresses) – Randomness also depends on the number of name servers. Most domains utilize fewer than ten name servers, amounting to only two to three bits of randomness. It has also been shown that an attacker can induce query failures against certain name servers and effectively "pin" a resolver to the one remaining name server.

  4. DNSSEC – The success of DNSSEC depends on the support of both resolvers and authoritative name servers. But, only a small fraction of domains are signed – 0.7% (.com domains), 1% (.org domains), and 1.85% for Top Alexa 10K domains, as reported in 2017. In the same study, it is stated that only 12% of the resolvers enabling DNSSEC do attempt to validate the records received. Thus, the overall deployment rate of DNSSEC is far from satisfactory.

Conclusion

DNS cache poisoning attacks are ever-changing, with new attack surfaces appearing. As we previously stated, modern DNS infrastructure has multiple layers of caching. The client often initiates a query using an API to an OS stub resolver, a separate system process that maintains an OS-wide DNS cache. The stub resolver does not perform any iterative queries; instead, it forwards the request to the next layer. A DNS forwarder also forwards queries to its upstream recursive resolver. DNS forwarders are commonly found in Wi-Fi routers (e.g., in a home), and they maintain a dedicated DNS cache. The recursive resolver does the real job of iteratively querying the authoritative name servers. The answers are then returned and cached in each layer.

All layers of caches are technically subject to DNS cache poisoning attacks, but we generally tend to ignore stub resolvers and forwarders, which are equally susceptible. As the industry moves forward, we should be better prepared for such attacks and build better defenses accordingly.


Read time: 6 minutes

Data security is one of the essential parts of an organization, and it can be achieved using various methods. Encryption keys have a significant role in the overall process of protecting data. Data encryption converts plaintext into an encoded (non-readable) form, and only authorized persons/parties can access it.

Many algorithms are available in the market for encrypting such data. Encrypted data is safe for some time, but it should never be considered permanently secure; as time goes on, there is a chance that someone cracks the encryption.

Fig: Encryption and Decryption Process

In this article, we consider various encryption algorithms and techniques for improving the security of data. We compare the encryption algorithms based on their performance, efficiency in hardware and software, key size, availability, implementation techniques, and speed.

Summary of the algorithms

We compare the measured speed of encryption algorithms with various other algorithms available as standard in the Oracle JDK, using the Eclipse IDE, and then summarize various other characteristics of those algorithms. The encryption algorithms considered here are AES (with 128 and 256-bit keys), DES, Triple DES, IDEA, and Blowfish (with a 256-bit key).

Performance of the algorithms

The figure below shows the time taken to encrypt various numbers of 16-byte blocks of data using the algorithms mentioned above.

It is essential to note right from the beginning that beyond some ridiculous point, it is not worth sacrificing speed for security. However, the measurements obtained will still help us make certain informed decisions.

Characteristics of algorithms

Table 1 summarizes the main features of each encryption algorithm, with what we believe is a fair overview of the current security status of the algorithm.

Factors | RSA | DES | 3DES | AES
Created By | In 1978 by Ron Rivest, Adi Shamir, and Leonard Adleman | In 1975 by IBM | In 1978 by IBM | In 2001 by Vincent Rijmen and Joan Daemen
Key Length | Depends on the number of bits in modulus n, where n = p*q | 56 bits | 168 bits (k1, k2, and k3) or 112 bits (k1 and k2) | 128, 192, or 256 bits
Rounds | 1 | 16 | 48 | 10 (128-bit key), 12 (192-bit key), 14 (256-bit key)
Block Size | Variable | 64 bits | 64 bits | 128 bits
Cipher Type | Asymmetric Block Cipher | Symmetric Block Cipher | Symmetric Block Cipher | Symmetric Block Cipher
Speed | Slowest | Slow | Very Slow | Fast
Security | Least Secure | Not Secure Enough | Adequate Security | Excellent Security

Table 1: Characteristics of commonly used encryption algorithms

Comparison

The techniques have been compared based on the following factors:

  • CPU processing speed for encrypting and decrypting data.
  • Rate of key generation.
  • Key size.
  • Security consideration.
  • Efficiency of implementation in hardware and software.
  • The amount of memory required to hold the data in the encryption process.
  • Number of users accommodated by the model.
  • Time required by the model to recover the data in case of key failure.
  • Time available to the hacker to produce various types of attacks.
  • The complexity of algorithm technique.
Fig: Comparison of encryption algorithm based on Percentage Efficiency

Formulation and Case Study

Case Study

Symmetric ciphers use the same key for encrypting and decrypting, so the sender and the receiver must both know, and use, the same secret key. All AES key lengths are deemed sufficient to protect classified information up to the "Secret" level, with "Top Secret" information requiring either 192- or 256-bit key lengths. There are 10 rounds for 128-bit keys, 12 rounds for 192-bit keys, and 14 rounds for 256-bit keys; a round consists of several processing steps that include substitution, transposition, and mixing of the input plaintext, transforming it into the final output of ciphertext.

AES Design

Rounds

Padding is the method of adding additional dummy data. During the encryption of a message, if the message length is not divisible by the block length, padding is used. For example, if the message consists of 426 bytes, we need six additional bytes of padding to make the message 432 bytes long, because 432 is divisible by 16. Three key sizes can be used in AES, and depending on the key size, the number of rounds in AES changes. The standard key size in AES is 128 bits, with 10 rounds. For AES encryption, round keys are generated from the cipher key, and an initial round key is added before the first round.
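The padding arithmetic above is easy to check; a tiny PowerShell sketch:

# Bytes needed to pad a 426-byte message to the next 16-byte block boundary.
$messageLength = 426
$blockSize     = 16
$padding = ($blockSize - ($messageLength % $blockSize)) % $blockSize
"$padding bytes of padding -> $($messageLength + $padding) bytes total"   # 6 -> 432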

No. | Key Size | No. of Rounds
1 | 128 bits | 10
2 | 192 bits | 12
3 | 256 bits | 14

For 128-bit plaintext, a 128-bit key is used, and 10 rounds are performed to produce the ciphertext. In the first step, 10 round keys are generated, one for each round. But in the first round, an extra round key, the initial round key, is added before the transformation starts. The transformation consists of four steps.

  1. Substitute Bytes
  2. Shift Rows
  3. Mix Columns
  4. Add Round Key

The following figure explains all the encryption stages from plaintext to ciphertext.

Fig: Shows the stages of each round

Encryption with AES

The encryption phase of AES can be broken into three steps: the initial round, the main rounds, and the final round. All of the stages use the same sub-operations in different combinations, as follows:

  1. Initial Round: Add Round Key
  2. Main Round
    • Sub Bytes
    • Shift Rows
    • Mix Columns
    • Add Round Key
  3. Final Round:
    • Sub Bytes
    • Shift Rows
    • Add Round Key
  4. Add Round Key

    This is the only phase of AES encryption that directly operates on the AES round key. In this operation, the input to the round is exclusive-ORed with the round key.

  5. Sub Bytes

    Involves splitting the input into bytes and passing each through a Substitution Box, or S-Box. Unlike DES, AES uses the same S-Box for all bytes. The AES S-Box implements inverse multiplication in the Galois Field 2^8.

  6. Shift Rows

    Each row of the 128-bit internal state of the cipher is shifted. The rows in this stage refer to the standard representation of the internal state in AES, a 4×4 matrix where each cell contains a byte. Bytes of the internal state are placed in the matrix across rows from left to right and down columns.

  7. Mix Columns

    Provides diffusion by mixing the input around. Unlike Shift Rows, Mix Columns performs operations splitting the matrix by columns instead of rows. Unlike standard matrix multiplication, Mix Columns performs matrix multiplication in the Galois Field 2^8.

Decryption with AES

To decrypt an AES-encrypted ciphertext, it is necessary to undo each stage of the encryption operation in the reverse order in which they were applied. The three stages of decryption are as follows:

  1. Inverse Final Round
    • Add Round Key
    • Shift Rows
    • Sub Bytes
  2. Inverse Main Round
    • Add Round Key
    • Mix Columns
    • Shift Rows
    • Sub Bytes
  3. Inverse Initial Round
    • Add Round Key
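To see the whole cipher in action without implementing the rounds by hand, the .NET AES implementation (usable directly from PowerShell) performs a full encrypt/decrypt round trip; a minimal sketch:

# AES-256 round trip; a 256-bit key implies 14 rounds, per the table above.
$aes = [System.Security.Cryptography.Aes]::Create()
$aes.KeySize = 256
$aes.GenerateKey(); $aes.GenerateIV()

$plain  = [System.Text.Encoding]::UTF8.GetBytes("Sensitive data")
$cipher = $aes.CreateEncryptor().TransformFinalBlock($plain, 0, $plain.Length)

$decoded = $aes.CreateDecryptor().TransformFinalBlock($cipher, 0, $cipher.Length)
[System.Text.Encoding]::UTF8.GetString($decoded)   # -> "Sensitive data"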

Conclusion

The study of various algorithms shows that a model's strength depends upon the key management, the type of cryptography, the number of keys, and the number of bits used in a key. All the keys are based on mathematical properties. Keys with more bits require more computation time, indicating that the system takes more time to encrypt the data. AES is a more mathematically efficient and elegant cryptographic algorithm, but its main strength is the option of various key lengths. AES allows you to choose a 128-bit, 192-bit, or 256-bit key, making it exponentially stronger. AES uses a substitution-permutation design, which involves a series of substitution and permutation steps to create the encrypted block.


Read time: 3 minutes 42 seconds

Kubernetes is an open-source container-orchestration system used to automate the deployment, scaling, and management of containerized applications. Kubernetes manages all elements that make up a cluster, from each microservice in an application to entire clusters. Using containerized applications as microservices can give organizations more flexibility and security benefits than monolithic software platforms, but it also introduces other complexities.

Recommendations

  1. Kubernetes Pod security
    1. Containers built to run applications should run as non-root users
    2. Run containers with immutable file systems whenever possible
    3. Regularly scan container images for potential vulnerabilities or misconfigurations
    4. Use Pod Security Policies to enforce a minimum level of security, including:
      1. Preventing privileged containers
      2. Denying container features that are frequently exploited to breakout, like hostPID, hostIPC, hostNetwork, allowedHostPath
      3. Rejecting containers that execute as root user or allow elevation to root
  2. Network separation and hardening
    1. Lock down access to the control plane nodes using a firewall and RBAC (Role-Based Access Control)
    2. Limit access to the Kubernetes etcd server
    3. Configure control plane components to use authenticated, encrypted communications with TLS/SSL certificates
    4. Set up network policies to isolate resources. Pods and services in different namespaces can still communicate unless additional separation, such as network policies, is applied
    5. Place all credentials and sensitive information in Kubernetes Secrets rather than in configuration files, and encrypt Secrets using a robust encryption method
  3. Authentication and authorization
    1. Disable anonymous login
    2. Use strong user authentication
    3. Create RBAC policies that limit administrator, user, and service account activity
  4. Log auditing
    1. Enable audit logging
    2. Persist logs to ensure availability in case of pod, node, or container level failure
    3. Configure a metrics logger
  5. Upgrading and application security practices
    1. Immediately apply security patches and updates
    2. Perform periodic vulnerability scans and penetration tests
    3. Remove components from the environment when they are no longer needed

Architectural overview

Kubernetes uses a cluster architecture. A Kubernetes cluster comprises a control plane and one or more physical or virtual machines called worker nodes, which host Pods, which in turn contain one or more containers. A container is an executable image that includes a software package and all its dependencies.

The control plane makes decisions about the cluster. This includes scheduling the running of containers, detecting and responding to failures, and starting new Pods if the number of replicas listed in the deployment file is not satisfied.

Kubernetes Pod security

Pods consist of one or more containers and are the smallest deployable Kubernetes unit. Pods can often be a cyber actor’s initial execution environment upon exploiting a container. Pods should be hardened to make exploitation much more complex and limit the impact on compromise.

“Non-root” and “rootless” container engines

Many container services run as the privileged root user, and applications can execute inside the container as root despite not requiring privileged execution. Preventing root execution by using non-root containers or a rootless container engine limits the impact of a container compromise. Both methods affect the runtime environment significantly, so applications should be tested thoroughly to ensure compatibility.

Non-root containers

Some container engines allow containers to run applications as non-root users with non-root group membership. This non-default setting is configured when the image is built.

Rootless container engines

Some container engines can run in an unprivileged context rather than using a daemon running as root. For this scenario, execution would appear to use the root user from the containerized application’s perspective, but the execution is remapped to the engine’s user context on the host.

Immutable container file systems

Containers are permitted mostly unrestricted execution within their own context. A threat actor who has gained execution in a container can create files, download scripts, and modify applications within the container. Kubernetes can lock down a container's file system, thereby preventing many post-exploitation activities. However, these limitations also affect legitimate container applications and can potentially result in crashes or abnormal behavior. To avoid damaging legitimate applications, Kubernetes administrators can mount secondary read/write file systems for specific directories where applications require write access.
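To make the non-root and immutable file system guidance concrete, here is a hedged sketch of a Pod spec applied from PowerShell; the image name and mount path are illustrative assumptions:

# Apply a hardened Pod: non-root user, read-only root FS, writable /tmp only.
@"
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  containers:
  - name: app
    image: example/app:1.0
    securityContext:
      runAsNonRoot: true
      runAsUser: 1000
      readOnlyRootFilesystem: true
    volumeMounts:
    - name: tmp
      mountPath: /tmp
  volumes:
  - name: tmp
    emptyDir: {}
"@ | kubectl apply -f -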

Building secure container images

Container images are usually created by either building a container from scratch or building on top of an existing image pulled from a repository. Even when using trusted repositories to build containers, image scanning is key to ensuring deployed containers are secure. Images should be scanned throughout the container build workflow to identify outdated libraries, known vulnerabilities, or misconfigurations, such as insecure ports or permissions. One approach to implementing image scanning is using an admission controller. An admission controller is a Kubernetes-native feature that can intercept and process requests to the Kubernetes API before the object is persisted but after the request is authenticated and authorized. A custom webhook can be implemented to scan any image before deploying it in the cluster. The admission controller can block the deployment if the image doesn't comply with the security policies defined in the webhook configuration.

Conclusion

This was an introduction to properly managing and securing Kubernetes clusters and securely deploying them in your environment. We will dive deeper into more controls and policies organizations can use to strengthen their security.

About the Author

Anish Bhattacharya is a Consultant at Encryption Consulting, working with PKIs, HSMs, creating Google Cloud applications, and working as a consultant with high-profile clients.


Read time: 4 minutes, 51 seconds

In the early 1990s, Netscape began developing SSL, and an initial draft for SSL v2.0 was released in 1995. SSL v2.0 had major security flaws that led to the creation of SSL v3.0, whose draft was submitted to the IETF in 1996. In Netscape’s words, SSL v3.0 was a security protocol that prevents eavesdropping, tampering, or message forgery over the Internet. The IETF published RFC 6101 (Request for Comments) as the specification for SSL v3.0. SSL then became known as TLS, and the first version of TLS arrived in 1999 with RFC 2246. In a nutshell, SSL v3.0 and TLS 1.0 don’t differ in ways a developer should worry about; even so, TLS 1.0 is the better choice of the two. The next version, TLS 1.1, came into existence in 2006 and is outlined in RFC 4346; it brought enhancements over TLS 1.0. TLS 1.2 was released in 2008 and is defined in RFC 5246. TLS 1.2 introduced major changes over TLS 1.1, including support for newer, more secure cryptographic algorithms. In August 2018, TLS 1.3 was released. The differences between TLS 1.2 and 1.3 are extensive and significant, improving both performance and security. At the same time, TLS 1.2 remains in widespread use, given the absence of known vulnerabilities in properly configured deployments and its continued usage in enterprise environments.

Outdated TLS versions

Sensitive data always requires robust protection. TLS protocols provide confidentiality, integrity, and often authenticity protections to information in transit over a network by establishing a secured channel between a server and a client for the duration of a session. Over time, new TLS versions are developed, and some previous versions become obsolete due to vulnerabilities or technical reasons; those versions should no longer be used to protect data.

TLS 1.2 or TLS 1.3 should be used; organizations should not use SSL 2.0, SSL 3.0, TLS 1.0, or TLS 1.1.
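As an illustration, a client written in Python can enforce this floor with the standard-library ssl module; the host name below is an example only:

```python
import socket
import ssl

# Build a client context that refuses anything older than TLS 1.2.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # SSL 3.0, TLS 1.0, and TLS 1.1 are rejected

# Hypothetical host used for illustration only.
with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print("Negotiated protocol:", tls.version())  # e.g. 'TLSv1.2' or 'TLSv1.3'
```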

Outdated Cipher suites

In TLS 1.2, the term “cipher suite” refers to the negotiated and agreed-upon set of cryptographic algorithms for the TLS transmission. The TLS client offers a list of cipher suites, and the server selects one from the list. A cipher suite in TLS 1.2 consists of an encryption algorithm, a key exchange algorithm, an authentication mechanism, and a key derivation mechanism.

Cipher suites are identified as obsolete when one or more of their mechanisms is weak. Weak encryption algorithms in TLS 1.2 are NULL, RC2, RC4, DES, IDEA, and TDES/3DES; organizations should not use cipher suites containing these algorithms. TLS 1.3 removes these cipher suites entirely, but implementations that also support TLS 1.2 should still be checked for obsolete cipher suites.
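Building on the previous sketch, the same Python ssl context can be restricted to TLS 1.2 suites that avoid the weak algorithms above. The cipher string here is one example policy, not a universal recommendation:

```python
import ssl

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Example TLS 1.2 policy: ECDHE key exchange with AES-GCM only, explicitly
# excluding the obsolete algorithms. (TLS 1.3 suites are controlled
# separately by OpenSSL and are secure by default.)
context.set_ciphers("ECDHE+AESGCM:!aNULL:!eNULL:!RC4:!3DES:!DES:!IDEA")

# List the TLS 1.2 suites this context will actually offer.
for suite in context.get_ciphers():
    print(suite["name"])
```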

Outdated Key exchange mechanisms

Weaker key exchange mechanisms indicated by cipher suite include those designated as EXPORT or ANON. Cipher suites that use these key exchange mechanisms should not be used. In TLS sessions, even if the cipher suite is acceptable, key exchange mechanisms may use weak keys that allow exploitation. TLS key exchange methods include RSA key transport and DH or ECDH key establishment. DH and ECDH have static as well as ephemeral mechanisms. NSA recommends RSA key transport and ephemeral DH (DHE) or ECDH (ECDHE) mechanisms, with RSA or DHE key exchange using at least 3072-bit keys and ECDHE key exchanges using the secp384r1 elliptic curve. For RSA key transport and DH/DHE key exchange, keys less than 2048 bits should not be used, and ECDH/ECDHE using custom curves should not be used.

Risk of outdated TLS protocols

Outdated TLS protocols use cipher suites that are no longer supported or recommended, and keeping older TLS versions alive requires ongoing effort to maintain the libraries, driving up the cost of product maintenance. Apart from the scenario discussed above, some additional risks are:

  • Using outdated TLS versions forces organizations to use outdated, vulnerable cipher suites and prevents support for newer recommended cipher suites.
  • TLS 1.0 and 1.1 are vulnerable to downgrade attacks since they rely on the SHA-1 hash for the integrity of exchanged messages. Handshake authentication is also based on SHA-1, making it easier for an attacker to impersonate a server for MITM attacks. TLS 1.1 and below do not provide the option to select more robust hashing algorithms, which the newer protocols do.
  • Supporting older protocols drives up cost, as all vulnerabilities need to be patched, libraries need to be supported, and the attack surface increases.

Identifying and analyzing outdated TLS protocols

Since there are many ways that obsolete TLS configurations may be exhibited in traffic, the following detection strategy is recommended. Signatures can be simplified using this strategy:

  • First, identify clients offering and servers negotiating obsolete TLS versions. If a client offers or a server negotiates SSL 2.0, SSL 3.0, or an outdated TLS version, no further traffic analysis is required, and remediation strategies should be employed.
  • Next, for sessions using TLS 1.2, organizations should identify and remediate devices using obsolete cipher suites. Identify clients offering only outdated TLS cipher suites and servers negotiating them, and update their configurations to be compliant.
  • Finally, organizations should identify and remediate devices using weak key exchange methods for sessions using TLS 1.2 or TLS 1.3 and recommended cipher suites.

Benefits of upgrading to newer protocols

Apart from getting rid of vulnerabilities and having better security for the environment, organizations do tend to gain a few benefits by upgrading to newer protocols:

  • Increased performance in the overall environment.
  • Improved security.
  • Better support and patches for the vulnerabilities found, along with research for new vulnerabilities.
  • Better hashing algorithms for integrity check and authentication of handshakes.

Conclusion

Organizations encrypt network traffic to protect data in transit. However, using obsolete TLS configurations provides a false sense of security since it looks like the data is protected, even though it is not. Organizations should plan to discontinue outdated TLS configurations in the environment by detecting, remediating, and then blocking obsolete TLS versions, cipher suites, and key exchange methods.


About the Author

Anish Bhattacharya is a Consultant at Encryption Consulting, working with PKIs, HSMs, creating Google Cloud applications, and working as a consultant with high-profile clients.


Read time: 12 minutes, 37 seconds

An Enterprise Encryption Policy is vital to the security of an organization. This policy provides a uniform way of ensuring encryption best practices are properly implemented throughout your organization. Additionally, a strong Enterprise Encryption Policy can be tailored to the encryption strategy your organization has created, giving you an organization-specific solution to your encryption gaps. Creating an encryption policy takes a lot of planning and organization to implement properly. Your Enterprise Encryption Policy will likely suggest different tools to be used for data encryption, like code signing solutions, to give your data the security it deserves.

Encryption Basics

The first step to designing a strong Enterprise Encryption Policy is to ensure you understand the basics of encryption. There are two different types of encryption methods: asymmetric and symmetric encryption. Symmetric encryption utilizes one key when encrypting data. This means both the person encrypting data and the person decrypting data use the same key. The initial plaintext data is encrypted with the symmetric key and turned into ciphertext. This ciphertext is later decrypted with the same key used to encrypt it, thus reproducing the plaintext. The key should be securely transmitted to both the person encrypting and the person decrypting the data, as only those individuals who need to encrypt or decrypt the data should have access to the key. If the security of the key is improperly handled, then a malicious threat actor could steal the plaintext data and use it for unwanted purposes.
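A minimal sketch of symmetric, authenticated encryption, assuming the widely used Python cryptography package; the plaintext is an example only:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # the single shared key
aesgcm = AESGCM(key)

nonce = os.urandom(12)                       # must be unique per message
plaintext = b"quarterly payroll data"        # example data only
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Anyone holding the same key (and nonce) can recover the plaintext.
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext
```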

The other type of encryption is asymmetric encryption. This mode of encryption deals with an encryption key pair as opposed to a single encryption key. The key pair is created with a public and private key. As their names suggest, the public key is available to anyone while the private key is known only to the key pair creator. The keys are mathematically linked, such that if either key encrypts any plaintext, the other key is able to decrypt the resulting ciphertext. With data-in-transit, the sender encrypts the data with the private key of their key pair (in effect, digitally signing it). The public key is then sent along to the recipient of the data, who uses that public key to decrypt it. Since the keys are linked, the recipient knows that the data actually came from the person they think it came from. In this way, asymmetric encryption provides a valid way to authenticate that the data-in-transit has not been changed and to validate the identity of the data sender.
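A hedged sketch of this sign-and-verify flow, using RSA-PSS with the same cryptography package; in production the private key would be generated and held in an HSM or key store rather than in memory:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Generate an example key pair; real keys would live in an HSM or key store.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
public_key = private_key.public_key()

message = b"firmware update v2.1"            # example payload only
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# The recipient verifies with the public key; this raises
# InvalidSignature if the message or signature was tampered with.
public_key.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
```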

Now, one final component of encryption we should understand is the different states data can be in, which include data-in-transit/data-in-motion, data-at-rest, and data-in-use:

  • Data-in-Transit: Data-in-transit, or data-in-motion, is any data being transported to another location. This type of data is vulnerable to threat actors via Man-in-the-Middle attacks, which intercept the data on the way to its destination, allowing it to be read or stolen while in transit. This is a big issue if data like a piece of software or a firmware update were sent to a client and a Man-in-the-Middle attack occurred. If the software was not encrypted, the threat actor could edit that software or update to contain a malware payload and send it along to the recipient, who would then install it and have the malware downloaded to their machine. Luckily, there are a number of different methods used to protect data-in-transit, including Secure Sockets Layer (SSL)/Transport Layer Security (TLS), Secure Shell (SSH), and Virtual Private Networks (VPNs).
  • Data-at-Rest: Data-at-rest refers to any data stored on a device or in a database that is not currently in use or in transit. A lot of attacks on organizations target data-at-rest, as it is easily accessible when improper data protection controls are in place, and Personally Identifiable Information (PII) is normally stored in a database or device storage. This is why organizations use different encryption techniques to protect data-at-rest, including Database Encryption, Full Disk Encryption (FDE), File and Folder Encryption (FFE), and Virtual Encryption. Additionally, Hardware Security Modules (HSMs) are used to store the keys that encrypt data-at-rest and data-in-transit.
  • Data-in-Use: Data-in-use is data currently having operations performed on it. This includes data being generated, updated, erased, or viewed. Data-in-use is the hardest type of data to protect, as using the data requires it to be decrypted, so most encryption methods are poorly suited to protecting data-in-use. Certain types of encryption, like Format Preserving Encryption (FPE), can be used to protect data-in-use. FPE encrypts the data but leaves it in the same format it was originally in, so performing computations on data encrypted via FPE is possible. Another way to protect data-in-use is to ensure that only those users who need access to the data actually have access to it. This also applies to the encryption keys that may be used when the data is at rest. As long as proper management and approvals for encryption keys and data are in place, your data-in-use should remain secure.

Now that we understand the basics of encryption, let’s take a look at what you need to consider when developing an Enterprise Encryption Policy Strategy.

What to Consider when Strategizing?

Strategizing for the development of an Enterprise Encryption Policy is an extremely important step in actually developing the Policy. There are a number of different points to consider when strategizing for your Enterprise Encryption Policy, which begins with collaboration. An organization creating a Policy should work with every team that may be included in this policy or who may have useful information for the Policy. This includes compliance teams, as they will be able to help determine what types of data must be discovered and classified, as well as what methods for encryption or key protection are necessary. The other stakeholders included in this project can also assist in connecting this policy to other policies already in place in your organization, such as key protection policies. The teams actually implementing the Enterprise Encryption Policy controls should also be included in the strategizing process.

The next step to consider when strategizing for your Enterprise Encryption Policy creation is classifying data. Using the compliance team’s knowledge, you should determine which standards and compliance regulations you need to follow, as that will tell your organization whether and what data must be classified and how that data should be protected. To classify data, an organization must sift through all their data to determine the different types of data they are storing. If certain types of data, like Social Security Numbers, phone numbers, or addresses, are stored or utilized in some way, then that data must be protected. Once data is classified, it becomes much easier to protect via encryption, tokenization, or other methods. Data is normally classified into one of four levels. Public is the first classification level, covering data that is open to the public or will be. If this data is lost or stolen, it does not cause any issues, as it is publicly known. The second level is Business Use, which is data used in day-to-day business. This is the classification level of most data, and while losing it would be a hindrance, it would not cripple your organization. The third level is Confidential data, which fewer people have access to and which would cause a competitive advantage to be lost if it were leaked. Finally, there is Restricted data, which is the least accessible information in the organization. Restricted data could cause millions of dollars in revenue loss if it were leaked, and could lead to lawsuits if it is lost or stolen.

Another key point to strategize for when creating your organization’s policy is roles and access control of data. Ensuring that only those who need access to certain data can reach it is very important to data security, as allowing someone who should not be able to access restricted data to do so could lead to data vital to the organization being stolen or lost. You can implement proper access control by assigning roles to users. These roles can be based on data classification levels, job title, or even the section of the company a user works in; someone could have a restricted-data role, a sales-associate role, or a human-resources role, respectively. Another important point to consider with access control is segregation of duties. Segregation of duties is the idea that multiple people are needed to complete a certain task, so if someone requests access to restricted data, one or more approvers would have to allow them access before they could actually use it. This offers more chances to stop a potential insider threat, as multiple people would have to be in on it to steal data.

When creating a strategy for your policy, you must also consider the types of encryption you want to put in place. These can vary based on the compliance regulations and standards your organization is required to follow and the level of security you wish to implement in your IT infrastructure. Every organization should strive for the strongest possible encryption and security that does not cripple or slow its operations. Your company can utilize any kind of encryption algorithm, but ensuring data is protected in all states is key: if data is only protected in motion, it can be compromised either at rest or in use. Also, keep track of the National Institute of Standards and Technology’s (NIST’s) updates to regulations and standards, as these provide organizations with the best practices to follow to ensure they are using the strongest encryption algorithms, key lengths, etc. Your enterprise can manage its different encryption methods either manually or with Enterprise Encryption Platform Services, like MicroFocus or Protegrity. This should also help you determine the next step in your strategy, which is encryption key management.

Encryption key management is arguably the most important part of your encryption strategy, as many of the most recent supply chain attacks have shown. The first target a threat actor will try to find is the private key of the asymmetric encryption key pair. Keys can be stored in a variety of places, but not all of them are secure. Some organizations store their keys in plaintext on their devices, which is the least secure method of storing keys. Software-based key security is available, but it is still not best practice, as these keys remain prone to theft. The best practice, as recommended by NIST, is to use Federal Information Processing Standards (FIPS) validated Hardware Security Modules, as they provide the most secure method of protecting encryption keys, or keys of any type. FIPS-validated HSMs range from FIPS 140-2 Level 1 to Level 4. Level 1 provides the least amount of security, requiring a working encryption algorithm of any type and production-grade equipment. Level 2 takes all of Level 1’s requirements and adds role-based authentication, tamper-evident physical devices, and an operating system approved by Common Criteria at EAL2. The majority of organizations tend to use a FIPS 140-2 Level 3 HSM, as it provides all of Level 2’s requirements and adds tamper-resistant devices, a separation of the logical and physical interfaces through which “critical security parameters” enter or leave the system, and identity-based authentication. Private keys leaving or entering the system must also be encrypted before they can be moved to or from it. The final level, Level 4, requires everything in Level 3 along with tamper-active protection: the contents of the device must be erased if certain environmental attacks are detected.

The final part of a strong strategy for creating your Enterprise Encryption Policy is determining which solutions you want to implement. These range from certificate management solutions and enterprise encryption platform services to code signing solutions, key management solutions, and Public Key Infrastructures. Choosing the right solution for your organization is extremely important, as you will have many different business needs that these solutions must meet. Choosing the best one can be difficult, but focusing on the compliance standards you must follow, as well as the best practices put forth in NIST standards, will give your enterprise the best possible solutions to meet the project requirements. Now that we’ve developed a strategy for your Enterprise Encryption Policy, let’s see what the key components of that policy are.



Key Enterprise Encryption Policy Areas

  • Encryption Technical Standards: The encryption technical standards portion of the Enterprise Encryption Policy deals with the technical details of the different keys, encryption algorithms, etc. for the enterprise. This section includes key length, key strength, key types, and other technical key details. The strength of the encryption algorithms used, the types of encryption algorithms used, and the strength of any ciphers used are all additional portions of the encryption technical standards. The point of this policy area is to ensure that there is uniformity between all business units within the enterprise, with respect to the keys and other aspects of encryption.
  • Data to Encrypt: This policy area deals with what data needs to be encrypted and why. It works with data classification methods, data classification policies, and other data identification methods. Data can be classified into a number of different categories; an example can be found in the What to Consider when Strategizing? section of this document.
  • What Stage to Encrypt at: This area deals with the different types of data and where to encrypt them. In this policy area, you should determine the best solutions for encrypting data-at-rest, data-in-motion, and data-in-use. These solutions can be determined based on the compliance standards and best practices that your organization must follow.
  • Key Protection: This policy area handles the strength of keys and how those keys are protected. The most common method is to follow NIST standards and have Multi-Factor Authentication in place for key storage and access. Additionally, tools like Hardware Security Modules secure encryption private keys in the strongest way possible.
  • Key Escrow: Key escrow refers to the ability to retrieve keys for legal reasons, such as compliance audits, or for disaster recovery. If an unforeseen disaster were to occur at your organization and keys needed to be recovered or reset, key escrow provisions would need to be in place in the Enterprise Encryption Policy.
  • Training: Training is vital to an organization, as every team member and business unit should know how to access data, what data types they can access, how to classify new data, and how to protect new encryption keys and data. As long as everyone knows how to access and handle data, they can work well within the organization.
  • Monitoring: Monitoring encryption and data throughout your enterprise is vital, as audit trails must be created, certificates and keys must be tracked, and strong access control must be in place.

Encryption Consulting Products and Services

If your organization is planning on creating a strong Enterprise Encryption Policy, Encryption Consulting can help. We offer a wide range of products and services that will help keep your enterprise secure. For our services, we offer encryption, Public Key Infrastructure (PKI), and Hardware Security Module (HSM) assessment, design, and implementation services for your organization. We can also train your employees on the use of HSMs, PKI, and Amazon Web Services. We also provide a code signing solution, CodeSign Secure 3.0, which offers monitoring, virus/malware scanning, and Multi-Factor Authentication, among other tools. Our PKI-as-a-Service is another great way to have a strong Enterprise Encryption Policy without the need to manage every part of the PKI. If you have any questions about Encryption Consulting’s services or products, please visit our website at www.encryptionconsulting.com.

About the Author

Anish Bhattacharya is a Consultant at Encryption Consulting, working with PKIs, HSMs, creating Google Cloud applications, and working as a consultant with high-profile clients.


Read time: 10 minutes, 30 seconds

In this discussion whiteboard, let us understand: what is an e-signature? What is a digital signature? What is meant by an electronic signature? Are the two types of signature similar or different? Which signature is more secure, and what are the various use cases for digital signatures and electronic signatures? How is code signing relevant to digital signatures? What is Encryption Consulting’s CodeSign Secure, and how is it relevant to your organization? Let’s get into the topic to answer these questions:

If you are new to the concept of e-signatures, there is a high chance of getting confused between “digital signature” and “electronic signature”. Quite often you will encounter people using the two terms interchangeably, which is not accurate, as there are significant differences between these two types of e-signatures. The major difference is security: digital signatures are mainly used to secure documentation and provide authorization, as they are authorized by Certificate Authorities (CAs), whereas electronic signatures only capture the intent of the signer. Let us first understand what a digital signature and an electronic signature are.

What is a Digital Signature?

A digital signature is a type of electronic signature, as both are meant for document signing, except that digital signatures are more secure and authentic. With a digital signature, the signer of the document is required to have a Public Key Infrastructure (PKI) based digital certificate, authorized by a certificate authority, linked to the document. This provides authenticity, as the document is vouched for by a trusted certificate authority. Let us understand digital signatures in a simple way by taking paper-based documents as an example. There are usually two concerns in a documentation process: one is the authenticity of the person signing the contract, and the other is whether the document’s integrity is protected from tampering. To address these concerns, we have notaries in place to provide authorization and safeguard the integrity of the document.

Similar to the notary in physical contracts, we have certificate authorities (CAs) authorizing digital signatures with PKI-based digital certificates. In a digital signature, a unique fingerprint of the digital document is created and bound to the PKI-based digital certificate; this fingerprint is leveraged to establish the authenticity of the document and its source and to assure that the document has not been tampered with.

Currently there are two major document processing platforms which provide digital signature service with strong PKI based digital certificates:

  • Adobe Signature
  • Microsoft Word Signature

Adobe Signature: There are two types of signatures provided by Adobe – Certified and Approval signatures. A Certified signature is used for authentication purposes: a blue ribbon is displayed at the top of the document indicating the actual author of the document and the issuer of the PKI-based digital certificate. An Approval signature, on the other hand, captures the physical signature of the issuer or author and other significant details.

Microsoft Signature: Microsoft supports two types of signatures: visible and invisible. A visible signature provides a signature field for signing, similar to a physical signature. An invisible signature is more secure, as it cannot be accessed or tampered with by unauthorized users; it is commonly used for document authentication and enhanced security.

What is an electronic signature?

An electronic signature is not as secure or complex as a digital signature, as no PKI-based certificates are involved. An electronic signature is mainly used to capture the intent of the document issuer or author, and it can take any form, such as an electronic symbol or process. It can be captured in as simple a way as a checkbox, since its primary purpose is to record the intention to sign a contract or document. These signatures are also legally binding. In instances where a document must be signed by two parties to be legally binding but does not require a high level of security and authorization, electronic signatures are used instead of digital signatures.

Key differences between digital signature and electronic signature

Let us understand the key differences between the two signatures by comparing the crucial parameters in a tabular form.

| Parameter | Digital Signature | Electronic Signature |
| --- | --- | --- |
| Purpose | Main purpose is to secure the document or contract through a PKI-based digital certificate | Purpose is to verify the document or contract |
| Authorization | Yes. Digital signatures can be validated and verified by certificate authorities providing PKI certificates | No. Usually it is not possible to authorize electronic signatures |
| Security | Better security features due to digital certificate based authorization | Fewer security features compared to digital signatures |
| Types of Signs | In general, two types are available: one by Adobe and one by Microsoft | Main types are verbal, scanned physical signatures, and e-ticks |
| Verification | Yes. Digital signatures can be verified | No. Electronic signatures cannot be verified |
| Focus | Primary focus is to secure the document or contract | Primary focus is to show the intention of signing a document or contract |
| Benefits | Preferred over electronic signatures due to the high level of security | Easy to use compared to digital signatures but less secure |

From the above comparison, it is clearly evident that digital signatures have the upper hand over electronic signatures. However, when the objective is simply a legally binding agreement, both signatures serve the purpose. Digital signatures are now highly preferred due to their enhanced security through PKI-based certificates, which provide the required authorization and integrity for the document.



What is Code Signing?

Code signing is the process of applying a digital signature to any software program that is intended for release and distribution to another party or user, with two key objectives. One is to prove the authenticity and ownership of the software. The second is to prove the integrity of the software i.e. prove that the software has not been tampered with, for example by the insertion of any malicious code. Code signing applies to any type of software: executables, archives, drivers, firmware, libraries, packages, patches, and updates. An introduction to code signing has been provided in earlier articles on this blog. In this article, we look at some of the business benefits of signing code.

Code signing is a type of PKI-based digital signature that validates the authenticity and originality of digital information such as a piece of software code. It assures users that the digital information is valid, establishes the legitimacy of the author, and ensures that the information has not been changed or revoked after it was validly signed. Code signing plays an important role because it enables identification of legitimate software versus malware or rogue code. Digitally signed code ensures that the software running on computers and devices is trusted and unmodified.

Software powers your organization and reflects the true value of your business. Protecting that software with a robust code signing process is vital: it assures users that the software is not malicious and establishes the legitimacy of the author, all without limiting access to the code.

Encryption Consulting’s (EC) CodeSign Secure platform

Encryption Consulting’s (EC) CodeSign Secure platform provides you with the facility to sign your software code and programs digitally. Hardware security modules (HSMs) store all the private keys used for code signing and other digital signatures of your organization. Organizations leveraging the CodeSign Secure platform by EC can enjoy the following benefits:

  • Easy integration with leading Hardware Security Module (HSM) vendors
  • Access to the platform restricted to authorized users only
  • Key management service to avoid any unsafe storage of keys
  • Enhanced performance by eliminating bottlenecks in the signing workflow


Why use EC’s CodeSign Secure platform?

There are several benefits to using Encryption Consulting’s CodeSign Secure for your code signing operations. CodeSign Secure helps customers stay ahead of the curve by providing a secure code signing solution with tamper-proof storage for the keys and complete visibility and control of code signing activities. The private keys of the code-signing certificate can be stored in an HSM to eliminate the risks associated with stolen, corrupted, or misused keys. Client-side hashing ensures build performance and avoids unnecessary movement of files, providing a greater level of security. Seamless authentication is provided to code signing clients, with state-of-the-art security features including client-side hashing, multi-factor authentication, device authentication, and multi-tier approver workflows. Support for InfoSec policies improves adoption of the solution and enables different business teams to have their own workflows for code signing. CodeSign Secure is embedded with a state-of-the-art client-side hash signing mechanism, resulting in less data travelling over the network and making it a highly efficient code signing system, with the complex cryptographic operations occurring in the HSM.
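To illustrate the client-side hashing idea in general terms (this is a generic sketch, not CodeSign Secure’s actual implementation), the client can hash a large build artifact locally and submit only the digest for signing; the file name is hypothetical, and the in-memory key stands in for an HSM-backed one:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa, utils

# Client side: hash the (potentially large) build artifact locally,
# so only the 32-byte digest travels to the signing service.
digest = hashes.Hash(hashes.SHA256())
with open("installer.exe", "rb") as artifact:          # hypothetical artifact
    for chunk in iter(lambda: artifact.read(1 << 20), b""):
        digest.update(chunk)
artifact_digest = digest.finalize()

# Server side (normally backed by an HSM): sign the digest, not the file.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
signature = private_key.sign(
    artifact_digest,
    padding.PKCS1v15(),
    utils.Prehashed(hashes.SHA256()),   # tell the API the digest is precomputed
)
```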

Explore more about our CodeSign Secure platform’s features and benefits at the link below:
https://www.encryptionconsulting.com/code-signing-solution/

Use cases covered as part of Encryption Consulting’s CodeSign Secure platform

There are multiple use cases that can be implemented using the CodeSign Secure platform by Encryption Consulting. The majority of these use cases relate to the digital signature concepts discussed above, and the platform caters to the all-round requirements of your organization. Let us look at some of the major use cases covered under Encryption Consulting’s CodeSign Secure:

  • Code Signing: Sign code from any platform, including Apple, Microsoft, Linux, and much more.
  • Document Signing: Digitally sign documents using keys that are secured in your HSMs.
  • Docker Image Signing: Apply digital fingerprints to Docker images while storing keys in HSMs.
  • Firmware Code Signing: Sign any type of firmware binary to authenticate the manufacturer and prevent firmware code tampering.

Organizations with sensitive data and patented code or programs can benefit from the CodeSign Secure platform. Online distribution of software is becoming the de facto standard today, considering the speed to market, reduced costs, scale, and efficiency advantages over traditional software distribution channels such as retail stores or software CDs shipped to customers. Code signing is a must for online distribution. For example, third-party software publishing platforms increasingly require applications (desktop as well as mobile) to be signed before agreeing to publish them. Even if you are able to reach a large number of users without code signing, the warnings shown during the download and installation of unsigned software are often enough to discourage users from proceeding. Encryption Consulting provides strongly secured keys in FIPS-certified encrypted storage systems (HSMs) during the code signing operation. A faster code signing process can be achieved through CodeSign Secure, as the signing occurs locally on the build machine. Reporting and auditing features give InfoSec and compliance teams full visibility into all private key access and usage.

Get more information on CodeSign Secure in the datasheet link provided below:
https://www.encryptionconsulting.com/wp-content/uploads/2020/03/Encryption-Consulting-Code-Signing-Datasheet.pdf

Which signature to use for your organization?

This solely depends on the purpose and intent of using the signature in your organization. You might need to perform a clear assessment, or approach expert consultants like us at Encryption Consulting, to understand which type of signature will suit your purpose better.

Encryption Consulting’s Managed PKI

Encryption Consulting LLC (EC) will completely offload the Public Key Infrastructure environment, which means EC will take care of building the PKI infrastructure to lead and manage the PKI environment (on-premises, PKI in the cloud, cloud-based hybrid PKI infrastructure) of your organization.

Encryption Consulting will deploy and support your PKI using a fully developed and tested set of procedures and audited processes. Admin rights to your Active Directory will not be required and control over your PKI and its associated business processes will always remain with you. Furthermore, for security reasons the CA keys will be held in FIPS 140-2 Level 3 HSMs hosted either in your secure datacentre or in our Encryption Consulting datacentre in Dallas, Texas.

Conclusion

Encryption Consulting’s PKI-as-a-Service, or managed PKI, allows you to get all the benefits of a well-run PKI without the operational complexity and cost of operating the software and hardware required to run the show. Your teams still maintain the control they need over day-to-day operations while offloading back-end tasks to a trusted team of PKI experts.

About the Author

Anish Bhattacharya is a Consultant at Encryption Consulting, working with PKIs, HSMs, creating Google Cloud applications, and working as a consultant with high-profile clients.


Read Time: 7 min

In today’s world, protecting your data is the most critical job at hand for any security expert. Once the data is protected with the help of a data protection tool and passphrases or passwords, the next challenge is how to protect the passphrases, passwords, or secrets themselves. That’s when you need a software or hardware tool that can help you manage secrets effectively and efficiently. AWS Secrets Manager is one such tool: it can manage, retrieve, and rotate passwords, database credentials, API keys, and other secrets throughout their lifecycle. It provides centralized credential management with strong security, eliminating the need to hard-code credentials in application code.

Today, we will discuss the AWS Secrets Manager and its role in credential management facilitating some of the critical security use cases.

Characteristics of AWS Secrets Manager

AWS Secrets Manager provides various capabilities for credential management, such as:

  1. Integration with AWS KMS: AWS Secrets Manager is fully integrated with the AWS KMS service and encrypts secrets at rest with customer-managed KMS keys. While retrieving secrets, it decrypts them using the same KMS keys used for encryption and transmits them securely to your local environment.
  2. Secret Rotation: AWS Secrets Manager enables you to meet your organization’s security and compliance requirements. It provides secret rotation functionality on demand or on a scheduled basis through the AWS Management Console, AWS SDK, or AWS CLI.
  3. Integration with AWS database services: AWS Secrets Manager supports native AWS database services such as Amazon RDS, Amazon DocumentDB, and Amazon Redshift. It also provides the capability to rotate other types of secrets, such as API keys, OAuth tokens, and other credentials, with the help of customized Lambda functions.
  4. Multiple versions of secrets: AWS Secrets Manager can contain multiple versions of a secret, using staging labels attached to the versions during rotation. Each secret version contains a copy of the encrypted secret value.
  5. Manage access with fine-grained policies: AWS Secrets Manager provides flexible access management using IAM policies and resource-based policies. For example, you can allow your custom application running on EC2 to retrieve only the secret it needs to connect to a specific database instance (on-prem or cloud).
  6. Secure and audit secrets centrally: AWS Secrets Manager is fully integrated with the AWS CloudTrail service for logging and audit purposes. For example, AWS CloudTrail will show the API calls related to creating, retrieving, and deleting a secret.



We have discussed some of the characteristics of the Secrets Manager. Now, below are the key points to be kept in mind while working with Secrets Manager:

  1. You can manage secrets for databases, resources in On-prem & AWS cloud, SaaS applications, third-party API keys, and SSH keys, etc.
  2. AWS Secrets Manager provides compliance with all the major industry standards, such as HIPAA, PCI-DSS, ISO, FedRAMP, SOC, etc.
  3. Secrets Manager doesn’t store the secrets in plaintext in persistent storage.
  4. Since Secrets Manager provides secrets over a secure channel, it doesn’t allow requests from any host over an insecure connection.
  5. Secrets Manager supports the AWS tags feature, so you can implement tag-based access control on secrets managed by the secrets manager.
  6. To keep the traffic secured and without passing through the open internet, you can configure a private endpoint within your VPC to allow communication between your VPC and Secrets Manager.
  7. Secrets Manager doesn’t delete secrets immediately; rather, it schedules the deletion after a minimum period of 7 days. Within those 7 days you may recover the secrets if required; after the scheduled period, the secrets are deleted permanently. However, through the AWS CLI, you may delete a secret immediately.
  8. AWS Secrets Manager offers a cost-effective pricing model, charging $0.40 per secret per month and $0.05 per 10,000 API calls.

Use cases for AWS Secrets Manager

  1. Secrets Manager avoids the need to hard-code credentials or sensitive information in your application code. Instead, the application makes an API call to Secrets Manager to retrieve the secret programmatically (see the sketch after this list). Having this mechanism in place prevents sensitive information or credentials from being compromised, as the secret never exists in plaintext in the code.
  2. Secrets Manager provides centralized credential management, which reduces the operational burden and enables active rotation of credentials at regular intervals, improving the security posture of the organization.
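A minimal sketch of use case 1 using boto3, the AWS SDK for Python; the region and secret name are hypothetical:

```python
import json

import boto3

# Hypothetical region and secret name, used for illustration only.
client = boto3.client("secretsmanager", region_name="us-east-1")

response = client.get_secret_value(SecretId="prod/app/db-credentials")
secret = json.loads(response["SecretString"])

# The credentials never appear in source code or configuration files.
db_user = secret["username"]
db_password = secret["password"]
```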

Resources: https://aws.amazon.com/secrets-manager/pricing/

Conclusion:

Secret management plays a critical role in data protection for any organization in any environment (on-prem or cloud). AWS Secrets Manager provides a rich feature set as a secret management solution. It supports a wide variety of secrets, such as database credentials, credentials for on-prem resources, SaaS application credentials, API keys, and SSH keys. In today’s security world there are a number of secret management solutions available; AWS Secrets Manager works seamlessly in the AWS environment and also provides good compatibility with other environments (on-prem) as well.

About the Author

Anish Bhattacharya is a Consultant at Encryption Consulting, working with PKIs, HSMs, creating Google Cloud applications, and working as a consultant with high-profile clients.


Read time: 7 minutes, 30 seconds

In this discussion whiteboard, let us understand: what is PKI? What are the several components involved in Public Key Infrastructure (PKI)? Most importantly, how is the recent global pandemic forcing companies to prefer remote working, and how is this in turn posing a threat to firms’ sensitive data? To secure that sensitive data, we need to understand how to scale Public Key Infrastructure remotely in order to defend against various data breach attacks. Let’s get into the topic:

What is Public Key Infrastructure – PKI?

PKI, or Public Key Infrastructure, is a cybersecurity framework that protects client-server communications. Certificates are used for authenticating the communication between client and server. PKI uses X.509 certificates and public keys to provide end-to-end encryption, so that both server and client can trust each other and verify authenticity, proving the integrity of the transaction. With the increase in digital transformation across the globe, it is highly critical to use Public Key Infrastructure to ensure safe and secure transactions. PKI has vast use cases across several sectors and industries, including medical and finance.

What are important components in Public Key Infrastructure?

There are three key components: Digital Certificates, Certificate Authority, and Registration Authority. PKI protects the environment using these three critical components, which play a crucial role in protecting and securing digital communications and electronic transactions.

  • Digital Certificates: The most critical component of Public Key Infrastructure (PKI) is the digital certificate. Certificates are used to validate and identify the connections between server and client, making those connections secure and trusted. Certificates can be created individually depending on the scale of operations; for a large firm, PKI digital certificates can be purchased from trusted third-party issuers.
  • Certificate Authority: The Certificate Authority (CA) provides authentication and safeguards trust for the certificates used. Whether for individual computer systems or servers, the Certificate Authority ensures the digital identities of users are authenticated, and certificates issued through certificate authorities are trusted by devices.
  • Registration Authority: The Registration Authority (RA) is a component, approved by the Certificate Authority, that handles certificate requests from authenticated users. RA certificate requests range from an individual digital certificate for signing email messages to companies planning to set up their own private certificate authority. The RA sends all approved requests to the CA for certificate processing.

Why should firms automate their Public Key Infrastructure (PKI)?

Manually managing certificates and their lifecycle requires a lot of technical expertise and skill. The certificate management process also consumes a huge amount of time, and there is a high chance of human error creeping into the process. A simple error can prove very costly for your firm’s cybersecurity, as it might lead to a data breach. To overcome the hurdles of finding experienced resources for managing the certificate lifecycle, cybersecurity experts have come up with the process of automating PKI. This not only saves time and money for the organization but also satisfies compliance and regulatory requirements.

What are the benefits of PKI automation?

As discussed before, firms are now looking towards automation of their Public Key Infrastructure to strengthen the management of their certificate lifecycle and provide increased security for their highly sensitive data. At a high level, there are three benefits of shifting towards PKI automation:

  • All-inclusive Data Security
  • Operational Efficiency
  • Business Continuity Management

All-inclusive Data Security: PKI automation will drastically reduce the human errors that increase the risk of a data breach. Automation helps manage the certificate lifecycle with precision: activities such as certificate renewal and replacement can be performed on time. PKI automation ensures that all machines that require new certificate deployment or replacement are immediately addressed with accuracy. This eliminates any risk of non-compliance due to outdated certificates in critical systems.

Operational Efficiency: Operational efficiency is an important parameter for any organization’s success. PKI automation saves the ample amount of time that goes into manually managing the certificate lifecycle and brings better efficiency to handling certificate activities. Leveraging PKI automation also reduces the cost burden on firms. Considering all these factors, we can safely say that operational efficiency will be enhanced through PKI automation.

Business Continuity Management: If there is one important lesson we learned from the recent global pandemic, it is handling unexpected outages due to known and unknown factors. A recent survey found that poor certificate management is a major cause of system outages. Manual certificate management is the main reason for unwanted certificate expiry and improper deployment of new certificates. A PKI automation process, which includes automated discovery of endpoint machines, new certificate deployment, and renewal or re-issuance of near-expiry certificates, eliminates the risk of such outages and in turn strengthens the business continuity management of the organization.

How to automate PKI?

There are several ways to automate Public Key Infrastructure (PKI), depending on the organization’s requirements. You need to choose the appropriate implementation method to automate your PKI for enhanced efficiency. The method of implementation also depends on your Certificate Authority (CA) and whether it provides APIs for integration. Let us discuss, at a high level, four different ways to implement PKI automation:

  • REST API Integration.
  • Simple Certificate Enrollment Protocol (SCEP).
  • Enrollment over Secure Transport (EST).
  • Active Directory Auto-Enrollment. 

One of the most prominent and common ways of automating your PKI is API integration. If your Certificate Authority (CA) and the corresponding tools and software support API integration, you can leverage REST API integration. You can perform API integration from scratch, developing your own scripts that make API calls to the server to request a certificate and pass it on to the device, or you can leverage existing tools in the market that perform the integration for you. Prominent software solutions such as Tanium, Casper, etc. provide integration support for automation.
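As a generic sketch of the from-scratch approach (the endpoint URL, token handling, and JSON fields are hypothetical, since every CA exposes its own API):

```python
import requests

# Hypothetical CA REST endpoint and token; a real CA defines its own API.
CA_URL = "https://ca.example.com/api/v1/certificates"
API_TOKEN = "example-token"   # retrieved from a secrets store, never hard-coded

# Read a certificate signing request generated on or for the device.
with open("device.csr", "r") as f:
    csr = f.read()

response = requests.post(
    CA_URL,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"csr": csr, "profile": "tls-client", "validity_days": 365},
    timeout=30,
)
response.raise_for_status()

# Persist the issued certificate for deployment to the device.
with open("device.crt", "w") as f:
    f.write(response.json()["certificate"])
```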

The second option is SCEP. SCEP (Simple Certificate Enrollment Protocol) is an open certificate management protocol that automates the task of certificate issuance. SCEP is readily available and supported by the majority of operating systems, such as Android, Microsoft Windows, Linux, and iOS. This option requires a SCEP agent on the device and works in concert with your enterprise device management tools: the management software sends a script down to the device, which hits the SCEP service with the necessary configuration details to retrieve the certificate. One major advantage is that the SCEP agent knows how to retrieve certificates onto the device.


The third option available for implementing PKI automation is EST – Enrollment over Secure Transport. EST is an enhancement to SCEP and provides all the functionality of SCEP; the additional feature offered by EST is support for Elliptic Curve Cryptography (ECC). Both SCEP and EST are used to automate the certificate enrollment process, but SCEP uses a shared-secret protocol and CSRs for enrolling certificates, whereas EST uses TLS for authentication. EST uses TLS to securely transport the messages and certificates, whereas SCEP uses pkcsPKIEnvelope envelopes to secure the messages.

The last option in our discussion of automated certificate management is Microsoft Active Directory (AD) Auto-Enrollment. Windows PCs and servers can utilize this option through the Microsoft certificate store; services such as Internet Information Services (IIS) and Exchange Server use the Microsoft certificate store for auto-enrollment. As you might expect, this option is only applicable to Windows machines that use Microsoft services.

Finally, which option to choose for implementing PKI automation depends solely and completely on the organization’s IT infrastructure. Consulting firms like us come into play in this step, helping select and implement PKI automation with less effort, lower overhead, and more efficiency.

Encryption Consulting’s Managed PKI

Encryption Consulting LLC (EC) will completely offload the Public Key Infrastructure environment, which means EC will take care of building the PKI infrastructure to lead and manage the PKI environment (on-premises, PKI in the cloud, cloud-based hybrid PKI infrastructure) of your organization.

Encryption Consulting will deploy and support your PKI using a fully developed and tested set of procedures and audited processes. Admin rights to your Active Directory will not be required and control over your PKI and its associated business processes will always remain with you. Furthermore, for security reasons the CA keys will be held in FIPS 140-2 Level 3 HSMs hosted either in your secure datacentre or in our Encryption Consulting datacentre in Dallas, Texas.

Conclusion

Encryption Consulting’s PKI-as-a-Service, or managed PKI, allows you to get all the benefits of a well-run PKI without the operational complexity and cost of operating the software and hardware required to run the show. Your teams still maintain the control they need over day-to-day operations while offloading back-end tasks to a trusted team of PKI experts.

About the Author

Anish Bhattacharya is a Consultant at Encryption Consulting, working with PKIs, HSMs, creating Google Cloud applications, and working as a consultant with high-profile clients.
