
Troubleshooting LDAP issues

Troubleshooting LDAP issues can seem tricky, and this blog should help you on your troubleshooting journey. We will walk through two scenarios that cover the most common LDAP errors.

Scenario 1

This scenario covers the case where your certificate was not published in Active Directory. To resolve this issue, run the following commands:

To resolve AIA issues:     certutil -dspublish -f <path to root certificate> RootCA

To resolve CDP issues:    certutil -dspublish -f <path to root crl> <hostname>

If the issue exists for the issuing CA, replace RootCA with SubCA in the AIA command and use the issuing CA's certificate, CRL, and hostname, as shown in the example below.
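
For example, assuming the certificate and CRL files were exported to C:\Certs, and assuming CA01 and CA02 are the root and issuing CA hostnames (all of these values are placeholders for your environment):

    certutil -dspublish -f "C:\Certs\Encon Root CA.crt" RootCA
    certutil -dspublish -f "C:\Certs\Encon Root CA.crl" CA01
    certutil -dspublish -f "C:\Certs\Encon Issuing CA.crt" SubCA
    certutil -dspublish -f "C:\Certs\Encon Issuing CA.crl" CA02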


After this, you can check whether the certificate is present in Active Directory.

To do that, log in to the domain controller, open adsiedit.msc, connect to the Configuration naming context, and navigate to Services > Public Key Services > AIA to check the certificates present there. If the certificates are present and you still receive the error, follow Scenario 2.


Scenario 2

If Scenario 1 does not fix the issue, the LDAP URL may have been configured incorrectly when the AIA points were set up on your root CA.


To resolve this, first open PKIView.msc to check which LDAP URL your PKI is looking for. For this scenario, my PKI is looking for:

ldap:///CN=Encon%20Root%20CA,CN=AIA,CN=Public%20Key%20Services,CN=Services,CN=ROOTCAOCS,CN=Configuration,DC=Encon,DC=com?cACertificate?base?objectClass=certificationAuthority

But the certificate is published on:

CN=Encon Root CA,CN=AIA,CN=Public Key Services,CN=Services,CN=Configuration,DC=encon,DC=com

You can check the distinguished name on the object present in ADSIedit.msc.

To resolve this, follow these steps:

  1. Create a new container structure in the Configuration partition that matches the path your PKI is looking for

    CN=Encon Root CA,CN=AIA,CN=Public Key Services,CN=Services,CN=ROOTCAOCS,CN=Configuration,DC=Encon,DC=com

  2. Create an Object under Configuration

    Root CA - Domain partition
  3. Choose the object class “container”

    Root CA - Container
  4. Provide the exact value ROOTCAOCS as highlighted above

    Root CA - Object Creation
  5. Click Finish

    Root CA - Creation of object
  6. Follow steps 2-5 to create further containers in ROOTCAOCS > Services > Public Key Services > AIA

    containers in AIA
  7. Run the command on Domain Controller to extract the published object

    ldifde -d "CN=Encon Root CA,CN=AIA,CN=Public Key Services,CN=Services,CN=Configuration,DC=encon,DC=com" -f c:\export.txt

    Root CA - extract the published object
  8. Edit export.txt and replace the existing dn with the distinguished name your PKI is looking for (a sample of the edited file is shown after these steps)

    CN=Encon Root CA,CN=AIA,CN=Public Key Services,CN=Services,CN=ROOTCAOCS,CN=Configuration,DC=encon,DC=com

    You also need to remove GUID, USN Information, and other details.

  9. Import the edited file to publish the object: ldifde -i -f c:\export.txt

    Root CA - Export file
  10. The object should now be present in the new location

    ADSI - Root CA
  11. PKI View should show no errors

    PKI - Issuing CA
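
As a rough sketch, the edited export.txt from step 8 might look like the following before import. The base64 certificate value is abbreviated here, and the GUID, USN, and other replicated attributes have been removed; only the dn has been changed to the path your PKI expects.

    dn: CN=Encon Root CA,CN=AIA,CN=Public Key Services,CN=Services,CN=ROOTCAOCS,CN=Configuration,DC=encon,DC=com
    changetype: add
    objectClass: certificationAuthority
    cn: Encon Root CA
    cACertificate:: MIIF...(base64 certificate data)...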

Conclusion

Issues with CDP and AIA LDAP locations can be tricky, and misconfigurations can be hard to track down. These two scenarios cover the LDAP URL issues you are most likely to face in your PKI environment: if Scenario 1 does not fix your issue, Scenario 2 usually will.


Read Time: 5 minutes

Active Directory (AD) is a critical component of many organizations’ IT infrastructure. It provides a central repository for user, group, and computer accounts, as well as a variety of other objects, such as shared resources and security policies. In order for AD to function properly, certain ports must be opened on the firewall to allow communication between AD servers and clients.

Ports required for AD communication

The following ports are required for basic AD communication:

  • TCP/UDP port 53: DNS
  • TCP/UDP port 88: Kerberos authentication
  • TCP/UDP port 135: RPC
  • TCP/UDP port 137-138: NetBIOS
  • TCP/UDP port 389: LDAP
  • TCP/UDP port 445: SMB
  • TCP/UDP port 464: Kerberos password change
  • TCP/UDP port 636: LDAP SSL
  • TCP/UDP port 3268-3269: Global catalog

In addition to these ports, other ports may be required depending on your AD environment’s specific components and features. For example, if you are using Group Policy, the following ports will also be required:

  • TCP port 80: HTTP
  • TCP port 443: HTTPS
  • TCP port 445: SMB

If you are using ADFS (Active Directory Federation Services) for single sign-on, the following ports will also be required:

  • TCP port 80: HTTP
  • TCP port 443: HTTPS
  • TCP port 49443: ADFS
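
As a sketch of how the core rules might be created with PowerShell on a Windows firewall (the port list below only covers the basic AD ports from the first list; extend it for the Group Policy and ADFS ports if you use those features, and adjust direction and scope to your topology):

    # Allow core Active Directory ports inbound (TCP and UDP variants as listed above)
    New-NetFirewallRule -DisplayName "AD core TCP" -Direction Inbound -Protocol TCP -LocalPort 53,88,135,389,445,464,636,3268,3269 -Action Allow
    New-NetFirewallRule -DisplayName "AD core UDP" -Direction Inbound -Protocol UDP -LocalPort 53,88,135,137,138,389,445,464 -Action Allow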

Ports required for PKI communication

In order for a PKI to function properly, certain ports need to be opened on the firewall to allow communication between the various components of the PKI system. These ports include:

  1. TCP port 80

    This port is used for HTTP communication, which is required for clients to access the certificate revocation list (CRL) and other information from the certificate authority (CA) server.

  2. TCP port 389

    This port is used for LDAP communication, which clients need in order to retrieve certificates and CRLs that the CA publishes to Active Directory.

  3. TCP port 636

    This port is used for LDAPS communication, a secure version of LDAP that uses SSL/TLS for encryption. This is required if you are using LDAP over a public network.

  4. TCP port 9389

    This port is used by Active Directory Web Services (ADWS), which directory management tools such as the Active Directory module for Windows PowerShell rely on when querying AD.

In addition to these ports, you may also need to open other ports depending on your PKI system’s specific components and configuration. For example, if you are using the Online Certificate Status Protocol (OCSP) to check the status of certificates, you will need to open whichever port your OCSP responder listens on (the Microsoft Online Responder answers over HTTP on TCP port 80; some third-party responders use TCP port 2560).

Troubleshooting firewall issues with PKI

To troubleshoot common firewall issues with a PKI, you can follow these steps:

  • Verify that the necessary ports are open on the firewall. You can do this by using the netstat command to list all of the open ports on the system and compare the results with the list of ports that are required for your PKI system.
  • Check the firewall logs to see any entries related to the PKI system. This can help you to identify any specific rules or settings that may be blocking the necessary ports.
  • Test the connectivity between the PKI components to ensure they can communicate properly. You can do this by using the ping, telnet, or tracert commands to test the connectivity between the client and the CA server and between other components of the PKI system.
  • If you are still having issues with the firewall, try temporarily disabling the firewall to see if this resolves the problem. This will help you to determine whether the firewall is the cause of the issue or if there is a problem with another component of the PKI system.
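
For example, a quick PowerShell connectivity check from a client to the CA server might look like this (the hostname and port list are placeholders; test the ports your environment actually uses):

    $target = "ca01.encon.com"
    foreach ($port in 80,389,443,636) {
        Test-NetConnection -ComputerName $target -Port $port |
            Select-Object ComputerName, RemotePort, TcpTestSucceeded
    }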

Conclusion

Maintaining the proper firewall configuration is important in ensuring that your Active Directory and PKI system functions properly. By verifying that the necessary ports are open and troubleshooting any firewall issues that may arise, you can help to keep your Active Directory and PKI system secure and reliable.


Read Time: 10 Minutes

CDP and AIA Points can sometimes be confusing, but they are among the most important pillars of a functional PKI environment. Properly configured CDP and AIA points result in a healthier PKI environment, but debugging issues with them can be tricky. This article is intended to help you prevent and resolve any CDP/AIA issues you may face during the PKI configuration and debugging stages.

Configuration of CDP/AIA Points

Proper configuration of CDP and AIA points can be tricky. It involves setting the right permissions on the locations where CRLs are published and making sure those locations are accessible to clients.

Improper configuration of CDP can result in two scenarios:

  • CRL fails to be published, or
  • CRLs cannot be accessed

Similarly, if AIA is not configured properly, then certificates of root and issuing CAs cannot be accessed.

In both of the scenarios, PKI fails to function properly.

Configuration of AIA

AIA is the easiest to configure. If OCSP is used, the AIA decimal value may include 32 (for a combined value of 34); otherwise, it is typically 2.

Display Name Decimal Value
Publish at this location 1
Include in the AIA extension of issued certificates 2
Include in the Online Certificate Status Protocol (OCSP) extension 32

Decimal Value 1 is included when publishing in the %windir%\System32\certsrv\CertEnroll location. It can also be included to publish directly onto the AIA location if proper permissions are provided.

[Note: If the location is behind a load balancer, the CA cannot publish to both servers and publication may fail. Do not publish directly to those locations; a manual copy is recommended instead.]

Decimal Value 2 is included when the URL is to be added to the issued certificates. These URLs act as the AIA points from where the certificates can be extracted.

Example:

Here, we give the local CertEnroll folder a decimal value of 1, as that is where the CA certificate is published. The HTTP location has a decimal value of 2: we do not publish the certificate there automatically, but it acts as the AIA location from which clients retrieve the CA’s certificate.

Configuration of AIA
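
A hedged certutil equivalent of the example above, assuming http://pki.encon.com/CertEnroll is the HTTP AIA location (substitute your own URL and path):

    certutil -setreg CA\CACertPublicationURLs "1:C:\Windows\system32\CertSrv\CertEnroll\%1_%3%4.crt\n2:http://pki.encon.com/CertEnroll/%1_%3%4.crt"

After changing publication URLs, restart the CA service (net stop certsvc && net start certsvc) for the change to take effect.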

Configuration of CDP

Configuration of CDP can depend on how the CA is configured, how the CRLs are published and accessed, and if Delta CRLs are published.

Display Name | Description | Decimal Value
Publish CRLs at this location | Used by the CA to determine whether to publish base CRLs to this location | 1
Include in the CRL Distribution Point (CDP) extension of issued certificates | Used by clients during revocation checking to find the base CRL location | 2
Include in [base] CRLs | Used by clients during revocation checking to find the delta CRL location from base CRLs | 4
Include in all CRLs | An offline CA can use it to specify the LDAP URL for manually published CRLs. You must also set the explicit configuration container in the URL or the DSConfigDN value in the registry (for example, certutil -setreg CA\DSConfigDN "CN=Configuration,DC=encon,DC=com") | 8
Publish Delta CRLs to this location | Used by the CA to determine whether to publish Delta CRLs to this location | 64

Decimal values are used accordingly based on how the CDP needs to be configured.

Decimal Value 1 is mostly used to publish CRLs to the %windir%\System32\certsrv\CertEnroll location, or to other locations the CA has proper permissions to access. If Delta CRLs are also published, 65 (1+64) is used.

[Note: If the location is behind a load balancer, the CA cannot publish to both servers and publication may fail. Do not publish directly to those locations; a manual copy is recommended instead.]

Decimal Value 2 includes the URL or location in issued certificates and lets it act as a CDP location from which base or delta CRLs are accessed.

Example

Configuration of CDP

Decimal Value 79 = CRLs are published at that location (1) + Location is added to CDP (2) + Delta CRL location (4) + Include in all CRLs (8) + Publish Delta CRL at this location (64)

Decimal Value 65 = Publish CRL at this location (1) + Publish Delta CRL at this location (64)

Decimal Value 6 = Location is added to CDP (2) + Delta CRL location (4)
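
A hedged certutil equivalent combining these values, again assuming http://pki.encon.com/CertEnroll as the HTTP CDP location (replace the paths and URLs with your own):

    certutil -setreg CA\CRLPublicationURLs "65:C:\Windows\system32\CertSrv\CertEnroll\%3%8%9.crl\n6:http://pki.encon.com/CertEnroll/%3%8%9.crl\n79:ldap:///CN=%7%8,CN=%2,CN=CDP,CN=Public Key Services,CN=Services,%6%10"

Run certutil -crl afterwards to publish a fresh CRL to the locations flagged for publishing.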

CRL Replacement Tokens

In the above examples, you may have noticed %1, %3, %4, %8, and %9. These are replacement tokens: they reflect how the CA is configured and determine what the published file names will be. If the files are renamed, the AIA and CDP points may fail because the naming convention no longer matches.

Token Name Description Map Value
ServerDNSName The DNS name of the CA Server %1
ServerShortName The NetBIOS name of the server %2
CAName The name of the CA %3
Cert_Suffix The renewal extension of the CA %4 (as per Windows 2000 mapping)
CertificateName The renewal extension of the CA %4 (as per Windows 2003 mapping)
ConfigurationContainer The location of the Configuration container in AD %6
CATruncatedName The “sanitized” name of the CA %7
CRLNameSuffix The renewal extension of the CRL %8
DeltaCRLAllowed If Delta CRL is allowed, + is added at the end of the file name to indicate a delta CRL %9
CDPObjectClass The object class of a CDP object in Active Directory (cRLDistributionPoint) %10
CAObjectClass The object class of a CA object in Active Directory (certificationAuthority) %11

Based on the mapping value, the AIA and CDP points are given a naming convention to find the correct file from those locations.
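
For example, on a CA named Encon Issuing CA (the CA name used in this article's examples), the CDP pattern %3%8%9.crl would typically resolve to a base CRL named "Encon Issuing CA.crl" and, if delta CRLs are enabled, a delta CRL named "Encon Issuing CA+.crl"; after the CA certificate is renewed with a new key, %8 adds a suffix such as "(1)" to the file name.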

Debugging CDP/AIA location issues

If the CDP/AIA locations are not configured properly, the following steps will help resolve the issues. We will use an AIA issue as the example, but the same approach works for CDP issues.

After we open and check PKIView.msc, we can see where the issue is. We can copy the URL to a notepad for further investigation.

Debugging CDP/AIA location issues

If the issue is on the LDAP location, AD does not have our certificate. This is quite an easy fix.

Certificates referenced via LDAP are retrieved from a domain controller. If we open ADSIEdit.msc on the domain controller, we can navigate to Services > Public Key Services > AIA and check the certificates present there. Since our Issuing CA certificate is absent, the PKI environment cannot retrieve the certificate from that location.

LDAP

To resolve this, we navigate to our issuing CA and run the command
certutil -dspublish -f <path to certificate> SubCA

Issuing CA Certificate

Once the command runs successfully, we can refresh our PKIView.msc to check if the issue is resolved, and we should see a clean slate

PKI clean slate

However, if AIA location #2, the HTTP location, is causing errors, it is because the certificate isn’t present at that server endpoint.

PKI - AIA location

To resolve this, we copy the certificate from %windir%\System32\certsrv\CertEnroll location to our web server, which hosts our certificate.

hosts our certificate
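
As a sketch, assuming the web server is named WEB01 and serves the AIA/CDP content from C:\inetpub\wwwroot\CertEnroll (both are placeholders for your environment):

    robocopy "%windir%\System32\certsrv\CertEnroll" "\\WEB01\c$\inetpub\wwwroot\CertEnroll" *.crt *.crl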

This would resolve our issue, which we can check on PKIView.msc again.

configuration of CDP/AIA

Conclusion

Issues with CDP and AIA locations can be tricky. Misconfiguration can often cause issues, which can be harder to track. With this guide, we hope to make the configuration of CDP/AIA points much easier with debugging steps to support any technical issues.

If you need help with your PKI environment, feel free to email us at info@encryptionconsulting.com.


Read Time: 7 minutes

An issuing CA often needs to be decommissioned for a variety of reasons, such as:

  • The operating system is reaching the end of its life
  • CA might be compromised
  • CA is having operational issues and is too complicated to clean up.

No matter the reason, migrating to a new issuing CA can seem easy, but it comes with its own challenges, such as mitigating risks and minimizing operational impact.

While migrating issuing CAs, we need to ensure,

  1. The current certificates that are issued remain operational until their validity runs out.
  2. The old issuing CA should not issue any more certificates.
  3. We can move to new CDP/AIA points if changes are required, while the old issuing CA keeps a minimal operational footprint and has no impact on the PKI infrastructure.

Prerequisites

Before we begin, you need a new server that will act as the new issuing CA. The server should be configured, and the issuing CA role should be installed.

Steps to Migrate

These steps will help organizations migrate to the new issuing CA.

If the CDP/AIA points are not being changed, steps 2, 4, and 5 can be skipped.

  1. Log onto the old Issuing CA
  2. Identify CDP/AIA Distribution Points (optional)
    1. Open command prompt
    2. Type the command

      certutil -getreg CA\CACertPublicationURLs

      migrate to the new issuing CA
    3. Document the HTTP AIA points. Ignore LDAP and C:\%windir%

      As per the screenshot, the AIA point is http://pki.encon.com/CertEnroll/%1_%3%4.crt

      %1 refers to ServerDNSName

      %3 refers to CaName

      %4 refers to Certificate Name

      Actual URL: http://pki.encon.com/CertEnroll/iCA2016.encon.com_iCA2016.crt

    4. Type the command

      certutil -getreg CA\CRLPublicationURLs

    5. Document the HTTP CDP Points. Ignore LDAP and C:\%windir%

      As per the screenshot, the CDP point is http://pki.encon.com/CertEnroll/%3%8%9.crl

      %3 refers to CaName

      %8 refers to CRLNameSuffix

      %9 refers to DeltaCRLAllowed

      Actual URL: http://pki.encon.com/CertEnroll/iCA2016.crl

  3. Disable Delta CRL publishing and issue a new, long-lived Certificate Revocation List (CRL)
    1. Open Certificate Authority Console
      Certificate Revocation List (CRL)
    2. Right-click on Revoked Certificates, and then click properties
      Revoked Certificates properties
    3. Uncheck “Publish Delta CRL”
      Uncheck Publish Delta CRL
    4. Edit the “CRL publication interval” to 99 years

      Click OK

    5. Open command prompt as administrator
    6. Type the following command

      certutil -crl

  4. Copy old CA’s certificate (crt) and Certificate Revocation List (CRLs) files to new CDP/AIA Points (optional)
    1. Navigate to %windir%\System32\CertSrv\CertEnroll
    2. Copy the old CA’s crt and CRL files to new CDP/AIA Points
      CRL files to new CDP/AIA Points
  5. Redirect AIA and CDP points of old CA to the new location

    This can be done using either:

    1. IIS redirect, or
    2. DNS CNAME

      redirecting the AIA and CRL of the old Certification Authority.

  6. Document all certificate templates and stop certificate publishing on old Issuing CA
    1. Open the command line with elevated privileges
    2. Run

      Certutil -catemplates > c:\catemplates.txt

      and document all certificate templates published at the old Certification Authority

      Certification Authority

      The output lists all certificate templates currently published on the old Certification Authority.

    3. Launch the Certification Authority console
    4. Navigate to “Certificate Templates”
    5. Highlight all templates in the right pane, right-click and then click “Delete”
      CRL Distribution point

      The old Certification Authority can no longer issue any certificates and has all of its AIA and CRL locations redirected to the new CRL Distribution Point. The next steps will detail how users can document the certificate templates published on the old issuing CA and how to make them available at the new issuing CA.

  7. Sort Certification Authority Database, identify and document all certificates issued based on certificate templates
    1. Open Certificate Authority Console.
    2. Highlight Issued Certificates.
    3. Move to the right and sort by “Certificate Templates.”
      Certificate Authority Console
    4. Identify the certificates that are issued by default certificate template types.
    5. Document the certificates issued by custom certificate templates i.e. any template other than the default certificate templates.
  8. Document certificates based on default certificate template types
    1. Open command prompt with elevated privileges
    2. Run

      Certutil -view -restrict "Certificate Template=Template" -out "SerialNumber,NotAfter,DistinguishedName,CommonName" > c:\TemplateType.txt

      Note: Replace Template with the correct template name (a consolidated example follows at the end of this procedure)

    3. Review the output on TemplateType.txt and document all certificates that need immediate action (requiring issuance from the new CA infrastructure if needed, such as Web Server Certificate)
    4. Consult with application administrators using the certificates to determine the best approach to replace the certificates if needed
  9. Document certificates based on custom certificate types
    1. Open Certification Authority Console
    2. Right click on Certificate Templates, and click Manage
      custom certificate types
    3. Double-click the certificate template and click on the "Extensions" tab
    4. Click on "Certificate Template Information"
    5. Copy the Object Identifier (OID) number – the number will look similar to

      1.3.6.1.4.1.311.21.8.5363900.15781253.12798358.11444918.12080715.141.12736713.11129372

    6. Open the command prompt with elevated privileges
    7. Run

      Certutil -view -restrict "Certificate Template=1.3.6.1.4.1.311.21.8.5363900.15781253.12798358.11444918.12080715.141.12736713.11129372" -out "SerialNumber,NotAfter,DistinguishedName,CommonName" > c:\CustomTemplateType.txt

      Note: Replace the OID number with the number identified in step 5 (see also the consolidated example after this procedure)

      Certutil
    8. Examine the output of c:\CustomTemplateType.txt and document all the certificates needing immediate action (requiring issuance from the new CA infrastructure if needed, such as custom SSL certificates).
    9. Consult with the application administrator using the certificates to determine the best approach to replace the certificates if needed
  10. Enable certificate templates needed on the results of steps 7 to 9 on new Issuing CA
    1. Login to new Issuing CA.
    2. Right-click on “Certificate Templates,” click New, and click “Certificate Templates to Issue.”
      Certificate Template to issue
    3. Choose all certificate templates needed in the "Enable Certificate Templates" window and click OK.
      Enable Certificate Templates
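
For quick reference, the inventory commands from steps 8 and 9 can be run as follows. This is a sketch: WebServer is one of the default template names, and the OID is the hypothetical value copied in step 9; substitute the template names and OIDs you actually documented.

    Certutil -view -restrict "Certificate Template=WebServer" -out "SerialNumber,NotAfter,DistinguishedName,CommonName" > c:\WebServerCerts.txt
    Certutil -view -restrict "Certificate Template=1.3.6.1.4.1.311.21.8.5363900.15781253.12798358.11444918.12080715.141.12736713.11129372" -out "SerialNumber,NotAfter,DistinguishedName,CommonName" > c:\CustomTemplateType.txt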

Conclusion

With these steps, organizations can migrate to the new issuing CA while decommissioning the old issuing CA. With Windows Server 2012 reaching end of support in October 2023, organizations need a way to migrate to newer operating systems with minimal impact.

If your organization needs assistance with this migration, feel free to email us at info@encryptionconsulting.com, and we will ensure that your migration goes as smoothly as possible.




Enabling LDAPS with Microsoft PKI

Reading time: 3 minutes, 27 seconds

LDAPS is one of the most crucial functionalities to properly protect and secure credentials in your PKI environment. By default, LDAP communications between client and server applications are not encrypted. This means that it would be possible to use a network monitoring device or software and view the communications traveling between LDAP client and server computers. This is especially problematic when an LDAP simple bind is used because credentials (username and password) are passed over the network unencrypted. This could quickly lead to the compromise of credentials.

Prerequisites

A functional Microsoft PKI should be available and configured; PKIView.msc should show no errors.

If you need help deploying your own PKI, you can refer to this article to build your own two-tier PKI.

Installing AD LDS

This step should be carried out on the LDAP server, or on the domain controllers that will host the LDAPS service.

  • Open Server Manager
  • From manage, open Add Roles and Features
  • On Before you Begin, click Next
  • On Installation type, ensure Role-based or feature-based installation is selected, and click Next
  • On Server Selection, click Next.
  • On Server Roles, click Active Directory Lightweight Directory Services, and click Add Features, and then click Next
  • On Features, click Next
  • On AD LDS, click Next
  • On Confirmation, click Install
  • Post Installation, AD LDS needs to be configured

Configuring AD LDS

  • Run the AD LDS Setup Wizard. Click Next on the first page.
  • Ensure A unique instance is selected, and click Next
  • Provide Instance name and Description, and click Next
  • Leave default ports and click Next

If AD LDS is installed on a domain controller, the LDAP port will be 50000 and the SSL port will be 50001

  • On Application Directory Partition, click Next
  • On File locations, click Next
  • On Service Account Selection, you may leave it on the Network service account, or choose a preferred account that can control LDAPS service
  • On AD LDS administrators, leave the current admin, or choose another account from the domain
  • Choose all LDF Files to be imported, and click Next
  • On Ready to Install, click Next
  • After Installation, click Finish

Publishing a certificate that supports Server Authentication

  • Login to the Issuing CA as enterprise admin
  • Ensure you are in Server Manager
  • From the Tools menu, open Certificate Authority

Expand the console tree, and right click on Certificate Templates

  • Select Kerberos Authentication (as it provides Server Authentication). Right-click and select Duplicate Template. We can now customize the template.
  • Change the Template Display Name and Template Name on the General tab. Check Publish Certificate in Active Directory. This will ensure that the certificate appears when we enroll domain controllers using that template
  • On Request Handling, check Allow private key to be exported.
  • On the Security tab, provide Enroll permissions to appropriate users
  • Click Apply

Issue the Certificate on Issuing CA

  • Login to the Issuing CA as enterprise admin
  • Ensure you are in Server Manager
  • From the Tools menu, open Certificate Authority

Expand the console tree, and click on Certificate Templates

On the menu bar, click Action > New > Certificate Template to Issue

  • Choose the LDAPS certificate
  • Click OK and it should now appear in Certificate Templates

Requesting a certificate for Server Authentication

  • Log into LDAP server or domain controller.
  • Type win+R and run mmc
  • Click File and click Add/Remove Snap-in
  • Choose Certificates and click Add
  • Choose Computer account
  • If the steps are followed on LDAPServer where AD LDS is installed, click Local computer, or choose Another computer and choose where it would need to be installed
  • Expand the console tree, and inside Personal, click Certificates
  • Right click on Certificates and click All Tasks and select Request New Certificate
  • Follow the instructions, choose the LDAPS template that we issued earlier, and click Install
  • Once Installed click Finish
  • Open the certificate, and in Details tab, navigate to Enhanced Key Usage to ensure Server Authentication is present.

Validating LDAPS connection

  • Login to LDAP Server as Enterprise admin
  • Type win+R and run ldp.exe
  • On the top menu, click on Connections, and then click Connect
  • In Server, provide the domain name, ensure SSL is checked and the correct port is provided (636, or 50001 if AD LDS runs on a domain controller), and click OK
  • No errors should appear. If the connection was unsuccessful, an error message will be shown instead
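
You can also validate the LDAPS connection from PowerShell. A minimal sketch, assuming the LDAPS endpoint is dc01.encon.com on port 636 (adjust the hostname, and the port if AD LDS is listening on 50001):

    Add-Type -AssemblyName System.DirectoryServices.Protocols
    $conn = New-Object System.DirectoryServices.Protocols.LdapConnection("dc01.encon.com:636")
    $conn.SessionOptions.SecureSocketLayer = $true
    $conn.AuthType = [System.DirectoryServices.Protocols.AuthType]::Negotiate
    $conn.Bind()   # throws an exception if the TLS handshake or bind fails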

Conclusion

This enables LDAPS, which protects the credentials used in your PKI environment and allows other applications to communicate with the directory securely.

If you need help with your PKI environment, feel free to email us at info@encryptionconsulting.com.


Read time: 20 minutes

Deploying Active Directory Certificate Services is a straightforward way for enterprises to build their PKI infrastructure. But it does have its shortcomings, such as

  • Lack of deployment in multiple regions
  • High latency on CDP and AIA points

To overcome this, organizations need to deploy region-specific PKI infrastructure, which can be harder to maintain and introduces complexity to the whole infrastructure.

But using Azure, organizations can deploy a PKI infrastructure that can be operated worldwide with low latency and high availability.

In this article, we will show you how to build your own PKI architecture on Azure.

Note: If this is your first time deploying a PKI, I recommend following ADCS Two Tier PKI Hierarchy Deployment as it is a more straightforward approach and also touches the basics.

Prerequisites

  • An Azure account where we will create Virtual Machines and blob storage
  • A custom domain name
  • An offline Windows Server VM, which will be our Root CA

[NOTE: This is a test scenario. As such, CDP and AIA points may not match your requirements. Do use values that are appropriate as per your requirements.]

Preparing CDP and AIA points

We will create blob storage that will act as our CDP/AIA points for our PKI infrastructure. We will also associate it with our custom domain to redirect it to our blob.

Creating Azure Blob Storage

  1. First, we would need to log into our Azure account and navigate to Storage Accounts

    Azure Blob Storage
  2. We will be creating a new storage account. So click Create on the top left corner.

    storage accounts
  3. Provide the necessary details on the basics. For Redundancy, I would recommend at least Zone-redundant Storage (ZRS)

    Zone-redundant Storage (ZRS)
  4. On the Advanced tab, leave everything on default and click next

  5. On the Networking tab, it is recommended to allow public access from selected virtual networks and IP addresses, and to select the virtual network where all the virtual machines will be deployed. If no virtual network exists, create one.

    Azure Networking tab
  6. On the Data Protection tab, click Next.
  7. On the Encryption tab, leave everything default and click Next.
  8. Provide relevant tags and click Next.
  9. On the Review tab, verify that everything looks good and click Create.

This will create the blob storage. Next, we will associate this blob storage with our custom domain and ensure it is accessible via HTTP.

Mapping a custom domain to Azure Blob Storage

For this step, you would need a custom domain. Once you log in, you can navigate to DNS settings

  1. In DNS settings, navigate to DNS records and enter a CNAME record.
  2. Now we need to retrieve the hostname of your storage account. For this, navigate to Settings > Endpoints on the left pane and copy the static website URL under Static Website. It should be something like https://pkitest.z13.web.core.windows.net/

    Remove the https:// and the trailing /. It would look like pkitest.z13.web.core.windows.net, which is our hostname

  3. Now, in the DNS settings, provide pkitest as the name of the CNAME record, and provide the hostname of the storage endpoint as its value (see the example record after these steps)

    Provide custom domain

    Click to create a record

  4. Navigate to Azure Storage account, click on Networking under Security + Networking and select Custom Domain on the tab above.

  5. Provide the subdomain you created.

    Security and Networking
  6. Click Save. After successful validation, you will get a validation notification

    validation notification for azure account
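
For reference, the finished CNAME record would look roughly like the following. The names come from the example above, and the exact syntax depends on your DNS provider:

    pkitest.encryptionconsulting.com.    CNAME    pkitest.z13.web.core.windows.net.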

Disabling secure transfer required

Because this blob will be a CDP/AIA point, we need HTTP access to the blob, which is why we need to turn off secure transfer. If it is enabled, HTTP access will not be possible and our PKI won’t be able to use this blob as a CDP/AIA point.

  1. Navigate to Configuration under Settings

  2. Set Secure Transfer Required to Disabled

    Disabling secure transfer required
  3. Click Save

Testing Accessibility of Storage Account

This section will ensure our storage account is accessible via a custom domain.

  1. First, we would create a container and upload a file to it

  2. Navigate to Containers under Data Storage

    Testing Accessibility of Storage Account
  3. On the top left corner, click + Container

  4. Provide the name, set the public access level to Blob, and click Create

    The container will be created

    public level access as a blob
  5. Click on the name and navigate inside it

  6. On the top left corner, click Upload

  7. Select any file for testing (preferably a pdf or txt file)

    upload file in azure portal
  8. Click Upload, and once uploaded, it should be available in the container

    azure container
  9. Now, we will try to access the file using a custom domain. The URL should be

    http://<subdomain.customdomain>/<mycontainer>/<myblob>

    So for us, the domain should be

    http://pkitest.encryptionconsulting.com/pkitest/TestFile.pdf

    Ensure the file is opened over HTTP and that it displays or downloads correctly

    preparing CDP and AIA points

This concludes our section on preparing CDP and AIA points. Next, we will begin creating our PKI. You may now delete the test file from the container, as going forward it should only contain the certificates and CRLs.
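
To double-check from a shell, a hedged PowerShell equivalent of opening the test URL in a browser (the URL is the example used in the steps above):

    # A status code of 200 confirms the blob is reachable over HTTP through the custom domain
    (Invoke-WebRequest -Uri "http://pkitest.encryptionconsulting.com/pkitest/TestFile.pdf" -UseBasicParsing).StatusCode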

Creating Domain Controller

This Step-by-Step guide uses an Active Directory Domain Services (AD DS) forest named encon.com. DC01 functions as the domain controller.

Firstly, we will deploy a VM on Azure. Ensure both the IPs are static.

While deploying, ensure,

  1. VMs are deployed on the same Virtual Network
  2. If deployed on the same region, ensure the subnet is the same
  3. Public IP Address is static

    Creating Domain Controller
  4. Once the VM is created, navigate to Networking under Settings and click on the Network Interface

    Networking under Virtual Machine
    1. Navigate to IP Configuration under settings
    2. Click on ipconfig1 on the menu and change the private IP setting from Dynamic to Static

      azure ipconfig1
    3. Click Save and go back to the VM

Provide other parameters as per your requirement and create the VM.

Configuring Network

Once the VM is created, log in and follow the steps below

  1. Login to DC01 as a local user
  2. Click Start, type ncpa.cpl , and press ENTER
  3. Click on Ethernet, and then click Properties under Activity
  4. Double Click on Internet Protocol Version 4 (IPv4)
  5. Only change the DNS Server Address, and provide the private IPv4 of DC01
  6. For Alternate DNS, provide 8.8.8.8 or any other public DNS service you want.

    Configuring Network in Virtual Machine
  7. Click OK and restart the VM from the Portal
  8. Once Restarted, log in to DC01 as a local user
  9. Click Start, type sysdm.cpl , and press ENTER
  10. Change the PC name to DC01, and click Restart Now when prompted.

Installing Active Directory Domain Services and Adding a new Forest

  1. Open Server Manager. To do so, you can click the Server Manager icon in the toolbar or click Start, then click Server Manager.
  2. Click Manage, and then click Add Roles and Features
  3. Before you Begin, click Next
  4. On Installation Type, click Next
  5. On Server Selection, click Next
  6. On Server Roles, choose Active Directory Domain Services, click Add Features, and then click Next
  7. On Features, click Next
  8. On AD DS, click Next
  9. On Confirmation, click Install.
  10. After installation, either

    1. Click on Promote this server to a domain controller on Add Roles and Features Wizard

      Installing Active Directory Domain Services and Adding a new Forest
    2. Or, click on Promote this server to a domain controller on Post Deployment Configurations in Notifications

      Post Deployment Configurations
  11. On Deployment Configuration, choose to Add a new forest and provide the root domain name (“encon.com”)

    Deployment Configuration and Add new Forest
  12. On Domain Controller options, provide Directory Services Restore Mode password and click Next
  13. Under DNS options, click Next
  14. Under Additional options, click Next
  15. Under Paths, click Next
  16. Under Review options, click Next
  17. Under Prerequisites check, click Install
  18. Once installed, the remote connection would be terminated.
  19. Login to DC01 as encon\

    Azure Remote Desktop Connection
  20. DC01 is now ready

Creating Offline Root CA

The standalone offline root CA should not be installed in the domain. It should not even be connected to a network at all.

We will be creating this Root CA on-premises. I will create this on Proxmox, but you can use VMware or VirtualBox for this installation.

After installing Windows Server 2019, follow the steps below

  1. Log onto CA01 as CA01\Administrator.
  2. Click Start, click Run, and then type notepad C:\Windows\CAPolicy.inf and press ENTER.
  3. When prompted to create a new file, click Yes.
  4. Type in the following as contents of the file.

    [Version]
    Signature="$Windows NT$"
    [Certsrv_Server]
    RenewalKeyLength=2048 ; recommended 4096
    RenewalValidityPeriod=Years
    RenewalValidityPeriodUnits=20
    AlternateSignatureAlgorithm=0
    
  5. Click File and Save to save the CAPolicy.inf file under C:\Windows directory. Close Notepad

Installing Offline Root CA

  1. Log onto CA01 as CA01\Administrator.
  2. Click Start, and then click Server Manager.
  3. Click Manage, and then click Add Roles and Features
  4. On the Before You Begin page, click Next.
  5. On the Select Server Roles page, select Active Directory Certificate Services, and then click Next.
  6. On the Introduction to Active Directory Certificate Services page, click Next.
  7. On the Select Role Services page, ensure that Certification Authority is selected, then Next.
  8. On the Specify Setup Type page, ensure that Standalone is selected, and then click Next.
  9. On the Specify CA Type page, ensure that Root CA is selected, and then click Next.
  10. On the Set Up Private Key page, ensure that Create a new private key is selected, and then click Next.
  11. Leave the defaults on the Configure Cryptography for CA page, and click Next.
  12. On Configure CA Name page, under the Common name for this CA, clear the existing entry and type Encon Root CA. Click Next.
  13. On the Set Validity Period page, under Select validity period for the certificate generated for this CA, clear the existing entry and type 20. Leave the selection box set to Years. Click Next.
  14. Keep the default settings on the Configure Certificate Database page, and click Next.
  15. Review the settings on the Confirm Installation Selections page and then click Install.
  16. Review the information on the Installation Results page to verify that the installation is successful, and click Close.

Post Installation Configuration on Root CA

  1. Ensure that you are logged on to CA01 as CA01\Administrator.
  2. Open a command prompt. To do so, you can click Start, click Run, type cmd and then click OK.
  3. To define the Active Directory Configuration Partition Distinguished Name, run the following command from an administrative command prompt

    Certutil -setreg CA\DSConfigDN "CN=Configuration,DC=Encon,DC=com"
  4. To define CRL Period Units and CRL Period, run the following commands from an administrative command prompt:

    1. Certutil -setreg CA\CRLPeriodUnits 52
    2. Certutil -setreg CA\CRLPeriod "Weeks"
    3. Certutil -setreg CA\CRLDeltaPeriodUnits 0
  5. To define CRL Overlap Period Units and CRL Overlap Period, run the following commands from an administrative command prompt:

    1. Certutil -setreg CA\CRLOverlapPeriodUnits 12
    2. Certutil -setreg CA\CRLOverlapPeriod "Hours"
  6. To define Validity Period Units for all issued certificates by this CA, type the following command and then press Enter. In this lab, the Enterprise Issuing CA should receive a 20-year lifetime for its CA certificate. To configure this, run the following commands from an administrative command prompt:

    1. Certutil -setreg CA\ValidityPeriodUnits 20
    2. Certutil -setreg CA\ValidityPeriod "Years"

Configuration of CDP and AIA points

There are multiple methods for configuring the Authority Information Access (AIA) and certificate revocation list distribution point (CDP) locations. The AIA points to the public key for the certification authority (CA). You can use the user interface (in the Properties of the CA object), certutil, or directly edit the registry. The CDP is where the certificate revocation list is maintained, which allows client computers to determine if a certificate has been revoked. This lab configures a file system location, an HTTP location, and an LDAP location for both the AIA and the CDP.

Configuring AIA points

A certutil command is a quick and common method for configuring the AIA. The certutil command to set the AIA modifies the registry, so ensure that you run the command from a command prompt run as administrator. When you run the following certutil command, you will be configuring a static file system location, an HTTP location, and a Lightweight Directory Access Protocol (LDAP) location for the AIA. Run the following command:

certutil -setreg CA\CACertPublicationURLs "1:C:\Windows\system32\CertSrv\CertEnroll\%1_%3%4.crt\n2:http://pkitest.encryptionconsulting.com/pkitest/%1_%3%4.crt\n2:ldap:///CN=%7,CN=AIA,CN=Public Key Services,CN=Services,%6%11"

Note: You need to modify the http address on the AIA location. For this scenario, our http container address was http://pkitest.encryptionconsulting.com/pkitest/, which can vary for you.

Configuring the CDP Points

The certutil command to set the CDP also modifies the registry, so ensure that you run it from a command prompt run as administrator:

certutil -setreg CA\CRLPublicationURLs "1:C:\Windows\system32\CertSrv\CertEnroll\%3%8%9.crl\n2:http://pkitest.encryptionconsulting.com/pkitest/%3%8%9.crl\n10:ldap:///CN=%7%8,CN=%2,CN=CDP,CN=Public Key Services,CN=Services,%6%10"

Note: You need to modify the http address on the CDP location. For this scenario, our http container address was http://pkitest.encryptionconsulting.com/pkitest/, which can vary for you.

At an administrative command prompt, run the following commands to restart Active Directory Certificate Services and publish the CRL

net stop certsvc && net start certsvc

certutil -crl

Creating Issuing CA

Enterprise CAs must be joined to the domain. Before you install the Enterprise Issuing CA (CA02), you must first join the server to the domain. Then you can install the Certification Authority role service on the server.

Firstly, we will deploy a VM on Azure. Ensure both the IPs are static.

While deploying, ensure,

  1. VMs are deployed on the same Virtual Network
  2. If deployed on the same region, ensure the subnet is the same
  3. Public IP Address is static

    Certification Authority role service
  4. Once the VM is created, navigate to Networking under Settings and click on the Network Interface

  5. Network Interface
    1. Navigate to IP Configuration under settings
    2. Click on ipconfig1 on the menu and change the private IP setting from Dynamic to Static

      CA Private settings
    3. Click Save and go back to the VM

    Provide other parameters as per your requirement and create the VM.

    Configuring Network

    1. Login to CA02 as a local user
    2. Click Start, type ncpa.cpl, and press ENTER
    3. Click on Ethernet, and then click Properties under Activity
    4. Double Click on Internet Protocol Version 4 (IPv4)
    5. Only change the DNS Server Address, and provide the private IPv4 of DC01 (if both belong to the same region), or provide the public IP address of DC01 (if they belong to different regions)

      IP address of DC01
    6. Click OK and restart the VM from the Portal
    7. Once Restarted, log in to CA02 as a local user
    8. Click Start, type sysdm.cpl, and press ENTER
    9. Change the PC name to CA02 and provide the domain name to join the domain. Provide domain credentials when prompted and wait until you get a success message

      Issuing CA Configuring Domain
    10. Click on Restart Now when prompted.

    Creating CAPolicy in Issuing CA

    1. Log onto CA02 as an administrator.
    2. Click Start, click Run, and then type notepad C:\Windows\CAPolicy.inf and press ENTER.
    3. When prompted to create a new file, click Yes.
    4. Type in the following as contents of the file.

      [Version]
      Signature="$Windows NT$"
      [PolicyStatementExtension]
      Policies=InternalPolicy
      [InternalPolicy]
      OID= 1.2.3.4.1455.67.89.5
      URL= http://pkitest.encryptionconsulting.com/pkitest/cps.txt
      [Certsrv_Server]
      RenewalKeyLength=2048
      RenewalValidityPeriod=Years
      RenewalValidityPeriodUnits=10
      LoadDefaultTemplates=0
      
    5. Click File and Save to save the CAPolicy.inf file under C:\Windows directory. Close Notepad

    Publishing Root CA Certificates and CRLs in CA02

    1. Log into CA01 as a local administrator
    2. Navigate to C:\Windows\System32\CertSrv\CertEnroll
    3. Copy the CRLs and Certificates present

      Root CA Certificates and CRLs in CA02
    4. Paste the files into the C drive in CA02

      Note: If you are using RDP, you can copy and paste directly

    5. On CA02, to publish Encon Root CA Certificate and CRL in Active Directory, run the following commands at an administrative command prompt.

              certutil -f -dspublish "C:\CA01_Encon Root CA.crt" RootCA
              certutil -f -dspublish "C:\Encon Root CA.crl" CA01
          
    6. To add the Encon Root CA certificate and CRL to the local store on CA02.encon.com, run the following commands from an administrative command prompt.

          certutil -addstore -f root "C:\CA01_Encon Root CA.crt"
          certutil -addstore -f root "C:\Encon Root CA.crl"
          

    Installing Issuing CA

    1. Ensure you are logged in as Encon User in CA02

    2. Click Start, and then click Server Manager

    3. Click Manage, and then click Add Roles and Features

    4. Click Next on Before you Begin

      Installing Issuing CA
    5. On Installation Type, click Next

      issuing CA - Installation type
    6. On Server Selection, click Next

      Destination Server
    7. On Server Roles, choose Active Directory Certificate Services, click on Add Features when prompted and click Next

      Add features - AD CS
    8. On Features, click Next

      CA Features
    9. On AD CS, click Next.

      Active Directory Certificate Services
    10. On Role Services, Choose Certificate Authority Web Enrollment, click on Add Features when prompted, and click Next

      Certificate Authority Web Enrollment
    11. On Web Server Role (IIS) and Role Services, click Next

      Web Server Role (IIS) and Role Services
    12. On Confirmation, click Install

      Web Server (IIS)

    Configuration of Issuing CA

    1. After installation, either
      1. Click on Configure Active Directory Certificate Services on the destination server in Add Roles and Features Wizard

        Configuration of Issuing CA
      2. Or, click on Configure Active Directory Certificate Services on Notification Center

        AD CS Notification Center
    2. On Credentials, click Next

      AD CS credentials
    3. Under Role Services, choose both Certificate Authority as well as Certificate Authority Web Enrollment

      AD CS Role Services
    4. On Setup type, ensure Enterprise CA is chosen and click Next

      Enterprise CA
    5. On CA Type, choose Subordinate CA, and click Next

      AD CS - CA type
    6. On Private Key, choose to Create a new private key

      AD CS - Specify the type of private key
    7. On Cryptography, leave defaults and click Next

      AD CS - cryptography options
    8. On CA Name, provide the Common Name as Encon Issuing CA and leave everything else at the default values.

      AD CS - CA Name
    9. On Certificate Request, ensure Save a certificate request to file is selected and click Next

      AD CS - Certificate Request
    10. On Certificate Database, click Next

      AD CS - Certificate Database
    11. On Confirmation after reviewing, Click Configure

      AD CS Configuration
    12. Issuing CA should now be configured. Click Close.

      AD CS configuration results
    13. After Issuing CA is configured, a file will appear on the C drive. Copy this file to C drive in Root CA.

      Issuing CA is configured

    Issue Encon Issuing CA Certificate

    1. Copy Issuing CA req file to Root CA C drive
    2. Open Command Prompt
    3. Run the command

      certreq -submit "C:\CA02.encon.com_encon-CA02-CA.req"
    4. Select Root CA from the Certification Authority List

      Encon Root CA
    5. Once a request is submitted, you will get a RequestID

      Certificate Request
    6. Open Certificate Authority from Tools in Server Manager

      Certificate Authority tools
    7. Navigate to Pending Requests

      Encon Root CA - pending request
    8. Right Click on the RequestID that you got while submitting the request, click All Tasks, and click Issue

      Choose Certificate Authority
    9. Once issued, navigate to the command prompt again, and run the following (replace 2 with the RequestID you received)

      certreq -retrieve 2 "C:\CA02.encon.com_Encon Issuing CA.crt"
    10. Select Root CA from the Certification Authority List

      Encon Root CA
    11. Once retrieved, the successful message is displayed

      certification request ID
    12. Copy the issued certificate from Root CA to CA02

      issued certificate from Root CA to CA02
    13. Login to CA02 as an Encon user and copy the certificate to the C drive

    14. Open Certificate Authority from Tools in Server Manager

      Certificate Authority tools
    15. Right-click on Encon Issuing CA, click on All Tasks, and click Install CA Certificate

      Install CA Certificate
    16. Navigate to the C drive, and change the file type filter to All Files so the copied certificate is visible

      copy certificates
    17. Select the issued certificate and click Open

      Right-click on Encon Issuing CA, click on All Tasks, and click Start Service

      Encon Issuing CA

    Post Installation Configuration on Issuing CA

    1. Ensure that you are logged on to CA02 as Encon User
    2. Open a command prompt. To do so, you can click Start, click Run, type cmd and then click OK.
    3. To define CRL Period Units and CRL Period, run the following commands from an administrative command prompt:

      1. Certutil -setreg CA\CRLPeriodUnits 1
      2. Certutil -setreg CA\CRLPeriod "Weeks"
      3. Certutil -setreg CA\CRLDeltaPeriodUnits 1
      4. Certutil -setreg CA\CRLDeltaPeriod "Days"
    4. To define CRL Overlap Period Units and CRL Overlap Period, run the following commands from an administrative command prompt:

      1. Certutil -setreg CA\CRLOverlapPeriodUnits 12
      2. Certutil -setreg CA\CRLOverlapPeriod "Hours"
    5. To define Validity Period Units for all certificates issued by this CA, run the following commands from an administrative command prompt. In this lab, certificates issued by the Enterprise Issuing CA will have a maximum validity of 5 years:

      1. Certutil -setreg CA\ValidityPeriodUnits 5
      2. Certutil -setreg CA\ValidityPeriod "Years"

    Configuration of CDP and AIA points

    There are multiple methods for configuring the Authority Information Access (AIA) and certificate revocation list distribution point (CDP) locations. The AIA points to the public key for the certification authority (CA). You can use the user interface (in the Properties of the CA object), certutil, or directly edit the registry.

    The CDP is where the certificate revocation list is maintained, which allows client computers to determine if a certificate has been revoked. This lab will have three locations for the AIA and three for the CDP.

    Configuring AIA points

    A certutil command is a quick and common method for configuring the AIA. The certutil command to set the AIA modifies the registry, so ensure that you run the command from a command prompt run as administrator.

    When you run the following certutil command, you will be configuring a static file system location, an HTTP location, and a Lightweight Directory Access Protocol (LDAP) location for the AIA. Run the following command:

    certutil -setreg CA\CACertPublicationURLs "1:C:\Windows\system32\CertSrv\CertEnroll\%1_%3%4.crt\n2:http://pkitest.encryptionconsulting.com/pkitest/%1_%3%4.crt\n2:ldap:///CN=%7,CN=AIA,CN=Public Key Services,CN=Services,%6%11"

    Note: You need to modify the http address on the AIA location. For this scenario, our http container address was http://pkitest.encryptionconsulting.com/pkitest/, which can vary for you.

    Configuring the CDP Points

    The certutil command to set the CDP also modifies the registry, so ensure that you run it from a command prompt run as administrator:

    certutil -setreg CA\CRLPublicationURLs "65:C:\Windows\system32\CertSrv\CertEnroll\%3%8%9.crl\n2:http://pkitest.encryptionconsulting.com/pkitest/CertEnroll/%3%8%9.crl\n79:ldap:///CN=%7%8,CN=%2,CN=CDP,CN=Public Key Services,CN=Services,%6%10"

    Note: You need to modify the http address on the CDP location. For this scenario, our http container address was http://pkitest.encryptionconsulting.com/pkitest/, which can vary for you.

    Also, as per the CDP point, the CertEnroll folder will exist inside the pkitest container in Azure Blob. This is because the folder will be recursively copied from the CertSrv folder to the blob storage

    At an administrative command prompt, run the following commands to restart Active Directory Certificate Services and publish the CRL

    net stop certsvc && net start certsvc

    certutil -crl

    Uploading Certificates and CRLs to the Blob storage

    Per our CDP and AIA points, the certificates would be available at the blob storage in Azure. If we run PKIView.msc at Issuing CA, we will run into errors where the certificates or CRLs are not found

    Uploading Certificates and CRLs to the Blob storage

    To resolve this, we need to upload

    • Root CA certificates
    • Root CA CRL
    • Issuing CA Certificates

    Issuing CA CRLs will be uploaded using a script we will run next.

    To upload the files, copy them from their respective machines and keep them handy on your host machine. You can find these files at C:\Windows\System32\certsrv\CertEnroll on both Root CA and issuing CA.

    Note: Do not copy the CRLs of Issuing CA.

    Issuing CA CRLs

    Once copied, follow the steps below

    1. Navigate to the storage account, and click on the pkitest you created

      Azure Storage account
    2. Click on Containers under Data Storage

      Azure Containers
    3. Click on the pkitest folder
    4. Click on Upload on the top left

      Azure AD Accounts
    5. Click on the browse icon and select all the files that need to be uploaded and click Open

      Microsoft Azure - upload files
    6. Check to Overwrite if files already exist and then click Upload

      Microsoft azure - upload files
    7. After uploading, all the files should be available

      CA02

    Once the files are uploaded, navigate to CA02 and open PKIView.msc again. The AIA locations and the Root CA CDP should now be available, but the Issuing CA's CDP will still show an error, as we have not yet copied its CRLs to the pkitest folder

    CDP points of Root and Issuing CA

    Script to copy Issuing CA CRLs

    Before we begin, we need to download AzCopy. Once downloaded, extract the app into C:\ to be accessible. We will be using this location on our script. Change the script’s path if you intend to store the application in a different location.

    AZCopyCode

    Now you would also need a folder to store the code. I would recommend creating a folder on C drive as AZCopyCode. Download the script from below and store it there. We would need to make some changes to make it work.

    Note: This code was initially created by dstreefkerk. As of Windows Server 2022, the code still works; we have made some changes and fixed a few bugs.

    Code: https://github.com/Encryption-Consulting-LLC/AzCopyCode/blob/main/Invoke-UpdateAzureBlobPKIStorage.ps1


    Code Changes

    1. Navigate to the storage account, and click on the pkitest you created

      Azure Storage account
    2. Click on Containers under Data Storage

      Azure Containers
    3. Click on the pkitest folder
    4. Click on Shared Access Token under Settings. Provide appropriate permissions, and choose an expiry date (preferably one year)

      Shared Access Token
    5. Click Generate SAS token and copy Blob SAS Token
    6. Open the code in notepad or your preferred code editor
    7. Paste the SAS token for the variable

      $azCopyDestinationSASKey

    8. Navigate to properties under Settings, copy the URL and paste it for $azCopyDestination
    9. Change log and log archive locations if applicable.
    10. Change the AzCopy location in $azCopyBinaryPath if you stored AzCopy in another location (see the variable sketch after these steps).
    11. Once the changes are made, save the script as C:\AZCopyCode\Invoke-UpdateAzureBlobPKIStorage.ps1
    12. Open PowerShell on CA02
    13. Navigate to C:\AZCopyCode
    14. Run Invoke-UpdateAzureBlobPKIStorage.ps1
    15. Once the copy completes, the script reports how many files were copied, showing 100% complete and 0 failed

      Update Azure Blob storage
    16. Open PKIView.msc, and now no errors should be visible

      PKI View with no errors
    17. The overall PKI should be healthy.

      PKIView.msc
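
    For reference, after steps 7-10 the variable section near the top of the script might look roughly like the sketch below. Only $azCopyDestinationSASKey, $azCopyDestination, and $azCopyBinaryPath are named above; the log variable names shown here are placeholders, so use whatever names appear in your copy of the script.

    $azCopyDestination       = 'https://<storage-account>.blob.core.windows.net/pkitest'   # URL copied from Properties under Settings
    $azCopyDestinationSASKey = '<blob-SAS-token>'                                          # token from Generate SAS token
    $azCopyBinaryPath        = 'C:\azcopy\azcopy.exe'                                      # change if AzCopy is stored elsewhere
    $logPath                 = 'C:\AZCopyCode\Logs'                                        # placeholder name for the log location
    $logArchivePath          = 'C:\AZCopyCode\Logs\Archive'                                # placeholder name for the log archive location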

    Troubleshooting

    For this scenario, suppose you get an error on one of the CDP locations.

    CDP Locations

    Copy the URL by right-clicking on the location and pasting it into Notepad. It should look something like this:

    pkitest.encryptionconsulting.com/pkitest/Encon%20Root%20CA.crl%20

    If you try opening this in a browser, it will still return an error because of the trailing %20, which indicates a space at the end of the URL. To resolve this, the CDP and AIA points need to be corrected on the Root CA, and the Issuing CA needs to be recreated.
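
    A quick way to see the problem is to URL-decode the copied string in PowerShell; the decoded name ends with a space, which is why the CRL cannot be retrieved.

    [uri]::UnescapeDataString('pkitest.encryptionconsulting.com/pkitest/Encon%20Root%20CA.crl%20')
    # Output: pkitest.encryptionconsulting.com/pkitest/Encon Root CA.crl   <- note the trailing space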

    Automating the script

    We will automate the script using Task Scheduler so that it runs every week. You can tweak the schedule to suit your requirements. A scripted way to register the same task is sketched after the steps below.

    1. Open Task Scheduler
    2. Left click on Task Scheduler (local) and click on Create a Basic Task

      ADCS - Task Scheduler
    3. Provide name and description for the Task

      Provide name and description for the Task
    4. Set the Task Trigger to Weekly

      ADCS - Task Trigger
    5. Select the date and time when the script will run

      ADCS - automatic script
    6. On Action, select Start A program and click Next

      ADCS - Start A program
    7. Under Start a Program, in Program/script, enter:

      powershell -file "C:\AZCopyCode\Invoke-UpdateAzureBlobPKIStorage.ps1"
      ADCS - PowerShell
    8. Click yes on the prompt

      Blob PKI Storage
    9. Check the Open Properties dialog and click Finish

      ADCS - Wizard Properties
    10. Once completed, AZ Copy should be available in Task Scheduler Library.

      AZ Copy available in Task Scheduler Library
    11. Right Click AZ Copy and click Run

    12. Refresh and check the History tab. "Action Completed" should appear in the history

      ADCS full installation
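
    If you prefer to script the task instead of using the wizard, the built-in ScheduledTasks cmdlets can register an equivalent weekly task. The task name, day, and time below are assumptions; adjust them to match your schedule.

    $action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-File "C:\AZCopyCode\Invoke-UpdateAzureBlobPKIStorage.ps1"'
    $trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Sunday -At 2am
    Register-ScheduledTask -TaskName 'AZ Copy' -Description 'Weekly upload of Issuing CA CRLs to Azure Blob Storage' -Action $action -Trigger $trigger
    Start-ScheduledTask -TaskName 'AZ Copy'   # run it once immediately to confirm it works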

    Conclusion

    This concludes our AD CS installation with Azure Blob Storage. Not only is it easier to manage, but we also achieve high availability using Azure Blob Storage. This helps organizations build a PKI that can operate worldwide with minimal latency and high performance. If you face any issues, remember to reach out to info@encryptionconsulting.com.

Free Downloads

Datasheet of Public Key Infrastructure

We have years of experience in consulting, designing, implementing & migrating PKI solutions for enterprises across the country.

Download
Implementing & migrating PKI solutions for enterprises

About the Author

Anish Bhattacharya is a Consultant at Encryption Consulting, working with PKIs, HSMs, creating Google Cloud applications, and working as a consultant with high-profile clients.

Read time: 9 minutes

Upgrading Vormetric Data Security Manager (DSM) can seem relatively straightforward, but if something goes wrong, data can be lost or corrupted. This article does not provide the upgrade steps themselves; instead, it highlights what organizations need to be careful about, including a few technical and non-technical points you may not find in the upgrade guide. Readers can use this article as a checklist or as a base to develop a plan for upgrading their DSM. Every DSM upgrade can be different and unique to the organization, so it is best to get outside consultation to plan for your specific environment.

Planning

Everything starts with good planning, and the DSM upgrade is no different. A full scan of the current environment is vital to assess any future roadblocks or challenges you might face.

  1. Keep a note of all the agents, their versions, and their guardpoints. When you upgrade the DSM, you can check the list of all agents and whether all guardpoints are up. If something does not match expectations, you may need to troubleshoot before proceeding. Agent versions can also highlight whether any agents in the environment are incompatible with the DSM version you want to upgrade to. A compatibility matrix will help you determine which agents are compatible and which need to be upgraded, and you can also develop the upgrade path you will need. Thales has defined an upgrade path that must be followed to reach a particular version. For example, you cannot upgrade directly from 5.3 to 6.2 without first upgrading to the intermediate version 6.0.0.

  2. Production cutover

    Many organizations looking to upgrade will already have a production DSM working in their environment. They would not want to upgrade their existing production DSM directly; instead, they would use another DSM for the upgrade and conduct pilot testing before switching to the new DSM as the production DSM. Organizations should plan the cutover accordingly.

Planning can ensure the following steps are smooth and DSM can be upgraded to the desired version without hiccups.

Upgrading DSMs

Planning can help smooth the upgrade, but upgrading a production DSM can expose challenges you might not have expected. Some of the challenges we have faced are:

  1. Cutting over from old DSM to new DSM without agent registration
  2. Upgrading DSM to a particular version where agents remain compatible
  3. Configuring HA cluster with DSMs being on different subnets
  4. Upgrading DSMs can also take time, and without proper planning of cutover, the production can also face downtime. But with adequate planning, DSM cutover can be seamless with no downtime.

Organizations should prepare exhaustively for upgrading DSMs, as a wrong move can leave terabytes of data inaccessible.

Migration to CipherTrust Manager

Thales has announced that Vormetric DSM will reach end of life in June 2024; as a result, many organizations are looking to migrate their DSMs to CipherTrust Manager.

Migrating to CTMs can come with a unique wave of challenges where some organizations might be concerned with the availability of policies, keys, usersets, process sets, and other configurations that may already exist in their DSM. Other concerns organizations may have can be:

  • Policies, keys, user sets, and process sets migration
  • Migration of hosts
  • High Availability configuration
  • Downtime and system disruption
  • Data Loss

If organizations are already on DSM 6.4.5 or 6.4.6, they can migrate to CipherTrust Manager and restore the following objects:

  1. Agent Keys, including versioned keys and KMIP accessible keys
  2. Vault Keys
  3. Bring Your Own Keys (BYOK) objects
  4. Domains
  5. CipherTrust Transparent Encryption (CTE) configuration

The objects that are not restored are:

  1. Most Key Management Interoperability (KMIP) keys except KMIP-accessible agent keys
  2. Keys associated with CipherTrust Cloud Key Manager (CCKM), including key material as a service (KMaaS) keys.
  3. DSM hostname and host configuration

When customers migrate, we recommend exhaustive pilot testing to ensure the migration goes smoothly without data loss. Customers should also conduct a cleanup of the DSM, removing all decommissioned servers and users, before migrating to CipherTrust Manager. When migrating from DSM to CTM, hosts running VTE agents (below version 7.0) need to be upgraded to CipherTrust Transparent Encryption beforehand so that the migration can be seamless.

Benefits of migrating to CipherTrust Manager

  1. Improved browser-based UI
  2. No hypervisor limitation
  3. Fully featured remote CLI interface
  4. Embedded CipherTrust Cloud Key Manager (CCKM)
  5. Better Key Management capabilities with KMIP key material – integrates policy, logging, and management – bringing simplified and richer capabilities to KMIP
  6. Supports CTE agents (renamed from VTE agents) from version 7.0
  7. New API for better automation

Even though these improvements are for the CTM platform as a whole, there are also some specialized improvements organizations may want to consider:

  1. Password and PED Authentication

    Similar to S series Luna HSM, Thales provides the choice of password and Pin Entry Device (PED) as authentication mechanisms for CTM. This can provide improved security but is only available for k570 physical appliances.

  2. Multi-cloud deployment is also possible where customers can deploy on multi-cloud such as AWS, Azure, GCP, VMWare, HyperV, Oracle VM, and more. They can also form hybrid clusters of physical and virtual appliances for high-availability environments.

Conclusion

Upgrading DSMs can be challenging and requires thorough planning and troubleshooting, but migrating to CTMs is another level entirely, as CTMs are a different product. If organizations opt to migrate, they will need to develop new runbooks, standard operating procedures (SOPs), and architecture and design documents. With two years remaining before end of life, organizations should decide whether they want to migrate to CTM or explore other options.

Encryption Consulting can help customers plan and also help in upgrading and migrating their existing DSMs. Being a certified partner with Thales and supporting Vormetric customers with their implementations and upgrading for years, Encryption Consulting can help plan a full-fledged migration to CipherTrust Manager. Encryption Consulting can also provide gap assessments and pre-planning to ensure customers do not face any data loss or serious challenges that would lead to downtime or other production issues. Contact us for more information.


Free Downloads

Datasheet of Encryption Consulting Services

Encryption Consulting is a customer focused cybersecurity firm that provides a multitude of services in all aspects of encryption for our clients.

Download
Encryption Services

About the Author

Anish Bhattacharya is a Consultant at Encryption Consulting, working with PKIs, HSMs, creating Google Cloud applications, and working as a consultant with high-profile clients.

Read time: 3 minutes, 44 seconds

The Domain Name System (DNS) is one of the best-known protocols on the Internet. Its main function is to translate human-readable domain names into their corresponding IP addresses. It is important because all devices on the Internet regularly derive the IP address of a particular server from DNS. The translation happens through a process in which DNS queries are exchanged between the client and the DNS server, or resolver. The DNS tree is architected from the top down and is called the "DNS Hierarchy," depicted in Fig. 1.

Fig 1: DNS Hierarchy

There are two types of DNS resolvers

  • Authoritative

    Authoritative name servers give answers in response to queries about IP addresses. They only respond to queries for domains they have been configured to answer.

  • Recursive

    Recursive resolvers provide the proper IP address requested by the client. They do the translation process by themselves and return the final response to the client.

In this article, we will be focusing on the second type, recursive DNS resolvers.

DNS Cache Poisoning Attacks

Classic DNS cache poisoning attacks (around 2008) targeted a DNS resolver: an off-path attacker fools a vulnerable resolver into issuing a query to an upstream authoritative name server.

The attacker then attempts to inject rogue responses carrying the spoofed IP of the name server. If a rogue response that matches the "secrets" in the query arrives before any legitimate one, the resolver will accept and cache the rogue result.

To succeed, the attacker also needs to guess the correct source/destination IP, source/destination port, and the query's transaction ID (TxID), which is 16 bits long. When the source and destination port (i.e., 53) were fixed, the 16-bit TxID was the only randomness. Thus an off-path attacker could brute-force all possible values with 65,536 responses, and optimizations such as birthday attacks can speed up the attack even further.
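
A quick back-of-the-envelope calculation (a PowerShell sketch, purely illustrative) shows why the 16-bit TxID alone is so weak, and how much the randomized source port discussed below raises the bar:

  # Size of the space an off-path attacker must guess blindly
  $txidOnly    = [math]::Pow(2, 16)   # 16-bit transaction ID -> 65,536 possibilities
  $txidAndPort = [math]::Pow(2, 32)   # TxID plus a randomized 16-bit source port
  "TxID only          : {0:N0} possibilities" -f $txidOnly
  "TxID + source port : {0:N0} possibilities" -f $txidAndPort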

Defenses against DNS Cache Poisoning attacks

Since then, several defenses have been deployed to mitigate the threat of DNS cache poisoning, and they effectively render the classic attack useless. The deployed solutions, described below, include randomization of:

  1. The source port is perhaps the most effective and widely deployed defense as this increases the randomness to 32 bits from 16 bits. An off-path attacker would now have to guess both the source port and Transaction ID (TxID) together.

  2. Capitalization of letters in domain names (0x20 encoding) – The randomness can often depend on the number of letters, which can be quite effective, especially for longer domain names. It is a simple protocol change, but it has significant compatibility issues with authoritative name servers. Thus, the most popular public resolvers do not use 0x20 encoding. For example, Google DNS uses 0x20 encoding only for allowed name servers; Cloudflare has recently disabled 0x20 encoding.

  3. Choice of name servers (server IP addresses). Randomness also depends on the number of name servers. Most domains use fewer than ten name servers, amounting to only two to three bits. It has also been shown that an attacker can generate query failures against certain name servers and effectively "pin" a resolver to the one remaining name server.

  4. DNSSEC – The success of DNSSEC depends on support from both resolvers and authoritative name servers. However, only a small fraction of domains are signed – 0.7% of .com domains, 1% of .org domains, and 1.85% of top Alexa 10K domains, as reported in 2017. The same study states that only 12% of the resolvers that enable DNSSEC actually attempt to validate the records received. Thus, the overall deployment rate of DNSSEC is far from satisfactory.

Conclusion

DNS Cache poisoning attack is ever-changing, with new attack surfaces appearing. As we previously stated, modern DNS infrastructure has multiple layers of caching. The client often initiates a query using an API to an OS stub resolver, a separate system process that maintains OS-wide DNS cache. Stub resolver does not perform any iterative queries; instead, it forwards the request to the next layer. A DNS forwarder also forwards queries to its upstream recursive resolver. DNS forwarders are commonly found in Wi-Fi routers (e.g., in a home), and they maintain a dedicated DNS cache. The recursive resolver does the real job of iteratively querying the authoritative name servers. The answers are then returned and cached in each layer. All layers of caches are technically subject to the DNS cache poisoning attack. But we generally tend to ignore stub resolvers and forwarders, which are also equally susceptible to attacks. As the industry moves forward, we should be better prepared for such attacks and have better defenses accordingly.

Free Downloads

Datasheet of Encryption Consulting Services

Encryption Consulting is a customer focused cybersecurity firm that provides a multitude of services in all aspects of encryption for our clients.

Download
Encryption Services

About the Author

Anish Bhattacharya is a Consultant at Encryption Consulting, working with PKIs, HSMs, creating Google Cloud applications, and working as a consultant with high-profile clients.

Read time: 6 minutes

Data security is one of the essential parts of an organization, and it can be achieved using various methods. Encryption keys play a significant role in the overall data protection process. Data encryption converts plaintext into an encoded (non-readable) form, so that only authorized persons or parties can access it.

Many algorithms are available in the market for encrypting such data. Encrypted data may be safe for some time, but we should never assume it is permanently secure; as time goes on, there is a chance that the data gets compromised.

Fig: Encryption and Decryption Process

In this article, we consider various encryption algorithms and techniques for improving the security of data, and we compare the algorithms based on their performance, efficiency in hardware and software, key size, availability, implementation techniques, and speed.

Summary of the algorithms

We compare the measured speed of the encryption algorithms using the implementations available as standard in the Oracle JDK (run from the Eclipse IDE), and then summarize various other characteristics of those algorithms. The encryption algorithms considered here are AES (with 128- and 256-bit keys), DES, Triple DES, IDEA, and Blowfish (with a 256-bit key).

Performance of the algorithms

The figure below shows the time taken to encrypt various numbers of 16-byte blocks of data using the algorithms mentioned above.

It is essential to note right from the beginning that beyond some ridiculous point, it is not worth sacrificing speed for security. However, the measurements obtained will still help us make certain informed decisions.
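
The original measurements were taken with the Oracle JDK implementations; as a rough, non-authoritative illustration of the same kind of timing loop, the .NET AES implementation can be exercised from PowerShell (the block count and key size here are arbitrary choices):

  # Encrypt 100,000 16-byte blocks with AES-256 and report the elapsed time
  $aes = [System.Security.Cryptography.Aes]::Create()
  $aes.KeySize = 256
  $aes.GenerateKey()
  $aes.GenerateIV()
  $data      = New-Object byte[] (16 * 100000)
  $encryptor = $aes.CreateEncryptor()
  $sw = [System.Diagnostics.Stopwatch]::StartNew()
  $null = $encryptor.TransformFinalBlock($data, 0, $data.Length)
  $sw.Stop()
  "AES-256 encrypted {0:N0} bytes in {1} ms" -f $data.Length, $sw.ElapsedMilliseconds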

Characteristics of algorithms

Table 1 summarizes the main features of each encryption algorithm, with what we believe is a fair overview of the current security status of the algorithm.

Factors     | RSA                                                        | DES                    | 3DES                                               | AES
Created By  | In 1978 by Ron Rivest, Adi Shamir, and Leonard Adleman     | In 1975 by IBM         | In 1978 by IBM                                     | In 2001 by Vincent Rijmen and Joan Daemen
Key Length  | Depends on the number of bits in modulus n, where n = p*q  | 56 bits                | 168 bits (k1, k2, and k3) or 112 bits (k1 and k2)  | 128, 192, or 256 bits
Rounds      | 1                                                          | 16                     | 48                                                 | 10 (128-bit key), 12 (192-bit key), 14 (256-bit key)
Block Size  | Variable                                                   | 64 bits                | 64 bits                                            | 128 bits
Cipher Type | Asymmetric Block Cipher                                    | Symmetric Block Cipher | Symmetric Block Cipher                             | Symmetric Block Cipher
Speed       | Slowest                                                    | Slow                   | Very Slow                                          | Fast
Security    | Least Secure                                               | Not Secure Enough      | Adequate Security                                  | Excellent Security

Table 1: Characteristics of commonly used encryption algorithms

Comparison

The techniques have been compared based on the following factors:

  • CPU processing speed for encrypting and decrypting data.
  • Rate of key generation.
  • Key size.
  • Security consideration.
  • Efficient on the hardware and software in case of implementation.
  • The amount of memory required to hold the data in the encryption process.
  • Number of users accommodated by the model.
  • Time required by the model to recover the data in case of key failure.
  • Time available to the hacker to produce various types of attacks.
  • The complexity of algorithm technique.
Fig: Comparison of encryption algorithm based on Percentage Efficiency

Formulation and Case Study

Case Study

Symmetric ciphers use the same key for encrypting and decrypting, so the sender and the receiver must both know and use the same secret key. All AES key lengths are deemed sufficient to protect classified information up to the "Secret" level, with "Top Secret" information requiring either 192- or 256-bit key lengths. There are 10 rounds for 128-bit keys, 12 rounds for 192-bit keys, and 14 rounds for 256-bit keys; a round consists of several processing steps that include substitution, transposition, and mixing of the input plaintext, transforming it into the final output of ciphertext.

AES Design

Rounds

Padding is the method of adding dummy data to a message. During encryption, if the message length is not divisible by the block length, padding is used. For example, if the message consists of 426 bytes, we need six additional bytes of padding to make the message 432 bytes long, because 432 is divisible by 16. Three key sizes can be used in AES, and the number of rounds changes depending on the key size. The standard key size in AES is 128 bits, with 10 rounds. For AES encryption, round keys are derived from the cipher key, and an additional round key is applied before the first round.
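
The padding arithmetic above is easy to check; a short PowerShell sketch like the following computes the padding needed for any message length (the 426-byte figure is the example from the text):

  # Bytes of padding needed to reach a multiple of the 16-byte AES block size
  $messageLength = 426
  $blockSize     = 16
  $padding       = ($blockSize - ($messageLength % $blockSize)) % $blockSize
  "Padding needed: $padding bytes -> padded length: $($messageLength + $padding) bytes"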

No. | Key Size | No. of Rounds
1   | 128 bits | 10
2   | 192 bits | 12
3   | 256 bits | 14

For AES-128, a 128-bit plaintext block and a 128-bit key are used, and 10 rounds are performed to produce the ciphertext. First, a round key is generated for each of the 10 rounds. An extra round key is also applied at the start, in the initial round, before the transformation begins. The transformation consists of four steps.

  1. Substitute Bytes
  2. Shift Rows
  3. Mix Columns
  4. Add Round Key

The following figure shows all the encryption stages from plaintext to ciphertext.

Fig: Shows the stages of each round

Encryption with AES

The encryption phase of AES can be broken into three steps: the initial round, the main rounds, and the final round. All of the stages use the same sub-operations in different combinations as follows:

  1. Initial Round: Add Round Key
  2. Main Round
    • Sub Bytes
    • Shift Rows
    • Mix Columns
    • Add Round Key
  3. Final Round:
    • Sub Bytes
    • Shift Rows
    • Add Round Key
  4. Add Round Key

    This is the only phase of AES encryption that directly operates on the AES round key. In this operation, the input to the round is exclusive-ORed (XORed) with the round key.

  5. Sub Bytes

    Involves splitting the input into bytes and passing each through a Substitution Box, or S-Box. Unlike DES, AES uses the same S-Box for all bytes. The AES S-Box implements the multiplicative inverse in the Galois field GF(2^8).

  6. Shift Rows

    Each row of the 128-bit internal state of the cipher is shifted. The rows in this stage refer to the standard representation of the internal state in AES, a 4×4 matrix where each cell contains a byte. Bytes of the internal state are placed in the matrix across rows from left to right and down columns.

  7. Mix Columns

    Provides diffusion by mixing the input around. Unlike Shift Rows, Mix Columns operates on the matrix by columns instead of rows. Unlike standard matrix multiplication, Mix Columns performs matrix multiplication in GF(2^8).

Decryption with AES

To decrypt an AES-encrypted ciphertext, it is necessary to undo each stage of the encryption operation in the reverse order in which they were applied. The three stages of decryption are as follows (a library-level roundtrip example is sketched after the list):

  1. Inverse Final Round
    • Add Round Key
    • Shift Rows
    • Sub Bytes
  2. Inverse Main Round
    • Add Round Key
    • Mix Columns
    • Shift Rows
    • Sub Bytes
  3. Inverse Initial Round
    • Add Round Key
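
In practice, all of these per-round operations are handled inside the cryptographic library. As a minimal sketch (the message text and key size are arbitrary), an encrypt-and-decrypt roundtrip with the .NET AES implementation in PowerShell looks like this:

  # Encrypt a short message with AES-128 and decrypt it again to verify the roundtrip
  $aes = [System.Security.Cryptography.Aes]::Create()
  $aes.KeySize = 128
  $aes.GenerateKey()
  $aes.GenerateIV()
  $plaintext  = [System.Text.Encoding]::UTF8.GetBytes('Sample message for an AES roundtrip')
  $ciphertext = $aes.CreateEncryptor().TransformFinalBlock($plaintext, 0, $plaintext.Length)
  $decrypted  = $aes.CreateDecryptor().TransformFinalBlock($ciphertext, 0, $ciphertext.Length)
  [System.Text.Encoding]::UTF8.GetString($decrypted)   # prints the original message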

Conclusion

The study of various algorithms shows that a model's strength depends upon key management, the type of cryptography, the number of keys, and the number of bits used in a key. All keys are based on mathematical properties. Keys with more bits require more computation time, meaning the system takes longer to encrypt the data. AES is a mathematically efficient and elegant cryptographic algorithm, and one of its main strengths is the choice of key lengths: AES allows you to choose a 128-bit, 192-bit, or 256-bit key, making brute-force attacks exponentially harder as the key length grows. AES uses a substitution-permutation design, which involves a series of substitution and permutation steps to create the encrypted block.


Free Downloads

Datasheet of Encryption Consulting Services

Encryption Consulting is a customer focused cybersecurity firm that provides a multitude of services in all aspects of encryption for our clients.

Download
Encryption Services

About the Author

Anish Bhattacharya is a Consultant at Encryption Consulting, working with PKIs, HSMs, creating Google Cloud applications, and working as a consultant with high-profile clients.

Read time: 3 minutes 42 seconds

Kubernetes is an open-source container-orchestration system used to automate the deployment, scaling, and management of containerized applications. Kubernetes manages all the elements that make up a cluster, from each microservice in an application to entire clusters. Using containerized applications as microservices can give organizations more flexibility and security benefits than monolithic software platforms, but it also introduces other complexities.

Recommendations

  1. Kubernetes Pod security
    1. Containers built to run applications should run as non-root users
    2. Run containers with immutable file systems whenever possible
    3. Regularly scan container images for potential vulnerabilities or misconfigurations
    4. Use Pod Security Policies to enforce a minimum level of security, including:
      1. Preventing privileged containers
      2. Denying container features that are frequently exploited to breakout, like hostPID, hostIPC, hostNetwork, allowedHostPath
      3. Rejecting containers that execute as root user or allow elevation to root
  2. Network separation and hardening
    1. Lock down access to the control plane nodes using a firewall and RBAC (Role-Based Access Control)
    2. Limit access to the Kubernetes etcd server
    3. Configure control plane components to use authenticated, encrypted communications using TLS/SSL certificates
    4. Set up network policies to isolate resources. Pods and services in different namespaces can communicate unless additional separation is applied, such as network policies.
    5. All credentials and sensitive information should be placed in Kubernetes Secrets rather than in configuration files. Encrypt Secrets using a robust encryption method
  3. Authentication and authorization
    1. Disable anonymous login
    2. Using strong user authentication
    3. Create RBAC policies that limit administrator, user, and service account activity
  4. Log auditing
    1. Enable audit logging
    2. Persist logs to ensure availability in case of pod, node, or container level failure
    3. Configuring a metrics logger
  5. Upgrading and application security practices
    1. Immediately apply security patches and updates
    2. Performing periodic vulnerability scans and penetration tests
    3. Removing components from the environment when they are no longer needed

Architectural overview

Kubernetes uses a cluster architecture. A Kubernetes cluster comprises one or more control plane nodes and one or more physical or virtual machines called worker nodes, which host Pods; Pods contain one or more containers. A container is an executable image that includes a software package and all its dependencies.

The control plane makes decisions about clusters. This includes scheduling the running of containers, detecting/responding to failures, and starting new Pods if the number of replicas listed in the deployment file is unsatisfied.

Kubernetes Pod security

Pods consist of one or more containers and are the smallest deployable Kubernetes unit. Pods are often a cyber actor's initial execution environment after exploiting a container. Pods should be hardened to make exploitation more difficult and to limit the impact of a compromise.

“Non-root” and “rootless” container engines

Many container services run as privileged root users, and applications can execute inside the container as root despite not requiring privileged execution. Preventing root execution using non-root containers or a rootless container engine limits the impact of a container compromise. These methods affect the runtime environment significantly; thus, applications should be tested thoroughly to ensure compatibility.

Non-root containers

Some container engines allow containers to run applications as non-root users with non-root group membership. This non-default setting is configured when the image is built.

Rootless container engines

Some container engines can run in an unprivileged context rather than using a daemon running as root. For this scenario, execution would appear to use the root user from the containerized application’s perspective, but the execution is remapped to the engine’s user context on the host.

Immutable container file systems

Containers are permitted mostly unrestricted execution within their own context. A threat actor who has gained execution in a container can create files, download scripts, and modify applications within the container. Kubernetes can lock down a container's file system, thereby preventing many post-exploitation activities. These limitations can also affect legitimate container applications and can potentially result in crashes or abnormal behavior. To avoid breaking legitimate applications, Kubernetes administrators can mount secondary read/write file systems for the specific directories where applications require write access.
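
As a minimal sketch of these recommendations combined (the Pod name, image, user ID, and writable path are placeholder assumptions, not values from this article), a Pod spec can enforce a non-root user and a read-only root file system while still granting write access to one scratch directory through an emptyDir volume. Applied from PowerShell with kubectl, it might look like this:

$podSpec = @"
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  securityContext:
    runAsNonRoot: true            # refuse to start the Pod if the image would run as root
    runAsUser: 1000               # placeholder non-root UID
  containers:
  - name: app
    image: example.com/app:1.0    # placeholder image
    securityContext:
      readOnlyRootFilesystem: true       # immutable container file system
      allowPrivilegeEscalation: false
    volumeMounts:
    - name: scratch
      mountPath: /tmp                    # the one directory the app may write to
  volumes:
  - name: scratch
    emptyDir: {}
"@
$podSpec | kubectl apply -f -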

Building secure container images

Container images are usually created either by building a container from scratch or by building on top of an existing image pulled from a repository. Even when trusted repositories are used to build containers, image scanning is key to ensuring deployed containers are secure. Images should be scanned throughout the container build workflow to identify outdated libraries, known vulnerabilities, or misconfigurations, such as insecure ports or permissions. One approach to implementing image scanning is an admission controller. An admission controller is a Kubernetes-native feature that can intercept and process requests to the Kubernetes API before the object is persisted but after the request is authenticated and authorized. A custom webhook can be implemented to scan any image before deploying it in the cluster. The admission controller can block deployments if the image does not comply with the security policies defined in the webhook configuration.

Conclusion

This was an introduction to properly managing and securing Kubernetes clusters and securely deploying them in your environment. We will dive deeper into more controls and policies organizations can use to strengthen their security.

Free Downloads

Datasheet of Encryption Consulting Services

Encryption Consulting is a customer focused cybersecurity firm that provides a multitude of services in all aspects of encryption for our clients.

Download
Encryption Services

About the Author

Anish Bhattacharya is a Consultant at Encryption Consulting, working with PKIs, HSMs, creating Google Cloud applications, and working as a consultant with high-profile clients.

Let's talk