Troubleshooting LDAP issues can seem tricky. This blog should help you on your troubleshooting journey: we will discuss two scenarios that cover the most common causes of LDAP errors in a PKI environment.
Scenario 1
This scenario assumes that your certificate was not published in Active Directory. To resolve this issue, run the following commands:
To resolve AIA issues: certutil -dspublish -f <path to root certificate> RootCA
To resolve CDP issues: certutil -dspublish -f <path to root crl> <hostname>
If the issue exists for the Issuing CA, replace RootCA with SubCA and use the issuing CA's hostname (a sketch follows).
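For illustration, a run with hypothetical file names and a hypothetical CA hostname might look like this (adjust the paths, CA names, and hostname to your environment):
certutil -dspublish -f "C:\Certs\Encon Root CA.crt" RootCA
certutil -dspublish -f "C:\Certs\Encon Root CA.crl" CA01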
After this, you can check if the certificate is present in Active Directory or not.
To do so, log in to the Domain Controller, open adsiedit.msc, connect to Configuration, and navigate to Services > Public Key Services > AIA to check the certificates present. If the certificates are present and you still receive the error, follow Scenario 2.
Scenario 2
If Scenario 1 does not fix the issue, the LDAP URL may have been configured incorrectly when the AIA points were set up on your Root CA.
To resolve this, first open PKIView.msc to check which LDAP URL your PKI is looking for, and copy that URL for reference.
Export the existing object from Active Directory with ldifde, then remove the GUID, USN information, and other replication metadata from the export, and update the DN to match the URL your PKI expects.
Publish the updated object (see the sketch after these steps): ldifde -i -f c:\export.txt
The object should now be in the new location
PKIView should show no errors
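A minimal sketch of that export/import round trip, assuming a hypothetical issuing CA object under the AIA container (substitute the DN that PKIView reported):
ldifde -d "CN=Encon Issuing CA,CN=AIA,CN=Public Key Services,CN=Services,CN=Configuration,DC=encon,DC=com" -f c:\export.txt
Edit c:\export.txt to delete the objectGUID, uSNCreated, and uSNChanged attributes and correct the DN, then import it:
ldifde -i -f c:\export.txt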
Conclusion
Issues with CDP and AIA LDAP locations can be tricky, and misconfiguration is often hard to track down. These two scenarios cover the most common LDAP URL issues you may face in your PKI environment: if Scenario 1 does not fix your issue, Scenario 2 usually will.
Active Directory (AD) is a critical component of many organizations’ IT infrastructure. It provides a central repository for user, group, and computer accounts, as well as a variety of other objects, such as shared resources and security policies. For AD to function properly, certain ports must be opened on the firewall to allow communication between AD servers and clients.
Ports required for AD communication
The following ports are required for basic AD communication:
TCP/UDP port 53: DNS
TCP/UDP port 88: Kerberos authentication
TCP port 135: RPC endpoint mapper
UDP port 137-138: NetBIOS
TCP/UDP port 389: LDAP
TCP port 445: SMB
TCP/UDP port 464: Kerberos password change
TCP port 636: LDAP over SSL (LDAPS)
TCP port 3268-3269: Global catalog
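If one of these ports is blocked, the sketch below shows how you might open it with Windows Firewall from an elevated PowerShell prompt. The rule name and port are illustrative; in production, scope the rule to your AD subnets:
New-NetFirewallRule -DisplayName "Allow LDAP inbound" -Direction Inbound -Protocol TCP -LocalPort 389 -Action Allow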
In addition to these ports, other ports may be required depending on your AD environment’s specific components and features. For example, if you are using Group Policy, the following ports will also be required:
TCP port 80: HTTP
TCP port 443: HTTPS
TCP port 445: SMB
If you are using ADFS (Active Directory Federation Services) for single sign-on, the following ports will also be required:
TCP port 80: HTTP
TCP port 443: HTTPS
TCP port 49443: ADFS
Ports required for PKI communication
In order for a PKI to function properly, certain ports need to be opened on the firewall to allow communication between the various components of the PKI system. These ports include:
TCP port 389
This port is used for LDAP communication, which is required for clients to access the certificate database on the CA server.
TCP port 636
This port is used for LDAPS communication, a secure version of LDAP that uses SSL/TLS for encryption. This is required if you are using LDAP over a public network.
TCP port 9389
This port is used by Active Directory Web Services (ADWS), which management tools such as the Certificates snap-in in the Microsoft Management Console (MMC) may rely on when accessing the CA server.
In addition to these ports, you may also need to open other ports depending on your PKI system’s specific components and configuration. For example, if you are using the Online Certificate Status Protocol (OCSP) to check the status of certificates, you will need to open the port your OCSP responder listens on; Microsoft's Online Responder uses HTTP (TCP port 80) by default, while some third-party responders listen on TCP port 2560.
Troubleshooting firewall issues with PKI
To troubleshoot common firewall issues with a PKI, you can follow these steps:
Verify that the necessary ports are open on the firewall. You can do this by using the netstat command to list all of the open ports on the system and comparing the results against the list of ports required for your PKI system (see the sketch after these steps).
Check the firewall logs to see any entries related to the PKI system. This can help you to identify any specific rules or settings that may be blocking the necessary ports.
Test the connectivity between the PKI components to ensure they can communicate properly. You can do this by using the ping, telnet, or tracert commands to test the connectivity between the client and the CA server and between other components of the PKI system.
If you are still having issues with the firewall, try temporarily disabling the firewall to see if this resolves the problem. This will help you to determine whether the firewall is the cause of the issue or if there is a problem with another component of the PKI system.
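As a sketch of steps 1 and 3, assuming a hypothetical CA/DC named ca01 (substitute your own hosts and ports):
# List listening ports on the local system
netstat -ano | findstr LISTENING
# Test connectivity to LDAP and LDAPS on the CA/DC from a client
Test-NetConnection -ComputerName ca01 -Port 389
Test-NetConnection -ComputerName ca01 -Port 636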
Conclusion
Maintaining the proper firewall configuration is important in ensuring that your Active Directory and PKI system functions properly. By verifying that the necessary ports are open and troubleshooting any firewall issues that may arise, you can help to keep your Active Directory and PKI system secure and reliable.
CDP and AIA points can sometimes be confusing, but they are among the most important pillars of a functional PKI environment. Properly configured CDP and AIA points make for a healthier PKI environment, and debugging their issues can be equally tricky. This article aims to help you prevent and resolve any CDP/AIA issues you may face during the PKI configuration and debugging stages.
Configuration of CDP/AIA Points
Proper configuration of CDP and AIA points can be tricky. It involves granting the correct permissions on the locations where CRLs are published and ensuring those locations are reachable by the clients that access them.
Improper configuration of CDP can result in two scenarios:
CRL fails to be published, or
CRLs cannot be accessed
Similarly, if AIA is not configured properly, then certificates of root and issuing CAs cannot be accessed.
In both of the scenarios, PKI fails to function properly.
Configuration of AIA
AIA is the easiest to configure. If OCSP is used, the AIA decimal value may include 34 (2 + 32); otherwise, it's always 2.
Display Name – Decimal Value
Publish at this location – 1
Include in the AIA extension of issued certificates – 2
Include in the Online Certificate Status Protocol (OCSP) extension – 32
Decimal value 1 is included when publishing to the %windir%\System32\certsrv\CertEnroll location. It can also be used to publish directly to the AIA location if proper permissions are granted.
[Note: If the location is behind a load balancer, the CA cannot access both servers, which may cause failures. Do not publish to those locations; a manual copy is recommended instead.]
Decimal value 2 is included when the URL is to be added to issued certificates. These URLs act as the AIA points from which the certificates can be retrieved.
Example:
Here, we give the local CertEnroll folder decimal value 1, as that is where the certificate is published. The HTTP location has decimal value 2: we do not publish the certificates there directly, but it acts as the AIA location from which other clients can access the certificate. A sketch of the corresponding command follows.
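A sketch of that configuration with certutil, assuming a hypothetical host pki.example.com (the numeric prefix before each URL is the decimal value; restart the CA service afterward):
certutil -setreg CA\CACertPublicationURLs "1:C:\Windows\System32\certsrv\CertEnroll\%1_%3%4.crt\n2:http://pki.example.com/CertEnroll/%1_%3%4.crt"
net stop certsvc && net start certsvc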
Configuration of CDP
Configuration of CDP can depend on how the CA is configured, how the CRLs are published and accessed, and if Delta CRLs are published.
Display Name – Description – Decimal Value
Publish CRLs at this location – Used by the CA to determine whether to publish base CRLs to this location – 1
Include in the CRL Distribution Point (CDP) extension of issued certificates – Used by clients during revocation checking to find the base CRL location – 2
Include in [base] CRLs – Used by clients during revocation checking to find the delta CRL location from base CRLs – 4
Include in all CRLs – Can be used by an offline CA to specify the LDAP URL for manually publishing CRLs; you must also set the explicit configuration container in the URL or the DSConfigDN value in the registry (certutil -setreg CA\DSConfigDN CN=...) – 8
Publish Delta CRLs to this location – Used by the CA to determine whether to publish delta CRLs to this location – 64
(Decimal values 16 and 32 have no corresponding option when configuring CDP locations.)
Decimal values are used accordingly based on how the CDP needs to be configured.
Decimal value 1 is mostly used to publish CRLs to the %windir%\System32\certsrv\CertEnroll location, or to locations the CA has proper permissions to access. If delta CRLs are also published, 65 (1 + 64) is used.
[Note: If the location is behind a load balancer, the CA cannot access both servers, which may cause failures. Do not publish to those locations; a manual copy is recommended instead.]
Decimal value 2 includes the URL in issued certificates and lets it act as a CDP location from which base or delta CRLs are accessed.
Examples (each combination is set per URL, as in the sketch below):
Decimal value 79 = publish CRLs at this location (1) + include in the CDP extension of issued certificates (2) + include the delta CRL location in base CRLs (4) + include in all CRLs (8) + publish delta CRLs to this location (64)
Decimal value 65 = publish CRLs at this location (1) + publish delta CRLs to this location (64)
Decimal value 6 = include in the CDP extension of issued certificates (2) + include the delta CRL location in base CRLs (4)
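A sketch of such a combined CDP configuration with certutil, again assuming a hypothetical host pki.example.com:
certutil -setreg CA\CRLPublicationURLs "65:C:\Windows\System32\certsrv\CertEnroll\%3%8%9.crl\n6:http://pki.example.com/CertEnroll/%3%8%9.crl"
net stop certsvc && net start certsvc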
CRL Replacement Tokens
In the above examples, you may have noticed %1, %3, %4, %8, and %9. These replacement tokens describe how the CA is configured and what each file's name should be. If the files are renamed, the AIA and CDP points may fail because the naming convention no longer matches.
Token Name – Description – Map Value
ServerDNSName – The DNS name of the CA server – %1
ServerShortName – The NetBIOS name of the server – %2
CAName – The name of the CA – %3
Cert_Suffix / CertificateName – The renewal extension of the CA certificate (Cert_Suffix is the Windows 2000 mapping; CertificateName is the Windows 2003 mapping) – %4
ConfigurationContainer – The location of the Configuration container in AD – %6
CATruncatedName – The "sanitized" name of the CA – %7
CRLNameSuffix – The renewal extension of the CRL – %8
DeltaCRLAllowed – If delta CRLs are allowed, a plus sign (+) is appended to the file name to indicate a delta CRL – %9
CDPObjectClass – The object class identifier for CDP objects, used when publishing to an LDAP URL – %10
CAObjectClass – The object class identifier for a certification authority, used when publishing to an LDAP URL – %11
Based on these mapping values, the AIA and CDP points follow a naming convention that lets clients find the correct file at those locations.
Debugging CDP/AIA location issues
Even if the CDP/AIA locations are properly configured, the published files can be missing; these steps will help resolve such issues. We will use an AIA issue as the example, but the same steps also work for CDP issues.
After we open and check PKIView.msc, we can see where the issue is. We can copy the URL to a notepad for further investigation.
If the issue is with the LDAP location, AD does not have our certificate. This is quite an easy fix.
Certificates retrieved via LDAP come from the Domain Controller. If we open the Domain Controller and ADSIEdit.msc, we can navigate to Services > Public Key Services > AIA and check the certificates present. If our Issuing CA certificate is absent, the PKI environment cannot retrieve it from that location.
To resolve this, we navigate to our issuing CA and run the command certutil -dspublish -f <path to certificate> SubCA
Once the command runs successfully, we can refresh PKIView.msc to check whether the issue is resolved; we should see a clean slate.
However, if AIA location #2 (the HTTP location) is causing errors, it is because the certificate is not present at that server endpoint.
To resolve this, we copy the certificate from the %windir%\System32\certsrv\CertEnroll location to the web server that hosts our certificates, for example:
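A sketch with hypothetical file and server names (substitute your CA's certificate file and your web server's share or webroot):
copy "%windir%\System32\certsrv\CertEnroll\CA02.encon.com_Encon Issuing CA.crt" \\webserver01\pki\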
This would resolve our issue, which we can check on PKIView.msc again.
Conclusion
Issues with CDP and AIA locations can be tricky. Misconfiguration can often cause issues, which can be harder to track. With this guide, we hope to make the configuration of CDP/AIA points much easier with debugging steps to support any technical issues.
An issuing CA often needs to be decommissioned for a variety of reasons, such as:
The operating system is reaching the end of its life
CA might be compromised
CA is having operational issues and is too complicated to clean up.
No matter the reason, migrating to a new issuing CA can seem easy, yet it comes with its own challenges, such as mitigating risks and minimizing operational impact.
While migrating issuing CAs, we need to ensure that:
The current certificates that are issued remain operational until their validity runs out.
The old issuing CA should not issue any more certificates.
We can move to other CDP/AIA points according to the required changes, while the old issuing CA retains minimal operation and has no impact on the PKI infrastructure.
Prerequisites
Before we begin, you need a new server that will act as the new issuing CA. The server should be configured, and the issuing CA role should be installed.
Steps to Migrate
These steps will help organizations migrate to a new issuing CA.
If CDP/AIA points are not being changed, steps 2, 4, and 5 are optional and can be skipped.
Log onto the old Issuing CA
Identify CDP/AIA Distribution Points (optional)
Open command prompt
Type the command
certutil -getreg CA\CACertPublicationURLs
Document the HTTP AIA points. Ignore the LDAP and local %windir% paths
Right-click on Revoked Certificates, and then click Properties
Uncheck “Publish Delta CRL”
Edit the “CRL publication interval” to 99 years
Click OK
Open command prompt as administrator
Type the following command
certutil -crl
Copy old CA’s certificate (crt) and Certificate Revocation List (CRLs) files to new CDP/AIA Points (optional)
Navigate to %windir%\System32\CertSrv\CertEnroll
Copy the old CA’s crt and CRL files to new CDP/AIA Points
Redirect AIA and CDP points of old CA to the new location
This can be done by using an IIS redirect or a DNS CNAME to redirect the AIA and CRL locations of the old Certification Authority.
Document all certificate templates and stop certificate publishing on old Issuing CA
Open the command line with elevated privileges
Run
Certutil -catemplates > c:\catemplates.txt
and document all certificate templates published at the old Certification Authority
The output lists every certificate template currently published on the CA.
Launch the Certification Authority console
Navigate to “Certificate Templates”
Highlight all templates in the right pane, right-click and then click “Delete”
The old Certification Authority can no longer issue certificates and has its AIA and CRL locations redirected to the new distribution points. The next steps detail how to document the certificate templates published on the old issuing CA and how to make them available on the new issuing CA.
Sort Certification Authority Database, identify and document all certificates issued based on certificate templates
Open Certificate Authority Console.
Highlight Issued Certificates.
Move to the right and sort by “Certificate Templates.”
Note: Replace Template in the query with the correct template name (a sketch of the command follows)
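A sketch of the query, assuming a hypothetical template name of WebServer and the output file c:\TemplateType.txt:
certutil -view -restrict "CertificateTemplate=WebServer" -out "RequestID,RequesterName,NotAfter" csv > c:\TemplateType.txt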
Review the output on TemplateType.txt and document all certificates that need immediate action (requiring issuance from the new CA infrastructure if needed, such as Web Server Certificate)
Consult with application administrators using the certificates to determine the best approach to replace the certificates if needed
Document certificates based on custom certificate types
Open Certification Authority Console
Right click on Certificate Templates, and click Manage
Double-click the certificate template and click on the “Extensions” tab
Click on “Certificate Template Information”
Copy the Object Identifier (OID) number. Custom template OIDs fall under Microsoft's 1.3.6.1.4.1.311.21.8 arc, so the number will look similar to 1.3.6.1.4.1.311.21.8.x.x...
Note: Replace the OID number in the query with the number identified in step 5 (a sketch of the command follows)
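A sketch of the same query by OID, with a hypothetical OID value:
certutil -view -restrict "CertificateTemplate=1.3.6.1.4.1.311.21.8.1234567.7654321.1.2.3" -out "RequestID,RequesterName,NotAfter" csv > c:\CustomTemplateType.txt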
Examine the output of c:\CustomTemplateType.txt and document all the certificates needing immediate action (requiring issuance from the new CA infrastructure if needed, such as custom SSL certificates).
Consult with the application administrator using the certificates to determine the best approach to replace the certificates if needed
Enable the certificate templates identified in the results of steps 7 to 9 on the new Issuing CA
Login to new Issuing CA.
Right-click on “Certificate Templates,” click New, and click “Certificate Templates to Issue.”
Choose all certificate templates needed in the Enable Certificate Templates window and click OK.
Conclusion
With these steps, organizations can migrate to the new issuing CA while decommissioning the old Issuing CA. With Windows Server 2012 reaching end of support in October 2023, organizations need a way to migrate to newer operating systems with minimal impact.
If your organization needs assistance with this migration, feel free to email us at info@encryptionconsulting.com, and we will ensure that your migration goes as smoothly as possible.
LDAPS is one of the most crucial functionalities to properly protect and secure credentials in your PKI environment. By default, LDAP communications between client and server applications are not encrypted. This means that it would be possible to use a network monitoring device or software and view the communications traveling between LDAP client and server computers. This is especially problematic when an LDAP simple bind is used because credentials (username and password) are passed over the network unencrypted. This could quickly lead to the compromise of credentials.
Prerequisites
A functional Microsoft PKI should be available and configured. While viewing PKIView.msc, no errors should appear
If you need help in deploying your own PKI, you can refer to this article to build your own Two Tier PKI
Installing AD LDS
This step should be carried out on the LDAP server or on the Domain Controllers that will host the LDAPS service.
Open Server Manager
From manage, open Add Roles and Features
On Before you Begin, click Next
On Installation type, ensure Role-based or feature-based installation is selected, and click Next
On Server Selection, click Next.
On Server Roles, click Active Directory Lightweight Directory Services, and click Add Features, and then click Next
On Features, click Next
On AD LDS, click Next
On Confirmation, click Install
Post Installation, AD LDS needs to be configured
Configuring AD LDS
Run the AD LDS Setup Wizard. Click Next on the first page.
Ensure A unique instance is selected, and click Next
Provide Instance name and Description, and click Next
Leave default ports and click Next
If AD LDS is installed on a domain controller, the LDAP port will be 50000 and the SSL port will be 50001
On Application Directory Partition, click Next
On File locations, click Next
On Service Account Selection, you may leave it on the Network service account, or choose a preferred account that can control LDAPS service
On AD LDS administrators, leave the current admin, or choose another account from the domain
Choose all LDF Files to be imported, and click Next
On Ready to Install, click Next
After Installation, click Finish
Publishing a certificate that supports Server Authentication
Login to the Issuing CA as enterprise admin
Ensure you are in Server Manager
From the Tools menu, open Certificate Authority
Expand the console tree, and right click on Certificate Templates
Select Kerberos Authentication (as it provides Server Authentication). Right click and select Duplicate Template. We can now customize the template.
Change the Template Display Name and Template Name on the General tab. Check Publish Certificate in Active Directory. This ensures that the certificate appears when we enroll domain controllers using that template
On Request Handling, check Allow private key to be exported.
On the Security tab, provide Enroll permissions to appropriate users
Click Apply
Issue the Certificate on Issuing CA
Login to the Issuing CA as enterprise admin
Ensure you are in Server Manager
From the Tools menu, open Certificate Authority
Expand the console tree, and click on Certificate Templates
On the menu bar, click Action > New > Certificate Template to Issue
Choose the LDAPS certificate
Click OK and it should now appear in Certificate Templates
Requesting a certificate for Server Authentication
Log into LDAP server or domain controller.
Press Win+R and run mmc
Click File and click Add/Remove Snap-in
Choose Certificates and click Add
Choose Computer account
If the steps are followed on LDAPServer where AD LDS is installed, click Local computer, or choose Another computer and choose where it would need to be installed
Expand the console tree, and inside Personal, click Certificates
Right click on Certificates and click All Tasks and select Request New Certificate
Follow the instructions, choose the LDAPS template that we issued earlier, and install.
Once Installed click Finish
Open the certificate, and in Details tab, navigate to Enhanced Key Usage to ensure Server Authentication is present.
Validating LDAPS connection
Login to LDAP Server as Enterprise admin
Press Win+R and run ldp.exe
On the top menu, click on Connections, and then click Connect
In Server, provide the domain name, ensure SSL is checked and the proper port is provided (636, or 50001 if using the AD LDS instance from earlier), and click OK
No errors should appear. If the connection is unsuccessful, ldp.exe prints an error in its output pane instead. The connection can also be verified with the sketch below.
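A minimal PowerShell sketch that attempts an LDAPS bind, assuming a hypothetical server name dc01.encon.com (substitute your own server and port):
Add-Type -AssemblyName System.DirectoryServices.Protocols
$conn = New-Object System.DirectoryServices.Protocols.LdapConnection "dc01.encon.com:636"
$conn.SessionOptions.SecureSocketLayer = $true
# Bind throws an exception if the LDAPS handshake or authentication fails
$conn.Bind()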
Conclusion
This enables LDAPS, which can be used to properly protect credentials in your PKI environment, and allows other applications to use LDAPS as well.
Organizations serving users across multiple regions often face latency and availability constraints with a single-region PKI. To overcome this, organizations traditionally deploy region-specific PKI infrastructure, which can be harder to maintain and introduces complexity to the whole infrastructure.
But using Azure, organizations can deploy a PKI infrastructure that can be operated worldwide with low latency and high availability.
In this article, we will show you how to deploy your own PKI architecture on Azure.
Note: If this is your first time deploying a PKI, I recommend following the ADCS Two Tier PKI Hierarchy Deployment guide first, as it is a more straightforward approach and also covers the basics.
Prerequisites
An Azure account where we will create Virtual Machines and blob storage
A custom domain name
An offline Windows Server VM, which will be our Root CA
[NOTE: This is a test scenario. As such, CDP and AIA points may not match your requirements. Do use values that are appropriate as per your requirements.]
Preparing CDP and AIA points
We will create blob storage that will act as our CDP/AIA points for our PKI infrastructure. We will also associate it with our custom domain to redirect it to our blob.
Creating Azure Blob Storage
First, we would need to log into our Azure account and navigate to Storage Accounts
We will be creating a new storage account. So click Create on the top left corner.
Provide the necessary details on the basics. For Redundancy, I would recommend at least Zone-redundant Storage (ZRS)
On the Advanced tab, leave everything on default and click next
On the Networking tab, it is recommended to have public access from selected virtual networks and IP addresses and select the Virtual network where all the virtual machines will be deployed. If no virtual network exists, do create one.
On the Data Protection tab, click Next.
On the Encryption tab, leave everything default and click Next.
Provide relevant tags and click Next.
On the Review tab, review that everything looks good and click Create.
This will create the blob storage. Next, we will associate this blob storage with our custom domain and ensure it is accessible via HTTP.
Mapping a custom domain to Azure Blob Storage
For this step, you need a custom domain. Once you log in to your DNS provider, navigate to the DNS settings
In the DNS settings, navigate to DNS records and create a CNAME record.
Now we need to retrieve the hostname for your storage account. Navigate to Settings > Endpoints in the left pane and copy the Static website endpoint. It will look something like https://pkitest.z13.web.core.windows.net/
Remove the https:// and the trailing /. The result, pkitest.z13.web.core.windows.net, is our hostname
Now, in the DNS settings, provide pkitest as the host name of the CNAME record, and provide the hostname of the storage endpoint as its target
Click to create a record
Navigate to Azure Storage account, click on Networking under Security + Networking and select Custom Domain on the tab above.
Provide the subdomain you created.
Click Save. After successful validation, you will get a validation notification
Disabling secure transfer required
Because this blob acts as a CDP/AIA point, we need HTTP access to the blob, which is why we need to turn off secure transfer. If it is left enabled, HTTP access is not possible, and our PKI cannot use this blob as a CDP/AIA point.
Navigate to Configuration under Settings
Set Secure Transfer Required to Disabled
Click Save
Testing Accessibility of Storage Account
This section will ensure our storage account is accessible via a custom domain.
First, we would create a container and upload a file to it
Navigate to Containers under Data Storage
In the top left corner, click + Container
Provide the name, set the public access level to Blob, and click Create
The container will be created
Click on the name and navigate inside it
In the top left corner, click Upload
Select any file for testing (preferably a pdf or txt file)
Click Upload, and once uploaded, it should be available in the container
Now, we will try to access the file using the custom domain. The URL should be http://<your subdomain>.<your domain>/<container name>/<file name>
Ensure the file is opened over HTTP and that it displays or downloads the file
This concludes our section on preparing CDP and AIA points. Next, we will begin creating our PKI. Now you may delete the test file from the container as it would only contain the cert and CRLs.
Creating Domain Controller
This Step-by-Step guide uses an Active Directory Domain Services (AD DS) forest named encon.com. DC01 functions as the domain controller.
Firstly, we will deploy a VM on Azure. Ensure both the public and private IPs are static.
While deploying, ensure,
VMs are deployed on the same Virtual Network
If deployed on the same region, ensure the subnet is the same
Public IP Address is static
Once the VM is created, navigate to Networking under Settings and click on the Network Interface
Navigate to IP Configuration under settings
Click on ipconfig1 in the menu and change the private IP assignment from Dynamic to Static
Click Save and go back to the VM
Provide other parameters as per your requirement and create the VM.
Configuring Network
Once the VM is created, log in and follow the steps below
Login to DC01 as a local user
Click Start, type ncpa.cpl , and press ENTER
Click on Ethernet, and then click Properties under Activity
Double Click on Internet Protocol Version 4 (IPv4)
Only change the DNS Server Address, and provide the private IPv4 of DC01
For Alternate DNS, provide 8.8.8.8 or any other public DNS service you want.
Click OK and restart the VM from the Portal
Once Restarted, log in to DC01 as a local user
Click Start, type sysdm.cpl, and press ENTER
Change the PC name to DC01, and click Restart Now when prompted.
Installing Active Directory Domain Services and Adding a new Forest
Open Server Manager. To do so, you can click the Server Manager icon in the toolbar or click Start, then click Server Manager.
Click Manage, and then click Add Roles and Features
Before you Begin, click Next
On Installation Type, click Next
On Server Selection, click Next
On Server Roles, choose Active Directory Domain Services, click Add Features, and then click Next
On Features, click Next
On AD DS, click Next
On Confirmation, click Install.
After installation, either
Click on Promote this server to a domain controller on Add Roles and Features Wizard
Or, click on Promote this server to a domain controller on Post Deployment Configurations in Notifications
On Deployment Configuration, choose to Add a new forest and provide the root domain name (“encon.com”)
On Domain Controller options, provide Directory Services Restore Mode password and click Next
Under DNS options, click Next
Under Additional options, click Next
Under Paths, click Next
Under Review options, click Next
Under Prerequisites check, click Install
Once installed, the remote connection would be terminated.
Log in to DC01 as encon\<user>
DC01 is now ready
Creating Offline Root CA
The standalone offline root CA should not be installed in the domain. It should not even be connected to a network at all.
We will be creating this Root CA on-premises. I will create this on Proxmox, but you can use VMware or VirtualBox for this installation.
After installing Windows Server 2019, follow the steps below
Log onto CA01 as CA01\Administrator.
Click Start, click Run, and then type notepad C:\Windows\CAPolicy.inf and press ENTER.
Click File and Save to save the CAPolicy.inf file under C:\Windows directory. Close Notepad
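The CAPolicy.inf contents appeared as an image in the original post. A representative file for an offline root CA, assuming a 20-year renewal validity to match this lab (adjust the values to your requirements):
[Version]
Signature="$Windows NT$"
[Certsrv_Server]
RenewalKeyLength=4096
RenewalValidityPeriod=Years
RenewalValidityPeriodUnits=20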
Installing Offline Root CA
Log onto CA01 as CA01\Administrator.
Click Start, and then click Server Manager.
Click Manage, and then click Add Roles and Features
On the Before You Begin page, click Next.
On the Select Server Roles page, select Active Directory Certificate Services, and then click Next.
On the Introduction to Active Directory Certificate Services page, click Next.
On the Select Role Services page, ensure that Certification Authority is selected, then Next.
On the Specify Setup Type page, ensure that Standalone is selected, and then click Next.
On the Specify CA Type page, ensure that Root CA is selected, and then click Next.
On the Set Up Private Key page, ensure that Create a new private key is selected, and then click Next.
Leave the defaults on the Configure Cryptography for CA page, and click Next.
On Configure CA Name page, under the Common name for this CA, clear the existing entry and type Encon Root CA. Click Next.
On the Set Validity Period page, under Select validity period for the certificate generated for this CA, clear the existing entry and type 20. Leave the selection box set to Years. Click Next.
Keep the default settings on the Configure Certificate Database page, and click Next.
Review the settings on the Confirm Installation Selections page and then click Install.
Review the information on the Installation Results page to verify that the installation is successful, and click Close.
Post Installation Configuration on Root CA
Ensure that you are logged on to CA01 as CA01\Administrator.
Open a command prompt. To do so, you can click Start, click Run, type cmd and then click OK.
To define the Active Directory Configuration Partition Distinguished Name, run the following command from an administrative command prompt
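The command itself appeared as an image in the original post; for the encon.com forest used in this lab, it would be:
Certutil -setreg CA\DSConfigDN "CN=Configuration,DC=encon,DC=com"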
To define CRL Period Units and CRL Period, run the following commands from an administrative command prompt:
Certutil -setreg CA\CRLPeriodUnits 52
Certutil -setreg CA\CRLPeriod "Weeks"
Certutil -setreg CA\CRLDeltaPeriodUnits 0
To define CRL Overlap Period Units and CRL Overlap Period, run the following commands from an administrative command prompt:
Certutil -setreg CA\CRLOverlapPeriodUnits 12
Certutil -setreg CA\CRLOverlapPeriod "Hours"
To define Validity Period Units for all issued certificates by this CA, type the following command and then press Enter. In this lab, the Enterprise Issuing CA should receive a 20-year lifetime for its CA certificate. To configure this, run the following commands from an administrative command prompt:
Certutil -setreg CA\ValidityPeriodUnits 20
Certutil -setreg CA\ValidityPeriod "Years"
Configuration of CDP and AIA points
There are multiple methods for configuring the Authority Information Access (AIA) and certificate revocation list distribution point (CDP) locations. The AIA points to the public key for the certification authority (CA). You can use the user interface (in the Properties of the CA object), certutil, or direct registry edits. The CDP is where the certificate revocation list is maintained, which allows client computers to determine whether a certificate has been revoked. This lab will have three locations for the AIA and four locations for the CDP.
Configuring AIA points
A certutil command is a quick and common method for configuring the AIA. Because the certutil command that sets the AIA modifies the registry, ensure that you run it from a command prompt run as administrator. The following certutil command configures a static file system location, an HTTP location, and a lightweight directory access protocol (LDAP) location for the AIA. Run the following command:
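The exact command appeared as an image in the original post. A representative version, assuming a hypothetical HTTP endpoint at pki.example.com (substitute your own URLs):
certutil -setreg CA\CACertPublicationURLs "1:C:\Windows\System32\certsrv\CertEnroll\%1_%3%4.crt\n2:http://pki.example.com/CertEnroll/%1_%3%4.crt\n2:ldap:///CN=%7,CN=AIA,CN=Public Key Services,CN=Services,%6%11"
A matching CDP configuration would follow the same pattern:
certutil -setreg CA\CRLPublicationURLs "65:C:\Windows\System32\certsrv\CertEnroll\%3%8%9.crl\n6:http://pki.example.com/CertEnroll/%3%8%9.crl\n10:ldap:///CN=%7%8,CN=%2,CN=CDP,CN=Public Key Services,CN=Services,%6%10"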
At an administrative command prompt, run the following commands to restart Active Directory Certificate Services and publish the CRL
net stop certsvc && net start certsvc
certutil -crl
Creating Issuing CA
Enterprise CAs must be joined to the domain. Before you install the Enterprise Issuing CA (CA02), you must first join the server to the domain. Then you can install the Certification Authority role service on the server.
Firstly, we will deploy a VM on Azure. Ensure both the public and private IPs are static.
While deploying, ensure,
VMs are deployed on the same Virtual Network
If deployed on the same region, ensure the subnet is the same
Public IP Address is static
Once the VM is created, navigate to Networking under Settings and click on the Network Interface
Navigate to IP Configuration under settings
Click on ipconfig1 in the menu and change the private IP assignment from Dynamic to Static
Click Save and go back to the VM
Provide other parameters as per your requirement and create the VM.
Configuring Network
Login to CA02 as a local user
Click Start, type ncpa.cpl, and press ENTER
Click on Ethernet, and then click Properties under Activity
Double Click on Internet Protocol Version 4 (IPv4)
Only change the DNS Server Address, and provide the private IPv4 of DC01 (if both belong to the same region), or provide the public IP address of DC01 (if they belong to different regions)
Click OK and restart the VM from the Portal
Once Restarted, log in to CA02 as a local user
Click Start, type sysdm.cpl, and press ENTER
Change the PC name to CA02 and provide the domain name under Domain. Provide domain credentials and wait until you get a success message
Click on Restart Now when prompted.
Creating CAPolicy in Issuing CA
Log onto CA02 as the local administrator.
Click Start, click Run, and then type notepad C:\Windows\CAPolicy.inf and press ENTER.
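The contents appeared as an image in the original post. A representative CAPolicy.inf for an enterprise issuing CA (LoadDefaultTemplates=0 prevents the default templates from being published automatically; adjust to your needs):
[Version]
Signature="$Windows NT$"
[Certsrv_Server]
RenewalKeyLength=4096
LoadDefaultTemplates=0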
Submit the issuing CA's certificate request to the Root CA, selecting Root CA from the Certification Authority list
Once the certificate is retrieved, a success message is displayed
Copy the issued certificate from Root CA to CA02
Login to CA02 as an Encon user and copy the certificate to the C drive
Open Certificate Authority from Tools in Server Manager
Right-click on Encon Issuing CA, click on All Tasks, and click Install CA Certificate
Navigate to the C drive, and select All Files next to File name so that the copied certificate is visible
Select the issued certificate and click Open
Right-click on Encon Issuing CA, click on All Tasks, and click Start Service
Post Installation Configuration on Issuing CA
Ensure that you are logged on to CA02 as Encon User
Open a command prompt. To do so, you can click Start, click Run, type cmd and then click OK.
To define CRL Period Units and CRL Period, run the following commands from an administrative command prompt:
Certutil -setreg CA\CRLPeriodUnits 1
Certutil -setreg CA\CRLPeriod "Weeks"
Certutil -setreg CA\CRLDeltaPeriodUnits 1
Certutil -setreg CA\CRLDeltaPeriod "Days"
To define CRL Overlap Period Units and CRL Overlap Period, run the following commands from an administrative command prompt:
Certutil -setreg CA\CRLOverlapPeriodUnits 12
Certutil -setreg CA\CRLOverlapPeriod "Hours"
To define the validity period units for all certificates issued by this CA, run the following commands from an administrative command prompt. In this lab, certificates issued by the Enterprise Issuing CA will be valid for a maximum of 5 years:
Certutil -setreg CA\ValidityPeriodUnits 5
Certutil -setreg CA\ValidityPeriod "Years"
Configuration of CDP and AIA points
There are multiple methods for configuring the Authority Information Access (AIA) and certificate revocation list distribution point (CDP) locations. The AIA points to the public key for the certification authority (CA). You can use the user interface (in the Properties of the CA object), certutil, or direct registry edits.
The CDP is where the certificate revocation list is maintained, which allows client computers to determine if a certificate has been revoked. This lab will have three locations for the AIA and three for the CDP.
Configuring AIA points
A certutil command is a quick and common method for configuring the AIA. The certutil command to set the AIA modifies the registry, so ensure that you run the command from a command prompt run as administrator.
When you run the following certutil command, you will be configuring a static file system location, an HTTP location, and a lightweight directory access protocol (LDAP) location for the AIA. Run the following command:
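As with the root CA, the exact command appeared as an image. A representative version for the issuing CA, assuming the blob endpoint behind the hypothetical custom domain pkitest.example.com (note the CertEnroll path inside the container, per the next paragraph):
certutil -setreg CA\CACertPublicationURLs "1:C:\Windows\System32\certsrv\CertEnroll\%1_%3%4.crt\n2:http://pkitest.example.com/pkitest/CertEnroll/%1_%3%4.crt\n2:ldap:///CN=%7,CN=AIA,CN=Public Key Services,CN=Services,%6%11"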
Also, per the CDP configuration, the CertEnroll folder will exist inside the pkitest container in Azure blob storage, because that folder is recursively copied from the CertSrv folder to the blob storage
At an administrative command prompt, run the following commands to restart Active Directory Certificate Services and publish the CRL
net stop certsvc && net start certsvc
certutil -crl
Uploading Certificates and CRLs to the Blob storage
Per our CDP and AIA points, the certificates will be available in the Azure blob storage. If we run PKIView.msc on the Issuing CA now, we will run into errors where the certificates or CRLs are not found
To resolve this, we need to upload:
Root CA certificate
Root CA CRL
Issuing CA certificate
The Issuing CA CRLs will be uploaded by a script we will run next.
To upload the files, copy them from their respective machines and keep them handy on your host machine. You can find these files at C:\Windows\System32\certsrv\CertEnroll on both Root CA and issuing CA.
Note: Do not copy the CRLs of Issuing CA.
Once copied, follow the steps below
Navigate to the storage account, and click on the pkitest you created
Click on Containers under Data Storage
Click on the pkitest folder
Click on Upload on the top left
Click on the browse icon and select all the files that need to be uploaded and click Open
Check to Overwrite if files already exist and then click Upload
After uploading, all the files should be available
Once the files are uploaded, navigate to CA02 and open PKIView.msc again. Now CDP points of Root and Issuing CA should be available, but the AIA point would still show an error as we didn’t copy those files to the pkitest folder
Script to copy Issuing CA CRLs
Before we begin, we need to download AzCopy. Once downloaded, extract it to C:\ so it is accessible; the script uses this location, so change the script's path if you store the application elsewhere.
You also need a folder to store the script. I recommend creating a folder on the C drive named AZCopyCode. Download the script from below and store it there. We need to make a few changes for it to work.
Note: This code was initially created by dstreefkerk. As per Windows Server 2022, this code does work. I have made some changes and fixed a few bugs.
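The script is a download, so only its core logic is sketched here, assuming a hypothetical storage account and container both named pkitest and a SAS token you generate (the full script adds logging and error handling):
# Sync the issuing CA's CRLs to the Azure blob container with AzCopy
$source = "C:\Windows\System32\certsrv\CertEnroll"
$sas    = "<your SAS token>"
$dest   = "https://pkitest.blob.core.windows.net/pkitest/CertEnroll?$sas"
& "C:\azcopy\azcopy.exe" sync $source $dest --include-pattern "*.crl"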
If you try opening the published URL in a browser and it still gives an error, check for a trailing %20 at the end, which indicates a trailing space in the configured path. To resolve this, the CDP and AIA points need to be corrected on the Root CA, and the issuing CA needs to be recreated.
Automating the script
We will automate this script using Task Scheduler to run every week. You can tweak the schedule to your requirements.
Open Task Scheduler
Click Task Scheduler (Local), and then click Create Basic Task and complete the wizard
Once completed, AZ Copy should be available in Task Scheduler Library.
Right Click AZ Copy and click Run
Refresh and check History Tab. Action Completed should appear in History
Conclusion
This concludes our AD CS installation with Azure Blob Storage. Not only is it easier to manage, but we also achieve high availability using Azure's blob storage. This helps organizations create a PKI that can operate worldwide with minimal latency and high performance, no matter where you are. If you face any issues, do remember to reach out to info@encryptionconsulting.com
Upgrading the Vormetric Data Security Manager (DSM) can seem relatively straightforward, but if something goes wrong, data can be lost or corrupted. This article does not provide the upgrade steps themselves; rather, it highlights what organizations need to be careful about, including a few technical and non-technical points you may not find in the upgrade guide. Readers can use this article as a checklist or as a base for developing a plan to upgrade their DSM. Every DSM upgrade is different and unique to the organization, so it is best to get outside consultation for planning your environment.
Planning
Everything starts with good planning, and the DSM upgrade is no different. A full scan of the current environment is vital to assess any future roadblocks or challenges you might face.
Keep a note of all the agents, their versions, and their guardpoints. When you upgrade the DSM, you can then check that all agents are available and all guardpoints are up. If something doesn't match as expected, you may need to troubleshoot before proceeding. Agent versions can also highlight agents in the environment that are incompatible with the DSM version you want to upgrade to; a compatibility matrix will help you determine which agents are compatible and which need to be upgraded. You can then develop the upgrade path you need. Thales has defined upgrade paths that must be followed to reach a particular version. For example, if you are on 5.3, you cannot upgrade directly to 6.2 without first upgrading to the intermediate version 6.0.0.
Production cutover
Many organizations looking to upgrade already have a production DSM in their environment. Rather than upgrading the existing production DSM in place, they typically use another DSM for the upgrade and conduct pilot testing before switching to it as the new production DSM. Organizations should plan the cutover accordingly.
Planning can ensure the following steps are smooth and DSM can be upgraded to the desired version without hiccups.
Upgrading DSMs
Planning can help smooth the upgrade, but upgrading a production DSM can expose challenges you might not have expected. Some of the challenges we have faced are:
Cutting over from old DSM to new DSM without agent registration
Upgrading DSM to a particular version where agents remain compatible
Configuring HA cluster with DSMs being on different subnets
Upgrading DSMs can also take time, and without proper planning of cutover, the production can also face downtime. But with adequate planning, DSM cutover can be seamless with no downtime.
Organizations should prepare exhaustively for upgrading DSMs as a wrong move can cost them terabytes of unobtainable data.
Migrating to CipherTrust Manager
Migrating to CipherTrust Manager (CTM) can come with a unique wave of challenges, where some organizations might be concerned about the availability of policies, keys, user sets, process sets, and other configurations that already exist in their DSM. Other concerns organizations may have include:
Policies, keys, user sets, and process sets migration
Migration of hosts
High Availability configuration
Downtime and system disruption
Data Loss
If organizations are already on DSM 6.4.5 or 6.4.6, they can migrate to CipherTrust Manager and restore the following objects:
Agent Keys, including versioned keys and KMIP accessible keys
Most Key Management Interoperability (KMIP) keys except KMIP-accessible agent keys
Keys associated with CipherTrust Cloud Key Manager (CCKM), including key material as a service (KMaaS) keys.
DSM hostname and host configuration
When customers migrate, we recommend exhaustive pilot testing to ensure the migration goes smoothly without data loss. Customers should also conduct a cleanup of the DSM, removing all decommissioned servers and users before migrating to CipherTrust Manager. Hosts running VTE agents below version 7.0 need to be upgraded to CipherTrust Transparent Encryption (CTE) agents before migrating so that the migration can be seamless.
Benefits of migrating to CipherTrust Manager
Improved browser-based UI
No hypervisor limitation
Fully featured remote CLI interface
Embedded CipherTrust Cloud Key Manager (CCKM)
Better Key Management capabilities with KMIP key material – integrates policy, logging, and management – bringing simplified and richer capabilities to KMIP
Supports CTE agents (renamed from VTE agents) from version 7.0
New API for better automation
Even though these improvements are for the CTM platform as a whole, there are also some specialized improvements organizations may want to consider:
Password and PED Authentication
Similar to the Luna S series HSMs, Thales provides a choice of password or Pin Entry Device (PED) authentication for CTM. This can provide improved security but is only available for the k570 physical appliance.
Multi-cloud deployment is also possible where customers can deploy on multi-cloud such as AWS, Azure, GCP, VMWare, HyperV, Oracle VM, and more. They can also form hybrid clusters of physical and virtual appliances for high-availability environments.
Conclusion
Upgrading DSMs can be challenging and requires thorough planning and troubleshooting, but migrating to CTM is another level entirely, as CTM is a different product. If organizations opt to migrate, they will need to develop new runbooks, standard operating procedures (SOPs), and architecture and design documents. With two years remaining until end of life, organizations should decide whether they will migrate to CTM or explore other options.
Encryption Consulting can help customers plan and also help in upgrading and migrating their existing DSMs. Being a certified partner with Thales and supporting Vormetric customers with their implementations and upgrading for years, Encryption Consulting can help plan a full-fledged migration to CipherTrust Manager. Encryption Consulting can also provide gap assessments and pre-planning to ensure customers do not face any data loss or serious challenges that would lead to downtime or other production issues. Contact us for more information.
The Domain Name System (DNS) is one of the best-known protocols on the Internet. Its main function is to translate human-readable domain names into their corresponding IP addresses. It is important because all devices on the Internet regularly derive the IP addresses of particular servers from DNS. The translation happens through DNS queries exchanged between the client and the DNS server, or resolver. The DNS tree is architected from the top down and is called the "DNS Hierarchy," depicted in Fig. 1.
Fig 1: DNS Hierarchy
There are two types of DNS resolvers:
Authoritative
Authoritative name servers give answers in response to queries about IP addresses. They only respond to queries about domains they are configured to answer for.
Recursive
Recursive resolvers provide the proper IP address requested by the client. They do the translation process by themselves and return the final response to the client.
In this article, we will be focusing on the second type, recursive DNS resolvers.
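For example, a recursive lookup can be exercised from PowerShell by pointing Resolve-DnsName at a public recursive resolver:
Resolve-DnsName www.example.com -Server 8.8.8.8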
DNS Cache Poisoning Attacks
Classic DNS cache poisoning attacks (around 2008) targeted a DNS resolver by having an off-path attacker fool a vulnerable resolver into issuing a query to an upstream authoritative name server.
The attacker then attempts to inject forged responses carrying the spoofed IP of the name server. If a rogue response arrives before any legitimate one and matches the "secrets" in the query, the resolver will accept and cache the rogue result.
To succeed, the attacker must guess the correct source/destination IP, source/destination port, and the query's transaction ID (TxID), which is 16 bits long. When the source and destination ports (i.e., 53) were fixed, the 16-bit TxID was the only randomness, so an off-path attacker could brute-force all possible values with 65,536 responses; optimizations such as birthday attacks can speed up the attack even further.
Defenses against DNS Cache Poisoning attacks
Since then, several defenses have been promoted to mitigate the threat of DNS cache poisoning, and they effectively render the classic attack useless. We describe the deployed solutions below, each adding randomization:
Source port randomization is perhaps the most effective and widely deployed defense, as it increases the randomness from 16 bits to 32 bits. An off-path attacker now has to guess both the source port and the transaction ID (TxID) together.
Capitalization of letters in domain names (0x20 encoding) – The added randomness depends on the number of letters, which can be quite effective, especially for longer domain names. It is a simple protocol change, but it has significant compatibility issues with authoritative name servers; thus, the most popular public resolvers do not use 0x20 encoding. For example, Google DNS uses 0x20 encoding only for allowlisted name servers, and Cloudflare has recently disabled 0x20 encoding.
Choice of name servers (server IP addresses) – Randomness also depends on the number of name servers. Most domains use fewer than ten name servers, which amounts to only two to three bits. It has also been shown that an attacker can generate query failures against certain name servers and effectively "pin" a resolver to the one remaining name server.
DNSSEC – The success of DNSSEC depends on the support of both resolvers and authoritative name servers. However, only a small fraction of domains are signed: 0.7% of .com domains, 1% of .org domains, and 1.85% of the top Alexa 10K domains, as reported in 2017. The same study states that only 12% of the resolvers enabling DNSSEC actually attempt to validate the records received. Thus, the overall deployment rate of DNSSEC is far from satisfactory.
Conclusion
DNS cache poisoning attacks are ever-changing, with new attack surfaces appearing. As we previously stated, modern DNS infrastructure has multiple layers of caching. The client often initiates a query using an API to an OS stub resolver, a separate system process that maintains an OS-wide DNS cache. The stub resolver does not perform any iterative queries; instead, it forwards the request to the next layer. A DNS forwarder, commonly found in Wi-Fi routers (e.g., in a home), likewise forwards queries to its upstream recursive resolver while maintaining a dedicated DNS cache. The recursive resolver does the real job of iteratively querying the authoritative name servers, and the answers are then returned and cached at each layer.

All layers of caches are technically subject to DNS cache poisoning, yet we generally tend to ignore stub resolvers and forwarders, which are equally susceptible to attacks. As the industry moves forward, we should be better prepared for such attacks and build better defenses accordingly.
Data security is one of the essential responsibilities of an organization, and it can be achieved using various methods. The encryption key plays a significant role in the overall process: data encryption converts plaintext into an encoded (non-readable) form so that only authorized persons or parties can access it.
Many algorithms are available for encrypting such data. Encrypted data is safe for some time, but we should never assume it is permanently secure; as time goes on, there is a chance that someone breaks it.
Fig: Encryption and Decryption Process
In this article, we consider various encryption algorithms and techniques for improving the security of data. We compare the encryption algorithms based on their performance, efficiency in hardware and software, key size, availability, implementation techniques, and speed.
Summary of the algorithms
We compare the measured speed of encryption algorithms available as standard in the Oracle JDK, using the Eclipse IDE, and then summarize various other characteristics of those algorithms. The encryption algorithms considered here are AES (with 128- and 256-bit keys), DES, Triple DES, IDEA, and Blowfish (with a 256-bit key).
Performance of the algorithms
The figure below shows the time taken to encrypt various numbers of 16-byte blocks of data using the algorithms mentioned above.
It is essential to note right from the beginning that beyond some ridiculous point, it is not worth sacrificing speed for security. However, the measurements obtained will still help us make certain informed decisions.
Characteristics of algorithms
Table 1 summarizes the main features of each encryption algorithm, with what we believe is a fair overview of the current security status of the algorithm.
Factors: RSA | DES | 3DES | AES
Created By: In 1978 by Ron Rivest, Adi Shamir, and Leonard Adleman | In 1975 by IBM | In 1978 by IBM | In 2001 by Vincent Rijmen and Joan Daemen
Key Length: Depends on the number of bits in the modulus n, where n = p*q | 56 bits | 168 bits (k1, k2, and k3) or 112 bits (k1 and k2) | 128, 192, or 256 bits
Rounds: 1 | 16 | 48 | 10 (128-bit key), 12 (192-bit key), 14 (256-bit key)
Block Size: Variable | 64 bits | 64 bits | 128 bits
Cipher Type: Asymmetric block cipher | Symmetric block cipher | Symmetric block cipher | Symmetric block cipher
Speed: Slowest | Slow | Very slow | Fast
Security: Least secure | Not secure enough | Adequate security | Excellent security
Table 1: Characteristics of commonly used encryption algorithms
Comparison
The techniques have been compared based on:
CPU processing speed for encrypting and decrypting data.
Rate of key generation.
Key size.
Security consideration.
Efficient on the hardware and software in case of implementation.
The amount of memory required to hold the data in the encryption process.
Number of users accommodated by the model.
Time required by the model to recover the data in case of key failure.
Time available to the hacker to produce various types of attacks.
The complexity of algorithm technique.
Fig: Comparison of encryption algorithm based on Percentage Efficiency
Formulation and Case Study
Case Study
Symmetric ciphers use the same key for encrypting and decrypting, so the sender and the receiver must both know, and use, the same secret key. All AES key lengths are deemed sufficient to protect classified information up to the "Secret" level, with "Top Secret" information requiring either 192- or 256-bit key lengths. There are 10 rounds for 128-bit keys, 12 rounds for 192-bit keys, and 14 rounds for 256-bit keys; a round consists of several processing steps, including substitution, transposition, and mixing, that transform the input plaintext into the final ciphertext output.
AES Design
Rounds
Padding is the method of adding dummy data to a message. During encryption, if the message length is not divisible by the block length, padding is used. For example, if the message consists of 426 bytes, we need six additional bytes of padding to make the message 432 bytes long, because 432 is divisible by 16. Three key sizes can be used in AES, and the number of rounds changes depending on the key size. The standard key size in AES is 128 bits, for which 10 rounds are performed; the key schedule expands the cipher key into eleven round keys, the first of which is added to the state before the first round. (A quick check of the padding arithmetic appears after the table below.)
| No. | Key Size | No. of Rounds |
| --- | --- | --- |
| 1 | 128 bits | 10 |
| 2 | 192 bits | 12 |
| 3 | 256 bits | 14 |
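As promised above, here is a quick check of the 426-byte padding example, computing the number of bytes needed to reach the next multiple of the 16-byte AES block size:

```java
public class PaddingDemo {
    public static void main(String[] args) {
        int messageLength = 426;  // bytes in the example message
        int blockSize = 16;       // AES block size in bytes
        // Bytes needed to reach the next multiple of the block size.
        int padding = (blockSize - messageLength % blockSize) % blockSize;
        System.out.println(padding + " bytes -> " + (messageLength + padding) + " total");
        // Prints: 6 bytes -> 432 total
    }
}
```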
For 128-bit keys, 128-bit plaintext blocks are used, and 10 rounds are performed to produce the ciphertext. A separate round key is generated for each round, plus one extra round key that is added to the state in an initial step before the first round; then the transformation begins. Each round consists of four steps:
Substitute Bytes
Shift Rows
Mix Columns
Add Round Key
The following figure shows all the encryption stages from plaintext to ciphertext.
Fig: Shows the stages of each round
Encryption with AES
The encryption phase of AES can be broken into three steps: the initial round, the main rounds, and the final round. All of the stages use the same sub-operations in different combinations as follows:
Initial Round:
Add Round Key
Main Rounds:
Sub Bytes
Shift Rows
Mix Columns
Add Round Key
Final Round:
Sub Bytes
Shift Rows
Add Round Key
Add Round Key
This is the only phase of AES encryption that directly operates on the AES round key. In this operation, the input to the round is exclusive-ORed with the round key.
Sub Bytes
This step involves splitting the input into bytes and passing each through a Substitution Box, or S-Box. Unlike DES, AES uses the same S-Box for all bytes. The AES S-Box implements multiplicative inversion in the Galois field GF(2^8), followed by an affine transformation.
Shift Rows
Each row of the 128-bit internal state of the cipher is shifted. The rows in this stage refer to the standard representation of the internal state in AES: a 4×4 matrix where each cell contains a byte. Bytes of the internal state are placed in the matrix column by column, from top to bottom and then left to right.
Mix Columns
This step provides diffusion by mixing the input around. Unlike Shift Rows, Mix Columns operates on the matrix by columns instead of rows. Unlike standard matrix multiplication, Mix Columns performs matrix multiplication over the Galois field GF(2^8).
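Since the GF(2^8) arithmetic is the part that is easiest to get wrong, here is a minimal sketch of the byte multiplication Mix Columns relies on, checked against the worked example in the AES specification (FIPS 197):

```java
public class GfMulDemo {
    // Multiply two bytes in GF(2^8) using the AES reduction polynomial
    // x^8 + x^4 + x^3 + x + 1 (0x11B); this is the multiplication
    // Mix Columns performs on each column of the state.
    static int gmul(int a, int b) {
        int product = 0;
        for (int i = 0; i < 8; i++) {
            if ((b & 1) != 0) {
                product ^= a;          // add (XOR) the current multiple
            }
            boolean highBitSet = (a & 0x80) != 0;
            a = (a << 1) & 0xFF;       // multiply a by x
            if (highBitSet) {
                a ^= 0x1B;             // reduce modulo the AES polynomial
            }
            b >>= 1;
        }
        return product;
    }

    public static void main(String[] args) {
        // FIPS 197 works through {57} * {13} = {FE}.
        System.out.printf("0x%02X%n", gmul(0x57, 0x13));
    }
}
```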
Decryption with AES
To decrypt an AES-encrypted ciphertext, each stage of the encryption operation must be undone in the reverse order in which it was applied. The three stages of decryption are as follows:
Inverse Final Round
Add Round Key
Shift Rows
Sub Bytes
Inverse Main Round
Add Round Key
Mix Columns
Shift Rows
Sub Bytes
Inverse Initial Round
Add Round Key
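In practice, these stages are never hand-rolled; a library such as the JDK's javax.crypto API performs the full cycle described above. A minimal encrypt-then-decrypt sketch using AES-128 in CBC mode with PKCS#5 padding (a common textbook choice; authenticated modes such as GCM are generally preferred today):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class AesRoundTrip {
    public static void main(String[] args) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);                         // 128-bit key -> 10 rounds
        SecretKey key = keyGen.generateKey();

        byte[] iv = new byte[16];                 // one block of random IV
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] ciphertext = cipher.doFinal(
                "attack at dawn".getBytes(StandardCharsets.UTF_8));

        // Decryption undoes each stage in reverse, as described above.
        cipher.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] recovered = cipher.doFinal(ciphertext);
        System.out.println(new String(recovered, StandardCharsets.UTF_8));
    }
}
```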
Conclusion
The study of various algorithms shows that a model’s strength depends upon its key management, the type of cryptography, the number of keys, and the number of bits used in a key. All keys derive their strength from mathematical properties. Keys with more bits require more computation time, meaning the system takes longer to encrypt the data. AES is a mathematically efficient and elegant cryptographic algorithm, but its main strength is the choice of key lengths: AES allows you to choose a 128-bit, 192-bit, or 256-bit key, and each additional key bit doubles the cost of a brute-force attack. AES uses a substitution-permutation design, involving a series of substitution and permutation steps, to create the encrypted block.
Kubernetes is an open-source container-orchestration system used to automate the deployment, scaling, and management of containerized applications. Kubernetes manages all the elements that make up a cluster, from individual microservices in an application to entire clusters. Running containerized applications as microservices gives organizations more flexibility and security benefits than monolithic software platforms, but it also introduces new complexities.
Recommendations
Kubernetes Pod security
Containers built to run applications should run as non-root users
Run containers with immutable file systems whenever possible
Regularly scan container images for potential vulnerabilities or misconfigurations
Use Pod Security Policies to enforce a minimum level of security (a manifest sketch follows this list), including:
Preventing privileged containers
Denying container features that are frequently exploited to break out, such as hostPID, hostIPC, hostNetwork, and allowedHostPath
Rejecting containers that execute as root user or allow elevation to root
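Note that Pod Security Policies were deprecated in Kubernetes v1.21 and removed in v1.25 in favor of Pod Security admission, but the same restrictions can also be expressed per Pod. A minimal sketch of a hardened Pod manifest (the Pod name, UID, and image are hypothetical placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app              # hypothetical name
spec:
  securityContext:
    runAsNonRoot: true            # reject images that would run as root
    runAsUser: 10001              # arbitrary unprivileged UID
  containers:
  - name: app
    image: registry.example.com/app:1.0   # placeholder image
    securityContext:
      privileged: false
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true        # immutable container file system
```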
Network separation and hardening
Lock down access to the control plane nodes using a firewall and RBAC (Role-Based Access Control)
Limit access to the Kubernetes etcd server
Configure control plane components to use authenticated, encrypted communication with TLS certificates
Set up network policies to isolate resources; by default, Pods and services in different namespaces can communicate unless additional separation, such as network policies, is applied (see the default-deny sketch after this list).
Place all credentials and sensitive information in Kubernetes Secrets rather than in configuration files, and encrypt Secrets using a robust encryption method
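For example, a minimal default-deny NetworkPolicy that isolates every Pod in a namespace until explicit allow rules are added (the name and namespace are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all          # hypothetical name
  namespace: production           # hypothetical namespace
spec:
  podSelector: {}                 # an empty selector matches every Pod
  policyTypes:
  - Ingress
  - Egress
```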
Authentication and authorization
Disable anonymous login
Use strong user authentication
Create RBAC policies that limit administrator, user, and service account activity
Log auditing
Enable audit logging
Persist logs to ensure availability in case of pod, node, or container level failure
Configure a metrics logger
Upgrading and application security practices
Immediately apply security patches and updates
Perform periodic vulnerability scans and penetration tests
Remove components from the environment when they are no longer needed
Architectural overview
Kubernetes uses a cluster architecture. A Kubernetes cluster comprises a control plane and one or more physical or virtual machines called worker nodes, which host Pods; a Pod contains one or more containers. A container is an executable image that packages a piece of software and all its dependencies.
The control plane makes decisions about the cluster, including scheduling containers to run, detecting and responding to failures, and starting new Pods when the number of replicas specified in the deployment file is not satisfied.
Kubernetes Pod security
Pods consist of one or more containers and are the smallest deployable Kubernetes unit. Pods are often a cyber actor’s initial execution environment after exploiting a container, so they should be hardened to make exploitation more difficult and to limit the impact of a compromise.
“Non-root” and “rootless” container engines
Many container services run as privileged root users, and applications often execute inside the container as root despite not requiring privileged execution. Preventing root execution, using either non-root containers or a rootless container engine, limits the impact of a container compromise. Both methods affect the runtime environment significantly, so applications should be tested thoroughly to ensure compatibility.
Non-root containers
Some container engines allow containers to run applications as non-root users with non-root group membership. This non-default setting is configured when the image is built.
Rootless container engines
Some container engines can run in an unprivileged context rather than using a daemon running as root. In this scenario, execution appears to use the root user from the containerized application’s perspective, but it is remapped to the engine’s user context on the host.
Immutable container file systems
Containers are permitted mostly unrestricted execution within their own context. A threat actor who has gained execution in a container can create files, download scripts, and modify applications within it. Kubernetes can lock down a container’s file system, preventing many post-exploitation activities. These limitations also affect legitimate container applications and can potentially cause crashes or abnormal behavior. To avoid damaging legitimate applications, Kubernetes administrators can mount secondary read/write file systems for the specific directories where applications require write access.
Building secure container images
Container images are usually created either by building a container from scratch or by building on top of an existing image pulled from a repository. Even when trusted repositories are used, image scanning is key to ensuring deployed containers are secure. Images should be scanned throughout the container build workflow to identify outdated libraries, known vulnerabilities, and misconfigurations such as insecure ports or permissions. One approach to implementing image scanning is an admission controller: a Kubernetes-native feature that can intercept and process requests to the Kubernetes API after a request is authenticated and authorized but before the object is persisted. A custom webhook can be implemented to scan any image before it is deployed in the cluster, and the admission controller can block the deployment if the image does not comply with the security policies defined in the webhook configuration.
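As a sketch of how such a webhook might be registered (every name, namespace, and path here is a hypothetical placeholder, and the scanning service itself is not shown):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: image-scan-policy                 # hypothetical name
webhooks:
- name: image-scan.example.com            # hypothetical webhook name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail                     # block deployment if the scanner is down
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    service:
      namespace: security                 # hypothetical namespace
      name: image-scanner                 # hypothetical service
      path: /validate
```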
Conclusion
This was an introduction to properly managing and securing Kubernetes clusters and securely deploying them in your environment. We will dive deeper into more controls and policies organizations can use to strengthen their security.