Certificate Hub runs on a Kubernetes system with the following requirements.
Kubernetes versions
Certificate Hub uses fairly generic Kubernetes capabilities. We keep the deployment as generic as possible and, to our knowledge, require no special operating system, Kubernetes, or Docker configuration.
- Certificate Hub's use of Kubernetes features is confined to open-source capabilities. Certificate Hub is supported on any active open-source Kubernetes version or commercial derivative.
- We officially support Certificate Hub on the current and two prior Kubernetes versions, consistent with generally accepted Kubernetes support policies. In practice, we have operated Certificate Hub on versions dating back to 2018.
Kubernetes worker nodes
The number of Kubernetes worker nodes is a choice for our customers and depends on Certificate Hub and the other applications sharing the Kubernetes cluster. In our AWS environment, we run multiple instances of Certificate Hub, each in its own namespace; five nodes have been sufficient for roughly 20 development, quality assurance, and special-use instances. Treat these figures as approximate: Certificate Hub has grown since then, and we have scaled back the number of environments we host.
Kubernetes pod instances
The Kubernetes deployment of Certificate Hub consists of the following pods.
Pod | Description |
---|---|
Entry | The external entry point into the Certificate Hub application. Certificate Hub exposes this pod as a service; you must point a suitable Ingress at this service. |
Internal API | The heart of Certificate Hub, implementing all new Certificate Hub features. The pod is implemented in Java and runs inside an embedded Tomcat instance. |
CertHub API | This component provides the externally accessible CertHub REST API. |
Lemur | The ephemeral pod migrating and updating the legacy database. |
PostgreSQL | An off-the-shelf PostgreSQL Docker image. |
Notification | The ephemeral pod periodically invoking the plugin automation script for the notification plugin to send out notification emails on impending expirations. |
Flyway Scripts | The ephemeral pod containing the Flyway tool and bespoke scripts. This simple run-once container connects to the Certificate Hub database, applies the scripts, and is not required again until an upgrade. |
User Create | The ephemeral pod bootstrapping initial users into the database. |
Role Update | The ephemeral pod bootstrapping initial roles into the database. |
UI | This pod hosts the static UI assets. |
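The installation creates these workloads for you; for orientation only, a Kubernetes Service exposing the Entry pod could look roughly like the following sketch. The name, namespace, labels, and port are assumptions, not the values used by the actual Certificate Hub manifests.

```yaml
# Illustrative sketch only: names, labels, and ports are assumptions and
# may differ from the manifests shipped with Certificate Hub.
apiVersion: v1
kind: Service
metadata:
  name: entry               # assumed name of the Entry service
  namespace: certhub        # assumed Certificate Hub namespace
spec:
  selector:
    app: entry              # assumed label on the Entry pod
  ports:
    - name: http
      port: 8080            # assumed service port
      targetPort: 8080      # assumed container port
```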
In most deployments, one instance of each pod will be sufficient to handle the load. Certificate Hub load is proportional to the number of active users. Background tasks, such as discovery, source certificate retrieval, reporting, and renewal, do not constitute a heavy load.
The deployment scripts instantiate a single instance of each pod. You can modify this in the Kubernetes deployment by updating the replicas settings in the following file.
acm-deployment.yaml
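As a hedged sketch of what such a replicas change looks like in a Kubernetes Deployment manifest (the Deployment name, labels, and image below are assumptions, not the actual contents of acm-deployment.yaml):

```yaml
# Sketch of scaling one Certificate Hub pod; all names and the image
# are illustrative assumptions, not taken from acm-deployment.yaml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: internal-api            # assumed Deployment name
spec:
  replicas: 2                   # raised from the default single instance
  selector:
    matchLabels:
      app: internal-api
  template:
    metadata:
      labels:
        app: internal-api
    spec:
      containers:
        - name: internal-api
          image: certhub/internal-api:latest   # assumed image reference
```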
However, there are many other considerations for high availability.
Kubernetes Ingress
Certificate Hub assumes that your Kubernetes deployment includes an Ingress Controller. You must point this controller to the Entry Service, for example by defining an Ingress. The Certificate Hub installation script creates an Ingress entry with the following name.
certhub-ingress
This works in most cases, but depending on your Ingress Controller, you may need to replace or modify this Ingress.
Certificate Hub expects all URL paths to be prefixed with a namespace. Traditionally, this namespace is the Kubernetes Namespace of the Certificate Hub installation, but you can use any string containing only letters and numbers. Certificate Hub does not function without this namespace prefix. The Ingress installed by the installation scripts defines a path with such a namespace prefix.
Your Ingress Controller must handle TLS termination; Certificate Hub does not.
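As an illustration, an Ingress along the following lines routes a namespace-prefixed path to the Entry service and terminates TLS at the controller. The hostname, ingress class, secret, service name, and port are assumptions and may differ from the certhub-ingress created by the installation script.

```yaml
# Illustrative sketch; hostname, class, secret, service name, and port
# are assumptions, not the installed certhub-ingress.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: certhub-ingress
  namespace: certhub                 # assumed Certificate Hub namespace
spec:
  ingressClassName: nginx            # assumed Ingress Controller class
  tls:
    - hosts:
        - certhub.example.com        # TLS terminates here, not in Certificate Hub
      secretName: certhub-tls        # assumed TLS secret
  rules:
    - host: certhub.example.com      # assumed external hostname
      http:
        paths:
          - path: /certhub           # the required namespace prefix
            pathType: Prefix
            backend:
              service:
                name: entry          # assumed name of the Entry service
                port:
                  number: 8080       # assumed service port
```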
Kubernetes persistent volume
Set up a persistent volume for the database in which Certificate Hub saves the configuration, certificate data, and reports.
- In AWS deployments, you can generally resize this volume. Kubernetes provides volume expansion support, which we have verified to be effective in our Entrust AWS deployments of Certificate Hub (see the sketch after this list).
- On Kubernetes platforms such as Azure, the storage setup templates may need to be modified.
- In other deployments, such as on-premises deployments or deployments using stock PostgreSQL, set the volume size in advance because the database volume cannot be resized later.
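Where the platform supports resizing, volume expansion is enabled on the StorageClass backing the claim. A minimal sketch, assuming the AWS EBS CSI provisioner (the class name is illustrative, and other platforms use different provisioners):

```yaml
# Sketch of a StorageClass that permits resizing bound claims; the class
# name is an assumption, and the provisioner varies by platform.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: certhub-storage          # assumed storage class name
provisioner: ebs.csi.aws.com     # AWS EBS CSI driver; replace on other platforms
allowVolumeExpansion: true       # allows PersistentVolumeClaims to be enlarged
reclaimPolicy: Retain            # keep the underlying volume if the claim is deleted
```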
For example, a 1 GB PersistentVolume provides enough storage for 25,000 certificates and a few weeks of reports.
Data | Quantity | Size/Item | Total |
---|---|---|---|
Certificates | 25,000 certificates | 20 KB/certificate | 500 MB |
Reports | 200 reports | 1 MB/report | 200 MB |
Total | | | 700 MB |
Report size is highly variable, and reports accumulate quickly if you run them daily, so size the volume conservatively. By default, saved reports are removed after one year; you can remove them earlier by changing the retention value in the report settings.
To set the persistent volume
- Define a persistent volume with sufficient storage when Setting a local storage in Kubernetes.
- Provide a matching persistent volume claim size when Creating the Kubernetes environment, as sketched after this list.
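For example, a 1 GB claim for the database could be declared along these lines; the claim name, namespace, and storage class are assumptions and must match your environment:

```yaml
# Illustrative sketch of a 1 GB claim for the Certificate Hub database;
# the names and storage class are assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: certhub-postgresql       # assumed claim name
  namespace: certhub             # assumed Certificate Hub namespace
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: certhub-storage   # assumed storage class (see the sketch above)
  resources:
    requests:
      storage: 1Gi               # enough for ~25,000 certificates plus reports
```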
Third-party command-line tools
Certificate Hub assumes the following third-party command-line tools are already installed.
Tool | Usage |
---|---|
htpasswd | Manage usernames and passwords of HTTP users |
kubectl | Manage Kubernetes clusters |
OpenSSL | Generate application secrets, such as the JSON Web Token (JWT) signing key |