The operator for Cloud Native PostgreSQL is installed from a standard deployment manifest and follows the convention-over-configuration paradigm. While this is fine in most cases, there are some scenarios where you want to change the default behavior, such as:
- setting a company license key that is shared by all deployments managed by the operator
- defining annotations and labels that are set in the cluster resource and inherited by all resources created by the operator
- defining a different default image for PostgreSQL or an additional pull secret
By default, the operator is installed in the `postgresql-operator-system` namespace as a Kubernetes `Deployment` called `postgresql-operator-controller-manager`. The examples below assume the default name and namespace for the operator deployment.
The behavior of the operator can be customized through a `ConfigMap` or a `Secret` located in the same namespace as the operator deployment and named `postgresql-operator-controller-manager-config`. Any change to the config's `ConfigMap` or `Secret` will not be automatically detected by the operator, and as such, it needs to be reloaded (see below). Moreover, changes only apply to the resources created after the configuration is reloaded.
The operator processes the ConfigMap values first, and then the Secret's. As a result, if a parameter is defined in both places, the one in the Secret is used.
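The precedence rule can be sketched in a few lines of shell (an illustration only, with made-up values; the operator's actual merge logic is internal):

```shell
#!/bin/sh
# Hypothetical values for the same parameter defined in both sources.
configmap_value='key-from-configmap'
secret_value='key-from-secret'

# The ConfigMap is processed first...
effective=$configmap_value
# ...then the Secret, so a value defined there overrides the ConfigMap's.
if [ -n "$secret_value" ]; then
  effective=$secret_value
fi
echo "$effective"   # prints: key-from-secret
```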
The operator looks for the following environment variables to be defined in the `ConfigMap` or `Secret`:

| Name | Description |
| ---- | ----------- |
| `EDB_LICENSE_KEY` | default license key (to be used only if the cluster does not define one, and preferably set in the `Secret`) |
| `ENABLE_REDWOOD_BY_DEFAULT` | enable the Redwood compatibility by default when using EPAS |
| `INHERITED_ANNOTATIONS` | list of annotation names that, when defined in a `Cluster` metadata, will be inherited by all the generated resources, including pods |
| `INHERITED_LABELS` | list of label names that, when defined in a `Cluster` metadata, will be inherited by all the generated resources, including pods |
| `PULL_SECRET_NAME` | name of an additional pull secret to be defined in the operator's namespace and to be used to download images |
| `ENABLE_AZURE_PVC_UPDATES` | enables deletion of a Postgres pod if its PVC is stuck in the `Resizing` condition; this feature is mainly for the Azure environment (default `false`) |
| `ENABLE_INSTANCE_MANAGER_INPLACE_UPDATES` | when set to `true`, enables in-place updates of the instance manager after an update of the operator, avoiding rolling updates of the cluster (default `false`) |
| `MONITORING_QUERIES_CONFIGMAP` | the name of a ConfigMap in the operator's namespace with a set of default queries (to be specified under the key `queries`) |
| `MONITORING_QUERIES_SECRET` | the name of a Secret in the operator's namespace with a set of default queries (to be specified under the key `queries`) |
`INHERITED_ANNOTATIONS` and `INHERITED_LABELS` support path-like wildcards. For example, the value `example.com/*` will match both `example.com/one` and `example.com/two`.
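The wildcard semantics can be illustrated with standard shell glob matching (a sketch under the assumption that the name is matched against the configured pattern as a glob; `matches` is a helper defined here, not part of the operator):

```shell
#!/bin/sh
# matches PATTERN NAME -- prints "yes" if NAME matches the glob PATTERN.
matches() {
  case "$2" in
    $1) echo yes ;;   # unquoted $1 is treated as a glob pattern
    *)  echo no ;;
  esac
}

matches 'example.com/*' 'example.com/one'   # yes: wildcard covers the suffix
matches 'example.com/*' 'other.org/one'     # no: prefix differs
```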
When you specify an additional pull secret name using the `PULL_SECRET_NAME` parameter, the operator uses that secret to create a pull secret for every PostgreSQL cluster it creates. The operator looks for the `PULL_SECRET_NAME` secret in the namespace where you installed the operator. If the operator is not able to find that secret, it will ignore the configuration parameter.
Previous versions of the operator copied the `PULL_SECRET_NAME` secret into the namespaces where you deploy the PostgreSQL clusters. As of version 1.11.0, the behavior matches the description above, and the pull secrets created by the previous versions of the operator are unused.
## Defining an operator config map
The example below customizes the behavior of the operator by defining a default license key (namely a company key), the label/annotation names to be inherited by the resources created by any `Cluster` object that is deployed at a later time, and by enabling in-place updates for the instance manager.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgresql-operator-controller-manager-config
  namespace: postgresql-operator-system
data:
  INHERITED_ANNOTATIONS: categories
  INHERITED_LABELS: environment, workload, app
  ENABLE_INSTANCE_MANAGER_INPLACE_UPDATES: 'true'
```
## Defining an operator secret
The example below customizes the behavior of the operator by defining a default license key.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgresql-operator-controller-manager-config
  namespace: postgresql-operator-system
type: Opaque
data:
  EDB_LICENSE_KEY: <YOUR_BASE64_ENCODED_EDB_LICENSE_KEY_HERE>
```
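Note that values under `data` in a Secret must be base64-encoded. One way to produce the encoded value (the key below is a placeholder, not a real EDB license key):

```shell
#!/bin/sh
# Placeholder license key -- replace with your actual EDB license key.
LICENSE_KEY='your-company-license-key'

# printf avoids the trailing newline that echo would add to the encoding;
# tr strips the line wrapping some base64 implementations insert.
ENCODED=$(printf '%s' "$LICENSE_KEY" | base64 | tr -d '\n')
echo "$ENCODED"
```

The resulting string is the value to place in the `EDB_LICENSE_KEY` field of the Secret.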
## Restarting the operator to reload configs
For the change to be effective, you need to recreate the operator pods so that the configuration is reloaded. If you have installed the operator on Kubernetes using the manifest, you can do that by issuing:
```sh
kubectl rollout restart deployment \
  -n postgresql-operator-system \
  postgresql-operator-controller-manager
```
Otherwise, if you have installed the operator using OLM, or you are running on OpenShift, run the following command, specifying the namespace the operator is installed in:
```sh
kubectl delete pods -n [NAMESPACE_NAME_HERE] \
  -l app.kubernetes.io/name=cloud-native-postgresql
```
Customizations will be applied only to `Cluster` resources created after the reload of the operator deployment.
Following the above example, if a `Cluster` definition contains a `categories` annotation or any of the `environment`, `workload`, or `app` labels, these will be inherited by all the resources generated by the deployment.
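For instance, a `Cluster` carrying those annotations and labels might look like the following sketch (the name, spec values, and `apiVersion` are illustrative assumptions, not taken from this document):

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example          # illustrative name
  annotations:
    categories: database         # listed in INHERITED_ANNOTATIONS
  labels:
    environment: production      # listed in INHERITED_LABELS
    workload: database
    app: sso
spec:
  instances: 3
  storage:
    size: 1Gi
```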
## PPROF HTTP Server
The operator can expose a pprof HTTP server with the following endpoints on `localhost:6060`:
- `/debug/pprof/`. Responds to a request for `/debug/pprof/` with an HTML page listing the available profiles.
- `/debug/pprof/cmdline`. Responds with the running program's command line, with arguments separated by NUL bytes.
- `/debug/pprof/profile`. Responds with the pprof-formatted CPU profile. Profiling lasts for the duration specified in the `seconds` GET parameter, or for 30 seconds if not specified.
- `/debug/pprof/symbol`. Looks up the program counters listed in the request, responding with a table mapping program counters to function names.
- `/debug/pprof/trace`. Responds with the execution trace in binary form. Tracing lasts for the duration specified in the `seconds` GET parameter, or for 1 second if not specified.
To enable the server, you need to edit the operator deployment and add the flag `--pprof-server=true`. You can do this by executing:
```sh
kubectl edit deployment -n postgresql-operator-system postgresql-operator-controller-manager
```
Then, on the edit page, scroll down to the container args and add `--pprof-server=true`:
```yaml
containers:
- args:
  - controller
  - --enable-leader-election
  - --config-map-name=postgresql-operator-controller-manager-config
  - --secret-name=postgresql-operator-controller-manager-config
  - --log-level=info
  - --pprof-server=true # relevant line
  command:
  - /manager
```
Save the changes; the deployment will then execute a rollout, and the new pod will have the pprof server enabled.
Once the pod is running, you can exec inside the container by doing:
```sh
kubectl exec -ti -n postgresql-operator-system <pod name> -- bash
```
Once inside, execute: