Security

This section contains information about security for Cloud Native PostgreSQL, analyzed at three different layers: Code, Container, and Cluster.

Warning

The information contained in this page does not exempt you from performing regular InfoSec duties on your Kubernetes cluster. Please familiarize yourself with the "Overview of Cloud Native Security" page from the Kubernetes documentation.

About the 4C's Security Model

Please refer to "The 4C’s Security Model in Kubernetes" blog article to get a better understanding and context of the approach EDB has taken with security in Cloud Native PostgreSQL.

Code

The source code of Cloud Native PostgreSQL is systematically scanned for static analysis purposes, including security issues, directly in the CI/CD pipeline using GolangCI-Lint, a popular open-source linter for Go. GolangCI-Lint can run several linters on the same source code.

One of these is Golang Security Checker, or simply gosec, a linter that scans the abstract syntax tree of the source code against a set of rules aimed at discovering well-known vulnerabilities, threats, and weaknesses hidden in the code, such as hard-coded credentials, integer overflows, and SQL injections, to name a few.
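
As a purely illustrative sketch, and not the project's actual configuration, gosec can be enabled together with other linters through a .golangci.yml file along these lines:

    # .golangci.yml - illustrative example only, not the project's actual configuration
    linters:
      enable:
        - gosec        # Golang Security Checker: scans the AST for known weaknesses
        - govet        # standard Go static analysis checks
        - staticcheck  # additional correctness checks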

Important

A failure in the static code analysis phase of the CI/CD pipeline is a blocker for the entire delivery of Cloud Native PostgreSQL, meaning that each commit is validated against all the linters defined by GolangCI-Lint.

Source code is also regularly inspected through Coverity Scan by Synopsys via EnterpriseDB's internal CI/CD pipeline.

Container

Every container image that is part of Cloud Native PostgreSQL is automatically built via CI/CD pipelines following every commit. Such images include not only the operator's, but also the operands', specifically every supported PostgreSQL and EDB Postgres Advanced version. Within the pipelines, images are scanned with:

  • Dockle: for best practices in terms of the container build process
  • Clair: for vulnerabilities found in both the underlying operating system and the libraries and applications they run

Important

All operand images are automatically rebuilt once a day by our pipelines to pick up security updates at the base image and package level, providing patch-level updates for the container images that EDB distributes.

Well-known guidelines and frameworks have also been taken into account for container-level security.

About container-level security

Please refer to "Security and Containers in Cloud Native PostgreSQL" blog article for more information about the approach that EDB has taken on security at container level in Cloud Native PostgreSQL.

Cluster

Security at the cluster level takes into account all Kubernetes components that form both the control plane and the nodes, as well as the applications that run in the cluster (PostgreSQL included).

Pod Security Policies

A Pod Security Policy is the Kubernetes way to define security rules and specifications that a pod needs to meet to run in a cluster. For InfoSec reasons, every Kubernetes platform should implement them.
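
For illustration only, and not a policy shipped with or required by the operator, a minimal PodSecurityPolicy expressing the kind of rules mentioned above (no privileged containers, no root user) could look like this:

    # Illustrative PodSecurityPolicy: forbids privileged containers and requires
    # pods to run as a non-root user
    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: restricted-example
    spec:
      privileged: false
      allowPrivilegeEscalation: false
      runAsUser:
        rule: MustRunAsNonRoot
      seLinux:
        rule: RunAsAny
      fsGroup:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      volumes:
        - persistentVolumeClaim
        - secret
        - configMap
        - emptyDir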

Cloud Native PostgreSQL does not require privileged mode for container execution. PostgreSQL servers run as the postgres system user, and no component whatsoever needs to run as root.

Likewise, volume access does not require privileged mode or root privileges. Correct permissions must be assigned by the Kubernetes platform and/or administrators.

The operator explicitly sets the required security contexts.
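
As a sketch only, and not necessarily the exact values applied by the operator, a container-level security context reflecting the restrictions above could resemble the following:

    # Illustrative container-level security context: non-root execution and
    # no privilege escalation; the UID shown is hypothetical
    securityContext:
      runAsNonRoot: true
      runAsUser: 26                      # hypothetical non-root UID for the postgres user
      privileged: false
      allowPrivilegeEscalation: false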

On Red Hat OpenShift, Cloud Native PostgreSQL runs under the restricted security context constraint, the most restrictive one. The goal is to limit the execution of a pod to a namespace-allocated UID and SELinux context.

Security Context Constraints in OpenShift

For further information on Security Context Constraints (SCC) in OpenShift, please refer to the "Managing SCC in OpenShift" article.

Network Policies

The pods created by the Cluster resource can be controlled by Kubernetes network policies to enable/disable inbound and outbound network access at IP and TCP level.

Important

The operator needs to communicate with each instance on TCP port 8000 to get information about the status of the PostgreSQL server. Make sure you keep this in mind if you add any network policies.

Network policies are beyond the scope of this document. Please refer to the "Network policies" section of the Kubernetes documentation for further information.
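
As a hedged sketch, assuming the instance pods carry a postgresql label and the operator's namespace is labeled cnp-operator (both assumptions, not defaults shipped with the operator), a NetworkPolicy that keeps TCP port 8000 reachable by the operator could resemble the following:

    # Illustrative NetworkPolicy: allows ingress to the PostgreSQL instance pods
    # on TCP port 8000 from the operator's namespace. All names and labels are
    # assumptions and must be adapted to your deployment.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-operator-to-instances
      namespace: my-postgresql-namespace       # hypothetical namespace
    spec:
      podSelector:
        matchLabels:
          postgresql: cluster-example          # hypothetical label on the instance pods
      policyTypes:
        - Ingress
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  cnp-operator: "true"         # hypothetical label on the operator's namespace
          ports:
            - protocol: TCP
              port: 8000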

PostgreSQL

The current implementation of Cloud Native PostgreSQL automatically creates passwords and .pgpass files for the postgres superuser and the database owner. See the "Secrets" section in the "Architecture" page.

You can use those files to configure application access to the database.
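
For example, assuming the operator created a Secret named cluster-example-app containing username and password keys for the database owner (the Secret name and keys depend on your Cluster and should be verified against the "Secrets" section), an application could consume the credentials as environment variables:

    # Illustrative Deployment fragment: injects credentials from a Secret created
    # by the operator. Secret name and keys are assumptions to be verified.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: my-app:latest                 # hypothetical application image
              env:
                - name: PGUSER
                  valueFrom:
                    secretKeyRef:
                      name: cluster-example-app    # hypothetical Secret name
                      key: username
                - name: PGPASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: cluster-example-app
                      key: password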

By default, every replica is automatically configured to connect to the current primary instance via physical asynchronous streaming replication, using a special user called streaming_replica. The connection between nodes is encrypted, and authentication is performed via TLS client certificates (please refer to the "Client TLS/SSL Connections" page for details).

Currently, the operator allows administrators to add pg_hba.conf lines directly in the manifest as part of the pg_hba section of the postgresql configuration. The lines defined in the manifest are added to a default pg_hba.conf.

For further details on how pg_hba.conf is managed by the operator, see the "PostgreSQL Configuration" page of the documentation.
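
As an example, assuming a Cluster named cluster-example, custom pg_hba.conf lines can be added in the manifest as shown below (the rule itself is purely illustrative):

    # Illustrative Cluster manifest fragment: the lines listed under pg_hba are
    # added to the default pg_hba.conf generated by the operator
    apiVersion: postgresql.k8s.enterprisedb.io/v1
    kind: Cluster
    metadata:
      name: cluster-example
    spec:
      instances: 3
      storage:
        size: 1Gi
      postgresql:
        pg_hba:
          - hostssl app app 10.0.0.0/16 md5    # hypothetical rule for a trusted subnet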

Important

Examples assume that the Kubernetes cluster runs in a private and secure network.