
Configuring HashiCorp Vault to Generate Dynamic PostgreSQL Credentials

A common practice in the past was to create a static set of database credentials for an application and either stuff them in the source code (very insecure) or export them as environment variables for the application to look up (a tad less insecure, but far from ideal). Neither of these methods provides the protection these credentials really deserve: both contribute to secrets sprawl, and neither gives you a true audit trail of exactly who accessed the credentials and when.

So where does HashiCorp Vault fit into all of this? Vault provides a Secrets Engine that can be configured to generate a set of dynamic credentials that are tightly scoped and have a set TTL (Time-To-Live). The workflow now looks like the following:

  1. A user authenticates to Vault and does a read on the path to where the database Secrets Engine is configured
  2. Vault reaches out to the database and generates a set of credentials that will expire at a set time and sends them back to the user
  3. The user leverages those credentials to log into the database

{% include image name="vault-dynamic-secrets.png" position="center" alt="vault-dynamic-secrets-diagram" %}

With this process you don’t have to worry about someone else viewing these credentials in application logs or configuration variables because they will expire once they reach the end of the configured TTL.

In the following tutorial we use the Userpass Auth Method and retrieve our credentials via the Vault CLI to keep the demo simple. In an application, you would instead use the AppRole Auth Method to authenticate to Vault and retrieve the dynamic secret via the REST API.
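For reference, the application-side version of this flow might look like the following, assuming an AppRole has already been configured and its `role_id`/`secret_id` delivered to the application (the placeholder values here, and the use of `jq`, are purely illustrative):

{% highlight shell %}
# Authenticate with AppRole over the REST API and capture the client token.
# VAULT_ADDR points at your Vault server.
$ VAULT_TOKEN=$(curl -s --request POST \
    --data '{"role_id":"<role-id>","secret_id":"<secret-id>"}' \
    "$VAULT_ADDR/v1/auth/approle/login" | jq -r '.auth.client_token')

# Read a set of dynamic credentials from the role configured later in
# this tutorial:
$ curl -s --header "X-Vault-Token: $VAULT_TOKEN" \
    "$VAULT_ADDR/v1/psql/creds/developer-role" | jq '.data'
{% endhighlight %}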


This tutorial assumes the following:

  • You have a Vault instance installed
  • You can authenticate to Vault with the necessary policies to enable new Secrets Engines

Step 1: Enabling the Database Secrets Engine

Vault stores secrets in what it calls Secrets Engines, which must be specifically enabled before use. Let’s start by logging into Vault and enabling the Database Secrets Engine. We explicitly specify the path because we want the name to reflect the database we plan to configure there; by default, a Secrets Engine is mounted at a path matching the engine’s name.

{% highlight shell %}
$ vault login -method=userpass username=jacobm

$ vault secrets enable -path=psql database
{% endhighlight %}
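If you want to double-check the mount, listing the Secrets Engines should now show the engine at our custom path:

{% highlight shell %}
# The psql/ path should appear with type "database"
$ vault secrets list
{% endhighlight %}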

Step 2: Configuring the Database Secrets Engine

With the Secrets Engine enabled, we need to provide it some configuration details. We will add the following information:

  • plugin_name: the name of the plugin we want to use for this backend. Each database has its own, so it is important to choose the appropriate one
  • allowed_roles: allowed Vault roles to utilize this connection (roles are explained below)
  • connection_url: connection string to point Vault to the database. Do not worry about filling in your username and password here because Vault will handle that.
  • username: username of the static account to use to log into the PostgreSQL database
  • password: password of the static account to use to log into the PostgreSQL database

{% highlight shell %}
{% raw %}
$ vault write psql/config/my-postgresql-database \
plugin_name=postgresql-database-plugin \
allowed_roles="developer-role" \
connection_url="postgresql://{{username}}:{{password}}@localhost:5432/" \
username="postgres" \
password="<postgres-password>"
{% endraw %}
{% endhighlight %}
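At this point you can read the configuration back to verify it, and, as an optional hardening step, have Vault rotate the static account’s password so that only Vault knows it going forward:

{% highlight shell %}
# Verify the stored connection details (the password is never returned)
$ vault read psql/config/my-postgresql-database

# Optional: rotate the static account's password so only Vault knows it
$ vault write -force psql/rotate-root/my-postgresql-database
{% endhighlight %}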

Now that we have provided Vault with a way to connect to the database, the final piece of configuration is to create a role in Vault that maps to a role in the database. A role in Vault is a human-friendly identifier for an action. The role, or more specifically the role’s path, is what we target when we want Vault to generate a new set of credentials for us. Again, I’ll break down the arguments to explain exactly what this command is doing:

  • db_name: the name of the database we are targeting (created above)
  • creation_statements: the SQL command(s) that Vault will run on the database
  • default_ttl: the Time-To-Live for the leases associated with this role
  • max_ttl: the maximum Time-To-Live for the leases associated with this role

{% highlight shell %}
{% raw %}
$ vault write psql/roles/developer-role \
db_name=my-postgresql-database \
creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';" \
default_ttl="1h" \
max_ttl="24h"
{% endraw %}
{% endhighlight %}
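To sanity-check the role before generating any credentials, you can read it back and confirm the creation statements and TTLs look as expected:

{% highlight shell %}
$ vault read psql/roles/developer-role
{% endhighlight %}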

Step 3: Generating and Validating a set of Credentials

Now that our Secrets Engine is configured to connect to our database and generate dynamic credentials, let’s test it out and confirm we can log in with our new account.

{% highlight shell %}
$ vault read psql/creds/developer-role

Key                Value
---                -----
lease_id           psql/creds/developer-role/p0dDI226EWOwHUkow9mAFM2O
lease_duration     1h
lease_renewable    true
password           8cab931c-d62e-a73d-60d3-5ee85139cd66
username           v-root-jacobmam-7dndRpVx6TYEt1uy86WK-1567716062

$ psql -h localhost \
    -d postgres \
    -U v-root-jacobmam-7dndRpVx6TYEt1uy86WK-1567716062 \
    -W
Password:

psql (11.5, server 9.6.14)
SSL connection (protocol: TLSv1.2, … )
Type "help" for help.

{% endhighlight %}
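Each credential set is tied to the `lease_id` shown above, so it can be renewed (up to `max_ttl`) or revoked early without waiting for the TTL to expire; revoking the lease removes the role from PostgreSQL immediately:

{% highlight shell %}
# Extend the lease before it expires (bounded by max_ttl)
$ vault lease renew psql/creds/developer-role/p0dDI226EWOwHUkow9mAFM2O

# Or revoke it right away, e.g. if the credentials were exposed
$ vault lease revoke psql/creds/developer-role/p0dDI226EWOwHUkow9mAFM2O
{% endhighlight %}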

Success! We have now configured our Vault server to generate dynamic credentials for our PostgreSQL database. Where we see this workflow providing the most value is when a team needs their application to connect to a database without passing it static credentials. This helps minimize secrets sprawl and provides an audit trail of exactly who is utilizing these credentials and when they accessed them. If you are looking to use this in your application, we recommend looking into Vault’s AppRole Authentication Method, which was purpose-built for machine authentication.

Interested in learning more about Vault’s AppRole? Reach out to me in the comments or on social media!



Arctiq Team

We service innovation.