Securely using Infrastructure from Upbound Cloud

    You can use the infrastructure you've provisioned from Upbound Cloud either directly, by securely reading infrastructure credentials from the CLI, or by consuming it from a Kubernetes application.


    Previous Steps

    For this tutorial, we assume you've gone through our Getting Started tutorial and have deployed a PostgreSQL database.

    You'll also need to be connected to your Platform via the CLI. You can learn how to do this with our Connect to your Platform guide.

    Usage from CLI

    Once you've deployed a new PostgreSQLInstance from your Workspace and Upbound Cloud tells you it's ready to use, you can view its connection secrets.

    In the Configuration which describes this PostgreSQLInstance, you can see we've defined how connection secrets are laid out. From line 55 of /database/postgres/definition.yaml:

    claimNames:
      kind: PostgreSQLInstance
      plural: postgresqlinstances
    connectionSecretKeys:
    - username
    - password
    - endpoint
    - port
    You can read the values for this information by connecting to your Platform locally, requesting and parsing the secrets, and decoding them:

    kubectl -n <NAMESPACE> get secret db-conn -o json | jq '.data | to_entries[]' | jq '"\(.key): \(.value | @base64d)"'

    When you run this command, you'll see something like the following appear in your terminal:

    "password: abc12345"
    "port: 5432"
    "username: admin"

    Pretty cool, right? It's certainly better than writing those credentials down on a Post-It note or emailing them to a co-worker. Now you can take those connection details and plug them into your favorite database management tool to start using your new database.

    Usage from a Pod

    To demonstrate how easy it is to use infrastructure secrets from your applications, we've gone ahead and written a simple example app which writes to a PostgreSQL database. We've already built an OCI image for it and uploaded it to our Upbound Registry account, so all you'll have to do is create and apply a Pod manifest to get it running on your remote cluster.

    Alternatively, you can clone the example app repo, build it yourself, and upload your own OCI image to your private Upbound Registry using these steps. The only difference is when creating the Repository, you'll want to make sure to select Container instead of Package.

    Creating a Remote Cluster

    You'll need a remote Kubernetes cluster. You can use one you already have running, or create a new one from Upbound Cloud with our AWS Reference Platform Configuration already installed.

    (Image: Create cluster)

    Get Connection Information

    Once you have your remote cluster running, you'll need to create a kubeconfig file containing its connection secret. You'll use this later to deploy the application.

    kubectl -n <NAMESPACE> get secret cluster-conn -o jsonpath='{.data.kubeconfig}' | base64 -d > kubeconfig-cluster

    Next, using the command in the steps above, find the connection information for the PostgreSQL instance you created:

    kubectl -n <NAMESPACE> get secret db-conn -o json | jq '.data | to_entries[]' | jq '"\(.key): \(.value | @base64d)"'

    Using a Pod Manifest

    Now that you have the necessary connection information for both your remote cluster and the PostgreSQL database you created earlier, it's time to write a Pod manifest. Name your manifest pod.yaml and add the following, substituting the PostgreSQL connection values you retrieved above.

    apiVersion: v1
    kind: Pod
    metadata:
      name: app-postgres-db
      namespace: default
    spec:
      containers:
      - name: app-postgres-db
        image: <example app image>  # the image pushed to Upbound Registry above
        env:
        - name: PGDATABASE
          value: "postgres"
        - name: PGHOST
          value: "<secret endpoint value>"
        - name: PGUSER
          value: masteruser
        - name: PGPASSWORD
          value: "<secret password value>"
        - name: PGPORT
          value: "5432"
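    Pasting secret values into a manifest works for a walkthrough, but Kubernetes can also inject them directly. As a sketch only: if you first copy the db-conn secret into the remote cluster's default namespace (it lives in your control plane namespace, not on the remote cluster, so that copy step is required), the password entry could instead reference the secret:

```yaml
        # Sketch: assumes a copy of the db-conn secret exists in the remote
        # cluster's default namespace.
        - name: PGPASSWORD
          valueFrom:
            secretKeyRef:
              name: db-conn
              key: password
```

    This keeps the credential out of the manifest file itself.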

    Once created, you can apply the manifest to the remote cluster.

    kubectl --kubeconfig=kubeconfig-cluster apply -f pod.yaml

    The Pod should be running, but you can check to make sure using:

    kubectl --kubeconfig=kubeconfig-cluster get pod

    NAME              READY   STATUS    RESTARTS   AGE
    app-postgres-db   1/1     Running   0          3s

    Check the pod logs to see that it's adding a new record to the postgres database every few seconds:

    kubectl --kubeconfig=kubeconfig-cluster logs app-postgres-db -f

    If successful, you'll see output like the following:

    2020/11/11 02:51:16 Inserting record 3 in the database
    2020/11/11 02:51:16 Retrieving all records from the database
    2020/11/11 02:51:16 All retrieved records: 1, 2, 3