Writing APBs: Reference

    For completed APB examples, you can browse APBs in the ansibleplaybookbundle organization on GitHub.

    Directory Structure

    The following shows an example directory structure of an APB:
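    A typical layout looks like the following (the role and playbook names are illustrative; the required pieces are the apb.yml spec file, the Dockerfile, and the playbooks/ and roles/ directories used throughout this guide):

```
example-apb/
├── Dockerfile
├── apb.yml
├── playbooks/
│   ├── provision.yml
│   └── deprovision.yml
└── roles/
    └── provision-example-apb/
        ├── defaults/
        │   └── main.yml
        └── tasks/
            └── main.yml
```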

    APB Spec File

    The APB spec file is located at apb.yml and is where the outline of your application is declared. The following is an example APB spec:

      version: 1.0
      name: example-apb
      description: A short description of what this APB does
      bindable: True
      async: optional
      metadata:
        documentationUrl: <link_to_documentation>
        imageUrl: <link_to_url_of_image>
        dependencies: ['<registry>/<organization>/<dependency_name_1>', '<registry>/<organization>/<dependency_name_2>']
        displayName: Example App (APB)
        longDescription: A longer description of what this APB does
        providerDisplayName: "Red Hat, Inc."
      plans:
        - name: default
          description: A short description of what this plan does
          free: true
          metadata:
            displayName: Default
            longDescription: A longer description of what this plan deploys
            cost: $0.00
          parameters:
            - name: parameter_one
              required: true
              default: foo_string
              type: string
              title: Parameter One
              maxlength: 63
            - name: parameter_two
              required: true
              default: true
              title: Parameter Two
              type: boolean
    Field Descriptions

    version

    Version of the APB spec. See APB Spec Versioning for details.

    name

    Name of the APB. Names must be valid ASCII and may contain lowercase letters, digits, underscores, periods, and dashes. See Docker’s guidelines for valid tag names.

    description

    Short description of this APB.

    bindable

    Boolean option of whether or not this APB can be bound to. Accepted values are true or false.

    metadata

    Dictionary field declaring relevant metadata information.

    plans

    A list of plans that can be deployed. See Plans for details.

    Metadata

    Field Descriptions

    documentationUrl

    URL to the application’s documentation.

    imageUrl

    URL to an image which will be displayed in the web console for the service catalog.

    dependencies

    List of images which are consumed from within the APB.

    displayName

    The name that will be displayed in the web console for this APB.

    longDescription

    Longer description that will be displayed when the APB is clicked in the web console.

    providerDisplayName

    Name of who is providing this APB for consumption.

    Plans

    Plans are declared as a list. This section explains what each field in a plan describes.
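    For example, a spec that offers a free development plan and a paid production plan might declare (values illustrative):

```yaml
plans:
  - name: dev
    description: Single instance with no persistent storage
    free: true
  - name: prod
    description: Replicated instances with persistent storage
    free: false
```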

    Plan Metadata

    Field Descriptions

    displayName

    Name to display for the plan in the web console.

    longDescription

    Longer description of what this plan deploys.

    cost

    How much the plan will cost to deploy. Accepted format is $x.yz.

    Plan Parameters

    Each item in the parameters section of a plan describes a parameter that is passed into the APB. For example:

      parameters:
        - name: my_param
          title: My Parameter
          type: enum
          enum: ['X', 'Y', 'Z']
          required: True
          default: X
          display_type: select
          display_group: Group 1
    Field Descriptions

    name

    Unique name of the parameter passed into the APB.

    title

    Displayed label in the web console.

    type

    Data type of the parameter as specified by JSON Schema, such as string, number, int, boolean, or enum. A default input field type in the web console is assigned if no display_type is given.

    required

    Whether or not the parameter is required for APB execution. Required field in the web console.

    default

    Default value assigned to the parameter.

    display_type

    Display type for the web console. For example, you can override a string input as a password to hide it in the web console. Accepted fields include text, textarea, password, checkbox, or select.

    display_group

    Will cause a parameter to display in groups with adjacent parameters with matching display_group fields. In the above example, adding another field below with display_group: Group 1 will visually group them together in the web console under the heading Group 1.

    When using a long list of parameters, it can be useful to use a shared parameter list. For an example of this, see the rhscl-postgresql-apb.
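    One way to share a list across plans is with YAML anchors and aliases, so the parameter list is written once and referenced from each plan (a sketch; the parameter names are illustrative):

```yaml
plans:
  - name: dev
    free: true
    parameters: &shared_params
      - name: postgresql_user
        type: string
        title: PostgreSQL User
      - name: postgresql_password
        type: string
        title: PostgreSQL Password
        display_type: password
  - name: prod
    free: false
    parameters: *shared_params
```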

    APB Spec Versioning

    The APB spec uses semantic versioning with the format of x.y where x is a major release and y is a minor release.

    The current spec version is 1.0.
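    Compatibility decisions hinge on the major component. A sketch of extracting and comparing it in shell (the broker's actual check may differ):

```shell
spec_version="1.0"
broker_major="1"

# Everything before the first dot is the major version
spec_major="${spec_version%%.*}"

if [ "$spec_major" = "$broker_major" ]; then
  echo "compatible"
else
  echo "incompatible"
fi
```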

    Major Version

    The APB spec will increment the major version whenever an API breaking change is introduced to the spec. Some examples include:

    • Introduction or deletion of a required field.

    • Changing the YAML format.

    • New features.

    Minor Version

    The APB spec will increment the minor version whenever a non-breaking change is introduced to the spec. Some examples include:

    • Introduction or deletion of an optional field.

    • Spelling change.

    • Introduction of new options to an existing field.

    Dockerfile

    The Dockerfile is what is used to actually build the APB image. As a result, you sometimes need to customize it for your own needs. For example, if running a playbook that requires interactions with PostgreSQL, you may want to install the required packages by adding a yum install command:

      FROM ansibleplaybookbundle/apb-base
      MAINTAINER Ansible Playbook Bundle Community

      LABEL "com.redhat.apb.spec"=\
      "<------------base64-encoded-spec------------>"

      COPY roles /opt/ansible/roles
      COPY playbooks /opt/apb/actions
      RUN chmod -R g=u /opt/{ansible,apb}

      ### INSTALL THE REQUIRED PACKAGES
      RUN yum -y install python-boto postgresql && yum clean all

      USER apb

    APB Actions (Playbooks)

    An action for an APB is the command that the APB is run with. The standard actions that are supported are:

    • provision

    • deprovision

    • bind

    • unbind

    • test

    For an action to be valid, there must be a valid file in the playbooks/ directory named <action>.yml. These playbooks can do anything, which also means that you can technically create any action you would like. For example, some APBs ship a playbook that implements an update action.

    Most APBs will normally have a provision action to create resources and a deprovision action to destroy the resources when deleting the service.
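    For example, a minimal playbooks/provision.yml usually just runs a provision role against the local cluster connection (the role names here are illustrative, not mandated by the spec):

```yaml
- name: example-apb provision
  hosts: localhost
  gather_facts: false
  connection: local
  roles:
    - role: ansible.kubernetes-modules
      install_python_requirements: no
    - role: provision-example-apb
```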

    The bind and unbind actions are used when the coordinates of one service needs to be made available to another service. This is often the case when creating a data service and making it available to an application. Currently, the coordinates are made available during the provision.

    To properly make your coordinates available to another service, use the asb_encode_binding module. This module should be called at the end of the APB’s provision role, and it will return bind credentials to the OpenShift Ansible broker (OAB):

      - name: encode bind credentials
        asb_encode_binding:
          fields:
            EXAMPLE_FIELD: foo
            EXAMPLE_FIELD2: foo2

    Working With Common Resources

    This section describes a list of common OKD resources that are created when developing APBs. See the ansible-kubernetes-modules project for a full list of available resource modules.

    Service

    The following is a sample Ansible task to create a service named hello-world. The namespace variable in an APB will be provided by the OAB when launched from the web console.

    Provision

      - name: create hello-world service
        k8s_v1_service:
          name: hello-world
          namespace: '{{ namespace }}'
          labels:
            app: hello-world
            service: hello-world
          selector:
            app: hello-world
            service: hello-world
          ports:
          - port: 8080
            target_port: 8080

    Deprovision

      - k8s_v1_service:
          name: hello-world
          namespace: '{{ namespace }}'
          state: absent

    Deployment Configuration

    The following is a sample Ansible task to create a deployment configuration for the image docker.io/ansibleplaybookbundle/hello-world which maps to service hello-world.

    Provision

      - name: create deployment config
        openshift_v1_deployment_config:
          name: hello-world
          namespace: '{{ namespace }}'
          labels:
            app: hello-world
            service: hello-world
          replicas: 1
          selector:
            app: hello-world
            service: hello-world
          spec_template_metadata_labels:
            app: hello-world
            service: hello-world
          containers:
          - env:
            image: docker.io/ansibleplaybookbundle/hello-world:latest
            name: hello-world
            ports:
            - container_port: 8080
              protocol: TCP

    Deprovision

      - openshift_v1_deployment_config:
          name: hello-world
          namespace: '{{ namespace }}'
          state: absent

    Route

    The following is an example of creating a route named hello-world which maps to the service hello-world.

    Provision

      - name: create hello-world route
        openshift_v1_route:
          name: hello-world
          namespace: '{{ namespace }}'
          spec_port_target_port: web
          labels:
            app: hello-world
            service: hello-world
          to_name: hello-world

    Deprovision

      - openshift_v1_route:
          name: hello-world
          namespace: '{{ namespace }}'
          state: absent

    Persistent Volume

    The following is an example of creating a persistent volume claim (PVC) resource and deployment configuration that uses it.

    Provision

      # Persistent volume resource
      - name: create volume claim
        k8s_v1_persistent_volume_claim:
          name: hello-world-db
          namespace: '{{ namespace }}'
          state: present
          access_modes:
          - ReadWriteOnce
          resources_requests:
            storage: 1Gi

    In addition to the resource, add your volume to the deployment configuration declaration:

      - name: create hello-world-db deployment config
        openshift_v1_deployment_config:
          name: hello-world-db
          ---
          volumes:
          - name: hello-world-db
            persistent_volume_claim:
              claim_name: hello-world-db
          test: false
          triggers:
          - type: ConfigChange

    Deprovision

      - openshift_v1_deployment_config:
          name: hello-world-db
          namespace: '{{ namespace }}'
          state: absent

      - k8s_v1_persistent_volume_claim:
          name: hello-world-db
          namespace: '{{ namespace }}'
          state: absent

    Optional Variables

    You can add optional variables to an APB by using environment variables. To pass variables into an APB, you must escape the variable substitution in your .yml files.

    For example, consider the following roles/provision-etherpad-apb/tasks/main.yml file in the etherpad-apb:

      - name: create mariadb deployment config
        openshift_v1_deployment_config:
          name: mariadb
          namespace: '{{ namespace }}'
          ...
          - env:
            - name: MYSQL_ROOT_PASSWORD
              value: '{{ mariadb_root_password }}'
            - name: MYSQL_DATABASE
              value: '{{ mariadb_name }}'
            - name: MYSQL_USER
              value: '{{ mariadb_user }}'
            - name: MYSQL_PASSWORD
              value: '{{ mariadb_password }}'

    Variables for the APB are defined in the roles/provision-etherpad-apb/defaults/main.yml file:

      playbook_debug: no
      mariadb_root_password: "{{ lookup('env','MYSQL_ROOT_PASSWORD') | default('admin', true) }}"
      mariadb_name: "{{ lookup('env','MYSQL_DATABASE') | default('etherpad', true) }}"
      mariadb_user: "{{ lookup('env','MYSQL_USER') | default('etherpad', true) }}"
      mariadb_password: "{{ lookup('env','MYSQL_PASSWORD') | default('admin', true) }}"
      etherpad_admin_password: "{{ lookup('env','ETHERPAD_ADMIN_PASSWORD') | default('admin', true) }}"
      etherpad_admin_user: "{{ lookup('env','ETHERPAD_ADMIN_USER') | default('etherpad', true) }}"
      etherpad_db_host: "{{ lookup('env','ETHERPAD_DB_HOST') | default('mariadb', true) }}"
      state: present
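    The lookup('env', ...) | default(..., true) pattern behaves like shell parameter defaulting: use the environment value when it is set and non-empty, otherwise fall back to the default. The shell equivalent:

```shell
# Simulate the APB not receiving an override for MYSQL_USER
unset MYSQL_USER

# ${VAR:-fallback} expands to $VAR if set and non-empty, else to the fallback
mariadb_user="${MYSQL_USER:-etherpad}"
echo "$mariadb_user"   # → etherpad
```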

    Working with Remote Clusters

    When developing APBs, there are a few factors which could prevent the developer from using the full development lifecycle that the apb tooling offers. Primarily, these factors are:

    • Developing against an OKD cluster that exists on a remote host.

    • Developing APBs on a machine that does not have access to the docker daemon.

    If you meet any of these criteria, use the following workflow to publish images to the internal OKD registry so that the broker can bootstrap the image (that is, load its APB spec into the broker). The following sections show how to perform these steps with and without the apb tooling.

    Pushing APBs

    To use the apb push command when working with a remote OKD cluster:

    1. Ensure the base64-encoded APB spec is a label in the Dockerfile. This is usually done using the apb prepare command. If you do not have the apb tooling installed, you can run:

        $ cat apb.yml | base64

      This will return the base64-encoded apb.yml, which you can copy and paste into the Dockerfile under the LABEL "com.redhat.apb.spec" like:

      LABEL "com.redhat.apb.spec"=\
      "dmVyc2lvbjogMS4wCm5hbWU6IG1lZGlhd2lraS1hcGIKZGVzY3JpcHRpb246IE1lZGlhd2lraSBh\
      cGIgaW1wbGVtZW50YXRpb24KYmluZGFibGU6IEZhbHNlCmFzeW5jOiBvcHRpb25hbAptZXRhZGF0\
      YToKICBkb2N1bWVudGF0aW9uVXJsOiBodHRwczovL3d3dy5tZWRpYXdpa2kub3JnL3dpa2kvRG9j\
      dW1lbnRhdGlvbgogIGxvbmdEZXNjcmlwdGlvbjogQW4gYXBiIHRoYXQgZGVwbG95cyBNZWRpYXdp\
      a2kgMS4yMwogIGRlcGVuZGVuY2llczogWydkb2NrZXIuaW8vYW5zaWJsZXBsYXlib29rYnVuZGxl\
      L21lZGlhd2lraTEyMzpsYXRlc3QnXQogIGRpc3BsYXlOYW1lOiBNZWRpYXdpa2kgKEFQQilmZGZk\
      CiAgY29uc29sZS5vcGVuc2hpZnQuaW8vaWNvbkNsYXNzOiBpY29uLW1lZGlhd2lraQogIHByb3Zp\
      ZGVyRGlzcGxheU5hbWU6ICJSZWQgSGF0LCBJbmMuIgpwbGFuczoKICAtIG5hbWU6IGRlZmF1bHQK\
      ICAgIGRlc2NyaXB0aW9uOiBBbiBBUEIgdGhhdCBkZXBsb3lzIE1lZGlhV2lraQogICAgZnJlZTog\
      VHJ1ZQogICAgbWV0YWRhdGE6CiAgICAgIGRpc3BsYXlOYW1lOiBEZWZhdWx0CiAgICAgIGxvbmdE\
      ZXNjcmlwdGlvbjogVGhpcyBwbGFuIGRlcGxveXMgYSBzaW5nbGUgbWVkaWF3aWtpIGluc3RhbmNl\
      IHdpdGhvdXQgYSBEQgogICAgICBjb3N0OiAkMC4wMAogICAgcGFyYW1ldGVyczoKICAgICAgLSBu\
      YW1lOiBtZWRpYXdpa2lfZGJfc2NoZW1hCiAgICAgICAgZGVmYXVsdDogbWVkaWF3aWtpCiAgICAg\
      ICAgdHlwZTogc3RyaW5nCiAgICAgICAgdGl0bGU6IE1lZGlhd2lraSBEQiBTY2hlbWEKICAgICAg\
      ICBwYXR0ZXJuOiAiXlthLXpBLVpfXVthLXpBLVowLTlfXSokIgogICAgICAgIHJlcXVpcmVkOiBU\
      cnVlCiAgICAgIC0gbmFtZTogbWVkaWF3aWtpX3NpdGVfbmFtZQogICAgICAgIGRlZmF1bHQ6IE1l\
      ZGlhV2lraQogICAgICAgIHR5cGU6IHN0cmluZwogICAgICAgIHRpdGxlOiBNZWRpYXdpa2kgU2l0\
      ZSBOYW1lCiAgICAgICAgcGF0dGVybjogIl5bYS16QS1aXSskIgogICAgICAgIHJlcXVpcmVkOiBU\
      cnVlCiAgICAgICAgdXBkYXRhYmxlOiBUcnVlCiAgICAgIC0gbmFtZTogbWVkaWF3aWtpX3NpdGVf\
      bGFuZwogICAgICAgIGRlZmF1bHQ6IGVuCiAgICAgICAgdHlwZTogc3RyaW5nCiAgICAgICAgdGl0\
      bGU6IE1lZGlhd2lraSBTaXRlIExhbmd1YWdlCiAgICAgICAgcGF0dGVybjogIl5bYS16XXsyLDN9\
      JCIKICAgICAgICByZXF1aXJlZDogVHJ1ZQogICAgICAtIG5hbWU6IG1lZGlhd2lraV9hZG1pbl91\
      c2VyCiAgICAgICAgZGVmYXVsdDogYWRtaW4KICAgICAgICB0eXBlOiBzdHJpbmcKICAgICAgICB0\
      ZG1pbiBVc2VyIFBhc3N3b3JkKQogICAgICAgIHJlcXVpcmVkOiBUcnVlCiAgICAgIC0gbmFtZTog\
      bWVkaWF3aWtpX2FkbWluX3Bhc3MKICAgICAgICB0eXBlOiBzdHJpbmcKICAgICAgICB0aXRsZTog\
      ICAgIGRpc3BsYXlfdHlwZTogcGFzc3dvcmQK"
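      The label value is nothing more than the base64 encoding of apb.yml, so you can sanity-check it by decoding it again. For example, with a one-line spec (illustrative):

```shell
# base64 encoding of the single line "name: example-apb"
encoded="bmFtZTogZXhhbXBsZS1hcGIK"

# Decoding must reproduce the original YAML
decoded="$(echo "$encoded" | base64 --decode)"
echo "$decoded"   # → name: example-apb
```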
    2. Populate the internal OKD registry with your built APB image.

      This is normally handled by the apb push command. In order to build your image without using the docker CLI, you can take advantage of the S2I functionality of OKD.

      By default, the OAB is configured to look for published APBs in the openshift project, which is a global namespace that exposes its images and image streams to be available to any authenticated user on the cluster. You can take advantage of this by using the oc new-app command in the openshift project to build your image:

        $ oc new-app <path_to_bundle_source> \
            --name <bundle_name> \
            -n openshift

      After a couple of minutes, you should see your image in the internal registry:

        $ oc get images | grep <bundle_name>
        sha256:b2dcb4b95e178e9b7ac73e5ee0211080c10b24260f76cfec30b89e74e8ee6742 172.30.1.1:5000/openshift/<bundle_name>@sha256:b2dcb4b95e178e9b7ac73e5ee0211080c10b24260f76cfec30b89e74e8ee6742
    3. Bootstrap the OAB. This is normally also handled by the apb push or apb bootstrap command. The apb bootstrap command is preferable for this step because it will also relist the service catalog without having to wait five to ten minutes.

      If you do not have the apb tooling installed, you can alternatively perform the following:

      1. Get the route name for the broker. For example, if the broker is deployed in the ansible-service-broker project:

          $ oc get route -n ansible-service-broker

      2. Get the list of supported paths for the broker:

          $ curl -H "Authorization: Bearer $(oc whoami -t)" -k \
              https://asb-1338-ansible-service-broker.172.17.0.1.nip.io/
          {
            "paths": [
              "/apis",
              "/ansible-service-broker/", (1)
              "/healthz",
              "/healthz/ping",
              "/healthz/poststarthook/generic-apiserver-start-informers",
              "/metrics"
            ]
          }

          (1) The broker's supported path, used in the following steps.
      3. Curl the v2/bootstrap path using the value found from the previous step:

          $ curl -H "Authorization: Bearer $(oc whoami -t)" -k -X POST \
              https://asb-1338-ansible-service-broker.172.17.0.1.nip.io/ansible-service-broker/v2/bootstrap (1)
          {
            "spec_count": 38,
            "image_count": 109
          }

          (1) Replace ansible-service-broker if it differs from the value found in the previous step.

        The oc whoami -t command should return a token and the authenticated user must have permissions as described in Access Permissions.

    4. Verify the new APB exists in the OAB. This is normally the functionality of the apb list command. If you do not have the apb tooling installed, you can alternatively perform the following:

      1. Curl the v2/catalog path using the route and supported path name gathered from the previous v2/bootstrap step:

          $ curl -H "Authorization: Bearer $(oc whoami -t)" -k \
              https://asb-1338-ansible-service-broker.172.17.0.1.nip.io/ansible-service-broker/v2/catalog

        You should see a list of all bootstrapped APB specs and one that is labeled localregistry-<bundle_name>. Use |grep <bundle_name> to help find it, since the output is in JSON.

    Running APBs

    Due to the limitations when working with remote clusters, you may want the same functionality as the apb run command without having to rely on the apb push command being successful. This is because apb run implicitly performs apb push first before attempting to provision the application.

    In order to work around this:

    1. Follow the steps described in Pushing APBs to push your image onto the internal OKD registry. After the image exists, you should be able to see it with:

        $ oc get images | grep <bundle_name>
        sha256:bfaa73a5e15bf90faec343c7d5f8cc4f952987afdbc3f11a24c54c037528d2ed 172.30.1.1:5000/openshift/<bundle_name>@sha256:bfaa73a5e15bf90faec343c7d5f8cc4f952987afdbc3f11a24c54c037528d2ed
    2. To provision, use the oc run command to launch the APB:

        $ oc new-project <target_namespace>
        $ oc create serviceaccount apb
        $ oc create rolebinding apb --clusterrole=admin --serviceaccount=<target_namespace>:apb
        $ oc run <pod_name> \
            --env="POD_NAME=<pod_name>" \
            --env="POD_NAMESPACE=<target_namespace>" \
            --image=<pull_spec> \ (1)
            --restart=Never \
            --attach=true \
            --serviceaccount=apb \
            -- <action> -e namespace=<target_namespace> -e cluster=openshift

        (1) The pull specification for your APB image, as shown by oc get images in the previous step.

    Working With the Restricted SCC

    When building an OKD image, it is important that your application does not run as the root user whenever possible. When running under the restricted security context constraint (SCC), the application image is launched with a random UID. This causes problems if your application folder is owned by the root user.

    A good way to work around this is to add a user to the root group and make the application folder owned by the root group. See OKD-Specific Guidelines for details on supporting arbitrary user IDs.

    The following is a Dockerfile example of a node application running in /usr/src. This command would be run after the application is installed in /usr/src and the associated environment variables set:

      ENV USER_NAME=haste \
          USER_UID=1001 \
          HOME=/usr/src

      RUN useradd -u ${USER_UID} -r -g 0 -M -d /usr/src -b /usr/src -s /sbin/nologin -c "<username> user" ${USER_NAME} \
          && chown -R ${USER_NAME}:0 /usr/src \
          && chmod -R g=u /usr/src /etc/passwd

      USER 1001

    Using a ConfigMap Within an APB

    There is a temporary workaround for creating ConfigMaps from Ansible due to a bug in the Ansible modules.

    One common use case for ConfigMaps is when the parameters of an APB will be used within a configuration file of an application or service. The ConfigMap module allows you to mount a ConfigMap into a pod as a volume, which can be used to store the configuration file. This approach allows you to also leverage the power of Ansible's template module to create a ConfigMap out of APB parameters.

    The following is an example of creating a ConfigMap from a Jinja template mounted into a pod as a volume:

      - name: Create hastebin config from template
        template:
          src: config.js.j2
          dest: /tmp/config.js

      - name: Create hastebin configmap
        shell: oc create configmap haste-config --from-file=haste-config=/tmp/config.js

      <snip>

      - name: create deployment config
        openshift_v1_deployment_config:
          name: hastebin
          namespace: '{{ namespace }}'
          labels:
            app: hastebin
            service: hastebin
          replicas: 1
          selector:
            app: hastebin
            service: hastebin
          spec_template_metadata_labels:
            app: hastebin
            service: hastebin
          containers:
          - env:
            image: docker.io/dymurray/hastebin:latest
            name: hastebin
            ports:
            - container_port: 7777
              protocol: TCP
            volumeMounts:
            - mountPath: /usr/src/haste-server/config
              name: config
          - env:
            image: docker.io/modularitycontainers/memcached:latest
            name: memcached
            ports:
            - container_port: 11211
              protocol: TCP
          volumes:
          - name: config
            configMap:
              name: haste-config
              items:
              - key: haste-config
                path: config.js

    Customizing Error Messages

    A default error message is returned in the web console when a provision call fails. For example:

      Error occurred during provision. Please contact administrator if the issue persists.

    To provide more information for troubleshooting purposes should a failure occur, you can write custom error messages for your APB that the web console can check for and return to the user.

    Kubernetes allows pods to log fatal events to a termination log. The log file location is set by the terminationMessagePath field in a pod’s specification and defaults to /dev/termination-log.

    The broker checks this termination log for any messages that were written to the file and passes the content to the service catalog. In the event of a failure, the web console displays these messages.

    See Kubernetes documentation for more details on pod termination messages.
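    In a pod specification, the field appears on the container (a minimal sketch; the container name and image are placeholders):

```yaml
containers:
  - name: my-apb
    image: <pull_spec>
    # Default path; messages written here are surfaced on failure
    terminationMessagePath: /dev/termination-log
```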

    The following is an example of how this can be done in an APB utilizing a CloudFormation template:

      - name: Writing Termination Message
        shell: echo "[CloudFormation Error] - {{ ansible_failed_result.msg }}" > /dev/termination-log