Profiles: Advanced Provider Configuration

    The OpenFaaS design allows it to provide a standard API across several different container orchestration tools: Kubernetes, Docker Swarm, containerd, etc. These faas-providers generally implement the same core features and allow your functions to remain portable and be deployed on any certified OpenFaaS installation, regardless of the orchestration layer. However, certain workloads or deployments require more advanced features or fine-tuning of configuration. To allow maximum flexibility without overloading the OpenFaaS function configuration, we have introduced the concept of Profiles. This is simply a reserved function annotation that the faas-provider can detect and use to apply the advanced configuration.

    If you are a function author, using a Profile is as simple as adding an annotation to your function.

    You can do this with the faas-cli --annotation flag:

        faas-cli deploy --annotation com.openfaas.profile=<profile_name>

    Or in the stack YAML:

        functions:
          foo:
            image: "..."
            fprocess: "..."
            annotations:
              com.openfaas.profile: <profile_name>

    If you need multiple profiles, you can use a comma separated value:

        com.openfaas.profile: <profile_name1>,<profile_name2>

    Creating Profiles

    Profiles must be pre-created, similar to Secrets, by the cluster admin. The OpenFaaS API does not provide a way to create Profiles because they are hyper specific to the orchestration tool.

    When installing OpenFaaS on Kubernetes, Profiles use a CRD. This must be installed during or prior to starting the OpenFaaS controller. When using the official Helm chart this happens automatically. Alternatively, you can apply the CRD manifest from the faas-netes repository to install it.
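
    For example (the path below is an assumption based on the faas-netes repository layout and may change between releases; check the repository for the current CRD manifest):

        kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/yaml/crd.yml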

    Profiles in Kubernetes work by injecting the supplied configuration directly into the correct locations of the Function’s Deployment. This allows us to directly expose the underlying API without any additional modifications. Currently, it exposes a subset of Pod and Container options from the Kubernetes API, including runtimeClassName, tolerations, and affinity, which are used in the examples below.

    The configuration uses the exact options that you find in the Kubernetes documentation.

    Use an Alternative RuntimeClass

    A popular alternative container runtime is gVisor, which provides additional sandboxing between containers. If you have created a cluster that is using gVisor, you will need to set the runtimeClassName on the Pods that are created. This is not exposed in the OpenFaaS API, but it can be set via a Profile.

    1. Install the latest faas-netes release and the CRD. This is most easily done with arkade.

      This default installation will enable Profiles.
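
      For example, with arkade already installed on your machine, the install would be:

        arkade install openfaas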

    2. Create a Profile to apply the runtime class:

        kubectl apply -f- << EOF
        apiVersion: openfaas.com/v1
        kind: Profile
        metadata:
          name: gvisor
          namespace: openfaas
        spec:
          runtimeClassName: gvisor
        EOF

    3. Let your developers know that functions which should run with gVisor must use this annotation:

        com.openfaas.profile: gvisor

    The following stack file will deploy a SHA512-generating function in a cluster with gVisor:

        provider:
          name: openfaas
          gateway: http://127.0.0.1:8080

        functions:
          stronghash:
            skip_build: true
            image: functions/alpine:latest
            fprocess: "sha512sum"
            annotations:
              com.openfaas.profile: gvisor
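
    To deploy and invoke it (a sketch, assuming the stack file is saved as stack.yml and the gateway above is reachable):

        faas-cli deploy -f stack.yml
        echo -n "hello world" | faas-cli invoke stronghash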

    Use Tolerations and Affinity to Separate Workloads

    The OpenFaaS API exposes the Kubernetes NodeSelector via constraints. This provides a very simple selection based on labels on Nodes.

    The Kubernetes API also exposes two further features, affinity/anti-affinity and taints/tolerations, that expand the types of constraints you can express. OpenFaaS Profiles let you set these options so that you can more accurately isolate workloads, keep certain workloads together on the same nodes, or keep certain workloads separate.

    For example, a mixture of taints and affinity can put less critical functions on nodes that are cheaper, while keeping critical functions on standard nodes with higher availability guarantees.

    In this example, we create a Profile using taints and affinity to place functions on the node with a GPU. We will also ensure that only functions that require the GPU are scheduled on these nodes. This ensures that the functions that need to use the GPU are not blocked by other standard functions taking resources on these special nodes.

    1. Install the latest faas-netes release and the CRD. This is most easily done with arkade.

      This default installation will enable Profiles.
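
      As above, assuming arkade is already installed:

        arkade install openfaas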

    2. Label and Taint the node with the GPU:

        kubectl label nodes node1 gpu=installed
        kubectl taint nodes node1 gpu=installed:NoSchedule

    3. Let your developers know that functions which need GPU support must use this annotation, which references the Profile sketched below:

        com.openfaas.profile: withgpu
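
    For reference, here is a sketch of what the withgpu Profile could look like; it assumes the tolerations and affinity fields are passed through to the Pod spec exactly as defined in the Kubernetes API and uses the same apiVersion as the gVisor example above:

        kubectl apply -f- << EOF
        apiVersion: openfaas.com/v1
        kind: Profile
        metadata:
          name: withgpu
          namespace: openfaas
        spec:
          # Tolerate the gpu taint applied in step 2
          tolerations:
          - key: "gpu"
            operator: "Exists"
            effect: "NoSchedule"
          # Require the gpu=installed label applied in step 2
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: gpu
                    operator: In
                    values:
                    - installed
        EOF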