Profiles: Advanced Provider Configuration
The OpenFaaS design allows it to provide a standard API across several different container orchestration tools: Kubernetes, Docker Swarm, containerd, etc. These faas-providers generally implement the same core features and allow your functions to remain portable and be deployed on any certified OpenFaaS installation, regardless of the orchestration layer. However, certain workloads or deployments require more advanced features or fine tuning of configuration. To allow maximum flexibility without overloading the OpenFaaS function configuration, we have introduced the concept of Profiles. This is simply a reserved function annotation that the faas-provider can detect and use to apply the advanced configuration.
If you are a function author, using a Profile is as simple as adding an annotation to your function:
You can do this with the faas-cli flags:
faas-cli deploy --annotation com.openfaas.profile=<profile_name>
Or in the stack YAML:
functions:
  foo:
    image: "..."
    fprocess: "..."
    annotations:
      com.openfaas.profile: <profile_name>
If you need multiple profiles, you can use a comma separated value:
com.openfaas.profile: <profile_name1>,<profile_name2>
Creating Profiles
Profiles must be pre-created, similar to Secrets, by the cluster admin. The OpenFaaS API does not provide a way to create Profiles because they are hyper specific to the orchestration tool.
When installing OpenFaaS on Kubernetes, Profiles use a CRD. This must be installed during or prior to starting the OpenFaaS controller. When using the official Helm chart this will happen automatically. Alternatively, you can apply the CRD definition from the faas-netes repository to install it.
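For example, a minimal sketch of installing the CRD directly, assuming the manifest lives at yaml/crd.yml in the faas-netes repository:

kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/yaml/crd.yml   # assumed location of the CRD manifest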
Profiles in Kubernetes work by injecting the supplied configuration directly into the correct locations of the Function’s Deployment. This allows us to directly expose the underlying API without any additional modifications. Currently, it exposes the following Pod and Container options from the Kubernetes API:

runtimeClassName: see https://kubernetes.io/docs/concepts/containers/runtime-class/ for a description of Pod Runtime Classes.
tolerations: see the Kubernetes documentation for a description of Tolerations.
affinity: see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity for a description of Node Affinity.
podSecurityContext: see the Kubernetes documentation for a description of the Pod Security Context.
The configuration uses the exact options that you find in the Kubernetes documentation.
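As a sketch, a Profile that applies a Pod Security Context might look like the following; the profile name is hypothetical and the apiVersion is assumed to be openfaas.com/v1:

kind: Profile
apiVersion: openfaas.com/v1
metadata:
  name: run-as-nonroot   # hypothetical profile name
  namespace: openfaas
spec:
  podSecurityContext:
    runAsUser: 1000
    runAsNonRoot: true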
Use an Alternative RuntimeClass
A popular alternative container runtime class is gVisor, which provides additional sandboxing between containers. If you have created a cluster that is using gVisor, you will need to set the runtimeClassName on the Pods that are created. This is not exposed in the OpenFaaS API, but it can be set via a Profile.
Install the latest faas-netes release and the CRD. This is most easily done with arkade.
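For example (assuming arkade is already installed):

arkade install openfaas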
This default installation will enable Profiles.
Create a Profile to apply the runtime class
kubectl apply -f- << EOF
kind: Profile
apiVersion: openfaas.com/v1
metadata:
  name: gvisor
  namespace: openfaas
spec:
  runtimeClassName: gvisor
EOF
Annotate any function that should use the gVisor runtime class:

com.openfaas.profile: gvisor
The following stack file will deploy a SHA512 generating function in a cluster with gVisor:
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080

functions:
  stronghash:
    skip_build: true
    image: functions/alpine:latest
    fprocess: "sha512sum"
    annotations:
      com.openfaas.profile: gvisor
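You can then deploy and invoke the function as usual; the stack file name below is hypothetical:

faas-cli deploy -f stronghash.yml
echo "OpenFaaS" | faas-cli invoke stronghash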
Use Tolerations and Affinity to Separate Workloads
The OpenFaaS API exposes the Kubernetes NodeSelector via constraints. This provides a very simple selection based on labels on Nodes.
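For example, a minimal sketch of a constraint in the stack file, assuming nodes labelled disktype=ssd:

functions:
  foo:
    image: "..."
    constraints:
      - "disktype=ssd"   # example node label, translated to a NodeSelector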
The Kubernetes API also exposes two features, affinity/anti-affinity and taints/tolerations, that further expand the types of constraints you can express. OpenFaaS Profiles allow you to set these options, allowing you to more accurately isolate workloads, keep certain workloads together on the same nodes, or keep certain workloads separate.
For example, a mixture of taints and affinity can put less critical functions on cheaper nodes, such as preemptible or spot instances, while keeping critical functions on standard nodes with higher availability guarantees.
In this example, we create a Profile using taints and affinity to place functions on the node with a GPU. We will also ensure that only functions that require the GPU are scheduled on these nodes. This ensures that the functions that need to use the GPU are not blocked by other standard functions taking resources on these special nodes.
Install the latest faas-netes release and the CRD. This is most easily done with arkade.
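As in the previous example, assuming arkade is already installed:

arkade install openfaas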
This default installation will enable Profiles.
Label and Taint the node with the GPU
kubectl label nodes node1 gpu=installed
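The taint keeps functions without a matching toleration off this node; a minimal sketch, assuming a taint key of gpu with the NoSchedule effect:

kubectl taint nodes node1 gpu:NoSchedule   # assumed taint key and effect

Then create the withgpu Profile that is referenced by the annotation below. This is a sketch using standard Kubernetes toleration and node affinity fields that match the label and taint applied above, with the apiVersion assumed to be openfaas.com/v1:

kubectl apply -f- << EOF
kind: Profile
apiVersion: openfaas.com/v1
metadata:
  name: withgpu
  namespace: openfaas
spec:
  tolerations:
  - key: "gpu"
    operator: "Exists"
    effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: gpu
            operator: In
            values:
            - installed
EOF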
Developers creating functions that need GPU support must use this annotation:
com.openfaas.profile: withgpu
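For example, deploying from the CLI with a hypothetical function name and image:

faas-cli deploy --name gpu-inference --image <your-image> --annotation com.openfaas.profile=withgpu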