Installing Multicluster
If you’d like to use an existing Ambassador installation, check out the instructions in the section below. Alternatively, check out the Ambassador documentation for a more detailed explanation of the configuration and what’s going on. Before starting, you’ll need:
- Two clusters.
- A functioning Linkerd installation in each cluster that shares a common trust anchor. If you have an existing installation, see the Trust Anchor Bundle section below to understand what is required.
- Each of these clusters should be configured as kubectl contexts. This guide uses the names east and west, so you can copy and paste the commands if you name yours the same way.
- Elevated privileges on both clusters. We’ll be creating service accounts and granting extended privileges, so you’ll need to be able to do that on your test clusters.
- Support for services of type LoadBalancer in the east cluster. Check out the documentation for your cluster provider. This is what the west cluster will use to communicate with east via the gateway.
Step 1: Install the multicluster control plane
On each cluster, run:
linkerd --context=${ctx} multicluster install | \
  kubectl --context=${ctx} apply -f -
To verify that everything has started up successfully, run:
linkerd check --multicluster
For a deep dive into what components are being added to your cluster and how all the pieces fit together, check out the architecture documentation.
Step 2: Link the clusters
Each cluster must be linked. This consists of creating a service account and RBAC in one cluster and adding a secret containing a kubeconfig to the other. To link cluster west to cluster east, you would run:
linkerd --context=east multicluster link --cluster-name east |
kubectl --context=west apply -f -
To verify that the credentials were created successfully and the clusters are able to reach each other, run:
linkerd --context=west check --multicluster
You should also see the list of gateways show up by running:
linkerd --context=west multicluster gateways
For a detailed explanation of what this step does, check out the linking the clusters section.
Step 3: Export services
By default, services are not automatically mirrored in linked clusters. For each service you would like mirrored to linked clusters, run:
kubectl get svc foobar -o yaml | \
linkerd multicluster export-service - | \
kubectl apply -f -
Note
This CLI command simply adds the following two annotations to the service. You can add them yourself if you’d like.
mirror.linkerd.io/gateway-name: linkerd-gateway
mirror.linkerd.io/gateway-ns: linkerd-multicluster
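As a sketch of the declarative equivalent (using a hypothetical service named foobar), the same two annotations can live directly in the service manifest:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: foobar  # hypothetical service name used for illustration
  annotations:
    # mirror this service to linked clusters through the bundled Linkerd gateway
    mirror.linkerd.io/gateway-name: linkerd-gateway
    mirror.linkerd.io/gateway-ns: linkerd-multicluster
spec:
  selector:
    app: foobar
  ports:
    - port: 8080
```

Either route ends with the same annotations on the service, so pick whichever fits your workflow.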
Using an Existing Ambassador Installation
The bundled Linkerd gateway is not required. In fact, if you have an existing Ambassador installation, it is easy to use it instead! By using your existing Ambassador installation, you avoid needing to manage multiple ingress gateways and pay for extra cloud load balancers. This guide assumes that Ambassador has been installed into the ambassador namespace.
First, you’ll want to inject the ambassador deployment with Linkerd:
kubectl --context=${ctx} -n ambassador get deploy ambassador -o yaml | \
  linkerd inject \
    --skip-inbound-ports 80,443 \
    --require-identity-on-inbound-ports 4143 - | \
  kubectl --context=${ctx} apply -f -
This will add the Linkerd proxy, skip the ports that Ambassador is handling for public traffic, and require identity on the gateway port. Check out the docs to understand why it is important to require identity on the gateway port.
cat <<EOF | kubectl --context=${ctx} apply -f -
---
apiVersion: getambassador.io/v2
kind: Module
metadata:
  name: ambassador
  namespace: ambassador
spec:
  add_linkerd_headers: true
---
apiVersion: getambassador.io/v2
kind: Host
metadata:
  name: wildcard
  namespace: ambassador
spec:
  selector:
    matchLabels:
      nothing: nothing
  acmeProvider:
    authority: none
  requestPolicy:
    insecure:
      action: Route
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: public-health-check
  namespace: ambassador
spec:
  prefix: /-/ambassador/ready
  rewrite: /ambassador/v0/check_ready
  service: localhost:8877
  bypass_auth: true
EOF
The Ambassador service and deployment definitions need to be patched a little bit. This adds the metadata required by the service mirror component. To get these resources patched, run:
kubectl --context=${ctx} -n ambassador patch deploy ambassador -p='
spec:
  template:
    metadata:
      annotations:
        config.linkerd.io/enable-gateway: "true"
'
kubectl --context=${ctx} -n ambassador patch svc ambassador --type='json' -p='[
{"op":"add","path":"/spec/ports/-", "value":{"name": "mc-gateway", "port": 4143}},
{"op":"replace","path":"/spec/ports/0", "value":{"name": "mc-probe", "port": 80, "targetPort": 8080}}
]'
kubectl --context=${ctx} -n ambassador patch svc ambassador -p='
metadata:
  annotations:
    mirror.linkerd.io/gateway-identity: ambassador.ambassador.serviceaccount.identity.linkerd.cluster.local
    mirror.linkerd.io/probe-path: -/ambassador/ready
    mirror.linkerd.io/probe-period: "3"
'
With everything set up and configured, you can pick services to use Ambassador as the gateway instead of the one bundled with Linkerd. Do this by adding the following annotations to any services you’d like mirrored to other clusters.
mirror.linkerd.io/gateway-name: ambassador
mirror.linkerd.io/gateway-ns: ambassador
Clusters that have already been linked will automatically pick up the service and configure it to use Ambassador as the gateway! From a cluster that is not running Ambassador, you can validate that everything is working correctly by running:
linkerd check --multicluster
Additionally, the ambassador gateway will show up when listing the active gateways:
linkerd multicluster gateways
Trust Anchor Bundle
To secure the connections between clusters, Linkerd requires that there is a shared trust anchor. This allows the control plane to encrypt the requests that go between clusters and verify the identity of those requests. This identity is used to control access to clusters, so it is critical that the trust anchor is shared.
The easiest way to do this is to have a single trust anchor certificate shared between multiple clusters. If you have an existing Linkerd installation and have thrown away the trust anchor key, it might not be possible to have a single certificate for the trust anchor. Luckily, the trust anchor can be a bundle of certificates as well!
To fetch your existing cluster’s trust anchor, run:
kubectl -n linkerd get cm/linkerd-config -ojsonpath="{.data.global}" | \
jq .identityContext.trustAnchorsPem -r > trustAnchor.crt
Note
This command requires jq. If you don’t have jq, feel free to extract the certificate with your tool of choice.
Now, you’ll want to create a new trust anchor and issuer for the new cluster:
step certificate create root.linkerd.cluster.local root.crt root.key \
  --profile root-ca --no-password --insecure
step certificate create identity.linkerd.cluster.local issuer.crt issuer.key \
  --profile intermediate-ca --not-after 8760h --no-password --insecure \
  --ca root.crt --ca-key root.key
Note
We use the step CLI to generate certificates. openssl works just as well!
With the old cluster’s trust anchor and the new cluster’s trust anchor, you can create a bundle by running:
cat trustAnchor.crt root.crt > bundle.crt
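Since the bundle is plain PEM concatenation, you can sanity-check the idea with placeholder files (dummy PEM bodies, purely illustrative, not real certificates):

```shell
# create two placeholder "roots" (dummy contents standing in for real certs)
printf -- '-----BEGIN CERTIFICATE-----\nAAAA\n-----END CERTIFICATE-----\n' > trustAnchor.crt
printf -- '-----BEGIN CERTIFICATE-----\nBBBB\n-----END CERTIFICATE-----\n' > root.crt

# a bundle is simply both PEM blocks back to back
cat trustAnchor.crt root.crt > bundle.crt

# the bundle now carries two certificate blocks
grep -c -- '-----BEGIN CERTIFICATE-----' bundle.crt
```

Linkerd will trust identities signed by any root in the bundle, which is what lets the old and new clusters verify each other.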
You’ll want to upgrade your existing cluster with the new bundle. Make sure every pod you’d like to have talk to the new cluster is restarted so that it can use this bundle. To upgrade the existing cluster with this new trust anchor bundle, run:
linkerd upgrade --identity-trust-anchors-file=./bundle.crt | \
  kubectl apply -f -
The new cluster can then be installed with the same bundle, along with the new cluster’s issuer credentials:
linkerd install \
  --identity-trust-anchors-file bundle.crt \
  --identity-issuer-certificate-file issuer.crt \
  --identity-issuer-key-file issuer.key | \
  kubectl apply -f -
Make sure to verify that the clusters have started up successfully by running check on each one.
linkerd check
Installing the Multicluster Components through Helm
Linkerd’s multicluster components, i.e. the gateway and service mirror, can be installed via Helm rather than the linkerd multicluster install command.
This not only allows advanced configuration, but also lets users bundle the multicluster installation as part of their existing Helm-based installation pipeline.
First, let’s add Linkerd’s Helm repository by running:
# To add the repo for Linkerd2 stable releases:
helm repo add linkerd https://helm.linkerd.io/stable
By default, both multicluster components, i.e. the service mirror and the gateway, are installed when no toggle values are set:
helm install linkerd2-multicluster linkerd/linkerd2-multicluster
The chart values will be picked from the chart’s values.yaml file. You can override the values in that file by providing your own values.yaml file passed with a -f option, or by overriding specific values using the family of --set flags.
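For instance, a minimal values.yaml overriding the component toggles (key names as used by this chart) might look like:

```yaml
# values.yaml -- a sketch: install only the service mirror, skip the gateway
gateway: false
serviceMirror: true
```

You would then pass it with helm install linkerd2-multicluster linkerd/linkerd2-multicluster -f values.yaml.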
The full set of configuration options can be found in the chart’s README.
The installation can be verified by running:
linkerd check --multicluster
Individual multicluster components can be enabled or disabled by setting the serviceMirror and gateway values respectively. By default, both of these values are true.
For the source cluster to be able to access the target cluster’s services, access credentials have to be present in the target cluster. This can be done using the linkerd multicluster allow command through the CLI. The same functionality can also be achieved through Helm by disabling gateway and serviceMirror while submitting the remote service account name:
helm install linkerd2-mc-source linkerd/linkerd2-multicluster --set gateway=false --set serviceMirror=false --set remoteMirrorServiceAccountName=source --set installNamespace=false --kube-context target
Note
installNamespace should be disabled if the access credentials are being created in the same namespace as the multicluster components, to prevent failure due to namespace ownership conflicts between the Helm releases.
Now that the multicluster components are installed, operations like linking clusters can be performed using the Linkerd CLI’s multicluster sub-command, as described in the multicluster task.