This new control plane is often referred to as a canary but, in practice, it is advisable to name it after the Istio version, since it will remain on the cluster for the long term.
Right now, migration is only possible on a per-namespace basis; it is not yet supported at the pod level, although it probably will be in future releases.
Right after the new control plane is installed, there is a new istiod deployment running with practically no data plane pods attached.
In the traditional sense, a canary upgrade flow ends with a rolling update of the old application to the new one. That's not what's happening here: the canary control plane remains in the cluster for the long term, and the original control plane isn't rolled over but deleted (which is why we recommend naming the new control plane after a version number rather than calling it canary). We use the expression canary upgrade here because that's the term used uniformly in Istio, and because it's easier to understand and remember than a newly coined term that would paraphrase the flow more precisely.
If you need a hand with this, you can use the free version of our Banzai Cloud Pipeline platform to create a cluster.
$ helm repo add banzaicloud-stable https://kubernetes-charts.banzaicloud.com
$ helm install istio-operator-v16x --create-namespace --namespace=istio-system --set-string operator.image.tag=0.6.12 --set-string istioVersion=1.6 banzaicloud-stable/istio-operator
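To quickly confirm that the operator release went in before moving on, you can list the Helm releases in the namespace (an optional sanity check):
$ helm list -n istio-system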
Now apply the Istio Custom Resource and let the operator reconcile the Istio 1.6 control plane.
$ kubectl apply -n istio-system -f https://raw.githubusercontent.com/banzaicloud/istio-operator/release-1.6/config/samples/istio_v1beta1_istio.yaml
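If you prefer to wait for the reconciliation to settle before looking at the pods, you can watch the Istio resource itself; istio-sample is the resource name used in the sample manifest referenced above:
$ kubectl get istio -n istio-system istio-sample -w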
$ kubectl get po -n=istio-system
NAME READY STATUS RESTARTS AGE
istio-ingressgateway-55b89d99d7-4m884 1/1 Running 0 17s
istio-operator-v16x-0 2/2 Running 0 57s
istiod-5865cb6547-zp5zh 1/1 Running 0 29s
$ kubectl create ns demo-a
$ kubectl create ns demo-b
$ kubectl patch istio -n istio-system istio-sample --type=json -p='[{"op": "replace", "path": "/spec/autoInjectionNamespaces", "value": ["demo-a", "demo-b"]}]'
$ kubectl get ns demo-a demo-b -L istio-injection
NAME STATUS AGE ISTIO-INJECTION
demo-a Active 2m11s enabled
demo-b Active 2m9s enabled
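As the output shows, the operator reacted to the patch by putting the standard istio-injection=enabled label on both namespaces. If you ever manage injection outside the operator's autoInjectionNamespaces list, the equivalent manual step would look like this (shown only as an aside; in this walkthrough the operator handles the label for you):
$ kubectl label namespace demo-a istio-injection=enabled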
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-a
  labels:
    k8s-app: app-a
  namespace: demo-a
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: app-a
  template:
    metadata:
      labels:
        k8s-app: app-a
    spec:
      terminationGracePeriodSeconds: 2
      containers:
      - name: echo-service
        image: k8s.gcr.io/echoserver:1.10
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: app-a
  labels:
    k8s-app: app-a
  namespace: demo-a
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
  selector:
    k8s-app: app-a
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-b
  labels:
    k8s-app: app-b
  namespace: demo-b
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: app-b
  template:
    metadata:
      labels:
        k8s-app: app-b
    spec:
      terminationGracePeriodSeconds: 2
      containers:
      - name: echo-service
        image: k8s.gcr.io/echoserver:1.10
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: app-b
  labels:
    k8s-app: app-b
  namespace: demo-b
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
  selector:
    k8s-app: app-b
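Save the two manifests and apply them; the file names below are just placeholders for wherever you stored the snippets:
$ kubectl apply -f app-a.yaml
$ kubectl apply -f app-b.yaml
Once the pods are running, the READY column of kubectl get pods should show 2/2 in both namespaces, because the sidecar proxy is injected next to the echo-service container.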
$ APP_A_POD_NAME=$(kubectl get pods -n demo-a -l k8s-app=app-a -o=jsonpath='{.items[0].metadata.name}')
$ APP_B_POD_NAME=$(kubectl get pods -n demo-b -l k8s-app=app-b -o=jsonpath='{.items[0].metadata.name}')
$ kubectl exec -n=demo-a -ti -c echo-service $APP_A_POD_NAME -- curl -Ls -o /dev/null -w "%{http_code}" app-b.demo-b.svc.cluster.local
200
$ kubectl exec -n=demo-b -ti -c echo-service $APP_B_POD_NAME -- curl -Ls -o /dev/null -w "%{http_code}" app-a.demo-a.svc.cluster.local
200
$ helm install istio-operator-v17x --create-namespace --namespace=istio-system --set-string operator.image.tag=0.7.1 banzaicloud-stable/istio-operator
Then apply the new Istio Custom Resource and let the operator reconcile the Istio 1.7 control plane.
$ kubectl apply -n istio-system -f https://raw.githubusercontent.com/banzaicloud/istio-operator/release-1.7/config/samples/istio_v1beta1_istio.yaml
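Both Istio Custom Resources should now be present; listing them is a quick sanity check (the exact columns depend on the operator version):
$ kubectl get istio -n istio-system
You should see the original istio-sample resource alongside the new revisioned one, istio-sample-v17x.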
$ kubectl get po -n=istio-system
NAME READY STATUS RESTARTS AGE
istio-ingressgateway-55b89d99d7-4m884 1/1 Running 0 6m38s
istio-operator-v16x-0 2/2 Running 0 7m18s
istio-operator-v17x-0 2/2 Running 0 76s
istiod-676fc6d449-9jwfj 1/1 Running 0 10s
istiod-istio-sample-v17x-7dbdf4f9fc-bfxhl 1/1 Running 0 18s
$ kubectl patch mgw -n istio-system istio-ingressgateway --type=json -p='[{"op": "replace", "path": "/spec/istioControlPlane/name", "value": "istio-sample-v17x"}]'
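This patch switches the ingress gateway over to the new control plane. If you want to double-check which control plane the gateway is attached to, you can read the field back directly; it should now print istio-sample-v17x:
$ kubectl get mgw -n istio-system istio-ingressgateway -o jsonpath='{.spec.istioControlPlane.name}'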
$ kubectl label ns demo-a istio-injection- istio.io/rev=istio-sample-v17x.istio-system
The new istio.io/rev label must be used so that the revisioned control plane can perform sidecar injection. The istio-injection label has to be removed, because for backward compatibility it takes precedence over the istio.io/rev label.
$ kubectl get ns demo-a -L istio-injection -L istio.io/rev
NAME     STATUS   AGE     ISTIO-INJECTION   REV
demo-a   Active   12m                       istio-sample-v17x.istio-system
$ kubectl rollout restart deployment -n demo-a
$ APP_A_POD_NAME=$(kubectl get pods -n demo-a -l k8s-app=app-a -o=jsonpath='{.items[0].metadata.name}')
$ kubectl get po -n=demo-a $APP_A_POD_NAME -o yaml | grep istio/proxyv2:
image: docker.io/istio/proxyv2:1.7.0
image: docker.io/istio/proxyv2:1.7.0
image: docker.io/istio/proxyv2:1.7.0
image: docker.io/istio/proxyv2:1.7.0
The pods in the demo-a namespace are already on the Istio 1.7 control plane, but the pod in demo-b is still on Istio 1.6. Let's verify that they can still reach each other:
$ APP_A_POD_NAME=$(kubectl get pods -n demo-a -l k8s-app=app-a -o=jsonpath='{.items[0].metadata.name}')
$ APP_B_POD_NAME=$(kubectl get pods -n demo-b -l k8s-app=app-b -o=jsonpath='{.items[0].metadata.name}')
$ kubectl exec -n=demo-a -ti -c echo-service $APP_A_POD_NAME -- curl -Ls -o /dev/null -w "%{http_code}" app-b.demo-b.svc.cluster.local
200
$ kubectl exec -n=demo-b -ti -c echo-service $APP_B_POD_NAME -- curl -Ls -o /dev/null -w "%{http_code}" app-a.demo-a.svc.cluster.local
200
$ kubectl label ns demo-b istio-injection- istio.io/rev=istio-sample-v17x.istio-system
$ kubectl get ns demo-b -L istio-injection -L istio.io/rev
NAME     STATUS   AGE     ISTIO-INJECTION   REV
demo-b   Active   19m                       istio-sample-v17x.istio-system
$ kubectl rollout restart deployment -n demo-b
$ APP_B_POD_NAME=$(kubectl get pods -n demo-b -l k8s-app=app-b -o=jsonpath='{.items[0].metadata.name}')
$ kubectl get po -n=demo-b $APP_B_POD_NAME -o yaml | grep istio/proxyv2:
image: docker.io/istio/proxyv2:1.7.0
image: docker.io/istio/proxyv2:1.7.0
image: docker.io/istio/proxyv2:1.7.0
image: docker.io/istio/proxyv2:1.7.0
Finally, delete the old Istio Custom Resource to remove the Istio 1.6 control plane.
$ kubectl delete -n istio-system -f https://raw.githubusercontent.com/banzaicloud/istio-operator/release-1.6/config/samples/istio_v1beta1_istio.yaml
$ helm uninstall -n=istio-system istio-operator-v16x
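Only the Istio 1.7 components should be left in the istio-system namespace at this point; a final listing confirms it (pod names and ages will of course differ):
$ kubectl get po -n=istio-system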
Let's say that we have Backyards 1.3.2 with an Istio 1.6 control plane running in our cluster and we'd like to upgrade to Backyards 1.4 with an Istio 1.7 control plane. At a high level, we need to install Backyards 1.4, which deploys the Istio 1.7 control plane next to the existing 1.6 one, move the application namespaces over to the new control plane, and then remove the Istio 1.6 control plane.
Again, you can create a cluster with our free version of Banzai Cloud's Pipeline platform.
$ backyards-1.3.2 install -a --run-demo
After the command finishes (it installs the Istio operator, the Istio 1.6 control plane, the Backyards components and a demo application), you should be able to see the demo application working on the Backyards UI right away.
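If the UI doesn't open automatically, it can usually be reached through the CLI's dashboard command (a hedged aside; the exact subcommand and flags may differ between Backyards versions):
$ backyards-1.3.2 dashboard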
$ kubectl get po -n=istio-system
NAME READY STATUS RESTARTS AGE
istio-ingressgateway-b9b9b5fd-t54wc 1/1 Running 0 4m1s
istio-operator-operator-0 2/2 Running 0 4m39s
istiod-77bb4d75cd-jsl4v 1/1 Running 0 4m17s
$ kubectl get ns backyards-demo -L istio-injection
NAME STATUS AGE ISTIO-INJECTION
backyards-demo Active 2m21s enabled
$ BOOKINGS_POD_NAME=$(kubectl get pods -n backyards-demo -l app=bookings -o=jsonpath='{.items[0].metadata.name}')
$ kubectl get po -n=backyards-demo $BOOKINGS_POD_NAME -o yaml | grep istio-proxyv2:
image: banzaicloud/istio-proxyv2:1.6.3-bzc
image: docker.io/banzaicloud/istio-proxyv2:1.6.3-bzc
$ backyards-1.4 install -a --run-demo
...
Multiple control planes were found. Which one would you like to add the demo application to? [Use arrows to move, type to filter]
mesh
> cp-v17x
This command first installs the Istio 1.7 control plane and then, as you can see above, detects the two control planes when reinstalling the demo application. You can then choose the new 1.7 control plane for that namespace. On the Backyards UI, you can see that the communication still works, but now on the new control plane.
$ kubectl get po -n=istio-system
NAME READY STATUS RESTARTS AGE
istio-ingressgateway-5df9c6c68f-vnb4c 1/1 Running 0 3m47s
istio-operator-v16x-0 2/2 Running 0 3m59s
istio-operator-v17x-0 2/2 Running 0 3m43s
istiod-5b6d78cdfc-6kt5d 1/1 Running 0 2m45s
istiod-cp-v17x-b58f95c49-r24tr 1/1 Running 0 3m17s
$ kubectl get ns backyards-demo -L istio-injection -L istio.io/rev
NAME             STATUS   AGE   ISTIO-INJECTION   REV
backyards-demo   Active   11m                     cp-v17x.istio-system
$ BOOKINGS_POD_NAME=$(kubectl get pods -n backyards-demo -l app=bookings -o=jsonpath='{.items[0].metadata.name}')
$ kubectl get po -n=backyards-demo $BOOKINGS_POD_NAME -o yaml | grep istio-proxyv2:
image: banzaicloud/istio-proxyv2:1.7.0-bzc
image: docker.io/banzaicloud/istio-proxyv2:1.7.0-bzc
$ kubectl delete -n=istio-system istio mesh
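After the deletion, only the cp-v17x control plane should remain in the cluster; you can verify it the same way as before (resource names follow the ones used above):
$ kubectl get istio -n istio-system
$ kubectl get po -n=istio-system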
Want to know more? Get in touch with us, or delve into the details of the latest release. Or just take a look at some of the Istio features that Backyards automates and simplifies for you, and which we've already blogged about.