apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2018-12-16T10:23:26Z
  generation: 1
  labels:
    run: nginx
  name: nginx
  namespace: default
  resourceVersion: "1612985"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/nginx
  uid: 9fb3ad6a-011c-11e9-bfaa-42010aa60088
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2018-12-16T10:23:26Z
    lastUpdateTime: 2018-12-16T10:23:26Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2018-12-16T10:23:26Z
    lastUpdateTime: 2018-12-16T10:23:28Z
    message: ReplicaSet "nginx-64f497f8fd" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
Most of this is superfluous for us, so I will delete the unnecessary parts: when creating the Deployment we specified only the name and the image, and everything else was filled in with default values:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
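The trimmed manifest can then be applied back to the cluster. A minimal sketch, assuming it has been saved as deployment.yaml (a file name chosen here only for illustration):
kubectl apply -f deployment.yaml
kubectl get deployment nginx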
You can also create an instance template and a managed group of instances from it:
gcloud services enable compute.googleapis.com --project=${PROJECT}
gcloud beta compute instance-templates create-with-container ${TEMPLATE} \
    --machine-type=custom-1-4096 \
    --image-family=cos-stable \
    --image-project=cos-cloud \
    --container-image=gcr.io/kuar-demo/kuard-amd64:1 \
    --container-restart-policy=always \
    --preemptible \
    --region=${REGION} \
    --project=${PROJECT}
gcloud compute instance-groups managed create ${TEMPLATE} \
    --base-instance-name=${TEMPLATE} \
    --template=${TEMPLATE} \
    --size=${CLONES} \
    --region=${REGION} \
    --project=${PROJECT}
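To check that the group was created and its instances are up, the managed instance group can be listed. A minimal sketch, assuming the same ${TEMPLATE}, ${REGION} and ${PROJECT} variables as above:
gcloud compute instance-groups managed list-instances ${TEMPLATE} \
    --region=${REGION} \
    --project=${PROJECT}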
High service availability
To ensure high availability, traffic must be redirected to a spare instance when the application crashes. It is also often important that the load be distributed evenly, since a single instance of the application cannot handle all of the traffic. To do this, a cluster is created; let's take a more complex image so that we can look at a larger number of nuances:
esschtolts@cloudshell:~/bitrix (essch)$ cat deploymnet.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxlamp
spec:
  selector:
    matchLabels:
      app: lamp
  replicas: 1
  template:
    metadata:
      labels:
        app: lamp
    spec:
      containers:
      - name: lamp
        image: mattrayner/lamp:latest-1604-php5
        ports:
        - containerPort: 80
esschtolts@cloudshell:~/bitrix (essch)$ cat loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  ports:
  - name: front
    port: 80
    targetPort: 80
  selector:
    app: lamp
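Both manifests have to be applied to the cluster before the PODs and the service appear. A minimal sketch, using the file names from the listings above:
kubectl apply -f deploymnet.yaml
kubectl apply -f loadbalancer.yaml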
esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginxlamp-7fb6fdd47b-jttl8 2/2 Running 0 3m
esschtolts@cloudshell:~/bitrix (essch)$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend LoadBalancer 10.55.242.137 35.228.73.217 80:32701/TCP,8080:32568/TCP 4m
kubernetes ClusterIP 10.55.240.1 <none> 443/TCP 48m
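To make sure the balancer really serves traffic, the external IP from the output above can be queried directly (the address will of course be different in your cluster):
curl http://35.228.73.217/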
Now we can create identical copies of our clusters, for example for production and development, but balancing will not work as expected: the balancer finds PODs by label, and the PODs in both the production and the development clusters match this label. Placing the clusters in different projects would not be an obstacle either, and although for many tasks that is a big plus, it is not in the case of clusters for development and production. Namespaces are used to delimit the scope. We use them without noticing it: when we list PODs without specifying a namespace, the default one is used, and the PODs of the system namespaces are not shown:
esschtolts@cloudshell:~/bitrix (essch)$ kubectl get namespace
NAME STATUS AGE
default Active 5h
kube-public Active 5h
kube-system Active
esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
event-exporter-v0.2.3-85644fcdf-tdt7h 2/2 Running 0 5h
fluentd-gcp-scaler-697b966945-bkqrm 1/1 Running 0 5h
fluentd-gcp-v3.1.0-xgtw9 2/2 Running 0 5h
heapster-v1.6.0-beta.1-5649d6ddc6-p549d 3/3 Running 0 5h
kube-dns-548976df6c-8lvp6 4/4 Running 0 5h
kube-dns-548976df6c-mcctq 4/4 Running 0 5h
kube-dns-autoscaler-67c97c87fb-zzl9w 1/1 Running 0 5h
kube-proxy-gke-bitrix-default-pool-38fa77e9-0wdx 1/1 Running 0 5h
kube-proxy-gke-bitrix-default-pool-38fa77e9-wvrf 1/1 Running 0 5h
l7-default-backend-5bc54cfb57-6qk4l 1/1 Running 0 5h
metrics-server-v0.2.1-fd596d746-g452c 2/2 Running 0 5h
esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods --namespace=default
NAME READY STATUS RESTARTS AGE
nginxlamp-b5dcb7546-g8j5r 1/1 Running 0 4h
Let's create our own namespace:
esschtolts@cloudshell:~/bitrix (essch)$ cat namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
  labels:
    name: development
esschtolts@cloudshell:~ (essch)$ kubectl create -f namespace.yaml
namespace "development" created
esschtolts@cloudshell:~/bitrix (essch)$ kubectl get namespace --show-labels
NAME STATUS AGE LABELS
default Active 5h <none>
development Active 16m name=development
kube-public Active 5h <none>
kube-system Active 5h <none>
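The same manifests can now be deployed into the new namespace simply by naming it on the command line. A minimal sketch, assuming the Deployment manifest shown earlier is still in deploymnet.yaml:
kubectl create -f deploymnet.yaml --namespace=development
kubectl get pods --namespace=development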
The essence of working with namespaces is that for specific clusters we set a namespace, and we can then execute commands specifying it, so that they apply only to that namespace. At the same time, apart from the keys of commands such as kubectl get pods, the namespace does not appear in the configuration files of controllers (Deployment, DaemonSet and others) and services (LoadBalancer, NodePort and others), which allows them to be seamlessly transferred between namespaces. This is especially relevant for the development pipeline: developer server, test server, and production server. Namespaces are set in the cluster context file $HOME/.kube/config, which can be viewed with the kubectl config view command. So, in my cluster context entry there is no namespace entry (the default is default):
- context:
    cluster: gke_essch_europe-north1-a_bitrix
    user: gke_essch_europe-north1-a_bitrix
  name: gke_essch_europe-north1-a_bitrix
You can see something like this: