essh@kubernetes-master:~/node-cluster$ exec -l $SHELL
essh@kubernetes-master:~/node-cluster$ gcloud init
Let's choose a project:
You are logged in as: [esschtolts@gmail.com].
Pick cloud project to use:
[1] agile-aleph-203917
[2] node-cluster-243923
[3] essch
[4] Create a new project
Please enter numeric choice or text value (must exactly match list
item):
Please enter a value between 1 and 4, or a value present in the list: 2
Your current project has been set to: [node-cluster-243923].
Let's choose a zone:
[50] europe-north1-a
Did not print [12] options.
Too many options [62]. Enter "list" at prompt to print choices fully.
Please enter numeric choice or text value (must exactly match list
item):
Please enter a value between 1 and 62, or a value present in the list: 50
essh@kubernetes-master:~/node-cluster$ PROJECT_ID="node-cluster-243923"
essh@kubernetes-master:~/node-cluster$ echo $PROJECT_ID
node-cluster-243923
essh@kubernetes-master:~/node-cluster$ export GOOGLE_APPLICATION_CREDENTIALS=$HOME/node-cluster/kubernetes_key.json
essh@kubernetes-master:~/node-cluster$ sudo docker-machine create --driver google --google-project $PROJECT_ID vm01
sudo GOOGLE_APPLICATION_CREDENTIALS=$HOME/node-cluster/kubernetes_key.json docker-machine create --driver google --google-project $PROJECT_ID vm01
// https://docs.docker.com/machine/drivers/gce/
// https://github.com/docker/machine/issues/4722
essh@kubernetes-master:~/node-cluster$ gcloud config list
[compute]
region = europe-north1
zone = europe-north1-a
[core]
account = esschtolts@gmail.com
disable_usage_reporting = False
project = node-cluster-243923
Your active configuration is: [default]
Let's add copying a file and running a script:
essh@kubernetes-master:~/node-cluster$ cat main.tf
provider "google" {
credentials = "${file("kubernetes_key.json")}"
project = "node-cluster-243923"
region = "europe-north1"
}
resource "google_compute_address" "static-ip-address" {
name = "static-ip-address"
}
resource "google_compute_instance" "cluster" {
name = "cluster"
zone = "europe-north1-a"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
metadata = {
ssh-keys = "essh:${file("./node-cluster.pub")}"
}
network_interface {
network = "default"
access_config {
nat_ip = "${google_compute_address.static-ip-address.address}"
}
}
}
resource "null_resource" "cluster" {
triggers = {
cluster_instance_ids = "${join(",", google_compute_instance.cluster.*.id)}"
}
connection {
host = "${google_compute_address.static-ip-address.address}"
type = "ssh"
user = "essh"
timeout = "2m"
private_key = "${file("~/node-cluster/node-cluster")}"
# agent = "false"
}
provisioner "file" {
source = "client.js"
destination = "~/client.js"
}
provisioner "remote-exec" {
inline = [
"cd ~ && echo 1 > test.txt"
]
}
essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply
google_compute_address.static-ip-address: Creating…
google_compute_address.static-ip-address: Creation complete after 5s [id=node-cluster-243923/europe-north1/static-ip-address]
google_compute_instance.cluster: Creating…
google_compute_instance.cluster: Still creating… [10s elapsed]
google_compute_instance.cluster: Creation complete after 12s [id=cluster]
null_resource.cluster: Creating…
null_resource.cluster: Provisioning with 'file'…
null_resource.cluster: Provisioning with 'remote-exec'…
null_resource.cluster (remote-exec): Connecting to remote host via SSH…
null_resource.cluster (remote-exec): Host: 35.228.82.222
null_resource.cluster (remote-exec): User: essh
null_resource.cluster (remote-exec): Password: false
null_resource.cluster (remote-exec): Private key: true
null_resource.cluster (remote-exec): Certificate: false
null_resource.cluster (remote-exec): SSH Agent: false
null_resource.cluster (remote-exec): Checking Host Key: false
null_resource.cluster (remote-exec): Connected!
null_resource.cluster: Creation complete after 7s [id=816586071607403364]
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
esschtolts@cluster:~$ ls /home/essh/
client.js test.txt
[sudo] password for essh:
google_compute_address.static-ip-address: Refreshing state… [id=node-cluster-243923/europe-north1/static-ip-address]
google_compute_instance.cluster: Refreshing state… [id=cluster]
null_resource.cluster: Refreshing state… [id=816586071607403364]
Enter a value: yes
null_resource.cluster: Destroying… [id=816586071607403364]
null_resource.cluster: Destruction complete after 0s
google_compute_instance.cluster: Destroying… [id=cluster]
google_compute_instance.cluster: Still destroying… [id=cluster, 10s elapsed]
google_compute_instance.cluster: Still destroying… [id=cluster, 20s elapsed]
google_compute_instance.cluster: Destruction complete after 27s
google_compute_address.static-ip-address: Destroying… [id=node-cluster-243923/europe-north1/static-ip-address]
google_compute_address.static-ip-address: Destruction complete after 8s
To deploy the whole project, it can be added to a repository, and we will get it onto the virtual machine by copying an installation script to that machine and then running it:
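As a rough sketch of how that could look, reusing the same connection and provisioners as above (the script name install.sh and its contents are assumptions, not part of the project):

resource "null_resource" "deploy" {
  connection {
    host        = "${google_compute_address.static-ip-address.address}"
    type        = "ssh"
    user        = "essh"
    timeout     = "2m"
    private_key = "${file("~/node-cluster/node-cluster")}"
  }

  # Copy the installation script to the VM (install.sh is a hypothetical name).
  provisioner "file" {
    source      = "install.sh"
    destination = "~/install.sh"
  }

  # Run it: the script itself would clone the repository and start the service.
  provisioner "remote-exec" {
    inline = [
      "chmod +x ~/install.sh",
      "~/install.sh"
    ]
  }
}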
Moving on to Kubernetes
In the minimal variant, creating a three-node cluster looks roughly like this:
essh@kubernetes-master:~/node-cluster/Kubernetes$ cat main.tf
provider "google" {
credentials = "${file("../kubernetes_key.json")}"
project = "node-cluster-243923"
region = "europe-north1"
}
resource "google_container_cluster" "node-ks" {
name = "node-ks"
location = "europe-north1-a"
initial_node_count = 3
}
essh@kubernetes-master:~/node-cluster/Kubernetes$ sudo ../terraform init
essh@kubernetes-master:~/node-cluster/Kubernetes$ sudo ../terraform apply
The cluster was created in 2:15, and after I added two additional zones, europe-north1-b and europe-north1-c, to europe-north1-a and set the number of instances created per zone to one, the cluster was created in 3:13, because for higher availability the nodes were created in different data centers: europe-north1-a, europe-north1-b and europe-north1-c:
provider "google" {
credentials = "${file("../kubernetes_key.json")}"
project = "node-cluster-243923"
region = "europe-north1"
}
resource "google_container_cluster" "node-ks" {
name = "node-ks"
location = "europe-north1-a"
node_locations = ["europe-north1-b", "europe-north1-c"]
initial_node_count = 1
}
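To see where the new cluster's API server is reachable directly from the Terraform run, an output can be added to the same main.tf; a minimal sketch, where the output name node_ks_endpoint is arbitrary:

output "node_ks_endpoint" {
  value = "${google_container_cluster.node-ks.endpoint}"
}

After apply, Terraform prints this value along with the rest of the run summary.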
Now let's split our cluster into two: a control cluster with Kubernetes and a cluster for our PODs. Both clusters will be distributed across three data centers. The cluster for our PODs will be able to autoscale under load up to 2 nodes per zone (from three to six in total), as sketched below:
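A rough sketch of what the POD cluster with a separate autoscaling node pool could look like (the pool name node-ks-pool and the use of remove_default_node_pool are assumptions, not taken from the text):

resource "google_container_cluster" "node-ks" {
  name                     = "node-ks"
  location                 = "europe-north1-a"
  node_locations           = ["europe-north1-b", "europe-north1-c"]
  # Drop the default pool so the separately managed pool below is the only one.
  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_container_node_pool" "node-ks-pool" {
  name     = "node-ks-pool"
  cluster  = "${google_container_cluster.node-ks.name}"
  location = "europe-north1-a"

  # One node per zone to start, scaling up to 2 per zone under load (3 to 6 in total).
  initial_node_count = 1
  autoscaling {
    min_node_count = 1
    max_node_count = 2
  }
}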