essh@kubernetes-master:~/node-cluster$ cp ~/Downloads/node-cluster-243923-bbec410e0a83.json ./kubernetes_key.json
I downloaded Terraform:
essh@kubernetes-master:~/node-cluster$ wget https://releases.hashicorp.com/terraform/0.12.2/terraform_0.12.2_linux_amd64.zip >/dev/null 2>/dev/null
essh@kubernetes-master:~/node-cluster$ unzip terraform_0.12.2_linux_amd64.zip && rm -f terraform_0.12.2_linux_amd64.zip
Archive: terraform_0.12.2_linux_amd64.zip
inflating: terraform
essh@kubernetes-master:~/node-cluster$ ./terraform version
Terraform v0.12.2
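Before deleting the downloaded archive, it could also have been verified against HashiCorp's published checksums; a minimal sketch, assuming the standard releases.hashicorp.com layout for the 0.12.2 release:
essh@kubernetes-master:~/node-cluster$ wget https://releases.hashicorp.com/terraform/0.12.2/terraform_0.12.2_SHA256SUMS
essh@kubernetes-master:~/node-cluster$ sha256sum -c --ignore-missing terraform_0.12.2_SHA256SUMS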
I added the GCE provider and started downloading its "drivers":
essh@kubernetes-master:~/node-cluster$ cat main.tf
provider "google" {
credentials = "${file("kubernetes_key.json")}"
project = "node-cluster"
region = "us-central1"
}essh@kubernetes-master:~/node-cluster$ ./terraform init
Initializing the backend...
Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "google" (terraform-providers/google) 2.8.0...
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "…" constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.
* provider.google: version = "~> 2.8"
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
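Following the initializer's suggestion, the provider version can be pinned so that a future terraform init does not silently pull in a new major release with breaking changes; the same provider block with the recommended constraint:
provider "google" {
  version     = "~> 2.8"
  credentials = "${file("kubernetes_key.json")}"
  project     = "node-cluster"
  region      = "us-central1"
}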
I'll add a virtual machine:
essh@kubernetes-master:~/node-cluster$ cat main.tf
provider "google" {
credentials = "${file("kubernetes_key.json")}"
project = "node-cluster-243923"
region = "europe-north1"
}
resource "google_compute_instance" "cluster" {
name = "cluster"
zone = "europe-north1-a"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
network = "default"
access_config {}
}
essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# google_compute_instance.cluster will be created
+ resource "google_compute_instance" "cluster" {
+ can_ip_forward = false
+ cpu_platform = (known after apply)
+ deletion_protection = false
+ guest_accelerator = (known after apply)
+ id = (known after apply)
+ instance_id = (known after apply)
+ label_fingerprint = (known after apply)
+ machine_type = "f1-micro"
+ metadata_fingerprint = (known after apply)
+ name= "cluster"
+ project = (known after apply)
+ self_link = (known after apply)
+ tags_fingerprint = (known after apply)
+ zone= "europe-north1-a"
+ boot_disk {
+ auto_delete = true
+ device_name = (known after apply)
+ disk_encryption_key_sha256 = (known after apply)
+ source = (known after apply)
+ initialize_params {
+ image = "debian-cloud/debian-9"
+ size = (known after apply)
+ type = (known after apply)
}
}
+ network_interface {
+ address = (known after apply)
+ name = (known after apply)
+ network = "default"
+ network_ip = (known after apply)
+ subnetwork = (known after apply)
+ subnetwork_project = (known after apply)
+ access_config {
+ assigned_nat_ip = (known after apply)
+ nat_ip = (known after apply)
+ network_tier = (known after apply)
}
}
+ scheduling {
+ automatic_restart = (known after apply)
+ on_host_maintenance = (known after apply)
+ preemptible = (known after apply)
+ node_affinities {
+ key = (known after apply)
+ operator = (known after apply)
+ values = (known after apply)
}
}
}
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
google_compute_instance.cluster: Creating...
google_compute_instance.cluster: Still creating... [10s elapsed]
google_compute_instance.cluster: Creation complete after 11s [id=cluster]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
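The instance is now recorded in the local terraform.tfstate file, so its attributes, including the ephemeral external IP, can be inspected from the same directory without opening the cloud console:
essh@kubernetes-master:~/node-cluster$ sudo ./terraform show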
I'll add a public static IP address and an SSH key to the node:
essh@kubernetes-master:~/node-cluster$ ssh-keygen -f node-cluster
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in node-cluster.
Your public key has been saved in node-cluster.pub.
The key fingerprint is:
SHA256:vUhDe7FOzykE5BSLOIhE7Xt9o+AwgM4ZKOCW4nsLG58 essh@kubernetes-master
The key's randomart image is:
+---[RSA 2048]----+
|.o. +. |
|o. o . = . |
|* + o . = . |
|=* . . . + o |
|B + . . S * |
| = + o o X + . |
| o. = . + = + |
| .=… . . |
| ..E. |
+----[SHA256]-----+
essh@kubernetes-master:~/node-cluster$ ls node-cluster.pub
node-cluster.pub
essh@kubernetes-master:~/node-cluster$ cat main.tf
provider "google" {
credentials = "${file("kubernetes_key.json")}"
project = "node-cluster-243923"
region = "europe-north1"
}
resource "google_compute_address" "static-ip-address" {
name = "static-ip-address"
}
resource "google_compute_instance" "cluster" {
name = "cluster"
zone = "europe-north1-a"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
metadata = {
ssh-keys = "essh:${file("./node-cluster.pub")}"
}
network_interface {
network = "default"
access_config {
nat_ip = "${google_compute_address.static-ip-address.address}"
}
}
}essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply
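Rather than copying the address from the cloud console, an output can expose it after apply; a minimal sketch appended to main.tf (the name cluster_ip is arbitrary):
output "cluster_ip" {
  value = "${google_compute_address.static-ip-address.address}"
}
After another apply, sudo ./terraform output cluster_ip prints the address used below.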
Let's check the SSH connection to the server:
essh@kubernetes-master:~/node-cluster$ ssh -i ./node-cluster essh@35.228.82.222
The authenticity of host '35.228.82.222 (35.228.82.222)' can't be established.
ECDSA key fingerprint is SHA256:o7ykujZp46IF+eu7SaIwXOlRRApiTY1YtXQzsGwO18A.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '35.228.82.222' (ECDSA) to the list of known hosts.
Linux cluster 4.9.0-9-amd64 #1 SMP Debian 4.9.168-1+deb9u2 (2019-05-13) x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
essh@cluster:~$ ls
essh@cluster:~$ exit
logout
Connection to 35.228.82.222 closed.
Let's install the packages:
essh@kubernetes-master:~/node-cluster$ curl https://sdk.cloud.google.com | bash
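The installer adds the SDK to the shell profile; a typical follow-up, assuming the default install path, is to reload the shell and then authorize the CLI:
essh@kubernetes-master:~/node-cluster$ exec -l $SHELL
essh@kubernetes-master:~/node-cluster$ gcloud init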