
essh@kubernetes-master:~/node-cluster/Kubernetes$ sudo ../terraform apply -var="project_name=node-cluster-243923"

The project in our folder is not just a project; it is also a ready-to-use module:

essh@kubernetes-master:~/node-cluster/Kubernetes$ cd ..

essh@kubernetes-master:~/node-cluster$ cat main.tf

module "Kubernetes" {

source = "./Kubernetes"

project_name = "node-cluster-243923"

}

essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply

Or we can publish it to a public repository:

essh@kubernetes-master:~/node-cluster/Kubernetes$ git init

Initialized empty Git repository in /home/essh/node-cluster/Kubernetes/.git/

essh@kubernetes-master:~/node-cluster/Kubernetes$ echo "terraform.tfstate" >> .gitignore

essh@kubernetes-master:~/node-cluster/Kubernetes$ echo "terraform.tfstate.backup" >> .gitignore

essh@kubernetes-master:~/node-cluster/Kubernetes$ echo ".terraform/" >> .gitignore
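These three commands give us a .gitignore that keeps the local state files and the .terraform provider cache out of the repository:

terraform.tfstate

terraform.tfstate.backup

.terraform/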

essh@kubernetes-master:~/node-cluster/Kubernetes$ rm -f kubernetes_key.json

essh@kubernetes-master:~/node-cluster/Kubernetes$ git remote add origin https://github.com/ESSch/terraform-google-kubernetes.git

essh@kubernetes-master:~/node-cluster/Kubernetes$ git add .

essh@kubernetes-master:~/node-cluster/Kubernetes$ git commit -m 'create a k8s Terraform module'

[master (root-commit) 4f73c64] create a k8s Terraform module

3 files changed, 48 insertions(+)

create mode 100644 .gitignore

create mode 100644 main.tf

create mode 100644 variables.tf

essh@kubernetes-master:~/node-cluster/Kubernetes$ git push -u origin master

essh@kubernetes-master:~/node-cluster/Kubernetes$ git tag -a v0.0.2 -m 'publish'

essh@kubernetes-master:~/node-cluster/Kubernetes$ git push origin v0.0.2

After publishing it in the module registry at https://registry.terraform.io/ and meeting the registry's requirements, such as having a description, we can use the module:

essh@kubernetes-master:~/node-cluster$ cat main.tf

module "kubernetes" {

# source = "./Kubernetes"

source = "ESSch / kubernetes / google"

version = "0.0.2"

project_name = "node-cluster-243923"

}

essh@kubernetes-master:~/node-cluster$ sudo ./terraform init

essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply

On the next creation of the cluster I got the error ZONE_RESOURCE_POOL_EXHAUSTED: "does not have enough resources available to fulfill the request. Try a different zone, or try again later", meaning that the required machines are not available in this zone. For me this is not a problem, and it does not require editing the module's code, because I parameterized the module with the region: if I simply pass region = "europe-west2" to the module as a parameter, then after the ./terraform init and ./terraform apply commands Terraform will move my cluster to the specified region.

Let's improve our module a little by moving the provider from the Kubernetes child module into the main module (the main script is itself a module). With the provider in the main module we will be able to use one more module; otherwise the provider in one module would conflict with the provider in another. This inheritance from the main module into child modules, and its transparency, applies only to providers. Any other data has to be passed from a child to its parent through output variables, and from a parent to a child by parameterizing the child module, but we will get to that later, when we create another module. Moving the provider into the parent module will also be useful for the next module we are going to create: it will create Kubernetes resources that do not depend on the provider, so we can decouple the Google provider from it and use it with other providers that support Kubernetes. Now we no longer need to pass the project name in a variable: it is set in the provider.

For convenience during development I will use a local module reference for now. I created a folder and a file for the new module:

essh@kubernetes-master:~/node-cluster$ ls nodejs/

main.tf

essh@kubernetes-master:~/node-cluster$ cat main.tf

// module "kubernetes" {

// source = "ESSch / kubernetes / google"

// version = "0.0.2"

//

// project_name = "node-cluster-243923"

// region = "europe-west2"

//}

provider "google" {

credentials = "$ {file (" ./ kubernetes_key.json ")}"

project = "node-cluster-243923"

region = "europe-west2"

}

module "Kubernetes" {

source = "./Kubernetes"

project_name = "node-cluster-243923"

region = "europe-west2"

}

module "nodejs" {

source = "./nodejs"

}

essh@kubernetes-master:~/node-cluster$ sudo ./terraform init

essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply
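For the root module to be able to pass project_name and region into the child module, the Kubernetes module has to declare them as input variables in its variables.tf. That file is not listed here, so the following is only a minimal sketch, in the same empty-declaration style as the nodejs variables shown later:

variable "project_name" {}

variable "region" {}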

Now let's transfer data from the Kubernetes infrastructure module to the application module:

essh@kubernetes-master:~/node-cluster$ cat Kubernetes/outputs.tf

output "endpoint" {

value = google_container_cluster.node-ks.endpoint

sensitive = true

}

output "name" {

value = google_container_cluster.node-ks.name

sensitive = true

}

output "cluster_ca_certificate" {

value = base64decode(google_container_cluster.node-ks.master_auth.0.cluster_ca_certificate)

}

essh@kubernetes-master:~/node-cluster$ cat main.tf

// module "kubernetes" {

// source = "ESSch / kubernetes / google"

// version = "0.0.2"

//

// project_name = "node-cluster-243923"

// region = "europe-west2"

//}

provider "google" {

credentials = file("./kubernetes_key.json")

project = "node-cluster-243923"

region = "europe-west2"

}

module "Kubernetes" {

source = "./Kubernetes"

project_name = "node-cluster-243923"

region = "europe-west2"

}

module "nodejs" {

source = "./nodejs"

endpoint = module.Kubernetes.endpoint

cluster_ca_certificate = module.Kubernetes.cluster_ca_certificate

}

essh@kubernetes-master:~/node-cluster$ cat nodejs/variable.tf

variable "endpoint" {}

variable "cluster_ca_certificate" {}

To check that traffic is balanced across all nodes, we will start NGINX with its standard page replaced by the hostname: we overwrite the page with a simple command and then start the server. To see how the server is started, look at its invocation in the Dockerfile: CMD ["nginx", "-g", "daemon off;"], which is equivalent to running nginx -g 'daemon off;' on the command line. As you can see, the Dockerfile does not launch the server through BASH but starts it directly; otherwise the shell would survive a crash of the server process and keep the container from exiting and being re-created. But for our experiments BASH is fine:

essh@kubernetes-master:~/node-cluster$ sudo docker run -it nginx:1.17.0 which nginx

/usr/sbin/nginx

sudo docker run -it --rm -p 8333:80 nginx:1.17.0 /bin/bash -c "echo \$HOSTNAME > /usr/share/nginx/html/index2.html && /usr/sbin/nginx -g 'daemon off;'"
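If the container starts successfully, the replaced page should return the container's hostname; assuming port 8333 is free on the host, this can be checked with, for example:

curl http://localhost:8333/index2.html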
