
Eugeny Shtoltc

IT Cloud

Prologue

More than 70 tools (76 in total) are covered in practice in this book:

* Google Cloud Platform, Amazon Web Services, Microsoft Azure;

* console utilities: cat, sed, NPM, node, exit, curl, kill, Dockerd, ps, sudo, grep, git, cd, mkdir, rm, rmdir, mongos, Python, df, eval, ip, mongo, netstat, oc, pgrep, ping, pip, pstree, systemctl, top, uname, VirtualBox, which, sleep, wget, tar, unzip, ls, virsh, egrep, cp, mv, chmod, ifconfig, kvm, minishift;

* standard tools: NGINX, MinIO, HAProxy, Docker, Consul, Vagrant, Ansible, kvm;

* DevOps tools: Jenkins, GitLab CI, BASH, PHP, Micro Kubernetes, kubectl, Velero, Helm, "http load testing";

* cloud tools: Traefik, Kubernetes, Envoy, Istio, OpenShift, OKD, Rancher;

* several programming languages: PHP, NodeJS, Python, Golang.

Containerization

Infrastructure development history

Limoncelli (author of "The Practice of Cloud System Administration"), who worked for a long time at Google Inc., believes that 2010 marks the transition from the era of the traditional Internet to the era of cloud computing, and distinguishes the following periods:

* 1985-1994 – the time of mainframes (large computers) and intra-corporate data exchange, where the load could be planned easily;

* 1995-2000 – the era of the emergence of Internet companies;

* 2000-2003 – the formation of an ecosystem of distributed computing on cheap mass-produced hardware;

* 2003-2010 – the unification and virtualization of data center hardware;

* 2010-2019 – the era of cloud computing, ushered in by Amazon.

For an individual machine, an increase in performance costs disproportionately more: doubling performance, for example, raises the cost by considerably more than a factor of two, and each further increase in performance is even more expensive. Consequently, every new user became more expensive to serve.

Later, in the period 2000-2003, an ecosystem formed that enabled a fundamentally different approach:

* the emergence of distributed computing;

* the emergence of low-power mass equipment;

* maturation of open-source solutions, allowing software to be installed on many machines without being tied to a per-processor license;

* maturation of telecommunication infrastructure;

* increasing reliability due to the distribution of points of failure;

* the ability to increase performance if needed in the future by adding new components.

The next stage was unification, which was most pronounced in 2003-2010:

* data centers providing not just a place in a rack with power (colocation), but unified hardware purchased in bulk for the entire data center;

* saving on resources;

* virtualization of the network and computers.

Amazon set another milestone in 2010 and ushered in the era of cloud computing. This stage is characterized by the construction of large-scale data centers with a deliberate surplus of capacity: wholesale purchasing lowers the cost of computing power, the company saves on its own needs and profitably sells the surplus at retail. This approach is applied not only to computing power and infrastructure, but also to software, which is packaged as services to reduce the cost of its use and is sold at retail both to large companies and to beginners.

The need for uniformity of the environment

Usually, novice Linux developers prefer to work under Windows, so as not to learn an unfamiliar OS and get bumps and bruises on it, because things there used to be far less simple and polished. Often developers are forced to work under Windows because of corporate preferences: 1C, Directum and other systems run only on Windows, and the rest of the environment, most importantly the network infrastructure, is tailored to this operating system. Working under Windows leads to a large loss of working time, for both developers and DevOps engineers, on fixing both minor and major differences between the operating systems. These differences show up starting from the simplest tasks: for example, what could be simpler than making a page in plain HTML? But an incorrectly configured editor will insert a BOM and the line endings accepted on Windows ("\r\n" instead of "\n"). When the header, body and footer of the page are glued together, the BOM creates gaps between them; they are not visible in the editor, since these are bytes of meta information about the file type, which on Linux have no such meaning and are treated as ordinary content. Different line endings in Git prevent you from seeing the changes you actually made, because every line appears to differ.
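
As a minimal sketch (the file name page.html is hypothetical), such problems can be detected and fixed from the Linux console:

file page.html                            # reports, for example, "UTF-8 Unicode (with BOM) text, with CRLF line terminators"
sed -i '1s/^\xEF\xBB\xBF//' page.html     # strip the BOM from the first line
sed -i 's/\r$//' page.html                # convert Windows line endings (CRLF) to Unix ones (LF)
git config --global core.autocrlf input   # ask Git to normalize line endings on commit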

Now let's take a front-end developer. At first glance, what could be difficult here, since JS (JavaScript), HTML and CSS are interpreted natively by the browser? Previously, every distinct page was laid out separately – it was checked by the designer and the customer and handed to the PHP developer for integration with the framework or CMS. In order not to edit the header on every page and then spend a long time figuring out when the copies started to diverge and which one is more correct, HAML was used. HAML adds extra syntax on top of HTML to avoid repetition: loops, file includes, in our case a single header and footer. But it requires a special program to transform it into plain HTML. On MS Windows this is solved by installing the compiler and hooking it into the IDE, since all these capabilities are available in the WebStorm IDE. With CSS, its size, duplicates, dependencies and support for different browsers, things are much more complicated – LESS used to be employed, and now it has been superseded by the more functional SASS and by libraries for supporting different browsers, which requires the Ruby compiler, and such a bundle usually does not work on the first try. For JS, CoffeeScript was used. All of this needs to be run through compression and validation programs (HTML validation is usually not needed).
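
A rough sketch of such a console build, assuming the Ruby and NodeJS toolchains are installed (the file names here are hypothetical):

gem install haml sass           # Ruby-based compilers for HAML and SASS
haml index.haml index.html      # transform HAML into plain HTML
sass styles.scss styles.css     # compile SASS into CSS
npm install -g coffeescript     # the CoffeeScript compiler for NodeJS
coffee -c app.coffee            # produces app.js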

When the project starts to grow and ceases to be separate pages with "JS inserts" and becomes a SPA (Single Page Application), where everything is created by JS and can no longer be built without bundlers (Gulp, Grunt), package managers and NodeJS, the difficulties keep mounting. All these programs are free and were originally developed for Linux; they are designed to work from the BASH console, do not always behave well under Windows, and are difficult to automate from graphical interfaces despite the efforts of IDE developers. Therefore, many web developers have switched from MS Windows to macOS, a UNIX-derived system with BASH natively built in.
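
A minimal sketch of such a console-driven build, assuming NodeJS and NPM are already installed (the "build" task is hypothetical and must be defined in gulpfile.js):

npm init -y                   # create a default package.json
npm install --save-dev gulp   # add Gulp as a development dependency of the project
npx gulp build                # run the "build" task described in gulpfile.js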

Docker as lightweight virtual machines

Initially, the problem of isolating environments and projects was solved by virtualization – system software that emulates an environment at a certain level, which can be hardware (a computer as a set of components such as a processor, RAM, a network device and others, if necessary) or, less often, an operating system. The system administrator chooses the amount of RAM (no more than is free), processors and network devices, installs the operating system and, if necessary, drivers, and installs the required programs. If a workplace is needed for a second developer, he does the same, looking into the /bin directory of the first machine and installing the missing programs. And here the first quiet problem arises, which has not yet manifested itself: the programs end up installed in different versions. This will become a headache for the developers if something works for one developer and not for the other, or a headache for the sysadmin if something works on the developer's machine but not in production.
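
A hedged sketch of how such a workplace could be created by hand with the VirtualBox console utility (the VM name, memory size and disk size are hypothetical):

VBoxManage createvm --name dev1 --ostype Ubuntu_64 --register    # register a new virtual machine
VBoxManage modifyvm dev1 --memory 2048 --cpus 2 --nic1 nat       # choose the RAM, processors and network device
VBoxManage createhd --filename dev1.vdi --size 20000             # create a 20 GB virtual disk
VBoxManage storagectl dev1 --name SATA --add sata                # add a SATA controller
VBoxManage storageattach dev1 --storagectl SATA --port 0 --device 0 --type hdd --medium dev1.vdi   # attach the disk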

With the increase in the number of jobs, the following problems are added:

* Less than 30% of the performance of the host system is available to you, because every instruction the processor must execute passes through the virtualization program. To increase performance, the VT-x processor mode allows the processor to execute instructions from the virtual environment directly, and in cases of incompatibility it throws an exception. True, such traps are hundreds of times more expensive than ordinary instructions, so mature virtualization systems (VirtualBox, VMware and others) try to filter out and modify potentially incompatible instructions, which can significantly increase performance.
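
As a small sketch, whether the processor supports hardware virtualization can be checked from the Linux console:

grep -cE 'vmx|svm' /proc/cpuinfo   # a non-zero count means Intel VT-x (vmx) or AMD-V (svm) is available
lscpu | grep Virtualization        # prints the virtualization technology reported by the CPU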
