Difference: 1. Traditional virtualization is slow to create instances, while container virtualization creates them very quickly; 2. Traditional virtualization adds extra layers to the system call chain, causing performance loss, while container virtualization shares the host kernel and has almost no performance loss; 3. Traditional virtualization supports multiple operating systems, while container virtualization only supports operating systems that the host kernel supports.
The operating environment of this tutorial: Linux 7.3, Docker 1.13.1, Dell G3 computer.
Traditional virtualization technology
Virtualization refers to using virtualization technology to divide one physical computer into multiple logical computers. Multiple logical computers can run simultaneously on one machine, each logical computer can run a different operating system, and applications run in independent spaces without affecting each other, significantly improving the computer's work efficiency.
As hardware manufacturers have advanced, many instructions in a virtual machine no longer need to pass through a virtual hardware layer to reach the real hardware: manufacturers now provide instructions that let a virtual machine operate the hardware directly. This technique is called hardware-assisted virtualization. Compared with software virtualization, which emulates the hardware layer, hardware-assisted virtualization does not need to simulate all of the hardware; some instructions in the virtual machine run directly on the hardware, so performance and efficiency are higher than with traditional software virtualization.
System-level virtualization
Features:
No need to simulate the hardware layer.
Shares the kernel of the same host.
The difference between traditional virtualization and container virtualization
Container’s core technology
1. CGroup: limits the resource usage of containers.
2. Namespace: achieves isolation between containers.
3. chroot: file system isolation.
CGroup:
The Linux kernel provides limiting, accounting, and isolation of the resources used by groups of processes. CGroups were originally proposed by Google engineers and later merged into the kernel; the control and accounting of each resource type is implemented through a separate subsystem. The subsystems are exposed under:
/sys/fs/cgroup
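As a minimal illustration of the accounting side, the sketch below parses /proc/self/cgroup, the kernel's per-process record of cgroup membership (each line has the form "hierarchy-id:controllers:path"). The function name and defaults are my own choices for this example, not part of Docker or the kernel API:

```python
import os


def current_cgroups(path="/proc/self/cgroup"):
    """Map each cgroup controller list to the cgroup path the process is in."""
    groups = {}
    with open(path) as f:
        for line in f:
            # Split on the first two ":" only; the path itself may contain ":".
            _hier_id, controllers, cgroup_path = line.rstrip("\n").split(":", 2)
            groups[controllers] = cgroup_path
    return groups


# On Linux, print this process's own membership; inside a Docker container the
# paths typically look like "/docker/<container-id>".
if os.path.exists("/proc/self/cgroup"):
    for controllers, cgroup_path in current_cgroups().items():
        print(f"{controllers or '(cgroup v2)'} -> {cgroup_path}")
```

On cgroup v1 each controller (memory, cpu, blkio, ...) appears on its own line; on cgroup v2 there is a single line with an empty controller field.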
Namespace:
pid: the container has its own independent process table and its own process No. 1 (PID 1).
net: the container has its own independent network information.
ipc: IPC communication carries extra namespace information to identify the process.
mnt: each container has its own unique set of directory mounts.
uts: each container has an independent hostname and domain name.
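One concrete way to see these namespaces is /proc/&lt;pid&gt;/ns, where each entry is a symlink whose target (e.g. "pid:[4026531836]") identifies the namespace instance; two processes share a namespace exactly when the targets match. A hedged, Linux-only sketch (the helper name and the proc_root parameter are mine, for testability, not a Docker API):

```python
import os


def namespaces(pid="self", proc_root="/proc"):
    """Return {namespace-name: symlink-target} for a process,
    e.g. {"net": "net:[4026531992]", "pid": "pid:[4026531836]", ...}."""
    ns_dir = os.path.join(proc_root, str(pid), "ns")
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}


# A containerized process and the host's PID 1 differ in (at least) the
# pid, net, ipc, mnt, and uts entries.
if os.path.isdir("/proc/self/ns"):
    print(namespaces())
```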
chroot:
A chosen directory on the host becomes the root directory (/) inside the container.
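The mapping can be sketched as pure path arithmetic: an absolute path seen inside the container resolves under the host directory chosen as the new root. The real mechanism is the chroot(2) system call (os.chroot in Python), which requires root privileges; the helper below is only an illustration of the idea:

```python
import os.path


def host_path(new_root, container_path):
    """Where a path seen inside a chroot actually lives on the host.
    Illustrative only: real chroot is enforced by the kernel, and a hardened
    version would also have to guard against ".." escaping new_root."""
    # Strip the leading "/" so os.path.join keeps new_root as the prefix.
    return os.path.normpath(os.path.join(new_root, container_path.lstrip("/")))
```

For example, host_path("/var/lib/docker/rootfs", "/etc/passwd") resolves to "/var/lib/docker/rootfs/etc/passwd" (the rootfs path here is hypothetical).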
Every application has its own dependencies, which include both software and hardware resources. Docker is an open platform for developers that isolates dependencies by packaging each application into a container. Containers behave like lightweight virtual machines that can scale to thousands of nodes, and they improve cloud portability by running the same application unchanged in different virtual environments. Virtual machines are widely used in cloud computing to achieve isolation and resource control: a virtual machine loads a complete operating system with its own memory management, which keeps applications secure and highly available.
What is the difference between Docker containers and virtual machines?
A virtual machine runs a complete operating system, and its own memory management is supported through the associated virtual devices. In a virtual machine, host resources are allocated between the guest operating systems and the hypervisor, allowing multiple instances of one or more operating systems to run in parallel on a single computer (or host). Each guest operating system runs as a separate entity within the host system.
Docker containers, on the other hand, are executed by the Docker engine rather than by a hypervisor. Containers are therefore smaller than virtual machines and, because they share the host kernel, start faster and perform better, with weaker isolation but better compatibility. Since Docker containers share a kernel and can share application libraries, they have lower system overhead than virtual machines. As long as users are willing to use a single platform with a shared operating system, containers are faster and use fewer resources: a virtual machine can take minutes to create and start, whereas a container takes only seconds. Applications running in containers also offer superior performance compared to the same applications running in virtual machines.
One key area where Docker containers are weaker than virtual machines is isolation. Intel's VT-d and VT-x technologies provide ring -1 hardware isolation for virtual machines, which virtual machines can take full advantage of: it helps them use resources efficiently and prevents them from interfering with each other. Docker containers have no hardware isolation of any kind, making them more vulnerable to attack.
How to choose?
Choosing containers or virtual machines depends on how the application is designed. If the application is designed for scalability and high availability, containers are the better choice; otherwise the application can be placed in a virtual machine. For workloads with high I/O requirements, such as database services, it is recommended to deploy Docker on physical machines, because when Docker runs inside a virtual machine, I/O performance is limited by the virtual machine. For workloads that emphasize tenant permissions and security, such as virtual desktop services, virtual machines are recommended: their strong multi-tenant isolation ensures that even if a tenant has root permissions inside its virtual machine, other tenants and the host remain safe.
A better option may be a hybrid solution, with containers running inside virtual machines. Docker containers can run inside virtual machines, which provide them with proven isolation, security properties, mobility, dynamic virtual networking, and more. To achieve both secure isolation and high resource utilization, the basic idea is to use virtual machines to isolate the workloads of different tenants, and to deploy workloads of a similar type together on the same set of containers.
Conclusion
Docker containers are becoming an important tool in DevOps environments, with many use cases in the DevOps world. Running applications in Docker containers and deploying them anywhere (in the cloud, on-premises, or on any flavor of Linux) is now a reality.
In heterogeneous environments, virtual machines provide a high degree of flexibility, while Docker containers focus mainly on applications and their dependencies. Docker containers make it easy to port application stacks across clouds by treating each cloud's virtual machine environment as a uniform substrate. This is a useful capability that, without Docker containers, would have to be implemented in a far more complex and tedious way. The point here is not to give up virtual machines, but to use Docker containers alongside virtual machines where the situation calls for it; Docker containers are unlikely to eliminate virtual machines entirely.
The above is the detailed content of What is the difference between docker containers and traditional virtualization?. For more information, please follow other related articles on the PHP Chinese website!