Is docker a partitioned operating system?
Docker is not a partitioned operating system. An operating system is a computer program that manages a computer's hardware and software resources, whereas Docker refers to the Docker container engine: an open-source application container engine that packages an application and its dependencies into a reusable, portable image that can be published to any machine running a popular operating system.
Operating environment for this tutorial: Linux 7.3, Docker version 19.03, Dell G3 computer.
Docker is not a partitioned operating system
Docker refers to the Docker container engine. It is an open-source application container engine that lets developers package their applications and dependencies into a portable image and then publish it to any machine running a popular operating system; it can also be used for virtualization.
An operating system (OS) is a computer program that manages a computer's hardware and software resources. It handles basic tasks such as managing and configuring memory, prioritizing the supply and demand of system resources, controlling input and output devices, operating the network, and managing the file system. The operating system also provides an interface through which users interact with the system.
docker
Docker is an open-source application container engine that lets developers package their applications and dependencies into a portable image and then publish it to any machine running a popular Linux or Windows operating system; it can also be used for virtualization. Containers use a full sandbox mechanism and have no interfaces to one another.
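As an illustration of what "packaging an application and its dependencies into a portable image" can look like in practice, here is a minimal, hypothetical Dockerfile; the Python base image and the file names requirements.txt and app.py are assumptions made for this sketch, not something taken from the article.

# A hypothetical Dockerfile that packages a small Python application and
# its dependencies into one portable image (file names are illustrative).
FROM python:3.11-slim
WORKDIR /app
# Install dependencies first so this layer is reused when only the code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
CMD ["python", "app.py"]

Once built with docker build -t myapp:1.0 . the resulting image can be run unchanged on any machine with a Docker engine, for example with docker run --rm myapp:1.0.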
Docker containers are similar to virtual machines, but they work on different principles: containers virtualize the operating-system layer, while virtual machines virtualize the hardware. Containers are therefore more portable and use servers more efficiently. Containers are increasingly used to represent a standardized unit of software; because they are standardized, they can be deployed anywhere, regardless of differences in the underlying infrastructure. In addition, Docker provides containers with stronger isolation that remains compatible with industry standards.
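One way to see this difference in practice is that a container shares the host's kernel instead of booting its own. A small, hypothetical check (the alpine image is only an example):

# Kernel version reported on the host:
uname -r
# Kernel version reported inside a container:
docker run --rm alpine uname -r
# Both commands print the same kernel release, because the container
# virtualizes the operating-system layer on top of the host kernel,
# whereas a virtual machine would boot a kernel of its own.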
Docker uses the resource-isolation features of the Linux kernel, such as cgroups and kernel namespaces, to create independent containers that run within a single Linux instance, avoiding the overhead of starting a virtual machine [3]. The kernel's namespace support completely isolates an application's view of its working environment, including the process tree, network, user IDs, and mounted file systems, while cgroups provide resource isolation for CPU, memory, block I/O, and the network. Starting with version 0.9, Docker includes the libcontainer library as its own way of directly using the virtualization facilities provided by the Linux kernel, in addition to the abstracted interfaces provided by libvirt, LXC, and systemd-nspawn.
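A hedged sketch of how these mechanisms surface in the docker CLI: the resource flags below map onto cgroup controllers, and the process listing illustrates PID-namespace isolation (the alpine image and the specific limit values are arbitrary examples, not taken from the article).

# --memory caps the container's RAM (memory cgroup),
# --cpus limits it to half a CPU core (cpu cgroup),
# --pids-limit caps the number of processes (pids cgroup).
docker run --rm --memory=256m --cpus=0.5 --pids-limit=100 alpine ps
# ps inside the container lists only the container's own processes,
# because the container runs in its own PID namespace, isolated from the host.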
According to industry analyst firm 451 Research: "Docker is a tool that can package an application and its dependencies in a virtual container that can run on any Linux server. This helps achieve flexibility and portability: the application can run anywhere, whether on a public cloud server, a private cloud server, a standalone machine, and so on."
operating system
The operating system is very important to a computer. From the user's perspective, it schedules the computer system's various resources, including hardware and software devices and data. Using an operating system reduces the amount of manual resource allocation and user intervention required for computing tasks, which greatly improves the computer's working efficiency. In terms of resource management, when multiple users share one computer system, conflicts may arise over shared information; to allocate the computer's resources more reasonably and coordinate the components of the system, the operating system must be used to its full potential, tuning how efficiently and how heavily each resource is used so that every user's needs can be met. Finally, with the assistance of system programs, the operating system abstracts the basic functions provided by the system's resources, presents those functions to users in a visual way, and reduces the difficulty of using the computer.
The operating system mainly provides the following functions (a brief illustrative sketch follows the list):
Process management, whose main job is process scheduling. With a single user running a single task, the processor is dedicated to that one task and process management is very simple. With multiprogramming or multiple users, however, organizing multiple jobs or tasks requires solving the problems of processor scheduling, allocation, and reclamation.
Storage management is divided into several functions: storage allocation, storage sharing, storage protection, and storage expansion.
Device management has the following functions: device allocation, device transmission control, and device independence.
File management: file storage space management, directory management, file operation management, and file protection.
Job management, which is responsible for processing the requests submitted by users.
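As a rough illustration only (these are standard Linux utilities chosen for this sketch, not commands discussed in the article), several of the functions above can be observed directly from a shell:

ps -e       # process management: the processes currently being scheduled
free -h     # storage (memory) management: how memory has been allocated
lsblk       # device management: the block devices the system knows about
df -h       # file management: mounted file systems and their usage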
Recommended learning: "docker video tutorial"