How to solve errors when starting Docker
Solutions to errors when starting Docker: 1. Edit /etc/sysconfig/docker, add the content OPTIONS="--selinux-enabled ...", and set the storage driver in /etc/docker/daemon.json; 2. Restart the firewall and Docker so that the iptables rules Docker needs are recreated; 3. Remove the stale thin pool and run the "docker-storage-setup" command; and so on.
The operating environment of this article: CentOS 7.2 system, Docker version 18.04.0, Dell G3 computer.
How to solve errors when starting Docker?
Summary of error reports when Docker starts up
Eight common Docker faults
Error one: error initializing graphdriver
Docker fails to start. The system is CentOS 7.2; the kernel and Docker versions are as follows:
[root@docker ~]# uname -r
3.10.0-327.el7.x86_64
[root@docker ~]# docker version
Client:
 Version:       18.04.0-ce
 API version:   1.37
 Go version:    go1.9.4
 Git commit:    3d479c0
 Built:         Tue Apr 10 18:21:36 2018
 OS/Arch:       linux/amd64
 Experimental:  false
 Orchestrator:  swarm

Server:
 Engine:
  Version:      18.04.0-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.9.4
  Git commit:   3d479c0
  Built:        Tue Apr 10 18:25:25 2018
  OS/Arch:      linux/amd64
  Experimental: false
The startup error message is as follows:
[root@docker ~]# systemctl start docker
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
[root@docker ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Sun 2018-04-22 20:52:39 CST; 5s ago
     Docs: https://docs.docker.com
  Process: 4810 ExecStart=/usr/bin/dockerd (code=exited, status=1/FAILURE)
 Main PID: 4810 (code=exited, status=1/FAILURE)
Apr 22 20:52:39 docker.cgy.com systemd[1]: Failed to start Docker Application Container Engine.
Apr 22 20:52:39 docker.cgy.com systemd[1]: Unit docker.service entered failed state.
Apr 22 20:52:39 docker.cgy.com systemd[1]: docker.service failed.
Apr 22 20:52:39 docker.cgy.com systemd[1]: docker.service holdoff time over, scheduling restart.
Apr 22 20:52:39 docker.cgy.com systemd[1]: start request repeated too quickly for docker.service
Apr 22 20:52:39 docker.cgy.com systemd[1]: Failed to start Docker Application Container Engine.
Apr 22 20:52:39 docker.cgy.com systemd[1]: Unit docker.service entered failed state.
Apr 22 20:52:39 docker.cgy.com systemd[1]: docker.service failed.
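The message also points at the journal; for example, the most recent daemon log lines can be dumped with (the line count here is just an example):

journalctl -u docker.service --no-pager -n 50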
The specific cause of the failure is still not visible from the systemctl output above, so I started the daemon directly with dockerd and saw the real error at the bottom of its output, as follows:
[root@docker ~]# dockerd
INFO[2018-04-22T21:12:46.111704443+08:00] libcontainerd: started new docker-containerd process  pid=5903
INFO[0000] starting containerd  module=containerd revision=773c489c9c1b21a6d78b5c538cd395416ec50f88 version=v1.0.3
...... part of the output omitted ......
INFO[0000] loading plugin "io.containerd.grpc.v1.introspection"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] serving...  address="/var/run/docker/containerd/docker-containerd-debug.sock" module="containerd/debug"
INFO[0000] serving...  address="/var/run/docker/containerd/docker-containerd.sock" module="containerd/grpc"
INFO[0000] containerd successfully booted in 0.002763s  module=containerd
Error starting daemon: error initializing graphdriver: overlay: the backing xfs filesystem is formatted without d_type support, which leads to incorrect behavior. Reformat the filesystem with ftype=1 to enable d_type support. Backing filesystems without d_type support are not supported.
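You can verify that the backing xfs filesystem really lacks d_type support before changing anything (a quick diagnostic; if /var/lib/docker is not a separate mount point, run xfs_info against the filesystem that contains it):

xfs_info /var/lib/docker | grep ftype    # ftype=0 means the filesystem was formatted without d_type support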
Searching for the final line of the error (Error starting daemon: error initializing graphdriver ...) led to the blog post below, which gives the solution.
https://blog.csdn.net/liu9718214/article/details/79134900
The specific solution is:
vim /etc/sysconfig/docker
Add the following content:
OPTIONS="--selinux-enabled --log-driver=journald --signature-verification=false"
Then add the following content to /etc/docker/daemon.json:
{ "registry-mirrors": ["http://4a1df5ef.m.daocloud.io"], # 是用来pull容器加速用的,跟此次问题无关。 "storage-driver": "devicemapper" # 解决此次问题 }
Restart Docker and confirm the daemon is running:

[root@docker ~]# systemctl restart docker
[root@docker ~]# ps aux | grep docker
root      5922  1.7  1.6 528432 62568 ?   Ssl  21:15   0:00 /usr/bin/dockerd
root      5927  1.1  0.5 356984 22100 ?   Ssl  21:15   0:00 docker-containerd --config /var/run/docker/containerd/containerd.toml
root      6028  0.0  0.0 112664   964 pts/0  S+  21:15   0:00 grep --color=auto docker
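You can also confirm that the daemon picked up the new setting from daemon.json (a quick check; the output line should now report devicemapper):

docker info | grep 'Storage Driver'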
Error two: iptables failed: No chain/target/match by that name

The system version:

[root@controller ~]# cat /etc/redhat-release
CentOS Linux release 7.0.1406 (Core)

The error message is as follows:
[root@controller ~]# docker run -it -P docker.io/nginx
/usr/bin/docker-current: Error response from daemon: driver failed programming external connectivity on endpoint gloomy_kirch (10289e7a87e65771da90cda531951b7339bee9cb5953474460451cd48013aff0): iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 32810 -j DNAT --to-destination 172.17.0.2:80 ! -i docker0: iptables: No chain/target/match by that name.
(exit status 1).
This happened because the container had already been started successfully once before this run. During that earlier run a firewall problem prevented normal access to Nginx, so I cleared the filter table of iptables and restarted iptables; when the container was run again, the error above was reported. Clearing and restarting iptables removes the DOCKER chain that Docker created, so the port-mapping rule can no longer be added.

Solution: restart the firewall and then restart Docker so that the Docker-related chains are recreated:
# Run the following on CentOS 7
[root@controller ~]# systemctl restart firewalld
[root@controller ~]# systemctl restart docker
[root@controller ~]# docker run -it --name nginx -p 80:80 -v /www:/wwwroot docker.io/nginx /bin/bash
root@a8a92c8f7760:/#
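If you want to confirm that the Docker chains really were recreated after the restart, a quick check is to list the DOCKER chain in the nat table:

iptables -t nat -L DOCKER -n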
Error three: Unable to take ownership of thin-pool

The Docker daemon fails to start with "Unable to take ownership of thin-pool". The log messages are as follows:
Apr 27 13:51:59 master systemd: Started Docker Storage Setup.
Apr 27 13:51:59 master systemd: Starting Docker Application Container Engine...
Apr 27 13:51:59 master dockerd-current: time="2018-04-27T13:51:59.088441356+08:00" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
Apr 27 13:51:59 master dockerd-current: time="2018-04-27T13:51:59.091166189+08:00" level=info msg="libcontainerd: new containerd process, pid: 20930"
Apr 27 13:52:00 master dockerd-current: Error starting daemon: error initializing graphdriver: devmapper: Unable to take ownership of thin-pool (docker--vg-docker--pool) that already has used data blocks
Apr 27 13:52:00 master systemd: docker.service: main process exited, code=exited, status=1/FAILURE
Apr 27 13:52:00 master systemd: Failed to start Docker Application Container Engine.
Apr 27 13:52:00 master systemd: Unit docker.service entered failed state.
Apr 27 13:52:00 master systemd: docker.service failed
The error message itself does not make the actual problem, or the solution, very clear. If you are using devicemapper (instead of loopback), /var/lib/docker contains metadata that tells Docker what is in the devicemapper storage area. If you delete /var/lib/docker, that metadata is lost: Docker can still detect that the thin pool holds data, but it can no longer make use of that information. The only solution is to delete the thin pool and recreate it, so that both the thin pool and the metadata in /var/lib/docker are empty again. The steps are listed below.
- Delete the stale Docker metadata:
rm -rf /var/lib/docker/*
- Delete the old storage configuration:
rm -rf /etc/sysconfig/docker-storage
- Delete the existing thin pool:
lvremove /dev/docker-vg/docker-pool
- Tell docker-storage-setup to use the existing docker-vg LVM volume group:
cat <<EOF > /etc/sysconfig/docker-storage-setup
VG=docker-vg
EOF
- Recreate the thin pool:
docker-storage-setup
- Restart docker:
systemctl start docker
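To verify that the rebuild worked (a quick sanity check; the volume-group and pool names here follow the log above, adjust them to your environment):

lvs docker-vg                        # the docker-pool thin LV should be listed again
docker info | grep -i 'pool name'    # once Docker is running, this should point at the new docker-pool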
Error four: invalid argument when writing cpuset.cpus

The following error is reported when docker run starts a container:
[root@backup-system cpu]# docker run -ti --name hkp_ubuntu --cpuset-cpus=0-3 ubuntu bash
docker: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:326: applying cgroup configuration for process caused: failed to write "0-3\n" to "/sys/fs/cgroup/cpuset/docker/cpuset.cpus": write /sys/fs/cgroup/cpuset/docker/cpuset.cpus: invalid argument: unknown.
The specific cause was that, during an earlier experiment, a new container directory had been created under /sys/fs/cgroup/cpuset/, and container/cpuset.cpus had been set to 0-3:

[root@backup-system docker]# cat /sys/fs/cgroup/cpuset/container/cpuset.cpus
0-3

With that sibling cgroup holding CPUs 0-3 (together with cpu_exclusive), Docker could not write "0-3" into its own cpuset.cpus. In general, you need to check and adjust the cpuset.cpus of every cgroup first, to make sure the CPUs used by the current cgroup really are allocated only to it, and only then set cpu_exclusive. After setting /sys/fs/cgroup/cpuset/container/cpuset.cpus to empty, the problem above was solved.
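A minimal sketch of the check-and-clear steps, assuming the stray cgroup is the /sys/fs/cgroup/cpuset/container directory from the example above and that nothing still needs those CPUs:

cat /sys/fs/cgroup/cpuset/container/cpuset.cpus            # the conflicting assignment (0-3 in this case)
cat /sys/fs/cgroup/cpuset/container/cpuset.cpu_exclusive   # 1 means the CPUs are held exclusively
echo "" > /sys/fs/cgroup/cpuset/container/cpuset.cpus      # release the CPUs

After that, the docker run --cpuset-cpus=0-3 command above should start the container normally.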
The above is the detailed content of how to solve errors when starting Docker.
