If Docker is running a Python process, a single process can only keep one core busy (limited by the GIL). In fact, if you look at the process list on the host, you'll see that unlike Vagrant, a Docker container's processes show up directly among the host's own processes.
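A quick way to see that cap in action (a sketch; the workload size N is arbitrary): on CPython, two CPU-bound threads take about as long as doing the work twice serially, because the GIL lets only one thread execute Python bytecode at a time.

```python
import threading
import time

def count_down(n):
    # Pure-Python CPU-bound work; the thread holds the GIL while it runs.
    while n > 0:
        n -= 1

N = 5_000_000  # arbitrary workload size

# Do the work twice, serially.
start = time.perf_counter()
count_down(N)
count_down(N)
serial = time.perf_counter() - start

# Do the same work in two threads.
start = time.perf_counter()
threads = [threading.Thread(target=count_down, args=(N,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start

# On CPython, `threaded` is roughly equal to `serial` (not half of it),
# and `top` shows the process near 100% of one core either way.
print(f"serial: {serial:.2f}s  threaded: {threaded:.2f}s")
```

Watching this in `top` while it runs shows the process stuck around 100% no matter how many cores the machine has.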
Oh, and if you restrict the container with cpuset, the process gets pinned to that core.
Also, if you watch it in top, you'll see the process at 100%, but that 100% is relative to a single core; a process running across 3 cores would show 300%.
Even if you allocate the same cpu share to each container at startup, when the other two containers are idle, the remaining container can still fill the entire core.
cpu shares feel more like a lower bound (a guaranteed share under contention) on the container's CPU usage. If you want to cap the upper limit, you have to adjust the container's cgroup settings.
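For the record, Docker exposes both knobs on the command line, so you don't have to edit the cgroup files by hand (a sketch; the image and values are placeholders): --cpu-shares sets the relative weight (the floor under contention), while --cpu-quota/--cpu-period set a hard ceiling via the CFS quota.

```shell
# Relative weight: under contention this container gets ~1/3 as much CPU
# as a default-weight (1024) container, but it can still use a whole
# idle core when nothing else is running.
docker run --cpu-shares=512 busybox sh -c 'while :; do :; done'

# Hard cap via the CFS quota: at most 50% of one core
# (50000us of CPU time per 100000us period).
docker run --cpu-quota=50000 --cpu-period=100000 busybox sh -c 'while :; do :; done'

# Newer shorthand for the same cap:
docker run --cpus=0.5 busybox sh -c 'while :; do :; done'
```

With the quota set, top inside the busy loop tops out around 50% instead of 100%.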
Watch your process and see how much CPU it can occupy.
This is an interesting question; it's worth trying as an experiment.
Who wants to run the experiment and write a busy loop to saturate the processor?