Suppose you go to a restaurant where a single chef promises, "I can cook for hundreds of people at the same time and none of you will go hungry." Sounds impossible, right? You can think of this single chef as Node JS, which manages all of these orders at once and still serves food to every customer.
Whenever you ask someone "What is Node JS?", you almost always get the same answer: "Node JS is a runtime used to run JavaScript outside of the browser environment."
But what does runtime mean? A runtime environment is the software infrastructure in which code written in a specific programming language is executed. It provides all the tools, libraries, and features needed to run code, handle errors, manage memory, and interact with the underlying operating system or hardware.
Node JS has all of these:
Google V8 Engine to run the code.
Core libraries and APIs such as fs, crypto, http, etc.
Infrastructure like Libuv and the Event Loop to support asynchronous and non-blocking I/O operations.
So now we can see why Node JS is called a runtime.
This runtime is built on two core dependencies: V8 and libuv.
V8 is the JavaScript engine that also powers Google Chrome; it is developed and maintained by Google. In Node JS, V8 executes the JavaScript code: when we run the command node index.js, Node JS passes the code to the V8 engine, which processes it, executes it, and provides the result. For example, if your code logs "Hello, World!" to the console, V8 handles the actual execution that makes this happen.
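For instance, if index.js (the file name is just for illustration) contains nothing but:

// index.js
console.log('Hello, World!');

then running node index.js hands this script to V8, which executes it and prints Hello, World! to the terminal.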
The libuv library contains the C code that gives Node JS access to the operating system for functionality such as networking, file I/O, and time-related operations. It works as a bridge between Node JS and the operating system.
libuv handles the following operations:
File system operations: Reading or writing files (fs.readFile, fs.writeFile).
Networking: Handling HTTP requests, sockets, or connecting to servers.
Timers: Managing functions like setTimeout or setInterval.
Tasks like file reading are handled by the Libuv thread pool, timers by Libuv’s timer system, and network calls by OS-level APIs.
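Here is a minimal sketch showing one operation from each of these categories; the file name and URL are placeholders, not part of the original example:

const fs = require('fs');
const http = require('http');

// File system: handled by the libuv thread pool
fs.readFile('file.txt', 'utf8', (err, data) => {
  if (err) return console.error('read failed:', err.code);
  console.log('file read finished, length:', data.length);
});

// Timers: handled by libuv's timer system
setTimeout(() => {
  console.log('timer fired after ~100ms');
}, 100);

// Networking: delegated to OS-level APIs
http.get('http://example.com', (res) => {
  console.log('HTTP status:', res.statusCode);
  res.resume(); // drain the response so the process can exit cleanly
}).on('error', (err) => console.error('request failed:', err.message));

The file-read case is worth a closer look.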
Look at the following example.
const fs = require('fs');
const path = require('path');

const filePath = path.join(__dirname, 'file.txt');

const readFileWithTiming = (index) => {
  const start = Date.now();
  fs.readFile(filePath, 'utf8', (err, data) => {
    if (err) {
      console.error(`Error reading the file for task ${index}:`, err);
      return;
    }
    const end = Date.now();
    console.log(`Task ${index} completed in ${end - start}ms`);
  });
};

const startOverall = Date.now();

for (let i = 1; i <= 4; i++) {
  readFileWithTiming(i);
}

process.on('exit', () => {
  const endOverall = Date.now();
  console.log(`Total execution time: ${endOverall - startOverall}ms`);
});
We are reading the same file four times and logging how long each read takes.

We get the following output from this code.
Task 1 completed in 50ms
Task 2 completed in 51ms
Task 3 completed in 52ms
Task 4 completed in 53ms
Total execution time: 54ms
We can see that all four file reads completed at almost the same time, around the 50ms mark. If Node JS is single-threaded, how were all of these read operations completed at the same time?
The answer is that the libuv library uses a thread pool. A thread pool is a group of worker threads; by default its size is 4, which means libuv can process 4 such requests at once.
Consider another scenario where, instead of reading the file 4 times, we read it 6 times.
const fs = require('fs');
const path = require('path');

const filePath = path.join(__dirname, 'file.txt');

const readFileWithTiming = (index) => {
  const start = Date.now();
  fs.readFile(filePath, 'utf8', (err, data) => {
    if (err) {
      console.error(`Error reading the file for task ${index}:`, err);
      return;
    }
    const end = Date.now();
    console.log(`Task ${index} completed in ${end - start}ms`);
  });
};

const startOverall = Date.now();

// This time we read the same file 6 times
for (let i = 1; i <= 6; i++) {
  readFileWithTiming(i);
}

process.on('exit', () => {
  const endOverall = Date.now();
  console.log(`Total execution time: ${endOverall - startOverall}ms`);
});
The output will look like:
Task 1 completed in 50ms
Task 2 completed in 51ms
Task 3 completed in 52ms
Task 4 completed in 53ms
Task 5 completed in 101ms
Task 6 completed in 102ms
Total execution time: 103ms
You can see that the first four reads take almost the same time, but the 5th and 6th reads take almost double the time of the first four.

This happens because the thread pool size is 4 by default, so four read operations are handled at the same time. For the 5th and 6th reads, libuv has to wait because all four threads are busy. Suppose read operations 1 and 2 complete and threads 1 and 2 become free: the 5th read operation is then handed to one of those threads, and the same happens for the 6th. That is why the last two reads take more time.
So, Node JS is not single-threaded.
But, why do some people refer to it as single-threaded?
This is because the main event loop is single-threaded. This thread is responsible for executing Node JS code, including handling asynchronous callbacks and coordinating tasks. It does not directly handle blocking operations like file I/O.
The code execution flow looks like this (a short sketch follows this list):
Node.js executes all synchronous (blocking) code line by line using the V8 JavaScript engine.
Asynchronous operations like fs.readFile, setTimeout, or http requests are sent to the Libuv library or other subsystems (e.g., OS).
Tasks like file reading are handled by the Libuv thread pool, timers by Libuv’s timer system, and network calls by OS-level APIs.
Once an async task is complete, its associated callback is sent to the event loop's queue.
The event loop picks up callbacks from the queue and executes them one by one, ensuring non-blocking execution.
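Here is a minimal sketch of that flow. It reads the script's own file (__filename) so it runs anywhere; the relative order of the two async callbacks can vary from run to run:

const fs = require('fs');

console.log('1. synchronous code runs first (executed by V8)');

// Sent to libuv's timer system
setTimeout(() => {
  console.log('timer callback delivered by the event loop');
}, 0);

// Sent to libuv's thread pool
fs.readFile(__filename, 'utf8', () => {
  console.log('file-read callback delivered by the event loop');
});

console.log('2. remaining synchronous code runs before any callback');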
You can change the thread pool size using process.env.UV_THREADPOOL_SIZE = 8.
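There are a couple of common ways to do this (the value 8 is just an example). The size has to be set before the thread pool is first used, so it usually goes at the very top of the entry file or into the shell environment:

// Option 1: at the very top of your entry file, before any fs/crypto/dns work
process.env.UV_THREADPOOL_SIZE = 8;
const fs = require('fs');

// Option 2: from the shell (Linux/macOS), without touching the code:
//   UV_THREADPOOL_SIZE=8 node index.js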
Now you might think, as I did at first: if we set a higher number of threads, we will be able to handle a higher number of requests.

But the opposite is true.
If we increase the number of threads beyond a certain limit, it actually slows down code execution.
Look at the following example.
// Bump the thread pool size (must happen before the pool is first used)
process.env.UV_THREADPOOL_SIZE = 100;

const fs = require('fs');
const path = require('path');

const filePath = path.join(__dirname, 'file.txt');

const readFileWithTiming = (index) => {
  const start = Date.now();
  fs.readFile(filePath, 'utf8', (err, data) => {
    if (err) {
      console.error(`Error reading the file for task ${index}:`, err);
      return;
    }
    const end = Date.now();
    console.log(`Task ${index} completed in ${end - start}ms`);
  });
};

const startOverall = Date.now();

// The same six reads as before, now with a much larger thread pool
for (let i = 1; i <= 6; i++) {
  readFileWithTiming(i);
}

process.on('exit', () => {
  const endOverall = Date.now();
  console.log(`Total execution time: ${endOverall - startOverall}ms`);
});
The output looks like this:

With High Thread Pool Size (100 threads):

Total execution time: 700ms
Now, here is the output when we run the same six reads with the default thread pool size of 4 (the same code, with the UV_THREADPOOL_SIZE line removed).

With Default Thread Pool Size (4 threads):

Total execution time: 600ms
You can see that the total execution times differ by about 100ms: with a thread pool size of 4 it is 600ms, and with a thread pool size of 100 it is 700ms. So the default thread pool size of 4 actually takes less time.
Why doesn't a higher number of threads mean more tasks can be processed concurrently?
The first reason is that each thread has its own stack and resource requirements. If you keep increasing the number of threads, you eventually run out of memory or CPU resources.
The second reason is that the operating system has to schedule threads. If there are too many of them, the OS spends a lot of time switching between them (context switching), which adds overhead and slows performance down instead of improving it.
So scalability and high performance are not achieved by simply increasing the thread pool size. They come from using the right architecture, such as clustering, and from understanding the nature of the task (I/O-bound vs CPU-bound) and how Node.js's event-driven model works.
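As a rough sketch of the clustering idea (a minimal example, not a production setup), the built-in cluster module can fork one worker process per CPU core, each with its own event loop and thread pool:

const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isPrimary) { // use cluster.isMaster on Node versions before 16
  // Fork one worker per CPU core
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
} else {
  // Each worker is a separate Node JS process; they share the same listening port
  http.createServer((req, res) => {
    res.end(`Handled by worker ${process.pid}\n`);
  }).listen(3000);
}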
Thank You for reading.