No matter how concurrent or asynchronous the client is, a TCP connection must be established before an HTTP request can be sent. Once the request is out, the client needs a thread sitting in a listening state to wait for the server's response.
The client sends HTTP requests in a non-blocking way: after each request is sent, a new thread is created to listen for the HTTP response while the original thread goes on to do other work.
For N in-flight requests, this solution consumes 2N threads and N TCP connections.
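As a rough illustration of that cost, here is a minimal Java sketch of the thread-per-response pattern; the class and method names, the target host, and the use of port 80 are placeholders for illustration, not an API taken from the text:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class NaiveAsyncClient {
    // Sends one HTTP request and spawns a dedicated listener thread for the response.
    // For N concurrent requests this costs N calling threads plus N listener threads
    // (2N in total) and N TCP connections.
    public static void sendAsync(String host, String path) throws Exception {
        Socket socket = new Socket(host, 80);                 // one TCP connection per request
        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
        out.print("GET " + path + " HTTP/1.1\r\nHost: " + host + "\r\nConnection: close\r\n\r\n");
        out.flush();                                          // request is now on the wire

        // A brand-new thread blocks on the socket waiting for the server's reply,
        // so the calling thread is free to go do other work.
        new Thread(() -> {
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, "response-listener").start();
    }
}
```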
One optimization: instead of dedicating a thread to monitoring each connection, a single thread can monitor all connections through a selector.
Reference: the Java NIO Selector (see the sketch below).
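A minimal sketch of that idea using the JDK's NIO Selector, assuming plain HTTP over port 80; the class name and the target hosts are illustrative. One thread registers every connection with a single Selector and then drives all of them from one event loop:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;
import java.util.List;

public class SelectorClient {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();                // one selector watches every connection
        List<String> hosts = List.of("example.com", "example.org");
        for (String host : hosts) {
            SocketChannel channel = SocketChannel.open();
            channel.configureBlocking(false);               // selectors require non-blocking channels
            channel.connect(new InetSocketAddress(host, 80));
            channel.register(selector, SelectionKey.OP_CONNECT, host); // keep the host as an attachment
        }

        ByteBuffer buffer = ByteBuffer.allocate(8192);
        int open = hosts.size();
        while (open > 0) {
            selector.select();                              // the single thread blocks here for ALL channels
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                SocketChannel ch = (SocketChannel) key.channel();
                if (key.isConnectable() && ch.finishConnect()) {
                    String host = (String) key.attachment();
                    String req = "GET / HTTP/1.1\r\nHost: " + host + "\r\nConnection: close\r\n\r\n";
                    // The request is tiny, so assume a single write is enough for this sketch.
                    ch.write(ByteBuffer.wrap(req.getBytes(StandardCharsets.US_ASCII)));
                    key.interestOps(SelectionKey.OP_READ);  // switch to waiting for the response
                } else if (key.isReadable()) {
                    buffer.clear();
                    int n = ch.read(buffer);
                    if (n == -1) {                          // server closed the connection: response done
                        key.cancel();
                        ch.close();
                        open--;
                    } else if (n > 0) {
                        System.out.print(new String(buffer.array(), 0, n, StandardCharsets.UTF_8));
                    }
                }
            }
        }
        selector.close();
    }
}
```

With this structure, adding more connections does not add threads: every channel is registered with the same selector, and the one event loop reacts to whichever connection becomes ready.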
Let's think about this at a more fundamental level:
The client can release the connection and its resources as soon as the request is sent, and let the other party call back an interface the client exposes once it has finished processing (sketched below).
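One way to realize that callback style, sketched here with the JDK's built-in HttpServer and HttpClient; the job endpoint, the X-Callback-Url header, and the port numbers are assumptions for illustration, not something defined in the text:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class CallbackStyleClient {
    public static void main(String[] args) throws Exception {
        // 1. Expose a tiny HTTP endpoint the other party can call back once it has finished.
        HttpServer callbackServer = HttpServer.create(new InetSocketAddress(8081), 0);
        callbackServer.createContext("/callback", exchange -> {
            byte[] body = exchange.getRequestBody().readAllBytes();
            System.out.println("Result pushed back to us: " + new String(body, StandardCharsets.UTF_8));
            exchange.sendResponseHeaders(200, -1);          // acknowledge, no response body
            exchange.close();
        });
        callbackServer.start();

        // 2. Submit the job and tell the server where to deliver the result
        //    (X-Callback-Url is a hypothetical convention for this sketch),
        //    then drop the connection immediately: no thread waits for the answer.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://job-server.example/submit"))            // hypothetical job endpoint
                .header("X-Callback-Url", "http://our-host.example:8081/callback")
                .POST(HttpRequest.BodyPublishers.ofString("{\"task\":\"report\"}"))
                .build();
        client.send(request, HttpResponse.BodyHandlers.discarding());           // returns once the job is accepted
        // From here on, no connection or listener thread is held for the result;
        // the server POSTs to /callback whenever its processing completes.
    }
}
```

The trade-off is that the client must be reachable as a server itself, but in exchange neither a thread nor a TCP connection is tied up for the duration of the other party's processing.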