This article will introduce you to asynchronous iterators in Node.js. I hope it will be helpful to everyone.
Asynchronous iterators have been available in Node since version 10.0.0. In this article we will discuss what asynchronous iterators do and where they can be used.
Asynchronous iterators are essentially the asynchronous version of ordinary iterators. They can be used when we don't know the values, or the final state, of what we are iterating over ahead of time. The difference between the two is that instead of receiving a plain { value: any, done: boolean } object, we receive a promise that eventually resolves to such an object. An asynchronous iterator is consumed with a for-await-of loop, just as a for-of loop is used to consume a synchronous iterator.
const asyncIterable = [1, 2, 3];
asyncIterable[Symbol.asyncIterator] = async function*() {
  for (let i = 0; i < asyncIterable.length; i++) {
    yield { value: asyncIterable[i], done: false };
  }
  yield { done: true };
};

(async function() {
  for await (const part of asyncIterable) {
    console.log(part);
  }
})();
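When run, the loop simply logs each object the generator yields: { value: 1, done: false }, { value: 2, done: false }, { value: 3, done: false } and finally { done: true }.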
Unlike a regular for-of loop, a for-await-of loop waits for each promise it receives to resolve before moving on to the next one.
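To make this concrete, here is a minimal sketch (not from the original article) of a for-await-of loop consuming an array of promises; delayed is just an illustrative helper:

const delayed = (value, ms) =>
  new Promise(resolve => setTimeout(() => resolve(value), ms));

(async function() {
  // All three timers start immediately, but the loop still awaits each
  // promise in array order before logging, so 'a', 'b', 'c' print in order.
  for await (const value of [delayed('a', 300), delayed('b', 200), delayed('c', 100)]) {
    console.log(value);
  }
})();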
Apart from streams, there are currently no built-in structures that support asynchronous iteration, but the asyncIterator symbol can be added manually to any iterable structure.
Asynchronous iterators are very useful when processing streams. Readable, Writable, Duplex, and Transform streams all have the asyncIterator symbol.
const fs = require('fs');

async function printFileToConsole(path) {
  try {
    const readStream = fs.createReadStream(path, { encoding: 'utf-8' });

    for await (const chunk of readStream) {
      console.log(chunk);
    }

    console.log('EOF');
  } catch(error) {
    console.log(error);
  }
}
If you write the code this way, you don't need to listen for the data and end events: for-await-of hands you each chunk of data as it arrives, and the loop ends when the stream ends.
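For comparison, here is a rough sketch of the same logic written with event listeners instead of for-await-of (printFileToConsoleWithEvents is just an illustrative name):

const fs = require('fs');

function printFileToConsoleWithEvents(path) {
  const readStream = fs.createReadStream(path, { encoding: 'utf-8' });

  // The classic approach: subscribe to the stream's events by hand
  readStream.on('data', chunk => console.log(chunk));
  readStream.on('end', () => console.log('EOF'));
  readStream.on('error', error => console.log(error));
}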
You can also easily fetch data from paginated sources using asynchronous iteration. To do this, you also need a way to reconstruct the response body from the stream provided by Node's https request method. Asynchronous iterators can be used here too, because https requests and responses are both streams in Node:
const https = require('https');

function homebrewFetch(url) {
  return new Promise((resolve, reject) => {
    const req = https.get(url, async function(res) {
      if (res.statusCode >= 400) {
        return reject(new Error(`HTTP Status: ${res.statusCode}`));
      }

      try {
        let body = '';
        /* Instead of listening to the stream's data events with res.on,
           use for-await-of and append each chunk to the rest of the body */
        for await (const chunk of res) {
          body += chunk;
        }

        // Handle the case where the response has no body
        if (!body) return resolve({});

        // The body is a string, so it needs to be parsed to get the JSON
        const result = JSON.parse(body);
        resolve(result);
      } catch(error) {
        reject(error);
      }
    });

    req.end();
  });
}
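For example, calling homebrewFetch(`https://api.thecatapi.com/v1/images/search?limit=1&page=0&order=DESC`).then(console.log) should log the parsed JSON body, assuming the endpoint responds with JSON.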
The following code fetches some cat pictures by making requests to the Cat API (https://thecatapi.com/). A 7-second delay between requests has also been added to avoid hitting the cat API too frequently, as that would be extremely unethical.
function fetchCatPics({ limit, page, done }) {
  return homebrewFetch(`https://api.thecatapi.com/v1/images/search?limit=${limit}&page=${page}&order=DESC`)
    .then(body => ({ value: body, done }));
}

function catPics({ limit }) {
  return {
    [Symbol.asyncIterator]: async function*() {
      let currentPage = 0;
      // Stop after 5 pages
      while (currentPage < 5) {
        try {
          const cats = await fetchCatPics({ page: currentPage, limit, done: false });
          console.log(`Fetched ${limit} cats`);
          yield cats;
          currentPage++;
        } catch(error) {
          console.log('There has been an error fetching all the cats!');
          console.log(error);
        }
      }
    }
  };
}

(async function() {
  try {
    for await (let catPicPage of catPics({ limit: 10 })) {
      console.log(catPicPage);
      // Wait 7 seconds between requests
      await new Promise(resolve => setTimeout(resolve, 7000));
    }
  } catch(error) {
    console.log(error);
  }
})();
In this way, we will automatically retrieve a full page of cat pictures every 7 seconds.
A more common way to navigate between pages is to implement next and previous methods and expose them as controls:
function actualCatPics({ limit }) {
  return {
    [Symbol.asyncIterator]: () => {
      let page = 0;

      return {
        next: function() {
          page++;
          return fetchCatPics({ page, limit, done: false });
        },
        previous: function() {
          if (page > 0) {
            page--;
            return fetchCatPics({ page, limit, done: false });
          }
          return fetchCatPics({ page: 0, limit, done: true });
        }
      };
    }
  };
}

try {
  const someCatPics = actualCatPics({ limit: 5 });
  const { next, previous } = someCatPics[Symbol.asyncIterator]();

  next().then(console.log);
  next().then(console.log);
  previous().then(console.log);
} catch(error) {
  console.log(error);
}
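Note that because next and previous are called directly here, page navigation is driven manually by the caller rather than by a for-await-of loop, which is what makes this pattern suitable for wiring up to UI controls.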
As you can see, asynchronous iterators can be very useful when you need to fetch pages of data or implement something like infinite scrolling in your application's UI.
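If you wanted to drive this from a page's UI, a very rough sketch might look like the following (nearBottom and renderCats are hypothetical helpers, not part of this article's code):

const { next } = actualCatPics({ limit: 10 })[Symbol.asyncIterator]();

window.addEventListener('scroll', async () => {
  if (nearBottom()) {            // hypothetical: true when the user nears the page bottom
    const { value: cats } = await next();
    renderCats(cats);            // hypothetical: appends the new cat pictures to the page
  }
});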
These features are available in Chrome 63, Firefox 57, and Safari 11.1.
Can you think of other places where asynchronous iterators could be used? Feel free to leave a comment below!