The DOM is an application programming interface for working with XML and HTML documents, and operating on it from script is expensive. A popular metaphor describes it well: imagine that the DOM and JavaScript (ECMAScript, strictly speaking) are two islands connected by a toll bridge. Every time ECMAScript accesses the DOM, it must cross this bridge and pay a toll. The more often you access the DOM, the higher the cost. The recommended approach, therefore, is to cross the bridge as rarely as possible and stay on the ECMAScript island. Since we cannot avoid the DOM interface entirely, how can we make our programs more efficient?
1. DOM access and modification
Accessing DOM elements is costly (you know the "toll"), and modifying them is even more costly, because changes force the browser to recalculate the page's geometry (reflow and repaint).
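Since every individual style write may trigger such a recalculation, one common mitigation is to batch modifications into a single write. Here is a minimal sketch: the toCssText helper and the #myDiv1 element are assumptions made for illustration, not part of any standard API.

```javascript
// Hypothetical helper: compose the full style string in plain JavaScript,
// then apply it to the element with a single cssText write, instead of
// several separate style assignments (each of which may trigger a reflow).
function toCssText(styles) {
    var parts = [];
    for (var prop in styles) {
        if (styles.hasOwnProperty(prop)) {
            parts.push(prop + ': ' + styles[prop]);
        }
    }
    return parts.join('; ');
}

// Guarded so the sketch is inert outside a browser environment.
if (typeof document !== 'undefined') {
    var el = document.getElementById('myDiv1'); // assumed to exist
    el.style.cssText = toCssText({ width: '100px', color: 'red' }); // one write
}
```

The point is simply that the string is assembled on the ECMAScript side, so only one crossing of the "bridge" occurs.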
The worst case, of course, is accessing or modifying elements inside a loop. Look at the following two pieces of code:
var times = 15000;

// code1
console.time(1);
for (var i = 0; i < times; i++) {
    document.getElementById('myDiv1').innerHTML += 'a';
}
console.timeEnd(1);

// code2
console.time(2);
var str = '';
for (var i = 0; i < times; i++) {
    str += 'a';
}
document.getElementById('myDiv2').innerHTML = str;
console.timeEnd(2);
As a result, the first version took over two thousand times longer than the second! (Chrome 44.0.2403.130 m)
1: 2846.700ms
2: 1.046ms
The problem with the first piece of code is that the element is accessed twice per loop iteration: once to read the value of innerHTML and once to write it back. In other words, every iteration crosses the bridge, and every write triggers a reflow and repaint (these will be explained in the next article). The results clearly show that the more often the DOM is accessed, the slower the code runs. Therefore, reduce the number of DOM accesses as much as possible and leave as much processing as possible to the ECMAScript side.
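The read-once, write-once pattern generalizes beyond this example. A minimal sketch follows (the appendText helper is a name invented here; the element parameter stands in for any DOM node):

```javascript
// One read and one write per call, no matter how large `times` is.
function appendText(el, text, times) {
    var str = el.innerHTML;           // 1 DOM read (one bridge crossing)
    for (var i = 0; i < times; i++) {
        str += text;                  // pure string work on the ECMAScript island
    }
    el.innerHTML = str;               // 1 DOM write (one more crossing)
}

// It works on any object with an innerHTML property, so the logic can be
// tried outside a browser too:
var fake = { innerHTML: 'x' };
appendText(fake, 'a', 3);
console.log(fake.innerHTML); // 'xaaa'
```

In a browser you would pass a real element, e.g. appendText(document.getElementById('myDiv2'), 'a', 15000).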
2. HTML collection & traversing DOM
Another costly aspect of operating on the DOM is traversal. Typically we obtain an HTML collection, for example with getElementsByTagName() or via document.links; I am sure everyone is familiar with these. The result is an array-like collection that is "live": it automatically reflects updates to the underlying document. What does that mean? A simple example:
<body>
    <ul id='fruit'>
        <li> apple </li>
        <li> orange </li>
        <li> banana </li>
    </ul>
</body>
<script type="text/javascript">
    var lis = document.getElementsByTagName('li');
    var peach = document.createElement('li');
    peach.innerHTML = 'peach';
    document.getElementById('fruit').appendChild(peach);
    console.log(lis.length); // 4
</script>
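Liveness has a sharper edge than a surprising length: if a loop over a live collection adds a matching element on each iteration, the collection grows as fast as the loop advances. One defence is to snapshot the collection into a plain array first. A sketch, with toArray being a helper name invented here:

```javascript
// Copy the current members of an array-like collection into a real array.
// The copy is frozen: later DOM mutations no longer affect it.
function toArray(collection) {
    var arr = [];
    for (var i = 0, len = collection.length; i < len; i++) {
        arr.push(collection[i]);
    }
    return arr;
}

// Guarded so the sketch is inert outside a browser environment.
if (typeof document !== 'undefined') {
    var snapshot = toArray(document.getElementsByTagName('li')); // frozen now
    document.getElementById('fruit').appendChild(document.createElement('li'));
    // snapshot.length is unchanged, while the live collection has grown.
}
```

This also previews the copy-to-array strategy discussed below.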
And this is exactly where the inefficiency comes from! The fix is simple: just as when optimizing array loops, cache the length in a variable (reading a collection's length is much slower than reading an ordinary array's length, because the document must be queried every time):
console.time(0);
var lis0 = document.getElementsByTagName('li');
var str0 = '';
for (var i = 0; i < lis0.length; i++) {
    str0 += lis0[i].innerHTML;
}
console.timeEnd(0);

console.time(1);
var lis1 = document.getElementsByTagName('li');
var str1 = '';
for (var i = 0, len = lis1.length; i < len; i++) {
    str1 += lis1[i].innerHTML;
}
console.timeEnd(1);
Let’s see how much performance improvement can be achieved?
0: 0.974ms 1: 0.664ms
When the collection is large (the demo used 1,000 items), the performance improvement is still noticeable.
"High Performance JavaScript" proposes a further optimization: because traversing an array is faster than traversing a collection, copying the collection's elements into an array first makes subsequent property access faster. My tests did not show this pattern clearly, so don't worry about it too much. The test code is as follows (if you have questions, feel free to discuss them with me):
console.time(1);
var lis1 = document.getElementsByTagName('li');
var str1 = '';
for (var i = 0, len = lis1.length; i < len; i++) {
    str1 += lis1[i].innerHTML;
}
console.timeEnd(1);

console.time(2);
var lis2 = document.getElementsByTagName('li');
var a = [];
for (var i = 0, len = lis2.length; i < len; i++) {
    a[i] = lis2[i];
}
var str2 = '';
for (var i = 0, len = a.length; i < len; i++) {
    str2 += a[i].innerHTML;
}
console.timeEnd(2);
To close this section, let's look at two native DOM methods, querySelector() and querySelectorAll(), which I am sure everyone knows. The former returns the first matching element; the latter returns a static NodeList (note that, unlike HTML collections, these return values do not update dynamically). However, querySelectorAll() does not always perform better than traversing an HTML collection:
console.time(1);
var lis1 = document.getElementsByTagName('li');
console.timeEnd(1);

console.time(2);
var lis2 = document.querySelectorAll('li');
console.timeEnd(2);

// 1: 0.038ms
// 2: 3.957ms
But because they use CSS-style selectors, they are more efficient and convenient for combined queries. For example:
var elements = document.querySelectorAll('#menu a');
var elements = document.querySelectorAll('div.warning, div.notice');
That covers high-performance DOM programming in JavaScript. I hope it is clear and proves helpful to your learning.