Preventing 'layout thrashing'
When the DOM is written to, layout is ‘invalidated’, and at some point needs to be reflowed. The browser is lazy and wants to wait until the end of the current operation (or frame) to perform this reflow.
However, if we ask the DOM for a geometric value back before the current operation (or frame) is complete, we force the browser to perform layout early. This is known as ‘forced synchronous layout’, and it kills performance!
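To make this concrete, here is a minimal sketch of the anti-pattern (the `boxes` collection and the halve-the-width logic are illustrative, not from any real app). Each iteration writes a style, then immediately reads a geometric value, forcing a reflow on every pass through the loop:

```javascript
// Anti-pattern: interleaved reads and writes. Every offsetWidth
// read after a style write forces a synchronous layout.
function resizeAll(boxes) {
  for (var i = 0; i < boxes.length; i++) {
    var width = boxes[i].parentNode.offsetWidth; // read — forces layout
    boxes[i].style.width = (width / 2) + 'px';   // write — invalidates layout
  }
}
```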
The side effects of layout thrashing are not always obvious on modern desktop browsers, but they have severe consequences on low-powered mobile devices.
In an ideal world we would simply re-order the execution so that we can batch the DOM reads and DOM writes together. This means the document only needs to be reflowed once.
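Re-ordered, the same hypothetical loop from above becomes two passes: all reads first, then all writes. Layout is invalidated once and reflowed once:

```javascript
// Batched: perform every read before any write, so only one
// reflow is needed regardless of how many boxes there are.
function resizeAllBatched(boxes) {
  var widths = [];
  for (var i = 0; i < boxes.length; i++) {
    widths[i] = boxes[i].parentNode.offsetWidth; // reads together
  }
  for (var j = 0; j < boxes.length; j++) {
    boxes[j].style.width = (widths[j] / 2) + 'px'; // writes together
  }
}
```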
What about in the real world?
In reality this isn’t so simple. Large applications have code scattered all over the place, all of which has that dangerous DOM at its fingertips. We can’t easily (and definitely shouldn’t) mash up all our pretty, decoupled code just so we have control over execution order. What can we do to batch our reads and writes together for optimal performance?
window.requestAnimationFrame schedules a function to be executed at the next frame, similar to setTimeout(fn, 0). This is super useful because we can use it to schedule all our DOM writes to run together in the next frame, leaving all DOM reads to run in the current synchronous turn.
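A small sketch of that tweak (the function name and halve-the-width logic are illustrative): the read happens synchronously now, and the write is deferred into the next frame, where the browser batches it with any other deferred writes.

```javascript
// Read in the current turn; defer the write to the next frame.
function halveWidth(box) {
  var width = box.parentNode.offsetWidth;  // read runs synchronously
  requestAnimationFrame(function () {
    box.style.width = (width / 2) + 'px';  // write runs in the next frame
  });
}
```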
This means we can keep our nicely encapsulated code where it is, and with a small code tweak, batch our pricey DOM access together! Win!
I created a working example to prove the concept. In the first screenshot you can see the aggressive layout thrashing in the Chrome Timeline. After the requestAnimationFrame alterations, only one layout event took place, and as a result the operation was ~96% faster!
Can this scale?
In a simple example, using raw requestAnimationFrame does allow us to postpone DOM writes and greatly improve performance, but this technique just doesn’t scale.
In our app we may need to read from the DOM after we have done our write, and then we are back in layout-thrashing territory, just in a different frame.
We could push the read into another requestAnimationFrame, but then we cannot guarantee that another part of the app has not just pushed a write job into the same frame. Essentially it all gets a bit chaotic, and once again you no longer have control over when DOM reads and writes are happening.
FastDom is a small library I wrote to provide a common interface for batching DOM read/write work. It massively speeds up DOM work using the requestAnimationFrame techniques described above.
FastDom harmonises DOM interactions by taking read and write jobs and batching them (reads, then writes) on the next frame. This means you can build app components in isolation, without worrying about how they will affect (or be affected by) other components in the app.
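To illustrate the strategy, here is a toy sketch of that read/write batching — not FastDom’s actual implementation, and the `measure`/`mutate` names are just illustrative labels here (see the FastDom project itself for its real API). Jobs are queued, a single frame is scheduled, and the flush runs every queued read before any queued write:

```javascript
// Toy batching scheduler: queue read and write jobs, then flush
// them (reads first, then writes) on the next frame.
var reads = [];
var writes = [];
var scheduled = false;

function schedule() {
  if (scheduled) return;        // only one frame pending at a time
  scheduled = true;
  requestAnimationFrame(flush);
}

function flush() {
  scheduled = false;
  var r = reads;  reads = [];
  var w = writes; writes = [];
  r.forEach(function (job) { job(); }); // all reads run first
  w.forEach(function (job) { job(); }); // then all writes
}

function measure(job) { reads.push(job);  schedule(); } // queue a read
function mutate(job)  { writes.push(job); schedule(); } // queue a write
```

Because every component funnels its DOM access through the same queues, reads and writes from unrelated parts of the app end up batched together in the same frame.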
Implications of using FastDom
By using FastDom you are making all DOM tasks asynchronous, which means you can’t always make assumptions about what state the DOM will be in. Work that was previously synchronous may not yet have completed now that it is asynchronous.
To work around this I like to use an event system to be more explicit about when work has finished, responding only when I know the DOM work I depend on is complete.
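A hedged sketch of that pattern, using a hypothetical EventEmitter-style object and taking the write-scheduling function as a parameter (both are illustrative, not FastDom’s API): the event fires inside the deferred write job, so listeners only run once the DOM work is actually done.

```javascript
// Emit an event when the async DOM write has completed, so
// dependent code can respond at the right moment.
// `emitter` and `mutate` are hypothetical collaborators.
function updateHeight(emitter, element, height, mutate) {
  mutate(function () {
    element.style.height = height + 'px';   // the deferred DOM write
    emitter.emit('heightchanged', height);  // signal completion to listeners
  });
}
```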
We are also increasing the amount of code we have to write to get the same amount of work done. Personally, I think this is a small price to pay for a dramatic increase in performance.
Web apps are lacking a clear way of solving the problem of layout thrashing. As an app grows it gets harder to coordinate all the different parts to ensure a fast end product. If FastDom can help provide a simple interface for developers to solve this problem, it can only mean good things.
Have a look at the FastDom project and feel free to contribute by submitting pull requests or filing issues.