Improving slow mounts in React apps

Aggelos Arvanitakis
Published in ITNEXT
5 min read · May 2, 2021


There are a million articles out there focusing on how you can make your app faster by removing unnecessary re-renders & preventing unneeded component updates, but none of them talks about the one necessary render: the initial mount.

Intro

This is because there’s a theory that not a lot can be done there. When you have an empty screen and you want to render a component, you have to pay the price of its mount. This includes the work that React needs to do to create the virtual DOM, as well as the actual rendering of the HTML. Most of the time, this cost is negligible and won’t even be perceived by the human eye. There are times, though, where you have to render a list of components that are composed of a ton of other subcomponents, which forces React to do a lot of work & calculations before you can see content on your screen. Do you know what I’m talking about? It’s this set of items that gets rendered when you navigate to a particular page or tab and you feel as if your navigation wasn’t 100% responsive; as if there was a lag between you clicking on the link and React showing you what you asked for.

When we developers put the phrase “list of components” next to “performance issues”, our minds are trained to think “windowing & virtualization”. While a virtual list could be the answer on some occasions, there are cases where a list of items is not big enough to warrant the use of such a technique. In those cases, the internal calculations of the virtualization library cost way more than simply rendering the entire list of items at once. Think of a list of 20 Facebook/Twitter cards, with each card composed of 100 subcomponents and a screen that can show 5–6 of those cards at once. Most of the time it’s cheaper to just render all 20 at once than to do the scroll-based calculations needed to render only the right cards for the current scroll position.

Good practice would also have you verify that you only render what’s visible on the screen, while leaving anything that’s not yet visible (e.g. modals, popups, tooltips) for later, thus limiting the depth of the component tree that React has to maintain behind the scenes. This is a great technique and will definitely yield results, but what if there’s nothing more to be done in this area? What if the actual delay comes from things such as component composition, React context reading, client-cache entry identification & denormalization, expensive calculations, etc.?

Approach

If you’re nodding in agreement behind your screen, then what you need is to defer your render. What is that, I hear you ask? Well, it’s the ability to declaratively delay the rendering of certain components in an effort to split a single blocking React workload into smaller chunks. It’s the option to say to React “it’s ok to render that component a few milliseconds later, I don’t mind”. What this effectively does is split the 200ms that React would need to render a list of 20 expensive components into chunks of 50ms that render 5 components at a time.

That does not mean we parallelize rendering, since JS is still single-threaded. What we do is queue those render chunks one behind the other, while leaving room for user input in between. Much like with Webpack code splitting, the ideal chunk size differs depending on the number & complexity of the components. In the example above, it might be better to have 10 render chunks of 2 components than 2 chunks of 10 components. You need to test it yourself to find the chunk-size sweet spot for your particular set of components.

So, how do we achieve that? Well, actually, it’s super simple. We take a list of items, split it into chunks, render the 1st chunk, yield to the user, render the 2nd chunk, yield to the user, and so on. What we need is to alternate between pausing and resuming the rendering procedure. In JS land, this translates to scheduling tasks on the main thread: we render some items, queue a task to render more, wait until those are rendered, queue another task to render more, and so forth. All of that can be implemented as a React utility component like so:

A utility component to defer the rendering of its children
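Here’s a rough sketch of what such a Defer component might look like, following the behaviour described below (render the first chunk immediately, then reveal the rest chunk by chunk via requestIdleCallback with a 200ms timeout). The scheduleIdleWork wrapper and its setTimeout fallback are my own convenience for browsers that don’t support requestIdleCallback:

```jsx
import React, { useState, useEffect, Children } from 'react';

// requestIdleCallback isn't available in every browser (e.g. Safari),
// so fall back to a plain setTimeout when it's missing
const scheduleIdleWork = (callback) => {
  if (typeof window.requestIdleCallback === 'function') {
    // run when the main thread is idle, or after 200ms at the latest
    window.requestIdleCallback(callback, { timeout: 200 });
  } else {
    setTimeout(callback, 200);
  }
};

const Defer = ({ chunkSize, children }) => {
  // render the first chunk straight away on mount
  const [renderedItemsCount, setRenderedItemsCount] = useState(chunkSize);

  const childrenArray = Children.toArray(children);

  useEffect(() => {
    // whenever a chunk finishes rendering, schedule the next one
    if (renderedItemsCount < childrenArray.length) {
      scheduleIdleWork(() =>
        setRenderedItemsCount((count) =>
          Math.min(count + chunkSize, childrenArray.length)
        )
      );
    }
  }, [renderedItemsCount, childrenArray.length, chunkSize]);

  // only render the children belonging to the chunks revealed so far
  return childrenArray.slice(0, renderedItemsCount);
};

export default Defer;
```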

Which could then be used like so:

Using Defer to render 5 Cards at a time
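Something along these lines, assuming a hypothetical cards array and Card component, rendered 5 at a time:

```jsx
import Defer from './Defer';
import Card from './Card'; // hypothetical expensive card component

const CardList = ({ cards }) => (
  <Defer chunkSize={5}>
    {cards.map((card) => (
      <Card key={card.id} {...card} />
    ))}
  </Defer>
);
```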

What the code above does is start by instantly rendering the first chunk of items (given as children) and then schedule a callback to render another chunk. This callback gets executed whenever there’s idle time on the main thread or after 200ms pass, whichever comes first. This continues until all items are rendered. What’s interesting is that the requestIdleCallback only gets registered when renderedItemsCount changes, in an attempt to limit the number of callbacks that are pending at any given time. Bear in mind that this callback will have at most around 16ms to execute, meaning that the ideal chunk should take 16ms or less to render. If it takes significantly more, Chrome will issue a console warning, but it’s not the end of the world if a callback takes a bit longer. The UX benefits of this approach outweigh the costs of slightly longer callbacks.

Funnily enough, this whole idea is not new. In fact, it’s essentially what React itself does under the hood from v16.x.x onwards, thanks to its Fiber architecture. It effectively starts rendering a component, continuously checks whether 16ms have passed, pauses the current work if they have, and continues where it left off in the next tick of the event loop. It’s interesting how the same logic can be applied to both cases, just on different layers.

Closing Notes

My overall goal here was to introduce a technique that can improve your UX when mounting heavy components. In fact, the approach isn’t limited to lists. By specifying chunkSize={1} you can use it for single components as well, deferring the rendering of a certain “heavy” piece for little enough that the user doesn’t perceive it, but long enough to remove any lag from the overall component mount.
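For instance, with the sketch above and two hypothetical components, a cheap part of a page mounts immediately while the heavy piece gets pushed to the next idle callback:

```jsx
const Dashboard = () => (
  <Defer chunkSize={1}>
    {/* rendered on the initial mount */}
    <DashboardHeader />
    {/* rendered a few milliseconds later, once the main thread is idle */}
    <HeavyChart />
  </Defer>
);
```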

If you want to see this in action, I’ve linked a CodeSandbox that demonstrates it. The demo is intentionally laggy to make the difference between the two approaches clear.

Make sure you play around with chunkSize and see what feels better to you.

Thanks a lot for reading!

P.S. 👋 Hi, I’m Aggelos! If you liked this, consider following me on Twitter or Medium 😀
