Short story: Philip Walton has a clever idea for using service workers to cache the top and bottom of HTML files, reducing a lot of network weight.
Longer thoughts: When you’re building a really simple website, you can get away with literally writing raw HTML. It doesn’t take long to need a bit more abstraction than that. Even if you’re building a three-page site, that’s three HTML files, and your programmer’s mind will be looking for ways to not repeat yourself. You’ll probably find a way to “include” all the stuff at the top and bottom of the HTML, and just change the content in the middle.
I have tended to reach for PHP for that sort of thing in the past, although these days I’m feeling much more jamstacky and I’d probably do it with Eleventy and Nunjucks.
Or, you could go down the SPA (Single Page App) route just for this basic abstraction if you want. Next and Nuxt are perhaps a little heavy-handed for a few includes, but hey, at least they are easy to work with and the result is a nice static site. The thing about these JavaScript-powered SPA frameworks (Gatsby is in here, too) is that they “hydrate” from static sites into SPAs as the JavaScript loads. Part of the reason for that is speed. No longer does the browser need to reload and request a whole big HTML page again to render; it just asks for whatever smaller amount of data it needs and replaces it on the fly.
So in a sense, you might build a SPA because you have a common header and footer and just want to replace the guts, for efficiency’s sake.
Here’s Phil:
In a traditional client-server setup, the server always needs to send a full HTML page to the client for every request (otherwise the response would be invalid). But when you think about it, that’s pretty wasteful. Most sites on the internet have a lot of repetition in their HTML payloads because their pages share a lot of common elements (e.g. the `<head>`, navigation bars, banners, sidebars, footers, etc.). But in an ideal world, you wouldn’t have to send so much of the same HTML, over and over again, with every single page request.
With service workers, there’s a solution to this problem. A service worker can request just the bare minimum of data it needs from the server (e.g. an HTML content partial, a Markdown file, JSON data, etc.), and then it can programmatically transform that data into a full HTML document.
So rather than PHP, Eleventy, a JavaScript framework, or any other solution, Phil’s idea is that a service worker (a native browser technology) can save a cache of a site’s header and footer. Then server requests only need to be made for the “guts” while the full HTML document can be created on the fly.
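As a rough sketch of how that could work (the file names, the `?partial=1` convention, and the inlined shell strings here are my own illustrative assumptions, not Phil’s actual implementation): the service worker intercepts navigation requests, fetches only a content partial from the server, and stitches it between the cached header and footer before responding.

```javascript
// Hypothetical header/footer shells. A real site would store these in the
// Cache Storage API at service worker install time rather than inlining them.
const HEADER = '<!doctype html><html><head><title>Site</title></head><body><nav>Nav</nav>';
const FOOTER = '<footer>Footer</footer></body></html>';

// Pure helper: wrap a content-only partial in the shared page shell.
function buildFullPage(partialHtml) {
  return HEADER + partialHtml + FOOTER;
}

// Service worker plumbing (only runs in a worker context, where `self` exists).
if (typeof self !== 'undefined' && 'addEventListener' in self) {
  self.addEventListener('fetch', (event) => {
    // Only intercept full-page navigations, not asset requests.
    if (event.request.mode !== 'navigate') return;
    event.respondWith(
      // Assumed convention: the server serves a content-only version of
      // each page when asked with ?partial=1.
      fetch(event.request.url + '?partial=1')
        .then((res) => res.text())
        .then((partial) => new Response(buildFullPage(partial), {
          headers: { 'Content-Type': 'text/html' },
        }))
    );
  });
}
```

The payoff is that each navigation transfers only the partial, while the browser still receives a complete, valid HTML document.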
It’s a super fancy idea, and no joke to implement, but the fact that it could be done with less tooling might be appealing to some. On Phil’s site:
on this site over the past 30 days, page loads from a service worker had a 47.6% smaller network payload, and a median First Contentful Paint (FCP) that was 52.3% faster than page loads without a service worker (416ms vs. 851ms).
Aside from configuring a service worker, I’d think the most finicky part is having to configure your server/API to deliver a content-only version of each page, or build two flat-file versions of everything.
Direct Link to Article — Permalink
The post Smaller HTML Payloads with Service Workers appeared first on CSS-Tricks.