Imagine a page with a large grid of images. You don’t want your users staring at blank spaces that slowly fill in as the images load. Gross. Instead, give them an instant visual indication that something is loading, with a hint of what’s to come.
*Demo: loading is artificially slowed to simulate a slower connection. Refresh the page if you missed the blurred images.*
If you’re not familiar with blur placeholders, it’s a technique of creating a blurred version of the original image at a fraction of the file size, and then encoding that blurred version directly into the HTML. The blurred image loads as immediately as the page, whilst the original image is swapped in later.
This technique allows for a really fast, optimised initial loading experience for visitors with any level of internet connection. Those with a fast connection likely won’t even see the blurred image, whilst those on slower connections won’t be jarred by a sudden blank space being filled in.
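Conceptually, the markup that ships to the browser looks something like this (the data URI is truncated and the alt text is just a placeholder for illustration):

```html
<!-- The blurred placeholder is inlined as a data URI, so it renders
     as soon as the HTML does; the full image is swapped in later -->
<img src="data:image/png;base64,iVBORw0KGgo..." alt="My face" />
```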
## Creating the blurred image

My face: 61 KB
My blurred face: 4 KB (about 93% smaller)
In order to create this blurred image, we’ll need to perform two steps:
- Resize the original image to something much smaller than the original. Not only will this reduce the final size of the blurred image, it will also keep the encoding performant. If you care about neither of these things, you can skip this step.
- Create a hash of the blurred image (known as a “blurhash”), which we’ll later convert to a base64 image data string to serve to the client.
## Resizing the image
In the interests of performance and reducing the amount of data we need to store, we’ll want to resize the image to be as small as possible.
Because we’re going to be blurring the image, it means we can resize and store a very tiny thumbnail of the image, and then upscale to the desired size when rendering it. The blurring that the blurhash library performs will mean that the upscaled image will look great, and the user won’t be able to tell it’s being upscaled.
Doing this in Node is simple; we’ll use the popular `sharp` library to resize the image.

```bash
npm install sharp
```

`sharp` gives us a powerful, performant, and easy way to do all sorts of image operations. In this case, we’re going to use it to resize the image to be much smaller than the original.
```js
import path from "path"
import fs from "fs"
import sharp from "sharp"

// Resolve the path to the image
const filePath = path.resolve(process.cwd(), "path/to/image.jpg")

// Read the image file into a buffer
const imageBuffer = fs.readFileSync(filePath)

// Read the original dimensions from the image metadata
const { width, height } = await sharp(imageBuffer).metadata()

// Calculate optimal dimensions for blurhash
const aspectRatio = Number(width) / Number(height)
const minDimension = 32

// Calculate dimensions ensuring a minimum of 32 pixels on
// the smaller side
let blurWidth, blurHeight
if (aspectRatio >= 1) {
  // Landscape or square
  blurHeight = minDimension
  blurWidth = Math.round(minDimension * aspectRatio)
} else {
  // Portrait
  blurWidth = minDimension
  blurHeight = Math.round(minDimension / aspectRatio)
}

// Give sharp the buffer, call the resize method,
// and make sure to return the data as raw bytes
const { data: imageData, info: imageMeta } = await sharp(imageBuffer)
  .resize(blurWidth, blurHeight, { fit: "inside" })
  .ensureAlpha()
  .raw()
  .toBuffer({ resolveWithObject: true })

// We'll use the raw bytes in a bit
```
But wait, won’t resizing the image to be smaller result in a loss of quality? Not really. To illustrate both the resized image and the results of upscaling with blurhash, take a look at the following example:
Resized: 32 × 32 pixels
Blurred & upscaled: 128 × 128 pixels
We can resize the original image to be much smaller, blurhash it, and then upscale to the desired size when rendering it. Almost no detail is lost despite the upscaling.
## Creating the blurhash
Once we have our image at the desired size, we’ll use a fantastic library called `blurhash` to create a hash of the blurred image (a string of seemingly random characters that represents the image). This hash can then be stored, and later decoded to get the image data back.
```bash
npm install blurhash
```

With `blurhash` installed, we can create a hash from the resized image data we generated earlier:
```js
import { encode } from "blurhash"

// Create a new array with the correct format
const rgbaData = new Uint8ClampedArray(imageMeta.width * imageMeta.height * 4)

// Copy each pixel's RGBA bytes across
for (let i = 0; i < imageData.length; i++) {
  rgbaData[i] = imageData[i]
}

const hash = encode(rgbaData, imageMeta.width, imageMeta.height, 5, 4)
```
Wait, what’s all this extra code about? While `sharp` returns the image data as a Node.js `Buffer`, `blurhash` expects the pixel data in a specific format called `Uint8ClampedArray`.

So we simply need to create a new array (`rgbaData`) that’s exactly the right size for the image data (width × height × 4, where the 4 represents the RGBA channels: Red, Green, Blue, and Alpha/transparency), and then copy all the pixel data over from the original format to this new one.
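If the copy loop feels verbose, the same conversion can be written in one line; this is just an equivalent shortcut rather than anything the walkthrough requires:

```js
// Copies the RGBA bytes straight out of sharp's Buffer
// into a new Uint8ClampedArray
const rgbaData = Uint8ClampedArray.from(imageData)
```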
We end up with a hash that looks something like this:
```
-VHd]Lt7TH.7xuR+.le?t5xu%2t6OZxar?S4XSoy%MOTV[spoMn,%MV@aeafjuWqkqsUNaWBt6WXogtPafjut6ofs;oea#kBoeof
```
That’s my face at 32x32, blurred, and represented as a string. You’re welcome!
## Controlling the blur
Blurhash gives us two parameters to control the blur, called `componentX` and `componentY`. Roughly, `componentX` controls the horizontal blur, and `componentY` controls the vertical blur.

Personally, I’ve found that `componentX = 5` and `componentY = 4` give a good balance between blur and detail, but you should play around with the values to get your desired sweet spot.
Note, though, that “The more components you pick, the more information is retained in the placeholder, but the longer the BlurHash string will be”, so you’ll need to balance the length of the string against the amount of detail you want to retain.
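For reference, here’s the same `encode` call from earlier with those two knobs pulled out into named variables (the values are simply the ones I settled on):

```js
// componentX / componentY control how much horizontal / vertical
// detail the blurhash keeps (valid values are 1–9)
const componentX = 5
const componentY = 4

const hash = encode(rgbaData, imageMeta.width, imageMeta.height, componentX, componentY)
```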
## Saving the blurred placeholder
Now that you have your blurhash string, you’ll need to store it somewhere. Where you store the blurhash is entirely up to you. You could store it in a database, in a file, in a cache, or wherever you want, so long as you can quickly retrieve it for decoding and server-side rendering later.
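As one example, here’s a minimal sketch that writes the hash into a JSON sidecar file next to the image; the file name, paths, and shape of the record are all just assumptions for illustration:

```js
import fs from "fs"

// Hypothetical record: keep the blurhash next to whatever metadata
// you already store for the image
const record = {
  src: "/images/face.jpg", // assumed public path to the original image
  width: imageMeta.width,
  height: imageMeta.height,
  blurhash: hash,
}

fs.writeFileSync("path/to/image.blurhash.json", JSON.stringify(record, null, 2))
```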
## Displaying the blurred placeholder
Once we have our image resized, compressed into a blurhash, and stored somewhere, we’ll inevitably want to display it.
To do that, we’ll need to:
- Decode the blurhash from a string to an array of raw pixel data
- Convert the decoded pixel data to an image format, generally a PNG
- Convert that image into a base64 encoded string
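If you’re curious, those three steps look roughly like this by hand, using the libraries we already have installed (the 32 × 32 dimensions are placeholders):

```js
import { decode } from "blurhash"
import sharp from "sharp"

// 1. Decode the blurhash string back into raw RGBA pixel data
const pixels = decode(hash, 32, 32)

// 2. Convert the raw pixels into a PNG
const pngBuffer = await sharp(Buffer.from(pixels), {
  raw: { width: 32, height: 32, channels: 4 },
})
  .png()
  .toBuffer()

// 3. Base64-encode the PNG as a data URI
const dataUri = `data:image/png;base64,${pngBuffer.toString("base64")}`
```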
Thankfully we have a library that can do all of this for us: blurhash-base64.
```bash
npm install blurhash-base64
```
Now all we need to do is decode the blurhash:
```js
// blurhashToBase64 comes from the blurhash-base64 package
// (assuming it's exposed as a named export)
import { blurhashToBase64 } from "blurhash-base64"

const hashBase64 = await blurhashToBase64(hash)
```
And then we can display the image!

```html
<img src="{hashBase64}" />
```

`blurhash-base64` gives us a base64-encoded image data string that we can use to display the image.
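One more thing worth wiring in before we move on: for the swap in the next section to work, the rendered markup also needs to know where the original image lives. Here’s a sketch of what the server-rendered tag might look like (the image path, dimensions, and alt text are placeholders):

```html
<!-- src carries the inlined blurred placeholder;
     data-original-src carries the real image the client will swap in -->
<img
  src="{hashBase64}"
  data-original-src="/images/face.jpg"
  width="1280"
  height="1280"
  alt="My face"
/>
```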
## Swapping the placeholder for the original image
Now that we have our blurred placeholder, and we’re rendering it on the server, we’ll need to swap it for the original (non-blurred) image when it’s loaded on the client.
Executing the following script on the client will swap the placeholder for the original image only when the image is within 50px of the viewport.
```js
// Select all images with a data-original-src attribute
const blurredImages = document.querySelectorAll("img[data-original-src]")

// Create an observer to load the images when they're in the viewport
const observer = new IntersectionObserver(
  (entries) => {
    entries.forEach((entry) => {
      if (entry.isIntersecting) {
        loadHighQualityImage(entry.target)
      }
    })
  },
  {
    // Begin loading the image when it's within 50px of
    // the viewport for perceived performance
    rootMargin: "50px",
  },
)

const loadHighQualityImage = (img) => {
  // Stop observing this image so erratic scrolling
  // can't trigger the swap more than once
  observer.unobserve(img)

  // Set the src directly, which will trigger the browser
  // to load the image
  img.src = img.dataset.originalSrc

  // May as well clean up after ourselves
  delete img.dataset.originalSrc
}

// Begin observing all images with a data-original-src attribute
blurredImages.forEach((img) => {
  observer.observe(img)
})
```
And we’re done! Now you can take any image, resize it, blur it, save it, and then display it on your page with a placeholder that’s instantly rendered for users, and then swapped for the original image when it’s loaded.