How to optimize images on Linux

Optimizing images that are published on the web is a great way to reduce bandwidth requirements and to improve loading times and user experience at the same time.

There are various techniques to reduce bandwidth requirements. The most important one is to never load images at a higher resolution than they will actually be shown on a webpage. If an image is displayed at 500×500 at most, it’s pointless to serve a version bigger than that: the browser will happily scale it down after it’s been downloaded, but all that extra data still has to travel to the visitor’s computer over the internet, using extra bandwidth.

To keep your images as high-quality as possible, avoid re-processing them over and over: depending on the image format, they degrade a little with each pass. You can read more about image formats and how they’re best used in a previous article, here: Best image formats for websites.

Introducing Linux command-line utilities

There are countless image manipulation tools for Linux; we’re going to look at a few of them. They are all free and open-source software, available for Linux, macOS and Windows.

  • ImageMagick is a comprehensive software suite to convert, edit, resize and otherwise modify image files. The software and its documentation are available here: https://imagemagick.org
  • JpegOptim is a tool to optimize and recompress JPEG files (both lossy and lossless operation). Most Linux distributions include it as a package, available here: https://github.com/tjko/jpegoptim.
  • OptiPNG is a lossless PNG optimizer and it’s able to recompress PNG images to a smaller size.

On distributions that use apt, they can be installed by running “apt install imagemagick”, “apt install optipng” or “apt install jpegoptim”, respectively; other package managers offer equivalent packages.
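
For example, on a Debian- or Ubuntu-based system all three can be installed in one go (package names may differ slightly on other distributions):

$ sudo apt install imagemagick jpegoptim optipng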

On macOS, they’re available through the Homebrew package manager from https://brew.sh by running “brew install jpegoptim”, etc.

The advantage of command-line utilities is that they can easily be scripted, automated and tailored to your specific requirements, as opposed to a GUI-driven application that always requires user interaction to complete a task.
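
As a small sketch of that kind of automation – assuming ImageMagick is installed and using the hypothetical “incoming” and “resized” folder names – a loop like this resizes every JPEG in a folder to at most 800 pixels wide without any manual interaction:

$ mkdir -p resized
$ for f in incoming/*.jpg; do convert "$f" -resize 800 "resized/$(basename "$f")"; done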

Obtaining sample images, processing some JSON

Normally you’ll run these commands on a batch of images from various sources, but to show how these procedures work and how effective the space reduction is, I decided to grab a random sample of images from the internet that should give us a good idea of how well they perform.

It’s a great little text-processing exercise, so I thought I’d share it below.

I did an image search for the word “random” on DuckDuckGo (https://duckduckgo.com/?q=random&t=h_&iax=images&ia=images), then downloaded the first few hundred images to build a sample library of completely random pictures. Opening the developer tools in Chrome shows that DuckDuckGo fetches results 100 at a time using XHR (a web request that downloads extra information in the background). The results come back in an easy-to-process JSON-style format that we can extract data from with a few simple commands. Read more about JSON here: What is JSON and how to use it.

DuckDuckGo search results in JSON in Google Chrome Dev Tools

After appending the content of each of these responses to a file named urls.txt (double click, copy/paste), we can run a one-liner shell command to extract the URLs. We could go all sophisticated and parse the JSON properly, but for now, simply replacing double quotes with newlines, then searching for “https…jpg” and removing thumbnails (they all come from bing.net) is good enough.

The “tr” command replaces quotes with newlines, the first grep keeps lines starting with https and ending in jpg, the second grep removes any lines containing bing.net, and sort -u removes duplicates while sorting the list alphabetically. All of these are standard utilities included in every Linux distribution and in macOS.

$ tr \" "\n" < urls.txt | grep '^https.*jpg$' | grep -vF bing.net | sort -u > jpegurls.txt
$ wc -l jpegurls.txt
     252 jpegurls.txt
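
If jq is installed, the same extraction can be done by actually parsing the JSON instead of treating it as plain text. This is only a sketch: it assumes the saved responses are valid JSON and that each result object exposes the full-size URL in a field called “image” (check the real response for the exact field name):

$ jq -r '.results[].image' urls.txt | grep -i 'jpg$' | grep -vF bing.net | sort -u > jpegurls.txt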

Now we need to download all of them with a simple wget command. Wget is a Linux utility that takes a list of URLs via the “-i filename” parameter, then simulates a browser and downloads each one. The “-w 10” option adds a 10-second wait between requests to avoid overloading anyone’s server:

$ wget -i jpegurls.txt -w 10

Some of the files we ended up with had mangled extensions: when a file name already exists, wget appends .1, .2, .3 and so on to avoid overwriting (“clobbering”) it. I removed those duplicates by running rm *.jpg.[1-9]

We can get quick information on a batch of images by running the “file” command, which (in the case of images) displays their resolution and JPEG properties. I’m using “-b” to hide the file names, because who knows what it downloaded from DuckDuckGo…

$ file -b *jpg
JPEG image data, JFIF standard 1.01, resolution (DPI), density 96x96, segment length 16, baseline, precision 8, 570x791, frames 3
JPEG image data, JFIF standard 1.01, aspect ratio, density 1x1, segment length 16, baseline, precision 8, 750x739, frames 3
JPEG image data, JFIF standard 1.02, aspect ratio, density 100x100, segment length 16, baseline, precision 8, 837x623, frames 3
JPEG image data, JFIF standard 1.01, aspect ratio, density 1x1, segment length 16, progressive, precision 8, 1080x540, frames 3
JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=2], baseline, precision 8, 1024x683, frames 3
JPEG image data, JFIF standard 1.02, aspect ratio, density 1x1, segment length 16, baseline, precision 8, 675x540, frames 3
JPEG image data, JFIF standard 1.01, resolution (DPI), density 300x300, segment length 16, comment: "Handmade Software, Inc. Image Alchemy v1.14", baseline, precision 8, 594x440, frames 3

Reducing the size of images

ImageMagick includes the “convert” and “mogrify” utilities – both can resize images according to various criteria. The difference is that “mogrify” overwrites the original files by default, while “convert” saves the result to a new file and can do a lot more, such as converting between file formats. For demonstration purposes we’ll use “mogrify” here, but similar commands with small syntax differences work with “convert”, too.

To reduce all files to a maximum of 500 pixels wide (originals are in the “jpg” folder, the new ones will be saved in the “jpg-500” folder):

$ mogrify -path jpg-500 -resize 500 jpg/*jpg
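
Note that “mogrify -path” does not create the destination folder for you, so create it first with mkdir. For a single file, a roughly equivalent “convert” invocation looks like this (example.jpg is just a placeholder name):

$ mkdir -p jpg-500
$ convert jpg/example.jpg -resize 500 jpg-500/example.jpg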

Let’s verify the new file sizes and the resolution of the resized images – we can see that they were reduced to about a third of their original size and they are all 500 pixels wide.

$ du -sh jpg jpg-500
27M	jpg
9.4M	jpg-500

$ file -b jpg-500/*jpg
JPEG image data, JFIF standard 1.01, resolution (DPI), density 96x96, segment length 16, baseline, precision 8, 500x694, components 3
JPEG image data, JFIF standard 1.01, aspect ratio, density 1x1, segment length 16, baseline, precision 8, 500x493, components 3
JPEG image data, JFIF standard 1.01, aspect ratio, density 1x1, segment length 16, baseline, precision 8, 500x250, components 3
JPEG image data, JFIF standard 1.01, aspect ratio, density 1x1, segment length 16, Exif Standard: [TIFF image data, big-endian, direntries=2], baseline, precision 8, 500x333, components 3
JPEG image data, JFIF standard 1.01, aspect ratio, density 1x1, segment length 16, baseline, precision 8, 500x400, components 3
JPEG image data, JFIF standard 1.01, resolution (DPI), density 300x300, segment length 16, comment: "Handmade Software, Inc. Image Alchemy v1.14", baseline, precision 8, 500x370, components 3
JPEG image data, JFIF standard 1.01, aspect ratio, density 100x100, segment length 16, baseline, precision 8, 500x372, components 3

The geometry argument used by “-resize” (and by the related “-geometry” parameter of “mogrify”) supports several forms, for example:

  • 500 : resize to 500 pixels wide
  • x500 : resize to 500 pixels tall
  • 500x500 : resize to fit within 500x500 pixels, keeping the aspect ratio of the image
  • 500x500! : resize to exactly 500x500, ignoring the original aspect ratio

More information on these can be found here: https://imagemagick.org/script/command-line-processing.php#geometry
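
One practical note: the “!” character is special in most interactive shells, so quote the geometry when forcing exact dimensions. A sketch that squeezes everything into exactly 500x500 (the “jpg-500x500” folder name is only an illustration):

$ mkdir -p jpg-500x500
$ mogrify -path jpg-500x500 -resize '500x500!' jpg/*jpg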

The default JPEG compression quality when processing images with ImageMagick is 92%; this can be adjusted using the -quality parameter. The same command saving files at 75% quality looks like this:

$ mogrify -quality 75 -path jpg-500-75 -resize 500 jpg/*jpg

File sizes were naturally reduced further at the cost of losing details:

$ du -sh jpg-500-75
6.5M	jpg-500-75

Removing sensitive and extra information from images

Most image files contain all sorts of extra information: color profiles, details of the camera or phone that took the picture, personal information about the person who took it, and in some cases even the GPS coordinates of where the photo was taken, for easy geotagging. It’s easy to see why publishing these on the internet is a bad idea for privacy reasons, especially the GPS coordinates. Most of this is stored as “EXIF” metadata silently attached to each image.
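
To see what metadata a particular picture is carrying before stripping it, ImageMagick’s “identify” utility can dump everything it knows about the file; filtering for the EXIF entries gives a quick overview (example.jpg is again a placeholder):

$ identify -verbose jpg/example.jpg | grep -i 'exif:'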

All the tools we’re discussing here provide ways to remove (strip) these EXIF tags from images, leaving only the information necessary to display the picture itself. That improves privacy and reduces file sizes at the same time.

With “mogrify”, we can simply add the “-strip” parameter to any command and the resulting file will be without EXIF tags, color profiles, comments, etc.

Running the same command to convert everything to 500px wide, this time with metadata stripped, reduced file sizes further:

$ mogrify -strip -path jpg-500-strip -resize 500 jpg/*jpg

Let’s compare it to the previous one to see how much smaller they got:

$ du -sh jpg-500 jpg-500-strip
9.4M jpg-500
9.0M jpg-500-strip

Wait, there is more…

In the second part of this article, we’ll look at the following topics:

  • using advanced resizing techniques and sharpening images
  • using jpegoptim to reduce JPEG file sizes
  • optimizing PNG files
  • converting between file formats

Update: you can find it here: How to optimize images on Linux, part 2
