Crafting Lean Containers: Efficiency Tips for Container Images
Build lightweight and efficient container images with expert tips on reducing size, optimizing layers, and improving overall performance for developers.
What Makes a Container Image 'Well-made' from an Efficiency Perspective?
Hello, and welcome to Aiden's Lab Notebook.
In the last post, we looked at what makes a container image 'well-made' from a security perspective.
This time, let's explore how to build efficient container images.
First, let's summarize the requirements from an efficiency standpoint:
Container image size reduction
Container image layer minimization
Container image layer optimization
Reducing image size is straightforward, but what exactly are container image layers?
It's okay if the concept feels new right now. Just follow along with the core ideas I'll cover concisely, and you'll quickly get the hang of it.
What are Container Image Layers?
Container images are built up by stacking multiple file system states on top of each other. To understand this quickly, let's look at a simple Dockerfile example like the one below.
FROM golang:1.23
WORKDIR /src
COPY <<EOF /src/main.go
package main

import "fmt"

func main() {
    fmt.Println("hello, world")
}
EOF
RUN go build -o /bin/hello ./main.go
This Dockerfile defines an image that runs a basic Go application. An image built from this will have the following states stacked up:
The initial state of the base image
The state after copying the Go application code (/src/main.go) on top of state 1
The state after building the Go application on top of state 2, which creates the output artifact.
Each of these states that make up the container image is called a container image layer. More precisely, each layer is a read-only filesystem diff stacked on top of the previous one, but that's all you need to keep in mind for now.
If you look closely at the states we just reviewed, you'll notice one common thread: a state change, or a layer creation, happens wherever there's a Dockerfile instruction like FROM, COPY, or RUN.
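You can see these layers for yourself with the docker history command, which lists each layer of a built image along with the instruction that created it. A quick sketch (the tag hello-go is just a placeholder name for this example):

```shell
# Build the example Dockerfile and tag it (the tag name is arbitrary)
docker build -t hello-go .

# List the image's layers, newest first, showing the
# Dockerfile instruction that produced each one
docker history hello-go
```

Running this against the example above, you'll see one entry per FROM, COPY, and RUN instruction.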
Container images are structured this way using layers because it offers two key benefits:
Faster Container Image Builds
Container image layers are reusable.
So, even if the example code we looked at is modified to build a different application, the existing base image layer can be reused, allowing you to build the container image faster.
Reduced Container Image Storage
Since container image layers that have already been created are reusable, you can store container images efficiently.
When you build or pull a container image, any layers that already exist locally are reused, and only the new, needed layers are downloaded and stored.
To make this a bit more intuitive, I built the Dockerfile example above with the docker build command. In the build output, you can see each instruction defined in the example being executed and committed as a separate layer.
We've just covered the core concept of container image layers. Now, let's look at some tips you can use to ensure efficiency when building your container images.
I've focused only on practical, actionable advice that you can apply right away, so definitely check them out!
Tips for Boosting Your Container Image Efficiency
1. Leveraging Multi-Stage Builds
Multi-stage builds are a technique that significantly reduces the final image size by defining multiple build stages within a single Dockerfile.
The most common pattern is to complete the application build or dependency installation in the first build stage, and then in the second (final) stage, copy only the artifacts needed to run the application onto a minimal base image.
This means that large packages or files only needed for the build process are not included in the final image, drastically reducing its size.
To make this easier to understand, let's look at a Dockerfile example. The code below applies the multi-stage build technique to an image that builds a Go application:
In the first stage, it builds an application developed in Go, and in the second (final) stage, it builds the final image by copying only the binary built in the previous stage onto a different base image.
# First stage: build the Go binary (naming this stage 'builder')
FROM golang:1.23 AS builder
WORKDIR /src
COPY <<EOF /src/main.go
package main

import "fmt"

func main() {
    fmt.Println("hello, world")
}
EOF
# Disable cgo so the binary is statically linked and runs on musl-based Alpine
RUN CGO_ENABLED=0 go build -o /bin/hello ./main.go

# Final stage: copy only the binary built in the previous stage onto a minimal image
# (copy /bin/hello from the stage named 'builder')
FROM alpine:latest
COPY --from=builder /bin/hello /bin/hello
CMD ["/bin/hello"]
The COPY --from Dockerfile instruction might look unfamiliar. Let me explain it step-by-step to make it clear.
In the previous stage's image (based on golang:1.23 and named builder in the example), the Go binary has already been built.
This means that the content of the path where the Go binary build artifact is located within builder (which is /bin/hello in the example above) is copied (COPY) to the same location (/bin/hello) within the alpine:latest image in the final stage.
So, the final image only receives the artifacts built in the previous stage.
Doing this not only reduces the final image size but also excludes data unnecessary for the application's operation, providing the added benefit of reducing the security attack surface.
Since multi-stage builds help achieve both efficiency and security, they're well worth considering whenever you create container images.
2. Utilizing the .dockerignore File
The .dockerignore file lets you specify files and directories to exclude from the build context (the set of files sent to the Docker daemon when a build starts). It uses a format similar to the .gitignore file most of us are familiar with.
By using a .dockerignore file to exclude data unnecessary for the image build, such as .git folders or development-specific files, from the build context, you can build images faster.
Furthermore, ensuring that the final image contains only the minimum required data also means you can increase the image's security, so properly utilizing the .dockerignore file is important.
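As a minimal sketch, a .dockerignore for a Go project like our example might look like this (the exact entries depend entirely on your project layout):

```
# Version control metadata
.git

# Local build artifacts and vendored dependencies
bin/
vendor/

# Development-only files
*.md
.env
docker-compose.override.yml
```

Everything matched here is simply never sent to the daemon, so it can't end up in the image and doesn't slow down the build.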
3. Minimizing Container Image Layers
Minimizing container image layers is exactly what it sounds like: reducing unnecessary layers. This is because the more layers a container image has, the slower the build process becomes and the more storage space it consumes.
When you're actually defining container images, you'll need to run various commands during the build process. Commands like apt-get update to update available packages, apt-get install to install needed packages, and rm to remove unnecessary temporary data all require using the RUN instruction in your Dockerfile.
If you execute each of these commands with separate RUN instructions, each one creates a new container image layer. Container image optimization starts here.
Let's assume the commands needed for the package installation process are defined using separate RUN instructions like this:
RUN apt-get update
RUN apt-get install -y mysql-client
RUN rm -rf /var/lib/apt/lists/*
In this case, a total of 3 image layers are created during the package installation process. Worse, the rm in the last layer doesn't actually shrink the image: the files it deletes are still stored in the earlier apt-get layers.
To minimize image layers, you can combine them sequentially using && within a single RUN instruction:
RUN apt-get update && apt-get install -y mysql-client && rm -rf /var/lib/apt/lists/*
By modifying it like this, the package installation process produces just 1 layer, and the temporary apt data is removed before the layer is committed, so it never takes up space in the image.
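When a combined RUN command grows long, you can keep it readable with backslash line continuations. It still produces a single layer, just formatted across multiple lines:

```dockerfile
RUN apt-get update && \
    apt-get install -y mysql-client && \
    rm -rf /var/lib/apt/lists/*
```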
4. Define Instructions Likely to Change Frequently Lower Down in the Dockerfile
There's another way to increase container image build speed using the properties of image layers we've discussed so far.
This is by arranging the order of instructions defined in your Dockerfile in a way that's favorable for image building.
What does the order of Dockerfile instructions have to do with building container images? If you look at the concept of image layers we just covered from a slightly different perspective, you'll quickly understand.
We said that an image layer is created whenever a Dockerfile instruction is executed during the build. Put another way, each layer acts as a snapshot, storing only the file changes made on top of the previous layer.
Let's imagine a container image with layers stacked A-B-C-D.
If there's a new change in the D layer and you rebuild the image, the existing A, B, and C layers are reused as they are, and only a new layer D' is created.
However, if there's a new change in the C layer, only the existing A and B layers can be reused, and you'll have to create new C' and D' layers.
That's why we should place instructions for layers that are expected to change frequently as far down in the Dockerfile as possible to save image build time.
Let's bring back the example we looked at earlier in the 'What are Container Image Layers?' section:
The initial state of the base image (FROM)
Copying the Go application code (COPY)
Building the Go application, creating the output artifact (RUN)
Let's imagine you need to add an instruction (RUN) to install packages required for the Go application, to the Dockerfile for this image which has these 3 layers.
Where among the existing instructions should the new instruction be placed?
Since the application code will likely change more frequently than the packages that need to be installed, you should define the Dockerfile instructions in an order like this:
The initial state of the base image (FROM)
Install packages required for the Go application (RUN)
Copying the Go application code (COPY)
Building the Go application, creating the output artifact (RUN)
With this order, even if the Go application code changes and you rebuild the container image, the existing layers 1 and 2 will be reused, allowing you to build the image faster.
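Applied to our example, that ordering might look like the sketch below. Here the package being installed (ca-certificates) and the separate main.go file are just placeholders for illustration; your dependencies and file layout will differ:

```dockerfile
FROM golang:1.23
WORKDIR /src

# Changes rarely: install OS packages the application needs,
# so this layer stays cached across most rebuilds
RUN apt-get update && \
    apt-get install -y ca-certificates && \
    rm -rf /var/lib/apt/lists/*

# Changes often: copy the application code as late as possible,
# so a code change only invalidates the layers below this point
COPY main.go /src/main.go
RUN go build -o /bin/hello ./main.go
```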
Benefits of Prioritizing Container Image Security and Efficiency
So far, we've explored container image security and efficiency. What benefits can you expect if you build container images 'well' like this? We can summarize the benefits into four main points:
Reduced Impact in Case of a Security Incident
If a security incident does occur, unelevated privileges inside the container and a small attack surface limit the damage and let you respond quickly.
Cost Savings and Increased Deployment Efficiency
Lighter container images speed up the deployment pipeline and reduce storage and network costs.
During Auto Scaling events, the time it takes for instances to boot and services to become ready decreases, increasing deployment efficiency.
Improved Workflow Efficiency for Development and Operations Teams
The development team can build and test images quickly and reliably locally.
Operations teams find deployment and management easier thanks to smaller, predictable images.
Efficient Operations in Cloud Environments
It helps reduce cold start times in Serverless environments, improving user experience and increasing cost-efficiency.
Wrapping Up
Regularly rebuilding images to pick up base image updates, along with reviewing Dockerfile changes, is essential for continuously improving image quality.
In today's world, where cloud and microservice architectures are commonplace, the importance of container images cannot be overstated.
If you're using container images in your current projects, why not take a look to see if there are areas where you can improve efficiency?
Since efficiency is essential in the IT industry, it's not just a chance for personal growth but also an opportunity to boost your team's development efficiency!
✨Enjoyed this issue?
How did you like this issue? I'd love to hear your thoughts in the form below!
👉 Feedback Form
Your input will help improve Aiden's Lab Notebook and make it more useful for you.
Thanks again for reading this notebook from Aiden’s Lab :)