Docker is an open-source platform designed to automate the deployment, scaling, and management of applications using containerization technology. Containers bundle an app and its dependencies into a single package, ensuring consistent environments across development and production.
// Example: Running a container from Docker Hub
docker run hello-world
// This command downloads and runs a simple test container.
Docker was released in 2013 by Solomon Hykes and revolutionized software deployment by making containerization easy and accessible. It built on earlier container tech like LXC and has since grown to become the industry standard.
Timeline:
- 2008: Linux Containers (LXC) emerge
- 2013: Docker introduced, simplifying container use
- 2014+: Docker ecosystem expands with Docker Hub, Compose, Swarm
Containers share the host OS kernel and isolate applications in lightweight environments. VMs run full guest OS copies, making them heavier but more isolated.
Comparison:
- Containers: Lightweight, fast startup, share OS
- VMs: Full OS, more resources, slower to start
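You can verify the shared kernel yourself on a Linux host: the kernel version reported inside a container matches the host's, because there is no guest OS (a quick check, assuming Docker is installed):

# Host kernel version
uname -r
# Kernel version inside a container — identical, because containers share the host kernel
docker run --rm alpine uname -r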
Docker provides fast deployment, consistency across environments, resource efficiency, easy scalability, and simplified CI/CD pipelines.
Benefits:
- Portability: Runs the same anywhere
- Efficiency: Low overhead vs VMs
- Isolation: Apps don’t interfere
Docker consists of the Docker Engine (daemon), REST API, and CLI client. The engine handles container lifecycle, images, networks, and storage.
Architecture:
- Docker Client (CLI)
- Docker Daemon (Engine)
- Docker Registries (e.g., Docker Hub)
The Engine manages containers. CLI lets users interact with Docker. Registries store images for sharing.
Components:
- Engine: Runs and manages containers
- CLI: Command line tool to control Docker
- Registry: Central image repository (Docker Hub)
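The client/daemon split is easy to see in practice: a single docker version call reports both halves.

// The CLI (client) and the daemon (server) report their versions separately
docker version
// Output contains a "Client" section and a "Server: Docker Engine" section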
Docker Desktop is available for Windows and Mac, while Linux users install Docker Engine directly using package managers.
// Windows/Mac: Download from https://www.docker.com/products/docker-desktop
// Linux example (Ubuntu):
sudo apt update
sudo apt install docker.io
sudo systemctl start docker
Docker Hub is a cloud registry service to find and share container images, including official and community repositories.
// Search for images
docker search nginx
// Pull an image
docker pull nginx
The CLI provides commands to build, run, stop, and manage containers and images.
// List running containers
docker ps
// List all containers (including stopped)
docker ps -a
// Remove a container
docker rm container_id
Start a container with the “hello-world” image to verify Docker installation.
docker run hello-world
// Output confirms Docker is working and able to pull images
Docker images are read-only templates used to create containers. They contain everything needed to run an app.
// List downloaded images
docker images
// Build image from Dockerfile
docker build -t myapp .
Containers go through states: created, running, stopped, paused, and removed.
Lifecycle commands:
docker create image_name
docker start container_id
docker stop container_id
docker rm container_id
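The paused state mentioned above has its own pair of commands:

# Pause all processes in a running container
docker pause container_id
# Resume the paused container
docker unpause container_id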
Docker comes in Community (free) and Enterprise (paid) editions, with regular version updates adding features and fixes.
Community Edition: Open-source, for developers
Enterprise Edition: Support, advanced security features
Docker Desktop bundles Docker Engine, CLI, Kubernetes, and GUI tools for easy container management on Mac and Windows.
// After install, run:
docker version
// Verify Kubernetes is enabled (optional)
kubectl version
Common commands to manage Docker:
docker pull image_name    # Download image
docker run image_name     # Run container
docker ps                 # List running containers
docker stop container_id  # Stop container
docker rm container_id    # Remove container
docker images             # List images
docker build -t name .    # Build image from Dockerfile
Docker images are read-only templates containing everything needed to run an application, including code, libraries, and dependencies.
// Example: Pulling an image from Docker Hub
docker pull nginx
// This downloads the nginx image to your local system
Docker images are made of layers stacked using a union filesystem. Each layer represents a change from the previous.
// Example layers from an image
docker history nginx
// Shows each layer size and command used
Official images are maintained by Docker or trusted maintainers, while custom images are created for specific needs.
// Official image example
docker pull python
// Custom image example built from Dockerfile
docker build -t myapp:latest .
Pull images using the `docker pull` command to download from Docker Hub registry.
// Pull latest Ubuntu image
docker pull ubuntu:latest
List local images and inspect details like ID, size, and creation date.
// List images
docker images
// Inspect image metadata
docker inspect nginx
Tag images to give them meaningful names and versions for easier management.
// Tag an image with version
docker tag nginx:latest nginx:v1.0
Remove images to free space when no longer needed.
// Remove an image by name or ID
docker rmi nginx:v1.0
Save images to tar files and load them later to transfer between machines.
// Save image to a tar file
docker save -o nginx.tar nginx:latest
// Load image from tar file
docker load -i nginx.tar
Export container filesystems as tar archives and import them as images.
// Export container filesystem
docker export -o container.tar container_id
// Import as image
cat container.tar | docker import - myimage:latest
A Dockerfile is a text file with instructions to build a Docker image step-by-step.
// Basic Dockerfile example
FROM ubuntu:latest
RUN apt-get update && apt-get install -y nginx
CMD ["nginx", "-g", "daemon off;"]
Create Dockerfiles to automate building custom images.
// Dockerfile to create a simple Python app image
FROM python:3.9
COPY app.py /app.py
CMD ["python", "/app.py"]
The build context is the directory sent to Docker daemon during build, containing files used by the Dockerfile.
// Build image with context as current directory
docker build -t mypythonapp .
Use minimal base images, combine RUN commands, and avoid secrets to optimize image size and security.
// Example: Combine RUN commands
RUN apt-get update && \
    apt-get install -y curl && \
    rm -rf /var/lib/apt/lists/*
Exclude files and directories from the build context to speed up builds and avoid leaking sensitive info.
// Example .dockerignore
node_modules
.git
.env
Check Dockerfile syntax, validate commands, and review build logs to fix common build errors.
// Example: Build with verbose output
docker build --progress=plain -t myapp .
// Check error messages for clues
A Docker container is a lightweight, standalone, and executable software package that includes everything needed to run an application: code, runtime, system tools, libraries, and settings.
// Docker containers share the host OS kernel but run isolated processes
// They provide portability and consistency across environments
Containers are created from Docker images, which are read-only templates with instructions to build the container.
// Create and start a container from an image
docker run ubuntu
// This downloads the ubuntu image (if not present) and creates a container
Interactive mode lets you run a container with a terminal session attached.
// Run container interactively with a shell
docker run -it ubuntu /bin/bash
// '-i' keeps STDIN open, '-t' allocates a terminal
Detached mode runs containers in the background.
// Run container detached in background
docker run -d nginx
// '-d' runs container detached
// Stop a running container
docker stop container_id_or_name
// Start a stopped container
docker start container_id_or_name
// Restart a container
docker restart container_id_or_name
// Kill container immediately
docker kill container_id_or_name
// List running containers only
docker ps
// List all containers (including stopped)
docker ps -a
// Get detailed info about a container
docker inspect container_id_or_name
// Outputs JSON with network, volumes, config, and more
// Run bash shell inside running container
docker exec -it container_id_or_name /bin/bash
// Run any command inside container
docker exec container_id_or_name ls /app
// View logs of a container
docker logs container_id_or_name
// Follow logs in real-time
docker logs -f container_id_or_name
// Remove stopped container
docker rm container_id_or_name
// Remove multiple containers
docker rm container1 container2
Save the current state of a container as a new image.
// Commit container changes to new image
docker commit container_id new_image_name
// Then run new image as container later
docker run new_image_name
Containers communicate via networks; Docker provides a default bridge network and lets you create custom networks.
// List networks
docker network ls
// Create a network
docker network create my_network
// Run container attached to network
docker run --network=my_network nginx
// Limit container to 512MB RAM
docker run -m 512m ubuntu
// Limit container CPU to 1 core
docker run --cpus="1.0" ubuntu
Containers have unique IDs and optionally user-defined names for easier management.
// Assign custom name to container
docker run --name mycontainer nginx
// Refer to container by name instead of ID
docker stop mycontainer
By default, container file system changes are ephemeral. Use volumes or bind mounts for persistent storage.
// Run container with host directory mounted
docker run -v /host/path:/container/path ubuntu
// Data inside /container/path persists on host even if container removed
Docker networking enables communication between containers, and between containers and the outside world.
// Docker automatically creates default networks like 'bridge' on installation
// Networking allows containers to talk to each other or expose services externally
Docker has three main built-in network types:
// bridge: Default network, isolated, containers get private IPs
// host: Containers use host’s network stack directly (no isolation)
// none: No network, container is completely isolated
Create user-defined networks to improve container communication and control.
// Create a bridge network named 'my-net'
docker network create my-net
// List networks to verify
docker network ls
Attach containers to specific networks to control how they communicate.
// Run container connected to 'my-net'
docker run -d --name container1 --network my-net nginx
// Connect existing container to a network
docker network connect my-net container2
Inspect details of networks and containers to troubleshoot or understand network setups.
// Inspect 'my-net' network details
docker network inspect my-net
// Inspect container’s network settings
docker inspect container1
Containers on the same user-defined network can communicate via container name as hostname.
// From container1, ping container2 by name (assuming both on 'my-net')
docker exec -it container1 ping container2
Expose container ports to the host to allow external access to containerized services.
// Publish port 80 of container to host port 8080
docker run -d -p 8080:80 nginx
// Access service via http://localhost:8080
Docker supports various drivers and plugins for custom networking, including overlay and macvlan.
// List networks that use the overlay driver
docker network ls --filter driver=overlay
// Plugins can be installed to extend network capabilities
Overlay networks allow containers on different Docker hosts to communicate securely.
// Create overlay network for Docker Swarm
docker network create -d overlay my-overlay-net
// Use in swarm services to enable cross-host communication
Docker provides internal DNS to resolve container names on user-defined networks.
// Containers can resolve each other by container name automatically
// Custom DNS servers can be set in Docker daemon or per container
Secure container networks with firewalls, network policies, and restricted access.
// Use iptables rules on host to limit traffic
// Use Docker network options to limit container communication
// Example: --internal flag to create isolated network
docker network create --internal isolated-net
Docker Compose automatically creates a network for services in the same compose file.
# docker-compose.yml snippet
services:
  web:
    image: nginx
  db:
    image: mysql
# Both services share the default network and can access each other by service name
Common troubleshooting includes checking network configurations, container connectivity, and port conflicts.
// Check running containers and their networks
docker ps
// Check logs for network errors
docker logs container1
// Verify port conflicts on host
sudo netstat -tulpn | grep LISTEN
Using the host network driver removes network isolation and lets containers share the host's network stack.
// Run container with host network
docker run --net host -d nginx
// Container uses host IP and ports directly
Improve network speed by minimizing hops, using overlay efficiently, and configuring MTU settings.
// Tune MTU size if experiencing packet loss
// Optimize overlay network configurations for latency-sensitive apps
// Monitor network usage and bottlenecks using tools like cAdvisor or Prometheus
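As one concrete tuning knob, the bridge driver accepts an MTU option at network creation time (the value 1400 below is illustrative; match it to your underlay):

# Create a bridge network with a lower MTU, e.g. to fit inside a VPN tunnel
docker network create -o com.docker.network.driver.mtu=1400 tuned-net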
Docker storage allows containers to persist data beyond their lifecycle using volumes, bind mounts, and tmpfs.
# Volumes store data managed by Docker, ideal for persistent data
# Bind mounts directly map host directories to container paths
# tmpfs stores data in memory, non-persistent after container stops
Volumes are Docker-managed storage, bind mounts rely on host filesystem paths, and tmpfs is ephemeral memory storage.
# Volume example:
docker volume create myvolume
# Bind mount example (as a docker run flag):
docker run -v /host/path:/container/path myimage
# tmpfs example (as a docker run flag):
docker run --tmpfs /container/tmp myimage
Create volumes and mount them inside containers for data persistence.
# Create a volume named mydata
docker volume create mydata
# Run container with volume mounted
docker run -d -v mydata:/data busybox tail -f /dev/null
View details of volumes like mountpoints and usage.
docker volume inspect mydata
Delete unused volumes to free up space.
# Remove a volume
docker volume rm mydata
# Remove all unused volumes
docker volume prune
Use drivers to connect volumes to external storage systems or cloud providers.
# List installed plugins (including volume drivers)
docker plugin ls
# Create volume with a specific driver
docker volume create --driver local mylocalvolume
Store application data that survives container restarts or re-creation using volumes.
# Mount volume to persist database files
docker run -d -v dbdata:/var/lib/mysql mysql
Bind mount host directories to containers to share files or config.
docker run -d -v /home/user/config:/app/config myapp
Backup and restore volumes by copying data using temporary containers.
# Backup volume to tar archive
docker run --rm -v mydata:/data -v $(pwd):/backup busybox tar czf /backup/mydata.tar.gz -C /data .
# Restore volume from tar archive
docker run --rm -v mydata:/data -v $(pwd):/backup busybox sh -c "cd /data && tar xzf /backup/mydata.tar.gz"
Define volumes in docker-compose.yml to manage multi-container setups.
version: '3'
services:
  app:
    image: myapp
    volumes:
      - mydata:/app/data
volumes:
  mydata:
Ensure containers have appropriate permissions to read/write volume data.
# Adjust permissions on host directory before bind mounting
sudo chown -R 1000:1000 /host/data
Use shared volumes to let multiple containers access the same data.
docker volume create sharedvolume
docker run -d --name container1 -v sharedvolume:/data busybox tail -f /dev/null
docker run -d --name container2 -v sharedvolume:/data busybox tail -f /dev/null
Use volumes for persistent data, avoid bind mounts in production, and clean unused volumes regularly.
# Regularly prune unused volumes
docker volume prune
# Use named volumes instead of anonymous for clarity
Check volume mounts, permissions, and Docker daemon logs if volumes don’t behave as expected.
# Check container logs
docker logs container_name
# Inspect volume details
docker volume inspect volume_name
Use tmpfs mounts for data that should not be persisted and only live in memory.
docker run -d --tmpfs /tmp:rw,size=100m busybox tail -f /dev/null
Docker Compose is a tool to define and run multi-container Docker applications using a YAML file to configure your app’s services.
# docker-compose.yml example defines multiple containers and how they interact
version: "3"
services:
  web:
    image: nginx
    ports:
      - "80:80"
Docker Compose is installed separately or bundled with Docker Desktop for easy use on Windows/Mac/Linux.
// Check if Docker Compose is installed
docker-compose --version
// To install on Linux, run (example for Ubuntu):
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
The Compose file is YAML formatted and contains sections like version, services, networks, and volumes.
version: "3.9" # Compose file format version services: # Define containers here app: image: node:14 ports: - "3000:3000" volumes: # Define named volumes here db-data: networks: # Define custom networks front-end:
Each service is a container, specified with image, build context, ports, environment variables, volumes, and more.
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - db-data:/var/lib/mysql
  backend:
    build: ./backend
    ports:
      - "8000:8000"
    depends_on:
      - db
Networks let containers communicate; volumes persist data beyond container lifetimes.
networks:
  app-network:
    driver: bridge
volumes:
  db-data:
    driver: local
services:
  db:
    networks:
      - app-network
    volumes:
      - db-data:/var/lib/mysql
  app:
    networks:
      - app-network
Use docker-compose up to start all defined containers simultaneously with a single command.
// Start containers in foreground and show logs
docker-compose up
// Start containers in detached mode (background)
docker-compose up -d
Environment variables can be set directly in Compose or loaded from .env files to manage configuration.
services:
  app:
    image: myapp
    environment:
      - NODE_ENV=production
      - API_KEY=${API_KEY}   # Injected from .env file
Common Docker Compose commands control container lifecycle and interaction.
// Start services
docker-compose up -d
// Stop and remove containers, networks
docker-compose down
// View container logs
docker-compose logs -f
// Execute command inside running container
docker-compose exec app bash
Use Compose to run development services with live reload, debugging, and mounted source code.
services:
  web:
    build: .
    volumes:
      - ./:/usr/src/app   # Mount current directory into container
    ports:
      - "3000:3000"
    command: npm start    # Run development server
Scale services horizontally by running multiple container instances using docker-compose up --scale.
// Run 3 instances of the web service
docker-compose up --scale web=3 -d
Compose files evolve with versions; newer versions support more features but require compatible Docker Engine versions.
// Example header
version: "3.9"   # Latest stable version with features like secrets, configs
You can connect Compose services with existing Docker containers using shared networks or volumes.
// Create a network manually
docker network create shared-net
// Connect existing container to network
docker network connect shared-net existing-container
// Reference shared-net in docker-compose.yml
networks:
  shared-net:
    external: true
Define healthchecks to monitor service status and restart containers if needed.
services:
  db:
    image: mysql
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s
Use Compose to deploy full-stack applications with databases, APIs, frontends, and other services defined together.
# Typical docker-compose.yml with web, api, db services
version: "3.9"
services:
  web:
    build: ./web
    ports:
      - "80:80"
  api:
    build: ./api
    depends_on:
      - db
  db:
    image: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
Common troubleshooting steps include checking logs, inspecting network settings, and verifying volume mounts.
// View logs for a specific service
docker-compose logs web
// Check running containers
docker-compose ps
// Restart a failing container
docker-compose restart api
// Remove volumes if corrupted (use carefully)
docker volume rm projectname_pgdata
A Docker Registry is a service that stores and distributes Docker images.
// Docker images are pushed to and pulled from registries like Docker Hub or private servers
// Example:
docker pull nginx
Public registries are open for everyone; private registries restrict access to authorized users.
// Public example: Docker Hub (hub.docker.com)
// Private example: self-hosted registry or Docker Trusted Registry
Docker Hub is the default public registry for Docker images with millions of images available.
// Access Docker Hub: https://hub.docker.com/
// Search and download official or community images
Push your local Docker images to Docker Hub to share with others or use in deployments.
# Tag your image with your Docker Hub username and repo name
docker tag my-app majiduser/my-app:latest
# Log in to Docker Hub
docker login
# Push image to Docker Hub
docker push majiduser/my-app:latest
Download images from Docker registries using the pull command.
# Pull latest nginx image from Docker Hub
docker pull nginx:latest
Tags help version and organize images in registries.
# Tagging image with version
docker tag my-app majiduser/my-app:v1.0
Run your own private registry to securely store images internally.
# Start private registry container
docker run -d -p 5000:5000 --restart=always --name registry registry:2
# Push image to private registry
docker tag my-app localhost:5000/my-app
docker push localhost:5000/my-app
Use TLS certificates to encrypt communication between Docker clients and registries.
# Configure registry with TLS certs by mounting them
docker run -d -p 5000:5000 --restart=always --name registry \
  -v /certs:/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  registry:2
Authenticate users to access private registries with username/password or tokens.
# Login to a registry
docker login myregistry.example.com
Docker Trusted Registry (DTR) is an enterprise-grade Docker registry solution with advanced security and management features.
// DTR is deployed on Docker EE and integrates with RBAC and LDAP for access control
Regularly clean up unused images and perform maintenance for registry health.
# Garbage collect unused image layers (in private registry)
docker exec registry bin/registry garbage-collect /etc/docker/registry/config.yml
Mirror registries to improve availability and performance in different network locations.
// Configure Docker daemon with registry mirror (daemon.json)
{
  "registry-mirrors": ["https://mirror.gcr.io"]
}
The Docker Registry HTTP API allows programmatic access to image repositories.
// Example: List tags for a repository
curl https://registry.hub.docker.com/v2/repositories/library/nginx/tags/
Use CI/CD pipelines to automate building, tagging, and pushing Docker images.
# Example GitHub Actions snippet to build & push Docker image
steps:
  - uses: actions/checkout@v2
  - name: Build image
    run: docker build -t majiduser/my-app:${{ github.sha }} .
  - name: Login to Docker Hub
    uses: docker/login-action@v1
    with:
      username: ${{ secrets.DOCKER_USERNAME }}
      password: ${{ secrets.DOCKER_PASSWORD }}
  - name: Push image
    run: docker push majiduser/my-app:${{ github.sha }}
Common issues include authentication failures, TLS errors, or image not found problems.
// Check Docker daemon logs for errors
journalctl -u docker.service -f
// Verify image tags and repository names
docker images
Docker security involves protecting container environments from unauthorized access, vulnerabilities, and attacks.
// Best practices:
// - Use minimal base images
// - Regularly update images and Docker Engine
// - Isolate containers properly
User namespaces map container users to different host users to improve isolation and limit privileges.
// Enable user namespaces in daemon.json
{
  "userns-remap": "default"
}
Use Docker secrets to safely manage sensitive data like passwords and API keys in Swarm mode.
// Create a secret
echo "my_password" | docker secret create db_password -
// Use secret in service
docker service create --name mydb --secret db_password myimage
Protect the Docker daemon by limiting access, enabling TLS, and avoiding running as root.
// Example: Start Docker daemon with TLS
dockerd --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=0.0.0.0:2376
Use official images, minimize layers, and avoid embedding secrets in images to improve security.
// Pull official minimal image
docker pull alpine:latest
Use tools like Docker Scan, Trivy, or Clair to detect vulnerabilities in images.
// Scan image with Docker Scan
docker scan myimage:latest
Docker Content Trust (DCT) ensures images are signed and verified before use.
// Enable DCT
export DOCKER_CONTENT_TRUST=1
// Pull signed image
docker pull myimage:latest
Implement RBAC in Docker Enterprise or Kubernetes to control user permissions effectively.
// Example: Kubernetes RBAC role binding
kubectl create rolebinding dev-binding --clusterrole=edit --user=devuser --namespace=dev
Use Linux security modules to restrict container capabilities and system calls.
// Run container with a custom seccomp profile
docker run --security-opt seccomp=default.json myimage
// Example AppArmor profile enforcement
docker run --security-opt apparmor=profile_name myimage
Run containers with non-root users and minimal capabilities to reduce risks.
// Dockerfile example specifying non-root user
FROM alpine
RUN adduser -D appuser
USER appuser
CMD ["sh"]
Use environment variables carefully and prefer secrets management for sensitive data.
// Pass secret as environment variable (less secure)
docker run -e DB_PASS=my_password myimage
// Prefer Docker secrets or external vaults
Isolate container networks, use firewalls, and configure network policies to protect communication.
// Create user-defined network for isolation
docker network create isolated_net
// Run containers on isolated network
docker run --network=isolated_net myimage
Use Docker Bench Security to audit your Docker host against security best practices.
// Run Docker Bench Security
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sh docker-bench-security.sh
In production, monitor containers, limit resource usage, and regularly update images and hosts.
// Example: Limit container CPU and memory
docker run --memory=512m --cpus=1 myimage
Enable logging and auditing to track Docker daemon events and container activity for security analysis.
// Enable Docker daemon audit logging (Linux example)
auditctl -w /var/run/docker.sock -p rwxa
Docker Swarm is Docker’s native clustering and orchestration tool. It allows you to manage a cluster of Docker engines as a single virtual system.
// Docker Swarm enables container orchestration with features like
// scaling, load balancing, and service discovery.
The architecture includes manager nodes that control the cluster and worker nodes that run services. Managers maintain cluster state and schedule tasks.
// Components:
// - Manager nodes: maintain cluster state, handle orchestration
// - Worker nodes: execute tasks (containers) assigned by managers
Start a Swarm by initializing the first manager node using the Docker CLI.
// Initialize Swarm on manager node
docker swarm init --advertise-addr <MANAGER-IP>
// Example
docker swarm init --advertise-addr 192.168.1.100
Add nodes to the Swarm cluster using join tokens provided by the manager.
// Get join token for workers
docker swarm join-token worker
// Add worker node (token and manager address come from the command above)
docker swarm join --token <TOKEN> <MANAGER-IP>:2377
Deploy applications as services which run containers replicated across nodes.
// Deploy nginx service with 3 replicas
docker service create --name webserver --replicas 3 -p 80:80 nginx
Change the number of service replicas to scale applications up or down.
// Scale service to 5 replicas
docker service scale webserver=5
Swarm has built-in DNS-based service discovery and load balances requests between replicas.
// Docker Swarm automatically load balances incoming requests
// to the published port among all replicas of the service.
Update services with zero downtime using rolling updates. Rollback if something goes wrong.
// Update image version with rolling update
docker service update --image nginx:1.21 webserver
// Rollback to previous version
docker service rollback webserver
Store sensitive data like passwords securely using Docker secrets.
// Create a secret
echo "my_password" | docker secret create db_password -
// Use secret in service
docker service create --name db --secret db_password mongo
Swarm creates overlay networks to enable secure communication between containers on different nodes.
// Create overlay network
docker network create -d overlay my_overlay
// Attach service to network
docker service create --name app --network my_overlay my_image
Use Docker commands and external tools to monitor service status and collect logs.
// List services and replicas
docker service ls
// Check tasks of a service
docker service ps webserver
// View logs of a service
docker service logs webserver
Overlay networks span multiple Docker hosts allowing containers to communicate securely across nodes.
// Overlay network provides multi-host container communication.
// Example:
docker network create -d overlay my_overlay
Use volumes or external storage plugins to manage persistent data in Swarm services.
// Create volume
docker volume create db_data
// Use volume in service
docker service create --name db --mount type=volume,source=db_data,target=/data/db mongo
Swarm uses mutual TLS encryption for node communication and supports role-based access control.
// Swarm auto-encrypts traffic between nodes with TLS.
// You can also rotate certificates:
docker swarm ca --rotate
Common troubleshooting commands help identify issues with nodes, services, and networks.
// Check node status
docker node ls
// Inspect a service
docker service inspect webserver
// View detailed logs
docker service logs --details webserver
Kubernetes is an open-source container orchestration system that automates deployment, scaling, and management of containerized applications.
// Kubernetes clusters manage containerized apps across multiple hosts,
// allowing automatic scaling and self-healing.
Both orchestrate containers, but Kubernetes offers more features, flexibility, and a larger ecosystem, while Docker Swarm is simpler and integrates tightly with Docker.
// Kubernetes:
// - More complex, feature-rich
// - Supports auto-scaling, rolling updates
// Docker Swarm:
// - Easier setup
// - Basic orchestration features
In Kubernetes, Docker containers run inside Pods, the smallest deployable units.
apiVersion: v1
kind: Pod
metadata:
  name: my-docker-pod
spec:
  containers:
    - name: my-container
      image: nginx:latest
Docker Desktop includes a built-in Kubernetes cluster that can be enabled for local development.
// Enable Kubernetes via Docker Desktop settings
// Use kubectl to interact with the local cluster
kubectl get nodes
Pods run containers, Deployments manage the Pod lifecycle, and Services expose Pods for networking.
# Deployment example
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
CRI is the API between Kubernetes and container runtimes like Docker, containerd, or CRI-O.
// Kubernetes uses CRI to start, stop, and manage containers
// Dockershim was deprecated in Kubernetes v1.20 and removed in v1.24 in favor of containerd
Build Docker images locally or in CI/CD and push to registries to deploy on Kubernetes.
// Dockerfile example
FROM node:16-alpine
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .
CMD ["node", "server.js"]

// Build and push image
docker build -t myapp:latest .
docker tag myapp:latest myregistry/myapp:latest
docker push myregistry/myapp:latest
ConfigMaps store non-sensitive config data, Secrets store sensitive info like passwords.
// Create ConfigMap from file
kubectl create configmap app-config --from-file=config.properties
// Use ConfigMap in a Pod spec
envFrom:
  - configMapRef:
      name: app-config
// Create Secret
kubectl create secret generic db-password --from-literal=password='mypassword'
Persistent Volumes (PV) provide storage; Persistent Volume Claims (PVC) request storage for Pods.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Pods communicate via an internal network; Services provide stable IPs and load balancing.
# Service example exposing Pods
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
kubectl is the CLI to control Kubernetes; use it to manage Pods running Docker containers.
// Common commands
kubectl get pods                           # List pods
kubectl describe pod my-docker-pod         # Pod details
kubectl logs my-docker-pod                 # View container logs
kubectl exec -it my-docker-pod -- /bin/sh  # Access container shell
Helm is a package manager for Kubernetes to deploy complex apps using charts referencing Docker images.
// Install Helm chart example
helm repo add stable https://charts.helm.sh/stable
helm install my-nginx stable/nginx
// Helm charts use values.yaml to specify Docker image tags and configs
Use tools like Prometheus and Grafana to monitor container metrics and health.
// Prometheus scrapes metrics from pods
// Grafana dashboards visualize container CPU, memory, network
Use Role-Based Access Control (RBAC), network policies, and Secrets management to secure Kubernetes clusters.
# Example RBAC rule to limit access
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
Common troubleshooting steps include checking pod logs, describing pods for events, and verifying resource limits.
// Troubleshoot pod issues
kubectl logs pod-name
kubectl describe pod pod-name
kubectl get events --sort-by=.metadata.creationTimestamp
Docker simplifies CI by providing consistent environments for building, testing, and deploying code. Containers ensure the same runtime across all pipeline stages.
// Example: Use Docker container as CI build environment
docker run -v $(pwd):/app -w /app node:16 npm test
Automate image builds triggered by code changes, ensuring that new features are always packaged in fresh containers.
# Sample Docker build in CI script
docker build -t myapp:${CI_COMMIT_SHA} .
Run unit and integration tests inside containers to isolate dependencies and environments from the host machine.
docker run --rm myapp:${CI_COMMIT_SHA} npm test
Docker Compose can spin up multi-container setups (databases, caches) needed for integration tests within CI pipelines.
docker-compose -f docker-compose.test.yml up --abort-on-container-exit
After building, push images to registries like Docker Hub or private repos so they can be deployed later.
docker login -u $DOCKER_USER -p $DOCKER_PASS
docker push myapp:${CI_COMMIT_SHA}
Deploy new container images automatically to staging or production servers using scripts or orchestration tools.
ssh user@server "docker pull myapp:${CI_COMMIT_SHA} && docker run -d myapp:${CI_COMMIT_SHA}"
Jenkins pipelines can use Docker agents to run builds in isolated containers and manage image lifecycles.
// Jenkinsfile example snippet
pipeline {
  agent {
    docker { image 'node:16' }
  }
  stages {
    stage('Build') {
      steps {
        sh 'npm install'
        sh 'npm test'
      }
    }
  }
}
GitLab runners support Docker executors that build and test images seamlessly as part of CI workflows.
# .gitlab-ci.yml snippet
build:
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker build -t myapp:$CI_COMMIT_SHA .
GitHub Actions workflows can build, test, and push Docker images using official actions or custom scripts.
# GitHub Actions job example
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build Docker image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Push Docker image
        run: docker push myapp:${{ github.sha }}
Ensure Docker images and pipelines are scanned for vulnerabilities, and avoid storing secrets in images or logs.
// Scan images with tools like Trivy
trivy image myapp:${CI_COMMIT_SHA}
Use secure vaults or pipeline secret managers to inject credentials without exposing them in code or Dockerfiles.
# Example: Using environment variables in CI
docker run -e DB_PASSWORD=$DB_PASSWORD myapp:${CI_COMMIT_SHA}
Deploy new versions alongside the current one and switch traffic after testing, enabling instant rollbacks if needed.
// Run old and new containers on different ports
docker run -d -p 8080:80 myapp:v1
docker run -d -p 8081:80 myapp:v2
// Switch proxy to point to port 8081 for new version
Gradually route a small percentage of users to a new container version to monitor stability before full rollout.
// Deploy new container with limited replicas
kubectl set image deployment/myapp myapp=myapp:v2
kubectl scale deployment myapp --replicas=10
// Use service mesh or load balancer to route 10% traffic to v2
Use logging and monitoring tools (e.g., Prometheus, ELK) to track container health, job status, and pipeline metrics.
// Example: Check container logs
docker logs container_id
Common issues include network errors, permission problems, or image caching. Use detailed logs and retry strategies to debug.
// Restart docker service if needed
sudo systemctl restart docker
// Clear cached images to force rebuild
docker builder prune -a
Monitoring containers helps ensure application health, resource optimization, and quick issue detection in containerized environments.
// Monitor container health to avoid downtime
// Track CPU, memory, and network usage for efficiency
Docker logs show output from container processes (stdout/stderr) useful for debugging and auditing.
// View logs of a running container
docker logs container_id_or_name
Docker supports multiple log drivers to route logs to different destinations like JSON files, syslog, or remote servers.
// Run container with syslog log driver
docker run --log-driver=syslog nginx
ELK (Elasticsearch, Logstash, Kibana) collects, indexes, and visualizes logs from multiple containers centrally.
// Setup Logstash to receive Docker logs
// Use Filebeat or a Docker logging driver to forward logs to Logstash
// Visualize logs in Kibana dashboards
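One common wiring is a logging driver that Logstash can ingest directly; a sketch using the gelf driver, where the endpoint address is an assumption about your Logstash GELF input:

// Forward container logs to a Logstash GELF input (hypothetical endpoint)
docker run -d --log-driver=gelf --log-opt gelf-address=udp://logstash.example.com:12201 nginx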
Prometheus collects container metrics for real-time monitoring and alerting.
// Deploy Prometheus with Docker to scrape metrics endpoints
// Configure Prometheus YAML to monitor container exporters
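A minimal sketch of such a Prometheus config, assuming a cAdvisor container exports metrics on port 8080 under the hostname cadvisor:

# prometheus.yml (sketch)
scrape_configs:
  - job_name: "containers"
    static_configs:
      - targets: ["cadvisor:8080"]   # hypothetical cAdvisor endpoint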
Grafana visualizes Prometheus data with dashboards showing CPU, memory, network, and disk usage per container.
// Import Docker monitoring dashboards into Grafana
// Customize panels to display desired metrics
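Grafana itself runs comfortably as a container; a quick local setup (default credentials are admin/admin on first login):

// Run Grafana locally and open http://localhost:3000
docker run -d -p 3000:3000 grafana/grafana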
Healthchecks let Docker monitor container status and restart unhealthy containers automatically.
// Example Dockerfile HEALTHCHECK
HEALTHCHECK --interval=30s --timeout=5s CMD curl -f http://localhost/ || exit 1
Track container resource consumption to optimize allocation and detect anomalies.
// View real-time stats
docker stats container_id_or_name
Set alerts in monitoring tools to notify when containers exceed resource limits or become unhealthy.
// Configure Prometheus alert rules for high CPU or memory usage
// Send alerts via email, Slack, or PagerDuty integrations
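A sketch of such a Prometheus alerting rule; the metric name assumes cAdvisor is exporting container metrics, and the threshold is illustrative:

# alert-rules.yml (sketch)
groups:
  - name: container-alerts
    rules:
      - alert: ContainerHighMemory
        expr: container_memory_usage_bytes > 500000000   # ~500MB, illustrative threshold
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Container memory usage above ~500MB for 5 minutes"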
`docker stats` provides a live stream of resource usage metrics for running containers.
// Example usage:
docker stats
// Output includes CPU %, memory usage, network IO, and block IO
Use appropriate log drivers, limit log size, and centralize logs for easier management.
// Example: Limit log size and rotation
docker run --log-opt max-size=10m --log-opt max-file=3 nginx
Fluentd aggregates logs from containers and forwards them to various destinations like Elasticsearch or S3.
// Run container with Fluentd logging
docker run --log-driver=fluentd nginx
Analyze container logs to identify errors, crashes, or performance issues.
// Check logs for specific errors
docker logs container_id | grep "ERROR"
// Combine with timestamps for detailed debugging
docker logs --timestamps container_id
Monitor multi-node Docker Swarm clusters for service health, load balancing, and resource use.
// Use `docker service ps` to check service tasks
docker service ps service_name
// Integrate Prometheus and Grafana for Swarm-wide monitoring
Use monitoring data to tune container resource limits, scaling policies, and network configurations for best performance.
# Adjust CPU/memory limits in Docker Compose
services:
  app:
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
      replicas: 3
      update_config:
        parallelism: 2
      restart_policy:
        condition: on-failure
Multi-stage builds let you use multiple FROM statements to reduce final image size by copying only necessary artifacts.
// Example Dockerfile snippet:
FROM node:18 AS builder
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build

FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html
// The final image contains only the built files, not build tools
ARG defines build-time variables, ENV sets environment variables available at runtime.
// Example:
ARG APP_VERSION=1.0
ENV APP_ENV=production
RUN echo "Building version $APP_VERSION"
// Use ARG for build customization and ENV for container runtime config
Order Dockerfile instructions to leverage Docker layer caching and speed up builds.
// Place less frequently changed commands early (like installing dependencies)
// Place frequently changed commands later (like copying source code)
COPY package.json .
RUN npm install
COPY . .
// This caches the npm install step unless package.json changes
Use LABEL instructions to add metadata such as maintainer, version, or description.
// Example:
LABEL maintainer="majid@example.com"
LABEL version="1.0"
LABEL description="My awesome app container"
// Labels help identify and manage images
BuildKit is an advanced Docker build backend that improves performance and features like cache mounts and secret handling.
// Enable BuildKit (Linux/macOS terminal):
export DOCKER_BUILDKIT=1
// Then run:
docker build .
// Allows advanced features like --mount=type=cache for dependency caching
Use ARG and shell commands with conditional logic to customize builds.
// Example:
ARG INSTALL_EXTRA=false
RUN if [ "$INSTALL_EXTRA" = "true" ]; then \
      apt-get update && apt-get install -y extra-package; \
    fi
// Build with extra package:
docker build --build-arg INSTALL_EXTRA=true .
Avoid embedding secrets directly; use BuildKit secrets or environment variables.
// BuildKit secret example:
RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
// Build with secret:
docker build --secret id=mysecret,src=secret.txt .
ONBUILD defines triggers that run when the image is used as a base for another build.
// Example:
ONBUILD COPY . /app
ONBUILD RUN npm install
// Useful for base images that prepare build steps for child images
COPY copies files from build context; ADD can also extract archives and fetch URLs.
// Prefer COPY for clarity and simplicity
COPY ./app /app
// Use ADD only if you need automatic archive extraction
ADD app.tar.gz /app
Use minimal base images like alpine, clean caches, and remove unnecessary files to reduce image size.
// Example:
FROM python:3.11-alpine
RUN apk add --no-cache build-base \
    && pip install --no-cache-dir -r requirements.txt \
    && apk del build-base
// This reduces image size by removing build tools after install
Use intermediate containers, build with --progress=plain, and add debugging commands.
// Build with verbose output:
docker build --progress=plain .
// Temporarily add a debug command to the Dockerfile:
RUN echo "Debug: current dir contents:" && ls -la
// Run intermediate image interactively:
docker build --target=builder -t temp-image .
docker run -it temp-image /bin/sh
Authenticate with private registries to pull base images or push images securely.
// Docker login to private registry:
docker login myregistry.example.com
// Use private image as base:
FROM myregistry.example.com/myimage:latest
Use ARGs, multi-stage builds, and modular Dockerfiles to support reuse and customization.
// Example passing app directory via ARG
ARG APP_DIR=app
COPY $APP_DIR /app
// Use base Dockerfiles with ONBUILD to customize downstream images
Use buildx and manifest to build multi-arch images (amd64, arm64, etc.).
// Enable buildx
docker buildx create --use
// Build multi-arch image
docker buildx build --platform linux/amd64,linux/arm64 -t myapp:latest --push .
// Allows running image on multiple architectures
Use tools like hadolint to check Dockerfile best practices automatically.
// Run hadolint locally
hadolint Dockerfile
// Integrate hadolint in CI pipelines to prevent bad Dockerfiles
The Docker REST API allows programmatic control of the Docker daemon, enabling management of containers, images, networks, and more.
// Example: GET version info via curl
curl --unix-socket /var/run/docker.sock http://localhost/v1.41/version
You can interact with the Docker API using HTTP requests over Unix socket or TCP to manage Docker remotely.
// Example: List all containers
curl --unix-socket /var/run/docker.sock http://localhost/v1.41/containers/json
Python SDK simplifies interaction with Docker API for scripting and automation.
import docker

client = docker.from_env()

# List containers
for container in client.containers.list():
    print(container.name, container.status)

# Run a new container
client.containers.run("nginx", detach=True)
The Go SDK provides Go-native Docker API bindings for building Docker tools.
// Sample code snippet (Go)
package main

import (
    "context"
    "fmt"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/client"
)

func main() {
    cli, _ := client.NewClientWithOpts(client.FromEnv)
    containers, _ := cli.ContainerList(context.Background(), types.ContainerListOptions{})
    for _, container := range containers {
        fmt.Println(container.ID, container.Image)
    }
}
The Node.js SDK allows managing Docker with JavaScript in server-side apps.
const Docker = require('dockerode');
const docker = new Docker();

docker.listContainers((err, containers) => {
  containers.forEach(container => {
    console.log(container.Id, container.Image);
  });
});
You can create CLI tools, dashboards, or automation scripts by calling Docker API endpoints.
// Example: Create a custom dashboard showing running containers and stats
// Use Docker API + frontend frameworks (React, Vue) to build UI
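A minimal backend sketch for such a dashboard using the Python SDK shown earlier; it gathers the data a frontend would render:

import docker

client = docker.from_env()

# Collect name, status, and image tags for each running container —
# the kind of payload a dashboard backend would expose as JSON
for c in client.containers.list():
    print({"name": c.name, "status": c.status, "image": c.image.tags})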
Securing Docker API access involves TLS certificates, user roles, and restricting API exposure.
// Enable TLS with client certificates for remote API access
// Limit socket permissions to trusted users only
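A sketch of the corresponding daemon.json (certificate paths are assumptions about your setup):

{
  "tlsverify": true,
  "tlscacert": "/etc/docker/certs/ca.pem",
  "tlscert": "/etc/docker/certs/server-cert.pem",
  "tlskey": "/etc/docker/certs/server-key.pem",
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"]
}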
Subscribe to real-time events from Docker daemon like container start, stop, and image pulls.
// Example: Listen to events with curl
curl --unix-socket /var/run/docker.sock http://localhost/v1.41/events
// Or with the Python SDK
for event in client.events(decode=True):
    print(event)
Start, stop, inspect, and remove containers programmatically using API calls or SDK functions.
// Example: Stop container via REST API
curl -X POST --unix-socket /var/run/docker.sock http://localhost/v1.41/containers/{id}/stop
// Using Python SDK
container = client.containers.get('container_id')
container.stop()
Pull, push, inspect, and remove Docker images via API or SDK for automation workflows.
// Pull image with Node.js SDK
docker.pull('nginx', (err, stream) => {
  // handle stream events for progress
});
Manage Docker Swarm clusters, services, and nodes via Docker API for container orchestration.
// Create a new service in swarm mode
curl -X POST --unix-socket /var/run/docker.sock \
  -H "Content-Type: application/json" \
  http://localhost/v1.41/services/create \
  -d '{"Name": "web", "TaskTemplate": {"ContainerSpec": {"Image": "nginx"}}}'
Retrieve container stats, resource usage, and daemon info to monitor Docker health.
// Get container stats (CPU, memory)
curl --unix-socket /var/run/docker.sock "http://localhost/v1.41/containers/{id}/stats?stream=false"
Respect API rate limits, cache responses, and implement retries to avoid overwhelming the daemon.
// Avoid excessive polling; use events API for real-time updates
// Implement exponential backoff on retries
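A sketch of exponential backoff around a daemon call, using the Python SDK:

import time
import docker

client = docker.from_env()

def list_containers_with_backoff(max_retries=5):
    # Retry with exponentially growing delays instead of hammering the daemon
    delay = 1
    for _ in range(max_retries):
        try:
            return client.containers.list()
        except docker.errors.APIError:
            time.sleep(delay)
            delay *= 2
    raise RuntimeError("Docker API unavailable after retries")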
Trigger external workflows by forwarding Docker events to webhook endpoints.
// Use event listeners to post JSON payloads to webhook URLs
// Example: On container start, send notification to monitoring system
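A sketch of that pattern with the Python SDK; the webhook URL is a hypothetical endpoint:

import json
import urllib.request
import docker

client = docker.from_env()
WEBHOOK_URL = "https://hooks.example.com/docker"  # hypothetical endpoint

# Forward container start events as JSON payloads to the webhook
for event in client.events(decode=True):
    if event.get("Type") == "container" and event.get("Action") == "start":
        req = urllib.request.Request(
            WEBHOOK_URL,
            data=json.dumps(event).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)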
Debug API issues by checking socket permissions, verifying endpoints, and examining error messages.
// Use curl verbose mode for HTTP debugging
curl -v --unix-socket /var/run/docker.sock http://localhost/v1.41/containers/json
// Check Docker daemon logs for errors
sudo journalctl -u docker.service
AWS provides managed container services like ECS (Elastic Container Service) and EKS (Elastic Kubernetes Service) to run Docker containers at scale.
// Example: Deploying a Docker image on ECS using AWS CLI
aws ecs create-cluster --cluster-name my-cluster
aws ecs register-task-definition --cli-input-json file://task-def.json
aws ecs run-task --cluster my-cluster --task-definition my-task
Azure offers AKS (Azure Kubernetes Service) and ACI (Azure Container Instances) for container orchestration and serverless container hosting.
// Example: Create AKS cluster using Azure CLI
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 3 --generate-ssh-keys
// Deploy container to ACI
az container create --resource-group myResourceGroup --name mycontainer --image myimage:latest --cpu 1 --memory 1.5
Google Kubernetes Engine (GKE) provides managed Kubernetes for deploying Docker containers in the cloud.
// Create GKE cluster
gcloud container clusters create my-cluster --num-nodes=3
// Deploy Docker image to GKE
kubectl run myapp --image=myimage:latest --port=80
Terraform can automate the provisioning of cloud infrastructure to deploy Docker containers.
// Example Terraform snippet for AWS ECS service
resource "aws_ecs_cluster" "example" {
  name = "example-cluster"
}

resource "aws_ecs_task_definition" "task" {
  family                = "my-task"
  container_definitions = file("container-def.json")
}
Run Docker containers directly on cloud virtual machines for flexible but less managed deployments.
// SSH into cloud VM and run container
ssh user@cloud-vm-ip
docker run -d -p 80:80 myimage:latest
Serverless container services like AWS Fargate and Azure Container Instances run containers without managing servers.
// Example AWS Fargate run command
aws ecs run-task --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-abc],securityGroups=[sg-123]}" \
  --task-definition my-task
Use cloud storage services (EBS, Azure Disk, Google Persistent Disk) as persistent volumes for Docker containers.
// Attach EBS volume to ECS task via volume definition in task JSON
// Mount Azure Disk in AKS using Persistent Volume Claims (PVC)
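For the AKS case, a PVC that requests an Azure Disk through a built-in storage class might look like this (the class name assumes a standard AKS cluster):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-disk-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-csi   # built-in AKS class backed by Azure Disk
  resources:
    requests:
      storage: 5Gi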
Use cloud-native CI/CD pipelines like AWS CodePipeline, Azure DevOps, or Google Cloud Build to build and deploy Docker images.
# Example: Google Cloud Build config to build and push Docker image
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/my-project/myimage', '.']
images:
  - 'gcr.io/my-project/myimage'
Configure virtual networks, load balancers, and firewall rules to securely expose Docker containers in cloud environments.
// AWS example: Create ALB (Application Load Balancer) forwarding to ECS tasks
// Azure: Use Azure Load Balancer or Application Gateway with AKS
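On the Kubernetes-based services (EKS, AKS, GKE), a Service of type LoadBalancer asks the cloud provider to provision the load balancer for you; a minimal sketch (the app label is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer   # cloud provider provisions an external load balancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80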
Monitor container performance and health using cloud monitoring tools like CloudWatch, Azure Monitor, or Stackdriver.
// Enable CloudWatch Container Insights for ECS or EKS clusters
// Use Prometheus and Grafana for Kubernetes monitoring
Optimize cloud spending by right-sizing containers, using spot instances, and auto-scaling workloads.
// Use AWS Spot Instances with ECS to reduce cost
// Scale down unused nodes in Kubernetes clusters
Implement security best practices such as least privilege IAM roles, secrets management, and network policies.
// Use IAM roles for ECS task permissions
// Store secrets in AWS Secrets Manager or Azure Key Vault
// Configure Kubernetes Network Policies
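A sketch of a Kubernetes NetworkPolicy that admits traffic to database pods only from pods labeled app=api (both labels are assumptions):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-api-only
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api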
Push and pull Docker images using cloud registries like Amazon ECR, Azure Container Registry, or Google Container Registry.
# Authenticate Docker to AWS ECR
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com
# Push image
docker tag myimage:latest <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/myimage:latest
docker push <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/myimage:latest
Deploy Docker containers across on-premises and multiple cloud providers for flexibility and redundancy.
// Use Kubernetes Federation or multi-cluster tools to manage hybrid deployments
// Configure VPN or Direct Connect for secure network connectivity
Diagnose issues using logs, monitoring, cloud provider dashboards, and container-specific tools.
// Check container logs via cloud CLI or dashboard
kubectl logs mypod
// Use cloud provider console for network and instance status
Keep containers single-purpose and lightweight, each running one process or service for easier maintenance and scaling.
// Example: Separate containers for web and database
// Web container runs only web server
// DB container runs only database service
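In Compose terms, that separation looks like this (image choices are illustrative):

services:
  web:
    image: nginx      # runs only the web server
  db:
    image: postgres   # runs only the database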
Images are immutable once built. Instead of changing running containers, build a new image for each update, which ensures consistency and easy rollback.
// When updating the app:
// 1. Build new image with changes
docker build -t myapp:v2 .
// 2. Deploy new container from updated image
docker run -d myapp:v2
Never store secrets in images or code. Use Docker secrets, environment variables, or external vaults to manage sensitive data securely.
// Using Docker secrets example:
// Create secret
echo "my_password" | docker secret create db_password -
// Reference in service (Docker Swarm stack file)
services:
  db:
    image: mysql
    secrets:
      - db_password
secrets:
  db_password:
    external: true
Pass configuration via environment variables but avoid putting secrets directly in Dockerfiles or repos.
# docker-compose.yml example
services:
  app:
    environment:
      - DB_HOST=db
      - DB_USER=root
      - DB_PASS=${DB_PASS}   # Loaded from .env file or CI pipeline
Tag images with semantic versioning or commit hashes to track builds and roll back if necessary.
// Tagging with version
docker build -t myapp:1.0.0 .
// Tagging with git commit SHA
docker build -t myapp:abc123def .
Order Dockerfile commands to maximize caching for faster builds and smaller images.
# Example Dockerfile layering order
FROM node:16
WORKDIR /app
# Install dependencies early to leverage cache
COPY package.json package-lock.json ./
RUN npm install
# Copy app source last so changes here don't bust cache
COPY . .
CMD ["node", "server.js"]
Use volumes or bind mounts to persist data outside containers so data is not lost when containers restart.
# docker-compose.yml volume example
services:
  db:
    image: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
Use Docker networks to isolate and control communication between containers for security and manageability.
// Creating a user-defined bridge network
docker network create my_network
// Run containers attached to this network
docker run -d --net my_network --name app myapp
docker run -d --net my_network --name db postgres
Use orchestration tools like Docker Swarm or Kubernetes to scale containers horizontally and manage load balancing.
// Docker Compose scale example
docker-compose up --scale web=3 -d
// Kubernetes example (kubectl scale)
kubectl scale deployment myapp --replicas=3
Centralize logs using tools like ELK stack, and monitor container health and resource usage with Prometheus, Grafana, or Docker stats.
// View live container stats
docker stats
// Configure logging driver in Docker Compose
services:
  app:
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
Define healthchecks to monitor container status and configure restart policies to ensure resilience.
# Dockerfile healthcheck example
HEALTHCHECK --interval=30s --timeout=10s CMD curl -f http://localhost/ || exit 1

# Docker Compose restart policy
services:
  app:
    restart: unless-stopped
Use logs, exec into running containers, or deploy debug versions of containers to troubleshoot issues in live environments.
// Exec into container shell
docker exec -it container_id /bin/bash
// Tail logs for errors
docker logs -f container_id
Regularly back up volumes and images, and prepare scripts or orchestration manifests for quick recovery.
// Backup a volume
docker run --rm -v pgdata:/data -v $(pwd):/backup busybox tar czf /backup/pgdata_backup.tar.gz /data
// Restore a volume
docker run --rm -v pgdata:/data -v $(pwd):/backup busybox tar xzf /backup/pgdata_backup.tar.gz -C /
Keep Dockerfiles clean, documented, and modular. Avoid installing unnecessary packages and leverage multi-stage builds.
# Multi-stage build example
FROM node:16 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Final stage: copy only the build output into a slim image
FROM node:16-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/server.js"]
Manage container creation, updates, restarts, and removal systematically with proper tagging, orchestration, and automation.
// Stop and remove Compose containers and their images
docker-compose down --rmi all
// Remove unused images, containers, and networks
docker system prune -a
Errors like “image not found” or “permission denied” are common; reading the exact error message is usually the fastest route to a fix.
// Example error: "Error response from daemon: pull access denied"
// Fix: check the image name and your registry credentials
docker pull myrepo/myimage:tag
When containers fail to start, check logs, environment variables, and entrypoint scripts.
// Check container logs
docker logs container_id
// Inspect container configuration and state
docker inspect container_id
Build failures often come from syntax errors in the Dockerfile or from files missing at the paths COPY expects.
// Build the image and read the error output
docker build -t my-app .
// Common fix: verify Dockerfile syntax and COPY paths
Containers may fail to connect due to network misconfigurations or firewall rules.
// List Docker networks
docker network ls
// Inspect a network
docker network inspect bridge
// Test connectivity from inside a container
docker exec -it container_id ping google.com
Volume mounting problems can cause data loss or permission issues.
// List volumes
docker volume ls
// Inspect volume details
docker volume inspect volume_name
// Ensure correct permissions on host directories used as mounts
ls -ld /host/path
Containers might be killed or throttled if CPU, memory, or disk quotas are exceeded.
// Check live resource usage
docker stats container_id
// Limit resources at run time
docker run --memory="500m" --cpus="1" my-app
Daemon crashes or hangs can be diagnosed by examining daemon logs and restarting the service.
// View Docker daemon logs (Linux with systemd)
journalctl -u docker.service -f
// Restart the Docker daemon
sudo systemctl restart docker
Crash loops often stem from application errors or resource exhaustion.
// View restart count and status
docker ps -a
// Check logs for the crash reason
docker logs container_id
Tools like docker logs, docker inspect, and third-party debuggers help track down issues.
// Use docker inspect for detailed container info
docker inspect container_id
// Attach a shell to a running container for live debugging
docker exec -it container_id /bin/bash
Swarm issues often relate to node communication or service deployment errors.
// Check swarm node status
docker node ls
// Inspect service logs
docker service logs service_name
Compose issues can be debugged by checking individual container logs and the docker-compose.yml file.
// View logs of all services
docker-compose logs -f
// Validate the docker-compose file
docker-compose config
Authentication failures or network blocks can prevent pushing/pulling images.
// Log in again if authentication fails
docker login
// Also check firewall and proxy settings that may block registry access
Security issues may include permission denied errors or container escape vulnerabilities.
// Run the container as a non-root user
docker run -u $(id -u):$(id -g) my-app
// Review Docker security logs and audit rules
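Beyond running as a non-root user, standard docker run hardening flags shrink the attack surface. A sketch (my-app is a placeholder image):
// --read-only mounts the root filesystem read-only
// --cap-drop ALL drops all Linux capabilities
// --security-opt no-new-privileges blocks privilege escalation
docker run --read-only --cap-drop ALL --security-opt no-new-privileges my-app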
Slow containers may be caused by resource starvation or inefficient images.
// Profile resource usage
docker stats
// Then optimize the Dockerfile and image layers
Tools like Dive, cAdvisor, and Sysdig help analyze images, container performance, and system health.
// Use Dive to analyze image layers
dive my-app:latest
// Use cAdvisor for container metrics
docker run --rm -p 8080:8080 google/cadvisor
The Docker ecosystem includes tools and platforms that support container lifecycle, management, orchestration, and security.
// Ecosystem components include Docker Engine, Compose, Swarm, Kubernetes integrations, registries, and monitoring tools
Docker Desktop provides a GUI for managing containers, images, and volumes, and it integrates with Kubernetes.
// Start Docker Desktop on your OS and manage containers from the GUI
// It also provides Docker CLI integration and automatic updates
Docker Compose lets you define multi-container applications using YAML files for easy setup and orchestration.
version: '3.8'
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
  db:
    image: postgres:latest
    environment:
      POSTGRES_PASSWORD: example
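With that file saved as docker-compose.yml, one command starts both services in the background:
// Start the stack and detach
docker-compose up -d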
Docker Machine (now deprecated) provisioned Docker hosts on cloud providers or local VMs. Docker Toolbox was an older bundle for legacy Windows/macOS systems.
// Create a Docker host with Machine on VirtualBox
docker-machine create --driver virtualbox myvm1
// Point the local CLI at this host
eval $(docker-machine env myvm1)
Portainer is a lightweight management UI to easily manage Docker hosts and Swarm clusters.
// Run the Portainer container
docker volume create portainer_data
docker run -d -p 9000:9000 --name portainer --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data portainer/portainer-ce
Dive helps inspect Docker images layer by layer and identify inefficiencies or bloat.
// Install Dive, then run it on an image
dive myimage:latest
Dockly is an interactive CLI tool to manage Docker containers, images, volumes, and networks from the terminal.
// Install Dockly globally via npm
npm install -g dockly
// Run Dockly
dockly
Runtimes such as containerd and CRI-O are used as alternatives to Docker Engine in Kubernetes environments.
// Kubernetes typically uses containerd or CRI-O to run containers
// Docker Engine itself uses containerd under the hood
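As a quick illustration, containerd ships with a low-level ctr client, so containers can be run without Docker at all. A minimal sketch, assuming containerd is installed and running:
// Pull an image directly through containerd
sudo ctr image pull docker.io/library/nginx:latest
// Run it as a container named nginx-test
sudo ctr run --rm docker.io/library/nginx:latest nginx-test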
Plugins extend Docker’s functionality with networking, storage, logging, and more.
// Example: install the Grafana Loki logging plugin
// (fluentd, by contrast, is a built-in logging driver and needs no plugin)
docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions
Docker offers cloud services for image storage, build automation, and container orchestration.
// Push an image to the Docker Hub cloud registry
docker login
docker tag myimage username/myimage:latest
docker push username/myimage:latest
Besides Docker Hub, registries like GitHub Container Registry, AWS ECR, and Google Container Registry are used to store container images.
// Push an image to GitHub Container Registry
docker login ghcr.io
docker tag myimage ghcr.io/username/myimage:latest
docker push ghcr.io/username/myimage:latest
Tools like Aqua Security, Sysdig, and Clair help scan and protect containers from vulnerabilities.
// Example: scan an image with Clair (simplified)
// clairctl analyze myimage:latest
// clairctl report myimage:latest
Automate scanning in CI/CD pipelines to catch vulnerabilities before deployment.
// Example GitHub Actions step to scan an image with Trivy
- name: Scan Docker image
  uses: aquasecurity/trivy-action@v0.6.0
  with:
    image-ref: myimage:latest
Tools like Prometheus, Grafana, cAdvisor, and Datadog monitor container metrics and performance.
// Run cAdvisor for container monitoring
docker run -d --name=cadvisor -p 8080:8080 \
  --volume=/var/run/docker.sock:/var/run/docker.sock:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  google/cadvisor:latest
Besides Docker Swarm, Kubernetes and Nomad are popular container orchestration platforms.
// Kubernetes example: deploy an app with kubectl
kubectl apply -f deployment.yaml
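For reference, a minimal sketch of the deployment.yaml that command could apply; the names myapp and myimage are placeholders:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myimage:latest
          ports:
            - containerPort: 3000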
Use Docker to create isolated, reproducible development environments easily.
# Dockerfile for a Node.js dev environment
FROM node:18
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]
Develop and run microservices independently in containers for better modularity.
// Docker Compose example for microservices (both services share the backend network)
version: '3'
services:
  auth-service:
    build: ./auth
    ports:
      - "4000:4000"
    networks:
      - backend
  user-service:
    build: ./user
    ports:
      - "4001:4001"
    depends_on:
      - auth-service
    networks:
      - backend
networks:
  backend:
Attach debuggers to running containers or use logs to diagnose issues.
// Attach a shell to a running container
docker exec -it my_app_container /bin/bash
// View logs
docker logs my_app_container
Use volume mounts and tools like Nodemon to enable live reload of code changes inside containers.
// Run a container with the source mounted for live reload
// (npx fetches nodemon, since it isn't bundled with the node image)
docker run -v $(pwd):/app -w /app -p 3000:3000 node:18 npx nodemon app.js
Encapsulate dependencies inside containers, avoiding “works on my machine” issues.
// All dependencies are installed in the image via the Dockerfile
RUN npm install
Use official database images to run local development databases quickly.
// Run a local MySQL container
docker run -d --name mysql-dev -e MYSQL_ROOT_PASSWORD=root -p 3306:3306 mysql:8
Run tests inside containers to ensure consistent results across environments.
// Run tests in a container
docker run --rm -v $(pwd):/app -w /app node:18 npm test
Share Dockerfiles and Compose files so all developers have identical environments.
// Keep the Dockerfile and docker-compose.yml in the repo
// Clone, then run docker-compose up to start the environment
Configure IDEs like VSCode to debug and run code inside containers seamlessly.
// VSCode devcontainer.json example
{
  "name": "Node.js Dev",
  "dockerFile": "Dockerfile",
  "appPort": [3000],
  "extensions": ["ms-vscode.node-debug2"]
}
Use Docker in CI pipelines to automate building and testing your apps consistently.
// Example GitHub Actions snippet to build and test a Docker image
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: docker build -t my-app .
      - run: docker run my-app npm test
Run linters, formatters, and analysis tools inside Docker containers to keep consistency.
// Run ESLint inside a container
docker run --rm -v $(pwd):/app -w /app node:18 npx eslint .
Keep Docker configs versioned along with your source code for traceability and rollback.
// Commit Dockerfile and docker-compose.yml alongside the source code
git add Dockerfile docker-compose.yml
git commit -m "Add Docker config for dev environment"
Use Docker-based pipelines to get rapid feedback from builds, tests, and deployments.
// Run containerized tests after each commit in the CI pipeline
// Alerts and reports are generated automatically
Docker fits naturally into Agile workflows and DevOps practices for continuous integration and delivery.
// Agile sprints benefit from fast environment setup and teardown with Docker
// DevOps pipelines automate container builds, tests, and deployments
Use Docker features like caching, multi-stage builds, and volume mounts to speed development.
# Multi-stage build to minimize the final image size
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

FROM node:18-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
CMD ["node", "dist/index.js"]
Containers are evolving with better orchestration, security, and support for new workloads like AI and edge computing.
// Trend: increased use of microVMs for lightweight isolation
// Tools like Firecracker run microVMs optimized for container workloads
Serverless platforms increasingly use containers under the hood for fast, scalable function execution.
// Example: deploying a containerized function on AWS Lambda
// Lambda supports container images up to 10 GB in size
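A minimal sketch of such an image, assuming a Python handler function named handler in app.py; the AWS-provided base image is the standard one for container Lambdas, while file and function names are placeholders:
# Dockerfile for a containerized Lambda function
FROM public.ecr.aws/lambda/python:3.9
# Copy the handler code into the Lambda task root
COPY app.py ${LAMBDA_TASK_ROOT}
# Tell the runtime which handler to invoke
CMD ["app.handler"]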
Runtimes like containerd and CRI-O are replacing the Docker daemon for more efficient container lifecycle management.
// Kubernetes now uses containerd by default instead of Docker Engine
// containerd can also be driven directly via its ctr client, as sketched earlier
Rootless Docker allows running containers without requiring root privileges, improving security and usability.
// Run the Docker daemon in rootless mode:
// $ dockerd-rootless.sh
// This reduces the attack surface and lets non-root users run containers
Combining Docker with WebAssembly (WASM) enables portable, fast, and secure containerized apps across environments.
// WASM modules can be packaged and distributed like container images
// Docker can run Wasm workloads alongside Linux containers (see the sketch below)
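A hedged sketch of running a Wasm module through Docker's containerd integration; this assumes Docker Desktop with the Wasm beta enabled, and the image name comes from Docker's public examples:
// Run a Wasm workload with the WasmEdge runtime shim
docker run --rm --runtime=io.containerd.wasmedge.v1 --platform=wasi/wasm secondstate/rust-example-hello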
Containers enable lightweight app deployment close to data sources on edge devices, reducing latency.
// Deploy containerized IoT apps on edge gateways with Docker
docker run -d --restart unless-stopped edge-app:latest
Docker containers simplify packaging ML models and dependencies for reproducible AI workflows.
# Dockerfile example for serving an ML model
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "serve_model.py"]
Containers help run modular IoT apps and manage updates efficiently on distributed devices.
// Example: Docker on a Raspberry Pi running IoT sensor software
docker run --rm -it --privileged iot-sensor:latest
Security tools focus on vulnerability scanning, runtime protection, and supply chain security for containers.
// Tools like Trivy scan images for vulnerabilities
trivy image myapp:latest
Infrastructure as code uses declarative YAML/JSON configs to define container deployments and infrastructure.
// Kubernetes manifests and Docker Compose files declare the desired state
// Example docker-compose.yml
version: '3'
services:
  web:
    image: nginx
    ports:
      - "80:80"
GitOps uses Git repos as single sources of truth to automate container deployment workflows.
// Push a Docker image, then update the Kubernetes manifests in Git
// Argo CD or Flux continuously sync cluster state with the Git repo
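To make this concrete, a minimal sketch of an Argo CD Application that keeps a cluster in sync with a Git repo; the repo URL, path, and names are placeholders:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/username/myapp-manifests.git
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp
  syncPolicy:
    automated: {}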
Organizations deploy containers across multiple clouds for redundancy, cost optimization, and compliance.
// Use Kubernetes Federation or tools like Rancher to manage multi-cloud clusters
Container networking evolves with service meshes (Istio), CNI plugins, and improved ingress for secure, reliable communication.
// Istio secures service-to-service communication with mutual TLS
// Sidecar proxies handle traffic routing and observability
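As a sketch, Istio can enforce mutual TLS mesh-wide with a single resource; a minimal example, assuming Istio is installed in the istio-system namespace:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT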
The Docker and container ecosystem grows with new tools, integrations, and vibrant open source communities.
// Docker Hub, Kubernetes SIGs, CNCF projects continuously innovate
Stay updated with container tech trends, adopt security best practices, and explore emerging runtimes and orchestration models.
// Regularly update container images, automate security scans, and experiment with new runtimes like WASM