Docker

Docker Compose 

a. What is Docker Compose?

Docker Compose is a tool for defining and running multi-container applications: in a single YAML file you declare how the containers work together (services, volumes, networks), and Compose starts and stops them as one unit.


b. docker-compose.yml Basics

  • services: Your app containers.
  • volumes: Persistent storage.
  • ports: Port mappings (host:container).
  • networks: Group containers for communication.
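The four keys above can be sketched in one minimal file (the image, service, and volume names here are illustrative, not from the examples below):

```yaml
version: "3.8"

services:
  web:
    image: nginx:alpine                      # the app container
    ports:
      - "8080:80"                            # host:container port mapping
    volumes:
      - web-data:/usr/share/nginx/html       # named volume for persistent files
    networks:
      - app-net                              # join the shared network

networks:
  app-net:     # containers on this network reach each other by service name

volumes:
  web-data:    # declared at the top level so Compose creates and manages it
```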

c. Hands-on Example

docker-compose.yml

Example 1 (Kafka in KRaft mode, with Schema Registry and Kafka UI)

version: '3.8'

services:
  kafka-broker:
    image: confluentinc/cp-kafka:latest
    container_name: kafka-broker
    hostname: kafka-broker
    ports:
      - "9092:9092"
      - "19092:19092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_INTERNAL:PLAINTEXT,CONTROLLER:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-broker:19092,PLAINTEXT_INTERNAL://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_NODE_ID: 1
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka-broker:29093
      KAFKA_LISTENERS: PLAINTEXT://kafka-broker:19092,CONTROLLER://kafka-broker:29093,PLAINTEXT_INTERNAL://0.0.0.0:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LOG_DIRS: /var/lib/kafka/data
      CLUSTER_ID: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    volumes:
      - kafka-data:/var/lib/kafka/data

  schema-registry:
    image: confluentinc/cp-schema-registry:latest
    container_name: schema-registry
    hostname: schema-registry
    depends_on:
      - kafka-broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_LISTENERS: http://schema-registry:8081
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: PLAINTEXT://kafka-broker:19092
      SCHEMA_REGISTRY_DEBUG: 'true'
    volumes:
      - schema-registry-data:/var/lib/schema-registry

  kafka-ui:
    image: provectuslabs/kafka-ui:latest
    container_name: kafka-ui
    depends_on:
      - kafka-broker
    ports:
      - "8080:8080"
    environment:
      KAFKA_CLUSTERS_0_NAME: EU-DEV
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka-broker:19092
      KAFKA_CLUSTERS_0_SCHEMAREGISTRY: http://schema-registry:8081
      DYNAMIC_CONFIG_ENABLED: 'true'

volumes:
  kafka-data:
  schema-registry-data:

Bring the stack up with docker-compose up (add -d to run it in the background) and tear it down with docker-compose down (add -v to also remove the named volumes):

docker-compose up
docker-compose down

Before the first start, replace the CLUSTER_ID placeholder with a base64-encoded UUID; the cp-kafka image ships a kafka-storage random-uuid helper that generates one.



Example 2 (a Java project; individual services can also be run on their own)

version: "3.8"
services:
  postgres-db:
    image: postgres:15.2
    restart: always
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_USER: XXXXX
      POSTGRES_DB: YYYYY
    ports:
      - "5432"
    volumes:
      - postgres-data:/data/db
  test:
    image: gradle:8.6.0-jdk17-alpine
    working_dir: /app
    depends_on:
      - postgres-db
    volumes:
      - .:/app
      - maven-repository:/root/.m2
      - gradle-cache:/home/gradle/.gradle
    command: 'gradle test'
    environment:
      - spring_profiles_active
      - AWS_REGION
      - LOG4J_LAYOUT
      - JFROG_USERNAME
      - JFROG_PASSWORD
      - SUBMIT_TO_QCENTER
  build:
    image: gradle:8.6.0-jdk17-alpine
    working_dir: /app
    depends_on:
      - postgres-db
    volumes:
      - .:/app
      - maven-repository:/root/.m2
      - gradle-cache:/home/gradle/.gradle
    command: 'gradle build'
    environment:
      - spring_profiles_active
      - AWS_REGION
      - JFROG_USERNAME
      - JFROG_PASSWORD
  run:
    image: gradle:8.6.0-jdk17-alpine
    working_dir: /app
    depends_on:
      - postgres-db
    volumes:
      - .:/app
      - maven-repository:/root/.m2
      - gradle-cache:/home/gradle/.gradle
    command: 'gradle bootRun'
    environment:
      - spring_profiles_active
      - LOG4J_LAYOUT
      - AWS_ACCESS_KEY_ID
      - AWS_SECRET_ACCESS_KEY
      - AWS_REGION
  run-image:
    image: ${PROJECT_NAME_KEBAB_CASE}:${IMAGE_VERSION}
    build:
      context: ""
      args:
        ENV: dev
        SERVICE: ${PROJECT_NAME_KEBAB_CASE}
    depends_on:
      - postgres-db
    command: |-
      java
      -cp app:app/lib/*
      com.sysco.blueprint.BlueprintApplication
    environment:
      - spring_profiles_active
      - LOG4J_LAYOUT
      - AWS_REGION
  sonar-publish:
    image: gradle:8.6.0-jdk17-alpine
    working_dir: /app
    volumes:
      - .:/app
      - maven-repository:/root/.m2
      - gradle-cache:/home/gradle/.gradle
    environment:
      - spring_profiles_active
  clean:
    image: gradle:8.6.0-jdk17-alpine
    working_dir: /app
    depends_on:
      - postgres-db
    volumes:
      - .:/app
      - maven-repository:/root/.m2
      - gradle-cache:/home/gradle/.gradle
    command: 'gradle clean'
    environment:
      - spring_profiles_active
  deploy:
    image: project-java-builder:latest
    working_dir: /app
    depends_on:
      - build-image
    volumes:
      - .:/app
      - maven-repository:/root/.m2
      - gradle-cache:/home/gradle/.gradle
      - terraform-plugin-cache:/root/.terraform.d/plugin-cache
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - spring_profiles_active
      - AWS_ACCESS_KEY_ID
      - AWS_SECRET_ACCESS_KEY
      - ENVIRONMENT
      - AWS_REGION
      - PROJECT_NAME_KEBAB_CASE
      - IMAGE_VERSION
      - DEPLOY_OPTION
  build-image:
    image: project-java-builder:latest
    build:
      context: "terraform"
    command: bash -c "aws --version && terraform --version && docker --version"
volumes:
  maven-repository:
    name: maven-repository
  postgres-data:
    name: postgres-data
  gradle-cache:
    name: gradle-cache
  terraform-plugin-cache:
    name: terraform-plugin-cache
docker-compose -f ../docker-compose.yml run --rm build
docker-compose -f ../docker-compose.yml run -p 8080:8080 run-image



Docker images are read-only templates used to create containers, while containers are runnable instances of those images. Volumes are used to persist data generated by and used by Docker containers. Docker Compose is a tool for defining and managing multi-container Docker applications. 
Volumes can be defined and used in Docker Compose files to persist data across container restarts and deployments. To define a volume in a docker-compose.yml file, a volumes section is added at the top level, listing the volumes to be created. Within each service definition, the volumes section specifies how these volumes should be mounted into the containers.
version: "3.8"
services:
  app:
    image: my-app-image
    ports:
      - "8080:8080"
    volumes:
      - app_data:/data
volumes:
  app_data:
In this example, a volume named app_data is defined. The app service mounts this volume to the /data directory within the container. When the application writes data to /data, it is stored in the app_data volume, ensuring that the data persists even if the container is stopped or removed. Docker Compose automatically creates and manages the app_data volume when the application is started.

There are different types of volumes: bind mounts, named volumes, and anonymous volumes. Bind mounts map a directory or file on the host system to a directory or file within the container. Named volumes are managed by Docker and are stored in a location on the host system that is managed by Docker. Anonymous volumes are similar to named volumes, but they do not have a name and are automatically removed when the container is removed. 
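The three volume types can be contrasted in a single compose file (the service name and paths are illustrative):

```yaml
version: "3.8"

services:
  app:
    image: my-app-image
    volumes:
      - app_data:/data          # named volume: managed by Docker, survives container removal
      - ./config:/app/config    # bind mount: a host directory mapped into the container
      - /app/tmp                # anonymous volume: unnamed, not reusable across runs

volumes:
  app_data:                     # named volumes must be declared at the top level
```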
