ELK Stack setup with Docker Compose for Spring Boot logging and monitoring – Elasticsearch, Logstash, and Kibana in action

In this post, we'll set up the ELK stack (Elasticsearch, Logstash, Kibana) with Docker Compose to centralize and analyze logs from a Spring Boot application. Let's get started!

Introduction to ELK Stack

The ELK stack consists of three main components:

  • Elasticsearch - Stores and indexes log data.
  • Logstash - Collects, processes, and forwards logs to Elasticsearch.
  • Kibana - Provides a web interface to visualize logs and create dashboards.

Using ELK, you can monitor your Spring Boot application logs efficiently, gain insights, and troubleshoot issues faster.

How Spring Boot Logs Are Pushed to the ELK Stack

  1. Generate Logs in Spring Boot:
    Your Spring Boot app creates logs using libraries like Logback or Log4j. These logs contain useful details like errors, warnings, and application activity.

  2. Send Logs to Logstash:
    You configure your Spring Boot app to send logs to Logstash, which is part of the ELK stack. This is usually done by:
    • Writing logs to a file or console and having a Filebeat or Logstash agent pick them up.
    • Sending logs directly to Logstash over a network (e.g., using TCP or HTTP).
  3. Process Logs in Logstash:
    Logstash processes and transforms the logs (e.g., filtering out unnecessary data or adding tags). It prepares the logs for indexing in Elasticsearch.

  4. Store Logs in Elasticsearch:
    Logstash sends the processed logs to Elasticsearch, which stores them in a structured format, making them searchable.

  5. Visualize Logs in Kibana:
    You use Kibana to view and analyze the logs in a user-friendly dashboard. Kibana helps you spot trends, errors, and system performance easily.
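To make step 2 concrete, here is a minimal, hand-rolled sketch of what a Logstash TCP appender does under the hood: serialize a log event as one JSON object per line and write it to Logstash's TCP input (port 5044 in this setup). The class name and JSON field names are illustrative assumptions; in a real application the logstash-logback-encoder library configured later in this post handles all of this for you.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.time.Instant;

// Simplified illustration of a Logstash TCP appender: one JSON object
// per line, written to Logstash's TCP input. Field names mirror what
// LogstashEncoder emits, but this is a sketch, not its exact output.
public class JsonLogSender {

    // Builds a single JSON log event (field names are assumptions).
    static String toJson(String level, String message) {
        return String.format(
            "{\"@timestamp\":\"%s\",\"level\":\"%s\",\"message\":\"%s\"}",
            Instant.now(), level, message);
    }

    // Sends one newline-terminated JSON event over TCP, matching the
    // `codec => json` input that Logstash is configured with below.
    static void send(String host, int port, String jsonEvent) throws IOException {
        try (Socket socket = new Socket(host, port);
             OutputStream out = socket.getOutputStream()) {
            out.write((jsonEvent + "\n").getBytes(StandardCharsets.UTF_8));
            out.flush();
        }
    }
}
```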

Prerequisites

Before starting, ensure you have the following installed:

  • Docker & Docker Compose
  • A basic Spring Boot application with logging enabled.

Understanding the Configuration Files

We will use four main configuration files to set up the ELK stack and integrate it with Spring Boot:

1. docker-compose.yml

  • Purpose: Defines and orchestrates ELK stack services (Elasticsearch, Logstash, Kibana) using Docker containers.
  • Settings: Configures ELK services with port mappings, persistent volumes, and environment variables to enable communication and data storage.

2. logstash.conf

  • Purpose: Configures Logstash to receive logs from the Spring Boot app, process them, and forward them to Elasticsearch.
  • Settings: Sets up an input listener on port 5044 to receive JSON logs and forwards them to Elasticsearch under an index for log organization.

3. logback-spring.xml

  • Purpose: Configures Spring Boot logging to send application logs to Logstash in JSON format via TCP.
  • Settings: Defines a Logstash appender to send logs in JSON format to Logstash over TCP for centralized log management.

4. application.properties

  • Purpose: Manages logging levels, output formats, and other logging-related configurations in Spring Boot.
  • Settings: Adjusts logging levels and patterns to control verbosity and structure logs for better analysis.
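Since the application.properties file itself is not reproduced later in this post, here is a minimal example of the logging-related entries it might contain. The property keys are standard Spring Boot; the package name com.example.demo is a placeholder for your own base package.

```properties
# Application name (useful for identifying the service in log events)
spring.application.name=demo-app

# Control verbosity: quiet framework internals, verbose application logs
logging.level.root=INFO
logging.level.com.example.demo=DEBUG

# Console pattern (the Logstash appender uses its own JSON encoder)
logging.pattern.console=%d{yyyy-MM-dd HH:mm:ss} %-5level %logger{36} - %msg%n
```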

Creating the Docker Compose File

Let’s create a docker-compose.yml file to set up Elasticsearch, Logstash, and Kibana.

version: '3.8'

services:
  elasticsearch:
    image: elasticsearch:8.17.1
    container_name: elasticsearch
    restart: always
    volumes:
      - elastic_data:/usr/share/elasticsearch/data/
    environment:
      - xpack.security.enabled=false  # Disable security for local development
      - ES_JAVA_OPTS=-Xmx256m -Xms256m
      - discovery.type=single-node
    ports:
      - '9200:9200'
    networks:
      - elk-network

  logstash:
    image: logstash:8.17.1
    container_name: logstash
    restart: always
    volumes:
      - ./logstash/:/logstash_dir
    command: logstash -f /logstash_dir/pipeline/logstash.conf
    depends_on:
      - elasticsearch
    ports:
      - '5044:5044'
    environment:
      - LS_JAVA_OPTS=-Xmx256m -Xms256m
    networks:
      - elk-network

  kibana:
    image: kibana:8.17.1
    container_name: kibana
    restart: always
    ports:
      - '5601:5601'
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200  # Kibana 7+ uses ELASTICSEARCH_HOSTS
    depends_on:
      - elasticsearch
    networks:
      - elk-network

networks:
  elk-network:
    driver: bridge

volumes:
  elastic_data: {}

Configuring Logstash to Process Logs

Create a logstash.conf file at ./logstash/pipeline/logstash.conf, matching the volume mount and command defined for the logstash service in docker-compose.yml:

input {
  tcp {
    port => 5044
    codec => json
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}
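Step 3 of the pipeline overview mentioned filtering and tagging, but the configuration above passes events straight through. As an optional sketch, a filter block like the following could tag each event (the tag value is arbitrary):

```conf
filter {
  mutate {
    # Tag every event so it is easy to filter in Kibana
    add_tag => ["spring-boot"]
  }
}
```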

What is Logback Appender?

A Logback appender is a component in Logback (Spring Boot’s default logging framework) responsible for directing log messages to a specific destination, such as a file, console, database, or remote logging system like Logstash.

Common Logback Appenders:

  1. ConsoleAppender – Logs messages to the console.
  2. FileAppender – Writes logs to a file.
  3. RollingFileAppender – Writes logs to a file with rolling (rotation) capabilities.
  4. LogstashTcpSocketAppender – Sends logs to Logstash over TCP for centralized logging.
  5. AsyncAppender – Buffers logs and sends them asynchronously to improve performance.
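Of these, AsyncAppender deserves a closer look in this setup, because blocking appenders can stall the application thread. As a sketch (assuming an appender named LOGSTASH, like the one configured in the next section), you can wrap another appender like this:

```xml
<!-- Queues log events and hands them off on a background thread;
     neverBlock drops events instead of blocking when the queue fills. -->
<appender name="ASYNC_LOGSTASH" class="ch.qos.logback.classic.AsyncAppender">
    <appender-ref ref="LOGSTASH"/>
    <queueSize>512</queueSize>
    <neverBlock>true</neverBlock>
</appender>
```

Note that LogstashTcpSocketAppender already writes asynchronously on its own, so this wrapper matters mainly for blocking appenders such as FileAppender.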

Configuring Spring Boot to Send Logs

In your Spring Boot project, update the logback-spring.xml file to send logs to Logstash in JSON format:

<configuration>
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>localhost:5044</destination> <!-- Send logs to Logstash -->
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>

    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss} %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="LOGSTASH" />
        <appender-ref ref="CONSOLE" />
    </root>
</configuration>
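One thing to note: LogstashTcpSocketAppender and LogstashEncoder come from the logstash-logback-encoder library, which is not part of Spring Boot itself. Add it to your build; the Maven coordinates below are the library's real ones, but check for the latest version (7.4 is only an example):

```xml
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>7.4</version>
</dependency>
```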

Running the ELK Stack

Start the ELK stack using the following command:

docker-compose up -d

Verify that the containers are running:

docker ps

If everything is running fine, your ELK stack is up and running!

Accessing Kibana and Viewing Logs

  • Open Kibana in your browser at http://localhost:5601.
  • Go to “Stack Management > Data Views” (called “Index Patterns” in older Kibana versions) and create a data view with the pattern logs-*.
  • Use the Discover tab to search and analyze logs.

Testing the Setup

Run your Spring Boot application and check Kibana for incoming logs. Try different log levels (INFO, WARN, ERROR) to see how they are displayed in Kibana.

Troubleshooting Common Issues

If you face issues, consider:

  • Checking container logs with docker logs <container_name> (e.g., docker logs logstash).
  • Ensuring port mappings are correct and that the application can reach Logstash on localhost:5044.
  • Checking Elasticsearch health at http://localhost:9200/_cluster/health.

We’ve successfully set up the ELK stack for a Spring Boot application using Docker Compose. Now you can monitor and analyze logs efficiently.

Check out the video for a detailed explanation: