Expedient/logstash-snmp-config


Logstash SNMP Trap Container

A Docker container for receiving SNMP traps and forwarding them to Elasticsearch 8.

Features

  • SNMP Trap Input: Listens for SNMP traps on UDP port 1162
  • Built-in MIBs: Includes IETF MIBs by default (no manual import needed)
  • Elasticsearch 8 Data Streams: Modern time-series storage with automatic ILM support
  • Elasticsearch 8 Output: Forwards traps to Elasticsearch with full authentication support
  • SNMPv2c & SNMPv3: Supports both SNMP versions
  • Flexible Configuration: Uses environment variables for easy deployment

Architecture

System Overview

graph LR
    A[Environet] -->|SNMP Traps<br/>UDP 162| C[Logstash Container]
    B[VMware] -->|SNMP Traps<br/>UDP 162| C
    C -->|Parse & Transform<br/>+MIB Translation| D[Logstash Pipeline]
    D -->|HTTPS<br/>API Key Auth| E[Elasticsearch 8]
    E -->|Data Stream| F[logs-snmp.trap-*]

    style C fill:#4C9AFF,stroke:#0052CC,stroke-width:2px
    style E fill:#00BFB3,stroke:#00A99D,stroke-width:2px
    style F fill:#00BFB3,stroke:#00A99D,stroke-width:2px
    style A fill:#FF991F,stroke:#FF8B00,stroke-width:2px
    style B fill:#FF991F,stroke:#FF8B00,stroke-width:2px

Data Flow Sequence

sequenceDiagram
    participant E as Environet
    participant V as VMware
    participant L as Logstash<br/>SNMP Input
    participant P as Pipeline<br/>Filters
    participant ES as Elasticsearch 8
    participant DS as Data Stream<br/>logs-snmp.trap-{ns}

    Note over E,V: Network Event Occurs

    E->>L: SNMP Trap (UDP 162)<br/>SNMPv2c/v3
    activate L
    L->>L: Receive on port 1162
    L->>L: Parse with IETF MIBs
    L->>P: Pass event
    deactivate L

    activate P
    P->>P: Rename host → source_host
    P->>P: Add data_stream metadata
    P->>P: Apply custom filters (optional)
    deactivate P

    P->>ES: HTTPS POST with API Key
    activate ES
    ES->>DS: Write to data stream
    DS->>DS: Auto-create backing index<br/>.ds-logs-snmp.trap-{ns}-YYYY.MM.DD-000001
    ES-->>P: 200 OK
    deactivate ES

    Note over V,L: Parallel trap from VMware
    V->>L: SNMP Trap (UDP 162)
    activate L
    L->>P: Pass event
    deactivate L
    activate P
    P->>ES: HTTPS POST
    activate ES
    ES->>DS: Write to same data stream
    deactivate ES
    deactivate P

Container Architecture

%%{init: {'flowchart':{'padding':30, 'nodeSpacing':70, 'rankSpacing':70, 'diagramPadding':25}}}%%
graph TB
    subgraph Docker Container
        A[UDP Port 1162] --> B[SNMP Trap Input]
        B --> C[Logstash Core<br/>8.15.3]
        C --> D[Output Plugin]
        E[Environment Variables] -.->|Configuration| C
        F[IETF MIBs<br/>Built-in] -.->|OID Translation| B
        G[Custom MIBs<br/>Environet/VMware<br/>Coming Soon] -.->|Vendor OIDs| B
    end

    D -->|HTTPS:9243<br/>TLS 1.2+| H[Elasticsearch Cluster]

    H --> I[Data Stream:<br/>logs-snmp.trap-default]
    I --> J[Backing Indices<br/>Auto-rollover]
    J --> K[ILM Policies<br/>Retention]

    style A fill:#FFC107,stroke:#FF8F00,stroke-width:2px
    style B fill:#4C9AFF,stroke:#0052CC,stroke-width:2px
    style C fill:#4C9AFF,stroke:#0052CC,stroke-width:2px
    style D fill:#4C9AFF,stroke:#0052CC,stroke-width:2px
    style H fill:#00BFB3,stroke:#00A99D,stroke-width:2px
    style I fill:#00BFB3,stroke:#00A99D,stroke-width:2px
    style F fill:#36B37E,stroke:#00875A,stroke-width:2px
    style G fill:#FFAB00,stroke:#FF991F,stroke-width:2px,stroke-dasharray: 5 5

Note: Environet and VMware-specific MIBs are being obtained separately. Once available, they can be added via the Custom MIBs section below to enable vendor-specific OID translation.

Prerequisites

  • Docker
  • Elasticsearch 8 cluster (running and accessible)
  • Network devices configured to send SNMP traps to this container

Quick Start

1. Configure Environment Variables

Copy the example environment file and edit with your settings:

cp .env.example .env

Edit .env with your Elasticsearch connection details:

ELASTICSEARCH_HOSTS=https://your-elasticsearch:9200
ELASTICSEARCH_USER=elastic
ELASTICSEARCH_PASSWORD=your-password
ELASTICSEARCH_SSL_ENABLED=true

2. Build the Container

docker build -t logstash-snmptrap .

3. Run the Container

docker run -d \
  --name logstash-snmptrap \
  --env-file .env \
  -p 1162:1162/udp \
  logstash-snmptrap

Note: To receive traps on the standard SNMP trap port (162), map host port 162 to the container's 1162 (binding host ports below 1024 typically requires elevated privileges):

docker run -d \
  --name logstash-snmptrap \
  --env-file .env \
  -p 162:1162/udp \
  logstash-snmptrap
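
The same deployment can also be written as a Docker Compose service — a minimal sketch assuming you keep the .env file and the standard-port mapping shown above (this repo does not ship a compose file):

```yaml
# docker-compose.yml — hypothetical sketch, not part of this repo
services:
  logstash-snmptrap:
    build: .
    env_file: .env
    ports:
      - "162:1162/udp"   # host port 162 -> container port 1162
    restart: unless-stopped
```

Start it with docker compose up -d.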

4. Verify It's Running

Check the logs:

docker logs -f logstash-snmptrap

You should see Logstash starting up and the SNMP trap input listening on port 1162.

Check the Logstash API:

curl http://localhost:9600/_node/stats/pipelines?pretty

Configuration

Directory Structure

.
├── Dockerfile                  # Docker image definition
├── .env.example                # Environment variables template
├── config/
│   ├── logstash.yml            # Logstash service configuration
│   └── pipelines.yml           # Pipeline definitions
└── pipeline/
    └── snmptrap.conf           # SNMP trap pipeline configuration

Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| ELASTICSEARCH_HOSTS | Elasticsearch host(s) (comma-separated) | http://elasticsearch:9200 |
| ELASTICSEARCH_USER | Elasticsearch username | - |
| ELASTICSEARCH_PASSWORD | Elasticsearch password | - |
| ELASTICSEARCH_API_KEY | Alternative to user/password auth | - |
| ELASTICSEARCH_SSL_ENABLED | Enable SSL/TLS | true |
| ELASTICSEARCH_SSL_VERIFICATION_MODE | SSL verification mode (full, certificate, none) | full |
| ELASTICSEARCH_CA_CERT_PATH | Path to CA certificate (for self-signed certs) | - |
| DATA_STREAM_NAMESPACE | Data stream namespace (e.g., default, production, dev) | default |
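
If you prefer API key authentication over username/password, the variables above combine in .env like this (values are placeholders; Logstash's elasticsearch output expects the API key in id:api_key form):

```
ELASTICSEARCH_HOSTS=https://your-elasticsearch:9243
ELASTICSEARCH_API_KEY=your-key-id:your-api-key
ELASTICSEARCH_SSL_ENABLED=true
DATA_STREAM_NAMESPACE=production
```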

SNMPv3 Configuration

To enable SNMPv3, uncomment and configure these variables in .env:

SNMP_SECURITY_NAME=mySecurityName
SNMP_AUTH_PROTOCOL=sha
SNMP_AUTH_PASS=YourAuthPassword
SNMP_PRIV_PROTOCOL=aes
SNMP_PRIV_PASS=YourPrivPassword
SNMP_SECURITY_LEVEL=authPriv

Then uncomment the SNMPv3 section in pipeline/snmptrap.conf.
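
As a rough sketch, the uncommented SNMPv3 section maps those variables onto the snmptrap input's options like this (the option names follow the Logstash snmptrap input documentation; the ${VAR} wiring is an assumption about this repo's config):

```
input {
  snmptrap {
    port           => 1162
    security_name  => "${SNMP_SECURITY_NAME}"
    auth_protocol  => "${SNMP_AUTH_PROTOCOL}"
    auth_pass      => "${SNMP_AUTH_PASS}"
    priv_protocol  => "${SNMP_PRIV_PROTOCOL}"
    priv_pass      => "${SNMP_PRIV_PASS}"
    security_level => "${SNMP_SECURITY_LEVEL}"
  }
}
```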

Custom MIBs

The container includes IETF MIBs by default. To add vendor-specific MIBs:

  1. Convert your MIB files to .dic (or .yaml) format using libsmi's smidump, e.g.:

    smidump -f python vendor.mib > vendor.dic
  2. Create a mibs/ directory and add your converted MIB files

  3. Update the Dockerfile to copy the MIBs:

    COPY --chown=logstash:logstash mibs/ /usr/share/logstash/mibs/
  4. Update pipeline/snmptrap.conf to use the MIBs:

    mib_paths => ["/usr/share/logstash/mibs"]
    

Self-Signed Certificates

If your Elasticsearch cluster uses self-signed certificates:

  1. Create a certs/ directory and add your CA certificate
  2. Update the Dockerfile to copy the certificate:
    COPY --chown=logstash:logstash certs/ /usr/share/logstash/config/certs/
  3. Uncomment the cacert setting in pipeline/snmptrap.conf

Data Storage

SNMP trap data is stored in Elasticsearch using data streams, the modern approach for time-series data in Elasticsearch 8.

Data Stream Naming

The data stream follows Elastic's official naming scheme: {type}-{dataset}-{namespace}

  • Type: logs (SNMP traps are event logs)
  • Dataset: snmp.trap (uses dot notation per Elastic conventions, similar to nginx.access)
  • Namespace: Configurable via DATA_STREAM_NAMESPACE (default: default)

Example: logs-snmp.trap-default

Note: The dataset uses dot notation (.) for hierarchy within the name, while hyphens (-) separate the three main components. This follows Elastic's standard integration naming patterns.
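
In the Logstash elasticsearch output, this naming is expressed through the data stream options — a sketch of how snmptrap.conf likely wires it (the actual file in this repo may differ):

```
output {
  elasticsearch {
    hosts                 => ["${ELASTICSEARCH_HOSTS}"]
    data_stream           => "true"
    data_stream_type      => "logs"
    data_stream_dataset   => "snmp.trap"
    data_stream_namespace => "${DATA_STREAM_NAMESPACE}"
  }
}
```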

Backing Indices

Data streams automatically create backing indices with the pattern:

.ds-logs-snmp.trap-{namespace}-YYYY.MM.DD-{generation}

Example: .ds-logs-snmp.trap-default-2024.10.13-000001

Benefits of Data Streams

  • Automatic index lifecycle management (ILM) support
  • Optimized for append-only time-series data
  • Simplified querying (query the data stream directly)
  • Automatic rollover and retention policies
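
For example, a minimal ILM policy could roll backing indices over daily and drop them after 30 days — an illustrative sketch in Elasticsearch's ILM API format (the policy name and thresholds are not part of this repo):

```
PUT _ilm/policy/logs-snmp-trap
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "1d", "max_primary_shard_size": "50gb" }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}
```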

Event Fields

The events will contain fields such as:

  • @timestamp: Event timestamp
  • type: "snmptrap"
  • source_host: IP/hostname of the trap source
  • data_stream.type: "logs"
  • data_stream.dataset: "snmp.trap"
  • data_stream.namespace: Your configured namespace
  • Various SNMP-specific fields depending on the trap type and OIDs

Testing

Send a Test Trap

You can verify the container is working by sending a test trap with Net-SNMP's snmptrap utility:

# SNMPv2c test trap
snmptrap -v2c -c public localhost:1162 '' 1.3.6.1.4.1.8072.2.3.0.1 \
  1.3.6.1.4.1.8072.2.3.2.1 i 123456

Query Elasticsearch

Check if the trap was received:

# Query the data stream directly
curl -u elastic:password \
  "https://your-elasticsearch:9200/logs-snmp.trap-default/_search?pretty"

# Or query all SNMP trap data streams across namespaces
curl -u elastic:password \
  "https://your-elasticsearch:9200/logs-snmp.trap-*/_search?pretty"

Troubleshooting

Container won't start

Check the logs for errors:

docker logs logstash-snmptrap

Common issues:

  • Elasticsearch connection refused: Check ELASTICSEARCH_HOSTS and network connectivity
  • Authentication failed: Verify ELASTICSEARCH_USER and ELASTICSEARCH_PASSWORD
  • SSL/TLS errors: Check certificate configuration or set ELASTICSEARCH_SSL_VERIFICATION_MODE=none for testing

No traps being received

  1. Verify the container is listening:

    docker exec logstash-snmptrap netstat -an | grep 1162
  2. Check if traps are reaching the container:

    docker exec logstash-snmptrap tcpdump -i any -n port 1162
  3. Verify your network devices are configured to send traps to the correct IP and port

Enable debug output

Uncomment the stdout output in pipeline/snmptrap.conf:

output {
  stdout {
    codec => rubydebug
  }
}

Rebuild the container and check logs to see raw trap data.

Advanced Configuration

Filtering and Parsing

You can add custom filters in pipeline/snmptrap.conf to:

  • Parse specific OIDs
  • Add tags based on trap type
  • Enrich data with additional fields
  • Drop unwanted traps

Example:

filter {
  if [oid] == "1.3.6.1.4.1.9.9.41.2.0.1" {
    mutate {
      add_tag => ["cisco", "cpu-threshold"]
    }
  }
}

Multiple Pipelines

To run multiple pipelines (e.g., for different SNMP versions or sources), add more entries to config/pipelines.yml:

- pipeline.id: snmptrap-v2c
  path.config: "/usr/share/logstash/pipeline/snmptrap-v2c.conf"

- pipeline.id: snmptrap-v3
  path.config: "/usr/share/logstash/pipeline/snmptrap-v3.conf"

Monitoring

Logstash exposes metrics on port 9600. You can monitor the container using:

# Node stats
curl http://localhost:9600/_node/stats?pretty

# Pipeline stats
curl http://localhost:9600/_node/stats/pipelines?pretty

# Hot threads (for troubleshooting)
curl http://localhost:9600/_node/hot_threads?pretty

License

This configuration is provided as-is for use with Elastic Stack. Refer to Elastic's licensing for Logstash and the Elastic Stack.
