A Docker container for receiving SNMP traps and forwarding them to Elasticsearch 8.
- SNMP Trap Input: Listens for SNMP traps on UDP port 1162
- Built-in MIBs: Includes IETF MIBs by default (no manual import needed)
- Elasticsearch 8 Data Streams: Modern time-series storage with automatic ILM support
- Elasticsearch 8 Output: Forwards traps to Elasticsearch with full authentication support
- SNMPv2c & SNMPv3: Supports both SNMP versions
- Flexible Configuration: Uses environment variables for easy deployment
```mermaid
graph LR
    A[Environet] -->|SNMP Traps<br/>UDP 162| C[Logstash Container]
    B[VMware] -->|SNMP Traps<br/>UDP 162| C
    C -->|Parse & Transform<br/>+MIB Translation| D[Logstash Pipeline]
    D -->|HTTPS<br/>API Key Auth| E[Elasticsearch 8]
    E -->|Data Stream| F[logs-snmp.trap-*]
    style C fill:#4C9AFF,stroke:#0052CC,stroke-width:2px
    style E fill:#00BFB3,stroke:#00A99D,stroke-width:2px
    style F fill:#00BFB3,stroke:#00A99D,stroke-width:2px
    style A fill:#FF991F,stroke:#FF8B00,stroke-width:2px
    style B fill:#FF991F,stroke:#FF8B00,stroke-width:2px
```
```mermaid
sequenceDiagram
    participant E as Environet
    participant V as VMware
    participant L as Logstash<br/>SNMP Input
    participant P as Pipeline<br/>Filters
    participant ES as Elasticsearch 8
    participant DS as Data Stream<br/>logs-snmp.trap-{ns}
    Note over E,V: Network Event Occurs
    E->>L: SNMP Trap (UDP 162)<br/>SNMPv2c/v3
    activate L
    L->>L: Receive on port 1162
    L->>L: Parse with IETF MIBs
    L->>P: Pass event
    deactivate L
    activate P
    P->>P: Rename host → source_host
    P->>P: Add data_stream metadata
    P->>P: Apply custom filters (optional)
    deactivate P
    P->>ES: HTTPS POST with API Key
    activate ES
    ES->>DS: Write to data stream
    DS->>DS: Auto-create backing index<br/>.ds-logs-snmp.trap-{ns}-YYYY.MM.DD-000001
    ES-->>P: 200 OK
    deactivate ES
    Note over V,L: Parallel trap from VMware
    V->>L: SNMP Trap (UDP 162)
    activate L
    L->>P: Pass event
    deactivate L
    activate P
    P->>ES: HTTPS POST
    activate ES
    ES->>DS: Write to same data stream
    deactivate ES
    deactivate P
```
```mermaid
%%{init: {'flowchart':{'padding':30, 'nodeSpacing':70, 'rankSpacing':70, 'diagramPadding':25}}}%%
graph TB
    subgraph Docker Container
        A[UDP Port 1162] --> B[SNMP Trap Input]
        B --> C[Logstash Core<br/>8.15.3]
        C --> D[Output Plugin]
        E[Environment Variables] -.->|Configuration| C
        F[IETF MIBs<br/>Built-in] -.->|OID Translation| B
        G[Custom MIBs<br/>Environet/VMware<br/>Coming Soon] -.->|Vendor OIDs| B
    end
    D -->|HTTPS:9243<br/>TLS 1.2+| H[Elasticsearch Cluster]
    H --> I[Data Stream:<br/>logs-snmp.trap-default]
    I --> J[Backing Indices<br/>Auto-rollover]
    J --> K[ILM Policies<br/>Retention]
    style A fill:#FFC107,stroke:#FF8F00,stroke-width:2px
    style B fill:#4C9AFF,stroke:#0052CC,stroke-width:2px
    style C fill:#4C9AFF,stroke:#0052CC,stroke-width:2px
    style D fill:#4C9AFF,stroke:#0052CC,stroke-width:2px
    style H fill:#00BFB3,stroke:#00A99D,stroke-width:2px
    style I fill:#00BFB3,stroke:#00A99D,stroke-width:2px
    style F fill:#36B37E,stroke:#00875A,stroke-width:2px
    style G fill:#FFAB00,stroke:#FF991F,stroke-width:2px,stroke-dasharray: 5 5
```
Note: Environet and VMware-specific MIBs are being obtained separately. Once available, they can be added via the Custom MIBs section below to enable vendor-specific OID translation.
- Docker
- Elasticsearch 8 cluster (running and accessible)
- Network devices configured to send SNMP traps to this container
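To illustrate the last prerequisite: on a Linux host running Net-SNMP's `snmpd`, pointing traps at this container is a one-line directive. The receiver address below is a placeholder for wherever the container's mapped port is reachable:

```
# /etc/snmp/snmpd.conf — hypothetical receiver address, adjust host/port
# Format: trap2sink HOST [COMMUNITY [PORT]]
trap2sink 192.0.2.10 public 162
```

Other vendors (VMware, Environet) have their own trap-destination settings, but the target host and port are the same.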
Copy the example environment file and edit it with your settings:

```bash
cp .env.example .env
```

Edit `.env` with your Elasticsearch connection details:

```bash
ELASTICSEARCH_HOSTS=https://your-elasticsearch:9200
ELASTICSEARCH_USER=elastic
ELASTICSEARCH_PASSWORD=your-password
ELASTICSEARCH_SSL_ENABLED=true
```

Build the image:

```bash
docker build -t logstash-snmptrap .
```

Run the container:

```bash
docker run -d \
  --name logstash-snmptrap \
  --env-file .env \
  -p 1162:1162/udp \
  logstash-snmptrap
```

Note: If you want to use the standard SNMP trap port (162), map it to the container's listener:

```bash
docker run -d \
  --name logstash-snmptrap \
  --env-file .env \
  -p 162:1162/udp \
  logstash-snmptrap
```

Check the logs:

```bash
docker logs -f logstash-snmptrap
```

You should see Logstash starting up and the SNMP trap input listening on port 1162.
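The same run configuration can be captured declaratively. This Compose file is a sketch, not something shipped with the repo:

```yaml
# docker-compose.yml — illustrative sketch, not part of the repository
services:
  logstash-snmptrap:
    build: .
    env_file: .env
    ports:
      - "1162:1162/udp"   # or "162:1162/udp" for the standard trap port
    restart: unless-stopped
```

Then `docker compose up -d` replaces the `docker build` / `docker run` pair above.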
Check the Logstash API:

```bash
curl http://localhost:9600/_node/stats/pipelines?pretty
```

Project structure:

```
.
├── Dockerfile          # Docker image definition
├── .env.example        # Environment variables template
├── config/
│   ├── logstash.yml    # Logstash service configuration
│   └── pipelines.yml   # Pipeline definitions
└── pipeline/
    └── snmptrap.conf   # SNMP trap pipeline configuration
```
| Variable | Description | Default |
|---|---|---|
| `ELASTICSEARCH_HOSTS` | Elasticsearch host(s) (comma-separated) | `http://elasticsearch:9200` |
| `ELASTICSEARCH_USER` | Elasticsearch username | - |
| `ELASTICSEARCH_PASSWORD` | Elasticsearch password | - |
| `ELASTICSEARCH_API_KEY` | Alternative to user/password auth | - |
| `ELASTICSEARCH_SSL_ENABLED` | Enable SSL/TLS | `true` |
| `ELASTICSEARCH_SSL_VERIFICATION_MODE` | SSL verification mode (`full`, `certificate`, `none`) | `full` |
| `ELASTICSEARCH_CA_CERT_PATH` | Path to CA certificate (for self-signed certs) | - |
| `DATA_STREAM_NAMESPACE` | Data stream namespace (e.g., `default`, `production`, `dev`) | `default` |
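As a worked example of the API-key path from the table, a `.env` might look like this. The key value is a placeholder (Logstash's Elasticsearch output takes the key in `id:api_key` form):

```bash
# .env — example using API-key auth instead of user/password
ELASTICSEARCH_HOSTS=https://your-elasticsearch:9200
ELASTICSEARCH_API_KEY=myKeyId:myKeySecret
ELASTICSEARCH_SSL_ENABLED=true
ELASTICSEARCH_SSL_VERIFICATION_MODE=full
DATA_STREAM_NAMESPACE=production
```

Leave `ELASTICSEARCH_USER` and `ELASTICSEARCH_PASSWORD` unset when using an API key.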
To enable SNMPv3, uncomment and configure these variables in `.env`:

```bash
SNMP_SECURITY_NAME=mySecurityName
SNMP_AUTH_PROTOCOL=sha
SNMP_AUTH_PASS=YourAuthPassword
SNMP_PRIV_PROTOCOL=aes
SNMP_PRIV_PASS=YourPrivPassword
SNMP_SECURITY_LEVEL=authPriv
```

Then uncomment the SNMPv3 section in `pipeline/snmptrap.conf`.
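For orientation, the uncommented SNMPv3 section will look roughly like the sketch below. This is an approximation, not the repo's exact file: option names follow the Logstash `snmptrap` input plugin, and the `${...}` lookups assume the variables above are present in the container environment:

```
input {
  snmptrap {
    port => 1162
    # SNMPv3 credentials — sketch; see pipeline/snmptrap.conf for the real block
    security_name  => "${SNMP_SECURITY_NAME}"
    auth_protocol  => "${SNMP_AUTH_PROTOCOL}"
    auth_pass      => "${SNMP_AUTH_PASS}"
    priv_protocol  => "${SNMP_PRIV_PROTOCOL}"
    priv_pass      => "${SNMP_PRIV_PASS}"
    security_level => "${SNMP_SECURITY_LEVEL}"
  }
}
```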
The container includes IETF MIBs by default. To add vendor-specific MIBs:

1. Convert your MIB files to `.dic` or `.yaml` format using `libsmi`:

   ```bash
   smidump -f python vendor.mib > vendor.dic
   # or
   smidump -f yaml vendor.mib > vendor.yaml
   ```

2. Create a `mibs/` directory and add your converted MIB files.

3. Update the Dockerfile to copy the MIBs:

   ```dockerfile
   COPY --chown=logstash:logstash mibs/ /usr/share/logstash/mibs/
   ```

4. Update `pipeline/snmptrap.conf` to use the MIBs:

   ```
   mib_paths => ["/usr/share/logstash/mibs"]
   ```
If your Elasticsearch cluster uses self-signed certificates:

1. Create a `certs/` directory and add your CA certificate.

2. Update the Dockerfile to copy the certificate:

   ```dockerfile
   COPY --chown=logstash:logstash certs/ /usr/share/logstash/config/certs/
   ```

3. Uncomment the `cacert` setting in `pipeline/snmptrap.conf`.
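Once uncommented, the `cacert` line in the Elasticsearch output block will look something like this (the certificate filename is whatever you placed in `certs/`; `ca.crt` here is an example):

```
cacert => "/usr/share/logstash/config/certs/ca.crt"
```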
SNMP trap data is stored in Elasticsearch using data streams, the modern approach for time-series data in Elasticsearch 8.
The data stream follows Elastic's official naming scheme: `{type}-{dataset}-{namespace}`

- Type: `logs` (SNMP traps are event logs)
- Dataset: `snmp.trap` (uses dot notation per Elastic conventions, similar to `nginx.access`)
- Namespace: configurable via `DATA_STREAM_NAMESPACE` (default: `default`)

Example: `logs-snmp.trap-default`

Note: The dataset uses dot notation (`.`) for hierarchy within the name, while hyphens (`-`) separate the three main components. This follows Elastic's standard integration naming patterns.
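The composition rule can be exercised directly in shell; `DATA_STREAM_NAMESPACE` below mirrors the container's environment variable:

```shell
# Build the data stream name from its three components.
ds_type="logs"
ds_dataset="snmp.trap"
ds_namespace="${DATA_STREAM_NAMESPACE:-default}"   # falls back to "default" when unset
ds_name="${ds_type}-${ds_dataset}-${ds_namespace}"
echo "$ds_name"   # -> logs-snmp.trap-default when the namespace is unset
```

Setting `DATA_STREAM_NAMESPACE=production` would yield `logs-snmp.trap-production` instead.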
Data streams automatically create backing indices with the pattern:

```
.ds-logs-snmp.trap-{namespace}-YYYY.MM.DD-{generation}
```

Example: `.ds-logs-snmp.trap-default-2024.10.13-000001`
- Automatic index lifecycle management (ILM) support
- Optimized for append-only time-series data
- Simplified querying (query the data stream directly)
- Automatic rollover and retention policies
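Retention is wired up by PUTting an ILM policy (e.g., to `_ilm/policy/logs-snmp.trap`) and referencing it from the data stream's index template. The policy body below is an illustrative sketch: the phase ages and sizes are examples, not shipped defaults:

```json
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "1d", "max_primary_shard_size": "50gb" }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}
```

With a policy like this, backing indices roll over daily (or at 50 GB) and are deleted after 30 days.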
The events will contain fields such as:

- `@timestamp`: Event timestamp
- `type`: `"snmptrap"`
- `source_host`: IP/hostname of the trap source
- `data_stream.type`: `"logs"`
- `data_stream.dataset`: `"snmp.trap"`
- `data_stream.namespace`: Your configured namespace
- Various SNMP-specific fields depending on the trap type and OIDs
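Put together, an indexed event might look like the hypothetical document below. The timestamp, source address, and the trailing varbind field are all illustrative; actual SNMP field names depend on the trap and the loaded MIBs:

```json
{
  "@timestamp": "2024-10-13T12:34:56.789Z",
  "type": "snmptrap",
  "source_host": "192.0.2.50",
  "data_stream": {
    "type": "logs",
    "dataset": "snmp.trap",
    "namespace": "default"
  },
  "SNMPv2-MIB::snmpTrapOID.0": "SNMPv2-SMI::enterprises.8072.2.3.0.1"
}
```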
You can test the container by sending a test trap using `snmptrap`:

```bash
# SNMPv2c test trap
snmptrap -v2c -c public localhost:1162 '' 1.3.6.1.4.1.8072.2.3.0.1 \
  1.3.6.1.4.1.8072.2.3.2.1 i 123456
```

Check if the trap was received:

```bash
# Query the data stream directly
curl -u elastic:password \
  "https://your-elasticsearch:9200/logs-snmp.trap-default/_search?pretty"

# Or query all SNMP trap data streams across namespaces
curl -u elastic:password \
  "https://your-elasticsearch:9200/logs-snmp.trap-*/_search?pretty"
```

Check the logs for errors:
```bash
docker logs logstash-snmptrap
```

Common issues:

- Elasticsearch connection refused: check `ELASTICSEARCH_HOSTS` and network connectivity
- Authentication failed: verify `ELASTICSEARCH_USER` and `ELASTICSEARCH_PASSWORD`
- SSL/TLS errors: check certificate configuration, or set `ELASTICSEARCH_SSL_VERIFICATION_MODE=none` for testing
1. Verify the container is listening:

   ```bash
   docker exec logstash-snmptrap netstat -an | grep 1162
   ```

2. Check if traps are reaching the container:

   ```bash
   docker exec logstash-snmptrap tcpdump -i any -n port 1162
   ```

3. Verify your network devices are configured to send traps to the correct IP and port.
Uncomment the stdout output in `pipeline/snmptrap.conf`:

```
output {
  stdout {
    codec => rubydebug
  }
}
```

Rebuild the container and check the logs to see raw trap data.
You can add custom filters in `pipeline/snmptrap.conf` to:

- Parse specific OIDs
- Add tags based on trap type
- Enrich data with additional fields
- Drop unwanted traps

Example:

```
filter {
  if [oid] == "1.3.6.1.4.1.9.9.41.2.0.1" {
    mutate {
      add_tag => ["cisco", "cpu-threshold"]
    }
  }
}
```

To run multiple pipelines (e.g., for different SNMP versions or sources), add more entries to `config/pipelines.yml`:

```yaml
- pipeline.id: snmptrap-v2c
  path.config: "/usr/share/logstash/pipeline/snmptrap-v2c.conf"
- pipeline.id: snmptrap-v3
  path.config: "/usr/share/logstash/pipeline/snmptrap-v3.conf"
```

Logstash exposes metrics on port 9600. You can monitor the container using:
```bash
# Node stats
curl http://localhost:9600/_node/stats?pretty

# Pipeline stats
curl http://localhost:9600/_node/stats/pipelines?pretty

# Hot threads (for troubleshooting)
curl http://localhost:9600/_node/hot_threads?pretty
```

This configuration is provided as-is for use with the Elastic Stack. Refer to Elastic's licensing for Logstash and the Elastic Stack.