Premise
There are some scenarios where there's simply no premade ingest pipeline for a log format, even log formats that you'd think would have something perfectly simple premade for them. Luckily, it's incredibly easy to tell a tool such as Snort to output its logs in JSON and then have Filebeat automatically decode them using its built-in JSON decoder.
This came up for me when I was trying to pull some meaningful fields out of Snort with a simple Filebeat and Elastic index setup in my GNS3 lab, which doesn't have access to fancy enterprise things like Elastic Agent.
This post assumes that Snort is already installed and that its basic configuration is already understood.
⛬
Parse Snort logs as JSON
The folks who create Snort have written a bit about this on their blog, but I'd argue it's a bit too general to get a good idea of how it might be applied in practice.
- add the following to snort.lua in order to get it spitting out logs in preformatted JSON
- there are more fields than this; they can be found in the Snort reference, and they might differ depending on the Snort version used
alert_json =
{
    file = true,
    fields = 'timestamp pkt_num proto pkt_gen pkt_len dir src_addr src_port dst_addr dst_port service rule priority class action b64_data',
}
- change filebeat.yml so that it only ships the JSON
- point it at wherever the Snort alerts are being generated, probably somewhere in /var/log/something for most people
filebeat.inputs:
  - type: filestream
    id: filebeat-firewall
    enabled: true
    paths:
      - /var/log/snort/alert_json.txt
- here’s some sample JSON that should be inside an alert log once it starts generating content, look at the nice fields ahhh
{
  "timestamp" : "05/10-12:59:30.263657",
  "pkt_num" : 30,
  "proto" : "ICMP",
  "pkt_gen" : "raw",
  "pkt_len" : 84,
  "dir" : "C2S",
  "src_addr" : "10.10.30.2",
  "dst_addr" : "172.16.30.2",
  "service" : "unknown",
  "rule" : "1:384:8",
  "priority" : 3,
  "class" : "Misc activity",
  "action" : "allow",
  "b64_data" : "SomeB64StringHereForTheLogLines"
}
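Each alert is one self-contained JSON object per line, which is exactly what Filebeat's JSON decoding relies on. As a quick sanity check, you can round-trip a line through a short script (the sample record here is just the one from above):

```python
import json

# One alert line as emitted by Snort's alert_json logger (sample from above)
line = '{"timestamp": "05/10-12:59:30.263657", "pkt_num": 30, "proto": "ICMP", "pkt_gen": "raw", "pkt_len": 84, "dir": "C2S", "src_addr": "10.10.30.2", "dst_addr": "172.16.30.2", "service": "unknown", "rule": "1:384:8", "priority": 3, "class": "Misc activity", "action": "allow", "b64_data": "SomeB64StringHereForTheLogLines"}'

# If this raises, the logger isn't producing clean one-line JSON
alert = json.loads(line)
print(alert["src_addr"], "->", alert["dst_addr"], alert["proto"], alert["action"])
# 10.10.30.2 -> 172.16.30.2 ICMP allow
```

If a line fails to parse here, it'll fail in Filebeat too, so it's worth checking before blaming the pipeline.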
The important step
- Take a peep at the Filebeat reference documentation for a better explanation than I could give of how this part works.
- add a decode_json_fields processor to the Filebeat config to expand the JSON lines into proper fields
- max_depth is 1 since these alerts are flat, never nested more than one level
- decoded fields get dumped into the decoded subfield
processors:
  - decode_json_fields:
      fields: ["message"]
      max_depth: 1
      target: "decoded"
      overwrite_keys: false
      add_error_key: true
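For a feel of what that processor does to each event, here's a rough Python imitation (this is a sketch of the effect, not Filebeat's actual implementation): the raw line arrives in the event's message field, and the parsed object lands under the decoded key; with overwrite_keys set to false, existing keys are left alone.

```python
import json

def decode_json_fields(event, fields=("message",), target="decoded"):
    """Rough imitation of Filebeat's decode_json_fields processor,
    with overwrite_keys: false and add_error_key: true."""
    for field in fields:
        try:
            decoded = json.loads(event[field])
        except (KeyError, ValueError):
            # add_error_key: true -> tag the event instead of dropping it
            event["error"] = {"message": f"could not decode field {field}"}
            continue
        # overwrite_keys: false -> don't clobber an existing key
        event.setdefault(target, decoded)
    return event

event = {"message": '{"proto": "ICMP", "action": "allow"}'}
decode_json_fields(event)
print(event["decoded"]["proto"])  # ICMP
```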
- and set the path in the Snort module as well (which isn’t actually necessary in this case since my logs are already in JSON, but maybe you’re using the Snort module for some reason)
# Set paths for the log files when file input is used.
var.paths: ["/var/log/snort/alert_json.txt"]
# Toggle output of non-ECS fields (default true).
Result
- bask in the glory of your easy clean fields. This principle can be applied to just about any JSON logs, but you'll need to adjust max_depth if they happen to be nested or more complex than this.
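To see why max_depth matters: a depth of 1 only decodes the outer object, so if a field's value is itself a JSON string, it stays a string until a second decoding pass. A small illustration, using a made-up nested record rather than anything Snort emits:

```python
import json

# Hypothetical nested log line: the "payload" value is itself a JSON string
line = '{"level": "warn", "payload": "{\\"code\\": 42}"}'

outer = json.loads(line)              # what max_depth: 1 gets you
print(type(outer["payload"]))         # still a plain string

inner = json.loads(outer["payload"])  # the extra pass max_depth: 2 would do
print(inner["code"])                  # 42
```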
Cheers ~
-N