Hello,
I am unable to get the multiline logs produced by a Docker container (raw format, exported to a HEC input) merged into single events. The container is started with:
docker run --log-driver=splunk \
--log-opt splunk-token=**************** \
--log-opt splunk-url=https://localhost:8088 \
--log-opt splunk-capath=/opt/splunk/etc/auth/server.pem \
--log-opt splunk-caname=SplunkServerDefaultCert \
--log-opt splunk-format=raw \
--log-opt tag="" \
--log-opt splunk-source="test_source" \
--log-opt splunk-sourcetype="docker" \
and props.conf looks like this:
[docker]
LINE_BREAKER = \d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}
MAX_EVENTS = 10000
TRUNCATE = 111111110
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = True
The logs are normal Tomcat logs:
2018-08-07 11:46:05,998 [http://localhost/bin/view/Main/] WARN o.a.s.a.RequestProcessor - Unhandled Exception thrown: class com.xpn.xwiki.XWikiException
2018-08-07 11:46:23,802 [http://localhost/bin/view/Main/] ERROR c.x.x.s.DBCPConnectionProvider - Could not create a DBCP pool. There is an error in the Hibernate configuration file, please review it.
java.sql.SQLException: Cannot load JDBC driver class 'org.hsqldb.jdbcDriver'
at org.apache.commons.dbcp2.BasicDataSource.createConnectionFactory(BasicDataSource.java:2139)
at org.apache.commons.dbcp2.BasicDataSource.createDataSource(BasicDataSource.java:2033)
I have made several attempts (BREAK_ONLY_BEFORE, LINE_BREAKER), but the merging still does not take place before indexing. Are there special rules for data coming in via HEC?
Thanks
@petreb
I believe line merging is only applied on the raw endpoint of HTTP Event Collector (http://dev.splunk.com/view/event-collector/SP-CAAAE6P#raw), and as far as I know the Docker splunk logging driver does not use that endpoint (even with splunk-format=raw it sends each line as a separate event to the event endpoint), so the events need to be merged before you send them to HEC.
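For illustration, here is a rough sketch of both options with plain curl; the token, host, channel GUID and the catalina.out file name are placeholders based on your command, not anything specific you have to use. Option 1 posts an already-merged multiline event to the event endpoint (the endpoint the logging driver itself talks to); option 2 posts unparsed text to the raw endpoint, where index-time line breaking from props.conf is applied (note that LINE_BREAKER only takes effect if the regex contains at least one capturing group, e.g. LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}).

# Option 1: merge the stack trace yourself and send it as one event to the
# HEC event endpoint; the newlines stay inside a single "event" string.
curl -k https://localhost:8088/services/collector/event \
  -H "Authorization: Splunk ****************" \
  -d '{"sourcetype": "docker", "source": "test_source", "event": "2018-08-07 11:46:23,802 [...] ERROR c.x.x.s.DBCPConnectionProvider - Could not create a DBCP pool.\njava.sql.SQLException: Cannot load JDBC driver class ...\n\tat org.apache.commons.dbcp2.BasicDataSource.createConnectionFactory(BasicDataSource.java:2139)"}'

# Option 2: send unparsed text to the raw endpoint, where the line breaking
# from props.conf is applied at index time; the channel is an arbitrary GUID.
curl -k "https://localhost:8088/services/collector/raw?channel=11111111-2222-3333-4444-555555555555&sourcetype=docker&source=test_source" \
  -H "Authorization: Splunk ****************" \
  --data-binary @catalina.out

Neither call goes through the logging driver, of course; they are only meant to show where the merging has to happen.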
I can recommend looking at our solution for forwarding Docker logs to Splunk (https://www.outcoldsolutions.com/docs/collectorfordocker/#join-rules), which lets you predefine patterns for how lines should be merged before we send them to HEC; see https://www.outcoldsolutions.com/blog/2018-03-11-forwarding-pretty-json-logs-to-splunk/ for an example.
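As a very rough, generic illustration of the idea (this is not the collector's actual configuration syntax, and catalina.out is just an assumed file name): a join rule is essentially a pattern describing where a new event starts, and something like the following would glue continuation lines onto the preceding timestamped line before the text is handed to HEC.

# Merge lines that do not start with a "YYYY-MM-DD" timestamp into the
# previous event; each output line is one merged event with literal \n
# separators, ready to be wrapped into a HEC event payload.
awk '
  /^[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] / {
    if (buf != "") print buf
    buf = $0
    next
  }
  { buf = buf "\\n" $0 }
  END { if (buf != "") print buf }
' catalina.out

In practice you would let a forwarder or the collector linked above do this for you, since it also has to handle log rotation, partial lines and the actual delivery to HEC.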