field mapping
Hi,
I have a question from one of my customers, and I'm not sure of the answer. Can anyone help me? We index Bluecoat proxy logs and map fields, but the customer wants to know whether there is any way to validate that the mapping is working, or whether the data isn't formatted properly.
When Splunk ingests log data (e.g. Bluecoat log data) and applies a configuration mapping for an expected set of fields, some of those fields may be either
a) not present in the raw data being ingested, or
b) not in the expected structure (too long, too short, malformed, garbled, etc.).
What happens when Splunk fails to map a field in these cases – does Splunk log the mapping failure somewhere?
Any information you could provide on Splunk failure logging around field mapping when ingesting data would be useful for us when verifying how well Bluecoat log ingestion for Splunk is working.
Field mapping is done at search time in almost all cases, so there is likely nothing to log at ingestion time.
What you could do is check the field count for a field that is supposed to be present in every event – for proxy logs I'd guess that clientip, http-status, time-taken, or bytes (or field names to that effect) should appear in every event. A search like
sourcetype=bcoat_proxy NOT clientip=*
or
sourcetype=bcoat_proxy | head 1000 | stats c(bytes)
will give you an indication of whether the field extraction is working well.
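The two checks above can be combined into a single search that reports per-field coverage as a percentage. This is a sketch: the field names (clientip, bytes) and the sourcetype (bcoat_proxy) are taken from this thread and may differ in your environment.

```
sourcetype=bcoat_proxy
| head 1000
| stats count AS total, count(clientip) AS has_clientip, count(bytes) AS has_bytes
| eval clientip_pct=round(100*has_clientip/total,1), bytes_pct=round(100*has_bytes/total,1)
```

Since stats count(field) only counts events where the field exists, a coverage value close to 100% suggests the extraction is working; a much lower value points at events the mapping failed to parse.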
/K
These field mappings are done at the indexer layer.
