When you have control of the logging in an application, what is the recommended way to make things as easy as possible for Splunk to digest and report on sets of tuples?
What should the log event(s) look like, and what would the search look like if you want to know each individual type and its count?
Specific example: I have a dynamically generated set containing types and counts - myset={(widgetA | 100), (widgetB | 200), (widgetC | 5)}
Short of printing each on a separate line, what's the simplest approach? Is the above a good format?
Having semantic logging expressed as key=value pairs is bread and butter. The issue here is sets with dynamic membership, where the "schema" of the tuple is defined by {(type, count)}. To make this more difficult, but no less relevant: imagine your tuple was defined by {(type, count, successes, failures)}.
All key=value pair formats are auto-extracted, so perhaps try:
widgetA=100 widgetB=200 widgetC=5
You could even have other fields that identify the app like:
appname=myappnamehere widgetA=100 widgetB=200 widgetC=5
Enjoy!
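A line like that could be built from a dynamic set at log time. Here is a minimal sketch in Python (the dict name and helper function are illustrative, not part of any Splunk API):

```python
# Dynamic set of (type, count) pairs, modeled as a dict.
counts = {"widgetA": 100, "widgetB": 200, "widgetC": 5}

def format_event(app, counts):
    """Build one space-separated key=value event line for Splunk."""
    pairs = " ".join(f"{k}={v}" for k, v in counts.items())
    return f"appname={app} {pairs}"

print(format_event("myappnamehere", counts))
# appname=myappnamehere widgetA=100 widgetB=200 widgetC=5
```

On the search side, one hedged sketch: since each widget type becomes its own field, you could aggregate with something like `... | stats sum(widgetA) sum(widgetB) sum(widgetC)`. If the set of field names is truly dynamic, that gets awkward, which may push you toward `foreach widget*` or toward logging one type per event with fixed field names (`type=widgetA count=100`).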
Thanks for the responses. There is actually some nuance to this that I don't think I communicated well; I have modified the question to reflect that.
One added detail: if the characters separating your pairs (here, the space and equals characters) can occur in your values, quote those values.
key=some_value key2="Some value" key3=12345 key4="prop1=foo prop2=bar"
Something like that will auto-extract very well with Splunk, and you end up with:
key = some_value
key2 = Some value
key3 = 12345
key4 = prop1=foo prop2=bar
prop1 = foo
prop2 = bar
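The quoting rule above can be applied mechanically when building the line. A small sketch in Python (assumptions: quote a value whenever it contains a space or an equals sign; function names are illustrative):

```python
def quote_if_needed(value):
    """Quote a value if it contains a pair separator (space or '=')."""
    s = str(value)
    if " " in s or "=" in s:
        return f'"{s}"'
    return s

def format_pairs(pairs):
    """Join key=value pairs into one event line, quoting where needed."""
    return " ".join(f"{k}={quote_if_needed(v)}" for k, v in pairs.items())

print(format_pairs({
    "key": "some_value",
    "key2": "Some value",
    "key3": 12345,
    "key4": "prop1=foo prop2=bar",
}))
# key=some_value key2="Some value" key3=12345 key4="prop1=foo prop2=bar"
```

Note this sketch does not escape embedded double quotes inside values; if those can occur, you would need an additional escaping rule.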