Hi,
I have a set of Splunk events, each of which can contain one of several patterns of fields. For example:
2011-01-01T12:00:00.000-0800 a=1 b=2
2011-01-01T12:00:00.001-0800 a=1 b=2
2011-01-01T12:00:00.002-0800 c=10
2011-01-01T12:00:00.003-0800 c=10
2011-01-01T12:00:00.004-0800 c=10
2011-01-01T12:00:00.005-0800 d=99
With the above data, I want to count how many events each field appears in. The output of such a query would be something like this:
fields | count
a      | 2
b      | 2
c      | 3
d      | 1
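Not Splunk, but as a sanity check, the desired counts can be reproduced from the sample events above in plain Python (a sketch; only the presence of each field is counted, values are ignored):

```python
from collections import Counter

# Sample events from the question: timestamp followed by key=value pairs.
events = [
    "2011-01-01T12:00:00.000-0800 a=1 b=2",
    "2011-01-01T12:00:00.001-0800 a=1 b=2",
    "2011-01-01T12:00:00.002-0800 c=10",
    "2011-01-01T12:00:00.003-0800 c=10",
    "2011-01-01T12:00:00.004-0800 c=10",
    "2011-01-01T12:00:00.005-0800 d=99",
]

counts = Counter()
for event in events:
    # Skip the timestamp; count each field name to the left of '='.
    for pair in event.split()[1:]:
        field, _, _ = pair.partition("=")
        counts[field] += 1

for field in sorted(counts):
    print(f"{field} | {counts[field]}")
# a | 2
# b | 2
# c | 3
# d | 1
```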
Can anyone suggest a query for me to use to do this?
The best you can do given your requirement of not knowing the fields ahead of time is:
... | stats count(*) | transpose
This will give you a count of ALL fields present in the search.
Hope this helps.
> please upvote and accept answer if you find it useful - thanks!
One way I was going about it was to use rex:
... | rex field=_raw "\t(?<field>[^\t=]+)="
Though this isn't as general as the accepted answer nor probably as fast.
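The rex idea — extracting the field name that precedes each `=` out of `_raw` — can be sketched outside Splunk with an equivalent Python regex (the pattern and group handling here are assumptions for illustration, not the original post's exact rex):

```python
import re
from collections import Counter

# Two of the sample raw events from the question.
raw_events = [
    "2011-01-01T12:00:00.000-0800 a=1 b=2",
    "2011-01-01T12:00:00.005-0800 d=99",
]

# Rough equivalent of the rex capture group: grab the token before each '='.
pattern = re.compile(r"(\w+)=")

counts = Counter()
for raw in raw_events:
    # findall returns every field name occurring in the event.
    counts.update(pattern.findall(raw))

print(counts)  # occurrences of each field name across the events
```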
Thanks, this was very helpful.
Is it important to have the results in columns rather than rows?
You could do
... | stats count(a),count(b),count(c),count(d)
which will give you a count of each field in a new column. If you want it in rows instead, as in your example, use transpose:
... | stats count(a),count(b),count(c),count(d) | transpose
Thank you.
Just use wildcards:
... | stats count(*)
This would work if I knew all the fields that would be present, but suppose I didn't know. Is there a way to do this?