I was trying to create a tag/eventtype/equivalent for a message length checksum in our logfiles, and it seems eventtypes cannot have subsearches.
Log Entry: 20140815143255713732 R 0004 ,OK)
Fields: time, rw_mode, message_length, message
Each log entry writes out the number of bytes expected followed by the message received, and I was trying to tag to make sure that these two numbers match.
Search: sourcetype=mip | eval msglength=len(message) | search msglength!=message_length
Edit:
What I am trying to find out, I guess, is whether there is something (field, tag, eventtype, configuration, don't care) that lets me calculate these values once at indexing time and store what I am guessing would be a "calculated field". This use case aside, it would be nice to be able to run a validation test on log entries and flag the broken ones.
That's not going to work as-is because there's a pipe in your search.
http://docs.splunk.com/Documentation/Splunk/6.1.3/Knowledge/Classifyandgroupsimilarevents#Important_...
However, nothing's stopping you from defining a calculated field msglength=len(message) and moving the comparison into the base search. Then your whole search has no pipes and can be stored in an event type.
Note, this kind of search isn't going to be fast because Splunk has to load the entire event, calculate the length, and then filter.
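For reference, a minimal sketch of that approach (sourcetype and stanza names are assumed, not confirmed). Since a plain base search can't compare one field against another field, one workaround is to have the calculated field do the comparison itself and emit a flag, and have the event type match on the flag:

```
# props.conf -- search-time calculated fields for the "mip" sourcetype (name assumed)
[mip]
EVAL-msglength = len(message)
EVAL-length_mismatch = if(len(message) != tonumber(message_length), 1, 0)

# eventtypes.conf -- no pipes, so this is a valid event type (name assumed)
[mip_length_mismatch]
search = sourcetype=mip length_mismatch=1
```

The tonumber() is there because message_length in the sample event looks like a zero-padded string ("0004"), which may not compare cleanly against the number returned by len().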
It seems that what I am trying to do is exactly why data models exist, or there may be other ways to calculate fields in props.conf. Still, the question of whether there is a way to compute this value once at index time remains open.
I was under the impression that "accelerated" models had some sort of caching? Ah well, question remains open.
Even data models are expressed at search time rather than index time, though, which means your larger concern is still an issue. But it may be an easier way of handling what you're trying to do.
No worries, I had actually tried it before I asked the question since I also read that doc. 😛
Sorry, I was basing my (incorrect) comment on the following quote from http://docs.splunk.com/Documentation/Splunk/6.1.3/Knowledge/Abouteventtypes -
"...you can save any search as an event type."
Sometimes the doc contradicts itself, apparently. Sorry about the wild goose chase.
At index-time you basically have the expressive power of regexes. That means you cannot do maths or even count, hence you cannot calculate the length of a field and index that value... all you can do is index the field itself. That might make filtering by its length a bit faster, no real-world experience with that though.
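To illustrate, indexing the raw field (not its computed length) would look roughly like this; the regex is a guess based on the sample log entry in the question, and the stanza names are made up:

```
# transforms.conf -- index-time extraction of message_length (regex is a guess at the format)
[mip_msglen]
REGEX = ^\d{20}\s+\w\s+(\d+)\s
FORMAT = message_length::$1
WRITE_META = true

# props.conf
[mip]
TRANSFORMS-msglen = mip_msglen

# fields.conf -- mark the extracted field as indexed so it can be searched efficiently
[message_length]
INDEXED = true
```

Note this only indexes the value the log already contains; the actual length calculation still has to happen at search time.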
Are you trying to detect faulty transmissions? If so, going through each event once would be enough... schedule a search for this that triggers some alert if a bad event has occurred, and there's no need to do any index-time calculations at all because you only search once.
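A scheduled alert along those lines might look like this in savedsearches.conf; the names, schedule, email address, and exact search are all illustrative:

```
# savedsearches.conf -- check the last 15 minutes, every 15 minutes (illustrative)
[mip_bad_length_alert]
search = sourcetype=mip | eval msglength=len(message) | where msglength != tonumber(message_length)
enableSched = 1
cron_schedule = */15 * * * *
dispatch.earliest_time = -15m
dispatch.latest_time = now
alert_type = number of events
alert_comparator = greater than
alert_threshold = 0
actions = email
action.email.to = ops@example.com
```

Since each event is only ever inspected once by the scheduled search, the per-search calculation cost stops being a concern.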
My big concern is that this calculation must be performed per entry per search. Since the message length of a given entry is static, it seems like there should be a way to do a calculation when the event is getting indexed. I have updated the question.
It does work in the search bar; when I go to save it as an eventtype I get an error, though. I will try out where.
All tagging and event type calculation happens at search time. You should be able to take any search and turn it into an event type.
Does the search you posted work from the search bar? I would have guessed that you'd need to use where instead of search in the third clause to get the results you want.
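Concretely, the where version would be (tonumber added on the assumption that message_length is a zero-padded string like "0004"):

```
sourcetype=mip | eval msglength=len(message) | where msglength != tonumber(message_length)
```

The difference matters because in the search command, msglength!=message_length compares msglength against the literal string "message_length", while where evaluates both sides as fields.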