You could try it like this:
index=whatever your-search-terms
| transaction 52634 startswith=eval(isnotnull(cs_uri_query)) endswith=eval(like(cs_method, "POST"))
You might need to fine-tune the transaction command; you'll find more details here: transaction docs
... View more
The good news:
It's a known issue - it's been tested, verified, and logged in the issue tracker.
It's not on purpose or anything, but it seems to be a bug that only hits the combination of real-time search, version 7.1, and the free license.
The bad news:
There seems to be no workaround or fix yet.
... View more
Port 8884 isn't a default port used by Splunk in any way. Just to be sure, I searched the docs and Google for it, but there is absolutely nothing about it (as you might already have noticed).
I'd search through all .conf files on the affected servers for the string "8884" - it must be mentioned somewhere.
Did you, by any chance, install a certain add-on or app that might have opened that port?
You could also do this from the CLI: run splunk btool inputs list and check the output for 8884.
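If grepping by hand gets tedious, the .conf scan can be scripted. This is a minimal sketch (the function name and the idea of pointing it at $SPLUNK_HOME/etc are my own, not from the original post):

```python
from pathlib import Path

def find_port_refs(conf_root, port="8884"):
    """Return paths of .conf files under conf_root that mention the port."""
    hits = []
    for conf in Path(conf_root).rglob("*.conf"):
        if port in conf.read_text(errors="ignore"):
            hits.append(str(conf))
    return sorted(hits)
```

On a typical installation you would point it at something like /opt/splunk/etc.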
Besides that, I'm a little out of ideas on this.
... View more
Your search should most likely look like this:
index=* sourcetype="*WinEventLog:Security" (4624 OR 4647 OR 4648 OR 551 OR 552 OR 540 OR 528 OR 4768 OR 4769 OR 4770 OR 4771 OR 4774 OR 4776 OR 4778 OR 4779 OR 672 OR 673 OR 674 OR 675 OR 678 OR 680 OR 682 OR 683) (EventCode=4624 OR EventCode=4647 OR EventCode=4648 OR EventCode=551 OR EventCode=552 OR EventCode=540 OR EventCode=528 OR EventCode=4768 OR EventCode=4769 OR EventCode=4770 OR EventCode=4771 OR EventCode=4774 OR EventCode=4776 OR EventCode=4778 OR EventCode=4779 OR EventCode=672 OR EventCode=673 OR EventCode=674 OR EventCode=675 OR EventCode=678 OR EventCode=680 OR EventCode=682 OR EventCode=683)
| lookup windows_event_lookup.csv EventID AS EventCode OUTPUT Event_Desc
| table user Event_Desc
Putting the search terms in the first line lets Splunk fetch only the relevant events from the start, and the lookup then runs only on those events instead of on everything - two performance improvements in one.
The point about EventCode/EventID being swapped has already been made by others. 😉
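If you want to sanity-check the lookup logic outside Splunk: it's just a join on the event code. A tiny sketch (the sample CSV rows are made up, not the real windows_event_lookup.csv):

```python
import csv
import io

# Made-up miniature of windows_event_lookup.csv
lookup_csv = """EventID,Event_Desc
4624,An account was successfully logged on
4647,User initiated logoff
"""
lookup = {row["EventID"]: row["Event_Desc"]
          for row in csv.DictReader(io.StringIO(lookup_csv))}

# Each event's EventCode is matched against the lookup's EventID column
events = [{"user": "alice", "EventCode": "4624"}]
for ev in events:
    ev["Event_Desc"] = lookup.get(ev["EventCode"], "")
```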
... View more
Basically - the initial sourcetype determines the props.conf rules that are being applied to the data at index time.
Therefore, you can rewrite the sourcetype at index-time, but Splunk will not use index-time rules for that new sourcetype. It will however use search-time rules for that new sourcetype.
So you need to get the data in with the right sourcetype from the very beginning - best practice is not to let Splunk listen on port 514 directly, but to use a syslog server like syslog-ng that writes the data to disk, split by hostname/IP of the sender.
You can then build proper file monitors for every device and assign them the proper sourcetype. 🙂
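As a sketch, an inputs.conf for that layout could look like this - the directory layout, sourcetypes, and device naming are assumptions, adjust them to your environment:

```ini
# Assumes syslog-ng writes one directory per sending host under /var/log/remote
[monitor:///var/log/remote/firewall-*/messages.log]
sourcetype = cisco:asa
host_segment = 4

[monitor:///var/log/remote/switch-*/messages.log]
sourcetype = cisco:ios
host_segment = 4
```

host_segment = 4 tells Splunk to take the fourth path segment (the per-device directory) as the host field.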
... View more
@Yorokobi is right - if you add the HFs as search peers on your Monitoring Console, the MC will contact them via port 8089 and you can use its built-in alerts to get a notification when one of them goes down. This actually works for all Splunk instances, be they indexers, search heads, HFs...
... View more
There is no built-in mechanism in Splunk that allows you to urldecode() before writing to an index, so you can't easily manipulate it like this.
You can stick with the "decode during search time" approach, but that may make fast searches impossible because the data is written to the index still encoded.
Preprocessing would mean running a scripted input or something similar. The script would ingest the data, urldecode it, and output it, so Splunk receives the decoded data. If having the data decoded is important to your use case, that's the way I would go. 🙂
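A minimal sketch of such a preprocessing script - it just reads raw lines on stdin, percent-decodes them, and writes them to stdout for Splunk to pick up (how you wire it in as a scripted input is up to your setup):

```python
import sys
from urllib.parse import unquote_plus

def decode_line(line):
    # Percent-decode (and turn '+' into spaces) before Splunk sees the event
    return unquote_plus(line)

if __name__ == "__main__":
    for line in sys.stdin:
        sys.stdout.write(decode_line(line))
```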
... View more
To build a proper regex, you need to describe your data properly, it has to have some reliable characteristics.
With your example above, multiple characteristics are possible, but without further example data it's hard to find those similarities.
This is an example: ^[^:]+:[^:]+:(?<yourfield>[^:]+)
This one assumes that there are always two parts before the value, separated by : , and the value you want to extract is between the second and third : . If that's true - here's your regex 😉
... View more
Try this:
| makeresults
| eval host="host3.CA.domain.com"
| eval host=if(match(host, "^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$"), host, replace(host, "^([^\.]+)\..*$", "\1"))
More explanation here in the docs, explanation of the regex here.
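For quick testing outside Splunk, the same logic looks like this in Python (note the escaped dots in the IP check, so `.` doesn't match any character):

```python
import re

def shorten_host(host):
    # Leave plain IP addresses alone; otherwise keep only the first label
    if re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", host):
        return host
    return re.sub(r"^([^.]+)\..*$", r"\1", host)
```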
... View more
Did you try setting KV_MODE = json for the corresponding sourcetype in props.conf?
That should actually extract fields from the JSON on its own.
To access certain fields without changing KV_MODE, take a look at | spath here:
http://docs.splunk.com/Documentation/Splunk/6.3.3/SearchReference/Spath
... View more
Can you please add some details to your question? I don't really get what you're trying to do, so adding screenshots, example data, etc. would make it easier to grasp.
... View more
Well, the message is already saying a lot about your problem.
You're running into the configured maximum limit for historical searches.
You can check the Monitoring Console to see how many searches are run, how many are being skipped (and which ones), and some other information.
There are a lot of possible reasons, and you can find a lot of information by Googling "Splunk maximum number of concurrent historical scheduled searches", including a few good blog posts on this very topic - check them out! 🙂
... View more
I would really advise not to do this.
The Windows Subsystem for Linux is nice, but it's far from perfect, and you will run into a bunch of problems using it with something as advanced as Splunk. It lacks a lot of more advanced features, and it's not worth the trouble you'll have with it - instead just spin up a VM 😉
Edit: Sorry for the thread necro - it popped up and I didn't check the timestamp 0=)
... View more
Ah, I didn't know that was possible - I rarely use the GUI. I fear that without actual access, troubleshooting this is difficult - maybe you can find errors in index=_internal ?
... View more
You could either go with crcSalt or initCrcLen .
As your filenames keep changing, the easiest would be an inputs.conf like this:
[monitor://<path to your files>]
crcSalt = <SOURCE>
It will just use the (always different) filename as a salt, so the checksum will differ for each new file - that should solve your problem.
If you had the same issue but the filename were always the same, you would instead have to raise initCrcLen to the point where the files actually differ within that length.
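To see why this matters: Splunk fingerprints only the first initCrcLen bytes of a file (256 by default), so two files that share a long identical header look like the same file. A sketch of that collision, with a made-up header:

```python
import zlib

# Two log files that share a long identical preamble (made-up example)
header = b"#Software: Microsoft Internet Information Services\n" + b"#" * 250
file_a = header + b"\n2018-05-01 GET /a\n"
file_b = header + b"\n2018-05-01 GET /b\n"

INIT_CRC_LEN = 256  # Splunk's default fingerprint length
crc_a = zlib.crc32(file_a[:INIT_CRC_LEN])
crc_b = zlib.crc32(file_b[:INIT_CRC_LEN])
# Identical first 256 bytes -> identical checksums -> the second file is skipped
```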
... View more
Did you add it via the GUI? The FORMAT = $1::$2 is essential; without it, it will most likely not return anything.
I tried that regex here with your sample data, so at least the regex should be fine:
https://regex101.com/r/5JcfIv/1
... View more
Two hints:
The line | search CVE= "*" contains a space after CVE= , which might cause trouble.
The sort command has an implicit limit of 10,000 results, so you might not get everything. Fix this by using | sort 0 -CVE .
... View more
You should not remove these during indexing, because doing so will most likely break all your field extractions - unless all of this information has been extracted as index-time fields, which it most likely hasn't.
You could use a regex like this:
.*\smessage=(?<_raw>.*)$
This would replace the _raw field, which is what you're getting displayed as the actual event text.
So, you can simply set up a props.conf like this:
[your-sourcetype]
EXTRACT-shorten_raw_text = .*\smessage=(?<_raw>.*)$
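You can check what that regex does to an event outside Splunk; the sample event below is made up, only the pattern mirrors the EXTRACT above:

```python
import re

# Made-up sample event with key=value pairs before the message text
event = "2018-05-01 12:00:00 host=web1 severity=info message=User login succeeded"
m = re.match(r".*\smessage=(?P<raw>.*)$", event)
shortened = m.group("raw")  # this is what would end up in _raw
```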
Hope that helps - if it does I'd be happy if you would upvote/accept this answer, so others could profit from it. 🙂
... View more
I've no experience with Splunk Cloud, but on an on-premises installation you would have to do it via config files - there's no way to do this via the GUI. So unless Splunk Cloud offers something special for this case, I guess your way is through support.
... View more