The only reason I can think of to try such a thing would be to set up a small lab for learning Splunk in a slightly more "distributed" setup than just an all-in-one server. But in that case, I'd go for spinning up separate VMs and installing each component on its own VM. Also be prepared for very, very low performance.
The main question is whether your data is getting truncated when you display it (which is rather unlikely) or whether it was truncated on ingestion. Check your data with this:
index=gbi_* (AppName=*)
| eval strlen=len(_raw)
| stats max(strlen)
Found Solution.
<search base="basesearch_time">
  <query> | where Month="$month_token$"
    | table start end
  </query>
  <done>
    <set token="start">$result.start$</set>
...
OK. So run this search:
search sourcetype="my_source" "failed request, request id="
| rex "failed request, request id=(?<request_id>[\w-]+)"
| fields request_id
| rename request_id as search
| format
And manually substitute the results from this one in place of your subsearch.
Hi @PickleRick, sorry for that typo.
search sourcetype="my_source" "failed request, request id="
| rex "failed request, request id=(?<request_id>[\w-]+)"
| fields request_id
| rename request_id as search
The above is the correct one.
From a strictly theoretical perspective, you could store your data on any storage your OS can access. After all, Splunk uses system calls to access its files, so as long as it can open those files, you're "good". But the problem is that not all storage performs equally well, hence the rule of thumb about using local storage only. The "slow" storage that can be used for cold buckets (which are typically accessed less often) usually still means relatively quick HDDs, versus the SSDs recommended for hot/warm storage. Remember that latency in accessing slow storage would have a noticeable impact on Splunk's overall performance, not just on those searches that access cold data.

That's one thing. Another is that if you want to reach over the network for data, the Splunk process must be able to access the share the data is stored on, so you will definitely _not_ be able to do so running Splunk as either the LOCAL_SYSTEM user or the default Splunk user.

But still, the most important thing is that you should not use NAS or NFS for Splunk storage - there is too much overhead and the latency is too high for reasonable performance.
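For illustration, storage tiering is configured per index in indexes.conf. This is a minimal sketch only; the index name and paths are made-up placeholders, but homePath, coldPath, and thawedPath are the standard settings:

```
# indexes.conf - hypothetical example; adjust index name and paths to your environment
[my_index]
# hot/warm buckets: put these on fast local SSD storage
homePath   = /fast_ssd/splunk/my_index/db
# cold buckets: may live on slower, but still local, HDD storage
coldPath   = /slow_hdd/splunk/my_index/colddb
thawedPath = /slow_hdd/splunk/my_index/thaweddb
```

The point is that even the "slow" tier here is a local filesystem, not a network share.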
As the title suggests, I'm looking into whether or not it's possible to load balance Universal Forwarder hosts that are also hosting rsyslog. I want to pointedly ask: is there anyone here doing something like this? The rsyslog config on each host is quite complex - I'm using 9 different custom ports for up to 20 different source devices. If you are curious, it's set up like this: port xxxx used for PDUs, port cccc used for switches, port vvvv for routers, etc. The Universal Forwarders then send the data directly to Splunk Cloud. It's likely not the best, and is certainly not pretty, but it gets the job done. Currently there are 2 dedicated UF hosts for two physical sites. These sites are being combined into a single colo, hence the LB question. Thanks!
OK, this thread is soooo long. The search you just posted definitely has at least one error - there is no command called "request_id"; that's your field's name. So either you copy-pasted it wrong or it's not going to work at all.
Hi @PickleRick, yes I have that. If you scroll up in the conversation, I have pasted the result. Do you want me to post the result of the query below?
search sourcetype="my_source" "failed request, request id="
| rex "failed request, request id=(?<request_id>[\w-]+)"
| request_id
| fields request_id
| rename request_id as search
I have a search that gives me the total number of hits to my website and the average number of hits over a 5-day period. I need to know how to set up a Splunk alert that notifies me when the average number of hits over a 5-day period increases or decreases by 10%. I can't seem to figure this out; any help would be appreciated.
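One possible sketch, assuming your hits come from a web access index (index=web and sourcetype=access_combined are placeholders - swap in your own base search): compute a daily count, maintain a 5-day rolling average with streamstats, then use delta to compare each day's rolling average to the previous one and keep rows where the change exceeds 10%:

```
index=web sourcetype=access_combined
| timechart span=1d count AS hits
| streamstats window=5 avg(hits) AS avg_5d
| delta avg_5d AS diff
| eval pct_change = round(100 * diff / (avg_5d - diff), 2)
| where abs(pct_change) >= 10
```

Saved as an alert with the trigger condition "number of results > 0", this would fire whenever the 5-day average moves by 10% or more in either direction.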
Splunk has a limit on how long a single event can be. If an event is longer, it is truncated on ingestion, which means that only the first $LIMIT (by default 10000) characters are stored within Splunk's index. The rest of the event is irrevocably lost. So you can't display what isn't there - it's simply not saved in your Splunk. That's why @gcusello said you have to talk with your Splunk team about raising the limit for this particular sourcetype if this is an issue for your data.
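If raising the limit is warranted, it's the TRUNCATE setting in props.conf on the indexing tier; the sourcetype name below is a placeholder:

```
# props.conf - hypothetical sourcetype; TRUNCATE is the per-event character limit
[my_long_sourcetype]
# raise from the 10000-character default; setting 0 disables truncation entirely
TRUNCATE = 50000
```

Note this only helps events ingested after the change - anything already truncated stays truncated.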
Hi @PickleRick, yes I have that data. Basically, right now the issue is with the subsearch: it is returning different results than what is actually present, and I am not able to understand why.
Again, my 3 cents - do you have a field called request_ids in your data? Because that's what your subsearch will generate a condition for. And you don't need an explicit format command if you're not overriding any of its default options.
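For illustration (the values here are made up), a subsearch that returns a field named request_ids expands into a condition on that exact field name, e.g.:

```
sourcetype="my_source"
    [ search sourcetype="my_source" "failed request, request id="
      | rex "failed request, request id=(?<request_id>[\w-]+)"
      | stats values(request_id) AS request_ids ]
```

would effectively become something like ( request_ids="abc-123" OR request_ids="def-456" ) - which only matches events that actually contain a field called request_ids, not request_id.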
This is such a vague question and there is so little information...
1. Do you see events in other indexes but not in this one, or can you not find any events anywhere?
2. Were there any changes made to the environment lately?
3. Are you ingesting any data at all? Or was it just a "static" environment? In that case the data might simply have rolled to frozen (got deleted) due to exceeding the retention period.
As @gcusello mentioned, KV store problems don't have much to do with events being present or not. They can cause other issues, but they are not responsible for data suddenly disappearing from indexes.
Supposedly Splunk can support such time resolution, but I haven't found any official info on that. You can always test it by defining a time-parsing rule with nanosecond precision and ingesting non-monotonically timed events differing at - for example - the nanosecond level. If you can later get your search sorted by _time (regardless of whether it's the default reverse chronological order or an explicit sort command), that would mean it works properly. Otherwise it would mean that Splunk either doesn't store time with such precision or at least doesn't use it for practical purposes.
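A hedged sketch of such a test (the sourcetype name and timestamp layout are assumptions; %9N is Splunk's subsecond strptime extension for nine fractional digits):

```
# props.conf - hypothetical sourcetype for the nanosecond-precision test
[nano_test]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9N
SHOULD_LINEMERGE = false
```

Then ingest a few events whose timestamps differ only at the nanosecond level, deliberately out of order, and check whether searching with | sort _time returns them correctly ordered.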
As @PickleRick replied, you can avoid this just by using eval, or by applying filters to look for everything different from null or blank. You can also create a field extraction using regex to avoid situations like this, for example:
| rex field=_raw "cs_username=\"(?<cs_username>.+?)\"\s"
https://regex101.com/r/f6booK/1
Hi @ITWhisperer, below is my query, which returns 250+ events:
search sourcetype="my_source" "failed request, request id="
| rex "failed request, request id=(?<request_id>\"?[\w-]+\"?)"
| stats values(request_id) as request_ids
| eval request_ids = "\"" . mvjoin(request_ids, "\" OR \"") . "\""
| eval request_ids = replace(request_ids, "^request_id=", "")
| format