Hi @bgill0123, You should run your search over a time range of 15 days or more so there are enough 5-day periods to compare, and then use the "delta" command like below (assuming your 5-day average hit count field is "five_days_avg_count"):
| delta five_days_avg_count as diff
| eval perc_diff=abs(diff*100/five_days_avg_count)
| search perc_diff > 10
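For context, a minimal end-to-end sketch of what the alert search could look like, assuming a hypothetical index named "web" and a 5-day timechart span standing in for your actual base search:
index=web sourcetype=access_combined earliest=-15d
| timechart span=5d count as five_days_avg_count
| delta five_days_avg_count as diff
| eval perc_diff=abs(diff*100/five_days_avg_count)
| search perc_diff > 10
Saved as an alert running over the last 15+ days and triggering when the number of results is greater than 0, this fires whenever one 5-day bucket differs from the previous one by more than 10%.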
Hi @Skeer-Jamf, This is outside the Splunk context, but you can look at the Linux Keepalived service for redundancy. Keepalived supports an active/passive failover mode, and a load-balancing setup is also possible. It creates and manages a virtual IP address that forwards incoming traffic to healthy backend servers.
Hi @iamsplunker0415, You can use the "date_hour" field to filter on hours (2AM-3AM corresponds to date_hour=2). Please try the sample below; it drops only the Washington events from that hour and keeps everything else:
index=your_index NOT (USA="Washington" date_hour=2)
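One caveat, sketched below: date_hour is derived from the timestamp text in the raw event, so it may not line up with your local 2AM if the data comes from a different timezone. An alternative (the index name is hypothetical) is to derive the hour from _time, which is rendered in your search-time timezone:
index=your_index
| eval hour=tonumber(strftime(_time, "%H"))
| where NOT (USA="Washington" AND hour=2)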
Hello Splunk Community, I have a requirement to exclude events for certain field values between 2AM-3AM every day. For example, the field USA has 4 values: USA = Texas, California, Washington, New York. I want to exclude the events from Washington between 2AM-3AM; however, I still want them during the remaining 23 hours of the day. Is there a search to achieve this?
Thank you @gcusello. When I look at the add-on you posted, it seems to be a tool that helps you use OpenAI, within Splunk, to request additional context/perspective on specific questions or monitors (e.g. "Is this malicious?"). My goal is actually just to get the content into Splunk first. So, for example, a web app that has a chatbot built in. The chatbot might receive input as text (or image/video) from users and then generate a response (e.g. "Here is the documentation/help file relevant to that particular request"). I want to store all the activity between the application/user and OpenAI so I can evaluate security or compliance concerns using Splunk searches. Any thoughts on that? I could ask the App Engineer to log the input and output via HEC, I guess. But I'm wondering if others have started logging OpenAI calls/responses (or any LLM such as Anthropic or Cohere or Gemini etc.) into Splunk yet? Thanks!
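To make the goal concrete, a purely hypothetical sketch of the Splunk side, assuming the app engineers log JSON events with fields such as user, model, prompt and response into an index named llm_logs with a sourcetype like openai:chat (all of these names are assumptions, not an existing integration):
index=llm_logs sourcetype=openai:chat
| eval response_len=len(response)
| stats count as requests dc(user) as distinct_users avg(response_len) as avg_response_len by model
From there you can layer on the compliance checks you care about, for example a rex over the prompt field to flag things that look like credentials or PII.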
The only reason I can think of to try such a thing would be to set up a small lab for learning Splunk in a somewhat more "distributed" setup than just an all-in-one server. But in that case, I'd go for spinning up separate VMs and installing each component on its own VM. Also, be prepared for very, very low performance.
The main question is whether your data is getting truncated when you're displaying it (which is rather unlikely) or whether it was truncated at ingestion. Check your data with this:
index=gbi_* (AppName=*)
| eval strlen=len(_raw)
| stats max(strlen)
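If you also want to see whether events are actually hitting the ceiling rather than just finding the longest one, a variant broken down by sourcetype (10000 here is only the default TRUNCATE value, adjust if your limit differs):
index=gbi_* (AppName=*)
| eval strlen=len(_raw)
| stats max(strlen) as max_len sum(eval(if(strlen>=10000,1,0))) as truncated_events by sourcetype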
Found Solution.
<search base="basesearch_time">
  <query>
    | where Month="$month_token$"
    | table start end
  </query>
  <done>
    <set token="start">$result.start$</set>
    ...
OK. So run:
search sourcetype="my_source" "failed request, request id="
| rex "failed request, request id=(?<request_id>[\w-]+)"
| fields request_id
| rename request_id as search
| format
And substitute your subsearch manually with the results from this one.
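For reference, the combined search would then look roughly like the sketch below, with a hypothetical sourcetype "my_other_source" standing in for wherever the outer events live:
sourcetype="my_other_source"
    [ search sourcetype="my_source" "failed request, request id="
      | rex "failed request, request id=(?<request_id>[\w-]+)"
      | fields request_id
      | rename request_id as search ]
Because the field is renamed to "search", the subsearch should expand into plain keyword terms rather than request_id="..." pairs.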
Hi @PickleRick, sorry for that typo.
search sourcetype="my_source" "failed request, request id="
| rex "failed request, request id=(?<request_id>[\w-]+)"
| fields request_id
| rename request_id as search
The above is the correct one.
From a strictly theoretical perspective, you could store your data on any storage your OS can access. After all, Splunk uses system calls to access its files, so as long as it can open those files, you're "good". But the problem is that not every storage performs equally well, hence the rule of thumb about using local storage only. The "slow" storage that can be used for cold buckets (which are typically accessed less often) usually still means relatively quick HDDs, versus the SSDs recommended for hot/warm storage. Remember that latency in accessing slow storage would have a noticeable impact on overall Splunk performance, not just on those searches that access cold data. That's one thing. Another thing is that if you want to reach over the network for data, the Splunk process must be able to access the share the data is stored on, so you will definitely _not_ be able to do so running Splunk as either the LOCAL_SYSTEM user or the default Splunk user. But still, the most important thing is that you should not use NAS or NFS for Splunk storage - there is too much overhead and the latency is too high for reasonable performance.
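To make the hot/warm vs. cold split concrete, this is roughly what it looks like in indexes.conf, with hypothetical mount points (fast local SSD for hot/warm, slower local HDD for cold):
[my_index]
homePath   = /fast_ssd/splunk/my_index/db
coldPath   = /slow_hdd/splunk/my_index/colddb
thawedPath = /slow_hdd/splunk/my_index/thaweddb
Both paths still need to be storage the splunkd process can open directly and with reasonable latency, which is exactly why NAS/NFS is discouraged even for the cold tier.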
As the title suggests, I'm looking into whether or not it's possible to load balance Universal Forwarder hosts that are also hosting rsyslog. I want to pointedly ask: is there anyone here doing something like this? The rsyslog config on each host is quite complex. I'm using 9 different custom ports for up to 20 different source devices. If you are curious, it's set up like this: port xxxx is used for PDUs, port cccc for switches, port vvvv for routers, etc. The Universal Forwarders then send the data directly to Splunk Cloud. It's likely not the best, and it's certainly not pretty, but it gets the job done. Currently there are 2 dedicated UF hosts for two physical sites. These sites are being combined into a single colo, hence the LB question. Thanks!
OK, this thread is soooo long. The search you just posted definitely has at least one error - there is no command called "request_id"; that's your field's name. So either you copy-pasted it wrong or it's not gonna work at all.
Hi @PickleRick, Yes, I have that. If you scroll up in the conversation above, I have pasted the result. Do you want me to post the result of the query below?
search sourcetype="my_source" "failed request, request id="
| rex "failed request, request id=(?<request_id>[\w-]+)"
| request_id
| fields request_id
| rename request_id as search
I have a search that gives me the total number of hits to my website and the average number of hits over a 5-day period. I need to know how to set up a Splunk alert that notifies me when the average number of hits over a 5-day period increases or decreases by 10%. I can't seem to figure this out; any help would be appreciated.
Splunk has a limit on how long a single event can be. If an event is longer, it is truncated at ingestion, which means that only the first $LIMIT characters (10000 by default, controlled by the TRUNCATE setting in props.conf) are stored in Splunk's index. The rest of the event is irrevocably lost, so you can't display what isn't there - it simply was never saved in your Splunk. That's why @gcusello said you have to talk to your Splunk team about raising the limit for this particular sourcetype if this is an issue for your data.
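If your Splunk team does agree to raise it, the change itself is a small props.conf stanza on the ingestion tier; the sourcetype name and value below are placeholders only:
[your:sourcetype]
TRUNCATE = 50000
It only affects events indexed after the change; anything already truncated cannot be recovered.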
Hi @PickleRick, Yes, I have that data. Basically, right now the issue is with the subsearch: I am getting a different number of results than are actually present, and I am not able to understand why.
Again my 3 cents - do you have the field called request_ids in your data? Because that's what your subsearch will generate a condition for. And you don't need an explicit format command if you're not overriding any default options for it.
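A quick way to sanity check that, sketched below: run the subsearch on its own and pipe it through format, then compare the field name in the generated condition against what your outer events actually contain.
sourcetype="my_source" "failed request, request id="
| rex "failed request, request id=(?<request_id>[\w-]+)"
| fields request_id
| format
If the generated condition references a field the outer events don't have, the subsearch will match nothing, which would explain the unexpected result count.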