I am a new user to Splunk and working to create an alert that triggers if it has been more than 4 hours since the last alert. I am using the following query, which I have tested and which comes back with a valid result:
index=my_index
| stats max(_time) as latest_event_time
| eval time_difference_hours = (now() - latest_event_time) / 3600
| table time_difference_hours
Result: 20.646666667
When I go in and enable the alert, I set it to run on a schedule. Additionally, I choose a custom condition as the trigger and use the following:
eval time_difference_hours > 4
But the alert does not trigger. As you can see from the result, it has been over 20 hours since the last event was received in Splunk.
Not sure what I am missing. I have also modified the query to include a time range with earliest=-24H and latest=now, but that did not work either.
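A note for readers hitting the same issue: in Splunk, a custom trigger condition is itself a search that runs against the alert's results, so it normally needs to start with the search command rather than eval. A minimal sketch, assuming the query above as the alert search:

```
search time_difference_hours > 4
```

The alert then fires when this secondary search returns at least one result.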
Hi @Amit.Bisht,
Thanks for letting me know. If the Community does not chime in, you can always contact AppD Support.
How do I submit a Support ticket? (FAQ)
Splunk version: splunk-9.2.0.1 Host: Linux (Rocky 9) Hello, I am a new user testing Splunk. I installed the instance on Linux (Rocky 9). From reading various Q&A and docs, I see the location to change the instance address/IP and port is in a file within the installation directory called splunk-launch.conf, though it doesn't look like this file exists anymore. Please guide me through changing these settings in the latest version of Splunk (9.2.0.1) in Unix CLI. My goal is to change the web interface address from http://alpha:8000 to http://beta:8000. Thank you.
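For context, a hedged sketch of where these settings usually live in recent Splunk versions (paths assume a default installation): the web port is typically set in web.conf rather than splunk-launch.conf, while the hostname shown in the URL comes from DNS / the OS hostname, not from a Splunk setting:

```
# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
httpport = 8000
```

The port can usually also be changed from the CLI with $SPLUNK_HOME/bin/splunk set web-port 8000, followed by a restart.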
Hi All, I have an alert that shows results for 7:00 AM to 7:01 AM with more than 20 results. The cron schedule for the alert is: * 6-15 * * 1-5 and the condition is: more than 4 results. I checked and found there were more than 4 results in the timeframe 7:00 AM to 7:01 AM, but the alert did not trigger an email. The same alert did trigger at 8 AM. Checking the internal logs, I can see that at 7 AM alert_actions="", but at 8 AM alert_actions="email", which confirms that no email action fired at 7 AM. What else can I check to confirm?
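One way to dig further (a sketch; the saved search name is a placeholder and some field names may vary by Splunk version) is to inspect the scheduler logs for that alert around 7 AM:

```
index=_internal sourcetype=scheduler savedsearch_name="<your_alert_name>"
| table _time scheduled_time status result_count alert_actions
```

Comparing result_count and status between the 7 AM and 8 AM runs can show whether the search ran, whether it returned results, and whether the trigger condition was evaluated as met.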
We are getting WAF logs and the events are very large. We need to drop some lines that have no meaningful value from each event, not the whole event. @gcusello thank you in advance.
Installed Splunk Add-on for Unix and Linux 9.0.0, but not getting memory data for an Ubuntu server. Checks performed: 1) Getting data for logical disk space and CPU, but not memory. 2) sar utility is installed. Enabled the hardware, CPU, and df metric stanzas and added the index details too.
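In the standard add-on layout, memory metrics come from a separate script input, so a hedged sketch of the stanza worth checking (path and interval are illustrative):

```
# $SPLUNK_HOME/etc/apps/Splunk_TA_nix/local/inputs.conf
[script://./bin/vmstat.sh]
interval = 60
sourcetype = vmstat
disabled = 0
```

If only the hardware, CPU, and df stanzas were enabled, the vmstat (memory) input may still be disabled.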
This is a really old post but I had the same problem. A search query that appears to be helping me find these problems is:
index=_internal sourcetype=splunkd log_level=ERROR component=HttpInputDataHandler
The results are imperfect because they don't exactly match what's shown in the authentication failures, but in my case the errors appear to be caused by a source that is sending blank/missing tokens.
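A variation on that search (a sketch; exact field availability may differ by Splunk version) that can help spot which sender is producing the errors:

```
index=_internal sourcetype=splunkd log_level=ERROR component=HttpInputDataHandler
| stats count by host
| sort - count
```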
I was not getting any data at all; that was what was wrong... sorry for the miscommunication. I implemented your idea, and I am getting data now! Thank you!!! However, the Employee column is not being filled. It might be because of a data issue. But I wanted to know if we can label the fields by source. For example, I have UserNumber in both sources meaning different things, and Name in both sources meaning different things. How can I help Splunk differentiate them? Are there any resources you would suggest? Thank you so much!
It depends what it is you are trying to do, and what you think is wrong. As it stands, PARENT_ACCOUNT is not a field beyond the stats command (since it isn't listed as an output field - dc just counts the distinct values of the field without listing them). For the "join", you don't need a join (and joins should usually be avoided where possible, as they are slow and have limitations). Try something like this:
index=* sourcetype=transaction OR sourcetype=users
| eval USERNUMBER=coalesce(USERNUMBER, NUMBER)
| eventstats values(NAME) as Employee by USERNUMBER
| stats dc(PARENT_ACCOUNT) as transactionMade values(Employee) as Employee by POSTDATE, USERNUMBER
| table USERNUMBER Employee PARENT_ACCOUNT POSTDATE transactionMade
We have upwards of 250k forwarders in one of our environments, and various levels of DNS caching make it very difficult for a forwarder to request a deployment server IP from a DNS name and maintain the connection consistently long enough to get the appropriate apps downloaded. I have seen a system request an IP from a DNS name, make an initial connection to a deployment server, then send a DNS query again only to be given a different IP address, which causes issues with the forwarder trying to establish a consistent trusted connection to a deployment server. That switch in deployment server destinations causes the forwarder to just try again later, until it can randomly establish a consistent connection. We put our deployment servers behind a load balancer before, but all the connections and logs showed the forwarders coming from the same IP address, which is something x-forwarded-for support should help solve at our scale.
So, I have one source (transactions) with userNumber and another source (users) with number. I want to join both of them. In each source, they have different field names. I want my table to have the employee's name, which is in the users source and which I get in my 2nd query in the join separately. Below is my SPL as of now:
index=* sourcetype=transaction
| stats dc(PARENT_ACCOUNT) as transactionMade by POSTDATE, USERNUMBER
| join left=L right=R where L.USERNUMBER=R.NUMBER [search sourcetype=users | stats values(NAME) as Employee by NUMBER]
| table USERNUMBER Employee PARENT_ACCOUNT POSTDATE transactionMade
What is it that I am doing wrong?
Question: We are using Commvault Metallic to backup our O365 cloud-based user data in the Microsoft GCC. How can we send the Commvault transaction logs to our on-prem Splunk servers for event analysis and reporting?
Hi @parthiban, you highlighted only the 403 response code; if you want the full string, you could use:
| rex "\"processing_stage\": \"(?<response>[^\"]+)"
which you can test at https://regex101.com/r/mz4c1L/2 Ciao. Giuseppe
Hi everyone, this is my first post in the community; I have been using it for some time and it has been great. However, I now have an issue I cannot find an answer or thread for, so I thought I'd ask if someone is able to help. I have a search which gives me names of people, email addresses, and other data. I would like to know if there is a way, when clicking on a value in the Email field, to open Outlook with the searched emails filled in. Let's say I have 10 results; I would like all 10 emails to be filled into the Outlook email. I am able to do it through drilldown (click.value2 or row.fieldname), but those fill in one specific email. I want this capability for group emails. Before I go and use sendemail, I was wondering if this can be done via mailto, and if so, how? Hope you all have a good day!
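A hedged sketch of one possible approach (the Email field name is taken from the post; the rest is illustrative): collapse all result emails into a single multivalue field in the search itself, then drill down on that combined value:

```
| stats values(Email) as Emails
| eval mailto_link = "mailto:" . mvjoin(Emails, ";")
```

A table drilldown pointing at $row.mailto_link$ would then open the mail client with all addresses at once, though whether the client accepts ; or , as the separator depends on its configuration.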
Hi, can you paste your inputs.conf from the indexer side and outputs.conf from the UF side? Please anonymise (read: replace with xxx etc.) all data which can identify your environment, and any secrets! And put those inside a </> code element in your reply! r. Ismo
Hi, as @richgalloway said, have you looked at the spath command? There are quite a few old answers to similar questions; just use Google/Bing or whatever to find them. https://community.splunk.com/t5/Getting-Data-In/How-to-handle-simple-JSON-array-with-spath/m-p/103174 https://community.splunk.com/t5/Splunk-Search/How-to-parse-my-JSON-data-with-spath-and-table-the-data/m-p/250462 r. Ismo
Hi, how do I extract the fields from the JSON logs below? We have fields like content.jobName and content.region, but I need to extract the content.payload details. How do I extract those values?
"content" : {
"jobName" : "PAY",
"region" : "NZ",
"payload" : [ {
"Aresults" : [ {
"count" : "6",
"errorMessage" : null,
"filename" : "9550044.csv"
} ]
}, {
"Bresults" : [ {
"count" : "6",
"errorMessage" : null,
"filename" : "9550044.csv"
} ]
} ]
}
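Following the spath suggestion elsewhere in this thread, a minimal sketch for this structure (assuming the event is valid JSON with content as the top-level key):

```
| spath path=content.payload{}.Aresults{}.filename output=A_filenames
| spath path=content.payload{}.Bresults{}.count output=B_counts
| table A_filenames B_counts
```

The {} notation steps into JSON arrays, so each path collects the matching values from every element of payload.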