All Topics



Hello everyone, I am new to this place and this is my first question; looking for your help. I have a use case where I am trying to set up an alert and make it dynamic based on the SPL query result; my recipient list is constant, but the alert is not working as I expected. I went through a lot of links and Splunk docs, but I am still stuck in the middle. My requirement is to send an alert for every row of the result, based on status and src (host IP), but I am receiving an alert only for the first row. Here is the query:

index=dummy uri_path | stats count(eval(status>399)) as Error_Count by uri_path, status, user_name, src | where Error_Count > 0

Result:
uri_path    status  user_name  src            Error_Count
/user/new   400     XXX        123.21.321.12  1
/user/show  404     YYY        321.12.32.21   1

My alert subject:
$result.status$ Error while accessing API for user $result.user_name$

My message:
$result.status$ error observed while accessing API $result.uri_path$ with user $result.user_name$ on host $result.src$. For more info please click on the link below.

The alert subject and message do get updated based on the result, but I constantly get the alert only for the first row of the result: "Splunk Alert: 400 Error while access API for User XXX", which is correct for the first row.

Some of the configuration in the alert:
Alert type: cron schedule every 15 minutes, cron expression */15 * * * *, expires after 24 hours.
Trigger alert when: number of results is greater than 0. Trigger: for each result.
Throttle: yes. Suppress results containing field value: src=$result.src$. Suppress triggering for: 20 minutes.

Still, I am getting an alert only for the first row of the result, and I am not sure what I am missing to get alerts for the other rows. As you can see, I have throttled based on src, and in the result src is different for the two rows, so I should be getting both alerts, but I am not.
Can anyone please help me understand this? I want to send the alert based on status and src: if any new status + src combination appears in the result, it should send the alert, whether it is in the first row or the second row of the result. I hope I have expressed my question clearly.
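For illustration, one workaround sometimes used in this situation (the alert_key field below is made up, not part of the original query) is to throttle on a single field that combines status and src, so that each new combination gets its own suppression key:

```
index=dummy uri_path
| stats count(eval(status>399)) as Error_Count by uri_path, status, user_name, src
| where Error_Count > 0
| eval alert_key = status . ":" . src
```

With this, "Suppress results containing field value" would point at alert_key instead of src, while the subject and message tokens stay as they are.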
Hi Team, can someone let me know how to view a created dashboard without logging into Splunk? The requirement is that all users, including non-Splunk users (globally), must be able to view the Splunk dashboard using its URL. I went through some documentation on embedding scheduled reports, and also tried an add-on (Embedded Dashboard for Splunk), but it is not working. We are using Splunk 8.1.1. Can we just make a dashboard public (accessible using only the URL of the required dashboard) without making changes at the server level? Thanks
I have to upload a .csv file, which is generated on my local machine by a script, to a search head (SH) clustered environment using a curl command.
Hello, the join command below truncates events: I get results if I execute the code before the join command, but I get no results if I execute the second part. Considering that my company does not want to increase the subsearch limit, which other solutions can I apply, please?

| inputlookup lookup_patches
| search Standard_PC=1 AND StateName="Non-Compl"
| search OSVersion="*"
| search HOSTNAME=302013154
| join HOSTNAME
    [| inputlookup lookup_fo_all
     | fields SITE RESPONSIBLE_USER DEPARTMENT HOSTNAME BUILDING_CODE ROOM TYPE CATEGORY STATUS ]
| stats last(SITE) as Site, last(BUILDING_CODE) as Building, last(ROOM) as Room, last(RESPONSIBLE_USER) as Responsible, last(DEPARTMENT) as Department, count by HOSTNAME FileName StateName OSVersion
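One join-free alternative, sketched under the assumption that a lookup definition exists for lookup_fo_all keyed on HOSTNAME, is the lookup command, which enriches each row in place and is not subject to subsearch result limits:

```
| inputlookup lookup_patches
| search Standard_PC=1 AND StateName="Non-Compl" AND OSVersion="*" AND HOSTNAME=302013154
| lookup lookup_fo_all HOSTNAME OUTPUT SITE RESPONSIBLE_USER DEPARTMENT BUILDING_CODE ROOM
| stats last(SITE) as Site, last(BUILDING_CODE) as Building, last(ROOM) as Room,
        last(RESPONSIBLE_USER) as Responsible, last(DEPARTMENT) as Department,
        count by HOSTNAME FileName StateName OSVersion
```

If only a CSV file exists and not a lookup definition, one would first need to create the definition (Settings → Lookups → Lookup definitions) so the lookup command can reference it by name.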
The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data.

Currently we are using several data inputs, such as:
- Simple SNMP Getter (the dominant part of our data input)
- around 60 UF agent installs
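To see which queue is actually filling up, one common sketch is to chart queue fill percentage from metrics.log (current_size_kb and max_size_kb are the usual field names there, but treat them as assumptions to verify for your version):

```
index=_internal source=*metrics.log* group=queue
| eval pct_full = round((current_size_kb / max_size_kb) * 100, 1)
| timechart span=5m max(pct_full) by name
```

The queue that sits near 100% furthest downstream (e.g. indexqueue vs. parsingqueue) usually points at where the bottleneck is.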
Hi, I have a lookup file which takes some time to load (the lookup has about 1.9 million (19 lakh) rows); this lookup is used in a dashboard. So we are planning to move to the KV store, and have created one. How do I get the exact time taken to load the file-based lookup versus the time taken to load the KV store? Is there any command to capture the load time of each? Is there a field for this in the Job Inspector?
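A rough way to compare (the lookup names below are stand-ins) is to run a minimal search over each and read the runtime from the Job Inspector, or pull it from the jobs REST endpoint afterwards:

```
| inputlookup my_csv_lookup | stats count
```

```
| inputlookup my_kvstore_lookup | stats count
```

```
| rest /services/search/jobs
| table label title runDuration
```

In the Job Inspector, the Execution Costs section also breaks down time per command (e.g. command.inputlookup), which isolates the load time from the rest of the search.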
Hi All,

We are planning to upgrade Splunk forwarders with Ansible. Our forwarders run from a custom path, and we cannot run Splunk as the root user. We were checking tar options, but I do not understand how to extract the tar file to a custom path. Can anyone help with this?

Regards, Shivanand
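For illustration, tar's -C flag extracts into a custom directory and works as a non-root user, provided that user owns the target path. The paths below are stand-ins for the demo; substitute the real splunkforwarder tarball and your install prefix:

```shell
# Build a stand-in tarball so the demo is self-contained
# (replace with the real splunkforwarder-<version>-Linux-x86_64.tgz).
mkdir -p /tmp/uf_demo/src/splunkforwarder /tmp/uf_demo/custom
echo "demo" > /tmp/uf_demo/src/splunkforwarder/README
tar -czf /tmp/uf_demo/uf.tgz -C /tmp/uf_demo/src splunkforwarder

# The actual step an Ansible task would run: extract into the custom path.
tar -xzf /tmp/uf_demo/uf.tgz -C /tmp/uf_demo/custom
ls /tmp/uf_demo/custom/splunkforwarder
```

In Ansible itself, the built-in unarchive module with dest set to the custom path and become: false achieves the same thing without shelling out to tar.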
Hi Everyone, I have one requirement. I have several dashboards where the default time range is set to the last 7 days, but the data is not always there: for one dashboard the data only goes up to 21st Feb, and for a second dashboard up to 26th Feb. Is there any conditional token I can put in the date/time dropdown so that it first searches the last 24 hours, then the last 7 days, then the last 30 days, and shows the data for whichever range is available? That is, if data exists in the last 24 hours it displays that; otherwise, if data is available within the last 7 days it displays that; otherwise, if data exists within the last 30 days it displays that. Is that possible? Can someone guide me on this?
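One way this is sometimes sketched in Simple XML (token names, the search, and the index are all made up here) is a cheap hidden base search over the narrowest window whose done handler sets the time tokens, falling back to a wider window when it finds nothing:

```xml
<!-- Illustrative sketch only: panels would use $trange.earliest$ / $trange.latest$. -->
<search>
  <query>index=my_index | head 1 | stats count</query>
  <earliest>-24h</earliest>
  <latest>now</latest>
  <done>
    <condition match="'result.count' &gt; 0">
      <set token="trange.earliest">-24h</set>
      <set token="trange.latest">now</set>
    </condition>
    <condition>
      <set token="trange.earliest">-7d</set>
      <set token="trange.latest">now</set>
    </condition>
  </done>
</search>
```

Extending the fallback to 30 days would take a second chained probe search over -7d that sets -30d when it, too, comes back empty.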
I have a custom search app with JavaScript in Splunk version 8, using Simple XML. I am unable to get the "Event Actions" options under each event in the results of my query. When a search query is entered in the normal Search app, each event in the results has an Event Actions menu with options like Build Event Type, Find Performing Timings, Extract Fields, and Show Source. I have attached a screenshot of that. I am not getting these options in the results in my custom search app. I need this feature as soon as possible; could anyone please help me with this?
Hi All, I have created a dashboard which is built entirely on dynamic lookup files in a clustered environment. It basically shows the statistics and required values of the lookup files in different panels, based on the requirement. Now the requirement is to create the same dashboard in another clustered environment where the lookup files do not exist (we are not authorized to upload the necessary lookup files to this SHC). How can this be accomplished? Thanks in advance
When I create a new input in Splunk Web, how can I enter a specific AWS account ID, and where and how do I register the AWS account ID in Splunk? May I also know how to push a specific AWS account's logs to Splunk? Thanks!
One of the indexers in production is in a shutdown state. While trying to start the splunk service on this server, it fails with the following error message:

homePath='/dev/splunk/var/lib/splunk/audit/db' of index=_audit on unusable filesystem. Validating databases (splunkd validatedb) failed with code '1'. If you cannot resolve the issue(s) above after consulting documentation, please file a case online at http://www.splunk.com/page/submit_issue

I did read through the troubleshooting article, and even with "OPTIMISTIC_ABOUT_FILE_LOCKING=1" the splunk service startup is still failing with the same error.
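Before restarting splunkd, it can help to sanity-check the filesystem under the index homePath. A sketch (INDEX_HOME is a stand-in variable; point it at the real path from the error, /dev/splunk/var/lib/splunk/audit/db):

```shell
# Stand-in path for the demo; replace with the homePath from the error.
INDEX_HOME="${INDEX_HOME:-/tmp}"

# 1. Is the path on a mounted filesystem, and of what type?
#    (Some types, e.g. NFS without locking, are unusable for index storage.)
df -T "$INDEX_HOME"

# 2. Can the user running splunkd actually write there?
if touch "$INDEX_HOME/.splunk_write_test" 2>/dev/null; then
  rm -f "$INDEX_HOME/.splunk_write_test"
  echo "writable"
else
  echo "NOT writable"
fi
```

If the mount is missing or read-only, fixing the mount (or the ownership of the path) is what makes validatedb pass, not the file-locking workaround.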
Hi Ninjas, I'm trying to make a table that lists date, domain, action_type, action_type_usage_in_MB, and Domain_usage_in_GB. Here is my query in progress:

sourcetype=access_combined domain=abc
| eval raw_len1=(len(_raw)/(1024*1024*1024))
| stats sum(raw_len1) as Domain_usage_in_GB by domain, action_type, _time
| eval raw_len2=(len(Domain_usage_in_GB)/(1024))
| stats list(action_type) as action_type, list(raw_len2) as action_type_usage_in_MB, sum(Domain_usage_in_GB) as Domain_usage_in_GB by domain
| sort -Domain_usage_in_GB

Here is the output: (screenshot)
Expected output: (screenshot)

Challenges:
- With my query, the GB to MB conversion is not happening properly
- I need to round off the MB and GB values
- Date formatting
Could you please help me achieve this?
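A possible fix, sketched under the assumption that the broken conversion comes from len(Domain_usage_in_GB), which returns the string length of the number rather than dividing it: compute MB per action_type directly, then derive GB per domain, rounding and formatting the date along the way:

```
sourcetype=access_combined domain=abc
| eval date = strftime(_time, "%Y-%m-%d")
| eval raw_mb = len(_raw)/1024/1024
| stats sum(raw_mb) as action_type_usage_in_MB by date, domain, action_type
| eval action_type_usage_in_MB = round(action_type_usage_in_MB, 2)
| eventstats sum(action_type_usage_in_MB) as Domain_usage_in_GB by date, domain
| eval Domain_usage_in_GB = round(Domain_usage_in_GB/1024, 2)
| sort -Domain_usage_in_GB
```

The eventstats keeps one row per action_type while repeating the per-domain GB total on each row; swap it for stats list(...) if the collapsed layout of the original is preferred.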
How do I get a complete list of users logging into Splunk Enterprise and ES? Please share the SPL strings used. Also, how do I prepare a list of users with multiple failed login attempts?
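For illustration, a sketch against the _audit index (field names like action, info, and user are the ones audittrail login events commonly carry, but verify them in your environment; the threshold of 3 is arbitrary):

```
index=_audit action="login attempt"
| stats count as total_attempts,
        count(eval(info="failed")) as failed_attempts,
        latest(_time) as last_attempt by user
| where failed_attempts > 3
| convert ctime(last_attempt)
```

Dropping the final where clause gives the complete list of users who have logged in (or tried to), with their attempt counts.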
We have a new Splunk Cloud instance and have Duo integrated with Splunk via SAML. Authentication works fine, but Splunk returns "Saml response does not contain group information."

In the Duo settings we have a prefix specified for groups, and we have configured duo_splunk_admins and duo_splunk_users in the SAML configuration.
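That error typically means the IdP is not releasing a groups attribute in the SAML assertion at all, so the first thing to verify is the assertion itself. On the Splunk side, on-prem deployments map the groups in authentication.conf (Splunk Cloud does the equivalent in the SAML Groups UI); a sketch, assuming the group names below are exactly what Duo sends:

```
# authentication.conf sketch -- group names must match the SAML
# assertion's group attribute values exactly, prefix included
[roleMap_SAML]
admin = duo_splunk_admins
user  = duo_splunk_users
```

If the mapping looks right, capturing the SAML response in the browser and checking whether it contains a group/role attribute at all usually settles which side is at fault.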
I have recently started administering our Splunk deployments. My question: in the Cisco TA app, our props.conf has a [source::*:514] stanza. Does this mean that any input whose source ends in :514 goes to transforms.conf for parsing? I am trying to trace the stanza in props.conf back to an actual input/sourcetype, as my inputs.conf is not giving me much info on TCP/UDP 514. TIA
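Roughly, yes: [source::*:514] matches any event whose source field ends in :514, which is what network inputs produce by default. A sketch of how the two files line up (the transform name here is illustrative, not the TA's actual one):

```
# inputs.conf (illustrative): a syslog input whose events get
# source = udp:514, matching the TA's props stanza below
[udp://514]
connection_host = ip

# props.conf (in the TA):
[source::*:514]
TRANSFORMS-set_sourcetype = force_cisco_sourcetype
```

So the stanza is keyed on the source, not on a sourcetype, which is why tracing it through inputs.conf by sourcetype alone comes up empty; look instead for [udp://514] or [tcp://514] input stanzas.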
Could you please share the download link or the installer file for the Splunk Universal Forwarder version 7.1.0 agent?
Hi, I have this search where I want to limit the results to only events that have more than one URL hit for an src_ip. How do I do that?

index=security sourcetype=malware (connect OR disconnect OR recv)
| transaction src_ip
| lookup dnslookup clientip as src_ip OUTPUT clienthost as fqdn
| rex field=fqdn "(?<hostname>[^.]+)\."
| rex field=_raw recv\:\s+User-Agent\:\s+(?<user_agent>.*)
| rex field=_raw recv\:\s+Host\:\s+(?<url>.*)
| eval url=replace(url,"\.","[.]")
| where isnotnull(url)
| table _time hostname url user_agent src_ip fqdn dest_port
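One way this is often done, sketched here with eventstats (url_count is a made-up helper field), is to count distinct URLs per src_ip after the extraction and filter on that count; the fragment would slot in after the where isnotnull(url) line and before the final table:

```
| eventstats dc(url) as url_count by src_ip
| where url_count > 1
| fields - url_count
```

Unlike stats, eventstats keeps every event row while attaching the per-src_ip count, so the existing table command still sees all the original fields.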