All Topics


Hi, I am trying to visualize total execution time for each process. Is there a way to display the results of each process in a single column, like in the example below?    
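A minimal sketch of one way to get a single total per process, assuming hypothetical field names process and execution_time (swap in your own extracted fields):

| stats sum(execution_time) AS total_execution_time BY process

This yields one row per process with its total in a single column, which a column or bar chart can then visualize.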
In an online example that lets you export a Splunk result, I found the following code.

<a class="btn btn-primary" role="button" href="/api/search/jobs/$export_sid$/results?isDownload=true&amp;timeFormat=%25FT%25T.%25Q%25%3Az&amp;maxLines=0&amp;count=0&amp;filename=$filename_token$&amp;outputMode=csv">Download CSV</a>

This does almost exactly what I want, so I tried to find more information about what is happening. I see some parameters there and I want to understand them:

isDownload=true&amp;
timeFormat=%25FT%25T.%25Q%25%3Az&amp;
maxLines=0&amp;count=0&amp;
filename=$filename_token$&amp;
outputMode=csv

I think the fields are almost self-explanatory, but I would like to read the official documentation, and I would also like to know what other parameters I can provide. When looking for documentation I only found: Search endpoint descriptions - Splunk Documentation. But this does not describe the parameters passed in the example. Where can I find an explanation of the parameters used?
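One detail that can be checked directly: the timeFormat value is just URL-encoded. A minimal sketch using SPL's urldecode() eval function shows what it expands to:

| makeresults
| eval decoded = urldecode("%25FT%25T.%25Q%25%3Az")

This returns %FT%T.%Q%:z, a strftime-style format (date, time, subseconds, and timezone offset), so timeFormat presumably controls how timestamps are rendered in the exported CSV.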
Hi everyone, I want a report which shows only the maximum value (days_since) and shows the condition based on that maximum value (Pending_since). I would appreciate your help. This is my search:

index=...............
...................
| eval days_since = floor((now() - _time) / 86400)
| eval Pending_since = case(days_since == 0, "Today", days_since < 30, "Pending (< 30 days)", days_since > 45, "Pending ( > 45 days)", days_since > 30, "Pending ( 30>Days<45 )", days_since < 45, "Pending ( 30>Days<45 )", days_since > 1, days_since . " Days")
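A minimal sketch of one way to keep only the row(s) with the maximum days_since, and to make the case() buckets non-overlapping (case() returns the first matching branch, so ordering matters); the bucket labels here are only illustrative:

| eval days_since = floor((now() - _time) / 86400)
| eventstats max(days_since) AS max_days
| where days_since = max_days
| eval Pending_since = case(days_since == 0, "Today", days_since < 30, "Pending (< 30 days)", days_since <= 45, "Pending (30-45 days)", true(), "Pending (> 45 days)")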
Hello! I have just created a trial account to try OpenTelemetry integration. When I go to the OTel tab to generate a key and press the button, I simply get an error. The UI makes a request to "/controller/restui/otel/onBoardCustomer", which returns a 500 error:

{ "displayText" : "Error occurred in on-boarding customer", "messageKey" : null, "localizationBundle" : null, "showWithNotificationService" : true, "rootExceptionClass" : "", "notErrorMessage" : false, "unauthorizedAccess" : false, "noDataFound" : false }

So is the trial account not licensed for OTel?
I am trying to change the Inactive Account Activity Detected search so that it uses a time range of more than 365 days ago (instead of less than 90 days ago) and greater than 2 hours ago. Every time I add a greater-than symbol or change the 90 days I get an error message in Splunk. Can anyone change this search so that it looks for inactive accounts from over 365 days ago which have just been logged into today?

| `inactive_account_usage("90","2")` | `ctime(lastTime)` | fields + user,tag,inactiveDays,lastTime
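A minimal sketch, assuming the first macro argument is the inactive-day threshold and the second is the recent-activity window in hours (which is how the existing call with "90" and "2" reads); the arguments stay quoted strings and only the value changes:

| `inactive_account_usage("365","2")` | `ctime(lastTime)` | fields + user,tag,inactiveDays,lastTime

If this still errors, the exact error message would help pinpoint whether the macro definition or the call is at fault.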
We have a problem keeping the logs from AWS. The hostname is random. I can't specify the host.
How do I resolve: Unable to initialize modular input "taxii" defined in the app "SA-Splice": Introspecting scheme=taxii: script running failed (exited with code 1)?
I have 2 values: time received = 161300 and time sent = 161259, and I want to get the timestamp difference, which is 1 second. A plain diff of time received - time sent gives 41, which is incorrect. Please help with the correct query. The data given above is in hhmmss format.
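A minimal sketch, assuming hypothetical field names time_received and time_sent holding hhmmss strings: parse them with strptime() so the subtraction happens in epoch seconds rather than on the raw digits:

| makeresults
| eval time_received="161300", time_sent="161259"
| eval diff_seconds = strptime(time_received, "%H%M%S") - strptime(time_sent, "%H%M%S")

Here 161300 is read as 16:13:00 and 161259 as 16:12:59, so diff_seconds comes out as 1.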
Currently, when I want to catch errors coming from an arbitrary action block, I rely on phantom.get_summary() and look at the action status. But I notice that Custom Function blocks cannot be found there, so my question is: how do I check the status of a Custom Function block and catch errors originating there?
Hello, I'm working on a use case where I have 1 source and 2 destinations. Everything that is found between the source and the 2 destinations needs to be excluded. So I've used:

where source = X AND destination != Y OR destination != Z

But this filters the logs and displays only the logs that come from source X; the logs that come from other sources are excluded as well. How can I exclude only the traffic from source X to destinations Y and Z?
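A minimal sketch of the inverted condition (De Morgan's law), assuming literal field names source and destination: keep everything except events where the source is X and the destination is Y or Z:

| where NOT (source="X" AND (destination="Y" OR destination="Z"))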
I am checking whether a reboot is required and, if yes, for how long the status has been unchanged at reboot required = yes. The logic: I wait for at least 2 business days before I send an alert to the user to reboot their machine. Thank you so much for your help. I did check an answer but did not get it working. https://community.splunk.com/t5/Splunk-Search/Get-data-from-the-last-2-business-days/m-p/539517
Please, I need some information because I have some issues:

1 - I'm using a UDP port to send logs from my antivirus server to the Splunk server, and I noticed that the logs arrive with a delay of 2 to 3 hours. My question: is it advisable to switch to TCP instead of UDP to guarantee the reception of the logs?

2 - I have a problem with sending alert emails. The configuration is correct, but I noticed that the saved password looks different from my password (number of stars): assuming my password is 12345678, I should see 8 stars (********), but when I check the configuration I find only 6 stars, which suggests it is not my password. I erased all saved passwords but still have the same problem. Note that the alert itself works perfectly (it displays on the console), but the email is not sent.
Hello Splunkers, I was wondering if there is Splunk documentation or an article about how certain search commands behave in a distributed environment (mainly join, stats, lookup, subsearches, map, transaction, tstats, etc.). Descriptions could include on which Splunk node the command runs first, and whether it goes back and forth between the Search Head and the Indexers or only runs on one of them. I know how these commands shape and filter certain logs; I just have not fully grasped how the commands run in the background. All help and comments are appreciated. Thanks, Regards,
I have a Sankey chart that shows a comparison of SLA vs TurnAround for each priority of ticket. The values are correct when hovering over the middle of the chart, but as I move towards the corner I see different values. I'm unable to understand where those values are picked up from (since my search result has only 3 rows for Turnaround values). Any help would be appreciated. TIA!
Hi Team, I created a statistics panel using a classic dashboard in Splunk, and I would like to apply a similar format to several specific columns at once. If it is possible to do something like Splunk's foreach command in a Simple XML source, please tell me how to edit it. It became clear from reading the Splunk documentation (https://docs.splunk.com/Documentation/SplunkCloud/8.2.2203/Viz/TableFormatsXML) that I can apply a similar format to all columns of a table. Also, is it possible to use wildcards in Simple XML sources?
Hi Team, I have a requirement for creating and scheduling an alert in Splunk. For the query below:

index=abc sourcetype=xyz host=mno "load is high"

There should be exactly one event present every hour (every 60 minutes) for this query, so our requirement is: if there is no event for 1 hour and 10 minutes (i.e. 80 minutes), it needs to trigger an email to the recipients. How do I achieve this in the alert configuration, how should I schedule the cron, what time range should I choose, and what trigger condition do I need to set? Kindly help with the same.
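A minimal sketch of one common pattern for "event is missing" alerts, assuming the search above is used as-is: schedule the alert more often than the gap you want to detect, search back over the gap window, and trigger when nothing is found.

Search: index=abc sourcetype=xyz host=mno "load is high"
Time range: earliest=-80m latest=now
Cron schedule: */10 * * * *
Trigger condition: Number of Results is equal to 0

The cron interval is only illustrative; anything comfortably shorter than 80 minutes works.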
Hi Splunkers, I'm trying to figure out the easiest way to monitor  Kubernetes in Splunk Core. I did a little research and found the Splunk App for Infrastructure (SAI), but it seems to be outdated. In your experience, what's the best way to get the data in? (Otel?) And do you know of an app that comes with preconfigured dashboards etc. for this use case? Thanks in advance!
Hello all, I'm a newbie in Splunk and I have a VMware ESXi environment (2 hosts) and 15 VMs. With the VMware infrastructure approaching end of life, would you please help me with an ESXi log collector?
Hi, just wanted to put this on the community in case other AppD users come across it and need a solution.

Problem: When the application has no data coming in for Errors per minute, or no load to have an Average Response Time (common in a lot of pre-prod apps), the metric value widgets on the Custom Dashboard will display dashes (--) instead of numerical values, like a zero.

AppD Support input: AppD says there is a flag in the controller settings related to this and to displaying null operands for metric expressions. They informed us on the support ticket that they have enabled it for our SaaS Controllers (v22.6). They also showed us how to update the widgets to use a metric expression instead of the default configuration.

Solution: Update all the affected Metric Value widgets to use a metric expression that does not change the metric values in any way we do not want. Example: {errors} + 0

(Before and After screenshots of the metric expression omitted here.)
Hi, we are trying to pull specific data from [WinEventLog://Microsoft-Windows-TaskScheduler/Operational], but the problem is that our unique event does not carry the usual fields suggested for white/blacklisting, such as EventID, category, etc.; we only have the Task Name to filter on. So we tried:

blacklist = $XmlRegex=(?<=Name='TaskName'\>)(\\TaskNameSample\\)

and

whitelist = $XmlRegex=(?<=Name='TaskName'\>)(\\TaskNameSample\\)

but it is not working. Do you have any suggested solution for this?