All Topics

I am checking whether a reboot is required and, if so, for how long the status has remained "reboot required = yes". The logic: I want to wait at least 2 business days before sending an alert to the user to reboot their machine. Thank you so much for your help. I did check an answer but it did not solve my problem: https://community.splunk.com/t5/Splunk-Search/Get-data-from-the-last-2-business-days/m-p/539517
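A sketch of one way to approach this in SPL, assuming a field called reboot_required and an index named patch_status (both names are hypothetical, and "business day" here simply means not Saturday or Sunday): count the distinct weekdays on which the status was still "yes", and alert once that count reaches 2.

```spl
index=patch_status reboot_required="yes" earliest=-7d
| eval dow = strftime(_time, "%a")
| where dow!="Sat" AND dow!="Sun"
| eval day = strftime(_time, "%Y-%m-%d")
| stats dc(day) AS business_days_pending BY host
| where business_days_pending >= 2
```

Scheduled daily, the hosts this returns are the ones whose users should receive the reboot reminder.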
Please, I need some information because I have some issues: 1. I'm using a UDP port to send logs from my antivirus server to my Splunk server, and I noticed the logs arrive with a delay of 2 to 3 hours. My question: is it advisable to switch to TCP instead of UDP to guarantee reception of the logs? 2. I have a problem with sending alert emails. The configuration is correct, but I noticed the saved password shows a different number of stars than my password: assuming my password is 12345678, I should see 8 stars (********), but when I check the configuration I find only 6 stars, which suggests it is not my password. I erased all saved passwords, but the problem persists. Note that the alert itself works perfectly (it displays on the console), but the email is not sent.
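On the first question: a minimal inputs.conf sketch for a TCP listener on the receiving Splunk server (the port and sourcetype below are placeholders, not values from the post). TCP does give delivery guarantees that UDP lacks, but a consistent 2-3 hour delay often turns out to be a timestamp/timezone parsing issue rather than a transport problem, so that is worth ruling out first.

```ini
# inputs.conf on the receiving Splunk server -- port and sourcetype are examples
[tcp://9514]
sourcetype = antivirus:logs
connection_host = ip
```

The antivirus server would then be pointed at TCP port 9514 instead of the current UDP port.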
Hello Splunkers, I was wondering if there is Splunk documentation or an article about how certain search commands behave in a distributed environment (mainly join, stats, lookup, subsearches, map, transaction, tstats, etc.). Descriptions could include which Splunk node the command runs on first, and whether it goes back and forth between the search head and the indexers or runs on only one of them. I know how these commands shape and filter logs; I just have not fully grasped how the commands run in the background. All help and comments are appreciated. Thanks, Regards,
I have a Sankey chart that compares SLA vs. turnaround for each ticket priority. The values are correct when hovering over the middle of the chart, but as I move towards the corners I see different values, and I cannot understand where those values are picked up from (my search result has only 3 rows of turnaround values). Any help would be appreciated. TIA!
Hi Team, I created a statistics panel using a classic dashboard in Splunk, and I would like to apply a similar format to a specific set of columns at once. If it is possible to get behavior like Splunk's foreach command in Simple XML source, please tell me how to edit it. It became clear from reading the Splunk documentation (https://docs.splunk.com/Documentation/SplunkCloud/8.2.2203/Viz/TableFormatsXML) that I can apply a similar format to all columns of a table. Also, is it possible to use wildcards in Simple XML sources?
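For reference, the page linked above shows that a <format> element with no field attribute applies to every column; as far as I know, wildcards in the field attribute are not supported, so a foreach-like effect means either omitting field or repeating the element per column. A sketch (the query, field names, and colors are examples, not from the post):

```xml
<table>
  <search>
    <query>index=_internal | stats count BY sourcetype</query>
  </search>
  <!-- With no field attribute this applies to every column;
       add field="count" to limit it to a single column -->
  <format type="color">
    <colorPalette type="minMidMax" minColor="#53A051" maxColor="#DC4E41"></colorPalette>
  </format>
</table>
```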
Hi Team, I have a requirement to create and schedule an alert in Splunk. For the query below: index=abc sourcetype=xyz host=mno "load is high" exactly one event is present every hour (every 60 minutes). Our requirement is that if there is no event for 1 hour and 10 minutes (i.e. 80 minutes), an email should be triggered to the recipients. How do I achieve this in the alert configuration: how should I schedule the cron, what time range should I choose, and what should the trigger condition be? Kindly help with the same.
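One common pattern for this kind of "missing event" alert, sketched under the assumption that the event normally arrives hourly: run the search frequently over the last 80 minutes and trigger when nothing is found.

```spl
index=abc sourcetype=xyz host=mno "load is high" earliest=-80m latest=now
```

Schedule it with a cron like */10 * * * * (every 10 minutes), set the time range to the last 80 minutes, and use the trigger condition "Number of Results" equal to 0, so the alert fires only when the expected hourly event is absent.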
Hi Splunkers, I'm trying to figure out the easiest way to monitor Kubernetes in Splunk Core. I did a little research and found the Splunk App for Infrastructure (SAI), but it seems to be outdated. In your experience, what's the best way to get the data in (OTel?), and do you know of an app that comes with preconfigured dashboards etc. for this use case? Thanks in advance!
Hello all, I'm a newbie in Splunk. I have a VMware ESXi environment (2 hosts) and 15 VMs. Given that our VMware infrastructure is approaching end of life, could you please help me with an ESXi log collector?
Hi, just wanted to put this on the community in case other AppD users come across it and need a solution. Problem: when the application has no data coming in for Errors per Minute, or no load from which to compute an Average Response Time (common in a lot of pre-prod apps), the metric value widgets on a custom dashboard display dashes (--) instead of numerical values such as zero. AppD Support input: AppD says there is a flag in the controller settings related to this and to displaying null operands in metric expressions. They informed us on the support ticket that they have enabled it for our SaaS controllers (v22.6). They also showed us how to update the widgets to use a metric expression instead of the default configuration. Solution: update all the affected metric value widgets to use a metric expression that does not otherwise change the metric values, for example: {errors} + 0 See the screenshots for more details (Before, After, and the Metric Expression configuration).
Hi, we are trying to pull specific data from [WinEventLog://Microsoft-Windows-TaskScheduler/Operational], but the problem is that our unique event cannot be filtered with the usual whitelist/blacklist keys such as EventID, category, etc.; we only have the task name to filter on. So we tried: blacklist = $XmlRegex=(?<=Name='TaskName'\>)(\\TaskNameSample\\) and whitelist = $XmlRegex=(?<=Name='TaskName'\>)(\\TaskNameSample\\) but neither works. Do you have any suggested solution for this?
How do I whitelist a specific TaskName in inputs.conf in the Splunk forwarder configuration for WinEventLog Task Scheduler/Operational? Example of the pulled data:
....<Data Name='TaskName'>\Job 1</Data>.....
....<Data Name='TaskName'>\Job 2</Data>.....
....<Data Name='TaskName'>\Other 1</Data>.....
I only need to pull data for Job 1 and Job 2. How can I filter multiple jobs in inputs.conf?
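A sketch of one approach, assuming the forwarder renders the events as XML (the exact regex escaping may need tuning for your environment): with renderXml enabled, a $XmlRegex whitelist is matched against the raw XML, so a single alternation can cover both job names.

```ini
# inputs.conf on the Universal Forwarder -- a sketch; regex may need tuning
[WinEventLog://Microsoft-Windows-TaskScheduler/Operational]
renderXml = true
whitelist = $XmlRegex=Name='TaskName'>\\(Job 1|Job 2)</Data>
```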
We installed a Universal Forwarder on a Windows Server to pull data from [WinEventLog://Microsoft-Windows-TaskScheduler/Operational] into Splunk, to monitor jobs/events. Currently we are getting data in real time from the WinEventLog. Is there a way to change the timing so it polls every 10 minutes? We already tried interval = 600, interval = <cron>, schedule = 600, and schedule = <cron>, but none of them work. Do you have any solution for this? Please...
I'm trying to upgrade from Splunk 8.1 to 9.0 on a single-server Windows installation. Upgrading the KV store from mmapv1 to wiredTiger caused me some headaches, but it eventually seemed to work. The problem now is that I'm stuck on wiredTiger 4.0 and can't find a way to get it upgraded to 4.2. splunkd.log contains this error message:

08-03-2022 07:58:43.566 +0200 ERROR KVStoreBulletinBoardManager [7688 MainThread] - Failed to upgrade KV Store to the latest version. KV Store is running an old version, service(40). Resolve upgrade errors and try to upgrade KV Store to the latest version again.

But I can't find anything else useful in the logs, or in the output of splunk show kvstore-status --verbose. Any hints as to how to find out what's wrong, or how to force the upgrade? I tried deleting mongod-4.0.exe, but that caused the KV store to fail at startup.
Hi Team, I have the following data set with two fields, recAccuracy and recAccuracyCount. I want to split the rows into two groups and sum each: "PREMISE_POSSIBLE", "STREET_POSSIBLE", and "LOCALITY_POSSIBLE" as cleansed (103343), and the remaining values as noncleansed (504146).

recAccuracy        recAccuracyCount
LOCALITY_POSSIBLE  64507
PREMISE_POSSIBLE   35493
STREET_DEFINITE    46134
PREMISE_PROBABLE   70789
PREMISE_DEFINITE   363709
LOCALITY_PROBABLE  10586
STREET_POSSIBLE    3343
STREET_PROBABLE    12928

Result:
cleansed  noncleansed
103343    504146

I want to draw a pie chart of these two values. Thanks.
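One way to sketch this in SPL, appended to whatever search produces the recAccuracy/recAccuracyCount pairs: map every value ending in _POSSIBLE to "cleansed" and everything else to "noncleansed", then sum per group.

```spl
| eval group = if(match(recAccuracy, "_POSSIBLE$"), "cleansed", "noncleansed")
| stats sum(recAccuracyCount) AS total BY group
```

The resulting two-row table (group, total) can be rendered directly as a pie chart from the Visualization tab.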
Hi Team, I am new to the AppDynamics tool and would like to seek your help in creating a dashboard. Please share any links or documents that will help with creating dashboards. Thanks, Sreenivas
Hi there - hopefully someone can help with this: I am trying to deploy Sysmon via a deployment app, however it looks like the script is having some issues. I can see the following error in the splunkd logs:

08-03-2022 10:54:32.982 +0800 ERROR ExecProcessor [15204 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\etc\apps\CONF_corp_sysmon\bin\deploy.bat"" Sharing violation

I can run the script manually with no issues. Any ideas would be much appreciated! The deploy.bat file is as follows:

IF EXIST "C:\Program Files (x86)" (
    SET BINARCH=Sysmon64.exe
    SET SERVBINARCH=Sysmon64
) ELSE (
    SET BINARCH=Sysmon.exe
    SET SERVBINARCH=Sysmon
)
SET SYSMONDIR=C:\windows
SET SYSMONBIN=%SYSMONDIR%\%BINARCH%
SET SYSMONCONFIG=%SYSMONDIR%\config.xml
SET GLBSYSMONBIN="%programfiles%\splunkuniversalforwarder\etc\apps\CONF_corp_sysmon\bin\%BINARCH%"
SET GLBSYSMONCONFIG="%programfiles%\splunkuniversalforwarder\etc\apps\CONF_corp_sysmon\bin\config.xml"
sc query "%SERVBINARCH%" | Find "RUNNING"
If "%ERRORLEVEL%" EQU "1" (
    GOTO startsysmon
)
:installsysmon
xcopy %GLBSYSMONBIN% %SYSMONDIR% /y
xcopy %GLBSYSMONCONFIG% %SYSMONDIR% /y
chdir %SYSMONDIR%
%SYSMONBIN% -i %SYSMONCONFIG% -accepteula -h md5,sha256 -n -l
sc config %SERVBINARCH% start= auto
:updateconfig
xcopy %GLBSYSMONCONFIG% %SYSMONCONFIG% /y
chdir %SYSMONDIR%
%SYSMONBIN% -c %SYSMONCONFIG%
EXIT /B 0
:startsysmon
sc start %SERVBINARCH%
If "%ERRORLEVEL%" EQU "1060" (
    GOTO installsysmon
) ELSE (
    GOTO updateconfig
)
Hi, I have a CSV file that I would like to use both to filter search results via an inputlookup subsearch and to append a comment field from that same CSV to the returned events. Here is an example of my table, stuff.csv:

src          user   comment
192.168.1.1         This matches with the IP only
             john   This matches with the user only
192.168.1.2  bobby  This matches with both IP and user

I would like to do something like this:

index=main [| inputlookup stuff.csv | fields - comment] | lookup stuff.csv src,user

The main problem here is that the inputlookup subsearch only returns values that have entries, so empty fields effectively act as wildcards, while the lookup command treats empty fields as literal blank values. In this example, assuming all events in my index have values for src and user, only matches with the 3rd row would ever return results from the lookup command. The desired behavior is, for example: Event contains src=192.168.1.1 and any username - the comment on row 1 is appended. Event contains user=John and any src - the comment on row 2 is appended. Event contains src=192.168.1.2 and user=Bobby - the comment on row 3 is appended. From the snippet above, the observed behavior is: Example 1 - no comment is appended (undesired). Example 2 - no comment is appended (undesired). Example 3 - the comment from row 3 is appended, as desired. Can I somehow append the comment associated with the matched row back to the events?
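One approach that avoids the blank-vs-wildcard mismatch: define the lookup with wildcard matching in transforms.conf and put * in the empty CSV cells, so a row like 192.168.1.1,*,... matches any user. A sketch (the lookup definition name is an example):

```ini
# transforms.conf -- lookup definition with wildcard matching on both keys
[stuff_wildcard]
filename = stuff.csv
match_type = WILDCARD(src), WILDCARD(user)
```

Then a single pass both filters and annotates: index=main | lookup stuff_wildcard src user OUTPUT comment | where isnotnull(comment)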
Hi, for analytical purposes we download Splunk data; daily we process a large amount of data (3-4 million records). Currently we use a native HTTP client call to the Splunk export API endpoint in C# and are able to fetch the data. We are planning to switch to the Splunk SDK for better performance. In a PoC I have used ExportSearchPreviewsAsync(), but I am not able to download the data. I am facing the issues below; it would be a great help if you could share your ideas: 1) Although I set the earliest and latest dates in the search job args, the method call is not taking those values. 2) The data comes in XML format; I tried setting the output to CSV, but no luck. 3) Also, please suggest how we can save the search ID from the above method call.
I was trying to ingest data into Splunk via HEC. One field of my data is: myKey1 = "This is my Application message log, myKey2=myValue2 in the text." There is a key=value pair enclosed inside the value of the field, and Splunk parses the data into two keys: myKey1 = "This is my Application message log, myKey2=myValue2 in the text." and myKey2 = myValue2. But myKey2=myValue2 is part of myKey1's value, and I don't want it extracted. What can I do to avoid the influence of an equals sign inside the text string?
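One way to sidestep the automatic key=value extraction, sketched here: send the event to HEC's /services/collector/event endpoint as structured JSON, so myKey1 is an explicit JSON field and the embedded myKey2=myValue2 stays inside its quoted string (the sourcetype name below is a placeholder; you may also need KV_MODE = json in props.conf for that sourcetype so Splunk extracts the JSON structure rather than raw key=value pairs).

```json
{
  "sourcetype": "my_app_json",
  "event": {
    "myKey1": "This is my Application message log, myKey2=myValue2 in the text."
  }
}
```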
Hello, can someone please help me build a rex for field extraction within one event? Currently I am using the basic rex below, but it pulls only the first match; I need all the matches so I can table them.
|rex field=_raw "(TEST_DETAIL_MESSAGE\s\=)(?<MESSAGE>\w+\D+\,)"
|rex field=_raw "(TEST_COUNT\s\=)(?<COUNT>\s\d+)"
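If the event contains several of these pairs, max_match=0 makes rex capture every occurrence into multivalue fields; a sketch building on the regexes above (the capture patterns are simplified assumptions about the data), using mvzip/mvexpand to pair the values back up for tabling:

```spl
| rex field=_raw max_match=0 "TEST_DETAIL_MESSAGE\s=\s*(?<MESSAGE>[^,]+)"
| rex field=_raw max_match=0 "TEST_COUNT\s=\s*(?<COUNT>\d+)"
| eval pair = mvzip(MESSAGE, COUNT, "|")
| mvexpand pair
| eval MESSAGE = mvindex(split(pair, "|"), 0), COUNT = mvindex(split(pair, "|"), 1)
| table MESSAGE COUNT
```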