All Topics


| makeresults count=1
| eval list_split_failure_1 = "fail:,searching old data:,searching new"
| eval list_split_failure_2 = "fail:,searching old ata:,searching new"
| eval list_split_success = "fail:,searching old qata:,searching old dta:,searching old ta:,searching new"
| eval list_split_failure_1 = split(list_split_failure_1, ",")
| eval list_split_failure_2 = split(list_split_failure_2, ",")
| eval list_split_success = split(list_split_success, ",")

Can someone help me understand why the split function fails for list_split_failure_1 and 2 but succeeds for list_split_success?
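One way to narrow down what "fails" means here is to inspect what split() actually returns; a small diagnostic sketch (field names are made up, only standard eval functions assumed):

| makeresults count=1
| eval src = "fail:,searching old data:,searching new"
| eval parts = split(src, ",")
| eval value_count = mvcount(parts)    ``` expect 3 if the split produced a multivalue result ```
| eval first_value = mvindex(parts, 0)
| table src parts value_count first_value

If value_count is 3 for every variant, the split itself is working and the difference is likely in how the results are displayed or consumed downstream.
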
Isn't hyphen a minor breaker? I'm wondering why values with a hyphen get double-quoted when doing summary indexing. This breaks tstats TERM and PREFIX usage. Assume I have the following data:

_time                field1     field2
2022-10-05 22:22:22  what-not   whatnot

This will end up in the summary event index as:

10/05/2022 22:22:22, field1="what-not", field2=whatnot

What have I missed when populating my summary index? :-)

Static data with one common field, app Name, as a Splunk query.

Right now we have:

Splunk Enterprise version 8.0.5.0
Java 8 Update 333
Java SE Development Kit 8 Update 291

Due to vulnerabilities, we need to update the Java version. Can we upgrade to the latest version, and if not, can you please tell me the latest version of Java I can upgrade to?

How do I specify the time zone in an alert search where I need to exclude a specific time period?

- I want to exclude the time period of midnight to 12:20am UTC
- I want to be able to change my time zone in my preferences as needed
- I don't want to change the owner of my alert to "nobody"

After my basic search criteria I have this, which works as long as my profile is set to UTC:

| eval Hour=strftime(_time,"%H")
| eval Minute=strftime(_time,"%M")
| search NOT ( (Hour=00 AND Minute >= 00) AND (Hour=00 AND Minute <= 20) )

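One timezone-independent approach (a sketch, untested against your data): _time is epoch seconds, which are UTC-based, so seconds-since-UTC-midnight can be computed with arithmetic instead of strftime, which honors the user's timezone preference:

| eval secs_past_utc_midnight = _time % 86400
| where secs_past_utc_midnight >= 1260    ``` keep events at/after 00:21:00 UTC; 1260 mirrors Minute <= 20, adjust as needed ```

Because the modulo never consults the user's profile, the exclusion window stays pinned to UTC no matter what time zone is set in preferences.
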
I have a lookup which has a field with time values (in 24-hour time, i.e. 00:30, 13:45, 23:15), which tells my dashboard the scheduled start time of jobs. I have a number of jobs which are set to run hourly, and as such need to have every hour as their start time (XX:00). I've tried adding wildcard functionality to the desired fields in the lookup definition like this:

WILDCARD(Field_Name_1),WILDCARD(Field_Name_2)

This has unfortunately not worked as I'd hoped, though; it does not treat the wildcards as matching every number when searching for all jobs which are set to run this hour.

Any ideas on how I can best implement this within the lookup?

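For what it's worth: with match_type = WILDCARD(...) set on the lookup definition, the * has to live in the lookup's field values themselves (e.g. a row whose start time is *:00 for the hourly jobs), and as far as I know it only takes effect through the lookup command, not inputlookup. A hypothetical sketch (lookup name and output field are made up):

| makeresults
| eval Field_Name_1 = strftime(now(), "%H:00")    ``` current hour, e.g. "13:00" ```
| lookup my_job_schedule Field_Name_1 OUTPUT job_name    ``` a row with Field_Name_1="*:00" would match every hour ```
| table Field_Name_1 job_name
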
I know I can change the color of a .panel-title, but I haven't found the code to change the .input-label color. This works:

<panel id="panel2">
  <title>title</title>
  <input type="text" token="commentpicker">
    <label>Add any additional comments below</label>
    <default></default>
  </input>
  <html>
    <style>
      #panel2 .dashboard-panel .panel-title {
        color: #EC102B;
      }
    </style>

But this won't:

<input type="multiselect" token="newenabledpicker" searchWhenChanged="false" id="input1">
  <label>Modify or Disable (REQUIRED)?</label>
  <choice value="modify">Modify</choice>
  <choice value="disable">Disable</choice>
  <delimiter> </delimiter>
</input>
<html>
  <style>
    #input1 .dashboard-panel .input-label {
      color: #EC102B;
    }
  </style>
</html>

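In case it helps: the selector may be the issue, since inputs aren't nested inside a .dashboard-panel the way panel titles are, so a simpler descendant selector scoped to the input's id is worth trying. A sketch (untested; rendered class names can vary by Splunk version):

<html>
  <style>
    /* target the label rendered for the multiselect; assumes id="input1" on the input */
    #input1 label {
      color: #EC102B;
    }
  </style>
</html>
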
I've discovered an issue with the WebLogic add-on (1.0.0) for Splunk, and I am having a hard time figuring out how to fix props.conf to parse the timestamp correctly. My Splunk instance is in CEST. There is a string with a timezone after the timestamp, and CEST/GMT are parsed correctly but UTC is not.

Server #1 - correct:

####<Oct 5, 2022 5:00:21 PM CEST> <Info> ...
Parsed timestamp: 10/5/22 5:00:21.000 PM

Server #2 - correct:

####<Oct 5, 2022 3:24:11 PM GMT> <Info> ...
Parsed timestamp: 10/5/22 5:24:11.000 PM

Server #3 - incorrect:

####<Oct 5, 2022 4:30:23 PM UTC> <Info>
Parsed timestamp: 10/5/22 4:30:23.000 PM
Should be: 10/5/22 6:30:23.000 PM

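A starting point might be to pin the timestamp format explicitly so %Z consumes the zone name; a sketch for props.conf, with a hypothetical stanza name and untested against the add-on's defaults:

[weblogic:server:log]
# Hypothetical stanza name; use the add-on's actual sourcetype.
# Anchor on the "####<" prefix and let %Z pick up CEST/GMT/UTC.
TIME_PREFIX = ^####<
TIME_FORMAT = %b %d, %Y %I:%M:%S %p %Z
MAX_TIMESTAMP_LOOKAHEAD = 40

If the add-on's own TIME_FORMAT omits %Z, the zone string would be ignored and the time treated as local, which would match the symptom you're seeing.
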
Hello Splunkers!!

We've got some issues with internal communications and are wondering about their cause, and whether this is normal (not an error). Can anyone explain when this internal communication happens?

Interface: lo
Internal communications:
(127.0.0.1:8089 => 127.0.0.1:XXXXX, 127.0.0.1:XXXXX => 127.0.0.1:8089, 127.0.0.1:25 => 127.0.0.1:XXXXX)
(172.16.18.23:XXXXX => 172.16.17.23:XXXXX)

Thanks in advance for any help you can provide.

The new version of an app created using Splunk Add-on Builder 4.1.1 on Splunk Enterprise 9.0.1 breaks on upgrade, because the older password constant in 'aob_py3/splunktaucclib/rest_handler/credentials.py' was six '*' and the new one is eight '*'. It now expects the password setting in inputs.conf to use the new format, which makes the Input page fail to load, raising error 500.

Any ideas on how to solve this problem so end users can seamlessly upgrade the app and keep their older input configurations?

We are trying to work with this solution: https://community.splunk.com/t5/All-Apps-and-Add-ons/Splunk-Addon-Builder-4-package-resetting-password-conf-entries/td-p/584187

But we don't like that we need to patch credentials.py.

So I've read the docs on how to properly format a monitor stanza in Windows, and am trying to monitor a dir full of csv files. Here's the stanza:

# Windows Log Processor
[monitor://C:\Users\user\Desktop\ICTExports\*.csv]
disabled = false
crcSalt = <SOURCE>
ignoreOlderThan = 2d
index = it_app_ict
sourcetype = csv

I added the crcSalt bit because without it the files in the monitor directory were generating the seekptr errors, since the first few lines in all the files are identical. And here's what I see in splunkd.log:

10-05-2022 10:25:25.768 -0500 INFO TailingProcessor [3624 MainTailingThread] - Parsing configuration stanza: monitor://C:\Users\user\Desktop\ICTExports\*.csv.
10-05-2022 10:25:25.768 -0500 INFO TailReader [3624 MainTailingThread] - State transitioning from 1 to 0 (initOrResume).
10-05-2022 10:25:25.768 -0500 INFO TailReader [3624 MainTailingThread] - State transitioning from 1 to 0 (initOrResume).
10-05-2022 10:25:25.768 -0500 INFO TailingProcessor [3624 MainTailingThread] - Adding watch on path: C:\Users\user\Desktop\ICTExports.

It's been ~5 minutes since I last restarted the service, and there's no further mention of the monitor path nor any of the .csv's within it. There is one file that falls within the 2d period, so I'm expecting it to be read. What can I do?

Thanks!

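If it's useful, the per-file decisions usually show up in _internal; a sketch of a search to see whether the csv files are being skipped (for example by ignoreOlderThan, which, as far as I know, permanently ignores a file once its modtime falls outside the window, even if it is updated later):

index=_internal sourcetype=splunkd (component=TailingProcessor OR component=TailReader OR component=WatchedFile) ICTExports
| table _time component log_level _raw
| sort - _time

Any "ignoring" or "skipping" messages against the individual file names would point at the culprit faster than the INFO lines above.
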
I have two columns of data: one is Expected box and the other is Actual box. I would like to calculate a percentage/average of how many actual box values are missing compared with the expected box. Also, some of my Actual box data has no value (null/undefined). I would like to ignore those rows where Actual box is null/undefined when comparing with my Expected box.

PS: I am a beginner-level Splunker. Is there any way I can do this average based on these requirements?

This is my search query:

index::service sourcetype::service "order_tote_analytics"
| spath "data.order_number"
| search "data.order_number"=*
| spath path=data{}.actual_totes{}.finalBoxAmount output=actualBox
| spath path=data{}.estimated_totes{}.box output=estimatedBox
| table estimatedBox actualBox

This is what my table looks like. PS: I would love to display the percentage of actual values missing in a single panel; it would be nice if you could show me how to do that. This is my (wrong) search query:

<panel>
  <single>
    <title>Percentage of actual values missing</title>
    <search>
      <query>index::service sourcetype::service "order_tote_analytics" | spath "data.order_number" | search "data.order_number"=$orderNumber$ | spath path=data{}.actual_totes{}.finalBoxAmount output=finalBox| spath path=data{}.estimated_totes{}.box output=estimatedBox | stats sum(estimatedBox) as totalBox, sum(finalBox) as finalbox</query>
      <earliest>$chosenTimePeriod.earliest$</earliest>
      <latest>$chosenTimePeriod.latest$</latest>
    </search>
    <option name="colorMode">block</option>
    <option name="drilldown">none</option>
    <option name="rangeColors">["0x53a051","0xf1813f","0xf8be34","0xf1813f","0xdc4e41"]</option>
    <option name="refresh.display">progressbar</option>
    <option name="useColors">1</option>
  </single>
</panel>
<panel>

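A sketch of one reading of "percentage of actual values missing" — the share of rows where actualBox is null/empty — reusing the field names from the query above and assuming one value per event (spath on arrays can return multivalues, which would need an extra mvexpand):

index::service sourcetype::service "order_tote_analytics"
| spath path=data{}.actual_totes{}.finalBoxAmount output=actualBox
| spath path=data{}.estimated_totes{}.box output=estimatedBox
| eval has_actual = if(isnotnull(actualBox) AND actualBox != "", 1, 0)
| stats count AS total_rows sum(has_actual) AS rows_with_actual
| eval pct_missing = round((total_rows - rows_with_actual) / total_rows * 100, 2)
| table pct_missing

Since this ends with a single pct_missing value, it can drop straight into the single-value panel above.
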
I have noticed that the Splunk process on my development search head abruptly shows as not available, and then it becomes available again. One of those times, I checked the status of the Splunk service, and below is the output. I'm not sure where to check or how to troubleshoot this problem. I was just assigned this problem, so I'm not sure how often it happens, but I wrote a cron job to check the status every 5 minutes and got an alert that it happened last night around 3am. As far as I know, there is no pattern to it.

systemctl status splunk
splunk.service - Splunk Enterprise
   Loaded: loaded (/etc/systemd/system/splunk.service; enabled; vendor preset: disabled)
   Active: deactivating (stop-sigterm) (Result: exit-code) since Mon 2022-10-03 10:45:41 EDT; 41s ago
 Main PID: 76827 (code=exited, status=8)
   CGroup: /system.slice/splunk.service
           └─78229 /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_f5-bigip/bin/Splunk_TA_f5_bigip_main.py

Oct 03 10:02:52 splunkdev01.xxx.xxx splunk[76759]: [  OK  ]
Oct 03 10:02:52 splunkdev01.xxx.xxx splunk[76759]: All installed files intact.
Oct 03 10:02:52 splunkdev01.xxx.xxx splunk[76759]: Done
Oct 03 10:02:52 splunkbdev01.xxx.xxx splunk[76759]: All preliminary checks passed.
Oct 03 10:02:52 splunkdev01.xxx.xxx splunk[76759]: Starting splunk server daemon (splunkd)...
Oct 03 10:02:52 splunkdev01.xxx.xxx splunk[76759]: Done
Oct 03 10:04:31 splunkdev01.xxx.xxx sudo[78359]:   splunk : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/bin/netstat -anp
Oct 03 10:04:31 splunkdev01.xxx.xxx sudo[78359]: pam_unix(sudo:session): session opened for user root by (uid=0)
Oct 03 10:04:43 splunkdev01.xxx.xxx systemd[1]: Started Splunk Enterprise.
Oct 03 10:45:41 splunkdev01.xxx.xxx systemd[1]: splunk.service: main process exited, code=exited, status=8/n/a

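For what it's worth, splunkd usually logs why it went down; a sketch of a search over _internal around the outage window (host pattern and time range are assumptions to adjust):

index=_internal host=splunkdev01* sourcetype=splunkd earliest=-24h
  (log_level=ERROR OR log_level=FATAL OR "Shutting down" OR "signal")
| table _time component log_level _raw
| sort - _time

The last ERROR/FATAL lines before the exit at 10:45:41 are the place to start; exit status 8 on its own doesn't say much without them.
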
I have an issue where I have set up a Universal Forwarder on a Windows Azure server to monitor data stored on an Azure file share server. This is my inputs.conf:

[monitor://\\********.file.core.windows.net\KanaResponse\RespShare\logs\log20*.xml]
disabled = 0
index = kana
sourcetype = kana_xml
crcSalt = <SOURCE>

The issue I have is that Splunk thinks the CRC has changed each time the file is written to, and re-ingests the whole file. The header of the file does not change, so I'm not sure why this happens. I read some other posts referring to how Azure file share caching changes metadata involved in the CRC calculation, but I'm not sure if that is definitely the case. Each file generates approx 6,000 events, but due to the re-ingestion this can amount to over a million events per file. Our license would get eaten up pretty quickly if I left the feed enabled constantly.

Another knock-on issue is that when the log fills and a new file is created, Splunk doesn't see the new file, and the data feed stops until the Splunk forwarder is restarted. It does, however, stop ingesting the previous file. Splunk's internal log shows the following details confirming it thinks the file is new:

10-05-2022 11:37:12.556 +0100 DEBUG TailReader [8280 tailreader0] - Defering notification for file=\\********.file.core.windows.net\KanaResponse\RespShare\logs\log20221005_101607.xml by 3.000ms
10-05-2022 11:37:12.556 +0100 DEBUG TailReader [8280 tailreader0] - Finished reading file='\\********.file.core.windows.net\KanaResponse\RespShare\logs\log20221005_101607.xml' in tailreader0 thread, disposition=NO_DISPOSITION, deferredBy=3.000
10-05-2022 11:37:12.556 +0100 DEBUG WatchedFile [8280 tailreader0] - Reached EOF: fname=\\********.file.core.windows.net\KanaResponse\RespShare\logs\log20221005_101607.xml fishstate=key=0x8908643efe7e891f sptr=865145 scrc=0x77aadaaeb3af22ee fnamecrc=0xbd1b79bedeae4211 modtime=1664963939
10-05-2022 11:37:12.556 +0100 DEBUG WatchedFile [8280 tailreader0] - seeking \\********.file.core.windows.net\KanaResponse\RespShare\logs\log20221005_101607.xml to off=857837
10-05-2022 11:37:12.524 +0100 DEBUG TailReader [8280 tailreader0] - About to read data (Reusing existing fd for file='\\********.file.core.windows.net\KanaResponse\RespShare\logs\log20221005_101607.xml').
10-05-2022 11:37:12.524 +0100 INFO WatchedFile [8280 tailreader0] - Will begin reading at offset=0 for file='\\********.file.core.windows.net\KanaResponse\RespShare\logs\log20221005_101607.xml'.
10-05-2022 11:37:12.524 +0100 INFO WatchedFile [8280 tailreader0] - Checksum for seekptr didn't match, will re-read entire file='\\********.file.core.windows.net\KanaResponse\RespShare\logs\log20221005_101607.xml'.
10-05-2022 11:37:12.478 +0100 DEBUG TailReader [8280 tailreader0] - Will attempt to read file: \\********.file.core.windows.net\KanaResponse\RespShare\logs\log20221005_101607.xml from existing fd.
10-05-2022 11:37:12.478 +0100 DEBUG TailReader [8280 tailreader0] - Start reading file="\\********.file.core.windows.net\KanaResponse\RespShare\logs\log20221005_101607.xml" in tailreader0 thread
10-05-2022 11:37:00.394 +0100 INFO Metrics - group=per_source_thruput, series="\\********.file.core.windows.net\kanaresponse\respshare\logs\log20221005_101607.xml", kbps=0.622, eps=0.064, kb=19.295, ev=2, avg_age=1134.000, max_age=2268
10-05-2022 11:36:48.567 +0100 DEBUG TailReader [5484 MainTailingThread] - Enqueued file=\\********.file.core.windows.net\KanaResponse\RespShare\logs\log20221005_101607.xml in tailreader0

If anyone has any ideas how to circumvent this issue, I'd be hugely grateful. I have tried using MonitorNoHandle, but that doesn't work as (a) Splunk wants the network drive location to be mapped to a drive, which we aren't able to do, and (b) it requires individual files to be monitored, which we can't do easily as the new file uses the timestamp of when it was created in its filename.

Thanks

Hello! First time posting here. Just started learning Splunk, and I am trying to extract events between two date ranges: 4/6/2021 and 4/7/2021. I tried one of the earlier suggested answers, which was:

index="security"
| eval Date="04/07/2021"
| eval timestampDate=strptime(Date, "%m/%d/%Y")
| eval timestampStart=strptime("04/06/2021", "%m/%d/%Y")
| eval timestampEnd=strptime("04/07/2021", "%m/%d/%Y")
| eval formattedTimestamp = strftime(timestamp,"%Y-%m-%dT%H:%M:%S")
| where timestampDate >= timestampStart AND timestampDate <= timestampEnd

and

index="security"
| eval Date="4/7/2021"
| where (strptime(Date, "%m/%d/%Y")>=strptime("4/6/2021", "%m/%d/%Y")) AND (strptime(Date, "%m/%d/%Y")<=strptime("4/7/2021", "%m/%d/%Y"))

But the queries return all the events available in the log file. (Screenshots attached, along with a sample from the index.) Can someone please assist? Thanks in advance.

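Both queries compare a hard-coded Date constant against itself, so the where clause is true for every event, which is why everything comes back. If the goal is to filter on event time, a sketch using time modifiers (assuming _time was parsed correctly at index time):

index="security" earliest="04/06/2021:00:00:00" latest="04/08/2021:00:00:00"

latest is exclusive, so running it up to midnight on 04/08 captures all of 4/6 and 4/7 without touching 4/8.
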
Hey all,

Everything works fine, but I keep getting a strange error only in Chrome, ERR_SSL_PROTOCOL_ERROR, but not in Firefox or Edge. All browsers are running the latest versions; no proxies or enhanced security features (no security plugins). I've gone through the basic troubleshooting steps of clearing cache, resetting browser settings, etc. Still nothing. Any help would be appreciated.

I'm also sneaking another question in here: I have most devices using Splunk as a syslog destination, so aside from

sourcetype="linux_messages_syslog" host="name.of.device" (string)

what would be a good way to grab data for that specific host/device for a time frame of 12:00:00AM to 4:00:00AM that same day?

Regards, Gabriel

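For the time-window part, a sketch of two options (untested; both assume the event timestamps were extracted correctly). Fixed window for today, midnight to 4am:

sourcetype="linux_messages_syslog" host="name.of.device" earliest=@d latest=@d+4h

Or, for a recurring "midnight to 4am on whatever day is searched" filter, the default date_hour field works, with the caveat that it reflects the event's own clock rather than the searcher's time zone:

sourcetype="linux_messages_syslog" host="name.of.device" date_hour>=0 date_hour<4
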
In the Splunk Add-on for Sysmon on Splunkbase (and some other add-ons), XML extractions are done via a lot of manual transforms (e.g. [sysmon-version] REGEX = <Version>(\d+)</Version> FORMAT = Version::$1). Why aren't you using KV_MODE = XML? And could you please add the field query_type (Network_Resolution data model) and record_type values for A and AAAA records (which do NOT have a "type: .." entry)?

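For reference, the generic alternative being asked about would be a single search-time props.conf setting like the sketch below (sourcetype name as commonly used for Sysmon events; whether automatic XML extraction would produce the same CIM-compliant field names as the hand-written transforms is presumably the trade-off):

[XmlWinEventLog:Microsoft-Windows-Sysmon/Operational]
# Extract all XML elements automatically at search time.
KV_MODE = xml
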
I am performing a search for two events: a start event and a stop event for a specific job name. I have run into an issue where I am getting two start events, because a previous day's job (2 days ago) started late, after midnight (yesterday), and yesterday's job started on time, before midnight (yesterday). I have tried last(startTimeRaw) and latest(startTimeRaw) but am not getting the results expected.

index=anIndex sourcetype=aSourceType aJobName ("START of script" OR "COMPLETED OK")
| eval startTimeRaw = if (match(_raw, "START of script"), _time, null())
| eval endTimeRaw = if (match(_raw, "COMPLETED OK"), _time, null())

I am getting this as my results for startTimeRaw:

-2 days: startTimeRaw = 1664770565
-1 day: startTimeRaw = 1664868051 & 1664948397

In the above I am looking to assign startTimeRaw the last value found or matched (1664948397).

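A sketch of taking only the most recent start: since startTimeRaw holds epoch seconds, max() is the chronologically latest value regardless of event ordering (untested against your data):

index=anIndex sourcetype=aSourceType aJobName ("START of script" OR "COMPLETED OK")
| eval startTimeRaw = if(match(_raw, "START of script"), _time, null())
| eval endTimeRaw = if(match(_raw, "COMPLETED OK"), _time, null())
| stats max(startTimeRaw) as startTimeRaw max(endTimeRaw) as endTimeRaw
| eval duration = endTimeRaw - startTimeRaw    ``` optional: runtime of the most recent start/stop pair ```

max() sidesteps the ordering sensitivity of last()/latest(), which pick by event order rather than by the numeric value itself.
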
I have a scheduled search that runs every minute, querying the previous one minute of time, and alerts if an event is found.

Because the scheduled search interval is the same as the time period being searched, is it theoretically possible for an event to pop up in between the scheduled searches and not get alerted? If this is a possibility, then it feels like the only option would be to search a slightly longer timeframe so that there is some overlap in the searches, but that could mean duplicate events. The other option would be to use a realtime search rather than a scheduled search, but in general I try to avoid scheduled realtime searches.

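One common pattern (a sketch, not the only answer) is to snap both ends of the window to the minute, so consecutive runs cover contiguous, non-overlapping slices with neither gaps nor duplicates; lagging the window by a minute also gives late-arriving events time to be indexed. With a cron of * * * * * and placeholder search terms:

index=my_index my_alert_condition earliest=-2m@m latest=-1m@m

A run at 12:05:30 then covers exactly 12:03:00-12:04:00, and the 12:06:30 run covers 12:04:00-12:05:00, so every second of wall-clock time falls in exactly one window.
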
I am working on a custom command with a couple of external modules, which I installed into my 'lib' directory:

pip3 install -r requirements.txt --target=lib

After installing my custom command into Splunk (install from file), I keep getting ModuleNotFoundError even though all the modules live in the app's 'lib' directory, i.e.:

ModuleNotFoundError: No module named 'importlib_metadata'

What am I doing wrong? I had no issue before, but I was only using the 'requests' module so far; now I'm trying a lot more. Here is my requirements.txt:

requests>=2.0.0
resilient
resilient-lib

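In case it's relevant: as far as I know, Splunk doesn't add an app's lib directory to the Python import path by itself, so custom command scripts typically prepend it before any third-party import. A minimal sketch, assuming the script lives in the app's bin directory next to a sibling lib directory:

import os
import sys

# Prepend <app>/lib to the import path; assumes this script sits in <app>/bin.
APP_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.insert(0, os.path.join(APP_ROOT, "lib"))

# Third-party imports must come only after the path fix.
import requests  # noqa: E402

If only 'requests' worked before, it may have been resolving from Splunk's bundled Python packages rather than from your lib directory, which would explain why the new modules are the first to fail.
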