All Topics
I have two columns of data: one is Expected box and the other is Actual box. I would like to calculate the percentage/average of how many Actual box values are missing compared with the Expected box. Also, some of my Actual box entries have no value (null/undefined). I would like to ignore those rows where the Actual box is null/undefined when comparing with my Expected box.

PS: I am a beginner-level Splunker. Is there any way I can do this average based on these requirements? This is my search query:

index::service sourcetype::service "order_tote_analytics"
| spath "data.order_number"
| search "data.order_number"=*
| spath path=data{}.actual_totes{}.finalBoxAmount output=actualBox
| spath path=data{}.estimated_totes{}.box output=estimatedBox
| table estimatedBox actualBox

This is what my table looks like.

PS: I would love to display that percentage of actual values missing in a single-value panel. It would be nice if someone could show me how to do that. This is my (wrong) search query:

<panel>
  <single>
    <title>Percentage of actual values missing</title>
    <search>
      <query>index::service sourcetype::service "order_tote_analytics" | spath "data.order_number" | search "data.order_number"=$orderNumber$ | spath path=data{}.actual_totes{}.finalBoxAmount output=finalBox | spath path=data{}.estimated_totes{}.box output=estimatedBox | stats sum(estimatedBox) as totalBox, sum(finalBox) as finalbox</query>
      <earliest>$chosenTimePeriod.earliest$</earliest>
      <latest>$chosenTimePeriod.latest$</latest>
    </search>
    <option name="colorMode">block</option>
    <option name="drilldown">none</option>
    <option name="rangeColors">["0x53a051","0xf1813f","0xf8be34","0xf1813f","0xdc4e41"]</option>
    <option name="refresh.display">progressbar</option>
    <option name="useColors">1</option>
  </single>
</panel>
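A minimal sketch of one way to compute the missing percentage, assuming actualBox is absent (null) or empty on the rows to ignore — the field names follow the question, but the null test may need adjusting to the real data:

index::service sourcetype::service "order_tote_analytics"
| spath "data.order_number"
| search "data.order_number"=*
| spath path=data{}.actual_totes{}.finalBoxAmount output=actualBox
| spath path=data{}.estimated_totes{}.box output=estimatedBox
| eval isMissing=if(isnull(actualBox) OR actualBox="", 1, 0)
| stats sum(isMissing) as missing, count as total
| eval pctMissing=round(missing / total * 100, 2)
| table pctMissing

Dropping that query into the <query> element of the single-value panel should give the panel described above, since a single-value visualization displays the first field of the first result row.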
I have noticed that the Splunk process on my development search head abruptly shows as not available, and then it becomes available again. One of those times, I checked the status of the Splunk service, and below is the output. I'm not sure where to check or how to troubleshoot this problem. I was just assigned this problem, so I don't know how often it happens, but I wrote a cronjob to check the status every 5 minutes and got an alert that it happened last night around 3am. As far as I can tell, there is no pattern to it.

systemctl status splunk
splunk.service - Splunk Enterprise
   Loaded: loaded (/etc/systemd/system/splunk.service; enabled; vendor preset: disabled)
   Active: deactivating (stop-sigterm) (Result: exit-code) since Mon 2022-10-03 10:45:41 EDT; 41s ago
 Main PID: 76827 (code=exited, status=8)
   CGroup: /system.slice/splunk.service
           └─78229 /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_f5-bigip/bin/Splunk_TA_f5_bigip_main.py

Oct 03 10:02:52 splunkdev01.xxx.xxx splunk[76759]: [  OK  ]
Oct 03 10:02:52 splunkdev01.xxx.xxx splunk[76759]: All installed files intact.
Oct 03 10:02:52 splunkdev01.xxx.xxx splunk[76759]: Done
Oct 03 10:02:52 splunkbdev01.xxx.xxx splunk[76759]: All preliminary checks passed.
Oct 03 10:02:52 splunkdev01.xxx.xxx splunk[76759]: Starting splunk server daemon (splunkd)...
Oct 03 10:02:52 splunkdev01.xxx.xxx splunk[76759]: Done
Oct 03 10:04:31 splunkdev01.xxx.xxx sudo[78359]: splunk : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/bin/netstat -anp
Oct 03 10:04:31 splunkdev01.xxx.xxx sudo[78359]: pam_unix(sudo:session): session opened for user root by (uid=0)
Oct 03 10:04:43 splunkdev01.xxx.xxx systemd[1]: Started Splunk Enterprise.
Oct 03 10:45:41 splunkdev01.xxx.xxx systemd[1]: splunk.service: main process exited, code=exited, status=8/n/a
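A sketch of a search against Splunk's internal index that may show what splunkd logged right before the exit — the host pattern and time range are assumptions to adapt:

index=_internal sourcetype=splunkd host=splunkdev01* (log_level=ERROR OR log_level=FATAL) earliest=-24h
| table _time, component, _raw
| sort - _time

Since _internal is forwarded, this can be run even while the search head's own UI is unavailable, from any other search-capable instance in the environment.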
I have an issue where I have set up a Universal Forwarder on a Windows Azure server to monitor data stored on an Azure file share. This is my inputs.conf:

[monitor://\\********.file.core.windows.net\KanaResponse\RespShare\logs\log20*.xml]
disabled = 0
index = kana
sourcetype = kana_xml
crcSalt = <SOURCE>

The issue I have is that Splunk thinks the CRC has changed each time the file is written to, and re-ingests the whole file. The header of the file does not change, so I'm not sure why this happens. I read some other posts referring to how Azure file shares cache data, which changes metadata involved in the CRC calculation, but I'm not sure if that is definitely the case. Each file generates approximately 6,000 events, but due to the re-ingestion this can amount to over a million events per file. Our license would get eaten up pretty quickly if I left the feed enabled constantly.

Another knock-on issue is that when the log fills and a new file is created, Splunk doesn't see the new file, and the data feed stops until the Splunk forwarder is restarted. It does, however, stop ingesting the previous file. Splunk's internal log shows the following details confirming it thinks the file is new:

10-05-2022 11:37:12.556 +0100 DEBUG TailReader [8280 tailreader0] - Defering notification for file=\\********.file.core.windows.net\KanaResponse\RespShare\logs\log20221005_101607.xml by 3.000ms
10-05-2022 11:37:12.556 +0100 DEBUG TailReader [8280 tailreader0] - Finished reading file='\\********.file.core.windows.net\KanaResponse\RespShare\logs\log20221005_101607.xml' in tailreader0 thread, disposition=NO_DISPOSITION, deferredBy=3.000
10-05-2022 11:37:12.556 +0100 DEBUG WatchedFile [8280 tailreader0] - Reached EOF: fname=\\********.file.core.windows.net\KanaResponse\RespShare\logs\log20221005_101607.xml fishstate=key=0x8908643efe7e891f sptr=865145 scrc=0x77aadaaeb3af22ee fnamecrc=0xbd1b79bedeae4211 modtime=1664963939
10-05-2022 11:37:12.556 +0100 DEBUG WatchedFile [8280 tailreader0] - seeking \\********.file.core.windows.net\KanaResponse\RespShare\logs\log20221005_101607.xml to off=857837
10-05-2022 11:37:12.524 +0100 DEBUG TailReader [8280 tailreader0] - About to read data (Reusing existing fd for file='\\********.file.core.windows.net\KanaResponse\RespShare\logs\log20221005_101607.xml').
10-05-2022 11:37:12.524 +0100 INFO WatchedFile [8280 tailreader0] - Will begin reading at offset=0 for file='\\********.file.core.windows.net\KanaResponse\RespShare\logs\log20221005_101607.xml'.
10-05-2022 11:37:12.524 +0100 INFO WatchedFile [8280 tailreader0] - Checksum for seekptr didn't match, will re-read entire file='\\********.file.core.windows.net\KanaResponse\RespShare\logs\log20221005_101607.xml'.
10-05-2022 11:37:12.478 +0100 DEBUG TailReader [8280 tailreader0] - Will attempt to read file: \\********.file.core.windows.net\KanaResponse\RespShare\logs\log20221005_101607.xml from existing fd.
10-05-2022 11:37:12.478 +0100 DEBUG TailReader [8280 tailreader0] - Start reading file="\\********.file.core.windows.net\KanaResponse\RespShare\logs\log20221005_101607.xml" in tailreader0 thread
10-05-2022 11:37:00.394 +0100 INFO Metrics - group=per_source_thruput, series="\\********.file.core.windows.net\kanaresponse\respshare\logs\log20221005_101607.xml", kbps=0.622, eps=0.064, kb=19.295, ev=2, avg_age=1134.000, max_age=2268
10-05-2022 11:36:48.567 +0100 DEBUG TailReader [5484 MainTailingThread] - Enqueued file=\\********.file.core.windows.net\KanaResponse\RespShare\logs\log20221005_101607.xml in tailreader0

If anyone has any ideas how to circumvent this issue, I'd be hugely grateful. I have tried using MonitorNoHandle, but that doesn't work because (a) Splunk wants the network location to be mapped to a drive, which we aren't able to do, and (b) it requires individual files to be monitored, which we can't do easily as each new file uses the timestamp of when it was created in its filename. Thanks
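A sketch of two things that might be worth trying, offered as assumptions rather than a confirmed fix: dropping crcSalt = <SOURCE> (it only salts the CRC with the file path, which doesn't help with in-place changes) and raising initCrcLength so the initial checksum covers more of the stable header than the default 256 bytes:

[monitor://\\********.file.core.windows.net\KanaResponse\RespShare\logs\log20*.xml]
disabled = 0
index = kana
sourcetype = kana_xml
# hypothetical value; must not exceed the number of leading bytes that never change
initCrcLength = 1024

Note, though, that the "Checksum for seekptr didn't match" message suggests the content at the saved seek pointer is changing (the file being rewritten rather than appended to), which initCrcLength alone would not address.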
Hello! First time posting here. I just started learning Splunk and I am trying to extract events between two dates, 4/6/2021 and 4/7/2021. I tried a couple of the earlier suggested answers:

index="security"
| eval Date="04/07/2021"
| eval timestampDate=strptime(Date, "%m/%d/%Y")
| eval timestampStart=strptime("04/06/2021", "%m/%d/%Y")
| eval timestampEnd=strptime("04/07/2021", "%m/%d/%Y")
| eval formattedTimestamp=strftime(timestamp, "%Y-%m-%dT%H:%M:%S")
| where timestampDate >= timestampStart AND timestampDate <= timestampEnd

and

index="security"
| eval Date="4/7/2021"
| where (strptime(Date, "%m/%d/%Y")>=strptime("4/6/2021", "%m/%d/%Y")) AND (strptime(Date, "%m/%d/%Y")<=strptime("4/7/2021", "%m/%d/%Y"))

But the queries return all the events available in the log file. Attaching the screenshots here, along with a sample from the index. Can someone please assist? Thanks in advance.
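A sketch of what may be going wrong and one way around it: both queries eval Date to a literal constant, so the where clause compares constants and is true for every event. Comparing against each event's own timestamp (_time) instead should narrow the results — the end bound below assumes 4/7/2021 should be included through the end of that day:

index="security"
| where _time >= strptime("04/06/2021", "%m/%d/%Y") AND _time < strptime("04/08/2021", "%m/%d/%Y")

If the events are timestamped correctly at index time, the same thing can be done without any SPL at all by setting the time range picker, or with earliest/latest modifiers in the base search.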
Hey all, everything works fine, but I keep getting a strange error in Chrome only, ERR_SSL_PROTOCOL_ERROR, but not in Firefox or Edge. All browsers are running the latest versions, with no proxies or enhanced security features (no security plugins). I've gone through the basic troubleshooting steps of clearing the cache, resetting browser settings, etc. Still nothing. Any help would be appreciated.

I'm also sneaking another question in here: I have most devices using Splunk as a syslog destination, so aside from sourcetype="linux_messages_syslog" host="name.of.device" (string), what would be a good way to grab data for that specific host/device for a time frame of 12:00:00 AM to 4:00:00 AM that same day?

Regards, Gabriel
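For the second question, a sketch using snapped time modifiers — @d snaps to midnight of the day being searched, so this covers midnight to 4 AM today; shift both bounds (e.g. -1d@d and -1d@d+4h) for a previous day:

sourcetype="linux_messages_syslog" host="name.of.device" earliest=@d latest=@d+4h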
In Splunk Add-on for Sysmon | Splunkbase (and some other add-ons), XML extractions are done via a lot of manual transforms (e.g. [sysmon-version] REGEX = <Version>(\d+)</Version> FORMAT = Version::$1). Why aren't you using KV_MODE = XML? And could you please add the field query_type (Network_Resolution data model) and record_type values for A and AAAA records (which do NOT have a type: .. entry)?
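For comparison, a minimal props.conf sketch of the automatic alternative being asked about — the stanza name is hypothetical, and automatic XML extraction may well produce different field names than the add-on's hand-written transforms, which could be the reason for the manual approach:

[your_sysmon_sourcetype]
KV_MODE = xml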
I am performing a search for two events: a start event and a stop event for a specific job name. I have run into an issue where I am getting two start events, because the previous day's job (from 2 days ago) started late, after midnight (i.e. yesterday), while yesterday's job started on time, before midnight (also yesterday). I have tried last(startTimeRaw) and latest(startTimeRaw), but I am not getting the results I expected.

index=anIndex sourcetype=aSourceType aJobName ("START of script" OR "COMPLETED OK")
| eval startTimeRaw = if(match(_raw, "START of script"), _time, null())
| eval endTimeRaw = if(match(_raw, "COMPLETED OK"), _time, null())

I am getting this for startTimeRaw in my results:
-2 days: startTimeRaw = 1664770565
-1 day: startTimeRaw = 1664868051 & 1664948397

I am looking to assign startTimeRaw the last value found or matched (1664948397).
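A sketch that keeps only the most recent start: since startTimeRaw holds epoch seconds, max() picks the latest value, which under the assumption that the newest start is always the wanted one would yield 1664948397 here:

index=anIndex sourcetype=aSourceType aJobName ("START of script" OR "COMPLETED OK")
| eval startTimeRaw = if(match(_raw, "START of script"), _time, null())
| eval endTimeRaw = if(match(_raw, "COMPLETED OK"), _time, null())
| stats max(startTimeRaw) as startTimeRaw, max(endTimeRaw) as endTimeRaw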
I have a scheduled search that runs every minute, querying the previous one minute of time, and alerts if an event is found. Because the scheduled search interval is the same as the time period being searched, is it theoretically possible for an event to pop up in between the scheduled searches and not get alerted? If this is a possibility, then it feels like the only option would be to search a slightly longer timeframe so that there is some overlap in the searches, but that could mean duplicate events. The other option would be to use a real-time search rather than a scheduled search, but in general I try to avoid scheduled real-time searches.
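A sketch of the usual pattern, assuming the savedsearches.conf route: snap both bounds to the minute so consecutive windows tile exactly with no gap or overlap, and lag the window by a minute so events that arrive with a small indexing delay are still caught (the stanza name is hypothetical):

[my_every_minute_alert]
cron_schedule = * * * * *
dispatch.earliest_time = -2m@m
dispatch.latest_time = -1m@m

With snapped bounds, nothing falls between windows; the residual risk is events indexed later than the lag, which widening the lag addresses without creating duplicates.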
I am working on a custom command with a couple of external modules, which I installed into my 'lib' directory:

pip3 install -r requirements.txt --target=lib

After installing my custom command into Splunk (install from file), I keep getting ModuleNotFoundError even though all modules live in the app's 'lib' directory, i.e.:

ModuleNotFoundError: No module named 'importlib_metadata'

What am I doing wrong? I had no issue before, but I was only using the 'requests' module so far; now I'm trying a lot more. Here is my requirements.txt:

requests>=2.0.0
resilient
resilient-lib
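A sketch of the usual fix, assuming the command script lives in the app's bin directory and the packages in lib: Splunk only puts bin on the import path, so the script has to add lib itself before any third-party import (importlib_metadata is pulled in transitively by other packages, so it must end up in lib too):

import os
import sys

# prepend <app>/lib to the import path, computed relative to this script in <app>/bin
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), "lib"))

import requests  # now resolved from <app>/lib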
Hi Splunkers,

I have a problem using the MaxMind GeoIP databases. I got 4 databases from MaxMind (GeoIP2-City.mmdb, GeoLite2-ASN.mmdb, GeoIP2-Country.mmdb, GeoIP2-Anonymous-IP.mmdb) and I need to use all 4 of them. Following the documentation about iplocation (https://docs.splunk.com/Documentation/Splunk/7.0.2/SearchReference/Iplocation), I copied the databases I need to a specific directory and configured limits.conf to point to this directory for each of the databases. The databases were copied to the search head AND the indexers.

limits.conf:

[root@vlpsospk04-sh databases]# more ../local/limits.conf
[iplocation]
db_path = /data/splunk/etc/apps/cnaf_deploy_maxmind_databases/databases/GeoIP2-City.mmdb
db_path = /data/splunk/etc/apps/cnaf_deploy_maxmind_databases/databases/GeoLite2-ASN.mmdb
db_path = /data/splunk/etc/apps/cnaf_deploy_maxmind_databases/databases/GeoIP2-Country.mmdb
db_path = /data/splunk/etc/apps/cnaf_deploy_maxmind_databases/databases/GeoIP2-Anonymous-IP.mmdb

With this configuration, after restarting splunkd, I get data from GeoIP2-City.mmdb, but nothing from GeoIP2-Anonymous-IP.mmdb, for example. In the documentation about iplocation, only one mmdb file is documented, so is there a specific configuration to use multiple mmdb files?

Has anyone gotten results with several databases?

Thank you!
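As far as I know, [iplocation] honors a single db_path — repeating the key in one stanza just collapses to one effective value — so the only configuration I would expect to work per instance is a sketch like this, with whichever database matters most:

[iplocation]
db_path = /data/splunk/etc/apps/cnaf_deploy_maxmind_databases/databases/GeoIP2-City.mmdb

Querying the other databases would presumably need a different mechanism (for example a lookup-based approach or a custom command); treat that as an assumption to verify, not a documented limit.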
I have data where I get empty objects. I would like to count in total how many empty objects are in one table of data, and also compute an average over these empty objects.

PS: I am a beginner-level Splunker and could not figure out how to do an average of empty objects. This was my failed attempt:

index::service sourcetype::service "order_tote_analytics"
| spath "data.order_number"
| search "data.order_number"=*
| spath path=data{}.actual_totes output=finalBox
| eval countNull=if(finalBox == "{}", "this has value", "this is all null")
| table finalBox countNull

The above search query returns me this
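A sketch building on the same eval: flagging empty objects with 1/0 instead of strings makes them countable and averageable (note the original labels also look inverted — finalBox == "{}" is the empty case):

index::service sourcetype::service "order_tote_analytics"
| spath "data.order_number"
| search "data.order_number"=*
| spath path=data{}.actual_totes output=finalBox
| eval isEmpty=if(finalBox == "{}" OR isnull(finalBox), 1, 0)
| stats sum(isEmpty) as emptyCount, count as total, avg(isEmpty) as emptyRatio
| eval pctEmpty=round(emptyRatio * 100, 2)

Whether a null finalBox should count as empty is an assumption here; drop the isnull() test if not.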
Hi Splunkers, I have a request for our environment: I have to send AWS logs to our Splunk, which is a Cloud one. Googling, I found some very useful guides for different types of logs, such as the logs of a specific EC2 instance, for example everything under /var/log on a Linux VM. What I was not able to find is how to send the AWS hypervisor logs to Splunk; by hypervisor logs I mean everything related to VM, and so EC2 instance, management. For example, I want to be able to see in Splunk whether some admin has created, deleted, stopped or started an EC2 instance, either a new one or an existing one. Are there some config docs/guides I can use?
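If I understand the requirement, those management actions are recorded by AWS CloudTrail rather than by any per-VM log, and the Splunk Add-on for AWS can ingest CloudTrail into Splunk Cloud. Assuming that ingestion is in place, a sketch of a search over the relevant API calls:

sourcetype="aws:cloudtrail" (eventName=RunInstances OR eventName=StartInstances OR eventName=StopInstances OR eventName=TerminateInstances)
| table _time, userIdentity.arn, eventName, awsRegion

The sourcetype and field names match the add-on's CloudTrail defaults as far as I know, but verify them against the actual ingested data.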
Hey guys, I have the following data in Splunk (see the attached screenshot). Each event has 4 lines (separated by newlines), and every line in an event represents the value of a variable. My question: can I generate a table in which I list every event with the four variables? The table I want should look like the attached Excel table. Thanks for your help!
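A sketch with rex, which can match across the newlines inside each event — index, sourcetype, and the names var1 through var4 are placeholders for whatever the four lines actually hold:

index=your_index sourcetype=your_sourcetype
| rex field=_raw "(?<var1>[^\n]+)\n(?<var2>[^\n]+)\n(?<var3>[^\n]+)\n(?<var4>[^\n]+)"
| table var1, var2, var3, var4

The regex assumes exactly one value per line with no blank lines in between; adjust if the events have leading headers or trailing whitespace.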
On one of my RHEL 7.9 servers, I am seeing the Splunk service as Active: active (exited). I am using 8.2.2.0. Please guide me to fix the issue.

[root@srv01 ~]# systemctl -l | grep -i splunk
splunk.service loaded active exited SYSV: Splunk indexer service
[root@srv01 ~]# systemctl status splunk.service
Active: active (exited)
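The "SYSV" in the unit description suggests boot-start was enabled via the legacy init script, where systemd only sees the script exit and reports active (exited) even while splunkd itself keeps running; $SPLUNK_HOME/bin/splunk status is the more reliable check in that setup. If a proper systemd unit is wanted instead, a sketch of the switch (run as root; paths assume /opt/splunk):

/opt/splunk/bin/splunk stop
/opt/splunk/bin/splunk disable boot-start
/opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user splunk
/opt/splunk/bin/splunk start

Treat the exact flag set as something to confirm against the docs for 8.2.x; -systemd-managed exists there, but the options have shifted between versions.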
Hello, as you can see, the "type" field has 3 values: stand, vd or xe. If the "type" field is "vd" or "xe", I need to gather them under a choice called "virt", but I don't succeed. Could you help me please?

<input type="dropdown" token="type" searchWhenChanged="true">
  <label>Environnement source</label>
  <choice value="*">*</choice>
  <choice value="stand">stand</choice>
  <choice value="type=(vd OR xe)">virt</choice>
  <default>*</default>
  <initialValue>*</initialValue>
</input>
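A sketch of the usual fix, assuming the panel searches currently contain type=$type$: make each choice carry a full search fragment and reference the token bare ($type$) in the searches instead:

<input type="dropdown" token="type" searchWhenChanged="true">
  <label>Environnement source</label>
  <choice value="type=*">*</choice>
  <choice value="type=stand">stand</choice>
  <choice value="(type=vd OR type=xe)">virt</choice>
  <default>type=*</default>
  <initialValue>type=*</initialValue>
</input>

The original version fails because a choice value is substituted verbatim, so type=$type$ would expand to type=type=(vd OR xe).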
Hi guys, I need to evaluate a disruption. It can last multiple hours, so I need to use data that is at least 4h old. This query needs to show all disruptions that are longer than 15 minutes, with their starting timestamp and their last occurring timestamp. To group all logged events, I need a transaction which also contains the field CompleteDescription. If this field contains the specific values that can be seen in the query, it is a disruption. The query I've built works so far but is too slow to collect data from multiple hours. Does anyone have an idea how to improve the query's performance? Thank you!

index=log sourcetype=servlog
| transaction ThreadId host maxspan=180s startswith=(LogMessage=start) endswith=(LogMessage=end)
| stats earliest(_time) as "first", latest(_time) as "last", count by Type, CompleteDescription
| eventstats sum(count) as count_full by Type, CompleteDescription
| eventstats sum(count_full) as total by Type
| eval percentage = round((count_full/total)*100,0)
| eval time_diff = round((last - first)/60, 0)
| eval CompleteDescription=upper(CompleteDescription)
| search Type!=SSL (CompleteDescription = "MISSING RESPONSE" OR CompleteDescription = "TIMEOUT" OR CompleteDescription = "TECHNICAL ERROR" OR CompleteDescription = "INTERNAL SYSTEM ERROR" OR CompleteDescription = "NO REACHABILITY") total >= 10 percentage >= 50 time_diff >= 30
| convert ctime(first) ctime(last)
| table Type, CompleteDescription, count_type, count, percentage
| sort - percentage, total
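transaction is usually the bottleneck here; a sketch of a stats-based replacement that approximates the same grouping, under the assumption that each ThreadId/host pair carries one start/end pair within the window (the maxspan-style pruning then becomes the duration filter):

index=log sourcetype=servlog (LogMessage=start OR LogMessage=end)
| stats min(eval(if(LogMessage=="start", _time, null()))) as first,
        max(eval(if(LogMessage=="end", _time, null()))) as last,
        count by ThreadId, host, Type, CompleteDescription
| where isnotnull(first) AND isnotnull(last) AND (last - first) <= 180

From there the existing eventstats/eval/search tail can be reused. stats streams and runs in parallel on the indexers, while transaction collects events on the search head, which is where the multi-hour slowdown tends to come from.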
How can I convert a timestamp like the one below to just the date 2022-10-04?

timestamp: 2022-10-04 19:52:00.151 -0500

The requirement is to visualize values over the last 7 days based on date.
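A sketch, assuming timestamp is a string field in exactly that format (%3N for milliseconds, %z for the UTC offset); if the value is already the event time, strftime(_time, "%Y-%m-%d") alone does it:

| eval date=strftime(strptime(timestamp, "%Y-%m-%d %H:%M:%S.%3N %z"), "%Y-%m-%d")

Or, since the date is simply the first 10 characters of that string:

| eval date=substr(timestamp, 1, 10)

For the 7-day visualization, ... | timechart span=1d count over a last-7-days time range may be simpler than grouping on the derived date field.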
Hi guys, I am having an issue with Telegram alerts.

There are 3 available apps on Splunkbase for sending alerts from Splunk using Telegram bots. I created a bot and added it to a group that will receive all alerts. After that, I configured the Telegram app in Splunk and added all the settings: chat id, bot id, and proxy settings (the proxy is the only way to access the internet). But it's not working; I mean, I'm not receiving any alerts. I guess the problem is the proxy.
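One way to narrow it down is to test the bot and the proxy outside Splunk; a sketch with curl, where the proxy URL is a placeholder for the real one and <BOT_TOKEN>/<CHAT_ID> come from the bot setup:

curl -x http://your-proxy:3128 "https://api.telegram.org/bot<BOT_TOKEN>/getMe"
curl -x http://your-proxy:3128 "https://api.telegram.org/bot<BOT_TOKEN>/sendMessage?chat_id=<CHAT_ID>&text=test"

getMe and sendMessage are standard Telegram Bot API methods; if both succeed from the Splunk host but the alert action still fails, the problem is likely in the app's proxy handling rather than the network path.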
Hey Splunk Community, I'm having an issue with the $SPLUNK_HOME/var/lib/splunk/kvstore/mongo directory. I have a tonne of files present in this directory, dated from several months ago, around the 512MB size range and ending in ".ns" and ".0". If we remove these files over, say, an age of 30 days, would this have any impact on the SIEM, or is this action safe?
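For what it's worth, those .ns and .0 files are MongoDB's own data files backing the KV store, so deleting them by age is not safe: they are live database storage, not rotated logs. If the KV store genuinely holds nothing worth keeping, the supported way to reclaim the space is a clean — a sketch, with the caveat that this wipes all KV store collections (lookups, some app state), so verify the exact invocation first:

/opt/splunk/bin/splunk stop
/opt/splunk/bin/splunk clean kvstore -local
/opt/splunk/bin/splunk start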
How can I make an arithmetic comparison (greater than, less than) of two fields? The first field consists of numbers: field="1", field="2". The second consists of numbers and letters: field="1.route", field="2.route".
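A sketch, assuming the second field always looks like <number>.route so the numeric part is everything before the first dot — the field names first and second are placeholders:

| eval secondNum=tonumber(mvindex(split(second, "."), 0))
| eval comparison=case(tonumber(first) > secondNum, "more",
                       tonumber(first) < secondNum, "less",
                       true(), "equal")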