All Topics


Hi, I have a question about using saved searches vs. macros and the number of jobs produced in Splunk. We must run alerts every 1 minute looking for issues; the SPL calls multiple other saved searches, which can in turn call more saved searches, and so on. We have noticed that this drives the number of jobs in Splunk very high. The question is: if we used macros instead of saved searches, would the number of jobs decrease?
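(For context on why the job count behaves this way: if the inner searches are scheduled, every scheduled run is dispatched as its own job, whereas a macro is just text that is expanded into the calling search before it is parsed, so it adds no extra jobs. A minimal sketch of turning one inner search into a macro; the macro name, index, and sourcetype here are hypothetical:)

# macros.conf (in your app's local directory)
[find_issues]
definition = index=app_logs sourcetype=app_errors log_level=ERROR

The alert then references it inline:

`find_issues` | stats count by host

Only the outer alert search appears as a job, because the macro text is expanded before dispatch.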
Hi Team, Nessus data has been missing in Splunk since the Nessus scanner's storage was increased. Nothing has been changed on the Splunk side. Kindly find the error below and assist.

ERROR pid=15132 tid=MainThread file=nessus_rest_client.py:request:91 | Failed to connect https://X.X.X.X:8834/scans/4098, reason=Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_nessus/bin/nessus_rest_client.py", line 79, in request
    headers=headers)
  File "/opt/splunk/etc/apps/Splunk_TA_nessus/bin/splunktalib/httplib2/__init__.py", line 1593, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File "/opt/splunk/etc/apps/Splunk_TA_nessus/bin/splunktalib/httplib2/__init__.py", line 1335, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "/opt/splunk/etc/apps/Splunk_TA_nessus/bin/splunktalib/httplib2/__init__.py", line 1291, in _conn_request
    response = conn.getresponse()
  File "/opt/splunk/lib/python2.7/httplib.py", line 1148, in getresponse
    response.begin()
  File "/opt/splunk/lib/python2.7/httplib.py", line 448, in begin
    version, status, reason = self._read_status()
  File "/opt/splunk/lib/python2.7/httplib.py", line 404, in _read_status
    line = self.fp.readline(_MAXLINE + 1)
  File "/opt/splunk/lib/python2.7/socket.py", line 480, in readline
    data = self._sock.recv(self._rbufsize)
  File "/opt/splunk/lib/python2.7/ssl.py", line 754, in recv
    return self.read(buflen)
  File "/opt/splunk/lib/python2.7/ssl.py", line 641, in read
    v = self._sslobj.read(len)
SSLError: ('The read operation timed out',)

Thanks, Ashwarya
Hi, my Splunk environment is on-prem. I have a single indexer (IDX) running RHEL on a physical stand-alone server. Indexes are stored on a RAID 5 disk configuration on the same server. My disk is starting to fill up, and I was wondering if it is possible to manually delete older warm bucket files on my indexer (basically running an rm Linux command)? The indexes in question are not being used or written to any more, so I don't need to search or access them again. Is this OK to do? Will it create any inconsistency issues or errors in my Splunk environment? Thanks.
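(For comparison, the usual approach is to let Splunk retire the buckets itself via retention settings rather than rm. A minimal indexes.conf sketch, assuming a hypothetical index name old_data; with no coldToFrozenDir set, frozen buckets are simply deleted:)

# indexes.conf on the indexer -- hypothetical index name and sizes
[old_data]
# freeze (delete) buckets whose newest event is older than 30 days
frozenTimePeriodInSecs = 2592000
# and/or cap the index's total size on disk, in MB
maxTotalDataSizeMB = 100000

A restart picks up the change, and Splunk removes the aged buckets cleanly without leaving dangling references in its bucket manifests.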
Hi, I have a CSV file that contains my error codes and their meanings, and I import this CSV as a lookup. The problem is that some codes have the same meaning, and when I generate a report they are shown separately. Here is my CSV:

code    meaning
404     Page not found
402     Page not found

Current output:

Code    Meaning           Count
404     Page not found    25
402     Page not found    25

I need to treat them as one and count them together, like this:

Code        Meaning           TotalCount
404, 402    Page not found    50

FYI: if the meanings are the same, the codes should be considered the same and counted together. Any idea? Thanks
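(One way to do this, assuming the report already produces the Code/Meaning/Count columns shown above: group by Meaning, collect the codes into a multivalue field, and sum the counts:)

... | stats values(Code) as Code sum(Count) as TotalCount by Meaning
| eval Code=mvjoin(Code, ", ")
| table Code Meaning TotalCount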
Hello all, I am looking to automatically extract all key-value pairs from the following event.

Oct 20 12:52:40 11.22.33.44 status: Info: Status: CPULd 0 DskIO 0 RAMUtil 4 QKUsd 0 QKFre 8388608 CrtMID 184222 CrtICID 2257015 CrtDCID 83915 InjMsg 126359 InjRcp 133519 GenBncRcp 2695 RejRcp 39342 DrpMsg 6 SftBncEvnt 9183 CmpRcp 139771 HrdBncRcp 3372 DnsHrdBnc 560 5XXHrdBnc 2406 FltrHrdBnc 0 ExpHrdBnc 406 OtrHrdBnc 0 DlvRcp 136399 DelRcp 0 GlbUnsbHt 0 ActvRcp 0 UnatmptRcp 0 AtmptRcp 0 CrtCncIn 0 CrtCncOut 0 DnsReq 5081719 NetReq 2029480 CchHit 5092135 CchMis 1888266 CchEct 891535 CchExp 1239213 CPUTTm 283360 CPUETm 123729299 MaxIO 7211 RAMUsd 182837940 MMLen 0 DstInMem 5 ResCon 0 WorkQ 0 QuarMsgs 0 QuarQKUsd 0 LogUsd 6 SophLd 0 BMLd 0 CASELd 0 TotalLd 53 LogAvail 175G EuQ 0 EuqRls 0 CmrkLd 0 McafLd 0 SwIn 1743004 SwOut 1980200 SwPgIn 8219419 SwPgOut 19669768 RptLd 0 QtnLd 0 EncrQ 0 InjBytes 21864651864

For that, I used the following regex: (?<key>\S+)\s(?<value>\d+) However, I cannot get a 100% match, because I only want this to apply to the data after "Status:" (in bold in the example). Can anyone help me get a regex for this key-value pair extraction? Thanks in advance.
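(A sketch of one search-time approach: isolate everything after "Status:", rewrite the space-separated pairs into key=value form, then let the extract command split them. Untested, so treat it as a starting point:)

... | rex "Status:\s+(?<kvblock>.+)"
| eval _raw=replace(kvblock, "(\S+) (\S+)", "\1=\2")
| extract pairdelim=" " kvdelim="="

Pairing consecutive tokens works here because the text after "Status:" strictly alternates key value key value, and it also catches non-numeric values such as LogAvail 175G, which the original \d+ pattern would miss. Note this overwrites _raw only within the search pipeline.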
First event msg:

INFO | 2021-10-18 05:17 AM | BUSINESS RULE | Payload for ID#: 58916 with status Approved is published

Second event msg:

INFO | 2021-10-14 10:38 PM |  Message consumed: {"InputAmountToCredit":"22.67","CurrencyCode":"AUD","Buid":"1401","OrderNumber":"877118406","Cre ID":"58916"}

I want the sum of InputAmountToCredit based on status. The status can vary, and the ID is the common field between both events (but it appears under a different key in each). How can I extract the fields status, InputAmountToCredit, and ID, sum the amount for each ID that has the same status, and generate a status-wise report?

Also, my searches are slow and I want to implement a summary index.
1. If tomorrow I want to update my summary index, will I be able to do so with new events?
2. How can I decide what should be part of the summary index, given that right now I only have the requirement for these two events?

I am new to Splunk, so any pointers will help. Thanks for all the support.
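(A sketch of the correlation, assuming a hypothetical index name and that the raw text matches the two samples above; the two ID keys are unified with coalesce and the events are joined by ID via stats:)

index=app ("BUSINESS RULE" OR "Message consumed")
| rex "ID#:\s(?<id>\d+)\swith status\s(?<status>\w+)"
| rex "\"InputAmountToCredit\":\"(?<amount>[\d.]+)\""
| rex "\"Cre ID\":\"(?<id2>\d+)\""
| eval id=coalesce(id, id2)
| stats values(status) as status sum(amount) as amount by id
| stats sum(amount) as TotalAmount by status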
I am struggling to understand data retention for a SmartStore index. I guess I am thinking about it in a non-SmartStore way. Below are my requirements:

Data ingestion per day - 1.5 TB
Total retention - 90 days
Local storage - 50 days
SmartStore (remote) storage - 40 days
Multi-site indexer cluster with 2 indexers in each AWS AZ, with RF = 3 and SF = 3

From the docs, I understand the following parameters are key for retention:

maxGlobalDataSizeMB
maxGlobalRawDataSizeMB
frozenTimePeriodInSecs

Since both maxGlobal* parameters apply mainly to warm buckets, how can I control data retention for each index as a whole, not just for warm buckets, similar to maxTotalDataSizeMB? With a non-SmartStore index this was easy: you just needed maxTotalDataSizeMB and frozenTimePeriodInSecs. Also, how can I control the size each index occupies on local storage (basically the ability to configure something similar to homePath.maxDataSizeMB, which is ignored for SmartStore indexes)?

I am surprised none of this is documented clearly in the official Splunk docs, though some information can be found in the forums. How can I translate my requirements into these parameters on a per-index basis? Please help.
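(A sketch of where the knobs live, with hypothetical names and sizes: time-based retention still comes from frozenTimePeriodInSecs in indexes.conf, while local disk usage under SmartStore is governed by the cache manager in server.conf rather than by per-index homePath settings:)

# indexes.conf -- hypothetical index; 90-day total retention
[app_logs]
remotePath = volume:remote_store/$_index_name
frozenTimePeriodInSecs = 7776000
# optional size-based safety net across the whole index (local + remote)
maxGlobalDataSizeMB = 150000000

# server.conf on each indexer -- caps what SmartStore keeps on local disk
[cachemanager]
max_cache_size = 2000000
eviction_policy = lru
# keep recently written buckets local longer (roughly the "50 days local" goal)
hotlist_recency_secs = 4320000

Note the local/remote split is per-indexer cache behavior rather than a hard per-index quota, so the 50-day local figure is approximated with hotlist_recency_secs plus the overall cache size, not guaranteed per index.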
Hi, I have data with field names in the format h00m00, h00m15, h00m30, h00m45, h01m00, ... through h23m45. I'd like to pull the 'h' value from a dropdown and use it in a chart command, so eval hh=substr('04:00:00',1,2) from the current dropdown token should give me 04. Then I want to chart just the h04* fields so I can focus on a specific hour, as below where I hardcoded the h04*:

| chart sum(h04*) by edate

If anyone has any ideas I'd be grateful.
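(One sketch of how this can be wired in Simple XML: compute the two-character hour into its own token in the dropdown's change handler, then interpolate that token into the wildcard. The token and input names here are made up:)

<input type="dropdown" token="hour_raw">
  <label>Hour</label>
  <choice value="04:00:00">04:00</choice>
  <change>
    <eval token="hh">substr($value|s$, 1, 2)</eval>
  </change>
</input>
...
<query>index=mydata | chart sum(h$hh$*) by edate</query>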
Hello, I have a list of 20 server IPs. I'm not an administrator of Splunk. I need to find events where the source or destination IP matches one of the IPs on this list of servers. Thank you.
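(Without admin rights to create a lookup, the simplest sketch is to inline the list; the field names src_ip/dest_ip and the index are assumptions, so substitute whatever your data actually uses:)

index=network (src_ip IN ("10.0.0.1","10.0.0.2","10.0.0.3") OR dest_ip IN ("10.0.0.1","10.0.0.2","10.0.0.3"))
| table _time src_ip dest_ip action

(Extend the IN lists with the remaining IPs; on Splunk versions before 7.3, use OR-ed equality instead of IN. If an admin can create a lookup for you, an inputlookup subsearch scales better than a hand-maintained list.)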
Hi, I want to upgrade Splunk DB Connect (current version 1.1.7) to its latest version. Please help me with these queries:
1. Can you suggest which version I should go for?
2. Are there any prerequisites before updating to a higher version?
3. Our current Splunk Enterprise version is 7.1.3; will a higher version of Splunk DB Connect be compatible with it?
Regards, Rahul
Hello, this is my search:

index=antivirus
| stats values(SAVVersion) as SAVVersion, values(EngineVersion) as EngineVersion, values(VirusDataVersion) as VirusDataVersion, max(LastMessageTime) as LastMessageTime, max(LastScanDateTime) as LastScanDateTime by Name
| sort LastScanDateTime
| eval diff=round((Now() - LastScanDateTime)/60/60/24)
| eval "active the last seven days ?"=if(round((Now() - LastMessageTime)/60/60/24)>7,"NO","YES")
| where (diff > 3) OR isnull(diff)
| fillnull value="-"
| sort - "active the last seven days ?" - diff

I would like to see only the PCs (Name) not scanned (LastScanDateTime) for more than three days, but my search does not work; it returns all the PCs. Can you please help me? Sorry for my English.
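(Two likely issues, for what it's worth: eval function names are lowercase, so Now() should be now(), and if LastScanDateTime is a string timestamp rather than epoch seconds, the subtraction yields null, which the isnull(diff) clause then lets through for every PC. A sketch of the relevant lines, assuming a hypothetical %Y-%m-%d %H:%M:%S timestamp format; adjust strptime to match your data:)

| eval scan_epoch=strptime(LastScanDateTime, "%Y-%m-%d %H:%M:%S")
| eval diff=round((now() - scan_epoch)/86400)
| where diff > 3 OR isnull(diff)

With the parsing fixed, isnull(diff) then genuinely means "never scanned" instead of "timestamp failed to subtract".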
Hello team, I am trying to monitor Windows event logs and have installed the universal forwarder with the relevant data. I am getting the Application and System logs; however, the Security events are not being forwarded. I am adding the inputs.conf details below. Please let me know what is causing this.

###### OS Logs ######
[WinEventLog://Application]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
# only index events with these event IDs.
whitelist = 16350-16400
index = default_tier1_idx
renderXml = false

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
# only index events with these event IDs.
whitelist = 0-10000
index = default_tier1_idx
renderXml = false

[WinEventLog://System]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
# only index events with these event IDs.
whitelist = 7000-7050
index = default_tier1_idx
renderXml = false

Thank you
Hi all, hope you are all doing well. I am trying to extract a field from data that comes in different formats. I want to extract the reference number.

DATA:
0561170-0443 -> 0561170 is the reference number
0213_DFS_201021004 -> 201021004 is the reference number
0159_1606766A_191021016 -> 1606766A is the reference number

Can you please let me know how I can achieve this? I tried rex but could not get it to work. Is there any other way to do it? Thanks in advance.
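(A sketch based only on the three samples above, so the rule is guessed: for the hyphen format take the part before the hyphen; for the underscore format take the middle segment when it mixes letters and digits, otherwise the last segment. The source field name data is hypothetical:)

| rex field=data "^(?<ref>\d+)-\d+$"
| rex field=data "^[^_]+_(?<mid>[^_]+)_(?<last>[^_]+)$"
| eval ref=coalesce(ref, if(match(mid, "[A-Za-z]") AND match(mid, "\d"), mid, last))

(This maps 0561170-0443 -> 0561170, 0213_DFS_201021004 -> 201021004, and 0159_1606766A_191021016 -> 1606766A; verify the rule against more of your real values.)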
Hi team,

1. I have a first query which returns the chart below:

<baseQuery> | timechart span=4w count(ACT) as countOfOpenSession, distinct_count(UID) as countOfUserID, distinct_count(CMN) as countOfCustomer

2. Then I have a second query which returns the table and chart below, giving the CMN value with the highest hit count per month:

<baseQuery> | stats count(ACT) as hit by date_month CMN | eventstats max(hit) as maxhit by date_month | where hit=maxhit | fields - maxhit

What I want to get from the Splunk search:
1. Combine the two queries into one (the baseQuery is the same in both).
2. Combine the timeline chart and the bar chart into one chart.
3. On the bars of the combined chart, display both CMN (customer name) and hit count.

How do I edit the query and its formatting to achieve the expected chart (similar to the example chart I attached)?
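(A sketch of combining the two into one result set: bin by the same 4-week span, compute the per-span totals with eventstats so they survive the per-CMN grouping, then keep only the top CMN row per span. Untested against your data:)

<baseQuery>
| bin _time span=4w
| eventstats count(ACT) as countOfOpenSession dc(UID) as countOfUserID dc(CMN) as countOfCustomer by _time
| stats count(ACT) as hit first(countOfOpenSession) as countOfOpenSession first(countOfUserID) as countOfUserID first(countOfCustomer) as countOfCustomer by _time CMN
| eventstats max(hit) as maxhit by _time
| where hit=maxhit
| fields - maxhit
| eval barLabel=CMN." (".hit.")"

You can then render hit as columns and the three count* fields as line overlays (Format > Chart Overlay), using barLabel for the bar labels.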
Hello team, I am trying to extract a particular field (highlighted in the original post). However, the expression works correctly on one type of event but picks a different (underlined) field from the other event. Please let me know what I am doing wrong.

(?:[^,]+,){19}\"(?<instance>[^,]+)\",.*

Below is the event which gets extracted as expected:

50271232,00004102,00000000,1600,"20210901225500","20210901225500",4,-1,-1,"SYSTEM","","psd217",46769357,"MS932","KAVS0260-I \x83W\x83\x87\x83u\x83l\x83b\x83g(AJSROOT1:/\x90V\x8A_\x96{\x94ԏ\x88\x97\x9D/\x92l\x8ED\x94\xAD\x8Ds/04_\x92l\x8ED\x8Ew\x8E\xA6\x83f\x81[\x83^\x98A\x8Cg_\x8CߑO1TAX:@5V689)\x82\xF0\x8AJ\x8En\x82\xB5\x82܂\xB7","Information","jp1admin","/APP/ABC/AJS2","JOBNET","Server2:/\x90V\x8A_\x96{\x94ԏ\x88\x97\x9D/\x92l\x8ED\x94\xAD\x8Ds/04_\x92l\x8ED\x8Ew\x8E\xA6\x83f\x81[\x83^\x98A\x8Cg_\x8CߑO1TAX","JOBNET","AJSROOT1:/\x90V\x8A_\x96{\x94ԏ\x88\x97\x9D/\x92l\x8ED\x94\xAD\x8Ds/04_\x92l\x8ED\x8Ew\x8E\xA6\x83f\x81[\x83^\x98A\x8Cg_\x8CߑO1TAX","AJSROOT1:/\x90V\x8A_\x96{\x94ԏ\x

Below, the highlighted field is what I need to extract, but the underlined one is being extracted:

50271228,00004105,00000000,3088,"20210901225446","20210901225446",4,-1,-1,"SYSTEM","","psd240",316413752,"MS932","KAVS0263-I \x83W\x83\x87\x83u(AJSROOT1:/\x90V\x8A_\x96{\x94ԏ\x88\x97\x9D/MCS/\x8AĎ\x8B/09_\x92\x8D\x95\xB6\x91\x97\x90M\x96\xA2\x8DX\x90V\x8D\x80\x96ڃ`\x83F\x83b\x83N/HULFT\x91\x97\x90M\x8C㎞\x8Aԑҋ@1MIN:@50R6189)\x82\xF0\x8AJ\x8En\x82\xB5\x82܂\xB7(host: PSD511, JOBID: 0)","Information","jp1admin","/App/ABC/AJS2","JOB","Server1:/\x90V\x8A_\x96{\x94ԏ\x88\x97\x9D/MCS/\x8AĎ\x8B/09_\x92\x8D\x95\xB6\x91\x97\x90M\x96\xA2\x8DX\x90V\x8D\x80\x96ڃ`\x83F\x83b\x83N/HULFT\x91\x97\x90M\x8C㎞\x8Aԑҋ@1MIN","JOBNET","AJSROOT1:/\x90V\x8A_\x96{\x94ԏ\x88\x97\x9D/MCS/\x8AĎ\x8B/09_\x92\x8D\x95\xB6\x91\x97\x90M\x96\xA2\x8DX\x90V\x8D\x80\x96ڃ`\x83F\x83b\x83N","AJSROOT1:/\x90V\x8A_\x96{\x94ԏ\x88\x97\x9D/MCS/\x8AĎ\x8B/09_\x92\x8D\x95\xB6\x91\x97\x90M\x96\xA2\x8DX\x90V\x8D\x80\x96ڃ`\x83F\x83b\x83N/HULFT\
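(For what it's worth, the likely cause is visible in the second sample: its quoted message field contains a comma, "(host: PSD511, JOBID: 0)", so (?:[^,]+,){19} skips one field too many. A sketch of a quote-aware version that treats each CSV field as either a quoted string or an unquoted run, then captures the 20th field:)

^(?:(?:\"[^\"]*\"|[^,]*),){19}\"(?<instance>[^\"]+)\"

(This counts real CSV fields rather than raw commas, so both samples should yield the Server...:/... value in field 20; adjust the repeat count if the field you highlighted is at a different position.)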
Hello fellow Splunkers, it was brought to my attention by our F5 system manager that some logs from F5 are missing in Splunk. Once every few logs sent (and there are a lot of them), it appears that a log or two simply disappear and are not indexed. To ingest the logs, we installed the F5 add-on on our HF and configured both ends (F5 and Splunk) according to the documentation at docs.splunk.com. Does anyone have any idea what could cause this? We're using Splunk Enterprise 8.0.7 and the Splunk Add-on for F5 4.0.1. Thanks
Hello everyone, please help me with how to remove Splunk search heads from the Splunk master (cluster). We have some dedicated Splunk search heads in our environment, e.g. 15 search heads, of which 7 are down because their users have stopped using them. We are now planning to remove those 7 search heads from our Splunk master. What is the procedure? Could anyone explain the manual way? That would be great. Please also check the screenshot I have shared below; these are the search heads we need to remove. Regards
Hi team, I am pulling my hair out trying to figure out a query that extracts data into a table with the following information:

stopping system logging (rsyslog)
stopping the Tripwire agent
stopping the Splunk agent

on hosts. I also want to know who stopped/disabled those services and at what time, so I can reconcile it with approved changes. I'd really appreciate it if someone could help. Thank you.
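(A sketch, assuming the hosts send syslog and auditd data under hypothetical index/sourcetype names: systemd logs the stop itself, while auditd's SERVICE_STOP records carry the UID of who did it:)

index=os sourcetype=syslog process=systemd ("Stopping" OR "Stopped") ("rsyslog" OR "tripwire" OR "splunk")
| table _time host _raw

index=os sourcetype=linux_audit type=SERVICE_STOP (rsyslog OR tripwire OR splunkd)
| table _time host uid unit

(Field names such as process, type, uid, and unit depend on your TA and audit configuration, so verify them against your actual events.)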
Does anyone know how to change the default time range for ad-hoc searches from 30 minutes to 7 days in Splunk Cloud? I changed the setting in Server settings » Search preferences to 7 days; however, the default in Search & Reporting is still 30 minutes. Our data usually takes 48 hours to get through the data pipeline, so a 30-minute time window for searches will never return data.
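(For reference, the time-picker default the Search app actually reads lives in ui-prefs.conf; a sketch of what it looks like on-prem. In Splunk Cloud you typically cannot edit conf files directly, so this would go through a support ticket or an app update:)

# etc/apps/search/local/ui-prefs.conf
[search]
dispatch.earliest_time = -7d@d
dispatch.latest_time = now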
Apparently, after I fill in type="laptop", color="blue", the laptop is still shown in black. The same goes for all the other types; they just remain black. Does anyone know why?