All Topics


Hi, I am running a search head cluster with 7 search heads on Splunk 8.2.9. Two of the search heads are generating the following error messages at ~5 second intervals for a period of time before stopping:

ERROR DigestProcessor [38271 TcpChannelThread] - Failed signature match
ERROR HTTPAuthManager [38271 TcpChannelThread] - Failed to verify HMAC signature, uri: /services/shcluster/member/consensus/pseudoid/raft_request_vote?output_mode=json

The search head cluster is otherwise running as expected, as far as I can tell. From examining the logs, the search heads producing these errors are the only two that have been elected captain in the last 30 days. There are no preferred-captain or similar configurations set. I have checked the [shclustering] pass4SymmKey values on each search head; they are all configured to the same value, although each uses a different splunk.secret to encrypt it. I am not sure when the errors first started appearing, so unfortunately I can't link this to a specific upgrade or configuration change. The thread_id values seem to stay around for between 10 and 30 minutes. Sometimes two thread_ids are active at once; sometimes none are active for a period. When looking at other logs for a particular thread_id around the same time period (at INFO logging level) I can't see anything that adds any more clues to what is causing the errors.
We have an integration with Event Hub using Splunk_TA_microsoft-cloudservices and we see that events are missing. What might be the reason? If an event reaches the Event Hub with a delay, will the app still pull the data? How far back in time does the app scan for data?
Hi, I am having trouble routing the logs: first.txt to index1/2 and second.txt to index3/4. Below is my environment.

inputs.conf:

[monitor:///home/odelakumar06/first.txt]
disabled = false
host = hf
index = firstone
sourcetype = firstone
_TCP_ROUTING = FirstGroupIndexer

[monitor:///home/odelakumar06/second.txt]
disabled = false
host = hf
index = secondone
sourcetype = secondone
_TCP_ROUTING = SecondGroupIndexer

and my outputs.conf is:

[tcpout]
defaultGroup = FirstGroupIndexer,SecondGroupIndexer

[tcpout:FirstGroupIndexer]
disabled = false
server = 34.100.154.111:9997,35.244.6.201:9997

[tcpout:SecondGroupIndexer]
disabled = false
server = 34.100.190.134:9997,34.93.239.18:9997

I have one SH and I added all the above indexes on the SH. When I search index=firstone on the SH, I get nothing. In splunkd.log I see the errors below. Please suggest.

02-02-2023 06:33:10.051 +0000 ERROR TcpInputProc [1983 FwdDataReceiverThread] - Message rejected. Received unexpected message of size=1195725856 bytes from src=162.142.125.9:49748 in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid source sending data to splunktcp port or valid source sending unsupported payload.
host = indx-1   source = /opt/splunk/var/log/splunk/splunkd.log   sourcetype = splunkd
Hi guys, fewer events are displayed when I search with * and then search for the hostname, while all events show if I search with the hostname at the beginning. Please suggest why it is misbehaving and what the solution is to get all events. Thanks in advance.
| inputlookup suspicious_win_comm.csv

The lookup table contents have only one field, keyword:

keyword <- field name
tasklist
ver
ipconfig
net time
systeminfo
netstat
whoami
chrome

I want to see a result like this:

event.commandline                                                                   matching keyword
c:\program files (x86)\application\chrome.exe --signal-argument http............    chrome

I used this SPL on my local system:

index=crowdstrike [| inputlookup suspicious_win_comm.csv | eval "event.commandline" = "*".keyword."*" | fields "event.commandline" | format ]
Is it possible to do a rolling upgrade from Splunk version 6 to version 9?
(index="external*" Feedback* "Text")
| transaction channel startswith=POST endswith=received maxspan=1m maxevents=2
| xmlkv
| dedup ip1
| table ip1
| appendcols [ search index="internal*" "Missing" | eval fields=split(_raw,"|") | eval ip2=mvindex(fields,7) | dedup ip2 | table ip2 ]

I have two separate searches outputting to a table: the first for ip1, which has a small number of IP addresses (10-20 usually); the second for ip2, which has a much larger number (1000+ usually). E.g. the output might look like:

ip1             ip2
192.168.1.1     192.1681.4
192.168.1.20    192.168.4.50
172.20.20.1     172.1.2.10
192.168.1.60    192.168.4.68
12.140.14.30    13.150.34.40
                100.149.50.4
                192.168.1.60
                172.27.27.4
                172.40.40.3
                ... (continues)

I need to cross-reference the full list/range of ip2 for any occurrences of IPs from ip1, and maybe have these display in a new column. However, this is proving difficult: everything I try only compares the output on a row-by-row basis, which is not what I need and will almost never get a result, e.g. ip1row1 does not match ip2row1, and so on down to ip1rowX vs ip2rowX, and then multiple empty ip1 rows comparing against the remaining ip2 rows.
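Conceptually, the comparison being asked for is a set intersection rather than a row-by-row match (in SPL a whole-column match like this is usually done with a subsearch or a lookup rather than appendcols). A minimal Python sketch of the intended logic, with stand-in lists since the real values come from the two searches:

```python
# Stand-in values for the ip1/ip2 columns produced by the two searches.
ip1 = ["192.168.1.1", "192.168.1.20", "192.168.1.60", "172.20.20.1"]
ip2 = ["192.168.4.68", "192.168.1.60", "172.27.27.4", "192.168.1.1"]

# Row position is irrelevant: treat each column as a set and intersect.
matches = sorted(set(ip1) & set(ip2))
print(matches)  # addresses from ip1 that appear anywhere in ip2
```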
Basically I have a set of raw data with different timestamps in CCYYMMDDHHMMSS format. I want to list stats that show how many occurrences of CCYY, then MM, then DD. I am able to use strftime to segregate the data into the desired format as year, month, and day. My expected result output is:

Year   Year Count   Month     Month Count   Day          Day Count
2022   1000         2022-11   250           2022-11-27   20
2023   10           2022-12   100           2022-11-12   5
                                            2022-11-27   35

I used the below:

| stats count as total by year, month, day

But the actual output is not as expected:

Year   Year Count   Month     Month Count   Day          Day Count
2022   20           2022-11   20            2022-11-27   20
2022   5            2022-12   5             2022-11-12   5
2022   35           2022-27   35            2022-11-27   35

Should be simple enough, just not for me. Please help. Thanks!
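The actual output above is what `stats count by year, month, day` is defined to do: it produces one row per distinct (year, month, day) combination, so every count ends up at day granularity. The expected output needs three independent aggregations, one per granularity. A Python sketch of that independent counting, using hypothetical CCYYMMDDHHMMSS values:

```python
from collections import Counter

# Hypothetical raw timestamps in CCYYMMDDHHMMSS format.
stamps = [
    "20221127103000", "20221127113000", "20221112090000",
    "20221201080000", "20230101120000",
]

# Count each granularity independently, not by the combined key.
years  = Counter(s[:4] for s in stamps)                          # "2022"
months = Counter(s[:4] + "-" + s[4:6] for s in stamps)           # "2022-11"
days   = Counter(s[:4] + "-" + s[4:6] + "-" + s[6:8] for s in stamps)  # "2022-11-27"
```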
I have a json source with input via a Splunk Add-on for AWS input. Sometimes there's a timestamp-like field, sometimes not, so I chose to use a sourcetype with Timestamp handling set to "Current time" in the GUI which I think sets DATETIME_CONFIG=CURRENT in the sourcetype's props.conf entry.  I expected this to mean that's where the events would get their timestamp from, but log messages in splunkd.log (according to what I see in index=_internal) are still showing that DateParserVerbose is spitting out warnings that "A possible timestamp match ... is outside of the acceptable time window" occurring when events with no recognisable time are indexed. This also means that some events get seemingly random timestamps if some string gets misinterpreted; a handful of events generated yesterday had timestamps in November 2021. Was I wrong expecting a sourcetype's DATETIME_CONFIG worked that way? If not, what might be happening to stop my intended timestamping? If so, how else should I handle events with no timestamps? Thanks for any advice.
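For reference, the setting described would look like this in props.conf (the sourcetype name here is a stand-in):

```ini
[my_aws_json]
DATETIME_CONFIG = CURRENT
```

One possible reason such a setting appears not to take effect: timestamp settings in props.conf are applied where events are parsed (the indexer or a heavy forwarder), so a copy that exists only on a search head or universal forwarder would not stop DateParserVerbose from running on the parsing tier.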
I am working on a saved search, not an index/lookup. I tried this code:

| eval date=strftime(strptime(<fieldname>,"%Y-%m-%d %H:%M:%S"), "%m-%d-%Y %H:%M:%S")

but I am getting blank data. Please help.
Yes, indexer clustering. I set up 3 Windows 10 machines with Splunk Enterprise on them and got them to initially connect to the master indexer, but then got this error. They are on the same DNS and the firewall is turned off on all 3 machines. Thanks.
Hello, I have an array of timeline events:

Timeline: [
  {
    deltaToStart: 788
    startTime: 2023-02-01T21:56:11Z
    type: service1
  }
  {
    deltaToStart: 653
    startTime: 2023-02-01T21:56:11.135Z
    type: service2
  }
]

I would like to table the deltaToStart value only of type service1. Thanks.
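In SPL, selecting one element of a multivalue array usually involves spath plus mvexpand or mvzip; the selection logic itself is simple, as this Python sketch shows (the event dict is a stand-in for one parsed event):

```python
# Stand-in for one parsed event containing the Timeline array.
event = {
    "Timeline": [
        {"deltaToStart": 788, "startTime": "2023-02-01T21:56:11Z", "type": "service1"},
        {"deltaToStart": 653, "startTime": "2023-02-01T21:56:11.135Z", "type": "service2"},
    ]
}

# Keep deltaToStart only for entries whose type is "service1".
service1_deltas = [e["deltaToStart"] for e in event["Timeline"] if e["type"] == "service1"]
print(service1_deltas)  # [788]
```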
I have an index that is throwing up a warning, and the Root Cause says: "The newly created warm bucket size is too large. The bucket size=32630820864 exceeds the yellow_size_threshold=20971520000 from the latest_detected_index." This index was created just like all the other indexes, and it is the only one throwing the warning. There have been at least 6 months of data sent to this index, yet it is saying there are only 14 days of data. What could be the issue with this index?
I have a simple form that has a global search to set up the initial values of a time input. With that global search, I also set a token for a label on my form. I'd like to update that label when a new value is chosen from the time input, but I cannot get it to work. Here is a full simple example to show what I mean. If I change the time picker, I'd expect the label to be updated to reflect that change.

<form hideFilters="false">
  <search id="starttimesearch">
    <query>
      | makeresults
      | eval startHours=relative_time(now(), "@h-36h")
      | eval startTimeStr=strftime(startHours, "%B %d, %Y %H:%M")
    </query>
    <done>
      <set token="form.timeRange.earliest">$result.startHours$</set>
      <set token="form.timeRange.latest">now</set>
      <set token="time_label">Since $result.startTimeStr$</set>
    </done>
  </search>
  <fieldset submitButton="false" autoRun="true">
    <input type="time" token="timeRange" searchWhenChanged="true">
      <label>Time</label>
      <default> </default>
      <change>
        <set token="time_change_start">strftime($timeRange.earliest$, "%B %d/%Y %H:%M")</set>
        <set token="time_change_end">strftime($timeRange.latest$, "%B %d/%Y %H:%M")</set>
        <eval token="time_label">case($timeRange.latest$ == now(), "Since $time_change_start$", 1==1, "From $time_change_start$ to $time_change_end$")</eval>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <html>
        The time label is $time_label$
      </html>
    </panel>
  </row>
</form>
I am encountering the following error in the Gitlab Auditor TA when enabling an input. Does anyone know how to fix it?

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-gitlab-auditor/bin/ta_gitlab_auditor/aob_py3/urllib3/connectionpool.py", line 706, in urlopen
    chunked=chunked,
  File "/opt/splunk/etc/apps/TA-gitlab-auditor/bin/ta_gitlab_auditor/aob_py3/urllib3/connectionpool.py", line 382, in _make_request
    self._validate_conn(conn)
  File "/opt/splunk/etc/apps/TA-gitlab-auditor/bin/ta_gitlab_auditor/aob_py3/urllib3/connectionpool.py", line 1010, in _validate_conn
    conn.connect()
  File "/opt/splunk/etc/apps/TA-gitlab-auditor/bin/ta_gitlab_auditor/aob_py3/urllib3/connection.py", line 421, in connect
    tls_in_tls=tls_in_tls,
  File "/opt/splunk/etc/apps/TA-gitlab-auditor/bin/ta_gitlab_auditor/aob_py3/urllib3/util/ssl_.py", line 450, in ssl_wrap_socket
    sock, context, tls_in_tls, server_hostname=server_hostname
  File "/opt/splunk/etc/apps/TA-gitlab-auditor/bin/ta_gitlab_auditor/aob_py3/urllib3/util/ssl_.py", line 493, in _ssl_wrap_socket_impl
    return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
  File "/opt/splunk/lib/python3.7/ssl.py", line 428, in wrap_socket
    session=session
  File "/opt/splunk/lib/python3.7/ssl.py", line 878, in _create
    self.do_handshake()
  File "/opt/splunk/lib/python3.7/ssl.py", line 1147, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLError: [SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:1106)
Hi all, my Splunk Cloud version is 9.0.2208.4, and my account role is already sc_admin. I have around 200 alerts on the alerts page. Is there a way to export all 200 alerts from the alerts page with just one click? I am very new to Splunk; any help is appreciated! Thanks!
I want to compare two indexes, index1 and index2, and print the values from index1 that do not exist in index2. For example:

index1    index2
field1    field2
1         1
2         3
3         4

Output: 2
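This comparison reduces to a set difference (in SPL it is often done by collecting both fields with stats and filtering, but the underlying logic is simpler to see in Python). A sketch using the example values from the post:

```python
# Stand-in values for field1 (from index1) and field2 (from index2).
field1 = ["1", "2", "3"]
field2 = ["1", "3", "4"]

# Values present in index1 but never seen in index2.
only_in_index1 = sorted(set(field1) - set(field2))
print(only_in_index1)  # ['2']
```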
My legacy TA-nmon is no longer working with the Red Hat RHEL 8 OS. I am looking for advice/procedures on converting to TA-nmon Metricator.
Hi, I have a lookup table that contains a list of sessions with permitted time frames (start day & time / end day & time). I am looking for a way to run a scheduled search to remove any expired entries from the lookup table (e.g. sessions with end days / times that have passed). Can multiple entries be removed from a lookup table via a search? I know I can append to a lookup table but not sure about deletion.   Thanks!
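Multiple rows can indeed be removed in one scheduled search by reading the lookup, filtering out expired rows, and writing the result back (the `| inputlookup ... | where ... | outputlookup ...` pattern). The filtering step itself, sketched in Python with a hypothetical epoch-seconds column named end_epoch:

```python
import time

# Stand-in rows for the lookup table; "session" and "end_epoch" are
# hypothetical column names for illustration.
now = time.time()
rows = [
    {"session": "a", "end_epoch": now - 3600},  # expired an hour ago
    {"session": "b", "end_epoch": now + 3600},  # still within its window
]

# Keep only rows whose permitted window has not yet ended; writing this
# filtered list back over the lookup drops all expired entries at once.
still_valid = [r for r in rows if r["end_epoch"] > now]
print([r["session"] for r in still_valid])  # ['b']
```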
Hello, The Subject pretty much says what I am looking for. I am new, 3 weeks in, to Dashboard Studio. One of the (many) functionalities(?) missing is the ability to show and hide visualizations. Has anyone figured out a workaround or band-aid in the JSON, or some other override? Thanks in advance and God bless, Genesius