All Topics

I want to pass the input token to my base search. The panel shows "No results found", but when I click "Open in Search" I can find the results. What's the issue here?

<fieldset submitButton="false">
  <input type="dropdown" token="Month">
    <label>Month</label>
    <fieldForLabel>date</fieldForLabel>
    <fieldForValue>date</fieldForValue>
    <search>
      <query>| inputlookup test.csv | table date</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </search>
    <search id="bs1">
      <query>index=xyz source=*.csv | eval Date="$Month$" | eval Date1=date_year + date_month | where Date1=Date | lookup test.csv date as Date OUTPUT source as Source | where source=Source</query>
      <earliest>0</earliest>
      <latest></latest>
    </search>
  </input>
</fieldset>
<panel>
  <single>
    <title>PRODUCTION</title>
    <search base="bs1">
      <query>| search me | stats sum(me) as i</query>
    </search>
    <option name="drilldown">none</option>
    <option name="refresh.display">progressbar</option>
  </single>
</panel>
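A plausible explanation, for what it's worth: the <search id="bs1"> element is nested inside the <input> element rather than declared at dashboard level, and a non-transforming base search only hands post-process searches the fields it explicitly returns, so "| search me" may never see a "me" field even though "Open in Search" re-runs the whole pipeline and extracts it. A minimal corrected sketch under those assumptions (field names taken from the original XML):

<form>
  <search id="bs1">
    <query>index=xyz source=*.csv | eval Date="$Month$" | eval Date1=date_year + date_month | where Date1=Date | lookup test.csv date as Date OUTPUT source as Source | where source=Source | fields me source Source</query>
    <earliest>0</earliest>
    <latest>now</latest>
  </search>
  <fieldset submitButton="false">
    <input type="dropdown" token="Month">
      <label>Month</label>
      <fieldForLabel>date</fieldForLabel>
      <fieldForValue>date</fieldForValue>
      <search>
        <query>| inputlookup test.csv | table date</query>
      </search>
    </input>
  </fieldset>
  <row>
    <panel>
      <single>
        <title>PRODUCTION</title>
        <search base="bs1">
          <query>search me | stats sum(me) as i</query>
        </search>
        <option name="drilldown">none</option>
      </single>
    </panel>
  </row>
</form>

Moving the base search to the top level also lets it re-dispatch when the $Month$ token changes; the added "| fields" clause is what makes "me" available to the post-process search.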
Hi folks, how can I search data on my ES SH from the SHC (Splunk Cloud)? Is there a way to do so? I'm trying to use the | rest /servicesNS/-/-/saved/searches query from my SHC to search the saved searches on my ES SH, but I was unable to do so; it seems there is no way of dispatching REST to the ES SH. What if I create a summary index with the output of | rest /servicesNS/-/-/saved/searches on the ES SH? Will I be able to search that data from my SHC? I appreciate your help.
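The summary-index route is a sketch worth considering: schedule something like this on the ES SH (the index name here is a placeholder you would create), then search that index from the SHC, assuming both environments can search it:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| table title eai:acl.app search cron_schedule disabled
| collect index=es_savedsearches

Since | rest returns a point-in-time snapshot, a scheduled run (e.g. daily) keeps the summary current; on the consuming side you can keep only the latest snapshot by filtering on the most recent _time.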
Hi, checking to see if anyone uses Splunk to monitor their Proofpoint message queues. If so, how are you doing this with the product? Thanks
Hi, I have two questions here:
1. In the drilldown search I have dest=$dest$ and it is not working; when I click on the contributing link it reflects the same.
2. When I click on the drilldown search it takes me to the search window with the time range "last 30 minutes", but what I expect is the custom time range from when the event was triggered. I kept the offset values at default.
Please let me know. Thanks
I just went through this, so I'm posting here because I could not find the commands to fix it and had to open a ticket with support. The root cause was that one of my servers had an expired server.pem certificate, which prevented the KV store upgrade from working during the Splunk upgrade. This was on a Windows heavy forwarder. Those are standalone, and all the docs give commands for a SH cluster. So here are the standalone commands, hoping this helps someone; it took me a while to get them out of support.

./splunk migrate kvstore-storage-engine --target-engine wiredTiger
./splunk migrate migrate-kvstore
./splunk show kvstore-status --verbose

Thanks, Lee.
We need to index log files from our monitored devices, which are partitioned into two segments. The first segment is CSV; the last segment is events.

col1,col2,col3,...,colN
col1,col2,col3,...,colN
col1,col2,col3,...,colN
.
.
.
event1
event2
event3
...
eventN

This data needs to be sent to one index with two sourcetypes: sourcetype_csv for the first segment of the log file and sourcetype_events for the last segment. How do we structure inputs.conf, props.conf, and transforms.conf for this? We were thinking we could leverage filtering to take advantage of the fact that the events are all prefixed and suffixed with "***". However, there does not seem to be a way to have one log file partitioned into more than one sourcetype.
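One possible structure, assuming the event lines really do start with "***" and each line breaks into its own event: set the default sourcetype in inputs.conf and rewrite it at parse time for the event lines. A sketch, with paths and names as placeholders:

inputs.conf (forwarder):

[monitor:///path/to/device_logs]
index = device_logs
sourcetype = sourcetype_csv

props.conf (indexer or heavy forwarder):

[sourcetype_csv]
TRANSFORMS-set_event_st = set_event_sourcetype

transforms.conf:

[set_event_sourcetype]
REGEX = ^\*\*\*
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::sourcetype_events

The caveat is that the transform runs per event, so the CSV and event segments must break into separate events (LINE_BREAKER/SHOULD_LINEMERGE tuned accordingly), and index-time props of the rewritten sourcetype are not applied a second time.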
Hello everyone, I am ingesting data from Azure Event Hubs into Splunk using the Splunk Add-on for Microsoft Cloud Services. Now I am looking for an app I can use to visualize these events. Can anyone please suggest any pre-built dashboards for Event Hub data? Thanks
Hello, I have an incoming TCP stream of XML Call Data Records (CDRs); an example is enclosed at the end. The CDR contains information about the caller and destination phone. There are several SEDCMDs in props.conf that take lines like <party type="orig"... and convert them into <orig_party... The problem is with lines whose only differentiation is their position in the data structure. For the XML lines that start with <RTCPstats> I need to modify their fields: I need the first line to be ingested as "<orig_PS>31367</orig_PS> <orig_OS>6273400</orig_OS>..." and the second line as "<term_PS>31366</term_PS> <term_OS>6273200</term_OS>...".

<flowinfo>
    <RTCPstats>PS=31367, OS=6273400,..>
</flowinfo>
<flowinfo>
    <RTCPstats>PS=31366, OS=6273200,...>
</flowinfo>

The actual sed command "sed -r -e '0,/ ?([A-Z_]*)=([0-9]*)/s//<ORIG_\1>\2<\/ORIG_\1>/g'" will do this, but the same entry as a SEDCMD will not. Altering a line of input is easy; altering only the FIRST instance in a record with embedded newlines is not. What are my options?

- SEDCMD in props.conf?
- Strip out all newlines so the SEDCMD treats it all as one line? (can't really do that with sed...)
- A regex in transforms.conf?
- Pass the input through a script that CAN do this?

Some details: the line breaker is two carriage returns (literally \r\r, or 0d0d). There are embedded newlines in each record, so any SEDCMD will be applied to each line, not the entire record all at once. I get 60,000 records per minute, so this transformation needs to be fast.

Sample record (spaced out neatly):

<e>
  <st>Perimeta XML CDR</st>
  <h>the perimeta hostname</h>
  <t>1664838107186</t>
  <tid>2814754955435820</tid>
  <sid>2082</sid>
  <eid>CDR</eid>
  <call starttime="1664837476918" starttime_local="2022-10-03T15:51:16-0700" endtime="1664838107179" endtime_local="2022-10-03T16:01:47-0700" duration="630261" release_side="term" bcid="a big string of numbers and letters">
    <party type="orig" phone="caller phonenumber" domain="ipaddr1" sig_address="ipaddr1" sig_port="5060" sig_transport="udp" trunk_group="trunkgroupname" trunk_context="nap" sip_call_id="anumber@adestination"/>
    <party type="term" phone="destination phonenumber" domain="0.0.0.0" routing_number="a routing number" sig_address="<an ip addr>" sig_port="5060" sig_transport="udp" trunk_group="6444" trunk_context="itg" edit_trunk_group="" edit_trunk_context="" sip_call_id="adifferentnumber@adifferentdestination"/>
    <adjacency type="orig" name="orig_adjacency_system" account="" vpnid="0X00000001" mediarealm="CoreMedia1"/>
    <adjacency type="term" name="dest_adjacency_system" account="" vpnid="0X00000004" mediarealm="CoreMedia1"/>
    <category name="cat.sbc.redirected"/>
    <connect time="1664837483144" time_local="2022-10-03T15:51:23-0700"/>
    <firstendrequest time="1664838107158" time_local="2022-10-03T16:01:47-0700"/>
    <disconnect time="1664838107179" time_local="2022-10-03T16:01:47-0700" reason="0"/>
    <redirector bcid="another string of letters and numbers" editphone="a phone number"/>
    <post_dial_delay duration="2895"/>
    <QoS stream_id="1" instance="0" reservetime="1664837476918" reservetime_local="2022-10-03T15:51:16-0700" committime="1664837483144" committime_local="2022-10-03T15:51:23-0700" releasetime="1664838107184" releasetime_local="2022-10-03T16:01:47-0700">
      <gate>
        <flowinfo>
          <local address="an ip address" port="63130"/>
          <remote address="another ip address" port="36214"/>
          <sd>m=audio 0 RTP/AVP 0 a=rtpmap:0 PCMU/8000 a=ptime:20 </sd>
          <RTCPstats>PS=31367, OS=6273400, PR=31366, OR=6273200, PD=0, OD=0, PL=0, JI=0, TOS=0, TOR=0, LA=0, PC/RPS=31165, PC/ROS=4986400, PC/RPR=31367, PC/RPL=0, PC/RJI=0, PC/RLA=0, RF=91, MOS=43, PC/RAJ=0, PC/RML=0</RTCPstats>
        </flowinfo>
        <flowinfo>
          <local address="an ip address" port="19648"/>
          <remote address="a different ip address" port="26046"/>
          <sd>m=audio 0 RTP/AVP 0 a=rtpmap:0 PCMU/8000 a=ptime:20 </sd>
          <RTCPstats>PS=31366, OS=6273200, PR=31367, OR=6273400, PD=0, OD=0, PL=0, JI=0, TOS=0, TOR=0, LA=0, PC/RPS=0, PC/ROS=0, PC/RPR=0, PC/RPL=0, PC/RJI=0, PC/RLA=0, RF=82, MOS=41, PC/RAJ=0, PC/RML=0</RTCPstats>
        </flowinfo>
      </gate>
    </QoS>
  </call>
</e>
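One option to consider, sketched here with placeholder stanza names and a truncated field list: a transforms.conf rewrite of _raw is applied to the whole event (not line by line like SEDCMD), and a regex anchored at the start with a non-greedy match can only ever hit the first <RTCPstats> block. At 60,000 records/minute you would want to benchmark this, and LOOKAHEAD must cover the full record:

props.conf:

[your_cdr_sourcetype]
TRANSFORMS-first_rtcp = rewrite_first_rtcpstats

transforms.conf:

[rewrite_first_rtcpstats]
# (?s) lets "." cross the embedded newlines; "^(.*?<RTCPstats>)" anchors on the FIRST occurrence only
REGEX = (?s)^(.*?<RTCPstats>)PS=(\d+), OS=(\d+)(.*)$
FORMAT = $1<orig_PS>$2</orig_PS> <orig_OS>$3</orig_OS>$4
DEST_KEY = _raw
LOOKAHEAD = 65536

A second stanza using a greedy ^(.*<RTCPstats>) would analogously target the last occurrence for the term_ fields. Each stanza would need one capture group per key you want to rewrite.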
Hello, while sending alerts to xMatters I am encountering the errors below:

ERROR [xmatters.alert_action.main] [xmatters] [<module>] [62654] <urlopen error [Errno 97] Address family not supported by protocol>
Traceback (most recent call last):
  File "/splunk/lib/python3.7/urllib/request.py", line 1350, in do_open
    encode_chunked=req.has_header('Transfer-encoding'))
  File "/splunk/lib/python3.7/http/client.py", line 1281, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/splunk/lib/python3.7/http/client.py", line 1327, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/splunk/lib/python3.7/http/client.py", line 1276, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/splunk/lib/python3.7/http/client.py", line 1036, in _send_output
    self.send(msg)
  File "/splunk/lib/python3.7/http/client.py", line 976, in send
    self.connect()
  File "/splunk/lib/python3.7/http/client.py", line 1443, in connect
    super().connect()
  File "/splunk/lib/python3.7/http/client.py", line 948, in connect
    (self.host,self.port), self.timeout, self.source_address)
  File "/splunk/lib/python3.7/socket.py", line 728, in create_connection
    raise err
  File "/splunk/lib/python3.7/socket.py", line 711, in create_connection
    sock = socket(af, socktype, proto)
  File "/splunk/lib/python3.7/socket.py", line 151, in __init__
    _socket.socket.__init__(self, family, type, proto, fileno)
OSError: [Errno 97] Address family not supported by protocol

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/splunk/etc/apps/xmatters_alert_action/bin/xmatters.py", line 147, in <module>
    REQUEST_ID = XM_ALERT.execute()
  File "/splunk/etc/apps/xmatters_alert_action/bin/xmatters.py", line 119, in execute
    request_id = xm_client.send_event(self.endpoint_url, xm_event)
  File "/splunk/etc/apps/xmatters_alert_action/lib/xmatters_sdk/xm_client.py", line 62, in send_event
    force_https=True
  File "/splunk/etc/apps/xmatters_alert_action/lib/common_utils/rest.py", line 159, in post
    return self._send_request(req, headers)
  File "/splunk/etc/apps/xmatters_alert_action/lib/common_utils/rest.py", line 99, in _send_request
    res = urlopen(req)
  File "/splunk/lib/python3.7/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/splunk/lib/python3.7/urllib/request.py", line 525, in open
    response = self._open(req, data)
  File "/splunk/lib/python3.7/urllib/request.py", line 543, in _open
    '_open', req)
  File "/splunk/lib/python3.7/urllib/request.py", line 503, in _call_chain
    result = func(*args)
  File "/splunk/lib/python3.7/urllib/request.py", line 1393, in https_open
    context=self._context, check_hostname=self._check_hostname)
  File "/splunk/lib/python3.7/urllib/request.py", line 1352, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 97] Address family not supported by protocol>

Any clue where I might be wrong?
Hi, our system logs test runs as single events. In some cases we have a re-run of a test; both events are logically related but are separate for each run (the original run and the re-run). I wish to extract data from both events and present it together; I have tried several approaches but none worked so far.

Step 1: identify the re-run event and derive a unique identifier for the original run using some textual parsing on the workarea path:

index=my_index aa_data_source="my_info" is_rerun=True
| eval orig_workarea=workarea_path
| rex field=orig_workarea mode=sed "s@/rerun?$@@"

Step 2: now I would like to find and match the original run event for each of the results. I tried map:

| map search="search index=my_index aa_data_source=my_info workarea_path=$orig_workarea$" maxsearches=100000

This is probably wrong because it is resource-expensive, and after finding the original event per result I could only use the data of the original event (the result of map); I didn't find a way to combine it with the re-run event data I searched on. I also tried subsearches in various ways; the main problem is that the subsearch cannot use the orig_workarea I extract from the primary search, because it runs first.

Step 3 would be to present the results from both events together, i.e. take field_from_eventA and field_from_eventB and place them in the same row (note that renaming might be required, since both events have the same fields).

I'm kind of at a dead end here and could use ideas on how to implement this search. Any ideas are welcome.

Thanks, Noam
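As a sketch of an alternative that avoids map entirely: pull both runs in one search, normalize the workarea path into a shared key, and let stats stitch each pair into one row. Field names like status are placeholders for whatever you actually need from each event:

index=my_index aa_data_source="my_info"
| eval run_key=workarea_path
| rex field=run_key mode=sed "s@/rerun?$@@"
| eval run_type=if(is_rerun="True", "rerun", "orig")
| stats latest(eval(if(run_type="orig", status, null()))) as orig_status
        latest(eval(if(run_type="rerun", status, null()))) as rerun_status
        by run_key

Because both events reduce to the same run_key, the original run and the re-run land in the same row under distinct column names, so the subsearch ordering problem never arises.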
Hi all, due to a UTF-16/UTF-8 mismatch I find a lot of UTF-16 \xnn chars in my events; this makes the JSON parser kind of lose it. So I want to get the right UTF-8 chars out of a dictionary JSON table by doing:

f=replace(_raw,"\\\\x([0-9a-fA-F]{2})",json_extract(utfx,"{}.\1"))

The dictionary simply looks like [{"00":"utf8char-1"}, ..., {"AE":"é"},...]. But this doesn't seem to work; the event even gets nulled completely. Something explicit like this does seem to work, though (here, for instance, all UTF-16 \xAE chars get replaced by the "é" char):

f=replace(_raw,"\\\\x([0-9a-fA-F]{2})",json_extract(utfx,"{}.9E"))

or this, which simply removes the "\x":

f=replace(_raw,"\\\\x([0-9a-fA-F]{2})","\1")

So is it that the capture groups of the regex in replace() are not evaluated when the replacement is an argument to another function instead of a plain string? Thanks.
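That is the likely explanation: eval evaluates function arguments before replace() runs, so json_extract(utfx, "{}.\1") is computed once with a literal "\1" as the key (no such key, hence null, and a null replacement nulls the whole result). Back-references are only substituted when the replacement argument is a plain string. A workaround sketch is one replace() per code point you expect, guarded with coalesce() so a missing key doesn't null the event:

| eval f=_raw
| eval f=replace(f, "\\\\xAE", coalesce(json_extract(utfx, "{}.AE"), "\\xAE"))
| eval f=replace(f, "\\\\x9C", coalesce(json_extract(utfx, "{}.9C"), "\\x9C"))

Tedious, but each replacement is a fully evaluated string before replace() sees it. If the set of codes is large, fixing the encoding at ingest instead (e.g. CHARSET in props.conf) may be the cleaner path.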
I have a case where some indexers take 4 to 5 hours to join the cluster. The system shows little to no resource usage (CPU, memory, I/O), and splunkd.log appears to loop through the same log entries multiple times. The indexer then continues loading when I see this log entry: Running job=BundleForcefulStateMachineResetJob. After this reset job runs, I quickly see the public key for the master loaded, and the indexer joins the cluster shortly thereafter. Here is a snippet of the log:

10-13-2022 11:22:02.293 -0700 WARN HttpListener - Socket error from 127.0.0.1:54240 while accessing /servicesNS/splunk-system-user/splunk_archiver/search/jobs: Broken pipe
10-13-2022 11:44:08.721 -0700 INFO PipelineComponent - MetricsManager:probeandreport() took longer than seems reasonable (1103256 milliseconds) in callbackRunnerThread. Might indicate hardware or splunk limitations.
10-13-2022 11:44:24.950 -0700 INFO PipelineComponent - CallbackRunnerThread is unusually busy, this may cause service delays: time_ms=1119484 new=0 null=0 total=56 {'name':'DistributedRestCallerCallback','valid':'1','null':'0','last':'3','time_ms':'0'},{'name':'HTTPAuthManager:timeoutCallback','valid':'1','null':'0','last':'1','time_ms':'0'},{'name':'IndexProcessor:ipCallback-0','valid':'1','null':'0','last':'6','time_ms':'4000'},{'name':'IndexProcessor:ipCallback-1','valid':'1','null':'0','last':'19','time_ms':'4000'},{'name':'IndexProcessor:ipCallback-2','valid':'1','null':'0','last':'30','time_ms':'4062'},{'name':'IndexProcessor:ipCallback-3','valid':'1','null':'0','last':'41','time_ms':'4164'},{'name':'MetricsManager:probeandreport','valid':'1','null':'0','last':'0','time_ms':'1103256'},{'name':'PullBasedPubSubSvr:timerCallback','valid':'1','null':'0','last':'2','time_ms':'0'},{'name':'ThreadedOutputProcessor:timerCallback','valid':'4','null':'0','last':'40','time_ms':'0'},{'name':'triggerCollection','valid':'44','null':'0','last':'55','time_ms':'0'}
10-13-2022 12:00:00.001 -0700 INFO ExecProcessor - setting reschedule_ms=3599999, for command=/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/splunk_instrumentation/bin/instrumentation.py
10-13-2022 12:18:32.106 -0700 WARN DispatchReaper - Failed to read search info for id=1665688686.28
10-13-2022 12:19:02.105 -0700 WARN DispatchReaper - Failed to read search info for id=1665688686.28
10-13-2022 12:19:32.106 -0700 WARN DispatchReaper - Failed to read search info for id=1665688686.28
10-13-2022 12:20:02.105 -0700 WARN DispatchReaper - Failed to read search info for id=1665688686.28
10-13-2022 12:20:30.137 -0700 WARN HttpListener - Socket error from 127.0.0.1:54544 while accessing /servicesNS/splunk-system-user/splunk_archiver/search/jobs: Broken pipe
10-13-2022 12:29:09.955 -0700 INFO PipelineComponent - MetricsManager:probeandreport() took longer than seems reasonable (2182584 milliseconds) in callbackRunnerThread. Might indicate hardware or splunk limitations.
10-13-2022 12:29:25.957 -0700 INFO PipelineComponent - CallbackRunnerThread is unusually busy, this may cause service delays: time_ms=2198585 new=1 null=0 total=57 {'name':'DistributedRestCallerCallback','valid':'1','null':'0','last':'3','time_ms':'0'},{'name':'HTTPAuthManager:timeoutCallback','valid':'1','null':'0','last':'1','time_ms':'0'},{'name':'IndexProcessor:ipCallback-0','valid':'1','null':'0','last':'6','time_ms':'4000'},{'name':'IndexProcessor:ipCallback-1','valid':'1','null':'0','last':'19','time_ms':'4000'},{'name':'IndexProcessor:ipCallback-2','valid':'1','null':'0','last':'30','time_ms':'4000'},{'name':'IndexProcessor:ipCallback-3','valid':'1','null':'0','last':'41','time_ms':'4000'},{'name':'MetricsManager:probeandreport','valid':'1','null':'0','last':'0','time_ms':'2182584'},{'name':'PullBasedPubSubSvr:timerCallback','valid':'1','null':'0','last':'2','time_ms':'0'},{'name':'ThreadedOutputProcessor:timerCallback','valid':'4','null':'0','last':'40','time_ms':'0'},{'name':'triggerCollection','valid':'45','null':'0','last':'56','time_ms':'0'}
10-13-2022 12:46:13.298 -0700 INFO PipelineComponent - MetricsManager:probeandreport() took longer than seems reasonable (496854 milliseconds) in callbackRunnerThread. Might indicate hardware or splunk limitations.
10-13-2022 12:46:13.867 -0700 WARN HttpListener - Socket error from 127.0.0.1:54220 while accessing /services/data/indexes: Broken pipe
10-13-2022 12:46:13.907 -0700 WARN HttpListener - Socket error from 127.0.0.1:54254 while accessing /services/data/indexes: Broken pipe
10-13-2022 12:46:13.931 -0700 WARN HttpListener - Socket error from 127.0.0.1:54560 while accessing /services/data/indexes: Broken pipe
10-13-2022 12:46:13.955 -0700 WARN HttpListener - Socket error from 127.0.0.1:54538 while accessing /services/data/indexes: Broken pipe
10-13-2022 12:46:17.070 -0700 INFO BundleJob - Running job=BundleForcefulStateMachineResetJob
Dear all, please help with a recommendation: when I export results to CSV, the Work ID field loses its leading zeros when the CSV file is opened (as in the screen capture below). Is it possible to change the format before exporting to CSV?

Best regards, CR
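For what it's worth, the exported CSV itself normally still contains the leading zeros; it is Excel that strips them when opening the file. If the export must survive Excel, one sketch (work_id is an assumed field name based on your description) is to wrap the value in Excel's ="..." text syntax before exporting:

... | eval work_id = "=\"" . work_id . "\""

Excel then reads the cell as a text formula and keeps the zeros; the trade-off is that non-Excel consumers will see the raw ="0123" wrapper. Alternatively, import the CSV into Excel with the column type explicitly set to Text.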
Hi, I'm working on a new use case but am stuck on a few things. I want to create use case logic that monitors whenever a user/IP tries to log in from a non-authorized country. For example, a user is supposed to log in from Berlin, but he or she logs in from Chicago. My questions:
1. Is it possible to implement such a use case from the Splunk end?
2. If yes, what kind of logs do we need to monitor such activity? Are FW logs enough?
3. What would the query be?
Thanks
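To your three questions: yes, this is a feasible Splunk use case; any authentication or VPN/firewall log that carries the source IP and username will do (firewall logs alone work only if they tie the IP to a user). A rough sketch, where the index, sourcetype, and the authorized_locations lookup (mapping user to allowed_country) are placeholders you would need to build:

index=auth sourcetype=vpn action=success
| iplocation src_ip
| lookup authorized_locations user OUTPUT allowed_country
| where isnotnull(Country) AND Country!=allowed_country
| table _time user src_ip Country allowed_country

iplocation resolves the source IP to a Country field using Splunk's built-in GeoIP database; the where clause then flags logins outside each user's authorized country.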
Hi, I am monitoring the HTTP response code for a bunch of internal URLs, and it works as long as the sites are responding. But if the host is not responding, I get nothing but an error in the Windows application event log:

Get \"http://osi3160.de-prod.dk:8080/ping\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)", "monitorType": "http", "url": "http://osi3160.de-prod.dk:8080/ping"}
Get \"http://osi3160.de-prod.dk:8080/ping\": context deadline exceeded"}

My agent_config.yaml looks like this:

smartagent/http_api-gateway_1:
  type: http
  host: osi3160.de-prod.dk
  port: 8080
  path: /ping
  regex: <body>OK<\/body>
  httpTimeout: 20s
  intervalSeconds: 60

Any ideas?
How can I use this (sed -i 's/"//g' $LOOKUP_FILE) in a script? Can anyone help?

Thanks, Lateef
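A minimal wrapper sketch, assuming a bash environment and a hypothetical lookup path; note that sed -i edits the file in place, and hand-editing lookups on disk is only safe for plain file-based .csv lookups (not KV store collections):

#!/bin/bash
# Strip all double quotes from a lookup file; the path is a placeholder.
LOOKUP_FILE="/opt/splunk/etc/apps/search/lookups/mylookup.csv"
sed -i 's/"//g' "$LOOKUP_FILE"

Save it as e.g. fix_lookup.sh, make it executable with chmod +x fix_lookup.sh, and run it manually or from cron.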
Hi all, when running a search, the following error appears in the Job Inspector. Users get this message intermittently on searches, and no results are returned.

10-18-2022 11:00:22.349 ERROR DispatchThread [3247729 phase_1] - code=10 error=""
10-18-2022 11:00:22.349 ERROR ResultsCollationProcessor [3247729 phase_1] - SearchMessage orig_component= sid=1666090813.341131_7E89B3C6-34D5-44DA-B19C-E6A755245D39 message_key=DISPATCHCOMM:PEER_PIPE_EXCEPTION__%s message=Search results might be incomplete: the search process on the peer:pldc1splindex1 ended prematurely. Check the peer log, such as $SPLUNK_HOME/var/log/splunk/splunkd.log and as well as the search.log for the particular search.

messages.conf shows:

[DISPATCHCOMM:PEER_PIPE_EXCEPTION__S]
message = Search results might be incomplete: the search process on the local peer:%s ended prematurely.
action = Check the local peer log, such as $SPLUNK_HOME/var/log/splunk/splunkd.log and as well as the search.log for the particular search.
severity = warn

I also have Splunk alerts showing false positives: the alert search returns no results, but sourcetype=scheduler shows the emails being sent as successes. Is this related? What does PEER_PIPE_EXCEPTION__S mean?

Splunk Enterprise on-prem version 9.0.1 in a distributed environment. Thanks
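For the peer-side check the message asks for, a sketch along these lines (peer hostname taken from your error) pulls the peer's splunkd.log entries without shell access; search.log itself lives in the search's dispatch directory and is reachable via the Job Inspector link, not via _internal:

index=_internal host=pldc1splindex1 source=*splunkd.log* log_level IN (ERROR, WARN)
| sort - _time

Narrowing the time range to the minute around 10-18-2022 11:00:22 should show whether the peer's search process crashed (e.g. out of memory) or was reaped.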
Hi all. Is there an easy and fast way to disable, entirely or by some filter, the warning banners I sometimes get in the SPL search page? The same warnings are available in the job details, so I do not want them displayed as banners on the page. How? Thanks.
We are using the Splunk Add-on for Microsoft Cloud Services to index the Azure Event Hub input type. What field can be used as a unique key?