All Topics

Hi,

Our system logs test runs as single events. In some cases a test is re-run; the original run and the re-run are logically related but are logged as separate events. I want to extract data from both events and present it together, but none of the approaches I have tried so far has worked.

Step 1: identify the re-run event and derive a unique identifier for the original run by textually parsing the workarea path:

index=my_index aa_data_source="my_info" is_rerun=True | eval orig_workarea=workarea_path | rex field=orig_workarea mode=sed "s@/rerun?$@@"

Step 2: find and match the original run event for each of the results. I tried map:

| map search="search index=my_index aa_data_source=my_info workarea_path=$orig_workarea$ " maxsearches=100000

This is probably wrong because it is resource-expensive, and once I found the original event per result I could only use the data of the original event (the result of map); I didn't find a way to combine it with the re-run event data I searched on.

I also tried a subsearch in various ways. The main problem is that the subsearch cannot use the orig_workarea value I extract from the primary search, because the subsearch runs first.

Step 3 would be to present the results from both events together, i.e. take field_from_eventA and field_from_eventB and place them in the same row (note that renaming may be required, since both events have the same field names).

I'm kind of at a dead end here and could use ideas on how to implement this search. Any ideas are welcome.

Thanks,
Noam

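For steps 2 and 3 together, one pattern that avoids map and subsearches entirely: search both events at once, normalize the workarea path into a shared key, and let stats fold each original/re-run pair into one row. A minimal sketch, reusing the names from the post; field_of_interest is a hypothetical stand-in for whatever needs carrying over from each event, and the handling of the is_rerun value (string "True", possibly absent on originals) is an assumption:

index=my_index aa_data_source="my_info"
| eval join_key=workarea_path
| rex field=join_key mode=sed "s@/rerun?$@@"
| eval field_from_rerun=if(coalesce(is_rerun,"False")=="True", field_of_interest, null())
| eval field_from_original=if(coalesce(is_rerun,"False")!="True", field_of_interest, null())
| stats values(field_from_original) AS field_from_original values(field_from_rerun) AS field_from_rerun BY join_key

Because both runs reduce to the same join_key, stats collapses each pair into a single row with the fields from both events side by side, which also takes care of the renaming in step 3.
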
Hi all, due to a UTF-16/UTF-8 mismatch I find a lot of UTF-16 \xnn chars in my events, and this makes the JSON parser kind of lose it. So I want to get the right UTF-8 chars out of a dictionary JSON table by doing:

f=replace(_raw,"\\\\x([0-9a-fA-F]{2})",json_extract(utfx,"{}.\1"))

The dictionary simply looks like [{"00":"utf8char-1"}, ..., {"AE":"é"},...]. But this doesn't seem to work; the event even gets nulled completely. Something explicit like this does seem to work though (here, for instance, all UTF-16 \xAE chars get replaced by the "é" char):

f=replace(_raw,"\\\\x([0-9a-fA-F]{2})",json_extract(utfx,"{}.9E"))

or this, which simply removes the "\x":

f=replace(_raw,"\\\\x([0-9a-fA-F]{2})","\1")

So is it that the capture groups of the regex in replace() are not evaluated when the replacement is the result of another function instead of a plain string? Thanks.

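If it helps: in eval, arguments are evaluated before the outer function is called, so json_extract(utfx,"{}.\1") runs exactly once, with the literal two characters \1 in its path (almost certainly returning null, which would also explain the nulled event). \1 is only substituted when it appears directly in the replacement string itself. A workaround sketch under that assumption, with one fixed-key replace per dictionary entry (field names as in the post):

| eval f=_raw
| eval f=replace(f, "\\\\x00", json_extract(utfx, "{}.00"))
| eval f=replace(f, "\\\\xAE", json_extract(utfx, "{}.AE"))

Each call resolves its json_extract lookup against a fixed key before the replacement happens; the cost is one eval step per dictionary entry.
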
I have a case where some indexers take 4 to 5 hours to join the cluster. The system shows little or no resource usage (CPU, memory, I/O). The splunkd.log appears to loop through the same log entries multiple times. Then the indexer continues loading when I see the log entry:

Running job=BundleForcefulStateMachineResetJob

After this reset job has run, I quickly see the public key for the master loaded, and the indexer joins the cluster shortly thereafter. Here is a snippet of the log:

10-13-2022 11:22:02.293 -0700 WARN HttpListener - Socket error from 127.0.0.1:54240 while accessing /servicesNS/splunk-system-user/splunk_archiver/search/jobs: Broken pipe
10-13-2022 11:44:08.721 -0700 INFO PipelineComponent - MetricsManager:probeandreport() took longer than seems reasonable (1103256 milliseconds) in callbackRunnerThread. Might indicate hardware or splunk limitations.
10-13-2022 11:44:24.950 -0700 INFO PipelineComponent - CallbackRunnerThread is unusually busy, this may cause service delays: time_ms=1119484 new=0 null=0 total=56 {'name':'DistributedRestCallerCallback','valid':'1','null':'0','last':'3','time_ms':'0'},{'name':'HTTPAuthManager:timeoutCallback','valid':'1','null':'0','last':'1','time_ms':'0'},{'name':'IndexProcessor:ipCallback-0','valid':'1','null':'0','last':'6','time_ms':'4000'},{'name':'IndexProcessor:ipCallback-1','valid':'1','null':'0','last':'19','time_ms':'4000'},{'name':'IndexProcessor:ipCallback-2','valid':'1','null':'0','last':'30','time_ms':'4062'},{'name':'IndexProcessor:ipCallback-3','valid':'1','null':'0','last':'41','time_ms':'4164'},{'name':'MetricsManager:probeandreport','valid':'1','null':'0','last':'0','time_ms':'1103256'},{'name':'PullBasedPubSubSvr:timerCallback','valid':'1','null':'0','last':'2','time_ms':'0'},{'name':'ThreadedOutputProcessor:timerCallback','valid':'4','null':'0','last':'40','time_ms':'0'},{'name':'triggerCollection','valid':'44','null':'0','last':'55','time_ms':'0'}
10-13-2022 12:00:00.001 -0700 INFO ExecProcessor - setting reschedule_ms=3599999, for command=/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/splunk_instrumentation/bin/instrumentation.py
10-13-2022 12:18:32.106 -0700 WARN DispatchReaper - Failed to read search info for id=1665688686.28
10-13-2022 12:19:02.105 -0700 WARN DispatchReaper - Failed to read search info for id=1665688686.28
10-13-2022 12:19:32.106 -0700 WARN DispatchReaper - Failed to read search info for id=1665688686.28
10-13-2022 12:20:02.105 -0700 WARN DispatchReaper - Failed to read search info for id=1665688686.28
10-13-2022 12:20:30.137 -0700 WARN HttpListener - Socket error from 127.0.0.1:54544 while accessing /servicesNS/splunk-system-user/splunk_archiver/search/jobs: Broken pipe
10-13-2022 12:29:09.955 -0700 INFO PipelineComponent - MetricsManager:probeandreport() took longer than seems reasonable (2182584 milliseconds) in callbackRunnerThread. Might indicate hardware or splunk limitations.
10-13-2022 12:29:25.957 -0700 INFO PipelineComponent - CallbackRunnerThread is unusually busy, this may cause service delays: time_ms=2198585 new=1 null=0 total=57 {'name':'DistributedRestCallerCallback','valid':'1','null':'0','last':'3','time_ms':'0'},{'name':'HTTPAuthManager:timeoutCallback','valid':'1','null':'0','last':'1','time_ms':'0'},{'name':'IndexProcessor:ipCallback-0','valid':'1','null':'0','last':'6','time_ms':'4000'},{'name':'IndexProcessor:ipCallback-1','valid':'1','null':'0','last':'19','time_ms':'4000'},{'name':'IndexProcessor:ipCallback-2','valid':'1','null':'0','last':'30','time_ms':'4000'},{'name':'IndexProcessor:ipCallback-3','valid':'1','null':'0','last':'41','time_ms':'4000'},{'name':'MetricsManager:probeandreport','valid':'1','null':'0','last':'0','time_ms':'2182584'},{'name':'PullBasedPubSubSvr:timerCallback','valid':'1','null':'0','last':'2','time_ms':'0'},{'name':'ThreadedOutputProcessor:timerCallback','valid':'4','null':'0','last':'40','time_ms':'0'},{'name':'triggerCollection','valid':'45','null':'0','last':'56','time_ms':'0'}
10-13-2022 12:46:13.298 -0700 INFO PipelineComponent - MetricsManager:probeandreport() took longer than seems reasonable (496854 milliseconds) in callbackRunnerThread. Might indicate hardware or splunk limitations.
10-13-2022 12:46:13.867 -0700 WARN HttpListener - Socket error from 127.0.0.1:54220 while accessing /services/data/indexes: Broken pipe
10-13-2022 12:46:13.907 -0700 WARN HttpListener - Socket error from 127.0.0.1:54254 while accessing /services/data/indexes: Broken pipe
10-13-2022 12:46:13.931 -0700 WARN HttpListener - Socket error from 127.0.0.1:54560 while accessing /services/data/indexes: Broken pipe
10-13-2022 12:46:13.955 -0700 WARN HttpListener - Socket error from 127.0.0.1:54538 while accessing /services/data/indexes: Broken pipe
10-13-2022 12:46:17.070 -0700 INFO BundleJob - Running job=BundleForcefulStateMachineResetJob

Dear All,

Please advise: when I export results to CSV, values in the Work ID field that start with 0 lose the leading zero when the file is opened, as in the screen capture below. Is it possible to change the format before exporting to CSV?

Best Regards,
CR

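The zeros are usually dropped by the spreadsheet application interpreting the column as a number, not by Splunk; the CSV itself still contains them. One common workaround is to force the value to be treated as text before export. A minimal sketch, assuming the field is named work_id (hypothetical):

| eval work_id = "=\"" . work_id . "\""

Excel then reads each cell as a quoted text formula and preserves the leading zeros. If the file is consumed by anything other than Excel, leaving the CSV untouched and setting the column type in the consumer is the cleaner fix.
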
Hi, I'm working on a new use case but am stuck on a few things. I want to create use-case logic that monitors whenever a user/IP tries to log in from a non-authorized country. For example, a user is supposed to log in from Berlin, but he or she logs in from Chicago.

My questions:
1. Is it possible to implement such a use case from the Splunk end?
2. If yes, what kind of logs do we need to monitor such activity? Are FW logs enough?
3. What would the query be? (See the sketch below.)

Thanks

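Firewall or VPN logs that carry the source IP are typically enough, since the country can be derived at search time with iplocation. A minimal sketch, assuming a hypothetical index, field names, and a lookup authorized_countries.csv that maps user to authorized_country:

index=vpn_or_fw_logs action=success
| iplocation src_ip
| lookup authorized_countries.csv user OUTPUT authorized_country
| where Country!=authorized_country
| table _time user src_ip Country authorized_country

iplocation adds a Country field from Splunk's built-in GeoIP database, and the where clause keeps only logins from outside each user's authorized country.
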
Hi, I am monitoring the HTTP response code for a bunch of internal URLs, and it works as long as the sites are responding. But if the host is not responding, I get nothing but an error in the Windows application event log:

Get \"http://osi3160.de-prod.dk:8080/ping\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)", "monitorType": "http", "url": "http://osi3160.de-prod.dk:8080/ping"}
Get \"http://osi3160.de-prod.dk:8080/ping\": context deadline exceeded"}

My agent_config.yaml looks like this:

smartagent/http_api-gateway_1:
  type: http
  host: osi3160.de-prod.dk
  port: 8080
  path: /ping
  regex: <body>OK<\/body>
  httpTimeout: 20s
  intervalSeconds: 60

Any ideas?

How can I use this (sed -i 's/"//g' $LOOKUP_FILE) in a script? Can anyone help?

Thanks,
Lateef

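If the goal is to strip double quotes out of a lookup file, it can also be done natively in SPL, which avoids shelling out and keeps the change inside Splunk's permission model. A minimal sketch, assuming a hypothetical lookup name my_lookup.csv:

| inputlookup my_lookup.csv
| foreach * [ eval <<FIELD>> = replace('<<FIELD>>', "\"", "") ]
| outputlookup my_lookup.csv

foreach iterates over every field, replace() deletes the quote characters, and outputlookup writes the cleaned table back in place.
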
Hi All,

When running a search, the following error appears in the job inspector. Users get this message intermittently on searches, and no results are returned.

10-18-2022 11:00:22.349 ERROR DispatchThread [3247729 phase_1] - code=10 error=""
10-18-2022 11:00:22.349 ERROR ResultsCollationProcessor [3247729 phase_1] - SearchMessage orig_component= sid=1666090813.341131_7E89B3C6-34D5-44DA-B19C-E6A755245D39 message_key=DISPATCHCOMM:PEER_PIPE_EXCEPTION__%s message=Search results might be incomplete: the search process on the peer:pldc1splindex1 ended prematurely. Check the peer log, such as $SPLUNK_HOME/var/log/splunk/splunkd.log and as well as the search.log for the particular search.

messages.conf shows:

[DISPATCHCOMM:PEER_PIPE_EXCEPTION__S]
message = Search results might be incomplete: the search process on the local peer:%s ended prematurely.
action = Check the local peer log, such as $SPLUNK_HOME/var/log/splunk/splunkd.log and as well as the search.log for the particular search.
severity = warn

I also have Splunk alerts showing false positives: the alert search returns no results, but the scheduler (sourcetype=scheduler) reports success and sends out emails. Is this related? What does PEER_PIPE_EXCEPTION__S mean?

Splunk Enterprise on-prem version 9.0.1 in a distributed environment.

Thanks

Hi all. Is there an easy and fast way to disable, entirely or by some filter, the WARNING BANNERS I sometimes get in the SPL search page? The same warnings appear inside the job details, so I do not want them displayed as a banner on the page as well. How? Thanks.

We are using the Splunk Add-on for Microsoft Cloud Services to index an Azure Event Hub input. Which field can be used as a unique key?

Hi everyone, I am using Splunk UI (Splunk Design System) to develop a Splunk app (with ReactJS). I want to send email from my app, using Splunk's SMTP settings. The app lets users select the sender, recipients (To, CC, BCC) and body. So my questions are: Can we get the SMTP settings from Splunk? And how can I send the email from a ReactJS app; does the Splunk JS SDK support it? I know I could create a custom REST endpoint with a Python script to build a send-mail backend API, but that is very complex, and I want to find another way. Thanks!

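One route that stays inside what the platform already exposes: the SMTP settings live in alert_actions.conf and can be read over REST, and the sendemail search command can deliver the mail, so the app only needs to dispatch search jobs through the JS SDK. A minimal SPL sketch (the recipient, subject, and body are placeholders, and whether the logged-in user may run these depends on their capabilities):

| rest /services/configs/conf-alert_actions splunk_server=local
| search title=email
| table mailserver from

| makeresults
| sendemail to="someone@example.com" subject="Hello from the app" message="Body text" sendresults=false

The first search returns the configured mail server and default sender; the second sends a message through that server. Dispatching them from ReactJS is a standard search-job call in the Splunk JS SDK, so no custom Python endpoint is required.
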
Hi, I have a problem in the Stream app: in some flows the wrong source and dest IP are observed. For instance, I checked the original flow in Wireshark, and the original source IP and port were 192.168.1.1:56271 and the dest IP and port were 192.168.1.2:80, while in Stream the source and dest are swapped! Any suggestion on this weird issue?

Hello, everyone! I have a few questions about indexer cleaning:
- How is it performed in a clustered architecture?
- Is it really needed? Do I understand correctly that frozen buckets are deleted automatically?

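For reference, freezing is driven per index by retention settings in indexes.conf, and each cluster peer applies them independently; a frozen bucket is deleted unless an archive destination is configured. A hedged sketch of the relevant stanza (the index name and values are examples, not recommendations):

[my_index]
# Buckets whose newest event is older than this are frozen (~90 days here).
frozenTimePeriodInSecs = 7776000
# Freezing also triggers when the index exceeds this total size (MB).
maxTotalDataSizeMB = 500000
# With neither coldToFrozenDir nor coldToFrozenScript set, frozen buckets are deleted.
# coldToFrozenDir = /archive/my_index

So no routine manual cleaning is needed; the `splunk clean eventdata` CLI command exists but wipes an index entirely and is not part of normal retention.
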
Does anyone know what naming convention is needed to onboard data from Corelight to Splunk? Do we need a convention like conn_<date>_<time>.log, or are conn.log and dns.log fine?

Hello, I need to install the Aruba TA; do you have any recommendations on how to proceed? Your recommendations will be highly appreciated. Thank you!

I've created a table of test results using stats list(), which looks like this (the application name is listed only once against the group of tests it relates to):

Application | TestName     | Outcome
------------|--------------|--------
Website     | Search One   | Passed
            | Contact Page | Passed
            | Order Form   | Passed
Internal    | Query Form   | Passed
            | Look Up      | Passed

I would like to amend the table so that the Application is shown in the TestName column above the group of tests it relates to, like this:

TestName     | Outcome
-------------|--------
Website      |
Search One   | Passed
Contact Page | Passed
Order Form   | Passed
Internal     |
Query Form   | Passed
Look Up      | Passed

I know this breaks the normal table data layout, but for the purposes of my dashboard I think it will make it more readable.

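One way to approximate that layout from the stats list() output described above: prepend the Application value to its group's multivalue TestName field, pad Outcome to keep the columns aligned, then drop the original column. A sketch to append after the base search (field names taken from the tables above; the row alignment of the two multivalue columns is the judgment call here):

| stats list(TestName) AS TestName list(Outcome) AS Outcome BY Application
| eval TestName=mvappend(Application, TestName)
| eval Outcome=mvappend(" ", Outcome)
| fields - Application

mvappend(Application, TestName) pushes the application name into the first slot of the TestName column, and the matching single-space entry in Outcome keeps the two multivalue columns the same length.
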
1. I have the logs below:

server6z: INFO could not find the logs under this path(apimanager call)
server6z: INFO could not find the logs under this path(apimanager call)
server6z: INFO could not find the logs under this path(apimanager call), unable to find the logs from this server.
server6z: INFO could not find the logs under this path(apimanager call)
server6z: INFO could not find the logs under this path(apimanager call)
server6z: INFO could not find the logs under this path(apimanager call), unable to find the logs from this server.
server6z: INFO could not find the logs under this path(apimanager call)

I have set the following in my props:

SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)

but I am seeing an error like "failed to parse timestamp, defaulting to file modtime". How can I resolve this issue?

2. I am getting the same issue as above for this type of logs as well. Sample logs:

/path/svgt/app/loadscript/file.com: coloumn12: /path/svgt/app/loadscript/file.com: not able to view file
/applicatins/dir/wrd-start/loadscript/filedata.com: line24: /applicatins/dir/wrd start/loadscript/filedata.com: not able to read the files
/path/svgt/app/loadscript/file.com: coloumn12: /path/svgt/app/loadscript/file.com: not able to view file
/applicatins/dir/wrd-start/loadscript/filedata.com: line24: /applicatins/dir/wrd start/loadscript/filedata.com: not able to read the files
/path/svgt/app/loadscript/file.com: coloumn12: /path/svgt/app/loadscript/file.com: not able to view file
/path/svgt/app/loadscript/file.com: coloumn12: /path/svgt/app/loadscript/file.com: not able to view file
/applicatins/dir/wrd-start/loadscript/filedata.com: line24: /applicatins/dir/wrd start/loadscript/filedata.com: not able to read the files

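The warning is expected here: neither sample contains a timestamp, so Splunk falls back to the file's modification time. If that fallback is not wanted, the usual fix is to tell Splunk to stop searching for a timestamp and stamp each event with the current index time instead. A hedged props.conf sketch (the sourcetype name is a placeholder):

[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# No timestamp exists in the events, so stop looking for one and
# stamp each event with the time it is indexed.
DATETIME_CONFIG = CURRENT

The alternative, DATETIME_CONFIG = NONE, uses the file modification time, which is the behavior the warning is already falling back to.
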
Hello, when I run a query I get the results I need in a table in Splunk, but when I download the .csv file, the timestamp field changes to an incorrect date and year. Does anyone know how I can fix it?

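This is usually the spreadsheet application re-interpreting the exported value rather than Splunk changing it; the CSV normally contains exactly what the table showed. One way to make the export unambiguous is to render the timestamp as an explicit string before exporting. A minimal sketch, assuming the field is _time:

| eval time_display=strftime(_time, "%Y-%m-%d %H:%M:%S")
| table time_display

Include time_display alongside the other report columns; strftime produces a fixed text representation that survives the round trip. Note that fieldformat-based formatting only affects the UI and is not applied to exported CSVs.
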
Does anyone know a command for monitoring the web loading response time of a Splunk page/server, like when you navigate from one page to another, or from one search to another?

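Splunk Web logs its own page response times to _internal, so a search can stand in for such a command. A minimal sketch (the spent field is the request time in milliseconds):

index=_internal sourcetype=splunk_web_access
| timechart avg(spent) AS avg_page_load_ms

sourcetype=splunk_web_access covers the web UI's requests; filtering on the uri field gives per-page figures, e.g. uri="/en-US/app/search/*".
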
Hi Experts, we have created a new role with the same capabilities as the user role, but we want to add one more capability that authorizes it to enable or disable alerts as required. Which capability do we need?

Thanks heaps