I have a case where some indexers take 4 to 5 hours to join the cluster. The system shows little to no resource usage (CPU, memory, I/O). splunkd.log appears to loop through the same log entries multiple times. Then the indexer continues loading when I see the log entry "Running job=BundleForcefulStateMachineResetJob". After this reset job is run, I quickly see the public key for the master loaded, and the indexer joins the cluster shortly thereafter. Here is a snippet of the log:

10-13-2022 11:22:02.293 -0700 WARN HttpListener - Socket error from 127.0.0.1:54240 while accessing /servicesNS/splunk-system-user/splunk_archiver/search/jobs: Broken pipe
10-13-2022 11:44:08.721 -0700 INFO PipelineComponent - MetricsManager:probeandreport() took longer than seems reasonable (1103256 milliseconds) in callbackRunnerThread. Might indicate hardware or splunk limitations.
10-13-2022 11:44:24.950 -0700 INFO PipelineComponent - CallbackRunnerThread is unusually busy, this may cause service delays: time_ms=1119484 new=0 null=0 total=56 {'name':'DistributedRestCallerCallback','valid':'1','null':'0','last':'3','time_ms':'0'},{'name':'HTTPAuthManager:timeoutCallback','valid':'1','null':'0','last':'1','time_ms':'0'},{'name':'IndexProcessor:ipCallback-0','valid':'1','null':'0','last':'6','time_ms':'4000'},{'name':'IndexProcessor:ipCallback-1','valid':'1','null':'0','last':'19','time_ms':'4000'},{'name':'IndexProcessor:ipCallback-2','valid':'1','null':'0','last':'30','time_ms':'4062'},{'name':'IndexProcessor:ipCallback-3','valid':'1','null':'0','last':'41','time_ms':'4164'},{'name':'MetricsManager:probeandreport','valid':'1','null':'0','last':'0','time_ms':'1103256'},{'name':'PullBasedPubSubSvr:timerCallback','valid':'1','null':'0','last':'2','time_ms':'0'},{'name':'ThreadedOutputProcessor:timerCallback','valid':'4','null':'0','last':'40','time_ms':'0'},{'name':'triggerCollection','valid':'44','null':'0','last':'55','time_ms':'0'}
10-13-2022 12:00:00.001 -0700 INFO ExecProcessor - setting reschedule_ms=3599999, for command=/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/splunk_instrumentation/bin/instrumentation.py
10-13-2022 12:18:32.106 -0700 WARN DispatchReaper - Failed to read search info for id=1665688686.28
10-13-2022 12:19:02.105 -0700 WARN DispatchReaper - Failed to read search info for id=1665688686.28
10-13-2022 12:19:32.106 -0700 WARN DispatchReaper - Failed to read search info for id=1665688686.28
10-13-2022 12:20:02.105 -0700 WARN DispatchReaper - Failed to read search info for id=1665688686.28
10-13-2022 12:20:30.137 -0700 WARN HttpListener - Socket error from 127.0.0.1:54544 while accessing /servicesNS/splunk-system-user/splunk_archiver/search/jobs: Broken pipe
10-13-2022 12:29:09.955 -0700 INFO PipelineComponent - MetricsManager:probeandreport() took longer than seems reasonable (2182584 milliseconds) in callbackRunnerThread. Might indicate hardware or splunk limitations.
10-13-2022 12:29:25.957 -0700 INFO PipelineComponent - CallbackRunnerThread is unusually busy, this may cause service delays: time_ms=2198585 new=1 null=0 total=57 {'name':'DistributedRestCallerCallback','valid':'1','null':'0','last':'3','time_ms':'0'},{'name':'HTTPAuthManager:timeoutCallback','valid':'1','null':'0','last':'1','time_ms':'0'},{'name':'IndexProcessor:ipCallback-0','valid':'1','null':'0','last':'6','time_ms':'4000'},{'name':'IndexProcessor:ipCallback-1','valid':'1','null':'0','last':'19','time_ms':'4000'},{'name':'IndexProcessor:ipCallback-2','valid':'1','null':'0','last':'30','time_ms':'4000'},{'name':'IndexProcessor:ipCallback-3','valid':'1','null':'0','last':'41','time_ms':'4000'},{'name':'MetricsManager:probeandreport','valid':'1','null':'0','last':'0','time_ms':'2182584'},{'name':'PullBasedPubSubSvr:timerCallback','valid':'1','null':'0','last':'2','time_ms':'0'},{'name':'ThreadedOutputProcessor:timerCallback','valid':'4','null':'0','last':'40','time_ms':'0'},{'name':'triggerCollection','valid':'45','null':'0','last':'56','time_ms':'0'}
10-13-2022 12:46:13.298 -0700 INFO PipelineComponent - MetricsManager:probeandreport() took longer than seems reasonable (496854 milliseconds) in callbackRunnerThread. Might indicate hardware or splunk limitations.
10-13-2022 12:46:13.867 -0700 WARN HttpListener - Socket error from 127.0.0.1:54220 while accessing /services/data/indexes: Broken pipe
10-13-2022 12:46:13.907 -0700 WARN HttpListener - Socket error from 127.0.0.1:54254 while accessing /services/data/indexes: Broken pipe
10-13-2022 12:46:13.931 -0700 WARN HttpListener - Socket error from 127.0.0.1:54560 while accessing /services/data/indexes: Broken pipe
10-13-2022 12:46:13.955 -0700 WARN HttpListener - Socket error from 127.0.0.1:54538 while accessing /services/data/indexes: Broken pipe
10-13-2022 12:46:17.070 -0700 INFO BundleJob - Running job=BundleForcefulStateMachineResetJob
Dear All, please advise: when I export results to CSV, values in the Work ID field that start with 0 lose the leading zero when the file is opened (see the screen capture below). Is it possible to change the format before exporting to CSV? Best Regards, CR
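One hedged workaround for the question above (field name work_id is a placeholder; spreadsheet applications strip leading zeros when they interpret the column as a number): wrap the value in Excel's ="..." text syntax before exporting, so the cell is treated as literal text.

```
... your search ...
| eval work_id = "=\"" . work_id . "\""
| table work_id
```

Note that this only helps when the CSV is opened in a spreadsheet; plain-text viewers will show the wrapper characters. An alternative is to leave the search unchanged and import the CSV with the column type set to Text.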
Hi, I'm working on a new use case but got stuck on a few things. I want to create use-case logic that monitors whenever a user/IP tries to log in from a non-authorized country. For example, a user is supposed to log in from Berlin, but he or she logs in from Chicago. My questions: 1. Is it possible to implement such a use case in Splunk? 2. If yes, what kind of logs do we need to monitor this activity; are firewall logs enough? 3. What would the query be? Thanks
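A rough sketch of the logic described above, assuming authentication events with a src_ip field and a hypothetical lookup authorized_locations that maps each user to an allowed_country (the index, sourcetype, lookup, and field names here are all placeholders):

```
index=auth action=success
| iplocation src_ip
| lookup authorized_locations user OUTPUT allowed_country
| where Country != allowed_country
| table _time user src_ip Country allowed_country
```

On the log-source question: firewall logs alone usually lack usernames, so authentication logs that tie a user to a source IP (VPN, Active Directory, IdP) are generally needed in addition to, or instead of, firewall data.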
Hi, I am monitoring HTTP response codes for a bunch of internal URLs, and it works as long as the sites are responding. But if the host is not responding, I get nothing but an error in the Windows application event log:

Get \"http://osi3160.de-prod.dk:8080/ping\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)", "monitorType": "http", "url": "http://osi3160.de-prod.dk:8080/ping"}
Get \"http://osi3160.de-prod.dk:8080/ping\": context deadline exceeded"}

My agent_config.yaml looks like this:

smartagent/http_api-gateway_1:
  type: http
  host: osi3160.de-prod.dk
  port: 8080
  path: /ping
  regex: <body>OK<\/body>
  httpTimeout: 20s
  intervalSeconds: 60

Any ideas?
How can I use this (sed -i 's/"//g' $LOOKUP_FILE) in a script? Can anyone help? Thanks, Lateef
Hi All, when running a search, the following error appears in the job inspector. Users get this message intermittently on searches, and no results are returned.

10-18-2022 11:00:22.349 ERROR DispatchThread [3247729 phase_1] - code=10 error=""
10-18-2022 11:00:22.349 ERROR ResultsCollationProcessor [3247729 phase_1] - SearchMessage orig_component= sid=1666090813.341131_7E89B3C6-34D5-44DA-B19C-E6A755245D39 message_key=DISPATCHCOMM:PEER_PIPE_EXCEPTION__%s message=Search results might be incomplete: the search process on the peer:pldc1splindex1 ended prematurely. Check the peer log, such as $SPLUNK_HOME/var/log/splunk/splunkd.log and as well as the search.log for the particular search.

messages.conf shows:

[DISPATCHCOMM:PEER_PIPE_EXCEPTION__S]
message = Search results might be incomplete: the search process on the local peer:%s ended prematurely.
action = Check the local peer log, such as $SPLUNK_HOME/var/log/splunk/splunkd.log and as well as the search.log for the particular search.
severity = warn

I also have Splunk alerts showing false positives: the alert search returns no results, but the scheduler (sourcetype=scheduler) is sending out success emails. Is this related? What does PEER_PIPE_EXCEPTION__S mean?

Splunk Enterprise on-prem, version 9.0.1, distributed environment. Thanks
Hi all. Is there an easy and fast way to disable, entirely or by some filter, the warning banners I sometimes get on the SPL search page? The same warnings appear in the job details, so I do not want them displayed as a banner on the page as well. How? Thanks.
We are using the Splunk Add-on for Microsoft Cloud Services to index an Azure Event Hub input. What field can be used as a unique key?
Hi everyone, I am using Splunk UI (Splunk Design System) to develop a Splunk app with ReactJS. I want to send email from my app using Splunk's SMTP settings. My app lets users select the sender, recipients (To, Cc, Bcc), and body. So my questions are: Can we get the SMTP settings from Splunk? How can I send email from the app with ReactJS; does the Splunk JS SDK support it? I know I could create a custom REST endpoint with a Python script to build a send-mail backend API, but that is very complex. I want to find another way. Thanks!
Hi, I have a problem in the Stream app: in some flows the wrong source and dest IPs are observed. For instance, I checked the original flow in Wireshark, and the original source IP and port were 192.168.1.1:56271 and the dest IP and port were 192.168.1.2:80, yet in Stream the source and dest are swapped! Any suggestions on this weird issue?
Hello, everyone! I have a few questions about indexer cleaning: - How is it performed in a clustered architecture? - Is it really needed? Do I understand correctly that frozen buckets are deleted automatically?
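For context on the frozen-bucket question: by default, buckets that roll to frozen are deleted automatically, on each peer, unless an archive destination is configured. A minimal indexes.conf sketch (the index name is hypothetical):

```
[my_index]
# buckets whose newest event is older than ~90 days roll to frozen
frozenTimePeriodInSecs = 7776000
# if set, frozen buckets are archived here instead of being deleted
# coldToFrozenDir = /archive/my_index
```

In a cluster this retention policy is applied per peer via the manager's bundle, so no manual "cleaning" is normally required.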
Does anyone know what naming convention is needed to onboard data from Corelight to Splunk? Do we need a naming convention like conn_<date>_<time>.log, or are conn.log and dns.log fine?
Hello, I need to install the Aruba TA; do you have any recommendations on how to proceed? Your recommendations will be highly appreciated. Thank you!
I've created a table of test results using stats list() that looks like this (the application name is only listed once against the group of tests it's related to):

| Application | TestName | Outcome |
|---|---|---|
| Website | Search One | Passed |
| | Contact Page | Passed |
| | Order Form | Passed |
| Internal | Query Form | Passed |
| | Look Up | Passed |

I would like to amend the table so that the Application is shown in the 'TestName' column above the group of tests it's related to, so it looks like this:

| TestName | Outcome |
|---|---|
| Website | |
| Search One | Passed |
| Contact Page | Passed |
| Order Form | Passed |
| Internal | |
| Query Form | Passed |
| Look Up | Passed |

I know this breaks normal table data layout, but for the purposes of my dashboard I think it will make it look more readable.
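One way to sketch this in SPL, assuming the fields are named Application, TestName, and Outcome as in the tables above: prepend the application name to the multivalue TestName list produced by stats list(), and pad the Outcome list with a blank so the rows stay aligned.

```
| stats list(TestName) as TestName list(Outcome) as Outcome by Application
| eval TestName = mvappend(Application, TestName)
| eval Outcome = mvappend(" ", Outcome)
| fields - Application
```

This is a sketch, not a tested solution: mvappend places the Application value first in the TestName column, and the leading blank in Outcome keeps each test name lined up with its own result.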
1. I have the logs below:

server6z: INFO could not find the logs under this path(apimanager call)
server6z: INFO could not find the logs under this path(apimanager call)
server6z: INFO could not find the logs under this path(apimanager call), unable to find the logs from this server.
server6z: INFO could not find the logs under this path(apimanager call)
server6z: INFO could not find the logs under this path(apimanager call)
server6z: INFO could not find the logs under this path(apimanager call), unable to find the logs from this server.
server6z: INFO could not find the logs under this path(apimanager call)

I have set in my props.conf:

SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)

but I am seeing an error like "failed to parse timestamp, defaulting to file modtime". How do I resolve this issue?

2. I am getting the same issue as above for this type of log as well. Sample logs:

/path/svgt/app/loadscript/file.com: coloumn12: /path/svgt/app/loadscript/file.com: not able to view file
/applicatins/dir/wrd-start/loadscript/filedata.com: line24: /applicatins/dir/wrd start/loadscript/filedata.com: not able to read the files
/path/svgt/app/loadscript/file.com: coloumn12: /path/svgt/app/loadscript/file.com: not able to view file
/applicatins/dir/wrd-start/loadscript/filedata.com: line24: /applicatins/dir/wrd start/loadscript/filedata.com: not able to read the files
/path/svgt/app/loadscript/file.com: coloumn12: /path/svgt/app/loadscript/file.com: not able to view file
/path/svgt/app/loadscript/file.com: coloumn12: /path/svgt/app/loadscript/file.com: not able to view file
/applicatins/dir/wrd-start/loadscript/filedata.com: line24: /applicatins/dir/wrd start/loadscript/filedata.com: not able to read the files
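Since neither sample contains a timestamp, one hedged fix (the sourcetype name below is a placeholder) is to tell Splunk to stamp events with the current index time rather than trying to parse a timestamp, which avoids the modtime fallback warning:

```
[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# events carry no timestamp: use current index time
# instead of falling back to file modification time
DATETIME_CONFIG = CURRENT
```

This sketch applies to both log formats shown; if the real events do contain a timestamp elsewhere, TIME_PREFIX and TIME_FORMAT would be the better tools.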
Hello, when I run a query I get the results as I need them in a table in Splunk, but when I download the .csv file, the timestamp field changes to an incorrect date and year. Does anyone know how I can fix it?
Does anyone know a command to monitor the web loading response time of a Splunk page/server? For example, when you navigate from one page to another, or from one search to another.
Hi Experts, we have created a new role with the same capabilities as the user role, but we want to add another capability to this role to authorize its members to enable or disable alerts as required. Thanks heaps
I have a search:

```
index=xyz data.id=1
| stats count by uniqueId
```

which gives me a list of unique ids [1,2,3,4,5]. I'm not sure how to store this result so it can be used in another query. Now I want to use the list above to get the data from another query (against a second index) and find the values. Query 2 will return something like:

1 -> good
2 -> Bad
3 -> Neutral / etc.

I want to use the result [1,2,3,4] in the next query, which will give me some extra information based on the ID only. E.g. query 2 has index=xyz data.msg.id=1, data.xyz.val=good. How can we do that? I am trying something like this:

index="test" actionSubCateg IN (xyz) landingPageURL="xyz/?search=game_gupta" data.msg.queryName="query FindBtf"
| table data.msg.id

and then finding, in the second query, the results of the top query:

[ search index="test" actionSubCateg="game" | rename data.DATA.id as id | fields id, scope | table id, scope ]
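A hedged sketch of the usual subsearch pattern, reusing the field names from the post (which outer/inner field names are correct for your data is an assumption): run the first query inside brackets, rename its id field to match the field name in the second query, and the subsearch's results become a filter on the outer search.

```
index="test" data.msg.queryName="query FindBtf"
    [ search index="test" actionSubCateg="game"
      | rename data.DATA.id as data.msg.id
      | fields data.msg.id
      | dedup data.msg.id ]
| table data.msg.id data.xyz.val
```

The subsearch returns data.msg.id=value pairs that are ORed together, so the outer search only keeps events whose id appeared in the first query. Subsearches are capped (by default around 10,000 results and a time limit), so for very large id lists a lookup or stats-based join is safer.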
Hi, I am trying to capture all event="DcSyncs" events from my index. This index also contains event="DcID". The "DcSyncs" event can occur at any time (pretty often, though), but "DcID" occurs once every 8 hours. I am trying to get all "DcSyncs" events, take the HostName field of those results, and check whether that HostName also has a result for event="DcID"; if it does, filter it out of the results. To summarize: I am trying to collect all HostNames that have a "DcSyncs" event but no "DcID" event. I have this set up to run on an 8-hour interval, so I don't think I need time logic in the search. I keep trying different variations, but I think I am way off. Any help is appreciated.

index=MyIndex event="DcSyncs"
| join HostName [search NOT index=MyIndex event="DcID"]
| table _time HostName event
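A sketch of one way to do this without join, under the assumptions in the post (field names HostName and event, index MyIndex): let a subsearch collect every HostName that logged a DcID event, and exclude those hosts from the DcSyncs results. Note the NOT sits on the outer search, not inside the subsearch.

```
index=MyIndex event="DcSyncs"
    NOT [ search index=MyIndex event="DcID"
          | fields HostName
          | dedup HostName ]
| table _time HostName event
```

Placing NOT inside the subsearch (as in the attempt above) negates the index/event filter itself rather than excluding the matching hosts, which is why the variations were not working. As with any subsearch, result and runtime limits apply if the host list is very large.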