If Warningcount is high, I would check whether the target receiver/indexer is applying back-pressure. First check whether queues are blocked on the target. If the queues are not blocked, check on the target using netstat:

    netstat -an | grep <splunktcp port>

and see whether Recv-Q is high. If the receiver queues are not blocked but netstat shows Recv-Q is full, the receiver needs additional pipelines.

If Warningcount is high because there was a rolling restart at the indexing tier, set maxSendQSize to roughly 5% of maxQueueSize. Example:

    maxSendQSize = 2000000
    maxQueueSize = 50MB

If using autoLBVolume, then keep:

    maxQueueSize > 5 x autoLBVolume
    autoLBVolume > maxSendQSize

Example:

    maxQueueSize = 50MB
    autoLBVolume = 5000000
    maxSendQSize = 2000000

maxSendQSize is the total outstanding raw size of events/chunks in the connection queue waiting to be written to the TCP Send-Q; it generally fills up when the TCP Send-Q is already full. autoLBVolume is the minimum total raw size of events/chunks to be sent to a connection.
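The sizing relations above can be sanity-checked with a little arithmetic. This is a plain-Python sketch (not Splunk configuration); the function name and the 5% fraction are illustrative, and the values are the example values from the post:

```python
# Sanity-check the sizing guidance above, using the post's example values.
# These names are illustrative; the real limits live in outputs.conf.

MB = 1024 * 1024

def recommended_max_send_q(max_queue_size_bytes, fraction=0.05):
    """Roughly 5% of maxQueueSize, per the guidance above."""
    return int(max_queue_size_bytes * fraction)

max_queue_size = 50 * MB        # maxQueueSize = 50MB
auto_lb_volume = 5_000_000      # autoLBVolume
max_send_q = 2_000_000          # maxSendQSize

# The post's ordering: maxQueueSize > 5 x autoLBVolume > ... > maxSendQSize
assert max_queue_size > 5 * auto_lb_volume
assert auto_lb_volume > max_send_q

# maxSendQSize = 2000000 is indeed in the ballpark of 5% of 50MB
print(recommended_max_send_q(max_queue_size))  # 2621440
```

With maxQueueSize = 50MB, 5% works out to about 2.6 million bytes, so the example's maxSendQSize = 2000000 is comfortably within the guidance.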
Thanks @gcusello 
thanks for the reply can you tell me how can i do that
Hi @Siddharthnegi, yes, it's unusual (logs are usually read by Universal or Heavy Forwarders), but it's possible. Remember that, in any case, it's a best practice to forward all SH logs to the Indexers, so for this reason it's possible. Ciao. Giuseppe
Can I monitor a file on a search head?
Hi team, I upgraded from version 9.0.5 to 9.1.2 and the upgrade completed successfully, but Splunk Web now shows a "can't reach this page" window. I verified from the bin directory:

    E:\splunk\bin>openssl s_client -connect simdoowwww:443
    WARNING: can't open config file: ::::::/openssl.cnf
    connect: No such file or directory
    connect:errno=0

web.conf:

    [settings]
    enableSplunkWebSSL = 1
    privKeyPath =a $SPLUNK_HOME\etc\auth\custom\myServerPrivateKey.key
    serverCert = $SPLUNK_HOME\etc\auth\custom\gddjkowww.ap.kinely.com.pem
    httpport = 443

This is the configuration on the back-end system, but the page still can't be reached. Please help me with this.
Hi @Jamietriplet  Sounds like _time is being read as a string rather than as epoch time. Try this:

    | eval _time = strptime(_time, "%Y-%m-%dT%H:%M:%S.%N")
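For readers unfamiliar with what that strptime() call does, here is a plain-Python analogue (not SPL). The ISO8601 string below is a hypothetical _time value; note that Splunk's %N subsecond specifier roughly corresponds to Python's %f for microsecond-style fractions:

```python
# Parse a hypothetical ISO8601 _time string into epoch seconds,
# mirroring what Splunk's strptime() does in the eval above.
from datetime import datetime, timezone

iso = "2024-05-09T06:23:44.441000"  # hypothetical _time value from a CSV
dt = datetime.strptime(iso, "%Y-%m-%dT%H:%M:%S.%f").replace(tzinfo=timezone.utc)
epoch = dt.timestamp()
print(int(epoch))  # 1715235824
```

Once _time holds a numeric epoch value like this, timechart can bucket it normally.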
Sounds like an order-of-precedence issue. These two commands will help in figuring out which setting is taking priority (some config is taking effect before the other), but go by what @gcusello is saying.

Inputs config:

    /opt/splunk/bin/splunk btool inputs list --debug

Outputs config:

    /opt/splunk/bin/splunk btool outputs list --debug
Hi, this has moved; I've put in a redirect. Thanks for letting me know.
Hi @Jamietriplet, to use timechart you must use the _time field, which is in epochtime format. If in your csv the _time field is in a different format, you have to convert it to epochtime (using the strptime function in the eval command) before the timechart command. Ciao. Giuseppe
Hi @adrifesa95, if your HF is forwarding other logs, the connection is OK. So try to remove the second stanza in the inputs.conf of the HF, leaving only:

    [splunktcp://9997]
    disabled = 0

Ciao. Giuseppe
Hi @sahityasweety, this timestamp seems to be in epochtime, so to transform it into a human-readable format you can use the strftime function in the eval command. E.g., to transform it into yyyy-mm-dd HH:MM:SS format, you could try:

    | eval timestamp=strftime(timestamp,"%Y-%m-%d %H:%M:%S")

Ciao. Giuseppe
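As a plain-Python illustration (not SPL) of what that strftime() step produces: the 13-digit value from the original question looks like epoch *milliseconds*, so it would need dividing by 1000 before formatting. This is a sketch under that assumption:

```python
# Convert the question's 13-digit timestamp (assumed epoch milliseconds)
# to a human-readable string, mirroring the strftime() eval above.
from datetime import datetime, timezone

ts_ms = 1715235824441                      # value from the question
dt = datetime.fromtimestamp(ts_ms / 1000, tz=timezone.utc)
print(dt.strftime("%Y-%m-%d %H:%M:%S"))    # 2024-05-09 06:23:44
```

In Splunk the equivalent scaling would be dividing the field by 1000 in the eval before applying strftime.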
@ITWhisperer I've removed the option Now from the dropdown. What should the new eval statement be instead of <eval token="latest_Time">if(isnull('timedrop') or 'timedrop'="now",now(),relative_time(if($time.latest$="now",now(),$time.latest$), $timedrop$))</eval> ?
Hello, I answer to both of you. I leave you my outputs.conf which, as you say, I downloaded from the cloud, and it points to the indexers.

    [root@host ~]# cat /opt/splunk/etc/system/local/outputs.conf
    [tcpout]
    defaultGroup = splunkcloud_20231028_9aaa4b04216cd9a0a4dc1eb274307fd1
    useACK = true
    indexAndForward = 0

    [tcpout:splunkcloud_20231028_9aaa4b04216cd9a0a4dc1eb274307fd1]
    server = inputs1.tenant.splunkcloud.com:9997, inputs2.tenant.splunkcloud.com:9997, inputs3.tenant.splunkcloud.com:9997, inputs4.tenant.splunkcloud.com:9997, inputs5.tenant.splunkcloud.com:9997, inputs6.tenant.splunkcloud.com:9997, inputs7.tenant.splunkcloud.com:9997, inputs8.tenant.splunkcloud.com:9997, inputs9.tenant.splunkcloud.com:9997, inputs10.tenant.splunkcloud.com:9997, inputs11.tenant.splunkcloud.com:9997, inputs12.tenant.splunkcloud.com:9997, inputs13.tenant.splunkcloud.com:9997, inputs14.tenant.splunkcloud.com:9997, inputs15.tenant.splunkcloud.com:9997

But the problem is only with this source, because I have other sources that go through that HF and arrive correctly at the cloud. I have already tested that port 9997 is up, but I must be missing something else. I have created the index mx_windows on both the cloud and the HF. Any more ideas?
Hi all, I am new to Splunk, and I get the following error when I try to run a timechart command: "Field '_time' should have numerical values". I have a csv file 'try.csv' from which I read in some fields to display, but when I run a timechart command I get the above error. The csv file 'try.csv' has a column named _time, which contains an ISO8601 time. I would appreciate any guidance or help, as I am relatively new to Splunk. Thanks
If you've done that, then the best course of action might be to log a support ticket, as there could be another underlying issue.
Hello everyone, I'm currently working on a dashboard to visualize database latency across various machines, and I'm encountering an issue with the line chart's SPL (Search Processing Language). The requirement is to retrieve all values of the field ms_per_block grouped by ds_file_path and machine. Here's my SPL:

    index=development sourcetype=custom_function user_action=database_test ds_file=*
    | eval ds_file_path=ds_path."\\".ds_file
    | search ds_file_path="\\\\swmfs\\orca_db_january_2024\\type\\rwo.ds"
    | chart values(ms_per_block) by ds_file_path machine

(Screenshot of my result omitted.) My goal is to have each ds_file_path value listed in individual rows, along with the corresponding machine and ms_per_block values in separate rows. I've tried using the table command:

    | table ds_file_path, machine, ms_per_block

But this doesn't give me the desired output. The machine name is under a field, whereas I need the machine name to be a separate field, each containing its respective ms_per_block value. I feel like I'm missing something here. Any guidance on how to achieve this would be greatly appreciated. Thanks in advance!
Hello, thanks for your response. I have added the necessary configuration according to the article you shared, but we are still facing this issue. The UI loading is slow as well.
Hello Splunk Community, I am trying to extract the "timestamp":"1715235824441" field with proper details. Could anyone help me with this? Thanks in advance. Regards, Sahitya
We particularly need to know how many H statuses were coming to C within the day (12AM to 11:59PM).