All Posts

Hi @Jamietriplet  Sounds like _time is being read as a string, not as epoch time. Try this:
| eval _time = strptime(_time, "%Y-%m-%dT%H:%M:%S.%N")
Sounds like an order of precedence issue. These two commands will help you figure out which setting is taking priority (one config is being applied ahead of the other), but go by what @gcusello is saying.
Inputs config:
/opt/splunk/bin/splunk btool inputs list --debug
Outputs config:
/opt/splunk/bin/splunk btool outputs list --debug
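If the btool output is too long to scan, you can narrow it to a single stanza prefix; a minimal sketch, assuming a splunktcp input is the one in question:
/opt/splunk/bin/splunk btool inputs list splunktcp --debug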
Hi, this has moved; I've put in a redirect. Thanks for letting me know.
Hi @Jamietriplet, to use timechart, the _time field must be in epoch time format. If the _time field in your csv is in a different format, you have to convert it to epoch time (using the strptime function in the eval command) before the timechart command. Ciao. Giuseppe
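A minimal end-to-end sketch of this, assuming the file has been uploaded as a lookup so it can be read with inputlookup (the filename and time format are taken from the original question):
| inputlookup try.csv
| eval _time = strptime(_time, "%Y-%m-%dT%H:%M:%S.%N")
| timechart span=1h count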
Hi @adrifesa95, if your HF is forwarding other logs, the connection is OK. So, try to remove the second stanza in the inputs.conf of the HF, leaving only:
[splunktcp://9997]
disabled = 0
Ciao. Giuseppe
Hi @sahityasweety, this timestamp seems to be in epoch time (and, at 13 digits, in milliseconds), so to transform it into a human-readable format you can use the strftime function in the eval command. E.g., to transform it into the format yyyy-mm-dd HH:MM:SS, you could try:
| eval timestamp=strftime(timestamp/1000,"%Y-%m-%d %H:%M:%S")
Ciao. Giuseppe
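A self-contained way to sanity-check the conversion, a quick sketch using makeresults with the value from the question (eval coerces the numeric string before the division):
| makeresults
| eval timestamp="1715235824441"
| eval readable=strftime(timestamp/1000, "%Y-%m-%d %H:%M:%S")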
@ITWhisperer I've removed the option Now from the dropdown. What should the new eval statement be instead of <eval token="latest_Time">if(isnull('timedrop') or 'timedrop'="now",now(),relative_time(if($time.latest$="now",now(),$time.latest$), $timedrop$))</eval>?
Hello, I answer to both of you. Here is my outputs.conf which, as you say, I downloaded from the cloud, and it points to the indexers.
[root@host ~]# cat /opt/splunk/etc/system/local/outputs.conf
[tcpout]
defaultGroup = splunkcloud_20231028_9aaa4b04216cd9a0a4dc1eb274307fd1
useACK = true
indexAndForward = 0
[tcpout:splunkcloud_20231028_9aaa4b04216cd9a0a4dc1eb274307fd1]
server = inputs1.tenant.splunkcloud.com:9997, inputs2.tenant.splunkcloud.com:9997, inputs3.tenant.splunkcloud.com:9997, inputs4.tenant.splunkcloud.com:9997, inputs5.tenant.splunkcloud.com:9997, inputs6.tenant.splunkcloud.com:9997, inputs7.tenant.splunkcloud.com:9997, inputs8.tenant.splunkcloud.com:9997, inputs9.tenant.splunkcloud.com:9997, inputs10.tenant.splunkcloud.com:9997, inputs11.tenant.splunkcloud.com:9997, inputs12.tenant.splunkcloud.com:9997, inputs13.tenant.splunkcloud.com:9997, inputs14.tenant.splunkcloud.com:9997, inputs15.tenant.splunkcloud.com:9997
But this is a problem only with this source, because I have other sources that go through that HF and arrive correctly at the cloud. I have already tested that port 9997 is up, but I must be missing something else. I have created the index mx_windows on both the cloud and the HF. Any more ideas?
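One more place to look: the HF's own internal logs often show why an output is failing. A minimal sketch of such a search (the component name is a common one for forwarding issues, not something quoted in this thread):
index=_internal source=*splunkd.log* component=TcpOutputProc (ERROR OR WARN)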
Hi all, I am new to Splunk, and I get the following error: "Field '_time' should have numerical values" when I try to run a timechart command. I have a csv file 'try.csv', from which I read some fields to display, but when I run a timechart command I get the above error. The csv file 'try.csv' has a column named _time, which holds an ISO8601 time. I would appreciate any guidance or help, as I am relatively new to Splunk. Thanks
If you've done that, then the best course of action might be to log a support ticket, as there could be another underlying issue.
Hello everyone, I'm currently working on a dashboard to visualize database latency across various machines, and I'm encountering an issue with the line chart's SPL (Search Processing Language). The requirement is to retrieve all values of the field ms_per_block grouped by ds_file_path and machine. Here's my SPL:
index=development sourcetype=custom_function user_action=database_test ds_file=*
| eval ds_file_path=ds_path."\\".ds_file
| search ds_file_path="\\\\swmfs\\orca_db_january_2024\\type\\rwo.ds"
| chart values(ms_per_block) by ds_file_path machine
My result: (screenshot of the chart output)
My goal is to have each ds_file_path value listed in individual rows along with the corresponding machine and ms_per_block values in separate rows. I've tried using the table command:
| table ds_file_path, machine, ms_per_block
But this doesn't give me the desired output. The machine name ends up under a single field, whereas I need each machine to appear as its own row with its respective ms_per_block values. I feel like I'm missing something here. Any guidance on how to achieve this would be greatly appreciated. Thanks in advance!
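One possible alternative shape, a minimal sketch using stats instead of chart (same search and field names as above), which keeps ds_file_path and machine as row fields instead of spreading machine across columns:
index=development sourcetype=custom_function user_action=database_test ds_file=*
| eval ds_file_path=ds_path."\\".ds_file
| stats values(ms_per_block) as ms_per_block by ds_file_path machine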
Hello, thanks for your response. I have added the necessary configuration according to the article you shared, but we are still facing this issue. The UI is slow to load as well.
Hello Splunk Community, I am trying to extract the "timestamp":"1715235824441" field with proper details. Could anyone help me with this? Thanks in advance. Regards, Sahitya
In particular, we need to know how many H statuses changed to C within the day (12 AM to 11:59 PM).
Start by checking the logs. These log levels can also be set via the GUI:
$SPLUNK_HOME/bin/splunk set log-level HTTPServer -level DEBUG
$SPLUNK_HOME/bin/splunk set log-level HttpInputDataHandler -level DEBUG
$SPLUNK_HOME/bin/splunk set log-level TcpInputProc -level DEBUG
Remember to set them back to WARN once you have finished debugging. Then search; this may give you some clues to investigate further:
index=_internal source=*splunkd.log* (component=HttpInputDataHandler OR component=TcpInputProc OR component=HTTPServer)
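For reference, resetting a level afterwards follows the same pattern, e.g.:
$SPLUNK_HOME/bin/splunk set log-level HTTPServer -level WARN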
Try it this way around:
index=abc | mvexpand records{} | spath input=records{} | table ProcessName, message, severity, Username, Email, Id
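A self-contained sketch of the same expand-then-parse idea, using makeresults and two fabricated records (the values are placeholders; explicit spath path=.../output=... is used so it runs standalone, without relying on automatic JSON field extraction):
| makeresults
| eval _raw="{\"ProcessName\":\"TestFlow270\",\"severity\":\"DEBUG\",\"records\":[{\"Username\":\"u1\",\"Email\":\"e1@example.com\",\"Id\":\"1\"},{\"Username\":\"u2\",\"Email\":\"e2@example.com\",\"Id\":\"2\"}]}"
| spath path=ProcessName
| spath path=severity
| spath path=records{} output=record
| mvexpand record
| spath input=record
| table ProcessName, severity, Username, Email, Id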
The suggestions made by @PickleRick are probably the best to go with. If it's still not working, you will most likely need to adjust the regex pattern based on your logs.
I've never come across a Splunk environment that uses dynamic IPs for indexers (it might be asking for trouble), but there may be some use cases, perhaps in cloud environments. Normally one would use static IPs and a DNS service with names for UF-to-indexer communications. You would then configure your outputs.conf with those DNS names. The UFs have built-in functionality to spray portions of the data across the indexers, so DNS may be the way forward for you. Example outputs.conf:
[tcpout]
defaultGroup = my_indexers
[tcpout:my_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997
If using an indexer cluster, you can use the cluster manager discovery option. Read all about it here: https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/indexerdiscovery
Regarding SmartStore, read all about it here: https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/AboutSmartStore
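For the indexer discovery route, the forwarder-side outputs.conf looks roughly like this; a sketch only, with placeholder hostnames and group names, so check the linked docs for the authoritative settings:
[indexer_discovery:cm]
pass4SymmKey = <your key>
master_uri = https://clustermanager.example.com:8089
[tcpout:cluster_peers]
indexerDiscovery = cm
[tcpout]
defaultGroup = cluster_peers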
I have the following event, which contains an array of records:
ProcessName: TestFlow270
message: TestMessage1
records: [
  {"Username": "138perf_test1@netgear.com.org", "Email": "tmckinnon@netgear.com.invalid", "Id": "00530000000drllAAA"}
  {"Username": "clau(smtest145)@netgear.com.org", "Email": "clau@netgear.com.invalid", "Id": "0050M00000DtmxIQAR"}
  {"Username": "d.mitra@netgear.com.test1", "Email": "d.mitratest1@netgear.com", "Id": "0052g000003DSbTAAW"}
  {"Username": "demoalias+test1@guest.netgear.com.org", "Email": "demoalias+test1@gmail.com.invalid", "Id": "0050M00000CyZJYQA3"}
  {"Username": "dlohith+eventstest1@netgear.com.org", "Email": "sfdcapp_gacks@netgear.com.invalid", "Id": "0050M00000CzJvYQAV"}
  {"Username": "juan.gimenez+test1@netgear.com.apsqa2", "Email": "juan.gimenez+test1@netgear.com", "Id": "005D10000043gVxIAI"}
  {"Username": "kulbir.singh+test1@netgear.com.org", "Email": "sfdcapp_gacks@netgear.com.invalid", "Id": "0050M00000CzJvaQAF"}
  {"Username": "rktest1028@guest.netgear.com.org", "Email": "rktest1028@gmail.com.invalid", "Id": "0053y00000G0UmxAAF"}
  {"Username": "test123test2207@test.com", "Email": "kkhatri@netgear.com", "Id": "005D10000042Mi1IAE"}
  {"Username": "test123test@test.com", "Email": "test123test@test.com", "Id": "0052g000003EUIUAA4"}
]
severity: DEBUG
I tried this query:
index=abc|spath input=records{} | mvexpand records{} | table ProcessName, message, severity, Username, Email, as Id
It returns 10 records, but all 10 have the same values, i.e. those of the first record. Is there a way to parse this array with all of the key-value pairs? @gcusello @yuanliu
So why not just count the C's in one day?
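A minimal sketch of that idea, assuming a status field and a placeholder index (neither name is quoted in this thread):
index=<your_index> status=C earliest=@d latest=now
| stats count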