All Posts

Not able to extract the difference. Query:
(index="events_prod_gmh_gateway_esa") SPNLDSCR*
| spath Y_CONV | search Y_CONV=CACAFORM
| spath ID_FAMILLE | search ID_FAMILLE=CAFORM
| eval Time_in = "20" + substr(sRefInt, 9, 15)
| eval Processing_Start_Time = strptime(HdtIn,"%Y%m%d%H%M%S.%q"), Processing_End_Time = strptime(HdtOut,"%Y%m%d%H%M%S.%q"), Reception_Time = strptime(Time_in,"%Y%m%d%H%M%S.%q")
| eval Processing_Start_Time_1 = strptime(HdtIn,"%m/%d/%Y %H:%M:%S.%6N"), Processing_End_Time_1 = strptime(HdtOut,"%m/%d/%Y %H:%M:%S.%6N"), Reception_Time_1 = strptime(Time_in,"%Y%m%d%H%M%S.%q"), diff = Processing_End_Time_1 - Reception_Time_1
| convert ctime(Processing_Start_Time), ctime(Processing_End_Time), ctime(Reception_Time)
| table _time, ID_FAMILLE, MSG_TYP_CONV, MSG_TYP_ORIG, sRefInt, Reception_Time, Processing_Start_Time, Processing_End_Time, Processing_Start_Time_1, Processing_End_Time_1, Reception_Time_1, diff
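If it helps with debugging, one way to see which strptime format actually matches is to paste a literal sample value into makeresults and try both formats side by side; whichever result comes back null is the format that does not fit that field. A minimal sketch, assuming the sample value below stands in for one of HdtIn / HdtOut / Time_in copied from a real event:

| makeresults
| eval sample = "06/21/2024 08:58:00.000000"
| eval parsed_slash = strptime(sample, "%m/%d/%Y %H:%M:%S.%6N"), parsed_compact = strptime(sample, "%Y%m%d%H%M%S.%q")
| table sample, parsed_slash, parsed_compact

The diff column in the original search will stay empty whenever either strptime feeding it returns null.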
Hi All, I'm working on a project to create some dashboards that display a lot of information, and one of the questions I'm facing is how to know whether Nessus scans are credentialed. I looked at some events, and they indicate the check type: local. Does this mean it is a credentialed scan? Thanks in advance for any information that may help.
Have you tried | fields *Read*  ?
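For context, this is roughly how that would slot into the search from the question; note that field names are case-sensitive, so the wildcard has to match the capitalization actually used in the metric names (keeping _time so the timechart still renders):

| mstats avg(*) as * WHERE index=indexhere host=hosthere span=1 by host
| timechart span=1m latest(*) as *
| fields _time *read* *Read*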
You have the right idea, but the strptime format strings don't match the example data. Then simply subtract one from the other to get the difference. Try this:
| eval Processing_Start_Time = strptime(HdtIn,"%m/%d/%Y %H:%M:%S.%6N"), Processing_End_Time = strptime(HdtOut,"%m/%d/%Y %H:%M:%S.%6N"), Reception_Time = strptime(Time_in,"%Y%m%d%H%M%S.%q"), diff = Processing_End_Time - Reception_Time
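As a quick sanity check, the two sample timestamps from the question can be pushed through the same kind of eval with makeresults; this is only a sketch using those literal values, but the difference should come out at roughly 3295 seconds (00:54:55):

| makeresults
| eval Reception_Time = "06/21/2024 08:58:00.000000", Processing_End_Time = "06/21/2024 09:52:55.000000"
| eval diff = strptime(Processing_End_Time, "%m/%d/%Y %H:%M:%S.%6N") - strptime(Reception_Time, "%m/%d/%Y %H:%M:%S.%6N")
| eval diff_readable = tostring(diff, "duration")
| table Reception_Time, Processing_End_Time, diff, diff_readable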
How do I write a for loop as a one-liner in a Splunk SOAR playbook? This is the logic I am trying to express:

for i in code_1__output1:
    code_1__output5 = i.split(":")[0]
    if code_1__output5 == "ipaddress":
        code_1__output4 = str(code_1__output5)
Hello! I have the following search:
| mstats avg(*) as * WHERE index=indexhere host=hosthere span=1 by host
| timechart span=1m latest(*) as *
What I am trying to do is only show the fields that contain the word "read" somewhere in the field name. Each field name is different and doesn't have "read" in the same place or before/after the same special characters either. I have tried fixing this with different commands but can't seem to find a good solution. Thanks in advance
@kp_pl wrote:
index="odp" OR index="oap" txt2="ibum_p"
| rename e as c_e
| eval c_e = mvindex(split(c_e, ","), 0)
| stats values(*) by c_e
line 1 - two indexes joined and one of them filtered (to create a one-to-one relation).

To clarify, line 1 does *not* join the indexes, nor does it create a one-to-one relation. The OR operator tells the search peers to select all events from the odp index plus the events in the oap index where the txt2 field has the specified value. No relationship between the two indexes is made or implied, and none should be inferred. To create a relationship, use the join (not preferred), transaction (also not preferred), or stats (preferred) command to associate the events by common fields, as in line 4.
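As a rough illustration of the stats pattern, building on the search quoted above: group the events from both indexes by the shared key and, if you only want rows where both indexes actually contributed, filter on a distinct count of index. The dc(index) filter is my addition, not part of the original search:

index="odp" OR index="oap" txt2="ibum_p"
| rename e as c_e
| eval c_e = mvindex(split(c_e, ","), 0)
| stats values(*) as * dc(index) as index_count by c_e
| where index_count = 2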
Hi, can you please let me know how we can find the difference in time between 2 timestamp fields? For example, the 2 timestamp fields are in the below format:
Reception_Time = 06/21/2024 08:58:00.000000
Processing_End_Time = 06/21/2024 09:52:55.000000
Query:
(index="events_prod_gmh_gateway_esa") SPNLDSCR2406210858000001000
| spath Y_CONV | search Y_CONV=CACAFORM
| spath ID_FAMILLE | search ID_FAMILLE=CAFORM
| eval Time_in = "20" + substr(sRefInt, 9, 15)
| eval Processing_Start_Time = strptime(HdtIn,"%Y%m%d%H%M%S.%q"), Processing_End_Time = strptime(HdtOut,"%Y%m%d%H%M%S.%q"), Reception_Time = strptime(Time_in,"%Y%m%d%H%M%S.%q")
| convert ctime(Processing_Start_Time), ctime(Processing_End_Time), ctime(Reception_Time)
| table _time, ID_FAMILLE, MSG_TYP_CONV, MSG_TYP_ORIG, sRefInt, Reception_Time, Processing_Start_Time, Processing_End_Time
Based on the data, I expect 2-4 rows per single REFERENCE_VAL.
Sorry for the delay on this; no, I don't really have an answer to that one. You might open a support ticket for advice there. In my instances, I generally tried to minimize the number of events being sent to it.
Please share some anonymised sample events from both indexes, a description of what you are trying to achieve, and some expected output.
Hi Splunkers, we are currently managing a Splunk Enterprise environment previously managed by another company. As sadly often happens, no documentation was handed over, so we had to discover almost all information about the architecture ourselves. We successfully managed many tasks related to this big problem, but a few remain; in particular, the one for which I open this discussion.

The point is this: almost all ingested data flows through a couple of HFs. The data flow is, typically: Log sources -> On-prem HF -> Cloud HF (on IaaS VM) -> Cloud indexer (on IaaS VM). With a search discovered here on the community, based on internal logs, I found how to understand which Splunk component sends data to which other one. I mean: suppose I have HF on prem 1 -> HF on cloud 2; I know how to discover this by analyzing the internal logs. But what if I want to discover which on-prem HF collects the data sent to a specific index?

Let me give an example. Suppose I have this host set:

Log sources (with NO UF installed on them): Log source 1, Log source 2, Log source 3
On-prem HFs: HF on prem 1, HF on prem 2, HF on prem 3
On-cloud HF (IaaS VM): HF on Cloud 1
On-cloud indexer (IaaS VM): Indexer on cloud 1
Indexes: index1, index2, index3

At the starting point, I only know that all 3 on-prem HFs collect data and send it to the HF on cloud; from there, data is sent to the indexer. I don't know which on-prem HF collects data from which log source, nor in which index the data lands once it arrives on the indexer. I could, of course, ask the system owners what configuration has been performed on the log sources, but the idea is to discover this with a Splunk search. Is this possible? The idea is to have a search where I can see the exact flow. For example, suppose that one of the above flows is: Log source 1 -> On Prem HF 2 -> On Cloud HF -> On Cloud Indexer -> index3. I must be able to discover it.
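Assuming the on-prem HFs forward their own _internal logs up the same chain (the usual recommendation whenever outputs are configured), two metrics.log-based sketches can help reconstruct the flow. The first maps which forwarder connects to which receiver; the second should show which indexes each instance is processing data for (thruput metrics are sampled and may only report the busiest series in an interval, so low-volume sources can drop out):

index=_internal source=*metrics.log* group=tcpin_connections
| stats count by hostname sourceIp host

index=_internal source=*metrics.log* group=per_index_thruput
| stats sum(kb) as kb by host series
| rename series as index

In the first search, hostname/sourceIp identify the sending instance and host is the receiver; in the second, host is the instance emitting the metrics and series is the index the data is headed for. Combining the two lets you walk a chain like Log source 1 -> HF on prem 2 -> HF on Cloud 1 -> index3, as long as every hop forwards its internal logs.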
Try something like this
index="ss-stg-dkp" cluster_name="*" AND namespace=dcx AND (label_app="composite-*") sourcetype="kube:container:main"
| rex "\"status\":\"(?<Http_code>\d+)\""
| rex "\"evtType\":\"(?<evt_type>\w+)\""
| rex field=_raw "Status code:\s(?<code>\d+)"
| stats count(eval(Http_code>0 AND evt_type="REQUEST")) as "Totalhits" count(eval(Http_code<500 AND evt_type="REQUEST")) as "sR" count(eval(code=500)) as "error"
I did a quick check on the files in the colddb directory. There are 4 different GUIDs, which are actually the same as the GUIDs of the existing peers (which makes sense, since I used the original /opt/splunk/etc on the new RHEL9 nodes, which includes the instance.cfg holding the GUID).

$ ls -lrt colddb/ | awk -F_ '{print $5}' | sort -u
10B29386-EAD3-45F6-AFEF-6C5897D7507E
289FAAF8-810C-454E-9CF5-4DEA9C5CA3E7
332E50AC-2BE6-4FFB-96AB-3F7D612A1422
9C46DD6F-782E-4675-8E9B-90CABC42221D

And the current peers:

$ splunk list cluster-peers | grep -v ":" | grep [0-9]
10B29386-EAD3-45F6-AFEF-6C5897D7507E
289FAAF8-810C-454E-9CF5-4DEA9C5CA3E7
332E50AC-2BE6-4FFB-96AB-3F7D612A1422
42C49D52-0A71-4164-91EC-806EAEEEE085
9C46DD6F-782E-4675-8E9B-90CABC42221D

(The 42C49... GUID is from the restored node, holding all the cold buckets.)
Aha... that makes sense ... and explains a lot. I will see if I can restore the cold buckets by renaming the files / setting the correct GUID in the instance.cfg on the restored node. Thanks a lot for pointing me in the right direction.  
OK. The buckets encode several things in their directory name. Most notably, clustered non-hot buckets contain the GUID of the source indexer. So if you change the GUID of the indexer, the bucket will not match any existing indexer and will not be treated as part of the cluster (it's not explicitly written anywhere, but I suppose it will be treated as an unclustered bucket). Probably the same goes for your original problem: I suppose you had a stand-alone indexer, or distributed indexers without a cluster, and then decided to cluster your indexers. In such a case, without manual intervention, old buckets are treated as unclustered and are _not_ replicated.
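For reference, the naming convention looks roughly like this (a sketch from memory, with made-up epoch times; the GUID is one from the earlier listing). A clustered bucket carries the originating peer's GUID as the last underscore-separated field, while a standalone bucket does not, which is also why an awk -F_ '{print $5}' over the bucket directories pulls the GUID out:

db_1718960000_1718873600_42_10B29386-EAD3-45F6-AFEF-6C5897D7507E   (clustered: newest time, oldest time, local id, origin GUID)
db_1718960000_1718873600_42                                        (standalone / non-clustered)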
Hi Splunk SMEs, good day. We are facing an issue after a deployment in Splunk: we can no longer connect to the Splunk HF DB Task Server. Initially it was working fine; we upgraded the Java version from Corretto to Zulu last month and it seemed to keep working fine. After some deployment it now causes this issue. Can anyone assist me in solving this? Thanks, Mel
Thanks for the quick response! It's working as expected.
Thanks @richgalloway for your inputs. Does the volume of data being sent to Splunk help in determining which method to use between HEC and UF? For our use case we plan to send events with associated information (a JSON payload of ~400 bytes), and we may not send more than 5000 such events per day. You also mentioned having the client get acks for events sent via HEC, and we do plan to have that. Based on the volume and our use case, do you suggest we go with HEC? Also, while building an add-on, is it possible to add a query which will identify specific events as alerts and ship that with the add-on, which the customer can install in their Splunk setup?
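On the last question: an add-on can bundle saved searches and alerts in its default/savedsearches.conf, and customers get them when they install the add-on. A minimal sketch, where the stanza name, index, sourcetype, and field values are placeholders rather than anything from your actual data:

[Hypothetical App Error Alert]
search = index=main sourcetype=myapp:event level=ERROR
dispatch.earliest_time = -15m
dispatch.latest_time = now
cron_schedule = */15 * * * *
enableSched = 1
counttype = number of events
relation = greater than
quantity = 0

The customer can then adjust the schedule, threshold, and alert actions locally without modifying the add-on itself.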
Thank you. Yes, I was wrong about transforms.conf; actually I want to generate the sourcetype from elastic:auditbeat:log based on the events, as this link specifies.
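In case it is useful, the general shape of an event-based sourcetype override is a TRANSFORMS rule attached to the incoming sourcetype in props.conf. A minimal sketch, where the REGEX and the new sourcetype name are placeholders for whatever the linked post actually prescribes (this needs to live on the first full Splunk instance that parses the data, such as a heavy forwarder or the indexers):

props.conf
[elastic:auditbeat:log]
TRANSFORMS-set_sourcetype = auditbeat_set_sourcetype

transforms.conf
[auditbeat_set_sourcetype]
REGEX = "event\.module":"hypothetical_module"
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::elastic:auditbeat:hypothetical_module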