All Posts

That app already supports FMC logs.  What changes do you need?  What have you tried yourself?  How were those efforts unsuccessful?
Hello @gcusello
I have masked the field for safety. I tried passing values(paymentStatusResponse.orderCode) AS order_code, but it's not working. With the query below:

(index=index_1 OR index=index_2) (kubernetes_namespace="kube_ns" OR openshift_namespace="ose_ns") (logger="PaymentErrorHandler") "Did not observe any item or terminal signal within"
| eval clusters=coalesce(openshift_cluster, kubernetes_cluster)
| stats values(clusters) as cluster values(host) as hostname count(host) as count values(message.tracers.ek-correlation-id{}) as corr_id

I am getting output like this:

cluster  hostname  count  corr_id
hhj      yueyheh   3      1234234
                          343242
                          3423424

Now I want to add the field paymentStatusResponse.orderCode, which comes from another logger, "PaymentStatusClientImpl". The common entity between these 2 loggers is message.tracers.ek-correlation-id{}, so that my final output will be:

cluster  hostname  count  corr_id  order_code
hhj      yueyheh   3      1234234  order_1010
                          343242   order_2020
                          3423424  order_3030
Can anyone please explain the steps to migrate old data to a new server while upgrading Splunk to version 9.3? I have checked the Splunk documentation but did not understand it properly. Could anyone kindly help with this? The present Splunk version is 8.2.0.
1. Usually (yes, I know there are rare cases where it makes sense) you configure external authentication only on search heads. Indexers should generally not run the web UI (the same often goes for deployment servers and heavy forwarders).
2. Did you check _internal?
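A sketch of such an _internal check (the exact component names splunkd uses for LDAP errors vary by version, so this just narrows by keyword and groups by component):

```
index=_internal sourcetype=splunkd log_level=ERROR ldap
| stats count BY component
| sort - count
```

Reading the raw events behind the top component usually shows the underlying LDAP connection or bind failure.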
I am getting the following error while configuring LDAP on my Splunk instances (tried it on the Splunk deployment server, indexers, and HFs); I get the same error everywhere. Can someone help me understand what's going wrong?

"Encountered the following error while trying to save: Splunkd daemon is not responding: ('Error connecting to /servicesNS/admin/config_explorer/authentication/providers/LDAP: The read operation timed out',)"

I tried increasing these attributes in authentication.conf, but still no luck:

network_timeout = 1200
sizelimit = 10000
timelimit = 1500

web.conf:

[settings]
enableSplunkWebSSL = true
splunkdConnectionTimeout = 1201
Try searching for field::value instead of field=value to test whether the fields are being indexed (remember that field names _are_ case sensitive).
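For example, with a hypothetical field named status and value OK:

```
index=your_index status::OK
```

If index=your_index status=OK returns events but the :: form does not, the field is being extracted at search time rather than at index time.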
Yup. If you start lagging behind (in our case we were about 2-2.5 hours behind during midday; we would catch up during the evening and night) and Windows decides to rotate the log file, you'll probably end up missing events.
Hi @super_edition,
this means that you have INDEXED_EXTRACTIONS=JSON in your props.conf and you don't need to use spath. Please try this:

(index=index_1 OR index=index_2) (kubernetes_namespace="kube_ns" OR openshift_namespace="ose_ns") (logger="PaymentErrorHandler" OR logger="PaymentStatusClientImpl") "Did not observe any item or terminal signal within"
| eval clusters=coalesce(openshift_cluster, kubernetes_cluster)
| stats values(clusters) AS cluster values(host) AS hostname count(host) AS count values(message.tracers.ek-correlation-id{}) AS corr_id values(paymentStatusResponse.orderCode) AS order_code

Only one thing: in the screenshot the field name isn't clear; it seems there's something before paymentStatusResponse.orderCode. Can you check it? Are you sure that the field name is exactly paymentStatusResponse.orderCode?
Ciao.
Giuseppe
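For reference, index-time JSON extraction of this kind usually comes from a props.conf stanza along these lines (the sourcetype name here is a placeholder, not taken from the thread):

```
[my_json_sourcetype]
INDEXED_EXTRACTIONS = JSON
```

With INDEXED_EXTRACTIONS set on the forwarder, the JSON fields are already present at index time, which is why spath is unnecessary at search time.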
Hi @jaibalaraman , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
In a Splunk dashboard:
1. Total request number for security token/priority token, filtered by partner name
2. Duplicate request number, filtered by partner name and customer ID (to check whether the current expiration times for both tokens are appropriate)
3. Priority token usage, filtered by partner name
4. Response time analysis for security token/priority token

How do I create/add a panel for each of these 4 options?
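A starting sketch for the first panel's search (the index, sourcetype, and field names such as partner_name and token_type are assumptions about your data, not known values):

```
index=your_index sourcetype=your_token_logs token_type IN ("security", "priority")
| stats count AS total_requests BY partner_name, token_type
```

The other three panels would follow the same pattern, swapping the stats clause, e.g. a count by partner name and customer ID for duplicates, or avg of a response-time field for the response time analysis.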
Hello @gcusello
If I run the main search as below:

(index=index_1 OR index=index_2) (kubernetes_namespace="kube_ns" OR openshift_namespace="ose_ns") (logger="PaymentErrorHandler" OR logger="PaymentStatusClientImpl")

I am able to see "paymentStatusResponse.orderCode" values in the interesting fields.
Hi @super_edition , running only your main search, do you see this field in interesting fields? Ciao. Giuseppe
Hi @richgalloway, thanks for the reply, that makes it clear. I think it would be better to state explicitly that there is no third-party software used, instead of leaving it blank, just to prevent misunderstandings. Cheers!
Thanks @gcusello
I have amended the query with the changes, but the order_code column in the output is still empty. The order_code value "paymentStatusResponse.orderCode" comes from one of the 2 loggers, the one with logger name PaymentStatusClientImpl.
Hi @shanemhartley,
ingestion in Splunk is usually done using a Technical Add-On, in your case Splunk_TA_nix (https://splunkbase.splunk.com/app/833). You have to install this add-on on the Universal Forwarder, enabling the input stanzas you need. If you want to store these logs in a defined index (instead of main), you also have to add to each enabled input stanza the option:

index = <your_index>

Then you have to install this add-on also on your Search Head or your standalone Splunk server. In this way you have the logs correctly parsed and usable. For more info see https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/Getstartedwithgettingdatain and there are also more videos.
Ciao.
Giuseppe
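For example, an enabled monitor stanza in the add-on's local/inputs.conf might look like this (the index name is left as the placeholder above; the monitored path and sourcetype follow the pattern of the stanzas shipped with Splunk_TA_nix):

```
[monitor:///var/log/secure]
disabled = false
sourcetype = linux_secure
index = <your_index>
```

Copying the default stanza into local/ and flipping disabled to false keeps your changes safe across add-on upgrades.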
Hi @super_edition,
first of all, don't use the search command after the main search, because your search will be slower:

(index=index_1 OR index=index_2) (kubernetes_namespace="kube_ns" OR openshift_namespace="ose_ns") (logger="PaymentErrorHandler" OR logger="PaymentStatusClientImpl") "Did not observe any item or terminal signal within"
| spath "paymentStatusResponse.orderCode"
| eval clusters=coalesce(openshift_cluster, kubernetes_cluster)
| stats values(clusters) as cluster, values(host) as hostname, count(host) as count, values(message.tracers.ek-correlation-id{}) as corr_id, values(paymentStatusResponse.orderCode) as order_code

Also, the asterisk isn't mandatory in a string like yours. Then review the use of the spath command at https://docs.splunk.com/Documentation/Splunk/9.3.1/SearchReference/Spath :

(index=index_1 OR index=index_2) (kubernetes_namespace="kube_ns" OR openshift_namespace="ose_ns") (logger="PaymentErrorHandler" OR logger="PaymentStatusClientImpl") "Did not observe any item or terminal signal within"
| spath output=orderCode path=paymentStatusResponse.orderCode
| eval clusters=coalesce(openshift_cluster, kubernetes_cluster)
| stats values(clusters) as cluster values(host) as hostname count(host) as count values(message.tracers.ek-correlation-id{}) as corr_id values(orderCode) as order_code

Ciao.
Giuseppe
Hi @mursidehsani , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Hi @jaibalaraman,
you have to change the time format in the strftime command, applying the format you want by following the format variables at https://docs.splunk.com/Documentation/Splunk/9.3.1/SearchReference/Commontimeformatvariables :

| makeresults
| eval refresh_time=strftime(_time, "%A,%d/%m/%Y %Z %H:%M:%S")
| table refresh_time

Ciao.
Giuseppe
Hi @gcusello  It works! Thank you so much for your help.
Hi @mursidehsani,
please try this:

<your_search>
| rex "(?<time>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}).*Ink Type '(?<ink_type>[^']+)'"
| stats values(ink_type) AS ink_type BY time
| sort - time
| head 1
| mvexpand ink_type

Ciao.
Giuseppe