All Posts

I have a custom command that populates a lookup, but when I run it, the script only executes 5-20 times (the number changes every run) even though the search returns 20,000+ results. I want to run a query that feeds its results into a custom script, which then populates a lookup, almost as if it were recursive. I'm thinking this is a performance issue with the script (it's a Python script, so it's not the fastest). This is an example of what the command looks like:

index="*" host="example.org" | map search="| customcommand \"$src$\""
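A side note in case it helps: map only runs its subsearch for up to maxsearches input results, and the default is 10, which would roughly match the script firing only a handful of times against 20,000+ results. A minimal sketch that raises the limit (the value here is only an illustration, and very large map runs can be slow and resource-heavy):

index="*" host="example.org" | map search="| customcommand \"$src$\"" maxsearches=25000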
I have syslogs coming into Splunk that need some cleaning up - it's essentially JSON with a few extra characters here and there (but enough to be improperly formatted). I'd really like to be able to use KV_MODE = json to auto extract fields, but those additional characters prevent this from happening. So I wrote a few SEDCMDs to remove those additional characters and applied the following stanzas to a new sourcetype: However, in our distributed Splunk Cloud environment, these SEDCMDs are not working. There are no errors in the _internal index pertaining to this sourcetype, and I can tell the sourcetype is applying because any key/value pairs in the data that pop up before the extra characters are automatically extracted at search-time as expected (so at least I know the KV_MODE stanza is trying to work). Because the SEDCMDs are not removing the extra characters, the other fields are not being auto-extracted. In my all-in-one test environment, the SEDCMDs work perfectly alongside KV_MODE to clean up the data and pull out the fields. I can't quite determine why it isn't working in Cloud - the syslog servers forwarding this data have Universal Forwarders so I understand why the sourcetype isn't applying at that level... but this sourcetype should be hitting the indexers and applied there, no? What am I missing?   
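For reference, a minimal props.conf sketch of the shape this usually takes; the sourcetype name and sed expression are placeholders, not the poster's actual config. The SEDCMD settings are index-time, so they have to reach the first full Splunk instance that parses the data (in Splunk Cloud, typically an app installed on the indexer tier of the cloud stack), while KV_MODE is search-time and must be present on the search heads:

# props.conf - placeholder sourcetype name and sed pattern
[my_syslog_json]
# Index-time: strip everything before the first '{' so the event parses as JSON (example pattern only)
SEDCMD-strip_prefix = s/^[^{]+//
# Search-time: automatic JSON field extraction (needed on the search head tier)
KV_MODE = json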
Hi @santoshpatil01,
Your request is lacking the information needed for anyone to help precisely. It is not clear what query will serve as your base search, so you may have to adjust the following to your data. Basically, you'll need a base query that returns all the raw data for the tokens, and then you create the panels accordingly. If you don't have input fields to set the tokens, you'll need to add them as well, either on each panel or in the dashboard header, depending on where the filters need to apply. In each panel, reference the base search (making it a linked search) and use a query like the following:

Total request number for security token/priority token filtered by partner name:
| search partner=$token.partner$ | stats count as "Total Requests" by security_token, priority_token

Duplicate request number filtered by partner name and customer ID (to check whether the current expiration time for both tokens is appropriate):
| search partner=$token.partner$ AND customerId=$token.customerId$ | stats count by partner, customerId | where count>1

Priority token usage filtered by partner name:
| search partner=$token.partner$ | stats count by token_name

Response time analysis for security token/priority token:
| stats avg(response_time) as response_time by security_token, priority_token

Or, if you need the 90th percentile instead:
| stats p90(response_time) as response_time by security_token, priority_token

Again, this only scratches the surface, as I don't know your query, field names, or other details, but it should be enough for you to kick this off and play around.
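To make the base/linked search idea concrete, here is a minimal Simple XML sketch of one panel, assuming classic dashboards; the base query, token name, and panel title are placeholders, since the real base search is not known. The other panels would follow the same pattern, each with its own post-process query.

<form>
  <label>Token usage</label>
  <fieldset submitButton="false">
    <input type="text" token="token.partner">
      <label>Partner</label>
    </input>
  </fieldset>
  <!-- Base search: replace with the real query that returns the raw token data -->
  <search id="base">
    <query>index=my_index sourcetype=my_sourcetype</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <title>Total requests by token</title>
      <table>
        <!-- Linked (post-process) search built on top of the base search -->
        <search base="base">
          <query>| search partner=$token.partner$ | stats count as "Total Requests" by security_token, priority_token</query>
        </search>
      </table>
    </panel>
  </row>
</form>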
Luckily, at the beginning of the search Splunk is actually quite smart about optimizing out some common issues. For example, if I run this

(index=index_1 OR index=index_2) (kubernetes_namespace="kube_ns" OR openshift_namespace="ose_ns") (logger="PaymentErrorHandler" OR logger="PaymentStatusClientImpl") | search "* Did not observe any item or terminal signal within*"

on my home Splunk instance (let's ignore the fact that I obviously won't have any matching events, but that's not the point) and look at the job details dashboard, I can see this as the optimized search:

| search ("* Did not observe any item or terminal signal within*" (index=index_1 OR index=index_2) (kubernetes_namespace="kube_ns" OR openshift_namespace="ose_ns") (logger="PaymentErrorHandler" OR logger="PaymentStatusClientImpl"))

And if we go to the job log, we can see this as the base lispy search:

[ AND any did item not ns observe or signal terminal within* [ OR index::index_1 index::index_2 ] [ OR kube ose ] [ OR paymenterrorhandler paymentstatusclientimpl ] ]

As we can see, Splunk was not only able to "flatten" both searches into a single one but also noticed that the initial wildcard was before a major breaker and as such wouldn't affect the sought terms. But as a general rule of thumb - yes, it's good practice to keep your searches "tidy" and avoid wildcards at the beginning of search terms.
Hello, I tried the command, but same results - always 68 million events. I'll try contacting support. Thanks for your help!
Hi @super_edition,
OK, in other words, you need to do a join with another search, is that correct?
If you don't have too many events, you could use the join command. If instead you're sure that the message.tracers.ek-correlation-id{} field is present in all events, you could use this field as the correlation key:

(index=index_1 OR index=index_2) (kubernetes_namespace="kube_ns" OR openshift_namespace="ose_ns") ((logger="PaymentErrorHandler" "Did not observe any item or terminal signal within") OR logger="PaymentStatusClientImpl")
| eval clusters=coalesce(openshift_cluster, kubernetes_cluster)
| stats values(clusters) as cluster values(host) as hostname count(host) as count values(paymentStatusResponse.orderCode) AS order_code BY message.tracers.ek-correlation-id{}

Ciao.
Giuseppe
First and foremost - don't do two things at once: either upgrade and then migrate, or migrate and then upgrade. Also - what exactly don't you understand? It's impossible to give step-by-step instructions for something like this without at least some knowledge and understanding on your side of what you're doing. Should anything go wrong, how would you be able to troubleshoot and fix your installation?
That app already supports FMC logs.  What changes do you need?  What have you tried yourself?  How were those efforts unsuccessful?
Hello @gcusello
I have masked the field names for safety. I tried passing values(paymentStatusResponse.orderCode) AS order_code, but it's not working. With the query below:

(index=index_1 OR index=index_2) (kubernetes_namespace="kube_ns" OR openshift_namespace="ose_ns") (logger="PaymentErrorHandler") "Did not observe any item or terminal signal within"
| eval clusters=coalesce(openshift_cluster, kubernetes_cluster)
| stats values(clusters) as cluster values(host) as hostname count(host) as count values(message.tracers.ek-correlation-id{}) as corr_id

I am getting output like this:

cluster   hostname   count   corr_id
hhj       yueyheh    3       1234234, 343242, 3423424

Now I want to add the field paymentStatusResponse.orderCode, which comes from another logger, "PaymentStatusClientImpl". The common entity between these two loggers is message.tracers.ek-correlation-id{}. The final output should look like:

cluster   hostname   count   corr_id                    order_code
hhj       yueyheh    3       1234234, 343242, 3423424   order_1010, order_2020, order_3030
Can anyone tell me the steps to migrate the old data to a new server while upgrading Splunk to version 9.3? I have checked the Splunk documentation but did not understand it properly. Could anyone please help with this? The present Splunk version is 8.2.0.
1. Usually (yes, I know there are rare cases where it makes sense) you configure external authentication only on search heads. Indexers should generally not run the web UI (and often neither should the DS or HFs).
2. Did you check _internal?
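If it helps, a starting point for that _internal check could look something like this; the plain LDAP keyword is just a rough first-pass filter rather than a specific component name:

index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN) LDAP
| stats count by component, log_level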
I am getting the following error while configuring LDAP on my Splunk instances (tried it on the Splunk deployment server, indexers, and HFs); I get the same error everywhere. Can someone help me understand what's going wrong?

"Encountered the following error while trying to save: Splunkd daemon is not responding: ('Error connecting to /servicesNS/admin/config_explorer/authentication/providers/LDAP: The read operation timed out',)"

I tried increasing these attributes in authentication.conf, but still no luck:

network_timeout = 1200
sizelimit = 10000
timelimit = 1500

web.conf:

[settings]
enableSplunkWebSSL = true
splunkdConnectionTimeout = 1201
Try searching for field::value instead of field=value to test whether the fields are getting indexed (remember that field names _are_ case sensitive).
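For example, with placeholder index, field, and value names:

index=my_index my_field::my_value

If that returns nothing while index=my_index my_field=my_value does, the field is only being extracted at search time rather than being indexed.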
Yup. If you start lagging behind (in our case we were about 2-2.5 hours behind during midday; we would catch up during the evening and night) and Windows decides to rotate the log file, you'll probably end up missing events.
Hi @super_edition,
This means that you have INDEXED_EXTRACTIONS=JSON in your props.conf and you don't need to use spath. Please try this:

(index=index_1 OR index=index_2) (kubernetes_namespace="kube_ns" OR openshift_namespace="ose_ns") (logger="PaymentErrorHandler" OR logger="PaymentStatusClientImpl") "Did not observe any item or terminal signal within"
| eval clusters=coalesce(openshift_cluster, kubernetes_cluster)
| stats values(clusters) AS cluster values(host) AS hostname count(host) AS count values(correlation-id{}) AS corr_id values(paymentStatusResponse.orderCode) AS order_code

Only one thing: in the screenshot the field name isn't clear; it seems there's something before paymentStatusResponse.orderCode. Can you check it? Are you sure the field name is exactly paymentStatusResponse.orderCode?
Ciao.
Giuseppe
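For background on that remark: JSON fields can appear as interesting fields either via index-time or search-time extraction, and which of the two is in play here isn't visible from the thread. A minimal props.conf sketch of the two options, with a placeholder sourcetype name:

# props.conf - placeholder sourcetype name; pick one approach
[my_json_sourcetype]
# Index-time JSON extraction, configured where the data is parsed (e.g. the UF for structured data)
INDEXED_EXTRACTIONS = json
# ...or, alternatively, search-time JSON extraction configured on the search heads:
# KV_MODE = json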
Hi @jaibalaraman, good for you, see you next time! Ciao and happy splunking. Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
In a Splunk dashboard:
1. Total request number for security token/priority token filtered by partner name
2. Duplicate request number filtered by partner name and customer ID (to check if the current expiration time for both tokens is appropriate)
3. Priority token usage filtered by partner name
4. Response time analysis for security token/priority token

How do I create/add panels for these 4 options?
Hello @gcusello
If I run the main search as below:

(index=index_1 OR index=index_2) (kubernetes_namespace="kube_ns" OR openshift_namespace="ose_ns") (logger="PaymentErrorHandler" OR logger="PaymentStatusClientImpl")

I am able to see "paymentStatusResponse.orderCode" values in the interesting fields.
Hi @super_edition , running only your main search, do you see this field in interesting fields? Ciao. Giuseppe
Hi @richgalloway, thanks for the reply, that makes it clear. I think it would be better to state explicitly that there is no third-party software used, instead of leaving it blank, just to prevent misunderstandings. Cheers!