All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Thank you, the solution worked. I tried 4 backslashes and noticed that you used 3; is there any important difference?
How do I configure my Splunk dashboard to display results from 8 AM to the current time by default? I see options for Today or for a specific date and time range, but not a combination of both.
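A minimal sketch of one way to express this range with Splunk relative time modifiers: @d snaps to midnight today, and +8h adds 8 hours, so earliest=@d+8h latest=now covers "today from 8 AM until now". The index name below is a placeholder; the same values can be set as a time input's default earliest and latest in the dashboard source.

index=your_index earliest=@d+8h latest=now
| stats count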
Thanks, this is perfect. Exactly what I needed.
The parameters are documented in the Admin Manual and in $SPLUNK_HOME/etc/system/README/savedsearches.conf.spec. Splunk's JavaScript SDK is documented at dev.splunk.com.
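For reference, a minimal sketch of a scheduled search stanza in savedsearches.conf; the stanza name and search are hypothetical, and the settings shown are the common scheduling parameters from the spec file:

[Hypothetical Daily Error Report]
# the search to run
search = index=main sourcetype=app_logs level=ERROR | stats count by host
# run every day at 06:00
cron_schedule = 0 6 * * *
enableSched = 1
# time range each scheduled run covers
dispatch.earliest_time = -24h
dispatch.latest_time = now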
There are no diagrams of how a Splunk search head works.  All we need to know is that user queries are sent to indexers and the responses from the indexers are collated and returned to the user.  Any flow diagram would have a single box labeled "Search Head".
Hi @Narendra_Rao,

AWX appears to support streaming logs directly to the Splunk HTTP Event Collector. See https://ansible.readthedocs.io/projects/awx/en/latest/administration/logging.html#splunk and https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector or https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector.

Differentiation by environment depends on your deployment architecture. If the host field isn't sufficient, the AWX cluster_host_id log field may be. You can define a simple lookup in Splunk to set, for example, an environment field based on the host or cluster_host_id field value.

I've not used AWX, but see https://ansible.readthedocs.io/projects/awx/en/latest/administration/logging.html for the AWX log schema. Job events and possible job status changes seem like a good starting point. The app dashboards provide a few search examples that may be useful for building your own searches.
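A minimal sketch of the lookup approach, assuming a hypothetical CSV named awx_environments.csv with cluster_host_id and environment columns:

cluster_host_id,environment
awx-qa-1,qa
awx-prod-1,production

A search can then enrich events with the environment (index and field names are placeholders):

index=awx
| lookup awx_environments cluster_host_id OUTPUT environment
| stats count by environment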
Hi @ririzk, Coordinating support between two vendors is challenging, but if using Duo's recommended Splunk configuration or browsing https://help.duo.com/s/global-search/Splunk%20Connector doesn't help solve your problem, you may need to contact Duo support directly.
Hi @priyanka2887,

At which layer? TLS? HTTP? Splunk?

TLS compression is largely deprecated, vulnerable to well-known attacks, and not (as far as I know) available in core JDK implementations of TLS 1.2+.

HttpEventCollectorLogbackAppender's underlying HTTP implementation, OkHttp, should compress any payload over 1024 bytes by default. See https://github.com/square/okhttp/blob/master/okhttp/src/main/kotlin/okhttp3/OkHttpClient.kt. HttpEventCollectorLogbackAppender doesn't expose a method or property to modify the threshold. See https://github.com/splunk/splunk-library-javalogging/blob/main/src/main/java/com/splunk/logging/HttpEventCollectorLogbackAppender.java and https://github.com/splunk/splunk-library-javalogging/blob/main/src/main/java/com/splunk/logging/HttpEventCollectorSender.java. If you want to add support for modifying the compression threshold, see the Contributing section at https://github.com/splunk/splunk-library-javalogging/blob/main/README.md.

Raw data is always compressed in Splunk, although the algorithm is configurable. See the journalCompression setting in https://docs.splunk.com/Documentation/Splunk/latest/Admin/Indexesconf.
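A minimal indexes.conf sketch of that journal setting; the index name is a placeholder, gzip is the long-standing default, and lz4/zstd are available on recent Splunk versions:

[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# compression algorithm for the rawdata journal: gzip (default), lz4, or zstd
journalCompression = zstd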
I'm currently working on integrating Splunk with AWX to monitor Ansible automation jobs. I'm looking for guidance on the best practices for sending AWX job logs to Splunk. Specifically, I'm interested in:
- Any existing plugins or recommended methods for forwarding AWX logs to Splunk.
- How to differentiate logs from QA and production environments within Splunk.
- Examples of SPL queries to identify failed jobs or performance metrics.
Any advice or resources you could share would be greatly appreciated. Thanks.
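On the last point, a minimal sketch of the kind of query that could surface failed jobs; the index, sourcetype, and field names here are hypothetical and depend on how the AWX logging integration maps its fields:

index=awx sourcetype=awx:job_events status=failed
| stats count AS failures, latest(_time) AS last_failure BY job_name
| convert ctime(last_failure)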
Hey all, super new to Splunk administration - I'm having issues with Bro logs being indexed properly. I have 2 days of logs from a folder, but when I search the index - despite Indexes showing millions of events existing - I only see the Bro tunnel logs, and they're for the wrong day. I'm not even looking to set up all the sourcetypes and extractions at this moment; I just want all of the logs ingested and searchable on the correct day/time. I've played with the Bro apps and switching the config around in props.conf. I've deleted the fishbucket folder to start over and force the re-indexing. Overall I feel like there's another step I'm missing.

inputs.conf:

[monitor://C:\bro\netflow]
disabled = false
host = MyHost
index = bro
crcSalt = <SOURCE>

1) Why are the tunnel logs being indexed for the wrong day? How do I fix it?
2) Where are the rest of the logs, and how do I troubleshoot?
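Wrong-day events usually point to timestamp extraction. As a hedged starting point, assuming these are standard tab-separated Zeek/Bro logs whose first column is the epoch-seconds ts field, a props.conf sketch like this tells Splunk where to find the timestamp (the sourcetype name is a placeholder):

[bro:netflow]
SHOULD_LINEMERGE = false
# first column of each Zeek TSV event is an epoch timestamp, e.g. 1558535124.342182
TIME_PREFIX = ^
TIME_FORMAT = %s.%6N
MAX_TIMESTAMP_LOOKAHEAD = 20

If the timestamp can't be parsed, Splunk falls back to the file's modification time or the previous event's time, which can produce exactly the wrong-day symptom described.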
Hi @Atchyuth_P,

First of all, you cannot replicate old data in a cluster, so if you give the clustered indexes the same names as the old non-clustered indexes, you lose your old data. The best approach is to use different names and create, for your searches, eventtypes that cover both indexes (clustered and non-clustered), then wait for the old indexes to age out naturally: they will receive no new data and will empty once their retention time is exceeded.

Otherwise, you could export all your data from the old indexes (divided by sourcetype and host) and then import it into the new clustered indexes, but, as I said, it's a very long job!

Ciao.
Giuseppe
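A minimal sketch of the eventtype approach, with hypothetical index names (web_old for the legacy non-clustered index, web for the new clustered one):

eventtypes.conf:

[web_all]
search = index=web_old OR index=web

Searches can then use eventtype=web_all, and once the old index ages out the eventtype keeps working unchanged.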
Hi @ashwinve1385,

In my opinion, you have two solutions:
1) Install only the TA on the UF and both the TA and the app in Splunk Cloud, so you are sure to have data from the UF (using the TA) and the KV Store and app on Splunk Cloud.
2) Move the KV Store from the TA to the app, and then install the TA on the UF and the app on Splunk Cloud.

If you have all the parsing rules (props.conf and transforms.conf) in both the TA and the app, I prefer the second solution; if instead you have the parsing rules only in the TA, the first one is preferable.

Ciao.
Giuseppe
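For the second option, a minimal sketch of what a KV Store collection defined in the app looks like; the collection, lookup, and field names are hypothetical:

collections.conf (in the app):

[my_collection]

transforms.conf (in the app):

[my_kvstore_lookup]
external_type = kvstore
collection = my_collection
fields_list = _key, host, environment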
Getting the error 'Error occurred while trying to authenticate. Please try Again.' while authenticating to Salesforce from Splunk.
Hi @ss2,

I don't know about the US, but I suppose it's the same approach as in Italy: the best way to reach Splunk Sales is through a Splunk Partner.

Ciao.
Giuseppe
Hi @sivaranjani,

Let me understand: you want to calculate the average per minute over a period and then display the count and the average per minute for each app, is that correct?

If this is your requirement, please try something like this:

index=abc cf_space_name=prod-ad0000123 cf_app_name IN (RED,Blue,Green) ("Initiating " OR "Protobuf message received" OR "Event Qualification Determined")
| bucket _time span=1m
| stats count(eval(cf_app_name == "RED")) AS RedVolume_by_min count(eval(cf_app_name == "Blue")) AS BlueVolume_by_min count(eval(cf_app_name == "Green")) AS GreenVolume_by_min BY _time
| stats sum(RedVolume_by_min) AS RedVolume sum(BlueVolume_by_min) AS BlueVolume sum(GreenVolume_by_min) AS GreenVolume avg(RedVolume_by_min) AS RedVolume_avg avg(BlueVolume_by_min) AS BlueVolume_avg avg(GreenVolume_by_min) AS GreenVolume_avg
| eval estimate = (RedVolume + BlueVolume - GreenVolume) / GreenVolume_avg

Ciao.
Giuseppe
Hi @saurabhatsplunk,

The order is the one you have in your result list before the outputlookup command, so the correct question is: how do I add a row at the first position of the result list?

If you could share your search, I could help you.

Ciao.
Giuseppe
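A minimal sketch of one way to prepend a row, assuming a hypothetical lookup my_lookup.csv with user and status fields: build the new row first, append the existing lookup contents after it, and write the whole result back:

| makeresults
| eval user="new_user", status="active"
| fields user status
| append [| inputlookup my_lookup.csv]
| outputlookup my_lookup.csv

The new row ends up first because append places the existing rows after the generated one.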
Hi @AnanthaS, good for you, see you next time!

Ciao and happy splunking.
Giuseppe

P.S.: Karma Points are appreciated by all the contributors.
eventEndsAt and eventStartsFrom are epoch timestamps expressed in milliseconds, while now() is also an epoch timestamp but expressed in seconds, not milliseconds. I will rename the columns, thanks.
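A minimal sketch of normalizing the units before comparing, assuming the field names above:

| eval eventEndsAt_s = eventEndsAt / 1000
| eval is_past = if(eventEndsAt_s < now(), "yes", "no")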
Hi All, I want to add an entry on the first row of my lookup. I know how to append an entry using outputlookup, but is there any way to prepend an entry on the first row of the lookup?
This is not a question, is it? Here are some basics of asking an answerable question.
- Illustrate data in text format (anonymize as needed) - be it raw events, extracted fields, or output from a preceding search. Illustrate or explain any characteristics that are perhaps helping or preventing you from reaching desired results.
- Illustrate desired output (in text format unless the question is about visualization) corresponding to illustrated data.
- Explain the logical connection between illustrated data and desired output, all without SPL.
- If you already tried some SPL, also illustrate actual output from that illustrated data, then explain how actual output differs from desired output if that is not painfully obvious.