All Posts

Use the advanced settings in the time range picker: set the earliest time to @d+8h and the latest time to now.
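For reference, the same window can also be pinned inline in a panel's search with time modifiers instead of a picker default (a minimal sketch; the index and sourcetype are placeholders):

index=your_index sourcetype=your_sourcetype earliest=@d+8h latest=now
| timechart span=15m count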
Use your search in an alert and add the following: | where Avg > 1000. Then set the time range for the search to the last 15 minutes and the alert trigger condition to fire when the number of results is greater than zero.
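For illustration, a minimal sketch of the complete alert search; the first line is a placeholder for the base search already in your query, the aggregation lines are the ones from your existing query, and the final where is the added threshold:

<your base search>
| bucket _time span=1m
| stats avg(TimeT) as Avg by msg.Service
| where Avg > 1000

Then schedule the alert over the last 15 minutes (for example, earliest=-15m@m latest=now) and set the trigger condition to "number of results is greater than 0".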
Hello, I'm using Splunk Cloud and I have a lot of saved searches (alerts), dashboards, and reports that I need to move from one app to another. I have lists that map each saved search to the relevant app. Is there a way to do this with the API, or any other way that isn't manually one by one? Thanks
Sample data of the original log:

[{"PhoneNumber":"+1 450555338","AlternativePhoneNumber":null,"Email":null,"VoiceOnlyPhoneNumber":null}]
[{\"PhoneNumber\":\"+20 425554005\",\"AlternativePhoneNumber\":\"+1 455255697\",\"Email\":\"Dam@test.com.us\",\"VoiceOnlyPhoneNumber\":null}]"}
[{\"PhoneNumber\":\"+1 459551561\",\"AlternativePhoneNumber\":\"+1 6155555533\",\"Email\":null,\"VoiceOnlyPhoneNumber\":\"+1 455556868\"}]

Do you mean to say that some logs contain valid JSON and some contain quote-escaped JSON? Or was the first entry a misprint, and all logs are in fact quote-escaped JSON, like the following?

log
[{\"PhoneNumber\":\"+1 450555338\",\"AlternativePhoneNumber\":null,\"Email\":null,\"VoiceOnlyPhoneNumber\":null}]
[{\"PhoneNumber\":\"+20 425554005\",\"AlternativePhoneNumber\":\"+1 455255697\",\"Email\":\"Dam@test.com.us\",\"VoiceOnlyPhoneNumber\":null}]
[{\"PhoneNumber\":\"+1 459551561\",\"AlternativePhoneNumber\":\"+1 6155555533\",\"Email\":null,\"VoiceOnlyPhoneNumber\":\"+1 455556868\"}]

In this illustration, I assume that the "original log" contains some additional elements and that only one field (named log) contains the escaped JSON, because it is very unreasonable to escape quotation marks if that is the complete log. If, as I speculate, all log values are escaped, you should aim at reconstructing the JSON rather than using rex to treat it as text. So, I recommend

| rex field=log mode=sed "s/\\\\\"/\"/g"
| spath input=log path={}
| mvexpand {}
| spath input={}

Using Splunk's built-in JSON handling is more robust than any regex you can craft. From the mock data, the above will give you

AlternativePhoneNumber  Email            PhoneNumber    VoiceOnlyPhoneNumber
null                    null             +1 450555338   null
+1 455255697            Dam@test.com.us  +20 425554005  null
+1 6155555533           null             +1 459551561   +1 455556868

This is the emulation for the data:

| makeresults
| eval log = mvappend("[{\\\"PhoneNumber\\\":\\\"+1 450555338\\\",\\\"AlternativePhoneNumber\\\":null,\\\"Email\\\":null,\\\"VoiceOnlyPhoneNumber\\\":null}]",
    "[{\\\"PhoneNumber\\\":\\\"+20 425554005\\\",\\\"AlternativePhoneNumber\\\":\\\"+1 455255697\\\",\\\"Email\\\":\\\"Dam@test.com.us\\\",\\\"VoiceOnlyPhoneNumber\\\":null}]",
    "[{\\\"PhoneNumber\\\":\\\"+1 459551561\\\",\\\"AlternativePhoneNumber\\\":\\\"+1 6155555533\\\",\\\"Email\\\":null,\\\"VoiceOnlyPhoneNumber\\\":\\\"+1 455556868\\\"}]")
| mvexpand log
``` data emulation above ```
I have a query that displays average duration. How do I modify the query to alert if avg(duration) is greater than 1000 over the last 15 minutes?

index=tra cf_space_name="pr" "cf_app_name":"Sch" "msg"."Logging Duration" AND NOT "DistributedLockProcessor"
| rename msg.DurationMs as TimeT
| table _time TimeT msg.Service
| bucket _time span=1m
| stats avg(TimeT) as "Avg" by msg.Service
This removes null and uid from the target group.

| search operationName="Add member to group"
| stats count by "properties.initiatedBy.user.userPrincipalName", "properties.targetResources{}.userPrincipalName", "properties.targetResources{}.modifiedProperties{}.newValue", operationName, _time
``` removes uid ```
| regex properties.targetResources{}.modifiedProperties{}.newValue!=".{8}-.{4}-.{4}-.{4}-.{12}"
``` removes null value ```
| search NOT properties.targetResources{}.modifiedProperties{}.newValue="null"
| rename "properties.initiatedBy.user.userPrincipalName" as initiated_user, "properties.targetResources{}.userPrincipalName" as target_user, "properties.targetResources{}.modifiedProperties{}.newValue" as group_name
| eval group = replace(group_name, "\"", "")
| eval initiated_user = lower(initiated_user), target_user = lower(target_user)
In my example, I use 3 backslashes when creating the sample data. To get \" in a quoted string, you need to escape the backslash (\\) and the quote (\"), resulting in \\\". In the regex, I avoided the need to match on backslashes, so any backslash is just the escape character. However, in my alternative method, you'll notice that there are 5 backslashes in a row. The processing of the escape characters happens once for the string itself, taking \\\\\" down to \\", and then once for the regex, taking \\" down to \".
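For illustration, a minimal sketch showing both layers of escaping in one run (the field name and value here are made up, not from the original data):

| makeresults
``` string-literal escaping: \\\" in the eval expression becomes \" in the field value (3 backslashes) ```
| eval raw = "{\\\"PhoneNumber\\\":\\\"+1 450555338\\\"}"
``` the pattern \\\\\" is unescaped once by the string parser to \\", and that regex then matches the literal two characters \" (5 backslashes) ```
| eval fixed = replace(raw, "\\\\\"", "\"")
| table raw fixed

Here raw comes out as {\"PhoneNumber\":\"+1 450555338\"} and fixed as {"PhoneNumber":"+1 450555338"}.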
Thank you, the solution worked. I tried 4 backslashes and noticed that you used 3 - is there any important difference?
How do I configure my Splunk dashboard to display results from 8 AM to the current time by default? I see options for Today or a specific date and time range, but not a combination of both.
Thanks, this is perfect. Exactly what I needed.
The parameters are documented in the Admin Manual and in $SPLUNK_HOME/etc/system/README/savedsearches.conf.spec. Splunk's JavaScript SDK is documented at dev.splunk.com.
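For illustration, a minimal savedsearches.conf sketch of a scheduled alert; all values here are assumptions, shown only to indicate the kinds of parameters the spec file documents:

[Average duration above 1000]
search = <your base search> | stats avg(TimeT) as Avg by msg.Service | where Avg > 1000
enableSched = 1
cron_schedule = */15 * * * *
dispatch.earliest_time = -15m@m
dispatch.latest_time = now
counttype = number of events
relation = greater than
quantity = 0
alert.track = 1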
There are no diagrams of how a Splunk search head works.  All we need to know is that user queries are sent to indexers and the responses from the indexers are collated and returned to the user.  Any flow diagram would have a single box labeled "Search Head".
Hi @Narendra_Rao, AWX appears to support streaming logs directly to Splunk HTTP Event Collector. See https://ansible.readthedocs.io/projects/awx/en/latest/administration/logging.html#splunk and https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector or https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector. Differentiation by environment depends on your deployment architecture. If the host field isn't sufficient, the AWX log cluster_host_id field may be. You can define a simple lookup in Splunk to set, for example, an environment field based on the host or cluster_host_id field value. I've not used AWX, but see https://ansible.readthedocs.io/projects/awx/en/latest/administration/logging.html for the AWX log schema. Job events and possible job status changes seem like a good starting point. The app dashboards provide a few search examples that may be useful for building your own searches.
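For illustration, a couple of sketches along those lines; the index, sourcetype, field names, and lookup name are assumptions about your environment, not verified AWX field names:

index=awx sourcetype=awx:job_events
``` assumes a lookup definition named awx_environments mapping cluster_host_id to environment ```
| lookup awx_environments cluster_host_id OUTPUT environment

index=awx status=failed
``` assumes a status field carrying the job outcome and a job_name field ```
| stats count by environment, job_name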
Hi @ririzk,  Coordinating support between two vendors is challenging, but if using Duo's recommended Splunk configuration or browsing https://help.duo.com/s/global-search/Splunk%20Connector doesn't help solve your problem, you may need to contact Duo support directly.
Hi @priyanka2887, At which layer? TLS? HTTP? Splunk? TLS compression is largely deprecated, vulnerable to well-known attacks, and not (as far as I know) available in core JDK implementations of TLS 1.2+. HttpEventCollectorLogbackAppender's underlying HTTP implementation, OkHttp, should compress any payload over 1024 bytes by default. See https://github.com/square/okhttp/blob/master/okhttp/src/main/kotlin/okhttp3/OkHttpClient.kt. HttpEventCollectorLogbackAppender doesn't expose a method or property to modify the threshold. See https://github.com/splunk/splunk-library-javalogging/blob/main/src/main/java/com/splunk/logging/HttpEventCollectorLogbackAppender.java and https://github.com/splunk/splunk-library-javalogging/blob/main/src/main/java/com/splunk/logging/HttpEventCollectorSender.java. If you want to add support for modifying the compression threshold, see the Contributing section at https://github.com/splunk/splunk-library-javalogging/blob/main/README.md.  Raw data is always compressed in Splunk, although the algorithm is configurable. See the journalCompression setting in https://docs.splunk.com/Documentation/Splunk/latest/Admin/Indexesconf.    
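For illustration, a minimal indexes.conf sketch for the journal compression setting mentioned above (the index name and paths are placeholders; check the indexes.conf documentation for the values supported by your version, e.g. gzip, lz4, or zstd):

[my_index]
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
journalCompression = zstd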
I'm currently working on integrating Splunk with AWX to monitor Ansible automation jobs. I'm looking for guidance on the best practices for sending AWX job logs to Splunk. Specifically, I'm interested in:
- Any existing plugins or recommended methods for forwarding AWX logs to Splunk.
- How to differentiate logs from QA and production environments within Splunk.
- Examples of SPL queries to identify failed jobs or performance metrics.
Any advice or resources you could share would be greatly appreciated. Thanks.
Hey all, super new to Splunk administration - I'm having issues with the Bro logs being indexed properly. I have 2 days of logs from a folder, but when I go and search the index - despite Indexes showing millions of events existing - I only see the Bro tunnel logs, and they're for the wrong day. I'm not even looking to set up all the sourcetypes and extractions at this moment; I just want all of the logs ingested and searchable on the correct day/time. I've played with the Bro apps and switched the config around in props.conf. I've deleted the fishbucket folder to start over and force re-indexing. Overall I feel like there's another step I'm missing.

inputs.conf:

[monitor://C:\bro\netflow]
disabled = false
host = MyHost
index = bro
crcSalt = <SOURCE>

1) Why are the tunnel logs being indexed for the wrong day? How do I fix it?
2) Where are the rest of the logs, and how do I troubleshoot?
Hi @Atchyuth_P , first of all, you cannot replicate old data in a cluster, so if you use the same names for the clustered indexes as for the old non-clustered indexes, you lose your old data. The best approach is to use different names and create eventtypes in your searches that use both indexes (clustered and non-clustered), waiting for the natural end of the old indexes, which will not receive new data and will empty out once their retention time is exceeded. Otherwise, you could (but it's a very long job) export all your data from the old indexes (divided by sourcetype and host) and then import it into the new clustered indexes, but, as I said, it's a long job! Ciao. Giuseppe
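For illustration, a minimal eventtypes.conf sketch of that approach (the eventtype and index names are placeholders):

[web_logs]
search = index=web_old OR index=web_new

Searches can then reference eventtype=web_logs and keep returning results from both indexes while the old one ages out.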
Hi @ashwinve1385 , in my opinion, you have two solutions: 1) install only the TA on the UF and both the TA and the app on Splunk Cloud, so you are sure to have data coming from the UF (using the TA) and the KV store and app on Splunk Cloud; 2) move the KV store from the TA to the app and then install the TA on the UF and the app on Splunk Cloud. If you have all the parsing rules (props.conf and transforms.conf) in both the TA and the app, I prefer the second solution; if instead you have the parsing rules only in the TA, the first one is preferable. Ciao. Giuseppe
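For illustration, the kind of parsing rules being referred to (a minimal props.conf sketch; the sourcetype name and timestamp format are placeholders):

[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 32

These settings take effect on the first full Splunk instance that parses the data (the Splunk Cloud indexers in this scenario), which is why it matters where the TA containing them is installed.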
Getting the error 'Error occurred while trying to authenticate. Please try Again.' while authenticating Salesforce from Splunk.