All Posts

Here's a part of my query, ignoring where the data is coming from:

| eval bucket=case(dur < 30, "Less than 30sec", dur <= 60, "30sec - 60sec", dur <= 120, "1min - 2min", dur <= 240, "2min - 4min", dur > 240, "More than 4min")
| eval sort_field=case(bucket="Less than 30sec", 1, bucket="30sec - 60sec", 2, bucket="1min - 2min", 3, bucket="2min - 4min", 4, bucket="More than 4min", 5)
| sort sort_field
| stats count as "Number of Queries" by bucket

The problem I have is that the results are ordered alphabetically by the name of each bucket. I'd prefer the order to always run from quickest to slowest: <30s, 30-60s, 1-2m, 2-4m, >4m.

What I get:

1min - 2min | <value>
2min - 4min | <value>
30sec - 60sec | <value>
Less than 30sec | <value>
More than 4min | <value>

What I want:

Less than 30sec | <value>
30sec - 60sec | <value>
1min - 2min | <value>
2min - 4min | <value>
More than 4min | <value>

I've tried a number of different approaches, none of which seems to do anything. Is this possible?
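A common fix, sketched below with the field names taken from the query above: stats ... by bucket returns its rows in alphabetical order of the by field and discards any earlier sort, so derive the numeric sort key after stats, sort on it, then drop it.

| eval bucket=case(dur < 30, "Less than 30sec", dur <= 60, "30sec - 60sec", dur <= 120, "1min - 2min", dur <= 240, "2min - 4min", dur > 240, "More than 4min")
| stats count as "Number of Queries" by bucket
| eval sort_field=case(bucket="Less than 30sec", 1, bucket="30sec - 60sec", 2, bucket="1min - 2min", 3, bucket="2min - 4min", 4, bucket="More than 4min", 5)
| sort sort_field
| fields - sort_field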
Hi, I am quite new to Splunk, so sorry in advance if I ask silly questions. I have the task below:

"The logs show that Windows Defender has detected a Trojan on one of the machines on the ComTech network. Find the relevant alerts and investigate the logs."

I keep searching but don't get the right logs. I searched with the filters below:

source="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational"
source="XmlWinEventLog:Microsoft-Windows-Windows Defender/Operational"

I would really appreciate it if you could help.

Thanks,
Pere
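Not the poster's exact data, but a sketch of the usual approach: Windows Defender logs malware detections to its Operational channel as event IDs 1116 (malware detected) and 1117 (action taken), so filtering on those tends to surface the relevant alerts. The EventCode field name assumes the usual Windows add-on extractions.

source="XmlWinEventLog:Microsoft-Windows-Windows Defender/Operational" (EventCode=1116 OR EventCode=1117)

Failing that, a plain keyword search on the threat name can work:

source="XmlWinEventLog:Microsoft-Windows-Windows Defender/Operational" "Trojan"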
I apologize, I don't believe my question was clear. I have 2 full-fledged Splunk deployments, 1 on-prem and 1 in AWS. The AWS search heads are acting as remote search peers for the on-prem deployment. These search peers are hardcoded in the on-prem conf file as:

10.0.0.1
10.0.0.2
10.0.0.3
10.0.0.4
10.0.0.5
10.0.0.6

Now, if remote search peers 4-6 go down, will our on-prem Splunk deployment still be able to query the remaining remote search peers as normal, given that the config file lists 3 search peers that are no longer live?
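For reference, a hardcoded peer list like the one described usually lives in distsearch.conf along these lines; a sketch using the IPs from the post, the default management port, and the https://host:port URI format from the distsearch.conf spec:

[distributedSearch]
servers = https://10.0.0.1:8089,https://10.0.0.2:8089,https://10.0.0.3:8089,https://10.0.0.4:8089,https://10.0.0.5:8089,https://10.0.0.6:8089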
The Cluster Manager will keep track of where the searchable buckets are in the cluster.  If all goes well, you should be able to search with half the cluster still up.  It will depend on the search factor and the timing of the indexer failures as to whether the cluster will remain searchable.  The Indexer Clustering page on the Cluster Manager will tell you the state of the cluster.
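If you prefer the CLI to the Indexer Clustering page, the same state information is available from the Cluster Manager; a quick check, run on the manager node:

$SPLUNK_HOME/bin/splunk show cluster-status --verbose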
I need my trial extended 14 more days. I have to do a demo for my bosses on Tuesday.

User: https://app.us1.signalfx.com/#/userprofile/GM-tC55A4AA
So:
- our search peers and indexers are synced across properly
- distsearch.conf has 6 IPs, but only 3 of those hosts are up

Will our search head cluster still be able to search against the peers? Or, if it happens to hit a dead host, will it return nothing for that query?
Unlike a forwarder sending data to a peer, search heads do not round-robin among the indexers. Search queries are sent to all indexers (most of the time) and the responses are collated by the search head. If the data on the 3 down peers is not replicated on the remaining 3, then you will get incomplete search results.
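One way to see which configured peers a search head currently considers up is the distributed peers REST endpoint; a sketch (the endpoint is standard, though the exact field names can vary by version):

| rest /services/search/distributed/peers splunk_server=local
| table title status version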
Compare the props.conf settings for that sourcetype against the raw event from the original data source.  Make sure the settings are correct.  No doubt something is off and is causing that letter to be dropped. Post the sanitized sample event and props.conf stanza here if you need help finding the problem.
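For comparison, a sketch of the kind of props.conf stanza that would parse the sample event in the neighboring post; the sourcetype name is a placeholder and the formats are illustrative, not the poster's actual settings:

[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %m/%d/%y %I:%M:%S.%3N %p
MAX_TIMESTAMP_LOOKAHEAD = 30
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)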
5/17/24 12:45:46.313 PM persistuse Environment = LTQ3

In the event above, the character "r" is missing from the word "persistuse" (but it exists in the raw data on the host). Because of this, events are being created without a timestamp and we are getting data quality issues. How can this be fixed?
We are generating HEC tokens on a deployment server and pushing them out to the HECs. HEC tokens are disabled by default on the HECs and the deployment server, and need to be enabled in global settings. What I've done so far:

- authorize.conf: this is for user tokens and isn't working for HEC tokens
- the CLI command for enabling a token isn't working because HEC isn't enabled globally
- inputs.conf has [http] disabled=0

The only thing that has worked is enabling it via the UI. Is there a way to enable these over the CLI?
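A sketch of the inputs.conf settings that enable HEC globally and enable a single token in one pass, which can be pushed from the deployment server like any other app; the token stanza name and value here are placeholders:

[http]
disabled = 0
enableSSL = 1

[http://my_hec_token]
token = 11111111-2222-3333-4444-555555555555
disabled = 0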
Hello @gcusello,

Thanks for the quick response. One of my colleagues mentioned that he observed some intermittent connectivity issues/data loss when 8089 encryption was enabled. What could be the possible reason?

Thanks.
This is perfect. Thank you! Only had to add the missing "by" in

| eventstats values(pod_name_all) as pod_name_all importance

index=abc sourcetype=kubectl
| lookup pod_list pod_name_lookup as pod_name OUTPUT pod_name_lookup
| where sourcetype == "kubectl"
| bin span=1h@h _time
| stats values(pod_name_lookup) as pod_name_lookup values(pod_name_all) as pod_name_all by importance _time
| append [ inputlookup pod_list | rename pod_name_lookup as pod_name_all]
| eventstats values(pod_name_all) as pod_name_all by importance
| eval missing = if(isnull(pod_name_all), pod_name_all, mvappend(missing, mvmap(pod_name_all, if(pod_name_all IN (pod_name_lookup), null(), pod_name_all))))
| where isnotnull(missing)
| timechart span=1m@m dc(missing) by importance
Hi Team,

Is it possible to update/enrich a notable after executing a playbook in Splunk SOAR, so that the execution output is attached to the Splunk notable?

Example: assume I have a correlation search named "one" that triggers a notable and a run-a-playbook action. Once the search triggers and the notable is created, the run-a-playbook action should execute in SOAR and attach its output to the notable that was created. Think of this as attaching the IP reputation/geolocation of an IP to the notable, so that the SOC can work without logging into VirusTotal or any other sites.

Thank you
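This is commonly done from the playbook itself by calling back into Splunk with the SOAR Splunk connector's "update event" action; a minimal classic-playbook sketch, assuming a configured Splunk asset named "splunk" and that the notable's event_id arrived on the triggering artifact (verify the action and parameter names against your connector version):

import phantom.rules as phantom

def attach_enrichment(event_id, enrichment_summary):
    # Push the playbook's enrichment output back onto the notable as a comment
    parameters = [{
        "event_ids": event_id,          # notable event ID from the triggering artifact
        "comment": enrichment_summary,  # e.g. IP reputation / geolocation results
    }]
    phantom.act("update event", parameters=parameters, assets=["splunk"])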
Hi @Jyo_Reel,

8089 is a management port and it's already encrypted. The traffic port (by default 9997), however, can be encrypted; for more details see https://docs.splunk.com/Documentation/Splunk/9.2.1/Security/ConfigureSplunkforwardingtousesignedcertificates

Ciao.
Giuseppe
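For orientation, a sketch of what encrypting the 9997 traffic typically involves on each side; the certificate paths and password are placeholders, and the setting names follow the documentation linked above:

# outputs.conf on the forwarder
[tcpout:primary_indexers]
server = indexer.example.com:9997
clientCert = $SPLUNK_HOME/etc/auth/mycerts/client.pem
sslPassword = <certificate password>
useSSL = true

# inputs.conf on the indexer
[splunktcp-ssl:9997]

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/server.pem
sslPassword = <certificate password>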
I know someone who has used this; it's a flavour of Red Hat / CentOS, so you should be fine. Here's the Splunk OS support matrix for the supported kernel versions: https://docs.splunk.com/Documentation/Splunk/9.2.1/Installation/SystemRequirements
Hello, can traffic on port 8089 be encrypted? What are the pros and cons?
If I have 6 search peers configured in the distsearch.conf file but 3 of them go down, can Splunk recognize that a host is down and continue skipping down the list until it gets a live host?
Hello, is Splunk 9.0 compatible with Oracle Linux?
That WARN is just for extra security. It's still having issues with the server.pem file.

I'm out of options to check, mate; consider logging a support call. Or, if this is an option for you, back up the /etc/apps folder, re-install Splunk, and restore the backed-up /etc/apps folder. I know this is a drastic step... but it might be quicker.
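A sketch of that backup/restore step, assuming a default install under /opt/splunk (adjust the path to your $SPLUNK_HOME):

# back up the apps folder before re-installing
tar -czf /tmp/splunk_apps_backup.tar.gz -C /opt/splunk/etc apps

# ... re-install Splunk, then restore the apps
tar -xzf /tmp/splunk_apps_backup.tar.gz -C /opt/splunk/etc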
I've recently installed the MISP add-on app from Splunk to integrate our MISP environment feed into Splunk using the URL and the Auth API. I was able to configure it with the details required in the MISP add-on app. However, after the configuration, I'm getting the following error:

Restricting results of the "rest" operator to the local instance because you do not have the "dispatch_rest_to_indexers" capability.

Furthermore, looking into the role capabilities under the Splunk UI settings, I don't see a "dispatch_rest_to_indexers" capability either. Could someone please assist?
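Even when it doesn't appear in the UI's capability list, the capability can usually be granted directly in authorize.conf; a minimal sketch, with the role name as a placeholder for whichever role runs the add-on's searches:

# authorize.conf (e.g. in etc/system/local)
[role_misp_user]
dispatch_rest_to_indexers = enabled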