All Posts


Thank you!  I was so close lol. I hacked it by prepending "  " and " " to a couple of bucket names to force them to sort ahead, but that made me cringe.  This is far better.  Thanks again!
Ugh. As I remember from quite a few years back, Tomcat logs are awful to deal with. How are you rotating them? I suppose you're trying logrotate with the copytruncate option (because that was the only way that even remotely resembled a "working" solution for rotating this). The problem I remember from my previous job was that, in this case, Java wouldn't "rewind" the file position pointer and would continue appending at the old file offset even though the file got truncated, which meant you ended up with a sparse file filled with "virtual zeros" up to the previous log file's end.

catalina.out is a very ugly thing to deal with. As far as I remember, it doesn't rotate on its own, and if you want to rotate it "normally" you have to restart your Tomcat completely, which is a huge PITA.
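If you do go the logrotate route, the stanza people usually end up with looks roughly like this (just a sketch; the path and retention values are assumptions, not something from your setup):

    /var/log/tomcat/catalina.out {
        daily
        rotate 7
        compress
        missingok
        notifempty
        copytruncate
    }

Keep in mind the sparse-file behavior described above, and that copytruncate has an inherent race window: lines written between the copy and the truncate are lost.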
Have you tested whether it works for both the /raw and /event endpoints? Just asking because I haven't used it with HEC, so I don't know.
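For reference, testing both is quick with curl; a sketch (host, port, token, and sourcetype are placeholders, and the /raw endpoint may additionally require a channel GUID):

    curl -k https://mysplunk:8088/services/collector/event -H "Authorization: Splunk <hec_token>" -d '{"event": "test event"}'
    curl -k "https://mysplunk:8088/services/collector/raw?channel=<guid>&sourcetype=my_sourcetype" -H "Authorization: Splunk <hec_token>" -d 'raw test line'

The two endpoints treat the payload differently (JSON envelope vs. raw bytes), so it's worth checking both.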
What do you mean by "fixed"? Assigning _meta has worked "since always" (I've been using it for the last 5 years or so). But since it's a single setting, you can't just stack separate definitions from multiple files. Only one will be the "winning" one, according to the normal rules of config precedence.
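To illustrate (a sketch with made-up paths and values): if you need several key::value pairs, they all have to live in the single winning definition in inputs.conf:

    [monitor:///var/log/app.log]
    _meta = env::prod team::web

Two apps each setting _meta on the same stanza won't merge; one definition simply replaces the other per precedence.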
Where does the data come from? How is it ingested? What do you mean by "raw_data on host"? What are your settings for ingesting data from this source (inputs, props, transforms...)? Oh, and please use punctuation. It greatly improves readability.
It's a very vague description of a problem. Anyway, traffic on the management port (8089) is encrypted by default and has been since at least version 7.0 (for maaaaaany years now). And there is generally no good reason to disable it.
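For the record, the relevant knob lives in server.conf (shown purely to illustrate; the default is already what you want):

    [sslConfig]
    enableSplunkdSSL = true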
This is not the place to ask for such things. You should contact the Splunk sales team, either directly or via your friendly local Splunk Partner.
See here: https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/Updatepeerconfigurations#Restart_or_reload_after_configuration_bundle_changes.3F
And here: https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/Updatepeerconfigurations#Use_the_CLI_to_validate_the_bundle_and_check_restart
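In short, the usual sequence on the cluster manager looks like this (a sketch; run from $SPLUNK_HOME/bin):

    splunk validate cluster-bundle --check-restart
    splunk show cluster-bundle-status
    splunk apply cluster-bundle

The --check-restart validation tells you up front whether the bundle push will force a peer restart or just a reload.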
We want to add a TA (app) to our indexers at the path /opt/splunk/etc/master-apps by running the command /opt/splunk/bin/splunk apply cluster-bundle. My question is whether we can deploy an indexer app without a restart of the indexers. The TA we want to deploy is an extension to the nix TA, and all it does is run some simple bash scripted inputs.
Solved. Splunk did not pick up the enablement in the conf file at creation time; the file had to be modified afterwards.
Thanks @isoutamo. Now I see the deployment clients being listed with the command ./splunk list deploy-clients. I added the stanza under /opt/splunk/etc/system/local/outputs.conf following the link you posted:

    [indexAndForward]
    index = true
    selectiveIndexing = true

Thanks again. Regards, PNV
Splitting up Splunk Enterprise and OS-level log collection is a good idea. Including OS log collection with Splunk Enterprise forwarding creates some issues. Logs being ingested by an indexer may be handled differently than local files; for example, settings applied to inputs.conf on the indexer, for the sake of indexed files, might be applied everywhere. Though a nuisance, things like this can be handled with careful configuration.

But from a management perspective, if you want a baseline set of OS log collection in an enterprise, applying rules across all of your systems (indexer clusters, search head clusters, deployment servers, heavy forwarders, etc., all the different types of system) can be cumbersome to the point of being unworkable. If you do this using a deployment-server-managed UF, baseline log collection becomes far more manageable. This can be important if baseline log collection changes regularly.

Also, note that Splunk recently changed the UF to use the 'splunkfwd' user, while the 'splunk' user is for Splunk Enterprise. This leads me to believe that Splunk is already moving in the direction of splitting up local log collection and log indexing.
| eval bucket=case(dur < 30, "Less than 30sec", dur <= 60, "30sec - 60sec", dur <= 120, "1min - 2min", dur <= 240, "2min - 4min", dur > 240, "More than 4min")
| eval sort_field=case(bucket="Less than 30sec", 1, bucket="30sec - 60sec", 2, bucket="1min - 2min", 3, bucket="2min - 4min", 4, bucket="More than 4min", 5)
| stats count as "Number of Queries" by bucket sort_field
| sort sort_field
| fields - sort_field
Here's a part of my query, ignoring where the data is coming from:

| eval bucket=case(dur < 30, "Less than 30sec", dur <= 60, "30sec - 60sec", dur <= 120, "1min - 2min", dur <= 240, "2min - 4min", dur > 240, "More than 4min")
| eval sort_field=case(bucket="Less than 30sec", 1, bucket="30sec - 60sec", 2, bucket="1min - 2min", 3, bucket="2min - 4min", 4, bucket="More than 4min", 5)
| sort sort_field
| stats count as "Number of Queries" by bucket

The problem I have is that the results are ordered alphabetically by the name of each bucket. I'd prefer the order to always be from quickest to slowest: <30s, 30-60s, 1-2m, 2-4m, >4m.

What I get:

Less than 30sec | <value>
1min - 2min | <value>
2min - 4min | <value>
30sec - 60sec | <value>
More than 4min | <value>

Wait, no. What I get:

1min - 2min | <value>
2min - 4min | <value>
30sec - 60sec | <value>
Less than 30sec | <value>
More than 4min | <value>

What I want:

Less than 30sec | <value>
30sec - 60sec | <value>
1min - 2min | <value>
2min - 4min | <value>
More than 4min | <value>

I've tried a number of different approaches, none of which seemed to do anything. Is this possible?
Hi, I am quite new to Splunk, so sorry in advance if I ask silly questions. I have the below task to do: "The logs show that Windows Defender has detected a Trojan on one of the machines on the ComTech network. Find the relevant alerts and investigate the logs." I keep searching but don't get the right logs. I searched with the below filters:

source="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational"
source="XmlWinEventLog:Microsoft-Windows-Windows Defender/Operational"

I would really appreciate it if you could help. Thanks, Pere
I apologize; I don't believe my question was clear. I have 2 full-fledged Splunk deployments, 1 on-prem and 1 in AWS. The AWS search heads are acting as remote search peers for the on-prem deployment. These search peers are hardcoded in the on-prem conf file as:

10.0.0.1
10.0.0.2
10.0.0.3
10.0.0.4
10.0.0.5
10.0.0.6

Now, if remote search peers 4-6 go down, will our on-prem Splunk deployment still be able to query the remaining remote search peers as normal, given that the config file lists 3 search peers that are no longer live?
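To be concrete, the hardcoding is along these lines (a sketch assuming the standard distsearch.conf layout):

    [distributedSearch]
    servers = https://10.0.0.1:8089, https://10.0.0.2:8089, https://10.0.0.3:8089, https://10.0.0.4:8089, https://10.0.0.5:8089, https://10.0.0.6:8089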
The Cluster Manager will keep track of where the searchable buckets are in the cluster. If all goes well, you should be able to search with half the cluster still up. Whether the cluster remains searchable depends on the search factor and on the timing of the indexer failures. The Indexer Clustering page on the Cluster Manager will tell you the state of the cluster.
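You can also check from the CLI on the manager node; something like this should do it (exact output varies by version):

    /opt/splunk/bin/splunk show cluster-status --verbose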
I need my trial extended 14 more days. I have to do a demo for my bosses on Tuesday. User: https://app.us1.signalfx.com/#/userprofile/GM-tC55A4AA
So, assuming:

- our search peers and indexers are synced across properly
- Distconf has 6 IPs but only 3 of those hosts are up

Will our master search head cluster still be able to search against the peers? Or, if it happens to hit a dead host, will it return nothing for that query?
Unlike a forwarder sending data to a peer, search heads do not round-robin among the indexers. Search queries are sent to all indexers (most of the time) and the responses are collated by the SH. If the data on the 3 down peers is not replicated on the remaining 3, then you will get incomplete search results.