Logging a single line to Splunk takes about 30ms with the HEC appender. For example, the code below reports logTime=30:

Long start1 = System.currentTimeMillis();
log.info("Test logging");
Long start2 = System.currentTimeMillis();
log.info("logTime={}", start2 - start1);

This is our logback config -

30ms is too long for a single log call. Are we missing anything in the config?
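If the HEC appender is doing a synchronous HTTP round-trip per event, most of that 30ms is network time rather than logging time. A minimal sketch, assuming your HEC appender is named HEC (the actual config didn't come through in the post), of wrapping it in logback's standard AsyncAppender so log.info() just enqueues and returns:

<!-- keep your existing HEC appender definition; "HEC" is an assumed name -->
<appender name="ASYNC_HEC" class="ch.qos.logback.classic.AsyncAppender">
  <appender-ref ref="HEC"/>
  <!-- events are handed to a background worker thread for the HTTP send -->
  <queueSize>8192</queueSize>
  <!-- drop events instead of blocking the caller when the queue is full -->
  <neverBlock>true</neverBlock>
</appender>

<root level="INFO">
  <appender-ref ref="ASYNC_HEC"/>
</root>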
Event Actions > Show Source is failing at 100/1000 events with the two errors below:

[e430ac81-66f7-40b8-8c76-baa24d2813c6_wh-1f2db913c0] Streamed search execute failed because: Error in 'surrounding': Too many events (> 10000) in a single second.
Failed to find target event in final sorted event list. Cannot properly prune results

The result sets are not huge, maybe 150 events. What do these errors mean, and how do we resolve them?
Has anyone faced this issue?
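The first error's own text points at the cause: more than 10,000 events share a single second, so the surrounding-events search cannot isolate the target event in the sorted list. A rough manual substitute for Show Source is to search the same source and host in a narrow window around the event's _time; everything below (index, source, host, and the epoch times) is a placeholder sketch, not taken from the post:

index=main source="/var/log/myapp.log" host=myhost earliest=1715000000 latest=1715000060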
Yes, the on-prem search heads will be able to send queries to the AWS indexers. Whether those queries succeed is another question; the answer depends on how the indexers are configured. Are they in a cluster? What are the replication factor and search factor settings? An indexer cluster with fully replicated and searchable data will be able to respond to search requests even if some peers are down. The likelihood of the cluster being fully searchable goes down with each lost indexer. If the indexers go down in rapid succession, then it's possible (depending on the configuration) for some data to be unreachable. In that case, search requests will return incomplete results.
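As a sketch of how to check that (assuming a clustered deployment and CLI access to the cluster manager), the manager can report whether the replication and search factors are currently met:

# Run on the cluster manager: lists peer status and whether the
# replication factor and search factor are met
/opt/splunk/bin/splunk show cluster-status

# Recent versions accept --verbose for per-index detail
/opt/splunk/bin/splunk show cluster-status --verbose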
Try something like this

| eval bucket=case(dur < 30, 0, dur <= 60, 1, dur <= 120, 2, dur <= 240, 3, dur > 240, 4)
| stats count as "Number of Queries" by bucket
| append
    [| makeresults
     | fields - _time
     | eval bucket=mvrange(0,5)
     | mvexpand bucket
     | eval "Number of Queries"=0]
| stats sum("Number of Queries") as "Number of Queries" by bucket
| eval bucket=mvindex(split("Less than 30sec,30sec - 60sec,1min - 2min,2min - 4min,More than 4min", ","), bucket)
From the XML below we created a dropdown for site and it's working as expected, but we need a dropdown for country as well. Country data is not present in the logs. We have 2 countries, China and India. We need a dropdown with country, and based on the selected country the matching sites should be shown. How can we do this?

<form version="1.1" theme="light">
  <label>Dashboard</label>
  <fieldset submitButton="false">
    <input type="time" token="timepicker">
      <label>TimeRange</label>
      <default>
        <earliest>-15m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="site">
      <label>SITE</label>
      <choice value="*">All</choice>
      <prefix>site="</prefix>
      <suffix>"</suffix>
      <default>*</default>
      <fieldForLabel>site</fieldForLabel>
      <fieldForValue>site</fieldForValue>
      <search>
        <query>
          | makeresults
          | eval site="BDC"
          | fields site
          | append [| makeresults | eval site="SOC" | fields site]
          | sort site
          | table site
        </query>
      </search>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>Total Count Of DataRequests</title>
        <search>
          <query>
            index=Datarequest-index $site$
            | rex field=_raw "application :\s(?<Reqtotal>\d+)"
            | stats sum(Reqtotal)
          </query>
          <earliest>$timepicker.earliest$</earliest>
          <latest>$timepicker.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentageRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>
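A hedged sketch of one way to cascade the inputs: give country static choices and reference its token in the site dropdown's populating search. The country-to-site mapping below (China maps to BDC, India maps to SOC) is purely hypothetical, since the real mapping isn't in the post; these inputs would replace the site input inside the fieldset:

<input type="dropdown" token="country">
  <label>COUNTRY</label>
  <choice value="China">China</choice>
  <choice value="India">India</choice>
  <default>China</default>
</input>
<input type="dropdown" token="site">
  <label>SITE</label>
  <fieldForLabel>site</fieldForLabel>
  <fieldForValue>site</fieldForValue>
  <prefix>site="</prefix>
  <suffix>"</suffix>
  <search>
    <query>
      | makeresults
      | eval country="China", site="BDC"
      | append [| makeresults | eval country="India", site="SOC"]
      | where country="$country$"
      | table site
    </query>
  </search>
</input>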
Extra credit: Is there a way to force all 5 buckets to always appear in the results, even if they have a 0 count?
Thank you!  I was so close lol. I hacked it by prepending "  " and " " to a couple of bucket names to force them to sort ahead, but that made me cringe.  This is far better.  Thanks again!
Ugh. As I remember from quite a few years back, tomcat logs are awful to deal with. How are you rotating them? I suppose you're using logrotate with the copytruncate option (because that was the only way that even remotely resembled a "working" solution for rotating these logs). The problem I remember from my previous job was that Java wouldn't "rewind" the file position pointer; it would continue appending at the old file position even after the file was truncated, so you ended up with a sparse file filled with "virtual zeros" up to the previous logfile's end. catalina.out is a very ugly thing to deal with. As far as I remember, it doesn't rotate on its own, and if you wanted to rotate it "normally" you'd have to restart your tomcat completely, which is a huge PITA.
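For reference, a minimal logrotate stanza for the copytruncate approach described above (the path is an assumption):

# /etc/logrotate.d/tomcat - hypothetical path
/opt/tomcat/logs/catalina.out {
    daily
    rotate 7
    compress
    missingok
    # copy then truncate in place so tomcat keeps the same file descriptor;
    # this is exactly the mode that can leave a sparse file if the writer
    # never resets its position
    copytruncate
}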
Have you tested whether it works for both the /raw and /event endpoints? Just asking because I haven't used it on HEC, so I don't know.
What do you mean by "fixed"? Assigning _meta has worked "since always" (I've been using it for the last 5 years or so). But since it's a single setting, you can't just stack separate definitions from multiple files. Only one will be the "winning" one, according to the normal rules of config precedence.
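For illustration, a minimal inputs.conf sketch (the monitor path and key names here are made up). _meta takes space-separated key::value pairs on one line, and a higher-precedence file that also sets _meta for the same stanza replaces the whole line rather than merging:

# inputs.conf - hypothetical stanza
[monitor:///var/log/myapp]
sourcetype = myapp
# both pairs must live on this single line; another _meta definition
# elsewhere wins or loses as a whole per config precedence
_meta = env::prod team::payments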
Where does the data come from? How is it ingested? What do you mean by "raw_data on host"? What are your settings for ingesting data from this source (inputs, props, transforms...)? Oh, and please use punctuation. It greatly improves readability.
That's a very vague description of the problem. Anyway, traffic on the management port (8089) is encrypted by default and has been at least since version 7.0 (for maaaaaany years now). And there is generally no good reason to disable it.
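For reference, the setting that governs this lives in server.conf and defaults to true, so there is normally nothing to change (shown only as a sketch):

# server.conf
[sslConfig]
# splunkd management traffic on 8089 is TLS-encrypted when this is true (the default)
enableSplunkdSSL = true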
This is not the place to ask for such things. You should contact the Splunk sales team, either directly or via your friendly local Splunk Partner.
See here: https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/Updatepeerconfigurations#Restart_or_reload_after_configuration_bundle_changes.3F And here: https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/Updatepeerconfigurations#Use_the_CLI_to_validate_the_bundle_and_check_restart  
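In short, the flow those pages describe (run on the cluster manager):

# Validate the staged bundle and check whether applying it
# would require a rolling restart of the peers
/opt/splunk/bin/splunk validate cluster-bundle --check-restart

# Then inspect the validation results
/opt/splunk/bin/splunk show cluster-bundle-status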
We want to add a TA (app) to our indexers at the path /opt/splunk/etc/master-apps by running the command /opt/splunk/bin/splunk apply cluster-bundle. My question is whether we can deploy an indexer app without a restart of the indexers. The TA we want to deploy is an extension to the nix TA, and all it does is run some simple bash scripted inputs.
Solved. Splunk did not pick up the enablement from the conf file when it was created; the file must be modified afterwards.
Thanks @isoutamo. Now I see the deployment clients being listed by the command ./splunk list deploy-clients. I added this stanza to /opt/splunk/etc/system/local/outputs.conf following the link you posted:

[indexAndForward]
index = true
selectiveIndexing = true

Thanks again. Regards, PNV
Splitting up Splunk Enterprise and OS-level log collection is a good idea. Including OS log collection with Splunk Enterprise forwarding creates some issues: logs being ingested by an indexer may be handled differently than local files. For example, settings applied to inputs.conf on the indexer for the sake of indexed files might be applied everywhere. Though a nuisance, things like this can be handled with careful configuration. But from a management perspective, if you want a baseline set of OS log collection in an enterprise, applying rules across all of your systems (indexer clusters, search head clusters, deployment servers, heavy forwarders, and all the other types of system) can be cumbersome to the point of being unworkable. If you do this with a deployment-server-managed UF, baseline log collection becomes far more manageable. This can be important if baseline log collection changes regularly. Also, note that Splunk recently changed the UF to use the 'splunkfwd' user, while the 'splunk' user is for Splunk Enterprise. This leads me to believe that Splunk is already moving in the direction of splitting up local log collection and log indexing.
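A minimal sketch of pointing a standalone UF at a deployment server for that baseline collection (ds.example.com is an assumption; 8089 is the default management port):

# deploymentclient.conf on the universal forwarder
[deployment-client]

[target-broker:deploymentServer]
# assumed deployment server address
targetUri = ds.example.com:8089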
| eval bucket=case(dur < 30, "Less than 30sec", dur <= 60, "30sec - 60sec", dur <= 120, "1min - 2min", dur <= 240, "2min - 4min", dur > 240, "More than 4min")
| eval sort_field=case(bucket="Less than 30sec", 1, bucket="30sec - 60sec", 2, bucket="1min - 2min", 3, bucket="2min - 4min", 4, bucket="More than 4min", 5)
| stats count as "Number of Queries" by bucket sort_field
| sort sort_field
| fields - sort_field