All Posts

Hi, I am trying to do a chart overlay using a normal distribution graphic based upon the mean and standard deviation acquired from the fieldsummary command. I can generate the values in Perl (below) for a bell curve. Can you tell me how to do this in the Splunk dashboard XML? Thanks.

#!/usr/bin/perl
# min, max, count, mean, stdev all come from the fieldsummary command.
$min   = 0.442;
$max   = 0.507;
$mean  = 0.4835625;
$stdev = 0.014440074377630105;
$count = 128;
$pi    = 3.141592653589793238462;
# The numbers above do not indicate a Gaussian distribution.
# Create an artificial normal distribution (for the plot overlay)
# based on 6-sigma.
$min = sprintf("%.3f", $mean - 3.0*$stdev);   # use sprintf as a rounding function
$max = sprintf("%.3f", $mean + 3.0*$stdev);
$interval = ($max - $min)/($count - 1);
$x = $min;
for ($i = 0; $i < $count; $i++) {
    $y = (1.0/($stdev*sqrt(2.0*$pi))) * exp(-0.5*((($x - $mean)/$stdev)**2));
    $myFIELD[$i] = sprintf("%.3f", $y);
    printf("%s\n", $myFIELD[$i]);
    $x = $x + $interval;
}
exit;
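For anyone porting this into a dashboard: a minimal SPL sketch of the same loop (untested; the stats are hard-coded here exactly as in the Perl, and in practice you would paste in the numbers from fieldsummary or compute them inline). makeresults and streamstats generate the 128 x values, and eval computes the Gaussian:

| makeresults count=128
| streamstats count as i
| eval mean=0.4835625, stdev=0.014440074377630105, n=128
| eval xmin=mean - 3.0*stdev, xmax=mean + 3.0*stdev
| eval x=round(xmin + (i - 1) * (xmax - xmin) / (n - 1), 3)
| eval y=round((1.0/(stdev*sqrt(2*pi()))) * exp(-0.5*pow((x - mean)/stdev, 2)), 3)
| table x y

A search like this can sit in its own <search> element in the dashboard XML and be charted as the overlay series.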
Hello, can anyone help me with a query that lists all the saved searches in my Splunk system along with the time each one takes to run to completion?
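One possible starting point, sketched from the scheduler's own logs (this assumes the searches are scheduled; for ad-hoc runs you would look at the _audit index instead, and run_time is in seconds):

index=_internal sourcetype=scheduler status=success
| stats count avg(run_time) as avg_run_time_s max(run_time) as max_run_time_s by savedsearch_name app
| sort - avg_run_time_s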
It's the opposite for me: Windows events are fine, but application logs have the problem. I will try upgrading the forwarder and check.
Just moved to AlmaLinux 9.3 (from RHEL 7, yikes!). systemd-managed boot-start works fine. My problem is that when I tried to deploy an app with a restart, Splunk was not able to start up, complaining it was managed by systemd. Has anyone else come across this? Splunk 9.0.5
Retention for indexed data is applied at the bucket (folder) level. A bucket is only frozen (deleted, or optionally archived) when the newest event in the bucket is older than frozenTimePeriodInSecs. You may therefore have a bucket that contains both new and really old data, and the old data won't be frozen until all of the data in the bucket is old enough (Splunk evaluates this in the background).
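A quick way to see this in practice is dbinspect, which shows each bucket's time span and state, so you can spot buckets whose newest event is still inside the retention window (a sketch; your_index is a placeholder):

| dbinspect index=your_index
| eval oldest=strftime(startEpoch, "%F %T"), newest=strftime(endEpoch, "%F %T")
| table bucketId state oldest newest sizeOnDiskMB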
I'm investigating why Splunk is keeping data beyond the retention period stated in frozenTimePeriodInSecs. How can I fix this?
Hello, has the problem been solved? I'm having a similar problem.
Not yet. I'll be trying to work this out in the lab. If anyone else finds a solution other than the apparently painful process of removing the forwarder, please let me know! Thanks!
I have logs being monitored from Windows as below:

[monitor://D:\Logs\*]
sourcetype = abc
index = def

I also currently have info logs being null-routed, which applies to all of D:\Logs\jkl.txt, and therefore we don't see any logs from D:\Logs\jkl.txt in Splunk. Now, without modifying the null route in props and transforms, I want to ingest logs from D:\Logs\jkl.txt. How can I make the null route not apply to these specific logs?
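One pattern that may work here (a sketch only, not tested against your config; the stanza and transform names below are hypothetical): leave the existing null route untouched and add a source-scoped transform that sends these events back to the index queue. Whether it wins depends on transform ordering, since the last transform to set the queue key takes effect, so verify the effective order with btool before relying on it.

# props.conf
[source::D:\Logs\jkl.txt]
TRANSFORMS-restore = keep_jkl

# transforms.conf
[keep_jkl]
REGEX = .
DEST_KEY = queue
FORMAT = indexQueue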
Hello Team, deployment with:
- HF with ACK when sending to the indexer
- HEC on the HF with ACK
- application sending events via HEC on the HF with ACK

Even in this model there is a chance that some events will be missed. The application might get an ACK from HEC, but if the event is still in the HF's output queue (not yet sent to the indexer) and the HF reboots non-gracefully (so it cannot flush its output queue), the event is lost. Can you confirm? And what would be the best way to address it, so that once the application receives an ACK we have an end-to-end guarantee that the event is indexed? Thanks, Michal
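For reference, these are the two acknowledgment knobs involved, on the output side and on the HEC token itself (a sketch; server, stanza, and token names are placeholders):

# outputs.conf on the HF
[tcpout:primary_indexers]
server = idx1.example.com:9997
useACK = true

# inputs.conf on the HF
[http://my_app_token]
token = <your-token-guid>
useACK = true

With useACK on the token, the client is expected to poll the HEC ack endpoint for its ackId rather than treat the initial HTTP 200 as a durability guarantee.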
1) splunk list monitor and splunk list inputstatus

3) Remember that Splunk searches by _time, which typically is the timestamp extracted from the event. To verify how much data you ingested during a given period of time, you need to aggregate over _indextime; that's why I was asking about time parsing. You can also check metrics on your HFs, but if you have many sources, those UFs might not show up in metrics.log at all if they don't fall into the "most active" subset.

4) It's not about the time formats being consistent or not. It's about how they are parsed by Splunk and whether they map to the proper time. Otherwise the events might get indexed at a completely different time than you'd expect. And one more thing: even on your syslog receiver the events can be delayed, and if those files are rotated with logrotate (as they probably are), you might be doing a summary of a day's worth of events whose timestamps are shifted relative to the "midnight to midnight" period. Did you verify that?
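For point 3, a minimal sketch of aggregating by index time rather than event time (your_index is a placeholder; _index_earliest scopes the window by _indextime):

index=your_index _index_earliest=-24h
| eval lag=_indextime - _time
| bin span=1h _indextime as itime
| stats count avg(lag) as avg_lag_s by itime
| eval itime=strftime(itime, "%F %T")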
I'm a beginner; can you be more specific? I'm having the same problem and am looking forward to your reply.
Found a solution: by splitting out

filelog/mule-logs-volume:
  include:
    - /splunk-otel/*/app*.log
    - /splunk-otel/*/*/app*.log

into two separate filelog entries, as such:

filelog/mule-logs-volume1:
  include:
    - /daas-splunk-otel/*/*/dla*.log
  start_at: beginning

filelog/mule-logs-volume2:
  include:
    - /daas-splunk-otel/*/dla*.log
  start_at: beginning

and removing all the router stuff.
Hi deepakc,

thanks for the quick reply. The thing is, I have only started to build the app but never finished it, so now it shows up as a 'husk' of an app, so to speak, with no data collection configured yet. However, you were right that the error I've seen has something to do with the validation process. I'm now trying to make heads or tails of the _internal logs as suggested by Splunk (which read, for example, that the props.conf file of the new app is missing, which indeed it is because I haven't finished setting it up yet). I will update with any findings once I've combed through the logs and tried to remedy the missing files.
1) I didn't find any errors in splunkd.log on the UF. How would I "check status of inputs on the UF"?
2) I found the differences in various logs, but I will check the internal logs - I didn't do that yet.
3) Discrepancy: see other replies to Giuseppe.
4) Time parsing: I have added some samples below; the time formats are consistent with the other events.
5) So far there are no rules on the HFs.
Question in the title. Thanks in advance!
Well... there are several things to consider here (for a quick way to check points 1 and 2, see the sketch after this list).

1. Are all files being read properly (check the status of inputs on the UF, check for errors, verify that you're not hitting limits on open files, and so on)?
2. Are other files from the same UF (the typical candidates for cross-checking would be the UF's own logs) getting ingested properly?
3. How did you verify the discrepancy between those numbers?
4. Are your time-parsing rules properly set up? That can heavily influence _where_ (or rather "when") the events are indexed. So you might be getting the events ingested "properly" but just not seeing them while searching.
5. Do you have any rules (props/transforms, ingest actions) on your HFs/indexers that filter events or move them to other indexes?

There are many things that could affect your ingestion process.
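A minimal sketch of checking what a given forwarder actually reported sending, from its own metrics (your_uf_host is a placeholder; low-volume sources may not appear in per_source_thruput at all):

index=_internal host=your_uf_host source=*metrics.log* group=per_source_thruput
| stats sum(kb) as total_kb by series
| sort - total_kb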
I don't think this is a cert issue. If you use the AOB, it tries to validate your app for certification via the online validation service (basically being given a stamp of approval) and needs the below:

"Enter the login settings for your Splunk.com account. This information is required for the app precertification process"

You normally get this via your sales process. For the proxy part, it could be incorrect credentials; I don't think it's a cert issue, but I could be wrong. There is a section in the AOB docs for where self-signed certs should go, but I think this is a red herring: https://docs.splunk.com/Documentation/AddonBuilder/4.1.4/UserGuide/ConfigureDataCollection
If the app is installed on the SH, it will be replicated to the indexers UNLESS it is excluded from the bundle. To exclude files from the bundle, add entries to the [replicationDenyList] stanza in distsearch.conf and restart the SH.

[replicationDenyList]
MSbin = E:\Splunk\etc\apps\TA-microsoft-graph-security-add-on-for-splunk\bin\*
Hi @michaelteck,
if you give the monitor command a path, Splunk reads all the files in that path. You can exclude files that have not been modified for more than a given period (e.g. 1 day) by adding a parameter to the input stanza:

ignoreOlderThan = 1d

Ciao.
Giuseppe
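In context that would look something like this (the path, sourcetype, and index are examples only; note that ignoreOlderThan compares against the file's modification time, not individual event timestamps):

[monitor://D:\Logs\myapp]
sourcetype = your_sourcetype
index = your_index
ignoreOlderThan = 1d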