All Posts

You could try the following at your own risk; in any case, your SHC is not fulfilling Splunk's requirements! Have you tried stopping all nodes in the SHC? Back up the kvstore, then remove that app from the one node with rm -fr …./etc/apps/<your app name>. Then start all the nodes one by one in the SHC and check what your situation is after that. Also check the kvstore and shcluster statuses. That may help you, or it could lead to a situation which forces you to install the whole SHC from scratch! So test this at your own risk.
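For the last step, these are the standard CLI status checks, run on an SHC member (credentials are placeholders):

splunk show shcluster-status -auth <user>:<password>
splunk show kvstore-status -auth <user>:<password>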
Have a look at this one; maybe it would be of help, as it mentions Azure App Insights. (It's always worth perusing Splunkbase and working through the Azure TAs against your requirements.) The help file is within the TA, so you would need to look at that for further detail: https://splunkbase.splunk.com/app/7246
This is an example using the makeresults command; you can use the rex command to extract key-value pairs from the content.payload field. It is only an example to show you how to extract some of the fields. I have called my field data; replace this with yours.

| makeresults
| eval data = "fileName=ExchangeRates.csv, periodName=202403, status=SUCCESS, subject=, businessEventMessage=RequestID: 101524, GL Monthly Rates - Validate and upload program"
| rex field=data "fileName=(?<fileName>\w+\.\w+),\speriodName=(?<periodName>\w+),\sstatus=(?<status>\w+)"
| table *

Or look at the spath command; that may also be another way.
There is no out-of-the-box query for that. You have to combine a query that returns all saved searches with a query that pulls run times from the logs. These searches should get you started:

| rest /servicesNS/-/-/saved/searches

index=_internal source=*scheduler.log component=SavedSplunker status=success
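As a rough sketch of combining the two (assuming the scheduler.log events carry savedsearch_name and run_time fields, and that saved-search titles are unique enough to join on):

| rest /servicesNS/-/-/saved/searches
| fields title
| join type=left title
    [ search index=_internal source=*scheduler.log component=SavedSplunker status=success
      | stats avg(run_time) AS avg_runtime_sec max(run_time) AS max_runtime_sec BY savedsearch_name
      | rename savedsearch_name AS title ]
| table title avg_runtime_sec max_runtime_sec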
I tried the below code but it is not working. Can anyone let me know what is wrong here?

<form version="1.1" theme="light">
  <label>HTMD Dashboard</label>
  <fieldset submitButton="false">
    <input type="time" token="timepicker">
      <label>TimeRange</label>
      <default>
        <earliest>-15m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="host">
      <label>Env wise hosts</label>
      <choice value="appdev1host","logdev1host","cordev1host">DEV1</choice>
      <choice value="appdev2host","logdev2host","cordev2host">DEV2</choice>
      <choice value="appdev3host","logdev3host","cordev3host">DEV3</choice>
      <choice value="appdev4host","logdev4host","cordev4host">DEV4</choice>
      <choice value="appsit1host","logsit1host","corsit1host">SIT1</choice>
      <choice value="appsit2host","logsit2host","corsit2host">SIT2</choice>
      <choice value="appsit3host","logsit3host","corsit3host">SIT3</choice>
      <choice value="appsit4host","logsit4host","corsit4host">SIT4</choice>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>Incoming Count &amp; Total Count</title>
        <search>
          <query>index=test-index source=application.logs $host$ "Incoming count"
            |stats count by "Incoming count"
            |appendcols index=test-index source=application.logs $host$ "Total count"
            |stats count by "Total count"
            |table "Incoming count" "Total count"</query>
          <earliest>timepicker.earliest</earliest>
          <latest>timepicker.latest</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentageRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
<form>
I don't think you can. Null routing should come first in your props and transforms (they are applied in left-to-right order), otherwise all the data will get discarded. So look at the order of your props; I'm sure the null route is first in the order and matches the jkl.txt logs.

What you want to do now is to explicitly add jkl.txt for ingest, so the method would be to whitelist only the files you want to be logged, as in the example below.

[monitor://D:\Logs\*]
sourcetype = abc
index = def
whitelist = (jkl\.txt|myother_files\.txt)$

So I think you may have to modify the null routing or disable it.
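For reference, a sketch of the order-sensitive pattern this refers to, along the lines of the documented "discard everything except selected events" approach (stanza names and the regex are illustrative):

# props.conf
[abc]
TRANSFORMS-routing = setnull, setparsing

# transforms.conf
[setnull]
# send everything to the null queue first
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
# then pull the jkl.txt events back to the index queue
SOURCE_KEY = MetaData:Source
REGEX = jkl\.txt
DEST_KEY = queue
FORMAT = indexQueue

Because the transforms run left to right, the second one wins for matching events; reversing the order would discard everything.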
Can you dynamically change the chart type (i.e. from bar to line) using a dropdown menu? At the moment, I've created multiple charts and am using show and hide (depending on the option selected) to serve this purpose. I was wondering if there's an easier/cleaner/simpler way of achieving this.
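For context, a minimal Simple XML sketch of the show/hide pattern being described (token names and the searches are illustrative):

<input type="dropdown" token="chart_type">
  <label>Chart type</label>
  <choice value="bar">Bar</choice>
  <choice value="line">Line</choice>
  <change>
    <condition value="bar">
      <set token="show_bar">true</set>
      <unset token="show_line"></unset>
    </condition>
    <condition value="line">
      <set token="show_line">true</set>
      <unset token="show_bar"></unset>
    </condition>
  </change>
</input>
<chart depends="$show_bar$">
  <search><query>index=_internal | timechart count</query></search>
  <option name="charting.chart">bar</option>
</chart>
<chart depends="$show_line$">
  <search><query>index=_internal | timechart count</query></search>
  <option name="charting.chart">line</option>
</chart>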
Hi All, I have a field called content.payload whose value looks like the one below. How do I extract these values? {fileName=ExchangeRates.csv, periodName=202403, status=SUCCESS, subject=, businessEventMessage=RequestID: 101524, GL Monthly Rates - Validate and upload program}
Hi, I am trying to do a chart overlay using a normal distribution graphic, based upon the mean and standard deviation acquired from the fieldsummary command. I can generate the values in Perl (below) for a bell curve. Can you tell me how to do this in the Splunk dashboard XML? Thanks.

#!/usr/bin/perl
# min, max, count, mean, stdev all come from the fieldsummary command.
$min = 0.442;
$max = 0.507;
$mean = 0.4835625;
$stdev = 0.014440074377630105;
$count = 128;
$pi = 3.141592653589793238462;
# The numbers above do not indicate a Gaussian distribution.
# Create an artificial normal distribution (for the plot overlay)
# based on 6-sigma.
$min = sprintf("%.3f", $mean - 3.0*$stdev);   # use sprintf as a rounding function
$max = sprintf("%.3f", $mean + 3.0*$stdev);
$interval = ($max - $min)/($count - 1);
$x = $min;
for ($i=0; $i<$count; $i++) {
    $y = (1.0/($stdev*sqrt(2.0*$pi))) * exp(-0.5*((($x-$mean)/$stdev)**2));
    $myFIELD[$i] = sprintf("%.3f", $y);
    printf("%s\n", $myFIELD[$i]);
    $x = $x + $interval;
}
exit;
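Not an answer to the XML wiring itself, but the same curve can be generated natively in SPL, which makes the overlay easier to hook up in a dashboard panel; a sketch using the constants from the Perl snippet:

| makeresults count=128
| streamstats count AS i
| eval mean=0.4835625, stdev=0.014440074377630105
| eval low=mean-3.0*stdev, high=mean+3.0*stdev
| eval x=low + (i-1)*(high-low)/127
| eval y=(1.0/(stdev*sqrt(2.0*pi()))) * exp(-0.5*pow((x-mean)/stdev, 2))
| table x y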
Hello, can anyone help me with a query that lists all the saved searches in my Splunk system, along with the time taken by them to run completely?
It's the opposite for me: Windows events are fine, but the application logs have the problem. I will try upgrading the forwarder and check, though.
Just moved to AlmaLinux 9.3 (from RHEL 7, yikes!). Systemd-managed boot-start works fine. My problem is that when I tried to deploy an app with a restart, Splunk was not able to start up, complaining it was managed by systemd. Has anyone else come across this? Splunk 9.0.5
Retention of indexed data is applied at the bucket (folder) level. Buckets are only frozen (deleted, or optionally archived) when the newest event in the bucket is older than frozenTimePeriodInSecs. Therefore you may have a bucket that contains both new and really old data, but the old data won't be frozen until all of the data in the bucket is old enough (Splunk calculates this in the background).
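For illustration, this is where the setting lives; a sketch of an indexes.conf stanza (the index name and values are placeholders):

# indexes.conf on the indexers
[myindex]
# 90 days; a bucket is frozen only once its *newest* event is older than this
frozenTimePeriodInSecs = 7776000
# optional: archive frozen buckets instead of deleting them
# coldToFrozenDir = /opt/splunk/frozen/myindex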
I'm investigating why Splunk is keeping data beyond the retention period stated in frozenTimePeriodInSecs. How can I fix this?
Hello, has the problem been solved? I'm having a similar problem.
Not yet. I'll be trying to work this out in the lab. If anyone else finds a solution other than the apparently painful process of removing the forwarder, please let me know! Thanks!
I have logs being monitored from Windows as below:

[monitor://D:\Logs\*]
sourcetype = abc
index = def

I also currently have info logs being null routed, which applies to all of //D:\Logs\jkl.txt, and therefore we don't see any logs from //D:\Logs\jkl.txt in Splunk. Now, without modifying the null route in props and transforms, I want to ingest logs from //D:\Logs\jkl.txt. How can I stop the null route from applying to these specific logs?
Hello Team, Deployment with:
- HF with ACK when sending to the indexer
- HEC on the HF with ACK
- application sending events via HEC on the HF with ACK

Even in this model there is a chance that some of the events will be missed. The application might get an ACK from HEC, but if the event is still in the HF's output queue (not yet sent to the indexer) and we have a non-graceful reboot of the HF (so that it could not flush its output queue), the event is lost. Can you confirm? What would be the best way to address it, so that once the application receives an ACK we have an end-to-end guarantee that the event is indexed? Thanks, Michal
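For reference, a sketch of the HEC indexer-acknowledgment flow this question is about (host, token, and channel GUID are placeholders): the HTTP 200 on the event POST only means the event was received, while the ack endpoint reports true only after the event has been indexed, so the application should poll the ack before treating the event as durable.

# send an event on a channel (indexer acknowledgment enabled on the token)
curl -k https://hf.example.com:8088/services/collector/event \
  -H "Authorization: Splunk <token>" \
  -H "X-Splunk-Request-Channel: <channel-guid>" \
  -d '{"event": "test event"}'
# -> {"text":"Success","code":0,"ackId":0}

# poll until the ack flips to true; only then consider the event indexed
curl -k https://hf.example.com:8088/services/collector/ack \
  -H "Authorization: Splunk <token>" \
  -H "X-Splunk-Request-Channel: <channel-guid>" \
  -d '{"acks": [0]}'
# -> {"acks":{"0":true}}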
1) splunk list monitor and splunk list inputstatus

3) Remember that Splunk searches by _time, which typically is the timestamp extracted from the event. In order to verify how much data you ingested during a given period of time, you need to aggregate it over _indextime. That's why I was asking about time parsing. You can also check metrics on your HFs, but if you have many sources, those UFs might not show up in metrics.log at all if they don't fall into the "most active" subset.

4) It's not about the time formats being consistent or not. It's about how they are parsed in Splunk and whether they map to the proper time. Otherwise the events might get indexed at a completely different time than you'd expect.

And one more thing: even on your syslog receiver the events can be delayed, and if those files are rotated with logrotate (as they probably are), you might be doing a summary of a day's worth of events while those events are shifted in time versus the "midnight to midnight" period. Did you verify that?
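As an illustration of point 3, a sketch of aggregating over _indextime instead of _time (the index name is a placeholder):

index=myindex _index_earliest=-1d@d _index_latest=@d
| eval indexed_hour=strftime(_indextime, "%Y-%m-%d %H")
| stats count BY indexed_hour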
I'm a beginner; can you be more specific? I'm having the same problem and am looking forward to your reply.