All Posts

I'm investigating why Splunk is keeping data beyond the retention period stated in frozenTimePeriodInSecs. How can I fix this?
Hello, has the problem been solved? I'm having a similar problem.
Not yet. I'll be trying to work this out in the lab. If anyone else finds a solution other than the apparently painful process of removing the forwarder, please let me know! Thanks!
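For anyone else hitting this thread's symptom: a detail that often explains it is that frozenTimePeriodInSecs is enforced per bucket, not per event; a bucket is frozen only once its newest event exceeds the retention period, so old events sharing a bucket with recent ones stay searchable. A minimal indexes.conf sketch (the index name my_index and the 90-day value are illustrative, not from this thread):

[my_index]
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# A bucket rolls to frozen only when its NEWEST event is older than
# this many seconds (90 days here), so a bucket spanning a wide time
# range retains its oldest events until the whole bucket ages out.
frozenTimePeriodInSecs = 7776000

Buckets spanning a wide time range (often a symptom of bad timestamp parsing) are the usual culprit.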
I have logs from Windows being monitored as below:

[monitor://D:\Logs\*]
sourcetype = abc
index = def

I also currently have info logs being null routed, which applies to all of D:\Logs\jkl.txt, and therefore we don't see any logs from D:\Logs\jkl.txt in Splunk. Now, without modifying the null route in props and transforms, I want to ingest logs from D:\Logs\jkl.txt. How can I prevent the null route from applying to this specific log?
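One possible approach, offered as a sketch rather than a tested fix: leave the existing null route untouched and add a source-scoped transform that puts events from that one file back on the index queue, since the last transform to set the queue key for an event wins. The stanza and transform names below are illustrative, this assumes the null route is a standard props/transforms chain on the parsing tier, and transform ordering across matching stanzas should be verified in your environment:

# props.conf
[source::D:\\Logs\\jkl.txt]
TRANSFORMS-keepjkl = keep_jkl

# transforms.conf
[keep_jkl]
REGEX = .
DEST_KEY = queue
FORMAT = indexQueue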
Hello Team, deployment with:
- HF with ACK when sending to the indexer
- HEC on the HF with ACK
- application sending events via HEC on the HF with ACK
Even in this model there is a chance that some of the events will be missed. The application might get an ACK from HEC, but if the event is still in the HF's output queue (not yet sent to the indexer) and the HF is rebooted non-gracefully (so it cannot flush its output queue), the event is lost. Can you confirm? What would be the best way to address this, so that once the application receives an ACK we have an end-to-end guarantee that the event is indexed? Thanks, Michal
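A hedged sketch of the two acknowledgment layers involved (stanza names, token, and server list are placeholders). The key point of HEC indexer acknowledgment is that when useACK is set on the token, the immediate HTTP 200 from /services/collector/event only means "received": the application must poll /services/collector/ack with the returned ackId, and the ack turns true only after the data has been delivered downstream, which is what closes the reboot window described above. Clients still need to treat ack timeouts as "resend".

# inputs.conf on the HF
[http://my_app_token]
token = <your-token>
useACK = true

# outputs.conf on the HF
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
useACK = true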
1) splunk list monitor and splunk list inputstatus
3) Remember that Splunk searches by _time, which is typically the timestamp extracted from the event. To verify how much data you ingested during a given period of time, you need to aggregate over _indextime (a sketch follows below); that's why I was asking about time parsing. You can also check metrics on your HFs, but if you have many sources, those UFs might not show up in metrics.log at all if they don't fall into the "most active" subset.
4) It's not about the time formats being consistent or not. It's about how they are parsed in Splunk and whether they map to the proper time. Otherwise the events might get indexed at a completely different time than you'd expect. And one more thing: even on your syslog receiver the events can be delayed, and if those files are rotated with logrotate (as they probably are), you might be doing a summary of a day's worth of events while those events are shifted in time versus the "midnight to midnight" period. Did you verify that?
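A minimal SPL sketch of the _indextime aggregation mentioned in point 3 (index and sourcetype are placeholders; run it over a wide _time range, since the time picker still filters on _time):

index=your_index sourcetype=your_sourcetype
| eval itime=_indextime
| bin itime span=1h
| stats count BY itime
| eval indexed_hour=strftime(itime, "%F %H:%M")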
I'm a beginner; can you be more specific? I'm having the same problem and am looking forward to your reply.
Found a solution by splitting out

filelog/mule-logs-volume:
  include:
    - /splunk-otel/*/app*.log
    - /splunk-otel/*/*/app*.log

into two separate filelog entries, as such:

filelog/mule-logs-volume1:
  include:
    - /daas-splunk-otel/*/*/dla*.log
  start_at: beginning
filelog/mule-logs-volume2:
  include:
    - /daas-splunk-otel/*/dla*.log
  start_at: beginning

and removing all the router stuff.
Hi deepakc, thanks for the quick reply. The thing is, I have only started to build the app but never finished it, so it now shows up as a 'husk' of an app, so to speak, with no data collection configured yet. However, you were right that the error I've seen has something to do with the validation process, and I'm now trying to make heads or tails of the _internal logs as suggested by Splunk (which read, for example, that the props.conf file of the new app is missing, which indeed it is, because I haven't finished setting it up yet). I will update on potential findings once I've combed through the logs and tried to remedy the missing files.
1) I didn't find any errors in splunkd.log on the UF. How would I "check status of inputs on the UF"?
2) I found the differences in various logs, but I will check the internal logs; I haven't done that yet.
3) Discrepancy: see the other replies to Giuseppe.
4) Time parsing: I have added some samples below, as the time formats are consistent with the other events.
5) So far there are no rules on the HFs.
Question in the title. Thanks in advance!
Well... there are several things to consider here.
1. Are all files being read properly (check the status of inputs on the UF, as sketched after this list; check for errors; verify that you're not hitting limits on open files and so on)?
2. Are other files from the same UF (the typical candidates for cross-checking would be the UF's own logs) getting ingested properly?
3. How did you verify the discrepancy between those numbers?
4. Are your time parsing rules properly set up? That can heavily influence _where_ (or rather "when") the events are indexed. So you might be getting the events ingested "properly" but you might just not be seeing them while searching.
5. Do you have any rules (props/transforms, ingest actions) on your HFs/indexers that filter the events (or move them to other indexes)?
There are many things that could affect your ingestion process.
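A quick sketch for points 1 and 2 (YOUR_UF is a placeholder hostname, and the component list is a reasonable starting set, not exhaustive). On the UF itself:

$SPLUNK_HOME/bin/splunk list inputstatus
$SPLUNK_HOME/bin/splunk list monitor

And from a search head, checking the UF's own internal logs:

index=_internal host=YOUR_UF source=*splunkd.log* log_level IN (ERROR, WARN) component IN (TailReader, TailingProcessor, WatchedFile)
| stats count BY component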
I don't think this is a cert issue. If you use the AOB, it tries to validate your app for certification via the online certificate validation service, basically being given a stamp of approval, and it needs the below: "Enter the login settings for your Splunk.com account. This information is required for the app precertification process". You normally get this via your sales process. For the proxy part, it could be incorrect credentials; I don't think it's a cert issue, but I could be wrong. There is a section in the AOB for where self-signed certs should go, but I think this is a red herring. https://docs.splunk.com/Documentation/AddonBuilder/4.1.4/UserGuide/ConfigureDataCollection
If the app is installed on the SH, it will be replicated to the indexer UNLESS it is excluded from the bundle. To exclude files from the bundle, add entries to the [replicationDenyList] stanza in distsearch.conf and restart the SH.

[replicationDenyList]
MSbin = E:\Splunk\etc\apps\TA-microsoft-graph-security-add-on-for-splunk\bin\*
Hi @michaelteck, if you give the monitor command a path, Splunk reads all the files in that path. You can exclude files older than a given age (e.g. 1 day), adding a parameter to the input stanza (a full stanza sketch follows below):

ignoreOlderThan = 1d

Ciao. Giuseppe
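Putting this together with the stanza from the question, a sketch (one caveat worth checking in the inputs.conf docs: files skipped by ignoreOlderThan may never be re-checked, even if they are later updated, so it can be risky on slowly rotating logs):

[monitor:///data/logs/.../100000*.log]
disabled = false
sourcetype = log4j
host = PC
followTail = 0
index = test_wild
# files whose modification time is older than 1 day are skipped
ignoreOlderThan = 1d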
A bit late to the party. But seriously, getting the size of the index is one thing; before that, we need to define what we mean by it. An index can be measured by many different parameters:
1. Cumulative size of all indexed events (that's what license usage counts as well)
2. Size of raw event files (compressed or not)
3. Cumulative size of everything related to just events (raw data, metadata)
4. Cumulative size of data regarding events as well as summaries created for the given index
5. Any of the above, but expressed not in terms of file sizes but in terms of usage of the underlying storage (as in block-aligned or similar)
A couple of search sketches for these follow below.
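Two SPL sketches matching points 1 and 3 above (your_index is a placeholder; the license usage search assumes access to the license manager's _internal data). On-disk size per the index's buckets:

| dbinspect index=your_index
| stats sum(sizeOnDiskMB) AS size_on_disk_mb

Licensed (indexed raw) volume for the index:

index=_internal source=*license_usage.log* type=Usage idx=your_index
| stats sum(b) AS licensed_bytes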
Hello everyone, I turn to you because I have a little problem. I have an MFT server that generates logs in a directory. In this directory the log files are stored in subdirectories named after the day, and the log files have names like 1000005847456.log. For example, today's logs (23 April 2024) are stored in the 2024-04-23/ directory. For now, I have this inputs.conf file:

[monitor:///data/logs/.../100000*.log]
disabled = false
sourcetype = log4j
host = PC
followTail = 0
index = test_wild

When I launch the Universal Forwarder, it starts listing all files in /data/logs/.../, and it also starts to send the data from the log directories from 4 days ago. I am not looking to retrieve the old log data, only today's log data. I don't understand this behavior of the Universal Forwarder. Could someone help me?
Hi @Egyas, I just ran into the same issue trying to upgrade a Splunk UF 9.1.2 -> 9.2.1 installed on a server with a Splunk Enterprise instance (just upgraded to 9.2.1). Did you find any workaround/solution other than removing one of them? Thanks in advance!
So you want to use a checkbox and not a multiselect; the two are different in the Splunk context. Here is the updated one. You can leave the checkbox unselected and just filter with the text box, or you can select the checkbox and apply the filter as well:

<form version="1.1" theme="light">
  <label>CheckBox_Text</label>
  <fieldset submitButton="false">
    <input type="checkbox" token="exclude" searchWhenChanged="true" id="checkbox">
      <label>Select to exclude</label>
      <fieldForLabel>Project</fieldForLabel>
      <fieldForValue>Project</fieldForValue>
      <search>
        <query>|makeresults count=5|streamstats count |eval Project="Project".count|eval Record="Some records "|eval Record=if(count%2==0,Record,Record."Error")</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <delimiter> ,</delimiter>
      <prefix>AND NOT Project IN (</prefix>
      <suffix>)</suffix>
      <default>""</default>
    </input>
    <input type="text" token="text_filter" searchWhenChanged="true">
      <label>Text to Filter</label>
      <default>*</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>|makeresults count=5|streamstats count |eval Project="Project".count|eval Record="Some records "|eval Record=if(count%2==0,Record,Record."Error") |where NOT like (Record,"%$text_filter$%") $exclude$</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>
There are no packet errors on the UF:

ip -s link show ens192
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 00:50:56:bb:07:59 brd ff:ff:ff:ff:ff:ff
    RX: bytes    packets  errors  dropped  overrun  mcast
    13730859826  7324421  0       0        0        358
    TX: bytes    packets  errors  dropped  carrier  collsns
    1976804117   6163908  0       0        0        0
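Since the NIC counters look clean, one possible next step (a sketch, assuming the bottleneck is on the Splunk side rather than the network) is checking for blocked queues along the pipeline via metrics.log:

index=_internal source=*metrics.log* group=queue blocked=true
| stats count BY host, name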