Yes, thank you. I got so focused on explaining why that I forgot to write what to do more explicitly.
A few additional remarks to an otherwise quite good explanation.

1. Cold buckets are rolled to frozen. By default this means they just get deleted, but they might instead be archived to yet another storage location or processed by an external script (for example, compressed and encrypted for long-term storage). But yes, by default "freezing" equals deleting the bucket.

2. As you mentioned when listing the parameters affecting the bucket lifecycle, there are also limits on volume size. So a bucket rolls from cold to frozen if any of these conditions is met:
1) The bucket is older than the retention limit (the _newest_ event in the bucket is _older_ than the limit - in other words, the whole bucket contains only events older than the retention limit).
2) The index has grown beyond its size limit.
3) The volume has grown beyond its size limit.

Obviously, condition 3) can only be met if your index directories are defined using volumes. For condition 2), Splunk freezes the oldest bucket of that index (again, the one whose newest event is oldest), but for condition 3), Splunk freezes the oldest bucket from any of the indexes contained on the volume.

You can find the actual reason a bucket was frozen by searching your _internal index for AsyncFreezer. Typically, if just one index freezes before reaching its retention period, you'd suspect that index ran out of space; if buckets from many indexes get prematurely frozen, it is more likely a volume size issue. That said, you can see a volume size limit affecting just one index if your indexes have significantly different retention periods, so that one of them contains much older events than the others.
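As a sketch of the AsyncFreezer check described above (the component name appears in splunkd's _internal logs; the exact fields available can vary by Splunk version):

index=_internal sourcetype=splunkd AsyncFreezer
| table _time host idx bkt message

Lines mentioning a specific index there should tell you whether the freeze was triggered by retention time, index size, or volume size.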
I need to find the earlier version numbers of Linux patches. I have to compare many patches, so I wanted to use a join of two queries (assuming patching happens once a month, but not all packages have an update every month). The first query would get the latest packages patched (within the last 30 days). Depending on what day of the month the patching occurred, I would like to pass the earliest datetime stamp found, minus X seconds, as MaxTime to the second query. So the second query could use the same index, source, and sourcetype, but with latest=MaxTime. Don't try this at home: putting latest=MaxTime-10 in the second query caused Splunk to laugh at me and return "Invalid value 'MaxTime-10' for time term 'latest'"...no hard feelings, Splunk laughs at me often.   Thanks for any assistance in advance. JLund
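One common pattern for this (a sketch only; the index, sourcetype, and field names here are placeholders, not from the original post) is to let a subsearch compute the cutoff and hand it back as a `latest` time term, since arithmetic like `latest=MaxTime-10` is not allowed in a time term:

index=your_index sourcetype=your_sourcetype
    [ search index=your_index sourcetype=your_sourcetype earliest=-30d
      | stats min(_time) AS MaxTime
      | eval latest = MaxTime - 10
      | return latest ]
| stats latest(version) AS previous_version by package

The `return latest` command emits `latest=<epoch value>`, which the outer search accepts as a time bound, so the subtraction happens in `eval` where it is legal.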
@deepakc Is there a formula I can use to determine the right diskSize, maxTotalDataSizeMB, and maxWarmDBCount? I think that would help me set the right values for these parameters.
From your answer, maxTotalDataSizeMB is the same size as diskSize. That's the reason it's rolling off. I still don't know which of the parameters to tune to fix it.
I am trying to query audit logs from Splunk. The logs are for Azure, but when I run the query below, it only returns the text fields and not the object or array fields like initiatedBy and targetResources. Do I need to query this data in a different manner?

index="directoryaudit"
| fields id activityDisplayName result operationType correlationId initiatedBy resultReason targetResources category loggedByService activityDateTime
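For nested JSON like this, one approach (a sketch; the nested paths below are assumptions based on the usual Azure AD audit log layout, not confirmed by the question) is to pull the object/array fields out explicitly with spath:

index="directoryaudit"
| spath path=initiatedBy.user.userPrincipalName output=initiatedByUser
| spath path=targetResources{}.displayName output=targetResourceNames
| table id activityDisplayName result initiatedByUser targetResourceNames

The `{}` suffix addresses JSON arrays, so `targetResources{}.displayName` collects the displayName of every entry in the targetResources array into a multivalue field.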
Thank you so much @deepakc  Your answer has been very helpful!
Thank you @ITWhisperer. I used your recommended query as below but was unable to get any output:

index=test1 sourcetype=test2 EVENT A
| bin Event_Time span=1s
| sort 0 Event_Time
| fieldformat Event_Time=strftime(Event_Time, "%m/%d/%y %H:%M:%S")

Please see below my old Splunk query, which uses Splunk's default "_time" field:

index=test1 sourcetype=test2 EVENT A
| bucket span=1s _time
| stats count AS EventPerSec by _time
| timechart span=1d max(EventPerSec)

Ultimately, in this query, I want to replace "_time" with "Event_Time", which is more accurate than "_time". Note that there can be multiple events in my data occurring at the exact same time (down to the second). So basically, my query finds the peak "EventPerSec" value per day. Hope this explanation helps.
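Since timechart always works off `_time`, one way to combine the two queries above (a sketch, assuming Event_Time is already an epoch-seconds number) is to count per second of Event_Time and then copy it into `_time` before the timechart:

index=test1 sourcetype=test2 EVENT A
| bin Event_Time span=1s
| stats count AS EventPerSec by Event_Time
| eval _time = Event_Time
| timechart span=1d max(EventPerSec)

If Event_Time is still a string, it would need a `| eval Event_Time=strptime(Event_Time, "<your format>")` first so that bin and timechart can treat it as a timestamp.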
Data rolls off for a few reasons. Data arrives in Splunk and then needs to move through HOT/WARM > COLD > FROZEN; otherwise data builds up and you run out of space.

1. Warm buckets move to cold when either the homePath size limit or maxWarmDBCount is reached.
2. Cold buckets are deleted when either frozenTimePeriodInSecs or maxTotalDataSizeMB is reached.

This may help show why it's moving - see the event_message field:

index=_internal sourcetype=splunkd component=BucketMover
| fields _time, bucket, candidate, component, event_message, from, frozenTimePeriodInSecs, host, idx, latest, log_level, now, reason, splunk_server, to
| fieldformat "now"=strftime('now', "%Y/%m/%d %H:%M:%S")
| fieldformat "latest"=strftime('latest', "%Y/%m/%d %H:%M:%S")
| eval retention_days = frozenTimePeriodInSecs / 86400
| table _time, component, bucket, from, to, candidate, event_message, frozenTimePeriodInSecs, retention_days, host, idx, now, latest, reason, splunk_server, log_level

You apply config via indexes.conf for the index, controlling disk constraints with the various options:

Settings:
frozenTimePeriodInSecs = (retention period in seconds - old bucket data is deleted, with an option to archive it instead, based on the newest event in the bucket)
maxTotalDataSizeMB = (limits the overall size of the index - hot, warm, and cold; the oldest buckets move to frozen when exceeded)
maxVolumeDataSizeMB = (limits the total size of all databases that reside on a volume)
maxWarmDBCount = (the maximum number of warm buckets; the oldest move to cold when exceeded)
maxHotBuckets = (the number of actively written open buckets; when exceeded, a hot bucket rolls to warm)
maxHotSpanSecs = (the maximum timespan of events a hot bucket can hold before it rolls to warm)
maxDataSize = (the size a hot bucket can reach before splunkd triggers a roll to warm)
homePath.maxDataSizeMB = (limits the hot/warm portion of an individual index)
coldPath.maxDataSizeMB = (limits the cold portion of an individual index)

See indexes.conf for details: https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/Indexesconf
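As a sketch, a retention-focused stanza tying these settings together might look like this (the index name, volume names, and values are illustrative only, not from any real deployment):

[example_index]
homePath   = volume:hot_warm/example_index/db
coldPath   = volume:cold/example_index/colddb
thawedPath = $SPLUNK_DB/example_index/thaweddb
# roughly 13 months; a bucket freezes once its newest event is older than this
frozenTimePeriodInSecs = 34186680
# overall index cap (hot + warm + cold); oldest buckets freeze first when exceeded
maxTotalDataSizeMB = 400000
# cap on warm buckets; oldest warm buckets roll to cold when exceeded
maxWarmDBCount = 300

Whichever of the time or size limits is hit first wins, which is why data can disappear before the retention period when the size caps are undersized.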
I'll check them out, thanks! @isoutamo 
Hi This is a frequently asked and answered question. You can find many answers via Google with the search phrase "site:community.splunk.com index retention time". r. Ismo
Hi Here is an excellent presentation about event distribution: "Best practises for Data Collection" by Richard Morgan. You can find it at least on the SlideShare service. r. Ismo
Did you manage to get a solution for this? It fails for me as well. Relative times like -1d work, but the MM/dd/yyyy':'HH:mm:ss format does not.
Hello guys.... I have a task to investigate why indexes roll off data before the retention age. From my findings, it shows the number of warm buckets was exceeded. Here's what the index configuration looks like. How can I fix this?

[wall]
repFactor = auto
coldPath = volume:cold/customer/wall/colddb
homePath = volume:hot_warm/customer/wall/db
thawedPath = /splunk/data/cold/customer/wall/thaweddb
frozenTimePeriodInSecs = 34186680
maxHotBuckets = 10
maxTotalDataSizeMB = 400000
</input>
<input type="dropdown" token="BankApp" searchWhenChanged="true" depends="$BankDropDown$">
  <label>ApplicationName</label>
  <choice value="*">All</choice>
  <search>
    <query>| inputlookup BankIntegration.csv | dedup applicationName | sort applicationName | table applicationName</query>
  </search>
  <fieldForLabel>applicationName</fieldForLabel>
  <fieldForValue>applicationName</fieldForValue>
  <default>*</default>
  <prefix>applicationName="</prefix>
  <suffix>"</suffix>
</input>
<input type="dropdown" token="interface" searchWhenChanged="true" depends="$BankDropDown$">
  <label>InterfaceName</label>
  <choice value="*">All</choice>
  <search>
    <query>| inputlookup BankIntegration.csv | search $BankApp$ | sort InterfaceName | table InterfaceName</query>
  </search>
  <fieldForLabel>InterfaceName</fieldForLabel>
  <fieldForValue>InterfaceName</fieldForValue>
  <default>*</default>
  <prefix>InterfaceName="</prefix>
  <suffix>"</suffix>
</input>

Query:

index=mulesoft environment=PRD $BankApp$ OR (priority="ERROR" OR priority="WARN")
| stats values(*) as * by correlationId
| rename content.InterfaceName as InterfaceName content.FileList{} as FileList content.Filename as FileName content.ErrorMsg as ErrorMsg
| eval Status=case(priority="ERROR","ERROR",priority="WARN","WARN",priority!="ERROR","SUCCESS")
| fields Status InterfaceName applicationName FileList FileName correlationId ErrorMsg message
| where $interface$ AND isnotnull(FileList)
| sort -timestamp

If I select All in a dropdown, only the matching values of the inputlookup file's fields and data should be shown; if the token is *, it shows all the values. This is the query with which I am trying to achieve that.
As you have a valid license, just request the license unlock from the same source where you got your normal license.
Thanks.
You should replace the version number in the URL with the word "latest", and then you will always get the latest version of those documents.
Hi @Simon.Rajanpaul, Thank you so much for coming back many months later and sharing a solution. I love to see it!
Hi @karthi2809, this is a new question; even if it's on the same topic, it's always better to open a new question to get a quicker and probably better answer.

Anyway, first of all, don't use the search command after the main search, because it will make your search slower.

Then, I again see a different field name than the one in the input - which is the correct one? Could you share your search with the tokens?

Ciao. Giuseppe