All Posts


First, a quick comment. Judging from your attempted SPL, and your tendency to think "join", Splunk is not laughing. It is just a stranger to you. And strangers can be intimidating. I often keep a browser tab open with https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/ so I can easily look up what's available and what syntax any command or function requires. For example, Time modifiers describes in much detail what you can give as earliest and latest. MaxTime is just not a thing in SPL. If MaxTime is a field you calculated previously, you cannot easily pass it into a subsearch (which is what the join command must call). For this latter point to strike home, you will need some more familiarity with how Splunk works. Splunk's Search Manual can be a good starting point.

Back to your actual use case: technically you can make Splunk do exactly what you want. But as I hinted above, Splunk - and Splunk practitioners like me - intimidate, nay bully, those who dare to join. Unless absolutely necessary, just use a Splunk-friendly query to achieve the same goal. It will benefit you in the short term as well as the long term.

You mention the time periods with which you tried to connect the two searches, but give no indication of what the link between the two searches is. It seems obvious that you are not trying to "join" the two searches by _time. So there must be some logic beyond just wanting to set the time intervals differently. Can you describe the actual logic in your use case? What is the output you are trying to get? What are some data characteristics that help you arrive at your output? Illustrate in concrete terms or with mockup data.
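To illustrate the point about time modifiers, here is a minimal contrast; the index and sourcetype names are made up for the example. This is valid, because -30d@d and now are time modifiers:

index=os_patching sourcetype=package_update earliest=-30d@d latest=now

whereas this is rejected, because a field name is not a time term:

index=os_patching sourcetype=package_update earliest=MaxTime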
Hello Paul, Thank you for the quick response. It's sent directly from Palo Alto to Splunk. I am using the Palo Alto App and Add-on. I am not seeing the indexes growing at all. I tried looking at the data from the Search option and tried to match it with various filters. Regards, Rabab
Hi Rabab, A few more details will be needed to help here. Is your Palo Alto setup sending directly to Splunk, via a syslog server, or via an HF/UF? Where have you tried looking for the data? Have you looked to see whether any of your indexes are growing?
I have Splunk installed on a Windows machine and have configured the Palo Alto app along with the Add-on. I have done the configuration on Palo Alto. I can see from a packet capture that Palo Alto is sending logs successfully to the Windows machine where Splunk is installed, but I cannot see anything in Splunk itself. Can anyone help? Regards, Rabab
Hi @yuanliu, Thanks very much for your response. I tried the SPL query you shared. If I dedup only application, name, and target URL, then I can see duplicates in the _time field (refer to screenshot 1), and the real data's timestamps are in screenshot 2. This in turn gives an incorrect count between the query output (count ) and the real data logs (count 12). Screenshot 1: Screenshot 2:
It's been over a week now and I still don't have access to Splunk Cloud. Customer Support hasn't been able to help me so far.
Is Splunk laughing even harder? Does the second query - the one inside the join - run before the query outside the join?
There are no miracles. So if you delete files or directories and they reappear after some time, there must be something responsible for redeploying them onto your server. There are three different internal Splunk mechanisms that can cause that:

1) Indexer cluster config bundle management - as this is not an indexer, it doesn't apply here.
2) Search head cluster deployer config push - you're saying it's a standalone search head, so it wouldn't apply either.
3) Deployment from a Deployment Server - that's a possible scenario.

I suppose the easiest way to verify whether it is configured to pull apps from a DS is to either run splunk btool deploymentclient list target-broker:deploymentServer or verify the existence of the $SPLUNK_HOME/var/run/serverclass.xml file (both checks are spelled out below).

Of course there is also the possibility that your configs are managed by some external provisioning tool like Ansible, Puppet, Chef, or some kind of in-house built script. But this is something we cannot know.
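For convenience, those two checks as commands, assuming a Linux-style install and a default $SPLUNK_HOME (adjust the paths for your environment):

$SPLUNK_HOME/bin/splunk btool deploymentclient list target-broker:deploymentServer
ls -l $SPLUNK_HOME/var/run/serverclass.xml

If the btool output shows a targetUri, or the serverclass.xml file exists, the instance is (or was) acting as a deployment client.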
Yes, thank you. I got so focused on explaining why that I forgot to write what to do more explicitly.
A few additional remarks to an otherwise quite good explanation.

1. Cold buckets are rolled to frozen. By default that means they just get deleted, but they might instead be archived to yet another storage location or processed by some external script (for example, compressed and encrypted for long-term storage). But yes, by default "freezing" equals deleting the bucket.

2. As you mentioned when listing the parameters affecting the bucket lifecycle, there are also limits regarding volume size. So a bucket might roll from cold to frozen if any of these conditions are met:

1) The bucket is older than the retention limit (the _newest_ event in the bucket is _older_ than the limit - in other words, the whole bucket contains only events older than the retention limit).
2) The index has grown beyond its size limit.
3) The volume has grown beyond its size limit.

Obviously condition 3) can be met only if your index directories are defined using volumes. In the case of condition 2), Splunk will freeze the oldest bucket for that index (again - the one whose newest event is oldest), but in the case of condition 3), Splunk will freeze the oldest bucket from any of the indexes contained on the volume.

You can find the actual reason for a bucket freezing by searching your _internal index for AsyncFreezer (a sketch of that search follows below). Typically, if just one index freezes before reaching its retention period, you'd expect that index to be running out of space, but if buckets from many indexes get prematurely frozen, it might be a volume size issue. Yet you can see a volume size limit affecting just one index if your indexes have significantly differing retention periods, so that one of them contains much older events than the other ones.
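As a hedged sketch of that AsyncFreezer search - a plain keyword search is the safest starting point, since the exact field names in the freeze messages can vary by version:

index=_internal sourcetype=splunkd AsyncFreezer
| table _time host _raw

From there you can narrow down by index name or bucket path once you see what the freeze messages contain.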
I need to find the earlier version numbers of Linux patches. I have to compare many patches, so I wanted to use a join for two queries (assuming patching happens once a month, but not all packages have an update every month). The first query would get the latest packages patched (within the last 30 days) - depending on what day of the month the patching occurred - and I would like to pass the earliest datetime stamp found, minus X seconds, as MaxTime to the second query. So the second query could use the same index, source, and sourcetype, but with latest=MaxTime. Don't try this at home: putting latest=MaxTime-10 in the second query caused Splunk to laugh at me and return "Invalid value 'MaxTime-10' for time term 'latest'"... no hard feelings, Splunk laughs at me often. Thanks for any assistance in advance. JLund
@deepakc Is there a formula I can use to determine the right diskSize, maxTotalDataSize, and maxWarmDBCount? I think that will help me set the right values for these parameters.
From your answer, maxTotalDataSizeMB is set to the same size as diskSize, which is why the data is rolling off. I still don't know which of the parameters to tune to fix it.
I am trying to query audit logs from Splunk. The logs are for Azure, but when I run the query below, it only returns the text fields and not the object or array fields like initiatedBy and targetResources. Do I need to query this data in a different manner?

index="directoryaudit" | fields id activityDisplayName result operationType correlationId initiatedBy resultReason targetResources category loggedByService activityDateTime
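If the events are JSON, one possible approach is to let spath extract the nested paths and then reference them explicitly. This is only a sketch - the nested path names (initiatedBy.user.userPrincipalName, targetResources{}.displayName) are assumptions about how your data is structured, so adjust them to whatever your raw events actually contain:

index="directoryaudit"
| spath
| table id activityDisplayName result operationType correlationId initiatedBy.user.userPrincipalName resultReason targetResources{}.displayName category loggedByService activityDateTime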
Thank you so much, @deepakc. Your answer has been very helpful!
Thank you @ITWhisperer. I used your recommended query as below but was unable to get any output:

index=test1 sourcetype=test2 EVENT A | bin Event_Time span=1s | sort 0 Event_Time | fieldformat Event_Time=strftime(Event_Time, "%m/%d/%y %H:%M:%S")

Please see below my old Splunk query, which uses Splunk's default "_time" field:

index=test1 sourcetype=test2 EVENT A | bucket span=1s _time | stats count AS EventPerSec by _time | timechart span=1d max(EventPerSec)

Ultimately, in this query, I want to replace "_time" with "Event_Time", which is more accurate than "_time". Note that there can be multiple events in my data occurring at exactly the same time (to the seconds value). So basically, my query finds the peak "EventPerSec" value in one day. Hope this explanation helps.
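For what it's worth, one possible way to keep the shape of the old query while using the extracted timestamp - this is only a sketch, and it assumes Event_Time already holds a numeric epoch value (if it is a string, convert it first with strptime):

index=test1 sourcetype=test2 EVENT A | eval _time=Event_Time | bucket span=1s _time | stats count AS EventPerSec by _time | timechart span=1d max(EventPerSec)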
Data rolls off for a few reasons. Data arrives in Splunk and then needs to move through hot/warm > cold > frozen, otherwise data builds up and you run out of space.

1. Warm buckets move to cold when either the homePath size limit or maxWarmDBCount is reached.
2. Cold buckets are deleted (frozen) when either frozenTimePeriodInSecs or maxTotalDataSizeMB is reached.

This may help show why it's moving - see the event_message field:

index=_internal sourcetype=splunkd component=BucketMover
| fields _time, bucket, candidate, component, event_message, from, frozenTimePeriodInSecs, host, idx, latest, log_level, now, reason, splunk_server, to
| fieldformat "now"=strftime('now', "%Y/%m/%d %H:%M:%S")
| fieldformat "latest"=strftime('latest', "%Y/%m/%d %H:%M:%S")
| eval retention_days = frozenTimePeriodInSecs / 86400
| table _time, component, bucket, from, to, candidate, event_message, frozenTimePeriodInSecs, retention_days, host, idx, now, latest, reason, splunk_server, log_level

You apply config via indexes.conf for the index, constraining disk use with the various options:

frozenTimePeriodInSecs - retention period in seconds; old bucket data is deleted (with the option to freeze/archive it instead) based on the newest event in the bucket
maxTotalDataSizeMB - limits the overall size of the index (hot, warm and cold); the oldest buckets move to frozen when it is exceeded
maxVolumeDataSizeMB - limits the total size of all indexes that reside on a volume
maxWarmDBCount - the maximum number of warm buckets before the oldest move to cold
maxHotBuckets - the number of actively written open buckets; when exceeded, the oldest hot bucket rolls to warm
maxHotSpanSecs - the upper bound on the timespan of events in a hot/warm bucket before it rolls
maxDataSize - the size a hot bucket can reach before splunkd triggers a roll to warm
homePath.maxDataSizeMB - limits the hot/warm portion of an individual index
coldPath.maxDataSizeMB - limits the cold portion of an individual index

See indexes.conf for details: https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/Indexesconf
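As a hedged illustration, not a recommendation - the index name and the numbers below are placeholders - a stanza combining a time-based and a size-based limit might look like this in indexes.conf:

[my_index]
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# keep events for 90 days (90 * 86400 seconds)
frozenTimePeriodInSecs = 7776000
# cap the whole index at roughly 100 GB; the oldest buckets freeze first if this is reached
maxTotalDataSizeMB = 102400

Whichever limit is hit first wins, so an index can lose data well before its retention period if the size cap is too small for the daily ingest volume.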
I'll check them out, thanks! @isoutamo 
Hi, this is a quite frequently asked and answered question. You can find many answers via Google with the search phrase "site:community.splunk.com index retention time". r. Ismo
Hi, there is an excellent presentation about event distribution, "Best practises for Data Collection - Richard Morgan". You can find it at least on the SlideShare service. r. Ismo