I have Splunk installed on a Windows machine and configured the Palo Alto app along with the add-on. I have done the configuration on the Palo Alto side. I can see from a packet capture that Palo Alto is sending logs successfully to the Windows machine where Splunk is installed, but I cannot see anything in Splunk itself. Can anyone help? Regards, Rabab
Hi @yuanliu , thanks very much for your response. I tried the SPL query you shared. If I dedup only application, name and target url, then I can see duplicates in the _time field (refer to screenshot 1), and the real data's timestamps are in screenshot 2, which in turn gives us a mismatch between the query output count and the real data log count (12).
Screenshot 1:
Screenshot 2:
It's been over a week now and I still don't have access to Splunk Cloud. Customer Support hasn't been able to help me so far.
Is Splunk laughing even harder? Does the second query, the one inside the join, run before the query outside the join?
There are no miracles. So if you delete files or directories and they reappear after some time, there must be something responsible for redeploying them onto your server. There are three different internal Splunk mechanisms that can cause that:
1) Indexer cluster config bundle management - as this is not an indexer, it doesn't apply here.
2) Search head cluster deployer config push - you're saying it's a standalone search head, so it wouldn't apply either.
3) Deployment from a Deployment Server - that's a possible scenario.
I suppose the easiest way to verify whether it is configured to pull apps from a DS is to either run splunk btool deploymentclient list and look for a target-broker:deploymentServer stanza, or verify the existence of the $SPLUNK_HOME/var/run/serverclass.xml file (see the sketch below).
Of course there is also the possibility that your configs are managed by some external provisioning tool like Ansible, Puppet, Chef, or some kind of in-house script. But that is something we cannot know.
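A minimal sketch of those two checks, assuming a default $SPLUNK_HOME layout (adjust paths for your installation):

# Show deployment client settings and the config files they come from;
# a target-broker:deploymentServer stanza means a DS is configured
$SPLUNK_HOME/bin/splunk btool deploymentclient list --debug

# This file is created once the instance has checked in with a deployment server
ls -l $SPLUNK_HOME/var/run/serverclass.xml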
Yes, thank you. I got so focused on explaining why that I forgot to write what to do more explicitly.
A few additional remarks to an otherwise quite good explanation.
1. Cold buckets are rolled to frozen. By default that means they just get deleted, but they might instead be archived to yet another storage location or processed by some external script (for example, compressed and encrypted for long-term storage). But yes, by default "freezing" equals deleting the bucket.
2. As you mentioned when listing the parameters affecting bucket lifecycle, there are also limits on volume size. So buckets might roll from cold to frozen if any of these conditions are met:
1) The bucket is older than the retention limit (the _newest_ event in the bucket is _older_ than the limit - in other words, the whole bucket contains only events older than the retention limit).
2) The index has grown beyond its size limit.
3) The volume has grown beyond its size limit.
Obviously condition 3) can only be met if your index directories are defined using volumes. For condition 2) Splunk will freeze the oldest bucket of that index (again - the one whose newest event is oldest), but for condition 3) Splunk will freeze the oldest bucket from any of the indexes contained on the volume.
You can find the actual reason for freezing a bucket by searching your _internal index for AsyncFreezer (see the example search below). Typically, if just one index freezes before reaching its retention period, you'd expect that index to have run out of space, but if buckets from many indexes get prematurely frozen it might be a volume size issue. That said, you can see a volume size limit affecting just one index if your indexes have significantly different retention periods, so that one of them contains much older events than the others.
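For example, a simple way to pull those AsyncFreezer messages (the keyword search is the reliable part; field names such as event_message may vary between Splunk versions):

index=_internal sourcetype=splunkd AsyncFreezer
| table _time, host, component, log_level, event_message
| sort - _time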
I need to find the earlier version numbers of Linux patches. I have to compare many patches, so I wanted to use a join between two queries (assuming patching happens once a month, but not all packages have an update every month). The first query would get the latest packages patched (within the last 30 days). Depending on what day of the month the patching occurred, I would like to pass the earliest datetime stamp found, minus X seconds, as MaxTime to the second query. So the second query could use the same index, source and sourcetype, but with latest=MaxTime. Don't try this at home: putting latest=MaxTime-10 in the second query caused Splunk to laugh at me and return "Invalid value 'MaxTime-10' for time term 'latest'"... no hard feelings, Splunk laughs at me often.
Thanks for any assistance in advance. JLund
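For illustration only, one common way to feed a computed time bound into a search is to have a subsearch return an earliest/latest term; the index, sourcetype and field names below are placeholders, not a tested answer to this question:

index=os sourcetype=linux_patches
    [ search index=os sourcetype=linux_patches earliest=-30d
      | stats min(_time) as MaxTime
      | eval latest = MaxTime - 10
      | return latest ]
| stats latest(package_version) as previous_version by package_name

The subsearch runs first, computes MaxTime minus 10 seconds, and hands the result back to the outer search as a latest= time term, which sidesteps the "Invalid value 'MaxTime-10'" error caused by doing arithmetic inside the time term itself.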
@deepakc Is there a formula I can use to determine the right diskSize, maxTotalDataSize and maxWarmDBCount? I think that would help me set the right values for these parameters.
From your answer, maxTotalDataSizeMB is the same size as diskSize - that's the reason it's rolling off. I still don't know which of the parameters to tune to fix it.
I am trying to query audit logs from Splunk. The logs are from Azure, but when I run the query below, it only returns the text fields and not the object or array fields like initiatedBy and targetResources. Do I need to query this data in a different manner?

index="directoryaudit"
| fields id activityDisplayName result operationType correlationId initiatedBy resultReason targetResources category loggedByService activityDateTime
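For illustration, nested JSON fields usually have to be referenced by their full dotted paths (or extracted with spath first). The paths below follow the typical Azure AD audit log schema and are assumptions, not confirmed against this data:

index="directoryaudit"
| spath
| table id activityDisplayName result initiatedBy.user.userPrincipalName targetResources{}.displayName targetResources{}.type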
Thank you so much @deepakc. Your answer has been very helpful!
Thank you @ITWhisperer . I used your recommended query as below, but I am unable to get any output:

index=test1 sourcetype=test2 EVENT A
| bin Event_Time span=1s
| sort 0 Event_Time
| fieldformat Event_Time=strftime(Event_Time, "%m/%d/%y %H:%M:%S")

Please see below my old Splunk query, which uses Splunk's default "_time" field:

index=test1 sourcetype=test2 EVENT A
| bucket span=1s _time
| stats count AS EventPerSec by _time
| timechart span=1d max(EventPerSec)

Ultimately, I want to replace "_time" in this query with "Event_Time", which is more accurate than "_time". Note that there can be multiple events in my data occurring at exactly the same time (down to the second). So basically, my query finds the peak "EventPerSec" value per day. Hope this explanation helps.
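One possible reason the recommended query returns nothing is that Event_Time is still a string, so bin and strftime have nothing numeric to work on. A sketch that parses it to epoch time first; the strptime format is an assumption based on the display format above:

index=test1 sourcetype=test2 EVENT A
| eval Event_Time=strptime(Event_Time, "%m/%d/%y %H:%M:%S")
| bin Event_Time span=1s
| stats count AS EventPerSec by Event_Time
| eval _time=Event_Time
| timechart span=1d max(EventPerSec)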
Here are a few questions from the session (get the full Q&A deck and live recording in the #office-hours Slack channel):

Q1: How to best optimize threat analysis?
- Your goal should be to get to auto-containment / auto-reinforcement.
- Using a trustworthy analysis signal is key to taking containment actions via automation.
- Splunk Attack Analyzer, with its consistent, comprehensive and automated threat analysis, can provide a strong foundation.

Q2: How does Splunk differentiate from Palo Alto Networks?
- Attack chain following - allows SAA to navigate complex attack chains involving obfuscation techniques like lure pages, QR codes and captchas. Sandbox analysis often ends up incomplete in such scenarios.
- Phishing detection - traditional sandboxes provide very little coverage for phishing, which is often the highest-volume analysis workload for SOC/IR teams.

Q3: Is Attack Analyzer part of a standard Splunk package or a separate application?
- Attack Analyzer is an independent solution with close integrations to Splunk SOAR, Splunk ES and Splunk Platform.
- Its pricing is determined by answers to the following questions: how many individuals gain value from it (# of seats), and how many artifacts need to be analyzed (# of submissions).

Other questions (check the #office-hours Slack channel for responses):
- Is it possible to have a free version of it installed for a limited time for a small POC?
- How does Attack Analyzer integrate with Splunk Enterprise Security?
- How does Attack Analyzer compare to a sandbox tool?
- How does Attack Analyzer deal with things like QR codes or Cloudflare captchas?

Live questions:
- To detect these kinds of phishing attacks, does Splunk Attack Analyzer scan the mail server or the client? Or does it work with the firewall?
- Does Splunk Attack Analyzer work on objects exposed to the Internet, or does it act inside a company?
- Mostly, we know that there aren't usually eyes on a monitor 24/7. So what mechanism does SAA use to notify security team members that a high-risk finding was identified or a particular SOAR playbook was triggered, and where no SOAR playbook is associated with the finding, what is the notification mechanism to make security team members aware of the situation?
- Besides phishing and URL scanning, are there any other use cases for this product?
- Is there an option to get scans from other users/customers, like IOCs, etc.?
Data rolls off for a few reasons. Data arrives in Splunk and then needs to move through HOT/WARM > COLD > FROZEN, otherwise data will build up and you will run out of space.

1. Warm buckets move to cold when either the homePath size limit or maxWarmDBCount is reached.
2. Cold buckets are deleted (frozen) when either frozenTimePeriodInSecs or maxTotalDataSizeMB is reached.

This may help show why it's moving - see the event_message field:

index=_internal sourcetype=splunkd component=BucketMover
| fields _time, bucket, candidate, component, event_message, from, frozenTimePeriodInSecs, host, idx, latest, log_level, now, reason, splunk_server, to
| fieldformat "now"=strftime('now', "%Y/%m/%d %H:%M:%S")
| fieldformat "latest"=strftime('latest', "%Y/%m/%d %H:%M:%S")
| eval retention_days = frozenTimePeriodInSecs / 86400
| table _time, component, bucket, from, to, candidate, event_message, frozenTimePeriodInSecs, retention_days, host, idx, now, latest, reason, splunk_server, log_level

You apply config via indexes.conf for the index, constraining disk usage with the various options:

Settings:
frozenTimePeriodInSecs = (retention period in seconds - old bucket data is deleted, or optionally frozen/archived, based on the newest event in the bucket)
maxTotalDataSizeMB = (limits the overall size of the index - hot, warm and cold; beyond this, the oldest buckets move to frozen)
maxWarmDBCount = (the maximum number of warm buckets; when exceeded, the oldest warm buckets move to cold)
maxHotBuckets = (the number of actively written open buckets - when exceeded, a hot bucket moves to the warm state)
maxHotSpanSecs = (the maximum timespan of events a hot bucket can cover before it rolls to warm)
maxDataSize = (the size a hot bucket can reach before splunkd triggers a roll to warm)
maxVolumeDataSizeMB = (limits the total size of all indexes that reside on a volume)
homePath.maxDataSizeMB = (limits the size of the hot/warm portion of an individual index)
coldPath.maxDataSizeMB = (limits the size of the cold portion of an individual index)

See indexes.conf for details: https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/Indexesconf
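A minimal illustrative indexes.conf stanza tying a few of these settings together; the index name and numbers are made-up examples, not sizing recommendations:

[example_index]
homePath   = $SPLUNK_DB/example_index/db
coldPath   = $SPLUNK_DB/example_index/colddb
thawedPath = $SPLUNK_DB/example_index/thaweddb
# retain events for roughly 90 days before freezing (deleting) buckets
frozenTimePeriodInSecs = 7776000
# cap the whole index (hot + warm + cold) at about 500 GB
maxTotalDataSizeMB = 500000
# roll the oldest warm bucket to cold once 300 warm buckets exist
maxWarmDBCount = 300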
I'll check them out, thanks! @isoutamo 
Hi, this is quite an often asked and answered question. You can find many answers via Google with the search phrase "site:community.splunk.com index retention time". r. Ismo
Hi, here is an excellent presentation about event distribution: "Best practises for Data Collection" by Richard Morgan. You can find it at least on the SlideShare service. r. Ismo
Did you manage to get a solution for this? This fails for me as well. Relative times work, like -1d and so on, but this format does not: MM/dd/yyyy':'HH:mm:ss
Hello guys... I have a task to investigate why indexes roll off data before the retention age. From my findings, it shows the number of warm buckets was exceeded. Here's what the index configuration looks like. How can I fix this?

[wall]
repFactor = auto
coldPath = volume:cold/customer/wall/colddb
homePath = volume:hot_warm/customer/wall/db
thawedPath = /splunk/data/cold/customer/wall/thaweddb
frozenTimePeriodInSecs = 34186680
maxHotBuckets = 10
maxTotalDataSizeMB = 400000