All Posts

Key Reasons for Using Different Buckets in Splunk:

1. Data Lifecycle Management: Splunk categorizes buckets to handle data at different stages of its lifecycle. As data ages, it moves through different types of buckets:
- Hot Buckets: Where the data is first written. These are actively being indexed.
- Warm Buckets: Once hot buckets are full, they move to warm buckets. These are still searchable but no longer being written to.
- Cold Buckets: As data ages, it moves to cold buckets. These contain older data and are stored on cheaper, slower storage, but are still searchable.
- Frozen Buckets: Data that is moved out of Splunk, often archived or deleted based on retention policies. Frozen data is not searchable in Splunk unless thawed (restored).
This structure helps manage data efficiently and ensures that recent data is readily available while older data is archived or deleted based on retention policies.

2. Performance Optimization: Splunk searches through recent (hot/warm) and historical (cold) data differently to optimize performance. By organizing data into different buckets, Splunk can prioritize newer data, which is searched more often, while minimizing resource usage on older data. This improves search performance because Splunk doesn't need to scan all data equally.

3. Efficient Resource Allocation: Storing data in different types of buckets allows for resource optimization. For example, hot and warm buckets typically reside on faster, more expensive storage (SSD or fast disks) to ensure quick access to recent data, while cold buckets are stored on slower, cheaper storage, conserving resources while still keeping older data searchable.

4. Retention and Compliance: Organizations often have different retention requirements for data. By using bucket configurations, Splunk allows you to retain data based on the bucket type. For instance, you might keep hot/warm data for a shorter period and cold data for longer. Frozen buckets can be used to archive data to long-term storage (or delete it) based on compliance requirements.

5. Data Recovery and Index Integrity: If there's an issue with the index or corruption, buckets help isolate and recover specific portions of the data without impacting the entire index. Splunk can selectively roll back or restore data from buckets, which is easier than dealing with a single monolithic structure.

6. Search Granularity and Parallelism: Different buckets allow Splunk to parallelize searches more effectively. When a search is performed, Splunk can search through hot, warm, and cold buckets in parallel, improving the speed of search execution.

7. Historical Data Archiving: Frozen buckets enable you to offload older, less frequently accessed data to external storage or archive systems, allowing Splunk to manage historical data cost-effectively without overwhelming the system with too much data.
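The lifecycle and retention behavior described above is driven by per-index settings in indexes.conf. As a rough sketch only, with a hypothetical index name and purely illustrative paths and retention values:

[my_index]
homePath   = $SPLUNK_DB/my_index/db          # hot and warm buckets (fast storage)
coldPath   = $SPLUNK_DB/my_index/colddb      # cold buckets (cheaper, slower storage)
thawedPath = $SPLUNK_DB/my_index/thaweddb    # restored (thawed) frozen data
maxDataSize = auto_high_volume               # size at which a hot bucket rolls to warm
maxWarmDBCount = 300                         # warm buckets kept before rolling to cold
frozenTimePeriodInSecs = 7776000             # ~90 days; data older than this is frozen
coldToFrozenDir = /archive/my_index          # archive frozen buckets here; omit to delete instead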
Can't a hot bucket just roll directly to a cold bucket? Or is that not possible? Does it have anything to do with the fact that the hot bucket is actively being written to? Can anyone please shed some light on this at a technical level, as I'm not getting the answer I'm looking for from the documentation. Thanks in advance.
Hi @marcoscala sorry for the late response. I only saw your comment just now. Here's how we did it:
1. Before anything else, make sure that the connection between your Splunk forwarder and SFMC is established and nothing is blocking it. This is where we had our problem initially.
2. Set up HEC on your Splunk forwarder. Make sure to set the allowQueryStringAuth setting to "true". This makes your HEC endpoint act as a webhook, which is important because SFMC only lets you enter an endpoint URL and nothing else.
3. Register your callback URL in SFMC using the HEC endpoint URL and token from step 2. Your callback URL should look something like this: https://<Your HEC endpoint URL here>:8088/services/collector/event?token=<your HEC token here> If successful, this returns a callbackid and verification key to be used in the next step.
4. Manually verify the callback created in step 3. I'm not sure whether it matters where you do this, but to be safe, execute the command on the server running your Splunk forwarder instance.
5. Create your ENS in SFMC.
Assuming everything went well, you should now see the events coming in. I suggest temporarily removing all the filters from your ENS until you've confirmed that you're indeed receiving data from it.
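For reference, once allowQueryStringAuth is set to true you can sanity-check the HEC endpoint from the forwarder host before involving SFMC; a minimal sketch with placeholder host and token values:

curl -k "https://<your-hec-host>:8088/services/collector/event?token=<your-hec-token>" -d '{"event": "connectivity test"}'

A {"text":"Success","code":0} response indicates the token and endpoint are working.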
How do I generate reports and run stats on key=value pairs from just the message field, ignoring the rest of the fields?  {"cluster_id":"cluster", "message":"Excel someType=MY_TYPE totalItems=1 errors=ABC, XYZ status=success","source":"some_data"}   I have gone through multiple examples but could not find something concrete that will help me: group by the key someType, compute stats on totalItems, and list the top errors (ABC, XYZ). These don't have to be in the same query; I assume the top-errors grouping would be a separate query.
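A minimal SPL sketch of one possible approach, assuming the message field is already extracted at search time; the index, sourcetype, and rex pattern below are illustrative and matched only to the sample event above:

index=my_index sourcetype=my_sourcetype
| rex field=message "someType=(?<someType>\S+)\s+totalItems=(?<totalItems>\d+)\s+errors=(?<errors>.+?)\s+status=(?<status>\S+)"
| stats sum(totalItems) AS totalItems count BY someType

For the top errors, a separate search could expand the comma-separated list:

index=my_index sourcetype=my_sourcetype
| rex field=message "errors=(?<errors>.+?)\s+status="
| makemv delim="," errors
| mvexpand errors
| eval errors=trim(errors)
| top errors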
Hello @sainag_splunk Those docs are very resourceful, thank you so much. However, I need to provide an HEC token to the data source owner so they can send logs from their server. How can I create that token directly using an inputs.conf file? Would it be possible for you to provide an example/sample inputs.conf file for it? My index = adt_audit and sourcetype = adt:audit
Our MySQL server was upgraded from 5.7 to 8.0.37, and the MariaDB plugin no longer supports exporting audit log files. Are there any methods to export audit logs in a Windows environment?
I want to receive Keycloak logs in the Splunk Cloud platform. I found Keycloak apps in Splunkbase, but they seem to be unavailable in Splunk Cloud. Are there any methods to receive Keycloak logs in Splunk Cloud?
Please use this doc to create a HEC token via the CLI: https://docs.splunk.com/Documentation/Splunk/latest/Data/UseHECfromtheCLI

Or you can directly create an inputs.conf stanza as described here: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Inputsconf#HTTP_Event_Collector_.28HEC.29_-_Local_stanza_for_each_token

Please UpVote if helpful.
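For the inputs.conf route, a minimal sketch of a token stanza on the instance that will receive the data, using the index and sourcetype mentioned in this thread (the token GUID is a placeholder; generate your own, for example with uuidgen):

[http]
disabled = 0
port = 8088

[http://adt_audit_hec]
disabled = 0
token = 11111111-2222-3333-4444-555555555555
index = adt_audit
sourcetype = adt:audit

After adding the stanza, restart Splunk and give the token value to the data source owner.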
Hello, is it possible to create an HEC token from the CLI of a Linux host? Any recommendations on how to create an HEC token from the CLI would be greatly appreciated. Thank you!
So working with Splunk on this issue, it came down to two issues with how the Splunk UF is currently designed and behaves.

Firstly, when the Splunk service is starting, if it can't get a response from the Event Log within 30 seconds, it stops trying to collect Windows events until the service is restarted. I have found that this can happen when a server is rebooted while applying patches; on the final reboot, startup can be delayed. The Splunk service starts, but if it times out, you get no Windows Event Log collection, as there is currently no auto-retry if the Event Log doesn't respond within 30 seconds. The workaround is to change the Splunk UF service to Automatic (Delayed Start) to try to overcome this issue.

The second issue is with the Windows Event Log capture directive evt_resolve_ad_obj=1. If for some reason the Splunk UF needs to resolve an AD SID that is not already cached, and the resolution times out (say a domain controller was rebooting at the time), the Splunk UF stops capturing any more events for that event log until the UF is restarted; again, it doesn't auto-retry or move on to the next event entry. The workaround is to set evt_resolve_ad_obj=0 so it doesn't try to resolve unknown SIDs.

You won't know this has occurred unless you monitor the data sets in the indexes for each host, checking whether the Event Log data is still arriving. Splunk informed us that the behaviors we are experiencing are due to the current design of the product; fixing them would fall under enhancement requests. The case technician has submitted two feature requests on our behalf:

1. EID-I-2424: Implement a retry mechanism or allow configurable timeout settings to address the 30-second initialization timeout for Windows event log data collection in Splunk Universal Forwarder. https://ideas.splunk.com/ideas/EID-I-2424

2. EID-I-2425: Enhance the `evt_resolve_ad_obj=1` setting to skip or retry unresolved Security Identifiers (SIDs) instead of halting event log collection when SID resolution fails. https://ideas.splunk.com/ideas/EID-I-2425
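For reference, the two workarounds map to a Windows service setting and an inputs.conf setting; a rough sketch, using the Security event log stanza as an illustrative example (adjust for whichever event logs you collect):

rem Set the Universal Forwarder service to Automatic (Delayed Start)
sc.exe config SplunkForwarder start= delayed-auto

# inputs.conf on the Universal Forwarder
[WinEventLog://Security]
disabled = 0
evt_resolve_ad_obj = 0

With evt_resolve_ad_obj = 0 the forwarder no longer attempts AD SID resolution, so a resolution timeout cannot stall collection, at the cost of unresolved SIDs appearing in the events.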
Hi @ilhwan
>> an uptime report for Splunk
Could you share a bit more detail, please? The Linux uptime command shows how many users are logged in, CPU usage, system startup time, last shutdown time, load average, etc. Are you looking for a similar report for Splunk, or something else?
For the question about the monitoring console: it has lots of nice and useful dashboards, like long-running searches, CPU-intensive searches, which users run the most searches, and a lot more. All you need to do is:
1) get a list of the things you are looking for,
2) check whether the DMC has dashboard panels with the details you need,
3) build your own dashboard from existing DMC panels or your own SPL.
Hope this gives some ideas, thanks.
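As a starting point for an uptime panel, a hedged SPL sketch using the management REST endpoint (this assumes the startup_time field is returned by /services/server/info in your version, so please verify before relying on it):

| rest /services/server/info splunk_server=*
| eval uptime_days = round((now() - startup_time) / 86400, 1)
| table splunk_server version startup_time uptime_days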
I've been asked to generate an uptime report for Splunk.  I don't see anything obvious in the monitoring console, so I thought I'd try to see if I could build a simple dashboard.  Does the monitoring console log things like 
@sainag_splunk Oh okay! Where does adding in the time range come in? Or how is it linked to the panel's search?
Yes, you can also use the | loadjob command directly in the search in Dashboard Studio if you're trying to load saved searches. I can take a look when I'm at my computer about the issue; please share your JSON code.
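For context, a minimal sketch of what the data source might look like in the Dashboard Studio JSON, with a placeholder saved search name and a global time range input assumed to be named global_time:

"dataSources": {
    "ds_saved": {
        "type": "ds.search",
        "options": {
            "query": "| loadjob savedsearch=\"admin:search:my_saved_search\""
        },
        "name": "saved search results"
    },
    "ds_live": {
        "type": "ds.search",
        "options": {
            "query": "index=_internal | timechart count",
            "queryParameters": {
                "earliest": "$global_time.earliest$",
                "latest": "$global_time.latest$"
            }
        },
        "name": "time-range-driven search"
    }
}

Note that loadjob returns the saved search's cached results, so the dashboard's time range picker won't re-filter them; the queryParameters block on a regular ds.search query (as in ds_live) is what links a panel's search to the time range input.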
@sainag_splunk Correct me if I'm wrong, but that doc is for Classic Dashboards, which use XML code; we are using Dashboard Studio, which works with JSON code.
Hi @PickleRick, I tested your suggestion and it worked. Thank you for your help.
1) I added one more case where the IP has an empty name. I added a condition to the where clause (dc=0) and it worked. I was afraid to use isnull(name) because sometimes the field contains "" (an empty string). Please let me know if this is doable.
2) Is it possible to do this without using eventstats? I have already used eventstats in the search, but for a different field. Will that cause any delays or issues? Have you ever used multiple eventstats commands in one search?
Thank you so much for your help.

ip       name    location
1.1.1.1  name0   location-1
1.1.1.1  name1   location-1
1.1.1.2  name2   location-2
1.1.1.2  name0   location-20
1.1.1.3  name0   location-3
1.1.1.3  name3   location-3
1.1.1.4  name4   location-4
1.1.1.4  name4b  location-4
1.1.1.5  name0   location-0
1.1.1.6  name0   location-0
1.1.1.7          location-7

| makeresults format=csv data="ip, name, location
1.1.1.1, name0, location-1
1.1.1.1, name1, location-1
1.1.1.2, name2, location-2
1.1.1.2, name0, location-20
1.1.1.3, name0, location-3
1.1.1.3, name3, location-3
1.1.1.4, name4, location-4
1.1.1.4, name4b, location-4
1.1.1.5, name0, location-0
1.1.1.6, name0, location-0
1.1.1.7,,location-7"
| eventstats dc(name) AS dc BY ip
| where name!="name0" OR dc=0 OR (name=="name0" AND dc=1)
Yes. Define an exception in Nessus.
Do you have a heavy forwarder in your environment to install this add-on on? This add-on is a modular input that should run on a heavy forwarder; please disable it on the search head and install it on one of your heavy forwarders.
Hello, We are using Splunk Enterprise version 9.1.2. Yes that is the correct app we are trying to use and I verified that the visibility is enabled.
Hello, this looks like an issue with app/TA UI visibility. I have seen issues like this whenever a TA has missing config. Are you trying to use https://splunkbase.splunk.com/app/3681? Is this Splunk Enterprise or Cloud, and what version? Can you please go to Manage Apps > Your app > Edit Properties > Visible, just to make sure? Thanks