All Posts


Hi Splunk Gurus, We’re currently testing Splunk OpenTelemetry (OTel) Collectors on our Kubernetes clusters to collect logs and forward them to Splunk Cloud via HEC. We’re not using Splunk Observability at this time. Is it possible to manage or configure these OTel Collectors through the traditional Splunk Deployment Server? If so, could you please share any relevant documentation or guidance? I came across documentation related to the Splunk Add-on for the OpenTelemetry Collector, but it appears to be focused on Splunk Observability. Any clarification or direction would be greatly appreciated. Thanks in advance for your support!
Splunk Free doesn't contain the features needed for a PoC! The minimum is a trial license, but it has ingestion size limits and it also lacks e.g. the remote license manager feature. One option is to request a developer license (it can take some time to get one), or contact your local Splunk partner and ask them for a trial license for an on-prem PoC.
Have you looked at these guides?
https://machinelearningmastery.com/practical-guide-choosing-right-algorithm-your-problem/
https://www.geeksforgeeks.org/choosing-a-suitable-machine-learning-algorithm/
https://labelyourdata.com/articles/how-to-choose-a-machine-learning-algorithm
https://scikit-learn.org/stable/machine_learning_map.html
I expect those will help you select suitable algorithms for your test data.
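If you want to run the comparison inside Splunk itself, the Machine Learning Toolkit (MLTK) lets you try several algorithms against the same test data with the fit command. A minimal sketch, assuming MLTK is installed and that my_test_data.csv, the feature fields and the target field are placeholders for your own data:

| inputlookup my_test_data.csv
``` train a candidate model; swap LogisticRegression for e.g. RandomForestClassifier to compare algorithms ```
| fit LogisticRegression target from feature1 feature2 into my_candidate_model

You can then run | apply my_candidate_model against held-out data and compare how each algorithm performs.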
Hello all, I am reviewing the Splunk Add-on for vCenter Logs and the Splunk Add-on for VMware ESXi Logs guides and have the following question: Are both required in an environment where vCenter is present, or is the Splunk Add-on for VMware ESXi Logs unnecessary in that case? In other words, is the add-on for ESXi just for simple bare-metal installs that do not use vCenter? Do ESXi builds with vCenter send all of the ESXi logs up to vCenter anyhow, so that one needs only the add-on for vCenter? Second question: am I reading correctly that the add-on for vCenter requires BOTH a syslog output and a vCenter user account for API access?
@danielbb Since your ingestion is minimal (2 GB/day), and assuming you already have on-prem infrastructure:
- Stick with on-prem if you want full control and already have the hardware.
- Choose ingest-based term licensing for predictable costs and flexibility.
- Consider Splunk Free (500 MB/day) for testing or very small-scale use.
- If you're open to cloud and want to reduce operational overhead, Splunk Cloud with pay-as-you-go could be a cost-effective and low-maintenance alternative.
For such a small ingestion volume, Splunk Cloud might be worth considering if:
- You want to avoid infrastructure management.
- You prefer pay-as-you-go pricing.
- You value scalability and ease of updates.
If you need further assistance or a detailed quote, contact Splunk sales or a partner like SP6, and consider a free trial to validate your setup.
https://www.splunk.com/en_us/products/pricing/platform-pricing.html
https://sp6.io/blog/choosing-the-right-splunk-license/
https://www.reddit.com/r/Splunk/comments/1cbf9gu/what_the_deal_with_splunk_cloud_vs_on_prem/
One other comment: don't use .../etc/system/local for (almost) anything! Create your own app and use it to store your conf files. That way everything works much better in the long run; see the sketch below.
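For example, a minimal layout for such an app (the app name my_org_settings is hypothetical; pick whatever fits your naming scheme):

$SPLUNK_HOME/etc/apps/my_org_settings/local/props.conf
$SPLUNK_HOME/etc/apps/my_org_settings/local/transforms.conf
$SPLUNK_HOME/etc/apps/my_org_settings/metadata/local.meta

Settings placed here take part in Splunk's normal configuration layering, but unlike etc/system/local they are easy to version, move between servers, and push out via a deployment server.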
| eval description=case(
    like(Status,"%host is down%"), Date.",".Server.",".Status.",".Threshold,
    like(Status,"%database is down%"), Date.",".Db.",".Status.",".'Instance status')
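Combined with the search-time restriction suggested elsewhere in this thread, the whole search might look something like this sketch (the index and the second Status pattern are assumptions based on the thread):

index=my_index (Status="*host is down*" OR Status="*database is down*")
| eval description=case(
    like(Status,"%host is down%"), Date.",".Server.",".Status.",".Threshold,
    like(Status,"%database is down%"), Date.",".Db.",".Status.",".'Instance status')

Restricting on Status in the base search lets the indexers discard non-matching events early instead of the eval ever seeing them.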
Currently the minimum licence for Cloud is 5GB, and it's always for a minimum of one year. If you want to test with Cloud you must use Splunk's free trial, which is 14 days, and you cannot migrate/convert it to an official stack after the 14-day period. On-prem, the minimum license is 1GB/day. You could use it for a single node, or create even a multisite cluster with it, or anything between those options. With your ingested data amount there is no option to go with an SVC-based license on-prem, even though Splunk offers one; it needs a much higher daily ingestion amount to be reasonably priced vs. an ingestion-based license. Currently SCP (Splunk Cloud) licenses are quite nicely priced vs. on-prem plus the hw/virtual capacity/management stuff needed for it, so I would definitely look at SCP for production, probably even below a 5GB daily base ingestion. Of course it depends on what kind of environment you have, how Splunk will be managed there, etc. And you quite probably need some nodes (e.g. DS + IHF + DBX HF etc.) on-prem too?
Hi, regex101.com is your friend when you need to start with regex. Here is your example: https://regex101.com/r/Tu8JB5/1 In Splunk you have a couple of ways to get this done.

1. Use rex as @livehybrid shows, and build that regex e.g. with regex101.com:

| makeresults
| eval _raw ="2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - zywstrf
2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - abc123f
2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - 1234-abcehu09_svc06-app_texsas_14455"
| multikv noheader=t
``` Above prepares sample data ```
| rex field=_raw "Service ID - (?<serviceID>.*$)"
| table serviceID

2. Use erex, which generates the regex from examples:

| makeresults
| eval _raw ="2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - zywstrf
2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - abc123f
2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - 1234-abcehu09_svc06-app_texsas_14455"
| multikv noheader=t
``` Above prepares sample data ```
| erex serviceID examples="zywstrf,abc123f"
| table serviceID

3. Use Splunk's "Extract New Fields" feature under "Interesting fields", then select regex and follow the instructions. There are two more places in the GUI where you can find this same functionality.

Please accept the answer that helps you solve this issue as the solution. That way other people will also know what to do when they are looking for an answer to the same issue.
Hi @danielbb I think from memory the minimum I've seen for a Cloud stack is 50GB, however this is just based on some of the smaller customers I have worked with. It is possible to get ingestion-based licenses for on-premises, which I believe go up from 1GB in increments, so you would be able to purchase a 2GB license. I think your best option here is to speak to sales; if you don't have a contact, go via https://www.splunk.com/en_us/talk-to-sales/pricing.html?expertCode=sales and tell them you wish to do a PoC etc. They should be able to provide you a proper license for the PoC versus the trial license available online. Happy Splunking! Did this answer help you? If so, please consider: Adding karma to show it was useful. Marking it as the solution if it resolved your issue. Commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
We are creating a small cluster with minimal ingestion of around 2 GB a day on-prem. I wonder what would be the best way to approach the license: is per-usage licensing, as opposed to ingestion-based, available for an on-prem environment? For something so small, does it make sense to switch and have it in the cloud?
Can you explain in more detail what your issue is with that JSON? It contains some data; what do you want to do with it?
Yes, this was recently added to the official documentation. But currently there are still tools, e.g. AppInspect, which don't accept it in Cloud vetting. From what I have heard, this will be fixed soon, so then you could use it officially with Cloud too.
Of course you can add dedup to those queries, but this will kill your performance! It also depends on how this duplication happened and how you can identify those events. Can you e.g. dedup on _raw, or on a set of fields, or does it need some calculations/modifications (e.g. on timestamps) too? A couple of sketches are below.
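For illustration only (the index, sourcetype and field names are placeholders), the two simplest variants look like this:

index=my_index sourcetype=my_sourcetype
``` exact duplicates: the whole raw event is identical ```
| dedup _raw

index=my_index sourcetype=my_sourcetype
``` near-duplicates: match on a chosen set of fields instead ```
| dedup host, source, transaction_id

Both run at search time on every query, which is why they hurt performance; fixing the input that produces the duplicates is the real cure.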
Unfortunately, at least I don't know any generic answer for this. The method presented here is one option, but as said, you need to be 100% sure that it works with your data, and test it several times to be sure! And of course you must first get rid of the source of those new duplicates and ensure that all your inputs work as they should, without duplicating new events. After that you could probably do that delete, if you are absolutely sure that it also works in your case. And I propose that you use some temporary account which has the can_delete role, only for the time needed to do that cleanup; a sketch of the workflow is below.
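As a sketch only of that workflow (the index, sourcetype and by-fields are assumptions, and | delete is irreversible):

index=my_index sourcetype=my_sourcetype
| streamstats count AS copy_number BY _time, host, source, _raw
| search copy_number > 1
``` run this first and verify it returns exactly the duplicate copies; only then append | delete, using the temporary can_delete account ```

Note that delete doesn't reclaim disk space; it only makes the events unsearchable.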
Hi @Harikiranjammul If you can, restrict the first part of the search by adding (Status="*host is down*" OR Status="*Some other criteria*"). Did this answer help you? If so, please consider: Adding karma to show it was useful. Marking it as the solution if it resolved your issue. Commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
I don't like the idea that you cannot add dedup with pipelines in the simple base search of a dataset. Splunk should offer a ready-made method to deduplicate an index.
IMHO: I don't like, or suggest, giving delete permissions to anyone permanently! It isn't a great idea to run a scheduled job which removes events from Splunk.
There are several tools for server monitoring. To be honest, core Splunk is not one of the best tools for that task. Core Splunk's best part is managing those logs, plus its integration with other Splunk tools/services which are much better suited for such tasks. From the Splunk offering you should look at https://www.splunk.com/en_us/products/infrastructure-monitoring.html for pure server monitoring.
Other options than IA are Edge Processor, Ingest Processor and/or frozen buckets. https://docs.splunk.com/Documentation/SplunkCloud/9.3.2411/EdgeProcessor/AmazonS3Destination https://docs.splunk.com/Documentation/SplunkCloud/9.3.2411/IngestProcessor/AmazonS3Destination https://docs.splunk.com/Documentation/SplunkCloud/9.3.2411/Admin/DataSelfStorage With IA, EP and IP the output format is JSON, actually a HEC-suitable format. With IP you can also select Parquet if you want. If you are running your Enterprise in AWS, you could configure your frozen buckets to be stored in S3 buckets. Depending on your cold2frozen script, you could store only raw data in S3, or more if you really want it.
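For the frozen-bucket route, the relevant indexes.conf setting is coldToFrozenScript. A minimal sketch (the index name and script path are hypothetical; the script itself would typically copy the bucket to S3, e.g. with the AWS CLI, before Splunk removes it locally):

[my_index]
coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/cold2frozen_s3.py"

If the script keeps only the rawdata journal, you minimize S3 cost, but the buckets must be rebuilt before they can be searched again.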