After 9.2, the reason for this is that the DS needs local indexes to get information about its deployment clients (DCs). And best practice says that you should forward all logs to your indexers instead of keeping them on a local node like the DS. You can fix this by also storing those indexes locally. Here are the instructions on how to do it: https://docs.splunk.com/Documentation/Splunk/9.2.0/Updating/Upgradepre-9.2deploymentservers
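A minimal sketch of the kind of outputs.conf change this involves, using the real indexAndForward settings; treat the exact stanzas on the linked docs page as authoritative for your version:

# outputs.conf on the DS (sketch only; follow the linked docs for exact stanzas)
# Keep forwarding events to the indexers, but also allow selected data
# to be indexed locally so the DS retains the indexes it needs for its DCs.
[indexAndForward]
index = true
selectiveIndexing = true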
The rebuild of forwarder assets should happen automatically with the period that is defined in CMC -> Forwarders -> Forwarder Monitoring Setup: Data Collection Interval. If that time has already passed since you added this UF, you can see its logs in the _internal index, and the forwarder is still missing, I propose that you create a support ticket so they can figure out why this forwarder asset hasn't updated as expected. Of course, if less time has elapsed than that period, you can update it manually, or decrease that interval and rebuild it again. The preferred update period depends on how quickly you need new UFs to appear on this list and how many UFs you have.
You could also use tstats with the prestats=t parameter, like

| tstats prestats=t count where index=my_index by _time span=1h
| timechart span=1h count
| timewrap 1w

https://docs.splunk.com/Documentation/Splunk/9.4.2/SearchReference/Tstats
It's just like @PickleRick said: you should never use Splunk as a termination point for syslog! You should set up a real syslog server to avoid this kind of issue. As said, we don't have enough information yet to make any real guesses about what has caused this. But as there is only one server which has this issue, I would start by looking at it first. Also check whether it is in the same network segment as the others. Has it been up and running the whole time? Any FW changes on it? Could it have had a time issue, or anything else where the time was changed and fixed later?
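As a hedged sketch of that setup, assuming rsyslog as the syslog server and a UF reading the files it writes (paths, port, and sourcetype are examples only):

# /etc/rsyslog.d/10-remote.conf (sketch): receive syslog, one file per sender
module(load="imudp")
input(type="imudp" port="514")
template(name="PerHost" type="string" string="/var/log/remote/%HOSTNAME%/messages.log")
*.* action(type="omfile" dynaFile="PerHost")

# inputs.conf on the UF reading those files; host_segment=4 picks the
# sender name from /var/log/remote/<host>/messages.log
[monitor:///var/log/remote]
sourcetype = syslog
host_segment = 4

This way the syslog server absorbs restarts and load spikes, and the UF picks up the files reliably afterwards.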
It's a bit unclear what you need.
1. Usually, if you want to check the product for yourself and just see what it can do, you can use the trial license.
2. If you want a PoC, you'd rather involve your local Splunk Partner, because usually a PoC involves demonstrating that some real-life scenarios can be realized using the product, or that some actual problems can be solved.
So it's best you simply contact one of your local friendly Splunk Partners and talk with them about a PoC or about getting a demo license for a short period of time. Anyway, 2GB/day doesn't usually warrant a "cluster".
Hi Splunk Gurus, We’re currently testing Splunk OpenTelemetry (Otel) Collectors on our Kubernetes clusters to collect logs and forward them to Splunk Cloud via HEC. We’re not using Splunk Observability at this time. Is it possible to manage or configure these OTel Collectors through the traditional Splunk Deployment Server? If so, could you please share any relevant documentation or guidance? I came across documentation related to the Splunk Add-on for the OpenTelemetry Collector, but it appears to be focused on Splunk Observability. Any clarification or direction would be greatly appreciated. Thanks in advance for your support!
Splunk Free doesn't contain the features needed for a PoC! The minimum is the trial license, but it has ingestion size limits, and it also doesn't contain e.g. the remote LM feature. One option is to request a developer license (it can take some time to get), or to contact your local Splunk partner and ask them for a trial license for an on-prem PoC.
Have you looked at these guides?
https://machinelearningmastery.com/practical-guide-choosing-right-algorithm-your-problem/
https://www.geeksforgeeks.org/choosing-a-suitable-machine-learning-algorithm/
https://labelyourdata.com/articles/how-to-choose-a-machine-learning-algorithm
https://scikit-learn.org/stable/machine_learning_map.html
I suppose those will help you select suitable algorithms for your test data.
Hello all, I am reviewing the Splunk add-on for vCenter Log and the Splunk add-on for VMware ESXi logs guides and have the following question: Are both required in an environment where vCenter is present, or is the Splunk add-on for VMware ESXi logs not necessary if so? Or in other words, is the add-on for VMware ESXi just for simple bare metal installs that do not use vCenter? Are ESXi builds with vCenter sending all of the ESXi logs up to vCenter anyhow and one needs only use the add-on for vCenter? Second question, am I reading correctly that the add-on for vCenter requires BOTH a syslog output and a vCenter user account for API access?
@danielbb Since your ingestion is minimal (2 GB/day), and assuming you already have on-prem infrastructure:
- Stick with on-prem if you want full control and already have the hardware.
- Choose Ingest-Based Term Licensing for predictable costs and flexibility.
- Consider Splunk Free (500 MB/day) for testing or very small-scale use.
If you're open to cloud and want to reduce operational overhead, Splunk Cloud with pay-as-you-go could be a cost-effective and low-maintenance alternative. For such a small ingestion volume, Splunk Cloud might be worth considering if:
- You want to avoid infrastructure management.
- You prefer pay-as-you-go pricing.
- You value scalability and ease of updates.
If you need further assistance or a detailed quote, contact Splunk sales or a partner like SP6, and consider a free trial to validate your setup.
https://www.splunk.com/en_us/products/pricing/platform-pricing.html
https://sp6.io/blog/choosing-the-right-splunk-license/
https://www.reddit.com/r/Splunk/comments/1cbf9gu/what_the_deal_with_splunk_cloud_vs_on_prem/
One more comment: don't use .../etc/system/local for (almost) anything! Create your own app and store your conf files in it. That way everything works much better in the long run.
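For example, a minimal app layout (the app name org_all_base is just a placeholder):

$SPLUNK_HOME/etc/apps/org_all_base/
    default/
        app.conf        # app metadata
        outputs.conf    # your settings live here, not in etc/system/local
    metadata/
        default.meta    # sharing/permission settings

Settings shipped in an app survive upgrades and are easy to move between nodes, unlike files in etc/system/local.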
| eval description=case(
    like(Status,"%host is down%"), Date.",".Server.",".Status.",".Threshold,
    like(Status,"%database is down%"), Date.",".Db.",".Status.",".'Instance status'
  )
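To try that case() on made-up sample data (all field values below are hypothetical):

| makeresults
| eval Date="2025-05-15", Server="srv01", Db="db01", Threshold="90", Status="host is down"
| eval 'Instance status'="OFFLINE"
``` Above prepares one sample event ```
| eval description=case(
    like(Status,"%host is down%"), Date.",".Server.",".Status.",".Threshold,
    like(Status,"%database is down%"), Date.",".Db.",".Status.",".'Instance status'
  )
| table description

This yields description="2025-05-15,srv01,host is down,90".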
Currently the minimum licence for Cloud is 5GB, and it's always for a minimum of one year. If you want to test with Cloud, you must use Splunk's free trial, which is 14 days, and you cannot migrate/convert it to an official stack after the 14-day period. On-prem, the minimum license is 1GB/day. You can use it for a single node, or even create a multisite cluster with it, or anything between those options. With your ingested data amount there is no option to go with an SVC-based license on-prem, even though Splunk offers one; it needs a much higher daily ingestion amount to be reasonably priced vs. an ingestion-based license. Currently SCP (Splunk Cloud) licenses are quite nicely priced vs. on-prem plus the HW/virtual capacity/management stuff needed for it, so I would definitely look at SCP for production, probably even below 5GB daily base ingestion. Of course it depends on what kind of environment you have, how Splunk will be managed there, etc. And of course you quite probably need some nodes (e.g. DS + IHF + DBX HF etc.) on-prem too?
Hi

regex101.com is your friend when you need to get started with regex. Here is your example: https://regex101.com/r/Tu8JB5/1

In Splunk you have a couple of ways to get this done.

1. Use rex, as @livehybrid shows, and build that regex e.g. with regex101.com:

| makeresults
| eval _raw ="2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - zywstrf
2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - abc123f
2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - 1234-abcehu09_svc06-app_texsas_14455"
| multikv noheader=t
``` Above prepares sample data ```
| rex field=_raw "Service ID - (?<serviceID>.*$)"
| table serviceID

2. Use Splunk's erex:

| makeresults
| eval _raw ="2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - zywstrf
2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - abc123f
2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - 1234-abcehu09_svc06-app_texsas_14455"
| multikv noheader=t
``` Above prepares sample data ```
| erex serviceID examples="zywstrf,abc123f"
| table serviceID

3. Use Splunk's "Extract new fields" feature under "Interesting fields", then select regex and follow the instructions. There are two more places in the GUI where you can find this same functionality.

Please accept the answer which helps you solve this issue. That way other people will also know what to do when they are looking for an answer to the same issue.
Hi @danielbb

I think from memory the minimum Cloud stack I've seen is 50GB, however this is just based on some of the smaller customers I have worked with. It is possible to get ingestion-based licenses for on-premises, which I believe go up from 1GB in chunks, so you would be able to purchase a 2GB license.

I think your best option here is to speak to sales; if you don't have a contact, go via https://www.splunk.com/en_us/talk-to-sales/pricing.html?expertCode=sales and tell them you wish to do a PoC etc. They should be able to provide you a proper license for the PoC versus the trial license available online.

Happy Splunking!

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
We are creating a small cluster with minimal ingestion of around 2 GB a day, on-prem. I wonder what would be the best way to approach the license: is a per-usage license, vs. ingestion-based, available for an on-prem environment? For something so small, does it make sense to switch and have it in the cloud?
Yes, this has recently been added to the official documentation. But currently there are still gaps, e.g. AppInspect doesn't accept it in Cloud vetting. From what I have heard, this will be fixed soon, so then you could use it officially with Cloud too.
Of course you can add dedup to those queries, but this will kill your performance! It also depends on how this duplication has happened and how you can identify those events: can you e.g. dedup on _raw or just on a set of fields, or does it need some calculations/modifications (e.g. on times) too?
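For illustration, two hedged variants (index and field names below are placeholders):

``` If the duplicates are byte-identical copies, dedup the whole raw event ```
index=my_index
| dedup _raw

``` If only a combination of fields identifies a duplicate ```
index=my_index
| dedup _time host source user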
Unfortunately, at least I don't know any generic answer for this. The method they presented here is one option, but as said, you need to be 100% sure that it works with your data, and test it several times to be sure! And of course you must first get rid of the source of those new duplicates and ensure that all your inputs work as they should, without duplicating new events. After that you can probably do that delete, if you are absolutely sure that it also works in your case. And I propose that you use a temporary account which has the can_delete role, just for the time needed to do that cleanup.
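A hedged sketch of that final cleanup step, assuming the search has already been verified to match only the duplicate events (index, sourcetype, and the filter condition are placeholders):

``` Run as a temporary user that has the can_delete role. ```
``` ALWAYS run this search WITHOUT the delete command first and verify the results! ```
index=my_index sourcetype=my_sourcetype your_duplicate_condition_here
| delete

Note that delete only makes the matching events unsearchable; it does not free the disk space they occupy.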