
I have Splunk Enterprise 9.3.1. I looked through limits.conf but I'm not sure where to edit. How do I increase the upload size?
Hi @uagraw01 Looks like this issue is about memory. Could you please check memory usage on this Splunk instance? Thanks.
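If it helps, here's a minimal sketch of a search against the _introspection index that shows host-level memory usage over time (this assumes the standard Hostwide resource-usage introspection data is being collected; narrow it down with host=<your_instance> as needed):

index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| eval pct_mem_used = round(100 * 'data.mem_used' / 'data.mem', 1)
| timechart avg(pct_mem_used) AS avg_mem_used_pct

If memory is routinely close to 100% when the dashboards render, that would line up with the "Bad allocation" errors.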
O11Y no longer accepts logs sent directly to the O11Y endpoints. The only way is to send the logs to Splunk Enterprise and then use Log Observer Connect.
Okay, could you please check out the following thread Solved: How to resolve index buckets stuck in "Fixup Tasks... - Splunk Community and follow the described steps?
Yes, it did not work.
Have you tried to resync the bucket under Actions?
If you use combineWith: "\t", are the log entries split correctly or not? Could you remove the combineWith parameter from the config and deploy it again?
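Just so we're looking at the same setting - here's a rough sketch of what I'd deploy for that test, assuming combineWith comes from the multilineConfigs section of the splunk-otel-collector Helm chart values (the namespace/pod/container names and the regex are placeholders for your own):

logsCollection:
  containers:
    multilineConfigs:
      - namespaceName:
          value: my-namespace              # placeholder
        podName:
          value: my-app-.*                 # placeholder
          useRegexp: true
        containerName:
          value: server                    # placeholder
        firstEntryRegex: ^\d{4}-\d{2}-\d{2}      # assumed first-line pattern of your events
        # deploy once without combineWith to see whether the entries are split correctly,
        # then add it back only if you need a specific join character:
        # combineWith: "\t"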
Hi Giuseppe, thank you for the feedback, I appreciate it. In your previous message you mention that "It isn't a role problem, but a knowledge objects sharing permissions problem." What exactly do you mean by that? BR
Yes, this is what I am planning to do. Thanks for the input.
1. There is no such thing as a "generic solution" to a very broadly specified problem. It's as if you asked "how do I make people happy? I want a generic solution". If you have a specific problem, we can try to help you find a specific solution.
2. If you don't want to use any ready-made apps, you have to implement such functionality yourself. Have a list of sources/hosts (either build it dynamically while your environment is running or create it from external data - for example, an export from your CMDB) and repeatedly verify that you have recent events ingested from those sources and hosts. That's it (see the sketch below).
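To make point 2 concrete, something along these lines would do - expected_hosts.csv is a hypothetical lookup with a host column exported from your CMDB, and the one-hour threshold is just an example:

| tstats latest(_time) AS last_seen WHERE index=* BY host
| inputlookup append=true expected_hosts.csv
| stats max(last_seen) AS last_seen BY host
| where isnull(last_seen) OR now() - last_seen > 3600

Hosts returned by this search either never reported or haven't reported within the last hour. Schedule it and alert on any results.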
1. To deal with possible outages you schedule it with a continuous schedule - that means your search will be run for every consecutive time period, with no gaps. See https://docs.splunk.com/Documentation/Splunk/latest/Report/Configurethepriorityofscheduledreports#Change_the_report_scheduling_mode_to_prioritize_completeness_of_data_over_recency_of_data
2. For this you'd typically use a longer search window (and typically you'd want to search with a slight delay - maybe not, depending on your data - to account for data ingestion latency). But as with any search (not just a summary-building one), if some data falls outside of your search range you won't find it.
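As an illustration of point 2, here's a sketch of a summary-building search you could schedule hourly with continuous scheduling, using a window delayed by 10 minutes to absorb ingestion latency. The index, the operation field, the delay, and the my_summary index are all assumptions to adapt, and the time range is shown inline only for clarity (in a scheduled report you'd set it in the report's time range settings):

index=whatever operation IN (login, logout) earliest=-70m@m latest=-10m@m
| bin _time span=5m
| stats count(eval(operation="login")) AS logins count(eval(operation="logout")) AS logouts BY _time
| collect index=my_summary

With continuous scheduling, runs missed during an SH downtime are caught up rather than skipped, and the delayed window gives late-arriving events a chance to land inside the run that covers them.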
Hello Splunkers!! I am getting a "Bad allocation" error on all the Splunk dashboard panels. Please help me identify the potential root cause.
Thanks for the answer, however I am looking for a generic solution and don't want to use any app.
Try raising the upload file size limits - analogously to what you would do for an ES installation (and if it helps, post docs feedback).
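In case it's the Splunk Web upload limit (an assumption on my part - the question mentions limits.conf, but the ES-style fix lives in web.conf), a sketch of the change:

# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
# size limit, in MB, for files uploaded through Splunk Web; pick a value that fits your file
max_upload_size = 1024

followed by a restart of Splunk Web.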
You might actually do it another way. Assuming you're getting your counts from pretty much the same set of data, probably differing just in some field value(s), you can create a base search that returns both counts. For example - for logins and logouts - adjust to your case:

index=whatever
| stats count(eval(operation="login")) as logins count(eval(operation="logout")) as logouts

Then you can:
1. Have two separate visualizations - each of them displaying just one result field.
2. Have a post-process search for that base search:

| eval diff=logins-logouts

which you can use for another single value visualization. This way you can use just one base search for everything.
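In dashboard terms this could look roughly like the following Simple XML sketch (assuming a classic dashboard; the index and field names are the same placeholders as above):

<dashboard>
  <label>Logins vs logouts (sketch)</label>
  <search id="base">
    <query>index=whatever | stats count(eval(operation="login")) as logins count(eval(operation="logout")) as logouts</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <single>
        <search base="base">
          <query>| fields logins</query>
        </search>
      </single>
    </panel>
    <panel>
      <single>
        <search base="base">
          <query>| eval diff=logins-logouts | fields diff</query>
        </search>
      </single>
    </panel>
  </row>
</dashboard>

The base search runs once and both panels just post-process its results.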
I haven't heard anything yet. I don't know if this place is active. 
Well, if the servers themselves work OK - that's a task for the infrastructure team. They should have tools for that (or at least the knowledge of what should be checked and how). You can discuss with them whether Splunk can be helpful in this process, but of course you'd need some data ingested from the relevant hosts. If you just want to check whether the servers send data which is ultimately ingested into Splunk, there are several apps for that on Splunkbase, for example TrackMe.
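If it's just a quick "are the syslog servers still sending after patching" check, a minimal sketch (the index name and the threshold are assumptions to adapt):

| tstats count latest(_time) AS last_event WHERE index=your_syslog_index BY host
| eval minutes_since_last_event = round((now() - last_event) / 60, 1)
| sort - minutes_since_last_event

Anything with a large minutes_since_last_event (or missing from the list entirely) is worth a closer look. TrackMe essentially automates this kind of check and adds alerting on top.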
If you define multiple output groups, events are pushed to all of them at the same time (unless you override the routing per input or in a transform). If you have multiple destination hosts in an output group, they are handled in a round-robin way. There's no other way using built-in mechanics. You'd need to either use the HTTP output and install an intermediate HTTP reverse proxy with health-checked and prioritized backends, or do some form of external "switching" of the destination based either on dynamic network-level redirects or DNS-based mechanisms. But all of those are generally non-Splunk solutions and add complexity to your deployment.
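For reference, a sketch of the built-in behaviour described above in outputs.conf terms (group names and hosts are placeholders): events are cloned to every group listed in defaultGroup, and load-balanced across the servers within each group.

# $SPLUNK_HOME/etc/system/local/outputs.conf
[tcpout]
defaultGroup = primary_indexers, dr_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

[tcpout:dr_indexers]
server = dr-idx1.example.com:9997, dr-idx2.example.com:9997

There's no built-in "use dr_indexers only when primary_indexers is down" option, which is why the reverse-proxy or DNS tricks come into play.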
Thanks for clarifying, @PickleRick. So what would be the best practice for creating such synthetic events? A scheduled search every 5 (or so) minutes? If yes, how do I deal with:
- SH downtimes
- logins where only one of the two events needed for a successful login is in the search time range, and the other is in the search time range of the previous run of the scheduled search
First of all, sorry, I was not clear about the servers. These are syslog servers which we patch, and we have a list of servers to validate whether they are working fine after the patching activity. Do you know, or can you suggest, what we could validate in one place by creating a dashboard or some other type of automation?