All Posts

If you use combineWith: "\t", are the log entries split correctly or not? Could you remove the combineWith parameter from the config and deploy it again?
Hi Giuseppe, thank you for the feedback, I appreciate it. In your previous message you mentioned that "It isn't a role problem, but a knowledge objects sharing permissions problem." What exactly do you mean by that? BR
Yes, this is what I am planning to do. Thanks for the input.
1. There is no such thing as a "generic solution" to a very broadly specified problem. It's as if you asked "How do I make people happy? I want a generic solution." If you have a specific problem, we can try to help you find a specific solution. 2. If you don't want to use any ready-made apps, you have to implement such functionality yourself. Maintain a list of sources/hosts (either build it dynamically while your environment is running or create it from external data, for example an export from your CMDB) and repeatedly verify that you have recent events ingested from those sources and hosts. That's it. A rough sketch of such a check follows below.
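For illustration only, a minimal SPL sketch of that periodic check, assuming a lookup file named expected_hosts.csv with a host column (the lookup name, the index filter, and the one-hour threshold are placeholder assumptions to adjust):

| inputlookup expected_hosts.csv
| join type=left host
    [ | tstats latest(_time) as last_seen where index=* by host ]
| eval seconds_silent = now() - coalesce(last_seen, 0)
| where isnull(last_seen) OR seconds_silent > 3600
| table host last_seen seconds_silent

Any host this returns either never sent data or has been silent for over an hour; schedule the search and alert on non-empty results.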
1. To deal with possible outages, schedule it with a continuous schedule - that means your search will be scheduled for each consecutive time period, catching up after downtime. See https://docs.splunk.com/Documentation/Splunk/latest/Report/Configurethepriorityofscheduledreports#Change_the_report_scheduling_mode_to_prioritize_completeness_of_data_over_recency_of_data 2. For this you'd typically use a longer search window (and typically you'd want to search with a slight delay - maybe not; it depends on your data - to account for data ingestion latency). But as with any search (not just a summary-building one), if some data falls outside of your search range, you won't find it. A configuration sketch follows below.
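As a hedged illustration, a savedsearches.conf sketch of such a scheduled search (the stanza name and the search itself are placeholders; realtime_schedule = 0 is the setting that selects the continuous, completeness-first scheduling mode described in the linked docs):

# $SPLUNK_HOME/etc/apps/<your_app>/local/savedsearches.conf
[synthetic_login_events]
search = index=auth ... | stats ... | collect index=summary_logins
enableSched = 1
cron_schedule = */5 * * * *
# five-minute window, lagged five minutes for ingestion latency
dispatch.earliest_time = -10m@m
dispatch.latest_time = -5m@m
# 0 = continuous scheduling (completeness over recency)
realtime_schedule = 0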
Hello Splunkers!! I am getting a "Bad allocation" error on all the Splunk dashboard panels. Please help me identify the potential root cause.
Thanks for the answer; however, I am looking for a generic solution and don't want to use any app.
Try raising the upload file size limits - analogous to what is done for an ES installation (and if it helps, post docs feedback). A sketch follows below.
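If this is about uploading a large app package through Splunk Web, a hedged sketch of where that limit typically lives (the value is in MB and is an example only):

# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
max_upload_size = 1024

Restart Splunk Web afterwards for the change to take effect.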
You might actually do it another way. Assuming you're getting your counts from pretty much the same set of data, probably differing only in some field values, you can create a base search that returns both counts. For example, for logins and logouts (adjust to your case):

index=whatever | stats count(eval(operation="login")) as logins count(eval(operation="logout")) as logouts

Then you can: 1. Have two separate visualizations, each displaying just one result field. 2. Have a post-process search on that base search - | eval diff=logins-logouts - which you can use for another single-value visualization. This way you use just one base search for everything. A dashboard sketch follows below.
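A hedged Simple XML sketch of the base-search-plus-post-process pattern (the id, queries, and time range are placeholder assumptions):

<dashboard>
  <search id="base">
    <query>index=whatever | stats count(eval(operation="login")) as logins count(eval(operation="logout")) as logouts</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <single>
        <search base="base">
          <query>| fields logins</query>
        </search>
      </single>
    </panel>
    <panel>
      <single>
        <search base="base">
          <query>| eval diff=logins-logouts | fields diff</query>
        </search>
      </single>
    </panel>
  </row>
</dashboard>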
I haven't heard anything yet. I don't know if this place is active. 
Well, if the servers themselves work OK, that's a task for the infrastructure team. They should have tools for that (or at least the knowledge of what should be checked and how). You can discuss with them whether Splunk can be helpful in this process, but of course you'd need some data ingested from the relevant hosts. If you just want to check whether the servers send data that is ultimately ingested into Splunk, there are several apps for that on Splunkbase, for example TrackMe.
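For a quick ad-hoc check without installing anything, a hedged SPL sketch (the index name and 15-minute threshold are placeholder assumptions):

| metadata type=hosts index=syslog
| eval minutes_silent = round((now() - recentTime) / 60)
| where minutes_silent > 15
| table host recentTime minutes_silent

This lists hosts whose most recently indexed event in that index is older than 15 minutes - a crude but app-free way to spot servers that stopped sending after patching.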
If you define multiple output groups, events are pushed to all of them at the same time (unless you override the routing per input or in a transform). If you have multiple destination hosts in an output group, they are handled in a round-robin way. There's no other way using built-in mechanics. You'd need to either use the HTTP output and install an intermediate HTTP reverse proxy with health-checked and prioritized backends, or do some form of external "switching" of the destination, based either on dynamic network-level redirects or DNS-based mechanisms. But all of those are generally non-Splunk solutions and add complexity to your deployment. A configuration sketch of the built-in behaviour follows below.
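For reference, a hedged outputs.conf sketch of the two built-in behaviours described above (group and server names are placeholders):

# $SPLUNK_HOME/etc/system/local/outputs.conf
[tcpout]
# listing two groups here clones every event to BOTH groups
defaultGroup = groupA, groupB

[tcpout:groupA]
# multiple hosts within one group are load-balanced across, not mirrored
server = idx1.example.com:9997, idx2.example.com:9997

[tcpout:groupB]
server = idx3.example.com:9997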
Thanks for clarifying, @PickleRick. So what would be the best practice for creating such synthetic events? A scheduled search every 5 (or so) minutes? If yes, how do I deal with: - SH downtimes - logins where only one of the two events needed for a successful login is in the search time range, and the other is in the time range of the previous run of the scheduled search
First of all, sorry, I was not clear about the servers. These are syslog servers which we patch, and we have a list of servers to validate whether they are working perfectly fine after the patching activity. Do you know, or can you suggest, what we could validate by creating a dashboard or some other type of automation?
In Classic XML dashboards, you can add a <done> element to the searches for your single-value panels and set tokens from the first row of the results. You can then use these tokens in your subsequent search.
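A hedged Simple XML fragment illustrating the pattern (index, field, and token names are placeholders):

<search>
  <query>index=auth action=success | stats count as total</query>
  <done>
    <set token="total_tok">$result.total$</set>
  </done>
</search>
<search>
  <query>index=auth action=failure | stats count as failures | eval failure_ratio=round(failures/$total_tok$, 2)</query>
</search>

$result.total$ picks the total field from the first result row once the first search completes; the second search only dispatches after total_tok is set.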
Hi @Stives, if they are private, you cannot do anything; he should share them at least at app level, enabling the roles to edit alerts and dashboards. If these knowledge objects are orphaned (because e.g. the account was disabled), there's a feature to assign them to another user; then you can share them with the correct roles. Otherwise, the only way is to find them in the .conf files and copy them into another user or app area. A search to locate such private objects follows below. Ciao. Giuseppe
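To locate such private dashboards in the first place, a hedged SPL sketch using the REST endpoint for views (run it with an admin role; the table fields are just a suggestion):

| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search eai:acl.sharing=user
| table title eai:acl.owner eai:acl.app eai:acl.sharing

Entries with eai:acl.sharing=user are private objects, and eai:acl.owner tells you whose they are.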
Hi there. This morning I did an SHC restart and found something very strange on the SHC members:

WARN DistributedPeer [1964778 DistributedPeerMonitorThread] - Peer:https://OLDIDX#1:8089 Authentication Failed
WARN DistributedPeer [1964778 DistributedPeerMonitorThread] - Peer:https://OLDIDX#2:8089 Authentication Failed
WARN DistributedPeer [1964778 DistributedPeerMonitorThread] - Peer:https://OLDIDX#3:8089 Authentication Failed
WARN DistributedPeer [1964778 DistributedPeerMonitorThread] - Peer:https://OLDIDX#4:8089 Authentication Failed
GetRemoteAuthToken [1964778 DistributedPeerMonitorThread] - Unable to get auth token from peer: https://OLDIDX#1:8089 due to: Connect Timeout; exceeded 5000 milliseconds
GetBundleListTransaction [1964778 DistributedPeerMonitorThread] - Unable to get bundle list from peer: https://OLDIDX#2:8089 due to: Connect Timeout; exceeded 60000 milliseconds
GetRemoteAuthToken [2212932 DistributedPeerMonitorThread] - Unable to get auth token from peer: https://OLDIDX#3:8089 due to: Connect Timeout; exceeded 5000 milliseconds
GetRemoteAuthToken [2212932 DistributedPeerMonitorThread] - Unable to get auth token from peer: https://OLDIDX#4:8089 due to: Connect Timeout; exceeded 5000 milliseconds

All OLDIDX hosts are old servers, turned off and shut down! None of the SHC members has OLDIDX#* in its distributed search configuration (distsearch.conf). I recently upgraded a v7 infrastructure to v8, and I also searched all .conf files for the IPs of OLDIDX#*; none were found. WHERE are those "artifacts" stored? Is there something in the raft state of the new SHC? Do I need to remove the whole SHC configuration and redo it from the beginning? These messages appear in splunkd.log ONLY DURING the restart of the SHC; during normal daily use of the SHC, I never had, and still don't have, any similar message. Thanks.
| table Time count1 count2 count3
The first field (column) will be the x-axis; the other columns will be the lines.
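For completeness, a hedged end-to-end sketch producing such a table (the index, field names, and the status split are placeholder assumptions):

index=whatever
| bucket _time span=1h
| stats count(eval(status<400)) as ok count(eval(status>=400)) as errors by _time
| rename _time as Time
| table Time ok errors

Rendered as a line chart, Time drives the x-axis and ok/errors become the two lines.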
Hi Giuseppe, thank you for the feedback. This is exactly the problem: the user who created the dashboard can't edit its permissions, and on the other side I'm not able to see his dashboard as it's set to Private, so we are not able to move forward like this. Thanks
Okay, have you checked the internal logs on the affected instance for any WARN or ERROR events, especially during a reboot?
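For example, a hedged SPL sketch for that check (the host pattern and time range are placeholders to adjust):

index=_internal host=shc_member* sourcetype=splunkd log_level IN (WARN, ERROR) earliest=-4h
| stats count by component, log_level
| sort - count

Grouping by component quickly shows whether messages like DistributedPeer and GetRemoteAuthToken dominate the errors around the restart window.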