All Posts


@tem Did you ever find the fix for this? We are getting the same error, “Failed to authenticate with gateway after 3 retries”, and cannot figure it out. Ours is with the ONTAP add-on, but it also uses the SA-Hydra app.
Generally, the knowledge bundle contains most of the content from the SH unless you blacklist some parts of it. Why not just deploy the apps to the indexer, you might ask? Two reasons:

1. Variability of the KOs on the SHs - each time something changes on the SH (including users' private objects), you'd have to deploy new apps.

2. The same indexer(s) can be search peers for multiple different SH(C)s, each of which can have a separate set of search-time configs, possibly conflicting with each other.

So indexer-deployed apps are "active" at index time, while objects replicated in a knowledge bundle are active at search time. A sketch of trimming the bundle follows below.
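For reference, bundle trimming is configured in distsearch.conf on the search head. A minimal sketch (the entry name and regex here are illustrative; newer Splunk versions name the stanza [replicationDenylist]):

[replicationBlacklist]
# skip all lookup files in all apps and user directories when building the bundle
excludeAllLookups = (apps|users)/.*/lookups/.*

The regex is matched against file paths relative to $SPLUNK_HOME/etc, so anything it matches is left out of the bundle sent to the peers.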
Yes, this is the confusing point. Did you mean that if my search is:

index=main eventtype=authentication

this search will replicate a knowledge bundle which contains only the Knowledge Objects relevant to the search itself, not all the Knowledge Objects which exist on the search head?

Knowledge bundle replication overview - Splunk Documentation: "The process of knowledge bundle replication causes peers, by default, to receive nearly the entire contents of the search head's apps." Any explanation will be greatly appreciated!
Use

| fillnull value="" field2

That will force all events with no field2 to have an empty value rather than a null value. That's the normal way to force potentially null fields to exist when using them in split-by clauses, or in top, as in your case.
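Putting it together with your search (the index and sourcetype are placeholders):

index=main sourcetype=my_data
| fillnull value="" field2
| top field1 field2 field3

Events missing field2 now group under an empty string instead of being dropped by top.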
When I search, I want to show the top results by a specific field "field1" and also show "field2" and "field3". The problem is some results don't have a "field2" but do contain the other fields. I get different results when I search if I include "field2" in the results. Can I search and return all results whether or not "field2" exists?

| top field1 = all possible results
| top field1 field2 field3 = only results with all fields

What I want is just to show a blank where "field2" would be on matches that don't have a "field2". Basically, make "field2" optional.
That's a start.  You'll also need maxVolumeDataSizeMB so Splunk knows how large the volume is.  Then each index definition needs to reference the volume by its name.

[volume:MyVolume]
path = /some/file/path
# example cap; size this to your actual storage
maxVolumeDataSizeMB = 500000

[MyIndexSaturated]
coldPath = volume:MyVolume/myindexsaturated/colddb
homePath = volume:MyVolume/myindexsaturated/db
thawedPath = $SPLUNK_DB/myindexsaturated/thaweddb
frozenTimePeriodInSecs = 1209600
Yes, it's recommended as a best practice to implement all Magic 8 configs because they establish consistency and reliability in data onboarding. While most TAs start with the Magic 6, adding the EVENT_BREAKER configs gives you better control over event distribution and parsing. Think of the Magic 6 as the minimum standard and the Magic 8 as the complete package for optimal data handling. The TAs can be updated with the additional configs when needed based on your specific deployment, but having all 8 from the start is generally ideal, as it prevents potential data parsing issues down the line. If this helps, please upvote.
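For reference, a minimal props.conf sketch of the Magic 8 (the sourcetype name and the timestamp settings are placeholders - adjust them to your actual data):

[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)

The first six handle line breaking and timestamping; the two EVENT_BREAKER settings let universal forwarders split the stream on complete event boundaries when distributing data across indexers.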
Yes, it is considered Best Practice to specify all of the Great/Magic 8 props every time.  People are lazy, however, so TAs often include only the props that differ from the default settings.
Hi,

Has anyone come across adding "Oracle Autonomous DB" monitoring using a "Wallet" on AppDynamics? I need some help with the JDBC string when using a Wallet file.

Regards,
Vinodh
With respect to the Magic 8, should you always try to include them in the props of your various sourcetypes for a data set? I am slightly confused: if this is a best practice, why do most pre-configured TAs on Splunkbase include only the magic 3 or 4? What happened to the rest of them? Is it always a best practice to include all 8?
@christophecris Please open a support case to get this checked. If this Helps, Please Upvote.
I have had this issue in the past where despite installing JDK, DB Connect will not recognize a task server. I ended up trying different versions of JDK until eventually I got one that worked. Could you try a few different JDK packages?
I have about 800 searches; some take more than a minute to run. In the messages it states:

status: skipped, reason: "The maximum number of concurrent auto-summarization searches on this instance has been reached."

No warnings or errors; all messages have "INFO" right after the date/time. CPU usage is at about 12% and memory usage is at 28%.
No, that should be fine. As long as you have enough CPU and threads, your correlation searches are not overlapping with their next execution (e.g. a search that runs every 2 hours but takes 2.5 hours to complete), and you use the +1 minute technique to spread the searches around, then it should be fine. Do you get warnings about concurrent searches, or do you see high CPU usage in your monitoring console?
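To illustrate the staggering (the search names are placeholders), three 2-hourly searches would end up with schedules like:

1 */2 * * *   correlation_search_A
2 */2 * * *   correlation_search_B
3 */2 * * *   correlation_search_C

Each search still runs every two hours, but their start times never coincide.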
It might "work" but it doesn't work properly. With this search you need to read every single event you have in your specified time range just to find the few matching ones. You need to a) Define p... See more...
It might "work" but it doesn't work properly. With this search you need to read every single event you have in your specified time range just to find the few matching ones. You need to a) Define proper extractions in Splunk's configuration or (even better; assuming your events _are_ well-formed jsons) b) Configure the sourcetype associated with this type of events to use KV_MODE=json
Hi @Uma.Boppana, Thank you for asking your question on the community. Did you happen to find a solution to your question or any new information you can share here? If not, and you are still looking for help, you can contact AppDynamics Support: How to contact AppDynamics Support and manage existing cases with Cisco Support Case Manager (SCM) 
Wait. I think you're confusing INDEXED_EXTRACTIONS with general index-time operations. With TA-windows the latter are used (I'm not 100% sure whether they are only used if you still collect the data "old-style", with the sourcetype set to a particular event log). Also, the knowledge bundle is something completely different from the apps deployed on the indexers the normal way. The knowledge bundle is what is used by a search spawned from the search-head layer. Apps installed on the indexers are what is used during indexing.
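To illustrate the difference (the stanza and class names are hypothetical), index-time settings must live in an app on the indexers, while search-time settings travel in the knowledge bundle:

# must be deployed in an app on the indexers - applied at index time
[my_sourcetype]
TRANSFORMS-route = my_index_time_transform

# lives on the search head - replicated to peers in the knowledge bundle
[my_sourcetype]
EXTRACT-user = user=(?<user>\S+)

If an index-time setting only exists on the search head, the indexers never see it when they parse incoming data.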
Hello, I am currently building correlation searches in ES and I am running into a "searches delayed" issue. Some of my searches run every hour, most every 2 hours, and some every 3 or 12 hours. My time range looks like:

Earliest Time: -2h
Latest Time: now
cron schedule: 1 */2 * * *

For each new search I add +1 to the minute field of the cron schedule, up to 59, and then start over. So on the next search the schedule would be 2 */2 * * *, and so on... Is there a more efficient way I should be scheduling searches? Thank you.
Hi Team,

Two episodes are being generated for the same host and the same description, with different severities: one is high and the other one is info. When I checked the correlation search, we have given only high, and I checked the NEAP policy as well. Under episode information I found the severity we had given, the same as the first event. Can someone please guide me on how to avoid the info episode, and how to find the path to configure for info severity?

Regards,
Nagalakshmi
Thanks for your help! I am still confused about how an indexer cluster should be managed. If I want to create any KOs on the search head side, should I push these KOs to the indexers as well?