All Posts

With respect to the Magic 8, should you always try to include them in the props for your various sourcetypes in a data set? I am slightly confused: if this is a best practice, why do most pre-configured TAs on Splunkbase include only three or four of them? What happened to the rest? Is it always best practice to include all 8?
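For reference, the "Magic 8" are usually listed as the event-breaking, timestamp, and forwarder-side event-breaker settings in props.conf. A minimal sketch with all eight (the sourcetype name, regexes, and time format here are placeholders, not from the original post):

```ini
# props.conf -- hypothetical sourcetype, illustrative values only
[my_sourcetype]
# Event breaking
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000
# Timestamp extraction
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
# Event breaking on universal forwarders
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)
```

Setting all eight explicitly avoids the expensive merging and timestamp-guessing code paths at index time, which is why they are recommended even when defaults would happen to work.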
@christophecris Please open a support case to get this checked. If this helps, please upvote.
I have had this issue in the past where, despite installing a JDK, DB Connect would not recognize a task server. I ended up trying different JDK versions until I eventually got one that worked. Could you try a few different JDK packages?
I have about 800 searches; some take more than a minute to run. In the messages it states: status: skipped, reason: "The maximum number of concurrent auto-summarization searches on this instance has been reached." There are no warnings or errors; all messages show "INFO" right after the date/time. CPU usage is at about 12% and memory usage is at 28%.
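The concurrency cap for auto-summarization searches is a percentage of the scheduler's overall search quota, set in limits.conf. A hedged sketch of the relevant setting (the value shown is the documented default, not a tuning recommendation for this environment):

```ini
# limits.conf
[scheduler]
# Percentage of the scheduler's concurrent-search quota that
# auto-summarization searches are allowed to consume (default 50)
auto_summary_perc = 50
```

If the skips are frequent, it is usually better to investigate why so many summarization searches pile up (overlapping schedules, long-running searches) before raising this percentage.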
No, that should be fine. As long as you have enough CPU and threads, your correlation searches do not overlap with their own next execution (e.g. a search that runs every 2 hours but takes 2.5 hours to complete), and you use the +1 minute technique to spread the searches out, it should be fine. Do you get warnings about concurrent searches, or do you see high CPU usage in your Monitoring Console?
It might "work", but it doesn't work properly. With this search you have to read every single event in your specified time range just to find the few matching ones. You need to either: a) define proper extractions in Splunk's configuration, or (even better, assuming your events _are_ well-formed JSON) b) configure the sourcetype associated with this type of event to use KV_MODE=json.
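Option b) is a one-line props.conf change on the search head. A minimal sketch, assuming a hypothetical sourcetype name:

```ini
# props.conf (search-time, so deploy to the search head)
[my_json_events]
# Automatically extract fields from well-formed JSON events at search time
KV_MODE = json
```

With this in place the fields become first-class extracted fields, so filters on them can be expressed in the base search instead of post-processing every event.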
Hi @Uma.Boppana, Thank you for asking your question on the community. Did you happen to find a solution to your question or any new information you can share here? If not, and you are still looking for help, you can contact AppDynamics Support: How to contact AppDynamics Support and manage existing cases with Cisco Support Case Manager (SCM) 
Wait. I think you're confusing INDEXED_EXTRACTIONS with general index-time operations. With TA-windows the latter are used (I'm not 100% sure whether they are only used if you still collect the data "old-style", with the sourcetype set to a particular event log). Also, the knowledge bundle is something completely different from the apps deployed on the indexers the normal way. The knowledge bundle is what is used by a search spawned from the search-head layer; apps installed on the indexers are what is used during indexing.
Hello, I am currently building correlation searches in ES and I am running into a "searches delayed" issue. Some of my searches run every hour, most every 2 hours, and some every 3 or 12 hours. My time range looks like: Earliest Time: -2h, Latest Time: now, cron schedule: 1 */2 * * *. For each new search I add +1 to the minute field of the cron schedule, up to 59, and then start over; so the next search's schedule would be 2 */2 * * *, and so on. Is there a more efficient way I should be scheduling searches? Thank you.
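Instead of hand-staggering the minute field, the scheduler can spread searches itself. A hedged sketch of the relevant savedsearches.conf settings (the stanza name and values are placeholders, not a recommendation for this specific environment):

```ini
# savedsearches.conf
[My Correlation Search]
cron_schedule = 0 */2 * * *
# Let the scheduler run this search any time within a 10-minute
# window after its scheduled time, when concurrency allows
schedule_window = 10
# Alternatively, apply a random per-run offset of up to 5 minutes
allow_skew = 5m
```

Both settings trade exact start times for fewer skipped or delayed searches, which is usually the right trade for correlation searches that look back over a fixed window anyway.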
Hi Team, two episodes are being generated for the same host and the same description, with different severities: one is high and the other is info. When I checked the correlation search, we have specified only high, and I checked the NEAP policy as well. Under episode information I found the severity we had given, the same as the first event. Can someone please guide me on how to avoid the info episode, and where to find the configuration path for the info severity? Regards, Nagalakshmi
Thanks for your help! I am still confused about how an indexer cluster should be managed. If I want to create any KOs on the search head side, should I also push those KOs to the indexers?
You have only defined a single source and destination; you need to compare multiples. The color*/edge*/weight* fields are meant to independently review the spread across the comparison values associated with the source/destination groups. Look at the sample visualization images on the Splunkbase app page for prime examples. https://splunkbase.splunk.com/app/4611
Start with disk space: most likely Splunk is so busy trying to manage buckets and ingestion with no space to spare that many activities are suffering.
At one point there was a Splunk-on-Splunk template for ITSI which worked wonders in a previous environment I monitored. I did supplement the existing template with inbound syslog system monitoring. However, I didn't do anything to monitor the router, FW, and LB, since the network was quite large and any HA activities would require additional details; it would have been too large a task for the return on value. Monitoring these items separately would have a lot of value, and if the port labels are informative, you can make up for the lack of a full integration map. IMO.
I downloaded the tutorial data and want to upload it, but I keep getting an error message. Also, my system health is showing red, and when I click it, it shows too many issues that have to be resolved. Where do I begin resolving my issue? Thanks
Hi @BRFZ, did you configure this server to send logs to the indexers? Did you open the firewall routes between this server and the indexers on port 9997? Make these checks. Ciao. Giuseppe
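For reference, forwarding a management server's own internal and audit logs to the indexers is typically done with an outputs.conf along these lines (host names and the group name are placeholders; 9997 is the conventional receiving port):

```ini
# outputs.conf on the Deployment Server / Monitoring Console / License Master
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```

Once this is in place and the network path is open, the server's _internal and _audit events should start appearing on the indexers like those from the other instances.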
It is correct that the knowledge bundle is replicated to the search peers, but for parsing (e.g. timestamp extraction) during indexing, only the configuration from $SPLUNK_HOME/etc/peer-apps is used as a source. That is why you must deploy the TA on the indexers if there is no heavy forwarder in between. The knowledge bundle is only used during searching. https://docs.splunk.com/Documentation/Splunk/9.3.2/Indexer/Howindexingworks
Hello, I have a server configured with three roles: Deployment Server, Monitoring Console, and License Master. However, I am not receiving the internal and audit logs from this server, as I do from the Search Head or Indexers. If you have any solutions to this problem, I would greatly appreciate your help.
You can do it more easily with the following config:
<format type="color" field="categoryId">
  <colorPalette type="map">{"ACCESSORIES":#6DB7C6,"ARCADE":#F7BC38,"STRATEGY":#AFEEEE}</colorPalette>
</format>
https://docs.splunk.com/Documentation/SplunkCloud/latest/Viz/TableFormatsXML#Table_format_source_code_example
Please check whether your user has the edit_tokens_all capability assigned. Are the authentication extensions configured and running without any issues?