All Posts



For some time now, the server class agents on my deployment server have been in Pending status, and yet the logs are still coming in. Does anyone know why? Yet when I look at the Forwarders page, all the agents' statuses are OK! I just don't get it!
Hi @Dk123 , as @richgalloway said, what's your architecture? You have to create indexes.conf on the Indexers so you can use the indexes there, but that alone won't make them visible on the Search Heads. To use them on the Indexers and see them on the Search Heads, you have to create the indexes on both types of machines. If instead you have a standalone server, did you restart Splunk after creating the index? Ciao. Giuseppe
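A minimal sketch of what that looks like, using a placeholder index name my_custom_index (paths assume the default $SPLUNK_DB location):

# indexes.conf -- create this on the Indexers and, as described above,
# also on the Search Heads so the index shows up there
[my_custom_index]
homePath   = $SPLUNK_DB/my_custom_index/db
coldPath   = $SPLUNK_DB/my_custom_index/colddb
thawedPath = $SPLUNK_DB/my_custom_index/thaweddb

Then restart Splunk (or, on an indexer cluster, push the change from the Cluster Manager as a configuration bundle) so the new index is picked up.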
Hi @Andre_ , I don't think that's possible, also because Universal Forwarder configurations are usually managed with a Deployment Server. But you could use a very broad input and take all the weblogs or Apache logs, e.g. if your Apache logs are in the folder /opt/apache/<app>/data/<other_folders>/apache.log you could use this in your inputs.conf: [monitor:///opt/apache/.../*.log] Ciao. Giuseppe
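A slightly fuller inputs.conf sketch along those lines, assuming the /opt/apache/<app>/data/... layout above; the index and sourcetype names are placeholders:

# inputs.conf on the Universal Forwarder
# the recursive ... wildcard matches any number of intermediate folders
[monitor:///opt/apache/.../apache.log]
index = web
sourcetype = apache:access
disabled = 0

The same idea should work for IIS by monitoring something like C:\inetpub\logs\LogFiles\...\*.log on the Windows hosts.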
Yep, we are doing just that:
- First you need to capture a batch job failing. This can be done in a number of ways, such as writing the batch status to a log file to capture failures.
- Then monitor that log file and create a KPI.
- Create a custom alert action that runs a batch job restart.
- In the NEAP (Notable Event Aggregation Policy), create the logic so that when the KPI picks up a failing batch job, it triggers the custom alert action.
- Then you need another correlation search to capture the batch job succeeding and correlate it with the KPI returning to normal, to complete the cycle.
A sketch of such a correlation search is below.
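A minimal sketch of the failure-detection side, assuming a hypothetical index batch_logs and log lines that carry JOB_NAME and STATUS fields (these names are placeholders, not Control-M defaults):

index=batch_logs sourcetype=controlm:status STATUS=FAILED earliest=-15m
| stats latest(_time) as last_failure count as failures by JOB_NAME

Used as a correlation search (or KPI base search), this is what would trigger the custom alert action that restarts the job; the companion "recovery" search is the same idea with STATUS=OK, so the episode can close when the KPI returns to normal.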
Hi, Our project is planning to use Splunk ITSI to do batch monitoring of Control M jobs and to have auto-healing as well. Would that be feasible with Splunk ITSI? Does Splunk ITSI have the capability to take action, like running a custom script to force-restart or force-OK a Control M job, once certain conditions are met? Looking forward to your insights.
I can find evidence to back this up, but it's hard to find any real-world case. In our case, we're limited by resources, so we can't add more deployers. We're using two search head clusters to keep things separate; this way, if one cluster needs to handle a lot of saved searches, it won't slow down the other one.
Hello, Is it possible to configure a Universal Forwarder to automatically discover the location of weblogs for IIS or Apache? I can programmatically get the locations and have a script for Windows and Linux that returns a list of locations.  Kind Regards Andre
Finally found the working URI for the demo version of Splunk Cloud Platform. curl -kv https://<host>.<stack>.splunkcloud.com:8088/services/collector/health You can find <host>.<stack> by running `index=_internal` and looking at the `host` field.
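For anyone else hitting this, a hedged end-to-end example (the stack host is whatever the search below returns; no HEC token is required for the health endpoint):

Find the stack host from the internal logs:
index=_internal | stats count by host

Then check HEC from your own machine:
curl -kv https://<host>.<stack>.splunkcloud.com:8088/services/collector/health

If HEC is enabled, a healthy endpoint typically responds with something like {"text":"HEC is healthy","code":17}.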
HEC is likely not available in the 14-day demo version of Splunk Cloud Platform.
A standalone Deployment Server that does not share any other server roles should be able to handle up to 25,000 forwarders. In the past, customers with large deployments have set up Deployment Servers behind a load balancer and kept apps sync'd between them using tools such as Puppet or Ansible. Since 9.2.0, Splunk Deployment Servers can be architected to work as a "cluster" behind a load balancer and to keep apps and client status sync'd between them via a shared network directory. This allows any number of forwarders to be managed. For example, for an environment capturing data from 100,000 forwarders, a cluster of at least 4 Deployment Servers (100,000 / 25,000) would be a good place to start.
With Splunk Enterprise 9.2.0, Splunk introduced "Deployment Server Scaling", which involves setting Deployment Servers behind a load balancer (or using DNS mapping) and granting them all access to a single network share. Each DS uses the share path to update and share app configurations and to post log files, which allows the DSs to keep apps, client lists and client status in sync with each other. While Splunk documentation mentions 50 clients, this is only in reference to ensuring the DS is on its own server, not sharing functionality with any other Splunk instance such as a search head, indexer, Monitoring Console, License Manager, etc. A Deployment Server can actually handle up to 25,000 clients, if granted enough system and network resources to manage the load. With Deployment Server Scaling, the number of forwarders that can be managed multiplies with each Deployment Server added to the "cluster": two can manage up to 50,000 clients, three up to 75,000, and so on. All Deployment Servers in a cluster share all apps and all clients. DS Scaling is also referred to as "clustering", though it works nothing like indexer or search head clusters -- the different DSs don't communicate with one another directly or formally form a "cluster". This allows very large environments to manage a multitude of forwarders. Too many forwarders? Add another Deployment Server. Here are a few links:
- Splunk Documentation: Implement a Deployment Server Cluster
- Splunk Documentation: Estimate Deployment Server Performance
- The Deployment Server section of the Splunk Lantern article Scaling your Splunk Deployment, which consolidates relevant Splunk documentation
- Splunk Community article: Deployment Server Scalability Best Practices
- The "Discovered Intelligence" blog article on setting up a Splunk Deployment Server cluster. I have not (yet) tested their suggestions, but it is a great place to start for a quick overview of what's needed.
Deployment Servers are on track for significant improvements in the near future as well, with the goal of reducing or eliminating the need for 3rd-party tools such as Puppet or Ansible for those who wish to manage everything within Splunk itself.
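As a minimal sketch of the client side of such a setup, assuming a hypothetical load balancer (or DNS alias) ds-lb.example.com:8089 in front of the Deployment Servers, each forwarder would carry a deploymentclient.conf like this:

# deploymentclient.conf on each Universal Forwarder
[deployment-client]

[target-broker:deploymentServer]
# point at the load balancer / DNS alias, not at an individual DS
targetUri = ds-lb.example.com:8089

For the server-side pieces (the shared network directory and sync settings), follow the "Implement a Deployment Server Cluster" documentation linked above rather than any sketch here.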
Hey Will, I just wanted to say a huge THANK YOU for your help! Your suggestion to increase MAX_DAYS_AGO to 3000 completely solved my issue, and Splunk now correctly recognizes my timestamps. Honestly, I had been struggling with this for quite some time, and your solution saved me a lot of time and frustration. I really appreciate the effort you put into answering my question.   Thanks again, and have a great day!   Best, Emil
Where did you install the custom app?  It must be installed on the indexer(s) to create the index, but it must also be installed on the search head(s) for the index to appear in the GUI.
I have created an index via the CLI (script) in a custom application, but the index is not showing up in the Splunk GUI.
It worked, thanks Whisperer, a helping hand as always.
Try something like this:
index=linux host=* sourcetype=bash_history ("systemctl start" OR "systemctl enable" OR ("mv" "/opt/"))
| eval systemctl=if(searchmatch("systemctl"), "systemctl", null())
| eval mv_opt=if(searchmatch("mv") AND searchmatch("/opt/"), "mv_opt", null())
| stats dc(mv_opt) as mv_opt dc(systemctl) as systemctl by host
| where mv_opt==1 AND systemctl==1
Dear Splunkers, I need a search that tells me whether there's a host that has these logs. Below is a pseudo search that shows what I really want:
index=linux host=* sourcetype=bash_history AND ("systemctl start" OR "systemctl enable") | union [search index=linux host=* sourcetype=bash_history (mv AND /opt/)]
Just to make it clearer: I want a match only if a server generated a log that contains "mv AND /opt/" and another log that contains "systemctl start" OR "systemctl enable". Thanks in advance.
Does the following search help? This uses json_ functions and mvexpand to split out and then match up the fields and expressions:
| datamodel
| spath output=modelName modelName
| search modelName=Network_Traffic
| eval objects=json_array_to_mv(json_extract(_raw,"objects"))
| mvexpand objects
| eval calculations=json_array_to_mv(json_extract(objects,"calculations"))
| mvexpand calculations
| eval outputFields=json_array_to_mv(json_extract(calculations,"outputFields"))
| mvexpand outputFields
| eval fieldName=json_extract(outputFields,"fieldName")
| eval expression=json_extract(calculations,"expression")
| table modelName fieldName expression
It looks like your time extraction settings are correct; however, you need to set MAX_DAYS_AGO to a higher value (e.g. 3000) for Splunk to accept that 2017 timestamp, because the default is 2000 days and Splunk is therefore rejecting the date. Let me know if adding MAX_DAYS_AGO = 3000 to your extraction config works! Good luck Will
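For reference, a minimal props.conf sketch; [my_old_logs] is a placeholder sourcetype, and your existing TIME_PREFIX / TIME_FORMAT lines stay as they are:

# props.conf on the instance that parses the data (indexer or heavy forwarder)
[my_old_logs]
MAX_DAYS_AGO = 3000

A restart of that instance is needed for the change to take effect, and it only affects events indexed from then on.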