All Posts

Providing there are no issues, a rolling restart is OK to perform. It's best to do this when the cluster is least busy, or within a maintenance window for your BAU operations. A rolling restart performs a phased restart of all peer nodes, so that the indexer cluster as a whole can continue to perform its function during the restart process; while one indexer is being restarted, data is sent to the other indexers. There are a number of checks it performs, so it can take a while depending on your architecture. First check the status; you can use the manager GUI or the CLI:

/opt/splunk/bin/splunk show cluster-status --verbose

Then restart from the GUI, or use the CLI:

/opt/splunk/bin/splunk rolling-restart cluster-peers
Hi @Poojitha, in the Deployment Server GUI there's an option to flag that an app requires a Splunk restart on the client when the app is updated; this is mandatory for the client to reload the configurations. Ciao. Giuseppe
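For reference, the same flag can also be set directly in serverclass.conf on the deployment server; a minimal sketch (the serverclass and app names here are placeholders, not from the thread):

```ini
# serverclass.conf on the deployment server
[serverClass:my_serverclass:app:my_custom_app]
# Restart splunkd on the client after this app is deployed or updated
restartSplunkd = true
```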
@gcusello Thanks for your response. I checked the first point you mentioned. The app which I want to push has splunk user permissions on it, and the HF is set to run as splunk as well, so it's looking fine there. What do you mean by "flagged the restart Splunk flag for that app"? I didn't get this; please can you explain more? I am deploying a custom app. Regards, PNV
Hi @Poojitha, first check if the owner of the app is splunk: if it's root and you are using the splunk user to run Splunk services on the HF, you could have problems. Then check if you flagged the restart Splunk flag for that app. Next, check again (even if you already checked) the serverclass: check if the server is included in the serverclass. Last check: what kind of app are you deploying, a custom or a Splunkbase app? Ciao. Giuseppe
Hi All, I have set up a new deployment server and a new heavy forwarder. There is a successful phone-home connection when I check with the command "./splunk list deploy-clients"; the client is successfully connecting to the server. I want to push a new app to this new heavy forwarder, but the app is not getting pushed from the deployment server. I verified that the app is under the deployment_apps directory, and I also checked the serverclass.conf file; both look good. What is the reason that the app is not getting written? Do I need to first create the app on the HF manually so that the DS finds the app and pushes the changes? Regards, PNV
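A few checks on the deployment server often help in this situation; a sketch, assuming a default /opt/splunk install path:

```shell
# Confirm the client is phoning home and note its reported hostname/IP
/opt/splunk/bin/splunk list deploy-clients

# The DS does not always pick up edits automatically: force it to
# re-read serverclass.conf and rescan deployment-apps
/opt/splunk/bin/splunk reload deploy-server

# Verify the effective serverclass configuration and where each
# setting comes from
/opt/splunk/bin/splunk btool serverclass list --debug
```

A common cause of "app never arrives" is simply that the deployment server was not reloaded after serverclass.conf was edited.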
1. To run DB Connect you need to run it on a Heavy Forwarder, as it contains many components that are prerequisites. See the link below for more details:
https://docs.splunk.com/Documentation/DBX/3.16.0/DeployDBX/HowSplunkDBConnectworks

2. In short, yes: Splunk has built-in functions for sending data to different destinations using the UF. As a simple example, if you have Splunk on-premises and Splunk in the cloud, you can send to both if desired. Parsing the data via an HF has performance benefits: it examines the data and transforms it, and there are many sub-parts to the pipeline process. As for fast mode: when data is parsed before indexing, the extracted fields are available for use in searches regardless of whether you're using fast mode. Fast mode is simply one of three search modes that let you search the available data using different criteria. See the three links below for more details:
https://docs.splunk.com/Documentation/Splunk/9.0.4/Forwarding/Routeandfilterdatad
https://docs.splunk.com/Documentation/Splunk/9.2.1/Deploy/Datapipeline
https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/Search/Changethesearchmode

3. If your data source can only send API data to Splunk, then this is a good option (it's basically agentless); it's called the HTTP Event Collector.
https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector
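As a sketch of point 2, a forwarder can send the same data to two destinations via outputs.conf; listing multiple target groups in defaultGroup clones the data to each group (the hostnames below are placeholders):

```ini
# outputs.conf on the forwarder
[tcpout]
# Listing more than one group clones the data to every group
defaultGroup = onprem_indexers, cloud_indexers

[tcpout:onprem_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

[tcpout:cloud_indexers]
server = inputs.example.splunkcloud.com:9997
```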
@deepakc will this affect any data? It's a production environment.
Worth trying a rolling restart on the cluster https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/Userollingrestart
https://support.whatfix.com/docs/adding-whatfix-javascript-to-salesforce-community This is what I tried to do, but the script tags are not getting reflected when I open this community.
Hi Cansel, Yes, I can access that. I have added the script tag in Community builder head markup, but it is not reflecting when I inspect the site.
I have a problem on the indexer cluster master. Since a week ago I've been getting a red error saying there is a data durability issue. [Screenshots attached: the indexer clustering page from the cluster master, and the detail view of one index.] Any help?
Thank you for your reply, marnall. I have some additional questions about your scenarios. 1. Why does the fact that the apps are managed via a web interface make it better to collect logs using a heavy forwarder? For example, I have an MSSQL database from which I am collecting data from the tables directly with DB Connect, and I don't need any kind of forwarder to get my data into Splunk; why would I want to use a heavy forwarder? 2. "you might want to send certain data to one indexer cluster and other data to another indexer cluster." Does that mean this kind of operation is impossible on the universal forwarder? Also, what are the benefits of parsing data before it's indexed? Does it mean that when we do a "fast" mode search we will see the fields that were extracted by the HF? 3. I haven't worked with HEC, so I am sorry if this is a very simple question, but what does it mean to "expose the HEC interface of your indexers"? And why would we want to avoid that? I've only been using Splunk for a month, so I apologize in case I am overcomplicating things. Thank you for your time, marnall!
Thanks, Giuseppe
Then you should use the index containing the list of indexes to search as the subsearch, i.e. put your meta search in the subsearch and it will return the index you want:

[ | search index=meta_info sourcetype=meta:info
  | search group_name=admingr AND spIndex_name=admin_audit
  | rename spIndex_name as index
  | fields index ]

In the form above it's totally hard coded, but I assume the spIndex_name= statement is variable.
@bowesmana @richgalloway, we have an index that contains the list of index names; one search is going to get the index name from that index, and the other search is going to search the events (or get the events) within that index. A very interesting use case, but the customer wants it.
Another option, just for you only: make your list under [general] in user-prefs.conf.

$SPLUNK_HOME/etc/users/<YOURNAME>/user-prefs/local/user-prefs.conf

[general]
appOrder = search,lookup_editor
Your search is a little odd - it seems you just want to search index=admin_audit - so what's the purpose of the index=meta_info part? What's wrong with just index=admin_audit?
Oh, I think I just found the answer. Looks like in the alert_actions.conf file there is a hostname property: if you explicitly put https:// in front of the URL, you can avoid having the web port tacked on when it sends emails. https://docs.splunk.com/Documentation/Splunk/latest/Admin/Alertactionsconf
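For anyone hitting the same issue, a minimal sketch of that setting, assuming the load balancer answers on the standard HTTPS port (the hostname below is a placeholder):

```ini
# alert_actions.conf on the search heads
[email]
# A full URL (with scheme) is used as-is in alert links, so the
# search heads stop appending their own web port (8000)
hostname = https://splunk-lb.example.com
```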
We have a load balancer sitting in front of our search head cluster that is reverse proxying the connection to the search heads over https port 443. The search head web interfaces are running on port 8000. The issue is when our search heads send out alert emails they append 8000 to the load balancer url which doesn't work because the load balancer is listening on 443. Is there a way to tell the search heads to leave off the port or specify a different port explicitly in the alert emails?
I owe you a lot of beers!