All Posts


1. To run DB Connect you need to run it on a Heavy Forwarder, as it contains many components that are prerequisites. See this link for more details: https://docs.splunk.com/Documentation/DBX/3.16.0/DeployDBX/HowSplunkDBConnectworks

2. In short, yes. Splunk has built-in functions for sending data to different destinations using the UF. As a simple example, if you have Splunk on premise and Splunk in the cloud, you can send to both if desired (see the outputs.conf sketch after this post). Parsing the data via the HF has performance gains: it examines the data and transforms it, and there are many sub-parts to the pipeline process. As for fast mode: when you parse data before indexing, the extracted fields are available for use in searches regardless of whether you're using fast mode or not. Fast mode is one of three search modes and lets you search the available data using different criteria.

See the three links below for more details:
https://docs.splunk.com/Documentation/Splunk/9.0.4/Forwarding/Routeandfilterdatad
https://docs.splunk.com/Documentation/Splunk/9.2.1/Deploy/Datapipeline
https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/Search/Changethesearchmode

3. If your data source can only send API data to Splunk, then this is a good option (it's basically agentless) and is called the HTTP Event Collector. https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector
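For what it's worth, a minimal outputs.conf sketch for point 2, sending the same data from a forwarder to two destinations; the group names and server addresses below are hypothetical placeholders:

# outputs.conf on the forwarder - group names and servers are placeholders
[tcpout]
defaultGroup = onprem_indexers, cloud_indexers

# on-premises indexer cluster
[tcpout:onprem_indexers]
server = idx1.example.local:9997, idx2.example.local:9997

# Splunk Cloud indexers (in practice the Cloud forwarder credentials app usually supplies this)
[tcpout:cloud_indexers]
server = inputs.example.splunkcloud.com:9997

With both groups listed in defaultGroup, the forwarder clones events to each destination rather than load-balancing between them.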
@deepakc will this affect any data? It's a production environment.
Worth trying a rolling restart on the cluster https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/Userollingrestart
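If you go that route, the rolling restart can also be kicked off from the CLI; a minimal sketch, run on the cluster manager (assuming $SPLUNK_HOME is your install path):

# run on the cluster manager (cluster master) node
$SPLUNK_HOME/bin/splunk rolling-restart cluster-peers

Peers restart a few at a time rather than all at once, which is generally safer for a production cluster.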
https://support.whatfix.com/docs/adding-whatfix-javascript-to-salesforce-community This is what I tried to do, but the script tags are not getting reflected when I open this community.
Hi Cansel, Yes, I can access that. I have added the script tag in the Community Builder head markup, but it is not reflected when I inspect the site.
I have a problem with the indexer cluster master. For a week I have been getting an error, shown in red, saying there is a data durability issue. This screenshot shows indexer clustering from the cluster master, and this one is from inside one index. Any help?
Thank you for your reply, marnall. I have some additional questions about your scenarios.
1. Why does the fact that the apps are managed through a web interface make it better to collect logs using a heavy forwarder? For example, I have an MSSQL database from which I am collecting data directly from the tables with DB Connect, and I don't need any kind of forwarder to get my data into Splunk, so why would I want to use a heavy forwarder?
2. "you might want to send certain data to one indexer cluster and other data to another indexer cluster." Does that mean this kind of operation is impossible on the universal forwarder? Also, what are the benefits of parsing data before it's indexed? Does it mean that when we search in "fast" mode we will see the fields that were extracted by the HF?
3. I haven't worked with HEC, so I am sorry if this is a very simple or dumb question, but what does it mean to "expose the HEC interface of your indexers"? And why would we want to avoid that?
I am only one month into Splunk, so I am sorry if I am complicating things.
Thank you for your time, marnall!
Thanks,  Giuseppe
Then you should use the index with the list of indexes to search as the subsearch, i.e. put your meta search in the subsearch and it will return the index you want.

[ | search index=meta_info sourcetype=meta:info
  | search group_name=admingr AND spIndex_name=admin_audit
  | rename spIndex_name as index
  | fields index ]

In the form above it's totally hard-coded, but I assume the spIndex_name= value is variable.
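Just to illustrate how that plugs into an outer search (the stats clause here is a made-up example; the index and field names are the ones from this thread):

[ | search index=meta_info sourcetype=meta:info
  | search group_name=admingr AND spIndex_name=admin_audit
  | rename spIndex_name as index
  | fields index ]
| stats count by sourcetype

The subsearch returns index=admin_audit as a search term, so the outer search runs against that index without you hard-coding its name.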
@bowesmana @richgalloway, we have an index that contains the list of index names; one search gets the index name from that index, and the other search retrieves the events within that index. A very interesting use case, but the customer wants it.
Another option, just for you: make your list under [general] in user-prefs.conf.

$SPLUNK_HOME/etc/users/<YOURNAME>/user-prefs/local/user-prefs.conf

[general]
appOrder = search,lookup_editor
Your search is a little odd - it seems you just want to search index=admin_audit - so what's the purpose of the index=meta_info part? What's wrong with just index=admin_audit?
Oh, I think I just found the answer. It looks like alert_actions.conf has a hostname property; if you explicitly put https:// in front of the URL, you can avoid having Splunk tack the web port onto the links it sends in emails. https://docs.splunk.com/Documentation/Splunk/latest/Admin/Alertactionsconf
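In case it saves someone a lookup, a minimal sketch of that setting; the load balancer hostname below is a placeholder:

# alert_actions.conf on each search head - hostname value is a placeholder
[email]
hostname = https://splunk-lb.example.com

With the scheme included, links in alert emails are built from this value as-is instead of from the search head's host and web port.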
We have a load balancer sitting in front of our search head cluster that is reverse proxying the connection to the search heads over HTTPS port 443. The search head web interfaces are running on port 8000. The issue is that when our search heads send out alert emails, they append 8000 to the load balancer URL, which doesn't work because the load balancer is listening on 443. Is there a way to tell the search heads to leave off the port or specify a different port explicitly in the alert emails?
I owe you a lot of beers!
URLs with spaces in them must be encoded.  Use the urlencode command available with the Webtools Add-on (https://splunkbase.splunk.com/app/4146).
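If installing the add-on isn't an option, a rough workaround that only encodes the spaces seen in this thread is plain eval with replace(); the field names are the ones from the question:

| makeresults format=json data="[{\"project\":\"projectA - Team A\"},{\"project\":\"projectB\"}]"
| eval encoded_project=replace(project, " ", "%20")
| eval dashboard_url="https://internal.com:8000/en-US/app/search/dash?form.q_project=".encoded_project

This only handles spaces; the add-on's urlencode is the safer choice if project names can contain other reserved characters.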
The regular expression in the rex command has some misplaced escape characters that are preventing matches.  Try this query:

index=xxxxx "User ID"
| rex field=_raw "User\sID-(?<username>\w+)"
| stats count by username
Description
How can I produce a URL in an alert email that uses field values, either via in-line results or in the body of the email? When an alert is triggered, an email is sent with the field dashboard_url. For projects with no spaces in the name, the URL is clickable. If there is a space, the URL contains only the text up to the space and is broken.

Sample query
| makeresults format=json data="[{\"project\":\"projectA - Team A\"},{\"project\":\"projectB\"}]"
| eval dashboard_url="https://internal.com:8000/en-US/app/search/dash?form.q_project=".project.""

Result: https://internal.com:8000/en-US/app/search/dash?form.q_project=projectA - Team A

Workarounds attempted
I tried building the dashboard_url in the email body using results.project. The same condition occurs: projects with spaces get a broken link.
I have a simple search, index=xxxxx "User ID", and I need the correct syntax to get the actual username in the results.

Sample Event
INFO xcvxcvxcvxcvxcvxcvxcvxcvxcvxcvvcx - Logged User ID-XXXXXX

Now I can easily do a count of how many people logged on, but I need to report on the XXXXXX. I thought about doing:

index=xxxxx 'User ID" | rex field=_raw "User\/s\ID\/-\(?<username>\d+)" | stats count by username

The search is returning the results and just a count, but I need to see the username in my stats. I am new to this, so please mind the ignorance.
At a high level:

Think about what data you want from your website - is it OS logs, application logs, security logs, etc. - and identify it.
For the logs you want, check whether there is a Splunk TA - search on Splunkbase. (This will help with the data integration and parsing.)
Install a Universal Forwarder onto the web-hosted servers and monitor the logs (a monitor sketch follows below); other methods are the API and Splunk HEC.
You may even have to use a Heavy Forwarder to collect the logs - this depends on the logs/data you want and your Splunk architecture.
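For the monitor option, a minimal inputs.conf sketch on the Universal Forwarder; the log path, sourcetype, and index below are hypothetical placeholders:

# inputs.conf on the Universal Forwarder - path, sourcetype and index are placeholders
[monitor:///var/log/nginx/access.log]
sourcetype = nginx:access
index = web
disabled = false

The index named here has to exist on the indexers, and the sourcetype should match whatever the relevant TA expects.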