All Posts



Hi @Tom.Davison, I wanted to check in to see if this was still a question you had. If so, check out Mario's reply and continue the conversation. 
Hi @Seda.Şerefoğlu, Have you seen this AppDynamics Docs page? Please let me know if it helps answer your question.
You could try disabling the inputs in the introspection_generator_addon app, but I don't know what side effects that might cause.  It might mean some MC dashboards stop working.
I have a lookup table with a bunch of IP addresses (ipaddress.csv) and a blank column called hostname. I would like to search in Splunk to find which hostname each IP address has. I can find the hostnames in index=fs sourcetype=inventory. I'm just having a hard time with the query logic of using the lookup table IPs to output a table in Splunk with their corresponding hostnames. Ideally I want to output the results in a table in Splunk and add the hostnames back to the lookup table in the hostname column. Any help would be greatly appreciated! Thank you
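A minimal SPL sketch of one approach, assuming the lookup's IP column is named ipaddress and the inventory events carry fields named ip and hostname (adjust the field names to match your data):

```
index=fs sourcetype=inventory
    [| inputlookup ipaddress.csv | fields ipaddress | rename ipaddress AS ip ]
| stats latest(hostname) AS hostname BY ip
| rename ip AS ipaddress
| table ipaddress hostname
| outputlookup ipaddress.csv
```

The subsearch limits the events to the IPs already in the lookup, stats resolves one hostname per IP, and outputlookup writes the result back to ipaddress.csv.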
For it to work, your Splunk Cloud stack must be on Azure.  However, I've seen conflicting information about whether DDSS is supported on Azure now.  Contact your Splunk account team.
Hello, I'm looking for some guidance on "Auto Retry" for an HTTP / Browser Test. I have a scheduled test running every 30 minutes (9:00, 9:30). When the test fails at 9:00, I see the result updated as "Failed (Auto Retry)", but the next run occurs only at 9:30. Is this expected? (Does Auto Retry kick in only at the scheduled run time?)
At the moment, our tiny indexer has very little disk space and _introspection consumes roughly GB of storage a day. Is there a way to minimize the space consumed by the index besides making the retention very short?
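If shortening retention is acceptable at all, the usual knobs live in indexes.conf on the indexer. A sketch, assuming a 7-day retention and a roughly 1 GB size cap (both values are examples, not recommendations):

```
[_introspection]
# delete data older than 7 days (no coldToFrozenDir is set, so frozen = deleted)
frozenTimePeriodInSecs = 604800
# cap the total size of the index at about 1 GB
maxTotalDataSizeMB = 1024
```

Whichever limit is hit first wins.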
What seems to be the problem?  Not all users own knowledge objects.  How are you searching for them?  If you have CLI access have you looked in $SPLUNK_HOME/etc/users?
Is it possible to connect Azure storage to a Splunk Cloud instance? Our client wants to store data from their Splunk Cloud instance in Azure to eliminate their Splunk Cloud storage overage.
Nice, but be careful not to add too many pipelines. The number should usually be less than the number of cores/CPUs on your box. And this applies only to forwarders.
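For reference, the setting being discussed lives in server.conf on the forwarder; a sketch with 2 pipelines (the value is just an example):

```
[general]
# number of parallel ingestion pipelines; keep this below the core count
parallelIngestionPipelines = 2
```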
Hi @gcusello, Can you please briefly guide me on how to achieve this? It would be very helpful. What Splunk query is involved?
I am trying to install Splunk_TA_nix on my UFs. I am in an air-gapped environment, so I can't copy errors and paste them here. I followed these steps:

cd $SPLUNK_HOME/etc/apps/
tar xzvf $TMP/Splunk_TA_nix-4.7.0-156739.tgz
mkdir $SPLUNK_HOME/etc/apps/Splunk_TA_nix/local
cp $SPLUNK_HOME/etc/apps/Splunk_TA_nix/default/inputs.conf $SPLUNK_HOME/etc/apps/Splunk_TA_nix/local/.
vi $SPLUNK_HOME/etc/apps/Splunk_TA_nix/local/inputs.conf
chown -R splunkfwd:splunkfwd $SPLUNK_HOME/etc/apps/Splunk_TA_nix

And restarted Splunk. I was able to get it working on 2 machines, but on the next couple of machines I am seeing:

-0500 ERROR Configwatcher [32904 SplunkConfigChangeWatcherThread] - File =/opt/splunkforwarder/var/run/splunk/confsnapshot/baseline_default/apps/splunk_TA_nix/default/app.conf not available in baseline directory
-0500 ERROR Configwatcher [32904 SplunkConfigChangeWatcherThread] - Unable to log the changes for path=/opt/splunkforwarder/etc/apps/Splunk_TA_nix/default/app.conf

There are similar errors for other file names as well, like ._tags.conf and eventtypes.conf. It seems like a permission issue, but I have compared permissions on the add-on folder and all files/dirs, and they seem to be just like the other UFs where the same add-on is working. Any help would be appreciated.
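One thing worth trying (an educated guess, not a confirmed fix): those errors point at a stale confsnapshot baseline, and Splunk regenerates that directory on startup if it is missing. A sketch, assuming a default UF install path:

```
# stop the forwarder, remove the stale baseline snapshot, and restart;
# Splunk rebuilds the confsnapshot directory on startup
/opt/splunkforwarder/bin/splunk stop
rm -rf /opt/splunkforwarder/var/run/splunk/confsnapshot
/opt/splunkforwarder/bin/splunk start
```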
Hi @josephp , the point is that if you share the field extraction at App level, outside this app you cannot see the field. So repeat your search in the App where you extracted the field and see if you have results. If you need to run the search outside the app where the extraction is defined, share the field extraction at Global level. Ciao. Giuseppe
Hi @deckard1984, good for you, see you next time! Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking, Giuseppe P.S.: Karma Points are appreciated
Hi @Karthikeya, a lookup is surely a good solution! I don't know if it's possible to extract the IPs to be inserted in this lookup with a search; if it is, you can create a search that extracts these IPs and saves them in the lookup using outputlookup, then schedule that search to run e.g. once a day. Otherwise, you can manage this list using the Lookup Editor App. Remember, when you create this lookup, also create the Lookup Definition and enable Match_Type CIDR in it (in Advanced options) so you can use ranges of IPs and don't need LIKE. Ciao. Giuseppe
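A sketch of what the CIDR-enabled definition looks like in transforms.conf (the stanza, file, and field names here are assumptions):

```
[excluded_ips]
filename = excluded_ips.csv
# treat the "ip" column as CIDR ranges instead of exact strings
match_type = CIDR(ip)
```

It can then be used as `| lookup excluded_ips ip AS client_ip OUTPUT ip AS excluded | where isnull(excluded)` to drop events whose client_ip falls in any listed range.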
1. Both ends must be using the same type of connection. If the indexer is told to expect TLS then it will reject any non-TLS connection attempts. Without a connection, data cannot be indexed.
2. Yes, it is possible and is done all the time in Splunk Cloud.
3. Yes, you can. In fact, TLS and non-TLS connections *must* be on separate ports.
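A sketch of point 3 in inputs.conf on the indexer, with a plain port and a TLS port side by side (the certificate path and password are placeholders):

```
# plain receiving port
[splunktcp://9997]
disabled = 0

# TLS receiving port
[splunktcp-ssl:9998]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = changeme
```

Each forwarder then points its outputs at whichever port matches its own TLS configuration.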
Hi, I wanted to check some possibilities of indexing data using TLS/SSL certificates.
1. I configured TLS only on the indexer, not on the heavy forwarder, and data stopped indexing. Why? I saw the same in the opposite direction.
2. Is it possible to configure TLS/SSL certificates on the universal forwarder and make a connection with the indexer? Will it work?
3. Can we index data using two different ports? For example 9997 without TLS and 9998 with TLS.
Hello, we have a field called client_ip which contains different IP addresses, and the events contain various threat messages. The ask is to exclude the IP addresses whose events contain threat messages. The IPs are dynamic (different IPs daily) and the threat messages are also dynamic. Normally to exclude these we would add NOT (IP) NOT (IP)..., but here there are hundreds of IPs and the query would get very big. What can be done in this case? My thought: can I create a lookup table that a user manually updates on a daily basis, and exclude the IP addresses which are present in that lookup? Like just NOT (lookup table name). If this is a good approach, please help me with the workaround and the query to be used. Thanks in advance.
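A sketch of the lookup-based exclusion, assuming a lookup file named excluded_ips.csv whose column is named client_ip (both names are placeholders):

```
index=your_index sourcetype=your_sourcetype
    NOT [| inputlookup excluded_ips.csv | fields client_ip ]
```

The subsearch expands to (client_ip=a.b.c.d OR client_ip=...) and the NOT drops those events. Keep in mind that subsearches have result limits, so a very large lookup may be silently truncated.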
After increasing the pipeline to 4, I have observed some improvements. Thanks 
My SHC captain displayed the message:

Checking http port [8000]: not available
ERROR: http port [8000] - port is already bound. Splunk needs to use this port.
Would you like to change ports? [y/n]:

Killing the pid bound to port 8000 worked.