Hello, I'm looking for some guidance on "Auto Retry" for HTTP/Browser tests. I have a scheduled test running every 30 minutes (9:00, 9:30). When the test fails at 9:00, I see the result updated to "Failed (Auto Retry)", but the next run occurs only at 9:30 a.m. Is this expected? Does Auto Retry kick in only at the scheduled run time?
At the moment our tiny indexer has very little disk space, and _introspection consumes roughly GB of storage a day. Is there a way to minimize the space consumed by the index besides making the retention very short?
What seems to be the problem? Not all users own knowledge objects. How are you searching for them? If you have CLI access, have you looked in $SPLUNK_HOME/etc/users?
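If you do have CLI access, a rough sketch of searching the per-user directories (the /opt/splunk path and the object name "my_saved_search" are just placeholders):

```shell
# List knowledge-object config files saved under individual users' directories.
# SPLUNK_HOME is assumed to be set; /opt/splunk is the default install location.
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"
find "$SPLUNK_HOME/etc/users" \( -name "*.conf" -o -name "*.xml" \) 2>/dev/null || true

# Search every user's private config for a specific object name (placeholder):
grep -r "my_saved_search" "$SPLUNK_HOME/etc/users" 2>/dev/null || true
```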
Is it possible to connect Azure storage to a Splunk cloud instance? Our client wants to store data from their Splunk cloud instance in azure to eliminate their Splunk cloud storage overage
Nice, but be careful not to add too many pipelines. Usually it should be less than the number of cores/CPUs on your box. And this is valid only for forwarders.
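For reference, the pipeline count is controlled by `parallelIngestionPipelines` in server.conf; a minimal sketch (the value 2 is just an example, and per the note above it should stay below your core count):

```
# $SPLUNK_HOME/etc/system/local/server.conf
[general]
parallelIngestionPipelines = 2
```

A restart is required for the setting to take effect.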
I am trying to install Splunk_TA_nix on my UFs. I am in an air-gapped area, so I can't copy errors and paste them here. I followed these steps:

cd $SPLUNK_HOME/etc/apps/
tar xzvf $TMP/Splunk_TA_nix-4.7.0-156739.tgz
mkdir $SPLUNK_HOME/etc/apps/Splunk_TA_nix/local
cp $SPLUNK_HOME/etc/apps/Splunk_TA_nix/default/inputs.conf $SPLUNK_HOME/etc/apps/Splunk_TA_nix/local/.
vi $SPLUNK_HOME/etc/apps/Splunk_TA_nix/local/inputs.conf
chown -R splunkfwd:splunkfwd $SPLUNK_HOME/etc/apps/Splunk_TA_nix

and then restarted Splunk. I was able to get it working on 2 machines, but on the next couple of machines I am seeing:

-0500 ERROR Configwatcher [32904 SplunkConfigChangeWatcherThread] - File =/opt/splunkforwarder/var/run/splunk/confsnapshot/baseline_default/apps/splunk_TA_nix/default/app.conf not available in baseline directory
-0500 ERROR Configwatcher [32904 SplunkConfigChangeWatcherThread] - Unable to log the changes for path=/opt/splunkforwarder/etc/apps/Splunk_TA_nix/default/app.conf

Similar errors appear for other file names as well, like ._tags.conf and eventtypes.conf. It seems like a permission issue, but I have compared permissions on the add-on folder, and all files/dirs seem to be just like the other UFs where the same add-on is working. Any help would be appreciated.
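A rough way to compare the trees between a working and a failing UF (paths assume the default /opt/splunkforwarder install; `-printf` is GNU find). Note that the baseline path in the error says `splunk_TA_nix` (lowercase s) while the app directory is `Splunk_TA_nix`; a stale confsnapshot left over from an earlier install is a commonly suggested cause of these Configwatcher errors, so inspecting that directory is worth a look:

```shell
# Dump mode, owner and path for every file in the add-on, sorted for diffing.
# Run on a working UF and on a failing UF, then diff the two output files.
APP="${APP:-/opt/splunkforwarder/etc/apps/Splunk_TA_nix}"
find "$APP" -printf '%M %u:%g %p\n' 2>/dev/null | sort > /tmp/ta_nix_perms.txt || true

# Inspect the confsnapshot baseline the errors complain about:
ls -la /opt/splunkforwarder/var/run/splunk/confsnapshot/baseline_default/apps/ 2>/dev/null || true
```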
Hi @josephp, the point is that if you share the field extraction at App level, you cannot see the field outside that app. So repeat your search in the app where you extracted the field and see if you have results. If you need to run the search outside the app where the extraction is defined, share the field extraction at Global level. Ciao. Giuseppe
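For reference, sharing at Global level can be done from the UI (the object's Permissions page, "All apps") or, on the filesystem, via the app's metadata. A minimal sketch, assuming the extraction lives in an app you control:

```
# $SPLUNK_HOME/etc/apps/<your_app>/metadata/local.meta
# Export all props objects in this app (including field extractions) globally:
[props]
export = system
```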
Hi @deckard1984 , good for you, see next time! let me know if I can help you more, or, please, accept one answer for the other people of Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Hi @Karthikeya, a lookup is surely a good solution! I don't know if it's possible to extract the IPs to be inserted in this lookup with a search; if it is, you can create a search to extract these IPs and save them in the lookup using outputlookup, then schedule this search to run e.g. once a day. Otherwise, you can manage this list using the Lookup Editor App. Remember, when you create this lookup, to also create the Lookup Definition and enable Match_Type CIDR in it (under Advanced options), so you can use ranges of IPs and don't need LIKE. Ciao. Giuseppe
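A sketch of the pieces described above (the names threat_ips, client_ip and is_threat are placeholders): the lookup definition with CIDR matching in transforms.conf, and a search that excludes any event whose client_ip matches the list:

```
# transforms.conf (or create the equivalent definition in the UI)
[threat_ips]
filename = threat_ips.csv
match_type = CIDR(client_ip)
```

```
index=web sourcetype=proxy
| lookup threat_ips client_ip OUTPUTNEW is_threat
| where isnull(is_threat)
```

The scheduled refresh Giuseppe mentions would be a search over the threat events ending in `| outputlookup threat_ips.csv`.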
1. Both ends must be using the same type of connection. If the indexer is told to expect TLS then it will reject any non-TLS connection attempts. Without a connection, data cannot be indexed.
2. Yes, it is possible and is done all the time in Splunk Cloud.
3. Yes, you can. In fact, TLS and non-TLS connections *must* be on separate ports.
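A minimal sketch of point 3 on the indexer side (the certificate path and password are placeholders):

```
# inputs.conf on the indexer: plain splunktcp on 9997, TLS on 9998
[splunktcp://9997]
disabled = 0

[splunktcp-ssl:9998]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/server.pem
sslPassword = <password>
```

Each forwarder then points at whichever port matches its own TLS configuration.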
Hi, I wanted to check some possibilities for indexing data using TLS/SSL certificates.
1. I configured TLS only on the indexer, not on the heavy forwarder, and data stopped indexing. Why? I saw the same result in the opposite direction.
2. Is it possible to configure TLS/SSL certificates on the universal forwarder and make a connection with the indexer? Will it work?
3. Can we index data using two different ports? For example, 9997 without TLS and 9998 with TLS.
Hello, we have a field called client_ip which contains different IP addresses, and the events contain various threat messages. The ask is to exclude the IP addresses that appear with threat messages. The IPs are dynamic (different IPs daily) and the threat messages are also dynamic. Normally, to exclude these we would write NOT (IP) NOT (IP)..., but here there are hundreds of IPs and the query would become huge. What can be done in this case? My thought: can I create a lookup table, have a user update it manually on a daily basis, and exclude the IP addresses present in that lookup, something like NOT (lookup table name)? If that is a good approach, please help me with the workaround and the query to use. Thanks in advance.
My SHC captain displayed the message:

Checking http port [8000]: not available
ERROR: http port [8000] - port is already bound. Splunk needs to use this port.
Would you like to change ports? [y/n]:

Killing the PID bound to port 8000 fixed it.
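For reference, a sketch of finding the offending PID before killing it (assumes a Linux host with iproute2's `ss`; `lsof -ti :8000` is an alternative):

```shell
# Show what is listening on port 8000, including the owning process.
ss -tlnp 2>/dev/null | grep ':8000 ' || true

# Pull the PID out of the users:(("name",pid=NNNN,fd=N)) field:
PID=$(ss -tlnp 2>/dev/null | grep ':8000 ' | sed -n 's/.*pid=\([0-9]*\).*/\1/p' | head -n1)
[ -n "$PID" ] && echo "port 8000 is held by PID $PID" || true   # inspect before running: kill "$PID"
```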
Are you also on a cloud trial? These could just be momentary server-busy responses. Be sure to check the Splunk internal logs (index=_internal source=*splunkd.log HttpInputDataHandler) to see if the payload hit a 503 or something and was then retried. It is "expected" that HEC clients handle backpressure and timeouts, so from time to time you may see a failed send; as long as the retry is successful, it's "normal", unless you scale up your indexing layer to handle more traffic uninterrupted. The error says "timeout reached", so it could be that Splunk was too busy to answer (especially on a standalone trial or a small test box). Also, please confirm the full HEC URL you are using. I believe you need to put the full URL, e.g. https://http-inputs.foo.splunkcloud.com/services/collector/event (or the trial equivalent). It looks like the OP configured just the cloud URL on port 8088, which is not a correct URL for HEC.
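A sketch of the internal-log check described above (the _internal index and the HttpInputDataHandler component are standard splunkd fields; the ERROR filter is just an example to narrow the results):

```
index=_internal source=*splunkd.log* component=HttpInputDataHandler log_level=ERROR
```

Any 503s or timeout messages from the HEC endpoint should show up here around the time of the failed sends.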
Same problem here, and the index is correct:

# docker logs -f sc4s
SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=sddc_internal for sourcetype=sc4s:fallback...
SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=sddc_internal for sourcetype=sc4s:events...
syslog-ng checking config
sc4s version=3.34.1
Configuring the health check port to: 8080
[2025-01-21 13:54:30 +0000] [129] [INFO] Starting gunicorn 23.0.0
[2025-01-21 13:54:30 +0000] [129] [INFO] Listening at: http://0.0.0.0:8080 (129)
[2025-01-21 13:54:30 +0000] [129] [INFO] Using worker: sync
[2025-01-21 13:54:30 +0000] [138] [INFO] Booting worker with pid: 138
starting syslog-ng

There are no errors on startup, but these sc4s:events keep coming. I have no idea what they are, and they are annoying. The index is correct.