Hi @josephp , the point is that if you share the field extraction at App level, you cannot see the field outside that app. So repeat your search in the App where you extracted the field and see if you have results. If you need to run the search outside the app where the extraction is defined, share the field extraction at Global level. Ciao. Giuseppe
Hi @deckard1984 , good for you, see you next time! Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Hi @Karthikeya , a lookup is surely a good solution! I don't know if it's possible to extract with a search the IPs to be inserted in this lookup; if it is, you can create a search that extracts these IPs and saves them in the lookup using outputlookup, and then schedule this search to run e.g. once a day. Otherwise, you can manage this list using the Lookup Editor App. Remember, when you create this lookup, to also create the Lookup Definition and enable Match_Type CIDR in it (under Advanced options), so you can use ranges of IPs and you don't need LIKE. Ciao. Giuseppe
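For example, a minimal sketch, assuming a hypothetical lookup file threat_ips.csv with a single client_ip column, and that the bad IPs can be identified by a search (index, sourcetype and filter below are placeholders, not from the original post):

Scheduled search that rebuilds the lookup once a day:
index=proxy sourcetype=threat_logs "threat detected"
| stats count BY client_ip
| fields client_ip
| outputlookup threat_ips.csv

Main search that excludes those IPs via a subsearch:
index=proxy NOT [ | inputlookup threat_ips.csv | fields client_ip ]

If the Lookup Definition (e.g. threat_ips) has Match_Type CIDR(client_ip), you can also keep ranges like 10.0.0.0/8 in the file and exclude them with the lookup command instead of the subsearch:
index=proxy
| lookup threat_ips client_ip OUTPUT client_ip AS matched_range
| where isnull(matched_range)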
1. Both ends must be using the same type of connection. If the indexer is told to expect TLS then it will reject any non-TLS connection attempts. Without a connection, data cannot be indexed.
2. Yes, it is possible and is done all the time in Splunk Cloud.
3. Yes, you can. In fact, TLS and non-TLS connections *must* be on separate ports. See the configuration sketch below.
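A minimal sketch of how the two ports could be set up (certificate paths, password and hostname are placeholders, not from the original post):

On the indexer, inputs.conf:
# plain receiving port
[splunktcp://9997]
disabled = 0

# TLS receiving port
[splunktcp-ssl:9998]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/indexer.pem
sslPassword = <cert_password>
requireClientCert = false

On the forwarder (heavy or universal), outputs.conf pointing at the TLS port:
[tcpout:tls_indexers]
server = indexer.example.com:9998
clientCert = $SPLUNK_HOME/etc/auth/mycerts/forwarder.pem
sslPassword = <cert_password>
sslVerifyServerCert = false

Forwarders that should keep using the plain port simply point at indexer.example.com:9997 without the clientCert settings.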
Hi, So I wanted to check some possibilities of indexing data using TLS/SSL certificates.
1. I configured TLS only on the indexer, not on the heavy forwarder, and data stopped indexing, but why? I did the same in the opposite direction.
2. Is it possible to configure TLS/SSL certificates on the universal forwarder and make a connection with the indexer? Will it work?
3. Can we index data using two different ports? For example 9997 without TLS and 9998 with TLS.
Hello, We have a field called client_ip which contains different IP addresses, and the events contain various threat messages. The ask is to exclude the IP addresses whose events contain threat messages. The IPs are dynamic (different IPs daily) and the threat messages are also dynamic. Normally, to exclude these we would need to write NOT (IP) NOT (IP)....., but here there are 100s of IPs and it would become a huge query. What can be done in this case? My thoughts: can I create a lookup table, have a user manually update it on a daily basis, and exclude the IP addresses which are present in this lookup? Something like just NOT (lookup table name). If this is a good approach, please help me with the workaround and the query to follow. Thanks in advance.
My SHC captain displayed the message:
Checking http port [8000]: not available
ERROR: http port [8000] - port is already bound. Splunk needs to use this port.
Would you like to change ports? [y/n]:
Killing the pid bound to port 8000 worked.
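In case it helps someone else, a quick way to find that pid on Linux (generic commands, not from the original post):

# show which process is listening on port 8000
ss -tlnp | grep 8000      # or: netstat -tlnp | grep 8000, or: lsof -i :8000
# stop it, after checking it is not something you still need
kill <pid>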
Are you also on a cloud trial? These could just be momentary server-busy errors etc., so be sure to check the Splunk internal logs (index=_internal source=*splunkd.log httpinputdatahandler) to see if the payload hit a 503 or something and then retried. It is "expected" that HEC clients have to handle backpressure or timeouts, so from time to time you may see a failed send, but as long as the retry is successful, it's "normal" unless you scale up your indexing layer to handle more traffic uninterrupted. The error says "timeout reached", so it could be that Splunk was too busy to answer (especially on a standalone trial or small test boxes). Also please confirm the full HEC URL you are using. I believe you need to use the full URL https://http-inputs.foo.splunkcloud.com/services/collector/event (or the trial equivalent). The OP looks like they configured just the cloud URL on 8088, which is not a correct URL for HEC.
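As a sanity check, a hedged example of testing the endpoint directly with curl (host and token are placeholders):

curl -k "https://http-inputs.foo.splunkcloud.com/services/collector/event" \
  -H "Authorization: Splunk <your_hec_token>" \
  -d '{"event": "hec connectivity test", "sourcetype": "manual_test"}'

A {"text":"Success","code":0} response means the URL and token are fine; repeated timeouts or 503s point at the indexing layer being busy, as described above.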
Same problem. The index is correct.

# docker logs -f sc4s
SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=sddc_internal for sourcetype=sc4s:fallback...
SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=sddc_internal for sourcetype=sc4s:events...
syslog-ng checking config
sc4s version=3.34.1
Configuring the health check port to: 8080
[2025-01-21 13:54:30 +0000] [129] [INFO] Starting gunicorn 23.0.0
[2025-01-21 13:54:30 +0000] [129] [INFO] Listening at: http://0.0.0.0:8080 (129)
[2025-01-21 13:54:30 +0000] [129] [INFO] Using worker: sync
[2025-01-21 13:54:30 +0000] [138] [INFO] Booting worker with pid: 138
starting syslog-ng

There are no errors on startup, but these sc4s:events keep coming. I have no idea what they are and they are annoying. The index is correct.
Hi @josephp , did you define the field extraction for the critC field on the Search Head? If yes, are the grants for this field App or Global? If App, are you in the same App? Ciao. Giuseppe
Hi, We recently migrated from a standalone Search Head to a clustered one. However, we are having some issues running some search commands. For example, this is a query that is not working on the new SH cluster:
sourcetype=dataA index=deptA | where critC > 25
On the old search head, this query runs fine and we see the results as expected. But on the SH cluster, it doesn't yield anything. I have run the "sourcetype=dataA index=deptA" search by itself, and both environments see the same events. I am not sure why the search with (| where critC > 25) works on the standalone SH but not on the cluster. Any help would be appreciated. Thank you
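A quick sketch of a check that can help narrow this down (only the example search and field from this post are reused; the check itself is an assumption): see whether critC is extracted at all on the cluster, since | where silently drops events where the field is missing.

sourcetype=dataA index=deptA
| eval has_critC=if(isnotnull(critC), "yes", "no")
| stats count BY has_critC

If everything lands in "no" on the cluster while the standalone SH shows "yes", the field extraction itself (and its sharing level) is the likely culprit, as discussed in the other replies.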
It is clean at startup:

SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=sddc_internal for sourcetype=sc4s:fallback...
SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=sddc_internal for sourcetype=sc4s:events...
syslog-ng checking config
sc4s version=3.34.1
Configuring the health check port to: 8080
[2025-01-21 13:36:54 +0000] [135] [INFO] Starting gunicorn 23.0.0
[2025-01-21 13:36:54 +0000] [135] [INFO] Listening at: http://0.0.0.0:8080 (135)
[2025-01-21 13:36:54 +0000] [135] [INFO] Using worker: sync
[2025-01-21 13:36:54 +0000] [138] [INFO] Booting worker with pid: 138
starting syslog-ng
I am getting this all of the time. The index exists, I can test it with curl, and when sc4s starts it shows it is able to connect, so it is annoying. What else can I check? It is not well documented.
There are several possible approaches but each of them has its own drawbacks. The most obvious three are:
1) Use eventstats to add the count to events, sort, and limit by the count value (might be memory-intensive, as I said earlier).
2) Use a subsearch to find the count, then search your whole body of data for those events (if you can't use "fast" commands like tstats for your subsearch you might hit all the subsearch-related problems; also you're effectively digging twice through your whole data set).
3) Add more values() aggregations to your stats, listing specific fields (might cause problems with "linking" values from different fields, especially if potentially empty fields are involved).
Rough sketches of the first two approaches are below.
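These are only hedged sketches; the index, field name and threshold are placeholders since I don't know your exact data:

1) eventstats:
index=your_index sourcetype=your_sourcetype
| eventstats count AS group_count BY some_field
| sort - group_count
| where group_count > 100

2) subsearch:
index=your_index sourcetype=your_sourcetype
    [ search index=your_index sourcetype=your_sourcetype
      | stats count BY some_field
      | where count > 100
      | fields some_field ]

The subsearch returns some_field=value pairs that are OR-ed into the outer search, which is why this variant digs through the data twice.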