Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hi @SN1, probably someone closed the firewall port between the Forwarder and the Indexer that day. The port should be 9997. If this is the port, you can try using telnet from the Forwarder:

telnet <host_ip> <port>

Ciao. Giuseppe
The error you're seeing suggests a network connectivity issue between your forwarder and the receiving Splunk instance (likely an Indexer or Heavy Forwarder). Here are some steps to troubleshoot:

Verify network connectivity:
- Can you connect to the destination host from the forwarder? Try using netcat, for example `nc -vz -w1 <destinationIP> <destinationPort>`.
- Is the specified port open and accessible on the destination server (is Splunk listening)?
- Are any other hosts able to connect and send data?

Check firewall rules:
- Ensure no firewall is blocking the connection on either end.

Verify Splunk configurations:
- On the forwarder, check outputs.conf for correct destination settings.
- On the receiving end, verify inputs.conf for proper port configurations.

Restart Splunk services:
- Sometimes a restart can resolve connectivity issues. Try restarting the forwarder first; if there is no progress, try restarting Splunk on the receiver to confirm it is working correctly.

Check for any recent network changes:
- Were there any infrastructure modifications around January 29th?

Please let me know how you get on, and consider upvoting/giving karma to this answer if it has helped. Regards, Will
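For reference, a minimal sketch of what those two configuration files usually look like (the group name, IP, and port below are placeholders, not values from this environment):

outputs.conf on the forwarder:

    [tcpout]
    defaultGroup = primary_indexers

    [tcpout:primary_indexers]
    server = <destinationIP>:9997

inputs.conf on the receiver:

    [splunktcp://9997]
    disabled = 0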
When you ran the telnet check, was this from the same host you are trying to access Splunk with via the browser, or from the Splunk server itself? If it was checked from the Splunk server, then I would suggest checking the firewall rules on that host to see whether `iptables` or `firewalld` is configured to allow inbound traffic on port 8000. You can check your firewall rules with `sudo iptables -L` or `sudo firewall-cmd --list-all`, depending on how this is configured on your host. Please also check whether you are using https in your URL if Splunk has been configured with SSL enabled.
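As a rough sketch of those two checks (assuming a default /opt/splunk installation path and that firewalld is the active firewall; adjust to your environment):

    grep enableSplunkWebSSL /opt/splunk/etc/system/local/web.conf
    # if this returns enableSplunkWebSSL = true, use https:// in the browser URL

    sudo firewall-cmd --permanent --add-port=8000/tcp
    sudo firewall-cmd --reload
    # only needed if firewalld is running and port 8000 is not already allowed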
Hello, we have been unable to receive logs from forwarders since 29 January. I checked splunkd.log and found this error:

ERROR TcpOutputFd [110883 TcpOutEloop] - Connection to host=<ip>:port failed

What should I do?
Refer to the tables in my original post. I'm doing a count of events per span using tstats: just how many events there were from 00:00-04:00, 04:00-08:00, etc. But Splunk chooses that starting point of 00:00, and sometimes it's a very poor choice, so I would like to be able to adjust it so that instead it would be 01:00-05:00, 05:00-09:00, etc. The methods I've found in the forum do not seem to work with tstats. As shown in my second table, the _time labels are adjusted but the values are not recalculated.
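One pattern sometimes suggested for this, sketched here with a placeholder index name and assuming a Splunk version whose bin command supports the aligntime option, is to run tstats at a finer span and then re-bucket with an offset:

    | tstats count where index=your_index by _time span=1h
    | bin _time span=4h aligntime=@d+1h
    | stats sum(count) as count by _time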
I want to be able to support an adaptive response action in Splunk Enterprise Security, but when I put a value there I'm getting the error even though it's not empty. What did I do wrong?
Does it work with tstats?
Ah, I do see it now, thanks. I was assuming all data would be included in one of the "Workload" (e.g. "Exchange") or "app" data values, but the sourcetype "o365:reporting:messagetrace" does not have "Workload" or "app" values, and I was excluding the message trace events with search terms like Workload="*". Appreciate the help!
This looks like JSON, so you should ingest it as such. Alternatively, you could use spath to extract the fields, or look at the eval JSON functions.
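For example, a minimal sketch assuming the JSON string lives in a field called message (the field name is just an assumption):

    ... | spath input=message path=role output=role

or, on newer Splunk versions, with the eval JSON functions:

    ... | eval role=json_extract(message, "role")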
I have a string like this: {"code":"1234","bday":"15-02-06T07:02:01.731+00:00" "name":"Alex", "role":"student","age":"16"}, and I want to extract role from this string. Can anyone suggest a way to do this in Splunk?
Which buckets get frozen is decided by the bucket's age. A bucket's age is the time of the most recent event in the bucket. But there can be buckets which even have events from the future (they go to a so-called quarantine bucket). So it's a bit more complicated and depends on the data characteristics of the indexes involved. dbinspect is the command to check for info on your buckets.
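A minimal sketch of such a check (the index name is a placeholder):

    | dbinspect index=your_index
    | table bucketId, state, startEpoch, endEpoch, sizeOnDiskMB
    | sort endEpoch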
Here is the full env_file

SC4S_SOURCE_SYSLOG_PORTS=514,32514,32601,41514,41601,42514,42601
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://<>:3001/services/collector/event
SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=<>
SC4S_DEST_SPLUNK_HEC_DEFAULT_INDEX=<>
# SC4S_DEST_SPLUNK_HEC_DEFAULT_MODE=GLOBAL
# SC4S_DEST_SPLUNK_HEC_DEFAULT_FORMAT=json
# SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no
SC4S_DEST_SPLUNK_HEC_TLS_CA_FILE=/etc/ssl/certs/ca-certificates.crt
SC4S_DEST_SPLUNK_HEC_TLS_CLIENT_CERT=/etc/syslog-ng/tls/splunk.crt
SC4S_DEST_SPLUNK_HEC_TLS_CLIENT_KEY=/etc/syslog-ng/tls/splunk.key
SC4S_SOURCE_TLS_ENABLE=yes
SC4S_SOURCE_TLS_KEY=/etc/syslog-ng/tls/server.key
SC4S_SOURCE_TLS_CERT=/etc/syslog-ng/tls/server.pem
SC4S_DEST_SPLUNK_HEC_DEFAULT_DISKBUFF_ENABLE=yes
SC4S_DEST_SPLUNK_HEC_DEFAULT_DISKBUFF_DIR=/var/lib/sc4s/disk-buffer
SC4S_HEALTH_CHECK_ENABLE=true
SC4S_LISTEN_INTERNAL_HEALTH_PORT=9129
SC4S_ETC=/etc/syslog-ng
SC4S_LISTEN_CHECKPOINT_SPLUNK_NOISE_CONTROL_SECONDS=30
SC4S_LISTEN_STATUS_PORT=9129
SC4S_LISTEN_DEFAULT_TCP_PORT=41514
SC4S_LISTEN_DEFAULT_UDP_PORT=42514
SC4S_LISTEN_DEFAULT_TLS_PORT=7514
SC4S_LISTEN_DEFAULT_RFC5426_PORT=41601
SC4S_LISTEN_DEFAULT_RFC6587_PORT=42601
SC4S_LISTEN_DEFAULT_RFC5425_PORT=7425
SC4S_HEALTH_CHECK_ENABLE=true
SC4S_DEST_SPLUNK_HEC_BATCH_SIZE=1
SC4S_DEST_SPLUNK_HEC_RETRY_LIMIT=1
SC4S_DEST_SPLUNK_HEC_RETRY_INTERVAL=5
SOURCE_ALL_SET=DEFAULT_TCP,DEFAULT_UDP
# SC4S_SEND_METRICS_TERMINAL=no
SC4S_DEBUG=false
SC4S_LOG_LEVEL=false
SC4S_DEFAULT_TIMEZONE=Europe/Berlin
PYTHONPATH=/var/lib/python-venv/lib/python3.12/site-packages:/etc/syslog-ng/python:/etc/syslog-ng/pylib
# Tunning settings
SC4S_DEST_SPLUNK_HEC_TIME_REOPEN=30
SC4S_DEST_SPLUNK_HEC_BATCH_LINES=100
SC4S_DEST_SPLUNK_HEC_BATCH_TIMEOUT=5000
SC4S_DEST_SPLUNK_HEC_KEEPALIVE=yes
SC4S_DEST_SPLUNK_HEC_WORKERS=8
SC4S_DEST_SPLUNK_HEC_DISKBUFF_ENABLE=yes
SC4S_DEST_SPLUNK_HEC_DISKBUFF_RELIABLE=yes
SC4S_DEST_SPLUNK_HEC_DISKBUFF_MEMBUFLENGTH=10000
SC4S_DEST_SPLUNK_HEC_DISKBUFF_DISKBUFSIZE=200000000
Thank you! I just tested it now, works like a charm. I was worried that the extra search you used would consume processing power, but all is OK, as the job inspector said: "This search has completed and has returned 1 results by scanning 0 events in 0.014 seconds". Apologies for the late reply, I've been swamped.
@PickleRick What I understand is this: data collection for the A index is stopped and only collection for the _internal index continues. While both indexes were collecting data, A/colddb grew to 30MB and _internaldb/db to 10MB; now only _internal data is collected. Since A/colddb is the oldest in the cold volume, will it be reduced from 30MB to 20MB? And by this logic, wouldn't all data stored in A/colddb eventually be frozen?
Depends on what you mean by "capacity". Splunk doesn't have "capacity" as such. It doesn't have its own limit above which it won't index events. It will simply freeze old data if it exceeds some limits. (OK, technically it will stop ingesting if the underlying storage becomes full and OS-level writes cannot be done anymore, but that's a different story.) Buckets from A/colddb might get frozen at some point because, while the A index does not exceed its own thresholds, the overall size of buckets on the volume is too big and A's buckets are the oldest.
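As an illustrative sketch of the two limits involved (paths, names, and sizes here are invented, not taken from this environment), indexes.conf might contain:

    [volume:cold]
    path = /opt/splunk/var/lib/splunk/cold_volume
    maxVolumeDataSizeMB = 500000

    [A]
    homePath = $SPLUNK_DB/A/db
    coldPath = volume:cold/A/colddb
    thawedPath = $SPLUNK_DB/A/thaweddb
    maxTotalDataSizeMB = 100000
    frozenTimePeriodInSecs = 7776000

When the cold volume as a whole exceeds maxVolumeDataSizeMB, Splunk freezes the oldest buckets on that volume even if index A itself is still under maxTotalDataSizeMB and frozenTimePeriodInSecs.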
Hi @Skv, if the condition of no connectivity is a temporary one, having a Heavy Forwarder on premise will give you a sufficient cache to store logs until the connection is resumed. Ciao. Giuseppe
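As a sketch of the kind of buffering this relies on (the group name, host, and queue size are illustrative assumptions, not recommendations), the Heavy Forwarder's outputs.conf can enlarge the output queue and enable indexer acknowledgement so events are retained while the link is down:

    [tcpout]
    defaultGroup = cloud_indexers
    maxQueueSize = 500MB

    [tcpout:cloud_indexers]
    server = <cloud_indexer>:9997
    useACK = true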
We are unable to access our Splunk web server after an OS upgrade to RHEL 8.10. Our Splunk service is up and running fine, but the UI is not available for us. Can someone please help us fix this issue? We have checked that port 8000 is listening fine.

Trying 10.xxx.xxx.xx...
Connected to 10.xxx.xxx.xx.
Escape character is '^]'.
^Z
Connection closed by foreign host.

tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN

We don't have anything in the internal logs to dig into. We are getting the error message on the page as below:

The connection has timed out
The server at 10.xxx.xxx.xx is taking too long to respond.
The site could be temporarily unavailable or too busy. Try again in a few moments.
If you are unable to load any pages, check your computer's network connection.
If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the web.
For Tenable SC with a custom server SSL certificate: after identifying the source of the issue, if it is SSL-related, modify the following file to disable SSL verification:

/opt/splunk/etc/apps/TA-tenable/bin/tenable_consts.py

verify_ssl_for_sc_api_key = False

It worked for me.
Splunk cannot and does not detect if data has already been indexed. As @gcusello said, it will attempt to avoid re-ingesting data, but that's not perfect. It's up to the app doing the ingestion to prevent reading the same data twice. In DB Connect, for example, a "rising column" is defined to identify unique records. Your app could do something similar, using case ID and Closed Time, perhaps.
@PickleRick  So shouldn’t the A/colddb capacity become 0 at some point?