Getting Data In

Splunk Connect for Syslog

Loves-to-Learn Lots

I was following the Splunk Connect for Syslog documentation so that I could ingest syslog into my Splunk Cloud setup.
I cannot turn off the SSL option in my HEC global settings, so I did not uncomment the line below.
I created the file /opt/sc4s/env_file with these contents:

SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your.splunk.instance:8088
SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
#Uncomment the following line if using untrusted SSL certificates
#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no

I followed this for Splunk Cloud and started my sc4s.service (the systemd service created by following the doc). When the sc4s service starts I get the error below:

curl: (60) SSL certificate problem: self-signed certificate in certificate chain
More details here: curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above.
SC4S_ENV_CHECK_HEC: Invalid Splunk HEC URL, invalid token, or other HEC connectivity issue index=main. sourcetype=sc4s:fallback Startup will continue to prevent data loss if this is a transient failure.

If I uncomment the line, I don't see the exception anymore, but I don't get any events when I

search index=* sourcetype=sc4s:events "starting up" as suggested in the documentation. There is also no sample data when I run
echo "Hello SC4S" > /dev/udp/<SC4S_ip>/514

Please let me know what I am missing in the setup so that I can proceed.



From your logs it shows:

Splunk HEC connection test successful to index=main for sourcetype=sc4s:events (so run that search and check the main index; if you can see this event, your connection is working).

In terms of /opt/sc4s/local/context/splunk_index.csv, follow all the steps from the runtime configuration below; there are a number of steps and you need to complete them all.

As you can send curl test events to Cloud you don't need a whitelist (but it's best practice to have one in place for security reasons).


Loves-to-Learn Lots

When I run
echo '<14>1 2024-04-19T12:34:56.789Z myhostname myapp 12345 - [exampleSDID@32473 iut="3" eventSource="application" eventID="1011"] Something happened through echoing.' > /dev/udp/
I am able to see it in Splunk. But when my application sends syslog on port 514, it does not appear in Splunk, although the same message is visible when I run tcpdump on port 514.
What would I be missing here?
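One way to narrow this down is to hand-build an RFC 5424 message in the shell and compare it byte-for-byte with what the application emits. This is a sketch only: the app name, the PRI value, and the target IP are illustrative (127.0.0.1 is used so the command is harmless to run anywhere; substitute your SC4S host), and the /dev/udp redirection requires bash.

```shell
# Sketch: send a minimal RFC 5424-framed test message over UDP, the same way
# the thread's echo test does. All values here are illustrative.
SC4S_IP=127.0.0.1                        # replace with your SC4S host
TS=$(date -u +%Y-%m-%dT%H:%M:%S.000Z)    # RFC 5424 timestamp, UTC
# <14> = facility 1 (user), severity 6 (info); "1" is the RFC 5424 version
MSG="<14>1 ${TS} $(hostname) myapp $$ - - test message via /dev/udp"
echo "$MSG" > "/dev/udp/${SC4S_IP}/514"  # bash-only UDP redirection
echo "sent: $MSG"
```

If this message shows up in Splunk but the app's does not, diff the two payloads in the tcpdump capture (trailing NULs, missing newline, or a non-compliant header are common culprits).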

To reply to your question: I believe I have followed the steps in the runtime configuration (



So the echo works: you can see data in Splunk, but your syslog app's data is not visible in Splunk, and tcpdump shows that the app is sending data to SC4S.

Things to check:

1. Check the "No data in Splunk" section -

Restart sc4s and look at the logs

/usr/bin/<podman|docker> logs SC4S

2. Is your syslog APP a common syslog source supported by SC4S?

3. Is your syslog APP in the known SC4S vendors list?

4. Check if it needs some special environment config in /opt/sc4s/env_file (for example, look at the McAfee known source: it has a number of configuration options covering indexes, ports, TAs, and env_file settings); see this example -

5. Check /opt/sc4s/env_file and ensure the settings for your syslog APP are set there.

6. Check /opt/sc4s/local/context/splunk_metadata.csv.
Ensure the key name (your app's source) is present, and ensure it's mapped to the correct index in Cloud.

7. Have you deployed the correct TAs for your syslog APP onto Splunk Cloud?
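For point 6, splunk_metadata.csv is a three-column CSV of key, metadata item, value. A minimal sketch of an index override, using a demo directory as a stand-in for the real /opt/sc4s/local/context path and an illustrative vendor key and index name:

```shell
# Sketch only: "./context-demo" stands in for /opt/sc4s/local/context, and
# "cisco_asa" / "netfw" are illustrative key and index names.
CTX=./context-demo
mkdir -p "$CTX"
# Each row is key,metadata,value; this routes events matching the
# cisco_asa key to the "netfw" index instead of the default:
echo 'cisco_asa,index,netfw' >> "$CTX/splunk_metadata.csv"
cat "$CTX/splunk_metadata.csv"
```

The key for your source has to be one SC4S actually assigns (check the vendor's page in the SC4S docs), and the index named here must already exist in Splunk Cloud.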


Loves-to-Learn Lots

So my application sends data in RFC 5424 format. It's a test C# application running on my local machine which sends data in RFC 5424 format through a UDP client to an EC2 instance that runs SC4S inside Docker.

The logs don't help because I don't see anything after
starting goss
starting syslog-ng

I am not aware of anything I have to configure in Splunk Cloud.



It sounds like you have created a custom syslog app with a custom application type of data, and it's not one of the common NETWORK syslog sources. This means it's not going to be parsed, formatted, and handled by SC4S, so your options are:


Option 1. See if the SC4S community can create a parser for you. As this sounds like custom application data rather than network data, you might have issues: SC4S is not designed to handle OS or application data. You can log an issue here and maybe they can help; you will need to send a PCAP file. (I doubt this is feasible, so then look at option 2.)


Option 2. Install a normal syslog server (syslog-ng or rsyslog) and configure it instead of SC4S, which is primarily designed to handle common network syslog data sources. Send your custom syslog app data to that server and configure it to log the data into text files in a folder. Install a Splunk UF and configure it to monitor those log files (inputs.conf) and send them to Splunk Cloud (outputs.conf). You then need to create a TA to parse the custom syslog raw data: apply metadata, sourcetype, and field extractions, ensure the timestamp etc. are all correct, then install the custom TA in Splunk Cloud.
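The UF side of option 2 might look roughly like the sketch below. Everything here is an assumption to adapt: the monitored folder, the sourcetype and index names, and the receiver placeholder (in practice Splunk Cloud gives you a UF credentials app that configures the forwarding endpoint for you, so you rarely hand-write outputs.conf).

```shell
# Sketch only: writes demo copies of the two UF config files into ./uf-demo.
# Paths, sourcetype, index, and the receiver placeholder are illustrative.
mkdir -p ./uf-demo

cat > ./uf-demo/inputs.conf <<'EOF'
[monitor:///var/log/customsyslog/*.log]
sourcetype = custom:syslog
index = main
disabled = false
EOF

cat > ./uf-demo/outputs.conf <<'EOF'
[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
# Placeholder - normally set by the Splunk Cloud UF credentials app:
server = <splunk-cloud-receiver>:9997
EOF

cat ./uf-demo/inputs.conf ./uf-demo/outputs.conf
```

On a real UF these files live under $SPLUNK_HOME/etc/apps/<your_app>/local/, and the TA that parses the raw syslog (timestamp, extractions) is a separate piece installed in Splunk Cloud.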


Loves-to-Learn Lots

Could I get by with creating a simple log path by port (
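If the app's messages are compliant RFC 5424, a dedicated port may well be enough. A sketch of what I believe the env_file entry looks like for a simple log path by port; the MYAPP name and port 5514 are made up, and the exact variable shape should be checked against the SC4S docs page linked above:

```shell
# Sketch only: appends to a demo file rather than the real /opt/sc4s/env_file.
# SC4S_LISTEN_SIMPLE_MYAPP_UDP_PORT is my assumption of the variable form -
# verify it against the SC4S "simple log path" documentation before using.
ENV_FILE=./env_file-demo
echo 'SC4S_LISTEN_SIMPLE_MYAPP_UDP_PORT=5514' >> "$ENV_FILE"
cat "$ENV_FILE"
# Restart the sc4s service after editing the real env_file so the new
# listener port is opened.
```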




A few things to check:

1. Have you enabled whitelisting for HEC, as this is Cloud, or are firewalls blocking?

2. Check logs
journalctl -b -u sc4s

3. Check that all your indexes have been created in Splunk Cloud.

4. Check that the indexes are mapped.

5. Try basic testing using curl: create a token and use the example below, changing it to your stack name (it may need some tuning):

curl "" \
-H "Authorization: Splunk CF179AE4-3C99-45F5-A7CC-3284AA91CF67" \
-d '{"event": "Hello, world!", "sourcetype": "manual"}'


Loves-to-Learn Lots

When I have uncommented the line SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no, I get the logs below when I run journalctl -b -u sc4s:

Apr 18 13:53:40 ip-MachineIP systemd[1]: Starting SC4S Container...
Apr 18 13:53:41 ip-MachineIP docker[12242]: latest: Pulling from splunk/splunk-connect-for-syslog/container3
Apr 18 13:53:41 ip-MachineIP docker[12242]: Digest: sha256:f8ff916d9cb6836cb0b03b578f51a3777c7a4c84e580fdad9b768cdc7ef2910e
Apr 18 13:53:41 ip-MachineIP docker[12242]: Status: Image is up to date for
Apr 18 13:53:41 ip-MachineIP docker[12242]:
Apr 18 13:53:41 ip-MachineIP systemd[1]: Started SC4S Container.
Apr 18 13:53:42 ip-MachineIP docker[12254]: SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:fallback...
Apr 18 13:53:43 ip-MachineIP docker[12254]: SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:events...
Apr 18 13:53:47 ip-MachineIP docker[12254]: syslog-ng checking config
Apr 18 13:53:47 ip-MachineIP docker[12254]: sc4s version=3.22.3
Apr 18 13:53:48 ip-MachineIP docker[12254]: starting goss
Apr 18 13:53:50 ip-MachineIP docker[12254]: starting syslog-ng

I have created all the indexes mentioned in the document (

I cannot find the file /opt/sc4s/local/context/splunk_index.csv.
I am able to curl and send messages to Splunk using the -k flag in my curl command.

Do I need to whitelist if I am able to curl?
