I am using S3SPL from datapunctum and am trying to get some data to be searchable.
No errors are logged in the internal index.
I have set up my ingest actions with .json or .ndjson output and have configured my prefix correctly to reflect the timestamp.
I am using MinIO.
root@esprimo-piere:/opt/splunk/etc/apps/splunk_ingest_actions/local# cat outputs.conf
[rfs:splunk]
batchSizeThresholdKB = 131072
batchTimeout = 30
compression = none
dropEventsOnUploadError = false
format = json
format.json.index_time_fields = true
format.ndjson.index_time_fields = true
partitionBy = day
path = s3://splunk/
remote.s3.access_key = XXXX
remote.s3.encryption = none
remote.s3.endpoint = https://localhost:9000
remote.s3.secret_key = XXX
remote.s3.signature_version = v4
remote.s3.supports_versioning = false
remote.s3.url_version = v1
root@esprimo-piere:/opt/splunk/etc/apps/SA-DP-s3spl/local# cat s3spl_bucket.conf
[s3spl_bucket://splunk]
aws_access_key = XXXXX
aws_secret_key = ********
bucket_ia = True
bucket_name = splunk
endpoint_url = https://localhost:9000
max_events_per_file = -1
max_files_read = -1
max_total_events = 1000
prefix = /year=${_time:%Y}/month=${_time:%m}/day=${_time:%d}/
timezone = Europe/Berlin
verify_ssl = False
Hi @pwoehl, welcome to the community.
1) The S3SPL Add-on for Splunk is a new app (at least to me), so I am not sure how much I can help with it directly.
2) Whatever happens, Splunk should write something to the internal logs. Since you say there are no internal log entries at all, it looks like something is wrong on the configuration side; see the two checks sketched after this list.
3) If I were in your place, the best thing to do would be to get support from datapunctum directly:
https://docs.datapunctum.com/s3spl/s3spl-faq
4) Let's wait for other Splunkers to provide some more ideas, thanks.
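For the log check in point 2, a rough sketch from the CLI (the keywords are guesses on my part; adjust the credentials and time range to your environment):

# sketch: scan splunkd.log for rfs/s3 upload errors (keywords are a guess)
grep -iE '(rfs|s3)' /opt/splunk/var/log/splunk/splunkd.log | grep -iE '(error|warn)' | tail -n 20
# the same idea via the search CLI; replace admin:changeme with real credentials
/opt/splunk/bin/splunk search 'index=_internal source=*splunkd.log* (rfs OR s3) (ERROR OR WARN)' -earliest_time -24h -auth admin:changeme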
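It would also be worth confirming that the ingest action is actually writing objects to MinIO under the layout s3spl expects. A sketch with the AWS CLI, reusing the endpoint and bucket from your configs (--no-verify-ssl matches your verify_ssl = False; substitute your real keys):

# list what actually landed in the bucket
export AWS_ACCESS_KEY_ID=XXXX
export AWS_SECRET_ACCESS_KEY=XXX
aws --endpoint-url https://localhost:9000 --no-verify-ssl s3 ls s3://splunk/ --recursive | head -n 20

Then compare the key layout (year=/month=/day=) against the prefix in your s3spl_bucket.conf, including the leading slash there.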