All Apps and Add-ons

Data indexing stopped after finishing the current date

Loves-to-Learn

Dear Team,

I have Cloudflare data for my website and set it up as a Cloudflare index in Splunk. After it finished indexing the data in S3 up to the current date, three days passed and my index has not been updated at all.

Is there a troubleshooting process that I can follow and then share here so we can find the root cause?

 

Sincerely,

 

Peter


SplunkTrust

We need more information.  Please describe your Splunk architecture and how data gets from Cloudflare to Splunk.

---
If this reply helps you, an upvote would be appreciated.

Loves-to-Learn

@richgalloway 

Standalone Splunk Enterprise trial version.

One S3 bucket on AWS; I followed these instructions to get the data into Splunk successfully:

https://developers.cloudflare.com/logs/analytics-integrations/splunk

So what else do you need?


SplunkTrust

So it worked and now it doesn't.  What changed in the meantime?  Any firewall changes?  Have you checked the logs on each end?

---
If this reply helps you, an upvote would be appreciated.

Loves-to-Learn

I did not change anything in the configuration. :) If you want me to run any troubleshooting, I can do that.


SplunkTrust
You did not make any changes, but is it possible someone else did, like the Network team?
Have you checked the logs?
---
If this reply helps you, an upvote would be appreciated.

Loves-to-Learn

@richgalloway 

I might have found the root cause. This is what I did:

1 - Rebuilt the Splunk standalone server.

2 - Ingested data from S3 normally, setting up a new input and a new IAM role.

3 - Data was ingested successfully.

4 - Problem: started getting errors like the one in this post: https://community.splunk.com/t5/Archive/AggregatorMiningProcessor-Log-ERROR/m-p/336528

5 - Copied props.conf from the default folder to the local folder, changed MAX_EVENTS, set BREAK_ONLY_BEFORE_DATE to false, and unset any MUST_NOT_BREAK_BEFORE or MUST_NOT_BREAK_AFTER rules (a sketch of this stanza is below, after the list).

6 - Restarted the server.

7 - Started getting this error (a way to check the credentials outside Splunk is also sketched after the list):

level=WARNING pid=142498 tid=Thread-4 logger=root pos=utils.py:wrapper:162 | Run function: get_all_keys failed: Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/solnlib/utils.py", line 159, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_common.py", line 186, in get_all_keys
    encoding_type=encoding_type)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/boto/s3/bucket.py", line 474, in get_all_keys
    '', headers, **params)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/boto/s3/bucket.py", line 412, in _get_all
    response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>ExpiredToken</Code><Message>The provided token has expired.</Message><Token-0>FwoGZXIvYXdzEIj//////////wEaDJjPPQIawmKrFYpa3SKxAbnfEEK/lq1HR2jMsdfZ1cFxIsU18PJTe4zdCxSJ0FYFVmrdlB0vyR3qVAeW5fESaim446Ks72wnxTZbQ1dp5m9SF74B9OO//qwDeThJXAnJDPbqXWhJbbq1wUO9Yh/6Q4ob0U8cg2XRjTfvplU7eCZItAX9YojvFyyK3G7uh6DtybnWFGEHoB7gfeIcuKL6a8SOw5AplE7WC2xGTwXp0ElQRhYIG+mX5k8+vv9kr2gKcyiXmIn9BTItQbK4rTa8ZUN1Nm4o8XA5PnSJ+EUpmjcDh2QuZM1M0mOCfVUip3EEzHfEBtEu</Token-0><RequestId>xxxxxxxxxxxx</RequestId><HostId>D2aLBfncsumaaPAfEAucyYmdlmNvfqKOn/Yk+oI0y1aBHTbC57u2+N0K38ycs8oQogoXTSr62Eo=</HostId></Error>

8 - Now the data is indexing at 3.4 KB/s.
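
For reference, the step 5 change would look roughly like the stanza below in $SPLUNK_HOME/etc/apps/<your app>/local/props.conf. This is a minimal sketch: the cloudflare:json stanza name and the MAX_EVENTS value are assumptions, so substitute the sourcetype and limit you actually use.

[cloudflare:json]
# Stanza/sourcetype name above is an assumption; use the sourcetype of your Cloudflare data.
# Example value only; raise the per-event line limit as needed.
MAX_EVENTS = 10000
BREAK_ONLY_BEFORE_DATE = false
# Setting these to empty in local/ overrides (unsets) any rules inherited from default/.
MUST_NOT_BREAK_BEFORE =
MUST_NOT_BREAK_AFTER =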
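
On the ExpiredToken error in step 7: one way to rule Splunk out is to try the same listing outside Splunk with fresh credentials. Below is a minimal Python sketch using boto3; the bucket name and prefix are placeholders, and it assumes boto3 and AWS credentials for the same IAM role are available on the server. If this listing succeeds while the add-on keeps failing, a stale session token held by the add-on is a likelier culprit than the bucket or its permissions.

#!/usr/bin/env python3
# Minimal sketch: verify S3 listing outside Splunk to rule out bucket/permission issues.
# Bucket name and prefix below are placeholders for illustration.
import boto3
from botocore.exceptions import ClientError

BUCKET = "my-cloudflare-logs"  # placeholder: your Cloudflare log bucket
PREFIX = ""                    # placeholder: key prefix used by the Splunk input

def main():
    # Show which identity the local credentials resolve to.
    sts = boto3.client("sts")
    print("Caller identity:", sts.get_caller_identity()["Arn"])

    s3 = boto3.client("s3")
    try:
        paginator = s3.get_paginator("list_objects_v2")
        count = 0
        for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
            count += len(page.get("Contents", []))
        print(f"Listed {count} objects under s3://{BUCKET}/{PREFIX}")
    except ClientError as err:
        # An ExpiredToken here points at the credentials, not at Splunk.
        print("Listing failed:", err.response["Error"]["Code"], err.response["Error"]["Message"])

if __name__ == "__main__":
    main()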
