Getting Data In

Splunk for Amazon S3 Add-on not able to fetch all logs

adamb0mb
Explorer

I'm testing out Splunk for indexing Amazon CloudFront logs, which get stored automatically in Amazon S3. I'm attempting to pull them in via the Amazon S3 Add-on.

Yesterday, I installed Splunk and the S3 add-on. After processing a day or so of logs, I ran into my trial license limit. No problem, I've got enough data to get some work done. Today I'd like to get some more data into my index. Is Splunk supposed to automatically be checking for more data? Is there a way I can force it to start updating again?

Edit
This appears to be an issue with the Splunk for Amazon S3 Add-on itself, but I still don't know how to resolve it.

INFO  ExecProcessor - message from "python /Applications/splunk/etc/apps/s3/bin/s3.py" Connecting to my-bucket.s3.amazonaws.com.
ERROR ExecProcessor - message from "python /Applications/splunk/etc/apps/s3/bin/s3.py" Traceback (most recent call last):
ERROR ExecProcessor - message from "python /Applications/splunk/etc/apps/s3/bin/s3.py"   File "/Applications/splunk/etc/apps/s3/bin/s3.py", line 697, in <module>
ERROR ExecProcessor - message from "python /Applications/splunk/etc/apps/s3/bin/s3.py"     run()
ERROR ExecProcessor - message from "python /Applications/splunk/etc/apps/s3/bin/s3.py"   File "/Applications/splunk/etc/apps/s3/bin/s3.py", line 408, in run
ERROR ExecProcessor - message from "python /Applications/splunk/etc/apps/s3/bin/s3.py"     objs = get_objs_from_bucket(key_id, secret_key, bucket, subdir)
ERROR ExecProcessor - message from "python /Applications/splunk/etc/apps/s3/bin/s3.py"   File "/Applications/splunk/etc/apps/s3/bin/s3.py", line 361, in get_objs_from_bucket
ERROR ExecProcessor - message from "python /Applications/splunk/etc/apps/s3/bin/s3.py"     conn = get_http_connection(key_id, secret_key, bucket, obj = None, query_string = query_string)
ERROR ExecProcessor - message from "python /Applications/splunk/etc/apps/s3/bin/s3.py"   File "/Applications/splunk/etc/apps/s3/bin/s3.py", line 195, in get_http_connection
ERROR ExecProcessor - message from "python /Applications/splunk/etc/apps/s3/bin/s3.py"     conn.connect()
ERROR ExecProcessor - message from "python /Applications/splunk/etc/apps/s3/bin/s3.py"   File "/Applications/splunk/lib/python2.7/httplib.py", line 757, in connect
ERROR ExecProcessor - message from "python /Applications/splunk/etc/apps/s3/bin/s3.py"     self.timeout, self.source_address)
ERROR ExecProcessor - message from "python /Applications/splunk/etc/apps/s3/bin/s3.py"   File "/Applications/splunk/lib/python2.7/socket.py", line 553, in create_connection
ERROR ExecProcessor - message from "python /Applications/splunk/etc/apps/s3/bin/s3.py"     for res in getaddrinfo(host, port, 0, SOCK_STREAM):
ERROR ExecProcessor - message from "python /Applications/splunk/etc/apps/s3/bin/s3.py" socket.gaierror: [Errno 8] nodename nor servname provided, or not known
INFO  ExecProcessor - Ran script: python /Applications/splunk/etc/apps/s3/bin/s3.py, took 2926.8 seconds to run, 0 bytes read, exited with code 1
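
If I'm reading the traceback right, the failure is in getaddrinfo, i.e. the bucket hostname couldn't be resolved when the script ran. A quick way to test the same lookup outside of Splunk (the bucket name below is a placeholder, swap in your own):

import socket

# Placeholder hostname; use the bucket from your input stanza.
host = "my-bucket.s3.amazonaws.com"
try:
    # This is the same lookup httplib performs before connecting.
    print(socket.getaddrinfo(host, 80, 0, socket.SOCK_STREAM))
except socket.gaierror as e:
    print("DNS lookup failed: %s" % e)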

rjordan
Engager

The S3 modular input doesn't check whether IsTruncated is set in the bucket listing and then use a marker to continue, which limits the listing to the first 1000 objects. I made a quick modification to work around this, but the plugin has other major issues that make it unsuitable for production use, so I decided to use s3cmd in sync mode to just grab the logs and let Splunk index the files.

Here's the mod in case anyone is interested:

def get_objs_from_bucket(key_id, secret_key, bucket, subdir = None):
    # Page through the bucket listing until IsTruncated is no longer "true".
    more_data = True
    marker = ""    # empty marker means "start from the beginning of the bucket"
    objs = []
    while more_data:
        query_string = "?marker=%s" % (urllib.quote(marker))
        if subdir:
            query_string = "?marker=%s&prefix=%s&delimiter=/" % (urllib.quote(marker), urllib.quote(subdir))
        conn = get_http_connection(key_id, secret_key, bucket, obj = None, query_string = query_string)
        resp = conn.getresponse()
        log_response(resp)
        if resp.status != 200:
            raise Exception("AWS HTTP request returned status code %d (%s): %s" %
                (resp.status, resp.reason, get_amazon_error(resp.read())))
        bucket_listing = resp.read()
        conn.close()

        # parse AWS's bucket listing response
        doc = xml.dom.minidom.parseString(bucket_listing)
        root = doc.documentElement
        key_nodes = root.getElementsByTagName("Key")
        for key in key_nodes:
            if key.firstChild.nodeType == key.firstChild.TEXT_NODE:
                objs.append(key.firstChild.data)
        if root.getElementsByTagName("IsTruncated")[0].firstChild.data == "true":
            # S3 returned a partial listing; continue from the last key we saw.
            marker = objs[-1]
            logging.info("found %d objects so far..." % (len(objs)))
        else:
            more_data = False
            logging.info("found %d objects total..." % (len(objs)))

    return objs
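
For the s3cmd route, something along these lines would do it: a periodic sync plus a plain file monitor (bucket name, local path, and sourcetype are placeholders):

# Mirror the CloudFront logs to local disk, e.g. from cron.
s3cmd sync s3://my-cloudfrontlogs/cflog/ /var/log/cloudfront/

# inputs.conf stanza so Splunk indexes whatever lands in that directory.
[monitor:///var/log/cloudfront]
sourcetype = cloudfront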

packetwerks
Engager

The above mod worked for me.


adamb0mb
Explorer

There is definitely an issue with the S3 connector, and it's getting logged to splunkd.log.

Connecting to <my s3>.
Traceback:
 File "s3.py", line 408, in run
  objs = get_objs_from_bucket
 File "s3.py", line 361, in get_objs_from_bucket
  conn = get_http_connection
 File "s3.py", line 195, in get_http_connection
  conn.connect()
 File "httplib.py", line 757, in connect
 File "socket.py", line 553, in create_connection
  for res in getaddrinfo(host, port, 0, SOCK_STREAM):
socket.gaierror: [Errno 8] nodename nor servname provided, or not known

hexx
Splunk Employee

It was actually kind of hard to find, because it's in the etc/apps/gettingstarted/local/inputs.conf file (could that be affecting it?)

No, that's not the issue. That location just happens to be where the UI committed your configuration when you enabled this input. More details on how Splunk layers configuration files can be found in the Splunk documentation on configuration file precedence.
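
If you want to see exactly which file each setting is coming from, btool will print the merged configuration along with the source file for every line:

# Show every inputs.conf setting and the file it comes from.
/Applications/splunk/bin/splunk btool inputs list --debug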


adamb0mb
Explorer

This is the stanza for the input:
[s3://my-cloudfrontlogs/cflog/]
key_id = KEY_IDXXXXXXX
secret_key = SECRET_KEYXXXXXXXX

It was actually kind of hard to find, because it's in the etc/apps/gettingstarted/local/inputs.conf file (could that be affecting it?)


hexx
Splunk Employee

Another possibility is that the S3 modular input uses a checkpoint logic that somehow got messed up. I would recommend looking in splunkd.log for messages indicating that the scripted input is running and what the outcome of the command execution is.
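
For example, a search like this against the internal index should show each run of the scripted input and whether it exited cleanly (index and sourcetype are the defaults for splunkd.log):

index=_internal sourcetype=splunkd ExecProcessor s3.py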


hexx
Splunk Employee

First off, a single license violation cannot have this effect. Violations do not affect data intake, only the ability to search once enough of them have accrued.

Is Splunk supposed to automatically be checking for more data?

That depends on how the input in question is configured. Could you show us the stanza in the app's inputs.conf that describes this particular input?

Is there a way I can force it to start updating again?

A simple test to see if this is some kind of bad transient state is to restart splunkd and see if you get more data coming in.
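
On an install at /Applications/splunk, that would be:

/Applications/splunk/bin/splunk restart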
