
Splunk Add-on for Amazon Web Services: Cannot index AWS config change notifications

Communicator

We are indexing AWS data into Splunk using Splunk Add-on for AWS.

We have configured inputs to retrieve data from AWS Config.

AWS Config data should go in the sourcetypes 'aws:config' & 'aws:config:notification'.

While we do get data in 'aws:config' we do not get any data under 'aws:config:notification'.

The documentation (https://docs.splunk.com/Documentation/AddOns/released/AWS/ConfigureInputs) states that 'SQS-based S3' input type is supported for 'aws:config:notification'.

However, we spotted the following message in the logs:

 2019-06-07 07:55:02,631 level=INFO pid=26766 tid=Thread-5 logger=splunk_ta_aws.modinputs.sqs_based_s3.handler pos=handler.py:parse:149 | start_time=1559893993 datainput="config-sqs_s3", created=1559894102.63 message_id="58cee5d5-0b6b-46b7-af16-2e3ee2a2d22f" ttl=300 job_id=73693ed4-de47-4c5d-bd0f-3dbf11bcb5b6 | message="Ingnoring this config message." message_type="ConfigurationItemChangeNotification"

And handler.py seems pretty clear about it:

class ConfigNoticeParser(object):
    """
    Wrapper class for easy accessing config dict
    based notifications.
    """
    _SUPPORTED_MESSAGE_TYPE = [
        'ConfigurationHistoryDeliveryCompleted',
        'ConfigurationSnapshotDeliveryCompleted',
    ]

    _UNSUPPORTED_MESSAGE_TYPE = [
        'ConfigurationItemChangeNotification',
        'ConfigurationSnapshotDeliveryStarted',
        'ComplianceChangeNotification',
        'ConfigRulesEvaluationStarted',
        'OversizedConfigurationItemChangeNotification',
        'OversizedConfigurationItemChangeDeliveryFailed'
    ]

    def __init__(self, message, region_cache):
        self._message = message
        self._region_cache = region_cache

    def parse(self):
        message = self._message
        message_type = message['messageType']
        if message_type in self._UNSUPPORTED_MESSAGE_TYPE:
            logger.info('Ingnoring this config message.',
                        message_type=message_type)
            return []

        if message_type not in self._SUPPORTED_MESSAGE_TYPE:
            raise TypeError('Unknown config message.')

        # for supported message types
        bucket = message['s3Bucket']
        region = self._region_cache.get_region(bucket)
        key = message['s3ObjectKey']
        if not isinstance(key, unicode):
            raise TypeError('s3ObjectKey is expected to be an unicode object.')
        return [self._make(region, bucket, key)]

    def _make(self, region, bucket, key):
        return S3Notice(region, bucket, key, None, None)
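To make the behavior concrete, here is a minimal standalone sketch of that filtering logic (simplified names and return values, not the add-on's actual module):

```python
# Minimal sketch of the ConfigNoticeParser filtering logic above.
# Names and the returned tuples are simplified for illustration.

SUPPORTED = [
    'ConfigurationHistoryDeliveryCompleted',
    'ConfigurationSnapshotDeliveryCompleted',
]

UNSUPPORTED = [
    'ConfigurationItemChangeNotification',
    'ConfigurationSnapshotDeliveryStarted',
    'ComplianceChangeNotification',
    'ConfigRulesEvaluationStarted',
    'OversizedConfigurationItemChangeNotification',
    'OversizedConfigurationItemChangeDeliveryFailed',
]

def parse(message):
    """Return S3 object references for supported messages, [] otherwise."""
    message_type = message['messageType']
    if message_type in UNSUPPORTED:
        # This is the branch that logs "Ingnoring this config message."
        return []
    if message_type not in SUPPORTED:
        raise TypeError('Unknown config message.')
    return [(message['s3Bucket'], message['s3ObjectKey'])]

# Change notifications are silently dropped:
print(parse({'messageType': 'ConfigurationItemChangeNotification'}))  # []
# Only snapshot/history deliveries yield work items:
print(parse({'messageType': 'ConfigurationSnapshotDeliveryCompleted',
             's3Bucket': 'my-bucket', 's3ObjectKey': 'snapshot.json.gz'}))
```

So a `ConfigurationItemChangeNotification` never produces an S3 download job, which is exactly why nothing lands in `aws:config:notification`.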

inputs.conf:

[aws_sqs_based_s3://config-sqs_s3]
aws_account = <aws_account_name>
aws_iam_role = <assume_role_name>
disabled = 0
host = <host>
index = main
interval = 300
s3_file_decoder = config
sourcetype = aws:config
sqs_batch_size = 10
sqs_queue_region = <region>
sqs_queue_url = https://sqs.eu-west-1.amazonaws.com/<aws_account_id>/<sqs_name>

Are we missing something here?

Thanks in advance for any hint!

1 Solution

Communicator

Alright there is a feature request for this : ADDON-20112

In the meantime, we are successfully indexing config change notifications using CloudWatch Events rule + Kinesis Firehose + HEC : https://aws.amazon.com/fr/blogs/mt/ingest-aws-config-data-into-splunk-with-ease/

It just needs a little tweaking at the indexing level to correctly split the JSON events.
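For reference, the CloudWatch Events (EventBridge) rule in that approach matches AWS Config events by source; a minimal event pattern looks roughly like this (the `detail-type` value is from memory, so verify it against your actual events and drop it to match everything from Config):

```json
{
  "source": ["aws.config"],
  "detail-type": ["Config Configuration Item Change"]
}
```

The rule's target is the Kinesis Firehose delivery stream, which in turn posts to the Splunk HEC endpoint.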


Engager

Would you be able to elaborate on how you split the JSON events? I've been trying to use BREAK_ONLY_BEFORE in props.conf but haven't had any success yet.


Communicator

Finally we went back to SQS mode and created a separate SQS queue for config notifications.

I do not have the config anymore and I do not remember having issues with event breaking, but after checking another props.conf for events also coming from a CloudWatch Events rule, and the sample I still had, this might help:

[aws:config:notification]
LINEBREAKER = }}(){\"MessageId
SHOULD_LINEMERGE = false
TIME_PREFIX = SentTimestamp\":\s\"
MAX_TIMESTAMP_LOOKAHEAD = 13
TIME_FORMAT = %s


Explorer

As we are on the same page (and I guess many others as well), could you share more details on how it was resolved?


Communicator

Hello @tvergov, I guess you mean how it was resolved using SQS instead of the CloudWatch Events rule + Kinesis Firehose + HEC.

Well, on the AWS side we now have one 'awsconfig' SNS topic with 2 SQS queue subscriptions, 'awsconfig' & 'awsconfig_notification'.

In the AWS add-on, we still have our Config SQS-based S3 input linked to our 'awsconfig' SQS queue; it is supposed to gather data into both the aws:config & aws:config:notification sourcetypes, but it only feeds aws:config.

We have added a Custom SQS input linked to our 'awsconfig_notification' SQS queue and have assigned it the aws:config:notification sourcetype.
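For anyone reproducing this, a Custom SQS input stanza in the add-on's inputs.conf looks roughly like the following (the input name and queue name are just examples, and the key names are from memory, so verify them against the add-on's inputs.conf.spec):

```
[aws_sqs://config-notification-sqs]
aws_account = <aws_account_name>
aws_region = eu-west-1
sqs_queues = awsconfig_notification
sourcetype = aws:config:notification
index = main
interval = 300
disabled = 0
```

Unlike the SQS-based S3 input, this input indexes the SQS message bodies themselves, which is why the change notifications end up in aws:config:notification.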

New Member

@D2SI, could you please tell me what you mean by "Custom SQS input"? I have the same issue and am searching for a solution.


Loves-to-Learn

I guess he meant that when you create the AWS Config data input (the Config input type, not SQS-based S3), you have to set "aws:config:notification" as the sourcetype.

I am going to try what he said, as I have the same issue now.

What was your solution? Did you try his suggestion, and did it work? Please share.


Loves-to-Learn

or

[aws_sqs_based_s3://config-sqs_s3]
aws_account = <aws_account_name>
aws_iam_role = <assume_role_name>
disabled = 0
host = <host>
index = main
interval = 300
s3_file_decoder = config
sourcetype = aws:config:notification
sqs_batch_size = 10
sqs_queue_region = <region>
sqs_queue_url = https://sqs.eu-west-1.amazonaws.com/<aws_account_id>/<sqs_name>

(with aws:config:notification set as a custom sourcetype; props.conf settings are needed for this)