We are ingesting logs from an Imperva SQS queue in our AWS environment. We want to use a custom sourcetype for these logs, i.e. "imperva:incapsula", instead of the default sourcetype "aws:s3:accesslogs" set by the Splunk Add-on for AWS.

We made the change to inputs.conf on the backend and restarted the service. The change is reflected in the UI, and the internal log events below show the input has been tagged with the new sourcetype, but the events are still being indexed under the old sourcetype, aws:s3:accesslogs.

We have tried several things, such as creating a new custom input with the new sourcetype and creating a props.conf for the new sourcetype under the system/local directory, but that didn't help; the logs are still indexed under the default sourcetype "aws:s3:accesslogs".

Internal logs after making the change:

2022-01-28 09:33:58,959 level=INFO pid=10133 tid=MainThread logger=splunk_ta_aws.modinputs.sqs_based_s3.handler pos=handler.py:run:635 | datainput="imperva-waf-log" start_time=1643362438 | message="Data input started." aws_account="SplunkProdCrossAccountUser" aws_iam_role="aee_splunk_prd" disabled="0" host="ip-172-27-201-15.ec2.internal" index="corp_imperva" interval="300" python.version="python3" s3_file_decoder="S3AccessLogs" sourcetype="imperva:incapsula" sqs_batch_size="10" sqs_queue_region="**-1" sqs_queue_url="https://***/aee-splunk-prd-imperva-waf" using_dlq="1"

props.conf:

[imperva:incapsula]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)\CEF\:\d\|
NO_BINARY_CHECK=true
TIME_FORMAT=%s%3N
TIME_PREFIX=start=
MAX_TIMESTAMP_LOOKAHEAD=128

inputs.conf:

[aws_sqs_based_s3://imperva-waf-log]
aws_account = SplunkProdCrossAccountUser
aws_iam_role = aee_splunk_prd
index = corp_imperva
interval = 300
s3_file_decoder = S3AccessLogs
#sourcetype = aws:s3:accesslogs
sourcetype = imperva:incapsula
sqs_batch_size = 10
sqs_queue_region = ***-1
sqs_queue_url = https://**/aee-splunk-prd-imperva-waf
using_dlq = 1
disabled = 0

Has anyone faced a similar issue?
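Some additional detail on how we are checking things, in case it helps. The effective configuration can be inspected with btool on the host where the add-on runs; this is a minimal sketch using the stanza names from our config above, assuming $SPLUNK_HOME is the Splunk installation path:

# show the resolved input stanza and which .conf file each setting comes from
$SPLUNK_HOME/bin/splunk btool inputs list aws_sqs_based_s3://imperva-waf-log --debug
# show the resolved props stanza for the custom sourcetype
$SPLUNK_HOME/bin/splunk btool props list imperva:incapsula --debug

The --debug flag prints the file each setting is resolved from, which is how we rule out a layering conflict between system/local and the add-on's own directories.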
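And this is the kind of search we run to confirm which sourcetype newly arriving events land under (index name taken from our input; the time range is arbitrary). It is where we still see only aws:s3:accesslogs:

index=corp_imperva earliest=-15m
| stats count by sourcetype, source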