After configuring migration for a few indexes, the following errors are filling up the log on all cluster peers:
06-27-2019 07:43:11.914 +0200 ERROR S3Client - command=get transactionId=0x4053369110 rTxnId=0x4051be42d0 status=completed success=N localPath="D:\Program Files\Splunk\var\lib\splunk\db\db15613655051561358169598A3573547-8422-4429-B22D-4867A0A1B8E8.tmp" offset=0 error="can not open file" reason="Access is denied."
06-27-2019 07:43:11.915 +0200 ERROR RetryableClientTransaction - transactionDone(): groupId=0x4043b50ef0 rTxnId=0x4051be42d0 transactionId=0x4053369110 success=N HTTP-statusCode=502 HTTP-statusDescription="network error" retries=4294967295 retry=N noretryreason="transaction group had fatal error" remainingTxns=0
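One detail worth noting: the retries=4294967295 value in the second event is exactly 2^32 - 1, i.e. the bit pattern of -1 stored in an unsigned 32-bit counter, so it may represent "no limit" (or an uninitialized value) rather than Splunk actually attempting four billion retries. A quick sanity check of that interpretation:

```python
# retries=4294967295 from the log equals UINT32_MAX,
# i.e. -1 reinterpreted as an unsigned 32-bit integer.
retries = 4294967295
assert retries == 2**32 - 1  # UINT32_MAX

# Reinterpret the same 32-bit pattern as a signed integer.
signed = retries - 2**32 if retries >= 2**31 else retries
print(signed)  # -1
```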
The S3 bucket is populated with files and retrieval also works; it's just that the number of errors is insane.
We're running EC2 instances. Is there a way to find out why we are getting Bad Gateway errors in Splunk, and how can we solve them?
It seems Splunk SmartStore is not supported on Windows yet? The documentation refers to Linux only.
With Splunk SmartStore, generally ignore HTTP status codes on their own.
Always try to keep the Windows path as short as possible, e.g. install to E:\splunk.
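One reason to keep the path short: classic Windows APIs limit full paths to MAX_PATH (260 characters) unless long-path support is enabled, and SmartStore temp file names like the one in the error are long. A minimal sketch comparing the install path from the log against a shorter one (the directory layout is taken straight from the log line; real cache paths may be deeper):

```python
MAX_PATH = 260  # classic Windows path-length limit without long-path support

base_long = r"D:\Program Files\Splunk\var\lib\splunk\db"
base_short = r"E:\splunk\var\lib\splunk\db"
# Temp file name from the error message in the question
fname = "db15613655051561358169598A3573547-8422-4429-B22D-4867A0A1B8E8.tmp"

for base in (base_long, base_short):
    full = base + "\\" + fname
    status = "OK" if len(full) <= MAX_PATH else "too long"
    print(len(full), status, full)
```

Here both variants fit under the limit, but deeper index/bucket subdirectories add up quickly, so the shorter the root, the more headroom you keep.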
There is a permission error in the event you provided. Does your splunk user have admin permissions? That's usually needed for the Program Files folder.
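Beyond inspecting the ACLs, you can probe effective permissions by doing roughly what the S3Client does: create a .tmp file in the cache directory, write to it, reopen it, and delete it, running as the same account splunkd runs under. A minimal sketch (the directory you point it at is up to you; the commented path is just the one from the log):

```python
import os
import tempfile

def probe_write_access(directory):
    """Create, write, reopen, and delete a temp file in `directory`.

    Returns None on success, or the OSError that was raised.
    """
    try:
        fd, path = tempfile.mkstemp(suffix=".tmp", dir=directory)
        with os.fdopen(fd, "wb") as f:
            f.write(b"probe")
        with open(path, "rb") as f:  # reopen, like S3Client's get
            assert f.read() == b"probe"
        os.remove(path)
        return None
    except OSError as err:
        return err

# Example (adjust to your SmartStore cache directory):
# print(probe_write_access(r"D:\Program Files\Splunk\var\lib\splunk\db"))
```

If this succeeds interactively but splunkd still logs "Access is denied.", the difference is likely the service account or something intercepting file operations, so run the probe under the service account to compare.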
The permissions are correct, but just in case I have reset them; I'm still getting access-denied messages. What is strange is that I can see .tmp files being created there with 0 size, which then disappear again.