All Posts


Helm version: version.BuildInfo{Version:"v3.14.2", GitCommit:"", GitTreeState:"clean", GoVersion:"go1.22.7"}. I used helm from Azure Cloud Shell and also tried GCP Cloud Shell; both had the same issue. Do I need to try installing kubectl and helm locally and try again?
Hi @splunklearner, you have to create a Splunk Role for each AD Group. Then, in each role, you set the index (or indexes) to use and/or any additional filtering options. Ciao. Giuseppe
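For reference, a minimal sketch of what this role-per-group mapping can look like in authorize.conf and authentication.conf on the search head. The role name, index name, LDAP strategy name, and group DN below are placeholders, not values from this thread:

# authorize.conf -- one role per AD group, restricted to its index (placeholder names)
[role_app_y]
importRoles = user
srchIndexesAllowed = z
srchIndexesDefault = z

# authentication.conf -- map the AD group to that role (placeholder strategy and DN)
[roleMap_MyLDAPStrategy]
role_app_y = CN=Y,OU=Groups,DC=example,DC=com

Users who authenticate through the mapped AD group then inherit only the search access granted by that role.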
Thanks a lot for your help!
Hi, I'm new to Splunk DB Connect. We have Splunk on-prem and are trying to pull Snowflake audit logs and push them to Cribl.io (for log optimization and reducing log size). As Cribl.io doesn't have a connector for Snowflake (and one isn't on the near-term roadmap), I'm wondering if I can use Splunk DB Connect to read data from Snowflake and send it to Cribl.io, which then sends it on to the destination, i.e. Splunk (for log monitoring and alerting). Question: would this be a "double hop" to Splunk, and if so, do any Splunk charges apply while Splunk DB Connect is reading from Snowflake and sending to Cribl.io? Thank you! Avi
Hi @gcusello, how do I assign a specific index to a specific AD group, and how do I map a specific FQDN to that particular index, so that each AD group sees only its own logs?
Hi @splunklearner, as you well know, AD Groups are associated with one or more Splunk Roles, and data access is managed by associating Roles with indexes. You can also filter access to data within the same index by adding a filter to the role (e.g. on sourcetype or another field); in this way you can reduce the number of indexes, but you still have to identify a rule to filter data access. Usually sourcetype isn't the best choice because sourcetype is normally tied to the log format or the technology; if you can identify another field, you can use that. Ciao. Giuseppe
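As an illustration, that per-role filter is expressed with srchFilter in authorize.conf. A minimal sketch, assuming a shared index and a sourcetype dedicated to one application (all names below are hypothetical):

# authorize.conf -- several app teams share one index, each role is filtered to its own data
[role_app_x_team]
importRoles = user
srchIndexesAllowed = shared_apps
srchFilter = sourcetype=app_x_logs

Every search run by members of this role is implicitly ANDed with the srchFilter, so they only see events matching it.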
Suppose application 'X' has a specific AD group, say "Y", and a specific index, "Z". Generally, the X application team members/owners are in group Y and should access index Z. This is fine so far. But the client is concerned that numerous applications with numerous AD groups will be difficult to maintain. So, can we include multiple app teams (with multiple indexes) in a single AD group, and restrict each team by the sourcetype specific to their particular app? Is this possible, or is there any other way to do it? The goal is to reduce the number of AD groups while still keeping app-level restrictions.
Hi @eluisramos, I have never seen this behaviour; usually licenses are added together. Open a case with Splunk Support. Ciao. Giuseppe
Hi @splunklearner, no, data access in Splunk is managed at the index level. But must every AD group see only one index, or can it see more than one? I suppose you are trying to manage multitenancy; in that case, separate indexes are the only solution. Ciao. Giuseppe
Hi @isoutamo, yes, I usually do that. Ciao. Giuseppe
Hello @ankit86, do you have versioning enabled on the S3 side? Are you sure you selected the correct bucket name while creating the input? This article can help you: https://splunk.my.site.com/customer/s/article/Gneric-S3-input-which-configured-in-Splunk-Add-on-for-AWS-is-failing-with-error-ie-PermanentRedirect ===== Appreciate Karma and a Marked Solution if this helps you.
If your events are truly in JSON, you are asking the wrong question. Let me explain.

The sample you illustrated above is not JSON compliant. Specifically, quotation marks are badly placed. So, the most important question is whether the sample is faithful. Or do you mean a compliant JSON like this:

{"hosting_environment": "nonp", "application_environment": "nonp", "message": "[20621] 2024/11/14 12:39:46.899958 [ERR] 10.25.1.2:30080 - pid:96866 - unable to connect to endpoint", "service": "hello world"}

Assuming your raw events are JSON compliant, Splunk would give you a field named message. The task is simply to extract the desired part from this field. In other words, the fact that the data is JSON should have no bearing on your question. If I read your mind correctly, you want the string after [ERR]. (I'm not joking about reading minds. You should always illustrate what you want using sample data.) Therefore

| rex field=message "\[ERR\] (?<error>.+)"

If, on the other hand, your raw events are mangled like in your illustration, the answer will depend on how badly mangled the events are. The best solution would be to implore your developers to fix the log. Either way, the question is really not about JSON.
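If it helps, you can test that rex without touching your index using a run-anywhere search. This is just a sketch built from the compliant sample above:

| makeresults
| eval _raw="{\"hosting_environment\": \"nonp\", \"application_environment\": \"nonp\", \"message\": \"[20621] 2024/11/14 12:39:46.899958 [ERR] 10.25.1.2:30080 - pid:96866 - unable to connect to endpoint\", \"service\": \"hello world\"}"
| spath
| rex field=message "\[ERR\] (?<error>.+)"
| table message error

spath extracts the message field from the JSON in _raw, and the rex pulls everything after [ERR] into error.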
Hello @Meett, thank you for replying. Here is the error I noticed yesterday, but I am not sure if this is relevant:

14/11/2024 18:43:31.066 2024-11-14 13:13:31,066 level=ERROR pid=3929073 tid=Thread-4 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:index_data:114 | datainput="Imperva" bucket_name="imperva-XXXX-XXXXX" | message="Failed to collect data through generic S3." start_time=1731590010 job_uid="8ecfb3a2-5c70-4b1a-b7d7-f0b0fb3dfb94"
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_data_loader.py", line 108, in index_data
    self._do_index_data()
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_data_loader.py", line 131, in _do_index_data
    self.collect_data()
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_data_loader.py", line 181, in collect_data
    self._discover_keys(index_store)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_data_loader.py", line 304, in _discover_keys
    for key in keys:
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_common.py", line 98, in get_keys
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
  File "/opt/splunk/etc/apps/Splunk_TA_aws/lib/botocore/paginate.py", line 269, in __iter__
    response = self._make_request(current_kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/lib/botocore/paginate.py", line 357, in _make_request
    return self._method(**current_kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/lib/botocore/client.py", line 535, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/lib/botocore/client.py", line 983, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (PermanentRedirect) when calling the ListObjectsV2 operation: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
I'm trying to transform an error log. Below is a sample log (nginx_error):

2024/11/15 13:10:11 [error] 4080#4080: *260309 connect() failed (111: Connection refused) while connecting to upstream, client: 210.54.88.72, server: mpos.mintpayments.com, request: "GET /payment-mint/cnpPayments/v1/publicKeys?callback=jQuery360014295356911736334_1731369073329&X-Signature=plkb810sFSSSIbASLb818BMXxgtUM76QNvhI%252FBA%253D&X-Timestamp=1731368881376&X-ApiKey=CSSSAPXXXXXXPxmO7kjMi&X-CompanyToken=d1111e8lV1mpvljiCD2zRgEEU121p&_=1731369073330 HTTP/1.1", upstream: "https://10.20.3.59:28076//cnpPayments/v1/publicKeys?callback=jQuery360014295356911736334_1731369073329&X-Signature=plkb810sFY3jmET4IbASLb818BMXxgtUM76QNvhI%252FBA%253D&X-Timestamp=1731368881376&X-ApiKey=CNPAPIIk7elIMDTunrIGMuXPxmO7kjMi&X-CompanyToken=dX6E3yDe8lV1mpvljiCD2zRgEEU121p&_=173123073330", host: "test.mintpayments.com", referrer: "https://vicky9.mintpayments.com/testing??asd

We are trying to ensure that:
1) GET query parameters must not be logged
2) The referrer must not contain the query string

I have updated my config as below:

[04:59 PM] [root@dev-web01 splunkforwarder]# cat ./etc/system/local/props.conf
[source::///var/log/devops/nginx_error.log]
TRANSFORMS-sanitize_referer = remove_get_query_params, remove_referer_query

[04:59 PM] [root@dev-web01 splunkforwarder]# cat ./etc/system/local/transforms.conf
[remove_get_query_params]
REGEX = (GET|POST|HEAD) ([^? ]+)\?.*
FORMAT = $1 $2
DEST_KEY = _raw
REPEAT_MATCH = true

[remove_referer_query]
REGEX = referrer: "(.*?)\?.*"
FORMAT = referrer: "$1"
DEST_KEY = _raw
REPEAT_MATCH = true

I have verified that the regex is correct, and when I run the commands below to list the settings, the changes are present:

/opt/splunkforwarder/bin/splunk btool transforms list --debug
/opt/splunkforwarder/bin/splunk btool props list --debug

Still, I can see no transformation in the logs. What could be the issue here? We are using a custom splunkforwarder in our env.
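One thing worth noting: transforms with DEST_KEY = _raw (and SEDCMD) run in the parsing pipeline, which a universal forwarder does not execute for unstructured data, so configs like this generally need to sit on a heavy forwarder or on the indexers. With that assumption, an equivalent sketch using SEDCMD in props.conf could look like the following; the class names are hypothetical and the regexes should be tested against a sample before deploying:

# props.conf on the parsing tier (heavy forwarder or indexer) -- hypothetical sketch
[source::///var/log/devops/nginx_error.log]
# drop the query string from the request line (keeps method and path)
SEDCMD-strip_request_query = s/(GET|POST|HEAD) ([^? ]+)\?[^ ]*/\1 \2/g
# drop the query string from the referrer value
SEDCMD-strip_referrer_query = s/(referrer: ")([^"?]+)\?[^"]*/\1\2/g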
There might be a Salesforce app that can manage ingestion and extraction.  Short of that, if you are certain that ingestion is complete, you can post sample events (anonymize as needed) so volunteers can help.
Hello @ankit86, it's hard to answer without looking at the internal logs. We should check the internal logs for the inputs you have configured and identify possible ERRORs.
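As a starting point, a search along these lines should surface errors from the add-on's S3 inputs; the source pattern is an assumption and may vary with the add-on version:

index=_internal source=*splunk_ta_aws* level=ERROR
| table _time host source _raw
| sort - _time

Any PermanentRedirect, AccessDenied, or timeout messages in the results usually point at the bucket region, permissions, or input configuration.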
Hello, there is an app for Aruba EdgeConnect: https://splunkbase.splunk.com/app/6302 Is there any documentation on how to get the logs ingested into Splunk from Aruba EdgeConnect?
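While you look for official docs: appliances like this are commonly configured to send syslog to a Splunk receiver, so a minimal input could look like the sketch below. The port, index, and sourcetype are hypothetical placeholders, not values taken from the EdgeConnect app; check the app's documentation for the sourcetypes it actually expects:

# inputs.conf on a heavy forwarder or other syslog-receiving instance (placeholder values)
[udp://514]
index = network
sourcetype = aruba:edgeconnect
connection_host = ip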
Hello, I need some help with an Imperva to Splunk Cloud integration. I am using the Splunk Add-on for AWS on my Splunk Cloud SH, and from there I configured the Imperva account using the Key ID and Secret ID with the Imperva S3 bucket. For the input I am using Incremental S3. The logs are coming in to Splunk Cloud, but there are some misses too: I can see some logs available on AWS S3 that somehow are not being ingested into Splunk Cloud. I am not getting any help online, which is the reason I am posting the question here. Please advise, somebody. Thank you.
This link will help only if you are using Splunk Enterprise. What if you are on Splunk Cloud?
https://github.com/mgalbert/splunk_search_history_kvstore