All Posts


Hi @splunklearner , as you well know, AD groups are associated with one or more Splunk roles, and data access is managed by associating roles with indexes. You can eventually filter access to data within the same index by adding a search filter (e.g. on sourcetype or another field); in this way you can reduce the number of indexes, but you still have to identify a rule for filtering data access. Sourcetype usually isn't the best choice, because it is normally tied to the log format or the technology; if you can identify another field, you could use that instead. Ciao. Giuseppe
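For illustration only, a rough authorize.conf sketch of that idea (the role names, index name, and sourcetype values below are hypothetical; the filter should use whatever field reliably identifies each application's data):

[role_app_a_users]
importRoles = user
srchIndexesAllowed = shared_apps
srchFilter = sourcetype=app_a_logs

[role_app_b_users]
importRoles = user
srchIndexesAllowed = shared_apps
srchFilter = sourcetype=app_b_logs

Each role would then be mapped to its AD group in the LDAP roleMap as usual.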
Suppose application 'X' has a specific AD group, say "Y", and a specific index "Z". Generally, X application team members/owners are in group Y and should access index Z. This is fine so far. But the client is concerned that numerous applications mean numerous AD groups, which will be difficult to maintain. So, could we include multiple app teams with multiple indexes in a single AD group, and then restrict each team by the sourcetype belonging to their particular app? Is this possible, or is there any other way to do it? The goal is to reduce the number of AD groups while still keeping app-level restrictions.
Hi @eluisramos , I have never seen this behaviour; usually licenses are added together. Open a case with Splunk Support. Ciao. Giuseppe
Hi @splunklearner , No, data access in Splunk is managed at the index level, but must every AD group see only one or a few dedicated indexes? I suppose you are trying to manage multitenancy; in that case, separate indexes are the only solution. Ciao. Giuseppe
Hi @isoutamo , yes, I usually do it. Ciao. Giuseppe
Hello @ankit86 Do you have versioning enabled on the S3 side? Are you sure you selected the correct bucket name while creating the input? This article may help you: https://splunk.my.site.com/customer/s/article/Gneric-S3-input-which-configured-in-Splunk-Add-on-for-AWS-is-failing-with-error-ie-PermanentRedirect  ===== Appreciate Karma and Marked Solution if this helps you.
If your events are truly in JSON, you are asking the wrong question. Let me explain. The sample you illustrated above is not JSON compliant. Specifically, quotation marks are badly placed. So, the most important question is whether the sample is faithful. Or do you mean compliant JSON like this:

{"hosting_environment": "nonp", "application_environment": "nonp", "message": "[20621] 2024/11/14 12:39:46.899958 [ERR] 10.25.1.2:30080 - pid:96866 - unable to connect to endpoint", "service": "hello world"}

Assuming your raw events are JSON compliant, Splunk would give you a field named message. The task is simply to extract the desired part from this field. In other words, the fact that the data is JSON should have no bearing on your question. If I read your mind correctly, you want the string after [ERR]. (I'm not joking about reading minds. You should always illustrate what you want using sample data.) Therefore

| rex field=message "\[ERR\] (?<error>.+)"

If, on the other hand, your raw events are mangled as in your illustration, the answer will depend on how badly mangled the events are. The best solution would be to implore your developers to fix the log. Either way, the question is really not about JSON.
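In case the message field is not extracted automatically at search time, a minimal props.conf sketch along these lines could cover both the JSON parsing and the error extraction (the sourcetype name here is hypothetical):

[my_json_sourcetype]
KV_MODE = json
EXTRACT-error = \[ERR\]\s+(?<error>.+) in message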
Hello @Meett , Thank you for replying. Here is the error I noticed yesterday, but I am not sure if this is relevant:

14/11/2024 18:43:31.066 2024-11-14 13:13:31,066 level=ERROR pid=3929073 tid=Thread-4 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:index_data:114 | datainput="Imperva" bucket_name="imperva-XXXX-XXXXX" | message="Failed to collect data through generic S3." start_time=1731590010 job_uid="8ecfb3a2-5c70-4b1a-b7d7-f0b0fb3dfb94"
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_data_loader.py", line 108, in index_data
    self._do_index_data()
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_data_loader.py", line 131, in _do_index_data
    self.collect_data()
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_data_loader.py", line 181, in collect_data
    self._discover_keys(index_store)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_data_loader.py", line 304, in _discover_keys
    for key in keys:
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_common.py", line 98, in get_keys
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
  File "/opt/splunk/etc/apps/Splunk_TA_aws/lib/botocore/paginate.py", line 269, in __iter__
    response = self._make_request(current_kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/lib/botocore/paginate.py", line 357, in _make_request
    return self._method(**current_kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/lib/botocore/client.py", line 535, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/lib/botocore/client.py", line 983, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (PermanentRedirect) when calling the ListObjectsV2 operation: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
I'm trying to transform an error log. Below is a sample log (nginx_error):

2024/11/15 13:10:11 [error] 4080#4080: *260309 connect() failed (111: Connection refused) while connecting to upstream, client: 210.54.88.72, server: mpos.mintpayments.com, request: "GET /payment-mint/cnpPayments/v1/publicKeys?callback=jQuery360014295356911736334_1731369073329&X-Signature=plkb810sFSSSIbASLb818BMXxgtUM76QNvhI%252FBA%253D&X-Timestamp=1731368881376&X-ApiKey=CSSSAPXXXXXXPxmO7kjMi&X-CompanyToken=d1111e8lV1mpvljiCD2zRgEEU121p&_=1731369073330 HTTP/1.1", upstream: "https://10.20.3.59:28076//cnpPayments/v1/publicKeys?callback=jQuery360014295356911736334_1731369073329&X-Signature=plkb810sFY3jmET4IbASLb818BMXxgtUM76QNvhI%252FBA%253D&X-Timestamp=1731368881376&X-ApiKey=CNPAPIIk7elIMDTunrIGMuXPxmO7kjMi&X-CompanyToken=dX6E3yDe8lV1mpvljiCD2zRgEEU121p&_=173123073330", host: "test.mintpayments.com", referrer: "https://vicky9.mintpayments.com/testing??asd

We are trying to ensure that:
1) GET query parameters must not be logged
2) The referrer must not contain the query string

I have updated my config as below:

[04:59 PM] [root@dev-web01 splunkforwarder]# cat ./etc/system/local/props.conf
[source::///var/log/devops/nginx_error.log]
TRANSFORMS-sanitize_referer = remove_get_query_params, remove_referer_query

[04:59 PM] [root@dev-web01 splunkforwarder]# cat ./etc/system/local/transforms.conf
[remove_get_query_params]
REGEX = (GET|POST|HEAD) ([^? ]+)\?.*
FORMAT = $1 $2
DEST_KEY = _raw
REPEAT_MATCH = true

[remove_referer_query]
REGEX = referrer: "(.*?)\?.*"
FORMAT = referrer: "$1"
DEST_KEY = _raw
REPEAT_MATCH = true

I verified that the regex is correct, and when I run the commands below to list the changes, the config is present:

/opt/splunkforwarder/bin/splunk btool transforms list --debug
/opt/splunkforwarder/bin/splunk btool props list --debug

Still, I see no transformation in the logs. What could be the issue here? We are using a custom splunkforwarder in our env.
There might be a Salesforce app that can manage ingestion and extraction.  Short of that, if you are certain that ingestion is complete, you can post sample events (anonymize as needed) so volunteers can help.
Hello @ankit86 , It's hard to answer without looking at the internal logs; we should check the internal logs for the inputs you have configured and identify possible ERRORs.
Hello, There is an app for Aruba EdgeConnect - https://splunkbase.splunk.com/app/6302 Is there any documentation on how to get the logs ingested into Splunk from Aruba EdgeConnect?
Hello, I need some help with Imperva to Splunk Cloud integration. I am using the Splunk Add-on for AWS on my cloud SH, and from there I configured the Imperva account using the Key ID and Secret ID with the Imperva S3. For inputs I am using Incremental S3. The logs are coming into Splunk Cloud, but there are some misses too: I can see some logs available in AWS S3 that somehow are not being ingested into Splunk Cloud. I am not getting any help online, which is the reason I am posting the question here. Please advise, somebody. Thank you.
This link will help only if you are using Splunk Enterprise. What about if you are on Splunk Cloud?
https://github.com/mgalbert/splunk_search_history_kvstore
@sloshburch @rjthibod Can you please explain what the RBAC-with-indexes approach is? And the wildcard approach?
Hi all, We have a specific AD group for each specific application: we create an index for that app and restrict access to that index to that AD group (covering all app users of that specific app). Generally they give us the FQDN/hostname and we map it to the particular index. In this way we have numerous AD groups and indexes, but our client is expecting fewer AD groups because it is difficult to maintain that many. So here is my question: is there any chance to reduce the number of AD groups by restricting access by sourcetype rather than by index? In other words, can one index hold multiple applications, with each restricted by sourcetype? If yes, please help me with the approach.
Self post.  Thank you Splunk team for the suggestion!
Hello friends! Long time gawker, first time poster. I wanted to share my recent journey on backing up and restoring Splunk user search history, for users that decided to migrate their search history to the KV Store using the feature mentioned in the release notes. As of now, and as with all backups/restores, please make sure you test. Hope this helps someone else. Thanks to all that helped test and validate (and listen to me vent) along the way! Please feel free to share your experiences if you use this feature, or if I may have missed something as well. I'll throw the code up shortly as well.

https://docs.splunk.com/Documentation/Splunk/9.1.6/ReleaseNotes/MeetSplunk
Preserve search history across search heads
Search history is lost when users switch between various nodes in a search head cluster. This feature utilizes KV store to keep search history replicated across nodes. See search_history_storage_mode in limits.conf in the Admin Manual for information on using this functionality.

### Backup KV Store - pick your flavor of backing up (REST API, Splunk CLI, or a Splunk app like "KV Store Tools Redux")

# To back up just Search History
/opt/splunk/bin/splunk backup kvstore -archiveName `hostname`-SearchHistory_`date +%s`.tar.gz -appName system -collectionName SearchHistory

# To back up the entire KV Store (most likely a good idea)
/opt/splunk/bin/splunk backup kvstore -archiveName `hostname`-SearchHistory_`date +%s`.tar.gz

### Restore archive

# Change directory to the location of the archive backup
cd /opt/splunk/var/lib/splunk/kvstorebackup

# Locate the archive to restore
ls -lst

# List archive files (optional, but helpful to see what's inside and how the archive will extract, to ensure you don't overwrite expected files)
tar ztvf SearchHistory_1731206815.tar.gz
-rw------- splunk/splunk 197500 2024-11-10 02:46 system/SearchHistory/SearchHistory0.json

# Extract the archive or selected files
tar zxvf SearchHistory_1731206815.tar.gz system/SearchHistory/SearchHistory0.json

### Parse archive to prep for restore

# Change directory to where the archive was extracted
cd /opt/splunk/var/lib/splunk/kvstorebackup/system/SearchHistory

# Create/copy the splunk_parse_search_history_kvstore_backup_per_user.py script to parse archives in the directory to /tmp (or someplace else), and run it on the archive(s)
./splunk_parse_search_history_kvstore_backup_per_user.py /opt/splunk/var/lib/splunk/kvstorebackup/system/SearchHistory/SearchHistory0.json

# List the files created
ls -ls SearchHistory0*
 96 -rw-rw-r-- 1 splunk splunk  95858 Nov 14 23:12 SearchHistory0_admin.json
108 -rw-rw-r-- 1 splunk splunk 108106 Nov 14 23:12 SearchHistory0_nobody.json

### Restore the archives needed

# NOTE: To prevent search history leaking between users, you MUST restore to the corresponding user context
# Either loop/iterate through the restored files or do them one at a time, calling the corresponding REST API
curl -k -u admin https://localhost:8089/servicesNS/<user>/system/storage/collections/data/SearchHistory/batch_save -H "Content-Type: application/json" -d @SearchHistory0_<user>.json

### Validate that the SearchHistory KV Store was restored properly for the user by calling the REST API, and/or by logging into Splunk as that user, navigating to "Search & Reporting" and selecting "Search History"
curl -k -u admin https://localhost:8089/servicesNS/<user>/system/storage/collections/data/SearchHistory

#### NOTE: There are default limits in the KV Store that you need to account for if your files are large!
If you run into problems, review your splunkd.log and/or the KV Store dashboards within the MC (Search --> KV Store)

# /opt/splunk/bin/splunk btool limits list --debug kvstore
/opt/splunk/etc/system/default/limits.conf           [kvstore]
/opt/splunk/etc/system/default/limits.conf           max_accelerations_per_collection = 10
/opt/splunk/etc/system/default/limits.conf           max_documents_per_batch_save = 50000
/opt/splunk/etc/system/default/limits.conf           max_fields_per_acceleration = 10
/opt/splunk/etc/system/default/limits.conf           max_mem_usage_mb = 200
/opt/splunk/etc/system/default/limits.conf           max_queries_per_batch = 1000
/opt/splunk/etc/system/default/limits.conf           max_rows_in_memory_per_dump = 200
/opt/splunk/etc/system/default/limits.conf           max_rows_per_query = 50000
/opt/splunk/etc/system/default/limits.conf           max_size_per_batch_result_mb = 100
/opt/splunk/etc/system/default/limits.conf           max_size_per_batch_save_mb = 50
/opt/splunk/etc/system/default/limits.conf           max_size_per_result_mb = 50
/opt/splunk/etc/system/default/limits.conf           max_threads_per_outputlookup = 1

### Troubleshooting

# To delete the entire SearchHistory KV Store (because maybe you inadvertently restored everything to an incorrect user, testing, or due to other shenanigans)
/opt/splunk/bin/splunk clean kvstore -app system -collection SearchHistory

# To delete a user-specific context in the SearchHistory KV Store (because see above)
curl -k -u admin:splunk@dmin https://localhost:8089/servicesNS/<user>/system/storage/collections/data/SearchHistory -X DELETE

### Additional Notes
It was noted that restoring for a user that has not logged in yet may report messages similar to "Action forbidden". To remedy this, you might be able to create a local user and then restore again.
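If a restore file is larger than the defaults shown above allow, a local limits.conf override along these lines may be needed before the batch_save call succeeds (the values below are only an example; size them to your data):

# /opt/splunk/etc/system/local/limits.conf
[kvstore]
max_documents_per_batch_save = 100000
max_size_per_batch_save_mb = 100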
Hi there, If I recall correctly, Checkpoint only supports syslog over TCP and can therefore use TLS. The Splunk syslog input only supports UDP and no SSL. That said, you could use a TCP input, configure TLS/SSL https://docs.splunk.com/Documentation/Splunk/8.2.7/Admin/Inputsconf and see what you can get. Hope this helps ... cheers, MuS
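As a rough sketch of what such an input could look like on the receiving Splunk instance (the port, sourcetype, and certificate path are placeholders, not a tested Checkpoint configuration):

# inputs.conf
[tcp-ssl:6514]
sourcetype = checkpoint_syslog

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = <certificate password>
requireClientCert = false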