All Posts


@sloshburch @rjthibod Can you please explain what the RBAC-with-indexes approach is? And the wildcard approach?
Hi all, We have a dedicated AD group for each application: we create an index for that app and restrict access to that index to the corresponding AD group (covering all users of that specific app). Generally we are given an FQDN/hostname and we map it to the appropriate index. As a result we have numerous AD groups and indexes, but our client wants fewer AD groups because maintaining that many is difficult. So here is my question: is there any way to reduce the number of AD groups by restricting access by sourcetype rather than by index? In other words, can one index hold multiple applications, with each application restricted by sourcetype? If yes, please help me with the approach.
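For context, a minimal, hedged sketch of the two approaches in authorize.conf terms; the role names, index names, and sourcetype value below are hypothetical, and whichever restriction mechanism you settle on should be validated against the authorize.conf documentation.

# authorize.conf (illustrative only)
[role_app_a_users]                  # role mapped from app A's AD group
srchIndexesAllowed = app_a_idx      # current approach: one index per app

[role_app_b_users]                  # role for an app sharing a consolidated index
srchIndexesAllowed = shared_apps_idx
srchFilter = sourcetype::app_b      # restricts this role's searches to one sourcetype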
Self post.  Thank you Splunk team for the suggestion!
Hello friends! Long time gawker, first time poster. I wanted to share my recent journey on backing up and restoring Splunk user search history, for users that decided to migrate their search history to the KV Store using the feature mentioned in the release notes. As of now, and as with all backups/restores, please make sure you test. Hope this helps someone else. Thanks to all that helped test and validate (and listen to me vent) along the way! Please feel free to share your experiences if you use this feature, or if I may have missed something as well. I'll throw the code up shortly as well.

https://docs.splunk.com/Documentation/Splunk/9.1.6/ReleaseNotes/MeetSplunk
Preserve search history across search heads
Search history is lost when users switch between various nodes in a search head cluster. This feature utilizes KV store to keep search history replicated across nodes. See search_history_storage_mode in limits.conf in the Admin Manual for information on using this functionality.

### Backup KV store - pick your flavor of backing up (REST API, Splunk CLI, or a Splunk app like "KV Store Tools Redux")

# To back up just Search History
/opt/splunk/bin/splunk backup kvstore -archiveName `hostname`-SearchHistory_`date +%s`.tar.gz -appName system -collectionName SearchHistory

# To back up the entire KV store (most likely a good idea)
/opt/splunk/bin/splunk backup kvstore -archiveName `hostname`-SearchHistory_`date +%s`.tar.gz

### Restore archive

# Change directory to the location of the archive backup
cd /opt/splunk/var/lib/splunk/kvstorebackup

# Locate the archive to restore
ls -lst

# List archive files (optional, but helpful to see what's inside and how the archive will extract, to ensure you don't overwrite expected files)
tar ztvf SearchHistory_1731206815.tar.gz
-rw------- splunk/splunk 197500 2024-11-10 02:46 system/SearchHistory/SearchHistory0.json

# Extract the archive or selected files
tar zxvf SearchHistory_1731206815.tar.gz system/SearchHistory/SearchHistory0.json

### Parse archive to prep for restore

# Change directory to where the archive was extracted
cd /opt/splunk/var/lib/splunk/kvstorebackup/system/SearchHistory

# Create/copy the splunk_parse_search_history_kvstore_backup_per_user.py script to parse archives in this directory to /tmp (or someplace else), and run it on the archive(s)
./splunk_parse_search_history_kvstore_backup_per_user.py /opt/splunk/var/lib/splunk/kvstorebackup/system/SearchHistory/SearchHistory0.json

# List the files created
ls -ls SearchHistory0*
 96 -rw-rw-r-- 1 splunk splunk  95858 Nov 14 23:12 SearchHistory0_admin.json
108 -rw-rw-r-- 1 splunk splunk 108106 Nov 14 23:12 SearchHistory0_nobody.json

### Restore the archives needed

# NOTE: To prevent SearchHistory leaking between users, you MUST restore to the corresponding user context
# Either loop/iterate through the restored files or do them one at a time, calling the corresponding REST API
curl -k -u admin https://localhost:8089/servicesNS/<user>/system/storage/collections/data/SearchHistory/batch_save -H "Content-Type: application/json" -d @SearchHistory0_<user>.json

### Validate that the SearchHistory KV store was restored properly for the user, by calling the REST API and/or logging into Splunk as that user, navigating to "Search & Reporting", and selecting "Search History"
curl -k -u admin https://localhost:8089/servicesNS/<user>/system/storage/collections/data/SearchHistory

#### NOTE: There are default limits in the KV store that you need to account for if your files are large!
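Since the per-user parse script isn't posted yet, here is a rough, hypothetical sketch of what that step could look like, assuming the backup JSON is a single array of SearchHistory records that each carry a user field (the actual field name may differ in your backup, so inspect the JSON first). This is an illustrative stand-in, not the script referenced above.

#!/usr/bin/env python3
# Hypothetical sketch: split a SearchHistory KV store backup into one JSON file per user,
# so each file can be restored via batch_save under that user's servicesNS context.
# Assumes each record carries a "user" field; verify against your own backup JSON.
import json
import sys
from collections import defaultdict

def split_by_user(path):
    with open(path) as f:
        records = json.load(f)  # assumed: the backup file is one JSON array of records
    per_user = defaultdict(list)
    for rec in records:
        per_user[rec.get("user", "nobody")].append(rec)
    base = path.rsplit(".json", 1)[0]
    for user, recs in per_user.items():
        out = "{}_{}.json".format(base, user)
        with open(out, "w") as f:
            json.dump(recs, f)
        print("wrote {} records to {}".format(len(recs), out))

if __name__ == "__main__":
    split_by_user(sys.argv[1])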
If you run into problems, review your splunkd.log and/or the KV Store dashboards within the Monitoring Console (Search --> KV Store).

# /opt/splunk/bin/splunk btool limits list --debug kvstore
/opt/splunk/etc/system/default/limits.conf    [kvstore]
/opt/splunk/etc/system/default/limits.conf    max_accelerations_per_collection = 10
/opt/splunk/etc/system/default/limits.conf    max_documents_per_batch_save = 50000
/opt/splunk/etc/system/default/limits.conf    max_fields_per_acceleration = 10
/opt/splunk/etc/system/default/limits.conf    max_mem_usage_mb = 200
/opt/splunk/etc/system/default/limits.conf    max_queries_per_batch = 1000
/opt/splunk/etc/system/default/limits.conf    max_rows_in_memory_per_dump = 200
/opt/splunk/etc/system/default/limits.conf    max_rows_per_query = 50000
/opt/splunk/etc/system/default/limits.conf    max_size_per_batch_result_mb = 100
/opt/splunk/etc/system/default/limits.conf    max_size_per_batch_save_mb = 50
/opt/splunk/etc/system/default/limits.conf    max_size_per_result_mb = 50
/opt/splunk/etc/system/default/limits.conf    max_threads_per_outputlookup = 1

### Troubleshooting

# To delete the entire SearchHistory KV store (because maybe you inadvertently restored everything to an incorrect user, testing, or other shenanigans)
/opt/splunk/bin/splunk clean kvstore -app system -collection SearchHistory

# To delete a user-specific context in the SearchHistory KV store (because see above)
curl -k -u admin:splunk@dmin https://localhost:8089/servicesNS/<user>/system/storage/collections/data/SearchHistory -X DELETE

### Additional Notes
It was noted that restoring for a user that has not logged in yet may report messages similar to "Action forbidden". To remedy this, you might be able to create a local user and then restore again.
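On the limits note above: max_documents_per_batch_save and max_size_per_batch_save_mb cap how much a single batch_save call accepts, so a very large per-user file may need to be split into smaller pieces before restoring. A minimal sketch of that idea, assuming the per-user file is a JSON array (the chunk size here is arbitrary):

#!/usr/bin/env python3
# Sketch: split a large per-user SearchHistory JSON file into smaller chunks so each
# chunk stays under the KV store batch_save limits. Each chunk is then POSTed separately.
import json
import sys

def chunk_file(path, chunk_size=10000):
    with open(path) as f:
        records = json.load(f)
    base = path.rsplit(".json", 1)[0]
    for i in range(0, len(records), chunk_size):
        out = "{}_part{}.json".format(base, i // chunk_size)
        with open(out, "w") as f:
            json.dump(records[i:i + chunk_size], f)
        print("wrote {}".format(out))

if __name__ == "__main__":
    chunk_file(sys.argv[1])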
Hi there, if I recall correctly, the Check Point only supports syslog over TCP and can therefore use TLS. Splunk's syslog input only supports UDP and no SSL. That said, you could use a TCP input, configure TLS/SSL (https://docs.splunk.com/Documentation/Splunk/8.2.7/Admin/Inputsconf) and see what you can get. Hope this helps ... cheers, MuS
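As a rough illustration of that suggestion, a TLS-enabled TCP input in inputs.conf could look something like the sketch below; the port, certificate path, index, and sourcetype are assumptions, and the Check Point Log Exporter side would need to point at the same port with TLS enabled.

# inputs.conf on the receiving Splunk instance (illustrative values only)
[tcp-ssl://6514]
sourcetype = cp_log        # assumed sourcetype
index = checkpoint         # assumed index

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/receiver_cert.pem    # assumed path
sslPassword = <certificate_password>
requireClientCert = false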
Hi, I am deploying Splunk Enterprise and will eventually be forwarding Check Point Firewall logs using Check Point's Log Exporter. Check Point provides the option to select "Syslog" or "Splunk" as the log format (there are some other formats as well); I will choose "Splunk". I need to know how to configure Splunk Enterprise to receive encrypted traffic from Check Point if I use TLS at the Check Point to send encrypted traffic to Splunk. Can someone enlighten me on this please?   Thanks!
My effective daily volume was 3072 MB, coming from three licenses of 1024 MB each. These were about to expire, so we added another three 1024 MB licenses, bringing the effective daily volume to 6144 MB. But when the original three licenses expired, the effective daily volume dropped from 6144 MB to 1024 MB instead of 3072 MB. Does anyone know why it is not counting the three remaining licenses correctly? Even after removing the three expired ones, it is still limited to 1024 MB, and restarting Splunk gives the same result.
Hi everyone, I'm trying to personalize the "Configuration" tab of my app generated by Add-on Builder. By default, when we add an account, we enter the Account Name / Username / Password. Firstly, I would simply like to change the labels for Username and Password to Client ID and Client Secret (and secondly, add a Tenant ID field). I achieved this by editing the file at $SPLUNK_HOME/etc/apps/<my_app>/appserver/static/js/build/globalConfig.json and then incrementing the version number in the app properties (as shown in this post https://community.splunk.com/t5/Getting-Data-In/Splunk-Add-on-Builder-Global-Account-settings/m-p/570565). However, when I make new modifications elsewhere in my app, the globalConfig.json file is reset to its default values. Does anyone know how to make these changes persist? Splunk Version: 9.2.1 Add-On Builder version: 4.3.0 Thanks
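For illustration, the label change described above typically means editing the account entities inside globalConfig.json; a fragment might look roughly like the sketch below, though the exact surrounding structure depends on the Add-on Builder version, so treat this as an assumption rather than the definitive schema.

{
  "field": "username",
  "label": "Client ID",
  "type": "text",
  "required": true
},
{
  "field": "password",
  "label": "Client Secret",
  "type": "text",
  "encrypted": true,
  "required": true
}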
The reason why it's naming the series 2023 is that the current month is now November 2024, so it's wrapping by 12 months: the first series is Dec 2023 -> Nov 2024. Even though you are only searching for data in the current year, the timewrap command works out the series name based on your timewrap span of 1y. If you ran the search with earliest=@y latest=+y@y, which searches from 2024-01-01 to 2024-12-31, it would label the series correctly as 2024. So, it's just a function of timewrap. You can see this more clearly if you set your time_format to include the month, i.e. time_format=%Y-%m - then the month shows up in the series names - and if you change series=exact to series=relative, you will see it's 'latest_year', which means a 12-month period. Hope this helps
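A minimal SPL sketch of the behaviour described above (the index and the monthly span are placeholders for whatever the real search uses):

index=_internal earliest=@y latest=+y@y
| timechart span=1mon count
| timewrap 1y series=exact time_format=%Y-%m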
Any subsearch will have a limit. There is a way to combine two lookup datasets without using append, e.g.

| inputlookup file1.csv
| inputlookup append=t file2.csv

Using append=t on the second inputlookup does NOT have a subsearch limitation. Without knowing what you are doing in more detail, it's impossible to suggest a solution; however, even though you are using commands such as mvexpand, it is generally possible to use a single search (index=A OR index=B) and then manipulate the result set to get what you want.
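A hedged sketch of that single-search pattern; the indexes, the join field, and the stats clauses are placeholders for whatever the real correlation looks like:

(index=A OR index=B) common_field=*
| eval source_index=index
| stats values(fieldX) as fieldX values(fieldY) as fieldY values(source_index) as source_index by common_field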
Yes, if you are using SCP then ACS is the way to do this. There is also a Terraform connector for this kind of thing, if that tool is familiar to you. And if you are a partner, there is a presentation from GPS a couple of years ago which gives you an excellent framework for managing clients' SCP environments.
I had the same issue today on a fresh Splunk installation. I solved it by doing the following:

1. Install Java 11 on my server.

2. Configure the app.conf file under the local folder of the DB Connect application (\etc\apps\splunk_app_db_connect\local):

[install]
is_configured = 1

3. Under the same folder, create the dbx_settings.conf file with the following:

[java]
javaHome = C:\Program Files\Java\jdk-11

I'm fairly sure all I needed was is_configured set to 1. Please try and validate this.
This shouldn't matter, as _time doesn't get its value from c_time or Time. Basically, those lines are not needed - unless there is some weird alias in props.conf or something which puts e.g. Time into the _time field? You should try to find where in this dashboard something is manipulating _time based on the c_time or Time field.
Hi, here is one old post which contains a link to a python script to do this. https://community.splunk.com/t5/Dashboards-Visualizations/Can-we-move-the-saved-searches-or-knowledge-objects-created/m-p/672741/highlight/true#M55102 As already said, you can select all those objects under Reassign Knowledge Objects. After that, it gives you the option to reassign them in bulk. Usually this works as expected, but from time to time you cannot change all those KOs with the GUI. Then just use the previously mentioned python script and it will do the rest. r. Ismo
Hi, first, as you are using UDP as the transport protocol you will definitely lose events. You cannot do anything about it, as it is inherent to that protocol. You should build a separate syslog cluster with a VIP address and then send the syslog events from those backends to Splunk. Both rsyslog and syslog-ng are suitable for that. If you don't have enough experience with syslog servers, probably the easiest way to achieve this is to use Splunk's SC4S. You can find it at https://splunk.github.io/splunk-connect-for-syslog/main/ and https://splunkbase.splunk.com/app/4740 There are also some .conf presentations about it, probably from 2020 (or 2019). And never use an HF or indexer to terminate a TCP/UDP syslog feed with Splunk; always use a separate syslog server. r. Ismo
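For illustration only, a minimal rsyslog sketch of the "write each host to its own file, then forward" pattern; the ports, paths, and the choice to pick the files up with a Universal Forwarder are assumptions rather than anything stated in the thread.

# /etc/rsyslog.d/30-remote.conf (illustrative)
module(load="imudp")
module(load="imtcp")
input(type="imudp" port="514")
input(type="imtcp" port="514")

# Write each sending host's events to its own file; a Universal Forwarder (or SC4S instead
# of all of this) would then pick up /var/log/remote/ and send the events on to the indexers.
template(name="PerHost" type="string" string="/var/log/remote/%HOSTNAME%/syslog.log")
if $fromhost-ip != "127.0.0.1" then {
    action(type="omfile" dynaFile="PerHost")
    stop
}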
I'm sorry, I think I put it in the wrong place. We're using Splunk Cloud, so this solution (ACS) will probably work. I'll update once I've worked on it to confirm it meets my needs.
Based on the group where you have posted this question, you are doing this on Splunk Enterprise, not Splunk Cloud? ACS works only with Cloud, not with Enterprise. In Enterprise you need CLI access to the node, and then you can script it. E.g. Ansible is a good tool to manage installations. You could have a control node where you pull packages/apps from git and then install them with an Ansible playbook.
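A rough Ansible sketch of that idea; the app archive name, paths, host group, and the decision to restart via the Splunk CLI are all assumptions made for the example.

# playbook.yml (illustrative)
- hosts: splunk_nodes
  become: true
  tasks:
    - name: Unpack the app (fetched from git into files/) into etc/apps
      ansible.builtin.unarchive:
        src: files/my_app.tgz        # hypothetical app package
        dest: /opt/splunk/etc/apps
        owner: splunk
        group: splunk

    - name: Restart Splunk to pick up the new app
      ansible.builtin.command: /opt/splunk/bin/splunk restart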
@gcusello have you tried adding a _meta entry in your HF/UF's inputs.conf and putting that information there? I think that could meet your needs.
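For illustration, a _meta entry in inputs.conf looks roughly like this; the monitor stanza, field names, and values are made up for the example.

# inputs.conf on the forwarder (illustrative)
[monitor:///var/log/myapp]
index = myapp
_meta = environment::production datacenter::eu-west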
Those messages are quite normal and don't describe what issue you have. Have you tried e.g. nc or curl to check whether the master is listening for peers and responds with anything? Is pass4SymmKey working, or are there any messages about it in _internal? By the way, when you post logs, please use the code block element </> where you paste those lines. It's much easier to read, and we can be sure that what we see is exactly what you pasted. If the connection between the master and a peer is working, there are lots of messages in _internal.
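A couple of hedged examples of the kind of connectivity check meant above, run from a peer; the hostname is a placeholder and 8089 assumes the default management port.

# Is the master's management port reachable at all?
nc -vz cluster-master.example.com 8089

# Does splunkd answer on that port? (-k because the default certificate is self-signed)
curl -k -u admin https://cluster-master.example.com:8089/services/server/info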
Hi, based on these conf files it seems to do the following:
- Take the timestamp from the beginning of the event and put it into _time
- Ensure that lines are not longer than 10000 characters
- The syslog-host transform is missing, so I cannot tell what it does!
- Extract the hostname from the event and save it into metadata for use in the next step
- Define the index to use based on the hostname (FQDN) in the event; the FQDN-to-index mapping is defined in that CSV lookup file
- Change \r\n newlines to just \n
- Don't generate punctuation for the event
More detailed information is in the links which @PaulPanther added in his post. r. Ismo
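For readers following along, props.conf settings of the kind being described here look roughly like the sketch below; this is illustrative of the setting names only, not the actual files from this thread (the sourcetype, timestamp format, and transform names are assumptions, and the lookup-based index routing is left to the unnamed transform).

# props.conf (illustrative of the kinds of settings described, not the actual config)
[my_syslog_sourcetype]                          # assumed sourcetype
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z           # assumed timestamp format
TRUNCATE = 10000
SEDCMD-strip_cr = s/\r//g                       # strip carriage returns so \r\n becomes \n
ANNOTATE_PUNCT = false
TRANSFORMS-route = syslog-host, set_index_from_fqdn    # transform names assumed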