All Posts



We are experiencing issues configuring RADIUS authentication within Splunk. Despite following all required steps and configurations, authentication via RADIUS is not working as expected, and users are unable to authenticate through the RADIUS server.

- Installed a RADIUS client on the Splunk machine and configured radiusclient.conf with the RADIUS server details.
- Updated authentication.conf, located in $SPLUNK_HOME/etc/system/local/, as well as web.conf, to support RADIUS authentication requests in Splunk Web.
- Used the radtest tool to validate the connection from the Splunk RADIUS client.
- Monitored the Splunk authentication logs in $SPLUNK_HOME/var/log/splunk/splunkd.log to identify any errors, and consistently encountered the following error: Could not find [externalTwoFactorAuthSettings] in authentication stanza.
- Integrated radiusScripted.py to assist with RADIUS authentication, configuring it to work with the authentication settings.

It appears that Splunk is unable to authenticate with the RADIUS server, with repeated errors indicating missing configuration stanzas or unrecognized settings.

Environment details:
- Splunk version: 9.1.5
- Authentication configuration files: authentication.conf, web.conf
- Additional scripts: radiusScripted.py

Please advise on troubleshooting steps or configuration adjustments needed to resolve this issue. Any insights or documentation on RADIUS integration best practices with Splunk would be highly appreciated. Thanks!
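For what it's worth, the error text itself points at the [authentication] stanza: splunkd appears to be looking for an externalTwoFactorAuthSettings key there. A heavily hedged sketch of the shape it seems to expect follows; the key names come straight from the error message, but the vendor value and the settings stanza name/contents are illustrative assumptions, so verify every line against the authentication.conf.spec shipped with your 9.1.x install ($SPLUNK_HOME/etc/system/README/authentication.conf.spec):

```
[authentication]
# Existing RADIUS/scripted auth settings stay as configured...
# The error suggests splunkd also expects this pair, pointing at a settings stanza:
externalTwoFactorAuthVendor = <vendor name - assumption, check the spec>
externalTwoFactorAuthSettings = radius_mfa_settings

[radius_mfa_settings]
# Illustrative placeholder stanza name; the required keys for a RADIUS setup
# are documented in authentication.conf.spec, not invented here.
```

After editing, restart splunkd and re-check splunkd.log to see whether the "Could not find [externalTwoFactorAuthSettings]" error clears.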
The biggest change in the Universal Forwarder from 9.0 to 9.3 was the least-privileged user. https://docs.splunk.com/Documentation/Forwarder/9.3.1/Forwarder/InstallaWindowsuniversalforwarderfromaninstaller#Manage_SePrivilegeUser_permissions   Do you see any issues with the permissions? I recommend working with support if this is happening frequently on all your Windows hosts. If this reply helps, please upvote.
Like @jawahir007, I cannot reproduce your results. My instance is 9.3.1. Whether or not this is an update issue, Splunk itself does output the correct spacing in the stats table in my browsers, namely Safari 17.6, Chrome 129.0.6668.60, and Firefox 94.0.1. In short, Splunk does display extra spaces. Something in your browser's renderer is giving the incorrect display.
@jaibalaraman Hello! If I understand correctly, you mean something like a "reset token button"? I don't think that is currently supported in Studio; it might be available in future versions.

Something like this might work:
1. Add a button input
2. Configure the action as "Go to URL"
3. Set the URL to the dashboard's URL
4. Default tokens will be preserved as defined in the Initialize Tokens section

If this reply helps, please upvote.
As @PickleRick notes, if time is critical for correlation, bin is risky. This is one of the use cases where transaction is appropriate. But you cannot use span alone, and using index alone with _time is also unsafe. From the context implied in the mock data, you want transaction by user. It is very important that you describe this critical logic clearly and explicitly, in plain language rather than only in SPL.

The only obstacle is the field names "username" and "Users_Name"; this is easily overcome with coalesce. (It is never a good idea to illustrate mock data inaccurately. If the field name is Users_Name, you should consistently illustrate it as Users_Name and not usersname.) One element that distracted people is the servtime and logtime conversion in the initially illustrated SPL. These fields add no value to the use case.

This is the code that should get you started:

index=printserver OR index=printlogs
| eval username = coalesce(username, Users_Name)
| fields - usersname
| transaction username maxspan=3s
| table _time prnt_name username location directory file

Using your corrected mock data, the above gives

_time                prnt_name  username   location   directory            file
2024-11-04 12:20:56  Printer2   tim.allen  FrontDesk  c:/documents/files/  document2.xlsx
2024-11-04 11:05:32  Printer1   jon.doe    Office     c:/desktop/prints/   document1.doc

Here is an emulation of your mock data.
Play with it and compare with real data

| makeresults format=csv data="_time, prnt_name, username, location, _raw
2024-11-04 11:05:32, Printer1, jon.doe, Office, server event 1
2024-11-04 12:20:56, Printer2, tim.allen, FrontDesk, server event 2"
| eval index = "printserver"
| append [makeresults format=csv data="_time, Users_Name, directory, file, _raw
2024-11-04 11:05:33, jon.doe, c:/desktop/prints/, document1.doc, log event 1
2024-11-04 12:20:58, tim.allen, c:/documents/files/, document2.xlsx, log event 2"
| eval index = "printlogs"]
| eval _time = strptime(_time, "%F %T")
| sort - _time
``` the above emulates index=printserver OR index=printlogs ```

Hope this helps
Hint: check out the splunk dump command. Assuming this is for already-indexed data, and depending on how much data your searches return, this can be a quick-and-dirty way to dump to the SH's local disk; then you can just have some process run the aws s3 cp commands. The benefit is that you get formatted output with the fields you want to retain in the plaintext data that ends up in the files; output can also be compressed, files can be rolled, etc. It can also be triggered via the API or federated search.

Downside: depending on the definition of "tonnes" and how much time those tonnes span, the searches may need to be well thought out and broken down into chunks of time. The command is marked as "internal"/unsupported, but depending on your needs it may be fine. It requires a single long-running search that could fail. This is where something like posting the job, then programmatically pulling down the results and validating them with some logic, might be more robust, depending on your needs.

Other options would likely have you performing surgery on Splunk-formatted buckets, which is probably lower level than you need/want to go if some other system needs to eat the data. If this is also new data streaming in, I'd be looking at Ingest Actions to output to S3.
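As a sketch of the idea above (the dump command is internal/unsupported, so treat the exact options as assumptions to verify against your version's search reference; the index, field list, and rollsize here are placeholders):

```
index=my_index earliest=-24h@h latest=@h
| fields _time host source sourcetype _raw
| dump basefilename=export_chunk rollsize=512 format=csv
```

The output files land under the job's dispatch directory on the search head; from there a separate process can run something like `aws s3 cp <dispatch_dir>/dump/ s3://my-bucket/export/ --recursive` (bucket and paths hypothetical).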
Hello! Is it a single search head or a search head cluster? Have your indexers already been migrated to Azure? Will the SH(C) be searching Azure or on-prem indexers as well? What "components" do you rely on most on this SH(C)? Premium apps like ES or ITSI, or just Splunk Enterprise apps?

I would probably:
- back up the apps and kvstore if needed
- build the new SH/SHC in the cloud
- restore configs
- cut over DNS, or point users in a uniform fashion to the new SH, during a maintenance window
- shut down the old SH

Curious what docs you ended up in; it's always worth leaving feedback at the bottom of the docs page if the topic or guidance you needed was missing.
Hi there, Easiest way is to download, extract it and see what it includes. cheers, MuS
When using a group email address (with Owner permissions) to configure the integration between Splunk and GWS, an authentication error occurs. However, if a user's email address is used, the integration is successful. I thought that granting Owner permissions would allow the group email address to integrate successfully just like a user email address, but this was incorrect.

Ref: https://splunk.github.io/splunk-add-on-for-google-workspace/Configureinputs1/
==========
9. In the Service account details page for your new service account, perform the following steps:
~~~~~ Omitted ~~~~~
h. Navigate to the user name email address that has Owner permissions. Copy the email address.
==========
Upgraded the Machine Agent from v22 to v24.8.0.4467; after the change, I am seeing the errors below:

[DBAgent-7] 06 Nov 2024 17:24:28,994 ERROR ADBMonitorConfigResolver - [XXXXXXXXXXXX] Failed to resolve DB topological structure.
java.sql.SQLException: ORA-01005: null password given; logon denied

I have provided the user ID and password in Configuration -> Controller Settings. Has anyone faced this kind of issue?
Hi All, I would like to add a reset button to the dashboard; however, I am not able to see an option to add one in Dashboard Studio. Thanks
@jawahir007 I don't have any pending updates. If it's a browser issue, then why is the text displayed correctly in the search box, but not in the statistics table in Splunk? Thanks
Did you ever get an answer to this? I've found that the local.meta is only sent to the captain and should sync to the members, which does not happen for me. It seems that the replication for local.meta is not in the default system.conf. I'm wondering if adding the stanza to the local/system.conf would solve the issue, but I'm not sure which stanza we would need.
How can I get a list of all libraries included in this app? I will need that to get this through our security review.
Hi, it should work, and it’s working fine on my end. Try upgrading your browser if you have any pending updates.   ------ If you find this solution helpful, please consider accepting it and awarding karma points !!
Hello, Splunk doesn't display extra spaces in the values that I assigned. Please see the example below. I used Google Chrome and Microsoft Edge; both gave me the same results. If I export the CSV, the data has the correct number of spaces. Please suggest. Thank you

| makeresults
| fields - _time
| eval "One Space" = "One space Test"
| eval "Two Spaces" = "Two  spaces Test"
| eval "Three Spaces" = "Three   spaces Test"
Thanks!  
Hi there, Take a look at this https://community.splunk.com/t5/Splunk-Search/How-to-compare-fields-over-multiple-sourcetypes-without-join/m-p/113477  Basically, what you need to do is use an eval to... See more...
Hi there, Take a look at this https://community.splunk.com/t5/Splunk-Search/How-to-compare-fields-over-multiple-sourcetypes-without-join/m-p/113477

Basically, what you need to do is use an eval to normalise the client IP so it matches event.src_ip:

| eval clientIp = coalesce(vpn.client_ip, event.src_ip)

and use a 'stats ... by clientIp'. Hope this helps ... cheers, MuS
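A slightly fuller sketch of that pattern, with hypothetical index and field names (swap in your own):

```
index=events OR index=vpn
| eval clientIp = coalesce(client_ip, src_ip)
| stats values(username) as username values(sourcetype) as sourcetypes by clientIp
```

Events from both sides that share an IP collapse into one row, so the username from the VPN side rides along with the fields from the event side.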
@best-west Basically, we need to package a new app that has a props.conf with the SEDCMD, referencing your sourcetype for the data that needs the transform, and deploy it from the UI via uploaded apps. I think the issue might be because of 000-self-service-app. You can also ask Splunk support to make this update for you. Is this a Classic or Victoria stack?

If you want to create props/transforms as mentioned, try using ingest actions; see this as an example: https://lantern.splunk.com/Splunk_Platform/Product_Tips/Data_Management/Using_ingest_actions_to_filter_AWS_CloudTrail_logs

If my reply helps, please upvote.
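For reference, a minimal sketch of the props.conf such an app would carry; the sourcetype name and the sed expression are placeholders for your own (SEDCMD applies at parse time, so it only affects newly indexed data):

```
[your:sourcetype]
SEDCMD-mask_values = s/password=\S+/password=####/g
```

Package it as e.g. myorg_sedcmd/default/props.conf (app name hypothetical), then upload it via the UI as described above.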
I have an index with events containing a src_ip but no username for the event. I have another index of VPN auth logs that has the assigned IP and username, but the VPN IPs are randomly assigned.

I need to get the username from the VPN logs where vpn.client_ip matches event.src_ip, and I need to make sure that the returned username is the one that was assigned during the event. In short, I need to get the last vpn.client_ip assignment matching event.src_ip BEFORE the event, so that vpn.username is the correct one for event.src_ip.

Here's a generic representation of my current query, but I get nothing back:

index=event ...
| join left=event right=vpn where event.src_ip=vpn.client_ip max=1 usetime=true earlier=true
    [search index=vpn]
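One variant that may be worth trying (a sketch, not a guaranteed fix: it uses the classic single-field join form, with index and field names taken from the question, so adjust to your data):

```
index=event ...
| join type=left src_ip max=1 usetime=true earlier=true
    [ search index=vpn
      | rename client_ip as src_ip
      | fields _time src_ip username ]
```

With usetime=true and earlier=true, join pairs each event with the most recent earlier VPN assignment for that IP, which matches the "last assignment before the event" requirement.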