All Posts



You're thinking about this too much as a "programming" exercise. SPL works differently. A bit like a bash one-liner (I suppose the pipe chars in the SPL syntax weren't chosen randomly ;-)) So please be a bit more descriptive about what you want to do with those four fields returned from the ldapsearch.  
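For instance, if the goal is just to filter or tabulate those attributes, it can often be done in one pipeline. A rough sketch, assuming the SA-ldapsearch app's `ldapsearch` command and made-up attribute names (substitute your own domain, filter, and fields):

```
| ldapsearch domain=default search="(objectClass=user)" attrs="sAMAccountName,mail,distinguishedName,memberOf"
| table sAMAccountName mail distinguishedName memberOf
| search memberOf="*Admins*"
```

Whether this fits depends entirely on what you intend to do with those four fields, which is why the question matters.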
An HF is just an indexer with local indexing disabled. So if you want to index locally, you're effectively turning your server into an indexer with additional forwarding enabled. While for a "pure HF" the forwarder license will do, for an "indexing HF" you need to properly license the instance as you would any other indexer.
The question was asked because, if you had an HF in front of your indexers, that's where your index-time props would be applied. Since you're using a UF to push data to Cloud, you indeed need to push an app to the Cloud, as @sainag_splunk wrote.
Pro tip: "no luck" and "doesn't work" are bad phrases in a discussion forum, as they convey little information even in the best case. If I have to read your mind, the second search returns no results. Is this correct?

Before diagnosing the second search, I want to look at the first one. What's wrong with simply plugging your token into that one?

...base search (member_dn=$userid$ OR member_id=$userid$ OR Member_Security_ID=$userid$ OR member_user_name=$userid$)

Not only is this the simplest way to express your condition, it is also more efficient.

As to your second one, it does not express what you think it "should" do. When the compiler sees a token in a search, it simply substitutes the token's current value in its place. Suppose your user sets $userid$ to joeshmoe. After compilation, the SPL engine sees this expression:

index=windows_logs | where joeshmoe IN (member_dn, member_id, Member_Security_ID, member_user_name)

It is highly unlikely that your data set has a field named joeshmoe AND that this field has values equal to one of those four fields. It is much more likely that member_dn, member_id, Member_Security_ID, or member_user_name in your dataset has a literal value of "joeshmoe". In SPL, all eval expressions treat bare words as either a function name or a field name, never as a string literal. (As such, the second phrase in that second search, | eval userid=johnsmith, assigns a null value to userid.)

So, if you want to use the where command instead of plugging the token into the index search, quote the token properly:

index=windows_logs | where "$userid$" IN (member_dn, member_id, Member_Security_ID, member_user_name)

I still recommend the first one, however. (Note: search, implied in the first line, is one of the few SPL commands that interprets bare words as literals unless they explicitly appear on the left-hand side of a search operator such as = and IN.)

Hope this helps.
Hi @dharris_splunk, there's only one license that permits indexing logs, the normal license, as described at https://docs.splunk.com/Documentation/Splunk/9.3.1/Admin/TypesofSplunklicenses . Only for HFs is there a Forwarder license, but it doesn't permit indexing logs locally. Ciao. Giuseppe
Hi @yuanliu, Could it be an issue with my Splunk profile? I am using Splunk Enterprise Version 9.0.4. My browsers are Google Chrome Version 130.0.6723.117 (Official Build) (64-bit) and Microsoft Edge Version 130.0.2849.68 (Official build) (64-bit). If it's a browser issue, why is it not working in another browser like Microsoft Edge? Thank you for your help.
An HF does NOT need an enterprise license if you are just ingesting/parsing. On an HF, if you are indexing locally, you need an enterprise license. If this reply helps, Please UpVote
We are experiencing issues configuring RADIUS authentication within Splunk. Despite following all required steps and configurations, authentication via RADIUS is not working as expected, and users are unable to authenticate through the RADIUS server.

- Installed the RADIUS client on the Splunk machine and configured the radiusclient.conf file with the RADIUS server data.
- Updated the authentication.conf file located in $SPLUNK_HOME/etc/system/local/, as well as web.conf, to support RADIUS authentication requests in Splunk Web.
- Used the radtest tool to validate the connection from the Splunk RADIUS client.
- Monitored the Splunk authentication logs in $SPLUNK_HOME/var/log/splunk/splunkd.log and consistently encountered the following error: Could not find [externalTwoFactorAuthSettings] in authentication stanza.
- Integrated radiusScripted.py to assist with RADIUS authentication, configuring it to work with the authentication settings.

It appears that Splunk is unable to authenticate against the RADIUS server, with repeated errors indicating missing configuration stanzas or unrecognized settings.

Environment details:
- Splunk version: 9.1.5
- Authentication configuration files: authentication.conf, web.conf
- Additional scripts: radiusScripted.py

Please advise on troubleshooting steps or configuration adjustments needed to resolve this issue. Any insights or documentation on RADIUS integration best practices with Splunk would be highly appreciated. Thanks!
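For comparison, here is a minimal sketch of what a scripted-authentication setup in authentication.conf typically looks like, assuming radiusScripted.py implements Splunk's scripted authentication interface. The script path and cache values below are placeholders, not your actual settings. The "[externalTwoFactorAuthSettings]" error usually means some MFA-related key (e.g. an externalTwoFactorAuthVendor setting) references a stanza that doesn't exist; check for any such keys left over in your authentication.conf and either give them a matching stanza or remove them.

```ini
# $SPLUNK_HOME/etc/system/local/authentication.conf -- sketch only, paths are placeholders
[authentication]
authType = Scripted
authSettings = script

[script]
# Interpreter followed by the auth script
scriptPath = "$SPLUNK_HOME/bin/python3" "$SPLUNK_HOME/etc/system/bin/radiusScripted.py"

[cacheTiming]
# Cache successful logins briefly so every request doesn't hit the RADIUS server
userLoginTTL = 10s
getUserInfoTTL = 1min
getUsersTTL = 1min
```

Verify the effective configuration with `splunk btool authentication list --debug` to see which stanzas splunkd actually resolves.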
The biggest change in the Universal Forwarder from 9.0 to 9.3 was the least-privileged user. https://docs.splunk.com/Documentation/Forwarder/9.3.1/Forwarder/InstallaWindowsuniversalforwarderfromaninstaller#Manage_SePrivilegeUser_permissions Do you see any issues with the permissions? I recommend working with Support if this is happening frequently on all your Windows hosts. If this reply helps, Please UpVote.
Like @jawahir007, I cannot reproduce your results. My instance is 9.3.1. Whether or not this is an update issue, Splunk itself does output the correct spacing in the stats table in my browsers, namely Safari 17.6, Chrome 129.0.6668.60, and Firefox 94.0.1. In short, Splunk does emit the extra spaces. Something in your browser's renderer is producing the incorrect display.
@jaibalaraman Hello! If I understand correctly, you mean something like a "reset token button"? I don't think that is currently supported in Studio; it might be available in future versions.

Something like this might work:
1. Add a Button Input.
2. Configure the action as "Go to URL".
3. Set the URL to the dashboard's own URL.
4. Default tokens will be preserved as defined in the Initialize Tokens section.

If this reply helps, Please upvote.
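An alternative sketch of the same idea: a markdown panel whose link points back at the dashboard's own URL, so clicking it reloads the page and re-applies the defaults from the Initialize Tokens section. The app and dashboard names below are placeholders; adjust to your environment. Something like this fragment in the Dashboard Studio source:

```json
{
  "type": "splunk.markdown",
  "options": {
    "markdown": "[Reset filters](/app/search/my_dashboard)"
  }
}
```

Note this reloads the whole dashboard rather than resetting tokens in place, which may or may not be acceptable for your users.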
As @PickleRick notes, if time is critical for correlation, bin is risky. This is one of the use cases where transaction is appropriate. But you cannot use span alone, and using index alone with _time is also unsafe. From the context implied in the mock data, you want transaction by user. It is very important that you describe this critical logic clearly, explicitly, and without help from SPL.

The only obstacle is the field names "username" and "Users_Name"; this is easily overcome with coalesce. (It is never a good idea to illustrate mock data inaccurately. If the field name is Users_Name, you should consistently illustrate it as Users_Name and not usersname.) One element that distracted people is the servtime and logtime conversion in the initially illustrated SPL. Those fields add no value to the use case.

This is the code that should get you started:

index=printserver OR index=printlogs
| eval username = coalesce(username, Users_Name)
| fields - Users_Name
| transaction username maxspan=3s
| table _time prnt_name username location directory file

Using your corrected mock data, the above gives

_time                prnt_name  username   location   directory            file
2024-11-04 12:20:56  Printer2   tim.allen  FrontDesk  c:/documents/files/  document2.xlsx
2024-11-04 11:05:32  Printer1   jon.doe    Office     c:/desktop/prints/   document1.doc

Here is an emulation of your mock data.
Play with it and compare with real data:

| makeresults format=csv data="_time, prnt_name, username, location, _raw
2024-11-04 11:05:32, Printer1, jon.doe, Office, server event 1
2024-11-04 12:20:56, Printer2, tim.allen, FrontDesk, server event 2"
| eval index = "printserver"
| append [makeresults format=csv data="_time, Users_Name, directory, file, _raw
2024-11-04 11:05:33, jon.doe, c:/desktop/prints/, document1.doc, log event 1
2024-11-04 12:20:58, tim.allen, c:/documents/files/, document2.xlsx, log event 2"
| eval index = "printlogs"]
| eval _time = strptime(_time, "%F %T")
| sort - _time
``` the above emulates index=printserver OR index=printlogs ```

Hope this helps
Hint: check out the splunk dump command. Assuming this is for already-indexed data, and depending on how much data your searches return, this can be a quick and dirty way to dump to the SH's local disk; then you can just have some process run the aws s3 cp commands. The benefit of this is that you can get formatted output, with the fields you want to retain, in the plaintext data that ends up in the files. Output can also be compressed, files can be rolled, etc., and it can be triggered via the API or federated search.

Downside: depending on the definition of "tonnes" and how much time those tonnes span, the searches may need to be well thought out and broken down into chunks of time. The command is marked as "internal"/unsupported, but depending on your needs it may be fine. It requires a single long-running search that could fail; this is where something like posting the job, then programmatically pulling down and validating the results with some logic, might be more robust depending on your needs.

Other options would likely have you doing surgery on Splunk-formatted buckets, which is probably lower level than you need/want to go if some other system needs to eat the data. If this is also new data streaming in, I'd be looking at Ingest Actions to output to S3.
Hello! Is it a single search head or a search head cluster? Have your indexers already been migrated to Azure? Will the SH(C) be searching Azure or On-Prem indexers as well? What "components" do you rely on most on this SH(C)? Premium apps like ES or ITSI, or just Splunk Enterprise apps?

I would probably:
- back up the apps and kvstore if needed
- build the new SH/SHC in the cloud
- restore configs
- cut over DNS, or point users in a uniform fashion to the new SH, during a maintenance window
- shut down the old SH

Curious which docs you ended up in; it's always worth leaving feedback at the bottom of the docs page if the topic or guidance you needed was missing.
Hi there, Easiest way is to download, extract it and see what it includes. cheers, MuS
When I use a group email address (with Owner permissions) and configure the integration between Splunk and GWS, an authentication error occurs. However, if I use a user-name email address, the integration is successful. I thought that granting Owner permissions would allow the group email address to integrate successfully just like a user email address, but this was incorrect.

Ref: https://splunk.github.io/splunk-add-on-for-google-workspace/Configureinputs1/
==========
9. In the Service account details page for your new service account, perform the following steps:
~~~~~ Omitted ~~~~~
h. Navigate to the user name email address that has Owner permissions. Copy the email address.
==========
Upgraded the Machine Agent from 22 to v24.8.0.4467; after the change, I am seeing the errors below:

[DBAgent-7] 06 Nov 2024 17:24:28,994 ERROR ADBMonitorConfigResolver - [XXXXXXXXXXXX] Failed to resolve DB topological structure. java.sql.SQLException: ORA-01005: null password given; logon denied

I have provided the user ID and password in Configuration -> Controller Settings. Has anyone faced this kind of issue?
Hi All, I would like to add a reset button to the dashboard; however, I am not able to see an option for it in Dashboard Studio. Thanks
@jawahir007  I don't have any pending updates.  If it's a browser issue, then why is it displaying text correctly in the search box, but not on the statistical table in Splunk?    Thanks
Did you ever get an answer to this? I've found that local.meta is only sent to the Captain and should sync to the members, which does not happen for me. It seems that the replication for local.meta is not in the /default/system.conf. I'm wondering if adding the stanza to the local/system.conf would solve the issue, but I'm not sure which stanza we would need.
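If the problem is that .meta changes are excluded from SHC configuration replication, the knob to look at is the conf_replication_include.* family under [shclustering] in server.conf. The key name below is an assumption, not a verified setting; check `splunk btool server list shclustering` and the server.conf spec for your version before relying on it:

```ini
# $SPLUNK_HOME/etc/system/local/server.conf on SHC members -- unverified sketch
[shclustering]
# Key name "metadata" is an assumption; confirm against server.conf.spec
conf_replication_include.metadata = true
```

Comparing the btool output on a member against the Captain should show whether the metadata replication setting differs between them.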