All Posts

[field] is improper syntax for the regex command.  Use the field name by itself.  If it's an argument to a macro then use $field$.
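For example, a minimal sketch (the field name process is illustrative — substitute whichever field holds the command line; note also that the usage string in the error means regex accepts a single quoted pattern, so the two strings in the related question below would need to be combined into one expression):

| regex process!="^C:\\WINDOWS\\System32\\WindowsPowerShell\\v1\.0\\powershell\.exe.*Resolve-DnsName \d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3} \| Select-Object -Property NameHost$"

Inside a macro definition, the field can be passed as an argument instead:

| regex $field$!="^C:\\WINDOWS\\.*powershell\.exe"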
This is a really old question, but I want to share what I've learned after a recent incident led us to look at our search artifact count and discover a huge spike after a config change. There is [currently] no setting you can update to fix the TTL config for DMAs. DMAs have a hard-coded TTL of 300s unless there are error messages from the accelerated search, in which case it goes to 86400s. It only goes back down to 300s if the previous run had no errors.

After this incident, we filed an enhancement ticket to make this configurable. It was just filed, though, and hasn't been triaged or worked on, so there is no timeline at all on when it would be configurable.

The best course of action for now would be to figure out what the errors are in the DMAs (visible on the Data Model page, when you expand the data model) and resolve them so the errors stop. Or just clear the directory manually or with some script.
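To spot which acceleration jobs are holding long TTLs, a REST search along these lines may help (a rough sketch — the sid filter assumes acceleration job IDs contain ACCELERATE, and the available fields can vary by version):

| rest /services/search/jobs splunk_server=local
| search sid=*ACCELERATE*
| table sid ttl runDuration isFailed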
Hey everyone, I'm doing some testing around ingesting Zscaler ZPA logs into Splunk using LSS. I'd appreciate any guidance and any relevant configurations that could help.
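For reference, LSS streams logs over TCP, so the Splunk-side receiver is typically a plain TCP input on a heavy forwarder — a minimal sketch (the port, index, and sourcetype here are illustrative; check the Zscaler TA documentation for the exact sourcetype names it expects):

inputs.conf

[tcp://9000]
sourcetype = zscalerlss-zpa-app
index = zscaler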
Looking for SPL that will give me the ID Cost by month, only grabbing the last event (_time) for that month. I have a system that updates cost daily for the same ID. Looking for guidance before I venture down a wrong path. Sample data below. Thank you!

bill_date  ID  Cost  _time
6/1/25     1   1.24  2025-06-16T12:42:41.282-04:00
6/1/25     1   1.4   2025-06-16T12:00:41.282-04:00
5/1/25     1   2.5   2025-06-15T12:42:41.282-04:00
5/1/25     1   2.2   2025-06-14T12:00:41.282-04:00
5/1/25     2   3.2   2025-06-14T12:42:41.282-04:00
5/1/25     2   3.3   2025-06-14T12:00:41.282-04:00
3/1/25     1   4.4   2025-06-13T12:42:41.282-04:00
3/1/25     1   5     2025-06-13T12:00:41.282-04:00
3/1/25     2   6     2025-06-13T12:42:41.282-04:00
3/1/25     2   6.3   2025-06-13T12:00:41.282-04:00
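One possible starting point (a sketch — the index and sourcetype are placeholders, and it assumes bill_date, ID, and Cost are already extracted fields; latest() picks the value from the event with the most recent _time within each group):

index=your_index sourcetype=your_sourcetype
| stats latest(Cost) as Cost latest(_time) as last_update by bill_date ID
| eval last_update=strftime(last_update, "%Y-%m-%dT%H:%M:%S%z")
| sort bill_date ID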
I have a question regarding how to handle a regex query in a macro. Below I have a regex similar to the one I'm using; it matches when I use a regex checker, but when I try to add it to a simple search macro in Splunk it gives an error:

Error: Error in 'SearchOperator:regex': Usage: regex <field> (=|!=) <regex>.

The macro is tied to the rule. It basically has the first part of a script path, then an IP address it ignores, and then a second part of the script. The one below is really simplified but gets the same error.

Regex example:

| regex [field] !="^C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\powershell.exe" "Resolve-DnsName \b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b \| Select-Object -Property NameHost$"

String to check against in this example:

C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe" "Resolve-DnsName 0.0.0.0 | Select-Object -Property NameHost

I feel like this should work, but maybe there is something I'm missing about how Splunk handles regex and how I need to tweak it. Any info on this would be greatly appreciated. Thanks.
Hi @sawwinnaung, good for you, see you next time!

Ciao and happy splunking
Giuseppe

P.S.: Karma Points are appreciated by all the contributors
Thank you! You helped me a lot with the time issue!
Just a quick update to my final saved search, which allows a simple double-click and paste of a new MAC in any format, but will never return a result initially. I am using this as a report.

index=main sourcetype=syslog
    [ | makeresults
      | eval input_mac="INPUT_HERE"
      | eval mac_clean=lower(replace(input_mac, "[^0-9A-Fa-f]", ""))
      | where len(mac_clean)=12
      | eval mac_colon=replace(mac_clean, "(..)(..)(..)(..)(..)(..)", "\1:\2:\3:\4:\5:\6")
      | eval mac_hyphen=replace(mac_clean, "(..)(..)(..)(..)(..)(..)", "\1-\2-\3-\4-\5-\6")
      | eval mac_dot=replace(mac_clean, "(....)(....)(....)", "\1.\2.\3")
      | eval query=mvappend(mac_clean, mac_colon, mac_hyphen, mac_dot)
      | mvexpand query
      | where isnotnull(query)
      | fields query
      | format ]
| table _raw
Hi, I'm onboarding some new data and I'm working on the field extractions. The data is proper JSON related to emails. I'm having a hard time with the "attachments" field, which I'm trying to make CIM compliant. This attachments field is multivalue (it's a JSON array) and contains:
- the string "attachments" in position 0 (the first position)
- the file name in every odd position (1, 3, 5, etc.)
- the file hash in every even position

So far I've done it in SPL, but I can't find a way to do it in props.conf (because in props.conf you can't do a multiline eval: every EVAL- is computed independently, in parallel) or in transforms.conf.

Here is what I've done in SPL:

| makeresults
| eval attachments = mvappend("attachments", "doc1.pdf", "abc123", "doc2.pdf", "def456", "doc3.bla", "ghx789")
``` To get rid of the string "attachments" ```
| eval attachments = mvindex(attachments, 1, mvcount(attachments)-1)
``` To create an index ```
| eval index_attachments=mvrange(0,mvcount(attachments),1)
``` To record in file_type whether the value is a file_name or a file_hash ```
| eval modulo = mvmap(index_attachments, 'index_attachments'%2)
| eval file_type = mvmap(modulo, if(modulo=0,"file_name", "file_hash"))
``` To zip all that with a "::::SPLIT::::" ```
| eval file_pair = mvzip('file_type', attachments, "::::SPLIT::::")
``` To then create file_name and file_hash ```
| eval file_name = mvmap(file_pair, if(match(file_pair, "file_name::::SPLIT::::.*"), 'file_pair', null() ))
| eval file_hash = mvmap(file_pair, if(match(file_pair, "file_hash::::SPLIT::::.*"), 'file_pair', null() ))
| eval file_name = mvmap(file_name, replace(file_name, "file_name::::SPLIT::::", ""))
| eval file_hash = mvmap(file_hash, replace(file_hash, "file_hash::::SPLIT::::", ""))
| fields - attachments file_pair file_type index_attachments modulo

I'd be very glad to find a solution. Thanks for your kind help!
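Not a props.conf answer, but if you're on Splunk 9.0+ the search-time chain can be collapsed with foreach mode=multivalue, which walks the multivalue field in order — a rough sketch (it relies on mvappend ignoring null arguments, and on i tracking the position so that index 0, the "attachments" header, is skipped):

| makeresults
| eval attachments = mvappend("attachments", "doc1.pdf", "abc123", "doc2.pdf", "def456")
| eval i=-1
| foreach mode=multivalue attachments
    [ eval i=i+1,
           file_name=if(i>0 AND i%2=1, mvappend(file_name, <<ITEM>>), file_name),
           file_hash=if(i>0 AND i%2=0, mvappend(file_hash, <<ITEM>>), file_hash) ]
| fields - i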
Appreciate the insight @livehybrid — that helps! I'll be doing a bit more investigation on my end too. If anyone has any other suggestions, let me know.
My current understanding of the solution is as follows — please correct me if I'm wrong:
- If the authentication type is SAML or LDAP, the user is always considered active (i.e., the account cannot be locked through Splunk by multiple failed login attempts).
- If the authentication type is Splunk, then the user can be either Active or Locked-out, based on the "locked-out" attribute.

Therefore, I need to update my logic accordingly: check the authentication type of the user first. If it's Splunk, check the "locked-out" attribute; if it's LDAP or SAML, assume the user is active by default (as sketched below).
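Against the users endpoint, that logic might look like this (a sketch — it assumes the REST output exposes the authentication system in a field named type, which may differ by version):

| rest /services/authentication/users splunk_server=local
| eval status=case(type!="Splunk", "Active",
                   'locked-out'=1, "Locked",
                   true(), "Active")
| table title type status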
Hi @sanjai

You could use the current-context endpoint (e.g. | rest /services/authentication/current-context) — this does return a locked-out field (see below); however, for SSO/SAML/LDAP users this will always be 0, as the locked-out value is only used for native/local Splunk accounts. It is the authentication provider that determines whether the account is locked, e.g. if you fail to log in 3 times with LDAP, your LDAP provider may temporarily block you. This isn't something you can determine natively from Splunk; you would need some info from LDAP for this.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
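For reference, a minimal way to inspect that field for the currently logged-in user (column names as returned by the endpoint; adjust if your version differs):

| rest /services/authentication/current-context splunk_server=local
| table title roles locked-out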
Hi Splunk Community, I'm developing a User Management React application using the Splunk React UI framework, intended to be used inside a custom Splunk App. This app uses the REST API (/services/authentication/users) to fetch user details.

What I've Done:
In my local Splunk instance (where users are created manually with Authentication System = Splunk, for testing the app), each user object contains a "locked-out" attribute. I use this attribute to determine account status:
- "locked-out": 0 → User is Active
- "locked-out": 1 → User is Locked
This works as expected in my local environment.

The Issue:
When testing the same app on a development Splunk instance that uses LDAP authentication, I noticed that LDAP user accounts do not contain the locked-out attribute. Because of this, my app incorrectly assumes the user is locked (my logic defaults to "Locked" if the attribute is missing).

My Questions:
1. Do LDAP or SAML user accounts in Splunk expose any attribute that can be used to determine if the account is locked or active? If not, is there any workaround or recommended practice for this scenario?
2. Is there a capability that allows a logged-in user to view their own authentication context or session info? I'm aware of the edit_user capability, but that allows users to modify other users, which I want to avoid. (In the image below, the user doesn't have the Admin role — how can it show the USER AND AUTH menu?)

[Image: table from the custom React app (lists only the currently logged-in user)]

3. What is the expected behavior when an LDAP or SAML user enters the wrong password multiple times? For Splunk-native users, after several failed login attempts, the "locked-out" attribute is set to 1. For LDAP/SAML users, even after multiple incorrect login attempts, I don't see any status change or locked-out attribute. Is this expected? Are externally authenticated users (LDAP/SAML) not "locked" in the same way as Splunk-native accounts?

Scenario Tested:
- Logged in with the correct username but an incorrect password (more than 5 times).
- Splunk-authenticated user: the "locked-out" attribute appears and is set to 1.
- LDAP-authenticated user: no attribute added or updated; no visible change to user status.

Goal:
I want the React app to accurately reflect account status for both Splunk-native and LDAP/SAML users. Looking for best practices or alternative approaches for handling this.

Let me know if you need additional details about my question.

Thanks in advance,
Sanjai
Hi @sawwinnaung, good for you, see you next time!

Let us know if we can help you more, or, please, accept one answer for the other people of the Community.

Ciao and happy splunking
Giuseppe

P.S.: Karma Points are appreciated by all the contributors
When I applied the confs below on the HF, PROCTITLE was discarded successfully. Thanks for your help and suggestions.

props.conf

[linux_audit]
TRANSFORMS-set = discard_proctitle

[source::/var/log/audit/audit.log]
TRANSFORMS-set = discard_proctitle

transforms.conf

[discard_proctitle]
REGEX = ^type=PROCTITLE.*
DEST_KEY = queue
FORMAT = nullQueue
Hi @Andre_

Please check the MAX_DAYS_AGO option in props.conf: https://docs.splunk.com/Documentation/Splunk/9.4.2/Admin/Propsconf

MAX_DAYS_AGO = <integer>
* The maximum number of days in the past, from the current date as provided by the input layer (For example forwarder current time, or modtime for files), that an extracted date can be valid.
* Splunk software still indexes events with dates older than 'MAX_DAYS_AGO' with the timestamp of the last acceptable event.
* If no such acceptable event exists, new events with timestamps older than 'MAX_DAYS_AGO' uses the current timestamp.
* For example, if MAX_DAYS_AGO = 10, Splunk software applies the timestamp of the last acceptable event to events with extracted timestamps older than 10 days in the past. If no acceptable event exists, Splunk software applies the current timestamp.
* If your data is older than 2000 days, increase this setting.
* Highest legal value: 10951 (30 years).
* Default: 2000 (5.48 years).
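For instance, a minimal sketch of applying it (the sourcetype name is illustrative; note, per the spec above, that older events are still indexed, just with an adjusted timestamp — this setting does not skip them):

props.conf

[your_wineventlog_sourcetype]
MAX_DAYS_AGO = 7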
@Andre_ Another option you can consider is to change the destination path for the Windows Event Logs in Event Viewer and configure Splunk to monitor this new location. This approach allows you to start collecting only new events, effectively avoiding the indexing of historical data. Additionally, by using the standard Splunk input settings (without current_only = 1), you ensure that no events are missed during restarts, as Splunk will continue to track and ingest all new events from the updated log file.

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
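For reference, the standard event log input the post refers to looks roughly like this (a sketch — the channel name is just an example; current_only = 0 is the default, which replays events from the stored checkpoint rather than only tailing new ones):

inputs.conf

[WinEventLog://Application]
disabled = 0
current_only = 0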
It's not in the spec file; I tried it and it does not work.
Hi @Andre_,

I'm not sure about this: I used it on wineventlogs.

Ciao.
Giuseppe
Hi @sawwinnaung,

if you don't have any additional HF in your infrastructure, check the regex you're using (you can test it with the regex command in Splunk). In this way you can check whether there was some change in the log structure.

Then try using a backslash to escape the = in your regex.

Ciao.
Giuseppe
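For example, a quick way to test the transform's pattern at search time (the index and sourcetype are illustrative):

index=os sourcetype=linux_audit
| regex _raw="^type=PROCTITLE"

If this returns the PROCTITLE events you expect to drop, the pattern itself is fine and the issue is elsewhere in the pipeline.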