All Posts

Hi, I'm onboarding some new data and working on the field extractions. The data is proper JSON related to emails. I'm having a hard time with the "attachments" field, which I'm trying to make CIM compliant. This attachments field is multivalue (it's a JSON array) and contains:
- the string "attachments" at position 0 (the first position)
- the file name at every odd position (1, 3, 5, etc.)
- the file hash at every even position

So far I've done it in SPL, but I can't find a way to do it in a props.conf (because in props.conf you can't chain multiple evals: every eval is treated in a parallel way) or in a transforms.conf.

Here is what I've done in SPL:

| makeresults
| eval attachments = mvappend("attachments", "doc1.pdf", "abc123", "doc2.pdf", "def456", "doc3.bla", "ghx789")
``` To get rid of the string "attachments" ```
| eval attachments = mvindex(attachments, 1, mvcount(attachments)-1)
``` To create an index ```
| eval index_attachments = mvrange(0, mvcount(attachments), 1)
``` To write down in file_type whether the value is a file_name or a file_hash ```
| eval modulo = mvmap(index_attachments, 'index_attachments'%2)
| eval file_type = mvmap(modulo, if(modulo=0, "file_name", "file_hash"))
``` To zip all that with a "::::SPLIT::::" ```
| eval file_pair = mvzip('file_type', attachments, "::::SPLIT::::")
``` To then create file_name and file_hash ```
| eval file_name = mvmap(file_pair, if(match(file_pair, "file_name::::SPLIT::::.*"), 'file_pair', null()))
| eval file_hash = mvmap(file_pair, if(match(file_pair, "file_hash::::SPLIT::::.*"), 'file_pair', null()))
| eval file_name = mvmap(file_name, replace(file_name, "file_name::::SPLIT::::", ""))
| eval file_hash = mvmap(file_hash, replace(file_hash, "file_hash::::SPLIT::::", ""))
| fields - attachments file_pair file_type index_attachments modulo attachments

I'd be very glad to find a solution. Thanks for your kind help!
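For reference, the same odd/even split can be written without the zip/unzip round-trip. This is an untested sketch of the same logic, using mvrange indices directly (the idx_name and idx_hash field names are illustrative, not from the original post):

| eval attachments = mvindex(attachments, 1, mvcount(attachments)-1)
``` After dropping the leading literal, names sit at even offsets and hashes at odd offsets ```
| eval idx_name = mvrange(0, mvcount(attachments), 2)
| eval idx_hash = mvrange(1, mvcount(attachments), 2)
| eval file_name = mvmap(idx_name, mvindex(attachments, idx_name))
| eval file_hash = mvmap(idx_hash, mvindex(attachments, idx_hash))
| fields - idx_name idx_hash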
Appreciate the insight @livehybrid, that helps! I'll be doing a bit more investigation on my end too. If anyone has any other suggestions, let me know.
My current understanding of the solution is as follows — please correct me if I'm wrong:
- If the authentication type is SAML or LDAP, the user is always considered active (i.e., the account cannot be locked through Splunk after multiple failed login attempts).
- If the authentication type is Splunk, then the user can be either Active or Locked-out, based on the "locked-out" attribute.

Therefore, I need to update my logic accordingly: check the authentication type of the user first. If it's Splunk, check the "locked-out" attribute; if it's LDAP or SAML, assume the user is active by default.
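That logic might look roughly like this in SPL — a sketch only, assuming the users endpoint exposes the authentication system in its type field (verify the field name on your version before relying on it):

| rest /services/authentication/users splunk_server=local
``` External (LDAP/SAML) accounts are treated as Active; only native Splunk accounts honour locked-out ```
| eval status = case(type!="Splunk", "Active", 'locked-out'=1, "Locked", true(), "Active")
| table title type locked-out status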
Hi @sanjai 
You could use the current-context endpoint (e.g. | rest /services/authentication/current-context) - this does return a locked-out field (see below); however, for SSO/SAML/LDAP users this will always be 0, as the locked-out value is only used for native/local Splunk accounts. It is the authentication provider that determines if the account is locked, e.g. if you fail to log in 3 times with LDAP, your LDAP provider may temporarily block you. This isn't something you can determine natively from Splunk; you would need some info from LDAP for this.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
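For reference, a minimal check against the endpoint mentioned above could look like this (untested sketch; it must be run as the logged-in user whose context you want):

| rest /services/authentication/current-context splunk_server=local
| table username roles locked-out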
Hi Splunk Community,

I'm developing a User Management React application using the Splunk React UI framework, intended to be used inside a custom Splunk App. This app uses the REST API (/services/authentication/users) to fetch user details.

What I've done: In my local Splunk instance (where users are created manually with Authentication System = Splunk) for testing the app, each user object contains a "locked-out" attribute. I use this attribute to determine account status:
"locked-out": 0 → User is Active
"locked-out": 1 → User is Locked
This works as expected in my local environment.

The issue: When testing the same app on a development Splunk instance that uses LDAP authentication, I noticed that LDAP user accounts do not contain the locked-out attribute. Because of this, my app incorrectly assumes the user is locked (my logic defaults to "Locked" if the attribute is missing).

My questions:
1. Do LDAP or SAML user accounts in Splunk expose any attribute that can be used to determine if the account is locked or active? If not, is there any workaround or recommended practice for this scenario?
2. Is there a capability that allows a logged-in user to view their own authentication context or session info? I'm aware of the edit_user capability, but that allows users to modify other users, which I want to avoid. (The user in the image below doesn't have the Admin role, so how can it show the USER AND AUTH menu?)
[Image: table from the custom React app, listing only the currently logged-in user]
3. What is the expected behavior when an LDAP or SAML user enters the wrong password multiple times? For Splunk-native users, after several failed login attempts, the "locked-out" attribute is set to 1. For LDAP/SAML users, even after multiple incorrect login attempts, I don't see any status change or locked-out attribute. Is this expected? Are externally authenticated users (LDAP/SAML) not "locked" in the same way as Splunk-native accounts?

Scenario tested: Logged in with the correct username but an incorrect password (more than 5 times).
Splunk-authenticated user: the "locked-out" attribute appears and is set to 1.
LDAP-authenticated user: no attribute added or updated; no visible change to user status.

Goal: I want the React app to accurately reflect account status for both Splunk-native and LDAP/SAML users. Looking for best practices or alternative approaches for handling this.

Let me know if you need additional details about my question.

Thanks in advance,
Sanjai
Hi @sawwinnaung, good for you, see you next time! Let us know if we can help you more, or please accept one answer for the benefit of other Community members. Ciao and happy splunking. Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
When I applied the confs below on the HF, PROCTITLE was discarded successfully. Thanks for your help and suggestions.

props.conf

[linux_audit]
TRANSFORMS-set = discard_proctitle

[source::/var/log/audit/audit.log]
TRANSFORMS-set = discard_proctitle

transforms.conf

[discard_proctitle]
REGEX = ^type=PROCTITLE.*
DEST_KEY = queue
FORMAT = nullQueue
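If you want to confirm the filter keeps holding after restarts, a quick spot-check like this should come back empty (the index and sourcetype here are placeholders; adjust to your environment):

index=your_index sourcetype=linux_audit "type=PROCTITLE" earliest=-15m
| head 5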
Hi @Andre_ 
Pls check the MAX_DAYS_AGO option in props.conf: https://docs.splunk.com/Documentation/Splunk/9.4.2/Admin/Propsconf

MAX_DAYS_AGO = <integer>
* The maximum number of days in the past, from the current date as provided by the input layer (for example forwarder current time, or modtime for files), that an extracted date can be valid.
* Splunk software still indexes events with dates older than 'MAX_DAYS_AGO' with the timestamp of the last acceptable event.
* If no such acceptable event exists, new events with timestamps older than 'MAX_DAYS_AGO' use the current timestamp.
* For example, if MAX_DAYS_AGO = 10, Splunk software applies the timestamp of the last acceptable event to events with extracted timestamps older than 10 days in the past. If no acceptable event exists, Splunk software applies the current timestamp.
* If your data is older than 2000 days, increase this setting.
* Highest legal value: 10951 (30 years).
* Default: 2000 (5.48 years).
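A sketch of how that might look, assuming a Windows Security event log sourcetype (the stanza name is illustrative); note that, per the spec above, this adjusts timestamps on old events rather than skipping them:

# props.conf (illustrative; MAX_DAYS_AGO does not drop old events,
# it only controls which extracted timestamps are considered valid)
[WinEventLog:Security]
MAX_DAYS_AGO = 7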
@Andre_ 
Another option you can consider is to change the destination path for the Windows Event Logs in Event Viewer and configure Splunk to monitor this new location. This approach allows you to start collecting only new events, effectively avoiding the indexing of historical data. Additionally, by using the standard Splunk input settings (without current_only = 1), you ensure that no events are missed during restarts, as Splunk will continue to track and ingest all new events from the updated log file.

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
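As a sketch, a standard event log input without current_only could look like this (the channel name is just an example; pick your actual channel). Checkpointing then carries the input across restarts without gaps:

# inputs.conf (illustrative; current_only = 0 and start_from = oldest are the defaults,
# shown here explicitly for clarity)
[WinEventLog://Application]
disabled = 0
start_from = oldest
current_only = 0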
It's not in the spec file; I tried it and it does not work.
Hi @Andre_, I'm not sure about this: I used it on wineventlogs. Ciao. Giuseppe
Hi @sawwinnaung, if you don't have any additional HF in your infrastructure, check the regex you're using (you can test it with the regex command in Splunk). This way you can check whether there was some change in the log structure. Then try using a backslash to escape the = in your regex. Ciao. Giuseppe
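A quick way to run that check (index and sourcetype are placeholders for your environment): if this returns events, the pattern still matches the raw data and the problem lies elsewhere, for example in where the props/transforms are applied.

index=your_index sourcetype=linux_audit
| regex _raw="^type\=PROCTITLE"
| head 5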
@gcusello 
The props.conf and transforms.conf files are located on the indexer under the following path: /app/splunk/etc/apps/TA-linux_auditd/local/

These configurations previously worked successfully. However, after upgrading the Splunk version and migrating the Linux environment, the configurations no longer seem to function as expected. Thanks for your suggestions.
You can upload your own app with custom JS and CSS in it, as long as you are on the Victoria experience in the Cloud and the JS passes AppInspect when you upload the app. We do this all the time and have a set of JS functions that we use for our applications, bundled with every app we create.
@PrewinThomas 
Thanks for your help. Even though I updated REGEX = type=PROCTITLE in transforms.conf located on the indexer, the filtering still isn't working.
I think the opposite is the case:

current_only = <boolean>
* Whether or not to acquire only events that arrive while the instance is running.
* A value of "true" means the input only acquires events that arrive while the instance runs and the input is on. The input does not read data which was stored in the Windows Event Log while the instance was not running. This means that there will be gaps in the data if you restart the instance or it experiences downtime.
Hi Giuseppe, "ignoreOlderThan" only applies to monitored log files, not Windows event logs (like security events, application events, etc.). Kind Regards, Andre
Hi @sawwinnaung, first of all, use a backslash when you have = in your regexes. Anyway, where did you locate these conf files? They must be located in the first full Splunk instance that the data passes through; in other words, in the first Heavy Forwarder (if present) or in the Indexers (if there are no HFs), not on the Universal Forwarder. Ciao. Giuseppe
Hi @Andre_, as you can read at https://docs.splunk.com/Documentation/Splunk/9.4.2/Admin/Inputsconf, to read only the events newer than 7 days, you have to use, in your inputs.conf, the option ignoreOlderThan:

ignoreOlderThan = <non-negative integer>[s|m|h|d]
* The monitor input compares the modification time on files it encounters with the current time. If the time elapsed since the modification time is greater than the value in this setting, Splunk software puts the file on the ignore list.
* Files on the ignore list are not checked again until the Splunk platform restarts, or the file monitoring subsystem is reconfigured. This is true even if the file becomes newer again at a later time.
* Reconfigurations occur when changes are made to monitor or batch inputs through Splunk Web or the command line.
* Use 'ignoreOlderThan' to increase file monitoring performance when monitoring a directory hierarchy that contains many older, unchanging files, and when removing or adding a file to the deny list from the monitoring location is not a reasonable option.
* Do NOT select a time that files you want to read could reach in age, even temporarily. Take potential downtime into consideration!
* Suggested value: 14d, which means 2 weeks.
* For example, a time window in significant numbers of days or small numbers of weeks is probably a reasonable choice.
* If you need a time window in small numbers of days or hours, there are other approaches to consider for performant monitoring beyond the scope of this setting.
* NOTE: Most modern Windows file access APIs do not update file modification time while the file is open and being actively written to. Windows delays updating modification time until the file is closed. Therefore you might have to choose a larger time window on Windows hosts where files may be open for long time periods.
* Value must be: <number><unit>. For example, "7d" indicates one week.
* Valid units are "d" (days), "h" (hours), "m" (minutes), and "s" (seconds).
* No default, meaning there is no threshold and no files are ignored for modification time reasons.

Ciao.
Giuseppe
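As a sketch, for a plain file monitor that could look like this (the path is illustrative; per the warning above, avoid a window your files could legitimately reach in age, and note this applies to monitor inputs, not Windows Event Log channels):

# inputs.conf (illustrative path)
[monitor:///var/log/myapp]
ignoreOlderThan = 7d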
Thanks for that (the challenge with the API is that we need a Splunk application login, which has been redacted for users to reduce footprint). show-encrypted is something I haven't tried, but it seems promising. Will test and get back.