All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, I work in the Education field and have been using Splunk Enterprise since May 18, 2025. Yesterday, 16 Jun 2025, I ran into a login problem. I reinstalled version 9.4.3 and tried to log in again, but got the same result. According to the message shown in the main black box, an admin should be the person who can resolve this issue, so I need service support from an admin or the responsible POC. For the technical team's information: I am using a VPN, in case that could be causing the login issue. My university email, which I used while signing up, is oguz.unal@ogr.yesevi.edu.tr. My alternative email is
Kind regards, Ogz
Hello team, please help me modify this query so that it loops through all the values of the CSV file. Although it returns the clients and sensitivity of the selected sourcetype, the result fields Sourcetype, Domain and NewIndex only contain the values of the first sourcetype, A4Server. For example, the selected sourcetype here is A4Server, but the Sourcetype field shows A4ServerBeta, because the query is not looping through the entire CSV but only picking up the first value.

| tstats count WHERE index=* sourcetype=A4Server by index
| rex field=index max_match=0 "(?<clients>\w+)(?<sensitivity>_private|_public)"
| table index, clients, sensitivity
| join type=left client
    [| inputlookup appserverdomainmapping.csv
     | table NewIndex, Domain, Sourcetype ]
| eval NewIndex= NewIndex + sensitivity
| table clients, sensitivity, Domain, Sourcetype, NewIndex
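One possible direction, sketched under assumptions: if the goal is for every search row to pick up its matching row from appserverdomainmapping.csv, the lookup command applies the CSV to each result instead of merging it once the way a join does. This assumes the CSV is available as a lookup table file, and the join key below (the extracted clients field matched against the CSV's Sourcetype column) is a guess taken from the post, so adjust the field names to the real lookup file:

| tstats count WHERE index=* sourcetype=A4Server by index
| rex field=index max_match=0 "(?<clients>\w+)(?<sensitivity>_private|_public)"
| lookup appserverdomainmapping.csv Sourcetype AS clients OUTPUT NewIndex Domain Sourcetype
| eval NewIndex = NewIndex . sensitivity
| table clients, sensitivity, Domain, Sourcetype, NewIndex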
This is my log. I need a report like the one below, where I can see the price difference in a single report. I don't want to include records where mainframePrice and discountPrice are the same; I only want the records where mainframePrice and discountPrice differ. Here I manually entered the individual values to get the report.
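A minimal sketch of the filtering step, assuming the events already have extracted fields named mainframePrice and discountPrice (names taken from the post) and that <your_index> is a placeholder for the real index:

index=<your_index>
| where mainframePrice != discountPrice
| eval priceDifference = mainframePrice - discountPrice
| table _time mainframePrice discountPrice priceDifference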
The splunkfwd user is created by default as of version 9.1, and I am seeing the warning "User splunkfwd does not exist - using root" during the upgrade. The upgrade guide does not say that creating the splunkfwd user is mandatory for Universal Forwarder installations or upgrades. From "Upgrade the universal forwarder" in Splunk Docs: "When you upgrade, the RPM/DEB package installer retrieves the file owner of SPLUNK_HOME/etc/myinstall/splunkd.xml. If a previous user exists, the RPM/DEB package installer will not create a splunkfwd user and instead will reuse the existing user. If you wish to create a least privileged user, that is, the splunkfwd user, you must remove the existing user first." The warning about the missing splunkfwd user appears during the upgrade, but there are no permission issues and the forwarder functions properly with the "splunk" user. I would appreciate your guidance on whether it is mandatory to create the splunkfwd user for Universal Forwarder 9.4.0 or higher. Note: in this case Splunk Enterprise and the Splunk UF are not installed on the same machine.
Hi everyone, I am a Mendix developer and I would like to implement Splunk Cloud for monitoring. I already have the HEC token, port and hostname in my Mendix cloud environment, and I would like to send error logs from Mendix to Splunk Cloud. Based on my research, JSON format is common practice. Is there any way I can send my data to Splunk in JSON format? I don't know how that works on the Splunk side. Any suggestions?
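A minimal sketch of what a JSON event sent to the HTTP Event Collector looks like, using the standard /services/collector/event endpoint; the host name, port and token are placeholders to replace with the values from your environment, and the event payload is just an illustration:

curl -k https://<your-splunk-cloud-hec-host>:8088/services/collector/event \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": {"level": "ERROR", "message": "example error from Mendix"}, "sourcetype": "_json"}'

Any HTTP client in the Mendix runtime that can POST that body with that Authorization header should work the same way.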
Hello, I have a Windows machine with a UF installed that collects various logs such as WinEventLog. These logs work correctly and have been ingested into Splunk for some time. I wanted to add a new log from a piece of software that runs on the machine, and added it to the inputs.conf file. The log is a trace log for the software, and it shows up as monitored in the _internal index with no errors. The log is ingested correctly at first as a batch input, but the UF fails to monitor the file afterwards. The log has a fixed size of 50 MB, and once the log is full it starts overwriting the oldest events, meaning it starts again at the top. I have already tried:
- changing initCrcLength
- changing ignoreOlderThan
- setting NO_BINARY_CHECK = true (this fixed some previous errors where Splunk believed the file to be binary; it is just ANSI encoded)
- setting alwaysOpenFile = true (this did not seem to change anything)
Thanks in advance for any tips, tricks or advice.
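For reference, a minimal inputs.conf sketch showing the settings usually in play when a file is rewritten in place from the top, so the CRC at the head of the file changes. The path, index and sourcetype are hypothetical placeholders, and initCrcLength/crcSalt only change how Splunk recognizes the file, so treat this as something to test rather than a confirmed fix for this behaviour:

[monitor://C:\Program Files\ExampleSoftware\logs\trace.log]
index = main
sourcetype = example:tracelog
initCrcLength = 1024
crcSalt = <SOURCE>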
Hello, with this query:

index=abc | search source="xyz" | stats count by source

I can see the count for sources that have a count greater than 0, but I can't manage to get the ones with a count of 0. Is anyone able to help me, please? Thank you.
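A count of 0 only exists for sources that never appear in the events, so they have to come from somewhere else, typically a lookup listing the expected sources. A minimal sketch, assuming a hypothetical lookup file expected_sources.csv with a source column:

index=abc source="xyz"
| stats count by source
| append [| inputlookup expected_sources.csv | table source | eval count=0]
| stats max(count) as count by source

Sources present in the lookup but absent from the events keep the 0 from the appended rows.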
Hi, I have created an add-on with the Splunk Add-on Builder on Splunk Enterprise version 9 and installed it in Splunk Cloud. I am now facing some issues with the add-on. How can I check the logs of this add-on in Splunk Cloud? Please assist.
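A minimal sketch of where such logs usually show up; the source pattern is a placeholder for the add-on's actual log file name (Add-on Builder inputs typically write their own *.log named after the add-on), so adjust it to match:

index=_internal source=*<your_addon_name>*.log
| sort - _time

Errors from the modular input process itself also tend to appear in index=_internal sourcetype=splunkd under the ExecProcessor component.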
I'm trying to split a pair of rows with a pair of multivalue columns. The values in the two columns are related by position within the multivalue column. To make myself clear, I'm displaying the initial result table, and below that the table for the desired result. I tried mvexpand, but that doesn't give me the expected result.

Example: I have rows like this:

Domain Name   Instance name    Last Phone home   Search execution time
Domain1.com   instance1.com    2022-02-28        2022-03-1
              instance2.com                      2022-03-2
              instance3.com                      2022-03-4
              instance4.com                      2022-03-5
              instance5.com

And I would like to transform them into this:

Domain Name   Instance name    Last Phone home   Search execution time
Domain1.com   instance1.com    2022-02-28        2022-03-01
Domain1.com   instance2.com    2022-02-28        2022-03-02
Domain1.com   instance3.com    2022-02-28
Domain1.com   instance4.com    2022-02-28        2022-03-04
Domain1.com   instance5.com    2022-02-28        2022-03-05
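A common pattern for this, sketched with the field names from the post; note it assumes the two multivalue fields line up position by position (the instance3 row suggests that may not always hold, and mvzip stops at the shorter of the two lists):

| eval pair = mvzip('Instance name', 'Search execution time', "|")
| mvexpand pair
| eval "Instance name" = mvindex(split(pair, "|"), 0), "Search execution time" = mvindex(split(pair, "|"), 1)
| fields - pair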
How do you match a field ID between two indexes, without using a subsearch (due to the limit of 10,000 results) and without using the join command (it is resource intensive, and with about 140,000+ results a join would take forever to load)? I tried the following, but it doesn't seem to work:

index=xxx sourcetype=xxx
| eval source_index="a"
| append [search index=summary_index | eval source_index="b" | fields ID]
| stats values(source_index) as sources by trace
| where mvcount(sources) > 1
| timechart span=1h values(count) AS "Customers per Hour"

I am trying to match the unique ID accounts field between the main search and the summary search; if it matches, I want a count of how many IDs there are, which translates to customers per hour.
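One subsearch-free sketch of the usual stats-based correlation, assuming the ID field exists in both indexes and that the index names from the post are the real ones; the "a"/"b" labels just mirror the original attempt:

(index=xxx sourcetype=xxx) OR (index=summary_index)
| eval source_index = if(index="summary_index", "b", "a")
| stats values(source_index) as sources, latest(_time) as _time by ID
| where mvcount(sources) > 1
| timechart span=1h count AS "Customers per Hour"

The stats by ID keeps one row per ID seen in both places, and the retained _time lets timechart bucket those matches per hour.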
Hello, we are trying to see whether Splunk can be our dashboard solution. I downloaded the trial version, which is 9.4.2 (I do see that the system support for version 9.4.2 is RHEL 8 or RHEL 9). Is there any other trial version I can download to try? (Our device uses RHEL 7 and Python 2.) I am able to install Splunk 9.4.2 on our system and run splunk start, but I cannot access the UI at http://{domain-name}:8000.
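A few checks that can narrow down why the UI is unreachable; this is a sketch run on the Splunk host and assumes default paths and the default web port 8000:

$SPLUNK_HOME/bin/splunk status
ss -tlnp | grep 8000              # is splunkweb actually listening?
firewall-cmd --list-ports         # on RHEL with firewalld: is 8000/tcp open?

If splunkweb never comes up on RHEL 7, the platform mismatch flagged in the system requirements is a likely reason.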
I'm working with a Splunk Enterprise cluster deployed with the splunk-enterprise Helm chart. I'm trying to install Amazon's CloudWatch Agent onto my Splunk pods, to send Splunk application logs to CloudWatch. I decided to try to do this by defining Ansible pre-tasks and setting them in my Helm values.yaml, for example:

clusterManager:
  defaults:
    ansible_pre_tasks:
      - 'file:///mnt/playbooks/install_cloudwatch_agent.yml'

I got my pre-tasks to run, but they're failing. At first I tried to install the CloudWatch Agent from yum, but this failed because the Python dnf module was missing. Actually, it looks like yum and dnf aren't installed at all in the splunk Docker image. Then I tried to just download the RPM and install that, but this failed because I didn't have permission to get a transaction lock with rpm. I tried to solve the permissions issue by setting become_user: "{{ privileged_user }}" on my task, but this didn't work either, nor could I become root.

Are splunk-ansible pre-tasks and post-tasks an appropriate way to install additional supporting services onto the Splunk Enterprise pods like this? If so, are there any examples showing how to do it? If not, is there some other approach that would be a better fit?
Hey everyone, I'm testing the ingestion of Zscaler ZPA logs into Splunk using LSS (Log Streaming Service). I'd appreciate any assistance and any relevant configurations that could help me.
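For reference, a minimal sketch of the Splunk receiving side, assuming LSS is configured to stream ZPA logs over TCP to the Splunk instance; the port and index are placeholders, and the sourcetype in particular should be taken from the Zscaler add-on you use so its parsing applies:

inputs.conf
[tcp://9514]
index = zscaler
sourcetype = <sourcetype defined by the Zscaler TA for ZPA app logs>
disabled = 0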
Looking for SPL that will give me the ID Cost by month, only grabbing the last event (_time) for that month. I have a system that updates the cost daily for the same ID. Looking for guidance before I venture down a wrong path. Sample data below. Thank you!

bill_date   ID   Cost   _time
6/1/25      1    1.24   2025-06-16T12:42:41.282-04:00
6/1/25      1    1.4    2025-06-16T12:00:41.282-04:00
5/1/25      1    2.5    2025-06-15T12:42:41.282-04:00
5/1/25      1    2.2    2025-06-14T12:00:41.282-04:00
5/1/25      2    3.2    2025-06-14T12:42:41.282-04:00
5/1/25      2    3.3    2025-06-14T12:00:41.282-04:00
3/1/25      1    4.4    2025-06-13T12:42:41.282-04:00
3/1/25      1    5      2025-06-13T12:00:41.282-04:00
3/1/25      2    6      2025-06-13T12:42:41.282-04:00
3/1/25      2    6.3    2025-06-13T12:00:41.282-04:00
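A minimal sketch, assuming bill_date identifies the month (in m/d/yy form, as in the sample) and that latest() should pick the Cost from the chronologically last _time within each ID/month pair; the index is a placeholder:

index=<your_index>
| eval month = strftime(strptime(bill_date, "%m/%d/%y"), "%Y-%m")
| stats latest(Cost) as Cost by ID month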
I have a question about how to handle a regex query in a macro. Below is a regex similar to the one I'm using; it matches when I use a regex checker, but when I try to add it to a simple search macro in Splunk it gives an error:

Error: Error in 'SearchOperator:regex': Usage: regex <field> (=|!=) <regex>.

The macro is tied to the rule. It basically has the first part of a script, then an IP address it ignores, and then a second part of the script. The one below is really simplified but gets the same error.

Regex example:
| regex [field] !="^C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\powershell.exe" "Resolve-DnsName \b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b \| Select-Object -Property NameHost$"

String to check against in this example:
C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe" "Resolve-DnsName 0.0.0.0 | Select-Object -Property NameHost

I feel like this should work, but maybe there is something I'm missing about how Splunk handles regex and how I need to tweak it. Any info on this would be greatly appreciated. Thanks.
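One thing that error usually points at: the regex command accepts exactly one quoted pattern after <field> (=|!=), so the two separately quoted halves above are parsed as extra arguments. A sketch of the combined form, with the interior double quotes escaped as \"; the field name is a placeholder, and the number of backslashes needed for literal \ characters can differ between a regex checker and SPL, so treat this as a starting point to test rather than a verified pattern:

| regex process != "^C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\powershell.exe\" \"Resolve-DnsName \b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b \| Select-Object -Property NameHost$"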
Hi, I'm onboarding some new data and working on the field extractions. The data is proper JSON related to emails. I'm having a hard time with the "attachments" field, which I'm trying to make CIM compliant. This attachments field is multivalue (it's a JSON array) and contains:
- the string "attachments" in position 0 (the first position)
- the file name in every odd position (1, 3, 5, etc.)
- the file hash in every even position
So far I've done it in SPL, but I can't find a way to do it in a props.conf (because in props.conf you can't do a multi-step eval: every EVAL- is evaluated independently, in parallel) or in a transforms.conf.

Here is what I've done in SPL:

| makeresults
| eval attachments = mvappend("attachments", "doc1.pdf", "abc123", "doc2.pdf", "def456", "doc3.bla", "ghx789")
``` To get rid of the string "attachments" ```
| eval attachments = mvindex(attachments, 1, mvcount(attachments)-1)
``` To create an index ```
| eval index_attachments=mvrange(0,mvcount(attachments),1)
``` To write down in file_type if the value is file_name or file_hash ```
| eval modulo = mvmap(index_attachments, 'index_attachments'%2)
| eval file_type = mvmap(modulo, if(modulo=0,"file_name", "file_hash"))
``` To zip all that with a "::::SPLIT::::" ```
| eval file_pair = mvzip('file_type', attachments, "::::SPLIT::::")
``` To then create file_name and file_hash ```
| eval file_name = mvmap(file_pair, if(match(file_pair, "file_name::::SPLIT::::.*"), 'file_pair', null() ))
| eval file_hash = mvmap(file_pair, if(match(file_pair, "file_hash::::SPLIT::::.*"), 'file_pair', null() ))
| eval file_name = mvmap(file_name, replace(file_name, "file_name::::SPLIT::::", ""))
| eval file_hash = mvmap(file_hash, replace(file_hash, "file_hash::::SPLIT::::", ""))
| fields - attachments file_pair file_type index_attachments modulo attachments

I'd be very glad to find a solution. Thanks for your kind help!
Hi Splunk Community, I'm developing a User Management React application using the Splunk React UI framework, intended to be used inside a custom Splunk App. This app uses the REST API (/services/authentication/users) to fetch user details.

What I've done: In my local Splunk instance (where users are created manually with Authentication System = Splunk), which I use for testing the app, each user object contains a "locked-out" attribute. I use this attribute to determine account status: "locked-out": 0 means the user is Active, and "locked-out": 1 means the user is Locked. This works as expected in my local environment.

The issue: When testing the same app on a development Splunk instance that uses LDAP authentication, I noticed that LDAP user accounts do not contain the locked-out attribute. Because of this, my app incorrectly assumes the user is locked (my logic defaults to "Locked" if the attribute is missing).

My questions:
1. Do LDAP or SAML user accounts in Splunk expose any attribute that can be used to determine whether the account is locked or active? If not, is there any workaround or recommended practice for this scenario?
2. Is there a capability that allows a logged-in user to view their own authentication context or session info? I'm aware of the edit_user capability, but that allows users to modify other users, which I want to avoid. (In the image below, the user does not have the Admin role, so how can it show the USER AND AUTH menu?) Table from custom React app (lists only the currently logged-in user).
3. What is the expected behavior when an LDAP or SAML user enters the wrong password multiple times? For Splunk-native users, after several failed login attempts, the "locked-out" attribute is set to 1. For LDAP/SAML users, even after multiple incorrect login attempts, I don't see any status change or locked-out attribute. Is this expected? Are externally authenticated users (LDAP/SAML) not "locked" in the same way as Splunk-native accounts?

Scenario tested: Logged in with the correct username but an incorrect password (more than 5 times). Splunk-authenticated user: the "locked-out" attribute appears and is set to 1. LDAP-authenticated user: no attribute added or updated; no visible change to user status.

Goal: I want the React app to accurately reflect account status for both Splunk-native and LDAP/SAML users. I'm looking for best practices or alternative approaches for handling this. Let me know if you need additional details about my question.

Thanks in advance, Sanjai
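On question 2, a minimal sketch of the endpoint that returns the calling user's own context (username, roles, capabilities) without needing rights over other users; the credentials and host are placeholders, and from inside a Splunk Web app the same path is typically reached through the splunkd proxy rather than port 8089 directly:

curl -k -u <user>:<password> "https://localhost:8089/services/authentication/current-context?output_mode=json"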
Hello, I am about to onboard 1000+ Windows UFs. Those hosts have Windows event logs going back many years. Is there a way to exclude any Windows event log entries older than 7 days from being ingested during the initial onboarding? For log files there is an option in inputs.conf on the UF, but is there nothing similar for event logs? Kind regards, Andre
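For reference, a minimal inputs.conf sketch of the closest built-in knob, hedged because it is not an exact 7-day cutoff: current_only = 1 makes a WinEventLog input collect only events generated after the input starts, so all historical events are skipped rather than just those older than a week. The channel and index are placeholders:

[WinEventLog://Security]
index = wineventlog
current_only = 1
disabled = 0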
I am trying to set up props and transforms on the indexers to send PROCTITLE events to the null queue. I tried the regex below, but it doesn't seem to work.

props.conf and transforms.conf location: /app/splunk/etc/apps/TA-linux_auditd/local/

props.conf
[linux_audit]
TRANSFORMS-set = discard_proctitle

[source::/var/log/audit/audit.log]
TRANSFORMS-set = discard_proctitle

transforms.conf
[discard_proctitle]
REGEX = ^type=PROCTITLE.*
DEST_KEY = queue
FORMAT = nullQueue

Sample events:

type=PROCTITLE msg=audit(1750049138.587:1710xxxx): proctitle=737368643A206165705F667470757372205B70726xxxxx
type=PROCTITLE msg=audit(1750049130.891:1710xxxx): proctitle="(systemd)"
type=PROCTITLE msg=audit(1750049102.068:377xxxx): proctitle="/usr/lib/systemd/systemd-logind"

Could someone help me fix this issue?
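A couple of things worth checking, with a hedged variant to test: these settings only take effect on the instance that first parses the data (indexer or heavy forwarder, not a universal forwarder), and the props stanza name has to match the sourcetype the events actually arrive with, which for some auditd add-ons is linux:audit rather than linux_audit (verify with | tstats count where index=* by sourcetype, or a quick search over the audit data). Assuming the sourcetype turns out to be linux:audit:

props.conf
[linux:audit]
TRANSFORMS-discard_proctitle = discard_proctitle

transforms.conf
[discard_proctitle]
REGEX = ^type=PROCTITLE
DEST_KEY = queue
FORMAT = nullQueue

A restart of splunkd on the parsing tier is needed after the change.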