All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi Team, I need to extract the complete user list and their associated roles and groups in AppDynamics. It looks like there is no straightforward way to do this within AppD. Could you let me know any other option to get this list? With the API, I am able to extract only the user list, but I need it along with the roles assigned to each user.
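One common pattern (a sketch, not a confirmed AppDynamics workflow): fetch the flat user list from the controller's RBAC API, then fetch each user's detail record and merge the two. The endpoint paths usually cited are GET /controller/api/rbac/v1/users and GET /controller/api/rbac/v1/users/{id}, but verify them against your controller version's docs; the merge logic itself is shown below on sample data standing in for the two API responses.

```python
# Sketch: merge a flat user list with per-user detail records.
# Assumed response shapes (check your controller's RBAC API docs):
#   user list:   [{"id": ..., "name": ...}, ...]
#   user detail: {"roles": [{"name": ...}], "groups": [{"name": ...}]}

def merge_users_with_roles(user_list, detail_lookup):
    """Combine the flat user list with per-user role/group details."""
    rows = []
    for user in user_list:
        detail = detail_lookup.get(user["id"], {})
        rows.append({
            "user": user["name"],
            "roles": [r["name"] for r in detail.get("roles", [])],
            "groups": [g["name"] for g in detail.get("groups", [])],
        })
    return rows

# Sample data in place of the two API calls:
users = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]
details = {
    1: {"roles": [{"name": "Account Administrator"}], "groups": [{"name": "ops"}]},
    2: {"roles": [{"name": "Read Only User"}], "groups": []},
}
merged = merge_users_with_roles(users, details)
```

In a real script, `details` would be built by calling the per-user endpoint once per id from the user list.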
Hi Splunkers! We have recently configured SSO in Splunk using Keycloak, and it's working fine — users are able to log in through the Keycloak identity provider. Now we have a new requirement where some users should be able to bypass SSO and use the traditional Splunk login (username/password) instead.

Current Setup:
- Splunk SSO is configured via Keycloak (SAML).
- All users are redirected to Keycloak for authentication.

We now want to allow dual login options:
- Primary: SSO via Keycloak (default for most users).
- Secondary: Traditional login for selected users (e.g., admins, service accounts).

Objective: Allow both SSO and non-SSO (Splunk local authentication) login methods to coexist.

Below is our setting for SSO.

[authentication]
authSettings = saml
authType = SAML

[roleMap_SAML]
commissioning_engineer = integration
hlc_support_engineer = integration

[saml]
caCertFile = D:\Splunk\etc\auth\cacert.pem
clientCert = D:\Splunk\etc\auth\server.pem
entityId = splunk
fqdn = https://splunk.kigen-iht-001.cnaw.k8s.kigen.com
idpCertExpirationCheckInterval = 86400s
idpCertExpirationWarningDays = 90
idpCertPath = idpCert.pem
idpSLOUrl = https://keycloak.walamb-iht-001.cnap.k8s.kigen.com/auth/realms/production/protocol/saml
idpSSOUrl = https://keycloak.walamb-iht-001.cnap.k8s.kigen.com/auth/realms/production/protocol/saml
inboundDigestMethod = SHA1;SHA256;SHA384;SHA512
inboundSignatureAlgorithm = RSA-SHA1;RSA-SHA256;RSA-SHA384;RSA-SHA512
issuerId = https://keycloak.walamb-iht-001.cnap.k8s.kigen.com/auth/realms/production
lockRoleToFullDN = true
redirectPort = 443
replicateCertificates = true
scimEnabled = false
signAuthnRequest = true
signatureAlgorithm = RSA-SHA1
signedAssertion = true
sloBinding = HTTP-POST
sslPassword = $7$CCkQUt0tA8sZJMmU+8kigen0zdv/mxXjJsLRbmuBkEnMfhQ==
ssoBinding = HTTP-POST

[userToRoleMap_SAML]
kg-user = commiss_engineer;hlc_support_engineer::::
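One approach commonly used for this (verify it against the Splunk SAML docs for your version): when SAML is enabled, the native login page typically remains reachable at a direct URL, so selected users can bypass the IdP redirect. The hostname below is a placeholder:

```text
https://<splunk-host>/en-US/account/login?loginType=splunk
```

The accounts using this URL must exist as local Splunk users with natively assigned roles; the SAML role mappings do not apply to them.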
Hello, I got tasked with finding all hosts that didn't have the CrowdStrike agent installed, and I'm running into problems with my searches. I've used the following: "CSFalconservice.exe | stats count by host" and "index=* sourcetype="crowdstrike:events:sensor" | stats count by host", but it's not giving me the information per individual host. V/r, Ghost
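A sketch of one way to find hosts with no sensor data: compare the hosts seen in the CrowdStrike sourcetype against a reference list of all expected hosts. The lookup name all_hosts.csv below is a placeholder for whatever asset inventory is available:

```spl
| tstats count where index=* sourcetype="crowdstrike:events:sensor" by host
| append [| inputlookup all_hosts.csv | fields host | eval count=0]
| stats sum(count) as sensor_events by host
| where sensor_events=0
```

Hosts that appear only in the asset list (zero sensor events) are the ones likely missing the agent.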
I am looking for a range of numbers within the results of my search query, but I am getting no results back after adding a where clause. This is my original search query:

index=os sourcetype=ps (tag=dcv-na-himem) NOT tag::USER="LNX_SYSTEM_USER" | timechart span=1m eval((sum(RSZ_KB)/1024/1024)) as Mem_Used_GB by USER useother=no | sort Mem_Used_GB desc | head 20

This is some of the results.

This is the new search, where I am looking for a range of data between 128 and 256, and I am getting no results back even though events matched. I have also played with the time range and the bounds of the where clause, and still nothing.

index=os sourcetype=ps (tag=dcv-na-himem) NOT tag::USER="LNX_SYSTEM_USER" | timechart span=1m eval((sum(RSZ_KB)/1024/1024)) as Mem_Used_GB by USER useother=no | where Mem_Used_GB >= 128 AND Mem_Used_GB <= 256 | sort Mem_Used_GB desc | head 20
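The likely cause: timechart ... by USER pivots the data so each USER value becomes its own column, and the field Mem_Used_GB no longer exists after that command, so the where clause matches nothing. A sketch that filters before pivoting, using the same fields as the search above:

```spl
index=os sourcetype=ps (tag=dcv-na-himem) NOT tag::USER="LNX_SYSTEM_USER"
| bin _time span=1m
| stats sum(RSZ_KB) as rsz_kb by _time USER
| eval Mem_Used_GB=rsz_kb/1024/1024
| where Mem_Used_GB >= 128 AND Mem_Used_GB <= 256
| xyseries _time USER Mem_Used_GB
```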
I'm trying to download Splunk using "wget -O splunk-9.4.2-e9664af3d956.x86_64.rpm "https://download.splunk.com/products/splunk/releases/9.4.2/linux/splunk-9.4.2-e9664af3d956.x86_64.rpm"" and it's hanging at 35%. I was wondering if this is a known issue.
Hello, I am trying to create a notable event in the Mission Control area within Enterprise Security to capture when an index has not received data within 24 hours. This should be simple and straightforward, but I can't seem to figure out why it isn't working. I have the detection search as

index=<target index> | stats count

and the condition in the alert to trigger is "search count = 0". I also have email alerts set up as an additional way to notify the proper people. This part of the security content works, but why doesn't the actual event appear in the Mission Control area? This has me stumped; any help would be greatly appreciated.
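One thing worth checking: a search over an empty index returns zero events, and some trigger paths for correlation searches never fire on an empty result set. A sketch that instead emits one row per quiet index, which gives the detection something concrete to turn into a notable (the index name is a placeholder):

```spl
| tstats latest(_time) as last_seen where index=<target index> by index
| eval hours_silent=round((now()-last_seen)/3600, 1)
| where hours_silent > 24
```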
I'm creating a multiple-locked-accounts search query. While checking each account, if it has a 4767 (unlocked) event within a span of 4 hours, the query should ignore that account. This is my current search query, and I am not sure the "join" command is working.

index=* | join Account_Name [ search index=* EventCode=4740 OR EventCode=4767 | eval login_account=mvindex(Account_Name,1) | bin span=4h  _time | stats count values(EventCode) as EventCodeList count(eval(match(EventCode,"4740"))) as Locked ,count(eval(match(EventCode,"4767"))) as Unlocked by Account_Name | where Locked >= 1 and Unlocked = 0 ] | stats count dc(login_account) as "UniqueAccount" values(login_account) as "Login_Account" values(host) as "HostName" values(Workstation_Name) as Source_Computer values(src_ip) as SourceIP by EventCode| where UniqueAccount >= 10
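As an alternative to join (which has subsearch limits and is easy to get wrong), a sketch that does the lock/unlock comparison in a single pass over the 4740/4767 events; field names are taken from the query above and may need adjusting:

```spl
index=* EventCode=4740 OR EventCode=4767
| bin span=4h _time
| stats count(eval(EventCode=4740)) as Locked
        count(eval(EventCode=4767)) as Unlocked
        values(host) as HostName
        values(Workstation_Name) as Source_Computer
        values(src_ip) as SourceIP
        by Account_Name _time
| where Locked >= 1 AND Unlocked = 0
| stats dc(Account_Name) as UniqueAccount values(Account_Name) as Login_Account by _time
| where UniqueAccount >= 10
```

Accounts that saw a 4767 in the same 4-hour bucket are dropped before counting distinct locked accounts.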
How do we onboard MOVEit Server database logs, which are hosted on-premises, to Splunk Cloud? What is the preferred method?
I have multiple disks, like C, D & E, on a server and want to do the prediction for all of the disks in the same query.

index=main host="localhost"  instance="C:" sourcetype="Perfmon:LogicalDisk" counter="% Free Space" | timechart min(Value) as "Used Space" | predict "Used Space" algorithm=LLP5 future_timespan=180

Could anyone help with a modified query?
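The predict command accepts multiple fields, so one sketch is to chart all three instances in a single timechart and predict each resulting series (instance values and the predict syntax should be checked against your data and the predict command docs):

```spl
index=main host="localhost" sourcetype="Perfmon:LogicalDisk" counter="% Free Space" instance IN ("C:", "D:", "E:")
| timechart min(Value) by instance
| predict "C:" as C_pred "D:" as D_pred "E:" as E_pred algorithm=LLP5 future_timespan=180
```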
Hello, I'm working on a Splunk query to track REST calls in our logs. Specifically, I'm trying to use the transaction command to group related logs — each transaction should include exactly two messages: a RECEIVER log and a SENDER log. Here's my current query:

index=... ("SENDER[" OR ("RECEIVER[" AND "POST /my-end-point*")) | rex "\[(?<id>\d+)\]" | transaction id startswith="RECEIVER" endswith="SENDER" mvlist=message | search eventcount > 1 | eval count=mvcount(message) | eval request=mvindex(message, 0) | eval response=mvindex(message, 1) | table id, duration, count, request, response, _raw

The idea is to group together RECEIVER and SENDER logs using the transaction id that my logs create (e.g., RECEIVER[52] and SENDER[52]), and then split the first and second messages of the transaction into request and response fields for better visualisation. The transaction command seems to be grouping the logs correctly: I get the right number of transactions, and both receiver and sender logs are present in the _raw field. For a few cases it works fine, and I get the proper request and response in two distinct fields, but for many transactions the response (second message) shows as NULL, even though eventcount is 2 and both messages are visible in _raw. The message field is present at both ends of the transaction, as I can see it in the _raw output. Can someone guide me on what is wrong with my query?
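One failure mode with transaction mvlist is relying on mvindex ordering to tell request from response. An alternative sketch that avoids transaction entirely and pins each message to its direction explicitly (the like() conditions mirror the RECEIVER/SENDER markers in the query above):

```spl
index=... ("SENDER[" OR ("RECEIVER[" AND "POST /my-end-point*"))
| rex "\[(?<id>\d+)\]"
| eval direction=if(like(_raw, "%RECEIVER%"), "request", "response")
| stats range(_time) as duration count
        values(eval(if(direction="request", message, null()))) as request
        values(eval(if(direction="response", message, null()))) as response
        by id
| where count > 1
| table id duration count request response
```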
Upgrading Splunk Enterprise using rpm -Uvh <<splunk-installer>>.rpm on RHEL seems to have caused "Network daemons not managed by the package system" to be flagged by Nessus (https://www.tenable.com/plugins/nessus/33851). I noticed that for some Splunk Enterprise instances, after the upgrade there are 2 tar.gz files created in /opt/splunk/opt/packages that cause the two processes below to be started by Splunk (pkg-run):

agentmanager-1.0.1+XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.tar.gz
identity-0.0.1-xxxxxx.tar.gz

The two processes are started by the Splunk user, and they re-spawn if killed with the kill command:

/opt/splunk/var/run/supervisor/pkg-run/pkg-agent-manager2203322202/agent-manager
/opt/splunk/var/run/supervisor/pkg-run/pkg-identity1066404666/identity

Why does upgrading Splunk Enterprise create these two files? Is this normal?
Hi, after setting up a test index and ingesting a test record, I'm now planning to remove the index from the distributed setup. Could anyone confirm the correct procedure for removing an index in a distributed environment with 3 indexers and a management node? I normally run the following command on an all-in-one setup:

/opt/splunk/bin/splunk clean eventdata -index index_name
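For an indexer cluster, the usual pattern (paths and file locations below are examples; adjust to your deployment and verify against the managing-indexes docs) is to remove the index definition via the cluster manager rather than running clean eventdata on each peer:

```shell
# On the cluster manager: remove the index stanza from the config bundle
vi /opt/splunk/etc/manager-apps/_cluster/local/indexes.conf

# Validate and push the updated bundle to all peers
/opt/splunk/bin/splunk validate cluster-bundle
/opt/splunk/bin/splunk apply cluster-bundle

# Once the stanza is gone, delete the leftover bucket directories on each peer
rm -r /opt/splunk/var/lib/splunk/index_name
```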
Hi community, I'm running into a permissions/visibility issue (I'm not sure which) with an index created for receiving data via HTTP Event Collector (HEC) in Splunk Cloud.

Context:
- I have a custom index: rapid7
- Data is being successfully ingested via a Python script using the /services/collector/event endpoint
- The script defines index: rapid7 and sourcetype: rapid7:assets
- I can search the data using index=rapid7 and get results. I can also confirm the sourcetype: index=rapid7 | stats count by sourcetype

Problem:
I am trying to add rapid7 to my role's default search indexes, but when I go to Settings → Roles → admin → Edit → Indexes searched by default, the index rapid7 appears blank. I'm not sure whether this is the whole problem.

What I've verified:
- The index exists and receives data
- The data is visible in Search & Reporting if I explicitly specify index=rapid7
- I am an admin user
- I confirmed the index is created (visible under Settings → Indexes)

My questions:
- What could cause an index to not appear in the "Indexes searched by default" list under role settings?
- Could this be related to the app context of the index (e.g., if created under http_event_collector)?
- Is there a way in Splunk Cloud to globally share an index created via HEC so it appears in role configuration menus?

I want to be able to search sourcetype="rapid7:assets" without explicitly specifying index=rapid7, by including it in my role's default search indexes. Any advice, experience, or support links would be appreciated! Thanks!
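For reference, the setting behind the "Indexes searched by default" UI is srchIndexesDefault in authorize.conf. A sketch of what the role stanza would look like (role name and index list are examples; in Splunk Cloud conf files aren't directly editable, so changes like this typically go through the ACS API or Splunk Support):

```conf
[role_admin]
srchIndexesDefault = main;rapid7
```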
Hello, I have an air-gapped Splunk AppDynamics (25.1) HA on-premises instance deployed, fleet management service enabled, and smart agents installed on the VMs to manage the app server agents. I want to be able to download the agents directly from AppDynamics Downloads from the controller UI instead of downloading manually (i.e. Using AppDynamics Portal), but I don't know which URLs should be whitelisted on the firewall. Can anyone help me with this? Thanks, Osama
I have installed ES on the deployer as suggested by the Splunk docs, then transferred the app to /opt/splunk/etc/shcluster/apps and pushed the apps to my cluster. But when I open ES on any search head, it still says "Post install configurations", and when I click Configure it says you cannot do this on a SHC member.
The old connector didn't support Db2 on Z. I'm wondering if the latest version on Splunkbase now supports mainframe Db2 on z/OS. Thanks.
Hi, I am trying to display the number of events per day from multiple indexes, substituting 0 for null. I wrote the SPL below. Using the chart command I can display the event counts, and when a specific index's value is null I can substitute 0 with isnull; but when every index's value is null on a particular day, the row for that day itself is not displayed.

index IN (index1, index2, index3, index4)
| bin span=1d _time
| chart count _time over index
| eval index4=if(isnull(index4), 0, index4)

How can I display a row for 4/2 by substituting 0, as in the table below, even when all index values for 4/2 are null?

       index1  index2  index3  index4
 4/1   12      3       45      0
 4/2   0       0       0       0
 4/3   16      7       34      0
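A sketch that should keep the empty day: timechart, unlike chart over a manually binned field, emits a row for every span in the search time range even when no events fall in it, and fillnull then turns the empty cells into 0:

```spl
index IN (index1, index2, index3, index4)
| timechart span=1d count by index
| fillnull value=0
```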
Query1:
index=test-index "ERROR" Code=OPT OR Code=ONP | bin _time span=1d | stats count as TOTAL_ONIP1 by Code _time

Query2:
index=test-index "WARN" "User had issues with code" Code=OPT OR Code=ONP | search code_ip IN(1001, 1002, 1003, 1004) | bin _time span=1d | stats count as TOTAL_ONIP2 by Code _time

Query3:
index=test-index "INFO" "POST" NOT "GET /authenticate/mmt" | search code_data IN(iias, iklm, oilk) | bin _time span=1d | stats count as TOTAL_ONIP3 by Code _time

Combined query:
index=test-index "ERROR" Code=OPT OR Code=ONP
| bin _time span=1d
| stats count as TOTAL_ONIP1 by Code _time
| appendcols [| search index=test-index "WARN" "User had issues with code" Code=OPT OR Code=ONP | search code_ip IN(1001, 1002, 1003, 1004) | bin _time span=1d | stats count as TOTAL_ONIP2 by Code _time]
| appendcols [| search index=test-index "INFO" "POST" NOT "GET /authenticate/mmt" Code=OPT OR Code=ONP | search code_data IN(iias, iklm, oilk) | bin _time span=1d | stats count as TOTAL_ONIP3 by Code _time]
| eval Start_Date=strftime(_time, "%Y-%m-%d")
| table Start_Date Code TOTAL_ONIP1 TOTAL_ONIP2 TOTAL_ONIP3

Output for individual query1:
Start_Date   Code  TOTAL_ONIP1
2025-04-01   OPT   2
2025-04-02   OPT   4
2025-04-03   OPT   0
2025-04-01   ONP   1
2025-04-02   ONP   2
2025-04-03   ONP   3

Output for individual query2:
Start_Date   Code  TOTAL_ONIP2
2025-04-01   OPT   0
2025-04-02   OPT   0
2025-04-03   OPT   0
2025-04-01   ONP   4
2025-04-02   ONP   2
2025-04-03   ONP   3

Output for individual query3:
Start_Date   Code  TOTAL_ONIP3
2025-04-01   OPT   0
2025-04-02   OPT   0
2025-04-03   OPT   9
2025-04-01   ONP   0
2025-04-02   ONP   6
2025-04-03   ONP   8

Combined query output:
Start_Date   Code  TOTAL_ONIP1  TOTAL_ONIP2  TOTAL_ONIP3
2025-04-01   OPT   2            4            9
2025-04-02   OPT   4            2            6
2025-04-03   OPT   1            3            8
2025-04-01   ONP   2
2025-04-02   ONP   3
2025-04-03   ONP

When we combine the queries, the counts do not match the individual queries. For example, on April 1st for ONP, TOTAL_ONIP2 is 4, but in the combined output it shows null, and the value 4 ended up in the OPT April 1st row instead.
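The symptom above is what appendcols does by design: it pastes row N of the subsearch next to row N of the main search, with no regard for Code or date, so whenever one query is missing a (date, Code) combination all of its columns shift onto the wrong rows. A sketch that computes all three counts in one pass so every row shares the same Code/_time key; the match()/in() conditions mirror the three original filters and may need adjusting to your actual field extractions:

```spl
index=test-index (Code=OPT OR Code=ONP)
| eval q1=if(match(_raw, "ERROR"), 1, 0)
| eval q2=if(match(_raw, "WARN") AND match(_raw, "User had issues with code")
             AND in(code_ip, "1001", "1002", "1003", "1004"), 1, 0)
| eval q3=if(match(_raw, "INFO") AND match(_raw, "POST")
             AND NOT match(_raw, "GET /authenticate/mmt")
             AND in(code_data, "iias", "iklm", "oilk"), 1, 0)
| bin _time span=1d
| stats sum(q1) as TOTAL_ONIP1 sum(q2) as TOTAL_ONIP2 sum(q3) as TOTAL_ONIP3 by Code _time
| eval Start_Date=strftime(_time, "%Y-%m-%d")
| table Start_Date Code TOTAL_ONIP1 TOTAL_ONIP2 TOTAL_ONIP3
```

Note that the standalone Query3 had no Code filter; this sketch, like the original combined query, restricts it to Code=OPT OR Code=ONP.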
Veeam has a really nice Veeam App for Splunk. It's actually one of the nicer apps, with easy data integration and pre-built dashboards that pretty much work out of the box.

However, the Veeam data is really only usable within the Veeam App. If you are in a different app in Splunk and try to query the Veeam data, a lot of fields will be "missing". You can see here that I need to use 3 fields (EventGroup, ActivityType, and severity) to find the specific events I'm looking for, but only 1 of those fields is actually available in the _raw data.

Ok... so why are these fields available in the Veeam App but not in any other app in Splunk, especially since they don't even actually exist? This is due to the "enrichment" the Veeam App performs, translating things like "instanceId" into something human-readable and informative. For example, instanceId here is "41600", and when you query the Veeam events there is a lookup that references 41600 and returns additional information.

Great, so if this is available in the Veeam App, why don't I just do all my work there rather than trying to make this extra information available outside the Veeam App? The short answer is that I want to be able to work with more than one dataset at a time. The longer answer is that I have a custom "app" where I store all my SOC security detection queries. Splunk also has their Enterprise Security App, which basically does the same thing. This allows the creation of correlated searches, such as one search that picks up any "ransomware"-related event regardless of whether it comes from Veeam or AntiVirus or UEBA, etc. But if the Veeam data isn't usable outside of the Veeam App, you can't incorporate it into your standard SOC process.

What you need to do is make all the enrichment in the Veeam App (props, lookups, transforms, datamodels, etc.) readable from any app in Splunk, not just from the Veeam App.
You can do all this from the Splunk GUI (you might need to be an Admin... not sure... I'm an Admin so I can do everything/whatever I want LOL).

Share the Data Model globally.

Share the enrichment ("props" & "transforms") globally.

You can see here before-and-after snips of the "export" config after I modified all the properties: (default.meta) and (local.meta, which overrides default.meta, created dynamically after the edit).
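The GUI edits above boil down to metadata export settings. The equivalent local.meta content inside the Veeam app would look roughly like this (stanza names are illustrative; match them to the actual props, transforms, lookups, and datamodels the app ships):

```conf
[props]
export = system

[transforms]
export = system

[lookups]
export = system

[datamodels]
export = system
```

Setting export = system on a knowledge-object stanza makes it visible from every app instead of only the app that defines it.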