All Topics

Hello everyone! This is not a significant issue, but I sometimes observe somewhat strange behavior after pushing a configuration bundle from the deployer. I have numerous default apps containing an app.conf file under the local directory with the content below:

"C:\Program Files\Splunk\etc\shcluster\apps\search\local\app.conf"

[shclustering]
deployer_push_mode = local_only

To push a bundle I use this command:

splunk apply shcluster-bundle -target $host -auth $authString -push-default-apps yes --answer-yes

After a successful push, I sometimes see the warning below on a random search head:

File Integrity checks found 4 files that did not match the system-provided manifest.

It is always app.conf, for example:

'C:\Program Files\Splunk\etc\apps\search\default\app.conf'

Is there a way to fix this, or am I doing something wrong? Am I wrong to expect that 'local_only' mode never touches the 'default' directory on the target host?
Hi.

We have been forced to add identifiers to the collection tier in our Splunk environment. We solved it using _meta with a couple of fields. Now we have some DB data being indexed that we would like to tag the same way. The easy part was editing the /local/inputs.conf file and adding the extra line to all the input stanzas. Unfortunately this didn't work, and as far as I can read, db_inputs.conf doesn't allow the _meta line in its stanzas.

Does anyone have an idea how to solve this problem? My thoughts run in the direction of an index-time eval, but that is a more complex setup.

Kind regards
las
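Since db_inputs.conf has no documented _meta support, one workaround along the lines las mentions is an index-time eval applied where the DB Connect data is parsed. A minimal sketch, assuming a heavy forwarder does the parsing; the sourcetype name, stanza name, and field values below are placeholders, not real settings from this environment:

```
# props.conf (on the instance that parses the DB Connect events)
[your:db:sourcetype]
TRANSFORMS-add_tier_id = add_tier_id

# transforms.conf
[add_tier_id]
INGEST_EVAL = collection_tier="dbx_hf01", site_id="dc1"
```

Note that fields created this way are indexed fields; to make them behave well at search time you may also need a matching fields.conf entry, so verify against the INGEST_EVAL documentation for your version.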
Hi Team,

I need to extract the complete user list and their associated roles and groups in AppDynamics. It looks like there is no straightforward approach within AppD. Could you let me know any other option to get this list?

With the API, I am able to extract only the user list, but I need it with the roles assigned to each user.
Hi Splunkers!

We have recently configured SSO in Splunk using Keycloak, and it's working fine; users are able to log in through the Keycloak identity provider. Now we have a new requirement where some users should be able to bypass SSO and use the traditional Splunk login (username/password) instead.

Current setup:
- Splunk SSO is configured via Keycloak (SAML).
- All users are redirected to Keycloak for authentication.

We now want to allow dual login options:
- Primary: SSO via Keycloak (default for most users).
- Secondary: traditional login for selected users (e.g., admins, service accounts).

Objective: allow both SSO and non-SSO (Splunk local authentication) login methods to coexist.

Below is our SSO configuration:

[authentication]
authSettings = saml
authType = SAML

[roleMap_SAML]
commissioning_engineer = integration
hlc_support_engineer = integration

[saml]
caCertFile = D:\Splunk\etc\auth\cacert.pem
clientCert = D:\Splunk\etc\auth\server.pem
entityId = splunk
fqdn = https://splunk.kigen-iht-001.cnaw.k8s.kigen.com
idpCertExpirationCheckInterval = 86400s
idpCertExpirationWarningDays = 90
idpCertPath = idpCert.pem
idpSLOUrl = https://keycloak.walamb-iht-001.cnap.k8s.kigen.com/auth/realms/production/protocol/saml
idpSSOUrl = https://keycloak.walamb-iht-001.cnap.k8s.kigen.com/auth/realms/production/protocol/saml
inboundDigestMethod = SHA1;SHA256;SHA384;SHA512
inboundSignatureAlgorithm = RSA-SHA1;RSA-SHA256;RSA-SHA384;RSA-SHA512
issuerId = https://keycloak.walamb-iht-001.cnap.k8s.kigen.com/auth/realms/production
lockRoleToFullDN = true
redirectPort = 443
replicateCertificates = true
scimEnabled = false
signAuthnRequest = true
signatureAlgorithm = RSA-SHA1
signedAssertion = true
sloBinding = HTTP-POST
sslPassword = $7$CCkQUt0tA8sZJMmU+8kigen0zdv/mxXjJsLRbmuBkEnMfhQ==
ssoBinding = HTTP-POST

[userToRoleMap_SAML]
kg-user = commiss_engineer;hlc_support_engineer::::
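For context, Splunk generally keeps native authentication available even when SAML is enabled: users who exist as local Splunk accounts can typically reach the classic login form via a direct URL (hedged; the exact parameter casing and availability should be verified against your Splunk version's SSO documentation):

```
https://<your-splunk-host>/en-US/account/login?loginType=splunk
```

The selected users (admins, service accounts) would need local Splunk accounts created under Settings → Users, since SAML-provisioned identities cannot use this form.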
Hello,

I got tasked with finding all hosts that didn't have the CrowdStrike agent installed, and I'm running into problems with my searches. I've used the following:

CSFalconservice.exe | stats count by host

and

index=* sourcetype="crowdstrike:events:sensor" | stats count by host

but it's not giving me the information per individual host.

V/r
Ghost
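Both of those searches can only ever return hosts that *do* send CrowdStrike data; finding hosts *without* the agent requires comparing against an inventory of all hosts from some other data source. A sketch of that pattern, assuming every host sends at least something to Splunk (the index and sourcetype names are taken from the post, not verified):

```
| tstats count where index=* by host
| search NOT
    [| tstats count where index=* sourcetype="crowdstrike:events:sensor" by host
     | fields host ]
```

This lists every host seen anywhere in the time range minus the hosts that reported CrowdStrike sensor events. If an asset inventory lookup exists, substituting it for the first tstats is usually more reliable.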
I am looking for a range of numbers within the results of my search query, but I am getting no results back after adding a where clause.

This is my original search query:

index=os sourcetype=ps (tag=dcv-na-himem) NOT tag::USER="LNX_SYSTEM_USER"
| timechart span=1m eval((sum(RSZ_KB)/1024/1024)) as Mem_Used_GB by USER useother=no
| sort Mem_Used_GB desc
| head 20

This is the new search, where I am looking for a range of data between 128 and 256, and I am getting no results back even though events match. I have also played with the time range and the bounds of the where clause, and still nothing:

index=os sourcetype=ps (tag=dcv-na-himem) NOT tag::USER="LNX_SYSTEM_USER"
| timechart span=1m eval((sum(RSZ_KB)/1024/1024)) as Mem_Used_GB by USER useother=no
| where Mem_Used_GB >= 128 AND Mem_Used_GB <= 256
| sort Mem_Used_GB desc
| head 20
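A likely cause: after `timechart ... by USER`, the output columns are named after each USER value, so a field literally called Mem_Used_GB no longer exists and the where clause filters everything out. One sketch of a fix is to compute and filter the per-user value before pivoting (field names assumed from the original search):

```
index=os sourcetype=ps (tag=dcv-na-himem) NOT tag::USER="LNX_SYSTEM_USER"
| bin span=1m _time
| stats sum(RSZ_KB) as rsz_kb by _time USER
| eval Mem_Used_GB=rsz_kb/1024/1024
| where Mem_Used_GB >= 128 AND Mem_Used_GB <= 256
| timechart span=1m max(Mem_Used_GB) by USER useother=f
```

Here the range check runs while Mem_Used_GB is still a single field per user per minute, and only then is the result shaped into a time chart.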
I'm trying to download Splunk using:

wget -O splunk-9.4.2-e9664af3d956.x86_64.rpm "https://download.splunk.com/products/splunk/releases/9.4.2/linux/splunk-9.4.2-e9664af3d956.x86_64.rpm"

and it hangs at 35%. I was wondering if this is a known issue.
Hello,

I am trying to create a notable event in the Mission Control area within Enterprise Security to capture when an index has not received data within 24 hours. This should be simple and straightforward, but I can't seem to figure out why it isn't working.

I have the detection search as:

index=<target index> | stats count

The trigger condition in the alert is: search count = 0.

I also have email alerts set up as an additional way to notify the proper people. That part of the security content works, but why doesn't the actual event appear in the Mission Control area? This has me stumped; any help would be greatly appreciated.
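One frequent gotcha with "no data" detections: a search that returns zero results gives downstream notable/adaptive-response actions nothing to act on, even if the trigger condition evaluates true. A common workaround (a sketch; the index name is a placeholder) is to make the search itself emit a row when the index is empty, so the detection fires on "number of results > 0":

```
| tstats count where index=<target_index> earliest=-24h
| where count=0
```

`tstats count` should return a row with count=0 even when nothing matched, and the `where` keeps that row only in the failure case; verify this behavior on your version before relying on it for alerting.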
I'm creating a multiple-locked-accounts search query. While checking each account, if it has a 4767 (unlocked) event within a 4-hour span, that account should be ignored.

This is my current search query, and I'm not sure the join command is working:

index=*
| join Account_Name
    [ search index=* EventCode=4740 OR EventCode=4767
    | eval login_account=mvindex(Account_Name,1)
    | bin span=4h _time
    | stats count values(EventCode) as EventCodeList count(eval(match(EventCode,"4740"))) as Locked, count(eval(match(EventCode,"4767"))) as Unlocked by Account_Name
    | where Locked >= 1 and Unlocked = 0 ]
| stats count dc(login_account) as "UniqueAccount" values(login_account) as "Login_Account" values(host) as "HostName" values(Workstation_Name) as Source_Computer values(src_ip) as SourceIP by EventCode
| where UniqueAccount >= 10
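SPL's join has subsearch row and time limits that silently truncate results, so the same logic is usually safer as a single stats pass. A sketch under the assumption that the Windows TA field names in the post (Account_Name, Workstation_Name, src_ip) are present on the 4740/4767 events themselves:

```
index=* (EventCode=4740 OR EventCode=4767)
| bin span=4h _time
| stats count(eval(EventCode=4740)) as Locked
        count(eval(EventCode=4767)) as Unlocked
        values(host) as HostName
        values(Workstation_Name) as Source_Computer
        values(src_ip) as SourceIP
        by _time Account_Name
| where Locked >= 1 AND Unlocked = 0
| stats dc(Account_Name) as UniqueAccount values(Account_Name) as Login_Account by _time
| where UniqueAccount >= 10
```

Each 4-hour bucket keeps only accounts that were locked and never unlocked in that window, then the second stats counts how many distinct such accounts appeared per window.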
How do I onboard MOVEit Server database logs, which are hosted on-prem, to Splunk Cloud? What is the preferred method?
I have multiple disks (C, D, and E) on a server and want to do the prediction for all of them in the same query:

index=main host="localhost" instance="C:" sourcetype="Perfmon:LogicalDisk" counter="% Free Space"
| timechart min(Value) as "Used Space"
| predict "Used Space" algorithm=LLP5 future_timespan=180

Could anyone help with a modified query?
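The predict command accepts several fields in one invocation, so one approach (a sketch; it assumes the instance values are literally "C:", "D:", and "E:" and therefore become the timechart column names) is to split the timechart by instance and predict each column:

```
index=main host="localhost" sourcetype="Perfmon:LogicalDisk" counter="% Free Space" instance IN ("C:","D:","E:")
| timechart min(Value) by instance
| predict "C:" as C_pred "D:" as D_pred "E:" as E_pred algorithm=LLP5 future_timespan=180
```

If the actual column names differ, run the timechart alone first and copy the headers into the predict clause verbatim.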
Hello,

I'm working on a Splunk query to track REST calls in our logs. Specifically, I'm trying to use the transaction command to group related logs; each transaction should include exactly two messages, a RECEIVER log and a SENDER log.

Here's my current query:

index=... ("SENDER[" OR ("RECEIVER[" AND "POST /my-end-point*"))
| rex "\[(?<id>\d+)\]"
| transaction id startswith="RECEIVER" endswith="SENDER" mvlist=message
| search eventcount > 1
| eval count=mvcount(message)
| eval request=mvindex(message, 0)
| eval response=mvindex(message, 1)
| table id, duration, count, request, response, _raw

The idea is to group RECEIVER and SENDER logs using the transaction id that my logs create (e.g., RECEIVER[52] and SENDER[52]), then extract the first and second messages of the transaction into request and response fields for better visualisation.

The transaction command seems to be grouping the logs correctly: I get the right number of transactions, and both receiver and sender logs are present in the _raw field. For a few cases it works fine and I get the proper request and response in two distinct fields, but for many transactions the response (second message) shows as NULL, even though eventcount is 2 and both messages are visible in _raw. The message field is present in both ends of the transaction, as I can see it in the _raw output.

Can someone guide me on what is wrong with my query?
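An alternative that sidesteps transaction's multivalue ordering quirks is to tag each event's direction first and pair them with stats. This is a sketch: it assumes the literal strings "RECEIVER[" and "SENDER[" appear in _raw and that the message field is extracted on both event types, as described in the post:

```
index=... ("SENDER[" OR ("RECEIVER[" AND "POST /my-end-point*"))
| rex "\[(?<id>\d+)\]"
| eval request=if(match(_raw, "RECEIVER\["), message, null())
| eval response=if(match(_raw, "SENDER\["), message, null())
| stats earliest(_time) as start latest(_time) as end
        values(request) as request values(response) as response
        count as eventcount by id
| where eventcount = 2
| eval duration = end - start
| table id duration eventcount request response
```

Because request and response are assigned by pattern rather than by position in a multivalue list, a missing second element cannot silently become NULL the way mvindex(message, 1) can.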
Upgrading Splunk Enterprise using rpm -Uvh <<splunk-installer>>.rpm on RHEL seems to have caused "Network daemons not managed by the package system" to be flagged by Nessus (https://www.tenable.com/plugins/nessus/33851).

I noticed that for some Splunk Enterprise instances, after the upgrade there are two tar.gz files created in /opt/splunk/opt/packages that cause the two processes below to be started by Splunk (pkg-run):

agentmanager-1.0.1+XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.tar.gz
identity-0.0.1-xxxxxx.tar.gz

The two processes are started by the splunk user and re-spawn if killed with the kill command:

/opt/splunk/var/run/supervisor/pkg-run/pkg-agent-manager2203322202/agent-manager
/opt/splunk/var/run/supervisor/pkg-run/pkg-identity1066404666/identity

Why does upgrading Splunk Enterprise create these two files, and is this normal?
Hi,

After setting up a test index and ingesting a test record, I'm now planning to remove the index from the distributed setup. Could anyone confirm the correct procedure for removing an index in a distributed environment with 3 indexers and a management node?

I normally run the following command on an all-in-one setup:

/opt/splunk/bin/splunk clean eventdata -index index_name
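For an indexer cluster, the usual sequence (a sketch only; verify against the Managing Indexers documentation for your version, and note that `clean eventdata` deletes data irreversibly) is to remove the index definition centrally first, then purge the buckets on each peer. "test_index" below is a placeholder:

```
# On the management node (cluster manager): delete the [test_index] stanza
# from the manager-apps/_cluster/local/indexes.conf, then push the change:
/opt/splunk/bin/splunk apply cluster-bundle --answer-yes

# Then, on each of the 3 indexers (splunkd must be stopped for clean to run):
/opt/splunk/bin/splunk stop
/opt/splunk/bin/splunk clean eventdata -index test_index -f
/opt/splunk/bin/splunk start
```

An alternative that avoids downtime is to leave the stanza in place with a short frozenTimePeriodInSecs and let retention age the test data out before removing the definition.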
Hi community,

I'm running into a permissions/visibility issue (I'm not sure which) with an index created for receiving data via HTTP Event Collector (HEC) in Splunk Cloud.

Context:
- I have a custom index: rapid7
- Data is being successfully ingested via a Python script using the /services/collector/event endpoint
- The script defines index: rapid7 and sourcetype: rapid7:assets
- I can search the data using index=rapid7 and get results
- I can also confirm the sourcetype with: index=rapid7 | stats count by sourcetype

Problem:
I am trying to add rapid7 to my role's default search indexes, but when I go to Settings → Roles → admin → Edit → Indexes searched by default, the index rapid7 appears blank. I don't know whether this is the whole problem.

What I've verified:
- The index exists and receives data
- The data is visible in Search & Reporting if I explicitly specify index=rapid7
- I am an admin user
- The index is created (visible under Settings → Indexes)

My questions:
- What could cause an index to not appear in the "Indexes searched by default" list under role settings?
- Could this be related to the app context of the index (e.g., if it was created under http_event_collector)?
- Is there a way in Splunk Cloud to globally share an index created via HEC so it appears in the role configuration menus?

I want to be able to search sourcetype="rapid7:assets" without explicitly specifying index=rapid7, by including the index in my role's default search indexes.

Any advice, experience, or support links would be appreciated! Thanks!
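For reference, the "Indexes searched by default" UI is backed by the srchIndexesDefault setting in authorize.conf. In Splunk Cloud you cannot edit the file directly (use the Roles UI or the ACS API), but the on-prem equivalent looks like this (a sketch; the role and index list are illustrative):

```
# authorize.conf
[role_admin]
srchIndexesDefault = main;rapid7
```

The value is a semicolon-delimited list, and the role must also be allowed to search the index (srchIndexesAllowed) for the default to take effect.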
Hello,

I have an air-gapped Splunk AppDynamics (25.1) HA on-premises instance deployed, with the fleet management service enabled and smart agents installed on the VMs to manage the app server agents. I want to be able to download the agents directly from AppDynamics Downloads in the controller UI instead of downloading them manually (i.e., using the AppDynamics Portal), but I don't know which URLs should be whitelisted on the firewall. Can anyone help me with this?

Thanks,
Osama
I have installed ES on the deployer as suggested by the Splunk docs, then transferred the app to /opt/splunk/etc/shcluster/apps and pushed the apps to my cluster. But when I open ES on any search head, it still says "Post-install configurations", and when I click Configure it says you cannot do this on an SHC member.
The old connector didn't support Db2 on Z. I'm wondering if the latest version on Splunkbase now supports mainframe Db2 on z/OS. Thanks.
Hi, I'm trying to display the number of events per day from multiple indexes, substituting 0 for null values. I wrote the SPL below, but when all index values are null for a specific date, the row itself is not displayed.

Using the chart command I can display the event counts, and when a specific index's value is null I can substitute 0 with isnull. But when all the index values are null for a particular day, the row for that day is not displayed at all.

index IN (index1, index2, index3, index4)
| bin span=1d _time
| chart count _time over index
| eval index4=if(isnull(index4), 0, index4)

How can I display the 4/2 row with 0 substituted, as in the table below, even when all the index values for 4/2 are null?

        index1  index2  index3  index4
4/1     12      3       45      0
4/2     0       0       0       0
4/3     16      7       34      0
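One sketch of a fix: timechart, unlike chart over a binned _time, emits a row for every span in the searched time range, and count gaps are reported as 0; a trailing fillnull covers any index column that is empty across the whole range:

```
index IN (index1, index2, index3, index4)
| timechart span=1d count by index
| fillnull value=0
```

This keeps the 4/2 row present with zeros as long as 4/2 falls inside the search's time range picker, since timechart generates spans from the range rather than from the events.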