Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

My Splunk installation can't read files on a Windows host from one specific folder on the C: drive. Logs are collected from another folder without problems. There are no errors in the _internal index, the stanza in inputs.conf looks standard, and the monitor path for the folder is specified correctly. The permissions on the folder and files are SYSTEM, the same as on other files that we collect successfully. What could be the problem?
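For comparison, a standard monitor stanza for a Windows folder generally looks like the sketch below; the path, index, and sourcetype here are placeholders, not the poster's actual values.

[monitor://C:\ProgramData\MyApp\logs]
disabled = 0
index = win_app_logs
sourcetype = myapp:log
recursive = true

If a similar stanza works for another folder, comparing the two and checking the output of "splunk list monitor" on the forwarder can help narrow down whether the problem path is being picked up at all.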
Hi, I encountered an issue where my indexer disconnected from the search head (SH), and similarly the SH and indexer1 disconnected from the deployment server and license master. I keep receiving the following error message:

Error [00000010] Instance name "A.A.A.A:PORT" Search head's authentication credentials rejected by peer. Try re-adding the peer. Last Connect Time: 2024-10-14T16:23:23.000+02:00; Failed 5 out of 5 times.

I've tried re-adding the peer, but the issue persists. Does anyone have suggestions on how to resolve this? Thanks in advance!
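For reference, re-adding a search peer from the search head CLI typically looks like the sketch below; the host, port, and credentials are placeholders, and the remote credentials must be valid admin credentials on the peer itself.

splunk remove search-server -auth admin:<sh_admin_password> https://A.A.A.A:8089
splunk add search-server https://A.A.A.A:8089 -auth admin:<sh_admin_password> -remoteUsername admin -remotePassword <peer_admin_password>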
I have created an index in Splunk to store my data. The data consists of 5 CSV files uploaded one by one into the index. Now, if I try to display the data in the index, it only shows the latest data (the CSV file that was uploaded last). We can display the data from the other files by including specific source names in the query, but by default we cannot see all the data, only the data from the last table. To overcome this we have used joins to combine all the tables and show them in one report through the query. I wanted to find out if there is a better way to do this. I need to show this data in Power BI, and for that I need a complete report of the data.
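As a sketch of a join-free alternative (the index name, sourcetype, and key field below are assumptions, not the poster's actual names), all five CSV sources can be searched in one pass and combined with stats instead of join:

index=my_csv_index sourcetype=csv
| stats values(*) AS * by record_id

If the files do not share a common key, simply searching the index without a source filter and tabling the needed fields may already give one complete result set to feed Power BI.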
We set up SAML login with Azure AD for our self-hosted Splunk Enterprise. When we try to log in we are redirected to https://<instance>.westeurope.cloudapp.azure.com/en-GB/account/login, which displays a blank page with {"status":1}. So the login seems to work somehow, but then it gets stuck on this page, and in splunkd.log I can see the following error message: "ERROR UiAuth [28137 TcpChannelThread] - user= action=login status=failure reason=missing-username". So it sounds like there may be something wrong in the claims mapping? Here is my local/authentication.conf:

[roleMap_SAML]
admin = test

[splunk_auth]
constantLoginTime = 0.000
enablePasswordHistory = 0
expireAlertDays = 15
expirePasswordDays = 90
expireUserAccounts = 0
forceWeakPasswordChange = 0
lockoutAttempts = 5
lockoutMins = 30
lockoutThresholdMins = 5
lockoutUsers = 1
minPasswordDigit = 0
minPasswordLength = 8
minPasswordLowercase = 0
minPasswordSpecial = 0
minPasswordUppercase = 0
passwordHistoryCount = 24
verboseLoginFailMsg = 1

[authentication]
authSettings = saml
authType = SAML

[authenticationResponseAttrMap_SAML]
mail = http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress
realName = http://schemas.microsoft.com/identity/claims/displayname
role = http://schemas.microsoft.com/ws/2008/06/identity/claims/groups

[saml]
caCertFile = /opt/splunk/etc/auth/cacert.pem
clientCert = /opt/splunk/etc/auth/server.pem
entityId = <instance>.westeurope.cloudapp.azure.com
fqdn = https://<instance>.westeurope.cloudapp.azure.com
idpCertExpirationCheckInterval = 86400s
idpCertExpirationWarningDays = 90
idpCertPath = idpCert.pem
idpSLOUrl = https://login.microsoftonline.com/<tentantid>/saml2
idpSSOUrl = https://login.microsoftonline.com/<tentantid>/saml2
inboundDigestMethod = SHA1;SHA256;SHA384;SHA512
inboundSignatureAlgorithm = RSA-SHA1;RSA-SHA256;RSA-SHA384;RSA-SHA512
issuerId = https://sts.windows.net/<tentantid>/
lockRoleToFullDN = true
nameIdFormat = urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
redirectPort = 0
replicateCertificates = true
signAuthnRequest = false
signatureAlgorithm = RSA-SHA1
signedAssertion = true
sloBinding = HTTP-POST
sslPassword = <pw>
ssoBinding = HTTP-POST

Does anyone have a hint about what could be going wrong in our setup? Thanks in advance!
Hi everyone, I have configured an OTX AlienVault TAXII source in Threat Intelligence Management. In the logs I can see that some data was downloaded successfully, but is there a way to know exactly which data?
Hi Team, We are planning to host the Deployment Master server and two Splunk Heavy Forwarder servers in our on-prem Nutanix environment. Could you please provide the recommended hardware requirements for hosting these servers? Based on your input, we will plan and provision the necessary hardware. The primary role of the Deployment Master server will be to create custom apps and collect data from client machines using Splunk Universal Forwarder. For the Heavy Forwarders, we will be installing multiple add-ons to configure and fetch data from sources such as Azure Storage (Table, Blob), O365 applications, Splunk DB Connect, Qualys, AWS, and client machine data parsing. We are looking for the minimum, moderate, and maximum hardware requirements as recommended by Splunk Support to host the Splunk DM and HF servers in the Nutanix environment. If there are any support articles or documentation available, that would be greatly appreciated. Thank you!
Hello Splunkers!! Could you please help me optimize the query below? The customer says that dedup is consuming a lot of resources. What should I change so that the whole query is optimized?

index=abc sourcetype=abc _tel type=TEL (trigger=MFC_SND OR trigger=FMC_SND) telegram_type=CO order_type=TO area=D10 aisle=A01 *1000383334*
| rex field=_raw "(?P<Ordernumber>[0-9]+)\[ETX\]"
| fields _time area aisle section source_tel position destination Ordernumber
| join area aisle
    [ inputlookup isc where section=""
    | fields area aisle mark_code
    | rename area AS area aisle AS aisle]
| lookup movement_type mark_code source AS source_tel position AS position destination AS destination OUTPUT movement_type
| fillnull value="Unspecified" movement_type
| eval movement_category = case(
    movement_type like "%IH - LH%", "Storage",
    movement_type like "%LH - R%", "Storage",
    movement_type like "%IH - IH%", "Storage",
    movement_type like "%R - LH%", "Retrieval",
    movement_type like "%LH - O%", "Retrieval",
    1 == 1, "Unknown"
  )
| fields - source_tel position destination
| dedup Ordernumber movement_category
| stats count AS orders by area aisle section movement_category movement_type Ordernumber _raw
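As one possible direction only (a sketch that assumes Ordernumber plus movement_category is the intended uniqueness key, and that is not guaranteed to be equivalent for every data shape), the final dedup can be folded into a stats that keeps one row per key:

| fields - source_tel position destination
| stats latest(_raw) AS _raw latest(section) AS section latest(movement_type) AS movement_type by area aisle Ordernumber movement_category
| stats count AS orders by area aisle section movement_category movement_type Ordernumber _raw

Replacing the join subsearch with a direct | lookup against the isc lookup, if its section filter can be handled another way, would also likely save more than the dedup change alone.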
Hello, I would like to know whether it's possible to set up a lot of automation brokers in a single instance within the same tenant, or is it only one per tenant? My main use case is to access and act upon many on-prem clients from a few SOAR Cloud instances (clients are already merged into groups of clients, so I do not want to re-split them into one tenant per client). PS: I could not find details about running multiple automation brokers in either the Splunk SOAR or the Splunk Automation Broker documentation. I assume it's possible based on the API and the "id" field for the broker; I just want to confirm it. Thanks!
Hi, I am using a classic dashboard. I have the 2 input boxes below (SRC_Condition and Source IP) to filter src_ip. By default, we can only place input boxes next to one another. How can I align these 2 on top of one another? Splunk doesn't allow us to drag and drop them on top of each other.
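For illustration, one way classic Simple XML stacks inputs vertically is to give each input its own row and panel instead of keeping both in the fieldset; a sketch with assumed token names and choices:

<fieldset submitButton="false"></fieldset>
<row>
  <panel>
    <input type="dropdown" token="src_condition" searchWhenChanged="true">
      <label>SRC_Condition</label>
      <choice value="*">All</choice>
      <default>*</default>
    </input>
  </panel>
</row>
<row>
  <panel>
    <input type="text" token="src_ip" searchWhenChanged="true">
      <label>Source IP</label>
      <default>*</default>
    </input>
  </panel>
</row>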
Hi, I'm trying to drill down on a table using two different input values (from two radio button inputs). When I have input from one radio button, it works fine. For example, if I have this statement in the drilldown tag of the table, it works perfectly:

<set token="tokenNode">$click.value$</set>

However, when I add a second set token statement, it just says "No results found". I tried both click.value and click.value2.

Option 1:
<set token="tokenNode">$click.value$</set>
<set token="tokenSwitch">$click.value$</set>

Option 2:
<set token="tokenNode">$click.value$</set>
<set token="tokenSwitch">$click.value2$</set>
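For reference, when two different columns of the clicked row need to drive two tokens, a common pattern is to reference the columns by name with $row.<fieldname>$ rather than the positional tokens; a sketch with hypothetical column names node and switch:

<drilldown>
  <set token="tokenNode">$row.node$</set>
  <set token="tokenSwitch">$row.switch$</set>
</drilldown>

$click.value$ holds the value of the leftmost column and $click.value2$ the value of the clicked cell, so two set statements based on them can easily end up carrying the same or an unexpected value.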
Hi Splunk Experts, can you please let me know how we can calculate the max and avg TPS over the last 3 months, along with the exact time of occurrence? I came up with the query below, but it shows an error because the event count is greater than 50,000. Can anyone please help or guide me on how to overcome this issue?

index=XXX "attrs"=traffic NOT metas
| timechart span=1s count AS TPS
| eventstats max(TPS) as MAX_TPS
| eval Peak_Time=if(MAX_TPS==TPS,_time,null())
| stats avg(TPS) as AVG_TPS first(MAX_TPS) as MAX_TPS first(Peak_Time) as Peak_Time
| fieldformat Peak_Time=strftime(Peak_Time,"%x %X")
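For framing, one sketch that avoids timechart's bucketing limit builds the per-second counts with bin and stats and collapses them in one pass; whether this stays inside the environment's result limits for three months at 1-second granularity is a separate question, and a scheduled summary search per day is often more practical:

index=XXX "attrs"=traffic NOT metas
| bin _time span=1s
| stats count AS TPS by _time
| eventstats max(TPS) AS MAX_TPS avg(TPS) AS AVG_TPS
| where TPS==MAX_TPS
| stats first(AVG_TPS) AS AVG_TPS first(MAX_TPS) AS MAX_TPS min(_time) AS Peak_Time
| fieldformat Peak_Time=strftime(Peak_Time,"%x %X")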
I have the Splunk search below, which gives the top 10 only for a particular day, and I know the reason why. How can I tweak it to get the top 10 for each date? I.e., if I run the search on 14-Oct, the output must include 10-Oct, 11-Oct, 12-Oct, and 13-Oct, each with the top 10 table names with the highest insert sum.

index=myindex RecordType=abc DML_Action=INSERT earliest=-4d
| bin _time span=1d
| stats sum(numRows) as count by _time, table_Name
| sort limit=10 +_time -count

Thanks in advance
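For reference, the usual top-N-per-group pattern (a sketch keeping the poster's field names) sorts within each day and then ranks with streamstats:

index=myindex RecordType=abc DML_Action=INSERT earliest=-4d
| bin _time span=1d
| stats sum(numRows) AS count by _time, table_Name
| sort 0 +_time -count
| streamstats count AS rank by _time
| where rank <= 10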
Hello there, our shop uses Proofpoint VPN for our remote users to access on-prem resources. I've been looking on Splunkbase to see if there's a published app, but I don't see any add-on for VPN data ingestion. I see there's a Proofpoint email security add-on, but it doesn't seem to relate to VPN logs. Any ideas what add-ons/apps will work for it? Thanks.
In release 9.2.2403 I see that: "You can customize the text color of dashboard panel titles and descriptions with the titleColor and descriptionColor options in the source code..." But I'm not sure how to modify the source code appropriately to make this work. If I have this basic starting point:

{
  "type": "splunk.table",
  "title": "Sample title for testing color",
  "options": {},
  "context": {},
  "containerOptions": {},
  "showProgressBar": false,
  "showLastUpdated": false
}

Where can I insert titleColor? My Splunk Cloud version is 9.2.2403.108.
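My reading of that release note (worth verifying against the Dashboard Studio documentation for 9.2.2403) is that titleColor and descriptionColor are visualization options, so they would sit inside the "options" object of the panel definition, roughly like this; the color values are arbitrary examples:

{
  "type": "splunk.table",
  "title": "Sample title for testing color",
  "options": {
    "titleColor": "#FF0000",
    "descriptionColor": "#5C33FF"
  },
  "context": {},
  "containerOptions": {},
  "showProgressBar": false,
  "showLastUpdated": false
}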
Trying to use syslog-ng with the latest Splunk Enterprise. I am getting the error "Failed to acquire /run/systemd/journal/syslog socket, disabling systemd-syslog source" when I try to run the service manually. This error prevents me from running the syslog-ng service via systemctl during bootup. Any idea or help would be appreciated.
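For context, when syslog-ng only relays network syslog into files for Splunk to monitor, it does not need the local system()/systemd-syslog source that produces that error; a minimal sketch of such a config (version line, port, and paths are assumptions, not the poster's setup):

@version: 4.2
source s_net {
    network(transport("udp") port(514));
    network(transport("tcp") port(514));
};
destination d_remote {
    file("/var/log/remote/${HOST}/messages" create-dirs(yes));
};
log { source(s_net); destination(d_remote); };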
I know that not every feature in Dashboard Studio has been exposed in the UI yet. I see that you can set tokens on interaction with visualizations but I'm not seeing anything similar for inputs. Does anyone know if there is a change event handler for inputs in Dashboard Studio like there is in the XML dashboards? I've not seen anything in the docs, but I could just be looking in the wrong place. Thanks.
Hi all, I have this calculation, and at the end I am using where to get only what I need. Splunk suggests putting this into the base search:

index=xyz AND source=abc AND sourcetype=S1 AND client="BOFA" AND status_code

How do I get this to return only the status codes that are >=199 and <300 (these belong to my success bucket) or >=499 (these belong to my error bucket)?

| eval Derived_Status_Code = case(
    status_code>=199 AND status_code<300, "Success",
    status_code>=499, "Errors",
    1=1, "Others" ``` I do not need anything that is not in the above conditions ```
  )
| table <>
| where Derived_Status_Code IN ("Errors", "Success")

I want to avoid the where and push this into the base search using AND. Thank you so much for your time.
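A sketch of pushing that filter into the base search (keeping the poster's thresholds exactly as written; numeric comparisons work there as long as status_code is available as a search-time field):

index=xyz source=abc sourcetype=S1 client="BOFA" ((status_code>=199 AND status_code<300) OR status_code>=499)
| eval Derived_Status_Code = case(
    status_code>=199 AND status_code<300, "Success",
    status_code>=499, "Errors"
  )
| table _time status_code Derived_Status_Code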
I'm looking for the average CPU utilization from 10+ hosts over a fixed period last month. However, every time I refresh the URL or the metrics, the number changes drastically. When I do the same for 2 other hosts, the number remains the same between refreshes. Is it doing sampling somewhere? If so, where can I disable the sampling config?
I'm trying to configure the Splunk Universal Forwarder to send logs to Logstash. I only have access to the Universal Forwarder (not a Heavy Forwarder), and I need to forward audit logs from several databases, including MySQL, PostgreSQL, MongoDB, and Oracle. So far, I've been able to send TCP syslog to Logstash using the Universal Forwarder. Additionally, I've successfully connected to MySQL using Splunk DB Connect, but I'm not receiving any logs from it in Logstash. I would appreciate any advice on forwarding database audit logs through the Universal Forwarder to Logstash in real time, or is there any provision for creating a sink or something? Any help or examples would be great! Thanks in advance.
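For reference, a Universal Forwarder sends to a non-Splunk receiver such as Logstash through a raw tcpout group in outputs.conf; a minimal sketch (hostname and port are placeholders):

[tcpout]
defaultGroup = logstash_group

[tcpout:logstash_group]
server = logstash.example.com:5514
sendCookedData = false

Note that DB Connect itself does not run on a Universal Forwarder, so the database audit logs would still need to reach the forwarder as files or syslog before this output applies.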
Hi, we have data from Change Auditor coming in via HEC set up on a Heavy Forwarder. This HF instance was upgraded to version 9.2.2. After that, I am seeing a difference in the way Splunk displays new events on the SH: it is now converting UTC -> PST. I ran a search for the previous week, and for those events it converts the timestamp correctly, from UTC -> Eastern. I am a little confused, since both searches are done from the same search head against the same set of indexers. If there were a TZ issue, wouldn't Splunk have converted both incorrectly? I also ran the same searches on an indexer with identical output: recent events show in PST, whereas older events continue to show as EST. In the example screenshots, the previous week's events show the expected UTC -> Eastern conversion, while recent events show a UTC -> PST conversion instead. I did test this manually via Add Data, and Splunk formats it correctly to Eastern. How can I troubleshoot why recent events in search are showing a PST conversion? My current TZ setting on the SH is still set to Eastern Time. I also confirmed that the system time for the HF, indexers, and search heads is set to Eastern. Thanks
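For reference, a per-sourcetype timezone override lives in props.conf on the parsing tier (the HF here) and only applies when the incoming HEC events carry a text timestamp rather than an epoch "time" field; a minimal sketch with an assumed sourcetype name:

[changeauditor:hec]
# interpret raw timestamps that carry no zone information as UTC
TZ = UTC

Comparing strftime(_time, ...) with strftime(_indextime, ...) for one recent and one older event can also show whether the shift happened at parse time or only at display time.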