All Posts


We set up SAML login with Azure AD for our self-hosted Splunk Enterprise. When we try to log in we are redirected to https://<instance>.westeurope.cloudapp.azure.com/en-GB/account/login, which displays a blank page with {"status":1}. So login seems to work somehow, but after that it gets stuck on this page, and in splunkd.log I can see the following error message: "ERROR UiAuth [28137 TcpChannelThread] - user= action=login status=failure reason=missing-username", so it sounds like there is maybe something wrong in the claims mapping? Here is my local/authentication.conf:

[roleMap_SAML]
admin = test

[splunk_auth]
constantLoginTime = 0.000
enablePasswordHistory = 0
expireAlertDays = 15
expirePasswordDays = 90
expireUserAccounts = 0
forceWeakPasswordChange = 0
lockoutAttempts = 5
lockoutMins = 30
lockoutThresholdMins = 5
lockoutUsers = 1
minPasswordDigit = 0
minPasswordLength = 8
minPasswordLowercase = 0
minPasswordSpecial = 0
minPasswordUppercase = 0
passwordHistoryCount = 24
verboseLoginFailMsg = 1

[authentication]
authSettings = saml
authType = SAML

[authenticationResponseAttrMap_SAML]
mail = http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress
realName = http://schemas.microsoft.com/identity/claims/displayname
role = http://schemas.microsoft.com/ws/2008/06/identity/claims/groups

[saml]
caCertFile = /opt/splunk/etc/auth/cacert.pem
clientCert = /opt/splunk/etc/auth/server.pem
entityId = <instance>.westeurope.cloudapp.azure.com
fqdn = https://<instance>.westeurope.cloudapp.azure.com
idpCertExpirationCheckInterval = 86400s
idpCertExpirationWarningDays = 90
idpCertPath = idpCert.pem
idpSLOUrl = https://login.microsoftonline.com/<tentantid>/saml2
idpSSOUrl = https://login.microsoftonline.com/<tentantid>/saml2
inboundDigestMethod = SHA1;SHA256;SHA384;SHA512
inboundSignatureAlgorithm = RSA-SHA1;RSA-SHA256;RSA-SHA384;RSA-SHA512
issuerId = https://sts.windows.net/<tentantid>/
lockRoleToFullDN = true
nameIdFormat = urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
redirectPort = 0
replicateCertificates = true
signAuthnRequest = false
signatureAlgorithm = RSA-SHA1
signedAssertion = true
sloBinding = HTTP-POST
sslPassword = <pw>
ssoBinding = HTTP-POST

Does anyone have a hint what could be going wrong in our setup? Thanks in advance!
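For anyone hitting the same reason=missing-username failure, it can help to pull all related SAML errors from the internal index before touching the claims mapping. A minimal sketch of such a search; the component names are assumptions based on typical splunkd logging, not taken from the post above:

index=_internal sourcetype=splunkd log_level=ERROR (component=Saml OR component=UiAuth OR component=AuthenticationManagerSAML)
| table _time component event_message
| sort - _time

The missing-username error usually means splunkd could not derive a username from the SAML Subject NameID, so comparing these errors with the attribute names actually sent by Azure AD is a reasonable first check.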
I have a table which is getting data from one of our indexes, somewhat like below:

<table>
  <title>Tech Spec Values for Selected Node:</title>
  <search>
    <query>index=test_index_prod sourcetype="SPEC"
| eventstats max(rundate) as maxDate, max(runtime) as maxTime, count as fno
| where rundate=maxDate AND runtime=maxTime
| search node="$form.tokenNode$" outcome="$form.tokenSwitch$"
| table node, outcome, name, class, resource
| sort node, name</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <option name="drilldown">cell</option>
  <drilldown>
    <set token="tokenNode">$click.value$</set>
    <set token="tokenSwitch">$click.value$</set>
  </drilldown>
</table>

I then have two radio button input fields with the token names tokenNode and tokenSwitch, each with different values. I want the drilldown to happen when either of the radio button values is selected by the user from the two radio button input fields.
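One common way to set a different token depending on which column was clicked is to use condition elements inside the drilldown. A minimal sketch, assuming the node and outcome columns are the ones that should drive tokenNode and tokenSwitch (the field names come from the table above, the rest is an assumption, not the poster's dashboard):

<drilldown>
  <!-- clicked cell is in the node column -->
  <condition field="node">
    <set token="tokenNode">$click.value2$</set>
  </condition>
  <!-- clicked cell is in the outcome column -->
  <condition field="outcome">
    <set token="tokenSwitch">$click.value2$</set>
  </condition>
  <!-- default: ignore clicks on other columns -->
  <condition></condition>
</drilldown>

With drilldown set to cell, $click.value2$ carries the value of the clicked cell, while $click.value$ carries the value of the first column in the clicked row.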
Hi @ITWhisperer, it resolved my query.
Time = corresponds to the hour of Time
count1 = count of records of the current week
count2 = count of records of the current week - 1
count3 = count of records of the current week - 2
I need to restrict the X axis to the hours of the current day (today) only, but when I select the global time range as Today, count2 and count3 become blank. Is it possible to select the global time range as Last 30 days to fetch the last 30 days of data and still view the X axis as the hour of Time, 0-24 (one day)? Currently it shows 0-24 hours on the X axis many times (every hour in the last 30 days). Can you please help me with this request?
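A minimal sketch of the hour-of-day grouping, assuming a generic index name; the idea is to bucket each event both by its hour (00-23) and by how many weeks back it falls, so the X axis stays 0-24 regardless of how wide the time range is:

index=my_index earliest=-21d@w latest=now
| eval hour=strftime(_time, "%H")
| eval weeks_back=floor((relative_time(now(), "@w") - relative_time(_time, "@w")) / 604800)
| chart count over hour by weeks_back

Here weeks_back=0 would correspond to count1 (current week), 1 to count2, and 2 to count3; index=my_index and the 21-day window are placeholders, not the poster's actual search.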
Hi @anandhalagaras1, as I said, start with the default configuration (12 CPUs and 12 GB RAM) and analyze the machine load and the queues using the Monitoring Console. If the queues and the load aren't too high, keep the default configuration; otherwise, add more resources. It isn't possible to give a general configuration. Ciao. Giuseppe
Extending a previously answered question is perhaps not the best way of getting your question answered, particularly when the extension is a bit vague. Please start a new question with more specifics about your particular use case and the difficulties you are having, i.e. what you would want the solution to look like.
@gcusello, thank you for your swift response. For the Deployment Master server, we have around 1,000+ client machines in our environment, so it would be helpful if you could share the recommended hardware specifications for this setup. As for the Heavy Forwarders, we will be ingesting over 40 GB of data daily across both HF servers. The primary data sources include Microsoft Azure Storage Table and Blob using the Splunk Add-on for Microsoft Cloud Services, the Qualys Technology Add-on, Splunk DB Connect, and data parsing for approximately 120+ client machines per Heavy Forwarder. What would be the recommended hardware specifications for these servers?
There is no single good answer to such a question. A Deployment Server (not Deployment Master), depending on your environment size and configuration parameters, can run perfectly well on a relatively small server (like 4 CPUs and 8 GB; if you disable the GUI, probably even smaller) but may need to be load-balanced over several quite big machines if you have many clients and many frequently changing apps (technically, you can have multiple separate DS instances for separate segments of your deployment, but that makes app management more troublesome). As for the HF, the good thing is that you don't have to have just one HF in your environment. So you can start with a moderately sized HF (like a reference all-in-one server) and either scale up by adding cores/memory if you start lacking resources, or add more HF instances and migrate some inputs there.
Hi everyone, I have configured the OTX AlienVault TAXII source in Threat Intelligence Management. As I can see in the logs, some data was downloaded successfully, but is there a way to know which data exactly?
I increased the limit several times, but eventually I got the same error. Do you know a way to see what data was received, for example, by doing a search?
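If this is Splunk Enterprise Security's Threat Intelligence Management, the parsed indicators normally end up in the threat intel KV Store collections, which can be read back with inputlookup. A minimal sketch, assuming the ip_intel collection and that the source name contains "otx"; the collection and field names are assumptions, not confirmed from this thread:

| inputlookup ip_intel
| search threat_key=*otx*
| table ip description threat_key

Similar collections (file_intel, http_intel, email_intel, and so on) can be checked the same way to see which indicator types were actually loaded.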
Hi @maspiro, to my knowledge, restarting the search is the only way to reset a token. Ciao. Giuseppe
Hi @anandhalagaras1, there isn't any formal requirement from Splunk for the Deployment Server and Heavy Forwarders; the only requirements are those for a normal stand-alone Splunk server: 12 CPUs and 12 GB RAM. From my experience, I can add that, for the DS, it depends on the number of clients: if there aren't many (a few hundred), you could also get by with fewer CPUs and less RAM (8+8); in addition, since recent versions, you can also use more than one DS. It's different for HFs: if they have to do a hard job parsing logs (regexes), it's better to give them more resources (especially CPUs); in one heavy project, where our 4 HFs had to receive and parse hundreds of GB every day, I used 24 CPUs and 64 GB RAM for each one. My hint is to start with the normal reference hardware (12+12), analyze machine loads and queues, and eventually add more resources (we're usually speaking of virtual servers). In addition, if you have to receive syslogs, don't use Splunk for them; use an rsyslog (or syslog-ng) server and then let Splunk read the written files. Ciao. Giuseppe
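As a sketch of the queue analysis mentioned above (standard metrics.log fields, nothing specific to this environment), the fill ratio of each ingestion queue can be charted like this:

index=_internal source=*metrics.log* group=queue
| eval pct_full=round(current_size_kb / max_size_kb * 100, 1)
| timechart span=10m max(pct_full) by name

Queues that sit near 100% for long periods are the usual sign that the HF needs more CPU or that the parsing configuration (props/transforms) needs tuning.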
Hi Giuseppe! It's very useful, but this solution requires restarting the search. My need is that one panel is related via a token to another: when I click on a field in the second panel, the first shows only the related records. How can I reset the token so that the first panel shows all the records again without restarting the search? Thanks a lot!
Ok, what you're describing is more of a SOAR functionality. If you wanted to do something like that within Splunk Enterprise you'd have to implement it yourself. And I'm pretty sure an app doing that would not pass vetting on Cloud.
No, more like this:

index=myindex RecordType=abc DML_Action=INSERT earliest=-4d
| bin _time span=1d
| stats sum(numRows) as count by _time, table_Name
| sort 0 +_time -count
| streamstats count as row by _time
| where row <= 10
| streamstats latest(count) as previous by table_Name window=1 global=f current=f
| eval increase=round(100*(count-previous)/previous,0)

The previous answer was based on the green table; since this is based on my first answer, combining the two should work for you (I removed the extra sort as it is redundant given the first sort).
Hi Team, We are planning to host the Deployment Master server and two Splunk Heavy Forwarder servers in our on-prem Nutanix environment. Could you please provide the recommended hardware requirements for hosting these servers? Based on your input, we will plan and provision the necessary hardware. The primary role of the Deployment Master server will be to create custom apps and collect data from client machines using Splunk Universal Forwarder. For the Heavy Forwarders, we will be installing multiple add-ons to configure and fetch data from sources such as Azure Storage (Table, Blob), O365 applications, Splunk DB Connect, Qualys, AWS, and client machine data parsing. We are looking for the minimum, moderate, and maximum hardware requirements as recommended by Splunk Support to host the Splunk DM and HF servers in the Nutanix environment. If there are any support articles or documentation available, that would be greatly appreciated. Thank you!
@ITWhisperer did you mean the final Splunk query would look like below?

index=myindex RecordType=abc DML_Action=INSERT earliest=-4d
| bin _time span=1d
| stats sum(numRows) as count by _time, table_Name
| sort limit=10 +_time -count
| sort 0 _time
| streamstats latest(count) as previous by Table_Name window=1 global=f current=f
| eval increase=round(100*(count-previous)/previous,0)
Optimisation will usually depend on the data set(s) you are dealing with, which you haven't provided. Having said that, the dedup by Ordernumber and movement_category will mean that there is only one event for each unique combination of the values in these fields, which means the count from the stats will always be 1, so what is the point of doing the stats? Your join is to an inputlookup; can this be replaced by a simple lookup?
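For illustration, a minimal sketch of the lookup-instead-of-join pattern, with hypothetical lookup and field names (order_details.csv, customer, and region are assumptions, not from the original search):

index=my_index sourcetype=my_orders
| dedup Ordernumber movement_category
| lookup order_details.csv Ordernumber OUTPUT customer region

This replaces a pattern like | join type=left Ordernumber [ | inputlookup order_details.csv ]; lookup avoids the join subsearch limits and is generally cheaper when you only need to enrich events with a few reference fields.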
Hi @hazem, right now I can't find the parameter, also because I try to avoid changing it; the default value is usually the best solution. Ciao. Giuseppe
Hi @neerajs_81, good for you, see you next time! Maybe you could try the hint from @ITWhisperer to put the inputs in different rows, but always one by one in each panel. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @gcusello, could you please provide me with the stanza to change the interval used to read logs from a log file? For example, the MSSQL ERRORLOG file.
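In case it helps to frame the question: a plain monitor stanza in inputs.conf tails the file continuously and has no polling interval, while interval applies to scripted or modular inputs. A minimal sketch, where the path, sourcetype, index, and script name are assumptions for a default SQL Server install, not settings from this thread:

[monitor://C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Log\ERRORLOG]
sourcetype = mssql:errorlog
index = mssql

[script://./bin/my_script.py]
interval = 60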