Yes, I want to do the hourly count (0-23) on the X axis.
X axis = hour of the day (stored in the field Time).
Y axis: 3 lines (count1, count2, count3).
count1: corresponds to the count of records for the current week at a particular hour.
count2: corresponds to the count of records for the current week - 1 at a particular hour.
count3: corresponds to the count of records for the current week - 2 at a particular hour.
The result should look like the attached screenshot.
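For what it's worth, a minimal sketch of one generic way to get that shape of result - the index name is made up, and it assumes the hour is derived from _time rather than from your existing Time field:

index=myindex earliest=-2w@w
| eval hour=strftime(_time, "%H")   ``` hour of the day, 00-23 ```
| eval series=case(_time >= relative_time(now(), "@w"), "count1",
                   _time >= relative_time(now(), "-1w@w"), "count2",
                   true(), "count3")   ``` which of the three weeks the event falls into ```
| chart count over hour by series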
Given that the number of orders is always 1 (as previously explained and shown in your screenshot), the dedup is not actually doing anything useful and can be removed. Removing it could affect the orders field in that it could become more than 1. This could be resolved either by evaluating it back to 1 after the stats command, or by using a distinct count:

| stats dc(Ordernumber) AS orders by area aisle section movement_category movement_type Ordernumber _raw
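For completeness, the other option (forcing the value back to 1 after stats) would look something like this minimal sketch - the base search is assumed and only the tail of the pipeline is shown:

... your base search ...
| stats count as orders by area aisle section movement_category movement_type Ordernumber _raw
| eval orders=1   ``` each Ordernumber represents a single order, so force the count back to 1 ```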
This seems a bit confused - drilldown happens when the user clicks on a cell in the table. In your instance, this appears to set two tokens to the same value (based on where the user clicked). Your search also uses the values of two input tokens. When either of these inputs is changed, the search will run again, using the new values of the tokens. This isn't drilldown. This is just how inputs and tokens work. Please can you try to give more concrete examples of what your events look like, what the rest of your dashboard looks like, what you would like to happen when the user interacts with your dashboard, etc.?
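To illustrate the difference (a minimal sketch with made-up choice values, not taken from your dashboard), a radio input that sets a token which a panel search then consumes looks like this - no drilldown element is involved:

<input type="radio" token="tokenNode">
  <label>Node</label>
  <choice value="node01">node01</choice>
  <choice value="node02">node02</choice>
  <default>node01</default>
</input>

<table>
  <search>
    <!-- changing the radio selection re-runs this search with the new token value -->
    <query>index=test_index_prod sourcetype="SPEC" node="$tokenNode$" | table node, name, class</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
</table>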
You will need to clarify what it is you are trying to do - do you want an hourly count, i.e. the x-axis is 0-23? If so, what have weekly counts got to do with it? What are count1, count2 and count3 in this respect? What does your source data look like and what do you want your results to look like?
@ITWhisperer Thanks for your response. As per your suggestion, I will take care of the join and replace it with the lookup command. I am adding screenshots of the results so that you can get a little more clarity. Below are the results from executing the above query - the order number is the same, but one entry is for "Storage" and the other is for "Retrieval". [screenshot of results] Job inspection while executing the above query: [screenshot of the job inspector] Do you have any suggestion on how I can replace dedup with a more optimized command?
We set up a SAML login with Azure AD for our self-hosted Splunk Enterprise. When we try to log in, we are redirected to https://<instance>.westeurope.cloudapp.azure.com/en-GB/account/login which displays a blank page with {"status":1}. So login seems to work somehow, but after that it gets stuck on this page, and in splunkd.log I can see the following error message:

"ERROR UiAuth [28137 TcpChannelThread] - user= action=login status=failure reason=missing-username"

so it sounds like there is maybe something wrong in the claims mapping? Here is my local/authentication.conf:

[roleMap_SAML]
admin = test

[splunk_auth]
constantLoginTime = 0.000
enablePasswordHistory = 0
expireAlertDays = 15
expirePasswordDays = 90
expireUserAccounts = 0
forceWeakPasswordChange = 0
lockoutAttempts = 5
lockoutMins = 30
lockoutThresholdMins = 5
lockoutUsers = 1
minPasswordDigit = 0
minPasswordLength = 8
minPasswordLowercase = 0
minPasswordSpecial = 0
minPasswordUppercase = 0
passwordHistoryCount = 24
verboseLoginFailMsg = 1

[authentication]
authSettings = saml
authType = SAML

[authenticationResponseAttrMap_SAML]
mail = http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress
realName = http://schemas.microsoft.com/identity/claims/displayname
role = http://schemas.microsoft.com/ws/2008/06/identity/claims/groups

[saml]
caCertFile = /opt/splunk/etc/auth/cacert.pem
clientCert = /opt/splunk/etc/auth/server.pem
entityId = <instance>.westeurope.cloudapp.azure.com
fqdn = https://<instance>.westeurope.cloudapp.azure.com
idpCertExpirationCheckInterval = 86400s
idpCertExpirationWarningDays = 90
idpCertPath = idpCert.pem
idpSLOUrl = https://login.microsoftonline.com/<tentantid>/saml2
idpSSOUrl = https://login.microsoftonline.com/<tentantid>/saml2
inboundDigestMethod = SHA1;SHA256;SHA384;SHA512
inboundSignatureAlgorithm = RSA-SHA1;RSA-SHA256;RSA-SHA384;RSA-SHA512
issuerId = https://sts.windows.net/<tentantid>/
lockRoleToFullDN = true
nameIdFormat = urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
redirectPort = 0
replicateCertificates = true
signAuthnRequest = false
signatureAlgorithm = RSA-SHA1
signedAssertion = true
sloBinding = HTTP-POST
sslPassword = <pw>
ssoBinding = HTTP-POST

Does anyone have a hint about what could be going wrong in our setup? Thanks in advance!
I have a table which is getting data from one of our indexes, somewhat like below:

<table>
  <title>Tech Spec Values for Selected Node:</title>
  <search>
    <query>index=test_index_prod sourcetype="SPEC" | eventstats max(rundate) as maxDate, max(runtime) as maxTime, count as fno | where rundate=maxDate AND runtime=maxTime | search node="$form.tokenNode$" outcome="$form.tokenSwitch" | table node, outcome, name, class, resource | sort node, name</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <option name="drilldown">cell</option>
  <drilldown>
    <set token="tokenNode">$click.value$</set>
    <set token="tokenSwitch">$click.value$</set>
  </drilldown>
</table>

And then I have two radio button input fields with token names tokenNode & tokenSwitch, each having different values. I want the drilldown to happen when either radio button value is selected by the user from the two radio button input fields.
Hi @ITWhisperer, that resolved my query.

Time = hour of the day
count1 = count of records for the current week
count2 = count of records for the current week - 1
count3 = count of records for the current week - 2

I need to restrict the X axis to the hours of the current day (today) only, but when I select the global time range as Today, count2 and count3 become blank. Is it possible to set the global time range to Last 30 days (to fetch the last 30 days of data) and still show the X axis as the hour of Time, 0-24 (one day)? Currently it shows 0-24 hours on the X axis many times (every hour in the last 30 days). Can you please help me with this request?
Hi @anandhalagaras1,
as I said, start with the default configuration (12 CPUs and 12 GB RAM) and analyze the machine load using the Monitoring Console and the queues. If the queues and the load aren't too high, keep the default configuration; otherwise, add more resources. It isn't possible to give a general configuration.
Ciao.
Giuseppe
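As a side note, a generic way to look at queue pressure outside the Monitoring Console is a search over metrics.log - this is just a sketch (the host filter is a placeholder, and the inline ``` comments need Splunk 8.0+):

index=_internal host=<your_hf> source=*metrics.log* group=queue
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)   ``` how full each queue sample is ```
| timechart span=15m perc90(fill_pct) by name   ``` persistently high values suggest the instance needs more resources ```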
Extending a previously answered question is perhaps not the best way of getting your question answered, particularly when the extension is a bit vague. Please start a new question with more specifics about your particular use case and the difficulties you are having, i.e. what you would want the solution to look like.
@gcusello, Thank you for your swift response. For the Deployment Master server, we have around 1,000+ client machines in our environment, so it would be helpful if you could give me the recommended hardware specifications for this setup. As for the Heavy Forwarders, we will be ingesting approximately 40 GB of data daily across both HF servers. The primary data sources include Microsoft Azure Storage Table and Blob using the Splunk Add-On for Microsoft Cloud Services, the Qualys Technology Add-On, Splunk DB Connect, and data parsing for approximately 120+ client machines per Heavy Forwarder. What would be the recommended hardware specifications for these servers?
There is no single good answer to such a question. A Deployment Server (not Deployment Master), depending on your environment size and configuration parameters, can run perfectly well on a relatively small server (like 4 CPUs and 8 GB; if you disable the GUI, probably even smaller) but may need to be load-balanced over several quite big machines if you have many clients and many frequently changing apps. (Technically, you can have multiple separate DS instances for separate segments of your deployment, but that makes app management more troublesome.) As for the HF, the good thing is that you don't have to have just one HF in your environment. So you can start with a moderately sized HF (like a reference all-in-one server) and either scale up by adding cores/memory if you start lacking resources, or scale out by adding more HF instances and migrating some inputs there.
Hi everyone, I have configured the OTX AlienVault TAXII source in Threat Intelligence Management. As I can see in the logs, some data was downloaded successfully, but is there a way to know exactly which data?
I increased the limit several times, but eventually I got the same error. Do you know a way to see what data was received, for example by running a search?
Hi @maspiro, to my knowledge, restarting the search is the only way to reset a token. Ciao. Giuseppe
Hi @anandhalagaras1,
there isn't any formal requirement from Splunk about the Deployment Server and Heavy Forwarders; the only requirements are those for a normal stand-alone Splunk server: 12 CPUs and 12 GB RAM.
From my experience, I could add that for the DS it depends on the number of clients: if there aren't many (some hundreds), you could also get by with less CPU and RAM (8+8); in addition, since some time ago, you can also use more than one DS.
It's different for HFs: if they have to do a hard job parsing logs (regexes), it's better to give them more resources (especially CPUs); in one heavy project, where our 4 HFs had to receive and parse hundreds of GB every day, I used 24 CPUs and 64 GB RAM for each one.
My hint is to start with the normal reference hardware (12+12), analyze machine loads and queues, and eventually add more resources (we're usually speaking of virtual servers).
In addition, if you have to receive syslogs, don't use Splunk for them; use an rsyslog (or syslog-ng) server, and then Splunk can read the written files.
Ciao.
Giuseppe
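As an illustration of that last point (a minimal sketch - the directory layout, index and sourcetype are made up, so adjust them to your own rsyslog configuration), the HF would then simply monitor the files rsyslog writes:

# inputs.conf on the HF
# rsyslog writes one directory per sending host, e.g. /var/log/remote-syslog/<hostname>/syslog.log
[monitor:///var/log/remote-syslog/*/*.log]
index = network
sourcetype = syslog
host_segment = 4
disabled = false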
Hi Giuseppe! It's very useful, but this solution needs the search to be restarted. What I need is this: one panel is related via a token to another; when I click on a field in the second panel, the first panel shows only the related records. How can I reset the token so that the first panel shows all the records again, without restarting the search? Thanks a lot!
Ok, what you're describing is more of a SOAR functionality. If you wanted to do something like that within Splunk Enterprise you'd have to implement it yourself. And I'm pretty sure an app doing that would not pass vetting on Cloud.
No, more like this:

index=myindex RecordType=abc DML_Action=INSERT earliest=-4d
| bin _time span=1d
| stats sum(numRows) as count by _time, table_Name
| sort 0 +_time -count
| streamstats count as row by _time
| where row <= 10
| streamstats latest(count) as previous by table_Name window=1 global=f current=f
| eval increase=round(100*(count-previous)/previous,0)

The previous answer was based on the green table - since this is based on my first answer, combining the two should work for you (I removed the extra sort as it is redundant given the first sort).
Hi Team,
We are planning to host the Deployment Master server and two Splunk Heavy Forwarder servers in our on-prem Nutanix environment. Could you please provide the recommended hardware requirements for hosting these servers? Based on your input, we will plan and provision the necessary hardware.
The primary role of the Deployment Master server will be to create custom apps and collect data from client machines using the Splunk Universal Forwarder. For the Heavy Forwarders, we will be installing multiple add-ons to configure and fetch data from sources such as Azure Storage (Table, Blob), O365 applications, Splunk DB Connect, Qualys, AWS, and client machine data parsing.
We are looking for the minimum, moderate, and maximum hardware requirements as recommended by Splunk Support to host the Splunk DM and HF servers in the Nutanix environment. If there are any support articles or documentation available, that would be greatly appreciated. Thank you!