Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi @GaryZ, As far as I understand, this is not possible with Dashboard Studio, so the best solution there would be to have both charts in the dashboard, but only one displaying depending on the token. However, you can do it with Classic Dashboards (i.e. Simple XML dashboards). Here's an example:

<form version="1.1" theme="light">
  <label>Splunk answers</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="chart" searchWhenChanged="true">
      <label>Chart Style</label>
      <choice value="line">Line Chart</choice>
      <choice value="column">Bar Chart</choice>
      <default>line</default>
      <initialValue>line</initialValue>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Chart</title>
      <chart>
        <search>
          <query>| gentimes start=-20 | eval sample=random()%100 | eval _time = starttime | timechart span=1d max(sample) as value</query>
          <earliest>-20d@d</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="charting.chart">$chart$</option>
        <option name="charting.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
</form>

The trick here is to create a token with the value of the chart type you'd like to show ("line" or "column") and then use that token in the XML:

<option name="charting.chart">$chart$</option>

This might get annoying to develop, though, as you can't edit the chart while this value is set. You can always change it while editing and then change it back when you're done.
Thanks, Tejas and Rich...   Very much appreciated.
And if you then want to make that a bar chart, replace the "| fields - c" at the end with:

| fields myFIELD
| mvexpand myFIELD
| eval count=tonumber(myFIELD)
Quite literally:

| makeresults
| fields - _time
| eval min = 0.442
| eval max = 0.507
| eval mean = 0.4835625
| eval stdev = 0.014440074377630105
| eval count = 128
| eval pi = 3.141592653589793238462
| eval min = printf("%.3f", mean - 3.0 * stdev) ``` printf used as a rounding function ```
| eval max = printf("%.3f", mean + 3.0 * stdev)
| eval x = min
| eval interval = (max - min) / (count - 1)
| eval c = mvrange(0, count, 1)
| foreach c mode=multivalue
    [ | eval y = (1.0/(stdev * sqrt(2.0 * pi))) * exp(-0.5*(pow(((x - mean) / stdev), 2))),
      myFIELD = mvappend(myFIELD, printf("%.3f", y)),
      x = x + interval ]
| fields - c
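If it helps to sanity-check the arithmetic, here is a rough Python equivalent of that search; the 128-point count and the mean/stdev values mirror the SPL above, but this is just a sketch for verification, not Splunk code:

```python
import math

def normal_curve(mean, stdev, count=128):
    """Sample the normal PDF over mean +/- 3 standard deviations,
    mirroring the mvrange/foreach loop in the SPL above."""
    lo = mean - 3.0 * stdev
    hi = mean + 3.0 * stdev
    interval = (hi - lo) / (count - 1)
    points = []
    for i in range(count):
        x = lo + i * interval
        # Normal probability density function, same formula as the eval
        y = (1.0 / (stdev * math.sqrt(2.0 * math.pi))) * math.exp(-0.5 * ((x - mean) / stdev) ** 2)
        points.append((x, y))
    return points

pts = normal_curve(mean=0.4835625, stdev=0.014440074377630105)
```

As a quick check, the largest y should occur at the sample point closest to the mean, which is what makes the resulting chart bell-shaped.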
Yes, it's very easy. Just edit the chart type setting in the XML to use a token, and then in your input give the appropriate options, e.g.

<panel>
  <input type="dropdown" token="viz_type" searchWhenChanged="true">
    <label>What viz type</label>
    <choice value="pie">Pie</choice>
    <choice value="bar">Bar</choice>
    <choice value="line">Line</choice>
    <choice value="column">Column</choice>
  </input>
  <chart>
    <search>
      <query>| makeresults count=5000
| eval car=mvindex(split("Volvo,Mercedes,VW,Porsche",","),random() % 4)
| stats count by car</query>
    </search>
    <option name="charting.chart">$viz_type$</option>
    <option name="charting.drilldown">all</option>
  </chart>
</panel>
What's your SPL?
I am not sure how you managed to create that, because that XML is completely broken and is not a valid dashboard. Your <choice> values are not valid XML; a value attribute cannot contain multiple quoted strings, as in:

<choice value="appdev1host","logdev1host","cordev1host">DEV1</choice>

Why don't you just make your choice value something like <choice value="*dev1host">DEV1</choice> and so on? Also, I'm not sure what you are trying to achieve with your SPL: are "Total count" and "Incoming count" fields in your data? Using appendcols is not a good technique here, as you are repeating an almost identical search, which is not necessary. If you share an example of your data, I can help suggest a correct search.
summaryindex and collect are synonyms; I believe summaryindex is just an alias for the documented collect command.

Your understanding is correct re the two searches: (1) happens before (2), and (2) can be run as often as needed in the same day until (1) happens again the following day.

That link is about moving existing CSV contents to the KV store. You don't need a CSV to get data into a lookup. You can simply run:

search data | outputlookup kv_store_lookup

Note that a KV store lookup is a lookup definition, not a lookup table file. A CSV is a lookup table file, but it can also have a definition associated with it (and that's good practice), whereas a KV store lookup just requires the definition and an associated collection. You can create collections using the Splunk app for lookup editing: https://docs.splunk.com/Documentation/SplunkCloud/latest/Knowledge/DefineaKVStorelookupinSplunkWeb
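For reference, a KV store collection and its lookup definition can also be declared by hand in an app's configuration; the stanza and field names below are made-up placeholders, so adjust them to your own schema:

```ini
# collections.conf -- declares the KV store collection and its field types
[my_collection]
field.host = string
field.count = number

# transforms.conf -- the lookup definition that points at the collection
[kv_store_lookup]
external_type = kvstore
collection = my_collection
fields_list = _key, host, count
```

With that in place, the outputlookup example above writes straight into the collection via the definition name.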
Good morning,

I am currently instructing the Cluster Admin course, and a student has asked a question which, to my great surprise, doesn't seem to be covered anywhere. They have an indexer cluster and SHC on a single site, and they want to shut down everything for a planned power outage in their data centre.

What is the correct sequence and commands for doing this? My own guesses are:

1. Shut down everything that is sending data to Splunk first.
2. Place the indexer cluster in maintenance mode.
3. Shut down the deployment server, if in use.
4. Shut down the SHC deployer (splunk stop).
5. Shut down the SHC members (splunk stop?).
6. Shut down the indexer cluster members (not sure which variant of the commands to use here).
7. Shut down the cluster master last.

Restart is in the reverse order. Correct or not?

Thank you, Charles
Hi, I am facing an executable-permission issue with a few scripts in a Splunk app and am seeing these errors on various search heads. What is the best way to fix it? Can someone help with the script, or share a fix if you have come across this before?

Thanks in advance.
Can you illustrate how you obtain incomingcount, rejectedcount, invalidcount, topcount, trmpcount, and topiccount? As a habit, always share what the data looks like. If you just count stuff, there should be no "empty" column. (Also, are you asking about an empty row or an empty column?)

For example, if you have this data set:

Application | incoming | rejected | invalid   | top         | trmp | topic
Login       | come     |          | something | some other  |      |
Login       |          |          | some more | some stuff  |      |
Login       |          |          |           | stuff stuff |      |
Success     | come in  |          |           | more stuff  |      |

and you use this to produce those count columns:

| stats count(incoming) as incomingcount count(rejected) as rejectedcount count(invalid) as invalidcount count(top) as topcount count(trmp) as trmpcount count(topic) as topiccount by Application

Splunk should give you:

Application | incomingcount | rejectedcount | invalidcount | topcount | trmpcount | topiccount
Login       | 1             | 0             | 2            | 3        | 0         | 0
Success     | 1             | 0             | 0            | 1        | 0         | 0

Here is my data emulation to produce that mock input:

| makeresults format=csv data="Application, incoming, rejected, invalid, top, trmp, topic
Login, come, , something, some other
Login, , , some more, some stuff
Login, , , , stuff stuff
Success, come in, , , more stuff"
As @ITWhisperer said, you only have access to the first result of the table in the <done> clause, but assuming you only have a single result, then you can set the token based on that very simply using <eval>:

<done>
  <eval token="tok_runtime">if($result.has_runtime$="Yes", "true", null())</eval>
</done>

If you have multiple results, then this would work:

<search>
  <query>
index="abc" sourcetype="abc" Info.Title="$Title$"
| spath output=Runtime_data path=Info.runtime_data
| eval has_runtime = if(isnotnull(Runtime_data), 1, 0)
| table _time, has_runtime
| eventstats max(has_runtime) as has_runtime
  </query>
  <done>
    <eval token="tok_runtime">if($result.has_runtime$>0, "true", null())</eval>
  </done>
</search>
Is there a good step-by-step, practical, hands-on, how-to guide, starting at the first step and ending at successful completion, for the following: ingest AWS CloudWatch logs into Splunk Enterprise running on an EC2 instance in that particular AWS environment.

I've read a lot of documents, tried different things, and followed a couple of videos, and I'm able to see CloudWatch configuration entries in my main index, but so far I have not gotten any CloudWatch logs.

I am not interested in a deep architectural understanding. I just want to start from the very beginning at the true step one, and end at the last step with logs showing up in my main index.

Also, the community "ask a question" page requires an "associated App", and I picked one from the available list, but I don't care which app it is; I just want to use the one that works.

Thank you very much in advance.
1. The transfer time is governed by two factors: 1) the speed of the network; and 2) the maxKBps setting in limits.conf.  The latter defaults to 256KBps (approximately), but setting it to zero disables the limit and makes the network the limiting factor.
1. The transfer time is governed by two factors: 1) the speed of the network; and 2) the maxKBps setting in limits.conf.  The latter defaults to 256KBps (approximately), but setting it zero disables the limit and makes the network the limiting factor. 2. The EPS rate is the data transmission rate divided by the size of the events.  Both of those numbers are unknown in this thread so EPS cannot be calculated.
We have a table where I see no data for a few columns. I tried fillnull value=0, but it's not working. This happens only when there is no count for the complete column. For example, for invalidcount we have data for Login but no data for the other applications, so zero values were automatically filled in; but for rejectedcount, trmpcount, and topiccount there is no data for any application, and the 0 value is not getting filled in.

Application  | incomingcount | rejectedcount | invalidcount | topcount | trmpcount | topiccount
Login        | 1             |               | 2            | 5        |           |
Success      | 8             |               | 0            | 2        |           |
Error        | 0             |               | 0            | 10       |           |
logout       | 2             |               | 0            | 4        |           |
Debug        | 0             |               | 0            | 22       |           |
error-state  | 0             |               | 0            | 45       |           |
normal-state | 0             |               | 0            | 24       |           |
Hello @AaronJaques @titchynz, I have posted a solution that should resolve the error you have mentioned. Please check the following link: Stack Overflow - Splunk DB Connect and Snowflake Integration Error
Hi @MattKr, Here's an option that will run from the UI.

| rest /services/data/indexes splunk_server=local
| stats count by title
| rename title as index
| map [| metadata type=sourcetypes index=$index$ | eval index="$index$"] maxsearches=100

In the first line, make sure splunk_server=<NAME OF INDEXER>; for Splunk Cloud, local is fine. Make maxsearches=XXX match the total number of indexes you have.

This uses the metadata command to get the sourcetypes, the earliest/latest times, and the number of matching events. The one drawback is that the index isn't included in the results, so I've set it up via the map command so it will run the metadata search for each index.

A couple of things to note:
- This will run as many searches as you have indexes, so be careful.
- The metadata search is lightning fast, as it only runs on the index metadata (hence the name), so there's no real data being brought back, just data about the index.
- You need to run it as an all-time search to get all of your data, so pick a time to do this to reduce any impact.

I ran the search on a small cloud environment with 52 indexes over all time and it completed in 4.9s. Give that a go.
| eval startTS=strptime(startTS, "%F %T.%3N%z")
| sort by startTS, thirdPartyId
| fieldformat startTS=strftime(startTS, "%F %T.%3N")
| streamstats range(startTS) as difference window=2 global=f by hashCode thirdPartyId
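For anyone unsure what that last streamstats does: with window=2 and a group-by, range(startTS) yields the gap between each event and the previous event in the same hashCode/thirdPartyId group (0 for the first event in a group, since its window holds only one value). A rough Python sketch of the same idea, with made-up group keys and timestamps:

```python
def pairwise_gaps(events):
    """events: list of (group_key, epoch_seconds), already sorted by time.
    Returns, per event, the time difference to the previous event in the
    same group, like | streamstats range(startTS) window=2 by <group_key>."""
    last = {}   # most recent timestamp seen per group
    gaps = []
    for key, ts in events:
        gaps.append(ts - last[key] if key in last else 0)
        last[key] = ts
    return gaps

# Hypothetical sorted stream: two interleaved groups "a" and "b".
events = [("a", 100.0), ("b", 101.0), ("a", 103.5), ("b", 110.0)]
```

Each group gets its first difference as 0, then the distance to its own previous event, regardless of interleaving.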
Also looking for this
@svukov Please have a look at the transaction command (after a sort on time, if needed): https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Transaction

| transaction thirdPartyId,hashcode maxspan=?