All Posts

Sorry, I probably didn't express myself well. I want the wildcards to be taken into account. So, based on the table I posted as an example, I would want results like this:

title            totalEventCount   frozenTimePeriodInSecs   NumOfSearches
_audit           771404957         188697600                23348   (_audit + _*)
_configtracker   717               2592000                  22311   (_configtracker + _*)
_internal        7039169453        15552000                 24098   (_internal + _*)

@PickleRick What I mean is: the value of the TASKIDUPDATED field is always unique, so after applying the checkpoint value each event should be ingested only once, not multiple times.

Below are the settings I am currently using for DB Connect:

connection = VIn
disabled = 0
index = group_data
index_time_mode = current
interval = */10 * * * *
max_rows = 0
mode = rising
query = SELECT * FROM "WMCDB"."KLDGSF_ROUPOVERVIEW"\
WHERE TASKIDUPDATED < ?\
ORDER BY TASKIDUPDATED DESC
query_timeout = 30
sourcetype = overview_packgroup
tail_rising_column_init_ckpt_value = {"value":null,"columnType":null}
tail_rising_column_name = TASKIDUPDATED
tail_rising_column_number = 3
input_timestamp_column_number = 10
input_timestamp_format =

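For reference, the rising-column examples in the DB Connect documentation typically filter for values greater than the checkpoint and sort ascending, so the newest row returned becomes the next checkpoint. A sketch of that conventional shape, reusing the table and column names from the config above (shown only as the documented pattern, not as a verdict on this particular input):

query = SELECT * FROM "WMCDB"."KLDGSF_ROUPOVERVIEW"\
WHERE TASKIDUPDATED > ?\
ORDER BY TASKIDUPDATED ASC
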
Yes, you're right about timechart, but I wonder what the purpose is of rendering a timechart when there are no data points in any time span. I've certainly used an html panel instead of a null timechart. I can't see why you'd want to do that to display something empty.

Hi Team, we have a requirement to mask/filter data before ingestion in a Splunk Cloud environment. The customer has Splunk Cloud. I am reading through the Ingest Processor and Ingest Actions documentation. It sounds like both provide pretty much the same capability in Splunk Cloud. Is there any major difference between these? Thanks, SGS

Thanks @bowesmana , yes, this is the typical "solution" I've seen around, however it does not work on `timechart` and similar time-bucket-constrained expressions. Certainly if one is after just a solve for `stats`, this definitely does work. This is my query:

index=* source=squid_proxy_logs
| search (warn* OR error*) AND _raw!="*SendEcho*" AND (NOT url=*) AND _raw!="*setrlimit: RLIMIT_NOFILE*"
| timechart span=5m count(_raw) as hits

I've tried appendpipe, append etc. tricks with a variety of expressions such as:

| appendpipe [| makeresults | where hits=0]
| appendpipe [| makeresults | stats count(_raw) as count | where count=0 ]

and a few other alternates I've seen around, but all have the same issue: they work great when a single-vector stats result is null/empty, but with timechart this doesn't really play well, unfortunately.

I think the closest I can get is to makeresults myself into the spans and bins I need and then use a query to aggregate the counts into those predefined bins I've carved up. These bins would of course be generated from the search's timerange, so it would work for historical periods as well as realtime. I just need to rejig my query to do something like this so it always produces a fixed matrix/tabular output with the respective count values for each point in time, rather than trying to build a dataset from where there are just zero values to start with (as is the case if there are NO records matching)... so it kinda makes sense why this happens.

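A minimal sketch of that "pre-carved bins" idea, assuming the 5-minute span and the hits field from the query above (the appended subsearch only generates zero rows, so existing counts are unchanged):

index=* source=squid_proxy_logs
| search (warn* OR error*) AND _raw!="*SendEcho*" AND (NOT url=*) AND _raw!="*setrlimit: RLIMIT_NOFILE*"
| timechart span=5m count(_raw) as hits
| append
    [| makeresults
    | addinfo
    | eval _time=mvrange(info_min_time, info_max_time, 300)
    | mvexpand _time
    | eval hits=0
    | fields _time hits]
| bin _time span=5m
| stats sum(hits) as hits by _time

The addinfo command exposes the search time range as info_min_time and info_max_time, and mvrange carves that range into 300-second steps, so the zero rows track whatever timerange the search runs over. A bounded time range is assumed here; an "All time" search does not give mvrange a finite end point.
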
And a technique I use a reasonable amount in dashboards is to have a panel for results and a panel for no results hidden behind tokens, e.g.

<form version="1.1" theme="light">
  <label>tmp4</label>
  <fieldset submitButton="false">
    <input type="text" token="user" searchWhenChanged="true">
      <label>User</label>
    </input>
  </fieldset>
  <row>
    <panel>
      <html depends="$no_results$">
        <h1>No results found</h1>
      </html>
      <table depends="$has_results$">
        <search>
          <progress>
            <unset token="has_results"></unset>
            <unset token="no_results"></unset>
          </progress>
          <done>
            <eval token="has_results">if($job.resultCount$=0, null(), "true")</eval>
            <eval token="no_results">if($job.resultCount$&gt;0, null(), "true")</eval>
          </done>
          <query>index=_audit user=$user|s$</query>
          <earliest>-24h</earliest>
          <latest></latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>

Just to reiterate the general, simple solution to this issue in case the thread gets read again (it has already been posted in this thread): all you need to do is add an appendpipe clause to the end of the search, like this. Here "NOUSER" is assumed not to exist, so without the appendpipe the search would return no results found.

index=_audit user=NOUSER
| appendpipe
    [ | stats count
      | where count=0 ]

Wow, thanks for the rapid responses @SanjayReddy and @PickleRick. I didn't expect such a turnaround on my vent in this dead/old thread. I really do appreciate the constructive feedback, and I certainly do understand the justification for why stats/timechart functions as it does. It's just a shame that I've been trawling most of those other linked threads and many hours of Google searches for different suggested approaches, none of which oddly seem to fit the bill for what is actually a fairly small, simple query and expected result. It's one of those things where you just think, meh, this takes 30 seconds in ANSI SQL, NoSQL or any other RDBMS to produce the desired matrix/vector, but in Splunk I need my masters in SPL.

Anyway, thank you again, I wholeheartedly appreciate your positive and responsive attitudes given my pretty low-contribution post. I will check those threads you've provided which I haven't looked at before, and if all else fails, as you've suggested, I'll post afresh 🥰

Hi, just stumbled upon this one. You've probably already resolved the issue, but for anyone who might have a similar issue in the future: roles are not returned by the Auth0 SAML assertion by default, but you can implement a rule that fixes that. I created a guide a while back: https://isbyr.com/return-user-roles-in-auth0/ Hope that this helps.

You can do this at the end:

| eval title=coalesce(title, usedData)
| fields - usedData
| stats values(*) as * by title

Note that you seem to pull in a bunch of macros that do not contain any index searches.

I think that using a script should work. Just use sudo without a password and with the exact command if needed. Splunk has recognized this as a bug, but I haven't yet seen a Jira reference or an estimated fix version/time.

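If the script has to run a privileged command, the usual approach is a passwordless sudoers entry scoped to that exact command. A minimal sketch, where the splunk account name and the /usr/sbin/dmidecode command are both placeholders:

splunk ALL=(root) NOPASSWD: /usr/sbin/dmidecode

Restricting the rule to one absolute command path avoids giving the Splunk service account general sudo access.
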
I don't know what the GlobalMantics dataset is, but Splunk does not ship with preloaded data; you need to ingest the data yourself. So if you have that dataset, you can ingest it into any version of Splunk.

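For example, assuming the dataset is a local CSV file (the path, index and sourcetype below are placeholders), a one-off ingest can be done from the CLI:

$SPLUNK_HOME/bin/splunk add oneshot /tmp/globalmantics.csv -index main -sourcetype csv

The same thing can be done interactively through Settings > Add Data in Splunk Web.
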
Do you mean that in the second dashboard there are inputs which are not selected as you wanted? If so, it's probably because an input's token is set in the URL as form.xxx, so if your Services dropdown input uses the token ShortConfigRuleName, then you should pass form.ShortConfigRuleName=$row.ShortConfigRuleName$ in the URL.

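For example, a table drilldown in the first dashboard could link to the second one along these lines; second_dashboard is a placeholder name, and the |u filter URL-encodes the value:

<drilldown>
  <link target="_blank">/app/search/second_dashboard?form.ShortConfigRuleName=$row.ShortConfigRuleName|u$</link>
</drilldown>
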
Not sure why you are seeing it executing randomly - that does not seem right - can you produce a test case? However, I use this regularly for updating lookups - all you need to do is reset the tokens following the end of the search that writes the lookup. See the <done> section below. Unsetting the form.* tokens will remove the inputs from the display, and removing the non-form tokens will prevent the search from running until all 4 of the tokens are input.

Note that in the <search> below I did it slightly differently, using makeresults, so there is a _time field which is when the search runs. It will add the new data to the END of the lookup, so that may not be useful for you. Note that when using append=t, the _time field will not get added to the existing lookup if it does not already exist.

<form version="1.1" theme="light">
  <label>tmp3</label>
  <description>Replicate time picker issue</description>
  <fieldset submitButton="true" autoRun="false">
    <input type="text" token="usecasename" searchWhenChanged="false">
      <label>Enter UseCaseName Here</label>
    </input>
    <input type="text" token="error" searchWhenChanged="false">
      <label>Enter Error/Exception here</label>
    </input>
    <input type="text" token="impact" searchWhenChanged="false">
      <label>Enter Impact here</label>
    </input>
    <input type="text" token="reason" searchWhenChanged="false">
      <label>Enter Reason here</label>
    </input>
  </fieldset>
  <row depends="$hide$">
    <panel>
      <table>
        <title></title>
        <search>
          <query>
            | makeresults
            | eval useCaseName="$usecasename$", "Error/Exception in logs"="$error$", Impact="$impact$", Reason="$reason$"
            | outputlookup append=t lookup_exceptions_all_usecase1.csv
          </query>
          <earliest>-24h</earliest>
          <latest></latest>
          <sampleRatio>1</sampleRatio>
          <done>
            <unset token="usecasename"></unset>
            <unset token="error"></unset>
            <unset token="impact"></unset>
            <unset token="reason"></unset>
            <unset token="form.usecasename"></unset>
            <unset token="form.error"></unset>
            <unset token="form.impact"></unset>
            <unset token="form.reason"></unset>
          </done>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>

@Shashwat You can't have an 'empty' time picker, so as soon as you enter the dashboard, the time picker will fire a change event, so you will get is_time_picker=true and it will set 'All time' as the default.

So it works the first time round because when you select a table, it unsets is_time_picker.

You need to use an <eval> in the time picker's change element, so that it only sets your token when time.earliest is given a value - also remove all the unset tokens.

<form version="1.1" theme="light">
  <label>Time Picker Input</label>
  <description>Replicate time picker issue</description>
  <fieldset submitButton="false">
    <input type="dropdown" token="item" searchWhenChanged="true">
      <label>Select Item</label>
      <choice value="table1">TABLE-1</choice>
      <choice value="table2">TABLE-2</choice>
      <choice value="table3">TABLE-3</choice>
      <change>
        <condition value="table1">
          <set token="tab1">"Table1"</set>
          <unset token="tab2"></unset>
          <unset token="tab3"></unset>
        </condition>
        <condition value="table2">
          <set token="tab2">"Table2"</set>
          <unset token="tab1"></unset>
          <unset token="tab3"></unset>
        </condition>
        <condition value="table3">
          <set token="tab3">"Table3"</set>
          <unset token="tab1"></unset>
          <unset token="tab2"></unset>
        </condition>
        <condition>
          <unset token="tab1"></unset>
          <unset token="tab2"></unset>
          <unset token="tab3"></unset>
        </condition>
      </change>
    </input>
    <input type="time" token="time" searchWhenChanged="true">
      <label>Select Time</label>
      <change>
        <eval token="is_time_selected">if(isnull($time.earliest$), null(), "true")</eval>
      </change>
      <default>
        <earliest>0</earliest>
        <latest></latest>
      </default>
    </input>
  </fieldset>
  <row depends="$tab1$,$is_time_selected$">
    <panel>
      <table>
        <title>Table1</title>
        <search>
          <query>
            | makeresults
            | eval Table = "Table1"
            | eval e_time = "$time.earliest$", l_time = "$time.latest$"
            | table Table e_time l_time
          </query>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row depends="$tab2$,$is_time_selected$">
    <panel>
      <table>
        <title>Table2</title>
        <search>
          <query>
            | makeresults
            | eval Table = "Table2"
            | eval e_time = "$time.earliest$", l_time = "$time.latest$"
            | table Table e_time l_time
          </query>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row depends="$tab3$,$is_time_selected$">
    <panel>
      <table>
        <title>Table3</title>
        <search>
          <query>
            | makeresults
            | eval Table = "Table3"
            | eval e_time = "$time.earliest$", l_time = "$time.latest$"
            | table Table e_time l_time
          </query>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>

BTW, have you tried using link list inputs - they are quite handy for doing tab-based inputs like you appear to be doing. You can achieve some nice visuals using a simple bit of CSS for styling.

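For reference, a link input accepts the same <choice> and <change> elements as the dropdown above, so it can drive the same tab tokens; a minimal sketch (the CSS styling itself is a separate exercise):

<input type="link" token="item" searchWhenChanged="true">
  <label>Select Item</label>
  <choice value="table1">TABLE-1</choice>
  <choice value="table2">TABLE-2</choice>
  <choice value="table3">TABLE-3</choice>
  <!-- reuse the same <change> conditions as in the dropdown above -->
</input>
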
Adding to @SanjayReddy 's answer, I'll take the opportunity to explain why this actually makes sense.

Firstly, Splunk executes search commands in a pipeline. Each subsequent command knows only the results from the previous step. That's why you have to make sure you have all the data you need for further processing at each step; you can't reference any data you've already filtered out or otherwise "lost" along the way.

Secondly, if stats count were to return 0 when it got no events on input, that would have to be implemented as an explicit exception to the normal stats behaviour. Remember that there are many more aggregation functions than just count, and for at least some of them returning a value for zero input rows would make no sense. An average of 0 is definitely not the same as no result at all.

Thirdly, even count can be over some field. How is stats supposed to know what values should be expected in those fields?

So this behaviour, while maybe a bit inconvenient to handle (actually, it could be worth posting an idea for a generalized "default result" command if there isn't one yet; I haven't checked), is consistent with the overall stats mechanics.

What do you mean by "duplicate" in this context? Two different values for the same TASKID? That's expected.

OK, does your lookup definition list _raw as one of its input fields? If not, you must use the AS clause for the input field.

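For example (the lookup and field names here are hypothetical), if the lookup definition declares an input field called message rather than _raw:

| lookup my_lookup message AS _raw OUTPUT category

This matches the lookup's message column against the event's _raw field and returns the category column.
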
We are using a metrics index to store metric events. These metric events are linked to a different parent dataset through a unique ID dimension. This ID dimension can have tens of thousands of unique values, and the parent dataset primarily consists of string values.

Given the cardinality issues associated with metric indices (where it's best to avoid dimensions with a large range of unique values), what would be the best practice in this scenario?
https://docs.splunk.com/Documentation/Splunk/latest/Metrics/BestPractices#Cardinality_issues

Would it be a good idea to use a key-value store (kvstore) for the parent data and perform lookups from the metric data? How would this approach impact performance?

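A minimal sketch of the KV store approach, with hypothetical names throughout (a my_metrics index, an entity_id dimension, a cpu.utilization metric, and a KV-store-backed lookup definition parent_info that maps entity_id to the parent's string attributes): keep only the ID on the metrics side and enrich at search time.

| mstats avg(cpu.utilization) as avg_cpu WHERE index=my_metrics BY entity_id span=5m
| lookup parent_info entity_id OUTPUT parent_name owner region

The trade-off is that enrichment happens at search time, so a very large collection adds lookup cost to every search, but it keeps the high-cardinality string data out of the metrics index and lets the parent records be updated without rewriting metric data points.
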
Thanks @meetmshah for taking notice of this. Fortunately, I found the issue. I was doing everything correctly except for one thing: I was creating the inbound rule for TCP 8000 after the installation and after running Splunk. When I instead created the inbound TCP 8000 rule first and then installed and started Splunk, everything worked fine. Thanks again...