
All Posts

I have a dropdown where I select the event name, and that event name value is passed as a token to the variable search. This variable search feeds a multiselect. One issue that I've noticed is that the multiselect values stay populated when a different event is selected. The search for the variable input will update the dropdown options, though. Is there a way to reset the selected variables when a different event is selected? I have seen the Simple XML versions of this but haven't seen any information on how to do this in Dashboard Studio. Any help is greatly appreciated.

{
  "visualizations": {
    "viz_Visualization": {
      "type": "splunk.line",
      "dataSources": { "primary": "ds_mainSearch" },
      "options": {
        "overlayFields": [],
        "y": "> primary | frameBySeriesNames($dd2|s$)",
        "y2": "> primary | frameBySeriesNames('')",
        "lineWidth": 3,
        "showLineSmoothing": true,
        "xAxisMaxLabelParts": 2,
        "showRoundedY2AxisLabels": false,
        "x": "> primary | seriesByName('_time')"
      },
      "title": "Visualization",
      "containerOptions": { "visibility": {} },
      "eventHandlers": [
        {
          "type": "drilldown.linkToSearch",
          "options": { "type": "auto", "newTab": false }
        }
      ]
    }
  },
  "dataSources": {
    "ds_dd1": {
      "type": "ds.search",
      "options": {
        "query": "index=index source=source sourcetype=sourcetype |dedup EventName \n| sort str(EventName)"
      },
      "name": "dd1Search"
    },
    "ds_mainSearch": {
      "type": "ds.search",
      "options": {
        "query": "index=index source=source sourcetype=sourcetype EventName IN (\"$dd1$\") VariableName IN ($dd2|s$) \n| timechart span=5m max(Value) by VariableName",
        "enableSmartSources": true
      },
      "name": "mainSearch"
    },
    "ds_dd2": {
      "type": "ds.search",
      "options": {
        "enableSmartSources": true,
        "query": "index=index source=source sourcetype=sourcetype EventName = \"$dd1$\" |dedup VariableName \n| sort str(VariableName)"
      },
      "name": "dd2Search"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    }
  },
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "options": { "token": "global_time", "defaultValue": "-24h@h,now" },
      "title": "Global Time Range"
    },
    "input_dd1": {
      "options": {
        "items": ">frame(label, value) | prepend(formattedStatics) | objects()",
        "token": "dd1"
      },
      "encoding": { "label": "primary[0]", "value": "primary[0]" },
      "dataSources": { "primary": "ds_dd1" },
      "title": "Event Name",
      "type": "input.dropdown",
      "context": {
        "formattedConfig": { "number": { "prefix": "" } },
        "formattedStatics": ">statics | formatByType(formattedConfig)",
        "statics": [],
        "label": ">primary | seriesByName(\"EventName\") | renameSeries(\"label\") | formatByType(formattedConfig)",
        "value": ">primary | seriesByName(\"EventName\") | renameSeries(\"value\") | formatByType(formattedConfig)"
      }
    },
    "input_dd2": {
      "options": {
        "items": ">frame(label, value) | prepend(formattedStatics) | objects()",
        "token": "dd2"
      },
      "encoding": { "label": "primary[0]", "value": "primary[0]" },
      "dataSources": { "primary": "ds_dd2" },
      "title": "Variable(s)",
      "type": "input.multiselect",
      "context": {
        "formattedConfig": { "number": { "prefix": "" } },
        "formattedStatics": ">statics | formatByType(formattedConfig)",
        "statics": [],
        "label": ">primary | seriesByName(\"VariableName\") | renameSeries(\"label\") | formatByType(formattedConfig)",
        "value": ">primary | seriesByName(\"VariableName\") | renameSeries(\"value\") | formatByType(formattedConfig)"
      }
    }
  },
  "layout": {
    "type": "grid",
    "options": { "width": 1440, "height": 960 },
    "structure": [
      {
        "item": "viz_Visualization",
        "type": "block",
        "position": { "x": 0, "y": 0, "w": 1440, "h": 653 }
      }
    ],
    "globalInputs": [ "input_global_trp", "input_dd1", "input_dd2" ]
  },
  "description": "",
  "title": "Test"
}
| makeresults format=csv data="date,OTHER,arc,dev,test,prod
7/16/2024,5.76,0.017,2.333,2.235,19.114
7/17/2024,5.999,0.018,2.595,2.26,18.355
7/18/2024,6.019,0.018,2.559,1.962,16.879
7/19/2024,5.650,0.018,2.177,1.566,14.573
7/20/2024,4.849,0.013,2.389,1.609,12.348
7/21/2024,4.619,0.013,2.19,1.618,12.296
7/22/2024,5.716,0.019,2.425,1.626,14.286
7/23/2024,5.716,0.019,2.425,1.626,14.286"
| eval _time=strptime(date,"%m/%d/%Y")
| fields - date
``` the lines above simulate the data from your loadjob (with the 22nd duplicated to the 23rd to give two Tuesdays) ```
``` there is no need for the transpose as the untable will work with the _time and index fields in the other order ```
| untable _time index size
| eval date=strftime(_time,"%F")
| eval day=strftime(_time, "%a")
| where day="Tue"
| fields - day _time
| sort 0 index date
| streamstats last(size) as previous window=1 global=f current=f by index
| eval relative_size = 100 * size / previous
| fields - previous
| appendpipe
    [| eval date=date." change"
    | xyseries index date relative_size]
| appendpipe
    [| xyseries index date size]
| fields - date size relative_size
| stats values(*) as * by index
Mr. Galloway, thank you for your reply and input. The retention requirements are:

Data / Log Retention:
- Hot / Warm for 30 months, which I broke out as Hot: 6 months, Warm: 24 months. But I understand that 6 months is too long per your reply, and I will adjust to 1 or 2 days.
- Cold: 18 months
- Archive or Frozen: 18 months, with data ceiling and data deletion

Revised indexes.conf:
maxHotSpanSecs = 86400 or 172800 (1 or 2 days of hot bucket data)
maxHotIdleSecs = 86400 or 172800
maxWarmDBCount = the maximum number of warm buckets. How does one control warm buckets by SIZE? I would prefer to use size and NOT the number of buckets.
coldPath.maxDataSizeMB = 47335428
frozenTimePeriodInSecs = 47335428
coldToFrozenDir = "$SPLUNK_HOME/myfrozenarchive"
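For the size question, would something along these lines be the right approach? My understanding (please correct me if I'm wrong) is that homePath.maxDataSizeMB caps the combined hot + warm storage, so the oldest warm buckets roll to cold on size rather than on maxWarmDBCount. The stanza name and the 500000 figure are placeholders:

[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb

# roll hot buckets daily so retention boundaries stay clean
maxHotSpanSecs = 86400
maxHotIdleSecs = 86400

# size-based warm control: when hot+warm storage exceeds this, the
# oldest warm buckets roll to cold (instead of relying on maxWarmDBCount)
homePath.maxDataSizeMB = 500000

# cold ceiling and ~18-month frozen boundary (47,335,428 seconds)
coldPath.maxDataSizeMB = 47335428
frozenTimePeriodInSecs = 47335428
coldToFrozenDir = $SPLUNK_HOME/myfrozenarchive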
Amazing, worked exactly as you explained it would.
Looks like init blocks happen too early, so try adding a hidden row/panel like this

<row depends="$alwaysHidden$">
  <panel>
    <table>
      <search>
        <query>| makeresults | eval sid1="$SID1$", sid2="$SID2$"</query>
        <done>
          <set token="selected_shift">$result.sid1$</set>
        </done>
      </search>
    </table>
  </panel>
</row>

You will notice that you still get "waiting for input" initially, but after a short time the panel will display with the initial search results. If you want to get really fancy, you could change the search in the hidden panel to take the current time into account and set the default / initial dataset accordingly.
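A sketch of that fancier variant, in case it helps - this assumes (per the base searches elsewhere in the thread) that $SID2$ holds the Day SID and $SID1$ the Night SID, and that Day covers roughly 07:00-19:00 given the 6:45 shift starts; adjust the boundaries to suit:

<row depends="$alwaysHidden$">
  <panel>
    <table>
      <search>
        <query>| makeresults
| eval hour=tonumber(strftime(now(), "%H"))
``` pick the Day SID during day hours, the Night SID otherwise ```
| eval sid=if(hour&gt;=7 AND hour&lt;19, "$SID2$", "$SID1$")</query>
        <done>
          <set token="selected_shift">$result.sid$</set>
        </done>
      </search>
    </table>
  </panel>
</row>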
            OTHER   arc     dev     test    prod
7/16/2024   5.76    0.017   2.333   2.235   19.114
7/17/2024   5.999   0.018   2.595   2.26    18.355
7/18/2024   6.019   0.018   2.559   1.962   16.879
7/19/2024   5.65    0.018   2.177   1.566   14.573
7/20/2024   4.849   0.013   2.389   1.609   12.348
7/21/2024   4.619   0.013   2.19    1.618   12.296
7/22/2024   5.716   0.019   2.425   1.626   14.286
That didn't format well. I'll post the data in a separate reply from my comments.

I was able to play around with what you sent, and this gives me rows with the size and the change in size, which is the data I want, but I can't seem to get it back to the table format I need. If I add "| stats values(*) as * by index" then I end up with a format that is multivalue, and I haven't been able to get that untangled either. I am OK at this stuff, but am definitely not a pro-level user.

| loadjob ""
| eval date=(strftime(_time,"%Y-%m-%d"))
| fields - _time
| transpose header_field=date
| rename column AS index
| sort index
| untable index date size
| sort 0 index date
| streamstats last(size) as previous window=1 global=f current=f by index
| eval relative_size = 100 * size / previous
| fields - previous
| appendpipe
    [| eval date=strftime(date, "%F")." change"
    | xyseries index date relative_size]
| appendpipe
    [| eval date=strftime(date, "%F")
    | xyseries index date size]
Thanks for your help, I really appreciate it. Here's the output from the report job:

            OTHER   arc     dev     test    prod
2024-07-16  5.760   0.017   2.333   2.235   19.114
2024-07-17  5.999   0.018   2.595   2.260   18.355
2024-07-18  6.019   0.018   2.559   1.962   16.879
2024-07-19  5.650   0.018   2.177   1.566   14.573
2024-07-20  4.849   0.013   2.389   1.609   12.348
2024-07-21  4.619   0.013   2.190   1.618   12.296
2024-07-22  5.716   0.019   2.425   1.626   14.286

I was able to play around with what you sent, and this gives me rows with the size and the change in size, which is the data I want, but I can't seem to get it back to the table format I need. If I add "| stats values(*) as * by index" then I end up with a format that is multivalue, and I haven't been able to get that untangled either. I am OK at this stuff, but am definitely not a pro-level user.

| loadjob ""
| eval date=(strftime(_time,"%Y-%m-%d"))
| fields - _time
| transpose header_field=date
| rename column AS index
| sort index
| untable index date size
| sort 0 index date
| streamstats last(size) as previous window=1 global=f current=f by index
| eval relative_size = 100 * size / previous
| fields - previous
| appendpipe
    [| eval date=strftime(date, "%F")." change"
    | xyseries index date relative_size]
| appendpipe
    [| eval date=strftime(date, "%F")
    | xyseries index date size]
Just to clarify: every device on this network is being logged by Splunk, but these two firewalls are the only ones that have this problem. Logs from all the other devices come in normally, so I don't believe the time format is the issue.
All of this has been done through the GUI. On the search head I enabled clustering and added it as a search peer. Is that not the way to do it?
Does it work if you make a selection to trigger the change handler? If so, you could add a set of the token in the init block of the dashboard. This might not work depending on whether it is executed before or after the base searches. If it is executed before the base searches, you may have to do something a bit more complicated to ensure the searches are executed in a controlled order.
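For example, an init block like this is what I had in mind - a sketch only, assuming Day is the default shift and that the Day base search sets $SID2$ in its <done> handler (and, per the timing caveat above, $SID2$ may simply not be populated yet when init runs):

<init>
  <!-- seed the panel token at load time with the Day search's SID -->
  <set token="selected_shift">$SID2$</set>
</init>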
There probably aren't too many people here who know the M-21-31 requirements. However, there probably are a lot of people who could help you comply with those requirements if you tell us what they are.

FWIW, maxHotSpanSecs is a maximum value, not a fixed value. Hot buckets could roll to warm before that time span is reached. Also, having a single bucket that spans 6 months is not a good idea - it could get to be too large. For best control of retention time, set the hot bucket time limit to 1 day (86400).

There is no mechanism for controlling how long a bucket is warm. Those only roll to cold based on size or count.
Check out the REST API Modular Input app (https://splunkbase.splunk.com/app/1546). Or write your own script and make a scripted input out of it.
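If you go the DIY route, here's a minimal sketch of what a scripted input could look like. Everything here is a placeholder - the app name, script name, endpoint URL, index, and sourcetype - and it assumes a simple unauthenticated JSON endpoint:

#!/usr/bin/env python3
# $SPLUNK_HOME/etc/apps/my_metrics_app/bin/poll_metrics.py (hypothetical path)
# Splunk runs this on the configured interval and indexes whatever is
# written to stdout; errors printed to stderr end up in splunkd.log.
import json
import sys
import time
from urllib.request import urlopen

ENDPOINT = "http://remote-host:8080/metrics"  # placeholder URL

def main():
    try:
        with urlopen(ENDPOINT, timeout=30) as resp:
            payload = json.load(resp)
    except Exception as e:
        print(f"poll_metrics failed: {e}", file=sys.stderr)
        return 1
    # emit one timestamped JSON event per run
    print(json.dumps({"time": int(time.time()), "metrics": payload}))
    return 0

if __name__ == "__main__":
    sys.exit(main())

Paired with an inputs.conf stanza in the same app (interval is in seconds, so 3600 = hourly):

[script://./bin/poll_metrics.py]
interval = 3600
index = my_metrics
sourcetype = remote:metrics
disabled = 0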
<form version="1.1" theme="light">
  <label> Report </label>
  <search id="Night">
    <query>| inputlookup handover_timeline.csv | dedup Shift Date | search Shift="Night" | appendcols [| makeresults count=24 | streamstats count as Timeline | eval Timeline=if(Timeline&lt;10, "0".Timeline.":00", Timeline.":00") | table Timeline] | streamstats first(Date) as Date, first(Shift) as Shift | tail 6 | sort Timeline | append [| inputlookup handover_timeline.csv | dedup Shift Date | search Shift="Night" | appendcols [| makeresults count=24 | streamstats count as Timeline | eval Timeline=if(Timeline&lt;10, "0".Timeline.":00", Timeline.":00") | table Timeline] | streamstats first(Date) as Date, first(Shift) as Shift | head 6 ] | fields Date Shift Timeline "Hourly details of shift"</query>
    <done>
      <set token="SID1">$job.sid$</set>
    </done>
  </search>
  <search id="Day">
    <query>| inputlookup handover_timeline.csv | dedup Shift Date | search Shift=Day | appendcols [| makeresults count=24 | streamstats count as Timeline | eval Timeline=if(Timeline&lt;10, "0".Timeline.":00", Timeline.":00") | table Timeline] | streamstats first(Date) as Date, first(Shift) as Shift | streamstats count as row_number | eventstats max(row_number) as total_rows | where row_number &gt; 6 AND row_number &lt;= total_rows - 6 | fields - row_number, total_rows</query>
    <done>
      <set token="SID2">$job.sid$</set>
    </done>
  </search>
  <search>
    <query>| makeresults | eval token="$date_tok$" | eval earliest=if(token="today", relative_time(now(), "@d"), strptime(token, "%d/%m/%Y")) | eval latest=if(token="today", now(), earliest + 86400) | table earliest, latest</query>
    <finalized>
      <set token="earliest_tok">$result.earliest$</set>
      <set token="latest_tok">$result.latest$</set>
    </finalized>
    <earliest>-7d@d</earliest>
    <latest>now</latest>
    <refresh>300</refresh>
    <refreshType>delay</refreshType>
  </search>
  <fieldset submitButton="false">
    <input type="dropdown" token="date_tok" searchWhenChanged="true">
      <label>Date:</label>
      <fieldForLabel>Date</fieldForLabel>
      <fieldForValue>Date</fieldForValue>
      <search>
        <query>| makeresults | timechart span=1d count | sort - _time | eval Date=strftime(_time, "%d/%m/%Y"), earliest=relative_time(_time, "@d") | table Date, earliest | tail 7 | sort - earliest</query>
        <earliest>-7d@h</earliest>
        <latest>now</latest>
      </search>
      <choice value="today">Today</choice>
      <initialValue>today</initialValue>
      <default>today</default>
    </input>
    <input type="dropdown" token="shift_tok" searchWhenChanged="true">
      <label>Shift:</label>
      <choice value="Day">Day</choice>
      <choice value="Night">Night</choice>
      <default>Day</default>
      <initialValue>Day</initialValue>
      <change>
        <condition match="$value$ == 'Day'">
          <set token="selected_shift">$SID1$</set>
        </condition>
        <condition match="$value$ == 'Night'">
          <set token="selected_shift">$SID2$</set>
        </condition>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <html>
        NOTES: The data shown corresponds to the start of the shift, which is 6:45 AM for the Day shift and 6:45 PM for the Night shift.
      </html>
    </panel>
  </row>
  <row>
    <panel id="flf">
      <title>FLF</title>
      <single>
        <search>
          <query>| inputlookup daily_ticket_count.csv | eval today = strftime(now(), "%d/%m/%Y") | eval Date = if(Date == today, "today", Date) | search Shift="$shift_tok$" Date="$date_tok$" | where isnotnull(FLF_perc) | head 1 | fields FLF_perc</query>
          <earliest>$earliest_tok$</earliest>
          <latest>$latest_tok$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="height">75</option>
        <option name="numberPrecision">0.00</option>
        <option name="rangeColors">["0xd93f3c","0x65a637"]</option>
        <option name="rangeValues">[80]</option>
        <option name="refresh.display">none</option>
        <option name="unit">%</option>
        <option name="unitPosition">after</option>
        <option name="useColors">1</option>
      </single>
    </panel>
    <panel>
      <title>Ticket Count</title>
      <table>
        <search>
          <query>| inputlookup daily_ticket_count.csv | eval today = strftime(now(), "%d/%m/%Y") | eval Date = if(Date == today, "today", Date) | search Shift="$shift_tok$" Date="$date_tok$" type IN ("Request", "Incident") | fields - FLF_perc | head 2</query>
          <earliest>$earliest_tok$</earliest>
          <latest>$latest_tok$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <title>Timeline</title>
      <table>
        <title>$shift_tok$</title>
        <search>
          <query>| loadjob $selected_shift$ | table Date Shift Timeline "Hourly details of shift"</query>
        </search>
        <option name="count">13</option>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>

Now getting this message for the Timeline panel: "Search is waiting for input."

Full XML above, if someone can spot any errors.
Try like this

<input type="dropdown" token="shift_tok" searchWhenChanged="true">
  <label>Shift:</label>
  <choice value="Day">Day</choice>
  <choice value="Night">Night</choice>
  <default>Day</default>
  <initialValue>Day</initialValue>
  <change>
    <condition match="$value$ == 'Day'">
      <set token="selected_shift">$SID1$</set>
    </condition>
    <condition match="$value$ == 'Night'">
      <set token="selected_shift">$SID2$</set>
    </condition>
  </change>
</input>
<row>
  <panel>
    <title>Timeline</title>
    <table>
      <title>$shift_tok$</title>
      <search>
        <query>| loadjob $selected_shift$ | table Date Shift Timeline "Hourly details of shift"</query>
      </search>
      <option name="count">13</option>
      <option name="drilldown">none</option>
    </table>
  </panel>
</row>
Thanks @PickleRick for the detailed explanation! It's very helpful.
Hello, I'm struggling to make a base search using a datamodel with the tstats command. My objective is to build a dashboard that loads efficiently, with tstats datamodel base searches and a chained search for each panel. This is my sample:

| tstats summariesonly=true
    values(Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.dest) as dest
    values(Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.hostname) as hostname
    values(Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.os_type) as os_type
    values(Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.exploit_title) as exploit_title
    values(Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.malware_title) as malware_title
    from datamodel=Vulnerabilities_Custom.Vulnerabilities_Non_Remediation
    where nodename IN ("Vulnerabilities_Custom.Vulnerabilities_Non_Remediation", "Vulnerabilities_Custom.High_Or_Critical_Vulnerabilities_Non_Remediation", "Vulnerabilities_Custom.Medium_Vulnerabilities_Non_Remediation", "Vulnerabilities_Custom.Low_Or_Informational_Vulnerabilities_Non_Remediation")
    by Vulnerabilities_Custom.Vulnerabilities_Non_Remediation._time, Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.dest
| table event_time dest hostname os_type exploit_title malware_title

Does anyone have any clues about this?
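I suspect the problem is that the by-fields keep their full datamodel prefix, so my final table finds nothing. Would a rename like this be the right way to handle it in the base search? This is just a guess on my part, trimmed to a count and two fields for brevity, and I'm not sure how the _time by-field comes out, so that column may need adjusting:

| tstats summariesonly=true count
    from datamodel=Vulnerabilities_Custom.Vulnerabilities_Non_Remediation
    by Vulnerabilities_Custom.Vulnerabilities_Non_Remediation._time,
       Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.dest
``` strip the datamodel prefix so chained panel searches see plain names ```
| rename "Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.*" as *
| table _time dest count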
How can I constantly hit an HTTP endpoint on a remote server to collect useful metrics, then import them into Splunk (hourly, for example) and use them for visualisations?
Thanks for getting back to me. This is what I've done:

- base searches:

<search id="Night">
  <query>...</query>
  <done>
    <set token="SID1">$job.sid$</set>
  </done>
</search>
<search id="Day">
  <query>...</query>
  <done>
    <set token="SID2">$job.sid$</set>
  </done>
</search>

- dropdown input:

<input type="dropdown" token="shift_tok" searchWhenChanged="true">
  <label>Shift:</label>
  <choice value="Day">Day</choice>
  <choice value="Night">Night</choice>
  <default>Day</default>
  <initialValue>Day</initialValue>
  <change>
    <condition match="$value$ == 'Day'">
      <set token="selected_shift">Day</set>
    </condition>
    <condition match="$value$ == 'Night'">
      <set token="selected_shift">Night</set>
    </condition>
  </change>
</input>

- panel:

<row>
  <panel>
    <title>Timeline</title>
    <table>
      <title>$shift_tok$ $selected_shift$</title>
      <search base="$selected_shift$">
        <query>| table Date Shift Timeline "Hourly details of shift"</query>
      </search>
      <option name="count">13</option>
      <option name="drilldown">none</option>
    </table>
  </panel>
</row>

The $selected_shift$ token doesn't seem to be working properly - any idea? Thanks.
Is this on a per-profile basis? A per-cluster basis? And how does it get restarted?