All Posts

Thanks for your help, I really appreciate it. Here's the output from the report job:

            OTHER    arc    dev   test    prod
2024-07-16  5.760  0.017  2.333  2.235  19.114
2024-07-17  5.999  0.018  2.595  2.260  18.355
2024-07-18  6.019  0.018  2.559  1.962  16.879
2024-07-19  5.650  0.018  2.177  1.566  14.573
2024-07-20  4.849  0.013  2.389  1.609  12.348
2024-07-21  4.619  0.013  2.190  1.618  12.296
2024-07-22  5.716  0.019  2.425  1.626  14.286

I was able to play around with what you sent, and this gives me rows with the size and the change in size, which is the data I want, but I can't seem to get it back to the table format I need. If I add "| stats values(*) as * by index" then I end up with a format that is multivalue, and I haven't been able to get that untangled either. I am OK at this stuff, but am definitely not a pro-level user.

| loadjob ""
| eval date=(strftime(_time,"%Y-%m-%d"))
| fields - _time
| transpose header_field=date
| rename column AS index
| sort index
| untable index date size
| sort 0 index date
| streamstats last(size) as previous window=1 global=f current=f by index
| eval relative_size = 100 * size / previous
| fields - previous
| appendpipe [| eval date=strftime(date, "%F")." change" | xyseries index date relative_size]
| appendpipe [| eval date=strftime(date, "%F") | xyseries index date size]
Just to clarify: every device on this network is being logged by Splunk, but these two firewalls are the only two that have this problem. Logs can be pulled from all the other devices normally, so I don't believe the time format is the issue.
All of this has been done through the GUI. On the search head I enabled clustering and added it as a search peer. Is that not the way to do it?
Does it work if you make a selection to trigger the change handler? If so, you could add a set of the token in the init block of the dashboard. This might not work depending on whether it is executed before or after the base searches. If it is executed before the base searches, you may have to do something a bit more complicated to ensure the searches execute in a controlled order.
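As a minimal sketch of that idea (the token name here is just an example, matching whatever your change handler sets), an init block at the top of the form could look like:

<form version="1.1">
  <!-- illustrative only: give the token a starting value when the dashboard loads -->
  <init>
    <set token="selected_shift">Day</set>
  </init>
  ...
</form>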
There probably aren't too many people here who know the M-21-31 requirements. However, there probably are a lot of people who could help you comply with those requirements if you tell us what they are.

FWIW, maxHotSpanSecs is a maximum value, not a fixed value. Hot buckets could roll to warm before that time span is reached. Also, having a single bucket that spans 6 months is not a good idea - it could grow too large. For best control of retention time, set the hot bucket time limit to 1 day (86400 seconds).

There is no mechanism for controlling how long a bucket stays warm. Warm buckets only roll to cold based on size or count.
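For illustration only, an indexes.conf sketch along those lines (the stanza name and retention value are placeholders, not a recommendation for your environment):

[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# roll hot buckets after at most one day of data
maxHotSpanSecs = 86400
# overall retention (180 days here) is still governed by frozenTimePeriodInSecs and size limits
frozenTimePeriodInSecs = 15552000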
Check out the REST API Modular Input app (https://splunkbase.splunk.com/app/1546). Or write your own script and make a scripted input out of it.
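If you go the scripted-input route, the inputs.conf side might look roughly like this (script path, interval, index and sourcetype are all made-up placeholders); the script itself just writes whatever it fetches from the endpoint to stdout, and Splunk indexes that output on each run:

# inputs.conf - illustrative sketch only
[script://$SPLUNK_HOME/etc/apps/my_app/bin/poll_metrics.py]
interval   = 3600
index      = my_metrics
sourcetype = my:endpoint:metrics
disabled   = 0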
<form version="1.1" theme="light">
  <label>Report</label>
  <search id="Night">
    <query>| inputlookup handover_timeline.csv | dedup Shift Date | search Shift="Night" | appendcols [| makeresults count=24 | streamstats count as Timeline | eval Timeline=if(Timeline&lt;10, "0".Timeline.":00", Timeline.":00") | table Timeline] | streamstats first(Date) as Date, first(Shift) as Shift | tail 6 | sort Timeline | append [| inputlookup handover_timeline.csv | dedup Shift Date | search Shift="Night" | appendcols [| makeresults count=24 | streamstats count as Timeline | eval Timeline=if(Timeline&lt;10, "0".Timeline.":00", Timeline.":00") | table Timeline] | streamstats first(Date) as Date, first(Shift) as Shift | head 6 ] | fields Date Shift Timeline "Hourly details of shift"</query>
    <done>
      <set token="SID1">$job.sid$</set>
    </done>
  </search>
  <search id="Day">
    <query>| inputlookup handover_timeline.csv | dedup Shift Date | search Shift=Day | appendcols [| makeresults count=24 | streamstats count as Timeline | eval Timeline=if(Timeline&lt;10, "0".Timeline.":00", Timeline.":00") | table Timeline] | streamstats first(Date) as Date, first(Shift) as Shift | streamstats count as row_number | eventstats max(row_number) as total_rows | where row_number &gt; 6 AND row_number &lt;= total_rows - 6 | fields - row_number, total_rows</query>
    <done>
      <set token="SID2">$job.sid$</set>
    </done>
  </search>
  <search>
    <query>| makeresults | eval token="$date_tok$" | eval earliest=if(token="today", relative_time(now(), "@d"), strptime(token, "%d/%m/%Y")) | eval latest=if(token="today", now(), earliest + 86400) | table earliest, latest</query>
    <finalized>
      <set token="earliest_tok">$result.earliest$</set>
      <set token="latest_tok">$result.latest$</set>
    </finalized>
    <earliest>-7d@d</earliest>
    <latest>now</latest>
    <refresh>300</refresh>
    <refreshType>delay</refreshType>
  </search>
  <fieldset submitButton="false">
    <input type="dropdown" token="date_tok" searchWhenChanged="true">
      <label>Date:</label>
      <fieldForLabel>Date</fieldForLabel>
      <fieldForValue>Date</fieldForValue>
      <search>
        <query>| makeresults | timechart span=1d count | sort - _time | eval Date=strftime(_time, "%d/%m/%Y"), earliest=relative_time(_time, "@d") | table Date, earliest | tail 7 | sort - earliest</query>
        <earliest>-7d@h</earliest>
        <latest>now</latest>
      </search>
      <choice value="today">Today</choice>
      <initialValue>today</initialValue>
      <default>today</default>
    </input>
    <input type="dropdown" token="shift_tok" searchWhenChanged="true">
      <label>Shift:</label>
      <choice value="Day">Day</choice>
      <choice value="Night">Night</choice>
      <default>Day</default>
      <initialValue>Day</initialValue>
      <change>
        <condition match="$value$ == 'Day'">
          <set token="selected_shift">$SID1$</set>
        </condition>
        <condition match="$value$ == 'Night'">
          <set token="selected_shift">$SID2$</set>
        </condition>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <html>
        NOTES: The data shown corresponds to the start of the shift, which is 6:45 AM for the Day shift and 6:45 PM for the Night shift.
      </html>
    </panel>
  </row>
  <row>
    <panel id="flf">
      <title>FLF</title>
      <single>
        <search>
          <query>| inputlookup daily_ticket_count.csv | eval today = strftime(now(), "%d/%m/%Y") | eval Date = if(Date == today, "today", Date) | search Shift="$shift_tok$" Date="$date_tok$" | where isnotnull(FLF_perc) | head 1 | fields FLF_perc</query>
          <earliest>$earliest_tok$</earliest>
          <latest>$latest_tok$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="height">75</option>
        <option name="numberPrecision">0.00</option>
        <option name="rangeColors">["0xd93f3c","0x65a637"]</option>
        <option name="rangeValues">[80]</option>
        <option name="refresh.display">none</option>
        <option name="unit">%</option>
        <option name="unitPosition">after</option>
        <option name="useColors">1</option>
      </single>
    </panel>
    <panel>
      <title>Ticket Count</title>
      <table>
        <search>
          <query>| inputlookup daily_ticket_count.csv | eval today = strftime(now(), "%d/%m/%Y") | eval Date = if(Date == today, "today", Date) | search Shift="$shift_tok$" Date="$date_tok$" type IN ("Request", "Incident") | fields - FLF_perc | head 2</query>
          <earliest>$earliest_tok$</earliest>
          <latest>$latest_tok$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <title>Timeline</title>
      <table>
        <title>$shift_tok$</title>
        <search>
          <query>| loadjob $selected_shift$ | table Date Shift Timeline "Hourly details of shift"</query>
        </search>
        <option name="count">13</option>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>

Now I'm getting this message for the Timeline panel: "Search is waiting for input." Full XML above, if someone can spot any errors.
Try like this

<input type="dropdown" token="shift_tok" searchWhenChanged="true">
  <label>Shift:</label>
  <choice value="Day">Day</choice>
  <choice value="Night">Night</choice>
  <default>Day</default>
  <initialValue>Day</initialValue>
  <change>
    <condition match="$value$ == 'Day'">
      <set token="selected_shift">$SID1$</set>
    </condition>
    <condition match="$value$ == 'Night'">
      <set token="selected_shift">$SID2$</set>
    </condition>
  </change>
</input>

<row>
  <panel>
    <title>Timeline</title>
    <table>
      <title>$shift_tok$</title>
      <search>
        <query>| loadjob $selected_shift$ | table Date Shift Timeline "Hourly details of shift"</query>
      </search>
      <option name="count">13</option>
      <option name="drilldown">none</option>
    </table>
  </panel>
</row>
Thanks @PickleRick for the detailed explanation! It's very helpful.
Hello, I'm struggling to make a base search that uses a data model with the tstats command. My objective is to build a dashboard where each panel chains off a tstats base search against the data model. This is my sample:

| tstats summariesonly=true values(Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.dest) as dest values(Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.hostname) as hostname values(Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.os_type) as os_type values(Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.exploit_title) as exploit_title values(Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.malware_title) as malware_title from datamodel=Vulnerabilities_Custom.Vulnerabilities_Non_Remediation where nodename IN ("Vulnerabilities_Custom.Vulnerabilities_Non_Remediation", "Vulnerabilities_Custom.High_Or_Critical_Vulnerabilities_Non_Remediation", "Vulnerabilities_Custom.Medium_Vulnerabilities_Non_Remediation", "Vulnerabilities_Custom.Low_Or_Informational_Vulnerabilities_Non_Remediation") by Vulnerabilities_Custom.Vulnerabilities_Non_Remediation._time, Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.dest
| table event_time dest hostname os_type exploit_title malware_title

Does anyone have any clues about this?
How can I regularly hit an HTTP endpoint on a remote server to collect useful metrics, then import them into Splunk (hourly, for example) and use them for visualisations?
Thanks for getting back to me. This is what I've done:

- base searches:

<search id="Night">
  <query>...</query>
  <done>
    <set token="SID1">$job.sid$</set>
  </done>
</search>
<search id="Day">
  <query>...</query>
  <done>
    <set token="SID2">$job.sid$</set>
  </done>
</search>

- dropdown input:

<input type="dropdown" token="shift_tok" searchWhenChanged="true">
  <label>Shift:</label>
  <choice value="Day">Day</choice>
  <choice value="Night">Night</choice>
  <default>Day</default>
  <initialValue>Day</initialValue>
  <change>
    <condition match="$value$ == 'Day'">
      <set token="selected_shift">Day</set>
    </condition>
    <condition match="$value$ == 'Night'">
      <set token="selected_shift">Night</set>
    </condition>
  </change>
</input>

- panel:

<row>
  <panel>
    <title>Timeline</title>
    <table>
      <title>$shift_tok$ $selected_shift$</title>
      <search base="$selected_shift$">
        <query>| table Date Shift Timeline "Hourly details of shift"</query>
      </search>
      <option name="count">13</option>
      <option name="drilldown">none</option>
    </table>
  </panel>
</row>

The $selected_shift$ token doesn't seem to be working properly - any ideas? Thanks.
Is this on a per-profile basis? A per-cluster basis? How does this restart back?
Assuming it is not simply a typo and case does matter (Shift_tok is not the same as shift_tok), you could try setting a different token in the done handler of each of your two base searches with the job.sid, then use the change handler of the dropdown to copy the relevant sid token value into a token which you use in your search with the loadjob command, as in the sketch below.
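A rough sketch of that pattern (the token names are only examples, and the queries are elided):

<search id="Night">
  <query>...</query>
  <done>
    <set token="night_sid">$job.sid$</set>
  </done>
</search>

<input type="dropdown" token="shift_tok" searchWhenChanged="true">
  <change>
    <condition match="$value$ == 'Night'">
      <set token="selected_sid">$night_sid$</set>
    </condition>
  </change>
</input>

<!-- in the panel -->
<query>| loadjob $selected_sid$ | table Date Shift Timeline "Hourly details of shift"</query>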
I've got 2 base searches:

<search id="Night">

and

<search id="Day">

And a dropdown input:

<input type="dropdown" token="shift_tok" searchWhenChanged="true">
  <label>Shift:</label>
  <choice value="Day">Day</choice>
  <choice value="Night">Night</choice>
  <default>Day</default>
  <initialValue>Day</initialValue>
</input>

I need to find a way to reference the base searches, depending on the input provided by the user. I was hoping to use a token to reference the base searches, but it doesn't seem to be working:

<row>
  <panel>
    <title>Timeline</title>
    <table>
      <title>$shift_tok$</title>
      <search base="$Shift_tok$">
        <query>| table Date Shift Timeline "Hourly details of shift"</query>
      </search>
      <option name="count">13</option>
      <option name="drilldown">none</option>
    </table>
  </panel>
</row>
</form>
Theoretically, you could set up forwarding logs to a HEC endpoint by defining the proper destination and message template. But. As Palo Alto themselves write on their docs page - "Log forwarding to an HTTP server is designed for log forwarding at low frequencies and is not recommended for deployments with a high volume of log forwarding. You may experience log loss when forwarding to an HTTP server if your deployment generates a high volume of logs that need to be forwarded."

Which actually makes sense, since it seems Palo Alto will emit a separate HTTP request for each event, which might flood your receiver in the case of - for example - traffic logs (and I'm not sure how well it does with keepalives and reusing connections). It doesn't seem to be able to aggregate multiple events into a single request.

So it is indeed a typical approach to send events via syslog to any reasonably modern syslog daemon (either rsyslog or syslog-ng) and handle them there. It can either write them to a file which will be picked up by a UF, or aggregate them (at least rsyslog can; I'm not that good with syslog-ng but I suppose it can as well) into bigger batches and send a much lower number of requests to the destination HEC endpoint (like a single HTTP request for every 100 events). Of course, you have much more flexibility in processing data in transit if you use an intermediate syslog daemon.
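For the write-to-file variant, a rough sketch of what that could look like (the rsyslog filter, file paths, index and sourcetype are all illustrative, not taken from any Palo Alto or Splunk documentation):

# rsyslog: write events from the firewall to a local file
template(name="paRaw" type="string" string="%rawmsg%\n")
if ($fromhost-ip == '192.0.2.1') then {
    action(type="omfile" file="/var/log/paloalto/pa.log" template="paRaw")
    stop
}

# Universal Forwarder inputs.conf monitoring that file
[monitor:///var/log/paloalto/pa.log]
index      = pan_logs
sourcetype = pan:log
disabled   = 0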
Thanks!! Yes, something like x axis = time and y axis = the count column as a bar chart, plus a line chart of processing_time, so that looks correct.

However, I think the processing_time calculation is a bit tricky. How can I calculate the processing_time for the log below?

The "Done Bulk saving messages" at 21:13:12,825 should get its processing_time as follows: look at the 2nd previous "All Read threads finished flush..." which is 21:13:12,528 and take the time difference, so it is 297 ms. But I can't use transaction, since there are also many other "All Read threads finished flush" events in between 2 "Done Bulk saving" messages.

2024-08-07 21:13:12,007 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Done Bulk saving messages, Count=1, used 113 ms
2024-08-07 21:13:12,007 [15] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=8), retry in 10 ms.
2024-08-07 21:13:12,054 [39] INFO DistributorCommon.WMQClient [(null)] - Saved messages to DB, Q Manager to Commit (Remove messages from Queue)
2024-08-07 21:13:12,132 [15] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=9), retry in 10 ms.
2024-08-07 21:13:12,179 [39] INFO DistributorCommon.WMQClient [(null)] - Clear Write Buffer
2024-08-07 21:13:12,257 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0
2024-08-07 21:13:12,398 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0
2024-08-07 21:13:12,528 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0
2024-08-07 21:13:12,778 [33] INFO DistributorCommon.WMQClient [(null)] - Message Read from Queue, Message Length:4668
2024-08-07 21:13:12,809 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 1
2024-08-07 21:13:12,809 [39] INFO DistributorCommon.WMQClient [(null)] - Processing messages, Count=1
2024-08-07 21:13:12,809 [39] INFO DistributorCommon.WMQClient [(null)] - Done Processing messages, Count=1, IsBufferedEvent=True
2024-08-07 21:13:12,809 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Bulk saving messages, Count=1
2024-08-07 21:13:12,825 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Done Bulk saving messages, Count=1, used 24 ms
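If it helps to see the idea in SPL, here is a rough, untested sketch of that "2nd previous flush" logic (the index, sourcetype and match strings are placeholders, and it assumes _time retains the millisecond precision of the log timestamps):

index=my_index sourcetype=my:distributor ("Done Bulk saving messages" OR "All Read threads finished flush")
| sort 0 _time
| eval flush_time=if(searchmatch("All Read threads finished flush"), _time, null())
| streamstats current=f last(flush_time) as prev_flush
| streamstats current=f last(prev_flush) as second_prev_flush
| where searchmatch("Done Bulk saving messages")
| eval processing_time_ms=round((_time - second_prev_flush) * 1000, 0)
| table _time processing_time_ms

For the 21:13:12,825 "Done Bulk saving" event above this would pick up the 21:13:12,528 flush and give roughly 297 ms, but edge cases (for example two "Done Bulk saving" events with no flush in between) would need extra handling.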
Hi, I'm unable to launch the Splunk Add-on for AWS page on the admin console; the page shows as Loading but there is no output at all. No abnormalities are seen in splunkd.log, only some checksum mismatch errors. My Splunk was recently upgraded to 9.2.2; the last time I tried on the earlier version it was working. The Splunk Add-on for AWS version is 5.1.0. Can I check if anyone came across the same issue and managed to resolve it?
I'm not sure you understand the macros correctly. If you define a macro with two parameters, paramA and paramB, it will get substituted in your search with whatever values you specify for them. These are separate layers.
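As a quick illustration of the substitution (the macro name and fields are made up for the example):

# macros.conf
[host_app_filter(2)]
args = paramA, paramB
definition = host="$paramA$" app="$paramB$"

# in a search, `host_app_filter(web01, nginx)` expands to: host="web01" app="nginx"
index=main `host_app_filter(web01, nginx)`
| stats count by status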
OK. That actually makes sense. I'm no AD expert, but as far as I remember you cannot use local accounts on domain controllers - all "local" accounts are indeed domain accounts. If this is not described in the forwarder installation manual, it could be worth posting feedback (there is a feedback form at the bottom of every doc page).