Activity Feed
- Got Karma for Re: How do I add yesterday's date to an emailed report subject?. 2 weeks ago
- Got Karma for Re: Token substitues value with double quotes | unable to use panel token values in DB connect query to compare string values as they need single quotes. 4 weeks ago
- Got Karma for Re: What are the differences between append, appendcols, and join search commands?. 11-06-2024 02:19 AM
- Got Karma for Re: How to replace specific field value?. 06-20-2024 06:36 AM
- Got Karma for Re: "Other" in timechart. 01-11-2024 01:30 AM
- Got Karma for Re: Token substitues value with double quotes | unable to use panel token values in DB connect query to compare string values as they need single quotes. 12-14-2023 07:48 AM
- Got Karma for Re: If there are no results found, how do I get my search to return a field that has the value of zero?. 11-17-2023 07:09 AM
- Got Karma for Re: Search Query Help: Number of Events per Event Code and Total size of those events. 08-07-2023 06:54 AM
- Got Karma for Re: Stats (count(x) as countX, count(y) as countY) BY FIELD X. 03-16-2023 06:36 AM
- Got Karma for Re: How to reference a dashboard token in an HTML panel?. 01-30-2023 12:00 AM
- Got Karma for Re: "Other" in timechart. 01-13-2023 05:20 AM
- Got Karma for Re: How do I use the reverse command to change the order of my table?. 01-10-2023 07:29 AM
- Got Karma for Re: Get single value panel to display a "date". 12-27-2022 07:12 AM
- Got Karma for Re: How can I create a query to find dashboard usage and top used dashboards of all the dashboards in my environment?. 09-23-2022 10:13 AM
- Got Karma for Re: How can I create a query to find dashboard usage and top used dashboards of all the dashboards in my environment?. 09-15-2022 05:06 AM
- Got Karma for Re: In a dashboard, how can I remove the panels that say "No results found." with code or something equivalent?. 09-12-2022 01:38 AM
- Got Karma for Re: "Other" in timechart. 08-11-2022 05:22 AM
- Got Karma for Re: "Other" in timechart. 08-07-2022 10:23 PM
- Got Karma for Re: "Other" in timechart. 08-07-2022 10:23 PM
- Got Karma for Re: Why are the two base searches throw warnings in a dashboard?. 08-01-2022 06:04 AM
Topics I've Started
| Subject | Karma | Author | Latest Post |
|---|---|---|---|
| | 0 | | |
| | 1 | | |
| | 0 | | |
| | 1 | | |
10-09-2019
04:37 PM
Never mind my comment - I'm pretty sure I know what's wrong.
You're using "threat:cve" in double quotes, which makes it a string literal, but you want to compare two fields.
Try single quotes instead so that Splunk grabs the value of that field.
| eval match=case('threat:cve' = cs23,"Yes",'threat:cve' != cs23,"No")
10-09-2019
04:33 PM
What is the error you are getting?
10-07-2019
09:27 PM
Make the subsearch that feeds gentimes into a post-process/base search and pass its result in as a token. I believe the reason it doesn't work is the way the argument gets passed to gentimes.
<dashboard>
......
  <!-- standalone search that sets the token once it finishes -->
  <search>
    <query>"base search"
| tail 1
| convert timeformat="%m/%d/%Y:%H:%M:%S" ctime(_time) as dt
| table dt</query>
    <done>
      <!-- the search leaves a single row with a dt field; $result.dt$ picks it up here -->
      <set token="token">$result.dt$</set>
    </done>
    <earliest>$earliest$</earliest>
    <latest>$latest$</latest>
  </search>
  <row>
    <panel>
      <table>
        <search>
          <query>base search
| append
    [| gentimes start=$token$ increment=3h
     | convert timeformat="%Y/%m/%d - %H" ctime(starttime)
     | rename starttime as defaultDate
     | table defaultDate]</query>
.........
</dashboard>
10-07-2019
09:03 PM
So there seems to be an issue with the way trellis interacts with sorting.
My best solution for this is to prepend numbers to the field values before charting them. Something like | eval field=case(field="first_viewed_field","1_first_viewed_field", field="second_viewed_field","2_second_viewed_field", ...)
Since you want to view your results from most used to least used, you can utilize streamstats.
Something like:
<base search that gets you columns with components, counts, and whatever else you were charting with (time, maybe)>
| sort 0 - count
| streamstats count as header by component
| eval component=header."-".component
| chart values(count) as count by _time (or whatever this was) component
Adjust as needed based on the query you're actually working with. I realize this is likely not exact, since I'm not sure what your query or data look like; if you provide that information, I might be able to help more. This type of strategy, though, should automatically put the fields in a different order.
10-05-2019
05:44 PM
1 Karma
I would append the user_info.csv lookup to the search results.
Something similar to:
<base search to gather all active users with latest login date>
| rename U as UserID
| inputlookup append=true user_info.csv
| stats latest(last_login) as last_login by UserID
| fillnull value="Never" last_login
10-04-2019
02:41 PM
1 Karma
Add the inputlookup command to your saved search to dedup before you output.
Run it without the outputlookup command first for testing purposes.
index="linuxevents" AND host=rub.us AND source="/var/log/audit/audit.log"
AND acct="$userId_tok$"
| stats count as _count by acct, auid, addr
| rename acct AS ACCT, auid AS AUID, addr AS ADDR
| inputlookup append=true myAAAlookup.csv
| dedup ACCT AUID ADDR
| outputlookup append=true myAAAlookup.csv
10-03-2019
01:15 PM
As @renjith.nair stated in the comments, I believe what you're after is simply
index="search_index" search processing_service
| eval time_in_mins=('metric_value')/60
| stats avg(time_in_mins) as all_channel_avg
which would output just one column named all_channel_avg and one row with the average.
If you'd like both the individual channel average AND the total average, possibly something like:
index="search_index" search processing_service
| eval time_in_mins=('metric_value')/60
| eventstats avg(time_in_mins) as total_avg
| stats values(total_avg) as all_channel_avg avg(time_in_mins) as channel_avg by channel
However, you might want to do a count and a sum in the stats command and then the eventstats and some eval afterward, so that you don't run eventstats before stats:
index="search_index" search processing_service
| eval time_in_mins=('metric_value')/60
| stats avg(time_in_mins) as channel_avg sum(time_in_mins) as total_mins count as total_count by channel
| eventstats sum(total_mins) as total_mins sum(total_count) as total_count
| eval all_channel_avg=total_mins/total_count
Again, that might need some work; I'm not entirely sure the math is right.
10-03-2019
01:08 PM
Try appending the lookup instead:
index=myindex
| stats count by src_mac signature
| inputlookup append=t max=0 fs_src_mac_tg.csv
| fillnull value="no" exists
| eventstats values(exists) as exists by src_mac
| search exists="no"
You might need to edit it a bit, but by appending the lookup to the bottom of the results, you'll get every row from the lookup instead of only joining src_mac onto the rows that already exist in the search.
https://docs.splunk.com/Documentation/Splunk/7.3.1/SearchReference/Inputlookup
https://docs.splunk.com/Documentation/Splunk/7.3.1/SearchReference/Lookup
10-01-2019
05:51 PM
As a fun note, you can use range(_time) to calculate duration. It won't format the value, but for a chart you'll need it as a number, not a string, anyway.
You also mention that you want this by date, which I don't see in the query provided.
Something to get you started might be:
index=rpa
|stats range(_time) as duration max(_time) as _time by sessionId
|eval duration_min=round(duration/60,2)
|timechart avg(duration_min) as avg_duration_min by sessionId
10-01-2019
05:44 PM
If you do <base search> | stats count by latency | sort 0 - latency, is the first result the same?
Try doing
index=fultonrssi sourcetype=FultonRSSI test_type_code=PING closet_id="*" host=* | timechart max(latency) as "Max Latency" by site_name
or
index=fultonrssi sourcetype=FultonRSSI test_type_code=PING closet_id="*" host=* | chart count by latency site_name
to check the differences. The searches you have look accurate to me, so in my opinion those sites have the same max for that time frame. You could also try adding span=5min to the timechart to see whether a narrower span yields different results.
10-01-2019
05:19 PM
What is your search?
10-01-2019
09:23 AM
This isn't necessarily going to highlight the entire row, but you can highlight the cell you care about based on its value.
In the dashboard, click the pencil at the top right of the column, enable coloring based on values, and enter the values/colors you're interested in highlighting.
Another way to go about highlighting those rows is by using JS and CSS. You can use this answer for reference: https://answers.splunk.com/answers/588394/change-the-color-of-rows-in-a-table-based-on-text-1.html
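If you'd rather keep the value-based coloring in the dashboard source instead of the pencil menu, it corresponds to a format element inside the table. A minimal sketch, where the field name and values are placeholders for whatever your table actually contains:
<table>
  <search> ... </search>
  <!-- map specific cell values to colors; "status" and its values below are placeholders -->
  <format type="color" field="status">
    <colorPalette type="map">{"FAILED":#DC4E41,"SUCCESS":#53A051}</colorPalette>
  </format>
</table>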
10-01-2019
09:07 AM
I just want to make sure I understand what you're saying here.
You have a dashboard with a URL similar to <hostname>:8000/en-US/app/<appname>/<dashboard1>
This dashboard has a diagram with a search head, an indexer, and a heavy forwarder.
What you want is that clicking on any of them unsets a token and somehow hides the entire dashboard, then sets a token and displays an entirely different dashboard with a URL similar to <hostname>:8000/en-US/app/<appname>/<dashboard2>.
I just want to make sure you're asking about hiding dashboards and not just panels within a dashboard.
You can't really "hide" a dashboard, but you can click something that opens a new dashboard, either in a new tab/window or in that same tab. Would that work? The following documentation is about linking to another dashboard.
https://docs.splunk.com/Documentation/Splunk/7.3.1/Viz/DrilldownLinkToDashboard
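To make that concrete, a minimal drilldown sketch for the table or chart element holding the diagram, using appname and dashboard2 as stand-ins for the <appname> and <dashboard2> placeholders above (adjust the path, and drop target="_blank" if you want it to open in the same tab):
<drilldown>
  <link target="_blank">/app/appname/dashboard2</link>
</drilldown>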
09-26-2019
04:17 PM
1 Karma
You can try something like this: | eval dur2=floor(time/86400)."d+".(floor(time/3600)%24)."H:".(floor(time/60)%60)."M:".floor(time%60)."S"
But generally the duration is already giving you what you want, just without the d/H/M/S labels.
8+18:30:28 means 8 days, 18 hours, 30 minutes, and 28 seconds (8*86400 + 18*3600 + 30*60 + 28 = 757828 seconds).
09-22-2019
08:05 AM
1 Karma
The 8+ refers to the number of days. How exactly do you want to display the duration?
09-17-2019
07:16 AM
Based on your comment about the subsearch event count, you're hitting a limit that's truncating the results.
https://docs.splunk.com/Documentation/Splunk/7.3.1/SearchReference/Join
Join-type commands, such as join, append, etc., have a default limit, set in limits.conf, of 10000 subsearch results.
My suggestion is to either rework your search to get rid of the joins, rearrange the searches so that the first subsearch becomes the base search (as long as the new subsearch wouldn't hit the limit instead), or massively increase the limit in limits.conf. Increasing limits.conf isn't recommended unless you understand the architecture of your environment and know that it can handle it.
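Since I don't have your actual search, here's only a rough sketch of the join-free pattern, with made-up index, sourcetype, and field names: pull both datasets back in a single search and correlate them with stats instead of join, so no subsearch (and no subsearch limit) is involved.
(index=indexA sourcetype=typeA) OR (index=indexB sourcetype=typeB)
| stats values(fieldA) as fieldA values(fieldB) as fieldB by common_field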
09-16-2019
04:31 PM
This is INCREDIBLY long, and I thank you for taking so much time to write all of this, but I'm just going to ask you two simple questions:
How long do these searches take individually?
How many results does each of these searches yield individually?
07-24-2019
05:59 PM
Have you checked that the fields are spelled and capitalized correctly, and that the field values are also spelled/capitalized correctly? I know it's silly, but it's critical: the fields and values need to exist and need to match exactly. Do you have example data?
07-10-2019
06:05 AM
2 Karma
...| stats sum(count) as events values(kb) as KB, values(mb) AS MB by EventCode doesn’t work?
07-09-2019
01:34 PM
2 Karma
Just add sum(count) as events to the last stats command. I think that should do it.
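With the full query from the reply just below this one, that last stats line would become:
| stats sum(count) as events values(kb) as KB values(mb) as MB by EventCode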
07-09-2019
11:45 AM
3 Karma
Can you try something like this? If I'm understanding what you're looking for, you just need to add EventCode to your fields and stats commands.
index=wineventlog
| fields _raw EventCode
| eval esize=len(_raw)
| stats count as count avg(esize) as avg by EventCode
| eval bytes=count*avg
| eval kb=bytes/1024
| eval mb=round(kb/1024,2)
| stats values(kb) as KB, values(mb) AS MB by EventCode
07-09-2019
11:38 AM
So the reason that wouldn't work is that you're calculating less_dur and then filtering to rows where it's less than 1. Only THEN do you create more_dur, but by that point the duration is already always less than 1. You would need to do both evals before the where statements.
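Since the original search isn't shown here, this is just a sketch of the ordering, with hypothetical eval expressions; the point is simply that both fields get calculated before any filtering happens:
<base search>
| eval less_dur=if(duration<1, duration, null())
| eval more_dur=if(duration>=1, duration, null())
| where isnotnull(less_dur) OR isnotnull(more_dur)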
06-15-2019
10:02 AM
If you were to write up a regex to extract the number of present values and do an eval to calculate the percentage, I think that’s what you’re looking for.
I’m on mobile, so bear with me right now. Something like:
|rex field=values "\(Present\) \*\*\*\",\"count\":(?<present_count>\d+)"|eval perc_present=(present_count/count)*100
Might need some tweaking.
05-21-2019
12:27 PM
I agree with the ever-brilliant @rich7177 that we should at least test whether the PDF is generating properly.
If you are looking to add a line when no data is present, you can add something like this to the end (tweak as needed):
|appendpipe [stats count|where count=0|eval Compliant="No Data"]