Activity Feed
- Got Karma for Re: How do I add yesterday's date to an emailed report subject?. a week ago
- Got Karma for Re: Token substitues value with double quotes | unable to use panel token values in DB connect query to compare string values as they need single quotes. 4 weeks ago
- Got Karma for Re: What are the differences between append, appendcols, and join search commands?. 11-06-2024 02:19 AM
- Got Karma for Re: How to replace specific field value?. 06-20-2024 06:36 AM
- Got Karma for Re: "Other" in timechart. 01-11-2024 01:30 AM
- Got Karma for Re: Token substitues value with double quotes | unable to use panel token values in DB connect query to compare string values as they need single quotes. 12-14-2023 07:48 AM
- Got Karma for Re: If there are no results found, how do I get my search to return a field that has the value of zero?. 11-17-2023 07:09 AM
- Got Karma for Re: Search Query Help: Number of Events per Event Code and Total size of those events. 08-07-2023 06:54 AM
- Got Karma for Re: Stats (count(x) as countX, count(y) as countY) BY FIELD X. 03-16-2023 06:36 AM
- Got Karma for Re: How to reference a dashboard token in an HTML panel?. 01-30-2023 12:00 AM
- Got Karma for Re: "Other" in timechart. 01-13-2023 05:20 AM
- Got Karma for Re: How do I use the reverse command to change the order of my table?. 01-10-2023 07:29 AM
- Got Karma for Re: Get single value panel to display a "date". 12-27-2022 07:12 AM
- Got Karma for Re: How can I create a query to find dashboard usage and top used dashboards of all the dashboards in my environment?. 09-23-2022 10:13 AM
- Got Karma for Re: How can I create a query to find dashboard usage and top used dashboards of all the dashboards in my environment?. 09-15-2022 05:06 AM
- Got Karma for Re: In a dashboard, how can I remove the panels that say "No results found." with code or something equivalent?. 09-12-2022 01:38 AM
- Got Karma for Re: "Other" in timechart. 08-11-2022 05:22 AM
- Got Karma for Re: "Other" in timechart. 08-07-2022 10:23 PM
- Got Karma for Re: "Other" in timechart. 08-07-2022 10:23 PM
- Got Karma for Re: Why are the two base searches throw warnings in a dashboard?. 08-01-2022 06:04 AM
06-09-2021
08:47 AM
1 Karma
That's a good catch @black_bagel, but you don't have to eval domain_all before the foreach statement. You could just run:

```
| makeresults
| eval domain_field1=1
| eval domain_field2=5
| eval domain_field3=4
| eval domain_field4=6
| foreach domain_* [| eval domain_all=min(domain_all,'<<FIELD>>')]
```

and that will still produce 1 for domain_all.
12-03-2020
11:39 AM
Here is a good start for a REST call:

```
| rest splunk_server=local servicesNS/-/-/data/ui/views f=eai:* f=label f=title
```

The field eai:acl.owner will give you the owner of a dashboard. See the Splunk REST API docs. I'm not sure that I 100% understand your use case, though. You have a dashboard (or dashboards) that you want to send as a PDF to a dynamic list of users, and the results of the dashboard should change based on the recipient? Or did I completely miss the use case?
08-21-2020
10:57 AM
I'm not sure if this solved the original issue but it definitely has been a great solution to my issue 🙂
06-18-2020
11:55 AM
1 Karma
Here's a starting-off point. This likely needs some adjustments, but it should get you going.

```
index=_audit search="'search *" sourcetype=audittrail
| stats values(apiStartTime) as earliest_time by search_id search
| rex field=search max_match=0 "\'search\s.*index=(?<searched_index>[^\s|\"]+)"
| rex field=search max_match=0 "\'search\s.*\`(?<macros_used>[^\`]+)\`"
| rex field=search max_match=0 "\'search\s.*eventtype=(?<searched_eventtype>[^\s|\"]+)"
| join searched_eventtype splunk_server type=left
    [| rest /servicesNS/-/-/admin/eventtypes splunk_server=* f=search f=title
     | table splunk_server title search
     | rename title as searched_eventtype search as searched_eventtype_def]
| join macros_used splunk_server type=left
    [| rest /servicesNS/-/-/admin/macros splunk_server=* f=definition f=title
     | table splunk_server title definition
     | rename title as macros_used definition as macro_used_def]
| rex field=macro_used_def max_match=0 "index=(?<macro_index>[^\s|\"]+)"
| rex field=searched_eventtype_def max_match=0 "index=(?<eventtype_index>[^\s|\"]+)"
| eval all_searches_indexes=coalesce(searched_index,coalesce(macro_index,eventtype_index))
| eval all_searches_indexes=if(isnull(all_searches_indexes),"not_defined or *",all_searches_indexes)
| stats values(earliest_time) as earliest_time by search_id all_searches_indexes
| eval earliest_time_epoch=if(earliest_time="'ZERO_TIME'",relative_time(now(),"-90d"),strptime(earliest_time,"'%a %b %d %T %Y'"))
| eval earliest_time_bucket=case(earliest_time_epoch<relative_time(now(),"-7d"),"last 7d",earliest_time_epoch<relative_time(now(),"-14d"),"last 14d",earliest_time_epoch<relative_time(now(),"-21d"),"last 21d",earliest_time_epoch<relative_time(now(),"-28d"),"last 28d",1=1,"last 90d")
| chart count by all_searches_indexes earliest_time_bucket
```

What this is doing is looking at the audit logs for any search being run. Then, using regex, it extracts anything with index= to grab the indexes. It also looks for any macros or eventtypes so that it can grab any of those that might have indexes defined in them. You might want to tweak that bit a little to search for just eventtypes with indexes and macros with indexes. Then it joins them all together into one field, grabs the earliest time for each search, and buckets the time by your definition above.
06-18-2020
11:10 AM
Do different lines of the alert get sent to different recipients? Or is it just that one copy of the alert gets sent internally and one copy of the exact same dataset gets sent externally?
06-15-2020
08:58 AM
1 Karma
All you'd really need to do is something similar to:

```
| tstats count where index=<interesting_index> [| inputlookup hashes.csv | table <hash_field_name_in_index>] by index sourcetype
```

You could also do something like:

```
index=<interesting_index> <filtering_data> [| inputlookup hashes.csv | table <hash_field_name_in_index>]
| stats max(_time) as last_seen by index <hash_field_name_in_index>
```

There are honestly a handful of ways you could do this; it depends on the input and the output, too. You can also join in the lookup file using | lookup instead of as a subsearch.
06-15-2020
08:52 AM
Is the regex101 result what you want extracted, all those groups? Or is it not exactly what you want? What is the limit problem? Is this a regex you're doing in props or transforms, or are you doing it with | rex on the search line? Can you provide the entire stanza or the entire search string?
05-10-2020
06:17 PM
Good point @MuS! Thanks!
04-07-2020
09:06 AM
4 Karma
In Splunk v7.3+, you can use the REST call, as long as your lookup tables have definitions created in transforms.conf.

```
| rest splunk_server=* /servicesNS/-/-/data/transforms/lookups getsize=true f=size f=title f=type f=filename f=eai*
| fields splunk_server filename title type size eai:appName
| where isnotnull(size)
| eval MB = round(size / 1024 / 1024, 2)
| search MB>{0}
| fields - size
```

The docs do not have the getsize param documented yet, but feedback has been submitted to have it added.
04-03-2020
06:36 PM
Are timezones an issue for this search? You could just create an eval statement that marks an event as one time period if _time falls between certain hours, and so on.
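A minimal sketch of that eval approach; the shift names and hour boundaries here are placeholders for illustration, not from the original question:

```
...
| eval hour=tonumber(strftime(_time,"%H"))
| eval period=case(hour>=6 AND hour<14,"morning", hour>=14 AND hour<22,"evening", 1=1,"overnight")
```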
04-03-2020
06:33 PM
Just to verify: the alert is enabled? When you run that search over 15 minutes, you get data? (I'd hope so, since it's _internal.)
Did you check the _internal scheduler logs to see if there was any error for this search? Skips or anything?
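A hedged example of checking those scheduler logs; the saved-search name is a placeholder you'd swap for the alert's actual name:

```
index=_internal sourcetype=scheduler savedsearch_name="<your alert name>"
| stats count by status
```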
04-03-2020
06:30 PM
1 Karma
It's because you're doing a values(*) in your stats command, which is bringing in every index time. If any of the other fields end up having more than one value, it'll do the same thing for those.
Try using max(_indextime) in the stats command and moving the strftime to the bottom:

```
(index=* sourcetype=ActiveDirectory objectCategory="CN=Computer,CN=Schema,CN=Configuration,DC=g1net,DC=com") OR (index=windows DisplayName="BitLocker Drive Encryption Service" source="kiwi syslog server")
| eval cn=coalesce(cn,host)
| stats values(*) AS * max(_indextime) as indextime BY cn
| search cn=* host=* NOT [inputlookup All_Virtual_Machines.csv | rename Name as cn]
| where StartMode!="" AND operatingSystem!="" AND Started!="true"
| rename cn as System, operatingSystem as OS
| dedup System
| table System StartMode State Started OS indextime
| eval indextime=strftime(indextime,"%Y-%m-%d %H:%M:%S")
| sort System
```
I haven't done this in a while, so it might not be true for v8.x; however, I recall that in v6.x there was an issue with scheduled PDFs that took too long to run. The PDF would render empty panels if the search ran for an extended period of time. How long does your search typically take to run?
Found the doc on this:
https://docs.splunk.com/Documentation/Splunk/8.0.2/Viz/DashboardPDFs#Configure_the_timeout_setting_for_generating_a_PDF
The default is 1h in limits.conf; I'm not sure whether your environment has the default limit or another limit set. Something you could look into to see if that's the reason?
04-03-2020
06:14 PM
1 Karma
My suggestion would be to create either a lookup table with the bad domains or a macro. That way you can just add [| inputlookup bad_domains.csv] to the search (for a lookup).
The lookup will work best if the field is extracted in the logs (for instance, a domain field, where the lookup table has a domain column).
The macro would work if you're just doing keyword searches.
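For instance, assuming an extracted domain field and a bad_domains.csv lookup with a domain column (the index name here is illustrative):

```
index=proxy [| inputlookup bad_domains.csv | table domain]
| stats count by domain
```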
04-03-2020
06:08 PM
Do you have valid creds? The last line mentions that the authorization failed and to verify the username and password for the API.
03-10-2020
11:00 AM
2 Karma
Thanks @woodcock!
Try this query:

```
| rest splunk_server=* /servicesNS/-/-/saved/searches f=title f=eai:acl* f=description f=qualifiedSearch f=next_scheduled_time search="eai:acl.app!=splunk_archiver" search="eai:acl.app!=splunk_app_windows_infrastructure" search="eai:acl.app!=splunk_app_aws" search="eai:acl.app!=nmon"
| table splunk_server title, eai:acl.owner, description, eai:acl.app, qualifiedSearch, next_scheduled_time
| search next_scheduled_time!="" qualifiedSearch!="*index IN*"
| regex qualifiedSearch!=".*index\s*(!?)=\s*([^*]|\*\S+)"
| regex qualifiedSearch="^\s*search "
| regex qualifiedSearch!="^\s*search\s*\[\s*\|\s*inputlookup"
| rex field=qualifiedSearch "^(?P<exampleQueryToDetermineIndexes>[^\|]+)"
| rex field=exampleQueryToDetermineIndexes max_match=0 "eventtype=(?<eventtype>[^\s]+)"
| join eventtype splunk_server type=left
    [| rest /services/admin/eventtypes splunk_server=* f=search f=title
     | table splunk_server title search
     | rename title as eventtype search as eventtype_def]
| eval eventtype="eventtype=".eventtype
| eval exampleQueryToDetermineIndexes=if(like(exampleQueryToDetermineIndexes,"%eventtype%"),replace(exampleQueryToDetermineIndexes,eventtype,eventtype_def),exampleQueryToDetermineIndexes)
| replace *eventtype* with *eventtype_def* in exampleQueryToDetermineIndexes
| regex exampleQueryToDetermineIndexes!="\`"
| regex exampleQueryToDetermineIndexes!=".*index\s*(!?)=\s*([^*]|\*\S+)"
| regex exampleQueryToDetermineIndexes!=".*tag\s*(!?)=\s*([^*]|\*\S+)"
| where isnotnull(exampleQueryToDetermineIndexes)
| fields - eventtype eventtype_def
| rename eai:acl.owner AS owner, eai:acl.app AS Application
| stats values(*) as * by Application title
```

It actually comes from https://splunkbase.splunk.com/app/3796/ , if I recall correctly. We adjusted it slightly in our environment, but I think this is the original search.
02-07-2020
02:54 PM
You need to create two tokens, one for ANI and one for CASE_NUMBER. Something like:

```
<set token="case_num">$row.CASE_NUMBER$</set>
<set token="ani">$row.ANI$</set>
```

Then edit the search drilldown to use those two tokens. Something like:

```
<link target="_blank">search?q=<search in url encoded nonsense>&CASE_NUMBER=$case_num$&ANI=$ani$</link>
```
02-07-2020
02:49 PM
It's not the "No Title" that's taking up space, it's the HTML break that you have. You might be able to use some CSS to reduce the space, but I don't have any written up that'll help at the moment.
02-03-2020
01:43 PM
Base searches do not work like that. You can't reference one as an identifier inside a search string; it only works via the base attribute on the search node. You'll probably want to use | loadjob.
Create a token with the sid from your base search, something like:

```
<done>
  <set token="sid">$job.sid$</set>
</done>
```

and then within your subsearch, something like:

```
| loadjob $sid$ ...
```
01-10-2020
02:48 PM
Just want to double-check one thing:
You used your Jira service account and API token, as well as your Jira site URL, for the configs? Not just your Jira username/password?
01-06-2020
10:09 AM
2 Karma
You have earliest hardcoded in your search bar, and it's set to 15min. When you remove that and broaden your search to 30d, does that help at all?
Have you checked that field extractions are working properly? You have User=*, but it could be that the field extractions are broken somewhere. Try just index=uam (I also noticed that in one comment you put iam and in another you put uam, so double-check for any typos) over a broader range and see the last time data came through. You can also use the | tstats trick that @mydog8it suggests, but I might add:

```
| tstats max(_time) as max_time max(_indextime) as max_indextime where index=uam
| convert ctime(max_time) ctime(max_indextime)
```

in order to get the last time and indextime for that index. If you see that data has come in within the last 15 minutes or so, shorten your time frame and run index=uam | fieldsummary to see what fields are being extracted.
12-11-2019
11:18 AM
1 Karma
So...
snow_index has taskcurrentstatus: "TASK0001OpenService - Tools"
snow_index_2 has taskcurrentstatus: "TASK0001OpenService & Tools"
You can't just join on taskcurrentstatus and expect it to match those two together, because they don't match.
You'd need to do something like:

```
| replace "* - *" with "* & *" in group
```

before you eval taskcurrentstatus, and a similar one in the subsearch with t_group.
But really, you shouldn't use join; it's ugly and has limitations.
Try something like this:

```
(index=snow_index sourcetype=task_lookup_dat_csv) OR (index=snow_index_2)
| eval ExistingInSummaryTable=if(index=snow_index,"No","Yes")
| eval taskcurrentstatus=coalesce(number+state+group,t_number+t_state+t_group)
| stats values(t_number) as t_number values(t_state) as t_state values(t_group) as t_group values(ExistingInSummaryTable) as ExistingInSummaryTable by taskcurrentstatus
| search ExistingInSummaryTable="No"
```

You might need to adjust that a bit, break it apart command by command, but you can do that search without a join, guaranteed.
12-10-2019
02:25 PM
In your timechart, add limit=0 if you want to display them all. However, your results might get truncated with so many indexes; it depends on how you're trying to display the data. Charting commands limit the output to 10 series plus the "OTHER"/"NULL" fields.
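For example, splitting by index (the split field here is an assumption based on the question):

```
index=* | timechart count by index limit=0
```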
10-10-2019
06:15 PM
You know what... streamstats by _time might be a terrible solution for this, because you need to compare two different times. Sorry!
This is what you should try:

```
index=foo
| stats count as _count by _time LATITUDE LONGITUDE CELL NAME ADDRESS BAND PCI SITE_ELEV ECELL_ID PSITECODE
| timewrap 1w
```
10-10-2019
12:10 PM
1 Karma
You can use streamstats. Something like:

```
<base search that gives a table of weekly information sorted properly>
| streamstats window=1 current=f values(*) as prev_* by _time
```

You may need to adjust labels or sorting to have it make sense, though.