Activity Feed
- Posted Re: Dashboard Studio Single Value with Trendlines on Splunk Enterprise. 01-28-2025 06:43 AM
- Posted Re: Dashboard Studio Single Value with Trendlines on Splunk Enterprise. 01-27-2025 11:14 AM
- Posted Re: Dashboard Studio Single Value with Trendlines on Splunk Enterprise. 01-27-2025 10:27 AM
- Posted Dashboard Studio Single Value with Trendlines on Splunk Enterprise. 01-27-2025 06:58 AM
- Posted Re: Lookup Table Modifying _time and timepicker ignoring on Splunk Search. 01-24-2025 07:32 AM
- Posted Re: Lookup Table Modifying _time and timepicker ignoring on Splunk Search. 01-24-2025 06:51 AM
- Posted Lookup Table Modifying _time and timepicker ignoring on Splunk Search. 01-24-2025 06:30 AM
- Posted Re: block any search for index=* with workload on Getting Data In. 12-03-2024 06:52 AM
- Posted Why are scheduled searches defaulting to other and causing wrong cron timezone? on Alerting. 04-11-2023 07:30 AM
- Posted Re: Splunk Add-on for AWS Issues with Kinesis Pull on All Apps and Add-ons. 07-21-2022 05:41 AM
- Posted How to Work Around Distinct 10K Limit on Splunk Search. 06-01-2022 07:19 AM
- Posted Re: Dynamically Subtract Two Last Column Values on Splunk Search. 05-02-2022 10:28 AM
- Posted How to dynamically subtract two last column values? on Splunk Search. 05-02-2022 08:33 AM
- Posted Re: How to get Stats values by Month as a Column? on Dashboards & Visualizations. 03-29-2022 07:34 AM
- Posted How to get Stats values by Month as a Column? on Dashboards & Visualizations. 03-25-2022 12:40 PM
- Tagged How to get Stats values by Month as a Column? on Dashboards & Visualizations. 03-25-2022 12:40 PM
- Posted Re: How to set an alert to fire based on lookup table value? on Splunk Search. 02-23-2022 01:38 PM
- Posted How to set an alert to fire based on lookup table value? on Splunk Search. 02-23-2022 06:17 AM
- Posted Re: Reading complexed nested Json on Splunk Search. 02-16-2022 08:36 AM
- Posted Re: Reading complexed nested Json on Splunk Search. 02-16-2022 07:54 AM
Topics I've Started
01-28-2025
06:43 AM
This is probably why I'm getting confused. Given this table:
Jan 100
Feb 105
March 90
I want a single-value trellis view that depicts each month's total, with a trend compared to the previous month: Jan 100 (no trend), Feb 105 (trend up 5), March 90 (trend down 15). I'm going to play with timewrap; maybe I'm going down the wrong path.
01-27-2025
11:14 AM
Hi, this: https://docs.splunk.com/Documentation/Splunk/9.4.0/DashStudio/trellisLayout. Specifically, like the sample there, with each month, its cost, and the trend line comparing to the previous month.
01-27-2025
10:27 AM
I keep getting "Select a valid trellis split by field". I used "_time" and also tried "monthly_cost".
01-27-2025
06:58 AM
Hi, struggling to get single values to show with a trendline comparing to the previous month.
| bin span=1mon _time
| chart sum(cost) as monthly_costs over bill_date
I've tried different variations. The above will show a single value for each month, but I want to add a trendline to each single value comparing it to the previous month. Any ideas? Thanks!
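One possible direction (my sketch, not a confirmed solution; field names follow the search above): aggregate by month and let the delta command compute the month-over-month change, which could then drive the single value's trend indicator:

```
| bin span=1mon _time
| stats sum(cost) as monthly_cost by _time
| delta monthly_cost as trend p=1
```

delta subtracts the value from the previous result row, leaving trend empty for the first month, which matches the "no trend" case above.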
Labels: using Splunk Enterprise
01-24-2025
07:32 AM
My use case requires strict relationships.
| inputlookup append=t mylookup
| eval _time = strptime(start_date, "%Y-%m-%d")
| addinfo
| rename info_* AS *
| where _time >= min_time AND _time <= max_time
This works for my use case, if a bit clunky. Thanks all.
01-24-2025
06:51 AM
Ah, OK. I changed the definition to the below. It's still not working; the time picker is ignoring the time. Anything else I should do?
01-24-2025
06:30 AM
Hi, struggling to figure out what I'm doing wrong. I have the following SPL:
| inputlookup append=t kvstore
| eval _time = strptime(start_date, "%Y-%m-%d")
| eval readable_time = strftime(_time, "%Y-%m-%d %H:%M:%S")
start_date is YYYY-MM-DD. When I modify _time, I can see it has changed via readable_time, but the time picker still ignores the change: I can search the last 30 days and still get events whose _time is before the range in the time picker. Any ideas? Thanks!
12-03-2024
06:52 AM
Reading through the Ideas, there are a few written in different ways that would yield the same result. This is the simplest explanation: https://ideas.splunk.com/ideas/PLECID-I-606. If we could use * as a literal, it would help your problem too. Best would be the ability to apply a regex statement: at my shop it would be OK to do index=ABCDE*, but not index=A*.
04-11-2023
07:30 AM
Hi,
We already have a case open, but I'm wondering if anyone else has run into this problem. Randomly, scheduled searches are losing their original owner and defaulting to Other. This causes the cron schedules to run at the wrong times, since the timezone defaults to the system timezone.
Does anyone have experience figuring out why the owner is being changed to Other? We use SAML.
Thanks
Chris
Labels: alert action, alert condition, cron
07-21-2022
05:41 AM
No. We ended up going with another solution (a product outside of Splunk). The TA was very buggy and does not scale, nor is it cluster-aware (just one HF doing the work). It caused us many headaches. For smaller shops, I'm sure it works fine. Chris
06-01-2022
07:19 AM
Hi, trying to get stats on user searches. I'm struggling to work around the 10K distinct limit with stats dc(sids) in the query below.
("data.search_props.type"!=other "data.search_props.user"!=splunk-system-user AND "data.search_props.user"!=admin data.search_props.sid::* host=* index=_introspection sourcetype=splunk_resource_usage)
| eval mem_used='data.mem_used', app='data.search_props.app', elapsed='data.elapsed', label='data.search_props.label', intro_type='data.search_props.type', mode='data.search_props.mode', user='data.search_props.user', cpuperc='data.pct_cpu', search_head='data.search_props.search_head', read_mb='data.read_mb', provenance='data.search_props.provenance', label=coalesce(label,provenance), sid='data.search_props.sid'
| rex field=sid "^remote_[^_]+_(?P<sid>.*)"
| eval sid=(("'" . sid) . "'"), search_id_local=replace('data.search_props.sid',"^remote_[^_]+",""), from=null(), username=null(), searchname2=null(), searchname=null()
| rex field=search_id_local "(_rt)?(_?subsearch)*_?(?P<from>[^_]+)((_(?P<base64username>[^_]+))|(__(?P<username>[^_]+)))((__(?P<app>[^_]+)__(?P<searchname2>[^_]+))|(_(?P<base64appname>[^_]+)__(?P<searchname>[^_]+)))"
| rex field=search_id_local "^_?(?P<from>SummaryDirector)"
| fillnull from value="adhoc"
| eval searchname=coalesce(searchname,searchname2), type=case((from == "scheduler"),"scheduled",(from == "SummaryDirector"),"acceleration",isnotnull(searchname),"dashboard",true(),"ad-hoc"), type=case((intro_type == "ad-hoc"),if((type == "dashboard"),"dashboard",intro_type),true(),intro_type)
| fillnull label value="unknown"
| stats max(elapsed) as runtime max(mem_used) as mem_used, sum(cpuperc) AS totalCPU, avg(cpuperc) AS avgCPU, max(read_mb) AS read_mb, values(sid) AS sids by type, mode, app, user, label, host, search_head, data.pid
| eval type=replace(type," ","-"), search_head_cluster="default"
| stats dc(sids) AS search_count, sum(totalCPU) AS total_cpu, sum(mem_used) AS total_mem_used, max(runtime) AS max_runtime, avg(runtime) AS avg_runtime, avg(avgCPU) AS avgcpu_per_indexer, sum(read_mb) AS read_mb, values(app) AS app by type, user
| eval prefix="user_stats.introspection."
| addinfo
| rename info_max_time as _time
| fields - "info_*"
Can someone suggest a tweak to the SPL to get around the 10K distinct limit? Thank you, Chris
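A sketch of one possible tweak (my suggestion, not confirmed against this environment): swap the exact distinct count for estdc(), Splunk's approximate distinct-count function, which trades a small error margin for not having to track every value exactly. Applied to the final stats:

```
| stats estdc(sids) AS search_count, sum(totalCPU) AS total_cpu, sum(mem_used) AS total_mem_used,
    max(runtime) AS max_runtime, avg(runtime) AS avg_runtime, avg(avgCPU) AS avgcpu_per_indexer,
    sum(read_mb) AS read_mb, values(app) AS app by type, user
```

Note that the earlier `values(sid) AS sids` collection may itself be subject to a multivalue cap, so the distinct count might need to happen before that aggregation (or the relevant limits.conf setting raised) as well.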
Labels: stats
05-02-2022
08:33 AM
Hi, I have SPL that generates months of data as columns. I want to subtract just the last two columns. The fields change month to month, so I can't hardcode them.
Given the sample below, how can I compute lastMonthDiff without hardcoding the field names? Thank you! Chris
| makeresults
| eval "2202-01"=1
| eval "2202-02"=2
| eval "2202-03"=5
| eval "2202-04"=4
| append
[| makeresults
| eval "2202-01"=4
| eval "2202-02"=5
| eval "2202-03"=7
| eval "2202-04"=3
]
| append
[| makeresults
| eval "2202-01"=5
| eval "2202-02"=2
| eval "2202-03"=7
| eval "2202-04"=9
]
| fields - _time
| foreach * [eval lastMonthDiff = '2202-03' - '2202-04']
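One possible approach (my sketch, not from the thread): collect every column's value into a multivalue field with foreach, then subtract the last two entries by index, so no field names are hardcoded. This assumes the month columns appear in chronological order:

```
| fields - _time
| foreach * [eval all_vals = mvappend(all_vals, '<<FIELD>>')]
| eval lastMonthDiff = mvindex(all_vals, -2) - mvindex(all_vals, -1)
| fields - all_vals
```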
Labels: table
03-29-2022
07:34 AM
Yes, that was exactly what I was looking for!
03-25-2022
12:40 PM
Hi,
I have a simple stats
| stats values(Field1) sum(TB) by Field2 date_month
This gives me one row for each month.
Field1 10 Field2 Jan
Field1 15 Field2 Feb
I want to see it like below, so each month is on the same row, grouped by the fields.
Field1 Field2 Jan 10 Feb 15
I've tried transpose and some other suggestions; I just keep missing.
Thanks,
Chris
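A sketch of what might produce that layout (my suggestion, untested here): chart pivots the split-by field into columns, one per month:

```
| chart sum(TB) over Field2 by date_month
```

Two caveats: the month columns may come out alphabetically rather than in calendar order, and values(Field1) would need to be joined back separately (e.g. with an additional stats).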
Labels: table
02-23-2022
01:38 PM
I figured out a solution; I was overthinking it.
index=main ```place all your normal SPL here```
| eval alert_name = "myalert" ```create a variable with this alert name or key```
| lookup AlertSample.csv AlertName AS alert_name output IsOn ```the lookup table has 2 columns, AlertName and IsOn```
| search IsOn=true ```evaluate to true```
If there is a better way, I'd be glad to hear it. Chris
02-23-2022
06:17 AM
Hi, struggling to figure out why I can't get this working. I want an alert to evaluate to true (trigger) based on whether it's deemed active or inactive in a lookup table. The idea: the SPL would always check the lookup, and only if the alert's SPL evaluates to true would it perform its normal action. That way we could disable numerous alerts (make them evaluate to false) by updating one value in a lookup table instead of clicking Disable on each alert. I was thinking I could do something like:
index=main
| append
    [| inputlookup AlertSample.csv where AlertName=MySampleName
    | fields IsOn]
and just append the IsOn value to all the events, but it's not working and I have tried many variants of SPL. Suggestions, or a better way of doing this? Thank you! Chris
Labels: lookup
02-16-2022
08:36 AM
I'm almost there. Now I need the count per event, as this is totaling across every event. Looks like I just need to add a group-by to the stats. Thank you!
02-16-2022
07:54 AM
Neat, trying to follow. I need the total of all, not each individual count; so in my example, the total is 5.
02-16-2022
07:33 AM
OK, I updated it. Just know that the real sample is deeply nested; I can get to this object starting with an initial spath.
02-16-2022
06:57 AM
Sorry, I made the sample too easy; I've updated my sample JSON. No, I need to count the instances of the object.
02-16-2022
06:26 AM
Hi, struggling to count objects in a big JSON doc. I'm on version 8.0.5, so the json_keys function is not available.
{
"0": {
"field1": "123"
},
"1": {
"field2": "123"
},
"2": {
"field3": "123"
},
"3": {
"field4": "123"
},
"4": {
"field5": "123"
}
}
This is a sample; I am able to get down to the path (startpath) with spath. What I'm trying to do is count the instances of the objects (0, 1, 2, 3, 4). I can't cleanly regex backwards, as the real value names are not consistent. I thought I could do something like startpath{} and list them out, but the wildcards {} are not working any way I try them. Thoughts, suggestions?
Thanks
Chris
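A sketch of one possible workaround (my suggestion; `startpath` and the regex are assumptions based on the sample, and the regex assumes each inner object contains no nested braces): pull the subtree out with spath, then count its top-level keys with a multivalued rex:

```
| spath output=subtree path=startpath
| rex field=subtree max_match=0 "\"[^\"]+\"\s*:\s*\{(?<obj>[^{}]*)\}"
| eval object_count = mvcount(obj)
```

For deeply nested or brace-containing objects this regex would undercount, so it is only a starting point.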
Labels: field extraction
09-24-2021
11:09 AM
Ugh. Seems counterintuitive; I thought time modifiers in SPL overrode the time picker. Anyway, thanks. So I can just run some SPL over the last 30-60 days in hopes of finding my problem in the future. If someone is pushing events timestamped months or years in the past, it will be a heavy query...
09-24-2021
10:57 AM
I'm stumped. Using this debug data, I did a search specific to the time. The time picker was set to Sept 6 (same day):
index=os sourcetype=ps host=MyHost _time=1630910039.599
| eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M:%S")
| table host _time indextime
This returns the event time and index time differences, see below. So my hunch was right. Now I want to reverse-engineer some SPL I can run now to find these issues. Let's start with a time modifier for index_earliest and spot-check before I do some time math:
index=os sourcetype=ps _index_earliest=-24h
| eval indextime=strftime(_indextime,"%Y-%m-%d")
| eval event_time=strftime(_time, "%Y-%m-%d")
| table host _time indextime event_time
The above does not pull back any data. If I remove the time modifier from the SPL and set the time picker to 24 hours, I do get data back. Am I missing something?
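If I understand Splunk's time handling correctly (worth verifying against the docs), this may be the catch: _index_earliest filters on _indextime, while the time picker still filters on _time, so events indexed in the last 24 hours but timestamped in the past fall outside the default event-time window. A sketch that widens the event-time window explicitly:

```
index=os sourcetype=ps earliest=0 _index_earliest=-24h
| eval indextime = strftime(_indextime, "%Y-%m-%d")
| eval event_time = strftime(_time, "%Y-%m-%d")
| table host _time indextime event_time
```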
09-24-2021
07:39 AM
Update: I had to put HEC into DEBUG mode to find my issue. body_chunk="{"time":1630910039.599,"index":"os","host":"myhost","source":"ps","sourcetype":"ps", The above is just a snippet of the event. The "time" sent in is in the past: the HEC-received time is the Sept 6 time, but the actual time is the Sept 24 time. This is the problem I'm trying to find without having to place an indexer into DEBUG mode. To clarify the question: how would I find this problem in the future with SPL? Thanks Chris
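A sketch of one way to surface this in the future with SPL (my suggestion; the one-day threshold is an assumption to tune): compare each event's index time against its event time and keep events that arrived with a large lag:

```
index=os sourcetype=ps
| eval lag_secs = _indextime - _time
| where abs(lag_secs) > 86400
| table host source _time _indextime lag_secs
```

abs() also catches events timestamped in the future; over a wide event-time window this remains a heavy search, as noted above.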