If I understood correctly, just use wildcards:

| mstats rate(vault.hostname*.runtime.total_gc_pause_ns) as gc_pause_* WHERE `vault_telemetry` span=5m
| timechart max(gc_pause_*) AS iowait_* bins=1000
| eval warning=3.3e7, critical=8.3e7
Hi there, I am trying to make a statistic graph in my dashboard using the search below.

| mstats rate(vault.runtime.total_gc_pause_ns.value) as gc_pause WHERE `vault_telemetry` AND cluster=* AND (host=*) BY host span=5m
| timechart max(gc_pause) AS iowait bins=1000 BY host
| eval warning=3.3e7, critical=8.3e7

Note that this search comes from the pre-defined dashboard template, but it is not working as-is in my environment. In my Splunk, when I do an mpreview of my index `vault_telemetry`, I get results like the following:

metric_name:vault.hostname1.runtime.total_gc_pause_ns
metric_name:vault.hostname2.runtime.total_gc_pause_ns
metric_name:vault.hostname3.runtime.total_gc_pause_ns
metric_name:vault.hostname3.runtime.total_gc_pause_ns
metric_name:vault.hostname4.runtime.total_gc_pause_ns

If I modify the pre-defined search from the template as below, I can get a result, but only for one hostname at a time:

| mstats rate(vault.hostname1.runtime.total_gc_pause_ns) as gc_pause WHERE `vault_telemetry` span=5m
| timechart max(gc_pause) AS iowait bins=1000
| eval warning=3.3e7, critical=8.3e7

I would like to have all the hostnames shown on my single panel. Can someone please assist and help me with the correct search I need to use?
My date field is _time. When I use this query I'm getting "No results found".

| eval weeknum=strftime(strptime(_time,"%d-%m-%Y"),"%V")
| chart dc(Task_num) as Tasks over weeknum by STATUS
Are you talking about lookup files or KV stores? Can you describe what is not 'working' and give an example of what you see when you try the commands?
This location (/opt/splunk/etc/users/[myuserID]/testapp/lookups/test.csv) means the lookup is private, which is the default state when you upload a lookup. You should change the permission to app level before you do the outputlookup.
Hi @elizabethl_splu, is this feature now available in Dashboard Studio?
I tried it too. It's not working. Should I enable anything or add any property while creating the lookup file?
Hello, I added a new lookup by uploading the CSV file via Lookups » Lookup table files » Add new. The CSV file was uploaded to this directory, where I can change the permission:

/opt/splunk/etc/users/[myuserID]/testapp/lookups/test.csv

When I used outputlookup, it wrote the same test.csv file into a different directory, where I cannot change the permission:

/opt/splunk/etc/apps/testapp/lookups/test.csv

Please suggest. Thank you
Thanks @bowesmana, this is closer to what I'm trying to achieve and has given me some idea of how to work on Splunk searches of this complexity.
If this is a known, consistent CSV you are going to create, then create a new lookup of that name and upload a dummy CSV. You can then define the permissions on the CSV, and outputlookup will not change those permissions.
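For example, a rough sketch of that flow once the dummy lookup (here called test.csv) has been defined and its permissions set - the tstats search is just a placeholder for whatever produces your real rows:

| tstats count where index=_internal by host ``` any search that builds the rows you actually want ```
| outputlookup test.csv ``` writes into the already-defined lookup, so the permissions you set are kept ```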
So rather than use top, which is somewhat useful, it is generally easier to get results using stats:

index=mydata
| bin _time span=1h
| stats count by _time movieId
| eventstats sum(count) as total by _time
| eval percent=round(count/total*100, 2)

This search will give you 1h buckets with counts by movieId, then the eventstats will calculate the total per hour of all movies, and then the percent calc will get the percentage of each movie within that hour.

However, although efficient, I expect what you may want is a streamstats variant, which will give you a sliding 60 minute window, so if your peak for a movie is from 20:30 to 21:30 this will show using streamstats, but not necessarily with stats by 1h buckets. You could do something like this:

index=mydata
| streamstats time_window=1h count as userCount by movieId
| streamstats time_window=1h count as totalCount
| eval percent=round(userCount/totalCount*100, 2)
| timechart span=1h max(percent) as maxPercent by movieId

which will show the max percent for all movies in 1h buckets, but with the time calculated as a sliding window. You can then test for thresholds or further manipulate your data - the timechart above is one way of looking at it, but you can do anything from there.
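As a rough sketch of that threshold test, inserted after the eval percent=... line and before the timechart (the 50% value is an arbitrary placeholder, not something from the question):

| where percent > 50 ``` keep only windows where one movie exceeds half of all viewing in the sliding hour ```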
The search produces a table with 4 rows and 2 columns, with URI and p95 fields, so after the stats add this:

| transpose header_field=URI
| where URI4>2
Hi Team, how do I integrate Proficio with Splunk?
Thanks @bowesmana, you're a legend!
In what way is it not working? You are setting key_field to the key from the original record - which is what you would do if you are trying to update an existing row in the table, but you actually want to append a new row. Remove the key_field=key, but keep the append=true  
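As a rough sketch of what that looks like (the lookup name and fields below are placeholders, not from your search) - with append=true and no key_field, outputlookup generates a fresh _key and adds the row instead of updating an existing one:

| makeresults
| eval user="new_user", status="active" ``` placeholder fields for the new row ```
| table user status
| outputlookup append=true my_kvstore_lookup ``` no key_field, so a new row is appended rather than an existing one overwritten ```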
Hello Splunk community, I am new as a Splunk administrator here in the company, and a few days ago I received the requirement to upgrade the Splunk version. We have Splunk 8.2.6 and the minimum version required is 8.2.12. I'm not sure how big the risk is in the upgrade process, as we need to be sure the information in the indexers is going to be safe and Splunk must remain operational. I have read some of the documentation on upgrading to version 9.0.6, but as I said, I am not sure of the best option with the minimum risk. Do you have any advice? Thank you!
You are collecting from the same index, so just put all 3 counts in the same mstats:

| mstats sum(vault.token.creation.nonprod) as count_nonprod sum(vault.token.creation.dev) as count_dev sum(vault.token.creation.nonprod_preprod) as count_nonprod_preprod where index=vault_metrics span=1h
| addtotals
| timechart sum(Total) as Total span=1h
| fillnull value=0
| eventstats perc90(Total) as p90_Total perc50(Total) as p50_Total

The addtotals gives you a sum of all the count_* fields in a single new field Total, so then just use that new field Total to calculate the percentiles.
The simple way is to first extract the numeric value and the time specifier, then use eval to do the test/calc:

| rex field=elapsedTime "(?<timeValue>[\d\.]*)(?<timeSpecifier>(ms|s))"
| eval elapsedTime=if(timeSpecifier="ms", timeValue, timeValue*1000)
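A quick way to try it on a sample value (makeresults and the "250ms" literal are just for testing, not part of your data):

| makeresults
| eval elapsedTime="250ms" ``` try 1.5s as well to see the seconds-to-milliseconds conversion ```
| rex field=elapsedTime "(?<timeValue>[\d\.]*)(?<timeSpecifier>(ms|s))"
| eval elapsedTime=if(timeSpecifier="ms", timeValue, timeValue*1000)

This assumes the intent is to normalise everything to milliseconds.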
The basic query is something like this, but it will depend on your fields:

index=your_source_index status>=400 status<600
| stats count by ip status

You will then get a table of ip + status + count. You can do whatever you want with that - what's your goal?
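For example, a couple of hedged follow-on steps (the 100 threshold is an arbitrary placeholder):

index=your_source_index status>=400 status<600
| stats count by ip status
| sort - count ``` noisiest ip/status pairs first ```
| where count > 100 ``` placeholder threshold for clients generating a lot of errors ```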
Actually it's not so hard to achieve. Here is another example, where I have added another Day 0 and some event dates inside and outside any day.

| makeresults
| eval _raw="DATE,Start_Time,End_Time
Day_3,2023-09-12 01:12:12.003,2023-09-13 01:13:13.993
Day_2,2023-09-11 01:11:11.002,2023-09-12 01:12:12.992
Day_1,2023-09-10 01:10:10.001,2023-09-11 01:11:11.991
Day_0,2023-09-04 01:12:12.000,2023-09-06 17:22:13.990"
| multikv forceheader=1
| table DATE Start_Time End_Time
| eval _time = strptime(Start_Time, "%F %T.%Q")
| eval end = strptime(End_Time, "%F %T.%Q"), start=_time
| append [
  | makeresults
  | eval _raw="Event type,Time,Others
EventID2,2023-09-11 01:20:20.133, ``` INSIDE DAY 2 ```
EventID1,2023-09-11 01:11:11.132, ``` INSIDE DAY 2 ```
EventID9,2023-09-10 01:20:30.131, ``` INSIDE DAY 1 ```
EventID3,2023-09-10 01:20:10.130, ``` INSIDE DAY 1 ```
EventID5,2023-09-10 01:10:20.129, ``` INSIDE DAY 1 ```
EventID1,2023-09-10 01:10:10.128, ``` INSIDE DAY 1 ```
EventID4,2023-09-07 01:10:10.127, ``` OUTSIDE ANY ```
EventID3,2023-09-06 06:10:10.126, ``` INSIDE DAY 0 ```
EventID2,2023-09-05 19:10:10.125, ``` INSIDE DAY 0 ```
EventID1,2023-09-04 04:10:10.124, ``` INSIDE DAY 0 ```
EventID0,2023-09-04 01:10:10.123," ``` OUTSIDE ANY ```
  | multikv forceheader=1
  | table Event_type Time
  | eval _time = strptime(Time, "%F %T.%Q")
  | eval eventTime=_time
  | fields - Time ]
| sort _time
| filldown DATE start end
| eval eventIsInside=case(isnull(Event_type), "YES", isnotnull(Event_type) AND _time>=start AND _time<=end, "YES", 1==1, "NO")
| where eventIsInside="YES"
| stats values(*_Time) as *_Time list(Event_type) as eventIDs list(eventTime) as eventTimes by DATE
| eval eventTimes=strftime(eventTimes, "%F %T.%Q")
| table DATE Start_Time End_Time eventIDs eventTimes

You can see that this works by making a common time, which is based on either start time or event time, and then sorting by time. Setting start and end epoch times for the source 1 data means you can then 'filldown' those fields to subsequent event (source 2) rows until the next source 1 Day. Then, as each source 2 event now has the preceding day's start/end time, it can make the comparison for its own time.

Hope this helps.