This location means the lookup is private (/opt/splunk/etc/users/[myuserID]/testapp/lookups/test.csv), which is the default state when you upload a lookup. You should change the permission to app before you run the outputlookup.
Hi @elizabethl_splu, is this feature now available in Dashboard Studio?
I tried it too, but it's not working. Should I enable anything or add any property while creating the lookup file?
Hello, I added a new lookup by uploading the CSV file via Lookups » Lookup table files » Add new. The CSV file was uploaded to this directory, and I can change its permission: /opt/splunk/etc/users/[myuserID]/testapp/lookups/test.csv. When I used outputlookup, it wrote the same test.csv file into the different directory below, and I cannot change its permission: /opt/splunk/etc/apps/testapp/lookups/test.csv. Please suggest. Thank you
Thanks @bowesmana, this is closer to what I'm trying to achieve and has given me some idea of how to work on Splunk searches of this complexity.
If this is a known, consistent csv you are going to create, then create a new lookup of that name and upload a dummy csv. You can then define the permissions on that csv, and outputlookup will not change those permissions.
So rather than use top, which is somewhat useful, it is generally easier to get results using stats:

index=mydata
| bin _time span=1h
| stats count by _time movieId
| eventstats sum(count) as total by _time
| eval percent=round(count/total*100, 2)

This search will give you 1h buckets with counts by movieId; the eventstats will then calculate the total per hour of all movies, and the percent calc will get the percentage of each movie within that hour.

However, although efficient, I expect what you may want is a streamstats variant, which will give you a sliding 60-minute window. So if your peak for a movie is from 20:30 to 21:30, this will show using streamstats, but not necessarily with stats by 1h buckets. You could do something like this:

index=mydata
| streamstats time_window=1h count as userCount by movieId
| streamstats time_window=1h count as totalCount
| eval percent=round(userCount/totalCount*100, 2)
| timechart span=1h max(percent) as maxPercent by movieId

which will show the max percent for all movies in 1h buckets, but with the time calculated as a sliding window. You can then test for thresholds or further manipulate your data - the timechart above is one way of looking at it, but you can do anything from there.
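The sliding-window behaviour of streamstats can be hard to visualise, so here is a minimal Python sketch of the same idea (illustrative only - the timestamps and movie IDs are made-up sample data, and Splunk's own windowing is what you would actually rely on):

```python
from collections import deque

def sliding_percent(events, window=3600):
    """events: iterable of (epoch_seconds, movie_id).
    For each event, report what percentage of all events seen in the
    trailing `window` seconds belong to that event's movie."""
    recent = deque()          # (time, movie_id) pairs still inside the window
    results = []
    for t, movie in sorted(events):
        recent.append((t, movie))
        # drop anything older than the sliding window, like time_window=1h
        while recent[0][0] < t - window:
            recent.popleft()
        movie_count = sum(1 for _, m in recent if m == movie)
        results.append((t, movie, round(movie_count / len(recent) * 100, 2)))
    return results
```

With events at 0s, 600s and 1200s all inside one window, the last event's movie holds 2 of the 3 recent events, i.e. 66.67% - the same per-event percentage the two streamstats calls compute.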
The search produces a table with 4 rows and 2 columns (URI and p95 fields), so after the stats, do this:

| transpose header_field=URI
| where URI4>2
Hi Team, how do I integrate Proficio with Splunk?
Thanks @bowesmana , you're a legend!
In what way is it not working? You are setting key_field to the key from the original record - which is what you would do if you are trying to update an existing row in the table, but you actually want to append a new row. Remove the key_field=key, but keep the append=true  
Hello Splunk community, I am new as a Splunk administrator here in the company, and a few days ago I received the requirement to upgrade the Splunk version. We have Splunk 8.2.6 and the minimum version required is 8.2.12. I'm not sure how big the risk in the upgrade process is, as we need to be sure the information in the indexers is going to be safe and Splunk must remain operational. I have read some of the documentation on upgrading to version 9.0.6, but as I said, I am not sure of the best option with the minimum risk. Do you have any advice? Thank you!
You are collecting from the same index, so just put all 3 counts in the same mstats:

| mstats sum(vault.token.creation.nonprod) as count_nonprod sum(vault.token.creation.dev) as count_dev sum(vault.token.creation.nonprod_preprod) as count_nonprod_preprod where index=vault_metrics span=1h
| addtotals
| timechart sum(Total) as Total span=1h
| fillnull value=0
| eventstats perc90(Total) as p90_Total perc50(Total) as p50_Total

The addtotals gives you a sum of all the count_* fields in a single new field Total, so then just use that new field Total to calculate the percentiles.
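To see what addtotals plus a percentile over the totals is doing, here is a small Python sketch (the hourly numbers are hypothetical sample data, not real vault metrics):

```python
import statistics

# hypothetical hourly rows, one dict per 1h bucket (like the mstats output)
hourly = [
    {"count_nonprod": 4, "count_dev": 1, "count_nonprod_preprod": 0},
    {"count_nonprod": 2, "count_dev": 3, "count_nonprod_preprod": 5},
    {"count_nonprod": 0, "count_dev": 0, "count_nonprod_preprod": 2},
]
# addtotals: sum every count_* field into one Total per row
totals = [sum(v for k, v in row.items() if k.startswith("count_"))
          for row in hourly]
# eventstats perc50: a single median computed over all the hourly totals
p50_total = statistics.median(totals)
```

So each row first collapses to one Total, and only then is the percentile taken across rows - which is why the order addtotals-then-eventstats matters.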
Simple way is to first extract the numeric value and the time specifier, then use eval to do the test/calc:

| rex field=elapsedTime "(?<timeValue>[\d\.]*)(?<timeSpecifier>(ms|s))"
| eval elapsedTime=if(timeSpecifier="ms", timeValue, timeValue*1000)
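The same extract-then-convert logic can be sketched in Python, if you want to sanity-check it outside Splunk (the function name and sample strings are just illustrations; the SPL above is the actual answer):

```python
import re

def to_millis(elapsed):
    """Normalize a string like '250ms' or '1.5s' to milliseconds."""
    m = re.fullmatch(r"([\d.]+)(ms|s)", elapsed)
    value, unit = float(m.group(1)), m.group(2)
    # ms values pass through unchanged; seconds are scaled up by 1000,
    # mirroring the if(timeSpecifier="ms", ...) eval
    return value if unit == "ms" else value * 1000
```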
Basic query is something like this, but it will depend on your fields:

index=your_source_index status>=400 status<600
| stats count by ip status

You will then get a table of ip+status+count; you can do whatever you want with that - what's your goal?
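If it helps to see the shape of the result, this is the same filter-and-count in a few lines of Python (the IPs and statuses are made-up sample rows):

```python
from collections import Counter

# hypothetical parsed access-log rows: (client_ip, http_status)
rows = [
    ("10.0.0.1", 404),
    ("10.0.0.1", 500),
    ("10.0.0.2", 200),
    ("10.0.0.1", 404),
]
# keep only 4xx/5xx and count per (ip, status), like `stats count by ip status`
error_counts = Counter((ip, status) for ip, status in rows
                       if 400 <= status < 600)
```

Each (ip, status) key with its count corresponds to one row of the stats table.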
Actually it's not so hard to achieve. Here is another example, where I have added another Day 0 and some event dates inside and outside any day.

| makeresults
| eval _raw="DATE,Start_Time,End_Time
Day_3,2023-09-12 01:12:12.003,2023-09-13 01:13:13.993
Day_2,2023-09-11 01:11:11.002,2023-09-12 01:12:12.992
Day_1,2023-09-10 01:10:10.001,2023-09-11 01:11:11.991
Day_0,2023-09-04 01:12:12.000,2023-09-06 17:22:13.990"
| multikv forceheader=1
| table DATE Start_Time End_Time
| eval _time = strptime(Start_Time, "%F %T.%Q")
| eval end = strptime(End_Time, "%F %T.%Q"), start=_time
| append [
  | makeresults
  | eval _raw="Event type,Time,Others
EventID2,2023-09-11 01:20:20.133, ``` INSIDE DAY 2 ```
EventID1,2023-09-11 01:11:11.132, ``` INSIDE DAY 2 ```
EventID9,2023-09-10 01:20:30.131, ``` INSIDE DAY 1 ```
EventID3,2023-09-10 01:20:10.130, ``` INSIDE DAY 1 ```
EventID5,2023-09-10 01:10:20.129, ``` INSIDE DAY 1 ```
EventID1,2023-09-10 01:10:10.128, ``` INSIDE DAY 1 ```
EventID4,2023-09-07 01:10:10.127, ``` OUTSIDE ANY ```
EventID3,2023-09-06 06:10:10.126, ``` INSIDE DAY 0 ```
EventID2,2023-09-05 19:10:10.125, ``` INSIDE DAY 0 ```
EventID1,2023-09-04 04:10:10.124, ``` INSIDE DAY 0 ```
EventID0,2023-09-04 01:10:10.123," ``` OUTSIDE ANY ```
  | multikv forceheader=1
  | table Event_type Time
  | eval _time = strptime(Time, "%F %T.%Q")
  | eval eventTime=_time
  | fields - Time ]
| sort _time
| filldown DATE start end
| eval eventIsInside=case(isnull(Event_type), "YES", isnotnull(Event_type) AND _time>=start AND _time<=end, "YES", 1==1, "NO")
| where eventIsInside="YES"
| stats values(*_Time) as *_Time list(Event_type) as eventIDs list(eventTime) as eventTimes by DATE
| eval eventTimes=strftime(eventTimes, "%F %T.%Q")
| table DATE Start_Time End_Time eventIDs eventTimes

You can see that this works by making a common time, which is based on either start time or event time, and then sorting by time.
Setting start and end epoch times for the source 1 data means you can then 'filldown' those fields to subsequent event (source 2) rows until the next source 1 Day. Then, as each source 2 event now has the preceding day's start/end time, it can make the comparison for its own time. Hope this helps.
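The sort + filldown + range-test pattern can be sketched in plain Python too, which may make the mechanism clearer (function name, day names and epochs here are all hypothetical illustrations):

```python
def match_events_to_days(days, events):
    """days: (name, start_epoch, end_epoch) tuples; events: (event_id, epoch).
    Mimics the SPL: merge both sources, sort by time, carry the most recent
    day forward (filldown), then test each event against that day's range."""
    rows = sorted(
        [(start, "day", (name, start, end)) for name, start, end in days]
        + [(t, "event", (eid, t)) for eid, t in events]
    )
    current_day = None  # filldown: the last-seen day row
    matched = []
    for _, kind, payload in rows:
        if kind == "day":
            current_day = payload
        elif current_day and current_day[1] <= payload[1] <= current_day[2]:
            matched.append((current_day[0], payload[0]))
    return matched
```

Events that fall between days (like the OUTSIDE ANY rows above) inherit the preceding day via filldown but fail the range test, so they are dropped.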
Hello Splunkers, Can someone help me with a query to detect multiple http errors from single IP , basically when the status code is in 400s/500s. Thank you, regards, Moh
Finding this much later and love it. I made some slight edits to include calculated fields (the mvfilter NOT match is for the sub-model names that start with capital letters and the is_/is_not_ stuff for each sub-model):

| datamodel
| rex field=_raw "\"modelName\"\s*\:\s*\"(?<modelName>[^\"]+)\""
| spath output=fieldList objects{}.calculations{}.outputFields{}.displayName
| spath output=fieldList2 objects{}.fields{}.displayName
| eval fieldList = mvappend(fieldList,fieldList2)
| where modelName!="Splunk_CIM_Validation"
| table modelName fieldList
| eval fieldList = mvdedup(mvfilter(NOT match(fieldList,"is_.*|^[A-Z]")))

The check index/sourcetype is a handy addition. I also highly recommend Outpost's Data Model Mechanic for troubleshooting DMs.
I don't have access to the Splunk servers; these are managed by a central team. Are these logs available to search within Splunk? If yes, how can I search for them?
Both "Once" and "For each result" behave the same way for me. In both cases, I got the alert with only one event from the results. I am assuming PagerDuty doesn't support multiple results.