All Posts


Keep your original text boxes so that the user can enter the IP address (or range), but also add either a checkbox for the equal/not-equal decision or a pair of radio buttons, and use the token from this choice to modify your search.
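A minimal Simple XML sketch of that idea (the token names dest_ip_op and dest_ip are made up for illustration; adapt them to your dashboard):

```xml
<!-- Radio buttons set a token holding the comparison operator -->
<input type="radio" token="dest_ip_op">
  <label>DEST_IP comparison</label>
  <choice value="=">EQUAL</choice>
  <choice value="!=">NOT EQUAL</choice>
  <default>=</default>
</input>
<!-- Keep the original free-text box for the address -->
<input type="text" token="dest_ip">
  <label>DEST_IP</label>
</input>
```

The panel search can then reference both tokens, e.g. DEST_IP$dest_ip_op$$dest_ip$. Note that an exact =/!= comparison will not match a CIDR range; for ranges you would likely need something like cidrmatch() in a where clause instead.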
Another possibility is to use the sed mode of the rex command to replace the id part with a fixed value. This relies on the id being formatted in an identifiable pattern. You may need to work with your application designers to ensure that all ids follow a particular pattern or patterns; otherwise, you may end up needing more rex commands to handle the different formats of ids.
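A minimal sketch of that approach, assuming the ids are hex/UUID-style strings (the pattern here is an assumption; adjust it to your actual id format):

```
| rex mode=sed field=_raw "s/id=[0-9a-fA-F-]+/id={id}/g"
```

The g flag replaces every occurrence in the event, so repeated ids on one line are all masked in a single pass.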
Hi, new to Splunk On-Call. I have set up a new Team with 3 members, and I've created a rotation and shift with all three as members. I'm stuck on the best way to set up the Escalation Policy. I want it to call the initial person on call and then contact the other two in turn if they don't respond, e.g.

Contact Member 1
Wait 10 mins
Contact Member 2
Wait 10 mins
Contact Member 3

The way I have it at the moment is three steps in the Escalation Policy:

Step 1 - Immediate - Notify the On-Duty user(s) in rotation
Step 2 - Wait 10 mins - Notify the next user(s) in the current on-duty shift
Step 3 - Wait 20 mins - Notify the next user(s) in the current on-duty shift

Is this the best way to do it? The text "Notify the On-Duty user(s) in rotation" has confused me, as it suggests that it should call multiple members in a rotation, but I can't find anything that describes how it calls more than the initial on-call person.
@ITWhisperer Based on the response I changed my query to the below.

index=stuff "kubernetes.labels.app"="some-stuff"
| search "log.msg"="Response" "log.level"=30 "log.response.statusCode"=200
| spath "log.request.path"
| rename "log.request.path" as url
| eval url=if(mvindex(split(url,"/"),4)="namespace","/attribute/namespace/{id}",url)
| eval url=if(mvindex(split(url,"/"),2)="schema","/spec-api/schema/{id}",url)
| convert timeformat="%Y/%m/%d" ctime(_time) as date
| stats min("log.context.duration") as RT_fastest max("log.context.duration") as RT_slowest p95("log.context.duration") as RT_p95 p99("log.context.duration") as RT_p99 avg("log.context.duration") as RT_avg count(url) as Total_Req by url
| sort Total_Req desc

As you can see, I had to write the eval twice for two different endpoints. But as my application grows, there may be more APIs (endpoints) with the same patterns, and I would have to write an eval for each one of them. So I was wondering whether there is a more generic way to group these types of APIs into one, rather than writing the eval again and again. I looked into the "cluster" command, but was not able to get anything out of it.
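One possibility (a hedged sketch, not tested against your data) is to collapse the repeated evals into a single replace() call that masks the path segment following any known resource name, using a regex backreference:

```
| eval url=replace(url, "/(namespace|schema)/[^/]+", "/\1/{id}")
```

You still list the resource names, but only once in the alternation. If every id matches a recognizable pattern on its own (e.g. all digits, or a UUID), you could instead match that pattern directly and drop the name list entirely.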
Just append a table command listing the fields in the order you want them.
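For example (field names here are illustrative, matching the CurrentWeek columns discussed in this thread; quote any name containing a hyphen):

```
| table field1 field2 field3 Day Time "CurrentWeek-4" "CurrentWeek-3" "CurrentWeek-2" "CurrentWeek-1" CurrentWeek Deviation
```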
@ITWhisperer: Thanks, it worked. You are the best! Just a small correction related to the order of columns: is it possible to have the CurrentWeek-4 column first, then CurrentWeek-3, then CurrentWeek-2, then CurrentWeek-1, and CurrentWeek at the end before Deviation?
Hi, I want to ask the community how you do health checks of servers after patching. Is there any automation you have built to identify whether server health is good after a patching activity, for multiple servers in one shot? Are you using any tool, any query you have built, or a dashboard where you enter the server details and get stats?
| stats count as Total by field1 field2 field3 Day Time Week
| eventstats max(Week) as ThisWeek
| eval Week=if(Week=ThisWeek,"CurrentWeek","CurrentWeek".(Week-ThisWeek))
| eval {Week} = Total
| stats values(Current*) as Current* by field1 field2 field3 Day Time
| fillnull value=0
| eval Deviation=2*CurrentWeek/('CurrentWeek-2'+'CurrentWeek-1')
Hi Oscar, wanted to check: does "Health Rule Name: ${event.healthRule.name}" work with the HTTP template also?
Hi @ITWhisperer, can you please let me know how I can correct the below stats command to re-evaluate Week after the stats command to be current week, current week -1, and current week -2?

| stats count as Total by field1 field2 field3 Day Time Week
| eval Week_{Week} = Total
| stats values(Week_*) as Week_* by field1 field2 field3 Day Time
| fillnull value=0
| eval Deviation=2*Week_41/(Week_39+Week_40)
Hello, I have a dashboard that shows network traffic based on 4 simple text boxes for the user to input: SRC_IP, SRC_PORT, DEST_IP, DEST_PORT. How can we create a filter with "EQUAL" and "NOT EQUAL TO" options for the DEST_IP input box? The requirement is that the end user should be able to select "NOT EQUAL TO", enter an IP address or range to exclude in the input box, and have the panels display the corresponding data. For example, if they want to exclude all private IPs (10.x.x.x) from DEST_IP, they need to be able to select "NOT EQUAL TO" and enter "10.0.0.0/8". I hope that is clear. I tried creating a MULTISELECT input box, but a MULTISELECT box does not let a user enter/type arbitrary data to filter on manually. Any assistance will be highly appreciated.
Assuming your lookup file containing the user ids has the column name "Account_Name", which matches the field name in the Windows events, you can do something like this:

index=wineventlog sourcetype=wineventlog EventCode=4624 [| inputlookup my_lookup_file.csv | fields Account_Name]
| stats ...

I verified it; it works in my environment. Just make sure the column name / field name in the lookup is correct, based on what you want to filter on.

PS: Hit "Mark as Answer" if this solves your query.
I managed to achieve the same outcome with an alert in Splunk Cloud like this:

index=my_idx path="/api/*/endpointPath" status=500
| rex field=path "/api/(?<userId>.*)/endpointPath"
| fields userId
| stats count by userId
| eventstats sum(count) as totalCount
| eval percentage=(count/totalCount)
| where percentage>0.05
| sort -count
index=_internal source=metrics.log* group=tcpin_connections hostname=<the UF or the server you're checking> should work
Hi, I'm pretty new to Splunk and I have a simple question that maybe one of you could help me figure out. I have a search that I'm using to find the latest login events for a specific set of users. The problem is that there are about 130 users. I tried specifying the users in the search using (Account_Name=user1 OR Account_Name=user2 OR Account_Name=user3 ...). I tried entering all 130 but it didn't work; I noticed there was a limit after some point, and beyond it I'd stop receiving results. So I did some research and saw people mention lookup files. I created a CSV file with the list of users that I'd like to run a report on. How can I join the lookup file to the query so that I'm only matching the values from the "UserID" field in my lookup table against the "Account_Name" field that comes with the Windows event logs I'm using to build the query? So far this is my query; how could I use the lookup to filter to only those 130 users?

index=wineventlog sourcetype=wineventlog EventCode=4624 Account_Name!=*$
| stats latest(_time) as last_login_time by Account_Name
| convert ctime(last_login_time) as "Last Login Time"
| rename Account_Name as "User"
| sort - last_login_time
| table User "Last Login Time"
Hey @abow, I don't think that can work.
Hi, I'm exploring a way to get search results listing the names of indexes, who created those indexes, and the creation dates. So far I have the DDAS Retention Days, DDAS Index Size, DDAA Retention Days, and DDAA Usage, along with the earliest and latest event dates. I'm trying to get the owner of each index but am not getting the desired results. The search query I've been using is given below:

| rest splunk_server=local /servicesNS/-/-/data/indexes
| rename title as indexName, owner as creator
| append [ search index=summary source="splunk-storage-detail" (host="*.personalsplunktesting.*" OR host=*.splunk*.*)
    | fillnull rawSizeGB value=0
    | eval rawSizeGB=round(rawSizeBytes/1024/1024/1024,2)
    | rename idxName as indexName ]
| append [ search index=summary source="splunk-ddaa-detail" (host="*.personalsplunktesting.*" OR host=*.splunk*.*)
    | eval archiveUsage=round(archiveUsage,2)
    | rename idxName as indexName ]
| stats latest(retentionDays) as "Searchable Storage (DDAS) Retention Days", latest(rawSizeGB) as "Searchable Storage (DDAS) Index Size GB", max(archiver.coldStorageRetentionPeriod) as "Archive Storage (DDAA) Retention Days", latest(archiveUsage) as "Archive Storage (DDAA) Usage GB", latest(ninetyDayArchived) as "Archived GB Last 90 Days", latest(ninetyDayExpired) as "Expired GB Last 90 Days" by indexName
| append [ | tstats earliest(_time) as earliestTime latest(_time) as latestTime where index=* by index
    | eval earliest_event=strftime(earliestTime, "%Y-%m-%d %H:%M:%S"), latest_event=strftime(latestTime, "%Y-%m-%d %H:%M:%S")
    | rename index as indexName
    | fields indexName earliest_event latest_event ]
| stats values("Searchable Storage (DDAS) Retention Days") as "Searchable Storage (DDAS) Retention Days", values("Searchable Storage (DDAS) Index Size GB") as "Searchable Storage (DDAS) Index Size GB", values("Archive Storage (DDAA) Retention Days") as "Archive Storage (DDAA) Retention Days", values("Archive Storage (DDAA) Usage GB") as "Archive Storage (DDAA) Usage GB", values(earliest_event) as "Earliest Event", values(latest_event) as "Latest Event", values(creator) as "Creator" by indexName

Please can anyone help me with this? Thanks in advance!
Macros are expanded before the resultant SPL is parsed and executed, which probably means that macros stored in a lookup are not expanded.
If the average is in a field, it should be available for you to use as an overlay. Which version of Splunk are you using? (Dashboard Studio is still under development so some features may be working ... See more...
If the average is in a field, it should be available for you to use as an overlay. Which version of Splunk are you using? (Dashboard Studio is still under development, so some features may not be working in earlier versions. Make sure you are on the latest version.)
Here is a sample of the data posted to the TCP connection:

{
  "time": 1728428019,
  "host": "x.x.x.x",
  "fields": {
    "metric_name:x.x.x.x.ds.bIn": 1111,
    "metric_name:x.x.x.x.ds.bOut": 2222
  }
}