All Topics


Hello, I have a live database feed through DB Connect. The feed contains incident data for different teams, and _time is set to last_updated. I am trying to find the count of different incident statuses by team. I am running the search below (with the time-picker set to last 6 months), but it is not showing the correct numbers:

index=idx_1 source="idx_src_1" sourcetype="idx_srctype_1"
| rename COMMENT as "sorting data in descending order and removing duplicates by keeping the latest record for each incident id"
| sort -lastUpdate
| dedup incID
| rename COMMENT as "to get the count of incidents for each team by incident status"
| chart count by incStatus,teamName

But if I specify a team name in the search, it gives the correct numbers:

index=idx_1 source="idx_src_1" sourcetype="idx_srctype_1" teamName="Team1"
| rename COMMENT as "sorting data in descending order and removing duplicates by keeping the latest record for each incident id"
| sort -lastUpdate
| dedup incID
| chart count by incStatus,teamName

Can someone please suggest how to resolve this? Thank you. Madhav
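One thing worth checking: without the team filter, dedup incID keeps a single latest record per incident across all teams, so an incident whose latest update belongs to another team gets counted under that team instead. Note also that sort truncates to 10,000 results by default (sort 0 -lastUpdate removes the limit), which can silently drop events over a 6-month window. A minimal sketch of an equivalent search using stats instead of the sort/dedup pair (same field names as above; stats latest() picks by _time, which here is last_updated):

```spl
index=idx_1 source="idx_src_1" sourcetype="idx_srctype_1"
| stats latest(incStatus) as incStatus latest(teamName) as teamName by incID
| chart count by incStatus,teamName
```

If the per-team numbers still differ from the filtered search, comparing the two result sets for a single incID usually shows which record dedup was keeping.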
Hi, I'm running Splunk Free and have a data source which has events in the last 24 hours. When I run a search for All Time, events are shown in the index, but when I search for Yesterday I get no results. The only other thing to note is that I only just created the index the data is in, because I am experimenting with a new data source; not sure if this affects anything. Does anyone have an explanation for this?
Hi Team, I am trying to extract the value after a particular string format, and it's not getting the right value. The value I want is after the string "COUNT(1)=" in events like this:

{ORD_CRTN_DTE=XXXXXXXX, COUNTRY=XXXXXX, AGENCY=XXXXXXX, PGM_CDE=XXXXXXX, COUNT(1)=1}]

I am using the rex below, and it is not giving the value of the count, i.e. 1:

rex "COUNT(1)=(?<Count>\d+)"
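In a regular expression, the parentheses in COUNT(1) are metacharacters (they open a capture group), so the pattern above never matches the literal text. Escaping them should be enough; a minimal sketch:

```spl
| rex "COUNT\(1\)=(?<Count>\d+)"
```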
I am doing index-time field extraction for structured files. The files are pipe-delimited. I am using the following sourcetype configuration:

[header]
INDEXED_EXTRACTIONS = psv
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
FIELD_DELIMITER = |
NO_BINARY_CHECK = true
category = Structured
pulldown_type = 1
disabled = false

It works for all events except those where a field contains a special character like \". Here blnumber should be \"SCT 33447 3344733276\", but it picks up additional text from another field and the "|" delimiter is not honored. I have tried masking at index time with

SEDCMD-mask_sc_raw = s/\\"/'/g

and also tried a transform:

[mask_sc_meta]
SOURCE_KEY = _meta
DEST_KEY = _meta
REGEX = (.*)\\"(.*)
FORMAT = $1'$2
WRITE_META = false

But the extraction still has this issue.
I have a dashboard with several inputs; one of them is a DateTime picker. On open, as well as whenever a new date range is chosen, I want to fill a token value (a token from a drilldown) with a set of values (for example, comma-separated) retrieved from a separate query: a query not bound to any of the dashboard panels or the base search. Something like this:

<form>
  <fieldset submitButton="false">
    <input type="time" token="dateFilter">
      <label>By time</label>
      <default>
        <earliest>-7d@w0</earliest>
        <latest>@w0</latest>
      </default>
      <change>
        <set token="filial">
          <query>
            <search>
              index=fed-trash source=spok |stats values(FILIAL_ID) |eval f = mvjoin(FILIAL_ID, ",")
            </search>
          </query>
        </set>
      </change>
    </input>
  </fieldset>

This is wrong of course, but I hope it explains what I want. I want the result produced by the query above:

FILIAL_ID
1111
2222
3333

to be transformed into the string 1111,2222,3333 and set as if I had written:

<change>
  <set token="filial">1111,2222,3333</set>
</change>

since I use the token $filial$ further in my dashboard query in a WHERE clause like this:

index=... source=... |where FILIAL_ID IN ($filial$)
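In Simple XML the <change> handler can only set literal values or other tokens; it cannot run a search. What can do this is an independent <search> element at form level with a <done> handler that copies a result field into the token, re-running whenever its time tokens change. A hedged sketch (token and index names taken from the snippet in the question):

```xml
<form>
  <!-- independent search: not attached to any panel, re-runs when the time input changes -->
  <search id="filial_list">
    <query>index=fed-trash source=spok | stats values(FILIAL_ID) as FILIAL_ID | eval f=mvjoin(FILIAL_ID, ",")</query>
    <earliest>$dateFilter.earliest$</earliest>
    <latest>$dateFilter.latest$</latest>
    <done>
      <set token="filial">$result.f$</set>
    </done>
  </search>
  <!-- fieldset with the time input and panels go here, as in the question -->
</form>
```

$result.f$ refers to the field f in the first result row of the finished search, which is exactly the comma-joined string built by mvjoin.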
I want a distinct count for a given field by day, but this count also needs to look at all previous days in the given time range to get the remaining distinct count. For example:

When I search for all events on Monday the result is: a,b,c,d,c,z
When I search for all events on Tuesday the result is: a,b,c,d,e,f,a,b
When I search for all events on Wednesday the result is: a,b,c,d,e,f,z

The distinct count for Monday is 5, for Tuesday it is 6, and for Wednesday it is 7. The remaining distinct count for Tuesday would be 2, since a,b,c,d have all already appeared on Monday, and the remaining distinct count for Wednesday would be 0, since all its values have already appeared on Monday or Tuesday. How would I be able to do this? Thanks
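One way to get the per-day count of never-seen-before values is to group each value by the day on which it first appears. A sketch, assuming the field is called x (substitute your field name and base search):

```spl
index=your_index
| stats earliest(_time) as first_seen by x
| bin first_seen span=1d
| stats count as new_distinct by first_seen
```

Days on which no new value appears simply produce no row (the Wednesday = 0 case), which can be filled in with timechart-style zero-filling if needed.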
I am trying to create a passive DNS collection based on Splunk Stream data. My current SPL is this:

index=botsv2 sourcetype=stream:dns query=*frothly.local
| stats earliest(_time) as firstSeen, latest(_time) as lastSeen by query
| eval firstSeen=strftime(firstSeen, "%m/%d/%Y %H:%M:%S")
| eval lastSeen=strftime(lastSeen, "%m/%d/%Y %H:%M:%S")

What I am trying to accomplish is to show, as "activeDays", on how many days a query was made. Even if its first and last occurrences are 30 days apart, it may have only been queried a couple of times. Also, the Stream queries and answers are separate events; how would I join them to produce a result by query and answers?
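For activeDays, counting the distinct calendar days on which each query appears avoids relying on the first/last spread. A sketch on top of the same base search:

```spl
index=botsv2 sourcetype=stream:dns query=*frothly.local
| eval day=strftime(_time, "%Y-%m-%d")
| stats earliest(_time) as firstSeen latest(_time) as lastSeen dc(day) as activeDays by query
| eval firstSeen=strftime(firstSeen, "%m/%d/%Y %H:%M:%S"), lastSeen=strftime(lastSeen, "%m/%d/%Y %H:%M:%S")
```

For the answers, if the answer field exists on the response events of the same sourcetype, adding a values() over that field to the same stats (rather than a join) keeps everything in one pass; the exact field name depends on your stream:dns field extractions.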
Hi All, can anyone suggest whether we can throttle a correlation search if a notable is already in an open state for the same grouping values? E.g., I have a notable that triggers when someone accesses the GoogleAPI Storage site; I do not want it to trigger again for the same IP until the first notable is resolved. Wouldn't that be a good thing to have, instead of multiple notables being triggered for similar occurrences until the root cause of the issue/threat is resolved?
Hi, I am using the JIRA add-on available on Splunkbase (app 1438) to ingest events from a JIRA Cloud instance into Splunk. However, it is ingesting only ~10% of the historical data from JIRA; new events seem to be ingested without any issues. I have used this add-on before for JIRA on-premise, where it worked fine, but on the Cloud instance it is creating problems. I tried using the JQL below in inputs.conf, but it ingested only 100 events out of 1000+:

created > "2019/01/01 00:00"
Greetings, I am new to Splunk but understand most of the concepts, since we use the product at work with various forwarders, etc. I am running Splunk server on a small mini Windows 10 system with Splunk Stream enabled. I am ingesting packets via a separate dedicated NIC I have set up to receive from a smart switch using port mirroring. I get plenty of useful data and Splunk Stream is great, but I would like to somehow resolve the inbound IP address to a host name. I only ever have one host name in the logs, obviously, for my Splunk host; many of the logs I drill into, and even the chart data, show only the src_ip for the host activity on my network. I'd appreciate any information and/or assistance to make this happen, if it's possible. Hopefully I am describing the situation clearly. Thanks!
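Splunk Enterprise ships with an external lookup named dnslookup that does DNS resolution at search time; a minimal sketch (the clientip/clienthost field names come from the stock lookup definition, and reverse lookups only resolve addresses that have PTR records in your DNS):

```spl
... | lookup dnslookup clientip as src_ip OUTPUT clienthost as src_host
```

For home networks without PTR records, a CSV lookup file mapping IP to host name, applied the same way with the lookup command, is a common alternative.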
Hello, is there any way for the IP address to be copied over to the top? The condition: whenever root's command doesn't have an IP and is followed by a standard user's command (bash with an IP), that IP should be root's too. SPL:

index=* (source="/var/log/secure" (TERM(sudo) AND (TERM(adduser) OR TERM(chown) OR TERM(userdel) OR TERM(chmod) OR TERM(usermod) OR TERM(useradd))) OR (TERM(sudo:) OR TERM(su:) AND("session opened for user root" OR COMMAND=/bin/bash)) OR (TERM(sshd) AND "Accepted password" [search index=* (source="/var/log/secure" (TERM(sudo) AND (TERM(adduser) OR TERM(chown) OR TERM(userdel) OR TERM(chmod) OR TERM(usermod) OR TERM(useradd)))) OR (source="/root/.bash_history" AND (TERM(adduser) OR TERM(chown) OR TERM(userdel) OR TERM(chmod) OR TERM(usermod) OR TERM(useradd))) | regex _raw!= ".*user NOT in sudoers.*" | stats earliest(_time) as E latest(_time) as latest | eval earliest = relative_time(E, "-24h@s") | fields earliest latest])) OR (source="/root/.bash_history" AND (TERM(adduser) OR TERM(chown) OR TERM(userdel) OR TERM(chmod) OR TERM(usermod) OR TERM(useradd)))
| eval Date = strftime(_time, "%Y-%d-%m")
| eval Time = if(source=="/root/.bash_history",strftime(_time, "%Y-%d-%m %H:%M:%S"), if(match(_raw,"(?<=sudo:)\s*[[:alnum:]]\S*[[:alnum:]]\s*(?=\:).*(?<=COMMAND\=)*") ,strftime(_time, "%Y-%d-%m %H:%M:%S"),null()))
| regex _raw!= ".*user NOT in sudoers.*"
| eval Users = "root"
| eval command = if(source=="/root/.bash_history",_raw,null())
| rex field=_raw "(?<=sudo:)\s*(?P<Users>[[:alnum:]]\S*[[:alnum:]])\s*(?=\:).*(?<=COMMAND\=)(?P<command>.*)"
| rex field=_raw "(?<=for)\s*(?P<Users>[[:alnum:]]\S*[[:alnum:]])\s*(?=from).*(?<=from)\s*(?P<ip>[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+)"
| eval "Command/Events" = replace(command,"^(\/bin\/|\/sbin\/)","")
| eval time_command = mvzip(Time,'Command/Events')
| stats values(time_command) as Time_Command latest(ip) as "IP Address" by Date Users index host
| mvexpand Time_Command
| makemv Time_Command delim=","
| eval Time=mvindex(Time_Command , 0)
| eval "Command/Events"=mvindex(Time_Command , 1)
| table Time Command/Events host Users "IP Address"
Hi, I have log files that are copied to the Splunk server every day with the structure below:

/data/appserver/ACC/20200617/log.customer1.20200617.bz2
/data/appserver/ACC/20200617/log.customer2.20200617.bz2
/data/appserver/ACC/20200617/log.cus1.20200617.bz2
/data/appserver/ACC/20200617/log.cus2.20200617.bz2
/data/appserver/ACC/20200617/log.cus3.20200617.bz2

Now I want Splunk to treat everything between the two dots in the file name as the host name, like this:

customer1
customer2
cus1
cus2
cus3

I tried to do this through the web UI with "Set host" = "5" under Settings > Data inputs > Files & Directories > /data/appserver/ACC/. Is this correct? I mean, will Splunk use the 5th segment of the path? Thanks
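Note that setting the host by segment counts whole path segments, so segment 5 of /data/appserver/ACC/20200617/log.customer1.20200617.bz2 is the entire file name (log.customer1.20200617.bz2), not the part between the dots. A hedged inputs.conf sketch using a host regex instead, where the first capture group becomes the host:

```ini
[monitor:///data/appserver/ACC]
host_regex = log\.([^.]+)\.\d{8}\.bz2
```

Against the paths above this captures customer1, customer2, cus1, etc.; the \d{8} assumes the date in the file name is always eight digits.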
I have a log like the one below, and I want to extract the fields "country" and "currency":

"{"id":1, "message":"country=US&currency=USD"}

I wrote this SPL:

index="main"
| rex max_match=0 field=message "(?<key>\w+)=(?<value>[^&]+)"
| eval z=mvzip(key, value, "~")
| mvexpand z
| rex field=z "(?<key>[^~]+)~(?<value>.*)"
| eval {key} = value

After extracting the fields, I can search based on only one field. This works:

index="main"| rex max_match=0 field=message "(?<key>\w+)=(?<value>[^&]+)" | eval z=mvzip(key, value, "~") | mvexpand z | rex field=z "(?<key>[^~]+)~(?<value>.*)" | eval {key} = value | search country=US

This does not work; it yields 0 results:

index="main"| rex max_match=0 field=message "(?<key>\w+)=(?<value>[^&]+)" | eval z=mvzip(key, value, "~") | mvexpand z | rex field=z "(?<key>[^~]+)~(?<value>.*)" | eval {key} = value | search country="US" AND currency="USD"

Any pointers, please?
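The reason the two-field search fails is that mvexpand puts every key/value pair on its own row, so after | eval {key} = value each result carries either country or currency, never both; country="US" AND currency="USD" can therefore never match a single row. A sketch that extracts both fields on the same event, with no mvexpand needed:

```spl
index="main"
| rex field=message "country=(?<country>[^&]+)"
| rex field=message "currency=(?<currency>[^&]+)"
| search country="US" AND currency="USD"
```

If the set of keys is open-ended, the extract command with pairdelim="&" and kvdelim="=" is another way to get all pairs onto one event.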
I have four different keywords (job success messages) and need to display the job name and status, but I am getting counts:

index=* cf_app_name="s*" OR cf_app_name=nd* ("All feed is completed" OR "XXX Success: XXX" OR "YYY Success: YYY" OR "Finished handshake success")
| eval searchString = case(like(_raw, "%All feed is completed%"), "First Job", like(_raw, "%XXX Success: XXX%"), "Second Job", like(_raw, "%YYY Success: YYY%"), "Third Job", like(_raw, "%Finished handshake success%"), "Fourth Job", 1==1, "Incorrect searchString match, please refactor")
| stats count by searchString _time

Actual result:

First Job    5
Second Job   7

Expected output:

First Job    Success
Second Job   Success
Third Job    Failure
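To show a Success/Failure status per job, including jobs that logged nothing at all, one approach is to append a zero-count row for every expected job and then aggregate. A sketch under that assumption (the job list is typed in by hand, matching the case() labels):

```spl
... | stats count by searchString
| append
    [| makeresults
     | eval searchString=split("First Job,Second Job,Third Job,Fourth Job", ",")
     | mvexpand searchString
     | eval count=0
     | fields searchString count]
| stats sum(count) as count by searchString
| eval status=if(count > 0, "Success", "Failure")
| table searchString status
```

Jobs that produced at least one success message get Success; jobs present only in the appended zero rows come out as Failure.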
I have multiple inputs (3 inputs) in a dashboard, and I run SQL in the panels. I want to execute a query only if the other two input values are null. Can you help me with the query?

|dbxquery connection="*" query="select * from usr where mID like 'id=$dn$%'"

The other values are $in_ID$ and $ex_ID$. I want to execute this query only if the values of $in_ID$ and $ex_ID$ are null.
| dbxquery connection="*" query="select STOREENT_ID,count(*) O_C from table1 "
| appendcols
    [| dbxquery connection="*" query="select count(*) P_S_T from table2 "
     | join
        [| dbxquery connection="*" query="select count(*) P_E_Y from table2"]
     | join
        [| dbxquery connection="*" query="select count(*) P_ACTIVE from table2 where status=1"]]

This is my sample query; I want all the results on a single row. The values before the appendcols print on one row, and the values after the appendcols are printed on a new row.
I am unable to log in to a recently created trial. From My Instances, I do not have the option to INVITE USERS. Also:

PRODUCT: Splunk>Cloud Trial
SIZES: 5GB
START DATE: Jun 18, 2020
EXPIRATION DATE: Jul 03, 2020
INSTANCE NAME: kubenet

There is no INVITE USERS, only ACCESS INSTANCE. Clicking ACCESS INSTANCE and entering my splunk.com login credentials into the new page returns: (!) Login Failed. I noticed @brreeves_splunk resolved a similar case 3 weeks ago. Please can you assist, as I would like to take advantage of and trial this product?
How do we find the average of a table column filled with time values?
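Assuming the column holds time-of-day strings such as "12:34:56" (a hypothetical format; adjust the strptime pattern to match yours, and time_field is a placeholder for your column name), one approach is to convert to epoch seconds, average, and format back:

```spl
... | eval t=strptime(time_field, "%H:%M:%S")
| stats avg(t) as avg_t
| eval avg_time=strftime(avg_t, "%H:%M:%S")
```

If the values are durations rather than clock times, the same idea applies with tostring(avg_t, "duration") for the final formatting.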
I have a simple flat data table in Splunk Enterprise 8.0.2 that has values in a field called UK_0 for current-month quantities, UK_1 for the previous month, and so on up to UK_6 for six months ago. I am trying to replace these field names with the actual month names, using an eval on now() to get the month names:

| eval thismonth=strftime(now(),"%B"), lastmonth = strftime(relative_time(now(),"-1mon"),"%B")
| stats values("Service provider") as "Service Provider" values(part_id) as "Part Number" values(UK_1) as lastmonth values(UK_0) as thismonth by "Part Info"

The above simply renames UK_0 to the heading "thismonth" rather than June. How do I get it to say May and June rather than UK_1 and UK_0?
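The "as" clause in stats takes a literal field name, not a token, which is why the columns come out as thismonth and lastmonth. One way around it is to rename after the stats using eval's dynamic {field} syntax, which does expand a field's value into a field name. A sketch with the same fields as above:

```spl
| stats values("Service provider") as "Service Provider" values(part_id) as "Part Number" values(UK_1) as lastmonth values(UK_0) as thismonth by "Part Info"
| eval tm=strftime(now(),"%B"), lm=strftime(relative_time(now(),"-1mon"),"%B")
| eval {tm}=thismonth, {lm}=lastmonth
| fields - thismonth lastmonth tm lm
```

Run in June, this produces columns named June and May; the names track now() automatically each month.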
Hello, I have been making a ton of alerts to notify me on certain event IDs, but this morning when I logged into my console I saw a huge red exclamation mark next to "Searches Skipped." I looked at the alert and it said the red threshold has been exceeded: 40% of searches were skipped. I disabled 7 of the alerts and consolidated two alerts into one, and the issue went away. My question is: how can I monitor for those event IDs in (near) real time without triggering the threshold?