All Topics


Hi Splunkers, I have different queries that get the age of a ticket counting only business hours. I need different queries because business hours are not the same for different regions. For example, this is the one for the Americas:

| eval start_time_epoch = strptime(reported_time,"%b %d %Y %H:%M:%S")
| eval start_time_second = strftime(start_time_epoch,"%S")
| eval start_time_epoch_rounded = start_time_epoch - start_time_second
| fields - start_time_epoch, start_time_second
| eval close_time_epoch = strptime(processed_time,"%b %d %Y %H:%M:%S")
| eval close_time_second = strftime(close_time_epoch,"%S")
| eval close_time_epoch_rounded = close_time_epoch - close_time_second
| fields - close_time_epoch, close_time_second
| eval minute = mvrange(0, (close_time_epoch_rounded - start_time_epoch_rounded), 60)
| mvexpand minute
| eval _time = start_time_epoch_rounded + minute
| eval myHour = strftime(_time,"%H")
| eval myMinute = strftime(_time,"%M")
| eval myDay = strftime(_time,"%A")
| where myDay != "Saturday" AND myDay != "Sunday" AND (myHour >= 13 OR myHour < 1)
| stats count as durationInMinutes by reported_time, processed_time
| eval duration = tostring(durationInMinutes*60, "duration")

Instead, for Europe this line is different:

| where myDay != "Saturday" AND myDay != "Sunday" AND (myHour >= 7 AND myHour < 17)

The output for all of them shows whether the 1-hour SLO was missed today, via the following:

| eval SLO=if(durationInMinutes>60,"SLO Fail","SLO Achieved")
| chart count by SLO

I'm working with UTC time, and it works great. The problem is that in my charts some events from the Americas show up as today's (because the window runs from 13 UTC to 1 UTC), but I'd like to see those as yesterday's. Any idea how to add some sort of offset to my query? Thank you all! Wheresmydata
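For anyone reasoning about the mvrange/mvexpand trick above, the same idea can be sketched outside SPL. This is a minimal Python illustration (the function name is made up, and it assumes a simple non-wrapping business window; the Americas window in the question wraps past midnight and would need extra handling):

```python
from datetime import datetime, timedelta

def business_minutes(start, end, open_hour, close_hour):
    """Count minutes between start and end that fall on a weekday
    and inside [open_hour, close_hour) -- mirrors the SPL pattern of
    expanding one row per minute and filtering by day and hour."""
    count = 0
    t = start.replace(second=0, microsecond=0)  # round down to the minute
    while t < end:
        if t.weekday() < 5 and open_hour <= t.hour < close_hour:
            count += 1
        t += timedelta(minutes=1)
    return count

start = datetime(2020, 6, 22, 6, 30)  # Monday 06:30 UTC
end = datetime(2020, 6, 22, 8, 30)    # Monday 08:30 UTC
print(business_minutes(start, end, 7, 17))  # 90 -- only 07:00-08:29 counts
```

The timezone-offset question then becomes: shift each `_time` by the region's offset before taking `strftime(_time, "%A")`, so the "day" is the local business day rather than the UTC day.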
Hi Everyone, is there any way to limit the users for a particular HTML dashboard in Splunk?
I have implemented a customized date input, but it is not getting aligned with my other inputs in the same row. I am attaching a screenshot to show how it is aligned currently. I want all the inputs in one row horizontally. Here's what I tried:

<panel>
  <html>
    <style>
      .dateContainer {
        display: flex;
        margin-top: 0px !important;
        margin-right: 10px !important;
        margin-bottom: 0px !important;
        margin-left: 0px !important;
      }
      .dateInput {
        padding-right: 15px !important;
      }
      #test1 {
        display: flex;
        margin-top: 0px !important;
        margin-right: 10px !important;
        margin-bottom: 0px !important;
        margin-left: 10px !important;
      }
      #test2 {
        display: flex;
        margin-top: 0px !important;
        margin-right: 10px !important;
        margin-bottom: 0px !important;
        margin-left: 20px !important;
      }
    </style>
    <div class="dateContainer">
      <div class="dateInput">
        <div>From Date:</div>
        <input id="from_default1" type="date" name="fromDate" value="$from_default1$" />
      </div>
      <div class="dateInput">
        <div>To Date:</div>
        <input id="to_default1" type="date" name="toDate" value="$to_default1$" />
      </div>
    </div>
  </html>
  <input type="dropdown" token="field1" id="test1">
    <label>test</label>
  </input>
  <input type="dropdown" token="field2" id="test2">
    <label>Test</label>
  </input>
</panel>
We want to extract JSON key/value pairs, but the source prefixes text before the JSON data. Please let us know the search string to extract the JSON fields.

*************************************
2020-06-22 23:52:40,895 INFO [Timer-Driven Process Thread-10] o.a.nifi.processors.standard.LogMessage LogMessage[id=2202601e] TEST{ "domain": "ABC", "module": "TEST", "EventID" : "1233" }
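One way to see the shape of the problem: everything from the first `{` onward in the sample event is valid JSON, so stripping the prefix and parsing the remainder works. A minimal Python sketch using the sample event (not a Splunk search string — the Splunk equivalent would be capturing the braces with rex and then parsing, but this shows the idea):

```python
import json

line = ('2020-06-22 23:52:40,895 INFO [Timer-Driven Process Thread-10] '
        'o.a.nifi.processors.standard.LogMessage LogMessage[id=2202601e] '
        'TEST{ "domain": "ABC", "module": "TEST", "EventID" : "1233" }')

# Everything from the first '{' onward is the JSON payload.
payload = line[line.index('{'):]
fields = json.loads(payload)
print(fields['domain'], fields['EventID'])  # ABC 1233
```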
What will happen if I reboot a cluster peer after enabling maintenance mode on the master and running the offline command on the peer? Will it join the cluster again once the peer status changes to Down after exceeding the restart_timeout? I could see the peer status change to Down once I put it offline and then rebooted the peer server. I noticed that it joins the cluster again, the status changes to Up, and Searchable shows Yes.
My search consists solely of a call to a search macro. It looks like this: `blabla1(host="blabla2", mon-host="blabla3" )` The search macro starts as follows: | inputlookup blabla4.csv | eval counter=0 | ... I get an error message "Error in 'inputlookup' command. This command must be the first command of a search." Does this error message mean that Splunk does not support the use of inputlookup in a search macro?
Hi Splunk experts, I've created a summary index that contains 6 eval cases, for example: eval 1=case(match(something,"a",...."b","c"), eval 2=case(d,e,f) ... eval 6=case(x,y,z), where a,b,c...x,y,z are the individual detailed functions and 1,2,3,4,5,6 are the overall functions. Now I have combined all the eval functions into a single value using eval Total_Function = mvappend(1,2,3,4,5,6). But I want to list a table with both the overall function and the individual detailed function, and I am not sure how to get the individual detail values in the table alongside the overall function. Expected table as below:

Time  Total_Function   Overallfunction  Individual function
XX    Total_Function   1                a
YY    Total_Function   1                b
ZZ    Total_Function   1                c
AA    Total_Function   6                x
BB    Total_Function   6                y
CC    Total_Function   6                z

Kindly help me. (Please note, there are multiple individual functions in each eval case.)
Hi Splunk experts, I am a new face here. I have a task to create multiple alerts, and I am wondering whether it is possible to pass a list of strings as an argument to my custom macro. Let me explain the idea. My argument will be like this: "scope=A", "scope=A OR scope=B", "scope=A OR scope=B OR scope=C", and so on. Basically scope can equal whatever values we need, and I want to write a macro that has only one argument but still lets me add any number of values: myMacro(1). How can I solve this? Thanks in advance.
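Whatever the macro mechanics end up being, the string the macro must produce is a simple join over a list of values. A Python sketch of that expansion, just to pin down the target output (the function name is illustrative, not part of any Splunk API):

```python
def scope_filter(values):
    """Build the 'scope=A OR scope=B OR ...' clause from a list of values."""
    return " OR ".join(f"scope={v}" for v in values)

print(scope_filter(["A"]))            # scope=A
print(scope_filter(["A", "B", "C"]))  # scope=A OR scope=B OR scope=C
```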
Found a minor issue with the ESCU search "ESCU - Detect Prohibited Applications Spawning cmd.exe - Rule". Under Advanced Edit, rename "parent_process" to "parent_process_name" to get the parent name in the "Incident Review" tab. ESCU version 1.0.49.
Hi, I am trying to get the results from two indexes and append them. The query works in the search window, but after adding it to a dashboard it times out. Can anyone please help optimize the code?

index=servicewow dv_cmdb_ci=Work OR short_description="*WJM*" OR assignment_group="People" earliest="-24h@h" dv_state="Open" OR dv_state="Work in Progress"
| fields opened_at, dv_number, priority
| dedup dv_number
| eval new1=now()
| eval new=strftime(new1,"%Y-%m-%d %H:%M:%S")
| stats list(opened_at) as start, list(new) as current by dv_number, priority
| append [search index=sales_enterprise sourcetype=sfdc:case Category__c=Work earliest="-24h@h" Status="Open" OR Status="In Progress"
    | fields CaseNumber, Priority, Status, CreatedDate
    | dedup CaseNumber
    | eval new1=now()
    | eval new=strftime(new1,"%Y-%m-%d %H:%M:%S")
    | stats list(CreatedDate) as csstart, list(new) as cscurrent by CaseNumber, Priority, Status]
| eval duration=strptime(current,"%Y-%m-%d %H:%M:%S") - strptime(start,"%Y-%m-%d %H:%M:%S")
| eval Time=round(duration/3600/24, 0)
| eval csduration=strptime(cscurrent,"%Y-%m-%d %H:%M:%S") - strptime(csstart,"%Y-%m-%dT%H:%M:%S")
| eval CaseTime=round(csduration/3600/24, 0)
| eval IncSLA=if((Time>3 AND priority=3),"P3 INC-SLA Breached", if((Time>7 AND priority=4),"P4 INC-SLA Breached","SLA Yet to Breach"))
| eval CaseSLA=if((CaseTime>3 AND Priority="Medium"),"P3 Case-SLA Breached", if((CaseTime=1 AND Priority="Low"),"P4 Case-SLA Breached","SLA Yet to Breach"))
| stats count(eval(IncSLA="P3 INC-SLA Breached")) as "P3 Inc-SLA Breached", count(eval(IncSLA="P4 INC-SLA Breached")) as "P4 Inc-SLA Breached", count(eval(CaseSLA="P3 Case-SLA Breached")) as "P3 Case-SLA Breached", count(eval(CaseSLA="P4 Case-SLA Breached")) as "P4 Case-SLA Breached"
| transpose
| rename column as Incidents/Cases
| rename "row 1" as "NoOfIncidents/Cases Breached"
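The seconds-to-days arithmetic this query relies on (duration / 3600 / 24, rounded, then compared against the SLA limit) can be sanity-checked with a tiny Python sketch. The function name and sample numbers are illustrative only:

```python
def age_in_days(start_epoch, now_epoch):
    """Round an age in seconds to whole days: seconds / 3600 / 24."""
    return round((now_epoch - start_epoch) / 3600 / 24)

# A ticket opened 4.2 days ago rounds to 4 days, breaching a 3-day P3 SLA.
age = age_in_days(0, int(4.2 * 86400))
print(age, age > 3)  # 4 True
```

Note that rounding to the nearest whole day means an age of 3.4 days rounds down to 3 and would not trip a `Time>3` check, which is worth keeping in mind when tuning the thresholds.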
Hi All, I need a solution for this: the data display should change every 30 minutes, but the maximum display is 10 fields. For example, data for 8:00 to 3:00 (in 30-minute intervals) is shown; once 3:30 pm arrives, the display changes to 8:30 am to 3:30 pm, and so on. Please suggest a solution for this.
Can AppDynamics process Spring Boot Actuator metrics, so they can be displayed, graphed, and monitored? The values are available via JMX, but it's not as straightforward to retrieve them; in fact, the JMX retriever doesn't seem to work with them at all. These values can also be retrieved via HTTP requests (the URL is fairly standard) as per standard Spring Boot 2.1+: the base URL can be given, the metric names can be retrieved via a different URL, and hence the values can be retrieved. Can AppDynamics handle retrieving these metrics automatically and allow them to be displayed, graphed, etc.? Thanks, Ian
Following the instructions from here, Send SNMP events to your Splunk deployment, I'm setting up monitoring of the file at /var/log/snmp-traps. I wonder what the source type would be; I guess one should be defined already. The file has the following content, for example:

NET-SNMP version 5.7.3
2020-06-22 16:30:44 localhost [UDP: [127.0.0.1]:49799->[127.0.0.1]:162]:
iso.3.6.1.2.1.1.3.0 = Timeticks: (1) 0:00:00.01 iso.3.6.1.6.3.1.1.4.1.0 = OID: ccitt.1
2020-06-22 16:50:44 localhost [UDP: [127.0.0.1]:59061->[127.0.0.1]:162]:
iso.3.6.1.2.1.1.3.0 = Timeticks: (1) 0:00:00.01 iso.3.6.1.6.3.1.1.4.1.0 = OID: ccitt.1
2020-06-22 16:50:48 localhost [UDP: [127.0.0.1]:59062->[127.0.0.1]:162]:
iso.3.6.1.2.1.1.3.0 = Timeticks: (1) 0:00:00.01 iso.3.6.1.6.3.1.1.4.1.0 = OID: ccitt.1
2020-06-22 17:21:36 localhost [UDP: [127.0.0.1]:58259->[127.0.0.1]:162]:
iso.3.6.1.2.1.1.3.0 = Timeticks: (1) 0:00:00.01 iso.3.6.1.6.3.1.1.4.1.0 = OID: ccitt.1

Thanks for your help!
Hello, I am using Splunk Enterprise 7.3.5. I would like to send an email using the sendemail command, but I would like to build it from a search result, so I am trying:

eventtype = myeventype
| table message_subject, sender_address
| sendemail sendresults=true inline=true from=$sender_address$ subject=$message_subject$ to=myemail

where message_subject and sender_address are fields of the search. But when I receive the email (see the attached image), the parameters are not applied: I receive the email without any of those parameters set. How can I fix that?
I have 3 reports that I want to put into one report. Here is my search:

sourcetype=MSExchange:*:MessageTracking source_id=SMTP (event_id=RECEIVE) user_bunit=Energy recipient_domain="IID.com"
| stats count as RECEIVE by recipient
| append [search sourcetype=MSExchange:*:MessageTracking source_id=SMTP (event_id=SEND) user_bunit=Energy recipient_domain="IID.com"
    | stats count as SEND by recipient]
| table recipient, SEND, RECEIVE

The data I get shows only the recipient and RECEIVE columns; it does not display the SEND information. What am I missing here?
Hi, I'm trying to exclude as many crawl bots as possible from my search and show only human hits on our website. I found the search below while googling, but it shows me all the non-human sessions. Can anyone explain how to exclude these sessions, or another way of showing all non-bot traffic? Thanks!

index="main" sourcetype="access_combined"
| eval usersession=clientip + "_" + useragent
| sort usersession, _time
| delta _time as visit_pause p=1
| streamstats current=f window=1 global=f last(usersession) as previous_usersession
| eval visit_pause=if(usersession==previous_usersession, visit_pause, -1)
| search visit_pause!=-1
| stats var(visit_pause) as variance by usersession
| search variance<5
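The heuristic behind this search: automated clients tend to request at fixed intervals, so the variance of a session's inter-request pauses is near zero, while human browsing is irregular. The posted search keeps the low-variance (bot-like) sessions, so human traffic would be the complement. A minimal Python sketch of the idea (the threshold and function name are illustrative, and population variance is used here where SPL's var() is sample variance):

```python
from statistics import pvariance

def looks_like_bot(request_times, threshold=5.0):
    """Flag a session as likely automated when the variance of its
    inter-request pauses falls below the threshold (regular polling)."""
    pauses = [b - a for a, b in zip(request_times, request_times[1:])]
    return pvariance(pauses) < threshold

bot = [0, 60, 120, 180, 240]    # a request exactly every 60 s
human = [0, 5, 47, 300, 310]    # irregular gaps
print(looks_like_bot(bot), looks_like_bot(human))  # True False
```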
Hi everyone, I want to create an alert that runs every hour, checks the last 60 minutes of events to get the count, and then compares this with the average of the past 7 days.

index=data
| timechart span=1h count
| timewrap d series=short
| addtotals s*
| eval 7dayavg=Total/7.0
| table _time, _span, s0, 7dayavg
| rename s0 as now

This displays every hour for today along with 7dayavg, but how do I show just the past 60 minutes, then compare that with the 7dayavg of the same 60-minute time block?
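Conceptually the comparison is: take eight counts of the same clock hour (seven history days plus the current hour) and compare the last one against the mean of the rest. A small Python sketch of that arithmetic (not SPL; names and numbers are illustrative):

```python
def hour_vs_week_avg(counts):
    """counts: hourly counts for the same clock hour, oldest first,
    with the current hour last. Returns (current, history average)."""
    current, history = counts[-1], counts[:-1]
    return current, sum(history) / len(history)

# Seven history days averaging 100 events/hour, current hour at 160.
cur, avg = hour_vs_week_avg([100, 110, 90, 105, 95, 100, 100, 160])
print(cur, avg, cur > avg * 1.2)  # 160 100.0 True
```

One subtlety this makes visible: if the running total also includes the current hour (as addtotals s* does), dividing by 7 overstates the historical average, so the current column should be excluded before averaging.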
I am working with the following query; however, the start-time and end-time output I am getting is in the format below, and some of the times are listed several times: 06/22/2020 15:24:06.370000. I am trying to get only the time instead of the current format.

index=XYZ SMF30JBN=M*DDD* SMF30JNM=JOB* (SMF30STP=1 OR SMF30STP=5) sourcetype="syncsort:smf030"
| rename SMF30JNM as JOBNUMBER SMF30JBN as JOBNAME
| eval START = case(SMF30STP=1, strptime(DATETIME, "%Y-%m-%d %H:%M:%S.%2N"))
| eval END = case(SMF30STP=5, strptime(DATETIME, "%Y-%m-%d %H:%M:%S.%2N"))
| stats values(START) as START values(END) as END by JOBNUMBER JOBNAME
| convert dur2sec(START) as STARTTIME dur2sec(END) as ENDTIME
| convert ctime(STARTTIME) as START_TIME ctime(ENDTIME) as END_TIME
| table JOBNAME START_TIME END_TIME
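The underlying fix is a parse-then-reformat round trip: read the full timestamp once, then emit only the hours/minutes/seconds portion. A minimal Python illustration of the same idea using the timestamp from the question:

```python
from datetime import datetime

raw = "06/22/2020 15:24:06.370000"
# Parse the full timestamp, then format back out with only %H:%M:%S --
# the date and fractional seconds are dropped in the output format.
t = datetime.strptime(raw, "%m/%d/%Y %H:%M:%S.%f")
print(t.strftime("%H:%M:%S"))  # 15:24:06
```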
I installed the Splunk App for Windows Infrastructure using the following Splunk guide: https://docs.splunk.com/Documentation/MSApp/2.0.1/MSInfra/AbouttheSplunkAppforMSInfrastructure. I set up the Splunk deployment server on my Splunk Enterprise instance. For some reason, the Splunk forwarder that I set up as a client of this server is no longer sending the logs from the monitor I defined in C:\Program Files\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf. The Splunk Windows Add-on and the Splunk App for Windows Infrastructure are working correctly and are sending the Windows logs to the deployment server. I believe I need to change something in the settings of the deployment server/clients/server class to get my monitor to send the logs to the "main" index, but I don't know what to change.
`get_seclabel(host,"domain_controller","-90d")`

Macro expanded:

| inputlookup sec_label where (label="domain_controller" type="host" last_updated>=1585079881.000000)

The lookup has the following columns: label, type, and value. The results of this lookup give me everything that is a domain controller. I'm trying to exclude anything that matches in the value column, so I'm using this in a search, but it's not excluding the list properly:

NOT [| `get_seclabel(host,"domain_controller","-90d")`]

I still see NADC01 as a returned value in my search even though I'm excluding it here. Any idea what I'm doing wrong?