All Topics



I want to calculate shift analysts' VPN session start and end time durations so that each shift is captured exactly within the 24-hour day. I have three shifts with the following timings:

Morning shift: 7am to 3pm
Evening shift: 3pm to 11pm
Night shift: 11pm to 7am the next morning

The query I have so far returns wrong data whenever I extend the time range beyond 24 hours. How can I add an if condition to this query to add a Shift column (morning, evening, night) based on the start and end time ranges?

index=it sourcetype=pulse:connectsecure vendor_product="Pulse Connect Secure" realm=Company-Domain+DUO1001 earliest=-24h
| iplocation src
| eval Attempts=if(vendor_action="started","Session_Started","Session_Ended")
| stats values(Attempts) AS All_Attempts values(src) AS src count(eval(Attempts="Session_Started")) AS Started count(eval(Attempts="Session_Ended")) AS Ended min(_time) AS start_time max(_time) AS end_time by user
| eval Duration=end_time-start_time
| search user=Analyst1 OR user=Analyst2 OR user=Analyst3 OR user=Analyst4 OR user=Analyst5 OR user=Analyst6 OR user=Analyst7 OR user=Analyst8 OR user=Analyst9
| convert ctime(start_time) ctime(end_time)
| eval total_duration=tostring(Duration,"duration")
| table user, All_Attempts, src, Started, Ended, start_time, end_time, total_duration

In Excel I use the following formula to calculate the shift from the ticket close time:

=IF(HOUR(E2)<7,"Night Shift",IF(HOUR(E2)<15,"Morning Shift",IF(HOUR(E2)<23,"Evening Shift","Night Shift")))

How can I express a similar condition in Splunk to get a new calculated column called Shift, alongside the session start, session end, and the duration between them?

@manjunathmeti @woodcock
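A sketch of the Excel-style condition translated to SPL, assuming the shift should be classified from the hour of the session start time (field names mirror the query above; adjust to taste):

```spl
index=it sourcetype=pulse:connectsecure vendor_product="Pulse Connect Secure" earliest=-24h
| stats min(_time) AS start_time max(_time) AS end_time by user
| eval start_hour=tonumber(strftime(start_time, "%H"))
| eval Shift=case(start_hour < 7,  "Night Shift",
                  start_hour < 15, "Morning Shift",
                  start_hour < 23, "Evening Shift",
                  true(),          "Night Shift")
| eval total_duration=tostring(end_time - start_time, "duration")
| convert ctime(start_time) ctime(end_time)
| table user, Shift, start_time, end_time, total_duration
```

If the search range exceeds 24 hours, adding `| bin _time span=1d` before the stats (and including `_time` in the by-clause) would keep each day's sessions separate, which may explain the wrong data on longer ranges.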
I have the following data in my table:

part1.part2.answer.local
part1-part2..part3.part4.answer.net
part11.part11-part11.answerxyz.net
part1-part2-part3-part4.answer.net
part1-part2-part3-part6.answer.com
part127.09 abcd (+789)
part127.08 abcd (+123)
part127.06 abcd (+456)

I want to split it as follows:

1) If there is a space in the value, it should be returned exactly as-is:

input -> output
part127.09 abcd (+789) -> part127.09 abcd (+789)
part127.08 abcd (+123) -> part127.08 abcd (+123)
part127.06 abcd (+456) -> part127.06 abcd (+456)

2) If there is no space, the part before the first dot should be returned:

input -> output
part1.part2.answer.local -> part1
part1-part2..part3.part4.answer.net -> part1-part2
part11.part11-part11.answerxyz.net -> part11
part1-part2-part3-part4.answer.net -> part1-part2-part3-part4
part1-part2-part3-part6.answer.com -> part1-part2-part3-part6

I've tried this:

index=ind sourcetype=src
| fields f1
| where f1 != "null"
| dedup f1
| eval temp=f1
| eval derived_name_having_space=if(match(f1,"\s*[[:space:]]+"),1,0)
| eval with_Space=f1
| where derived_name_having_space=1
| eval without_Space=mvindex(split(temp,"."),0)
| where derived_name_having_space=0
| table with_Space without_Space f1

Here I'm not getting any rows returned. But when I remove the part

| eval without_Space=mvindex(split(temp,"."),0)
| where derived_name_having_space=0

I get the correct results for the rows where derived_name_having_space=1. Similarly, when I instead remove the part

| eval with_Space=f1
| where derived_name_having_space=1

I don't get the correct results for the rows where derived_name_having_space=0:

input -> output
part127.09 abcd (+789) -> part127
part127.08 abcd (+123) -> part127
part127.06 abcd (+456) -> part127

Since they all evaluate to the same result, it creates a problem while deduping.

I've used the regex classes from here: https://www.debuggex.com/cheatsheet/regex/pcre

Can anyone point out what I'm missing, or suggest another approach? Thanks
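A sketch of one possible approach, assuming the root cause is the two consecutive `where` clauses: no row can satisfy both derived_name_having_space=1 and derived_name_having_space=0, so the second filter empties the result set. A single conditional `eval` avoids filtering entirely:

```spl
index=ind sourcetype=src
| fields f1
| where f1 != "null"
| dedup f1
| eval result=if(match(f1, "\s"),
                 f1,
                 mvindex(split(f1, "."), 0))
| table f1 result
```

Here `result` carries either branch per row, so both kinds of input appear in one column and dedup can run on the original f1 before the split.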
Hi, I have been trying to fetch agent logs through the AppDynamics controller itself. I am not able to understand the use of "Logger name" in the Request Agent Log Files dialog. I have tried downloading log files with different logger names, e.g. com.appdynamics and com.appdynamics.BusinessTransaction, but the files that get downloaded have the same size and the same types of content (such as BTs and bytecode). Could you please explain the use of this field? Regards, Ujjwal.
I want to execute a query in app1 but have it return data from app2. For example, executing index="abc" in app1 should get the data from app2. Please help!
I am sending data to Splunk using HEC, but after trying all the methods exposed by the Splunk API, all my custom properties end up nested under a single "message" or "data" attribute. Is there a way to have all my properties logged in their original, flat format rather than under a single key?

Actual: { ID: 123, message: { src: "abcd", category: "list", user: "tchsavy" } }

Expected: { ID: 123, message: "Hello", src: "abcd", category: "list", user: "tchsavy" }
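A sketch of an HEC payload for the event endpoint (`/services/collector/event`), assuming the nesting is introduced by the client library wrapping the object in a "message" key: whatever sits directly inside the `event` key is indexed as the event body, so placing the properties at the top level of `event` keeps them flat. Field values here are taken from the example above; the sourcetype is an assumption:

```json
{
  "sourcetype": "_json",
  "event": {
    "ID": 123,
    "message": "Hello",
    "src": "abcd",
    "category": "list",
    "user": "tchsavy"
  }
}
```

If the logging library only exposes a message string, building the JSON payload yourself and POSTing it to the event endpoint is one way to keep control of the shape.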
We are trying to develop a Monitoring-as-Code application. To start, we want to export the existing Splunk configuration in .tf file format; we can then modify the .tf files to change the corresponding Splunk configuration. I can see that Splunk provides a Terraform provider: https://registry.terraform.io/providers/splunk/splunk/latest Is there a way to export the existing Splunk configuration in .tf file format? I am also open to suggestions if there is a better way to implement a Monitoring-as-Code solution around Splunk.
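A sketch of how existing objects are usually brought under Terraform management, since core Terraform has no built-in "export everything to .tf" feature: write a resource stub, then `terraform import` the existing object into state and fill in the attributes from the imported state. The resource type name and attributes below are assumptions; check the splunk/splunk provider documentation for the exact schema:

```hcl
terraform {
  required_providers {
    splunk = {
      source = "splunk/splunk"
    }
  }
}

# Hypothetical stub for an already-existing saved search;
# attributes are completed after inspecting the imported state.
resource "splunk_saved_searches" "existing_alert" {
  name   = "My Existing Alert"
  search = "index=_internal | stats count"
}
```

This would then be paired with something like `terraform import splunk_saved_searches.existing_alert "My Existing Alert"`, repeated per object; it is per-resource rather than a bulk export.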
I receive a bunch of messages that are each assigned to a group by groupID. I also have a dynamic range as a multivalue field that needs to be used as a filter for these messages. I tried it like this so far, but couldn't get any results:

index=my_index sourcetype=my_source
| eval range=case("case1", mvrange(1,9), "case2", mvrange(10,19), ...)
| where groupID in (range)
| stats count by groupID

So if case1 applies, I only want to see the number of messages whose groupID falls in the specified range, and so on. Can anyone help me with that?
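A sketch of one way this could work, assuming two issues in the attempt: `case()` expects boolean conditions (a bare string like "case1" is not one), and `in (range)` does not test membership in a multivalue field. `mvfind` can do the membership test instead. The `scenario` field below is an assumption standing in for however "case1"/"case2" is actually determined:

```spl
index=my_index sourcetype=my_source
| eval range=case(scenario=="case1", mvrange(1,9),
                  scenario=="case2", mvrange(10,19))
| where isnotnull(mvfind(range, "^".groupID."$"))
| stats count by groupID
```

`mvfind` returns the index of the first multivalue entry matching the regex, or null if none matches, so `isnotnull(...)` keeps exactly the rows whose groupID appears in the generated range.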
@niketn I am trying to display the selected start and end time in the UI. I followed in particular the answer you gave here: https://community.splunk.com/t5/Dashboards-Visualizations/Setting-job-earliestTime-and-job-latestTime-tokens-for-the-date/m-p/345200/highlight/true#M22464 It was working fine, but it suddenly stopped with "Invalid Date". We recently upgraded Splunk to version 8.0.4.1. Could this be due to the upgrade? Was there a change? I couldn't narrow down the exact issue. Here is the code you shared:

<form>
  <label>Show Time from Time Picker</label>
  <!-- Dummy search to pull selected time range earliest and latest date/time -->
  <search>
    <query>| makeresults | addinfo </query>
    <earliest>$field1.earliest$</earliest>
    <latest>$field1.latest$</latest>
    <done>
      <eval token="tokEarliestTime">strftime(strptime('$job.earliestTime$',"%Y/%m/%dT%H:%M:%S.%3N %p"),"%m/%d/%y %I:%M:%S.%3N %p")</eval>
      <eval token="tokLatestTime">strftime(strptime('$job.latestTime$',"%Y/%m/%dT%H:%M:%S.%3N %p"),"%m/%d/%y %I:%M:%S.%3N %p")</eval>
    </done>
  </search>
  <fieldset submitButton="false">
    <input type="time" token="field1">
      <label></label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <!-- sample HTML Panel to display results in required format -->
      <html>
        ( $tokEarliestTime$ to $tokLatestTime$ )
      </html>
    </panel>
  </row>
</form>

I've attached a screenshot of what is shown in the UI. Could you please advise?
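A sketch of a more version-robust variant, assuming the "Invalid Date" comes from the string format of `$job.earliestTime$` / `$job.latestTime$` changing between Splunk versions, which breaks the hard-coded `strptime` pattern. `addinfo` already exposes the resolved range as epoch seconds in `info_min_time` / `info_max_time`, which `strftime` can format directly:

```spl
| makeresults
| addinfo
| eval tokEarliestTime=strftime(info_min_time, "%m/%d/%y %I:%M:%S %p")
| eval tokLatestTime=if(info_max_time="+Infinity",
                        strftime(now(), "%m/%d/%y %I:%M:%S %p"),
                        strftime(info_max_time, "%m/%d/%y %I:%M:%S %p"))
```

The dashboard's `<done>` handler could then set the tokens from the result fields (e.g. `$result.tokEarliestTime$`) instead of parsing the job properties. Note that `info_max_time` is the literal string "+Infinity" for an all-time search, hence the guard.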
Hi. For about 48 hours I have seen strange behaviour from a single UF.
1) On the UF, both metrics.log and splunkd.log show events and NO errors; the connection to the outputs is OK.
2) The UF has not been touched in the last 48 hours: same conf, same add-ons, same everything.
3) The UF was updated to a clean 7.2.0, but the problem remains after rolling back to the previous version...
4) All inputs are being sent, but _internal (metrics.log/splunkd.log) has NOT arrived for 48 hours!
5) I even cleaned the UF's log directory of rotated files and the live metrics and splunkd logs, and restarted. No luck!
6) Deleted the add-ons and redeployed. No luck! _internal is still missing!
Any ideas? Thanks.
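A sketch of a check that is often useful here, assuming the forwarder's host name is known (the name below is a placeholder): confirm whether anything from the UF reaches the _internal index at all, and which sources last arrived, then compare against a known-good forwarder:

```spl
index=_internal host=my_uf_hostname earliest=-48h
| stats count latest(_time) AS last_seen by source
| convert ctime(last_seen)
```

If this returns nothing while application indexes still receive the UF's data, the gap is specific to _internal forwarding (e.g. an outputs/forwarding filter on internal indexes) rather than general connectivity.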
I need help with a Splunk query to display % failures:

% failures = A1/A2 * 100

A1 = total number of events returned by: index="abc" "searchTermForA1"
A2 = total number of events returned by: index="xyz" "searchTermForA2"

Please help with the query. Thanks!
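A sketch of one way to combine the two counts, assuming both searches should run over the same time range: `appendcols` attaches the second count as a column next to the first, after which an `eval` computes the percentage:

```spl
index="abc" "searchTermForA1"
| stats count AS A1
| appendcols
    [ search index="xyz" "searchTermForA2"
      | stats count AS A2 ]
| eval pct_failures=round(A1 / A2 * 100, 2)
| table A1 A2 pct_failures
```

Both stats reduce to a single row, so the column join is unambiguous; for large result sets this is also cheaper than joining raw events.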
Hi, how much does Splunk cost for security use cases? Please reply. Thanks.
Hi team, I have my logs for Jira, Bamboo, and UCD in Splunk, under index=jira, index=bamboo, and index=ucd. For all these tools I need to build a real-time dashboard. Can someone guide me on how to present this as a real-time dashboard? Thanks.
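A sketch of a starting-point panel search, assuming per-tool event volume is a useful first view; in the dashboard editor the panel's time range can then be set to a real-time window (e.g. a rolling 5-minute window) from the time picker:

```spl
index=jira OR index=bamboo OR index=ucd
| timechart span=1m count BY index
```

From there, separate panels per tool (scoped to one index each) with tool-specific fields usually follow the same pattern.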
Hello, we have two deployment apps, named A and B. Each has an inputs.conf to monitor the paths /log/A and /log/B respectively. If I use the deployment server to push either A or B alone, it works fine. But if I push both A and B to the clients, only one of the paths /log/A or /log/B is monitored. Is this because both apps' inputs.conf files are located in the /default folder? Thanks, Mike
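A sketch of how the two stanzas are usually laid out so they do not collide, assuming each app should monitor only its own path: distinct [monitor://...] stanza names merge cleanly across apps, whereas identical stanza names are resolved by precedence and one silently wins. Index and sourcetype values below are placeholders:

```
# deployment-apps/A/local/inputs.conf
[monitor:///log/A]
index = main
sourcetype = app_a_logs
disabled = 0

# deployment-apps/B/local/inputs.conf
[monitor:///log/B]
index = main
sourcetype = app_b_logs
disabled = 0
```

Comparing the effective result on a client with `splunk btool inputs list --debug` shows which app each stanza is coming from and whether one stanza is overriding the other.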
Can you provide an example of a search query or script I can use to tell if a Windows server is shut down or otherwise down? I am looking for the best way to set up a shutdown/down status alert for Windows servers.
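A sketch of a common "silent host" approach, assuming the server normally forwards events continuously: alert when no events have arrived from the host for longer than a threshold. The 15-minute threshold and the index scope are assumptions to tune:

```spl
| tstats latest(_time) AS last_seen WHERE index=* BY host
| eval minutes_silent=round((now() - last_seen) / 60, 0)
| where minutes_silent > 15
| convert ctime(last_seen)
```

Saved as an alert on a schedule, any row returned means a host has gone quiet. This catches shutdowns, crashes, and forwarder failures alike; distinguishing a clean shutdown would additionally need the Windows System event log (shutdown event codes) in the search.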
Hi, under Lookup definitions, IT_server_list is created and mapped to the CSV file server_list.csv. Under Lookup table files, server_list.csv is present. Under Automatic lookups, IT_server_list is configured as well. Why do we need the automatic lookup?
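A sketch of the difference, with assumed field names (host as the match field, owner as an output field from the CSV): without an automatic lookup, every search that wants fields from the CSV must invoke the lookup explicitly:

```spl
index=it sourcetype=syslog
| lookup IT_server_list host OUTPUT owner
| stats count BY owner
```

With the automatic lookup configured for the matching sourcetype, the `| lookup ...` line can be dropped and `owner` still appears on the events at search time. So the automatic lookup is a convenience that applies the definition everywhere, not a separate copy of the data.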
I created some of the columns using regex, so all of the regex extraction code needs to stay in the search. I would like to find the total duration by StationName:

StationName    Duration
ABC123         100
ABC123         200
ABC456         50

When I paste this at the end of my search, it shows the StationName values but the sum of the Duration column is empty. How can I get the sum of Duration by StationName?

| stats sum(Duration) by StationName
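A sketch of the usual cause and fix, assuming Duration was rex-extracted and is therefore a string (possibly with stray whitespace): `stats sum()` silently ignores non-numeric values, so converting the field to a number first typically makes the sum appear. This goes between the existing extractions and the stats:

```spl
| eval Duration=tonumber(trim(Duration))
| stats sum(Duration) AS total_duration BY StationName
```

If `tonumber` returns null for some rows, the extraction is capturing extra characters (units, commas) that need to be stripped in the rex or the eval first.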
Hi, I'm having trouble launching the web server after installing Splunk on Mac OS X (El Capitan, 10.11.6). Once installed, after selecting "Start and Show Splunk" I get the error message in the attached screenshot when attempting to open the browser. When trying to launch from the terminal I get an error message as well (also attached). Any advice would be great!
Hello, I'm trying to extract some SSID info into a field in Splunk. This info comes after a certain text string in some Cisco WLC logs. Sample logs:

Jul 18 15:00:27 10.171.12.44 DA-WLC-03: *Dot1x_NW_MsgTask_0: Jul 18 15:00:25.919: %APF-3-AUTHENTICATION_TRAP: [SA]apf_80211.c:20019 Client Authenticated: MACAddress:fa:f0:6c:56:34:bf Base Radio MAC:a0:93:51:22:38:b0 Slot:0 User Name:dave2345@ox.ac.uk Ip Address:10.156.4.11 SSID:eduwifi

Jul 18 15:20:35 10.171.12.44 DA-WLC-03: *Dot1x_NW_MsgTask_0: Jul 18 15:20:33.510: %APF-3-AUTHENTICATION_TRAP: [SA]apf_80211.c:20019 Client Authenticated: MACAddress:b8:27:56:34:cc:d0 Base Radio MAC:a0:93:51:22:38:b0 Slot:0 User Name: unknown Ip Address:10.156.4.11 SSID:W-Guest

These logs are often different lengths, but the common feature I want to capture as a field is whatever comes after the text SSID: — in testing on regex101.com, this basic regex seems to do the trick: (?:<=SSID:).* But whenever I try to either extract the field or use the rex command in Splunk, it does not work. Could someone tell me whether this is the correct regex expression and what formatting I would need to use in Splunk to extract the field? This seems to be a common request but I can't get it to work.
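A sketch with `rex`, assuming SSIDs contain no spaces (both samples above end the line at the SSID value). Two things differ from the attempt: the lookbehind syntax is `(?<=SSID:)`, not `(?:<=SSID:)`, and Splunk's `rex` needs a named capture group to create a field, which a bare lookbehind-plus-`.*` lacks. The index and sourcetype are placeholders:

```spl
index=network sourcetype=cisco:wlc
| rex field=_raw "SSID:(?<ssid>\S+)"
| stats count BY ssid
```

Using `\S+` instead of `.*` also keeps the capture from running past the SSID if anything ever follows it on the line.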
How can I create an alert for when a host is powered off? (I have one Windows host ID.)
Hi, I have configured the Splunk AWS plugin to get files stored in an S3 bucket. These files come from an Apache server and are in Apache access log format. I use a generic S3 input and it seems to be connected (I tried with only one file), but when I search for events I don't see anything. The internal Splunk logs indicate the S3 bucket is reached and the file inside is processed without error. Do you have any idea what this issue could be due to? Thanks, Saïd
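A sketch of a first check, assuming the data may have landed in an unexpected index or with a parsed timestamp outside the search window (a common cause when ingestion succeeds but the default time range hides the events): search all time and group by index and sourcetype. The source filter is a placeholder for the bucket name:

```spl
index=* source=*my-bucket-name* earliest=0
| stats count min(_time) AS first max(_time) AS last BY index sourcetype
| convert ctime(first) ctime(last)
```

If rows appear with timestamps far in the past, the fix is usually timestamp/sourcetype configuration on the input (e.g. assigning an Apache access-log sourcetype) rather than the S3 connection itself.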