All Topics


Hey guys. I have multiple events combined into transactions. I'd like to plot the duration of each transaction on a timechart, to get an overview of when each transaction occurred and how long it took. My search so far is:

searchterms
| eval start_time = if(like(_raw, "%START%"), 'start', 'null')
| eval end_time = if(like(_raw, "%END%"), 'end', 'null')
| transaction JobDescription startswith=(LogMessage="* START *") endswith=(LogMessage="* END *") maxevents=5000
| timechart [pls help]

I'm pretty lost on this case, so any help is very much appreciated.
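A minimal sketch of one way to finish this: transaction already computes a duration field (seconds between the first and last event of each transaction), so the two eval lines can likely be dropped and duration charted directly. The span and aggregation functions below are assumptions to adjust:

searchterms
| transaction JobDescription startswith=(LogMessage="* START *") endswith=(LogMessage="* END *") maxevents=5000
| timechart span=1h max(duration) as max_duration avg(duration) as avg_duration

Splitting by the transaction key instead (| timechart span=1h max(duration) by JobDescription) shows which transaction was slow, at the cost of one series per JobDescription.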
Hi folks, I am getting the status of my applications (Server-001 and Server-002) every 15 minutes, in JSON format as in the example below, and pushing it to a Splunk forwarder. I want to create a line chart, or maybe some other visualization, based on these events. Say for PASS we assign the value 1 and for FAIL the value 0, and plot that over _time. Thanks.

Event 1
  "name": "Server-001",
  "status": "PASS"
Event 2
  "name": "Server-002",
  "status": "PASS"
Event 3
  "name": "Server-001",
  "status": "PASS"
Event 4
  "name": "Server-002",
  "status": "PASS"
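A minimal sketch of exactly that mapping, assuming the JSON fields extract as name and status and that one sample arrives per server per 15-minute interval; index and sourcetype are placeholders:

index=your_index sourcetype=your_sourcetype
| eval status_value = if(status="PASS", 1, 0)
| timechart span=15m latest(status_value) by name

Rendered as a line chart, this gives one line per server that sits at 1 while passing and drops to 0 on a failure.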
After onboarding was done, the logs are reporting to Splunk, but most of the events show up as binary, like below. The file's charset shows UTF-16LE; I have tried AUTO as well, but still:

x00E\x00B\x00U\x00G\x00:\x00g\x00e\x00t\x00D\x00S\x00M\x00U\x00s\x00e\x00r\x00s\x00:\x00G\x00e\x00t\x00t\x00i\x00n\x00g\x00 \x00i\x00n\x00f\x00o\x00r\x00m\x00a\x00t\x00i\x00o\x00n\x00 \x00f\x00o\x00r\x00 \x00R\x00U\x007\x006\x007\x001\x003\x001\x00 \x00 \x00 \x00 \x00D\x00E\x00B\x00U\x00G\x00:\x00g\x00e\x00t\x00D\x00S\x00M\x00U\x00s\x00e\x00r\x00s\x00:\x00S\x00u\x00c\x00c\x00e\x00s\x00s\x00

props.conf is written as below:

SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
CHARSET=UTF-16LE
disabled=false
DATETIME_CONFIG=CURRENT
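A hedged thing to check, sketched below: CHARSET is honored during parsing, so for a universal-forwarder input this props stanza generally needs to live on the indexers (or an intermediate heavy forwarder), not only on the UF; placed on the wrong tier it is silently ignored and the UTF-16 NUL bytes survive into the index. The stanza name is a placeholder for your actual sourcetype:

# props.conf on the parsing tier (indexer / heavy forwarder)
[your_sourcetype]
CHARSET = UTF-16LE
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
DATETIME_CONFIG = CURRENT

Note that already-indexed events stay garbled; only data arriving after a restart is converted.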
Hi all. We have been using version 2.0.0 of SPL Rehab on our Splunk Cloud search head for the past 12 months. We have just been upgraded to the Victoria experience and it no longer works. The error we are getting is a 403: you don't have permissions to invoke DEBUG. Any ideas, or should I contact Splunk support?
Hello Splunk community, we are facing memory issues with our Splunk instance. The problem is connected to the installation of a new add-on app. The environment is a Windows server. I understand that a new app may run as its own Python process on the server. Is there a way to associate the apps with their specific Python processes? I need to see which one is using the most memory so I can possibly disable it. Thank you, Fuzzylogic
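A hedged sketch using Splunk's own introspection data, which records per-process resource usage together with the launch arguments; the arguments usually include the script path, which ties a Python process back to its app directory. The field names come from the _introspection PerProcess schema, and the python* filter is an assumption about how the processes are named on Windows:

index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.process="python*"
| stats max(data.mem_used) as peak_mem_mb latest(data.args) as args by data.pid
| sort - peak_mem_mb

The app whose script appears in args next to the largest peak_mem_mb is the likely culprit.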
Hi, I want to create a dashboard where a user has a dropdown input to select a named time frame ($value$). The start and end dates of each time frame are defined in a lookup table. Each of my events has a milestone date, and I want to filter to those events whose milestone date lies between the start and end dates from the lookup table. I tried something like this:

index=my_index
| where milestone_date_epoch > [inputlookup mapping_lookup WHERE time_frame = $value$ | eval startdate = strptime(Start_date, "%Y-%m-%d") | return startdate]
| where milestone_date_epoch < [inputlookup mapping_lookup WHERE time_frame = $value$ | eval enddate = strptime(End_date, "%Y-%m-%d") | return enddate]

But I get an error message. Can you help me fix this?
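A minimal sketch of a fix, resting on two points: a subsearch that starts with inputlookup needs a leading pipe, and return startdate emits startdate=<value>, which is not legal inside a where expression, whereas return $startdate emits the bare value. Quoting "$value$" assumes time_frame is a string column:

index=my_index
| where milestone_date_epoch > [| inputlookup mapping_lookup where time_frame="$value$" | eval startdate = strptime(Start_date, "%Y-%m-%d") | return $startdate]
| where milestone_date_epoch < [| inputlookup mapping_lookup where time_frame="$value$" | eval enddate = strptime(End_date, "%Y-%m-%d") | return $enddate]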
Hello. I have 3 SHs. When I switch the captain to another SH, data disappears on it. In its normal state the SH shows 20 million events, but when it becomes captain it shows at most 400-500 events. There is no such problem with the other two SHs. What is the problem? Please help me!
Dear community, I am struggling with how to allow different formats in a search input while still finding the corresponding events. In my events I have MAC addresses in this format: 84-57-33-0D-B4-A8. I have built a dynamic dashboard where the MAC addresses are found if the user types in exactly this format. However, the user might search for a MAC address like 8457330DB4A8 or 84:57:33:0D:B4:A8, so in order to find results successfully I have to convert the input to the expected format.

A test query like this converts the first format:

|makeresults
| eval m = "aab2c34be26e"
| eval MAC2 = substr(m,1,2)."-".substr(m,3,2)."-".substr(m,5,2)."-".substr(m,7,2)."-".substr(m,9,2)."-".substr(m,11,2)
| fields MAC2

A test query like this converts the second format:

|makeresults
| eval m = "aa:c3:4b:e2:6e"
| eval MAC2 = replace (m,":","-")
| fields MAC2

But I am failing to combine them into a joint query that depends on the input: since my $mac$ token can arrive in all three formats, I have to choose the conversion depending on the input. My idea was to write a condition with a regex match of $mac$:

([0-9A-Fa-f]{2}[-]){5} - no conversion needed
([0-9A-Fa-f]{2}[:]){5} - replace as shown above
([0-9A-Fa-f]{2}){5} - substitute as shown above

I tried several variants of case and if but never got it to work... any help highly appreciated! Thanks
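A minimal sketch of a way to skip the three-way condition entirely: strip every separator first, then rebuild the dashed, upper-case form, so all three input formats collapse into one code path. The literal below stands in for your $mac$ token (in a dashboard you would likely pass it as $mac|s$, which is an assumption about your setup):

|makeresults
| eval m = replace("84:57:33:0D:B4:A8", "[-:]", "")
| eval MAC2 = upper(substr(m,1,2)."-".substr(m,3,2)."-".substr(m,5,2)."-".substr(m,7,2)."-".substr(m,9,2)."-".substr(m,11,2))
| fields MAC2

Because stripping is a no-op on the bare format and removes the separators from the other two, no if/case logic is needed.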
Hi, I have a key-value pair called duration in my application log that shows the duration of each completed job. Each day when I take the maximum duration I get lots of false positives, because it is natural for the duration to be high at some single point; it is only abnormal when the duration stays high. E.g.

normal condition:
00:01:00.000 WARNING duration[0.01]
00:01:00.000 WARNING duration[100.01]
00:01:00.000 WARNING duration[0.01]

abnormal condition:
00:01:00.000 WARNING duration[0.01]
00:01:00.000 WARNING duration[100.01]
00:01:00.000 WARNING duration[50.01]
00:01:00.000 WARNING duration[90.01]
00:01:00.000 WARNING duration[100.01]
00:01:00.000 WARNING duration[0.01]

1. How can I detect the abnormal condition with Splunk, ideally in a way that gives the fewest false positives on huge data?
2. Which visualization or chart is most suitable to show this abnormal condition daily? This is a huge log file and it is difficult to show all the data for each day on a single chart. Any ideas?

Thanks,
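A minimal sketch of one approach: smooth the series with a moving average so a single spike washes out while a sustained run of high values does not. The window size, the threshold of 50, and the index filter are all assumptions to tune against your data:

index=your_index "duration["
| rex "duration\[(?<duration>[\d.]+)\]"
| streamstats window=5 avg(duration) as moving_avg
| eval abnormal = if(moving_avg > 50, 1, 0)
| timechart span=15m max(moving_avg) as moving_avg sum(abnormal) as abnormal_samples

For question 2, a timechart at a coarse span also addresses the volume problem: each day collapses to a handful of points, and a sustained run shows up as a plateau in moving_avg rather than a single spike.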
Dear Splunk community, I need help extracting the string CTJT plus any 6 characters after it. CTJT is the start of an error code and is always the same; the 6 characters after it vary but are always exactly 6 characters, meaning the full error code is 10 characters, like this: CTJTAAB013. The error codes appear at random positions within the events, never fixed! I need to extract the error code and evaluate it into a field:

CTJT* | table errorcode | eval errorcode = "I want to fetch the error code here"

I have tried substr but I can't find a method for fetching the first index of CTJT. Can anyone help me create a regex that does the above, or suggest some other way? Thanks in advance
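A minimal sketch with rex, which extracts the first match wherever it sits in the raw event, so the position never matters. \w{6} assumes the six trailing characters are letters or digits; widen it to \S{6} if other symbols can occur:

CTJT*
| rex "(?<errorcode>CTJT\w{6})"
| table errorcode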
Hello, I am trying to connect the NetBackup app to Splunk using the REST API Modular Input app (https://splunkbase.splunk.com/app/1546/). Our use case is slightly complicated; the request can only be fulfilled in 2 steps:

1. Send a POST request, which returns a token value as the result.
2. Send a GET request using that token to get the required data from the NetBackup server.

Has anyone had a similar situation, or any suggestions for implementing this scenario?

Update: I am able to implement the 2 requests separately. Splunk is on a Windows platform. If it were on Linux I would have written a script that executes the first request using curl and copies the token value into the input config of the 2nd request. I am not sure how to handle this on Windows. Thanks
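A hypothetical sketch of chaining the two calls in a single script instead of two inputs; Python runs the same on Windows as on Linux, so it sidesteps the curl dependency and could be wired up as a scripted input. Every URL, header, and JSON key below is an assumption to replace with the actual NetBackup API values:

import json
import urllib.request

BASE = "https://netbackup.example.com"  # hypothetical host

# Step 1: POST credentials to the login endpoint and pull the token
# out of the JSON response ("token" is an assumed key name).
login = urllib.request.Request(
    BASE + "/login",  # hypothetical path
    data=json.dumps({"userName": "svc_user", "password": "secret"}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(login) as resp:
    token = json.load(resp)["token"]

# Step 2: GET the data, passing the token in the Authorization header,
# and print it to stdout, which a scripted input would index.
query = urllib.request.Request(
    BASE + "/admin/jobs",  # hypothetical path
    headers={"Authorization": token},
)
with urllib.request.urlopen(query) as resp:
    print(resp.read().decode())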
Hi, I am trying to filter events based on a lookup table with a time range. My lookup table looks like this:

startDay   startTime   endDay    endTime
Saturday   20:00       Tuesday   08:00

With this lookup, the search should remove all events from Saturday 8 PM until Tuesday 8 AM. How do I create this query?
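A minimal sketch that hard-codes the single window from the lookup row, just to show the day/time comparison pattern; driving the conditions dynamically from the lookup takes more plumbing, but the where clause keeps the same shape. index is a placeholder:

index=your_index
| eval dow = strftime(_time, "%A"), hm = strftime(_time, "%H:%M")
| where NOT ((dow="Saturday" AND hm>="20:00")
          OR dow="Sunday"
          OR dow="Monday"
          OR (dow="Tuesday" AND hm<"08:00"))

String comparison works here because %H:%M is zero-padded, so "08:00" < "20:00" sorts the same way the times do.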
We have a large number of forwarders and would like to optimize the metrics data they send to the internal index. The main goal is to keep the index at a reasonable size while still having enough data to search. Is there a way to aggregate, i.e. to increase the sampling interval? There is a setting in limits.conf:

[metrics]
interval = 30
maxseries = 10

Increasing the polling interval between samples from 30 seconds to, say, 90 would decrease sampling and save some storage, right? Thanks for any hint.
I am trying to parse MS Exchange HttpProxy logs with the setup below in props and transforms, but it doesn't seem to be working.

inputs.conf on the UF:

[monitor://D:\Program Files\Microsoft\Exchange Server\V15\Logging\HttpProxy\*\*.LOG]
disabled=0
recusrive=true
index= exchange_index
sourcetype= exchange_httpproxy
ignoreOlderThan = 0d

Props and transforms on the SH:

[exchange_httpproxy]
REPORT-extractfields = extractfields

[extractfields]
DELIMS=","
FIELDS=DateTime,RequestId,MajorVersion,MinorVersion,BuildVersion,RevisionVersion,ClientRequestId,Protocol,UrlHost,UrlStem,ProtocolAction,AuthenticationType,IsAuthenticated,AuthenticatedUser,Organization,AnchorMailbox,UserAgent,ClientIpAddress,ServerHostName,HttpStatus,BackEndStatus,ErrorCode,Method,ProxyAction,TargetServer,TargetServerVersion,RoutingType,RoutingHint,BackEndCookie,ServerLocatorHost,ServerLocatorLatency,RequestBytes,ResponseBytes,TargetOutstandingRequests,AuthModulePerfContext,HttpPipelineLatency,CalculateTargetBackEndLatency,GlsLatencyBreakup,TotalGlsLatency,AccountForestLatencyBreakup,TotalAccountForestLatency,ResourceForestLatencyBreakup,TotalResourceForestLatency,ADLatency,SharedCacheLatencyBreakup,TotalSharedCacheLatency,ActivityContextLifeTime,ModuleToHandlerSwitchingLatency,ClientReqStreamLatency,BackendReqInitLatency,BackendReqStreamLatency,BackendProcessingLatency,BackendRespInitLatency,BackendRespStreamLatency,ClientRespStreamLatency,KerberosAuthHeaderLatency,HandlerCompletionLatency,RequestHandlerLatency,HandlerToModuleSwitchingLatency,ProxyTime,CoreLatency,RoutingLatency,HttpProxyOverhead,TotalRequestTime,RouteRefresherLatency,UrlQuery,BackEndGenericInfo,GenericInfo,GenericErrors,EdgeTraceId,DatabaseGuid,UserADObjectGuid,PartitionEndpointLookupLatency,RoutingStatus
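Two hedged things to check, assuming the files look like standard Exchange logs: the inputs.conf attribute is spelled recursive (the recusrive line is silently ignored, though monitor inputs descend into the wildcard path by default anyway), and HttpProxy logs begin with #-prefixed header lines (#Software, #Fields, ...) that will never line up with the FIELDS list. A sketch of dropping those headers on the parsing tier:

# props.conf (indexer / heavy forwarder)
[exchange_httpproxy]
TRANSFORMS-drop_headers = exchange_drop_comments

# transforms.conf
[exchange_drop_comments]
REGEX = ^#
DEST_KEY = queue
FORMAT = nullQueue

The REPORT-based DELIMS/FIELDS extraction itself is search-time, so having it on the SH is correct.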
Hi, I am trying to do a lookup against a calculated field. Details: I have a CSV containing three columns: DomainName,ThreatName,Date. My base search has a field DomainName which contains domains, some of which have "www." prepended. So I formulated my search like this:

base search
| eval calcDomainName = replace(DomainName,"www\.", "")
| lookup iocs_domains DomainName as calcDomainName OUTPUT ThreatName, Date
| table calcDomainName ThreatName Date

In my lookup definition I have set "no_match" as the default. However, when I search with the above I don't get any fields like ThreatName or Date in my output. The lookup is uploaded in the Search app and its permissions are read for everyone, and I am searching under the Search app as well. I can view the contents of the CSV with the command below in Search & Reporting:

| inputlookup iocs_domains

I even verified the order of search-time operations, in which calculated fields precede lookups. I am unable to understand what I am doing wrong.
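One hedged guess to test: CSV lookup matching is case-sensitive by default, so any uppercase character on either side misses silently. A sketch that lower-cases the event side (the lookup side would then need a case-insensitive lookup definition or a lower-cased CSV), and anchors the regex so only a leading www. is stripped:

base search
| eval calcDomainName = lower(replace(DomainName, "^www\.", ""))
| lookup iocs_domains DomainName as calcDomainName OUTPUT ThreatName, Date
| table calcDomainName ThreatName Date

If this still returns nothing, comparing | inputlookup iocs_domains | head 5 against the calcDomainName values side by side usually exposes the mismatch (stray spaces, trailing dots, case).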
So this search...

index="myindex" source="/data/logs/log.json" "Calculation Complete"

...returns results with a MessageBody field that contains various different strings. I need to do the simplest regex in the world (*my string) and then count the messages which match each string, eventually charting them. I thought this would work, but it just returns 0 for them all:

index="myindex" source="/data/logs/log.json" "Calculation Complete"
| stats count(eval(MessageBody="*my string")) as My_String count(eval(MessageBody="*your string")) as Your_String count(eval(MessageBody="*other string")) as Other_String

Help
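A minimal sketch of the likely cause and fix: inside eval, = is an exact string comparison and does not expand wildcards, so "*my string" never matches anything. like() with % wildcards does substring matching (searchmatch() is an alternative):

index="myindex" source="/data/logs/log.json" "Calculation Complete"
| stats count(eval(like(MessageBody, "%my string"))) as My_String
        count(eval(like(MessageBody, "%your string"))) as Your_String
        count(eval(like(MessageBody, "%other string"))) as Other_String

Swapping stats for timechart with the same count(eval(...)) clauses turns the counts into a chart over time.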
Hello, I'm trying to make a report counting the number of interfaces available and used. I found a query that matches my need:

index=centreon check_command="Cisco-SNMP-Interfaces-Global-Status" service_description="Status_All-Interfaces" src_interface!="Ethernet*.*" src_interface!="Vlan*" src_interface!="mgmt*" src_interface!="port*" src_interface!="Null*" src_interface!="loopback*"
| rex field=host "ZSE-(?<loc>\w+)-(?<room>\w+).*"
| replace "1H" WITH "UC1" | replace "1E" WITH "UC1" | replace "2H" WITH "UC2" | replace "2F" WITH "UC2" | replace "6E" WITH "C6" | replace "6F" WITH "C6" | replace "6T" WITH "C6" | replace "4B" WITH "C4" | replace "4T" WITH "C4" | replace "4E" WITH "C4"
| eval site=loc+"-"+room
| stats count(src_interface) as tot_int by site
| appendcols [search index=centreon check_command="Cisco-SNMP-Interfaces-Global-Status" service_description="Status_All-Interfaces" src_interface!="Ethernet*.*" src_interface!="Vlan*" src_interface!="mgmt*" src_interface!="port*" src_interface!="Null*" src_interface!="loopback*" state_interface="up"
  | rex field=host "ZSE-(?<loc>\w+)-(?<room>\w+).*"
  | replace "1H" WITH "UC1" | replace "1E" WITH "UC1" | replace "2H" WITH "UC2" | replace "2F" WITH "UC2" | replace "6E" WITH "C6" | replace "6F" WITH "C6" | replace "6T" WITH "C6" | replace "4B" WITH "C4" | replace "4T" WITH "C4" | replace "4E" WITH "C4"
  | eval site=loc+"-"+room
  | stats count(state_interface) as tot_up by site]
| eval tot_free=tot_int-tot_up

My concern is that the frequency of data reception by Splunk is not stable (plus or minus 10 minutes). How can I base my search on only the last events received? Thanks!
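A minimal sketch of collapsing this into one pass over only the most recent event per interface: dedup keeps the latest event for each host/interface pair regardless of arrival jitter, and a conditional sum replaces the appendcols subsearch. The field names are taken from your query; keep your existing replace chain between the rex and the eval, where it is omitted here for brevity:

index=centreon check_command="Cisco-SNMP-Interfaces-Global-Status" service_description="Status_All-Interfaces" src_interface!="Ethernet*.*" src_interface!="Vlan*" src_interface!="mgmt*" src_interface!="port*" src_interface!="Null*" src_interface!="loopback*"
| dedup host src_interface
| rex field=host "ZSE-(?<loc>\w+)-(?<room>\w+).*"
| eval site=loc+"-"+room
| stats count as tot_int, sum(eval(if(state_interface="up",1,0))) as tot_up by site
| eval tot_free = tot_int - tot_up

Running it over a window comfortably wider than the jitter (e.g. the last 30 minutes) guarantees every interface reports at least once while dedup discards the older duplicates.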
Hi guys, we have SAP Solution Manager 7.2 and are looking to integrate it with AppDynamics for dashboarding purposes. Can anyone please assist with this? Regards, Ash
Hi team, we are upgrading Splunk from version 7.3.6 to 8.1.x. As per the procedure we upgraded the cluster master first, but when we try to log in there we cannot; it is not accepting the credentials.
1. Shall we upgrade the entire infrastructure first and then check?
2. Or do we need to fix this login issue first and then go to the next step?
Kindly suggest. Thanks & Regards, Abhijeet B.
Hi, I recently deployed Splunk Connect for Syslog in Docker, and my first candidate to use it with was our Citrix ADC VPX. Following the instructions at https://splunk.github.io/splunk-connect-for-syslog/main/sources/Citrix/ I see the logs correctly flowing into Splunk. Now it is time to get some useful alerts out of them. I thought about something very basic to start with:
- Detect when a failover between the two Citrix nodes happens.
- Detect when a virtual server is UP but a node of the load-balancing group has gone down.
- Detect when a virtual server is completely down, i.e. all nodes have gone down.
I am diving into the events trying to get some meaning out of them, without much luck. So far I have identified a few fields but nothing that makes much sense. Does anyone have additional information about these logs that I could reuse, or maybe some queries I could base mine on? Thanks a lot.
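A very rough starting sketch, with everything hedged: the index and sourcetype below are the SC4S defaults for Citrix ADC traffic as I understand them, and the keywords are guesses at the state-change strings the ADC emits; validate both against your actual events before building any of the three alerts on top of this:

index=netops sourcetype="citrix:netscaler:syslog" ("UP" OR "DOWN" OR "failover")
| rex "(?<object>(vserver|server|service)\s+\S+)"
| stats latest(_time) as last_seen latest(_raw) as last_event by object
| convert ctime(last_seen)

Once the real state-change messages are identified, each of the three alerts becomes a filtered variant of this search with a threshold on the count of down members.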