All Topics

Hello Splunkers, I have a question: I want to check the times at which my saved searches, scheduled reports, and alerts run. Is there a way to list the names of the searches and the times they run? Also, how can we find out which saved searches are running at the same time?

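For what it's worth, a minimal sketch of one way to see this: the scheduler writes a record to _internal for every scheduled run, so grouping on scheduled_time surfaces searches that fire in the same slot. The field names below are the standard scheduler sourcetype fields.

index=_internal sourcetype=scheduler (status=success OR status=skipped)
| stats count AS runs BY savedsearch_name scheduled_time
| eval slot = strftime(scheduled_time, "%F %T")   ``` human-readable schedule slot ```
| eventstats dc(savedsearch_name) AS searches_in_slot BY slot
| where searches_in_slot > 1   ``` keep only slots where more than one search fires ```
| sort slot

Dropping the where clause gives the full schedule listing rather than just the overlaps.
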
I want to select a team on a dashboard and feed that token into a search that finds all the applications they manage in a CSV lookup file, then search the tags fields in a 'metadata' index to get all the logs related to those applications. This is my search so far, but it returns blank results:

index=metadata [ | inputlookup snow_sys_applications.csv | where support_group="12345" ```$token$ would go here``` | table u_cloud_domain | format ]
| eval appdomain = mvfind('Tags{}.Key', "^Domain") ```find the Domain tag```
| eval AppDomain = mvindex('Tags{}.Value', appdomain) ```find the value of the Domain tag```
| search AppDomain=u_cloud_domain ```use the value to search the subsearch results```
| rex field=source ".*:(?<awsMetaType>.*)"
| table AppDomain awsMetaType id

The subsearch works on its own and returns a list of u_cloud_domain values: 0001, 0002, 0003, 0004, 0005. The main search also works on its own if I search for AppDomain="0005". They just won't work together. I suspect this is because u_cloud_domain is a list of values and AppDomain is a single value. I have tried with and without | format. What search or where commands do I need to get them to work? Also, is there a way to see the output of the subsearch to check it?

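One restructure that may help (a sketch, untested against your data): the subsearch filter is currently applied against the raw events, where AppDomain does not exist yet, so nothing can match. Moving the subsearch into a | search after the evals, and renaming the lookup field to AppDomain so the generated filter targets the right field, lines the two sides up. And yes: running the subsearch on its own with | format appended shows exactly the filter string it hands back.

index=metadata
| eval appdomain_idx = mvfind('Tags{}.Key', "^Domain$")   ``` find the Domain tag ```
| eval AppDomain = mvindex('Tags{}.Value', appdomain_idx)   ``` value of the Domain tag ```
| search [ | inputlookup snow_sys_applications.csv
    | where support_group="12345"   ``` $token$ would go here ```
    | rename u_cloud_domain AS AppDomain   ``` field name must match the one being filtered ```
    | fields AppDomain
    | format ]
| rex field=source ".*:(?<awsMetaType>.*)"
| table AppDomain awsMetaType id
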
I'm a Splunk PS consultant working with a client on a greenfield build (S1 SVA). They want to connect to an Oracle database, and we have installed DB Connect, Java, etc. After some JRE/JVM version troubleshooting and environment path challenges, it all works fine and can interrogate the Oracle instance.

The problem: after each Splunk reload, the DB connection fails. This is due to the port allocation and can be fixed by manually changing the port in the GUI (from 9998 to 9995, for example) and saving. That requires manual intervention each time, which is not sustainable. The port saves in /opt/splunk/etc/apps/splunk_app_db_connect/jars/<file>.vmopts

Has anybody else experienced this, please? It's unclear whether this is related to a path variable for Java, or to some sort of locking that prevents the port from being reused after a daemon reload.

I need to extract a time value from a log file where the value appears with a few different variations of characters around it. I'm struggling to handle all the variations in my regex extraction. Below are examples of each variation:

ChainedQuery elapsed time [90]ms
Elapsed time: 114ms
Elapsed time to get Service pool: 339
Elapsed Time: 69
,took 37ms

Is there a way to extract all the numeric values with one regex?

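One regex covers all five samples (a sketch; it assumes the number always follows either an "elapsed time" phrase or the word "took"):

| rex field=_raw "(?i)(?:elapsed time|took)\D*(?<elapsed_ms>\d+)"

(?i) makes the match case-insensitive, and \D* skips any non-digit filler such as " [", ": ", or " to get Service pool: " before the first number.
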
I want to know whether the following is possible, and how to do it:
1. Display the total number of calls per minute between two API endpoints.
2. Display the number of calls, out of the total number of calls, whose response time is above a threshold set in milliseconds.

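A sketch of one way to get both numbers in a single timechart. The index, the endpoint values, the field names endpoint and response_time_ms, and the 500 ms threshold are all assumptions to adapt to your data:

index=api_logs endpoint IN ("/serviceA", "/serviceB")   ``` hypothetical index and endpoints ```
| eval slow = if(response_time_ms > 500, 1, 0)   ``` 500 ms threshold; adjust as needed ```
| timechart span=1m count AS total_calls sum(slow) AS calls_over_threshold
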
I have an application which logs data in the following form:

2023-06-30T12:21:08Z DEBUG scalehandler Getting metrics from scaler {"scaledObject.Namespace": "my-namespace", "scaledObject.Name": "my-app", "scaler": "myScaler", "metricName": "http_count_total", "metrics": [], "scalerError": "error getting metrics"}

From the above _raw, I want to extract fields such as scaledObject.Name, scaler, and scalerError, which I would then use to create an alert. Could someone help me create new fields for the above-mentioned JSON fields using the rex function?

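A sketch with one rex per field, keyed on the quoted JSON keys in the sample above (spath would also parse the JSON portion directly, but this sticks to rex as asked; note that rex capture-group names cannot contain dots, hence scaledObject_Name):

| rex field=_raw "\"scaledObject\.Name\":\s*\"(?<scaledObject_Name>[^\"]+)\""
| rex field=_raw "\"scaler\":\s*\"(?<scaler>[^\"]+)\""
| rex field=_raw "\"scalerError\":\s*\"(?<scalerError>[^\"]+)\""
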
Hi team, logs are not coming into Splunk. The UF is working fine and is even connected to the indexers; inputs.conf and everything else seems correct. We are facing this issue for a few UFs only. Can you suggest something I should check? These are the warnings we are getting:

1. Search peer dallpspiap090m has the following message: Daily indexing volume limit exceeded. Per the Splunk Enterprise license policy in effect, search is disabled after 5 warnings over a 30-day window. Your Splunk deployment is subject to license enforcement. See License Manager for details.

2. Root Cause(s): Sum of 3 highest per-cpu iowaits reached red threshold of 15. Sum of 3 highest per-cpu iowaits reached yellow threshold of 7. Maximum per-cpu iowait reached red threshold of 10. Unhealthy instances: dallpshdap010m, mialvshdap010m.vtitel.net, dallvissap010m.vtitel.net, mialvissap030m.vtitel.net, dallvissap030m.vtitel.net, mialvissap010m.vtitel.net, dallvissap020m.vtitel.net, mialvissap020m.vtitel.net

3. Search Lag Root Cause(s): The percentage of non-high-priority searches lagged (67%) over the last 24 hours is very high and exceeded the yellow threshold (40%) on this Splunk instance. Total searches that were part of this percentage = 268303. Total lagged searches = 182113.

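One quick check (a sketch; substitute one of the affected UF hostnames): every UF also forwards its own _internal logs, so if those arrive while your configured inputs do not, connectivity is fine and the problem is more likely the input definitions or the license/iowait issues quoted above.

index=_internal host=<affected_uf_hostname> earliest=-1h
| stats latest(_time) AS last_internal_event count BY sourcetype
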
  {"timestamp":"2023-06-28T11:00:13.545Z","message":"Time taken for Method1 Call : 3120","class":"com.xyz.enterprise.plans.client.v20.D2CClient","thread":"reactor-http-nio-1","level":"DEBUG","servic... See more...
  {"timestamp":"2023-06-28T11:00:13.545Z","message":"Time taken for Method1 Call : 3120","class":"com.xyz.enterprise.plans.client.v20.D2CClient","thread":"reactor-http-nio-1","level":"DEBUG","service":"product-aggregator-models","traceId":"4b2f19f625adf891","spanId":"4b2f19f625adf891"} {"timestamp":"2023-06-28T11:00:13.901Z","message":"Time taken for Method2 : 3476","class":"com.xyz.enterprise.plans.client.v20.D2CClient","thread":"reactor-http-nio-1","level":"DEBUG","service":"product-aggregator-models","traceId":"4b2f19f625adf891","spanId":"4b2f19f625adf891"} {"timestamp":"2023-06-28T11:00:14.43Z","message":"Time taken for Method3 Services : 4082","class":"com.xyz.enterprise.plans.client.v20.HpassClient","thread":"reactor-http-nio-4","level":"DEBUG","service":"product-aggregator-models","traceId":"4b2f19f625adf891","spanId":"4b2f19f625adf891"} {"timestamp":"2023-06-28T11:00:14.454Z","message":"Time taken for Method4 : 4","class":"com.xyz.enterprise.plans.service.v20.InvokeAndCombineHpassD2CService","thread":"reactor-http-nio-4","level":"DEBUG","service":"product-aggregator-models","traceId":"4b2f19f625adf891","spanId":"4b2f19f625adf891"}   From Above Logs I wanted to create a table as below how to achieve it ? traceId Method1 Method2 Method3 Method4 4b2f19f625adf891 3120 3476 4082 4
Hi All, I am fairly new to Splunk and I have a bit of a challenge in front of me which I am not able to resolve. I have the following event:

30/06/2023 12:23:15 (01) >> AdyenProxy::AdyenPaymentResponse::ProcessPaymentFailure::Additional response -> Message : 102 Shopper cancelled pin entry ; Refusal Reason : 102 Shopper cancelled pin entry

I am using the regex rex field=_raw "AdyenPaymentResponse:.+\sReason\s:\s(?<Failure_Message>.+)" to extract the error message, using the refusal reason as the keyword, because in some places the text printed under Message is irrelevant. The problem I am facing is that in some events the Refusal Reason field is empty, and I then need to capture the value under Message instead, e.g.:

30/06/2023 12:18:39 (01) >> AdyenProxy::AdyenPaymentResponse::ProcessPaymentFailure::Additional response -> Message : MAINTENANCE ; Refusal Reason :

I am trying to extract all the error messages into one field called Failure_Message, capturing the Message part in that same field when Refusal Reason is empty. Is that possible?

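A sketch of one fallback approach: extract both candidates, require at least one non-whitespace character for the refusal reason (so the empty case fails to match and yields null), then coalesce:

| rex field=_raw "Message\s*:\s*(?<msg>[^;]+?)\s*;"   ``` text between "Message :" and ";" ```
| rex field=_raw "Refusal Reason\s*:\s*(?<refusal>\S.*)"   ``` matches only when non-empty ```
| eval Failure_Message = coalesce(refusal, msg)   ``` prefer refusal, fall back to msg ```
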
Hi everyone, we have action rules in our Notable Event Aggregation Policies that send email notifications. The emails are received, but they do not include the specified search field data. The subject and body reference some of the search fields that exist (and are populated) in the episodes, in the following format: $result.<searchfield>$, e.g. $result.Message$. But the data from those fields is not included in the emails we receive. We have tried several different fields with the same result. Any idea what we are missing here? Thanks.

Hi, I need some help. I need to add a column that was newly added to the SQL data. Below is the query:

| savedsearch ABC
| join type=left BS_ID [| search index="PQR" source=XYZ
    | rename BS_CODE as BS_ID SERVICE_OWNER as "System Owner" BUSINESS_OWNER as "Business Owner" SERVICE_SUBCATEGORY as Function SDM_FULLNAME as SDM
    | sort LOGICAL_NAME
    | eval Application = DESCRIPTION
    | rex mode=sed field=Application "s/^Managed//g"
    | rex mode=sed field=Application "s/Application$//g"
    | rex mode=sed field=Application "s/application$//g"
    | eval Application = trim(Application)
    | streamstats count as NO by BS_ID
    | eventstats max(NO) as MaxTotal by BS_ID
    | where NO=MaxTotal
    | eval Function=case(Function="Service Excellence COE" and Application="Medical Insights Reporting","Service Excellence CoE",1=1,Function)
    | table BS_ID Application Function SDM "System Owner" "Business Owner"]
| lookup countries.csv name as COUNTRY outputnew latitude, longitude, name
| eval COUNTRY = if(isnull(COUNTRY),"NA",COUNTRY)
| eval DEPARTMENT_LONG_NAME = if(isnull(DEPARTMENT_LONG_NAME),"NA",DEPARTMENT_LONG_NAME)
| eval DEPARTMENT_SHORT_NAME = if(isnull(DEPARTMENT_SHORT_NAME),"NA",DEPARTMENT_SHORT_NAME)

My ABC saved search has a newly added column, Category. I need to get it into this search.

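For what it's worth, a left join keeps every field from the left-hand (ABC) side, so a column newly added to the saved search should already survive the join as long as nothing downstream drops it. A quick way to confirm, assuming Category is the new column's name:

| savedsearch ABC
| table BS_ID Category   ``` if Category appears here, it will also flow through the left join ```

If it shows up here but not in the full query, check that no later | table or | fields clause omits it.
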
Hi All, this is my search, and the bold words are token-passed values:

| search ASSIGNMENT IN ("SYS LI SI FINANCIAL PLANNING ,SYS LI SI ORDERING ROB") AFFECTED_CI="*" STATUS="*" PRIORITY="*"

I need the search to come out like this instead:

| search ASSIGNMENT IN ("SYS LI SI FINANCIAL PLANNING" ,"SYS LI SI ORDERING ROB") AFFECTED_CI="*" STATUS="*" PRIORITY="*"

Is there an option for this in multiselect (Dashboard Studio)?

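In Simple XML this is handled by the multiselect's value prefix/suffix and delimiter options, which quote each value individually before the token is substituted; a sketch of that shape is below. Dashboard Studio's multiselect input exposes equivalent prefix/suffix/delimiter settings in its JSON options, though the element names here are the Simple XML ones, and the token name is hypothetical.

<input type="multiselect" token="assignment_tok">
  <label>Assignment</label>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter>, </delimiter>
</input>

With these set, $assignment_tok$ expands to "SYS LI SI FINANCIAL PLANNING", "SYS LI SI ORDERING ROB", which drops straight into ASSIGNMENT IN ($assignment_tok$).
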
Good morning,

I'm working on my first single sign-on setup, and the instructions/KB on the Splunk site mention this:

7. Add the path to the ClientCert parameter in the authentication configuration: clientCert = $SPLUNK_HOME/etc/auth/samlRequestSigningCerts/samlCert.pem

However, I'm not sure which PEM cert that refers to, as the instructions only have us create the following:

$SPLUNK_HOME/etc/auth/samlRequestSigningCerts/samlRequestSigningCert.pem

Has anyone got any advice, or could you let me know where I should get samlCert.pem from (or am I missing something)?

Thanks in advance.

Link to the KB article: https://docs.splunk.com/Documentation/Splunk/9.0.5/Security/SAMLSHC

Hi, I need an expert opinion on migrating Splunk data that is encrypted. One of our customers wants to migrate their Splunk data from their existing storage platform to a different one, but the challenge is that the Splunk data is encrypted. Can someone suggest a tool which can decrypt inline during a migration? Or is it possible to get Splunk to decrypt all of the data before we migrate? Your input is highly appreciated. Thank you.

We have a HF which was configured to pull data from various sources, but it suddenly stopped receiving data from a few of them. How can I troubleshoot this issue?

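As a first pass (a sketch; substitute the HF's hostname), the HF's own splunkd errors grouped by component usually point at the failing input or connection:

index=_internal host=<hf_hostname> source=*splunkd.log* log_level IN (ERROR, WARN) earliest=-4h
| stats count BY component log_level
| sort - count
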
Hi guys, I am currently seeing that processing queues on one of my heavy forwarders appear to block during a 6-hour period at night, when the log volume being ingested is much lower (during this period, ingested volume drops from 10 million to under 3 million events). Are there any obvious reasons why queue block ratios would increase at the same time that throughput/event volume decreases? I'd have guessed the opposite would generally be expected.

We can see that block ratios increase at 1:00 AM with:

index=_internal source=*metrics.log group=queue blocked=true host=HF max_size_kb>0
| timechart span=30m@m count by name

At the same time we can see throughput decrease with:

index=_internal sourcetype=splunkd host=HF group=per_sourcetype_thruput series=*
| timechart sum(kb) by series

[Screenshots: current queue config and queue context over the last 24h]

I have also noticed a lot of 'Could not send data to output queue (parsingQueue), retrying' errors around the same time:

index=_internal host=HF source=*splunkd* (log_level=ERROR OR log_level=WARN) event_message="Could not send data to output queue (parsingQueue), retrying..."
| timechart span=5m@m count by event_message

I would appreciate any ideas on why queue block ratios would increase at the same time that throughput/event volume decreases, and also any solutions for getting average queue block ratios as close to 0 as possible. Queues currently appear to block throughout the day, with the highest block ratios occurring from 1:00 to 8:00 AM.

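For quantifying the pressure, one sketch that charts average queue fill instead of counting blocked events, using the current_size_kb/max_size_kb fields that group=queue metrics already carry:

index=_internal source=*metrics.log group=queue host=HF max_size_kb>0
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=30m@m avg(fill_pct) BY name   ``` sustained high fill shows which queue backs up first ```

Seeing which queue fills first (parsing, aggregation, typing, or index) usually narrows down whether the bottleneck is parsing work on the HF or the downstream output.
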
Hello Team,

Please can you suggest how to set a token based on the values chosen in other tokens? Here is the requirement: I want the token "TimeSpan" to be computed from the selections of token "timefrom" and token "timeto", using the formula TimeSpan = ($timeto$ - $timefrom$) * 60. For example: TimeSpan = (4 - 2) * 60 = 120.

<input type="dropdown" token="instance" searchWhenChanged="true">
  <label>Instance</label>
  <choice value="instance1">instance1</choice>
  <choice value="instance2">instance2</choice>
  <default>instance1</default>
  <initialValue>instance1</initialValue>
</input>
<input type="dropdown" token="timefrom" searchWhenChanged="true">
  <label>Time Range From (24 Hour)</label>
  <choice value="1">01:00</choice>
  <choice value="2">02:00</choice>
  <choice value="3">03:00</choice>
  <choice value="4">04:00</choice>
  <choice value="5">05:00</choice>
  <choice value="6">06:00</choice>
  <choice value="7">07:00</choice>
  <default>2</default>
  <initialValue>2</initialValue>
</input>
<input type="dropdown" token="timeto" searchWhenChanged="true">
  <label>Time Range To (24 Hour)</label>
  <choice value="1">01:00</choice>
  <choice value="2">02:00</choice>
  <choice value="3">03:00</choice>
  <choice value="4">04:00</choice>
  <choice value="5">05:00</choice>
  <choice value="6">06:00</choice>
  <choice value="7">07:00</choice>
  <default>4</default>
  <initialValue>4</initialValue>
</input>
<input type="text" token="TimeSpan" searchWhenChanged="true">
  <label>Interval (in min)</label>
  <default></default>
</input>

@ITWhisperer @niketn

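Simple XML <change> handlers can do this: an <eval> inside <change> runs whenever the input changes and can assign the result of an arithmetic expression to another token. A sketch, assuming the numeric choice values shown above; note that both dropdowns need a handler so TimeSpan is recomputed whichever side changes:

<input type="dropdown" token="timefrom" searchWhenChanged="true">
  <!-- label and choices as above -->
  <change>
    <eval token="TimeSpan">($timeto$ - $value$) * 60</eval>
  </change>
</input>
<input type="dropdown" token="timeto" searchWhenChanged="true">
  <!-- label and choices as above -->
  <change>
    <eval token="TimeSpan">($value$ - $timefrom$) * 60</eval>
  </change>
</input>

With timefrom=2 and timeto=4 this sets TimeSpan to 120, matching the worked example.
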
I have a lookup file which is updated periodically and has three columns:

Source, Dest, Contact
a, k, 111mail.com
b, l, 112mail.com
c, m, 113mail.com

I want to create one alert that iterates over every row, uses Source and Dest as filters, and sends an email to that row's address. For example:

index=test sourcetype=main Source=a Dest=k | table * -> send to 111mail.com
index=test sourcetype=main Source=b Dest=l | table * -> send to 112mail.com
index=test sourcetype=main Source=c Dest=m | table * -> send to 113mail.com

Thanks, Lis

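One pattern for this (a sketch; the lookup file name contact_routing.csv is an assumption, and the escaped quoting inside map is the fiddly part): map runs the inner search once per lookup row, substituting $Source$, $Dest$, and $Contact$ from that row, and sendemail is a standard SPL command.

| inputlookup contact_routing.csv   ``` hypothetical lookup name with Source, Dest, Contact columns ```
| map maxsearches=20 search="search index=test sourcetype=main Source=\"$Source$\" Dest=\"$Dest$\" | sendemail to=\"$Contact$\" subject=\"Alert: $Source$ -> $Dest$\" sendresults=true inline=true"

Scheduled as a single saved search, this behaves like one alert that fans out one inner search (and email) per lookup row.
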
Hello,

I have this search for now:

index= host= sourcetype=csv source=C:\\2023-CW25_5.csv
| join type=left AssigneeID [inputlookup key_user.csv | table NT_Name | where AssigneeID = NT_Name ]

I have two CSV files and need to compare the AssigneeID column from the 2023-CW25_5.csv file to the NT_Name column in the key_user.csv file and return the values from the Cluster column. How can I do that?

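A lookup is usually simpler (and faster) than a join for this; a sketch, assuming key_user.csv is configured as a lookup file and contains NT_Name and Cluster columns:

index= host= sourcetype=csv source=C:\\2023-CW25_5.csv
| lookup key_user.csv NT_Name AS AssigneeID OUTPUT Cluster   ``` match AssigneeID to NT_Name, return Cluster ```
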
Good morning! I am a beginner in Splunk. The query below only tells me whether a HF is DOWN or HEALTHY:

`internal` source="/opt/splunk/var/log/splunk/metrics.log" fwdType=full
| search NOT hostname=*cbl* NOT hostname=*cwn* NOT hostname=*SCO* NOT hostname=*SGW* NOT hostname=*spft* NOT hostname=*tsps* NOT hostname=*psps* NOT hostname=*puba* NOT hostname=aue11pspf001* NOT hostname=aue12pspf001*
| dedup hostname
| stats count as historic_count
| appendcols [search `internal` source="/opt/splunk/var/log/splunk/metrics.log" fwdType=full earliest=-5m latest=now
    | search NOT hostname=*cbl* NOT hostname=*cwn* NOT hostname=*SCO* NOT hostname=*SGW* NOT hostname=*spft* NOT hostname=*tsps* NOT hostname=*psps* NOT hostname=*puba* NOT hostname=aue11pspf001* NOT hostname=aue12pspf001*
    | dedup hostname
    | stats count as recent_count]
| eval count = historic_count - recent_count
| rangemap field=count low=0-0 severe=1-100
| eval health = if(count>0, "HF DOWN", "HEALTHY")
| eval count = health
| table count health range

Could you please help me modify this to find which HOST is DOWN? An immediate response would be highly appreciated. Thanks in advance.

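A sketch of a per-host variant: rather than diffing two counts, track each host's most recent event and flag hosts that have gone quiet. The exclusion list is abbreviated below (keep yours as-is), and the 5-minute threshold mirrors the original's earliest=-5m window:

`internal` source="/opt/splunk/var/log/splunk/metrics.log" fwdType=full NOT hostname=*cbl* NOT hostname=*cwn*   ``` ...rest of the exclusions... ```
| stats latest(_time) AS last_seen BY hostname
| eval minutes_silent = round((now() - last_seen) / 60, 1)
| eval health = if(minutes_silent > 5, "HF DOWN", "HEALTHY")
| sort - minutes_silent
| table hostname minutes_silent health
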