
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi All, I have scheduled a Splunk report to run at 11 AM IST every day (cron schedule: 0 11 * * *). The Search Head time zone is IST. But when I checked the saved search, the next scheduled time is shown as 4:30 PM IST even though the advanced settings have 0 11 * * *. Until yesterday it was fine, but I suddenly started seeing this issue. Does anyone have an idea what is going wrong? Regards, Poojitha NV
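A hedged note for anyone hitting the same symptom: the scheduler computes the next run time in the time zone of the user who owns the saved search, not necessarily the search head's system zone, and a 5.5-hour drift is what you would see if the owner's zone has effectively fallen back to UTC. A minimal check, assuming REST access and substituting your own report name for the placeholder:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search title="<your report name>"
| table title cron_schedule next_scheduled_time eai:acl.owner

Comparing next_scheduled_time with the owning user's time zone preference is usually enough to confirm or rule this out.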
I have this kind of log: Mar 18 02:32:19 MachineName python3[948]: DEBUG:root:... Dispatching: {'id': '<id>', 'type': 'threat-detection', 'entity': 'threat', 'origin': '<redacted>', 'nature': 'system', 'user': 'system', 'timestamp': '2025-03-17T19:32:17.974Z', 'threat': {'id': '<redacted_uuid>', 'maGuid': '<redacted_guid>', 'detectionDate': '2025-03-17T19:32:17.974Z', 'eventType': 'Threat Detection Summary', 'threatType': 'non-pe-file', 'threatAttrs': {'name': '<filename>.ps1', 'path': 'C:\\Powershell\\Report\\<filename>.ps1', 'md5': '<redacted_hash>', 'sha1': '<redacted_hash>', 'sha256': '<redacted_hash>'}, 'interpreterFileAttrs': {'name': 'powershell.exe', 'path': 'C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe', 'md5': '097CE5761C89434367598B34FE32893B', 'sha1': '044A0CF1F6BC478A7172BF207EEF1E201A18BA02', 'sha256': 'BA4038FD20E474C047BE8AAD5BFACDB1BFC1DDBE12F803F473B7918D8D819436'}, 'severity': 's1', 'rank': '100', 'score': '50', 'detectionTags': ['@ATA.Discovery', '@ATA.Execution', '@ATE.T1083', '@ATE.T1059.001', '@MSI._apt_file_psgetfiles', '@ATA.CommandAndControl', '@ATE.T1102.003', '@MSI._process_PS_public_repos', '@MSI._process_ps_getchilditem', '@ATE.T1105', '@ATE.T1071.001', '@MSI._process_pswebrequest_remotecopy', '@ATA.DefenseEvasion', '@ATE.T1112', '@MSI._reg_ep0029_intranet'], 'contentVersion': None}, 'firstDetected': '2025-03-17T19:32:17.974Z', 'lastDetected': '2025-03-17T19:32:17.974Z', 'tenant-id': '<redacted_tenant_id>', 'transaction-id': '<redacted_transaction_id>'} I want "Dispatching" to be required text, so the extraction is only applied to logs that contain this keyword. I want to parse the JSON part so I can use its fields, like json_data.threatAttrs.name. Any suggestions? I tried the regex editor UI, but it broke down because it couldn't differentiate the "name" fields, since the same field name appears more than once. So I am thinking of using props.conf and transforms.conf, but I don't know how. Any help would be appreciated!
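A hedged sketch of one possible props/transforms approach, assuming a search-time field extraction and a hypothetical sourcetype name your_sourcetype. Note also that the payload above is a Python dict repr (single quotes, None) rather than strict JSON, so it may need touch-up before spath will parse it.

props.conf:
[your_sourcetype]
REPORT-dispatch_payload = extract_dispatch_payload

transforms.conf:
[extract_dispatch_payload]
# capture everything after "Dispatching: " into one field; events without the keyword simply don't match
REGEX = Dispatching:\s+(\{.*\})
FORMAT = json_data::$1

At search time, a rough (lossy) quote fix plus spath, as a sketch only:

... | eval json_data=replace(replace(json_data, "'", "\""), ": None", ": null")
    | spath input=json_data path=threat.threatAttrs.name output=threat_file_name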
Hi. I checked the issue below, but I couldn't find a proxy option in the code. https://github.com/bentleymi/ta-webtools/issues/17 Is the proxy option (-x or --proxy) of Linux curl available in the WebTools Add-on? If there is another way that I don't know about, please share it. @jkat54 Webtools Add-on
I've created a new source type with a regex. It was working, but I found an edge case where it was broken. I rewrote the regex using a capture group, but the group doesn't seem to be getting applied. Can someone tell me if this should work? Here is my regex: s/"message":\s*"{([\s\S]*)}"/"data": {$1}/g placed in "SEDCMD-a". My JSON data is as follows: { "message": "{ "test": "test data" }" } and my transformed data ends up like this: { "data": {$1}} It isn't making the replacement with the capture group. Am I doing something wrong? Should this work? Thanks, -Tim
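A hedged observation: SEDCMD follows sed-style replace syntax, where backreferences in the replacement side are written as \1 rather than $1, so a sketch of the same rule (assuming the sourcetype stanza name) would be:

[your_sourcetype]
SEDCMD-a = s/"message":\s*"{([\s\S]*)}"/"data": {\1}/g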
I have a few use cases where adding additional information to the "Description" field in ServiceNow incidents would be beneficial. Adding description="xxx" under Custom Fields unfortunately does not seem to work, and the last update I saw regarding this was a forum post from 2018 where someone suggested editing the .py files in the add-on to create this functionality. Are there any current or future plans to add the "description" field to the official Splunk Add-on for ServiceNow?
Good day, I'm trying to think of how I can write a search to find a specific event and then take all the events surrounding that specific event within a time frame and group them for analysis. For example, assuming web traffic, say I write a search to find DNS traffic from a host attempting to resolve a specific hostname. We'll go with google.com on the computer myHost.mydomain.com. I find 20 events for the last 24 hours, each with its own _time field for when myHost.mydomain.com attempted to resolve the hostname for google.com. Now, I want to take the _time field for each of those events and grab specified fields from events within the 10 seconds before and after that _time value. Doing this, I'd expect to create ~20 bins (assuming the initial 20 events don't overlap), each with the raw data or fields from the events immediately preceding and following the DNS event. The example aside, the issue I'm trying to solve currently is this: I have 13 hosts that each attempted to resolve the name for a domain at various times. I need to see what shared sites those 13 hosts were visiting just before the DNS traffic. For example, maybe they were all on Facebook.com, and therefore I can draw a conclusion that Facebook.com was the one prompting the DNS traffic. Currently, I'm tossing around in my head how I might use a subsearch, bin, or transaction command to do this, but I'm not sure, and I'm happy to take any advice from others on what kind of search I need to write.
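A hedged sketch of one way to approach this with map, which re-runs an inner search once per DNS event using that event's _time to set a ±10 second window. Index and field names below (index=dns, index=web_proxy, src) are hypothetical placeholders:

index=dns query="suspicious-domain.com"
| eval earliest=_time-10, latest=_time+10
| map maxsearches=50 search="search index=web_proxy earliest=$earliest$ latest=$latest$ src=$src$ | table _time src url"
| stats values(url) as urls_seen_nearby by src

map substitutes $fieldname$ tokens from each outer result, so each inner search runs against its own window; it does have a per-search limit, hence maxsearches, which makes it a fit for a handful of anchor events rather than thousands.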
Hi Team, I am using the Splunk OTel Collector daemonset to collect logs from containers and send them to Splunk with some transformations. I am trying to convert the following log entry, which is in string format: Body: Str(2025-03-05T22:46:16.526842773Z stdout F {"workspace":"#1234","service":{"updated_at":1700246094,"log_type":"kong-apilog"}}   Next I need to parse the log entry as {"timestamp": 2025-03-05T22:46:16.526842773Z, "log_entry": "stdout", "log_type": "F", "log": {"workspace":"#1234","service":{"updated_at":1700246094,"log_type":"kong-apilog"}} } The following is my config:   filelog/kong-logs: include: - /var/log/containers/kong-*.log - type: regex_parser regex: ^(?P<time>[^ ]+) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) ?(?P<log>.*)$ parse_from: body storage: file_storage   transform: error_mode: ignore log_statements: - context: log statements: - set(attributes["log"], ParseJSON(attributes["log"])) So far I am able to parse the log attribute alone into JSON, but I am not able to construct the full JSON structure as mentioned above, and I am also facing an error converting the time attribute, which is in string format, into the timestamp field using the following transformer: - set(time, Time(attributes["time"], "%Y-%m-%dT%H:%M:%S.%9N%Z")) Since my timestamp is in nanoseconds, I need to parse it with nanosecond precision. Can someone please help me achieve the desired output? Thanks, Vamsi
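A hedged sketch of one alternative: the filelog receiver's regex_parser operator can set the record timestamp itself via an embedded timestamp block, which sidesteps the OTTL Time() precision problem by using a Go time layout with nanoseconds, and a json_parser operator can turn the captured log attribute into a nested map. Treat this as an untested sketch of the operator syntax rather than a verified config:

filelog/kong-logs:
  include:
    - /var/log/containers/kong-*.log
  operators:
    - type: regex_parser
      regex: ^(?P<time>[^ ]+) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) ?(?P<log>.*)$
      parse_from: body
      timestamp:
        parse_from: attributes.time
        layout_type: gotime
        layout: "2006-01-02T15:04:05.999999999Z07:00"
    - type: json_parser
      parse_from: attributes.log
      parse_to: attributes.log

With the timestamp handled in the receiver, the transform processor only has to rename/arrange the remaining attributes into the desired structure.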
Hello, I have a Classic Dashboard with a table that retrieves data using the following search: index=myindex| lookup mylookup.csv field1 OUTPUT field2 Through JavaScript, I update the mylookup.csv file using outputlookup, and after the update is completed, I trigger a refresh of the search in the panel using: var mainSearch = mvc.Components.get("my_table_search"); if (mainSearch) { mainSearch.startSearch(); }   Even though the data is correctly written into the lookup (mylookup.csv), the table does not reflect the updated data when the search is refreshed.   How can I ensure that Splunk reloads the lookup with the latest data in the table after my JavaScript updates it? Is there a way to force Splunk to bypass any caching of the lookup file?   TY
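A hedged first check, to separate a caching problem from a timing problem: right after the JavaScript update reports completion, run the lookup read on its own and compare it with what the panel shows (lookup name as in the post):

| inputlookup mylookup.csv

If inputlookup already returns the new rows while the refreshed panel still shows stale values, the panel search is probably being re-dispatched before the outputlookup job has fully finished writing, so triggering startSearch() from the update search's completion handler rather than immediately after submitting it is worth trying.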
How do I link, for example, "Service XX down - action required" to a Knowledge Article containing the alert instruction? To summarize: I want the knowledge article containing the alert instruction attached to the incident in ServiceNow. Thanks in advance
Hello: I have a query that extracts a set of 5 request_ids based on certain criteria. I then need to include these request ids in a subsearch using the "IN" operator. I build up the string for the search using the following: | stats list(request_id) as req_id_list | eval req_id_clause="(".mvjoin(req_id_list, ",").")" I then use it in my query as follows: | search req_id IN $req_id_clause$ However, Splunk interprets $req_id_clause$ as a literal string, i.e. "(req_id1, req_id2...)", and I get an error. What are my options to handle this? Thanks!
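A hedged alternative that avoids building the IN clause as a string: a subsearch that returns a single field is expanded by the outer search into (field="a") OR (field="b") OR ..., which has the same effect as IN. Sketch, with index names and the selection criteria left as placeholders:

index=your_index sourcetype=your_sourcetype
    [ search index=your_index <criteria that pick the 5 requests>
      | dedup request_id
      | fields request_id
      | rename request_id as req_id ]

The rename makes the subsearch emit req_id=... terms so they match the field name used in the outer search.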
When I try to install the UF for AIX, it fails to extract, with a checksum error: AIXSERVER:/nim/media/SOFTWARE/splunk/Splunk-9.4.0 >pc.tgz|tar -xvf *     < tar: 0511-169 A directory checksum error on media; 0 not equal to 72514. AIXSERVER:/nim/media/SOFTWARE/splunk/Splunk-9.4.0 > I have the same problem with the 9.4.1 bits too. Does anyone have any idea what I can do?
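A hedged note: the UF package is a gzip-compressed tar, and AIX's native tar typically reports a "directory checksum error" when it is handed the still-compressed bytes (or the wrong input) instead of a decompressed stream. A sketch of the usual extraction, assuming gzip tools are available and with the exact downloaded filename substituted:

gunzip -c splunkforwarder-9.4.0-<build>-AIX-powerpc.tgz | tar -xvf -

GNU tar with the -z flag, if installed, is an equivalent alternative.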
I have to use cProfile to get profiling details for my custom generating command. I could not install cProfile in the Splunk Python. So could anybody help if they know how to install a Python library in Splunk and use it in a custom command?
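A hedged note: cProfile is part of the Python standard library, so it normally does not need to be installed at all; Splunk's bundled Python should be able to import it directly. A minimal sketch of profiling a block of custom-command code (the output path is a hypothetical example):

# sketch: profile part of a custom command with the stdlib profiler
import cProfile
import pstats
import io

profiler = cProfile.Profile()
profiler.enable()

# ... the code you want to measure, e.g. the body of your generate() loop ...

profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(20)

# custom commands can't print freely to stdout without corrupting results,
# so write the report to a file instead
with open("/tmp/my_command_profile.txt", "w") as fh:
    fh.write(stream.getvalue())

If a genuinely third-party library were needed, the usual pattern is to ship it inside the app (for example in the app's bin or lib directory) so it is on sys.path for the script, rather than installing into Splunk's Python environment.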
Need help with the query below: index=na sourcetype=na:co state=down host_state_type="HARD" [| tstats prestats=f values(host) as host_name WHERE index=df AND host!=1* | eval host_name=lower(host_name) | mvexpand host_name ] | stats count as DownMins by host_name | sort - DownMins | head 25 | addcoltotals label=Total labelfield=host_name When I run the query for the last 24 hours it shows results, but when I reduce the time range to less than 24 hours it shows no output, even though this query previously ran fine. Also, when I run just index=na sourcetype=na:co state=down host_state_type="HARD" it shows output for the last 24 hours, and when I search for the same host with index=df host="ooo" I get results, but when I combine both I get no results for ranges under 24 hours. I can also see in the backend that those hosts are down, but they are not showing up in the Splunk query.
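A hedged diagnostic step: run the subsearch on its own over the same shortened range and confirm it still returns host_name values that match what the outer events carry; if it returns nothing for that window, the outer search will silently match nothing. Sketch (the -4h window is just an example):

| tstats values(host) as host_name WHERE index=df AND host!=1* earliest=-4h
| eval host_name=lower(host_name)
| mvexpand host_name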
Hello, We have been using Elastic until now, and there is a plan to migrate to Splunk Observability. In Elastic we have a feature where an application can store its logs by writing Logstash pipelines and then create reports and dashboards; for example, a Kafka application uses it to store its logs in Elastic. Do we have such a feature here?
I'm trying to create a report that includes the following information and want to schedule it to run monthly. I need to know how I can gather the information from Splunk. How many events are observed by Splunk for a month? Of those, how many are internal Splunk events? How many events are from log sources? Total number of notables observed by month. Classification of notables based on severity. What is the notable generation time? What is the time the notable was assigned to an analyst? What is the time the analyst responded to the notable, and what was the response? What time was the notable closed? As of now I'm going through `notable`, but I need more information on how this can be navigated. Your comments would be appreciated. Thanks
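A hedged starting point for the volume portion, assuming Enterprise Security conventions (the `notable` macro and the urgency field); verify index names in your own environment:

| tstats count where index=* OR index=_* earliest=-1mon@mon latest=@mon by index

`notable` | stats count by urgency

The first gives event volumes per index for the previous calendar month (internal indexes are the ones starting with an underscore); the second gives notable counts by severity/urgency. For assignment, response, and closure times, ES records status changes through Incident Review, so comparing the timestamps of those status-change records against the notable's own _time is the usual way to derive time-to-assign and time-to-close.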
I have some rather large JSON data payloads being sent over to Splunk. I've seen payloads around 1MB in size. It took me a while to get field extraction to work most of the time. The main thing was to create a new source type which mimics _json (or the json_no_timestamp one) and set TRUNCATE = 0 (which might not be the best thing). Field extraction has been working quite well. I then duplicated that source type and set up a couple of regexes to transform some of the data. Field extraction stopped working with the new source type (only with the large payloads). I switched back to the original source type and field extraction works again. I'll note that the data being sent in this test is not being transformed by the regex, since the fields don't exist in this particular set of test data. If I send a smaller payload, field extraction does work properly (even when data is actually transformed by the regex). Can anyone suggest something that I could look at, or explain why including regex in the source type that doesn't do any transform of the data might stop field extraction from working?
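A hedged place to look: search-time key/value and JSON extraction is bounded by limits.conf, and large events can silently exceed those bounds. Treat the stanza below as a sketch of which knobs to inspect (defaults vary by version), not as recommended values:

# limits.conf (search head)
[kv]
maxchars = 10240          # max characters of an event that automatic KV extraction will scan
limit = 100               # max auto-extracted fields per event

[spath]
extraction_cutoff = 5000  # automatic JSON/spath extraction stops after roughly this many bytes

Comparing these against the ~1MB payloads, and diffing the two sourcetypes' props.conf stanzas for differences in KV_MODE or TRANSFORMS entries, may explain why only the large events lose their fields under the duplicated sourcetype.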
Hi, how do I convert 2025-03-13T11:03:38Z to the format "%d/%m/%Y %I:%M:%S"? I have tried this, but it didn't work: | eval Lastevent=strftime(last_seen, "%d/%m/%Y %I:%M:%S %p")
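A hedged note: strftime expects an epoch number, so if last_seen holds the ISO-8601 string it has to be parsed with strptime first. A sketch, assuming last_seen contains exactly the string shown (UTC, "Z" suffix):

| eval Lastevent=strftime(strptime(last_seen, "%Y-%m-%dT%H:%M:%SZ"), "%d/%m/%Y %I:%M:%S %p")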
Hi  The problem is token substitution in the dropdown input. Specifically: Issue with entityToken Substitution: The first value of the dropdown choice (e.g., target or CB) is not being correctly substituted into the $entityToken$ token. <form version="1.1" theme="dark"> <label>Stats local Clone1</label> <fieldset submitButton="true"> <input type="dropdown" token="entityTokenFirst"> <label>Select Data Entity</label> <!-- Set two values for each choice --> <choice value="target,*-test-targetf">Target </choice> <choice value="CB,*-test-cb">CB</choice> <default>target,*-test-targetf</default> <change> <!-- Split the value and set tokens for both parts --> <set token="entityLabel">$label$</set> <eval token="searchName">mvindex(split($value$, ","), 1)</eval> <eval token="entityTokenFirst">mvindex(split($value$, ","), 0)</eval> </change> </input> <input type="time" token="timeToken"> <label>Select Time Range</label> <default> <earliest>-24h@h</earliest> <latest>now</latest> </default> </input> </fieldset> <row> <panel> <single> <title>Distinct Consumer Count for $entityLabel$</title> <search> <query> index="" source="**" | spath path=test.nsp3s{} output=nsp3s | mvexpand nsp3s | spath input=nsp3s path=Name output=Name | spath input=nsp3s path=DistinctAdminUserCount output=DistinctAdminUserCount | search Name=$searchName$ | sort -_time | head 1 | fields DistinctAdminUserCount </query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </single> </panel> </row> <row> <panel> <title>Total Request :</title> <single> <search> <query> index="$indexToken$" source IN ("*-data-$stageToken$-$entityTokenFirst$") msg=":data:invoke" | stats count </query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> <refresh>60m</refresh> <refreshType>delay</refreshType> </search> <option name="colorMode">none</option> <option name="drilldown">none</option> <option name="height">317</option> <option name="rangeColors">["0xcba700","0xdc4e41"]</option> <option name="rangeValues">[200]</option> <option name="refresh.display">progressbar</option> <option name="trellis.enabled">0</option> <option name="trellis.size">large</option> <option name="unitPosition">after</option> <option name="useColors">1</option> </single> </panel> </row> </form>​ This is causing the $entityToken$ token to not reflect the expected value in queries where it is used, such as:   source IN ("/aws/lambda/*-data-$stageToken$-$entityToken$" Correct Substitution for searchName: The second value of the dropdown choice (e.g., -test-targetf or *-test-cb) is being correctly substituted into the $searchName$ token. This indicates that the <eval> logic for searchName is working as expected: <eval token="searchName">mvindex(split($value$, ","), 1)</eval> Potential Cause of the Problem: The <eval> logic for entityToken might not be working as expected:     <eval token="entityToken">mvindex(split($value$, ","), 0)</eval? This could be due to: A syntax issue in the <eval> block. A conflict with the token name entityToken elsewhere in the dashboard. The $value$ token not being properly passed or split. Problem: The first value of the dropdown choice (e.g., target or CBD) is not being correctly substituted into the $entityToken$ token in the Splunk dashboard. This is causing queries that rely on $entityToken$ to fail or use incorrect values. 
Expected Behavior: $entityToken$ should be set to the first value of the selected dropdown choice (e.g., target or CB). $searchName$ should be set to the second value of the selected dropdown choice (e.g., *-test-targetf or *-test-cb). Current Behavior: $searchName$ is correctly set to the second value of the dropdown choice. $entityToken$ is not being set to the first value of the dropdown choice. Impact: Queries that rely on $entityToken$ are failing or using incorrect values, such as:      
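A hedged observation on the XML above: the dropdown input and its <change> block set a token named entityTokenFirst, while the prose and one of the quoted queries refer to $entityToken$; since Simple XML tokens are matched by exact name, any panel that references $entityToken$ will never be populated by this input. A minimal sketch of keeping the names aligned (reusing the change block that already works for searchName, and standardizing on entityTokenFirst):

<change>
  <set token="entityLabel">$label$</set>
  <eval token="searchName">mvindex(split($value$, ","), 1)</eval>
  <eval token="entityTokenFirst">mvindex(split($value$, ","), 0)</eval>
</change>

and then referencing $entityTokenFirst$ consistently in every query, e.g. source IN ("*-data-$stageToken$-$entityTokenFirst$"), instead of mixing it with $entityToken$.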
Hello Team, Splunk 9.4.0, running as root, all in one. Seems like a super simple problem, but I am not able to get a MaxMind lookup working that adds Country + City to an IP. root@splunk:/opt/splunk/etc/apps/search/local# cat transforms.conf [maxmind_lookup] allow_caching = 1 case_sensitive_match = 1 external_cmd = /opt/splunk/etc/apps/search/bin/geoip_wrapper.sh fields_list = ip, Country Tested the script: root@splunk:/opt/splunk# echo -e "ip\n8.8.8.8" | /opt/splunk/etc/apps/search/bin/geoip_wrapper.sh ip,Country 8.8.8.8,United States   So it seems to be working fine, but in my search.log I am getting: 03-16-2025 12:31:09.437 INFO DispatchStorageManagerInfo [631235 searchOrchestrator] - Successfully created new dispatch directory for search job. sid=828bccc0c4803f0f_tmp dispatch_dir=/opt/splunk/var/run/splunk/dispatch/828bccc0c4803f0f_tmp 03-16-2025 12:31:09.437 INFO SearchParser [631235 searchOrchestrator] - PARSING: premakeresults 03-16-2025 12:31:09.443 ERROR ExternalProvider [631235 searchOrchestrator] - Could not find '/opt/splunk/etc/apps/search/bin/geoip_wrapper.sh'. It is required for lookup 'maxmind_lookup'.   Permissions are fine: root@splunk:/opt/splunk# ls -la /opt/splunk/etc/apps/search/bin/geoip_wrapper.sh -rwxr-xr-x 1 root root 82 Mar 16 12:46 /opt/splunk/etc/apps/search/bin/geoip_wrapper.sh What am I missing? I've spent hours on this already. I also tried a direct Python script (without the wrapper) and got the same results. I also tried the path with $SPLUNK_HOME but no change. To me it looks like a kind of sandboxing? Maybe I should switch to relative paths? (Tried, did not help.) Thanks, Michal
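A hedged thing to check: for external lookups, transforms.conf expects external_cmd to be just the script filename (plus any arguments), with the script living in the bin directory of the app whose transforms.conf defines the lookup; an absolute path in external_cmd is the kind of thing that typically produces "Could not find" even when the file exists and is executable. A sketch of the stanza under that assumption:

# etc/apps/search/local/transforms.conf
[maxmind_lookup]
external_cmd = geoip_wrapper.sh
fields_list = ip, Country
allow_caching = 1
case_sensitive_match = 1

with geoip_wrapper.sh kept in /opt/splunk/etc/apps/search/bin/. It is also worth noting that external lookup scripts are most commonly written as Python scripts in that same bin directory, which Splunk runs with its own interpreter; whether a bare shell wrapper is honored may depend on version, so treat that part as an assumption to verify.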
Hi, I am doing an initial search based off of initial field inputs within a dashboard.  The issue I am having is after my chart gets populated with standard deviation, i am  attempting to do a drilldown click on the chart and once that action happens another panel dynamically appears with the log events from the date/time data point from my chart.  Unfortunately this is not working the panel is always displayed and does a search.  No matter the data point I click in the chart the search happens but doesn't use the date/time of the click.  Even my "labelApp" token is not displaying properly.  See below: <form version="1.1"> <label>API Gateway Dynamic Application Reporting</label> <!--<row>--> <!-- <panel>--> <!-- <title>THESE ARE MY TOKEN VALUES</title>--> <!-- <html>--> <!-- <h2>Index = $indexName$</h2>--> <!-- <h2>Cluster = $clusterName$</h2>--> <!-- <h2>SourceType = mule:app:app</h2>--> <!-- <h2>Application = $labelApp$</h2>--> <!-- <h2>ErrorSearch = $errorSearch$</h2>--> <!-- <h2>Time = $searchTime$</h2>--> <!-- <h2>drilldown1 = $earliest$</h2>--> <!-- <h2>drilldown2 = $latest$</h2>--> <!-- </html>--> <!-- </panel>--> <!--</row>--> <search id="baseSearch"> <query>index=$indexName$ cluster_name=$clusterName$ sourcetype=mule:app:app label_app=$labelApp$ ("\"statusCode\"") | rex .*\"traceId\"\s:\s\"?(?&lt;traceId&gt;.*?)\".* | rex "(?s)\"statusCode\"\s:\s\"?(?&lt;statusCode&gt;[245]\d{2})\"?" | stats count by statusCode | eventstats sum(count) as totalCount | eval percentage=round(count*100/totalCount,3) </query> <earliest>$searchTime.earliest$</earliest> <latest>$searchTime.latest$</latest> </search> <search id="baseSearch2"> <query>index=$indexName$ cluster_name=$clusterName$ sourcetype=mule:app:app label_app=$labelApp$ ("\"statusCode\"") | rex .*\"traceId\"\s:\s\"?(?&lt;traceId&gt;.*?)\".* | rex "(?s)\"statusCode\"\s:\s\"?(?&lt;statusCode&gt;[245]\d{2})\"?" 
| timechart span=1$timeSpan$ count(statusCode) as "Number_Of_Requests" | eventstats mean(Number_Of_Requests) as "Average_Requests_Per_Time_Span" stdev(Number_Of_Requests) as "Standard_Deviation" | eval Standard_Deviation=round(Standard_Deviation,2) | eval Average_Requests_Per_Time_Span=round(Average_Requests_Per_Time_Span,2)</query> <earliest>$searchTime.earliest$</earliest> <latest>$searchTime.latest$</latest> </search> <fieldset submitButton="false" autoRun="false"> <input type="radio" token="indexName"> <label>Index</label> <choice value="br_master_application_non-prod">UAT</choice> <choice value="br_master_application_prod">Prod</choice> <change> <condition value="br_master_application_non-prod"> <set token="clusterName">"broadridge-msapi-gateway-proxy-uatcluster"</set> </condition> <condition value="br_master_application_prod"> <set token="clusterName">"broadridge-msapi-gateway-proxy-prdcluster"</set> </condition> </change> <search> <query/> <earliest>-24h@h</earliest> <latest>now</latest> </search> </input> <input type="radio" token="timeSpan"> <label>Time_Span</label> <choice value="s">Second</choice> <choice value="m">Minute</choice> <choice value="h">Hour</choice> <choice value="d">Day</choice> </input> <input type="dropdown" token="labelApp" depends="$indexName$" searchWhenChanged="true"> <label>Application</label> <choice value="*">All</choice> <default>*</default> <fieldForLabel>label_app</fieldForLabel> <fieldForValue>label_app</fieldForValue> <search> <query>index=$indexName$ cluster_name=$clusterName$ sourcetype=mule:app:app label_app=* | dedup label_app | table label_app | sort label_app</query> <earliest>-24h@h</earliest> <latest>now</latest> </search> </input> <input type="time" token="searchTime" searchWhenChanged="true"> <label>Time</label> <default> <earliest>-7d@d</earliest> <latest>now</latest> </default> </input> </fieldset> <row> <panel> <chart> <title>Status Code By Slice ($labelApp$)</title> <search base="baseSearch"> <query>| fields - count totalCount | chart max(percentage) by statusCode</query> <!--<earliest>$searchTime.earliest$</earliest>--> <!--<latest>$searchTime.latest$</latest>--> </search> <option name="charting.chart">pie</option> <option name="charting.drilldown">none</option> <option name="charting.chart.showLabels">true</option> <option name="charting.chart.showPercent">true</option> </chart> </panel> <panel> <table> <title>All Status Code Percentage Table ($labelApp$)</title> <search base="baseSearch"> <query>| table statusCode, count, totalCount, percentage</query> <!--<earliest>$searchTime.earliest$</earliest>--> <!--<latest>$searchTime.latest$</latest>--> </search> <option name="drilldown">none</option> </table> </panel> </row> <row> <panel> <chart> <title>Total Combined Requests Per Time_Span Graph ($labelApp$)</title> <search base="baseSearch2"> <query/> <!--<earliest>$searchTime.earliest$</earliest>--> <!--<latest>$searchTime.latest$</latest>--> </search> <option name="charting.axisLabelsX.majorLabelStyle.rotation">-45</option> <option name="charting.axisTitleX.visibility">visible</option> <option name="charting.axisTitleY.visibility">visible</option> <option name="charting.axisTitleY2.visibility">visible</option> <option name="charting.chart">line</option> <option name="charting.chart.resultTruncationLimit">500000</option> <option name="charting.data.count">500000</option> <option name="charting.chart.overlayFields">Deviation,"Average Requests Per Time_Span"</option> <option name="charting.drilldown">all</option> <option 
name="charting.legend.placement">bottom</option> <option name="refresh.display">preview</option> <drilldown> <eval token="drilldown1">$earliest$</eval> <eval token="drilldown2">$latest$</eval> </drilldown> </chart> </panel> <panel> <table> <title>Total, Average, and Standard Deviation Requests Per Time_Span Table ($labelApp$)</title> <search base="baseSearch2"> <query/> <!--<earliest>$searchTime.earliest$</earliest>--> <!--<latest>$searchTime.latest$</latest>--> </search> <option name="drilldown">none</option> </table> </panel> </row> <row> <panel> <title>Latency Metrics for Trade Execution</title> <table> <search> <query> index=$indexName$ sourcetype="mule:app:app" aws_account_name="CORP-MSAPIGW" label_app=$label_app$ | rex "traceId=\"(?&lt;trace_id>[^\"]+)\"" | rex "clientId=\"(?&lt;client_id>[^\"]+)\"" | rex "message=\"(?&lt;message>[^\"]+)\"" | rex "request_method=\"(?&lt;request_method>[^\"]+)\"" | rex "request_url=\"(?&lt;request_url>[^\"]+)\"" | rex "request_queryParams_account=\"(?&lt;account>[^\"]+)\"" | rex "request_headers_x-request-id=\"(?&lt;x_request_id>[^\"]+)\"" | rex "statusCode=\"(?&lt;status_code>\d+)\"" | rex "latency_backend_latency_in_ms=\"(?&lt;backend_latency>[0-9]+)\"" | rex "latency_request_latency_in_ms=\"(?&lt;request_latency>[0-9]+)\"" | rex "latency_response_latency_in_ms=\"(?&lt;response_latency>[0-9]+)\"" | eval backend_latency_ms=tonumber(backend_latency), request_latency_ms=tonumber(request_latency), response_latency_ms=tonumber(response_latency) | eval total_latency_ms = backend_latency_ms + request_latency_ms + response_latency_ms | eventstats perc90(total_latency_ms) as perc90_threshold | where total_latency_ms &lt;= perc90_threshold | eventstats avg(backend_latency_ms) as avg_backend_latency_ms, avg(request_latency_ms) as avg_request_latency_ms, avg(response_latency_ms) as avg_response_latency_ms | eval avg_90_percent_latency_ms = avg_backend_latency_ms + avg_request_latency_ms + avg_response_latency_ms | rename backend_latency_ms AS "Backend Latency (ms)", request_latency_ms AS "Request Latency (ms)", response_latency_ms AS "Response Latency (ms)", total_latency_ms AS "Total Latency (ms)", avg_90_percent_latency_ms AS "90% Avg Total Latency (ms)" | table trace_id, client_id, message, request_method, request_url, account, x_request_id, status_code, "Backend Latency (ms)", "Request Latency (ms)", "Response Latency (ms)", "Total Latency (ms)", "90% Avg Total Latency (ms)" </query> </search> <drilldown> <eval token="drilldown1">$earliest$</eval> <eval token="drilldown2">$latest$</eval> </drilldown> </table> </panel> </row> <row depends="$drilldown1$"> <panel> <event> <title>Drill Down Events</title> <search> <query>index=$indexName$ cluster_name=$clusterName$ sourcetype=mule:app:app label_app=$labelApp$ ("\"statusCode\"") </query> <earliest>$drilldown1$</earliest> <latest>$drilldown2$</latest> </search> <option name="list.drilldown">none</option> <option name="refresh.display">progressbar</option> </event> </panel> </row> </form> I commented out the last part as I have not gotten to that piece of the dashboard yet.  Any help would be greatly appreciated as I have been banging my head on this for a day or more at this point.