All Posts


Studio and Classic have their pros and cons. Personally, I prefer Classic as there are more opportunities to extend the current capabilities through CSS, JavaScript, custom visualisations, etc. While it is true that Splunk development is focussed on Studio, you are tied to their release schedule. So, if you have the patience to wait until the features you want make it into a release, stick with it; otherwise, Classic might already give you those features, with the compromise of losing some of the WYSIWYG features of Studio.
I checked by going to my AWS Linux instance (where our Splunk instances reside); for this particular add-on folder we have drwx------ permissions on both the DS and the HF. Do I need to change these permissions to configure the data input on the HF, or are these permissions sufficient? @PickleRick 
Whitelisting is one thing, but I'd verify with your proxy admins that the requests are properly passed through. Just to be on the safe side.
The indexers extract fields from events as they are read from the index. As @PickleRick implied, the effort put into that extraction is determined by the search mode (Fast, Smart, or Verbose). Each extracted field takes up memory for processing and network bandwidth to send to the SH. Using the fields command helps reduce the number of fields retained, so you save memory and bandwidth. Indexers do not decide if a field is interesting or not - the SH does that.
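For example, a search like this (index, sourcetype, and field names here are made up) keeps only the fields the rest of the pipeline actually needs:

index=web sourcetype=access_combined status=500
| fields clientip, uri_path
| stats count BY uri_path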
Yes, we're using a proxy for that in our company, and we've whitelisted these domains in our AWS VPC as well.
It's hard to say precisely, since the add-on is not very talkative in terms of logs, but my understanding is that Splunk is trying to validate the config - see https://docs.splunk.com/Documentation/Splunk/latest/AdvancedDev/ModInputsValidate for how that works. The 404 error comes from the add-on itself. Unfortunately, it's not very descriptive. And it's confusing, since 404 means the resource wasn't found; access permission problems should be signalled with 403. You could try to check if the add-on has some configurable logging (typically you'd look for a log4j.properties file in the case of Java-based software). Are you using a proxy to reach the internet?
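Also, if the add-on itself doesn't log anything useful, splunkd's own log is a rough starting point - something like this (the component names are an assumption; they vary by add-on):

index=_internal sourcetype=splunkd log_level=ERROR (component=ModularInputs OR component=ExecProcessor)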
Splunk decides which fields to extract based on search commands and whether you use fast or verbose mode. So you can limit the amount of data processed even in verbose mode by removing some fields (but it's better to just not use verbose mode and explicitly specify interesting fields). But the other important use case is that Splunk returns the _raw field (and other default fields, but this one is usually the most significant), which can be really memory-intensive, especially if you're dealing with huge JSON blobs or something similar. And again - no, fields are not "extracted in SH". Fields get extracted at the very beginning of a search on an indexer (before other commands in the pipeline kick in). Just because something is a search-time operation doesn't mean it happens on a SH.
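For example (index and sourcetype made up) - note that _raw, like other internal fields, survives a plain fields list, so you have to drop it by name:

index=app sourcetype=big_json error
| fields - _raw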
Hi @richgalloway , could you explain how it affects the amount of data returning from the indexers to the SH? I know that the command says which fields should be discarded or retained, but if the field specified is not an indexed field and I didn't use "rex", meaning that I refer to an "interesting field", which according to my understanding is extracted on the SH, how can it affect the amount of data? Unless you are saying that using the "fields" command indirectly tells the indexers to extract those fields.
An app is just a directory with content, so there should be no "technical" difference between a locally installed app and a DS-distributed one. It's all about maintainability. I've seen environments where people would distribute pre-configured add-ons, and I've seen places where half of the apps on HFs were standard and installed from the DS but the other half were installed locally.
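For reference, on the DS side it's just a mapping in serverclass.conf - a minimal sketch (the class, host pattern, and app name are made up):

[serverClass:heavy_forwarders]
whitelist.0 = hf-*.example.com

[serverClass:heavy_forwarders:app:my_addon]
stateOnClient = enabled
restartSplunkd = true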
@PickleRick I have one query here... Does pushing the app from the DS to the HF and then configuring it on the HF cause the problem? Our HF has a JRE installed. When we install it directly on the HF and configure it, it works. What consequences might I face in the future if I install directly on the HF rather than pushing it from the DS? 
Yes, the fields command can affect the amount of data returned to the SH.  It's something I recommend to all of my customers as a way to improve search efficiency. Just to be clear, the command does NOT extract any fields.  It merely says which ones should be discarded (- option) or retained (+ option).  Actual extractions are done by the rex command or the EXTRACT setting in props.conf.
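For example, a search-time extraction defined in props.conf (sourcetype and pattern are hypothetical):

[my_sourcetype]
EXTRACT-status = status=(?<status>\d{3})

The fields command then only controls whether the extracted status field is kept or discarded.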
Hi @richgalloway , thank you.

1. By saying that indexers also perform search-time field extraction, you mean that the "interesting fields" can be extracted by the indexers? My thinking is that those fields need to be seen in at least 20% of the events, and this kind of calculation can be performed at the search head.
2. By specifying a field using the "fields" command (which is not an indexed field or a field extracted with 'rex'), will it be extracted on the search head or on the indexers (assuming it fits for distributable streaming)?

Overall I am trying to understand if using the 'fields' command with a regular field (and not an indexed one) affects the amount of data returning from the indexers to the search head.
I've never tried this app so I can't really tell. In the description I see that it needs a working JRE installation on the HF machine, so you have to fiddle with the server manually anyway. And the docs are not very good - they don't say which tier you have to install the app on, and they assume you're installing the app using the GUI; I suppose the author of the add-on has never seen anything bigger than an all-in-one installation of Splunk.
Unfortunately it isn't possible to disable this at the moment. These docs (https://splunkui.splunk.com/Packages/visualizations/SingleValue) give you the options that can be passed, but unfortunately the hover border isn't something that can be changed. Regarding your question about Classic dashboards - unless you really need to, I would try and stick with Dashboard Studio dashboards, as this is where ongoing development is being done (and the dashboards generally look nicer, in my opinion!) Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
Yes, the license is there... 
Hi @Karthikeya  Do you have a license installed on your HF? I believe you need a license on your HF for this app to work because certain features are not enabled on the free license. Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
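If you want to double-check, something like this run on the HF should list what's installed (the exact output fields can vary by version):

| rest /services/licenser/licenses splunk_server=local
| table label type status quota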
Hi - you did not add the options to the fieldset as posted above. You just wrapped the input in fieldset. Try adding the two additional options like this: <fieldset submitButton="true" autoRun="false">   See the full dashboard: <form version="1.1" theme="dark"> <label>Metrics222</label> <fieldset submitButton="true" autoRun="false"> <input type="dropdown" token="indexToken1" searchWhenChanged="false"> <label>Environment</label> <choice value="prod-,prod,*">PROD</choice> <choice value="np-,test,*">TEST</choice> <change> <eval token="stageToken">mvindex(split($value$,","),1)</eval> <eval token="indexToken">mvindex(split($value$,","),0)</eval> </change> </input> <input type="dropdown" token="entityToken" searchWhenChanged="false"> <label>Data Entity</label> <choice value="*">ALL</choice> </input> <input type="time" token="timeToken" searchWhenChanged="false"> <label>Time</label> <default> <earliest>-24h@h</earliest> <latest>now</latest> </default> </input> </fieldset> <row> <panel> <html id="APIStats"> <style> #user{ text-align:center; color:#BFFF00; } </style> <h2 id="user">API USAGE STATISTICS</h2> </html> </panel> </row> <row> <panel> <table> <title>Unique User / Unique Client</title> <search> <query>index=$indexToken$ AND source="/aws/lambda/g-lambda-au-$stageToken$" | stats dc(claims.sub) as "Unique Users", dc(claims.cid) as "Unique Clients" BY claims.cid claims.groups{} | rename claims.cid AS app, claims.groups{} AS groups | table app "Unique Users" "Unique Clients" groups</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> </search> <option name="drilldown">none</option> </table> </panel> </row> <row> <panel> <html id="nspCounts"> <style> #user{ text-align:center; color:#BFFF00; } </style> <h2 id="user">NSP STREAM STATISTICS</h2> </html> </panel> </row> <row> <panel> <table> <title>Unique Consumer</title> <search> <query>index="np" source="**" | spath path=$stageToken$.nsp3s{} output=nsp3s | sort -_time | head 1 | mvexpand nsp3s | spath input=nsp3s path=Name output=Name | spath input=nsp3s path=DistinctAdminUserCount output=DistinctAdminUserCount | search Name="*costing*" | table Name, DistinctAdminUserCount</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> </search> <option name="drilldown">none</option> </table> </panel> <panel> <table> <title>Event Processed</title> <search> <query>index="$indexToken$" source="/aws/lambda/publish-$entityToken$-$stageToken$-nsp" "success Published to NSP3 objectType*" | rex field=msg "objectType\s*:\s*(?&lt;objectType&gt;[^\s]+)" | stats count by objectType</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> <panel> <table> <title>Number of Errors</title> <search> <query>index="$indexToken$" source="/aws/lambda/publish-$entityToken$-$stageToken$-nsp" "error*" | stats count</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row> <row> <panel> <title>API : Data/Search Count</title> <html id="errorcount5"> <style> #user{ text-align:center; color:#BFFF00; } </style> <h2 id="user"> API COUNT STATISTICS</h2> </html> </panel> </row> <row> <panel> <title>Total Request Data</title> <table> <search> <query>(index=$indexToken$ source="/aws/lambda/api-data-$stageToken$-$entityToken$" OR 
source="/aws/lambda/api-commands-$stageToken$-*") ge:*:init:*:invoke | spath path=event.path output=path | spath path=event.httpMethod output=http | eval Path=http + " " + path |stats count by Path</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> <refresh>60m</refresh> <refreshType>delay</refreshType> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> <panel> <title>Total Request Search</title> <table> <search>rliest&gt;<query>index=$indexToken$ source IN ("/aws/lambda/api-search-$stageToken$-$entityToken$") ge:init:*:invoke | spath path=path output=path | spath path=httpMethod output=http | eval Path=http + " " + path |stats count by Path</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> <panel> <title>Total Error Count :</title> <table> <search>rliest&gt;<query>index=$indexToken$ source IN ("/aws/lambda/api-search-$stageToken$-$entityToken$") msg="error*" (error.status=4* OR error.status=5*) | eval status=case(like(error.status, "4%"), "4xx", like(error.status, "5%"), "5xx") | stats count by error.status</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> <panel> <title>Response Time Count in ms</title> <table> <search>rliest&gt;<query>index=np-papi source IN ("/aws/lambda/api-search-test-*") "ge:init:search:response" | stats sum(responseTime) as TotalResponseTime, avg(responseTime) as AvgResponseTime | eval API="Search API" | eval TotalResponseTime = TotalResponseTime . " ms" | eval AvgResponseTime = round(AvgResponseTime, 2) . " ms" | table API, TotalResponseTime, AvgResponseTime | append [ search index=np-papi source IN ("/aws/lambda/api-data-test-*") msg="ge:init:data:*" | stats sum(responseTime) as TotalResponseTime, avg(responseTime) as AvgResponseTime | eval API="DATA API" | eval TotalResponseTime = TotalResponseTime . " ms" | eval AvgResponseTime = round(AvgResponseTime, 2) . 
" ms" | table API, TotalResponseTime, AvgResponseTime ] | table API, TotalResponseTime, AvgResponseTime</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row> <row> <panel> <html id="errorcount16"> <style> #user{ text-align:center; color:#BFFF00; } </style> <h2 id="user">Request per min</h2> </html> </panel> </row> <row> <panel> <table> <search> <query>index=$indexToken$ source IN ("/aws/lambda/api-data-$stageToken$-$entityToken$","/aws/lambda/api-search-$stageToken$-$entityToken$") "ge:init:*:*" | timechart span=1m count by source | untable _time source count | stats sum(count) as TotalCount, avg(count) as AvgCountPerMin by source | eval AvgCountPerMin = round(AvgCountPerMin, 2) | eval source = if(match(source, "api-data-test-(.*)"), replace(source, "/api-data-test-(.*)", "data-\\1"), if(match(source, "/aws/lambda/api-data-prod-(.*)"), replace(source, "/aws/lambda/api-data-prod-(.*)", "data-\\1"), if(match(source, "/aws/lambda/api-search-test-(.*)"), replace(source, "/aws/lambda/api-search-test-(.*)", "search-\\1"), replace(source, "/aws/lambdaapi-search-prod-(.*)", "search-\\1")))) | table source, TotalCount, AvgCountPerMin</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row> <row> <panel> <title>SLA % :DATA API</title> <table> <search> <query>index=$indexToken$ source IN ("/aws/lambdaapi-data-$stageToken$-$entityToken$") "ge:init:data:responseTime" | eval SLA_threshold = 113 | eval SLA_compliant = if(responseTime &lt;= SLA_threshold, 1, 0) | stats count as totalRequests, sum(SLA_compliant) as SLA_passed by source | eval SLA_percentage = round((SLA_passed / totalRequests) * 100, 2) | eval API = "DATA API" | table source, SLA_percentage, totalRequests, SLA_passed</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> <refresh>60m</refresh> <refreshType>delay</refreshType> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> <panel> <title>SLA % :SEARCH API</title> <table> <search> <query>index=$indexToken$ source IN ("/aws/lambda/api-search-$stageToken$-$entityToken$") "ge:init:search:response:time" | eval SLA_threshold = 100 | eval SLA_compliant = if(responseTime &lt;= SLA_threshold, 1, 0) | stats count as totalRequests, sum(SLA_compliant) as SLA_passed by source | eval SLA_percentage = round((SLA_passed / totalRequests) * 100, 2) | eval API = "SEARCH API" | eval source = if(match(source, "/aws/lambda/api-search-test-(.*)"), replace(source, "/aws/lambda\api-search-test-(.*)", "search-\\1"), replace(source, "/aws/lambda/api-search-prod-(.*)", "search-\\1")) | table source, SLA_percentage, totalRequests, SLA_passed</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> <refresh>60m</refresh> <refreshType>delay</refreshType> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row> </form>
Yup. As @richgalloway hinted at - Splunk uses a two-tiered search process. Simplifying a bit (and not taking into account initial commands which run on the SH), the search is initiated on the SH. The SH breaks it down into phase1 and phase2. Phase1 is spawned from the SH to the indexers (let's not dig deeply into which indexers the search is spawned to; it's a topic for another time). The indexer(s) have a so-called knowledge bundle which contains search-time settings replicated from the SH (again - how that happens is another topic). So the indexers know how fields are extracted. And they extract those fields if needed.

Phase1 contains only an initial events search or distributable streaming commands, because each indexer processes its data independently and cannot rely on events held elsewhere. And it ends either by simply passing the results back to the SH for displaying (if there are no more commands in the search pipeline, or the next command is a centralized streaming one) or with the map part of a transforming or dataset processing command, which prepares the results for aggregation by the SH.

Next, the intermediate results are gathered by the SH, which performs phase2 of the search. Phase2 can contain any type of command; phase1 can only contain the initial search, distributable streaming commands, or the "prestats" part of a transforming or dataset processing command. So every time you use a command which is not a distributable streaming command after the initial search, the processing is moved at that point to the SH tier and you lose the concurrent processing. Therefore it's better to use the fields command than table, unless you're at the end of your search and want to format your data nicely for viewing.
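A rough illustration (index and field names made up) - the first search stays distributable, so the prestats part of stats still runs on the indexers; the second moves everything to the SH as soon as table kicks in:

index=web sourcetype=access_combined | fields clientip, status | stats count BY status
index=web sourcetype=access_combined | table clientip, status | stats count BY status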
Hi, I tried with fieldset in the form, but it still fetches results based on the first dropdown and runs the search.

Current behaviour: The dashboard fetches results immediately when the "env" dropdown is selected (e.g., "test" or "prod"). Results are fetched without considering other filters like "data entity" or "time."

Expected behaviour: The dashboard should wait for the user to:
Select a value from the "env" dropdown (e.g., "test" or "prod").
Select a value from the "data entity" dropdown.
Specify a time range.
Only after all selections are made and the "Submit" button is clicked should the query execute and fetch results.

Could someone help with this? I tried adding fieldset:

<form version="1.1" theme="dark"> <label>Metrics222</label> <fieldset> <input type="dropdown" token="indexToken1" searchWhenChanged="false"> <label>Environment</label> <choice value="prod-,prod,*">PROD</choice> <choice value="np-,test,*">TEST</choice> <change> <eval token="stageToken">mvindex(split($value$,","),1)</eval> <eval token="indexToken">mvindex(split($value$,","),0)</eval> </change> </input> <input type="dropdown" token="entityToken" searchWhenChanged="false"> <label>Data Entity</label> <choice value="*">ALL</choice> </input> <input type="time" token="timeToken" searchWhenChanged="false"> <label>Time</label> <default> <earliest>-24h@h</earliest> <latest>now</latest> </default> </input> </fieldset> <row> <panel> <html id="APIStats"> <style> #user{ text-align:center; color:#BFFF00; } </style> <h2 id="user">API USAGE STATISTICS</h2> </html> </panel> </row> <row> <panel> <table> <title>Unique User / Unique Client</title> <search> <query>index=$indexToken$ AND source="/aws/lambda/g-lambda-au-$stageToken$" | stats dc(claims.sub) as "Unique Users", dc(claims.cid) as "Unique Clients" BY claims.cid claims.groups{} | rename claims.cid AS app, claims.groups{} AS groups | table app "Unique Users" "Unique Clients" groups</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> </search> <option name="drilldown">none</option> </table> </panel> </row> <row> <panel> <html id="nspCounts"> <style> #user{ text-align:center; color:#BFFF00; } </style> <h2 id="user">NSP STREAM STATISTICS</h2> </html> </panel> </row> <row> <panel> <table> <title>Unique Consumer</title> <search> <query>index="np" source="**" | spath path=$stageToken$.nsp3s{} output=nsp3s | sort -_time | head 1 | mvexpand nsp3s | spath input=nsp3s path=Name output=Name | spath input=nsp3s path=DistinctAdminUserCount output=DistinctAdminUserCount | search Name="*costing*" | table Name, DistinctAdminUserCount</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> </search> <option name="drilldown">none</option> </table> </panel> <panel> <table> <title>Event Processed</title> <search> <query>index="$indexToken$" source="/aws/lambda/publish-$entityToken$-$stageToken$-nsp" "success Published to NSP3 objectType*" | rex field=msg "objectType\s*:\s*(?&lt;objectType&gt;[^\s]+)" | stats count by objectType</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> <panel> <table> <title>Number of Errors</title> <search> <query>index="$indexToken$" source="/aws/lambda/publish-$entityToken$-$stageToken$-nsp" "error*" | stats count</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> </search> <option name="drilldown">none</option> <option 
name="refresh.display">progressbar</option> </table> </panel> </row> <row> <panel> <title>API : Data/Search Count</title> <html id="errorcount5"> <style> #user{ text-align:center; color:#BFFF00; } </style> <h2 id="user"> API COUNT STATISTICS</h2> </html> </panel> </row> <row> <panel> <title>Total Request Data</title> <table> <search> <query>(index=$indexToken$ source="/aws/lambda/api-data-$stageToken$-$entityToken$" OR source="/aws/lambda/api-commands-$stageToken$-*") ge:*:init:*:invoke | spath path=event.path output=path | spath path=event.httpMethod output=http | eval Path=http + " " + path |stats count by Path</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> <refresh>60m</refresh> <refreshType>delay</refreshType> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> <panel> <title>Total Request Search</title> <table> <search>rliest&gt;<query>index=$indexToken$ source IN ("/aws/lambda/api-search-$stageToken$-$entityToken$") ge:init:*:invoke | spath path=path output=path | spath path=httpMethod output=http | eval Path=http + " " + path |stats count by Path</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> <panel> <title>Total Error Count :</title> <table> <search>rliest&gt;<query>index=$indexToken$ source IN ("/aws/lambda/api-search-$stageToken$-$entityToken$") msg="error*" (error.status=4* OR error.status=5*) | eval status=case(like(error.status, "4%"), "4xx", like(error.status, "5%"), "5xx") | stats count by error.status</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> <panel> <title>Response Time Count in ms</title> <table> <search>rliest&gt;<query>index=np-papi source IN ("/aws/lambda/api-search-test-*") "ge:init:search:response" | stats sum(responseTime) as TotalResponseTime, avg(responseTime) as AvgResponseTime | eval API="Search API" | eval TotalResponseTime = TotalResponseTime . " ms" | eval AvgResponseTime = round(AvgResponseTime, 2) . " ms" | table API, TotalResponseTime, AvgResponseTime | append [ search index=np-papi source IN ("/aws/lambda/api-data-test-*") msg="ge:init:data:*" | stats sum(responseTime) as TotalResponseTime, avg(responseTime) as AvgResponseTime | eval API="DATA API" | eval TotalResponseTime = TotalResponseTime . " ms" | eval AvgResponseTime = round(AvgResponseTime, 2) . 
" ms" | table API, TotalResponseTime, AvgResponseTime ] | table API, TotalResponseTime, AvgResponseTime</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row> <row> <panel> <html id="errorcount16"> <style> #user{ text-align:center; color:#BFFF00; } </style> <h2 id="user">Request per min</h2> </html> </panel> </row> <row> <panel> <table> <search> <query>index=$indexToken$ source IN ("/aws/lambda/api-data-$stageToken$-$entityToken$","/aws/lambda/api-search-$stageToken$-$entityToken$") "ge:init:*:*" | timechart span=1m count by source | untable _time source count | stats sum(count) as TotalCount, avg(count) as AvgCountPerMin by source | eval AvgCountPerMin = round(AvgCountPerMin, 2) | eval source = if(match(source, "api-data-test-(.*)"), replace(source, "/api-data-test-(.*)", "data-\\1"), if(match(source, "/aws/lambda/api-data-prod-(.*)"), replace(source, "/aws/lambda/api-data-prod-(.*)", "data-\\1"), if(match(source, "/aws/lambda/api-search-test-(.*)"), replace(source, "/aws/lambda/api-search-test-(.*)", "search-\\1"), replace(source, "/aws/lambdaapi-search-prod-(.*)", "search-\\1")))) | table source, TotalCount, AvgCountPerMin</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row> <row> <panel> <title>SLA % :DATA API</title> <table> <search> <query>index=$indexToken$ source IN ("/aws/lambdaapi-data-$stageToken$-$entityToken$") "ge:init:data:responseTime" | eval SLA_threshold = 113 | eval SLA_compliant = if(responseTime &lt;= SLA_threshold, 1, 0) | stats count as totalRequests, sum(SLA_compliant) as SLA_passed by source | eval SLA_percentage = round((SLA_passed / totalRequests) * 100, 2) | eval API = "DATA API" | table source, SLA_percentage, totalRequests, SLA_passed</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> <refresh>60m</refresh> <refreshType>delay</refreshType> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> <panel> <title>SLA % :SEARCH API</title> <table> <search> <query>index=$indexToken$ source IN ("/aws/lambda/api-search-$stageToken$-$entityToken$") "ge:init:search:response:time" | eval SLA_threshold = 100 | eval SLA_compliant = if(responseTime &lt;= SLA_threshold, 1, 0) | stats count as totalRequests, sum(SLA_compliant) as SLA_passed by source | eval SLA_percentage = round((SLA_passed / totalRequests) * 100, 2) | eval API = "SEARCH API" | eval source = if(match(source, "/aws/lambda/api-search-test-(.*)"), replace(source, "/aws/lambda\api-search-test-(.*)", "search-\\1"), replace(source, "/aws/lambda/api-search-prod-(.*)", "search-\\1")) | table source, SLA_percentage, totalRequests, SLA_passed</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> <refresh>60m</refresh> <refreshType>delay</refreshType> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row> </form>      
Thank you for your feedback. I appreciate the time and expertise you and others volunteer here. My intention wasn't to exclude anyone. I'll keep your advice in mind for future posts to ensure all contributions are valued equally, and I will repost without tagging anyone.