All Posts



It is unlikely to be random, since it is generated by a system. There is likely to be some pattern to it, but if you do not share that information, it is unlikely that we will be able to guess it, and we would therefore be wasting our time attempting to provide a solution until you provide sufficient relevant details.
Has the bug been resolved in Splunk Enterprise version 9.2.1 (latest version)?
It is not that you will always have Entity Value next to data. It is random.
Hello there, I hope you are doing well. I was studying Splunk basics and came across an image that made me ask the same question you have asked here, but I don't understand the explanation. I would be grateful if you could explain to me why the UF has a parsing queue in it. Thank you.
Try something like this | rex mode=sed "s/(Data\": )\"/\1[/g s/}\"(,\s*\"EntityType)/}]\1/g s/\\\\\"/\"/g"
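The intent of those three sed steps can be sanity-checked outside Splunk with Python's re.sub on a trimmed stand-in of the event (the sample below is hypothetical and heavily shortened):

```python
import re

# Heavily trimmed stand-in for the raw event (hypothetical sample).
raw = ('"Comments": "New alert", "Data": "{\\"etype\\":\\"MalwareFamily\\",'
       '\\"Status\\":\\"Investigation Started\\"}","EntityType": "MalwareFamily"')

# Step 1: "Data": "{  ->  "Data": [{   (opening quote becomes a bracket)
s = re.sub(r'(Data": )"', r'\1[', raw)
# Step 2: }","EntityType  ->  }],"EntityType   (closing quote becomes a bracket)
s = re.sub(r'}"(,\s*"EntityType)', r'}]\1', s)
# Step 3: unescape \" -> "   (the sed applies this globally too)
s = s.replace('\\"', '"')
print(s)
# "Comments": "New alert", "Data": [{"etype":"MalwareFamily","Status":"Investigation Started"}],"EntityType": "MalwareFamily"
```

After the three substitutions the Data value is a real JSON array, which is easy to verify by wrapping the fragment in braces and parsing it.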
Input event (much more data exists before and after this excerpt, all on the same single line):

,"Comments": "New alert", "Data": "{\"etype\":\"MalwareFamily\",\"at\":\"2024-06-21T11:34:07.0000000Z\",\"md\":\"2024-06-21T11:34:07.0000000Z\",\"Investigations\":[{\"$id\":\"1\",\"Id\":\"urn:ZappedUrlInvestigation:2cc87ae3\",\"InvestigationStatus\":\"Running\"}],\"InvestigationIds\":[\"urn:ZappedUrlInvestigation:2cc8782d063\"],\"Intent\":\"Probing\",\"ResourceIdentifiers\":[{\"$id\":\"2\",\"AadTenantId\":\"2dfb29-729c918\",\"Type\":\"AAD\"}],\"AzureResourceId\":null,\"WorkspaceId\":null,\"Metadata\":{\"CustomApps\":null,\"GenericInfo\":null},\"Entities\":[{\"$id\":\"3\",\"MailboxPrimaryAddress\":\"abc@gmail.com\",\"Upn\":\"abc@gmail.com\",\"AadId\":\"6eac3b76357\",\"RiskLevel\":\"None\",\"Type\":\"mailbox\",\"Urn\":\"urn:UserEntity:10338af2b6c\",\"Source\":\"TP\",\"FirstSeen\":\"0001-01-01T00:00:00\"}, \"StartTimeUtc\": \"2024-06-21T10:12:37\", \"Status\": \"Investigation Started\"}","EntityType": "MalwareFamily",

There is a lot of data in this single line. I want to substitute \" with " only where it falls inside the Data dictionary value, nothing before and nothing after. Sample regex: https://regex101.com/r/Gsfaay/1 (only the highlighted data in group 4 should be modified). Also, the dictionary value is enclosed in quotes (as a string); I want it enclosed in [] brackets instead, as a list (groups 3 and 6).

Required output:

[much more data on the same single line],"Comments": "New alert", "Data": [{"etype":"MalwareFamily", and so on, "Status":"Investigation Started"}],"EntityType": "MalwareFamily", [much more data on the same single line]

Trials:

[testing_logs]
SEDCMD-DataJson = s/\\\"/\"/g s/"Data": "{"/"Data": \[{"/g s/("Data": \[{".*})",/$1],/g
INDEXED_EXTRACTIONS = json
KV_MODE = json

I tried it in multiple steps, as mentioned in my example above, but in Splunk, SEDCMD works on the entire _raw value, and I shouldn't apply it globally.

1. regex101.com/r/0g2bcL/1
2. regex101.com/r/o3eFgJ/1
3. regex101.com/r/D7Of0v/1

The only issue is with the first regex: it shouldn't be applied globally to the entire event value; it should apply only inside the Data dictionary value.
I want to set up an alert in Splunk that sends a message to two different public Slack channels. Currently, sending a message to one channel works fine, but I'm having trouble sending messages to multiple channels. Here is how I'm configuring it for a single Slack channel: Is there any way to fix this issue using this UI?
Try using a single show token:

<init>
  <unset token="showCollapseLink5"/>
</init>
<row depends="$alwaysHideCSSStyleOverride$">
  <panel>
    <html>
      <style>
        div[id^="linkCollapse"], div[id^="linkExpand"]{
          width: 32px !important;
          float: right;
        }
        div[id^="linkCollapse"] button, div[id^="linkExpand"] button{
          flex-grow: 0;
          border-radius: 50%;
          border-width: thick;
          border-color: lightgrey;
          border-style: inset;
          width: 32px;
          padding: 0px;
        }
        div[id^="linkCollapse"] label, div[id^="linkExpand"] label{
          display:none;
        }
        div[id^="panel"].fieldset{
          padding: 0px;
        }
      </style>
    </html>
  </panel>
</row>
<row>
  <panel>
    <title>Chart title</title>
    <input id="linkCollapse5" type="link" token="tokLinkCollapse5" searchWhenChanged="true" depends="$showCollapseLink5$">
      <label></label>
      <choice value="collapse">-</choice>
      <change>
        <condition value="collapse">
          <unset token="showCollapseLink5"></unset>
          <unset token="form.tokLinkCollapse5"></unset>
        </condition>
      </change>
    </input>
    <input id="linkExpand5" type="link" token="tokLinkExpand5" searchWhenChanged="true" rejects="$showCollapseLink5$">
      <label></label>
      <choice value="expand">+</choice>
      <change>
        <condition value="expand">
          <set token="showCollapseLink5">true</set>
          <unset token="form.tokLinkExpand5"></unset>
        </condition>
      </change>
    </input>
    <table depends="$showCollapseLink5$">
      <search>
        <query>My query</query>
      </search>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
</row>
<form version="1.1" theme="light">
  <label>Multiselect Text</label>
  <init>
    <set token="toktext">*</set>
  </init>
  <fieldset submitButton="false">
    <input type="multiselect" token="tokselect">
      <label>Field</label>
      <choice value="category">Group</choice>
      <choice value="severity">Severity</choice>
      <default>category</default>
      <valueSuffix>=REPLACE</valueSuffix>
      <delimiter> OR </delimiter>
      <prefix>(</prefix>
      <suffix>)</suffix>
      <change>
        <eval token="tokfilter">replace($tokselect$,"REPLACE","\"".$toktext$."\"")</eval>
      </change>
    </input>
    <input type="text" token="toktext">
      <label>Value</label>
      <default>*</default>
      <change>
        <eval token="tokfilter">replace($tokselect$,"REPLACE","\"".$toktext$."\"")</eval>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <event>
        <title>$tokfilter$</title>
        <search>
          <query>| makeresults</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
      </event>
    </panel>
  </row>
</form>
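For what it's worth, the replace() trick in that form can be illustrated outside Splunk. This Python sketch (with hypothetical token values) mirrors what the <change> eval computes:

```python
# What the multiselect emits with prefix "(", suffix ")", delimiter " OR ",
# and valueSuffix "=REPLACE", when both choices are selected (hypothetical values):
tokselect = '(category=REPLACE OR severity=REPLACE)'
toktext = 'error'  # value typed into the text input

# SPL: replace($tokselect$, "REPLACE", "\"" . $toktext$ . "\"")
tokfilter = tokselect.replace('REPLACE', '"' + toktext + '"')
print(tokfilter)  # (category="error" OR severity="error")
```

Every REPLACE placeholder is swapped for the quoted text value, so the final token is a ready-to-use search filter.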
As @yuanliu says, you will have a much better chance of getting a useful answer if you follow some simple guidelines. It is not possible to tell from what you have posted so far what your events actually look like (or a close anonymised representation of them), nor what it is you are trying to determine from your search. For example, do all the rule fields contain either "None" or a rule name? If so, isnotnull() will always return true, hence all your rules are coming through as "urlurlelabel" (should this be "urlrulelabel"?). Can there be more than one rule label you are interested in for an event, or does apprulelabel always take precedence, even if it is "None"? The latter is what your search is doing, hence the high count for "None". Please provide more relevant information if you would like more help.
It doesn't go in the query. The legend is a feature of the viz. The charting option goes in the SimpleXML source. https://docs.splunk.com/Documentation/Splunk/latest/Viz/ChartConfigurationReference  
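For example, to hide the legend, the option sits alongside the search inside the chart element in the Simple XML source, something like this (a sketch; the query is a placeholder):

```xml
<chart>
  <search>
    <query>index=_internal | timechart count by sourcetype</query>
  </search>
  <!-- charting.* options belong to the viz element, not the SPL -->
  <option name="charting.legend.placement">none</option>
</chart>
```

Other placement values (right, bottom, top, left) move the legend instead of hiding it; see the Chart Configuration Reference linked above.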
Hi, I have been having this error since the first of June. Here is my splunkd.log:

06-22-2024 14:54:00.405 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" action=dbxquery_server_start_failed error=com.splunk.HttpException: HTTP 503 -- KV Store initialization failed. Please contact your system administrator. stack=com.splunk.HttpException.create(HttpException.java:84)\\com.splunk.DBXService.sendImpl(DBXService.java:132)\\com.splunk.DBXService.send(DBXService.java:44)\\com.splunk.HttpService.get(HttpService.java:172)\\com.splunk.dbx.model.repository.SecretKVStoreRepository.getSecrets(SecretKVStoreRepository.java:41)\\com.splunk.dbx.utils.SecurityFileGenerationUtil.getSecretsFromKvStore(SecurityFileGenerationUtil.java:261)\\com.splunk.dbx.utils.SecurityFileGenerationUtil.initEncryption(SecurityFileGenerationUtil.java:51)\\com.splunk.dbx.command.DbxQueryServerStart.startDbxQueryServer(DbxQueryServerStart.java:82)\\com.splunk.dbx.command.DbxQueryServerStart.streamEvents(DbxQueryServerStart.java:50)\\com.splunk.modularinput.Script.run(Script.java:66)\\com.splunk.modularinput.Script.run(Script.java:44)\\com.splunk.dbx.command.DbxQueryServerStart.main(DbxQueryServerStart.java:95)\\
06-22-2024 14:54:00.406 +0800 WARN ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" com.splunk.modularinput.MalformedDataException: Events must have at least the data field set to be written to XML.
06-22-2024 14:54:00.406 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" com.splunk.modularinput.Event.writeTo(Event.java:65)\\com.splunk.modularinput.EventWriter.writeEvent(EventWriter.java:137)\\com.splunk.dbx.command.DbxQueryServerStart.streamEvents(DbxQueryServerStart.java:51)\\com.splunk.modularinput.Script.run(Script.java:66)\\com.splunk.modularinput.Script.run(Script.java:44)\\com.splunk.dbx.command.DbxQueryServerStart.main(DbxQueryServerStart.java:95)\\ 06-22-2024 14:54:04.800 +0800 INFO ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/server.sh" action=start_task_server, configFile=/opt/splunk/etc/apps/splunk_app_db_connect/config/dbx_task_server.yml 06-22-2024 14:54:04.842 +0800 INFO ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" action=start_dbxquery_server, configFile=/opt/splunk/etc/apps/splunk_app_db_connect/config/dbxquery_server.yml 06-22-2024 14:54:04.981 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/server.sh" 14:54:04.980 [main] INFO com.splunk.dbx.utils.SecurityFileGenerationUtil - initializing secret kv store collection 06-22-2024 14:54:05.015 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" 14:54:05.013 [main] INFO com.splunk.dbx.utils.SecurityFileGenerationUtil - initializing secret kv store collection 06-22-2024 14:54:05.102 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/server.sh" 14:54:05.101 [main] INFO com.splunk.dbx.utils.SecurityFileGenerationUtil - secret KV Store found, store=com.splunk.Entity@d7b1517 06-22-2024 14:54:05.129 +0800 ERROR ExecProcessor [201562 
ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" 14:54:05.129 [main] INFO com.splunk.dbx.utils.SecurityFileGenerationUtil - secret KV Store found, store=com.splunk.Entity@d7b1517 06-22-2024 14:54:05.214 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/server.sh" action=task_server_start_failed error=com.splunk.HttpException: HTTP 503 -- KV Store initialization failed. Please contact your system administrator. stack=com.splunk.HttpException.create(HttpException.java:84)\\com.splunk.DBXService.sendImpl(DBXService.java:132)\\com.splunk.DBXService.send(DBXService.java:44)\\com.splunk.HttpService.get(HttpService.java:172)\\com.splunk.dbx.model.repository.SecretKVStoreRepository.getSecrets(SecretKVStoreRepository.java:41)\\com.splunk.dbx.utils.SecurityFileGenerationUtil.getSecretsFromKvStore(SecurityFileGenerationUtil.java:261)\\com.splunk.dbx.utils.SecurityFileGenerationUtil.initEncryption(SecurityFileGenerationUtil.java:51)\\com.splunk.dbx.server.bootstrap.TaskServerStart.startTaskServer(TaskServerStart.java:108)\\com.splunk.dbx.server.bootstrap.TaskServerStart.streamEvents(TaskServerStart.java:69)\\com.splunk.modularinput.Script.run(Script.java:66)\\com.splunk.modularinput.Script.run(Script.java:44)\\com.splunk.dbx.server.bootstrap.TaskServerStart.main(TaskServerStart.java:145)\\ 06-22-2024 14:54:05.215 +0800 WARN ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/server.sh" com.splunk.modularinput.MalformedDataException: Events must have at least the data field set to be written to XML. 
06-22-2024 14:54:05.215 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/server.sh" com.splunk.modularinput.Event.writeTo(Event.java:65)\\com.splunk.modularinput.EventWriter.writeEvent(EventWriter.java:137)\\com.splunk.dbx.server.bootstrap.TaskServerStart.streamEvents(TaskServerStart.java:74)\\com.splunk.modularinput.Script.run(Script.java:66)\\com.splunk.modularinput.Script.run(Script.java:44)\\com.splunk.dbx.server.bootstrap.TaskServerStart.main(TaskServerStart.java:145)\\ 06-22-2024 14:54:05.233 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" action=dbxquery_server_start_failed error=com.splunk.HttpException: HTTP 503 -- KV Store initialization failed. Please contact your system administrator. stack=com.splunk.HttpException.create(HttpException.java:84)\\com.splunk.DBXService.sendImpl(DBXService.java:132)\\com.splunk.DBXService.send(DBXService.java:44)\\com.splunk.HttpService.get(HttpService.java:172)\\com.splunk.dbx.model.repository.SecretKVStoreRepository.getSecrets(SecretKVStoreRepository.java:41)\\com.splunk.dbx.utils.SecurityFileGenerationUtil.getSecretsFromKvStore(SecurityFileGenerationUtil.java:261)\\com.splunk.dbx.utils.SecurityFileGenerationUtil.initEncryption(SecurityFileGenerationUtil.java:51)\\com.splunk.dbx.command.DbxQueryServerStart.startDbxQueryServer(DbxQueryServerStart.java:82)\\com.splunk.dbx.command.DbxQueryServerStart.streamEvents(DbxQueryServerStart.java:50)\\com.splunk.modularinput.Script.run(Script.java:66)\\com.splunk.modularinput.Script.run(Script.java:44)\\com.splunk.dbx.command.DbxQueryServerStart.main(DbxQueryServerStart.java:95)\\ 06-22-2024 14:54:05.233 +0800 WARN ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" com.splunk.modularinput.MalformedDataException: Events must have at least the data 
field set to be written to XML.
06-22-2024 14:54:05.233 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" com.splunk.modularinput.Event.writeTo(Event.java:65)\\com.splunk.modularinput.EventWriter.writeEvent(EventWriter.java:137)\\com.splunk.dbx.command.DbxQueryServerStart.streamEvents(DbxQueryServerStart.java:51)\\com.splunk.modularinput.Script.run(Script.java:66)\\com.splunk.modularinput.Script.run(Script.java:44)\\com.splunk.dbx.command.DbxQueryServerStart.main(DbxQueryServerStart.java:95)\\

And here is the mongod.log:

2024-06-19T15:46:17.512+0800 W CONTROL [main] Option: sslMode is deprecated. Please use tlsMode instead.
2024-06-19T15:46:17.512+0800 W CONTROL [main] Option: sslPEMKeyFile is deprecated. Please use tlsCertificateKeyFile instead.
2024-06-19T15:46:17.512+0800 W CONTROL [main] Option: sslPEMKeyPassword is deprecated. Please use tlsCertificateKeyFilePassword instead.
2024-06-19T15:46:17.512+0800 W CONTROL [main] Option: sslCipherConfig is deprecated. Please use tlsCipherConfig instead.
2024-06-19T15:46:17.512+0800 W CONTROL [main] Option: sslAllowInvalidHostnames is deprecated. Please use tlsAllowInvalidHostnames instead.
2024-06-19T07:46:17.513Z W CONTROL [main] net.tls.tlsCipherConfig is deprecated. It will be removed in a future release.
2024-06-19T07:46:17.522Z W NETWORK [main] Server certificate has no compatible Subject Alternative Name. This may prevent TLS clients from connecting
2024-06-19T07:46:17.524Z W ASIO [main] No TransportLayer configured during NetworkInterface startup
2024-06-19T07:46:17.527Z I ACCESS [main] permissions on /opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key are too open

I tried creating a new SSL certificate, but it doesn't work, and I tried changing the permissions of /opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key, but I am still encountering the same error. What should I do? Please help.
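One thing worth checking, based on the last mongod.log line: MongoDB refuses key files that are readable by group or other users. A common remedy is to make the key owner-only (and owned by the user Splunk runs as), then restart Splunk. The sketch below demonstrates the permission change on a throwaway file rather than the live key; on a real system the target would be $SPLUNK_HOME/var/lib/splunk/kvstore/mongo/splunk.key.

```python
import os
import stat
import tempfile

# Throwaway stand-in for the real key file (hypothetical path on a live system:
# $SPLUNK_HOME/var/lib/splunk/kvstore/mongo/splunk.key).
fd, keyfile = tempfile.mkstemp()
os.close(fd)

# Owner read/write only; group and other get nothing (equivalent to chmod 600).
os.chmod(keyfile, 0o600)

mode = stat.S_IMODE(os.stat(keyfile).st_mode)
print(oct(mode))  # 0o600
```

If the permissions were already 600, also verify the file's owner matches the Splunk user, since a root-owned key can produce the same startup failure.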
My question is how can I save the log events from getting dropped when App_Name IN (*) is in force?

Like @ITWhisperer said, you didn't explain what you expect to get by saving the dropped events when App_Name IN (*) is in force. Unless you illustrate the desired output, which is an essential part of an answerable question, your question is a simple statement of contradictions. Since I suspect you do not merely want to contradict yourself, let me try mind reading: you want a count of events with App_Name, and a separate count of events without.

index=msad_hcv NOT ("forwarded")
| spath output=role_name path=auth.metadata.role_name
| mvexpand role_name
| rex field=role_name "(\w+-(?P<App_Name>[^\"]+))"
| search Environment=* type=* request.path=*
| eval app_name_or_no = if(isnull(App_Name), "no", "yes")
| stats count by app_name_or_no

If this tea-leaf reading is right, the question has nothing to do with events being dropped.

One more thing: I don't see any point in inserting that search command on the 4th line. It is much more effective to put all filters in the index search. What's wrong with this?

index=msad_hcv NOT ("forwarded") Environment=* type=* request.path=*
| spath output=role_name path=auth.metadata.role_name
| mvexpand role_name
| rex field=role_name "(\w+-(?P<App_Name>[^\"]+))"
| eval app_name_or_no = if(isnull(App_Name), "no", "yes")
| stats count by app_name_or_no
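As a quick cross-check of the bucketing logic, here is the same idea in Python on made-up events (the role_name values are hypothetical):

```python
import re

# Hypothetical stand-in events: the stats above simply bucket events by
# whether the rex managed to extract App_Name, then count each bucket.
events = [
    {"role_name": "svc-payments"},  # has a hyphen, App_Name extracted
    {"role_name": "svc-billing"},   # has a hyphen, App_Name extracted
    {"role_name": "nodash"},        # no hyphen, rex extracts nothing
]

counts = {"yes": 0, "no": 0}
for e in events:
    m = re.search(r'\w+-(?P<App_Name>[^"]+)', e["role_name"])
    counts["yes" if m else "no"] += 1

print(counts)  # {'yes': 2, 'no': 1}
```

The if(isnull(App_Name), "no", "yes") eval is exactly this presence test, so no event is dropped; each one lands in one of the two buckets.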
Have you tried the search I suggested? That does exactly what you are saying here, and doesn't use a lookup. (I understand field_A, field_B, etc., are stand-ins for real field names.)
Without other samples, I'd go with the most aggressive: | rex "title=(?<title>[^\"]+)"
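That rex can be checked outside Splunk with Python's re module on the tail of the sample event (trimmed here for brevity):

```python
import re

# Tail of the sample event from the question (trimmed stand-in).
raw = ('closed=0001-01-01 00:00:00 +0000 UTC, '
       'title=server123.corp: userTrap: 1 ABC Job: '
       '927370523:ABC_0001_DEF_XX_GHIJK_XXXX2_MOSU:Killed"')

# Equivalent of | rex "title=(?<title>[^\"]+)" : grab everything after
# "title=" up to (but not including) the next double quote.
m = re.search(r'title=(?P<title>[^"]+)', raw)
title = m.group('title')
print(title)
# server123.corp: userTrap: 1 ABC Job: 927370523:ABC_0001_DEF_XX_GHIJK_XXXX2_MOSU:Killed
```

Because [^"]+ is aggressive, it keeps matching colons and spaces until the closing quote of the msg field, which is why it works here but might over-match on differently shaped events.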
What happens if indexer acknowledgment is enabled and there is multisite clustering, or the minimum RF is not met due to indexer failure? If I understand it right, no acknowledgement will be sent until the data has been replicated as per the RF value, and in such a scenario the UF will hold the data block in its queue. Would ingestion stall when the queue is full (its default size is only a few MB) even though data ingestion is working on another site? If so, ingestion would completely stop even when ingestion is working fine and only replication is impacted.

I am looking at the below journeys:
UF --> Intermediate Heavy Forwarder --> Indexer
pcf microservices --> Intermediate Heavy Forwarder --> Indexer
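As a toy model (not Splunk internals), a small bounded queue shows why a full output queue stalls the producer even when everything upstream is healthy:

```python
import queue

# Toy model: the forwarder's output queue as a small bounded buffer.
# While acknowledgements are not coming back, nothing drains the buffer,
# so once it fills, further "ingestion" blocks.
q = queue.Queue(maxsize=3)  # stand-in for the few-MB output queue

for i in range(3):
    q.put(f"block-{i}")     # accepted while there is room

try:
    q.put_nowait("block-3")  # queue full, no acks draining it -> stall
    stalled = False
except queue.Full:
    stalled = True

print(stalled)  # True
```

The producer has no way to make progress until something consumes from the queue, which is the same back-pressure mechanism that propagates a blocked indexer back through intermediate forwarders to the source.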
Hi, I hope all is well. I want to ask for more information and a simple explanation, as I came across the Distributed Search document and came away with two questions:

What are search artifacts? They are the search results, and they should be replicated, but what is their content?

What is the knowledge bundle? A set of configurations transferred directly from the search head to the search peers, but why?

Thanks in advance!
Hoping to find a solution here for my rex query (I'm new to rex).

I have an event that looks like this:

time="2024-06-22T00:31:43.939620127Z" level=info msg="uuid="KENT-12345678-1234-1234-1234-123456789123", tkt=INC123456789, ci=SAP, state=Escalate, opened=2024-06-22 00:31:06, closed=0001-01-01 00:00:00 +0000 UTC, title=server123.corp: userTrap: 1 ABC Job: 927370523:ABC_0001_DEF_XX_GHIJK_XXXX2_MOSU:Killed"

How do I write a query that will extract the string "server123.corp: userTrap: 1 ABC Job: 927370523:ABC_0001_DEF_XX_GHIJK_XXXX2_MOSU:Killed"?

Thank you very much.
Just noticed this in our data: after we updated the TA-Akamai_SIEM version back in March of this year, our Akamai logs are no longer being parsed out into their respective fields. Any ideas as to what might be wrong?
Hi @Anud, We can optimize your search if you provide mock samples of your data, but here's an example using makeresults and your current search structure to simulate the fields required by the visualization: | makeresults format=csv data="QUE_NAM,FINAL,QUE_DEP S_FOO,MQ SUCCESS, S_FOO,CONN FAILED, S_FOO,MEND FAIL, S_FOO,,3" | stats sum(eval(if(FINAL=="MQ SUCCESS", 1, 0))) as good sum(eval(if(FINAL=="CONN FAILED", 1, 0))) as error sum(eval(if(FINAL=="MEND FAIL", 1, 0))) as warn avg(QUE_DEP) as label by QUE_NAM | rename QUE_NAM as to | eval from="internal", label="Avg: ".label." Good: ".good." Warn: ".warn." Error: ".error | append [| makeresults format=csv data="queue_name,current_depth BAR_Q,1 BAZ_R,2" | bin _time span=10m | stats avg(current_depth) as label by queue_name | rename queue_name as to | eval from="external", label="Avg: ".label | appendpipe [ stats values(to) as from | mvexpand from | eval to="internal" ]] good, error, and warn are special fields supported by the visualization. Add the label field to provide a custom link label, and leave the special fields intact to produce the flowing dot animation.