All Posts



You can use the Splunk search below to check for locked-out accounts:

sourcetype="wineventlog" EventCode=4740 OR EventCode=644
| eval src_nt_host=if(isnull(src_nt_host),host,src_nt_host)
| stats latest(_time) AS time latest(src_nt_host) AS host BY dest_nt_domain user
| eval ltime=strftime(time,"%c")
| table ltime, dest_nt_domain, user, host
| rename ltime AS "Lockout Time", dest_nt_domain AS Domain, user AS "Account Locked Out", host AS "Workstation"
Hello to everyone. We have more than 300 hosts sending syslog messages to the indexer cluster, which runs on Windows Server. All settings across the indexer cluster that relate to syslog ingestion look like this:

[udp://port_number]
connection_host = dns
index = index_name
sourcetype = sourcetype_name

So I expected to see no IP addresses in the host field when I ran searches, and I created an alert to make sure that no message has an IP in the host field. But a couple of hosts do have this problem. I know that PTR records are required for this setting, but we checked that the record exists; when I run "dnslookup *host_ip* *dns_server_ip*" everything looks OK. I also cleared the DNS cache across the indexer cluster, but I still see the problem. Does Splunk have internal logs that can help me identify where the problem is, or is my only option to capture a network traffic dump of the DNS queries?
Hi @Siddharthnegi, I don't think that's possible, but why do you want to remove it? It restores the default visualization you configured. Ciao. Giuseppe
Point 1. This will help with sourcetype naming conventions. (People still use names like my_app_sourcetype, i.e. with underscores, which isn't a problem, but there is a recommended naming convention - see the link below.)
https://docs.splunk.com/Documentation/AddOns/released/Overview/Sourcetypes

Point 2. There is a plethora of data sources; many are common, but at some point you will have a custom log source and will need to create your own sourcetype. These are some of the pretrained ones:
https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/Data/Listofpretrainedsourcetypes
Many common sourcetypes come out of the box with the Splunk TAs, so those are your starting point; use them as-is and do not change them, because they categorise the data, which is important for parsing. For any custom data source, analyse the log source, check the format, timestamp and other settings, and use props.conf to set the sourcetype following the naming convention standards - this makes an admin's life easier (a sketch follows this post). See the link below:
https://lantern.splunk.com/Splunk_Platform/Product_Tips/Data_Management/Configuring_new_source_types

Point 3. Syncing two different SH clusters means you have one Deployer for each - that's the Splunk setup. So you need some kind of repo, like git, where your KOs / apps are located, and keep that under code control. You can then use Ansible to deploy the apps to the deployers for them to push out, or use the Linux rsync command to keep a repo in sync with the deployers. Either way, you should have a strategy for this type of app management based on the tools you use.
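To make Point 2 concrete, here is a minimal props.conf sketch for a hypothetical custom source - the sourcetype name, timestamp format and line-breaking settings are assumptions and need to be adjusted to the actual log format:

# props.conf (hypothetical custom sourcetype following a vendor:product:format naming convention)
[acme:webshop:applog]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 25
TRUNCATE = 10000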
Hello, I'm trying to write a Splunk search for detecting unusual behaviour in email sending. Here is the SPL query:

| tstats summariesonly=true fillnull_value="N/D" dc(All_Email.internal_message_id) as total_emails from datamodel=Email where (All_Email.action="quarantined" OR All_Email.action="delivered") AND NOT [| `email_whitelist_generic`] by All_Email.src_user, All_Email.subject, All_Email.action
| `drop_dm_object_name("All_Email")`
| eventstats sum(eval(if(action="quarantined", count, 0))) as quarantined_count_peruser, sum(eval(if(action="delivered", count, 0))) as delivered_count_peruser by src_user, subject
| where total_emails>50 AND quarantined_count_peruser>10 AND delivered_count_peruser>0

I want to count the quarantined emails and the delivered ones separately and then filter them against some thresholds, but it seems that the eventstats command is not working as expected. I already used this logic for authentication searches and it works fine. Any help?
Can I remove the button which is just below the (-) button?
When I create an "on poll" action in the App Wizard, I always get the error: "Action type: Select a valid choice. ingest is not one of the available choices." Does anyone know a way to avoid this?
Following the documentation https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector#Send_data_to_HTTP_Event_Collector_on_Splunk_Cloud_Platform I have:

- Created a trial account on the Splunk Cloud Platform
- Generated a HEC token
- Sent telemetry data to the Splunk Cloud Platform using an OpenTelemetry collector with the Splunk HEC exporter:

splunk_hec:
  token: "<hec-token>"
  endpoint: https://prd-p-e7xnh.splunkcloud.com:8088/services/collector/event
  source: "otel"
  sourcetype: "otel"
  splunk_app_name: "ThousandEyes OpenTelemetry"
  tls:
    insecure: false

I see the following error in my `otel-collector`:

Post "https://splunkcloud.com:8088/services/collector/event": tls: failed to verify certificate: x509: certificate is not valid for any names, but wanted to match splunkcloud.com

The endpoint `https://prd-p-e7xnh.splunkcloud.com:8088` seems to have an invalid certificate. It was signed by a self-signed CA and does not include a subject name for the endpoint.

openssl s_client -showcerts -connect prd-p-e7xnh.splunkcloud.com:8088
CONNECTED(00000005)
depth=1 C = US, ST = CA, L = San Francisco, O = Splunk, CN = SplunkCommonCA, emailAddress = support@splunk.com
verify error:num=19:self-signed certificate in certificate chain
verify return:1
depth=1 C = US, ST = CA, L = San Francisco, O = Splunk, CN = SplunkCommonCA, emailAddress = support@splunk.com
verify return:1
depth=0 CN = SplunkServerDefaultCert, O = SplunkUser
verify return:1
---
Certificate chain
 0 s:CN = SplunkServerDefaultCert, O = SplunkUser
   i:C = US, ST = CA, L = San Francisco, O = Splunk, CN = SplunkCommonCA, emailAddress = support@splunk.com
   a:PKEY: rsaEncryption, 2048 (bit); sigalg: RSA-SHA256
   v:NotBefore: May 28 17:34:47 2024 GMT; NotAfter: May 28 17:34:47 2027 GMT

We confirmed that for the paid version, on port 443, Splunk is using a valid CA-signed certificate:

echo -n | openssl s_client -connect prd-p-e7xnh.splunkcloud.com:443 | openssl x509 -text -noout
Warning: Reading certificate from stdin since no -in or -new option is given
depth=2 C=US, O=DigiCert Inc, OU=www.digicert.com, CN=DigiCert Global Root G2
verify return:1
depth=1 C=US, O=DigiCert Inc, CN=DigiCert Global G2 TLS RSA SHA256 2020 CA1
verify return:1
depth=0 C=US, ST=California, L=San Francisco, O=Splunk Inc., CN=*.prd-p-e7xnh.splunkcloud.com
verify return:1
DONE
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            02:ac:04:07:e1:b9:47:0f:a1:83:02:a7:45:99:a4:5f
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, O=DigiCert Inc, CN=DigiCert Global G2 TLS RSA SHA256 2020 CA1
        Validity
            Not Before: May 28 00:00:00 2024 GMT
            Not After : May 27 23:59:59 2025 GMT
        Subject: C=US, ST=California, L=San Francisco, O=Splunk Inc., CN=*.prd-p-e7xnh.splunkcloud.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Authority Key Identifier:
                74:85:80:C0:66:C7:DF:37:DE:CF:BD:29:37:AA:03:1D:BE:ED:CD:17
            X509v3 Subject Key Identifier:
                35:18:36:ED:18:F5:18:A6:89:90:28:E0:12:AB:14:47:18:37:61:F9
            X509v3 Subject Alternative Name:
                DNS:*.prd-p-e7xnh.splunkcloud.com, DNS:prd-p-e7xnh.splunkcloud.com, DNS:http-inputs-prd-p-e7xnh.splunkcloud.com, DNS:*.http-inputs-prd-p-e7xnh.splunkcloud.com, DNS:akamai-inputs-prd-p-e7xnh.splunkcloud.com, DNS:*.akamai-inputs-prd-p-e7xnh.splunkcloud.com, DNS:http-inputs-ack-prd-p-e7xnh.splunkcloud.com, DNS:*.http-inputs-ack-prd-p-e7xnh.splunkcloud.com, DNS:http-inputs-firehose-prd-p-e7xnh.splunkcloud.com, DNS:*.http-inputs-firehose-prd-p-e7xnh.splunkcloud.com, DNS:*.pvt.prd-p-e7xnh.splunkcloud.com, DNS:pvt.prd-p-e7xnh.splunkcloud.com

Could you use the same certificate for both the Trial and the Paid version? Why are you using a different one? Could you please help us? This is blocking us when using Trial accounts. Thank you in advance.
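A possible temporary workaround (it disables certificate validation entirely, so it is only suitable while testing) is the OpenTelemetry collector's standard TLS client option to skip verification; this is only a sketch, reusing the endpoint from the config above:

splunk_hec:
  token: "<hec-token>"
  endpoint: https://prd-p-e7xnh.splunkcloud.com:8088/services/collector/event
  tls:
    # Assumption: skipping verification only while evaluating the trial stack
    insecure_skip_verify: true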
These all sort by the field values at the last time value of the timechart; they are not sorting by name - what makes you think that?
I would rather have a straight SPL search solution to drive the Trellis layout. The proposed solutions I have seen have the effect of sorting the resulting | timechart fields by name rather than by value. The goal is to sort by field value. I appreciate the attempts so far.
Adding to what @victor_menezes said: when uploading apps to Splunk Cloud you must pass various checks - Splunk AppInspect has various standards (on-premise is different). When you upload the app it runs the check_package_compression checks, and the package cannot contain anything starting with a "." or __MACOSX-type directories left behind on a Mac: "Splunk App does not contain any directories or files that start with a ., or directories that start with __MACOSX". So check your compressed app file and remove any hidden folders or files - it may even have a leading /:

tar -tzf your_file.tar.gz

You may be able to use the git archive command (but you still need to ensure hidden folders etc. are removed); a repackaging sketch follows below. It's worth checking here for the latest cloud check updates, as the criteria for cloud do change:
https://dev.splunk.com/enterprise/docs/relnotes/relnotes-appinspectcli/whatsnew/
https://dev.splunk.com/enterprise/reference/appinspect/appinspectcheck/
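For illustration, a minimal repackaging sketch - the app directory and file name my_app are made-up placeholders:

# Flag hidden files or macOS metadata inside the package
tar -tzf my_app.tar.gz | grep -E '(^|/)\.|__MACOSX'
# If anything shows up, clean the app directory and re-package
find my_app -name '.DS_Store' -delete
rm -rf my_app/__MACOSX
tar -czf my_app.tar.gz my_app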
You need to mind the basics of posing an answerable question: clearly illustrate the data input (in a text table), illustrate the desired output (also in text, unless it is a graphic visualization), and explain the logic in clear language. It is really unclear what the relationship between these field values is. If I have to speculate: Appname and requestor are always paired, and S.No and host are always paired. Is this correct? Based on this, I made up the following mock data:

Appname  S.No  host  logtype  requestor    source
A        1     ghi   somelog  abc@def.com  somelog.jkl
A        1     ghi   somelog  abc@def.com  somelog.klm
A        1     ghi   somelog  abc@def.com  somelog.lmn
B        2     hij   somelog  bcd@efg.com  somelog.klm
B        2     hij   somelog  bcd@efg.com  somelog.jkl
C        3     ijk   somelog  cde@fgh.com  somelog.lmn
C        3     ijk   somelog  cde@fgh.com  somelog.aaa
A        4     xyz   somelog  abc@def.com  somelog.opq
A        4     xyz   somelog  abc@def.com  somelog.opq
A        4     xyz   somelog  abc@def.com  somelog.bbb
B        5     wxy   somelog  bcd@efg.com  somelog.rst
B        5     wxy   somelog  bcd@efg.com  somelog.uvw
C        6     vwx   somelog  cde@fgh.com  somelog.uvw
C        6     vwx   somelog  cde@fgh.com  somelog.rst
C        6     vwx   somelog  cde@fgh.com  somelog.opq
A        7     aaa   somelog  abc@def.com  somelog.klm
A        7     aaa   somelog  abc@def.com  somelog.lmn
A        7     aaa   somelog  abc@def.com  somelog.jkl
B        8     bbb   somelog  bcd@efg.com  somelog.klm
C        9     ccc   somelog  cde@fgh.com  somelog.lmn
C        9     ccc   somelog  cde@fgh.com  somelog.aaa
C        9     ccc   somelog  cde@fgh.com  somelog.bbb
A        10    hij   somelog  abc@def.com  somelog.aaa
B        11    jkl   somelog  bcd@efg.com  somelog.aaa
B        11    jkl   somelog  bcd@efg.com  somelog.bbb
B        11    jkl   somelog  bcd@efg.com  somelog.ccc
C        12    ijk   somelog  cde@fgh.com  somelog.abc
C        12    ijk   somelog  cde@fgh.com  somelog.ccc

As @ITWhisperer says, Splunk is not a spreadsheet. You cannot have cell merges and such. But if you really want to simulate the effect, you can do something like this:

| stats values(source) as source by Appname logtype requestor S.No host
| rename S.No as S_No
| eval source = mvjoin(source, ", ")
| tojson S_No host source
| stats values(_raw) as _raw by Appname requestor logtype
| eval host = mvmap(_raw, spath(_raw, "host"))
| eval S.No = mvmap(_raw, spath(_raw, "S_No"))
| eval source = mvmap(_raw, spath(_raw, "source"))
| table S.No Appname requestor logtype source

The result from the above mock data is one row per Appname/requestor/logtype, where the multivalue S.No and source cells line up entry by entry:

S.No 1, 10, 4, 7  | Appname A | requestor abc@def.com | logtype somelog | source: "somelog.jkl, somelog.klm, somelog.lmn" / "somelog.aaa" / "somelog.bbb, somelog.opq" / "somelog.jkl, somelog.klm, somelog.lmn"
S.No 11, 2, 5, 8  | Appname B | requestor bcd@efg.com | logtype somelog | source: "somelog.aaa, somelog.bbb, somelog.ccc" / "somelog.jkl, somelog.klm" / "somelog.rst, somelog.uvw" / "somelog.klm"
S.No 12, 3, 6, 9  | Appname C | requestor cde@fgh.com | logtype somelog | source: "somelog.abc, somelog.ccc" / "somelog.aaa, somelog.lmn" / "somelog.opq, somelog.rst, somelog.uvw" / "somelog.aaa, somelog.bbb, somelog.lmn"

Is this something you are looking for?
Here is mock data emulation:     | makeresults format=csv data="S.No, requestor, Appname, host, logtype, source 1, abc@def.com, A, ghi, somelog, somelog.jkl 1, abc@def.com, A, ghi, somelog, somelog.klm 1, abc@def.com, A, ghi, somelog, somelog.lmn 2, bcd@efg.com, B, hij, somelog, somelog.klm 2, bcd@efg.com, B, hij, somelog, somelog.jkl 3, cde@fgh.com, C, ijk, somelog, somelog.lmn 3, cde@fgh.com, C, ijk, somelog, somelog.aaa 4, abc@def.com, A, xyz, somelog, somelog.opq 4, abc@def.com, A, xyz, somelog, somelog.opq 4, abc@def.com, A, xyz, somelog, somelog.bbb 5, bcd@efg.com, B, wxy, somelog, somelog.rst 5, bcd@efg.com, B, wxy, somelog, somelog.uvw 6, cde@fgh.com, C, vwx, somelog, somelog.uvw 6, cde@fgh.com, C, vwx, somelog, somelog.rst 6, cde@fgh.com, C, vwx, somelog, somelog.opq 7, abc@def.com, A, aaa, somelog, somelog.klm 7, abc@def.com, A, aaa, somelog, somelog.lmn 7, abc@def.com, A, aaa, somelog, somelog.jkl 8, bcd@efg.com, B, bbb, somelog, somelog.klm 9, cde@fgh.com, C, ccc, somelog, somelog.lmn 9, cde@fgh.com, C, ccc, somelog, somelog.aaa 9, cde@fgh.com, C, ccc, somelog, somelog.bbb 10, abc@def.com, A, hij, somelog, somelog.aaa 11, bcd@efg.com, B, jkl, somelog, somelog.aaa 11, bcd@efg.com, B, jkl, somelog, somelog.bbb 11, bcd@efg.com, B, jkl, somelog, somelog.ccc 12, cde@fgh.com, C, ijk, somelog, somelog.abc 12, cde@fgh.com, C, ijk, somelog, somelog.ccc" ``` data emulation above ```  
So how can I ensure asset data correlation with the logs, as it's based on IPs? Anyway, can it be done with hostname?
Hi @simuneer, your request isn't so easy to answer because a lot of information is missing:
do you have an asset list from your data (also from your Vulnerability Assessment tool)?
do you have a CVE list from a site (e.g. VulnDB) or a tool?
do you have a common key to correlate your CVE list with the assets (e.g. OS)?
With the above information it's possible - we developed this for one of our customers - but it isn't a question for the Community, it's a project (a rough sketch of the idea follows below). Ciao. Giuseppe
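Only to illustrate the kind of correlation involved, a minimal SPL sketch - the index, sourcetype, field names and the cve_by_os lookup definition are all assumptions, not an existing configuration:

index=asset_inventory sourcetype=asset:list
| stats latest(os) as os latest(ip) as ip by hostname
| lookup cve_by_os os OUTPUT cve severity
| where isnotnull(cve)
| table hostname ip os cve severity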
There are also two pure search versions, which are very ugly. Assuming this is a base search that creates the timechart:

| makeresults count=3600
| streamstats c
| eval _time=relative_time(now(), "@m") - c
| eval type=mvindex(split("Budgeting,General Ledger,Payables,Expenses,eProcurement,Purchasing",","), random() % 6)
| timechart fixedrange=f span=1m count by type

then this version does a double transpose - the good thing about the double transpose is that it does not change the column order on the second transpose - however, it does require that you know the sort column number, although you could work that out in a separate dashboard search:

| transpose 0
| sort - "row 60"
| transpose 0 header_field=column
| fields - column

This version uses a mechanism to get the column names, sort them and then prefix each column with a numeric, which will cause the columns to be ordered correctly; however, you can't rename them back, as Splunk will then reorder alphabetically.

| appendpipe
    [ | stats latest(*) as *
      | transpose 0
      | sort - "row 1"
      | streamstats c
      | eval name="_".c."_".column
      | fields name ]
| eventstats values(name) as _name
| fields - name
| foreach *
    [ eval f=mvindex(_name, mvfind(_name, "<<FIELD>>")), f=replace(f, "^_", ""), {f}='<<FIELD>>'
    | fields - "<<FIELD>>" ]
| fields - f
| where isnotnull(_time)
Hi @shadysplunker, sorry, maybe I misunderstood. Please describe your requirements in more detail: tcpout is only for sending logs from a UF to an indexer, so why are you speaking of HTTP? The link above describes how to perform selective routing of logs from a UF to indexers (a minimal sketch is below). Ciao. Giuseppe
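As an illustration of that selective routing on a Universal Forwarder, a minimal sketch - the group names, hosts, ports and monitored path are made-up placeholders:

# outputs.conf on the UF
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

[tcpout:special_indexers]
server = idx3.example.com:9997

# inputs.conf on the UF - route only this input to the special group
[monitor:///var/log/special/app.log]
index = special
_TCP_ROUTING = special_indexers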
Hi @VijaySrrie, in general you only have to read them using a Forwarder (Universal or Heavy); if you don't have a Forwarder on the server where the reports are stored, you should copy them to a server that has one. But the question is: what's the format of these reports? They are readable if they are in text format (.txt or .csv), not .pdf or .xlsx. To bring them in, you only have to create an input stanza in inputs.conf (on the Forwarder) to read these reports - a sketch is below. Ciao. Giuseppe
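A minimal inputs.conf sketch, assuming the reports are .csv files in a hypothetical /data/reports directory that should land in a hypothetical reports index:

# inputs.conf on the forwarder (path, index and sourcetype are assumptions)
[monitor:///data/reports/*.csv]
index = reports
sourcetype = csv
disabled = false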
I have wasted so many hours trying to troubleshoot why my ForwardedEvents were not being ingested into the index. Thank you, this fixed the issue. The formatting of the search results is very different though, and not all fields are showing up in the results; not sure why.

Edit: So how can I get newly ingested events to look the same and have the same fields? E.g. I'm only using Splunk to ingest forwarded AppLocker logs, and I can't display fields for publisher or file path for newly ingested events. They only show up for old ones that were ingested before the issue.

Edit 2: Fixed it, I think, by adding this line back in: renderXML = 1
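For reference, that setting normally lives in the Windows event log input stanza; a sketch, assuming the ForwardedEvents channel and an example index name:

# inputs.conf on the Windows collector (channel and index are assumptions)
[WinEventLog://ForwardedEvents]
disabled = 0
renderXml = 1
index = wineventlog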
Just in case anyone has the same issue as me: my KV store was broken, which stopped the task server from starting.

splunkd.log:
05-31-2024 01:59:56.804 +0000 ERROR ExecProcessor [1322 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/bin/dbxquery.sh" 01:59:56.804 [main] INFO com.splunk.dbx.utils.SecurityFilesGenerator - Initializing secret kv store collection
05-31-2024 01:59:56.867 +0000 ERROR ExecProcessor [1322 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/bin/dbxquery.sh" 01:59:56.867 [main] INFO com.splunk.dbx.utils.SecurityFilesGenerator - secret KV Store not found, creating

mongod.log:
2024-05-31T03:20:05.019Z I ACCESS [main] permissions on /opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key are too open

Changing the permissions on /opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key to 400 and restarting fixed the issue for me.
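The equivalent commands, assuming a default /opt/splunk install path and that Splunk runs as the splunk user:

# Tighten the key file permissions and restart Splunk (path and user are assumptions)
chmod 400 /opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key
chown splunk:splunk /opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key
/opt/splunk/bin/splunk restart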
There are a couple of ways (at least) to do this. Here is an example dashboard that shows three techniques:
1. Uses a token in a base search to define the sort order as a token ($sort_order$) - there is an annoying issue with this method, which means that once the trellis is shown with the order, it will NOT reorder the trellis if the underlying table order changes.
2. Uses 6 separate single panels aligned horizontally and six tokens that define the display for the viz ($f0$ to $f5$). This re-orders on change.
3. Uses @chrisyounger's Number Set viz - https://splunkbase.splunk.com/app/4537 - which will do all this for you, does not require tokens, and will re-order when things change.
There is possibly a way to do it directly in the search (appendpipe...?), but with the dashboard it's pretty simple.

<dashboard version="1.1" theme="light">
  <label>Sort_TC</label>
  <row>
    <panel>
      <table depends="$hidden$">
        <search id="base">
          <query>
| makeresults count=3600
| streamstats c
| eval _time=relative_time(now(), "@m") - c
| eval type=mvindex(split("Budgeting,General Ledger,Payables,Expenses,eProcurement,Purchasing",","), random() % 6)
| timechart fixedrange=f span=1m count by type
          </query>
          <earliest>-60m@m</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
      <table depends="$hidden$">
        <title>Sorting order is $sort_order$</title>
        <search base="base">
          <done>
            <set token="sort_order">$result.sort$</set>
            <set token="f0">$result.f0$</set>
            <set token="f1">$result.f1$</set>
            <set token="f2">$result.f2$</set>
            <set token="f3">$result.f3$</set>
            <set token="f4">$result.f4$</set>
            <set token="f5">$result.f5$</set>
          </done>
          <query>| tail 1
| fields - _span _time
| transpose 0
| sort - "row 1"
| stats list(column) as sort list("row 1") as counts
| foreach 0 1 2 3 4 5
    [ eval f&lt;&lt;FIELD&gt;&gt;=mvindex(sort, &lt;&lt;FIELD&gt;&gt;) ]
| eval sort="\"".mvjoin(sort, "\" \"")."\""</query>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
      <single>
        <search base="base">
          <query>| table _time $sort_order$</query>
        </search>
        <option name="colorMode">block</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
        <option name="refresh.display">progressbar</option>
        <option name="trellis.enabled">1</option>
        <option name="useColors">1</option>
      </single>
    </panel>
  </row>
  <row>
    <panel>
      <single>
        <title>$f0$</title>
        <search base="base">
          <query>| table _time $f0|s$</query>
        </search>
        <option name="colorMode">block</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
        <option name="useColors">1</option>
      </single>
      <single>
        <title>$f1$</title>
        <search base="base">
          <query>| table _time $f1|s$</query>
        </search>
        <option name="colorMode">block</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
        <option name="useColors">1</option>
      </single>
      <single>
        <title>$f2$</title>
        <search base="base">
          <query>| table _time $f2|s$</query>
        </search>
        <option name="colorMode">block</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
        <option name="useColors">1</option>
      </single>
      <single>
        <title>$f3$</title>
        <search base="base">
          <query>| table _time $f3|s$</query>
        </search>
        <option name="colorMode">block</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
        <option name="useColors">1</option>
      </single>
      <single>
        <title>$f4$</title>
        <search base="base">
          <query>| table _time $f4|s$</query>
        </search>
        <option name="colorMode">block</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
        <option name="useColors">1</option>
      </single>
      <single>
        <title>$f5$</title>
        <search base="base">
          <query>| table _time $f5|s$</query>
        </search>
        <option name="colorMode">block</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
        <option name="useColors">1</option>
      </single>
    </panel>
  </row>
  <row>
    <panel>
      <viz type="number_set_viz.number_set_viz">
        <search base="base">
          <query>
| stats sparkline(max(*)) as sparkline_* latest(*) as *
| appendpipe
    [ | foreach sparkline_* [ eval &lt;&lt;MATCHSTR&gt;&gt;='&lt;&lt;FIELD&gt;&gt;'] ]
| fields - sparkline_*
| transpose 0
| rename "row 1" as value, column as title, "row 2" as sparkline
| sort - value
          </query>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </viz>
    </panel>
  </row>
</dashboard>