All Topics

https://splunkbase.splunk.com/app/3079 Qmulos - developer, is this a free or paid app? If paid, where can I find pricing? Thanks, A
I need to filter a list of timestamps which are less than _time.

This works:

| makeresults count=1
| eval timestamps = mvappend("1570000000", "1570000020")
| eval older = mvfilter(timestamps < 1570000010)

but the compared value is whatever is in _time, and this does not work:

| makeresults count=1
| eval timestamps = mvappend("1570000000", "1570000020")
| eval _time = 1570000010
| eval older = mvfilter(timestamps < _time)

I know the timestamps work, because this does work:

| makeresults count=1
| eval timestamps = mvappend("1570000000", "1570000020")
| eval older = mvfilter(timestamps < now())

Why do now() and static values work, but this does not?

| makeresults count=1
| eval timestamps = mvappend("1570000000", "1570000020")
| eval now_time = now()
| eval older = mvfilter(timestamps < now_time)

How can I get a variable in there to compare, since I need to compare the list to _time?
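A likely explanation: mvfilter() can reference only the multivalue field being filtered, so any other field name (_time, now_time) in the predicate fails, while literals and function calls such as now() are fine. A minimal sketch of a workaround using mvmap(), whose per-element expression can also see other fields of the event (assuming Splunk 8.0+):

| makeresults count=1
| eval timestamps = mvappend("1570000000", "1570000020")
| eval _time = 1570000010
``` mvmap evaluates the expression once per element; null() drops non-matches ```
| eval older = mvmap(timestamps, if(tonumber(timestamps) < _time, timestamps, null()))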
Hi Community, I'm exploring ways to ingest data into Splunk Cloud from an Amazon S3 bucket that has multiple directories and multiple files to be ingested into Splunk. I have assessed the Generic S3, SQS-based S3, and Data Manager inputs for AWS available in Splunk, but I am not getting the required outcome. My use case is below.

There's an S3 bucket named exampledatastore containing a directory named statichexcodedefinition, under which there are multiple message IDs and dates. The example S3 structure is:

s3://exampledatastore/statichexcodedefinition/{messageId}/functionname/{date}/* - functionnameattribute

The {messageId} and {date} values are dynamic. I have a start date to begin with, but the messageId varies. Can you please assist me with how to get this data into Splunk? Many thanks!
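For reference, a minimal sketch of a Generic S3 input stanza in the Splunk Add-on for AWS (the stanza name and account name here are hypothetical placeholders). Note that Generic S3 only supports a literal key-name prefix, so the dynamic {messageId} segment cannot be wildcarded mid-path; SQS-based S3, which is driven by object-created notifications regardless of key layout, is usually the better fit for this structure:

[aws_s3://exampledatastore_static]
# hypothetical AWS account name configured in the add-on
aws_account = my_aws_account
bucket_name = exampledatastore
# literal prefix only; no per-segment wildcards
key_name = statichexcodedefinition/
initial_scan_datetime = 2025-01-01T00:00:00Z
sourcetype = aws:s3
interval = 300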
Hello Splunkers, hardcoded time modifiers inside a simple search don't work in v9.4.3; the search only takes the input from the time-range presets. Do you also experience a similar issue?

index=index earliest="-7d@d" latest="-1m@m"

With my preset set to the last 15 minutes, I get this output:

earliestTime: 07/25/2025 10:40:01.636
latestTime: 07/25/2025 10:52:59.564

Very strange. Nothing is mentioned about this in the release notes.
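One way to check which time range a search actually ran with is addinfo, which attaches the effective boundaries as epoch fields; a minimal diagnostic sketch:

index=_internal earliest="-7d@d" latest="-1m@m"
| addinfo
| stats min(info_min_time) as earliestTime, max(info_max_time) as latestTime
| fieldformat earliestTime=strftime(earliestTime, "%m/%d/%Y %H:%M:%S.%3N")
| fieldformat latestTime=strftime(latestTime, "%m/%d/%Y %H:%M:%S.%3N")

If info_min_time/info_max_time reflect the preset rather than -7d@d, something is overriding the inline modifiers before the search dispatches.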
Can anyone please confirm whether the AppDynamics machine agent supports TLS 1.3? We are using Java agent 25.4.0.37061 on the Linux x64 platform. Can anyone suggest an answer or point me towards the relevant documentation? Thanks
Hello folks, we are upgrading splunkforwarder to 9.4.x (from 8.x). We build a Splunk sidecar image for our k8s application, and I noticed that the same procedure that worked in forwarder 8.x no longer works in 9.4.x. During docker image startup, the process clearly hangs and waits for interaction:

bash-4.4$ ps -ef
UID PID PPID C STIME TTY TIME CMD
splunkf+ 1 0 0 02:11 ? 00:00:00 /bin/bash /entrypoint.sh
splunkf+ 59 1 99 02:11 ? 00:01:25 /opt/splunkforwarder/bin/splunk edit user admin -password XXXXXXXX -role admin -auth admin:xxxxxx --answer-yes --accept-license --no-prompt
splunkf+ 61 0 0 02:12 pts/0 00:00:00 /bin/bash
splunkf+ 68 61 0 02:12 pts/0 00:00:00 ps -ef

bash-4.4$ rpm -qa | grep splunkforwarder
splunkforwarder-9.4.3-237ebbd22314.x86_64

There is a workaround of adding "tty: true" to the k8s deployment template, but that would take a lot of effort in our environment. Any idea whether a newer version has a fix, or whether any splunk command parameter can be used to bypass the tty requirement? Thanks.
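One documented way to set the admin password without any prompt is to seed it before first start instead of calling splunk edit user; a minimal sketch (password is a placeholder), written into the image before splunk start --accept-license --answer-yes --no-prompt runs:

# /opt/splunkforwarder/etc/system/local/user-seed.conf
[user_info]
USERNAME = admin
PASSWORD = ChangeMe123!

This avoids the interactive code path entirely, though whether it also removes the tty requirement in 9.4.x would need testing in your image.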
I am trying to find the time taken by our processes. I wrote a basic query that fetches the start time, end time, and the difference for a particular interaction, using max and min to find the start and end times. But I am not sure how to look for multiple process start and end times by looking at the messages.

index=application_na sourcetype=my_logs:hec appl="*" message="***" interactionid=12345
| table interactionid, seq, _time, host, severity, message, msgsource
| sort _time
| stats min(_time) as StartTime, max(_time) as EndTime by interactionid
| eval Difference=EndTime-StartTime
| fieldformat StartTime=strftime(StartTime, "%Y-%m-%d %H:%M:%S.%3N")
| fieldformat EndTime=strftime(EndTime, "%Y-%m-%d %H:%M:%S.%3N")
| fieldformat Difference=tostring(Difference,"duration")
| table interactionid, StartTime, EndTime, Difference

I have messages that look like this:

interactionid | _time | message
12345 | 2025-06-26 07:55:56.317 | TimeMarker: WebService: Received request. (DoPayment - ID:1721 Amount:16 Acc:1234)
12345 | 2025-06-26 07:55:56.717 | OtherApp: -> Sending request with timeout value: 15
12345 | 2025-06-26 07:55:57.512 | TimeMarker: OtherApp: Received result from OtherApp (SALE - ID:1721 Amount:16.00 Acc:1234)
12345 | 2025-06-26 07:55:58.017 | TimeMarker: WebService: Sending result @20234ms. (DoPayment - ID:1721 Amount:16 Acc:1234)

I want to get the time taken by OtherApp from when it received a request to when it responded back to my app, and then the total time taken by my service DoPayment. Is this achievable? The output I am looking for is a table with these columns:

interactionid | DoPayment Start | OtherApp Start | OtherApp End | DoPayment End
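One possible approach, assuming the marker strings shown above are stable: classify each message into a lifecycle marker with case(), then pivot with chart so each marker becomes a column; a minimal sketch:

index=application_na sourcetype=my_logs:hec interactionid=12345
``` tag each event by which lifecycle marker it matches ```
| eval marker=case(
    like(message, "%WebService: Received request%"), "DoPayment Start",
    like(message, "%Sending request with timeout%"), "OtherApp Start",
    like(message, "%Received result from OtherApp%"), "OtherApp End",
    like(message, "%WebService: Sending result%"), "DoPayment End")
| where isnotnull(marker)
``` one column per marker, earliest matching time per interaction ```
| chart min(_time) over interactionid by marker
| eval OtherAppDuration='OtherApp End'-'OtherApp Start', TotalDuration='DoPayment End'-'DoPayment Start'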
I have a .NET application logging template-formatted log messages with the Serilog library. Since everything is in JSON format, the logs are great for filtering results when I know the fields to use, but I am having a hard time just reading logs when I don't know the fields available. For example, the application might log things like:

Log.Information("Just got a request {request} in endpoint {endpoint} with {httpMethod}", request, endpoint, httpMethod);

And in Splunk I will see something like:

{
  "msg": {
    "@mt": "Just got a request {request} in endpoint {endpoint} with {httpMethod}",
    "@sp": "11111",
    "request": "some_data",
    "endpoint": "some_url",
    "httpMethod": "POST"
  }
}

This is awesome for creating Splunk queries using msg.request or msg.endpoint, but since the application logs pretty much everything using these Serilog message templates, when I am just doing investigations I have a hard time producing readable results, because everything is hidden behind a placeholder. I am trying to achieve something like:

<some_guid> index=some_index | table _time msg.@mt

and of course msg.@mt will just give me the log line with the placeholders. How can I bring back the full log line in the table with the actual values?
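One known trick, assuming the placeholder names always match the JSON property names: start from msg.@mt and substitute each msg.* field into its placeholder with foreach, whose <<MATCHSTR>> token expands to the part of the field name matched by the wildcard; a minimal sketch:

<some_guid> index=some_index
| eval rendered='msg.@mt'
``` for every msg.<name> field, replace {<name>} in the template with the field's value ```
| foreach msg.* [ eval rendered=replace(rendered, "\{<<MATCHSTR>>\}", '<<FIELD>>') ]
| table _time rendered

Fields whose names start with @ (like @sp) have no matching {placeholder} in the template, so their replace is simply a no-op.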
I've developed TAs previously, and when using python2, everything worked just fine. But now, using python3 with Splunk 9.x, it seems nothing works. I am trying to develop a TA that makes some REST calls out to a 3rd-party service, and then uses those values in some local confs. It's been a nightmare to try to make this work.

I started with a modular input design, but contrary to the docs, my Python code would never receive a Splunk token on STDIN. I literally had this working perfectly in a python2 TA. This time? It doesn't matter how or when I attempt to read STDIN, the python3 code *NEVER RECEIVES ANYTHING*. Finally I just gave up on this...

Next try was a scripted input; at least this **bleep** thing does receive a token on STDIN. Great, that token can be used with the SDK, right? RIGHT??? Well, no, because 1) splunklib is not installed/included in the Splunk Python env, 2) attempting to use the system Python causes the whole **bleep** thing to crash, and 3) including splunklib inside the TA and attempting to import it by manipulating Python paths is also horribly broken.

If we munge the Python system paths thusly, we can in theory import our included libs (not concerned if this is idiomatic Python; it works, m'kay?):

import os, sys

# derive <TA>/lib from the script's own path and add it to the module search path
modules = sys.argv[0].split('/')[:-2]
modules.append('lib')
sys.path.append('/'.join(modules))

This inserts our local lib path into Python's lib search dirs, and it works to find splunklib. But then splunklib fails to load completely:

ImportError: libssl.so.1.0.0: cannot open shared object file: No such file or directory

This is true even if LD_LIBRARY_PATH points to a dir containing libssl.so.1.0.0. I suspect this is because Splunk is also doing an LD_PRELOAD="libdlwrapper.so". I don't know what this library is or what it's doing, but I suspect it's breaking my env and preventing anything from running. It doesn't actually matter, though: if I remove my "import splunklib" and just leave the REST client to attempt its HTTPS request, that too is apparently horribly broken:

...(Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available"))

What in the everloving fsck is going on here??!? Best I can tell, these two things are now true: 1) splunklib cannot be used from a TA, and 2) TAs cannot make HTTPS requests.

This is happening in a clean-room environment with a fresh Splunk install on a host that is not running SELinux or AppArmor or any other MAC system that might interfere. This is very much a problem with Splunk and splunklib. So, how exactly can splunklib be used in TAs? And how exactly can TAs execute HTTPS requests?
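For what it's worth, a common pattern in packaged add-ons is to bundle pure-Python libraries under <TA>/lib and prepend that directory based on the script's own file location rather than sys.argv; a minimal sketch (the lib layout is an assumption):

import os
import sys

# <TA>/bin/script.py -> <TA>/lib
ta_lib = os.path.abspath(os.path.join(os.path.dirname(os.path.abspath(__file__)), '..', 'lib'))
sys.path.insert(0, ta_lib)

import splunklib.client  # resolved from <TA>/lib, assuming the SDK was copied there

This does not by itself explain the libssl.so.1.0.0 error, which suggests a compiled module linked against OpenSSL 1.0 is being loaded under Splunk's bundled runtime; pure-Python splunklib should not need it.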
A week ago I created a summary index named waf_opco_yes_summary, and it is working fine. Now I've been asked to change the index name to opco_yes_summary; the data in the existing summary index should move to the new index, and the old index should not be visible anywhere, in dashboards or searches. It should be deleted and all its data moved to the new index. What can I do here?

One more problem: we created a single summary index for all applications and are afraid of granting access to it, because anyone who can see it can see other apps' summary data, which would be a security issue. We have created a dashboard on the summary index and disabled "open in search". At some point we need to give users access to the summary index, and if they then search index=*, both their restricted index and this summary index show up, which is risky. Is there any way to restrict users from running index=*?

NOTE - we are already using RBAC to restrict users to their specific indexes, but this summary index holds summarized data for all of them. Any way to restrict this? We can't create a summary index per application. In the dashboard we do restrict them: a field must be selected before the summary-index panel renders with that filter applied. How do people handle this type of situation?
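On the first part: an index cannot be renamed, but the existing summary events can be copied into the new index with collect, after which the old index can be removed from indexes.conf once verified; a minimal sketch (run once over all time, assuming opco_yes_summary already exists):

index=waf_opco_yes_summary
| collect index=opco_yes_summary

On the second part: restricting index=* is done with roles, not searches. srchIndexesAllowed in authorize.conf controls which indexes a role can search at all, so a summary index left out of a role's allowed list never appears in that role's results, even for index=*.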
Hello! I have the following query with the provided fields to track consumption data for customers:

action=load OR action=Download customer!="" publicationId="*" topic="*"
| eval Month=strftime(_time, "%b-%y")
| stats count by customer, Month, product, publicationId, topic
| streamstats count as product_rank by customer, Month
| where product_rank <= 5
| table customer, product, publicationId, topic, count, Month

However, I don't believe it achieves what I am aiming for. The data is structured as follows: products > publication IDs within those products > topics within those specific publication IDs. What I am trying to accomplish: find the top 5 products per customer per month; then, for each of those 5 products, the top 5 publicationIds within them; and then, for each publicationId, the top 5 topics within it.
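One way to rank at each level, assuming "top" means highest event count: aggregate once, then repeat a sort + streamstats dc() pass per level; a minimal sketch of the first two levels:

action=load OR action=Download customer!="" publicationId="*" topic="*"
| eval Month=strftime(_time, "%b-%y")
| stats count by customer, Month, product, publicationId, topic
``` level 1: rank products by their total count within customer+Month ```
| eventstats sum(count) as product_total by customer, Month, product
| sort 0 customer, Month, -product_total
| streamstats dc(product) as product_rank by customer, Month
| where product_rank <= 5
``` level 2: rank publicationIds within each surviving product ```
| eventstats sum(count) as pub_total by customer, Month, product, publicationId
| sort 0 customer, Month, product, -pub_total
| streamstats dc(publicationId) as pub_rank by customer, Month, product
| where pub_rank <= 5

The same eventstats/sort/streamstats block repeats once more for topic within publicationId.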
Hello Community, I am setting up an OpenTelemetry demo application in an on-prem Docker environment, following the steps outlined in this documentation: https://lantern.splunk.com/Observability/Product_Tips/Observability_Cloud/Setting_up_the_OpenTelemetry_Demo_in_Docker

However, the OTel Collector throws the following error while connecting to the ingest URL:

tag=otel-collector 2025-07-24T12:49:17.841Z info internal/retry_sender.go:133 Exporting failed. Will retry the request after interval. {"resource": {"service.instance.id": "45be9d90-2946-4ae4-8cd9-f0edff3bc822", "service.name": "otelcol", "service.version": "v0.129.0"}, "otelcol.component.id": "otlphttp", "otelcol.component.kind": "exporter", "otelcol.signal": "traces", "error": "failed to make an HTTP request: Post \"https://ingest.us1.signalfx.com/v2/trace/otlp\": net/http: TLS handshake timeout", "interval": "27.272487507s"}

I tried curl, and it hangs during the TLS handshake:

[root@kyn-app-01 opentelemetry-demo]# curl -v https://api.us1.signalfx.com/v2/apm/correlate/host.name/test/service \
  -X PUT \
  -H "X-SF-TOKEN: accesstoken" \
  -H "Content-Type: application/json" \
  -d '{"value": "test"}'
* Trying 54.203.64.116:443...
* Connected to api.us1.signalfx.com (54.203.64.116) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: Connection reset by peer in connection to api.us1.signalfx.com:443
* Closing connection 0
curl: (35) OpenSSL SSL_connect: Connection reset by peer in connection to api.us1.signalfx.com:443

Kindly suggest what went wrong and how to fix it. Note: the firewall is disabled and there is no proxy.

Regards, Eshwar
I am attempting to run a query that finds the status of three services and lists which ones are failed and which are running. I only want to display the hosts that failed and the statuses of those services. The end goal is to create an alert.

The following query produces no results:

index="server" host="*" source="Unix:Service"
| eval IPTABLES = if(UNIT=iptables.service AND (ACTIVE="failed" OR ACTIVE="inactive"), "failed", "OK")
| eval AUDITD = if(UNIT=auditd.service AND (ACTIVE="failed" OR ACTIVE="inactive"), "failed", "OK")
| eval CHRONYD = if(UNIT=chronyd.service AND (ACTIVE="failed" OR ACTIVE="inactive"), "failed", "OK")
| dedup host
| table host IPTABLES AUDITD CHRONYD

This query works:

index="server" host="*" source="Unix:Service" UNIT=iptables.service
| eval IPTABLES = if(ACTIVE="failed" OR ACTIVE="inactive", "failed", "OK")
| dedup host
| table host IPTABLES

How can I get the query to produce results like this?

host | IPTABLES | AUDITD | CHRONYD
server1 | failed | OK | OK
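A probable cause: inside eval, UNIT=iptables.service compares the UNIT field to a field named iptables.service, so every if() falls through; string literals must be quoted. Also, evaluating per event and then running dedup host keeps only one service's row per host. A minimal sketch that quotes the literals and pivots per host:

index="server" source="Unix:Service" (UNIT="iptables.service" OR UNIT="auditd.service" OR UNIT="chronyd.service")
| eval status=if(ACTIVE="failed" OR ACTIVE="inactive", "failed", "OK")
``` most recent status per host and service, then one column per service ```
| stats latest(status) as status by host, UNIT
| xyseries host UNIT status
| where 'iptables.service'="failed" OR 'auditd.service'="failed" OR 'chronyd.service'="failed"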
Is it possible to add the option to use a proxy URL for downloading the vendor OUI file? Thanks
{   "abcdxyz" : {     "transaction" : "abcdxyz",     "sampleCount" : 60,     "errorCount" : 13,     "errorPct" : 21.666666,     "meanResTime" : 418.71666666666664,     "medianResTime" : 264.5... See more...
{   "abcdxyz" : {     "transaction" : "abcdxyz",     "sampleCount" : 60,     "errorCount" : 13,     "errorPct" : 21.666666,     "meanResTime" : 418.71666666666664,     "medianResTime" : 264.5,     "minResTime" : 0.0,     "maxResTime" : 4418.0,     "pct1ResTime" : 368.4,     "pct2ResTime" : 3728.049999999985,     "pct3ResTime" : 4418.0,     "throughput" : 0.25086548592644625,     "receivedKBytesPerSec" : 0.16945669591340123,     "sentKBytesPerSec" : 0.3197146692547623   },   "efghxyz" : {     "transaction" : "efghxyz",     "sampleCount" : 60,     "errorCount" : 13,     "errorPct" : 21.666666,     "meanResTime" : 421.8,     "medianResTime" : 32.0,     "minResTime" : 0.0,     "maxResTime" : 3566.0,     "pct1ResTime" : 3258.5,     "pct2ResTime" : 3497.6,     "pct3ResTime" : 3566.0,     "throughput" : 0.24752066797577596,     "receivedKBytesPerSec" : 0.34477244084256037,     "sentKBytesPerSec" : 0.08463804872238082   },   "ijklxyz" : {     "transaction" : "ijklxyz",     "sampleCount" : 60,     "errorCount" : 13,     "errorPct" : 21.666666,     "meanResTime" : 27.733333333333338,     "medianResTime" : 27.5,     "minResTime" : 0.0,     "maxResTime" : 241.0,     "pct1ResTime" : 41.599999999999994,     "pct2ResTime" : 52.699999999999974,     "pct3ResTime" : 241.0,     "throughput" : 0.25115636576738737,     "receivedKBytesPerSec" : 0.3331214746541367,     "sentKBytesPerSec" : 0.08588125143891667   },   "mnopxyz" : {     "transaction" : "mnopxyz",     "sampleCount" : 60,     "errorCount" : 13,     "errorPct" : 21.666666,     "meanResTime" : 491.74999999999994,     "medianResTime" : 279.5,     "minResTime" : 0.0,     "maxResTime" : 4270.0,     "pct1ResTime" : 381.29999999999995,     "pct2ResTime" : 4076.55,     "pct3ResTime" : 4270.0,     "throughput" : 0.2440254437195985,     "receivedKBytesPerSec" : 0.16483632755942018,     "sentKBytesPerSec" : 0.2839297997262848   } } I need to create a table view from the above log event which was captured as a single event, like the below table format: samples abcdxyz efghxyz ijklxyz mnopxyz     "transaction" :             "sampleCount"                                                     "errorCount"              "errorPct"                                                "meanResTime"                                    "medianResTime"                                          "minResTime"                                               "maxResTime"                                               "pct1ResTime"                                          "pct2ResTime"                                        "pct3ResTime"                                           "throughput"                                        "receivedKBytesPerSec"                                      "sentKBytesPerSec"                                
Hi. During the day, some of my indexers completely stop sending back ACKs, so many agents keep data queued until the ACK arrives and the flow restarts (in some cases 15-20 minutes pass!). Meanwhile, obviously, I see long data delays and ACK errors. This happens during certain hours, from 09:00 to 17:00: during very high data ingestion the issue is clearly visible, while during the other hours it is invisible, with no issue (little data flowing and little user interaction). I'm wondering: could this be an internal indexer task that manages indexes/buckets, optimizes the system, and enforces retention? If so, can this task be configured to run once per day only (during night hours)? Thanks.
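One way to narrow this down is to check for queue saturation on the indexers during those hours; metrics.log records per-queue fill levels, and queues that stay full between 09:00 and 17:00 would point at indexing capacity or storage latency rather than a scheduled housekeeping task (bucket maintenance and retention run continuously and are not schedulable to night hours only). A minimal diagnostic sketch, assuming the indexers' _internal logs are searchable:

index=_internal source=*metrics.log group=queue host=<your_indexers>
| timechart span=10m avg(current_size_kb) by name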
Please advise. I am learning to build Splunk (9.4.3) on Linux. Agents are not displayed in Forwarder Management in the Web UI. Does anyone know a solution?

The Deployment Server is colocated with the License Manager and Monitoring Console (XX.XX.XX.10). The client is a Heavy Forwarder (XX.XX.XX.8). In the Monitoring Console, both the DS and the client show Status=Up.

Each conf is set up as follows:

-------[Client (XX.XX.XX.8)]-------
$ sudo cat /opt/splunk/etc/system/local/deploymentclient.conf
[deployment-client]
disabled = false
clientName = HF1

[target-broker:deploymentServer]
targetUri = XX.XX.XX.10:8089

$ sudo /opt/splunk/bin/splunk show deploy-poll
WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Your session is invalid. Please login.
Splunk username: siv_admin
Password:
Deployment Server URI is set to "XX.XX.XX.10:8089".
$
-------------

-------[DS (XX.XX.XX.10)]-------
$ sudo cat /opt/splunk/etc/system/local/serverclass.conf
[serverClass:HFs]
whitelist.0 = host=heavy*
whitelist.1 = XX.XX.*.*
whitelist.2 = host=HF*
whitelist.3 = clientName:HF*

[serverClass:HFs:app:app_test]
stateOnClient = enabled
-------------

What I have checked:

-------[Client (XX.XX.XX.8)]-------
# Client => DS: connectivity check gets a response
$ curl -k https://XX.XX.XX.10:8089/services/server/info -u admin:XXXXX
<?xml version="1.0" encoding="UTF-8"?>
.....
<s:list>
<s:item>indexer</s:item>
<s:item>license_master</s:item>
<s:item>license_manager</s:item>
<s:item>deployment_server</s:item>
<s:item>search_head</s:item>
<s:item>kv_store</s:item>
</s:list>
.....
--------------

-------[DS (XX.XX.XX.10)]-------
# No deployment client settings are configured on the DS itself
# DS => Client: connectivity check gets a response
$ sudo /opt/splunk/bin/splunk btool deploymentclient list --debug
$
$ curl -k https://XX.XX.XX.8:8089/services/server/info -u admin:XXXXX
<?xml version="1.0" encoding="UTF-8"?>
.....
<s:key name="server_roles">
<s:list>
<s:item>deployment_client</s:item>
<s:item>search_peer</s:item>
<s:item>kv_store</s:item>
</s:list>
</s:key>
.....
--------------

Other things I noticed:

- The cache file (serverclass_cache.json) is not generated.
- The app I created for testing appears to have been distributed:

-------[Client (XX.XX.XX.8)]-------
$ sudo ls -l /opt/splunk/var/run/HFs
total 12
-rw-------. 1 splunk splunk 10240 Jul 22 23:23 app_test-1753173390.bundle
--------------

- splunkd.log also shows entries indicating the two can communicate:

-------[DS (XX.XX.XX.10)]-------
07-23-2025 23:30:28.287 +0000 INFO PubSubSvr [2431 TcpChannelThread] - Subscribed: channel=deploymentServer/phoneHome/default/reply/heavy-1/HF1 connectionId=connection_XX.XX.XX.8_8089_heavy-1.internal.cloudapp.net_heavy-1_HF1 listener=0x7fbd68ccc400
--------------

Thank you in advance.
The Splunk documentation says that the order rule is lexicographic. I am trying to sort the following values:

| makeresults | eval fruit="apple"
| append [ | makeresults | eval fruit="Banana" ]
| append [ | makeresults | eval fruit="zebra" ]
| append [ | makeresults | eval fruit="10" ]
| append [ | makeresults | eval fruit="2" ]
| append [ | makeresults | eval fruit="20" ]
| append [ | makeresults | eval fruit="30" ]
| append [ | makeresults | eval fruit="3" ]
| append [ | makeresults | eval fruit="1" ]
| append [ | makeresults | eval fruit="25" ]
| append [ | makeresults | eval fruit="38" ]
| table fruit
| sort fruit

The output I am getting is: 1, 2, 3, 10, 20, 25, 30, 38, Banana, apple, zebra

I understand that Banana appears before apple because B < a. But what is up with the string numerics? Shouldn't the order be: 1, 10, 2, 20, 25, 3, 30, 38, Banana, apple, zebra? Even the documentation says that 10, 9, 70, 100 should sort as 10, 100, 70, 9: https://help.splunk.com/en/splunk-enterprise/search/spl-search-reference/9.2/search-commands/sort
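What you are seeing matches sort's documented auto-detection: values that parse as numbers are compared numerically, and in a mixed field the numbers sort before the strings, which then sort lexicographically. Pure lexicographic comparison can be forced with the str() modifier; a minimal sketch:

| sort str(fruit)

which yields 1, 10, 2, 20, 25, 3, 30, 38, Banana, apple, zebra; conversely, | sort num(fruit) compares the values strictly as numbers.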
I created a summary index to use in a dashboard, because it has a lot of data and needs to run over large time frames. The summary index is populated like this:

<my search query>
| eval log_datetime=strftime(_time, "%Y-%m-%d %H:%M:%S")
| rename log_datetime AS "Time (UTC)"
| table _time, "Time (UTC)", <wanted fields>
| collect index=sony_summary

I call it in one of my dashboard panels like this:

index=sony_summary sourcetype=stash
| search <passed drop-down tokens>
| sort 0 -"Time (UTC)"
| table "Support ID", "Time (UTC)", _time, ...

My requirement is that users must not see this summary index data directly, so I created a drilldown linked to a different search: whenever they click a field value in the table, a new search opens with the clicked support_id:

<earliest>$time_range.earliest$</earliest>
<latest>$time_range.latest$</latest>
</search>
<!-- Drilldown Configuration -->
<!-- Enable row-level drilldown -->
<option name="drilldown">row</option>
<option name="refresh.display">progressbar</option>
<drilldown>
  <link target="_blank">/app/search/search?q=search index=sony* sourcetype=sony_logs support_id="$click.value$"&amp;earliest=$time_range.earliest$&amp;latest=$time_range.latest$</link>
</drilldown>

When I click a field in the dashboard panel, the search opens with the expected support_id, but with the token time range. I expect it to open with the time range of the clicked event, per Time (UTC) or _time. Example: an event has a support ID at 07:00 am; when I click it, the search should open around 07:00 am, not the token time range. Based on a ChatGPT suggestion, I modified it this way (the epoch_time/epoch_plus60 eval is newly added):

<table id="myTable">
  <search>
    <query>index=sony_summary sourcetype=stash | search <passed drop-down tokens> | sort 0 -"Time (UTC)" | eval epoch_time=_time, epoch_plus60=_time+60 | table "Support ID", "Time (UTC)", _time, ..., epoch_time, epoch_plus60</query>
    <earliest>$time_range.earliest$</earliest>
    <latest>$time_range.latest$</latest>
  </search>
  <!-- Drilldown Configuration -->
  <!-- Enable row-level drilldown -->
  <option name="drilldown">row</option>
  <option name="refresh.display">progressbar</option>
  <drilldown>
    <link target="_blank">/app/search/search?q=search index=sony* sourcetype=sony_logs support_id="$click.value$"&amp;earliest=$row.epoch_time$&amp;latest=$row.epoch_plus60$</link>
  </drilldown>

This now works fine and the time range matches what I clicked. The remaining issue: I don't want the two new fields, epoch_time and epoch_plus60, to be visible in the dashboard. They should be hidden completely, but the drilldown should still work as expected. What should I do here? Am I missing anything? Even if I keep those fields last in the panel, my manager still wants them hidden while everything keeps working.
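One standard way to hide helper columns while keeping them available to $row.*$ drilldown tokens is the table's fields element, which controls display only; a minimal sketch (column list abbreviated):

<table id="myTable">
  <search>...</search>
  <fields>["Support ID", "Time (UTC)"]</fields>
  ...
</table>

Columns omitted from the list (epoch_time, epoch_plus60) stay in the underlying result set, so $row.epoch_time$ and $row.epoch_plus60$ should still resolve in the drilldown link.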
Hi everyone, I'm working on a dashboard that compares two different applications. One of the tables shows their performance across different metrics side by side, like this:

"Avg Time App1" | "Avg Time App2" | "Max Time App1" | "Max Time App2" | ...

Each row of the table represents a different date, so my team and I can check their performance over an arbitrary time interval.

My idea is to color a cell based on its value compared to the equivalent value for the other app. For example, say "Avg Time App1" = 5.0 and "Avg Time App2" = 8.0 on day X (an arbitrary row). I want to highlight the "Avg Time App2" cell on day X, since its value is bigger than App1's.

I'm aware I can color cells dynamically with the <format> block, by setting type="color" and field to whatever my field is. But I want to do this per row (so even if a cell in the first row of column X is highlighted, the next rows won't necessarily be) and based on a comparison with another cell, from another column, in the same row.

One other detail: my column names contain a token, so a related problem I've been having is accessing the cell values, because, to my understanding, it would turn out something like: $row."Avg Time $app1$"$

If someone could help me implement this conditional coloring idea, I would be very grateful. Thanks in advance, Pedro
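One workaround that stays within SimpleXML, assuming the metric pairs are known in advance: compute a per-row comparison flag for each pair in SPL, then color the flag column with a format block keyed to its values, since <format> color rules can only inspect the cell's own value, not a sibling column. A minimal sketch of the flag computation (field names here are hypothetical):

| eval avg_slower=case('Avg Time App1' > 'Avg Time App2', "App1", 'Avg Time App2' > 'Avg Time App1', "App2", true(), "tie")

Tokenized column names can be referenced the same way inside the query, e.g. 'Avg Time $app1$', because tokens are substituted before the search runs.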