All Posts


Hello @gcusello, thanks for the reply. Is it possible to share the app info or the source code of the dashboards?
We need to connect a FortiWeb Cloud with a Splunk Heavy Forwarder. It goes over the internet, so SSL must be used. We receive the test event correctly using TCP (without SSL), but it is not being decrypted with SSL. Reviewing the documentation, we do not understand how to configure the ssl-tcp input, or what certificates should be configured in FortiWeb. We have seen some solutions centered on SSL between Splunk components, but none of them explain what certificates should be configured on the source. Does anyone know how to make this work, with FortiWeb or any other third-party input?
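Not an authoritative answer, but for orientation: a minimal sketch of what an SSL-enabled TCP input usually looks like on the Heavy Forwarder, with an illustrative port and certificate paths. FortiWeb would then need to trust the CA that signed this server certificate (and present a client certificate from that CA only if requireClientCert is enabled):

# inputs.conf on the Heavy Forwarder (port, index, sourcetype and paths are illustrative)
[tcp-ssl:6514]
index = fortiweb
sourcetype = fortiweb:event

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/hf_server.pem
sslPassword = <private key password>
requireClientCert = false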
@gcusello Thanks for your response. Yes, the log events arrive in one block, but the query below is showing incorrect results: it shows historical data as well, not just the latest block of events. Can I handle this in the inputs.conf file so that only the latest log file content is shown? I am not looking for any historical data.
Hi @shashankk, if your logs arrive in a block (more or less the same timestamp), you could use a solution like this:

index=test_event source=/applications/hs_cert/cert/log/cert_monitor.log
    [ search index=test_event source=/applications/hs_cert/cert/log/cert_monitor.log
      | head 1
      | eval earliest=_time-60, latest=_time+60
      | fields earliest latest ]
| rex field=_raw "(?<Severity>[^\|]+)\|(?<Hostname>[^\|]+)\|(?<CertIssuer>[^\|]+)\|(?<FilePath>[^\|]+)\|(?<Status>[^\|]+)\|(?<ExpiryDate>[^\|]+)"
| multikv forceheader=1
| table Severity Hostname CertIssuer FilePath Status ExpiryDate

It works if your logs all arrive in blocks of around 60 seconds. Ciao. Giuseppe
My requirement is simple: I have created a certificate monitoring script and am presenting its log file through a Splunk dashboard. I want Splunk to only check the latest log file and not store any historical data in search events. Below is a sample of the log file output (it is a "|"-separated log file):

ALERT|appu2.de.com|rootca12|/applications/hs_cert/cert/live/h_hcm.jks|Expired|2020-10-18
WARNING|appu2.de.com|key|/applications/hs_cert/cert/live/h_hcm.jks|Expiring Soon|2025-06-14
INFO|appu2.de.com|rootca13|/applications/hs_cert/cert/live/h_core.jks|Valid|2026-10-18
ALERT|appu2.de.com|rootca12|/applications/hs_cert/cert/live/h_core.jks|Expired|2020-10-18
WARNING|appu2.de.com|key|/applications/hs_cert/cert/live/h_core.jks|Expiring Soon|2025-03-22
ALERT|appu2.de.com|key|/applications/hs_cert/cert/live/h_mq.p12|Expired|2025-01-03

I am looking for two things here:
1. How do I handle only the latest log file content (no history) in inputs.conf - what changes should be made?
2. Below is the sample SPL query; kindly check and suggest any changes.

index=test_event source=/applications/hs_cert/cert/log/cert_monitor.log
| rex field=_raw "(?<Severity>[^\|]+)\|(?<Hostname>[^\|]+)\|(?<CertIssuer>[^\|]+)\|(?<FilePath>[^\|]+)\|(?<Status>[^\|]+)\|(?<ExpiryDate>[^\|]+)"
| multikv forceheader=1
| table Severity Hostname CertIssuer FilePath Status ExpiryDate

@ITWhisperer - Kindly help
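As an aside (a sketch, not a confirmed answer): a monitor input only tails the file and indexes each new write; it cannot delete events that were already indexed, so "latest only" is normally enforced at search time, as in the reply above. A minimal monitor stanza using the path from the post (the sourcetype name is illustrative) would look like:

# inputs.conf - monitor tails cert_monitor.log and indexes new writes;
# previously indexed events remain searchable until the index ages them out
[monitor:///applications/hs_cert/cert/log/cert_monitor.log]
index = test_event
sourcetype = cert_monitor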
I have tested the EVAL statement as provided in transforms.conf at search time and it works fine. But the new fields that I want to add from the CSV file are not getting appended to the logs being ingested on a match of the log's dst_ip field with the dst_ip field of the CSV. From the documentation I learned that I also have to configure fields.conf. I have configured it with INDEXED=true for the new field that I want to append to the logs, but the logs are still not being appended with the new fields. I followed the link https://docs.splunk.com/Documentation/Splunk/7.2.3/Data/Configureindex-timefieldextraction#Define_additional_indexed_fields. This shows how to append new fields to the logs based on extraction from the actual log. What I actually require is for the logs to be appended with fields from my CSV file. Can you please guide us in configuring props.conf and transforms.conf properly so that the logs are enriched with fields from the CSV file on a match? Thanks and regards
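For orientation, a sketch of how ingest-time CSV enrichment is often wired together. Assumptions to verify: Splunk 8.1+ (where the lookup() eval function can be used inside INGEST_EVAL), a hypothetical lookup file enrich.csv with columns dst_ip and owner deployed to the indexing tier, and that dst_ip can be derived from _raw at index time (here via json_extract, i.e. JSON events - substitute whatever expression fits your data):

# props.conf (sourcetype name is illustrative)
[my_sourcetype]
TRANSFORMS-enrich = add_owner_from_csv

# transforms.conf - lookup() returns a JSON object, so the value is
# pulled back out with json_extract before being assigned
[add_owner_from_csv]
INGEST_EVAL = owner:=json_extract(lookup("enrich.csv", json_object("dst_ip", json_extract(_raw, "dst_ip")), json_array("owner")), "owner")

# fields.conf - mark the new field as indexed
[owner]
INDEXED = true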
Hi, I have a pretty long search I want to be able to use as a saved search, allowing others to benefit from one shared search and perhaps mutually edit it if need be. There is a part in the search using a structure like:

search index=ix2 eventStatus="Successful"
| localize timeafter=0m timebefore=1m
| map search="search index=ix1 starttimeu=$starttime$ endtimeu=$endtime$ ( [ search index=ix2 eventStatus="Successful" | return 1000 eventID ] )
| stats values(client) values(port) values(target) by eventID

This is a simplified extraction of what I am really doing, but the search works fine when run as a plain direct search from the GUI. If I save it and try using it with

| savedsearch "my-savedsearch"

I get the error: Error in 'savedsearch' command: Encountered the following error while building a search for saved search 'my-savedsearch': Error while replacing variable name='starttime'. Could not find variable in the argument map.

It looks like the $starttime$ and $endtime$ cause trouble, but what can I do to work around this? I want to have this in a saved search to avoid operating with a long search all the time in the browser. Also, it is essential to use the localize-map construction, because otherwise I am not able to run this search for long time windows, and I would really like to be able to do that. There was a ticket by @neerajs_81 about pretty much the same issue, but there were no details about the saved search and, above all, there seemed to be no solution.
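One workaround that is often suggested for this error (an assumption to verify, not something confirmed in this thread): the savedsearch command tries to substitute every $...$ token from its argument map, so doubling the dollar signs in the saved search definition escapes them and passes literal $starttime$ and $endtime$ through to map at runtime:

| map search="search index=ix1 starttimeu=$$starttime$$ endtimeu=$$endtime$$ ..."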
I installed the keycloak extension but I don't know how to configure it. Can you help me?
Hi, I am trying to implement a dashboard in Splunk that presents data based on Jenkins events. I use the Splunk App for Jenkins and the Splunk Jenkins plugin to send the event data. The idea of the dashboard is to display data about running active checks for Pull Requests in the associated GitHub repository. Checks are designed in Jenkins so that a trigger job calls downstream jobs. In the dashboard, I'd like to present basic info about the pull request and the results of tests coming from the downstream jobs. Unfortunately, the event for the trigger job does not provide info about its downstream jobs. I collect it like this:

index="jenkins_statistics" event_tag=job_event build_url="job/test_trigger/10316*"
| eventstats latest(type) as latest_type by build_url, host
| where latest_type="started"
| eval full_build_url="https://"+host+"/"+build_url
| eval started=tostring(round((now() - strptime(job_started_at, "%Y-%m-%dT%H:%M:%S.%N"))/60,0)) + " mins ago"
| append [ search index="jenkins_statistics" event_tag=job_event upstream="job/test_trigger/10316*" trigger_by="*test_trigger*" ]

where the subsearch is appended to provide information about the downstream jobs. I checked https://plugins.jenkins.io/splunk-devops/#plugin-content-i-am-using-upstreamdownstream-jobs-how-can-i-consolidate-the-test-results-to-root-trigger-job but this does not fit my case, as tests are represented by downstream jobs and I'd like to have their actual data so that only the failed ones are displayed in the dashboard. I had a plan to create a custom Python command (https://dev.splunk.com/enterprise/docs/devtools/customsearchcommands/createcustomsearchcmd/) which will:
parse data from downstream job events
create new fields in the trigger job event based on the above
finally, return only the trigger job event

Having that, I would have all the interesting data in one event per Pull Request and could format the table at the end. Unfortunately, it does not work as I wanted: it makes Splunk hang even for a single Pull Request (1 trigger event + 20 downstream events). The Python script iterates over the events twice (first to process the downstream jobs, and second to find the trigger event, add the new fields there, and return it). I am afraid this is not the best approach. An example script based on https://github.com/splunk/splunk-app-examples/blob/master/custom_search_commands/python/customsearchcommands_app/bin/filter.py is presented below:

#!/usr/bin/env python
# coding=utf-8
#
# Copyright 2011-2015 Splunk, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"): you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os
import sys

sys.path.append(os.path.join('opt', 'splunk', 'etc', 'apps', 'splunk_app_jenkins', 'bin', 'libs'))

# splunklib
from splunklib.searchcommands import dispatch, EventingCommand, Configuration, Option


@Configuration()
class ProcessTestsCommand(EventingCommand):
    """Filters and updates records on the events stream.

    ##Example :code:`index="*" | processtests`
    """

    @staticmethod
    def get_win(tests_list, pr_num):
        for test in tests_list:
            if test.get('job_name') == 'build-win' and test.get('upstream') == f"job/test_trigger/{pr_num}":
                return test.get('job_result') if test.get('job_result') else ''

    @staticmethod
    def get_ubuntu(tests_list, pr_num):
        for test in tests_list:
            if test.get('job_name') == 'build-ubuntu' and test.get('upstream') == f"job/test_trigger/{pr_num}":
                return test.get('job_result') if test.get('job_result') else ''

    @staticmethod
    def get_failed_tests(tests_list, pr_num):
        failed_tests = []
        failed_string = ''
        for test in tests_list:
            if test.get('upstream') == f"job/test_trigger/{pr_num}" and test.get('job_result') != 'SUCCESS':
                failed_tests.append(test)
        for failed_test in failed_tests:
            name = failed_test.get('job_name').split('/')[-1]
            status = failed_test.get('job_result')
            failed_string += f"{name} {status}\n"
        return failed_string

    def transform(self, records):
        # records is a generator; buffer it first, since a consumed
        # generator yields nothing on a second pass
        records = list(records)
        tests = []
        for record in records:
            if record.get('job_name') != 'test_trigger':
                tests.append(record)
        for record in records:
            if record.get('job_name') == 'test_trigger':
                pr_num = record.get('build_number')
                record['win'] = self.get_win(tests, pr_num)
                record['ubuntu'] = self.get_ubuntu(tests, pr_num)
                record['failed_tests'] = self.get_failed_tests(tests, pr_num)
                yield record


dispatch(ProcessTestsCommand, sys.argv, sys.stdin, sys.stdout, __name__)

The goal is to have all the data (from trigger and downstream jobs) in a single event representing the trigger job. Do you have any ideas what could be a better way to achieve that? I also thought about dropping this subsearch and collecting the data of downstream jobs via the GitHub or Jenkins API inside the Python script, but this is not preferred (the API may return some malformed data, and I could be limited by API rate limits). I appreciate any help.
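One avenue that might avoid the custom command entirely (a sketch built only from the field names visible in the post - job_name, job_result, upstream, build_url - so treat it as an assumption to adapt): since every downstream event carries its trigger build in upstream, that can be normalized into a join key and stats can collapse each Pull Request into one row:

index="jenkins_statistics" event_tag=job_event (job_name="test_trigger" OR upstream="job/test_trigger/*")
| eval pr_key=coalesce(upstream, build_url)
| eval failed=if(job_name!="test_trigger" AND job_result!="SUCCESS", job_name." ".job_result, null())
| stats values(eval(if(job_name="build-win", job_result, null()))) as win
        values(eval(if(job_name="build-ubuntu", job_result, null()))) as ubuntu
        values(failed) as failed_tests
        by pr_key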
The results you shared earlier show NB as 0, which is odd, since this doesn't appear to be what you should be getting given the events you have shared and the search you are apparently using. Do any of your events actually have 0 as the trade_id? Have you examined the events present in the results to see why they might be parsed in such a way as to give NB as 0? At the end of the day, the searches being suggested work with the data you have shared, so if they are not working as expected, it is likely because of the actual data you are using; for us to help you further, you should share more of the data that is not working.
Sure. Also, all the results are in a single row; ideally I would want them in separate rows. What's important to note is that I did get all the columns in the fields command, but they are empty (the ones I stated, e.g. NB onwards).

Time: 03/01/2025 16:05:37.609
Event: 2025-01-03 16:05:37.609, system="murex", id="646523556", sky_id="646523556", trade_id="32248978", event_id="100120362", mx_status="live", operation="nooperation", action="modification", tradebooking_sgp="2025/01/02 02:01:23.0000", eventtime_sgp="2025/01/02 02:01:21.3800", sky_to_mq_latency="-1.-620", portfolio_name="test_oprtoflio", portfolio_entity="test_entity", trade_type="VanillaSwap"

Time: 03/01/2025 11:05:39.000
Event: 32248978;LIVE;0.00001000;AUD;IRD;CD;;test_prtoflio;CAMBOOYAPTSYDAU
The by clause will only match events with exactly the same src_mac; this includes any trailing or leading spaces, punctuation, etc. Since MAC addresses are potentially sensitive information which you might not wish to share, are there any differences between the way the MAC addresses are stored in the different events (apart from the upper/lower case you already mentioned)?
Please share some examples of the events which have these fields which are not being returned
These fields are missing: TRN_STATUS, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO
With the assistance of this forum, I managed to combine the events of two sourcetypes and run stats to correlate the fields on a single shared field between the two sourcetypes. The problem is, when running stats, it creates a table with mostly blank spots and only a few rows with all columns filled. The search is meant to look at switch logs and pull connection data including the MAC, the IP, the switch name, and the port ID. Second, the search pulls from a sourcetype containing all devices that have been active on the network, from which it pulls the hostname and MAC for each device. I then use stats to match those results up on the shared MAC address field (the only difference between them being that the mac field from one of the sourcetypes is in lowercase rather than uppercase). The end goal is a table showing me a device's name, its IP, its MAC, and which switch and port it connected to. As it is, the search does appear to work; however, because of how it's written, the resulting table is filled with blank spots where the events from each source don't have the fields from the other source. How can I change things so it only shows rows that have an entry for every column? Right now, based on other posts I've seen on this forum, I'm considering whether I may be able to use eval to create fields like src_mac-{index} or something like that, maybe with the inclusion of the coalesce command. Is this the right course of action, or is there a better way? The only other consideration is speed; unfortunately, there's a very good chance I may end up searching millions of events. I'm trying to find ways to restrict the search, but even if I manage to, it's still going to be a lot. I'm not trying to get an instant search, but it would be good to get it to complete in less than thirty seconds as opposed to 3+ minutes. Thank you

(index="routerswitch" action_type IN(Failed_Attempts, Passed_Attempts) src_mac=* SwitchName=switch1 Port_Id=GigabitEthernet1/0/21 earliest=-30d)
OR (index=connections source="/var/devices.log" src_ip=172.* earliest=-30d src_mac=*)
| fields src_mac dhcp_host_name src_ip IP_Address SwitchName Port_Id
| eval src_mac=upper(src_mac)
| stats values(dhcp_host_name) as hostname values(src_ip) as IP values(IP_Address) as net_IP values(SwitchName) as switch values(Port_Id) as portID by src_mac
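Since values() leaves a column null when no event in the group carried that field, filtering after the stats is probably the simplest fix; a sketch using the field names from the search above, keeping only src_mac values seen in both sources:

| stats values(dhcp_host_name) as hostname values(src_ip) as IP values(IP_Address) as net_IP values(SwitchName) as switch values(Port_Id) as portID by src_mac
| where isnotnull(hostname) AND isnotnull(switch)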
After upgrading from 9.3.1 to 9.4.0 on the Windows platform, a warning shows 41 files that did not match. Validating the files produces the following output. How do I resolve this warning?

C:\Program Files\Splunk\bin>splunk.exe validate files
Validating installed files against hashes from 'C:\Program Files\Splunk\splunk-9.4.0-6b4ebe426ca6-windows-x64-manifest'
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-console-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-datetime-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-debug-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-errorhandling-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-file-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-file-l1-2-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-file-l2-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-handle-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-heap-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-interlocked-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-libraryloader-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-localization-l1-2-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-memory-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-namedpipe-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-processenvironment-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-processthreads-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-processthreads-l1-1-1.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-profile-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-rtlsupport-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-string-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-synch-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-synch-l1-2-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-sysinfo-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-timezone-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-util-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-conio-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-convert-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-environment-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-filesystem-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-heap-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-locale-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-math-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-multibyte-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-private-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-process-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-runtime-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-stdio-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-string-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-time-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-utility-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/ucrtbase.dll': The system cannot find the file specified.
You can also do it more elegantly by hiding the Populating message while the search is in progress, using <progress> and <done> clauses in the search, e.g.

<form version="1.1" theme="light">
  <label>populating</label>
  <init>
    <set token="input_message_display">none</set>
  </init>
  <fieldset submitButton="false">
    <input type="time" token="time_range" searchWhenChanged="true">
      <label>Time Range</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
      <change>
        <set token="input_message_display">none</set>
      </change>
    </input>
    <input id="user_input" type="dropdown" token="tok_user" searchWhenChanged="true">
      <label>User</label>
      <search>
        <query>index=_audit | stats count by user</query>
        <earliest>$time_range.earliest$</earliest>
        <latest>$time_range.latest$</latest>
        <progress>
          <set token="input_message_display">none</set>
        </progress>
        <done>
          <set token="input_message_display"></set>
        </done>
      </search>
      <fieldForLabel>user</fieldForLabel>
      <fieldForValue>user</fieldForValue>
    </input>
  </fieldset>
  <row depends="$AlwaysHideCSS$">
    <panel>
      <html>
        <style>
          #user_input .splunk-choice-input-message{
            display: $input_message_display$;
          }
        </style>
      </html>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <search>
          <query>index=_audit user=$tok_user$ | stats count by user</query>
          <earliest>$time_range.earliest$</earliest>
          <latest>$time_range.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>

This will hide the populating message every time you change the time range that the dropdown depends on; then, when the search is finished, it will re-enable the message, so that if you get no results it will say 'Search produced no results'.
That is the splunk-choice-input-message class, so you can do it like this. Note that this will hide it always.

<form version="1.1" theme="light">
  <label>populating</label>
  <fieldset submitButton="false">
    <input type="time" token="time_range" searchWhenChanged="true">
      <label>Time Range</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input id="user_input" type="dropdown" token="tok_user" searchWhenChanged="true">
      <label>User</label>
      <search>
        <query>index=_audit | stats count by user</query>
        <earliest>$time_range.earliest$</earliest>
        <latest>$time_range.latest$</latest>
      </search>
      <fieldForLabel>user</fieldForLabel>
      <fieldForValue>user</fieldForValue>
    </input>
  </fieldset>
  <row depends="$AlwaysHideCSS$">
    <panel>
      <html>
        <style>
          #user_input .splunk-choice-input-message{
            display: none !important;
          }
        </style>
      </html>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <search>
          <query>index=_audit user=$tok_user$ | stats count by user</query>
          <earliest>$time_range.earliest$</earliest>
          <latest>$time_range.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>
Hi! I was wondering if anybody knows if there's a way to hide the "populating..." text under this drillthrough on my dashboard? 
Thanks for clarifying. I understand you want to mark your ingested data at the time of ingest, so that it remains constant forever regardless of any changes made to the lookup. As @richgalloway has said, it should be possible; I am unsure of the ordering of INGEST_EVAL statements where there is more than one. Have you tried putting the json_extract AND the lookup in a single statement, as in Rich's linked example, to see if that works, at least for one of the fields?
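For illustration only (the lookup file name, input field, and output field are hypothetical, and this assumes a Splunk version where the lookup() eval function works inside INGEST_EVAL, i.e. 8.1+), a single combined statement might look like:

# transforms.conf - extract and lookup chained in one INGEST_EVAL
[mark_at_ingest]
INGEST_EVAL = owner:=json_extract(lookup("assets.csv", json_object("dst_ip", json_extract(_raw, "dst_ip")), json_array("owner")), "owner")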