Activity Feed
- Posted Re: Cluster Map - Show Country Border on Splunk Search. Sunday
- Posted Re: geostats cluster map help on Dashboards & Visualizations. 2 weeks ago
- Posted Re: Rest API for Notable Suppression on Splunk Enterprise Security. 2 weeks ago
- Posted Re: Rest API for Notable Suppression on Splunk Enterprise Security. 2 weeks ago
- Karma Re: Rest API for Notable Suppression for Vignesh. 2 weeks ago
- Posted Re: geostats cluster map help on Dashboards & Visualizations. 2 weeks ago
- Got Karma for Re: Rest API for Notable Suppression. 3 weeks ago
- Got Karma for Re: Exciting News: The AppDynamics Community Joins Splunk!. 4 weeks ago
- Posted Re: flatterning BodyJson to match splunk TA for aws guardduty on Getting Data In. 4 weeks ago
- Posted Re: StateSpaceForecast holdback and forecast_k for evaluation on hold out set on Splunk Search. 4 weeks ago
- Posted Re: How we can transfer or back up data to an AWS S3 bucket for the specified existing index? on Splunk Cloud Platform. 4 weeks ago
- Posted Re: Exciting News: The AppDynamics Community Joins Splunk! on Community Blog. 4 weeks ago
- Posted Re: help spl query on Splunk Enterprise Security. 4 weeks ago
- Posted Re: KV store lookup as array on Splunk Enterprise. 02-23-2025 09:51 AM
- Posted Re: Intermediate Forwarder Limited to 1000 connections on Getting Data In. 02-21-2025 06:05 AM
- Got Karma for Re: Intermediate Forwarder Limited to 1000 connections. 02-20-2025 12:14 PM
- Got Karma for Re: Intermediate Forwarder Limited to 1000 connections. 02-20-2025 12:09 PM
- Posted Re: help spl query on Splunk Enterprise Security. 02-17-2025 07:57 AM
- Posted Re: Intermediate Forwarder Limited to 1000 connections on Getting Data In. 02-14-2025 06:46 PM
- Posted Re: Intermediate Forwarder Limited to 1000 connections on Getting Data In. 02-14-2025 05:45 PM
Sunday
Hi @molla, The geo_countries lookup shipped with Splunk provides boundaries for countries. The tutorial at https://docs.splunk.com/Documentation/Splunk/latest/Viz/GenerateMap provides an example for counties, but you can replace the county references with country references:

| makeresults format=csv data="x,country
3,United States
5,United States
4,Canada
1,Canada
1,Mexico
2,Mexico"
| stats sum(x) by country
| geom geo_countries featureIdField=country

The output of geom can be used with choropleth maps in both classic (Simple XML) dashboards and Dashboard Studio. You can use the inputlookup command to see the list of supported countries:

| inputlookup geo_countries
| table featureId
2 weeks ago
If the Good, Resetting, etc. fields are counts, @shrija may have been looking for this:

| fields lat lon Good Resetting Starting Unknown Faulty
| eval Count=0
| foreach Good Resetting Starting Unknown Faulty
    [| eval Count=Count+coalesce('<<FIELD>>', 0) ]
| geostats globallimit=0 latfield=lat longfield=lon sum(Good) as Good sum(Resetting) as Resetting sum(Starting) as Starting sum(Unknown) as Unknown sum(Faulty) as Faulty sum(Count) as Count

However, the cluster map visualization generates a pie chart with one half of the pie representing the total count and the other half of the pie representing the individual sums.
2 weeks ago
... and the forum injected an unintended emoji. I really wish it wouldn't do that. 🙂
2 weeks ago
Hi @Vignesh, The alerts/suppressions endpoint is hard-coded to use 'nobody' as the owner, which the internal saved/eventtypes/_new endpoint interprets as the current user context. You can change the owner and sharing scope of the event type after it's created using the saved/eventtypes/{name}/acl endpoint (see https://docs.splunk.com/Documentation/Splunk/latest/RESTUM/RESTusing#Access_Control_List):

curl -k -u admin:pass -X POST https://splunk:8089/servicesNS/nobody/SA-ThreatIntelligence/saved/eventtypes/notable_suppression-foo/acl \
    --data-urlencode owner=jsmith \
    --data-urlencode sharing=global

You can create the event type directly using the saved/eventtypes endpoint and an alternate owner; however, you'll need to call the saved/eventtypes/{name}/acl endpoint separately to change sharing from private to global. The owner argument is required by the endpoint, so it's effectively the same number of steps as creating the suppression using the alerts/suppressions endpoint:

curl -k -u admin:pass -X POST https://splunk:8089/servicesNS/jsmith/SA-ThreatIntelligence/saved/eventtypes \
    --data-urlencode name=notable_suppression-foo \
    --data-urlencode description=bar \
    --data-urlencode search='`get_notable_index` _time>1737349200 _time<1737522000' \
    --data-urlencode disabled=false

curl -k -u admin:pass -X POST https://splunk:8089/servicesNS/jsmith/SA-ThreatIntelligence/saved/eventtypes/notable_suppression-foo/acl \
    --data-urlencode owner=jsmith \
    --data-urlencode sharing=global
2 weeks ago
Hi @shrija, You can create choropleth (shaded outline) maps in both Classic (Simple XML) and Dashboard Studio map visualizations. In Simple XML, you can also create categorical choropleth maps and pie chart bubbles. In Dashboard Studio, you can create pie chart bubbles and categorical markers. Neither supports color bars. To map events to geographic boundaries, you can use the bundled United States geo lookups or you can upload custom KML files. Combined with a custom tile server, the KML files can represent anything with features and coordinates: topographical maps, nautical charts, office layouts, theme parks, rail/subway systems, etc. Are you working with a specific geographic region?
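For example, a minimal sketch of a choropleth using the bundled geo_us_states lookup (the counts and the state field are made up for illustration; your search would supply its own field whose values match the lookup's featureId values):

| makeresults format=csv data="count,state
10,California
7,Texas
3,New York"
| stats sum(count) as count by state
| geom geo_us_states featureIdField=state

The resulting geometry can then be rendered by the choropleth map visualization in either dashboard framework.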
4 weeks ago
Hi @jonxilinx, The aws:cloudwatch:guardduty source type was intended to be used with a CloudWatch Logs input after a transform from the aws:cloudwatchlogs source type. To use an SQS input, you can transform the data on your heavy forwarder. The configuration below works on the following event schema:

{
  "BodyJson": {
    "version": "0",
    "id": "cd2d702e-ab31-411b-9344-793ce56b1bc7",
    "detail-type": "GuardDuty Finding",
    "source": "aws.guardduty",
    "account": "111122223333",
    "time": "1970-01-01T00:00:00Z",
    "region": "us-east-1",
    "resources": [],
    "detail": {
      ...
    }
  }
}

You may need to adjust the configuration to match your specific input and event format.

# local/inputs.conf
[my_sqs_input]
aws_account = xxx
aws_region = xxx
sqs_queues = xxx
index = xxx
sourcetype = aws:sqs
interval = xxx

# local/props.conf
[aws:sqs]
TRANSFORMS-aws_sqs_guardduty = aws_sqs_guardduty_remove_bodyjson, aws_sqs_guardduty_to_cloudwatchlogs_sourcetype

# local/transforms.conf
[aws_sqs_guardduty_remove_bodyjson]
REGEX = "source"\s*\:\s*"aws\.guardduty"
INGEST_EVAL = _raw:=json_extract(_raw, "BodyJson")

[aws_sqs_guardduty_to_cloudwatchlogs_sourcetype]
REGEX = "source"\s*\:\s*"aws\.guardduty"
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::aws:cloudwatchlogs:guardduty
4 weeks ago
Hi @rfdickerson, The Python source code for Splunk's implementation of StateSpaceForecast is collectively in:

$SPLUNK_HOME/etc/apps/Splunk_ML_Toolkit/bin/algos/StateSpaceForecast.py
$SPLUNK_HOME/etc/apps/Splunk_ML_Toolkit/bin/algos_support/statespace/*

The StateSpaceForecast algorithm is similar to the Splunk predict command. If you're not managing your own Splunk instance, you can download the MLTK archive from Splunkbase and inspect the files directly. The holdback and forecast_k parameters function as described. You may want to look at the partial_fit parameter for more control over the window of data used to update your model dynamically before using apply and (eventually) calculating TPR and FPR.
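As a rough sketch of the hold-out evaluation flow (the lookup name, field name, and model name are hypothetical, and the predicted(count) output field follows the usual MLTK naming, so check your own results): holdback=24 withholds the most recent 24 points from training, while forecast_k=24 still forecasts across that window, so you can compare forecasts against the held-back actuals.

| inputlookup my_timeseries.csv
| fit StateSpaceForecast count holdback=24 forecast_k=24 into example_ssf_model
``` keep only the held-back window and measure forecast error ```
| tail 24
| eval abs_error=abs(count - 'predicted(count)')
| stats avg(abs_error) as mean_abs_error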
4 weeks ago
Hi @Rakzskull, Splunk support can assist with migrations from DDAA (Splunk-provided S3) to DDSS (customer-provided S3).
4 weeks ago
1 Karma
Welcome, AppDynamics practitioners! You'll find Splunkers here, of course, but many of us have experience with AppDynamics, too!
4 weeks ago
I included this:

| search PROJECTNAME="*" INVOCATIONID="*" RUNMAJORSTATUS="*" RUNMINORSTATUS="*"

as a placeholder for filtering using Simple XML inputs. The most likely cause of the difference in the number of results is one of the fields above not being present after spath extracts fields. In your second search, the events missing from the first search would have Status=="Unknown". Have you compared the results at the event level to look for differences other than simple truncation?
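One way to spot events the placeholder filters would drop (a sketch reusing the field names above; swap in your own base search):

<your base search>
| spath
| fillnull value="MISSING" PROJECTNAME INVOCATIONID RUNMAJORSTATUS RUNMINORSTATUS
| stats count by PROJECTNAME INVOCATIONID RUNMAJORSTATUS RUNMINORSTATUS

Any rows showing MISSING identify events that the wildcard filters silently exclude.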
02-23-2025
09:51 AM
Hi @arunssd, If 1) your KV store collection uses array fields, 2) all field values have a 1:1:1:1 relationship, and 3) there are no empty/missing/null values within a field, i.e. all array values "line up":

asn             country maliciousbehavior riskscore
103.152.101.251 => PK => 3 => 9
103.96.75.159   => HK => 3 => 11
104.234.115.155 => CA => 4 => 9

you can transform the data with the transpose, mvexpand, and chart commands:

| inputlookup arunssd_kv
| transpose 0
| mvexpand "row 1"
| chart values("row 1") over _mkv_child by column
| fields - _mkv_child
| outputlookup arunssd_lookup.csv

However, your results may be truncated by mvexpand if the total size of the in-memory result is greater than the limits.conf max_mem_usage_mb setting (default: 500 MB). See https://docs.splunk.com/Documentation/Splunk/latest/Admin/Limitsconf#.5Bmvexpand.5D. If this doesn't work for you, please share your collections.conf (KV store) and transforms.conf (lookup) settings. I used the following settings to test:

# collections.conf
[arunssd_kv]
field.asn = array
field.country = array
field.maliciousbehavior = array
field.riskscore = array

# transforms.conf
[arunssd_kv]
collection = arunssd_kv
external_type = kvstore
fields_list = asn,country,maliciousbehavior,riskscore

If your KV store fields are strings, the search can be adapted with the foreach and eval commands to coerce the field values into a multi-valued type. You can also transform the results from a shell using curl and jq or your scripting tools of choice.
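For the shell route, a rough sketch against the KV store REST endpoint (the app context, credentials, and output handling are illustrative):

curl -k -u admin:pass https://splunk:8089/servicesNS/nobody/search/storage/collections/data/arunssd_kv \
    | jq -r '.[] | [.asn, .country, .maliciousbehavior, .riskscore] | transpose[] | @csv'

jq's transpose pairs the nth element of each array into one CSV row, which mirrors what the transpose/mvexpand/chart search does inside Splunk.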
02-21-2025
06:05 AM
You could leave it that way, but you're maintaining 200 connections to the downstream receivers. If you have, for example, 16 cores on your intermediate forwarder and want to leave 2 cores free for other activity (so much overhead!), you can do the same thing with larger queues and fewer pipelines by increasing maxSize values by the same relative factor. If your forwarder doesn't have enough memory to hold all queues, keep an eye on memory, paging, and disk queue metrics.
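A minimal server.conf sketch of that trade-off (pipeline counts and maxSize values are placeholders, not recommendations; scale maxSize from whatever your current values are):

# 14 pipelines (16 cores - 2), current queue sizes
[general]
parallelIngestionPipelines = 14

# ...or 7 pipelines with queues twice as large
[general]
parallelIngestionPipelines = 7

[queue=parsingQueue]
maxSize = 2048KB

[queue=indexQueue]
maxSize = 2048KB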
02-17-2025
07:57 AM
Hi @anissabnk, Can you describe what's limited? @PickleRick showed a value length example. The spath command is limited to the first 5,000 bytes of the event by default. What is your maximum event length from | stats max(eval(len(_raw))) as max_len? If you meant the number of results, and the xyseries command returns no more than 50,000 results, you may be hitting a limit in an earlier search command, although I don't see a command that would limit results in your original example.
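If long events turn out to be the culprit, the spath extraction limit can be raised in limits.conf on the search head (a sketch; extraction_cutoff controls the 5,000-byte default, and raising it costs search performance):

# limits.conf
[spath]
extraction_cutoff = 10000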
02-14-2025
06:46 PM
1 Karma
I'm also assuming that you've already set maxKBps = 0 in limits.conf: # $SPLUNK_HOME/etc/system/local/limits.conf
[thruput]
maxKBps = 0
02-14-2025
05:45 PM
1 Karma
Hi @MichaelM1, Increasing parallelIngestionPipelines to a value larger than 1 is similar to running multiple instances of splunkd with splunktcp inputs on different ports. As a starting point, however, I would leave parallelIngestionPipelines unset or at the default value of 1.

splunkd uses a series of queues in a pipeline to process events. Of note:

parsingQueue
aggQueue
typingQueue
rulesetQueue
indexQueue

There are other queues, but these are the most well-documented. See https://community.splunk.com/t5/Getting-Data-In/Diagrams-of-how-indexing-works-in-the-Splunk-platform-the-Masa/m-p/590774/highlight/true#M103484. I have copies of the printer and high-DPI display friendly PDFs if you need them.

On a typical universal forwarder acting as an intermediate forwarder, parsingQueue, which performs minimal event parsing, and indexQueue, which sends events to outputs, are the likely bottlenecks. Your metrics.log event provides a hint:

<date time> Metrics - group=queue, name=parsingqueue, blocked=true, max_size_kb=512, current_size_kb=511, current_size=1217, largest_size=1217, smallest_size=0

Note that metrics.log logs queue names in lower case, but queue names are case-sensitive in configuration files. parsingQueue is blocked because 1217KB is greater than 512KB. The inputs.conf splunktcp stopAcceptorAfterQBlock setting controls what happens to the listener port when a queue is blocked, but you don't need to modify this setting.

In your case, I would start by leaving parallelIngestionPipelines at the default value of 1 as noted above and increasing indexQueue to the next multiple of 128 bytes larger than twice the largest_size value observed for parsingQueue. In %SPLUNK_HOME%\etc\system\local\server.conf on the intermediate forwarder:

[queue=indexQueue]
# 2 * 1217KB <= 2560KB (a multiple of 128 bytes)
maxSize = 2560KB

(x86-64, ARM64, and SPARC architectures have 64 byte cache lines, but on the off chance you encounter AIX on PowerPC with 128 byte cache lines, for example, you'll avoid buffer alignment performance penalties, closed-source splunkd memory allocation overhead notwithstanding.)

Observe metrics.log following the change and keep increasing maxSize until you no longer see instances of blocked=true. If you run out of memory, add more memory to your intermediate forwarder host or consider scaling your intermediate forwarders horizontally with additional hosts. As an alternative, you can start by increasing maxSize for parsingQueue and only increase maxSize for indexQueue if you see blocked=true messages in metrics.log:

[queue=parsingQueue]
maxSize = 2560KB

You can usually find the optimal values through trial and error without resorting to a queue-theoretic analysis. If you find that your system becomes CPU-bound at some maxSize limit, you can increase parallelIngestionPipelines, for example, to N-2, where N is the number of cores available. Following that change, modify maxSize from default values by observing metrics.log. Note that each pipeline consumes as much memory as a single-pipeline splunkd process with the same memory settings.
02-11-2025
06:01 PM
Hi @MichaelM1, Does your test script fail at ~1000 connections when sending a handshake directly to the intermediate forwarder input port and not your server script port? Completing a handshake and sending no data while holding the connection open should work. The splunktcp input will not reset the connection for at least (by default) 10 minutes (see the inputs.conf splunktcp s2sHeartbeatTimeout setting). It still seems as though there may be a limit at the firewall specific to your splunktcp port, but the firewall would be logging corresponding drops or resets. The connection(s) from the intermediate forwarder to the downstream receiver(s) shouldn't directly impact new connections from forwarders to the intermediate forwarder, although blocked queues may prevent new connections or close existing ones. Have you checked metrics.log on the intermediate forwarder for blocked=true events? A large number of streams moving through a single pipeline on an intermediate forwarder will likely require increasing queue sizes or adding pipelines.
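A quick way to check (a sketch; adjust the host filter to your intermediate forwarder):

index=_internal host=<intermediate_forwarder> source=*metrics.log* group=queue blocked=true
| timechart count by name

Sustained non-zero counts for a queue name point at the stage that needs a larger maxSize or another pipeline.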
02-09-2025
06:15 PM
Hi @LinkLoop, You can verify Splunk is connected to outputs with the list forward-server command:

& "C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" list forward-server -auth admin:password
Active forwards:
    splunk.example.com:9997 (ssl)
Configured but inactive forwards:
    None

The command requires authentication, so you'll need to know the local Splunk admin username and password defined at install time. If the local management port is disabled, the command will not be available. You can otherwise search local logs for forwarding activity:

Select-String -Path "C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log" -Pattern "Connected to idx="
Select-String -Path "C:\Program Files\SplunkUniversalForwarder\var\log\splunk\metrics.log" -Pattern "group=per_index_thruput, series=`"_internal`""
02-09-2025
05:29 PM
Hi @MichaelM1, MaxUserPort adjusts limits on ephemeral ports. From the perspective of the intermediate forwarder, this would be the maximum port number allocated for an outbound connection to a downstream receiver. The intermediate forwarder would only listen on one port or however many input ports you have defined. TcpTimedWaitDelay adjusts the amount of time a closed socket will be held until it can be reused by another winsock client/server. As a quick test, I installed Splunk Universal Forwarder 9.4.0 for Windows on a clean install of Windows Server 2019 Datacenter Edition named win2019 with the following settings:

# %SPLUNK_HOME%\etc\system\local\inputs.conf
[splunktcp://9997]
disabled = 0
# %SPLUNK_HOME%\etc\system\local\outputs.conf
[tcpout]
defaultGroup = default-autolb-group
[tcpout:default-autolb-group]
server = splunk:9997
[tcpout-server://splunk:9997]

where splunk is a downstream receiver. To simulate 1000+ connections, I installed Splunk Universal Forwarder 9.4.0 for Linux on a separate system with the following settings:

# $SPLUNK_HOME/etc/system/local/limits.conf
[thruput]
maxKBps = 0
# $SPLUNK_HOME/etc/system/local/outputs.conf
[tcpout]
defaultGroup = default-autolb-group
[tcpout:default-autolb-group]
server = win2019:9997
[tcpout-server://win2019:9997]
# $SPLUNK_HOME/etc/system/local/server.conf
# additional default settings not shown
[general]
parallelIngestionPipelines = 2000
[queue]
maxSize = 1KB
[queue=AQ]
maxSize = 1KB
[queue=WEVT]
maxSize = 1KB
[queue=aggQueue]
maxSize = 1KB
[queue=fschangemanager_queue]
maxSize = 1KB
[queue=parsingQueue]
maxSize = 1KB
[queue=remoteOutputQueue]
maxSize = 1KB
[queue=rfsQueue]
maxSize = 1KB
[queue=vixQueue]
maxSize = 1KB

parallelIngestionPipelines = 2000 creates 2000 connections to win2019:9997. (Don't do this in real life. It's a Splunk instance using 2000x the resources of a typical instance. You'll consume memory very quickly as stack space is allocated for new threads.) So far, I have no issues creating 2000 connections. Do you have a firewall or transparent proxy between forwarders and your intermediate forwarder? If yes, does the device limit the number of inbound connections per destination ip:port:protocol tuple?
02-09-2025
02:45 PM
Hi @antoniolamonica, Data model root search datasets start with a base search. Endpoint.Processes is:

(`cim_Endpoint_indexes`) tag=process tag=report
| eval process_integrity_level=lower(process_integrity_level)

This search is expanded by Splunk to include the contents of the cim_Endpoint_indexes macro and all event types that match tag=process and tag=report. To compare like-for-like searches, start with the same base search:

(`cim_Endpoint_indexes`) tag=process tag=report earliest=-4h@h latest=-2h@h
| eval process_integrity_level=lower(process_integrity_level)
| stats count values(process_id) as process_id by dest

and construct a similar tstats search:

| tstats summariesonly=f count values(Processes.process_id) as process_id from datamodel=Endpoint.Processes where earliest=-4h@h latest=-2h@h by Processes.dest

The underlying searches should be similar. Optimization may vary. You can verify the SPL in the job inspector (Job > Inspect Job > search.log) and the UnifiedSearch component's log output. When summariesonly=f, the searches have similar behavior. When summariesonly=t, the data model search only looks at indexed field values. This is similar to using field::value for indexed fields and TERM() for indexed terms in a normal search.
02-09-2025
02:18 PM
Hi @Rafaelled, Both parameters should work. See my previous post at https://community.splunk.com/t5/Getting-Data-In/integrating-Splunk-with-Elasticsearch/m-p/696647/highlight/true#M115609 for a few limitations. Depending on the number and size of documents you need to migrate, the app may not be appropriate. A custom REST input would give you the most flexibility with respect to the Elasticsearch Search API. There are pre-written tools like https://github.com/elasticsearch-dump/elasticsearch-dump that may help. If you have a place to host it, an instance of Logstash and a configuration that combines an elasticsearch input with an http output (for Splunk HEC) would be relatively easy to manage. If you don't have a large amount of data or if you're willing to limit yourself to 1 TB per day, a free Cribl Stream license could also do the job. I'm happy to help brainstorm relatively simple solutions here.
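If you go the custom route, a rough shell sketch of the idea (host names and the HEC token are placeholders; real code would page through results with the scroll or search_after APIs and batch the HEC posts):

# pull one page of documents from Elasticsearch
curl -s "https://es.example.com:9200/my-index/_search?size=100" \
    -H "Content-Type: application/json" \
    | jq -c '.hits.hits[] | {event: ._source}' \
    | while read -r doc; do
        # wrap each _source as a Splunk HEC event payload
        curl -s -k "https://splunk.example.com:8088/services/collector/event" \
            -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
            -d "$doc"
      done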
02-09-2025
10:47 AM
1 Karma
Hi @HaakonRuud, When mode = single, it's implied that the stats setting is ignored. As a result, the event will contain an average of samples collected every samplingInterval milliseconds over interval seconds. Your collection interval is 60 seconds (1 minute): interval = 60. Your mstats time span is also 1 minute: span=1m. As a result, you only have 1 event per time interval, so the mean and maximum will be equivalent.
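To see the two statistics diverge, aggregate over a span wider than the collection interval (a sketch with a hypothetical metrics index and metric name):

| mstats avg(_value) as avg_val max(_value) as max_val WHERE index=my_metrics metric_name="cpu.percent" span=5m BY host

With span=5m and a 60-second collection interval, each bucket summarizes five samples, so avg_val and max_val can differ.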
02-09-2025
10:09 AM
Hi @madhav_dholakia, Can you provide more context for the defaults object and a small sample dashboard that doesn't save correctly? Is defaults defined at the top level of the dashboard? I.e.:

{
"visualizations": {
...
},
"dataSources": {
...
},
"defaults": {
...
},
...
}
02-09-2025
09:14 AM
Hi @splunk_user_99, Which version of MLTK do you have installed? The underlying API uses a simple payload: {"name":"my_model","app":"search"} where name is the value entered in New Main Model Title and app is derived from the value selected in Destination App. The app list is loaded when Operationalize is clicked and sorted alphabetically by display name. On submit, the request payload is checked to verify that it contains only the 'app' and 'name' keys. Do you have the same issue in a sandboxed (private, incognito, etc.) browser session with extensions disabled?
02-08-2025
09:18 AM
Here's a straightforward hack that uses a zero width space as a padded value prefix to determine a cell's status. For example, a status of Unknown is one zero width space. The SPL uses the urldecode() eval function to convert URL-encoded UTF-8 characters to strings.

<table id="table2">
<search>
<query>| makeresults format=csv data="
_time,HOSTNAME,PROJECTNAME,JOBNAME,INVOCATIONID,RUNSTARTTIMESTAMP,RUNENDTIMESTAMP,RUNMAJORSTATUS,RUNMINORSTATUS,RUNTYPENAME
2025-01-20 04:38:04.142,AEW1052ETLLD2,AQUAVISTA_UAT,Jx_104_SALES_ORDER_HEADER_FILE,HES,2025-01-19 20:18:25.0,,STA,RUN,Run
2025-01-19 04:38:04.142,AEW1052ETLLD2,AQUAVISTA_UAT,Jx_104_SALES_ORDER_HEADER_FILE,HES,2025-01-18 20:18:25.0,2025-01-18 20:18:29.0,FIN,FWF,Run
2025-01-18 04:38:04.142,AEW1052ETLLD2,AQUAVISTA_UAT,Jx_104_SALES_ORDER_HEADER_FILE,HES,2025-01-17 20:18:25.0,2025-01-17 20:18:29.0,FIN,FOK,Run
2025-01-17 04:38:04.142,AEW1052ETLLD2,AQUAVISTA_UAT,Jx_104_SALES_ORDER_HEADER_FILE,HES,2025-01-16 20:18:25.0,2025-01-16 20:18:29.0,FIN,FWW,Run
2025-01-16 04:38:04.142,AEW1052ETLLD2,AQUAVISTA_UAT,Jx_104_SALES_ORDER_HEADER_FILE,HES,2025-01-15 20:18:25.0,2025-01-15 20:18:29.0,FIN,HUH,Run
"
``` use zero width space as pad ```
| eval status_unknown=urldecode("%E2%80%8B")
| eval status_success=urldecode("%E2%80%8B%E2%80%8B")
| eval status_failure=urldecode("%E2%80%8B%E2%80%8B%E2%80%8B")
| eval status_warning=urldecode("%E2%80%8B%E2%80%8B%E2%80%8B%E2%80%8B")
| eval status_running=urldecode("%E2%80%8B%E2%80%8B%E2%80%8B%E2%80%8B%E2%80%8B")
| eval _time=strptime(_time, "%Y-%m-%d %H:%M:%S.%Q")
| search PROJECTNAME="*" INVOCATIONID="*" RUNMAJORSTATUS="*" RUNMINORSTATUS="*"
| eval status=case(RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FWW", status_warning, RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FOK", status_success, RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FWF", status_failure, RUNMAJORSTATUS="STA" AND RUNMINORSTATUS="RUN", status_running, 1=1, status_unknown)
| eval tmp=JOBNAME."|".INVOCATIONID
| eval date=strftime(strptime(RUNSTARTTIMESTAMP, "%Y-%m-%d %H:%M:%S.%Q"), "%Y-%m-%d")
| eval value=status.if(status==status_unknown, "Unknown", "start time: ".coalesce(strftime(strptime(RUNSTARTTIMESTAMP, "%Y-%m-%d %H:%M:%S.%Q"), "%H:%M"), "").urldecode("%0a").if(status==status_running, "Running", "end time: ".coalesce(strftime(strptime(RUNENDTIMESTAMP, "%Y-%m-%d %H:%M:%S.%Q"), "%H:%M"), "")))
| xyseries tmp date value
| eval tmp=split(tmp, "|"), Job=mvindex(tmp, 0), Country=mvindex(tmp, 1)
| fields - tmp
| table Job Country *</query>
</search>
<option name="drilldown">none</option>
<option name="wrap">true</option>
<format type="color">
<colorPalette type="expression">case(match(value, "^\\u200b{1}[^\\u200b]"), "#D3D3D3", match(value, "^\\u200b{2}[^\\u200b]"), "#90EE90", match(value, "^\\u200b{3}[^\\u200b]"), "#F0807F", match(value, "^\\u200b{4}[^\\u200b]"), "#FEEB3C", match(value, "^\\u200b{5}[^\\u200b]"), "#ADD9E6")</colorPalette>
</format>
</table>
02-07-2025
06:53 PM
Hi @anissabnk, As a quick workaround in a classic dashboard, you can use colorPalette elements with type="expression" to highlight cells if the cell value also includes the status:

<dashboard version="1.1" theme="light">
<label>anissabnk_table</label>
<row depends="$hidden$">
<html>
<style>
#table1 th, #table1 td {
text-align: center !important
}
</style>
</html>
</row>
<row>
<panel>
<table id="table1">
<search>
<query>| makeresults format=csv data="
_time,HOSTNAME,PROJECTNAME,JOBNAME,INVOCATIONID,RUNSTARTTIMESTAMP,RUNENDTIMESTAMP,RUNMAJORSTATUS,RUNMINORSTATUS,RUNTYPENAME
2025-01-20 04:38:04.142,AEW1052ETLLD2,AQUAVISTA_UAT,Jx_104_SALES_ORDER_HEADER_FILE,HES,2025-01-19 20:18:25.0,,STA,RUN,Run
2025-01-19 04:38:04.142,AEW1052ETLLD2,AQUAVISTA_UAT,Jx_104_SALES_ORDER_HEADER_FILE,HES,2025-01-18 20:18:25.0,2025-01-18 20:18:29.0,FIN,FWF,Run
2025-01-18 04:38:04.142,AEW1052ETLLD2,AQUAVISTA_UAT,Jx_104_SALES_ORDER_HEADER_FILE,HES,2025-01-17 20:18:25.0,2025-01-17 20:18:29.0,FIN,FOK,Run
2025-01-17 04:38:04.142,AEW1052ETLLD2,AQUAVISTA_UAT,Jx_104_SALES_ORDER_HEADER_FILE,HES,2025-01-16 20:18:25.0,2025-01-16 20:18:29.0,FIN,FWW,Run
2025-01-16 04:38:04.142,AEW1052ETLLD2,AQUAVISTA_UAT,Jx_104_SALES_ORDER_HEADER_FILE,HES,2025-01-15 20:18:25.0,2025-01-15 20:18:29.0,FIN,HUH,Run
"
| eval _time=strptime(_time, "%Y-%m-%d %H:%M:%S.%Q")
| search PROJECTNAME="*" INVOCATIONID="*" RUNMAJORSTATUS="*" RUNMINORSTATUS="*"
| eval status=case(RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FWW", "Completed with Warnings", RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FOK", "Successful Launch", RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FWF", "Failure", RUNMAJORSTATUS="STA" AND RUNMINORSTATUS="RUN", "In Progress", 1=1, "Unknown")
| eval tmp=JOBNAME."|".INVOCATIONID
| eval date=strftime(strptime(RUNSTARTTIMESTAMP, "%Y-%m-%d %H:%M:%S.%Q"), "%Y-%m-%d")
| eval value=if(status=="Unknown", "Unknown", "start time: ".coalesce(strftime(strptime(RUNSTARTTIMESTAMP, "%Y-%m-%d %H:%M:%S.%Q"), "%H:%M"), "").urldecode("%0a").if(status=="In Progress", "Running", "end time: ".coalesce(strftime(strptime(RUNENDTIMESTAMP, "%Y-%m-%d %H:%M:%S.%Q"), "%H:%M"), ""))).urldecode("%0a").status
| xyseries tmp date value
| eval tmp=split(tmp, "|"), Job=mvindex(tmp, 0), Country=mvindex(tmp, 1)
| fields - tmp
| table Job Country *</query>
</search>
<option name="drilldown">none</option>
<option name="wrap">true</option>
<format type="color">
<colorPalette type="expression">case(like(value, "%Unknown"), "#D3D3D3", like(value, "%Successful Launch"), "#90EE90", like(value, "%Failure"), "#F0807F", like(value, "%Completed with Warnings"), "#FEEB3C", like(value, "%In Progress"), "#ADD9E6")</colorPalette>
</format>
</table>
</panel>
</row>
</dashboard>

There may be arcane methods for formatting cells without using JavaScript or including the status in the value, but I don't have them readily available.