All Topics

Hello, how do I change the dataSource of a table dynamically, based on a token, in Splunk Dashboard Studio? I tried to assign a token to the "primary" field so that it switches between "Data 1" and "Data 2" based on the selection, but this does not seem to work. I've seen a suggestion to use a saved search, but I don't want to use that solution. Please suggest. Thanks.

"viz_dynamictable": {
    "type": "splunk.table",
    "dataSources": {
        "primary": "$datasource_token$"
    },
    "title": "$title_token$"
}

"dataSources": {
    "ds_index1": {
        "type": "ds.search",
        "options": {
            "query": "index=index1"
        },
        "name": "Data 1"
    },
    "ds_index2": {
        "type": "ds.search",
        "options": {
            "query": "index=index2"
        },
        "name": "Data 2"
    },

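A pattern that may work instead (an assumption, not a confirmed fix): Dashboard Studio substitutes $token$ values inside a ds.search query string, while the dataSources binding of a visualization generally cannot be tokenized. A minimal sketch that keeps a single data source and moves the token into the query, assuming a dropdown input sets $index_token$ to index1 or index2:

"dataSources": {
    "ds_dynamic": {
        "type": "ds.search",
        "options": {
            "query": "index=$index_token$"
        },
        "name": "Dynamic data"
    }
},
"viz_dynamictable": {
    "type": "splunk.table",
    "dataSources": {
        "primary": "ds_dynamic"
    },
    "title": "$title_token$"
}
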
Hello! I have a Classic Dashboard in Splunk and I am currently working with an Events pane. I am trying to set a token via drilldown. Here is my code:

<event>
  <search>
    <query>$case_token$ $host_token$ $level_token$ $rule_token$</query>
  </search>
  <fields>Timestamp, host, Computer, Level, Channel, RecordID, EventID, RuleTitle, Details, _time</fields>
  <option name="count">50</option>
  <option name="list.drilldown">none</option>
  <option name="list.wrap">1</option>
  <option name="raw.drilldown">none</option>
  <option name="refresh.display">progressbar</option>
  <option name="table.drilldown">all</option>
  <option name="table.sortDirection">asc</option>
  <option name="table.wrap">1</option>
  <option name="type">table</option>
  <drilldown>
    <condition field="Channel">
      <set token="channel_token">$click.value$</set>
    </condition>
  </drilldown>
</event>

There are two problems: 1. The token is not being set when I click on the table. 2. The condition that should match only clicks on the Channel field is not working. Thank you in advance!

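One thing that may be worth trying (an assumption based on how Classic drilldown is documented, not a confirmed diagnosis): <condition field="..."> matching is reliable for <table> elements, while <event> panes expose click payloads differently. A minimal sketch of the same drilldown on a table panel, where $click.value2$ is the clicked cell's value and the empty <condition/> swallows clicks on other fields:

<table>
  <search>
    <query>$case_token$ $host_token$ $level_token$ $rule_token$
| table Timestamp host Computer Level Channel RecordID EventID RuleTitle Details _time</query>
  </search>
  <option name="count">50</option>
  <option name="drilldown">cell</option>
  <drilldown>
    <condition field="Channel">
      <set token="channel_token">$click.value2$</set>
    </condition>
    <condition/>
  </drilldown>
</table>
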
Hello Splunkers, I've been having problems with Dashboard Studio recently and it has been bothering me for a long time. It would be great if you could give me some suggestions. I want to assign different colors according to different field values. I have made the following configuration, but it hasn't taken effect.

{
  "type": "splunk.map",
  "options": {
    "center": [24.007647480837704, 107.43997967141127],
    "zoom": 2.3155822324586683,
    "showBaseLayer": true,
    "layers": [
      {
        "type": "bubble",
        "latitude": "> primary | seriesByName('latitude')",
        "longitude": "> primary | seriesByName('longitude')",
        "bubbleSize": "> primary | frameWithoutSeriesNames('_geo_bounds_east', '_geo_bounds_west', '_geo_bounds_north', '_geo_bounds_south', 'latitude', 'longitude') | frameBySeriesTypes('number')",
        "dataColors": " > primary | seriesByName('status') | matchValue('colorMatchConfig')"
      }
    ]
  },
  "dataSources": {
    "primary": "ds_PHhx1Fxi"
  },
  "context": {
    "colorMatchConfig": [
      { "match": "high", "value": "#FF0000" },
      { "match": "low", "value": "#00FF00" },
      { "match": "critical", "value": "#0000FF" }
    ]
  },
  "containerOptions": {},
  "showProgressBar": false,
  "showLastUpdated": false
}

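One detail that may be the culprit (an assumption based on Dashboard Studio's documented matchValue examples, not a verified fix): matchValue takes the context name unquoted, and the DSL string should start directly with ">" rather than a leading space. A minimal sketch of just that line, with the rest of the stanza unchanged:

"dataColors": "> primary | seriesByName('status') | matchValue(colorMatchConfig)"
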
Brand new to Splunk, inherited a slightly configured system. I want to route certain Cribl events to an index called vmware. I added this...

props.conf:
[sourcetype::cribl]
TRANSFORMS-index = route_to_vmware

transforms.conf:
[route_to_vmware]
REGEX = (?i)vpxa
DEST_KEY = _MetaData:Index
FORMAT = vmware

I created the index in Splunk. Example of an event ending up in the main index... any help would be appreciated, thank you. I did restart Splunk from the GUI after the changes were made.

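One detail that may explain this (an assumption, not a confirmed diagnosis): props.conf stanza headers only accept the source:: and host:: prefixes; a sourcetype stanza is just the bare sourcetype name, so [sourcetype::cribl] never matches anything. A minimal sketch of the corrected stanza, assuming the events really arrive with sourcetype=cribl:

props.conf:
[cribl]
TRANSFORMS-index = route_to_vmware

transforms.conf stays as you have it. Index-time transforms also have to live on the first full Splunk instance that parses the data (indexer or heavy forwarder), not on a universal forwarder or search head.
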
Looking for assistance in adding a percentage to an existing chart result. I have the following Splunk search that charts the maximum value of ValueA and ValueB by host. ValueA is the maximum count found (say, the total number of objects). ValueB is the maximum observed usage of ValueA. I do not use a bin or time reference directly in the search; instead I use Splunk's built-in time range picker on demand (for example, "Last 24 hours" when executing the search).

index=indextype sourcetype=sourcetype "search_string" | chart max(valueA) max(valueB) by host

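A sketch of one way to add the percentage column (assuming the percentage you want is maxB as a share of maxA; the "as" renames avoid having to quote the generated "max(valueA)" field names downstream):

index=indextype sourcetype=sourcetype "search_string"
| chart max(valueA) as maxA max(valueB) as maxB by host
| eval pct_used = round((maxB / maxA) * 100, 2)
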
Good morning, I'm experiencing an issue with the following log:

15:41:41,341 2025-05-13 15:41:41,340 DEBUG [org.jbo.res.rea.cli.log.DefaultClientLogger] (vert.x-eventloop-thread-1) requestId=31365aee-0e03-43bc-9ccd-fd465aa7a4ca Request: GET http://something.com/something/else Headers[Accept=application/json If-Modified-Since=Tue, 13 May 2025 04:00:27 GMT User-Agent=Quarkus REST Client], Empty body

2025-05-13 15:41:39,970 DEBUG [org.jbo.res.rea.cli.log.DefaultClientLogger] (vert.x-eventloop-thread-1) requestId=95a1a839-2967-4ab8-8302-f5480106adb6 Response: GET http://something.com/something/else, Status[304 Not Modified], Headers[access-control-allow-credentials=true access-control-allow-headers=content-type, accept, authorization, cache-control, pragma access-control-allow-methods=OPTIONS,HEAD,POST,GET access-control-allow-origin=* cache-control=no-cache server-timing=intid;desc=4e7d2996fd2b9cc9 set-cookie=d81b2a11fe1ca01805243b5777a6e906=abae4222185903c47a832e0c67618490; path=/; HttpOnly]

A bit of context that may be relevant: these logs are shipped using Splunk OTel collectors. In the _raw logs, I see the following field values:

Field: requestID
Value: 95a1a839-2967-4ab8-8302-f5480106adb6 Response: GET http://something.com/something/else

Field: requestID
Value: requestId=31365aee-0e03-43bc-9ccd-fd465aa7a4ca Request: GET http://something.com/something/else

What I want is for the requestID and the Request or Response parts to be extracted into separate fields. I've already added the following to my props.conf:

[sourcetype*]
EXTRACT-requestId = requestId=(?<field_request>[a-f0-9\-]+)
EXTRACT-Response = Response:\s(?<field_response>([A-Z]+)\s([^\s,]+(?:[^\r\n]+)))
EXTRACT-Request = Request:\s(?<field_request>([A-Z]+)\s([^\s,]+(?:[^\r\n]+)))

I verified on regex101 that the regex matches correctly, but it's not working in Splunk. Could the issue be that the log shows Response: instead of Response= and Splunk doesn't treat it as a proper field delimiter? Unfortunately, I'm unable to modify the source logs. What else can I check? Do I need to modify the .yml configuration for the Splunk OTel collector, or should I stick to using props.conf and transforms.conf?

Thank you in advance, Best Regards. Matteo

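Two things that may be worth checking (assumptions, not confirmed causes): props.conf sourcetype stanzas must name the exact sourcetype (wildcards are only honored in source:: and host:: stanzas), and EXTRACT-requestId and EXTRACT-Request both write to the same field_request field, which makes the results confusing. A minimal sketch with a concrete stanza name (your_sourcetype is a placeholder) and distinct field names, with the regexes slightly simplified:

[your_sourcetype]
EXTRACT-requestId = requestId=(?<request_id>[a-f0-9\-]+)
EXTRACT-Response = Response:\s(?<http_response>[A-Z]+\s[^\s,]+)
EXTRACT-Request = Request:\s(?<http_request>[A-Z]+\s[^\s,]+)

For these search-time extractions to apply, the stanza belongs on the search head, and the events must actually carry that sourcetype.
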
I have taken a rather long query and condensed it down to the following, to remove any possibility that something was filtering it out.

| ldapsearch search="(&(cn=*userhere*))"

That outputs all of the available data for the user, including memberOf. memberOf skips "Domain Users" but seems to display every other group. I am currently running 3.0.8 of the Splunk Supporting Add-on for Active Directory. The release notes do not mention this issue and no one else seems to be reporting it, but I have confirmed it happening on two completely independent instances. Both were on 3.0.8.

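This may not be an add-on bug at all (an assumption based on how Active Directory stores group membership, not on the add-on's release notes): AD never lists a user's primary group, typically Domain Users, in the memberOf attribute; the primary group is tracked through the primaryGroupID attribute instead, so any LDAP client shows the same gap. A sketch that pulls that attribute alongside memberOf, assuming SA-ldapsearch's attrs option:

| ldapsearch search="(&(cn=*userhere*))" attrs="memberOf,primaryGroupID"

A primaryGroupID of 513 is the RID for Domain Users.
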
Hi All, I have a log file like the one below:

[Request BEGIN] Session ID - 1234gcy6789rtcd, Request ID - 2605, Source IP - 123.245.7.66, Source Host - 123.245.7.98, Source Port - 78690, xyz Engine - XYZS_BPM_Service, PNO - 1234, Service ID - abc12nf [Request END] Success :

[Request BEGIN] Session ID - 1234gcy6789rtcd, Request ID - 2605, Source IP - 123.245.7.66, Source Host - 123.245.7.98, Source Port - 78690, xyz Engine - XYZS_BPM_Service, PNO - 1234, Service ID - abc12nf [Request END] Success :  Details about the failure

Along with the above, there are a lot of other details in the log, but these are the details I need to create a dashboard. Can anyone please help me with how to extract all of the above fields, and how to create a dashboard showing how many requests are successful, along with details about the successful requests such as IP and service name? Thanks a lot in advance. Regards, AKM

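A sketch of one way to pull these fields out at search time with rex (index, sourcetype, and field names are illustrative, assuming the "Name - value," layout is consistent):

index=your_index sourcetype=your_sourcetype "[Request BEGIN]"
| rex "Session ID - (?<session_id>\S+), Request ID - (?<request_id>\d+), Source IP - (?<src_ip>\S+), Source Host - (?<src_host>\S+), Source Port - (?<src_port>\d+), xyz Engine - (?<engine>\S+), PNO - (?<pno>\d+), Service ID - (?<service_id>\S+)"
| rex "\[Request END\]\s+(?<status>\w+)"
| stats count by status src_ip service_id

A stats table like this can back a dashboard panel directly; add | where status="Success" for the success-only view, or make the extractions permanent with EXTRACT- entries in props.conf.
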
I'm trying to enable SignalFx AlwaysOn Profiling for my Java application. The app is already instrumented to:
- Send metrics directly to the ingest endpoint, and
- Send traces via a Collector agent running on the host.

I have a couple of questions:
1. Can the ingest endpoint also be used for profiling, similar to how it's used for metrics? If yes, could you please share the exact endpoint format or a link to the relevant documentation?
2. I attempted to enable profiling by pointing to the same Collector endpoint used for tracing. The logs indicate that the profiler is enabled, but I'm also seeing a message saying "Exporter failed", without a specific reason for the failure. Could you help me troubleshoot this issue?

Here are the relevant log entries:

com.splunk.opentelemetry.profiler.ConfigurationLogger - -----------------------
com.splunk.opentelemetry.profiler.ConfigurationLogger - Profiler configuration:
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.enabled : true
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.directory : /tmp
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.recording.duration : 20s
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.keep-files : false
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.logs-endpoint : http://<host_ip>:4318
com.splunk.opentelemetry.profiler.ConfigurationLogger - otel.exporter.otlp.endpoint : http://<host_ip>:4318
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.memory.enabled : false
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.tlab.enabled : false
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.memory.event.rate : 150/s
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.include.internal.stacks : false
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.tracing.stacks.only : false
com.splunk.opentelemetry.profiler.JfrActivator - Profiler is active.
com.splunk.opentelemetry.profiler.EventProcessingChain - In total handled 151 events in 32ms
io.opentelemetry.sdk.logs.export.SimpleLogRecordProcessor - Exporter failed

Any help in understanding the root cause and resolving the export failure would be appreciated.

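On question 2, one thing worth verifying (an assumption from the endpoints in the log, not a confirmed root cause): the profiler exports its data as OTLP logs over HTTP to splunk.profiler.logs-endpoint, so the Collector at <host_ip> must have an OTLP receiver with the http protocol listening on 4318, and that receiver must be wired into a logs pipeline. A minimal sketch of the relevant Collector YAML:

receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [splunk_hec/profiling]

The splunk_hec/profiling exporter name is illustrative; the point is that a logs pipeline exists at all. Running the Collector with debug logging usually surfaces the concrete export error.
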
Hi Team, how can we optimize the startup process of the AppDynamics Events Service cluster (Events Service and Elasticsearch) within the operating system's system and service manager, including implementing self-healing mechanisms that automatically detect and resolve any issues arising during startup, for the scenarios below (see the sketch after the list)?
1. Graceful shutdown
2. Unplanned downtime (unexpected VM shutdown)
3. Accidentally killed process (Events Service and Elasticsearch)
4. Optional: OOM (Out Of Memory)

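A sketch of the systemd side of this (unit names, paths, and the start command are assumptions; adjust to your installation): Restart=on-failure covers crashes, kills, and OOM terminations, while the Requires/After pair keeps the Events Service from starting before its Elasticsearch dependency:

[Unit]
Description=AppDynamics Events Service
After=network.target appd-elasticsearch.service
Requires=appd-elasticsearch.service

[Service]
Type=forking
User=appdynamics
WorkingDirectory=/opt/appdynamics/events-service
ExecStart=/opt/appdynamics/events-service/bin/events-service.sh start -p conf/events-service-api-store.properties
ExecStop=/opt/appdynamics/events-service/bin/events-service.sh stop
Restart=on-failure
RestartSec=30

[Install]
WantedBy=multi-user.target

Enabling the unit (systemctl enable) covers the unplanned-VM-shutdown case by starting the service on boot; a graceful shutdown goes through ExecStop. Elasticsearch would get a similar unit of its own.
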
We have installed Splunk on Windows, and we want to send Windows logs from the Search Head, License Manager, and Cluster Manager to a third party via an indexer. Those logs can be seen in Search Head queries, but the indexer is not forwarding them to the third party.

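A sketch of the indexer-side configuration this usually needs (host, port, and group name are placeholders): by default an indexer only indexes what it receives, so forwarding to a third party takes a tcpout group in outputs.conf, with indexAndForward = true so the local copy stays searchable:

outputs.conf (on the indexer):

[tcpout]
defaultGroup = third_party
indexAndForward = true

[tcpout:third_party]
server = thirdparty.example.com:9997
sendCookedData = false

sendCookedData = false sends raw data, which most non-Splunk receivers expect. Since the logs are already searchable, the Search Head, LM, and CM are evidently forwarding to the indexer correctly; the missing piece is most likely this outbound group.
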
Hi All, we have a requirement where a user needs to be emailed a dashboard periodically. The dashboard is made in Dashboard Studio, so Export is available. I configured the export option and sent a mail, but the PDF output shows no data on the individual panels; it captures the panels while they are still searching for results. The dashboard has a time picker in it, and no matter which value I set (last 4 hours to last 30 days), the result is the same. Has anybody faced a similar issue, and is there any workaround? Please help.

When we delete a row in a CSV lookup file, it gets deleted for the moment, but on saving, that row reappears. This looks like a bug in the latest version, 4.0.5; it works perfectly fine in 4.0.4. We are upgrading to 4.0.5 because of vulnerabilities in 4.0.4. Has anyone noticed this issue?

Hi, I recently created a Dashboard Studio dashboard. While creating the dashboard, the dashboard title and widget titles are in one font format, but once I finished the dashboard and shared it publicly, the public URL shows a different font format when opened.

The first snapshot shows the normal font format in which I created the dashboard:
[Screenshot: normal dashboard font format]

Once I open the shared URL, the font looks as below. Please help with how to restore the original font.
[Screenshot: dashboard opened via shared URL]

Hello Splunkers,

Need help/references to onboard Azure Sign-In logs to Splunk. I am trying with the Splunk Add-on for Microsoft Azure (Splunk Add on for Microsoft Azure | Splunkbase) but am unable to do so, getting the errors below:

2025-05-12 13:04:13,042 log_level=ERROR pid=319128 tid=MainThread file=base_modinput.py:log_error:317 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-MS-AAD/lib/splunktaucclib/modinput_wrapper/base_modinput.py", line 141, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure_metrics.py", line 141, in collect_events
    resources = az_resource_graph.get_resources_by_query(helper, access_token, query, subscription_id.split(","), environment, resources=[])
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_azure_utils/resource_graph.py", line 63, in get_resources_by_query
    raise e
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_azure_utils/resource_graph.py", line 48, in get_resources_by_query
    r.raise_for_status()
  File "/opt/splunk/etc/apps/TA-MS-AAD/lib/requests/models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 429 Client Error: Too Many Requests for url: https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2019-04-01

05-12-2025 18:25:07.429 +0000 ERROR TcpInputProc [1731515 FwdDataReceiverThread-0] - Error encountered for connection from src=127.0.0.1:42076. error:140890C7:SSL routines:ssl3_get_client_certificate:peer did not return a certificate

05-12-2025 13:04:13.134 +0000 ERROR ExecProcessor [326053 ExecProcessor] - message from "/opt/splunk/bin/python3.9 /opt/splunk/etc/apps/TA-MS-AAD/bin/azure_metrics.py" 429 Client Error: Too Many Requests for url: https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2019-04-01 - Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-MS-AAD/lib/splunktaucclib/modinput_wrapper/base_modinput.py", line 141, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure_metrics.py", line 141, in collect_events
    resources = az_resource_graph.get_resources_by_query(helper, access_token, query, subscription_id.split(","), environment, resources=[])
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_azure_utils/resource_graph.py", line 63, in get_resources_by_query
    raise e
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_azure_utils/resource_graph.py", line 48, in get_resources_by_query
    r.raise_for_status()
  File "/opt/splunk/etc/apps/TA-MS-AAD/lib/requests/models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 429 Client Error: Too Many Requests for url: https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2019-04-01

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-MS-AAD/lib/splunklib/modularinput/script.py", line 67, in run_script
    self.stream_events(self._input_definition, event_writer)
  File "/opt/splunk/etc/apps/TA-MS-AAD/lib/splunktaucclib/modinput_wrapper/base_modinput.py", line 150, in stream_events
    raise RuntimeError(str(e))
RuntimeError: 429 Client Error: Too Many Requests for url: https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2019-04-01

TIA,

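One observation that may narrow this down (an assumption drawn from the traceback alone): the 429 Too Many Requests error comes from azure_metrics.py calling the Azure Resource Graph API, i.e. the metrics input, not a sign-in input, so the sign-in onboarding may be blocked by rate limiting on an unrelated input. A sketch of backing that input off via its polling interval (the stanza name is illustrative; check the actual input name under the add-on's Inputs page):

inputs.conf:

[azure_metrics://my_metrics_input]
interval = 600

A longer interval and fewer subscriptions per input reduce Resource Graph throttling; sign-in data is collected by a separate input in the add-on.
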
It doesn't appear this is a feature. I tried all the existing solutions, but they are all old and none of them work with the 2025 Dashboard Studio.

We are currently using Splunk Enterprise on-premises, and the client has expressed plans to migrate to Splunk Cloud. In addition, they have clearly stated a need to work specifically with Synthetic Monitoring and Real User Monitoring (RUM). While it appears they intend to adopt Splunk Cloud as the primary observability platform, I would like to confirm whether their strategy involves solely utilizing Splunk Cloud, or whether they intend to integrate AWS or Azure cloud platforms as part of the observability or hosting architecture. Could you please provide guidance or clarity on whether the migration includes leveraging Splunk Cloud hosted on a public cloud provider (e.g., AWS or Azure), or whether there is a broader hybrid/cloud-native observability strategy in play?

I have a coldToFrozenScript that controls all of the indexes at an installation. I want the data in the "main" index to simply be deleted when it's time to be frozen. My question is: if I set coldToFrozenDir for the [main] stanza in indexes.conf to a blank value, will it delete the buckets?

coldToFrozenDir =

Thank you.

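For what it's worth (hedged, since the behavior of an explicitly blank coldToFrozenDir is not something I can confirm from the spec): when neither coldToFrozenDir nor coldToFrozenScript is set for an index, splunkd's default action at freeze time is simply to delete the bucket. A pattern that avoids relying on a blank value is to give [main] its own no-op freeze script, since splunkd removes the bucket itself once the script exits successfully:

indexes.conf:
[main]
coldToFrozenScript = "/opt/splunk/etc/system/bin/freeze_noop.sh"

freeze_noop.sh (hypothetical helper; the path is an example):
#!/bin/sh
# Do nothing with the bucket path splunkd passes as $1.
# Exiting 0 tells splunkd the bucket was handled, and splunkd then deletes it.
exit 0
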
I am monitoring some of the information from the TrackMe app, and I noticed that for the trackme_host_monitoring lookup, AKA the data host monitoring utility of the app, all the hosts I can see have a data_last_time_seen value later than 04/03 (today is 05/12). If I use the metadata command, however, I can see hosts that have not sent logs since before 04/03, e.g. 03/31. So which config/macro of the app is the probable cause?

Hi folks, the scenario is as below:
- We have Enterprise Security (ESS) in Splunk Cloud, with ESCU (content updates) as part of it.
- If we enable an ESCU detection, it works all good.
- We need to modify the ESCU detection slightly, with a standard field and also the name of the search, to fit existing organisation policy. The UUID remains the same.

What will happen when the next ESCU update comes? Will it overwrite the custom changes? What does the ESCU update actually look for: the search name, or the search id (UUID)?