All Topics

I am running a REST API call (curl) to query Splunk for results and export them to the server. Below is my API call. My Splunk search is very big and the results are also quite large. The search runs fine but I don't see any results.

#!/bin/bash
search_query=$(cat <<'EOF'
search index=my long splunk query
EOF
)
echo "Running Splunk search..."
curl --http1.1 -k -u admin:password \
  "https://<splunk uri>:8089/services/search/jobs/export" \
  --data-urlencode "search=$search_query" \
  -d output_mode=csv \
  -d earliest_time='-24d@d' \
  -d latest_time='@d' \
  -o output-file.csv
echo "Done. Results in output-file.csv"

This call returns the following error, with an empty output-file.csv:

curl: (18) transfer closed with outstanding read data remaining

It looks like it is not able to run such a huge query. I tried the curl command with a simple search query and it works. How can I make this work?
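One approach worth trying (a sketch, not a confirmed fix): instead of the streaming /services/search/jobs/export endpoint, create an asynchronous job, poll until it finishes, then download the completed results. This avoids holding one long-lived streaming connection open for the whole search. The sketch below reuses the search_query variable from the script above, assumes output_mode=json for the job responses, and assumes jq is installed on the host.

#!/bin/bash
# Hypothetical sketch: run the search as an async job instead of a streaming export.
BASE="https://<splunk uri>:8089"
AUTH="admin:password"

# 1. Create the search job and capture its sid.
sid=$(curl -s -k -u "$AUTH" "$BASE/services/search/jobs" \
  --data-urlencode "search=$search_query" \
  -d earliest_time='-24d@d' -d latest_time='@d' \
  -d output_mode=json | jq -r '.sid')

# 2. Poll until the job reports isDone=true.
until curl -s -k -u "$AUTH" "$BASE/services/search/jobs/$sid?output_mode=json" \
      | jq -e '.entry[0].content.isDone' | grep -q true; do
  sleep 10
done

# 3. Fetch all results as CSV (count=0 means "no result limit").
curl -s -k -u "$AUTH" \
  "$BASE/services/search/jobs/$sid/results?output_mode=csv&count=0" \
  -o output-file.csv
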
Hi, I want developers (with GUI access only) to be able to export the apps they have built. I can't find any information about that in the documentation. Is there a way to export apps without command-line access to the server?

Best Regards,
M
Hi All, I have a query that converts event logs to metrics (search-time processing):

| index=<indexname> sourcetype=<sourcetype> host=<hostname>
| spath input=log.dmc
| eval metric_name = 'log_processed.dmc.metricName'
| eval tenantId = 'log.dmc.tenantId'
| eval metric_value = tonumber('log_processed.dmc.value')
| eval _time = strptime('log_processed.timestamp', "%Y-%m-%d %H:%M:%S.%3N")
| fields _time, metric_name, tenantId, metric_value
| rename metric_value as metric_name::metric_value metric_name as metric
| table metric "metric_name::metric_value" _time tenantId
| mcollect index=test_metrics

test_metrics here is an index with the metrics category. From the documentation, I understood the metric field should be displayed as shown there when using metric_name::metric_value:
https://help.splunk.com/en/splunk-enterprise/get-data-in/metrics/9.4/introduction-to-metrics/get-started-with-metrics

But with the query I am using, it is getting displayed as a separate field with just a numerical value, not in the format shown in the documentation. Also, the metric_name field is only displayed after it is renamed. Please let me know what I am doing wrong.

Thanks,
PNV
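For comparison, here is a minimal single-measure mcollect pattern I have seen suggested (a sketch, assuming the extractions above already populate metric_name, metric_value, and tenantId; any remaining non-reserved field such as tenantId becomes a dimension):

| eval _value = metric_value
| fields _time, metric_name, _value, tenantId
| mcollect index=test_metrics
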
I have a group of indexes, one of which contains sensitive data that must be encrypted so that no one can copy and upload the data to another Splunk instance unless they have the key to decrypt it. Is this possible with Splunk?
I am looking to change the sourcetype for this app: Splunk Add-on for Microsoft Security.
Without using a subsearch, since there is a limit of 10000 results:

index="xxxx" field.type="xxx" OR index=Summary_index
| eventstats values(index) as sources by trace
| where mvcount(sources) > 1
| spath output=yyyId path=xxxId input=_raw
| where isnotnull(yyyId) AND yyyId!=""
| bin span=5m _time AS hour_bucket
| stats latest(_time) as last_activity_in_hour, count by hour_bucket, yyyId
| stats count by hour_bucket
| sort hour_bucket
| rename hour_bucket AS _time
| timechart span=5m values(count) AS "Unique Customers per Hour"

Still doesn't return any results.
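A simpler shape for "unique customers per bucket" that avoids the double stats (a sketch reusing the field names above, with dc() counting distinct yyyId values per span):

index="xxxx" field.type="xxx" OR index=Summary_index
| eventstats values(index) as sources by trace
| where mvcount(sources) > 1
| spath output=yyyId path=xxxId input=_raw
| where isnotnull(yyyId) AND yyyId!=""
| timechart span=5m dc(yyyId) AS "Unique Customers per Hour"
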
I have events already in an index looking like this:

{
  "location": "Paris",
  "temperature": 25,
  "humidity": 57
}

I have a data model looking like this:

{
  "location": string,
  "name": string,
  "value": number
}

and so I would need my event to show up as two events under this data model:

{
  "location": "Paris",
  "name": "temperature",
  "value": 25
}

and

{
  "location": "Paris",
  "name": "humidity",
  "value": 57
}

What would be the best way to proceed? I have had no luck with field manipulation / tables so far.
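One search-time way to pivot one event into a name/value pair per measure (a sketch using the field names from the example; the index name is hypothetical): build a multivalue field holding one "name=value" string per measure, then mvexpand it into separate result rows.

index=weather_data
| eval pair=mvappend("temperature=".temperature, "humidity=".humidity)
| mvexpand pair
| eval name=mvindex(split(pair, "="), 0),
       value=tonumber(mvindex(split(pair, "="), 1))
| table location name value
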
How do I resolve an authentication or pass4SymmKey mismatch between the search head and a peer? I am also seeing a situation where 0 clients phone home.
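For reference, a sketch of where the shared secret lives (an assumption about your topology: the stanza depends on which link is mismatched — [clustering] for search head ↔ indexer-cluster peers, [shclustering] for a search head cluster; the secret must match in plain text on every member before Splunk hashes it at restart):

# server.conf on the search head and on each peer (indexer clustering example)
[clustering]
pass4SymmKey = <same plain-text secret on every member>
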
Hi all, I have a dashboard created in Studio which uses multiple tabs, each tab housing a different dashboard. I wanted to see if it is possible to change the color of the font in the tab name. I tried adding "fontColor": "#...", to various sections in the source code, but nothing I do seems to change the color of the tab name. Thanks!
We are planning to transition to the cloud and have realized that about 30% of our searches are dashboard refreshes. To minimize SVCs, I wonder if there is a way to disable dashboard refreshes, and whether the relevant features differ between on-prem and cloud.
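For classic (Simple XML) dashboards, the auto-refresh interval is typically a refresh attribute on the dashboard or on individual searches, so removing it stops the scheduled reruns. A hypothetical before/after sketch:

<!-- Before: this search reruns every 5 minutes -->
<search refresh="5m" refreshType="delay">
  <query>index=_internal | stats count</query>
</search>

<!-- After: no refresh attribute, so the search runs only on page load -->
<search>
  <query>index=_internal | stats count</query>
</search>
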
I have installed the Splunk O11y agent via the Linux script. I have the smartagent/rabbitmq receiver/pipeline per the instructions at https://help.splunk.com/en/splunk-observability-cloud/manage-data/available-data-sources/supported-integrations-in-splunk-observability-cloud/applications-messaging/rabbitmq

While restarting the service and viewing logs, I see a failure:

Jul 11 14:40:06 ip-1**-**-**-**.ec2.internal otelcol[3749684]: 2025-07-11T14:40:06.105Z info collectd/logging.go:41 configfile: stat (/usr/lib/splunk-otel-collector/agent-bundle/run/collectd/global/managed_config/*.conf) failed: No such file or directory {"resource": {"service.instance.id": "418ee031-25d8-4d3c-a115-6fb7e98c4992", "service.name": "otelcol", "service.version": "v0.128.0"}, "otelcol.component.id": "smartagent/rabbitmq", "otelcol.component.kind": "receiver", "otelcol.signal": "metrics", "name": "default", "collectdInstance": "global"}

Since the instructions do not mention this directory or these .conf files, I would expect them to have been installed by default. Does anybody else have experience with this?

Also, just a note: the RabbitMQ developers have deprecated the HTTP API in favor of Prometheus, and while the agent has the capacity to build Prometheus receivers, the …
Hi everyone. I have a panel that contains a list of links to other dashboards. I need to create a new list item with a link that changes dynamically according to the values of three tokens evaluated inside "eval" blocks. But, as Splunk makes clear, I can't create "eval" blocks inside a panel. So I wanted to know if there is a way to evaluate these tokens such that they can be used within the scope of the "panel" block.

Thanks in advance,
Pedro
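One pattern that may help (a sketch with hypothetical token, app, and dashboard names): in Simple XML, <eval> is allowed inside an input's <change> block (or a search's <done> block) at the form level, and the resulting token is then visible to every panel:

<fieldset>
  <input type="dropdown" token="env">
    <label>Environment</label>
    <choice value="prod">Production</choice>
    <change>
      <!-- Evaluated at the form level; $link_url$ is then usable inside any panel -->
      <eval token="link_url">"/app/my_app/target_dashboard?form.env=" . $value$</eval>
    </change>
  </input>
</fieldset>
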
Hi everyone,

We have the following setup:

- Check Point Firewall is configured to send logs via syslog over UDP (port 514).
- Logs are received by a Linux server running rsyslog.
- rsyslog writes these logs to a local file (e.g., /var/log/CheckPoint.log).
- Splunk (on the same server) reads this file and indexes the logs.

Although the Check Point firewall sends complete logs (visible in tcpdump, including structured data and original timestamps), only a truncated version of each log is written to the file by rsyslog. Specifically:

- The structured message body is missing.
- Only the syslog header (timestamp, hostname, program name) appears in the file.

Can anyone help? Thank you.
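One thing worth checking (a sketch, with a hypothetical source IP): rsyslog's default parser can mangle messages that are not fully RFC-compliant, so writing the raw message via a %rawmsg% template shows whether the truncation happens at the parsing stage:

# /etc/rsyslog.d/checkpoint.conf
# Write the message exactly as received, bypassing header re-formatting.
template(name="RawFormat" type="string" string="%rawmsg%\n")

if ($fromhost-ip == '192.0.2.10') then {
    action(type="omfile" file="/var/log/CheckPoint.log" template="RawFormat")
    stop
}
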
Hello, I have an old, inherited version of Splunk (8.2.1, RHEL 7) and need to perform a software upgrade prior to migrating to a new RHEL 8 OS. It seems that for a successful configuration migration I first need to upgrade the 8.2.1 instance on the old RHEL 7 host to version 9.0.6; unfortunately 9.0.6 is out of support and I can't download this old version from the web. Would you please help me with it?
Hi, we're currently facing a load-imbalance issue in our Splunk deployment and would appreciate any advice or best practices.

Current Setup: Universal Forwarders (UFs) → Heavy Forwarders (HFs) → Cribl

We originally had 8 HFs handling parsing and forwarding. Recently, we added 6 new HFs (for a total of 14) to distribute the load more evenly and to offload the congested older HFs. All HFs are included in the UFs' outputs.conf under the same TCP output group.

Issue: We're seeing that some of the original 8 HFs are still showing blocked=true in metrics.log (splunktcpin queue full), while the newly added HFs have little to no traffic. It looks like the load is not being evenly distributed across the available HFs.

Here is our current outputs.conf deployed on the UFs:

[tcpout]
defaultGroup = HF_Group
forwardedindex.2.whitelist = (_audit|_introspection|_internal)

[tcpout:HF_Group]
server = HF1:9997,HF2:9997,...HF14:9997

We have not set autoLBFrequency yet.

Questions:
1. Do we need to set autoLBFrequency in order to achieve true active load balancing across all 14 HFs, even when none of them are failing?
2. If we set autoLBFrequency = 30, are there any potential downsides (e.g., performance impact, TCP session churn)?
3. Are there better or recommended approaches to ensure even distribution of UF traffic across multiple HFs before forwarding to Cribl?

Please note that we are sending a large volume of data, primarily wineventlogs. Your help is very much appreciated. Thank you.
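For what it's worth, a sketch of the time- and volume-based rotation settings on the UF side (the values are illustrative, not recommendations; the server list is abbreviated as in the question):

[tcpout:HF_Group]
server = HF1:9997,HF2:9997,...HF14:9997
# Switch to a different HF every 30 seconds...
autoLBFrequency = 30
# ...or sooner, once this many bytes have been sent (illustrative value)
autoLBVolume = 10485760
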
Hello, I have a problem with the Analyst Queue: I am not able to add a column to the Analyst Queue in the GUI. When I do this (using the cogwheel icon on the Analyst Queue dashboard), the column is added, but when I log off and log on again, the previously added column disappears and the Analyst Queue is back to its default settings.

I expected the new Analyst Queue config to be saved in $SPLUNK_HOME/etc/apps/SA-ThreatIntelligence/local/log_review.conf, but I found that this file remains untouched when I add a new column. Is there any way to add a new column to the AQ permanently? At the moment I am aware of only one way: manually editing $SPLUNK_HOME/etc/apps/SA-ThreatIntelligence/local/log_review.conf (this works), but that is not usable for analysts who want to customize the AQ GUI and do not have the privilege to edit config files.

The environment is a Search Head cluster (3 nodes) with Splunk Enterprise 9.3.0 and Enterprise Security 8.1.0. Any hint would be highly appreciated.

Best regards,
Lukas Mecir
Hi Splunk Community, I am using the Splunk Add-on for ServiceNow v8.0.0 in a Splunk Cloud environment and have configured a custom input for the change_request table with:

timefield = sys_updated_on
filter_parameters = sysparm_query=ORDERBYDESCsys_updated_on&sysparm_limit=100

Despite this, logs in _internal consistently show the add-on attempting to use:

change_request.change_request.sys_updated_on

Example from splunkd_access.log:

200 GET /change_request.change_request.sys_updated_on

This causes data not to be ingested, despite the 200 HTTP responses. The correct behavior should use change_request as the table and include sys_updated_on in the query string, not in the URL path.

Request: please confirm whether this is a known issue and whether there is a workaround. Thank you.
Hello, I'm looking to secure the connection to our deployment server using HTTPS, following this doc:
https://docs.splunk.com/Documentation/Splunk/9.4.2/Security/Securingyourdeploymentserverandclients

I'm wondering whether a client certificate is mandatory, or whether it is possible to only install a certificate on the DS server itself. I don't need mTLS; my goal is only to have an encrypted connection between the server and the clients.

Thanks for your help,
Lucas
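If it helps, a sketch of the server-certificate-only shape (no client certs; the certificate paths are hypothetical). The DS presents its own certificate, and each client only verifies the server against the CA:

# server.conf on the deployment server
[sslConfig]
serverCert = /opt/splunk/etc/auth/mycerts/ds_server.pem
sslPassword = <private key password>

# server.conf on each deployment client
[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/mycerts/ca.pem
sslVerifyServerCert = true
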
Hello (excuse my English, it's not my native language),

I upgraded Splunk Enterprise from 9.2.1 to 9.3.4. Everything went right; each server was upgraded without any problem (Linux and Windows). But when I click on Settings > Users and authentication > Roles (same for Users), the panel with the list of users is empty. I can see only the top menu of Splunk; the rest of the page is blank. This occurs when the language is set to French (fr-FR); in English (en-US) it's OK.

Can somebody help me? Where can I find the language pack, to see if something is missing? Thank you.
The Splunk app for Linux already provides a stanza for collecting all the .log files in the /var/log folder ([monitor:///var/log]). But what if I want to write specific regexes/transformations for a specific .log file, given its path? For example, I want to apply transformations by writing specific stanzas in props.conf and transforms.conf for the files /var/log/abc/def.log and /var/log/abc/ghi.log. How do I make these share the same sourcetype, "alphabet_log", and then write its regex functions?

I also have a question regarding the Splunk docs. The props.conf docs state:

For settings that are specified in multiple categories of matching [<spec>] stanzas, [host::<host>] settings override [<sourcetype>] settings. Additionally, [source::<source>] settings override both [host::<host>] and [<sourcetype>] settings.

What does "override" mean here? Does it override everything, or does it combine the stanzas and only override the duplicate settings?
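As a starting point, a sketch of per-source sourcetype assignment plus a transform (the transform name alphabet_mask and the password-masking regex are hypothetical placeholders for your own rules):

# props.conf
[source::/var/log/abc/def.log]
sourcetype = alphabet_log

[source::/var/log/abc/ghi.log]
sourcetype = alphabet_log

[alphabet_log]
TRANSFORMS-mask = alphabet_mask

# transforms.conf
[alphabet_mask]
# Rewrite _raw at index time, masking anything after "password="
REGEX = (password=)\S+
FORMAT = $1********
DEST_KEY = _raw
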