All Topics



The Akamai Guardicore app shows that it is a cloud app, but I don't see it as an option to install in Splunk Cloud. The add-on is available, just not the app.
We are running the Elasticsearch Data Integrator - Modular Input app to ingest logs from Elasticsearch into Splunk. However, the app only works right after Splunk is restarted; it stops working a few minutes later and does not recover until the next time Splunk is restarted. Error message: ERROR PersistentScript [3778898 PersistentScriptIo] - From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/TA_elasticsearch_data_integrator___modular_input_rh_settings.py persistent}: f"Failed to get password of realm={self._realm}, user={user}." Can you help fix the problem?
Hi, could you help me retrieve message-tracking logs from our on-premises Exchange server? I added the following lines to inputs.conf, but the data still isn't being parsed. I guess something is missing or incorrect. I'm also unsure how to set up the Exchange add-on and haven't found clear documentation. Any guidance would be greatly appreciated.

[monitor://C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\MessageTracking\]
disabled = false
sourcetype = exchange_messagetracking
index = exchange
host_segment = 4

[monitor://C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\MessageTracking\*.log]
disabled = false
sourcetype = exchange_messagetracking
index = exchange
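Message-tracking logs are CSV files with comment lines and a "#Fields:" header, so one possible approach, a minimal sketch assuming the custom sourcetype exchange_messagetracking is kept rather than the Splunk Add-on for Microsoft Exchange's own sourcetypes, is structured-data parsing in props.conf. Because INDEXED_EXTRACTIONS runs on the monitoring instance, this props.conf would need to be deployed to the forwarder, not only the indexers; the date-time field name is taken from the Exchange header and may need adjusting:

[exchange_messagetracking]
INDEXED_EXTRACTIONS = csv
# header line looks like "#Fields: date-time,client-ip,..."; capture everything after the prefix
FIELD_HEADER_REGEX = ^#Fields:\s*(.*)
# skip the other comment lines that start with #
PREAMBLE_REGEX = ^#(?!Fields:)
TIMESTAMP_FIELDS = date-time
SHOULD_LINEMERGE = false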
Hello All, we have a Splunk Universal Forwarder 9.4.0 (then 9.4.3) installed on a Windows 2022 box to which we don't have direct access. We have deployed some apps and the forwarder manages to send us its splunkd.log and some other monitor inputs, but we are not able to get the Windows events (Application/System/Security) using the specific stanzas. The host is more hardened than usual, but the admins configured what they believe are the correct Event Log permissions, to no avail. Something like this has never happened to us. We tried updating the agent version and configuring the installation both with the LOCAL SYSTEM account and with a virtual account, but still no success. We don't see any relevant internal messages about a problem with permissions or Event Log access.
- Is there any event we should look for in the Windows logs or UF logs to understand this problem?
- Is there anything we can activate in the UF to get more info about this limitation?
Thank you
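For reference, a minimal sketch of the kind of stanza in use and an internal-log search to surface access errors from the event log input; the channel, index, and forwarder hostname below are placeholders, not the actual configuration:

[WinEventLog://Security]
disabled = 0
renderXml = false
index = wineventlog

index=_internal source=*splunkd.log* host=<forwarder_host> (WinEventLog* OR EventLog*) (ERROR OR WARN*)
| stats count by component, log_level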
I have a field called key. key is multivalued and its values are dynamic. I have another field called values, which is also multivalued and dynamic. The entries in "values" line up positionally with the entries in "key". Example:

key: AdditionalInfo, DeviceID, DeviceType, OS
values: user has removed device with id alpha_numeric_field" in area "alpha_numeric_field" for user "alpha_numeric_field"., alpha_numeric_field, mobile_device, Windows

I want to create new fields named after the entries in "key", each taking the corresponding entry from "values". The outcome would be:

AdditionalInfo = user has removed device with id alpha_numeric_field" in area "alpha_numeric_field" for user "alpha_numeric_field".
DeviceID = alpha_numeric_field
DeviceType = mobile_device
OS = Windows

Thanks in advance and I hope this makes sense.
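One possible approach, sketched below, pairs the two multivalue fields with mvzip, splits the pairs, and uses the {k} syntax to create dynamically named fields. The "=!=" separator is an arbitrary choice assumed not to appear in the data, and because mvexpand splits one event into several rows, a final stats (or eventstats) may be needed to recombine them into a single result per event:

... your base search ...
| eval pair=mvzip('key', 'values', "=!=")
| mvexpand pair
| rex field=pair "^(?<k>.*?)=!=(?<v>.*)$"
| eval {k}=v
| fields - pair k v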
I want to show a hyperlink in the error message instead of showing the actual link. How do I achieve this? I'm using the Splunk UCC framework on the configuration page.
Hi, for the past week the splunk-otel-collector service has not started.

Jul 21 14:00:22 svx-jsp-121i systemd[1]: Started Splunk OpenTelemetry Collector.
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: 2025/07/21 14:00:22 settings.go:483: Set config to /etc/otel/collector/agent_config.yaml
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: 2025/07/21 14:00:22 settings.go:539: Set memory limit to 460 MiB
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: 2025/07/21 14:00:22 settings.go:524: Set soft memory limit set to 460 MiB
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: 2025/07/21 14:00:22 settings.go:373: Set garbage collection target percentage (GOGC) to 400
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: 2025/07/21 14:00:22 settings.go:414: set "SPLUNK_LISTEN_INTERFACE" to "127.0.0.1"
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: 2025-07-21T14:00:22.250+0200#011warn#011envprovider@v1.35.0/provider.go:61#011Configuration references unset environment variable#011{"name": "SPLUNK_GATEWAY_URL"}
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: Error: failed to get config: cannot unmarshal the configuration: decoding failed due to the following error(s):
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: 'service.telemetry.metrics' decoding failed due to the following error(s):
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: '' has invalid keys: address
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: 2025/07/21 14:00:22 main.go:92: application run finished with error: failed to get config: cannot unmarshal the configuration: decoding failed due to the following error(s):
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: 'service.telemetry.metrics' decoding failed due to the following error(s):
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: '' has invalid keys: address
Jul 21 14:00:22 svx-jsp-121i systemd[1]: splunk-otel-collector.service: Main process exited, code=exited, status=1/FAILURE
Jul 21 14:00:22 svx-jsp-121i systemd[1]: splunk-otel-collector.service: Failed with result 'exit-code'.
Jul 21 14:00:22 svx-jsp-121i systemd[1]: splunk-otel-collector.service: Service RestartSec=100ms expired, scheduling restart.
Jul 21 14:00:22 svx-jsp-121i systemd[1]: splunk-otel-collector.service: Scheduled restart job, restart counter is at 5.
Jul 21 14:00:22 svx-jsp-121i systemd[1]: Stopped Splunk OpenTelemetry Collector.
Jul 21 14:00:22 svx-jsp-121i systemd[1]: splunk-otel-collector.service: Start request repeated too quickly.
Jul 21 14:00:22 svx-jsp-121i systemd[1]: splunk-otel-collector.service: Failed with result 'exit-code'.
Jul 21 14:00:22 svx-jsp-121i systemd[1]: Failed to start Splunk OpenTelemetry Collector.

I need help.
Regards, Olivier
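The error points at an "address" key under service.telemetry.metrics in /etc/otel/collector/agent_config.yaml that newer collector versions no longer accept. A minimal sketch of the likely offending block and one possible replacement follows; this assumes the default self-monitoring port 8888 and should be checked against the agent_config.yaml shipped with your collector version:

# likely present today (rejected by newer versions):
service:
  telemetry:
    metrics:
      address: "${SPLUNK_LISTEN_INTERFACE}:8888"

# one possible replacement using the newer readers syntax:
service:
  telemetry:
    metrics:
      readers:
        - pull:
            exporter:
              prometheus:
                host: "${SPLUNK_LISTEN_INTERFACE}"
                port: 8888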
Hello All, I need guidance on passing the default global time token from one Studio dashboard to another Studio dashboard. Both dashboards have the same default global time token, no changes made, and the token is used across the data sources of the respective panels. I use the custom URL below under drilldown to pass the token to the other dashboard:

https://asdfghjkl:8000/en-US/app/app_name/dashboard_name?form.global_time.earliest=$global_time.earliest$&form.global_time.latest=$global_time.latest$

On the receiving dashboard, below is my input; on redirect it always loads the dashboard with the default value declared on the receiving dashboard.

{
  "type": "input.timerange",
  "options": {
    "token": "global_time",
    "defaultValue": "0,"
  },
  "title": "Global Time Range"
}

Kindly advise; the time range I select on the main dashboard should be the same one passed to the sub-dashboard.
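For comparison, a minimal sketch of the receiving input with a conventional defaultValue written as a single "earliest,latest" string; whether fixing the malformed "0," default alone resolves the redirect behaviour is an assumption to verify:

{
  "type": "input.timerange",
  "options": {
    "token": "global_time",
    "defaultValue": "-24h@h,now"
  },
  "title": "Global Time Range"
}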
Hi all, multiple universal forwarders are installed on both Windows and Linux, and they work fine. The deployment server's Forwarder Management tab no longer shows them; however, after making changes to apps in /opt/splunk/etc/deployment-apps/<app>, the forwarders phoned home to the deployment server and received the changes, yet we still have issues managing them. I found a lot of these logs when I checked the _internal index from the search head:

INFO DC:DeploymentClient [8072 PhonehomeThread] - channel=deploymentServer/phoneHome/default Will retry sending phonehome to DS; err=not_connected

There is no problem connecting from the UF to the DS on TCP port 8089. Does anyone have any ideas on how I could solve this?
DS version = 9.3.1
UF version = 9.3.1

$ splunk show deploy-poll
Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R splunkfwd:splunkfwd /opt/splunkforwarder"
Deployment Server URI is set to "10.121.29.10:8089".
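Two checks that may help narrow this down are a search for the retry messages by client, and the deployment server's own view of its clients via REST. This is only a diagnostic sketch; splunk_server=local assumes the REST search runs on the DS itself, and the field names may vary slightly by version:

index=_internal sourcetype=splunkd component="DC:DeploymentClient" "Will retry sending phonehome"
| stats count latest(_time) as last_seen by host

| rest /services/deployment/server/clients splunk_server=local
| table hostname ip clientName lastPhoneHomeTime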
I have a requirement to see all users and their last login times. We authenticate through LDAP, so Settings > Users > Last login time does not work. I tried the query below, but it only shows the latest users, not all of them.

| rest /services/authentication/httpauth-tokens splunk_server=*
| table timeAccessed userName splunk_server

I also want to know when a user was created in Splunk, as users are created via LDAP.
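Because the httpauth-tokens endpoint only lists currently active sessions, the audit index may give a fuller history for as long as _audit is retained. A sketch follows; the action/info values are the usual audit field values and should be verified against your own _audit events:

index=_audit action="login attempt" info=succeeded
| stats latest(_time) as last_login by user
| eval last_login=strftime(last_login, "%Y-%m-%d %H:%M:%S")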
What options do I have for diagrams/visualizations in Dashboard Studio? I am on version 9.1.8.
I would appreciate help from anyone who has encountered a similar problem. We are using Microsoft E5 licensing with the following products:
- Intune
- Entra ID
- Defender for Endpoint
- Office 365
- Teams
All events from Microsoft are streamed to Event Hubs and from there to our Splunk ES. We are very confused and don't know which add-ons we should install. I would love to hear from anyone who uses these technologies. Splunk Enterprise Security
Hi Splunk Community, I'm new to Splunk and working on a deployment where we index large volumes of data (approximately 500 GB/day) across multiple sources, including server logs and application metrics. I've noticed that some of our searches are running slowly, especially when querying over longer time ranges (e.g., 7 days or more). Here's what I've tried so far:
- Used summary indexing for some repetitive searches.
- Limited the fields in searches using the fields command.
- Ensured searches use indexed fields where possible.
However, performance is still not ideal, and I'm looking for advice on:
- Best practices for optimizing search performance in Splunk for large datasets.
- How to effectively use data models or accelerated reports to improve query speed.
- Any configuration settings (e.g., in limits.conf) that could help.
My setup:
- Splunk Enterprise 9.2.1
- Distributed deployment with 1 search head and 3 indexers
- Data is primarily structured logs in JSON format
Any tips, configuration recommendations, or resources would be greatly appreciated! Thanks in advance for your help.
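As one concrete illustration of the accelerated-data-model route, a tstats search over an accelerated model is usually far faster than the equivalent raw search across 7+ days. The data model and field below (Web, Web.status) are only an example from the CIM and would need to match whatever model you actually accelerate:

| tstats summariesonly=true count from datamodel=Web where Web.status>=500 by _time span=1h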
I was able to write a query that groups by API (msgsource) to show the response times, but I am trying to see if I can get the result in a different format. Here is the query I used:

query
| rex field=_raw "Time=(?<NewTime>\d{4}\.\d+)"
| eval TimeMilliseconds=(NewTime*1000)
| timechart span=1d count as total, count(eval(TimeMilliseconds<=1000)) as "<1sec", count(eval(TimeMilliseconds>1000 AND TimeMilliseconds<=2000)) as "1sec-2sec", count(eval(TimeMilliseconds>2000 AND TimeMilliseconds<=5000)) as "2sec-5sec", count(eval(TimeMilliseconds>48000)) as "48sec+" by msgsource

Here is the output that I get today:

_time | total: retrieveApi | total: createApi | <1sec: retrieveApi | <1sec: createApi | 1sec-2sec: retrieveApi | 1sec-2sec: createApi | 2sec-5sec: retrieveApi | 2sec-5sec: createApi
2025-07-13 | 1234 | 200 | 1200 | 198 | 34 | 1 | 0 | 1
2025-07-14 | 1000 | 335 | 990 | 330 | 8 | 5 | 2 | 0

This is what I would like to see, the results grouped by both _time and msgsource:

_time | msgsource | total | <1sec | 1sec-2sec | 2sec-5sec
2025-07-13 | retrieveApi | 1234 | 1200 | 34 | 0
2025-07-13 | createApi | 200 | 198 | 1 | 1
2025-07-14 | retrieveApi | 1000 | 990 | 8 | 2
2025-07-14 | createApi | 335 | 330 | 5 | 0
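One way to get that shape, sketched below, is to replace timechart with bin plus stats so that both _time and msgsource become row fields; the bucket boundaries simply mirror the ones in the original query:

query
| rex field=_raw "Time=(?<NewTime>\d{4}\.\d+)"
| eval TimeMilliseconds=(NewTime*1000)
| bin _time span=1d
| stats count as total, count(eval(TimeMilliseconds<=1000)) as "<1sec", count(eval(TimeMilliseconds>1000 AND TimeMilliseconds<=2000)) as "1sec-2sec", count(eval(TimeMilliseconds>2000 AND TimeMilliseconds<=5000)) as "2sec-5sec" by _time msgsource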
I want to find events where "NONE" appears for a field other than the 3 allowed enum fields. I need to ignore "NONE" when it belongs to one of the allowed fields. For example, if "ALLOWED1": "NONE" is in the event but no other "NONE" appears, I do not count it. If "ALLOWED2": "NONE" and "not-allowed": "NONE" appear in the same record, I need that record. The format in my record is:
\"ALLOWEDFIELD\": \"NONE\"
I am not sure how I should deal with the " and \ characters in the string for the query.
Hello, maybe I don't have the vocabulary to find the answer when Googling. I only submit this question after many attempts to find the answer on my own. I am trying to figure out why neither "started" nor "blocked" will show events when I add them to my search criteria, as shown in the images. The "success" action returns events found in the same Interesting Fields category ("action"). When using the search index=security action="*", the event listing includes what's been "blocked" (and what's been "started"). I can then add a search on "failed" password and the correct number of events display. All of the report options (Top values, Events with this field, etc.) display the proper count for "blocked". I have tried other interesting fields with greater values, wondering if there was some kind of limit set somewhere, but they work. I'm sure it's simple but I cannot figure it out. Please advise. Thanks, LS
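A quick way to confirm what the extracted action values actually look like (exact case, leading or trailing spaces, or values that only partially match) is a simple aggregation plus a wildcard check; this is only a diagnostic sketch:

index=security | stats count by action
index=security (action=*block* OR action=*start*)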
I am ingesting data with the Splunk Add-on for O365. I want to use the Eval Expression filter within an ingest action to control which email addresses we ingest data for. Sampling the data is easy, but the next bit isn't: I want to drop events where the RecipientAddress is not splunk.test@test.co.uk. Creating an | eval within a search is simple, but building an eval expression for a filter that drops events is where I am struggling. Our Exchange/Entra team is having problems limiting which online mailboxes the Splunk application can read, which is why I am looking at this workaround. Ignore the application that's tagged, as we are using Enterprise 9.3.4. Can you help?
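Ingest actions rulesets ultimately run as ingest-time eval, so a hand-written transforms.conf equivalent may help illustrate the logic. This is only a sketch: the sourcetype placeholder, the stanza names, and the assumption that RecipientAddress is only available inside _raw at ingest time (hence the match on _raw) all need verifying against your data.

props.conf:
[<o365_messagetrace_sourcetype>]
TRANSFORMS-filter_recipients = drop_unwanted_recipients

transforms.conf:
[drop_unwanted_recipients]
# keep events mentioning the wanted recipient, route everything else to nullQueue
INGEST_EVAL = queue=if(match(_raw, "splunk\.test@test\.co\.uk"), "indexQueue", "nullQueue")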
Hi Splunk Community, I'm looking for guidance on how to properly manage and organize lookup files to ensure they are always up-to-date, especially in the context of alerting. I’ve run into situations where an alert is triggered, but the related lookup file hasn't been updated yet, resulting in missing or incomplete context at the time of the alert. What are the best practices for ensuring that lookup files are refreshed frequently and reliably? Should I be using scheduled saved searches, external scripts, KV store lookups, or another mechanism to guarantee the most recent data is available for correlation in real-time or near-real-time? Any advice or example workflows would be greatly appreciated. Use case for context: I’m working with AWS CloudTrail data to detect when new ports are opened in Security Groups. When such an event is detected, I want to enrich it with additional context — for example, which EC2 instance the Security Group is attached to. This context is available from AWS Config and ingested into a separate Splunk index. I’m currently generating a lookup to map Security Group IDs to related assets, but sometimes the alert triggers before this lookup is updated with the latest AWS Config data. Thanks in advance!
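One common pattern, sketched here with hypothetical index, sourcetype, and field names, is a scheduled search that rebuilds the enrichment lookup shortly before the alert runs (or writes to a KV store collection instead of a CSV so partial updates are possible):

index=aws_config sourcetype=aws:config resourceType=AWS::EC2::Instance
| spath output=sg_ids path=configuration.securityGroups{}.groupId
| mvexpand sg_ids
| stats values(resourceId) as instance_ids by sg_ids
| rename sg_ids as security_group_id
| outputlookup sg_to_instance.csv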
Hi, I have seen many recommendations on rebuilding and migrating Splunk with its existing data and configuration. It is a bit confusing for me as a new Splunk user, so I would appreciate some guidance. These are the instances:
- 1x Search Head
- 3x Indexers
- 3x Heavy Forwarders
- 1x License server
- 1x Deployment server
Current version: 9.3.2
Assuming the hostname/IP could be the same or different for the rebuild, what is the best way to perform the rebuild and migration with the existing data and configuration?
Same hostname/IP:
- Copy the entire contents of the $SPLUNK_HOME directory from the old server to the new server
- Install each new Splunk component on its new server
Different hostname/IP:
- Copy the entire contents of the $SPLUNK_HOME directory from the old server to the new server
- Install each new Splunk component on its new server
- Update the individual .conf files of instances if using new hostnames
- Update each instance to point to its respective instance roles
Could I also install a newer version of Splunk, without going through 9.3.2, when rebuilding and migrating? For testing purposes, I'll be trying it on one all-in-one instance for the rebuild/migration due to space limitations.
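For the same-version copy approach on Linux, a minimal sketch of the usual sequence follows; paths, hostnames, and credentials are placeholders, and Splunk should be stopped before copying so the index buckets are consistent:

# on the old server
/opt/splunk/bin/splunk stop
tar -C /opt -czf /tmp/splunk_backup.tar.gz splunk

# copy /tmp/splunk_backup.tar.gz to the new server, then:
tar -C /opt -xzf /tmp/splunk_backup.tar.gz
/opt/splunk/bin/splunk start --accept-license

# only if the hostname changed:
/opt/splunk/bin/splunk set servername NEW_HOSTNAME -auth admin:changeme
/opt/splunk/bin/splunk set default-hostname NEW_HOSTNAME -auth admin:changeme
/opt/splunk/bin/splunk restart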
Hello! I'm new to Splunk Cloud. Could you please explain the difference between hot, warm, cold, and thawed buckets in Splunk Enterprise and Splunk Cloud? I understand that in Splunk Enterprise a bucket moves through several states (from hot to thawed). However, when I click on a new index in Splunk Cloud, I only see "Searchable retention (days)" and "Dynamic Data Storage". Does this mean that the amount of data that can be searched in the hot and warm buckets before it goes to cold is basically equal to the searchable retention (days)? Does Dynamic Data Storage basically equate to the cold, frozen, and thawed buckets (as in Splunk Enterprise)? Furthermore, in the Splunk Cloud Monitoring Console, I can see DDAS and DDAA in the License section. What exactly are these, and what is their relationship with data retention? What happens if DDAS/DDAA exceeds 100%? Does this affect search performance, or does Splunk Cloud simply not allow you to search the data? Thanks.