All Topics


We have been having some strange performance issues with some of our dashboards and would like advice on how to troubleshoot and fix them. Despite the underlying searches being extremely fast, results sometimes take upwards of 30 seconds to be displayed in the corresponding dashboard panels.

Infrastructure and dashboard details

We are running a distributed on-prem Splunk environment with one search head and a cluster of three indexers. All instances are on version 9.2.2, although we have been able to replicate these issues with a 9.4.2 search head as well. We have six core dashboards, ranging from simple and static to considerably dynamic and complex. About 95% of the searches in this app's dashboards are metric-based and use mstats. Each individual search is quite fast, with most running in under 0.5s, even in the presence of joins/appends. Most of these searches have a 10s refresh time by default.

Problem

We have been facing a recurring issue where certain panels sometimes do not load for several seconds (usually 10-30 seconds). This tends to happen in some of the more complex dashboards, particularly after drilldowns/input interactions, which often leads to "Waiting for data" messages displayed inside the panels. One of two things tends to happen:

1. The underlying search jobs run successfully but the panels do not display data until the next refresh, which causes the search to re-run; panels behave as normal afterwards.
2. The pending searches start executing but do not fetch any results for several seconds, which can lead to the same search taking variable amounts of time to execute.

Here is an example of the same search taking significantly different amounts of time to run (run just 27s apart): [screenshot omitted]

Whenever a search takes a long time to run, the component that takes the longest is, by far, dispatch.stream.remote.<one_of_the_indexers>, which, to the best of our knowledge, represents the amount of time the search head spends waiting for data streamed back from an indexer during a distributed search.

We have run load tests consisting of opening our dashboards several times in different tabs simultaneously for prolonged periods while monitoring system metrics such as CPU, network, and memory. We were not able to detect any hardware bottlenecks, only a modest increase in CPU usage and load average on the search head and indexers, which is expected. We have also upgraded the hardware the search head runs on (96 cores, 512 GB RAM) and, despite the noticeable performance increase, the problem still occurs occasionally.

We would greatly appreciate the community's assistance in troubleshooting these issues.
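In case it helps to quantify how widely the runtimes swing, the completed-search records in _audit are one place to start. A minimal sketch, assuming the default _audit fields (total_run_time, user) are populated in your environment:

index=_audit action=search info=completed
| stats count avg(total_run_time) as avg_sec perc95(total_run_time) as p95_sec max(total_run_time) as max_sec by user
| sort -max_sec

Comparing the job inspector's dispatch.stream.remote.* timings (and the search.log) for a slow run against a fast run of the same search usually narrows down whether the time is spent waiting on an indexer or rendering results on the search head.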
Hi everyone and thanks in advance. I'm trying to collate all our SOCKS traffic on our network over the last 90 days. Our IPs rotate and as a result I can't run this search for all time; I have to run it for each of the 90 days individually, which is where I got to here:

index=*proxy* SOCKS earliest=-1d latest=-0d
| eval destination=coalesce(dest, dest_port), userid=coalesce(user, username)
| rex field=url mode=sed "s/^SOCKS:\/\/|:\d+$//g"
| eval network=case(match(src_ip,"<REDACTED>"),"user",1=1,"server")
| stats values(domain) as Domain values(userid) as Users values(destination) as Destinations by url, src_ip, network
| convert ctime(First_Seen) ctime(Last_Seen)
| sort -Event_Count
| join type=left max=0 src_ip
    [ search index=triangulate earliest=-1d latest=-0d
    | stats count by ip,username
    | rename username AS userid
    | rename ip as src_ip ]
| join type=left max=0 src_ip
    [ search index=windows_events EventID=4624 NOT src_ip="-" NOT user="*$" earliest=-1d latest=-0d
    | stats count by IpAddress, user
    | rename IpAddress as src_ip
    | rename user as win_userid
    | fields - count ]
| eval userid=coalesce(userid, win_userid)
| join type=left max=0 userid
    [ search index="active_directory" earliest=-1d latest=-0d
    | stats count by username,fullname,title,division,mail
    | rename username as userid ]

Then a colleague suggested I do it slightly differently and run it over the 90 days but link it together, which is where we got to here:

index=*proxy* SOCKS
| eval destination=coalesce(dest, dest_port)
| rex field=url mode=sed "s/^SOCKS:\/\/|:\d+$//g"
| eval network=case(match(src_ip,"<Redacted>"),"user",1=1,"server")
| eval Proxy_day = strftime(_time, "%d-%m-%y")
| join type=left max=0 src_ip
    [ search index=windows_events EventID=4624 NOT src_ip="-" NOT user="*$"
    | stats count by IpAddress, user
    | rename IpAddress as src_ip
    | rename user as win_userid
    | fields - count ]
| eval userid=coalesce(userid, win_userid)
| join type=left max=0 userid
    [ search index="active_directory"
    | stats count by username, fullname, title, division, mail
    | rename username as userid ]
| rename src_ip as "Source IP"
| stats values(mail) as "Email Address" values(username) as "User ID" values(destination) as Destination values(network) as Network values(Proxy_day) as Day values(url) as URL by "Source IP"

However, the problem I'm running into now is that the output can contain hundreds of URLs/emails/days associated with a single source IP, which makes the data unactionable and actually starts to break the .csv when exported. Would anyone be able to help? Ideally I'd just like the top (for example) 5 results, but I've had no luck with that or a few other methods I've tried. Even SplunkGPT is failing me - is it even possible?
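For the "top 5 per source IP" part, one pattern that avoids dumping hundreds of values into a single cell is to rank the URLs per source IP before the final rollup. A rough sketch of just that technique, using the same base search and field names as above (adjust to taste and re-add the joins where needed):

index=*proxy* SOCKS
| rex field=url mode=sed "s/^SOCKS:\/\/|:\d+$//g"
| stats count as hits by src_ip, url
| sort 0 src_ip -hits
| streamstats count as rank by src_ip
| where rank <= 5
| stats list(url) as "Top 5 URLs" list(hits) as Hits by src_ip

The streamstats/where pair keeps only the five most frequent URLs per src_ip, so the final stats (and any exported CSV) stays bounded regardless of how noisy an address is.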
I'm having an issue trying to set up an Audit Input with the server I created connecting my Splunk SOAR and Enterprise. The server is set up correctly with the authentication key, and when I test the connection it's good, but for some reason when I set the interval to 60 I just get "No session key received" errors coming from the phantom_retry.py script. I'm not sure where I'm supposed to update a key, or whether I'm supposed to edit a certain script when I made the server, but I could use some assistance. Thanks!
[monitor://\\njros1bva0597\d$\LogFiles\warcraft-9.0.71\logs\*]
disabled = false
host = NJROS1BVA0621
alwaysOpenFile = 1
sourcetype = Image Importer Logs

Is there a way to add a wildcard for any upcoming version updates, like below? Will this work?

[monitor://\\njros1bva0597\d$\LogFiles\warcraft-9.*\logs\*]

Or does it have to be like this?

[monitor://\\njros1bva0597\d$\LogFiles\warcraft-9.[0-9].[0-9][0-9]\logs\*]
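For what it's worth, monitor paths accept * as a wildcard within a single path segment and ... for recursive matching, so the warcraft-9.* form is the usual first thing to try. If the directory layout ever gains extra depth, a hedged alternative is to monitor recursively and constrain matches with a whitelist regex; the regex below is an assumption about your naming and would need testing:

[monitor://\\njros1bva0597\d$\LogFiles\...\logs\*]
disabled = false
host = NJROS1BVA0621
alwaysOpenFile = 1
sourcetype = Image Importer Logs
whitelist = \\warcraft-9\.\d+\.\d+\\logs\\

The whitelist is evaluated against the full file path, so it keeps the input pinned to warcraft-9.x.y\logs even though the monitored path itself is wide open.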
Hi all,

When creating a systemd unit file for an old UF (<9.1) using "splunk enable boot-start -systemd-managed 1 -user ..", a systemd file is created with this content:

[Service]
ExecStartPost=/bin/bash -c "chown -R splunkfwd:splunkfwd /sys/fs/cgroup/cpu/system.slice/%n"
ExecStartPost=/bin/bash -c "chown -R splunkfwd:splunkfwd /sys/fs/cgroup/memory/system.slice/%n"

This is also documented here, in "Reference unit file template": https://help.splunk.com/en/splunk-enterprise/forward-and-process-data/universal-forwarder-manual/9.4/working-with-the-universal-forwarder/manage-a-linux-least-privileged-user

Does anyone have an idea why this is done? The paths use cgroup v1, which only exists on old Linux systems; on up-to-date systems the chown fails, but the service starts anyway. When creating a systemd config with recent UFs, these ExecStartPost parameters are not set anymore. BUT when installing Splunk Enterprise, this line is set in the systemd unit:

ExecStartPost=-/bin/bash -c "chown -R splunk:splunk /sys/fs/cgroup/system.slice/%n"

AFAIK Splunk core uses cgroups for Workload Management, but the UF does not. Is the reference unit file template for the UF just outdated, with settings that never made sense, or is there a good reason for them?

Thanks for your help and best regards,
Andreas
I am standing up a Linux server to host Splunk Enterprise 9.4.3, and I have 30+ Windows hosts. Can I upload the Splunk Add-on for Microsoft Windows and use it to configure the Windows hosts even though the server is running on Linux?

Thank you
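The add-on itself is platform-independent configuration, so it can sit on a Linux instance and be pushed out to Windows universal forwarders; the Windows-specific inputs only run on the Windows hosts. A sketch of the deployment-server side, with hypothetical server class and host-name patterns:

# serverclass.conf on the Linux deployment server (names are examples)
[serverClass:windows_hosts]
whitelist.0 = WINHOST-*

[serverClass:windows_hosts:app:Splunk_TA_windows]
restartSplunkd = true
stateOnClient = enabled

The Windows UFs would point at the deployment server via deploymentclient.conf and at the indexer(s) via outputs.conf; the Linux server only needs the add-on installed locally if you want its search-time knowledge (field extractions, tags) applied to the data it receives.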
Hi everyone! Quick question. I would like to know how I can send data to an index using a Python script. We need to ingest some data without using a forwarder, and I would like to use a script for this. Has anyone done this already? Ty! Regards.
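The usual forwarder-free route is the HTTP Event Collector (HEC): enable a token on the Splunk side, then POST JSON events to it. A minimal Python sketch, assuming a reachable HEC endpoint (port 8088 by default); the host name, token, index, and sourcetype below are placeholders:

import json
import requests  # pip install requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"   # placeholder host
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                     # placeholder token

# one event; HEC also accepts several JSON objects concatenated in one request body
event = {
    "index": "my_index",           # must exist and be allowed for this token
    "sourcetype": "my:app:json",
    "event": {"message": "hello from python", "status": "ok"},
}

resp = requests.post(
    HEC_URL,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    data=json.dumps(event),
    verify=False,   # point this at your CA bundle in production
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # expect {"text": "Success", "code": 0}

For steady-state pipelines, batching multiple events per POST and handling retries/backoff on non-200 responses is worth adding on top of this sketch.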
I'm cloning the event, and before cloning I extract the sourcetype to use later.

transforms.conf:

[copy_original_sourcetype]
SOURCE_KEY = MetaData:Sourcetype
REGEX = (.+)
FORMAT = orig_sourcetype::$1
WRITE_META = true

[clone_for_thirdparty]
SOURCE_KEY = _MetaData:Index
REGEX = ^test_np$
DEST_KEY = MetaData:Sourcetype
CLONE_SOURCETYPE = data_to_thirdparty
WRITE_META = true

[sourcetype_raw_updated]
SOURCE_KEY = MetaData:orig_sourcetype
REGEX = ^orig_sourcetype::(.*)$
FORMAT = $1##$0
DEST_KEY = _raw

But when I try to retrieve the extracted original value, I get nothing. Is there any way to persist the original sourcetype? @PickleRick @isoutamo @gcusello
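One possibly relevant detail, offered as a guess: MetaData:orig_sourcetype is not one of the documented metadata keys, so the last transform may be reading an empty SOURCE_KEY; fields written with WRITE_META land in _meta, which is itself a valid SOURCE_KEY. Before reworking the _raw rewrite, it may be worth confirming whether orig_sourcetype even survives into the clone as an indexed field. A verification sketch, assuming the cloned data_to_thirdparty events are indexed somewhere you can search:

| tstats count where sourcetype=data_to_thirdparty by index, orig_sourcetype

If that comes back empty, the meta field is being lost at (or before) the clone, and transform ordering in props.conf is the first thing to check. If it shows values, the remaining problem is reading it back, and swapping in SOURCE_KEY = _meta with REGEX = orig_sourcetype::(\S+) in the last stanza would be the hedged next experiment.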
Hello, two of our Splunk apps, "Splunk Add-on for Microsoft Cloud Services" and "Splunk Add-on for Office 365", are no longer collecting data. It looks like they stopped working on June 30. I checked the client secret in the Azure App Registrations panel and it had not expired. I went ahead and created a new key anyway and updated the two Splunk app configurations with the new key, but they still aren't collecting any data. I checked index="_internal" log_level=ERROR but didn't really see anything that stood out specific to these apps. Any suggestions on settings I can check, other logs to examine, etc.?
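Both add-ons write their own log files under $SPLUNK_HOME/var/log/splunk, which show up in _internal with the add-on name in the source path, so filtering on source rather than log_level can surface auth or throttling messages that aren't tagged ERROR. A sketch, assuming the usual o365/mscs log file prefixes (adjust the source patterns if your add-on versions name them differently):

index=_internal (source=*splunk_ta_o365* OR source=*mscs*) (log_level=ERROR OR log_level=WARN* OR "401" OR "403" OR "expired")
| stats count latest(_time) as latest by source, log_level
| convert ctime(latest)

If the inputs aren't even attempting collection, comparing the latest timestamps per source against June 30 should show whether the modular inputs stopped running or are running and failing.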
As we prepare to transition our Splunk deployment to the cloud, we are aiming to estimate the Splunk Virtual Compute (SVC) units that would be consumed during typical usage. Specifically, we are interested in understanding how best to approximate SVC usage for our on-prem environment using data available in the _audit index, or any other recommended sources. Our primary focus is on dashboard refreshes, as they represent a significant portion of our ongoing search activity. We're looking for guidance on any methodologies, SPL queries, or best practices that can help us approximate SVC consumption in our current environment to better forecast usage and cost implications post-migration.
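There is no exact on-prem-to-SVC formula exposed in the product, but a rough proxy many sizing exercises start from is aggregate search runtime per hour out of _audit, which at least captures the dashboard-refresh load. A sketch, assuming the default _audit fields, and explicitly only an approximation of compute demand rather than SVCs:

index=_audit action=search info=completed
| bin _time span=1h
| stats sum(total_run_time) as search_seconds dc(search_id) as searches by _time
| eval avg_busy_cores = round(search_seconds / 3600, 2)

Dividing total search seconds by 3600 gives the average number of concurrently busy search cores in each hour; trending that alongside ingest volume is usually how the per-hour compute profile gets presented for a migration estimate.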
Hi community, I've been pulling my hair out for quite some time over field extraction in the Splunk_TA_nix app. One thing that has been annoying me is the absence of a field containing the full command executed. Here is my question/comment, on which I'd like some feedback.

While trying to figure out why I am not seeing the expected/desired content, I noticed something.

Splunk_TA_nix/default/props.conf:

[linux_audit]
REPORT-command = command_for_linux_audit

Splunk_TA_nix/default/transforms.conf:

[command_for_linux_audit]
REGEX = exe=.*\/(\S+)\"
FORMAT = command::$1

This regex only applies to the "type=SYSCALL" audit log entry, which is the only one containing "exe=", and it does not work in our environment. There is no trailing quotation mark in our log, so this field is not properly extracted with this regex. To work as intended, it would need to be changed to:

[command_for_linux_audit]
REGEX = exe=.*\/(\S+)
FORMAT = command::$1

This would generate a field called "command" containing only the executed command (binary). Is this just in our environment, where we have a makeshift solution to generate a second audit log file for collection, or is this a general issue?

And the rant: it seems that if not defined elsewhere, the default field separator is space, which means most <field>=<value> entries in the audit log are extracted. The sourcetype=linux_audit type=PROCTITLE events actually have a field called "proctitle" containing the full command executed. While a "proctitle" field is extracted, its value is cut short after the first space, so only the command (binary) is available. Assuming this is expected behaviour, I suppose I have to define a field extraction overriding the "default" behaviour to get a "proctitle" field with the desired content.
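For the override itself, a local copy of the transform (so the default files stay untouched) with the trailing quote made optional would look roughly like the following. The proctitle stanza is a guess at a full-line extraction and assumes your PROCTITLE values are plain text rather than hex-encoded, which auditd sometimes produces:

# Splunk_TA_nix/local/transforms.conf (sketch)
[command_for_linux_audit]
REGEX = exe=\"?[^\s\"]*\/([^\s\"]+)\"?
FORMAT = command::$1

# hypothetical extra report for the full command line
[proctitle_full_for_linux_audit]
REGEX = proctitle=(.+?)\s*$
FORMAT = proctitle_full::$1

# Splunk_TA_nix/local/props.conf (sketch)
[linux_audit]
REPORT-command = command_for_linux_audit
REPORT-proctitle_full = proctitle_full_for_linux_audit

Since local/ settings merge over default/, this keeps the shipped extraction name intact while adding a separate proctitle_full field you can test against your own audit format.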
Could you please advise:

- Is there any Splunk Cloud security policy or best practice guidance on onboarding external data sources when the integration requires admin-level permissions at the source?
- Does Splunk recommend or require any formal risk review or CCSA-like process for such cases?
- Do you have any documentation or recommendations to share with us to justify this elevated access for log collection?
- Are there any alternatives or Splunk add-ons/plugins that could achieve the same without needing admin-level permissions?
Hi Experts,

Scenario: I have DB agents installed on standalone VMs for a group of DB servers, which connect through the DB agent VM. In the event notification, the actual DB server name comes through in the following format:

"dbmon:11432|host:xxxxxxxagl.xxxxxx.com|port:1433"

Is there any way I can customize this using AppDynamics placeholders in the JSON payload? I tried "${event.db.name}" and "${event.node.name}", but it's not working. I appreciate your inputs.

Thanks,
Raj
Subject: TruSTAR API: Data Retention Policy Inquiry

Dear Splunk Community,

We are currently utilizing your search_indicators API, as documented here: https://docs.trustar.co/api/v13/indicators/search_indicators.html. While we understand that the API supports a maximum time range of 1 year per query, we require clarification on the overall data retention policy for indicators. What is the total historical period for which indicator data is stored and retrievable via this API, regardless of the single-query window limit?

Your insight into this would be greatly appreciated for our data strategy.

TruSTAR
Configuring Internal Log Forwarding

1. Environment: 1 SH, 2 indexers, 2 intermediate forwarders, 4 UFs, and 1 MC.
2. I can only see the indexers' internal logs, even though I have correctly updated the server list under the [tcpout:primary_indexers] stanza in outputs.conf.
3. What could be the issue with this simple setup that prevents me from seeing the internal logs of the SH, MC, intermediate forwarders, and UFs?

Base config outputs.conf:

# BASE SETTINGS
[tcpout]
defaultGroup = primary_indexers

# When indexing a large continuous file that grows very large, a universal
# or light forwarder may become "stuck" on one indexer, trying to reach
# EOF before being able to switch to another indexer. The symptoms of this
# are congestion on *one* indexer in the pool while others seem idle, and
# possibly uneven loading of the disk usage for the target index.
# In this instance, forceTimebasedAutoLB can help!
# ** Do not enable if you have events > 64kB **
# Use with caution, can cause broken events
#forceTimebasedAutoLB = true

# Correct an issue with the default outputs.conf for the Universal Forwarder
# or the SplunkLightForwarder app; these don't forward _internal events.
# 3/6/21 only required for versions prior to current supported forwarders.
# Check forwardedindex.2.whitelist in system/default config to verify
#forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup|_configtracker|_dsclient|_dsphonehome|_dsappevent)

[tcpout:primary_indexers]
server = server_one:9997, server_two:9997

# If you do not have two (or more) indexers, you must use the single stanza
# configuration, which looks like this:
#[tcpout-server://<ipaddress_or_servername>:<port>]
# <attribute1> = <val1>

# If setting compressed=true, this must also be set on the indexer.
# compressed = true

# INDEXER DISCOVERY (ASK THE CLUSTER MANAGER WHERE THE INDEXERS ARE)
# This particular setting identifies the tag to use for talking to the
# specific cluster manager, like the "primary_indexers" group tag here.
# indexerDiscovery = clustered_indexers
# It's OK to have a tcpout group like the one above *with* a server list;
# these will act as a seed until communication with the manager can be
# established, so it's a good idea to have at least a couple of indexers
# listed in the tcpout group above.
# [indexer_discovery:clustered_indexers]
# pass4SymmKey = <MUST_MATCH_MANAGER>
# This must include protocol and port like the example below.
# manager_uri = https://manager.example.com:8089

# SSL SETTINGS
# sslCertPath = $SPLUNK_HOME/etc/auth/server.pem
# sslRootCAPath = $SPLUNK_HOME/etc/auth/ca.pem
# sslPassword = password
# sslVerifyServerCert = true

# COMMON NAME CHECKING - NEED ONE STANZA PER INDEXER
# The same certificate can be used across all of them, but the configuration
# here requires these settings to be per-indexer, so the same block of
# configuration would have to be repeated for each.
# [tcpout-server://10.1.12.112:9997]
# sslCertPath = $SPLUNK_HOME/etc/certs/myServerCertificate.pem
# sslRootCAPath = $SPLUNK_HOME/etc/certs/myCAPublicCertificate.pem
# sslPassword = server_privkey_password
# sslVerifyServerCert = true
# sslCommonNameToCheck = servername
# sslAltNameToCheck = servername

Thanks for your time!
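On full instances (search head, MC, cluster manager, heavy intermediate forwarders), a tcpout group alone is not always enough: the documented pattern for sending their _internal data onward also turns off local indexing and disables the forwarded-index filter. A sketch of the extra settings, assuming they go in the same app as the outputs.conf above:

[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers
forwardedindex.filter.disable = true
indexAndForward = false

A quick way to verify which hosts are actually delivering internal logs to the indexers:

index=_internal earliest=-15m
| stats count latest(_time) as latest by host
| convert ctime(latest)

It is also worth confirming with splunk btool outputs list --debug on the SH/MC that the stanza is really being picked up, and, for the UFs, that their connection to the intermediate forwarders is healthy, since their _internal events should pass straight through that tier.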
I am at my wits end trying to figure this out. I have Splunk Secure Gateway deployed and I'm successfully receiving push alerts via the "Send to Splunk Mobile" alert trigger action. This trigger action has the option to set a visualization, which I have picked, along with a "Token name" and "Result Fieldname" to pre-populate the dashboard visualization based on the alert that has just run. This is the piece I cannot seem to get working.

I'm able to dynamically set the alert title in the mobile app by using $result.user$ (user is the field in the Alert search that I'm interested in). I cannot seem to get that value into my dashboard, however. The visualization shows up inline with the search but it is not populated with data. I'm setting:

Token Name: netidToken
Result Fieldname: $result.user$

The dashboard that I'm linking to has an input with a token called "netidToken". This functionality works when calling it via URL, but it passes nothing to the dashboard in the mobile app, so clicking the "View dashboard" button on the alert just opens an empty dashboard. The Splunk documentation around this is woefully incomplete and never really explains the specifics of using these settings. Any insight would be appreciated!
I want to provide a standard Splunk user the ability to upload files via the web UI. Specifically, so that members of our finance team can upload supplier bills for reconciliation with our platform data. In this scenario granting full sc_admin is certainly not appropriate! I had (incorrectly) assumed that Power Users had this ability, but that is not the case. There is an article from 2014 that details what was required 11 years ago, but the cited permissions in that article are no longer relevant in 2025: https://community.splunk.com/t5/Getting-Data-In/Capability-to-upload-data-files-via-the-gui-for-a-user/m-p/190518 What is required in Splunk >9.3 (specifically Splunk Cloud) to enable this feature for a non-admin user?
Hi everyone.

I'm trying to link my dashboard to a separate platform, and the URL of this new platform needs to contain a timestamp in epoch time. I have a table in which each row represents a cycle, and a column that redirects the user to the separate platform, passing into the URL the epoch time of that row's timestamp. The issue is that, for some reason, Splunk seems to be converting the timestamp to epoch plus my timezone offset.

So, for example, the timestamp of a certain row is displayed in UTC as 16:33:27.967, and, to debug, I built a new column such that whenever I click on it, it redirects me to a URL that is simply the timestamp converted to epoch time. The code is of the form:

<table>
  <search>
    <query> ... </query>
  </search>
  <drilldown>
    <condition field="Separate Platform">
      <eval token="epochFromCycle">case($row.StartTime$=="unkown", null(), 1==1, strptime($row.StartTime$, "%Y-%m-%dT%H:%M:%S.%Q"))</eval>
      <link target="_blank">
        <![CDATA[ $epochFromCycle$ ]]>
      </link>
    </condition>
  </drilldown>
</table>

But when clicking on this "Separate Platform" column for that row, I get the epoch time 1752521607, which epochconverter.com resolves to 19:33:27 UTC, i.e. 16:33:27 at my GMT-03 offset rather than 16:33:27 UTC. The same issue happens for a coworker located at GMT-04: for the same Splunk timestamp, he clicks on the column to generate the link, and the epoch time that Splunk returns is in fact 4 hours ahead (in his case, the epoch equivalent of 8:33:27 PM).

What am I missing?

Thanks in advance,
Pedro
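That offset pattern suggests strptime is interpreting the wall-clock string in each viewer's local time zone, which would explain why two users in different zones get results shifted by exactly their UTC offsets. One hedged workaround is to make the zone explicit in the string before parsing, by appending +0000 and adding %z to the format. A sketch of that drilldown eval, untested and using the same names as above:

<drilldown>
  <condition field="Separate Platform">
    <eval token="epochFromCycle">case($row.StartTime$=="unkown", null(), 1==1, strptime($row.StartTime$."+0000", "%Y-%m-%dT%H:%M:%S.%Q%z"))</eval>
    <link target="_blank">
      <![CDATA[ $epochFromCycle$ ]]>
    </link>
  </condition>
</drilldown>

With the offset pinned to +0000, the same StartTime string should yield the same epoch for every viewer regardless of their browser or user time-zone setting.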
Currently I have set up Splunk Stream, but there is a situation where I want to disable some data sources for certain protocols because they consume license. Is this possible? In my case I want to disable the stream:udp sourcetype. When I investigate the data, it still comes in from the source stream:ES_UDP_RAW.
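The cleanest option is usually to disable or filter the UDP stream in the Stream app itself (Configure Streams), so the traffic is never captured. If that doesn't cover the ES_UDP_RAW source, a fallback sketch is to drop the events at parse time before they are indexed, which also keeps them off the license; this assumes the Stream data passes through a parsing tier where index-time transforms apply, which is worth verifying for HEC-delivered Stream traffic:

# props.conf on the parsing tier (indexers or heavy forwarder)
[stream:udp]
TRANSFORMS-drop_udp = drop_stream_udp

# transforms.conf
[drop_stream_udp]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

Dropping at capture time is still preferable where possible, since nullQueue filtering spends CPU and network on data you never keep.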
I would greatly appreciate support for customer model as a correlation search option in the VT4splunk app.