Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

I'm having an issue trying to set up an Audit Input with the server I created connecting my Splunk SOAR and Splunk Enterprise. The server is set up correctly with the authentication key, and when I test the connection it's good, but for some reason when I set the interval to 60 I just get "No session key received" errors coming from the phantom_retry.py script. I'm not sure where I'm supposed to update a key, or whether I'm supposed to edit a certain script when I made the server, but I could use some assistance. Thanks!
[monitor://\\njros1bva0597\d$\LogFiles\warcraft-9.0.71\logs\*]
disabled = false
host = NJROS1BVA0621
alwaysOpenFile = 1
sourcetype = Image Importer Logs

Is there a way to add a wildcard for any upcoming version updates, like below? Will this work?

[monitor://\\njros1bva0597\d$\LogFiles\warcraft-9.*\logs\*]

Or does it have to be like this?

[monitor://\\njros1bva0597\d$\LogFiles\warcraft-9.[0-9].[0-9][0-9]\logs\*]
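For reference, a hedged sketch of the two wildcard styles that monitor paths in inputs.conf support, as I understand them (* matches within a single path element, ... recurses across directories); the paths simply reuse the ones above:

[monitor://\\njros1bva0597\d$\LogFiles\warcraft-9.*\logs\*]
disabled = false
host = NJROS1BVA0621
sourcetype = Image Importer Logs

# or, to cover any future directory layout under LogFiles at any depth:
[monitor://\\njros1bva0597\d$\LogFiles\...\logs\*]
disabled = false
host = NJROS1BVA0621
sourcetype = Image Importer Logs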
Hi all,

When creating a systemd unit file for an old UF (<9.1) using "splunk enable boot-start -systemd-managed 1 -user ...", a systemd file is created with this content:

[Service]
ExecStartPost=/bin/bash -c "chown -R splunkfwd:splunkfwd /sys/fs/cgroup/cpu/system.slice/%n"
ExecStartPost=/bin/bash -c "chown -R splunkfwd:splunkfwd /sys/fs/cgroup/memory/system.slice/%n"

This is also documented here, under "Reference unit file template": https://help.splunk.com/en/splunk-enterprise/forward-and-process-data/universal-forwarder-manual/9.4/working-with-the-universal-forwarder/manage-a-linux-least-privileged-user

Does anyone have an idea why this is done? The paths use cgroup v1, which only exists on old Linux systems; on up-to-date systems this chown fails, but the service starts anyway. When creating a systemd config with recent UFs, these ExecStartPost parameters are not set anymore. BUT when installing Splunk Enterprise, this line is set in the systemd unit:

ExecStartPost=-/bin/bash -c "chown -R splunk:splunk /sys/fs/cgroup/system.slice/%n"

AFAIK Splunk core uses cgroups for Workload Management, but the UF does not. Is the reference unit file template for the UF just old and wrong, and the settings never made sense, or is there a good reason for them?

Thanks for your help and best regards,
Andreas
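If the goal is only to silence the failing chown on a cgroup-v2-only host, one option (a sketch, not an official recommendation, and it assumes the default unit name SplunkForwarder.service) is a systemd drop-in that resets the list; an empty ExecStartPost= clears the entries inherited from the unit file:

# systemctl edit SplunkForwarder.service
[Service]
ExecStartPost=

followed by a restart of the service.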
I am standing up a Linux server to host Splunk Enterprise 9.4.3. I have 30+ Windows hosts. Can I upload the Splunk Add-on for Microsoft Windows and use it to configure the Windows hosts even though the server is running on a Linux host?

Thank you
Hi everyone! Quick question. I would like to know how I can send data to an index using a Python script. We need to ingest some data without using a forwarder, and I would like to use a script for that. Has anyone done this already? Ty! Regards.
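A minimal sketch of one common approach, the HTTP Event Collector (HEC); it assumes HEC is enabled on the receiving instance, and the URL, token, index, and sourcetype below are placeholders:

# Send one event to a Splunk index over HEC using only the standard library.
import json
import urllib.request

HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # placeholder
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                    # placeholder

event = {
    "index": "my_index",           # placeholder index
    "sourcetype": "my:script",     # placeholder sourcetype
    "event": {"message": "hello from python", "status": "ok"},
}

req = urllib.request.Request(
    HEC_URL,
    data=json.dumps(event).encode("utf-8"),
    headers={
        "Authorization": f"Splunk {HEC_TOKEN}",
        "Content-Type": "application/json",
    },
)
# Note: a self-signed certificate will need an ssl.SSLContext with verification relaxed or the CA added.
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())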
I'm cloning the event, and before cloning I extract the sourcetype to use later.

transforms.conf:

[copy_original_sourcetype]
SOURCE_KEY = MetaData:Sourcetype
REGEX = (.+)
FORMAT = orig_sourcetype::$1
WRITE_META = true

[clone_for_thirdparty]
SOURCE_KEY = _MetaData:Index
REGEX = ^test_np$
DEST_KEY = MetaData:Sourcetype
CLONE_SOURCETYPE = data_to_thirdparty
WRITE_META = true

[sourcetype_raw_updated]
SOURCE_KEY = MetaData:orig_sourcetype
REGEX = ^orig_sourcetype::(.*)$
FORMAT = $1##$0
DEST_KEY = _raw

But when I try to retrieve the extracted original value, I get nothing. Is there any way to persist the original sourcetype? @PickleRick @isoutamo @gcusello
Hello, two of our Splunk apps, "Splunk Add-on for Microsoft Cloud Services" and "Splunk Add-on for Office 365", are no longer collecting data. It looks like they stopped working June 30. I checked the client secret in the Azure App Registrations panel and it had not expired. I went ahead and created a new key anyway and updated the two Splunk app configurations with the new key, but they still aren't collecting any data. I checked index="_internal" log_level=ERROR but didn't really see anything that stood out as specific to these apps. Any suggestions on settings I can check, other logs to examine, etc.?
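A hedged sketch of a narrower internal-log search; the source wildcards are guesses at how these add-ons usually name their log files under $SPLUNK_HOME/var/log/splunk, so adjust them to whatever files actually exist there:

index=_internal (source=*splunk_ta_o365* OR source=*splunk_ta_microsoft*cloudservices* OR source=*mscs*)
| stats count BY source, log_level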
As we prepare to transition our Splunk deployment to the cloud, we are aiming to estimate the Splunk Virtual Compute (SVC) units that may be consumed during typical usage. Specifically, we are interested in understanding how best to calculate on-prem SVC usage using data available from the _audit index, or any other recommended sources. Our primary focus is on dashboard refreshes, as they represent a significant portion of our ongoing search activity. We're looking for guidance on any methodologies, SPL queries, or best practices that can help us approximate SVC consumption in our current environment to better forecast usage and cost implications post-migration.
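A starting-point sketch rather than an SVC formula: _audit records a total_run_time for completed searches, so aggregating runtime by user and search gives at least a relative picture of where compute goes (field names assume a reasonably recent Splunk version):

index=_audit action=search info=completed
| stats count AS searches, sum(total_run_time) AS total_runtime_s, avg(total_run_time) AS avg_runtime_s BY user, savedsearch_name
| sort - total_runtime_s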
Hi community,

I've been pulling my hair out for quite some time over field extraction using the Splunk_TA_nix app. One thing that has been annoying me is the absence of a field which contains the full command executed. Here is my question/comment, on which I'd like some feedback. While trying to figure out why I am not seeing the expected/desired content, I noticed something.

Splunk_TA_nix/default/props.conf:

[linux_audit]
REPORT-command = command_for_linux_audit

Splunk_TA_nix/default/transforms.conf:

[command_for_linux_audit]
REGEX = exe=.*\/(\S+)\"
FORMAT = command::$1

This regex only applies to the "type=SYSCALL" audit log entry, which is the only one containing "exe=", and it does not work in our environment. There is no trailing quotation mark in our log, so this field is not properly extracted with this regex. To work as intended it would need to be changed to:

[command_for_linux_audit]
REGEX = exe=.*\/(\S+)
FORMAT = command::$1

This would generate a field called "command" containing the executed command (binary) only. Is this just in our environment, where we have a makeshift solution to generate a second audit log file for collection, or is this a general issue?

And the rant: it seems that if not defined elsewhere, the default field separator is space. This means that most <field>=<value> entries in the audit log are extracted. The sourcetype=linux_audit type=PROCTITLE events actually have a field called "proctitle" which contains the full command executed. While a field called "proctitle" is extracted, the value of this field is cut short after the first space, meaning only the command (binary) is available. Assuming this is expected behaviour, I suppose I have to define a field extraction overriding the "default" behaviour to get a field "proctitle" with the desired content.
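A sketch of what such an override might look like, placed in a local app rather than in Splunk_TA_nix/default; the stanza and field names are illustrative, and it assumes the PROCTITLE events carry the command in plain text (not hex-encoded), as described above:

local props.conf:

[linux_audit]
REPORT-proctitle_full = proctitle_full_for_linux_audit

local transforms.conf:

[proctitle_full_for_linux_audit]
REGEX = proctitle=(.+)$
FORMAT = proctitle_full::$1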
Could you please advise: Is there any Splunk Cloud security policy or best-practice guidance on onboarding external data sources when the integration requires admin-level permissions at the source? Does Splunk recommend or require any formal risk review or CCSA-like process for such cases? Do you have any documentation or recommendations to share with us to justify this elevated access for log collection? Are there any alternatives or Splunk add-ons/plugins that could achieve the same without needing admin-level permissions?
Hi Experts,

Scenario: I have DB agents installed on standalone VMs for a group of DB servers, and they get connected via the DB agent VM. In the event notification, the actual DB server name comes in the following format:

"dbmon:11432|host:xxxxxxxagl.xxxxxx.com|port:1433"

Is there any way I can customize this using AppD placeholders in the JSON payload? I tried "${event.db.name}" and "${event.node.name}", but it's not working. I appreciate your inputs.

Thanks,
Raj
Subject: TruSTAR API: Data Retention Policy Inquiry

Dear Splunk Community,

We are currently utilizing the search_indicators API, as documented here: https://docs.trustar.co/api/v13/indicators/search_indicators.html. While we understand that the API supports a maximum time range of 1 year per query, we require clarification on the overall data retention policy for indicators. What is the total historical period for which indicator data is stored and retrievable via this API, regardless of the single-query window limit? Your insight into this would be greatly appreciated for our data strategy.

TruSTAR
Configuring Internal Log Forwarding

1- Environment: 1 SH, 2 indexers, 2 intermediate forwarders, 4 UFs, and 1 MC.
2- I can see only the indexers' internal logs, although I have correctly updated the server list under the [tcpout:primary_indexers] stanza in outputs.conf.
3- What could be the issue with this simple setup not being able to see the internal logs of the SH, indexers, MC, and intermediate forwarders?

Base config outputs.conf:

# BASE SETTINGS
[tcpout]
defaultGroup = primary_indexers

# When indexing a large continuous file that grows very large, a universal
# or light forwarder may become "stuck" on one indexer, trying to reach
# EOF before being able to switch to another indexer. The symptoms of this
# are congestion on *one* indexer in the pool while others seem idle, and
# possibly uneven loading of the disk usage for the target index.
# In this instance, forceTimebasedAutoLB can help!
# ** Do not enable if you have events > 64kB **
# Use with caution, can cause broken events
#forceTimebasedAutoLB = true

# Correct an issue with the default outputs.conf for the Universal Forwarder
# or the SplunkLightForwarder app; these don't forward _internal events.
# 3/6/21 only required for versions prior to current supported forwarders.
# Check forwardedindex.2.whitelist in system/default config to verify
#forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup|_configtracker|_dsclient|_dsphonehome|_dsappevent)

[tcpout:primary_indexers]
server = server_one:9997, server_two:9997

# If you do not have two (or more) indexers, you must use the single stanza
# configuration, which looks like this:
#[tcpout-server://<ipaddress_or_servername>:<port>]
# <attribute1> = <val1>

# If setting compressed=true, this must also be set on the indexer.
# compressed = true

# INDEXER DISCOVERY (ASK THE CLUSTER MANAGER WHERE THE INDEXERS ARE)
# This particular setting identifies the tag to use for talking to the
# specific cluster manager, like the "primary_indexers" group tag here.
# indexerDiscovery = clustered_indexers
# It's OK to have a tcpout group like the one above *with* a server list;
# these will act as a seed until communication with the manager can be
# established, so it's a good idea to have at least a couple of indexers
# listed in the tcpout group above.
# [indexer_discovery:clustered_indexers]
# pass4SymmKey = <MUST_MATCH_MANAGER>
# This must include protocol and port like the example below.
# manager_uri = https://manager.example.com:8089

# SSL SETTINGS
# sslCertPath = $SPLUNK_HOME/etc/auth/server.pem
# sslRootCAPath = $SPLUNK_HOME/etc/auth/ca.pem
# sslPassword = password
# sslVerifyServerCert = true

# COMMON NAME CHECKING - NEED ONE STANZA PER INDEXER
# The same certificate can be used across all of them, but the configuration
# here requires these settings to be per-indexer, so the same block of
# configuration would have to be repeated for each.
# [tcpout-server://10.1.12.112:9997]
# sslCertPath = $SPLUNK_HOME/etc/certs/myServerCertificate.pem
# sslRootCAPath = $SPLUNK_HOME/etc/certs/myCAPublicCertificate.pem
# sslPassword = server_privkey_password
# sslVerifyServerCert = true
# sslCommonNameToCheck = servername
# sslAltNameToCheck = servername

Thanks for your time!
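A quick sketch for checking which instances are actually delivering their internal logs (run on the search head or MC; the assumption is that every instance forwarding correctly should appear in the output with a recent last_event):

index=_internal earliest=-60m
| stats count, latest(_time) AS last_event BY host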
I am at my wits' end trying to figure this out. I have Splunk Secure Gateway deployed and I'm successfully receiving push alerts via the "Send to Splunk Mobile" alert trigger action. This trigger action has the option to set a visualization, which I have picked, along with a "Token name" and "Result Fieldname" to pre-populate the dashboard visualization based on the alert that has just run. This is the piece I cannot seem to get working. I'm able to dynamically set the alert title in the mobile app by using $result.user$ (user is the field in the alert search that I'm interested in). I cannot seem to get that value into my dashboard, however. The visualization shows up inline with the search, but it is not populated with data. I'm setting:

Token Name: netidToken
Result Fieldname: $result.user$

The dashboard that I'm linking to has an input with a token called "netidToken". This functionality works when calling it via URL, but nothing is passed to the dashboard in the mobile app, so clicking the "View dashboard" button on the alert just opens an empty dashboard. The Splunk documentation around this is woefully incomplete and never really explains the specifics of using these settings. Any insight would be appreciated!
I want to give a standard Splunk user the ability to upload files via the web UI, specifically so that members of our finance team can upload supplier bills for reconciliation with our platform data. In this scenario, granting full sc_admin is certainly not appropriate! I had (incorrectly) assumed that Power Users had this ability, but that is not the case. There is an article from 2014 that details what was required 11 years ago, but the cited permissions in that article are no longer relevant in 2025: https://community.splunk.com/t5/Getting-Data-In/Capability-to-upload-data-files-via-the-gui-for-a-user/m-p/190518

What is required in Splunk >9.3 (specifically Splunk Cloud) to enable this feature for a non-admin user?
Hi everyone. I'm trying to link my dashboard to a separate platform, and the URL of this platform needs to contain a timestamp in epoch time. I have a table in which each row represents a cycle, and I have a column that redirects the user to the separate platform, passing into the URL the epoch time of that row's timestamp. The issue is that, for some reason, Splunk seems to be converting the timestamp to epoch + my timezone. So, for example, on the screenshot below, you can see the timestamp of a certain row in UTC as 16:33:27.967, and, to debug, I built a new column such that whenever I click on it, it redirects me to a URL that is simply the timestamp converted to epoch time. The code is of the form:

<table>
  <search>
    <query> ... </query>
  </search>
  <drilldown>
    <condition field="Separate Platform">
      <eval token="epochFromCycle">case($row.StartTime$=="unkown", null(), 1==1, strptime($row.StartTime$, "%Y-%m-%dT%H:%M:%S.%Q"))</eval>
      <link target="_blank">
        <![CDATA[
          $epochFromCycle$
        ]]>
      </link>
    </condition>
  </drilldown>
</table>

But when clicking on this "Separate Platform" column for the timestamp shown on the screenshot, I get the epoch time 1752521607 (checked against epochconverter.com). As stated on the screenshot, I'm at GMT-03. The issue happens exactly the same way for a coworker who is located at GMT-04: for the same Splunk timestamp, he clicks on the column to generate the link, and the epoch time that Splunk returns is in fact 4 hours ahead (in this case, it returns the epoch equivalent of 8:33:27 PM). What am I missing?

Thanks in advance,
Pedro
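A sketch of one possible workaround, assuming StartTime really is UTC: append an explicit zero offset and parse it with %z so strptime does not fall back to the local/user timezone. The quoting of the token and the %z handling in this context are assumptions to verify in your environment:

<eval token="epochFromCycle">case("$row.StartTime$"=="unkown", null(), 1==1, strptime("$row.StartTime$"."+0000", "%Y-%m-%dT%H:%M:%S.%Q%z"))</eval>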
Currently I have Splunk Stream set up, but there is a case where I want to disable some data sources for certain protocols because they consume license. Is this possible? My case is that I want to disable the stream:udp sourcetype. When I investigate the data, it still comes in from the source stream:ES_UDP_RAW.
I would greatly appreciate support for customer model as a correlation search option in the VT4splunk app.
I am running a REST API call, basically curl, to query Splunk for results and export them to the server. Below is my API query. My Splunk query is very big and the results are also rather large. The query runs fine but I don't see any results.

#!/bin/bash
search_query=$(cat <<'EOF'
search index=my long splunk query
EOF
)

echo "Running Splunk search..."
curl --http1.1 -k -u admin:password \
  "https://<splunk uri>:8089/services/search/jobs/export" \
  --data-urlencode "search=$search_query" \
  -d output_mode=csv \
  -d earliest_time='-24d@d' \
  -d latest_time='@d' \
  -o output-file.csv
echo "Done. Results in output-file.csv"

This call returns the following, with an empty output-file.csv:

curl: (18) transfer closed with outstanding read data remaining

It looks like it is not able to run such a huge query. I tried the curl command with a simple search query and it works. How can I make this work?
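A sketch of one alternative that tends to cope better with long-running searches: create the search job first, then page the finished results, so nothing has to survive a single long-lived /export connection. Host, credentials, page size, and the SPL itself are placeholders, and the empty-page check at the end is an assumption to verify:

#!/bin/bash
BASE="https://<splunk uri>:8089"
AUTH="admin:password"

# Create the job; exec_mode=blocking makes the POST return only once the search has finished.
sid=$(curl -s -k --http1.1 -u "$AUTH" "$BASE/services/search/jobs" \
  --data-urlencode "search=search index=my long splunk query" \
  -d exec_mode=blocking \
  -d earliest_time='-24d@d' \
  -d latest_time='@d' \
  | sed -n 's/.*<sid>\(.*\)<\/sid>.*/\1/p')

# Page the results into one CSV, 50000 rows at a time.
offset=0
count=50000
: > output-file.csv
while :; do
  page=$(curl -s -k -u "$AUTH" \
    "$BASE/services/search/jobs/$sid/results?output_mode=csv&count=$count&offset=$offset")
  [ -z "$page" ] && break                                    # no more rows
  if [ "$offset" -eq 0 ]; then
    printf '%s\n' "$page" >> output-file.csv                 # keep header on first page
  else
    printf '%s\n' "$page" | tail -n +2 >> output-file.csv    # drop repeated header
  fi
  offset=$((offset + count))
done
echo "Done. Results in output-file.csv"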
Hi, I want developers (with GUI access only) to be able to export the apps they have built. I can't find any information about that in the documentation. Is there a way to export apps without command-line access to the server?

Best regards,
M