
Hi @Mirza_Jaffar1, maybe you did not enable receiving on the indexers? In inputs.conf on each indexer:

[splunktcp://9997]
Have you also added this to outputs.conf?

[indexAndForward]
index = false

https://help.splunk.com/en/splunk-enterprise/administer/distributed-search/9.4/deploy-distributed-search/best-practice-forward-search-head-data-to-the-indexer-layer
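Putting both sides together, a minimal sketch looks like the following (hostnames and ports are placeholders; adjust to your environment):

```
# On each indexer: inputs.conf — listen for forwarded data on 9997
[splunktcp://9997]
disabled = 0

# On the search head: outputs.conf — forward events, but don't index them locally
[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```

After changing these, restart Splunk on the affected instances so the settings take effect.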
Yes, the local directory was copied from another instance. I tried to sync the directory from the old indexer instance to the new one. There seem to be some permission issues during the migration.
Configuring Internal Log Forwarding

1 - Topology: 1 SH, 2 IDX, 2 IF and 4 UF, 1 MC
2 - I can see only the indexers' internal logs, even though I have configured things correctly and updated the server list under the [tcpout:primary_indexers] stanza in outputs.conf
3 - What could be the issue with this simple setup, such that I'm not able to see the internal logs of the SH, IDX, MC and IF?

Base config outputs.conf:

# BASE SETTINGS
[tcpout]
defaultGroup = primary_indexers

# When indexing a large continuous file that grows very large, a universal
# or light forwarder may become "stuck" on one indexer, trying to reach
# EOF before being able to switch to another indexer. The symptoms of this
# are congestion on *one* indexer in the pool while others seem idle, and
# possibly uneven loading of the disk usage for the target index.
# In this instance, forceTimebasedAutoLB can help!
# ** Do not enable if you have events > 64kB **
# Use with caution, can cause broken events
#forceTimebasedAutoLB = true

# Correct an issue with the default outputs.conf for the Universal Forwarder
# or the SplunkLightForwarder app; these don't forward _internal events.
# 3/6/21 only required for versions prior to current supported forwarders.
# Check forwardedindex.2.whitelist in system/default config to verify
#forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup|_configtracker|_dsclient|_dsphonehome|_dsappevent)

[tcpout:primary_indexers]
server = server_one:9997, server_two:9997

# If you do not have two (or more) indexers, you must use the single stanza
# configuration, which looks like this:
#[tcpout-server://<ipaddress_or_servername>:<port>]
# <attribute1> = <val1>

# If setting compressed=true, this must also be set on the indexer.
# compressed = true

# INDEXER DISCOVERY (ASK THE CLUSTER MANAGER WHERE THE INDEXERS ARE)
# This particular setting identifies the tag to use for talking to the
# specific cluster manager, like the "primary_indexers" group tag here.
# indexerDiscovery = clustered_indexers
# It's OK to have a tcpout group like the one above *with* a server list;
# these will act as a seed until communication with the manager can be
# established, so it's a good idea to have at least a couple of indexers
# listed in the tcpout group above.
# [indexer_discovery:clustered_indexers]
# pass4SymmKey = <MUST_MATCH_MANAGER>
# This must include protocol and port like the example below.
# manager_uri = https://manager.example.com:8089

# SSL SETTINGS
# sslCertPath = $SPLUNK_HOME/etc/auth/server.pem
# sslRootCAPath = $SPLUNK_HOME/etc/auth/ca.pem
# sslPassword = password
# sslVerifyServerCert = true

# COMMON NAME CHECKING - NEED ONE STANZA PER INDEXER
# The same certificate can be used across all of them, but the configuration
# here requires these settings to be per-indexer, so the same block of
# configuration would have to be repeated for each.
# [tcpout-server://10.1.12.112:9997]
# sslCertPath = $SPLUNK_HOME/etc/certs/myServerCertificate.pem
# sslRootCAPath = $SPLUNK_HOME/etc/certs/myCAPublicCertificate.pem
# sslPassword = server_privkey_password
# sslVerifyServerCert = true
# sslCommonNameToCheck = servername
# sslAltNameToCheck = servername

Thanks for your time!
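To narrow down which instances are actually delivering their internal logs, a quick check from the search head (a sketch; adjust the time range as needed) is:

```
index=_internal earliest=-15m
| stats count by host
```

Every SH, IDX, IF, UF and MC host should appear in the results; any host missing from the list is not forwarding its _internal events, which points at that host's outputs.conf or at the receiving port on the indexers.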
I am at my wits end trying to figure this out. I have Splunk Secure Gateway deployed and I'm successfully receiving push alerts via the "Send to Splunk Mobile" alert trigger action. This trigger action has the option to set a visualization, which I have picked, along with a "Token name" and "Result Fieldname" to pre-populate the dashboard visualization based on the alert that has just run. This is the piece I cannot seem to get working. I'm able to dynamically set the alert title in the mobile app by using $result.user$ (user is the field in the Alert search that I'm interested in). I cannot seem to get that value into my dashboard, however. The visualization shows up inline with the search but it is not populated with data. I'm setting: Token Name: netidToken Result Fieldname: $result.user$ The dashboard that I'm linking to has an input with a token called "netidToken". This functionality works when calling it via URL, but it passes nothing to the dashboard in the mobile app, so clicking the "View dashboard" button on the alert just opens an empty dashboard. The Splunk documentation around this is woefully incomplete and never really explains the specifics of using these settings. Any insight would be appreciated!
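For reference, the target dashboard needs an input whose token name matches the alert action's "Token name" exactly. A minimal Simple XML sketch (the index, fields, and label here are illustrative, not from the actual dashboard):

```
<form>
  <fieldset submitButton="false">
    <input type="text" token="netidToken">
      <label>NetID</label>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=auth_logs user=$netidToken$ | stats count by action</query>
        </search>
      </table>
    </panel>
  </row>
</form>
```

If this works when the token is passed via URL but not from the mobile app, the dashboard side is presumably fine and the problem sits in how the alert action populates the token.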
After some reading: https://help.splunk.com/en/splunk-enterprise/administer/manage-users-and-security/9.4/manage-splunk-platform-users-and-roles/define-roles-on-the-splunk-platform-with-capabilities Followed by experimenting and testing...

The current platform versions provide a capability called edit_upload_and_index, which is defined as "Lets the user use the indexing preview feature when creating inputs in Splunk Web". This sounds highly promising; however, granting that capability alone does not enable the Add Data button in the Settings menu. In order to present the option to a standard user, the edit_tcp_stream capability is also required. This is not immediately obvious, because the name of the capability masks the documented definition: "Lets the user send data to the /services/receivers/stream REST endpoint."

I suspect that this second permission has the side effect of granting access to the relevant REST API, which allows the /manager/<app>/adddata button to be added to the Settings menu. These two permissions allow the Upload Data option to function, and whilst the option is also presented for other monitor types, the UI throws a (partial) 404 and prevents the user from adding anything more exotic. It is not clear to me if this is an expected combination of permissions Splunk intends you to grant (in which case, I will submit a documentation update suggestion) or if the edit_upload_and_index capability is intended to facilitate the outcome on its own.

TL;DR: To enable a non-admin user to upload files via the UI (in at least Splunk versions greater than 9.3), grant the edit_upload_and_index AND edit_tcp_stream capabilities to the user's role.
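On a self-managed deployment, the same grant can be expressed in authorize.conf; a sketch (the role name is an example, and on Splunk Cloud you would grant the same two capabilities through the Roles UI instead):

```
# authorize.conf — hypothetical role for UI file uploads
[role_file_uploader]
importRoles = user
edit_upload_and_index = enabled
edit_tcp_stream = enabled
```

Users assigned this role inherit the standard user role plus the two capabilities identified above.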
I want to provide a standard Splunk user the ability to upload files via the web UI. Specifically, so that members of our finance team can upload supplier bills for reconciliation with our platform data. In this scenario granting full sc_admin is certainly not appropriate! I had (incorrectly) assumed that Power Users had this ability, but that is not the case. There is an article from 2014 that details what was required 11 years ago, but the cited permissions in that article are no longer relevant in 2025: https://community.splunk.com/t5/Getting-Data-In/Capability-to-upload-data-files-via-the-gui-for-a-user/m-p/190518 What is required in Splunk >9.3 (specifically Splunk Cloud) to enable this feature for a non-admin user?
Hi, Splunk and ITSI licenses must be the same type, like prod, dev or test. So I assume that, while both are NFR, they are the same type. Have you installed the "ITSI license checker" from the ITSI package on your license server?
What do you mean by that?
1. This is a third-party app, so it's the creators who are able to add anything to its code.
2. As far as I can see, the app provides some custom search commands. What does it have to do (or what should it have to do) with data models?
3. What does it all have to do with correlation searches? You can use the app-provided commands in correlation searches. What more do you expect?
you're right... It works perfectly... Thank you so much, @PickleRick !
You might simply filter out UDP at the OS level, so that you don't "filter out" the events but simply don't generate them, because you never see this traffic. But ask yourself if that is what you want. Since you're capturing network data, it seems you want network visibility, yet now you deliberately want to lose some of that visibility...
With strptime, Splunk always uses the timezone of the user calling the function, unless the time string being parsed contains timezone information and the time format reads it. So you could just set a static GMT timezone spec and parse from there. But since you're parsing this from a row of search results, why do the strftime/strptime round trip at all? Just use the epoch timestamp returned by the search.
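For example, assuming StartTime is derived from _time in the base search (the field names here are illustrative), you can carry the epoch value alongside the display string:

```
... | eval StartTimeEpoch = _time
    | eval StartTime = strftime(_time, "%Y-%m-%dT%H:%M:%S.%3N")
```

Then reference $row.StartTimeEpoch$ directly in the drilldown <link>, with no strptime call and therefore no timezone dependence.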
Hi everyone. I'm trying to link my dashboard to a separate platform, and the URL of this new platform needs to contain a timestamp in epoch time. I have a table where each row represents a cycle, and a column that redirects the user to the separate platform, passing into the URL the epoch time of that row's timestamp. The issue is that, for some reason, Splunk seems to be converting the timestamp to epoch plus my timezone offset. For example, on the screenshot below you can see the timestamp of a certain row in UTC as 16:33:27.967, and, to debug, I built a new column such that whenever I click on it, it redirects me to a URL that's simply the timestamp converted to epoch time. The code is of the form:

<table>
  <search>
    <query> ... </query>
  </search>
  <drilldown>
    <condition field="Separate Platform">
      <eval token="epochFromCycle">case($row.StartTime$=="unkown", null(), 1==1, strptime($row.StartTime$, "%Y-%m-%dT%H:%M:%S.%Q"))</eval>
      <link target="_blank">
        <![CDATA[ $epochFromCycle$ ]]>
      </link>
    </condition>
  </drilldown>
</table>

But when clicking on this "Separate Platform" column for the timestamp shown on the screenshot, I get the epoch time 1752521607. When looking into "epochconverter.com": As stated on the screenshot, I'm at GMT-03. But the issue happens exactly the same way for a coworker who's located at GMT-04: for the same Splunk timestamp, he clicks on the column to generate the link, and the epoch time that Splunk returns is in fact 4 hours ahead (in this case, it returns the epoch equivalent of 8:33:27 PM). What am I missing?

Thanks in advance,
Pedro
Currently I have set up Splunk Stream, but there is a case where I want to disable some data sources for certain protocols because they consume license. Is this possible? My case: I want to disable the stream:udp sourcetype. When I investigate the data, it still comes from the source stream:ES_UDP_RAW.
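If disabling the UDP stream in the Splunk Stream configuration UI isn't sufficient, one common index-time fallback (a sketch, applied on the indexers or a heavy forwarder; stanza names are examples) is to route the sourcetype to nullQueue, which discards the events at parse time so they don't count against license:

```
# props.conf
[stream:udp]
TRANSFORMS-drop_udp = drop_stream_udp

# transforms.conf
[drop_stream_udp]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
```

Note that this throws the data away entirely; stopping the capture at the source, as suggested above, avoids generating it in the first place.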
I would greatly appreciate support for customer model as a correlation search option in the VT4splunk app.
It's better to create a new question instead of using an old closed thread, to get more comments.
Sounds great @LucioKub, how exactly can I reduce or control refreshes? Sounds really interesting. I always wondered if it is possible to detect with JavaScript whether the page is inactive and then close the tab, because I see pages with plenty of panels that refresh frequently, and dashboards that stay up and running for days and days. How can we avoid that?
@burwell Thanks for sharing the info. Seems you are handling very big infra.
Oops, yes, I meant to share this one. It's a native OTel receiver (as opposed to a smartagent receiver) https://help.splunk.com/en/splunk-observability-cloud/manage-data/available-data-sources/supported-integrations-in-splunk-observability-cloud/opentelemetry-receivers/rabbitmq-receiver
Check web.conf settings. Make sure the following keys are present and correct:

[settings]
enableWebSSL = true
privKeyPath = $SPLUNK_HOME/etc/auth/mycerts/your_private.key
serverCert = $SPLUNK_HOME/etc/auth/mycerts/your_certificate.pem

If you're not using custom certificates, Splunk will default to:

privKeyPath = $SPLUNK_HOME/etc/auth/splunkweb/privkey.pem
serverCert = $SPLUNK_HOME/etc/auth/splunkweb/cert.pem

Also confirm that ES is not overwriting web.conf. You can use the command below to find any copies (modify it based on your app setup):

find $SPLUNK_HOME/etc/apps/SplunkEnterpriseSecuritySuite -name web.conf

If you find any config, check for a stanza like:

[settings]
enableWebSSL = false

Use the btool command below to validate the effective settings:

$SPLUNK_HOME/bin/splunk btool web list --debug

Once you have made your changes, restart the Splunk services.

Quick overview:
1. enableWebSSL = true in system/local/web.conf
2. privKeyPath and serverCert files should exist and be readable
3. web.conf shouldn't be overridden by the ES app
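A useful follow-up check is that the private key actually matches the certificate: a cert and its RSA key share the same modulus, so comparing modulus hashes reveals a mismatch. The sketch below generates a throwaway self-signed pair purely for demonstration; substitute your Splunk cert and key paths in the two comparison commands.

```shell
# Create a throwaway self-signed cert/key pair for demonstration only.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
  -keyout /tmp/demo_key.pem -out /tmp/demo_cert.pem 2>/dev/null

# Compare the modulus hashes of the certificate and the private key.
cert_mod=$(openssl x509 -noout -modulus -in /tmp/demo_cert.pem | openssl md5)
key_mod=$(openssl rsa -noout -modulus -in /tmp/demo_key.pem | openssl md5)

if [ "$cert_mod" = "$key_mod" ]; then
  echo "cert and key match"
else
  echo "MISMATCH"
fi
```

If the hashes differ for your real files, Splunk Web will fail to start its SSL listener even when the paths in web.conf are correct.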