All Topics


I am in the process of implementing Splunk in a fairly long-lived environment. Log directories contain date-masked log files, and I would like to ignore files from before today's date and only import new files. Example:

/opt/someApplication/logs/someApplication.202412160600.out

I can't simply wildcard /opt/someApplication/logs/someApplication.*.out, as there are logs dating back to 2017 and I'd exceed our daily license/quota by several orders of magnitude. Changing the logging format is not an option. Exclude-lists appear to be a solution, but even using regex they would be incredibly burdensome. Thoughts?
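For reference, the kind of monitor stanza I mean, with the one setting I've been eyeing (index/sourcetype names are placeholders, and my understanding is that ignoreOlderThan keys off file modification time, not the date embedded in the filename, so treat this as a sketch):

[monitor:///opt/someApplication/logs/someApplication.*.out]
index = main
sourcetype = someApplication:out
ignoreOlderThan = 1d
disabled = 0

One caveat I've read about: once a file has been skipped by ignoreOlderThan it stays on the ignore list, even if it is modified later.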
I am new to Splunk and am teaching myself how to use it as I integrate it with my environment. I inherited an existing Splunk Enterprise instance that, at one point, apparently used to work to some degree but had fallen into disuse by the time I joined the team and took over. After upgrading it from 9.0 to 9.3.2, rolling out Universal Forwarders, tinkering with inputs.conf, and fixing some network issues, I finally got Windows Event Log data into my indexer from a couple of different test machines.

The inputs.conf I was using was something I had found on one of the existing machines before reinstalling the UF, and I noticed it had a lot more in it than Windows Event Log stanzas. Some of it suggested it monitored things I was interested in right now, such as CPU utilization. However, nothing outside of Windows Event data was ever making it across the wire, no matter how I reconfigured the inputs.conf stanzas.

The one I homed in on first was CPU utilization, and through research I discovered that a stanza I invoke in inputs.conf has to exist to some degree within the Settings > Data Inputs library on my Splunk instance. perfmon://CPU, perfmon://CPULoad, and perfmon://Processor were all stanzas I found online for (among other things) checking what % CPU utilization a target server was at. None of them worked. Looking into these Data Inputs, it looks like something is broken: when I select these three (as an example), Splunk's web UI throws an error saying that "Processor is not a valid object".

Following some guidance online, I was able to make my own custom Data Input called testCPU, pointing at a custom index I call testWindows, and basically make it a clone of CPU (taking in % Processor Time and % User Time as counters and whatnot). For the required object, I noticed that "Processor Information" was an option I could pick rather than "Processor", so I went with that one. I then deployed a stanza in inputs.conf that says perfmon://testCPU on one of my UFs, and it absolutely works. My indexer is now pulling in CPU % use information. I suspect that if I went back to the three CPU-related entries above and set them to "Processor Information", they would work, and any of the existing apps I inherited that invoke those stanzas would start pulling in data through them.

However, I do not know why my built-in Data Inputs are broken, and it isn't limited to the CPU ones I used as an example above. The "System" input claims "System is not a valid object", and the available-objects dropdown has no obvious replacement (there's no "System Information" to follow the pattern above). The "PhysicalDisk" input claims "PhysicalDisk is not a valid object" but has nothing obvious to replace it either. Available Memory claims "Memory" is not a valid object with no obvious replacement, etc.

Does anyone know what might be going on here? Looking at how the stanzas are configured online, the examples for the handful I have looked into do in fact invoke object = "xxx" matching the names of things my Splunk says aren't valid. Some of these might have obvious replacements ("Network" might be "Physical Network Card Activity" or something like that) but a lot of them don't. How should I go about fixing these? My first assumption was that I would find some kind of "Objects" config file that might have clues to how these got redefined, but that wasn't the case.
I have a ticket in with support, but I am broadening the scope here to see if anyone else has familiarity with something like this (and also to create something for another user with the same issue to find in the future).
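For reference, the working input I ended up with looks roughly like this (counter list, interval, and index are my own choices):

[perfmon://testCPU]
object = Processor Information
counters = % Processor Time; % User Time
instances = *
interval = 60
index = testWindows
disabled = 0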
I am developing a Splunk setup using the Splunk docker image and Podman. I am trying to set up 2 indexers along with an indexer manager, with each container running on a separate RHEL VM. I successfully set up the manager. I then register an indexer as a peer, entering the VM host IP of the manager, and the registration succeeds.

When I reboot and check the indexer manager, it shows the indexer peer as up, but it shows the IP address of the manager container for the indexer peer. When I try to add another indexer it does the same thing and will not let me add the second indexer. I have tried statically assigning IPs and confirmed all IPs are different, etc. I wasn't sure if anyone has run into this issue.

All VM hosts are on the same subnet and can communicate. Firewall off and SELinux off. Using 9887 as the replication port and 8089 as the manager comms port. I am running rootless outside and root inside the container. It has to be a permission or file that I am missing, because when I set it up as root:root it works perfectly. Any ideas are appreciated.
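For reference, the peer-side clustering config I'm testing (IPs and key are placeholders). I'm also experimenting with register_replication_address, which as I understand it lets a containerized peer advertise the VM host's IP to the manager instead of the container's internal IP:

server.conf on each indexer container:

[replication_port://9887]

[clustering]
mode = peer
manager_uri = https://<manager-vm-ip>:8089
pass4SymmKey = <shared-key>
register_replication_address = <this-vm-host-ip>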
Hello, I have an issue where I am part of multiple roles on Splunk Enterprise and Splunk Enterprise Security; the same role and SAML group has access to all the indexes. On the Splunk Enterprise SH I am part of 3 roles (A, B, C) which have search filters, and I am also part of role D, which has access to all indexes. But when I try to search any data, I get nothing back. On the Enterprise Security SH, I am able to view all the data as expected. Is it some kind of precedence issue between roles on the Splunk Enterprise SH that is causing this? Please help me. Thanks
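For anyone reproducing this, a minimal sketch of the authorize.conf shape I'm describing (role and index names are placeholders; my understanding from the docs is that srchFilter values from a user's roles are OR'd together, which is part of what I'm trying to confirm):

[role_a]
srchFilter = index=app_a

[role_b]
srchFilter = index=app_b

[role_d]
srchIndexesAllowed = *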
I got an alert working "for each result" by using a query that creates the following table:

errorType      count
Client         10
Credentials    50
Unknown        5

How do I set a different threshold for each result? I tried using a custom trigger as follows and was hoping to only get an email for "Client" and "Credentials", but I still get all 3.

search (errorType = "Client" AND count > 8) OR (errorType = "Credentials" AND count > 8) OR (errorType = "Other" AND count > 8)
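For comparison, a minimal sketch of applying the per-type thresholds inside the search itself instead of the trigger condition, so only matching rows ever reach the alert (threshold values are made up):

... | stats count by errorType
| where (errorType="Client" AND count > 8) OR (errorType="Credentials" AND count > 40)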
Hello, I am experiencing intermittent log ingestion issues on some servers and have observed potential queue saturation in the process. Below are the details of the issue and the related observations.

Setup overview: I am using Packetbeat to capture DNS queries across multiple servers. Packetbeat generates JSON log files, rotating logs into 10 files, each with a maximum size of 50 MB, and produces 3-4 JSON files every minute. Setup: Splunk Cloud 9.2.2, on-prem Heavy Forwarder 9.1.2, and Universal Forwarder 9.1.2.

Example list of Packetbeat log files (rotated by Packetbeat): packetbeat.json, packetbeat.1.json, packetbeat.2.json, packetbeat.3.json, ..., packetbeat.9.json

Issue observed: On some servers, the logs are ingested and monitored consistently by the Splunk agent, functioning as expected. However, on other servers, logs are ingested for a few minutes, followed by a 5-6 minute gap. This cycle repeats, resulting in missing data in between, while other data collected from the same server ingests correctly.

Additional observations: While investigating the issue, I observed the following log entry in the Splunk Universal Forwarder _internal index:

11-15-2024 17:27:35.615 -0600 INFO HealthChangeReporter - feature="Real-time Reader-0" indicator="data_out_rate" previous_color=yellow color=red due_to_threshold_value=2 measured_value=2 reason="The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data."
host = EAA-DC  index = _internal  source = C:\Program Files\SplunkUniversalForwarder\var\log\splunk\health.log  sourcetype = splunkd

The following configuration is applied to all DNS servers:

limits.conf:

[thruput]
maxKBps = 0

server.conf:

[queue]
maxSize = 512MB

inputs.conf:

[monitor://C:\packetbeat.json]
disabled = false
index = dns
sourcetype = packetbeat

Any direction to resolve this is appreciated! Thank you!
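One thing I'm considering but have not yet tested: adding a second ingestion pipeline on the affected UFs so rotation bursts have somewhere to go (sketch; server.conf on the forwarder):

[general]
parallelIngestionPipelines = 2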
Hi Team, in the query below I don't want "Up" to show up in the state_to field; I just want to see the devices that are currently down. It is also not enough to use an exclude operator (state_to!=Up), because that shows all down results, including devices that have since come back up, which is not my aim. Please help and suggest!

My query:

index="network" %BGP-5 *clip* | dedup src_ip | stats count by state_to, Device_name, src_ip

My expected result/aim: show only the devices that are down at the moment, with no Up rows.
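For illustration, a minimal sketch of one way to express "currently down": keep only each device's most recent state transition, then filter on it (field names taken from the query above):

index="network" %BGP-5 *clip*
| stats latest(state_to) as state_to by Device_name, src_ip
| where state_to!="Up"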
In the TA documentation at https://splunk.github.io/splunk-add-on-for-amazon-web-services/S3/ it is stated, "Ensure that all files have a carriage return at the end of each file. Otherwise, the last line of the CSV file will not be indexed." But the CSV standard (https://www.ietf.org/rfc/rfc4180.txt) does not require a CRLF at the end of the last row. Can you please remedy this so that a standards-compliant CSV file without a final CRLF still works and the final row is ingested? Some source systems only output CSV files this way (without a final CRLF).
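As a stopgap, a sketch of normalizing such files before upload (GNU sed; the '$a\' idiom appends a final newline only when one is missing; filename is a placeholder):

sed -i -e '$a\' report.csv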
Hi Splunk Community, I recently upgraded my Splunk environment from version 9.1.1 to the latest version. After the upgrade, I'm encountering errors when trying to restart Splunk across different components. For example, I get the following error:

Invalid key in stanza [default] in /opt/splunk/etc/system/local/indexes.conf, line 132: archiver.enableDataArchive (value: false)

It seems like some configuration keys are no longer valid in the updated version. Has anyone faced similar issues, and how can I resolve these errors to ensure my configurations are compatible with the latest Splunk version? Thanks in advance for your help! Best regards,
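For anyone hunting these down, a sketch of locating every file that still carries the deprecated key (key name taken from the error above):

/opt/splunk/bin/splunk btool indexes list --debug | grep -i archiver

btool's --debug output prints the file each setting comes from, which should point at the stanza to clean up.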
How can we concatenate values from one field and put them in a new field, separated by commas? E.g., if I run a search, I get a number of hosts in the host field. I want to concatenate them all into one field separated by commas.
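For illustration, a minimal sketch using stats and mvjoin (field names assumed from the question):

... | stats values(host) as all_hosts
| eval all_hosts = mvjoin(all_hosts, ",")

stats values() gathers the distinct hosts into a multivalue field, and mvjoin flattens it into a single comma-separated string.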
Hi, our Linux machine has reached end of support, so we are moving the Cluster Master from one machine to another. I set up the cluster master on the new hardware and it was working well, but when I changed the master node URL on the indexer, it stopped working. The indexer doesn't start by itself, and even when I start it manually, it stays running for some time, but during that time the indexer's web UI does not work. After a while the indexer stops automatically. The same happened on another indexer as well. When I revert to the old cluster master, all the issues are resolved automatically: the indexer keeps running and the web UI is available, with no issues noticed. Any idea why the indexer keeps shutting down? I am on Splunk version 9.0.4. Regards, Pravin
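For reference, the peer-side stanza I edited looks like this (hostname is a placeholder). My understanding is that pass4SymmKey also has to match the new manager's, not just the URL, so that is one of the things I'm double-checking:

[clustering]
mode = peer
manager_uri = https://<new-cm-hostname>:8089
pass4SymmKey = <same key as configured on the new manager>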
When I edit a correlation search, I want to configure the time range of the drill-down search. If I put "1h" in the "Earliest Offset" form field, it inputs the Unix timestamp in milliseconds, but Splunk expects the Unix timestamp in seconds. Is there a workaround for this issue?
Hello, I have a Splunk Connect for Syslog (SC4S) server that receives logs from a source and transmits them to Splunk indexers. In order to reduce the number of events, I want to filter the logs at the SC4S level. Note that SC4S uses syslog-ng for filtering and parsing. The use case is as follows: when an event arrives on the SC4S server and contains the IP address 10.9.40.245, the event is dropped. Does anyone have any idea how to create this filter on SC4S? Thank you.
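In plain syslog-ng terms, the drop would look something like the sketch below; SC4S wraps this kind of logic in its local app-parser framework, and the source name s_network is an assumption, so treat this as the shape rather than a drop-in config:

filter f_drop_bad_ip {
    match("10\\.9\\.40\\.245" value("MESSAGE") type("pcre"));
};

log {
    source(s_network);
    filter(f_drop_bad_ip);
    flags(final);   # matching events stop here with no destination, i.e. they are dropped
};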
Hello, I am going through the steps for updating Splunk SOAR (unprivileged) from the site documentation, but when I copy the new package to the splunk-soar folder and try to start the phantom service, I encounter the error: Phantom Startup failed: postgresql-11
Hi!! I'm very new to Splunk and just want some advice. I accidentally installed a 32-bit version of the universal forwarder on my test Linux machine. Is it fine to install the 64-bit version on top without removing the 32-bit version, and will this cause issues later? I'm also running Splunk Web on the same Linux machine. Any advice or suggestions please. Amit
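In case it helps frame the question, the clean-swap approach I've been weighing looks like this (sketch; paths assume a default tarball install under /opt, and the package filename is a placeholder):

/opt/splunkforwarder/bin/splunk stop
cp -a /opt/splunkforwarder/etc /tmp/uf-etc-backup     # keep existing config
rm -rf /opt/splunkforwarder
tar -xzf splunkforwarder-<version>-Linux-x86_64.tgz -C /opt
cp -a /tmp/uf-etc-backup/. /opt/splunkforwarder/etc/
/opt/splunkforwarder/bin/splunk start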
Hello, we have a Splunk indexer cluster with two search heads and would like to use this add-on in the cluster: https://splunkbase.splunk.com/app/4055 We installed the add-on on the search head without ES and on all indexers via a Cluster Manager app. Then we set up all the inputs for the add-on on the search head, but could not select the index "M365"; we could only enter it manually. The problem now is that this index is not being filled by the indexers! What are we doing wrong here?
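For reference, the shape of the indexes.conf we would expect to push from the Cluster Manager so the index actually exists on the peers (index name assumed lowercase; repFactor = auto is my understanding of what a clustered index needs in order to replicate):

[m365]
homePath = $SPLUNK_DB/m365/db
coldPath = $SPLUNK_DB/m365/colddb
thawedPath = $SPLUNK_DB/m365/thaweddb
repFactor = auto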
Hi Experts, has anyone achieved SNMP polling of a network device from a RedHat-based Splunk HF? I am trying to follow the documentation below but end up getting some errors related to databases and connections. Collect data with input plugins | Telegraf Documentation
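For context, the shape of the Telegraf config I'm attempting, trimmed to a sketch (agent IP, HEC URL, and token are placeholders; the splunkmetric serializer is what I understood the docs to suggest for sending straight to HEC):

[[inputs.snmp]]
  agents = ["udp://192.0.2.10:161"]
  version = 2
  community = "public"

  [[inputs.snmp.field]]
    oid = "RFC1213-MIB::sysUpTime.0"
    name = "uptime"

[[outputs.http]]
  url = "https://splunk.example.com:8088/services/collector"
  data_format = "splunkmetric"
  splunkmetric_hec_routing = true
  [outputs.http.headers]
    Authorization = "Splunk <hec-token>"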
I have a SHC of 3 search heads. I changed some fields in a data model on one SH. The change replicated to the 2nd SH, but the 3rd SH does not have the same fields, even though that SH was the captain. I ran the resync command but still have the same issue.
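For reference, the commands I ran on the out-of-sync member (auth omitted):

splunk show shcluster-status
splunk resync shcluster-replicated-config

The first shows which member is currently captain and each member's state; the second is the resync mentioned above.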
I have an application on Splunkbase and want to rename it, along with its commands and custom action. I have updated the app name by renaming the folder and updating the app ID, and I've also updated the commands and custom action with the new name.

While testing on my local Splunk instance, I observed that the existing application isn't replaced with the new one, since the folder name and app name/ID differ from the older version. I believe that is fine, as I can ask users to remove it from their instances, but I want the saved searches as well as the local data of the older app to be available in the renamed (newer) app, and I'm unable to find an appropriate way of doing so. There was a post in the community where the solution was to clone the local data from the older app to the newer app, but that isn't feasible for me, as I don't have access to the users' instances where the older app is installed. Can someone please help me with this? (For anyone who does have filesystem access, the cloning approach is sketched after this post.)

Also, I have a few other questions related to older applications:

1. What is the procedure for deleting an already existing application on Splunkbase? Is emailing Splunk support the only way? I tried app archiving, but it doesn't restrict users from installing it.
2. Is there a way to transfer the old Splunk application or account to a new account? Any alternative to emailing the Splunk support team?

TL;DR How can I replace the already installed application on the user's end with the newly renamed application in Splunk? Since the names of the applications differ, Splunk installs a separate app for the new name instead of updating the existing one. If there are users who are already using the existing application and have its saved configurations and searches, how can we get those migrated to the newly renamed application?
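For completeness, the clone-the-local-directory approach mentioned above, as a sketch (app directory names are placeholders for the old and new app IDs):

cp -a $SPLUNK_HOME/etc/apps/old_app_id/local/.   $SPLUNK_HOME/etc/apps/new_app_id/local/
cp -a $SPLUNK_HOME/etc/apps/old_app_id/lookups/. $SPLUNK_HOME/etc/apps/new_app_id/lookups/
$SPLUNK_HOME/bin/splunk restart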
Hi, so I ran into a very odd and specific issue. I try to regex-filter a field, let's call it "parent". The field has the following structure (not the actual field I want to regex, but it's easier to show the issue this way, so other options like "use .*" or something won't work):

C:\\Windows\\System32\\test\\

I try to regex this field like:

"C:\\\\Windows\\\\System32\\\\test\\\\"

This does not work. But as soon as I replace the second folder:

"C:\\\\Windows\\\\.*\\\\test\\\\"

it works. And this happens across all fields: no matter which field with a path I take, as soon as I enter the second folder, it immediately stops working. I also tried adding different special characters, all numbers and letters, space, tab, etc., tried changing the "\\\\", and tried adding ".*System32.*", but nothing works. Has anyone else ever run into this issue and found a solution?
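For anyone reproducing this, a self-contained sketch with makeresults (backslash counts copied from the question; a repro like this makes it easy to vary one layer of escaping at a time to see whether the data or the pattern is at fault):

| makeresults
| eval parent = "C:\\Windows\\System32\\test\\"
| regex parent = "C:\\\\Windows\\\\System32\\\\test\\\\"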