All Posts


You should definitely move the old logs into some other archive directory on the source side. Depending on the OS and its version, your current situation could become a big bottleneck soon, or it may be one already; I have seen environments where even ls or dir didn't work due to the number of files. ignoreOlderThan is what you should/could try, BUT remember that it looks at the file modification time. If someone or something updates a file's mtime, Splunk will read it regardless of whether the content has really changed. From inputs.conf.spec:

ignoreOlderThan = <non-negative integer>[s|m|h|d]
* The monitor input compares the modification time on files it encounters with the current time. If the time elapsed since the modification time is greater than the value in this setting, Splunk software puts the file on the ignore list.
* Files on the ignore list are not checked again until the Splunk platform restarts, or the file monitoring subsystem is reconfigured. This is true even if the file becomes newer again at a later time.
* Reconfigurations occur when changes are made to monitor or batch inputs through Splunk Web or the command line.
* Use 'ignoreOlderThan' to increase file monitoring performance when monitoring a directory hierarchy that contains many older, unchanging files, and when removing or adding a file to the deny list from the monitoring location is not a reasonable option.
* Do NOT select a time that files you want to read could reach in age, even temporarily. Take potential downtime into consideration!
* Suggested value: 14d, which means 2 weeks
* For example, a time window in significant numbers of days or small numbers of weeks are probably reasonable choices.
* If you need a time window in small numbers of days or hours, there are other approaches to consider for performant monitoring beyond the scope of this setting.
* NOTE: Most modern Windows file access APIs do not update file modification time while the file is open and being actively written to. Windows delays updating modification time until the file is closed. Therefore you might have to choose a larger time window on Windows hosts where files may be open for long time periods.
* Value must be: <number><unit>. For example, "7d" indicates one week.
* Valid units are "d" (days), "h" (hours), "m" (minutes), and "s" (seconds).
* No default, meaning there is no threshold and no files are ignored for modification time reasons
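For example, a monitor stanza using it could look roughly like this (the path is taken from the question; the index and sourcetype are placeholders):

[monitor:///opt/someApplication/logs/someApplication.*.out]
index = your_index
sourcetype = someApplication_out
# ignore anything whose mtime is older than two weeks (the suggested value from the spec above)
ignoreOlderThan = 14d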
Perfmon can be tricky. Even if you don't have the same problem, you can see how to specify counters in my thread here: https://community.splunk.com/t5/Getting-Data-In/Debugging-perfmon-input/m-p/621539#M107042
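For reference, a perfmon stanza with explicitly listed counters might look roughly like this (the object, counter names, interval, and index are assumptions; they must match what the Windows host actually exposes):

[perfmon://CPU]
object = Processor Information
counters = % Processor Time; % User Time
instances = *
interval = 10
index = windows_perf
disabled = 0

Counters are separated by semicolons, and instances = * collects all instances of the object.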
There is also the user context, which needs the app context as well. https://docs.splunk.com/Documentation/Splunk/9.3.2/Troubleshooting/Usebtooltotroubleshootconfigurations
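For example, a btool run scoped to a specific app and user context might look like this (the app and user names here are placeholders):

splunk btool inputs list --debug --app=Splunk_TA_windows --user=admin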
You can use the ignoreOlderThan option, but be aware that the forwarder still has to keep track of file existence and metadata. So if you have many files in the source directory you might need to raise process limits, and you'll be wasting resources on files you don't care about.
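If you want to see what the forwarder is actually tracking (and what it has put on the ignore list), the tailing processor status can be dumped from the CLI on the forwarder, e.g.:

$SPLUNK_HOME/bin/splunk list inputstatus

The output lists each monitored file and its state, which makes it easier to judge how much of the directory the forwarder is still churning through.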
I am in the process of implementing Splunk in a fairly long-lived environment. Log directories contain date-masked log files. I would like to ignore files before today's date, and only import new files. Example: /opt/someApplication/logs/someApplication.202412160600.out

I am unable to wildcard /opt/someApplication/logs/someApplication.*.out as there are logs dating back to 2017 and I'd exceed our daily license/quota by several orders of magnitude. Changing the logging format is not an option. Exclude-lists appear to be a solution, but even using regex would be incredibly burdensome. Thoughts?
I am new to Splunk and am teaching myself how to use it as I integrate it with my environment. I inherited an existing Splunk Enterprise instance that, at one point, apparently used to work to some degree, but by the time I joined the team and took over it had fallen into disuse. After getting it upgraded from 9.0 to 9.3.2, rolling out Universal Forwarders, tinkering with inputs.conf, and fixing some network issues, I found myself finally able to get Windows Event Log data into my indexer from a couple of different test machines.

The inputs.conf I was using was something I had found on one of the existing machines before reinstalling the UF, and I noticed that it had a lot more stuff in it than Windows Event Log stanzas. Some of it suggested it monitored things I am interested in right now, such as CPU utilization. However, I noticed that exactly nothing outside of Windows Event data was ever making it across the wire, no matter how I reconfigured the inputs.conf stanzas.

The one I homed in on first was CPU utilization, and through research I discovered that when I invoke a stanza in inputs.conf it has to exist to some degree within the Settings > Data Inputs library (?) present on my Splunk instance. perfmon://CPU, perfmon://CPULoad, and perfmon://Processor were all stanzas I found online for (among other things) checking what % CPU utilization a target server was at. None of them worked. Looking into these Data Inputs, it looks like something is broken: when I select these three (as an example), Splunk's web UI throws up an error saying that "Processor is not a valid object".

Following some guidance online, I was able to make my own custom Data Input just called testCPU, pointing at a custom index I call testWindows, and basically make it a clone of CPU (taking in % Processor Time and % User Time as counters and whatnot). For the required object, I noticed that "Processor Information" was an option I could pick rather than "Processor", so I went with that one. I then deployed a stanza in inputs.conf that says perfmon://testCPU on one of my UFs, and it absolutely works. My indexer is now pulling in CPU % use information. I suspect that if I went back to the three CPU-related entries above and set them to "Processor Information", they would work, and any of the existing apps I inherited that invoke those stanzas would themselves start pulling in data.

However, I do not know why my built-in Data Inputs are broken, and it isn't limited to the CPU ones I used as an example above. For example, the "System" input claims "System is not a valid object" and the available objects dropdown does not have an obvious replacement (there's no "System Information" to follow the pattern above). The "PhysicalDisk" DI claims "PhysicalDisk is not a valid object" but has nothing obvious to replace it either. Available Memory claims "Memory" is not a valid object with no obvious replacement, etc.

Does anyone know what might be going on here? Looking at how the stanzas are configured online, the examples I see for the handful above do in fact invoke object = "xxx" matching the names of things my Splunk says aren't valid. Some of these might have obvious replacements ("Network" might be "Physical Network Card Activity" or something like that), but a lot of them don't. How should I go about fixing these? My first assumption was that I would find some kind of "Objects" config file with clues to how these got redefined, but that wasn't the case.
I have a ticket in with support, but I am broadening the scope here to see if anyone else has familiarity with something like this (and also to create something for another user with the same issue to find in the future).
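For context, the working custom input described above would roughly correspond to an inputs.conf stanza like this on the UF (the interval is a guess; the rest mirrors what the post describes):

[perfmon://testCPU]
object = Processor Information
counters = % Processor Time; % User Time
instances = *
interval = 10
index = testWindows
disabled = 0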
That should work already. Could you try putting that search filter at the end of your alert search?

<yoursearch>
| search (errorType = "Client" AND count > 8) OR (errorType = "Credentials" AND count > 8) OR (errorType = "Other" AND count > 8)
Try excluding Up states at the end rather than at the beginning.

index="network" %BGP-5 *clip*
| dedup src_ip
| stats count by state_to, Device_name, src_ip
| where state_to!="Up"
At first glance I would suspect that the search filters for your roles are contradicting each other and filtering out all events. E.g. if you have the following roles with search filters:

ROLE A - (index=index1 sourcetype=something)
ROLE B - (index=index2 sourcetype=something)

then, if you have both role A and role B, Splunk will force you to search with "(index=index1 sourcetype=something) (index=index2 sourcetype=something)", which will retrieve 0 events because no event exists in both index1 and index2 at the same time. Are you able to post your sanitized search filters so we can look for contradictory filters?
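For reference, role search filters live in authorize.conf as srchFilter; a minimal sketch of the two hypothetical roles above would be:

[role_a]
srchFilter = (index=index1 sourcetype=something)

[role_b]
srchFilter = (index=index2 sourcetype=something)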
Officially HEC isn't supported on UF. I read several times that it works but never actually got to testing it myself.
I am developing a Splunk setup using the Docker image and Podman. I am trying to set up 2 indexers along with an indexer manager, with each container running on a separate RHEL VM. I successfully set up the manager. I then go to register the indexer as a peer, enter the VM host IP of the manager, and successfully register the indexer as a peer.

When I reboot and check the indexer manager, it shows the indexer peer as up, but it shows the IP address of the manager container for the indexer peer. When I try to add another indexer it does the same thing and will not let me add another indexer. I have tried statically assigning IPs and confirmed all IPs are different, etc. I wasn't sure if anyone has run into this issue.

All VM hosts are on the same subnet and can communicate. Firewall is off and SELinux is off. Using 9887 as the replication port and 8089 as the manager comms port. I am running rootless outside the container and as root inside. It has to be a permission or a file that I am missing; when I set it up as root:root it works perfectly. Any ideas are appreciated.
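In case it helps with comparing notes, a typical peer-side server.conf for this kind of setup is sketched below; the register_* settings let a containerized peer advertise the host's address instead of its internal container IP, which may be relevant to the wrong-IP symptom (all addresses and the secret are placeholders):

[replication_port://9887]

[clustering]
mode = peer
manager_uri = https://<manager_vm_ip>:8089
pass4SymmKey = <cluster_secret>
register_replication_address = <this_vm_ip>
register_forwarder_address = <this_vm_ip>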
Hello, I have an issue where I am part of multiple roles on Splunk Enterprise and Splunk Enterprise Security, and the same role and SAML group has access to all the indexes. On the Splunk Enterprise search head I am part of 3 roles (A, B, C) which have search filters, and I am also part of role D which has access to all indexes, but when I try to search any data I get nothing back. On the Enterprise Security SH, however, I am able to view all the data as expected. Is it some kind of precedence issue on the Splunk Enterprise SH that is causing this? Please help me.

Thanks
I got an alert working "for each result" by using a query that creates the following table:

errorType      count
Client         10
Credentials    50
Unknown        5

How do I set a different threshold for each result? I tried using a custom trigger as follows and was hoping to only get an email for "client" and "credentials", but I still get all 3.

search (errorType = "Client" AND count > 8) OR (errorType = "Credentials" AND count > 8) OR (errorType = "Other" AND count > 8)
I found a way to add earliest and latest using tstats from a datamodel, but the values do not match between the tstats query and a direct index search. What could be the fix? I set the frequency to run once every 5 minutes, the earliest time to 91 days, and the max summarization search time to 1 hour.
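One way to narrow down a mismatch like this is to run the same tstats search against raw events rather than the acceleration summaries; a minimal sketch (the datamodel, dataset, and field names are assumptions):

| tstats summariesonly=false min(_time) as earliest max(_time) as latest from datamodel=My_DataModel.My_Dataset by My_Dataset.host
| convert ctime(earliest) ctime(latest)

If summariesonly=false matches the direct index search but summariesonly=true does not, the difference usually comes from acceleration lag or the summary range rather than from the search itself.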
Hello, I am experiencing intermittent log ingestion issues on some servers and have observed potential queue saturation in the process. Below are the details of the issue and the related observations.

Setup overview:
* I am using Packetbeat to capture DNS queries across multiple servers.
* Packetbeat generates JSON log files, rotating logs into 10 files, each with a maximum size of 50 MB.
* Packetbeat generates 3-4 JSON files every minute.
* Setup: Splunk Cloud 9.2.2, on-prem Heavy Forwarder 9.1.2, and Universal Forwarder 9.1.2.

Example list of Packetbeat log files (rotated by Packetbeat):
packetbeat.json
packetbeat.1.json
packetbeat.2.json
packetbeat.3.json
...
packetbeat.9.json

Issue observed: On some servers, the logs are ingested and monitored consistently by the Splunk agent, functioning as expected. However, on other servers:
* Logs are ingested for a few minutes, followed by a 5-6 minute gap.
* This cycle repeats, resulting in missing data in between, while other data collected from the same server is ingested correctly.

Additional observations: While investigating the issue, I observed the following log entry in the Splunk Universal Forwarder _internal index:

11-15-2024 17:27:35.615 -0600 INFO HealthChangeReporter - feature="Real-time Reader-0" indicator="data_out_rate" previous_color=yellow color=red due_to_threshold_value=2 measured_value=2 reason="The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data."
host = EAA-DC
index = _internal
source = C:\Program Files\SplunkUniversalForwarder\var\log\splunk\health.log
sourcetype = splunkd

The following conf is applied to all DNS servers:

limits.conf
[thruput]
maxKBps = 0

server.conf
[queue]
maxSize = 512MB

inputs.conf
[monitor://C:\packetbeat.json]
disabled = false
index = dns
sourcetype = packetbeat

Any direction to resolve this is appreciated! Thank you!
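One way to narrow this down is to check which queue on the forwarder is actually filling up; a sketch of a metrics search (the host is taken from the event above, the span is an assumption):

index=_internal host=EAA-DC source=*metrics.log* group=queue
| timechart span=5m max(current_size_kb) by name

If the tcpout or parsing queues sit at their maximum, the bottleneck is usually downstream of the monitor input (the heavy forwarder or the indexing tier) rather than the input itself.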
Hi Team, in the query below I don't want the result "Up" to show up in the state_to field; I just want to see data with a down state. It is also not possible to simply use the exclude operator (state_to!=Up), because that shows all the down results, which is not my aim. Please help and suggest!

My query:

index="network" %BGP-5 *clip*
| dedup src_ip
| stats count by state_to, Device_name, src_ip

My expected result/aim: show only the devices which are down at the moment, without the Up results.
In the TA documentation at https://splunk.github.io/splunk-add-on-for-amazon-web-services/S3/ it is stated: "Ensure that all files have a carriage return at the end of each file. Otherwise, the last line of the CSV file will not be indexed." But the CSV standard (https://www.ietf.org/rfc/rfc4180.txt) does not require a CRLF at the end of the last row. Can you please remedy this so that a standard-compliant CSV file without a final CRLF still works and ingests the final row? Some source solutions only output CSV files in this way (without a final CRLF).
That setting is for Splunk Cloud only.  Remove it from your config.
The values are *supposed* to be the same in the new field, except with commas added. Please share sanitized examples of what results you have now and what you want.
This does not work. newField has the same values as the host field. It's not concatenating.
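For anyone who lands here later: in SPL, string concatenation uses the . operator, and joining a multivalue field with commas uses mvjoin. Minimal sketches, with the second field name being an assumption:

| eval newField = host . "," . other_field
| eval newField = mvjoin(host, ",")

If newField still comes out identical to host, it is worth double-checking how the expression is quoted and whether the other field actually exists in the events.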