All Posts


I am developing a Splunk setup using the Docker image and Podman. I am trying to set up 2 indexers along with an indexer manager, with each container running on a separate RHEL VM. I successfully set up the manager. I then register an indexer as a peer, entering the VM host IP of the manager, and the registration succeeds. But when I reboot and check the indexer manager, it shows the indexer peer as up, yet it displays the IP address of the manager container for the indexer peer. When I try to add another indexer, the same thing happens and it will not let me add another indexer. I have tried statically assigning IPs and confirmed all IPs are different. I wasn't sure if anyone has run into this issue. All VM hosts are on the same subnet and can communicate; firewall and SELinux are off. I'm using 9887 as the replication port and 8089 as the manager comms port. I am running rootless outside the container and as root inside. It has to be a permission or a file that I am missing, because when I set it up as root:root it works perfectly. Any ideas are appreciated.
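In container or NAT setups, a peer often advertises its container-internal IP to the manager instead of the host's address, which matches the symptom described above. One thing worth trying is the register_*_address settings in server.conf on each indexer peer, which override the addresses the peer registers with the manager. A hedged sketch (the IPs below are placeholders for your environment, not known values):

```
# server.conf on each indexer peer (addresses are hypothetical examples)
[clustering]
mode = peer
manager_uri = https://10.0.0.10:8089
# Advertise the VM host's IP instead of the container's internal IP
register_replication_address = 10.0.0.21
register_forwarder_address = 10.0.0.21
register_search_address = 10.0.0.21

[replication_port://9887]
```

If the manager then shows each peer under its VM host IP, the container networking was the culprit rather than permissions.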
Hello, I have an issue where I am part of multiple roles on Splunk Enterprise and Splunk Enterprise Security, and the same role and SAML group has access to all the indexes. On the Splunk Enterprise SH I am part of 3 roles (A, B, C) which have search filters, and I am also part of role D which has access to all indexes. But when I try to search any data, I get nothing. On the Enterprise Security SH, however, I am able to view all the data as expected. Is a search-filter precedence issue on the Splunk Enterprise SH causing this? Please help me. Thanks
I got an alert working "for each result" by using a query that creates the following table:

errorType     count
Client        10
Credentials   50
Unknown       5

How do I set a different threshold for each result? I tried using a custom trigger as follows and was hoping to only get an email for "Client" and "Credentials", but I still get all 3:

search (errorType = "Client" AND count > 8) OR (errorType = "Credentials" AND count > 8) OR (errorType = "Other" AND count > 8)
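One way to keep per-type thresholds in a single place is to compute the threshold inside the search itself rather than in the trigger condition, so only rows over their own limit survive. A sketch, assuming the base search already produces errorType and count (the threshold values here are illustrative, not recommendations):

```
... base search ...
| stats count by errorType
| eval threshold=case(errorType=="Client", 8,
                      errorType=="Credentials", 8,
                      true(), 999)
| where count > threshold
```

With "trigger for each result", only the rows that pass the where clause fire an alert, so each errorType effectively gets its own threshold.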
I found a way to add earliest and latest using tstats from a data model, but the values do not match between the tstats query and a direct index search. What could be the fix? I set the acceleration frequency to run once every 5 minutes, the earliest time to 91 days, and the max summarization search time to 1 hour.
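To narrow down whether the gap comes from the acceleration summary, it can help to compare accelerated and unaccelerated results side by side using the summariesonly flag. A sketch, where Network_Traffic is a hypothetical data model name to substitute with your own:

```
| tstats summariesonly=true earliest(_time) as earliest latest(_time) as latest
    from datamodel=Network_Traffic by host
| eval mode="accelerated"
| append
    [| tstats summariesonly=false earliest(_time) as earliest latest(_time) as latest
        from datamodel=Network_Traffic by host
     | eval mode="full"]
```

If summariesonly=false matches the raw index search but summariesonly=true does not, the summaries are lagging or incomplete rather than the data model definition being wrong.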
Hello, I am experiencing intermittent log ingestion issues on some servers and have observed potential queue saturation in the process. Below are the details of the issue and the related observations.

Setup overview: I am using Packetbeat to capture DNS queries across multiple servers. Packetbeat generates JSON log files, rotating logs into 10 files, each with a maximum size of 50 MB, and produces 3-4 JSON files every minute. Setup: Splunk Cloud 9.2.2, on-prem Heavy Forwarder 9.1.2, and Universal Forwarder 9.1.2.

Example list of Packetbeat log files (rotated by Packetbeat): packetbeat.json, packetbeat.1.json, packetbeat.2.json, packetbeat.3.json, ... packetbeat.9.json

Issue observed: On some servers, the logs are ingested and monitored consistently by the Splunk agent, functioning as expected. However, on other servers, logs are ingested for a few minutes, followed by a 5-6 minute gap. This cycle repeats, resulting in missing data in between, while other data collected from the same server is ingested correctly.

Additional observations: While investigating the issue, I observed the following log entry in the Splunk Universal Forwarder _internal index:

11-15-2024 17:27:35.615 -0600 INFO HealthChangeReporter - feature="Real-time Reader-0" indicator="data_out_rate" previous_color=yellow color=red due_to_threshold_value=2 measured_value=2 reason="The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data."
host = EAA-DC   index = _internal   source = C:\Program Files\SplunkUniversalForwarder\var\log\splunk\health.log   sourcetype = splunkd

The following conf is applied to all DNS servers:

limits.conf:
[thruput]
maxKBps = 0

server.conf:
[queue]
maxSize = 512MB

inputs.conf:
[monitor://C:\packetbeat.json]
disabled = false
index = dns
sourcetype = packetbeat

Any direction to resolve this is appreciated!
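When a monitor input stalls because splunkd's downstream queues back up, a common mitigation on the forwarder is to enlarge the specific pipeline queues and, if the host can afford it, add a second ingestion pipeline so one busy input cannot stall the rest. A hedged sketch for the Universal Forwarder, where the sizes are illustrative rather than prescriptive:

```
# server.conf on the Universal Forwarder
[general]
# Run two independent ingestion pipelines (costs extra CPU/memory)
parallelIngestionPipelines = 2

[queue=parsingQueue]
maxSize = 512MB
```

```
# outputs.conf on the Universal Forwarder
[tcpout]
# Larger in-memory output queue to absorb bursts toward the HF
maxQueueSize = 512MB
```

If the red data_out_rate indicator persists after this, the bottleneck is more likely downstream (HF or indexing tier) than on the forwarder itself.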
Thank you!
Hi Team, in the query below I don't want "Up" results to appear in the state_to field; I only want to see devices that are currently down. It is also not possible to simply use the exclusion operator (state_to!=Up), because that shows every historical down event, which is not my aim.

Please help and suggest!

My query:

index="network" %BGP-5 *clip* | dedup src_ip | stats count by state_to, Device_name, src_ip

My aim: show only the devices that are down at the moment, and not show the Up results.
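One common pattern for "currently down" is to take only the most recent state per device and then filter on it, so any device whose latest event is Up drops out entirely. A sketch, assuming state_to, Device_name, and src_ip are already extracted fields:

```
index="network" "%BGP-5" *clip*
| stats latest(state_to) as state_to latest(_time) as last_seen by Device_name, src_ip
| where state_to!="Up"
```

Because the filter runs after latest(), this excludes devices that went down and later recovered, rather than merely hiding their Up rows the way a raw state_to!=Up filter would.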
In the TA documentation at https://splunk.github.io/splunk-add-on-for-amazon-web-services/S3/ -- it is stated, "Ensure that all files have a carriage return at the end of each file. Otherwise, the last line of the CSV file will not be indexed."  But the CSV standard (https://www.ietf.org/rfc/rfc4180.txt) does not require a CRLF at end of last row.  Can you please remedy this so a standard-compliant CSV file without a final CRLF still works and ingests the final row?  Some source solutions only output CSV files in this way (without final CRLF).
That setting is for Splunk Cloud only.  Remove it from your config.
The values are *supposed* to be the same in the new field, except with commas added. Please share sanitized examples of what results you have now and what you want.
This does not work. newField has the same values as the host field. It's not concatenating.
Thank you all. Since I already have the HEC cluster and it runs on HFs, wouldn't it be better to use UFs instead? My understanding is that data flows more quickly through UFs, and we don't process the data at all at this HEC cluster level.
Try this from the documentation. https://splunk.github.io/splunk-connect-for-syslog/main/sources/#filtering-events-from-output https://splunk.github.io/splunk-connect-for-syslog/main/sources/#another-example-to-drop-events-based-on-src-and-action-values-in-message  
  Hi Splunk Community, I recently upgraded my Splunk environment from version 9.1.1 to the latest version. After the upgrade, I’m encountering errors when trying to restart Splunk across different components. For example, I get the following error: Invalid key in stanza [default] in /opt/splunk/etc/system/local/indexes.conf, line 132: archiver.enableDataArchive (value: false) It seems like some configuration keys are no longer valid in the updated version. Has anyone faced similar issues, and how can I resolve these errors to ensure my configurations are compatible with the latest Splunk version? Thanks in advance for your help! Best regards,
OK. Two things. 1. @isoutamo it doesn't matter how many times you have the same setting specified; only the "last one" is effective. So on its own, specifying INDEXED_EXTRACTIONS twice doesn't do anything. Of course, which setting is the "last one" depends on the settings priority. 2. @dbray_sd It makes sense. If you simply run btool on a SH without providing an app context, you will get the effective settings flattened using "normal" setting precedence, without taking that context into account (as if all settings were specified in the global system context). For quite a few versions now, btool has supported an --app=something argument so you can evaluate your settings in an app context. But it will, as far as I remember, still not check the user's settings, and I'm not 100% sure it properly checks permissions. So yes, your solution makes sense. If you haven't explicitly exported your app's contents, they'll only be usable in that app's context.
That's to be expected as well. If your input is not broken into single events properly you might end up with a small number of huge data blobs (effectively consisting of several "atomic" events). Since they'd get cut off at TRUNCATE point, all the data following that point would be lost.
Thank you for your input. It might be the line breaker setting that is causing this. In addition, the number of events received is low, considering it's an Azure Firewall producing 10-15 GB of logs daily.
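If events are not being broken apart correctly and are getting cut at TRUNCATE as described above, the usual fix is explicit line breaking in props.conf on the first full Splunk instance that parses the data. A hedged sketch, where the sourcetype name and the regex are assumptions to adapt to the actual feed:

```
# props.conf (hypothetical sourcetype name; adjust LINE_BREAKER to your format)
[azure:firewall]
SHOULD_LINEMERGE = false
# Break on newlines followed by the start of a JSON object
LINE_BREAKER = ([\r\n]+)\{
TRUNCATE = 100000
```

With SHOULD_LINEMERGE=false and a LINE_BREAKER that matches the real event boundaries, each event stays well under TRUNCATE and nothing after the cutoff point is lost.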
  | eval newField=mvjoin(host, ",")  
Well, right as I went to create a ticket, I stumbled onto an old note that had the fix. I had to do the following on the SH:

sudo -u splunk mkdir -p /opt/splunk/etc/apps/RabbitMQ_Settings/metadata
sudo -u splunk vim /opt/splunk/etc/apps/RabbitMQ_Settings/metadata/local.meta

[]
access = read : [ * ], write : [ admin ]
export = system

I recalled something about permissions or some other weirdo Splunk requirement. Aggravating, but at least it's working as expected now.
Hi @Sailesh6891 , you could try somethng like this: <your_search> | stats values(host) AS host BY index | nomv host Ciao. Giuseppe
The indexer and the new CM will have logs to help indicate what is happening, something to point you in the right direction. Please look there and post anything of interest if you still need help after reviewing.

As for the web URL on the indexer: IMO, from a security stance it should always be disabled. It's your environment, so hopefully there is a good reason for enabling it.

Knowing that the web URL availability turns on and off tells me that your old CM has a custom app that enables the web URL; the new CM likely does not, so when the new CM pushes a bundle, the indexer removes the old CM's custom app and disables the web URL.