All Posts


We are experiencing consistent log duplication and data loss when the Splunk Universal Forwarder (UF), running as a Helm deployment inside our EKS cluster, is restarted or redeployed.

Environment Details:
- Platform: AWS EKS (Kubernetes)
- UF deployment: Helm chart
- Splunk UF version: 9.1.2
- Indexers: Splunk Enterprise 9.1.1 (self-managed)
- Source logs: Kubernetes container logs (/var/log/containers, etc.)

Symptoms after a UF pod is restarted or redeployed:
- Previously ingested logs are duplicated.
- Some (not all) logs generated during the restart window are missing in Splunk.
- The fishbucket is recreated at each restart. We confirmed this by logging into the UF pod post-restart and checking /opt/splunkforwarder/var/lib/splunk/fishbucket/; the timestamps indicate it is freshly recreated (ephemeral).

Our Hypothesis: we suspect this behavior is caused by the UF losing its ingestion state (fishbucket) on pod restart, due to the lack of a PersistentVolumeClaim (PVC) mounted at /opt/splunkforwarder/var/lib/splunk. This would explain both:
- re-ingestion of previously read files (duplicates), and
- failure to re-ingest certain logs that may no longer be available or tracked (data loss).

However, we are not yet certain whether the missing logs are due to the non-persistent fishbucket, container log rotation, or both.

What We Need from Splunk Support:
- How can we conclusively verify whether the missing logs are caused by fishbucket loss, file rotation, inode mismatch, or other ingestion-tracking issues?
- What is the recommended and supported approach for maintaining ingestion state in a Kubernetes/Helm-based Splunk UF deployment?
- Is mounting a PersistentVolumeClaim (PVC) at /opt/splunkforwarder/var/lib/splunk sufficient and reliable for preserving the fishbucket across pod restarts (see the sketch below)?
- Are there additional best practices to prevent both log loss and duplication, especially in dynamic environments like Kubernetes?
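A minimal sketch of the kind of persistence we have in mind, assuming our chart lets us inject extra volumes (names, size, and structure below are hypothetical, not taken from any particular Helm chart):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: splunk-uf-state        # hypothetical name
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi             # sized for fishbucket/ingestion state only

    # In the UF pod template (a StatefulSet with volumeClaimTemplates would give
    # each replica its own state; a plain Deployment sharing one PVC would not):
    #   volumeMounts:
    #   - name: uf-state
    #     mountPath: /opt/splunkforwarder/var/lib/splunk
    #   volumes:
    #   - name: uf-state
    #     persistentVolumeClaim:
    #       claimName: splunk-uf-state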
Hi @NickEot
Can I check - do you not deploy to your HF from a Deployment Server? Ideally the HF would be relatively stateless and rebuildable from a DS if it were to disappear and come back without configuration. However, if you're not in a position to do this, then you would need to look at a custom app to collect the data you need and index it. I can't find it right now (but I'll keep looking), but I once created a simple app which ran btool and piped the output to a file. You could do something like this as a custom app, configured to write the conf files you are interested in to a file which is monitored via inputs.conf and ingested to whichever index you need on your indexers; a sketch follows below.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
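A rough sketch of such an app (the app name, paths, index, and interval are all hypothetical, and the btool invocation is just one example of what you might dump):

    # etc/apps/conf_backup/bin/dump_conf.sh
    #!/bin/sh
    # Dump the effective outputs.conf (for example) to a file for ingestion
    "$SPLUNK_HOME/bin/splunk" btool outputs list --debug \
        > "$SPLUNK_HOME/var/log/splunk/conf_backup.log"

    # etc/apps/conf_backup/default/inputs.conf
    [script://$SPLUNK_HOME/etc/apps/conf_backup/bin/dump_conf.sh]
    interval = 86400           # run once a day
    disabled = false

    [monitor://$SPLUNK_HOME/var/log/splunk/conf_backup.log]
    index = conf_backups       # hypothetical index
    sourcetype = splunk_conf_backup

Since the HF already forwards to the indexers over 9997, this gets the conf contents there without opening any new ports.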
Hi Staff, we have a distributed system with 1 Splunk Enterprise instance and N Heavy Forwarders pushing data to it. We would like to back up one .conf file inside each Heavy Forwarder every night, directly into a specific folder on the Enterprise machine, using the existing port 9997 or 8089 and avoiding any other port configuration. Is this possible? How can we get to the right solution? Thanks in advance. Nick
Hi, I'm looking for a query which helps me to find whether a login is successful or not. Unfortunately, there is no direct log which would show this, so I need to use the following logic:
- If there is EventID 1000, check whether there is a following EventID 1001 with the same field called Username within a time range of 1s.
- If an EventID matching the above conditions exists - Status=SUCCESS
- If an EventID matching the above conditions doesn't exist - Status=FAILURE
Display a table with the following fields from the matched events:
- _time of event 1000
- Computer from event 1000
- Status
- Resource from event 1001
Is it possible to get this in Splunk?
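One possible sketch using transaction (the index name is an assumption - substitute your own):

    index=app_logs (EventID=1000 OR EventID=1001)
    | transaction Username maxspan=1s keepevicted=true
        startswith=eval(EventID==1000) endswith=eval(EventID==1001)
    | eval Status=if(closed_txn==1, "SUCCESS", "FAILURE")
    | table _time, Computer, Status, Resource

keepevicted=true keeps the 1000 events that never found a matching 1001 (these get closed_txn=0, i.e. FAILURE), and _time of each transaction is the time of its earliest event, i.e. the 1000 event. On large data volumes, a stats-based correlation by Username would scale better than transaction.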
Hello @Kosyay ,
Those additional lowercase fields are defined by the Splunk_TA_windows add-on to be compliant with the Splunk Common Information Model (CIM). You can find the list of the CIM field names here: https://help.splunk.com/en/splunk-cloud-platform/common-information-model/6.0/data-models/cim-fields-per-associated-data-model
The CIM helps you normalize your data to match a common standard, using the same field names and event tags for equivalent events from different sources or vendors. So I'm not sure you want to remove those fields.
If you want to see where those fields are defined, open the add-on archive and you will see the alias and eval definitions in the file "Splunk_TA_windows\default\props.conf".
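For illustration, the definitions in that props.conf look roughly like this (the stanza and field names below show the pattern; they are not copied from the add-on):

    [XmlWinEventLog]
    FIELDALIAS-dvc  = ComputerName AS dvc
    FIELDALIAS-user = TargetUserName AS user
    EVAL-dvc_nt_host = mvindex(split(ComputerName, "."), 0)

A FIELDALIAS or EVAL only creates an additional search-time field; it doesn't rename or duplicate anything in the indexed data, so disabling such stanzas would hide the lowercase fields without touching your events.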
In general, it should be.  That said, make sure you don't try to send events via HEC to an undefined index in Splunk Cloud.  You will likely lose any events after the one destined for the undefined index in a given HEC batch.
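For example, with a batched HEC payload like this (the endpoint, token, and index names are placeholders), if the index "typo_idx" does not exist in Splunk Cloud, the third event is the one at risk of being dropped along with the second:

    curl -k "https://http-inputs-example.splunkcloud.com:443/services/collector/event" \
      -H "Authorization: Splunk <hec-token>" \
      -d '{"event": "first - fine", "index": "main"}
    {"event": "second - undefined index", "index": "typo_idx"}
    {"event": "third - may be lost too", "index": "main"}'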
Hello. We ended up choosing the Splunk_TA_Windows add-on, but some questions came up. During parsing, some field names are in uppercase (e.g., Target_User_Name, WorkstationName, Source_Workstation), while others are in lowercase (e.g., user, dvc_nt_host, dvc, src_user). Could someone please explain:
- Why are these additional lowercase fields needed?
- Is it possible to hide or remove them?
Thanks in advance!
I have to disagree with using any x.0.0 version. Especially since DBX 4.0.0 contains totally new features (real HA), it's more than probable that there will be annoying issues with it, whether minor or major!
Port 8089 is used only for REST API requests and responses, not for sending logs! You need a separate port for those, like 9997 in a normal situation. It doesn't matter what it is; only ensure that it's allowed through all firewalls between the SH and the indexers. When you flip the port to XXXX or 9998, indexer discovery tells the SH that there is a new receiver port activated, so the SH should start using it and drop the previous 9997. If there is e.g. a firewall blocking traffic from the SH to the indexers on those new ports, then the SH can't work as expected and, I expect, once it loses access to its current LM logs, those other issues you mentioned will start. You should find some hints in your instances' internal logs as to whether this is really what has happened.
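For reference, the moving parts look roughly like this (the discovery name, key, and CM URI are placeholders):

    # inputs.conf on each indexer - the new receiving port
    [splunktcp://9998]
    disabled = false

    # outputs.conf on the forwarding instance - discovery via the Cluster Manager
    [indexer_discovery:cm]
    master_uri = https://cm.example.com:8089
    pass4SymmKey = <key>

    [tcpout:primary_indexers]
    indexerDiscovery = cm

With indexerDiscovery set, the forwarder asks the CM (over 8089) which peers and ports are receiving, which is exactly why opening 8089 alone is not enough - the advertised data port must be reachable too.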
When we set up a cluster, the SH, CM and the indexers stay connected over the management port 8089 and will keep sending _internal logs no matter what, but the forwarders use the inputs port 9997 to send data to the indexers. In our case, we only flip the port to XXXX or 9998, depending on the type of forwarding setup used. We have controlled data ingestion and always stay within limits, but sometimes unexpected testing causes a high input flow, and thus, we have to take measures to make sure we don't breach the license.
OK, my license was expired, so that was probably the problem.
Hello, fellow ES 8.x enjoyers. We have a few Splunk Cloud customers that got upgraded to ES 8.1. We have noticed that all the drill-down searches from Mission Control use the time range "All time", even though we configured the earliest and latest offsets with $info_min_time$ and $info_max_time$. After saving the search again, the problem vanished. I also created a new search, and it worked correctly immediately. It worked for the existing searches before the upgrade and stopped working after it. Anybody else with the same experience? Best regards
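For anyone comparing, this is the configuration we mean, as stored in savedsearches.conf for the correlation search (the stanza name is a placeholder, and the param names are our assumption of how the UI fields are persisted):

    [My Correlation Search]
    action.notable.param.drilldown_earliest_offset = $info_min_time$
    action.notable.param.drilldown_latest_offset = $info_max_time$

$info_min_time$ and $info_max_time$ resolve to the time boundaries of the triggering search, so a drill-down falling back to "All time" suggests the tokens stopped being substituted after the upgrade.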
Has it ever worked and now suddenly stopped? (If so - what changes were made to your environment?) Or is it a new installation? (How exactly did you install it?) Do the logs show any related errors?
The github project seems kinda old. Very old. As far as I remember the modern UF runs... fairly well with SELinux but needs tweaking in order to grant access to specific items. So the audit2allow approach is a fairly proper one.
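The usual audit2allow loop, for reference (the module name "splunkuf" is arbitrary):

    # Collect recent AVC denials for splunkd and build a local policy module
    ausearch -m avc -ts recent -c splunkd | audit2allow -M splunkuf
    # Review splunkuf.te before loading, then install the module
    semodule -i splunkuf.pp

Worth reviewing the generated .te file before loading it, so you grant only the specific accesses the UF actually needs.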
Hi @tanjiro_rengo
It ultimately depends on what configuration file changes you have applied as to whether this is a search-time or index-time change. Index-time changes will not apply retrospectively to existing indexed data.
Could you please share your configuration changes and let us know how you are sending this file to Splunk?
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
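As a rule of thumb (the sourcetype and patterns below are made up for illustration): props.conf settings like these are index-time and only affect data indexed after the change,

    [my_sourcetype]
    TIME_PREFIX = ^timestamp=
    LINE_BREAKER = ([\r\n]+)
    TRANSFORMS-route = my_routing_transform

while settings like these are search-time and apply immediately, even to data already indexed:

    [my_sourcetype]
    EXTRACT-user = user=(?<user>\S+)
    FIELDALIAS-host = hostname AS dest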
Hi @hv64
Can you please confirm if this previously worked, or if this is a new install? I see you aren't running the latest version - it might be worth upgrading to 4.0.0, unless there are known issues that would prevent this; that could rule out any previous bugs.
There is a huge range of things which could be wrong based on this error. The first thing I would try is restarting splunkd in case this resolves it - have you already tried this? I'd also recommend checking out https://splunk.my.site.com/customer/s/article/Splunk-App-for-DB-Connect-Cannot-Communicate-With-Task-Server which has a number of troubleshooting steps, rather than listing them all here!
Good luck!
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @tanjiro_rengo ,
as I said, it depends on how you upload the file: if you use the manual Data Input from the web GUI, you can upload the file many times without any issue.
If instead you are using a conf-based input, Splunk doesn't index the same log twice, so you should rename it and use the option crcSalt = <SOURCE>.
Ciao.
Giuseppe
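For example, in inputs.conf (the path, index, and sourcetype are placeholders):

    [monitor:///data/uploads/report.csv]
    crcSalt = <SOURCE>
    index = main
    sourcetype = my_csv

With crcSalt = <SOURCE>, the file's full path is mixed into the CRC that the fishbucket tracks, so giving the file a new name or path makes Splunk treat it as a brand-new file and index it again.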
That's one of the limitations of ingesting windows events in the "traditional" form. Open Event Viewer on your windows computer. Open the Security log and find a 4624 event. What you're ingesting at this point is what you can see in the bottom panel in the "General" tab - the event rendered to a human-readable text. It does contain fields named the same way (like Account Name) just differently "scoped" (indented a bit in sections regarding either Subject or New Logon).  So Splunk parses those fields as key/value pairs and simply gathers two different values of the same named field because the source data contains it. You could probably bend over backwards and try to write custom regexes to extract those specific values but it will be very ugly and fairly bad performance-wise. If you switch to ingesting XML versions of events, apart from saving on space occupied by events (and license usage!), you get more unambiguous structure. You'd be ingesting the event as it's presented in the bottom Event Log panel in the Details tab in XML view. The structure might not be as readable here but Splunk can parse this XML much better and present it to you in a useful form. And here you have much more straightforward and unique field names - in your case it would be SubjectUserName and TargetUserName - two completely distinct fields.
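The switch is a single setting on the Windows input (the Security log stanza is shown as an example):

    [WinEventLog://Security]
    renderXml = true

After that, events arrive as the XML from the Details tab, and the add-on's extractions give you SubjectUserName and TargetUserName as distinct fields.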
Hello, I'm facing a problem with my DB Connect:
Cannot communicate with task server, please check your settings.
DBX Server is not available, please make sure it is started and listening on 9998 port or consult documentation for details.
Do you have any idea?
We use Splunk Enterprise 9.2.1
@TestUser
I don't think you can prefill a file upload field with a previously uploaded file. Preventing that is a standard security and privacy feature in web browsers and web applications.
Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!