All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Port 8089 is used only for REST API requests and responses, not for sending logs! You need a separate port for those, like 9997 in a normal setup. It doesn't matter which port it is; just ensure it's allowed in all firewalls between the SH and the indexers. When you flip the port to XXXX or 9998, indexer discovery tells the SH that a new receiver port has been activated, so the SH should start using it and drop the previous 9997. If there is e.g. a firewall blocking traffic from the SH to the indexers on those new ports, the SH can't work as expected and, I expect, once it loses access to its current LM logs, the other issues you mentioned will start to appear. You should find some hints in your instances' internal logs if this is really what has happened.
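As a rough sketch of what this looks like in the configs (hostnames, the group name `cluster_peers`, and the key are placeholders, not your actual values):

```ini
# inputs.conf on each indexer -- opens a receiving port for forwarded data
# (9997 is the conventional default; any free port allowed through the
# firewalls between the SH and the indexers works)
[splunktcp://9997]
disabled = 0

# outputs.conf on the search head -- indexer discovery via the cluster manager,
# which is how the SH learns about newly activated receiver ports
[indexer_discovery:cm1]
master_uri = https://cluster-manager.example.com:8089
pass4SymmKey = <your_key>

[tcpout:cluster_peers]
indexerDiscovery = cm1
```

With indexer discovery configured this way, flipping the receiving port on the indexers is picked up automatically, which is why a firewall rule that only allows the old port breaks things silently.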
When we set up a cluster, the SH, CM, and the indexers stay connected over the management port 8089 and will keep sending _internal logs no matter what, but the forwarders use the inputs port 9997 to send data to the indexers. In our case, we only flip the port to XXXX or 9998, depending on the type of forwarding setup used. We have controlled data ingestion and always stay within limits, but sometimes unexpected testing causes a high input flow, so we have to take measures to make sure we don't breach the license.
OK, my license had expired, so that's probably the problem.
Hello fellow ES 8.X enjoyers. We have a few Splunk Cloud customers that got upgraded to ES 8.1. We have noticed that all the drill-down searches from Mission Control use the time range "All time", even though we configured the earliest and latest offsets with $info_min_time$ and $info_max_time$. After saving the search again, the problem vanished. I also created a new search, and it worked correctly immediately. It worked for the existing searches before the update and stopped working after the upgrade.  Anybody else with the same experience?  Best regards
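For reference, the offsets described above live per correlation search in savedsearches.conf; a sketch of the relevant stanza (the search name here is an example, not from the poster's environment) looks like:

```ini
# savedsearches.conf -- drill-down time offsets on an ES correlation search.
# $info_min_time$ / $info_max_time$ resolve to the search's actual earliest
# and latest times, so the drill-down should inherit them instead of "All time".
[Example - Suspicious Login - Rule]
action.notable.param.drilldown_earliest_offset = $info_min_time$
action.notable.param.drilldown_latest_offset = $info_max_time$
```

Re-saving the search (as the poster observed) rewrites this stanza, which may explain why the fix sticks afterwards.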
Has it ever worked and now suddenly stopped? (If so - what changes were made to your environment?) Or is it a new installation? (How exactly did you install it?) Do the logs show any related errors?
The github project seems kinda old. Very old. As far as I remember the modern UF runs... fairly well with SELinux but needs tweaking in order to grant access to specific items. So the audit2allow approach is a fairly proper one.
Hi @tanjiro_rengo  It ultimately depends on which configuration file changes you have applied, i.e. whether this is a search-time or an index-time change. Index-time changes will not apply retrospectively to existing indexed data.  Could you please share your configuration changes and let us know how you are sending this file to Splunk?  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
Hi @hv64  Can you please confirm if this previously worked, or if this is a new install? I see you aren't running the latest version; it might be worth upgrading to 4.0.0 unless there are known issues preventing this, as that could rule out any previous bugs. There is a huge range of things which could be wrong based on this error. The first thing I would try is restarting splunkd in case this resolves it; have you already tried this? I'd also recommend checking out https://splunk.my.site.com/customer/s/article/Splunk-App-for-DB-Connect-Cannot-Communicate-With-Task-Server which has a number of troubleshooting steps, rather than listing them all here! Good luck!  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
Hi @tanjiro_rengo , as I said, it depends on how you upload the file: if you use the manual Data Input via the web GUI, you can upload the file many times without any issue. If instead you are using a conf input, Splunk doesn't index the same log twice, so you should rename it and use the option crcSalt = <SOURCE>. Ciao. Giuseppe
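A minimal sketch of such a monitor stanza (the path, index, and sourcetype are placeholders):

```ini
# inputs.conf -- monitor input for a file that gets re-delivered under new names.
# By default Splunk checksums the first bytes of a file to decide if it has
# already been indexed; crcSalt = <SOURCE> adds the full file path to that
# checksum, so a renamed copy of the same content is treated as a new file.
[monitor:///var/log/myapp/sample.log]
index = main
sourcetype = myapp:log
crcSalt = <SOURCE>
```

Note that `<SOURCE>` is a literal keyword here, not something to substitute.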
That's one of the limitations of ingesting Windows events in the "traditional" form. Open Event Viewer on your Windows computer. Open the Security log and find a 4624 event. What you're ingesting at this point is what you can see in the bottom panel in the "General" tab: the event rendered as human-readable text. It does contain fields named the same way (like Account Name), just differently "scoped" (indented a bit in the sections regarding either Subject or New Logon).  So Splunk parses those fields as key/value pairs and simply gathers two different values for the same named field, because the source data contains both. You could probably bend over backwards and try to write custom regexes to extract those specific values, but it would be very ugly and fairly bad performance-wise. If you switch to ingesting XML versions of events, apart from saving on space occupied by events (and license usage!), you get a more unambiguous structure. You'd be ingesting the event as it's presented in the bottom Event Log panel in the Details tab in XML view. The structure might not be as readable here, but Splunk can parse this XML much better and present it to you in a useful form. And here you have much more straightforward and unique field names; in your case it would be SubjectUserName and TargetUserName, two completely distinct fields.
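The switch to XML ingestion described above is a one-line change on the forwarder; a sketch of the input stanza:

```ini
# inputs.conf on the Windows forwarder -- collect the Security channel as
# raw XML events instead of the rendered "General" tab text, yielding
# unambiguous field names such as SubjectUserName and TargetUserName.
[WinEventLog://Security]
disabled = 0
renderXml = true
```

Be aware that changing `renderXml` changes the shape of the indexed events, so existing searches and field extractions built against the rendered-text form may need updating.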
Hello, I'm facing a problem with my DB Connect:  "Cannot communicate with task server, please check your settings. DBX Server is not available, please make sure it is started and listening on 9998 port or consult documentation for details."  Do you have any ideas?  We use Splunk Enterprise 9.2.1.
@TestUser  I don't think you can prefill a file upload field with a previously uploaded file. It's a standard security and privacy feature of web browsers and web applications. Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
Good afternoon! The problem is that the logs record two values in the account name field: the workstation name and the user login. Is it possible to modify this behavior so that only the login is recorded, since the workstation name is already captured in a separate field?
I have used a file upload field on the configuration page. I successfully uploaded the file using this field. However, when I edit the configuration, all other fields are prefilled with the previously saved values, except the file upload field. The file field does not get prefilled with the saved value. Is this the expected behavior, or is there any configuration I need to update to achieve this?
@danielbb  I don't think there is any public documentation available from Splunk explaining these fields. They don't seem mutually exclusive; they can be the same or differ depending on the search. Also, you can refer to https://community.splunk.com/t5/Splunk-Search/index-audit-contents/m-p/338588 Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
@mristic  Confining the Splunk Forwarder with a custom SELinux policy is extremely challenging because of Splunk's complex architecture. There is a community project for your reference: https://github.com/doksu/selinux_policy_for_splunk Also, you can try running Splunk in permissive mode, collecting denials, and building a policy with audit2allow: https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-fixing_problems-allowing_access_audit2allow Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
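A sketch of that permissive-mode workflow (run as root on the forwarder host; assumes the audit and policycoreutils tooling is installed, and the generated module should be reviewed before loading):

```shell
# 1. Temporarily switch SELinux to permissive mode so splunkd runs
#    normally while denials are still logged to the audit log
setenforce 0

# 2. Exercise the forwarder normally, then generate a local policy
#    module from the collected AVC denials
ausearch -m avc -ts recent | audit2allow -M splunkd_local

# 3. Review splunkd_local.te, then install the module and
#    re-enable enforcing mode
semodule -i splunkd_local.pp
setenforce 1
```

Repeating the collect/review/install cycle a few times is usually needed before all of the forwarder's access patterns are covered.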
Hi @livehybrid, api_lt and api_et should correspond to the UI time range or the earliest_time and latest_time search API paramters as you noted, although I don't know if this is publicly documented.... See more...
Hi @livehybrid, api_lt and api_et should correspond to the UI time range or the earliest_time and latest_time search API parameters as you noted, although I don't know if this is publicly documented. Similarly, api_index_et and api_index_lt should correspond to the index_earliest and index_latest search API parameters. search_lt and search_et should correspond to the computed epoch second values from the earliest, latest, and other time modifiers if they're provided as part of the base search: index=main foo earliest=-24h@h latest=now index=main foo starttime=06/29/2025:20:50:00 The audit log doesn't appear to capture the values passed to _index_earliest and _index_latest or translate them to api_index_et and api_index_lt, unfortunately, but they should be present in the search text.
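To compare these fields side by side, a quick check against the audit index is a sketch like the following (field availability may vary by Splunk version):

```spl
index=_audit action=search info=granted search=*
| table _time user search api_et api_lt api_index_et api_index_lt search_et search_lt
```

Running a search with explicit earliest/latest modifiers first and then inspecting its audit event this way makes the mapping easy to confirm in your own environment.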
Hi @mristic, While no specific guidance is available for Splunk Universal Forwarder, Splunk did publish RHEL 7/8-compatible SELinux policies as recently as Splunk Enterprise 9.2.2. You may be able t... See more...
Hi @mristic, While no specific guidance is available for Splunk Universal Forwarder, Splunk did publish RHEL 7/8-compatible SELinux policies as recently as Splunk Enterprise 9.2.2. You may be able to adapt them to your needs. See https://docs.splunk.com/Documentation/Splunk/9.2.2/CommonCriteria/InstallSELinux. https://download.splunk.com/products/security/splunk-selinux-0-0.9.0.el7.noarch.tgz https://download.splunk.com/products/security/splunk-selinux-0-0.9.0.el8.noarch.tgz
Hi @ReiGjuzi, The last version with support for Windows 7 was 6.4.11. The 32-bit and 64-bit links still work; however, the forwarder is no longer supported, the forwarder may contain vulnerabilities, and the forwarder may not communicate with supported versions of Splunk Enterprise or Splunk Cloud. Use these entirely at your own risk: https://download.splunk.com/products/universalforwarder/releases/6.4.11/windows/splunkforwarder-6.4.11-0691276baf18-x86-release.msi https://download.splunk.com/products/universalforwarder/releases/6.4.11/windows/splunkforwarder-6.4.11-0691276baf18-x64-release.msi
Has anyone managed to create an SELinux policy that confines Splunk Forwarder while not limiting its functions? I'm trying to address the CIS benchmark "Ensure no unconfined services exist", as splunkd fails the test: system_u:system_r:unconfined_service_t:s0 11315 ? 00:00:40 splunkd In fact, two process instances are seen (not sure why).   # ps -eZ | grep "unconfined_service_t" system_u:system_r:unconfined_service_t:s0 11379 ? 00:29:50 splunkd system_u:system_r:unconfined_service_t:s0 11402 ? 00:02:28 splunkd   The advice seems to be as follows: "Determine if the functionality provided by the unconfined service is essential for your operations. If it is, you may need to create a custom SELinux policy to confine the service. Create Custom SELinux Policy: If the service needs to be confined, create a custom SELinux policy. For the splunkd service, we need to determine if it can be confined without disrupting its functionality. If splunkd requires unconfined access to function correctly, confining it might lead to degraded performance or loss of functionality." This has proven to be very, very difficult, especially as I ultimately need to make this happen using Ansible automation. Thoughts? Solutions? Anything?