All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi @tanjiro_rengo, as I said, it depends on how you upload the file: if you use the manual Data Input through the web GUI, you can upload the file many times without any issue. If instead you are using a conf input, Splunk doesn't index the same log twice, so you should rename the file and use the option crcSalt = <SOURCE>. Ciao. Giuseppe
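As a sketch, a monitor stanza in inputs.conf using that option might look like the following (the path and sourcetype are just examples, not from your environment):

```
# Example monitor input; <SOURCE> is taken literally by Splunk
[monitor:///var/log/example/myfile.log]
crcSalt = <SOURCE>
sourcetype = my_sourcetype
```

With crcSalt = &lt;SOURCE&gt;, the file's full path is added to the CRC calculation, so a renamed copy of the same content is treated as a new file and indexed again.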
That's one of the limitations of ingesting Windows events in the "traditional" form. Open Event Viewer on your Windows computer, open the Security log, and find a 4624 event. What you're ingesting at this point is what you can see in the bottom panel on the "General" tab: the event rendered as human-readable text. It does contain fields named the same way (like Account Name), just differently "scoped" (indented a bit within the sections for either Subject or New Logon). So Splunk parses those fields as key/value pairs and simply gathers two different values for the same-named field, because the source data contains both. You could probably bend over backwards and try to write custom regexes to extract those specific values, but it would be very ugly and fairly bad performance-wise.

If you switch to ingesting XML versions of events, apart from saving on space occupied by events (and license usage!), you get a more unambiguous structure. You'd be ingesting the event as it's presented in the bottom Event Log panel on the Details tab in XML view. The structure might not be as readable there, but Splunk can parse this XML much better and present it to you in a useful form. And here you have much more straightforward and unique field names; in your case they would be SubjectUserName and TargetUserName, two completely distinct fields.
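For reference, switching the Windows event log input to XML rendering is typically done with the renderXml setting in inputs.conf on the forwarder. A minimal sketch (your stanza may have additional settings):

```
# Ingest Security events as XML instead of rendered text
[WinEventLog://Security]
renderXml = true
disabled = 0
```

After this change, new events arrive in XML form and Splunk's XML field extraction produces the distinct SubjectUserName and TargetUserName fields mentioned above.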
Hello, I'm facing a problem on my DB Connect: "Cannot communicate with task server, please check your settings." and "DBX Server is not available, please make sure it is started and listening on 9998 port or consult documentation for details." Do you have any idea? We use Splunk Enterprise 9.2.1.
@TestUser I don't think you can prefill a file upload field with a previously uploaded file. This is a standard security and privacy feature of web browsers and web applications. Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
Good afternoon! The problem is that the logs record two values in the account name field: the workstation name and the user login. Is it possible to modify this behavior so that only the login is recorded, since the workstation name is already captured in a separate field?
I have used a file upload field on the configuration page. I successfully uploaded the file using this field. However, when I edit the configuration, all other fields are prefilled with the previously saved values, except the file upload field. The file field does not get prefilled with the saved value. Is this the expected behavior, or is there any configuration I need to update to achieve this?
@danielbb I don't think there is any public documentation available from Splunk explaining these fields in relation to each other. They don't seem to be mutually exclusive; the values can be the same or differ depending on the search. Also, you can refer to https://community.splunk.com/t5/Splunk-Search/index-audit-contents/m-p/338588 Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
@mristic Confining the Splunk Forwarder with a custom SELinux policy is extremely challenging because of Splunk's complex architecture. There is a community project for your reference: https://github.com/doksu/selinux_policy_for_splunk You can also try running Splunk in permissive mode, collect the denials, and build a policy with audit2allow: https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-fixing_problems-allowing_access_audit2allow Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
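As a rough sketch of that audit2allow workflow (the module name splunkd_local is our own choice, and the exact denials you collect will drive what ends up in the policy):

```
# After running in permissive mode, review recent AVC denials
# and generate a local policy module from them
ausearch -m avc -ts recent | audit2allow -M splunkd_local

# Install the generated module (creates and loads splunkd_local.pp)
semodule -i splunkd_local.pp
```

Expect to iterate: run Splunk through its normal workload, collect new denials, regenerate, and re-review the generated rules before trusting them in enforcing mode.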
Hi @livehybrid, api_lt and api_et should correspond to the UI time range or the earliest_time and latest_time search API parameters as you noted, although I don't know if this is publicly documented. Similarly, api_index_et and api_index_lt should correspond to the index_earliest and index_latest search API parameters. search_lt and search_et should correspond to the computed epoch second values from the earliest, latest, and other time modifiers if they're provided as part of the base search:

index=main foo earliest=-24h@h latest=now

index=main foo starttime=06/29/2025:20:50:00

The audit log doesn't appear to capture the values passed to _index_earliest and _index_latest or translate them to api_index_et and api_index_lt, unfortunately, but they should be present in the search text.
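As a quick way to compare these fields side by side yourself, a sketch of an audit-log search (field availability may vary by Splunk version):

```
index=_audit action=search info=granted
| table _time user search api_et api_lt search_et search_lt
```

Running a few searches with different earliest/latest combinations and then inspecting this table should show how the UI/API values map onto the computed epoch values.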
Hi @mristic, While no specific guidance is available for Splunk Universal Forwarder, Splunk did publish RHEL 7/8-compatible SELinux policies as recently as Splunk Enterprise 9.2.2. You may be able to adapt them to your needs. See https://docs.splunk.com/Documentation/Splunk/9.2.2/CommonCriteria/InstallSELinux. https://download.splunk.com/products/security/splunk-selinux-0-0.9.0.el7.noarch.tgz https://download.splunk.com/products/security/splunk-selinux-0-0.9.0.el8.noarch.tgz
Hi @ReiGjuzi, The last version with support for Windows 7 was 6.4.11. The 32-bit and 64-bit links still work; however, the forwarder is no longer supported, the forwarder may contain vulnerabilities, and the forwarder may not communicate with supported versions of Splunk Enterprise or Splunk Cloud. Use these entirely at your own risk: https://download.splunk.com/products/universalforwarder/releases/6.4.11/windows/splunkforwarder-6.4.11-0691276baf18-x86-release.msi https://download.splunk.com/products/universalforwarder/releases/6.4.11/windows/splunkforwarder-6.4.11-0691276baf18-x64-release.msi
Has anyone managed to create an SELinux policy that confines the Splunk Forwarder while not limiting its functions? I'm trying to address the CIS benchmark "Ensure no unconfined services exist", as splunkd fails the test:

system_u:system_r:unconfined_service_t:s0 11315 ? 00:00:40 splunkd

In fact, two process instances are seen (not sure why):

# ps -eZ | grep "unconfined_service_t"
system_u:system_r:unconfined_service_t:s0 11379 ? 00:29:50 splunkd
system_u:system_r:unconfined_service_t:s0 11402 ? 00:02:28 splunkd

The "advice" seems to be as follows: "Determine if the functionality provided by the unconfined service is essential for your operations. If it is, you may need to create a custom SELinux policy to confine the service. Create Custom SELinux Policy: If the service needs to be confined, create a custom SELinux policy. For the splunkd service, we need to determine if it can be confined without disrupting its functionality. If splunkd requires unconfined access to function correctly, confining it might lead to degraded performance or loss of functionality."

This has proven to be very, very difficult, especially as I ultimately need to make this happen using Ansible automation. Thoughts? Solutions? Anything?
Hi @kn450, For a basic setup with either a standalone Splunk/Stream instance or separate Splunk and Stream instances, the steps at https://docs.splunk.com/Documentation/StreamApp/latest/DeployStreamApp/UseStreamtoingestNetflowandIPFIXdata result in a working configuration.

In my test environment using a standalone instance on RHEL, I made only the following changes to $SPLUNK_HOME/etc/apps/Splunk_TA_stream/local/streamfwd.conf to enable both capture and NetFlow/IPFIX:

[streamfwd]
streamfwdcapture.0.interfaceRegex = ens.+
netflowReceiver.0.port = 9996
netflowReceiver.0.decoder = netflow

I then enabled the netflow metadata stream in the Splunk Stream app. Using SolarWinds NetFlow Generator <https://www.solarwinds.com/free-tools/flow-tool-bundle> (not an endorsement, but it's free), I sent sample IPFIX data to the standalone instance, which Stream successfully decoded:

{"endtime":"2025-06-29T23:20:12Z","timestamp":"2025-06-29T23:20:12Z","bytes_in":0,"dest_ip":"192.168.1.25","dest_port":443,"dest_sysnum":0,"event_name":"netFlowData","exporter_ip":"192.168.1.158","exporter_time":"2025-Jun-29 23:20:12","flow_end_rel":0,"flow_start_rel":0,"input_snmpidx":8,"netflow_version":10,"nexthop_addr":"1.1.1.2","observation_domain_id":0,"output_snmpidx":5,"packets_in":0,"protoid":6,"seqnumber":23000,"src_ip":"192.168.1.132","src_port":15449,"src_sysnum":0,"tcp_flags":0,"tos":0}

Custom NetFlow parsing is described at https://docs.splunk.com/Documentation/StreamApp/latest/DeployStreamApp/AutoinputNetflow.

Can you confirm the default configuration works? If it does, we can dig into any customizations you need. If it doesn't, confirm your Stream instance is receiving correctly formatted IPFIX packets using tcpdump or another local capture tool.
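As a sketch of that last verification step, a tcpdump capture on the Stream host might look like this (the interface name ens192 is just an example; substitute your own):

```
# Show up to 10 UDP packets arriving on the configured NetFlow/IPFIX port
tcpdump -nn -i ens192 udp port 9996 -c 10
```

If no packets appear here while the exporter is sending, the problem is upstream of Stream (routing, firewall, or exporter configuration) rather than in streamfwd.conf.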
If you can figure out a working version, you could try downloading it with this: https://github.com/ryanadler/downloadSplunk
Yes, you could define several ports if needed by adding a new receiver to those indexers via an app deployed from the Cluster Manager. I'm not sure I understood correctly: do you flip your receiver port to some invalid value, or something like that? Basically, you could have a separate port reserved for internal nodes, blocked by firewall from normal traffic from UFs etc. and allowed only from the SHs and similar. Another receiver port is for all other UFs and IHFs (intermediate heavy forwarders). Then, when you need to block real incoming indexing traffic, just disable that port. The SHs stop using it, as indexer discovery tells them it is closed, and they continue to use the SH-only port. But as I said, you should still update your license to cover your real ingestion needs, or remove unnecessary ingestion.
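As a sketch, the two receivers could be defined in an app's inputs.conf pushed to the indexers (the port numbers here are examples, not a recommendation):

```
# Port for UFs and intermediate heavy forwarders;
# disable this stanza to pause normal ingestion
[splunktcp://9997]
disabled = 0

# Separate port reserved for internal nodes such as search heads,
# restricted to them by firewall rules
[splunktcp://19997]
disabled = 0
```

Disabling only the first stanza blocks general forwarder traffic while internal traffic on the second port keeps flowing.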
Hi, This is a bit of a guess, but maybe it will spark some ideas to try. I wonder if closing the computation inside the loop is not giving the server enough time to send its final response. It might be worth introducing some delay before closing, or trying a "try/catch" approach when closing the computation.
Hi @Fenilleh, Is the issue resolved, or are you still facing it? If the issue persists, please paste whatever errors you are getting in splunkd and mongod. Also, I am attaching a KB article; have a look to see if it is relevant: https://splunk.my.site.com/customer/s/article/KV-Store-Backup-Fails
Contact Splunk Support.  They may have a link to a version that works for you.
@ReiGjuzi Finding a legacy Splunk Universal Forwarder MSI for Windows 7 SP1 (x64) is tricky, since Microsoft and Splunk no longer support Windows 7 and official download pages prioritize newer versions for supported OSes like Windows 10 and 11. If you can't find the MSI on Splunk's official site, avoid unofficial mirrors due to security risks.
@simonsa I see that you're uploading a .gz file. Please extract it and upload the original, uncompressed file.