Splunk should work on both RHEL 8 and 9, but with 9 there are some additional steps which must be done before Splunk can be installed there. RHEL 9 has cgroups v2 as the default, and that version of Splunk supports only cgroups v1. There are probably also some security changes which you must take care of before Splunk works correctly. If I recall right, at least some of those are already present on 8, and maybe some more in 9?
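As an illustration of one such step, a minimal sketch of switching RHEL 9 back to the legacy cgroups v1 hierarchy via a kernel boot parameter; verify the exact procedure against the Red Hat and Splunk documentation for your versions before relying on it:

  # boot RHEL 9 with the legacy cgroups v1 hierarchy (takes effect after reboot)
  sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
  sudo reboot

  # confirm the active hierarchy before installing Splunk
  mount | grep cgroup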
Here is an old answer about indexing Windows .evtx files: https://community.splunk.com/t5/Getting-Data-In/Ingesting-offline-Windows-Event-logs-from-different-systems/m-p/649515
Hi @VeloPunk , to my knowledge it shouldn't be a problem; the only mandatory requirement is that the Splunk version must be the same. RHEL 8 and 9 should then behave the same. Obviously it would be better to have the same OS version everywhere, but for a transient period they should coexist. For an official answer, open a case with Splunk Support or ask your Splunk Sales Engineer. Ciao. Giuseppe
Please provide some sample events which demonstrate the issue you have with your search.
I'm on the server / infrastructure team at my organization. There is a dedicated Splunk team, and they want to replace some RHEL 7 Splunk servers with RHEL 8. RHEL 8 is already near the end of its lifecycle, and I'd rather provide them with RHEL 9, which is now our standard build. The fact that they still use RHEL 7 servers gives you some sense of how long it takes them to move their application to a new(ish) OS. They are insistent that we deploy them RHEL 8 servers so they are "all the same." I want to encourage them to move forward and have a platform that will be fully supported for several years to come. Is having some servers on RHEL 8 and some on RHEL 9 for a period of time an actual problem? They use version 9.1.2. I found this document: https://docs.splunk.com/Documentation/Splunk/9.1.2/Installation/Systemrequirements It lists support for both x86_64 kernels 4.x (RHEL 8) and 5.x (RHEL 9). It doesn't elaborate any further. I know that for various reasons we'd want to eventually have all servers on the same OS version; I'm just wondering if having RHEL 8 and RHEL 9 coexist for a limited period presents an actual problem. I'd appreciate your thoughts. Daniel
You should set up a pair of HFs or UFs as a gateway / "NLB" between the source clients in the public subnet and the cluster peers in the private network. Those gateway nodes use indexer discovery towards the Splunk indexers in the private subnet. They have static IPs towards the public subnet and they receive events from the source systems. The source systems then have a static outputs.conf containing the static IPs of those gateway nodes. There are no direct connections between the source systems and the Splunk indexers or the manager node. The "NLB" cannot be e.g. F5, AWS NLB or any similar real load balancer; see the sketch below.
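A minimal sketch of the two outputs.conf sides, assuming hypothetical addresses (198.51.100.x for the gateway nodes, manager.internal.example for the manager node) and the default receiving port 9997; check the outputs.conf reference for your Splunk version:

  # outputs.conf on the source systems (public subnet): static IPs of the gateways
  [tcpout]
  defaultGroup = gateway_group

  [tcpout:gateway_group]
  server = 198.51.100.10:9997,198.51.100.11:9997

  # outputs.conf on the gateway HF/UF nodes: indexer discovery towards the cluster
  [indexer_discovery:cluster1]
  master_uri = https://manager.internal.example:8089
  pass4SymmKey = <your_discovery_key>

  [tcpout:cluster1_group]
  indexerDiscovery = cluster1

  [tcpout]
  defaultGroup = cluster1_group

With this layout the forwarder tier, not a network load balancer, spreads events across the discovered indexers, which is why a real NLB such as F5 or AWS NLB doesn't fit here.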
I'm dumb. After messaging Splunk Support, it turns out that none of the apps I have installed are even using the KV Store.
I was looking at my organization's SVC utilization by the hour and I noticed a component under the label "process_type" with the value "Search launcher" which is consuming a huge portion of the SVCs. What exactly is this search launcher? How do I deep dive into what's running under the hood? Any tips on how to approach reducing the SVCs consumed by this?
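One place to start such a deep dive is the _introspection index, which records per-process resource usage. A sketch (field names are taken from the resource-usage introspection events and may vary by version):

  index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.process_type=*
  | timechart span=1h sum(data.pct_cpu) by data.process_type

From there you can narrow to the heaviest process_type and inspect data.process and data.args to see which processes are behind it.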
Perhaps I just need to check when more than 7 days have passed between one VA and the next.
I will explain my issue from the beginning to make it clearer. I have an index that contains vulnerabilities related to an IP, and in Splunk I receive VA data every week. Based on my IP and vulnerabilities, I would like to check three cases (see the sketch below):
1. Which vulnerabilities are new, i.e. those VAs that appear only in the current week.
2. Which vulnerabilities have reappeared after being absent for a week (I think I should check when a VA is missing for a week and then reappears, perhaps by looking at when the time between results is greater than 7 days).
3. Which vulnerabilities have disappeared, i.e. when the last week in which we had that VA is not the current one.
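A minimal SPL sketch of those three checks, assuming a hypothetical index named va_index with fields ip and plugin_id and one VA snapshot per week:

  index=va_index earliest=-13w@w
  | eval week=strftime(_time, "%Y-%U")
  | stats min(_time) as first_seen max(_time) as last_seen dc(week) as weeks_seen by ip, plugin_id
  | eval cur_week=relative_time(now(), "@w")
  | eval expected_weeks=round((last_seen - first_seen) / 604800) + 1
  | eval status=case(
        first_seen >= cur_week, "new",
        last_seen < cur_week, "disappeared",
        weeks_seen < expected_weeks, "reappeared after a gap",
        true(), "present every week")

The expected_weeks comparison is the "more than 7 days between results" idea: if a vulnerability was seen in fewer distinct weeks than the span between its first and last sighting implies, it must have been absent for at least one week in between.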
I am a bit confused by the guidance here... Does this re-enable the log(s)? We use the file /opt/splunkforwarder/var/log/splunk/metrics.log to check on our Linux UF deploys that /var/log/messages and auditd appear to be sending, with some basic foo in our deploy scripts. With SPL-263518 this is disabled by default now, and we either need to identify another method for a simple local check or we need to re-enable group=per_source_thruput so we can rely on a check like:

  [ $(sudo grep -c -e 'INFO Metrics - group=per_source_thruput, series="/var/log/messages", kbps=' /opt/splunkforwarder/var/log/splunk/metrics.log) -ne 0 ]

Is there a full writeup on SPL-263518 that has more info than the simple blurb in the known issues starting with 9.3.x? I.e., was this removed for a security reason or simply to reduce local log writes, etc.?
Hi @omcollia , I suppose that you inserted the weeksum extraction with eventstats before the eval. Ciao. Giuseppe
This is just to add some pieces of information. The Windows Event Log data is written to disk at least before and after a reboot or a restart of the "Windows Event Log" service. These files are then saved under C:\Windows\System32\winevt\Logs with names such as Application.evtx or Security.evtx. These files are in a somewhat "binary" format, but the format is known and there are tools to extract their data in text form; e.g. for the Python language there's a module named "python-evtx". I did not try using this module inside a Linux-based indexer to directly read the data from the files. Doing that is probably a bad idea for the standard Windows Event Logs, as those are best read using the solution provided above, but for "standalone" event files, which other applications might create, such tools are a way to go.
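For illustration, a minimal sketch of reading such a standalone file with the python-evtx module (the file name is hypothetical; verify against the module's documentation):

  import Evtx.Evtx as evtx  # pip install python-evtx

  # print every event record of the file as XML text
  with evtx.Evtx("Application.evtx") as log:
      for record in log.records():
          print(record.xml())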
If I run this command: | eval year=substr(weeksum,1,4) the field remains empty, maybe because my field weeksum comes from an eventstats command: | eventstats values(week) as weeksum by IP,dest_ip,plugin_id and maybe the multivalue field is in a format that's not readable?
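Since weeksum is produced by values(week), it is indeed a multivalue field, and substr() applied to it directly won't return what you expect. A sketch of one way to handle that with mvmap (available since Splunk 8.0), which applies the expression to each element:

  | eval year=mvmap(weeksum, substr(weeksum, 1, 4))
  | eval year=mvdedup(year)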
Another system is trying to send logs to Splunk over TLS. I was wondering if I need to create a certificate in Splunk, and I would like to know how to set up TLS reception in Splunk.
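A minimal sketch of what TLS reception can look like in inputs.conf on the receiving instance, assuming a hypothetical certificate path; Splunk ships with default certificates, but for production you would normally create or obtain your own server certificate (see the docs on securing inter-Splunk communication for your version):

  [splunktcp-ssl:9997]
  disabled = 0

  [SSL]
  serverCert = /opt/splunk/etc/auth/mycerts/myServerCert.pem
  sslPassword = <certificate_password>
  requireClientCert = false

The sending system then needs to trust the CA that signed this certificate and to connect to port 9997 over TLS.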
You probably wanted to do something like stats count(eval(isnotnull(attack_type))) I must say though that I don't like the stats eval syntax - it can be confusing. I prefer to do stuff explicitly. Like this: | eval isattack=if(isnotnull(attack_type),1,0) | stats sum(isattack) PS: Oh, and don't search across all your indexes. While it might not perform that badly on some small deployments or for a user with very limited permissions, it's a very bad habit which doesn't scale well. And don't use wildcards at the beginning of your search term (like *juniper*).
When I launch an application, the name of the application no longer appears at the top right.
What exactly are you trying to do? You should never need another party's private key.
If I understand you correctly, you want to remove all-empty columns from your original data, right? <your_search> | transpose 0 include_empty=f
Yes. crcSalt is rarely the way to go. The solution is usually to raise the initCrcLength value so that the checksum covers more than just a constant "header" shared by your files; see the sketch below. As for your original question - there can be several different reasons for it. Try checking the output of splunk list monitor and splunk list inputstatus for those problematic files.
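For illustration, a hedged inputs.conf sketch (the monitor path is hypothetical; the default initCrcLength is 256 bytes, so 1024 makes the CRC read past up to 1 KB of identical header):

  [monitor:///var/log/myapp/*.log]
  initCrcLength = 1024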