All Posts

Search Launcher is the process that starts (launches) searches. The reason it has such a large proportion of SVCs is that it's the catch-all category for searches that complete too quickly (under 10 seconds) to have their own metrics. Unfortunately, there is no way to dive into what is contributing to the Search Launcher metrics. I would focus on the SVCs that are NOT used by Search Launcher, as they represent bigger potential improvements.
If you want to use the current time instead of the time of a previous event (when they differ), you could set this in props.conf: DATETIME_CONFIG = [<filename relative to $SPLUNK_HOME> | CURRENT | NONE] and select the value CURRENT. But if the event time can be something other than the current time, then this is probably not what you want.
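A minimal props.conf sketch of that setting (the sourcetype name my_sourcetype is hypothetical; this belongs on the parsing tier, i.e. indexers or heavy forwarders):

```ini
# props.conf -- stamp events with the current indexing time
# instead of parsing a timestamp out of the raw event
[my_sourcetype]
DATETIME_CONFIG = CURRENT
```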
Version: 9.3.1. Yes, the issue is with all apps. Thanks.
Which Splunk version are you on, and does this issue affect all apps?
Hi @omcollia, ok, you need a completely different thing! You should run a search to understand whether a vulnerability is present in more than one week. So, if vulnerabilities are contained in a field called vulnerability, you could run something like this:
<your_search>
| eval weeksum=strftime(_time,"%Y:%V")
| stats dc(weeksum) AS weeksum_count values(weeksum) AS weeksum BY vulnerabilities
| eval present_weeksum=strftime(now(),"%Y:%V")
| eval status=case(
    weeksum_count=1 AND weeksum=present_weeksum, "Present in Last Week",
    weeksum_count=1 AND NOT weeksum=present_weeksum, "Present in Week: ".weeksum,
    weeksum_count>1, "Present in More Weeks")
You can customize this search using the field you have for vulnerabilities and additional conditions for status, following my approach. Ciao. Giuseppe
Splunk should work on both RHEL 8 and RHEL 9, but on 9 there are some additional steps you must take before Splunk can be installed there. RHEL 9 uses cgroups v2 by default, and that version of Splunk supports only cgroups v1. There are probably also some security changes you must take into account before Splunk works correctly. If I recall correctly, at least some of those are already present on RHEL 8, and maybe some more on 9.
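A quick way to check which cgroup version a host is actually running, plus the commonly cited boot-time workaround (shown only as a comment; the grubby arguments are the usual community suggestion, not an official recipe):

```shell
# On cgroups v2 (the RHEL 9 default), /sys/fs/cgroup is a cgroup2fs mount;
# on cgroups v1 it typically reports tmpfs instead.
fstype=$(stat -fc %T /sys/fs/cgroup 2>/dev/null || echo "unknown")
echo "cgroup filesystem type: $fstype"

# To force cgroups v1 on RHEL 9 for Splunk versions that support only v1
# (run manually, then reboot -- deliberately commented out here):
#   sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
```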
Here is one old answer for indexing windows .evtx files. https://community.splunk.com/t5/Getting-Data-In/Ingesting-offline-Windows-Event-logs-from-different-systems/m-p/649515
Hi @VeloPunk, to my knowledge it shouldn't be a problem; the only mandatory requirement is that the Splunk version must be the same. So RHEL 8 or 9 should be the same. Obviously it would be better to have the same OS version, but for a transition period they should coexist. For an official answer, open a case with Splunk Support or ask your Splunk Sales Engineer. Ciao. Giuseppe
Please provide some sample events which demonstrate the issue you have with your search
I'm on the server / infrastructure team at my organization. There is a dedicated Splunk team, and they want to replace some RHEL 7 Splunk servers with RHEL 8. RHEL 8 is already near the end of its lifecycle, and I'd rather provide them with RHEL 9, which is now our standard build. The fact that they still use RHEL 7 servers gives you some sense of how long it takes them to move their application to a new(ish) OS. They are insistent that we deploy RHEL 8 servers so they are "all the same." I want to encourage them to move forward and have a platform that will be fully supported for several years to come. Is having some servers on RHEL 8 and some on RHEL 9 for a period of time an actual problem? They use version 9.1.2. I found this document: https://docs.splunk.com/Documentation/Splunk/9.1.2/Installation/Systemrequirements It lists support for x86_64 kernels 4.x (RHEL 8) and 5.x (RHEL 9). It doesn't elaborate any further. I know that for various reasons we'd want to eventually have all servers on the same OS version; I'm just wondering if having RHEL 8 and RHEL 9 coexist for a limited period presents an actual problem. I'd appreciate your thoughts. Daniel
You should set up a pair of HFs or UFs as a gateway / "NLB" between the source clients in the public subnet and the cluster peers in the private network. Those gateway nodes use indexer discovery toward the Splunk indexers in the private subnet. They have static IPs facing the public subnet, and they receive events from the source systems. The source systems then have a static outputs.conf listing the static IPs of those gateway nodes. There are no direct connections between the source systems and the Splunk indexers or manager node. The "NLB" cannot be e.g. F5, AWS NLB, or any similar real load balancer.
I'm dumb. After messaging Splunk support, it turns out that none of the apps I have installed are even using the KV Store.
I was looking at my organization's SVC utilization by the hour, and I noticed a component under the label "process_type" with the value "Search launcher" which is consuming a huge portion of the SVCs. What exactly is this search launcher? How do I deep-dive into what's running under the hood? Any tips on how to approach reducing the SVCs it consumes?
Perhaps I just need to check when more than 7 days have passed between one VA and the next.
I will explain my issue from the beginning to make it clearer. I have an index that contains vulnerabilities related to an IP, and in Splunk I receive VA data every week. Based on IP and vulnerability, I would like to check for three different cases:
1. Which vulnerabilities are new, i.e., those VAs that appear only in the current week.
2. Which vulnerabilities have reappeared after being absent for a week (I think I should check when a VA is missing for a week and then reappears, perhaps by looking at when the time between results is greater than 7 days).
3. Which vulnerabilities have disappeared, i.e., when the last week in which we had that VA is not the current one.
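The reappearance check (case 2) can be sketched in SPL with streamstats, assuming each weekly scan produces events carrying ip and signature fields (both field names are assumptions -- substitute your own):

```
<your_search>
| bin _time span=1w
| stats count BY _time, ip, signature
| streamstats current=f last(_time) AS prev_scan BY ip, signature
| eval gap_days = (_time - prev_scan) / 86400
| eval reappeared = if(gap_days > 7, "yes", "no")
```

Rows where gap_days exceeds 7 mark a vulnerability that skipped at least one weekly scan and then came back.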
I am a bit confused by the guidance here... Does this re-enable the log(s)? We use the file /opt/splunkforwarder/var/log/splunk/metrics.log to check, in our Linux UF deploys, that /var/log/messages and auditd appear to be sending, with some basic foo in our deploy scripts. With SPL-263518 this is disabled by default now, and we either need to identify another method for a simple local check, or we need to re-enable group=per_source_thruput so we can keep relying on this check: [ $(sudo grep -c -e 'INFO Metrics - group=per_source_thruput, series="/var/log/messages", kbps=' /opt/splunkforwarder/var/log/splunk/metrics.log) -ne 0 ] Is there a full writeup on SPL-263518 that has more info than the simple blurb in the known issues starting with 9.3.x? I.e., was this removed for a security reason, or simply to reduce local log writes, etc.?
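For reference, that check can be exercised end to end against a sample file; a sketch (the metrics.log line below is illustrative, not copied from a real forwarder):

```shell
# Write an illustrative per_source_thruput metrics line to a temp file
tmpfile=$(mktemp)
cat > "$tmpfile" <<'EOF'
01-01-2024 00:00:00.000 +0000 INFO Metrics - group=per_source_thruput, series="/var/log/messages", kbps=1.234, eps=3.0
EOF

# Count matching thruput lines for the source we care about
count=$(grep -c -e 'group=per_source_thruput, series="/var/log/messages"' "$tmpfile")
if [ "$count" -ne 0 ]; then
  echo "source appears to be sending"
fi
rm -f "$tmpfile"
```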
Hi @omcollia, I suppose that you inserted the weeksum extraction with eventstats before the eval. Ciao. Giuseppe
This is just to add some pieces of information. The Windows Event Log data is written to disk at least before and after a reboot or a restart of the "Windows Event Log" service. These files are saved under C:\Windows\System32\winevt\Logs with names such as Application.evtx or Security.evtx. These files are in a somewhat "binary" format, but the format is known, and there are tools to extract their data in text form; e.g., in Python there's a module named "python-evtx". I did not try using this module inside a Linux-based indexer to read the data directly from the files. Doing this is probably a bad idea for the standard Windows Event Logs, as those are best read using the solution provided above, but in the case of "standalone" event files, which other applications might create, using such tools is a way to go.
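python-evtx exposes each record as an XML string, so the extraction step reduces to ordinary XML parsing; a stdlib-only sketch (the record XML below is illustrative, and the real module call would look something like Evtx(path).records() -- treat that API shape as an assumption):

```python
import xml.etree.ElementTree as ET

# Illustrative record XML, shaped like what python-evtx yields per record
record_xml = """<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Service Control Manager"/>
    <EventID>7036</EventID>
    <TimeCreated SystemTime="2024-01-01T00:00:00.000000Z"/>
  </System>
</Event>"""

# Event Log XML is namespaced, so queries need the namespace map
ns = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}
root = ET.fromstring(record_xml)
event_id = root.findtext("e:System/e:EventID", namespaces=ns)
timestamp = root.find("e:System/e:TimeCreated", ns).get("SystemTime")
print(event_id, timestamp)
```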
If I run this command: | eval year=substr(weeksum,1,4) the field remains empty, maybe because my field weeksum comes from an eventstats command: | eventstats values(week) as weeksum by IP,dest_ip,plugin_id and maybe the multivalue field is in a format that's not readable?
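That is the likely cause: substr() returns null when applied to a multivalue field holding more than one value. One way around it, assuming weeksum holds values like 2024:05, is mvmap(), which applies the expression to each value in turn:

```
| eval year=mvmap(weeksum, substr(weeksum, 1, 4))
```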
Another system is trying to send logs to Splunk over TLS. I was wondering whether I need to create a certificate in Splunk, and I want to know how to set up TLS reception in Splunk.
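At a minimum, TLS receiving is configured in inputs.conf on the receiving instance; a sketch (port, paths, and password are placeholders -- you can use a certificate signed by your own CA, or, for testing only, the default certificates Splunk generates):

```ini
# inputs.conf on the receiver.
# Use tcp-ssl for a non-Splunk sender; for a Splunk forwarder,
# use a [splunktcp-ssl:<port>] stanza instead.
[tcp-ssl:6514]
sourcetype = <your_sourcetype>

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/server.pem
sslPassword = <private key password>
requireClientCert = false
```

Restart Splunk after the change, and set requireClientCert = true only if the sending system will present its own certificate.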