Ah ok - that's helpful info. The SPL-263518 entry in both the 9.3 and 9.4 release notes doesn't really state that it was a regression, and there's no link explaining that... it would be easier as a consumer if that SPL linked to a longer writeup/explanation. Do you happen to know if there's a plan/timeline for re-adding it? Will it go into, say, 9.3.3 and 9.4.1, or will 9.3 and 9.4 just keep this regression and 9.5 re-add it, perhaps?
> Does this re-enable the log(s)?
Yes.
> we need to re-enable group=per_source_thruput so we can rely on that check
Apply the workaround.
> was this removed for a security reason or just simply to reduce local log writes, etc?
It was accidentally removed (a regression).
Have you tried this (if it's one of the input types mentioned)? run_only_one = <boolean> * Determines whether a scripted or modular input runs on only one search head in an SHC.
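In inputs.conf that would look something like this (a minimal sketch; the script path and interval are placeholders, not from your environment):

```
# inputs.conf in an app deployed to the SHC members
[script://./bin/my_script.sh]
interval = 300
# Run this scripted input on only one member of the search head cluster
run_only_one = true
```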
One thing which I "found" (fortunately in a test environment, with backups): if your deployer is down long enough, your SHC members lose all the apps that were deployed by the deployer! Have you lost the deployer itself, or only the connection between it and the members? If only the connection, it should be enough to restore connectivity between them. Of course, you must ensure that the deployer still has all the apps it previously deployed to the members. If you have lost the whole node, have no backup, and must rebuild it from scratch, then there are some things you must check and update before you can put it back online:
- Ensure the apps you previously deployed are in place, with the same configuration.
- Ensure that the lookups are correct.
- Check that deployment modes are set up correctly, globally and at the app level (how local and default are transferred to members).
- Check how lookups should be pushed to members: override, or keep the member version?
- Check whether the members have the same splunk.secret, and if so, copy it to the deployer before starting it the first time.
- Then those items which you and @gcusello already mentioned.
- Check that all nodes have the same time!
Maybe something else? If you can do and test this in a test environment, do it first and check what issues arise after the deployer is back online. If you aren't certain, you must take a backup of all those nodes while they are offline, and include a kvstore backup too. I would like to hear how this succeeds after you have done it!
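For reference, once the rebuilt deployer is verified, pushing the bundle back out is the normal apply step (the target hostname and credentials below are placeholders):

```
splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme
```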
Hi folks, I'm looking to use es_notable_events as a way of building out a panel that will get info on ES events for the past 7 days, specifically how many alerts were closed by the team and what the alert names are. The search I am using is as follows:
| `es_notable_events`
| search timeDiff_type=current
| stats sparkline(sum(count),30m) as sparkline, sum(count) as count by rule_name
| sort 100 - count
| table rule_name, count
This works perfectly for the past 48 hours, but it doesn't go back as far as a week (a known limitation when using es_notable_events, apparently!). My question is: are there any alternative searches I can run that will get these results?
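One alternative that is often suggested, assuming a default ES setup where notables land in the notable index, is the `notable` macro; field names like status_label can vary by ES version, so treat this as a sketch rather than a drop-in answer:

```
`notable`
| search status_label="Closed"
| stats sparkline(count,30m) as sparkline, count by rule_name
| sort 100 - count
| table rule_name, count
```

Run it with the panel's time range set to the last 7 days.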
I'm trying to learn about Splunk for an upcoming position. I recently purchased Parallels so I could utilize Windows VMs. I was trying to set up an indexer on one VM and the forwarder on another, and just mess around with Splunk's capabilities. Is this even possible? So far it hasn't worked, and I have tried a few alterations on the outputs.conf file on the forwarder. Since the VMs have the same public address, I tried to use the private address, and I also tried to go by hostname, and it still didn't work. Any suggestions?
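For reference, a minimal working setup usually needs two pieces: receiving enabled on the indexer, and an outputs.conf (note: plural) on the forwarder pointing at the indexer's private IP. The group name, IP, and port below are placeholders:

```
# On the indexer VM, enable receiving (Settings > Forwarding and receiving, or CLI):
#   splunk enable listen 9997

# On the forwarder VM: $SPLUNK_HOME/etc/system/local/outputs.conf
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = 192.168.1.10:9997
```

Also check that the firewall on the indexer VM allows inbound TCP 9997; `splunk list forward-server` on the forwarder shows whether the connection is active.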
I'm not sure if I fully understood your issue. Anyhow, a sourcetype's purpose is to separate different log formats. If all those apps have exactly the same lexical log content, then they could/should have the same sourcetype. But if, e.g., apple and orange create logs that have different fields etc., then those logs should be assigned different sourcetypes. When you have two different apps with exactly the same log format but you want to separate them, you could name them something like sourcetype="apple:module:log1" and sourcetype="orange:module:log1", or whatever names you want to use. Then just define those identically in props.conf.
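As a sketch, the props.conf stanzas (the names and settings here are illustrative, not taken from your environment) would look like:

```
# props.conf -- identical parsing settings, separate sourcetype names
[apple:module:log1]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_FORMAT = %Y-%m-%d %H:%M:%S

[orange:module:log1]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_FORMAT = %Y-%m-%d %H:%M:%S
```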
Search Launcher is the process that starts (launches) searches. The reason it has such a large proportion of SVCs is that it's the catch-all category for searches that complete too quickly (<10 seconds) to have their own metrics. Unfortunately, there is no way to dive into what is contributing to the Search Launcher metrics. I would focus on SVCs that are NOT used by Search Launcher, as they represent bigger potential improvements.
If you want to use the current time instead of the time of a previous event that differs from the current time, then in props.conf you could set DATETIME_CONFIG = [<filename relative to $SPLUNK_HOME> | CURRENT | NONE] and select the value CURRENT. But if the event time can be something other than the current time, then probably not.
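For example, assuming a sourcetype named my:sourcetype (a placeholder):

```
# props.conf on the parsing tier (indexer or heavy forwarder)
[my:sourcetype]
# Stamp events with the time they are indexed instead of parsing a timestamp
DATETIME_CONFIG = CURRENT
```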
Version: 9.3.1. Yes, the issue is with all apps. Thx
Which Splunk version, and is this issue with all apps?
Hi @omcollia , ok, you need a completely different thing! You should run a search to understand if a vulnerability is present in more than one week. So, if vulnerabilities are contained in a field called vulnerability, you could run something like this:
<your_search>
| eval weeksum=strftime(_time,"%Y:%V")
| stats dc(weeksum) AS weeksum_count values(weeksum) AS weeksum BY vulnerabilities
| eval present_weeksum=strftime(now(),"%Y:%V")
| eval status=case(
    weeksum_count=1 AND weeksum=present_weeksum, "Present in Last Week",
    weeksum_count=1 AND NOT weeksum=present_weeksum, "Present in Week: ".weeksum,
    weeksum_count>1, "Present in More Weeks")
You can customize this search using the field you have for vulnerabilities and the additional conditions for status, following my approach. Ciao. Giuseppe
Splunk should work on both RHEL 8 and 9, but with 9 there are some additional steps that you must do before Splunk can be installed there. RHEL 9 has cgroups v2 as the default, and that version of Splunk supports only version 1. There are probably also some security changes that you must take into account before Splunk works correctly. If I recall right, at least some of those are already in 8, and maybe some more in 9?
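One commonly referenced workaround for the cgroups issue is to boot RHEL 9 with cgroups v1; check the Splunk docs for your exact version first, since newer Splunk releases add v2 support:

```
# Switch the host back to cgroups v1 (takes effect after reboot)
sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
sudo reboot
```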
Here is one old answer for indexing windows .evtx files. https://community.splunk.com/t5/Getting-Data-In/Ingesting-offline-Windows-Event-logs-from-different-systems/m-p/649515
Hi @VeloPunk , to my knowledge it shouldn't be a problem; the only mandatory requirement is that the Splunk version must be the same. Then RHEL 8 or 9 should be the same; obviously it would be better to have the same OS version, but for a transient period they should live together. To have an official answer, open a case with Splunk Support or ask your Splunk Sales Engineer. Ciao. Giuseppe
Please provide some sample events that demonstrate the issue you have with your search.
I'm on the server / infrastructure team at my organization. There is a dedicated Splunk team, and they want to replace some RHEL 7 Splunk servers with RHEL 8. RHEL 8 is already near the end of its lifecycle, and I'd rather provide them with RHEL 9, which is now our standard build. The fact that they still use RHEL 7 servers gives you some sense of how long it takes them to move their application to a new(ish) OS. They are insistent that we deploy them RHEL 8 servers so they are "all the same." I want to encourage them to move forward and have a platform that will be fully supported for several years to come. Is having some servers on RHEL 8 and some on RHEL 9 for a period of time an actual problem? They use version 9.1.2. I found this document: https://docs.splunk.com/Documentation/Splunk/9.1.2/Installation/Systemrequirements It lists support for x86_64 with both kernel 4.x (RHEL 8) and kernel 5.x (RHEL 9). It doesn't elaborate any further. I know that for various reasons we'd want to eventually have all servers on the same OS version; I'm just wondering if having RHEL 8 and RHEL 9 coexist for a limited period presents an actual problem. I'd appreciate your thoughts. Daniel
You should set up a pair of HFs or UFs as a gateway / "NLB" between the source clients in the public subnet and the cluster peers in the private network. Those gateway nodes use indexer discovery toward the Splunk indexers in the private subnet. They have static IPs toward the public subnet, and they receive events from the source systems. The source systems then have a static outputs.conf containing the static IPs of those gateway nodes. There are no direct connections between the source systems and the Splunk indexers or manager node. The "NLB" cannot be e.g. F5, AWS NLB, or any similar real load balancer.
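A sketch of the two outputs.conf files (the hostnames, IPs, group names, and key are placeholders):

```
# outputs.conf on the gateway HF/UF nodes: indexer discovery via the manager node
[indexer_discovery:cluster1]
pass4SymmKey = <your_key>
master_uri = https://manager.private.example:8089

[tcpout:cluster1_peers]
indexerDiscovery = cluster1

[tcpout]
defaultGroup = cluster1_peers
```

```
# outputs.conf on the source systems: static IPs of the gateway nodes only
[tcpout:gateways]
server = 203.0.113.10:9997, 203.0.113.11:9997

[tcpout]
defaultGroup = gateways
```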
I'm dumb, turns out that none of the apps I have installed are even using the KV Store after messaging Splunk support.
I was looking at my organization's SVC utilization by the hour, and I noticed this component under the label "process_type" with the value "Search launcher", which is consuming a huge portion of the SVCs. What exactly is this Search Launcher? How do I deep-dive into what's running under the hood? Any tips on how to approach reducing the SVCs consumed by this?