All Posts


If you want to follow KISS (keep it simple, stupid): as I said, you should do this forwarding/splitting to separate targets on the first full Splunk Enterprise instance (in this case your SH, MC, LM, etc.). Just split the events there. In that case I assume you don't even need separate transforms.conf/props.conf to do it. Just add a new app with your own inputs.conf and an additional outputs.conf with _SYSLOG_ROUTING, so that the inputs send all events to both destinations (the syslog group plus the default output). But since you have QRadar as a target, the log events might need some modification; I cannot recall what kind of syslog feed QRadar expects. If it supports the defaults which Splunk can send, you should use those. Otherwise you must add props.conf + transforms.conf to modify the events as needed.
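As a minimal sketch of such an app (the group name, server address, and monitored path below are invented for illustration, not taken from this thread):

# outputs.conf - declare the extra syslog destination
[syslog:qradar_syslog]
server = qradar.example.com:514
type = udp

# inputs.conf - events still go to the default tcpout group,
# and _SYSLOG_ROUTING sends a copy to the syslog group above
[monitor:///var/log/example/app.log]
index = main
_SYSLOG_ROUTING = qradar_syslog

If QRadar needs the events reshaped, that is where the props.conf + transforms.conf mentioned above would come in.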
Unfortunately there is no ready-to-use list. But you could create one based on this: https://docs.splunk.com/Documentation/Splunk/9.4.2/ReleaseNotes/Deprecatedfeatures#Platform_support_changes_in_version_9.4 Unfortunately you must go through quite a few release notes to map Splunk versions to deprecated OS versions. After that you could utilize @livehybrid's query with some modifications to use your os-support.csv. And if you also need this information for UFs, remember that their support times are longer than those of the core components! Here is a link to the Splunk core support times: https://www.splunk.com/en_us/legal/splunk-software-support-policy.html#core and just after it you can see the UF support times, which are longer, e.g. support for UF 9.0 ends after 36 months instead of 24 months like core.
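A rough sketch of that modification (assuming the CSV has been uploaded as a lookup table file named os-support.csv with columns os and os_eos_date; both column names are invented here):

index=_internal hostname=* os=*
| stats first(os) as os, first(version) as splunk_version by hostname
| lookup os-support.csv os OUTPUT os_eos_date
| eval os_eos_unix=strptime(os_eos_date,"%b %d %Y")
| eval os_support_status=if(os_eos_unix>now(),"In Support","Out of Support")

The date format in the strptime is assumed to match the one used in @livehybrid's query elsewhere in this thread; adjust it to however your CSV stores the end-of-support dates.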
The best way to solve this kind of issue is to use btool, e.g. splunk btool outputs list --debug. This shows all stanzas and values exactly as splunkd will take them into use after a restart, even if the files were changed after the last restart.
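For example (--debug additionally prints which file each effective setting comes from):

# effective outputs.conf, annotated with the source file of every line
$SPLUNK_HOME/bin/splunk btool outputs list --debug

# the same, narrowed to stanzas whose names start with tcpout
$SPLUNK_HOME/bin/splunk btool outputs list tcpout --debug

The same pattern works for any .conf file, e.g. splunk btool inputs list --debug.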
In my case I had this configuration line in another app: forwardedindex.2.whitelist = (_audit|_introspection|_internal) I then noticed that etc/apps/SplunkDeploymentServerConfig/default/outputs.conf contains the statement forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup|_configtracker|_dsclient|_dsphonehome|_dsappevent) Mine took precedence because it was in a local directory. After removing my local one I am able to see clients in the deployment server GUI again.
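To check which copy of such a setting wins, and from which file it comes, btool can be combined with grep (a sketch; the stanza name assumes the setting lives under [tcpout]):

$SPLUNK_HOME/bin/splunk btool outputs list tcpout --debug | grep forwardedindex.2.whitelist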
Hi @msatish If you open the Developer Console in your browser and navigate to the Network tab, have a look for any resources which load slowly; please let us know which URIs load slowly, and that might help us work out what the slowness is caused by. Please can you also confirm your architecture: resource allocation (CPU/memory etc.), number of SHs/IDXs, etc.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @SN1 If you're looking for Splunk versions and their support status then you can use the following:

index=_internal hostname=* os=*
| stats first(fwdType) as fwdType, first(os) as os, first(version) as splunk_version by hostname
| rex field=splunk_version "(?<version_minor>[0-9]+\.[0-9]+)"
| append [| makeresults format=csv data="version_minor, eos_date
9.0,Jun 14 2024
9.1,Jun 28 2025
9.2,Jan 31 2026
9.3,Jul 24 2026
9.4,Dec 16 2026"
| eval eos_unix=strptime(eos_date,"%b %d %Y")]
| stats values(hostname) as hosts, first(eos_date) as eos_date, first(eos_unix) as eos_unix by version_minor
| eval support_status=IF(eos_unix>time(),"In Support","Out of Support")
| fillnull eos_date value="Unknown"
| where hosts!=""

However, if you want the base OS then this might be a little trickier.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
You have an accelerated perception of time, so things appear slower than they really are? You are using under-powered technology that struggles with the workload being executed? On a less frivolous note, please expand on what you are seeing and how you have determined that there is slowness.
Works great! Thanks!
@Jimenez   
What are the reasons for the slowness observed in the Splunk Mission Control incident review dashboard?
| appendpipe [| chart values(fieldB) as unique by fieldA]
| eventstats sum(unique) as sum_unique
| where isnull(unique)
| fields - unique

The appendpipe adds one extra row per fieldA value carrying its distinct fieldB, eventstats sums those across all rows into sum_unique, and the where drops the appended rows again.
Hi all, I have the following situation with a query returning a table of this kind:

fieldA  fieldB
A       2
A       2
B       4
B       4

I need to add a column to this table that sums up fieldB only once per unique fieldA value, i.e. a new column that sums 2+4 = 6. The table would look like this:

fieldA  fieldB  sum_unique
A       2       6
A       2       6
B       4       6
B       4       6

I know that I have to use | eventstats sum() here, but I am struggling with how to define that it has to happen once per unique fieldA value. Thanks in advance Miguel
It really depends on the details. It might be easier to use the RULESET functionality on the indexers. It might be easier to send the data directly from the SH/LM/CM/whatever to QRadar using another (non-Splunk) method. Each of those methods has its pros and cons, mostly tied to manageability and the "cleanliness" of the architecture.
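For the RULESET route, a rough sketch of what that could look like on the indexers (the sourcetype and transform names are invented, and the syslog group is assumed to already be defined in outputs.conf; verify against the props.conf spec before relying on this):

# props.conf - RULESET-* runs at ingest on the indexers, even for
# data already cooked by a forwarder, unlike TRANSFORMS-*
[my_sourcetype]
RULESET-route_copy_to_qradar = send_to_qradar_syslog

# transforms.conf - route matching events to a syslog output group
[send_to_qradar_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = qradar_syslog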
Hi @kn450 Apologies, I didn't realise you wanted to search Elastic in native SPL; I inferred the requirement as being able to use DSL within SPL. It sounds like what you are looking for is Federated Search ("to search datasets outside of your local Splunk platform deployment") against Elastic, which is not currently possible. There are currently no apps/add-ons which translate SPL into DSL for searching Elastic.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
@PickleRick so is it better to send logs from the SH, LM, and CM directly to the remote server, as recommended earlier, by configuring outputs.conf and props.conf? Also, will it increase the processing load on the SH, LM and CM?
Hi @SN1, what do you mean by "outdated OS"? Outdated with respect to what: Splunk, or something else? Could you describe your requirement in more detail? Ciao. Giuseppe
Thank you for your input. We have indeed used the mentioned add-on and were able to successfully retrieve data from Elasticsearch. However, it's important to note that the queries used are not written in Splunk's native SPL; instead, they rely on Elasticsearch queries. This limits the integration with some of Splunk's core functionalities and does not provide the desired level of efficiency in terms of performance and deep analysis. We are currently looking for best practices and would prefer to adopt a solution that has been widely used over a long period without issues, offering better integration and higher performance with Splunk. If you have any proven experiences or reliable recommendations, we would appreciate you sharing them.
Technically, you might be able to. It might depend on your local limitations, your chosen way of installing the software, and so on. Technically, if you bend over backwards, you can even install multiple Splunk instances on one host. That doesn't mean you should. If you do so (I'm still advising against it), each instance will have its own set of inputs and outputs, so if you, for example, just point your HF instance to indexers A and your UF instance to indexers B, you will get _all_ events from the HF into indexers A (including _internal) and _all_ events from the UF into indexers B. EDIT: I still don't see how it would solve your problem of sending logs from the "non-indexer" hosts to a remote third-party solution without sending them directly there...
Yes I agree, it's very confusing, but I think they mean not on the same host, as they would conflict; for a distributed deployment you install the apps, but in different places.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing