All Posts


You have an accelerated perception of time, so things appear slower than they really are? You are using under-powered technology that struggles with the workload being executed? On a less frivolous note, please expand on what you are seeing and how you have determined that there is slowness.
Works great! Thanks!
@Jimenez   
What are the reasons for the slowness observed in the Splunk Mission Control incident review dashboard?
| appendpipe [| chart values(fieldB) as unique by fieldA] | eventstats sum(unique) as sum_unique | where isnull(unique) | fields - unique
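The same result can be sketched another way (an alternative suggestion, not from the original answer, assuming each fieldA value has a single fieldB value as in the example below):

```
| appendpipe [ dedup fieldA | stats sum(fieldB) as sum_unique ]
| eventstats values(sum_unique) as sum_unique
| where isnotnull(fieldA)
```

Here dedup keeps one row per fieldA, stats produces a single summary row, eventstats copies its sum_unique onto every row, and the final where drops the appended summary row (whose fieldA is null).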
Hi all, I have the following situation with a query returning a table of this kind:

fieldA  fieldB
A       2
A       2
B       4
B       4

I need to add a column to this table that sums fieldB only once per unique fieldA value, i.e. a new column containing 2 + 4 = 6. The table would then look like this:

fieldA  fieldB  sum_unique
A       2       6
A       2       6
B       4       6
B       4       6

I know that I have to use | eventstats sum() here, but I am struggling with how to make it count each fieldA value only once. Thanks in advance, Miguel
It really depends on the details. It might be easier to use the RULESET functionality on the indexers. It might be easier to send the data directly from the SH/LM/CM/whatever to QRadar using another (non-Splunk) method. Each of those methods has its pros and cons, mostly tied to manageability and "cleanliness" of architecture.
Hi @kn450, Apologies, I didn't realise you wanted to search Elastic in native SPL; I inferred the requirement as being able to use DSL within SPL. It sounds like what you are looking for is Federated Search ("to search datasets outside of your local Splunk platform deployment") against Elastic, which is not currently possible. There are currently no apps/add-ons which translate SPL into DSL for searching Elastic. Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
@PickleRick so is it better to send logs from the SH, LM, and CM directly to the remote server, as recommended earlier, by configuring outputs.conf and props.conf? Also, will it increase the processing load on the SH, LM, and CM?
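For reference, the outputs.conf approach being discussed might look roughly like this (a sketch only; the stanza name, hostname, and port are placeholders, not values from this thread):

```
# outputs.conf on the SH/LM/CM (sketch; host/port are placeholders)
[syslog:qradar]
server = qradar.example.com:514
type = tcp

# props.conf / transforms.conf would then route the desired sourcetypes
# to this group via the _SYSLOG_ROUTING key.
```

Forwarding from these roles does add some processing overhead on each host, though for typical internal-log volumes it is usually modest.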
Hi @SN1, what do you mean by "outdated OS"? Outdated with respect to what: Splunk, or something else? Could you describe your requirement in more detail? Ciao. Giuseppe
Thank you for your input. We have indeed used the mentioned add-on and were able to successfully retrieve data from Elasticsearch. However, it's important to note that the queries used are not written in Splunk’s native SPL language; instead, they rely on Elasticsearch queries. This limits the integration with some of Splunk’s core functionalities and does not provide the desired level of efficiency in terms of performance and deep analysis. We are currently looking for best practices and would prefer to adopt a solution that has been widely used over a long period without issues, offering better integration and higher performance with Splunk. If you have any proven experiences or reliable recommendations, we would appreciate you sharing them.
Technically, you might be able to. It might depend on your local limitations, your chosen way of installing the software, and so on. Technically, if you bend over backwards, you can even install multiple Splunk instances on one host. That doesn't mean you should. If you do so (I'm still advising against it), each instance will have its own set of inputs and outputs, so if you, for example, point your HF instance to indexers A and your UF instance to indexers B, you will get _all_ events from the HF into indexers A (including _internal) and _all_ events from the UF into indexers B. EDIT: I still don't see how it would solve your problem of sending logs from the "non-indexer" hosts to a remote third-party solution without sending them there directly...
Yes, I agree, it's very confusing, but I think they mean not on the same host, as they will conflict; for a distributed deployment you install all the apps, but in different places.
@PickleRick @livehybrid Can I install a Splunk UF on the SH, CM, and LM? Is it possible, and will it work? Also, will it cause duplicate logs from Splunk as well as from the UF?
I think for distributed systems we have to install all of them: the IA, the TA, and the app. I think when they say "Do not install Add-Ons and Apps on the same system" they mean not on the same host.
Hi @lrod99, The Conducive App for HL7 isn't available for download directly from Splunkbase because it needs to be obtained directly from Conducive Consulting, which I believe will require a license/support agreement with them. The only other Splunkbase HL7 app (HL7 Add-On for Splunk) was created by Joe Welsh, who previously worked for Splunk but has since left the company, so it is unlikely that this will be updated unless you reach out to Joe directly to see if it could be done. Are you looking for something to act as an endpoint/ingest, or for field extractions for an existing HL7 feed which you have?
Where can I get a list of all outdated OSes for my dashboard? Is there a site or something?
Hi @livehybrid, I tried the below by removing the 2nd line, but nothing is being transmitted to Splunk. As I mentioned, in the OTel collector log the body is getting printed correctly, yet somehow nothing is being sent to the Splunk server. I see nothing in Splunk with the change below.

processors:
  transform/logs:
    log_statements:
      - context: log
        statements:
          - set(body, ParseJSON(body)["message"])
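One possible cause (an assumption, not confirmed in this thread): if ParseJSON fails or the parsed object has no "message" key, set() may leave the body empty and the record can be dropped downstream. A guarded sketch of the same transform:

```
processors:
  transform/logs:
    log_statements:
      - context: log
        statements:
          # Only replace the body when it looks like a JSON object;
          # otherwise leave the original body untouched.
          - set(body, ParseJSON(body)["message"]) where IsMatch(body, "^\\s*\\{")
```

Checking the collector's debug/logging exporter output for the logs pipeline would also confirm whether records are reaching the exporter at all.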
Also, testing the API in an API dev tool indicated that we have to prepend "ApiToken" to the key. Hopefully that is the way to enter it in the S1 app as well.