Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

And never try to manage the DS by itself! This will not end nicely! Be careful, as some people confuse the deployment server with the SHC deployer. As already said, those are different tools/roles and you must use the correct one.
Hello @bishida, Thanks for sharing the information. As per the Splunk Enterprise documentation, it says "Choose this option if you manage Splunk Enterprise in a data center or public cloud. Follow the steps in the wizard to securely connect to Splunk Enterprise instance and query logs data using Log Observer." If we are using Splunk Enterprise for logging and want to forward data to the Observability Cloud, is it possible for the Splunk Enterprise host to be on a private network? If yes, what additional steps or configurations are needed to enable the Splunk Enterprise host to transfer data to the Observability Cloud? Additionally, can this be achieved if the splunk-otel-collector.service is running on the Splunk Enterprise host in a private network? Thanks
Here is a working example of a statsd receiver: After you restart the collector, it will be listening on UDP port 8125. Since this is UDP and not TCP, you can't test the port like you normally would and get a response. Send a test metric to that port and then search for it in the Metric Finder in O11y Cloud. echo "statsd.test.metric:42|c|#mykey:#myval" | nc -w 1 -u -4 localhost 8125
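For reference, a minimal sketch of what such a statsd receiver block could look like in the Splunk OTel Collector's agent configuration; the processor and exporter names below are assumptions based on the collector's default metrics pipeline, so adjust them to your own config:

receivers:
  statsd:
    endpoint: "0.0.0.0:8125"    # listen for statsd metrics on UDP 8125
    aggregation_interval: 60s   # flush aggregated metrics once per minute

service:
  pipelines:
    metrics/statsd:
      receivers: [statsd]
      processors: [batch]
      exporters: [signalfx]     # assumes the default SignalFx metrics exporter is already configured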
The query checks the lookup file, but then does nothing with it.  That's why all events are counted.  Try this index=abc |rex field=data "\|(?<data>[^\.|]+)?\|(?<Event_Code>[^\|]+)?\|" | lookup dataeventcode.csv Event_Code OUTPUT Event_Code as found | where isnotnull(found) | timechart span=1d dc(Event_Code) If the Event_Code field did not need to be extracted via rex then we could have used inputlookup to give Splunk a list of codes to search for.
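For illustration, that inputlookup variant could look roughly like this; it is only a sketch and assumes Event_Code is already available as a search-time field rather than extracted by the inline rex:

index=abc [ | inputlookup dataeventcode.csv | fields Event_Code ]
| timechart span=1d dc(Event_Code)

The subsearch expands to an OR of the Event_Code values from the CSV, so only matching events are retrieved in the first place.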
Hi @Ste , Please share the code of your dashboard with the error using the "Insert/Edit Code Sample" button. Ciao. Giuseppe
Hi @secure , if you want to filter results from the main search using the Event_Codes from the lookup, you must use a subsearch: index=abc | rex field=data "\|(?<data>[^\.|]+)?\|(?<Event_Code>[^\|]+)?\|" | search [ | inputlookup dataeventcode.csv | fields Event_Code ] | timechart span=1d dc(Event_Code) If the Event_Code field is already extracted before the search (e.g. as an automatic field extraction), you can put the subsearch directly in the main search instead. Ciao. Giuseppe
At one time, parsing on an HF actually made the indexers work *harder*, but I'm not sure that's still the case. HFs should off-load some SVCs from your Splunk Cloud indexers. HFs will increase the network traffic to Splunk Cloud.
Hi All, I have a CSV lookup with the below data:
Event_Code
AUB01
AUB36
BUA12
I want to match it against a dataset which has a field named Event_Code with several values, and extract the daily count of the event codes that match the CSV lookup. My query:
index=abc | rex field=data "\|(?<data>[^\.|]+)?\|(?<Event_Code>[^\|]+)?\|" | lookup dataeventcode.csv Event_Code | timechart span=1d dc(Event_Code)
However, the result shows the count of all 100 events per day instead of matching the event codes from the CSV and then giving the total count per day.
Splunk Observability Cloud relies on the Splunk core platform (Splunk Cloud or Splunk Enterprise) for logging capabilities. So logs aren't sent directly to Observability Cloud; you send them to Splunk Cloud/Enterprise and then pull them in for viewing with the Log Observer Connect integration in Observability Cloud. When you open Log Observer in Observability Cloud, the logs you see are fetched at that moment by reading them from your Splunk Cloud/Enterprise instance.
Thanks for the feedback. My understanding is that I would gain performance in the future. Am I wrong? I am currently using field extractions in Splunk Cloud.
Adding to @richgalloway 's answer - DS is a central point to distribute config items packaged into apps to its clients. Strictly theoretically, any Splunk component can be a deployment client and use the DS to pull its apps from. But for some components (forwarders) it's the natural way and most often used (it's not, however, the only way to manage forwarders - you could use any configuration management tool of your choice if you feel more comfortable with it). For other ones (stand-alone search-heads or indexers) DS can be used but rarely is. There are some components (clustered search heads and clustered indexers) for which DS mustn't be used directly - they are managed by SHC deployer and Cluster Manager respectively. There are also some even more complicated scenarios but we'll not dig into them here since this is a relatively advanced topic. TL;DR - While other components can sometimes be managed with DS as well, typically only forwarders are.
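As a hedged illustration of how the DS maps apps to clients, a serverclass.conf on the deployment server might look roughly like this (the class name, host pattern, and app name are made up for the example):

# serverclass.conf on the deployment server
[serverClass:linux_uf]
whitelist.0 = uf-*.example.com

[serverClass:linux_uf:app:org_all_forwarder_outputs]
restartSplunkd = true

Clients matching the whitelist download the app from the DS's deployment-apps directory and restart splunkd after receiving it.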
@isoutamo , many thanks for the advice, we have now separated all inputs to the HF. The SH is now just for searching but has the TA installed. @PickleRick many thanks also for the hint!
Using the HTML entities from your proposal leads to the error message "Error in Line 100: Invalid character entity"
As @PickleRick said, you should fix _time in the indexing phase, not at search time! Always do proper data onboarding and ensure that you have events stored correctly in your indexes. You could/should use @bowesmana 's eval in a transforms.conf INGEST_EVAL.
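A minimal sketch of where such an eval would live, assuming a JSON event with a top-level timestamp field; the sourcetype, field name, and time format are placeholders, and the actual expression should be whatever @bowesmana proposed for your data:

# props.conf
[my_sourcetype]
TRANSFORMS-set_time = set_event_time

# transforms.conf
[set_event_time]
INGEST_EVAL = _time=strptime(json_extract(_raw, "timestamp"), "%Y-%m-%dT%H:%M:%S.%3N%z")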
Yes. If you had your event sanitized before ingesting (not have a whole json structure inserted as a text member of another json), you could have it parsed as a normal json without manually having to extract each field (and manipulating structured data with regexes is bound to hit some walls sooner or later). Also - I'd advise against doing indexed extractions unless you have a very good use case for them.
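For example, the usual search-time alternative to indexed extractions is a single props.conf setting on the sourcetype (the sourcetype name here is hypothetical):

# props.conf
[my_json_sourcetype]
KV_MODE = json    # search-time JSON field extraction, instead of INDEXED_EXTRACTIONS = json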
Hi @Ste , in dashboards you cannot use < and > literally; you have to replace them with &lt; and &gt;: | where my_time&gt;=relative_time(now(),"-1d@d") AND my_time&lt;=relative_time(now(),"@d") Ciao. Giuseppe
As @PickleRick said, you should use a separate HF in a distributed environment for all modular inputs; don't put those on the SH. Of course you need the TA on the SH too, but with no inputs configured there.
Dear experts, Why is the following line | where my_time>=relative_time(now(),"-1d@d") AND my_time<=relative_time(now(),"@d") accepted as a valid statement in a search window, but as soon as I want to use exactly this code in a dashboard, I get the error message: "Error in line 100: Unencoded <"? The dashboard code validator somehow fails with the <= comparison. >= works, as does =, but not <=. We're on Splunk Cloud.
As this is your clean new installation on your test machine, I propose that you remove the whole UF installation (probably in /opt/splunkforwarder or something similar). Then, if you really want to install the UF on the same machine as well, install it with your package manager from the rpm or deb file. Then just try to start it again. Just follow the UF's installation instructions on docs.splunk.com.
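A rough sketch of those steps on an RPM-based host; the install path and package file name are assumptions, and on Debian-based systems you would use dpkg with the .deb package instead:

/opt/splunkforwarder/bin/splunk stop                      # stop the old instance if it is running
rm -rf /opt/splunkforwarder                               # remove the tarball-based installation
rpm -i splunkforwarder-<version>-linux-x86_64.rpm         # reinstall from the package
/opt/splunkforwarder/bin/splunk start --accept-license    # first start, accepting the license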
Ok makes sense. Thanks for the reply!