All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi @shonias, I don't believe this functionality exists in Dashboard Studio today; however, you can create custom dashboards using @splunk/create and ReactJS. See https://splunkui.splunk.com/Packages/create/Overview and https://splunkui.splunk.com/Create/ExamplesGallery#Custom%20dashboards%20and%20visualizations. An interested user has also created a Splunk Idea related to tooltips. See https://ideas.splunk.com/ideas/EID-I-2183.
Hi @MattKr , Docker containers ship with only a minimal set of tools, and it is not easy to add more. Don't think of a container as a standard Linux distribution.
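If you do need extra tools, the usual approach is to build a derived image rather than modify a running container. A minimal sketch, assuming the splunk/splunk image is Red Hat UBI-based and ships microdnf (the package names here are just examples — check what your base image actually provides):

```dockerfile
# Hypothetical derived image -- the package manager (microdnf vs. dnf/yum)
# and package names depend on the base image's actual distribution.
FROM splunk/splunk:latest

# The Splunk image runs unprivileged; switch to root only for the install.
USER root
RUN microdnf install -y procps-ng vim-minimal && microdnf clean all

# Drop back to the unprivileged user the image normally runs as.
USER splunk
```

UBI repositories are freely usable inside containers, so installing standard packages this way does not require a Red Hat subscription.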
@AL3Z To get notable events from ES into SOAR, you need the Splunk App for SOAR Export set up on your search head; then you need to add the Adaptive Response action to send to SOAR when the detection triggers an event in Splunk.
Hello all! Is there a default time that events (containers/cases) are retained on the SOAR server? And if so, can I change it? @phanTom Thank you in advance.
Hi @kate, You can enable the introspection generator add-on on forwarders by following the process at https://docs.splunk.com/Documentation/Splunk/latest/Troubleshooting/ConfigurePIF#Enable_the_introspection_generator_add-on_using_deployment_server. If you're not using a deployment server, you can enable the add-on locally on any forwarder.

Note that the SplunkForwarder service account, e.g. NT SERVICE\SplunkForwarder, must have the "Debug programs" (SeDebugPrivilege) user right. While this isn't equivalent to administrator privileges, it does grant the user the ability to inject arbitrary code into another process running with administrator privileges. You can find more information in Microsoft security documentation. Don't fear the privilege, though; just understand what it does and how to mitigate the risk of assigning it in the context of Splunk.

By default, introspection:generator:resource_usage collects metrics every 10 minutes when the add-on is enabled on universal forwarders. You can find the metrics in index=_introspection, an event index containing source types with INDEXED_EXTRACTIONS = json:

| tstats avg(data.cpu_idle_pct) as cpu_idle_pct where index=_introspection sourcetype=splunk_resource_usage component=Hostwide by _time host
| chart avg(eval(100-cpu_idle_pct)) ``` cpu_used_pct ``` over _time by host

On instances of Splunk Enterprise, metrics are also cloned to index=_metrics; however, events sent from forwarders with INDEXED_EXTRACTIONS set are "cooked" by the forwarder, and transforms on receivers will not be applied without modifying configuration to reroute cooked events to parsingQueue or adding ingest actions (rulesets) that reference the transforms behavior.
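For local enablement without a deployment server, the gist is toggling the bundled add-on on. A sketch of the relevant files, assuming default paths — verify the stanza and setting names against the server.conf spec for your Splunk version:

```ini
# $SPLUNK_HOME/etc/apps/introspection_generator_addon/local/app.conf
[install]
state = enabled
```

```ini
# $SPLUNK_HOME/etc/apps/introspection_generator_addon/local/server.conf
# Optional: adjust the collection cadence (600 seconds = the 10-minute default).
[introspection:generator:resource_usage]
collectionPeriodInSecs = 600
```

Restart the forwarder after making the change so the add-on state is picked up.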
Hi, I would like to install additional tools on my Splunk Docker container, but yum is not installed. rpm is available but needs to be configured along with a repo, I guess? What is the best way to do this? Do I need a Red Hat subscription for this?
@phanTom I don't see any notables sent to SOAR in the adaptive response actions. But we don't rely on the Incident Review dashboard in our environment; all incidents are automated through SOAR itself.
Yes, in the meantime it turned out the default is to listen on a UNIX domain socket, and I need to switch back to the TCP method via the config.
What you described is the new default behavior.
Hi @scelikok , That's working great. Thank you for saving my time. Regards, Eshwar  
Hi @Eshwar, Please try below; curl -k -u admin:password "https://localhost:8089/services/alerts/fired_alerts?output_mode=json&count=0"
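To consume that JSON programmatically, here is a minimal Python sketch. The hostname and the exact shape of the fired_alerts response are assumptions based on the usual Splunk REST JSON envelope (an "entry" list whose items carry a "name" field) — verify against your instance:

```python
import json
import urllib.parse

BASE = "https://localhost:8089/services/alerts/fired_alerts"

def build_url(base: str, **params) -> str:
    """Append query parameters such as output_mode=json to a REST endpoint."""
    return base + "?" + urllib.parse.urlencode(params)

def fired_alert_names(payload: str) -> list:
    """Extract alert names from a decoded fired_alerts JSON response body."""
    return [entry["name"] for entry in json.loads(payload).get("entry", [])]

# Same request as the curl example above:
url = build_url(BASE, output_mode="json", count=0)
# Fetch `url` with your HTTP client of choice (disabling certificate
# verification only for self-signed dev certs), then pass the response
# body to fired_alert_names().
```

This keeps the URL construction and the JSON parsing separable, so the parsing half can be exercised without a live server.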
Hi @scelikok , I tried with output_mode=json but was not able to get a JSON response, as my REST endpoint is for fired alerts, as below: https://localhost:8089/services/alerts/fired_alerts
Hi @kate, you are most likely using an add-on with your Universal Forwarder (Linux or Windows); in that case, you have to enable the CPU counter metrics in that add-on, and then you can use those data to calculate the usage percentage. Ciao. Giuseppe
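As an illustration, with the Splunk Add-on for Unix and Linux the CPU scripted input is disabled by default and can be switched on in a local inputs.conf. The stanza below reflects recent add-on versions — confirm the script name and default interval against the add-on's own default/inputs.conf:

```ini
# $SPLUNK_HOME/etc/apps/Splunk_TA_nix/local/inputs.conf
[script://./bin/cpu.sh]
disabled = 0
interval = 30
```

The Windows add-on exposes equivalent Processor performance-counter inputs that can be enabled the same way.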
@uagraw01 , I am not experienced with Kafka inside Kubernetes. Please check how to install a Kafka Connect cluster inside Kubernetes; after that, you can install "Splunk Connect for Kafka" into this cluster.
Hi @Eshwar, You can add the "output_mode=json" parameter to get JSON output. Please see below: curl -k -u admin:password https://localhost:8089/services/search/jobs/export -d search="search sourcetype=splunkd earliest=-1h" -d output_mode=json
@scelikok Yes, I know this add-on. But will the HEC token work, given that my Kafka is inside a Kubernetes cluster?
Hi @AL3Z, You can check directly from the notable index, but using the notable macro is much easier: `notable` | timechart count by rule_name
Hi @uagraw01, If your need is ingesting data from Kafka to Splunk, you can check  "Splunk Connect for Kafka" https://splunkbase.splunk.com/app/3862 
I am looking to use only Splunk internal logs for this. How can I use the internal metrics logs of a UF to fetch CPU and memory data for that same UF?
Hi @snix, have you read this: https://docs.splunk.com/Documentation/Splunk/latest/Security/AboutsecuringyourSplunkconfigurationwithSSL ? Ciao. Giuseppe