All Posts

Hi @RS , I suppose the total execution time is always displayed in minutes; otherwise you have to convert it based on the format. So please try something like this: index=XXXXXX1 host=hostname.com source=artifactory-service sourcetype=artifactory-service "Storage TRASH_AND_BINARIES garbage collector report" | rex "Total\s+execution\s+time:\s+(?<minutes>\d+\.\d+)" | eval Total_execution_time=minutes*60 | timechart sum(Total_execution_time) AS Total_execution_time BY host  Ciao. Giuseppe
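A quick way to sanity-check the conversion outside Splunk: note that a value like 15.25 means 15 minutes plus a quarter of a minute, so multiplying the whole decimal value by 60 is the safe conversion (treating the digits after the dot as seconds would give the wrong answer). A minimal Python sketch using the sample log line from the question:

```python
import re

# Sample garbage-collector report line (from the question above)
line = ("Storage TRASH_AND_BINARIES garbage collector report: "
        "Total execution time:    15.25 minutes")

# Capture the whole decimal value in minutes, mirroring the rex approach
m = re.search(r"Total\s+execution\s+time:\s+(?P<minutes>\d+(?:\.\d+)?)", m_str := line)
total_seconds = float(m.group("minutes")) * 60  # 15.25 min -> 915.0 s
print(total_seconds)
```

Captured value "15.25" times 60 gives 915.0 seconds, whereas 15*60 + 25 would give 925 — the fractional part is hundredths of a minute, not seconds.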
Hi @Roy_9 , if it's a script from the Universal Forwarder, you can open a case with Splunk Support. Some years ago I had an issue with a UF installed on a server that wasn't able to connect to the DNS server and used too much memory trying to resolve addresses, and Support solved the issue, so call them. Ciao. Giuseppe
Hello, I am currently using Splunk version 9.1.0.2, which is affected by several newly reported CVEs, so I need to upgrade to the latest version. The Release Notes state that "Splunk recommends that customers use version 9.2.0.1 instead of version 9.2.0." However, on the Splunk Enterprise Download Page the latest version available is 9.2.0. Could you please let us know when Splunk Enterprise 9.2.0.1 will be released?
Hi, I have the following log data in Splunk. Here is an example event: 2024-02-04T00:15:15.209Z [jfrt ] [INFO ] [64920151065ecdd9] [.s.b.i.GarbageCollectorInfo:81] [cdd9|art-exec-153205] - Storage TRASH_AND_BINARIES garbage collector report: Total execution time:    15.25 minutes Candidates for deletion: 4,960 Checksums deleted:       4,582 Binaries deleted:        4,582 host = hostname.com index = XXXXXX1 source = artifactory-service sourcetype = artifactory-service How can I display a trend/timechart of "Total execution time" grouped by timestamp and host name for the Storage TRASH_AND_BINARIES garbage collector report? I appreciate any help. Thanks, Rahul
@gcusello  It came with the Splunk forwarder package: "C:\Program Files\SplunkUniversalForwarder\bin\splunk-powershell.ps1". Thanks
Hi @meshorer, Splunk SOAR keeps events, containers, etc. in its database up to the database size limit. You can delete them according to your retention needs. Please see https://docs.splunk.com/Documentation/SOARonprem/6.2.0/Admin/DeleteContainers
Hi @shonias, I don't believe this functionality exists in Dashboard Studio today; however, you can create custom dashboards using @splunk/create and ReactJS. See https://splunkui.splunk.com/Packages/create/Overview and https://splunkui.splunk.com/Create/ExamplesGallery#Custom%20dashboards%20and%20visualizations. An interested user has also created a Splunk Idea related to tooltips. See https://ideas.splunk.com/ideas/EID-I-2183.
Hi @MattKr , Docker containers ship with only the minimum required tools, and it is not easy to add more. Don't think of a container as a standard Linux distribution.
@AL3Z  To get notable events from ES into SOAR, you need the Splunk App for SOAR Export set up on your Search Head; then you will need to add the Adaptive Response action to send to SOAR when the detection triggers an event in Splunk.
Hello all! Is there a default time that events (containers/cases) are stored on the SOAR server? And if so, can I change it? @phanTom  Thank you in advance
Hi @kate, You can enable the introspection generator add-on on forwarders by following the process at https://docs.splunk.com/Documentation/Splunk/latest/Troubleshooting/ConfigurePIF#Enable_the_introspection_generator_add-on_using_deployment_server. If you're not using a deployment server, you can enable the add-on locally on any forwarder.

Note that the SplunkForwarder service account, e.g. NT SERVICE\SplunkForwarder, must have the "Debug programs" (SeDebugPrivilege) user right. While this isn't equivalent to administrator privileges, it does grant the user the ability to inject arbitrary code into another process running with administrator privileges. You can find more information in Microsoft security documentation. Don't fear the privilege, though; just understand what it does and how to mitigate the risk of assigning it in the context of Splunk.

By default, introspection:generator:resource_usage will collect metrics every 10 minutes when the add-on is enabled on universal forwarders. You can find the metrics in index=_introspection, an event index containing source types with INDEXED_EXTRACTIONS = json:

| tstats avg(data.cpu_idle_pct) as cpu_idle_pct where index=_introspection sourcetype=splunk_resource_usage component=Hostwide by _time host
| chart avg(eval(100-cpu_idle_pct)) ``` cpu_used_pct ``` over _time by host

On instances of Splunk Enterprise, metrics are also cloned to index=_metrics; however, events sent from forwarders with INDEXED_EXTRACTIONS set are "cooked" by the forwarder, and transforms on receivers will not be applied without modifying configuration to reroute cooked events to parsingQueue or adding ingest actions (rulesets) that reference the transforms.
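To make the "CPU used = 100 − average idle" calculation in that search concrete, here is a small Python sketch over mock resource-usage samples (the host names and idle values are invented; only the arithmetic mirrors the SPL):

```python
from collections import defaultdict

# Mock Hostwide samples mimicking data.cpu_idle_pct from
# sourcetype=splunk_resource_usage (hosts and values are made up)
events = [
    {"host": "fwd1", "cpu_idle_pct": 92.0},
    {"host": "fwd1", "cpu_idle_pct": 88.0},
    {"host": "fwd2", "cpu_idle_pct": 75.0},
]

# Group idle percentages by host, like "by _time host" collapsed per host
idle_by_host = defaultdict(list)
for e in events:
    idle_by_host[e["host"]].append(e["cpu_idle_pct"])

# avg(eval(100 - cpu_idle_pct)) per host: subtract the mean idle from 100
cpu_used_pct = {h: 100 - sum(v) / len(v) for h, v in idle_by_host.items()}
print(cpu_used_pct)  # {'fwd1': 10.0, 'fwd2': 25.0}
```

The SPL additionally buckets by _time so the chart shows a trend; the per-host aggregation above is the same idea with the time axis collapsed.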
Hi, I would like to install additional tools in my Splunk Docker container, but yum is not installed; rpm is available but needs to be configured along with a repo, I guess. What is the best way to do this? Do I need a Red Hat subscription for it?
@phanTom  I don't see any notables set to SOAR in the adaptive response actions, but we don't rely on the Incident Review dashboard in our environment; all incidents are automated through SOAR itself.
Yes, in the meantime it turned out that the default is to listen on a UNIX domain socket, and I need to switch back to the TCP method via configuration.
What you described is the new default behavior.
Hi @scelikok , That's working great. Thank you for saving my time. Regards, Eshwar  
Hi @Eshwar, Please try below; curl -k -u admin:password "https://localhost:8089/services/alerts/fired_alerts?output_mode=json&count=0"
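For anyone who wants to issue the same request from a script rather than curl, here is a Python sketch that builds the equivalent URL and basic-auth header with the standard library (host, port, and admin:password are the same placeholders as in the curl command; actually sending the request also needs an SSL context, since curl's -k skips certificate verification):

```python
import base64
from urllib.parse import urlencode
from urllib.request import Request

# Same endpoint and query parameters as the curl command above
base = "https://localhost:8089/services/alerts/fired_alerts"
params = urlencode({"output_mode": "json", "count": 0})
url = f"{base}?{params}"

# Basic auth header equivalent to curl -u admin:password (placeholder creds)
token = base64.b64encode(b"admin:password").decode()
req = Request(url, headers={"Authorization": f"Basic {token}"})
print(req.full_url)
# urllib.request.urlopen(req, context=...) would then return the JSON body;
# output_mode=json is what makes the endpoint respond with JSON.
```

The key detail for the question above is the output_mode=json query parameter: without it the endpoint returns Atom XML.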
Hi @scelikok , I tried with output_mode=json but was not able to get a JSON response; my REST endpoint is for fired alerts, as below. https://localhost:8089/services/alerts/fired_alerts
Hi @kate, surely you are using an add-on with your Universal Forwarder (Linux or Windows). In that case, you have to enable the CPU counter metrics in this add-on; then you can use those data to calculate the percentage used. Ciao. Giuseppe
@uagraw01 , I am not experienced with Kafka inside Kubernetes. Please check how to install a Kafka Connect cluster inside Kubernetes; after that you can install "Splunk Connect for Kafka" into that cluster.