All Topics


Hi, I've been wondering whether there is any method to get notifications when a SOAR-configured app is down. I am using on-prem SOAR (Phantom), version 6.1.1.211.
Version interoperability of Splunk Add-on for CyberArk
I was thinking of using the add-on for CyberArk to convert logs from CyberArk PTA into CEF format as input to Splunk Enterprise. Splunk Add-on for CyberArk | Splunkbase However, as the link above shows, it seems the latest version of the add-on supports PTA 12.2, and there have been no updates to this add-on. Does anyone know about the version interoperability of PTA version 14.2 and this add-on? Or are there any alternatives to this add-on? I really appreciate any comments. Thank you. ##Splunk-Add-on-for-CyberAr
Hi, my setup is Splunk Enterprise on an Ubuntu server. I've set up the NetFlow config on the EdgeRouter but can't seem to get any data into Splunk or the Stream add-on. I have looked online but found conflicting instructions, and I also tried ChatGPT. Can someone point me in the right direction as to why I can't get it to work?
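A quick sanity check, sketched below, is to search broadly for anything arriving with the Stream NetFlow sourcetype over the last hour; the index wildcard and the stream:netflow sourcetype are assumptions based on a default Splunk Stream setup, so adjust them to match your configuration.

index=* sourcetype=stream:netflow earliest=-60m
| stats count by host, source

If this returns nothing at all, the records are likely not reaching the Stream collector in the first place (UDP port, firewall, or exporter configuration), rather than being an indexing problem.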
I applied for a preview version, downloaded the Data Monitoring App, and uploaded it to my Splunk Cloud stack. App validation is successful, but when I press the Install button, the installation fails with the following error:

Data Monitoring could not be installed. Unable to install package. - 1EF9555E-306C-4928-B9FF-2F8A03CC35A7
I am running Splunk Docker containers to distribute Splunk. My indexer cluster, indexer manager, and search head cluster are all up and running. All is well. I am using persistent storage, which I have confirmed is persistent. The only issue I am having with my deployment is that when I stop my indexer manager container, remove it, and then start another one from the same YML file, the Ansible preflight checklist thinks it needs to upgrade to 9.3.2 (which it is already on). This only happens on the indexer manager and none of my other containers. Any thoughts?
Does anyone know if it is possible to create specific thresholds for each host in a Dashboard Studio table? I'm using a table focused on hard disk usage; previously I had no problems with the configuration, since the threshold was 80% amber and 90% red. However, I was asked to adjust this threshold to 90% amber and 95% red only for a specific server. My question is: can the color-formatted table have different thresholds depending on the host?
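One approach that may work is to compute the threshold status in SPL and let the table color the computed field instead of the raw percentage. The sketch below is only an illustration: the index, sourcetype, field name disk_used_pct, and the host name "special-server" are all placeholders.

index=your_os_index sourcetype=your_disk_sourcetype
| stats latest(disk_used_pct) as disk_used_pct by host
| eval amber=if(host="special-server", 90, 80), red=if(host="special-server", 95, 90)
| eval status=case(disk_used_pct>=red, "Red", disk_used_pct>=amber, "Amber", true(), "Green")
| table host disk_used_pct status

The table's color formatting can then key off the status column with a single rule, while the per-host thresholds live in the eval.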
Hi, we are a Splunk partner, previously an AppDynamics partner. In AppDynamics we had a solution to monitor IBM Z; however, we are not really sure how to work with IBM Z Operational Log and Data Analytics in Splunk. Could you help us with a link or docs regarding this Splunk solution? Thanks in advance.
Hi, we've configured the "Message Trace" input type for the Splunk Add-on for Microsoft Office 365 but don't seem to be receiving any data. Other input types (Mailbox, Management Activity, etc.) are working. Not sure what the problem is; any suggestions on how to troubleshoot? I did notice a discrepancy when viewing the current configuration of the input versus the options available when editing the input (the same value is reported "in days" in one place and "in minutes" in another). Could it be that my delay throttle truly is set to 1440 days rather than minutes? I believe I have all the API permissions set correctly, but let me know if this doesn't look right.
I have Splunk Enterprise 9.4.0 (build 6b4ebe426ca6) installed. My security team flagged a possible vulnerability on /opt/splunk/opt/mongo/lib/libcurl.so.4.8.0 related to CVE-2024-7264, which apparently affects libcurl versions from 7.32.0 up to, but not including, 8.9.1. I ran both of the following commands

splunk cmd curl --version
splunk cmd mongod --version

and confirmed the libcurl version is affected. The relevant results were:

Curl:
curl 7.61.1 ... libcurl/7.61.1 ...

Mongod:
mongod: /opt/splunk/lib/libcrypto.so.10: no version information available (required by mongod)
mongod: /opt/splunk/lib/libcrypto.so.10: no version information available (required by mongod)
mongod: /opt/splunk/lib/libcrypto.so.10: no version information available (required by mongod)
mongod: /opt/splunk/lib/libssl.so.10: no version information available (required by mongod)
db version v7.0.14
Build Info: { "version": "7.0.14", ... }

How do I go about disabling mongod (if possible)? Alternatively, is there any info on whether this will be addressed in a future update, or whether it is relevant at all for Splunk Enterprise?
I am training and evaluating a forecast model using MLTK's StateSpaceForecast. I would like to fit on part of the dataset and have a held-back test set for evaluation. The trick, though, is that I want the forecaster to forecast 15 minutes into the future while autoregressively looking at the current feature values. For example, take my query that tries to find the TPR, FPR, etc. for exceeding an SLA violation threshold using my holdout set. Currently, it just uses the beginning of the holdout set to predict out 2 hours.

| fit StateSpaceForecast latency_p95_log from latency_p95_log, threadcount_p95, threadcount, total_socket_errors, n_running_procs, time_wait_cpu, HourOfDay, DayOfWeek holdback=2h forecast_k=15m conf_interval=95 into ml_latency_forecast
| apply ml_latency_forecast forecast_k=2h holdback=2h
| eval predicted = exp('predicted(latency_p95_log)')
| eval predicted_low=exp('lower95(predicted(latency_p95_log))'), predicted_high=exp('upper95(predicted(latency_p95_log))')
| eval predicted_SLA = if(predicted > 1.0, 1, 0)
| eval true_positive = if(predicted_SLA=1 AND SLA_violation=1, 1, 0)
| eval false_positive = if(predicted_SLA=1 AND SLA_violation=0, 1, 0)
| eval true_negative = if(predicted_SLA=0 AND SLA_violation=0, 1, 0)
| eval false_negative = if(predicted_SLA=0 AND SLA_violation=1, 1, 0)
| eval holdout = if(isnull('lower95(predicted(latency_p95_log))'), 0, 1)
| table _time predicted predicted_high predicted_low latency_p95

Are there any examples someone could give me for doing a forecast and evaluating the fit on unseen data held back during training? Splunk MLTK Algorithms on GitHub
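For what it's worth, once the per-event flags above exist, the holdout-only rates can be summarized with an extra stats pass like this rough sketch; it only aggregates the fields already computed in the query and does not change how the forecast itself is produced.

| where holdout=1
| stats sum(true_positive) as TP sum(false_positive) as FP sum(true_negative) as TN sum(false_negative) as FN
| eval TPR=round(TP/(TP+FN), 3), FPR=round(FP/(FP+TN), 3), precision=round(TP/(TP+FP), 3)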
Hi, I have just read this post saying that all these good apps will no longer be available. This is a bit shocking to me, as I use them all the time. Is anyone else affected by this? If you are using Enterprise and can't use Dashboard Studio because you have very complex code, what are we supposed to do? https://lantern.splunk.com/Splunk_Platform/Product_Tips/Extending_the_Platform/Splunk_Custom_Visualizations_apps_end_of_life_FAQ Any help would be great. Robert
Hello, I have a question regarding the prompt action: is there any way to make the answer to a question delivered via message mandatory? Can it require a minimum number of characters?
The http.server.request.duration histogram (duration of HTTP server requests) is coming in as grouped metrics like the ones below:

http.server.request.duration_sum
http.server.request.duration_count
http.server.request.duration_max
http.server.request.duration_bucket
http.server.request.duration_min
http.client.request.duration_count

and similarly for the others. Also, http.route is coming in as Gain/Vl/* instead of the full endpoint. Is there any solution for this?
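If the immediate goal is an average request duration per route, one hedged option is to combine the _sum and _count series with mstats, as in the sketch below; the index name otel_metrics is a placeholder, and the metric names are assumed to match the list above.

| mstats sum(http.server.request.duration_sum) as total_duration sum(http.server.request.duration_count) as total_count WHERE index=otel_metrics span=5m BY http.route
| eval avg_duration=round(total_duration/total_count, 3)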
Hi All, I have a multiselect dropdown created using Dashboard Studio with the default value set to "All". This "All" is nothing but the static value set under the menu configuration:

Label - "All"
Value - *

Query used:

index=test sourcetype="billing_test" productcode="testcode"
| fields account_id account_name cluster namespace pod cost
| search account_id IN ($account_id$) AND clustername IN ($cluster$) AND account_name IN ($account_name$)
| stats count by namespace

But when I click on this multiselect dropdown, it loads another "All" value together with the default value I have set. Example screenshot:

Full dashboard JSON:

{
  "visualizations": {},
  "dataSources": {
    "ds_1sGu0DN2": {
      "type": "ds.search",
      "options": {
        "query": "index=test sourcetype=\"billing_test\" productcode=\"testcode\"| fields account_id account_name cluster namespace pod cost"
      },
      "name": "Base search"
    },
    "ds_fURg97Gu": {
      "type": "ds.chain",
      "options": {
        "extend": "ds_1sGu0DN2",
        "query": "| search account_id IN ($account_id$) AND eks_clustername IN ($cluster$) AND account_name IN ($account_name$)| stats count by namespace"
      },
      "name": "Namespacefilter"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    }
  },
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "options": {
        "token": "global_time",
        "defaultValue": "-7d@h,now"
      },
      "title": "Global Time Range"
    },
    "input_jHd4pV3L": {
      "options": {
        "items": [
          { "label": "All", "value": "*" }
        ],
        "defaultValue": [ "All" ],
        "token": "account_id"
      },
      "title": "Namespace",
      "type": "input.multiselect",
      "dataSources": {
        "primary": "ds_fURg97Gu"
      },
      "context": {}
    }
  },
  "layout": {
    "options": {},
    "globalInputs": [ "input_global_trp", "input_jHd4pV3L" ],
    "tabs": {
      "items": [
        { "layoutId": "layout_1", "label": "New tab" }
      ]
    },
    "layoutDefinitions": {
      "layout_1": {
        "type": "grid",
        "structure": [],
        "options": { "width": 1440, "height": 960 }
      }
    }
  },
  "description": "",
  "title": "Test Dashboard"
}

Please can anyone help me understand what is going wrong? Thanks, NVP
Below was the question given to me: "I need a running report to be exported, with the number of errors on each of the services in the last 7 days, and then it has to show a graph for each week." I would need a query to search for this service "Per****ng.N**s.Platform.Host" in Index="Nex", where I would need data for the Information, Error, Debug, and Warning levels. Please help me with this.
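As a hedged starting point, something like the sketch below counts events per level per week; the Level and Service field names are assumptions and need to match how the events in index="Nex" are actually extracted, and the service value is quoted exactly as given above.

index="Nex" Service="Per****ng.N**s.Platform.Host" Level IN (Information, Error, Debug, Warning)
| timechart span=1w count by Level

The result could then be saved as a scheduled report and exported or emailed on that schedule.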
Dear Cisco, today I'm not able to see the list of snapshots in many SaaS Controllers (at least 10 Controllers to which I have access). It seems that the snapshots were saved until yesterday at 11:00 PM CET. I opened a ticket with severity S2 three hours ago, but I haven't received any information. The status page doesn't report any issues. Could you post some updates about this? Thanks, Alberto
Hi Team, we are planning to build a DR Splunk indexer on AWS Cloud. Could you give us detailed instructions for creating the DR Splunk indexer? Thanks & Regards, Ramamohan
Dear Everyone, I would like to create a custom correlation search to identify hostnames that have not been updated in one month (30 days) or longer. However, upon finalizing my query, I encountered a discrepancy in the data. For instance, I found that the hostname "ABC" has not been updated for 41 days; however, when I checked Sophos Central via the website, it indicated "No Devices Found." How is Splunk able to read this data while Sophos Central reports that the device is not found? Thank you for your assistance.
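For comparison, here is a minimal sketch of the "not seen in 30+ days" logic; the index name and the assumption that the device name is in the host field are placeholders to adjust for your Sophos data.

| tstats latest(_time) as last_seen where index=sophos by host
| eval days_since_update=floor((now()-last_seen)/86400)
| where days_since_update>=30
| convert ctime(last_seen)

Note that this only reports on what Splunk has already indexed; a device deleted from Sophos Central will still appear here until its old events age out of the index, which may explain the discrepancy.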
Hey Splunkers, I'm trying to create a conditional search that will run on the same index but use different search terms according to a variable that can have one of three values. It is supposed to be something like this:

index = my_index
variable = 1/2/3
if variable=1 then run search1
if variable=2 then run search2
if variable=3 then run search3

I tried multiple ways but they didn't work, so I'm trying to get some help here.
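One hedged way to approximate this in a single SPL search is to OR the branches together so that each variable value is paired with its own terms; the sketch below uses made-up conditions (status, action, severity) purely as placeholders for search1/search2/search3.

index=my_index
    ((variable=1 AND status=failed) OR
     (variable=2 AND action=blocked) OR
     (variable=3 AND severity=high))
| stats count by variable

If the branches need entirely different post-processing rather than just different filters, separate saved searches or a macro per value may be a better fit.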
I'm trying to write a Splunk search to extract the user ID and time of a login. Log sample below:

trx#   datetime              remaining text in event
10     1/17/2025 15:03:20    account record user100 does not exist
12     1/17/2025 15:03:20    login as admin, raising privileges

Both results represent a successful login. Both results have the same datetime but different trx# values (10 or 12). I've tried streamstats count by _time, which generates a count for each result. The issue is: how do I isolate the first result (trx#=10) so I can extract the user ID (user100)? The streamstats command doesn't always assign the same count value (1 or 2) to the two logs. Thanks in advance for your help.
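One possible approach, sketched below, is to skip streamstats entirely and extract the user directly from the trx# 10 event; the index name and regexes are assumptions based on the sample rows above.

index=your_index "account record" "does not exist"
| rex "^\s*(?<trx>\d+)\s+(?<login_date>\S+)\s+(?<login_time>\S+)\s+account record (?<login_user>\S+) does not exist"
| search trx=10
| table _time login_date login_time login_user

If the trx# 12 line also needs to be correlated (for example, to confirm the privilege escalation), a transaction or a stats grouping by _time could join the two rows, but for pulling out the user ID alone the direct extraction above should be enough.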