All Topics

Hello, I have inherited the maintenance of an app and it has a couple of errors that need to be fixed. I have fixed all the others except the one mentioned here.

category: app_cert_validation
description: Check that Splunk SDK for Python is up-to-date.
ext_data: message_id: 7004, messages:
{"result": "failure", "message": "Detected an outdated version of the Splunk SDK for Python (1.6.6). Please upgrade to version 1.6.16 or later. File: bin/.../aob_py2/solnlib/packages/splunklib/binding.py", "message_filename": "bin/.../aob_py2/solnlib/packages/splunklib/binding.py", "message_line": null},
{"result": "failure", "message": "Detected an outdated version of the Splunk SDK for Python (1.6.6). Please upgrade to version 1.6.16 or later. File: bin/.../aob_py2/splunklib/binding.py", "message_filename": "bin/.../aob_py2/splunklib/binding.py", "message_line": null}

A couple more files have the same error. It looks like all of them are Add-on Builder files and I am not sure how to fix this. Also, I cannot import the add-on into Splunk Add-on Builder as I don't have an original extracted version.
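One detail worth keeping in mind when hunting down the remaining outdated copies: version strings like these must be compared numerically, not as plain strings. A minimal Python sketch (the version numbers are taken from the error messages above; the helper names are made up for illustration):

```python
# Compare dotted version strings numerically, not lexicographically.
def version_tuple(v):
    """Turn '1.6.6' into (1, 6, 6) so comparisons are numeric."""
    return tuple(int(part) for part in v.split("."))

def is_outdated(found, required="1.6.16"):
    """True when the bundled SDK version is older than the required one."""
    return version_tuple(found) < version_tuple(required)

print(is_outdated("1.6.6"))    # True  -- the version flagged by the check
print(is_outdated("1.6.16"))   # False -- meets the requirement
# Plain string comparison gets this wrong: "1.6.6" > "1.6.16" lexicographically.
```

This matters because a naive string compare would call 1.6.6 newer than 1.6.16.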
Use case: how to detect threats from a MySQL database, and, as a threat response, how to safeguard the storage volume used for storage? What are all the Splunk components and integrations required from Splunk to create this use case? Can someone help me? I am very new to Splunk.
Hi all, I'm trying to get a list of phone numbers for each event by sessionId. I can't quite figure it out. I think I need to use some sort of rex command. Here's what I have so far.   index=convo (input_type=VOICE OR input_type=SPEECH) botId=123456789 customerANI | rex field=phone "\+1(?<phone_number>\d{10})" | stats values(phone) as PhoneNumber by sessionId   Example event:     2022-09-26T06:18:41,105+0000 [INFO ] level=INFO [https-jssa-exec-10]-[tid=be75a0f9-9039-41ea-8104-afe25cfa7177 authId=123456789 sessionId=10987654321 test=false botId=123456789 cfBotId=123456789 offl_TKT=true proto=V2 platform=WEB input_type=SPEECH appId=web.intlgntsys.cui.sbgiva sku= pn= cid=123456789123456789 convo=service_routing_info_call]-[ServiceClient]-[55 ] ExecutingRequest requestState=executing action=contact_channels input={"appName":"voice_bot","language":"en","locale":"en-us","query":"talk with an agent","inputs":{"customerQuestion":"a wrong charge","DNIS":"+18008008000","Level":"|","Year":"2019","universalId":"123456789","Rating":"|","edition":"Blue|Yellow|Green","experience":"phone","sku":"0","intent":"BILLING","platform":"web","customerANI":"+15555555555"}}    
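The rex pattern itself can be checked outside Splunk. A minimal Python sketch of the same extraction, using the customerANI value from the sample event above (the trimmed event string here is only an illustration):

```python
import re

# Trimmed fragment of the sample event, containing the customerANI value.
event = '"customerANI":"+15555555555"'

# Same pattern as the rex: a literal +1, then capture exactly 10 digits.
match = re.search(r"\+1(?P<phone_number>\d{10})", event)
print(match.group("phone_number"))  # 5555555555
```

One thing to double-check in the SPL: the rex writes to a field named phone_number, while the stats aggregates values(phone), so the field names need to line up for the table to populate.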
We are in the process of building out a whole new Splunk environment. As a result we are trying to be thoughtful about every piece of the new environment to make it as efficient as possible. One question I have about the hot/warm and cold storage is: should these be on physically different volumes? I guess one advantage I see about having them on the same volume is that when the buckets roll to cold the data doesn't have to be moved to a different volume, thus saving some speed there. However, I also want to consider read/write contention, and having separate volumes means that the cold reads wouldn't interfere with the read/writes on the hot/warm volume. My gut tells me to do separate volumes, but I've not seen anything in the docs recommending one or the other. Maybe it's there and I'm just not finding it. Thanks.
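If separate volumes are chosen, the split is typically expressed with volume definitions in indexes.conf. A sketch of the layout, assuming hypothetical mount points /fast_disk and /slow_disk and a placeholder index name:

```ini
# indexes.conf -- mount points and sizes are placeholders, not recommendations
[volume:hot_warm]
path = /fast_disk/splunk
maxVolumeDataSizeMB = 500000

[volume:cold]
path = /slow_disk/splunk
maxVolumeDataSizeMB = 2000000

[my_index]
homePath = volume:hot_warm/my_index/db
coldPath = volume:cold/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
```

Note that thawedPath cannot reference a volume, which is why it uses $SPLUNK_DB directly.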
Here is my query. In the final line chart, when I hover, I am not getting different dates, rather only 26th Sept (today's date). (I want to have today, the same day last week, the same day 2 weeks back, and the same day 3 weeks back in the same visualization.)

index=xyz sourcetype=abc earliest=-60m@m latest=@m
|eval ReportKey="Today"
|append [search index=xyz sourcetype=abc earliest=-60m@m-1w latest=@m-1w |eval ReportKey="LastWeek" | eval _time=_time+60*60*24*7]
|append [search index=xyz sourcetype=abc earliest=-60m@m-2w latest=@m-2w |eval ReportKey="TwoWeeksBefore" | eval _time=_time+60*60*24*14]
|append [search index=xyz sourcetype=abc earliest=-60m@m-3w latest=@m-3w |eval ReportKey="ThreeWeeksBefore" | eval _time=_time+60*60*24*21]
|timechart span=1m count(index) as Volume by Reportkey
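The overlay technique in the query relies on shifting each comparison week's events forward so that all series line up on today's time axis. The arithmetic can be sanity-checked in Python (the sample timestamp is made up):

```python
from datetime import datetime, timedelta

WEEK = 60 * 60 * 24 * 7  # the shift used in the eval: one week in seconds

event_time = datetime(2022, 9, 19, 12, 0)       # a last-week event
shifted = event_time + timedelta(seconds=WEEK)  # overlaid onto "today"
print(shifted)  # 2022-09-26 12:00:00
```

Also worth noting: SPL field names are case-sensitive, so the field name used in the timechart by clause must match the one created by the eval exactly.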
Hello, I'm trying to sign up for Splunk Phantom Community to download an OVA file for a college project, but the review process is taking much longer than expected. Is there any way I can talk to support to figure out what is taking so long? Any information would be appreciated, thank you.
As part of a deployment rollback, how do we undo integrating an SHC with a multisite indexer cluster, which was done with the following command from these instructions - https://docs.splunk.com/Documentation/Splunk/9.0.1/DistSearch/SHCandindexercluster#Configure_members ?

splunk edit cluster-config -mode searchhead -site site0 -manager_uri https://10.152.31.202:8089 -secret newsecret123 -auth login:password
splunk restart
Hello, one user wants to convert a dashboard with a token into a summary-indexing dashboard. We are using | sistats or similar, scheduling data collection every minute or at another frequency. However, the user has a token input to dynamically filter the search results later. Is it possible to have a scheduled saved search using summary indexing with a dynamic token that depends on the user's query? Should I remove the filter, collect all results, and then filter in the final summary-indexing dashboard? Thanks for your help.
Hello, I have data like below.  {"property":"XYZ", "period":{ "start":"2022-09-16", "end":"2022-10-02" }, "nb-day":17, "nb-rate-plans":518, "nb-products":16, "total":{ "avail":48, "price":0 }, "filtered":{ "avail":0, "price":0 }, "rate-plans":{ "IWU35":{ "avail":16, "price":0 }, "IWU30":{ "avail":16, "price":0 }, "IWU40":{ "avail":16, "price":0 } }, "check-ins":{ "0":{ "avail":3, "price":0 }, "1":{ "avail":3, "price":0 }, "2":{ "avail":3, "price":0 }, "3":{ "avail":3, "price":0 }, "4":{ "avail":3, "price":0 }, "5":{ "avail":3, "price":0 }, "6":{ "avail":3, "price":0 }, "7":{ "avail":3, "price":0 }, "8":{ "avail":3, "price":0 }, "9":{ "avail":3, "price":0 }, "10":{ "avail":3, "price":0 }, "11":{ "avail":3, "price":0 }, "12":{ "avail":3, "price":0 }, "13":{ "avail":3, "price":0 }, "14":{ "avail":3, "price":0 }, "15":{ "avail":3, "price":0 } } } { "property":"ABC", "period":{ "start":"2022-09-16", "end":"2022-10-02" }, "nb-day":17, "nb-rate-plans":518, "nb-products":16, "total":{ "avail":48, "price":0 }, "filtered":{ "avail":0, "price":0 }, "rate-plans":{ "IWU35":{ "avail":16, "price":0 }, "IWU30":{ "avail":16, "price":0 }, "IWU40":{ "avail":16, "price":0 } }, "check-ins":{ "0":{ "avail":3, "price":0 }, "1":{ "avail":3, "price":0 }, "2":{ "avail":3, "price":0 }, "3":{ "avail":3, "price":0 }, "4":{ "avail":3, "price":0 }, "5":{ "avail":3, "price":0 }, "6":{ "avail":3, "price":0 }, "7":{ "avail":3, "price":0 }, "8":{ "avail":3, "price":0 }, "9":{ "avail":3, "price":0 }, "10":{ "avail":3, "price":0 }, "11":{ "avail":3, "price":0 }, "12":{ "avail":3, "price":0 }, "13":{ "avail":3, "price":0 }, "14":{ "avail":3, "price":0 }, "15":{ "avail":3, "price":0 } } } 1. 
Need to calculate a date based on the example below -> start: 2022-09-16

"check-ins":{ "0":{ "avail":3, "price":0 }, "1":{ "avail":3, "price":0 }, "2":{ "avail":3, "price":0 }, "3":{ "avail":3, "price":0 }, "4":{ "avail":3, "price":0 }, "5":{ "avail":3, "price":0 }, "6":{ "avail":3, "price":0 }, "7":{ "avail":3, "price":0 }, "8":{ "avail":3, "price":0 }, "9":{ "avail":3, "price":0 }, "10":{ "avail":3, "price":0 }, "11":{ "avail":3, "price":0 }, "12":{ "avail":3, "price":0 }, "13":{ "avail":3, "price":0 }, "14":{ "avail":3, "price":0 }, "15":{ "avail":3, "price":0 } }

Each index from check-ins needs to be added to the start date to get the date, with desync = avail + price for that day:

2022-09-16 + 0 = 2022-09-16   desync = avail + price (3 + 0)
2022-09-16 + 1 = 2022-09-17
2022-09-16 + 2 = 2022-09-18
2022-09-16 + 15 = 2022-10-01

I need to convert each check-ins index into a date and then calculate desync for each day. Thanks in advance!!
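The date arithmetic described above is plain day offsets from the period start. A Python sketch, using the start date and field names from the sample JSON, and assuming desync means avail + price as the example suggests (only two of the sixteen check-in entries are shown):

```python
from datetime import datetime, timedelta

start = datetime.strptime("2022-09-16", "%Y-%m-%d")
check_ins = {"0": {"avail": 3, "price": 0}, "15": {"avail": 3, "price": 0}}

for idx, values in check_ins.items():
    day = start + timedelta(days=int(idx))       # index added to start date
    desync = values["avail"] + values["price"]   # assumed definition of desync
    print(day.strftime("%Y-%m-%d"), desync)
```

In SPL, the same offset would typically be computed with something like strptime on the start field plus the index times 86400 seconds, then strftime for display.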
Hello, for security reasons, how can we block the view of JVM variables in the AppDynamics console? Is there a different way to block them, other than through agent configuration (the sensitive-data-filter in app-agent-config.xml)? Thanks.
Hi, I have problems with the drilldown button in the "Risk Event Timeline" view for a Risk Notable. When expanding Risk rules in the "Risk Event Timeline" view, you can click on a drilldown field named "Contributing events: View contributing events". This button is disabled with the following message: "View contributing events" link is disabled as there is no drilldown search available for this risk rule. The Risk rule is configured as a notable and has a drilldown search. Does anybody know how to enable the drilldown search in the "Risk Event Timeline" view?
I want to implement the below-mentioned red-highlighted XML code in my Splunk dashboard source, for the stats table, if the dropdown field value is "db2_cloud2".

<format type="color" field="REPLAY_LATENCY"> <colorPalette type="expression">if(value&gt;45,"#D93F3C","")</colorPalette> </format>

Below is a screenshot of the dashboard.
Here are the error messages:

2022-09-26 12:38:02,976 ERROR [itsi_re(reId=cRdG)] [main] RulesEngineSearch:75 - RulesEngineTask=RealTimeSearch, Status=Stopped, FunctionMessage="java.lang.NoSuchMethodError: com.fasterxml.jackson.core.JsonParser.getReadCapabilities()Lcom/fasterxml/jackson/core/util/JacksonFeatureSet;"
host = myhost   index = _internal   source = /opt/splunk/var/log/splunk/itsi_rules_engine.log   sourcetype = itsi_internal_log

2022-09-26 12:38:02,976 ERROR [itsi_re(reId=cRdG)] [main] RulesEngineSearch:74 - RulesEngineTask=RulesEngineJob, Status=Stopped
host = myhost   index = _internal   source = /opt/splunk/var/log/splunk/itsi_rules_engine.log   sourcetype = itsi_internal_log

2022-09-26 12:38:02,902 DEBUG [itsi_re(reId=cRdG)] [main] PropertyLoader:209 - itsiRulesEngine.localConfigurationFile properties file is not defined.
host = myhost   index = _internal   source = /opt/splunk/var/log/splunk/itsi_rules_engine.log   sourcetype = itsi_internal_log

All the SHs are on the same LAN/network, no firewall. The ERROR [itsi_re(reId=yVNs)] [main] RulesEngineSearch:75 - RulesEngineTask=RealTimeSearch, Status=Stopped, FunctionMessage="java.lang.NoSuchMethodError: 'com.fasterxml.jackson.core.util.JacksonFeatureSet com.fasterxml.jackson.core.JsonParser.getReadCapabilities()'" is logged every minute.
Hello, I'm trying to change my date format twice because I want to sort my months in order from January to December. I've been trying this search, but the field newPeriode2 isn't showing any results:

| eval newPeriode = strftime(strptime(Période,"%Y-%m-%d"),"%m-%Y")
| sort newPeriode
| eval newPeriode2 = strftime(strptime(newPeriode,"%m-%Y"), "%B-%Y")

This is what it looks like. I want my newPeriode2 to look like: January-2022, etc. Thanks for your help!
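The underlying issue with month ordering is that strings sort alphabetically, so sorting must happen on a parsed date (or a sortable key like %Y-%m), with the %B month-name formatting applied only for display. A Python sketch of the same strptime/strftime round-trip (sample dates are made up):

```python
from datetime import datetime

periods = ["2022-03-10", "2022-01-05", "2022-12-01"]
# Sort on the parsed date, then format as full month name for display only.
ordered = sorted(datetime.strptime(p, "%Y-%m-%d") for p in periods)
labels = [d.strftime("%B-%Y") for d in ordered]
print(labels)  # ['January-2022', 'March-2022', 'December-2022']
```

One caveat: %B output is locale-dependent, so French-named months (as Période suggests) would not round-trip through an English %B parse.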
Hi, I have this search:

| stats count by application
| eval application = case( application=="malware-detection", "Malware", !isnull(application), upper(substr(application,1,1)).substr(application,2) )
| eventstats sum(count) as total
| eval count_2=round(100*count/total,2)
| fields - total
| eval count_perc="".count_2."%"
| rename application as Application, count as Count

and I would like to show the Application, Count, and count_perc fields on my pie chart, but Splunk still shows the Count%. My goal is to round the percentage; how can I do this? Thanks for the support!
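The percentage arithmetic in the eval is straightforward; the behavior described suggests the pie chart is computing its own percentage from Count rather than using the precomputed field. For reference, the computation the evals perform, sketched in Python with made-up counts:

```python
counts = {"Malware": 25, "Phishing": 55}
total = sum(counts.values())
# Same as: eval count_2=round(100*count/total,2) | eval count_perc="".count_2."%"
perc = {app: f"{round(100 * c / total, 2)}%" for app, c in counts.items()}
print(perc)  # {'Malware': '31.25%', 'Phishing': '68.75%'}
```

So the rounding logic itself is fine; the open question is how to make the visualization display the precomputed field instead of its own percentage.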
Used the Splunk JS Stack version 1.4, and when the code is executed, it raises "$.klass is not a function". Is there any way to resolve this issue?
Hello users, it seems that the TA-webtools app is not fully compatible with Splunk version 9, according to the "Upgrade Readiness App". Will you upgrade it? Thank you!
Hi, I am trying to get the Splunk_TA_esxilogs app to work in our Splunk environment, but can't get it working together with our app that rewrites index and sourcetype. I suspect that one Splunk Enterprise instance cannot rewrite the sourcetype and index more than once. The ESXi logs are already collected on a syslog server and forwarded to the Heavy Forwarder. At the HF we use a "rewrite app" with a regex to change the sourcetype from "syslog" to "esxi", based on the hostname, like this:

props.conf:

[syslog]
TRANSFORMS-force_vmware = force_sourcetype_vmware, force_ix_vmware

transforms.conf:

[force_sourcetype_vmware]
SOURCE_KEY = MetaData:Host
REGEX = ^host::(10\.24[1289]\.70\.\d+|10\.243\.12\.\d+|10\.25[01]\.70\.\d+|10\.252\.198\.50|10\.30\.209\.19[5-6]|10\.36\.1[128]\.\d+|10\.37\.12\.\d+|10\.45\.[12]\.\d+|10\.6[23]\.12.\d+|10\.63\.10\.20|10\.65\.(0|64)\.\d+|10\.65\.65\.65)
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::vmw-syslog

[force_ix_vmware]
SOURCE_KEY = MetaData:Sourcetype
REGEX = ^sourcetype::(?i)vmw-syslog$
DEST_KEY = _MetaData:Index
FORMAT = vmware-esxilog

So far, so good. This rewrite app does its job. The data now has index "vmware-esxilog" and sourcetype "vmw-syslog". Now the Splunk_TA_esxilogs app should, in theory, start processing the data:

props.conf:

####### INDEX TIME EXTRACTION ##########
[vmw-syslog]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?:.*?(?:[\d\-]{10}T[\d\:]{8}(?:\.\d+)?(?:Z|[\+\-][\d\:]{5})?)\s[^ ]+\s+[^ ]+\s+[^\->])|([\r\n]+)(?:.*?\w+\s+\d+\s+\d{2}:\d{2}:\d{2})(?:\s+[^ ]+\s+)+[^\->]
TZ = UTC
DATETIME_CONFIG = /etc/apps/Splunk_TA_esxilogs/default/syslog_datetime.xml
TRANSFORMS-nullqueue = vmware_generic_level_null
TRANSFORMS-vmsyslogsourcetype = set_syslog_sourcetype,set_syslog_sourcetype_4x,set_syslog_sourcetype_sections
TRANSFORMS-vmsyslogsource = set_syslog_source

But it doesn't. The data gets indexed without being touched by the Splunk_TA_esxilogs app.
It works IF I disable the HF rewrite app and change the stanza in Splunk_TA_esxilogs from [vmw-syslog] to [syslog], but that would hit way too wide. The name of the HF rewrite app starts with "05", so its configuration comes before the app named "Splunk_TA_esxilogs". Any suggestions are highly appreciated.
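Independent of the ordering question, the host regex in the rewrite app can be unit-tested outside Splunk. A Python check against sample hosts (the sample IPs are chosen to fall inside and outside the ranges in the regex):

```python
import re

# The force_sourcetype_vmware REGEX from transforms.conf, copied verbatim.
pattern = re.compile(
    r"^host::(10\.24[1289]\.70\.\d+|10\.243\.12\.\d+|10\.25[01]\.70\.\d+"
    r"|10\.252\.198\.50|10\.30\.209\.19[5-6]|10\.36\.1[128]\.\d+"
    r"|10\.37\.12\.\d+|10\.45\.[12]\.\d+|10\.6[23]\.12.\d+|10\.63\.10\.20"
    r"|10\.65\.(0|64)\.\d+|10\.65\.65\.65)"
)

print(bool(pattern.match("host::10.243.12.17")))  # True: inside an ESXi range
print(bool(pattern.match("host::10.99.99.99")))   # False: not an ESXi host
```

Checking the regex in isolation like this helps separate "the match never fires" from "the match fires but the later props stanza is not applied".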
Hi - I am trying to run the below query to help create an alert that will fire when we haven't had any events for a particular index in the last 15 minutes. I need to make it so it only includes specific indexes rather than all the indexes within Splunk, but can't seem to get it right. Any help on how to fix it, or on whether there is a better way to do this, would be massively appreciated!

| tstats latest(_time) as latest where index=* earliest=-24hr by index
| eval recent = if(latest > relative_time(now(),"-15m"),1,0), realLatest = strftime(latest,"%c")
| rename realLatest as "Last Log"
| where recent=0
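The recency test in the eval is plain epoch arithmetic, which can be sanity-checked in Python (900 seconds = 15 minutes; the epoch values are made up):

```python
import time

def is_recent(latest_epoch, now=None, window_seconds=15 * 60):
    """Mirror of: if(latest > relative_time(now(), "-15m"), 1, 0)."""
    now = time.time() if now is None else now
    return 1 if latest_epoch > now - window_seconds else 0

now = 1_000_000
print(is_recent(now - 60, now=now))    # 1: logged a minute ago, healthy
print(is_recent(now - 3600, now=now))  # 0: stale for an hour, would alert
```

As for limiting the indexes, one common approach is to put the filter directly in the tstats where clause, e.g. where (index=aaa OR index=bbb) instead of index=* (the index names here are placeholders).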
I have a need to compare the average time for certain events with the 5-minute buckets/bins of the same events. The idea is to find 5-minute intervals that deviate more than a certain percentage from the average response times, and then display those intervals in some way. I am, however, struggling to figure out how to output the average for the entire time period while also calculating the 5-minute intervals. The following query returns nothing (can you even do two stats in the same query?):

search | stats avg(Value) as AvgEntirePeriod | bin _time span=5m | stats avg(Value) by _time

Any ideas on how to write this?
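The reason two consecutive stats fail here is that the first stats discards _time (and Value), leaving nothing for the second to bucket; in SPL the usual fix is to compute the bucket averages first and attach the overall average with eventstats. The underlying computation, sketched in Python with made-up sample values:

```python
from statistics import mean

# (epoch_seconds, response_value) samples -- hypothetical data
events = [(0, 100), (100, 110), (400, 300), (700, 105)]

overall = mean(v for _, v in events)  # the AvgEntirePeriod equivalent

# Bucket into 5-minute (300 s) bins and average each bin.
bins = {}
for t, v in events:
    bins.setdefault(t // 300 * 300, []).append(v)

for start, values in sorted(bins.items()):
    bucket_avg = mean(values)
    deviation = 100 * (bucket_avg - overall) / overall
    print(start, round(bucket_avg, 1), f"{deviation:+.1f}%")
```

An SPL sketch of the same shape (field names taken from the question) would be roughly: bin _time span=5m | stats avg(Value) as BucketAvg by _time | eventstats avg(BucketAvg) as AvgEntirePeriod | where abs(BucketAvg-AvgEntirePeriod) > threshold*AvgEntirePeriod.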