All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, for security reasons, how can I block the viewing of JVM variables in the AppDynamics console? Is there a different way to block them, other than through agent configuration (the sensitive-data-filter setting in app-agent-config.xml)? Thanks.
Hi, I have problems with the drilldown button in the "Risk Event Timeline" view for a Risk Notable. When expanding risk rules in the "Risk Event Timeline" view, you can click on a drilldown field named "Contributing events: View contributing events". This button is disabled with the following message: "View contributing events" link is disabled as there is no drilldown search available for this risk rule. The risk rule is configured as a notable and has a drilldown search. Does anybody know how to enable the drilldown search in the "Risk Event Timeline" view?
I want to apply the XML code below (highlighted in red) in my Splunk dashboard source for a stats table, but only when the dropdown field value is "db2_cloud2".

<format type="color" field="REPLAY_LATENCY">
<colorPalette type="expression">if(value&gt;45,"#D93F3C","")</colorPalette>
</format>

Below is a screenshot of the dashboard.
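One possible approach (a sketch, not verified on every Splunk version): set a token from the dropdown's <change> handler and substitute it inside the <colorPalette> expression, so the threshold coloring is only active when db2_cloud2 is selected. The token name latency_color and the input structure below are assumptions:

```xml
<input type="dropdown" token="db_env">
  <label>Database</label>
  <choice value="db2_cloud2">db2_cloud2</choice>
  <change>
    <condition value="db2_cloud2">
      <!-- threshold expression only for db2_cloud2 -->
      <set token="latency_color">if(value&gt;45,"#D93F3C","")</set>
    </condition>
    <condition>
      <!-- empty expression: no coloring for other values -->
      <set token="latency_color"></set>
    </condition>
  </change>
</input>
<!-- inside the table element: -->
<format type="color" field="REPLAY_LATENCY">
  <colorPalette type="expression">$latency_color$</colorPalette>
</format>
```

If token substitution inside <format> does not work in your version, an alternative is two copies of the table panel gated with depends/rejects on the token.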
Here are the error messages:

2022-09-26 12:38:02,976 ERROR [itsi_re(reId=cRdG)] [main] RulesEngineSearch:75 - RulesEngineTask=RealTimeSearch, Status=Stopped, FunctionMessage="java.lang.NoSuchMethodError: com.fasterxml.jackson.core.JsonParser.getReadCapabilities()Lcom/fasterxml/jackson/core/util/JacksonFeatureSet;"
host = myhost   index = _internal   source = /opt/splunk/var/log/splunk/itsi_rules_engine.log   sourcetype = itsi_internal_log

2022-09-26 12:38:02,976 ERROR [itsi_re(reId=cRdG)] [main] RulesEngineSearch:74 - RulesEngineTask=RulesEngineJob, Status=Stopped
host = myhost   index = _internal   source = /opt/splunk/var/log/splunk/itsi_rules_engine.log   sourcetype = itsi_internal_log

2022-09-26 12:38:02,902 DEBUG [itsi_re(reId=cRdG)] [main] PropertyLoader:209 - itsiRulesEngine.localConfigurationFile properties file is not defined.
host = myhost   index = _internal   source = /opt/splunk/var/log/splunk/itsi_rules_engine.log   sourcetype = itsi_internal_log

All the SHs are on the same LAN/network, no firewall. The error ERROR [itsi_re(reId=yVNs)] [main] RulesEngineSearch:75 - RulesEngineTask=RealTimeSearch, Status=Stopped, FunctionMessage="java.lang.NoSuchMethodError: 'com.fasterxml.jackson.core.util.JacksonFeatureSet com.fasterxml.jackson.core.JsonParser.getReadCapabilities()'" is logged every minute.
Hello, I'm trying to change my date format twice because I want to sort my months in order from January to December. I've been trying this search, but the field newPeriode2 doesn't show any results:

| eval newPeriode = strftime(strptime(Période,"%Y-%m-%d"),"%m-%Y") | sort newPeriode | eval newPeriode2 = strftime(strptime(newPeriode,"%m-%Y"), "%B-%Y")

This is what it looks like. I want newPeriode2 to look like: January-2022, etc. Thanks for your help!
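A sketch of one way to do this (assuming Période really is in %Y-%m-%d format): sort on the raw epoch value rather than on a formatted string, since strings like "01-2022" sort lexicographically rather than chronologically, and format only once for display:

```
| eval periode_epoch = strptime(Période, "%Y-%m-%d")
| sort periode_epoch
| eval newPeriode2 = strftime(periode_epoch, "%B-%Y")
```

If newPeriode2 is still empty in the original search, it may be that strptime(Période,"%Y-%m-%d") fails because Période is not in that exact format; checking the intermediate epoch field makes that visible.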
Hi, I have this search:

| stats count by application | eval application = case( application=="malware-detection", "Malware", !isnull(application), upper(substr(application,1,1)).substr(application,2) ) | eventstats sum(count) as total | eval count_2=round(100*count/total,2) | fields - total | eval count_perc="".count_2."%" | rename application as Application, count as Count

and I would like to show the Application, Count, and count_perc fields on my pie chart, but Splunk still shows its own Count %. My goal is to round the percentage; how can I do this? Thanks for the support!
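One possible workaround (a sketch): a pie chart displays a single numeric field and computes its own slice percentages, so a pre-computed count_perc field is ignored. Embedding the rounded percentage into the label field shows it in the legend and slices instead; the field names label and perc here are illustrative:

```
| stats count by application
| eventstats sum(count) as total
| eval perc = round(100 * count / total, 2)
| eval label = application." (".perc."%)"
| fields label, count
```

The chart still sizes slices from count; only the displayed text carries the rounded value.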
I used the Splunk JS Stack version 1.4, and when the code is executed it raises "$.klass is not a function". Is there any way to resolve this issue?
Hello users, it seems that the TA-webtools app is not fully compatible with Splunk version 9, according to the "Upgrade Readiness App". Will you upgrade it? Thank you!
Hi, I am trying to get the Splunk_TA_esxilogs app to work in our Splunk environment, but I can't get it working together with our app that rewrites index and sourcetype. I suspect that one Splunk Enterprise instance cannot rewrite the sourcetype and index more than once. The ESXi logs are already collected on a syslog server and forwarded to the heavy forwarder. At the HF we use a "rewrite app" with a regex to change the sourcetype from "syslog" to "esxi", based on the hostname, like this:

props.conf:
[syslog]
TRANSFORMS-force_vmware = force_sourcetype_vmware, force_ix_vmware

transforms.conf:
[force_sourcetype_vmware]
SOURCE_KEY = MetaData:Host
REGEX = ^host::(10\.24[1289]\.70\.\d+|10\.243\.12\.\d+|10\.25[01]\.70\.\d+|10\.252\.198\.50|10\.30\.209\.19[5-6]|10\.36\.1[128]\.\d+|10\.37\.12\.\d+|10\.45\.[12]\.\d+|10\.6[23]\.12.\d+|10\.63\.10\.20|10\.65\.(0|64)\.\d+|10\.65\.65\.65)
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::vmw-syslog

[force_ix_vmware]
SOURCE_KEY = MetaData:Sourcetype
REGEX = ^sourcetype::(?i)vmw-syslog$
DEST_KEY = _MetaData:Index
FORMAT = vmware-esxilog

So far, so good. This rewrite app does its job. The data now has index "vmware-esxilog" and sourcetype "vmw-syslog". Now the Splunk_TA_esxilogs app should in theory start processing the data:

props.conf:
####### INDEX TIME EXTRACTION ##########
[vmw-syslog]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?:.*?(?:[\d\-]{10}T[\d\:]{8}(?:\.\d+)?(?:Z|[\+\-][\d\:]{5})?)\s[^ ]+\s+[^ ]+\s+[^\->])|([\r\n]+)(?:.*?\w+\s+\d+\s+\d{2}:\d{2}:\d{2})(?:\s+[^ ]+\s+)+[^\->]
TZ = UTC
DATETIME_CONFIG = /etc/apps/Splunk_TA_esxilogs/default/syslog_datetime.xml
TRANSFORMS-nullqueue = vmware_generic_level_null
TRANSFORMS-vmsyslogsourcetype = set_syslog_sourcetype,set_syslog_sourcetype_4x,set_syslog_sourcetype_sections
TRANSFORMS-vmsyslogsource = set_syslog_source

But it doesn't. The data gets indexed without being touched by the Splunk_TA_esxilogs app.
It works IF I disable the HF rewrite app and change the stanza in Splunk_TA_esxilogs from [vmw-syslog] to [syslog], but that will hit way too wide. The name of the HF rewrite app starts with "05", so its configuration comes before the app named "Splunk_TA_esxilogs". Any suggestions are highly appreciated.
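A possible explanation and sketch: index-time TRANSFORMS are selected based on the sourcetype an event has when it enters the typing processor, so a sourcetype rewritten by one app's transforms is not re-matched against props stanzas for the new sourcetype in the same pipeline pass. One workaround (untested, an assumption) is to chain the Splunk_TA_esxilogs index-time transforms onto the original [syslog] stanza, after the rewrite transforms; the transform names below are copied from the Splunk_TA_esxilogs config quoted above:

```
# props.conf in a local app on the HF (hypothetical override)
[syslog]
TRANSFORMS-force_vmware = force_sourcetype_vmware, force_ix_vmware
TRANSFORMS-zz_vmsyslog = set_syslog_sourcetype,set_syslog_sourcetype_4x,set_syslog_sourcetype_sections,set_syslog_source
```

Caveat: this applies the esxi transforms to all [syslog] traffic, so it only narrows correctly if those transforms' own REGEXes match ESXi lines exclusively; the LINE_BREAKER/DATETIME_CONFIG settings under [vmw-syslog] would still not apply at this stage.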
Hi - I am trying to run the query below to help create an alert that fires when a particular index hasn't received data for 15 minutes. I need it to include only specific indexes rather than all the indexes within Splunk, but can't seem to get it right. Any help on how to fix it, or letting me know if there is a better way to do this, would be massively appreciated!

| tstats latest(_time) as latest where index=* earliest=-24hr by index | eval recent = if(latest > relative_time(now(),"-15m"),1,0), realLatest = strftime(latest,"%c") | rename realLatest as "Last Log" | where recent=0
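A hedged sketch of one way to restrict this: replace index=* with an IN list in the tstats where clause. The index names below are placeholders:

```
| tstats latest(_time) as latest where index IN (index_a, index_b, index_c) earliest=-24h by index
| eval recent = if(latest > relative_time(now(), "-15m"), 1, 0), realLatest = strftime(latest, "%c")
| rename realLatest as "Last Log"
| where recent=0
```

Note that tstats only returns rows for indexes that have at least one event in the time range; an index that has been silent for the whole 24h window will not appear at all, which may matter for a "no data" alert.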
I need to compare the average time for certain events with 5-minute buckets/bins of the same events. The idea is to find 5-minute intervals that deviate more than a certain percentage from the average response times and then display those intervals in some way. I am, however, struggling to figure out how to output the average for the entire time period and also calculate the 5-minute intervals. The following query returns nothing (can you even do two stats in the same query?):

search | stats avg(Value) as AvgEntirePeriod | bin _time span=5m | stats avg(Value) by _time

Any ideas on how to write this?
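A sketch of one approach: the first stats collapses everything to a single row, discarding _time and Value, so the second stats has nothing left to aggregate. eventstats adds the overall average as a column without collapsing rows, so both figures survive. The base search and the 20% threshold below are placeholders:

```
search ...
| eventstats avg(Value) as AvgEntirePeriod
| bin _time span=5m
| stats avg(Value) as AvgPerBin, first(AvgEntirePeriod) as AvgEntirePeriod by _time
| where abs(AvgPerBin - AvgEntirePeriod) / AvgEntirePeriod > 0.2
```

The where clause keeps only the 5-minute intervals deviating more than the chosen fraction from the whole-period average.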
There is a bug on the Custom Dashboards page in the functionality of the "Show My Dashboards Only" check box. The option either does not display the related dashboards or errors out. In earlier versions of the Controller, enabling this option would apply a filter showing only dashboards related to the currently logged-in user. We have worked with AppD support to point out the error, and support did acknowledge it, but the error persists through subsequent controller upgrades. AppD Support ticket Request #335757, created back on August 11th.

Pre-Prod Controller: AppDynamics Controller build 22.8.1-715
Prod Controller: AppDynamics Controller build 22.9.1-1134

[Screenshots: pre-prod before filter (dashboards for "dietrich") and after filter; prod before filter (dashboards for "dietrich") and after filter.]
Hello all, I have email exchange transactional data with the fields below. I'm looking for data with a span of 1 day, e.g. how many emails each user sent with an attachment vs. without an attachment.

message_id, email_id, attachment_count, recipient_name
abc, nameA, 0, xyz

Expected result: date (like dd/mm/yy), email_id, HasAttachmentCount, NoAttachmentCount
1/1/2022, nameA, 4, 3

I am able to write chart (over email_id by isattachment) and get data for the selected duration, but I'm unable to split the data by day.
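A sketch of one approach, assuming attachment_count is numeric: the count(eval(...)) idiom counts only rows where the expression is true, and grouping by a formatted day field alongside email_id splits the result per day:

```
| eval day = strftime(_time, "%d/%m/%y")
| stats count(eval(attachment_count > 0)) as HasAttachmentCount, count(eval(attachment_count == 0)) as NoAttachmentCount by day, email_id
```

If the same message appears once per recipient in the transactional data, swapping count(...) for dc(...) over message_id (with a prior filter) would avoid double-counting; that depends on how the events are structured.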
Hi everyone, I am searching data in Splunk; after several steps, I now have this table:

_time                      count   Type
Mon Sep 12 00:00:00 2022   820     1
Mon Sep 12 00:00:00 2022   885     2
Tue Sep 13 00:00:00 2022   773     1
Tue Sep 13 00:00:00 2022   922     2
Wed Sep 14 00:00:00 2022   825     1
Wed Sep 14 00:00:00 2022   844     2
Thu Sep 15 00:00:00 2022   748     1
Thu Sep 15 00:00:00 2022   943     2
Fri Sep 16 00:00:00 2022   794     1
Fri Sep 16 00:00:00 2022   890     2
Sat Sep 17 00:00:00 2022   684     1
Sat Sep 17 00:00:00 2022   793     2
Sun Sep 18 00:00:00 2022   737     1
Sun Sep 18 00:00:00 2022   795     2
Mon Sep 19 00:00:00 2022   764     1
Mon Sep 19 00:00:00 2022   890     2
Tue Sep 20 00:00:00 2022   792     1
Tue Sep 20 00:00:00 2022   876     2
Wed Sep 21 00:00:00 2022   754     1
Wed Sep 21 00:00:00 2022   853     2
Thu Sep 22 00:00:00 2022   784     1
Thu Sep 22 00:00:00 2022   883     2
Fri Sep 23 00:00:00 2022   731     1
Fri Sep 23 00:00:00 2022   820     2
Sat Sep 24 00:00:00 2022   691     1
Sat Sep 24 00:00:00 2022   788     2
Sun Sep 25 00:00:00 2022   726     1
Sun Sep 25 00:00:00 2022   762     2
Mon Sep 26 00:00:00 2022   403     1
Mon Sep 26 00:00:00 2022   431     2

Actually there are more than 2 types, but I put just 2 here for simplicity. For now I can view the trend of the data for each type thanks to Trellis, 7 days per week. But I want another view that displays the data by type and compares the same day across different weeks. Something like this: Do you have any idea please? Thanks, Julia
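A sketch (untested): deriving a weekday field and a week-number field, then charting one against the other, puts the same weekday of different weeks side by side. Prefixing the weekday with %u (1 = Monday) keeps the columns in Monday-to-Sunday order; filter to one Type first, or keep a Trellis split by Type:

```
| eval weekday = strftime(_time, "%u-%a")
| eval week = "wk".strftime(_time, "%V")
| chart sum(count) over weekday by week
```

Each column is then one ISO week, and each row one weekday, so Mondays of successive weeks line up for comparison.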
Hey Splunkers!! I'm exploring summary indexing. I want to know how timestamps are extracted for each event when importing data into a summary index in Splunk Enterprise 9.0.1. I checked the documentation but couldn't find anything on this. Also, regarding the _time value of the event being summarized: will the event's _time value be carried over even if _time is not included in the search results?
Hi, if I have logs such as:

"Client authentication successful PAN-OS ver: 9.1.11-h3 Panorama ver:10.1.6-h3 Client IP: 10.68.196.211 Server IP: 10.58.217.123 Client CN: 013101004861"

"Client authentication successful PAN-OS ver: 9.1.11 Panorama ver:10.1.6-h6 Client IP: 10.58.90.53 Server IP: 10.58.90.200 Client CN: 010401005346",

how can I extract BOTH the PAN-OS and Panorama ver, i.e. 9.1.11, 10.1.6-h6, 10.1.6-h3, 9.1.11-h3?

I tried the following but it doesn't work:
| rex field=body "[Panorama][PAN-OS]\s*:(?<Software_Version>.+?) Client"
Can you please help?
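A sketch of one rex that captures both versions into separate fields. (In the attempted regex, [Panorama] is a character class matching a single letter, not the literal word, which is why it fails.) This assumes the literals "PAN-OS ver:" and "Panorama ver:" appear exactly as in the samples:

```
| rex field=body "PAN-OS ver:\s*(?<panos_ver>\S+)\s+Panorama ver:\s*(?<panorama_ver>\S+)"
```

On the first sample this would yield panos_ver=9.1.11-h3 and panorama_ver=10.1.6-h3, since \S+ stops at the next whitespace.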
Hello All, this is with reference to log ingestion from an IIS server. I have a universal forwarder installed on the IIS server, and it is already sending Windows logs. I want to ingest IIS logs. I downloaded https://splunkbase.splunk.com/app/3185 and installed it on the search head, following https://docs.splunk.com/Documentation/AddOns/released/MSIIS/Setupaddon, but it shows an invalid directory and I am stuck. My question is: do I have to install the add-on only on the search head, as that was what was mentioned in the Splunk documentation?
Hello everyone! I have the following search:

index="xyz" "restart" | eval _time = strftime(_time,"%F %H:%M:%S") | stats count as "count_of_starts" values(_time) as "restart_time" by host

Now I get a table with "host", "count_of_starts", and "restart_time", but the times inside values are ordered like:

2022-09-22 12:19:22
2022-09-22 12:19:46
2022-09-22 15:02:12
2022-09-22 15:02:36
2022-09-23 11:00:51
2022-09-23 11:01:16
2022-09-23 15:18:10
2022-09-23 15:18:34
2022-09-23 15:35:47
2022-09-23 15:36:11
2022-09-23 16:15:05
2022-09-23 16:15:30
2022-09-24 09:47:43
2022-09-24 09:48:06

I need these results but in the opposite order; how can I implement this? | sort - _time before or after stats didn't work, and | sort restart_time also didn't affect the results. Thank you all in advance! Kind regards, Ben
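A sketch of one workaround: values() always deduplicates and sorts its results ascending, which is why sorting before or after stats has no effect. list() preserves the incoming row order (capped at 100 values per field), so sorting descending first yields newest-first. Formatting into a new field also avoids overwriting _time:

```
index="xyz" "restart"
| sort - _time
| eval restart_time = strftime(_time, "%F %H:%M:%S")
| stats count as count_of_starts, list(restart_time) as restart_time by host
```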
When I was learning Splunk I encountered the following question: analyze the following SPL query:

* | outputlookup mydummy.csv

If no events were generated for the query, choose the appropriate option. Available answers:
a. At least one event needs to be created to add results to mydummy.csv
b. mydummy.csv will be created but the file will be empty
c. outputlookup cannot be used for queries which generate 0 results
d. mydummy.csv should be created before events can be added
e. mydummy.csv will be created

I'm doubting between c and d, or am I totally wrong?
How do I extract data from log message data using rex field=_raw? Sample data:

Instance Name : ABCDEFGH1
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ampxwdp1o.pharma.aventis.com)(PORT=12345)))
Alias ABCDEFGH1
Uptime 4 days 6 hr. 39 min. 25 sec
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=113.09.126.234)(PORT=12345)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC12345)))
The command completed successfully
Instance Name : ABCDEFGH1TEMP
Instance Name : ABCDEFGQ1

I need to extract Instance Name, Alias, and Uptime.
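A sketch (field names chosen for illustration), assuming the labels appear exactly as in the sample; max_match=0 collects every "Instance Name" occurrence into a multivalue field:

```
| rex field=_raw max_match=0 "Instance Name\s*:\s*(?<instance_name>\S+)"
| rex field=_raw "Alias\s+(?<alias>\S+)"
| rex field=_raw "Uptime\s+(?<uptime>\d+\s+days?\s+\d+\s+hr\.\s+\d+\s+min\.\s+\d+\s+sec)"
```

Against the sample this would give instance_name = {ABCDEFGH1, ABCDEFGH1TEMP, ABCDEFGQ1}, alias = ABCDEFGH1, and uptime = "4 days 6 hr. 39 min. 25 sec".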