
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi, I have 3 panels which display a single value, with a condition that if the result count is zero, that panel should not display on the dashboard. However, it is not working properly. The panels are still hidden even though the result count is > 0 for all of them. How can this be fixed?

<dashboard>
  <init>
    <set token="eduration">-24h@m</set>
    <set token="lduration">now</set>
  </init>
  <row>
    <panel depends="$show$">
      <single>
        <title>Panel</title>
        <search>
          <query>search_query</query>
          <earliest>$eduration$</earliest>
          <latest>$lduration$</latest>
          <sampleRatio>1</sampleRatio>
          <progress>
            <condition match="'job.resultCount' == 0">
              <set token="show">true</set>
            </condition>
            <condition>
              <unset token="show"></unset>
            </condition>
          </progress>
        </search>
        <option name="colorBy">value</option>
        <option name="colorMode">block</option>
        <option name="drilldown">none</option>
        <option name="numberPrecision">0</option>
        <option name="rangeColors">["0x53a051","0xf8be34","0xdc4e41"]</option>
        <option name="rangeValues">[0,10]</option>
        <option name="showSparkline">1</option>
        <option name="showTrendIndicator">1</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
        <option name="trendColorInterpretation">standard</option>
        <option name="trendDisplayMode">absolute</option>
        <option name="unitPosition">after</option>
        <option name="useColors">1</option>
        <option name="useThousandSeparators">1</option>
      </single>
    </panel>
    <panel depends="$show1$">
      <single>
        <title>Panel1</title>
        <search>
          <query>search_query</query>
          <earliest>$eduration$</earliest>
          <latest>$lduration$</latest>
          <sampleRatio>1</sampleRatio>
          <progress>
            <condition match="'job.resultCount' == 0">
              <set token="show1">true</set>
            </condition>
            <condition>
              <unset token="show1"></unset>
            </condition>
          </progress>
        </search>
        <option name="colorBy">value</option>
        <option name="colorMode">block</option>
        <option name="drilldown">none</option>
        <option name="numberPrecision">0</option>
        <option name="rangeColors">["0x53a051","0xf8be34","0xdc4e41"]</option>
        <option name="rangeValues">[0,10]</option>
        <option name="showSparkline">1</option>
        <option name="showTrendIndicator">1</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
        <option name="trendColorInterpretation">standard</option>
        <option name="trendDisplayMode">absolute</option>
        <option name="unitPosition">after</option>
        <option name="useColors">1</option>
        <option name="useThousandSeparators">1</option>
      </single>
    </panel>
    <panel depends="$show2$">
      <single>
        <title>Panel3</title>
        <search>
          <query>search_query</query>
          <earliest>$eduration$</earliest>
          <latest>$lduration$</latest>
          <sampleRatio>1</sampleRatio>
          <progress>
            <condition match="'job.resultCount' == 0">
              <set token="show2">true</set>
            </condition>
            <condition>
              <unset token="show2"></unset>
            </condition>
          </progress>
        </search>
        <option name="colorBy">value</option>
        <option name="colorMode">block</option>
        <option name="drilldown">none</option>
        <option name="numberPrecision">0</option>
        <option name="rangeColors">["0x53a051","0xf8be34","0xdc4e41"]</option>
        <option name="rangeValues">[0,10]</option>
        <option name="showSparkline">1</option>
        <option name="showTrendIndicator">1</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
        <option name="trendColorInterpretation">standard</option>
        <option name="trendDisplayMode">absolute</option>
        <option name="unitPosition">after</option>
        <option name="useColors">1</option>
        <option name="useThousandSeparators">1</option>
      </single>
    </panel>
  </row>
</dashboard>
Provide details about client purchases: 1. Total purchases split by product ID; 2. Total products split by product ID, with raw data.
We are trying to configure Azure Storage Blob modular inputs for the Splunk Add-on for Microsoft Cloud Services to get reports that come in CSV format. We have created a props.conf in the /opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/local folder with the following sourcetype stanza, and field extraction is still not working. Any advice?

[mscs:storage:blob:csv]
BREAK_ONLY_BEFORE_DATE =
DATETIME_CONFIG = CURRENT
INDEXED_EXTRACTIONS = csv
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false

Thank you!
Hello. If it is now 30/12/2021 22:30, how can I search for timestamps from 29/12/2021 00:00:00 (i.e. the beginning of 29/12/2021, or dynamically "the beginning of yesterday")? I need this in the search itself rather than via the GUI presets etc. Thanks!
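A minimal sketch using Splunk's relative time modifiers, where -1d@d snaps back one day and then to midnight (the index name is a placeholder):

index=my_index earliest=-1d@d

The same snap logic is available in eval if the boundary is needed as a value:

| eval yesterday_start=relative_time(now(), "-1d@d")
| where _time >= yesterday_start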
Our DNS logs are sent via syslog to a HF through an Epilog agent. The Epilog agent reads the DNS log file line by line, and each line is sent as a separate event to the HF, looking something like this:

Dec 24 04:05:11 192.####### MSDNSLog 0 12/24/2021 12:02:06 AM 04B4 PACKET 000000####### UDP Rcv 142.####### 3f94 R Q [8281 DR SERVFAIL] PTR (2)87in-addr(4)arpa(0)
Dec 24 04:05:11 192.####### MSDNSLog 0 UDP response info at 000000EE456861F0
Dec 24 04:05:11 192.####### MSDNSLog 0 Socket = 1244
Dec 24 04:05:11 192.####### MSDNSLog 0 Remote addr 142.1#######, port 53
Dec 24 04:05:11 192.####### MSDNSLog 0 Time Query=1220313, Queued=0, Expire=0
Dec 24 04:05:11 192.####### MSDNSLog 0 Buf length = 0x0fa0 (4000)
Dec 24 04:05:11 192.####### MSDNSLog 0 Msg length = 0x0037 (55)
Dec 24 04:05:11 192.####### MSDNSLog 0 Message:
Dec 24 04:05:11 192.####### MSDNSLog 0 XID 0x3f94
Dec 24 04:05:11 192.####### MSDNSLog 0 Flags 0x8182
Dec 24 04:05:11 192.####### MSDNSLog 0 QR 1 (RESPONSE)
Dec 24 04:05:11 192.####### MSDNSLog 0 OPCODE 0 (QUERY)
Dec 24 04:05:11 192.####### MSDNSLog 0 AA 0
Dec 24 04:05:11 192.####### MSDNSLog 0 TC 0
Dec 24 04:05:11 192.####### MSDNSLog 0 RD 1
Dec 24 04:05:11 192.####### MSDNSLog 0 RA 1
Dec 24 04:05:11 192.####### MSDNSLog 0 Z 0
Dec 24 04:05:11 192.####### MSDNSLog 0 CD 0
Dec 24 04:05:11 192.####### MSDNSLog 0 AD 0
Dec 24 04:05:11 192.####### MSDNSLog 0 RCODE 2 (SERVFAIL)
Dec 24 04:05:11 192.####### MSDNSLog 0 QCOUNT 1
Dec 24 04:05:11 192.1####### MSDNSLog 0 ACOUNT 0
Dec 24 04:05:11 192.####### MSDNSLog 0 NSCOUNT 0
Dec 24 04:05:11 192.1###### MSDNSLog 0 ARCOUNT 1
Dec 24 04:05:11 192.1##### MSDNSLog 0 QUESTION SECTION:
Dec 24 04:05:11 192.1##### MSDNSLog 0 Offset = 0x000c, RR count = 0

So originally each of those lines was indexed as a separate event in Splunk. I played around with the props.conf file for that specific sourcetype and set the parameters as follows:

SHOULD_LINEMERGE = TRUE
TIME_PREFIX set to match "Dec 24 04:05:11 192.###### MSDNSLog 0"
TIME_FORMAT = %m/%d/%Y %l:%M:%S %p
BREAK_ONLY_BEFORE = PACKET (every event starts with a line that contains PACKET)
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 0
MAX_EVENTS = 500000 (I've seen some events be very long)
MAX_TIMESTAMP_LOOKAHEAD = 100
SEDCMD-null = regex to get rid of "Dec 24 04:05:11 192.####### MSDNSLog 0" at the beginning of every line

Based on my understanding (and I played around with Add Data on a search head with the above parameters, where it works), the following should happen: the lines are broken on each new line, then they are merged, with each new event being formed when a line has PACKET in it; the timestamp is extracted, and then the MSDNSLog prefix at the beginning of each line is removed. However, I'm not seeing the timestamp being extracted properly, and some (not all) of the DNS events get split into separate events as shown below. What could I be missing to get all events merged correctly? Please keep in mind that using sysmon/network tap/stream is not an option at the moment, so I'm stuck with trying to get the data ingested properly using the conf files.
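For reference, a consolidated sketch of the stanza described above; the sourcetype name and both regexes are assumptions to adapt to the actual prefix format:

# props.conf sketch for the MS DNS debug-log sourcetype (hypothetical stanza name)
[msdns:debug]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = PACKET
LINE_BREAKER = ([\r\n]+)
# Skip past the syslog/Epilog prefix before looking for the MS DNS timestamp (regex is an assumption)
TIME_PREFIX = MSDNSLog\s+\d+\s+
TIME_FORMAT = %m/%d/%Y %l:%M:%S %p
MAX_TIMESTAMP_LOOKAHEAD = 100
TRUNCATE = 0
MAX_EVENTS = 500000
# Strip the repeated syslog prefix from every merged line (regex is an assumption)
SEDCMD-strip_prefix = s/\w{3} \d{2} [\d:]{8} \S+ ?MSDNSLog \d+ //g

Note that line merging and timestamp extraction happen at the first parsing tier (the HF in this setup), so the stanza has to live there rather than only on the search head used for the Add Data test.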
Hello, I am new to Splunk. I have successfully got our SC4S server set up and sending info to Splunk. I am working on getting data in from our Barracuda Web Filter. The data is going in but is getting assigned a sourcetype of nix:syslog. I have installed the BarracudaWebFilter app in Splunk, but for it to work I am reading that the sourcetype needs to be "barracuda". I believe I need to add a line in the splunk_metadata.csv file on the SC4S server but am not sure what it should be. Has anybody else set this up and have any info they could provide? Thanks,
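For reference, splunk_metadata.csv overrides in SC4S take the form key,metadata,value. A minimal sketch, assuming the SC4S key for the Barracuda Web Filter is barracuda_webfilter (the exact key should be confirmed in the SC4S sources documentation, and SC4S must already be classifying the traffic as Barracuda rather than generic syslog for the override to take effect):

barracuda_webfilter,sourcetype,barracuda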
Hi Team, We are frequently seeing dispatch directory messages in the Splunk GUI. Please help me handle this the right way with a permanent solution. Also, we have an idea that we can increase the threshold limit, so please advise how to increase the threshold limit correctly so that we stop seeing these messages in the near future. Regards,
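A minimal sketch of raising the warning threshold, assuming the message is the "number of search artifacts in the dispatch directory is higher than recommended" warning, which is governed by dispatch_dir_warning_size in limits.conf (verify the attribute in limits.conf.spec for your version; cleaning up old artifacts and reviewing scheduled-search TTLs is usually the more permanent fix):

# $SPLUNK_HOME/etc/system/local/limits.conf (sketch, assumed attribute)
[search]
# Raise the artifact-count warning threshold (the spec also documents disabling the warning)
dispatch_dir_warning_size = 10000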
I have looked for solutions, but I have mostly found results regarding only current-versus-past time comparison, which is not what I need. I have a query that bins _time by 24h spans over the previous 7 days and calculates a numeric value associated with those time spans. What I need is to compare each day's value to the rest of the week's and find any time period (so 48h) where the number jumped significantly. An example of something similar to my code:

index=sandwiches saved_search_name="yum" earliest=-7d
| bin span=24h _time
| search sandwich_type="PB&J"
| stats count by total_bread_type _time
| stats sum(total_bread_type) as bread by _time
| eval bread = round(bread / 10000, 2)

Currently the results are like this:

_time               bread
2021-12-22 18:00    22
2021-12-23 18:00    23
2021-12-24 18:00    21
2021-12-25 18:00    47
2021-12-26 18:00    48
2021-12-27 18:00    46
2021-12-28 18:00    47

Basically I am looking to compare the 'bread' values by _time and figure out if/where there is a jump of 10 or more and return that data. Any insight would be appreciated. Thanks!
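A minimal sketch of flagging the jump by comparing each day to the previous one, appended after the existing search (the threshold of 10 comes straight from the description above):

| sort 0 _time
| delta bread as bread_change
| where abs(bread_change) >= 10
| table _time bread bread_change

delta computes the difference between adjacent rows; a streamstats window over more than one prior day would be the variant to use if the comparison should be against the weekly baseline rather than just the previous day.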
How do I pair events 4778 & 4779 for the same Logon_ID when I have multiple 4778 and multiple 4779 events? I would like to pair the first 4779 event (disconnect) with the first 4778 event (reconnect), and then do the same for the second 4779 event with the second 4778 event, etc.
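A minimal sketch of one way to pair the Nth occurrence of each event code per Logon_ID, assuming Windows Security events with EventCode and Logon_ID fields (the index name is a placeholder):

index=wineventlog EventCode IN (4778, 4779)
| sort 0 _time
| streamstats count as occurrence by Logon_ID, EventCode
| stats min(eval(if(EventCode=4779, _time, null()))) as disconnect_time
        min(eval(if(EventCode=4778, _time, null()))) as reconnect_time
        by Logon_ID, occurrence
| convert ctime(disconnect_time) ctime(reconnect_time)

The occurrence counter numbers each code's events in time order per Logon_ID, so the first 4779 lands on the same row as the first 4778, the second with the second, and so on.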
I have been thrown into upgrading our current Splunk servers. We are running 7.0.3 and need to get to the latest 8.2.4. Is that possible, or do I need a step plan? My current setup is as follows:
Master (not sure it is used)
Deployer
Indexers (4)
Search heads (3)
Each individual server is a universal forwarder, but we are looking at putting in intermediate UFs between the servers and the indexers. I am new to Splunk, so I am not sure where to start. Also, how do I verify licenses?
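A quick sketch of CLI checks that can help with the inventory and the license question (paths assume a default Linux install; the credentials are placeholders):

# Confirm the installed version on each instance
/opt/splunk/bin/splunk version

# List the licenses installed (run on the license master)
/opt/splunk/bin/splunk list licenses -auth admin:<password>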
I have data in index=xyz in JSON format with HTTP status codes from specific applications. The below is a single event:

{
  "Application1": "200",
  "Application2": "200",
  "Application3": "200"
}

I want the data to be visualized like:

Application     Status    reltime
Application1    200       3 hours ago
Application2    200       3 hours ago

How can I get output like this?
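A minimal sketch of pivoting the single wide event into one row per application, assuming the ApplicationN fields come from the JSON (spath covers the case where they are not auto-extracted):

index=xyz
| head 1
| spath
| eval event_time=_time
| table event_time Application*
| untable event_time Application Status
| eval reltime=tostring(now() - event_time, "duration")
| table Application Status reltime

untable turns the ApplicationN columns into rows; the duration format can be swapped for a friendlier "N hours ago" string with further eval logic.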
Hello, As we know, trying to create an all-encompassing search for log4j is a very difficult task because of the infinite number of possibilities for entering the letters jndi and any of the possible protocols, e.g. ldap, dns, https, etc. We came up with this SPL, which has been very successful. However, there is a good possibility of some false positives. We haven't found too many though.

index=* AND "\$"
```All sites discussing log4j show examples containing at least one dollar sign. Therefore, we are only reporting those types of events.```
```No evidence (thus far) has been shown that these logs would contain log4j-type strings. Therefore, these logs are excluded.```
AND NOT source IN (/var/adm/messages, /var/adm/sulog, /var/adm/syslog*, /var/log/authlog, /var/log/messages, /var/log/secure, /var/log/syslog, /var/log/sudo.log, bandwidth, cpu, interfaces, iostat, netstat, openPorts, protocol, ps, top, vmstat, nfsiostat)
AND NOT sourcetype=syslog
```These 12 strings have been found in events with different variations of the log4j string.```
AND ((Basic AND Base64) OR "/securityscan" OR "/callback" OR exploit OR "/nessus" OR (interact OR interactsh) OR kryptoslogic OR "service.exfil.site" OR secresponstaskfrce OR billdemirkapi OR mburpcollab OR leakix)
```Flags/Indicators to match the different strings above.```
| eval base64=if(match(_raw,"Base64"),"X","")
| eval dnsscan=if(match(_raw,"/securityscan"),"X","")
| eval exploit=if(match(_raw,"Exploit"),"X","")
| eval nessus=if(match(_raw,"/nessus"),"X","")
| eval interact=if(match(_raw,"interact") or match(_raw,"interactsh"),"X","")
| eval kryptos=if(match(_raw,"kryptoslogic"),"X","")
| eval exfilsite=if(match(_raw,"service.exfil.site"),"X","")
| eval secrettask=if(match(_raw,"secresponstaskfrce"),"X","")
| eval billdemirk=if(match(_raw,"billdemirkapi"),"X","")
| eval burpcollab=if(match(_raw,"mburpcollab"),"X","")
| eval leakix=if(match(_raw,"leakix"),"X","")
```These are the known protocols where log4j attacks have been seen. These matches look for the first letter used for each protocol (j), followed by anything, then the next letter (n), etc. This "hopefully" will catch any/all possible variations used by attackers. Note: A future search is being designed to find where URL Encoding replaces any/all of the letters within each JNDI protocol string.```
| where match(_raw,"j.*n.*d.*i.*\:.*l.*d.*a.*p") or match(_raw,"j.*n.*d.*i.*\:.*d.*n.*s") or match(_raw,"j.*n.*d.*i.*\:.*r.*m.*i") or match(_raw,"j.*n.*d.*i.*\:.*l.*d.*a.*p.*s") or match(_raw,"j.*n.*d.*i.*\:.*n.*i.*s") or match(_raw,"j.*n.*d.*i.*\:.*i.*i.*o.*p") or match(_raw,"j.*n.*d.*i.*\:.*c.*o.*r.*b.*a") or match(_raw,"j.*n.*d.*i.*\:.*n.*d.*s") or match(_raw,"j.*n.*d.*i.*\:.*h.*t.*t.*p") or match(_raw,"j.*n.*d.*i.*\:.*h.*t.*t.*p.*s") or match(_raw,"(\:)*-") or match(_raw,"lower\:") or match(_raw,"upper\:") or match(_raw,"date\:") or match(_raw,"env\:") or match(_raw,"jndi")
| sort 0 -_time
| table _time, index, host, source, status, base64, dnsscan, exploit, nessus, interact, kryptos, exfilsite, secrettask, billdemirk, burpcollab, leakix, _raw, http_user_agent

We have found events also substituting URL Encoded characters for jndi. Please let me, and the rest of our Splunk community, know if there are any issues with this search, any new text signatures discovered other than those in Step 2, and any other discoveries. Together, let's find and stop this vulnerability. Thanks and God bless, Genesius
Aside from the MC distributed-mode checks, do you have a comprehensive checklist you run to make sure all counters & components are healthy in Splunk Enterprise & ES? I know a lot of us love the MC. Just wondering what else the champs do in their environments, please? Thank you & happy 2022.
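Not a checklist in itself, but one quick sketch that often complements the MC: trending splunkd warnings and errors across the deployment from the _internal index (standard splunkd.log fields; adjust the time range to taste):

index=_internal sourcetype=splunkd log_level IN (WARN, ERROR)
| stats count by host, component
| sort - count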
I have a join where there are 2 different SLAs (Active and E2E) that need to be linked to incidents on one row. How can I extend the search below to do that? All the fields in the table show twice, except for dv_sla, which shows one of each SLA. Both SLAs contribute to different measures that I need to follow up with.

index=servicenow sourcetype=incident
| fields sys_id, number, closed_at, dv_state, dv_u_technical_service, dv_problem_id, proactive, dv_parent_incident
| join type=inner number max=0
    [ | search index=servicenow sourcetype="task_sla" dv_sla="Active*" OR dv_sla="E2E*"
      | fields sys_id, dv_task, dv_sla, dv_stage, dv_has_breached, business_duration
      | rename dv_task as number, dv_state as task_state ]
| stats latest(*) as * by sys_id
| search dv_stage="Completed" AND proactive="false"
| table number, dv_sla, closed_at, dv_state, dv_u_technical_service, dv_problem_id, proactive, dv_parent_incident

Thanks
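A minimal sketch of one way to split the SLA-specific measures into per-type columns before collapsing to one row per incident (field names are taken from the search above; the measure list is illustrative and would need to be extended):

| eval sla_type=case(like(dv_sla, "Active%"), "active", like(dv_sla, "E2E%"), "e2e")
| stats latest(closed_at) as closed_at latest(dv_state) as dv_state values(dv_sla) as dv_sla
        latest(eval(if(sla_type="active", dv_has_breached, null()))) as active_breached
        latest(eval(if(sla_type="e2e", dv_has_breached, null()))) as e2e_breached
        latest(eval(if(sla_type="active", business_duration, null()))) as active_duration
        latest(eval(if(sla_type="e2e", business_duration, null()))) as e2e_duration
        by number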
Hi, I am trying to use the Splunk Enterprise Security 7-Day Trial through this link: https://www.splunk.com/en_us/form/splunk-enterprise-security-guided-product-tour.html After signing up with 2 of my emails, I am redirected to the link below: https://www.splunk.com/en_us/form/splunk-enterprise-security-guided-product-tour/thanks.html I have yet to receive any instructions on how to use the app in Splunk Enterprise. Are there any extra steps I am missing?
Hi All, When I install the Universal Forwarder on different PCs using a local user in a domain environment, logs are received at Splunk Enterprise; when I use a domain user, they are not. Has anyone faced this issue?
Hello, We have IBM VIOS servers running AIX and we need to monitor them, mainly in terms of security. Does anyone have experience with that? Did you install a Splunk Universal Forwarder, or are you sending data out via syslog? Thanks a lot, Edoardo
When upgrading apps/add-ons in a distributed environment, is there a recommended best practice, or is it similar to deploying the app initially, where I can just paste the newer version downloaded from Splunkbase over the existing app and then push the new bundle to the peers to fully update it? For example, with versions 1 and 2 in the same shcluster/apps directory, will the latest version take priority over the older one while also keeping the configurations made in the previous version? Local changes on the search heads do not appear to be visible on the deployer, so do I also have to take the local directory of the related app from the search heads and include it inside the newly updated app before pushing through the deployer? PS: The app I am trying to update is ES Content Update. Or maybe there is a specific push command to preserve local changes? Any and all help is welcome, thanks in advance!
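For reference, a sketch of the push command with the flag that preserves lookup files on the members (run on the deployer; the target URI and credentials are placeholders, and behavior around member-local config changes should be confirmed in the Distributed Search docs for your version):

# On the deployer, after replacing the app under $SPLUNK_HOME/etc/shcluster/apps/
splunk apply shcluster-bundle -target https://<any_sh_member>:8089 -preserve-lookups true -auth admin:<password>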
Hello, I am using the below query to output which of our searches/rules are mapped to which MITRE technique IDs.

| inputlookup mitre_all_rule_technique_lookup
| `lookup_technique_tactic_from_rule_name`
| search rule_disabled=0
| dedup rule_name, technique_id, rule_disabled

The result is as follows:

rule_name   tactic_ID   tactic_name       Technique_ID   Technique_name
Rule001     TA001       Persistence       T1136          Create Account
Rule001     TA002       Persistence       T1098          Account Manipulation
Rule001     TA008       Defense Evasion   Txxxx          Modify infrastructure

As you can see, it is showing different entries for the same data in the "rule_name" column. The rule mentioned in the rule_name column is mapped to 3 different tactic_IDs, Technique_IDs etc., which is why it shows 3 results for the same rule. How can I consolidate all this? Basically this is the output I want:

rule_name   tactic_ID           tactic_name                               Technique_ID        Technique_name
Rule001     TA001 TA002 TA008   Persistence Persistence Defense Evasion   T1136 T1098 TXXXX   Create Account Account Manipulation Modify infrastructure
Rule002     TAxxx TAXXX ...     ...                                       ...                 ...

If I change the dedup command in the query to | dedup rule_name, then it displays only the 1st row of every rule_name and omits the remaining values. Please advise. I am sure this is something very fundamental.
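A minimal sketch of consolidating to one row per rule with multivalue columns, replacing the dedup with a stats (field-name casing is assumed to match the lookup; list() preserves row order and pairing, while values() would de-duplicate and sort):

| inputlookup mitre_all_rule_technique_lookup
| `lookup_technique_tactic_from_rule_name`
| search rule_disabled=0
| stats list(tactic_ID) as tactic_ID list(tactic_name) as tactic_name
        list(Technique_ID) as Technique_ID list(Technique_name) as Technique_name
        by rule_name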
Currently it's difficult to parse out the details of cluster events in Splunk to enable more useful dashboard panels. I'm looking for suggestions on how to extract, from the event.go log lines, the columns we would see when we run "oc get events" on a cluster: namespace, last seen, type, reason, object, message. Once we can extract those fields and make them available for Splunk stats/tables/timecharts, we can put some useful panels together to gauge plant health:
Realtime views around created/started containers/pods and failures
Realtime views around job start/failure/complete
Realtime views into failed mounts and types of failures
Realtime views on image pulls, success, backoffs, failures, denies
I'd appreciate help with any docs/leads and high-level ideas to achieve this, please. Sample events:

12/30/21 1:59:07.000 AM
<135>Dec 30 06:59:07 9000n2.nodes.com kubernetes.var.log.containers.ku: namespace_name=openshift-kube-controller-manager, container_name=kube-controller-manager, pod_name=kube-controller-manager-9000n2.nodes.com, message=I1230 06:58:56.139184 1 event.go:291] "Event occurred" object="openshift-logging/elasticsearch-im-infra" kind="CronJob" apiVersion="batch/v1beta1" type="Warning" reason="FailedNeedsStart" message="Cannot determine if job needs to be started: too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew"
host = laas-agent-log-forwarder-6dddb6d69c-95t4b
source = /namespace/openshift-kube-controller-manager
sourcetype = ocpprod.stepping-infra-openshift-kube-controller-manager:application

12/30/21 1:59:07.000 AM
<135>Dec 30 06:59:07 9000n2.nodes.com kubernetes.var.log.containers.ku: namespace_name=openshift-kube-controller-manager, container_name=kube-controller-manager, pod_name=kube-controller-manager-9000n2.nodes.com, message=I1230 06:58:56.133312 1 event.go:291] "Event occurred" object="openshift-logging/elasticsearch-im-audit" kind="CronJob" apiVersion="batch/v1beta1" type="Warning" reason="FailedNeedsStart" message="Cannot determine if job needs to be started: too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew"
host = laas-agent-log-forwarder-6dddb6d69c-95t4b
source = /namespace/openshift-kube-controller-manager
sourcetype = ocpprod.stepping-infra-openshift-kube-controller-manager:application
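A minimal sketch of pulling the oc-get-events-style columns out of the event.go lines with rex, based on the sample events above (the index/sourcetype filter is a placeholder to adapt to the environment):

index=openshift sourcetype=*kube-controller-manager* "event.go" "Event occurred"
| rex field=_raw "object=\"(?<object>[^\"]+)\"\s+kind=\"(?<kind>[^\"]+)\"\s+apiVersion=\"(?<api_version>[^\"]+)\"\s+type=\"(?<event_type>[^\"]+)\"\s+reason=\"(?<reason>[^\"]+)\"\s+message=\"(?<event_message>[^\"]+)\""
| eval namespace=mvindex(split(object, "/"), 0)
| stats count latest(_time) as last_seen by namespace, event_type, reason, object, event_message
| convert ctime(last_seen)

With these fields extracted (or promoted to a search-time EXTRACT/field extraction on the sourcetype), the realtime panels listed above reduce to timecharts and stats filtered by reason, e.g. reason IN (Failed, BackOff, FailedMount, Pulled, Created, Started).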