Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

I have been thrown into upgrading our current Splunk servers. We are running 7.0.3 and need to get to the latest release, 8.2.4. Is that possible, or do I need a step plan? My current setup is as follows:

Master (not sure it is used)
Deployer
Indexers (4)
Search heads (3)

Each individual server is also a universal forwarder, but we are looking at putting intermediate forwarders between the servers and the indexers. I am new to Splunk, so I am not sure where to start. Also, how do I verify licenses?
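For the license question, a hedged sketch of two standard ways to see what licenses are installed, assuming CLI access on the license master (the credentials are placeholders, and the REST field names can vary slightly by version):

$SPLUNK_HOME/bin/splunk list licenses -auth admin:placeholder

or from the search bar:

| rest /services/licenser/licenses
| table label, status, quota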
I have data in index=xyz in JSON format, with HTTP status codes from specific applications. The below is a single event:

{
  "Application1": "200",
  "Application2": "200",
  "Application3": "200"
}

I want the data visualized like this:

Application     Status   reltime
Application1    200      3 hours ago
Application2    200      3 hours ago

How can I get output like this?
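A minimal sketch of one way to do this, assuming the JSON fields extract as Application1, Application2, and so on: untable flips the wide event into one row per application, and the reltime command renders the "3 hours ago" style timestamp:

index=xyz
| spath
| table _time Application*
| untable _time Application Status
| reltime
| table Application Status reltime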
Hello,

As we know, trying to create an all-encompassing search for log4j is a very difficult task because of the infinite number of possibilities for entering the letters jndi and any of the possible protocols, e.g. ldap, dns, https, etc. We came up with this SPL, which has been very successful. However, there is a good possibility of some false positives; we haven't found too many, though.

index=* AND "\$"
```All sites discussing log4j show examples containing at least one dollar sign. Therefore, we are only reporting those types of events.```
```No evidence (thus far) has been shown that these logs would contain log4j-type strings. Therefore, these logs are excluded.```
AND NOT source IN (/var/adm/messages, /var/adm/sulog, /var/adm/syslog*, /var/log/authlog, /var/log/messages, /var/log/secure, /var/log/syslog, /var/log/sudo.log, bandwidth, cpu, interfaces, iostat, netstat, openPorts, protocol, ps, top, vmstat, nfsiostat)
AND NOT sourcetype=syslog
```These 12 strings have been found in events with different variations of the log4j string.```
AND ((Basic AND Base64) OR "/securityscan" OR "/callback" OR exploit OR "/nessus" OR (interact OR interactsh) OR kryptoslogic OR "service.exfil.site" OR secresponstaskfrce OR billdemirkapi OR mburpcollab OR leakix)
```Flags/Indicators to match the different strings above.```
| eval base64=if(match(_raw,"Base64"),"X","")
| eval dnsscan=if(match(_raw,"/securityscan"),"X","")
| eval exploit=if(match(_raw,"Exploit"),"X","")
| eval nessus=if(match(_raw,"/nessus"),"X","")
| eval interact=if(match(_raw,"interact") or match(_raw,"interactsh"),"X","")
| eval kryptos=if(match(_raw,"kryptoslogic"),"X","")
| eval exfilsite=if(match(_raw,"service.exfil.site"),"X","")
| eval secrettask=if(match(_raw,"secresponstaskfrce"),"X","")
| eval billdemirk=if(match(_raw,"billdemirkapi"),"X","")
| eval burpcollab=if(match(_raw,"mburpcollab"),"X","")
| eval leakix=if(match(_raw,"leakix"),"X","")
```These are the known protocols where log4j attacks have been seen. These matches look for the first letter used for each protocol (j), followed by anything, then the next letter (n), etc. This "hopefully" will catch any/all possible variations used by attackers. Note: a future search is being designed to find where URL encoding replaces any/all of the letters within each JNDI protocol string.```
| where match(_raw,"j.*n.*d.*i.*\:.*l.*d.*a.*p") or match(_raw,"j.*n.*d.*i.*\:.*d.*n.*s") or match(_raw,"j.*n.*d.*i.*\:.*r.*m.*i") or match(_raw,"j.*n.*d.*i.*\:.*l.*d.*a.*p.*s") or match(_raw,"j.*n.*d.*i.*\:.*n.*i.*s") or match(_raw,"j.*n.*d.*i.*\:.*i.*i.*o.*p") or match(_raw,"j.*n.*d.*i.*\:.*c.*o.*r.*b.*a") or match(_raw,"j.*n.*d.*i.*\:.*n.*d.*s") or match(_raw,"j.*n.*d.*i.*\:.*h.*t.*t.*p") or match(_raw,"j.*n.*d.*i.*\:.*h.*t.*t.*p.*s") or match(_raw,"(\:)*-") or match(_raw,"lower\:") or match(_raw,"upper\:") or match(_raw,"date\:") or match(_raw,"env\:") or match(_raw,"jndi")
| sort 0 -_time
| table _time, index, host, source, status, base64, dnsscan, exploit, nessus, interact, kryptos, exfilsite, secrettask, billdemirk, burpcollab, leakix, _raw, http_user_agent

We have also found events substituting URL-encoded characters for jndi. Please let me, and the rest of our Splunk community, know if there are any issues with this search, along with any new text signatures discovered other than those in the indicator list above, and any other discoveries. Together, let's find and stop this vulnerability.

Thanks and God bless,
Genesius
Aside from the Monitoring Console checks in distributed mode, do you have a comprehensive checklist you run to make sure all counters and components are healthy in Splunk Enterprise and ES? I know a lot of us love the MC; just wondering what else the champs do in their environments. Thank you, and happy 2022.
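One hedged idea beyond the MC, assuming a reasonably recent Splunk version: the splunkd health report is also exposed over REST, so its feature statuses can be pulled into a scheduled search or dashboard panel (the exact fields returned vary by version):

| rest /services/server/health/splunkd
| table splunk_server, health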
I have a join where there are two different SLAs (Active and E2E) that need to be linked to incidents on one row. How can I follow up the below to do that? All the fields in the table show twice except for dv_sla, which shows one of each SLA. Both SLAs contribute to different measures that I need to follow up with.

index=servicenow sourcetype=incident
| fields sys_id, number, closed_at, dv_state, dv_u_technical_service, dv_problem_id, proactive, dv_parent_incident
| join type=inner number max=0
    [ | search index=servicenow sourcetype="task_sla" dv_sla="Active*" OR dv_sla="E2E*"
    | fields sys_id, dv_task, dv_sla, dv_stage, dv_has_breached, business_duration
    | rename dv_task as number, dv_state as task_state ]
| stats latest(*) as * by sys_id
| search dv_stage="Completed" AND proactive="false"
| table number, dv_sla, closed_at, dv_state, dv_u_technical_service, dv_problem_id, proactive, dv_parent_incident

Thanks
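A minimal sketch of one way to get both SLAs onto one incident row, assuming the field names from the post: group by sys_id and dv_sla first so each SLA keeps its own latest row, then roll the per-SLA values up into multivalue fields with values(). The subsearch's own sys_id is dropped here so the incident sys_id survives the join:

index=servicenow sourcetype=incident
| fields sys_id, number, closed_at, dv_state, dv_u_technical_service, dv_problem_id, proactive, dv_parent_incident
| join type=inner number max=0
    [ search index=servicenow sourcetype="task_sla" dv_sla="Active*" OR dv_sla="E2E*"
    | fields dv_task, dv_sla, dv_stage, dv_has_breached, business_duration
    | rename dv_task as number ]
| stats latest(*) as * by sys_id, dv_sla
| search dv_stage="Completed" AND proactive="false"
| stats values(dv_sla) as dv_sla, values(dv_has_breached) as dv_has_breached, latest(closed_at) as closed_at, latest(dv_state) as dv_state, latest(dv_u_technical_service) as dv_u_technical_service, latest(dv_problem_id) as dv_problem_id, latest(proactive) as proactive, latest(dv_parent_incident) as dv_parent_incident by number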
Hi, I am trying to use the Splunk Enterprise Security 7-Day Trial through this link:
https://www.splunk.com/en_us/form/splunk-enterprise-security-guided-product-tour.html
After signing up with two of my emails, I am redirected to the link below:
https://www.splunk.com/en_us/form/splunk-enterprise-security-guided-product-tour/thanks.html
I have yet to receive any instructions on how to use the app in Splunk Enterprise. Are there extra steps I am missing?
Hi All, when I install the universal forwarder on different PCs using a local user in a domain environment, logs are received at Splunk Enterprise; when I use a domain user, they are not. Has anyone faced this issue?
Hello, we have IBM VIOS servers running AIX and we need to monitor them, mainly in terms of security. Does anyone have experience with that? Did you install a Splunk Universal Forwarder, or are you sending data out via syslog? Thanks a lot, Edoardo
When upgrading apps/add-ons in a distributed environment, is there a recommended best practice, or is it similar to deploying the app initially, where I can just paste the newer downloaded version from Splunkbase over the existing app and then push the new bundle to the peers to fully update it? For example, with versions 1 and 2 in the same shcluster/apps directory, will the latest version take priority over the older one while also benefiting from the configurations made in the previous version?

Search head local changes do not appear to be visible on the deployer, so do I also have to copy the local directory of the related app from the search heads into the newly updated app before pushing through the deployer? Or maybe there is a specific push command to preserve local changes? (The app I am trying to update is ES Content Update.) Any and all help is welcome; thanks in advance!
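For the push itself, a hedged sketch of the usual deployer flow, assuming the app sits under $SPLUNK_HOME/etc/shcluster/apps on the deployer (the host name and credentials are placeholders). By default the deployer folds the app's local settings into default when it pushes, and recent versions have a flag to preserve lookup files on the members:

$SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://sh1.example.com:8089 -preserve-lookups true -auth admin:placeholder

Runtime changes made on the search heads themselves live in the app's local directory on each member and are generally not overwritten by the push, though it is worth verifying on a test app before touching ES Content Update.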
Hello,

I am using the query below to output which of our searches/rules are mapped to which MITRE technique IDs.

| inputlookup mitre_all_rule_technique_lookup
| `lookup_technique_tactic_from_rule_name`
| search rule_disabled=0
| dedup rule_name, technique_id, rule_disabled

The result is as follows:

rule_name   tactic_ID   tactic_name       Technique_ID   Technique_name
Rule001     TA001       Persistence       T1136          Create Account
Rule001     TA002       Persistence       T1098          Account Manipulation
Rule001     TA008       Defense Evasion   Txxxx          Modify infrastructure

As you can see, it shows different entries for the same data in the rule_name column. The rule in the rule_name column is mapped to three different tactic_IDs, Technique_IDs, etc., which is why it shows three results for the same rule. How can I consolidate all this? Basically, this is the output I want:

rule_name   tactic_ID           tactic_name                               Technique_ID        Technique_name
Rule001     TA001 TA002 TA008   Persistence Persistence Defense Evasion   T1136 T1098 TXXXX   Create Account; Account Manipulation; Modify infrastructure
Rule002     TAxxx TAXXX         ....                                      .....               ......

If I change the dedup command in the query to | dedup rule_name, it displays only the first row of every rule_name and omits the remaining values. Please advise; I am sure this is something very fundamental.
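A minimal sketch of the usual consolidation, assuming the lowercase field names used in the dedup line match the lookup: replace dedup with stats values() grouped by rule_name. Note that values() de-duplicates and sorts each column independently, so use list() instead if the tactic and technique entries must stay row-aligned:

| inputlookup mitre_all_rule_technique_lookup
| `lookup_technique_tactic_from_rule_name`
| search rule_disabled=0
| stats values(tactic_id) as tactic_ID, values(tactic_name) as tactic_name, values(technique_id) as Technique_ID, values(technique_name) as Technique_name by rule_name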
Currently it's difficult to parse out the details of cluster events in Splunk, which limits how useful our dashboard panels can be. I'm looking for suggestions on a way to extract, from the Kubernetes event.go log lines, the columns we would see when running "oc get events" on a cluster: namespace, last seen, type, reason, object, message. Once we can extract those fields and make them available for splunk stats/tables/timecharts, we can put some useful panels together to gauge plant health:

Real-time views around created/started containers/pods and failures
Real-time views around job start/failure/complete
Real-time views into failed mounts and types of failures
Real-time views on image pulls: success, backoffs, failures, denies

I'd appreciate help with any docs/leads and high-level ideas to achieve this, please. Sample events:

12/30/21 1:59:07.000 AM
<135>Dec 30 06:59:07 9000n2.nodes.com kubernetes.var.log.containers.ku: namespace_name=openshift-kube-controller-manager, container_name=kube-controller-manager, pod_name=kube-controller-manager-9000n2.nodes.com, message=I1230 06:58:56.139184 1 event.go:291] "Event occurred" object="openshift-logging/elasticsearch-im-infra" kind="CronJob" apiVersion="batch/v1beta1" type="Warning" reason="FailedNeedsStart" message="Cannot determine if job needs to be started: too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew"
host = laas-agent-log-forwarder-6dddb6d69c-95t4b
source = /namespace/openshift-kube-controller-manager
sourcetype = ocpprod.stepping-infra-openshift-kube-controller-manager:application

12/30/21 1:59:07.000 AM
<135>Dec 30 06:59:07 9000n2.nodes.com kubernetes.var.log.containers.ku: namespace_name=openshift-kube-controller-manager, container_name=kube-controller-manager, pod_name=kube-controller-manager-9000n2.nodes.com, message=I1230 06:58:56.133312 1 event.go:291] "Event occurred" object="openshift-logging/elasticsearch-im-audit" kind="CronJob" apiVersion="batch/v1beta1" type="Warning" reason="FailedNeedsStart" message="Cannot determine if job needs to be started: too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew"
host = laas-agent-log-forwarder-6dddb6d69c-95t4b
source = /namespace/openshift-kube-controller-manager
sourcetype = ocpprod.stepping-infra-openshift-kube-controller-manager:application
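A minimal sketch of extracting the "oc get events"-style columns with rex, assuming the layout in the samples above (the index name here is a placeholder):

index=ocpprod sourcetype="*kube-controller-manager*" "event.go"
| rex field=_raw "namespace_name=(?<namespace>[^,]+),"
| rex field=_raw "object=\"(?<object>[^\"]+)\" kind=\"(?<kind>[^\"]+)\" apiVersion=\"[^\"]+\" type=\"(?<type>[^\"]+)\" reason=\"(?<reason>[^\"]+)\" message=\"(?<event_message>[^\"]+)\""
| table _time, namespace, kind, type, reason, object, event_message

From there, a timechart by reason or a stats count by namespace, reason should cover most of the panels listed.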
Hi all, I am using Splunk Cloud and would like to configure a universal forwarder in a VM on a non-domain-joined laptop. The goal is to run attacks and malware samples. As such, I will be using a VPN to mask my IP address, which will not be associated with my cloud instance. My company has whitelisted IPs for access to the console. Will I be able to configure this, or will the cloud firewall not allow logs to be ingested from a non-company IP address? Thanks!
Hey all, just started learning Splunk this week; it's interesting so far. How can I sort the top header from lowest to highest? I've attached an example of what I'm working with below; I just want to organise it.
I am running Splunk 8.1.0.1 on Windows Server 2016. The KV store keeps failing when I start the Splunk service. This causes splunkd to fail after some time, requiring a restart to access the Splunk GUI. Are there any logs I should gather to identify the issue? I have read through some forums and tried the "stop Splunk, move the server.pem file, start Splunk" procedure to generate a new server certificate, but I am still getting the KV store failure. Any help would be greatly appreciated, as I am at a loss at this point.
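A few hedged starting points, assuming a default install: the KV store is a bundled mongod process, so its own log under $SPLUNK_HOME\var\log\splunk\mongod.log is the first place to look; its status can be checked from the CLI; and splunkd's KV store errors can be searched directly:

"%SPLUNK_HOME%\bin\splunk" show kvstore-status

index=_internal sourcetype=splunkd component=KVStore* log_level=ERROR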
People that have been on Cloud for a while: what was your onboarding like, and how difficult is it to add new products? I got my trial account to see what the product can do, and basically, so far, the answer is "nothing". I get that it's a trial, but I can literally do nothing with it besides log in and look around. I installed the add-ons I was interested in, which was the reason for the trial, but that is pointless because you have to restart some services that I don't have access to. Since it's a trial, I'm not entitled to the support portal to ask for this, so I called and got stuck in a loop between "it's a trial, press this for support" and what is really sales, who sends me back to support.

I tried setting up the universal forwarder, because I can't just point logs at the cloud instance for some reason, and those instructions are so convoluted and link to each other in such a nested way that I feel like I'm dealing with Nvidia GRID. So I went online looking for instructions from third parties, and I have to say no one appears to use Splunk Cloud, and honestly I understand why: they have way more control and don't need to use the universal forwarders.

I thought the cloud would be simple; there is nothing to install, but I don't have a way to restart my instance, and it's a requirement. I emailed support pleading my case and managed to get the forwarder installed, but I have no way to check this. Honestly, the product looks really promising and has a learning curve, which I expect and would find my way around, but I don't think Cloud is going to be a good fit.
Hello Experts, kindly help me filter out the latest one year of dates for a particular field. For example:

index="abc" sourcetype="xyz"
| table ID, COMPLETION_DATE, LEARNING_ITEM_ID, LEARNING_ITEM_TITLE, TARGET_DATE

Here I just need to filter out who has completed within the last one year according to the completion date. Currently the completion date shows the last five years, but I need to filter only the past year without mentioning any date in the query. I am wondering if we can use the latest command. Kindly help.
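A minimal sketch, assuming COMPLETION_DATE is a string like 2021-06-30 (the strptime format below is an assumption; adjust it to the actual date format in your data):

index="abc" sourcetype="xyz"
| eval completion_epoch=strptime(COMPLETION_DATE, "%Y-%m-%d")
| where completion_epoch >= relative_time(now(), "-1y")
| table ID, COMPLETION_DATE, LEARNING_ITEM_ID, LEARNING_ITEM_TITLE, TARGET_DATE

relative_time(now(), "-1y") rolls back exactly one year from the moment the search runs, so no literal date is needed.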
I want to look for requests in a service mesh ingress log which have no corresponding application log entries. My first search is:

index=kubernetes source=*envoy-proxy* (api.foo.com OR info) AND downstream_remote_disconnect
| rex field=_raw "\[[^\]]+\] \"(?<downstream>[^\"]+)\".*\"(POST|GET) \"(?<host>[^\"]+)\" \"(?<path>[^\"\?]+)[\?]?\" [^\"]+\" (?<status>\d+).*\"(?<id1>[0-9a-f]{8})-(?<id2>[0-9a-f]{4})-(?<id3>[0-9a-f]{4})"
| eval id=id1.id2.id3
| fields id

My second search is:

index=kubernetes source=*proxy* operation:
| rex field=_raw "span_id:(?<id>[0-9a-f]{16});"
| fields id

And the obvious way of combining them yields no results:

index=kubernetes source=*envoy-proxy* (api.foo.com OR info) AND downstream_remote_disconnect
| rex field=_raw "\[[^\]]+\] \"(?<downstream>[^\"]+)\".*\"(POST|GET) \"(?<host>[^\"]+)\" \"(?<path>[^\"\?]+)[\?]?\" [^\"]+\" (?<status>\d+).*\"(?<id1>[0-9a-f]{8})-(?<id2>[0-9a-f]{4})-(?<id3>[0-9a-f]{4})"
| eval id=id1.id2.id3
| fields id
| search NOT
    [ search index=kubernetes source=*proxy* operation:
    | rex field=_raw "span_id:(?<id>[0-9a-f]{16});"
    | fields id ]
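A hedged alternative that sidesteps subsearch limits (subsearches are capped, by default, at around 10,000 results, which can silently break NOT [...]): run both sources in one search, tag each event's side, and keep ids seen only on the ingress side. This assumes the 16 hex characters of id1.id2.id3 really do equal the span_id; the ingress rex is trimmed here to just the request-id portion:

index=kubernetes (source=*envoy-proxy* (api.foo.com OR info) AND downstream_remote_disconnect) OR (source=*proxy* operation:)
| rex field=_raw "span_id:(?<id>[0-9a-f]{16});"
| rex field=_raw "\"(?<id1>[0-9a-f]{8})-(?<id2>[0-9a-f]{4})-(?<id3>[0-9a-f]{4})"
| eval id=coalesce(id, id1.id2.id3)
| eval side=if(match(source, "envoy-proxy"), "ingress", "app")
| stats values(side) as side, dc(side) as side_count by id
| where side_count=1 AND side="ingress"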
Hi Team,

I need your help creating a regex to extract a field. The data looks like this:

"User_Claim":("sub":"qweihaytej"; "login_id":"Abc@domain.com";........)

Here User_Claim is a field, and I have to create a field for login_id. I have tried this, and it's not working:

..... | rex field=User_Claim " login_id"(? <loginID>\w+.) "

I am unable to see the field name in the interesting fields. Please suggest.

Thanks,
Sagar
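A minimal sketch of a rex that matches the sample above, assuming login_id always appears as "login_id":"value" inside User_Claim (the literal quotes in the pattern must be escaped, and there can be no space inside the (?<name>...) capture-group syntax):

... | rex field=User_Claim "\"login_id\":\"(?<loginID>[^\"]+)\""

The [^\"]+ keeps everything up to the closing quote, so addresses like Abc@domain.com come through intact.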
Hello all, I need the installer or zip file for Splunk Universal Forwarder 8.2.3. I can only find 8.2.2.1 or older, or 8.2.4. Thanks!
Hey all, I've got an interview and I need to show some level of competency at using Splunk. I'm doing a short presentation on it, and I have used it a little. I know it organises a lot of data from logs into useful information, and it's handy for forensics, security, and auditing users; I'm sure much more as well. My task is to run Splunk on my computer and monitor operating system events and/or performance. I did monitor data from the source called "Local Event Logs", picked Security, Application, System, and Setup, and had a quick look over them, but something is bugging me. How can I make this more interesting, given that I'm doing a presentation on it? Is there a field or something that would be good to talk about? There are so many options that it's a bit tough to pick a good one. Odd question, I know, but any suggestions would be appreciated. Thank you for the read, guys.
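If it helps, a hedged example of the kind of search that tends to demo well over the Security log you are already collecting (EventCode 4625 is the standard Windows failed-logon event; the source name assumes the default Windows event log input):

source="WinEventLog:Security" EventCode=4625
| timechart count as failed_logons

Deliberately failing a few logons before the demo gives the chart something to show, and ties the data to a concrete security story.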