Hello Splunk Community, I'm preparing for the Advanced Splunk Certification using the free Splunk courses and official Splunk training, and would appreciate your insights: Are there any challenging topics on the exam? Recommended resources beyond the official docs? Tips for effective hands-on practice? Any recent exam updates? Thanks for your help!
Hi, I want to get hands-on experience with Splunk. I have completed the sales engineer (Buttercup Games) exercise and now want something different along the same lines. Can anyone please help me with this? Where can I find more exercises like this to practice with? Thanks, Sudhanshu
Hello, good day. I am very new to Splunk. My team and I want to work on a mini project using Splunk Cloud with the topic "Splunk Enterprise: An organization's go-to in detecting cyberthreats". How/where can I easily get datasets/logs that I can use in Splunk for monitoring and analysis? And what is the best way to go about this topic?
Hi, we have a Splunk Cloud instance, and a few of our systems don't have an out-of-the-box add-on, so we decided to try to get data in via API. However, our instance doesn't have any API data inputs, nor can we find any way to create an input of our own. We tried to install the Add-on Builder app, but the installation fails every time. Is there any way to create our own add-on, or a way to get Splunk to pull data via API?
We have a few instances hosted in AWS that are extremely underutilized (single-digit average CPU% for the last 3 months). AWS Compute Optimizer has recommended the following changes to the instances:

Current Instance Type | Recommended Instance Type
c4.4xlarge | r6i.xlarge
c4.8xlarge | r6i.2xlarge and r6i.xlarge
c5.2xlarge | r6i.large, r6i.xlarge, t3.medium, t3.small
c5.4xlarge | r6i.2xlarge
c5.9xlarge | r6i.4xlarge
c5.xlarge | r6i.large
t3.medium | t3.large
t3.micro | t3.medium

We noticed that most of the recommendations are about replacing 'compute-optimized' instances with new-gen 'memory-optimized' instances. This also reduces the CPU cores.

Question: can we consider and replace the instances based on the recommendations?
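If these AWS instances happen to be running Splunk, one way to sanity-check the utilization numbers against Splunk's own telemetry is the _introspection index. A minimal sketch, assuming the Hostwide resource-usage data is being collected on those hosts:

index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| eval cpu_pct = 'data.cpu_system_pct' + 'data.cpu_user_pct'
| timechart span=1h avg(cpu_pct) as avg_cpu_pct by host

If the hourly averages here agree with Compute Optimizer, the downsizing case is stronger; keep in mind that memory-optimized replacements still need enough cores for peak ingest and search load, not just the 3-month average.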
This Splunk search is not showing any results:

index=os OR index=linux sourcetype=vmstat OR source=iostat
    [| input lookup SEI-build_server_lookup.csv where platform=eid_rhel6 AND where NOT (role-code-sonar)
     | fields host
     | format ]
| rex field=host (?<host>\w+)?\..+"
| timechart avg(avgWaitMillis)
| eval cores=4
| eval loadAvg1mipercore=loadAvg1mi/cores
| stats avg(loadAvg1mipercore) as load by host

Please help to correct my search.
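A hedged sketch of a corrected version, assuming the lookup has host, platform, and role columns (the column name role is a guess from "role-code-sonar"), that loadAvg1mi exists in the vmstat events, and that the goal is average 1-minute load per core by host. The timechart avg(avgWaitMillis) line is dropped here because timechart discards the fields the later evals need and mixes in an unrelated metric:

index=os OR index=linux sourcetype=vmstat OR source=iostat
    [| inputlookup SEI-build_server_lookup.csv
     | where platform="eid_rhel6" AND role!="code-sonar"
     | fields host
     | format ]
| rex field=host "(?<host>\w+)\..+"
| eval cores=4
| eval loadAvg1miPerCore=loadAvg1mi/cores
| stats avg(loadAvg1miPerCore) as load by host

The main fixes: inputlookup is one word, the filter needs valid field=value comparisons, the rex regex must be quoted, and transforming commands like timechart throw away any field that later pipes reference.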
Hi, I am looking to parse nested JSON events; basically I need to break them into multiple events. I am trying something like this, but it just duplicates the same record across multiple lines:

| spath path=list.entry{}.fields output=items
| mvexpand items

I am looking to get all key/value pairs as a single event under "fields".

Sample records:

{
  "total": 64,
  "list": {
    "entry": [
      {
        "recordId": 7,
        "created": 1682416024092,
        "id": "e70dbd86-53cf-4782-aa84-cf28cde16c86",
        "fields": {
          "NumDevRes001": 11111,
          "NumBARes001": 3,
          "lastUpdated": 1695960000000,
          "engStartDate": 1538452800000,
          "RelSupport001": 0,
          "UnitTest001": 0,
          "Engaged": 1,
          "ProdGroup001": 1,
          "QEResSGP001": 0.5,
          "QEResTOR001": 1,
          "QEResLoc001": 3,
          "SITBugs001": 31,
          "QEResIND001": 5,
          "QEResLoc003": 3,
          "QEResLoc002": 3,
          "Project": "Registration Employee Directory Services",
          "AutoTestCount001": 1657,
          "AppKey001": "ABC"
        },
        "ownedBy": "TEST1"
      },
      {
        "recordId": 8,
        "createdBy": "TEST2",
        "created": 1682416747947,
        "id": "91e88ae6-0b64-48fc-b8ed-4fcfa399aa3e",
        "fields": {
          "NumDevRes001": 22222,
          "NumBARes001": 3,
          "lastUpdated": 1695960000000,
          "engStartDate": 1538452800000,
          "RelSupport001": 0,
          "UnitTest001": 0,
          "Engaged": 1,
          "ProdGroup001": 1,
          "QEResSGP001": 0.5,
          "QEResTOR001": 1,
          "QEResLoc001": 3,
          "SITBugs001": 31,
          "QEResIND001": 5,
          "QEResLoc003": 3,
          "QEResLoc002": 3,
          "Project": "Registration Employee Directory Services",
          "AutoTestCount001": 1657,
          "AppKey001": "ABC"
        },
        "ownedBy": "TEST2"
      }
    ]
  }
}
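A sketch of the usual pattern for this: expand each array element as a raw JSON object first, then run spath on the expanded value, so every entry becomes its own result with its own fields.* keys (the commands are standard SPL; only the path comes from the sample above):

| spath path=list.entry{} output=entries
| mvexpand entries
| spath input=entries
| fields - entries

The difference from the original attempt is that mvexpand runs on the whole entry object rather than on values that have already been flattened together, so the key/value pairs under fields stay attached to the correct record.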
Hello, I'm trying to calculate the ratio of two fields but I'm getting wrong results. If I calculate each one of them separately I get the right results, but together something is wrong:

index=clientlogs sourcetype=clientlogs Categories="*networkLog*" "Request.url"="*v3/auth*" Request.url!=*twofactor* "Request.actionUrl"!="*dev*" AND "Request.actionUrl"!="*staging*"
| eval UserAgent = case(match(UserAgent, ".*ios.*"), "iOS FE", match(UserAgent, ".*android.*"), "Android FE", 1=1, "Web FE")
| dedup UserAgent, _time
| stats count as AttemptsFE by UserAgent _time
| appendcols
    [search index=clientlogs sourcetype=clientlogs Categories="*networkLog*" "Request.url"="*v3/auth*" Request.url!=*twofactor* "Request.actionUrl"!="*dev*" AND "Request.actionUrl"!="*staging*" "Request.status" IN (201, 207) NOT "Request.data.twoFactor.otp.expiresInMs"="*"
     | eval UserAgent = case(match(UserAgent, ".*ios.*"), "iOS FE", match(UserAgent, ".*android.*"), "Android FE", 1=1, "Web FE")
     | dedup UserAgent, _time
     | streamstats count as SuccessFE by UserAgent _time]
| eval SuccessRatioFE = round((SuccessFE/AttemptsFE)*100, 2)
| eval SuccessRatioFE = (SuccessFE/AttemptsFE)*100
| timechart bins=100 avg(SuccessRatioFE) as SuccessRatioFE BY UserAgent
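One likely culprit is appendcols, which pastes the two result sets together row by row with no guarantee that row N of the success search lines up with row N of the attempts search (and the subsearch uses streamstats where the outer search uses stats). A sketch of a single-pass alternative, assuming a success is status 201 or 207 with no twoFactor.otp.expiresInMs field, and with the span as an assumption to tune:

index=clientlogs sourcetype=clientlogs Categories="*networkLog*" "Request.url"="*v3/auth*" Request.url!=*twofactor* "Request.actionUrl"!="*dev*" "Request.actionUrl"!="*staging*"
| eval UserAgent = case(match(UserAgent, ".*ios.*"), "iOS FE", match(UserAgent, ".*android.*"), "Android FE", 1=1, "Web FE")
| eval success = if(match('Request.status', "^(201|207)$") AND isnull('Request.data.twoFactor.otp.expiresInMs'), 1, 0)
| bin _time span=15m
| stats count as AttemptsFE sum(success) as SuccessFE by _time UserAgent
| eval SuccessRatioFE = round((SuccessFE/AttemptsFE)*100, 2)
| xyseries _time UserAgent SuccessRatioFE

Because both counts come from the same events, the ratio is computed per time bucket and per UserAgent with nothing to misalign. The dedup from the original is omitted here; re-add it before the success eval if it was there to suppress retries.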
I'm planning to set up an integration between Splunk and the ESET endpoint security cloud platform, but I am facing the following issue: the syslog-ng server has started receiving unreadable/encrypted logs from ESET endpoint security, so the logs appear on the HF server like this:

^A^B  ^L 7 ^] ^W  ^^  ^Y  ^X # ^W (^D^C^E^C^F^C^H^G^H^H^H ^H 2

I think I need to decrypt the logs when they are received by syslog-ng, because Splunk can't handle any decryption process. I need help with how I can decrypt the logs in syslog-ng.
Trying to find anomalies for events. I have multiple services and multiple customers, and an error "bucket" that is capturing events for failures, exceeded, notified, etc. I'm looking for a way to identify when there are anomalies or outliers for each of the services/customers. I have combined (eval) service, customer, and the error, and I am just counting the number of error events generated by each service/customer. So for example, services svcA, svcB, svcC and customers custA, custB, custC would give:

svcA-custA-failures 10
svcA-custA-exceeded 5
svcA-custA-notified 25
svcB-custA-failures 11
svcB-custA-exceeded 9
svcB-custA-notified 33
svcB-custB-failures 3
svcA-custB-exceeded 7
svcA-custB-notified 22
svcA-custC-exceeded 8
svcA-custC-failures 3
svcA-custC-notified 267
svcC-custC-exceeded 1
svcC-custC-failures 4
svcC-custB-notified 145
svcC-custA-notified 17

Something along the lines of this:

| eval Svc-Cust-Evnt=Svc."-".Cust."-".Evnt
| stats sum(error) by Svc-Cust-Evnt
| rename sum(error) as count
| sort -count
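A sketch of one way to flag outliers per combination, scoring each time bucket against that combination's own history with a z-score. The field names Svc, Cust, and Evnt come from the pseudo-search above; index=your_error_index, the 1-hour span, and the 3-sigma threshold are placeholders to tune:

index=your_error_index
| eval combo=Svc."-".Cust."-".Evnt
| bin _time span=1h
| stats count by _time combo
| eventstats avg(count) as avg_count stdev(count) as stdev_count by combo
| eval zscore=if(stdev_count>0, (count-avg_count)/stdev_count, 0)
| where abs(zscore) > 3

Splunk's built-in anomalydetection command (for example, | stats count by _time combo | anomalydetection count) is an alternative if you would rather not hand-roll the threshold.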
I have a search and subsearch that is working as required, but there is a field in the subsearch that I want to display in the final table output that is not a field to be searched on.

index=aruba sourcetype="aruba:stm" "*Denylist add*" OR "*Denylist del*"
| eval stuff=split(message," ")
| eval mac=mvindex(stuff,4)
| eval mac=substr(mac,1,17)
| eval denyListAction=mvindex(stuff,3)
| eval denyListAction=replace(denyListAction,":","")
| eval reason=mvindex(stuff,5,6)
| search mac="*:*"
    [ search index=main host=thestor Username="*adgunn*"
      | dedup Client_Mac
      | eval Client_Mac = "*" . replace(Client_Mac,"-",":") . "*"
      | rename Client_Mac AS mac
      | fields mac ]
| dedup mac,denyListAction,reason
| table _time,mac,denyListAction,reason

What I want is for the value held in the field Username to be included in the table command of the outer search. How do I pass it from the subsearch so it can be used in the table command and not as part of the search? Thanks.
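A subsearch used as a search filter can only return filter terms, so one sketch of a workaround is to keep the filter as-is and re-attach Username afterwards with a left join on a normalized MAC (this assumes Client_Mac differs from mac only by - versus : and by case):

index=aruba sourcetype="aruba:stm" "*Denylist add*" OR "*Denylist del*"
| eval stuff=split(message," ")
| eval mac=substr(mvindex(stuff,4),1,17)
| eval denyListAction=replace(mvindex(stuff,3),":","")
| eval reason=mvindex(stuff,5,6)
| search mac="*:*"
    [ search index=main host=thestor Username="*adgunn*"
      | dedup Client_Mac
      | eval Client_Mac = "*" . replace(Client_Mac,"-",":") . "*"
      | rename Client_Mac AS mac
      | fields mac ]
| eval join_mac=lower(mac)
| join type=left join_mac
    [ search index=main host=thestor Username="*adgunn*"
      | dedup Client_Mac
      | eval join_mac=lower(replace(Client_Mac,"-",":"))
      | fields join_mac Username ]
| dedup mac,denyListAction,reason
| table _time,mac,denyListAction,reason,Username

Running the inner search twice is not free; if the Username data is large, a lookup table or a stats-based merge (append plus stats values() by the MAC) scales better than join.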
Hi, I have a simple XML dashboard. I want to be able to move the Export-To-PDF button (more of an HTML button) to the bottom of the dashboard in order to print the whole dashboard. Any easy way of doing this? Thank you, everyone!
Hi Splunkers. Assumptions: the HF we want to deploy should be inside a DMZ network, the license master is outside the DMZ, and all necessary ports will be opened as required. Now the question: can the license master to HF connection be one-way communication only (info flows only from LM to HF, not both ways, i.e., no HF-to-LM info flow), or does LM-to-HF require two-way communication by default? Please suggest, thanks.
Can anyone provide a link to Splunk Mission Control API documentation?   Thank you 
Hello, I was wondering if it is possible to locate or search in Splunk whether a specific lookup table is being used in a dashboard, alert, saved search, report, etc. Thank you for your help!
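A sketch of one way to check this from the search bar, using the REST endpoints for saved searches and views; my_lookup is a placeholder for the lookup in question, and note that alerts and reports both live under saved/searches:

| rest /servicesNS/-/-/saved/searches
| search search="*my_lookup*"
| table title eai:acl.app eai:acl.owner

| rest /servicesNS/-/-/data/ui/views
| search eai:data="*my_lookup*"
| table title eai:acl.app eai:acl.owner

The first search finds saved searches, reports, and alerts whose SPL mentions the lookup; the second scans dashboard XML source. Lookups referenced indirectly, through a macro or an automatic lookup in props.conf, won't show up this way.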
I have a dashboard with three dropdown inputs. The first is Date Range and it has a default value of last 24 hours. The dashboard does the initial search fine, but when I change the date range via the presets in the dropdown, nothing updates.

Code for the dropdown:

{
  "type": "input.timerange",
  "options": {
    "token": "dateRange",
    "defaultValue": "-24h@h,now"
  },
  "title": "Date Range"
}
Hello. In monitoring our application's VCT (Visually Complete Time) and EURT (End User Response Time), we noticed that for all of Q3 the VCT was taking longer than the EURT. Then all of a sudden it switched, and now VCT is less than EURT. It seems to me that VCT should almost always be shorter than EURT. Is this true? Does this sound like a configuration issue that was corrected? If so, should I consider the EURT as the VCT for Q3?
Minimize risk and maximize uptime with policies that continuously monitor vulnerabilities and automatically block attacks.

Video length: 4 min 13 sec

See how to use Cisco AppDynamics Secure Application to monitor vulnerabilities, finding and blocking attacks automatically. In this example, see how to configure a runtime policy that blocks the Log4Shell vulnerability from attack.

Additional resources:
- Learn more about Cisco AppDynamics Secure Application: Monitoring Application Security Using Cisco Secure Application, in the documentation
- How do I use AppDynamics with Cisco Secure Application to find vulnerabilities and block exploits, in the Knowledge Base
- In the blog: Protect cloud native application environments faster, based on business risk, with Cisco Secure Application
- How business acumen boosts application security
- And more

About the presenter: Keith Richter, APM Security Domain Architect. Hailing from all around the Midwest, Keith Richter extended his peripatetic beginnings during six years in the US Navy, serving and experiencing rich cultures in remote sites worldwide. After his final deployment, Keith put down roots in Iowa, working at area companies including Sun Microsystems and EMC. He started at Cisco AppDynamics over 5 years ago, covering the Midwest as a Sales Engineer. In 2021, Keith moved into the Field Architecture team, leading a team of SMEs covering APM agents, OTel, and Security functionality within the AppD platform. The team currently focuses on supporting Sales, Product, and customers with our industry-leading Application Security and Runtime Protection capabilities.
We are in the process of a full hardware upgrade of all the indexers in our distributed environment. We have three standalone search heads connected to a cluster of many indexers. We are proceeding one at a time:

1. Load up a new indexer.
2. Integrate it into the cluster.
3. Take an old indexer offline, enforcing counts.

When the decommissioning process finishes and the old indexer has gracefully shut down, an alert appears on our search heads in the Splunk Health Report: "The search head lost connection to the following peers: <decommissioned peer>. If there are unstable peers, confirm that the timeout (connectionTimeout and authTokenConnectionTimeout) settings in distsearch.conf are at appropriate values."

I cannot figure out why we are seeing this alert. My conclusion is that we must be missing a step somewhere. To decommission a server, we do the following:

1. On the indexer: splunk offline --enforce-counts
2. On the cluster master: splunk remove cluster-peers <GUID>
3. On the indexer: completely uninstall Splunk.
4. On the cluster master: rebalance indexes.

We have also tried reloading the health.conf configuration by running '| rest /services/configs/conf-health.conf/_reload' on the search heads, to no effect. We cannot figure out where the health report is retaining this old data from, and the _internal logs clearly show that the moment of the GracefulShutdown transition on the cluster master is where the PeriodicHealthReporter component on the search heads begins to alert. The indexers in question are no longer listed as search peers on the search heads, and they are not listed as search peers on the cluster master either. The monitoring console looks fine. What could we be missing?
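A quick sketch to compare what a search head itself still lists as distributed peers against what the health report claims (this is the standard distributed-search peer endpoint; run it on each search head):

| rest /services/search/distributed/peers
| table title status guid

If the decommissioned indexer is genuinely absent here but the alert persists, the stale state is likely held in the PeriodicHealthReporter's in-memory snapshot rather than in distsearch, and a restart of the affected search heads is the blunt way to clear it.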
Hello all, we want to enrich events as they become notables in ES, before they are sent on to Mission Control. The thought being: enrich the event via some sort of search (all the data will be in Splunk already) to add DNS, DHCP, threat intel, and some endpoint data. Is it possible to have a search run against the notable index to gather information from other indexes and add it to the notable event? If so, I would love to discuss.
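A purely illustrative sketch of what such an enrichment search could look like: index=notable is real in ES, but the dns index, the src-to-ip mapping, and the stats fields are hypothetical placeholders for whatever the environment actually has:

index=notable
| rename src as ip
| join type=left ip
    [ search index=dns earliest=-24h
      | stats latest(query) as last_dns_query latest(answer) as last_dns_answer by ip ]
| table _time search_name ip last_dns_query last_dns_answer

For enrichment that lands on the notable itself rather than at display time, ES's adaptive response actions and the asset/identity lookups applied in the correlation search are the supported hooks; a scheduled search like the sketch above only decorates the results it returns.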