All Topics
February 2023 | Check out the latest and greatest

New capabilities provide deeper visibility and smarter alerting to troubleshoot faster

You asked, we delivered. Splunk Observability Cloud has several significant new GAs and previews providing deeper visibility across your environments, and a unified approach to incident response. Now, receive deeper context from the end-user experience through your network and across every transaction, and bring order to on-call chaos with improved alert accuracy and automated scheduling, notification, and escalation capabilities.

Visibility across every user session and transaction, through the network: Several new capabilities help you solve problems faster with deeper visibility and more context across your tech stack and through to the end-user experience. Whether you operate monolithic architectures or microservices, new Splunk capabilities provide context from your end user, through your cloud network, and throughout every transaction.

New capabilities include:
Digital Experience Monitoring
Application Performance Monitoring
Infrastructure Monitoring and Logging
Incident Management
Click here for an overview of each capability.

Try these Capabilities Today!
If you're already an Observability Cloud user, you can get started today by following the links we've provided to documentation. For Splunk Cloud or Enterprise users, start an Observability trial today.

ICYMI:
4 Steps to 'Jump Start' Your Observability Journey with Splunk Observability to Modernize Apps and Increase Business Resilience
Optimize Application Performance with Code Profiling
Dos and Don'ts of Observability: Lessons Learned from RedMonk
Can Your Cloud Migration Strategy Keep Up With the Speed of Business?

The .conf23 Call for Proposals is Open!
If you're thinking of submitting a Call for Speakers proposal, be sure to start here, with our .conf23 Call for Speakers Webinar. You'll get tips, tricks, and recommendations to help you prepare your .conf submission, directly from the .conf23 Review Committee and our speaker coach. You'll learn:
How to write a strong .conf title and abstract
This year's topics and themes to include in your learning objectives and overall submission
Practical advice to help your .conf submission stand out and get you ready for a breakout stage
Watch On Demand Now!

Lantern
This month we're excited to announce the relaunch of the Splunk Success Framework, a comprehensive resource for Splunk program managers to create best-practice processes for Splunk implementation. The framework has been updated to include a brand-new Fundamentals section, improved navigation, and fresh tips from Splunk experts. The four functional areas covered in the framework are program management, people management, platform management, and data lifecycle management. The best practices in the framework are flexible and modular, allowing you to tailor them to your organization's unique requirements. Check out the Splunk Success Framework today, and please let us know what you think!

Education Corner
Free Exam Registration for O11y Beta Certification
It might seem like secret code, but if you've been around Splunk for a while you know that O11y is short for Observability. If you're ready to validate your skills in Splunk O11y Cloud, register for our certification exam, now in beta and free for all candidates. As a beta, the Splunk O11y Cloud Certified Metrics User exam is a bit longer and the results will not be immediately available; however, the results are completely valid for those who pass. Find out more about Splunk Certification, the exams, and badges too!

There's a Hero in All of Us!
It's time to envision the adventures you can have with more Splunk skills under your (utility) belt. It's the Power of Splunk Education. How can data help you save the day? Watch The Power of Splunk Education to see how learning to use the power of data through Splunk Education can help you become a superhero of your organization.

Talk to Splunk Product Design
Our product design team is currently looking for Splunk users to talk to about their experiences with Splunk products. Sign up here to participate in upcoming studies and shape the future of our products and roadmaps!

Tech Talk: DevOps Edition
Synthetic Monitoring: Not your Grandma's Polyester! Seriously. We won't pepper you with sales and marketing stuff; we'll jump in and keep it technical! Join Splunk and TekStream on Tuesday, February 28 at 11am PT / 2pm ET for a demonstration of Splunk Synthetic Monitoring with real-world examples!

Until Next Month, Happy Splunking!
February 2023 | Check out the latest and greatest

Splunk Enterprise Security 7.1 Now Available
The recent Splunk Enterprise Security (ES) 7.1 release helps tackle slow detection times, lack of context around security incidents, and inefficient implementation and execution of incident response flows. Learn more in this blog, and watch our demos on threat topology and MITRE ATT&CK framework features.

New Detections from the Splunk Threat Research Team
The Splunk Threat Research Team (STRT) has had two releases of security content, which provide you with 18 new detections and 3 new analytic stories. The new security content is available via the ESCU application update process or via Splunk Security Essentials (SSE). The Splunk Threat Research Team has also published the following blogs to help you stay ahead of threats:
All the Proxy(Not)Shells
From Registry With Love: Malware Registry Abuses
Introducing Splunk Attack Range v3.0

Using MITRE ATT&CK in Splunk Security Essentials
The Splunk Security Essentials (SSE) app allows you to use the ATT&CK framework for a wide array of use cases and to answer a wide range of questions. Learn more in this blog.

Splunk App for PCI Compliance
We recently released version 5.1 of the Splunk App for PCI Compliance to help solve financial compliance use cases by capturing, monitoring, and reporting on relevant data from any source to quickly investigate and resolve compliance issues. Learn more about the Splunk App for PCI Compliance here.

Splunk at Hackers on the Hill
SURGe team member Mick Baccio recently attended Hackers on the Hill to hear from policy makers and experts on technology-related issues and get an overview of the National Cybersecurity Strategy. Learn more about his time at the event in this blog.

Purple Teaming to Enhance Detection Engineering
Splunk Threat Research Team member Mauricio Velazco recently presented a SANS Ask the Expert session highlighting the benefits of purple teaming and how the Splunk Attack Range can be used for purple teaming and detection development. Watch the recording here.

Splunk Data Security Predictions 2023
If you missed SURGe team members Ryan Kovar and Mick Baccio presenting the Splunk Data Security Predictions 2023 report live, be sure to check out the recording. The full report is available for download here.

Education Corner
Splunk Training for All: Meet Aspiring Cybersecurity Analyst Marc Alicea
Splunk is expanding learning opportunities and lowering the barriers to entry for anyone, anywhere, so learners can grow their careers and global organizations can find qualified candidates to fill the critical skills gap. This profile tells the story of Marc, an aspiring cybersecurity analyst who completed 25 of our free, self-paced training courses on his journey to become a Splunk Core Certified User. It's a real-life depiction of one more learner who greatly benefited from our catalog of free Splunk Education courses, available to anyone looking to grow their career and feel more confident navigating this highly technical world.

Until Next Month, Happy Splunking!
February 2023 | Check out the latest and greatest

Operationalized Data Science for Production Optimization: BMW Group Webinar
Proactively identifying areas to improve the production process can be difficult, but by using data science models with Splunk you can derive deep insights from operations and optimize the production process. In this webinar, we share how the Splunk App for Data Science and Deep Learning (DSDL) is used by the BMW Group to accelerate the time to operationalize models in their production systems, allowing them to save costs, maximize productivity, and increase product quality.

How Can Analysts Address and Respond to Downtime and Cyberthreats Quickly?
IT and security analysts need to find incidents and cyberthreats easily and quickly. However, inconsistencies in data from different vendors make it difficult. Data and source types aren't all the same. So how can analysts address and respond to downtime and cyberthreats quickly? Find out how the Common Information Model (CIM) can help in this e-book!

How To Optimize Costs and Data Value Using Splunk
In this webinar, you can learn a strategic approach to data lifecycle management that will enable you to capture more value from your data AND optimize costs. Further, we outline the key Splunk capabilities that enable organizations to achieve an ideal balance between data and cost optimization, and help you prove business value.

Building Bridges: Splunk Releases 2022 Global Impact Report
Last month, Splunk released our second annual Global Impact Report, which shares our environmental, social, and governance (ESG) progress and advances impact work in our four key pillars of Global Impact: social impact, ethical and inclusive growth, data responsibility, and environmental sustainability. Read the highlights in this blog or check out the full report.

Complimentary Copy of How to Decide When and How to Move Splunk to a Hybrid Cloud Environment
For many enterprise organizations, the key to digital transformation lies in the speed, scalability, and cost savings of the cloud. But not every organization can move completely to the cloud, for a wide variety of reasons. Learn in this e-book why you should choose a Splunk hybrid environment and how Splunk Cloud Platform can make it easy to move to a hybrid cloud.

Education Corner
Cloudy with a Chance of Clarity
Are you a new Splunk Cloud customer or administrator who could use a bit of guidance? Splunk Education offers some valuable courses designed to help provide clarity around using this new platform. The Splunk Cloud Administration hands-on, instructor-led course prepares administrators to manage users, get data in, and much more. Register today.

Do you currently use Splunk Enterprise on-prem but are moving to Splunk Cloud? Well, we've got courses to help you with that too! If you've previously completed Splunk Enterprise System Administration and Splunk Enterprise Data Administration, then check out the Transitioning to Splunk Cloud course. Register today.

Until Next Month, Happy Splunking!
Hello Splunkers, I am trying to create an alert when a log with the "UP" state is not received within 15 minutes of the time the "DOWN" state log was received. Can anyone help me out?

Scenario: When a device is down, Splunk receives a log from SolarWinds that the device is "DOWN", along with the host name in the log. If Splunk does not receive a log containing the "UP" state from SolarWinds within the next 15 minutes, an alert must be raised. Can anyone help me create a query for an alert for the above scenario? Thanks in advance.
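One common pattern for this: run a scheduled search every few minutes over a trailing window, find hosts whose most recent state is DOWN, and alert if that DOWN event is older than 15 minutes with no UP since. A sketch, assuming index, sourcetype, and field names are placeholders for your SolarWinds data:

```
index=<your_index> sourcetype=<solarwinds_sourcetype> ("DOWN" OR "UP") earliest=-24h
| eval state=if(searchmatch("DOWN"), "DOWN", "UP")
| stats latest(state) as last_state latest(_time) as last_seen by host
| where last_state="DOWN" AND last_seen <= relative_time(now(), "-15m")
```

Schedule it every 5 minutes and trigger the alert when the number of results is greater than 0; each result row is a host that went DOWN and has not reported UP within 15 minutes.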
Hi, I'm quite fresh to Splunk and need your help. I'm trying to combine SPL with SQL. Field 25 is the event ID, the same as the SQL column ele.batch_event_id. I suspect ele.batch_event_id = $25$ is wrong. Any ideas, please?

The error is:

Unable to run query '| dbxquery query= "SELECT MIN (ele.process_time) as MIN_PROCESS_time ,MAX (ele.process_time) as MAX_PROCESS_time FROM estar.estar_loopback_events ele, estar.engine_configuration ec WHERE ele.engine_instance = ec.engine_instance AND ele.batch_event_id = $25$ AND process_time BETWEEN TO_DATE('20230215:00:00','YYYYMMDD hh24:mi:ss') and TO_DATE('20230216 12:59:59','YYYYMMDD hh24:mi:ss') " connection='stardb' '.

The search:

index=star_linux sourcetype=engine_processed_events 2961= BBHCC-S2PBATCHPOS-BO OR BBHCC-S2PBATCHPOS-B2 OR BBHCC-S2PBATCHPOS-PO OR BBHCC-SOD-IF-Weekday-1 AND 55:GEN_STAR_PACE
| table 4896,25,55,2961
| map search="| dbxquery query= \"SELECT MIN (ele.process_time) as MIN_PROCESS_time ,MAX (ele.process_time) as MAX_PROCESS_time FROM estar.estar_loopback_events ele, estar.engine_configuration ec WHERE ele.engine_instance = ec.engine_instance AND ele.batch_event_id = $25$ AND process_time BETWEEN TO_DATE('20230215:00:00','YYYYMMDD hh24:mi:ss') and TO_DATE('20230216 12:59:59','YYYYMMDD hh24:mi:ss') \" connection='stardb' "
| table 4896, 25, MIN_PROCESS_time, MAX_PROCESS_time
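Two things worth checking: map only substitutes a $field$ token when that field actually exists in the incoming results (otherwise the literal $25$ is passed through, which matches the error), and the first TO_DATE value '20230215:00:00' does not match its 'YYYYMMDD hh24:mi:ss' mask, which Oracle will reject. A sketch under those assumptions, renaming the numeric field to something alphabetic before map and quoting the substituted value in the SQL (the base search is elided; connection and table names are taken from the post):

```
index=star_linux sourcetype=engine_processed_events ...
| rename 25 as batch_event_id
| table batch_event_id
| map maxsearches=20 search="| dbxquery connection=stardb query=\"SELECT MIN(ele.process_time) AS MIN_PROCESS_time, MAX(ele.process_time) AS MAX_PROCESS_time FROM estar.estar_loopback_events ele, estar.engine_configuration ec WHERE ele.engine_instance = ec.engine_instance AND ele.batch_event_id = '$batch_event_id$' AND ele.process_time BETWEEN TO_DATE('20230215 00:00:00','YYYYMMDD HH24:MI:SS') AND TO_DATE('20230216 12:59:59','YYYYMMDD HH24:MI:SS')\""
```

Drop the quotes around '$batch_event_id$' if the column is numeric rather than a string.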
Hi Splunk Gurus,

I am new to lookups, and this community has been a great help. I have a few cases where I can't seem to remove rows from a lookup correctly, and I can't find a solution. I have a lookup table that is used to list maintenance windows on servers. My CSV lookup has four columns: CI, chgreq, mStart, and mStop. Example:

serverA     CHG0001     2023-02-16 00:00     2023-02-17 13:00

I am pulling in emails from an O365 mailbox that allows the adding and clearing of these maintenance windows. Adding new rows to my lookup works fine, but when I try to remove rows I get a blank lookup. Here is the search I am using:

index="maintenance_emails" Clear Maintenance
| rex field="subject" "Clear Maintenance for (?<server_name>.+)"
| inputlookup append=t maintenance_windows.csv
| where CI!=server_name
| eval CI=server_name, chgreq=chgreq, mStart=mStart, mStop=mStop
| outputlookup maintenance_windows.csv

The server_name field has the correct server name in it, and it matches a CI entry in my lookup. When I run the search I get a blank lookup table. From some testing it looks like my where statement is not working. I also appear to have the same issue when trying to remove old maintenance-window entries from the same table using values in the mStop column compared to the current date and time, but this may be a separate issue (i.e. with the date/time format or operation):

| eval cur_time=strftime(now(), "%Y-%m-%d %H:%M")
| inputlookup append=t maintenance_windows.csv
| where mStop<=cur_time
| eval CI=server_name, chgreq=chgreq, mStart=mStart, mStop=mStop
| outputlookup maintenance_windows.csv

Any help would be very appreciated.
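One likely culprit: the rows appended by inputlookup have a CI field but no server_name, so `where CI!=server_name` evaluates to null for every appended row and drops them all, and the trailing eval then overwrites the surviving rows' fields. A sketch that inverts the logic, starting from the lookup and filtering out the servers named in the clear-maintenance emails (lookup and field names are taken from the post):

```
| inputlookup maintenance_windows.csv
| search NOT
    [ search index="maintenance_emails" "Clear Maintenance"
      | rex field=subject "Clear Maintenance for (?<CI>.+)"
      | fields CI
      | format ]
| outputlookup maintenance_windows.csv
```

For expiring old windows, `| inputlookup maintenance_windows.csv | where mStop > strftime(now(), "%Y-%m-%d %H:%M") | outputlookup maintenance_windows.csv` should work, because that timestamp format sorts lexicographically in time order.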
Hello Splunk Community,

I have a table with results like below:

Name
Tom01
Tom02
Tom03
Tom04
Quin01
Yonah01
Yonah02

I want a query that, where the text before the numbers matches, selects only the 01 entry and ignores the others. For example: if Yonah01 and Yonah02 exist, they are a pair, so it will exclude Yonah02 and keep just Yonah01; or, if there are Tom01, Tom02, Tom03, and Tom04, it will exclude everything except Tom01. Thank you.
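A sketch, assuming every name ends in a zero-padded number: extract the non-numeric prefix, then keep the lowest-numbered name per prefix (min() works here because the numeric suffixes have the same width, so lexicographic order matches numeric order):

```
| rex field=Name "^(?<prefix>.+?)(?<num>\d+)$"
| stats min(Name) as Name by prefix
| fields Name
```

For the sample data this keeps Tom01, Quin01, and Yonah01.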
Hello, I am trying to import a JSON file into Splunk. The file is imported into one event, but not all of it; it looks like only about 10% (or less) of the file is imported. Could it be because of a configuration that I have to change? The file is of this format:

{"resultsPerPage":344,"startIndex":0,"totalResults":344,"format":"NVD_CVE","version":"2.0","timestamp":"2023-02-15T09:42:40.560","vulnerabilities":[{"cve":{"id":"CVE-2013-10012","sourceIdentifier":"cna@vuldb.com","published":"2023-01-16T11:15:10.037","lastModified":"2023-01-24T15:14:10.117","vulnStatus":"Analyzed","descriptions":[{"lang":"en","value":"A vulnerability, which was classified as critical, was found in antonbolling clan7ups. Affected is an unknown function of the component Login\/Session. The manipulation leads to sql injection. The name of the patch is 25afad571c488291033958d845830ba0a1710764. It is recommended to apply a patch to fix this issue. The identifier of this vulnerability is VDB-218388."}],"metrics":{"cvssMetricV31":[{"source":"nvd@nist.gov","type":"Primary","cvssData":{"version":"3.1","vectorString":"CVSS:3.1\/AV:N\/AC:L\/PR:N\/UI:N\/S:U\/C:H\/I:H\/A:H","attackVector":"NETWORK","attackComplexity":"LOW","privilegesRequired":"NONE","userInteraction":"NONE","scope":"UNCHANGED","confidentialityImpact":"HIGH","integrityImpact":"HIGH","availabilityImpact":"HIGH","baseScore":9.8,"baseSeverity":"CRITICAL"},"exploitabilityScore":3.9,"impactScore":5.9}],"cvssMetricV30":[{"source":"cna@vuldb.com","type":"Secondary","cvssData":{"version":"3.0","vectorString":"CVSS:3.0\/AV:A\/AC:L\/PR:L\/UI:N\/S:U\/C:L\/I:L\/A:L","attackVector":"ADJACENT_NETWORK","attackComplexity":"LOW","privilegesRequired":"LOW","userInteraction":"NONE","scope":"UNCHANGED","confidentialityImpact":"LOW","integrityImpact":"LOW","availabilityImpact":"LOW","baseScore":5.5,"baseSeverity":"MEDIUM"},"exploitabilityScore":2.1,"impactScore":3.4}],"cvssMetricV2":[{"source":"cna@vuldb.com","type":"Secondary","cvssData":{"version"
:"2.0","vectorString":"AV:A\/AC:L\/Au:S\/C:P\/I:P\/A:P","accessVector":"ADJACENT_NETWORK","accessComplexity":"LOW","authentication":"SINGLE","confidentialityImpact":"PARTIAL","integrityImpact":"PARTIAL","availabilityImpact":"PARTIAL","baseScore":5.2},"baseSeverity":"MEDIUM","exploitabilityScore":5.1,"impactScore":6.4,"acInsufInfo":false,"obtainAllPrivilege":false,"obtainUserPrivilege":false,"obtainOtherPrivilege":false,"userInteractionRequired":false}]},"weaknesses":[{"source":"cna@vuldb.com","type":"Primary","description":[{"lang":"en","value":"CWE-89"}]}],"configurations":[{"nodes":[{"operator":"OR","negate":false,"cpeMatch":[{"vulnerable":true,"criteria":"cpe:2.3:a:clan7ups_project:clan7ups:*:*:*:*:*:*:*:*","versionEndExcluding":"2013-02-12","matchCriteriaId":"12D82AEE-3A68-4121-811C-C3462BCEAF25"}]}]}],"references":[{"url":"https:\/\/github.com\/antonbolling\/clan7ups\/commit\/25afad571c488291033958d845830ba0a1710764","source":"cna@vuldb.com","tags":["Patch","Third Party Advisory"]}       I would appreciate any help  Thank you
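If the whole file arrives as one very long line of JSON, Splunk's default TRUNCATE limit of 10,000 bytes will cut the event off, which matches the "only ~10% imported" symptom. A hedged props.conf sketch for the sourcetype that reads this file (the sourcetype name here is a placeholder; apply it where the data is parsed, e.g. the forwarder for monitored files with indexed extractions):

```
# props.conf — sourcetype name is a placeholder
[nvd_cve_json]
# default is 10000 bytes; 0 disables event truncation entirely
TRUNCATE = 0
# parse the whole file as structured JSON
INDEXED_EXTRACTIONS = json
# avoid double extraction at search time when using indexed extractions
KV_MODE = none
```

Restart the instance (and re-ingest the file under a new name, since Splunk remembers already-read files) after changing these settings.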
I am trying to get a list of inactive Splunk users. I first tried just grabbing a list of all users with a last login older than 6 months, but that gives me users that have already been deleted in Splunk, like this:

index=_audit action="login attempt"
| where strptime('timestamp',"%m-%d-%Y %H:%M:%S")<relative_time(now(),"-6mon")
| stats latest(timestamp) by user

Then I tried joining it with a list of the current users from the REST API, like this:

| rest /services/authentication/users splunk_server=local
| fields realname, title
| rename title as user
| join user type=left
    [ search index=_audit action="login attempt"
      | where strptime('timestamp',"%m-%d-%Y %H:%M:%S")<relative_time(now(),"-6mon")
      | stats latest(timestamp) by user ]

This doesn't work and just outputs a list of current users. What I want: a list of current Splunk users with a last login attempt older than 6 months, showing real name, username, and last login time. I have tried this solution from javiergn, but I cannot get the last login time with it: https://community.splunk.com/t5/Splunk-Search/How-do-I-edit-my-search-to-identify-inactive-users-over-the-last/m-p/285256
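The join direction is right, but the subsearch pre-filters to logins older than 6 months, so a user who logged in both yesterday and a year ago still matches on the old event. Compute each user's latest login first, then filter after the join; users with no audit events at all surface with a null last_login. A sketch (the 2-year lookback is an assumption to bound the subsearch):

```
| rest /services/authentication/users splunk_server=local
| fields title realname
| rename title as user
| join type=left user
    [ search index=_audit action="login attempt" earliest=-2y
      | stats latest(_time) as last_login by user ]
| where isnull(last_login) OR last_login < relative_time(now(), "-6mon")
| eval last_login=strftime(last_login, "%Y-%m-%d %H:%M:%S")
| table user realname last_login
```

Note the subsearch is capped at 50,000 results by default, so on a busy deployment a summary index or lookup of last logins scales better.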
In the Admin classes, configuration precedence was defined for index time and search time. However, since the Splunk UF neither indexes nor searches, what precedence order does the UF follow?
source=PR1 sourcetype="sap:abap" EVENT_TYPE=STAD EVENT_SUBTYPE=MAIN (TCODE="ZORF_BOX_CLOSING") SYUCOMM="SICH_T" ACCOUNT=HRL*
| eval RESPTI = round(RESPTI/1000,2), DBCALLTI=round(DBCALLTI/1000,2)
| timechart avg(RESPTI) as "Average_Execution_Time" avg(DBCALLTI) as "Average_DB_Time" span=5m
| eval Average_Execution_Time = round(Average_Execution_Time,2), Average_DB_Time=round(Average_DB_Time,2)
| eventstats
| eval UCL='stdev(Average_Execution_Time)'+'mean(Average_Execution_Time)', UCL_DB='stdev(Average_DB_Time)'+'mean(Average_DB_Time)'
| eval day_of_week = strftime(_time,"%A")
| where day_of_week!= "Saturday" and day_of_week!= "Sunday"
| eval New_Field=if(RESPTI >= UCL, 1, 0)
| timechart sum(New_Field) span=$span$

This is the search I am using. I am trying to get a bar chart that shows the number of times RESPTI goes over the UCL. The problem is that I cannot compare whether RESPTI is bigger than the UCL, because the value does not load: if I table it with | table RESPTI, UCL, New_Field, then RESPTI just shows up empty.
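Three things stand out: `| eventstats` with no aggregation functions computes nothing, the quoted 'stdev(Average_Execution_Time)' in eval is read as a literal field name rather than a calculation, and RESPTI no longer exists after the first timechart (only the aggregated columns survive). A sketch of the UCL portion under those assumptions, comparing the 5-minute averages against mean + stdev:

```
... base search ...
| timechart avg(RESPTI) as Average_Execution_Time avg(DBCALLTI) as Average_DB_Time span=5m
| eventstats avg(Average_Execution_Time) as mean_exec stdev(Average_Execution_Time) as stdev_exec
| eval UCL=mean_exec+stdev_exec
| eval over_ucl=if(Average_Execution_Time>=UCL, 1, 0)
| where strftime(_time,"%A")!="Saturday" AND strftime(_time,"%A")!="Sunday"
| timechart sum(over_ucl) span=$span$
```

This counts 5-minute buckets whose average exceeds the UCL; to count individual events over the UCL, compute the UCL with eventstats on the raw events before any timechart instead.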
We have ingested Juniper logs as syslog and are trying to create some dashboards for the network team. We need some dashboard templates for Juniper device log data.
How do I configure user experience monitoring for an application? Can you provide the steps?

Thanks & Regards,
Anshuman
Hi, I need help extracting a value from a field named "message". The "message" field value is as below:

The process C:\Windows\system32\winlogon.exe (PRD01) has initiated the power off of computer PC01 on behalf of user ADMIN JABATAN for the following reason: No title for this reason could be found
The process C:\Windows\system32\shutdown.exe (PRD01) has initiated the restart of computer PC01 on behalf of user ADMIN\SUPPORT for the following reason: No title for this reason could be found
The process C:\Windows\system32\shutdown.exe (PRD01) has initiated the restart of computer PC01 on behalf of user admin for the following reason: No title for this reason could be found

The values I want to extract are:

newField
ADMIN JABATAN
ADMIN\SUPPORT
admin

Please assist. Thanks.
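In every sample the user name sits between "on behalf of user" and "for the following reason", so a non-greedy capture between those two anchors should work. A sketch:

```
| rex field=message "on behalf of user (?<newField>.+?) for the following reason"
| table newField
```

The non-greedy `.+?` stops at the first "for the following reason", which also tolerates spaces and backslashes inside the user name.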
Hello Splunkers! I'm trying to take a backup of a lookup file (file.csv) into a backup file (file_backup.csv) and schedule the search on a daily basis. The query below just runs and overwrites the old backup file, but I want the scheduled search to run only when new entries are added to file.csv.

|inputlookup file.csv
|outputlookup file_backup.csv

Also, I want to add 2 new columns (the user who edited the lookup and the time when it was edited) to the backup lookup.

Original file (file.csv):
column1 column2

The backup file (file_backup.csv) generated by the scheduled search should have:
column1 column2 time user

Any thoughts, please?

Cheers!
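Splunk does not keep per-row editor metadata inside a CSV lookup, so a "user" column can only record who or what ran the backup; tracing actual lookup edits would mean searching the _audit index separately. A sketch that stamps each backed-up row with the backup time and a label (the label value is an assumption, not something Splunk records):

```
| inputlookup file.csv
| eval time=strftime(now(), "%Y-%m-%d %H:%M:%S")
| eval user="scheduled_backup"
| outputlookup file_backup.csv
```

To avoid overwriting when nothing changed, one option is to compare the row counts (or a hash of the rows) of file.csv and file_backup.csv within the search and only reach the outputlookup branch when they differ.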
Kindly provide me a solution for the following. Suppose I have created 5 health rules; I can check the violated health rules in the 'Violations & Anomalies' tab on the controller. My question is: how will I get the exact count of how many times a particular health rule was violated over a specified (custom) time period?
Currently, I am trying to collect DNS logs via TA_Windows, where the inputs.conf file has [WinEventLog: //DNS Server) disabled=0, but it is still not working. I am trying to get the DNS logs into the microsoft_windows index on the indexer. I have the DNS Server role installed on the machine, and the UF is also installed, but it is still not working. I have seen many other blogs, but none exactly pointing out the solution. Any help will be appreciated. Thanks.
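The stanza as written has a stray space and mismatched brackets, which Splunk will not parse; the Windows TA expects the form below. A hedged inputs.conf sketch for the UF on the DNS server (the index name is from the post; confirm that index exists on the indexers, since events sent to a missing index are dropped):

```
# inputs.conf on the universal forwarder, e.g. Splunk_TA_windows/local/
[WinEventLog://DNS Server]
disabled = 0
index = microsoft_windows
```

Restart the UF after editing, and check splunkd.log on the forwarder for input errors if events still do not arrive.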
When trying to deploy from https://github.com/aws-quickstart/quickstart-splunk-enterprise, I am unable to get past the SplunkCM EC2 instance deployment. The error is: "Failed to receive 1 resource signal(s) within the specified duration." I have tried to follow the steps here: https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-failed-signal/ The instance appears to be created successfully in EC2, but I am unable to SSH into the instance to check whether the cfn-signal scripts ran successfully, which seems to be the likely issue here. Any help would be much appreciated.
Hi all,

First time posting here, so please be patient; I am relatively new to the Splunk environment, but I am struggling to figure out this search. My manager has asked me to create an alert for load balancers flapping on our server.

Criteria:
- Runs every 15 mins (I assume this can be set in the "alert" settings)
- Fires if a load balancer switches from Up to Down and back more than 5 times

The second point is what I am struggling to work out; this is what I have so far:

index=xxx sourcetype="xxx" host="xxx" (State=UP OR State=DOWN) State="*"
| stats count by State
| eval state_status = if(DOWN+UP == 5, "Problem", "OK")
| stats count by state_status

Note: "State" is the field in question, as it stores the UP/DOWN events. Based on this, I can get individual counts of when the load balancer was UP and when it was DOWN, but I need to turn this into a threshold search that only fires when the state changed from UP to DOWN 5 or more times in a row. Any and all help will be much appreciated.
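Counting UP and DOWN totals separately cannot detect flapping; what matters is the number of state *transitions*. A sketch using streamstats to compare each event's State with the previous one per host (index, sourcetype, and field names are the placeholders from the post):

```
index=xxx sourcetype="xxx" host="xxx" (State=UP OR State=DOWN)
| sort 0 _time
| streamstats current=f window=1 last(State) as prev_state by host
| eval transition=if(isnotnull(prev_state) AND State!=prev_state, 1, 0)
| stats sum(transition) as flaps by host
| where flaps > 5
```

Schedule this over a 15-minute window every 15 minutes and trigger the alert when the result count is greater than 0.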
The following query prints pp_user_action_name, Total_Calls, and Avg_User_Action_Response, but it does not return pp_user_action_user values, since userId sits outside the userActions{} array. I am not able to combine values from the inner array with values from the outer event. How can I fix this?

index="dynatrace" sourcetype="dynatrace:usersession"
| spath output=pp_user_action_user path=userId
| search pp_user_action_user="xxxx,xxxx"
| spath output=user_actions path="userActions{}"
| stats count by user_actions
| spath output=pp_user_action_application input=user_actions path=application
| where pp_user_action_application="xxxxx"
| spath output=pp_user_action_name input=user_actions path=name
| spath output=pp_user_action_targetUrl input=user_actions path=targetUrl
| spath output=pp_user_action_response input=user_actions path=visuallyCompleteTime
| eval pp_user_action_name=substr(pp_user_action_name,0,150)
| stats count(pp_user_action_response) AS "Total_Calls", avg(pp_user_action_response) AS "Avg_User_Action_Response" by pp_user_action_name
| eval Avg_User_Action_Response=round(Avg_User_Action_Response,0)
| table pp_user_action_user, pp_user_action_name, Total_Calls, Avg_User_Action_Response
| sort -Total_Calls

Please find below a sample event.
{
  "applicationType": "WEB_APPLICATION",
  "bounce": false,
  "browserFamily": "MicrosoftEdge",
  "browserMajorVersion": "MicrosoftEdge108",
  "browserType": "DesktopBrowser",
  "clientType": "DesktopBrowser",
  "connectionType": "UNKNOWN",
  "dateProperties": [...],
  "displayResolution": "FHD",
  "doubleProperties": [...],
  "duration": 279730,
  "endReason": "TIMEOUT",
  "endTime": 1676486021319,
  "errors": [...],
  "events": [...],
  "hasError": true,
  "hasSessionReplay": false,
  "internalUserId": "xxxxx",
  "ip": "xxxxx",
  "longProperties": [...],
  "matchingConversionGoals": [...],
  "matchingConversionGoalsCount": 0,
  "newUser": true,
  "numberOfRageClicks": 0,
  "numberOfRageTaps": 0,
  "osFamily": "Windows",
  "osVersion": "Windows10",
  "partNumber": 0,
  "screenHeight": 1080,
  "screenOrientation": "LANDSCAPE",
  "screenWidth": 1920,
  "startTime": 1676485741589,
  "stringProperties": [...],
  "syntheticEvents": [...],
  "tenantId": "xxxx",
  "totalErrorCount": 3,
  "totalLicenseCreditCount": 1,
  "userActionCount": 12,
  "userActions": [
    {
      "apdexCategory": "FRUSTRATED",
      "application": "xxxx",
      "cdnBusyTime": null,
      "cdnResources": 0,
      "cumulativeLayoutShift": null,
      "customErrorCount": 0,
      "dateProperties": [...],
      "documentInteractiveTime": null,
      "domCompleteTime": null,
      "domContentLoadedTime": null,
      "domain": "xxxxx",
      "doubleProperties": [...],
      "duration": 16292,
      "endTime": 1676485757881,
      "firstInputDelay": null,
      "firstPartyBusyTime": 15012,
      "firstPartyResources": 2,
      "frontendTime": 1289,
      "internalApplicationId": "xxxxx",
      "javascriptErrorCount": 0,
      "keyUserAction": false,
      "largestContentfulPaint": null,
      "loadEventEnd": null,
      "loadEventStart": null,
      "longProperties": [...],
      "matchingConversionGoals": [...],
      "name": "clickontasknamexxxxx",
      "navigationStart": 1676485742474,
      "networkTime": 1881,
      "requestErrorCount": 0,
      "requestStart": 1175,
      "responseEnd": 15003,
      "responseStart": 14297,
      "serverTime": 13122,
      "speedIndex": 16292,
      "startTime": 1676485741589,
      "stringProperties": [...],
      "targetUrl": "xxxx",
      "thirdPartyBusyTime": null,
      "thirdPartyResources": 0,
      "totalBlockingTime": null,
      "type": "Xhr",
      "userActionPropertyCount": 0,
      "visuallyCompleteTime": 16292
    },
    "... (11 more user actions, collapsed)"
  ],
  "userExperienceScore": "TOLERATED",
  "userId": "xxxxx,xxxx",
  "userSessionId": "xxxxx",
  "userType": "REAL_USER"
}
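The session-level userId is lost because "stats count by user_actions" discards every field not in its by-clause. One possible fix (a sketch, untested against this data, reusing the field names and placeholder values from the question) is to expand the userActions{} multivalue field with mvexpand instead, so the outer fields stay attached to each action, and then group by both the user and the action name:

```spl
index="dynatrace" sourcetype="dynatrace:usersession"
| spath output=pp_user_action_user path=userId
| search pp_user_action_user="xxxx,xxxx"
| spath output=user_actions path="userActions{}"
| mvexpand user_actions
| spath input=user_actions output=pp_user_action_application path=application
| where pp_user_action_application="xxxxx"
| spath input=user_actions output=pp_user_action_name path=name
| spath input=user_actions output=pp_user_action_response path=visuallyCompleteTime
| eval pp_user_action_name=substr(pp_user_action_name,1,150)
| stats count(pp_user_action_response) AS "Total_Calls", avg(pp_user_action_response) AS "Avg_User_Action_Response" by pp_user_action_user, pp_user_action_name
| eval Avg_User_Action_Response=round(Avg_User_Action_Response,0)
| sort -Total_Calls
```

Because pp_user_action_user is now part of the stats by-clause, it survives the aggregation and no separate table command is needed to surface it.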