All Topics
Hello team, We want to run some custom code inside Splunk SOAR that uses the pandas Python package. We can already install pandas and use it with the commands below:

sudo su - phantom
/opt/phantom/bin/phenv pip install pandas

After installing, we can use pandas in custom functions just fine. Is this approach safe, or can it lead to compatibility issues in the future (e.g. SOAR upgrades)? Thanks in advance!
I want to monitor AWS log sources across various accounts. Whenever logs stop coming in for a particular sourcetype, I need an alert for the specific accounts. I have tried something like the following, but it's not picking it up right away, so any suggested SPL would be appreciated (not sure whether we can use tstats so it runs much faster):

index=aws sourcetype="aws:cloudtrail" aws_account_id IN(991650019 55140 5557 39495836 157634 xxxx9015763)
| eval now=now()
| eval time_since_last=round(((now-Latest)/60)/60,2)
| stats latest(_time) as last_event_time, earliest(_time) as first_event_time count by sourcetype aws_account_id
| eval time_gap = last_event_time - first_event_time
| where time_gap > 4000
| table aws_account_id first_event_time last_event_time time_gap
| convert ctime(last_event_time)
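A minimal sketch of one way to alert on accounts that have gone quiet, assuming a 60-minute silence threshold (adjust to your environment) and reusing the account list above as a placeholder:

index=aws sourcetype="aws:cloudtrail" aws_account_id IN(991650019 55140 5557 39495836 157634 xxxx9015763)
| stats latest(_time) as last_event_time by aws_account_id
| eval minutes_since_last=round((now()-last_event_time)/60,0)
| where minutes_since_last > 60
| convert ctime(last_event_time)

tstats can make this faster, but only if aws_account_id is an indexed field or the data is mapped to an accelerated data model; otherwise the stats version is the safer starting point. Also note that an account which has sent nothing inside the search window will not appear in the results at all, so comparing against a lookup of expected accounts is usually needed to catch completely silent ones.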
Hi, We've installed this app and tried configuring it to send Splunk alerts to Jira. After entering the API URL and key and hitting 'Complete Setup,' it keeps redirecting back to the configuration page. Is anyone else experiencing this issue on Splunk Cloud?
Hi all, I'm trying to figure out a way to edit the alert that is sent to PagerDuty. Currently I have a bunch of alerts that are being sent to the notable index, and then a single alert that searches that index and is sent to PagerDuty. The problem is, the alert is sending the name of the original alert in the "alert" section (not the notification). Is there a way I can edit the catch-all alert so that it doesn't send the name of the original alert?
Welcome to the "Splunk Classroom Chronicles" series, created to help curious, career-minded learners get acquainted with Splunk Education and our instructor-led classes – and hear what other students... See more...
Welcome to the "Splunk Classroom Chronicles" series, created to help curious, career-minded learners get acquainted with Splunk Education and our instructor-led classes – and hear what other students are saying about their learning experiences.  In today's dynamic workplace, ongoing professional development is more important than ever, and Splunk is at the forefront of facilitating this growth through comprehensive, interactive online training sessions. Our courses are designed not only to enhance technical skills but also to enrich the overall learning experience. From engaging instructors to hands-on labs, each class is tailored to ensure that participants gain practical knowledge and real-world expertise. Let’s Kick it Off  Join us for Part 1 of our series as we explore the tales and testimonials from those who've experienced Splunk Education instructor-led training first-hand. You’ll meet our course instructors and developers – those who are dedicated to making your learning experience interesting, engaging, and valuable. Our Splunk course developers work to develop the quality curriculum and lab experiences, which is then handed off to our instructors. The end result, we hope, is happy learners with constructive feedback to share about our instructor-led courses. Splunk Enterprise Data Administration Course The 18-hour Splunk Enterprise Data Administration course is designed for administrators who are responsible for getting data into Splunk Indexers. The course provides the fundamental knowledge of Splunk forwarders and methods to get remote data into Splunk indexers. It covers installation, configuration, management, monitoring, and troubleshooting of Splunk forwarders and Splunk Deployment Server components. David Lowe is one of the course instructors and Kevin Stewart is the course developer.  Here’s what one student had to say about David Lowe “I wanted to say how much I’ve enjoyed both of the Splunk Enterprise courses I’ve taken with you over the last few weeks. You kept it engaging which can’t be easy given the volume of topics you need to cover…I feel I’ve learned a lot that I can take back to our Splunk instance so definitely a win. I’d certainly recommend you as an instructor for any of my colleagues looking to take these courses.” Using Splunk Enterprise Security Course Using Splunk Enterprise Security is a 13.5-hour course designed to prepare security practitioners to use Splunk Enterprise Security (ES). In this instructor-led course, students identify and track incidents, analyze security risks, use predictive analytics, and discover threats. Lauri Harris is one of the course instructors and Nicole Bichon is the course developer.  Here’s what one student had to say about Lauri “I had Lauri as a trainer in the Splunk ES course. She was absolutely wonderful… so many of us were impressed with her knowledge. I was brand new to Splunk. The prerequisites were somewhat helpful in the intro of it, but Lauri was awesome at explaining so many other features of it. Even though I feel that so much information was covered in two days, she did an awesome job of touching on everything she could, answering questions and going through the features and labs. [I hope] we have the pleasure of learning more from her again.” Splunk Cloud Administration Course Splunk Cloud Administration course is an 18-hour instructor-led course for administrators new to Splunk Cloud and those wanting to become more experienced in managing Splunk Cloud instances. 
The course provides administrators with the opportunity to gain the skills, knowledge, and best practices for data management and system configuration for data collection and ingestion required in a Splunk Cloud environment to create a productive Splunk SaaS deployment. The hands-on labs provide the opportunity to learn and ask questions on how to manage and maintain the platform and the users, and how to effectively get data into Splunk Cloud. Modules include data inputs and forwarder configuration, data management, user accounts, and basic monitoring and problem isolation. Sue Rich is one of the course instructors and Tomer Gurantz and Rob Zylstra are the course developers.

Here's what one student had to say about Sue Rich: "Thanks so much for the awesome job you did teaching us Splunk Cloud Administration and also Splunk Enterprise and forwarder management knowledge that we can apply to our on-premises footprint as well. You made this week of training fun and easily digestible and we certainly learned a great deal about the Splunk environment we're tasked with supporting from the customer side. Take care and hope we all get to roll with you again in the future."

Resources and Reminders

If we've piqued your interest in the value of Splunk Education and you'd like to increase your Splunk knowledge or get started on your journey, here are some useful resources:

Course Registration: Ready to take the next step? Register for these or any of our courses here.
Splunk Education: Visit the official Splunk Education website to explore more courses and certification details.
Splunk Lantern: Get field-tested guidance on use cases and best practices using Splunk Lantern.
Community Insights: Join the Splunk Community to connect with other users and get insights into best practices and troubleshooting.
Splunk Certification: Validate your Splunk proficiency with any of our Splunk Certifications.

Whether you're a new administrator or a seasoned Splunk veteran, our courses are designed to empower you with the knowledge and skills needed to excel in your role. Stay curious, keep learning, and we look forward to seeing you in one of our upcoming classes!
As you may know, the Splunk OTel Collector can collect logs from Kubernetes and send them into Splunk Cloud/Enterprise using the Splunk OTel Collector chart distribution. However, you can also use the Splunk OTel Collector to collect logs from Windows or Linux hosts and send those logs directly to Splunk Enterprise/Cloud as well. This information isn't easily found in the documentation, as it appears the standalone (non-Helm-chart) distribution of the OTel Collector can only be used for Splunk Observability. In the instructions below, I will show you how to install the Collector even if you don't have a Splunk Observability (O11y) subscription.

In terms of compatibility, the Splunk OTel Collector is supported on the following operating systems:

Amazon Linux: 2, 2023. Log collection with Fluentd is not currently supported for Amazon Linux 2023.
CentOS, Red Hat, or Oracle: 7, 8, 9
Debian: 9, 10, 11
SUSE: 12, 15 for version 0.34.0 or higher. Log collection with Fluentd is not currently supported.
Ubuntu: 16.04, 18.04, 20.04, 22.04, and 24.04
Rocky Linux: 8, 9
Windows 10 Pro and Home, Windows Server 2016, 2019, 2022

Once you have confirmed that your operating system is compatible, use the following steps to install the Splunk OTel Collector.

First, use sudo to export the following variable. This variable will be referenced by the Collector installer and verifies that you aren't installing the Collector for Observability, where an Access Token would need to be specified:

sudo export VERIFY_ACCESS_TOKEN=false

Next, run the installation script (in this example we are going to use curl, but other installation methods can be found here):

curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh; sh /tmp/splunk-otel-collector.sh --hec-token <token> --hec-url <hec_url> --insecure true

You may notice we modify the installation script from the original instructions: we specify the HEC token and HEC URL of the Splunk instance you want to send your logs to. Please note that both the HEC token and HEC URL are required for the installation to work correctly. The installer should then install the Collector and start sending logs over to your Splunk instance automatically (assuming your network allows the traffic out); if you want to know what log ingestion methods are configured out of the box, please see the default pipeline for the OTel Collector as specified here.

What if you want your Splunk OTel Collector to send logs to Enterprise/Cloud and you also want to send metrics or traces to Splunk Observability? In that case, you can modify the installation script above to include your O11y realm and Access Token in addition to your HEC URL and HEC token, like this:

curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh; sh /tmp/splunk-otel-collector.sh --realm <o11y_realm> --hec-token <token> --hec-url <hec_url> --insecure true -- <ACCESS_TOKEN>

Please note that the Access Token always follows the bare -- separator and should always be placed at the end of your installer command.
I have 2 queries where each query retrieves fields from a different source using regex; I combine them using append, group the data using stats by a common id, and then evaluate the result. But what is happening is that, before it loads the data from query 2, it's evaluating and giving the wrong result with a large data set. A sample query looks like this:

index=a component=serviceA "incoming data"
| eventstats values(name) as name, values(age) as age by id1,id2
| append [search index=a component=serviceB "data from"
    | eventstats values(parentName) as parentName, values(parentAge) as parentAge by id1,id2]
| stats values(name) as name, values(age) as age, values(parentName) as parentName, values(parentAge) as parentAge by id1,id2
| eval mismatch=case(isnull(name) AND isnull(age), "data doesn't exist in serviceA",
    isnull(parentName) AND isnull(parentAge), "data doesn't exist in serviceB",
    true, "No mismatch")
| table name,age,parentAge,parentName,mismatch,id1,id2

So in my case, with large data, before the data gets loaded from query 2 it reports "data doesn't exist in serviceB", even though there is no mismatch. Please suggest how we can tackle this situation; I tried using join, but it's the same.
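The likely culprit is that append runs the serviceB search as a subsearch, and subsearches are subject to result-count and runtime limits, so on large data sets the serviceB side gets silently truncated before stats runs. A sketch that avoids the subsearch entirely, assuming name, age, parentName, and parentAge are already extracted at search time for their respective components:

(index=a component=serviceA "incoming data") OR (index=a component=serviceB "data from")
| stats values(name) as name, values(age) as age, values(parentName) as parentName, values(parentAge) as parentAge by id1,id2
| eval mismatch=case(isnull(name) AND isnull(age), "data doesn't exist in serviceA",
    isnull(parentName) AND isnull(parentAge), "data doesn't exist in serviceB",
    true(), "No mismatch")
| table name,age,parentAge,parentName,mismatch,id1,id2

Because stats values() already deduplicates per id1,id2, the eventstats steps are probably not needed here. join has the same subsearch limits as append, which is why switching to it behaved the same way.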
Splunk Observability Cloud recently launched an improved design for the access tokens page for better usability and performance, which was announced here. This release made some important token changes that offer greater flexibility to control your default org token expiry with longer expiry dates.

Prior to this release, org tokens' default expiry was 365 days. This has been shortened to 30 days for enhanced security. However, we understand this might be inconvenient in some cases. To mitigate this change, we also released the ability for customers to make a one-time change to update their default 30-day expiry from the global settings page. To summarize, the following changes have been added:

1. Default expiry: The default token expiry for org tokens is now 30 days (formerly 365 days). This default expiry value can be changed to x days (up to 18 years) based on customer choice from the General Settings page, accessible to the admin role. Once this value is changed, all new tokens created will use this value as the default expiry.

2. Token expiry at the time of setting up new tokens: At the time of setting up a new token, customers have a choice of changing the token expiry of that specific token to any value up to 18 years.

3. Token expiry at the time of token rotation: At the time of token rotation, customers can change the expiration date to any value up to 18 years from the time of token rotation.
I am attempting to use a lookup to feed some UNC file paths into a dashboard search, but I am getting tripped up by all the escaping of the backslashes and double quotes in my string. I want to call a field from a lookup with something like this as the actual value:

file_path="\\\\*\\branch\\system\\type1\\*" OR file_path="\\\\*\\branch\\system\\type2\\*"

I want to populate a field in my lookup table with actual key/value pairs and output the entire string based on a menu selection. Unfortunately, if I try this, Splunk escapes all the double quotes and all the backslashes and it ends up looking like this in the litsearch, which is basically useless:

file_path=\"\\\\\\\\*\\\\branch\\\\service\\\\type1\\\\*\" OR file_path=\"\\\\\\\\*\\\\branch\\\\service\\\\type2\\\\*\"

How can I either properly escape the value within the lookup table so this doesn't happen, or is there any way to get Splunk to output the lookup value as a literal string and not try to interpret it?
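One workaround is the subsearch rename-to-search trick: when a subsearch returns a field literally named search, its value is spliced into the outer search as raw SPL rather than as a quoted field value, so the string in the lookup is used exactly as written. A sketch, where index=your_index, the lookup name unc_paths.csv, the columns menu_choice and search_string, and the $menu_token$ dashboard token are all hypothetical placeholders:

index=your_index
    [ | inputlookup unc_paths.csv
      | where menu_choice="$menu_token$"
      | fields search_string
      | rename search_string as search ]

In the search_string column, store the text exactly as you would type it into the search bar, e.g. file_path="\\\\*\\branch\\system\\type1\\*" OR file_path="\\\\*\\branch\\system\\type2\\*"; since the value contains OR, it is safest to include surrounding parentheses in the stored string so it combines cleanly with the rest of the base search.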
Not sure what I am doing wrong. I have a data model with a dataset that I can pivot on a field when using the data model explorer. When I try to use | tstats it does not work.

I get results as expected with:

| tstats count as order_count from datamodel=spc_orders

However, if I try to filter on the field:

| tstats count as order_count from datamodel=spc_orders where state="CA"

0 results. What's going on here?
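In tstats, fields that belong to a data model dataset generally have to be referenced with the dataset name as a prefix; an unprefixed name is treated as an ordinary indexed field and silently matches nothing. A sketch, assuming the root dataset of the spc_orders data model is also named spc_orders and that state is one of its fields:

| tstats count as order_count from datamodel=spc_orders where spc_orders.state="CA"

| tstats count as order_count from datamodel=spc_orders by spc_orders.state

If your root dataset has a different name, use that name as the prefix instead (the data model explorer shows the dataset hierarchy).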
Hi Team, Below is my raw log. I want to fetch 38040 from the log; please guide.

ArchivalProcessor - Total records processed - 38040
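A minimal rex sketch, assuming the "Total records processed - <number>" text appears in _raw; index=your_index is a placeholder for your own index/sourcetype filters and records_processed is just an illustrative field name:

index=your_index "ArchivalProcessor - Total records processed"
| rex "Total records processed - (?<records_processed>\d+)"
| table _time records_processed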
2024-11-12 12:12:28.000,REQUEST="{"body":"<n1:Request xmlns:ESILib=\"http:/abcs/v1\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:n1=\"http://www.shaw.ca/esi/schema/product/inventoryreservation_create/v1\" xsi:schemaLocation=\"http://www.shaw.ca/esi/schema/product/inventoryreservation_create/v1 FES_InventoryReservation_create.xsd\"><n1:inventoryReservationCreateRequest><n1:brand>xyz</n1:brand><n1:channel>ABC</n1:channel><n1:bannerID>8669</n1:bannerID><n1:location>WD1234</n1:location><n1:genericLogicalResources><n1:genericLogicalResource><ESILib:skuNumber>194253408031</ESILib:skuNumber><ESILib:extendedProperties><ESILib:extendedProperty><ESILib:name>ReserveQty</ESILib:name><ESILib:values><ESILib:item>1</ESILib:item></ESILib:values></ESILib:extendedProperty></ESILib:extendedProperties></n1:genericLogicalResource></n1:genericLogicalResources></n1:inventoryReservationCreateRequest></n1:Request>

How do I retrieve the banner ID and location from the above using a Splunk query? I used the query below but it did not yield any results:

index="abc" sourcetype="oracle:transactionlog" OPERATION="/service/v1/inventory/reservation"
| rex "REQUEST=\"(?<REQUEST>.+)\", RESPONSE=\"(?<RESPONSE>.+)\", RETRYNO"
| spath input=REQUEST
| spath input=REQUEST output=Bannerid path=body.n1:Request{}.n1:bannerID
| table Bannerid
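spath paths get awkward here because of the XML namespace prefixes and the escaped quotes inside REQUEST, so a simpler hedged alternative is to pull the two tags straight out of _raw with rex (the field names Bannerid and Location are just illustrative choices):

index="abc" sourcetype="oracle:transactionlog" OPERATION="/service/v1/inventory/reservation"
| rex "<n1:bannerID>(?<Bannerid>[^<]+)</n1:bannerID>"
| rex "<n1:location>(?<Location>[^<]+)</n1:location>"
| table _time Bannerid Location

This sidesteps the question of whether the REQUEST field was extracted correctly; if you do want to keep the spath approach, check first that your REQUEST rex actually matches the event (it requires a ", RESPONSE=" portion that is not present in the sample above).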
In my air-gapped lab, I have a 5GB Splunk license but am hardly using 1GB. Within the lab, we are working to set up a smaller lab that will be on a separate network and won't be talking to the other lab. We are to deploy Splunk in the new lab. How can I break the 5GB license into 3GB and 2GB, so I can use the 2GB in the new, smaller lab?
Hello everyone, I'm having an issue that I'm trying to understand and fix. I have a dashboard table that displays the last 24 hours of events. However, the event _time is always showing 11 minutes past the hour (see the attached screenshot), and these aren't the correct event times. When I run the exact same search manually, I get the correct event times. Does anyone know why this is occurring and how I can fix it? Thanks for any help on this one, much appreciated. Tom
Hi, I have incoming data from 2 Heavy Forwarders. Both of them forward HEC data and their internal logs. How do I identify which HF is sending a particular HEC event? Regards, Pravin
Hi Team, I have the panel query below. I want to sort on the basis of busDt and StartTime, but the results are not coming out correctly: currently it sorts on busDt but not on StartTime. Could anyone guide me on this? Please guide.

index="abc" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "StatisticBalancer - statisticData: StatisticData" "CARS.UNB."
| rex "totalOutputRecords=(?<totalOutputRecords>),busDt=(?<busDt>),fileName=(?<fileName>),totalAchCurrOutstBalAmt=(?<totalAchCurrOutstBalAmt>),totalAchBalLastStmtAmt=(?<totalAchBalLastStmtAmt>),totalClosingBal=(?<totalClosingBal>),totalRecordsWritten=(?<totalRecordsWritten>),totalRecords=(?<totalRecords>)"
| eval totalAchCurrOutstBalAmt=tonumber(mvindex(split(totalAchCurrOutstBalAmt,"E"),0)) * pow(10,tonumber(mvindex(split(totalAchCurrOutstBalAmt,"E"),1)))
| eval totalAchBalLastStmtAmt=tonumber(mvindex(split(totalAchBalLastStmtAmt,"E"),0)) * pow(10,tonumber(mvindex(split(totalAchBalLastStmtAmt,"E"),1)))
| eval totalClosingBal=tonumber(mvindex(split(totalClosingBal,"E"),0)) * pow(10,tonumber(mvindex(split(totalClosingBal,"E"),1)))
| table busDt fileName totalAchCurrOutstBalAmt totalAchBalLastStmtAmt totalClosingBal totalRecordsWritten totalRecords
| sort busDt
| appendcols [search index="abc" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log"
    | rex "CARS\.UNB(CTR)?\.(?<CARS_ID>\w+)"
    | transaction CARS_ID startswith="Reading Control-File /absin/CARS.UNBCTR." endswith="Completed Settlement file processing, CARS.UNB."
    | eval StartTime=min(_time)
    | eval EndTime=StartTime+duration
    | eval duration_min=floor(duration/60)
    | rename duration_min as CARS.UNB_Duration
    | table StartTime EndTime CARS.UNB_Duration]
| fieldformat StartTime = strftime(StartTime, "%F %T.%3N")
| fieldformat EndTime = strftime(EndTime, "%F %T.%3N")
| appendcols [search index="600000304_d_gridgain_idx*" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "FileEventCreator - Completed Settlement file processing" "CARS.UNB."
    | rex "FileEventCreator - Completed Settlement file processing, (?<file>[^ ]*) records processed: (?<records_processed>\d+)"
    | rename file as Files
    | rename records_processed as Records
    | table Files Records]
| appendcols [search index="600000304_d_gridgain_idx*" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
    | head 7
    | eval True=if(searchmatch("ebnc event balanced successfully"),"✔","")
    | eval EBNCStatus="ebnc event balanced successfully"
    | table EBNCStatus True]
| rename busDt as Business_Date
| rename fileName as File_Name
| rename CARS.UNB_Duration as CARS.UNB_Duration(Minutes)
| table Business_Date File_Name StartTime EndTime CARS.UNB_Duration(Minutes) Records totalClosingBal totalRecordsWritten totalRecords EBNCStatus
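Two things work against the current ordering: appendcols pastes rows together purely by position, so the sub-results are not guaranteed to line up with the busDt rows, and the single | sort busDt runs before StartTime even exists. Assuming you keep this overall structure, a minimal adjustment is to drop the early sort and sort once at the very end, after all the columns have been combined:

... | table Business_Date File_Name StartTime EndTime CARS.UNB_Duration(Minutes) Records totalClosingBal totalRecordsWritten totalRecords EBNCStatus
| sort 0 Business_Date StartTime

Because fieldformat only changes the display, StartTime is still the numeric epoch value underneath, so it sorts chronologically; the 0 removes the default 10,000-row sort limit.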
Hello, I have a distributed Splunk architecture with a single search head, two indexers, and a management tier (License Master, Monitoring Console, and Deployment Server), in addition to the forwarders. SSL has already been configured for the web interfaces, but I would now like to secure the remaining components and establish SSL-encrypted connections between them as well. The certificates we are using are self-generated. Could you please guide me on how to proceed with securing all internal communications in this setup? Specifically, I would like to know whether I should generate a new certificate for each component and each connection, or if there's a more efficient way to manage SSL across the entire environment. Thank you in advance for your help!
Hi, We have onboarded data from HP Storage, and I am not sure if there is any TA for this technology, or how to properly extract the fields from the logs and then map them to a Data Model. I have many logs there and I'm confused. Thank you in advance.
My team has created a production environment with 6 syslog servers (2 in each site of a 3-site multisite cluster). My question is: should the two syslog servers be active-active, or one active and one standby? Which would be good practice? And is a load balancer needed here for the syslog servers? Currently some app teams are using UDP and some are using TCP; basically these are network logs from network devices. What are the differences between a DNS load balancer and an LTM load balancer, and which is best? Please suggest what good practice would be to achieve this without any data loss. The syslog servers have UFs installed on them, which forward the data to our indexers.
I am new to Splunk administration; please explain the following stanzas. We have a dedicated syslog server which receives the logs from network devices, and a UF installed on that server forwards the data to our cluster manager. These configs are on the cluster manager under manager-apps.