I have a Splunk query which generates output in CSV/table format. I want to convert this to JSON format before writing it to a file. tojson does the job of converting; however, the fields are not in the order I expect. Table output: timestamp,Subject,emailBody,operation --> the resulting JSON output is in the order subject,emailbody,operation,timestamp. How do I get tojson to write fields in this order, or is there an alternate way of getting the JSON output I expect?
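One possible workaround (a sketch, assuming Splunk 8.1+ where the json_object eval function is available) is to build the JSON string yourself so the key order is explicit, instead of relying on tojson:

```
| table timestamp Subject emailBody operation
| eval _raw=json_object("timestamp", timestamp, "Subject", Subject, "emailBody", emailBody, "operation", operation)
| fields _raw
```

json_object should emit the keys in the order you pass them, so the output follows the table order above.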
Hi, I’m trying to enhance the functionality of the "Acknowledge" button in a Splunk IT Service Intelligence episode. When I click it, I want it to not only change the status to "In Progress" and assign the episode to me, but also trigger an action such as sending an email or creating a ticket in a ticketing system. I’m aware that automatic action rules can be set in aggregation policies, but I want these actions to occur specifically when I manually click the "Acknowledge" button. Is there a way to achieve this? Thanks!
Probably a basic question. I have the following data:

600 reason

and this rex:

(?<MetricValue>([^\s))]+))(?<Reason>([^:|^R]+))

What I am getting is 60 in MetricValue and 0 in Reason. I presume that is because the first group matches up to the next non-space but has to give a character back, so MetricValue ends up as 60 and 0 is left in the data for Reason. What is the right way to do this so that I get MetricValue = 600 and Reason = reason?
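For what it's worth, the first group stops early because regex backtracking must leave at least one character for the second group to consume. A sketch that anchors on the digit/space boundary instead (assuming the raw event really is 600 reason):

```
| rex "^(?<MetricValue>\d+)\s+(?<Reason>\S+)"
```

Here \d+ greedily takes all the digits (600), the \s+ consumes the separator, and \S+ captures the remaining word (reason).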
How do I dedup or filter out data with a condition? For example, below I want to filter out rows that contain name="name0". The condition should be able to handle any IPs in the ip field because the IPs could change; in the real data there are a lot more IPs. The name0 rows are not in order. The dedup/filter should not be applied to IPs that don't contain "name0", AND it should not be applied to a unique IP that only has "name0". Thank you for your help.

Data:

ip name location
1.1.1.1 name0 location-1
1.1.1.1 name1 location-1
1.1.1.2 name2 location-2
1.1.1.2 name0 location-20
1.1.1.3 name0 location-3
1.1.1.3 name3 location-3
1.1.1.4 name4 location-4
1.1.1.4 name4b location-4
1.1.1.5 name0 location-0
1.1.1.6 name0 location-0

Expected output:

ip name location
1.1.1.1 name1 location-1
1.1.1.2 name2 location-2
1.1.1.3 name3 location-3
1.1.1.4 name4 location-4
1.1.1.4 name4b location-4
1.1.1.5 name0 location-0
1.1.1.6 name0 location-0

| makeresults format=csv data="ip, name, location
1.1.1.1, name0, location-1
1.1.1.1, name1, location-1
1.1.1.2, name2, location-2
1.1.1.2, name0, location-20
1.1.1.3, name0, location-3
1.1.1.3, name3, location-3
1.1.1.4, name4, location-4
1.1.1.4, name4b, location-4
1.1.1.5, name0, location-0
1.1.1.6, name0, location-0"
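One way to express this condition (a sketch against the makeresults data above): count the rows per ip with eventstats, then drop name0 rows only when that ip also has other rows:

```
| eventstats count as ip_count by ip
| where NOT (name="name0" AND ip_count > 1)
| fields - ip_count
```

An IP that appears once with name0 keeps its row (ip_count=1), while IPs with multiple rows lose only the name0 row.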
Hello, I've recently upgraded to 9.3.0 and the file integrity check showed that /opt/splunk/bin/jp.py doesn't need to be installed, so we deleted it. However, the checker still complains about that file. Is there a way to clear/reset the checker?
When I create a timechart using Dashboard Studio, the visualization only partially loads until I click to open the visualization in a new window; then it loads as expected. We are on Splunk 9.0.5, but I don't see any known issues about this.
The latest enhancements to the Splunk observability portfolio deliver improved SLO management accuracy, better cost and data controls, and simplified GDI for new users.

New In Splunk Observability This Month

SignalFlow Editor for Custom Metrics SLOs
Log Observer Connect Enhancements - SVC Optimization
OpenTelemetry Kubernetes Control Plane Metrics
Token Management Improvements
Metrics Pipeline Management Updates

Learn More About Each of These Enhancements

SignalFlow Editor for Custom Metrics SLOs

Using the SignalFlow editor, Observability Cloud users can now create SLOs based on any metric they are monitoring. The SignalFlow editor, which enables users to define data streams for both good and total events, gives users full flexibility and control over their SLI definitions, including the ability to use histogram data. This new feature is available to all Observability Cloud users at no additional cost.

To create a new SLO, navigate to the Detectors & SLOs section from the left-hand menu in the Observability Cloud platform and select the Service Level Objectives tab. Next, click the Create SLO button to open the wizard and select Custom Metric as the Metric Type.

Log Observer Connect Enhancements - SVC Optimization

With the latest Log Observer Connect improvements, you gain more control over your SVC utilization. Decide when you run your log searches with "pause/play" and "run search" buttons, in addition to filters. By default, you're now limited to 150K logs, but you can change this to unlimited depending on your needs. To limit further log activities, we're stopping search jobs triggered by Related Content after 2 minutes of inactivity, and after 15 minutes for other sources such as the UI or Field Summary. This functionality is available to Log Observer Connect/Unified Identity customers only.
OpenTelemetry Kubernetes Control Plane Metrics

We've enabled the collection of Kubernetes control plane metrics with the OpenTelemetry Prometheus receivers that target specific Prometheus endpoints. Today, control plane metrics are collected with the Smart Agent receiver in the Splunk Distribution of the OpenTelemetry Collector. With this change, you can now collect these metrics for the different control plane components with the OpenTelemetry Prometheus receiver instead. This functionality is automatically available to all Observability Cloud users who have upgraded to the Splunk distribution of the Collector (v0.109.0+) by enabling the feature gate useControlPlaneMetricsHistogramData.

Metrics Pipeline Management Updates

Data retention for Archived Metrics has been extended from 8 to 31 days to facilitate long-term data and historical trend analysis. Users can also customize their specific restoration time windows when creating exception rules for additional flexibility. Additionally, customers can now use Terraform to route metrics to Archived Metrics and create exception rules (select a subset of metrics to route to the real-time tier instead of the archival tier).

Also Coming Soon - Token Management Improvements

Admin and Power users will get a new and improved Token Management interface, with long-lived tokens and improved token visibility and rotation, all within a new design aligned with Splunk Cloud.
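If you run the Collector binary directly, feature gates are toggled with the Collector's --feature-gates command-line option; the config file name below is a placeholder, and Helm-chart deployments expose their own setting for this, so treat it as a sketch:

```
otelcol --config=agent_config.yaml --feature-gates=useControlPlaneMetricsHistogramData
```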
Hello everyone, I have a table (generated from stats) that has several columns, and some values of those columns are "X". I would like to count those X's and total them in the last column of the table. How would I go about doing that? Here is an example table, and thank you!

Field1 | Field2 | Field3 | Field4 | Field5 | Total_Xs
X | X | Foo | Bar | X | 3
Foo2 | X | Foo | Bar | X | 2
X | X | X | Bar | X | 4
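A sketch using foreach, assuming the column names really are Field1 through Field5 as in the example:

```
| eval Total_Xs=0
| foreach Field1 Field2 Field3 Field4 Field5
    [ eval Total_Xs = Total_Xs + if('<<FIELD>>'=="X", 1, 0) ]
```

A wildcard like foreach Field* also works if every matching column should be checked.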
Sometimes I set myself SPL conundrum challenges just to see how to solve them. I realised I couldn't do something I thought would be quite straightforward. For the dummy data below I want a single-row resultset which tells me how many events there are of each UpgradeStatus and how many events in total, i.e.

Total Completed Pending Processing
11 6 3 2

I don't know in advance what the different values of UpgradeStatus might be, and I don't want to use addtotals (this is the challenge part). I came up with the solution below, which kinda "misuses" xyseries (which I'm strangely proud of). I feel like I'm missing a more straightforward solution, other than addtotals. Anyone up for the challenge? Dummy data and solution (misusing xyseries) follows...

| makeresults format=csv data="ServerName,UpgradeStatus
Server1,Completed
Server2,Completed
Server3,Completed
Server4,Completed
Server5,Completed
Server6,Completed
Server7,Pending
Server8,Pending
Server9,Pending
Server10,Processing
Server11,Processing"
| stats count by UpgradeStatus
| eventstats sum(count) as Total
| xyseries Total UpgradeStatus count
An extension of this: https://community.splunk.com/t5/Splunk-Search/Looking-at-yesterdays-data-but-need-to-filter-the-data-to-only/m-p/696758#M236798

I've created a dashboard on the above with an input that adds the timewrap line when the option selected is yes, and nothing when the option selected is no. The issue I am having is that when no is selected and I pick smaller time windows, the graph looks like the following. Below I selected 4 hours, but how can I show only the last 4 hours and not the previous window?

Query is as follows:

index=foo
  [| makeresults
   | fields - _time
   | addinfo
   | eval day=mvrange(0,2)
   | mvexpand day
   | eval earliest=relative_time(info_min_time,"-".day."d")
   | eval latest=relative_time(info_max_time,"-".day."d")
   | fields earliest latest]
| timechart span=1m sum(value) as value
| eval _time=_time
Hello Splunkers. How can I utilize a lookup in a correlation search, showing the detected keyword in the search result? It's a requirement that the analyst shouldn't have the capability to view lookups. Thanks in advance.
Hi Splunk community! I need to filter events from the Splunk_ta_Windows application by the EventCode, Account_Name and Source_Network_Address fields. Tell me, in what form should props.conf and transforms.conf be written, and in what folder should they be located?
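For context, the usual pattern is a nullQueue transform applied at parsing time, placed on the indexers or a heavy forwarder, e.g. in $SPLUNK_HOME/etc/apps/<your_app>/local/. The stanza name and the EventCode/Account_Name values below are placeholders, so treat this as a sketch:

```
# props.conf
[WinEventLog:Security]
TRANSFORMS-filter_win = drop_unwanted_win_events

# transforms.conf
[drop_unwanted_win_events]
REGEX = (?ms)EventCode=4662.*?Account_Name=\s*SYSTEM
DEST_KEY = queue
FORMAT = nullQueue
```

Events matching the REGEX are routed to the nullQueue and never indexed; everything else passes through unchanged.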
After upgrading Splunk from version 8 to 9, I've started to receive messages: "The Upgrade Readiness App detected 1 app with deprecated Python: splunk-rolling-upgrade". I can't find this app on Splunkbase. As far as I understand, it's a Splunk built-in app? Should I delete it, or how can I resolve this issue? Please help.
I have to parse the timestamp of JSON logs and I would like to include subsecond precision. My JSON events start like this:

{
  "instant" : {
    "epochSecond" : 1727189281,
    "nanoOfSecond" : 202684061
  },
  ...

Thus I tried this config in props.conf:

TIME_FORMAT=%s,\n "nanoOfSecond" : %9N
TIME_PREFIX="epochSecond" :\s
MAX_TIMESTAMP_LOOKAHEAD=500

Unfortunately, that did not work. What is the right way to parse this timestamp with subsecond precision?
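One alternative (a sketch; the sourcetype name is a placeholder, and it assumes your Splunk version's INGEST_EVAL supports json_extract) is to skip TIME_FORMAT entirely and compute _time at ingest from both JSON fields:

```
# transforms.conf
[json_nano_time]
INGEST_EVAL = _time = json_extract(_raw, "instant.epochSecond") + json_extract(_raw, "instant.nanoOfSecond") / 1000000000

# props.conf
[my_json_sourcetype]
TRANSFORMS-settime = json_nano_time
```

This sidesteps the strptime limitation of matching literal text between the two numbers.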
How can we send a file as input to an API endpoint from custom SPL commands developed for both Splunk Enterprise and Splunk Cloud, ensuring the API endpoint returns the desired enrichment details?
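Custom search commands are Python under the hood, so one approach is to build the multipart upload inside the command with only the standard library (a sketch; the endpoint URL, the form field name "file", and the JSON reply shape are assumptions, and Splunk Cloud additionally requires the destination to be reachable from the search head):

```python
import json
import urllib.request
import uuid


def build_multipart(field_name: str, filename: str, data: bytes):
    """Build a multipart/form-data body and content type for a single file upload."""
    boundary = uuid.uuid4().hex
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field_name}"; filename="{filename}"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
    ).encode("utf-8")
    tail = f"\r\n--{boundary}--\r\n".encode("utf-8")
    return head + data + tail, f"multipart/form-data; boundary={boundary}"


def post_file(url: str, path: str):
    """POST a local file to the API endpoint and return the parsed JSON reply."""
    with open(path, "rb") as f:
        body, content_type = build_multipart("file", path, f.read())
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": content_type}, method="POST"
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```

In a streaming custom command you would call post_file once per batch and merge the returned enrichment fields into the events before yielding them.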
We have a Microsoft SQL database stored procedure (SP) with a few hundred lines. When we tried to analyze the content of this SP under the controller tab Databases | Queries | Query Details | Query, the query text got truncated. Is there a setting that can increase the captured SQL text size? The controller build is 24.6.3; the DB Agent version is 23.6.0.0.
Splunk Training and Certification content has moved! C’mon over to the Splunk Training and Certification Community Site for the latest ways you can grow your minds and your careers! These are some blogs you may have missed while you were cruising the rest of the Splunk Community.

The Splunk Education Smartness Series

Sometimes it’s hard to visualize how far we can take our careers – until we hear about how others have done it. So, if you’re looking for inspiration and best practices for growing your career with Splunk, check out the first three interviews of the series.

Meet Tom, Pedro, and Brandon

Tick tock! Grab a seat at the last minute

When you’re done scrolling TikTok, we’d like to remind you that (tick tock) time’s a-wastin’ and life’s too short to procrastinate. Splunk Last Minute Learning is motivation du jour, designed so you can quickly get the technical training you might be putting off.

Check out our Last Minute Learning Opportunities

Yes, training units do expire | You’ve got a year

Whether it’s hummus, a ham sandwich, or a human, almost everything in this world has an expiration date. And Splunk Education Training Units are no exception – always expiring one year from the date of purchase. Don’t let these slip by without using them to gain access to our valuable instructor-led training or eLearning with labs.

Get the deets on using those training units

Get ready for the new release party

Just like the angsty buzz around a new Taylor Swift album, we like to think the new course releases from Splunk Education generate a similar – albeit techy – vibe. Whether you're a fan of fast-paced instructor-led sessions or prefer learning at your own, sweet pace, our new releases are sure to be top hits.

Add these new courses to your resume

Don’t miss out – visit the Splunk Training and Certification Site regularly. It’s the place where we continue to share innovative ways you can expand your knowledge and advance your career with Splunk.
We are using the Splunk forwarder to forward Jenkins data to Splunk. I noticed that Splunk does not display all the data. Here is the example:

index=jenkins_statistics (host=abc.com/*) event_tag=job_event job_name="*abc/develop*"
| stats count by job_name, type

This returns completed = 74 and started = 118. Ideally, whatever is started should also be completed, so can you help me figure out what the problem could be?
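To see which runs started but never logged a completion, you could group by run (a sketch; the build_number field name is an assumption about the Jenkins add-on's events, so substitute whatever field identifies a run in your data):

```
index=jenkins_statistics (host=abc.com/*) event_tag=job_event job_name="*abc/develop*"
| stats values(type) as types by job_name, build_number
| where isnull(mvfind(types, "completed"))
```

Each remaining row is a run with a started event but no matching completed event, which narrows down whether events are dropped, truncated, or arriving late.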
September 2024 Edition

Hayyy Splunk Education Enthusiasts and the Eternally Curious! We’re back with this month’s edition of indexEducation, the newsletter that takes an untraditional twist on what’s new with Splunk Education. We hope the updates about our courses, certification program, and self-paced training will feed your obsession to learn, grow, and advance your careers. Let’s get started with an index for maximum performance readability: Training You Gotta Take | Things You Needa Know | Places You’ll Wanna Go

Training You Gotta Take

Just your style | Over 50 free eLearning courses

If you’re searching for a used coffee table to refurbish, it’s possible you could find one on Facebook Marketplace for free. But if you’re searching for career-changing tech courses, you’ll definitely find more than 50 on the Splunk Education website for free. We’re here to help you learn on your own terms in your own style so you can take your career to the next level – no money required. So, start exploring some no-cost Splunk eLearning courses like Using Fields, Intro to Dashboards, Scheduling Reports and Alerts, and many more. We like to think we put the free in freestyle.

Gotta learn for free | Over 50 no-cost eLearning courses

The Foundation | Fundamentals of Metrics Monitoring in Splunk Observability Cloud

When the lecture hall is your office, the whiteboard is your laptop, and the professor is a Splunk training expert – that’s how we roll in Splunk Education. Our virtual instructors teach all the concepts using lectures and scenario-based hands-on activities, which is how you will experience “Fundamentals of Metrics Monitoring in Splunk Observability Cloud.” This course serves as the foundation for all other Splunk Observability courses; it is targeted at DevOps/SRE/Observability teams, senior on-call engineers, onboarding and monitoring strategists, and developers, and provides a fundamental understanding of metrics monitoring in Splunk Observability.
Start here and begin your observability journey with a solid foundation.

Gotta see it to believe it | Observability makes it clear

Things You Needa Know

How Pedro advanced his career | SMARTNESS Series, Episode 2

Sometimes it’s hard to imagine how far we can take our careers – until we hear about how others have done it. So, if you need inspiration to help you grow your career with Splunk, maybe Episode 2 of our Splunk Education SMARTNESS series will show you what’s possible. This episode features Pedro Borges, who shares how he went from being a skeptic to a believer by learning Splunk through training, hands-on experience, and by tapping into the brilliant Splunk community. “Honestly, I’m running out of certifications to take!” See where curiosity and a growth mindset can take you.

Needa be inspired | Meet Pedro

Tick tock | Grab a seat at the last minute

When you’re done scrolling TikTok, we’d like to remind you that (tick tock) time’s a-wastin’ and life’s too short to procrastinate. Splunk Last Minute Learning is motivation du jour, designed so you can quickly get the technical training you might be putting off. And, since Splunk Training Units expire one year after purchase, taking a class at the last minute may be just the ticket to ensuring those training units don’t go unused. Simply sign up using your Splunk.com account and pay with your company training units (or a credit card if training units are not part of your plan). When you need that dopamine hit, just scroll our course catalog and imagine the places you can go with Splunk.

Gotta get training in | Last minute instructor-led courses

Places You’ll Wanna Go

Splunk Lantern | How to boost LLM observability with Splunk

Raise your hand if you’ve used ChatGPT to write a carefully crafted response to a sensitive email. Well then, you’ve experienced just one type of use case for large language models.

This month, Splunk Lantern – the customer success center that provides advice from experts on key use cases and how to optimize Splunk – shares another use case. Thanks to the expertise of Derek Mitchell, Global Observability Specialist at Splunk, you can access a step-by-step guide that demonstrates how OpenTelemetry can be used to view LLM data in Splunk Observability Cloud. You need to know what questions to ask in order to find the answers you seek.

Go further with Lantern | LLM data is key to o11y

Back to school | STEPtember is here

Changing leaves, crisp, cool days, and back to school mean it’s autumn in North America! More specifically, it’s STEPtember in Splunk Education. We’re all about providing ways you can learn and excel so you can optimize Splunk for your organization, and we have a learning platform designed with you in mind. The Splunk Training & Enablement Platform (STEP) is the place to view our entire technical training catalog, enroll in courses, access in-progress eLearning, and review completed training and course completion certificates. Grab a PSL, fire up your HP, and STEP into a world of opportunities.

Go to tech school | STEPtember, where school is always in session

Find Your Way | Learning Bits and Breadcrumbs

Go Chat | Join our Community User Group Slack Channel
Go Stream It | The Latest Course Releases (Some with Non-English Captions!)
Go Last Minute | Seats Still Available for ILT
Go Deep | Register for Security and Observability Tech Talks
Go to STEP | Get Upskilled
Go Discuss Stuff | Join the Community
Go Social | LinkedIn for News
Go Index It | Subscribe to our Newsletter

Thanks for sharing a few minutes of your day with us – whether you’re looking to grow your mind, career, or spirit, you can bet your sweet SaaS, we got you. If you think of anything else we may have missed, please reach out to us at indexEducation@splunk.com.

Answer to Index This: F I V E
Remove the 2 letters F and E from FIVE and you have IV.
I went through the process of stopping Splunk on all components and untarring the installation file to the /opt directory with the -C option. After completing the untar, I ran the command and accepted the upgrade and license. All went well until the end, when I got: WARNING: web interface does not seem to be available. Everything else says done.

I checked splunkd.log and I see this message:

ERROR ClusteringMgr [60815 MainThread] - pass4SymmKey setting in the clustering or general stanza of server.conf is set to empty or the default value. You must change it to a different value.

I checked the server.conf file and compared it with the backup I made of the entire Splunk etc/system/local directory, and the config in the files is the same.
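For reference, the setting the error is complaining about looks like this in server.conf (a sketch; the secret values are placeholders you choose, entered in plaintext, and splunkd encrypts them on the next restart):

```
# server.conf
[general]
pass4SymmKey = <your-non-default-secret>

# and, if this instance is part of an indexer cluster, the cluster-wide key:
[clustering]
pass4SymmKey = <your-cluster-secret>
```

Note that the clustering key must match across all cluster members, so coordinate the change rather than setting it on one node alone.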