I have an inputlookup that has a list of pod names that we expect to be deployed to an environment. The list would look something like:

pod_name_lookup,importance
poda,non-critical
podb,critical
podc,critical

We also have data in Splunk that gives us pod_name, status, and importance. Results from the search below would look like this:

index=abc sourcetype=kubectl | table pod_name, status, importance

pod_name status importance
poda-284489-cs834 Running non-critical
podb-834hgv8-cn28s Running critical

Note that podc was not found.

I need to compare the results from this search to the list from the inputlookup and show that podc was not found in the results and that it is a critical pod. I also need to count how many critical and non-critical pods are not found, as well as table the list of missing pods.

I have tried several iterations of searches but haven't come across one that allows me to compare a search result to an inputlookup using a partial match. eval result=if(like(pod_name_lookup...etc is close, but like() requires a literal pattern rather than the wildcard value of a field. Thoughts?
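One possible approach, offered as a minimal sketch rather than a tested solution: bring the expected pods in from the lookup alongside the indexed results and keep only the names that are never matched. This assumes the lookup file is literally named pod_name_lookup.csv and that running pods are always named <expected name>-<suffix>; both the rex and the file name are assumptions to adjust for your environment.

index=abc sourcetype=kubectl
| rex field=pod_name "^(?<expected_pod>[a-zA-Z0-9]+)-"
| stats count AS found BY expected_pod
| append
    [| inputlookup pod_name_lookup.csv
     | rename pod_name_lookup AS expected_pod
     | eval found=0]
| stats sum(found) AS found, values(importance) AS importance BY expected_pod
| where found=0

Appending | stats count BY importance to the same search would then give the counts of missing critical and non-critical pods.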
We're so glad you're here! The Splunk Community is a place to connect, learn, give back, and have fun! It features Splunk enthusiasts from all kinds of backgrounds, working at just about every kind of organization and in a variety of roles and functions. If you've landed here, you belong here. And we welcome you! This space is home to several community programs and is supported both by a team at Splunk and by a growing group of our community members, including the SplunkTrust. Please connect with any and all of us; we've made it pretty easy to tell who's who by their Ranks and profiles.

Meet the Splunk Community Team!

Meet the folks who make up our Community Team! If you ever have any questions, concerns, or just want someone to digitally high-five, we're here for you!

Anam S, Community Manager (Splunk Answers)
Renee W, Sr. Community Manager (SplunkTrust)
Brian W, Sr. Community Technology Manager
Gretchen F, Sr. Content Manager, Community (Blogs & Announcements)
Kara D, Associate Community Manager (User Groups)
Jenny B, Community Specialist (Slack)

Looking for a spot to introduce yourself? Drop us a comment below and let us know where you're joining us from!

To get started...

Have a look around! You can navigate through our community and programs by using the main navigation, and you can learn a little more about specific programs and areas in this post.
Review our Community Guidelines! These spell out some of our expectations and requirements of all community members, so be sure to take a few minutes to review them, and be sure to abide by them.
Ask questions! Splunk Answers is the place to ask questions, get answers, and find technical solutions for any product in the Splunk portfolio.
Join us on Slack! There, you'll have even more opportunities to ask questions, get answers, and connect with your fellow Splunk practitioners.

Again, we're so glad you're here!

-- Splunk Community Team
Hi All,

We have Windows event logs and other application logs ingested into Splunk.

There is no problem with the Windows event logs, but for our application logs the ingestion stops suddenly and then starts reporting again, even though the log file on Windows is continuously updated with recent entries. The file's modified time does not get updated because of a Windows behavior, but the modified time does not appear to be the issue: the logs start rolling in again even when the modified time is unchanged while the file already contains the latest entries.

We are currently using Splunk forwarder version 9.0.4. Can someone please help in triaging this issue? The problem affects only one specific source on this Windows host; other sources (Windows event logs) are flowing in properly.
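Not an answer, but a hedged triage sketch: the forwarder's own internal logs usually show why file tailing pauses. Something along these lines may surface tailing messages from that host around the time the application log stops (the host filter is a placeholder to replace with the affected forwarder's name):

index=_internal sourcetype=splunkd component IN (TailReader, TailingProcessor, WatchedFile, BatchReader) host="your-windows-host"
| table _time, host, component, log_level, _raw

Running splunk list inputstatus on the forwarder itself is another common way to see what the tailing processor currently thinks about that file.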
I need to identify hosts with errors, but only in block mode.

My SPL:

index=firewall event_type="error"
    [search index=firewall sourcetype="metadata" enforcement_mode=block]
| dedup host
| table event_type, host, ip

Each search works separately, but combined it sits at "Parsing job" with no result for a long time. Thank you
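A minimal sketch of one way this is often written, assuming host is the field the two searches share: have the subsearch return only the host field, so it expands into a list of host=... conditions for the outer search:

index=firewall event_type="error"
    [ search index=firewall sourcetype="metadata" enforcement_mode=block
      | dedup host
      | fields host ]
| dedup host
| table event_type, host, ip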
After configuring the Content Pack for VMware, I repeatedly get "duplicate entity aliases found". We are also collecting with TA-Nix. How can I fix the duplicate entity alias issue? I am running ITE 4.18.1 and the Splunk App for Content Packs 2.10.
So I am creating a dashboard and I keep getting this error:

Error in 'where' command: The expression is malformed. Expected ).

This is what I have:

| loadjob savedsearch="name:search:cust_info"
| where AccountType IN ($AccountType$)

I created a multiselect filter on AccountType and I want the SPL to filter on the selected values. What could I be missing, or is there another way to filter on AccountType?
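One hedged possibility, assuming a Simple XML dashboard: a multiselect token expands to its raw values by default, so where AccountType IN ($AccountType$) receives something like gold silver rather than "gold", "silver" and fails to parse. Configuring the input to quote and comma-separate its values often resolves this; the populating search and field names below are placeholders to adapt:

<input type="multiselect" token="AccountType" searchWhenChanged="true">
  <label>Account Type</label>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter>, </delimiter>
  <fieldForLabel>AccountType</fieldForLabel>
  <fieldForValue>AccountType</fieldForValue>
  <search>
    <query>| loadjob savedsearch="name:search:cust_info" | stats count by AccountType</query>
  </search>
</input>

With the token quoted and delimited this way, the original where ... IN ($AccountType$) clause should parse as written.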
I am getting this error:

Error in 'EvalCommand': Type checking failed. '/' only takes numbers.

Here are the lines of SPL:

| stats count as "Count of Balances", sum(BALANCECHANGE) as "SumBalances" by balance_bin
| eventstats sum("SumBalances") as total_balance
| eval percentage_in_bin = round(("SumBalances" / total_balance) *100, 2)

What could be causing this? Is there a way to solve this without the / symbol?
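For what it's worth: in eval, a double-quoted name is a string literal rather than a field reference, so "SumBalances" / total_balance tries to divide a string, which matches the error. A minimal sketch of the same pipeline with the field referenced directly (single quotes, as in 'SumBalances', would also work and are needed for field names that contain spaces):

| stats count AS "Count of Balances", sum(BALANCECHANGE) AS SumBalances BY balance_bin
| eventstats sum(SumBalances) AS total_balance
| eval percentage_in_bin = round((SumBalances / total_balance) * 100, 2)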
I have a multivalue field named errortype. Its value counts show "file not found" as 4 and empty as 2. I want to exclude the empty values from the multivalue field.
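A minimal sketch, assuming the multivalue field is literally named errortype: mvfilter() keeps only the values for which the expression is true, so the empty entries can be dropped like this:

| eval errortype=mvfilter(errortype!="" AND isnotnull(errortype))

After that, mvcount(errortype) reflects only the non-empty values.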
Elevating Digital Service Excellence: The Synergy of Real User Monitoring and Application Performance Monitoring

In today's digital landscape, the adoption of Splunk Real User Monitoring (RUM) and Splunk Application Performance Monitoring (APM) is pivotal for organizations aiming to enhance their web presence and user experience. RUM is diligently employed to monitor the genuine interactions of users with web and mobile applications, providing a granular view of user experience in real time. Concurrently, APM is harnessed to investigate application performance, offering critical insights into transaction speeds and system health. Both are integral components of an observability suite that caters to the multifaceted nature of modern digital ecosystems, which encompass on-premise, hybrid, and multi-cloud environments. The integration of RUM and APM is showcased as a strategic move within a growth engineering team, underscoring its significant role in improving marketing operations and public-facing websites. By using observability tools, we can take a preemptive and informed approach to delivering digital services. This shifts incident management from reactive to proactive, showcasing digital resilience and excellence.

Understanding Splunk RUM and APM in Observability Practices

Driving Customer Experience With Splunk Observability: A Growth Engineering Team's Story
Learn how the growth engineering team at Splunk adopted Splunk Observability to improve incident detection, resolution, and customer experience.

Real User Monitoring (RUM) and Application Performance Monitoring (APM) are essential components of Splunk's observability suite. Their adoption by Splunk's growth engineering team signifies a strategic move towards enhancing the company's marketing operations and public-facing websites. RUM is integral for monitoring the actual experiences of users interacting with web and mobile applications, while APM focuses on the performance of applications, offering insights into transaction speeds, error rates, and system health. Splunk's observability products offer a unified solution for monitoring digital ecosystems that span on-premise, hybrid, and multi-cloud environments, addressing the complexities that come with integrating multiple technologies and services. By adopting Splunk observability, organizations can achieve digital resilience, enabling them to recover swiftly from disruptions, adopt new operating models quickly, and ensure reliable and outstanding digital experiences for their customers.

The growth engineering team within Splunk's marketing organization is a prime example of how internal teams are leveraging these tools to improve efficiency, detect incidents faster, and align business priorities. The team's journey to utilizing RUM and APM has led to impressive results, such as faster page load times, increased engineering efficiency, and significant improvements in core web vitals. These improvements are a testament to the power of a comprehensive observability strategy, which can transform reactive incident management into a foresighted approach to digital service delivery.

The Growth Engineering Team's Path to Digital Resilience

Leveraging Splunk Observability for Complex Technology Stacks
Explore how Splunk transitions its AEM stack to the cloud and leverages Splunk Observability to manage complex technology ecosystems and deliver exceptional customer experiences.
The Growth Engineering Team at Splunk has embarked on a digital resilience journey, with the mission to maintain public-facing websites and internal portals. Their objective is to provide a world-class customer experience by leveraging the power of Splunk products. To achieve this goal, the adoption of observability tools became imperative to address inefficiencies and the apparent lack of service health visibility. With the introduction of Real User Monitoring (RUM) and Application Performance Monitoring (APM), the team is now equipped to detect incidents, isolate and prioritize events, and accelerate root cause analysis effectively. This strategic move has enabled the team to transform from a reactive to a forward-thinking approach, enhancing their ability to recover quickly from disruptions and adopt new operating models seamlessly.

The adoption of Splunk's observability tools has proven to be a critical step in the team's journey towards digital resilience. Observability has provided the team with end-to-end visibility, allowing them to monitor the health of their services and correlate events across different teams and microservices. This unified perspective is essential for resolving issues rapidly and efficiently. As a result of these efforts, the team has reported significant improvements in key performance indicators, including faster page load times, increased engineering efficiency, and improved core web vitals.

The team's utilization of RUM has been particularly impactful, allowing them to track automated user sessions across websites and applications. For instance, when trial sign-ups on the website were failing, RUM enabled the engineering team to quickly identify and resolve the issue, minimizing the potential impact on users. Similarly, APM has been instrumental in ensuring the performance of business-critical workflows. During a product release, APM allowed the team to address an outage alert swiftly, maintaining a 99.9% uptime and boosting engineering productivity by 50%. Overall, the integration of RUM and APM has set the stage for a more resilient digital presence, empowering the Growth Engineering Team to deliver superior customer experiences and drive business success. This journey towards digital resilience serves as a testament to the power of observability tools in optimizing user experience and operational efficiency.

Maximizing Digital Resilience: The Power of APM and RUM in Action

Comprehensive Application Performance Monitoring and User Experience Analysis With Splunk APM and RUM
Discover how Splunk APM provides distributed tracing and detailed error capture, while RUM offers real-time user experience monitoring and error tracking.

Application Performance Monitoring (APM) and Real User Monitoring (RUM) are crucial for ensuring service health and optimizing user experiences. Internally, Splunk leverages its own suite of products, including APM and RUM, to monitor and enhance its service offerings. The utilization of these tools within the growth engineering team is shared, demonstrating the tangible advantages of APM and RUM. APM and RUM collectively provide comprehensive visibility into how services perform and how users interact with applications. By integrating both front-end and back-end monitoring, teams can rapidly identify and resolve issues, often before they significantly impact users.
These tools offer capabilities ranging from detailed waterfall charts of user interactions to real-time correlation of traces, which are instrumental in accelerating troubleshooting efforts. Splunk's internal stories serve as a testament to the effectiveness of APM and RUM. One case study highlights the ability of RUM to detect and address a failure in website sign-up forms, allowing for a resolution three times faster and thereby preserving the company's brand reputation. Another example showcases how APM enabled the back-end engineering team to maintain a 99.9% uptime and improve engineering productivity by 50% despite increased traffic.

The integration of APM with RUM is particularly beneficial, as it connects front-end user experiences with back-end service performance, providing a holistic view of the system. This end-to-end visibility is crucial for monitoring complex ecosystems, such as Splunk.com, with its multitude of endpoints. By using APM and RUM in unison, teams can now monitor and optimize Splunk's digital ecosystem more efficiently. The success of APM and RUM integration at Splunk is quantifiable, with impressive key performance indicators (KPIs) such as 50% faster page load times, a 25% increase in engineering efficiency, and a 60% improvement in core web vitals. These statistics underline the transformative impact of Splunk's observability tools in creating resilient digital experiences and fostering a forward-thinking approach to service health management.

Understanding APM and RUM Capabilities for Optimized User Experience

Maximizing User Satisfaction and Business Goals With APM and RUM Correlation
Discover how the correlation of Application Performance Monitoring (APM) and Real User Monitoring (RUM) provides end-to-end visibility, effective root cause analysis, and improved performance optimization.

Enhancing Digital Experiences with APM and RUM

The implementation of Splunk Application Performance Monitoring (APM) and Splunk Real User Monitoring (RUM) within growth engineering teams has led to significant advancements in page load times and engineering efficiency. Through the implementation of these tools, businesses have observed a significant enhancement in core web vitals and the ability to proactively identify issues. This shift towards a more foresighted approach has been facilitated by the adoption of observability, resulting in a transformation of operational strategies.

The integration of APM and RUM has empowered teams to gain comprehensive insights into both front-end and back-end systems, enabling a unified approach to issue resolution. This has led to a more efficient and effective method for addressing incidents, with teams now able to quickly zoom in on the performance of critical workflows and preemptively address potential issues before they escalate. The success stories shared emphasize how APM and RUM have revolutionized the monitoring and optimization of digital services, ensuring high availability and optimal performance. These tools not only provide real-time visibility into user experiences but also facilitate faster troubleshooting and resolution, ultimately enhancing the digital experience for both users and the organization.

Conclusion

In conclusion, the implementation of Application Performance Monitoring (APM) and Real User Monitoring (RUM) is demonstrated to be essential for achieving digital resilience. Enhanced user experiences and operational efficiency are achieved through the adoption of these monitoring tools.
Organizations are empowered to proactively identify and resolve issues, thereby maintaining high service uptime and improving core web vitals. The integration of APM and RUM enables a seamless correlation between user interactions and application performance, offering a comprehensive observability solution. This approach results in significant improvements in page load times and engineering productivity. By leveraging the capabilities of APM and RUM, a transformation in incident management is facilitated, transitioning from a reactive to a preemptive stance in digital service delivery.

Speakers
Sudhaker Adusumilly, Senior Director, Head of Growth Engineering, Splunk
Sandeep Kampa, Sr DevOps Engineer, Growth Engineering, Splunk

Looking for More? Watch the full Demo here

In this demo, Sandeep Kampa, Sr DevOps Engineer at Splunk, discusses the powerful capabilities of Splunk APM and RUM, demonstrating how they can revolutionize application performance and user experience. He showcases key features such as service maps, error tracking, and the correlation between APM and RUM for comprehensive front-end and back-end analysis. The walkthrough includes a practical example of troubleshooting a real-world issue with the integrated tools, highlighting their ability to reduce resolution time and improve operational efficiency.

Watch the full Tech Talk
I'll try to explain it with a basic example. As the output of a stats command I have:

detection   query
search1     google.com yahoo.com
search2     google.com bing.com

I want to find which queries are not being detected by both search1 and search2. Alternatively, getting rid of the queries that appear in both searches would also work. For example, search1 detects yahoo.com whereas search2 doesn't, and vice versa for bing.com. I thought about grouping by query instead of by search, but the problem is I have dozens or even hundreds of queries. Any thoughts? Cheers
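One hedged sketch, assuming the underlying events carry fields named detection and query as in the example: group by query instead of by detection and count distinct detections, so the asymmetric queries fall out no matter how many there are. Replacing the final stats of the original search with something like:

| stats values(detection) AS detections, dc(detection) AS detection_count BY query
| where detection_count < 2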
Hi Splunkers, I have a question about underscores in paths in props.conf. Suppose, in my props.conf, I have:

[source::/aaa/bbb/ccc_ddd]

As you can see, the path contains an underscore. Could this be a problem? I mean: can I use the underscore as-is, or do I have to escape it with a backslash?
I wonder whether a Heavy Forwarder can be the intermediate instance between 1000 Universal Forwarders and 1000 Indexers. Assume hardware resources are unlimited; the question is only about the configuration. Any documentation or references would be a big help. Thank you very much!
Hello All, our log flow from FortiGate to Splunk is as follows: FortiGate Analyzer > Syslog server with UF > Deployment server > Search Head / Indexer. Kindly suggest how I can get the logs using the Fortinet add-on on the indexer. Will I have to install the Fortinet add-on on the syslog server UF as well? And what data source needs to be selected on the indexer?
Hi, I am trying to get the execution count based on the parent IDs across two different data sets. Could you please review and suggest? I would like to see the execution count between sourcetype=cs and sourcetype=ma; only the field ParentOrderID is common between the cs and ma sourcetypes. Note: close to ~10 million events are loaded into Splunk daily, and there will be about 4 million unique executions. Also, the join query sometimes gets auto-canceled.

SPL:

index=india sourcetype=ma NOT (source=*OPT* OR app_instance=MA_DROP_SESSION OR "11555=Y-NOBK" OR fix_applicationInstanceID IN(*OPT*,*GWIM*)) msgType=8 (execType=1 OR execType=2 OR execType=F) stream=Outgoing app_instance=UPSTREAM "clientid=XAC*"
| dedup fix_execID,ParentOrderID
| stats count
| join ParentOrderID
    [ search index=india sourcetype=cs NOT (source=*OPT* OR "11555=Y-NOBK" OR applicationInstanceID IN(*OPT*,*GWIM*)) msgType=8 (execType=1 OR execType=2 OR execType=F) app_instance=PUBHUB stream=Outgoing "clientid=XAC" "sourceid=AX_DN_XAC"
    | dedup execID,ParentOrderID
    | stats count]

Thanks,
Selvam.
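A hedged alternative sketch rather than a drop-in fix: join is subject to subsearch limits (a likely cause of the auto-cancellation), and as written the stats count before the join removes ParentOrderID, so there is nothing left to join on. A single search over both sourcetypes grouped by ParentOrderID avoids join entirely; the sketch below keeps only the shared filters, so the per-sourcetype conditions from the original search still need to be added back:

index=india (sourcetype=ma OR sourcetype=cs) msgType=8 (execType=1 OR execType=2 OR execType=F) stream=Outgoing
| eval exec_key=coalesce(fix_execID, execID)
| dedup sourcetype exec_key ParentOrderID
| stats dc(sourcetype) AS sourcetypes_seen, count AS executions BY ParentOrderID
| where sourcetypes_seen=2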
Why do I get empty results when I query the REST API (results) endpoint from Python? When I use the REST API (events) endpoint in Python I also get a response like this. For your information, the SID is already successfully retrieved by the Python program, and when I use a curl command to query the job (curl -k -u admin:pass https://localhost:8089/services/search/v2/jobs/mysearch_02151949/results) the results are shown on the screen without any error. Can you help me with this case? Thank you
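A hedged Python sketch of one common cause: the results endpoint returns an empty set while the job is still running, so polling the job until it is done before requesting results may be all that is missing. The host, credentials, and SID below are placeholders taken from the curl example; this uses the requests library:

import time
import requests

BASE = "https://localhost:8089"   # management port, as in the curl example
AUTH = ("admin", "pass")          # placeholder credentials
SID = "mysearch_02151949"         # the job SID

# Poll the job until it is done; /results is empty while the job is still running.
while True:
    r = requests.get(f"{BASE}/services/search/v2/jobs/{SID}",
                     params={"output_mode": "json"}, auth=AUTH, verify=False)
    r.raise_for_status()
    content = r.json()["entry"][0]["content"]
    if str(content.get("isDone")).lower() in ("1", "true"):
        break
    time.sleep(2)

# Fetch the finished results as JSON; count=0 asks for all rows.
resp = requests.get(f"{BASE}/services/search/v2/jobs/{SID}/results",
                    params={"output_mode": "json", "count": 0},
                    auth=AUTH, verify=False)
resp.raise_for_status()
print(resp.json().get("results", []))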
In an indexer cluster environment, I set the following stanza configuration in the deployment server's serverclass.conf file:

[Server class: splunk_indexer_master_cluster]
stateOnClient = noop
Whitelist = <ClusterManagerA>

But afterwards the _cluster folder under manager-app disappeared, along with the indexes.conf inside it. Fortunately, indexes.conf remained in the cluster peers' app, so this was not a problem. If I want to use stateOnClient = noop, how should I maintain the indexes.conf that is deployed to the cluster on the cluster master?
Requirement: the alert only needs to trigger outside the maintenance window, even if a server is down during the maintenance window.

| tstats count where index=cts-dcpsa-app sourcetype=app:dcpsa host_ip IN (xx.xx.xxx.xxx, xx.xx.xxx.xxx) by host
| eval current_time=_time
| eval excluded_start_time=strptime("2024-04-14 21:00:00", "%Y-%m-%d %H:%M:%S")
| eval excluded_end_time=strptime("2024-04-15 04:00:00", "%Y-%m-%d %H:%M:%S")
| eval is_maintenance_window=if(current_time >= excluded_start_time AND current_time < excluded_end_time, 1, 0)
| eval is_server_down=if((host="xx.xx.xxx.xxx" AND count == 0) OR (host="xx.xx.xxx.xxx" AND count == 0) 1, 0 )

Trigger condition:

| search is_maintenance window = 0 AND is_server_down=1

The alert is not getting triggered outside the maintenance window even though one of the servers is down. Can you help me find what is wrong in the query, or suggest another possible solution?
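Two hedged observations rather than a definitive fix. First, the is_server_down eval above is missing a comma and a closing parenthesis before the 1, so the if() cannot parse, and the trigger condition references is_maintenance window without the underscore. Second, tstats ... by host returns neither _time (so current_time=_time is empty) nor any row at all for a host with zero events, so a fully down server never shows up as count=0. A sketch that works around both points by seeding the expected hosts with count=0 and using now() for the current time, assuming, as in the original eval, that the host field holds those IP addresses (addresses left masked as in the original, and the trigger logic folded into the search for illustration):

| tstats count where index=cts-dcpsa-app sourcetype=app:dcpsa host_ip IN (xx.xx.xxx.xxx, xx.xx.xxx.xxx) by host
| append
    [| makeresults
     | eval host=split("xx.xx.xxx.xxx,xx.xx.xxx.xxx", ",")
     | mvexpand host
     | eval count=0
     | table host count]
| stats sum(count) AS count BY host
| eval is_maintenance_window=if(now() >= strptime("2024-04-14 21:00:00", "%Y-%m-%d %H:%M:%S") AND now() < strptime("2024-04-15 04:00:00", "%Y-%m-%d %H:%M:%S"), 1, 0)
| eval is_server_down=if(count==0, 1, 0)
| where is_maintenance_window=0 AND is_server_down=1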
Hello, I am facing the same issue as you... I am not receiving email alerts from Splunk. Instead of localhost, what name should I use for the mail server host name? Could you please suggest?
Hello, when using sitimechart instead of timechart, the data changes. I would like to calculate an error percentage, but the result shows 0 or just the field count. Thanks!
Hello,

We upgraded our Red Hat 7 to 9 this past Monday, and Splunk stopped sending emails. We were inexperienced and unprepared for this, so we upgraded our Splunk Enterprise from 9.1 to 9.1.3 to see if that would fix it. It did not. Then we upgraded to 9.2; that did not fix it either.

I started adding debug mode to everything and found that Splunk would send the emails to postfix, and the postfix logs stated the emails were sent. However, after looking at it more closely, I noticed the From field of the Splunk sendemail-generated emails looked like splunk@prod, not splunk@prod.mydomain.com (as it did before we upgraded to Red Hat 9).

When we use mailx, the From field is constructed correctly, e.g. splunk@prod.domain.com. Extra Python debugging does not show the From field, only the user and the domain: from': 'splunk', 'hostname': 'prod.mydomain.com'

My stanza in /opt/splunk/etc/system/local/alert_action.conf:

[email]
hostname = prod.mydomain.com

Does anyone know how to fix this? Is there a setting in Splunk that would make sure the email From field is constructed correctly? It is funny that if you add an incorrect "to" address Splunk complains, but if Splunk creates an incorrect From address in sendemail, it is fine and just sends it to postfix and lets postfix handle it, lol.

dandy
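A hedged possibility rather than a confirmed fix: alert_actions.conf (note the standard file name is plural) supports an explicit from setting in the [email] stanza, which may sidestep whatever changed in how the short host name is being appended after the OS upgrade. The address below is a placeholder for your real sender:

[email]
hostname = prod.mydomain.com
from = splunk@prod.mydomain.com

Splunk typically needs a restart to pick up changes made under etc/system/local.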