All Topics


Need help with creating an interactive drilldown using a value extracted with the rex command. I want to monitor users saving files to a certain folder, and also sort and view the file extension types saved in that folder and by whom. The raw test data has: time, user, computer, directory, and document, as shown below.

Test Data
_time         user_name   computer_name   source_directory        document
10/11/2024    user1       Destop_user1    \\cpn-local\priv\cus\   document1.pdf
10/11/2024    user4       Destop_user1    \\cpn-local\priv\cus\   document2.doc
10/10/2024    user1       Destop_user1    \\cpn-local\priv\cus\   document3.pdf
10/10/2024    user2       Destop_user2    \\cpn-local\priv\cus\   document4.pdf
10/9/2024     user3       Destop_user3    \\cpn-local\priv\cus\   document5.pdf
10/9/2024     user4       Destop_user4    \\cpn-local\priv\cus\   document6.doc
10/9/2024     user2       Destop_user2    \\cpn-local\priv\cus\   document7.doc

I have created a drilldown using a token value from the queried raw-log data, which lets me select a user from a pie chart and show all of that user's logs in a second table. Those two dashboard panels are below and they work.

*** User Pie Chart with the drilldown token: token_user=$click.value$ ***
index="user_files" | rex field="document" "\.(?<extension>[^\.]*$$)" | stats count(user_name) BY user_name

*** User Record Table ***
index="user_files" user_name=$token$ | table _time, user_name, computer_name, source_directory, document

I am now trying to create a dashboard that takes the same raw data, adds a rex command to extract the extension, and has the pie chart show the specific file extensions. The pie chart works:

*** File Extension Pie Chart: Works ***
index="user_files" | rex field="document" "\.(?<extension>[^\.]*$$)" | stats count(extension) by extension

However, when I reference the token ("source = $token$") after declaring the index, to display records based on the pie chart selection, there are no search results.

*** Records by file type selected in Pie Chart: No records found with selection from Pie Chart ***
index="user_files" source=$*token$ | table _time, user_name, computer_name, source_directory, document

I also tried index="user_files" extension=$*token$ and | where extension="$token$" in the query, and still no results appear in the record table. Any help would be greatly appreciated. I understand the logic needed; I'm just having problems executing the drilldown. Thanks
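For illustration, here is a rough sketch of the kind of records-panel search that might work; the token name tok_extension is a placeholder (set from $click.value$ in the extension pie chart, the same way token_user is set today). The key point is that extension is created at search time by rex, so it has to be filtered after the rex runs, not as an indexed-field term like source=:

  index="user_files"
  | rex field=document "\.(?<extension>[^\.]+)$" ``` extension only exists after this rex ```
  | search extension="$tok_extension$" ``` filter on the rex-extracted field, not on source ```
  | table _time, user_name, computer_name, source_directory, document

Inside Simple XML source the trailing $ in the regex would still need to be escaped as $$, as in the existing panels.
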
My group and I are creating a senior project for a SIEM through a VM. We were planning to implement Splunk dashboards into the project with Python code. To give some background, we are starting from scratch with the Python code, and we would like to integrate Splunk dashboards into it. In short, when we run the Python code we would like the Splunk GUI to pop up (whichever visualization we choose: charts, pie charts, global map) with the data that we are collecting through the Python code. Is there a way we can achieve this goal?
My alert is not triggered even with many matching events. Here are the details: while the activity that generates these logs is running, the real-time alert is processing and finds the events shown in the screenshot. I have waited for 5 minutes and have the same issue. I have also tried a scheduled alert and still have the same issue: no triggering.
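For reference, one way to check whether the scheduled version of the alert actually ran and what it found is a sketch like this against the internal logs (the savedsearch_name value below is a placeholder for the alert's real name; real-time alerts do not go through the scheduler, so this only covers the scheduled variant):

  index=_internal sourcetype=scheduler savedsearch_name="My Alert Name"
  | table _time savedsearch_name status result_count run_time
  | sort -_time

A status of skipped, or a result_count of 0 at the scheduled run times, usually narrows down whether the problem is the search, the schedule, or the trigger condition.
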
Does anyone have a tip on how to take a token (from a field input) and then determine which query to run based on that input? For example (datasources/queries: fruit, meat, vegetable): Field: banana -> run the query for fruit -> display a table about banana from that query. Struggling with this one. I'm trying to make a dynamic search bar that populates tables based on the input, thus making several of my dashboard panels redundant and slimming things down.
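For illustration, a rough sketch of one possible approach, where every name (the lookup, the token, the fields) is a placeholder: keep a small lookup that maps each item to its datasource, and let a subsearch turn the typed value into the search term for the main query:

  [ | inputlookup item_categories where item="$tok_item$" | return index ] item="$tok_item$"
  | table _time item source sourcetype

Here item_categories is a hypothetical CSV with columns item and index, so the subsearch expands to something like index="fruit_data", and the outer search runs against whichever datasource the lookup maps the typed item to. If the three queries differ by more than the index, an alternative is to use the input's change handler with condition elements to set the whole query as a token.
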
Welcome to the second segment of our guide. In Part 1, we covered the essentials of getting started with ITSI and how to address key IT challenges. Here in Part 2, we will share insights and recommendations on how to advance your usage, optimize your deployment, and leverage additional tooling.

Strategies to Enhance Performance

To take performance a step further, it's essential to implement techniques that take advantage of customization, optimization, and predictability. Maturing the use of these capabilities will help you fine-tune your ITSI setup to more closely match your organization's specific needs:

- Utilize Custom KPIs: Tailor ITSI to your organization's needs by creating and implementing custom KPIs. For instance, a financial services company could monitor the latency of critical transaction processing systems, or business stakeholders at a hospital could monitor ambulance availability. By leveraging custom KPIs, organizations can gain insights beyond revenue and drive higher-value decision-making across various functions. In addition, see best practices for using metrics to create KPIs.
- Mature Your Alerting Strategies: Learn how to set up multi-KPI alerts for changes in service KPIs, and make sure you are notified of critical events without being overwhelmed by noise.
- Know Adaptive Thresholding Configuration: Set up adaptive thresholding by creating adaptive KPI thresholds and watching a simple step-by-step thresholding process. Next, monitor and optimize the health of your ITSI environment by taking advantage of ITSI's Configuration Assistant. This feature helps you identify configuration issues for your services, KPIs, and entities at a glance, resolve them, and apply changes to your objects in bulk. Once set up, review our best practices and take your optimization further by utilizing the ML-Assisted Adaptive Thresholding feature. Powered by Splunk AI, ML-Assisted Adaptive Thresholding provides automated adaptive threshold configuration recommendations.
- Optimize Entity Rules: Streamline your ITSI setup by learning how to define and manage entity rules for maximum efficiency and accuracy.
- Deploy Predictive Analytics: Start implementing predictive analytics (at the right time) to anticipate issues before they impact services.
- Enhance Incident Response: Refine correlation rules that reduce noise and focus on actionable events. Explore detailed methods for optimizing the rules and additional guidance on configuring the Notable Event Aggregation Policy so it can further manage event noise.

Already ahead of the curve and interested to know what's new to help fine-tune your setup? The new pre-checks in the recently released ITSI 4.19 Upgrade Readiness Dashboard now help surface actionable insights, such as entities associated with deleted or non-existent services and issues with entity filtering, to ensure a smooth upgrade process.

Maintenance and Optimization

Setting the foundation for continuous improvement starts with establishing ways to maintain and optimize your deployment. Keeping pace with your growing IT environment, regular performance tuning, and best-practice workflows will help keep your ITSI setup adaptable to the evolving needs of your business. Follow the embedded links in each:

- Update Environment Changes: Regularly review and update your ITSI setup to reflect your evolving IT environment, and learn how to conduct a comprehensive review. Remember to go back to Splunkbase to reference other preconfigured IT use cases that can be used directly within ITSI, and leverage that prepackaged ITSI content for new monitoring use cases in your growing environment.
- Maintain Adaptive Thresholds: Ensure thresholds applied to service KPIs represent good service function and can address issues before they escalate.
- Use Best Practice Workflows: Follow best-practice workflows from identification to remediation.
- Harness Machine Learning: Leverage the power of machine learning to uncover hidden patterns and trends in your IT data by discovering ways to train and deploy ML models in ITSI.

Integrate Other Splunk Products

Integrating Splunk ITSI to take advantage of other Splunk products doesn't have to be challenging. Take the next step in creating a more unified and resilient IT environment by leveraging these must-have integration recommendations:

- Critical Notifications: Integrate with your preferred incident response tool, like Splunk On-Call, to ensure actionable episodes reach the right teams quickly.
- Precise Business Transactions: Utilize APM Business Workflows in ITSI for automatic service creation using service topologies.
- Application Alerts: Take advantage of the ITSI integration with AppDynamics to help accelerate your onboarding of application availability, performance, end-user experience, health rule violations, and events data. Use this to deep-link into AppDynamics applications from within an ITSI entity or ITSI event. Another must-have is being able to send alerts from Observability Cloud to an ITSI event index.
- Secure Visibility: Enhance collaboration between NOC and SOC teams by sharing data between Splunk ITSI and Splunk Enterprise Security.

Curious how other customers are safeguarding their data? The ITSI backup and restore process enhancements in the 4.19 release allow users to protect critical data and ensure the continuity of their services by viewing any missing dependent objects in backup files and preventing restore job failures.

Conclusion

As we conclude our exploration of advanced strategies, maintenance practices, and integration opportunities for Splunk ITSI, it's clear that optimizing your IT operations is an ongoing journey. By implementing these advanced techniques, regularly maintaining your ITSI environment, and integrating other powerful Splunk products, you can ensure that your IT infrastructure remains robust, efficient, and aligned with business goals. Stay connected with our community and resources for continuous learning and support, and keep pushing the boundaries of what your ITSI deployment can achieve. Thank you for joining us in this series, and we look forward to seeing the remarkable outcomes your organization will achieve with Splunk ITSI.

Try Our Latest Innovations

- To learn more about simulating your services and their health scores, read our Service Sandbox documentation.
- To learn more about our one-stop shop for bulk configuration updates for KPIs with the ITSI Configuration Assistant, read more here.
- To identify the origin of problems and reduce manual investigations, read more about ITSI's Service Impact Analysis here.
- Read about our other features in preview and the release notes here.
As modern IT environments continue to grow in complexity and speed, the ability to efficiently manage and optimize diverse systems has become a business requirement. Splunk IT Service Intelligence (ITSI) is an AIOps, analytics, and IT management solution that provides the visibility to optimize IT performance and helps predict incidents before they impact customers. This blog post provides a guide for adopting, implementing, and maximizing the potential of Splunk ITSI.

Getting Started with ITSI

With a highly extensible solution like ITSI, it can be challenging to determine which efforts bring the fastest and most effective time to value. To help you navigate this process, here are some tips and best practices to help you make the most of your deployment:

- Explore the Getting Started Guide: If you are in the initial stages of implementing ITSI, our getting started guide provides a comprehensive overview of the initial steps.
- Adopt ITSI Capabilities Strategically: Prioritize which capabilities to implement based on your organizational needs by reviewing ITSI's strategic adoption guide.
- Optimize Operations for End-User Experience: To ensure the best operational outcomes and end-user experiences, refer to our definitive guide to best practices.
- Gain Visibility into Third-Party APM Solutions: Utilize our APM solution content pack to enhance visibility for ITOps, executives, DevOps, and DevSecOps.

Curious about recently released features that make getting started easier? Check out a new and easy-to-use feature called Service Sandbox (GA) inside ITSI's 4.19 release. With drag-and-drop abilities now in the UI, users can map and simulate services and service health scores, and identify potential errors before production for even faster service decomposition.

How to Address Key IT Challenges

Today's IT environments are complex and dynamic, presenting numerous challenges that require flexible solutions. Splunk ITSI is designed to address these challenges head-on, providing visibility, intelligent correlation, and predictive analytics to deliver smooth and efficient operations. Below, we highlight three critical challenges and reference technical guidance on ways that ITSI helps:

Overwhelming Alert Noise: One of the most significant challenges in modern IT operations is the lack of visibility into the health, dependencies, performance, and impact of IT assets. When teams can't make sense of their environment, they can't find and fix issues, and they spend their time jumping between tools. This becomes even more challenging when the accumulation of tools, siloed teams, and data creates an overflow of alerts (many of which are duplicates), making it extremely difficult to pick out the signals in the noise. The result is frustrated teams, lost revenue, and higher costs.

Recommendations: Start reducing alert noise by understanding Splunk's approach to Event Analytics. From there, see how to process notable events so they can be grouped into meaningful Splunk ITSI episodes. Want more help identifying the alerts or groups of alerts that appear to be unusual compared to what you normally see? Read about approaches you can use to help identify the "unknown unknowns" in your alert storms.

Lack of Visibility and Business Context: Connecting the visibility of IT and business stakeholders is crucial in order to align them on the same objectives. Without a centralized location to see all the data, teams spend time trying to surface relationships between applications and infrastructure, how these relationships affect services, and how all of this impacts the business. Piecing this alignment together in complex and dynamic environments makes it challenging for teams to understand the severity of incidents and prioritize issues based on their business impact.

Recommendations: See how to analyze IT service health with advanced tools and dashboards that provide detailed insights into the health and performance of your IT services. Review how executive glass tables offer high-level visibility into critical services and help to modernize IT operations by aligning them with business goals. Understand how the dashboards provide a clear visual of how IT and engineering impact business functions, and foster faster identification and prioritization of incidents that affect the bottom line. You can check out more glass table examples here. As a next step, take a look at step-by-step guidance on how to troubleshoot service problems.

Unpredictable Incidents and Downtime: Unplanned incidents and downtime can severely disrupt business operations. These issues often lead to decreased productivity and lost revenue, and can directly impact a company's brand and reputation. To overcome this, IT operations and engineering teams must look for ways to forecast performance and anticipate issues before they impact the business or, worse, customers.

Recommendations: Splunk ITSI can provide early warning signs of potential incidents, guide teams to take preemptive action, and even automate runbooks. Watch how to proactively prevent incidents and predict outages up to 30 minutes before they happen.

Curious about what's new in ITSI to help prioritize and respond to key challenges? Significantly enhance your ability to manage and resolve incidents with ITSI's new Service Impact Analysis (GA) from the 4.19 release. Now you can identify the origin of problems for any degraded service and rank the top contributors (e.g., KPIs) by priority to reduce manual investigations and provide a quick starting point for troubleshooting.

Stories You Can Replicate

Learning how others have implemented ITSI to exceed their goals can provide valuable insight and inspiration for your own deployment. Below are noteworthy examples of how others have achieved significant operational improvements and what it meant for their business:

- By implementing ITSI, Leidos reduced event noise by 95-99%, scaling down from 3,500-5,000 to just 50-200 actionable events.
- This telecom giant drastically reduced incidents across more than 5,000 network exchanges by 90% and increased their customer NPS score by 22 points.

Conclusion

Ahead of Part 2 of this blog, you should now understand how to get started, how to prioritize and extract the most value for your time, and which best practices align with your organization's mission. To make the most of your discoveries, we invite you to explore ITSI's comprehensive training resources and join our vibrant user community. Look for Part 2 of this guide to learn about more advanced use cases, optimization, and ways to leverage additional tooling.
Hello, I would like to create a chart with multiple fields on the Y axis and time on the X axis.

Y axis: FIELD_01, FIELD_02, FIELD_03, FIELD_04, FIELD_05, FIELD_06 (the field values are strings as well as numbers)
X axis: _time

Let's say FIELD_01 consists of the values Stopped, Started, Stopped, Stopped over time; on the chart its row should change colour as the value changes. The idea is that the Y axis lists FIELD_01 through FIELD_06, one row per field showing that field's values, with _time running along the X axis. Thanks in advance!
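For illustration, here is a rough sketch of one way to approximate this; the index name, the 5-minute span, and the Stopped/Started-to-number mapping are placeholders rather than anything from the question. The idea is to map each field's string values to numbers and plot them as separate series over time:

  index=my_index
  | eval FIELD_01_val=case(FIELD_01="Stopped", 0, FIELD_01="Started", 1, true(), null())
  | eval FIELD_02_val=case(FIELD_02="Stopped", 0, FIELD_02="Started", 1, true(), null())
  ``` repeat the eval mapping for FIELD_03 through FIELD_06 ```
  | timechart span=5m latest(FIELD_01_val) AS FIELD_01 latest(FIELD_02_val) AS FIELD_02

Rendered as a line or column chart, each field then appears as its own series whose value changes over time; true per-row colour bands on a shared Y axis generally need a custom visualization rather than a built-in chart type.
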
My Splunk installation can't read files on a Windows host from a specific folder on the C:\ drive. Logs are collected from another folder without problems. There are no errors in the _internal index, the stanza in inputs.conf looks standard, and the monitor path for the folder is specified correctly. The permissions on the folder and files are system ones, the same as on other files that we can collect. What could be the problem?
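For reference, a minimal sketch of an internal-log search that can show whether the forwarder is even attempting to read that folder; the host value and path are placeholders, and this assumes the forwarder's _internal logs are being forwarded (which is the default):

  index=_internal host="my-windows-host" sourcetype=splunkd (component=TailReader OR component=TailingProcessor OR component=WatchedFile) "C:\Path\To\Folder"
  | table _time host component log_level _raw
  | sort -_time

If nothing at all mentions the path, the input stanza is probably not being applied on that host; if the path appears with messages about permissions or ignored files, those messages usually point at the cause.
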
Hi, I encountered an issue where my indexer disconnected from the search head (SH), and similarly, the SH and indexer1 disconnected from the deployment server and license master. I keep receiving the following error message:

Error [00000010] Instance name "A.A.A.A:PORT" Search head's authentication credentials rejected by peer. Try re-adding the peer. Last Connect Time: 2024-10-14T16:23:23.000+02:00; Failed 5 out of 5 times.

I've tried re-adding the peer but the issue persists. Does anyone have suggestions on how to resolve this? Thanks in advance!
Is there a way to transfer the account young.so@gdit.com to young.so@securepro-inc.com? I've switched companies. I also lost the user group Slack access. Oh, and I also need the User Group leader role for Navy transferred; I need a similar update there. @Support
I have created an index to store my data in Splunk. The data consists of 5 CSV files uploaded one by one into the index. Now, if I try to show the data inside the index, it shows the latest data (the CSV file that was uploaded last). We can show the data from the other files by querying with specific source names, but by default we cannot see all of the data; we can only see the data from the last file. To overcome this, we have used joins to combine all the tables and show them through one query in a single report. I wanted to find out if there is a better way to do this. I have to show this data in Power BI, and for that I need a complete report of the data.
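For illustration, a rough sketch of a join-free approach; the index name, source names, and key column are placeholders. The idea is to search all five sources at once over a time range that covers every upload, and, if the files share a key column, merge their rows with stats instead of join:

  index=my_csv_index source IN ("file1.csv","file2.csv","file3.csv","file4.csv","file5.csv") earliest=0
  | stats values(*) AS * by record_id ``` record_id stands in for whatever key column the files share ```

If there is no shared key, dropping the stats line and simply tabling the fields returns all rows from all five files in one result set, which is usually easier to hand to Power BI than a multi-join report.
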
We set up a SAML login with Azure AD for our self-hosted Splunk Enterprise. When we try to log in we are redirected to https://<instance>.westeurope.cloudapp.azure.com/en-GB/account/login, which displays a blank page with {"status":1}. So the login seems somehow to work, but after that it gets stuck on this page, and in splunkd.log I can see the following error message:

"ERROR UiAuth [28137 TcpChannelThread] - user= action=login status=failure reason=missing-username"

So it sounds like there is maybe something wrong in the claims mapping? Here is my local/authentication.conf:

[roleMap_SAML]
admin = test

[splunk_auth]
constantLoginTime = 0.000
enablePasswordHistory = 0
expireAlertDays = 15
expirePasswordDays = 90
expireUserAccounts = 0
forceWeakPasswordChange = 0
lockoutAttempts = 5
lockoutMins = 30
lockoutThresholdMins = 5
lockoutUsers = 1
minPasswordDigit = 0
minPasswordLength = 8
minPasswordLowercase = 0
minPasswordSpecial = 0
minPasswordUppercase = 0
passwordHistoryCount = 24
verboseLoginFailMsg = 1

[authentication]
authSettings = saml
authType = SAML

[authenticationResponseAttrMap_SAML]
mail = http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress
realName = http://schemas.microsoft.com/identity/claims/displayname
role = http://schemas.microsoft.com/ws/2008/06/identity/claims/groups

[saml]
caCertFile = /opt/splunk/etc/auth/cacert.pem
clientCert = /opt/splunk/etc/auth/server.pem
entityId = <instance>.westeurope.cloudapp.azure.com
fqdn = https://<instance>.westeurope.cloudapp.azure.com
idpCertExpirationCheckInterval = 86400s
idpCertExpirationWarningDays = 90
idpCertPath = idpCert.pem
idpSLOUrl = https://login.microsoftonline.com/<tenantid>/saml2
idpSSOUrl = https://login.microsoftonline.com/<tenantid>/saml2
inboundDigestMethod = SHA1;SHA256;SHA384;SHA512
inboundSignatureAlgorithm = RSA-SHA1;RSA-SHA256;RSA-SHA384;RSA-SHA512
issuerId = https://sts.windows.net/<tenantid>/
lockRoleToFullDN = true
nameIdFormat = urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
redirectPort = 0
replicateCertificates = true
signAuthnRequest = false
signatureAlgorithm = RSA-SHA1
signedAssertion = true
sloBinding = HTTP-POST
sslPassword = <pw>
ssoBinding = HTTP-POST

Does anyone have a hint as to what could be going wrong in our setup? Thanks in advance!
Hi everyone, I have configured the OTX AlienVault TAXII source in Threat Intelligence Management. As I can see in the logs, some data was downloaded successfully, but is there a way to know which data exactly?
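For illustration, a rough sketch of the kind of search that can show what landed in the Enterprise Security threat intelligence KV store collections; the collection name (ip_intel is one of the documented intel collections, alongside file_intel, http_intel, and others) and the field names used in the filter are assumptions and may differ by ES version and by how the feed is parsed:

  | inputlookup ip_intel
  | search threat_key="*otx*" OR description="*alienvault*" ``` adjust the filter to however the source is named in your environment ```
  | table ip threat_key description weight

Running the equivalent inputlookup against the other intel collections shows which indicator types the download actually populated.
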
Hi Team, we are planning to host the Deployment Master server and two Splunk Heavy Forwarder servers in our on-prem Nutanix environment. Could you please provide the recommended hardware requirements for hosting these servers? Based on your input, we will plan and provision the necessary hardware.

The primary role of the Deployment Master server will be to create custom apps and collect data from client machines using the Splunk Universal Forwarder. On the Heavy Forwarders, we will be installing multiple add-ons to configure and fetch data from sources such as Azure Storage (Table, Blob), O365 applications, Splunk DB Connect, Qualys, AWS, and client machine data parsing.

We are looking for the minimum, moderate, and maximum hardware requirements, as recommended by Splunk Support, to host the Splunk DM and HF servers in the Nutanix environment. If there are any support articles or documentation available, that would be greatly appreciated. Thank you!
Hello Splunkers!! Could you please help me optimize the query below? The customer says dedup is consuming a lot of resources. What should I change in the query so that the whole query gets optimized?

index=abc sourcetype=abc _tel type=TEL (trigger=MFC_SND OR trigger=FMC_SND) telegram_type=CO order_type=TO area=D10 aisle=A01 *1000383334*
| rex field=_raw "(?P<Ordernumber>[0-9]+)\[ETX\]"
| fields _time area aisle section source_tel position destination Ordernumber
| join area aisle [ inputlookup isc where section="" | fields area aisle mark_code | rename area AS area aisle AS aisle]
| lookup movement_type mark_code source AS source_tel position AS position destination AS destination OUTPUT movement_type
| fillnull value="Unspecified" movement_type
| eval movement_category = case( movement_type like "%IH - LH%", "Storage", movement_type like "%LH - R%", "Storage", movement_type like "%IH - IH%", "Storage", movement_type like "%R - LH%", "Retrieval", movement_type like "%LH - O%", "Retrieval", 1 == 1, "Unknown" )
| fields - source_tel position destination
| dedup Ordernumber movement_category
| stats count AS orders by area aisle section movement_category movement_type Ordernumber _raw
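For illustration, a rough, untested sketch of one possible direction; it assumes the isc file is (or can be) defined as a lookup definition so that the lookup command can replace the join, and it relies on the fact that stats already groups by its split-by fields:

  index=abc sourcetype=abc _tel type=TEL (trigger=MFC_SND OR trigger=FMC_SND) telegram_type=CO order_type=TO area=D10 aisle=A01 *1000383334*
  | rex field=_raw "(?P<Ordernumber>[0-9]+)\[ETX\]"
  | lookup isc area aisle OUTPUT mark_code ``` replaces the join; assumes an isc lookup definition exists ```
  | lookup movement_type mark_code source AS source_tel position AS position destination AS destination OUTPUT movement_type
  | fillnull value="Unspecified" movement_type
  | eval movement_category = case(movement_type like "%IH - LH%","Storage", movement_type like "%LH - R%","Storage", movement_type like "%IH - IH%","Storage", movement_type like "%R - LH%","Retrieval", movement_type like "%LH - O%","Retrieval", true(),"Unknown")
  | stats count AS orders by area aisle section movement_category movement_type Ordernumber ``` stats deduplicates by its split-by fields, so dedup and _raw can usually be dropped ```

Note that the count semantics change slightly: without dedup, orders counts raw events per combination rather than deduplicated rows, so dc() or values() may be needed depending on what the report is supposed to show.
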
Hello, I would like to know if it's possible to set up a lot of automation brokers in a single instance within the same tenant, or is it only one per tenant? My main use case would be to have access to, and act upon, a lot of on-prem clients from a few SOAR Cloud instances (clients are already merged by group of clients, so I do not want to re-split into 1 tenant = 1 client). PS: I did not manage to find details about the possibility of having multiple automation brokers in either the Splunk SOAR or the Splunk Automation Broker documentation. I assume it's possible based on the API and the "id" for the broker; I just want to confirm it, thanks!
Hi, I am using a classic dashboard. I have the two input boxes below (SRC_Condition and Source IP) to filter the src_ip. By default, input boxes can only be placed next to one another. How can I align these two on top of one another? Splunk doesn't allow us to drag and drop them on top of each other.
Hi, I'm trying to drill down on a table using two different input values (from two radio button inputs). When I have input from one radio button, it all works fine. For example, if I have this statement in the drilldown tag of the table, it works perfectly:

<set token="tokenNode">$click.value$</set>

However, when I add a second set-token statement, it just says "No Results Found". I tried both click.value and click.value2:

Option 1:
<set token="tokenNode">$click.value$</set>
<set token="tokenSwitch">$click.value$</set>

Option 2:
<set token="tokenNode">$click.value$</set>
<set token="tokenSwitch">$click.value2$</set>
Hi Splunk Experts, can you please let me know how we can calculate the max and average TPS for a time period of the last 3 months, along with the exact time of occurrence? I came up with the query below, but it is showing an error because the event count is greater than 50,000. Can anyone please help or guide me on how to overcome this issue?

index=XXX "attrs"=traffic NOT metas
| timechart span=1s count AS TPS
| eventstats max(TPS) as MAX_TPS
| eval Peak_Time=if(MAX_TPS==TPS,_time,null())
| stats avg(TPS) as AVG_TPS first(MAX_TPS) as MAX_TPS first(Peak_Time) as Peak_Time
| fieldformat Peak_Time=strftime(Peak_Time,"%x %X")
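For illustration, a rough, untested sketch of one way to sidestep the limit, on the assumption that it comes from timechart producing too many 1-second buckets over 3 months; the per-second counts are built with bin and stats instead:

  index=XXX "attrs"=traffic NOT metas
  | bin _time span=1s
  | stats count AS TPS by _time ``` one row per second that actually has events ```
  | eventstats max(TPS) AS MAX_TPS
  | eval Peak_Time=if(TPS==MAX_TPS, _time, null())
  | stats avg(TPS) AS AVG_TPS max(TPS) AS MAX_TPS min(Peak_Time) AS Peak_Time
  | fieldformat Peak_Time=strftime(Peak_Time, "%x %X")

One difference from timechart: seconds with zero events produce no row here, so AVG_TPS is the average over active seconds only; whether that is acceptable depends on how the TPS figure is defined.
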
I have the Splunk search below, which gives the top 10 results only for a particular day, and I know the reason why, too. How can I tweak it to get the top 10 for each date? That is, if I run the search on 14-Oct, the output must include 10-Oct, 11-Oct, 12-Oct, and 13-Oct, each with the top 10 table names with the highest insert sum.

index=myindex RecordType=abc DML_Action=INSERT earliest=-4d
| bin _time span=1d
| stats sum(numRows) as count by _time,table_Name
| sort limit=10 +_time -count

Thanks in advance
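For illustration, a rough sketch of one common pattern for a per-day top N (untested against this data): sort all rows, then rank within each day with streamstats and keep the first 10:

  index=myindex RecordType=abc DML_Action=INSERT earliest=-4d
  | bin _time span=1d
  | stats sum(numRows) AS count by _time table_Name
  | sort 0 +_time -count ``` sort 0 removes the default result cap so every day's rows survive ```
  | streamstats count AS day_rank by _time
  | where day_rank<=10
  | fields - day_rank

The difference from the original is that the limit is applied per day via the ranking, rather than once across the whole result set.
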