All Topics
Hi, does anyone have any experience with parsing the version 6 schema of Umbrella logs? The release notes for the add-on https://splunkbase.splunk.com/app/3926/ only mention version 5 ("1.0.5: Adds support for logging format version 5 + Firewall Logs"). In my environment the Umbrella change appears to go straight from version 4 to version 6, and the upgrade notice warns that "Schema upgrades are one way; you will not be able to revert this upgrade." It's scary that you can't revert. Has anyone moved to version 6, and did you have to make changes in local/{props,transforms}?
| datamodel "Change_Analysis" "Account_Management" search
| where 'All_Changes.tag'="delete" AND 'All_Changes.user'!="*$*"
| stats values(All_Changes.result) as "signature", values(All_Changes.src) as "src", values(All_Changes.dest) as "dest", values(All_Changes.user) as "users", dc(All_Changes.user) as user_count by "All_Changes.Account_Management.src_user"
| rename "All_Changes.Account_Management.src_user" as "src_user", "All_Changes.user" as "user"

I am using this query to monitor for deleted accounts, but the alert keeps triggering for computer accounts ending with a $ symbol, e.g. XYZLAPTOP$, ABCLAPTOP$, etc. I added the condition where 'All_Changes.tag'="delete" AND 'All_Changes.user'!="*$*", but it does not help. How can I exclude these $ accounts from the report? Can anyone please help?
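A minimal sketch of one possible fix: the where command compares strings literally and does not treat * as a wildcard, so 'All_Changes.user'!="*$*" never drops anything. Using match() with a regex anchored at the end of the value should exclude accounts ending in $:

| where 'All_Changes.tag'="delete" AND NOT match('All_Changes.user', "\$$")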
Hello all, I am trying to extract a field from the events below. The extraction works fine on events that contain a value for the field, but on events where the field is empty it picks up the next matching value instead. How can I fix it so it only picks the required field and ignores the empty one?

Expression used: (?:[^,]+,){23}\"(?<occurance>\w+)\",.*

Below is an event where the extraction works correctly:
50271232,00004102,00000000,1600,"20210901225500","20210901225500",4,-1,-1,"SYSTEM","","System",46769357,"System","Server-I \x83W\x83\x87\x83u\x83l\x83b\x83g(AJSROOT1:/\x90V\x8A_\x96{\x94ԏ\x88\x97\x9D/\x92l\x8ED\x94\xAD\x8Ds/04_\x92l\x8ED\x8Ew\x8E\xA6\x83f\x81[\x83^\x98A\x8Cg_\x8CߑO1TAX:@5V689)\x82\xF0\x8AJ\x8En\x82\xB5\x82܂\xB7","Information","admin","/App/Sys/AJS2","JOBNET","AJSROOT1:/\x90V\x8A_\x96{\x94ԏ\x88\x97\x9D/\x92l\x8ED\x94\xAD\x8Ds/04_\x92l\x8ED\x8Ew\x8E\xA6\x83f\x81[\x83^\x98A\x8Cg_\x8CߑO1TAX","JOBNET","AJSROOT1:/\x90V\x8A_\x96{\x94ԏ\x88\x97\x9D/\x92l\x8ED\x94\xAD\x8Ds/04_\x92l\x8ED\x8Ew\x8E\xA6\x83f\x81[\x83^\x98A\x8Cg_\x8CߑO1TAX","AJSROOT1:/\x90V\x8A_\x96{\x94ԏ\x88\x97\x9D/\x92l\x8ED\x94\xAD\x8Ds/04_\x92l\x8ED\x8Ew\x8E\xA6\x83f\x81[\x83^\x98A\x8Cg_\x8CߑO1TAX","START","20210901225500","","",11,"A0","AJSROOT1:/\x90V\x8A_\x96{\x94ԏ\x88\x97\x9D/\x92l\x8ED\x94\xAD\x8Ds","A1","04_\x92l\x8ED\x8Ew\x8E\xA6\x83f\x81[\x83^\x98A\x8Cg_\x8CߑO1TAX","A3"

The event below has an empty value in that field, so the next matching value is picked instead:
50266209,00000501,00000000,3476,"20210901220311","20210901220311",4,-1,-1,"SYSTEM","","psd005",142331,"MS932","OR01201S [psd005:HONDB1] YSN1 free 4.52% \x82\xAA\x82\xB5\x82\xAB\x82\xA2\x92l5%\x82\xF0\x89\xBA\x89\xF1\x82\xE8\x82܂\xB5\x82\xBD (Free size = 1466560KB) [Jp1 Notified]","Alert","","/insight/PI","","","","","","","","","",9,"ACTION_VERSION","510","OPT_CATEGORY","OS","OPT_PARM1","","OPT_PARM2","","OPT_PARM3","","OPT_PARM4","","OPT_SID","HONDB1","OPT_URL1","","OPT_URL2","",

Please help with this.
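A likely cause, as a sketch: the capture \w+ requires at least one word character, so when column 24 is "" the match fails at that position, and because the expression is unanchored the engine retries from later offsets and locks onto a different quoted value. Anchoring at the start of the event and allowing an empty capture should keep the column count stable:

| rex field=_raw "^(?:[^,]+,){23}\"(?<occurance>\w*)\""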
Hi, we are using Splunk Cloud 8.2, mainly as our Splunk SIEM solution. We currently have many scheduled alerts, searches and reports. In recent days we have seen 21% of searches being skipped, and job execution time has also increased. Since yesterday we have been unable to see output results for any of the scheduled jobs, although the same searches return results when run ad hoc. We are also seeing the errors and warnings below in our console:

The percentage of non high priority searches skipped (74%) over the last 24 hours is very high and exceeded the red thresholds (20%) on this Splunk instance. Total Searches that were part of this percentage=7056. Total skipped Searches=5271
The instance is approaching the maximum number of historical searches that can be run concurrently.
The number of extremely lagged searches (1) over the last hour exceeded the red threshold (1) on this Splunk instance

Could you please share some solutions we could implement in this case?
I have around six concurrent saved searches. Each search ends with an outputlookup command writing to a separate KV store collection. The searches take far too long to execute the outputlookup command; they run fine if outputlookup is removed. Any suggestions? I know there is a limit on the number of rows outputlookup can write, but all searches are within that limit, so I am wondering whether there is a limit on the number of concurrent outputlookup commands. Is there any such thing? Does one search's outputlookup wait for another's to complete? If so, is there a solution for that?
I need to auto-refresh my dashboard every 30 seconds. I am trying to set up a refresh for a form that was created; is the below correct? It hasn't been working for me. Am I missing something, or is this not the correct placement? If it matters, I am using Splunk Cloud 8.2.

<form version="1.1" theme="dark" refresh="30">
  <label>Health Dashboard - 24 Hours</label>
  <fieldset submitButton="false" autoRun="true">
    <input type="time" token="timetoken" searchWhenChanged="true">
      <label>Select Time Range</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Servers_Memory_Usage</title>
      <chart>
        <search>
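One approach that may help, as a sketch: in Simple XML the refresh interval is usually set per search rather than on the <form> element, via the refresh and refreshType attributes. Something like the following on each panel's search (the index=_internal query is just a placeholder; the time tokens reuse the form's own timetoken input):

<search refresh="30s" refreshType="delay">
  <query>index=_internal | timechart count</query>
  <earliest>$timetoken.earliest$</earliest>
  <latest>$timetoken.latest$</latest>
</search>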
What happens when an ad hoc search is issued on a search head in a distributed environment? Does the search head communicate with the cluster master or directly with the indexers? I'm looking for clarification.
Hello champions, I run the three queries below on the given datasets to find out which users ran the enable command, on which host, at what time:

1. index=networking user* enable* host*

Oct 15 08:17:45 brg-c-1.com.au 8279: Oct 15 2021 08:17:44.820 AEST: %PARSER-5-CFGLOG_LOGGEDCMD: User:John logged command:!exec: enable
Oct 15 08:17:35 brg-c-1.com.au 8278: Oct 15 2021 08:17:34.082 AEST: %PARSER-5-CFGLOG_LOGGEDCMD: User:lili logged command:!exec: enable failed
Sep 15 23:29:55 gsw-r-4.com.au 466: Sep 15 23:29:54.009: %PARSER-5-CFGLOG_LOGGEDCMD: User:Khan logged command:!exec: enable
Aug 12 15:18:37 edc-r-4.com.au 02: Aug 12 15:18:36.472: %PARSER-5-CFGLOG_LOGGEDCMD: User:Khan logged command:!exec: enable
Aug 11 03:31:05 ctc-s.com.au 134: Aug 10 17:31:04.859: %PARSER-5-CFGLOG_LOGGEDCMD: User:cijs logged command:!exec: enable
Jan 29 11:30:58 brg-c-1.com.au 2082: Jan 29 2021 11:30:57.141 AEST: %PARSER-5-CFGLOG_LOGGEDCMD: User:chick logged command:!exec: enable failed

2. index=linux_logs host=edc-03-tacacs enable*

Oct 26 12:56:13 egc-03-ts tc_plus[149]: enable query for 'kim' tty86 from 202.168.5.22 accepted
Oct 26 11:33:44 egc-03-ts tc_plus[259]: enable query for 'kim' tty86 from 202.168.5.22 accepted
Oct 21 11:35:59 egc-03-ts tc_plus[285]: enable query for 'John' tty86 from 202.168.5.23 accepted
Oct 21 11:35:53 egc-03-ts tc_plus[282]: enable query for 'Han' tty86 from 202.168.5.23 rejected

3. index=linux_logs host=gsw-03-tacacs enable*

Sep 30 13:35:53 gdw-02-ts tc_plus[143]: 192.168.2.21 James tty1 192.168.6.56 stop task_id=55161 timezone=AEST service=shell start_time=1632972953 priv-lvl=0 cmd=enable
Sep 29 12:38:17 gdw-02-ts tc_plus[319]: 192.168.2.24 linda tty1 192.168.5.3 stop task_id=15729 timezone=AEST service=shell start_time=1632883097 priv-lvl=0 cmd=enable
Sep 15 22:23:23 gdw-02-ts tc_plus[1649]: 192.168.4.2 Brown tty322 192.168.46.1 stop task_id=2574 timezone=AEST service=shell start_time=1631708603 priv-lvl=0 cmd=enable
Sep 9 14:58:32 gdw-02-ts tc_plus[2030]: 192.168.2.29 Gordan tty1 192.168.26.3 stop task_id=14329 timezone=AEST service=shell start_time=1631163512 priv-lvl=0 cmd=enable

I tried hard but could not find a query that merges all of this data (indexes and hosts) to show who ran the enable command successfully, at what time, on which host, and to produce a table like:

| table date host user command(enable) status(success)

Could anyone please help me? Thank you in advance.
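A rough sketch of one way to combine them, assuming the sample formats above are representative; the field names and the success/failure logic (the third format carries no explicit result, so any match is treated as success) are assumptions to adjust:

(index=networking "%PARSER-5-CFGLOG_LOGGEDCMD" enable) OR (index=linux_logs (host=edc-03-tacacs OR host=gsw-03-tacacs) enable)
| rex "User:(?<user_a>\w+) logged command:!exec: enable(?<failed_a> failed)?$"
| rex "enable query for '(?<user_b>[^']+)' .* (?<result_b>accepted|rejected)$"
| rex "tc_plus\[\d+\]: \S+ (?<user_c>\S+) tty\S+ .* cmd=enable$"
| eval user=coalesce(user_a, user_b, user_c)
| eval status=case(isnotnull(failed_a) OR result_b=="rejected", "failure", isnotnull(user), "success")
| where status="success"
| table _time host user status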
Hello, this is my first time trying to consolidate logs and use field extraction, and I am a little lost. I have the payload below and I would like to extract the following fields from the "line" field in the JSON payload. An example payload:

{"line":"2021/10/25 18:49:52.982|DEBUG|GoogleHomeController|Recieved a request for broadcast: {\"Message\":\"Ring Ring Ring Niko and Xander, someone is at your front door and rung the doorbell!\",\"ExecuteTime\":\"0001-01-01T00:00:00\"}","source":"stdout","tag":"b5fcd8b8b5a4"}

Time - "2021/10/25 18:49:52.982"
Level - "DEBUG"
Controller - "GoogleHomeController"
Message - "Recieved a request for broadcast..."

The line always follows the format "{TIME}|{LEVEL}|{CONTROLLER}|{MESSAGE}", i.e. fields separated by pipe characters; it is formatted with NLog in my code. How do I extract the fields that are nested within a field, so I can search by the Time (from the log message), Log Level, Controller, and Message? I tried the field extraction UI, but it only seems to work at the top level, i.e. the line, source, and tag fields, not the fields within.
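One possible approach, as a sketch: once the event is parsed as JSON so that line exists as a field (spath does this at search time), a rex over line can split on the pipes. The extracted field names here are my own choices:

| spath
| rex field=line "^(?<log_time>[^|]+)\|(?<log_level>[^|]+)\|(?<controller>[^|]+)\|(?<message>.*)$"
| table log_time log_level controller message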
We installed an add-on on our Splunk instance last year, and when I checked Splunkbase today the add-on is no longer available; I was taken to an archived page. Does this have any impact on the existing instance? We have a lot of dependencies on this add-on and are worried that Splunk will come back to us saying they are going to uninstall it sometime in the near future. Has anyone ever faced this situation?
I want to extract the data for every node. The pg-2 and ss7-2 values are the nodes, and below each node is its information. How do I extract the percentage value? I want to find the maximum percentage for every node.
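A very rough sketch, assuming the events contain the node name and a value like NN% (the screenshot with the actual format is not included here, so both regexes are guesses to adjust):

| rex "(?<node>pg-\d+|ss7-\d+)"
| rex "(?<pct>\d+(?:\.\d+)?)%"
| stats max(pct) as max_percentage by node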
What are AppDynamics application troubleshooting tools, by issue?

AppDynamics provides various tools for troubleshooting different application issues. Our official documentation includes several pages dedicated to troubleshooting different application components.

In this article, find recommended tools to apply in specific use cases, including:
- Slow application
- Sudden increase in exceptions
- Networking bottleneck
- Analyzing the JVM
- Digging into problematic asynchronous calls
- Health Rule violation analysis
- Performance impacting business
- Degraded Kubernetes cluster health
- Mobile app crash
- Performance pattern
- UX design challenges
- Identifying expensive DB queries

In addition, check out these consolidated references:
- INDEX | Troubleshooting resources by scenario
- Additional resources for troubleshooting

Slow application

USE CASE: Imagine what you might do when your service desk calls to tell you there are complaints about slow logins to your application. What do you do?

- Slow Response Times (AppDynamics Documentation)
- .NET Slow Response Times (AppDynamics Documentation)

And by the way, have you checked Analytics to see how the issue impacted your business?

- What is Business iQ? (AppDynamics University)
- Business iQ (AppDynamics Resources)
- BiQ Use Cases: take full advantage of BiQ and understand your business performance, too (AppDynamics Resources)

BACK TO CONTENTS

Sudden increase of exceptions

USE CASE: An alert reported a higher than usual error rate for certain Business Transactions, and we simultaneously spotted a sudden increase of exceptions in the application dashboard. How do we analyze it?

- Troubleshoot Error Transactions (AppDynamics University)
- Errors and Exceptions (AppDynamics Documentation)

BACK TO CONTENTS

Networking bottleneck

USE CASE: While analyzing a Business Transaction, you discovered a lot of issues between Tiers and towards the shared backend. Neither the application information nor the system points to a specific root cause, but you can see the response time is unexplainably high. Before escalating to the network team, how can you troubleshoot further with Network Visibility yourself?

- Diagnosing Network problems (AppDynamics University)
- Network Issues (AppDynamics Documentation)

BACK TO CONTENTS

Analyzing the JVM

USE CASE: You notice bottlenecks on all calls processed by a Tier (specifically a JVM cluster). While analyzing, you discover that the Java Virtual Machines are under high load, but you can't pinpoint a specific piece of code causing the high resource load, and your team of developers needs more detail about the problems you discovered in AppDynamics. This kind of bottleneck often stems from the underlying hardware, the JVM I/O resources, and JVM memory management, which you now start to analyze in more detail.

- Java Resource Issues (AppDynamics Documentation)
- Java Memory Leaks (AppDynamics Documentation)
- Java Memory Thrash (AppDynamics Documentation)

BACK TO CONTENTS

Digging into problematic asynchronous calls

USE CASE: What if the application you monitor takes advantage of multithreading solutions? These documents will help you discover potential problems in the AppDynamics Controller:

- Code Deadlocks for Java (AppDynamics Documentation)
- Thread Concatenation (AppDynamics Documentation)
- Event Loop Blocking in Node.js (AppDynamics Documentation)

BACK TO CONTENTS

Health Rule violation analysis

USE CASE: Let's say you get a new email from the AppDynamics Controller, showing a Health Rule violation.
You ask yourself essential questions like: When was the Health Rule violated? Is the violation still ongoing? What condition(s) deviated from normal performance, and for which entities?

- Troubleshooting Health Rule violation (AppDynamics Documentation)

Do you know the "Health Rule violation life cycle"?

BACK TO CONTENTS

Performance impacting business

USE CASE: Based on your research, you know that your organization's profits are highly correlated with user experience, and consequently with the performance of crucial business transactions. So you look for a tool dedicated to that purpose.

- BiQ Metrics & Alerts and Experience Level Management (AppDynamics University)
- Troubleshoot Business (AppDynamics Documentation)

BACK TO CONTENTS

Degraded cluster health

USE CASE: Your organization is facing the challenge of monitoring Kubernetes environments at scale. You would like to maintain visibility into the performance and health of the Kubernetes cluster, its underlying infrastructure, and the running applications, without putting too much effort into deploying and maintaining the Agents. What can AppDynamics offer in this scenario?

- Cluster Agent (AppDynamics Resources)
- Monitoring of the Cluster health (AppDynamics Documentation)

BACK TO CONTENTS

Mobile app crash

USE CASE: Your mobile application's end users start complaining in their Google Play and App Store reviews, and the app's ratings fall. From their comments, you conclude that the application crashes continuously. You need an immediate root cause analysis, and your application team wants to start in AppDynamics. Where do you look?

- Troubleshoot Mobile (AppDynamics Documentation)
- Crash Analyze (AppDynamics Documentation)

This time, you were able to solve the issue once you had a look at it. But how could you prevent similar issues in the future, without needing to rely on customer feedback?

- Proactive monitoring (AppDynamics Blog)

BACK TO CONTENTS

Performance pattern

USE CASE: Unquestionably, not every IT system is equally active 24/7, and the same is true for your application. At night, load decreases significantly, but on Saturday afternoons the application typically experiences a noticeable increase in active users. Will AppDynamics adjust to the changing characteristics of the application metrics?

- Discover normal activity (AppDynamics Documentation)

What if a maintenance window or other planned abnormality is approaching, and you would like to suppress alerting for a set time?

- Action Suppression (AppDynamics Documentation)

Or (even better) how can we show that an abnormal-behavior event (like Black Friday) is behaving normally despite the expected higher load? That is, when does our system break, versus when is it just under load or stress?

- AIOps - Anomaly Detection, Automated Transaction Diagnostics (AppDynamics Documentation)

BACK TO CONTENTS

UX design challenges

USE CASE: In a competitive environment, every detail matters. Your UX design team puts a lot of effort into distinguishing your application from the competition, with outstanding UX for both browser and web applications. How can the design team understand your end users more fully? Do both front ends perform well? Can AppDynamics be close enough to the end user to help define the persona?

- Mobile Screenshots (AppDynamics Documentation)
- ExJM (AppDynamics webpage)
- Experience Journey Map (OneMob Video)

BACK TO CONTENTS

Identifying expensive DB queries

USE CASE: It's an old truth that the database is "always the bottleneck".
Some transaction snapshots were categorized as slow and very slow, but you see no pattern in either Business Transactions or nodes. Might some transactions share the same expensive DB queries?

- Troubleshoot DB queries (AppDynamics Resources)

BACK TO CONTENTS

Index | Troubleshooting resources by scenario

Slow application
  Tools: Slow Response Time; .NET Slow Response Times
  Training: Troubleshoot Slow Transactions; Troubleshooting Using Transaction Snapshots
  Video: Troubleshooting With AppDynamics; Automated Transaction Diagnostics

Mobile app is crashing
  Tools: Troubleshoot Mobile; Crash Analyze
  Training: Mobile Crashes and Code Issues

Sudden increase of exceptions
  Tools: Errors and Exceptions; Analyze
  Training: Troubleshoot Error Transactions
  Video: Troubleshooting With AppDynamics

Networking bottleneck
  Tools: Network Issue Use Cases
  Training: Diagnose Network Issues

Analyzing the JVM
  Tools: Java Resource Issues; Java Memory Leaks; Java Memory Thrash; JVM Crash Guard
  Training: Automatic Leak Detection; Object Instance Tracking

Health Rule violation analysis
  Tools: Troubleshoot Health Rule Violations
  Training: Proactive Monitoring and Dashboards Workshop
  Video: Troubleshooting With AppDynamics; Alert Sensitivity Tuning

Performance impacting business
  Tools: Troubleshoot Business
  Training: Experience Level Management
  Video: Information Points

Cluster health is degraded
  Tools: Monitoring of the Cluster Health
  Articles: Introducing Cluster Agent
  Video: K8 Cluster Agent

Performance pattern
  Tools: Discover Normal Activity; Automated Transaction Diagnostics
  Video: Metric Browser; Automated Transaction Diagnostic

Identifying expensive DB queries
  Articles: Troubleshoot DB queries
  Training: Too Many/Slow Database Calls

BACK TO CONTENTS

Additional Resources

- The troubleshooter's mind: an introduction (Community, Knowledge Base)
- A guide to AppDynamics help resources (Community, Knowledge Base)

BACK TO CONTENTS
This question is based on a comment from @woodcock on this post: https://community.splunk.com/t5/Splunk-Search/Why-are-real-time-searches-not-running-and-getting-error-quot/m-p/281407, in which the alert equation provided is as follows: "Schedule it to cover a span of X and run it every X/2. This covers the case where events at the end of span t and the beginning of t+1 would just miss triggering in those windows but will hit in the next alert run. Then make X as large as you can stomach."

I do not fully understand this, so I am hoping someone can help me out. Say I have an alert running every 5 minutes. By that equation I should search -10m to now. But isn't that going to significantly overlap with the prior run? Why not search -6m to now, for example? How do span sizes affect things?

Here is an alert I have running every 5 minutes. I noticed the search picks up both the current span and the prior span, so I have been wondering how to optimize this properly.

| mstats avg(cpu_metric.pctIdle) as Idle WHERE index="itsi_im_metrics" AND host="*" span=5m by host
| eval cpu_utilization=round(100 - Idle,2)
| where cpu_utilization > 90
| stats list(host) as host_list list(cpu_utilization) as avg_cpu_utilization
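One way to avoid picking up the partial current span, as a sketch: snap both ends of the window to whole minutes so each run evaluates only complete 5-minute buckets. The overlap with the prior run is the point of the X/2 rule: with a span of X run every X/2, every bucket is evaluated by two consecutive runs, so a bucket that straddles one run's boundary is still fully inside the next run's window.

| mstats avg(cpu_metric.pctIdle) as Idle WHERE index="itsi_im_metrics" AND host="*" earliest=-10m@m latest=@m span=5m by host
| eval cpu_utilization=round(100 - Idle, 2)
| where cpu_utilization > 90
| stats list(host) as host_list list(cpu_utilization) as avg_cpu_utilization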
I have a JSON-based log file in which every line is a valid JSON document. When searching it like this:

source="/path/to/json/logfile" message.path="/ws/ws_metrics/page_hidden/"
| table message.params.page_hide_metrics

I get entries with the JSON I expect, like this:

{"connections":[{"connection_num":1,"initialized":"2021-10-25T20:46:45.318Z","ready_state":1,"connected_duration_seconds":32.296,"ready_state_times":[null,0.512,null,null]}],"tab_session_id":"604931x|concept|1635194804","first_connection_index":0,"percent_uptime":0.9843940502316508,"duration_seconds":32.296,"page_duration_seconds":32.808}

However, when I try to use an example like example #1 given for json_extract in the Splunk docs:

source="/path/to/json/logfile" message.path="/ws/ws_metrics/page_hidden/"
| eval ph_metrics = json_extract(message.params.page_hide_metrics)
| table ph_metrics

I don't get any results. Why?
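A likely explanation, as a sketch: inside an eval expression, the dot is the string-concatenation operator, so message.params.page_hide_metrics is read as three nonexistent fields concatenated together rather than as one field name. Quoting the field name in single quotes should help:

| eval ph_metrics = json_extract('message.params.page_hide_metrics')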
I recently created a Splunk trial to test the Splunk + Okta integration. I have installed the Okta Identity Cloud Add-on for Splunk, but I'm unable to configure it: when I select the Configuration tab within the app, the Okta Accounts tab gets stuck on "Loading". Below are the steps I have taken so far. Did I miss something?

1. Clicked + Find More Apps
2. Searched for Okta
3. Located Okta Identity Cloud Add-on for Splunk
4. Clicked Install
5. Provided Splunk credentials, checked "I have read...", and selected Login and Install
6. Clicked Open the App
7. Selected the Configuration tab
8. Stuck on "Loading"

These steps were attempted in both Chrome and Safari.

Reference documentation: https://raw.githubusercontent.com/mbegan/Okta-Identity-Cloud-for-Splunk/master/README/Okta%20Identity%20Cloud%20Add-on%20for%20Splunk.pdf
While running the arules command across multiple fields, the 'Given fields' are generated with various 'Implied fields'. But how can a single value of 'Given fields' have different 'Given fields support' values?

Sample results:

Given fields | Implied fields | Given fields support | Implied fields support | Strength
a1, b1       | c1             | 0.6                  | 0.3                    | 1.0
a1, b1       | c2             | 0.4                  | 0.6                    | 0.8
Hello, there are several dashboards in an app, created by others, and there is a Clone button. I want to clone/mirror some of these dashboards so they are totally private, i.e. only visible and editable by me. Will the Clone button do that? I am reluctant to click it because I do not want to create a "Dashboard2" and confuse all the other users! Thanks!
Hi, how can I find events that were sent but received no response? Here is the log.

This is a send:
2021-07-15 00:00:01,892 INFO CUST.InAB-ServerApp-1234567 [MyService] Packet Processed: A[50] B[0000211]

This is a receive:
2021-07-15 00:00:11,719 INFO CUST.InEP-Server2-9876543_CUST.InAB-ServerApp-1234567 [MyService] Normal Packet Received: A[55] B[0000211]

Step 1: find the send id, 1234567.
Step 2: find the response id, 9876543, for send id 1234567, where A=A+5 AND B=B.
Finally, show the ids that have no receive.

e.g.
2021-07-15 00:00:01,988 INFO CUST.InAB-ServerApp-0000001 [ApiManager] Send Packet [0000000000000*] to [APP.MODULE]

Desired table:

id          status
0000001     no receive

Any ideas? Thanks.
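A rough sketch of one approach, assuming the A/B pair (with A+5 on the send side) uniquely identifies a request/response pair within the search window; all extracted field names are my own, and the first clause would need adjusting if some sends are logged in the "Send Packet" format instead:

("Packet Processed" OR "Normal Packet Received")
| rex "ServerApp-(?<send_id>\d+) \["
| rex "(?<direction>Packet Processed|Normal Packet Received): A\[(?<a>\d+)\] B\[(?<b>\d+)\]"
| eval key_a=if(direction=="Packet Processed", a+5, a)
| stats values(direction) as directions values(send_id) as id by key_a, b
| where mvcount(directions)=1 AND directions="Packet Processed"
| eval status="no receive"
| table id status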
Hello Splunk world, I'm working on importing raw logs from McAfee ELM into Splunk. The only option I've come across in the McAfee documentation is SFTP. Reaching out to see if anyone has experience routing data from ELM into Splunk and a good method for doing so. Thank you.
Can someone point me to the DLTK User Guide, please?