All Topics

When I upgraded the ITSI app to 4.18.1, the Services option in the Configuration dropdown is missing. Reference screenshot:
Hi Team, We are currently using Python 3.9.0 for Splunk app development. Is that OK, or can you suggest a better Python version for developing Splunk apps? Thanks, Alankrit
Dear Splunkers, I am planning to build dashboard visualizations for Citrix NetScaler. We have the Splunk Add-on for Citrix NetScaler to collect logs from the NetScaler appliance. However, I can't find any app for Citrix NetScaler that leverages the logs collected by the add-on and presents them in visualizations. Any suggestions, please? Thanks.
Overview

The .NET Agent Ignore Exceptions configuration allows you to ignore specific errors reported for a Business Transaction by adding the fully qualified exception class to the .NET ignore exceptions list. This guide provides a comprehensive overview of how to configure and troubleshoot ignore exceptions for the .NET Agent.

Contents
Introduction
Configuration Steps
Troubleshooting
Sample Configuration
Additional Resources

Introduction

Ignoring specific exceptions can help streamline your monitoring process by filtering out non-critical errors. This ensures that only relevant issues are brought to your attention, enhancing the efficiency of your application performance management.

Configuration Steps

Step 1: Identify Exception Details
Navigate to the BT (Business Transaction) snapshot Error Details page to find the exception details.

Step 2: Add Exception to Ignore List
Add the fully qualified exception class to the .NET ignore exceptions list. This configuration is applied at the Controller application level and affects all registered Business Transactions. Make sure you select the .NET tab under Error Detection to add the ignore exception rule.

Step 3: Add Specific Exception Messages (Optional)
You can specify an exception message to ignore by defining the class of an exception in the exception chain. Note that the match condition is applied only to the root exception of the chain, not to any nested exceptions.

Troubleshooting Ignore Exception Configuration

Reviewing Agent Logs
The location of the .NET Agent log files varies based on the underlying operating system:
Windows: %programdata%\appdynamics\DotNetAgent\Logs
Linux: /tmp/appd/dotnet
Azure Site Extension: %home%\LogFiles\AppDynamics

Verify Ignore Rules
Check whether the ignore rule configurations from the Controller have been downloaded by looking in AgentLog.txt for entries like:
Info ErrorMonitor Setting ignore exceptions to :[System.Net.WebException]
Info ErrorMonitor Setting ignore message patterns to :[SM{ex_type=CONTAINS, ex_pattern='The remote server returned an error: (401) Unauthorized', type=CONTAINS, pattern='The remote server returned an error: (401) Unauthorized', inList=System.String[], regexGroups=[]}]

Locate the Exception Key
Ignore exceptions work based on the key sent by the agent for a specific exception. In AgentLog.txt, find the exception key in an entry like:
Info ErrorProcessor Sending ADDs to register [ApplicationDiagnosticData{key='System.Net.WebException:', name=WebException, diagnosticType=ERROR, configEntities=null, summary='System.Net.WebException'}]

Validating Exception Keys
Validate the exception key (e.g., key='System.Net.WebException:') seen in AgentLog.txt against the ignore exception configuration in your Controller application. Modify or correct the configuration in your Controller as needed and verify.

Sample Ignore Exception Configuration Scenario
Let's use System.AggregateException with an inner exception of SmallBusiness.Common.SmallBusinessException as an example. You want to ignore this exception only when the SmallBusiness.Common.SmallBusinessException has a specific message, such as "This is a known issue."
Here's an example of how System.AggregateException and SmallBusiness.Common.SmallBusinessException might be used in your application:

try
{
    // Some code that might throw an exception
    throw new System.AggregateException(new SmallBusiness.Common.SmallBusinessException("This is a known issue"));
}
catch (System.AggregateException ex)
{
    // Handle the exception
    Console.WriteLine(ex.Message);
}

Fully Qualified Class Name
When dealing with nested exceptions as above, the fully qualified class name includes both the outer and inner exceptions to uniquely identify the specific error scenario.
Outer Exception: System.AggregateException
Inner Exception: SmallBusiness.Common.SmallBusinessException
In this case, the fully qualified class name is: System.AggregateException:SmallBusiness.Common.SmallBusinessException

Log Entry in Agent Log
When the exception is thrown, you see an entry in AgentLog.txt like this:
Info ErrorProcessor Sending ADDs to register [ApplicationDiagnosticData{key='System.AggregateException:SmallBusiness.Common.SmallBusinessException:', name=AggregateException : SmallBusinessException, diagnosticType=ERROR, configEntities=null, summary='System.AggregateException caused by SmallBusiness.Common.SmallBusinessException: This is a known issue'}]

Ignore Exception Configuration
The match condition is applied only to the root exception of the chain. Here, the System.AggregateException thrown with an inner exception of SmallBusiness.Common.SmallBusinessException and the message "This is a known issue" will be ignored by the .NET Agent. The match condition does not apply to nested exceptions' messages unless they are the root exception. Here's how the ignore exception rule would look in the Controller configuration:
Fully Qualified Class Name: System.AggregateException:SmallBusiness.Common.SmallBusinessException:
Exception Message: Is Not Empty

Corresponding configuration entries in the agent logs:
Info ErrorMonitor Setting ignore exceptions to :[System.AggregateException:SmallBusiness.Common.SmallBusinessException]
Info ErrorMonitor Setting ignore message patterns to :[SM{ex_type=NOT_EMPTY, ex_pattern='', type=NOT_EMPTY, pattern='', inList=System.String[], regexGroups=[]}, SM{ex_type=NOT_EMPTY, ex_pattern='', type=NOT_EMPTY, pattern='', inList=System.String[], regexGroups=[]}]

Additional Resources
AppDynamics Errors and Exceptions
AppDynamics Error Detection
Hi, I'm trying to integrate Dynatrace with Splunk using the Dynatrace Add-on for Splunk. However, after the configuration I'm getting the error below: (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1143)'))). Has anyone experienced this, or does anyone know how to solve this certificate issue? FYI, I have updated the SSL certificate on both Splunk and Dynatrace, but it didn't help.
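That "unable to get local issuer certificate" error usually means the Python runtime used by the add-on cannot build a full chain to a trusted root CA (often a missing intermediate certificate, or an internal CA that isn't in the bundle Python uses). A minimal sketch, independent of the add-on, for testing whether a given CA bundle lets Python verify the Dynatrace endpoint; the URL and bundle path below are placeholders, not values from the add-on:

import requests

# Hypothetical values: your Dynatrace environment URL and a PEM bundle that
# contains the full chain (root + intermediates) for its certificate.
DYNATRACE_URL = "https://your-environment.live.dynatrace.com"
CA_BUNDLE = "/opt/splunk/etc/auth/dynatrace_ca_chain.pem"

try:
    # verify= points requests at this CA bundle instead of the default store
    resp = requests.get(DYNATRACE_URL, verify=CA_BUNDLE, timeout=10)
    print("TLS verification succeeded, HTTP status:", resp.status_code)
except requests.exceptions.SSLError as err:
    # Same failure the add-on reports -> the bundle is still missing an issuer
    print("TLS verification failed:", err)

If this succeeds with your bundle but the add-on still fails, the fix is usually to make that full chain visible to the Python environment the add-on runs under.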
I want to block the audit.log file on a particular instance from sending logs to Splunk. Is the stanza below sufficient to accomplish that? Per the spec for matching a file:

blacklist = <regular expression>
* If set, files from this input are NOT monitored if their path matches the specified regex.
* Takes precedence over the deprecated '_blacklist' setting, which functions the same way.
* If a file matches the regexes in both the deny list and allow list settings, the file is NOT monitored. Deny lists take precedence over allow lists.
* No default.

[monitor:///logs/incoming/file.com/all-messages.log]
sourcetype = something
index = something_platform
disabled = 0
blacklist = audit.log
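Since blacklist is a regular expression matched against the full path of files picked up by the monitor stanza, a quick sanity check is to run the pattern against the paths you expect to keep and to exclude. A minimal sketch with made-up paths (note that the stanza above monitors a single named file, so the blacklist only matters once the monitor path is widened to a directory or wildcard):

import re

# blacklist value from the stanza (dot left unescaped, exactly as written there)
blacklist = re.compile(r"audit.log")

# Hypothetical paths this forwarder might see
paths = [
    "/logs/incoming/file.com/all-messages.log",   # should stay monitored
    "/logs/incoming/file.com/audit.log",          # should be excluded
]

for path in paths:
    excluded = bool(blacklist.search(path))
    print(path, "->", "EXCLUDED" if excluded else "monitored")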
I have a Splunk 9.1.2 server running RHEL 8 with about 50 clients. This is an air-gapped environment. I have a bunch of Linux (RHEL and Ubuntu) UFs and have configured inputs.conf to ingest files like /var/log/messages, /var/log/secure, /var/log/audit/audit.log, /var/log/cron, etc. Recently, I noticed that only logs from /var/log/messages and /var/log/cron are being ingested; specifically, I don't see /var/log/secure and /var/log/audit/audit.log. I tried restarting the splunk process on one of the UFs and checked splunkd.log, and I don't see any errors. Here is what I see for /var/log/secure in splunkd.log (looks normal; I have typed it, as I can't copy/paste from the air-gapped machine):

TailingProcessor [xxxxxx MainTailingThread] passing configuration stanza: monitor:///var/log/secure
TailingProcessor [xxxxxx MainTailingThread] Adding watch on path:///var/log/secure
WatchedFile [xxxxxx tailreader 0] – Will begin reading at offset=xxxx for file=`/var/log/secure`

Here is my inputs.conf:

[default]
host = <indexer>
index = linux

[monitor:///var/log/secure]
disabled = false

[monitor:///var/log/messages]
disabled = false

[monitor:///var/log/audit/audit.log]
disabled = false

[monitor:///var/log/syslog]
disabled = false

File permissions seem to be fine for all of those files. Please note, SELinux is enabled, but file permissions still look correct. Initially, I did have to run "setfacl -R -m u:splunkfwd:rX /var/log" for Splunk to get access and send logs to the indexer. btool also shows that I am using the correct inputs.conf. Any idea what might be misconfigured?
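Given that splunkd.log shows the watches being added cleanly, one way to confirm whether this is a read-permission/SELinux issue rather than a Splunk configuration issue is to try reading the files as the forwarder's user. A small sketch, assuming sudo is available and the forwarder runs as splunkfwd:

import subprocess

# Files the UF should be tailing
paths = ["/var/log/secure", "/var/log/audit/audit.log",
         "/var/log/messages", "/var/log/cron"]

for path in paths:
    # Try to read one line as the splunkfwd user; a failure here points at
    # file ACLs (e.g. lost on log rotation) or an SELinux denial, not inputs.conf.
    result = subprocess.run(["sudo", "-u", "splunkfwd", "head", "-n", "1", path],
                            capture_output=True, text=True)
    status = "readable" if result.returncode == 0 else "NOT readable: " + result.stderr.strip()
    print(path + ": " + status)

If the files read fine as splunkfwd, the next place to look is usually the SELinux audit log for denials against the forwarder process.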
I'm using the punchcard visualization in Dashboard Studio and the values on the left are getting truncated with ellipses. Is there a way to display the full value or edit the truncation style?
Streamline Troubleshooting with Log Observer Connect: AppDynamics + Splunk Integration

CONTENTS | Introduction | Video | Resources | About the presenter

Video Length: 3 min 27 seconds

Log Observer Connect for AppDynamics helps your team quickly identify and resolve issues by integrating full-stack APM with Splunk's log analysis. This integration centralizes log collection in Splunk and allows for contextual analysis within AppDynamics, streamlining troubleshooting and reducing operational costs. Watch this demo by Leandro, a Cisco AppDynamics Advisory Sales Engineer, to see it in action.

Additional Resources

Learn more about Log Observer Connect in the blog and documentation, including:
Introducing Log Observer Connect for AppDynamics
Log Observer Connect Documentation

About the presenter: Leandro de Oliveira e Ferreira
Leandro is an Advisory Sales Engineer at Cisco, having joined the company in 2021. With a decade of experience in the observability space, he has honed expertise in OpenTelemetry, Java, Python, and Kubernetes. Throughout his career, he has been instrumental in guiding clients from various industries through their digital transformation challenges. Before joining Cisco, Leandro held key roles at IBM, CA Technologies, and Broadcom, where he contributed significantly to advancing observability practices across complex environments.
Introduction

We know three key players in observability are metrics, traces, and logs. Metrics help you detect problems within your system. Traces help you troubleshoot where the problems are occurring. Logs help you pinpoint root causes. These observability components (along with others) work together to help you remediate issues quickly.

In our previous post, we discussed how Splunk Observability Cloud can help us detect and troubleshoot problems specifically in our Kubernetes environment. But how can we use our telemetry data to identify exactly what's causing the problems in the first place? In this post, let's dig into Splunk Log Observer Connect and see how we can diagnose and resolve issues fast.

Splunk Log Observer Connect Overview

Splunk Log Observer Connect is an integration that makes it possible to query log data from your existing Splunk Platform products (Enterprise or Cloud) and use the data alongside metrics and traces, all from within Splunk Observability Cloud. If you're a Splunk Enterprise or Splunk Cloud Platform customer, you can use Log Observer Connect to view in-context logs, run queries without SPL, and jump to Related Content with one easy click to quickly detect and resolve system problems.

You can get started with Log Observer Connect by following the setup steps or working with your Support team to add a new connection for Log Observer Connect in Splunk Observability Cloud. Using the native OpenTelemetry logging capabilities deployed as part of the Helm chart included in the Splunk Distribution of the OpenTelemetry Collector is the recommended way to get logs from Kubernetes environments into Splunk. You can also configure logging during the initial integration process of the OTel Collector by specifying Log collection and providing your Splunk HEC endpoint and access token (a quick HEC sanity-check sketch follows this post).

Splunk Log Observer Connect in Action

Interacting with logs in Splunk Observability Cloud often begins with an alert triggered by some error event, like a problem with a Kubernetes cluster. In Splunk Infrastructure Monitoring, we can see in the Kubernetes Navigator (which we toured in a previous post) that we have two such active alerts firing. Opening them up, we can see critical alerts for memory usage. With a single click, we can explore further in Splunk Application Performance Monitoring by clicking on the Troubleshoot link.

This takes us to a Service Map view of our application, where we can see something isn't right: our paymentservice node is highlighted in red, meaning it's the source of the root cause of our errors. If we select the red circle, we'll see more info in the panel on the right and Infrastructure and Logs Related Content at the bottom of the screen. All of this information is specifically scoped to the selected paymentservice. We can expand the Logs Related Content and then jump directly from there to Log Observer to view logs related to this service error.

With help from the logs, we can get to the bottom of what's causing these errors. Let's add some additional filters in the Content Control Bar to filter our logs by keywords or field values. Since we arrived via Related Content, we already have logs filtered to service.name = paymentservice. If we only wanted to see logs related to paymentservice errors, we could add another filter for severity = error. If at any time we wanted to save a query to later validate a fix or share it with the rest of our team, we could add it to our Saved Queries.
Select Save at the top right of the screen, then Save query, to name and describe the saved query for later use. Other users and/or your future self can use the Saved Query dropdown to later apply your saved query.

Moving over to the Fields panel on the right of the screen, we can view all available metadata present on entries in the Logs table. This is a great place to filter logs if you're unsure of what fields you're looking for. Here, we can see there's a k8s.cluster.name field with the top values listed. In this case, we know which Kubernetes cluster we want to isolate, so we can include all logs for our specific cluster of interest. We can then click on an individual log entry to see its details.

From the log details, we can see that the error message is "Failed payment processing through ButtercupPayments: Invalid API Token(test-20e26e90-356b-432e-a2c6-956fc03f5609)." Selecting the error message, we can filter further with a single click of Add to filter to ensure all logs are scoped to those with this error message. We've also added the version field as a column by selecting the kebab menu next to the field, followed by Add field as column. Now we can easily scan the Logs table and identify which errors are associated with which version. At a glance, it appears that all the error logs are related to the same version number. Suspiciously, if we look for the version field in the Fields list, we can see that there is in fact only one version scoped to the current error logs.

Before we jump to resolutions, we can continue to interact with the log details and move through our system. We can explore the traces related to this error and see where in our code this error is being thrown. This could help us track down any recent code changes that may have caused this error. We can simply click on the trace_id in the log detail and then View trace_id to jump back to the trace in Splunk APM. If we open up one of the span errors and go into Tag Spotlight for the version trace property, we can confirm our suspicions: only our latest release is experiencing this "Invalid API token" error. If we had first discovered the error while investigating this trace, we could have initially gotten to Log Observer via the Related Content from either the trace or the Tag Spotlight.

Wrap Up

We used Log Observer Connect to easily locate the cause of our errors. Thanks to the ability to move between Splunk Infrastructure Monitoring, APM, and Log Observer, we were able to confidently move forward with a fix. If you want to connect your Splunk Enterprise or Splunk Cloud Platform logs to Splunk Observability Cloud using Splunk Log Observer Connect, again, check out the Introduction to Splunk Log Observer Connect. New to Splunk and want to get started with Splunk Observability Cloud? Start a 14-day free trial!
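As noted in the setup section above, one way to get logs flowing is to point the OTel Collector's log pipeline at a Splunk HEC endpoint and access token. Before wiring that into the Collector, it can be worth sanity-checking that the endpoint and token accept events at all. A minimal sketch; the URL, token, and index are placeholders:

import json
import requests

# Hypothetical values: substitute your own HEC endpoint, token, and index.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

event = {"event": "hello from the HEC sanity check",
         "sourcetype": "otel:test",
         "index": "main"}

resp = requests.post(HEC_URL,
                     headers={"Authorization": "Splunk " + HEC_TOKEN},
                     data=json.dumps(event),
                     verify=False,   # quick lab test only; use a proper CA bundle otherwise
                     timeout=10)
print(resp.status_code, resp.text)   # expect 200 and {"text":"Success","code":0}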
Hi, I have a log that tracks user changes to a specific field in a form. The process is as follows:
1. The user accesses the form, which generates a log event with the "get" eventtype along with the current value of field1. This can occur several times as the user refreshes the page, or through code behind the scenes that generates an event based on how long the user stays on the page.
2. The user fills in the form and hits submit, which logs an event with the "update" eventtype.

Here's a simplified list of events:

_time, eventtype, sessionid, field1
10:06, update, session2, newvalue3
10:05, get, session2, newvalue2
09:15, update, session1, newvalue2
09:12, get, session1, newvalue1
09:10, get, session1, newvalue1
09:09, update, session1, newvalue1
09:02, get, session1, oldvalue1
09:01, get, session1, oldvalue1
08:59, get, session1, oldvalue1

I'm looking to get the last value of field1 before each "update" eventtype. Basically, I'd like to track what the value was before and what it was changed to, something like:

_time, Before, After
10:06, newvalue2, newvalue3
09:15, newvalue1, newvalue2
09:09, oldvalue1, newvalue1

I've tried this with a transaction command on the session, but I run into issues with the multiple "get" events in the same session, which makes it a little convoluted to extract the running values of field1. I also tried a combination of latest(field1) and earliest(field1), but that misses any updates that take place within the session; we sometimes have users who change the value and then change it back, and I'd like to capture those events as well. Does anyone have any tips on how to get this accomplished? Thanks!
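The logic being described (walk the events in time order, remember the most recent field1 value per session, and on every "update" emit that remembered value as Before and the update's value as After) is easy to pin down outside SPL. A small sketch over the sample events above, just to make the desired semantics concrete; in SPL, sorting by _time and using streamstats current=f latest(field1) as Before by sessionid before filtering to the update events should express roughly the same thing:

# Sample events from the post, oldest first: (_time, eventtype, sessionid, field1)
events = [
    ("08:59", "get",    "session1", "oldvalue1"),
    ("09:01", "get",    "session1", "oldvalue1"),
    ("09:02", "get",    "session1", "oldvalue1"),
    ("09:09", "update", "session1", "newvalue1"),
    ("09:10", "get",    "session1", "newvalue1"),
    ("09:12", "get",    "session1", "newvalue1"),
    ("09:15", "update", "session1", "newvalue2"),
    ("10:05", "get",    "session2", "newvalue2"),
    ("10:06", "update", "session2", "newvalue3"),
]

last_seen = {}   # most recent field1 value observed per session
changes = []     # (_time, Before, After) rows

for _time, eventtype, sessionid, field1 in events:
    if eventtype == "update":
        before = last_seen.get(sessionid)    # value from the latest prior get/update
        changes.append((_time, before, field1))
    last_seen[sessionid] = field1            # keep the running value for the session

for row in reversed(changes):                # newest first, like the desired output
    print(row)

This prints ('10:06', 'newvalue2', 'newvalue3'), ('09:15', 'newvalue1', 'newvalue2') and ('09:09', 'oldvalue1', 'newvalue1'), matching the table above.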
Hello, I have a query:

searchquery_oneshot = "search (index=__* ... events{}.name=ResourceCreated) | dedup \"events{}.tags.A\" | spath \"events{}.tags.A\" || lookup Map.csv \"B\" OUTPUT \"D\" | table ... | collect ...

I ran this using the Python SDK in VS Code as:

oneshotsearch_results = service.jobs.oneshot(searchquery_oneshot, **kwargs_oneshot)
conn.cursor().execute(sql, val)

I ran the above using psycopg2 and got this error:

FATAL: Error in 'lookup' command: Could not construct lookup 'Map.csv, B, OUTPUT, D'. See search.log for more details.

The query works when run inside Splunk Enterprise, i.e. Map.csv is looked up and the result is fetched correctly. How do I locate my search.log? I assume it is under splunkhome/var/lib/dispatch/run. What is the error above? Thanks
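One way to get at search.log without hunting for the dispatch directory on disk is to dispatch the search as a normal job (rather than a oneshot, whose artifacts are not kept) and then pull the job's search.log over the REST API. A sketch with the Splunk Python SDK (splunklib); the connection details and the simplified query are placeholders:

import splunklib.client as client

# Placeholder connection details
service = client.connect(host="localhost", port=8089,
                         username="admin", password="changeme")

# Hypothetical stand-in for your search, dispatched as a blocking (non-oneshot) job
query = 'search index=_internal earliest=-15m | head 5'
job = service.jobs.create(query, exec_mode="blocking")

# Fetch this job's search.log via REST: search/jobs/<sid>/search.log
response = service.get("search/jobs/%s/search.log" % job.sid)
print(response.body.read().decode("utf-8"))

On disk, dispatched job artifacts (including search.log) normally live under $SPLUNK_HOME/var/run/splunk/dispatch/<sid>/, which is worth checking rather than var/lib.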
Hi guys, when I extract fields from a selected event, it doesn't show all the data in the event that I need to extract.
Have a nice day, everyone! I came across some unexpected behavior while trying to route some unwanted events to the nullQueue. I have a sourcetype named 'exch_file_trans-front-recv'. Events for this sourcetype are ingested by a universal forwarder with the settings below:

props.conf

[exch_file_trans-front-recv]
ANNOTATE_PUNCT = false
FIELD_HEADER_REGEX = ^#Fields:\s+(.*)
SHOULD_LINEMERGE = false
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = date_time
BREAK_ONLY_BEFORE_DATE = true
MAX_TIMESTAMP_LOOKAHEAD = 24
initCrcLength = 256
TRANSFORMS-no_column_headers = no_column_headers

transforms.conf

[no_column_headers]
REGEX = ^#.*
DEST_KEY = queue
FORMAT = nullQueue

In this sourcetype I have some events that I want to drop before indexing. You can see an example below:

2024-08-22T12:58:31.274Z,Sever01\Domain Infrastructure Sever01,08DCC212EB386972,6,172.25.57.26:25,172.21.255.8:29635,-,,Local

So, I'm interested in dropping events that contain '172.21.255.8:<port>,'. To do it, I created the following settings on the indexer cluster layer:

props.conf

[exch_file_trans-front-recv]
TRANSFORMS-remove_trash = exch_file_trans-front-recv_rt0

transforms.conf

[exch_file_trans-front-recv_rt0]
REGEX = ^.*?,.*?,.*?,.*?,.*?,172.21.255.8:\d+,
DEST_KEY = queue
FORMAT = nullQueue

After applying this configuration across the indexer cluster, I still observe new events with the presented pattern. What am I doing wrong?
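A quick sanity check is whether the indexer-side REGEX actually matches the raw event; if it does (as it appears to below), the filter itself is fine, and the next thing to look at is where the event gets parsed: with INDEXED_EXTRACTIONS = csv the universal forwarder parses the data itself and sends it on as structured events, so index-time props/transforms that exist only on the indexer cluster may never be applied to it. A minimal sketch of the regex check, using the REGEX and sample event exactly as above:

import re

# REGEX from the indexer-side transforms.conf stanza, verbatim
pattern = re.compile(r"^.*?,.*?,.*?,.*?,.*?,172.21.255.8:\d+,")

sample = ("2024-08-22T12:58:31.274Z,Sever01\\Domain Infrastructure Sever01,"
          "08DCC212EB386972,6,172.25.57.26:25,172.21.255.8:29635,-,,Local")

print(bool(pattern.search(sample)))   # True -> the regex matches this event

If that prints True for your real events too, the filtering configuration is a likely candidate to move to (or duplicate on) the forwarder that performs the structured parsing.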
Hi Team, I am trying to instrument a .NET 4.8 application that uses ASP.NET SignalR over WebSockets. When accessing this application, the AppDynamics profiler is loaded successfully, but I don't see any metrics in the Controller. Please check the URL below (XHR request), which I am expecting to see in the Controller:

GET /signalr/connect transport=webSockets&clientProtocol=2.1&connectionToken=VRaaiTyPGpv6nQYxV59QI3x6IGjDEvSSCf1ANWpXALK0c6DjkOh9vFnl5MPGlMl4qJWFSAYWcx0HIpiIHBb0HOGSeawT%2FofowF35o5aqOAgrzeYeaAs9spjxBBg6qknK&connectionData=%5B%7B%22name%22%3A%22chathub%22%7D%5D&tid=10

Apart from custom instrumentation (since I don't know the application's class and method information), is there a way to capture this transaction?
Hi Splunkers, I'm new to React development and currently working on a React app that handles creating, updating, cloning, and deleting users for a specific Splunk app. The app is working well, but for development purposes I've hardcoded the REST API URL, username, and password. Now I want to enhance the app so it dynamically uses the current session's user authentication rather than relying on hardcoded credentials.

Here's the idea I'm aiming for: when a user (e.g., "user1" with admin roles) logs into Splunk, their session credentials (like a session key or authentication token) are stored somewhere, right? I need to capture those credentials in my React app. Does this approach make sense? I'm looking for advice on how to retrieve and use the session credentials, token, or session key for the logged-in user in my "User Management" React app.

Here's the current code I'm using to fetch all users (with hardcoded credentials):

// Fetch user data from the API
const fetchAllUsers = async () => {
  try {
    const response = await axios.get('https://localhost:8089/services/authentication/users', {
      auth: {
        username: 'admin',
        password: 'changeme'
      },
      headers: {
        'Content-Type': 'application/xml'
      }
    });
    // Handle response
  } catch (error) {
    console.error('Error fetching users:', error);
  }
};

I also tried retrieving the session key using this cURL command:

curl -k https://localhost:8089/services/auth/login --data-urlencode username=admin --data-urlencode password=changeme

However, I'm still hardcoding the username and password, which isn't ideal. My goal is for the React app to automatically use the logged-in user's session credentials (session key or authentication token) and retrieve the hostname of the deployed environment. Additionally, I'm interested in understanding how core Splunk user management operates and handles authorization. My current approach might be off, so I'm open to learning the right way to do this. Can anyone guide me on how to achieve this in the "User Management" React app? Any advice or best practices would be greatly appreciated! Thanks in advance!
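For what it's worth, the underlying REST flow is the same regardless of client language: POST /services/auth/login returns a sessionKey, and subsequent requests send it in an "Authorization: Splunk <sessionKey>" header instead of basic auth. Inside Splunk Web, requests proxied through splunkweb (for example via the Splunk JS SDK or the /splunkd/__raw proxy path) are already authenticated as the logged-in user, so an app typically never collects credentials itself. A minimal sketch of the raw session-key flow, with placeholder host and credentials, for illustration only:

import requests
import xml.etree.ElementTree as ET

BASE = "https://localhost:8089"        # placeholder management port
USER, PASSWORD = "admin", "changeme"   # placeholders -- illustration only

# 1) Exchange credentials for a session key
login = requests.post(BASE + "/services/auth/login",
                      data={"username": USER, "password": PASSWORD},
                      verify=False)
session_key = ET.fromstring(login.text).findtext("sessionKey")

# 2) Reuse the session key on later calls instead of basic auth
users = requests.get(BASE + "/services/authentication/users",
                     headers={"Authorization": "Splunk " + session_key},
                     params={"output_mode": "json"},
                     verify=False)
print(users.status_code)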
I'm new to Splunk. I have this error after finishing the installation:

[root@rhel tmp]# systemctl restart splunk-otel-collector
[root@rhel tmp]# systemctl status splunk-otel-collector
● splunk-otel-collector.service - Splunk OpenTelemetry Collector
   Loaded: loaded (/usr/lib/systemd/system/splunk-otel-collector.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/splunk-otel-collector.service.d
           └─service-owner.conf
   Active: failed (Result: exit-code) since Thu 2024-08-22 16:30:11 WIB; 273ms ago
  Process: 2760714 ExecStart=/usr/bin/otelcol $OTELCOL_OPTIONS (code=exited, status=1/FAILURE)
 Main PID: 2760714 (code=exited, status=1/FAILURE)

Aug 22 16:30:11 rhel systemd[1]: splunk-otel-collector.service: Service RestartSec=100ms expired, scheduling restart.
Aug 22 16:30:11 rhel systemd[1]: splunk-otel-collector.service: Scheduled restart job, restart counter is at 5.
Aug 22 16:30:11 rhel systemd[1]: Stopped Splunk OpenTelemetry Collector.
Aug 22 16:30:11 rhel systemd[1]: splunk-otel-collector.service: Start request repeated too quickly.
Aug 22 16:30:11 rhel systemd[1]: splunk-otel-collector.service: Failed with result 'exit-code'.
Aug 22 16:30:11 rhel systemd[1]: Failed to start Splunk OpenTelemetry Collector.
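The status output only shows that otelcol exited; the actual reason is usually in the collector's own log lines in the journal (commonly a missing or invalid access token or realm in /etc/otel/collector/splunk-otel-collector.conf, though that is an assumption here). A tiny sketch to pull the most recent collector messages on a systemd host like the one above:

import subprocess

# Show the collector's own error output from the journal (last 50 lines)
cmd = ["journalctl", "-u", "splunk-otel-collector", "-n", "50", "--no-pager"]
print(subprocess.run(cmd, capture_output=True, text=True).stdout)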
Hello, I'm trying to create a DB Connect input to log the result of a query into an index. The query returns data, as I can see when I execute it from Splunk; however, when I go to Search I can't find anything in the index I configured.

1 - From the "DB Connect Input Health" dashboard I see no errors, and it shows events from the input I created every x minutes (exactly as I configured it). It also shows this metric, which confirms that data is returned by the execution: DBX - Input Performance - HEC Median Throughput Search is completed 0.0465 MB

2 - From index=_internal pg6 source="/opt/splunk/var/log/splunk/splunk_app_db_connect_server.log" I can see that it:
Job 'my_input_name' started
Job 'my_input_name' stopping
Job 'my_input_name' finished with status: COMPLETED

3 - If I search the index I created for it, it is empty.

4 - splunk_app_db_connect 3.9.0

Thanks for any light!
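One common explanation when the input health looks fine but the target index appears empty is that the rows are indexed with timestamps taken from the database column, which can put them outside the time range being searched; this is an assumption, but it's cheap to rule out by counting events in that index over all time. A sketch with the Splunk Python SDK (splunk-sdk); connection details and index name are placeholders:

import splunklib.client as client
import splunklib.results as results

# Placeholder connection details and index name
service = client.connect(host="localhost", port=8089,
                         username="admin", password="changeme")

# Count events in the target index over ALL time
job = service.jobs.create("search index=my_dbx_index | stats count",
                          earliest_time="0", exec_mode="blocking")
for row in results.JSONResultsReader(job.results(output_mode="json")):
    if isinstance(row, dict):
        print(row)   # e.g. {'count': '1234'}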
Can I pass the Splunk Enterprise Certified Admin exam without getting the Splunk Core Certified Power User (SPLK-1002) certification first?
Hi, I am currently working on ticket reporting. Each ticket has a lastUpdateDate field which gets updated multiple times, leading to duplicates. I only need the first lastUpdateDate and the latest lastUpdateDate: the first to determine when the ticket entered the pipe, and the latest to see whether changes were made in the specific period range of the report. I tried using | stats first(_raw) as first_entry last(_raw) as last_entry by ticket_id but it shows me the same lastUpdateDate for both. I have read that I should use min and max, but I don't get results from that either. Thanks in advance for any hints and tips!
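The intent (reduce each ticket's many lastUpdateDate values to the smallest and largest one) is easy to state outside SPL, which can help when checking whether min/max are being fed values in a sortable form. A tiny sketch with made-up data; in SPL, stats min(lastUpdateDate) max(lastUpdateDate) by ticket_id, or earliest()/latest() when that field drives _time, should express the same reduction, provided the values sort chronologically (e.g. ISO-8601 strings or parsed times):

from collections import defaultdict

# Made-up ticket update records: (ticket_id, lastUpdateDate as an ISO-8601 string)
records = [
    ("T-100", "2024-08-01T09:15:00"),
    ("T-100", "2024-08-05T14:02:00"),
    ("T-100", "2024-08-05T14:02:00"),   # duplicate update
    ("T-200", "2024-08-03T08:00:00"),
    ("T-200", "2024-08-10T17:45:00"),
]

dates = defaultdict(list)
for ticket_id, last_update in records:
    dates[ticket_id].append(last_update)

for ticket_id, values in dates.items():
    # ISO-8601 strings sort chronologically, so min/max give first and latest update
    print(ticket_id, "first:", min(values), "latest:", max(values))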