Hello Splunkers!! During the testing phase with demo data, the timestamps match accurately. However, with real-time data ingestion there is a mismatch in the timestamps, which points to a discrepancy in the timestamp parsing or configuration when handling live data. Could you please suggest potential reasons and causes? Additionally, it would be helpful to review the relevant props.conf configuration to ensure consistency.

Sample data:

{"@timestamp":"2024-11-19T12:53:16.5310804+00:00","event":{"action":"log","code":"10010","kind":"event","original":"Communication session on line {1:d}, lost.","context":{"parameter1":"12","parameter2":"2","parameter3":"6","parameter4":"0","physical_line":"12","connected_unit_type_code":"2","connect_logical_unit_number":"6","description":"A User Event message will be generated each time a communication link is lost. This message can be used to detect that an external unit no longer is connected.\nPossible Unit Type codes:\n2 Debug line\n3 ACI line\n4 CWay line","severity":"Info","vehicle_index":"0","unit_type":"NT8000","location":"0","physical_module_id":"0","event_type":"UserEvent","software_module_id":"26"}},"service":{"address":"localhost:50005","name":"Eventlog"},"agent":{"name":"ACI.SystemManager","type":"ACI SystemManager Collector","version":"3.3.0.0"},"project":{"id":"fleet_move_af_sim"},"ecs.version":"8.1.0"}

Current props:

DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
#KV_MODE = json
pulldown_type = 1
TIME_PREFIX = \"@timestamp\":\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%7N%:z

Current results: (screenshot showing the mismatched timestamps omitted)

Note: I am using an HTTP Event Collector token to get the data into Splunk. The inputs and props settings are configured under the search app.
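A possible cause to check, offered as an assumption rather than a confirmed diagnosis: when events arrive through the HEC event endpoint (/services/collector/event), index-time timestamp extraction via TIME_PREFIX/TIME_FORMAT is generally not applied; the event time is taken from the "time" field of the HEC envelope, or falls back to the receipt time. A minimal props.conf sketch for the case where the sender can use the raw endpoint (/services/collector/raw) instead, so the normal parsing pipeline reads the @timestamp field (sourcetype name assumed):

[my_json_hec_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = \"@timestamp\":\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%7N%:z
# If %7N is not honored on your version, try %9N or drop the subsecond component
MAX_TIMESTAMP_LOOKAHEAD = 40
KV_MODE = json

If the sender must keep the event endpoint, the usual alternative is to have it populate the epoch-time "time" field in the HEC envelope so Splunk uses that value directly.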
Hi, we are using Splunk Cloud ES and we can't seem to edit the base search macro of the "Alerts" datamodel. The macro in question is "cim_Alerts_indexes", and it appears to have an extra term which generates an error when the macro is run manually.

Error: "Error in 'search' command: Unable to parse the search: Comparator '=' has an invalid term on the right hand side"

That is because the macro SPL is set up as follows:

(index=(index=azure_security sourcetype="GraphSecurityAlert") OR (index=trendmicro))

The extra "index=" at the beginning is what's breaking it and should be removed. However, when we try to go to Settings -> Advanced Search and click on this macro, we are taken to the CIM Setup interface (Splunk_SA_CIM), which shows the config settings of the macro, including:

Indexes whitelist = azure_security,trendmicro
Tags whitelist = cloud, pci

Notice that the editable configs do not include the definition, which is:

(index=(index=azure_security sourcetype="GraphSecurityAlert") OR (index=trendmicro))

Can anyone assist with how we can correct this? Regards
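For reference, based on the poster's own description of the fault, the corrected definition would presumably just drop the outer "index=" wrapper. A macros.conf-style sketch of what the repaired stanza might look like (this macro lives in Splunk_SA_CIM and is normally managed by the CIM Setup page, so in Splunk Cloud the change may have to go through that page or through support):

[cim_Alerts_indexes]
definition = (index=azure_security sourcetype="GraphSecurityAlert") OR (index=trendmicro)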
The current version of Splunk Enterprise on Linux supports several flavors of the 5.x kernel, but does not appear to support the 6.x kernel per the most recent system requirements.

We are planning a migration of our Splunk infrastructure from Amazon Linux 2 (kernel 5.10.x) to Amazon Linux 2023 (kernel 6.1.x) due to the approaching operating system end of life. Does anyone know whether there are plans for Splunk Enterprise to support the new Amazon OS?
Custom token script stopped working. Can anyone spot any obvious errors? It worked perfectly from version 6.x - 8.x. I get the error "A custom JavaScript error caused an issue loading your dashboard. See the developer console for more details." The console isn't very helpful.

common.js:1702 Error: Script error for: util/console
http://requirejs.org/docs/errors.html#scripterror
at makeError (eval at e.exports (common.js:1:1), <anonymous>:166:17)
at HTMLScriptElement.onScriptError (eval at e.exports (common.js:1:1), <anonymous>:1689:36)

// Tokenize.js
require(['jquery', 'underscore', 'splunkjs/mvc', 'util/console'], function($, _, mvc, console) {
    function setToken(name, value) {
        var defaultTokenModel = mvc.Components.get('default');
        if (defaultTokenModel) {
            defaultTokenModel.set(name, value);
        }
        var submittedTokenModel = mvc.Components.get('submitted');
        if (submittedTokenModel) {
            submittedTokenModel.set(name, value);
        }
    }

    // Main
    $('.dashboard-body').on('click', '[data-on-class],[data-off-class],[data-set-token],[data-unset-token],[data-token-json]', function(e) {
        e.preventDefault();
        console.log("Inside the click");
        var target = $(e.currentTarget);
        console.log("here");
        console.log("target.data('on-class')=" + target.data('on-class'));
        var cssOnClass = target.data('on-class');
        var cssOffClass = target.data('off-class');
        if (cssOnClass) {
            $("." + cssOnClass).attr('class', cssOffClass);
            target.attr('class', cssOnClass);
        }
        var setTokenName = target.data('set-token');
        if (setTokenName) {
            setToken(setTokenName, target.data('value'));
        }
        var unsetTokenName = target.data('unset-token');
        if (unsetTokenName) {
            var tokens = unsetTokenName.split(",");
            var arrayLength = tokens.length;
            for (var i = 0; i < arrayLength; i++) {
                setToken(tokens[i], undefined); //Do something
            }
            //setToken(unsetTokenName, undefined);
        }
        var tokenJson = target.data('token-json');
        if (tokenJson) {
            try {
                if (_.isObject(tokenJson)) {
                    _(tokenJson).each(function(value, key) {
                        if (value == null) {
                            // Unset the token
                            setToken(key, undefined);
                        } else {
                            setToken(key, value);
                        }
                    });
                }
            } catch (e) {
                console.warn('Cannot parse token JSON: ', e);
            }
        }
    });
});
---------------------------- This is an Example (He/She) -----------------------------
Version: 21.04.812-174001
Date/time: 2024-10-18/01:00:06 (2024-10-18/05:00:06 UTC)
User/aplnid: /2370
ComputerName/-user: Ann/King
Windows NT version 6.2, build no. 9200 /10872/6241785241
-> Loading program
----------------------------------------------------------------------------------------------------
---------------------------- This is an Example (He/She) -----------------------------
Version: 21.04.812-174001
Date/time: 2024-10-18/01:00:06 (2024-10-18/05:00:06 UTC)
User/aplnid: /2370
ComputerName/-user: James/Bond
Windows NT version 6.2, build no. 9200 /10872/6241785241
-> Start APL (pid 8484)
----------------------------------------------------------------------------------------------------
---------------------------- This is an Example (He/She) -----------------------------
Version: 21.04.812-174001
Date/time: 2024-10-18/01:00:06 (2024-10-18/05:00:06 UTC)
User/aplnid: /2370
ComputerName/-user: Martin/King
Windows NT version 6.2, build no. 9200 /10872/6241785241
-> Initialising external processes
----------------------------------------------------------------------------------------------------

I am trying to break events at "This is an Example".

[mysourcetype]
TIME_FORMAT = %Y-%m-%d/%H:%M:%S
TIME_PREFIX = Date\/time:\s+
TZ = US/Eastern
LINE_BREAKER = (.*)(This is An Example).*
SHOULD_LINEMERGE = false

This works when I test in "Add Data", but it is not working under props.conf: all the lines are merged into one event. What is the issue here?
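A couple of things worth checking, stated as assumptions rather than a confirmed diagnosis: props.conf line-breaking settings only take effect on the first instance that fully parses the data (an indexer or heavy forwarder), not on a universal forwarder or a search head, which is a common reason the "Add Data" preview works while production ingestion does not; and LINE_BREAKER is case sensitive and breaks on the text matched by its first capture group, so "This is An Example" will not match "This is an Example". A sketch of a stanza that breaks before each dashed header line:

[mysourcetype]
SHOULD_LINEMERGE = false
# Break on the newline(s) immediately preceding a "---- This is an Example" header
LINE_BREAKER = ([\r\n]+)-+\s+This is an Example
TIME_PREFIX = Date\/time:\s+
TIME_FORMAT = %Y-%m-%d/%H:%M:%S
TZ = US/Eastern
MAX_TIMESTAMP_LOOKAHEAD = 50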
I am trying to figure out how to include a lookup in my search, but only some records. My current search is below. My company has two issues:

1. We do not log app version anywhere easy to grab, so I need to have this pulled via rex.
2. We manually maintain a list of clients (some are on an old version and we don't populate the "client" field for them) and what host they are on.

Some clients have both their application and DB on the same host, so my search below results in some weird duplicates where the displayName is listed twice for a single record in my result set (a field containing two values somehow). I want the lookup to only include records where the "host_type" is "application", not "db". Here is my search:

`Environments(PRODUCTION)` sourcetype=appservice "updaterecords" AND "version"
| eval host = lower(host)
| lookup clientlist.csv hostname as host, OUTPUT clientcode as clientCode
| eval displayName = IF(client!="",client,clientCode)
| rex field=_raw "version: (?<AppVersion>.*)$"
| eval VMVersion = replace(AppVersion,"release/","")
| eval CaptureDate=strftime(_time,"%Y-%m-%d")
| dedup clientCode
| table displayName,AppVersion,CaptureDate

I did try including host_type right after "..hostname as host.." and using a |where clause later, but that did not work.
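One way to avoid the multivalue result when both an "application" and a "db" row match the same host is to filter the lookup before it is applied, for example by joining against a pre-filtered inputlookup rather than using the lookup command directly. A sketch, assuming the CSV columns are hostname, clientcode, and host_type:

`Environments(PRODUCTION)` sourcetype=appservice "updaterecords" AND "version"
| eval host = lower(host)
| join type=left host
    [ | inputlookup clientlist.csv
      | where host_type="application"
      | eval host=lower(hostname)
      | fields host clientcode ]
| eval displayName = if(client!="", client, clientcode)
| rex field=_raw "version: (?<AppVersion>.*)$"
| eval VMVersion = replace(AppVersion,"release/","")
| eval CaptureDate = strftime(_time,"%Y-%m-%d")
| dedup clientcode
| table displayName, AppVersion, CaptureDate

An alternative, if you prefer to keep the lookup command, is a lookup definition in transforms.conf with a filter restricting it to host_type="application".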
You’ve probably heard the latest about AppDynamics joining the Splunk Observability portfolio, deepening our coverage to hybrid and three-tier applications and the network. We’re excited to announce new innovations across the entire Splunk Observability portfolio to continue to deliver full-stack observability for any environment and any stack. Explore some of these latest innovations and expansions designed to do the heavy lifting of identifying problems, isolating root causes, and taking corrective action to ensure the resilience of your digital systems.

What’s New?

Splunk AppDynamics

AppDynamics is expanding observability for hybrid and on-prem applications and can help unlock new use cases. This addition introduces unique capabilities for hybrid and on-prem observability, including:
- Application performance monitoring linked to business metrics — and every network, ISP, API, and service your applications rely on
- Application security linked to business risk
- Monitoring for SAP® Solutions
- Digital experience monitoring

Learn more >

Splunk Observability Cloud

Metrics Usage Analytics (MUA) in Metrics Management
Get detailed breakdowns of metrics consumption and usage for better control so you can optimize data consumption and costs. Designed to provide visibility into how many Metric Time Series (MTS) are being generated, used, and stored across your system, MUA helps you track, manage, and optimize your metric data, allowing you to pinpoint which metrics are most valuable and which ones can be archived or dropped to reduce storage and cost.

Simplified Kubernetes Troubleshooting Workflows
Enhancements to the Kubernetes navigator in Splunk Infrastructure Monitoring help engineering teams speed up MTTR and maintain optimal performance across their Kubernetes environments. Users can leverage improved drill-down experiences, simplified navigation, and new list views in the Kubernetes navigator for a simplified troubleshooting experience.

Learn more >

Splunk Platform

Splunk Observability Cloud Metrics in Splunk Cloud
Eliminate the swivel chair effect and consolidate your charting experience by bringing your Observability Cloud real-time metrics data into Splunk Cloud. You’ll be able to create metric-based charts in Splunk Cloud Dashboard Studio, add existing Observability charts and out-of-the-box Navigators, and bring in infrastructure, application, real user monitoring, synthetic, and custom metrics to existing SPL-powered dashboards.

Centralized User and Role Management in Splunk Cloud
Now Splunk Admins can manage out-of-the-box Observability Cloud roles and RBAC in Splunk Cloud. Centralizing user management in one interface alleviates admin efforts and provides better data control and access consistency across Splunk products.

Learn more

Cross-Portfolio Integrations

Log Observer Connect for AppDynamics
Introducing another way to reuse your Splunk logs for more value! This new log integration with Splunk Cloud and Enterprise lets AppDynamics users centralize log collection in Splunk and analyze them in context in AppDynamics for frictionless troubleshooting and faster root cause analysis.

Unified Experience From Splunk Platform to AppDynamics
We’re removing the friction from your workflows that span three-tier, hybrid and multi-cloud environments to help improve operational efficiency.
New single sign-on (SSO) between Splunk AppDynamics SaaS and Splunk Cloud, plus deep linking between the two, helps preserve context and streamline the troubleshooting workflow for a more seamless experience and faster MTTx. Learn more >
Hello Splunk Community,

I monitor the audit.log on RHEL8. As soon as I generate a specific log entry locally, I can find this log entry through my defined search query in Splunk. However, if a few hours pass, I can no longer find it with the same search query. Of course, I adjust the time settings accordingly: first I search in real time (last 30 minutes), then I switch to, for example, Today or the last 4 hours.

I have noticed that this happens with searches that include "transaction msg maxspan=5m". I want to see all the related transactions. When I have the command transaction msg maxspan=5m in my search, I find all the related transactions in real time. After a few hours, I no longer get any hits with the same search query. Only when I remove the transaction command from the search do I see the entries again, but then I don't see as much information as before. Nothing changes if I switch to transaction msg maxevent=3.

Do I possibly have a wrong configuration of my environment here, or do I need to adjust something? Thanks in advance.

Search query:

index="sys_linux" sourcetype="linux_audit"
| transaction msg maxspan=5m
| search type=SYSCALL (auid>999 OR auid=0) auid!=44444 auid!=4294967295 comm!=updatedb comm!=ls comm!=bash comm!=find comm!=crond comm!=sshd comm!="(systemd)"
| rex field=msg "audit\((?P<date>[\d]+)"
| convert ctime(date)
| sort by date
| table date, type, comm, uid, auid, host, name
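A possible explanation, offered as an assumption: over longer time ranges the transaction command can silently discard groups once it reaches its internal event and memory limits, which would fit "works in real time, empty over several hours". Narrowing the events before they reach transaction reduces that pressure. A sketch that restricts the outer search to the msg values of the SYSCALL events of interest (subject to the usual subsearch result limits):

index="sys_linux" sourcetype="linux_audit"
    [ search index="sys_linux" sourcetype="linux_audit" type=SYSCALL (auid>999 OR auid=0)
        auid!=44444 auid!=4294967295 comm!=updatedb comm!=ls comm!=bash comm!=find
        comm!=crond comm!=sshd comm!="(systemd)"
      | dedup msg
      | fields msg ]
| transaction msg maxspan=5m
| rex field=msg "audit\((?P<date>[\d]+)"
| convert ctime(date)
| sort 0 date
| table date, type, comm, uid, auid, host, name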
Hi, I am looking into the possibility of deploying a private Splunk instance for integration testing in AWS. Can anyone tell me whether it is possible to install an NFR licence on an instance deployed in AWS? Thanks
We are trying to watch the NIC statistics for our OS interfaces. We are gathering data with a simple command:

ifconfig eth0 | grep -E 'dropped|packets' > /var/log/nic-errors.log

For my search, I have:

index="myindex" host="our-hosts*" source="/var/log/nic-errors.log"
| rex "RX\serrors\s(?<rxError>\d+)\s"
| rex "RX\spackets\s(?<rxPackets>\d+)\s"
| rex "RX\serrors\s+\d+\s+dropped\s(?<rxDrop>\d+)\s"
| chart last(rxError), last(rxPackets), last(rxDrop) by host

which displays the base data. Now I want to watch if rxError increases and flag that. Any ideas? The input data will look something like:

RX packets 2165342 bytes 33209324712 (3.0 GiB)
RX errors 0 dropped 123 overruns 0 frame 0
TX packets 1988336 bytes 2848819271 (2.6 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
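One way to flag increases is to compute the change in rxError per host over time with streamstats and keep only positive deltas. A sketch, assuming each host writes a fresh snapshot of the counters on a schedule:

index="myindex" host="our-hosts*" source="/var/log/nic-errors.log"
| rex "RX\serrors\s(?<rxError>\d+)\s"
| where isnotnull(rxError)
| sort 0 host _time
| streamstats current=f window=1 last(rxError) as prevRxError by host
| eval rxErrorDelta = rxError - prevRxError
| where rxErrorDelta > 0
| table _time, host, prevRxError, rxError, rxErrorDelta

This could be saved as an alert that fires whenever any rows are returned for the chosen time window.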
We have configured authentication extensions with Azure to enable token creation for SAML users, following this link: https://docs.splunk.com/Documentation/SplunkCloud/latest/Security/ConfigureauthextensionsforSAMLtokens#Configure_and_activate_authentication_extensions_to_interface_with_Microsoft_Azure

I can create a token for myself, but cannot create tokens for others. I had another admin test and he could create a token for himself, but could not create one for me or other users. The only error Splunk is providing is "User <user> does not exist", which is not true: the users do exist. All permissions are in place on both the Splunk admin and Azure sides. Any ideas on what is wrong?
We are trying to set up a Splunk Enterprise 9.3.2 cluster.

All nodes are working fine, but the Splunk Universal Forwarder isn't working - it is not listening on management port 8089 or 8088...

Running on Google Cloud Platform using RHEL 9.5 (latest); already tried RHEL 8.10 (latest) too.

Used documentation: https://docs.splunk.com/Documentation/Forwarder/9.3.2/Forwarder/Installanixuniversalforwarder#Install_the_universal_forwarder_on_Linux

Commands used for setup:

cd /opt
tar xzf /opt/splunkforwarder-9.3.2-d8bb32809498-Linux-x86_64.tgz
adduser -d /opt/splunkforwarder splunkfwd
export SPLUNK_HOME=/opt/splunkforwarder
$SPLUNK_HOME/bin/splunk enable boot-start -systemd-managed 1 -user splunkfwd -group splunkfwd
systemctl start SplunkForwarder

cat /etc/systemd/system/SplunkForwarder.service
[Unit]
Description=Systemd service file for Splunk, generated by 'splunk enable boot-start'
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
Restart=always
ExecStart=/opt/splunkforwarder/bin/splunk _internal_launch_under_systemd --accept-license
KillMode=mixed
KillSignal=SIGINT
TimeoutStopSec=360
LimitNOFILE=65536
LimitRTPRIO=99
SuccessExitStatus=51 52
RestartPreventExitStatus=51
RestartForceExitStatus=52
User=splunkfwd
Group=splunkfwd
NoNewPrivileges=yes
PermissionsStartOnly=true
AmbientCapabilities=CAP_DAC_READ_SEARCH
ExecStartPre=-/bin/bash -c "chown -R splunkfwd:splunkfwd /opt/splunkforwarder"
---

$ cat /etc/os-release
NAME="Red Hat Enterprise Linux"
VERSION="9.5 (Plow)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="9.5"
PLATFORM_ID="platform:el9"
PRETTY_NAME="Red Hat Enterprise Linux 9.5 (Plow)"
ANSI_COLOR="0;31"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:redhat:enterprise_linux:9::baseos"
HOME_URL="https://www.redhat.com/"
DOCUMENTATION_URL="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9"
BUG_REPORT_URL="https://issues.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 9"
REDHAT_BUGZILLA_PRODUCT_VERSION=9.5
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="9.5"
---

$ netstat -tulpn
[root@splunk-custom-image log]# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1684/sshd: /usr/sbi
tcp6       0      0 :::22                   :::*                    LISTEN      1684/sshd: /usr/sbi
tcp6       0      0 :::20201                :::*                    LISTEN      2517/otelopscol
udp        0      0 127.0.0.1:323           0.0.0.0:*                           652/chronyd
udp6       0      0 ::1:323                 :::*                                652/chronyd
---

/var/log/messages:
[root@splunk-custom-image log]# systemctl status SplunkForwarder
● SplunkForwarder.service - Systemd service file for Splunk, generated by 'splunk enable boot-start'
     Loaded: loaded (/etc/systemd/system/SplunkForwarder.service; enabled; preset: disabled)
     Active: active (running) since Thu 2024-11-21 09:03:55 EST; 7min ago
    Process: 797 ExecStartPre=/bin/bash -c chown -R splunkfwd:splunkfwd /opt/splunkforwarder (code=exited, status=0/SUCCESS)
   Main PID: 1068 (splunkd)
      Tasks: 47 (limit: 100424)
     Memory: 227.4M
        CPU: 3.481s
     CGroup: /system.slice/SplunkForwarder.service
             ├─1068 splunkd --under-systemd --systemd-delegate=no -p 8089 _internal_launch_under_systemd
             └─2535 "[splunkd pid=1068] splunkd --under-systemd --systemd-delegate=no -p 8089 _internal_launch_under_systemd [process-runner]"

Nov 21 09:03:55 systemd[1]: Started Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Nov 21 09:03:58 splunk[1068]: Warning: Attempting to revert the SPLUNK_HOME ownership
Nov 21 09:03:58 splunk[1068]: Warning: Executing "chown -R splunkfwd:splunkfwd /opt/splunkforwarder"
Nov 21 09:03:58 splunk[1068]:         Checking mgmt port [8089]: open
Nov 21 09:03:59 splunk[1068]:         Checking conf files for problems...
Nov 21 09:03:59 splunk[1068]:         Done
Nov 21 09:03:59 splunk[1068]:         Checking default conf files for edits...
Nov 21 09:03:59 splunk[1068]:         Validating installed files against hashes from '/opt/splunkforwarder/splunkforwarder-9.3.2-d8bb32809498-linux-2.6-x86_64->
Nov 21 09:04:00 splunk[1068]: PYTHONHTTPSVERIFY is set to 0 in splunk-launch.conf disabling certificate validation for the httplib and urllib libraries shipped>
Nov 21 09:04:00 splunk[1068]: 2024-11-21 09:04:00.038 -0500 splunkd started (build d8bb32809498) pid=1068
---

/opt/splunkforwarder/var/log/splunk/splunkd.log attached as a file.
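One thing to verify, offered as an assumption: depending on how the forwarder was installed and what was answered at first start, newer universal forwarder releases can run with the management port disabled, and 8088 is the HEC port, which a universal forwarder does not open by default at all. If the management port turns out to be disabled, a sketch of the server.conf override that re-enables it (restart splunkd afterwards):

# $SPLUNK_HOME/etc/system/local/server.conf  (sketch)
[httpServer]
disableDefaultPort = false

Running $SPLUNK_HOME/bin/splunk btool server list httpServer --debug would show where the effective value comes from.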
I have tried everything to change the node sizes in the 3D Graph Network Topology Visualization. I am able to get all of the other options to work. Here is the run-anywhere search I am using to test the viz. Pretty straightforward. I have changed around the field order and tried all types and sizes of numbers, and nothing seems to change the size of the nodes in the output graph. Has anyone else seen this issue, or been able to get the node sizing to work with the weight_* attributes?

| makeresults
| eval src="node1", dest="node2", color_src="#008000", color_dest="#FF0000", edge_color="#008000", edge_weight=1, weight_src=1, weight_dest=8
| table src, dest, color_src, color_dest, edge_color, weight_src, weight_dest, edge_weight

and the output I am getting: (screenshot of the resulting graph omitted)
Cannot communicate with task server, please check your settings.

The Task Server is currently unavailable. Please ensure it is started and listening on port 9998. See the documentation for more details.

We are getting the above errors while trying to connect with DB Connect 3.18.1. We are running Splunk 9.3.1.

I've tried uninstalling our OpenJDK and re-installing it, but am finding this:

splunk_app_db_connect# rpm -qa |grep java
tzdata-java-2024b-2.el9.noarch
javapackages-filesystem-6.0.0-4.el9.noarch
java-11-openjdk-headless-11.0.25.0.9-3.el9.x86_64
java-11-openjdk-11.0.25.0.9-3.el9.x86_64

DIR: /opt/splunk/etc/apps/splunk_app_db_connect
splunk_app_db_connect# java -version
openjdk version "11.0.25" 2024-10-15 LTS
OpenJDK Runtime Environment (Red_Hat-11.0.25.0.9-1) (build 11.0.25+9-LTS)
OpenJDK 64-Bit Server VM (Red_Hat-11.0.25.0.9-1) (build 11.0.25+9-LTS, mixed mode, sharing)

One shows 9-1 and one shows 9-3.
As we’ve seen, integrating Kubernetes environments with Splunk Observability Cloud is a quick and easy way to get the end-to-end insight we need to keep our applications healthy and our customers happy. In this post, we’ll explore how to integrate Amazon Elastic Kubernetes Service (EKS) with Splunk Observability Cloud so we can observe EKS alongside the rest of our application telemetry data.  Observability Metrics in AWS  The Amazon EKS management console and Amazon CloudWatch provide insight into EKS observability metrics. From the AWS management console, you can see things like cluster health and details about EKS resources deployed to your cluster. Node details are available within a selected cluster’s Resources tab:  You can dig into pod status, capacity, and pod details: Get an overview of node conditions like MemoryPressure, DiskPressure, etc.: You can even get insight into node events like node status:  For more detailed cluster health visibility, you can enable CloudWatch Observability from the EKS console. With Container Insights enabled, you get even deeper insight into cluster state and resource utilization:  Along with cluster, namespace, node, service, workload, pod, and container performance monitoring:  But if your infrastructure isn’t all in AWS and is spread across multiple platforms, navigating between observability tools, especially during a high-pressure incident, is not great. Instead, having one unified observability platform where you can view all these metrics and more reduces toil and time to incident resolution. End-to-end visibility unified in a central platform makes for a more resilient and efficient observability practice. So let’s look at how to integrate one such observability platform, Splunk Observability Cloud, with Amazon EKS.  Integrate AWS and Splunk Observability Cloud Splunk Observability Cloud provides a unified platform for troubleshooting and monitoring all application systems no matter where they live. Not only can you collect and store Amazon Cloudwatch Metrics data, but if pieces of your applications and infrastructure live outside of AWS, you can view that data right alongside your AWS data for a complete observability picture. You may have already integrated AWS with Splunk Observability Cloud through the Data Management section in Splunk Observability Cloud:  The integration wizard easily takes you through the process of preparing your AWS account:  And getting your AWS data flowing into Splunk Observability Cloud:  But for EKS, data is collected using the Splunk Distribution of the OpenTelemetry Collector, and even with AWS integrated with Splunk Observability Cloud, you’ll notice from the Available Integrations page that we still need to deploy the OpenTelemetry Collector to get our EKS data in:  So let’s install the Splunk Distribution of the OpenTelemetry Collector for Kubernetes.  Install the Splunk Distribution of the OpenTelemetry Collector We’ve gone through how to integrate Kubernetes and Splunk Observability Cloud before, and integrating Amazon EKS isn’t much different. When we follow along with the integration wizard, we just need to specify Amazon Web Services as the provider and Amazon EKS (or Amazon EKS / Fargate profiles) as the distribution:  We can connect to our EKS cluster and then follow along with the rest of the installation instructions. I’m using the AWS CLI in my terminal, but with Helm installed, you could also use AWS CloudShell. 
I first configured kubectl for my EKS cluster by updating my kubeconfig file:  And verified the connection:  I next ran the commands in the Splunk Observability Cloud installation instructions with splunk-otel-collector --version pinned to 0.111.0:  Once those steps were complete, I could then view my EKS telemetry data from within Splunk Observability Cloud:  In a previous post, we explored what it looks like to navigate Kubernetes data using Splunk Observability Cloud navigators to detect and resolve issues in a Kubernetes environment. Now that our EKS cluster is sending data to Splunk Observability Cloud, we can use all the same products and features within Splunk Observability Cloud to monitor our Amazon EKS environment.  From Infrastructure Monitoring we can view our Amazon EKS navigators:  We can get insight into all of our Kubernetes clusters:  Dive into the health of a specific cluster: And observe critical performance data around nodes, containers, daemonsets, deployments, namespaces, pods, replicasets, and workloads:  From these critical usage metrics, we can create detectors and alerts from within our navigators and they can live right alongside the detectors and alerts for the rest of our applications and infrastructure: With Amazon EKS now successfully integrated, we can use Splunk Observability Cloud to proactively monitor, detect, and alert on anomalies in our EKS environment right alongside the rest of our application and infrastructure telemetry data.  Wrap up Integrating with a third-party observability platform like Splunk Observability Cloud provides a unified observability solution for your applications and infrastructure. This helps with quick and easy incident detection and resolution without having to navigate between a bunch of different observability solutions.  Want to try integrating Amazon EKS with Splunk Observability Cloud? Try Splunk Observability Cloud free for 14 days!  Resources Get started with the Collector for Kubernetes Available Amazon Web Services integrations
Hi Team,

I am looking for a way to forward data from my heavy forwarders to a different destination while maintaining metadata like host, source, and sourcetype.

I have tried using the tcpout config in outputs.conf, but I do not see the metadata being transferred. The syslog config in outputs.conf does not work for me either.
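For what it's worth, a sketch of an outputs.conf with an additional tcpout group: when the destination is another Splunk instance, the default cooked (Splunk-to-Splunk) stream carries host, source, and sourcetype with each event, whereas raw TCP or syslog output to a non-Splunk receiver only transmits the event text, so the metadata has to be re-derived on the receiving side. Group names and hosts below are placeholders:

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

[tcpout:secondary_splunk]
server = other-splunk.example.com:9997
# sendCookedData defaults to true; cooked data preserves host/source/sourcetype
# sendCookedData = true

Selective routing of only some data to the second group can then be done with _TCP_ROUTING in inputs.conf or with props/transforms on the heavy forwarder.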
Please help me in configuring rsyslog to Splunk. Our rsyslog server will receive the logs from network devices, and the rsyslog server has a UF installed. I have no idea how to configure this, or even what rsyslog is. Please help me with a step-by-step procedure for configuring this with our deployment server or indexer. Documentation will be highly appreciated.
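A common pattern, described here as a sketch rather than a prescription: rsyslog (the Linux syslog daemon) listens for the network devices on UDP/TCP port 514 and writes what it receives into per-host files, for example under /var/log/network/<hostname>/syslog.log, and the universal forwarder simply monitors those files. Assuming that directory layout, the UF side could look like the inputs.conf below (deployable from the deployment server; index and sourcetype names are placeholders):

[monitor:///var/log/network/*/syslog.log]
sourcetype = syslog
index = network
host_segment = 4
disabled = false

The forwarder also needs an outputs.conf pointing at the indexers (port 9997 by convention), and the target index must already exist on the indexers.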
I have a Splunk query that does some comparisons and the output is as follows. If any of the rows below for a given hostname has "OK", that host should be marked as "OK" (irrespective of the IP addresses it has). Can you help me with the right query, please?

Hostname   IP_Address   match
esx24      1.14.40.1    missing
esx24      1.14.20.1    ok
ctx-01     1.9.2.4      missing
ctx-01     1.2.1.5      missing
ctx-01     1.2.5.26     missing
ctx-01     1.2.1.27     missing
ctx-01     1.1.5.7      ok
ctx-01     1.2.3.1      missing
ctx-01     1.2.6.1      missing
ctx-01     1.2.1.1      missing
w122       1.2.5.15     ok
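One way to do this is to let eventstats look across all rows for each hostname and then derive a per-host status. A sketch, appended to the existing query, with field names taken from the table above:

... | eventstats max(eval(if(lower(match)="ok", 1, 0))) as has_ok by Hostname
| eval host_status = if(has_ok=1, "OK", "missing")
| fields - has_ok

If only one row per host is wanted, replacing the eventstats with a stats such as "| stats values(IP_Address) as IP_Address, max(eval(if(lower(match)="ok",1,0))) as has_ok by Hostname" collapses the output as well.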
1] Tried using "until/since" to pull the number of days between the expirationDateTime and the system date, based on token name, as we have many token names.

expirationDateTime           eventTimestamp               pickupTimestamp
2025-07-26T23:00:03+05:30    2024-11-21T17:06:33+05:30    2024-11-21T17:06:33+05:30

Token name: AppD

Can you suggest the query to be used such that we get the number of days until the certificate expires?
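A sketch using strptime to convert expirationDateTime to epoch time and then computing the remaining days against now(). The token_name field is an assumed name for whatever field holds the token name, and it assumes your build's strptime accepts the %:z style offset (+05:30) - if not, normalize the offset to +0530 first:

... | eval expiry_epoch = strptime(expirationDateTime, "%Y-%m-%dT%H:%M:%S%:z")
| eval days_until_expiry = floor((expiry_epoch - now()) / 86400)
| table token_name, expirationDateTime, days_until_expiry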
I created a scheduled search that reads 2 input lookup csv files. It returns zero results when I look at the "View Recent"/Job Manager. When I run it by clicking the "Run" selection, I get the results that I'm looking for. What am I overlooking?