Good afternoon, I'm looking for a way to track impossible travel events for users who are logging in to applications using Duo 2FA. Basically, if a user gets a Duo push from an IP in, let's say, America, then another Duo event from France within a short time period, that would be an event we want to investigate. Is it possible to do this using Splunk queries?
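The underlying check (sometimes called a land-speed test) can be sketched as: geolocate each login IP, then flag consecutive logins per user whose implied travel speed is physically impossible. A minimal Python sketch of that logic; the event shape, the coordinates, and the 900 km/h threshold are all assumptions, not anything from Duo or Splunk:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(events, max_kmh=900):
    """events: (user, epoch_seconds, lat, lon) tuples.
    Returns (user, t_prev, t_curr) for consecutive logins per user whose
    implied speed exceeds max_kmh (roughly a commercial-flight ceiling)."""
    flagged, last_seen = [], {}
    for user, t, lat, lon in sorted(events, key=lambda e: e[1]):
        if user in last_seen:
            t0, lat0, lon0 = last_seen[user]
            hours = max((t - t0) / 3600.0, 1e-6)  # guard divide-by-zero
            if haversine_km(lat0, lon0, lat, lon) / hours > max_kmh:
                flagged.append((user, t0, t))
        last_seen[user] = (t, lat, lon)
    return flagged

# A Duo push from New York, then another from Paris one hour later:
events = [("alice", 0, 40.7128, -74.0060), ("alice", 3600, 48.8566, 2.3522)]
print(impossible_travel(events))  # [('alice', 0, 3600)]
```

In Splunk itself, the equivalent usually combines iplocation on the source IP with streamstats to compare each authentication to the same user's previous one.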
What AppDynamics enhancements are new this month? January 2023

WATCH THIS PAGE FOR UPDATES | Want notification of new monthly Product Update editions? Click the caret menu above right, then Subscribe on the message bar.

In January 2023, the AppDynamics SaaS Controller v23.1.0 was released with an Alert & Respond and a Business Risk Algorithm enhancement for Cisco Secure Application.

AppDynamics Cloud v23.1 was released on January 30, 2023, and includes a large number of logging enhancements for application troubleshooting, as well as enhancements for Kubernetes and database monitoring troubleshooting, and more!

AppDynamics On-Premises v23.1 was released on January 30, 2023, and includes a number of component enhancements and minor fixes to the Enterprise Console.

In this article…
What release highlights should I know about?
Highlights: AppDynamics Cloud | Agents | SaaS Controller | On-premises Controller
Heads Up - What else should I know about?
Resolved and known issues
Essentials

What release highlights should I know about?
The following release highlights cover the newest features and capabilities for January 2023. Check the list below to see who in your organization may be most interested in or impacted by each highlighted enhancement.
PRODUCTS / ENHANCEMENT HIGHLIGHTS (audiences: User & Performance Analyst, Admin & Implementer, DevOps)

AppDynamics Cloud
• Host monitoring: User & Performance Analyst, Admin & Implementer
• Log pattern ranking: User & Performance Analyst, Admin & Implementer, DevOps
• Root cause analysis using Anomaly Detection: User & Performance Analyst, Admin & Implementer, DevOps
• Database monitoring: User & Performance Analyst, Admin & Implementer, DevOps
• Workload efficiency and risk profile: User & Performance Analyst, Admin & Implementer, DevOps
• Grafana plugin: User & Performance Analyst, Admin & Implementer

Agents
• Analytics Agent: Admin & Implementer
• Cluster Agent (Controller v22.12.2): Admin & Implementer, DevOps
• iOS Agent: User & Performance Analyst, Admin & Implementer, DevOps
• Java Agent: User & Performance Analyst, Admin & Implementer, DevOps
• Machine Agent: Admin & Implementer
• .NET Agent: User & Performance Analyst, Admin & Implementer, DevOps

SaaS Controller (for Cisco Secure Application)
• Alert and Respond: User & Performance Analyst, Admin & Implementer
• Business risk algorithm: User & Performance Analyst, Admin & Implementer

AppDynamics On-premises
• Enterprise Console v23.1.0: User & Performance Analyst, Admin & Implementer
• Enterprise Console v21.4.21: User & Performance Analyst, Admin & Implementer

NOTE | Product enhancements are described in detail, and on an ongoing basis, on the respective documentation portal pages:
• AppDynamics Cloud Release Notes, January 2023
• AppDynamics (CSaaS) Release Notes, January 2023
• AppDynamics Accounts Portal Release Notes (ongoing)
Back to top

AppDynamics Cloud highlights
NOTE | See the complete AppDynamics Cloud Release Notes for January 2023 in our documentation portal.

Monitor hosts in your private environment
Monitor the health and performance of your AWS and Azure hosts in your private environment. See the Host Monitoring documentation for more details. (GA v23.1 Released January 30, 2023)

Application troubleshooting with logs
You can now search, filter, and group log messages based on their similarities. Refer to the Troubleshoot with Logs documentation for details. (GA v23.1 Released January 30, 2023)

Root cause analysis using Anomaly Detection
Detect anomalies in business transactions with the Anomaly Detection algorithm, applying filters using tags and attributes and selecting the sensitivity level. For more details, visit the Configure Anomaly Detection and Determine the Root Cause of an Anomaly documentation.
(GA v23.1 Released January 30, 2023)

Database monitoring
This new database monitoring feature for AppDynamics Cloud provides remediation insights and correlation for on-premises and cloud-based databases, with cloud APM and infrastructure. Refer to the Database Monitoring documentation for details. (GA v23.1 Released January 30, 2023)

Increased visibility into Kubernetes workloads
Your Kubernetes workload efficiency and risk profiles now show key metrics on a single pane of glass. Refer to the Workload Efficiency and Risk Profile documentation for details. (GA v23.1 Released January 30, 2023)

Grafana plugin
Leverage an industry-leading visualization tool, Grafana, integrated with AppDynamics Cloud to monitor key metrics on out-of-the-box and fully customizable dashboards. Visit the AppDynamics Cloud with Grafana documentation for details. (GA v23.1 Released January 30, 2023)

Back to top

Agent update highlights
NOTE | See the full 23.1 Release Notes for a complete, ongoing, and sortable list of Agent enhancements.

Analytics Agent
The jackson-databind and SnakeYAML libraries have been upgraded. (GA v23.1 Released January 29, 2023)

Cluster Agent
An auto-instrumentation bug fix was applied to the 22.10 Controller. See Agent Resolved Issues. (GA v22.12.1 Released January 10, 2023)

iOS Agent
This release includes an improved internal variable for stability. (GA v23.1 Released January 23, 2023)

Java Agent
Additional support is provided for Apache Tomcat, http4s Blaze Client, Scala, and WebSocket frameworks. See the Java Agent Framework for OpenTelemetry for more details. (GA v23.1 Released January 30, 2023)

Machine Agent
This release includes two sets of Machine Agent Docker images, each with Debian and Alpine images, to support non-admin and admin users. See Access Machine Agent Docker Images. In addition, the Apache, JRE, and Jackson Databind third-party libraries have been upgraded.
(GA v23.9 Released January 26, 2023)

.NET Agent
Due to changes in ASP.NET Core and ASP.NET, several fixes were put in place to address Business Transaction naming. (GA v23.9 Released January 24, 2023)

PLEASE NOTE | .NET Agent 22.12.0 was the last version to support .NET Core 3.1. See the End of Support Notice.

Back to top

SaaS Controller enhancement highlights

SaaS Controller 23.1
Two big Cisco Secure Application enhancements were made to the latest AppDynamics Controller:
• You can now configure and receive actionable alerts, via HTTP, when new vulnerabilities are detected. See Alerts Using Cisco Secure Application.
• The Business Risk algorithm for business transactions, which now also leverages Cisco Kenna, helps identify sensitive data, enabling you to prioritize what to triage and reducing exposure to the business. See Monitor Business Transactions.
Refer to the latest 23.1 Release Notes for more details.

SaaS Controller 22.12
A number of improvements were also made to AppDynamics Controller version 22.12, including upgrades to PDFBox, Apache Tika, and Jetty, along with the addition of CSRFFilter for extra security. (GA v22.12.2 Released January 20, 2023)
Refer to the 22.12 Release Notes for more details.

Back to top

AppDynamics On-premises enhancements
NOTE | See the full On-premises and AMP Platform Release Announcements for a complete, ongoing, and sortable list of enhancements.

There are two Enterprise Console updates.

Enterprise Console 23.1
On-premises Controller v23.1.0 was released, providing parity with SaaS Controller version 22.12. It contains a number of enhancements and fixes. (GA v23.1.0-5 Released January 30, 2023)

Enterprise Console 21.4.21
Additionally, a number of resolved issues shipped in the 21.4.21 release. (GA v21.4.21-24882 Released January 17, 2023)

Back to top

What else should you know?
Upcoming deprecation dates for PHP 7.x and Python 3.6
• Support for PHP versions 7.0 to 7.4 is deprecated as of February 10, 2023. We recommend upgrading to PHP 8.1.
• Support for Python 3.6 will be deprecated as of March 1, 2023, so please upgrade.

Community launches Welcome Center
Come check out the new Welcome Center, a space where Community members can get self-service and many-to-many help with the community platform's features and best practices. Read how-to articles in Community 101, or raise or answer questions in Welcome Center discussions.
TIP | To find the Welcome Center from anywhere in the Community, click Groups on the navigation bar, then select Welcome Center.

New University courses released in January
This month, the AppDynamics University team released the following courses:

Self-paced courses
• Use Service Principals to Connect to AppDynamics Cloud APIs

Premium instructor-led courses
• IMP876 - AppDynamics Platform Architecture: Get a precise understanding of the AppDynamics solution platform's main functional components and the ways they intercommunicate.
• IMP877 - Monitoring as Code: Learn how to automate deploying agents into your application landscape.

Premium self-paced courses
• Configure SAP Transaction Snapshots to Include Call Graphs
• Update the Controller License File (On-premises only)
• Change Controller Data Directory (On-premises only)
• Collect MRUM Custom User Data

NOTE | Instructor-led training and Premium self-paced courses require a Premium University subscription.

Back to top

Resolved issues
See the complete lists of resolved issues in the AppDynamics Cloud Release Notes and AppDynamics (CSaaS) Release Notes.

Back to top

Essentials
PLANNING AN UPGRADE?
| Please check backward compatibility in the Agent and Controller Compatibility documentation as part of your upgrade planning process.
• Download Essential Components (Agents, Enterprise Console, Controller (on-prem), Events Service, EUM Components)
• Download Additional Components (SDKs, Plugins, etc.)
• How do I get started upgrading my AppDynamics components for any release?
• Product Announcements, Alerts, and Hot Fixes
• Open Source Extensions
• License Entitlements and Restrictions
• Introducing AppDynamics Cloud
This is very similar to a lot of XML parsing questions; however, I have read through ~20 topics and am still unable to get my XML log to parse properly. Here is a sample of my XML file:

<?xml version="1.0" encoding="UTF-8"?><AuditMessage xmlns:xsi="XMLSchema-instance" xsi:noNamespaceSchemaLocation="HL7-audit-message-payload_1_3.xsd"><EventIdentification EventActionCode="R" EventDateTime="2022-11-07T04:18:01"></EventIdentification></AuditMessage>
<?xml version="1.0" encoding="UTF-8"?><AuditMessage xmlns:xsi="XMLSchema-instance" xsi:noNamespaceSchemaLocation="HL7-audit-message-payload_1_3.xsd"><EventIdentification EventActionCode="E" EventDateTime="2022-11-07T05:18:01"></EventIdentification></AuditMessage>

Here are the entire contents of my props.conf file:

[xxx:xxx:audit:xml]
MUST_BREAK_AFTER = \</AuditMessage\>
KV_MODE = xml
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true
TIMESTAMP_FIELDS = <EventDateTime>
TIME_PREFIX = <EventDateTime>
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
category = Custom
disabled = false

I need your assistance to parse these events. Thank you.
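For what it's worth, the sample contains complete XML documents rather than newline-delimited events, so breaking on each `<?xml` declaration (rather than on line breaks alone) is one plausible direction to test in props.conf. The Python sketch below only illustrates that break-then-extract-timestamp logic on a simplified version of the sample (namespace attributes trimmed for brevity); it is not a props.conf fix:

```python
import re
import xml.etree.ElementTree as ET

# Two AuditMessage documents concatenated, as in the sample log
# (xsi attributes omitted to keep the sketch short).
raw = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<AuditMessage><EventIdentification EventActionCode="R" '
    'EventDateTime="2022-11-07T04:18:01"/></AuditMessage>'
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<AuditMessage><EventIdentification EventActionCode="E" '
    'EventDateTime="2022-11-07T05:18:01"/></AuditMessage>'
)

# Break before each XML declaration, the way a LINE_BREAKER anchored on
# <\?xml would, then parse each document and pull its timestamp.
docs = [d for d in re.split(r'<\?xml[^>]*\?>', raw) if d.strip()]
times = [
    ET.fromstring(d).find('EventIdentification').get('EventDateTime')
    for d in docs
]
print(times)  # ['2022-11-07T04:18:01', '2022-11-07T05:18:01']
```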
I am configuring TLS certificate hostname validation with a self-signed certificate in Splunk Enterprise 9.0, and it seems that it cannot trust the CA:

ERROR X509Verify [TelemetryMetricBuffer] - Server X509 certificate (CN=,DC=,DC=,DC=) failed validation; error=19, reason="self signed certificate in certificate chain"

The configuration is the following:

[sslConfig]
# turns on TLS certificate requirements
sslVerifyServerCert = true
# turns on TLS certificate host name validation
sslVerifyServerName = true
serverCert = <path to your server certificate>

Do you know how I can tell Splunk which CA I'm using so it can trust the certificate? Or how can I configure it?
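The server.conf setting that usually addresses this is sslRootCAPath under [sslConfig], pointing at a PEM bundle containing your CA certificate (worth verifying against the server.conf spec for your Splunk version). As a hedged illustration of the same trust model, Python's ssl module separates the two checks the same way; the CA path in the comment is a placeholder:

```python
import ssl

# Client-side TLS verification mirroring what splunkd enforces when
# sslVerifyServerCert / sslVerifyServerName are enabled.
context = ssl.create_default_context()
context.verify_mode = ssl.CERT_REQUIRED   # like sslVerifyServerCert = true
context.check_hostname = True             # like sslVerifyServerName = true

# Trust a specific self-signed CA explicitly (uncomment with a real path;
# the server.conf analogue is sslRootCAPath under [sslConfig]):
# context.load_verify_locations(cafile="/opt/splunk/etc/auth/myCA.pem")

print(context.verify_mode == ssl.CERT_REQUIRED, context.check_hostname)
```

Without the CA loaded into the trust store, a handshake fails with the same class of error as in the log above ("self signed certificate in certificate chain").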
I have a Splunk query, below, which pulls some events.

index="windows_events" TargetFileName="*startup*"

From the events I picked the below TargetFileName field value:

\Device\HarddiskVolume3\Users\XYZ\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup\Send to AbC.lnk

I then wanted to search specifically for that value, and for that I used the below query, which gives me no results:

`get_All_CrowdstrikeEDR` event_simpleName=FileCreateInfo os="Win" TargetFileName="*\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup\*"

What I don't understand is that the first query shows me events even though I used wildcards before and after "startup". When I extended the wildcard with the actual value, why isn't it working? Can't I use backslashes in Splunk searches?
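A likely explanation: in Splunk search strings, as in most regex-like syntaxes, the backslash is itself an escape character, so a literal path separator generally needs to be doubled (\\). Python's regex engine demonstrates the same rule; the pattern below is illustrative, not the exact SPL to paste:

```python
import re

path = (r"\Device\HarddiskVolume3\Users\XYZ\AppData\Roaming\Microsoft"
        r"\Windows\Start Menu\Programs\Startup\Send to AbC.lnk")

# A literal backslash must be written as two: \\ in the pattern matches
# one \ in the data (a single \ would instead escape the next character).
pattern = r"\\AppData\\Roaming\\Microsoft\\Windows\\Start Menu\\Programs\\Startup\\"
print(bool(re.search(pattern, path)))  # True
```

Applied to the SPL above, that would mean doubling each backslash inside the quoted TargetFileName value.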
I inherited a Splunk environment. I was informed the other day that a computers.csv lookup is not generating any results. Is there a way to find out what should be populating that file, which is currently empty? I did find the app which houses the lookup CSV.
Good afternoon, I'm having trouble changing the color of the data labels (the numbers) that appear on top of the bars. I need to change the current color (black) to white. Can someone help me? Panel code:

{
    "type": "viz.column",
    "title": "",
    "dataSources": { "primary": "ds_7YQhhskC" },
    "options": {
        "foregroundColor": "#FFFFFF",
        "fontColor": "#FFFFFF",
        "fieldColors": { "Sum of amount": "#A870EF" },
        "legend.placement": "top",
        "axisTitleX.text": "Days of the week",
        "axisTitleY.text": "Amount of transactions",
        "chart.showDataLabels": "all",
        "legend.labelStyle.overflowMode": "ellipsisNone",
        "yAxisVisibility": "show",
        "xAxisVisibility": "show",
        "backgroundColor": "transparent"
    },
    "showProgressBar": false,
    "showLastUpdated": false,
    "context": {}
}
Hi, I want to add a new Search Head to my existing 3-node SHC. My question is regarding the initialization step:

splunk init shcluster-config -auth <username>:<password> -mgmt_uri <URI>:<management_port> -replication_port <replication_port> -replication_factor <n> -conf_deploy_fetch_url <URL>:<management_port> -secret <security_key> -shcluster_label <label>

Regarding -secret <security_key>: if I look in server.conf on an existing SHC member, I can find the pass4SymmKey:

[shclustering]
pass4SymmKey = $9$dkjajkldjaj--

But I also have the original secret that was used to create the pass4SymmKey, e.g. password1234. Which do I use? And when I add the IDX cluster to the new SHC node, do I use the pass4SymmKey or the original secret? Thank you!
I am trialing Splunk and have installed the Splunk OTel Collector, but nothing is appearing in the console; the access token shows 0 hosts tied to it.
Good day all! UF version 8.2.9 on a series of Linux machines. I have an application containing local/server.conf deploying to a series of Linux machines. The machines have a mixed configuration of short and FQDN hostnames. For consistency, I want to use the short name. Each instance's environment contains a variable called HOST_EXTERNAL, which is the short name. The documentation states:

* Can contain environment variables.
* After any environment variables are expanded, the server name (if not an IPv6 address) can only contain letters, numbers, underscores, dots, and dashes. The server name must start with a letter, number, or an underscore.

ERROR: serverName must start with a letter, number, or underscore. You have: $HOST_EXTERNAL

serverName is only set in apps/app-name/local and system/default/server.conf:

system/default/server.conf:serverName = $HOSTNAME
app-name/local/server.conf:serverName = $HOST_EXTERNAL

Googling doesn't produce any examples of using an environment variable other than $HOSTNAME. What am I missing in attempting to use $HOST_EXTERNAL as serverName in server.conf? Thoughts?
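One plausible reading of the error (an assumption worth testing): the variable only expands if it is defined in the environment splunkd itself starts with, e.g. exported in splunk-launch.conf or the service unit, not just in a user's login shell. A small Python sketch of the documented expand-then-validate behaviour; the hostname value and validation regex are illustrative:

```python
import os
import re

os.environ["HOST_EXTERNAL"] = "web01"  # illustrative short hostname

def expand_server_name(value):
    """Mimic the documented order: expand environment variables first,
    then validate (letters, digits, underscores, dots, dashes; must
    start with a letter, digit, or underscore)."""
    expanded = os.path.expandvars(value)
    if not re.fullmatch(r"[A-Za-z0-9_][A-Za-z0-9_.-]*", expanded):
        raise ValueError(f"serverName failed validation: {expanded!r}")
    return expanded

print(expand_server_name("$HOST_EXTERNAL"))  # web01

# If HOST_EXTERNAL is NOT set in the environment splunkd starts with,
# expansion leaves the literal "$HOST_EXTERNAL", which starts with "$"
# and fails validation -- matching the error message in the question.
```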
You asked, we delivered. Splunk Observability Cloud has several new innovations giving you deeper visibility across your environments, and a unified approach to incident response. Now, you receive deeper context from the end-user experience through your network and across every transaction, and bring order to on-call chaos with improved alert accuracy and on-call scheduling, notification, and escalation capabilities.

Visibility across every user session and transaction, through the network: Whether you operate monolithic architectures or microservices, new Splunk capabilities provide deeper visibility and more context from your end user, through your cloud network, and throughout every transaction. Here's a summary of what's new and what's coming soon.

New capabilities for Digital Experience Monitoring

Mobile RUM React Native Library: With the addition of this latest library, users can now auto-instrument React Native applications with Splunk RUM. Now mobile developers and operations teams can extend comprehensive performance monitoring, directed troubleshooting, and end-to-end observability to their React Native mobile applications.

Synthetic Monitoring Private Locations and Advanced Settings: Splunk continues to add key functionality from the legacy Rigor Synthetic Monitoring product into Splunk Observability Cloud to attain feature parity for Synthetics in Observability. Advanced Settings offers our customers more options for browser and uptime test instrumentation to support additional synthetic testing use cases. Private Locations allow users to test beyond Splunk Synthetic Monitoring's public network so they can find, fix, and prevent web performance defects on any internal web application, in any environment, whether inside or outside of their firewalls.
RUM Session Replay (coming soon!): With Session Replay, a new capability for Splunk Browser RUM, users can gain visibility into end-user impact with a video reconstruction of every user interaction, correlate replay with the session waterfall view of granular user session data to quickly debug issues and reduce MTTR, and protect end-user PII with built-in text and image redaction options. Integrated Digital Experience Monitoring in Observability Cloud (coming soon!): Splunk delivers integrated digital experience monitoring with this enhancement that allows users to visualize browser RUM metrics correlated with page-level performance metrics from Synthetics test runs on a single screen. The integrated visualization of RUM metrics correlated with synthetic metrics will allow users to quickly discern synthetic or regional anomalies from systemic, real-user impacting errors so they can prioritize and accelerate issue resolution to deliver error-free digital experiences.   New capabilities for APM APM Autodetect: New Splunk APM Autodetect uses machine learning to significantly reduce manual effort and improve accuracy for service alerts. Autodetect establishes performance baselines for every service, creates automatic detectors based on sudden changes in latency, errors, and request rates, and allows engineers to customize and subscribe to notifications for alerts on these detectors. As a result, engineers reduce time and effort in reconfiguring their alerts, and receive the most accurate alerting across cloud-native environments. APM AlwaysOn Profiling, Memory Profiling for .NET and Node.js: AlwaysOn Profiling continues to expand language support, with memory profiling capabilities added last year for .NET and Node.js. Now, engineers can continuously measure how their code impacts CPU and memory usage in .NET, Node.js, and Java applications, linked in context with all of their trace data to help identify problems, all with minimal overhead. 
APM Trace Analyzer (coming soon!): Trace Analyzer, new from Splunk APM, confidently detects patterns across billions of transactions to find specific issues for any tag, user, or service. Now, teams can identify problems across any tag or attribute in your services, troubleshoot issues for specific users, and understand how an issue impacts customer groups.    New capabilities for Infrastructure Monitoring and Logging Infrastructure Monitoring Network Explorer: Network Explorer is a new feature within Splunk Infrastructure Monitoring to bring cloud network visibility to DevOps teams and help them resolve cloud network outages faster. Now, teams can easily monitor and assess their cloud network health, get a clear picture of their cloud environment and network topology, and optimize cloud network investments. Infrastructure Monitoring Metrics Pipeline Management (coming soon!): Splunk Infrastructure Monitoring’s Metrics Pipeline Management enables you to increase the scale of your monitoring while controlling costs. Now you can easily control and aggregate large volumes of metrics data, filtering out the data you don’t need with dynamically defined policy rules, so you can ingest, store and analyze only the data you need. Observability Cloud Log Timelines (coming soon!): Building upon Splunk’s strengths for logging and Log Views’ capabilities, we are launching  Log Timeline, a new feature that allows users to add logs-based time charts to their observability dashboards. Practitioners can now analyze trends based on log data to investigate a problem more easily and effectively, reducing their time to resolve.   Bring order to on-call chaos Now Splunk users can leverage Incident Intelligence and APM’s new Autodetect capabilities to dramatically increase on-call team efficiency. With these new innovations, DevOps teams get improved alert accuracy and streamlined workflows to quickly get from alert to resolution and reduce their MTTA and MTTR. 
Here’s a brief overview of what’s new. A unified approach to incident management Incident Intelligence: Splunk Incident Intelligence, part of the Splunk Observability Cloud, is an incident response solution that connects DevOps teams handling on-call responsibilities to the data they need to diagnose, remediate, and restore services, before their customers are impacted. For more, read the docs.   Try these capabilities today! If you’re already an Observability Cloud user you can get started today by following the links we’ve provided to documentation. For Splunk Cloud or Enterprise users, start an Observability Cloud trial today!    
I have an OpenCanary which is using a webhook to deliver data into my Splunk instance. It works really well, but my regex is a bit rubbish and the field extraction is not going well. The wizard is getting me a reasonable way, but OpenCanary moves the log items around in the rows, and this foxes the wizard, which seems to see the repetition and resists my attempts to defeat it when I try to take the text after certain labels (namely Port, which works as it's in the same location per line, plus Username, Password, and src_host). Two lines which should help with understanding my challenge:

message="{\"dst_host\": \"10.0.0.117\", \"dst_port\": 23, \"local_time\": \"2023-02-08 16:20:12.113362\", \"local_time_adjusted\": \"2023-02-08 17:20:12.113390\", \"logdata\": {\"PASSWORD\": \"admin\", \"USERNAME\": \"Administrator\"}, \"logtype\": 6001, \"node_id\": \"hostname.domain\", \"src_host\": \"114.216.162.49\", \"src_port\": 47106, \"utc_time\": \"2023-02-08 16:20:12.113383\"}" path=/opencanary/APIKEY_SECRET full_path=/opencanary/APIKEY_SECRET query="" command=POST client_address=100.86.224.114 client_port=54770

message="{\"dst_host\": \"10.0.0.117\", \"dst_port\": 22, \"local_time\": \"2023-02-08 16:20:11.922514\", \"local_time_adjusted\": \"2023-02-08 17:20:11.922544\", \"logdata\": {\"LOCALVERSION\": \"SSH-2.0-OpenSSH_5.1p1 Debian-4\", \"PASSWORD\": \"abc123!\", \"REMOTEVERSION\": \"SSH-2.0-PUTTY\", \"USERNAME\": \"root\"}, \"logtype\": 4002, \"node_id\": \"hostname.domain\", \"src_host\": \"61.177.172.124\", \"src_port\": 17802, \"utc_time\": \"2023-02-08 16:20:11.922536\"}" path=/opencanary/APIKEY_SECRET full_path=/opencanary/APIKEY_SECRET query="" command=POST client_address=100.86.224.114 client_port=54768

Can any regex experts help me build out pivots and reporting for my OpenCanary, which gets around 200,000 connection attempts every 7 days?
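Since the payload inside message="..." is escaped JSON whose keys move around between events, one approach (sketched below in Python; in Splunk this might translate to a rex capturing the message body followed by spath) is to capture the whole body with a single regex and parse it as JSON, instead of writing one regex per field. The sample line is shortened from the first event above:

```python
import json
import re

# One webhook line, shortened; quotes inside message=... arrive escaped.
line = (
    'message="{\\"dst_host\\": \\"10.0.0.117\\", \\"dst_port\\": 23, '
    '\\"logdata\\": {\\"PASSWORD\\": \\"admin\\", \\"USERNAME\\": \\"Administrator\\"}, '
    '\\"logtype\\": 6001, \\"src_host\\": \\"114.216.162.49\\", \\"src_port\\": 47106}" '
    'path=/opencanary/APIKEY_SECRET query="" command=POST client_address=100.86.224.114'
)

# Capture the message body, undo the \" escaping, parse as JSON. This
# copes with OpenCanary reordering the logdata keys from event to event.
body = re.search(r'message="(.*?)" path=', line).group(1)
event = json.loads(body.replace('\\"', '"'))

print(event["src_host"], event["logdata"]["USERNAME"], event["logdata"]["PASSWORD"])
```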
Hi! I'm trying to export the CMC health overview dashboard as a PDF and, hopefully, set it to send as an email attachment on a regular schedule. I have seen this answer: https://community.splunk.com/t5/Dashboards-Visualizations/Can-you-copy-a-dashboard-into-a-report/m-p/375881 and this doc https://docs.splunk.com/Documentation/Splunk/9.0.3/Report/GeneratePDFsofyourreportsanddashboards#:~:text=To%20schedule%20dashboard%20PDF%20emails,in%20the%20Data%20Visualizations%20Manual. on how to accomplish that. But the export options are not visible within the CMC app. Is this possible within the CMC app?
Because of a typo we had the following in our query:

earliest=-1@d

Since the Splunk query actually ran, I assumed that some kind of default value had been used. I could not find such details in the docs.
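Splunk's time modifier documentation states that a number without a unit is taken as seconds, so on that reading (worth confirming for your version) -1@d means "one second ago, snapped back to midnight", i.e. the start of today, rather than the start of yesterday that -1d@d would give. A Python sketch of that arithmetic, with a fixed "now" for illustration:

```python
from datetime import datetime, timedelta

def snap_to_day(dt):
    """The @d snap: truncate to midnight of dt's own day."""
    return dt.replace(hour=0, minute=0, second=0, microsecond=0)

now = datetime(2023, 2, 8, 16, 20, 12)

# -1@d, assuming the missing unit defaults to seconds:
# go back 1 second, then snap to midnight -> start of *today*.
earliest_typo = snap_to_day(now - timedelta(seconds=1))

# -1d@d, the likely intent: go back 1 day, then snap -> start of *yesterday*.
earliest_intended = snap_to_day(now - timedelta(days=1))

print(earliest_typo.isoformat(), earliest_intended.isoformat())
# 2023-02-08T00:00:00 2023-02-07T00:00:00
```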
We're in the middle of a micro-segmentation project and we're cataloging our Splunk resources. This is for an on-prem deployment. Splunk has a handy chart for ports, but this chart does not contain the Monitoring Console: https://docs.splunk.com/Documentation/Splunk/9.0.3/InheritedDeployment/Ports Does anyone know what ports are needed for the Monitoring Console? 8089 bi-directionally to all the Splunk servers, plus 9997 to the indexers, plus the web port is what I was thinking, but I couldn't find documentation to support that. Thanks, any help is appreciated.
Hi, I am looking for a way, when a notification is triggered in Splunk, to mention an employee or a group (@...) in the message in Microsoft Teams so they can get feedback. I already have the notifications set up so that, via the webhook, the notifications end up in the correct Teams channels. Thanks in advance!
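Incoming-webhook messages only render @mentions when they are sent as an Adaptive Card carrying an msteams mention entity whose <at>…</at> token exactly matches the card text, which usually means a relay or custom alert action rather than Splunk's stock webhook payload. A hedged Python sketch of such a payload; the display name and UPN are placeholders:

```python
import json

def mention_card(message, display_name, upn):
    """Adaptive Card payload that @-mentions one user in a Teams channel.
    display_name and upn are placeholders; the <at>...</at> token in the
    card text must exactly match the mention entity's "text" value."""
    at = f"<at>{display_name}</at>"
    return {
        "type": "message",
        "attachments": [{
            "contentType": "application/vnd.microsoft.card.adaptive",
            "content": {
                "type": "AdaptiveCard",
                "version": "1.4",
                "body": [{"type": "TextBlock", "text": f"{at} {message}"}],
                "msteams": {
                    "entities": [{
                        "type": "mention",
                        "text": at,
                        "mentioned": {"id": upn, "name": display_name},
                    }]
                },
            },
        }],
    }

payload = mention_card("Splunk alert fired, please review.",
                       "Jane Doe", "jane.doe@example.com")
body = json.dumps(payload)  # POST this JSON to the Teams endpoint
print("<at>Jane Doe</at>" in body)  # True
```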
Here is the query I have; I need to extract the "sts:ExternalId":

requestParameters: {
  policyDocument: {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "AllowAssumeRoleForAnotherAccount",
        "Effect": "Allow",
        "Principal": {
          "AWS": "arn:aws:iam::384280045676:role/jenkins-node-custom-efep"
        },
        "Action": "sts:AssumeRole",
        "Condition": {
          "StringEquals": {
            "sts:ExternalId": "efep"
          }
        }
      }
    ]
  }
}
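In CloudTrail events, requestParameters.policyDocument is itself a JSON-encoded string, so it has to be parsed a second time before sts:ExternalId is reachable (in SPL, typically an spath on the event followed by a second spath with input=policyDocument). A Python sketch of the two-step parse, using a record shaped like the excerpt above:

```python
import json

# CloudTrail stores requestParameters.policyDocument as a JSON *string*,
# so the record below wraps the policy in json.dumps to mimic that.
record = {
    "requestParameters": {
        "policyDocument": json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Sid": "AllowAssumeRoleForAnotherAccount",
                "Effect": "Allow",
                "Action": "sts:AssumeRole",
                "Condition": {"StringEquals": {"sts:ExternalId": "efep"}},
            }],
        })
    }
}

# Second parse: decode the embedded policy, then walk to the Condition.
policy = json.loads(record["requestParameters"]["policyDocument"])
external_ids = [
    s["Condition"]["StringEquals"]["sts:ExternalId"]
    for s in policy["Statement"]
    if "sts:ExternalId" in s.get("Condition", {}).get("StringEquals", {})
]
print(external_ids)  # ['efep']
```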
I want to see the 500 error count for each customer over time (Today / Yesterday / LastWeekOfDay), so 3 days in total. The screenshot below is a Kibana chart. How can we create the same kind of chart in Splunk? I have tried the timechart query below, but the x-axis has time first instead of customerId:

index="services" statusCode="500" | timechart span=1d count by customerId

I have also tried the query below, but I feel the count in the response is not correct:

index="services" statusCode="500" | bucket _time span=day | chart count by customerId,_time | head 10

Is there a better way to do it?
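To get customerId on the x-axis, the usual trick is chart count over customerId by the day bucket (chart lets you choose the row field with over, unlike timechart, which always uses _time). The resulting pivot shape can be sketched in Python; the customer IDs and day labels are made up:

```python
from collections import Counter

# (customerId, day_label) pairs for 500-responses; day_label stands in
# for the Today / Yesterday / LastWeekOfDay buckets.
errors = [
    ("cust-1", "today"), ("cust-1", "today"), ("cust-1", "yesterday"),
    ("cust-2", "today"), ("cust-2", "last_week"),
]

counts = Counter(errors)
customers = sorted({c for c, _ in errors})
days = ["today", "yesterday", "last_week"]

# One row per customer, one column per day -- the shape produced by
# `chart count over customerId by day`, not by a timechart.
table = {c: [counts[(c, d)] for d in days] for c in customers}
print(table)  # {'cust-1': [2, 1, 0], 'cust-2': [1, 0, 1]}
```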
Hello everyone, I have a column which contains week1, week2, week3, week4, and week5, and I want an input to the chart that shows me the data from week1 to week3, for example, or week2 to week5. How could I do that?
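One common pattern is two dropdown inputs (start week, end week) feeding a where clause that keeps the labels between the two selections, inclusive. The selection logic itself is easy to sketch in Python; the week labels mirror the column values described above:

```python
weeks = ["week1", "week2", "week3", "week4", "week5"]

def week_range(start, end):
    """Return the week labels between two dropdown selections,
    inclusive -- the filter a where clause on a week index performs."""
    i, j = weeks.index(start), weeks.index(end)
    return weeks[i:j + 1]

print(week_range("week2", "week5"))  # ['week2', 'week3', 'week4', 'week5']
```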
Hi, I have the following joined Splunk query:

index="myIndex" source="mySource1"
| fields _time, _raw
| rex "Naam van gebruiker: (?<USER>.+) -"
| dedup USER
| table USER
| sort USER
| join type=left
    [ search index="myIndex" source="mySource2" "User:myUserID The user is authenticated and logged in."
    | stats latest(_raw) ]

The results look like this: green is myUserID; red is some other person's user ID. Because I am using my hard-coded user ID, every person gets the "latest(_raw)" record corresponding to my user ID. I want each user to get their own event. I believe this can be done if I use the USER field in the second search, but I don't know the syntax to get it to work. I tried:

"User:'USER' The user is authenticated and logged in."

And also:

"User:\USER\ The user is authenticated and logged in."

But these don't work. What is the correct syntax?
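A commonly suggested alternative is to avoid the hard-coded ID altogether: a quoted search term cannot interpolate a field value, so instead have the subsearch extract USER with the same rex, return stats latest(_raw) by USER, and then join on USER. The per-user "latest" grouping that stats clause performs can be sketched in Python; the times and names are made up:

```python
# Each tuple: (epoch_time, user, raw_event). The goal is the latest
# "authenticated and logged in" event per user, not per one fixed id.
events = [
    (100, "alice", "User:alice The user is authenticated and logged in."),
    (200, "alice", "User:alice The user is authenticated and logged in."),
    (150, "bob",   "User:bob The user is authenticated and logged in."),
]

latest = {}
for t, user, raw in events:
    # Keep the newest event seen per user -- the equivalent of
    # `stats latest(_raw) by USER` instead of a join on one user id.
    if user not in latest or t > latest[user][0]:
        latest[user] = (t, raw)

print({u: t for u, (t, _) in latest.items()})  # {'alice': 200, 'bob': 150}
```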