All Topics

Hey, I want to add a _time column after the stats command but I couldn't work out the best command for it. For example:

index=* | eval event_time=strftime(_time, "%Y-%m-%d %H:%M:%S") | stats count by user, ip, action | iplocation ip | sort -count

How can I add this field? Thanks
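One possible approach (a sketch only, not tested against your data): carry the time through stats with an aggregate function such as latest() or max(), then format it after the stats command, e.g.

index=* | stats count latest(_time) as _time by user, ip, action | eval event_time=strftime(_time, "%Y-%m-%d %H:%M:%S") | iplocation ip | sort -count

Here latest(_time) is just one choice; max(_time) behaves the same for the newest event, and values(event_time) would keep every timestamp in the group instead of only the most recent one.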
We're excited to share an update to our instructor-led training program that enhances the learning experience for Splunk learners. Starting January 1, 2025, the completion criteria for many of our instructor-led courses will shift from lab grading to a focus on participation and knowledge comprehension. This change simplifies the learning process, aligns with industry best practices, and fosters a more engaging environment for learners. For those new to Splunk's instructor-led training, this update will feel seamless, as it reflects the standard structure of our courses moving forward.

Updated completion criteria
- Class Attendance: Learners must attend all scheduled class sessions.
- Knowledge Check Quiz: A short, open-note quiz will assess understanding. Learners must achieve an 80% passing score and will have up to 10 attempts. These quizzes are designed to support learning and are not certification exams.
- Lab Engagement (Optional): Labs remain an integral part of the training experience but are no longer mandatory for course credit.

The rationale behind the new completion criteria
By eliminating lab grading, we aim to:
- Simplify the training process for learners and instructors.
- Minimize administrative hurdles.
- Focus on active participation and comprehension during sessions.

If you have initial questions, we encourage you to review the FAQ for more details. Thank you for being part of the Splunk learning journey!

-Callie Skokos on behalf of the Splunk Education Crew
Welcome the new year with our January lineup of Community Office Hours, Tech Talks, and Webinars! Whether you're forging new resolutions or simply seeking fresh inspiration, this month's events will help you sharpen your skills and spark your creativity. Check out the details below!

What are Community Office Hours? Community Office Hours is an interactive 60-minute Zoom series where participants can ask questions and engage with technical Splunk experts on various topics. Whether you're just starting your journey with Splunk or looking for best practices to take your deployment to the next level, Community Office Hours provides a safe and open environment for you to get help. If you have an issue you can't seem to resolve, have a question you're eager to get answered by Splunk experts, are exploring new use cases, or just want to sit and listen in, Community Office Hours is for you!

What are Tech Talks? Tech Talks are designed to accelerate adoption and ensure your success. In these engaging 60-minute sessions, we dive deep into best practices, share valuable insights, and explore additional use cases to expand your knowledge and proficiency with our products. Whether you're looking to optimize your workflows, discover new functionalities, or troubleshoot challenges, Tech Talks is your go-to resource.

SECURITY
Office Hours | Security: Ask Me Anything
January 15, 2024 at 1pm PT
This is your opportunity to ask questions about your specific Splunk Security needs. Our experts are ready to answer all your questions in our first broad security topic session, such as:
- How can you get started with Splunk Security better? What are the essential steps?
- What are the latest innovations in ES, SOAR, Mission Control, SAA, and so on?
- What are the best practices for implementing security use cases, like incident management, RBA, automation, and so on?
- What is the best approach to building a unified workflow with ES, SOAR, and other security products?
- What are Splunk Security's third-party integrations, and how can these tools be configured best?
- How do industry frameworks such as ATT&CK and the Cyber Kill Chain work within ES and SOAR?
- Anything else you'd like to learn!

OBSERVABILITY
Office Hours | Splunk Application Performance Monitoring
January 14, 2024 at 1pm PT
What can you ask in this AMA?
- How can I send traces to APM?
- How do I track service performance with dashboards?
- What are some tips for setting up deployment environments?
- What are AutoDetect detectors, and how can I use them?
- What are best practices for high-value features like Tag Spotlight and Service-Centric views?
- How do I set up business workflows?
- Anything else you'd like to learn!

APP DEVELOPMENT
Office Hours | Splunk App Development
January 16, 2024 at 1pm PT
What can you ask in this AMA?
- How do we work with REST APIs?
- What SDKs are available for app development?
- How should we get started with Splunk UI development?
- What are some best practices to maintain & evolve Splunk Apps?
Hi all, I have this use case below. I need to create a Splunk alert for this scenario: detections will be created from Splunk logs for specific events like "Authentication failed", such as exceeding X number of failed logins over Y time. Below is the Splunk search I am using:

index=nprod_database sourcetype=tigergraph:app:auditlog:8542 host=VCAUSC11EUAT* | search userAgent OR "actionName":"login" "timestamp":"2025-01-07T*" | sort -_time

I am not able to write the correct search query to find "Authentication failed" exceeding a threshold, for example 3 times. Attached screenshot. Thanks for your help. Dieudonne.
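A rough sketch of the threshold part, assuming the failure events can be matched with the literal string "Authentication failed" and that a user (or similar) field exists in these events; the filter, field names, and span would need adjusting to the actual data:

index=nprod_database sourcetype=tigergraph:app:auditlog:8542 host=VCAUSC11EUAT* "Authentication failed" | bin _time span=15m | stats count by user, _time | where count > 3

Saved as an alert, you would typically drop the bin and instead schedule the search over the last Y minutes, so each run evaluates one window.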
Hello, We are Splunk Cloud subscribers. We want to utilize the NetApp for Splunk Add-On. We have two on-site deployment servers, one Windows and one Linux, and an on-site heavy forwarder. My interpretation of the instructions is that we install the NetApp Add-Ons (ONTAP Indexes & Extractions) within the cloud-hosted search head. The Cloud instructions leave me with the impression that we may need to utilize the heavy forwarder as a data collection node for the NetApp Add-Ons as well; there we would manually install the app components within the Splunk home /etc/apps directory. Looking within the deployment server and the heavy forwarder, both Splunk home directories have directory permissions set to 700. We're hoping this method of installation does not apply to us, and that the cloud installation process automated much of this and obviated the need to manually configure the heavy forwarder. Upon completing these Add-On installations via the cloud-hosted search head, are there any additional steps or actions we will need to take to complete the installation, aside from the NetApp appliance configurations? Thank you, Terry
I'm currently going over our alerts, cleaning them up and optimizing them.  However, I recall there being a "best practice" when it comes to writing SPL. Obviously, there may be caveats to it, but what is the usual best practice when structuring your SPL commands? Is this correct or no? search, index, source, sourcetype | where, filter, regex | rex, replace, eval | stats, chart, timechart | sort, sortby | table, fields, transpose | dedup, head | eventstats, streamstats | map, lookup
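As a rough illustration of the usual guidance (filter as early as possible in the base search, trim fields before transforming, and present last), here is a sketch with made-up index and field names, not a rule:

index=web sourcetype=access_combined earliest=-24h status>=400 | fields _time, host, status, uri | eval class=if(status>=500, "server_error", "client_error") | stats count by host, class | sort -count | table host, class, count

The general idea is that each command should pass as few events and fields as possible to the next one; comparatively expensive commands such as dedup, eventstats, and map belong as late, and on as little data, as possible.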
Hello, I have a .NET Transaction Rule named "/ws/rest/api". The matching rule is a regex: /ws/rest/api/V[0-9].[0-9]/pthru
A couple of examples of the URLs that would match this rule are:
/ws/rest/api/V3.0/pthru/workingorders
/ws/rest/api/V4.0/pthru/cart
/ws/rest/api/V4.0/pthru/cart/items
I am splitting the rule by URI segments 4, 5, 6, but the resulting name is: /ws/rest/api.V4.0pthruCart
Is there a way to add "/" between each segment, or is there a better way to do this that gives us a better-looking transaction name? Thanks for your help, Tom
The Splunk platform will transition to OpenSSL version 3 in a future release. Actions are required to prepare for this change.

What's changing?
OpenSSL version 3 is a significant upgrade from version 1. OpenSSL 3 features a new versioning scheme, significantly improved security features, and a new "Provider" concept for managing different cryptographic algorithms. It is generally not backward compatible, meaning applications designed for OpenSSL 1 may need significant changes to work with version 3. The Splunk platform is upgrading to the latest version of OpenSSL 3 in a future release to continuously improve our security posture. Splunk customers' environments will require a few changes before they can upgrade to the Splunk version with OpenSSL 3, including, but not limited to, the following:
- use TLS 1.2 only
- include the X509v3 extension for your CA certificate
- ensure all Splunk apps relying on OpenSSL 3 are compatible with Python 3.9 and Node.js 20 or higher (if using those languages)
- become FIPS-certified for FedRAMP or FISMA customers

The following delves deeper into each of the criteria mentioned above.

1. Use TLS 1.2 only
With 9.4, Splunk Enterprise announced the deprecation of TLS 1.0 and 1.1. TLS 1.0 and 1.1 (and SSL 3.0 and lower) are outdated protocols that use weak and insecure ciphers (e.g., International Data Encryption Algorithm (IDEA), Data Encryption Standard (DES)) to establish secure connections. They were formally deprecated in RFC 8996 in March 2021. Additionally, the National Institute of Standards and Technology (NIST) formalized policy 800-52 in 2014, which requires US government agencies to adopt TLS 1.2 and deprecate the use of TLS 1.1 and earlier. Lastly, OpenSSL 3 deprecated support for any TLS version older than 1.2. Removing support for TLS 1.1, 1.0, and SSL3 will lay the foundation for Splunk and its customers to upgrade to TLS 1.3, another mandate for US PBST + EMEA customers.
Actions to take: Confirm that your Splunk environment is configured to use the TLS 1.2 protocol anywhere you can specify a TLS version. The key places to look for the value are server.conf, web.conf, outputs.conf, and inputs.conf.

2. Ensure CA certificates used in Splunk include the X509v3 extension
OpenSSL 3 requires that any CA certificate include the X509v3 Basic Constraints extension with CA: TRUE. Customers should ensure that any certificate used as a CA certificate in Splunk contains this extension.
Actions to take: Update or replace any CA certificate that does not include CA: TRUE in the X509v3 Basic Constraints extension.

3. Make sure apps are compatible with OpenSSL 3, Python 3.9, and Node.js 20 or higher
All apps installed in your Splunk environment must be compatible with OpenSSL 3. This means that any configurations in these apps that specify a TLS version must specify TLS 1.2 only, and it also means that apps that directly depend on the OpenSSL library must be using it in a way that is compatible with OpenSSL 3 (e.g., deprecated APIs and cipher suites should not be used). Apps relying on OpenSSL 3 should also be compatible with Python 3.9 and Node.js 20 or higher (if using those languages). While Splunk does not currently have an automated approach to identifying all of these apps, we advise you to make sure any development teams maintaining private apps you have built for your own internal use cases comply with this change.
4. Prep for FIPS 140-3 certification
Splunk maintains an active commitment to meeting the requirements of the FIPS 140 standard. Splunk Enterprise and Universal Forwarder currently use an embedded cryptographic FIPS 140-2 module (4165), which can be activated for the Linux and Windows operating systems. The FIPS 140-3 standard was introduced in September 2019 and supersedes FIPS 140-2. As of September 2021, the Cryptographic Module Validation Program (CMVP) no longer accepts new FIPS 140-2 modules for validation. All FIPS 140-2 modules can remain active until September 21, 2026, and will then be moved to the Historical List. This means that Splunk must obtain a FIPS 140-3 certification, which requires upgrading to OpenSSL 3. Learn more about the transition from FIPS 140-2 to 140-3 (NIST).
Actions to take:
- All FedRAMP (Hi/Mod) Splunk Cloud customers and FISMA Splunk Enterprise customers that require a CMVP-validated FIPS module for their crypto library should ensure they are on a supported version of Splunk. All active and supported versions of Splunk are FIPS-certified. Customers should also plan for future Splunk releases when we upgrade our FIPS certificate to FIPS 140-3.
- The operating system on which you run Splunk Enterprise should also run in FIPS mode. For example, RHEL 8.x and Ubuntu 20.04 are FIPS 140-2 compliant operating systems, whereas RHEL 9.x and Ubuntu 22.04 only recently received FIPS 140-3 certification.
- Any app running on Splunk that requires cryptographic operations should only use a FIPS-certified version of the crypto modules (e.g., OpenSSL, BoringCrypto, BouncyCastle, etc.). Using the FIPS-certified crypto module that already ships with Splunk is easiest.
Contents:
- What is the App Agent vs Coordinator?
- App Agent status vs Machine Agent status
- Why is my app agent status 0% on IIS applications?
- What are the options for having 100% app agent status?
- What if I cannot modify IIS settings?

What is the App Agent vs Coordinator?
The AppDynamics.Agent.Coordinator is the orchestration on when to inject the app agent's DLLs into an application, as well as collecting machine metrics (CPU, memory, performance counters, etc.). The Coordinator does not monitor any application on the server, as this is the responsibility of the app agent. In an environment where the profiler environment variables are defined, any .NET runtime at startup will check whether the application should be profiled and what profiler to inject. As part of the installation process, the MSI package will create the necessary profiler environment variables. https://learn.microsoft.com/en-us/dotnet/framework/unmanaged-api/profiling/setting-up-a-profiling-environment

Profiler environment variables:
- COR_PROFILER: full framework profiler to be injected into the application
- COR_ENABLE_PROFILING: boolean value for whether or not full framework profiling is enabled
- COR_PROFILER_PATH: path to where the full framework profiler resides
- CORECLR_PROFILER: .NET Core profiler to be injected into the application
- CORECLR_ENABLE_PROFILING: boolean value for whether or not .NET Core profiling is enabled
- CORECLR_PROFILER_PATH: path to where the .NET Core profiler resides

If the .NET application is full framework, it will write a message to the Event Viewer's Application logs. Sample of a successful instrumentation:
.NET Runtime version 4.0.30319.0 - The profiler was loaded successfully. Profiler CLSID: 'AppDynamics.AgentProfiler'. Process ID (decimal): 110060. Message ID: [0x2507].
When the application does not match an application to be monitored in the config.xml of the Coordinator, it will not inject the agent DLLs:
.NET Runtime version 4.0.30319.0 - The profiler has requested that the CLR instance not load the profiler into this process. Profiler CLSID: 'AppDynamics.AgentProfiler'. Process ID (decimal): 111500. Message ID: [0x2516].
Both messages are at the Information level. Neither message is a cause for alarm; they are informational only.

App Agent status vs Machine Agent status
The AppDynamics.Agent.Coordinator reports to the controller, and one of the metrics it reports is [Availability]. This metric represents the Machine Agent status on the Controller's Tiers & Nodes page. The App Agent status is the app agent that is injected into your application. If your application is not running, then neither is the app agent. This leads us to the next point regarding IIS applications.

Why is my app agent status 0% on IIS applications?
The app agent is injected into your application and shares the application's lifecycle. For IIS, this means the app agent's DLLs are injected into the w3wp process on .NET startup, which can only happen at the startup of the process. However, app pools are managed by IIS, and the default settings do the following:
- App pools are not started by default; traffic must be sent to the application first.
- App pools that have not received any traffic for 20 minutes will be terminated.
As mentioned earlier, the app agent shares the application's lifecycle, so you can see how these default settings might affect the app agent status that is displayed on the controller. Two possible scenarios with the default IIS settings can cause the app agent status to show 0%.
- The app pool was killed by IIS because there was no activity on the application. On the controller, you will see a downward trend in the app agent status during periods of idle activity.
- The server was restarted and no traffic is currently being sent to the application. Therefore, no w3wp process has been started, so the controller shows 0% for the app agent status.

What are the options for having 100% app agent status?
Three settings must be changed to ensure that the app pool is running and remains running regardless of traffic or server restarts:
- Idle Timeout: https://learn.microsoft.com/en-us/previous-versions/iis/6.0-sdk/ms525537(v=vs.90)
- Start Mode: https://learn.microsoft.com/en-us/iis/configuration/system.applicationhost/applicationpools/applicationpooldefaults/#:~:text=is%201000.-,startMode,-Optional%20enum%20value
- IIS Application Initialization (requires IIS 8.0): https://learn.microsoft.com/en-us/iis/get-started/whats-new-in-iis-8/iis-80-application-initialization
The Idle Timeout property is responsible for terminating an app pool that has not received traffic after some time (the default is 20 minutes). Setting this property to 0 will prevent IIS from terminating the app pool regardless of how long the app pool is idle. Start Mode should be set to AlwaysRunning instead of the default value of OnDemand. IIS Application Initialization requires IIS 8.0; when the server starts, IIS will invoke a fake request to the specified page to start the app pool. Follow the instructions listed in the link above for the detailed steps.

What if I cannot modify IIS settings?
You can modify the config.xml to monitor the performance counter "Current Application Pool State" (part of the APP_POOL_WAS category) for your particular app pool, and create a health rule to trigger in the event that the app pool is in a stopped state. "Current Application Pool State" possible values: Starting, Started, Stopping, Stopped, Unknown. However, you need to be aware of the following:
- An app pool can be assigned to multiple sites and applications. There is no way to get a granular scope to a single application unless each IIS application/site uses a unique app pool.
- There are really only three observable states for "Current Application Pool State": Started, Stopped, and Unknown. The in-between states are too quick to capture and report on.
- Understand the difference between an app pool and a worker process. Having an app pool in a started state does not mean your application, and by extension the agent, is running.
- In addition, an app pool in the started state does not mean your application is able to start. For example, .NET runtime errors at startup can prevent the application from starting even though the app pool is started.
I strongly recommend modifying the IIS settings to get a true app agent status rather than relying on the "Current Application Pool State" performance counter, but this option is available if your circumstances prevent modification of the IIS settings and the limitations above are not a concern. With the caveats out of the way, let's discuss how to make this change.
Config.xml:
<machine-agent>
  <perf-counters>
    <perf-counter cat="APP_POOL_WAS" name="Current Application Pool State" instance="MY_APP_POOL_NAME" />
  </perf-counters>
</machine-agent>
Then create a new health rule to trigger if the app pool state is not Started.
Inherited Splunk deployment. It looks like authentication was set up with ProxySSO. I am unfamiliar with this, and we are planning on migrating the ProxySSO authentication to SAML. In the past, I have used the web UI for authentication methods like LDAP. ProxySSO seems to be configured in a backend conf file? I am not sure how to proceed: will there be a conflict if I just add the SAML authentication method, and will it simply override the ProxySSO configuration? Or does the ProxySSO conf need to be removed first and then SAML configured? If that is the case, what is the method to remove it? Thank you
Hello, First, I am aware that there are multiple posts regarding my question, but I can't seem to use them in my scenario. Please see an example below. There are two fields, location and name. I need to filter out names that contain "2" and count the names by location. I came up with this search, but the problem is it did not include location A (because the count is zero). Please suggest. I appreciate your help. Thanks

| makeresults format=csv data="location, name
location A, name A2
location B, name B1
location B, name B2
location C, name C1
location C, name C2
location C, name C3"
| search name != "*2*"
| stats count by location

Data:
location, name
location A, name A2
location B, name B1
location B, name B2
location C, name C1
location C, name C2
location C, name C3

Expected output:
location, count(name)
location A, 0
location B, 1
location C, 2
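One sketch that keeps the zero-count location: instead of filtering the rows away (which removes location A entirely), turn the condition into a flag and sum it, so every location survives the stats:

| makeresults format=csv data="location, name
location A, name A2
location B, name B1
location B, name B2
location C, name C1
location C, name C2
location C, name C3"
| eval keep=if(like(name, "%2%"), 0, 1)
| stats sum(keep) as count by location

With the sample data this should return 0 for location A, 1 for location B, and 2 for location C.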
Hello Team, How do I search a specific app's user success and failure events by month, for Jan to Dec? Base search:

index=my_index app=a | table app action user | eval Month=strftime(_time,"%m") | stats count by user Month

I am not getting any results from the above search.
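Two things worth noting, plus a sketch. The table app action user command most likely drops _time before the later strftime(_time, "%m") runs, leaving Month empty and the final stats with nothing to group on; removing the table (or adding _time to it) should help. Also make sure the time range picker actually covers Jan to Dec. A possible sketch, assuming action carries values like success/failure (the user filter is a placeholder):

index=my_index app=a user=<specific_user> earliest=-1y@y latest=now | eval Month=strftime(_time, "%Y-%m") | chart count over Month by action

This gives one row per month with a column per action value; use stats count by user, Month, action instead if you need it broken out per user.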
Our Splunk security alert integration stopped working last month (December). We had been sending an alert automatically from Splunk Cloud to our onmicrosoft.com@amer.teams.ms e-mail. Is support for this being deprecated on the Microsoft side? Or is this a whitelisting issue? Has anyone else experienced a similar problem?
Here is my raw data in the Splunk query:
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"> <s:Body xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <application xmlns="http://www.abc.com/services/listService"> <header> <user>def@ghi.com</user> <password>al3yu2430nald</password>
I want to mask the password value and show it in the Splunk output as:
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"> <s:Body xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <application xmlns="http://www.abc.com/services/listService"> <header> <user>def@ghi.com</user> <password>xxxxxxxxxxxx</password>
How can I do that?
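If masking only the search output is enough, a search-time sketch (the stored event itself is unchanged; masking before indexing would instead be done with SEDCMD in props.conf on the ingest tier); the base search below is a placeholder:

<your base search> | rex field=_raw mode=sed "s/<password>[^<]*<\/password>/<password>xxxxxxxxxxxx<\/password>/g"

The sed expression replaces whatever sits between the password tags with a fixed string of x characters.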
Hello, I have 2 queries where the indices are different and have a common field dest_ip which is my focus (same field name in both indices). Please note that there are also some other common fields such as src_ip, action, etc.

Query 1:
index=*corelight* sourcetype=*corelight* server_name="*microsoft.com*"
additional fields: action, ssl_version, ssl_cipher

Query 2:
index="*firewall*" sourcetype=*traffic* src_ip=10.1.1.100
additional fields: _time, src_zone, src_ip, dest_zone, transport, dest_port, app, rule, action, session_end_reason, packets_out, packets_in, src_translated_ip, dvc_name

I'm trying to output all the corresponding server_names for each dest_ip, as a table with all the listed fields from both query outputs. I'm new to Splunk and learning my way; I've tried the following so far.

A) Using join (which is usually very slow and sometimes doesn't give me a result):
index=*corelight* sourcetype=*corelight* server_name=*microsoft.com* | join dest_ip [ search index="*firewall*" sourcetype=*traffic* src_ip=10.1.1.100 | fields src_ip, src_user, dest_ip, rule, action, app, transport, version, session_end_reason, dvc_name, bytes_out ] | dedup server_name | table _time, src_ip, dest_ip, transport, dest_port, app, rule, server_name, action, session_end_reason, dvc_name | rename _time as "timestamp", transport as "protocol"

B) Using an OR:
(index=*corelight* sourcetype=*corelight* server_name=*microsoft.com*) OR (index="*firewall*" sourcetype=*traffic* src_ip=10.1.1.100) | dedup src_ip, dest_ip | table src_zone, src_ip, dest_zone, dest_ip, server_name, transport, dest_port, app, rule, action, session_end_reason, packets_out, packets_in, src_translated_ip, dvc_name | rename src_zone AS From, src_ip AS Source, dest_zone AS To, dest_ip AS Destination, server_name AS SNI, transport AS Protocol, dest_port AS Port, app AS "Application", rule AS "Rule", action AS "Action", session_end_reason AS "End Reason", packets_out AS "Packets Out", packets_in AS "Packets In", src_translated_ip AS "Egress IP", dvc_name AS "DC"

My questions:
- Would you suggest a better way to write/construct my above queries?
- In my OR output, I only see a couple of columns populating values (e.g. src_ip, dest_ip, action) while the rest are empty. My guess is they're populating because they are the common fields between the two. Since I'm unable to populate the others, maybe I need to do a left join?
- Can you kindly guide me on how to rename fields specific to each index when combining queries using OR? I've tried a few times but haven't been successful. For example, in my above OR statement, how and where in the query do I rename the field ssl_cipher in index=*corelight* to ENCRYPT_ALGORITHM?

Many thanks!
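A common join-free sketch for this kind of correlation: bring both indexes in with OR (as in option B), normalize the per-index fields with eval before aggregating, then let stats values(...) by dest_ip stitch the two sides together. The field names below are taken from the post above, so treat them as assumptions to adjust:

(index=*corelight* sourcetype=*corelight* server_name=*microsoft.com*) OR (index="*firewall*" sourcetype=*traffic* src_ip=10.1.1.100)
| eval ENCRYPT_ALGORITHM=if(like(index, "%corelight%"), ssl_cipher, null())
| stats values(server_name) as SNI, values(ENCRYPT_ALGORITHM) as ENCRYPT_ALGORITHM, values(src_zone) as From, values(src_ip) as Source, values(dest_zone) as To, values(transport) as Protocol, values(dest_port) as Port, values(app) as Application, values(rule) as Rule, values(action) as Action, values(dvc_name) as DC by dest_ip
| rename dest_ip as Destination

Because the per-index rename happens in the eval (conditioned on the index field), this also covers the ssl_cipher-to-ENCRYPT_ALGORITHM question; a column stays empty only when neither index supplied a value for that dest_ip.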
Yesterday I upgraded Splunk on one of my Deployment Servers from 9.3.1 with the 9.4.0 rpm on an Amazon Linux host and ran into the following error after starting Splunk with: /opt/splunk/bin/splunk start --accept-license --no-prompt --answer-yes
(typical batch of startup messages here ... until)
sh: line 1: 16280 Segmentation fault      (core dumped) splunk migrate renew-certs 2>&1
ERROR while running renew-certs migration.
Repeated attempts at starting produced the same result, and I ended up having to revert to the prior version. This is, in fact, the first failed upgrade I've had since I started using this product over 10 years ago. I have backed out of the upgrade, but considering the vagueness of this error message, I'm asking the community if anyone has seen this before.
FYI, it's possible if you have HF => third party s2s => indexer.
I'm building a search which takes a URL and returns all events from separate indexes/products where a client (user endpoint, server, etc.) attempted access. The goal is to answer "who tried to visit url X". I have reviewed the default CIM data models here: https://docs.splunk.com/Documentation/CIM/5.1.0/User/CIMfields However, none seem to fit this specific use case. Can anyone sanity check me to see if I've overlooked one? Thanks!
I need to upgrade the Splunk Universal Forwarder version on all the existing installed Windows 2016 and 2019 servers. I am using Splunk Enterprise as a search head and indexer. Is there a way that I can upgrade the old version to the latest without uninstalling the old one and installing the new one? And how can this task be done for all the servers together instead of one by one?
Hi Everyone, I am trying to create a dashboard out of a search query but I am getting stuck because I am unable to see the host details in the dashboard. The query is:

index="vm-details" | eval date=strftime(_time, "%Y-%m-%d") | stats dc(host) as host_count, values(host) as hosts by date | sort date

I am getting host_count and date in the dashboard, but my requirement is that the hostnames should appear when hovering over host_count. I tried using values(host) directly but that didn't work. Can someone help? CC: @ITWhisperer Thanks, Veeresh Shenoy S