Does the Splunk UF agent 9.0.1 support AWS Linux 3?
Hi, I would like to integrate a viz like the one below in my dashboard, but I wonder what is used to embed a chart inside a table row. What kind of visualization is actually used? Does anybody have XML examples? Thanks
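One way this kind of in-row chart is commonly built is with the sparkline() function inside stats, which renders a small trend line in each table cell. A minimal sketch of that approach, with a hypothetical index and split-by field (not taken from the screenshot):

index=web_logs sourcetype=access_combined ``` hypothetical source ```
| stats sparkline(count, 1h) as trend, count by host
| sort - count

If the screenshot shows something richer than a sparkline, it may instead be a custom table cell renderer added through the dashboard's JavaScript extensions rather than a built-in visualization.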
I created a test user and assigned the Viewer role. The test user should not be able to see the Settings option or the Manage App Settings option. How can I hide both? Please help me with a detailed process. Vijreddy
I have created a test user and assigned the Viewer role. My requirement is to hide the Settings and Manage App Settings options so that the test user is not able to see them. Please help me with a detailed process. Regards, Vijay
I have a log with related events. One event has the number of widgets made in the period and another event has the actual time taken to make the widgets in that period. I can do a search and get a timechart of the number of widgets and the time used, but what I want is a timechart of actual time / number of widgets made. How do I construct a search to do that?
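A common pattern for a ratio timechart is to aggregate both series per time bucket first and then divide with eval. A rough sketch, where the index, search terms, and the widget_count / actual_time field names are assumptions standing in for whatever the two event types actually contain:

index=factory_logs ("widgets made" OR "time taken") ``` hypothetical index and terms ```
| timechart span=1h sum(widget_count) as widgets, sum(actual_time) as total_time
| eval time_per_widget=round(total_time / widgets, 2)
| fields _time, time_per_widget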
Hi All, how can I search for a range of characters in Splunk? For example, I want to search for people whose names start with A-L but not M-Z. user = [A*-Z*], can I have something like this?
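Wildcard ranges like [A*-Z*] are not supported in the base search, but a regular expression filter after the base search can do this. A minimal sketch, assuming the field is literally named user and the base search is a placeholder:

index=main ``` hypothetical base search ```
| regex user="^(?i)[a-l]"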
Our Java agent isn't reporting to the controller, though in the logs we see a message saying the agent was successfully started. I don't see any message that it is connected to the controller, but the node is shown as [null].

Picked up _JAVA_OPTIONS: -Djdk.tls.maxCertificateChainLength=20
Java 9+ detected, booting with Java9Util enabled.
Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_APPLICATION_NAME] for application name [App_Name]
Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_TIER_NAME] for tier name [Tier_Name]
Full Agent Registration Info Resolver using selfService [false]
Full Agent Registration Info Resolver using selfService [false]
Full Agent Registration Info Resolver using ephemeral node setting [false]
Full Agent Registration Info Resolver using application name [App_Name]
Read property [reuse node name] from system property [appdynamics.agent.reuse.nodeName]
Full Agent Registration Info Resolver using tier name [Tier_Name]
Full Agent Registration Info Resolver using node name [null]
Install Directory resolved to[/opt/appdyn/javaagent/23.8.0.35032]
getBootstrapResource not available on ClassLoader
Class with name [com.ibm.lang.management.internal.ExtendedOperatingSystemMXBeanImpl] is not available in classpath, so will ignore export access.
[AD Agent init] Thu Oct 05 17:45:32 UTC 2023[DEBUG]: JavaAgent - Setting AgentClassLoader as Context ClassLoader
[AD Agent init] Thu Oct 05 17:45:33 UTC 2023[INFO]: JavaAgent - Low Entropy Mode: Attempting to swap to non-blocking PRNG algorithm
[AD Agent init] Thu Oct 05 17:45:33 UTC 2023[INFO]: JavaAgent - UUIDPool size is 10
Agent conf directory set to [/opt/appdyn/javaagent/23.8.0.35032/ver23.8.0.35032/conf]
[AD Agent init] Thu Oct 05 17:45:33 UTC 2023[INFO]: JavaAgent - Agent conf directory set to [/opt/appdyn/javaagent/23.8.0.35032/ver23.8.0.35032/conf]
[AD Agent init] Thu Oct 05 17:45:33 UTC 2023[DEBUG]: AgentInstallManager - Full Agent Registration Info Resolver is running
[AD Agent init] Thu Oct 05 17:45:33 UTC 2023[INFO]: AgentInstallManager - Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_APPLICATION_NAME] for application name [App_Name]
[AD Agent init] Thu Oct 05 17:45:33 UTC 2023[INFO]: AgentInstallManager - Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_TIER_NAME] for tier name [Tier_Name]
[AD Agent init] Thu Oct 05 17:45:33 UTC 2023[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using selfService [false]
[AD Agent init] Thu Oct 05 17:45:33 UTC 2023[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using selfService [false]
[AD Agent init] Thu Oct 05 17:45:33 UTC 2023[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using ephemeral node setting [false]
[AD Agent init] Thu Oct 05 17:45:33 UTC 2023[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using application name [App_Name]
[AD Agent init] Thu Oct 05 17:45:33 UTC 2023[INFO]: AgentInstallManager - Read property [reuse node name] from system property [appdynamics.agent.reuse.nodeName]
[AD Agent init] Thu Oct 05 17:45:33 UTC 2023[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using tier name [Tier_Name]
[AD Agent init] Thu Oct 05 17:45:33 UTC 2023[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using node name [null]
[AD Agent init] Thu Oct 05 17:45:33 UTC 2023[DEBUG]: AgentInstallManager - Full Agent Registration Info Resolver finished running
[AD Agent init] Thu Oct 05 17:45:33 UTC 2023[INFO]: AgentInstallManager - Agent runtime directory set to [/opt/appdyn/javaagent/23.8.0.35032/ver23.8.0.35032]
[AD Agent init] Thu Oct 05 17:45:33 UTC 2023[INFO]: AgentInstallManager - Agent node directory set to [Tier_Name-35-vvcbk]
Agent runtime conf directory set to /opt/appdyn/javaagent/23.8.0.35032/ver23.8.0.35032/conf
[AD Agent init] Thu Oct 05 17:45:33 UTC 2023[INFO]: AgentInstallManager - Agent runtime conf directory set to /opt/appdyn/javaagent/23.8.0.35032/ver23.8.0.35032/conf
[AD Agent init] Thu Oct 05 17:45:33 UTC 2023[INFO]: JavaAgent - JDK Compatibility: 1.8+
[AD Agent init] Thu Oct 05 17:45:33 UTC 2023[INFO]: JavaAgent - Using Java Agent Version [Server Agent #23.8.0.35032 v23.8.0 GA compatible with 4.4.1.0 rc2229efcc98cb79cc989b99ed8d8e30995dc1e70 release/23.8.0]
[AD Agent init] Thu Oct 05 17:45:33 UTC 2023[INFO]: JavaAgent - Running IBM Java Agent [No]
[AD Agent init] Thu Oct 05 17:45:33 UTC 2023[INFO]: JavaAgent - Java Agent Directory [/opt/appdyn/javaagent/23.8.0.35032/ver23.8.0.35032]
[AD Agent init] Thu Oct 05 17:45:33 UTC 2023[INFO]: JavaAgent - Java Agent AppAgent directory [/opt/appdyn/javaagent/23.8.0.35032/ver23.8.0.35032]
Agent logging directory set to [/opt/appdyn/javaagent/23.8.0.35032/ver23.8.0.35032/logs]
[AD Agent init] Thu Oct 05 17:45:33 UTC 2023[INFO]: JavaAgent - Agent logging directory set to [/opt/appdyn/javaagent/23.8.0.35032/ver23.8.0.35032/logs]
[AD Agent init] Thu Oct 05 17:45:34 UTC 2023[INFO]: JavaAgent - Logging set up for log4j2
[AD Agent init] Thu Oct 05 17:45:34 UTC 2023[INFO]: JavaAgent - ####################################################################################
[AD Agent init] Thu Oct 05 17:45:34 UTC 2023[INFO]: JavaAgent - Java Agent Directory [/opt/appdyn/javaagent/23.8.0.35032/ver23.8.0.35032]
[AD Agent init] Thu Oct 05 17:45:34 UTC 2023[INFO]: JavaAgent - Java Agent AppAgent directory [/opt/appdyn/javaagent/23.8.0.35032/ver23.8.0.35032]
[AD Agent init] Thu Oct 05 17:45:34 UTC 2023[INFO]: JavaAgent - Using Java Agent Version [Server Agent #23.8.0.35032 v23.8.0 GA compatible with 4.4.1.0 rc2229efcc98cb79cc989b99ed8d8e30995dc1e70 release/23.8.0]
[AD Agent init] Thu Oct 05 17:45:34 UTC 2023[INFO]: JavaAgent - All agent classes have been pre-loaded
getBootstrapResource not available on ClassLoader
Agent will mark node historical at normal shutdown of JVM
Started AppDynamics Java Agent Successfully.
I have a query that gives me four totals for a month. I am trying to figure out how to show each of the four totals for each day searched. Here is what I have so far:

index=anIndex sourcetype=aSourcetype "SFTP upload finished" OR "File sent to MFS" OR "File download sent to user" OR "HTTP upload finished" earliest=-0month@month latest=now
| bucket _time span=day
| stats count(eval(searchmatch("SFTP upload finished"))) as SFTPCount count(eval(searchmatch("File sent to MFS"))) as MFSCount count(eval(searchmatch("File download sent to user"))) as DWNCount count(eval(searchmatch("HTTP upload finished"))) as HTTPCount
| table SFTPCount MFSCount DWNCount HTTPCount

SFTPCount  MFSCount  DWNCount  HTTPCount
30843      535       1584      80

Now, how do I show the results by each day? I already have a line to specify my bucket.
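Since the events are already bucketed by day, adding _time to the stats by clause splits the four counts per day instead of producing one overall row. A sketch built directly on the posted search (the OR terms are wrapped in parentheses so they group together):

index=anIndex sourcetype=aSourcetype ("SFTP upload finished" OR "File sent to MFS" OR "File download sent to user" OR "HTTP upload finished") earliest=-0month@month latest=now
| bucket _time span=day
| stats count(eval(searchmatch("SFTP upload finished"))) as SFTPCount,
        count(eval(searchmatch("File sent to MFS"))) as MFSCount,
        count(eval(searchmatch("File download sent to user"))) as DWNCount,
        count(eval(searchmatch("HTTP upload finished"))) as HTTPCount
        by _time
| table _time SFTPCount MFSCount DWNCount HTTPCount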
How can one add to the results of a Splunk query run in the Splunk UI the time span that was searched, i.e. the values one would put in earliest_time and latest_time (when the earliest and latest time come only from the time range picker drop-down in the Splunk UI)?
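The addinfo command attaches the search's effective time range, including whatever was chosen in the time range picker, to every result as info_min_time and info_max_time, which can then be formatted into readable columns. A minimal sketch with a placeholder base search:

index=main ``` hypothetical base search ```
| addinfo
| eval search_earliest=strftime(info_min_time, "%Y-%m-%d %H:%M:%S"),
       search_latest=strftime(info_max_time, "%Y-%m-%d %H:%M:%S")
| table _time search_earliest search_latest _raw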
http://centos7.linuxvmimages.local:8000
Trying to edit the email subject line of alerts I am receiving. I have tried adding host=$host$ to the base search and in the subject line and was unsuccessful. I have tried using the $result.host$ token and was unsuccessful as well. The search looks like:

| stats latest(cpu_load_percent) AS "CPU Utilization" by host _time
| where 'CPU Utilization' >= 95
| dedup host
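In Splunk email alerts, $result.fieldname$ tokens are filled from the first row of the alert's results, so host has to survive to the final output and the token has to be written without a space, i.e. $result.host$. A sketch of the search side; the base search feeding the stats is a hypothetical placeholder, since the posted search starts at the stats command:

index=os_metrics sourcetype=cpu_metrics ``` hypothetical base search ```
| stats latest(cpu_load_percent) AS "CPU Utilization" by host _time
| where 'CPU Utilization' >= 95
| dedup host
| table host, "CPU Utilization"

With host present as a result column, a subject such as "High CPU on $result.host$" should then resolve from the first matching row when the alert fires.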
Kindly help me with a new SPL. I am getting results for the existing SPL below. I tried applying a new condition to the existing SPL: EventID=4662 Properties=*EncryptedDSRMPasswordHistory*. But I am getting unwanted results for EventID 4662, so I want to compare the existing SPL results against the new condition below and filter out results where Properties contains "msLAPS-Password".

New Condition:

index=winsec_prod EventID=4662 Properties=*EncryptedDSRMPasswordHistory*

Existing SPL:

index=winsec_prod 4794 OR (4657 AND DSRMAdminLogonBehavior) OR ((4104 OR 4103) AND DsrmAdminLogonBehavior)
| search ((EventCode=4794) OR (EventCode=4657 ObjectName="*HKLM\System\CurrentControlSet\Control\Lsa\DSRMAdminLogonBehavior*") OR (EventCode IN (4104,4103) ScriptBlockText="*DsrmAdminLogonBehavior*"))
| eval username=coalesce(src_user,user,user_id), Computer=coalesce(Computer,ComputerName)
| stats values(dest) values(Object_Name) values(ScriptBlockText) by _time, index, sourcetype, EventCode, Computer, username
| rename values(*) as *
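If the goal is to keep the EncryptedDSRMPasswordHistory hits but drop any 4662 events whose Properties also mention msLAPS-Password, one approach is to exclude them directly in the new condition before merging it into the larger correlation search. A rough sketch of that filter on its own:

index=winsec_prod EventID=4662 Properties="*EncryptedDSRMPasswordHistory*" NOT Properties="*msLAPS-Password*"
| eval username=coalesce(src_user,user,user_id), Computer=coalesce(Computer,ComputerName)
| stats values(dest) values(Properties) by _time, index, sourcetype, EventCode, Computer, username
| rename values(*) as *

The NOT clause is the key piece; the stats/rename lines simply mirror the shape of the existing SPL so the two result sets stay comparable.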
What's the problem?

Many of Splunk’s current customers manage one or more sources producing substantial volumes of ingested logs; however, among this generated content, it’s not uncommon that only a few pieces of information—and therefore a relatively small portion of the overall data—hold the majority of insight relevant to their operational needs. As a result, the goal of this article is to propose, explain, and walk through a solution allowing for the extraction of this targeted information while optimizing resource utilization and cost-efficiency for the customer.

What can be done to help remediate this?

Rather than sending all of their unfiltered logs directly to Splunk—ultimately incurring fees related to unnecessary storage and processing power—customers can instead make use of Edge Processor. More specifically, pipelines can be set up to extract and route the information of interest directly to Splunk while the rest of the original log is directed to S3 for long-term storage. Because S3 is designed around archival storage as opposed to running analytics, persisting unused log data in S3 will be substantially more cost-effective. See the architectural diagram below for more information.

Step-by-Step Walkthrough

The following guide operates under the assumption that you haven’t yet connected your Edge Processor tenant to your Splunk Cloud Platform (SCP) deployment and do not have a live instance on any machine. If you have already connected your tenant to your SCP deployment, feel free to skip step 1 below. Similarly, if you have an Edge Processor instance installed and running on one of your machines, you can skip step 2 as well.

Step 1: Setting Up Splunk Destination(s)

Before you can start using Edge Processor to work with your logs, you must first connect the tenant to your SCP deployment. This connection allows communication between the Edge Processor service and SCP, thereby providing indexes for storing the logs and metrics passing through the processors. To do this, follow the steps outlined in our first-time setup instructions.

Step 2: Getting an Edge Processor Instance Up-and-Running

Creating the Instance

Now that you have your Splunk destinations correctly set up and configured, create a new Edge Processor instance by selecting Edge Processors > New Edge Processor in your cloud tenant’s web UI. Enter both a name and a description for the Edge Processor. To specify a default destination for unprocessed logs, select "To a default destination" and choose a destination from the resulting drop-down list. To turn on receivers allowing your Edge Processor to ingest logs from specific inputs, select inputs as necessary from the "Receive data from these inputs" section. If you want to use TLS to secure communications between your instance and its corresponding log sources, then do the following:

- In the "Use TLS with these inputs" section, select the log inputs for which you want to use TLS encryption.
- Upload PEM files containing the appropriate certificates in the Server private key, Server certificate, and CA certificates fields.

Installing the Instance on a Machine

- In your cloud tenant, locate and copy the installation commands. These can be found in Edge Processors > [your processor’s row] > Actions Icon (⋮) > Manage instances > Install/uninstall.
- On the machine that will host the instance, open the command-line interface, navigate to the desired target directory, and run the commands copied previously.
This should create a splunk-edge/ directory in your chosen installation location. To verify that the instance was installed successfully, return to your tenant and select Manage instances > Instances. Confirm that a new instance has been created and has a "Healthy" status (this may take up to a minute).

Step 3: Setting Up an Amazon S3 Destination

Within your tenant’s web UI, select Destinations > New Destination > Amazon S3 and provide all the credentials necessary to add the S3 destination dataset. This information will include a basic name and description, the object key name used to identify your logs in the S3 bucket, as well as the AWS region and authentication method necessary to allow the destination to connect with your bucket. Information regarding these fields can be found below.

Name: A unique name for your destination.
Description: (Optional) A description of your destination.
Bucket Name: The name of the bucket you want to send your logs to. Edge Processors use this name as a prefix in the object key name.
Folder Name: (Optional) The name of a folder where you want to store your logs in the bucket. In the object key name, Edge Processors include this folder name after the bucket name and before a set of auto-generated timestamp partitions.
File Prefix: (Optional) The file name that you want to use to identify your logs. In the object key name, Edge Processors include this file prefix after the auto-generated timestamp partitions and before an auto-generated UUID value.
Output Data Format: JSON (Splunk HEC schema). This setting causes your logs to be stored as .json files in the Amazon S3 bucket. The contents of these files are formatted into the event schema that's supported by Splunk’s HEC. See Event metadata in the Splunk Cloud Platform Getting Data In manual.
Region: The AWS region that your bucket is associated with.
Authentication: The method for authenticating the connection between your Edge Processor and your Amazon S3 bucket. If all of your Edge Processor instances are installed on Amazon EC2, then select Authenticate using IAM role for Amazon EC2. Otherwise, select Authenticate using access key ID and secret access key.
AWS Access Key ID: The access key ID for your IAM user. This field is available only when Authentication is set to Authenticate using access key ID and secret access key.
AWS Secret Access Key: The secret access key for your IAM user. This field is available only when Authentication is set to Authenticate using access key ID and secret access key.

Step 4: Constructing Relevant Pipelines

When working with multiple destinations in Edge Processor, separate pipelines are needed to route logs to each desired target (i.e. Splunk + Amazon S3 in this case). Thus, using the Pipelines > New pipeline button in the web UI, create and attach two new pipelines to your existing instance. Depending on the fields present in your ingested logs, you’ll want to define your pipeline’s partition by either sourcetype, source, or host. It doesn’t necessarily matter which of these is selected here; however, it’s crucial that both pipelines partition by the exact same field and value. Furthermore, each of these pipelines should specify separate destinations—namely, those set up in steps 1 and 3 above.

Splunk Destination: The way in which you filter logs to be sent to Splunk will, of course, vary depending on their format and contained information; however, SPL2 offers a few avenues through which you may extract relevant values from the ingested logs.
For large JSON structures, json_extract and json_extract_exact can be used to distill the relevant information. For instance, consider the following CloudWatch log, applied pipeline, and associated output:

EVENT DATA: (screenshot not reproduced here)

APPLIED PIPELINE: Extracts information related to the event ID, request ID, user account ID, as well as various group IDs associated with the request parameters. All other data (i.e. _raw) is dropped.

PIPELINE OUTPUT:
event_id: e394a756-ab36-4f7a-a9d9-c2fff8184457
request_id: 3c6deda5-e7bf-45c3-8279-3a78f1c42bea
user_account_id: 987654321955
req_group_ids: ["sg-051ccc60","sg-d81fa120","sg-e48b1fcc"]

Furthermore, for non-JSON logs, regular expressions can also be used to extract information via the rex command. For instance, consider the following snippet taken from a Windows event security log:

EVENT DATA: (screenshot not reproduced here)

APPLIED PIPELINE: Extracts information related to the log’s timestamp, event code, user account name, and corresponding message. All other data (i.e. _raw) is dropped.

PIPELINE OUTPUT:
time: 12/06/2021 10:01:28 AM
event_code: 4624
message: An account was successfully logged on
account_name: WIN-9A3SFCUS26U$

Once you’ve successfully written a pipeline that extracts the targeted information, be sure to double-check that the specified destination is set to the desired Splunk index. This can be seen on the right-hand side of the pipeline builder UI.

Amazon S3 Destination: Assuming you want to route all of the ingested logs directly to S3 for comprehensive storage, the SPL contained within the pipeline builder UI need not contain any complex queries. Simply routing all information from source to destination should suffice.

APPLIED PIPELINE: Sends all event data directly to the destination (i.e. no processing necessary).

Again, it’s important to note that the destination here should be set to the Amazon S3 destination you created in step 3 above.

So, what's the takeaway here?

To conclude, we have successfully demonstrated how Edge Processor can be used to efficiently reduce and route customer logs to multiple destinations—optimizing both resource utilization and cost efficiency in the process. Specifically, it has been shown that customers can filter and extract only the relevant pieces of information from their ingested logs via SPL2 queries, which can then be sent to Splunk for analysis. Upon setting up another destination pointing to Amazon’s S3 cloud storage, a separate pipeline can be applied and used in parallel to store the complete logs there for long-term retention.
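For readers who want a starting point, here is a rough SPL2 sketch of the two pipelines described in the "Constructing Relevant Pipelines" step above. The original pipeline screenshots are not reproduced here, so the JSON paths, field names, and the $source/$destination bindings are assumptions to adapt to your own data and tenant configuration:

// Pipeline 1 (attached to the Splunk destination): keep only the fields of interest
$pipeline = | from $source
    | eval event_id = json_extract(_raw, "detail.event_id"),
           request_id = json_extract(_raw, "detail.request_id"),
           user_account_id = json_extract(_raw, "detail.userIdentity.accountId"),
           req_group_ids = json_extract(_raw, "detail.requestParameters.groupSet")
    | fields _time, event_id, request_id, user_account_id, req_group_ids
    | into $destination;

// Pipeline 2 (attached to the Amazon S3 destination): pass every event through untouched
$pipeline = | from $source | into $destination;

Each definition lives in its own pipeline in the builder; both should use the same partition field and value, with only the destination differing.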
Splunk Lantern is Splunk’s customer success center that provides advice from Splunk experts on valuable data insights, key use cases, and tips on managing Splunk more efficiently. We also host Getting Started Guides for a range of Splunk products, a library of Product Tips, and Data Descriptor articles that help you see everything that’s possible with data sources and data types in Splunk.

This month we’re highlighting two sets of articles that illustrate how you can effectively use multiple parts of the Splunk product suite to solve some of your most crucial observability problems. These articles show you the synergies between Splunk products and features, showcasing how they work together to enhance your outcomes beyond each product’s individual parts. We’ve also published a handful of other new articles this month - jump to the bottom to see everything new.

Empowering Engineers with Unified Observability

Splunk Observability Cloud is a seriously powerful package, giving you the benefits of Splunk APM, Splunk RUM, Splunk Infrastructure Monitoring, Splunk Incident Intelligence, and Splunk Log Observer Connect, all in one interface. Thanks to Lantern’s Use Case Explorer for Observability, you can easily access use cases for all of these separate Splunk products. But sometimes, it might not be too clear how these products fit together.

Splunk Lantern’s new article, Empowering engineers with unified observability, shows you how you can use every part of Splunk Observability Cloud to solve key problems in cloud-native environments. We’ve developed four key unified observability use cases that can empower engineers at your organization:

- Business impact of changes
- Problems in cloud-native environments
- Self-service observability
- Visibility between on-premises and cloud

Each of these use cases contains written and video guidance on how you can use the different parts of Splunk Observability Cloud in concert to solve these issues. Dive in today and revolutionize your approach to unified observability!

Using OpenTelemetry to Get Log Data into Splunk Cloud Platform

Once you’ve got correlated log, trace, and metric data in Splunk Observability Cloud, you can use this to troubleshoot application issues in a very rapid and efficient way. But it can be tricky to work out how best to get log data flowing through to Splunk Observability Cloud in the first place.

Our new article, Using OpenTelemetry to get data into Splunk Cloud Platform, lays out an effective process for this. First, you’ll see how to set up the OpenTelemetry Demo application with Docker or Kubernetes, then get that log data into Splunk Cloud Platform. Once you’ve done that, you’ll learn how to use Splunk Log Observer Connect to bring the data into Splunk Observability Cloud.

The outcome of this process is you’ll have a very efficient way to troubleshoot your application issues with full log, metric, and trace visibility, and we also show you three different processes you can use to troubleshoot. We’re eager to hear if you have any questions about these articles, or if you’d like to see log collection approaches for environments other than Docker and Kubernetes - drop us a comment below to share your thoughts.

This Month’s New Articles

We’ve also published a few other articles over the past month that cover other interesting product tips, use cases and more.
Here’s the list:

- Introduction to the Splunk Distributed Deployment Server (SDDS)
- Configuring Windows security audit policies for Enterprise Security visibility
- Data descriptor: Docker
- Configuring Splunk 9.0 for Native Common Access Card (CAC) Authentication
- Using Session Replay in Splunk RUM

We hope you’ve found this update helpful. Thanks for reading!

Kaye Chapman, Senior Lantern Content Specialist for Splunk Lantern
In our latest release of Splunk Enterprise Security 7.2, we are excited to introduce capabilities that deliver an improved workflow experience for simplified investigations; enhanced visibility and reduced manual workload; and customized investigation workflows for faster decision-making. The majority of these updates and new features were requested directly by Splunk Enterprise Security (ES) users and submitted through the Splunk Ideas portal! Keep the great ideas and suggestions coming - we’re listening! With these new capabilities, ES helps you see more, act faster, and simplify your investigations.

Improved workflow experience for simplified investigations

- Multiple Drill-down Searches on Correlation Rules: Users can now create multiple drill-down searches on correlation rules to quickly narrow their investigation stemming from a notable event.
- Enhanced Risk Analysis Dashboard: With the enhanced risk analysis dashboard, security analysts have a deeper, more holistic layer of visibility across all detection events. The SOC can assess organizational risk from users and entities faster, and analysts can drill down on specific users and entities for additional context on risk contributions.
- Dispositions in Incident Review: With ES 7.2, ES administrators can require a disposition when closing notables. This provides a feedback loop into detection engineering, allowing efficient review of security detections.
- Hyperlinks in Correlation Search "Next Steps": This new capability enables ES administrators to include a link to resources such as wiki pages, runbooks, Splunk dashboards, or even third-party websites as part of an analyst’s response workflow. Analysts are able to view details as part of an event’s "Next Steps", which enhances and accelerates the analyst’s investigation process.

Enhanced visibility and reduced manual workload

With the new Auto Refresh in Incident Review, ES will automatically showcase the most up-to-date events for the SOC. Administrators can now customize and control the frequency of the auto refresh.

Security analysts can already prioritize notable events within Splunk Enterprise Security, but often want to visualize them by date and time. That’s why we brought back the Timeline function in Incident Review. This interactive timeline for notables supports analysts by enabling the SOC to quickly gain insight into anomalous activity, such as an unusually high number of notables around a certain time, and therefore prioritize time-sensitive critical incidents.

Customize investigation workflows for faster decision-making

ES 7.2 introduces optional enhancements to the Incident Review dashboard that provide a more customizable experience when investigating notable events. Analysts are now able to customize and configure the Incident Review dashboard with table filters and columns, letting practitioners look at the events that matter to them. Additionally, they can now create saved views of their customized Incident Review dashboard and share them with other Enterprise Security analysts.

Upgrade today to Splunk Enterprise Security 7.2!

Ready to get hands-on with Enterprise Security 7.2? Register for our Tech Talk! If you have ideas and requests, please submit them to Splunk Ideas!
Hi, I have a main dashboard "MFA Compliance Rate" as shown in the screenshot below. I have enabled the drilldown feature in the "MFA Compliance Rate Per Country" panel of the main dashboard. The drilldown dashboard is named "Country_Compliance" and lives in the same Splunk app. I am able to pass country data from the main dashboard to the drilldown dashboard; the screenshot below shows the on-click config. I also want to pass a dropdown field value from the main dashboard to the drilldown dashboard. For example: I want to pass the "Business-Unit" dropdown value to the drilldown dashboard along with the "country" value after clicking a particular country bar in the "MFA Compliance Rate Per Country" panel of the main dashboard. Please help me with how to pass the dropdown value to the drilldown dashboard. Thanks, Abhineet Kumar
Hi, I have an alert query that uses mstats, and I want this query to not trigger the alert during public holidays (from 9 AM to 5 PM). I have created a lookup holidays.csv with the columns "Date" and "Description". How can I use this lookup with the existing mstats command to check whether the current date and time fall within a holiday in the lookup file, and if so, not trigger the alert (or not run the search at all)? Thanks in advance. Lookup file:
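One possible approach is to let the mstats search run as usual and then discard any rows that fall on a holiday between 09:00 and 17:00, so the alert condition never matches in that window. A sketch under the assumptions that the metric, index, and threshold are placeholders, that holidays.csv is usable as a lookup (via a lookup definition if required), and that its Date column is formatted as %Y-%m-%d:

| mstats avg(cpu.utilization) as cpu WHERE index=metrics_idx span=5m ``` hypothetical metric and index ```
| eval Date=strftime(_time, "%Y-%m-%d"), hour=tonumber(strftime(_time, "%H"))
| lookup holidays.csv Date OUTPUT Description
| where isnull(Description) OR hour < 9 OR hour >= 17
| where cpu > 90 ``` original alert condition goes here ```

The Date format produced by strftime must match the format stored in the lookup exactly, otherwise the holiday rows will never join.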
I have a search result with two columns, country_name and bytes of data transferred. How can I create a map visualization from this that shows how many bytes were transferred to each country? Thanks
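For a country-level view, the geom command with the built-in geo_countries lookup attaches country shapes to the results, which the Choropleth Map visualization can then color by byte count. A minimal sketch, assuming the country_name values match the feature names in geo_countries and the base search is a placeholder:

index=network_logs ``` hypothetical base search producing country_name and bytes ```
| stats sum(bytes) as total_bytes by country_name
| geom geo_countries featureIdField=country_name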
index="jenkins_console" source="*-deploy/*" NOT (source="*/gremlin-fault-injection-deploy/*" OR source="*pipe-test*" OR source="*java-validation-*") ("Approved by" OR "*Finished:*") | fields source |... See more...
index="jenkins_console" source="*-deploy/*" NOT (source="*/gremlin-fault-injection-deploy/*" OR source="*pipe-test*" OR source="*java-validation-*") ("Approved by" OR "*Finished:*") | fields source | stats count(eval(match(_raw, "Approved by"))) as count_approved, count(eval(match(_raw, ".*Finished:*."))) as count_finish by source | where count_approved > 0 AND count_finish > 0 | stats dc(source) as Total | appendcols [ search(index="jenkins_console" source="*-deploy/*" NOT (source="*/gremlin-fault-injection-deploy/*" OR source="*pipe-test*" OR source="*java-validation-*") ("Finished: UNSTABLE" OR "Finished: SUCCESS" OR "Approved by" OR "Automatic merge*" OR "pushed branch tip is behind its remote" OR "WARNING: E2E tests did not pass")) | fields source host | stats count(eval(match(_raw, "Approved by"))) as count_approved, count(eval(match(_raw, "Finished: SUCCESS"))) as count_success, count(eval(match(_raw, "Finished: UNSTABLE"))) as count_unstable, count(eval(match(_raw, "Automatic merge.*failed*."))) as count_merge_fail, count(eval(match(_raw, "WARNING: E2E tests did not pass"))) as count_e2e_failure, count(eval(match(_raw, "pushed branch tip"))) as count_branch_fail by source, host | where count_approved > 0 AND (count_success > 0 OR (count_unstable > 0 AND (count_merge_fail > 0 OR count_branch_fail > 0 OR count_e2e_failure > 0))) | stats dc(source) as success ] | stats avg(success) as S, avg(Total) as T | eval percentage=( S / T * 100) | fields percentage,success, Total
o365 addon has been running fine. Token expired on the Azure side, so I generated a new one. Updating the Splunk addon gives me the error "Only letters, numbers and underscores are supported." and highlights the Tenant Subdomain or Tenant Data Center fields (see attachment). I can't complete the update without values in these fields. Not sure what to do here.