All Topics

Hi, I am new to Splunk and couldn't figure out how to work with OpenTelemetry's histogram buckets in Splunk. I have a basic setup of 3 buckets from OTel, with le=2000, 8000, +Inf, and the bucket name is "http.server.duration_bucket". My goal is to display the counts inside the 3 buckets for a 15-minute period, perform calculations using those values, and add the calculated value as a 4th column. I came up with this so far:

| mstats max("http.server.duration_bucket") chart=true WHERE "index"="metrics" span=15m BY le
| fields - _span*
| rename * AS "* /s"
| rename "_time /s" AS _time

But immediately I see 2 issues: a) the 8000 bucket results also include the 2000 bucket results, because they are recorded as cumulative histograms; b) the values inside the buckets are always increasing, so I cannot isolate how many counts belong to the 2000 bucket now versus the same bucket 15 minutes ago. I also realized that I don't know how to get the right calculation and separate the buckets without using "BY le", so I cannot perform calculations from there. So my questions are: 1) Is there an example of a function for displaying the real non-cumulative values in the histogram for a given period? 2) If my calculation is max(le=2000)*0.6 + max(le=8000)*0.4, how would I add that as a column to the search? Thanks in advance!
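A minimal sketch of one possible approach, assuming the buckets are cumulative counters and the le values are exactly 2000, 8000, and +Inf. Dropping chart=true keeps le as a plain field; streamstats removes the over-time accumulation per bucket, and the final eval removes the cumulative-histogram overlap and adds the weighted column:

| mstats max("http.server.duration_bucket") AS cum WHERE "index"="metrics" span=15m BY le
| sort 0 le _time
| streamstats current=f last(cum) AS prev BY le
| eval increase = cum - prev
| xyseries _time le increase
| eval count_le2000 = '2000',
       count_2000_8000 = '8000' - '2000',
       count_8000_inf = '+Inf' - '8000',
       weighted = '2000' * 0.6 + '8000' * 0.4
| table _time count_le2000 count_2000_8000 count_8000_inf weighted

The single quotes in the eval are field references (xyseries creates one column per le value). The first 15-minute window has no previous sample, so its increase is null; fill or drop it as appropriate. The weighted column here uses the per-window increases; if you want it on the raw cumulative maxima instead, compute it before the streamstats step.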
Hello there: I have the following two events:

Event #1
source=foo1 eventid=abc message="some message dfsdfdfgfdggfg fgdfdgfdg "time":"2023-11-09T21:33:05.0738373837278Z, abcefg"

Event #2
source=foo2 eventid=abc time: 2023-11-09T21:33:05Z

I need to relate these two events based on their event_id and eventid values being the same. I got help before to write this query:

index=foo (source=foo1 OR source=foo2) (eventid=* OR event_id=*)
| eval eventID = coalesce(eventid, event_id)
| stats values(*) as * by eventID

Now I need to expand the above query by extracting the timestamp from the message field of Event #1 and comparing it against the time field of Event #2. I basically need to do a timestamp subtraction between the two fields to see if there are time differences and by how much (seconds, minutes, etc.). Do you know how to do that? Thanks!
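A sketch of one way to do this, assuming the timestamp inside message always follows the "time":" pattern shown above. The rex deliberately drops the sub-second digits to keep strptime simple, so the comparison is at whole-second precision:

index=foo (source=foo1 OR source=foo2) (eventid=* OR event_id=*)
| eval eventID = coalesce(eventid, event_id)
| rex field=message "\"time\":\"(?<msg_ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})"
| eval msg_epoch = strptime(msg_ts, "%Y-%m-%dT%H:%M:%S")
| eval evt_epoch = strptime(time, "%Y-%m-%dT%H:%M:%SZ")
| stats values(msg_epoch) AS msg_epoch values(evt_epoch) AS evt_epoch BY eventID
| eval diff_sec = abs(evt_epoch - msg_epoch)
| eval diff_readable = tostring(diff_sec, "duration")

Once both values are epoch seconds, the subtraction gives the gap in seconds and tostring(..., "duration") renders it as HH:MM:SS for readability.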
For new RBA users, here are some frequently asked questions to help you get started with the product.

1. What is RBA (Risk-Based Alerting)?
Risk-Based Alerting (RBA) is Splunk's method of aggregating low-fidelity security events, as interesting observations tagged with security metadata, to create high-fidelity, low-volume alerts. When Splunk customers use RBA, they see a 50% to 90% reduction in alerting volume, while the remaining alerts are higher fidelity, provide more context for analysis, and are more indicative of true security issues.

2. Why RBA?
With Splunk RBA, you can:
- Improve the detection of sophisticated threats, including low-and-slow attacks often missed by traditional SIEM products.
- Seamlessly align with leading cybersecurity frameworks such as MITRE ATT&CK, Kill Chain, CIS 20, and NIST.
- Scale analyst resources to optimize SOC productivity and efficiency.

3. Fundamental terminology for RBA
- Risk Analysis Adaptive Response Action: the response action that gets triggered either instead of or in addition to a notable event response action when a risk rule matches. It adds risk scores and security metadata to events that are stored in the risk index as risk events for every risk object.
- Notable Event: an event generated by a correlation search as an alert. A notable event includes custom metadata fields to assist in the investigation of the alert conditions and to track event remediation.
- Asset and Identity Framework: performs asset and identity correlation for fields that might be present in an event set returned by a search. It relies on lookups and configurations managed by the Enterprise Security administrator.
- Common Information Model (CIM): a shared semantic model focused on extracting value from data. The CIM is implemented as an add-on that contains a collection of data models, documentation, and tools that support the consistent, normalized treatment of data for maximum efficiency at search time.
- Risk Analysis Framework: provides the ability to identify actions that raise the risk profile of individuals or assets. The framework also accumulates that risk to allow identification of people or devices that perform an unusual amount of risky activities.
- Risk Event Timeline: a popup visualization for drilling down and analyzing the correlation of risk events with their associated risk scores.
- Risk Score: a single metric that shows the relative risk of an asset or identity, such as a device or a user, in your network environment over time.
- Risk Rule: a narrowly defined correlation search run against raw events to observe potentially malicious activity. A risk rule contains three components: search logic (Search Processing Language), risk annotations, and the Risk Analysis Adaptive Response action to generate risk events. All risk events are written to the risk index.
- Risk Incident Rule: reviews the events in the risk index for anomalous events and threat activities, and uses an aggregation of events impacting a single risk object, which can be an asset or identity, to generate risk notables in Splunk Enterprise Security.

4. What are the common use cases of RBA?
The most common use case for RBA is detection of malicious compromise. However, the methodology can be utilized in many other ways, including machine learning, insider risk, and fraud.
- Machine learning: RBA is key in elevating machine learning from hype to practice, filtering through data noise and spotlighting actionable insights by combining domain knowledge with smart data processing.
- Insider risk: RBA streamlines the process of leveraging the MITRE ATT&CK framework by homing in on the critical data sources and use cases essential for a robust insider risk detection program. The result is a more focused approach with significantly reduced development time for a mature program, while providing high-value insights and the capability to alert on activity over large timeframes.
- Fraud: The Splunk App for Fraud Analytics, driven by the RBA framework, sharpens fraud detection and prevention, particularly for account takeover and new account activities. It streamlines the creation of risk rules from its investigative insights, promising significant operational gains post-integration with Splunk ES.

5. What are the prerequisites for using RBA?
To use RBA efficiently, you need to have Splunk Enterprise Security (ES) 6.4+ installed.

6. What is the relationship between Enterprise Security and RBA?
Enterprise Security (ES) is a SIEM solution that provides a set of out-of-the-box frameworks for a successful security operations program. RBA is the framework that surfaces high-fidelity, low-volume alerts from subtle or noisy behaviors, and it works in conjunction with the Search, Notable Event, Asset and Identity, and Threat Intel frameworks.

7. How can I implement RBA successfully?
Follow the four-level approach to implementing RBA, and check each step in detail using the RBA Essential Guide.

8. What RBA content should I start with?
- Leverage the MITRE ATT&CK framework mapped against your data sources if you're at the start of your journey, or leverage your existing alert landscape and focus on noisy alerts closed with no action.
- Consider ingesting a data source like EDR, DLP, or IDS with many of its own signatures and applying different risk amounts by severity.
- Try to paint a picture with a collection of content. Review fingerprints from successful red team intrusions or create a purple team exercise.
- If engaging PS, stick to one use case per day. Don't try to boil the ocean; stick to a crawl, walk, run approach. It will ramp up as the foundations are set in place.

9. Where do I start and how often do I review the detections?
You need events in the risk index to drive risk alerts.
- Start with at least 5-10 detections/rules (for smaller companies); utilize the Essential Guide for step-by-step instructions.
- Make sure they tell a story, spanning a breadth of ATT&CK phases.
- Ensure you have a breadth of risk scores; if your threshold is 100, you want variation so that a high (75) and a low (25), or two mediums (50), or four lows (25) could all bubble up to something interesting.
- Discuss risk notables with your internal RBA committee on a weekly basis, and perhaps monthly with leadership to discuss trends.
NOTE: Don't be afraid to set the risk score to zero. You have to do this in SPL: | eval risk_score = "0"

10. How do I calculate risk scores in RBA?
The Splunk Threat Research Team utilizes a combination of Impact, or the potential effectiveness of the attack if it was observed, and Confidence as to how likely this is a malicious event. The confidence in every environment can vary, so it is important to test detections over a large timeframe, get an idea of how common the observation is in your environment, and score appropriately. You may want to score an observation differently based on a signature, business unit, or anything you find happening too often, so you can also set the risk_score field in your SPL. There are examples of this in the Essential Guide as well as on the RBA GitHub.

11. What are the best practices for setting and adjusting risk scores as the implementation improves?
It's important to keep your threshold constant and tune your risk scores around the threshold. Risk scores are meant to be dynamic as you find what is or isn't relevant in the risk notables that surpass the threshold. Often it makes sense to lower the risk based on attributes of a risk object, or on other interesting fields indicating non-malicious, regular business traffic, by declaring the risk_score field in your SPL (a sketch appears at the end of this post). As you advance, you can try making custom risk incident rules that look at risk events over larger amounts of time and play with increasing the threshold.

12. What are the primary challenges in the RBA implementation process?
- Buy-in from both technical and business (economic buyer/leadership) sides
- Time invested in initial development and continued documentation
- Familiarity with SPL (commands of value: rex, eval, foreach, lookup, makeresults, autoregress)
- Tuning of the risk scoring
- Getting the SOC involved (they are the ones intimately involved with all the noise on a daily basis)
- A&I is ideal, but it doesn't have to be perfect. A train wreck is OK.
- RBA is a JOURNEY, not a one-and-done deal.

13. How can I simulate events in the risk index for testing RBA?
Splunk ATT&CK Range is the perfect fit for this (see its Introduction and GitHub pages). There are also open-source solutions like Atomic Red Team, which is also available on GitHub.

14. What are the most helpful self-service RBA resources?
- Splunk Lantern RBA Prescriptive Adoption Motion
- NEW: Standalone RBA Manual
- The Essential Guide to Risk-Based Alerting: a comprehensive implementation guide from start to finish
- The RBA Community, which hosts a community Slack, regular office hours, and common resources to help with RBA development
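For questions 9 through 11, here is a minimal sketch of setting risk_score dynamically in SPL, appended to the end of a risk rule's search. The user and severity field names and the svc_ prefix are hypothetical placeholders for whatever attributes mark regular business traffic in your environment:

| eval risk_score = case(
    match(user, "^svc_"), 10,
    severity="critical", 80,
    true(), 40)

case() returns the value of the first matching clause, so known service accounts get a low score, critical-severity signatures a high one, and everything else the default.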
I have a JSON file with the data below, and I would like to get name and status and display them in a table. Help here is much appreciated. I'm new to Splunk.

Desired output:

Name                                        Status
assetPortfolio_ValidateAddAssetForOthers    passed
assetPortfolio_ValidatePLaceHolderText      failure
assetPortfolio_ValidateIfFieldUpdated       passed

{
  "name": "behaviors",
  "children": [
    {
      "name": "assetPortfolio_ValidateAddAssetForOthers",
      "status": "passed"
    },
    {
      "name": "assetPortfolio_ValidatePlaceHolderText",
      "status": "failure"
    },
    {
      "name": "assetPortfolio_ValidateIfFieldUpdated",
      "status": "passed"
    }
  ]
}
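A sketch of one common approach, assuming each event contains one JSON document shaped like the sample: spath pulls out the children array, mvexpand splits it into one row per child, and a second spath extracts the fields from each fragment:

| spath path=children{} output=child
| mvexpand child
| spath input=child
| table name status

Note that the second spath may collide with the outer name field ("behaviors") if that was already auto-extracted; renaming or removing it before the final spath avoids the clash.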
We upgraded to Splunk 9.0.4 on a RHEL 7.9 machine. STIG RHEL-07-040000 states the following:

"Operating system management includes the ability to control the number of users and user sessions that utilize an operating system. Limiting the number of allowed users and sessions per user is helpful in reducing the risks related to DoS attacks. This requirement addresses concurrent sessions for information system accounts and does not address concurrent sessions by single users via multiple system accounts. The maximum number of concurrent sessions should be defined based on mission needs and the operational environment for each system."

The fix is the following: configure the operating system to limit the number of concurrent sessions to "10" for all accounts and/or account types, by adding the following line to the top of /etc/security/limits.conf or in a ".conf" file defined in /etc/security/limits.d/:

* hard maxlogins 10

Will this impact Splunk functionality in any way? Is it OK to make this change without impacting Splunk?
Hi, I have a bar chart powered by a query that uses an eval case pattern to group events into apps, e.g.:

index=blah NOT "*test*" NOT "*exe*" Level=Error
| eval AppName = case(
    (SourceName="Foo" AND Message="*Bar*"), "app1",
    (SourceName="Foo"), "app2",
    (source="Mtn" AND 'Properties.Service'="Barf"), "app3",
    (SourceName="Whatever" AND match(_raw, ".*Service = OtherThing.*")), "app4")
| stats count as ErrorCount by AppName

What I'd like to do is have each bar, when clicked, open a new window that shows the events corresponding to the app. For the above example, the queries would be:

index=blah NOT "*test*" NOT "*exe*" Level=Error (SourceName="Foo" AND Message="*Bar*")
index=blah NOT "*test*" NOT "*exe*" Level=Error (SourceName="Foo")
index=blah NOT "*test*" NOT "*exe*" Level=Error (source="Mtn" AND 'Properties.Service'="Barf")
index=blah NOT "*test*" NOT "*exe*" Level=Error (SourceName="Whatever" AND match(_raw, ".*Service = OtherThing.*"))

The problem I am having is how to make the drilldown XML node do this. I thought I could use conditional tokens, but when condition nodes are in the drilldown node, I get an error saying "link cannot be condition", even though the link node is the last sibling of all the condition nodes. Please help! Thanks, Orion
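That error usually means a bare link element is sitting at the same level as the condition elements: once conditions are used inside a drilldown, every link has to live inside one, including the fallback. A hypothetical Simple XML sketch of that shape (it assumes $click.value$ carries the clicked AppName, and in a real dashboard each search string must be URL-encoded; app3 and app4 follow the same pattern):

<drilldown>
  <condition match="$click.value$ == &quot;app1&quot;">
    <link target="_blank">search?q=search index=blah NOT "*test*" NOT "*exe*" Level=Error (SourceName="Foo" AND Message="*Bar*")</link>
  </condition>
  <condition match="$click.value$ == &quot;app2&quot;">
    <link target="_blank">search?q=search index=blah NOT "*test*" NOT "*exe*" Level=Error (SourceName="Foo")</link>
  </condition>
  <!-- fallback wrapped in a catch-all condition, so no bare <link>
       remains a sibling of the other conditions -->
  <condition match="true()">
    <link target="_blank">search?q=search index=blah NOT "*test*" NOT "*exe*" Level=Error</link>
  </condition>
</drilldown>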
Hello, I have just installed the ML Toolkit for Splunk. However, I keep getting this error when I attempt to create a fit model: "Error in 'fit' command: (ImportError) DLL load failed while importing _arpack: The specified procedure could not be found." I installed the Python for Scientific Computing add-on before this. I've already tried uninstalling and reinstalling, but I keep getting the same error. Any help would be much appreciated!
Being fairly new to many features in Splunk, I wish to verify that the fields on 2 different hosts match, for consistency. Here's a simple search to show the fields I'd like to verify. What's the best way to go about this?

index="postgresql" sourcetype="postgres" host=FLSM-ZEUS-PSQL-*
| table host, node_name, node_id, active, type
| where NOT isnull(node_name)

host               node_name          node_id  active  type
FLSM-ZEUS-PSQL-02  flsm-zeus-psql-02  2        t       standby
FLSM-ZEUS-PSQL-02  flsm-zeus-psql-01  1        t       primary
FLSM-ZEUS-PSQL-01  flsm-zeus-psql-02  2        t       standby
FLSM-ZEUS-PSQL-01  flsm-zeus-psql-01  1        t       primary
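One possible sketch: group the rows by everything except host and flag any combination that does not appear on both hosts. The threshold of 2 assumes exactly two hosts should agree, per the two nodes shown above:

index="postgresql" sourcetype="postgres" host=FLSM-ZEUS-PSQL-*
| where NOT isnull(node_name)
| stats dc(host) AS host_count values(host) AS hosts BY node_name node_id active type
| where host_count < 2

Any row this returns is a node_name/node_id/active/type combination reported by only one host, i.e. an inconsistency between the two.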
Hi there: I have two events shown below:

Event #1
source=foo1 eventid=abcd

Event #2
source=foo2 event_id=abcd

I am trying to query the above events. The event source is different: one is foo1 and the other foo2. I want to find these events where they are linked by their eventid (from event #1, where the source is foo1) and event_id (from event #2, where the source is foo2). Basically, the value of eventid and event_id must be the same. Do you know how I can construct the query for this? Thanks!
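A minimal sketch using coalesce to merge the two field names into one and then grouping on it; index=foo is a placeholder for your actual index, and the final where keeps only IDs seen from more than one source:

index=foo (source=foo1 OR source=foo2) (eventid=* OR event_id=*)
| eval eventID = coalesce(eventid, event_id)
| stats dc(source) AS source_count values(*) AS * BY eventID
| where source_count > 1

coalesce() returns the first non-null argument, so each event contributes whichever of eventid or event_id it carries, and the stats then folds the foo1 and foo2 events with the same ID into one row.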
Splunk 6.1 errors and cannot search:

Error in 'litsearch' command: Your Splunk license expired or you have exceeded your license limit too many times. Renew your Splunk license by visiting www.splunk.com....

The search job has failed due to an error. You may be able to view the job in the Job Inspector.

When I check Settings -> System -> Licensing and click "show all messages", there are 5 messages, on Nov 3rd, 4th, 7th, 8th, and 9th: "This pool has exceeded its configured poolsize=21474836480 bytes. A warning has been recorded for all members"

How do we troubleshoot and resolve this to get search working again? We do not have an active Splunk support contract.

Regards, Jason
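To see how much data is being indexed against the pool each day, a diagnostic sketch run on the license master (the RolloverSummary rows in license_usage.log are the daily totals; the _internal index remains searchable even during a violation):

index=_internal source=*license_usage.log* type=RolloverSummary
| eval GB_used = round(b/1024/1024/1024, 2)
| eval GB_pool = round(poolsz/1024/1024/1024, 2)
| table _time GB_used GB_pool

Comparing GB_used against GB_pool day by day shows which days blew past the 20 GB pool and by how much.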
Hi, is it possible to organize users by functional area, for example: Security, IT, NetOps, etc.? In these areas, each team might monitor specific metrics related to their functional area, or they might monitor a general set of metrics for the specific systems they manage, or both.
Hi, Splunk Enterprise (latest version), new to Splunk. I am ingesting from some appliances via syslog on a UDP port. All is fine for INGESTING logs: event numbers are actively increasing. However, when I go into Search, it has completely stopped. For example, I have 50k events and a latest update of 10:52. I click on the data source "udp:9006" and the last event shown is from 10:30. Things were working great and in real time up until 10:30, then it just stopped completely. Any ideas? Thanks
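One quick diagnostic sketch for this situation: if the appliance timestamps are being parsed in the wrong timezone, new events can land "in the future" and fall outside a recent-time search window even though they are being indexed. This compares event time with index time over a window that deliberately extends into the future:

source="udp:9006" earliest=-24h latest=+24h
| eval index_delay_s = _indextime - _time
| stats count max(_time) AS newest_event max(_indextime) AS newest_indexed avg(index_delay_s)
| convert ctime(newest_event) ctime(newest_indexed)

If newest_indexed keeps advancing while newest_event is hours ahead of (or behind) the wall clock, the timestamps are being misparsed rather than the ingestion having stopped.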
In today's hyper-connected digital world, ensuring reliability and security is a top-of-mind concern for most businesses. Factoring in the hybrid and multi-cloud reality, larger attack surfaces, and the continuing expansion of the cloud adds a complexity unlike anything organizations have faced. As a result, IT leaders are continuously looking for ways to build a stronger foundation of resiliency and security for their cloud.

Splunk delivers a better way for your business to drive higher resiliency and security while you move deployments to the cloud. By migrating deployments to Splunk Cloud Platform - Splunk Platform capabilities delivered as software as a service (SaaS) - your teams can focus on high-value tasks while your organization benefits from Splunk's deep expertise and continuous investment in innovation. When your teams don't have to worry about maintaining or regularly updating your cloud platform for security, compliance, and performance, since Splunk does it all, they can focus more on harnessing data's full potential to drive higher business efficiency and scale.

In this cloud community campaign, we will take you through the journey of migrating to Splunk Cloud Platform with success, and share the tools and resources you will need to accelerate your journey to the cloud. As part of this campaign, over the next few months we will share with you why Splunk Cloud Platform delivered as a service is an ideal solution for moving your on-prem deployments to the cloud, and show you how you can:

Build Value: Examine the considerations and benefits that are driving enterprises to migrate to Splunk Cloud Platform to build higher efficiency and security, at scale.

Get Your Cloud On: Get started by getting data into Splunk Cloud Platform using a variety of methods. Splunk is here to help answer any questions to make your transition to cloud easy.

Deploy Use Cases: Explore the Security and IT Modernization use cases that can be accomplished with the Splunk Cloud Platform. Take your use cases to the next level with ML and AI.

Realize Value: Get a simplified experience to administer and extract more value from your cloud platform using Splunk tools and resources designed for your success.

So, fasten your seat belts and embark on the journey of Building a Foundation of Resilience for Your Cloud. Let's start the journey by experiencing the drivers and business benefits from leaders and practitioners who chose migrating to Splunk Cloud Platform as a better way to supercharge their value realization. Download the new IDC analyst report to learn from current Splunk customers HSBC, Pacific Dental Services, and GAF on how moving deployments to the Splunk Cloud Platform helped them achieve their desired business outcomes:

HSBC: Accelerated time to value and increased scalability by >300%
Pacific Dental Services: Increased operational efficiencies by more than 40%
GAF: Realized annual cost savings of 20%

Get more information on Migrating to Splunk Cloud Platform here.
Hello, I am reaching out to ask if there is any way to make the chart generated with the scheduled PDF report option look better. We have this dashboard:

[screenshot]

It looks fine; everything is nice and clean. When we use the scheduled PDF option and generate a PDF, it does not look good:

[screenshot]

As an FYI, the screenshot above includes both types of chart formatting: one has the values in the middle, and one has values above. Both look good in the dashboard, but neither looks good in the PDF. Is there a way to edit the chart bars so they have more space, or to edit the size of the numerals above the bars? Is there an app that allows better editing of PDFs within Splunk? I feel like we have done everything we can to make the PDF look good, but we cannot seem to get the numbers to look right. Thank you for any guidance.
Hi, can someone help me? I'm trying to call a webhook on AWX Tower (Ansible) using the Add-On Builder. This is my script; it doesn't work, but I don't get an error message either:

# encoding = utf-8

import requests


def process_event(helper, *args, **kwargs):
    """
    # IMPORTANT
    # Do not remove the anchor macro:start and macro:end lines.
    # These lines are used to generate sample code. If they are
    # removed, the sample code will not be updated when configurations
    # are updated.
    [sample_code_macro:start]

    # The following example gets the alert action parameters and prints them to the log
    machine = helper.get_param("machine")
    helper.log_info("machine={}".format(machine))

    # The following example adds two sample events ("hello", "world")
    # and writes them to Splunk
    # NOTE: Call helper.writeevents() only once after all events
    # have been added
    helper.addevent("hello", sourcetype="sample_sourcetype")
    helper.addevent("world", sourcetype="sample_sourcetype")
    helper.writeevents(index="summary", host="localhost", source="localhost")

    # The following example gets the events that trigger the alert
    events = helper.get_events()
    for event in events:
        helper.log_info("event={}".format(event))

    # helper.settings is a dict that includes environment configuration
    # Example usage: helper.settings["server_uri"]
    helper.log_info("server_uri={}".format(helper.settings["server_uri"]))
    [sample_code_macro:end]
    """
    helper.log_info("Alert action awx_webhooks started.")

    url = 'https://<AWX-URL>/api/v2/job_templates/272/gitlab/'
    # Likely issue: AWX's GitLab-style webhook endpoint expects the token
    # in its own X-Gitlab-Token header; nesting it inside an Authorization
    # header (as in the original version) is not recognized by AWX.
    headers = {'X-Gitlab-Token': '<MYTOKEN>'}
    # verify=False skips TLS certificate validation (e.g. self-signed certs).
    response = requests.post(url, headers=headers, verify=False)
    # Log instead of print(): stdout from a modular alert action is not
    # captured in the Splunk logs, so print() output disappears silently,
    # which is why no error ever showed up.
    helper.log_info("status={} body={}".format(response.status_code, response.text))
    return 0
I just want to pose a quick question about the Microsoft API URLs that are used in the add-on. At what point will the add-on be updated to reflect the new URL changes? I had a conversation with a Microsoft engineer, and he mentioned that the following URLs may not work past Dec 31, 2024:

API_ADVANCED_HUNTING = "/api/advancedhunting/run"
API_ALERTS = "/api/alerts"
API_INCIDENTS = "/api/incidents"

This link shows the difference between some of the old and new URLs: Use the Microsoft Graph security API - Microsoft Graph v1.0 | Microsoft Learn

I know it's a while off; however, it comes up quickly at times. I'm just trying to understand the process so I can stay ahead of it. Also, I have seen add-ons that offer both legacy and current inputs. It would be great to have an option like that before the URL switch for this add-on.
Hi, We currently have events where identifying the app that generated the event depends on multiple fields, as well as substrings within those fields. For example:

app 1 is identified by SourceName=Foo "bar("
app 2 is identified by SourceName=Foo "quill("
app 3 is identified by SourceName=Foo
app 4 is identified by source=abcde
app 5 is identified by sourcetype=windows eventcode=11111

I would like to count the number of errors per app, but I'm not having luck yet. I've tried regexes and an eval case match pattern, and I can't seem to google the correct words to find a similar scenario in others' posts. Please help. Thanks, Orion
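A sketch of one eval/case approach; index=blah and Level=Error are placeholders for your actual base search, and the eventcode comparison assumes the field is extracted under that name. Because case() returns the first clause that matches, the more specific substring tests must come before the bare SourceName="Foo" test:

index=blah Level=Error
| eval app = case(
    SourceName="Foo" AND like(_raw, "%bar(%"), "app1",
    SourceName="Foo" AND like(_raw, "%quill(%"), "app2",
    SourceName="Foo", "app3",
    source="abcde", "app4",
    sourcetype="windows" AND eventcode="11111", "app5")
| stats count AS ErrorCount BY app

like() does the substring matching against _raw with % wildcards; match() with a regex or searchmatch() with search syntax would work in those clauses too.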
Hi, I need to know the steps for, and an understanding of, how to configure LDAP authentication via the GUI, which is available under Settings > Authentication methods > LDAP. If anyone can share the understanding and the exact steps, that would be helpful. Thanks
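While the GUI walks you through the same fields, it may help to see what they map to on disk. A hypothetical sketch of the resulting authentication.conf; all host, DN, and group names below are made-up examples, and the GUI writes the real values for your directory:

[authentication]
authType = LDAP
authSettings = corp_ldap

[corp_ldap]
host = ldap.example.com
port = 389
bindDN = cn=splunk-bind,ou=service,dc=example,dc=com
userBaseDN = ou=people,dc=example,dc=com
userNameAttribute = sAMAccountName
realNameAttribute = cn
groupBaseDN = ou=groups,dc=example,dc=com
groupNameAttribute = cn
groupMemberAttribute = member

[roleMap_corp_ldap]
admin = Splunk-Admins
user = Splunk-Users

In the GUI, creating an LDAP strategy fills in the [corp_ldap]-style stanza (connection, user, and group settings), and the "Map groups" step produces the roleMap stanza that ties LDAP groups to Splunk roles.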
Hi Team, I want to get the top 10 DB query wait states in an AppDynamics dashboard. Kindly suggest.