Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hi @Peterm1993 .. do you mean you want to convert the number of days to a number of hours (days multiplied by 24), OR, where you are using that strftime, instead of picking up the days (%d) you want to pick up the hours? Please confirm, thanks.

index=analyst reporttype=DepTrayCaseQty Location=DEP/AutoDep*
| where Dimension>0 OR ProtrusionError>0 OR OffCentreError>0
| table _time OrderId ProtrusionError OffCentreError Dimension *
| bin _time span=1d
| eval Total_time=strftime(_time,"%d") ```Comment - looks like you mistyped "Total_time" as "_time"```
| eval foo=ProtrusionError+OffCentreError+Dimension
| chart sum(foo) as ErrorFrequency over Location by _time useother=f limit=100
| addtotals
| sort 0 - Total _time
| fields - TOTAL
This is my code about the drop down:

<input type="dropdown" token="start_time" searchWhenChanged="true">
  <label>First IR init Time (sec)</label>
  <fieldForLabel>start_time</fieldForLabel>
  <fieldForValue>start_time</fieldForValue>
  <search>
    <query>index=idx_ptd_dataset sourcetype="type:ptd_dataset:data" corp="flight"
| where !isnull(location)
| where !isnull(landing_time)
| eval st_time=round(landing_time,0)
| where st_time &lt;=90
| stats values by st_time
| sort st_time</query>
  </search>
  <default>ALL</default>
  <choice value="ALL">ALL</choice>
</input>
Hi @bowesmana .. just thought to tell you, the rex was missing the closing parenthesis, and your rex and strptime work nicely.

Hi @djoobbani .. as said in the reply above, please update us: is the time field "_time", or do you want to extract it from the msg? If you want to extract it from the msg, then, assuming the 2nd msg is formatted the same as the 1st msg, try something like this:

| makeresults
| eval msg1="some message dfsdfdfgfdggfg fgdfdgfdg \"time\":\"2023-11-09T21:33:05.0738373837278Z, abcefg"
| eval msg2="some message dfsdfdfgfdggfg fgdfdgfdg \"time\":\"2023-11-09T21:33:10.0738373837278Z, abcefg"
| rex field=msg1 "time.:.(?<event1_time>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})"
| rex field=msg2 "time.:.(?<event2_time>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})"
| eval e1_t=strptime(event1_time, "%FT%T")
| eval e2_t=strptime(event2_time, "%FT%T")
| eval diff=e1_t-e2_t
| table event1_time event2_time diff

This gives the result of:

event1_time           event2_time           diff
2023-11-09T21:33:05   2023-11-09T21:33:10   -5.000000
Thanks @bowesmana. OK, let's make it simpler, I have this event:

Event source=abc time1=2023-11-10T00:33:53Z time2=2023-11-11T12:33:53Z

How would you construct the query so that time2 is subtracted from time1 and the time difference is displayed in the result using rex? Thanks!
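A runnable sketch against that sample event (the rex and the strptime format are based only on the sample shown; the later timestamp minus the earlier one gives a positive duration, so swap t1 and t2 if you want the difference the other way round):

| makeresults
| eval _raw="Event source=abc time1=2023-11-10T00:33:53Z time2=2023-11-11T12:33:53Z"
| rex "time1=(?<time1>\S+)\s+time2=(?<time2>\S+)"
| eval t1=strptime(time1,"%Y-%m-%dT%H:%M:%SZ"), t2=strptime(time2,"%Y-%m-%dT%H:%M:%SZ")
| eval diff_seconds=t2-t1
| eval diff_readable=tostring(diff_seconds,"duration")
| table time1 time2 diff_seconds diff_readable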
Hi, I'm trying to convert this search to show totals in hours instead of days/dates, can anyone help me please?

index=analyst reporttype=DepTrayCaseQty Location=DEP/AutoDep*
| where Dimension>0 OR ProtrusionError>0 OR OffCentreError>0
| table _time OrderId ProtrusionError OffCentreError Dimension *
| bin _time span=1d
| eval _time=strftime(_time,"%d")
| eval foo=ProtrusionError+OffCentreError+Dimension
| chart sum(foo) as ErrorFrequency over Location by _time useother=f limit=100
| addtotals
| sort 0 - Total _time
| fields - TOTAL
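If the goal is hourly buckets rather than daily ones, a rough sketch would be to change the bin span to 1h and use an hour label instead of %d (the "%F %H:00" label is an assumption, chosen so that the same hour on different days does not collapse into one column):

index=analyst reporttype=DepTrayCaseQty Location=DEP/AutoDep*
| where Dimension>0 OR ProtrusionError>0 OR OffCentreError>0
| bin _time span=1h
| eval Hour=strftime(_time,"%F %H:00")
| eval foo=ProtrusionError+OffCentreError+Dimension
| chart sum(foo) as ErrorFrequency over Location by Hour useother=f limit=100
| addtotals
| sort 0 - Total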
Thank you for the answer. Here is an example of how I would like to process the data:
1. There are 3 years of data, accumulated every 2 seconds.
2. The value of a particular point is always 0 and only becomes 1 or more when a failure occurs.
3. I would like to retrieve the records of any failures over the 3-year period, i.e. the spikes in the data, and save them in csv format.
Can you help me one more time?
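Assuming the field holding the point's reading is literally called value and a failure is simply any record where it is non-zero, a sketch could be (index, sourcetype and field names are placeholders; keeping value>0 in the base search means only the spike records are ever pulled back):

index=your_index sourcetype=your_sourcetype earliest=-3y@d latest=now value>0
| table _time point_name value
| outputcsv failure_spikes.csv

outputcsv writes the file on the search head under $SPLUNK_HOME/var/run/splunk/csv; alternatively you can export the search results as CSV from the UI.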
Does the Splunk default _time field represent the time in the data, or is it a different time? If it's a different time, then you need to extract those times (if not already extracted) and then just do the maths after the stats, e.g. something like

index=foo (source=foo1 OR source=foo2) (eventid=* OR event_id=*)
| eval eventID = coalesce(eventid, event_id)
| rex "time.:.(?<event1_time>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})"
| rex "time:.(?<event2_time>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})"
| stats values(*) as * by eventID
| eval e1_t=strptime(event1_time, "%FT%T")
| eval e2_t=strptime(event2_time, "%FT%T")
| eval diff=e1_t-e2_t

You may need to adjust the rex statements. This also ignores subsecond values - not sure what the 13 decimal places are for in the first time - if you want to include those you'll have to change it a bit.
My goal is to make sure that databases on 2 servers have the same data. I'll be using this search in an alert to monitor the health of a SQL cluster. My goal is to create an alert that triggers when the fields node_name, node_id, active or type on the two servers don't match each other.
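Not knowing the sourcetype or field layout, here is one hedged sketch of the idea - index, sourcetype and host names are placeholders. It groups on the tuple of compared fields and checks that every combination is reported by both servers; any row seen by only one server is a mismatch:

index=your_db_index sourcetype=your_db_sourcetype (host=server1 OR host=server2)
| stats dc(host) AS hostCount values(host) AS hosts BY node_name node_id active type
| where hostCount < 2

The alert would then trigger when the number of results is greater than zero.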
Have you tried just

index=xyz status=complete
| fields dur
| fields - _*
| eventstats p95(dur) as p95Dur
| where dur < p95Dur
| stats count

so you only have the dur field in the dataset - I believe that will be significantly faster without having to pull all the data to the search head.
Hi, I am new to Splunk and couldn't figure out how to work with OpenTelemetry's histogram buckets in Splunk. I have a basic set-up of 3 buckets from OTel, with le=2000, 8000, +Inf, and the bucket name is "http.server.duration_bucket". My goal is to display the counts inside the 3 buckets for a 15-min period, perform a calculation using those values, and add the calculated value as a 4th column. I came up with this so far:

| mstats max("http.server.duration_bucket") chart=true WHERE "index"="metrics" span=15m BY le
| fields - _span*
| rename * AS "* /s"
| rename "_time /s" AS _time

But immediately I see 2 issues: a) the 8000 bucket results include the 2000 bucket results as well, because they are recorded as cumulative histograms; b) the values inside the buckets are always increasing, so I cannot isolate how many counts belong to the 2000 bucket now vs the same bucket 15 mins ago. And I realized that I don't know how to get the right calculation and separate the buckets without using "BY le", so I cannot perform calculations from there. So my questions are:
1) Is there an example of a function for displaying the real, non-cumulative values in the histogram for a given period?
2) If my calculation is max(le=2000)*0.6 + max(le=8000)*0.4, how would I add that as a column to the search?
Thanks in advance!
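Not a full answer, but one hedged sketch of how the de-cumulation could look, assuming the counters only ever increase (counter resets are ignored) and that le takes exactly the values 2000, 8000 and +Inf:

| mstats max("http.server.duration_bucket") AS cum WHERE index="metrics" span=15m BY le
| xyseries _time le cum
| rename "2000" AS cum_2000, "8000" AS cum_8000
| delta cum_2000 AS in_2000
| delta cum_8000 AS d_8000
| eval in_2000_to_8000 = d_8000 - in_2000
| eval weighted = in_2000 * 0.6 + in_2000_to_8000 * 0.4
| table _time in_2000 in_2000_to_8000 weighted

Here delta turns each cumulative bucket into a per-15-minute increase, and subtracting the smaller bucket's increase from the larger one separates the two ranges; the weighted column applies the 0.6/0.4 calculation to those per-window counts rather than to the raw cumulative maxima.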
Did you ever find a solution to this error?
Hi, I'm facing the same issue when picking up the library classes: Module '"splunk-logging"' has no exported member 'SplunkLoggingService'. I have installed the splunk-javascript package. Please let me know if any other installations need to be done.
@FelixLeh no - you are absolutely right and your suggestion looks fine, but @Anud was using append and a subsearch with inputlookup and not what you suggested...
OK, so a lot going on here... You have two searches that look similar - not sure if they are searching the same data set - but in order to diagnose this you should do a number of things. You are also using the transaction command, which has limitations and can cause data not to appear if you hit those limitations - and you will not know about it. I suggest you first validate search 1 and see how many results you expect, then run search 2 (the appended data) and determine how many you see there. If you do not see 1 + 2 in the combined search, you are hitting some memory issue. I suspect, but cannot say exactly, that you could remove both the append and the use of transaction and just use stats. Are the two masked search data sets the same or different?
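Since the searches in the post are masked, here is only a generic sketch of the stats-instead-of-transaction idea - correlation_id stands in for whatever field the transaction command was grouping on, and the other names are placeholders:

index=your_index (search_1_conditions) OR (search_2_conditions)
| stats min(_time) AS start max(_time) AS end values(status) AS status count BY correlation_id
| eval duration = end - start

Unlike transaction, stats has no silent event/memory limits of the same kind, so the combined results are easier to trust.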
Hello there: I have the following two events:

Event #1
source=foo1 eventid=abc message="some message dfsdfdfgfdggfg fgdfdgfdg "time":"2023-11-09T21:33:05.0738373837278Z, abcefg"

Event #2
source=foo2 eventid=abc time: 2023-11-09T21:33:05Z

I need to relate these two events based on their event_id and eventid values being the same. I got help before to write this query:

index=foo (source=foo1 OR source=foo2) (eventid=* OR event_id=*)
| eval eventID = coalesce(eventid, event_id)
| stats values(*) as * by eventID

Now I need to expand the above query by extracting the timestamp from the message field of Event #1 and comparing it against the time field of Event #2. I basically need to do a timestamp subtraction between the two fields to see if there are time differences and by how much (seconds, minutes, etc.). Do you know how to do that? Thanks!
How often do you want the alert to run? When you decide that, change the cron schedule accordingly. The time window should really be the same as the frequency - only you can decide what that is. When making these searches, it is normal to search a window that is a little bit in the past, e.g. as I suggested in my previous post. If your frequency is 5 minutes, then your time window would be something like

earliest=-7m@m latest=-2m@m

so you search a 5-minute window from -7 to -2 minutes ago. It's not clear from your original post what you mean by the incident coming in at 9:16 when the event is 8:20.
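As a concrete illustration, assuming a 5-minute frequency: schedule the alert with the cron expression */5 * * * * and give the alert's search a matching lagged window (the index and conditions below are placeholders):

index=your_index your_alert_conditions earliest=-7m@m latest=-2m@m
| stats count

Then set the trigger condition on the resulting count (or whatever condition matters to you).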
| spath children{} output=children
| mvexpand children
| spath input=children
| table name status
Thank you! Does that mean that if I set '* hard maxlogins 10', Splunk will operate correctly?
Hi @diptij .. Splunk has some requirements for "open files", filesystems, etc., but no requirements for maxlogins. Please check the docs links below, thanks.

The software requirements for your reference: https://docs.splunk.com/Documentation/Splunk/9.1.1/Installation/Systemrequirements#Considerations_regarding_system-wide_resource_limits_on_.2Anix_systems
The hardware requirements for your reference: https://docs.splunk.com/Documentation/Splunk/9.1.1/Capacity/Referencehardware
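For reference, the limits Splunk does document are the classic ulimit-style settings for the user that runs Splunk, typically set in /etc/security/limits.conf (or in the systemd unit). The snippet below is only an illustrative sketch - take the actual minimums from the Systemrequirements page linked above; maxlogins is not among them:

# /etc/security/limits.conf (illustrative values - verify against the linked docs)
splunk  soft  nofile  64000
splunk  hard  nofile  64000
splunk  soft  nproc   16000
splunk  hard  nproc   16000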
For new RBA users, here are some frequently asked questions to help you better get started with the product.

1. What is RBA (Risk-Based Alerting)?
Risk-Based Alerting (RBA) is Splunk's method to aggregate low-fidelity security events as interesting observations tagged with security metadata to create high-fidelity, low-volume alerts. When Splunk customers use RBA, they see a 50% to 90% reduction in alerting volume, while the remaining alerts are higher fidelity, provide more context for analysis, and are more indicative of true security issues.

2. Why RBA?
With Splunk RBA, you can:
- Improve the detection of sophisticated threats, including low-and-slow attacks often missed by traditional SIEM products.
- Seamlessly align with leading cyber security frameworks such as MITRE ATT&CK, Kill Chain, CIS 20, and NIST.
- Scale analyst resources to optimize SOC productivity and efficiency.

3. Fundamental terminology for RBA
- Risk Analysis Adaptive Response Action: the response action that gets triggered either instead of or in addition to a notable event response action when a risk rule matches. It adds risk scores and security metadata to events that are stored in the risk index as risk events for every risk object.
- Notable Event: an event generated by a correlation search as an alert. A notable event includes custom metadata fields to assist in the investigation of the alert conditions and to track event remediation.
- Asset and Identity Framework: performs asset and identity correlation for fields that might be present in an event set returned by a search. The framework relies on lookups and configurations managed by the Enterprise Security administrator.
- Common Information Model (CIM): a shared semantic model focused on extracting value from data. The CIM is implemented as an add-on that contains a collection of data models, documentation, and tools that support the consistent, normalized treatment of data for maximum efficiency at search time.
- Risk Analysis Framework: provides the ability to identify actions that raise the risk profile of individuals or assets. The framework also accumulates that risk to allow identification of people or devices that perform an unusual amount of risky activity.
- Risk Event Timeline: a popup visualization that can drill down into and analyze the correlation of risk events with their associated risk scores.
- Risk Score: a single metric that shows the relative risk of an asset or identity, such as a device or a user, in your network environment over time.
- Risk Rule: a narrowly defined correlation search run against raw events to observe potentially malicious activity. A Risk Rule contains three components: search logic (Search Processing Language), risk annotations, and the Risk Analysis Adaptive Response action to generate risk events. All risk events are written to the risk index.
- Risk Incident Rule: reviews the events in the risk index for anomalous events and threat activity and uses an aggregation of events impacting a single risk object, which can be an asset or identity, to generate risk notables in Splunk Enterprise Security.

4. What are the common use cases of RBA?
The most common use case for RBA is detection of malicious compromise.
However, the methodology can be utilized in many other ways, including machine learning, insider risk, and fraud.
- Machine learning: Risk-Based Alerting (RBA) is key in elevating machine learning from hype to practice, filtering through data noise and spotlighting actionable insights by combining domain knowledge with smart data processing.
- Insider risk: RBA streamlines the process of leveraging the MITRE ATT&CK framework by homing in on the critical data sources and use cases essential for a robust insider risk detection program. The result is a more focused approach with significantly reduced development time for a mature program, while providing high-value insights and the capability to alert on activity over large timeframes.
- Fraud: The Splunk App for Fraud Analytics, driven by the RBA framework, sharpens fraud detection and prevention, particularly for account takeover and new account activities. It streamlines the creation of risk rules from its investigative insights, promising significant operational gains post-integration with Splunk ES.

5. What are the prerequisites for using RBA?
To use RBA efficiently, you need to have Splunk Enterprise Security (ES) 6.4+ installed.

6. What is the relationship between Enterprise Security and RBA?
Enterprise Security (ES) is a SIEM solution that provides a set of out-of-the-box frameworks for a successful security operations program. RBA is the framework that surfaces high-fidelity, low-volume alerts from subtle or noisy behaviors, and it works in conjunction with the Search, Notable Event, Asset and Identity, and Threat Intel frameworks.

7. How can I implement RBA successfully?
Follow the four-level approach to implementing RBA. Check each step in detail using the RBA Essential Guide.

8. What RBA content should I start with?
- Leverage the MITRE ATT&CK framework mapped against your data sources if you're at the start of your journey, OR leverage your existing alert landscape and focus on noisy alerts closed with no action.
- Consider ingesting a data source like EDR, DLP, or IDS with many of its own signatures and applying different risk amounts by severity.
- Try to paint a picture with a collection of content. Review fingerprints from successful red team intrusions or create a purple team exercise.
- If engaging PS, stick to one use case per day. Don't try to boil the ocean - stick to a crawl, walk, run approach. It will ramp up as the foundations are set in place.

9. Where do I start and how often do I review the detections?
You need events in the risk index to drive risk alerts.
- Start with at least 5-10 detections/rules (for smaller companies) - utilize the Essential Guide for step-by-step instructions.
- Make sure they tell a story, spanning a breadth of ATT&CK phases.
- Ensure you have a breadth of risk scores; if your threshold is 100, you want variation so that a high (75) and a low (25), or two mediums (50), or four lows (25) could all bubble up to something interesting.
- Discuss risk notables with your internal RBA committee on a weekly basis, and maybe monthly with leadership to discuss trends.
NOTE: Don't be afraid to set the risk score to zero. You have to do this in SPL: | eval risk_score = "0"

10. How do I calculate risk scores in RBA?
The Splunk Threat Research Team utilizes a combination of Impact, or the potential effectiveness of the attack if this was observed, and Confidence as to how likely this is a malicious event.
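As a rough illustration of that Impact/Confidence idea - this mirrors the convention used in much of Splunk's public security content, but the exact 1-100 scales and formula are an assumption, so adjust to your environment:

| eval risk_score = (impact * confidence) / 100

For example, impact=80 and confidence=50 would give a risk_score of 40.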
The confidence in every environment can vary, so it is important to test detections over a large timeframe, get an idea of how common the observation is in your environment, and score appropriately. You may want to score an observation differently based on a signature, business unit, or anything you find happening too often, so you can also set the risk_score field in your SPL. There are examples of this in the Essential Guide as well as on the RBA GitHub.

11. What are the best practices for setting and adjusting risk scores as the implementation improves?
It's important to keep your threshold constant and tune your risk scores around the threshold. Risk scores are meant to be dynamic as you find what is or isn't relevant in the risk notables that surpass the threshold. Often it makes sense to lower the risk based on attributes of a risk object, or other interesting fields indicating non-malicious, regular business traffic in your detections, by declaring the risk_score field in your SPL (see the illustrative sketch at the end of this FAQ). As you advance, you can try making custom risk incident rules that look at risk events over larger amounts of time and experiment with increasing the threshold.

12. What are the primary challenges in the RBA implementation process?
- Buy-in from both the technical and business (economic buyer / leadership) sides.
- Time invested in initial development and continued documentation.
- Familiarity with SPL (commands of value: rex, eval, foreach, lookup, makeresults, autoregress).
- Tuning of the risk scoring.
- Getting the SOC involved (they are the ones intimately involved with all the noise on a daily basis).
- A&I is ideal, but it doesn't have to be perfect. A train wreck is OK.
- RBA is a JOURNEY, not a one-and-done deal.

13. How can I simulate events in the risk index for testing RBA?
Splunk ATT&CK Range is the perfect fit for this (see its Introduction and GitHub pages). There are also open-source solutions like Atomic Red Team, which is also available on GitHub.

14. What are the most helpful self-service RBA resources?
- Splunk Lantern RBA Prescriptive adoption motion (NEW)
- Standalone RBA Manual
- The Essential Guide to Risk-Based Alerting: a comprehensive implementation guide from start to finish
- The RBA Community, which hosts a community Slack, regular office hours, and common resources to help with RBA development.
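Illustrative sketch referenced in question 11 - the field names, patterns, and multipliers are invented for the example; the point is only that risk_score can be adjusted inline based on attributes of the risk object:

| eval risk_score = case(match(user, "^svc_"), risk_score * 0.5, src_category="vuln_scanner", 0, true(), risk_score)

Here user and src_category would be fields already present in the detection's results; service accounts get half the score and known scanner traffic is zeroed out before the risk event is written.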