Calculations can be done with fields in the same event.
index=test_index sourcetype="test_source" className=export
| eval total = 'message.totalExportedProfileCounter' + 'message.exportedRecords'
(Note the single quotes: in eval, field names containing dots need to be quoted.) If these fields do not have values in the same event, you need to use something like stats to correlate different events into the same event. For this you need a common field value between events to correlate them by.
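As an illustration of the stats approach, here is a sketch assuming a hypothetical common field called correlationId shared by the two events:

```
index=test_index sourcetype="test_source" className=export
| stats values(message.totalExportedProfileCounter) as profiles, values(message.exportedRecords) as records by correlationId
| eval total = profiles + records
```

The stats command gathers both counters onto one result row per correlationId, after which the eval works exactly as in the single-event case.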
Firstly, you don't need the / at the beginning and end of the regex string in the rex command; if anything, these should be replaced with double quotes. Secondly, you have an underscore instead of a hyphen in your regex (assumed-role), which doesn't match your sample data.
| rex field=responseElements.assumedRoleUser.arn "arn:aws:sts::(?<accountId>\d{12}):assumed-role\/(?<assumedRoled>.*)\/vault-oidc-(?<userId>\w+)-*."
I have the below Splunk query, which gives me a result.
My SPL searches eventType IN (security.threat.detected, security.internal.threat.detected) and provides me the src_ip results.
But the same src_ip field has multiple user_id results in other eventTypes.
I want my SPL to search the src_ip results against the other eventTypes and filter if user_id="*idp*".
Example - If my src_ip=73.09.52.00, then the src_ip should be searched against the other available eventTypes and the result filtered if user_id=*idp*
My Current SPL
index=appsrv_test sourcetype="OktaIM2:log" eventType IN (security.threat.detected, security.internal.threat.detected)
| rex field=debugContext.debugData.url "\S+username\=(?<idp_accountname>\S+idp-references)"
| search NOT idp_accountname IN (*idp-references*)
| regex src_ip!="47.37.\d{1,3}.\d{1,3}"
| rename actor.alternateId as user_id, target{}.displayName as user, client.device as dvc, client.userAgent.rawUserAgent as http_user_agent, client.geographicalContext.city as src_city client.geographicalContext.state as src_state client.geographicalContext.country as src_country, displayMessage as threat_description
| strcat "Outcome Reason: " outcome.reason ", Outcome Result: " outcome.result details
| stats values(src_ip) as src_ip count by _time signature threat_description eventType dvc src_city src_state src_country http_user_agent details
| `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` | `okta_threatinsight_threat_detected_filter`
Set the default of your text input to "*" and extend your initial search like this:
index=<your index and other filters>
[| makeresults
| fields - _time
| eval username="$your_text_search$"
| where username!="*"
| return username]
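For context, assuming the token is named your_text_search: the return command makes the subsearch expand into a field filter in the outer search, so if a user types alice the search effectively becomes:

```
index=<your index and other filters> username="alice"
```

When the token is left at its "*" default, the where clause drops the row, the subsearch returns nothing, and no username filter is added.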
Could someone explain to me how this cluster command works in the backend? I couldn't find any resource that explains the technique/algorithm behind this cluster command. How does it cluster the matches (termlist/termset/ngramset)? How is t calculated? It doesn't seem to be probability based. What kind of clustering algorithm does it use? It would be best if someone could explain the full algorithm for this cluster command. Much thanks
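For context, the command's documented knobs are the match mode (termlist, termset, or ngramset) and the threshold t, a similarity value between 0 and 1; a typical invocation (the _internal search is just an illustrative assumption) looks like:

```
index=_internal sourcetype=splunkd log_level=ERROR
| cluster showcount=true t=0.8 match=ngramset
| table cluster_count, _raw
| sort - cluster_count
```

With showcount=true, each surviving event carries a cluster_count field showing how many raw events it represents; raising t demands more similar events per cluster.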
You could save the IP addresses to a lookup file and use inputlookup to retrieve the IP addresses to use in your reports. Admittedly, this answer is vague, as it depends on the nature of your searches and what you are trying to achieve as output, e.g. this method could be used if you want to combine the reports into just 6 with all the IP addresses in each report, although you could update the searches to take into account the extra dimension of IP address (depending on your actual searches).
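As a sketch, assuming the seed file is a lookup named target_ips.csv with a column called src_ip, each of the 6 reports could pull the whole list via a subsearch (which expands into an OR of src_ip=... filters):

```
index=<your index> [| inputlookup target_ips.csv | fields src_ip]
| stats count by src_ip
```

The stats line is just a placeholder for each report's own logic; the subsearch is what swaps the single hard-coded IP for the full list.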
Let me describe a different scenario. Suppose you do nothing, and use whichever time marker you already have (and I understand even less why you want to invent your own instead of using Splunk's built-in time selector - you can customize the selector any way you want, too). Use it for the entire world. Simply ask your users to set their preference to UTC in their own preferences. Would that work? From user-prefs.conf:
[general]
tz = <timezone>
* Specifies the per-user timezone to use.
* If unset, the timezone of the Splunk Server or Search Head is used.
* Only canonical timezone names such as America/Los_Angeles should be
  used (for best results use the Splunk UI).
* No default.
Your users do not need to edit the file, of course. The time zone can be set from Splunk Web.
Hi @DaClyde, I love that cartoon! Good for you, see you next time! Let us know if we can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
I have six different SPL queries that I run on a specific IP address. Is it possible to save a search as a report, schedule that report to run, and provide the report a seed file of IP addresses?
I have 20+ IP addresses that I need to run the same 6 reports on. Right now I'm just running them interactively, which is a pain.
Any suggestions?
Hi,
I want to use timechart or bucket span to view the result every 30 minutes using the below query.
Could you please let me know how I can use timechart or bucket span=30m _time here.
index=* handler=traffic <today timerange> | stats dc(dsid) as today_Traffic | appendcols [search index=* handler=traffic <yesterday timerange> | stats dc(dsid) as Previous_day_Traffic] | eval delta_traffic = today_Traffic-Previous_day_Traffic
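One hedged sketch of the 30-minute breakdown (index and field names taken from the query above; the today/yesterday labels and the combined time range are assumptions): bin events into 30-minute slots, label each day, and chart the distinct counts side by side so the delta can be computed per slot:

```
index=* handler=traffic earliest=-1d@d latest=now
| bin _time span=30m
| eval day=if(_time >= relative_time(now(), "@d"), "today", "yesterday")
| eval slot=strftime(_time, "%H:%M")
| chart dc(dsid) over slot by day
| eval delta_traffic = today - yesterday
```

This replaces the appendcols pattern entirely: a single search over both days avoids running two searches and lining their rows up by position.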
Hi @yuanliu, please note that the task_ids for event_name are constant and only the event_ids are going to change. Moreover, even if one task_id has an unchanged event_id, it should trigger an alert. The alert should be triggered only if there is an unchanged event_id for the corresponding task_id. To your question, the alert should be triggered with both set-1 and set-2, because set-1 has one unchanged event_id whereas set-2 has three unchanged event_ids. Thank you.
Please see this knowledge article for details on the known issue and fix/workaround: https://splunk.my.site.com/customer/s/article/High-Skipped-Search-Ratio-after-upgrading-to-9-x
Splunk newbie here. I have a search that works if I change it every day, but I would like to add it to a dashboard for monitoring without having to change the date. It looks for all the newly created accounts in the current/past day, depending on what date I put in. The search is
index=Activedirectory whenCreated="*,9/15/23" | table whenCreated, name, manager, title, description
then searched for the last 24 hours. The format of the time is %H:%M:%S AM/PM, %a %m/%d/%Y. The 2 issues I am having are: how do you specify the AM/PM, and how do you set up the search so that it will search for the last 24 hours using the current date? I was thinking it is "time()" but I am not successful in getting the results I need.
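A hedged sketch of both pieces, assuming whenCreated really does carry a 12-hour time with AM/PM (note %I and %p below for 12-hour time, rather than %H, which is 24-hour):

```
index=Activedirectory
| eval created_epoch = strptime(whenCreated, "%I:%M:%S %p, %a %m/%d/%Y")
| where created_epoch >= relative_time(now(), "-24h")
| table whenCreated, name, manager, title, description
```

strptime converts the string to epoch time, and relative_time(now(), "-24h") gives the moving 24-hour cutoff, so no hard-coded date is needed.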
I am searching far and wide for recommendations, best practices, even just conversations on this topic - all for naught. Here's my big dilemma that I would love to get something other than an "It Depends" answer to: which platform is more appropriate for metrics, Splunk Cloud or Splunk O11y? I am not an expert in Splunk, but I know that there are certain challenges with metrics indexes in Splunk Core/Cloud. And I can also see there are Metrics Rules in O11y that help with filtering, and some pre-built dashboards. But I am still lost for direction. Short of actually ingesting the same amount of metrics into both platforms and seeing what comes out... do I want to do that? Well, "that depends"! Any guidance deeply appreciated.
I recently upgraded my search head cluster to 9.x and since then my skipped/deferred searches have skyrocketed.
index=_internal source=*scheduler.log status=* | timechart span=60s count by status