All Posts



Correct, null values (as returned by the null() function) are ignored by the dc() function.
| where isnotnull(end_time)
Hi All, I am trying to create an alert via Terraform / REST API with the action "MS Teams publish to channel". I could not find any documentation for the action value and the other parameters required for it. Could anyone let me know the list of parameters? Thanks, somu.
Even though Splunk allows TCP/UDP inputs, it's best practice not to use them if you can avoid it. Lots of unpredictable data can come in, and you'll lose data if you happen to do anything with the Splunk service (restart, OS shutdown, etc.). It's best to use rsyslog for these types of inputs if you can.
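As a rough sketch of that pattern, an rsyslog drop-in like the one below accepts UDP syslog and writes one file per sending host, which a universal forwarder can then pick up with a [monitor://...] stanza. The file paths and port here are illustrative assumptions, not a recommendation for your environment:

# /etc/rsyslog.d/50-network-inputs.conf  (path is illustrative)
module(load="imudp")
input(type="imudp" port="514")

# Write each sender's messages to its own file, which a Splunk
# forwarder can then monitor independently of splunkd restarts
template(name="PerHost" type="string"
         string="/var/log/remote/%HOSTNAME%/syslog.log")
*.* action(type="omfile" dynaFile="PerHost")

Because rsyslog keeps writing while Splunk is down, nothing is lost during a restart; the forwarder simply resumes reading the files where it left off.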
Just to add to this: for the path in the stanza, make sure you use the correct slashes for the operating system (forward slashes on Linux, backslashes on Windows).

[monitor://<path>]
* Configures a file monitor input to watch all files in the <path> you specify.
* <path> can be an entire directory or a single file.
* You must specify the input type and then the path, so put three slashes in your path if you are starting at the root on *nix systems (to include the slash that indicates an absolute path).

https://docs.splunk.com/Documentation/Splunk/latest/Admin/inputsconf
https://docs.splunk.com/Documentation/Splunk/9.1.1/Data/Monitorfilesanddirectorieswithinputs.conf

Windows inputs stanza example:

[monitor://C:\Windows\System32\WindowsUpdate.log]
index = test
sourcetype = my_sourcetype
There are several vulnerabilities, some almost 5 years old, that are still present in the latest Splunk Kubernetes image version. Do we have an ETA on when will these get resolved? Here is the list CVE-2018-1000654 CVE-2018-1000879 CVE-2018-1000880 CVE-2018-1121 CVE-2018-19211 CVE-2018-19211 CVE-2018-20657 CVE-2018-20657 CVE-2018-20657 CVE-2018-20786 CVE-2018-20839 CVE-2019-12900 CVE-2019-14250 CVE-2019-14250 CVE-2019-14250 CVE-2019-17543 CVE-2019-19244 CVE-2019-8905 CVE-2019-8906 CVE-2019-9674 CVE-2019-9674 CVE-2019-9923 CVE-2019-9936 CVE-2019-9937 CVE-2020-17049 CVE-2020-17049 CVE-2020-21674 CVE-2021-20193 CVE-2021-24032 CVE-2021-31879 CVE-2021-35937 CVE-2021-35937 CVE-2021-35938 CVE-2021-35938 CVE-2021-35939 CVE-2021-35939 CVE-2021-3927 CVE-2021-39537 CVE-2021-39537 CVE-2021-3974 CVE-2021-3997 CVE-2021-4166 CVE-2021-4209 CVE-2021-43618 CVE-2022-0351 CVE-2022-1619 CVE-2022-1720 CVE-2022-2124 CVE-2022-2125 CVE-2022-2126 CVE-2022-2129 CVE-2022-2175 CVE-2022-2182 CVE-2022-2183 CVE-2022-2206 CVE-2022-2207 CVE-2022-2208 CVE-2022-2210 CVE-2022-2284 CVE-2022-2285 CVE-2022-2286 CVE-2022-2287 CVE-2022-2309 CVE-2022-2343 CVE-2022-2344 CVE-2022-2345 CVE-2022-23491 CVE-2022-23990 CVE-2022-2522 CVE-2022-27943 CVE-2022-27943 CVE-2022-27943 CVE-2022-2819 CVE-2022-2845 CVE-2022-2849 CVE-2022-2923 CVE-2022-2946 CVE-2022-2980 CVE-2022-3037 CVE-2022-3153 CVE-2022-3219 CVE-2022-3234 CVE-2022-3235 CVE-2022-3256 CVE-2022-3296 CVE-2022-3352 CVE-2022-3705 CVE-2022-40023 CVE-2022-40897 CVE-2022-40897 CVE-2022-40897 CVE-2022-40899 CVE-2022-4292 CVE-2022-4293 CVE-2022-4899 CVE-2023-0049 CVE-2023-0054 CVE-2023-0288 CVE-2023-0433 CVE-2023-0464 CVE-2023-0464 CVE-2023-0465 CVE-2023-0465 CVE-2023-0466 CVE-2023-0466 CVE-2023-0512 CVE-2023-1127 CVE-2023-1170 CVE-2023-1175 CVE-2023-1264 CVE-2023-24056 CVE-2023-24056 CVE-2023-24056 CVE-2023-24056 CVE-2023-27534 CVE-2023-27534 CVE-2023-27536 CVE-2023-27536 CVE-2023-28484 CVE-2023-28486 CVE-2023-28487 CVE-2023-29469
After a lot of experimentation, I've found that I can convert a field into a JSON-encoded string simply by extracting it from _raw, since json_extract does not seem to operate recursively. It's a bit of a roundabout way of getting there, but it seems to do the trick. So essentially I can do:

index=whatever my search here
| eval subfieldstr = json_extract(_raw, "subfield")
| stats dc(subfieldstr) as count
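The reason this works is that serializing the nested object turns it into a single comparable string, so a distinct count over the strings is a distinct count over the objects. A minimal Python sketch of the same idea (not Splunk's internal behavior, just the principle; the sample events are made up):

```python
import json

# Two events whose "subfield" objects are identical in structure and values
event_a = {"something": "cool", "subfield": {"this": "may contain", "arbitrary": ["things"]}}
event_b = {"something": "neat", "subfield": {"this": "may contain", "arbitrary": ["things"]}}

def subfield_key(event):
    # sort_keys makes the serialization canonical, so equal objects
    # always produce the same string regardless of key order
    return json.dumps(event["subfield"], sort_keys=True)

distinct = {subfield_key(e) for e in (event_a, event_b)}
print(len(distinct))  # 1 distinct subfield value
```

One caveat: if the raw events serialize the same object with keys in different orders, a plain string extraction may count them as different values, whereas a canonical re-serialization would not.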
Thanks. i am all set. 
Correlation Search drilldowns that include newlines have those newlines removed when using a Mission Control Incident's "Contributing events" link. This isn't a terrible problem if each line has a space at the end of it, but if a line of SPL has no trailing space and the newline is removed, the search breaks because each line becomes jammed together with the following one.
Say I have events of the form:

{
  something: "cool",
  subfield: {
    this: "may contain",
    arbitrary: ["things"],
    and: { more: "stuff" }
  }
}

The internal structure of `subfield` is arbitrary. I would like to count how many different `subfield` values I have. How can I accomplish this? My initial thought was that maybe there was some function I could use to JSON-encode the field, so that I could just have

| eval subfieldstr = to_json_string(subfield)

and then do a "stats dc" on subfieldstr, but I can't find such a function, and searching for it is difficult (there are endless results of people trying to do the exact opposite).
Sure! Just use the concatenation operator (.) in the eval command:

| eval today=strftime(now(), "%m/%d/%Y") . "_Response"
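For comparison, the same format-then-append step looks like this in Python, where ordinary string concatenation plays the role of SPL's `.` operator (the `_Response` suffix is just the example from this thread):

```python
from datetime import datetime

# Format today's date as MM/DD/YYYY and append a literal suffix
today = datetime.now().strftime("%m/%d/%Y") + "_Response"
print(today)  # e.g. 11/01/2023_Response
```

Note that `%m` and `%d` are zero-padded, so November 1 renders as 11/01/2023 rather than 11/1/2023.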
It's working. Can we append any string to this date? For example: 11/1/2023_Response
I have a situation where I'm using case() to compare 2 fields to identify a fuzzy match, but in field 1 I may have "boa.com" and in field 2 I have "Bank Of America". What I want to do is take the letters of field 1 and the first letter of each word in field 2 (understanding there is no fixed maximum number of words the value may contain). I know I can usually do something with mvindex by using an index of -1 to identify the "last value" of a multivalue field, but I'm not sure how to marry that with case(), like(), and substr(). Has anyone ever accomplished anything like this before?

I'm trying things like

| rex field=Company "(?<CamelCase>\b(\w))"

but it's only returning "b" in CamelCase instead of "boa".
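The regex itself is fine; the issue is that a single capture only returns the first match. The pattern `\b\w` matches the first word character after every word boundary, so collecting all matches yields the initials. A quick Python illustration of that behavior (in SPL the analogue would be `rex max_match=0` followed by `mvjoin` to collapse the multivalue result, though you'd want to verify that against your version's docs):

```python
import re

# \b\w matches the first word character after each word boundary,
# i.e. the initial letter of every word; findall collects them all
initials = "".join(re.findall(r"\b\w", "Bank Of America"))
print(initials)  # BOA
```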
As always in a community forum, you get better answers by better defining your use case. Can you define "frequency analysis" in your context? Most importantly, from what kind of source are you counting? What result do you expect from such sources? What is the logic between the source and the result?

You mentioned DNS logs. Suppose your logs have a field named domain. Do you mean to count how many queries each domain gets per unit time, say an hour, over a given search period, say the past 24 hours?

source=mydnslogs
| timechart span=1h count by domain

What other "analysis" do you want to apply? Sort by frequency?

| sort - count
Here's one method. There may be others.

| makeresults
| eval pp_user_action_name="foo", Today_Calls=42, Avg_today=3
| table pp_user_action_name, Today_Calls, Avg_today
| rename Avg_today as
    [| makeresults
     ``` Get today's date and format it ```
     | eval today=strftime(now(), "%m/%d/%Y")
     ``` Return only the value of the today field ```
     | return $today]
Wow, it worked. I will accept this as the solution. Thank you so much.

What did the "eval if" part do? If score > 0, then include the vuln; if not, assign null(), which means dc() will ignore it?

eval(if(score > 0, vuln, null()))
As I noted in https://community.splunk.com/t5/Splunk-Search/Date-time-formatting-variables-not-producing-result-I-expected/m-p/666477#M228639, the letter "Z" signifies a standard time zone and you should NOT simply remove it. Instead, Splunk should process it as a timezone token before you render the end result in any string format you want. In other words:

| eval stime=strftime(strptime(stime,"%FT%T%Z"),"%F %T")
| eval etime=strftime(strptime(etime,"%FT%T%Z"),"%F %T")
| eval orgstime=strftime(strptime(orgstime,"%FT%T%Z"),"%F %T")
| eval orgetime=strftime(strptime(orgetime,"%FT%T%Z"),"%F %T")
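The same parse-then-reformat round trip can be sketched in Python. Splunk's strptime accepts the zone name under `%Z`; in Python the closest directive for a trailing "Z" designator is `%z` (accepted since Python 3.7). The sample timestamp here is made up:

```python
from datetime import datetime

stime = "2023-11-01T08:30:00Z"
# Parse the trailing zone designator instead of stripping it
# (Python 3.7+ accepts "Z" for %z), then render without the zone
parsed = datetime.strptime(stime, "%Y-%m-%dT%H:%M:%S%z")
print(parsed.strftime("%Y-%m-%d %H:%M:%S"))  # 2023-11-01 08:30:00
```

Parsing the zone keeps the value timezone-aware, so any later conversion to local time or epoch seconds stays correct, whereas stripping the "Z" silently reinterprets the value in whatever zone the parser assumes.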
So, your formula uses min_score as a base and sets the "threshold" at 2/3 of the way between min and max. In this case, if your data has no range between min and max, this formula will give you the same number, as min == max. Only people with intimate knowledge of that data and this particular use case can determine what the best alternative formula would be.

Say, for example, you decide that instead of min_score + 2/3 * range in all cases, you want to use the existing formula when the range is greater than, say, 1/10 of min_score, but use 4/5 * max_score when the range is too narrow. You can express this directly in SPL:

index=ss group="Threat Intelligence"
``` here I'm grouping the domain names into a single group by their naming convention ```
| eval domain_group=case(
    like(domain_name, "%cisco%"), "cisco",
    like(domain_name, "%wipro%"), "wipro",
    like(domain_name, "%IBM%"), "IBM",
    true(), "other"
  )
| stats count as hits, min(attacker_score) as min_score, max(attacker_score) as max_score by domain_group, attackerip
| sort -hits
| eval range = max_score - min_score
| eval threshold = round(if(range > min_score / 10, min_score + (2 * range / 3), max_score * 4 / 5), 0)
| eventstats max(hits) as max_hits by domain_group ``` eventstats instead of streamstats ```
| where hits >= threshold ``` threshold is used in place of max_hits ```
| table domain_group, min_score, max_score, attackerip, hits, threshold
| dedup domain_group

This said, I notice the streamstats and dedup in your code, and the criterion hits >= max_hits. Maybe you have a different use case in mind?

* threshold is not used at all. Why calculate it?
* The condition hits >= max_hits combined with streamstats (as opposed to eventstats, as I illustrated above) will result in alerts for every IP that has more hits than all previous ones (instead of the largest one, or ones that exceed the calculated threshold) - is this what you wanted?
* Your table retains attackerip, but dedup domain_group will lose all rows except the highest in each group.
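To make the branching concrete, here is the same conditional threshold written as a small Python function (the cutoffs 1/10, 2/3, and 4/5 are the illustrative values discussed above, not recommendations):

```python
def threshold(min_score, max_score):
    # Use min + 2/3 of the range when the range is meaningful
    # (greater than a tenth of min_score); otherwise fall back
    # to 4/5 of the max, since a narrow range makes the
    # range-based formula collapse toward min_score
    rng = max_score - min_score
    if rng > min_score / 10:
        return round(min_score + 2 * rng / 3)
    return round(max_score * 4 / 5)

print(threshold(30, 90))  # wide range: 30 + 2/3 * 60 = 70
print(threshold(50, 52))  # narrow range: falls back to 4/5 * 52 = 42
```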
Maybe your use case is simpler: you want every domain group to alert, but only on the IP address with the largest hits? This use case is still very unclear.
Hi @ITWhisperer, I really appreciate your patience and support. Here, in the results, end_time is not populating for most of the events. I need only the events that contain both start and end timestamps.
I have a query to display the following 3 fields:

| table pp_user_action_name,Today_Calls,Avg_today

I want to replace the 'Avg_today' column header with today's date, like '11/1/2023'. Is it possible?