All Posts


Do we have any content to detect "Moniker Link" - CVE-2024-21413
We have a server running Splunk Enterprise version 7.3.4. However, I couldn't find a version of Splunk DB Connect compatible with that release on Splunkbase. Could I get a Splunk DB Connect installation file that is compatible with Splunk Enterprise 7.3.4?
One problem at a time. Your original ask was a free-hand search without matching a specific field name.  It is perhaps best to close this one and post another question about extracting free-hand strings based on lookup values, as these are very different search techniques.  You will need to explain your lookup AND event data more specifically than the mock values tenant1, tenant2, tenant3, and xxx.  In particular, what does the appended "xxx" signify?  How would these values appear in event data?  (Anonymize, but be specific enough for volunteers without intimate knowledge of your data to be helpful.)
Running the code below yields ut_domain as ".com" instead of "somethin.shop". It seems that if the subdomain contains a valid TLD string (e.g. .com), ut_domain is not parsed correctly. A domain like "somethingbad.shop" is parsed correctly, as .shop is recognized as a TLD.

| makeresults
| eval domain_full = "something.com.somethin.shop"
| eval list="*"
| `ut_parse(domain_full, list)`

Is it a bug? If so, how can we report it? Any workaround you can think of while waiting for a fix?
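For anyone curious about the underlying logic: registered-domain extraction normally works by matching the longest known public suffix from the right of the FQDN, then keeping one more label. Here is a minimal Python sketch of that idea (not ut_parse's actual implementation; the TLDS set is a tiny hypothetical sample, while the real public suffix list is far larger):

```python
# Minimal illustration of public-suffix-based domain parsing.
# TLDS is a tiny sample set; the real public suffix list has thousands of entries.
TLDS = {"com", "shop", "co.uk"}

def registered_domain(fqdn: str) -> str:
    """Return one label plus the longest matching public suffix.

    Candidate suffixes are tried longest-first, so a TLD-looking label
    buried in the subdomain (like the "com" in something.com.somethin.shop)
    is never mistaken for the real suffix.
    """
    labels = fqdn.lower().split(".")
    for i in range(len(labels)):
        suffix = ".".join(labels[i:])
        if suffix in TLDS and i > 0:
            return labels[i - 1] + "." + suffix
    return fqdn

print(registered_domain("something.com.somethin.shop"))  # -> somethin.shop
print(registered_domain("somethingbad.shop"))            # -> somethingbad.shop
```

Scanning whole suffixes right-to-left is what keeps the embedded ".com" from short-circuiting the parse.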
Well, now I have a new issue.  Since the tenant field is renamed to 'search', how can I stats by tenant?  Do I just use the lookup file again?
Ahhhh, got it: when I added [| format] it worked.  And I'm just now seeing your suggestion @yuanliu, thanks!
To diagnose, run

| inputlookup lookup.csv
| stats values(eval(tenant."xxx")) as search
| format

This gives you the exact string passed to the main search. Alternatively, run

| inputlookup lookup.csv
| fields tenant
| eval search = tenant."xxx"

This way, you can see the substitution line by line.  If neither reveals the problem, post the output of this diagnostic. (Anonymize as needed, but reproduce structure/characteristics precisely.) Then, test

| inputlookup lookup.csv
| fields tenant
| eval search = tenant."xxx"
| format
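Outside Splunk, the effect of `stats values(...) as search | format` is roughly: collect the distinct values, append the suffix, and OR-join them inside parentheses. A hypothetical Python sketch of that string-building (the function name and the `( ... )` spacing are illustrative, not Splunk's exact internals):

```python
def build_search(tenants, suffix="xxx"):
    """Mimic `stats values(eval(tenant.suffix)) as search | format`:
    dedupe, sort, append the suffix, and OR-join inside parentheses."""
    values = sorted({t + suffix for t in tenants})
    return "( " + " OR ".join(values) + " )"

print(build_search(["tenant1", "tenant2", "tenant1"]))
# -> ( tenant1xxx OR tenant2xxx )
```

Seeing only `tenant1xxx` in the job inspector usually means the subsearch returned a single row, which is exactly what the diagnostic above will expose.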
Yeah, I tried yours too, no dice.  I missed your reply here, but if you look above, I am now adding the meta-keyword search to the subsearch.  The issue is that it is only adding the first lookup value.
OK, I found a different thread and can see that I have to use "search" in the eval.  Awesome.  But now, instead of getting '(tenant1xxx OR tenant2xxx OR ...)', I am only getting tenant1xxx.
You did not get the essence of @PickleRick's solution.  In a subsearch (weirdly enough, but documented nonetheless), the meta-keyword search has a special meaning.  You cannot replace it with any other string. (Actually, there is ONE synonym. :-)  But @PickleRick forgot to close the subsearch.  See the explanation in my alternative.
I think you are taking <my_index> in @gcusello's solution a step too literally.  The angle brackets (<>) are not part of the search string, just a suggestion for you to substitute your own index name.  If I guess correctly, your index name is "ops_sec", not "<ops_sec>".  Is it? The solution should work.  Here is an alternative that takes advantage of some tstats options to remove most of the subsequent calculations.

| tstats count WHERE index = ops_sec ``` replace ops_sec with your index name ``` earliest=-6d BY sourcetype _time span=3d
| stats list(count) as count3d by sourcetype
| eval diff_perc = (tonumber(mvindex(count3d, 1)) - tonumber(mvindex(count3d, 0))) / tonumber(mvindex(count3d, 0)) * 100
| where diff_perc < 30

This is a simulation using internal indexes on my laptop:

| tstats count WHERE index = _* ``` replace _* with your index name ``` earliest=-6d BY sourcetype _time span=3d
| stats list(count) as count3d by sourcetype
| eval diff_perc = (tonumber(mvindex(count3d, 1)) - tonumber(mvindex(count3d, 0))) / tonumber(mvindex(count3d, 0)) * 100
| where diff_perc < 30

Output is

sourcetype           count3d   diff_perc
mongod               183, 3    -98.36065573770492
splunk_telemetry     13, 8     -38.46153846153847
splunk_web_access    17, 5     -70.58823529411765
splunk_web_service   274, 17   -93.7956204379562
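The diff_perc eval is just the percent change from the first 3-day bucket to the second. A quick Python check of the same arithmetic (function name is illustrative; count3d holds the two bucket counts in order):

```python
def diff_perc(count3d):
    """Percent change from the first 3-day bucket to the second,
    matching (mvindex(count3d,1) - mvindex(count3d,0)) / mvindex(count3d,0) * 100."""
    first, second = float(count3d[0]), float(count3d[1])
    return (second - first) / first * 100

print(round(diff_perc([183, 3]), 2))  # -> -98.36
print(round(diff_perc([13, 8]), 2))   # -> -38.46
```

Note that `where diff_perc < 30` also matches steep drops (large negative values), which is why all four internal sourcetypes above survive the filter.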
Further, looking at the job, I see this:

litsearch (index=index1 (tenant="tenant1xxx" OR tenant="tenant2xxx" OR tenant="tenant3xxx" OR tenant="tenant4xxx") (splunk_server::splkindx* | fields keepcolorder=t "*" "_bkt" "_cd" "_si" "host" "index" "linecount" "source" "sourcetype" "splunk_server" "new_field"

which is almost what I want.  Again, the tenant field does not exist in the original index; I am looking for the explicit string "tenant1xxx", etc.
If you're using the Splunk App for SOAR Export, there's an option to create one artifact for each value in the field. https://docs.splunk.com/Documentation/SOARExport/latest/UserGuide/Multivaluefields
This is returning 0 results. I've checked the permissions and availability of the lookup file; all good. I've run the desired query explicitly, and it returns many results. Even this:

index=index1
    [ | inputlookup tenants.csv
    | eval new_field=tenant ```<--- forgoing the append```
    | table new_field]

is not returning anything.
As @PickleRick points out, adding a space does have an effect.  The real question is: where are you displaying these results?  It cannot possibly be from Splunk search.  This is what I see with the two values you illustrated:

| makeresults
| eval test="this is a test "
| table test
| append
    [makeresults
    | eval test="this is a test..........................."
    | table test]
| eval length = len(test)

As expected, both of them are aligned to the left. (Splunk search doesn't display right-aligned or center-aligned.)  If some external software cannot handle those trailing spaces, that's a problem with that software, not Splunk.
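The key point is that trailing whitespace is part of the string, so any length-based or equality comparison sees it. A quick Python analogue of what len() reports here:

```python
# Trailing whitespace counts toward string length and breaks equality checks.
a = "this is a test"
b = "this is a test "       # same text, one trailing space
print(len(a), len(b))       # -> 14 15
print(a == b)               # -> False
print(a == b.rstrip())      # -> True once the trailing space is stripped
```

This is why downstream software that trims (or fails to trim) trailing spaces can disagree with what Splunk computed.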
Don't forget to close the subsearch.  Here is an alternative to save a command.

index=whatever
    [ | inputlookup lookup.csv
    | stats values(eval(tenant."xxx")) as search]
You can either search each environment separately (which I assume you don't want to do) or use the LM as a "central search head" from which you'll be able to spawn searches to each of those environments. Then you can just search specific peers. https://docs.splunk.com/Documentation/Splunk/9.2.0/Search/Searchdistributedpeers
As you have witnessed first hand, deciphering someone else's complex search is very difficult even for people who, like yourself, are intimately familiar with the specific dataset and detection logic.  It is many times more difficult for volunteers unfamiliar with those specifics. My suggestion, then, is to start with a description/illustration of the dataset (anonymize as needed), followed by a description of the desired output (an illustration of the current output could help; anonymize as needed), then a description of the detection logic: given the data you describe, how would an analyst discern the desired results without using Splunk?  What fields are available (from each data source) for the analyst to make that determination?  What is in the lookup, and how is it supposed to help?
HFs have always been a bit of an "ugly duckling". They are forwarders, so they are covered by forwarder monitoring, but only for the same set of parameters as UFs. You can try to add them as indexers to the MC, which should give you their health parameters (but can cause issues if you're using a forwarder license on them). Generally, there is no single good answer, since some HFs can't be monitored in any way other than by checking the _internal log (as is done for UFs), so you can't add them as reachable search peers to the MC.
Dear team,

Good day! Hope you are doing well.  I need some help in understanding a correlation search. The search is as follows:

index=email sourcetype="ironport:summary" action=delivered
| fillnull value="" file_name senderdomain
| rex field=sender "\@(?<senderdomain>[^ ]*)"
| eval list="mozilla"
| `ut_parse_extended(senderdomain,list)`
| stats count first(subject) as subject earliest(_time) as earliest latest(_time) as latest values(file_name) as file_name by ut_domain
| inputlookup append=t previously_seen_domains.csv
| stats sum(count) as No_of_emails values(subject) as subject min(earliest) as earliest max(latest) as latest values(file_name) as file_name by ut_domain
| eval isNew=if(earliest >= relative_time(now(), "-1d@d"), 1,0)
| where isNew=1 and No_of_emails>=1
| mvcombine file_name delim=" "
| eval temp_file=split(file_name," ")
| rex field="temp_file" "\.(?<ext>[^\.]*$)"
| eventstats values(ext) as extension by ut_domain
| table latest earliest ut_domain No_of_emails subject file_name temp_file extension
| eval _comment="exchange search here"
| join type=outer ut_domain
    [search index=email sourcetype="MSExchange:2013:MessageTracking" directionality="Incoming" event_id="RECEIVE"
    | stats count by sender_domain
    | fields sender_domain
    | eval list="mozilla"
    | `ut_parse_extended(sender_domain,list)`
    | table ut_domain sender_domain ]
| eval isExchangeFound=if(isnull(sender_domain),"false","true")
| where isExchangeFound="true"
| eval qualifiers=if(No_of_emails>=5,mvappend(qualifiers, "- More Than 5 emails from a previously unseen domain (Possible Spam)."),qualifiers)
| cluster t=0.5 labelonly=1 showcount=0 field=file_name
| eventstats dc(file_name) as similer_attach_count dc(ut_domain) as no_of_domains by cluster_label
| eval qualifiers=if(similer_attach_count>=2 AND match(extension,"(?i)(bat|chm|cmd|cpl|exe|hlp|hta|jar|msi|pif|ps1|reg|scr|vbe|vbs|wsf|lnk|scr|xlsm|dotm|lnk|zip|rar|gz|html|iso|img|one)"), mvappend(qualifiers, "- Suspicious email attachments with similar names, sent from " .no_of_domains. " previously unseen domains. (Qbot Style)"),qualifiers)
| where mvcount(qualifiers)>0
| eval _comment="informational qualifier not counted"
| eval qualifiers=if(match(extension,"(?i)(bat|chm|cmd|cpl|exe|hlp|hta|jar|msi|pif|ps1|reg|scr|vbe|vbs|wsf|lnk|scr|xlsm|dotm|lnk|zip|rar|gz|html|iso|img|one)"), mvappend(qualifiers, "- Email attachment contains a suspicious file extension - " .extension ),qualifiers)
| eval cluster_label=if(isnull(cluster_label),ut_domain,cluster_label)
| stats values(subject) as subject values(no_of_domains) as no_of_domains values(severity) as severity values(file_name) as file_name values(ut_domain) as ut_domain values(qualifiers) as qualifiers min(earliest) as start_time max(latest) as end_time sum(No_of_emails) as No_of_emails by cluster_label
| eval sev=if(no_of_domains>1,mvcount(qualifiers) + 1,mvcount(qualifiers))
| eval urgency=case(sev=1,"low",sev=2,"medium",sev>2,"high")
| eval reason=mvappend("Alert qualifiers:", qualifiers)
| eval dd=" index=email sourcetype=ironport:summary sender IN (\"*".mvjoin(ut_domain, "\", \"*")."\") | eventstats last(subject) as subject by sender | eventstats last(file_name) as file_name by sender | table _time action sender recipient subject file_name"
| table start_time end_time ut_domain subject No_of_emails file_name reason urgency dd
| `security_content_ctime(start_time)`
| `security_content_ctime(end_time)`
| rename No_of_emails as result
| eval network_segment="ABC"
| search ut_domain=* NOT [inputlookup domain_whitelist.csv | fields ut_domain]

The expansion of the macro `ut_parse_extended(senderdomain,list)`:

| lookup ut_parse_extended_lookup url as senderdomain list as list
| spath input=ut_subdomain_parts
| fields - ut_subdomain_parts

We have this search and it works, but it gives a lot of false positives. Even though a domain is added to the lookup table, we still get an alert.
I am a SOC analyst, and I tried to understand this query, but it appears to be very difficult. Can someone please help me simplify it? It would be really helpful. This is the first time I am posting on a community page, so if I missed any information, I apologize; do let me know if more info is required and I will be more than happy to furnish it.  Appreciate your help and support.