All Posts


I have tried to use the following eval to pretty up the return of a field, but the result is always test. I have tried single and double quotes around the host.domain field, but it always just gives test. I keep looking for a typo or something, but I am at a loss.

| eval dct_domain=case(host.domain=="prd", "Production", host.domain=="uat", "Pre-Production", host.domain=="dev", "Development", true(), "test")

TEMPORARY EDIT - While trying a thing from what's been suggested so far, I found that when I click on host.domain in the left side and choose prd, it gives me no results, despite the fact it clearly lists it in results (which, for that type of selection option, it has to be in the results). So I brought the search down to just:

index=dct_foglight_shr "host.domain"=prd

and no results show. Anyway, I'm opening a Splunk ticket.
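One thing worth checking, offered as a sketch rather than a confirmed fix for this thread: in eval expressions, field names that contain dots must be wrapped in single quotes. Without them, host.domain is not treated as a field reference, so no branch matches and every row falls through to the true() default:

```spl
| eval dct_domain=case(
    'host.domain'=="prd", "Production",
    'host.domain'=="uat", "Pre-Production",
    'host.domain'=="dev", "Development",
    true(), "test")
```

Note the quoting asymmetry in SPL: single quotes on the left of a comparison mean "field name", double quotes mean "string literal".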
Hi @Nagalakshmi, you can use your search only if the main search has the field you used as key (in your case "Hostname"). Do you have this field in the main search? Anyway, if you want to filter the results of a search by the results of a lookup, you have to use a subsearch like the following, paying attention that the field name in the main and sub search is the same. So if the lookup has a column called Hostname and the main search has a field called Hostname, you can run something like this:

index=security host=abc sourcetype=Palo [ | inputlookup Palo_devices.CSV | fields Hostname ] | ...

Ciao. Giuseppe
Hi Team, need your assistance with the below.

We have created a new CSV lookup and we are using the query below, but we are getting all the data from the index & sourcetype. The requirement is to get events only for the hosts mentioned in the lookup.

Lookup name: Palo_devices.csv, with only one column, called Hostname

index=security host=abc sourcetype=Palo | lookup Palo_devices.CSV Hostname OUTPUT Hostname

Regards, Nagalakshmi
This says that the command splunk is not recognized. You could try .\splunk.exe btool .... in that directory.
Hi, I would like to know how to install btool on Windows, and can you please tell me how to locate the file? I was trying to run it in Windows as an administrator and I could not get the results.

C:\Program Files\Splunk\bin>splunk btool inputs list
'splunk' is not recognized as an internal or external command, operable program or batch file.

C:\Program Files\Splunk\bin>splunk cmd transfors list
'splunk' is not recognized as an internal or external command, operable program or batch file.

I used these commands and I was not able to get results. Please help me out here.
So if you just want to narrow down on the IncidentIds where this occurred, I think doing a stats aggregation would be more efficient. Something like this:

<base_search>
| fields + _time, IncidentId, Description, Status, Severity
| sort 0 +_time
| stats values(Description) as Description, latest(Status) as Status, dc(Severity) as dc_severity, list(Severity) as Sequence_Severity, earliest(Severity) as Old_Severity, latest(Severity) as New_Severity by IncidentId
| where 'dc_severity'>1
| fields - dc_severity

If you want to retain all of the original events that are part of any IncidentId where this occurred, then you could use some combo of streamstats and eventstats (less efficient but more detailed):

<base_search>
| fields + _time, IncidentId, Description, Status, Severity
| sort 0 +IncidentId, -_time
| streamstats window=2 earliest(Severity) as Old_Severity, latest(Severity) as New_Severity by IncidentId
| eventstats max(eval(if(NOT 'Old_Severity'=='New_Severity', 1, 0))) as status_change by IncidentId
| where 'status_change'>0
| fields - status_change
@dtburrows3 ,Thank you very much; the knowledge is truly helpful.
So to use the original SPL you posted, it would look something like this:

| from datamodel Endpoint.Filesystem
| search action=deleted AND Image IN ("*powershell.exe", "*cmd.exe")
| lookup files_deleted file_path OUTPUT file_path as path_lookup
| where isnotnull(path_lookup)

This method assumes that the field "file_path" is properly extracted from your events and that you have enabled the match_type WILDCARD(file_path) setting in the lookup definition. If the value of "file_path" in an event matches any entry in the lookup, including wildcards, it will return a net-new field to that event named "path_lookup". If an event does not match an entry in the lookup, then no new field is returned for that event. The final where clause in the search keeps only the events where a match was made against the lookup.
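For reference, here is a sketch of how that lookup definition could look in transforms.conf; the stanza name and CSV filename are assumptions based on the thread, not confirmed settings:

```conf
[files_deleted]
filename = files_deleted.csv
# WILDCARD match_type makes lookup rows such as
# /opt/splunk/etc/apps/*/metadata/*.meta match concrete event paths
match_type = WILDCARD(file_path)
```

With this in place, the | lookup files_deleted file_path ... call treats lookup entries as wildcard patterns rather than exact strings. The same setting can also be made in Splunk Web under the lookup definition's advanced options ("Match type").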
Hi, @dtburrows3  I'm still having trouble understanding this query. My goal is to retrieve the file_path field in the event and compare it with a lookup file containing files that should not be deleted. If the file_path in my event matches a file in the lookup file, then the alert should be triggered. Similar to blacklisting malicious IP addresses.
You should be able to set up the "Match Type" configuration under advanced settings when defining a lookup definition for your CSV. Example of its usage on my local instance. SPL used to simulate (you would need to insert your file_paths in the evals to test this):

| makeresults
| eval file_path="/opt/splunk/etc/apps/custom_app/metadata/local.meta"
| append
    [ | makeresults
    | eval file_path="/opt/splunk/etc/apps/custom_app/metadata/default.meta" ]
| lookup file_deleted file_path OUTPUT file_path as deleted_path
``` | where isnotnull(deleted_path) ```
Hi, Thank you, but I tried and it doesn't work. Thanks
Hello guys, I need some help with making a table/dashboard that shows me changes to incidents in our Defender platform. The underlying issue we see is that Defender sometimes, when an incident is handled by automation, de-escalates the severity of a particular incident. So in my index of incidents I want to track, for each specific incident handled by automation, when the severity field changes. The table should look something like this:

IncidentId       Description       Status    Old_Severity     New_Severity

I don't know whether to use the streamstats or the dedup command. I've been fiddling a bit with both but can't seem to get the right output. Anyway, hope you can help me out here. If there's something unclear about my question, let me know so I can clarify.
Hello everyone, I'm a beginner in using Splunk. I'm facing an issue in finding a search solution for the following idea: I'm logging the deletion behavior of files, and I have whitelisted some important files in a lookup. If the file_path in the event matches any of the file_paths in my lookup file, then it should produce a result. Here is the initial search, and it found 2 file_paths. This is my lookup file. Here is my search, but it's not working correctly. Thank you, everyone, for reading!
Hi, usually the character \ is used as an escape character. I haven't tried whether this also works in your case, but you could try it like "\$" in your transforms.conf and see if it works or not. r. Ismo
Hi, I have the following transforms.conf:

[REPLACEMENT_COST]
CLEAN_KEYS = 0
FORMAT = $1"REPLACEMENT_COST2":"$2$s"$3
REGEX = (.*)"REPLACEMENT_COST":([^,]+)(.*)
#SOURCE_KEY = REPLACEMENT_COST
DEST_KEY = _raw

I had to write s in the FORMAT field right after $, since otherwise it does nothing. Is there any option to escape the dollar sign in this field? The relevant props.conf is:

[json_multiline]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
LINE_BREAKER = ([\r\n]+)
MAX_DAYS_AGO = 10000
NO_BINARY_CHECK = true
TIMESTAMP_FIELDS = LAST_UPDATE
TIME_FORMAT = %m/%e/%y %H:%M
category = Custom
pulldown_type = 1
disabled = false
KV_MODE = none
EVAL-DESCRIPTION = replace(DESCRIPTION, "([A-Z])", " \1")
EVAL-SPECIAL_FEATURES = split(replace(SPECIAL_FEATURES, "([A-Z])", " \1"), ",")
LOOKUP-LANGUAGE = LANGUAGE.csv LANGUAGE_ID
TRANSFORMS-REPLACEMENT = REPLACEMENT_COST

Thanks
Hi, as @VatsalJagani said, the syntax ab.cd+something@foo.bar is used by some mail systems to reroute addresses to the correct recipient via that "+something". So you must contact your email provider to add that new domain to the trusted ones. r. Ismo
Hi, you should try something like this:

index=abc sourcetype=123
    (source="allocation" TERM("1=1") OR TERM("2=2") TERM("3=C") Sender=aaa TERM("4=region"))
    OR (source=*block* TERM("1=1") OR TERM("2=2"))
| dedup source ExecId
| stats count

Just test whether dedup is correct for your case. r. Ismo
Hi, I have the two SPL queries below and am trying to combine them together. Could you please suggest how? Expected count is 13919.

spl 1:

index=abc sourcetype=123 source="allocation" TERM("1=1") OR TERM("2=2") TERM("3=C") Sender=aaa TERM("4=region")
| dedup ExecId
| stats count

## Results Count = 4698

spl 2:

index=abc sourcetype=123 source=*block* TERM("1=1") OR TERM("2=2")
| dedup ExecId
| stats count

## Results Count = 9221
When users change the permissions on their knowledge objects from private to app-level sharing, Splunk moves the object to the selected app and changes the metadata files appropriately. Splunk will also make sure there are no duplicate KO names in the same app. What you suggest will work (use a custom app rather than search), but I recommend letting Splunk (and your users) do the work.
Hi, if/when you have enough capability (like the admin role), you could move those to another app and also give permissions at the app or even global level. You could try Settings -> All Configurations, then push "Reassign Knowledge Objects". Just select the correct one and reassign it as you want. There are also some Python scripts you could use for this, like https://github.com/harsmarvania57/splunk-ko-change r. Ismo