All Posts



Thank you for describing your SOC workflow. Yes, that can be implemented. The question remains about your dataset, including the content of the lookup (whitelist), and perhaps also the procedure used to produce this lookup. One particular aspect is the characteristics of sourcetype="ironport:summary" and sourcetype="MSExchange:2013:MessageTracking". How frequently is each updated? Is one extremely large compared with the other? How does each sourcetype, and each lookup, contribute to the workflow you are trying to implement? Do they play similar or differing roles? In other words, describe your workflow in terms of available data. Which fields in each are of particular interest to the SOC analyst? Which field(s) from which source contain the sender domain, and which contain the known domain, for example? What does each lookup contain? How is it of interest to the SOC analyst? There are a lot of miscellaneous fields in the code sample, e.g., file_name. Do they materially contribute to the end results? If so, how?

A technical detail is the macro `ut_parse_extended()`. What does it do? (Which input fields does it take - the explicit inputs are senderdomain and list, obviously, but an SPL macro can also take implicit inputs - and which output fields does it produce or alter?) Another macro, `security_content_ctime()`, is also invoked twice. What does it do?

Additionally, is your main goal to improve performance (join is a major performance killer, as @PickleRick points out), or to improve readability (hence maintainability)? These two do not necessarily converge, as join is better understood in many circles.
Hi @avikc100, as far as I know, it isn't possible to freeze a column against the scroll bars as in Excel. Ciao. Giuseppe
Hi @altink , yes, you have to modify it. Ciao. Giuseppe
I have the following source. I want to extract the time from the source when the data is ingested:

source="/logs/gs/ute-2024-02-05a/2024-02-05_16-17-54/abc.log"

In props:

TRANSFORMS-set_time = source_path_time

In transforms:

[set_time_from_file_path]
INGEST_EVAL = | eval _time = strptime(replace(source, ".*/ute-(\\d{4}-\\d{2}-\\d{2}[a-z])/([^/]+/[^/]+).*","\1"),"%y-%m-%d_%H-%M-%S")

I tried testing it but I am unable to get the _time:

| makeresults
| eval source="/logs/gs/ute-2024-02-05a/2024-02-05_16-17-54/abc.log"
| fields - _time
``` above set test data ```
| eval _time = strptime(replace(source, ".*/compute-(\\d{4}-\\d{2}-\\d{2}[a-z])/([^/]+/[^/]+).*","\1"),"%y-%m-%d_%H-%M-%S")

Thanks in advance
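One way to sanity-check the regex and the strptime format string outside Splunk is a short Python sketch (Python here stands in for the INGEST_EVAL functions; only the sample path is taken from the question, and the capture shown is one possible way to grab the time component - the format string must describe exactly the text the group captures, e.g. %Y for a four-digit year rather than %y):

```python
import re
import time

source = "/logs/gs/ute-2024-02-05a/2024-02-05_16-17-54/abc.log"

# Capture the second path component, which holds the full date-and-time string.
m = re.search(
    r"/ute-\d{4}-\d{2}-\d{2}[a-z]/(\d{4}-\d{2}-\d{2}_\d{2}-\d{2}-\d{2})/",
    source,
)
if m:
    # %Y expects a four-digit year; %y (two digits) would fail on "2024".
    parsed = time.strptime(m.group(1), "%Y-%m-%d_%H-%M-%S")
    epoch = time.mktime(parsed)
```

Checking each piece in isolation like this often shows whether the problem is the regex, the capture group, or the format string.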
This is my Splunk query:   This is the output: How can I freeze the first column, where the interface names are shown? The problem is that when dragging to the right, the interface names can no longer be seen. Below is the source code:
Is there such a thing as a Splunk AI forwarder that is placed on a device and lets you control the flow of data through biometrics? Or a smart forwarder?
Thank you very much @gcusello.

OK, I will go for extending _internal retention from 30 to 60 days. But when I open - "Open in Search" - the first two dashboards, the code is as below.

Daily License Usage:

index=_internal [`set_local_host`] source=*license_usage.log* type="RolloverSummary" earliest=-30d@d
| eval _time=_time - 43200
| bin _time span=1d
| stats latest(b) AS b by slave, pool, _time
| timechart span=1d sum(b) AS "volume" fixedrange=false
| join type=outer _time
    [search index=_internal [`set_local_host`] source=*license_usage.log* type="RolloverSummary" earliest=-30d@d
    | eval _time=_time - 43200
    | bin _time span=1d
    | dedup _time stack
    | stats sum(stacksz) AS "stack size" by _time]
| fields - _timediff
| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]

Percentage of Daily License Quota Used:

index=_internal [`set_local_host`] source=*license_usage.log* type="RolloverSummary" earliest=-30d@d
| eval _time=_time - 43200
| bin _time span=1d
| stats latest(b) AS b latest(stacksz) AS stacksz by slave, pool, _time
| stats sum(b) AS volumeB max(stacksz) AS stacksz by _time
| eval pctused=round(volumeB/stacksz*100,2)
| timechart span=1d max(pctused) AS "% used" fixedrange=false

As you can see, there is earliest=-30d@d in both of them: twice in the first dashboard and once in the second. I guess I need to set those to earliest=-60d@d. Otherwise, would extending the retention to 60 days have any value?

Best regards,
Altin
@PickleRick You are correct. It is poorly written. I have already made three suggestions to them, one of which is to split it into an ingest piece and a search piece.
A quick glance over this app (I haven't downloaded it and looked into the internals, just relying on the docs) suggests that it is simply badly written. It aims at downloading some data from the Bloodhound service (whatever that is) and putting that data into an index. The problem is that the authors of this app have probably never seen anything more complicated than a single-server Splunk installation. Also, even the destination index seems to be hardcoded (or at least provided by default and not documented as configurable). So I wouldn't be surprised at all if the app used the KV store for whatever it needs it for.
CSS is probably the way I'd try to go for something like this, but it might require more fiddling around than just a simple align for the cell contents.
I followed all of the steps and I'm not seeing anything in Splunk for these email logs. Doing | sendemail also did nothing. Some alerts work perfectly fine but others don't. Configuration is identical too. 
Try this:

```Gets the original timestamp. In this case it's when the latest data was ingested into Splunk. The friendly time will be in YOUR LOCAL time zone set in Splunk preferences.```
index=something sourcetype=something
| stats latest(_time) as LATEST_DATA_PULL_TIME
| eval LATEST_DATA_PULL_TIME_friendly_local=strftime(LATEST_DATA_PULL_TIME, "%m/%d/%Y %I:%M:%S %P")
```Sets the TARGET time zone.```
| eval to_tz="US/Eastern"
```Converts the timestamp to friendly time showing YOUR LOCAL time zone, then replaces YOUR LOCAL time zone with the TARGET time zone, then converts the time back into epoch. This creates a new epoch timestamp which is shifted by the difference between YOUR LOCAL time zone and the TARGET time zone.```
| eval LATEST_DATA_PULL_TIME_tz_replaced=strptime(mvindex(split(strftime(LATEST_DATA_PULL_TIME, "%c|%Z"), "|"), 0)+"|"+to_tz, "%c|%Z")
```Calculates the difference between the original timestamp and the shifted timestamp, essentially returning the difference between YOUR LOCAL time zone and the TARGET time zone, in seconds.```
| eval time_diff=LATEST_DATA_PULL_TIME-LATEST_DATA_PULL_TIME_tz_replaced
```Increases the original timestamp by the difference calculated in the previous step, and then converts it to friendly time.```
| eval LATEST_DATA_PULL_TIME_tz_corrected_friendly=strftime(LATEST_DATA_PULL_TIME+time_diff, "%m/%d/%Y %I:%M:%S %P")
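For comparison, outside SPL the same "render an epoch in a target time zone" conversion can be done directly, with no shifting math. A Python sketch (not part of the search above; the epoch value is an arbitrary example, and %p is used instead of SPL's %P):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+, requires tzdata on some platforms

epoch = 1700000000  # arbitrary example: 2023-11-14 22:13:20 UTC

# Interpret the epoch directly in the target zone; the offset (including DST)
# is applied by the time zone database, not by manual arithmetic.
eastern = datetime.fromtimestamp(epoch, tz=ZoneInfo("US/Eastern"))
friendly = eastern.strftime("%m/%d/%Y %I:%M:%S %p")
```

The SPL trick above is useful precisely because SPL's strftime/strptime lack this direct zone parameter, so the offset has to be derived by the replace-and-reparse round trip.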
We cannot choose the default source type _json while onboarding. We need to extract the JSON data within the log file, which is essential for the app owner. Log format:

2024-01-01T09:50:44+01:00 hostname APP2SAP[354]: {JSON data}

I have a Splunk intermediate forwarder reading these log files. The log file has non-JSON data followed by JSON data, which is bread and butter for the application team (log format as shown above). If I forward the data as-is to Splunk, the extraction is not correct, since the event starts with non-JSON data. Now I need props and/or transforms to extract it, and I am not sure how to write them.
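To illustrate the shape of the problem, the split that the props/transforms configuration would need to perform can be sketched in Python (the sample line and its JSON field names are invented for illustration; everything before the first "{" is treated as the syslog prefix):

```python
import json

# Invented sample event in the same shape as the question's log format.
line = '2024-01-01T09:50:44+01:00 hostname APP2SAP[354]: {"status": "ok", "code": 200}'

# Locate the start of the JSON payload and split the event there.
brace = line.index("{")
prefix = line[:brace].rstrip()   # timestamp + host + process prefix
payload = line[brace:]           # the JSON document itself
data = json.loads(payload)
```

In Splunk terms this corresponds roughly to keeping the prefix for timestamp/host extraction while handing only the payload portion to JSON field extraction.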
Hi @gcusello, The kv-store isn't usable in the SHs. The original ask was to take the items in the kv-store and create an alert for each item. Since the kv-store gets recreated every 4 hours, this would cause alert duplication, which we wanted to avoid. I added another kv-store on the HF that contains a hash of the values in each item, and then I check whether an item is new or already exists in the kv-store. If it is new, it raises an alert; if not, it gets dropped. The analysts decided they wanted the kv-store copied to the SHs they use - hence the original question, since they don't have access to the HF. Currently the best suggestion is to output the kv-store as a CSV, scp it from the HF to the SHs, and load it into a local kv-store. Regards, Joe
Hi @jwhughes58, I'm not sure that a kv-store created on a HF is usable from a Search Head: the kv-store must be located on the SH. On the HF, you should locate only the inputs.conf, props.conf and transforms.conf, not the other parts of the app. Ciao. Giuseppe
Hi @altink, never modify anything in the default folder! At the first upgrade you'll lose all your changes. Copy the entire file, or only the stanza you want to modify, into the local folder and change the value there. At the end, restart Splunk. Ciao. Giuseppe
Hi @thaghost99, please try this regex: | rex "(?ms)(?<node>node\d+).*?Attack database version:(?<Attack_database_version>\d+).*?Detector version\s*:(?<Detector_version>[^\n]+).*?Policy template version\s*:(?<Policy_template_version>\d+)" that you can test at https://regex101.com/r/R9SWnM/1 Ciao. Giuseppe
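The same pattern can also be checked locally with a Python sketch (note Python spells named groups as ?P<...>, and re.DOTALL replaces the inline s flag; the sample event below is invented to match the expected shape, since the original event is not shown):

```python
import re

# Invented sample event shaped like the output the regex expects.
sample = """node3
Attack database version:2024
Detector version : 12.4.085
Policy template version:17
"""

pattern = re.compile(
    r"(?P<node>node\d+)"
    r".*?Attack database version:(?P<Attack_database_version>\d+)"
    r".*?Detector version\s*:(?P<Detector_version>[^\n]+)"
    r".*?Policy template version\s*:(?P<Policy_template_version>\d+)",
    re.DOTALL,  # let .*? cross newlines, like (?s) in the SPL rex
)
m = pattern.search(sample)
```

Note that the Detector_version group captures to end of line, so it may include a leading space that you would strip afterwards.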
Using the SOAR export app in Splunk, we are pulling certain alerts into SOAR. Depending on the IP, the artifacts are grouped into a single container. Now I need to create one ticket per container using a playbook. But what happens is that if the container has multiple artifacts, it creates one ticket for each artifact. Any idea how to solve this? Phantom, Splunk App for SOAR Export
Hi folks, I have a quick question. Currently I have a syslog event, and I need to see the raw data in Splunk with the info in a different order.

Example original syslog:
(?<field1>REGEX),(?<field2>REGEX),(?<field3>REGEX), etc.

What I want to see indexed in Splunk:
(?<field1>REGEX),(?<field3>REGEX),,(?<TIMESTAMP>REGEX),(?<field2>REGEX)

I tried with the SED command in props.conf; it is really useful for cleaning the data, but not for reordering the info.

Thanks in advance,
Alex
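For what it's worth, a sed-style substitution with backreferences can emit captured groups in a new order; the idea can be sketched in Python's re.sub, which uses the same \1, \2 backreference notation (the field values here are invented placeholders):

```python
import re

raw = "valueA,valueB,valueC"

# Capture the three comma-separated fields, then emit them in a new order
# (field1, field3, field2) - the same shape as a sed "s/.../\1,\3,\2/" rewrite.
reordered = re.sub(r"^([^,]+),([^,]+),([^,]+)$", r"\1,\3,\2", raw)
```

Whether this is advisable at ingest time in props.conf is a separate question, but the reordering itself is just capture-group shuffling.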
I have a saved search "MySearch" that takes a parameter "INPUT_SessionId", something like this:

index=foo
| ... some stuff
| search $INPUT_SessionId$
| ... more stuff

"MySearch" is then invoked like this:

| savedsearch "MySearch" INPUT_SessionId="abc123"

My challenge is that sometimes my users and I accidentally invoke it with curly braces around the SessionId (it's a long story), like this:

| savedsearch "MySearch" INPUT_SessionId="{abc123}"

When invoked this way, the search produces no results, which is confusing until the user realizes they accidentally included curly braces. I'd like to change things inside "MySearch" so that it strips curly braces from $INPUT_SessionId$ before continuing to use the value. For a typical field value I know how to use trim, like:

| eval someField=trim(someField, "{}")

How do I do something like trim(), but on the value of the parameter $INPUT_SessionId$?
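For reference, the trim semantics being asked about - remove any of a set of characters from both ends, leave other values untouched - behave like Python's str.strip (a sketch of the semantics only, not of the savedsearch mechanics; the literal values are taken from the question):

```python
val = "{abc123}"

# strip("{}") removes any leading/trailing "{" or "}" characters,
# mirroring SPL's trim(X, "{}").
clean = val.strip("{}")

# A value without braces passes through unchanged.
plain = "abc123".strip("{}")
```

The open question above is purely where in the saved search that trimming can be applied to the $INPUT_SessionId$ token itself.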