All Posts


Looks like you don't have nested JSON events in there, so have you tried just breaking on the } and { characters? Try this: [your_sourcetype] SHOULD_LINEMERGE = false LINE_BREAKER = \}(\s+)\{ (LINE_BREAKER needs a capturing group; whatever the group matches is discarded between events, so each event still ends with } and the next still starts with {.)
Hello, I have some issues with parsing events and a few sample events are given below: {"eventVer":"2.56", "userId":"A021", "accountId":"Adm01", "accessKey":"21asaa", "time":"2023-12-03T09:10:15", "statusCode":"active"} {"eventVer":"2.56", "userId":"A021", "accountId":"Adm01", "accessKey":"21asaa", "time":"2023-12-03T09:09:11", "statusCode":"active"} {"eventVer":"2.56", "userId":"A021", "accountId":"Adm02", "accessKey":"26dsaa", "time":"2023-12-03T09:09:08", "statusCode":"active"} {\"eventVer\":\"2.56", "userId":"B001", "accountId":"Test04", "accessKey":"21fsda", "time":"2023-12-03T09:09:04", "statusCode":"active"} {\"eventVer\":\"2.56", "userId":"B009", "accountId":"Adm01", "accessKey":"21assaa", "time":"2023-12-03T09:09:01", "statusCode":"active"} {"eventVer":"2.56", "userId":"B023", "accountId":"Adm01", "accessKey":"30tsaa", "time":"2023-12-03T09:08:55", "statusCode":"active"} {"eventVer":"2.56", "userId":"A025", "accountId":"Adm01", "accessKey":"21asaa", "time":"2023-12-03T09:08:51", "statusCode":"active"} {"eventVer":"2.56", "userId":"C015", "accountId":"Dev01", "accessKey":"41scab", "time":"2023-12-03T09:08:48", "statusCode":"active"} The event breaking point is marked in bold and I used LINE_BREAKER=([\r\n]*)\{"eventVer":" in my props.conf file, but it is not breaking all events as expected. Any recommendations will be highly appreciated. Thank you.
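For reference, the suggestion from the reply above written out as a complete props.conf sketch (the sourcetype name is a placeholder, and the timestamp settings are assumptions derived from the sample events):

[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = \}(\s+)\{
TIME_PREFIX = "time":"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S

Breaking on the whitespace between } and { also catches the events that start with escaped quotes ({\"eventVer\":\"...), which the original \{"eventVer":" pattern never matched; the capturing group is what gets discarded between events.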
An even longer answer: How can the search head know who is viewing and which time zone each user prefers - if not from a user preference? At the end of the day, this is not a technical question but a design question.  As you stated, you have a global workforce, implying that you cannot force everyone to accept Eastern US time.  Is this correct?  If it is, you need to ask yourself: (1) What is the reason you cannot allow those special users to set their own preference? (2) If there is a good reason for 1, will a dashboard selector be acceptable? One way or another, you need to give your global workforce a method to tell the search head their preference.  After the user makes a selection, then yes, there is a way to display a specific time zone.
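To illustrate the selector idea: if a dashboard dropdown supplied an offset-in-hours token - here called $tz_offset$, which is an assumption, not an existing token - a panel search could shift the displayed time along these lines (note that a fixed offset ignores DST):

index=something sourcetype=something
| eval display_time = strftime(_time + tonumber("$tz_offset$") * 3600, "%Y-%m-%d %H:%M:%S")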
Currently, each of my indexes is set to its own specific frozenTimePeriodInSecs, but I am noticing they are not rolling over to cold when the frozenTimePeriodInSecs value is reached: Data Age keeps growing while Frozen Age stays at whatever frozenTimePeriodInSecs is set to. maxWarmDBCount is set to:   maxWarmDBCount = 4294967295    Does this have an effect? If the value is changed, would data roll to cold?
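For context, frozenTimePeriodInSecs governs when buckets are frozen (archived or deleted), not when warm buckets roll to cold; warm-to-cold rolling is driven by maxWarmDBCount and homePath.maxDataSizeMB, so with maxWarmDBCount = 4294967295 buckets will essentially never roll to cold by count. A sketch of the relevant indexes.conf settings (the index name and the size/age values are illustrative assumptions):

[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# warm -> cold: a warm bucket rolls when either limit below is exceeded
maxWarmDBCount = 300
homePath.maxDataSizeMB = 500000
# cold -> frozen: a bucket freezes once its newest event is older than this (seconds)
frozenTimePeriodInSecs = 5184000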
Let me clarify the requirement.  You want to modify the saved search so it can handle curly brackets that users may accidentally enter when invoking it.  If this is correct, you can do something like   index=foo | ... some stuff | search [| makeresults format=csv data="search $INPUT_SessionId$" | eval search = replace(search, "{|}", "") | format] | ... more stuff   (Note trim(someField, "{}") will not work in your use case because "{" does not appear at the beginning of $INPUT_SessionId$.)
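A quick self-contained check of that replace() idiom (the SessionId value here is made up):

| makeresults
| eval SessionId="{8f3a-4d2c-9b1e}"
| eval cleaned=replace(SessionId, "{|}", "")

This returns cleaned=8f3a-4d2c-9b1e, with both curly brackets removed regardless of where they appear in the value.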
Thank you for describing your SOC workflow.  Yes, that can be implemented.  The questions remain about your dataset, including the content of the lookup (whitelist), and maybe also the procedure used to produce this lookup.  One particular aspect is the characteristics of sourcetype="ironport:summary" and sourcetype="MSExchange:2013:MessageTracking". How frequently is each updated? Is one extremely large compared with the other? How do each sourcetype and each lookup contribute to the workflow you are trying to implement?  Do they play similar roles or differing roles?  In other words, describe your workflow in terms of available data. Which fields in each are of particular interest to the SOC analyst?  Which field(s) from which source contain the sender domain, and which contain the known domain, for example? What does each of the lookups contain?  How are they of interest to the SOC analyst? There are a lot of miscellaneous fields in the code sample, e.g., file_name.  Do they materially contribute to the end results?  If so, how? A technical detail is the macro `ut_parse_extended()`.  What does it do? (Which input fields does it take - the explicit inputs are senderdomain and list, obviously, but an SPL macro can also take implicit inputs - and which output fields does it produce or alter?) Another macro, `security_content_ctime()`, is also invoked twice.  What does it do? Additionally, is your main goal to improve performance (join is a major performance killer, as @PickleRick points out), or to improve readability (hence maintainability)? These two do not necessarily converge, as join is better understood in many circles.
Hi @avikc100, as far as I know it isn't possible to freeze a column against the scroll bars as in Excel. Ciao. Giuseppe
Hi @altink , yes, you have to modify it. Ciao. Giuseppe
I have the following source. I want to extract the time from the source when the data is ingested:

source="/logs/gs/ute-2024-02-05a/2024-02-05_16-17-54/abc.log"

In props:
TRANSFORMS-set_time = source_path_time

In transforms:
[set_time_from_file_path]
INGEST_EVAL = | eval _time = strptime(replace(source, ".*/ute-(\\d{4}-\\d{2}-\\d{2}[a-z])/([^/]+/[^/]+).*","\1"),"%y-%m-%d_%H-%M-%S")

I tried testing it but I am unable to get _time:

| makeresults | eval source="/logs/gs/ute-2024-02-05a/2024-02-05_16-17-54/abc.log" | fields - _time ``` above set test data ``` | eval _time = strptime(replace(source, ".*/compute-(\\d{4}-\\d{2}-\\d{2}[a-z])/([^/]+/[^/]+).*","\1"),"%y-%m-%d_%H-%M-%S")

Thanks in advance
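For comparison, a sketch of a configuration that should work for this path layout, assuming the goal is to take _time from the 2024-02-05_16-17-54 directory (your_sourcetype is a placeholder, the transform name referenced in props must match the one defined in transforms, and backslash escaping sometimes needs tweaking in .conf files):

# props.conf
[your_sourcetype]
TRANSFORMS-set_time = set_time_from_file_path

# transforms.conf
[set_time_from_file_path]
# := overwrites the existing _time; note %Y (4-digit year) and the capture group on the time directory
INGEST_EVAL = _time := strptime(replace(source, ".*/ute-\\d{4}-\\d{2}-\\d{2}[a-z]/([^/]+)/.*", "\1"), "%Y-%m-%d_%H-%M-%S")

And a quick search-time check of the regex and strptime format (no props/transforms involved):

| makeresults
| eval source="/logs/gs/ute-2024-02-05a/2024-02-05_16-17-54/abc.log"
| eval _time = strptime(replace(source, ".*/ute-\\d{4}-\\d{2}-\\d{2}[a-z]/([^/]+)/.*", "\1"), "%Y-%m-%d_%H-%M-%S")

The main differences from the attempt above: the stanza name matches what props references, the capture group grabs the time directory rather than the date directory, %Y is used instead of %y, and INGEST_EVAL has no leading "| eval".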
This is my Splunk query:   This is the output:  How can I freeze the first column where the interface names are showing? The problem is that when dragging to the right, I can no longer see the interface names. Below is the source code:
Is there such a thing as a Splunk AI forwarder that is placed on a device so that you can control the flow of data through biometrics? Or a Smart Forwarder?
Thank you very much @gcusello. OK, I will go for extending the _internal retention from 30 to 60 days. But when I use "Open in Search" on the first two dashboard panels, the code is as below.

Daily License Usage
index=_internal [`set_local_host`] source=*license_usage.log* type="RolloverSummary" earliest=-30d@d | eval _time=_time - 43200 | bin _time span=1d | stats latest(b) AS b by slave, pool, _time | timechart span=1d sum(b) AS "volume" fixedrange=false | join type=outer _time [search index=_internal [`set_local_host`] source=*license_usage.log* type="RolloverSummary" earliest=-30d@d | eval _time=_time - 43200 | bin _time span=1d | dedup _time stack | stats sum(stacksz) AS "stack size" by _time] | fields - _timediff | foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]

Percentage of Daily License Quota Used
index=_internal [`set_local_host`] source=*license_usage.log* type="RolloverSummary" earliest=-30d@d | eval _time=_time - 43200 | bin _time span=1d | stats latest(b) AS b latest(stacksz) AS stacksz by slave, pool, _time | stats sum(b) AS volumeB max(stacksz) AS stacksz by _time | eval pctused=round(volumeB/stacksz*100,2) | timechart span=1d max(pctused) AS "% used" fixedrange=false

As you can see, there is earliest=-30d@d in both of them: twice in the first search and once in the second. I guess I need to set those to earliest=-60d@d; otherwise, would extending the retention to 60 days have any value? best regards Altin
@PickleRick You are correct.  It is poorly written.  I have already made three suggestions to them, one of which is to split it into an ingest piece and a search piece.
A quick glance over this app (I haven't downloaded it and looked into the internals, just relying on the docs) suggests that this app is simply badly written. It aims at downloading some data from the Bloodhound service (whatever that is) and putting that data into an index. The problem is that the authors of this app have probably never seen anything more complicated than a single-server Splunk installation. Also, even the destination index seems to be hardcoded (or at least provided by default and not documented as configurable). So I wouldn't be surprised at all if the app used the KV store for whatever it needs to use it for.
CSS would probably be the way I'd try to go for something like this, but it might require more fiddling around than just a simple align for the cell contents.
I followed all of the steps and I'm not seeing anything in Splunk for these email logs. Doing | sendemail also did nothing. Some alerts work perfectly fine but others don't. Configuration is identical too. 
Try this:

```Gets the original timestamp. In this case it's when the latest data was ingested into Splunk. The friendly time will be in YOUR LOCAL time zone set in Splunk preferences.```
index=something sourcetype=something
| stats latest(_time) as LATEST_DATA_PULL_TIME
| eval LATEST_DATA_PULL_TIME_friendly_local=strftime(LATEST_DATA_PULL_TIME, "%m/%d/%Y %I:%M:%S %P")
```Sets the TARGET time zone.```
| eval to_tz="US/Eastern"
```Converts timestamp to friendly showing YOUR LOCAL time zone, then replaces YOUR LOCAL time zone with the TARGET time zone, then converts the time back into epoch. This creates a new epoch timestamp which is shifted by the difference between YOUR LOCAL time zone and the TARGET time zone.```
| eval LATEST_DATA_PULL_TIME_tz_replaced=strptime(mvindex(split(strftime(LATEST_DATA_PULL_TIME, "%c|%Z"), "|"), 0)+"|"+to_tz, "%c|%Z")
```Calculates the difference between the original timestamp and the shifted timestamp, essentially returning the difference between YOUR LOCAL time zone and the TARGET time zone, in seconds.```
| eval time_diff=LATEST_DATA_PULL_TIME-LATEST_DATA_PULL_TIME_tz_replaced
```Increases the original timestamp by the difference calculated in the previous step, and then converts it to friendly time.```
| eval LATEST_DATA_PULL_TIME_tz_corrected_friendly=strftime(LATEST_DATA_PULL_TIME+time_diff, "%m/%d/%Y %I:%M:%S %P")
We cannot choose the default source type _json while onboarding. We need to extract the JSON data within the log file, which is essential for the app owner. Log format: 2024-01-01T09:50:44+01:00 hostname APP2SAP[354]: {JSON data} I have a Splunk intermediate forwarder reading these log files. The log file has non-JSON data followed by JSON data, which is bread and butter for the application team (log format as shown above). If I forward the data as-is to Splunk, the extraction is not proper, since the event has non-JSON data at the beginning. Now I need props and/or transforms to do the extraction, and I am not sure how to write them.
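One common pattern, sketched here with your_sourcetype as a placeholder and assuming the syslog-style prefix does not need to stay in _raw, is to strip everything before the first { at index time and let automatic JSON extraction run at search time:

# props.conf on the parsing tier (the indexer or the intermediate/heavy forwarder)
[your_sourcetype]
SEDCMD-strip_prefix = s/^[^{]+//

# props.conf on the search head
[your_sourcetype]
KV_MODE = json

If the header must be kept in _raw, an alternative is to leave the event alone and extract the JSON at search time, e.g. | rex field=_raw "(?<json_payload>\{.+\})" | spath input=json_payload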
Hi @gcusello , The kv-store isn't usable from the SHs.  The original ask was to take the items in the kv-store and create an alert for each item.  Since the kv-store gets recreated every 4 hours, this would cause alert duplication, which we wanted to avoid.  I added another kv-store on the HF that contains a hash of the values in the items and then check to see if an item is new or already exists in the kv-store.  If it is new, an alert is raised.  If not, it gets dropped.  The analysts decided they wanted to have the kv-store copied to the SHs they use, hence the original question, since they don't have access to the HF.  Currently the best suggestion is to output the kv-store as a csv, scp it from the HF to the SHs, and load it into a local kv-store. Regards, Joe
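A sketch of that export/import step (the lookup names are made up, and kv-store lookup definitions must already exist on both sides):

On the HF:  | inputlookup my_kvstore_lookup | outputcsv my_kvstore_export.csv
(outputcsv writes the file to $SPLUNK_HOME/var/run/splunk/csv/ on the HF; scp it to the same directory on the SH)
On the SH:  | inputcsv my_kvstore_export.csv | outputlookup my_local_kvstore_lookup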
Hi @jwhughes58, I'm not sure that a kv-store created on a HF is usable from a Search Head: the kv-store must be located on the SH. On the HF you should locate only the inputs.conf, the props.conf and the transforms.conf, not the other parts of the app. Ciao. Giuseppe