Splunk Search

Help with Splunk search for Unix timestamp?

vinothkumark
Path Finder

I want to create an alert, and I am writing a search query for it, but I am unable to filter using the time range picker. Since the events contain a Unix timestamp, I tried to convert it, but the time range picker still does not filter correctly.
Can you help me figure out what is wrong here?

Query:

index=isilon sourcetype="emc:isilon:rest" "memory threshold"
| eval "Start Time" = strftime('events.start', "%d/%m/%Y %I:%M:%S %p")
| table "Start Time" events.message


Ideally, when I run this query with the time range picker set to June 12th, there should be NO results, but the results contain June 8th events (attachment provided).

[screenshot attachment: vinothkumark_0-1689098408447.png]

 

Sample event:

{"events": {"devid": 8, "event": 400020001, "id": "8.794044", "lnn": 8, "message": "The SMB server (LWIO) is throttling due to current memory threshold settings. Current memory usage is 90% (23556 MB) and the memory threshold is set to 90%.", "resolve_time": 1686266238, "severity": "critical", "specifier": {"PercentMemoryUsed": 90, "PercentThreshold": 90, "ProcessMemConsumedInMB": 23556, "antime": 1686266290.600042, "devid": 8, "extime": 1686266290.490373, "kmtime": 1686266238.984405, "lnn": 8, "val": 90.0}, "time": 1686266238, "value": 90.0}, "timestamp": "2023-06-12 23:46:57", "node": "0.0.0.0", "namespace": "event"}


{"events": {"devid": 8, "event": 400020001, "id": "8.793138", "lnn": 8, "message": "The SMB server (LWIO) is throttling due to current memory threshold settings. Current memory usage is 90% (23556 MB) and the memory threshold is set to 90%.", "resolve_time": 1686248504, "severity": "critical", "specifier": {"PercentMemoryUsed": 90, "PercentThreshold": 90, "ProcessMemConsumedInMB": 23556, "antime": 1686248570.519368, "devid": 8, "extime": 1686248570.447457, "kmtime": 1686248504.901769, "lnn": 8, "val": 90.0}, "time": 1686248504, "value": 90.0}, "timestamp": "2023-06-12 23:46:57", "node": "0.0.0.0", "namespace": "event"}


burwell
SplunkTrust

Hello. We can't see what time Splunk thinks those events occurred (their _time).

Do you have a props.conf for that sourcetype/source that tells the Splunk indexer which field to use as the actual event time?

https://docs.splunk.com/Documentation/Splunk/latest/Admin/propsconf

Something like this (TIME_FORMAT = %s means the timestamp is a Unix epoch, and the prefix points at the inner "time" field, whose value is not quoted):

TIME_FORMAT = %s
TIME_PREFIX = \"time\":\s*
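Applied to this data, a minimal stanza might look like the sketch below. The stanza name and placement are assumptions: it would need to live wherever timestamp extraction actually happens for this sourcetype (the heavy forwarder if structured/INDEXED_EXTRACTIONS parsing is in play, otherwise the indexer), and if the sourcetype already uses INDEXED_EXTRACTIONS = JSON, the structured-data route via TIMESTAMP_FIELDS may apply instead.

[emc:isilon:rest]
# use the inner events "time" epoch as the event time
TIME_PREFIX = \"time\":\s*
TIME_FORMAT = %s
# the 10-digit epoch sits right after the prefix, so a short lookahead is enough
MAX_TIMESTAMP_LOOKAHEAD = 20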

 

 


vinothkumark
Path Finder

Not on the indexer, but I can see this props.conf on the heavy forwarder:

[PureStorage_REST]
INDEXED_EXTRACTIONS = JSON
TIMESTAMP_FIELDS = time,opened
TIME_FORMAT = %Y-%m-%dT%H:%M:%SZ
TZ = UTC
detect_trailing_nulls = auto
SHOULD_LINEMERGE = false
KV_MODE = none
AUTO_KV_JSON = false
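One way to confirm which props the heavy forwarder actually applies to the emc:isilon:rest sourcetype is btool (run on the HF; adjust $SPLUNK_HOME to your install):

$SPLUNK_HOME/bin/splunk btool props list emc:isilon:rest --debug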


vinothkumark
Path Finder

_time always matches the time range picker. For example, if I set it to June 8th, the results look like:

[screenshot attachment: vinothkumark_0-1689223513515.png]



If I set the time range to June 12th, the results look like:

[screenshot attachment: vinothkumark_1-1689223552213.png]

 


inventsekar
SplunkTrust
index=isilon sourcetype="emc:isilon:rest" "memory threshold"
| eval "Start Time" = strftime('events.start', "%d/%m/%Y %I:%M:%S %p")
| table _time _indextime "Start Time" events.message

Can you please run this and share the results with us?
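If _indextime comes back blank or hard to read in the table, here is a variant that renders both times as text (a sketch on the same base search):

index=isilon sourcetype="emc:isilon:rest" "memory threshold"
| eval event_time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| eval index_time=strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| table event_time index_time events.time events.message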

Do you know whether the UF, heavy forwarder, and indexer have the same clock time (are they using NTP for time sync)?
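A rough way to spot clock skew or indexing delay from the data itself (a sketch; consistently large or negative values would point to a time problem somewhere in the chain):

index=isilon sourcetype="emc:isilon:rest" "memory threshold"
| eval lag_seconds=_indextime - _time
| stats min(lag_seconds) max(lag_seconds) avg(lag_seconds)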

thanks and best regards,
Sekar

PS: If this or any post helped you in any way, please consider upvoting. Thanks for reading!

burwell
SplunkTrust

So in the query where you show the events, can you display _time as well?
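For example, extending the original search (a sketch; the sample events carry the epoch in events.time, so swap the field name back to events.start if that is what your data really uses):

index=isilon sourcetype="emc:isilon:rest" "memory threshold"
| eval "Start Time" = strftime('events.time', "%d/%m/%Y %I:%M:%S %p")
| table _time "Start Time" events.message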
