All Posts

Splunk always renders the time (either when you explicitly call strftime() or when it displays the _time field) according to the timezone set in that user's preferences. There is no way to specify another timezone for time display. The only way you can try to "cheat the system" is to add an artificial offset to the timestamp and pretend it's rendered in another timezone, but that's an ugly and somewhat unreliable solution.
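A minimal sketch of that offset trick, assuming you want times displayed as if in UTC+1 regardless of the user's preference (the field name display_time is my own, not anything standard):

```spl
| eval display_time = strftime(_time + 3600, "%Y-%m-%d %H:%M:%S")
```

Note the rendered string no longer corresponds to _time, so any sorting or comparisons should still be done on _time itself.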
Hello, I've got a Lambda function exporting AWS logs via HEC through my HFs to my indexers. Unfortunately, the AWS logs are coming in with event.* as all of the field names, whereas the Splunk_TA_aws add-on is expecting *. I can easily do a rename event.* as *, however that's too late for the out-of-the-box props.conf settings to take effect. This causes things like the "FIELDALIAS-eventName-for-aws-cloudtrail-command = eventName AS command" in props.conf to fail unless I go in and modify it to be event.eventName. I'd like to fix this before it gets to SPL. Is there a way to do this easily? Thanks!
Please don't duplicate threads. You already asked about the "lag" in another thread.
1. What do you mean by event_time? 2. What is _time assigned from in your sourcetypes? 3. Are your sources properly configured (time synchronized, properly set timezones)? Generally speaking - are your sources properly onboarded, or are you just ingesting "something"?
For that it would be easier to just cut the date at the space. Also, working with string-formatted timestamps is just asking for trouble.
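A quick sketch of cutting at the space, assuming Date looks like "01/15/2024 13:45:02" (that sample value is my guess at the data):

```spl
| eval Date = mvindex(split(Date, " "), 0)
```

split() turns the string into a multivalue field, and mvindex(..., 0) keeps only the part before the first space.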
My assumption is that we are stripping off HH:MM:SS from the original value of Date, but we still want the final result formatted as %m/%d/%Y. Hard to say for sure without seeing the original dataset.
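Under that assumption, a strptime/strftime round trip with two different format strings would do it (the input format "%m/%d/%Y %H:%M:%S" is a guess at the original data):

```spl
| eval Date = strftime(strptime(Date, "%m/%d/%Y %H:%M:%S"), "%m/%d/%Y")
```

strptime() parses the full string to epoch, and strftime() writes back only the date portion.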
Wait a second. What's the point of doing strptime/strftime over the same value with the same format?
We're not talking about raising a case with support. We're talking about reaching out to your sales contact. BTW, I don't recall there being an offering with no support at all. BTW2, this is not a "support substitute".
Since upgrading to 9.1.2, I am no longer able to see table output in Splunk Search, even with the most simplistic search. I receive the message "Failed to load source for Statistics Table visualization." I am able to see "Events" and also able to use "fields", just not table. Note that this works when viewing in a Studio dashboard, so the issue seems to be limited to the Search app.
I think transforming the data into a normal Splunk timechart format, then doing a head 12 and then transposing, should do what you are asking.

| inputlookup running_data.csv
| eval _time=strptime(Date, "%m/%d/%Y")
| sort 0 -_time
| timechart span=1d sum(sats) as sats by team
| head 12
| eval Date=strftime(_time, "%m/%d/%Y")
| fields - _*
| transpose 12 header_field=Date
| rename column as team

Example output:
Thanks, this actually is close with some tweaking, but I still can't get around the fact that after the transpose I want it to show the latest 12... Transpose 25, for example, will get me the first 25 dates left to right, and I want the last 12, right to left, if that makes sense? I could do transpose with no integer to show everything, but then that would be an extremely wide table as this data grows over time. On a weekly basis we get a new date, and on those dates we are trying to show the number of sats per team for all teams on that date.
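One way to keep only the most recent 12 columns, sketched against the timechart approach above: since timechart outputs rows in ascending time order, swapping head 12 for tail 12 before the transpose should keep the last 12 dates instead of the first 12 (a sketch, untested against your data):

```spl
| inputlookup running_data.csv
| eval _time=strptime(Date, "%m/%d/%Y")
| timechart span=1d sum(sats) as sats by team
| tail 12
| eval Date=strftime(_time, "%m/%d/%Y")
| fields - _*
| transpose 12 header_field=Date
| rename column as team
```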
From the looks of the screenshot it appears that event_time probably isn't in epoch format, so the diff isn't being properly evaluated. How does it look when you try this?

index=notable
| eval event_epoch=if(NOT isnum(event_time), strptime(event_time, "%m/%d/%Y %H:%M:%S"), 'event_time'),
       orig_epoch=if(NOT isnum(orig_time), strptime(orig_time, "%m/%d/%Y %H:%M:%S"), 'orig_time')
| eval event_epoch_standardized=coalesce(event_epoch, orig_epoch),
       diff_seconds='_time'-'event_epoch_standardized',
       diff=tostring(diff_seconds, "duration")
| table _time, search_name, event_time, diff
Hi, thank you for the update. I have the above query, but I am getting the result for only a few events, not all. Please see the attached screenshot.
A props.conf for these extractions would look like this.

[<sourcetype_name>]
EXTRACT-log_level_and_type = \[TID\:(?<tid>[^\]]+)\]\s+(?<log_level>[A-Z]+)\s+(?<log_type>[^\s]+)
EXTRACT-cid = \[CID\:(?<cid>[^\]]+)\]
EXTRACT-message = [A-Z]+\s+\w+(?:\.\w+)*\s+\-\s+(?<message>.*)\s+\-\s+\(
EXTRACT-user = user\s+\'(?<user>[^\']+)\'
EXTRACT-client_ip = client\s+(?<client>\d{1,3}(?:\.\d{1,3}){3})\:(?<port>\d+)
EXTRACT-cannot_open_service_error = (?i)cannot\s+open\s+(?<service>[^\s]+)\s+service\s+on\s+computer\s+\'(?<computer>[^\']+)\'
EXTRACT-unable_to_connect_to_host_exception = (?i)\s+\-\s+(?<app>.*?)\s+unable\s+to\s+connect\s+to\s+(?<hostname>[^\s]+)\s+with\s+exception\s+(?<exception_type>[^\:]+)\:\s+(?<exception_message>.*)
EXTRACT-retrieving_class_failed_due_to_error = (?i)\s+\-\s+retrieving\s+the\s+(?<class>[^\s]+)\s+class\s+factory\s+for\s+remote\s+component\s+with\s+clsid\s+\{(?<clsid>[^\}]+)\}\s+from\s+machine\s+(?<hostname>[^\s]+)\s+failed\s+due\s+to\s+the\s+following\s+error\:\s+(?<error_code>[^\s]+)
EXTRACT-exception_messages = (?i)(?<exception_type>\w+(\.\w+)*exception)\:\s+(?<exception_message>.*)
EXTRACT-error_codes = (?i)due\s+to\s+error\s+(?<error_code>[^\s]+)

And the accompanying default.meta something like this (depending on your desired permissions):

[props]
access = read : [ * ], write : [ admin, power ]
export = system
Thank you everyone for the information and help! We are a non-profit organization and don't have a support entitlement, which is why I'm posting here ))). I will contact the support team to see if they will provide a reset license.
I am measuring the lag: diff = _time - event_time
Since it sounds like event_time is preferred over orig_time, and it is possible for them to exist in the same event, I would suggest using the coalesce() function. Its inputs go from highest precedence on the leftmost side, with each entry after it one step lower in precedence, so the first non-null field from left to right is what will be used. And finding the average diff over time for each rule can probably be done with a simple timechart. I don't have access to ES or a notable index at the moment, so I will just use the fields described in your original question in the example. Example:

index=notable
| eval event_time_standardized=coalesce(event_time, orig_time),
       diff_seconds='_time'-'event_time_standardized',
       diff_minutes='diff_seconds'/60
| timechart span=1h avg(diff_seconds) as avg_diff_in_seconds, avg(diff_minutes) as avg_diff_in_minutes by search_name
You can do that in 3 steps. 1) Verify the user add/update/delete/activate events are indexed in Splunk. 2) Search the appropriate index for the events. 3) When you have search results you like, select "Alert" from the Save As menu.  Complete the form and select "Send email" from the Trigger Actions menu.
How do I show how long an alert took to trigger from the time the event occurred? To calculate the "diff" in times, I want to subtract either (_time - event_time) or, if event_time is null, (_time - orig_time), and then calculate the average time it took for each rule to fire, over time. I have tried to calculate the diff, but event_time and orig_time are present in the same event, and some events don't have them. Please help me identify the delay between event time and alert triggering time.

index=notable
| eval diff = _time - event_time
| convert ctime(diff), ctime(orig_time)
| table event_time orig_time _time diff search_name
I think doing something like this should work.

| inputlookup running_data.csv
| eval EP=strptime(Date, "%m/%d/%Y")
| chart sum(sats) as sats over EP by team
| sort 0 +EP
| eval Date=strftime(EP, "%m/%d/%Y")
| fields - EP
| transpose 25 header_field=Date
| rename column as team

This will first sort the dates while they are in epoch time, and then we convert to human-readable timestamps. Then, a transpose is used to retain the order of ascending time from left to right in the header. Screenshot of local example: