All Posts


Hi, I'm still new to Splunk and I understand that I can extend a search or report lifetime either through the GUI or by changing dispatch.ttl when scheduling a report. I want to know what will happen when I have hundreds of searches and reports with extended lifetimes (7 days or more). Will there be any impact on hardware resources when Splunk holds so much data for these reports and searches?
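For reference, dispatch.ttl can be set per saved search in savedsearches.conf; a minimal sketch (the stanza name and value here are placeholders, not a recommendation):

[My Scheduled Report]
# keep search artifacts for twice the scheduled period;
# "p" is a multiplier of the search's schedule period
dispatch.ttl = 2p

Note that the artifacts kept alive by an extended TTL live on disk in the dispatch directory of the search head, so the cost of many long-lived reports is primarily search-head disk space rather than indexer resources.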
Hi,  Does anyone know of a summary index used by Splunk to retain index sizes? I can calculate an index size by using the internal index, but I need to go back further than the last month.  Any other method is welcome as well.  Thanks
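One common approach (a sketch, not an out-of-the-box feature — the summary index name here is an assumption you would create yourself) is to schedule a daily search that snapshots index sizes into your own summary index, so history accumulates for as long as that index's retention allows:

| dbinspect index=*
| stats sum(sizeOnDiskMB) as size_mb by index
| collect index=index_size_history

Once this has been running for a while, you can trend sizes over any time range by searching index=index_size_history.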
Thank you so much! You saved me. I could not find this answer anywhere else.
Looks like I just needed to set up the definition for one particular file that I was using for testing, versus the others which were already set up. Thank you
It gives the same error when upgrading from Splunk UF 7.1 to Splunk UF 9.0 on Windows Server 2012 R2: Error 1316. The specified account already exists. The McAfee link is broken, so I cannot see the resolution. Could someone share which registry key I should look for and remove?
Splunk always renders the time (either when you explicitly call strftime() or when it displays the _time field) according to the timezone set in that user's preferences. There is no way to specify another timezone for time display. The only way you can try to "cheat the system" is to add an artificial offset to the timestamp and pretend it's rendered in another timezone, but it's an ugly and somewhat unreliable solution.
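As a sketch of that offset trick (the 5-hour shift is an arbitrary assumption, standing in for a UTC-5 display; field names are made up):

... | eval shifted_time = _time - (5 * 3600)
    | eval display_time = strftime(shifted_time, "%Y-%m-%d %H:%M:%S")

The unreliability mentioned above comes mainly from daylight-saving transitions: a fixed offset cannot track a real timezone's DST rules, so the rendered time will be wrong for part of the year.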
Hello, I've got a Lambda function exporting AWS logs via HEC to my HFs and on to my indexers. Unfortunately, the AWS logs are coming in with event.* as all of the field names, whereas the Splunk_TA_aws add-on is expecting *. I can easily do a | rename event.* as *, however that's too late for the out-of-the-box props.conf settings to take effect. This causes things like FIELDALIAS-eventName-for-aws-cloudtrail-command = eventName AS command in props.conf to fail unless I go in and modify it to be event.eventName. I'd like to fix this before it gets to SPL. Is there a way to do this easily? Thanks!
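One possible approach (a sketch only, assuming the payload arrives as JSON in _raw and that rewriting it at index time on the HF is acceptable; the sourcetype name is a placeholder) is to strip the event. prefix from the key names with a SEDCMD in props.conf before the add-on's extractions run:

[aws:cloudtrail]
# rewrite "event.foo": to "foo": in the raw JSON at index time
SEDCMD-strip_event_prefix = s/"event\.([^"]+)"\s*:/"\1":/g

This permanently changes the indexed raw data, so test it on a sample sourcetype first; fixing the Lambda function to emit unwrapped fields would be the cleaner long-term fix.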
Please don't duplicate threads. You already asked about the "lag" in another thread.
1. What do you mean by event_time? 2. What is _time assigned from in your sourcetypes? 3. Are your sources properly configured (time synchronized, properly set timezones)? Generally speaking - are your sources properly onboarded or are you just ingesting "something"?
For that it would be easier to just cut the date after the space. Also, working with string-formatted timestamps is just asking for trouble.
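For example, a sketch of cutting everything after the first space (the field name Date is assumed from the thread):

... | eval Date = mvindex(split(Date, " "), 0)

This keeps the value as a string, which avoids the strptime/strftime round trip entirely when all you want is the date portion.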
My assumption is that we are stripping off HH:MM:SS from the original value of Date, but we still want the final results formatted as %m/%d/%Y. Hard to say for sure without seeing the original dataset.
Wait a second. What's the point of doing strptime/strftime over the same value with the same format?
We're not talking about raising a case with support; we're talking about reaching out to your sales contact. BTW, I don't recall there being an offering with no support at all. BTW2, this is not a "support substitute".
Since upgrading to 9.1.2, I am no longer able to see table output in Splunk Search, even with the most simplistic search. I receive the message "Failed to load source for Statistics Table visualization." I am able to see "Events" and also able to use "fields", just not table. Note that this works when viewing in a Studio Dashboard, so the issue seems to be limited to the Search app.
I think transforming the data into a normal Splunk timechart format, then doing a head 12, and then transposing should do what you are asking.

| inputlookup running_data.csv
| eval _time=strptime(Date, "%m/%d/%Y")
| sort 0 -_time
| timechart span=1d sum(sats) as sats by team
| head 12
| eval Date=strftime(_time, "%m/%d/%Y")
| fields - _*
| transpose 12 header_field=Date
| rename column as team

Example output:
Thanks, this actually is close with some tweaking, but I still can't get around the fact that after the transpose I want it to show the latest 12. Transpose 25, for example, will get me the first 25 dates left to right, and I want the last 12 right to left, if that makes sense? I could do transpose with no integer to show everything, but then that would be an extremely wide table as this data grows. On a weekly basis we get a new date, and on those dates we are trying to show the number of sats per team for all teams on that date.
From the looks of the screenshot it appears that event_time probably isn't in epoch format, so the diff isn't being properly evaluated. How does it look when you try this?

index=notable
| eval event_epoch=if(NOT isnum(event_time), strptime(event_time, "%m/%d/%Y %H:%M:%S"), 'event_time'),
       orig_epoch=if(NOT isnum(orig_time), strptime(orig_time, "%m/%d/%Y %H:%M:%S"), 'orig_time')
| eval event_epoch_standardized=coalesce(event_epoch, orig_epoch),
       diff_seconds='_time'-'event_epoch_standardized',
       diff=tostring(diff_seconds, "duration")
| table _time, search_name, event_time, diff
Hi, thank you for the update. I have the above query but am getting the result for only a few events, not all. Please see the attached screenshot.
A props.conf for these extractions would look like this.

[<sourcetype_name>]
EXTRACT-log_level_and_type = \[TID\:(?<tid>[^\]]+)\]\s+(?<log_level>[A-Z]+)\s+(?<log_type>[^\s]+)
EXTRACT-cid = \[CID\:(?<cid>[^\]]+)\]
EXTRACT-message = [A-Z]+\s+\w+(?:\.\w+)*\s+\-\s+(?<message>.*)\s+\-\s+\(
EXTRACT-user = user\s+\'(?<user>[^\']+)\'
EXTRACT-client_ip = client\s+(?<client>\d{1,3}(?:\.\d{1,3}){3})\:(?<port>\d+)
EXTRACT-cannot_open_service_error = (?i)cannot\s+open\s+(?<service>[^\s]+)\s+service\s+on\s+computer\s+\'(?<computer>[^\']+)\'
EXTRACT-unable_to_connect_to_host_exception = (?i)\s+\-\s+(?<app>.*?)\s+unable\s+to\s+connect\s+to\s+(?<hostname>[^\s]+)\s+with\s+exception\s+(?<exception_type>[^\:]+)\:\s+(?<exception_message>.*)
EXTRACT-retrieving_class_failed_due_to_error = (?i)\s+\-\s+retrieving\s+the\s+(?<class>[^\s]+)\s+class\s+factory\s+for\s+remote\s+component\s+with\s+clsid\s+\{(?<clsid>[^\}]+)\}\s+from\s+machine\s+(?<hostname>[^\s]+)\s+failed\s+due\s+to\s+the\s+following\s+error\:\s+(?<error_code>[^\s]+)
EXTRACT-exception_messages = (?i)(?<exception_type>\w+(\.\w+)*exception)\:\s+(?<exception_message>.*)
EXTRACT-error_codes = (?i)due\s+to\s+error\s+(?<error_code>[^\s]+)

And the accompanying default.meta something like this (depending on your desired permissions):

[props]
access = read : [ * ], write : [ admin, power ]
export = system
Thank you everyone for the information and help! We are a non-profit organization and don't have a support entitlement. This is why I'm posting here ))). I will contact the support team to see if they will provide a reset license.