All Posts


The first error is using an indexer as a UDP receiver. That will likely result in data loss. Recommended practice for the last several years is to send syslog events to a dedicated syslog receiver, such as syslog-ng or rsyslog, then use a Universal Forwarder to send the events from the syslog server to Splunk.

The current LINE_BREAKER setting says an event doesn't end until it finds a newline, so the incoming text is held until that condition is met. This can be overridden in file inputs, but not with UDP (another case for using a syslog server). Try this line breaker and also add TIME_FORMAT:

LINE_BREAKER = ([\r\n]+).*$
TIME_FORMAT = %b %d %H:%M:%S.%3N
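For context, the two settings above would normally live together in a props.conf sourcetype stanza. A minimal sketch, assuming a hypothetical sourcetype name `my_syslog` (substitute your own):

```
# props.conf on the parsing tier (indexer or heavy forwarder)
# [my_syslog] is a placeholder sourcetype name
[my_syslog]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+).*$
TIME_FORMAT = %b %d %H:%M:%S.%3N
# Depending on the event layout, TIME_PREFIX and
# MAX_TIMESTAMP_LOOKAHEAD may also be needed.
```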
We have a disconnected network and have Splunk installed on a Red Hat Linux server. I can log in to the web interface with a local Splunk account just fine, but cannot log in with a domain account. This machine has been configured for domain logins for quite a while and has worked, but only recently stopped working with a domain login. I recently needed to put in a temporary license until we complete our re-purchase of a new license. I have not gotten far with troubleshooting yet. Where can I look to troubleshoot this issue? Thank you.
Where did you share your events?
Doing that to the search heads can cause more trouble than it's worth. Best to back out that change, then opt for a transforms.conf option to rewrite the host field value:

[hostname-override]
SOURCE_KEY = MetaData:Host
REGEX = .
FORMAT = host::$HOSTNAME
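For the transform above to take effect, it also has to be referenced from props.conf on the parsing tier. A minimal sketch, assuming a hypothetical sourcetype name `my_sourcetype` (a source:: or host:: stanza works too):

```
# props.conf -- ties the hostname-override transform to incoming data
# [my_sourcetype] is a placeholder; use your actual sourcetype
[my_sourcetype]
TRANSFORMS-sethost = hostname-override
```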
There is a typo in this for disabling a health rule. The end parameter should be disable, not enable. URL: http://<controller-host>:<port>/controller/api/accounts/<account-id>/applications/<application-id>/healthrules/<healthrule-id>/enabled You should then get a list of all the health rules that were disabled in the return payload.
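A hedged sketch of what that call might look like with curl; the HTTP method, auth style, and query parameter name are assumptions based on the URL in the post, not confirmed against the AppDynamics API docs, and every host, ID, and credential is a placeholder:

```
# Placeholder values throughout -- verify the method and parameter
# against your controller's API reference before using.
curl -X POST \
  -u "myuser@myaccount:mypassword" \
  "http://controller.example.com:8090/controller/api/accounts/1/applications/42/healthrules/7/enabled?disable=true"
```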
I don't think there is a way to get this info within a search. It might be (and probably is) returned as additional status along with the search job, but it's not reflected in the search results themselves. Instead of directly looking for the incomplete results, you could try to detect a situation in which this could happen by checking cluster health with rest.
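A sketch of the "check cluster health with rest" idea; `/services/server/health/splunkd` is a real splunkd health endpoint, but treat the exact field names in the output as assumptions to verify in your environment:

```
| rest /services/server/health/splunkd
| table health
```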
I thought showing my logs was enough. With that in mind, I need the exact command to be shown there.
Without seeing your events it is difficult to determine what you need to do with the tstats to get the data you want.
I need this query to use the top command, but it looks like it needs to be rewritten somehow first.
This is exactly why I'm here. My tstats query isn't complete; I need this data to be shown in the logs as it used to be in my usual (non-tstats) query.
Try removing lines until the error goes away, to find out where the breaking point is.
So your conversion to tstats is not complete, then? Using the data you get back from tstats, is there sufficient information for you to compile the results you want (or do you need a different version of the tstats search)?
I have two different datasets within the Updates data model. I created a few panels within a dashboard that I use to collect the installed updates and update errors. I want to combine both of these searches into one by combining the datasets, to correlate which machines are updating or encountering errors. Here are the two searches I have so far.

Installed Updates:

| datamodel Updates Updates search
| rename Updates.dvc as host
| rename Updates.status as "Update Status"
| rename Updates.vendor_product as Product
| rename Updates.signature as "Installed Update"
| eval isOutlier=if(lastTime <= relative_time(now(), "-60d@d"), 1, 0)
| `security_content_ctime(lastTime)`
| eval time = strftime(_time, "%m-%d-%y %H:%M:%S")
| search * host=$host$
| rename lastTime as "Last Update Time"
| table time host "Update Status" "Installed Update"
| `no_windows_updates_in_a_time_frame_filter`

Update Errors:

| datamodel Updates Update_Errors search
| eval time = strftime(_time, "%m-%d-%y %H:%M:%S")
| search * host=$host$
| table _time, host, _raw
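One way to sketch the combination is to union the two dataset searches with append and correlate by host with stats. This is only an outline under stated assumptions: the `Updates.dvc` / `Update_Errors.dvc` field prefixes and the `event_type` label are assumptions about how the Updates data model names its fields, so verify them against your datasets:

```
| datamodel Updates Updates search
| rename Updates.dvc as host, Updates.signature as signature
| eval event_type="installed"
| append
    [| datamodel Updates Update_Errors search
     | rename Update_Errors.dvc as host
     | eval event_type="error"]
| search host=$host$
| stats values(signature) as "Installed Updates", values(event_type) as event_types by host
```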
Hi @tomjb94, could you share a sample of your logs? Anyway, if there's only one timestamp in your logs, you could try to use only TIME_FORMAT without TIME_PREFIX. Ciao. Giuseppe
Thanks, Dural, for the response. Then we will need to figure out the Dell Unity storage technicals and see how we can do the GDI.
Since both of these are data-source content issues, it's difficult to determine from the Splunk side. I would start with more research on the specific machine's side: what traffic is being generated and from what application, and who, if anyone, is logged in live.
I have this use case and want to report on bytes by dest_hostname. After adjusting for current Palo Alto field names, the provided answer yields no results:

index=firewalls sourcetype=pan:traffic dest_zone=untrust dest_port=443
    [search index=firewalls sourcetype=pan:threat | fields dest_hostname]
| stats sum(bytes) BY dest_hostname
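One common reason a subsearch like this returns nothing is that it filters the outer search on a field (dest_hostname) that may not be present in the pan:traffic events. A hedged sketch of filtering on a field both sourcetypes carry instead; `dest` is an assumption about your field names, so check which fields the two sourcetypes actually share:

```
index=firewalls sourcetype=pan:traffic dest_zone=untrust dest_port=443
    [search index=firewalls sourcetype=pan:threat | fields dest | format]
| stats sum(bytes) BY dest
```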
You really need to investigate your internal logs for bucket-replication messages to get an idea of what is or is not happening. There are so many contributing factors to what could be occurring that it would be difficult to provide an answer at this point.
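A starting-point sketch for that investigation; `index=_internal`, `sourcetype=splunkd`, and the `log_level` / `component` fields are standard in Splunk's internal logs, but the keyword filter is just an assumption to narrow things down, so adjust it to taste:

```
index=_internal sourcetype=splunkd (log_level=WARN OR log_level=ERROR) "*replication*"
| stats count by host, component
| sort - count
```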
I am playing around with the splunk-rolling-upgrade app in our DEV environment. We don't use a KV store there, and we don't use a KV store on our indexers in PROD either, which is where I would like to use this once I sort out the process. However, the automated upgrade process appears to be failing because it is looking for a healthy KV store. Is there a flag or something I can put into the rolling_upgrade.conf file so that it ignores the KV store, especially when it comes to our CM and indexers where we have the KV store disabled?
Checking the history in Answers and on the Dell/EMC websites, this has been an issue for a few years; no obvious solutions were ever provided.