All Posts

OK, this makes sense. Unfortunately, ES does some wacky things by running "inputs".
Almost positive ... There are a few Enterprise Security helper apps (like SA-IdentityManagement) that, as delivered, come with the following in SA-IdentityManagement/default/inputs.conf:

[shclustering]
conf_replication_include.distsearch = true
conf_replication_include.inputs = true
conf_replication_include.identityLookup = true

I believe that's in some way responsible for this ... but I have no clue as to why this (and several other helper apps) are coming with [shclustering] blocks in an inputs.conf.
Join is very rarely the proper solution. It has limitations which can cause your results to be wrong or incomplete.
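As a minimal sketch of the usual stats-based alternative to join (the index, sourcetype, and field names here are only illustrative, borrowed from the firewall example elsewhere in this thread):

```spl
index=firewalls (sourcetype=pan:traffic OR sourcetype=pan:threat) dest_zone=untrust dest_port=443
| stats sum(bytes) as total_bytes values(sourcetype) as sourcetypes by dest_hostname
| where mvcount(sourcetypes) = 2
```

This keeps only hosts that appear in both sourcetypes, without join's subsearch row and time limits.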
Hello, I would like to know if there is a way to track the number of dashboards created by users over a period of time?
Are you actually sure that this was what caused your issue? Inputs shouldn't replicate by default AFAIR.
Apologies for the lack of Answers etiquette. join ended up working for me:

index=firewalls sourcetype=pan:traffic dest_zone=untrust dest_port=443
| join dest
    [ search index=firewalls sourcetype=pan:threat dest_zone=untrust dest_port=443 ]
| stats sum(bytes) as total_bytes by dest_hostname
1. What do you mean by "correlate" in this case? Just list results from both searches? Find results which occur at more or less the same time? Something else?
2. Moving the host=$host$ condition to the front gives Splunk a better chance to optimize the search properly and avoid fetching data from the indexes that it doesn't need further down the pipeline.
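A hedged before/after illustration of point 2 (the index name here is hypothetical). The filter at the end means data from every host is fetched first and discarded later:

```spl
index=wineventlog | eval time=strftime(_time, "%m-%d-%y %H:%M:%S") | search host=$host$
```

Pushing the filter into the base search lets the indexers return only matching events:

```spl
index=wineventlog host=$host$ | eval time=strftime(_time, "%m-%d-%y %H:%M:%S")
```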
And where is the question? But seriously - .spl file is just a tar.gz archive.
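To illustrate (with an entirely hypothetical app named myapp, created here just so the commands are self-contained), an .spl file can be listed and unpacked with standard tar:

```shell
# Build a stand-in "exported app" so the example is runnable (names are hypothetical).
mkdir -p myapp/default
printf '[ui]\nis_visible = true\n' > myapp/default/app.conf
tar -czf myapp.spl myapp
rm -rf myapp

# An .spl is just a gzipped tar archive: list its contents, then extract it
# to inspect the files without installing it as an app.
tar -tzf myapp.spl
tar -xzf myapp.spl
cat myapp/default/app.conf
```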
1. Well, that's some grave digging. This thread is 12 years old. 2. Is this your literal search? Are you aware what it does?
Hello, I have downloaded all the Use Cases in the ES app and now I want to open the .spl file to look into these Use Cases, but I do not want to upload the file as an app.
Tagging a decade-old question is not a good way to get answers. Please start a new question with the following guidelines in mind:

1. Illustrate the data input (in raw text, anonymized as needed), and say whether it is raw events or output from a search that volunteers here cannot look at.
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between illustrated data and desired output without SPL.
4. If you also illustrate attempted SPL, illustrate its actual output and compare it with the desired output; explain why they look different to you if that is not painfully obvious.
I want to first point out that using raw events to correlate two different datasets usually does not end well, because the two datasets may not have exact matches in the _time field. If you are confident that the two datasets' _time fields do not differ by more than a certain amount, using a time bucket could remedy that, although there can be other side effects you may need to deal with.

This said, if the data models have perfectly matching _time, you can use stats to correlate them:

| datamodel Updates Updates search
| rename Updates.dvc as host
| rename Updates.status as "Update Status"
| rename Updates.vendor_product as Product
| rename Updates.signature as "Installed Update"
| eval isOutlier=if(lastTime <= relative_time(now(), "-60d@d"), 1, 0)
| `security_content_ctime(lastTime)`
| eval time = strftime(_time, "%m-%d-%y %H:%M:%S")
| search host=$host$
| rename lastTime as "Last Update Time"
| table time host "Update Status" "Installed Update"
| `no_windows_updates_in_a_time_frame_filter`
| append
    [| datamodel Updates Update_Errors search
    | eval time = strftime(_time, "%m-%d-%y %H:%M:%S")
    | search host=$host$
    | table time, host, _raw]
| stats values(*) as * values(_raw) as _raw by time host
Adding to what @richgalloway already said - if you're receiving events on a udp:// input, each datagram is treated as a separate event (I haven't tried what happens if you have an additional line breaker in the middle of your event). What can be happening - especially since you're saying there's a big difference between _time and _indextime - is that there is either an unsynced clock somewhere, or the timezone is wrongly parsed, configured, or assumed. So your event does come in, does get parsed and indexed, but gets indexed into the future, and you don't see it in your web interface because all the default time ranges carry an implicit "latest=now" - and your events are later than that. Search for them with something like "earliest=-1h latest=+1d".
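One way to sketch a check for this kind of clock or timezone skew (the index and sourcetype names below are hypothetical):

```spl
index=main sourcetype=my_syslog earliest=-1h latest=+1d
| eval lag_seconds = _indextime - _time
| stats min(lag_seconds) as min_lag max(lag_seconds) as max_lag by host
```

A strongly negative lag would mean events are timestamped in the future relative to when they were indexed.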
The first error is in using an indexer as a UDP receiver. That will likely result in data loss. Recommended practice for the last several years is to send syslog events to a dedicated syslog receiver, such as syslog-ng or rsyslog, then use a Universal Forwarder to send the events from the syslog server to Splunk.

The current LINE_BREAKER setting says an event doesn't end until it finds a newline, so the incoming text is held until that condition is met. This can be overridden in file inputs, but not with UDP (another case for using a syslog server). Try this line breaker, and also add TIME_FORMAT:

LINE_BREAKER = ([\r\n]+).*$
TIME_FORMAT = %b %d %H:%M:%S.%3N
We have a disconnected network and have Splunk installed on a Red Hat Linux server. I can log in to the web interface with a local Splunk account just fine, but cannot log in with a domain account. This machine has been configured for domain logins for quite a while and has worked, but it recently stopped working with a domain login. I recently needed to put in a temporary license until we complete our re-purchase of a new license. I have not gotten far with troubleshooting yet. Where can I look to troubleshoot this issue? Thank you.
Where did you show your events?
Doing that to the Search Heads can cause more trouble than it's worth. Best to backtrack that change, then opt for a transforms.conf option to rewrite the host field value:

[hostname-override]
SOURCE_KEY = MetaData:Host
REGEX = .
FORMAT = host::$HOSTNAME
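For completeness, a hedged sketch of how a transform like this is usually wired up (the sourcetype name my_sourcetype and the hostname value are placeholders; host rewrites of this kind typically also set DEST_KEY so Splunk knows which metadata field to overwrite):

```ini
# props.conf (on the parsing tier - indexer or heavy forwarder)
[my_sourcetype]
TRANSFORMS-hostoverride = hostname-override

# transforms.conf
[hostname-override]
SOURCE_KEY = MetaData:Host
DEST_KEY = MetaData:Host
REGEX = .
# $HOSTNAME is a placeholder for the literal host value you want written
FORMAT = host::$HOSTNAME
```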
There is a typo in this for disabling a health rule. The end parameter should be disable, not enable. URL:

http://<controller-host>:<port>/controller/api/accounts/<account-id>/applications/<application-id>/healthrules/<healthrule-id>/enabled

You should then get a list of all the health rules that were disabled in the return payload.
I don't think there is a way to get this info within a search. It might be (and probably is) returned as additional status along with the search job, but it's not reflected in the search results themselves. Instead of directly looking for incomplete results, you could try to detect a situation in which this could happen by checking cluster health with rest.
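A hedged sketch of that kind of check (the exact endpoint and field names can vary by Splunk version, and the rest command requires appropriate permissions):

```spl
| rest /services/search/distributed/peers splunk_server=local
| table peerName status
| where status != "Up"
```

Any rows returned would suggest a search peer that might have been skipped, and therefore searches with incomplete results.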
I thought showing my logs would be enough. With that in mind, I need the exact command to be there.