Thanks for sharing the events. That helps. Am I correct in interpreting them to say that a lockout event has just the headings with empty values? If so, then the query becomes one of looking for said empty values. One problem, of course, is that the empty values convey no information, so you won't know which workstation is being used.
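A minimal sketch of that kind of check, assuming the Windows TA's usual extractions for 4740 (the index name, Caller_Computer_Name, and Account_Name are assumptions on my part, so substitute whatever your events actually contain):
index=your_windows_index EventCode=4740
| where isnull(Caller_Computer_Name) OR Caller_Computer_Name=""
| table _time, Account_Name, Caller_Computer_Name
A hit on something like that only tells you a lockout arrived with blank values, though; as noted, it still won't identify the workstation.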
I am currently preventing chrome and edge processes from being indexed with the following regex:
blacklist7 = EventCode="4673" Message="Audit\sFailure[\W\w]+Process\sName:[^\n]+(chrome.exe|msedge.exe)"
This works on the majority of the forwarders. However, some stragglers still send these events in even though they have the updated inputs deployed on their systems. My workaround is to nullQueue the events in transforms.conf in the /etc/system/local directory. I believe this should be working at the forwarder level. Any ideas as to why this is happening? For perspective, I have 400 Windows machines and only 5 of the systems still send in the events even after a deployment server reload.
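For comparison, a typical parsing-time nullQueue setup looks something like the sketch below; the sourcetype and regex are assumptions based on your blacklist snippet and classic (non-XML) event rendering, so adjust to your actual data. Keep in mind that props/transforms routing to nullQueue happens at parse time, so it generally does not take effect on a universal forwarder; it needs to sit on a heavy forwarder or the indexing tier.
# props.conf (indexers or heavy forwarders)
[WinEventLog:Security]
TRANSFORMS-drop_browser_4673 = drop_browser_4673
# transforms.conf
[drop_browser_4673]
REGEX = EventCode=4673[\S\s]+Process\sName:[^\n]+(chrome\.exe|msedge\.exe)
DEST_KEY = queue
FORMAT = nullQueue
For the five stragglers it may also be worth running splunk btool inputs list --debug on one of them to confirm the reloaded inputs.conf actually arrived and that nothing in a higher-precedence location overrides blacklist7.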
As I looked into the information you asked me for, the lockout event code is supposed to be 4740, and the search stated Yes for lockout if the account locks out and No if it does not. But in the raw data, the lockout field I'm asking about doesn't seem to exist.
I have the same issue. I'm on the Victoria distro of Splunk Cloud. I opened a ticket with Splunk support and they say that the KV Store/service is already enabled and running on our stack and to go to Proofpoint for a fix. I can't install an older version of the app to see if it works because I'm on cloud. Proofpoint initially tried to kick it back to Splunk, saying it's a Splunk issue, so I don't know what else to do. I tried uninstalling and reinstalling the app, restarting the cloud stack, etc., so I'm basically stuck.
Scheduled "All Time" searches are typically a no-fly zone and you should look to see what your average or max run time is for those searches. My general rule of thumb is that any search completion t...
See more...
Scheduled "All Time" searches are typically a no-fly zone and you should look to see what your average or max run time is for those searches. My general rule of thumb is that any search completion time should be less than 50% of the scheduled reoccurrence. ie. Run Time less than 2 mins 30 seconds if schedule reoccurrence is 5 mins If you absolutely must and I highly doubt there is any good reason that you need an "All Time" search try converting to a TSTATs search which processes much faster. Even then I would stay away from that since "All Time" can be a significant drain and occupy resources for long periods of time. Can you get away with putting daily results into a summary index, searching "All Time" on the summary index is far superior then searching raw data events.
Not sure this is really an issue, but as a matter of practice I would keep the event and lookup field names a little different. It can prevent unintended complications when displaying or transforming the data.
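For example (purely hypothetical names), keeping the lookup column distinct from the event field and mapping it in the lookup command makes the origin of each value obvious:
| lookup my_lookup lookup_host AS host OUTPUT owner, environment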
You wrote "2 syslog servers (UF installed)". I thought you meant - as is often done - that you have two servers which have some form of an external syslog daemon writing to local files and UF which p...
See more...
You wrote "2 syslog servers (UF installed)". I thought you meant - as is often done - that you have two servers which have some form of an external syslog daemon writing to local files and UF which picks up data from those files. Those syslog servers are completely external to Splunk.
In a well-deployed environment you should _not_ index anything locally except on the indexing tier (and, in recent versions, on deployment servers). You should send all your events to the indexer tier.
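Concretely, that usually means every non-indexer instance carries an outputs.conf along these lines, with local indexing switched off; the hostnames and port are placeholders:
# outputs.conf on forwarders, search heads, and other non-indexer instances
[tcpout]
defaultGroup = primary_indexers
indexAndForward = false
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
(indexAndForward only matters on full Splunk Enterprise instances; universal forwarders never index locally anyway.)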
There is a collection (KV Store) which holds the checkpoint value. You can likely edit it to change or remove the current value. I recommend keeping a backup because editing this on your own comes with some risk, but in a test environment I would have no problem trying this first.
[TA_MS_AAD_checkpointer]
field.state = string
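If it helps, the collection can usually be inspected and edited over the KV Store REST endpoint before touching anything by hand; the app namespace TA-MS-AAD below is an assumption, so check the actual app folder name on your search head:
# list the stored checkpoint document(s) -- save this output as your backup
curl -k -u admin https://localhost:8089/servicesNS/nobody/TA-MS-AAD/storage/collections/data/TA_MS_AAD_checkpointer
# delete a single document by its _key so the input re-initializes its checkpoint
curl -k -u admin -X DELETE https://localhost:8089/servicesNS/nobody/TA-MS-AAD/storage/collections/data/TA_MS_AAD_checkpointer/<_key>
As above, try this in a test environment first and keep the exported output as a backup.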
What you posted with respect to the logs was a table of values for fields (presumably derived from your events). What would be more useful is the raw events, e.g. the _raw field, for the events you are trying to use to determine which device(s) the user(s) is (are) locked out from. If this evidence is not in your raw event data, it is highly unlikely that Splunk can help you find it. Having said that, there may be a sequence, or possibly an incomplete sequence, of events that indicates a user failed to connect. For example, you may have evidence in your logs of connections and failed login attempts on those sessions, or even just connection attempts. We have no idea until you share what information you have.
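One quick way to take stock of what evidence exists for an affected account is simply to list the event codes you have for it; the index and field names here are placeholders:
index=your_windows_index Account_Name="locked_out_user"
| stats count BY EventCode, source
| sort -count
If 4625 (failed logon) events show up in that list, they usually carry the workstation name and source network address that a bare 4740 lockout record lacks.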
Hi,
I have a simple search which uses a lookup definition based on a lookup file. This lookup is large. The search had been using this lookup perfectly fine and outputting correct results. Since upgrading to the Splunk Enterprise version below, the output is no longer what it used to be. Matched output is significantly reduced, resulting in NULL values for many fields, even though the lookup is complete and has no issues. I am wondering what has changed to cause this in the version below and how to remediate it.
Splunk Enterprise Version: 9.3.1 Build: 0b8d769cb912
index=A sourcetype=B
| stats count by XYZ
| lookup ABC XYZ as XYZ OUTPUT FieldA, FieldB
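One way to narrow down whether it is the lookup itself or the matching that broke is to test the lookup in isolation, e.g.
| inputlookup ABC | stats count
| inputlookup ABC | search XYZ="known_value"
and then list the keys that fail to match:
index=A sourcetype=B
| stats count BY XYZ
| lookup ABC XYZ OUTPUT FieldA, FieldB
| where isnull(FieldA)
(ABC, XYZ, FieldA, and "known_value" just carry your placeholders forward.) Comparing the unmatched XYZ values against the lookup for case or whitespace differences, and re-checking the lookup definition's case_sensitive_match and max_matches settings after the upgrade, would be my first steps.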
| eval lockout=if(EventCode =True,"Yes","No")
Can you share how the field "EventCode" is evaluated? You shared your search and the results from your search. What would be helpful is an anonymized raw event which feeds into your search. An event which indicates an account is in lockout status may not show where the authentication attempt came from; this is why knowing the raw event is helpful to outsiders providing feedback. If you are really trying to discover the root cause of account lockouts, then you need a search for failed login attempts. The two data sets might come from different log entries.
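As a sketch of what that second data set might look like, a failed-logon search along these lines is the usual starting point; the index name is a placeholder and the field names assume the Windows TA's common extractions for 4625, so verify against your own data:
index=your_windows_index EventCode=4625
| stats count BY Account_Name, Workstation_Name, Source_Network_Address
| sort -count
Correlating that by account with the 4740 lockout events is generally how you get from "this account is locked out" to "these are the machines generating the bad attempts".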
Hi, the issue has re-occurred. I modified a few of the scheduled searches running on All Time, staggered the cron schedules, etc., and it helped for a while. Over the past few days the errors for delayed searches have increased to 15,000(+). Could you please help me resolve this permanently? Also, is there any query I can use to find out which searches are getting delayed? I am using this one:
index=_internal sourcetype=scheduler savedsearch_name=* status=skipped
| stats count BY savedsearch_name, app, user, reason
| sort -count
Can someone please help me out with this? @PickleRick @richgalloway
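For delayed (as opposed to skipped) searches, one approach is to measure how far dispatch lags behind the scheduled time. This assumes scheduler.log carries scheduled_time and dispatch_time fields in your environment, so verify the field names before relying on it:
index=_internal sourcetype=scheduler status=success scheduled_time=* dispatch_time=*
| eval delay_seconds = dispatch_time - scheduled_time
| stats count avg(delay_seconds) AS avg_delay max(delay_seconds) AS max_delay BY savedsearch_name, app
| where max_delay > 60
| sort -max_delay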
Thanks so much for trying to help me. I agree with what you stated; something is wrong. However, I did: I posted what was in the search, and then I posted what was ingested from the logs. I'm not sure what more information you need from me.