All Posts

Thanks for the information
Thanks for the information and explaining again what information you wanted. Here is the raw data you requested:
Scheduled "All Time" searches are typically a no-fly zone and you should look to see what your average or max run time is for those searches.  My general rule of thumb is that any search completion t... See more...
Scheduled "All Time" searches are typically a no-fly zone and you should look to see what your average or max run time is for those searches.  My general rule of thumb is that any search completion time should be less than 50% of the scheduled reoccurrence. ie. Run Time less than 2 mins 30 seconds if schedule reoccurrence is 5 mins If you absolutely must and I highly doubt there is any good reason that you need an "All Time" search try converting to a TSTATs search which processes much faster.  Even then I would stay away from that since "All Time" can be a significant drain and occupy resources for long periods of time. Can you get away with putting daily results into a summary index, searching "All Time" on the summary index is far superior then searching raw data events.
Try adding this line at the top of your Python script (only in dev) so you can confirm the latest version of the script is actually being executed rather than a cached copy:
import datetime; print(datetime.datetime.now())
Not sure this is really an issue, but from experience I would keep the event and lookup field names a little different.  It could prevent any unintended complications in displaying or transforming data.
You wrote "2 syslog servers (UF installed)". I thought you meant - as is often done - that you have two servers which have some form of an external syslog daemon writing to local files and UF which p... See more...
You wrote "2 syslog servers (UF installed)". I thought you meant - as is often done - that you have two servers which have some form of an external syslog daemon writing to local files and UF which picks up data from those files. Those syslog servers are completely external to Splunk.
In a well-deployed environment you should _not_ index anything locally except on the indexing tier (and, in recent versions, on deployment servers). You should send all your events to the indexer tier.
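For example, a minimal outputs.conf on a forwarder pointing at the indexer tier would look something like this (the group name and indexer hostnames are placeholders for your environment):

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997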
There is a collection (KV Store) which holds the checkpoint value.  You can likely edit it to change or remove the current value.  I recommend keeping a backup because editing this on your own comes with a risk, but in a test environment I would have no problem trying this first. The collection is defined as:

[TA_MS_AAD_checkpointer]
field.state = string
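As a rough sketch of how you could inspect and clear those records through the KV Store REST endpoint (the app namespace, credentials, and host below are assumptions; adjust to your environment, and save the listing output as your backup before deleting anything):

List the stored checkpoint records:
curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/TA-MS-AAD/storage/collections/data/TA_MS_AAD_checkpointer

Delete a single record by its _key value taken from the listing:
curl -k -u admin:changeme -X DELETE https://localhost:8089/servicesNS/nobody/TA-MS-AAD/storage/collections/data/TA_MS_AAD_checkpointer/<_key>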
What you posted with respect to the logs was a table of values for fields (presumably derived from your events). What would be more useful is the raw events, e.g. the _raw field, for the events you are trying to use to determine which device(s) the user(s) is (are) locked out from. If this evidence is not in your raw event data, it is highly unlikely that Splunk can help you find it. Having said that, there may be a sequence, or possibly an incomplete sequence, of events that indicates that a user failed to connect. For example, you may have evidence in your logs of connections and failed login attempts on those sessions, or even just connection attempts. We have no idea until you share what information you have.
Hi, I have a simple search which uses a lookup definition based on a lookup file. This lookup is large. The search had been using this lookup perfectly fine, outputting correct results. Since upgrading to the version of Splunk Enterprise below, the output is not what it used to be: matched output is significantly reduced, resulting in NULL values for many fields, even though the lookup is complete and has no issues. I am wondering what has changed in this version that is causing this and how to remediate it?

Splunk Enterprise Version: 9.3.1 Build: 0b8d769cb912

index=A sourcetype=B | stats count by XYZ | lookup ABC XYZ as XYZ OUTPUT FieldA, FieldB
| eval lockout=if(EventCode =True,"Yes","No")

Can you share how the field "EventCode" is evaluated?  You shared your search and the results from your search.  What would be helpful is an anonymized raw event which feeds into your search. An event which indicates an account is in locked-out status may not show where the authentication attempt came from.  This is why knowing the raw event is helpful to outsiders providing feedback.  If you are really trying to discover the root cause of account lockouts then you need a search for failed login attempts.  The two data sets might come from different log entries.
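For illustration only, assuming Windows Security event logs (the index name and field names are assumptions about your environment; EventCode 4625 is a failed logon and 4740 is an account lockout), a search over the failed-logon side might look like:

index=wineventlog (EventCode=4625 OR EventCode=4740)
| stats count earliest(_time) AS first_seen latest(_time) AS last_seen BY EventCode, user, src
| sort - count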
Hi, the issue has recurred. I modified a few of the scheduled searches running on All Time, staggered the cron schedules, etc., and it helped for a while. For the past few days, the count of delayed-search errors has increased to over 15,000. Could you please help me resolve this permanently? Also, is there any query I can use to find out which searches are getting delayed? I am using this one:

index=_internal sourcetype=scheduler savedsearch_name=* status=skipped
| stats count BY savedsearch_name, app, user, reason
| sort -count

Can someone please help me out with this. @PickleRick @richgalloway
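One possible variation of that same query is to break the scheduler events out by every status value rather than only skipped, then check which status values your scheduler.log actually records for delayed executions (a sketch, not a definitive list of statuses):

index=_internal sourcetype=scheduler savedsearch_name=*
| stats count BY status, savedsearch_name, app
| sort - count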
Thanks so much for trying to help me. I agree with what you stated: something is wrong. However, I did; I posted what was in the search, then I posted what was ingested from the logs. I'm not sure what more information you need from me.
It's very difficult to help with a search when we don't know what is being searched.  Something in your indexed data must be showing when an account is locked out.  Show us those events and we can help you craft a search for them.
I use version 9.3.2, but the issue triggers in the remote environment, not locally. And yes, I tried clearing the cache and opening the page in a new tab; it doesn't help.
I understand you're not able to help. Thanks for your help anyway.
We have been given new connection strings to enter into our TA-MS-AAD inputs, running on Splunk Cloud's IDM host, pulling from a client's Event Hub.  The feeds were down for several days before we were given the strings. The IDM is now connecting to the Event Hub again, but no data is flowing; the IDM's logs say "The supplied sequence number '5529741' is invalid. The last sequence number in the system is '4121'". Is there anything we can do about this?
@PickleRick, appreciate your detailed response. Regarding point 6 -- where can I implement this on the syslog server? Can I write props and transforms on the syslog server? On the syslog servers we will be installing a UF to forward the data to our Splunk. Can you please specify the location and process?
This does not show evidence of the microseconds not matching. The Time field is merely displayed to milliseconds, not microseconds.
I have solved this issue. To get the notables across the SHC, you need to send the notable data to an index in the indexer cluster using outputs.conf. Once the data is sent, new notables will be available on all SHs.
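As a rough sketch of that outputs.conf on the search head cluster members, following the common pattern of forwarding all search head data to the indexer layer (the output group name and indexer hostnames are placeholders, and the notable index must already exist on the indexers):

[indexAndForward]
index = false

[tcpout]
defaultGroup = es_indexers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:es_indexers]
server = idx1.example.com:9997, idx2.example.com:9997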