All Posts

Hi, thanks for helping! I'm still getting this: the first entry shows 10:03:16... that shouldn't be the case, and it should be 10:02:43.
Thanks @ITWhisperer. That worked perfectly.
Hey, are you seeing any ERRORs in splunkd for this app and Python? I really think this is a Python compatibility issue, as Splunk Enterprise v9.3 supports apps with Python v3.9 only.
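For reference, a hedged sketch of a search that may surface such errors - assuming the default _internal index and splunkd sourcetype, with "python" as a plain keyword; adjust to your environment:

index=_internal sourcetype=splunkd log_level=ERROR python
| stats count by component

Grouping by component should make it obvious whether the errors come from the app's Python scripts.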
Hi All, are you excited to join the September meeting of the KL Malaysia Splunk User Group? I'm planning this one for the 2nd weekend of September (Sat the 14th or Sun the 15th), and there will be some real practical workshops, so pre-registration is required so that I can keep a lab set up for you. See you virtually - let's learn Splunk, thanks. Best Regards, Sekar
I restarted the indexers and the CM multiple times. That seems to clear up a few buckets, but then it sits there with buckets in a pending state.
/bin is not in JAVA_HOME or the config file, which made the problem more perplexing. After uninstalling/reinstalling and trying everything I could, I now have a new problem... The new error is "The Task Server is currently unavailable. Please ensure it is started and listening on port 9998. See the documentation for more details."
@ITWhisperer Thanks, but those didn't work. I tried both of these:

| makeresults | fields - _time | eval count=mvcount($servers_entered$)

mvcount($servers_entered$)

The first errors. The second returns 0.
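For what it's worth, a sketch of a variant that may work here - assuming $servers_entered$ resolves to a single comma-separated string rather than a true multivalue field: mvcount() counts multivalue entries, so the token would need to be split first.

| makeresults
| eval count=mvcount(split("$servers_entered$", ","))

If the token comes through with quotes or extra spaces, the delimiter passed to split() would need adjusting.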
Have you tried mvcount($servers_entered$)?
See also: https://community.splunk.com/t5/Splunk-Search/How-to-use-the-concurrency-command-to-timechart-the-top-10/m-p/698332/highlight/true#M237145
I'm working with Dashboard Studio for the first time and I've got another question. In the input on the dashboard, I set this token: $servers_entered$. I thought I had a solution for counting how many items are in $servers_entered$, but I found a case that failed. This is what $servers_entered$ looks like: host_1, host_2, host_3, host_4, ..., host_n. What I need is a way of counting how many entries are in $servers_entered$. So far the commands I've tried have failed. What would work? TIA, Joe
Thank you. I was going about that all backwards.
As I had a similar problem - counting the parallel/concurrent HTTP requests grouped by time and host (which means the active threads in each server) - here is my solution:

index=jira-prod source="/opt/jira/logs/access_log*"
| rex field=_raw "^(?<IP>\d+\.\d+\.\d+\.\d+) (?<REQUEST_ID>[0-9]+x[0-9]+x[0-9]+) (?<USER>\S+) \[.+\] \"(?<REQUEST>[A-Z]+ \S+)-? HTTP/1.1\" (?<STATUS>[0-9]+) (?<BYTES>[0-9]+) (?<TIME>[0-9]+) \"(?<REFERER>[^\"]+)\".*$"
| eval DURATION=TIME/1000
| eval START_AT=floor(_time-DURATION)
| eval END_AT=floor(_time)
| eval IN_MOMENT=mvrange(START_AT,END_AT,1)
| mvexpand IN_MOMENT
| eval _time=strptime(""+IN_MOMENT,"%s")
| chart count as COUNT, max(DURATION) as MAX_DURATION by _time, host

This parses a real log file of Atlassian JIRA, where:
line 2 parses the JIRA access log and extracts its elements, including the duration of the request in milliseconds. Note that the request is logged at the moment it completes, so _time is the end time.
lines 3-5 calculate the duration in seconds, the start second and the end second.
line 6 fills IN_MOMENT with each of the seconds the request is active, having at least one value when the start second equals the end second.
line 7 duplicates the event for each of the seconds listed in IN_MOMENT, setting the event's IN_MOMENT field to the current second as a regular single value.
line 8 is more of a hack - it converts IN_MOMENT from an epoch number into a timestamp.
line 9 calculates whatever statistics/chart/timechart is needed, grouping by _time and host.
This worked fine for me.
I'm having a similar issue. Any fix yet?
Hi @jm_tesla, may I know if you have further questions? If not, could you please mark this post as resolved (so that it will move from unanswered to answered, and I will get a solution authored as well - thanks). Best Regards, Sekar
I believe I simply needed to restart each instance after I deleted the users on it.
Have you tried removing '/bin' from JAVA_HOME and the config file?
Splunk does not delete individual events - it removes entire buckets when either the size or time limit is reached. When deleting by time, because the whole bucket is deleted, it's important that all of the events in that bucket be old enough to delete. If any event is too new, then the bucket will not be touched. Every bucket has two dates (for our purposes, anyway) associated with it - the start date (_time of the first event added) and the end date (_time of the last event added). The end date is the one that determines when the bucket can be deleted/frozen. I've seen sites where data is poorly onboarded and has _time values in the future - sometimes by years. When that happens, the bucket will remain in the system until frozenTimePeriodInSecs after that future date passes.
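To see those two dates in practice, a sketch using dbinspect (your_index is a placeholder; startEpoch and endEpoch are the bucket's earliest and latest event times):

| dbinspect index=your_index
| eval startTime=strftime(startEpoch, "%F %T"), endTime=strftime(endEpoch, "%F %T")
| table bucketId state startTime endTime

A bucket whose endTime lies in the future will outlive frozenTimePeriodInSecs by the same margin.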
Well, even if you use index time as _time, you can still extract and use the event's time as a field. You can also use _indextime directly, or even extract the event time as an indexed field to use it fast. There are several possibilities. It's just that by default Splunk works in a specific way. And I still think (and it's actually not connected to Splunk itself) that lack of proper time synchronization is an important issue for any monitoring, and for security monitoring even more so. True, some SIEMs do have several separate time fields for each event, but on the other hand they have very rigid parsing rules, and once you have your data indexed, it's over. So each approach has its pros and cons. Splunk's bucketing by _time has one huge advantage - it speeds up searches by excluding whole buckets from being searched.
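As a sketch of using _indextime alongside _time (your_index is a placeholder), comparing when events happened against when they were indexed:

index=your_index
| eval lag_seconds=_indextime - _time
| stats avg(lag_seconds) as avg_lag, max(lag_seconds) as max_lag by host

Large or negative lag values point at clock drift or poorly onboarded timestamps.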
You could try using a marker rather than a filler - otherwise, this looks like a defect and should be raised with your Splunk support team.
Thanks @ITWhisperer. That is the solution.