Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hello Splunkers! Every week my report runs and gathers its results into the summary index=analyst. As you can see in the screenshot below, several stash files are created for this specific report; conversely, multiple stash files are not created for other reports.

Report with multiple stash files.
Report with no duplicate stash files.

Please assist me with this.
How can we ingest MDI logs into Splunk?
After upgrading from 9.1.0 to 9.2.1, my heavy forwarder logs many lines like the following:

04-01-2024 08:56:16.812 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:16.887 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:16.951 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:16.982 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.008 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.013 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.024 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.041 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.079 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.097 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.146 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.170 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.190 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.257 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.292 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.327 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.425 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.522 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.528 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.549 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.551 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.

How can I disable this log? Is there any error related to this INFO message?
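One possible way to suppress these messages (a sketch, assuming the logging channel name matches the TcpInputProc component shown in the log; verify the channel against your own $SPLUNK_HOME/etc/log.cfg) is to raise that channel's threshold in $SPLUNK_HOME/etc/log-local.cfg, which overrides log.cfg and survives upgrades, then restart splunkd:

[splunkd]
# Only WARN and above from TcpInputProc will be logged;
# INFO lines like the ones above are dropped.
category.TcpInputProc=WARN

Note that this hides all INFO output from that component, not just this particular message.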
Shouldn't we be looking for xz-utils rather than xz-libs? Like this:

source=package sourcetype=package NAME=xz-utils
Hi @PickleRick, I tried the query you suggested and it works as expected; please find it below. My concern is that we want to use this query as an alert with the conditions getperct > 50, putperct > 10, and deleteperct > 80, where the alert should trigger even if only one condition is met. When I set these 3 conditions, it does not work as expected.

| mstats sum(Transactions) as Transaction_count where index=metrics-logs application=login services IN (get, put, delete) span=1h by services
| timechart span=1h values(Transaction_count) by services
| autoregress get as old_get
| autoregress put as old_put
| autoregress delete as old_delete
| eval getperct=round(old_get/get*100,2)
| eval putperct=round(old_put/put*100,2)
| eval deleteperct=round(old_delete/delete*100,2)
| table getperct putperct deleteperct
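One way to make the alert fire when any single threshold is crossed (a sketch using the field names from the query above) is to filter with OR logic and set the alert trigger condition to "number of results > 0":

| where getperct > 50 OR putperct > 10 OR deleteperct > 80

With this final filter, the search returns rows only when at least one condition is met, so a simple result-count trigger covers the "any one of the three" requirement.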
It’s supposed to be based on the data, @PickleRick.
You need to "carry over" value from one results row to another using autoregress command or streamstats. Autoregress is pretty straightforward. For example in this case | autoregress get as old_get... See more...
You need to "carry over" value from one results row to another using autoregress command or streamstats. Autoregress is pretty straightforward. For example in this case | autoregress get as old_get Streamstats seems a bit more complicated but can be a pretty powerful tool. Alternative to autoregress here would be | streamstats current=f window=1 values(get) as old_get One caveat to both those commands - they are applied in order of the returned events which by default is the reverse chronological order which means you'd be copying values from a newer result to the older one. If that's not what you want, you'll need to resort your results.
You're trying to dig out a thread from some 8 years ago. Most probably, most of the participants are no longer actively following Answers. Your best bet would be to create a new thread and describe your problem there (possibly providing a link to this one if your case is similar).
Again - what dates are these numbers supposed to be?
I tried to convert it, but I couldn't get the exact results. Are there any other ways to convert it, @ITWhisperer?
Thank you @Richfez. As a first need (and I should have said this in the opening), I was not asking to access them at all. I just want to know where they are, so I can back them up just like everything else in the /etc/apps folder. But editing them could also be a need, e.g. in case of loss of db inputs info of any kind. And if that were the case, I guess it would be better to edit them (at one's own risk) directly in the OS via a text editor, since they are JSON.
Regards,
Altin
It may be complicated, but I think it's necessary. Perhaps it could be better, though. Even if the SH did expand the query (and maybe it does) before sending it to the peers, that's just part of what the bundle is used for. Search-time field extractions and lookups done by the indexers make the query more efficient; see the illustration below.
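For instance, a search-time field extraction shipped to the peers in the bundle might look like this props.conf stanza (the sourcetype and field names here are hypothetical, purely for illustration):

[my:app:logs]
# Extract the numeric status code at search time on whichever peer scans the events
EXTRACT-status = status=(?<status>\d+)

Because the extraction runs on the indexers, a filter such as status=500 can be applied where the data lives instead of shipping every raw event back to the search head.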
Thanks a lot @richgalloway. That behavior of the SH seems unnecessarily complicated. Instead of sending all of those KO bundles to the indexers, could the SH not first expand the SPL query (to resolve all of the search-time names/variables) and then send it to the indexers?
Thanks, Michal
The first example runs entirely on the Search Head, where the lookup definition is available. The second example runs on the indexers, which apparently are unaware of the lookup definition. Either the app defining the lookup is not installed on the indexers, or the lookup file is blocked from the knowledge bundle ([replicationDenyList] in distsearch.conf).
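For reference, such a block in distsearch.conf on the search head looks roughly like this (a sketch; the stanza key and path pattern are hypothetical, not taken from the poster's environment):

[replicationDenyList]
# Any lookup file matching this pattern is excluded from the knowledge bundle
noSSELookups = apps/Splunk_Security_Essentials/lookups/*

Large lookup files are often denylisted like this to keep the bundle small, which would produce exactly the "does not exist or is not available" error on the peers.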
Yes, you understand correctly.
Hi @marnall, sorry, I did not understand. I tried to combine the 2 queries to get a combined output, but I am not getting it. Can you please share the query?
Can you please explain a little bit more about this approach?
Thanks @richgalloway. So just to confirm: "To know what results to return to the SH, the peers need to know the values of the tags, eventtypes, and macros used in the query."

Example: "index=_audit eventtype=splunk_access". Since event type extraction is search-time (not index-time), the indexer does not have the definition for that event type. Because of this, the SH needs to push the definition for that event type to the indexer:

[splunk_access]
search = index=_audit "action=login attempt" NOT "action=search"

Once that is done, the indexer will actually expand the original SPL query to "index=_audit index=_audit action=login attempt NOT action=search" and will be able to execute the query correctly. The same would happen with most of the other Knowledge Objects, including all the search-time field extractions.

So the summary would be: the Search Head needs to push Knowledge Objects to the indexer because, for the indexer, those are "unknown variables/names". The indexer does not have those definitions and does not know how to expand/execute SPL queries using those KOs. This applies only to search-time operations/objects defined on the SH (index-time-related configurations like TRANSFORMS should already be on the indexer).

Could you please confirm, @richgalloway, that all of this is correct? Thanks!
Application  Success  Failed  Total  Percentage
IPL          15       2       17     11.764
IPL          10       2       12     16.666
IPL          4        1       5      20.000
WWV          3        2       5      40.000
WWV          1        0       1      0.000
PIP          20       5       25     20.000
IPL          1        0       1      0.000
WWV          30       15      45     33.333
PIP          20       10      30     33.333

From the above table, we want to calculate application-wise totals.

Expected output:

Application  Success  Failed  Total  Percentage
IPL          30       5       35     14.285
WWV          34       17      51     33.333
PIP          40       15      55     27.272

How can we do this?
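A minimal SPL sketch for this (assuming the rows above are search results with fields named Application, Success, and Failed, and that Percentage = Failed/Total*100, as the expected output suggests):

| stats sum(Success) as Success sum(Failed) as Failed by Application
| eval Total=Success+Failed
| eval Percentage=round(Failed/Total*100,3)

Re-deriving Total and Percentage after the stats, rather than summing the original Total and Percentage columns, keeps both fields consistent with the aggregated counts (the sample output appears to truncate rather than round, so the last decimal may differ).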
Hi! This is a contrived example, but could you help me understand why this completes (and functions as expected):

| makeresults format=csv data="filename
calc.exe"
| lookup isWindowsSystemFile_lookup filename

whilst this:

index=sandbox
| eval filename="calc.exe"
| lookup isWindowsSystemFile_lookup filename

throws an error with the message:

... The lookup table 'isWindowsSystemFile_lookup' does not exist or is not available.

The isWindowsSystemFile_lookup is provided by Splunk Security Essentials. Hmm, I'm on Splunk Cloud.
Thanks, Kevin