All Posts

"Scarily enough, it appears to be enabled by default."

At least with 9.3.1, this feature is not enabled by default:

search_history_storage_mode = <string>
* The storage mode by which a search head cluster saves search history.
* Valid storage modes include "csv" and "kvstore".
[...]
* Default: csv

https://docs.splunk.com/Documentation/Splunk/9.3.1/Admin/Limitsconf#History
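If you did want to switch the storage mode, a minimal sketch for limits.conf follows; the [history] stanza name is inferred from the #History anchor in the docs above, so verify it against the 9.3 limits.conf spec before pushing it out:

[history]
search_history_storage_mode = kvstore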
Hi, I know it's a bit confusing, but when I run my query the field Uptime has the value 0,00 for every _time. It does not matter how many decimals come after the 0.
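If the comma is just a locale-style decimal separator rather than a genuinely zero value, a hedged sketch for turning it into a number (replace() and tonumber() are standard eval functions; the field name is taken from your post):

| eval Uptime_num = tonumber(replace(Uptime, ",", "."))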
Value 0,00 of which field(s)?
We need more information. What "splunk" is being installed: Splunk Enterprise, Splunk Universal Forwarder, or something else? What OS do the devices run?

Please elaborate on "we have been unsuccessful in getting the CLI commands to work". Which commands? What happens (or doesn't) when you use them? What error messages do you get?

How are the devices managed? Many sites use their existing management software (SCCM, BigFix, etc.) to deploy Splunk UFs successfully.
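If the endpoints are Windows (an assumption on my part), the universal forwarder MSI supports an unattended install that can be pushed through SCCM, BigFix, GPO, or a remote PowerShell session; a minimal sketch, with the installer file name and deployment server address as placeholders:

msiexec.exe /i splunkforwarder-<version>-x64.msi AGREETOLICENSE=Yes DEPLOYMENT_SERVER="deploy.example.com:8089" /quiet

Once the forwarder phones home to the deployment server, the rest of the configuration (inputs, outputs) can be delivered as deployment apps instead of touching each machine.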
In SPL, there's no such thing as a "variable".  We call them "fields".
And if you do:

| tstats count where index=<your_index> earliest=1 latest=+10y

Anyway, that might call for a support case.
We have logs that are written to /var/log and /var/log/audit. We need to keep these for 365 days, and we want to make sure we are following best practices. Is there a set of configuration settings we can follow? Ultimately, we want to ensure we have log retention and that /var/log is not a cluttered mess.

Thank you!
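On the Splunk side, retention is controlled per index; a minimal sketch of the two relevant files (the index name and layout are assumptions, and 31536000 seconds = 365 days):

# inputs.conf on the forwarder
[monitor:///var/log]
index = os_logs
disabled = 0

[monitor:///var/log/audit]
index = os_logs
disabled = 0

# indexes.conf on the indexer(s)
[os_logs]
homePath   = $SPLUNK_DB/os_logs/db
coldPath   = $SPLUNK_DB/os_logs/colddb
thawedPath = $SPLUNK_DB/os_logs/thaweddb
frozenTimePeriodInSecs = 31536000

Note that frozenTimePeriodInSecs only governs how long Splunk keeps the indexed copy; rotation and cleanup of the files under /var/log itself is still handled by logrotate on the host, not by Splunk.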
My office has deployed around 120 devices that they have now requested Splunk be added to. We have been unsuccessful in getting the CLI commands to work for a successful install. The GUI version works, but that would mean I have to reach out and touch each machine directly to set it up. Is there a way to script what the GUI does so that we can deploy this remotely?
There is a precaution about this noted in the Machine Learning Toolkit notes, as follows:

"The Splunk Enterprise Security App relies on MLTK and the PSC add-on. If you are a Splunk Enterprise Security App user, and you are upgrading that app, restart your Splunk instance first. Doing so closes any background PSC processes that can cause the Splunk Enterprise Security App upgrade to error out."
Let me add more context. In this example, "userid" is not a field but a variable that I intend to use as a search value across four fields. The four fields are: member_dn, member_id, Member_Security_ID, member_user_name.
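For what it's worth, a hedged sketch of how a single value can be applied to all four fields in plain SPL (the index and the literal value are placeholders; in a dashboard or saved search the literal would come from a token or macro):

index=your_index (member_dn="*jdoe*" OR member_id="jdoe" OR Member_Security_ID="jdoe" OR member_user_name="jdoe")

The wildcard on member_dn is there because a distinguished name usually contains more than just the user ID.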
Sadly, that doesn't seem to be the cause of my problem. I cleaned all indexes, deleted the data partition entirely, and recreated the instance from scratch. After that, I checked and eventcount (as well as tstats) returned 0 events for my indexes, as expected.

When I move my files into the input folder monitored by Splunk, the counts start to go up, but the two counts diverge over time, and after one hour of ingestion I am back to a huge difference: 142 million events seen by eventcount versus 68 million seen by tstats. I'm the only user of this test instance, so no deletions were made. I checked the monitoring console index details for a particular index, and the numbers shown there are consistent with the numbers returned by eventcount for that index.

There seems to be an inconsistency between my input files and the events retrieved by tstats. For a specific sourcetype, I have 54 input files for a total of 68 million events (files full of JSON events, with nothing special in them, no specific line breaking or anything). If I index only those files, I see in the server logs that the TailReader did see the 54 files and counted 68 million events. If I run "| eventcount index=XXX", it returns 68 million events. But if I run "| tstats count where index=XXX by source", I only get 35 source files for 28 million events.

When checking the logs of the instance, there are no errors indicating anything wrong with my sourcetype, or any reading or parsing problems. The only error messages are from a service "STMgr", which indicate that for some buckets there was an unexpected return code of -9. Would that mean that the indexing is taking too much memory, and if those processes are killed by the OS, could that explain the inconsistent numbers?
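Return code -9 does look like the process being killed with signal 9 (SIGKILL), which is consistent with the OS OOM killer. A hedged sketch of a search to pull those bucket errors out of the internal logs (component and log_level are the standard splunkd field extractions):

index=_internal sourcetype=splunkd log_level=ERROR component=STMgr
| stats count by host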
Was able to figure it out! Needed to just use an if statement:

| eval testuser=if(admin=target,1,0) | where testuser=1
Apologies, this will be for Windows event logs and Ivanti logs.
What data are you talking about? Splunk account changes? Windows Event Logs? Some (and which) Linux audit logs? If your data is CIM-normalized, you should use the Change.Account_Management dataset.
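As a rough sketch of what that looks like with tstats (the nodename constraint and the All_Changes.* fields come from the CIM Change data model; whether accelerated summaries exist in your environment is an assumption, so summariesonly is left at its default):

| tstats count from datamodel=Change where nodename=All_Changes.Account_Management by All_Changes.user, All_Changes.src_user
| where 'All_Changes.user'='All_Changes.src_user'

The final where clause keeps only changes where the actor and the affected account are the same, which matches the "admin modifies their own account" requirement in this thread.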
Yes, I haven't used it myself, but indeed you need to have your CAs issued properly (with the proper path length constraint). I thought you wanted an additional setting for that. As far as I remember, subsequent subCAs can redefine the constraint from the "upper" CA by making it "stronger".
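For reference, a minimal sketch of how the constraint is usually expressed when issuing the CA certificates with OpenSSL (the section names and file layout are assumptions for an openssl.cnf extensions file):

[ v3_root_ca ]
basicConstraints = critical, CA:TRUE, pathlen:1
keyUsage = critical, keyCertSign, cRLSign

[ v3_intermediate_ca ]
basicConstraints = critical, CA:TRUE, pathlen:0
keyUsage = critical, keyCertSign, cRLSign

A chain that violates the constraint can then be checked outside Splunk with "openssl verify -CAfile root.pem -untrusted intermediates.pem server.pem" before wiring the certificates into the Splunk TLS configuration.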
I am trying to make a search that will fire only when an admin makes a change to their own account. I want to know if a-johndoe gives multiple permissions to a-johndoe, and NOT if a-johndoe gives permissions to a-janedoe. Would I use an if statement for this?

Thank you
Hi @PickleRick, path length constraint validation is working fine when the path length is a minimum of 1 on the Root CA and 0 from the intermediate CA onwards. Thanks for your guidance.
The absence of third-party software in the documentation implies no third-party software is used in the add-on. Downloading the add-on and examining the code confirms all modules have a Splunk copyright notice.
Hey, thanks again for giving me your insight on this one. I did come across the bin command but thought the transaction command might be better to try in this situation. As I am still learning the power and uses of many of the commands in Splunk, this does help me get a better understanding of how and when to use the transaction command.

As you pointed out, and this is my true problem in this case, there are only two common/semi-common fields between my two indexes, those being "_time" and "username". I have compared the raw logs from both indexes, and it appears that the matching events are at most 2 seconds apart, and I haven't seen any print jobs by the same user that were closer than 10 seconds apart. But to your point, I will make a note that there could be some issue with my output if a user prints two jobs seconds apart from each other.

As always, I appreciate your input and clarification on my questions.
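For anyone landing here later, a hedged sketch of the bin-based alternative under those assumptions (the index names are placeholders, and the 10-second span comes from the spacing described above):

(index=print_jobs OR index=other_index) username=*
| bin _time span=10s
| stats values(index) AS indexes dc(index) AS index_count count BY _time username
| where index_count=2

The known caveat still applies: two related events that happen to straddle a bucket boundary will land in different 10-second bins and will not be paired.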
Hi @PickleRick, my requirement is on path length validation. I can try with a path length of 1. In this case, only one intermediate CA should be allowed. If a 2nd-level intermediate CA issues the server certificate, it should fail.