All Posts

Hello Splunk community, I have a dashboard where I can search data going back a maximum of 30 days. I'm looking for a way to achieve long-term trending. What would be the best approach for comparing data on a month-by-month basis, for example? After 30 days I want to save that data, recall it at a later date, and do a comparison. Is this even possible? Thanks in advance.
Yeah, straight up using macros did not work, not with sendemail or alert.action, so 100% correct. I had not noticed the result tokens, and this will be a h*ck of a workaround. Though if I understand the suggestion correctly, I could maintain a macros.conf as a "central" "distribution list", either by app or globally, using definitions (the format looks OK at least) to generate a field in the alert searches containing the recipient list. Then use that result token as $result.recipients$ to actually populate the recipients with the list of email addresses from the macro definition. I'll give this the old college try and push it to the testing environment tomorrow. Thank you and fingers crossed.
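Roughly what I have in mind, as a sketch - the macro name, field name, and addresses here are made up:

macros.conf (in a shared app, or exported globally):
[email_dl_soc]
definition = "alice@example.com,bob@example.com"

In the alert search, expand the macro into a field:
... | eval recipients = `email_dl_soc`

savedsearches.conf for the alert, referencing the field as a result token:
action.email.to = $result.recipients$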
I appreciate all your efforts. Now to make things clear:
1- Do I need to install the TA add-on on the UF for ES? (yes or no) Noting that all my values on the Security Posture dashboard are still zero although I enabled all correlation searches.
2- If yes, where can I download it from? Noting that I didn't find any TA on Splunkbase.
Thanks once again.
There's a lot of Splunk documentation so I understand why you don't have all the information yet.  See https://docs.splunk.com/Documentation/AddOns/released/Overview/Wheretoinstall for tips on where to install TAs.  The instructions that come with the TA are the best guide, however. Splunkbase apps should be obtained directly from Splunkbase rather than via 3rd-party sources that may not be reputable.  However, once you've downloaded the TA it does not need to be downloaded again until a new version is available.  The one downloaded copy may be installed as many times as you wish.
I must say that Splunk Linux packaging is sometimes sub-par (and I suppose the docs are done by more or less the same people and can contain errors). If you have the systemd unit in place, start and stop the service using systemctl - that's what the service unit is for.
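For example, assuming the default unit name Splunkd that enable boot-start creates:

sudo systemctl start Splunkd
sudo systemctl stop Splunkd
sudo systemctl status Splunkd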
@PickleRick hello, because the Splunk documentation recommends running commands like /opt/splunk/bin/splunk start|stop|restart using sudo. https://docs.splunk.com/Documentation/Splunk/9.1.2/Admin/RunSplunkassystemdservice#Manage_clusters_under_systemd
There is also the following information: "Under systemd, splunk start|stop|restart commands are mapped to systemctl start|stop|restart commands." Therefore, I believe that in this case there should be no difference in how exactly the restart is carried out.
If you want to reproduce the problem yourself, here is a sample list of steps:
1) Create an Ubuntu 22.04 VM, in Google Cloud Platform for example
2) Install Splunk Enterprise 9.1.2: dpkg -i splunk-9.1.2-b6b9c8185839-linux-2.6-amd64.deb
3) Enable the systemd unit for Splunk: /opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user splunk -group splunk --accept-license
4) Run commands like /opt/splunk/bin/splunk start|stop|restart and compare with the systemd unit status; you will see errors.
And in the end, you have commands like /opt/splunk/bin/splunk offline, which are not called through systemd.
Read about the push modes https://docs.splunk.com/Documentation/Splunk/9.1.2/DistSearch/PropagateSHCconfigurationchanges#Choose_a_deployer_push_mode.
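For example, the mode can be set per app in that app's app.conf on the deployer - if I remember the stanza correctly:

[shclustering]
deployer_push_mode = full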
It's a general method of installing addons. You need addons for your particular sources.
Why are you trying to start splunk with splunk start using root? Also - if you have the systemd unit, use it.
I'm afraid I don't have good news for you here. The methods for getting events from Windows, in order of decreasing preference:
1. Directly pulling data from the local eventlog with a UF - the obvious case.
2. Use native Windows Event Forwarding to get events to another host and ingest with a UF from there - WEF can be tricky to set up in some cases (especially in a domainless situation), and forwarded events can hit EventLog input performance limits (this can happen on a standalone host as well, but it's more probable on a WEF collector), but it generally works quite well and is often the only reasonable option available (domain admins don't like 3rd-party tools on domain controllers). A minimal inputs.conf sketch for this follows below.
3. Pull data from another host with a WMI input - that's way more tricky to set up, works only in an AD environment, needs the UF to run with a domain service account, is worse performance-wise, and is harder to debug.
4. And at the very end you have the worst possible option of pulling the data with 3rd-party tools. The events arrive in strange formats and contain some additional data specific to that 3rd-party solution, but may not include all data from the original events. So far the only solution I saw that suggested it could push the full eventlog XML via syslog was the paid version of NXLog (the free one, when configured to push XML, sends an XML of the NXLog event with the eventlog entry in plaintext form as one of the fields).
There are two problems you face with 3rd-party forwarders. One is the necessity of parsing a completely foreign format. The other is that you need to either make it CIM compliant so you can do without TA_windows, which is a huge undertaking, or do a "mapping" to the TA_windows-supplied sourcetype, which seems an equally absurd amount of work.
So I'm afraid you're knee-deep in manure here, and I'd advise you to try hard to get those events in any of the first 3 ways. Even WMI is better than external tools.
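For option 2, a minimal UF inputs.conf on the WEF collector could look something like this (a sketch - the index name is made up, and renderXml is only needed if you want the full XML form of the events):

[WinEventLog://ForwardedEvents]
disabled = 0
renderXml = true
index = wineventlog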
I see the same behaviour with Ubuntu 22 in GCP and Splunk Enterprise 9.1.2. Splunk management through Systemd looks broken.  
You can check the status of your inputs with splunk list monitor and splunk list inputstatus
https://docs.splunk.com/Documentation/Splunk/9.1.2/Data/Monitorfilesanddirectories
I need an inputs.conf stanza to monitor the file at the below location: c:\test.log
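For reference, a minimal stanza for that would be something like the following - index and sourcetype are placeholders to adjust for your environment:

[monitor://c:\test.log]
disabled = 0
index = main
sourcetype = test_log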
@mpalarchio Hey, thank you for the reply! I was stuck with this column chart configuration and it worked 100%.
Not sure if this got solved, but I was able to get it formatted using the following:
| inputlookup sc_vuln_data_lookup
| eval first_found = strftime(first_found, "%c")
| eval last_found = strftime(last_found, "%c")
https://docs.splunk.com/Documentation/AddOns/released/Overview/Distributedinstall
Here is the correct link. No one mentioned that it is the same TA for both - did you try this before? As per the documentation it should be downloaded directly from Splunkbase, but I can't find it. The only thing I found is "Splunk-add-on-for-windows", but I'm not sure if that's it or not. Thanks.
Hi @PickleRick , your solution was correct! I only changed a very little detail in the "drop_dead_all" stanza, because I have to remove only the cloned events, not all of them; otherwise any new flows would be deleted without being cloned. But your solution is great! Thank you very much for your help; I hope to have the opportunity to return the favor in the future, because you solved a very important issue for my job. On this occasion I'll take advantage of your knowledge: if you have expertise on WinLogBeat, would you please take a look at my question: https://community.splunk.com/t5/Getting-Data-In/Connect-winlogbeat-log-format-to-Splunk-TA-Windows/m-p/669363 ? Thank you again. Ciao. Giuseppe
Hi. In fact my problem is not really a problem; it is the normal behavior of Splunk. Before, I had a single search head and all the .conf files were in the local directory to override the default settings. When I migrated the search head into a search head cluster I kept this principle. Splunk's philosophy and best practice is that the deployer must deploy files that are not to be changed locally on the search heads, and these files must therefore be put in the default directory. To resolve my problem I had to move the files from the local directory to the default directory and then run the apply shcluster-bundle command. Now it works as expected.
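For reference, the push command is run on the deployer and looks like this (target URI and credentials are placeholders):

splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme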
One more. I was checking, and one of the files is more than 124,000 bytes. What value should I define for initCrcLength?
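For context, from what I've read the setting goes into the monitor stanza in inputs.conf, and it only needs to be long enough to make the beginning of the file unique - it does not have to match the full file size. A sketch with an illustrative path and value:

[monitor://c:\test.log]
initCrcLength = 1024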