All Posts


If/when you have issues with lookups (e.g. from time to time you find old lookups on the SHC), you should check this: https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/PropagateSHCconfigurationchanges#Preserve_lookup_files_across_app_upgrades r. Ismo
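For reference, a minimal sketch of the CLI approach that doc page describes, run on the deployer; the target URI and credentials are placeholders:

splunk apply shcluster-bundle -target https://sh1.example.com:8089 -preserve-lookups true -auth admin:changeme

With -preserve-lookups true, lookup files already present on the SHC members are not overwritten by the copies bundled in the pushed app.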
Hi. First, it's best to use a real syslog server instead of a Splunk UF/HF, even though you can also use Splunk for that. For a PoC you can use Splunk, but in production you should switch to something else. Ports under 1024 cannot be used unless you are running the process as root, and you shouldn't run splunkd as root. For that reason you must switch the port to e.g. 1514 or something similar, and also configure SolarWinds SEM to use it. r. Ismo
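A minimal sketch of that non-privileged-port setup, assuming a plain TCP feed; the sourcetype and index names here are placeholders, not SolarWinds-specific values:

inputs.conf on the HF:

[tcp://1514]
connection_host = ip
sourcetype = solarwinds:sem
index = solarwinds

Then point the SolarWinds SEM syslog forwarding at TCP 1514 instead of 514.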
I attempted to retrieve data from a REST API in a proxy environment using Splunk Add-on Builder, but was unsuccessful. The proxy settings have been configured at the OS level. As a troubleshooting step, I found that while I can execute curl commands from the OS, I cannot do the same from Splunk. Additionally, I am unable to access Splunkbase via Splunk Web. Is there a best practice for working with Splunk in a proxy environment?
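One thing worth checking here: splunkd does not necessarily pick up OS-level proxy settings, so curl working at the OS level doesn't guarantee splunkd can reach out. A minimal sketch, assuming the [proxyConfig] stanza in server.conf (the proxy host and port are placeholders; verify the exact setting names against the server.conf spec for your version):

[proxyConfig]
http_proxy = http://proxy.example.com:8080
https_proxy = http://proxy.example.com:8080
no_proxy = localhost, 127.0.0.1

A modular input generated by Add-on Builder may additionally need its own proxy configuration, since its Python code makes its own outbound connections.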
I'm trying to set up a Proof of Concept (POC) environment for a Splunk Heavy Forwarder (HF) which receives data from SolarWinds SEM. We are using TCP port 514 to forward logs from SolarWinds SEM. Both Splunk HF and SolarWinds are using free licenses.

SolarWinds has the forwarding configuration set via the admin console. In the Splunk HF inputs.conf file, details have been added as below:

[TCP://514]
connection_host = X.X.X.93
sourcetype = *
disabled = false
index = SolarWinds-index

Both instances are running in the AWS cloud, on the same subnet. Host names: Splunk - ip-X-X-X-72.ap-southeast-1.compute.internal, SolarWinds - ip-X-X-X-93.ap-southeast-1.compute.internal. When I check the Splunk HF interface with tcpdump, I receive the following output:

00:58:05.726708 IP ip-X-X-X-72.ap-southeast-1.compute.internal.shell > ip-X-X-X-93.ap-southeast-1.compute.internal.36044: Flags [R.], seq 0, ack 3531075234, win 0, length 0
00:58:05.727636 IP ip-X-X-X-93.ap-southeast-1.compute.internal.36054 > ip-X-X-X-72.ap-southeast-1.compute.internal.shell: Flags [S], seq 3042331467, win 64240, options [mss 1460,sackOK,TS val 1136916397 ecr 0,nop,wscale 7], length 0

The Splunk HF is receiving logs from the Universal Forwarder (UF) on the Windows server, but not from SolarWinds SEM.

Can anyone advise on this issue?
First, do NOT enable all ES correlation searches. That will cause more problems than it will solve. Enable only the correlation searches that pertain to your use cases and for which you have data ingested in Splunk.

Where a TA should be installed depends on what the TA does. The installation instructions for the TA should specify the location. If they don't, use the "Where to install" link I provided earlier. Generally speaking, it can't hurt to install a TA on both indexers and UFs.

Splunkbase is the source for most Splunk TAs. Others can be downloaded from the vendors that created them for their products. Still others are available from GitHub. It can be difficult to locate a TA without knowing its name, however. What do you want the TA to do? Perhaps we can help you find something appropriate.
The data will be retained in Splunk for as long as it's been configured to stay, so although your dashboard may be searching data for the last 30 days, the data may be there for longer.

Generally, the approach to your problem is to look at summary indexing. What people often do is ingest data from their sources, run aggregations on that data, and save the aggregations to a summary index. The main index with all the data is then retained for a short period, whereas the smaller summary volume is configured to be retained for a longer period, so it can be used for long-term analysis. Look at report-based summary indexing, which can populate a summary index on a schedule, and at the collect SPL command, which allows you to do it manually.

When people ask whether something is possible, the answer is almost always yes, and often there is more than one way. As for dashboarding, that's the easy part - if you have prepared your data, then you can do what you like with that data, as long as you have it.
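A minimal sketch of the collect approach, assuming a summary index named "summary" already exists; the base search and field names are placeholders for your own data:

index=web sourcetype=access_combined
| timechart span=1d count AS daily_events
| collect index=summary source="web_daily_rollup"

Schedule something like this daily, and your month-by-month trending dashboards then search index=summary, which can be retained far longer than the raw data.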
Hello Splunk community, I have a dashboard with which I can search data going back a maximum of 30 days. I'm looking for a way to achieve long-term trending. What would be the best approach for comparing data on a month-by-month basis, for example? After 30 days I want to save that data, recall it at a later date, and do a comparison. Is this even possible? Thanks in advance.
Yeah, straight up using macros did not work, not with sendemail or alert.action, so 100% correct. I had not noticed the result tokens, and this will be a h*ck of a workaround. Though if I understand the suggestion correctly, I could maintain a macros.conf to have a "central" "distribution list", either by app or globally, using definitions (the format looks OK at least) to generate a field in the alert searches containing the macro list. Then use that result token as $result.recipients$ to actually populate the recipients with the list of email addresses from the macro definition. I'll give this the old college try and push it to the testing environment tomorrow. Thank you and fingers crossed.
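A minimal sketch of that arrangement, with hypothetical names throughout:

macros.conf in a shared app:

[alert_recipients]
definition = "user1@example.com, user2@example.com"

In each alert search, materialize the list as a result field:

... | eval recipients=`alert_recipients`

Then in the alert's email action (savedsearches.conf or the UI equivalent), use the result token as the recipient list:

action.email.to = $result.recipients$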
I appreciate all your efforts. Now, to make things clear:
1 - Do I need to install the TA add-on on the UF for ES? (yes or no) Note that all my values on the Security Posture dashboard are still zero although I enabled all correlation searches.
2 - If yes, where can I download it from? Note that I didn't find any such TA on Splunkbase.
Thanks once again.
There's a lot of Splunk documentation, so I understand why you don't have all the information yet. See https://docs.splunk.com/Documentation/AddOns/released/Overview/Wheretoinstall for tips on where to install TAs. The instructions that come with the TA are the best guide, however.

Splunkbase apps should be obtained directly from Splunkbase rather than via 3rd-party sources that may not be reputable. However, once you've downloaded a TA, it does not need to be downloaded again until a new version is available. The one downloaded copy may be installed as many times as you wish.
I must say that Splunk's Linux packaging is sometimes sub-par (and I suppose the docs are done by more or less the same people and can contain errors). If you have the systemd unit in place, start and stop the service using systemctl - that's what the service unit is for.
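For example, assuming the default unit name Splunkd that enable boot-start -systemd-managed 1 creates:

sudo systemctl start Splunkd
sudo systemctl stop Splunkd
systemctl status Splunkd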
@PickleRick hello, because the Splunk documentation recommends running commands like /opt/splunk/bin/splunk start|stop|restart using sudo: https://docs.splunk.com/Documentation/Splunk/9.1.2/Admin/RunSplunkassystemdservice#Manage_clusters_under_systemd

There is also the following information: "Under systemd, splunk start|stop|restart commands are mapped to systemctl start|stop|restart commands." Therefore, I believe that in this case there should be no difference in how exactly the restart is carried out.

If you want to reproduce the problem yourself, here is a sample list of steps:
1) Create an Ubuntu 22.04 VM, in Google Cloud Platform for example
2) Install Splunk Enterprise 9.1.2: dpkg -i splunk-9.1.2-b6b9c8185839-linux-2.6-amd64.deb
3) Enable the systemd unit for Splunk: /opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user splunk -group splunk --accept-license
4) Run commands like /opt/splunk/bin/splunk start|stop|restart and compare with the systemd unit status; you will see errors.

And in the end, you have commands like /opt/splunk/bin/splunk offline, which are not called through systemd.
Read about the push modes https://docs.splunk.com/Documentation/Splunk/9.1.2/DistSearch/PropagateSHCconfigurationchanges#Choose_a_deployer_push_mode.
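For reference, the push mode is set on the deployer. A minimal sketch, assuming server.conf on the deployer and merge_to_default as the chosen mode (one of several modes documented on that page):

[shclustering]
deployer_push_mode = merge_to_default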
It's a general method of installing addons. You need addons for your particular sources.
Why are you trying to start Splunk with splunk start as root? Also - if you have the systemd unit, use it.
I'm afraid I don't have good news for you here. The methods for getting events from Windows, in order of decreasing preference:

1. Directly pulling data from the local eventlog with a UF - the obvious case.

2. Using native Windows Event Forwarding to get events to another host and ingesting them with a UF from there. WEF can be tricky to set up in some cases (especially in a domainless situation), and forwarded events can hit EventLog input performance limits (that can happen on a standalone host as well, but it's more probable on a WEF collector), but it generally works quite well and is often the only reasonable option available (domain admins don't like 3rd-party tools on domain controllers).

3. Pulling data from another host with a WMI input (a sketch follows below). That's way more tricky to set up, works only in an AD environment, needs the UF to run with a domain service account, is worse performance-wise, and is harder to debug.

4. And at the very end you have the worst possible option of pulling the data with 3rd-party tools. They use strange formats and contain additional data specific to that 3rd-party solution, but may not forward all the data included in the original events. So far the only solution I've seen that suggested it could push the full eventlog XML via syslog was the paid version of NXLog (the free one, when configured to push XML, sends an XML of the NXLog event with the eventlog entry in plaintext form as one of the fields).

There are two problems you face with 3rd-party forwarders. One is the necessity of parsing a completely foreign format. The other is that you need to either make it CIM-compliant so you can do without TA_windows, which is a huge undertaking, or do a "mapping" to a TA_windows-supplied sourcetype, which seems an equally absurd amount of work.

So I'm afraid you're knee-deep in manure here, and I'd advise you to try hard to get those events in any of the first 3 ways. Even WMI is better than external tools.
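For option 3, a minimal sketch of a remote WMI pull via wmi.conf on a forwarder running under a domain service account; the server, index, and stanza names are placeholders:

[WMI:RemoteSecurity]
server = dc01
event_log_file = Security
interval = 5
disabled = 0
index = wineventlog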
I see the same behaviour with Ubuntu 22 in GCP and Splunk Enterprise 9.1.2. Splunk management through Systemd looks broken.  
You can check the status of your inputs with splunk list monitor and splunk list inputstatus.
https://docs.splunk.com/Documentation/Splunk/9.1.2/Data/Monitorfilesanddirectories
I need an inputs.conf stanza to monitor the file at the below location: c:\test.log
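Along the lines of that doc page, a minimal monitor stanza; the sourcetype and index are placeholders to adjust for your environment:

[monitor://c:\test.log]
disabled = 0
sourcetype = test_log
index = main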