All Posts

Hello, we deployed a new Splunk cluster containing a Cluster Manager, 3x SHC members, and 6x indexers. The cluster has hundreds of vCPUs across the SHC and indexers, but after installing Enterprise Security 7.x we are seeing hundreds of skipped searches, specifically:

- The maximum number of concurrent historical scheduled searches on an instance or cluster reached
- The maximum number of concurrent auto-summarization searches reached

Logs indicate the searches are being skipped on the CM (which only has 12 CPU cores). We followed the documentation to install ES on a distributed cluster: Install Splunk Enterprise Security in a search head cluster environment | Splunk Docs. (We used the CM, which is also our deployer, to push ES to the SHC via the shcluster apps folder.)

Note: some summarization searches are running on the SHC members, but the majority seem to be running on the CM. Would appreciate any ideas as this has me stumped!
The following YouTube video shows this tactic: "Master Splunk Dashboards: Expert Guide to Troubleshooting Tokens!" It is at about the 7-minute mark of the video.

<row>
  <panel>
    <html>
      your code
    </html>
  </panel>
</row>
No app necessary, and it doesn't need JavaScript. Works on a post. This is the way to go. Using an app is not a bad method, but sometimes you have to go through a change control board, or you are on Splunk Cloud. Using an HTML tag will work as long as you have edit rights to the dashboard, which you should if you are coding the dashboard.
It sounds like your clients (forwarders) are not consistently communicating with the Splunk instance after the snapshot rollback. Since telnet confirms ports 8089 and 9997 are open, and the issue only resolves temporarily after restarting the Splunk server, here are a few steps to troubleshoot (a client-side config sketch follows this list):

1. Check forwarder configuration: Verify that deploymentclient.conf on the clients points to the correct Splunk server hostname/IP and management port (8089). Ensure phoneHomeIntervalInSecs is set appropriately (the default is 60 seconds).
2. Validate server rollback impact: The rollback may have caused a mismatch in SSL certificates or server identity. Check whether the server's server.conf or certificates (in $SPLUNK_HOME/etc/auth/) were altered. Regenerate or redeploy certificates if needed.
3. Inspect splunkd logs on the clients: Look at $SPLUNK_HOME/var/log/splunk/splunkd.log on the clients for errors related to connection failures or authentication issues when phoning home.
4. Network stability: Ensure there are no intermittent network issues or firewalls blocking consistent communication. Test with tcpdump or netstat on the server to confirm client connections.
5. Indexer acknowledgment: If using indexer acknowledgment, verify that outputs.conf on the clients has useACK=true, and check for any backlog in the indexing queue on the server.
6. Splunk version compatibility: Confirm the forwarders are on a version compatible with 9.3.1. If not, upgrade them to match.

Try restarting the forwarders after checking the above. If the issue persists, share any relevant errors from the client or server logs for further assistance.
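For reference, a minimal sketch of the client-side settings mentioned in steps 1 and 5, assuming a hypothetical server name splunk-server.example.local and default ports (these stanzas are illustrative, not taken from the poster's environment):

# $SPLUNK_HOME/etc/system/local/deploymentclient.conf on the forwarder (illustrative)
[deployment-client]
phoneHomeIntervalInSecs = 60

[target-broker:deploymentServer]
targetUri = splunk-server.example.local:8089

# $SPLUNK_HOME/etc/system/local/outputs.conf on the forwarder (illustrative)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = splunk-server.example.local:9997
useACK = true

Compare the values in those files against what the server expects after the rollback; a stale targetUri or certificate mismatch would explain the clients only phoning home after a server restart.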
Do this ^^^^^^^
Hi,

Onboarding SUSE Linux (SLES/openSUSE) logs into Splunk Enterprise Security (ES) for security-focused use cases is a great initiative, and I'd be happy to share insights on the key log files, differences from other Linux distributions, configuration steps, and best practices. Below, I'll address each of your questions in detail, drawing from general Splunk practices and specific considerations for SUSE Linux, with some references to community insights where applicable.

1. Most Relevant Log Files for Security-Focused Use Cases in Splunk ES

For security-focused use cases in Splunk ES, such as authentication monitoring, audit tracking, change management, and endpoint monitoring, the following SUSE Linux log files are critical. These logs align with Splunk ES's data models (e.g., Authentication, Change, Endpoint) and support use cases like detecting unauthorized access, privilege escalation, or system changes.

- /var/log/messages (or /var/log/syslog in some configurations):
  - Purpose: General system log capturing a wide range of system events, including security-related messages like sudo commands, system service activities, and kernel messages.
  - Use Cases: Useful for monitoring system-wide events, detecting anomalies (e.g., unexpected service failures), and correlating with other logs for incident investigation.
  - Splunk ES Mapping: Feeds into the Change and Endpoint data models for tracking system activities.

- /var/log/secure (or /var/log/auth.log in some SUSE configurations):
  - Purpose: Captures authentication-related events, such as successful/failed logins, SSH access, su/sudo usage, and PAM (Pluggable Authentication Module) events.
  - Use Cases: Essential for the Authentication data model in Splunk ES to detect brute-force attacks, unauthorized login attempts, or privilege escalation.
  - Note: On SUSE, the log file is typically /var/log/secure, but verify whether your system uses /var/log/auth.log (more common in Debian-based systems like Ubuntu).

- /var/log/audit/audit.log:
  - Purpose: Generated by the auditd daemon, this log records detailed system auditing events, including file access, user management (e.g., changes to /etc/passwd), system calls, and security policy violations.
  - Use Cases: Critical for the Change and Endpoint data models, enabling tracking of file modifications, user account changes, and system call monitoring for compliance (e.g., PCI DSS, CIS benchmarks).
  - Note: Auditd must be properly configured to log meaningful events without overwhelming Splunk with noise (more on tuning below).

- /var/log/firewalld (or firewall-related logs):
  - Purpose: Logs firewall activities, such as blocked connections, allowed traffic, or rule changes, typically managed by firewalld or SuSEfirewall2 (legacy in older SLES versions).
  - Use Cases: Supports the Network Traffic and Intrusion Detection data models in Splunk ES for monitoring network security events, such as blocked malicious IPs or unauthorized access attempts.
  - Note: Ensure firewalld is enabled and logging is configured (e.g., via LogDenied settings).

- /var/log/apparmor/ (e.g., /var/log/apparmor/audit.log):
  - Purpose: Logs AppArmor events, including profile violations or permitted actions, which are critical for mandatory access control (MAC) monitoring.
  - Use Cases: Useful for detecting attempts to access restricted files or execute unauthorized processes, feeding into the Endpoint data model.
  - Note: AppArmor is commonly used in SUSE for security hardening, so enabling its logging is valuable.

- Application-specific logs (e.g., Booking.com, Apache, etc.):
  - Purpose: Logs from applications like web servers (/var/log/httpd/ or /var/log/apache2/), databases, or other services running on SUSE systems.
  - Use Cases: Monitor for application-level security events, such as web attacks or unauthorized API access, feeding into the Web or Endpoint data models.
  - Note: Identify the critical applications on your SUSE systems and include their logs based on your security use cases.

- /var/log/zypper.log:
  - Purpose: Logs package management activities (installations, updates, removals) via the zypper package manager, unique to SUSE.
  - Use Cases: Supports the Change data model for tracking software changes that could indicate unauthorized updates or vulnerabilities.
  - Note: Monitor this for compliance and to detect unexpected package modifications.

Recommendation: Start with /var/log/messages, /var/log/secure, and /var/log/audit/audit.log as the core logs for Splunk ES, as they cover most security use cases. Expand to firewall, AppArmor, and application logs based on your environment's needs. The Splunk Add-on for Unix and Linux (Splunk TA) is a great starting point to configure these inputs, but customize it to avoid collecting unnecessary data.
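As an illustration only (not part of the original answer), a minimal inputs.conf sketch for the three core files above might look like the following on a universal forwarder. The index name suse_linux is an assumed placeholder, and the sourcetypes shown follow the naming the Splunk Add-on for Unix and Linux typically uses; adjust both to your own conventions:

# inputs.conf on the SUSE host's universal forwarder (illustrative values)
[monitor:///var/log/messages]
index = suse_linux
sourcetype = syslog
disabled = false

[monitor:///var/log/secure]
index = suse_linux
sourcetype = linux_secure
disabled = false

[monitor:///var/log/audit/audit.log]
index = suse_linux
sourcetype = linux_audit
disabled = false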
SUSE is not that different from other Linux distros. I'd hazard a guess that Ubuntu is more non-standard due to its Debian heritage. But more to the point: with practically any Linux distro it's most important to:

1) Determine what data you need to collect (that in turn depends on your use cases; collecting anything "just in case" often just leads to uselessly pumping up your license).
2) Verify which sources can/should provide that data and whether they are correctly configured to do so (for example, logging from firewall rules requires explicit configuration; auditd might require creating rules to get logs from certain operations, and so on).
3) Check which logging backend you are using for which data (journald/syslog/files...) and possibly reconfigure it to provide a way of differentiating the different kinds of data from each other (for example, reconfigure your local syslog daemon to write to separate files; a sketch follows below).

And that's pretty much it.
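As a hedged illustration of point 3 (not from the original post), an rsyslog rule set that splits authentication and kernel firewall messages into their own files might look like this; the file paths and facility choices are assumptions to adapt to your environment:

# /etc/rsyslog.d/10-split-security.conf (illustrative)
# Write authentication messages to a dedicated file that Splunk can monitor separately
auth,authpriv.*    /var/log/secure

# Keep kernel firewall messages apart from general system noise
kern.warning       /var/log/firewall-kern.log

# Everything else continues to the default system log
*.*;auth,authpriv.none    /var/log/messages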
Most probably your data was rolled out due to either retention period or index/volume size limits.
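For context (an illustrative sketch, not taken from the thread), both of those limits are controlled per index in indexes.conf; the index name and values below are examples only:

# indexes.conf (example values)
[your_index]
frozenTimePeriodInSecs = 7776000   # ~90 days retention before data rolls to frozen (deleted by default)
maxTotalDataSizeMB = 500000        # data also rolls to frozen once the index exceeds ~500 GB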
1. Please provide a more meaningful subject than "ddl" for your post next time. It greatly improves the readability of the whole forum and helps the visibility of your issue.
2. As far as I remember, there are no built-in mechanisms to control the visibility of a single input, either in Simple XML or in DS (by the way, you didn't specify what type of dashboard you're building). With Simple XML you can try to use custom JS to manipulate CSS to make components (in)visible.
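A rough sketch of that JS approach for a Simple XML dashboard, assuming a hypothetical input with id "input_site" and a controlling token named show_site (replace both with whatever your dashboard actually uses):

// toggle_input.js - place in your app's appserver/static folder and reference it
// via <form script="toggle_input.js"> in the Simple XML dashboard
require([
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function ($, mvc) {
    // Get the default token model for the dashboard
    var tokens = mvc.Components.get('default');

    // Show or hide the input's container whenever the controlling token changes
    tokens.on('change:show_site', function (model, value) {
        // "input_site" is the id attribute given to the <input> element in the XML
        $('#input_site').toggle(value === 'yes');
    });
});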
We have a stand-alone Splunk instance in a closed area. We had to roll back the server to a snapshot, and now the clients only phone home when we restart the Splunk server. I've looked at the splunkd log and the phonehome log, and checked outputs.conf. I've run telnet to the server on ports 8089 and 9997 from the clients, and the ports are open and listening. Any help would be appreciated. We are on version 9.3.1.
Yup. You're right. I keep forgetting about that option. For me it's clearer to do those two operations separately. I wonder though whether there is a performance difference.
Stats should be way faster and more efficient, but it won't give you the other fields. So whether it's stats or dedup depends on the desired results.
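To make the trade-off concrete (an illustrative sketch with assumed index and field names, not from the original reply):

index=yourindex | dedup host
(one event per host, keeping all the other fields of the surviving event)

index=yourindex | stats count BY host
(usually faster and lighter, but returns only host and whatever you explicitly aggregate)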
No need to use HTML tags. You could add those into the title etc. But when you have a lot of those, and you set and unset them based on buttons, clicks, etc., then this approach doesn't work anymore.
The answers already given are spot on.

When I am trying to troubleshoot my correlation searches, the first thing I do is grab the query that is being used for the correlation search and validate that it actually returns results. Do a copy and paste from the search query in the correlation search to an SPL window to validate that you didn't actually mistype anything.

If you get results from the query, then you want to validate that the adaptive response action is set to create a notable (in ES versions before 8). In ES 8 you will want to make sure that the event finding option is selected; the other type of finding goes into a risk score and will not actually create a finding for you in the analyst queue.

If none of that works, I tend to copy the correlation search query off to another safe location and replace the query with something that will fire for sure:

index=_internal | head 1 | table index, sourcetype

Then see if that search will fire off an alert. If it doesn't, you know that you have a configuration setting messed up in the correlation search.

Hope this helps
The other thing you can do without installing an app is to go into the XML and create an HTML tag:

<row>
  <html>
    Value of token1 = $token1$
    Value of token2 = $token2$
  </html>
</row>

I may have shorthanded the HTML tags, but basically every time the token changes, the token value will be displayed in that HTML tag. Really easy way to keep track of the token value. If you need a default token value, look into using the <set> element for tokens. But @ITWhisperer was spot on when he said that when a token is not set it is neither empty nor null.
It sounds like the user should be making choices from the first two dropdowns, and that supplies the values that will be offered in the third dropdown.

What you want to do is tokenize the first two dropdowns so that the answer from each is (use better token names):

$optionA$ for the first dropdown
$optionB$ for the second dropdown

Then in the third dropdown, fill the list from a query and use the tokens in the query, so something like this:

index=yourindex sourcetype=yoursourcetype valuex=$optionA$ valuey=$optionB$

Or if it is a lookup file:

| inputlookup yourlookupfile | search valuex=$optionA$ valuey=$optionB$

Hope that helps. (A fuller Simple XML sketch of the pattern follows below.)
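A minimal Simple XML sketch of that cascading-dropdown pattern; the index, sourcetype, field names (valuex/valuey/valuez), and token names here are assumed placeholders you would replace with your own:

<fieldset submitButton="false">
  <input type="dropdown" token="optionA">
    <label>First choice</label>
    <search>
      <query>index=yourindex sourcetype=yoursourcetype | stats count BY valuex | fields valuex</query>
    </search>
    <fieldForLabel>valuex</fieldForLabel>
    <fieldForValue>valuex</fieldForValue>
  </input>
  <input type="dropdown" token="optionB">
    <label>Second choice</label>
    <search>
      <query>index=yourindex sourcetype=yoursourcetype valuex=$optionA$ | stats count BY valuey | fields valuey</query>
    </search>
    <fieldForLabel>valuey</fieldForLabel>
    <fieldForValue>valuey</fieldForValue>
  </input>
  <input type="dropdown" token="optionC">
    <label>Third choice (driven by the first two)</label>
    <search>
      <query>index=yourindex sourcetype=yoursourcetype valuex=$optionA$ valuey=$optionB$ | stats count BY valuez | fields valuez</query>
    </search>
    <fieldForLabel>valuez</fieldForLabel>
    <fieldForValue>valuez</fieldForValue>
  </input>
</fieldset>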
You should also check whether the CM sees those peers as members of the indexer cluster. Then also check for errors, and maybe warnings, that tell you what has happened.
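As a hedged pointer (not from the original reply), two common ways to check that from the Cluster Manager; output details vary by version:

# From the CM's command line, list peers and their status
$SPLUNK_HOME/bin/splunk show cluster-status

# Or as a search run on the CM
| rest /services/cluster/master/peers | table label status site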
Hello @Jasmine, Is this resolved?
index=aws, but I ended up logging onto both servers and moving the whole index from the "old" Splunk over to the "new" Splunk.
When you are playing with tokens in SXML, you should install this app: https://classic.splunkbase.splunk.com/app/1603/

Then add this into your forms:

<form version="1.1" theme="light" script="simple_xml_examples:showtokens.js">

After this it shows all the tokens you have and what their values are (screenshots in the original post: one when I add an IP but haven't pressed submit, and one after submit is pressed). https://data-findings.com/wp-content/uploads/2024/09/HSUG-20240903-Tiia-Ojares.pdf