All Posts

Wait a second. You can't use props/transforms after the events have been indexed. You can only apply them at index time, while the data is being parsed during initial ingestion.
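For reference, index-time filtering is done with a props/transforms pair that routes unwanted events to the nullQueue. A minimal sketch; the sourcetype and regex are placeholders, not anything prescribed in this thread:

# props.conf on the indexer or heavy forwarder
[WinEventLog:Security]
TRANSFORMS-drop_noise = drop_noise

# transforms.conf
[drop_noise]
# placeholder pattern; match whatever should be discarded
REGEX = EventCode=4662
DEST_KEY = queue
FORMAT = nullQueue

Events matching REGEX never reach the index; everything else is indexed as usual.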
Time to add a new entry at ideas.splunk.com and request this feature! Of course, you should first check whether a similar idea already exists. Then post a link to the idea here, so we can vote for it too!
Not quite.
1. Ingesting /var/log/messages indiscriminately will result in a high level of noise. Also, a typical syslog-enabled system will emit events in multiple formats, which you won't easily parse.
2. There are many known issues with TA_nix, so recommending it as the primary source of data is a bit hasty.
@dinesh001kumar Natively, Splunk Cloud Dashboard Studio dashboards do not support playing audio alerts. As a workaround, you can consider alert actions such as emails, webhooks, or integrations with external systems/scripts to play audio.
Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
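To illustrate the webhook route, a minimal savedsearches.conf sketch, assuming a hypothetical listener at ops-box.example.internal that plays a sound when it receives the POST; the search, schedule, and URL are all placeholders:

[Service Down Audio Alert]
search = index=service_health status=down
enableSched = 1
cron_schedule = */5 * * * *
# fire the built-in webhook alert action at an external endpoint
action.webhook = 1
action.webhook.param.url = https://ops-box.example.internal/play-alert

The audio itself is produced by whatever service receives the webhook, not by Splunk or the dashboard.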
Hi @bmer,
OK, you want only the organizationIds that are common to both searches (same index and user*). In this case you have to use my search with an additional clause:

index="abc" "usr*" ("`DLQuery`DLQuery`POST`" OR ("DLQuery" "DLSqlQueryV2"))
| eval type=if(searchmatch("`DLQuery`DLQuery`POST`"), "v1", "v2")
| stats dc(type) AS type_count count BY organizationId
| where type_count>1

This way you keep only the organizationIds whose events match both searches.
Ciao.
Giuseppe
@gcusello I want to count the occurrences of events coming in via search 1 and search 2 separately.
"I suppose that between quotes you have strings to search and then you want to count the occurrences for each organizationId": the first part of your statement is correct, BUT I do not want to aggregate by organizationId; it is common to BOTH searches. The "`DLQuery`DLQuery`POST`" string is part of all events related to v1, whereas "DLQuery" "DLSqlQueryV2" matches all events related to v2. So at the end of the day I want to know the daywise v1 and v2 counts.
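Building on Giuseppe's searchmatch approach, a daywise split would look something like this; a minimal sketch reusing the index and search strings from the thread, not a tested answer:

index="abc" "usr*" ("`DLQuery`DLQuery`POST`" OR ("DLQuery" "DLSqlQueryV2"))
| eval type=if(searchmatch("`DLQuery`DLQuery`POST`"), "v1", "v2")
| timechart span=1d count BY type

The eval tags each event as v1 or v2, and timechart then produces one count column per type per day.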
Hi @dinesh001kumar,
I'm not sure whether Splunk Cloud permits uploading an audio file (I've never tried it!), but you can ask them. Anyway, if the anomaly you're speaking of is an alert, you could connect a scripted action to the alert that launches an audio player; otherwise I don't see any other choice.
Ciao.
Giuseppe
I have a live service monitoring dashboard created in Splunk Cloud using Dashboard Studio (JSON). Is there any possibility of playing an audio sound if there is an abnormality in any of the services on the Studio dashboard? If it's possible, can anyone help with how to achieve this output?
Hi @bmer,
Just some additional information: what's your purpose, to find the count of occurrences or to list all the events? And what are the backticks between quotes?
I suppose that between quotes you have strings to search, and that you want to count the occurrences for each organizationId. In this case, there are many ways to reach your goal, but the most efficient is stats:

index="abc" "usr*" ("`DLQuery`DLQuery`POST`" OR ("DLQuery" "DLSqlQueryV2"))
| stats count BY organizationId

Ciao.
Giuseppe
Hello,
I have 2 separate Splunk searches, as below. One is the "v1 endpoint" and the other is the "v2 endpoint".

v1 endpoint: index="abc" "usr*" organizationId=xxxx "`DLQuery`DLQuery`POST`"
v2 endpoint: index="abc" "usr*" organizationId=xxxx "DLQuery" "DLSqlQueryV2"

I want to create 1 single search which will give me the v1 and v2 counts over a span using the timechart function. How do we combine them to achieve this output?
Thanks, bmer
Support confirmed there is no way to exclude old Windows event logs from being imported:
"The Splunk Universal Forwarder's Windows Event Log input doesn't offer a built-in way to filter events based on age during initial data collection. This means you can't directly configure the forwarder to only send events newer than 7 days when it first starts monitoring. You'll need to use other methods, like filtering at the indexer level or leveraging props/transforms.conf after the data is indexed, to remove older events. Or else take a backup of the Event Viewer logs which are older than 7 days on the source machine and remove them before onboarding to Splunk."
Kind regards,
Andre
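If skipping the entire stored backlog is acceptable (an all-or-nothing switch, not the age-based 7-day cutoff asked about here), the Windows Event Log input does have a current_only setting. A minimal inputs.conf sketch on the forwarder; the channel is just an example:

[WinEventLog://Security]
# 1 = collect only events that arrive after the input starts;
# everything already stored in the channel is skipped
current_only = 1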
@Sweets000 Did you remove the ES apps (/etc/apps/) from the CM (the deployer in your case) after your deployment to the shcluster apps folder? It looks like your CM is running all the ES app searches (from /etc/apps/), which is causing the skipped searches on the CM.

Remove ES from the CM's Splunk instance:
1. Stop Splunk on the CM.
2. Remove the ES app directory from $SPLUNK_HOME/etc/apps/ on the CM.
3. Start Splunk on the CM.

Verify ES is only on SHC members: ensure the ES app and its configurations are present only on the SHC members, deployed via the deployer ($SPLUNK_HOME/etc/shcluster/apps/).

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
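A minimal command sketch of those steps, assuming the main ES app directory is named SplunkEnterpriseSecuritySuite (ES also installs DA-*/SA-* supporting apps, so check your /etc/apps/ listing before removing anything):

# on the CM: stop Splunk, move the ES app out of etc/apps, restart
$SPLUNK_HOME/bin/splunk stop
mv $SPLUNK_HOME/etc/apps/SplunkEnterpriseSecuritySuite /tmp/es_backup/
$SPLUNK_HOME/bin/splunk start

# on the deployer: push the bundle so ES lives only on the SHC members
$SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://<shc-member>:8089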
It's actually easy enough with a bit of inline CSS - see this simple XML dashboard example, which hides the colour dropdown and sets the colour to black. If you select A, C or D then the colour dropdown is re-shown.

<form version="1.1" theme="light">
  <label>ddl</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="component" searchWhenChanged="true">
      <label>Component</label>
      <choice value="A">A</choice>
      <choice value="B">B</choice>
      <choice value="C">C</choice>
      <choice value="D">D</choice>
      <change>
        <eval token="display_colour">if($component$="B", "display:none", "")</eval>
        <eval token="colour_name">if($component$="B", "No Colour", $colour_name$)</eval>
        <eval token="form.colour">if($component$="B", "#000000", $form.colour$)</eval>
        <eval token="colour">if($component$="B", "#000000", $colour$)</eval>
      </change>
    </input>
    <input type="dropdown" token="severity" searchWhenChanged="true">
      <label>Severity</label>
      <choice value="Info">Info</choice>
      <choice value="Warning">Warning</choice>
    </input>
    <input id="colour_dropdown" type="dropdown" token="colour" searchWhenChanged="true">
      <label>Colour Dropdown</label>
      <choice value="#000000">Black</choice>
      <choice value="#ff0000">Red</choice>
      <choice value="#00ff00">Green</choice>
      <choice value="#0000ff">Blue</choice>
      <change>
        <set token="colour_name">$label$</set>
      </change>
    </input>
  </fieldset>
  <row depends="$AlwaysHideCSS$">
    <panel>
      <html>
        <style>
          #colour_dropdown {
            $display_colour$
          }
        </style>
      </html>
    </panel>
  </row>
  <row>
    <panel>
      <html>
        <h1 style="color:$colour$">Component $component$, Severity Level $severity$, Colour $colour_name$</h1>
      </html>
    </panel>
  </row>
</form>
As @ITWhisperer points out, the token has no value, so your search won't run at all to get to that point. If you want the token to always exist so your search will run, you should set a default for it, even if it's just the empty string, i.e.

<default></default>

but then your SPL would be

| eval user=if(len($user_token|s$)=0, user, $user_token|s$)

i.e. it checks for a length of 0 in the input. Note the use of $user_token|s$, i.e. with a |s before the final $ sign, which effectively quotes the token.
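Putting the two pieces together, a minimal sketch; the input and the surrounding search are illustrative (index and sourcetype are placeholders), with only the default and the |s filter taken from this thread:

<input type="text" token="user_token" searchWhenChanged="true">
  <label>User</label>
  <default></default>
</input>
<!-- ... -->
<search>
  <query>
    index=main sourcetype=web_access
    | eval user=if(len($user_token|s$)=0, user, $user_token|s$)
    | stats count BY user
  </query>
</search>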
Hello,
We deployed a new Splunk cluster containing a Cluster Manager, 3x SHC members, and 6x indexers. The cluster has hundreds of vCPUs across the SHC and indexers, but after installing Enterprise Security 7.x we are seeing hundreds of skipped searches, specifically:

The maximum number of concurrent historical scheduled searches on an instance or cluster reached
The maximum number of concurrent auto-summarization searches reached

Logs indicate the searches seem to be getting skipped on the CM (which only has 12 CPU cores). We followed the documentation to install ES on a distributed cluster: Install Splunk Enterprise Security in a search head cluster environment | Splunk Docs. (We used the CM, which is our deployer, to push ES to the SHC via the shcluster apps folder.)
Note: some summarization searches are running on the SHC members, but the majority seem to be running on the CM. Would appreciate any ideas as this has me stumped!
The following YouTube video shows this tactic: Master Splunk Dashboards: Expert Guide to Troubleshooting Tokens! It is at about the 7 minute mark of the video.

<row>
  <panel>
    <html>
      your code
    </html>
  </panel>
</row>
No app necessary, and it doesn't need JavaScript. Works on a post. This is the way to go. Using an app is not a bad method, but sometimes you have to go through a change control board, or you are on Splunk Cloud. Using an HTML tag will work as long as you have edit rights to the dashboard, which you should if you are coding the dashboard.
It sounds like your clients (forwarders) are not consistently communicating with the Splunk instance after the snapshot rollback. Since telnet confirms ports 8089 and 9997 are open, and the issue only resolves temporarily after restarting the Splunk server, here are a few steps to troubleshoot (a sketch of the client-side files from steps 1 and 5 follows this list):

1. Check forwarder configuration: Verify that deploymentclient.conf on the clients points to the correct Splunk server hostname/IP and port (8089 for management). Ensure phoneHomeIntervalInSecs is set appropriately (the default is 60 seconds).
2. Validate server rollback impact: The rollback may have caused a mismatch in SSL certificates or server identity. Check whether the server's server.conf or certificates (in $SPLUNK_HOME/etc/auth/) were altered. Regenerate or redeploy certificates if needed.
3. Inspect splunkd logs on clients: Look at $SPLUNK_HOME/var/log/splunk/splunkd.log on the clients for errors related to connection failures or authentication issues when phoning home.
4. Network stability: Ensure there are no intermittent network issues or firewalls blocking consistent communication. Test with tcpdump or netstat on the server to confirm client connections.
5. Indexer acknowledgment: If using indexer acknowledgment, verify that outputs.conf on the clients has useACK=true and check for any backlog in the indexing queue on the server.
6. Splunk version compatibility: Confirm the forwarders are on a version compatible with 9.3.1. If not, upgrade them to match.

Try restarting the forwarders after checking the above. If the issue persists, share any relevant errors from the client or server logs for further assistance.
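A minimal sketch of the two client-side files referenced in steps 1 and 5; the hostname and output group name are placeholders:

# deploymentclient.conf on the forwarder
[deployment-client]
phoneHomeIntervalInSecs = 60

[target-broker:deploymentServer]
targetUri = splunk-server.example.com:8089

# outputs.conf on the forwarder
[tcpout:primary_indexers]
server = splunk-server.example.com:9997
useACK = true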
Do this ^^^^^^^
Hi,

Onboarding SUSE Linux (SLES/openSUSE) logs into Splunk Enterprise Security (ES) for security-focused use cases is a great initiative, and I'd be happy to share insights on the key log files, differences from other Linux distributions, configuration steps, and best practices. Below, I'll address each of your questions in detail, drawing from general Splunk practices and specific considerations for SUSE Linux, with some references to community insights where applicable.

1. Most Relevant Log Files for Security-Focused Use Cases in Splunk ES

For security-focused use cases in Splunk ES, such as authentication monitoring, audit tracking, change management, and endpoint monitoring, the following SUSE Linux log files are critical. These logs align with the Splunk ES data models (e.g., Authentication, Change, Endpoint) and support use cases like detecting unauthorized access, privilege escalation, or system changes.

- /var/log/messages (or /var/log/syslog in some configurations):
  - Purpose: General system log capturing a wide range of system events, including security-related messages like sudo commands, system service activities, and kernel messages.
  - Use Cases: Useful for monitoring system-wide events, detecting anomalies (e.g., unexpected service failures), and correlating with other logs for incident investigation.
  - Splunk ES Mapping: Feeds into the Change and Endpoint data models for tracking system activities.

- /var/log/secure (or /var/log/auth.log in some SUSE configurations):
  - Purpose: Captures authentication-related events, such as successful/failed logins, SSH access, su/sudo usage, and PAM (Pluggable Authentication Module) events.
  - Use Cases: Essential for the Authentication data model in Splunk ES to detect brute-force attacks, unauthorized login attempts, or privilege escalation.
  - Note: On SUSE, the log file is typically /var/log/secure, but verify whether your system uses /var/log/auth.log (more common in Debian-based systems like Ubuntu).

- /var/log/audit/audit.log:
  - Purpose: Generated by the auditd daemon, this log records detailed system auditing events, including file access, user management (e.g., changes to /etc/passwd), system calls, and security policy violations.
  - Use Cases: Critical for the Change and Endpoint data models, enabling tracking of file modifications, user account changes, and system call monitoring for compliance (e.g., PCI DSS, CIS benchmarks).
  - Note: auditd must be properly configured to log meaningful events without overwhelming Splunk with noise (more on tuning below).

- /var/log/firewalld (or firewall-related logs):
  - Purpose: Logs firewall activities, such as blocked connections, allowed traffic, or rule changes, typically managed by firewalld or SuSEfirewall2 (legacy in older SLES versions).
  - Use Cases: Supports the Network Traffic and Intrusion Detection data models in Splunk ES for monitoring network security events, such as blocked malicious IPs or unauthorized access attempts.
  - Note: Ensure firewalld is enabled and logging is configured (e.g., via the LogDenied setting).

- /var/log/apparmor/ (e.g., /var/log/apparmor/audit.log):
  - Purpose: Logs AppArmor events, including profile violations or permitted actions, which are critical for mandatory access control (MAC) monitoring.
  - Use Cases: Useful for detecting attempts to access restricted files or execute unauthorized processes, feeding into the Endpoint data model.
  - Note: AppArmor is commonly used in SUSE for security hardening, so enabling its logging is valuable.

- Application-specific logs (e.g., Apache, databases, etc.):
  - Purpose: Logs from applications like web servers (/var/log/httpd/ or /var/log/apache2/), databases, or other services running on SUSE systems.
  - Use Cases: Monitor for application-level security events, such as web attacks or unauthorized API access, feeding into the Web or Endpoint data models.
  - Note: Identify critical applications on your SUSE systems and include their logs based on your security use cases.

- /var/log/zypper.log:
  - Purpose: Logs package management activities (installations, updates, removals) via the zypper package manager, unique to SUSE.
  - Use Cases: Supports the Change data model for tracking software changes that could indicate unauthorized updates or vulnerabilities.
  - Note: Monitor this for compliance and to detect unexpected package modifications.

Recommendation: Start with /var/log/messages, /var/log/secure, and /var/log/audit/audit.log as the core logs for Splunk ES, as they cover most security use cases. Expand to firewall, AppArmor, and application logs based on your environment's needs. The Splunk Add-on for Unix and Linux (Splunk TA) is a great starting point to configure these inputs, but customize it to avoid collecting unnecessary data.
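A minimal inputs.conf sketch for the three core logs, along the lines of what the Splunk Add-on for Unix and Linux configures; the index name is a placeholder and the sourcetypes follow the add-on's conventions, so verify against your installed version:

# deployed to the SUSE hosts running a universal forwarder
[monitor:///var/log/messages]
sourcetype = syslog
index = os_linux
disabled = 0

[monitor:///var/log/secure]
sourcetype = linux_secure
index = os_linux
disabled = 0

[monitor:///var/log/audit/audit.log]
sourcetype = linux_audit
index = os_linux
disabled = 0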