All Posts

Hi there,

UFs Don't Initiate SSL: UFs don't initiate SSL connections for management traffic, so they don't directly handle hostname validation.
DS Handles It: The Deployment Server takes care of SSL and hostname validation when communicating with UFs.
Good for Server-to-Server: Your server-to-server SSL and hostname validation setup is solid for securing those connections.

Additional Tips:
Secure UF Data: If you're concerned about securing data sent from UFs to indexers, configure SSL and hostname validation in outputs.conf on the UFs.
Consult Docs: Always refer to Splunk documentation for the most up-to-date guidance on specific configuration options: <invalid link removed>

~ If the reply helps, a Karma upvote would be appreciated
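For the "Secure UF Data" tip above, a minimal outputs.conf sketch on the UF side. The stanza name, indexer hostnames, and certificate paths are placeholders, and setting names vary slightly between Splunk versions (older releases use sslCertPath instead of clientCert), so check the outputs.conf spec for your version:

# outputs.conf on the UF: TLS to the indexers with server certificate and hostname checks
[tcpout:secure_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
clientCert = $SPLUNK_HOME/etc/auth/client.pem
sslPassword = <certificate key password>
sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem
sslVerifyServerCert = true
# hostname validation: the indexer cert's CN/SAN must match one of these
sslCommonNameToCheck = idx1.example.com, idx2.example.com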
Hi there,

It seems the issue lies in token handling within ITSI scripts, causing values to get replaced by entity keys/types instead of actual values. Here's what you can try:

Double-check token names: Ensure token names in both dashboards match exactly. Typo? Case sensitivity? Fix them!
Inspect ITSI scripts: If comfortable, take a peek at the ITSI scripts involved. Look for token handling logic and potential overrides.
Consider alternative drilldown: Explore using "open in new tab" or custom links instead of the built-in drilldown, bypassing ITSI scripts.
Seek ITSI community help: The ITSI community forum is a great resource for specific configuration advice and workarounds.

Remember, sometimes it's not about reinventing the wheel, but finding the right community to help navigate its quirks. Good luck!

~ If the reply helps, a Karma upvote would be appreciated
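If you go the custom-link route suggested above, a minimal Simple XML sketch of a table drilldown that opens a target dashboard in a new tab; the dashboard path, token name, index, and field name are hypothetical:

<table>
  <search>
    <query>index=itsi_summary | stats count by entity_title</query>
  </search>
  <drilldown>
    <!-- pass the clicked value as a URL-encoded form token and open in a new tab -->
    <link target="_blank">/app/itsi/my_target_dashboard?form.tok_entity=$row.entity_title|u$</link>
  </drilldown>
</table>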
Hi there!

Seems like your test logs are working, but real-world ones aren't showing up. Here's what might be happening:

Filter Frenzy: Double-check your Splunk filters. You might have one accidentally hiding those juicy UPS logs.
Severity Sleight of Hand: Splunk might not be ingesting lower severity logs by default. Try adjusting your search filters or source type settings to include them.
Port Mismatch: Make sure your Splunk server is listening on port 514 for UDP traffic. A quick netstat check can confirm this.

If none of these work, give your Splunk logs a good scan for error messages related to UPS data. They might offer more specific clues.

~ If the reply helps, a Karma upvote would be appreciated
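For the "Port Mismatch" check above, a quick sketch of both the netstat test (Linux syntax) and a UDP listener stanza; the sourcetype and index names are placeholders:

# confirm something is bound to UDP 514 on the Splunk server
netstat -ulnp | grep 514

# inputs.conf sketch for the UDP syslog listener (placeholder sourcetype/index)
[udp://514]
sourcetype = ups_syslog
connection_host = ip
index = main
disabled = 0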
Hey there,

Adding custom AWS metrics to Splunk with the pull mechanism can be tricky! Editing the default namespaces isn't quite the way to go. Here's the key:

New stanza in inputs.conf: Create a new section for your custom namespace with namespace set to its exact name (e.g., MyCompany/CustomMetrics).
Specify metrics (optional): Add metric_names if you want specific metrics, otherwise use . for all.
Set sourcetype and other params: Ensure sourcetype is aws:cloudwatch and adjust index and period as needed.

Remember to restart the Splunk Forwarder for the changes to take effect. If you're still facing issues, double-check your namespace name and Splunk logs for errors. And feel free to ask if you need more help!

~ If the reply helps, a Karma upvote would be appreciated
Hi there,

The key is finding those Workspace login logs. While the add-on and apps might be installed, there could be a filtering or indexing issue. Here's a quick rundown:

Check the filter: Did you configure any filters that might exclude login events? Double-check your inputs.conf settings specifically.
Look for indexing errors: Splunk logs might reveal indexing errors related to Workspace data. Check splunkd.log and python.log for clues.
Search smarter: The provided search might not translate perfectly to Workspace. Try broader terms like "google login" or "workspace access" and adjust from there.

If you're still stuck, I recommend searching the Splunkbase forums or reaching out to Splunk or Google Workspace support directly. They've seen it all and can offer specific guidance.

Remember, hunting invaders is like being a detective – persistence and resourcefulness are key!

~ If the reply helps, a Karma upvote would be appreciated
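As a starting point for the "Search smarter" step, a broad discovery search can show whether any Workspace data is landing at all; the wildcard patterns below are only a sketch and should be narrowed to whatever sourcetypes your add-on actually writes:

index=* (sourcetype=*google* OR sourcetype=*gsuite* OR sourcetype=*gws*) earliest=-24h
| stats count by index sourcetype source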
Hey there,

Looks like the CrowdStrike TA is throwing an "Err 500" fit! Don't worry, I've got some ideas to fix it.

SSL Mismatch: Seems your inputs.conf and server.conf have different SSL settings. Make sure they both use the same "sslVersions" like tls1.2 and have valid certificate paths. Double-check those serverCert paths and sslCommonNameToCheck values too.
Security Check: If you're feeling brave, you can temporarily disable certificate verification (sslVerifyServerCert = false in server.conf), but only in a safe space! Remember, security first!
Other suspects: Make sure Splunk can read those certificate files. Check certificate validity and hostname with tools like openssl s_client. Consider updating the CrowdStrike TA, newer versions might be smoother.

Pro tip: Back up your configs before tinkering, and test changes in a separate environment.

If these tips don't do the trick, hit up Splunk or CrowdStrike support. They're the pros!

~ If the reply helps, a Karma upvote would be appreciated
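For the openssl s_client check under "Other suspects", something along these lines; the hostname, port, and CA file path are placeholders for your own management endpoint and certificate chain:

# inspect the certificate presented on the Splunk management port
openssl s_client -connect splunk-hf.example.com:8089 \
  -servername splunk-hf.example.com \
  -CAfile /opt/splunk/etc/auth/cacert.pem
# check the "Verify return code" line and the subject CN/SAN in the output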
Hey there,

Adding custom metrics from AWS CloudWatch to Splunk using the Splunk Add-on for AWS pull mechanism can be tricky, but I'm here to help!

Here's the key: While editing the default aws namespaces might seem intuitive, it's not the recommended approach for custom metrics. Instead, follow these steps:

Identify your custom namespace: Make sure you know the exact namespace (e.g., "MyCompany/CustomMetrics") created in AWS CloudWatch.
Configure inputs.conf: Add a new stanza for your custom namespace under the [aws:cloudwatch] section. Use namespace to specify your custom namespace (e.g., namespace = MyCompany/CustomMetrics). Optionally, define specific metric_names or use . for all metrics. Set sourcetype to aws:cloudwatch. Adjust other parameters like index and period as needed.
Restart Splunk Forwarder: For the changes to take effect, restart the Splunk Forwarder running inputs.conf.

Example inputs.conf stanza:

[aws:cloudwatch://custom_metrics]
# Replace with your actual namespace
namespace = MyCompany/CustomMetrics
# Optional: Filter specific metrics
# metric_names = metric1, metric2
sourcetype = aws:cloudwatch
index = main
period = 60

Additional Tips:
Double-check your namespace name for accuracy.
Use Splunk Web's "Inputs" section to verify if your new input is active.
If you still face issues, check Splunk logs for errors related to your custom namespace input.

Remember: This approach specifically caters to custom metrics. If you're dealing with custom events, the process might differ slightly.

Feel free to share more details if you need further assistance!

~ If the reply helps, a Karma upvote would be appreciated
Hi there,

1. Implement a 5-Minute Bin Time:

Add the bucket command (before the stats, so that _time still exists when it is binned):

search (`wineventlog_security` EventCode=1100)
| bucket _time span=5m
| stats count min(_time) as firstTime max(_time) as lastTime by dest Message EventCode
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
| `suspicious_event_log_service_behavior_filter`
| ... (rest of your query)

Filter out events with gaps within 5 minutes:

...
| stats count as event_count by _time dest Message EventCode
| eventstats min(_time) as first_time by dest
| eval is_first_event = if(_time == first_time, 1, 0)
| eval is_noisy_event = if(event_count > 1 AND is_first_event == 0, 1, 0)
| where NOT is_noisy_event

2. Filter by dvc_priority:

Add a filter condition:

...
| where dvc_priority = "high" OR dvc_priority = "critical"
| ... (rest of your query)

Additional Tips:
Tailor the bin time: Adjust the span value in bucket _time span=5m to match your desired timeframe.
Prioritize based on risk: If dvc_priority accurately reflects risk, filtering by it can be effective.
Test thoroughly: Implement changes in a non-production environment first to ensure they work as intended.
Combine strategies: For optimal results, consider using both bin time and dvc_priority filtering together.

Remember:
Replace any placeholders like ... (rest of your query) with the actual remaining parts of your query.
Adapt field names and values to match your specific Splunk configuration.

I'm here to assist further if you have any more questions or need additional guidance!

~ If this helps, a Karma upvote would be much appreciated.
Hey there,

I understand you're facing errors with the Dynatrace API Version 2 input in Splunk, even though the API works fine in Postman. Don't worry, I've got some troubleshooting steps to help you out:

1. Proxy Check: Double-check your proxy settings in Splunk's proxy.conf. Are they accurate and do they match your environment? Can you connect to the Dynatrace API endpoint through the proxy using tools like curl or wget from the Splunk server?

2. Log Detective: Look for clues in Splunk's splunkd.log and python.log. Are there any error messages related to proxy, connection, or certificates? Enable debug logging for the Dynatrace add-on to get more details about connection attempts.

3. Certificate Confusion (if applicable): If using SSL/TLS, are the certificates trusted and installed correctly? As a temporary test, try disabling certificate verification in Splunk (not recommended for production!).

4. Network Ninja: Check firewalls on both Splunk and the proxy server for blocked connections to the Dynatrace API. Can you connect directly to the API endpoint from the Splunk server, bypassing the proxy?

5. Token Tweak: Try regenerating your Dynatrace API token and updating it in Splunk. Does it have the necessary permissions?

6. Version Vault: Consider upgrading or downgrading the Dynatrace add-on or Splunk itself for better compatibility. Make sure the add-on version works with your Splunk version.

Stuck? Don't hesitate to reach out to Splunk or Dynatrace support for further assistance. Provide them with details about your environment, configuration, and error messages.

Remember, I'm here to help you troubleshoot like a pro!

~ If the reply helps, a Karma upvote would be appreciated
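For the curl test in step 1, a sketch like this helps separate proxy problems from token problems; the proxy address, environment URL, and token are placeholders:

# call the Dynatrace v2 API through the same proxy Splunk would use
curl -v --proxy http://proxy.example.com:8080 \
  -H "Authorization: Api-Token <your-api-token>" \
  "https://<your-environment>.live.dynatrace.com/api/v2/metrics?pageSize=1"
# a success here combined with failures in Splunk usually points at the add-on's proxy or certificate settings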
Hi there,

The remote Splunk Forwarder might not be reachable due to:

Connectivity: Ping the remote machine and check WMI service status.
WMI configuration: Verify inputs.conf settings (server, namespace, credentials, source path).
Firewall: Ensure the firewall allows connections on Splunk dynamic ports (9997, 8089) and the WMI port (135).
Authentication: Double-check Splunk credentials have WMI access on the remote machine.
Logs: Review Splunk logs on both machines for errors or warnings.

If these don't help, consider:

Testing the WMI connection manually using wbemtest.exe.
Enabling debug logging in inputs.conf for more detailed logs.
Using file inputs instead of WMI if necessary.

Please provide more details (Splunk version, error messages) if you need further assistance.

~ If the reply helps, a Karma upvote would be appreciated
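As a side note on the "WMI configuration" point, remote WMI collection is normally defined in wmi.conf on the Splunk instance doing the collecting; a minimal sketch, where the remote host name and index are placeholders:

# wmi.conf on the collecting Splunk instance
[WMI:RemoteEventLogs]
server = REMOTE-HOST
interval = 10
event_log_file = Application, Security, System
index = wineventlog
disabled = 0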
Hi there,

Understanding the Workflow:

Universal Forwarder (UF):
Installed on Windows machines.
Collects logs from various sources on the machine (e.g., Windows event logs, applications, files).
Forwards the collected logs to the Heavy Forwarder.

Heavy Forwarder (HF):
Acts as a central collection point for logs from multiple UFs.
Can perform filtering, transformation, and load balancing before forwarding logs to indexers.
Often used for:
Reducing network traffic to indexers by filtering low-priority logs.
Offloading log processing from resource-constrained UFs.
Providing redundancy and failover for log forwarding.

Indexer:
Stores and indexes the forwarded logs, making them searchable and analyzable in Splunk.

Tips:
Consider using deployment servers to automate Splunk UF configuration on Windows machines.
Leverage distributed search and indexes for efficient searching across geographically dispersed data.
Regularly update Splunk software and configurations to maintain security and performance.

~ If the reply helps, a Karma upvote would be appreciated
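On the UF side, the forwarding step described above usually comes down to an outputs.conf stanza like this sketch; the HF hostnames are placeholders:

# outputs.conf on each Windows UF: send everything to the HFs
[tcpout]
defaultGroup = heavy_forwarders

[tcpout:heavy_forwarders]
# traffic is load-balanced across the listed HFs
server = hf1.example.com:9997, hf2.example.com:9997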
Thanks for your response. It solved the issue.
Hi @Nawab,

the architecture is very easy: at least two HFs will work as concentrators to receive logs from the UFs and forward them to the Indexers.

This is a best practice if you have to send logs to Splunk Cloud from an on-premise network, or if you have a segregated network and you don't want to open many connections between UFs and IDXs. Otherwise, I always prefer to send logs directly from UFs to IDXs.

The approach of passing through HFs can have another purpose: delegating the parsing jobs to machines other than the IDXs to reduce their load, but only if the IDXs are overloaded, and in this case you have to give more resources (CPUs) to the HFs.

About configuration: you have to configure the HFs instead of the IDXs as the destination in the UFs' outputs.conf; the HFs must be configured as receivers on port 9997 for the UFs and as forwarders (still on port 9997) to the IDXs. On the HFs you can configure a Forwarder license to avoid paying for a full license.

Only one attention point: don't use only one HF to concentrate logs, because in this way you have a Single Point of Failure.

Ciao.
Giuseppe
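For the HF-side setup Giuseppe describes, a minimal sketch of the two relevant files; the group name and indexer hostnames are placeholders:

# inputs.conf on each HF: receive from the UFs on 9997
[splunktcp://9997]
disabled = 0

# outputs.conf on each HF: forward on to the indexers
[tcpout]
defaultGroup = indexers

[tcpout:indexers]
server = idx1.example.com:9997, idx2.example.com:9997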
Hi, I have installed a Splunk Forwarder on a remote computer and chose WMI as the data input on the main server. But when I want to find a log, I get the message that the remote computer is not reachable. This is even though I have defined firewall rules for the Splunk dynamic ports. Would you please help me?
Hi,

I am trying to configure UFs installed on Windows machines to send logs to an HF, and then have the HF forward these logs to the indexer.

I found some questions, but mostly they were very high level.

If someone can explain how it will work, that would be great.
If you're using Splunk Cloud Classic Experience, the add-on needs to be installed on your ES SH as well.
Thank you so much, I've spent at least 10 hours on this  
Hi,

Thank you for the response. On the SH, we are not getting this error. We are getting these errors on ES, and the app is available there and is accessible globally. We are running a query specific to the Salesforce index only.
Hi @pelican,

As a quick and dirty solution, you can select "csv" in the "Source type:" drop-down on the Set Source Type page of the Add Data process. This will tell Splunk to read field names from the first line of the file and index subsequent lines using the header fields as indexed field extractions.

After the file is indexed, you can search for it in the default index using:

sourcetype=csv

If you specified a non-default index, add the index to the search:

index=homework sourcetype=csv
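If you would rather define your own CSV sourcetype instead of picking the built-in csv one, a props.conf sketch along these lines should behave similarly; the sourcetype name is hypothetical:

[my_csv_upload]
# parse the file at index time, using line 1 as the header row
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1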
Also note that sync_client.cpp isn't a TLS client. It will only work with a plaintext HTTP server. Connecting to a TLS endpoint should return:

Exception: read_until: Connection reset by peer