All Posts


Hey there, Looks like the CrowdStrike TA is throwing an "Err 500" fit! Don't worry, I've got some ideas to fix it.

SSL Mismatch: Seems your inputs.conf and server.conf have different SSL settings. Make sure they both use the same sslVersions (e.g., tls1.2) and have valid certificate paths. Double-check those serverCert paths and sslCommonNameToCheck values too.

Security Check: If you're feeling brave, you can temporarily disable certificate verification (sslVerifyServerCert = false in server.conf), but only in a safe space! Remember, security first!

Other suspects:
- Make sure Splunk can read those certificate files.
- Check certificate validity and hostname with tools like openssl s_client.
- Consider updating the CrowdStrike TA; newer versions might be smoother.

Pro tip: Back up your configs before tinkering, and test changes in a separate environment. If these tips don't do the trick, hit up Splunk or CrowdStrike support. They're the pros!

~ If the reply helps, a Karma upvote would be appreciated
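To make the "same SSL settings on both sides" idea concrete, here's a minimal sketch of aligned stanzas. The certificate path and common name are placeholders, not your actual values:

```
# server.conf -- sketch; path and CN below are placeholders
[sslConfig]
sslVersions = tls1.2
serverCert = $SPLUNK_HOME/etc/auth/mycerts/server.pem
sslVerifyServerCert = true

# inputs.conf -- the input's SSL stanza should agree with server.conf
[SSL]
sslVersions = tls1.2
serverCert = $SPLUNK_HOME/etc/auth/mycerts/server.pem
sslCommonNameToCheck = splunk.example.com
```

The point is simply that sslVersions and the certificate references match between the two files; mismatched protocol versions or a cert path only one side can read is a classic source of handshake-driven 500s.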
Hey there, Adding custom metrics from AWS CloudWatch to Splunk using the Splunk Add-on for AWS pull mechanism can be tricky, but I'm here to help! Here's the key: while editing the default AWS namespaces might seem intuitive, it's not the recommended approach for custom metrics. Instead, follow these steps:

1. Identify your custom namespace: Make sure you know the exact namespace (e.g., "MyCompany/CustomMetrics") created in AWS CloudWatch.
2. Configure inputs.conf: Add a new stanza for your custom namespace. Use namespace to specify your custom namespace (e.g., namespace = MyCompany/CustomMetrics). Optionally, define specific metric_names, or use the regex .* for all metrics. Set sourcetype to aws:cloudwatch. Adjust other parameters like index and period as needed.
3. Restart the Splunk forwarder: For the changes to take effect, restart the Splunk forwarder running inputs.conf.

Example inputs.conf stanza:

[aws:cloudwatch://custom_metrics]
# Replace with your actual namespace
namespace = MyCompany/CustomMetrics
# Optional: Filter specific metrics
# metric_names = metric1, metric2
sourcetype = aws:cloudwatch
index = main
period = 60

Additional Tips:
- Double-check your namespace name for accuracy.
- Use Splunk Web's "Inputs" section to verify that your new input is active.
- If you still face issues, check Splunk logs for errors related to your custom namespace input.

Remember: This approach specifically caters to custom metrics. If you're dealing with custom events, the process might differ slightly. Feel free to share more details if you need further assistance!

~ If the reply helps, a Karma upvote would be appreciated
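Once the input is enabled, a quick sanity check from the search bar can confirm data is arriving. This is a sketch: the index and the metric_name field assume the defaults on my systems, so adjust them to whatever your add-on configuration actually writes:

```
index=main sourcetype=aws:cloudwatch earliest=-15m
| stats count by metric_name
```

If this returns no rows after a couple of polling periods, go straight to the add-on's logs rather than tweaking the stanza blindly.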
Hi there,

1. Implement a 5-Minute Bin Time:

Add the bucket command before the stats, and include _time in the by clause so the 5-minute bins survive the aggregation:

search (wineventlog_security EventCode=1100)
| bucket _time span=5m
| stats count min(_time) as firstTime max(_time) as lastTime by _time dest Message EventCode
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
| `suspicious_event_log_service_behavior_filter`
| ... (rest of your query)

Filter out noisy repeats within each 5-minute bin:

...
| stats count as event_count by _time dest Message EventCode
| eventstats min(_time) as first_time
| eval is_first_event = if(_time == first_time, 1, 0)
| eval is_noisy_event = if(event_count > 1 AND is_first_event == 0, 1, 0)
| where is_noisy_event == 0

2. Filter by dvc_priority:

Add a filter condition:

...
| where dvc_priority IN ("high", "critical")
| ... (rest of your query)

Additional Tips:
- Tailor the bin time: Adjust the span value in bucket _time span=5m to match your desired timeframe.
- Prioritize based on risk: If dvc_priority accurately reflects risk, filtering by it can be effective.
- Test thoroughly: Implement changes in a non-production environment first to ensure they work as intended.
- Combine strategies: For optimal results, consider using both bin time and dvc_priority filtering together.

Remember: Replace any placeholders like ... (rest of your query) with the actual remaining parts of your query. Adapt field names and values to match your specific Splunk configuration.

I'm here to assist further if you have any more questions or need additional guidance!

~ If this helps, a Karma upvote would be much appreciated.
Hey there, I understand you're facing errors with the Dynatrace API Version 2 input in Splunk, even though the API works fine in Postman. Don't worry, I've got some troubleshooting steps to help you out:

1. Proxy Check: Double-check the proxy settings in the add-on's configuration. Are they accurate, and do they match your environment? Can you connect to the Dynatrace API endpoint through the proxy using tools like curl or wget from the Splunk server?

2. Log Detective: Look for clues in Splunk's splunkd.log and python.log. Are there any error messages related to the proxy, connection, or certificates? Enable debug logging for the Dynatrace add-on to get more details about connection attempts.

3. Certificate Confusion (if applicable): If using SSL/TLS, are the certificates trusted and installed correctly? As a temporary test, try disabling certificate verification in Splunk (not recommended for production!).

4. Network Ninja: Check firewalls on both the Splunk and proxy servers for blocked connections to the Dynatrace API. Can you connect directly to the API endpoint from the Splunk server, bypassing the proxy?

5. Token Tweak: Try regenerating your Dynatrace API token and updating it in Splunk. Does it have the necessary permissions?

6. Version Vault: Consider upgrading or downgrading the Dynatrace add-on or Splunk itself for better compatibility. Make sure the add-on version works with your Splunk version.

Stuck? Don't hesitate to reach out to Splunk or Dynatrace support for further assistance. Provide them with details about your environment, configuration, and error messages.

Remember, I'm here to help you troubleshoot like a pro!

~ If the reply helps, a Karma upvote would be appreciated
Hi there, The remote Splunk Forwarder might not be reachable due to:

- Connectivity: Ping the remote machine and check the WMI service status.
- WMI configuration: Verify the WMI input settings (server, namespace, credentials, source path), typically in wmi.conf.
- Firewall: Ensure the firewall allows connections on the Splunk ports (9997, 8089) as well as WMI's RPC port (135) and the dynamic DCOM port range it hands off to.
- Authentication: Double-check that the Splunk credentials have WMI access on the remote machine.
- Logs: Review Splunk logs on both machines for errors or warnings.

If these don't help, consider:

- Testing the WMI connection manually using wbemtest.exe.
- Enabling debug logging for more detailed messages.
- Using a forwarder with file or event log inputs instead of remote WMI if necessary.

Please provide more details (Splunk version, error messages) if you need further assistance.

~ If the reply helps, a Karma upvote would be appreciated
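As a concrete starting point, remote WMI event log collection is usually defined along these lines. This is a sketch: the stanza name and REMOTE-HOST are placeholders for your own values:

```
# wmi.conf on the collecting Splunk instance
# REMOTE-HOST is a placeholder -- use the actual machine name
[WMI:RemoteApplicationLog]
server = REMOTE-HOST
interval = 10
event_log_file = Application
disabled = 0
```

For this to work, the Splunk service itself must run as a domain account that has WMI permissions on REMOTE-HOST; running as Local System is the most common reason remote WMI collection silently fails.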
Hi there, Understanding the Workflow:

Universal Forwarder (UF):
- Installed on Windows machines.
- Collects logs from various sources on the machine (e.g., Windows event logs, applications, files).
- Forwards the collected logs to the Heavy Forwarder.

Heavy Forwarder (HF):
- Acts as a central collection point for logs from multiple UFs.
- Can perform filtering, transformation, and load balancing before forwarding logs to indexers.
- Often used for: reducing network traffic to indexers by filtering low-priority logs; offloading log processing from resource-constrained UFs; providing redundancy and failover for log forwarding.

Indexer:
- Stores and indexes the forwarded logs, making them searchable and analyzable in Splunk.

Tips:
- Consider using a deployment server to automate Splunk UF configuration on Windows machines.
- Leverage distributed search and indexes for efficient searching across geographically dispersed data.
- Regularly update Splunk software and configurations to maintain security and performance.

~ If the reply helps, a Karma upvote would be appreciated
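The UF → HF → indexer chain above boils down to three small config fragments. This is a minimal sketch; the hostnames are placeholders and the port is the conventional 9997:

```
# On each UF: outputs.conf -- send everything to the HF
[tcpout]
defaultGroup = hf_group

[tcpout:hf_group]
server = hf.example.com:9997

# On the HF: inputs.conf -- listen for forwarded data
[splunktcp://9997]
disabled = 0

# On the HF: outputs.conf -- forward on to the indexer
[tcpout]
defaultGroup = idx_group

[tcpout:idx_group]
server = idx.example.com:9997
```

Each hop restarts its Splunk instance after the change; data then flows UF → HF → indexer without any per-source configuration on the HF.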
Thanks for your response. It solves the issue
Hi @Nawab, the architecture is very easy: at least two HFs will work as concentrators to receive logs from the UFs and forward them to the indexers. This is a best practice if you have to send logs to Splunk Cloud from an on-premise network, or if you have a segregated network and you don't want to open many connections between UFs and IDXs. Otherwise, I always prefer to send logs directly from UFs to IDXs.

The approach of passing through HFs can have another purpose: delegating the parsing jobs to machines other than the IDXs to reduce their load, but only if the IDXs are overloaded, and in this case you have to give more resources (CPUs) to the HFs.

About configuration: in the UFs' outputs.conf, you have to configure the HFs as the destination instead of the IDXs; the HFs must be configured as receivers on port 9997 for the UFs, and as forwarders (still on port 9997) to the IDXs. On the HFs you can configure a Forwarder license to avoid consuming your indexing license.

Only one attention point: don't use only one HF to concentrate logs, because that way you have a Single Point of Failure.

Ciao. Giuseppe
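Giuseppe's two-HF layout can be sketched in the UFs' outputs.conf like this (hostnames are placeholders). Listing both HFs in one target group gives you automatic load balancing and removes the single point of failure he warns about:

```
# outputs.conf on each UF -- two HFs in one group for auto load balancing
[tcpout]
defaultGroup = hf_group

[tcpout:hf_group]
server = hf1.example.com:9997, hf2.example.com:9997
```

If one HF goes down, the UF keeps sending to the surviving one until the other comes back.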
Hi, I have installed a Splunk Forwarder on a remote computer and chose WMI as the data input on the main server. But when I want to find a log, I get the message that the remote computer is not reachable, even though I have defined firewall rules for the Splunk dynamic ports. Would you please help me?
Hi, I am trying to configure UFs installed on Windows machines to send logs to a HF, and then have the HF forward these logs to the indexer. I found some questions, but mostly they were very high level. If someone can explain how this will work, that would be great.
If you're using Splunk Cloud Classic Experience, the add-on needs to be installed on your ES SH as well.
Thank you so much, I've spent at least 10 hours on this  
Hi, thank you for the response. On the SH, we are not getting this error. We are getting these errors on ES; the app is available there and it's accessible globally. We are running a query specific to the Salesforce index only.
Hi @pelican, As a quick and dirty solution, you can select "csv" in the "Source type:" drop-down on the Set Source Type page of the Add Data process. This will tell Splunk to read field names from the first line of the file and index subsequent lines using the header fields as indexed field extractions. After the file is indexed, you can search for it in the default index using: sourcetype=csv If you specified a non-default index, add the index to the search: index=homework sourcetype=csv
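If you'd rather define your own sourcetype than reuse the built-in csv one, the same header-based extraction can be configured in props.conf. A minimal sketch, where "my_csv" is a placeholder sourcetype name:

```
# props.conf -- sketch; "my_csv" is a placeholder sourcetype name
[my_csv]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
```

INDEXED_EXTRACTIONS = csv tells Splunk to treat the first line as field names and index each subsequent line as a structured event, which is what the built-in csv sourcetype does under the hood.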
Also note that sync_client.cpp isn't a TLS client. It will only work with a plaintext HTTP server. Connecting to a TLS endpoint should return: Exception: read_until: Connection reset by peer
Hi @dilipkha, Under the hood, Boost calls getaddrinfo on Linux, which should accept IP addresses as strings. When I compile the example with g++ 8.5.0 and Boost 1.66.0 on my RHEL 8 host, the program works as expected using http as the service (note that the -l library flags must follow the source file, or the linker may drop them):

$ g++ -o sync_client sync_client.cpp -lboost_system -lpthread
$ chmod 0775 sync_client
$ host httpbin.org
httpbin.org has address 23.22.173.247
httpbin.org has address 52.206.0.51
$ ./sync_client 23.22.173.247 /get
Date: Sun, 28 Jan 2024 04:47:22 GMT
Content-Type: application/json
Content-Length: 225
Connection: close
Server: gunicorn/19.9.0
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true

{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "23.22.173.247",
    "X-Amzn-Trace-Id": "Root=1-65b5dc5a-2e13219829bcecc360851dcb"
  },
  "origin": "x.x.x.x",
  "url": "http://23.22.173.247/get"
}

In your implementation, you may need to use a different query constructor on line 35, e.g.:

tcp::resolver::query query(argv[1], "8089", boost::asio::ip::resolver_query_base::numeric_host | boost::asio::ip::resolver_query_base::numeric_service);

Note that I've also replaced "http" with "8089" to use the default Splunk management port. On most systems, the http service resolves to port 80. See e.g. /etc/services.
Hi, I'm using the Splunk Cloud platform for a school project. When I import my CSV files into Splunk, it doesn't seem to recognise the headers of my CSV as fields. Does anyone know how to get Splunk to recognise my headers? Thanks for any help.
At a glance, it's a score calculated from _audit data based on search run time, the absence of an index predicate, the presence of prestats transforming commands, the position of other transforming commands, memory use, and the presence of an initial makeresults or metadata command. Pain is inversely proportional to efficiency. @sideview may be lurking. Have you tried contacting them directly?
Hi @klim, I don't have an active IdP to validate, but as I recall, you would specify your preferred mapping as the Name ID format/attribute in the SAML IdP and not in the SAML SP (Splunk). Home directories can be managed at the file system level in $SPLUNK_HOME/etc/users by renaming directories. Ownership of most knowledge objects can be changed from Settings > All Configurations > Reassign Knowledge Objects. For the few objects that can't be reassigned via the user interface, you'll need to update all instances of $SPLUNK_HOME/etc/apps/*/metadata/*.meta as needed.
The scheduled alert should be owned by a user with access to the app and (probably) be saved within the app. Their access to Splunk should have no bearing on whether they can access MIME attachments in their email client; however, they may not be able to access any links you include. The CSV file will be an attachment, not a link.