All Posts


Hi Zack,

So I checked with our team that manages our indexers / heavy forwarders / Splunk backend. I also checked the metrics.log on a server we are using in our Splunk support case, and couldn't see any queues building up in metrics.log - plus the sample server we are using (a SQL member server) doesn't really have a high level of traffic.

During the period that the Security logs aren't sending, I can see data still coming in from the Windows Application Event Log and other Windows Event Logs (like AppLocker event logs and SMB auditing event logs) - so Event Log data is coming in, just not from the Security Log during the error periods. A restart of the UF causes it to re-process anything that is still local in the Security Event Log.

We had an older case somewhat like this for the Windows Defender Antivirus event logs not capturing data. The outcome was that Splunk added a new directive - channel_wait_time= - to cause the Splunk UF to retest that the Event Log existed after not being able to access it for a time period, and this would cause the data to start being captured again. It could be that a similar directive needs to be added - but it's not been required during the many years we have had Splunk running.

Recently, on advice from Splunk in an ongoing case about another issue, they changed this on the indexers - so that bit is set as you mentioned:

useACK = false
* A value of "false" means the forwarder considers the data fully processed when it finishes writing it to the network socket. This is what they currently have in our setup - they mentioned the value of false is a legacy behaviour.

autoBatch = true
* When set to 'true', the forwarder automatically sends chunks/events in batches to the target receiving instance connection. The forwarder creates batches only if there are two or more chunks/events available in the output connection queue.
* When set to 'false', the forwarder sends one chunk/event to the target receiving instance connection. This is old legacy behavior.
* Default: true
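For reference, this is roughly what those two settings look like in outputs.conf on the sending side - a minimal sketch only; the output group name and server list are placeholders, not our actual config:

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
# "false" = data is considered fully processed once written to the network socket (legacy behaviour)
useACK = false
# batch chunks/events whenever two or more are waiting in the output queue (this is the default)
autoBatch = true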
The issue you're facing with index clustering, where one indexer peer shows an `addingbatch` status and fluctuates between `up` and `batchadding`, while the other peer shows `up` and then goes to `pending` status, suggests potential problems with data replication, network connectivity, or resource allocation.

### Possible Causes and Solutions:

1. **Data Replication Lag or Bottlenecks**:
   - **Cause**: The `addingbatch` status indicates that the peer is adding data from the replication queue to the index, but it is either delayed or encountering issues. This can occur if there is a backlog in the replication queue or if the peer is unable to process the data quickly enough.
   - **Solution**:
     - Check for any network latency or packet loss between the indexer peers. Ensure there is sufficient bandwidth for replication traffic.
     - Verify that the indexers have adequate disk I/O performance. If the disks are slow or under heavy load, consider upgrading the storage or optimizing disk usage.

2. **Connectivity Issues Between Peers**:
   - **Cause**: The fluctuating statuses (`up` to `batchadding` or `pending`) could indicate intermittent network connectivity issues between the indexer peers or between the indexers and the cluster master.
   - **Solution**:
     - Review the network configuration and ensure that all indexer peers and the cluster master are correctly configured to communicate with each other.
     - Check the Splunk internal logs (`index=_internal`) for any network-related errors or warnings (`source=*splunkd.log`).

3. **Cluster Master Configuration or Load Issues**:
   - **Cause**: The cluster master may be overwhelmed or improperly configured, leading to inconsistent status updates for the peers.
   - **Solution**:
     - Verify the cluster master's health and ensure it is not overloaded.
     - Review the cluster master's logs for any errors or configuration issues that might be causing delays in managing peer status.

4. **Resource Constraints on Indexer Peers**:
   - **Cause**: The indexers might be under-resourced (CPU, memory, or disk space), causing them to be slow in processing incoming data or managing replication.
   - **Solution**:
     - Check the hardware resources (CPU, RAM, disk space) on each indexer. Ensure they meet the requirements for the volume of data being handled.
     - Increase the allocated resources or optimize the current configuration for better performance.

5. **Splunk Version Compatibility or Bugs**:
   - **Cause**: There may be bugs or version compatibility issues if different versions of Splunk are running on the cluster master and indexer peers.
   - **Solution**:
     - Make sure that all instances (cluster master and indexer peers) are running compatible versions of Splunk.
     - Review the [Splunk Release Notes](https://docs.splunk.com/Documentation/Splunk/latest/ReleaseNotes) for any known issues or bugs that might match your problem.

6. **Configuration Issues**:
   - **Cause**: Misconfiguration in `indexes.conf` or other related files may cause replication or status reporting issues.
   - **Solution**:
     - Review your `indexes.conf`, `server.conf`, and `inputs.conf` files for any configuration errors. Ensure that all settings are aligned with best practices for index clustering.

### Next Steps:

1. **Log Analysis**: Review the `_internal` logs (`splunkd.log`) on all affected peers and the cluster master. Look for errors, warnings, or messages related to clustering or replication (see the example search below).
2. **Network Diagnostics**: Run network diagnostics to ensure there are no connectivity issues between indexer peers or between the peers and the cluster master.
3. **Contact Splunk Support**: If the problem persists, consider contacting Splunk Support for further assistance.

By systematically checking these areas, you can identify the root cause and apply the appropriate solution to stabilize the indexer peers in your Splunk cluster.
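As a concrete starting point for the log analysis step, a hedged example of the kind of internal search you could run - the component filter is an assumption, so adjust it to the component names that actually appear in your splunkd.log:

index=_internal source=*splunkd.log* (log_level=ERROR OR log_level=WARN) (component=CMMaster OR component=CMSlave OR component=*Replicat*)
| stats count latest(_time) AS last_seen BY host component log_level
| convert ctime(last_seen)
| sort - count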
Hi @FQzy

To set up the Cisco Security Cloud app in Splunk, you can find detailed guidance and documentation on the app's Splunkbase page (https://splunkbase.splunk.com/app/7404).

### Key Steps for Setup:

1. Download and Install: You can download the Cisco Security Cloud app from Splunkbase. Make sure to follow the specific instructions for installation, which include compatibility checks and required add-ons.
2. Configure Data Inputs: The app requires the configuration of several data inputs based on your Cisco security products (e.g., Firewalls, Intrusion Protection, Web Security, etc.). The documentation provides step-by-step guidance for each type.
3. Troubleshooting Common Errors: For issues like "failed to create an input," ensure that all prerequisites (like appropriate permissions and network settings) are met. You may also need to consult the related [Splunkbase page](https://splunkbase.splunk.com/app/5558) for specific troubleshooting tips.

If you run into errors or specific issues during setup, it can also help to check the community discussions and resources available on the Splunk website.

For your reference, you can also refer to this document: https://developer.cisco.com/docs/cloud-security/cloud-security-app-for-splunk/#cloud-security-app-for-splunk
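If you are on Splunk Enterprise and prefer installing from the command line rather than through Splunk Web, a minimal sketch - the package file name and path are placeholders for whatever you downloaded from Splunkbase:

# install the downloaded package, then restart Splunk
$SPLUNK_HOME/bin/splunk install app /tmp/cisco-security-cloud.tgz
$SPLUNK_HOME/bin/splunk restart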
Hello Team,

I have a total of only 192 Business Transactions registered as first-class BTs. I know the node-wise limit is 50 and the application-wise limit is 200. So why are my Business Transactions getting registered under "All other traffic" instead of as first-class BTs?

I have attached two screenshots for your reference (last 1 hour & last 1 week BT list).

Please let me know the appropriate answer/solution to my aforementioned question.

Thanks & Regards,
Satishkumar
The issue was fixed after removing passwords.conf and restarting Splunk.
In your sample logs, it looks like you have a space after the first pipe which does not appear to be accounted for in your LINE_BREAKER pattern. Try something like this:

LINE_BREAKER=([\r\n]+)\d{4}\-\d{2}\-\d{2}\s\|\s\d{2}:\d{2}:\d{2}.\d{3}\s\|
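Put together with the rest of the stanza from the question, that would look roughly like this - TIME_PREFIX and TIME_FORMAT are carried over as-is, so double-check them against your samples, which use a colon rather than a dot before the milliseconds:

[tools:logs]
SHOULD_LINEMERGE=false
# note the extra \s to account for the space after the first pipe
LINE_BREAKER=([\r\n]+)\d{4}\-\d{2}\-\d{2}\s\|\s\d{2}:\d{2}:\d{2}.\d{3}\s\|
TIME_PREFIX=^
TIME_FORMAT=%Y-%m-%d | %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD=28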
### Updated Solution Using Internal Logs

Here's an alternative method to monitor your daily license usage without requiring the "dispatch_rest_to_indexers" capability:

1. Search Query for License Usage: Use the following search query to fetch the license usage information from the internal logs:

index=_internal source=*license_usage.log* type="RolloverSummary"
| eval GB_used=round(b/1024/1024/1024, 2)
| eval license_usage_percentage = (GB_used / 250) * 100
| stats latest(GB_used) as GB_used latest(license_usage_percentage) as license_usage_percentage
| where license_usage_percentage > 70
| table GB_used, license_usage_percentage

This query:
- Fetches the license usage data from the internal license usage logs.
- Converts the bytes used (`b`) to gigabytes (GB).
- Calculates the percentage of license usage based on your 250 GB daily limit.

2. Create Alerts Based on Usage Percentage: You need to create three separate alerts based on different license usage thresholds.
- Alert for 70% Usage:
  - Go to Search & Reporting in Splunk Cloud.
  - Enter the updated query in the search bar.
  - Click on Save As and select Alert.
  - Name the alert (e.g., "License Usage 70% Alert").
  - Set the trigger condition to: license_usage_percentage > 70
  - Choose the alert type as "Scheduled" and configure it to run once per day or at your preferred schedule.
  - Set the alert actions (e.g., email notification, webhook).
- Alert for 80% Usage:
  - Repeat the steps above to create another alert.
  - Set the trigger condition to: license_usage_percentage > 80
- Alert for 90% Usage:
  - Follow the same process to create the final alert.
  - Set the trigger condition to: license_usage_percentage > 90

3. Configure Alert Notifications:
- For each alert, set up the notification actions under Alert Actions (e.g., send an email, trigger a webhook).

4. Test Your Alerts:
- Manually run the search and ensure it produces the expected results.
- Adjust thresholds or conditions to test if the alerts trigger correctly. A sketch of what the resulting saved alert could look like in savedsearches.conf is shown below.
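For completeness, a hedged sketch of how the first of those alerts might end up looking in savedsearches.conf if you prefer managing it as configuration - the stanza name, schedule, and email address are placeholders:

[License Usage 70% Alert]
search = index=_internal source=*license_usage.log* type="RolloverSummary" \
| eval GB_used=round(b/1024/1024/1024, 2) \
| eval license_usage_percentage = (GB_used / 250) * 100 \
| stats latest(GB_used) as GB_used latest(license_usage_percentage) as license_usage_percentage \
| where license_usage_percentage > 70 \
| table GB_used, license_usage_percentage
enableSched = 1
# run once per day at 08:00 (placeholder schedule)
cron_schedule = 0 8 * * *
dispatch.earliest_time = -24h
dispatch.latest_time = now
# trigger when the filtered search returns at least one result
counttype = number of events
relation = greater than
quantity = 0
# placeholder notification action
action.email = 1
action.email.to = splunk-admins@example.com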
The metadata command doesn't take filters other than index, so filter after the data is returned:

| metadata type=sources where index=gwcc
| search source !="*log.2024-*"
| eval source2=lower(mvindex(split(source,".2024"),-1))
| eval file=lower(mvindex(split(source,"/"),-1))
| table source, source2, file
We are facing an issue after migrating a single-site indexer cluster to multisite. SF/RF is not met after 5 days and the fixup tasks are still increasing. Can anyone help resolve this SF/RF issue?
Yes, that's true, I got a connectivity issue from an indexer, and the problem appeared out of the blue, without anything unusual happening beforehand. Could you help?
I did the rolling restart and nothing happened; the issue still persists and I don't know why it's happening...
Hi @Samantha,

as @PickleRick and @ITWhisperer also said, this seems to be a job for a scheduled report.

If you want a dashboard, you could schedule a search (e.g. as an alert) running your search and saving aggregated results in a summary index, then you could run the searches of your dashboard on this summary index.

Ciao.

Giuseppe
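A minimal sketch of what that scheduled search could look like, assuming a hypothetical base search and a summary index called summary (adjust both to your data):

index=your_index sourcetype=your_sourcetype earliest=-24h@h latest=@h
| stats count AS daily_count BY host
| collect index=summary source="daily_host_counts"

The dashboard panels would then query the summary instead of the raw data, e.g.:

index=summary source="daily_host_counts"
| timechart span=1d sum(daily_count) by host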
Hi @sumarri,

Splunk isn't a database where you can manually write notes.

The only way is to add to your dashboard a link to a lookup that you open using the Splunk App for Lookup Editor, or use a workaround that I described in one past answer: https://community.splunk.com/t5/Dashboards-Visualizations/Dynamically-Update-a-lookup-file-on-click-of-a-field-and-showing/m-p/674605

Ciao.

Giuseppe
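If you go the lookup route, the underlying mechanism is just a search that appends a row to a lookup file; a hedged sketch, assuming a hypothetical notes.csv lookup with note and updated fields:

| makeresults
| eval note="my manual note", updated=strftime(now(), "%Y-%m-%d %H:%M:%S")
| table note, updated
| outputlookup append=true notes.csv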
Hi @mahesh27,

two questions:

1) do you see logs in a wrong format, or don't you see logs at all? In the first case, props.conf is relevant; in the second case, there's a different issue.

2) if you see your logs in the wrong format, I suppose that your logs are in one row (because you used SHOULD_LINEMERGE=false), so why are you using the LINE_BREAKER in that way? See how to index csv files using pipe as delimiter.

My hint is to save some logs in a text file and try to ingest them using the manual Add Data feature, which guides you through props.conf definition and testing.

Ciao.

Giuseppe
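To illustrate the delimiter-based alternative Giuseppe mentions, a hedged props.conf sketch treating the data as pipe-separated with indexed extractions - the column names are placeholders, and whether this approach fits depends on your files and on whether you want indexed fields at all:

[tools:logs]
SHOULD_LINEMERGE=false
# parse each line as a pipe-separated record instead of hand-crafting a LINE_BREAKER
INDEXED_EXTRACTIONS=psv
# no header row in the files, so name the columns explicitly (placeholder names)
FIELD_NAMES=date,time,stream,message
TIMESTAMP_FIELDS=date,time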
Hi,

I am getting the below error when trying to save the data inputs (all 5) which come as part of the Nutanix add-on. Has anyone seen this before and can suggest something?

Error: Encountered the following error while trying to save: Argument validation for scheme=nutanix_alerts: script running failed (PID 24107 killed by signal 9: Killed).
metadata, and with the pipe at the front... a completely new command/structure for me, but it works, and works much faster.

But one more unexpected case has appeared due to this change. I cannot filter out rotated files which are in the directory and are not necessary. It looks something like:

file1.log
file1.2024-09-01.log
file1.2024-08-02.log
etc. etc.

and of course I only need the main, current file (without any dates), so I tried

| metadata type=sources where index=gwcc AND source !='*log.2024-*'
| eval source2=lower(mvindex(split(source,".2024"),-1))
| eval file=lower(mvindex(split(source,"/"),-1))
| table source, source2, file

but my "filter" does not work.
Hi Team,

Could you please let us know when the latest version of the Splunk OVA for VMware will be released?
Hi all,

I'm having issues comparing the user field in Palo Alto traffic logs vs the last user reported by CrowdStrike/Windows events. The Palo Alto traffic logs show a different user initiating the traffic during the time window compared to the CrowdStrike last-user-login reported for the same endpoint. Has anyone faced a similar issue?

Thanks
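In case it helps others reproduce the comparison, a rough sketch of the kind of search involved - the index and sourcetype names, and the fields used for the endpoint and user, are assumptions to adjust to your own field mappings:

(index=pan_logs sourcetype="pan:traffic") OR (index=crowdstrike sourcetype="crowdstrike:events")
| eval endpoint=lower(coalesce(dest_host, host))
| chart latest(user) over endpoint by sourcetype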
Using the below props, but we don't see logs reporting to Splunk. We are assuming that the | (pipe symbol) works as a delimiter and we cannot use it in props. Just want to know if these props are correct:

[tools:logs]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)\d{4}\-\d{2}\-\d{2}\s\|\d{2}:\d{2}:\d{2}.\d{3}\s\|
TIME_PREFIX=^
TIME_FORMAT=%Y-%m-%d | %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD=28

Sample logs:

2022-02-22 | 04:00:34:909 | main stream logs | Staticapp-1 - Restart completed
2022-02-22 | 05:00:34:909 | main stream applicationlogs | Staticapp-1 - application logs (total=0, active=0, waiting=0) completed
2022-02-22 | 05:00:34:909 | main stream applicationlogs | harikpool logs-1 - mainframe script (total=0, active=0, waiting=0) completed
Splunk support confirmed this is a bug in 9.1.0.2. Based on the SPL, it has been resolved in Beryllium 9.1.4 and Cobalt 9.2.1. As a workaround until we upgrade, I have appended a bogus OR condition with a wildcard, e.g.: OR noSuchField=noSuchValue* to the other OR conditions in our WHERE clauses, and this causes Splunk to return the correct result.
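For anyone hitting the same bug, a purely illustrative sketch of the workaround in context - the tstats search, index, and field names here are placeholders, not the actual affected search:

| tstats count where index=web (sourcetype="access_combined" OR sourcetype="nginx:access" OR noSuchField=noSuchValue*) by host
| sort - count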