All Posts

I mean if you uncheck that option, you should see "ALL" the BTs (hopefully). When that option is enabled, it only displays BTs that have performance data.
Hi, Can anyone with experience using the SOCRadar Alarm add-on help me figure out why the fields "alarm asset" and "title" return empty values? We're using Splunk v9.1.5 and SOCRadar Alarm Collector v1.0.2.
Hello Terence, I have checked, and the option "Transactions with Performance Data" is already checked. Please find the attached screenshot for your reference and provide any other alternative solutions. Thanks, Satishkumar
I have checked everything, and it appears Splunk is reporting a connectivity issue, but there are no actual issues. I think it requires support from Splunk itself.
From your screenshot, if you click on Filters, is the option "Transactions with Performance Data" checked?
Corrected
Hi, I have a problem with a data model search. This is my SPL:

| datamodel Network_Resolution_DNS_v2 search
| search DNS.message_type=Query
| rename DNS.query as query
| fields _time, query
| streamstats current=f last(_time) as last_time by query
| eval gap=last_time - _time
| stats count avg(gap) AS AverageBeaconTime var(gap) AS VarianceBeaconTime BY query
| eval AverageBeaconTime=round(AverageBeaconTime,3), VarianceBeaconTime=round(VarianceBeaconTime,3)
| sort -count
| where VarianceBeaconTime < 60 AND count > 2 AND AverageBeaconTime>1.000
| table query VarianceBeaconTime count AverageBeaconTime

It works, but slowly, so I would like to convert it to an accelerated data model (tstats) search. What should the query look like? I have the data model DNS_v2 and it works for other queries, but not for this one:

| tstats summariesonly=true values(DNS.query) as query FROM datamodel="DNS_v2" where DNS.message_type=Query groupby _time
| mvexpand query
| streamstats current=f last(_time) as last_time by query
| eval gap=(last_time - _time)
| stats count avg(gap) AS AverageBeaconTime var(gap) AS VarianceBeaconTime BY query
| eval AverageBeaconTime=round(AverageBeaconTime,3), VarianceBeaconTime=round(VarianceBeaconTime,3)
| sort -count
| where VarianceBeaconTime < 60 AND count > 2 AND AverageBeaconTime>1.000
| table query VarianceBeaconTime count AverageBeaconTime

Has anyone had this problem before?
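Not a verified answer, but one direction the tstats version could take is to keep DNS.query as a group-by field instead of collapsing it with values() and mvexpand, so the per-query gap calculation still has something to work on. The 1-second span and the assumption that the accelerated DNS_v2 model exposes DNS.query are mine, not from the original post:

| tstats summariesonly=true count FROM datamodel="DNS_v2" where DNS.message_type=Query groupby _time span=1s, DNS.query
| rename DNS.query as query
| streamstats current=f last(_time) as last_time by query
| eval gap=last_time - _time
| stats count avg(gap) AS AverageBeaconTime var(gap) AS VarianceBeaconTime BY query
| eval AverageBeaconTime=round(AverageBeaconTime,3), VarianceBeaconTime=round(VarianceBeaconTime,3)
| sort -count
| where VarianceBeaconTime < 60 AND count > 2 AND AverageBeaconTime>1.000
| table query VarianceBeaconTime count AverageBeaconTime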
  Hello Splunk Community, I have .evtx files from several devices, and I would like to analyze them using Splunk Universal Forwarder (the agent). I want to set up the agent to continuously monitor these files as if the data is live, so that I can apply Splunk Enterprise Security (ES) rules to them.
Great! Works as expected. One correction: it should be double quotes instead of single quotes in the search:

| search source !="*log.2024-*"
Hi Zack,

So I checked with our team that manages our indexers / heavy forwarders / Splunk backend. I also checked the metrics.log on a server we are using in our Splunk support case, and couldn't see any queues building up in metrics.log - plus the sample server we are using (an SQL member server) doesn't really have a high level of traffic.

During the period that the Security logs aren't sending, I can see data still coming in from the Windows Application Event Log and other Windows event logs (like AppLocker and SMB auditing event logs) - so Event Log data is coming in, just not from the Security Log during the error periods. A restart of the UF causes it to re-process anything that is still local in the Security Event Log.

We had an older case somewhat like this for the Windows Defender Antivirus event logs not capturing data. The outcome was that Splunk added a new directive - channel_wait_time= - to make the Splunk UF retest that the Event Log exists after not being able to access it for a time period, which causes the data to start being captured again. It could be that a similar directive needs to be added - but it hasn't been required during the many years we have had Splunk running.

Recently, on advice from Splunk in an ongoing case about another issue, they made a change on the indexers, so that bit is set as you mentioned:

useACK = false
* A value of "false" means the forwarder considers the data fully processed when it finishes writing it to the network socket.

In our setup, they currently have the following - they mentioned the value of false is a legacy behaviour:

autoBatch = true
* When set to 'true', the forwarder automatically sends chunks/events in batches to the target receiving instance connection. The forwarder creates batches only if there are two or more chunks/events available in the output connection queue.
* When set to 'false', the forwarder sends one chunk/event to the target receiving instance connection. This is old legacy behavior.
* Default: true
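For anyone following along, here is a minimal outputs.conf sketch showing where the two settings quoted above live on the forwarding tier; the output group name and server addresses are placeholders, not values from this environment:

# outputs.conf (forwarding tier) - illustrative only
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# placeholder indexer addresses
server = idx1.example.com:9997, idx2.example.com:9997
# forwarder treats data as processed once written to the network socket
useACK = false
# send chunks/events in batches when two or more are queued (the default)
autoBatch = true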
The issue you're facing with index clustering, where one indexer peer shows an `addingbatch` status and fluctuates between `up` and `batchadding`, while the other peer shows `up` and then goes to `pending` status, suggests potential problems with data replication, network connectivity, or resource allocation.

### Possible Causes and Solutions:

1. **Data Replication Lag or Bottlenecks**:
   - **Cause**: The `addingbatch` status indicates that the peer is adding data from the replication queue to the index, but it is either delayed or encountering issues. This can occur if there is a backlog in the replication queue or if the peer cannot process the data quickly enough.
   - **Solution**:
     - Check for any network latency or packet loss between the indexer peers. Ensure there is sufficient bandwidth for replication traffic.
     - Verify that the indexers have adequate disk I/O performance. If the disks are slow or under heavy load, consider upgrading the storage or optimizing disk usage.

2. **Connectivity Issues Between Peers**:
   - **Cause**: The fluctuating statuses (`up` to `batchadding` or `pending`) could indicate intermittent network connectivity issues between the indexer peers or between the indexers and the cluster master.
   - **Solution**:
     - Review the network configuration and ensure that all indexer peers and the cluster master are correctly configured to communicate with each other.
     - Check the Splunk internal logs (`index=_internal`) for any network-related errors or warnings (`source=*splunkd.log`).

3. **Cluster Master Configuration or Load Issues**:
   - **Cause**: The cluster master may be overwhelmed or improperly configured, leading to inconsistent status updates for the peers.
   - **Solution**:
     - Verify the cluster master's health and ensure it is not overloaded.
     - Review the cluster master's logs for any errors or configuration issues that might be causing delays in managing peer status.

4. **Resource Constraints on Indexer Peers**:
   - **Cause**: The indexers might be under-resourced (CPU, memory, or disk space), causing them to be slow in processing incoming data or managing replication.
   - **Solution**:
     - Check the hardware resources (CPU, RAM, disk space) on each indexer. Ensure they meet the requirements for the volume of data being handled.
     - Increase the allocated resources or optimize the current configuration for better performance.

5. **Splunk Version Compatibility or Bugs**:
   - **Cause**: There may be bugs or version compatibility issues if different versions of Splunk are running on the cluster master and indexer peers.
   - **Solution**:
     - Make sure that all instances (cluster master and indexer peers) are running compatible versions of Splunk.
     - Review the [Splunk Release Notes](https://docs.splunk.com/Documentation/Splunk/latest/ReleaseNotes) for any known issues or bugs that might match your problem.

6. **Configuration Issues**:
   - **Cause**: Misconfiguration in `indexes.conf` or other related files may cause replication or status reporting issues.
   - **Solution**:
     - Review your `indexes.conf`, `server.conf`, and `inputs.conf` files for any configuration errors. Ensure that all settings are aligned with best practices for index clustering.

### Next Steps:

1. **Log Analysis**: Review the `_internal` logs (`splunkd.log`) on all affected peers and the cluster master. Look for errors, warnings, or messages related to clustering or replication.
2. **Network Diagnostics**: Run network diagnostics to ensure there are no connectivity issues between indexer peers or between the peers and the cluster master.
3. **Splunk Support**: Consider contacting Splunk Support for further assistance.

By systematically checking these areas, you can identify the root cause and apply the appropriate solution to stabilize the indexer peers in your Splunk cluster.
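As a starting point for the log analysis step, a hedged example of the kind of internal-log search you could run from the search head; the keyword filter is an assumption and can be widened or narrowed as needed:

index=_internal source=*splunkd.log* (log_level=ERROR OR log_level=WARN) (clustering OR replication OR bucket)
| stats count by host component log_level
| sort -count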
Hi @FQzy

To set up the Cisco Security Cloud app in Splunk, you can find detailed guidance and documentation on its Splunkbase page (https://splunkbase.splunk.com/app/7404).

### Key Steps for Setup:

1. Download and Install: You can download the Cisco Security Cloud app from Splunkbase. Make sure to follow the specific instructions for installation, which include compatibility checks and required add-ons.
2. Configure Data Inputs: The app requires the configuration of several data inputs based on your Cisco security products (e.g., Firewalls, Intrusion Protection, Web Security, etc.). The documentation provides step-by-step guidance for each type.
3. Troubleshooting Common Errors: For issues like "failed to create an input," ensure that all prerequisites (such as appropriate permissions and network settings) are met. You may need to consult the app's [Splunkbase page](https://splunkbase.splunk.com/app/5558) for specific troubleshooting tips.

If you run into errors or specific issues during setup, it might also be helpful to check the community discussions and resources available on the Splunk website.

For reference, you can refer to this document: https://developer.cisco.com/docs/cloud-security/cloud-security-app-for-splunk/#cloud-security-app-for-splunk
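If the input creation fails, one generic (not app-specific, and only an assumption about where the errors would land) way to look for related messages in the internal logs is something like:

index=_internal source=*splunkd.log* log_level=ERROR cisco
| stats count by host component
| sort -count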
Hello Team, I have a total of only 192 Business Transactions in the first-class BT list. I know the node-wise limit is 50 and the application-wise limit is 200, so why are my Business Transactions getting registered under "All other traffic" instead of as first-class BTs? I have attached two screenshots for your reference (last 1 hour & last 1 week BT list). Please let me know the appropriate answer/solution to my question. Thanks & Regards, Satishkumar
The issue was fixed after removing passwords.conf and restarting Splunk.
In your sample logs, it looks like you have a space after the first pipe, which does not appear to be accounted for in your LINE_BREAKER pattern. Try something like this:

LINE_BREAKER=([\r\n]+)\d{4}\-\d{2}\-\d{2}\s\|\s\d{2}:\d{2}:\d{2}.\d{3}\s\|
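For context, a minimal props.conf sketch of where that pattern would sit; the sourcetype name is a placeholder, and SHOULD_LINEMERGE = false is an assumption about a typical single-line event setup:

# props.conf on the parsing tier (indexer or heavy forwarder)
[my_pipe_delimited_sourcetype]
LINE_BREAKER = ([\r\n]+)\d{4}\-\d{2}\-\d{2}\s\|\s\d{2}:\d{2}:\d{2}.\d{3}\s\|
SHOULD_LINEMERGE = false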
### Updated Solution Using Internal Logs

Here's an alternative method to monitor your daily license usage without requiring the "dispatch_rest_to_indexers" capability:

1. Search Query for License Usage: Use the following search query to fetch the license usage information from the internal logs:

index=_internal source=*license_usage.log* type="RolloverSummary"
| eval GB_used=round(b/1024/1024/1024, 2)
| eval license_usage_percentage = (GB_used / 250) * 100
| stats latest(GB_used) as GB_used latest(license_usage_percentage) as license_usage_percentage
| where license_usage_percentage > 70
| table GB_used, license_usage_percentage

This query:
- Fetches the license usage data from the internal license usage logs.
- Converts the bytes used (`b`) to gigabytes (GB).
- Calculates the percentage of license usage based on your 250 GB daily limit.

2. Create Alerts Based on Usage Percentage: You need to create three separate alerts based on different license usage thresholds.
- Alert for 70% Usage:
  - Go to Search & Reporting in Splunk Cloud.
  - Enter the updated query in the search bar.
  - Click on Save As and select Alert.
  - Name the alert (e.g., "License Usage 70% Alert").
  - Set the trigger condition to: license_usage_percentage > 70
  - Choose the alert type as "Scheduled" and configure it to run once per day or at your preferred schedule.
  - Set the alert actions (e.g., email notification, webhook).
- Alert for 80% Usage:
  - Repeat the steps above to create another alert.
  - Set the trigger condition to: license_usage_percentage > 80
- Alert for 90% Usage:
  - Follow the same process to create the final alert.
  - Set the trigger condition to: license_usage_percentage > 90

3. Configure Alert Notifications:
- For each alert, set up the notification actions under Alert Actions (e.g., send an email, trigger a webhook).

4. Test Your Alerts:
- Manually run the search and ensure it produces the expected results.
- Adjust thresholds or conditions to test if the alerts trigger correctly.
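If you would rather schedule a single search instead of three separate alerts, here is a possible variant; the 250 GB limit comes from the post above, while the severity labels and thresholds-in-one-search approach are an assumption about how you might want to report it:

index=_internal source=*license_usage.log* type="RolloverSummary"
| eval GB_used=round(b/1024/1024/1024, 2)
| eval license_usage_percentage=round((GB_used / 250) * 100, 2)
| stats latest(GB_used) as GB_used latest(license_usage_percentage) as license_usage_percentage
| eval severity=case(license_usage_percentage > 90, "critical",
                     license_usage_percentage > 80, "high",
                     license_usage_percentage > 70, "warning")
| where isnotnull(severity)
| table GB_used license_usage_percentage severity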
The metadata command doesn't take filters other than index, so filter after the data is returned:

| metadata type=sources where index=gwcc
| search source !="*log.2024-*"
| eval source2=lower(mvindex(split(source,".2024"),-1))
| eval file=lower(mvindex(split(source,"/"),-1))
| table source, source2, file
I am facing an issue after migrating from a single-site to a multisite indexer cluster. SF/RF has not been met after 5 days, and the fixup tasks are still increasing. Can anyone help resolve this SF/RF issue?
Yes, that's true. I got a connectivity issue from an indexer, and the problem happened suddenly, without anything unusual beforehand. Could you help?
I did the rolling restart and nothing happened; the issue still persists, and I don't know why it happened...