All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I found this in the MSI log: ’SetAccountType:  Error 0x80004005: Cannot set GROUPPERFORMANCEMONITORUSERS=1 since the local users/groups are not available on Domain Controller’. Now I just need to find out if I can bypass it.
I'm a bit confused about how you're getting data to your SOAR instance - you say you're pulling, but you're also using the Splunk App for SOAR Export? When I hear "pull", I'm assuming you have the SOAR App for Splunk configured to pull events in from SOAR (yes, I know the app names are all similar, and this definitely caused some confusion for my team early on). Assuming you just have event forwarding configured, you can try changing your advanced options to this: this would send everything as a single artifact, rather than the multiple artifacts you have.
Awesome! Glad to know that. Please remember to mark this as resolved so others can know about it. Happy splunking!
Furthermore, if I "Open in Search" the search code of the two dashboards, change every occurrence of -30d@d to -60d@d, and search, there will be 60 bars - the last 60 days. This happens although frozenTimePeriodInSec = 2592000 (30 days) for the _internal index. Do we really need to double this setting? It looks like just changing -30d@d to -60d@d is enough. Best regards, Altin
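For reference, the retention setting discussed above lives in indexes.conf; here is a minimal sketch (the 60-day value is an assumption for illustration, not a recommendation):

```
# indexes.conf -- retention for the _internal index
[_internal]
# Events become eligible for freezing (deletion, by default) once the
# newest event in their bucket is older than this many seconds.
# 2592000 s = 30 days; 5184000 s = 60 days.
frozenTimePeriodInSec = 5184000
```

Note that freezing happens per bucket, and a bucket is frozen only when its newest event has aged past the limit, so a search beyond frozenTimePeriodInSec can still return events that have not yet been frozen - which may be why the -60d@d search returned 60 days of bars.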
@Symon To effectively monitor Linux Auditd events in Splunk, you can use the Splunk Add-on for Linux. This add-on allows you to collect and analyze audit logs from your Linux devices. Here's how you can set it up:

Configure Auditd to send data to the Splunk Add-on for Linux:
https://docs.splunk.com/Documentation/AddOns/released/Linux/Configure4
https://splunkbase.splunk.com/app/833

This add-on for Linux Auditd allows administrators to make their data OCSF-compliant and CIM-compliant for related Linux Auditd events:
https://preview.splunkbase.splunk.com/app/7045
@pavithra Kindly refer to the links below:
How to display the count in piechart as labels - Splunk Community
Chart configuration reference - Splunk Documentation
@kiran_panchavat has some good suggestions. Also consider using post-processing and/or saved searches in the dashboards to reduce CPU usage and speed up the dashboards.
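To illustrate the post-processing idea: in classic Simple XML, several panels can share one base search and post-process its results, so the expensive search runs only once. A minimal sketch (the index, sourcetype, and field names here are hypothetical):

```xml
<form>
  <!-- Base search runs once; it should be a transforming search (stats, chart, etc.) -->
  <search id="base">
    <query>index=web sourcetype=access_combined | stats count BY status host</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <chart>
        <!-- Post-process search: reuses the base results instead of re-running the search -->
        <search base="base">
          <query>| stats sum(count) AS requests BY status</query>
        </search>
      </chart>
    </panel>
  </row>
</form>
```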
Thanks, that did it. Thank you!
Thank you @victor_menezes. I tried the below and it worked:
| eval _time = strptime(replace(source, ".*(\d\d\d\d-\d\d-\d\d\_\d\d-\d\d-\d\d).*","\1"),"%Y-%m-%d_%H-%M-%S")
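For anyone who wants to sanity-check that regex outside Splunk, here is a small Python sketch of the same extraction (the helper name and sample path are made up for illustration):

```python
import re
from datetime import datetime

def extract_timestamp(source):
    """Mirror of the SPL above: pull a YYYY-MM-DD_HH-MM-SS stamp out of
    a source path and parse it. Returns None when no stamp is present."""
    match = re.search(r"\d{4}-\d{2}-\d{2}_\d{2}-\d{2}-\d{2}", source)
    if match is None:
        return None
    return datetime.strptime(match.group(0), "%Y-%m-%d_%H-%M-%S")
```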
Before you format your table, you'll need to take the return value in the array and convert it to a string. You will need some custom code for this. The beauty of SOAR is that you're able to throw in some Python code to manipulate the data in whatever way you want.
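As a rough illustration of the kind of custom code involved (this is generic Python, not a specific SOAR API, and the function name is hypothetical):

```python
def to_display_string(value, sep=", "):
    """Collapse a list/tuple returned by an upstream action into a single
    string so it renders cleanly in a table; scalars pass through as
    plain strings, and None becomes an empty string."""
    if value is None:
        return ""
    if isinstance(value, (list, tuple)):
        return sep.join(str(item) for item in value)
    return str(value)
```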
Hello community, I am testing interactions on a pie chart to allow my users to click on a specific segment and have the rest of the dashboard adapt accordingly (before / after clicking on an element of the pie chart). For this, I use a token on the pie chart that I added to the elements which must be updated accordingly (via a simple "search").

I wanted to add a "Reset" button to reset this filter. However, I'm a little stuck; I don't really know how to configure it. I tried the same approach as the pie chart interaction, telling myself that when we click on it, we reset the token, but it breaks my dashboard.

While waiting to find a solution, I "cheated" by putting an interaction on the button which reloads the web page to return to the dashboard, which cancels any existing filter, but it is not optimal. Do you have any idea how to reset the pie chart token without completely reloading the dashboard page? Best regards, Rajaion
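One hedged suggestion: if the dashboard is (or could be) classic Simple XML, a link input can unset a token without a page reload. The token name seg_tok is hypothetical, and Dashboard Studio handles interactions differently, so this sketch may not apply there:

```xml
<input type="link" token="reset_link" searchWhenChanged="true">
  <label></label>
  <choice value="reset">Reset filter</choice>
  <change>
    <condition value="reset">
      <!-- Clear the pie-chart token, then clear the link's own token
           so the reset can be clicked again -->
      <unset token="seg_tok"></unset>
      <unset token="form.reset_link"></unset>
    </condition>
  </change>
</input>
```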
OK, something is not clear. The trigger condition is "results count greater than 4", which then runs the trigger actions. 1) Are you saying that even when the results are greater than 4, the trigger still did not fire? 2) In your latest reply, you got only one result, but the trigger condition ran successfully, yes? Can you please attach a screenshot of the trigger conditions?
Thank you so much very helpful!
@trha_ Can you check this  https://www.tekstream.com/blog/blog-deleting-the-unsupported-splunk-windows-universal-forwarder/ 
But this form has no Edit button, which means it is not meant to be modified other than by Splunk itself.
@yazeed Splunk's _configtracker index can be used to monitor changes to alerts and saved searches in Splunk.

The _configtracker index: with Splunk 9, the _configtracker index was introduced. This index stores changes to Splunk configuration files, including the date and time of the change, as well as all the new and old values associated with the modified item.

However, the data in _configtracker has a limitation: it only monitors changes to configuration files. Consequently, a crucial piece of information is missing from these logs: the user responsible for the change. While it does provide a record of the previous and updated settings, this information is not available in the same event. Therefore, to create a comprehensive alert, we need to perform data aggregation and enrichment. For instance, after the described change to the Windows failed logons alert use case, the configtracker will contain two related events.

Note that the search looks in the _configtracker index for a configuration update where the changed item (data.changes{}.stanza) is specified, and particularly for a saved search being changed, independently of the app and Splunk installation directory ("*/savedsearches.conf"). Here is the SPL query:

index=_configtracker component=ConfigChange data.action=update data.changes{}.stanza=* data.path="*/savedsearches.conf"
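Building on that base search, here is a sketch of the aggregation step that pairs old and new values per stanza. The exact data.changes{} field names can vary by Splunk version, so treat these as assumptions to verify against your own events:

```
index=_configtracker component=ConfigChange data.action=update data.path="*/savedsearches.conf"
| rename data.changes{}.stanza AS stanza,
         data.changes{}.properties{}.name AS property,
         data.changes{}.properties{}.old_value AS old_value,
         data.changes{}.properties{}.new_value AS new_value
| stats latest(_time) AS changed_at values(property) AS properties
        values(old_value) AS old_values values(new_value) AS new_values BY stanza
| convert ctime(changed_at)
```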
I have two different queries: one calculates the total number of critical alerts, and the second calculates the total time that critical alerts were "opened". I need to calculate the average between them (time/count). How can I achieve this?
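One common pattern is to combine the two searches with appendcols and then divide; a sketch with hypothetical index, field, and status names:

```
index=alerts severity=critical
| stats count AS critical_count
| appendcols
    [ search index=alerts severity=critical status=opened
      | stats sum(open_duration) AS total_open_time ]
| eval avg_time_per_alert = round(total_open_time / critical_count, 2)
```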
@PickleRick Do you see any difference? I am not seeing any difference in the files on either server.
@splukiee Certainly! Addressing high CPU usage in your Splunk environment due to specific dashboards can be challenging, but let's explore some strategies to mitigate the issue:

Dashboard optimization:
- Review dashboard components: inspect each dashboard panel. Are there any resource-intensive visualizations (e.g., complex charts, tables, or maps)? Simplify or optimize them.
- Reduce real-time updates: if dashboards update in real time, consider increasing the refresh interval. Frequent updates can strain CPU resources.
- Limit concurrent sessions: since users have multiple sessions open, limit the number of concurrent sessions per user. Excessive sessions can overload the system.
- Use summary indexes: consider summary indexes for frequently accessed data. This reduces the need for real-time searches.

Monitoring and troubleshooting:
- Splunk Monitoring Console (MC): use the MC to monitor resource usage. Check the "CPU Usage by process class" graph to identify which components consume the most CPU.

https://lantern.splunk.com/Splunk_Platform/Product_Tips/Administration/Troubleshooting_high_resource_usage
https://docs.splunk.com/Documentation/Splunk/9.2.0/DMC/ResourceusageCPU
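To quantify what is driving the CPU, the introspection data behind that Monitoring Console graph can also be queried directly; a minimal sketch (the span and time range are arbitrary choices):

```
index=_introspection sourcetype=splunk_resource_usage component=PerProcess
| timechart span=5m avg(data.pct_cpu) AS avg_cpu BY data.process_type
```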