We are using Splunk Enterprise 9.0.1 on-prem, with Splunk App for Lookup File Editing version 3.6.0. We need a user to modify a column in a lookup, so we gave him the access and capabilities to do so. But we don't want this user to have the power to modify all the columns in that lookup. Is there any way we can restrict which columns a user can edit? Regards. EDIT: Here is an open idea for this feature request: https://ideas.splunk.com/ideas/APPSID-I-529. Please vote if you find it useful.
Hello, I have a distributed environment: 1 Search Head (SH), 1 Indexer, 1 Deployment Server, and 1 Syslog Server. I deployed my apps to the syslog server for those devices that cannot have a forwarder installed. The Splunk Add-on for VMware documentation shows a diagram of a distributed environment. My VMware device logs are sent to the syslog server, and the syslog server then sends them to the indexer. I've installed the add-ons for VMware ESXi logs on the SH and the syslog server, and the VMware add-on for indexers on the Indexer. I don't understand what a Data Collection Node or a Data Collection Scheduler is; the documentation is confusing. What are these, and do I need them in my environment for my VMware devices? Thanks
Create_Failed: The following resource(s) failed to create: SplunkDMCtrailCWLogSubscriptionFilterCustomResource. We are able to pass all the prerequisites, but after deploying the CloudFormation template in AWS it fails to create the SplunkDMCtrailCWLogSubscriptionFilterCustomResource, and we are never able to ingest the CloudTrail logs. Any help would be greatly appreciated.
Getting the Most Out of Event Correlation and Alert Storm Detection in Splunk IT Service Intelligence

During a recent Observability Edition Tech Talk, Diving Deeper with AIOps, attendees joined Jeff Wiedemann, Principal Observability Strategist at Splunk, to hear how Splunk ITSI can reduce alert noise, provide business context, and help you be proactive instead of reactive. As many of you know, noisy alerts, a lack of business context, and being reactive are common problems in organizations. See the key moments from the live Diving Deeper with AIOps Tech Talk below!

Trends in Alerts and Event Correlation
Customers still face challenges such as noisy alerts, a lack of business context, and being reactive, which cause outages and downtime and result in costly and problematic incidents.

Splunk Observability Cloud and Splunk ITSI: Reducing Noise and Understanding the Environment
By grouping related alerts together, Splunk Observability Cloud and Splunk ITSI enabled the cloud operations team to reduce noise and ask more questions about the environment: is it healthy, what is the incoming alert volume, what are MTTA and MTTR, and are they in the middle of an alert storm?

Alert Storm Detection: Understanding Incoming Alerts and Episode Analytics
This KPI helps detect alert and episode storms by monitoring the volume of incoming alerts, the episodes created, and the aggregation policies used to create them.

Want to learn more? Check out the entire Tech Talk.

Splunk IT Service Intelligence (ITSI) can help reduce alert noise through deduplication and grouping, provide alert and episode monitoring and storm detection, and enable continuous-improvement analytics. Through Splunk ITSI's alert pipeline, external alerts can be pulled into Splunk and ITSI as notable events with correlation searches. The Splunk ITSI Content Pack for Monitoring and Alerting unlocks these capabilities.
Detecting Alert and Episode Storm Activity with Splunk ITSI
After reducing alert noise, more questions arise: is the environment healthy? Is incoming alert volume normal? Are there any high-volume alerts that need to be tuned? And are we in the middle of an alert storm? Splunk analytics can provide insights to answer these questions.

How to Power the Episode Analytics Service Tree
To power the episode analytics service tree, run an entity discovery search to find all the aggregation policies, turn those policies into entities, and then enable the Splunk ITSI event analytics, episode analytics, and alert analytics services.

Are you tracking the volume of incoming alerts in your environment? If you're not, you should be! It's an essential KPI for understanding what's going on in the environment. The Splunk ITSI Content Pack for Monitoring and Alerting provides a variety of KPIs to help you do this, including Incoming Alerts by Monitoring Tool, Incoming Alerts by Severity, and Incoming Alerts by Source, in addition to Episode Analytics, which tracks the volume of new episodes being created in the environment.

How to Set Up Proactive Alert Storm Detection in Splunk ITSI
This dashboard helps you apply adaptive thresholding to the Alert Storm Detection KPI and leverage service monitoring correlation searches to produce notable events when the KPI goes critical. The Splunk ITSI Alert and Episode Monitoring aggregation policy helps configure proactive actions to take when an alert or episode storm is detected.

Understanding the Fields to Analyze Feature on a Dashboard Panel
The Fields to Analyze input allows users to customize the dashboard to plot values over time and identify what might be causing an alert storm, while the Dynamic Alert Clustering input provides on-the-fly grouping of alerts based on typical aggregation policies.
How to Use Historical Analysis for Continuous Improvement in Your Operations Center
Are you a leader at an operations center looking to make smart decisions about alert tuning, staffing, and grouping? The Splunk ITSI Event and Incident Operations Posture dashboard can help! With this dashboard, you can get a detailed view of how your teams have been operating over the last 30 days or more, and understand how your team is functioning and how alerts have behaved over time. The dashboard shows the number of episodes created and unacknowledged, and the rate of acknowledgement. You can also compare episodes of different severities, filter episodes by acknowledgement or severity, and understand how your teams handle critical versus non-critical episodes. Alert Storm Detection and triage capabilities are built in, so you can easily detect and triage the cause of an alert storm. Historical analysis for continuous improvement, with visualizations like field value distributions and time series charts, can help you find the source of the storm. Plus, you can customize the fields to make it relevant to your organization and alerts.

Reducing Alert Noise and Providing Business Context with Splunk IT Service Intelligence (ITSI)
Watch the entire Tech Talk, Diving Deeper with AIOps, for a full discussion of how IT Service Intelligence can help tame alert storms. We hope this Tech Talk was useful, and we look forward to seeing you at .conf to learn more. It is highly recommended that attendees watch Part 1: Getting Started with AIOps: Event Correlation Basics and Alert Storm Detection in Splunk IT Service Intelligence before attending Part 2: Diving Deeper with AIOps, as Part 2 is a much deeper dive into the concepts and capabilities covered in Part 1. Watch Now

Don't forget to check out Tech Talks right here on the Community site for additional resources and answers to your questions.
Highly Recommended .conf23 Sessions
Starting Fast With IT Service Intelligence - How Pre-built Content and Key Features Will Maximize Your Operational Visibility Fast!
From Chaos to Correlation: How IT Service Intelligence Helped Splunk's Own Cloud Operations Team Tame Alert Storms
I have a lookup table that contains usernames and userids. I want to use this to match a username to a userid and vice versa. I want to take the output from said lookup and search across multiple indexes for the username OR the userid. It would look roughly something like this:

| inputlookup username2userid.csv | search username=a@a.com | table username userid | search (index=a $username$) OR (index=b $userid$)

If I manually replace either variable with the actual values, the search works. Is it not possible to pass a variable from a lookup into a search? Thank you in advance!
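A note on the question above: SPL does not substitute `$field$` tokens from earlier pipeline results; that syntax only works for dashboard tokens. A common workaround is to run the lookup inside a subsearch, which expands into literal field=value terms. A minimal sketch, assuming the lookup and field names from the question:

```spl
(index=a [| inputlookup username2userid.csv | search username="a@a.com" | return username])
OR (index=b [| inputlookup username2userid.csv | search username="a@a.com" | return userid])
```

Here `| return username` makes the subsearch emit `username="a@a.com"`, so the outer search effectively becomes `(index=a username="a@a.com") OR (index=b userid="<matching id>")`.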
I have installed and set up the VirusTotal TA with basic configuration (i.e., API key and Max Batch Size) just to test things. However, when I try to run the following command:

index=advanced_hunting category="AdvancedHunting-UrlClickEvents" properties.UrlChain=* | virustotal domain=properties.UrlChain

I get the following error:

Error in 'virustotal' command: External search command exited unexpectedly with non-zero error code 1. Streamed search execute failed because: Error in 'virustotal' command: External search command exited unexpectedly with non-zero error code 1.

I'm scrolling through Google, but nothing is helping at the moment. Was wondering if anyone else has experienced the same issue?
I set `ModularInputs` to WARN with `splunk set log-level ModularInputs -level WARN`, and now I want to know the default log level of `AdminManagerDispatch`. It currently shows WARN; is that the default for this component too? Can someone please guide me?

splunk@sh-i-***************c7e3:~$ splunk set log-level ModularInputs -level WARN
WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Log level changed.
splunk@sh-i-**************e3:~$ splunk show log-level ModularInputs
WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Component: ModularInputs     Level: WARN     Buffering: 0
splunk@sh-i-*************7e3:~$ splunk show log-level AdminManagerDispatch
WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Component: AdminManagerDispatch     Level: WARN     Buffering: 0
splunk@sh-i-********************2f2c7e3:~$ hostname -f;date
Wed Jun 21 10:29:29 UTC 2023
Hi, I have a field IP ADDRESS that is recorded when a user logs in, and I want to send an email alert when a new IP address appears. Can you help me?
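One way to approach the question above is to track the first time each user/IP pair is seen and alert when that first sighting is recent. A hedged sketch, assuming a hypothetical `index=auth` with fields `user` and `ip` (substitute your own index and field names):

```spl
index=auth user=* ip=*
| stats earliest(_time) AS first_seen BY user ip
| where first_seen >= relative_time(now(), "-24h")
| table user ip first_seen
```

Run it over a long enough window (e.g., last 90 days) so "first seen" is meaningful, schedule it daily, and attach an email alert action that fires when results are returned.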
Please help me fix my search code. Thank you very much!
Hi, just to make it clear: these pages are for text discussion, a bit tighter target than what you have in the public Slack. If you want to know about the live meetings we have, then you should sign up here: https://usergroups.splunk.com/stockholm-splunk-user-group/ If you want to have a wider chit-chat, then Slack is your choice: https://splunk-usergroups.slack.com
Any UG meetups planned in the summer/autumn?
Hello, is there a way to upgrade the Splunk Universal Forwarder on all onboarded endpoints using the deployment server? I was looking for answers but didn't find anything helpful. Thank you.
Hello, I was reading about making requests to the Splunk API. According to the first link below, when making a request, the username (admin) and password (pass) need to be included, as seen here:

curl -k -u admin:pass https://localhost:8089/servicesNS/admin/-/alerts/alert_actions

https://docs.splunk.com/Documentation/SplunkCloud/9.0.2303/RESTREF/RESTsearch#search.2Fjobs

However, another link mentions that authentication tokens are needed to make API requests:

curl -H "Authorization: <type> <token>" -X <method> https://<instance host name or IP address>:<management port>/<REST endpoint> -d <data...> [-d <data...>...]

https://docs.splunk.com/Documentation/SplunkCloud/9.0.2209/Security/UseAuthTokens

Can the first style of request only be used by admins, and is the second style only for users who have been granted authentication tokens by an admin?
Hi people, I need help designing a regex that will cover the below strings, please. ------------------------------------------------------------------------------ wmic useraccount get /ALL /format:csv wmic process get caption,executablepath,commandline /format:csv wmic qfe get description,installedOn /format:csv wmic /node:"#{node}" service where (caption like "%#{service_search_string}%") wmic process call create #{process_to_execute} wmic process where name='#{process_to_execute}' delete >nul 2>&1 wmic /user:#{user_name} /password:#{password} /node:"#{node}" process call create #{process_to_execute} wmic /user:#{user_name} /password:#{password} /node:"#{node}" process where name='#{process_to_execute}' delete >nul 2>&1 wmic /node:#{node} process call create "rundll32.exe #{dll_to_execute} #{function_to_execute}" wmic /node:"#{node}" product where "name like '#{product}%%'" call uninstall ---------------------------------------------------------------- Thank you!
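Since all of the strings above start with `wmic`, optionally followed by switches such as `/node:"..."` or `/user:...` before the alias, one candidate pattern is sketched below (validated only against the listed strings, not against other wmic variants):

```spl
... your base search ...
| regex _raw="(?i)wmic(\s+/\S+)*\s+(useraccount|process|qfe|service|product)\b"
```

The `(\s+/\S+)*` part allows any number of `/switch` tokens (including quoted ones like `/node:"#{node}"`) between `wmic` and the alias, and `(?i)` makes the match case-insensitive.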
One of the dashboards has the query below. Where can I find the source file mentioned in it within Splunk?

source="Application_Vulnerabilities_*.csv" index="vuln_mgmt" sourcetype="csv"
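To locate where that file is being ingested from, a quick check (a sketch; adjust the index and sourcetype if yours differ) is to list the hosts and exact source paths behind those events:

```spl
| tstats count WHERE index=vuln_mgmt sourcetype=csv source="Application_Vulnerabilities_*.csv" BY host source
```

The `host` values show which machine forwarded (or uploaded) the CSV, and `source` shows the full file path as Splunk recorded it.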
Hi All, I have run the below query over two different time ranges (Last 24 hours and All Time):

index=* | stats count by index,host

which gives a table like this:

index  host             count
abc    hdcgcgmefla02uv  127976

Now I want to compare the host column from both time ranges and populate a new column in a tabular view. If a host is present in both time ranges, the value is "Available"; if a host is missing from one of the time ranges, the value is "Not Available". Like below:

index  host             Comparison
abc    hdcgcgmefla02uv  Available
abc    hdcgcgmefla22uv  Not Available
xyz    hdcgcgmefla12uv  Available

Please help me create a query that produces the table with the desired comparisons. Your kind inputs are highly appreciated. Thank you!
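One way to get the comparison above without joining two separate result sets is a single search over All Time that keeps each host's most recent event, then flags hosts not seen in the last 24 hours. A sketch (run with the time picker set to All Time; `tstats` keeps it fast):

```spl
| tstats latest(_time) AS last_seen WHERE index=* BY index host
| eval Comparison=if(last_seen >= relative_time(now(), "-24h"), "Available", "Not Available")
| table index host Comparison
```

A host that appears only in the All Time range but not in the last 24 hours gets "Not Available", matching the desired table.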
My Splunk Cloud is configured with my server, and now for every transaction we get multiple events. It is hard to count transactions and harder to understand them. Is there a way to get only one event per transaction, with all the information inside it?
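Assuming the events share a common correlation field (called `transaction_id` below purely as a placeholder; use whatever field ties your events together), they can be collapsed into one row per transaction. A sketch:

```spl
index=your_index sourcetype=your_sourcetype
| transaction transaction_id
```

The `transaction` command merges all events with the same `transaction_id` into a single multi-line event. If you only need the field values rather than the raw events, `| stats values(*) AS * BY transaction_id` is usually cheaper.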
Hi Splunkers, I have to build a rule, based on Windows logs (XML ones), that must check this: notify me if there are at least 3 consecutive occurrences of EventID 4776 from a list of hosts. The desired output must show:

Host
Number of consecutive events
User/account associated with the events

So for example, if we have:

Host A has 4 consecutive events of EventID 4776 for user "Admin"
Host B has 19 consecutive events of EventID 4776 for user "Test"
Host C has 2 consecutive events of EventID 4776 for user "Joker"
Host D has 3 events of EventID 4776, but only 2 consecutive; then a different event, and only after that another occurrence of 4776, for user "Hello"

Host C doesn't match the consecutive-count clause and must be excluded; same for Host D, because it has 3 events but not consecutive ones. The expected output is:

Host  User   N. of consecutive events
A     Admin  4
B     Test   19

What gets me stuck here is how to check that the events are consecutive. Any suggestion?
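One way to count consecutive runs is to sort events per host by time and let `streamstats reset_on_change=true` restart the counter whenever the EventID changes. A sketch, assuming XML Windows Security logs with fields `EventCode` and `TargetUserName` (adjust to your field names and host list):

```spl
index=wineventlog host IN (A, B, C, D)
| sort 0 host _time
| streamstats reset_on_change=true count AS run_len BY host EventCode
| where EventCode=4776
| stats max(run_len) AS consecutive_events values(TargetUserName) AS User BY host
| where consecutive_events >= 3
```

Because all events (not only 4776) flow through `streamstats`, a different EventID in between resets the run, which is exactly what excludes Host D in the example.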
Hi, for the given sample data set, how can I extract all the numbers (always 3 digits) from desc?

| makeresults
| eval desc="Frankfurt (123) & Saarbrucken (456), Germany - Primary down / Secondary down"
| append [| makeresults | eval desc="Frankfurt (123), Saarbrucken (456), Frankfurt Zeil (789) & Kaiserslautern (012), Germany - Primary up / Secondary up"]
| append [| makeresults | eval desc="Test - Creteil - (123) - France - Primary Up // Secondary Up"]
| append [| makeresults | eval desc="All devices at 456 London, England are alerting as down and unreachable"]
| append [| makeresults | eval desc="Test - 123-Clonmel ( Ireland) - Primary DOWN / Secondary UP/ Switch UP"]

Output required: every 3-digit number found in each desc. Can you please suggest a regex I can use for this? Thank you.
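Since every number in the samples is exactly three digits, a `rex` with `max_match=0` pulls all of them into a multivalue field. A sketch against the first sample row:

```spl
| makeresults
| eval desc="Frankfurt (123) & Saarbrucken (456), Germany - Primary down / Secondary down"
| rex field=desc max_match=0 "(?<num>\b\d{3}\b)"
| table desc num
```

`max_match=0` means "extract every match", and the `\b` boundaries stop the pattern from matching three digits inside a longer number.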
Where can I see the performance of ES content searches, in terms of the average time taken to run a particular correlation rule or saved search?
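For scheduled correlation searches, one place to look is the internal scheduler logs, which record each scheduled run. A sketch:

```spl
index=_internal sourcetype=scheduler status=success
| stats count AS runs avg(run_time) AS avg_runtime max(run_time) AS max_runtime BY savedsearch_name app
| sort - avg_runtime
```

`run_time` is the per-run execution time in seconds, so this ranks your correlation rules and saved searches by average runtime. The same data also feeds the Monitoring Console's scheduler activity dashboards.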