All Topics



Let's assume you have a multi-site indexer cluster with 2 sites, 3 indexers each, and the following RF/SF:

site_replication_factor = origin:2, total:4
site_search_factor = origin:2, total:4

So for each indexer receiving data, there is one site-local replication copy and two remote-site copies. When one indexer becomes unavailable, replication switches to the 2 remaining indexers on that site and everything keeps working as before. But what happens in the case of 2 unavailable indexers on one site, or a complete site failure (the site without the master node)?

As far as I understand, events should still be received by the remaining indexers (due to indexer discovery and load balancing on the forwarders). The replication factor is no longer met (2 copies on the failed site are not possible at that point), but local indexing and site-local replication continue without interruption or manual steps (as long as the master is still available and is not restarted), and searching should remain fully possible, since there is at least one searchable copy of each bucket on the remaining indexers. The cluster master knows there are buckets with pending replication tasks (RF/SF is not met because not enough target indexers are available), but everything keeps working, and when the indexers/site come back this is fixed automatically; after some time, the replication factor and search factor are met again. The Splunk documentation calls this "disaster recovery", and it is what anyone would expect from "high availability", imho.

Is this explanation and my understanding correct in theory, and is it what one can expect? Or are there doubts, or am I wrong in some details? Are there any real-world practical experiences showing that this fully works, or that there are problems/errors or exceptions in any cases?
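For reference, a minimal cluster-master server.conf sketch for the topology described above might look like the following; the site names and the multisite/available_sites values are assumptions, and the two factor lines simply restate the settings quoted above:

# server.conf on the cluster master - sketch, site names assumed
[general]
site = site1

[clustering]
mode = master
multisite = true
available_sites = site1,site2
site_replication_factor = origin:2,total:4
site_search_factor = origin:2,total:4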
My CSV source data file contains the timestamp below. How can we convert this timestamp into a TIME_FORMAT expression in props.conf? 18-AUG-21 11.40.00.027 PM
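A minimal props.conf sketch for that layout; the sourcetype name my_csv is a placeholder, and MAX_TIMESTAMP_LOOKAHEAD assumes the timestamp sits at or near the start of each event:

[my_csv]
TIME_FORMAT = %d-%b-%y %I.%M.%S.%3N %p
MAX_TIMESTAMP_LOOKAHEAD = 30

Here %d-%b-%y covers 18-AUG-21, %I.%M.%S.%3N covers 11.40.00.027 (12-hour clock with milliseconds), and %p covers PM.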
Hi, I'm an intern, so new to Splunk etc. I am trying to do data analysis for garbage collection (GC) and so far have created the table above from raw GC logs. Pause_type: 0 stands for a full GC collection (Pause Full), 1 stands for a minor GC collection (Pause Young), and the rest of the columns are literally differences between old and new values. GCRuntime is the time the GC ran for, in ms. I was wondering if anyone has done data analysis on GC. I am hoping to use the ML Toolkit, but I don't know where to go to identify correlations now. Any help would be appreciated!
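As a starting point with the ML Toolkit, one possible sketch fits a simple regression to see how pause type and the diff columns relate to pause time; Pause_type and GCRuntime come from the description above, while old_gen_diff and young_gen_diff are placeholder names standing in for the actual difference columns in the table:

... search that produces the GC table ...
| fit LinearRegression GCRuntime from Pause_type old_gen_diff young_gen_diff into gc_runtime_model

and then, as a separate search, inspect the fitted coefficients:

| summary gc_runtime_model

The coefficients give a first rough hint at which columns move together with long pauses before diving into more elaborate MLTK workflows.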
Hi Team, my customer is using IBM Maximo as their ticketing tool, and it is integrated with Netcool OMNIbus for incidents. Now that Splunk is in the infrastructure, we are looking to send Splunk alerts to Maximo to create INCIDENTs. Please guide us, or if someone has achieved this, please show us how to start. Thanks, Pawak023
Hi, I have a log server with a universal forwarder and some Linux servers, and I set up a cron job that makes those Linux servers upload their /var/log/secure and /var/log/messages to the log server every 10 minutes; the universal forwarder monitors the uploaded files. But every time the Linux servers upload their log files to the log server, the universal forwarder indexes not just the changed part but the entire file, which wastes a lot of license. Here is my inputs.conf:

[monitor://D:\Log\Linux\*\messages]
sourcetype = message
index = os
host_segment = 3

How can I fix it?
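One way to avoid re-indexing, sketched under the assumption that you can install a universal forwarder directly on each Linux host, is to monitor the files in place instead of copying them every 10 minutes; the forwarder then tracks how far it has read on its own:

# inputs.conf on each Linux host's universal forwarder - sketch
[monitor:///var/log/secure]
sourcetype = linux_secure
index = os

[monitor:///var/log/messages]
sourcetype = syslog
index = os

When a file is replaced wholesale by an upload, the monitor input may see it as truncated or new and re-read it from the beginning, so monitoring the originals sidesteps the problem.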
Hey. I'm trying to create a search that lists users that have, for example, more than 90 days between their last 2 logons. I have tried getting the last logon time with this:

index="index" sourcetype="wineventlog:security" EventCode=4624
| stats max(_time) by user

But that doesn't really work for me. Not sure how to proceed from here.
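One possible sketch, assuming the 4624 events carry a user field, keeps only the two most recent logons per user and measures the gap between them:

index="index" sourcetype="wineventlog:security" EventCode=4624
| dedup 2 user
| stats range(_time) as gap_seconds max(_time) as last_logon by user
| eval gap_days=round(gap_seconds/86400,1), last_logon=strftime(last_logon,"%Y-%m-%d %H:%M:%S")
| where gap_days > 90

Since events come back in reverse time order, dedup 2 keeps the two latest logons per user and range(_time) is the gap between them. Users with only one logon in the search window will show a gap of 0 and be filtered out, so the time range needs to be wide enough to catch at least two logons per user.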
The memory utilization health report is not showing the correct memory usage percentage when compared with the Unix box's memory utilization for the same server node. This is causing false alarms in the system. How can I correct the AppDynamics memory utilization health report?
We have counts of different fields and need to get all of that data on the x-axis; for that we are using appendcols more than three times. Our dataset is huge, and running the search command more than once is creating a problem. We would like to group the count data. Can you please tell me how? Below is the query we are using:

index="main" sourcetype="SF1" | stats count(CASS_RESULT) as CASS by CASS_RESULT
| appendcols [search index="main" sourcetype="SF4" | stats count(DIALOGUE_RESULT) as DIALOGUE by DIALOGUE_RESULT]
| appendcols [search index="main" sourcetype="SF2" | stats count(TPOS_RESULT) as TPOS by TPOS_RESULT]
| appendcols [search index="main" sourcetype="SF3" | stats count(PCO_RESULT) as PCO by PCO_RESULT]
| appendcols [search index="main" sourcetype="SF5" | stats count(VAS_RESULT) as VAS by VAS_RESULT]
| table CASS_RESULT CASS DIALOGUE TPOS PCO VAS
| transpose header_field=CASS_RESULT
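One possible way to get all counts in a single pass, sketched on the assumption that each sourcetype carries exactly one of the *_RESULT fields per event, is to merge them into one field and chart by sourcetype:

index="main" (sourcetype="SF1" OR sourcetype="SF2" OR sourcetype="SF3" OR sourcetype="SF4" OR sourcetype="SF5")
| eval result=coalesce(CASS_RESULT, DIALOGUE_RESULT, TPOS_RESULT, PCO_RESULT, VAS_RESULT)
| chart count over result by sourcetype

This scans the data once and produces one column per sourcetype, which charts directly without appendcols or transpose.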
index=Myindex sourcetype=mine mysearch
| eval Result=if(Apple="1","Bad","Good")
| stats count by Result

The search above gives me the correct count of events where Apple="1", e.g.

Result    Count
Bad       5
Good      12392

How do I express the stats as a single value in percentage, i.e. Bad/Good? How do I alert if the percentage is > 0.02%?
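A minimal sketch that computes the percentage in one pass, here as Bad over the total (swap in bad/good if you literally need the Bad-to-Good ratio):

index=Myindex sourcetype=mine mysearch
| eval Result=if(Apple="1","Bad","Good")
| stats count(eval(Result="Bad")) as bad count as total
| eval bad_pct=round(100*bad/total,4)

For alerting, one option is to append | where bad_pct > 0.02 and set the alert to trigger when the number of results is greater than zero.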
Please tell me how to pass the time of a search result via "Link to search" on a dashboard. For events that include a time in the search results, such as in the attached sample dashboard, I would like to pass the event's time as a parameter when searching with the "link to search" function. Currently, earliest/latest are specified in the SPL as shown in the sample. However, in this case the time range picker when executing the search is not set to the earliest/latest given by the parameters; it shows the dashboard's time range instead. In the actual search, the earliest/latest specified in the SPL take priority, so the result is as expected, but because the range shown in the time range picker differs from the actual search results, users get confused.

Question 1: How can I pass the earliest/latest specified in the SPL to the time range picker when executing the search?

Question 2: In this sample, $click.value$ is used because the time field is the leftmost column. Please tell me how to specify the time field when it is in an arbitrary column. For reference, if you specify "$row._time$", unlike $click.value$, it is not UNIX time; the displayed time is set as the parameter.
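A possible Simple XML sketch for a table drilldown, under these assumptions: a one-hour window around the clicked event, a target index named my_index, and a time format string that must match what the table actually displays. The earliest/latest URL parameters on the search page also set the time range picker, and the <eval> tokens convert the displayed time back to epoch, which also addresses Question 2 since $row._time$ works for any column position:

<drilldown>
  <eval token="drill_earliest">strptime($row._time$, "%Y-%m-%d %H:%M:%S")</eval>
  <eval token="drill_latest">$drill_earliest$ + 3600</eval>
  <link target="_blank">search?q=index%3Dmy_index&amp;earliest=$drill_earliest$&amp;latest=$drill_latest$</link>
</drilldown>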
A heavy forwarder is installed to collect Azure AD logs, using the Splunk Add-on for Microsoft Office 365. Please explain the following point from that add-on's troubleshooting documentation: my understanding is that duplicate log collection may occur. Under what conditions do duplicate logs occur?
I have a dynamic table extracted from a search result. Example Table1 that I can get:

Error                 Code   Computer
Internet Connection   100    CompA
Blue Screen                  CompB
App Crash             5      CompC

My desired result: for each row in Table1, I would like to join the columns and run a multisearch like the below.

index=computer OR index=application (Internet Connection AND 100 AND CompA) | tail 1
index=computer OR index=application (Blue Screen AND CompB) | tail 1
index=computer OR index=application (App Crash AND 5 AND CompC) | tail 1

I didn't use format in this case because it would end up like

index=computer OR index=application ((Internet Connection AND 100 AND CompA) OR (Blue Screen AND CompB) OR (App Crash AND 5 AND CompC))

and return many results instead of only 3. Is the desired result possible to achieve?
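One possible sketch uses map to run one search per row of Table1; the field names Error, Code, and Computer are taken from the table headers, and maxsearches is an assumed cap:

... search that produces Table1 ...
| map maxsearches=10 search="search (index=computer OR index=application) \"$Error$\" \"$Code$\" \"$Computer$\" | tail 1"

Each row's values are substituted into the quoted terms, so multi-word values like Internet Connection stay together. Rows with an empty Code (like the Blue Screen row) would insert an empty quoted term, so they may need separate handling or a filled placeholder before the map.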
Hi all, I'm running into an inconsistency with the simple round function depending on the decimal places. Here's what I'm getting:

index=_internal type=usage
| eval totalGB = b/(1024*1024*1024)
| eval roundGB = round(totalGB,5)

One day's value = 4.47213. When it's eval roundGB = round(totalGB,3) I get 2.791, and with eval roundGB = round(totalGB,2) I get 0.32 for the same day. Any idea what is happening here?
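round itself is deterministic for a given input, which a quick sketch confirms; if the three searches return values of different magnitude, they are almost certainly rounding different underlying values (for example, a different set of type=usage events per run or per-event versus summed bytes), rather than mis-rounding the same one:

| makeresults
| eval totalGB=4.47213
| eval r5=round(totalGB,5), r3=round(totalGB,3), r2=round(totalGB,2)

This returns r5=4.47213, r3=4.472, and r2=4.47, so only the precision changes, never the magnitude.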
Hello, I'm getting the following error when trying to trigger an alert to ServiceNow:

08-18-2021 18:42:04.461 -0400 INFO sendmodalert - Invoking modular alert action=sn_sec_multi_incident_alert for search="test" sid="scheduler__nobody_RGlmZW5kYS1UaHJlYXRIdW50aW5n__test_at_1629326520_324_E6AF4A46-E741-44E5-8200-C53A9BA036B3" in app="Alert App" owner="nobody" type="saved"
08-18-2021 18:42:04.947 -0400 ERROR sendmodalert - action=sn_sec_multi_incident_alert STDERR - ERROR: Unexpected error: Traceback (most recent call last):
08-18-2021 18:42:04.947 -0400 ERROR sendmodalert - action=sn_sec_multi_incident_alert STDERR - File "/opt/splunk/etc/apps/TA-ServiceNow-SecOps/bin/sn_sec_multi_incident_alert.py", line 47, in <module>
08-18-2021 18:42:04.947 -0400 ERROR sendmodalert - action=sn_sec_multi_incident_alert STDERR - for result in csv.DictReader(csvResult):
08-18-2021 18:42:04.947 -0400 ERROR sendmodalert - action=sn_sec_multi_incident_alert STDERR - File "/opt/splunk/lib/python3.7/csv.py", line 111, in __next__
08-18-2021 18:42:04.947 -0400 ERROR sendmodalert - action=sn_sec_multi_incident_alert STDERR - self.fieldnames
08-18-2021 18:42:04.947 -0400 ERROR sendmodalert - action=sn_sec_multi_incident_alert STDERR - File "/opt/splunk/lib/python3.7/csv.py", line 98, in fieldnames
08-18-2021 18:42:04.947 -0400 ERROR sendmodalert - action=sn_sec_multi_incident_alert STDERR - self._fieldnames = next(self.reader)
08-18-2021 18:42:04.947 -0400 ERROR sendmodalert - action=sn_sec_multi_incident_alert STDERR - _csv.Error: iterator should return strings, not bytes (did you open the file in text mode?)

Not sure what the problem is here. We upgraded from Splunk 7.3.3 to Splunk 8.1.5 and upgraded the ServiceNow Security Operations Add-on from 1.23.3 to 1.23.4.
Hi there! I have a Splunk instance running on CentOS. In SOAR, I have set up the IMAP app, connecting it to a Gmail account. The plan is to build a playbook that reads the inbox of my email account, identifies when a new email arrives, and retrieves and processes it. I have integrated the app, but I don't know how to extract or download the emails from the inbox in order to process them. Greetings!
I am using the Splunk Add-On for Linux on my deployment server (which is a Windows server) and am trying to use it to collect data from my Linux machines whose universal forwarders are connected to that deployment server. I was curious if anyone knows whether the add-on isn't compatible because the server hosting it is Windows (even though it is being deployed to Linux machines). If that is the case, is there any easy workaround other than creating a second, Linux-based deployment server for deploying to my Linux machines? A sketch of the relevant server class is shown below.
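For reference, a deployment server only serves app bundles to clients, so its own OS generally does not prevent pushing a *nix add-on to Linux forwarders; a minimal serverclass.conf sketch, assuming the app folder under deployment-apps is named Splunk_TA_nix and the server class name is just an example:

# serverclass.conf on the (Windows) deployment server - sketch
[serverClass:linux_hosts]
machineTypesFilter = linux-x86_64

[serverClass:linux_hosts:app:Splunk_TA_nix]
restartSplunkd = true

The add-on's scripted inputs run on the Linux clients themselves, so the deployment server never needs to execute them.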
Hi Team, I am trying to install Splunk Enterprise on my Windows 10 machine, but I am getting the error "Splunk Enterprise setup wizard ended prematurely." I have tried multiple times and still get the same error. Can anyone please help me with this? Thank you.
Hello, we are trying to set up the Dell EMC Isilon Add-on on our Splunk heavy forwarder and are seeing the error "Error occurred while authenticating to server".

Things we have taken care of at our end:
- A service account has been created and granted all admin privileges so it can reach the Isilon node.
- An index named isilon has been created.
- We opened ports 443 and 8080 between the HF and the Isilon node, and a UDP port between the Isilon nodes and the Splunk HF.

When I enter the IP address, account, and index details on the setup page, I see the above error. Does anyone have an idea about this? Please let me know.
Scenario: example query:

index=XXXX name=somefile | stats count(msg) as MESSAGE

The above query will always return some count. I want to alert if MESSAGE=0 for two consecutive 5-minute intervals over the last 15-minute window (earliest=-15m), i.e. when no values are returned. If in two out of the three 5-minute intervals MESSAGE=0, I want to take some action.
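A possible sketch over a 15-minute window, bucketing into 5-minute intervals and flagging two consecutive empty ones (drop the streamstats window if "any two of the three" is what you actually need):

index=XXXX name=somefile earliest=-15m
| timechart span=5m count(msg) as MESSAGE
| eval is_zero=if(MESSAGE=0,1,0)
| streamstats window=2 sum(is_zero) as zero_in_last_two
| where zero_in_last_two=2

Since timechart fills empty buckets with a count of 0, an interval with no events still appears; the alert can then be set to trigger when the number of results is greater than zero.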
Today I have a custom sourcetype = custom:access_combined that is routed in its entirety at the heavy forwarder to two different indexer clusters. Indexer1 is for the dev team, indexer2 is for ops. The problem I'm running into is that I'd like to:
- route a full copy to indexer1
- for indexer2, run the data through transforms and drop a bunch of noise (like 75%) that ops doesn't need, e.g. to the nullQueue
Any ideas on how to approach this?
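One possible approach, sketched with assumed output group names (dev_indexers, ops_indexers) and a placeholder noise pattern: send everything to both groups by default, then use a _TCP_ROUTING transform on the heavy forwarder to re-route the noisy events to the dev group only, so ops never receives them. This drops the noise for ops by routing rather than by nullQueue, since nullQueue discards an event for every destination.

# outputs.conf on the heavy forwarder - sketch, group names assumed
[tcpout]
defaultGroup = dev_indexers, ops_indexers

[tcpout:dev_indexers]
server = dev-idx1:9997, dev-idx2:9997

[tcpout:ops_indexers]
server = ops-idx1:9997, ops-idx2:9997

# props.conf
[custom:access_combined]
TRANSFORMS-route_noise = route_noise_to_dev_only

# transforms.conf - REGEX below is a placeholder, replace with your actual noise pattern
[route_noise_to_dev_only]
REGEX = (?:healthcheck|favicon\.ico)
DEST_KEY = _TCP_ROUTING
FORMAT = dev_indexers

Events that match the regex have their routing overridden to dev_indexers only, while everything else keeps the default of both groups.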