All Topics

Hi Splunkers, I am experiencing issues with an index cluster and it would be great if you could help me out. Every time I change or create an index, a restart is required, and it takes up to an hour until all the indexers are ready again. This used to work without a restart and only started happening after an upgrade at some point. I found this, but that doesn't say anything about creating indexes. Do you have an idea where exactly this is coming from and whether it can be avoided in some way? Since changes are made weekly, it is really annoying.
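A sketch of how index changes are usually rolled out on a cluster, assuming indexes.conf is edited in the configuration bundle on the cluster manager; whether a rolling restart is actually triggered depends on which settings changed, and the validate step below reports that before anything is pushed:

    # On the cluster manager, after editing indexes.conf in the configuration bundle
    splunk validate cluster-bundle --check-restart
    splunk apply cluster-bundle --answer-yes
    splunk show cluster-bundle-status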
Hi Splunkers, I have a field duration in the panel, and I added the following <format></format> block to the panel. I want this duration field to show a different color when it is greater than 20, but it doesn't work.

<format type="color" field="duration">
  <colorPalette type="expression">if(tonumber(value)>20),"#789056")</colorPalette>
</format>

Thanks in advance. Kevin
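For reference, a minimal sketch of the expression form with balanced parentheses, keeping the "#789056" color from the question; the white fallback color for values at or below the threshold is an assumption:

    <format type="color" field="duration">
      <colorPalette type="expression">if(value > 20, "#789056", "#FFFFFF")</colorPalette>
    </format>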
After reviewing the Intelligence Audit Events, the following error message shows up; it seems that the feed cannot write to intel. Any idea?

2022-01-20 09:24:09,703+0000 ERROR pid=28186 tid=MainThread file=threat_intelligence_manager.py:process:432 | status="Error when writing output - threat intelligence may be incomplete." filename="/opt/splunk/etc/apps/SA-ThreatIntelligence/local/data/threat_intel/2022-01-18T11-23-26.053064.xml"
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/DA-ESS-ThreatIntelligence/bin/threat_intelligence_manager.py", line 427, in process
    self.write_output(filename, metadata, intel)
  File "/opt/splunk/etc/apps/DA-ESS-ThreatIntelligence/bin/threat_intelligence_manager.py", line 497, in write_output
    time_field='time'
  File "/opt/splunk/etc/apps/SA-Utils/lib/SolnCommon/kvstore.py", line 150, in batch_create
    response, content = splunk.rest.simpleRequest(uri, sessionKey=session_key, jsonargs=json.dumps(records))
  File "/opt/splunk/lib/python2.7/site-packages/splunk/rest/__init__.py", line 500, in simpleRequest
    raise splunk.SplunkdConnectionException('Error connecting to %s: %s' % (path, str(e)))
SplunkdConnectionException: Splunkd daemon is not responding: ("Error connecting to /servicesNS/nobody/DA-ESS-ThreatIntelligence/storage/collections/data/threat_group_intel/batch_save: ('The read operation timed out',)",)
Hi all, I am looking for the hosts mapped to a list of apps in a serverclass using the btool command, but unfortunately I am unable to get the details.

./splunk cmd btool serverclass list --app="test-fwd-p" --debug | grep -i "testhost*"

Please correct me if the above syntax is incorrect.
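One thing to note is that btool's --app option filters by the app directory that supplies the configuration file, not by the app a serverclass deploys. A sketch of an alternative on the deployment server, assuming the mappings live in serverclass.conf ("test-fwd-p" and "testhost" are the names from the question):

    # Show every serverclass stanza plus its whitelist entries from the merged config
    ./splunk cmd btool serverclass list --debug | grep -iE "\[serverClass|whitelist"

    # Or search for the app and host names directly
    ./splunk cmd btool serverclass list --debug | grep -iE "test-fwd-p|testhost"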
Hi. As in the subject: only the Admin role can edit an object's ACL, turning an object from Private to Public with the related ACL permissions. Which capability, which only Admins have, grants this feature?
Getting the following message in splunkd logs:

ERROR CMRemotePrimaryManager - Failed to evict delete for bid=index_name~XXXX.XXXX.XXXX as part of onPrimary initialization task, delete files might not be synced

Please help fix this issue. Thanks
I would like to send a monthly report with the month and year in the subject, like "Report for 01/2021". I have used "Report for - $trigger_date$", but now I want to print the month and year only. How can I do it?
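One possible approach, assuming the subject line can reference a field from the first result row through a $result.<field>$ token: compute the month/year string in the search itself (report_month is a made-up field name for the example) and reference it in the subject.

    ... your monthly report search ...
    | eval report_month = strftime(relative_time(now(), "-1mon@mon"), "%m/%Y")

The email subject could then be set to something like: Report for $result.report_month$ (the relative_time offset assumes the report covers the previous month).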
In my environment, I've set up SSL communication and authentication between the Deployment Server and its deployment clients, and it is working fine. The trouble comes at nearly one year, when renewal of the SSL certificates is needed, meaning the server.pem and cacert.pem on the UF have to be updated with the renewed certificates. For the first year, we used the DS to push the SSL certs over to the UF. The question is: is there any way to push the second year's SSL certs (server.pem and cacert.pem) to the UF using the Deployment Server while the first year's certs are still valid? Or is there a best practice for renewing the certs on a UF (deployment client) on a yearly basis? Thanks.
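A sketch of one way this is sometimes handled, assuming the certs are referenced from server.conf's [sslConfig] stanza and are delivered in a deployment app (the app name, file names, and serverclass below are all hypothetical): ship the renewed files under new names alongside the old ones, point the config at the new files, and let the serverclass restart splunkd, so the old certs remain usable until the client picks up the new app.

    # hypothetical deployment app pushed from the DS
    ssl_certs_renewed/
        certs/server_renewed.pem
        certs/cacert_renewed.pem
        default/server.conf

    # ssl_certs_renewed/default/server.conf
    [sslConfig]
    serverCert = $SPLUNK_HOME/etc/apps/ssl_certs_renewed/certs/server_renewed.pem
    sslRootCAPath = $SPLUNK_HOME/etc/apps/ssl_certs_renewed/certs/cacert_renewed.pem

    # serverclass.conf on the DS
    [serverClass:uf_ssl:app:ssl_certs_renewed]
    restartSplunkd = true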
Hi, when I run the command ./splunk list forward-server, we get the message below.

Active forwards:
10.20.30.40:9997
Configured but inactive forwards:
None

Can you please help me troubleshoot this error? Regards, Rahul Gupta
As per the SmartStore docs, tstatsHomePath must remain unset, but I noticed that in default/indexes.conf of version 8.1.5 the tstatsHomePath attribute is already set, as shown in the screenshot below. I have also noticed /opt/splunk/var/lib/splunk/index_name/datamodel_summary holding numerous large files, and I am not sure whether that is related to the tstatsHomePath attribute being set. Should I add the following in local/indexes.conf? tstatsHomePath =
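A quick way to see which configuration file the effective value comes from, assuming a standard install path; this only inspects the merged configuration and changes nothing (replace index_name with a real index):

    $SPLUNK_HOME/bin/splunk btool indexes list index_name --debug | grep -i tstatsHomePath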
I recently migrated non-SmartStore indexes to SmartStore as per the doc https://docs.splunk.com/Documentation/Splunk/8.2.4/Indexer/MigratetoSmartStore. However, on some of the indexers I noticed the local drive had high disk usage by Splunk; upon investigation I found that /opt/splunk/var/lib/splunk/index_name/datamodel_summary/ for various indexes is holding data model summary data, which caused the high disk usage. Considering summary replication is unnecessary for SmartStore, I don't believe the directory needs to exist anymore. In this case, should I empty these /datamodel_summary/ directories?
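For reference, a sketch of how to measure how much space the local summaries of each index take up before deciding what to remove, assuming the default /opt/splunk install path:

    du -sh /opt/splunk/var/lib/splunk/*/datamodel_summary | sort -h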
In Splunk logs, _time is showing 0 milliseconds with the latest version (1.11.4) of the Splunk logging library. Earlier, when using v1.6.0, we were getting the exact _time with milliseconds, which kept the table view in sequence. Due to the inaccurate timestamp (missing milliseconds), the sorting is not correct and hence the monitoring results are inaccurate. Please suggest whether anything can be done on the Splunk side or in the Log4j settings. TIA, Sahithi Parsa
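If the milliseconds are present in the raw event text but lost at index time, one thing to check is the timestamp extraction for the sourcetype. A sketch, assuming a hypothetical sourcetype name and a timestamp such as 2022-01-20 09:24:09.123 at the start of each event; the TIME_FORMAT must match what the library actually writes:

    # props.conf on the indexers / heavy forwarders (hypothetical sourcetype)
    [my_java_app]
    TIME_PREFIX = ^
    TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
    MAX_TIMESTAMP_LOOKAHEAD = 30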
I would like to get the list of those items in the properties field, like appName, levelId, etc.
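A sketch of one way to pull those out, assuming the properties field contains JSON; appName and levelId are the names from the question, everything else is illustrative:

    ... your base search ...
    | spath input=properties
    | table appName levelId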
Hey, can anyone help me convert Age to days? I am having trouble parsing and calculating.

Sample data:
Age
2 years 3 months 2 days
3 months 4 days
2 days

I want a column with the values converted to just days. The days don't need to be exact: a year can count as 365 days and a month as 30.

Age, d_age
2 years 3 months 2 days, 457
3 months 4 days, 94
2 days, 2
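A sketch using rex and eval, following the 365-days-per-year / 30-days-per-month approximation described above; the makeresults rows only reproduce the sample data so the snippet can run on its own:

    | makeresults | eval Age="2 years 3 months 2 days"
    | append [| makeresults | eval Age="3 months 4 days"]
    | append [| makeresults | eval Age="2 days"]
    | rex field=Age "(?:(?<y>\d+) years?)?\s*(?:(?<m>\d+) months?)?\s*(?:(?<d>\d+) days?)?"
    | fillnull value=0 y m d
    | eval d_age = y * 365 + m * 30 + d
    | table Age d_age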
We currently have a C1 architecture (3 clustered indexers / 1 search head, replication factor of 3) and would like to ask whether there are any best practices or guidelines for doing the archiving ourselves. I've checked the docs, and they indicate that archiving becomes more complicated when the cluster keeps multiple copies, such as with a replication factor of 3. Please see the excerpt below from https://docs.splunk.com/Documentation/Splunk/8.2.4/Indexer/Automatearchiving :

The problem of archiving multiple copies
Because indexer clusters contain multiple copies of each bucket, if you archive the data using the techniques described earlier in this topic, you archive multiple copies of the data. For example, if you have a cluster with a replication factor of 3, the cluster stores three copies of all its data across its set of peer nodes. If you set up each peer node to archive its own data when it rolls to frozen, you end up with three archived copies of the data. You cannot solve this problem by archiving just the data on a single node, since there's no certainty that a single node contains all the data in the cluster. The solution to this would be to archive just one copy of each bucket on the cluster and discard the rest. However, in practice, it is quite a complex matter to do that. If you want guidance in archiving single copies of clustered data, contact Splunk Professional Services. They can help design a solution customized to the needs of your environment.
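For context, a sketch of the per-peer archiving setting the excerpt refers to, assuming indexes.conf is managed from the cluster manager and /archive is a hypothetical path; with a replication factor of 3, every peer holding a copy archives it, which is exactly the duplication described above:

    # indexes.conf, pushed to all peer nodes
    [my_index]
    coldToFrozenDir = /archive/my_index/frozen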
I am using "sendresults" command and pass the search results to an email body template; however, the search results didn't show up from the body.  Unfortunately, the Splunk sendresults page doesn't h... See more...
I am using "sendresults" command and pass the search results to an email body template; however, the search results didn't show up from the body.  Unfortunately, the Splunk sendresults page doesn't have an example for passing the result to the email body.  I wonder if it is possible to pass search results to the email body.  Does anyone know?   This is the sample code I used.     | makeresults | eval score=90, email_to="john.doe@xyz.com", name="john" | append [|makeresults | eval score=76, email_to="jane.doe@abc.com",name="jane"] | fields - _time | sendresults showresults=f subject="Your Score" body="Hi $result.name$", your score is $result.score$."      
I am trying to add 2 new fields to a chart, calculated from the existing columns in the chart. Basically I want to add A3=A2/A1 and B3=B2/B1. Can anyone suggest which command to use?
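A sketch, assuming A1, A2, B1, B2 are the literal column names produced by the chart/stats command; wrap the names in single quotes inside eval if the real ones contain spaces or other special characters:

    ... | chart ...
    | eval A3 = A2 / A1, B3 = B2 / B1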
Hello, we are trying to uninstall ITSI from our search head cluster, so I removed all the ITSI-related apps and add-ons from the deployer and pushed to the cluster. The bundle got pushed successfully, but the search heads did not take a rolling restart and the ITSI-related apps are still there on the search heads. I did a manual rolling restart, but they are still there. Can anyone please let me know if you have run into this issue before? Thanks, Sathwik.
Hello, I have a question about installing and configuring IT Essentials Work 4.11.3 and removing some of the out-of-the-box functions in a Windows environment.

History: We currently have Splunk running on a Windows 2012 server that is being decommissioned, so we are looking at moving it to a new Windows Server 2019 machine. One of the apps we used in the past was the Windows infrastructure add-on, and we noticed that it has been "decommissioned", for lack of a better word, and replaced with IT Essentials Work. After installing it, we noticed there are many functions out of the box that we will never need, e.g. VMware, Nix, etc. We wish to focus on the Windows components of IT Essentials and remove the rest. Because of the many components of IT Essentials (e.g. DA_ITSI_EUEM, DA_ITSI_LB, SA_ITSI_ATSA, etc.), it is unclear what the best approach is to remove the unnecessary components. We were unable to find anything online or in Answers on how best to approach this. Any information that can be provided would be beneficial. Thank you, Dan.

Has anyone edited IT Essentials Work to use just what they need and removed the unnecessary components for efficiency?
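A sketch of one reversible way to switch off individual components rather than deleting them, assuming the component names match the app directories you see under $SPLUNK_HOME/etc/apps (the app name and credentials below are placeholders, with the name taken from the question):

    # Disable a single component app; it can be re-enabled later with "enable app"
    splunk disable app DA_ITSI_EUEM -auth <admin_user>:<password>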
I have Splunk Light installed on a Server 2012 R2 machine. I'm unable to start the services (splunkd, splunkweb). I notice that the secondary drive, where the index data folder is located, is 95% full.