All Topics



I am in the process of upgrading the add-on from version 6.1.1 to 6.3.1 on an indexer cluster running Splunk 7.3.5. When checking the bundle config with "Validate and Check Restart" I'm presented with the warnings below. Should anything be modified in the conf files to clear the warnings and keep the add-on working with Python 2? Or is the app not compatible?

semstrp01: [Not Critical] Invalid key in stanza [autofocus_export] in /opt/splunk/etc/master-apps/Splunk_TA_paloalto/default/inputs.conf, line 9: python.version (value: python3).
semstrp01: [Not Critical] Invalid key in stanza [aperture] in /opt/splunk/etc/master-apps/Splunk_TA_paloalto/default/inputs.conf, line 12: python.version (value: python3).
semstrp01: [Not Critical] Invalid key in stanza [minemeld_feed] in /opt/splunk/etc/master-apps/Splunk_TA_paloalto/default/inputs.conf, line 19: python.version (value: python3).
semstrp01: [Not Critical] Invalid key in stanza [admin_external:Splunk_TA_paloalto_settings] in /opt/splunk/etc/master-apps/Splunk_TA_paloalto/default/restmap.conf, line 10: python.version (value: python3).
semstrp01: [Not Critical] Invalid key in stanza [admin_external:Splunk_TA_paloalto_aperture] in /opt/splunk/etc/master-apps/Splunk_TA_paloalto/default/restmap.conf, line 16: python.version (value: python3).
semstrp01: [Not Critical] Invalid key in stanza [admin_external:Splunk_TA_paloalto_autofocus_export] in /opt/splunk/etc/master-apps/Splunk_TA_paloalto/default/restmap.conf, line 22: python.version (value: python3).
semstrp01: [Not Critical] Invalid key in stanza [admin_external:Splunk_TA_paloalto_account] in /opt/splunk/etc/master-apps/Splunk_TA_paloalto/default/restmap.conf, line 28: python.version (value: python3).
semstrp01: [Not Critical] Invalid key in stanza [admin_external:Splunk_TA_paloalto_minemeld_feed] in /opt/splunk/etc/master-apps/Splunk_TA_paloalto/default/restmap.conf, line 34: python.version (value: python3).
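For context, the kind of stanza the validator is flagging can be sketched like this (reconstructed from the warning text, not copied from the app). Splunk 7.3.x does not recognize the python.version key, since Python 3 support only arrived in Splunk 8.0, which is presumably why the bundle check reports it as invalid but "[Not Critical]":

```ini
# Sketch of one flagged stanza in Splunk_TA_paloalto/default/inputs.conf
# (reconstructed from the warning text; values are from the warnings above)
[autofocus_export]
python.version = python3
```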
I'm facing an issue where, after configuring a drilldown to set one token and unset another, I'm unable to convert the dashboard to HTML. I can convert the dashboard if I am only setting or only unsetting a token, but when I configure a panel to do both, the Convert to HTML button becomes unresponsive. Looking at the browser console, clicking Convert to HTML returns a 500 Internal Server Error from /converttohtml. Is this a known limitation of HTML conversion, or unexpected behavior?
Hello all,

I've been using Splunk for the past four years and am loving it. I would like to know what the Splunk Community thinks of the following configuration setups.

I am currently running a single-instance Splunk server with 16 cores, 64 GB of RAM, and 7.2k HDDs. I'm monitoring about 50-100 servers (90%+ are different flavors of Linux) which have very low indexing volume; all servers together account for about 150-200 MB/day. But I also have about 75 users who need access to the dashboards. I inherited the server about four years ago; it was initially designed as a PoC and eventually got shipped into production with no changes. The server is old and needs to be refreshed.

My question is this: if you could design a new system, what would you go with? I have the possibility of using VMs to create a more distributed environment instead of a single instance, but is it worth it? My idea was to use AMD's new 64-core EPYC chips with a similar 64 GB of RAM, a 500 GB SATA SSD, and 4 TB of 7.2k drives for historical searches.

I'm curious to see what the Splunk Trust & Community has to say, because I was talking with several people at .conf19 and most of them had not seen this kind of environment before (low amounts of data and high numbers of users). Any ideas or suggestions would be appreciated.

Thanks!
Erick
Hello,

I'm trying to build a search that correlates (as a chart overlay) overall log volume with specific Windows event codes (4608: Windows is starting up; 6005: the Event Log service was started; 6006: the Event Log service was stopped).

One part would be:

host=machine | timechart count by host

and the other part would be:

host=machine EventCode=4608 OR EventCode=5005 OR EventCode=6006 | timechart count by EventCode

I'm a little bit lost with appendcols / append / join. How can I do this? Thank you for your help.
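One possible shape for combining the two parts described above into a single result set, assuming both searches run over the same time range and span (a sketch, not a tested search):

```
host=machine
| timechart count AS total_events
| appendcols
    [ search host=machine (EventCode=4608 OR EventCode=6005 OR EventCode=6006)
      | timechart count BY EventCode ]
```

With both column sets in one table, the event-code series can then be drawn as a chart overlay on top of the total count.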
For our accelerated data models, acceleration.max_concurrent is set to 3, and we reach situations where a lot of CPU is dedicated to the data model acceleration build-up. Should we lower acceleration.max_concurrent from 3 to protect the system?
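For reference, this setting lives per data model in datamodels.conf; a sketch of lowering it for one model (the stanza name and value here are placeholders, not a recommendation):

```ini
# datamodels.conf in the app that owns the data model (stanza name is a placeholder)
[My_Datamodel]
acceleration = true
# Limit how many acceleration searches may run concurrently for this model
acceleration.max_concurrent = 2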
Hello, I need to place static images in one of my dashboards in Splunk Cloud. Where should I place the image file if my Splunk Cloud is a managed service? And how can I add the image to the dashboard? Thanks in advance.
I have two alerts which send emails whenever a server on our load balancer changes status from UP to DOWN or vice versa. It's working great, but due to a really cheesy program we are forced to use, the servers require manual reboots every day or the program hangs during work hours, preventing employees from working. We have the servers scheduled to reboot every day between 0200 and 0330 hours. Unfortunately, this causes a daily spam storm from the alert, which sends an email for each instance of a rebooting server changing from up to down and down to up again.

I have found a lot of other posts about excluding time ranges, but none of the ones I tried have worked for me. Is there a way to edit my alert search to EXCLUDE any events with timestamps between 0200 and 0400 hours EVERY DAY? My search is below:

sourcetype=syslog_nsxedge host="NSX-Edge03-0" server!="NULL" | rex "(?<json>{.*})" | spath input=json systemEvents{} output=systemEvents | stats values(_time) as _time by systemEvents | spath input=systemEvents | fields - systemEvents | eval _time=strptime(timestamp,"%s") | search message=*DOWN | sort - _time | table _time,eventCode,metaData.server,metaData.listener,eventCode,message,moduleName,severity | rename metaData.listener TO Site,metaData.server TO Server
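One common shape for this kind of exclusion is to derive the hour from _time and filter on it. A sketch, assuming it is appended after the eval _time=strptime(...) step so it tests the re-evaluated event time (the field name event_hour is made up for illustration):

```
| eval event_hour = tonumber(strftime(_time, "%H"))
| where NOT (event_hour >= 2 AND event_hour < 4)
```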
I have added a new option in the navigation menu by updating the navigation XML under Settings -> User Interface -> Navigation Menus, and it shows as expected: myDASHBOARD. However, I want to add a condition so that the new navigation menu entry only shows up if the condition is met. I am unsure if I am writing it correctly:

<nav search_view="search" color="#65A637">
  <view name="search" default="true" />
  <view name="data_models" />
  <view name="reports" />
  <view name="alerts" />
  <view name="dashboards" />
  <collection label="myDASHBOARD">
    <condition match="splunk_serv_name, wekwyeiyu134">
      <view name="myDASHBOARD_1" />
      <view name="myDASHBOARD_2" />
    </condition>
  </collection>
</nav>

I believe I am using the condition statement wrong. Any help?
We are using ITSI version 4.4.2. As per the ITSI documentation, we should be seeing a service_name field in the events; however, it is missing for all our services. We were using ITSI 2.1 before and moved to the newer version a few months ago, and all the existing services were backed up and restored into the newer version.

https://docs.splunk.com/Documentation/ITSI/4.4.2/Configure/IndexRef

Existing event log sample of our ITSI KPI data:

08/27/2020 10:42:55 +0100, search_name="Indicator - f6e4106b7a49f3b882d7fff4 - ITSI Search", search_now=1598521380.000, info_min_time=1598521315.000, info_max_time=1598521375.000, info_search_time=1598521402.096, qf="", kpi="Test Kpi", kpiid=f6e4106b7a49f3b882d7fff4, urgency=11, serviceid="6fb709cc-b8e9-4fce-8ffe-16f24a775500", itsi_service_id="6fb709cc-b8e9-4fce-8ffe-16f24a775500", is_service_aggregate=1, is_entity_in_maintenance=0, is_entity_defined=0, entity_key=service_aggregate, is_service_in_maintenance=0, is_filled_gap_event=0, alert_color="#99D18B", alert_level=2, alert_value=0, itsi_kpi_id=f6e4106b7a49f3b882d7fff4, is_service_max_severity_event=1, alert_severity=normal, alert_period=1, entity_title=service_aggregate

Below is the expected event log as per the newer ITSI version:

05/14/2020 13:40:00 +0100, search_name=disabled_kpis_healthscore_generator, search_now=1589460060.000, info_min_time=1589460000.000, info_max_time=1589460060.000, info_search_time=1589460078.816, kpi="Test kpi", color="#CCCCCC", kpiid=76e0d65b920711618c59571e, enabled=0, urgency=5, kpi_name="Test kpi", gs_kpi_id=76e0d65b920711618c59571e, serviceid="8e827332-35f7-435d-bae3-134e81e943f9", gs_service_id="8e827332-35f7-435d-bae3-134e81e943f9", indexed_is_service_max_severity_event=0, indexed_is_service_aggregate=1, itsi_service_id="8e827332-35f7-435d-bae3-134e81e943f9", indexed_itsi_service_id="8e827332-35f7-435d-bae3-134e81e943f9", is_service_aggregate=1, is_entity_defined=0, entity_key=service_aggregate, alert_color="#CCCCCC", alert_level="-3", alert_value="N/A", itsi_kpi_id=76e0d65b920711618c59571e, kpi_urgency=5, search_name="Indicator-Disabled_kpis- ITSI search", is_service_max_severity_event=0, alert_severity=disabled, alert_period=5, entity_title=service_aggregate, indexed_itsi_kpi_id=76e0d65b920711618c59571e, service_name="Test service"

I want to know what could be the possible reasons behind this, and what is the easiest and preferred way to fix it. For example, can this be fixed when we upgrade to a higher version of ITSI? Thanks in advance!
Hi, we have a correlation search with its action set to notable. Initially we set the notable's severity to low, to monitor and set thresholds. When we changed the severity to high for the same notable, all the old low-severity notable events changed to high automatically. (This search is on a data model, so it does not have any eval urgency in the search.) How do we avoid changing the severity of old notable events? We just want new alerts to have high urgency, not to change the old ones.
Hi,

We have multiple automated tests running with different IDs and Jenkins build numbers. One testId/build can have multiple tests, too. Each test goes through some stages which are not common across tests: test1 can have stages a, b, c and test2 can have d, e, f. I want to chart the duration of these stages for each test over testId/build number, e.g. I want to see how the duration of stage a for test1 changes across different testIds t1, t2, t3 or builds b1, b2, b3.

I can chart average duration of stages by test, but not across testIds/builds. This is how I do it for average stage duration by test:

index=dx_campaign_utf_jenkins_console source=job/test-pipeline-perf/* test testId stage | rex field=_raw "test=(?<test_name>[^\]]*)]" | rex field=source "job/test-pipeline-perf/(?<build_number>.*)/console" | stats min(_time) as startTime max(_time) as endTime by test_name, stage, testId, build_number | eval duration = (endTime-startTime) | eval duration = if(duration == 0, 1, duration) | eval duration = duration/60 | chart avg(duration) by test_name, stage

I can do it individually for each test name if I search for each and create a separate chart for each. Is it possible to chart stage duration per test per testId using chart and a trellis layout? Since test names can change, I don't want to keep my query specific.
I have an 8.0.5 Splunk environment. It is not a cluster; there is one indexer. I am quickly headed towards a distributed environment and have a search head set up but not integrated yet. I have one heavy forwarder/deployment server/license manager. I need some help with setting up certs on the new heavy forwarder I'm standing up.

Here is the scenario: the new heavy forwarder will receive data from two universal forwarders managed by another team (we don't have access to them). Part of their requirement for sending this data is that the connection is encrypted. We have purchased a server cert for the heavy forwarder (servername.splunkidy.splunk.com), and it is in PEM format.

The question is: how do we install it to secure the universal forwarder to heavy forwarder connection? I haven't been able to find an example of universal-to-heavy-forwarder SSL cert setup. I admit certs are my kryptonite, and the Linux admin who is helping is having trouble figuring this out too. Do we send the other team the public key so they can access the heavy forwarder? I feel like we're missing a piece and can't figure out what it is.
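For reference, receiving TLS-encrypted forwarder traffic on the heavy forwarder side is typically configured in inputs.conf; a sketch with placeholder port and paths (the universal forwarders would then point their outputs.conf at this port with SSL enabled):

```ini
# inputs.conf on the heavy forwarder -- port number and paths are placeholders
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/servername.pem
sslPassword = <private-key password, if the key is encrypted>
requireClientCert = false
```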
Hello, when I make a search I get an hour-plus offset.
We recently designed some custom Splunk apps using the Splunk Add-on Builder app, and these used to display on the Add-on Builder home page. However, this morning, the custom apps are no longer appearing there. We confirmed that these custom apps are visible via the "Manage Apps" link (and permissions have not been changed), but they are not displaying on the Add-on Builder app page. We also confirmed that these custom apps appear within the "View Objects" menu of the Splunk Add-on Builder app and have the correct permissions in place as well. Any idea why these apps are no longer appearing on the Add-on Builder home page? The only activity performed between yesterday (when the apps were visible on Add-on Builder) and this morning was generating app manifests for several of our apps; that should not have modified any conf files or settings.
Hi All,

I have observed that field values are getting trimmed after going through a custom search command's execution. The commands are developed using Splunk's Python API, and we are not touching those fields inside the command. In the configuration (commands.conf) I found no property that would avoid this.

(Screenshots comparing the results with and without the custom command were attached.)

Custom command code:

import splunk.Intersplunk

# Read the incoming search results and any key=value options passed to the command.
results, unused1, unused2 = splunk.Intersplunk.getOrganizedResults()
keywords, argvals = splunk.Intersplunk.getKeywordsAndOptions()
field = argvals.get("field")
for r in results:
    try:
        # Replace the field with the value of the field it names.
        r[field] = r[r[field]]
    except Exception:
        r[field] = None
splunk.Intersplunk.outputResults(results)
Hi everybody, I've attached an error that has occurred recently on the Splunk infrastructure, which is based on a SHC of 3 members and an indexer cluster of 2 nodes. Sometimes, when I run a search on a SH, I retrieve a result, but if I repeat the same search, even after a few hours, with the same conditions on the same SH, I retrieve a different result! Then, again, after a few hours, the same search gives me back the same result I had in the first run. The customer says there are no network problems between the servers, even though the error message seems very clear. Does anyone have an idea about this? Thanks in advance.
Hi, I have successfully played with column and cell colors for tables, but could not find any configuration option for table gridlines (size in px and colour). Does anybody have a hint? I would just need the <option name=xxxx>, if it works that way. Many thanks in advance.
Hi, I'm looking to add CIM-compliant database data to the Databases data model. To give you some context on what I'm trying to approach: I'm looking to do use case discovery for ES correlation searches focused on the Databases data model, so I would love to have data to run queries against. What database(s) would you suggest that are childishly easy to set up and can fill as much of the data model in Splunk as possible? Thanks in advance!
Hi all, I've got a problem with sort. When I change the time format from the default (e.g. 2020-05-08 19:46:20) to 08/05/20 19:46:20 using this conversion command:

| eval _time=strftime(_time,"%d/%m/%y %H:%M:%S")

the sort function does not work. To work around this, I can put the sort command (e.g. | sort -_time) before the eval command, but that does not fix the built-in Splunk feature, i.e. the up and down arrows next to the table headers. Since I changed the time output format, the sort only works within one page of results; if there are multiple pages of results, it does not work. Is this normal?
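The behavior follows from _time becoming a string: a day-first %d/%m/%y string sorts lexicographically, not chronologically. A small standalone Python sketch of the effect (the timestamps are made up for illustration):

```python
from datetime import datetime

# Two timestamps where string order and chronological order disagree.
stamps = [datetime(2020, 5, 8, 19, 46, 20), datetime(2020, 4, 30, 7, 0, 0)]

# Day-first formatting, as in strftime(_time, "%d/%m/%y %H:%M:%S"):
# "08/05/20 ..." sorts before "30/04/20 ...", i.e. May lands before April.
day_first = sorted(t.strftime("%d/%m/%y %H:%M:%S") for t in stamps)
print(day_first)

# Year-first formatting keeps string order identical to chronological order.
year_first = sorted(t.strftime("%Y-%m-%d %H:%M:%S") for t in stamps)
print(year_first)
```

In Splunk terms, the idea would be to keep _time numeric (sortable) and format a separate display field, or use a year-first format, so that string order matches time order.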
Hello, I would need to add Splunk search results to an existing lookup table.

Example: I have a static Splunk lookup table as below, with static text defined, and for every row a Splunk query has to run on a schedule, and the result has to be written into the "Splunk Search Result" column of the lookup table every day. Is there a way to achieve this? Thanks
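One common pattern for this kind of per-row enrichment, sketched with placeholder names (my_lookup.csv, static_text, and the subsearch contents are all hypothetical): read the lookup, join the scheduled search's result onto each row by the static key, and write the lookup back.

```
| inputlookup my_lookup.csv
| join type=left static_text
    [ search index=my_index <your query here>
      | stats count AS "Splunk Search Result" BY static_text ]
| outputlookup my_lookup.csv
```

Run as a scheduled saved search, this would refresh the "Splunk Search Result" column daily while leaving the static columns untouched.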