All Topics


Hi all, I have a little issue with an input made via Add-on Builder (Python 3). I have created several inputs, and all the others work correctly, but one does not run, and in _internal I see the error "Invalid key in stanza [input_http] in /opt/splunk/etc/apps/application-for-splunk/default/inputs.conf, line 68: python.version (value: python3)." I am using Add-on Builder 3.0.1 and Splunk Enterprise 7.3.1.1. How can I fix it? Thanks.
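For reference: the python.version key was only introduced in inputs.conf with Splunk 8.0, so on a 7.3 instance it is an unrecognized key. Removing the key from the stanza (or upgrading Splunk to 8.x) should clear the error. A sketch of the stanza with the key commented out — the other settings shown are illustrative, not taken from the post:

```ini
# /opt/splunk/etc/apps/application-for-splunk/default/inputs.conf
[input_http]
interval = 300
disabled = 0
# python.version = python3   ; only valid from Splunk 8.0 onward; remove on 7.3.x
```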
All data and apps from our distributed architecture suddenly got deleted, including indexes and other configurations. Has anyone faced this issue before? Is there any way to check how this happened?
Hi, I am getting a 500 Internal Server Error when I click on Manage Apps; however, other functionality is working fine. Any quick help would be greatly appreciated.
In my index, I have a field that has been extracted for a "last checkin time". The time shown is GMT, and I need to use this field in a dashboard to show data accurately. I am having a problem with my strptime in that it is not working. An example of the extracted field: 2020-02-13 05:00:29.0. The time is GMT (and it needs to be GMT+8). I have done the following:

index=someindex source="mysource"
| eval epoch_time=strptime("last_checkin_time", "%Y-%m-%d %H:%M:%S.%3N")

I have tried adjusting the eval to use the %Q options, but that has not been able to generate a new field that I can use. I have also tried adding %Z at the end of the strptime to try to force the timezone, but to no avail. I would like to use this time instead of the ingest time (_time) to drive my dashboard. Thanks in advance.
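One likely pitfall (not confirmed from the post alone): in SPL, strptime("last_checkin_time", …) parses the literal string "last_checkin_time" rather than the field's value, so the quotes around the field name would need to be removed. The timezone shift itself is simple arithmetic; a minimal Python sketch of the intended conversion, using the sample value from the question:

```python
from datetime import datetime, timedelta

# Sample value matching the extracted field in the question.
raw = "2020-02-13 05:00:29.0"

# Parse the timestamp; %f accepts 1 to 6 fractional-second digits.
parsed = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S.%f")

# The stored value is GMT; shift it to GMT+8 for display.
local = parsed + timedelta(hours=8)

print(local.strftime("%Y-%m-%d %H:%M:%S"))  # 2020-02-13 13:00:29
```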
Hi, I am trying to create a report to capture the overall CPU load average. I have created a search query in Splunk using perfmon counters, but it does not represent the overall CPU load, as individual counters give separate values. I want to capture the overall CPU load as displayed in Windows Task Manager. Please help me with a search query for overall CPU usage. I am using the search query below:

host="*" source="Perfmon:Processor" counter="% Processor Time" instance="_Total" object="Processor"
| bucket _time span=1d
| chart limit=0 avg(Value) over _time by host
| eval Time=_time
| convert timeformat="%d-%b %H:%M:%S" ctime(Time)
| fields - _time
| table Time, *
I have a regional-level holiday list, and the source is raw ITSM tool data with a reported date and a resolved date. For example, the CSV file for the holiday list:

App   Location        Des      Date
App1  Pune            Newyear  01/01/2020
App2  Singapore       xxxxx    13/01/2020
App3  Pune/Singapore  someday  14/01/2020

My Python code should check the location and app in the raw data against the location and app in the holiday list, and based on that it should flag whether there is a holiday or not. Challenge: App3 has a holiday at both locations; how do I handle such a case? I initially tried something like the code below, but it outputs 'N' even when the holiday date and the reported date match:

for holiday in holidays:
    day, month, year = holiday.split('/')
    holidate = date(int(year), int(month), int(day))
    logger.info('%s', holidate)
    if holidate == each_date.date():
        is_it_holiday = 'Y'
    else:
        is_it_holiday = 'N'
    result['is_it_holiday'] = is_it_holiday
    break
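A sketch of one way to restructure the check so that 'N' is only the fall-through when no row matches (rather than being overwritten inside the loop), and so a Pune/Singapore entry covers both sites. The field names and sample data here are illustrative stand-ins, not the actual ITSM schema:

```python
from datetime import date

# Hypothetical rows mirroring the holiday CSV in the question.
holiday_rows = [
    {"app": "App1", "location": "Pune",           "date": "01/01/2020"},
    {"app": "App3", "location": "Pune/Singapore", "date": "14/01/2020"},
]

def is_holiday(app, location, reported_date, rows):
    """Return 'Y' if reported_date is a holiday for this app at this location."""
    for row in rows:
        day, month, year = row["date"].split("/")
        holidate = date(int(year), int(month), int(day))
        # A '/'-separated location means the holiday applies at several sites.
        locations = row["location"].split("/")
        if row["app"] == app and location in locations and holidate == reported_date:
            return "Y"  # stop at the first matching row
    return "N"  # only reached when no row matched

print(is_holiday("App3", "Singapore", date(2020, 1, 14), holiday_rows))  # Y
print(is_holiday("App1", "Pune", date(2020, 1, 2), holiday_rows))        # N
```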
Hi all, could you please help me: is there any possibility of including dashboard content in the email body when we schedule it? I know that we can include report content as part of the email body; however, I am unable to do the same for dashboard content. Thanks.
Hi, why isn't the controller directly responding to the app agent? As you can see in the screenshot, the first app agent's status is 0%. Your response is much appreciated.
Why does Splunk count data sent via HEC as consumed license even when the destination index is disabled? I am observing the same behavior in our Prod, Dev, and POC environments.
We recently orchestrated an application on the dev environment and everything went well, but now the node agent status keeps going to 0%, and then the nodes replicate themselves (within the tier). I have attached a screenshot that might give a clearer picture. These copies of Java nodes get deleted from the controller after the application server is restarted, but then the issue reoccurs. My best guess is that something was done incorrectly while deploying the agents, and I was hoping someone else has faced the same issue and might be able to tell me what could be wrong. Thanks, Hari Shree Joshi
I am reading the guide for "TA for Nutanix Prism" on Splunkbase. There is a description of the data input as below: "On you Splunk Enterprise instance, navigate over to Settings —> Data Inputs —> and select the Nutanix Prism API endpoints you want to ingest into Splunk. For each input you want to add, select the endpoint then select new —> and fill out the required form." But I can't find the right option to match "select the Nutanix Prism API endpoints you want to ingest into Splunk. For each input you want to add, select the endpoint"; the options available are: Local Event Logs, Remote Event Logs, Files & Directories, HTTP Event Collector, TCP/UDP... Could you give some advice? Thanks in advance.
Any ideas how to fix this? The code appears to run OK when invoked manually:

/opt/splunk/bin/splunk cmd python /opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py

[I see lots of JSON metadata for apps start flashing in]

But if I let the schedule run, I find errors like the following every 4 hours, found via:

cat /opt/splunk/var/log/splunk/splunkd.log | grep -i getSplunkAppsV1

Every matching line carries a prefix such as: 02-13-2020 03:34:17.455 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py". With those prefixes stripped for readability, the traceback reads:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py", line 90, in <module>
    if __name__ == "__main__": main()
  File "/opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py", line 61, in main
    for app_json in iterate_apps(app_func):
  File "/opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py", line 51, in iterate_apps
    data = get_apps(limit, offset, app_filter)  ### Download initial list of the apps
  File "/opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py", line 18, in get_apps
    data = json.load(urllib2.urlopen(url))
  File "/opt/splunk/lib/python2.7/urllib2.py", line 154, in urlopen
    return opener.open(url, data, timeout)
  File "/opt/splunk/lib/python2.7/urllib2.py", line 435, in open
    response = meth(req, response)
  File "/opt/splunk/lib/python2.7/urllib2.py", line 548, in http_response
    'http', request, response, code, msg, hdrs)
  File "/opt/splunk/lib/python2.7/urllib2.py", line 473, in error
    return self._call_chain(*args)
  File "/opt/splunk/lib/python2.7/urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "/opt/splunk/lib/python2.7/urllib2.py", line 556, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 503: Service Unavailable
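HTTP 503 means the remote service (not the script) refused the request at that moment, so intermittent failures on a schedule are plausible even when manual runs succeed. One common mitigation — a sketch, not the script's actual code — is to retry transient failures with a doubling delay before giving up:

```python
import time

def with_retries(fetch, attempts=3, delay=1.0):
    """Call fetch(); on failure, sleep and retry with a doubling delay.

    Re-raises the last exception if every attempt fails.
    """
    for i in range(attempts):
        try:
            return fetch()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(delay)
            delay *= 2

# Usage with a stand-in fetcher that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("HTTP Error 503: Service Unavailable")
    return "ok"

print(with_retries(flaky, attempts=3, delay=0.01))  # ok
```

In the real script, fetch would wrap the urllib2.urlopen call from get_apps.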
I have a requirement to get the average of the count of IPs over the last 90 days. I have thought of two approaches to distribute the query overhead across the 90-day span:

1. Schedule a query to run every day at the end of the day and collect the result in a CSV file using outputlookup, appending each day's results to the existing file. Then use this file to get the 90-day average.
2. Use summary indexing: schedule a query to run every day at the end of the day and collect the result in a summary index, giving the search a name and appending each day's results to the index. Then use this index to get the 90-day average.

Question: is there a way to restrict the lookup CSV file or the summary index to just the latest 90 days of records? Meaning, I want to purge the rows in the index or the file that are older than 90 days. How can I do it?
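For the CSV approach, one common pattern is to rewrite the lookup on each scheduled run, filtering out rows older than the window (in SPL, roughly: inputlookup, then where _time >= relative_time(now(), "-90d@d"), then outputlookup). The filtering logic itself is just a cutoff comparison; a minimal Python sketch with illustrative field names:

```python
from datetime import datetime, timedelta

def trim_to_window(rows, now, days=90, time_field="day"):
    """Keep only rows whose time_field falls within the last `days` days."""
    cutoff = now - timedelta(days=days)
    return [r for r in rows
            if datetime.strptime(r[time_field], "%Y-%m-%d") >= cutoff]

rows = [
    {"day": "2019-10-01", "ip_count": "12"},  # older than 90 days: dropped
    {"day": "2020-02-01", "ip_count": "34"},  # inside the window: kept
]
kept = trim_to_window(rows, now=datetime(2020, 2, 13))
print([r["day"] for r in kept])  # ['2020-02-01']
```

For the summary-index approach, retention is instead governed by the index's own frozenTimePeriodInSecs setting rather than by the search.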
Hi Community, we are trying to integrate Cisco UCS Central Manager with Splunk. We did the configuration on the Splunk end by going through the installation and configuration guide; however, there is no data in Splunk, and the ta_cisco_ucs logs have the following error: "Failed to parse XML response Invalid URL, please make sure the URL is correct". Below are a few details for your reference: TA version 3.0.0; we have configured #server_url = the UCS Central Manager host; a service account was created to authenticate to UCS Central Manager (login to UCS Central Manager using the svc account was successful); and we have set #disable_ssl_verification = true in cisco_ucs_servers.conf, but no luck. Any suggestions are appreciated. Thank you.
I have uploaded a CSV, and I'm attempting to search it against an interesting field of DisplayName with any sourcetype.
Hi fellow Splunkers, sorry, I don't have enough karma points to post a link. I followed a Splunk blog post about monitoring Windows services by Jason Conger: "Tips & Tricks: Monitoring Windows Service State History". I used wmi.conf to monitor the services on my servers. With the snippet below for server1, the results turn out great: I get a full service state history of server1 for the past day.

index=windows sourcetype="WMI:Services" host=server1 earliest=-1d@d latest=now
| streamstats current=false last(State) AS new_state last(_time) AS time_of_change BY DisplayName
| where State != new_state
| convert ctime(time_of_change) AS time_of_change
| rename State AS old_state
| table time_of_change host DisplayName old_state new_state

With the snippet below, I hoped to get a service state history of all the servers in my environment for the past day; however, the results did not turn out the way I expected.

index=windows sourcetype="WMI:Services" host=* earliest=-1d@d latest=now
| streamstats current=false last(State) AS new_state last(_time) AS time_of_change BY DisplayName
| where State != new_state
| convert ctime(time_of_change) AS time_of_change
| rename State AS old_state
| table time_of_change host DisplayName old_state new_state

Did I miss anything? I would be grateful if somebody pointed me in the right direction. Thanks!
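One thing worth checking (an assumption, since the search was copied from the single-host case): the streamstats groups only BY DisplayName, so with host=* the state changes of the same-named service on different hosts get mixed into one series; grouping by host and DisplayName together keeps each host's history separate. A minimal Python sketch of that per-(host, service) grouping, with made-up events:

```python
# Hypothetical (time, host, service, state) events from two hosts.
events = [
    (1, "server1", "Spooler", "Running"),
    (2, "server2", "Spooler", "Stopped"),   # different host, same service name
    (3, "server1", "Spooler", "Stopped"),   # a real change on server1
]

# Track the last seen state per (host, service) so hosts don't interfere.
last = {}
changes = []
for t, host, svc, state in sorted(events):
    key = (host, svc)  # keying on svc alone would conflate the two hosts
    if key in last and last[key] != state:
        changes.append((t, host, svc, last[key], state))
    last[key] = state

print(changes)  # [(3, 'server1', 'Spooler', 'Running', 'Stopped')]
```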
Hi all, I have searched through the questions asked on this topic and found the query below. If I want to gather the statistics day by day, to see the trend for each type of data and to check the usage of any new data onboarded in the future, how should I modify the query? Thanks for the help in advance. Ricky

index=_internal source=*license_usage.log type="Usage"
| eval indexname = if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| eval sourcetypename = st
| bin _time span=1d
| stats sum(b) as b by _time, pool, indexname, sourcetypename
| eval GB=round(b/1024/1024/1024, 3)
| fields _time, indexname, sourcetypename, GB
We have been working on getting an installation of Phantom running in a centos:7 Docker container using RPM, but are experiencing some issues around authentication while following the steps outlined here: https://docs.splunk.com/Documentation/Phantom/4.8/Install/InstallRPM We cannot authenticate when running the install script, whether in the Docker container or even locally. Our team has taken a look at phantom_setup.sh and has tried passing our Splunk Phantom community credentials in simple requests to the Phantom repo, for example: wget https://USER:PW@repo.phantom.us/phantom/4.8/product/x86_64/repodata/repomd.xml (this example is just a test to confirm), but all tests have resulted in failed authentication. Is anyone else experiencing this issue?
Trying to create a sparkline from data in a lookup table. monitor_user_traffic.csv has the fields:
- user
- traffic_dest_ip
- app
- bytes_out
- time

When I run

| inputlookup monitor_user_traffic.csv
| eval _time=time
| stats sum(bytes_out) sparkline(sum(bytes_out),1d) as data_trend by user traffic_dest_ip app

I get a value for sum(bytes_out) but nothing under data_trend. Is there some sort of magical way that I need to alter my data for Splunk to be able to create a sparkline?
Hi, I wanted to know if there could be any impact on the performance of a search head if we add many indexer peers to a single search head.