I configured the Splunk Add-on for New Relic with my New Relic (NR) API key, and data did start flowing in, but the search:
sourcetype=newrelic_account source="applications.json" | mvexpand applications{}.name | dedup applications{}.name | table applications{}.name
only returned 18 results (this is the search that populates one of the dashboard's application filter lists). There are over 200 applications in my NR account, so I am curious whether I have misconfigured something on my side or whether there is an issue with the add-on.
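In case it helps anyone reproduce this, an equivalent search that renames the multivalue field before expanding it (the `app_name` alias is arbitrary; some Splunk versions handle mvexpand better on field names without `{}`):

```
sourcetype=newrelic_account source="applications.json"
| rename applications{}.name AS app_name
| mvexpand app_name
| dedup app_name
| table app_name
```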
Thanks!!!
This add-on and the accompanying app have both been updated to support New Relic pagination in the Account Summary input type.
Note: You will need to update any existing account inputs after installing the updated version.
Future versions of the add-on will resolve this issue. We will also be releasing an official Splunk version of this add-on soon that accounts for pagination.
I did a bit of digging here; it turned out that I needed to disable truncation on the specific sourcetype, since the JSON from NR was larger than my config allowed. That said, I am now seeing 83 applications from NR in Splunk when I am expecting many more. Does this add-on take pagination of the NR API into account?
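For anyone hitting the same symptom: the truncation limit lives in props.conf. A minimal example, assuming the sourcetype is `newrelic_account` as in the search above (0 disables line truncation entirely; a large explicit byte limit is a safer choice in production):

```ini
[newrelic_account]
TRUNCATE = 0
```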
I did more digging. Based on the source code:
# Now on to processing this single account
#url = "https://api.newrelic.com/v2/applications.json"
api_base_url = "https://api.newrelic.com/v2/"
urls = ["applications.json", "key_transactions.json", "mobile_applications.json", "alerts_violations.json"]
#headers = {'X-Api-Key':'0d27291dc862905e8e3e8e0f570f0d10b98686e27ffe21d'}
headers = {'X-Api-Key': '{}'.format(opt_api_key)}
parameters = "only_open=true"
account_dict = {'account_id': '{}'.format(opt_account)}
for i in range(len(urls)):
    url = api_base_url + urls[i]
    if i == 3:
        # /alerts_violations.json --> requires a parameter of 'only_open=true'
        parameters = "only_open=true"
    else:
        parameters = ""
    response = helper.send_http_request(url, "GET", headers=headers, parameters=parameters, payload=None, cookies=None, verify=None, cert=None, timeout=None, use_proxy=True)
    #r_headers = response.headers
    #r_cookies = response.cookies
    r_json = response.json()
    r_status = response.status_code
    # check the response status; if the status is not successful, raise requests.HTTPError
    response.raise_for_status()
    # if all is well, let's add the account ID to the event
    data = json.loads(json.dumps(r_json))
    data.update(account_dict)
    # source=helper.get_input_name()
    src = urls[i]
    event = helper.new_event(source=src, index=idx, sourcetype=st, data=json.dumps(data))
    try:
        ew.write_event(event)
    except Exception as e:
        raise e
There is no pagination happening: each endpoint is requested exactly once, so only the first page of results ever gets indexed. Could the authors of this add-on add support for pagination so that all of the data from NR gets indexed?
Thanks!
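For what it's worth, the NR v2 REST API signals additional pages through a Link response header containing a rel="next" URL. A minimal, stdlib-only sketch of following those links (the function names and structure are my own illustration, not the add-on's code):

```python
import json
import re
from urllib.request import Request, urlopen


def next_page_url(link_header):
    """Return the rel="next" URL from a New Relic 'Link' response
    header, or None once the last page has been reached."""
    if not link_header:
        return None
    for part in link_header.split(","):
        match = re.search(r'<([^>]+)>;\s*rel="next"', part)
        if match:
            return match.group(1)
    return None


def fetch_all_pages(url, api_key):
    """Follow rel="next" links, collecting every page of results."""
    pages = []
    while url:
        request = Request(url, headers={"X-Api-Key": api_key})
        with urlopen(request) as response:
            pages.append(json.load(response))
            url = next_page_url(response.headers.get("Link"))
    return pages
```

Each element of `pages` is one page of parsed JSON; the add-on would then emit an event per page instead of indexing only the first response.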
Pagination support has now been added to the TA in version 1.5.1.
You can find it on Splunkbase now. Thanks for your feedback!
Hi Tom and Ntankersley,
With 1.5.1 of the TA, it does not load any data, and the following error appears in my logs:
2017-03-06 14:04:03,971 ERROR pid=6117 tid=MainThread file=base_modinput.py:log_error:307 | Get error when collecting events.
Traceback (most recent call last):
File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/modinput_wrapper/base_modinput.py", line 127, in stream_events
self.collect_events(ew)
File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/newrelic_account.py", line 70, in collect_events
input_module.collect_events(self, ew)
File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/input_module_newrelic_account.py", line 85, in collect_events
r_json = response.json()
File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/requests/models.py", line 842, in json
self.content.decode(encoding), **kwargs
File "/opt/splunk/lib/python2.7/json/__init__.py", line 339, in loads
return _default_decoder.decode(s)
File "/opt/splunk/lib/python2.7/json/decoder.py", line 364, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/opt/splunk/lib/python2.7/json/decoder.py", line 382, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
Never mind; I was able to resolve this by removing the input I had configured with 1.5.0 of the TA and recreating it. Now all data is showing up properly.
Thanks for the update. Yes, as you noticed, we added some flexibility in which APIs to call, so that is most likely what was causing the issue. Moving forward this should not be a problem.
Thanks for downloading and using the add-on; we appreciate your feedback. We're currently working on the pagination issue.