Cisco ACI Add-on for Splunk Enterprise: Error with collect.py health

surekhasplunk
Communicator

05-15-2020 09:16:00.244 +0200 ERROR ExecProcessor - message from "python .......app/collect.py -health fvTenant fvAp fvEPg fvAEPg fvBD vzFilter vzEntry vzBrCP fvCtx l3extOut fabricNode" Response too big. Need to collect it in pages. Starting collection...

Why am I getting this error?

Below is the configuration in my inputs.conf:

[script://...bin/collect.py -health fvTenant fvAp fvEPg fvAEPg fvBD vzFilter vzEntry vzBrCP fvCtx l3extOut fabricNode]
disabled = 0
sourcetype = cisco:apic:health
index = cisco-aci
interval = 21600


PavelP
Motivator

Hello @surekhasplunk

This is a confirmed Cisco bug ( https://quickview.cloudapps.cisco.com/quickview/bug/CSCvc32906 ), and according to the source ( https://github.com/datacenter/acitoolkit/blob/master/acitoolkit/acisession.py ) the script falls back to collecting the data in pages, as the snippet below shows. Can you check whether the collected data is complete or whether anything is missing? It looks like this should be categorized as a warning/notice/informational message rather than an error.

        elif resp.status_code == 400 and 'Unable to process the query, result dataset is too big' in resp.text:
            # Response is too big so we will need to get the response in pages
            # Get the first chunk of entries
            log.error('Response too big. Need to collect it in pages. Starting collection...')
            page_number = 0
            log.debug('Getting first page')
            cookies = self._prep_x509_header('GET', url + '&page=%s&page-size=10000' % page_number)
            resp = self.session.get(get_url + '&page=%s&page-size=10000' % page_number,
                                    timeout=timeout, verify=self.verify_ssl, proxies=self._proxies, cookies=cookies)
            entries = []
            if resp.ok:
                entries += resp.json()['imdata']
                orig_total_count = int(resp.json()['totalCount'])
                total_count = orig_total_count - 10000
                while total_count > 0 and resp.ok:
                    page_number += 1
                    log.debug('Getting page %s', page_number)
                    # Get the next chunk
                    cookies = self._prep_x509_header('GET', url + '&page=%s&page-size=10000' % page_number)
                    resp = self.session.get(get_url + '&page=%s&page-size=10000' % page_number,
                                            timeout=timeout, verify=self.verify_ssl,
                                            proxies=self._proxies, cookies=cookies)
                    if resp.ok:
                        entries += resp.json()['imdata']
                        total_count -= 10000
                resp_content = {'imdata': entries,
                                'totalCount': orig_total_count}
                resp._content = json.dumps(resp_content).encode('ascii')
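
If the data turns out to be complete, the ERROR line is cosmetic: the log.error(...) call above fires even though the fallback succeeds, and downgrading it to log.warning would better reflect what actually happens.

For reference, here is a minimal standalone sketch of the same paging behavior, handy for verifying outside Splunk that the full dataset is retrievable. The hostname and credentials are placeholders, and it assumes the standard APIC REST endpoints (/api/aaaLogin.json for login, /api/class/<class>.json for class queries) together with the page and page-size query parameters used in the snippet above:

    import requests

    APIC = "https://apic.example.com"   # placeholder APIC hostname
    USER, PWD = "admin", "password"     # placeholder credentials
    PAGE_SIZE = 10000                   # same page size the toolkit uses

    session = requests.Session()
    session.verify = False              # APICs often use self-signed certs; enable verification in production

    # Log in and keep the auth cookie on the session (standard APIC login endpoint).
    login = {"aaaUser": {"attributes": {"name": USER, "pwd": PWD}}}
    session.post(APIC + "/api/aaaLogin.json", json=login).raise_for_status()

    # Page through a class query the same way the fallback above does:
    # keep requesting page=N with a fixed page-size until totalCount entries are in hand.
    entries = []
    page = 0
    while True:
        resp = session.get(APIC + "/api/class/fvTenant.json",
                           params={"page": page, "page-size": PAGE_SIZE})
        resp.raise_for_status()
        body = resp.json()
        entries += body["imdata"]
        total = int(body["totalCount"])
        if len(entries) >= total:
            break
        page += 1

    print("collected %d of %d objects" % (len(entries), total))

If the final count matches totalCount, the add-on is collecting everything and the message can safely be treated as informational.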