Cisco ACI Add-on for Splunk Enterprise: Error with collect.py health

surekhasplunk
Communicator

05-15-2020 09:16:00.244 +0200 ERROR ExecProcessor - message from "python .......app/collect.py -health fvTenant fvAp fvEPg fvAEPg fvBD vzFilter vzEntry vzBrCP fvCtx l3extOut fabricNode" Response too big. Need to collect it in pages. Starting collection...

Why am I getting this error?

Below is the configuration in my inputs.conf:

[script://...bin/collect.py -health fvTenant fvAp fvEPg fvAEPg fvBD vzFilter vzEntry vzBrCP fvCtx l3extOut fabricNode]
disabled = 0
sourcetype = cisco:apic:health
index = cisco-aci
interval = 21600


PavelP
Motivator

Hello @surekhasplunk,

This is a confirmed Cisco issue ( https://quickview.cloudapps.cisco.com/quickview/bug/CSCvc32906 ), and according to the source ( https://github.com/datacenter/acitoolkit/blob/master/acitoolkit/acisession.py ) the script falls back to collecting the data in pages. Can you check whether the collected data is complete or whether anything is missing? One way is to compare the totalCount the APIC reports against the number of objects you actually receive; see the sketch after the excerpt below. As it stands, this message really should be categorized as a warning/notice/informational message rather than an error.

        elif resp.status_code == 400 and 'Unable to process the query, result dataset is too big' in resp.text:
            # Response is too big so we will need to get the response in pages
            # Get the first chunk of entries
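            # NOTE: this log.error() call emits the exact message from the question;
            # ExecProcessor reports a scripted input's stderr output at ERROR level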
            log.error('Response too big. Need to collect it in pages. Starting collection...')
            page_number = 0
            log.debug('Getting first page')
            cookies = self._prep_x509_header('GET', url + '&page=%s&page-size=10000' % page_number)
            resp = self.session.get(get_url + '&page=%s&page-size=10000' % page_number,
                                    timeout=timeout, verify=self.verify_ssl, proxies=self._proxies, cookies=cookies)
            entries = []
            if resp.ok:
                entries += resp.json()['imdata']
                orig_total_count = int(resp.json()['totalCount'])
                total_count = orig_total_count - 10000
                while total_count > 0 and resp.ok:
                    page_number += 1
                    log.debug('Getting page %s', page_number)
                    # Get the next chunk
                    cookies = self._prep_x509_header('GET', url + '&page=%s&page-size=10000' % page_number)
                    resp = self.session.get(get_url + '&page=%s&page-size=10000' % page_number,
                                            timeout=timeout, verify=self.verify_ssl,
                                            proxies=self._proxies, cookies=cookies)
                    if resp.ok:
                        entries += resp.json()['imdata']
                        total_count -= 10000
                resp_content = {'imdata': entries,
                                'totalCount': orig_total_count}
                resp._content = json.dumps(resp_content).encode('ascii')
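
If you want to double-check completeness outside the add-on, here is a minimal sketch (the hostname, credentials, and class name are placeholders, and it assumes username/password login rather than the X.509 path the toolkit also supports) that pages through one class with the same page/page-size parameters and compares totalCount against what was actually collected:

    import requests

    APIC = 'https://apic.example.com'  # placeholder: your APIC hostname
    CLASS = 'fvTenant'                 # placeholder: any class from your inputs.conf stanza
    PAGE_SIZE = 10000                  # same page size the toolkit uses

    session = requests.Session()
    session.verify = False             # adjust to match your certificate setup

    # aaaLogin is the standard APIC REST login endpoint; credentials are placeholders
    session.post(APIC + '/api/aaaLogin.json',
                 json={'aaaUser': {'attributes': {'name': 'admin', 'pwd': 'password'}}})

    entries = []
    page = 0
    while True:
        resp = session.get(APIC + '/api/class/%s.json?page=%d&page-size=%d'
                           % (CLASS, page, PAGE_SIZE))
        resp.raise_for_status()
        data = resp.json()
        entries += data['imdata']
        total = int(data['totalCount'])
        if len(entries) >= total:
            break
        page += 1

    # If the paged fallback works, these two numbers match and nothing was dropped
    print('collected %d of %d %s objects' % (len(entries), total, CLASS))

If the counts match for every class in your stanza, the fallback is doing its job and the ERROR line is cosmetic.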