Getting Data In

Issue with SNOW | Data is not getting in correctly

kittu1
New Member

We have enabled the jobs to pull the records from each of the tables, after which we created a report/dashboard as per our requirement. We can see that for a few tickets the data is not being indexed with the latest details. For example, ticket number INC101023 was closed in ServiceNow on 20-Apr-2020, but in the Splunk index it still shows as "Work in Progress" with the date 15-Apr-2020.
Can you please let us know how to retrieve the missing data?

 

I am getting the below errors in the snow (Splunk Add-on for ServiceNow) logs:

ValueError: Expecting : delimiter: line 1 column 1791875 (char 1791874)

2020-10-10 14:30:59,831 ERROR pid=20332 tid=Thread-29 file=snow_data_loader.py:collect_data:170 | Failure occurred while getting records from https://henkelprod.service-now.com/change_request. The reason for failure= {u'message': u'Transaction cancelled: maximum execution time exceeded', u'detail': u'maximum execution time exceeded Check logs for error trace or enable glide.rest.debug property to verify REST request processing'}. Contact Splunk administrator for further information.

 

2020-10-10 14:31:34,604 ERROR pid=20332 tid=Thread-24 file=snow_data_loader.py:collect_data:170 | Failure occurred while getting records from https://henkelprod.service-now.com/rm_defect. The reason for failure= {u'message': u'Transaction cancelled: maximum execution time exceeded', u'detail': u'maximum execution time exceeded Check logs for error trace or enable glide.rest.debug property to verify REST request processing'}. Contact Splunk administrator for further information. 

 

2020-10-10 15:00:59,950 ERROR pid=20332 tid=Thread-29 file=snow_data_loader.py:collect_data:170 | Failure occurred while getting records from https://henkelprod.service-now.com/change_request. The reason for failure= {u'message': u'Transaction cancelled: maximum execution time exceeded', u'detail': u'Transaction cancelled: maximum execution time exceeded Check logs for error trace or enable glide.rest.debug property to verify REST request processing'}. Contact Splunk administrator for further information.  

2020-10-13 12:28:51,678 ERROR pid=8752 tid=Thread-39 file=snow_data_loader.py:_json_to_objects:268 | Obtained an invalid json string while parsing.Got value of type <type 'str'>. Traceback : Traceback (most recent call last):
  File "D:\SplunkProgramFiles\etc\apps\Splunk_TA_snow\bin\snow_data_loader.py", line 265, in _json_to_objects
    json_object = json.loads(json_str)
  File "D:\SplunkProgramFiles\Python-2.7\Lib\json\__init__.py", line 339, in loads
    return _default_decoder.decode(s)
  File "D:\SplunkProgramFiles\Python-2.7\Lib\json\decoder.py", line 364, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "D:\SplunkProgramFiles\Python-2.7\Lib\json\decoder.py", line 380, in raw_decode
    obj, end = self.scan_once(s, idx)
ValueError: Expecting : delimiter: line 1 column 972953 (char 972952)
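
(From what I can tell, those "Expecting : delimiter" errors are what json.loads raises when the response body gets cut off part-way through a record, presumably because ServiceNow cancelled the transaction while the result set was still being returned. A minimal illustration of that, not the add-on's own code:)

import json

# A response body that was cut off mid-record, as can happen when ServiceNow
# cancels the transaction while the result set is still being returned.
truncated_body = '{"records": [{"number": "INC101023", "state": "Closed"}, {"number"'

try:
    json.loads(truncated_body)
except ValueError as err:  # json.JSONDecodeError is a subclass of ValueError
    print(err)             # the same kind of "Expecting ':' delimiter" error as in the add-on log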

 


Juan_Leon
Explorer

I had a similar issue; the difference is that my TA has inputs for multiple tables. All of the inputs work fine with a record count of 3000 except one. Once I adjusted the problematic input to a record count of 1000, events started to be ingested. Why am I only experiencing issues with one of them?

The error reported for this input is:

The reason for failure= {'message': 'Transaction cancelled: maximum execution time exceeded', 'detail': 'maximum execution time exceeded Check logs for error trace or enable glide.rest.debug property to verify REST request processing'}.

 

Any suggestions will be appreciated.

 


Jasdeep
Explorer

I was getting the same error but changing record_count from 3000 to 1000 fixed the issue for me.

VatsalJagani
SplunkTrust

In the ServiceNow Add-on, on the account page, there is a parameter called "record_count" which, as far as I know, is used for exactly this purpose (pagination, i.e. limiting the number of results returned on each call).

Try reducing that number (the minimum value on the add-on side is 1000, the maximum is 10000).
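
Roughly speaking, a smaller record_count means each REST call asks ServiceNow for a smaller page, so no single call runs long enough to hit the "maximum execution time exceeded" cancellation. Something along these lines, only as a sketch using the public Table API and placeholder credentials, not the add-on's internal snow_data_loader.py logic:

import requests

INSTANCE = "https://henkelprod.service-now.com"   # instance name from the errors above
TABLE = "change_request"                          # one of the failing tables
PAGE_SIZE = 1000                                  # analogous to a smaller record_count

def fetch_pages(session, table, page_size=PAGE_SIZE):
    """Pull the table in pages so no single call runs long enough to be cancelled."""
    offset = 0
    while True:
        resp = session.get(
            "{0}/api/now/table/{1}".format(INSTANCE, table),
            params={"sysparm_limit": page_size, "sysparm_offset": offset},
            timeout=120,
        )
        resp.raise_for_status()
        records = resp.json().get("result", [])
        if not records:
            break
        for record in records:
            yield record
        offset += page_size

with requests.Session() as session:
    session.auth = ("api_user", "api_password")   # placeholder credentials
    session.headers.update({"Accept": "application/json"})
    for rec in fetch_pages(session, TABLE):
        pass  # hand each record to whatever does the indexing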

VatsalJagani
SplunkTrust

@kittu1 @artelia - was the answer helpful? Please accept it as the solution if so.


artelia
Explorer

Hi kittu1!

Were you able to solve this? We are also experiencing the same error message.
We may have to use query parameters to receive smaller chunks of information at a time, but I haven't yet figured out an easy way to do so. Any advice would therefore be much appreciated.
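
The rough direction I had in mind is something like the sketch below (placeholder credentials, and the window size and field list would need tuning; I don't yet know how to wire this into the add-on's inputs): filtering on sys_updated_on with sysparm_query so each call only returns records updated in a small time window, and using sysparm_fields to keep the payload small.

from datetime import datetime, timedelta

import requests

INSTANCE = "https://henkelprod.service-now.com"   # instance name from the thread
TABLE = "rm_defect"                               # one of the failing tables

def fetch_window(session, table, start, end, limit=1000):
    """Ask ServiceNow only for records updated inside [start, end)."""
    query = ("sys_updated_on>=" + start.strftime("%Y-%m-%d %H:%M:%S") +
             "^sys_updated_on<" + end.strftime("%Y-%m-%d %H:%M:%S"))
    resp = session.get(
        "{0}/api/now/table/{1}".format(INSTANCE, table),
        params={
            "sysparm_query": query,
            "sysparm_limit": limit,
            # Returning only the fields you actually index also shrinks the payload.
            "sysparm_fields": "number,state,sys_updated_on",
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json().get("result", [])

with requests.Session() as session:
    session.auth = ("api_user", "api_password")   # placeholder credentials
    session.headers.update({"Accept": "application/json"})
    window_start = datetime(2020, 4, 15)
    while window_start < datetime(2020, 4, 21):
        window_end = window_start + timedelta(hours=6)
        for rec in fetch_window(session, TABLE, window_start, window_end):
            pass  # hand the record to your pipeline
        window_start = window_end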
