All Apps and Add-ons

Website Monitoring Errors

cmahan
Path Finder

I had things working for a while, but at some point after adding more inputs the timing started missing the intervals, then the intervals stopped altogether. It will run once and then halt with these errors.

2015-07-01 17:10:50,244 INFO Previous run was too far in the past (gap=1598.0769999027252) and thus some executions of the input have been missed (stanza=web_ping://Independence Corporate Development)
2015-07-01 17:10:52,944 ERROR Execution failed: Traceback (most recent call last):
File "C:\Program Files\Splunk\etc\apps\website_monitoring\bin\modular_input.py", line 1259, in execute
self.do_run(in_stream, log_exception_and_continue=True)
File "C:\Program Files\Splunk\etc\apps\website_monitoring\bin\modular_input.py", line 1159, in do_run
input_config)
File "C:\Program Files\Splunk\etc\apps\website_monitoring\bin\web_ping.py", line 335, in run
last_ran = self.last_ran(input_config.checkpoint_dir, stanza)
File "C:\Program Files\Splunk\etc\apps\website_monitoring\bin\modular_input.py", line 986, in last_ran
checkpoint_dict = cls.get_checkpoint_data(checkpoint_dir, stanza)
File "C:\Program Files\Splunk\etc\apps\website_monitoring\bin\modular_input.py", line 1048, in get_checkpoint_data
checkpoint_dict = json.load(fp)
File "C:\Program Files\Splunk\Python-2.7\Lib\json\__init__.py", line 290, in load
**kw)
File "C:\Program Files\Splunk\Python-2.7\Lib\json\__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "C:\Program Files\Splunk\Python-2.7\Lib\json\decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Program Files\Splunk\Python-2.7\Lib\json\decoder.py", line 384, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded


LukeMurphey
Champion

That's strange. Somehow the JSON data that the modular input uses to preserve state (to remember when it last ran) has become corrupted. I can make this call more resilient, which should help it recover more gracefully. However, this may be the result of other issues on the host (such as overloading).

I'm planning to fix this in version 1.1.3; I will be logging the work under this issue report.

Update:
I released version 1.1.2 which handles bad checkpoint data gracefully.
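For anyone curious what "handles bad checkpoint data gracefully" might look like in practice, here is a minimal sketch. The function name mirrors the one in the traceback, but the file-naming scheme and return convention are assumptions for illustration, not the app's actual code: the idea is simply to catch the `ValueError` from `json.load` so a corrupt checkpoint is treated as "never ran" instead of killing the input.

```python
import json
import os


def get_checkpoint_data(checkpoint_dir, stanza):
    """Load the checkpoint dict for a stanza, tolerating corrupt files.

    Hypothetical sketch: the real modular_input.py may derive the file
    name differently; here we assume a simple "<stanza>.json" layout.
    """
    path = os.path.join(checkpoint_dir, stanza + ".json")
    try:
        with open(path) as fp:
            return json.load(fp)
    except (IOError, OSError, ValueError):
        # Missing or corrupt checkpoint: treat the input as never run,
        # so the next execution writes a fresh checkpoint instead of
        # halting with "No JSON object could be decoded".
        return None
```

With this shape, a garbage checkpoint file just causes the input to run as if for the first time, which matches the recovery behavior described above.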
