Hi All,
I am trying to test DLTK. When I run a search that sends data to the container, it shows this error:
unable to read JSON response from http://localhost:32775/fit. Either you have no MLTK Container running or you probably face a network or connection issue. Returned with exception (Expecting value: line 1 column 1 (char 0))
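For context on that message: the "Expecting value: line 1 column 1 (char 0)" part is Python's json.JSONDecodeError for an empty (or otherwise non-JSON) body, so MLTK most likely got no usable response from the container at all. A minimal sketch that reproduces that exact message:

```python
import json

# Parsing an empty string raises the exact exception quoted in the
# MLTK error message: it means the container returned no JSON body.
try:
    json.loads("")
except json.JSONDecodeError as err:
    print(err)  # Expecting value: line 1 column 1 (char 0)
```

So the error usually points at connectivity or at the container answering with something other than JSON, rather than at the SPL itself.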
The Docker container is running and I implemented the sample code in a Jupyter notebook. What is wrong here? The SPL I run is:
| inputlookup server_power.csv
| fit MLTKContainer mode=stage algo=linear_regressor epochs=10 batch_size=32 ac_power from total* into app:server_power_regression
Does anyone know what might cause this?
Hi @pdrieger_splunk,
I'm encountering the same issue as the people above when following the example steps from barebones_template.ipynb. For example,
| makeresults count=10
| streamstats c as i
| eval s = i%3
| eval feature_{s}=0
| foreach feature_* [eval <<FIELD>>=random()/pow(2,31)]
| fit MLTKContainer mode=stage algo=barebone_template time feature* i into app:barebone_template
will result in "unable to read JSON response from https://localhost:49155/fit. Either you have no MLTK Container running or you probably face a network or connection issue. Please investigate Splunk search.log or python logs for more details. Returned with exception (Expecting value: line 1 column 1 (char 0))"
If I run
| makeresults count=10
| streamstats c as i
| eval s = i%3
| eval feature_{s}=0
| foreach feature_* [eval <<FIELD>>=random()/pow(2,31)]
| fit MLTKContainer
I get the same error message. My only active container is the __dev__ container. Once I run the command, another container with the same name is started (which is probably not what I want).
My setup is: DSDL 5.1.0, MLTK 5.4.0, and a Docker container running the golden image (CPU, 5.1.0) as __dev__.
Is there a known way to fix the issue?
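One thing that can be checked outside of Splunk is whether anything is listening on the port the error message cites (49155 in my case). A small sketch, assuming Python is available on the machine running Splunk; host and port are taken from the error message and need adjusting to your setup:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Port taken from the error message above; adjust to your setup.
print(port_open("localhost", 49155))
```

If this prints False, the container's API port is not reachable and the JSON error is just a symptom of the connection failure; if it prints True, the container answers on TCP but the fit endpoint is returning a non-JSON body (for example due to an HTTP/HTTPS mismatch).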
Hi @Slawamba - can you look into the search.log (available via the job inspector from the search bar) and check which exact error appears close to the call of the fit endpoint? If it is an SSL-related error, please manually pull the 5.1.0 golden image again and see if that resolves the issue. If it's something different, please let me know.
Hi @brandy81, thanks for asking - it seems this is related to a known issue listed here: https://github.com/splunk/splunk-mltk-container-docker/issues/8 - as a workaround, you could leave out the "... into app:server_power_regression" part when using mode=stage.
I tried that, but it does not work; I still have the same problem.
How can I fix this?
Hi @landon, can you please share more details on the version you use and the specific issue you face?
Do you know how to handle this problem? Thanks
Hi, thanks for the reply.
I am trying to test DLTK. When I run a search that sends data to the container, it shows errors like this:
The version I used:
Deep Learning Toolkit 3.7.0
Splunk Machine Learning Toolkit 5.3.1
Python for Scientific Computing 3.0.2
Windows 10
Docker: I ran docker pull phdrieger/mltk-container-golden-image-cpu:3.7.0 and then started the image.
In addition, sometimes when I click the START button to run the container, it throws an error like this, but after a few tries it succeeds.
It would be a great help if someone could answer these questions. Thanks again.
Hi @landon, thanks again for sharing the details. It looks like this situation can occur when a synchronisation function has not yet completed. The list of running containers is actively checked and synced with a .conf file; this config is then used for communicating with the container. If the error is not persistent and no longer shows up, I assume things are working fine. Can you confirm, or do you see this error happening more frequently?
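To make that mechanism a bit more concrete: Splunk .conf files are INI-style stanza files, so the sync step presumably rewrites a mapping like the one below whenever the container list changes. Note that the stanza and key names here are invented purely for illustration, not DSDL's actual schema:

```python
import configparser

# Hypothetical stanza mapping a model/container to its API endpoint,
# in the INI-style format Splunk .conf files use (names are made up).
conf_text = """
[server_power_regression]
api_url = http://localhost:32775
mode = stage
"""

parser = configparser.ConfigParser()
parser.read_string(conf_text)
print(parser["server_power_regression"]["api_url"])  # http://localhost:32775
```

If the container restarts and gets a new port before this file is re-synced, Splunk would call a stale endpoint, which matches the transient error described above.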
Hi @pdrieger_splunk,
I am facing the same (or a similar) issue, and the advice given earlier to omit the "into app:appName" part did not resolve it for me.
The error message persists, and I was able to reproduce it on my private machine.
Versions/Data:
Windows 10; mltk-container: 3.8.0; MLTK: 5.3.1; PSC Linux: 3.0.2; PSC Windows: no version number listed
A Docker container running the golden image (CPU, 3.8.0) is up and running successfully.
I think I solved the issue I had. Even though I had a container running, it was the __dev__ container, not the one I wanted to send data to.
This confusion stemmed from the third bullet point in step 0 of the user guide, which says to have the __dev__ container running. I only figured it out thanks to the YouTube video, even though it isn't specifically mentioned there either.
I am uncertain whether this is obvious to everybody else, especially given my unfamiliarity with Docker, but in my opinion this would be a good addition to the user guide.