I have integrations built on UDP/TCP data inputs that index data correctly, but after a while they stop working.
In Splunk we have different types of data inputs configured, and only the UDP/TCP inputs stop working.
When this happens, we perform several validations.
After various tests, data ingestion recovers once we set the parameter disabled=0 in inputs.conf and restart Splunk.
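For context, this is roughly what that setting looks like in our inputs.conf (the ports here are examples, not our real ones):

```
# $SPLUNK_HOME/etc/apps/<your_app>/local/inputs.conf
# Example stanzas -- substitute your actual ports and sourcetypes

[udp://514]
disabled = 0

[tcp://9997]
disabled = 0
```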
We haven't reached any conclusion about what could cause this problem. We would like to understand the cause so we know how to act if the situation repeats itself.
Do you know what could cause this problem? Could you guide me or share ideas on what I could investigate?
Further to my previous information, last night I stayed up until midnight to see what happens. All worked well until 23:59; then data collection stopped. I restarted the server at 00:08 and everything returned to normal and has run ever since, and it also regenerated the same error message. So, midnight is the problem! I tried this on a Mac and also a Microsoft Win 10 desktop and both behave the same.
I don't have a workaround for this other than reverting to a previous release, so any help would be appreciated.
This data comes from a system I designed and built to control my solar and heat pump setup to optimise the energy used, so fewer than 20 data points are involved. What I really don't like is collecting zeros from my solar output every minute, all night. Surely if two or more consecutive values are the same, the system should not process them, as it just wastes disk space and processing power. If a user requests a report, these gaps can easily be filled in with the last value at the same interval.
The warning about ssg_modular input is completely unrelated. SSG is a solution which allows "full" Splunk users to access their dashboards with Splunk Mobile. It has nothing to do with TCP/UDP inputs (or any other inputs for that matter).
Anyway, as your problem seems to be different, you should rather create a new topic describing your problem in detail instead of attaching to an unrelated old topic - this way you'll gain more visibility.
The error message I receive every morning is as follows:
Missing or malformed messages.conf stanza for MOD_INPUT:INIT_FAILURE__ssg_subscription_modular_input_the app "splunk_secure_gateway"_Introspecting scheme=ssg_subscription_modular_input: script running failed (exited with code 1).20/11/2022, 05:16:27
I immediately thought this had been changed for the Free version only, as it refers to the subscription_modular_input. Maybe not?
I can redirect my data to your test rig if it helps; I just need an IP and port number!
I have a similar problem, but only with UDP inputs. This has worked successfully for four or so years with previously installed Splunk versions and only failed with the 9.0.1 release. If I restart the server, it always starts and runs until around 23:x.x. It seems to be time related: if I restart the server at 23:00 it still stops at 23:30 or thereabouts. If I restart at 00:10 it will run fine until the following evening.
I think what you mentioned is correct; those are the steps I would follow. Plus, check Splunk's internal logs for any errors from the host.
Were you able to figure out which component had failed when you checked during your last investigation? (Was it the disabled=0 parameter?)
Splunk definitely does not disable the input by itself. It could be an external factor: a change made by a user, an edit made directly to the backend .conf file, or a change made by some automated script in your environment.
If you mean there was no disabled parameter and you added disabled=0, then it was probably the Splunk restart that resolved your issue rather than the parameter itself, because the default value of the disabled parameter is already false.
Also, make sure the TCP/UDP inputs are specified in only one file, so it's easier to troubleshoot all the parameters. That said, you can always use btool to check the full merged configuration.
I hope this helps!!!
It indeed seems that something is overwriting your setting behind the scenes.
Next time this problem occurs, use btool with the --debug option to see whether the input is disabled (and if so, which file contains the effective setting) before you fiddle with the config and restart the process.
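For reference, a sketch of that check (the path assumes a default Linux install, and the port is an example; substitute your own stanza):

```
# Show all effective input stanzas and which file each setting comes from
$SPLUNK_HOME/bin/splunk btool inputs list --debug

# Or narrow it to the problematic stanza, e.g. a UDP input on port 514
$SPLUNK_HOME/bin/splunk btool inputs list udp://514 --debug
```

The --debug output prefixes each line with the file it was read from, so if some app or script is writing disabled=1 somewhere, this shows exactly which file wins the precedence merge.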
Hi, thanks for your suggestions.
We have no new evidence to explain what is happening.
We are going to build a monitor to identify next time what might be disabling the inputs.
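As a starting point for that monitor, here is a minimal sketch: a cron-driven probe that logs a timestamped line when the input port stops accepting connections, so we can pin down exactly when it dies. The host and port here are hypothetical examples; substitute your real input.

```shell
#!/usr/bin/env bash
# Probe a Splunk TCP data input and log whether it accepts connections.
# HOST and PORT are example values -- substitute your actual input.
# Schedule via cron, e.g.:
#   */5 * * * * /opt/scripts/probe_input.sh >> /var/log/input_probe.log
HOST=localhost
PORT=9997

# Try to open a TCP connection using bash's /dev/tcp pseudo-device;
# a failed connect (or missing listener) lands in the else branch.
if (exec 3<>"/dev/tcp/$HOST/$PORT") 2>/dev/null; then
  status=OK
else
  status=DOWN
fi

echo "$(date '+%F %T') tcp://$HOST:$PORT $status"
```

Note this only works for TCP inputs; UDP is connectionless, so for a UDP input you would instead send a test datagram periodically and search for it in the index.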
After you have updated to 9.x.y (where x and y > 0), you can try to check that by
which shows what has changed and when (not all changes are reported here, e.g. when Splunk is down and someone has changed the .conf files manually).
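The search itself seems to have been lost from the post above. As an illustration only (assuming Splunk 9.0+, where the configuration change tracker writes events to the _configtracker index), something along these lines shows edits to inputs.conf:

```
index=_configtracker data.path="*inputs.conf"
| table _time data.path data.action
```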