In the system/local directory, the configuration below is present:
disabled = false
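For context, a minimal monitor stanza in system/local/inputs.conf looks something like this (a sketch; the monitored path is a placeholder, since the original stanza isn't shown in full):

```ini
# system/local/inputs.conf -- hypothetical sketch of the stanza in question
[monitor:///var/log/app.log]
disabled = false
# note: if no 'index = ...' line appears in this stanza, the index is
# taken from whatever other .conf layer defines it, not from this file
```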
But surprisingly, data is still being sent to the main index.
Is there any other location that could be overriding the index and sending the data to main?
Run `/opt/splunk/bin/splunk cmd btool inputs list --debug > inputslist.txt` on your forwarder or target server.
This way you get all the configured inputs and can check whether other stanzas monitor the same path.
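To narrow that dump down quickly, you can grep it for index assignments; with `--debug`, btool prefixes every line with the .conf file it came from. The sample lines below stand in for your real inputslist.txt (they are illustrative, not your actual output):

```shell
# Sketch: find every 'index =' setting and the file it originates from.
# These sample lines mimic btool --debug output for demonstration only.
cat > /tmp/inputslist.txt <<'EOF'
/opt/splunkforwarder/etc/system/local/inputs.conf   [monitor:///var/log/app.log]
/opt/splunkforwarder/etc/system/local/inputs.conf   disabled = false
/opt/splunkforwarder/etc/system/default/inputs.conf index = default
EOF
grep "index = " /tmp/inputslist.txt
```

On your real file, the path prefix on each matching line tells you exactly which .conf layer is supplying the index.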
On executing this command I found that in the system\default directory, index = default.
I changed the index there to chilqa and restarted the UF.
This resolved the issue.
But what surprised me is that, per Splunk's .conf file precedence, the local file should have the highest priority, and its index and other configuration values should be the ones picked up.
Why, then, were the values from the default folder picked up and the data sent to "main"?
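One likely explanation (an assumption, since the full files aren't shown): Splunk merges .conf files per-setting, not per-file. The local file wins only for the settings it actually defines; any setting it omits still falls through to lower layers such as system/default. A hypothetical illustration:

```ini
# system/default/inputs.conf -- sketch
[default]
index = default        # applies to any stanza that does not set 'index'

# system/local/inputs.conf -- sketch (path is a placeholder)
[monitor:///var/log/app.log]
disabled = false       # local wins for 'disabled' only; since 'index' is
                       # not set here, the default layer's value applies
```

Under that reading, adding an explicit `index = chilqa` line to the local stanza would be the cleaner fix than editing system/default, which can be overwritten on upgrade.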
One more problem I can see: some of the data from the UF is going to main and some of it is going to ChilQA. It seems I need to debug this further. Please help if you have seen a similar issue before.
Probably it's a similar problem: first identify which logs are being indexed into the main index (find their hosts and sourcetypes), then debug the matching inputs.conf stanzas in the same way.
Now I have created one more problem for myself.
Hoping to get a definitive answer, I found that the UF version was 6.6.2 while our Enterprise instance is 6.5.3.
So I uninstalled the UF, restarted the UF server, installed UF version 6.5.2, and configured it the same way.
Now the UF has completely stopped sending data to the Enterprise instance.
I feel I am in big trouble; please help.
There are really no issues with running different versions of UF, it is in fact very common. Here is the documentation reference.
The first thing to always check is your forwarder's splunkd.log. If you are on Linux, it's at /opt/splunkforwarder/var/log/splunk/splunkd.log. Check for any error messages there. Feel free to share what you find, if you can't make sense of it.
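Filtering that log for warnings and errors is usually the fastest first pass. The snippet below builds a tiny stand-in log so the command is runnable as shown (the sample lines are generic placeholders, not real Splunk messages); on your forwarder you would point the same grep at the real splunkd.log path above:

```shell
# Sketch: filter a splunkd.log-style file for WARN/ERROR lines.
# Sample data for illustration only -- substitute your real log path.
cat > /tmp/splunkd_sample.log <<'EOF'
09-14-2017 10:30:01.000 +0000 INFO  ExampleComponent - informational sample line
09-14-2017 10:30:02.000 +0000 WARN  ExampleComponent - warning sample line
09-14-2017 10:30:03.000 +0000 ERROR ExampleComponent - error sample line
EOF
grep -E "ERROR|WARN" /tmp/splunkd_sample.log
```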
If you can run
/opt/splunkforwarder/bin/splunk cmd btool inputs list --debug and
/opt/splunkforwarder/bin/splunk cmd btool outputs list --debug
and share the output of both with us, we may be better able to help you.
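For comparison, a working UF outputs.conf usually looks something like this (a sketch; the indexer hostname and port are placeholders, and 9997 is only the conventional receiving port):

```ini
# etc/system/local/outputs.conf -- hypothetical sketch
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer.example.com:9997
```

If the btool outputs dump shows no tcpout group at all, or a wrong host/port, that alone would explain a UF that has stopped sending data.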
Very likely you didn't configure the index 'chilqa' on your indexer. Take a look at splunkd.log on your indexer and you might find a message like this:
Received event for unconfigured/disabled/deleted index='chilqa' with source='<yourlogsource>' host='your forwarder host' sourcetype='sourcetype::test' (1 missing total)
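If the index is indeed missing, defining it on the indexer is along these lines (a sketch using the default path conventions; restart the indexer afterwards):

```ini
# indexes.conf on the indexer -- hypothetical sketch
[chilqa]
homePath   = $SPLUNK_DB/chilqa/db
coldPath   = $SPLUNK_DB/chilqa/colddb
thawedPath = $SPLUNK_DB/chilqa/thaweddb
```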
I cannot find anything for the host name in splunkd.log.
However, I can only find the lines below in the /var/log/splunk directory.
metrics.log:09-14-2017 10:32:17.333 +0000 INFO Metrics - group=perhostthruput, series="db-containers", kbps=7.893489, eps=34.709250, kb=244.701172, ev=1076, avgage=5464.828067, maxage=29468
metrics.log:09-14-2017 10:32:17.334 +0000 INFO Metrics - group=tcpinconnections, 188.8.131.52:19163:9998, connectionType=cooked, sourcePort=19163, sourceHost=, sourceIp=184.108.40.206, destPort=9998, kb=327.58, _tcpBps=26608.31, tcpKBps=25.98, tcpavgthruput=25.98, _tcpKprocessed=327.58, tcpeps=52.19, processtimems=1, channewkBps=0.08, evtmisckBps=1.19, evtrawkBps=19.51, evtfieldskBps=5.00, evtfnkBps=1.27, evtfvkBps=3.73, evtfnstrkBps=1.19, evtfnmetadynkBps=0.00, evtfnmetapredefkBps=0.00, evtfnmetastrkBps=0.00, evtfvnumkBps=0.00, evtfvstrkBps=3.73, evtfvpredefkBps=0.00, evtfvofflenkBps=0.00, build=4b804538c686, version=6.6.2, os=Windows, arch=x64, hostname=db-containers, guid=A9AADA66-57BB-4410-A075-328AE2C24FA3, fwdType=uf, ssl=false, lastIndexer=None, ack=false
You need to send this configuration to the forwarding server and restart the Splunk instance there. Then search only for events that were forwarded and indexed AFTER the point the forwarder was restarted (old events will obviously stay in main). If new data still goes into main, then you must not have the chilqa index defined (or you have not deployed it to your indexer tier, or have not restarted the Splunk instances there), and your default/last-chance index is main, which is why the events end up there.
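To verify after the restart, a search along these lines (a sketch; the 15-minute window is arbitrary) shows whether fresh events are reaching chilqa and from which hosts:

```
index=chilqa earliest=-15m
| stats count by host, source, sourcetype
```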