1- Until now I was uploading my JSON-formatted data to Splunk manually, and fields were created automatically for all of my variables. Now we send the data over a TCP input, and I noticed that the fields are no longer created automatically, even though the JSON looks the same. It seems the JSON is not parsed the same way as when it is uploaded manually. Instead, I have to use the spath command to create the fields for my variables. Could someone tell me why it is needed for TCP, but not for manual uploads?
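For reference, this is roughly the search I have to run now to get the fields back; the index and sourcetype names here are just placeholders for mine:

```
index=my_index sourcetype=my_tcp_sourcetype | spath
```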
2- I also saw this documentation about best practices for JSON data: http://dev.splunk.com/view/logging-best-practices/SP-CAAADP6
There is a suggestion for getting the fields created automatically, which I tried to follow.
To my understanding, the suggested format uses = instead of :. When I made this change, I ran into another problem: the source type is no longer json. But it is not clear to me what the new source type should be once the JSON format is changed.
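To illustrate what I mean, my understanding is that the suggestion amounts to changing an event like the first line below into the second; the field names are just examples, not my real data:

```
{"user": "alice", "action": "login", "status": "ok"}
user=alice, action=login, status=ok
```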
Thank you very much for the helpful link. I don't think it is quite what I am looking for, however. In my case I have many JSONs being streamed. I can see how that link would help if Splunk were merging many JSONs together, but that is a different problem.
In my case, Splunk is already correctly recognizing each individual JSON as a separate event, whether I use manual upload or TCP. The difference is that with manual upload or a Splunk forwarder, the individual properties of the JSONs are identified by Splunk as fields, while over TCP each JSON is only treated as a raw string instead.
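From what I have read so far, I suspect the difference comes down to the sourcetype: a manual upload lets Splunk assign the pretrained _json sourcetype, which has search-time JSON extraction (KV_MODE = json) enabled, while a raw TCP input gets whatever default sourcetype I configured, without it. If that is right, a props.conf stanza along these lines might be what is missing; the sourcetype name below is a placeholder for whatever my TCP input assigns:

```
# props.conf -- sourcetype name is a placeholder
[my_tcp_sourcetype]
KV_MODE = json
```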