
Using preloaded sourcetypes

New Member

I am having difficulty setting up my forwarder with a preloaded source type. I have identified the source type as "access_combined".

On my inputs.conf on the forwarder I have something like this:
[monitor:///home/user/dev/build/apps/testproduct/main/logs/jetty/*]
sourcetype = access_combined
disabled = false

In my props.conf I have:
[source::/home/user/dev/build/apps/testproduct/main/logs/jetty/jetty*.log]
sourcetype = access_combined

I imagined this would be sufficient for the forwarder configs - but the logs are not being forwarded.

So:
1. I am not sure what this means for the indexer configs. If I am using a preloaded sourcetype (access_combined), does it then still require inputs.conf and props.conf on the indexer?
2. Also how do I uniquely identify logs from my forwarder within the indexer even if they have a preloaded sourcetype?

Thanks


Re: Using preloaded sourcetypes

Legend

If you put the sourcetype in inputs.conf on the forwarder, you don't need to put it anywhere else. You don't need props.conf on the forwarder or anything at all on the indexer. (And the term is "pretrained," not "preloaded.")

If this is in your inputs.conf, that should be fine - assuming that you have the file path correct:

[monitor:///home/user/dev/build/apps/testproduct/main/logs/jetty/*]
sourcetype = access_combined
disabled = false

However, there are many reasons why the forwarder might not be sending data to the indexer. The first thing to check: did you configure outputs.conf correctly? Here is a very simple outputs.conf:

[tcpout]
defaultGroup=my_indexers

[tcpout:my_indexers]
server=10.1.2.3:9997

Simply substitute the IP address of your indexer and supply the port number where the indexer is listening (i.e., the receiving port). If you have multiple indexers, make a comma-separated list.
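
In case the list syntax is unclear, here is a sketch of an outputs.conf pointing at two indexers (the second IP address is made up for illustration):

[tcpout]
defaultGroup=my_indexers

[tcpout:my_indexers]
server=10.1.2.3:9997,10.1.2.4:9997

By default, the forwarder will load-balance events across the servers in the list.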

For more information, see the documentation: Configure forwarders with outputs.conf


Re: Using preloaded sourcetypes

New Member

Thanks, that did the trick. However, I am still not sure how to uniquely identify the jetty logs (in the case above, for example) from the other logs that I have.

E.g., in our company all the logs are getting indexed under the "distApps" index, and if the source type is a pretrained sourcetype then I have no unique way of distinguishing my jetty logs from, say, someone else's jetty logs, as the index and the source type will be the same.

Is there a way to overcome this while keeping the index the same? Or is the best practice to change the index itself to match something that is unique to our product's jetty logs?

Also is there a way to access a pretrained source type in K-V format? Or do additional transforms have to take place to a pretrained sourcetype before we can do K-V queries?

Thanks!


Re: Using preloaded sourcetypes

Legend

If you want to distinctly identify the logs from a particular server, use the host field. Also, if you always want to see a particular set of hosts, tag the host field. That way, you can end up with a search like this:

tag=myservers errorCode=2076

instead of

(host=server1 OR host=server13 OR host=server23 OR host=server32) errorCode=2076
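
Tags can be managed in Splunk Web, or defined directly in tags.conf; a minimal sketch using the hypothetical host names above:

[host=server1]
myservers = enabled

[host=server13]
myservers = enabled

With those stanzas in place, tag=myservers matches events from either host.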

Re: Using preloaded sourcetypes

Legend

I am not sure what you mean by "K-V format" and "K-V queries." I assume that you want to search using key-value pairs. In the search language, that means you are using fields to search. Fields should be defined at search time, with very few exceptions.

A pretrained sourcetype generally comes with a predefined set of field definitions. In addition, Splunk will dynamically identify fields wherever it finds key-value pairs like name=Jones in the data. You can add more field definitions if you like. With some effort, you can even override the predefined fields if you really want to.

Run a search on the data and use the field picker on the left side of the search results to explore the fields that you have. If you want more fields, you might want to read this documentation: Data Interpretation: Fields and Field Extractions
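
As an illustration, a search-time field extraction can be added with an EXTRACT stanza in props.conf on the search head; this is only a sketch, and the field name and pattern here are invented:

[access_combined]
EXTRACT-myfield = myfield=(?<myfield>\S+)

This tells Splunk to extract a field called myfield at search time whenever that pattern appears in access_combined events.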
