Hi all,
How can we ingest all logs into a single sourcetype in Splunk Cloud ES?
Why would you want to do such a thing? Making all data have the same sourcetype will break field extraction and make ES almost useless.
What problem are you trying to solve by using a single sourcetype?
Thanks for the clarification. You don't need a single sourcetype, you need to fix your onboarding.
The "-too_small" suffix on a sourcetype means the ingested event had NO sourcetype assigned to it so Splunk tried to guess at one, but there was not enough data to make a good guess.
The fix is to ensure EVERY input has a sourcetype assigned to it in inputs.conf.
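For illustration only (the monitor path and sourcetype name below are placeholders, not values from your environment), an inputs.conf stanza with an explicit sourcetype might look like this:

```ini
# inputs.conf on the forwarder -- assigning a sourcetype
# prevents Splunk from guessing and appending "-too_small"
[monitor:///var/log/itops/app.log]
index = itops
sourcetype = itops
disabled = false
```

With `sourcetype` set explicitly, events from this input will never fall back to an auto-generated sourcetype name.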
I agree it is worth examining whether logs are worth ingesting at all. If you don't have a current or upcoming use case for the data then disable the input until you do.
"The fix is to ensure EVERY input has a sourcetype assigned to it in inputs.conf. "
Where can I find the inputs.conf file in Splunk Cloud ES, given that we do not have backend access to it?
Most of the inputs.conf files will not be in Splunk Cloud. They'll likely be in Universal Forwarders (UFs) scattered about your enterprise. If you have a Deployment Server to manage the UFs (and you really should) then the files will be there (in $SPLUNK_HOME/etc/deployment-apps).
Any inputs.conf files that are in Splunk Cloud would have been placed there by uploading apps. Modify the files in the apps and then re-upload them.
Hello,
In which app's folder do I need to modify inputs.conf before re-uploading it?
That's something only you or someone else familiar with your Splunk environment can answer specifically. The apps to update are those that define inputs and do not specify sourcetypes.
Hi,
I figured out the "-too_small" suffix on a sourcetype is coming from the index itops. What should the next step be? Do I need to change the sourcetype here, or should we stop ingesting this sourcetype altogether?
What would be the best solution?
Thanks
Knowing what index the offending sourcetype is in is only part of the battle. Next, use your Splunk skills to find which host(s) send the data and the original source file. This information will help you identify the inputs.conf file to change.
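As a starting point, a search along these lines can surface the hosts and source files behind the bad sourcetype (the index name comes from this thread; the `*-too_small` wildcard assumes the guessed sourcetype names end with that suffix):

```
index=itops sourcetype="*-too_small"
| stats count by host, source, sourcetype
```

The host and source values tell you which forwarders and which monitor stanzas to go look at.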
Once you've found the inputs.conf file, modify the appropriate stanza to include a sourcetype setting (sourcetype=itops, for example). The next step is to create a matching stanza in a props.conf file on the indexers. In an appropriate app, edit props.conf to include the "Great Eight" attributes.
[itops]
TIME_PREFIX =
TIME_FORMAT =
MAX_TIMESTAMP_LOOKAHEAD =
TRUNCATE =
SHOULD_LINEMERGE =
LINE_BREAKER =
EVENT_BREAKER_ENABLE = true
# Set to the same value as LINE_BREAKER
EVENT_BREAKER =
Set each value based on the events received with that sourcetype.
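As a sketch only (the values below assume single-line events that start with an ISO-8601 timestamp; your data will almost certainly differ), a completed stanza might look like:

```ini
[itops]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 30
TRUNCATE = 10000
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
EVENT_BREAKER_ENABLE = true
# Set to the same value as LINE_BREAKER
EVENT_BREAKER = ([\r\n]+)
```

Validate your choices by previewing a sample file in Add Data or by checking that events break and timestamp correctly after the change.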
Yes, another option is to not ingest this data. If you don't use it then there's little point in paying to ingest it.
Which app's inputs.conf and props.conf files do we need to modify here, and where do we add the appropriate stanza with the sourcetype setting? As we are using Splunk Cloud, is it on the DS?
Thanks..
I don't know what apps you have installed so I can only guess about where the changes should be made. Look for an app with "fluentd" in the name. If you don't have one, then look in splunkbase (apps.splunk.com) for one; otherwise, create one.
The props.conf file will be in the fluentd app on the indexers. The app is uploaded to the Splunk Cloud UI and is distributed to indexers automatically.
The inputs.conf file(s) will be on the forwarders, either in the same app or a different one. The app is loaded from the DS so that is where you should make changes.
Sorry for the vagueness of this answer, but there is no single way to do many things in Splunk and I don't know enough about your environment to be specific.
Hi,
How do we get to know whether these buffer logs are in use for a specific dashboard, use case, or other metrics? Can we check it with the REST API?
Thanks
Examine the logs to see if they contain information that satisfies your Splunk use cases.
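One rough way to check dashboards (a sketch, not a complete usage audit): dashboard XML is exposed through the `data/ui/views` REST endpoint, so you can search it for references to the index or sourcetype. The `itops` string below is taken from earlier in this thread; substitute whatever you are investigating:

```
| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search eai:data="*itops*"
| table title eai:acl.app eai:acl.owner
```

A similar search against `/servicesNS/-/-/saved/searches` can catch alerts and reports, but note this only finds saved objects; ad-hoc searches and external consumers won't show up.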