Monitoring Splunk

Misconfigured index value creating a problem

Contributor

Dear All,
We have one production search head, three clustered indexers, a cluster master, and a deployment server, all running Windows Server 2008 R2. The Splunk version is 6.1.3.

We planned to get data from some of our Linux boxes, so we wrote an inputs.conf file, but we specified the wrong index name in it.
I meant to use "oracledb" but we wrote "oraclelog".
However, when I searched for index usage in the _internal index, "oraclelog" shows about 28 GB, yet the "oraclelog" index does not exist on the indexers. So where is that data stored?
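For reference, the monitor stanza in our inputs.conf looked roughly like this (the monitor path and sourcetype below are placeholders; the mistake was the index line):

```ini
# inputs.conf on the Linux forwarder -- path and sourcetype are placeholders
[monitor:///var/log/oracle/alert.log]
index = oraclelog          # wrong -- should have been oracledb
sourcetype = oracle_log
```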

If it is stored in a different index, which index is that? And how can I copy the data from one index to another?

Also, if I now correct the inputs file to use the "oracledb" index, will I be unable to get the already-indexed data from the forwarder again? How should I handle this?

Thanks in advance

1 Solution

Builder

If no index is specified, the data goes to the main index. You can retrieve the logs by running:

index=main

I am not sure whether moving buckets from the main index to the correct index would work; I think it is better to delete the wrongly indexed logs and index them again.

To delete the logs, pick the source from the Fields sidebar and then delete the matching events, as in the example below. First, though, you need to grant delete permission to the user you are using:

index=main source=test.gz | delete

To give the user permission to delete from the web interface (I'll assume you are using the admin user):

Settings > Access controls > Users > admin

In the "Assign to roles" section, add can_delete, then save.

As you already mentioned, the forwarder will not index the logs again because it considers them already indexed. There are several workarounds to re-index them:

First:
The Splunk forwarder keeps track of processed files through the fishbucket directory ("/opt/splunkforwarder/var/lib/splunk/fishbucket/"). If you remove all of its contents, Splunk will reprocess every file under the monitored directories, which will send the required files to the correct index. However, this also causes duplicates, because all files are reprocessed, so first move any already-processed files you do not want re-indexed to an archive directory.
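A sketch of that procedure, assuming a default Linux forwarder install path and a hypothetical archive directory:

```shell
# Stop the forwarder before touching the fishbucket
/opt/splunkforwarder/bin/splunk stop

# Move files you do NOT want re-indexed out of the monitored directory
# (source and archive paths are hypothetical -- adjust to your setup)
mv /var/log/oracle/old/*.log /archive/oracle/

# Clear the fishbucket so every remaining monitored file is treated as new
rm -rf /opt/splunkforwarder/var/lib/splunk/fishbucket/*

/opt/splunkforwarder/bin/splunk start
```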

Second:
The second solution is to make a small edit to each file you want reprocessed, for example by adding a newline or a space. Splunk checks a checksum of the file to decide whether it has already been processed; unfortunately, changing the file name is not enough.
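As a concrete sketch of that idea, using a throwaway demo file rather than a real log, and a plain `cksum` standing in for Splunk's internal check:

```shell
# Demo: appending even a single blank line changes the file's checksum.
LOGFILE=/tmp/demo.log                     # throwaway demo file, not a real log
printf 'example event\n' > "$LOGFILE"     # pretend this is the already-indexed log
sum_before=$(cksum "$LOGFILE")
printf '\n' >> "$LOGFILE"                 # the small edit suggested above
sum_after=$(cksum "$LOGFILE")
[ "$sum_before" != "$sum_after" ] && echo "checksum changed"
```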

Third:
Use the oneshot command to index your log files with the correct options. See the following for more info:

http://docs.splunk.com/Documentation/Storm/Storm/User/CLIcommandsforinput
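A oneshot invocation along these lines re-indexes a single file into the right index (the file path and sourcetype below are placeholders):

```shell
# Run on the forwarder; path and sourcetype are placeholders
/opt/splunkforwarder/bin/splunk add oneshot /var/log/oracle/alert.log -index oracledb -sourcetype oracle_log
```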

More hints, and another approach, here:

http://answers.splunk.com/answers/72562/how-to-reindex-data-from-a-forwarder.html

Regards,
Ahmed


Contributor

Thanks for these details.

Thanks
Gajanan Hiroji
