We want to extract existing data (very little, less than a GB) from an index. Is there a best practice for running the dump command on an indexer cluster (3 nodes) for a specific index? Do I have to run this command individually on each indexer in the cluster to ensure all data in the cluster is extracted properly?
Thanks for the response @koshyk. The data size was a bit larger than I expected, so I ran the dump command on all indexers and imported the results. Seemed to do the trick. Thanks again for the suggestion though!
Commands:
Dump:
/opt/splunk/bin/splunk search 'index=my_old_idx latest="05/23/2019:11:33:55" | dump basefilename=my_old_idx.log'; find /opt/splunk/var/run/splunk/dispatch//dump/ -type f -name 'my_old_idx.log*' | xargs -I {} mv {} /opt/splunk/etc/slave-apps/new_app_here
Import:
find /opt/splunk/etc/slave-apps/new_app_here/ -type f -name 'my_old_idx.log*' | xargs -I {} /opt/splunk/bin/splunk add oneshot {} -sourcetype my_new_st_here -index my_new_idx -rename-source /dir/foo/bar
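For completeness, here is a rough sketch of how the dump step could be repeated across the cluster from a single shell loop. The hostnames idx1, idx2 and idx3 are placeholders (not from this thread), and it assumes SSH access to each indexer, the same paths as above, and that find is allowed to walk every dispatch directory, since the search ID in the dump path differs per run:
# Hypothetical wrapper; idx1..idx3 are placeholder indexer hostnames
for host in idx1 idx2 idx3; do
  ssh "$host" '/opt/splunk/bin/splunk search "index=my_old_idx latest=\"05/23/2019:11:33:55\" | dump basefilename=my_old_idx.log" && find /opt/splunk/var/run/splunk/dispatch/ -type f -path "*/dump/*" -name "my_old_idx.log*" -exec mv {} /opt/splunk/etc/slave-apps/new_app_here \;'
done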
For a small amount of data, it is much better to do an outputlookup of the data via the GUI, so you don't have to worry about individual nodes, etc.
index=whateverindex sourcetype=somesourcetype | table index, host, sourcetype, _raw | outputlookup dataoutput.csv
Then copy this dataoutput.csv for your purposes (or select only the relevant fields).
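If the extracted events ever need to go back into an index, one rough alternative to the oneshot import above (assuming the same dataoutput.csv and destination index from this thread) is to read the lookup back and write it out with the collect command; note that collect assigns the stash sourcetype by default:
| inputlookup dataoutput.csv | table host, sourcetype, _raw | collect index=my_new_idx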