Getting Data In

How do I configure Splunk to permanently index certain data into indexA instead of the current indexB?

New Member

Hi all!

I found in the forum that someone has already asked a similar question.

++++++Copy from another question and answer+++++

For example, if you are trying to move the sourcetype WinEventLog:Application from the main (default) index to the os index, something like this could get you started:

splunk cmd exporttool defaultdb/db_1262807912_1262278800_6 /dev/stdout -csv sourcetype::WinEventLog:Application | splunk cmd importtool os/db_temp /dev/stdin

++++++++++++++++++++++++++++++++++++++++++++

However, this only copies existing data from one index to another. If new data keeps coming in, it will still be indexed in the old index. Basically, my situation is as follows:

I installed a Splunk App that lets Splunk users investigate Apache web traffic. However, the App is configured, by default, to process and search data in the apache_^ index. Unfortunately, my Apache web traffic data is in the "apache" index. So how can I configure Splunk to permanently index Apache web traffic data into the "apache_^" index instead of the "apache" index?
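For reference, one common way to send incoming events to a different index at parsing time is an index-routing transform on the indexer (or a heavy forwarder). This is a sketch only: the sourcetype and stanza names below are illustrative, and FORMAT must match the app's index name exactly.

```ini
# props.conf (on the indexer or heavy forwarder)
# The sourcetype name here is an assumption -- use whatever your Apache data carries.
[access_combined]
TRANSFORMS-route_apache = route_apache_to_app_index

# transforms.conf
[route_apache_to_app_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = apache_^
```

With this in place, every event of that sourcetype is rewritten to land in the index named by FORMAT before it is written to disk, so no forwarder-side change is needed.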

0 Karma

Contributor

The best approach, if there is no data in the apache_^ index yet, is to change the index name in your app to the existing apache index.

If you cannot do that, then configure your universal forwarder (if you are using one), or the data source itself, to use the new index name. That way your new data will be ingested into the apache_^ index.
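On a universal forwarder this is the index setting on the input stanza. A minimal sketch, assuming the Apache log lives at a typical path (adjust the monitor path and sourcetype to your environment):

```ini
# inputs.conf on the universal forwarder (e.g. in an app's local/ directory)
[monitor:///var/log/apache2/access.log]
sourcetype = access_combined
index = apache_^
```

Restart the forwarder after the change; only data indexed from then on goes to the new index.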

If you need the old data too, then roll the buckets and copy the old data to the new index location. This is a somewhat complex process, and you need to take care of bucket ID conflicts. If you don't need the old data, go with the approach above.
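The bucket-copy step can be sketched roughly as below. This is not official Splunk tooling: the paths and the apache_new index name are hypothetical, and renumbering the trailing bucket ID (buckets are named db_&lt;newest&gt;_&lt;oldest&gt;_&lt;id&gt;) is one way to avoid the ID conflicts mentioned above.

```shell
# Rough sketch only -- paths are hypothetical. Stop Splunk before copying
# buckets into an index directory, and restart it afterwards.
SRC=/opt/splunk/var/lib/splunk/apache/db      # old index's db directory
DST=/opt/splunk/var/lib/splunk/apache_new/db  # new index's db directory

# Copy db_* buckets from one index directory to another, renumbering the
# trailing bucket ID so it does not collide with buckets already present.
copy_buckets() {
  src=$1; dst=$2
  # Find the highest bucket ID already in the destination; start one above it.
  next=$(ls -d "$dst"/db_* 2>/dev/null | sed 's/.*_//' | sort -n | tail -1)
  next=$(( ${next:-0} + 1 ))
  for b in "$src"/db_*; do
    [ -d "$b" ] || continue
    name=$(basename "$b")
    prefix=${name%_*}   # keep db_<newest>_<oldest>, drop the old bucket ID
    cp -r "$b" "$dst/${prefix}_${next}"
    next=$((next + 1))
  done
}
# Usage (with Splunk stopped): copy_buckets "$SRC" "$DST"
```

The same idea applies to the colddb directory if the old index has cold buckets.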

0 Karma

New Member

If I copy the data from the old index (apache) to the new index (apache_^), will this count against the daily license usage? I tried creating a new index (apache_^) and setting its home path to the same as the old index (apache). Afterward, it consumed 200% of the licensed daily volume AND congested the message queue.

0 Karma