I have been using Splunk and unfortunately put all the data into the main index, but now there is a need to allow multiple administrators to manage Splunk and its logs.
Each admin should only see a limited range of data, and I understand that this can be done by creating a separate index for each role (NW admin, App admin, etc.).
Is there any way to divide (split) an existing index into a few new indexes?
Data that exists in an index cannot readily be broken out into a new index. If you still have the original files, you could delete the existing data and re-index it into new indexes, but that's just about it...
Well, if I have already indexed a large amount of data and need the index broken into several indexes, then only new data can be put into the new indexes, simply by reconfiguring indexes.conf and inputs.conf...
But is there no other method? I would like to know if there is absolutely no way to break the existing index into several smaller indexes.
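For reference, routing new data to new indexes is just a matter of defining the indexes and pointing each input at them. A minimal sketch (the index names, monitor paths, and sourcetypes below are examples, not anything from your environment):

```ini
# indexes.conf -- define the new per-role indexes
[nw_logs]
homePath   = $SPLUNK_DB/nw_logs/db
coldPath   = $SPLUNK_DB/nw_logs/colddb
thawedPath = $SPLUNK_DB/nw_logs/thaweddb

[app_logs]
homePath   = $SPLUNK_DB/app_logs/db
coldPath   = $SPLUNK_DB/app_logs/colddb
thawedPath = $SPLUNK_DB/app_logs/thaweddb

# inputs.conf -- send each input's new events to its own index
[monitor:///var/log/network]
index = nw_logs
sourcetype = syslog

[monitor:///opt/myapp/logs]
index = app_logs
```

After a restart, newly indexed events land in the new indexes; events already in main stay where they are.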
It is not necessary to divide data into separate indexes for access control. You can use search filters on roles instead of index restrictions. If the data is already divided into identifiable groups of sources, sourcetypes, or host names, for example, it should be reasonably easy to configure search filters that limit each role to only the data it is authorized to see.
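A rough sketch of what that looks like in authorize.conf, assuming the data can be distinguished by sourcetype or host (the role names and filter values here are illustrative):

```ini
# authorize.conf -- each role only sees events matching its srchFilter,
# even though everything lives in the same index
[role_nw_admin]
srchFilter = sourcetype=syslog OR host=fw*

[role_app_admin]
srchFilter = sourcetype=app_logs OR source=*/myapp/*
```

The filter is AND-ed onto every search the role runs, so members of role_nw_admin never see events outside their filter regardless of which index they search.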
Unfortunately I have never tried this myself, so you might need to learn by trial (and error), but there is a way to export and manipulate indexed data.
Please read the wiki page located at: http://www.splunk.com/wiki/Community:Modifying_indexed_data_via_export_and_import
GK (or jrod), can you comment a bit more on this? For example, if you export the data using the export tool into a CSV, can you then not index the CSV itself? (I have a customer who asks the same question, though for him it's not a matter of different people reading different data, but of optimizing indexes, etc. P.S. the historical logs are no longer available, so they can't re-index the data.)
Neither export nor import is very fine-grained. You can only filter an export on indexed fields and full-text keywords (no search-time fields), and you would have to edit at least an entire bucket (up to 10 GB compressed) of data. The format may be CSV, but the fields are internal Splunk fields.
What exactly do they hope to achieve by "optimizing" indexes? If they don't know how exports and imports work, it seems unlikely to me that they have any idea whether their "optimizations" will help, have no effect, or hurt, and their expectations are quite likely based on misapprehensions.
Good point. So far I have told them to keep their existing indexes untouched and route all new data into the new, separated indexes. Depending on their retention policy, the old index will roll to frozen eventually anyway.