I am trying to route certain events to a specific index based on a field value. All events are sent to a heavy forwarder, which then forwards traffic to my indexers. The system that generates the events I want to reroute required a custom data input, so on the heavy forwarder I have a data input defined on a specific UDP port. The data input places all events from this system into a specific index, indexA for simplicity. Some events from this system contain a certain field value, we'll call it reRouteMe, in a certain field, fieldA. For these events, I want to move them to a different index, indexB.
So basically, for all events from this specific source: if fieldA = reRouteMe, the event should go to indexB; otherwise, it should go to its normal index, indexA, as defined in the data input. This is the current configuration on the HFWD, which is not working for some reason:
props.conf
[sourcetypeA]
TRANSFORMS-reroute = reRouteMe
transforms.conf
[reRouteMe]
SOURCE_KEY = _raw
REGEX = (reRouteMe)
DEST_KEY = _MetaData:Index
FORMAT = indexB
I've also tried applying this on my indexers to no avail, and I've tried REGEX variations such as fieldA=reRouteMe, etc. I'm leaning toward the issue being the custom data input defined on the forwarder interfering with the props / transforms, but I would think the props / transforms on the indexers would catch it.
Thoughts?
Thanks.

If I were you, I would just do it after the fact and copy the special events into a summary index using the collect command; this will be FAR easier (it moves from exceedingly difficult, perhaps even impossible, to absolutely trivial):
http://docs.splunk.com/Documentation/Splunk/6.2.5/SearchReference/collect
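A minimal sketch of such a search, using the placeholder names from the question (indexA, sourcetypeA, fieldA, reRouteMe, indexB):
index=indexA sourcetype=sourcetypeA fieldA=reRouteMe | collect index=indexB
Note that the target index must already exist, and collect writes the copies with sourcetype stash unless you override it.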
Could you do it using props and transforms?

What is the stanza for your inputs.conf?
Can you run btool on the sourcetype? Make sure the reRouteMe transform applies to the sourcetype.
$SPLUNK_HOME/bin/splunk btool props list sourcetypeA --debug
This will show (with file paths) the configurations applied to the sourcetype. So if the inputs.conf sourcetype matches the props.conf sourcetype, and btool reports it correctly, this should work. I verified the config in my test environment, so I know what you have listed will work.
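If you also want to confirm where the transform itself is being read from, something like this should work (assuming the stanza name reRouteMe from your transforms.conf):
$SPLUNK_HOME/bin/splunk btool transforms list reRouteMe --debug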
inputs.conf in $SPLUNK_HOME/etc/apps/search/local has all of my custom defined data inputs. This one in particular is [udp://515] and lists:
[udp://515]
index = indexA
source = sourceA
sourcetype = sourcetypeA
connection_host = dns
That being said, should this inputs.conf be located elsewhere, such as in $SPLUNK_HOME/etc/system/local where the props.conf and transforms.conf are located, or should I move props and transforms to be with this inputs.conf? I wouldn't think it matters, honestly.
If I run btool as you suggested, it pulls props from $SPLUNK_HOME/etc/system/local and $SPLUNK_HOME/etc/system/default, but the transforms it refers to come from $SPLUNK_HOME/etc/system/local.
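So, summarizing the paths mentioned so far, the layout on the heavy forwarder is roughly:
$SPLUNK_HOME/etc/apps/search/local/inputs.conf ([udp://515] stanza)
$SPLUNK_HOME/etc/system/local/props.conf ([sourcetypeA] stanza)
$SPLUNK_HOME/etc/system/local/transforms.conf ([reRouteMe] stanza)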

Do you have a sample event (sanitized) that you can share? All of these things should be working. You can keep them in system/local; that's fine for now.
Join us on IRC, #splunk on efnet.org, and we can discuss in real time what is happening and how to fix it.

We receive hundreds of events a day that have to be moved to this other index in a streaming fashion. If it were just a one-off case, I'd agree the collect approach would be a good solution.

Hundreds/day is nothing. I would still use the collect command, run every hour.
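A minimal sketch of what that hourly scheduled search could look like in savedsearches.conf (the stanza name and the one-hour time window are assumptions, using the placeholder names from the question):
[reroute_to_indexB]
search = index=indexA sourcetype=sourcetypeA fieldA=reRouteMe | collect index=indexB
enableSched = 1
cron_schedule = 0 * * * *
dispatch.earliest_time = -1h@h
dispatch.latest_time = @h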
So running [field]=[value] | collect index=[new] did not result in the events moving? They are still in the original index.
Actually, I missed that collect moves events to a summary index. We aren't permitted to have these events in this index, so I need to move these events to the new index and ensure they aren't in the original.

The collect command copies them; you still need to use delete to remove (hide) them from the original index. As far as going to a summary index, that only means that you will not be charged twice for the volume against your license, which is a very good thing.
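For example (a sketch with the placeholder names from the question; delete requires a role with the can_delete capability, and it only hides the events from search rather than reclaiming disk space):
index=indexA sourcetype=sourcetypeA fieldA=reRouteMe | delete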
I also missed that it doesn't keep the sourcetype, so they are all in the new index with the host field set to the search head and the sourcetype of stash. So I should be able to do [field]=[value] | collect index=[new] sourcetype=[sourcetype] host=[host] to maintain the metadata, correct?
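Something along these lines (a sketch with the thread's placeholders; collect's index, sourcetype, and host arguments each take a single static string, and hostA here is hypothetical):
index=indexA sourcetype=sourcetypeA fieldA=reRouteMe | collect index=indexB sourcetype=sourcetypeA host=hostA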
This resolved my issue well enough for now until I can figure out why the routing stopped working. Thanks for the help!
