Hi, we use a lot of base64-encoded fields to save bandwidth.
Is there any way to decode these fields at index time so the decoded values are automatically available in the index, and to remove the encoded ones? Ideally all of this would use the 'base64' macro. I have tried to do this with field transformations but failed. Thanks
IMO, the best solution is to stop using base64 to transmit data. It's not an encryption mechanism and probably is not saving that much bandwidth (if any).
If you can't get away from base64 then consider writing a modular input that reads the data, converts it to plain text, and writes it to stdout for Splunk to index.
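In case it helps, here is a rough sketch of that idea in Python (untested; the "payload=" field name and line format are placeholders, and it reads stdin for illustration where a real input would read your actual source):

#!/usr/bin/env python3
# Hypothetical sketch of the scripted/modular-input idea: read events,
# base64-decode the value of an assumed "payload=" field, and write
# plain text to stdout for Splunk to index.
import base64
import re
import sys

B64_FIELD = re.compile(r'payload=([A-Za-z0-9+/=]+)')  # assumed field name/format

def decode_line(line):
    # Swap the base64 value for its decoded form; pass the line through
    # unchanged if the field is missing or the value will not decode.
    match = B64_FIELD.search(line)
    if not match:
        return line
    try:
        decoded = base64.b64decode(match.group(1)).decode('utf-8', errors='replace')
    except ValueError:
        return line
    return line[:match.start(1)] + decoded + line[match.end(1):]

for line in sys.stdin:
    sys.stdout.write(decode_line(line))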
Besides the fact that I agree that relying on forwarder/SSL compression will probably give you the best bandwidth utilization, you can consider using INGEST_EVAL to create new fields at index time.
Note, however, that this creates indexed fields, which will increase your storage utilization for index files. Depending on the cardinality of the data, this storage increase may be significant.
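For reference, a minimal untested sketch of the INGEST_EVAL mechanics (stanza, sourcetype, and field names are placeholders). It only demonstrates creating an indexed field at index time; as noted below, I don't know of a native eval function that decodes base64:

# transforms.conf
[add_raw_length]
# INGEST_EVAL runs an eval expression at index time and stores the
# result as an indexed field (here, the raw event length).
INGEST_EVAL = raw_length=length(_raw)

# props.conf
[your:sourcetype]
TRANSFORMS-ingest = add_raw_length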
@richgalloway, can you please advise what the best solution is in that case to have this data decoded into the same index as a different field? I need this data readily available. Thanks
Macros are search-time features which cannot be used at index time. I'm not aware of any index-time feature that can be used to decode base64 data.
Another way to reduce bandwidth use is to enable SSL compression.
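Something like this on the forwarder side (untested sketch; the group name, host, and port are placeholders, and the receiving indexer must enable the matching option on its input):

# outputs.conf on the forwarder
[tcpout:my_indexers]
server = indexer.example.com:9997
# For non-SSL connections; the indexer's inputs.conf stanza needs
# compressed = true as well.
compressed = true
# For SSL connections, use this setting instead:
# useClientSSLCompression = true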