Fields cannot be dynamically added to data models in Splunk 6.0. The data model feature assumes that you know the structure of your data when building the model.
If you have a list of all the fields that could possibly occur, you could create a data model containing all of them, and simply allow some attributes to have zero matching fields on some of your installs.
You could also look into building a search-based object that creates fields with consistent names, and then building a data model on top of that. Search-based objects can incorporate arbitrary Splunk searches, which gives you a lot of power. The downside is that they can't be accelerated.
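As a sketch of such a search-based object, the following search maps wildcarded, vendor-specific fields onto one consistent field; the index, sourcetype, and field names are hypothetical placeholders:

```
index=main sourcetype=vendor_logs
| foreach vendor_*
    [eval normalized=coalesce(normalized, '<<FIELD>>')]
```

A data model object built on top of this search only ever sees the stable `normalized` field, regardless of which `vendor_*` fields a particular install happens to produce.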
If these fields are all defined in the Splunk instance where the data model is being created, Splunk should be able to auto-discover them based on your field definitions.
To do this, when editing your Object in this Data Model, go to "Add Attribute" -> "Auto-Extracted" and then save.
I understand what you are saying, but that does not help. The field names depend on what is installed in the customer's network and there is no finite list of field names that can occur.
I guess it boils down to the fact that data models just do not work with wildcarded field names (yet). In other words, data models do not expose the full functionality of Splunk.
Thanks very much for your time!
Part of the problem in your situation is that data models can't easily be built in a dynamically updated way. http://docs.splunk.com/Documentation/Splunk/6.0/Knowledge/Designdatamodelobjects#Add_an_auto-extract... indicates that Splunk will present any new fields it discovers each time you run the Auto-Extract function. Do this via the API on a scheduled basis, and you can keep your data model updated regularly. See http://docs.splunk.com/Documentation/Splunk/6.0/RESTAPI/RESTknowledge#datamodel.2Fmodel.2F.7Bname.7D for the REST API docs.
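As a rough sketch of that scheduled approach, the helpers below build the endpoint URL and update payload for the `datamodel/model/{name}` REST endpoint linked above. The host, app, and model name are hypothetical placeholders, and the actual GET/merge/POST round trip would be done with an authenticated HTTP client from cron or a scheduler:

```python
# Sketch only: helpers for keeping a data model updated via Splunk's
# REST API. SPLUNKD, APP, and MODEL are hypothetical placeholders.
import json
import urllib.parse

SPLUNKD = "https://localhost:8089"
APP = "search"
MODEL = "my_model"

def model_endpoint(base, app, name):
    # REST endpoint for reading or updating a data model definition
    return "%s/servicesNS/nobody/%s/datamodel/model/%s" % (
        base, app, urllib.parse.quote(name))

def update_payload(model_json):
    # The model definition is POSTed back as JSON inside the
    # form-encoded "description" parameter
    return urllib.parse.urlencode({"description": json.dumps(model_json)})

# A scheduled job would: GET model_endpoint(...), merge any newly
# auto-extracted fields into the returned JSON, then POST
# update_payload(...) back with authentication.
```

Pair this with a regular run of Auto-Extract and the model picks up newly discovered fields without manual editing.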
I suggest you put your search-time extraction config files on the deployment server so that every instance pulls the same field definitions.
If you are not accelerating this data model, you could run auto-extract and update the models with scripting (the REST API or SDKs are likely approaches).
Alternatively, go to each Splunk instance, gather up the various extractions for the relevant source(s) and sourcetype(s), merge them all, and then generate your data models.
The last approach would be to manually set up each attribute based on the collectively known extractions, using the Eval or Regex options under Add Attribute.
As I wrote, the number of fields is unknown. My Splunk instance may have 5 fields matching the wildcard, while another instance may have 10 different ones.