Getting Data In

How to prevent ingestion of data not pushed from the deployment server?

goldone
Engager

Hello,

To protect our server performance and data quality: I found some customers trying to onboard their data by themselves, which causes a lot of operational overhead. How can I prevent this? Will a future version of Splunk have such a feature?

Scenario 1:

Customers built their own apps, installed them on their own forwarders, with or without new sourcetype names and without parsing configuration, and these apps were not pushed from our deployment server. How can we detect and block this data instead of blocking the whole forwarder?

Scenario 2:

For both HEC and TCP/UDP inputs, how can we prevent customers from overriding the index name and sourcetype name, which should be the ones configured on our side? If we detect unwanted names, we can drop the data before it reaches our indexers.


gcusello
SplunkTrust

Hi @goldone,

Regarding scenario 1, my first suggestion would be not to give users administrator permissions, so that they cannot modify the forwarder configuration files.
If that isn't possible, the second would be to cut off the hands of those who go it alone without warning!
Seriously, I believe that defining management rules for data ingestion is the basis of clean data management.
In addition, if a user changes the forwarder configuration in an app managed by the deployment server, it is only a momentary change, since the DS restores the correct configuration at the next phone-home.
If someone a little more experienced in Splunk modifies files under system/local or system/default (which are not managed by the DS), you can create a script so that, with each update by the DS, those changes are discarded.
For HEC and network inputs the suggestion would be the same; in addition, I would tell you not to grant the rights to create/modify inputs on search heads and indexers.
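As a sketch of the "detect and block" idea for scenario 1: once you identify an unapproved sourcetype, you can route its events to the nullQueue on the indexers (or on an intermediate heavy forwarder) instead of blocking the whole forwarder. The sourcetype name `rogue_sourcetype` below is an illustrative assumption, not from this thread:

```ini
# props.conf (on the indexers / heavy forwarders)
# Drop all events that arrive with an unapproved sourcetype.
[rogue_sourcetype]
TRANSFORMS-drop_rogue = drop_all

# transforms.conf
# Matching any character sends the event to the nullQueue (discard).
[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
```

This drops data per sourcetype rather than per host, so approved inputs from the same forwarder keep flowing.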

Ciao.

Giuseppe


codebuilder
Influencer

Another possible option would be to configure indexer discovery and not share the pass4SymmKey (distribute it only via the DS).
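A hedged sketch of that setup, assuming a single output group named `cluster1` and an illustrative cluster manager host; the forwarders only ever receive the pass4SymmKey inside an app pushed from the deployment server:

```ini
# server.conf on the cluster manager
[indexer_discovery]
pass4SymmKey = <secret known only to the DS-pushed app>

# outputs.conf on the forwarders (distributed via the deployment server)
[indexer_discovery:cluster1]
pass4SymmKey = <same secret>
master_uri = https://cm.example.com:8089

[tcpout:cluster1_out]
indexerDiscovery = cluster1

[tcpout]
defaultGroup = cluster1_out
```

A forwarder without this app never learns the key, so it cannot discover the indexers to send to. (On recent Splunk versions the setting is named manager_uri rather than master_uri.)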

----
An upvote would be appreciated and Accept Solution if it helps!

goldone
Engager

Hi Giuseppe

Thank you for your quick response. Unfortunately, the customers own the hosts where the Splunk forwarders are installed, and their own configuration sits not only under /system/local but can also be found under /etc/apps/SplunkUniversalForwarder/local. Pushing an app with a script may help; it needs a POC.

For the HEC/network inputs: for example, once a customer has the token or port, they can specify the index name and sourcetype name in the curl command before sending to the endpoint, even though they do not have permission to modify anything on the SH or IDX.
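One control worth a POC for exactly this case: the HEC token stanza supports an `indexes` allowlist, so events whose payload names any other index are rejected at the endpoint. The token value, index, and sourcetype names below are illustrative assumptions:

```ini
# inputs.conf on the HEC receiver
[http://customer_a_token]
token = 11111111-2222-3333-4444-555555555555
index = customer_a_main       # default index for this token
indexes = customer_a_main     # allowlist: events targeting any other index are refused
sourcetype = customer_a:json  # default sourcetype
```

Note that `indexes` only constrains the index; a sourcetype sent by the client is not restricted this way and would have to be rewritten with index-time transforms instead.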

Could you please advise further, thanks.


gcusello
SplunkTrust

Hi @goldone,

about the first scenario, putting configurations under the SplunkUniversalForwarder app isn't a best practice, because it's a Splunk-reserved app.

About HEC and network inputs, you could override the sourcetype and index at index time, discarding the wrong values.
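A minimal sketch of such an override, assuming an illustrative incoming sourcetype `customer_sent_st` that should be forced into an approved index and sourcetype:

```ini
# props.conf (indexers / heavy forwarders)
[customer_sent_st]
TRANSFORMS-enforce = force_index, force_sourcetype

# transforms.conf
# Rewrite the target index regardless of what the sender specified.
[force_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = approved_index

# Rewrite the sourcetype to the approved one.
[force_sourcetype]
REGEX = .
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::approved_st
```

These run at parse time, so the unwanted names never reach the indexes.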

Anyway, as I said, you should manage these problems with an operating instruction that defines exactly how to send logs to Splunk.

Ciao.

Giuseppe
