Deployment Architecture

How to correctly monitor a .csv file on a Heavy Forwarder instance?

Luis_Torres
Loves-to-Learn Lots

Hello

Can someone recommend the steps needed to correctly monitor a .csv file on the Heavy Forwarder instance and then search the resulting index from the Search Head? It sounds easy, but I can't get it to work. I have a deployment server on another instance, and I suppose I should deploy the configuration from there, but I can't connect to it.

Any help is welcome. Thank you.


anilchaithu
Builder

@Luis_Torres 

How is this csv file generated on the HF?

You can always use a monitor stanza in inputs.conf ($SPLUNK_HOME/etc/system/local/inputs.conf) to monitor and index this file. Please refer to the Splunk docs for the config settings:

https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/Monitorfilesanddirectorieswithinputs.conf

[monitor://<path>]
index = index_name
sourcetype = sourcetype_name
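Once the stanza is in place and splunkd has restarted, you can verify that the file is being picked up with the standard CLI on the HF (a quick sanity check, not required):

$SPLUNK_HOME/bin/splunk list monitor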


Coming to your architecture: to connect the HF to a DS, you need to configure it as a deployment client. This can be done with deploymentclient.conf ($SPLUNK_HOME/etc/system/local/deploymentclient.conf). Please refer to the Splunk doc here:

https://docs.splunk.com/Documentation/Splunk/8.0.5/Updating/Configuredeploymentclients

[deployment-client]

[target-broker:deploymentServer]
targetUri = deploymentserver.splunk.mycompany.com:8089
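Equivalently, you can make the same connection from the CLI on the HF; this writes deploymentclient.conf for you (the hostname here is the same example value as above):

$SPLUNK_HOME/bin/splunk set deploy-poll deploymentserver.splunk.mycompany.com:8089
$SPLUNK_HOME/bin/splunk restart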


Once you've made this connection, you need to create a server class and an app (with inputs.conf) on the DS to distribute to the HF. A minimal sketch of that server class follows.
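Here is what serverclass.conf on the DS might look like (the class name hf_csv_inputs, the whitelist pattern, and the app name my_csv_input are placeholders for your environment):

[serverClass:hf_csv_inputs]
# Hypothetical pattern matching the HF's hostname
whitelist.0 = my-heavy-forwarder*

[serverClass:hf_csv_inputs:app:my_csv_input]
# Restart splunkd on the client after the app is deployed
restartSplunkd = true
stateOnClient = enabled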

For now, the first option (a local inputs.conf on the HF) looks simpler; you can try that.

Hope this helps.


Luis_Torres
Loves-to-Learn Lots

Hello, @anilchaithu.

The .csv is generated by a Python script that calls an API. Both the script and the .csv live in the /bin directory of my application on the Heavy Forwarder instance.

I have copied the app to the rest of the instances (except the UF).

I'm looking into what you suggested.

Thanks for the answer.


richgalloway
SplunkTrust

Doing this is pretty straightforward.  Using a deployment server increases the complexity a little, but it's the right way to go.  I'll assume the HF is already a deployment client.

Start with a new app for this input.  I'll call it my_csv_input.  Create the $SPLUNK_HOME/etc/deployment-apps/my_csv_input/default directory on the DS.  In that directory, create the files inputs.conf and props.conf.
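For example, on the DS (assuming a default Linux install; my_csv_input is the app name chosen above):

mkdir -p $SPLUNK_HOME/etc/deployment-apps/my_csv_input/default
touch $SPLUNK_HOME/etc/deployment-apps/my_csv_input/default/inputs.conf
touch $SPLUNK_HOME/etc/deployment-apps/my_csv_input/default/props.conf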

Inputs.conf will contain

[monitor:///path/to/file.csv]
sourcetype = mysourcetype
# Index must exist
index = foo
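Since the index must already exist, a minimal sketch of the matching indexes.conf stanza on the indexing tier (assuming default volume paths; foo is the index name used above) would be:

[foo]
homePath = $SPLUNK_DB/foo/db
coldPath = $SPLUNK_DB/foo/colddb
thawedPath = $SPLUNK_DB/foo/thaweddb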

Props.conf will contain

[mysourcetype]
INDEXED_EXTRACTIONS = csv
# Change to the field that holds the timestamp values
TIMESTAMP_FIELDS = foo
# Change to the format string that matches those timestamp values
TIME_FORMAT = %m/%d/%Y %H:%M:%S

There are other possible settings, but we'll need to know more about the CSV file to set them.
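For instance, with a hypothetical CSV whose header and first data row look like this:

timestamp,host,value
04/15/2020 13:45:00,web01,42

the settings above would become TIMESTAMP_FIELDS = timestamp, and the TIME_FORMAT shown already matches that date layout.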

Have the DS send this app to the HF and let it restart.  That should get the data indexed.
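To push the app without restarting the DS itself, the standard reload command works (run on the DS after creating or changing the app):

$SPLUNK_HOME/bin/splunk reload deploy-server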

---
If this reply helps you, Karma would be appreciated.