Hi team,
We are using Splunk Cloud and an on-prem HF.
We are receiving Juniper logs as syslog, and we are using Splunk_TA_juniper in Splunk Cloud.
How do I do field extraction from my end?
Thanks.
Also, once the add-on is installed, is any configuration needed, or will it extract fields automatically?
Hi @sekhar463,
I assume you have already configured the input from Juniper.
If not, follow the instructions at https://docs.splunk.com/Documentation/AddOns/latest/juniper/About?_ga=2.107177089.2041590115.1672646....
Ciao.
Giuseppe
We already configured the inputs for syslog,
but if we configure the input again using the add-on, will it lead to duplicate logs?
And if we install the TA on the HF, what configuration do we need for extraction?
Does it require inputs or props?
Hi @sekhar463,
I assume you are ingesting logs from Juniper using an HF.
Is the HF where you enabled the inputs the same one that sends logs to Splunk Cloud, or is there another intermediate HF?
On the first HF you have to use Splunk_TA_juniper, enabling only the inputs you want; don't use another app or inputs outside the add-on.
In this way the logs are ingested correctly, parsed correctly by the add-on, and sent to Splunk Cloud.
On Splunk Cloud you have to install the same TA for the search-time configurations.
Ciao.
Giuseppe
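As an illustration of what the HF-side input might look like, here is a minimal inputs.conf sketch. The port, index, and sourcetype below are assumptions for the example; confirm the sourcetypes Splunk_TA_juniper actually expects against its documentation before using them.

```
# inputs.conf on the heavy forwarder (port and index are hypothetical)
[udp://9514]
# Sourcetype that the add-on recognizes and parses; verify against
# the Splunk_TA_juniper documentation for your log format
sourcetype = juniper
index = network
disabled = 0
```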
Yes, we have multiple HFs receiving syslog for Juniper and sending it to Splunk Cloud.
So how can we install the add-on, and do we need to disable the syslog inputs?
Hi @sekhar463,
you don't need to disable all syslog inputs, only the ones for Juniper.
At the same time, Splunk_TA_juniper must be installed only on the HFs used to ingest Juniper logs, not on all HFs.
Ciao.
Giuseppe
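For example, on the Juniper-facing HF the old generic syslog stanza could be disabled while a new input carrying the add-on's sourcetype takes over. The ports and sourcetype here are hypothetical; adjust to your environment:

```
# inputs.conf on the HF receiving Juniper syslog (ports are hypothetical)

# Old generic syslog input, disabled only for the port carrying Juniper traffic
[udp://514]
disabled = 1

# New input on a separate port, tagged with a sourcetype the add-on parses
[udp://9514]
sourcetype = juniper
disabled = 0
```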
Okay, so we can disable the syslog input, and once the add-on is installed we need to create new inputs in the TA, right?
Where is that done: in the inputs.conf file, or on the device end when configuring it to send syslog?
Hi @sekhar463,
if syslog from Juniper arrives at Splunk Cloud passing through one or more HFs, you have to install Splunk_TA_juniper on those HFs as well, because data are cooked on the first full Splunk instance (HF) they pass through.
Ciao.
Giuseppe
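A sketch of why the first full instance matters: index-time (parsing) settings like the ones below take effect only on the first HF or indexer the data crosses, so the add-on holding them must sit there. These stanza contents are illustrative, not the add-on's actual configuration:

```
# props.conf (illustrative) -- parsing-time settings are applied on the
# first heavy forwarder the data passes through, not downstream
[juniper]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# Hypothetical transform name: routes events to more specific sourcetypes
TRANSFORMS-juniper_st = juniper_sourcetype_routing
```

Search-time settings (field extractions, lookups, tags), by contrast, must live where searches run, which is why the same TA is also installed on Splunk Cloud.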
To be precise, the data is _parsed_ on the HFs, not cooked. Data is cooked on UFs and sent cooked to HFs/indexers. It is parsed on HF and sent to indexers (hence the advice to avoid using HF unless absolutely needed - parsed data is much bigger in volume than cooked data).