Getting Data In

Field extraction unsuccessful at index time

Karthikeya
Communicator

I am trying to extract a field at index time. I have put the following on my cluster master and pushed it to the indexers, but the field is not getting extracted -

transforms.conf

[idname_extract]

SOURCE_KEY = _raw

REGEX = rsaid="\/[^\/]+\/([^\/]+)\/

FORMAT = idname::$1

WRITE_META = true
 
props.conf
 
TRANSFORMS-1_extract_idname = idname_extract
 
The field is not extracted once indexing is done.
 
But when I run the extraction in search it works, which means my regex is correct; it is only failing at index time.
 
|rex "rsaid=\"\/[^\/]+\/(?<idname>[^\/]+)\/"
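The capture logic itself can be sanity-checked outside Splunk; a minimal sketch using sed against the sample event shown below:

```shell
# Extract the second path segment from the rsaid value, mirroring the
# transform's REGEX: rsaid="\/[^\/]+\/([^\/]+)\/
echo 'rsaid="/saturn-X-01/SATURN-CK-GSPE/v-saturn.linux.com-44"' \
  | sed -E 's|.*rsaid="/[^/]+/([^/]+)/.*|\1|'
# prints: SATURN-CK-GSPE
```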
 
Raw field - 
 
rsaid="/saturn-X-01/SATURN-CK-GSPE/v-saturn.linux.com-44"
 
I need to extract idname=SATURN-CK-GSPE at index time. Am I missing something?
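One thing worth checking: props.conf settings must live under a stanza ([<sourcetype>], [source::...], or [host::...]) that matches the incoming data, and the TRANSFORMS line as pasted above shows no stanza. A sketch, with my_syslog as a placeholder sourcetype name:

```
# props.conf -- stanza name must match your actual sourcetype (my_syslog is a placeholder)
[my_syslog]
TRANSFORMS-1_extract_idname = idname_extract

# transforms.conf (unchanged from the post above)
[idname_extract]
SOURCE_KEY = _raw
REGEX = rsaid="\/[^\/]+\/([^\/]+)\/
FORMAT = idname::$1
WRITE_META = true
```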
 

Karthikeya
Communicator

I even put fields.conf on the deployer and pushed it to the SHs, but the result is the same.


Karthikeya
Communicator
  • Deploy the fields.conf changes to the search head.

Do I need to do this as well? If yes, how do I need to configure it?
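For reference, a fields.conf entry for an indexed field is typically minimal (field name taken from the transform above; a sketch per the fields.conf spec):

```
# fields.conf (deployed to the search heads)
[idname]
INDEXED = true
```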


kiran_panchavat
Influencer

@Karthikeya 

Additionally, I have tested this in my lab, and it's working fine. Please take a look.

(screenshots of the working lab configuration and search results attached)

 

Sample events:-

rsaid="/alpha-A-01/ALPHA-CK-GSPE/v-alpha.linux.com-101"
rsaid="/beta-B-02/BETA-CK-GSPE/v-beta.linux.com-102"
rsaid="/gamma-G-03/GAMMA-CK-GSPE/v-gamma.linux.com-103"
rsaid="/delta-D-04/DELTA-CK-GSPE/v-delta.linux.com-104"
rsaid="/epsilon-E-05/EPSILON-CK-GSPE/v-epsilon.linux.com-105"
rsaid="/zeta-Z-06/ZETA-CK-GSPE/v-zeta.linux.com-106"
rsaid="/eta-H-07/ETA-CK-GSPE/v-eta.linux.com-107"
rsaid="/theta-T-08/THETA-CK-GSPE/v-theta.linux.com-108"
rsaid="/iota-I-09/IOTA-CK-GSPE/v-iota.linux.com-109"
rsaid="/kappa-K-10/KAPPA-CK-GSPE/v-kappa.linux.com-110"
rsaid="/lambda-L-11/LAMBDA-CK-GSPE/v-lambda.linux.com-111"
rsaid="/mu-M-12/MU-CK-GSPE/v-mu.linux.com-112"
rsaid="/nu-N-13/NU-CK-GSPE/v-nu.linux.com-113"
rsaid="/xi-X-14/XI-CK-GSPE/v-xi.linux.com-114"
rsaid="/omicron-O-15/OMICRON-CK-GSPE/v-omicron.linux.com-115"
rsaid="/pi-P-16/PI-CK-GSPE/v-pi.linux.com-116"
rsaid="/rho-R-17/RHO-CK-GSPE/v-rho.linux.com-117"
rsaid="/sigma-S-18/SIGMA-CK-GSPE/v-sigma.linux.com-118"
rsaid="/tau-T-19/TAU-CK-GSPE/v-tau.linux.com-119"
rsaid="/upsilon-U-20/UPSILON-CK-GSPE/v-upsilon.linux.com-120"
rsaid="/phi-F-21/PHI-CK-GSPE/v-phi.linux.com-121"
rsaid="/chi-C-22/CHI-CK-GSPE/v-chi.linux.com-122"
rsaid="/psi-PS-23/PSI-CK-GSPE/v-psi.linux.com-123"
rsaid="/omega-W-24/OMEGA-CK-GSPE/v-omega.linux.com-124"

 

Did this help? If yes, please consider giving kudos, marking it as the solution, or commenting for clarification — your feedback keeps the community going!

Karthikeya
Communicator

@kiran_panchavat I am not sure - I configured it the same way, but no luck....


gcusello
SplunkTrust
SplunkTrust

Hi @Karthikeya ,

two points:

In the first case, follow the instructions.

In the second case, put the conf files on the first Heavy Forwarder that the data passes through.

Ciao.

Giuseppe


Karthikeya
Communicator

@gcusello there are no heavy forwarders. We have a syslog server with a UF installed on it that forwards the data to our deployment server. I have written props and transforms on the DS, pushed them to the CM, and then to the indexers. Where am I making a mistake?


kiran_panchavat
Influencer

@Karthikeya 

Your configurations need to be applied either by a Heavy Forwarder before indexing, or by the indexers during indexing. Since you don't have a HF, focus on correctly configuring index-time extraction on your indexers. Simply having the files on the DS isn't enough; they must be deployed to the indexers, and the indexers must be configured to use them. You can use the Cluster Master to push the configurations to the indexers.
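As a sketch of that push (paths and app name are placeholders; the directory is manager-apps on newer Splunk versions, master-apps on older ones):

```
# On the cluster manager, place the app at:
#   $SPLUNK_HOME/etc/manager-apps/<your_app>/local/props.conf
#   $SPLUNK_HOME/etc/manager-apps/<your_app>/local/transforms.conf
# then validate and push the bundle to the peer indexers:
$SPLUNK_HOME/bin/splunk validate cluster-bundle
$SPLUNK_HOME/bin/splunk apply cluster-bundle --answer-yes
# and check that all peers received it:
$SPLUNK_HOME/bin/splunk show cluster-bundle-status
```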


Karthikeya
Communicator

@kiran_panchavat I am doing the same now... I created an app under etc/deployment-apps with props and transforms in it, pushed it to the CM, and then pushed it to the indexers through the CM. The remaining configs are working fine; I am only facing this one issue.


livehybrid
Super Champion

Hi @Karthikeya 

You say you pushed from the DS to the CM and then to the IDXs? This isn't a supported architecture, so be careful here.

If this is the setup you have, have you been able to confirm that the relevant config is in the manager-apps folder on the CM, and also checked that it has reached the indexers?

If you have confirmed this, then I would suggest running btool on an IDX to confirm what is being picked up and to ensure that nothing is overriding it anywhere.

Check props and transforms:

$SPLUNK_HOME/bin/splunk cmd btool props list YourSourcetypeName --debug
$SPLUNK_HOME/bin/splunk cmd btool transforms list idname_extract --debug

Ensure that props returns the expected "TRANSFORMS-1_extract_idname = idname_extract" and that the transform also matches what you originally detailed.
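Note also that index-time transforms only apply to events indexed after the bundle is active; already-indexed data will never gain the field. To confirm the field really is indexed (index name is a placeholder), indexed fields can be searched with the field::value syntax, or summarized with tstats, which only works on indexed fields:

```
index=your_index idname::SATURN-CK-GSPE

| tstats count where index=your_index by idname
```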

Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped.
Regards

Will

gcusello
SplunkTrust
SplunkTrust

Hi @Karthikeya ,

did you also create fields.conf on the indexers, as described in the link above?

Ciao.

Giuseppe


Karthikeya
Communicator

@gcusello yes, I created it on the DS where props and transforms are present, deployed it to the CM, and from the CM pushed it to the indexers. Still no luck.


livehybrid
Super Champion

Hi @Karthikeya 

did you try the btool commands I posted? What did you get back from them?

Thanks
