Getting Data In

Timestamp issue: Events to Metrics conversion

Poojitha
Communicator

Hi Everyone, 

I have created a custom app that clones the current raw data, extracts metrics and dimensions from it, and routes the extracted data to a metrics index.

My props and transforms are as below:

props.conf

[eks_fluent]
TRANSFORMS-clone = clone_metrics
NO_BINARY_CHECK = true

[metrics]
TRANSFORMS-extract = extract_metric_k8s_value
TRANSFORMS-routing = route
METRIC-SCHEMA-TRANSFORMS = metric-schema:log_to_metrics
#NO_BINARY_CHECK = true
#EVAL-_metric_time = round(strptime(logtimestamp, "%Y-%m-%d %H:%M:%S.%3N") * 1000)
#METRIC_TIMESTAMP_FIELD = _metric_time
INGEST_EVAL = _metric_time=round(strptime(logtimestamp,"%Y-%m-%d %H:%M:%S.%3N")*1000)
METRIC_TIMESTAMP_FIELD = _metric_time
NO_BINARY_CHECK = true

transforms.conf

############################################
# Extract metric + k8s fields
############################################
[extract_metric_k8s_value]
REGEX = ^.*?\"log_processed\":\{.*?\"timestamp\":\"(?<logtimestamp>[^\"]+)\".*?\"mdc\":\{\"tenantId\":\"(?<tenantId>[^\"]+)\",\"value\":\"?(?<metric_value>[\d\.]+)\"?,\"metricName\":\"(?<metric_name>[^\"]+)\"\},.*?\},.*?\"kubernetes\":\{.*?\"pod_name\":\"(?<pod_name>[^\"]+)\".*?\"namespace_name\":\"(?<namespace_name>[^\"]+)\".*?\"pod_id\":\"(?<pod_id>[^\"]+)\".*?\"host\":\"(?<k8s_host>[^\"]+)\".*?\"container_name\":\"(?<container_name>[^\"]+)\".*?\"docker_id\":\"(?<docker_id>[^\"]+)\".*?\"container_hash\":\"(?<container_hash>[^\"]+)\".*?\"container_image\":\"(?<container_image>[^\"]+)\".*?\}\,\"hostname\":\"(?<extracted_host>[^\"]+)\".*$
FORMAT = logtimestamp::$1 tenantId::$2 metric_value::$3 metric_name::$4 pod_name::$5 namespace_name::$6 pod_id::$7 k8s_host::$8 container_name::$9 docker_id::$10 container_hash::$11 container_image::$12 extracted_host::$13
WRITE_META = true

############################################
# Clone ONLY metric-capable events
############################################
[clone_metrics]
REGEX = "metricName".*?"value":"\d+(?:\.\d+)?"
CLONE_SOURCETYPE = metrics
WRITE_META = true

############################################
# Route cloned metrics to metrics index
############################################
[route]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = metrics

############################################
# Metric schema (controls what survives)
############################################
[metric-schema:log_to_metrics]
METRIC-SCHEMA-WHITELIST-DIMS = logtimestamp, tenantId, metric_name, pod_name, namespace_name, container_name, container_hash, container_image, docker_id, pod_id, extracted_host
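Since the extraction regex above is hard to eyeball, here is a quick way to sanity-check it outside Splunk. This is a minimal Python sketch against a made-up sample event (all field values are hypothetical; Python's `re` uses `(?P<name>...)` where Splunk accepts `(?<name>...)`, and the quote escaping is not needed in a raw string):

```python
import re

# Same pattern as [extract_metric_k8s_value], rewritten in Python regex syntax.
PATTERN = re.compile(
    r'^.*?"log_processed":\{.*?"timestamp":"(?P<logtimestamp>[^"]+)"'
    r'.*?"mdc":\{"tenantId":"(?P<tenantId>[^"]+)",'
    r'"value":"?(?P<metric_value>[\d\.]+)"?,"metricName":"(?P<metric_name>[^"]+)"\},'
    r'.*?\},.*?"kubernetes":\{'
    r'.*?"pod_name":"(?P<pod_name>[^"]+)"'
    r'.*?"namespace_name":"(?P<namespace_name>[^"]+)"'
    r'.*?"pod_id":"(?P<pod_id>[^"]+)"'
    r'.*?"host":"(?P<k8s_host>[^"]+)"'
    r'.*?"container_name":"(?P<container_name>[^"]+)"'
    r'.*?"docker_id":"(?P<docker_id>[^"]+)"'
    r'.*?"container_hash":"(?P<container_hash>[^"]+)"'
    r'.*?"container_image":"(?P<container_image>[^"]+)"'
    r'.*?\},"hostname":"(?P<extracted_host>[^"]+)".*$'
)

# Hypothetical event with the same field order the regex assumes.
sample = (
    '{"log_processed":{"timestamp":"2024-01-01 12:00:00.123",'
    '"mdc":{"tenantId":"t1","value":"12.5","metricName":"cpu_usage"},'
    '"level":"INFO"},'
    '"kubernetes":{"pod_name":"pod-1","namespace_name":"prod",'
    '"pod_id":"abc-123","host":"node-1","container_name":"app",'
    '"docker_id":"d1","container_hash":"sha256:aa","container_image":"img:1"},'
    '"hostname":"fluent-1"}'
)

m = PATTERN.match(sample)
print(m.group("logtimestamp"), m.group("metric_name"), m.group("metric_value"))
```

If your real events have these keys in a different order, the lazy `.*?` chains will silently fail to match, so testing against one actual raw event this way is worth doing.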

 
The issue I'm facing: I extract logtimestamp and try to set it as _time in the new metrics index, but something is failing and _time is being set to index time instead of the logtimestamp I extract.
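For reference, here is roughly what that INGEST_EVAL computes, sketched in Python (Splunk's `%3N` corresponds to Python's `%f` for fractional seconds; the sample timestamp and UTC assumption are mine, since Splunk would apply the event's timezone):

```python
from datetime import datetime, timezone

logtimestamp = "2024-01-01 12:00:00.123"  # hypothetical sample value

# Splunk: _metric_time = round(strptime(logtimestamp, "%Y-%m-%d %H:%M:%S.%3N") * 1000)
dt = datetime.strptime(logtimestamp, "%Y-%m-%d %H:%M:%S.%f")
dt = dt.replace(tzinfo=timezone.utc)  # assume UTC for the sketch

epoch_seconds = dt.timestamp()                # 10-digit value, e.g. 1704110400.123
epoch_millis = round(epoch_seconds * 1000)    # 13-digit value, what the INGEST_EVAL produces

print(epoch_seconds, epoch_millis)
```

Comparing the magnitude of `_metric_time` against the `_time` that actually lands in the metrics index (via `mcatalog`/`mstats`) is a quick way to see whether the field is being read in the unit Splunk expects, or ignored entirely.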

Can anyone help me figure out what is going wrong?

Thanks, 
PNV


Poojitha
Communicator

@livehybrid Yes, I have already done that in my props.


livehybrid
SplunkTrust

Hi @Poojitha 

I have a feeling (but could be wrong) that the metric timestamp should be epoch milliseconds for metrics, so try multiplying by 1000 in your INGEST_EVAL.
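One quick check either way (a trivial sketch, values are whatever "now" is when you run it): epoch seconds for present-day dates are 10 digits, epoch milliseconds are 13, so the digit count of the value you produce tells you which unit you ended up with.

```python
import time

now_s = time.time()           # epoch seconds, ~1.7e9 today
now_ms = round(now_s * 1000)  # epoch milliseconds, ~1.7e12 today

# A 13-digit value fed where seconds are expected parses as a date
# tens of thousands of years in the future.
print(len(str(int(now_s))), len(str(now_ms)))
```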

Please let me know if it works!

🌟 Did this answer help you? If so, please consider:

  • Adding karma to show it was useful
  • Marking it as the solution if it resolved your issue
  • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing

 
