Splunk Search

Generate timechart with normalized/rescaled data points

izx
New Member

Hello,

I'm trying to analyze A/B test results on access-pattern changes for a specific field.

A simplified query looks like:


index=test-app (ab_test_id="baseline" OR ab_test_id="abc123")
| timechart count(eval(ab_test_id=="baseline")) as Baseline count(eval(ab_test_id=="abc123")) as Test by api_endpoint


Since the event counts differ by roughly 100x, it would be better to rescale the data, either with min-max normalization (as in the thread below) or as each API endpoint's percentage share of traffic, e.g. api_xyz may account for 20% of requests in the baseline but 50% in the A/B test (abc123).

https://community.splunk.com/t5/Archive/Normalizing-feature-scaling-a-datapoint/td-p/194303
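For the case without the by-clause, a min-max rescaling in the spirit of that thread can be bolted onto the timechart output with eventstats. This is only a sketch of my own (the helper field names max_b, min_b, max_t, min_t are made up, and it assumes the simplified query above):

index=test-app (ab_test_id="baseline" OR ab_test_id="abc123")
| timechart count(eval(ab_test_id=="baseline")) as Baseline count(eval(ab_test_id=="abc123")) as Test
| eventstats max(Baseline) as max_b min(Baseline) as min_b max(Test) as max_t min(Test) as min_t
| eval Baseline=round((Baseline-min_b)*100/(max_b-min_b)), Test=round((Test-min_t)*100/(max_t-min_t))
| fields - max_b min_b max_t min_t

Each series is rescaled to 0-100 against its own min and max, so the ~100x count gap no longer dominates the chart.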

I previously used a concatenated field in the timechart, like:


index=test-app (ab_test_id="baseline" OR ab_test_id="abc123")
| eval endpoint_by_ab=api_endpoint."_".ab_test_id
| timechart count by endpoint_by_ab
| addtotals row=true fieldname=baseline_total *_baseline
| addtotals row=true fieldname=abc123_total *_abc123
| foreach *_baseline [eval <<FIELD>> = round('<<FIELD>>' * 100 / baseline_total)]
| foreach *_abc123 [eval <<FIELD>> = round('<<FIELD>>' * 100 / abc123_total)]

(Naming the totals baseline_total / abc123_total keeps them out of the *_baseline / *_abc123 wildcards, which would otherwise match and overwrite the total fields inside the foreach.)


It would be great to keep the original api_endpoint field so I can leverage the trellis layout to compare baseline vs. A/B for each api_endpoint. How should I do that?
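One possible sketch (my own suggestion, not something I have run against this dataset; the 1h span is an assumption): compute one series per variant with chart, keeping api_endpoint as the split-by field, so the trellis layout can split panels on it:

index=test-app (ab_test_id="baseline" OR ab_test_id="abc123")
| bin _time span=1h
| chart count(eval(ab_test_id=="baseline")) as Baseline count(eval(ab_test_id=="abc123")) as Test over _time by api_endpoint

In the dashboard panel, selecting Trellis layout and splitting by api_endpoint should give one panel per endpoint, each showing the Baseline and Test series side by side; a per-variant percentage rescaling could be applied first if the ~100x gap still dominates.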

Thanks,
