
Building a Synthetic Monitoring Browser Test with the Google Chrome Recorder

CaitlinHalla
Splunk Employee

Introduction

In our last post, we went over Splunk Synthetic Monitoring basics to kickstart proactive performance monitoring, improve user experience, and meet SLAs. Now let’s dig into more detail and build a Browser test using the Google Chrome Recorder.

Using the Chrome Recorder to build out Browser tests is the recommended way to capture complex and critical user flows like signup, login, and checkout. It’s simpler and more resilient than manually targeting elements using things like XPath expressions and lets you quickly get up and running with Synthetic Monitoring. 

After we use the Google Chrome Recorder to record an interaction with our online boutique e-commerce website, we’ll import our recording into Splunk Synthetic Monitoring. Once imported, we’ll organize our test, view the results, and alert on failures. To follow along, you’ll need the Google Chrome browser and access to Splunk Observability Cloud (psst! Here’s a 14-day free trial).

Building a Browser Test

For our online boutique, checkout is the most critical business process, so we’d like to monitor it using Splunk Synthetic Monitoring. To do this, we’ll create a recording of the checkout flow by following the record, replay, and measure user flows example in the Chrome DevTools Docs. 

With our Product Checkout recording complete, we’ll export it from the browser as JSON: 

[Screenshot: exporting the recording as JSON]
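If you’re curious what’s inside the export, here’s a trimmed, illustrative sketch of the JSON (the title, URL, and selectors below are placeholders, not our exact boutique recording): a recording is simply a title plus an ordered list of steps like setViewport, navigate, and click.

{
  "title": "Product Checkout",
  "steps": [
    { "type": "setViewport", "width": 1280, "height": 720 },
    {
      "type": "navigate",
      "url": "https://online-boutique.example.com/",
      "assertedEvents": [{ "type": "navigation", "url": "https://online-boutique.example.com/" }]
    },
    {
      "type": "click",
      "target": "main",
      "selectors": [["aria/Add to Cart"], ["#add-to-cart"]],
      "offsetX": 12,
      "offsetY": 8
    }
  ]
}

Splunk Synthetic Monitoring reads these steps when we import the file, which is why we never have to hand-write selectors ourselves.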

Moving over to Splunk Observability Cloud and navigating to Synthetics, we can use this recording to create our Browser test. 

First, we’ll add a new Browser test:

[Screenshot: creating a new Browser test]

After we configure our new test by setting the necessary values, we can import our recording by selecting Import (side note: you won’t be able to select Import until you provide a name for your test):

[Screenshot: importing the recording]

Once the JSON file is uploaded, we can continue to edit our test, or we can try out our new test to make sure the configuration is valid by selecting Try now:

[Screenshot: selecting Try now]

We’ll see output from our test run, but these results are ephemeral and don’t impact our overall test run metrics. It looks like our test run was successful, so let’s take a moment to celebrate how easy that was! Now on to fine-tuning.

Test Organization

It looks like our test is made up of one big, long interaction, which isn’t very helpful for future troubleshooting:

[Screenshot: Try now results]

Transactions help us break our Synthetic tests into logical steps that represent user flows. Right now, our test has a single step, when in fact we took multiple steps (browsing the catalog, adding an item to the cart, actually placing the order) while recording the interaction with our site. If we go back and edit our test to include transactions, we’ll be able to scope our results to each transaction and quickly identify the exact points where we encounter performance issues. Let’s see what this looks like.
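To make that concrete, a checkout flow like ours might break down into transactions along these lines (the names here are illustrative; the screenshots below show our actual breakdown):

Home Page: the initial “Go to url” step
Browse Catalog: navigating to a product page
Add to Cart: adding the product to the cart
Checkout: filling in shipping and payment details
Place Order: submitting the order and landing on the confirmation page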

First, we’ll close out of our Try now results. Then we’ll select Add synthetic transaction, which will add a new transaction section: 

[Screenshot: adding a new synthetic transaction]

Let’s name our first transaction Home Page. We’ll delete the auto-populated Click step and drag our first “Go to url” step into this new transaction:

[Screenshot: the Home Page transaction]

We’ve gone ahead and organized the remaining steps into transactions: 

[Screenshot: the transaction breakdown]

Let’s see what a test run looks like with these more discrete transactions: 

[Screenshot: test run results with transactions]

The Business Transactions section of our run results is now broken down into our defined transactions. We can click on these transactions to filter the filmstrip and waterfall results and also use them to identify, at a glance, when a step in our test fails (we’ll see this in a bit).

Adding Assertions and Detectors

Before we call this test good, we need to add some assertions so that our test passes or fails based on defined conditions. It would also be helpful to receive a notification whenever our test fails so we can resolve any issues before our customers are impacted. Let’s close out these Try now results and continue editing our test.

To create an assertion, we first add a step to the transaction we want to validate. This will auto-populate with a Click action. If we select the Click action and expand the dropdown, we can scroll down to view the available Assertions:

[Screenshot: available assertions]

In our Home Page transaction, let’s assert the text “Free shipping with $75 purchase” is visible, so we know we’ve successfully loaded the HTML for our page:

[Screenshot: asserting the free shipping text is visible]

We could also validate the presence or absence of elements on the page by adding assertions for things like specific products. These types of assertions are more robust because they also exercise backend dependencies like database connections, further ensuring critical paths are up and running.
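As an aside, element checks like this can also be captured in the Chrome Recorder export itself as waitForElement steps. Here’s a minimal sketch of what such a step looks like in the recording JSON (the selector is an assumption to match our boutique’s banner text):

{
  "type": "waitForElement",
  "selectors": [["aria/Free shipping with $75 purchase"]],
  "visible": true
}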

After we’ve added assertion steps to each of our transactions, we can Return to test and submit our first Browser test. 

Note: refreshing the page or selecting Editing Checkout Process at the top of the page won’t save any of the current changes. If you want to save progress, it’s best to submit the test and then make incremental edits along the way so you don’t lose updates. 

Our test is now active and running, and if we select our test from the Overview page, we can see the results: 

[Screenshot: test run details]

We don’t yet have line graph charts for the last day, 8 days, and 30 days since we just created this test, but we do have Uptime Trends, Availability, and Performance KPIs. We can select a test run from the Recent run results or a plot point from our Availability chart to view test run results. 

From our run results page, we can see right away that our test failed thanks to the red banner at the top of the page: 

[Screenshot: test run details with a failure banner]

We can also easily see which transaction failed because we should have 5 transactions, but instead we only have 3. It looks like the assertion we set on our Add to Cart transaction failed, so the other 2 transactions didn’t execute.

Rather than constantly watching test runs, let’s add a detector for these kinds of failures. We could have added a detector when we initially configured our test:

[Screenshot: creating a detector from the test configuration]

Or we can add detectors from our test details page: 

[Screenshot: creating a detector from the test details page]

We’ll create a detector and name it Checkout Process Downtime. This detector will alert on Downtime that exceeds the given threshold of 10%. Every failed test run contributes to downtime, so if test run failures push us past our set threshold, we’ll get alerted.

When creating a detector, we can conveniently see how frequently it will alert based on the thresholds we set so we can fine-tune them:

[Screenshot: the Checkout Process Downtime detector]
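If you’d rather manage detectors as code, the Splunk Observability Cloud API also accepts detector definitions as JSON with a SignalFlow program. Here’s a rough, hypothetical sketch of what a comparable detector could look like; the metric name, dimension names, and notification address are assumptions for illustration, not the exact values Synthetics emits:

{
  "name": "Checkout Process Downtime",
  "programText": "failed = data('synthetics.run.count', filter=filter('test', 'Checkout Process') and filter('success', 'false')).sum(over='1h')\ntotal = data('synthetics.run.count', filter=filter('test', 'Checkout Process')).sum(over='1h')\ndetect(when(failed / total * 100 > 10)).publish('Checkout Process Downtime')",
  "rules": [
    {
      "detectLabel": "Checkout Process Downtime",
      "severity": "Critical",
      "notifications": [{ "type": "Email", "email": "oncall@example.com" }]
    }
  ]
}

The UI route above accomplishes the same thing without any JSON, so use whichever fits your workflow.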

Wrap Up

That’s it! We now have a Splunk Synthetic Monitoring Browser test imported from the Google Chrome Recorder. This test will ensure our critical checkout workflow is performing as expected and alert us when it’s not so we can resolve issues before our users are impacted. 

If you’re ready to build confidence around your users’ experiences, meet SLAs, and maintain a competitive edge when it comes to your application’s performance, start by building out your own Splunk Synthetic Monitoring Browser tests. Either head over to Synthetics in Splunk Observability Cloud to get started or sign up for a Splunk Observability Cloud 14-day free trial.

