In our last post, we went over Splunk Synthetic Monitoring basics to kickstart proactive performance monitoring, improve user experience, and meet SLAs. Now let’s dig into more detail and build a Browser test using the Google Chrome Recorder.
Using the Chrome Recorder to build out Browser tests is the recommended way to capture complex and critical user flows like signup, login, and checkout. It’s simpler and more resilient than manually targeting elements using things like XPath expressions and lets you quickly get up and running with Synthetic Monitoring.
After we use the Google Chrome Recorder to record an interaction with our online boutique e-commerce website, we’ll import our recording into Splunk Synthetic Monitoring. Once imported, we’ll organize our test, view the results, and alert on failures. To follow along, you’ll need the Google Chrome browser and access to Splunk Observability Cloud (psst! Here’s a 14-day free trial).
For our online boutique, checkout is the most critical business process, so we’d like to monitor it using Splunk Synthetic Monitoring. To do this, we’ll create a recording of the checkout flow by following the record, replay, and measure user flows example in the Chrome DevTools Docs.
With our Product Checkout recording complete, we’ll export it from the browser as JSON:
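If you haven’t poked around inside one of these exports before, it’s just a JSON document describing every step of the recording. The exact contents depend on your site and your clicks; a trimmed-down sketch (with a placeholder URL and placeholder selectors, not our actual boutique) looks roughly like this:

```json
{
  "title": "Product Checkout",
  "steps": [
    {
      "type": "setViewport",
      "width": 1280,
      "height": 720,
      "deviceScaleFactor": 1,
      "isMobile": false,
      "hasTouch": false,
      "isLandscape": false
    },
    {
      "type": "navigate",
      "url": "https://online-boutique.example/",
      "assertedEvents": [
        {
          "type": "navigation",
          "url": "https://online-boutique.example/",
          "title": "Online Boutique"
        }
      ]
    },
    {
      "type": "click",
      "target": "main",
      "selectors": [
        ["aria/Add to Cart"],
        ["text/Add to Cart"],
        ["button.cta"]
      ],
      "offsetX": 10,
      "offsetY": 10
    }
  ]
}
```

Notice that the recorder captures several selector strategies (aria/, text/, CSS) for each interaction. That built-in redundancy is a big part of why these recordings hold up better than hand-written XPath.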
Moving over to Splunk Observability Cloud and navigating to Synthetics, we can use this recording to create our Browser test.
First, we’ll add a new Browser test:
After we configure our new test by setting the necessary values, we can import our recording by selecting Import (side note: you won’t be able to select Import until you provide a name for your test):
Once the JSON file is uploaded, we can continue to edit our test, or we can try out our new test to make sure the configuration is valid by selecting Try now…:
We’ll see output from our test run, but these results are ephemeral and don’t impact our overall test run metrics. It looks like our test run was successful, so let’s take a moment to celebrate how easy that was! Now on to fine-tuning.
It looks like our test is made up of one big, long interaction, which isn’t super helpful for future troubleshooting purposes:
Transactions help us break our Synthetic tests into logical steps that represent user flows. Right now our test has a single step, when in fact we took multiple steps while recording the interaction with our site: browsing the catalog, adding an item to the cart, and actually placing the order. If we go back and edit our test to include transactions, we’ll be able to scope our results to each transaction and quickly identify the exact points where we encounter performance issues. Let’s see what this looks like.
First, we’ll close out of our Try now results. Then we’ll select Add synthetic transaction, which will add a new transaction section:
Let’s name our first transaction Home Page. We’ll delete this auto-populated Click step and drag our first “Go to url” step into this new transaction:
We’ve gone ahead and organized the remaining steps into transactions:
Let’s see what a test run looks like with these more discrete transactions:
The Business Transactions section of our run results is now broken down into our defined transactions. We can click on these transactions to filter the filmstrip and waterfall results, and also use them to identify, at a glance, when a step in our test fails (we’ll see this in a bit).
Before we call this test good, we need to add some assertions so that our test passes or fails based on conditions we define. It would also be helpful to receive a notification whenever our test fails so we can resolve any issues before our customers are impacted. Let’s close out these Try now results and continue editing our test.
To create an assertion, we first add a step to the transaction we want to validate. This will auto-populate with a Click action. If we select the Click action and expand the dropdown, we can scroll down to view the available Assertions:
In our Home Page transaction, let’s assert the text “Free shipping with $75 purchase” is visible, so we know we’ve successfully loaded the HTML for our page:
We could also validate the presence or absence of other elements on the page, such as specific products. Assertions like these are more robust than a static text check because they exercise backend dependencies like database connections, further ensuring our critical paths are up and running.
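In this post we’ll define our assertions in the Splunk editor, but for context, the Chrome Recorder’s JSON schema has an analogous concept: the waitForElement step, which replay tooling uses to wait for an element (optionally requiring it to be visible) before continuing. A rough sketch of such a step matching our banner text, shown purely for illustration:

```json
{
  "type": "waitForElement",
  "selectors": [
    ["text/Free shipping with $75 purchase"]
  ],
  "visible": true
}
```

We’ll stick with assertions defined in the Splunk editor here, since they live as explicit steps inside our transactions.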
After we’ve added assertion steps to each of our transactions, we can Return to test and submit our first Browser test.
Note: refreshing the page or selecting Editing Checkout Process at the top of the page won’t save any of the current changes. If you want to save progress, it’s best to submit the test and then make incremental edits along the way so you don’t lose updates.
Our test is now active and running, and if we select our test from the Overview page, we can see the results:
We don’t yet have line graph charts for the last day, 8 days, and 30 days since we just created this test, but we do have Uptime Trends, Availability, and Performance KPIs. We can select a test run from the Recent run results or a plot point from our Availability chart to view test run results.
From our run results page, we can see right away that our test failed thanks to the red banner at the top of the page:
We can also easily see which transaction failed: we should have 5 transactions, but instead we only have 3. It looks like the assertion we set on our Add to Cart transaction failed, so the remaining 2 transactions never executed.
Rather than constantly watching test runs, let’s add a detector for these kinds of failures. We could have added a detector when we initially configured our test:
Or we can add detectors from our test details page:
We’ll create a detector and name it Checkout Process Downtime. This detector will alert when downtime exceeds the threshold we set, in this case 10%. Every failed test run contributes to downtime, so if the failure rate crosses that threshold, we’ll get alerted.
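To make that concrete: if the test runs once a minute, a 10% downtime threshold over an hour-long stretch would be crossed once more than 6 of the 60 runs fail (the exact evaluation window depends on how the detector is configured).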
When creating a detector, we can conveniently see how frequently it will alert based on the thresholds we set so we can fine-tune them:
That’s it! We now have a Splunk Synthetic Monitoring Browser test imported from the Google Chrome Recorder. This test will ensure our critical checkout workflow is performing as expected and alert us when it’s not so we can resolve issues before our users are impacted.
If you’re ready to build confidence around your users’ experiences, meet SLAs, and maintain a competitive edge when it comes to your application’s performance, start by building out your own Splunk Synthetic Monitoring Browser tests. Either head over to Synthetics in Splunk Observability Cloud to get started or sign up for a Splunk Observability Cloud 14-day free trial.