In this blog, we discuss the development of an automated testing project that uses the AI and automation capabilities of Cursor to scale and enhance the robustness of our data testing services. We walk through the project's aims, key benefits, and considerations when leveraging automation for analytics testing.
Project Background:
We manage a JavaScript library that is deployed across numerous sites and upgraded on an ongoing basis to include improvements and enhancements. The library integrates with different third-party web analytics tools and performs a range of data cleaning and manipulation actions. Once we upgrade the library, our main priorities are:
Feature testing: Verify new functionality across different sites/environments
Regression testing: Ensure existing functionality has not been negatively affected across different sites
To achieve this, we conduct a detailed testing review across different pages of the site. This involves performing specific user actions (such as page views, clicks, search, and other more exciting actions) and ensuring that the different events are triggered as expected. We capture the outgoing network requests to vendors such as Adobe Analytics or Google Analytics through the browser's developer tools or a network debugging tool (e.g., Charles), and verify that the correct events are triggered and that the relevant parameters are captured accurately in the network requests. By ensuring that all events are tracked with the right data points, we can confirm that both new features and the existing setup are working as expected.
Project Aim:
To optimise this process and reduce the manual effort involved, we developed an automated testing tool designed to streamline and speed up data testing. As an overview, the tool automatically simulates user actions across different sites and browsers, triggering the associated events, and then checks the network requests to ensure that the expected events fire and the correct parameters are captured.
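To give a flavour of what this looks like under the hood, here is a minimal sketch (not our production code) using Playwright's Python API to load a page and check that a Google Analytics collection request with the expected parameters was fired. The site URL and expected parameters are illustrative assumptions.

```python
from urllib.parse import parse_qs, urlparse

from playwright.sync_api import sync_playwright

# Hypothetical expectation: a GA4 page_view event should be sent on page load
EXPECTED_PARAMS = {"en": "page_view"}

def is_analytics_request(url: str) -> bool:
    # Match outgoing Google Analytics collection requests
    return "google-analytics.com/g/collect" in url

def has_expected_params(url: str) -> bool:
    # Parse the request's query string and compare against the expected values
    params = parse_qs(urlparse(url).query)
    return all(params.get(key, [None])[0] == value for key, value in EXPECTED_PARAMS.items())

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()

    captured = []  # URLs of analytics requests fired by the page
    page.on("request", lambda req: captured.append(req.url) if is_analytics_request(req.url) else None)

    page.goto("https://www.example.com")  # hypothetical test site

    # Verify that at least one captured request carries the expected parameters
    assert any(has_expected_params(url) for url in captured), "Expected page_view event not found"

    browser.close()
```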
Automated Testing Benefits:
In the era of AI, automation is a key driver of efficiency and increased productivity. Automating testing processes offers several key benefits to our development and data testing capabilities, such as:
- Reduced setup time and testing documentation: We can run through different tests and scenarios with a one-time setup for each site and each version.
- More accurate data testing: With a well-thought-out test plan that is followed precisely, we can place more trust in our testing outcomes. This helps us identify issues more quickly.
- Better test coverage: We can run tests on different browsers and devices using the same setup.
How We Did It:
We chose Python as the primary scripting language, as it offers flexibility for handling complex tasks. Python’s versatility and extensive libraries made it an ideal choice for rapid development and iteration.
For simulating a variety of user interactions and conducting tests across multiple browsers, we selected Playwright. Playwright is a powerful open-source browser automation framework. It supports cross-browser data testing (including Chromium, Firefox, and WebKit, the engine behind Safari), allowing us to validate network requests across a broad range of environments.
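As an illustration of the cross-browser side, the sketch below runs the same checks on all three engines Playwright ships with; run_site_checks is a placeholder for the per-site test logic, not part of Playwright itself.

```python
from playwright.sync_api import sync_playwright

def run_site_checks(page):
    # Placeholder for the real per-site logic: navigate, simulate user actions,
    # and assert on the captured analytics requests
    page.goto("https://www.example.com")  # hypothetical test site

with sync_playwright() as p:
    # Run the same checks across Chromium, Firefox and WebKit
    for engine in (p.chromium, p.firefox, p.webkit):
        browser = engine.launch()
        page = browser.new_page()
        run_site_checks(page)
        browser.close()
```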
We used the Cursor AI code editor to optimise the development process and quickly set up the tool. Cursor's proprietary LLM, optimised for coding, enabled us to design and create scripts efficiently, accelerating development by streamlining the debugging and iteration process. Cursor's AI assistant (the chat sidebar) boosted productivity by providing intelligent code suggestions and speeding up investigation. We'll dive further into our experience with Cursor in the next section.
Lastly, we chose Flask to build the web interface where users can select different types of automated testing. Flask is a lightweight web framework for Python that we've used on other projects. It has its pros and cons, but a key benefit for this project was that it allowed us to get started quickly and focus on the nuts and bolts of the program.
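For context, a stripped-down sketch of what such an interface might look like is below; the route, form fields, and run_tests helper are illustrative assumptions rather than our actual implementation.

```python
from flask import Flask, request

app = Flask(__name__)

def run_tests(site: str, browser: str) -> str:
    # Placeholder: in the real tool this would kick off the Playwright checks
    return f"Ran checks for {site} on {browser}"

@app.route("/run", methods=["POST"])
def run():
    # The user selects which site configuration and browser to test against
    site = request.form.get("site", "example-site")
    browser = request.form.get("browser", "chromium")
    return {"result": run_tests(site, browser)}

if __name__ == "__main__":
    app.run(debug=True)
```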
Our Experience with Cursor:
Cursor AI played a crucial role in taking this project from ideation to MVP. By carefully prompting Cursor’s in-editor AI assistant, we were able to achieve the results we wanted. The tool allowed us to focus on the core structure of the program and the logic of each test without getting bogged down in documentation and finicky syntax errors.
Cursor also gave us the capability to include specific files, documentation links, and diagrams as context for prompts. This allowed us to provide relevant information to help the model find a solution. Compared to an earlier version of GitHub Copilot that we tested, we found this a clear benefit in guiding the model towards the most appropriate outcome.
Another benefit of Cursor was its automated code completion, which could identify bugs and propose fixes, as well as suggest code to add to the program. This feature was most useful when it understood the outcome we were aiming for, which it did more often than not.
However, not everything was plain sailing, and our experience did reveal some drawbacks of AI code editors to be mindful of. For example, relying too much on automated suggestions can distance you from the underlying code, making it harder to debug complex issues independently. It was important to review the suggested code and use Cursor's helpful in-editor diffs, which clearly outline the proposed changes. These also allowed us to accept or reject changes, giving us a good level of control.
A further drawback we noticed is that AI-generated code may not always follow best practices or be optimised for performance, so it's crucial to review and validate the output carefully. For example, Cursor tended to create monolithic scripts rather than separating functionality into components, such as the tests and the Flask-related parts, which would be easier to manage in the long term.
Finally, we noticed that over-reliance on AI tools could easily lead to complacency, potentially affecting our problem-solving skills and creativity as developers. When asking Cursor to make large changes to the codebase, it can be tempting to simply accept all changes and test whether they work, without fully understanding their impact. When developing without AI assistance (like everyone did a couple of years ago), it's better to make specific, relatively small changes at a time to reduce the risk of introducing breaking changes and to better understand the impact of each one. The same seems a sensible approach when working with a tool like Cursor.
What We Achieved – Efficiencies Unlocked:
The automated testing tool we developed significantly streamlined and optimised the data testing process in a number of key ways:
- Accelerated project development: Using Cursor AI, we rapidly moved through development and completed the project in a short period. The AI-driven interface, combined with Playwright’s capabilities, sped up our debugging process—a major challenge in previous R&D projects. In the past, we often faced delays due to debugging blockers, but now, with the AI assistant, we could directly identify and fix issues, completing the project in a fraction of the time.
- Built a robust, reusable tool: The tool is scalable and flexible, and can be adapted for different analytics platforms (e.g., Google Analytics, Meta, Pinterest); a simple sketch of this configuration-driven approach follows this list. It is reusable across different projects and client needs, as well as different browsers and environments.
- Time efficiency & boosted productivity: One of the most valuable outcomes was the significant reduction in manual testing time. With the new automated testing tool, we ran multiple test cases simultaneously, speeding up the overall process. This helped us meet tight project deadlines and improve client delivery without sacrificing quality. Additionally, it freed up time for focusing on challenging tasks and optimising existing solutions.
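As a rough illustration of how requests for different vendors can be matched without hard-coding each check, the sketch below maps vendor names to typical collection-endpoint patterns; the names and patterns are illustrative and not lifted directly from our tool.

```python
# Hypothetical mapping of vendor names to the request patterns their tags use
VENDOR_PATTERNS = {
    "google_analytics": "google-analytics.com/g/collect",
    "meta": "facebook.com/tr",
    "pinterest": "ct.pinterest.com",
}

def matches_vendor(url: str, vendor: str) -> bool:
    # Decide whether a captured network request belongs to the selected vendor
    return VENDOR_PATTERNS[vendor] in url
```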
Conclusion:
With AI, the classic engineering view of ‘why spend 1 hour doing something when I can spend 10 hours automating it?’ has now become ‘why spend 1 hour doing something when I can spend 2-3 hours automating it?’. In this instance, Cursor allowed us to lower the barrier for innovation and create a tool to meet a set of tight deadlines, whilst also giving us a feature-filled, reusable program moving forwards.
For more information about how we can support your organisation with data testing, including our automated testing services, please feel free to contact us now or explore the links below.
About the author
Lynchpin
Lynchpin integrates data science, engineering and strategy capabilities to solve our clients’ analytics challenges. By bringing together complementary expertise we help improve long term analytics maturity while delivering practical results in areas such as multichannel measurement, customer segmentation, forecasting, pricing optimisation, attribution and personalisation.
Our services span the full data lifecycle from technology architecture and integration through to advanced analytics and machine learning to drive effective decisions.
We customise our approach to address each client’s unique situation and requirements, extending and complementing their internal capabilities. Our practical experience enables us to effectively bridge the gaps between commercial, analytical, legal and technical teams. The result is a flexible partnership anchored to clear and valuable outcomes for our clients.