Petri Kainulainen

Developing Software With Passion



Software Development Monthly 9 / 2025 (7 Oct 1:00 AM)

The Software Development Monthly is a monthly blog post that shares interesting or useful content which I consumed during the previous month. This blog post is always published on the seventh day of the month.

Let's begin!

Table of Contents:

AI

Vibe coding is not the same as AI-Assisted engineering explains the difference between vibe coding and AI-assisted engineering and argues that even though vibe coding is great for creating prototypes, we shouldn't use it for writing production code.

Writing Code Was Never The Bottleneck argues that AI hype is based on a misunderstanding, and that's why the AI tools might not help teams to move faster.

Writing Code Is Easy. Reading It Isn’t argues that writing code is the easy part and explains why reading code is much harder than writing (or generating) it.

Why I'm declining your AI generated MR identifies six reasons why the author rejects a merge request (aka a pull request) without reviewing it.

Where's the Shovelware? Why AI Coding Claims Don't Add Up raises an interesting question: if AI helps developers release code at light speed, why isn't the number of new applications growing exponentially?

Development Speed Is Not a Bottleneck argues that increasing coding speed doesn’t remove the real bottlenecks of product development and highlights the problems we should solve instead.

Why I Fired Google and Gave an AI Chatbot a Shot? is my own blog post which explains why I replaced Google with an AI chatbot.

The quality of AI-assisted software depends on unit of work management argues that if we want to improve the quality of AI-generated code, we should divide the problem we are solving into small chunks and go through these chunks one by one.

Software Development

Postgres for Everything identifies 31 different "problems" which can be solved with PostgreSQL.

CUPID: the back story argues that we shouldn't teach the SOLID principles to new programmers as best practices which must be followed in every situation.

Saying NO is not a free action in the world of software engineering explains why it's so hard to say no and shares seven tips that can make it a bit easier.

How I document production-ready Spring Boot applications explains what the author includes in the README file, describes how the author documents the architecture of his application, and demonstrates how the author documents REST APIs with Spring REST Docs.

I love UUID, I hate UUID explains why UUIDv7 is a better primary key than UUIDv4.

React Won by Default – And It's Killing Frontend Innovation explains why it's a bad thing to choose React without considering other possible frontend frameworks.

Keeping Secrets Out of Logs identifies six reasons why secrets end up in logs, provides 10 solutions to this problem, and describes how we can create a process which ensures (hopefully) that secrets won't end up in our logs.

Why Tests Aren’t Enough (And What Actually Keeps Code Safe) argues that tests cannot replace a gut feeling and provides four tips that help us to get better at noticing problematic code.

How I, a non-developer, read the tutorial you, a developer, wrote for me, a beginner is a funny blog post which gives some food for thought to people who publish tutorials on the internet.

Redis is fast - I'll cache in Postgres compares the performance of Redis and Postgres, and explains why it might be better to use Postgres even though Redis is faster.

React State Management in 2025: What You Actually Need identifies four states which are found in a React application, explains when we should use a state management library, compares different state management libraries, and (of course) recommends state management libraries for us.


Clean Test Automation Monthly 9 / 2025 (29 Sep 9:22 PM)

The Clean Test Automation Monthly is a monthly blog post that shares interesting or useful test automation content which I consumed during the current month. This blog post is always published on the last day of the month.

Let's begin!

Table of Contents:

Test Design

Improve test quality with mutation testing provides an introduction to mutation testing, demonstrates how we can do mutation testing with Pitest, and explains what we should do with the mutation test results.

Mutation testing - not just for unit tests argues that even though mutation testing works best if our tests are fast, it can be a useful tool for "slow" tests as well.

The Tetris Principle aka "Test as Low↓ as Possible" argues that when we want to write an automated test, we should write the test at the lowest reasonable level of the testing pyramid.

Beyond the Test Pyramid: Building New Monuments for Testing argues that the classic test automation pyramid has served us well, but technological advancements (hardware improvements, cloud computing, and containers) have made it obsolete. The author also introduces a new test automation pyramid that helps us ensure that our application meets the expectations of real users.

How to Know When Simple Isn’t Enough Anymore (The TDD Answer) explains how we can use TDD as a design tool.

The Automation Maturity Pyramid introduces a four level pyramid that helps us to increase the maturity of our test automation efforts. The author identifies the levels of the test automation maturity pyramid and explains how we can climb from the lowest level to the top of the pyramid.

Test Driven Development: Bad Example is a review of Kent Beck's 2003 book, Test Driven Development: By Example.

The Testing Skyscraper: A Modern Alternative to the Testing Pyramid argues that the classic test automation pyramid is obsolete and should be replaced with a testing skyscraper where every level is considered good if it fulfills a business need.

Backend

Spring Boot Testing: From Unit to End-to-End Testing is a solid quick start guide that explains what kind of tests we should write for our Spring Boot applications.

Making HTTPS Calls to Untrusted SSL Servers With REST Assured is a practical blog post that describes how we can configure REST Assured to accept untrusted SSL certificates.

Implement Unit Test in gRPC Service describes how we can write tests for a gRPC service and a gRPC client.

Introduction to Data-Driven Testing with Java and MongoDB explains how we can write parameterized tests for a Jakarta Data repository with JUnit 5 and Testcontainers.

Optimizing Spring Integration Tests at Scale is a comprehensive guide that helps us to optimize our Spring Boot integration tests.

Supercharging Test Automation with Java Faker: Generating Realistic Test Data provides an introduction to the Java Faker library which helps us to generate realistic test data for our automated tests.

UI / End-to-End

Understanding Stealth Automation identifies the techniques which websites use to detect automation tools (or bots in general), explains how stealth automation works and why it's important, and demonstrates how we can use it for testing a simple demo website.

Simplify the Playwright HTML report explains how we can replace technical descriptions (locator or identifier details) with human-readable descriptions which emphasize the tested business rule.

Transforming UI Test Report: Harnessing HAR Files in Playwright describes how we can enhance our UI test reports by leveraging the HTTP archive (HAR) files.

Debugging "No Tests Found" Errors in Playwright: A Comprehensive Guide provides tips which help us to solve the infamous "no tests found" error.

Playwright Agentic Coding Tips provides an introduction to AI agents, compares different pricing models, and provides five tips that help us to generate API and UI tests with an AI agent.

Frontend Load Testing Against the Thundering Herd Effect provides a quick introduction to the thundering herd effect in frontend applications, and describes how we can write load tests which help us to catch these issues before they reach production and (potentially) cause a service outage.

The Smart Way to Begin Performance Testing is a step-by-step guide that helps us to write our first performance tests with Grafana k6.


Software Development Monthly 8 / 2025 (7 Sep 1:35 AM)

The Software Development Monthly is a monthly blog post that shares interesting or useful content which I consumed during the previous month. This blog post is always published on the seventh day of the month.

Let's begin!

Table of Contents:

AI

I Know When You're Vibe Coding identifies a tell-tale sign which reveals that a PR was written by AI and explains the one thing the author wants us to do.

The Lost Path to Seniorhood argues that letting AI do the easy stuff will hurt open source (and all of us) in the long run if we allow it to replace junior and mid-level software developers.

AI is a Floor Raiser, not a Ceiling Raiser argues that even though AI reduces the time that's needed to reach basic proficiency, becoming a master still requires a lot of effort and time.

How to Read an "AI" Press Release helps us to interpret AI press releases.

Clowns to the left of me argues that both AI hype and AI skepticism are naive, and explains that the truth is somewhere in the middle. Finally, the author points out that even though AI has a lot of benefits, it also has serious drawbacks.

Read That F*cking Code identifies three risks we take if we don't read the code that was written by AI and explains how we can generate production-grade code with AI.

How far can we push AI autonomy in code generation? documents an experiment where the author asked an AI agent to write a Spring Boot application and provides tips which help us to improve our AI agent workflows.

Cloud

AWS deleted my 10-year account and all data without warning is a cautionary tale which describes what can happen if we trust a cloud provider and have no backups, or haven't stored our backups somewhere else.

Software Development

How I write production-ready Spring Boot applications specifies the architecture which the author uses when he writes Spring Boot applications.

How to write a good design document provides tips which help us to write good design documents.

What is the N+1 Query Problem and How to Solve it? explains what the N+1 query problem is, describes how we can solve it, and provides five tips which help us to avoid it.

We replaced passwords with something worse argues that passwords are more secure than six-digit login codes.

Live Coding Sucks explains why some people cannot pass technical interviews which include a live coding session.

Why Java is Still Worth Learning in 2025: A Developer’s 25-Year Journey describes how the author transformed from a skeptic into an advocate, explains why the author thinks Java is still worth learning, and gives tips which help us to get started.

Code Review Can Be Better identifies two reasons why the author doesn't like GitHub's code review process and suggests an alternative workflow.


Why I Fired Google and Gave an AI Chatbot a Shot? (4 Sep 7:49 AM)

Like many developers, I used Google and Stack Overflow to solve problems, explore new tools, and learn new skills. I wasn't happy with these tools, but they were all I had. When the first AI tools were released, I was skeptical and pretty much ignored them. However, last year I decided to bite the bullet and bought a ChatGPT Plus subscription. In this post, I’ll describe why I decided to give an AI chatbot a shot.

Let's begin.

Even though I am using ChatGPT as an example chatbot, this blog post isn't an ad. I pay for the ChatGPT subscription with my own money, and I didn't get any money for writing this blog post. All images used in this blog post were generated with ChatGPT.

Googling Is a Waste of Time

Why I fired Google

When I was still relying on Google, my problem-solving process looked like this:

  1. Do a Google search.
  2. Skim through the search results.
  3. Click something that might be relevant.
  4. Try the suggested fix.
  5. If that didn’t work, repeat the process with a new search or keep digging.

This workflow was manageable when search engine optimization wasn't such a massive problem and the search results were somewhat relevant. But today? Google is no longer a search engine — it’s an online advertisement platform. I understand that Google has to make money, but the problem is that finding the relevant information has become too slow and frustrating because:

I struggled for so long because I felt that I had no other choice. Eventually, I had to admit that searching for help wasted so much of my time that I had to try something totally different. I decided to give AI a shot.

Rubber Duck Debugging for the Win

Why I gave AI a shot

When I am using ChatGPT, my problem-solving process is completely different. It looks like this:

  1. I describe my problem and include an error message if I have one.
  2. ChatGPT proposes a solution or asks me to clarify my problem.
  3. If ChatGPT proposes a solution, I try the suggested solution.
  4. If the suggested solution doesn't solve my problem or ChatGPT needs additional information, I continue the discussion until I find a solution to my problem.

This workflow feels less demoralizing than googling because:

In short, when I use an AI chatbot, I don’t feel alone or stuck. The problem-solving process feels surprisingly collaborative and predictable: the better my input, the better the responses I get.

Summary

When I use Google, I must concentrate on figuring out a good search query and hope that it provides useful search results. Also, I must spend a lot of energy on browsing and evaluating the search results. On the other hand, when I use an AI chatbot, I can simply describe my problem, and it asks me to explain my reasoning until I find the answer I am looking for. The latter process is faster, more focused, and reminds me of a troubleshooting session with a colleague.

Do you still rely on Google, or are you using AI for problem-solving? Share your thoughts in the comments below. I'd love to hear your thoughts and experiences.


Clean Test Automation Monthly 8 / 2025 (31 Aug 8:19 AM)

The Clean Test Automation Monthly is a monthly blog post that shares interesting or useful test automation content which I consumed during the current month. This blog post is always published on the last day of the month.

Let's begin!

Table of Contents:

Test Design

Quality Coaching Scenario: This code won’t take tests provides five tips which help us to write automated tests for a legacy application that cannot be unit tested and explains why we should follow the approach suggested by this blog post.

The Code Was 100% Tested — And 100% Broken is a thought-provoking article which explains why 100% code coverage doesn't guarantee that our application works as expected.

Test code should rarely be resilient argues that an automated test should fail as fast as possible and should fail for only one reason.

Why You Should Test with Real Data (Sometimes) argues that tests which use real data catch real-world issues which are often missed by tests which use mocks, describes what we should take into account when we use real data, identifies the situations when we shouldn't use real data, and gives tips which help us to build a solid test suite which leverages both mocks and real data.

You should delete tests is a thought-provoking blog post which argues that sometimes the best thing we can do is to delete our tests.

Backend

How I test production-ready Spring Boot applications is a comprehensive blog post which explains how the author writes both unit and integration tests for Spring Boot web applications with JUnit 5.

Achieve Faster Build Times with the Spring Test Profiler describes how we can configure and use Spring Test Profiler which collects performance data during test runs, identifies performance bottlenecks, and provides suggestions which help us to improve the performance of our test suite.

AI for API Testing: How I Used AI and Star Trek to Generate Better Test Cases describes how we can use AI for generating test cases for a REST API that's documented with OpenAPI.

What I Learned Using GitHub Copilot for API Automation explains how the author generated API tests for a REST API with GitHub Copilot and identifies four things the author learned during this process.

Intro to @ClassTemplate Annotation in JUnit provides an introduction to class templates which were introduced in JUnit Jupiter 5.13.0.

UI / End-to-End

Tracking UI to API Connections with Playwright describes how we can write automated tests which ensure that the expected API endpoint was invoked and the expected data was sent to the invoked API endpoint.

UI Testing Locators Guide: How to Write Stable and Maintainable Selectors provides five rules which help us to write locators that make our tests stable and as easy to maintain as possible.

Global Cache: Make Playwright BeforeAll Run Once for All Workers identifies three different ways to set up authentication in our Playwright tests. Because none of these options is perfect, the author introduces their own solution which helps us to run our setup code only once for all workers.

Automating Animation Testing with Playwright: A Practical Guide helps us to write automated tests for animations which are often used in web applications.

Does Your Web App Fail Gracefully? identifies four common failure scenarios and provides tips which help us to write automated tests that ensure that our web application fails gracefully.


Clean Test Automation Monthly 7 / 2025 (31 Jul 8:49 AM)

The Clean Test Automation Monthly is a monthly blog post that shares interesting or useful test automation content which I consumed during the current month. This blog post is always published on the last day of the month.

Let's begin!

Table of Contents:

Test Design

QA Decisions, When to Automate Tests, and When to Walk Away: A Practical Guide for Effective Test Automation argues that "automate everything" isn't sustainable, identifies five questions which help us to decide what to automate, highlights five scenarios which must not be automated, and provides best practices which help us to decide what we should automate.

What Your Broken Test Suite Is Really Telling You identifies seven test automation problems which cost us both time and money, and helps us to solve these problems.

Why I'm Betting on LLMs for UI Testing introduces the author's vision for the future. The author argues that we should generate UI tests (as natural language) from the input data (specs, design documents, code, existing tests, and so on) by using an LLM and pass the generated tests to another LLM which runs them.

AI-Assisted Testing – The Rules and Roles is an interesting post which explains how AI can and cannot help testers (at least without compromising quality). Even though this article is written for testers, it's very relevant for developers as well. You see, many people want to use AI for writing automated tests, and if this isn't done right, it's a recipe for disaster.

Backend

Manage Spring Boot Test Dependencies with Maven describes how we can manage our testing dependencies when we are using Spring Boot.

Testing an OpenRewrite Recipe explains how we can write automated tests for an OpenRewrite recipe which moves Kotlin source code files "closer to the root package" (which is the official recommendation).

Best Practices for Spring Boot Logging Test Configuration describes how we can create a logging configuration which helps us to figure out what went wrong if a test fails, and explains how we can verify that the expected log message is written to the log. This is useful if we want to ensure that our log contains the expected audit log messages.

UI / End-to-End

5 JavaScript Tricks for Cleaner, Faster Playwright Tests provides five tips which help us to write clean tests with Playwright.

How Agoda Uses Playwright Visual Testing to Prevent Brand Leakage in White-Label defines the terms: white-label and brand leakage, and describes how we can prevent brand leakage by writing visual tests with Playwright.

Milliseconds Make Millions: Turning Playwright Tests into Performance Audits is a practical blog post which describes how we can write performance tests with Playwright and Lighthouse.

vi.mock Is a Footgun: Why vi.spyOn Should Be Your Default is a thorough blog post which argues that we should use spies instead of mocks when we are writing our tests with Vitest and we want to replace the dependencies of the system under test with test doubles.


Clean Test Automation Monthly 6 / 2025 (30 Jun 7:33 AM)

The Clean Test Automation Monthly is a monthly blog post that shares interesting or useful test automation content which I read during the current month. This blog post is always published on the last day of the month.

Let's begin!

Table of Contents:

Test Design

Test Planning – What Makes A Good Plan? argues that a good enough test plan defines what we will test, explains how we are going to do it, identifies the required skills, and lists the things we need before we can start testing. So, what does this have to do with automated testing? Quite a bit, actually. If we go through these things with our team before writing any code, we can ensure that everyone is on the same page, define testing guidelines for our project, and arrange training if needed.

Lessons learned in test-driven development: Software tester edition shares a tester's perspective on test-driven development (TDD). It identifies the pros and cons of TDD, and describes how we can find a balance between TDD and traditional testing.

Mock Objects & Stubs: Your Key to Bulletproof Test Isolation provides a quick introduction to test doubles, identifies the difference between a stub and a mock, and describes how we can use them to isolate the system under test from its dependencies.

Why I Don’t Use Mocking Frameworks and Why You Might Not Need Them Either explains why the author doesn't use mocking frameworks like Mockito. It highlights the issues caused by mocks, and explains how the author writes both code and tests which don't require mocks.

This blog post also provides an example which demonstrates how we can refactor a service method so that we can write unit tests which don't require mocks. Unfortunately, this example doesn't do the author any favors because the new unit test covers only a small portion of the refactored code.

And yet, I decided to include this post in this newsletter because I think that it's generally a good idea to minimize the number of test doubles used by our automated tests.

Flaky Tests: When Perfection Becomes the Enemy of Progress defines the term flaky test, identifies the cause of flakiness, and describes three classic and harmful solutions which are often used to tackle flaky tests. Finally, the blog post describes how Docker deals with flaky tests by using an internal tool which ignores known issues.

Refactor or Rewrite? Making the Right Call in Test Automation defines the terms refactor and rewrite, and explores when it's better to refactor existing tests and when it's worth rewriting them from scratch.

Test names should be sentences describes the purpose of an automated test, argues that a good test name is a sentence which describes what went wrong if the test fails, and explains what might be wrong if we cannot figure out a good name for a test.

Backend

Let’s Explore the Best REST API Clients and Testing Tools (2025 Edition) highlights 15 REST API clients and testing tools. This post identifies the key features of every tool and provides a short evaluation which helps us to select the best tool for the job.

STF Milestone 4: Parameterized test classes identifies a situation when we want to run a set of test methods by using the same arguments and describes how we can solve this problem by writing parameterized test classes with JUnit 5.

How to Configure Mockito Agent for Java 21+ Without Warning is an interesting blog post which explains how we can get rid of the warning that's caused by Mockito when we are using Java 21 or newer. This blog post describes why the warning is displayed and explains how we can solve this problem when we are running our tests with Maven, Gradle, and IntelliJ IDEA.

Thymeleaf View Testing with Spring Boot and HtmlUnit is a practical blog post that helps us to write tests for Thymeleaf views with Spring MockMvc and HtmlUnit.

Test Flyway Java Migrations with Spring Boot describes how we can write integration tests for Java-based Flyway migrations when we are using Spring Boot.

UI / End-to-End

Supercharging Playwright Tests with Chrome DevTools Protocol describes how we can leverage the Chrome DevTools Protocol (CDP) when we are writing automated tests with Playwright. This post helps us to speed up our Playwright tests by blocking images, capture console logs, and simulate slow network speed.

Why Playwright Tracing Beats Logging for Debugging UI Tests demonstrates why Playwright tracing is a better debugging tool than reading text-based Playwright logs. It also introduces a workaround for a Playwright bug that prevents the trace file from being saved when the Playwright tests are written with Python.


Clean Test Automation Monthly 5 / 2025 (31 May 1:31 AM)

The Clean Test Automation Monthly is a monthly blog post that shares interesting or useful test automation content which I read during the current month. This blog post is always published on the last day of the month.

Let's begin!

Table of Contents:

Test Design

Your Performance Tests Are Only as Good as Your Requirements advocates that we should leverage historical production data, understand the nature of peak load, and select our testing strategy based on the production data and the nature of peak load. Finally, the author identifies five questions that help us to write better requirements for our performance tests.

Netflix App Testing At Scale describes the testing strategies which are used to write automated tests for the Netflix Android application which has over 400 modules and one million lines of Kotlin and Java code. This blog post introduces the different layers of the testing pyramid used by Netflix, highlights the testing tools used to write different tests, identifies the challenges faced by developers, and describes how they were able to solve these problems.

Good Test Automation Doesn’t Start with Code is a thought-provoking article which argues that if you want to write good tests, you shouldn't get obsessed with testing tools, best practices, or writing code. Instead, you should learn to ask the right questions.

Why Property Testing Finds Bugs Unit Testing Does Not explores the limitations of traditional example-based tests and highlights the strengths of property-based testing. The author argues that example-based tests are effective when the number of inputs is relatively small. However, when the number of inputs (and the number of combinations of inputs) grows, traditional tests often miss the errors which are caught by the randomized inputs used in property-based testing.

AI Test Generation: A Dev’s Guide Without Shooting Yourself in the Foot identifies two common problems often found in AI-generated tests, explains how we can catch at least some of these errors with SonarQube, and shares seven tips that help us to write good tests with AI.

The Testing Tower introduces the concept of a testing tower and argues that it's a modern, more expressive, and context-driven replacement for the outdated testing pyramid. The author divides tests into five levels — from fast automated tests, which form the foundation of the testing tower, to human insight and validation, which is found on the battlements of the testing tower.

Backend

Spring Boot AI Evaluation Testing describes how we can build a simple AI agent with Spring AI and explains how we can write automated tests for our AI agent by using a technique called evaluation testing.

Automating Security Testing in CI/CD Pipelines with OWASP ZAP: A Comprehensive Guide describes how we can configure and run security tests with OWASP ZAP, explains how we can integrate OWASP ZAP with our CI/CD pipeline by using GitHub Actions, and helps us create a GitHub Actions workflow which runs our security tests against an application that's running in a staging environment.

Automating Contract Testing: A Developer’s Guide with Spring Cloud Contract provides an introduction to contract testing, introduces the key features of Spring Cloud Contract, and describes how we can write automated contract tests with Spring Cloud Contract.

Spring Boot TestContext Cache Best Practices identifies three common mistakes which cause unnecessary cache misses and shares four best practices which help us to maximize context reuse and improve the performance of our test suite.

Things I Wish I Knew When I Started Testing Spring Boot Applications highlights four things which you should know if you are writing tests for Spring Boot applications.

Writing Unit Test With MockMvcTester: Returning an Object as JSON is my own blog post that identifies what kind of tests we should write, helps us to eliminate duplicate request building code, and describes how we can write unit tests for a REST API endpoint that returns an object as JSON.

PITest — a Hands‑On Guide to Mutation Testing in Java provides an introduction to mutation testing and the PITest library, describes how we can integrate PITest with Maven, and explains how we can optimize its performance using the incremental analysis feature.

Building Cloud-Ready Apps Locally: Spring Boot, AWS, and LocalStack in Action demonstrates how we can develop and test Spring Boot applications which use AWS services in our local development environment with LocalStack and Testcontainers. It explains how we can integrate a Spring Boot application with Amazon SQS and S3, and describes how we can test cloud interactions in a local and cost-effective environment without relying on real AWS resources.

Combine Testcontainers and Spring Boot with multiple containers introduces three different approaches for starting multiple Docker containers with TestContainers when we are writing integration tests for Spring Boot applications. It provides sample code for each explored method, and identifies the advantages and disadvantages of each approach.

Automating Java Style Guide Enforcement with Checkstyle and OpenRewrite describes how we can enforce coding standards automatically with Checkstyle and OpenRewrite. It provides a quick introduction to both Checkstyle and OpenRewrite, and explains how we can identify "code style" violations with Checkstyle and fix them automatically with OpenRewrite. Additionally, this blog post explains how we can use these tools with Maven, leverage IDE plugins, and integrate them with our CI pipeline.

Testing MongoDB Atlas Search Java Apps Using TestContainers provides a short introduction to MongoDB Atlas Search and describes how we can write comprehensive integration tests for a service with JUnit Jupiter and Testcontainers.

Testcontainers + Spring Done Right: Cleaner, Faster, Smarter introduces a new open source project called Spring-TestContainers. The main benefit of this project is that we don't have to write any infrastructure configuration code if we want to start a Docker container before our integration tests are run and stop it after they have been run. This blog post explains why the author wrote the Spring-TestContainers library, describes why we should use it, and helps us to write integration tests for a repository which uses the PostgreSQL database.

UI / End-to-End

Using Playwright Custom Matchers to Automate Layout Testing provides a quick introduction to layout testing, identifies existing layout testing tools, and explains how we can write layout tests which won't rely on visual regression tools (i.e., comparing an expected screenshot with the actual screenshot).

Vibe testing with Playwright explains how we can integrate GitHub Copilot with the Playwright MCP server and demonstrates how we can leverage GitHub Copilot for writing automated tests with Playwright. The author installs the Playwright MCP server, asks GitHub Copilot to analyze his website and write tests for it, and shares the results with us.

Speeding Up Playwright Tests with Dynamic Sharding in GitHub Actions describes how we can create a GitHub Actions workflow which improves the performance of our Playwright end-to-end tests by using dynamic sharding. This blog post explains how we can determine the optimal number of shards based on our test count, execute tests in parallel by using the created shards, and merge the test results into a single HTML report.

Handling Multi-User Flows in Playwright the Right Way describes how we can improve the performance of our Playwright tests and make them less flaky when we have to write tests where multiple users invoke the same flow.

Offline but Not Broken: Testing Cached Data with Playwright explains how we can write tests which verify that our web application is working as expected when the user of the web application is offline.


Writing Unit Test With MockMvcTester: Returning an Object as JSON (13 May 7:02 AM)

The second part of my MockMvcTester tutorial described how we can write unit tests for a Spring MVC REST API endpoint that returns a list as JSON. This time we will take a closer look at writing unit tests for a REST API endpoint which returns the information of the requested object as JSON.

After we have finished this blog post, we:

Let's begin.

This blog post assumes that:

What Kind of Tests Should We Write?

When we are writing unit tests for a REST API endpoint which returns the information of the requested object as JSON, we must ensure that the system under test is working as expected when the requested object isn't found and when the requested object is found. When we are writing unit tests for these two scenarios, we must verify that:

Next, we will take a look at the system under test.

Introduction to System Under Test

The system under test processes GET requests sent to the path: '/todo-item/{id}' and it fulfills these requirements:

First, if the requested todo item is found, the system under test:

The tested controller method is called findById() and it simply returns the information of the todo item that's found from the database. The source code of the tested controller method looks as follows:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
@RestController
@RequestMapping("/todo-item")
public class TodoItemCrudController {
    
    private final TodoItemCrudService service;
    @Autowired
    public TodoItemCrudController(TodoItemCrudService service) {
        this.service = service;
    }
    
    @GetMapping("{id}")
    public TodoItemDTO findById(@PathVariable("id") Long id) {
        return service.findById(id);
    }
}

The TodoItemDTO class is a data transfer object (DTO) that contains the information of a single todo item. Its source code looks as follows:

public class TodoItemDTO {
 
    private Long id;
    private String description;
    private List<TagDTO> tags;
    private String title;
    private TodoItemStatus status;
 
    //Getters and setters are omitted
}

The TagDTO class is a DTO that contains the information of a single tag. Its source code looks as follows:

public class TagDTO {
 
    private Long id;
    private String name;
 
    //Getters and setters are omitted
}

The TodoItemStatus enum specifies the possible statuses of a todo item. Its source code looks as follows:

public enum TodoItemStatus {
    OPEN,
    IN_PROGRESS,
    DONE
}

For example, if the found todo item is in progress and has one tag, the following JSON document is returned back to the client:

{
    "id":1,
    "description":"Remember to use JUnit 5",
    "tags":[
        {
            "id":9,
            "name":"Code"
        }
    ],
    "title":"Write example application",
    "status":"IN_PROGRESS"
}

Second, if the requested todo item isn't found, the system under test:

If the requested todo item isn't found, the TodoItemCrudService class throws a TodoItemNotFoundException which is processed by the TodoItemErrorHandler class. The relevant part of our error handler class looks as follows:

import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.ResponseStatus;
@ControllerAdvice
public class TodoItemErrorHandler {
    
    @ExceptionHandler(TodoItemNotFoundException.class)
    @ResponseStatus(HttpStatus.NOT_FOUND)
    public void returnHttpStatusCodeNotFound() {
        //Left blank on purpose
    }
}
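
The TodoItemCrudService and TodoItemNotFoundException classes aren't shown in this blog post. The following is only a minimal sketch (written for this summary, not taken from the example application) of the parts which the tested controller and the error handler rely on; the real classes may look different. Note that the two classes would live in separate source files:

public class TodoItemNotFoundException extends RuntimeException {

    public TodoItemNotFoundException(String message) {
        super(message);
    }
}

public class TodoItemCrudService {

    //Returns the information of the requested todo item or throws
    //a TodoItemNotFoundException if the requested todo item isn't found.
    public TodoItemDTO findById(Long id) {
        //The real implementation reads the todo item from the database.
        //This sketch only illustrates the method signature used by the tests.
        throw new TodoItemNotFoundException("No todo item found with id: " + id);
    }
}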

Let's move on and find out how we can send HTTP requests to the system under test.

Sending HTTP Requests to the System Under Test

Because we want to eliminate duplicate code from our test class, we have to create and send HTTP requests to the system under test by using a so-called request builder class. In other words, before we can write unit tests for the system under test, we have to create a new request builder class and write a request builder method which creates and sends HTTP requests to the system under test. We can write our request builder class by following these steps:

First, create a new request builder class and mark this class as final. After we have created our request builder class, its source code looks as follows:

final class TodoItemHttpRequestBuilder {
}
There are three things I want to point out:

  • When we name our request builder classes, we should append the text: HttpRequestBuilder to the name of the processed "entity". Because the system under test processes todo items, the name of our request builder class should be: TodoItemHttpRequestBuilder.
  • It's a good idea to mark our request builder class as final because we don't want other developers to extend it.
  • We should put our request builder class in the package that contains the unit and integration tests which use it. That's why we should set its visibility to package-private.

Second, add a private and final MockMvcTester field to our request builder class and create a constructor which initializes this field. After we have done this, the source code of our request builder class looks as follows:

import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.assertj.MockMvcTester;
final class TodoItemHttpRequestBuilder {
    private final MockMvcTester mockMvcTester;
    TodoItemHttpRequestBuilder(MockMvc mockMvc) {
        this.mockMvcTester = MockMvcTester.create(mockMvc);
    }
}

Third, add a new method called findById() to our request builder class. This method must take the id of the todo item as a method parameter and return a MvcTestResult object. After we have added the findById() method to our request builder class, we have to implement it by following these steps:

  1. Send a GET request to the path: '/todo-item/{id}'.
  2. Return the MvcTestResult object that's returned by the exchange() method of the MockMvcRequestBuilder class.

After we have implemented the findById() method, the source code of our request builder class looks as follows:

import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.assertj.MockMvcTester;
import org.springframework.test.web.servlet.assertj.MvcTestResult;
final class TodoItemHttpRequestBuilder {
    private final MockMvcTester mockMvcTester;
    TodoItemHttpRequestBuilder(MockMvc mockMvc) {
        this.mockMvcTester = MockMvcTester.create(mockMvc);
    }
    
    MvcTestResult findById(Long id) {
        return mockMvcTester.get()
                .uri("/todo-item/{id}", id)
                .exchange();
    }
}
Additional Reading:

Next, we will find out how we can write unit tests for the system under test with MockMvcTester.

Writing Unit Tests With MockMvcTester

Before we can write unit tests for the system under test, we have to configure the system under test. We can write the required setup code by following these steps:

First, create a new test class and configure the display name of our test class. After we have created a new test class, its source code looks as follows:

import org.junit.jupiter.api.DisplayName;
@DisplayName("Tests for the Todo Items API")
class TodoItemCrudControllerTest {
}

Second, add two private fields to our test class:

After we have added these fields to our test class, its source code looks as follows:

import org.junit.jupiter.api.DisplayName;
@DisplayName("Tests for the Todo Items API")
class TodoItemCrudControllerTest {
    private TodoItemHttpRequestBuilder requestBuilder;
    private TodoItemCrudService service;
}
Generally speaking, we shouldn't replace our services with test doubles because this makes our unit tests hard to maintain. I use this technique here only because it makes our code samples easier to understand. If this were a real software project, we should increase the size of the tested unit and write unit tests which won't tie our hands.

Third, write a setup method that's invoked before each unit test is run. This method creates the TodoItemCrudService stub, configures the system under test, and creates the request builder that builds HTTP requests and sends the created requests to the system under test. After we have implemented this setup method, the source code of our test class looks as follows:

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.DisplayName;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;
import static org.mockito.Mockito.mock;
@DisplayName("Tests for the Todo Items API")
class TodoItemCrudControllerTest {
    private TodoItemHttpRequestBuilder requestBuilder;
    private TodoItemCrudService service;
    @BeforeEach
    void configureSystemUnderTest() {
        service = mock(TodoItemCrudService.class);
        TodoItemCrudController testedController = 
                new TodoItemCrudController(service);
        MockMvc mockMvc = MockMvcBuilders.standaloneSetup(testedController)
                .setControllerAdvice(new TodoItemErrorHandler())
                .setMessageConverters(
                        WebTestConfig.objectMapperHttpMessageConverter()
                )
                .build();
        requestBuilder = new TodoItemHttpRequestBuilder(mockMvc);
    }
}
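
The WebTestConfig class isn't shown in this blog post. It's a test utility class which creates the HTTP message converter used by the system under test. A minimal sketch of such a class, assuming that the application uses Jackson for JSON serialization, could look as follows (the actual class may configure the ObjectMapper differently):

import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.http.converter.json.MappingJackson2HttpMessageConverter;
final class WebTestConfig {

    private WebTestConfig() {}

    //Creates the HTTP message converter which transforms objects into JSON.
    static MappingJackson2HttpMessageConverter objectMapperHttpMessageConverter() {
        return new MappingJackson2HttpMessageConverter(new ObjectMapper());
    }
}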

We have now written the code that configures the system under test. Next, we will write the unit tests which ensure that the system under test is working as expected.

Before we can write the actual test methods, we have to create a new nested test class by following these steps:

  1. Add a new nested test class to our test class and configure the display name of our new nested test class. This class contains the test methods which ensure that the system under test is working as expected.
  2. Add a new constant to our nested test class. This constant contains the id of the requested todo item.

After we have made the required changes to our test class, its source code looks as follows:

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Nested;
import static org.mockito.Mockito.mock;
@DisplayName("Tests for the Todo Items API")
class TodoItemCrudControllerTest {
    private TodoItemHttpRequestBuilder requestBuilder;
    private TodoItemCrudService service;
    //The setup method is omitted on purpose
    @Nested
    @DisplayName("Find todo item by using its id as search criteria")
    class FindById {
        private final Long TODO_ITEM_ID = 1L;
    }
}

Let's move on and write the unit tests which verify that the system under test is working as expected when the requested todo item isn't found and when the requested todo item is found.

Scenario 1: The Requested Todo Item Isn't Found

We can write the required unit tests by following these steps:

First, add a new nested test class to the FindById class and configure the display name of our new nested test class. This class contains the unit tests which ensure that the system under test is working as expected when the requested todo item isn't found. After we have added a new nested test class to the FindById class, the source code of our test class looks as follows:

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Nested;
@DisplayName("Tests for the Todo Items API")
class TodoItemCrudControllerTest {
    private TodoItemHttpRequestBuilder requestBuilder;
    private TodoItemCrudService service;
    //The setup method is omitted on purpose
    @Nested
    @DisplayName("Find todo item by using its id as search criteria")
    class FindById {
        private final Long TODO_ITEM_ID = 1L;
        @Nested
        @DisplayName("When the requested todo item isn't found")
        class WhenRequestedTodoItemIsNotFound {
        }
    }
}

Second, write a new setup method that's run before each test method of the WhenRequestedTodoItemIsNotFound class. This method ensures that the findById() method of the TodoItemCrudService class throws a new TodoItemNotFoundException when it's invoked by using the method parameter 1L. After we have added this setup method to our test class, its source code looks as follows:

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Nested;
import static org.mockito.BDDMockito.given;
@DisplayName("Tests for the Todo Items API")
class TodoItemCrudControllerTest {
    private TodoItemHttpRequestBuilder requestBuilder;
    private TodoItemCrudService service;
    //The setup method is omitted on purpose
    @Nested
    @DisplayName("Find todo item by using its id as search criteria")
    class FindById {
        private final Long TODO_ITEM_ID = 1L;
        @Nested
        @DisplayName("When the requested todo item isn't found")
        class WhenRequestedTodoItemIsNotFound {
            @BeforeEach
            void throwException() {
                given(service.findById(TODO_ITEM_ID))
                        .willThrow(new TodoItemNotFoundException(""));
            }
        }
    }
}

Third, write a unit test which verifies that the system under test returns the HTTP status code 404. After we have written this unit test, the source code of our test class looks as follows:

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Nested;
import org.junit.jupiter.api.Test;
import org.springframework.http.HttpStatus;
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.BDDMockito.given;
@DisplayName("Tests for the Todo Items API")
class TodoItemCrudControllerTest {
    private TodoItemHttpRequestBuilder requestBuilder;
    private TodoItemCrudService service;
    //The setup method is omitted on purpose
    @Nested
    @DisplayName("Find todo item by using its id as search criteria")
    class FindById {
        private final Long TODO_ITEM_ID = 1L;
        @Nested
        @DisplayName("When the requested todo item isn't found")
        class WhenRequestedTodoItemIsNotFound {
            @BeforeEach
            void throwException() {
                given(service.findById(TODO_ITEM_ID))
                        .willThrow(new TodoItemNotFoundException(""));
            }
            @Test
            @DisplayName("Should return the HTTP status code not found (404)")
            void shouldReturnHttpStatusCodeNotFound() {
                assertThat(requestBuilder.findById(TODO_ITEM_ID))
                        .hasStatus(HttpStatus.NOT_FOUND);
            }
        }
    }
}

Fourth, write a unit test which ensures that the system under test returns an HTTP response that has an empty response body. After we have written this unit test, the source code of our test class looks as follows:

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Nested;
import org.junit.jupiter.api.Test;
import org.springframework.http.HttpStatus;
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.BDDMockito.given;
@DisplayName("Tests for the Todo Items API")
class TodoItemCrudControllerTest {
    private TodoItemHttpRequestBuilder requestBuilder;
    private TodoItemCrudService service;
    //The setup method is omitted on purpose
    @Nested
    @DisplayName("Find todo item by using its id as search criteria")
    class FindById {
        private final Long TODO_ITEM_ID = 1L;
        @Nested
        @DisplayName("When the requested todo item isn't found")
        class WhenRequestedTodoItemIsNotFound {
            @BeforeEach
            void throwException() {
                given(service.findById(TODO_ITEM_ID))
                        .willThrow(new TodoItemNotFoundException(""));
            }
            @Test
            @DisplayName("Should return the HTTP status code not found (404)")
            void shouldReturnHttpStatusCodeNotFound() {
                assertThat(requestBuilder.findById(TODO_ITEM_ID))
                        .hasStatus(HttpStatus.NOT_FOUND);
            }
            @Test
            @DisplayName("Should return an HTTP response with empty response body")
            void shouldReturnHttpResponseWhichHasEmptyResponseBody() {
                assertThat(requestBuilder.findById(TODO_ITEM_ID))
                        .hasBodyTextEqualTo("");
            }
        }
    }
}

We have now written the unit tests which ensure that the system under test is working as expected when the requested todo item isn't found. Next, we will write the unit tests which verify that the system under test is working as expected when the requested todo item is found.

Scenario 2: The Requested Todo Item Is Found

We can write the required unit tests by following these steps:

First, add a new nested test class to the FindById class and configure the display name of our new nested test class. This class contains the unit tests which ensure that the system under test is working as expected when the requested todo item is found. After we have added a new nested test class to the FindById class, the source code of our test class looks as follows:

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Nested;
@DisplayName("Tests for the Todo Items API")
class TodoItemCrudControllerTest {
    private TodoItemHttpRequestBuilder requestBuilder;
    private TodoItemCrudService service;
    //The setup method is omitted on purpose
    @Nested
    @DisplayName("Find todo item by using its id as search criteria")
    class FindById {
        private final Long TODO_ITEM_ID = 1L;
        //The WhenRequestedTodoItemIsNotFound class is omitted on purpose
        @Nested
        @DisplayName("When the requested todo item is found")
        class WhenRequestedTodoItemIsFound {
            
        }
    }
}

Second, add new constants to the WhenRequestedTodoItemIsFound class. These constants specify the information of the found todo item and define the expected JSON document that must be returned by the system under test. After we have added these constants to our nested test class, the source code of our test class looks as follows:

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Nested;
@DisplayName("Tests for the Todo Items API")
class TodoItemCrudControllerTest {
    private TodoItemHttpRequestBuilder requestBuilder;
    private TodoItemCrudService service;
    //The setup method is omitted on purpose
    @Nested
    @DisplayName("Find todo item by using its id as search criteria")
    class FindById {
        private final Long TODO_ITEM_ID = 1L;
        //The WhenRequestedTodoItemIsNotFound class is omitted on purpose
        @Nested
        @DisplayName("When the requested todo item is found")
        class WhenRequestedTodoItemIsFound {
            private final String DESCRIPTION = "Remember to use JUnit 5";
            private final String EXPECTED_BODY_JSON = """
                    {
                        "id": 1,
                        "description": "Remember to use JUnit 5",
                        "tags": [
                            {
                                "id": 9,
                                "name": "Code"
                            }
                        ],
                        "title": "Write example application",
                        "status": "IN_PROGRESS"
                    }
                    """;
            private final Long TAG_ID = 9L;
            private final String TAG_NAME  = "Code";
            private final String TITLE = "Write example application";
            private final TodoItemStatus STATUS = TodoItemStatus.IN_PROGRESS;
        }
    }
}

Third, write a new setup method that's run before each test method of the WhenRequestedTodoItemIsFound class. This method ensures that the findById() method of the TodoItemCrudService class returns a todo item that has one tag when it's invoked by using the method parameter 1L. After we have added this setup method to our test class, its source code looks as follows:

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Nested;
import java.util.Collections;
import static org.mockito.BDDMockito.given;
@DisplayName("Tests for the Todo Items API")
class TodoItemCrudControllerTest {
    private TodoItemHttpRequestBuilder requestBuilder;
    private TodoItemCrudService service;
    //The setup method is omitted on purpose
    @Nested
    @DisplayName("Find todo item by using its id as search criteria")
    class FindById {
        private final Long TODO_ITEM_ID = 1L;
        //The WhenRequestedTodoItemIsNotFound class is omitted on purpose
        @Nested
        @DisplayName("When the requested todo item is found")
        class WhenRequestedTodoItemIsFound {
            private final String DESCRIPTION = "Remember to use JUnit 5";
            private final String EXPECTED_BODY_JSON = """
                    {
                        "id": 1,
                        "description": "Remember to use JUnit 5",
                        "tags": [
                            {
                                "id": 9,
                                "name": "Code"
                            }
                        ],
                        "title": "Write example application",
                        "status": "IN_PROGRESS"
                    }
                    """;
            private final Long TAG_ID = 9L;
            private final String TAG_NAME  = "Code";
            private final String TITLE = "Write example application";
            private final TodoItemStatus STATUS = TodoItemStatus.IN_PROGRESS;
            @BeforeEach
            void returnFoundTodoItem() {
                TodoItemDTO found = new TodoItemDTO();
                found.setId(TODO_ITEM_ID);
                found.setDescription(DESCRIPTION);
                found.setStatus(STATUS);
                found.setTitle(TITLE);
                TagDTO tag = new TagDTO();
                tag.setId(TAG_ID);
                tag.setName(TAG_NAME);
                found.setTags(Collections.singletonList(tag));
                given(service.findById(TODO_ITEM_ID)).willReturn(found);
            }
        }
    }
}

Fourth, write a unit test which verifies that the system under test returns the HTTP status code 200. After we have written this unit test, the source code of our test class looks as follows:

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Nested;
import org.junit.jupiter.api.Test;
import java.util.Collections;
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.BDDMockito.given;
@DisplayName("Tests for the Todo Items API")
class TodoItemCrudControllerTest {
    private TodoItemHttpRequestBuilder requestBuilder;
    private TodoItemCrudService service;
    //The setup method is omitted on purpose
    @Nested
    @DisplayName("Find todo item by using its id as search criteria")
    class FindById {
        private final Long TODO_ITEM_ID = 1L;
        //The WhenRequestedTodoItemIsNotFound class is omitted on purpose
        @Nested
        @DisplayName("When the requested todo item is found")
        class WhenRequestedTodoItemIsFound {
            private final String DESCRIPTION = "Remember to use JUnit 5";
            private final String EXPECTED_BODY_JSON = """
                    {
                        "id": 1,
                        "description": "Remember to use JUnit 5",
                        "tags": [
                            {
                                "id": 9,
                                "name": "Code"
                            }
                        ],
                        "title": "Write example application",
                        "status": "IN_PROGRESS"
                    }
                    """;
            private final Long TAG_ID = 9L;
            private final String TAG_NAME  = "Code";
            private final String TITLE = "Write example application";
            private final TodoItemStatus STATUS = TodoItemStatus.IN_PROGRESS;
            @BeforeEach
            void returnFoundTodoItem() {
                TodoItemDTO found = new TodoItemDTO();
                found.setId(TODO_ITEM_ID);
                found.setDescription(DESCRIPTION);
                found.setStatus(STATUS);
                found.setTitle(TITLE);
                TagDTO tag = new TagDTO();
                tag.setId(TAG_ID);
                tag.setName(TAG_NAME);
                found.setTags(Collections.singletonList(tag));
                given(service.findById(TODO_ITEM_ID)).willReturn(found);
            }
            @Test
            @DisplayName("Should return the HTTP status code ok (200)")
            void shouldReturnHttpStatusCodeOk() {
                assertThat(requestBuilder.findById(TODO_ITEM_ID))
                        .hasStatusOk();
            }
        }
    }
}

Fifth, write a unit test which ensures that the system under test returns the information of the found todo item as JSON. After we have written this unit test, the source code of our test class looks as follows:

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Nested;
import org.junit.jupiter.api.Test;
import org.springframework.http.MediaType;
import java.util.Collections;
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.BDDMockito.given;
@DisplayName("Tests for the Todo Items API")
class TodoItemCrudControllerTest {
    private TodoItemHttpRequestBuilder requestBuilder;
    private TodoItemCrudService service;
    //The setup method is omitted on purpose
    @Nested
    @DisplayName("Find todo item by using its id as search criteria")
    class FindById {
        private final Long TODO_ITEM_ID = 1L;
        //The WhenRequestedTodoItemIsNotFound class is omitted on purpose
        @Nested
        @DisplayName("When the requested todo item is found")
        class WhenRequestedTodoItemIsFound {
            private final String DESCRIPTION = "Remember to use JUnit 5";
            private final String EXPECTED_BODY_JSON = """
                    {
                        "id": 1,
                        "description": "Remember to use JUnit 5",
                        "tags": [
                            {
                                "id": 9,
                                "name": "Code"
                            }
                        ],
                        "title": "Write example application",
                        "status": "IN_PROGRESS"
                    }
                    """;
            private final Long TAG_ID = 9L;
            private final String TAG_NAME  = "Code";
            private final String TITLE = "Write example application";
            private final TodoItemStatus STATUS = TodoItemStatus.IN_PROGRESS;
            @BeforeEach
            void returnFoundTodoItem() {
                TodoItemDTO found = new TodoItemDTO();
                found.setId(TODO_ITEM_ID);
                found.setDescription(DESCRIPTION);
                found.setStatus(STATUS);
                found.setTitle(TITLE);
                TagDTO tag = new TagDTO();
                tag.setId(TAG_ID);
                tag.setName(TAG_NAME);
                found.setTags(Collections.singletonList(tag));
                given(service.findById(TODO_ITEM_ID)).willReturn(found);
            }
            @Test
            @DisplayName("Should return the HTTP status code ok (200)")
            void shouldReturnHttpStatusCodeOk() {
                assertThat(requestBuilder.findById(TODO_ITEM_ID))
                        .hasStatusOk();
            }
            @Test
            @DisplayName("Should return the data of the found todo item as JSON")
            void shouldReturnInformationOfFoundTodoItemAsJSON() {
                assertThat(requestBuilder.findById(TODO_ITEM_ID))
                        .hasContentType(MediaType.APPLICATION_JSON);
            }
        }
    }
}

Sixth, write a unit test which verifies that the system under test returns the information of the found todo item. We can write this unit test by following these steps:

  1. Send a GET request to the path: '/todo-item/1'.
  2. Get an assertion object which allows us to write assertions for JSON documents.
  3. Verify that the system under test returns a JSON document that's strictly equal to the expected JSON document. A strict comparison requires that the returned document has exactly the same fields and values as the expected document; extra or missing fields cause the assertion to fail.

After we have written this unit test, the source code of our test class looks as follows:

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Nested;
import org.junit.jupiter.api.Test;
import org.springframework.http.MediaType;
import java.util.Collections;
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.BDDMockito.given;
@DisplayName("Tests for the Todo Items API")
class TodoItemCrudControllerTest {
    private TodoItemHttpRequestBuilder requestBuilder;
    private TodoItemCrudService service;
    //The setup method is omitted on purpose
    @Nested
    @DisplayName("Find todo item by using its id as search criteria")
    class FindById {
        private final Long TODO_ITEM_ID = 1L;
        //The WhenRequestedTodoItemIsNotFound class is omitted on purpose
        @Nested
        @DisplayName("When the requested todo item is found")
        class WhenRequestedTodoItemIsFound {
            private final String DESCRIPTION = "Remember to use JUnit 5";
            private final String EXPECTED_BODY_JSON = """
                    {
                        "id": 1,
                        "description": "Remember to use JUnit 5",
                        "tags": [
                            {
                                "id": 9,
                                "name": "Code"
                            }
                        ],
                        "title": "Write example application",
                        "status": "IN_PROGRESS"
                    }
                    """;
            private final Long TAG_ID = 9L;
            private final String TAG_NAME  = "Code";
            private final String TITLE = "Write example application";
            private final TodoItemStatus STATUS = TodoItemStatus.IN_PROGRESS;
            @BeforeEach
            void returnFoundTodoItem() {
                TodoItemDTO found = new TodoItemDTO();
                found.setId(TODO_ITEM_ID);
                found.setDescription(DESCRIPTION);
                found.setStatus(STATUS);
                found.setTitle(TITLE);
                TagDTO tag = new TagDTO();
                tag.setId(TAG_ID);
                tag.setName(TAG_NAME);
                found.setTags(Collections.singletonList(tag));
                given(service.findById(TODO_ITEM_ID)).willReturn(found);
            }
            @Test
            @DisplayName("Should return the HTTP status code ok (200)")
            void shouldReturnHttpStatusCodeOk() {
                assertThat(requestBuilder.findById(TODO_ITEM_ID))
                        .hasStatusOk();
            }
            @Test
            @DisplayName("Should return the found todo item as JSON")
            void shouldReturnInformationOfFoundTodoItemAsJSON() {
                assertThat(requestBuilder.findById(TODO_ITEM_ID))
                        .hasContentType(MediaType.APPLICATION_JSON);
            }
            @Test
            @DisplayName("Should return the information of the found todo item")
            void shouldReturnInformationOfFoundTodoItem() {
                assertThat(requestBuilder.findById(TODO_ITEM_ID))
                        .bodyJson()
                        .isStrictlyEqualTo(EXPECTED_BODY_JSON);
            }
        }
    }
}

We can now write unit tests for a Spring MVC REST API endpoint that returns an object as JSON. Let's summarize what we learned from this blog post.

Summary

This blog post has taught us four things:

P.S. You can get the example applications from GitHub.

The post Writing Unit Test With MockMvcTester: Returning an Object as JSON appeared first on Petri Kainulainen.

   


Clean Test Automation Monthly 4 / 2025 1 May 6:30 AM (6 months ago)

The Clean Test Automation Monthly is a monthly blog post that shares interesting or useful test automation content which I read during the current month. This blog post is always published on the last day of the month.

Let's begin!

Table of Contents:

Test Design

Attack of the Clones – The War on Code Duplication explains why duplicate code makes our tests hard to maintain, reveals how code duplication sneaks into our test suite, and introduces a rule that helps us to decide when we should get rid of duplicate code.

Cutting Through the Noise - The Case Against Gherkin in Automation critiques Gherkin-based test automation. The author argues that Gherkin introduces unnecessary complexity and doesn't deliver the promised collaboration benefits because non-technical stakeholders rarely engage with Gherkin scenarios, which leaves developers maintaining redundant feature files and step definitions. Finally, the author identifies three reasons why a code-first approach is a better choice.

Don't Mock Your Framework: Writing Tests You Won't Regret identifies three reasons why we shouldn't mock frameworks (or libraries) which we don't own and explains how we can avoid making this mistake. Finally, the author admits that there are exceptions to this rule and identifies one such exception.

Test-Driven Development: Red, Green, Refactor! is yet another blog post which explains the correct way to do TDD. The author identifies the steps of the TDD cycle and describes the purpose of each step. The thing that differentiates this blog post from most TDD articles is that it provides actionable advice for writing tests and for selecting the test which we should write next.

Don’t trust a test you've never seen fail: Introducing Reverse Mutation Testing reminds us that we should never trust an automated test until we have seen it fail. This blog post explains the simplest way to make our tests fail and introduces a concept called reverse mutation testing, which generates mutated versions of our test cases, runs the created tests, and creates a report that identifies the tests which won't fail when they should.

Why "Shift Left" Keeps Failing is an excellent blog post which explains why our efforts are doomed to fail if we all agree to do something, but we don't make sure that everyone is on the same page. That's why we must use clear definitions and share our expectations when are making decisions like this.

Backend

Conditionally Registering JUnit 5 Extensions provides a very quick introduction to the JUnit 5 extension model and identifies the drawbacks of registering extensions statically with the @ExtendWith annotation and dynamically with the @RegisterExtension annotation. Finally, the author explains how we can solve the problems of these registration mechanisms by writing a conditional extension resolver.
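
If these registration mechanisms are new to you, the following sketch shows the difference between static and dynamic registration. The LoggingExtension class is a made-up example, not something taken from the linked blog post:

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.BeforeEachCallback;
import org.junit.jupiter.api.extension.ExtendWith;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.RegisterExtension;
//A made-up extension that's used only to demonstrate the two registration mechanisms
class LoggingExtension implements BeforeEachCallback {
    @Override
    public void beforeEach(ExtensionContext context) {
        System.out.println("Running: " + context.getDisplayName());
    }
}
//Static registration: the extension class is selected at compile time and
//JUnit 5 creates the extension instance.
@ExtendWith(LoggingExtension.class)
class StaticRegistrationTest {
    @Test
    void passes() {}
}
//Dynamic registration: the test class creates and configures the extension instance itself.
class DynamicRegistrationTest {
    @RegisterExtension
    static LoggingExtension loggingExtension = new LoggingExtension();
    @Test
    void passes() {}
}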

Testing cloud applications without breaking the bank: Testcontainers and LocalStack describes how we can minimize the testing costs of our cloud applications by writing tests which are run in the development environment by using LocalStack, Testcontainers, and JUnit 5.
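
As a rough idea of what this setup looks like, here is a minimal sketch that starts LocalStack with Testcontainers and JUnit 5. The LocalStack image tag and the enabled S3 service are my own assumptions, not details taken from the linked article:

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;
import static org.assertj.core.api.Assertions.assertThat;
@Testcontainers
class LocalStackSmokeTest {
    //Starts a local AWS emulator in a Docker container before the tests are run
    @Container
    static LocalStackContainer localStack = new LocalStackContainer(
            DockerImageName.parse("localstack/localstack:3.4"))
            .withServices(LocalStackContainer.Service.S3);
    @Test
    void shouldStartLocalStack() {
        //A real test would configure an AWS SDK client to use this endpoint
        assertThat(localStack.isRunning()).isTrue();
        assertThat(localStack.getEndpoint()).isNotNull();
    }
}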

Shift-Left Testing with Spring Boot and Testcontainers: A Comprehensive Guide starts by providing a quick introduction to shift-left testing, Testcontainers, and Spring Boot. Next, this blog post introduces three testing strategies we can use when we are writing integration and end-to-end tests, and identifies the challenges which we will face when we implement these strategies. Finally, the author provides a step-by-step guide that helps us to implement shift-left testing with Spring Boot, JUnit 5, and Testcontainers.

gRPC Testing intro: Writing the first test describes how we can implement a simple gRPC service with Java, Gradle, and Spring Boot AND write tests for our gRPC service with JUnit 5.

Testing Spring Web MVC Filter with Spring Boot identifies the limitations of unit tests and explains how we can write better tests for our filters by loading a Spring application context that contains only the web components.
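
One common way to load such a web slice is Spring Boot's @WebMvcTest annotation. The sketch below uses it; the TodoItemController class, the request path, and the X-Request-Id header written by the filter are made-up examples rather than details from the linked post:

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
import org.springframework.test.web.servlet.MockMvc;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.header;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;
//Loads only the web components (controllers, filters, and so on) instead of the full application context
@WebMvcTest(TodoItemController.class)
class RequestIdFilterTest {
    @Autowired
    private MockMvc mockMvc;
    @Test
    void shouldAddRequestIdHeaderToResponse() throws Exception {
        //Verifies that the filter added the (assumed) X-Request-Id header to the response
        mockMvc.perform(get("/api/todo-items"))
                .andExpect(status().isOk())
                .andExpect(header().exists("X-Request-Id"));
    }
}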

Writing Unit Tests With MockMvcTester: Returning a List as JSON is my own blog post that identifies what kind of tests we should write, helps us to eliminate duplicate request building code, and describes how we can write unit tests for a REST API endpoint that returns a list as JSON.

Top 5 Spring Boot Testing Myths debunks five common myths and misconceptions about testing in Spring Boot. It explains why these myths and misconceptions were born and reveals that these myths and misconceptions aren't a problem IF we use Spring Boot in the correct way.

A Practical Guide to Testing Spring Controllers With MockMvcTester provides a quick and practical introduction to MockMvcTester. If you don't want to read long tutorials and you want to know what you can do with MockMvcTester, you should read this blog post.

Writing Your First JUnit Jupiter (JUnit 5) Extension provides a quick introduction to JUnit 5 extensions, explains why (and when) we should write a custom extension, and describes how we can implement two custom JUnit 5 extensions: a timer which logs the execution time of every test method and an extension which takes a screenshot when a Selenium test fails.
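
The timer extension mentioned above needs surprisingly little code. Here is a rough sketch that follows the same idea; the logging format and the store namespace are my own choices rather than code from the linked post:

import java.lang.reflect.Method;
import org.junit.jupiter.api.extension.AfterTestExecutionCallback;
import org.junit.jupiter.api.extension.BeforeTestExecutionCallback;
import org.junit.jupiter.api.extension.ExtensionContext;
public class TimingExtension implements BeforeTestExecutionCallback, AfterTestExecutionCallback {
    private static final ExtensionContext.Namespace NAMESPACE =
            ExtensionContext.Namespace.create(TimingExtension.class);
    @Override
    public void beforeTestExecution(ExtensionContext context) {
        //Stores the start time of the test method in the extension context's store
        getStore(context).put(context.getRequiredTestMethod(), System.currentTimeMillis());
    }
    @Override
    public void afterTestExecution(ExtensionContext context) {
        //Reads the start time and logs how long the test method took
        Method testMethod = context.getRequiredTestMethod();
        long start = getStore(context).remove(testMethod, long.class);
        System.out.printf("Method [%s] took %d ms.%n",
                testMethod.getName(), System.currentTimeMillis() - start);
    }
    private ExtensionContext.Store getStore(ExtensionContext context) {
        return context.getStore(NAMESPACE);
    }
}

The extension can then be registered statically with the @ExtendWith annotation or dynamically with the @RegisterExtension annotation.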

UI / End-to-End

What Makes the Page Object Model So Special? introduces three software design principles which make the page object model useful to anyone who is writing UI tests for a web UI or end-to-end tests for a web application.

Building and improving Page Objects, one step at a time describes how we can make iterative improvements to our page objects. The author starts with a simple page object and demonstrates how we can improve it by making small, iterative changes. The goal of these changes is to create a page object that helps us to write well-structured test code that's both easy to read and maintain.
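
If page objects are new to you, the following Selenium sketch illustrates the basic idea. The page URL and the element locators are made up for this example:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
//A minimal page object. It hides the low-level Selenium API behind methods
//that describe what the user does, not how the page is implemented.
class LoginPage {
    private static final By USERNAME = By.id("username");
    private static final By PASSWORD = By.id("password");
    private static final By LOGIN_BUTTON = By.id("login");
    private final WebDriver driver;
    LoginPage(WebDriver driver) {
        this.driver = driver;
    }
    LoginPage open() {
        driver.get("https://example.com/login");
        return this;
    }
    void logInAs(String username, String password) {
        driver.findElement(USERNAME).sendKeys(username);
        driver.findElement(PASSWORD).sendKeys(password);
        driver.findElement(LOGIN_BUTTON).click();
    }
}

A test can then call new LoginPage(driver).open() and logInAs(...) without knowing anything about the page's HTML structure.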

Automating Accessibility Checks Using Playwright describes how we can write accessibility tests with Playwright by using the axe-core and axe-html-reporter libraries. We will learn how we can ensure that our web application doesn't have WCAG violations and generate a test report that identifies the WCAG violations found by our accessibility tests.

Operating System Independent Screenshot Testing with Playwright and Docker explains why visual tests are flaky if they are run on different operating systems and browsers, and describes how we can solve this problem by running our tests inside a Docker container. This approach improves the reliability of visual tests because they are always run by using the same operating system and browser.

Catch Missing await Calls in Playwright Tests with ESLint helps us to fix flaky Playwright tests before they are run. This blog post provides step-by-step instructions which describe how we can detect unhandled promises by leveraging the @typescript-eslint/no-floating-promises ESLint rule.

The post Clean Test Automation Monthly 4 / 2025 appeared first on Petri Kainulainen.

   
