How to override mock call expectations in table driven tests
PHP editor Xiaoxin introduces a way to override mock call expectations in table-driven tests. Table-driven testing is an effective technique: it drives tests from a dataset, which improves the maintainability and scalability of test code. In tests we often need to set mock call expectations to verify that the code under test behaves as expected. This article explains in detail how to override those expectations per test case in a table-driven test, helping developers write better unit tests.
Question content
When writing table-driven tests, I use some mocks generated by mockery and set method call expectations that depend on the data provided by each test case in the dataset. The problem I'm facing is that the mocked call always returns the result expected by the first test case, not the result defined for the test case currently executing.
```go
func (s *MyTestSuite) TestMySampleTest() {
	testCases := []struct {
		Name         string
		mockedResult int
		expected     int
	}{
		{
			Name:         "first case",
			mockedResult: 1,
			expected:     1,
		},
		{
			Name:         "second case",
			mockedResult: 2,
			expected:     2,
		},
	}

	for _, tc := range testCases {
		s.Run(tc.Name, func() {
			s.someMock.On("SomeMethodCall", mock.Anything).Return(tc.mockedResult)

			result := s.SUT.SomeMethodThatCallsTheMockedObe()

			s.Equal(tc.expected, result)
		})
	}
}
```
When I run this test, the second case fails because the result is 1 instead of the expected 2. The problem is that the mocked method returns 1 (the value set for the first test case) instead of 2 (the value set for the current test case).
Any idea how to solve this problem?
Workaround
This may not be the most elegant solution, and I'd welcome other approaches, but for now I've found the following workaround. It consists of generating a new mock for each subtest run by the table-driven test, so each subtest uses a completely fresh mock instance that carries no expectations from the previous subtest. Since I use testify/suite to organize and run my tests, this is as simple as manually calling the s.SetupTest() method in each subtest:
```go
// SetupTest is executed before every test is run; I instantiate the SUT and
// its dependencies here.
func (s *MyTestSuite) SetupTest() {
	// Instantiate the mock
	s.someMock = mocks.NewSomeMock(s.T())

	// Instantiate the SUT, injecting the mock through the constructor function
	s.SUT = NewSUT(s.someMock)
}

func (s *MyTestSuite) TestMySampleTest() {
	testCases := []struct {
		Name         string
		mockedResult int
		expected     int
	}{
		// test cases here
	}

	for _, tc := range testCases {
		s.Run(tc.Name, func() {
			// Manually call s.SetupTest() to create fresh mock instances in every
			// subtest. If we don't do this, the mock will always return the first
			// expectation set (the one registered for the first test case).
			s.SetupTest()

			// The test logic stays the same as before.
			s.someMock.On("SomeMethodCall", mock.Anything).Return(tc.mockedResult)

			result := s.SUT.SomeMethodThatCallsTheMockedObe()

			s.Equal(tc.expected, result)
		})
	}
}
```
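To see why the original version fails: testify's mock package (which mockery-generated mocks are built on) appends every `.On(...)` call to a list of expectations and, when the mocked method runs, serves the first expectation that still matches. A bare `.Return(...)` has no call limit, so the expectation registered by the first subtest matches forever and shadows the ones added later. The sketch below models that matching behavior with a hand-rolled mock; the names (`expectation`, `mock.on`) are invented for illustration and are not testify's actual implementation.

```go
package main

import "fmt"

// expectation models a stubbed call: the method name, a canned return
// value, and how many calls it may still serve (0 means unlimited, as
// with a bare .Return(); a positive count models .Once()/.Times(n)).
type expectation struct {
	method    string
	returns   int
	remaining int // 0 = unlimited
	used      int
}

// mock keeps expectations in registration order and, like testify's
// matcher, always serves the FIRST expectation that still applies.
type mock struct {
	expectations []*expectation
}

func (m *mock) on(method string, returns, times int) {
	m.expectations = append(m.expectations,
		&expectation{method: method, returns: returns, remaining: times})
}

func (m *mock) call(method string) int {
	for _, e := range m.expectations {
		if e.method != method {
			continue
		}
		if e.remaining == 0 || e.used < e.remaining {
			e.used++
			return e.returns
		}
	}
	panic("no matching expectation for " + method)
}

func main() {
	// Unlimited expectations, as in the failing test: the first one
	// registered shadows the second forever.
	m := &mock{}
	m.on("SomeMethodCall", 1, 0)
	m.on("SomeMethodCall", 2, 0)
	fmt.Println(m.call("SomeMethodCall")) // 1
	fmt.Println(m.call("SomeMethodCall")) // still 1, never 2

	// Expectations limited to one call each: the first is exhausted
	// after serving once, so the second takes over.
	m2 := &mock{}
	m2.on("SomeMethodCall", 1, 1)
	m2.on("SomeMethodCall", 2, 1)
	fmt.Println(m2.call("SomeMethodCall")) // 1
	fmt.Println(m2.call("SomeMethodCall")) // 2
}
```

This also suggests an alternative to re-running SetupTest: in real testify code, registering each expectation with `.Once()` limits it to a single call, so a stale expectation from an earlier subtest is exhausted and no longer shadows the one set by the current subtest.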
The above is the detailed content of How to override mock call expectations in table driven tests. For more information, please follow other related articles on the PHP Chinese website!
