Micronaut Testing
Making our example more real
In my previous post, Bazel and Micronaut – An unlikely beautiful match, we explored how to get Bazel, Kotlin, and all the Micronaut annotation processors working happily together, and when it was complete, it was a GREAT match. But there is a glaring omission in that post: there were no tests. Both Bazel and Micronaut are very proud of their testing heritage, and rightly so; both mention testing in their main descriptions. To do this new match any sort of justice, we have to have testing. For those of you who have seen one of our presentations or read the insights, you know Dilbert has to have something to say about me putting out the first post with "production code" and no tests.
The Challenges
The biggest challenge I faced was figuring out how to ensure the proper Micronaut annotation processors ran against the test classes. The second biggest challenge was getting this to work with JUnit 5, which the standard Bazel examples do not cover. Luckily, on this front, the JUnit team put together a nice example to base things on. Of course, it was for Java and had nothing to do with Micronaut, but it gave me some hints. Micronaut also provides examples for Gradle and Maven, which offered a few more.
The Solution
After some experimenting, I figured out that the kt_jvm_test rule was the best option for our code base. The real challenge was figuring out the correct arguments for the rule.
```
kt_jvm_test(
    name = "micronaut_tests",
    main_class = "org.junit.platform.console.ConsoleLauncher",
    args = [
        "--select-package=com.sumglobal",
    ],
    srcs = glob(["src/test/kotlin/**/*.kt"]),
```
In the code snippet above, main_class is set to the JUnit Platform ConsoleLauncher so Bazel knows how to run the test harness, the args attribute tells the test framework which packages and sub-packages to scan for *Test.kt files, and finally srcs points the rule at the source files to compile for testing.
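If you want to narrow a run to a single test class rather than a whole package, ConsoleLauncher also accepts class-level selectors. A sketch, assuming the test class shown later in this post lives in the com.sumglobal package (adjust the fully qualified name to your own code):

```
args = [
    "--select-class=com.sumglobal.HelloBazelMicronautControllerTest",
],
```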
In Micronaut, we might want to specify a specific settings file or logging configuration. In our case we didn't, but if you did, you would add it like the snippet below.
```
resources = glob(["src/test/resources/**/*.xml", "src/test/resources/**/*.yml"]),
```
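As an illustration, a minimal src/test/resources/logback.xml that this glob would pick up might look like the following. This is a hypothetical file, not part of the example repository; it simply quiets test logging down to a console appender:

```xml
<configuration>
    <!-- Send all test logging to the console -->
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>
```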
To compile our tests and ensure all the Micronaut annotation processors are run (via kapt for Kotlin), we need to bring in some dependencies.
```
deps = [
    ":app_lib",
    ":micronaut_lib",
    # Need this to be sure the annotation processor is called on our test code...
    "@maven//:io_micronaut_test_micronaut_test_junit5",
    # Bazel requires this; it is brought in transitively in the Gradle build, but called directly in the junit5 impl above
    "@maven//:io_micronaut_test_micronaut_test_core",
    "@maven//:org_junit_jupiter_junit_jupiter_api",
    "@maven//:org_junit_jupiter_junit_jupiter_engine",
    "@maven//:org_junit_jupiter_junit_jupiter_params",
    "@maven//:org_junit_platform_junit_platform_console",
],
```
If you read the previous post, you are familiar with what :micronaut_lib is doing; this is where the magic happens that makes Micronaut so awesome. If you skipped ahead, go back and read the previous post (you know who you are ;)). The inclusion of the two micronaut_test dependencies enables the Micronaut testing magic, and it also illustrates how dependencies can be hidden by Maven and Gradle. As a side note, this is one of the primary reasons to use Bazel over either of those more popular JVM build tools.
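Because Bazel makes you declare dependencies explicitly, it can also show you the full dependency graph instead of leaving transitive edges hidden. A sketch of the query, assuming the target name used in this post:

```
bazel query "deps(//:micronaut_tests)"
```

This prints every target, including external Maven artifacts, that the test target pulls in, which is a handy way to audit exactly what ends up on your test classpath.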
```
"@maven//:io_micronaut_test_micronaut_test_junit5",
"@maven//:io_micronaut_test_micronaut_test_core",  # Bazel requires this; it is brought in transitively in the Gradle build, but called directly in the junit5 impl above
```
The bottom four dependencies are how we get the JUnit 5 Jupiter framework into the test compile. Once we have it compiling, we still need to define our runtime dependencies.
```
runtime_deps = [
    ":app_lib",
    "@maven//:org_junit_jupiter_junit_jupiter_engine",
    "@maven//:org_junit_platform_junit_platform_console",
    "@maven//:org_junit_jupiter_junit_jupiter_api",
    "@maven//:io_micronaut_micronaut_runtime",
],
```
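Putting all of the pieces from above together, the complete rule looks roughly like this. This is a consolidated sketch of the snippets in this post, not copied verbatim from the repository:

```
kt_jvm_test(
    name = "micronaut_tests",
    main_class = "org.junit.platform.console.ConsoleLauncher",
    args = ["--select-package=com.sumglobal"],
    srcs = glob(["src/test/kotlin/**/*.kt"]),
    resources = glob(["src/test/resources/**/*.xml", "src/test/resources/**/*.yml"]),
    deps = [
        ":app_lib",
        ":micronaut_lib",
        "@maven//:io_micronaut_test_micronaut_test_junit5",
        "@maven//:io_micronaut_test_micronaut_test_core",
        "@maven//:org_junit_jupiter_junit_jupiter_api",
        "@maven//:org_junit_jupiter_junit_jupiter_engine",
        "@maven//:org_junit_jupiter_junit_jupiter_params",
        "@maven//:org_junit_platform_junit_platform_console",
    ],
    runtime_deps = [
        ":app_lib",
        "@maven//:org_junit_jupiter_junit_jupiter_engine",
        "@maven//:org_junit_platform_junit_platform_console",
        "@maven//:org_junit_jupiter_junit_jupiter_api",
        "@maven//:io_micronaut_micronaut_runtime",
    ],
)
```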
The test code itself is just a simple contract test for the services.
```kotlin
@MicronautTest()
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
open class HelloBazelMicronautControllerTest {

    @Inject
    lateinit var embeddedServer: EmbeddedServer

    lateinit var client: HttpClient

    @Inject
    lateinit var ctx: ApplicationContext

    @BeforeAll
    fun beforeAll() {
        client = HttpClient.create(embeddedServer.url)
    }

    /**
     * This is a contract test for the "hello" endpoint
     */
    @Test
    fun hello() {
        val request = HttpRequest.GET<Any>("/bazel-mn/hello")
        val rsp = client.toBlocking().exchange(request, String::class.java)
        assertTrue(rsp.body.isPresent)
        val root = rsp.body.get()
        print("Message received is $root")
        Assertions.assertEquals("Hello from Bazel/Micronaut example!", root)
    }

    /**
     * This is a contract test for the "goodbye" endpoint
     */
    @Test
    fun goodbye() {
        val request = HttpRequest.GET<Any>("/bazel-mn/goodbye")
        val rsp = client.toBlocking().exchange(request, String::class.java)
        assertTrue(rsp.body.isPresent)
        val root = rsp.body.get()
        print("Message received is $root")
        Assertions.assertEquals("Goodbye from Bazel/Micronaut example.", root)
    }

    @AfterAll
    fun cleanup() {
        client.close()
    }
}
```
To run the tests, you can call the specific test rule with bazel test //:micronaut_tests, or you can run all the tests with bazel test //..., which in our simple case does exactly the same thing. One of the nice things about Bazel is its caching and work avoidance: if something didn't change, it doesn't redo the work. Since I have run my tests several times, I get the following results:
```
>bazel test //...
INFO: Analyzed 8 targets (0 packages loaded, 44 targets configured).
INFO: Found 7 targets and 1 test target...
INFO: Elapsed time: 0.371s, Critical Path: 0.01s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
//:micronaut_tests (cached) PASSED in 2.5s
Executed 0 out of 1 test: 1 test passes.
INFO: Build completed successfully, 1 total action
```
To find your results, look in the bazel-testlogs directory under the name you gave the rule, in our case micronaut_tests. The file is test.xml. Our test summary looks like this (pulled from the test.xml file):
```
Test run finished after 1715 ms
[         2 containers found      ]
[         0 containers skipped    ]
[         2 containers started    ]
[         0 containers aborted    ]
[         2 containers successful ]
[         0 containers failed     ]
[         2 tests found           ]
[         0 tests skipped         ]
[         2 tests started         ]
[         0 tests aborted         ]
[         2 tests successful      ]
[         0 tests failed          ]
```
You can find all the code for this and the first post in our GitHub repository: https://github.com/SUMGlobal/bazel-micronaut-example. We would love to hear what you think of the post. Leave a comment below.
mike mancini says
Nice article. Are you able to generate test coverage reports using rules_kotlin? Last I checked the answer was no.