Tuesday, November 18, 2025

🚀 Using Testcontainers for Real Integration Tests (Spring Boot + JUnit 5)

 Modern applications rely on multiple external systems — relational databases, messaging services, cloud APIs, and more. Unit tests alone can’t reliably validate these integrations.

Testcontainers has become the de-facto solution for running lightweight, disposable Docker containers directly from your JUnit 5 tests. This enables full integration testing without managing local databases or emulators manually.

In this post, I'll walk you through a generic, reusable setup for:

✔ Spring Boot (3.x)
✔ JUnit 5
✔ PostgreSQL & Oracle containers
✔ Google Pub/Sub emulator
✔ Multi-datasource testing
✔ GitHub Actions or any CI environment

 


1️⃣ Why Testcontainers?

Testcontainers lets you spin up real infrastructure inside your tests:

  • Real PostgreSQL and Oracle Free instances

  • Real Pub/Sub emulator

  • Real networking

  • Real JDBC connections

All containers are created and torn down automatically → zero manual infrastructure.
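To get a feel for the API before the Spring Boot wiring, here is a minimal, self-contained sketch; the image tag and class name are illustrative:

import java.sql.Connection;
import java.sql.DriverManager;

import org.testcontainers.containers.PostgreSQLContainer;

public class QuickstartDemo {
    public static void main(String[] args) throws Exception {
        // Start a throwaway PostgreSQL container; close() removes it
        try (PostgreSQLContainer<?> pg = new PostgreSQLContainer<>("postgres:17-alpine")) {
            pg.start();
            // A real JDBC connection against a real database
            try (Connection conn = DriverManager.getConnection(
                    pg.getJdbcUrl(), pg.getUsername(), pg.getPassword())) {
                System.out.println(conn.getMetaData().getDatabaseProductVersion());
            }
        }
    }
}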

 It works on:

  • Docker Desktop

  • Podman on Windows (podman machine start)

  • Linux / macOS

  

2️⃣ Dependencies (pom.xml)

Add the core Testcontainers modules. No version tags are needed if you inherit from the Spring Boot 3.x parent, whose dependency management already imports the Testcontainers BOM; otherwise, import org.testcontainers:testcontainers-bom yourself:

<!-- Core Testcontainers -->
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>testcontainers</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>junit-jupiter</artifactId>
    <scope>test</scope>
</dependency>

<!-- Database Containers -->
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>postgresql</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>oracle-free</artifactId>
    <scope>test</scope>
</dependency>

<!-- GCP Emulator -->
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>gcloud</artifactId>
    <scope>test</scope>
</dependency>

 

3️⃣ Test Profile (application-test.yml)

Create a fully isolated test profile:

spring:
  application:
    name: "Integration Test Suite"
  cloud:
    gcp:
      pubsub:
        enabled: true
        emulator-host: localhost:8085
  datasource:
    driver-class-name: oracle.jdbc.OracleDriver
    maximum-pool-size: 5
  jpa:
    hibernate:
      ddl-auto: create-drop

second-datasource:
  driver-class-name: org.postgresql.Driver

server:
  port: 0

logging:
  request:
    shouldLog: true
    includePayload: true
 
 Adapt the property names to your own project. 
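For the second datasource, one possible wiring (an assumption, not part of the original post) is to bind the second-datasource.* properties straight onto a Hikari pool. This is also why the base class below registers second-datasource.jdbc-url rather than url: jdbc-url is Hikari's property name.

import com.zaxxer.hikari.HikariDataSource;

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SecondDataSourceConfig {

    // Binds second-datasource.jdbc-url, .username, .password and
    // .driver-class-name onto the Hikari pool's setters
    @Bean
    @ConfigurationProperties("second-datasource")
    public HikariDataSource secondDataSource() {
        return new HikariDataSource();
    }
}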

 

4️⃣ Base Testcontainers Class

This reusable class bootstraps all containers once per test suite:

@ActiveProfiles("test")
@Testcontainers
public abstract class BaseIntegrationTest {

    // Oracle Free — the primary datasource
    @Container
    static final OracleContainer ORACLE =
            new OracleContainer("gvenzl/oracle-free:23-slim-faststart")
                    .withUsername("testuser")
                    .withPassword("testpass")
                    .withDatabaseName("testdb");

    // PostgreSQL — the second datasource
    @Container
    static final PostgreSQLContainer<?> POSTGRES =
            new PostgreSQLContainer<>("postgres:17-alpine")
                    .withDatabaseName("testdb")
                    .withUsername("testuser")
                    .withPassword("testpass");

    // Google Cloud Pub/Sub emulator
    @Container
    static final GenericContainer<?> PUBSUB =
            new GenericContainer<>("gcr.io/google.com/cloudsdktool/google-cloud-cli:emulators")
                    .withCommand("gcloud", "beta", "emulators", "pubsub", "start",
                            "--host-port=0.0.0.0:8085")
                    .withExposedPorts(8085);

    @DynamicPropertySource
    static void registerProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.datasource.url", ORACLE::getJdbcUrl);
        registry.add("spring.datasource.username", ORACLE::getUsername);
        registry.add("spring.datasource.password", ORACLE::getPassword);

        // Matches the second-datasource prefix from application-test.yml
        registry.add("second-datasource.jdbc-url", POSTGRES::getJdbcUrl);
        registry.add("second-datasource.username", POSTGRES::getUsername);
        registry.add("second-datasource.password", POSTGRES::getPassword);

        // The emulator's internal port 8085 is mapped to a random host port
        String emulatorHost = PUBSUB.getHost() + ":" + PUBSUB.getMappedPort(8085);
        registry.add("spring.cloud.gcp.pubsub.emulator-host", () -> emulatorHost);
    }
}
 

Every integration test now has:

  • A running Oracle DB

  • A running PostgreSQL DB

  • A running Pub/Sub emulator

No external services. No mocks.
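As an aside: on Spring Boot 3.1+, @ServiceConnection can replace the @DynamicPropertySource entries for supported container types. A hedged sketch for a single primary datasource:

import org.springframework.boot.testcontainers.service.connection.ServiceConnection;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@Testcontainers
abstract class SingleDatasourceIntegrationTest {

    // Spring Boot detects the container type and registers the
    // spring.datasource.* properties itself
    @Container
    @ServiceConnection
    static final PostgreSQLContainer<?> POSTGRES =
            new PostgreSQLContainer<>("postgres:17-alpine");
}

Custom prefixes like second-datasource.* still need @DynamicPropertySource, which is why the base class above registers them explicitly.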

5️⃣ Example Integration Test

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class UserEventServiceIT extends BaseIntegrationTest {

    @Autowired
    private UserEventService service;

    @Autowired
    private UserEventRepository repository;

    @BeforeEach
    void before() {
        repository.deleteAll();
    }

    @Test
    @DisplayName("Should persist event correctly")
    void testSaveEvent() {
        service.saveEvent("test-id", "EventType");

        Optional<UserEvent> saved =
                repository.findFirstByEventIdOrderByCreatedDesc("test-id");

        assertThat(saved).isPresent();
        assertThat(saved.get().getEventId()).isEqualTo("test-id");
    }
}

Readable, deterministic, and powered by real infrastructure.

 

6️⃣ Optional: .testcontainers.properties

At the project root, or in ~/.testcontainers.properties in your home directory:

docker.client.strategy=org.testcontainers.dockerclient.NpipeSocketClientProviderStrategy
testcontainers.reuse.enable=true

Useful for Windows + Podman, and for faster local test cycles. Note that container reuse also requires calling .withReuse(true) on each container you want to keep alive between runs.

7️⃣ Dockerfile Tip: Skipping Tests

Integration tests need a Docker daemon, which typically isn't available inside an image build, so skip them when building the Spring Boot image:

RUN mvn clean package -DskipTests=true -DskipITs=true -Dmaven.test.skip=true

 

8️⃣ CI/CD Considerations

In GitHub Actions or other CI systems (a minimal workflow sketch follows this list):

  • Testcontainers pulls the required container images automatically

  • You don’t need database dumps or SQL files
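A minimal workflow sketch, assuming a Maven build; action versions and the Java version are illustrative. The ubuntu-latest runner already provides a running Docker daemon, which is all Testcontainers needs:

name: integration-tests
on: [push, pull_request]

jobs:
  verify:
    runs-on: ubuntu-latest   # ships with a running Docker daemon
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '21'
          cache: maven
      - run: mvn -B verify   # runs unit + integration tests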

🔚 Final Thoughts

Integration testing is often neglected due to infrastructure complexity.

With Testcontainers, this barrier disappears.

You get:

✔ Realistic tests
✔ Production-like behavior
✔ Zero manual environment setup
✔ Repeatability across machines and CI

Whether your application uses relational databases, cloud services, messaging systems, or all of them — Testcontainers makes integration testing simple, fast, and reliable.

 





Sunday, October 5, 2025

🧩 Building a Multi-Datasource Go Application

In many enterprise systems, you may need to work with multiple databases — for example, customer data in PostgreSQL, legacy records in Oracle, and transactional data in MySQL. Managing these connections efficiently while keeping the code clean can be challenging.





This post introduces the Multi-Datasource Go project — a Go-based demo showing how to connect to MySQL, PostgreSQL, and Oracle XE within one modular application.


⚙️ Project Overview

The project follows a clean architecture pattern, ensuring clear separation between layers:

  • Handlers → handle HTTP routes using Gin
  • Services → contain business logic
  • Repositories → handle database operations per datasource
  • Models → define entity structures

Each datasource is completely independent, configured through a simple YAML file, and initialized at runtime with connection pooling.


🧱 Features

🔌 Multiple database connections (MySQL, PostgreSQL, Oracle)

🌐 RESTful API with Gin

🧠 Domain-driven design with repository pattern

📝 Configurable via YAML and Viper

🐳 Docker Compose setup for quick local testing

⚡ Built-in health checks and auto table creation

🏗️ Clean architecture (domain, repository, service layers)


🧰 Tech Stack

Layer           Technology
Web Framework   Gin
Configuration   Viper
Databases       MySQL • PostgreSQL • Oracle XE
Drivers         go-sql-driver/mysql, pgx, go-ora
Environment     Go 1.25.1 • Docker Compose


🗂️ Project Structure

multi-datasource-go/
├─ cmd/api/main.go    # Application entry point
├─ internal/config    # Configuration loader (Viper)
├─ internal/db        # Database connection pools
├─ internal/http      # API routes and handlers
├─ internal/domain    # Entities, repos, services
└─ internal/repo      # Database-specific repositories

🚀 Getting Started

1. Clone the Repository

git clone https://github.com/HenryXiloj/golang-demos.git

cd golang-demos/multi-datasource-go

2. Start Databases


docker compose -f docker-compose-multiple-db.yml up -d


3. Run the Application

go run cmd/api/main.go


🧪 Test the APIs

# MySQL
curl -X POST http://localhost:9000/api/v1/users \
  -H "Content-Type: application/json" \
  -d '{"name":"Henry","lastName":"x"}'

# PostgreSQL
curl -X POST http://localhost:9000/api/v2/companies \
  -H "Content-Type: application/json" \
  -d '{"name":"Test"}'

# Oracle
curl -X POST http://localhost:9000/api/v3/brands \
  -H "Content-Type: application/json" \
  -d '{"name":"Acme"}'

You’ll see all three databases responding independently with auto-created tables.

You can explore the full source code here: 👉 multi-datasource-go






Sunday, September 7, 2025

Event-Driven Architecture on GCP with Pub/Sub & Spring Boot

Event‑Driven Architecture (EDA) decouples producers from consumers using an event broker. You get independent scaling, resilience, and faster iteration.


1) Why EDA?

  • Loose coupling: Producers don’t know who consumes; consumers don’t care who produced.
  • Elasticity: Scale consumers independently from producers.
  • Resilience: Retry, backoff, DLQs mean failures don’t cascade.
  • Speed: Teams ship features without synchronous dependencies.


2) Core building blocks (cloud‑agnostic)

  • Event – immutable record of something that happened, with a unique ID and timestamp (see the sketch after this list).
  • Producer – publishes events (APIs, batch jobs, scheduled triggers).
  • Broker – routes events (Pub/Sub, Kafka, RabbitMQ).
  • Subscription/Queue – delivery pipeline for a consumer.
  • Consumer – processes events (microservice, function, job).
  • DLQ – dead‑letter queue for poison messages.
  • Observability – logs, metrics, traces, payload samples.
  • Idempotency – ability to handle the same event more than once safely.
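To make the first building block concrete, a minimal sketch of an event envelope; the field names are assumptions, not a standard:

import java.time.Instant;
import java.util.UUID;

// Immutable event: uniquely identified, timestamped, and carrying a
// type + version so schemas can evolve (see section 7 on governance)
public record Event(String eventId, Instant occurredAt,
                    String type, int version, String payload) {

    public static Event of(String type, int version, String payload) {
        return new Event(UUID.randomUUID().toString(), Instant.now(),
                type, version, payload);
    }
}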


3) Two reference patterns on GCP (but generic concepts)

Pattern A: Cloud Scheduler → Pub/Sub → Spring Boot on GKE

When you need cron‑like triggers (hourly, daily) to kick off a pipeline or poll external systems.

+--------------+      +---------+      +------------------+
| Cloud        | ---> | Pub/Sub | ---> | Spring Boot on   |
| Scheduler    |      |  Topic  |      | GKE (consumer)   |
+--------------+      +---------+      +------------------+
        (produces events)            (subscribes & processes) 

Why this? Simple, cost‑effective, horizontally scalable consumers, works great for batch/stream hybrids.

Notes:

  • Pub/Sub guarantees at‑least‑once delivery. Design consumers to be idempotent.
  • Use message ordering keys only if you truly need ordering; it reduces parallelism.
  • Use DLQ subscriptions with retry policies.


Pattern B: Cloud Scheduler → Workflows → Pub/Sub → Spring Boot on GKE/Cloud Run

Add orchestration (branching, retries, fan‑out, calling APIs) before publishing an event.

+--------------+   +-----------+   +---------+   +----------------------+
| Cloud        |-->| Workflows |-->| Pub/Sub |-->| Spring Boot on GKE   |
| Scheduler    |   | (logic)   |   |  Topic  |   | or Cloud Run         |
+--------------+   +-----------+   +---------+   +----------------------+

Why this? Centralize control flow, enrich payloads, call external APIs, then publish. Swap the consumer with Cloud Run when you want scale‑to‑zero and fast cold starts for lightweight handlers.


4) Consumer code (Spring Boot, manual ack, idempotency)

Below is a trimmed version of a working setup using spring‑cloud‑gcp‑starter‑pubsub and Spring Integration — conceptually similar in any queue/broker.

<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>spring-cloud-gcp-starter-pubsub</artifactId>
  <version>7.3.0</version>
</dependency>
<dependency>
  <groupId>org.springframework.integration</groupId>
  <artifactId>spring-integration-core</artifactId>
</dependency> 
@Slf4j
@Configuration
public class PubSubApplication {

  private final String topicName = "my_topic_test";
  private final String subscriptionName = "my_topic_test_sub";
  private final String projectId = "my_project_id";

  @Bean
  public PubSubConfiguration pubSubConfiguration() { return new PubSubConfiguration(); }

  @Bean
  public PubSubTemplate customPubSubTemplate(CredentialsProvider credentialsProvider) {
    GcpProjectIdProvider localProjectIdProvider = () -> projectId;
    PubSubConfiguration cfg = new PubSubConfiguration();
    cfg.initialize(projectId);
    DefaultPublisherFactory pub = new DefaultPublisherFactory(localProjectIdProvider);
    pub.setCredentialsProvider(credentialsProvider);
    DefaultSubscriberFactory sub = new DefaultSubscriberFactory(localProjectIdProvider, cfg);
    sub.setCredentialsProvider(credentialsProvider);
    return new PubSubTemplate(pub, sub);
  }

  @Bean
  public PubSubInboundChannelAdapter messageChannelAdapter(
      @Qualifier("pubsubInputChannel") MessageChannel inputChannel,
      PubSubTemplate pubSubTemplate) {
    var adapter = new PubSubInboundChannelAdapter(pubSubTemplate, subscriptionName);
    adapter.setOutputChannel(inputChannel);
    adapter.setAckMode(AckMode.MANUAL);
    return adapter;
  }

  @Bean
  public MessageChannel pubsubInputChannel() { return new DirectChannel(); }

  @Bean
  @ServiceActivator(inputChannel = "pubsubInputChannel")
  public MessageHandler messageReceiver() {
    return message -> {
      var payload = new String((byte[]) message.getPayload());
      var originalMessage = message.getHeaders()
        .get(GcpPubSubHeaders.ORIGINAL_MESSAGE, BasicAcknowledgeablePubsubMessage.class);
      var msgId = originalMessage.getPubsubMessage().getMessageId();

      // 1) Idempotency check (cache/DB): skip if already processed
      if (alreadyProcessed(msgId)) {
        log.warn("Duplicate delivery for messageId={}, ignoring.", msgId);
        originalMessage.ack();
        return;
      }

      try {
        log.info("Processing messageId={} payload={} ", msgId, payload);
        processBusinessLogic(payload);
        markProcessed(msgId);
        originalMessage.ack();
      } catch (Exception e) {
        log.error("Processing failed for messageId={}", msgId, e);
        // no ack: let Pub/Sub redeliver per retry policy -> DLQ if max reached
      }
    };
  }

  // producer (optional)
  @Bean
  @ServiceActivator(inputChannel = "pubsubOutputChannel")
  public MessageHandler messageSender(PubSubTemplate pubsubTemplate) {
    return new PubSubMessageHandler(pubsubTemplate, topicName);
  }

  @MessagingGateway(defaultRequestChannel = "pubsubOutputChannel")
  public interface PubsubOutboundGateway { void sendToPubsub(String text); }

  private boolean alreadyProcessed(String msgId) { /* Redis/DB/Cache lookup */ return false; }
  private void markProcessed(String msgId) { /* Persist msgId */ }
  private void processBusinessLogic(String payload) { /* your logic */ }
} 

Idempotency options:

  • Redis with TTL (fast, good for short windows).
  • Database unique index on message_id (strong guarantee; add upsert).
  • Hashing (content hash + window) when brokers don’t supply message IDs.
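As a concrete sketch of the database option (PostgreSQL ON CONFLICT syntax; the table name and wiring are assumptions): the unique index turns the insert itself into the idempotency check.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class MessageDeduplicator {

    private final DataSource dataSource;

    public MessageDeduplicator(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    /** Returns true only for the first delivery of a given message ID. */
    public boolean markIfFirstDelivery(String messageId) throws SQLException {
        try (Connection c = dataSource.getConnection();
             PreparedStatement ps = c.prepareStatement(
                 // requires a unique index: processed_messages(message_id)
                 "INSERT INTO processed_messages (message_id) VALUES (?) "
                 + "ON CONFLICT (message_id) DO NOTHING")) {
            ps.setString(1, messageId);
            return ps.executeUpdate() == 1; // 0 updated rows = duplicate
        }
    }
}

Plugged into the consumer above, alreadyProcessed(msgId) becomes !markIfFirstDelivery(msgId).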


5) Designing for at‑least‑once delivery

Reality: Most brokers deliver at least once. Embrace it.

  • Make handlers idempotent (no side effects on duplicates). Examples: upsert by natural key, compare‑and‑swap, store message_id.
  • Deduplicate at write: Use DB constraints; on conflict do nothing/update.
  • Outbox pattern: Write to your DB + outbox table in one transaction; a relay publishes reliably (see the relay sketch after this list).
  • Poison messages: Route to DLQ with context (trace ID, payload sample, last error). Build a replay tool.
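A minimal sketch of the relay half of the outbox pattern, reusing the PubsubOutboundGateway defined earlier; the outbox table, polling interval, and FOR UPDATE SKIP LOCKED (PostgreSQL) locking are assumptions:

import java.util.List;
import java.util.Map;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

@Component
public class OutboxRelay {

    private final JdbcTemplate jdbc;
    private final PubSubApplication.PubsubOutboundGateway gateway;

    public OutboxRelay(JdbcTemplate jdbc,
                       PubSubApplication.PubsubOutboundGateway gateway) {
        this.jdbc = jdbc;
        this.gateway = gateway;
    }

    @Scheduled(fixedDelay = 1000)
    @Transactional
    public void relay() {
        // Claim a batch so concurrent relay instances don't double-publish
        List<Map<String, Object>> rows = jdbc.queryForList(
                "SELECT id, payload FROM outbox WHERE published = false "
                + "ORDER BY id LIMIT 100 FOR UPDATE SKIP LOCKED");
        for (Map<String, Object> row : rows) {
            gateway.sendToPubsub((String) row.get("payload"));
            jdbc.update("UPDATE outbox SET published = true WHERE id = ?",
                    row.get("id"));
        }
    }
}

If the transaction rolls back after a successful publish, the event is simply sent again on the next pass, which is fine because consumers are idempotent (section 5).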


6) Scaling & delivery semantics



Rule of thumb: Prefer Cloud Run for stateless, spiky, HTTP‑triggered consumers. Prefer GKE for heavy runtimes, sidecars, or advanced networking.


7) Security & governance

  • Least privilege service accounts; avoid long‑lived keys.
  • Schema governance (JSON Schema/OpenAPI/Avro). Version events with type + version.
  • PII controls: Mask in logs; encrypt at rest; restrict who can subscribe.
  • Contracts: Document event types and SLAs (latency, retention, retry policy).

