Sunday, April 28, 2024

Provisioning Cloud SQL with Private Service Connect Using Terraform & Accessing from Cloud Run with Spring Boot

In this post, we'll explore how to provision a Cloud SQL instance with Private Service Connect (PSC) connectivity using Terraform and then access it from a Spring Boot application deployed on Google Cloud Run. We'll use Terraform to create the necessary infrastructure and configure the networking components. Then, we'll build and deploy a Spring Boot application that connects to this Cloud SQL instance using the appropriate connectivity method.

Enabled APIs

The following APIs need to be enabled for this project:

  • Cloud SQL API
  • Cloud Run API
  • Cloud Build API
  • Artifact Registry API
  • Cloud Logging API
  • Serverless VPC Access API
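These APIs can also be enabled from the Terraform project itself. A minimal sketch using google_project_service (the resource name and service list mapping are assumptions, not part of the original project):

```hcl
# Enable the required APIs; the list mirrors the bullet points above.
resource "google_project_service" "apis" {
  for_each = toset([
    "sqladmin.googleapis.com",         # Cloud SQL Admin API
    "run.googleapis.com",              # Cloud Run API
    "cloudbuild.googleapis.com",       # Cloud Build API
    "artifactregistry.googleapis.com", # Artifact Registry API
    "logging.googleapis.com",          # Cloud Logging API
    "vpcaccess.googleapis.com"         # Serverless VPC Access API
  ])

  project            = var.project_id
  service            = each.key
  disable_on_destroy = false
}
```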

Terraform Project Overview

The Terraform project sets up the following resources on Google Cloud Platform (GCP):

Virtual Private Cloud (VPC) Network and Subnets:

  • A VPC network named nw1-vpc is created, along with two subnets (nw1-vpc-sub1-us-central1 and nw1-vpc-sub3-us-west1) in different regions.

Cloud SQL Instances:

  • Private Service Connect (PSC) Instance: A Cloud SQL instance named psc-instance is created with Private Service Connect enabled, allowing secure access from Google Cloud services and resources.

Networking Components:

  • Firewall rules are defined to control access to the VPC network.
  • A NAT gateway is configured to allow instances in the VPC network to access the internet.
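These networking pieces can be sketched in Terraform roughly as follows (resource names and the internal CIDR range are illustrative, not taken from the original project):

```hcl
# Example firewall rule allowing internal TCP traffic inside the VPC.
resource "google_compute_firewall" "allow-internal" {
  name    = "nw1-vpc-allow-internal"
  network = google_compute_network.nw1-vpc.id

  allow {
    protocol = "tcp"
    ports    = ["0-65535"]
  }
  source_ranges = ["10.10.0.0/16"] # example internal range
}

# Cloud Router + Cloud NAT so instances without external IPs can reach the internet.
resource "google_compute_router" "nw1-router" {
  name    = "nw1-router"
  region  = var.region
  network = google_compute_network.nw1-vpc.id
}

resource "google_compute_router_nat" "nw1-nat" {
  name                               = "nw1-nat"
  router                             = google_compute_router.nw1-router.name
  region                             = var.region
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}
```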

Service Account and IAM Roles:

  • A service account named cloudsql-service-account-id is created and granted the necessary roles for accessing Cloud SQL instances.

Compare Direct VPC egress and VPC connectors

Cloud Run offers two methods for sending egress (outbound) traffic from a Cloud Run service or job to a VPC network:

VPC Connector:

  • You can enable your Cloud Run service or job to send traffic to a VPC network by configuring a Serverless VPC Access connector. In this project, a connector named private-cloud-sql is provisioned to let Cloud Run reach the Private Service Connect endpoint.

Direct VPC egress:

  • You can enable your Cloud Run service or job to send traffic to a VPC network by using Direct VPC egress with no Serverless VPC Access connector required.

Terraform Project

  • Provider Configuration

In the provider configuration, we define the required providers and configure the Google Cloud provider with the project ID, region, and zone:

provider "google" {
  project = var.project_id
  region  = var.region
  zone    = var.zone
}

The project_id, region, and zone variables are declared as Terraform input variables and assigned values in the terraform.tfvars file.
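A minimal sketch of the matching variable declarations (the defaults shown here are illustrative; the regions follow the subnet names used below):

```hcl
variable "project_id" {
  description = "GCP project ID"
  type        = string
}

variable "region" {
  description = "Primary region"
  type        = string
  default     = "us-central1"
}

variable "sec_region" {
  description = "Secondary region"
  type        = string
  default     = "us-west1"
}

variable "zone" {
  description = "Primary zone"
  type        = string
  default     = "us-central1-a"
}
```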

  • Virtual Private Cloud (VPC) Network and Subnets

Next, we create the VPC network and subnets:

resource "google_compute_network" "nw1-vpc" {
  project                 = var.project_id
  name                    = "nw1-vpc"
  auto_create_subnetworks = false
  mtu                     = 1460
}

resource "google_compute_subnetwork" "nw1-subnet1" {
  name                     = "nw1-vpc-sub1-${var.region}"
  network                  = google_compute_network.nw1-vpc.id
  ip_cidr_range            = "10.10.1.0/24" # example range
  region                   = var.region
  private_ip_google_access = true
}

resource "google_compute_subnetwork" "nw1-subnet2" {
  name                     = "nw1-vpc-sub3-us-west1"
  network                  = google_compute_network.nw1-vpc.id
  ip_cidr_range            = "10.10.2.0/24" # example range
  region                   = var.sec_region
  private_ip_google_access = true
}

  • Private Service Connect (PSC) Instance

In the Cloud SQL configuration, we create the instance with the Private Service Connect option:

resource "google_sql_database_instance" "psc_instance" {
  project          = var.project_id
  name             = "psc-instance"
  region           = var.region
  database_version = "POSTGRES_15"

  deletion_protection = false

  settings {
    tier = "db-f1-micro"
    ip_configuration {
      psc_config {
        psc_enabled               = true
        allowed_consumer_projects = ["<PROJECT_ID>"]
      }
      ipv4_enabled = false
    }
    availability_type = "REGIONAL"
  }
}

  • Networking Components

For private services access, we reserve an internal address range and create the service networking connection:

resource "google_compute_global_address" "private_ip_address" {
  name          = "private-ip-address"
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 16
  network       = google_compute_network.nw1-vpc.id
}

resource "google_service_networking_connection" "private_vpc_connection" {
  network                 = google_compute_network.nw1-vpc.id
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = [google_compute_global_address.private_ip_address.name]
}

  • Service Account and IAM Roles

We then create the service account and assign the necessary roles:

resource "google_service_account" "cloudsql_service_account" {
  project      = var.project_id
  account_id   = "cloudsql-service-account-id"
  display_name = "Service Account for Cloud SQL"
}

resource "google_project_iam_member" "member-role" {
  depends_on = [google_service_account.cloudsql_service_account]

  for_each = toset([
    "roles/cloudsql.client", # example role; grant whatever your workload needs
  ])
  role    = each.key
  project = var.project_id
  member  = "serviceAccount:${google_service_account.cloudsql_service_account.email}"
}

  • VPC Connector

Finally, we create the VPC Connector for Private Service Connect access:

#**************************** Equivalent gcloud command
/* gcloud compute networks vpc-access connectors create private-cloud-sql \
--region us-central1 \
--network nw1-vpc \
--range "10.8.0.0/28" \
--machine-type e2-micro \
--project <PROJECT-ID> */
resource "google_vpc_access_connector" "private-cloud-sql" {
  project       = var.project_id
  name          = "private-cloud-sql"
  region        = var.region
  network       = google_compute_network.nw1-vpc.id
  machine_type  = "e2-micro"
  ip_cidr_range = "10.8.0.0/28" # connectors require an unused /28 range
}

Private Service Connect Configuration

You can reserve an internal IP address for the Private Service Connect endpoint and create an endpoint with that address. To create the endpoint, you need the service attachment URI and the projects that are allowed for the instance.

To reserve an internal IP address for the Private Service Connect endpoint, use the gcloud compute addresses create command:

/* gcloud compute addresses create internal-address \
--project=<PROJECT-ID> \
--region=us-central1 \
--subnet=nw1-vpc-sub1-us-central1 */

resource "google_compute_address" "internal_address" {
  project      = var.project_id
  name         = "internal-address"
  region       = var.region
  address_type = "INTERNAL"
  address      = "10.10.1.10" # example internal IP inside nw1-vpc-sub1
  subnetwork   = google_compute_subnetwork.nw1-subnet1.id
}

To create the Private Service Connect endpoint and point it to the Cloud SQL service attachment, use the gcloud compute forwarding-rules create command:

gcloud sql instances describe - displays configuration and metadata about a Cloud SQL instance

Get from this command:

  • SERVICE_ATTACHMENT_URI (pscServiceAttachmentLink)

gcloud sql instances describe psc-instance --project PROJECT_ID

gcloud compute forwarding-rules create psc-service-attachment-link \
--project=PROJECT_ID \
--region=us-central1 \
--network=nw1-vpc \
--address=internal-address \
--target-service-attachment=SERVICE_ATTACHMENT_URI
Cloud SQL doesn't create DNS records automatically. Instead, the instance lookup API response provides a suggested DNS name. We recommend that you create the DNS record in a private DNS zone in the corresponding VPC network. This provides a consistent way of using the Cloud SQL Auth Proxy to connect from different networks.

Get from this command:

  • DNS Entry(dnsName)

gcloud sql instances describe psc-instance --project PROJECT_ID 

In the response, verify that the DNS name appears in the dnsName field; for Private Service Connect instances this name ends with .sql.goog.

To create a private DNS zone, use the gcloud dns managed-zones create command. This zone is associated with the VPC network that's used to connect to the Cloud SQL instance through the Private Service Connect endpoint.

gcloud dns managed-zones create cloud-sql-dns-zone \
--project=PROJECT_ID \
--description="DNS zone for the Cloud SQL instance" \
--dns-name=DNS_NAME \
--networks=NETWORK_NAME \
--visibility=private
Make the following replacements:

  • ZONE_NAME: the name of the DNS zone
  • PROJECT_ID: the ID or project number of the Google Cloud project that contains the zone
  • DESCRIPTION: a description of the zone (for example, a DNS zone for the Cloud SQL instance)
  • DNS_NAME: the DNS name for the zone, such as REGION_NAME.sql.goog. (where REGION_NAME is the region name for the zone)
  • NETWORK_NAME: the name of the VPC network

After you create the Private Service Connect endpoint, to create a DNS record in the zone, use the gcloud dns record-sets create command:

gcloud dns record-sets create DNS_NAME \
--project=PROJECT_ID \
--type=RRSET_TYPE \
--rrdatas=RR_DATA \
--zone=ZONE_NAME

Make the following replacements:

  • DNS_NAME: the DNS name that you retrieved earlier in this procedure.
  • RRSET_TYPE: the resource record type of the DNS record set (for example, A).
  • RR_DATA: the IP address allocated for the Private Service Connect endpoint. You can also enter multiple values separated by spaces, such as rrdata1 rrdata2 rrdata3.
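The zone and record can likewise be managed in Terraform. A sketch under the same assumptions (the dns_name value is illustrative; use the suffix of the dnsName returned by gcloud sql instances describe):

```hcl
# Private DNS zone attached to the VPC used for the PSC endpoint.
resource "google_dns_managed_zone" "cloud_sql_dns_zone" {
  project     = var.project_id
  name        = "cloud-sql-dns-zone"
  dns_name    = "us-central1.sql.goog." # example; take this from the instance's dnsName
  description = "DNS zone for the Cloud SQL instance"
  visibility  = "private"

  private_visibility_config {
    networks {
      network_url = google_compute_network.nw1-vpc.id
    }
  }
}

# A record pointing the instance's DNS name at the PSC endpoint address.
resource "google_dns_record_set" "cloud_sql_record" {
  project      = var.project_id
  managed_zone = google_dns_managed_zone.cloud_sql_dns_zone.name
  name         = "INSTANCE_DNS_NAME." # the dnsName from gcloud sql instances describe
  type         = "A"
  ttl          = 300
  rrdatas      = [google_compute_address.internal_address.address]
}
```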

Spring Boot Application

  • Data Sources Configuration

In the application.yaml file, we configure the data source for the Private Service Connect Cloud SQL instance:

spring:
  jpa:
    defer-datasource-initialization: true
  sql:
    init:
      mode: always

# Custom keys consumed by the data source configuration class below;
# the key names are reconstructed and may differ from the original project.
psc:
  datasource:
    url: jdbc:postgresql:///
    database: my-database3
    cloudSqlInstance: <PROJECT-ID>:<REGION>:psc-instance
    username: <user>
    password: <password>
    ipTypes: PSC
    socketFactory: com.google.cloud.sql.postgres.SocketFactory
    driverClassName: org.postgresql.Driver
  • Database Configuration Classes
package com.henry.democloudsql.configuration;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.JpaVendorAdapter;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;
import org.springframework.transaction.PlatformTransactionManager;

import javax.sql.DataSource;
import java.util.Properties;

@Configuration
@EnableJpaRepositories(
        basePackages = "com.henry.democloudsql.repository",
        entityManagerFactoryRef = "pscEntityManager",
        transactionManagerRef = "pscTransactionManager"
)
public class PSCpostgresConfig {

    // Property keys reconstructed from application.yaml; adjust to your setup.
    @Value("${psc.datasource.url}")
    private String url;

    @Value("${psc.datasource.database}")
    private String database;

    @Value("${psc.datasource.cloudSqlInstance}")
    private String cloudSqlInstance;

    @Value("${psc.datasource.username}")
    private String username;

    @Value("${psc.datasource.password}")
    private String password;

    @Value("${psc.datasource.ipTypes}")
    private String ipTypes;

    @Value("${psc.datasource.socketFactory}")
    private String socketFactory;

    @Value("${psc.datasource.driverClassName}")
    private String driverClassName;

    @Bean
    @Primary
    public LocalContainerEntityManagerFactoryBean pscEntityManager() {
        LocalContainerEntityManagerFactoryBean em = new LocalContainerEntityManagerFactoryBean();
        em.setDataSource(pscDataSource());
        em.setPackagesToScan("com.henry.democloudsql.model");

        JpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
        em.setJpaVendorAdapter(vendorAdapter);
        em.setJpaProperties(pscHibernateProperties());

        return em;
    }

    @Bean
    @Primary
    public DataSource pscDataSource() throws IllegalArgumentException {
        HikariConfig config = new HikariConfig();

        // e.g. jdbc:postgresql:///my-database3; the host is resolved by the socket factory
        config.setJdbcUrl(String.format(url + "%s", database));
        config.setUsername(username);
        config.setPassword(password);
        config.setDriverClassName(driverClassName);

        config.addDataSourceProperty("socketFactory", socketFactory);
        config.addDataSourceProperty("cloudSqlInstance", cloudSqlInstance);
        config.addDataSourceProperty("ipTypes", ipTypes);

        return new HikariDataSource(config);
    }

    private Properties pscHibernateProperties() {
        Properties properties = new Properties();
        properties.setProperty("hibernate.dialect", "org.hibernate.dialect.PostgreSQLDialect");
        return properties;
    }

    @Bean
    @Primary
    public PlatformTransactionManager pscTransactionManager() {
        final JpaTransactionManager transactionManager = new JpaTransactionManager();
        transactionManager.setEntityManagerFactory(pscEntityManager().getObject());
        return transactionManager;
    }
}
  • Entity Models
package com.henry.democloudsql.model;

import jakarta.persistence.*;
import lombok.*;

import java.math.BigDecimal;

@Entity
@Getter
@Setter
@NoArgsConstructor
@AllArgsConstructor
@Table(name = "table3")
public class Table3 {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column(name = "product")
    private String product;

    @Column(name = "price")
    private BigDecimal price;
}
  • Repositories

package com.henry.democloudsql.repository;

import com.henry.democloudsql.model.Table3;
import org.springframework.data.repository.CrudRepository;
import org.springframework.stereotype.Repository;

@Repository
public interface Table3Repository extends CrudRepository<Table3, Long> {
}
  • Services
package com.henry.democloudsql.service;

public sealed interface DefaultService<T, G> permits Table3ServiceImpl {
    T save(T obj);
    Iterable<T> findAll();
    T findById(G id);
}

package com.henry.democloudsql.service;

import com.henry.democloudsql.model.Table3;
import com.henry.democloudsql.repository.Table3Repository;
import org.springframework.stereotype.Service;

@Service
public final class Table3ServiceImpl implements DefaultService<Table3, Long> {

    private final Table3Repository table3Repository;

    public Table3ServiceImpl(Table3Repository table3Repository) {
        this.table3Repository = table3Repository;
    }

    @Override
    public Table3 save(Table3 obj) {
        return table3Repository.save(obj);
    }

    @Override
    public Iterable<Table3> findAll() {
        return table3Repository.findAll();
    }

    @Override
    public Table3 findById(Long id) {
        return table3Repository.findById(id).orElse(null);
    }
}
  • Controllers
package com.henry.democloudsql.controller;

import com.henry.democloudsql.model.Table3;
import com.henry.democloudsql.service.DefaultService;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api/v1/table3") // request path reconstructed; adjust as needed
public class Table3Controller {

    private final DefaultService<Table3, Long> defaultService;

    public Table3Controller(DefaultService<Table3, Long> defaultService) {
        this.defaultService = defaultService;
    }

    @GetMapping
    public Iterable<Table3> findAll() {
        return defaultService.findAll();
    }
}
  • Dockerization

The Dockerfile is used to build the Docker image for the Spring Boot application:

FROM openjdk:17-jdk-slim
WORKDIR /app
COPY target/demo-cloudsql.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]

This Dockerfile uses the openjdk:17-jdk-slim base image, sets the working directory to /app, copies the built Spring Boot JAR file (demo-cloudsql.jar) into the container, and specifies the entrypoint to run the JAR file.

After creating the Dockerfile, the image is built and pushed to Artifact Registry as part of the deployment steps that follow.

Additional Details

  • Executing Terraform Commands:

Before deploying the Spring Boot application, run the following Terraform commands to provision the infrastructure:

terraform init

terraform validate

terraform apply -auto-approve

  • Building and Deploying Spring Boot Application:

After modifying MY_PROJECT_ID in application.yaml in the Spring Boot app, run:

mvn clean install

After the build completes, build the Docker image locally:

docker build -t quickstart-springboot:1.0.1 .

This command builds the Docker image with the tag quickstart-springboot:1.0.1 using the Dockerfile in the current directory.

Deployment and Integration

  • Artifact Registry Repository (created by the Terraform project)

resource "google_artifact_registry_repository" "my-repo" {
  location      = var.region
  repository_id = "my-repo"
  description   = "example docker repository"
  format        = "DOCKER"
}

  • Push Docker Image to Artifact Registry

To push the Docker image to Artifact Registry, first tag it with the repository URL:

docker tag quickstart-springboot:1.0.1 us-central1-docker.pkg.dev/MY_PROJECT_ID/my-repo/quickstart-springboot:1.0.1

Replace MY_PROJECT_ID with your actual GCP project ID.

Then, push the tagged image:

docker push us-central1-docker.pkg.dev/MY_PROJECT_ID/my-repo/quickstart-springboot:1.0.1

Deploy to Cloud Run

Deploy the Spring Boot application to Google Cloud Run using the gcloud command:

With VPC Connector:

gcloud run deploy springboot-run-psc-vpc-connector \
  --image=us-central1-docker.pkg.dev/MY_PROJECT_ID/my-repo/quickstart-springboot:1.0.1 \
  --region=us-central1 \
  --allow-unauthenticated \
  --service-account=cloudsql-service-account-id@MY_PROJECT_ID.iam.gserviceaccount.com \
  --vpc-connector private-cloud-sql

With Direct VPC egress:

gcloud beta run deploy springboot-run-psc-direct-vpc-egress \
  --image=us-central1-docker.pkg.dev/MY_PROJECT_ID/my-repo/quickstart-springboot:1.0.1 \
  --allow-unauthenticated \
  --service-account=cloudsql-service-account-id@MY_PROJECT_ID.iam.gserviceaccount.com \
  --network=nw1-vpc \
  --subnet=nw1-vpc-sub1-us-central1 \
  --vpc-egress=all-traffic \
  --region=us-central1

These commands deploy the Spring Boot application to Cloud Run using the Docker image from Artifact Registry and the service account created by Terraform (cloudsql-service-account-id).
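If you prefer to keep the deployment in Terraform as well, here is a hedged sketch of the connector-based variant using google_cloud_run_v2_service (the service name and image path are illustrative and not part of the original project):

```hcl
resource "google_cloud_run_v2_service" "springboot" {
  name     = "springboot-run-psc-vpc-connector"
  location = var.region

  template {
    # Run as the Terraform-created service account so it can reach Cloud SQL.
    service_account = google_service_account.cloudsql_service_account.email

    containers {
      image = "us-central1-docker.pkg.dev/MY_PROJECT_ID/my-repo/quickstart-springboot:1.0.1"
    }

    # Route egress through the Serverless VPC Access connector.
    vpc_access {
      connector = google_vpc_access_connector.private-cloud-sql.id
      egress    = "ALL_TRAFFIC"
    }
  }
}
```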

