In this post, we'll explore how to provision a Cloud SQL instance with Private Service Connect (PSC) connectivity using Terraform and then access it from a Spring Boot application deployed on Google Cloud Run. We'll use Terraform to create the necessary infrastructure and configure the networking components. Then, we'll build and deploy a Spring Boot application that connects to this Cloud SQL instance using the appropriate methods.
Enabled APIs
The following APIs need to be enabled for this project:
- Cloud SQL API
- Cloud Run API
- Cloud Build API
- Artifact Registry API
- Cloud Logging API
- Serverless VPC Access API
Terraform Project Overview
The Terraform project sets up the following resources on Google Cloud Platform (GCP):
Virtual Private Cloud (VPC) Network and Subnets:
- A VPC network named nw1-vpc is created, along with two subnets (nw1-vpc-sub1-us-central1 and nw1-vpc-sub3-us-west1) in different regions.
Cloud SQL Instances:
- Private Service Connect (PSC) Instance: A Cloud SQL instance named psc-instance is created with Private Service Connect enabled, allowing secure access from Google Cloud services and resources.
Networking Components:
- Firewall rules are defined to control access to the VPC network.
- A NAT gateway is configured to allow instances in the VPC network to access the internet.
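The NAT gateway mentioned above can be sketched in Terraform roughly as follows. This is a minimal sketch, assuming a Cloud Router plus Cloud NAT setup; the resource names (nw1-nat-router, nw1-nat-gateway) are illustrative and may differ from the actual network.tf:

```hcl
resource "google_compute_router" "nw1_router" {
  name    = "nw1-nat-router"
  project = var.project_id
  region  = var.region
  network = google_compute_network.nw1-vpc.id
}

resource "google_compute_router_nat" "nw1_nat" {
  name                               = "nw1-nat-gateway"
  project                            = var.project_id
  router                             = google_compute_router.nw1_router.name
  region                             = var.region
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}
```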
Service Account and IAM Roles:
- A service account named cloudsql-service-account-id is created and granted the necessary roles for accessing Cloud SQL instances.
Compare Direct VPC egress and VPC connectors
Cloud Run offers two methods for sending egress (outbound) traffic from a Cloud Run service or job to a VPC network:
- You can enable your Cloud Run service or job to send traffic to a VPC network by configuring a Serverless VPC Access connector. In this project, a connector named private-cloud-sql is provisioned in network.tf so that Google Cloud services such as Cloud Run can reach the Private Service Connect endpoint.
- You can enable your Cloud Run service or job to send traffic to a VPC network by using Direct VPC egress with no Serverless VPC Access connector required.
Terraform Project
- Provider Configuration
In the provider.tf file, we define the required providers and configure the Google Cloud provider with the project ID, region, and zone:
provider "google" {
project = var.project_id
region = var.region
zone = var.zone
}
The project_id, region, and zone variables are defined in the variables.tf file and assigned values in the terraform.tfvars file.
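For reference, a minimal variables.tf and terraform.tfvars pair consistent with the resources in this project might look like the following sketch (the values shown are illustrative placeholders):

```hcl
# variables.tf
variable "project_id" { type = string }
variable "region"     { type = string }
variable "zone"       { type = string }
variable "sec_region" { type = string }

# terraform.tfvars
# project_id = "MY_PROJECT_ID"
# region     = "us-central1"
# zone       = "us-central1-a"
# sec_region = "us-west1"
```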
- Virtual Private Cloud (VPC) Network and Subnets
In the network.tf file, we create the VPC network and subnets:
resource "google_compute_network" "nw1-vpc" {
project = var.project_id
name = "nw1-vpc"
auto_create_subnetworks = false
mtu = 1460
}
resource "google_compute_subnetwork" "nw1-subnet1" {
name = "nw1-vpc-sub1-${var.region}"
network = google_compute_network.nw1-vpc.id
ip_cidr_range = "10.10.1.0/24"
region = var.region
private_ip_google_access = true
}
resource "google_compute_subnetwork" "nw1-subnet2" {
name = "nw1-vpc-sub3-us-west1"
network = google_compute_network.nw1-vpc.id
ip_cidr_range = "10.10.2.0/24"
region = var.sec_region
private_ip_google_access = true
}
- Private Service Connect (PSC) Instance
In the main.tf file, we create the Cloud SQL instance with the Private Service Connect option:
resource "google_sql_database_instance" "psc_instance" {
project = var.project_id
name = "psc-instance"
region = var.region
database_version = "POSTGRES_15"
deletion_protection = false
settings {
tier = "db-f1-micro"
ip_configuration {
psc_config {
psc_enabled = true
allowed_consumer_projects = ["<PROJECT_ID>"]
}
ipv4_enabled = false
}
availability_type = "REGIONAL"
}
}
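The service attachment URI and suggested DNS name needed later for the Private Service Connect endpoint can also be exported as Terraform outputs. This is a sketch, assuming the google provider version in use exposes the psc_service_attachment_link and dns_name attributes on google_sql_database_instance (recent versions do):

```hcl
output "psc_service_attachment_link" {
  value = google_sql_database_instance.psc_instance.psc_service_attachment_link
}

output "dns_name" {
  value = google_sql_database_instance.psc_instance.dns_name
}
```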
- Networking Components
resource "google_compute_global_address" "private_ip_address" {
name = google_compute_network.nw1-vpc.name
purpose = "VPC_PEERING"
address_type = "INTERNAL"
prefix_length = 16
network = google_compute_network.nw1-vpc.name
}
resource "google_service_networking_connection" "private_vpc_connection" {
network = google_compute_network.nw1-vpc.id
service = "servicenetworking.googleapis.com"
reserved_peering_ranges = [google_compute_global_address.private_ip_address.name]
}
- Service Account and IAM Roles
In the serviceaccount.tf file, we create the service account and assign necessary roles:
resource "google_service_account" "cloudsql_service_account" {
project = var.project_id
account_id = "cloudsql-service-account-id"
display_name = "Service Account for Cloud SQL"
}
resource "google_project_iam_member" "member-role" {
depends_on = [google_service_account.cloudsql_service_account]
for_each = toset([
"roles/cloudsql.client",
"roles/cloudsql.editor",
"roles/cloudsql.admin",
"roles/secretmanager.secretAccessor",
"roles/secretmanager.secretVersionManager",
"roles/vpcaccess.serviceAgent"
])
role = each.key
project = var.project_id
member = "serviceAccount:${google_service_account.cloudsql_service_account.email}"
}
- VPC Connector
In the network.tf file, we create the VPC Connector for Private Service Connect access:
#****************************Equivalent gcloud command
/* gcloud compute networks vpc-access connectors create private-cloud-sql \
--region us-central1 \
--network nw1-vpc \
--range "10.10.3.0/28" \
--machine-type e2-micro \
--project <PROJECT-ID>*/
resource "google_vpc_access_connector" "private-cloud-sql" {
project = var.project_id
name = "private-cloud-sql"
region = var.region
network = google_compute_network.nw1-vpc.id
machine_type = "e2-micro"
ip_cidr_range = "10.10.3.0/28"
}
Private Service Connect Configuration
You can reserve an internal IP address for the Private Service Connect endpoint and create an endpoint with that address. To create the endpoint, you need the service attachment URI and the list of projects that are allowed to access the instance.
To reserve an internal IP address for the Private Service Connect endpoint, use the gcloud compute addresses create command:
/*gcloud compute addresses create internal-address \
--project=<PROJECT-ID> \
--region=us-central1 \
--subnet=nw1-vpc-sub1-us-central1 \
--addresses=10.10.1.10*/
resource "google_compute_address" "internal_address" {
project = var.project_id
name = "internal-address"
region = var.region
address_type = "INTERNAL"
address = "10.10.1.10" #"INTERNAL_IP_ADDRESS"
subnetwork = google_compute_subnetwork.nw1-subnet1.name
}
To create the Private Service Connect endpoint and point it to the Cloud SQL service attachment, use the gcloud compute forwarding-rules create command:
To get the service attachment URI, use gcloud sql instances describe, which displays configuration and metadata about a Cloud SQL instance. The pscServiceAttachmentLink field in the output contains the SERVICE_ATTACHMENT_URI:
gcloud sql instances describe psc-instance --project PROJECT_ID
gcloud compute forwarding-rules create psc-service-attachment-link \
--address=internal-address \
--project=PROJECT_ID \
--region=us-central1 \
--network=nw1-vpc \
--target-service-attachment=SERVICE_ATTACHMENT_URI
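The endpoint can alternatively be kept in Terraform instead of being created with gcloud. A sketch, assuming the instance's psc_service_attachment_link attribute is available; note that consumer-side PSC forwarding rules require an empty load_balancing_scheme:

```hcl
resource "google_compute_forwarding_rule" "psc_endpoint" {
  name                  = "psc-service-attachment-link"
  project               = var.project_id
  region                = var.region
  network               = google_compute_network.nw1-vpc.id
  subnetwork            = google_compute_subnetwork.nw1-subnet1.id
  ip_address            = google_compute_address.internal_address.id
  load_balancing_scheme = ""
  target                = google_sql_database_instance.psc_instance.psc_service_attachment_link
}
```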
Cloud SQL doesn't create DNS records automatically. Instead, the instance lookup API response provides a suggested DNS name. We recommend that you create the DNS record in a private DNS zone in the corresponding VPC network. This provides a consistent way of using the Cloud SQL Auth Proxy to connect from different networks.
To get the suggested DNS name, run the same describe command and look for the dnsName field in the output:
gcloud sql instances describe psc-instance --project PROJECT_ID
In the response, verify that the DNS name appears. This name has the following pattern: INSTANCE_UID.PROJECT_DNS_LABEL.REGION_NAME.sql.goog.. For example: 1a23b4cd5e67.1a2b345c6d27.us-central1.sql.goog.
To create a private DNS zone, use the gcloud dns managed-zones create command. This zone is associated with the VPC network that's used to connect to the Cloud SQL instance through the Private Service Connect endpoint.
gcloud dns managed-zones create cloud-sql-dns-zone \
--project=PROJECT_ID \
--description="DNS zone for the Cloud SQL instance" \
--dns-name=DNS_NAME \
--networks=nw1-vpc \
--visibility=private
Make the following replacements:
- PROJECT_ID: the ID or project number of the Google Cloud project that contains the zone
- DNS_NAME: the DNS name for the zone, such as REGION_NAME.sql.goog. (where REGION_NAME is the region name for the zone)
The zone name (cloud-sql-dns-zone), description, and network (nw1-vpc) are already filled in to match this project's resources.
After you create the Private Service Connect endpoint, to create a DNS record in the zone, use the gcloud dns record-sets create command:
gcloud dns record-sets create DNS_NAME \
--project=PROJECT_ID \
--type=A \
--rrdatas=10.10.1.10 \
--zone=cloud-sql-dns-zone
Make the following replacements:
- DNS_NAME: the DNS name that you retrieved earlier in this procedure.
- PROJECT_ID: the ID of your Google Cloud project.
The record type (A) and the rrdatas value (10.10.1.10, the internal IP address reserved for the Private Service Connect endpoint) are already filled in. Multiple IP addresses can be supplied as space-separated values (for example, 10.1.2.3 10.2.3.4 10.3.4.5).
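The DNS zone and record can also be managed in Terraform. A sketch, assuming the instance's dns_name attribute returns the suggested DNS name (including the trailing dot); the resource names are illustrative:

```hcl
resource "google_dns_managed_zone" "cloud_sql_dns_zone" {
  name        = "cloud-sql-dns-zone"
  project     = var.project_id
  dns_name    = "REGION_NAME.sql.goog." # replace with your region's DNS name
  description = "DNS zone for the Cloud SQL instance"
  visibility  = "private"
  private_visibility_config {
    networks {
      network_url = google_compute_network.nw1-vpc.id
    }
  }
}

resource "google_dns_record_set" "psc_a_record" {
  name         = google_sql_database_instance.psc_instance.dns_name
  project      = var.project_id
  type         = "A"
  ttl          = 300
  managed_zone = google_dns_managed_zone.cloud_sql_dns_zone.name
  rrdatas      = [google_compute_address.internal_address.address]
}
```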
Spring Boot Application
- Data Sources Configuration
In the application.yaml file, we configure the data source for the Private Service Connect Cloud SQL instance:
spring:
jpa:
defer-datasource-initialization: true
sql:
init:
mode: always
datasource:
psc:
url: jdbc:postgresql:///
database: my-database3
cloudSqlInstance: <PROJECT-ID>:<REGION>:psc-instance
username: <user>
password: <password>
ipTypes: PSC
socketFactory: com.google.cloud.sql.postgres.SocketFactory
driverClassName: org.postgresql.Driver
- Database Configuration Classes
package com.henry.democloudsql.configuration;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.JpaVendorAdapter;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;
import org.springframework.transaction.PlatformTransactionManager;
import javax.naming.NamingException;
import javax.sql.DataSource;
import java.util.Properties;
@Configuration
@EnableJpaRepositories(
basePackages = "com.henry.democloudsql.repository",
entityManagerFactoryRef = "pscEntityManager",
transactionManagerRef = "pscTransactionManager"
)
public class PSCpostgresConfig {
@Value("${spring.datasource.psc.url}")
private String url;
@Value("${spring.datasource.psc.database}")
private String database;
@Value("${spring.datasource.psc.cloudSqlInstance}")
private String cloudSqlInstance;
@Value("${spring.datasource.psc.username}")
private String username;
@Value("${spring.datasource.psc.password}")
private String password;
@Value("${spring.datasource.psc.ipTypes}")
private String ipTypes;
@Value("${spring.datasource.psc.socketFactory}")
private String socketFactory;
@Value("${spring.datasource.psc.driverClassName}")
private String driverClassName;
@Bean
@Primary
public LocalContainerEntityManagerFactoryBean pscEntityManager()
throws NamingException {
LocalContainerEntityManagerFactoryBean em
= new LocalContainerEntityManagerFactoryBean();
em.setDataSource(pscDataSource());
em.setPackagesToScan("com.henry.democloudsql.model");
JpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
em.setJpaVendorAdapter(vendorAdapter);
em.setJpaProperties(pscHibernateProperties());
return em;
}
@Bean
@Primary
public DataSource pscDataSource() throws IllegalArgumentException {
HikariConfig config = new HikariConfig();
config.setJdbcUrl(String.format(url + "%s", database));
config.setUsername(username);
config.setPassword(password);
config.addDataSourceProperty("socketFactory", socketFactory);
config.addDataSourceProperty("cloudSqlInstance", cloudSqlInstance);
config.addDataSourceProperty("ipTypes", ipTypes);
config.setMaximumPoolSize(5);
config.setMinimumIdle(5);
config.setConnectionTimeout(10000);
config.setIdleTimeout(600000);
config.setMaxLifetime(1800000);
return new HikariDataSource(config);
}
private Properties pscHibernateProperties() {
Properties properties = new Properties();
return properties;
}
@Bean
@Primary
public PlatformTransactionManager pscTransactionManager() throws NamingException {
final JpaTransactionManager transactionManager = new JpaTransactionManager();
transactionManager.setEntityManagerFactory(pscEntityManager().getObject());
return transactionManager;
}
}
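Note how pscDataSource composes the JDBC URL: the base url from application.yaml (jdbc:postgresql:///, with no host) is concatenated with the database name, while host resolution is delegated to the Cloud SQL socket factory through the cloudSqlInstance and ipTypes data source properties. A minimal sketch of that composition (JdbcUrlDemo is an illustrative class name, not part of the project):

```java
public class JdbcUrlDemo {
    // Mirrors the String.format(url + "%s", database) call in pscDataSource:
    // the base URL deliberately has no host, because the Cloud SQL socket
    // factory opens the connection to the PSC endpoint itself.
    static String jdbcUrl(String baseUrl, String database) {
        return String.format(baseUrl + "%s", database);
    }

    public static void main(String[] args) {
        // prints jdbc:postgresql:///my-database3
        System.out.println(jdbcUrl("jdbc:postgresql:///", "my-database3"));
    }
}
```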
- Entity Models
package com.henry.democloudsql.model;
import jakarta.persistence.*;
import lombok.*;
import java.math.BigDecimal;
@Entity
@Getter
@Setter
@NoArgsConstructor
@AllArgsConstructor
@Builder
@Table(name = "table3")
public class Table3 {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
@Column(name = "product")
private String product;
@Column(name = "price")
private BigDecimal price;
}
- Repositories
package com.henry.democloudsql.repository;
import com.henry.democloudsql.model.Table3;
import org.springframework.data.repository.CrudRepository;
public interface Table3Repository extends CrudRepository<Table3, Long> {
}
- Services
package com.henry.democloudsql.service;
public sealed interface DefaultService<T, G> permits Table3ServiceImpl {
T save(T obj);
Iterable<T> findAll();
T findById(G id);
}
package com.henry.democloudsql.service;
import com.henry.democloudsql.repository.Table3Repository;
import org.springframework.stereotype.Service;
@Service
public final class Table3ServiceImpl implements DefaultService<Table3, Long> {
    private final Table3Repository table3Repository;
    public Table3ServiceImpl(Table3Repository table3Repository) {
        this.table3Repository = table3Repository;
    }
    @Override
    public Table3 save(Table3 obj) {
        return table3Repository.save(obj);
    }
    @Override
    public Iterable<Table3> findAll() {
        return table3Repository.findAll();
    }
    @Override
    public Table3 findById(Long id) {
        return table3Repository.findById(id).orElse(null);
    }
}
- Controllers
package com.henry.democloudsql.controller;
import com.henry.democloudsql.model.Table3;
import com.henry.democloudsql.service.DefaultService;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
@RestController
@RequestMapping("/api/v3")
public class Table3Controller {
private final DefaultService<Table3, Long> defaultService;
public Table3Controller(DefaultService<Table3, Long> defaultService) {
this.defaultService = defaultService;
}
@GetMapping
public Iterable<Table3> findAll(){
return defaultService.findAll();
}
}
- Dockerization
The Dockerfile is used to build the Docker image for the Spring Boot application:
FROM openjdk:17-jdk-slim
WORKDIR /app
COPY target/demo-cloudsql.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
This Dockerfile uses the openjdk:17-jdk-slim base image, sets the working directory to /app, copies the built Spring Boot JAR file (demo-cloudsql.jar) into the container, and specifies the entrypoint to run the JAR file.
After creating the Dockerfile, you can build the JAR and the Docker image locally; the commands are shown in the next section.
Additional Details
- Executing Terraform Commands:
Before deploying the Spring Boot application, run the following Terraform commands to provision the infrastructure:
terraform init
terraform validate
terraform apply -auto-approve
- Building and Deploying Spring Boot Application:
After setting your project ID (and the other placeholder values) in the Spring Boot application's application.yaml, run:
mvn clean install
docker build -t quickstart-springboot:1.0.1 .
Deployment and Integration
- The Artifact Registry repository is created by the Terraform project:
resource "google_artifact_registry_repository" "my-repo" {
location = var.region
repository_id = "my-repo"
description = "example docker repository"
format = "DOCKER"
}
- Push Docker Image to Artifact Registry
To push the Docker image to the Artifact Registry, you first need to tag it with the appropriate URL:
docker tag quickstart-springboot:1.0.1 us-central1-docker.pkg.dev/MY_PROJECT_ID/my-repo/quickstart-springboot:1.0.1
Replace MY_PROJECT_ID with your actual GCP project ID.
Then, push the tagged image to the Artifact Registry:
docker push us-central1-docker.pkg.dev/MY_PROJECT_ID/my-repo/quickstart-springboot:1.0.1
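The image path in the tag and push commands above follows the Artifact Registry format LOCATION-docker.pkg.dev/PROJECT/REPOSITORY/IMAGE:TAG. A small sketch of that composition (all values are illustrative placeholders):

```shell
# Compose the Artifact Registry image URL used by docker tag/push above.
PROJECT_ID="MY_PROJECT_ID"
REGION="us-central1"
REPO="my-repo"
IMAGE="quickstart-springboot"
TAG="1.0.1"
IMAGE_URL="${REGION}-docker.pkg.dev/${PROJECT_ID}/${REPO}/${IMAGE}:${TAG}"
echo "${IMAGE_URL}"
```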
Deploy to Cloud Run
Deploy the Spring Boot application to Google Cloud Run using the gcloud command:
With VPC Connector:
gcloud run deploy springboot-run-psc-vpc-connector \
--image us-central1-docker.pkg.dev/MY_PROJECT_ID/my-repo/quickstart-springboot:1.0.1 \
--region=us-central1 \
--allow-unauthenticated \
--service-account=cloudsql-service-account-id@MY_PROJECT_ID.iam.gserviceaccount.com \
--vpc-connector private-cloud-sql
With Direct VPC egress:
gcloud beta run deploy springboot-run-psc-direct-vpc-egress \
--image=us-central1-docker.pkg.dev/MY_PROJECT_ID/my-repo/quickstart-springboot:1.0.1 \
--allow-unauthenticated \
--service-account=cloudsql-service-account-id@MY_PROJECT_ID.iam.gserviceaccount.com \
--network=nw1-vpc \
--subnet=nw1-vpc-sub1-us-central1 \
--vpc-egress=all-traffic \
--region=us-central1 \
--project=MY_PROJECT_ID
This command deploys the Spring Boot application to Cloud Run using the Docker image from Artifact Registry. It also specifies the service account created by Terraform (cloudsql-service-account-id@MY_PROJECT_ID.iam.gserviceaccount.com).