You can manage AWS, GCP, and Azure resources from a single Terraform configuration, and it’s not just about convenience; it’s about unifying the workflow and language across providers to a degree that makes cross-cloud deployments genuinely manageable.
Let’s see this in action. Imagine we want to provision a simple S3 bucket in AWS and a GCS bucket in GCP.
# AWS Provider Configuration
provider "aws" {
  alias  = "primary"
  region = "us-east-1"
}

# GCP Provider Configuration
provider "google" {
  alias   = "primary"
  project = "my-gcp-project-12345"
  region  = "us-central1"
}

# AWS S3 Bucket
resource "aws_s3_bucket" "example_bucket_aws" {
  provider = aws.primary
  bucket   = "my-unique-aws-bucket-name-12345"

  # Note: the acl argument was deprecated in v4 of the AWS provider;
  # new buckets are private by default.

  tags = {
    Environment = "Dev"
    ManagedBy   = "Terraform"
  }
}

# GCP GCS Bucket
resource "google_storage_bucket" "example_bucket_gcp" {
  provider      = google.primary
  name          = "my-unique-gcs-bucket-name-12345"
  location      = "US-CENTRAL1"
  storage_class = "STANDARD"

  labels = {
    environment = "dev"
    managed_by  = "terraform"
  }
}
When you run terraform init, Terraform downloads the necessary provider plugins for both AWS and Google Cloud. Then, terraform plan will show you the actions it will take: creating an S3 bucket in AWS and a GCS bucket in GCP. terraform apply executes that plan. The provider block with an alias is key here. It allows you to configure multiple instances of the same provider, which is essential for targeting different accounts or regions within the same cloud. (Strictly speaking, aws and google are distinct provider types and would not collide without aliases; the aliases above simply make each resource’s provider choice explicit.)
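To see where aliases are genuinely required, consider two configurations of the same provider. The region names below are illustrative; any two AWS regions would do:

```hcl
# Default (un-aliased) AWS configuration
provider "aws" {
  region = "us-east-1"
}

# A second instance of the same provider, addressable only via its alias
provider "aws" {
  alias  = "europe"
  region = "eu-west-1"
}

# Uses the default configuration (us-east-1)
resource "aws_s3_bucket" "primary_region" {
  bucket = "my-unique-primary-bucket-98765"
}

# Explicitly targets the aliased configuration (eu-west-1)
resource "aws_s3_bucket" "secondary_region" {
  provider = aws.europe
  bucket   = "my-unique-europe-bucket-98765"
}
```

Without the alias, the two provider "aws" blocks would be indistinguishable and Terraform would reject the configuration.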
The core problem this solves is the operational overhead of managing infrastructure across multiple clouds. Without a unified tool, you’d likely have separate toolchains, teams, and configurations for each provider. Terraform’s multi-cloud capability centralizes this. It provides a consistent workflow and declarative language to define infrastructure, regardless of whether it’s running on EC2 instances or GCE instances, S3 or GCS.
Internally, Terraform’s provider model is what makes this work. Each cloud provider (AWS, GCP, Azure, etc.) has a corresponding Terraform provider plugin. When you declare a resource like aws_s3_bucket or google_storage_bucket, Terraform consults the respective provider plugin. The plugin then translates Terraform’s generic resource definition into the specific API calls required by that cloud provider. The alias functionality on provider blocks is not just for multiple configurations of the same cloud; it’s the mechanism that lets you explicitly tell a resource which instance of a provider to use. If every configuration of a provider carries an alias and a resource omits the provider argument, Terraform falls back to an implied, empty default configuration, which typically fails at plan time with errors about missing settings such as region or project.
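The plugins themselves are declared in a required_providers block, which tells terraform init which plugins to download and which versions to accept. A typical declaration for the configuration above (the version constraints are a common convention, not a requirement):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0"
    }
  }
}
```

Pinning versions this way keeps the plugin-to-API translation stable across machines and CI runs, which matters precisely because each provider release can change how resources map to cloud API calls.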
The mental model to build is one of distinct "provider instances" that Terraform can address. Think of it like having multiple remote controls, each programmed for a different brand of TV. You tell the "Sony remote" to change the channel, and it sends the right infrared signal for Sony TVs. Similarly, you tell the aws.primary provider instance to create an S3 bucket, and it uses the AWS API. The power comes from writing a single .tf file that can invoke multiple such remotes.
When you define resources that span providers, like needing to grant an AWS IAM role permission to access a GCS bucket, the challenge isn’t just in the Terraform configuration itself, but in how the underlying cloud IAM systems interact. Terraform can define both sides of this equation, but it cannot magically bridge the security and identity domains between fundamentally different cloud platforms. You’ll need to ensure that the identity Terraform is using for AWS has permissions to create IAM policies, and the identity Terraform is using for GCP has permissions to create IAM service accounts and grant them roles. The actual cross-cloud authorization often requires manual setup or a more advanced identity federation strategy outside of Terraform’s direct provisioning scope.
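As a concrete illustration of "defining one side of the equation," here is a sketch of the GCP half: a service account granted read access to the GCS bucket from earlier. The account_id and display_name are hypothetical; wiring an AWS identity to actually assume this service account (e.g., via workload identity federation) is the part Terraform cannot do for you automatically:

```hcl
# GCP side only: a service account that an external workload could use
resource "google_service_account" "reader" {
  provider     = google.primary
  account_id   = "cross-cloud-reader"
  display_name = "Cross-cloud reader"
}

# Grant the service account read access to the bucket
resource "google_storage_bucket_iam_member" "reader" {
  provider = google.primary
  bucket   = google_storage_bucket.example_bucket_gcp.name
  role     = "roles/storage.objectViewer"
  member   = "serviceAccount:${google_service_account.reader.email}"
}
```

The AWS-side trust relationship that lets an IAM role obtain credentials for this service account still has to be established through the clouds’ own federation mechanisms.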
The next hurdle you’ll face is managing dependencies and state across these disparate environments.