131 changes: 131 additions & 0 deletions auth/custom-credentials/aws/README.md
@@ -0,0 +1,131 @@
# Running the Custom Credential Supplier Sample

If you want to use AWS security credentials that cannot be retrieved using methods supported natively by the [google-auth](https://github.com/googleapis/google-auth-library-python) library, a custom `AwsSecurityCredentialsSupplier` implementation may be specified. The supplier must return valid, unexpired AWS security credentials when called by the GCP credential.

This sample demonstrates how to use **Boto3** (the AWS SDK for Python) as a custom supplier to bridge AWS credentials—from sources like EKS IRSA, ECS, or Fargate—to Google Cloud Workload Identity.

## Running Locally

To run the sample on your local system, you need to install the dependencies and configure your AWS and GCP credentials as environment variables.

### 1. Install Dependencies

Ensure you have Python installed, then install the required libraries:

```bash
pip install -r requirements.txt
```

### 2. Set Environment Variables

```bash
export AWS_ACCESS_KEY_ID="YOUR_AWS_ACCESS_KEY_ID"
export AWS_SECRET_ACCESS_KEY="YOUR_AWS_SECRET_ACCESS_KEY"
export AWS_REGION="YOUR_AWS_REGION" # e.g., us-east-1
export GCP_WORKLOAD_AUDIENCE="YOUR_GCP_WORKLOAD_AUDIENCE"
export GCS_BUCKET_NAME="YOUR_GCS_BUCKET_NAME"

# Optional: If you want to use service account impersonation
export GCP_SERVICE_ACCOUNT_IMPERSONATION_URL="YOUR_GCP_SERVICE_ACCOUNT_IMPERSONATION_URL"
```

### 3. Run the Script

```bash
python3 snippets.py
```

## Running in a Containerized Environment (EKS)

This section provides a brief overview of how to run the sample in an Amazon EKS cluster.

### 1. EKS Cluster Setup

First, you need an EKS cluster. You can create one using `eksctl` or the AWS Management Console. For detailed instructions, refer to the [Amazon EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html).

### 2. Configure IAM Roles for Service Accounts (IRSA)

IRSA allows you to associate an IAM role with a Kubernetes service account. This provides a secure way for your pods to access AWS services without hardcoding long-lived credentials.

- Create an IAM OIDC provider for your cluster.
- Create an IAM role and policy that grants the necessary AWS permissions.
- Associate the IAM role with a Kubernetes service account.

For detailed steps, see the [IAM Roles for Service Accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) documentation.
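The trust relationship these steps establish can be sketched as the IAM trust policy below. This is an illustration only, not taken from this sample: the account ID, OIDC provider URL, namespace, and service account name are all placeholders.

```python
import json

# Placeholder identifiers -- substitute your own values.
account_id = "123456789012"
oidc_provider = "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B7"
namespace = "default"
service_account = "your-k8s-service-account"

# IRSA trust policy: the role may be assumed via the cluster's OIDC
# provider, but only by the named Kubernetes service account.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": f"arn:aws:iam::{account_id}:oidc-provider/{oidc_provider}"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    f"{oidc_provider}:sub": (
                        f"system:serviceaccount:{namespace}:{service_account}"
                    )
                }
            },
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```

Tools like `eksctl create iamserviceaccount` generate an equivalent policy for you; the sketch only shows what the association amounts to.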

### 3. Configure GCP to Trust the AWS Role

You need to configure your GCP project to trust the AWS IAM role you created. This is done by creating a Workload Identity Pool and Provider in GCP.

- Create a Workload Identity Pool.
- Create a Workload Identity Provider that trusts the AWS role ARN.
- Grant the GCP service account the necessary permissions.
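The provider you create here determines the audience value the sample reads from `GCP_WORKLOAD_AUDIENCE`: the audience is the provider's full resource name. The helper below is a sketch of that format; the project number, pool ID, and provider ID are placeholders.

```python
def workload_identity_audience(project_number: str, pool_id: str, provider_id: str) -> str:
    """Builds the audience string for a workload identity provider.

    The IDs passed in are placeholders, not values from this sample.
    """
    return (
        f"//iam.googleapis.com/projects/{project_number}"
        f"/locations/global/workloadIdentityPools/{pool_id}"
        f"/providers/{provider_id}"
    )


audience = workload_identity_audience("123456789012", "my-pool", "my-aws-provider")
print(audience)
```

Passing this string as the `audience` of `aws.Credentials` is what `snippets.py` does via the `GCP_WORKLOAD_AUDIENCE` environment variable.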

### 4. Containerize and Package the Application

Create a `Dockerfile` for the Python application and push the image to a container registry (e.g., Amazon ECR) that your EKS cluster can access.

**Dockerfile**
**Reviewer (Contributor):** Instead of embedding the Dockerfile in the documentation, you can just add the Dockerfile and refer to it. That makes it easier to keep updated.

**Reviewer (Contributor):** Also, this Dockerfile probably needs the addition of a non-root user and running the script as the non-root user.

**Author (@vverman, Nov 22, 2025):** By root user I assume you meant root user for the Docker container. I made the changes accordingly.

```Dockerfile
FROM python:3.11-slim

WORKDIR /app

# Copy requirements and install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the script
COPY snippets.py .

# Create a non-root user and run the script as that user
RUN useradd --create-home appuser
USER appuser

# Run the script
CMD ["python3", "snippets.py"]
```

Build and push the image:
```bash
docker build -t your-container-image:latest .
docker push your-container-image:latest
```

### 5. Deploy to EKS

Create a Kubernetes Pod manifest (`pod.yaml`) to deploy your application to the EKS cluster.

**pod.yaml**
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: custom-credential-pod
spec:
  serviceAccountName: your-k8s-service-account # The service account associated with the AWS IAM role
  containers:
    - name: gcp-auth-sample
      image: your-container-image:latest # Your image from ECR
      env:
        # AWS_REGION is often required for Boto3 to initialize correctly in containers
        - name: AWS_REGION
          value: "your-aws-region"
        - name: GCP_WORKLOAD_AUDIENCE
          value: "your-gcp-workload-audience"
        # Optional: If you want to use service account impersonation
        # - name: GCP_SERVICE_ACCOUNT_IMPERSONATION_URL
        #   value: "your-gcp-service-account-impersonation-url"
        - name: GCS_BUCKET_NAME
          value: "your-gcs-bucket-name"
```

Deploy the pod:
```bash
kubectl apply -f pod.yaml
```

### 6. Clean Up

To clean up the resources, delete the EKS cluster and any other AWS and GCP resources you created.

```bash
eksctl delete cluster --name your-cluster-name
```
**Reviewer (Contributor), severity medium:** It's a good practice for text files, including Markdown files, to end with a newline character. This can prevent issues with some tools and file concatenations.

26 changes: 26 additions & 0 deletions auth/custom-credentials/aws/noxfile_config.py
@@ -0,0 +1,26 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

TEST_CONFIG_OVERRIDE = {
# Ignore all versions except 3.9, which is the version available.
"ignored_versions": ["2.7", "3.6", "3.7", "3.8", "3.10", "3.11", "3.12", "3.13"],
"envs": {
"AWS_ACCESS_KEY_ID": "",
"AWS_SECRET_ACCESS_KEY": "",
"AWS_REGION": "",
"GCP_WORKLOAD_AUDIENCE": "",
"GCS_BUCKET_NAME": "",
"GCP_SERVICE_ACCOUNT_IMPERSONATION_URL": "",
},
}
2 changes: 2 additions & 0 deletions auth/custom-credentials/aws/requirements-test.txt
@@ -0,0 +1,2 @@
-r requirements.txt
pytest==8.2.0
4 changes: 4 additions & 0 deletions auth/custom-credentials/aws/requirements.txt
@@ -0,0 +1,4 @@
boto3==1.40.53
google-auth==2.43.0
python-dotenv==1.1.1
requests==2.32.3
125 changes: 125 additions & 0 deletions auth/custom-credentials/aws/snippets.py
@@ -0,0 +1,125 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# [START auth_custom_credential_supplier_aws]
import json
import os

import boto3
from google.auth import aws
from google.auth import exceptions
from google.auth.transport import requests as auth_requests
**Reviewer (Contributor), on lines +15 to +21, severity medium:** To support writing to standard error, the `sys` module should be imported. It's also a good practice to group standard library imports together, followed by third-party imports, as per PEP 8.

Suggested change:

```python
import json
import os
import sys

import boto3
from google.auth import aws
from google.auth import exceptions
from google.auth.transport import requests as auth_requests
```

class CustomAwsSupplier(aws.AwsSecurityCredentialsSupplier):
"""Custom AWS Security Credentials Supplier using Boto3."""

def __init__(self):
"""Initializes the Boto3 session, prioritizing environment variables for region."""
# Explicitly read the region from the environment first.
region = os.getenv("AWS_REGION") or os.getenv("AWS_DEFAULT_REGION")

# If region is None, Boto3's discovery chain will be used when needed.
self.session = boto3.Session(region_name=region)
self._cached_region = None

def get_aws_region(self, context, request) -> str:
"""Returns the AWS region using Boto3's default provider chain."""
if self._cached_region:
return self._cached_region

self._cached_region = self.session.region_name

if not self._cached_region:
raise exceptions.GoogleAuthError(
"Boto3 was unable to resolve an AWS region."
)

return self._cached_region

def get_aws_security_credentials(
self, context, request=None
) -> aws.AwsSecurityCredentials:
"""Retrieves AWS security credentials using Boto3's default provider chain."""
creds = self.session.get_credentials()
if not creds:
raise exceptions.GoogleAuthError(
"Unable to resolve AWS credentials from Boto3."
)

return aws.AwsSecurityCredentials(
access_key_id=creds.access_key,
secret_access_key=creds.secret_key,
session_token=creds.token,
)


def authenticate_with_aws_credentials(bucket_name, audience, impersonation_url=None):
"""Authenticates using the custom AWS supplier and gets bucket metadata.

Returns:
dict: The bucket metadata response from the Google Cloud Storage API.
"""

# 1. Instantiate the custom supplier.
custom_supplier = CustomAwsSupplier()

# 2. Instantiate the AWS Credentials object.
credentials = aws.Credentials(
audience=audience,
subject_token_type="urn:ietf:params:aws:token-type:aws4_request",
service_account_impersonation_url=impersonation_url,
aws_security_credentials_supplier=custom_supplier,
scopes=["https://www.googleapis.com/auth/devstorage.read_write"],
)

# 3. Create an authenticated session.
authed_session = auth_requests.AuthorizedSession(credentials)

# 4. Make the API Request.
bucket_url = f"https://storage.googleapis.com/storage/v1/b/{bucket_name}"

response = authed_session.get(bucket_url)
response.raise_for_status()

return response.json()


# [END auth_custom_credential_supplier_aws]


def main():
"""Main function to parse env vars and call the authenticator."""
gcp_audience = os.getenv("GCP_WORKLOAD_AUDIENCE")
sa_impersonation_url = os.getenv("GCP_SERVICE_ACCOUNT_IMPERSONATION_URL")
gcs_bucket_name = os.getenv("GCS_BUCKET_NAME")

if not all([gcp_audience, gcs_bucket_name]):
print(
"Required environment variables missing: GCP_WORKLOAD_AUDIENCE, GCS_BUCKET_NAME"
)
return

try:
print(f"Retrieving metadata for bucket: {gcs_bucket_name}...")
metadata = authenticate_with_aws_credentials(
gcs_bucket_name, gcp_audience, sa_impersonation_url
)
print("--- SUCCESS! ---")
print(json.dumps(metadata, indent=2))
except Exception as e:
print(f"Authentication or Request failed: {e}")
**Reviewer (Contributor), on lines 134 to 150, severity medium:** There are a couple of improvements we can make here:

1. Error messages should be printed to standard error (`sys.stderr`) instead of standard output. This is a standard practice for separating normal output from error diagnostics.
2. Catching a generic `Exception` is too broad and can hide bugs by catching unexpected errors (like `KeyboardInterrupt`). It's better to catch more specific exceptions that are expected from the authentication and request process, such as `google.auth.exceptions.GoogleAuthError` and `requests.exceptions.HTTPError`.

Suggested change:

```python
    if not all([gcp_audience, gcs_bucket_name]):
        print(
            "Required environment variables missing: GCP_WORKLOAD_AUDIENCE, GCS_BUCKET_NAME",
            file=sys.stderr,
        )
        return

    try:
        print(f"Retrieving metadata for bucket: {gcs_bucket_name}...")
        metadata = authenticate_with_aws_credentials(
            gcs_bucket_name, gcp_audience, sa_impersonation_url
        )
        print("--- SUCCESS! ---")
        print(json.dumps(metadata, indent=2))
    except (exceptions.GoogleAuthError, auth_requests.requests.exceptions.HTTPError) as e:
        print(f"Authentication or Request failed: {e}", file=sys.stderr)
```



if __name__ == "__main__":
main()