K8S Executor - An error occurred (AccessDenied) when calling the AssumeRoleWithWebIdentity operation: Not authorized to perform sts:AssumeRoleWithWebIdentity
#5080
We are running Mage AI on Amazon EKS and have configured the Kubernetes executor so that each pipeline runs as a pod/job. Our Python pipeline requires access to S3, for which we have created a specific Kubernetes service account.
We have the following configuration in our code: DEFAULT_SERVICE_ACCOUNT_NAME = os.getenv('KUBE_SERVICE_ACCOUNT_NAME', 'mage-user')
However, the pods/jobs still run under the "default" service account. We also tried annotating the "default" service account so it could assume a role with S3 permissions, but we still encounter the following error:
An error occurred (AccessDenied) when calling the AssumeRoleWithWebIdentity operation: Not authorized to perform sts:AssumeRoleWithWebIdentity
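For context, this sts:AssumeRoleWithWebIdentity denial is typically a trust-policy mismatch: with IRSA, the role's trust policy pins the OIDC `sub` claim to a specific `system:serviceaccount:<namespace>:<name>`, so a pod running as "default" presents `...:default` and fails the condition. A sketch of a trust policy scoped to the mage-user service account (all placeholders are ours, not values from this issue):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>:sub": "system:serviceaccount:<NAMESPACE>:mage-user"
        }
      }
    }
  ]
}
```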
Could anyone provide guidance or examples on how to correctly configure pipelines/jobs in the Kubernetes executor to use a specific service account?
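One likely culprit: Mage's Kubernetes executor creates the job pod before any pipeline code runs, so reading KUBE_SERVICE_ACCOUNT_NAME inside a block cannot change the pod's identity. As far as I can tell from Mage's docs, the service account is set via the executor config in the project's metadata.yaml; something like the following (field names per Mage's k8s executor config, worth verifying against your version) should make job pods run as mage-user:

```yaml
k8s_executor_config:
  namespace: default                 # namespace the job pods run in
  service_account_name: mage-user    # service account annotated for IRSA
  resource_limits:
    cpu: 1000m
    memory: 2048Mi
```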
Snippet of our pipeline:
import os

import boto3
import pandas as pd
from clickhouse_driver import Client
from dotenv import load_dotenv
from google.cloud import storage
from loguru import logger
from sqlalchemy import create_engine

load_dotenv()

if 'data_loader' not in globals():
    from mage_ai.data_preparation.decorators import data_loader
if 'test' not in globals():
    from mage_ai.data_preparation.decorators import test


@data_loader
def load_data_from_clickhouse(*args, **kwargs):
    print('################### Configurations ###############################')
    # Configuration
    bucket = os.getenv("S3_FLAG_BUCKET")
    platform = os.getenv('CLOUD_PLATFORM')
    database = os.getenv("CLICKHOUSE_DATABASE")
    DEFAULT_SERVICE_ACCOUNT_NAME = os.getenv('KUBE_SERVICE_ACCOUNT_NAME', 'mage-user')
    ...

    def get_from_storage(location, default=None):
        client = get_storage_client()
        try:
            if platform == 'GCP':
                bucket_obj = client.bucket(gcp_bucket)
                blob = bucket_obj.blob(location)
                contents = blob.download_as_string()
                return contents.decode("utf-8")
            else:
                obj = boto3.resource('s3').Object(bucket, location)
                body = obj.get()['Body'].read()
                return body.decode("utf-8")
        except Exception as e:
            logger.error(f"Error accessing storage: {e}")
            return default
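As a debugging aid (this helper is our own, not part of the pipeline): when IRSA is wired up correctly, EKS injects AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE into the pod, and boto3 picks them up automatically. If either is missing, the job pod is not running under the annotated service account:

```python
import os

# IRSA injects these env vars into pods whose service account carries the
# eks.amazonaws.com/role-arn annotation. boto3's credential chain uses them
# to call sts:AssumeRoleWithWebIdentity.
IRSA_ENV_VARS = ('AWS_ROLE_ARN', 'AWS_WEB_IDENTITY_TOKEN_FILE')


def missing_irsa_vars(environ=None):
    """Return the IRSA env vars that are absent from the environment."""
    if environ is None:
        environ = os.environ
    return [name for name in IRSA_ENV_VARS if name not in environ]
```

Logging `missing_irsa_vars()` at the top of the block (or checking `boto3.client('sts').get_caller_identity()['Arn']`) quickly shows whether the job pod assumed the intended role.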