Reproduced from serverfault.com.

Passing AWS role to the application that uses default boto3 configs

Published on 2020-11-24 00:56:04

I have an AWS setup that requires me to assume a role and obtain the corresponding credentials in order to write to S3. For example, to write with the AWS CLI, I need to pass the --profile readwrite flag. If I write the code myself with boto3, I would assume the role via STS, get credentials, and create a new session.
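For reference, a minimal sketch of that manual flow might look like this (the role ARN and session name below are made-up placeholders for whatever the readwrite profile maps to):

import boto3

sts = boto3.client('sts')
creds = sts.assume_role(
    RoleArn='arn:aws:iam::123456789012:role/readwrite',  # hypothetical role ARN
    RoleSessionName='manual-readwrite',
)['Credentials']

session = boto3.Session(
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)
s3 = session.resource('s3')  # this resource uses the assumed-role credentials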

However, there are a number of applications and packages that rely on boto3's default configuration; for example, internal code runs like this:

s3 = boto3.resource('s3')
result_s3 = s3.Object(bucket, s3_object_key)
result_s3.put(
    Body=value.encode(content_encoding),
    ContentEncoding=content_encoding,
    ContentType=content_type,
)

According to the documentation, boto3 can be pointed at a default profile via (among others) the AWS_PROFILE environment variable, and this clearly "works" in the sense that boto3.Session().profile_name matches the variable - but the applications still won't write to S3.
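Concretely, the check described above amounts to something like the following sketch (the profile name readwrite is assumed):

import os
import boto3

os.environ['AWS_PROFILE'] = 'readwrite'  # must be set before any session/client is created
print(boto3.Session().profile_name)      # prints 'readwrite', yet the applications' writes still fail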

What would be the cleanest/most correct way to set this up properly? I tried to pull credentials from STS and write them into environment variables such as AWS_SECRET_TOKEN, but that didn't work for me...
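For what it's worth, the environment variable names boto3 actually reads for temporary credentials are AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN; a sketch of that route (reusing the creds dict from the assume_role snippet above) would be:

import os

os.environ['AWS_ACCESS_KEY_ID'] = creds['AccessKeyId']
os.environ['AWS_SECRET_ACCESS_KEY'] = creds['SecretAccessKey']
os.environ['AWS_SESSION_TOKEN'] = creds['SessionToken']

Note that sessions or clients created before these variables are set will not pick them up, which may explain why this route appears not to work.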

Questioner: Philipp_Kats
Answer by Philipp_Kats (2020-12-02 06:32:40):

I think the correct answer to my question is the one shared by Nathan Williams in the comments.

In my specific case, given that I had to initiate the code from Python and was a bit worried that changing AWS settings might spill over into other operations, I relied on the fact that boto3 has a DEFAULT_SESSION singleton that is used on every call, and simply overwrote it with a session that had assumed the proper role:

import boto3
from airflow.providers.amazon.aws.hooks.s3 import S3Hook  # import path varies by Airflow version
hook = S3Hook(aws_conn_id=aws_conn_id)      # aws_conn_id names the Airflow connection with the read/write role
boto3.DEFAULT_SESSION = hook.get_session()  # overwrite boto3's default-session singleton with the hook's session

(Here, S3Hook is Airflow's S3 handling hook.) After that, everything in the same runtime worked perfectly.
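The same idea should work outside Airflow as well; a rough sketch, reusing a session built from an sts.assume_role call as in the earlier snippet:

import boto3

boto3.DEFAULT_SESSION = session  # session already holds the assumed-role credentials
s3 = boto3.resource('s3')        # library code using the default session now writes with the role

boto3 also exposes setup_default_session() (e.g. boto3.setup_default_session(profile_name='readwrite')), which rebuilds the default session in a slightly more official way.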