cloud

S3 Files is a newly released S3 feature that provides a file system on top of your bucket, powered by EFS. For reads, objects smaller than 128KB (the default threshold) are loaded into EFS; for objects equal to or larger than the threshold, S3 Files prefers streaming the data directly from the bucket. Writes go to EFS first, and any changes are synced automatically to the underlying S3 bucket.
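A rough sketch of that read-path decision, as I understand it from the announcement. The 128KB value is the stated default; the function name and return values are my own illustration, not anything from the actual implementation:

```python
# Illustrative model of the S3 Files read path described above.
# The threshold matches the stated default; names here are hypothetical.
CACHE_THRESHOLD_BYTES = 128 * 1024  # 128KB default

def read_strategy(object_size_bytes: int) -> str:
    """Decide how a read is served: cache small objects, stream large ones."""
    if object_size_bytes < CACHE_THRESHOLD_BYTES:
        return "load-into-efs"   # below the threshold: loaded into EFS
    return "stream-from-s3"      # at or above: streamed straight from the bucket

print(read_strategy(4 * 1024))    # small object -> load-into-efs
print(read_strategy(128 * 1024))  # boundary case -> stream-from-s3
```

Note the boundary: the announcement says "equal to or larger than the threshold" streams, so a 128KB object is not cached.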

No matter how you dress up object storage to treat it like a file system, it can come back to bite you if you're not careful. Here's how renames and moves are handled:

Amazon S3 uses a flat storage structure where objects are identified by their key names. While S3 Files lets you organize your data in directories, S3 has no native concept of directories. What appears as a directory in your file system is a common prefix shared by the keys of the objects within the S3 bucket. Additionally, S3 objects are immutable and do not support atomic renames. As a result, when you rename or move a file, S3 Files must write the data to a new object with the updated key and delete the original. When you rename or move a directory, S3 Files must repeat this process for every object that shares that prefix. Therefore, when you rename or move a directory containing tens of millions of files, your S3 request costs and the synchronization time increase significantly.
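To see why directory renames get expensive, here's a minimal simulation of a flat keyspace, with a plain dict standing in for a bucket. Renaming a "directory" means rewriting every key under the prefix, one copy plus one delete each, so the cost scales linearly with the object count. This illustrates the mechanics only; it is not how S3 Files is implemented:

```python
# A dict standing in for a bucket's flat keyspace: key -> object bytes.
bucket = {
    "logs/2024/a.txt": b"alpha",
    "logs/2024/b.txt": b"beta",
    "reports/q1.txt": b"quarterly",
}

def rename_prefix(bucket: dict, old: str, new: str) -> int:
    """Rename a 'directory' by rewriting every key sharing the prefix.

    Objects are immutable, so each rename is a copy to the new key
    followed by a delete of the old one. Returns the number of objects
    touched -- request cost grows with that count.
    """
    matches = [k for k in bucket if k.startswith(old)]
    for key in matches:
        bucket[new + key[len(old):]] = bucket.pop(key)  # copy + delete
    return len(matches)

touched = rename_prefix(bucket, "logs/", "archive/")
print(touched)         # 2 objects rewritten
print(sorted(bucket))  # ['archive/2024/a.txt', 'archive/2024/b.txt', 'reports/q1.txt']
```

With two objects under `logs/` that's two rewrites; with tens of millions it's tens of millions of copy-and-delete request pairs, which is exactly where the cost and sync time blow up.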

Read from link

In an unprecedented move, AWS has made account names more useful: they're now displayed in the top navigation bar. I had to enable functional cookies to make this work. If you don't see it yet, click Cookie preferences in the footer, allow functional cookies, then hard refresh.

The account name comes from the name given when you created the account; it can be updated under Account settings.

Also, I find it hilarious that they had to note in the announcement that this is free:

The account name display feature is available at no additional cost in all public AWS Regions.

Read from link

Edge cases make for interesting TILs. Here's one on attempting to create an S3 bucket that already exists in your account.

The bucket you tried to create already exists, and you own it. Amazon S3 returns this error in all AWS Regions except in the North Virginia Region. For legacy compatibility, if you re-create an existing bucket that you already own in the North Virginia Region, Amazon S3 returns 200 OK and resets the bucket access control lists (ACLs).

Discovered via @whitequark on Mastodon.

Read from link

When requesting actions on AWS accounts or resources, AWS needs to verify if the principal (user, role, application, etc.) making the request is allowed to carry out the action. For single accounts with simple workloads, this can be done easily by setting an identity-based policy on the user. However, as needs grow and additional accounts are added, other factors come into play, such as resource-based policies, cross-account roles, service control policies, and more.
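Stripped of all the policy types, the core of that evaluation follows a well-known rule: an explicit deny in any applicable policy always wins, otherwise at least one explicit allow is required, and everything else falls through to an implicit deny. A toy model of just that rule (deliberately simplified: it ignores how the different policy types, permissions boundaries, and SCPs interact):

```python
def evaluate(effects: list[str]) -> bool:
    """Toy model of IAM policy evaluation over matching statements.

    effects: the Effect of every policy statement that matches the
    request, e.g. ["Allow", "Deny"]. Simplified: real evaluation also
    weighs policy types, boundaries, SCPs, and session policies.
    """
    if "Deny" in effects:      # an explicit deny always wins
        return False
    return "Allow" in effects  # need at least one allow; default is implicit deny

print(evaluate(["Allow"]))          # True
print(evaluate(["Allow", "Deny"]))  # False -- explicit deny overrides allow
print(evaluate([]))                 # False -- no matching statement: implicit deny
```

Most of the flow chart mentioned below is about figuring out *which* statements match across accounts and policy types; once you have them, this is the ordering they collapse to.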

Whenever I encounter potential access-related problems, I refer to this flow chart for troubleshooting. Given the number of times I end up searching for this, I believe it might be helpful to share it.

Read from link