LiveView 13 - Chapter 5: AWS S3
Published on: 2025-12-18
Tags:
Elixir, Blog, LiveView, Ecto, HTML/CSS, Phoenix, S3, AWS, IAM
Finally, if you have an Amazon S3 account, upload the files there instead of saving them locally.
There are a decent number of steps here, so I'll give you a list of things to do and then we can work through them one by one.
Product Images → S3 (Phoenix) — Work List
1. Product Image Model Decisions
Decide: one image per product or multiple
Exactly one image per product
Decide: store full URL vs store S3 object key
Store a URL (not a key or metadata blob)
Decide: overwrite image on update or version it
Overwrite the existing image when a new one is uploaded
2. S3 Bucket (Product Images Only)
Create one standard S3 bucket
pento-images
Choose region
US East (Ohio) us-east-2
Block public access
Enable bucket owner enforced ownership
Decide bucket naming per environment (or prefixes)
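If you would rather script this than click through the console, a rough CLI equivalent of the settings above (assuming the bucket name and region chosen here, and the AWS CLI we install in step 3) looks something like this:
# create the bucket in us-east-2 (Ohio)
aws s3api create-bucket --bucket pento-images --region us-east-2 \
  --create-bucket-configuration LocationConstraint=us-east-2
# block all public access
aws s3api put-public-access-block --bucket pento-images \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
# enforce bucket-owner ownership (disables ACLs)
aws s3api put-bucket-ownership-controls --bucket pento-images \
  --ownership-controls 'Rules=[{ObjectOwnership=BucketOwnerEnforced}]'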
3. IAM Access for the Phoenix App
You need to have a user and a policy for this.
Create IAM user or role
Allow only:
upload product images
read product images
delete product images
Store credentials as env vars
The policy needs to grant s3:PutObject, s3:GetObject, s3:DeleteObject, and s3:ListBucket, scoped to the bucket.
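For reference, a minimal policy document along those lines (a sketch, assuming the bucket is named pento-images) looks roughly like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::pento-images/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::pento-images"
    }
  ]
}
Note that ListBucket applies to the bucket itself while the object actions apply to the keys inside it, which is why there are two statements.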
You will need to create a user for this
You will need to attach the policy to the user
Once this is done, test the policy by creating an access key from the user's page. When AWS asks what the key is for, pick the CLI option.
Once that is done, test it from the command line. Install the AWS CLI first:
sudo apt update
sudo apt install -y unzip curl
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws --version
# Then configure a named profile:
aws configure --profile pento_admin
# AWS Access Key ID → from pento_admin
# AWS Secret Access Key → from pento_admin
# Default region → the region your bucket is in (e.g., us-east-2)
# Default output format → json (or leave blank)
echo "test file" > test.txt
aws s3 cp test.txt s3://your-bucket-name/products/test.txt --profile pento_admin
aws s3 ls s3://your-bucket-name/products/ --profile pento_admin
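If the copy and list both work, delete the test object again, which also confirms the DeleteObject permission:
aws s3 rm s3://your-bucket-name/products/test.txt --profile pento_admin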
4. Object Structure Convention
Define product image key format
Use products/:product_id/image.ext
Decide how filename collisions are handled
Always overwrite; keep only the latest version
Decide where replacement images go
Delete the old image from S3 when uploading a new one
5. Phoenix App Configuration
Add S3 as an upload backend
You will need to add the dependencies for the backend
Configure bucket name and region
You will need to add the bucket to the config for whichever environment it should run in. You will also need to create a new access key for the app to use locally.
config/dev.exs (for now)
config :pento, Pento.Uploads,
  bucket: "pento-images",
  region: "us-east-2",
  access_key_id: System.get_env("AWS_ACCESS_KEY_ID"),
  secret_access_key: System.get_env("AWS_SECRET_ACCESS_KEY")
To keep things simple, store those values in a .env file so you can just pull them from there. Also make sure the .env file is in your .gitignore, as you don't want those keys on the web.
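A minimal .env for this could look like the following (placeholder values, and the file itself listed in .gitignore):
# .env - never commit this file
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_REGION=us-east-2
Run source .env in your shell before mix phx.server so System.get_env/1 can pick the values up.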
Ensure dev/stage/prod are isolated
Dev: pento-images-dev or prefix dev/products/:id/image.ext
Prod: pento-images or prefix products/:id/image.ext
6. Product Schema Changes
Before we start this, you will need the dependencies for talking to AWS. Add these to the deps list in mix.exs:
{:ex_aws, "~> 2.3"},
{:ex_aws_s3, "~> 2.3"},
{:hackney, "~> 1.19"},
{:sweet_xml, "~> 0.7"}
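After adding them, fetch the new dependencies:
mix deps.get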
Then add these lines to config/dev.exs:
# Configure pento uploads
# See pento/lib/pento/uploads.ex for more information.
# It's recommended to use environment variables for sensitive information.
config :pento, Pento.Uploads,
  bucket: "pento-images",
  region: "us-east-2",
  access_key_id: System.get_env("AWS_ACCESS_KEY_ID"),
  secret_access_key: System.get_env("AWS_SECRET_ACCESS_KEY")
# Configure ExAws
# See https://hexdocs.pm/ex_aws/ExAws.html for more information.
# Configure via environment variables or hardcoding below.
# It's recommended to use environment variables for sensitive information.
config :ex_aws,
  access_key_id: System.get_env("AWS_ACCESS_KEY_ID"),
  secret_access_key: System.get_env("AWS_SECRET_ACCESS_KEY"),
  region: System.get_env("AWS_REGION")
Add field(s) for product image reference
This adds a new column to the products table to store the image_url, so start with a migration (a sketch of the migration body is shown at the end of this step):
mix ecto.gen.migration add_image_url_to_products
Decide nullable vs required
Nullable is fine; a product without an image simply has a nil image_url
Decide behavior when product is deleted
remove the file from the bucket
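The generated migration body isn't shown above, so here is a minimal sketch of what goes in it (the module name will match whatever the generator created for you):
defmodule Pento.Repo.Migrations.AddImageUrlToProducts do
  use Ecto.Migration

  def change do
    alter table(:products) do
      # nullable string column holding the S3 key for the product image
      add :image_url, :string
    end
  end
end
You will also want field :image_url, :string in the Product schema and :image_url in the changeset's cast list, otherwise the Catalog.update_product call in the next step has nothing to write to. Run mix ecto.migrate once that is in place.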
7. Product Create / Edit Flow
Accept image uploads in product forms
This should already be done; we just need to change the upload destination to the S3 bucket.
defp save_product(socket, :new, product_params) do
  case Catalog.create_product(socket.assigns.current_scope, product_params) do
    {:ok, product} ->
      with [image_url | _] <-
             consume_uploaded_entries(socket, :image, fn meta, entry ->
               upload_s3_file(meta, entry, product.id)
             end),
           {:ok, _product} <-
             Catalog.update_product(socket.assigns.current_scope, product, %{
               "image_url" => image_url
             }) do
        {:noreply,
         socket
         |> put_flash(:info, "Product created successfully")
         |> push_navigate(
           to: return_path(socket.assigns.current_scope, socket.assigns.return_to, product)
         )}
      else
        {:error, %Ecto.Changeset{} = changeset} ->
          {:noreply, assign_form(socket, changeset)}

        [] ->
          # no image uploaded, still proceed
          {:noreply,
           socket
           |> put_flash(:info, "Product created (no image uploaded)")
           |> push_navigate(
             to: return_path(socket.assigns.current_scope, socket.assigns.return_to, product)
           )}

        _other ->
          {:noreply,
           socket
           |> put_flash(:error, "An unexpected error occurred")}
      end

    {:error, %Ecto.Changeset{} = changeset} ->
      {:noreply, assign_form(socket, changeset)}
  end
end
defp params_with_image(socket, params) do
  if socket.assigns.product.id do
    case consume_uploaded_entries(socket, :image, fn meta, entry ->
           upload_s3_file(meta, entry, socket.assigns.product.id)
         end) do
      # a new image was uploaded; persist its S3 key
      [key | _] -> Map.put(params, "image_url", key)
      # nothing uploaded; keep the existing image_url
      [] -> params
    end
  else
    params
  end
end
defp upload_s3_file(%{path: path}, entry, product_id) do
  # keep the original extension from the uploaded file's client name
  ext = Path.extname(entry.client_name)
  key = "#{s3_prefix()}/#{product_id}/image#{ext}"

  {:ok, _resp} =
    ExAws.S3.put_object("pento-images", key, File.read!(path))
    |> ExAws.request()

  {:ok, key}
end
defp s3_prefix do
  case Mix.env() do
    :dev -> "dev/products"
    :test -> "test/products"
    :prod -> "products"
  end
end
These functions make sure that once a file has been uploaded to the server, it is then pushed on to the S3 bucket. Try to test it now.
Upload image to S3 on create/update
This is all handled above
Persist the S3 reference on success
This is all handled above
Roll back cleanly if upload fails
That already happens cleanly: if the upload fails we still have the product, the image just doesn't get saved.
8. Image Access in the UI
Render product images in templates / LiveView and handle missing images gracefully
def image_url(product) do
  case product.image_url do
    nil ->
      nil

    url ->
      {:ok, url} =
        ExAws.S3.presigned_url(
          ExAws.Config.new(:s3),
          :get,
          "pento-images",
          url,
          expires_in: 3600
        )

      url
  end
end
<div>
<img alt="product image" width="200" src={image_url(@product)} />
</div>
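If you would rather not render an img tag with an empty src while a product has no image yet, one option (using HEEx's :if attribute; the placeholder text is just an example) is:
<div>
  <img :if={@product.image_url} alt="product image" width="200" src={image_url(@product)} />
  <p :if={is_nil(@product.image_url)}>No image uploaded yet</p>
</div>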
9. Image Replacement & Deletion
Delete old image when a new one is uploaded
Because each product's image is always written to the same key, a new upload simply overwrites the old object, so there is never more than one image per product (as long as the file extension stays the same).
Delete image from S3 when product is deleted and Prevent orphaned S3 objects
def handle_event("delete", %{"id" => id}, socket) do
  product = Catalog.get_product!(socket.assigns.current_scope, id)

  # Delete the S3 image first
  case delete_s3_image(product) do
    :ok ->
      {:ok, _} = Catalog.delete_product(socket.assigns.current_scope, product)
      {:noreply, stream_delete(socket, :products, product)}

    {:error, reason} ->
      # Optionally handle failure, log, or show error to user
      {:noreply, socket |> put_flash(:error, "Failed to delete image: #{inspect(reason)}")}
  end
end

defp delete_s3_image(%{image_url: nil}), do: :ok

defp delete_s3_image(%{image_url: key}) do
  case ExAws.S3.delete_object("pento-images", key) |> ExAws.request() do
    {:ok, _resp} -> :ok
    {:error, reason} -> {:error, reason}
  end
end
10. Validation & Constraints
Enforce image size limits, restrict allowed image types, and reject invalid uploads early
def mount(params, _session, socket) do
  {:ok,
   socket
   |> assign(:return_to, return_to(params["return_to"]))
   |> allow_upload(:image,
     accept: ~w(.jpg .jpeg .png .gif),
     max_entries: 1,
     max_file_size: 9_000_000,
     auto_upload: true
   )
   |> apply_action(socket.assigns.live_action, params)}
end
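With those constraints in place, LiveView reports anything over the size limit or outside the accepted types as an upload error instead of letting it through. You can surface those errors in the form template with something like this (error_to_string/1 is a small helper you define yourself):
<%= for entry <- @uploads.image.entries do %>
  <%= for err <- upload_errors(@uploads.image, entry) do %>
    <p class="alert alert-danger"><%= error_to_string(err) %></p>
  <% end %>
<% end %>
defp error_to_string(:too_large), do: "File is too large"
defp error_to_string(:not_accepted), do: "That file type is not accepted"
defp error_to_string(:too_many_files), do: "Too many files selected"
The first snippet goes in the form template, the helper in the LiveView module.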