Amazon S3 (Simple Storage Service) is a highly scalable cloud storage solution offering virtually unlimited space with excellent reliability. S3 is designed for 99.999999999% durability and 99.99% availability of objects over a given year.
Integrating S3 with Laravel 10 is straightforward thanks to Laravel’s built-in filesystem abstraction (powered by Flysystem v3). This tutorial will walk you through uploading files to S3 using modern Laravel practices, including using Laravel 10+, the latest AWS SDK for PHP (v3), and leveraging pre-signed URLs for direct, secure uploads. We’ll also touch on tracking upload progress with tools like Livewire/Alpine.js and discuss S3 file access control (public vs private).
Every step uses environment configuration (no hard-coded secrets) and follows current best practices. Let’s get started!
What is AWS S3?
AWS S3 is a virtually unlimited object storage service with some impressive advantages. You never have to worry about adding more storage volumes to your infrastructure: Amazon handles scaling behind the scenes, providing as much capacity as required in a way that is transparent and practically imperceptible to both you and your end users.
Amazon S3 Storage has a lot of good reasons to opt for its use, but this time we will focus on 3:
- 99.99% availability (and 99.999999999% durability).
- A fully configurable permission system for file access, managed from the AWS console.
- Support for objects from 0 bytes up to 5 TB (a single PUT is limited to 5 GB; larger objects use multipart upload).
How to Integrate AWS S3 with Laravel?
Nowadays, Laravel provides an easy way to upload files to Amazon S3. The process is really simple because Laravel ships with the S3 configuration out of the box. To integrate it successfully, all we need are AWS credentials and access to the console to create a new S3 bucket. Easy, right?
Next, we will create a small application to join all these concepts and see them in action.
Step 1: Set Up Laravel 10 and AWS SDK
1. Install Laravel 10: Begin by creating a new Laravel project (if you don’t already have one). In your terminal, run:
composer create-project laravel/laravel:^10.0 laravel-s3-upload
This will create a fresh Laravel 10 installation. Ensure you have PHP 8.1+ since Laravel 10 requires it.
2. Require the S3 filesystem driver: Laravel uses Flysystem v3 under the hood for storage. To enable S3 support, add the AWS S3 Flysystem adapter package (which includes the AWS SDK v3). Run:
composer require league/flysystem-aws-s3-v3 "^3.0" --with-all-dependencies
Laravel’s documentation notes this as a prerequisite for using the S3 driver. This package will pull in the AWS SDK for PHP v3.
3. (Optional) AWS SDK or Service Provider: The Flysystem adapter is usually enough for typical S3 operations via Laravel’s Storage facade. If you plan to use the AWS SDK directly (for advanced use cases like presigned URLs), you already have it from the above step. You do not need to manually register any AWS service providers in Laravel 10; Laravel auto-discovers package integrations. (In older versions circa 2018, you had to register AwsServiceProvider and a facade in config/app.php, but that’s no longer needed for the filesystem driver.)
Tip: Make sure your development environment has OpenSSL and the required PHP extensions for making HTTPS requests, since AWS SDK uses them. Laravel Sail or Homestead environments come pre-configured with these.
Step 2: Create an S3 Bucket and IAM User
Before writing any Laravel code, set up your AWS S3 bucket and permissions:
- Create a new S3 bucket: Log in to the AWS Management Console and open the S3 service. Click “Create bucket”, provide a unique bucket name (bucket names are globally unique), and select your AWS region. For example, use a name like my-laravel-app-files and the region US East (N. Virginia). Leave other settings at defaults or configure as needed, then create the bucket. (If you already have a bucket to use, you can skip this step.)
- Set up an IAM user with S3 access: It’s a best practice to avoid using root AWS credentials. Go to the AWS IAM service and create a new user (e.g., “laravel-s3-uploader”). Attach a policy that grants access to your bucket. For simplicity, you can use the AmazonS3FullAccess managed policy for this user, or create a custom policy restricting access to only your specific bucket, e.g., one that allows s3:PutObject, s3:GetObject, etc., on your bucket. (The 2018 approach of a broad bucket policy open to “*” is not recommended today for security reasons.)
- Note the access keys: Once the IAM user is created, download or copy the Access Key ID and Secret Access Key for that user. You’ll need these for Laravel’s configuration. (AWS only shows the secret key once, at creation time, so keep it safe!)
- (Optional) Bucket CORS configuration: If you plan to upload directly from a client browser to S3 using presigned URLs (as we will cover in Step 5), configure your bucket’s CORS policy. In the S3 console, under your bucket’s Permissions -> Cross-origin resource sharing (CORS), add a rule to allow the required HTTP methods from your front-end’s origin (the console now takes CORS rules as JSON). For example:
[
    {
        "AllowedOrigins": ["*"],
        "AllowedMethods": ["GET", "PUT", "POST", "DELETE"],
        "AllowedHeaders": ["*"]
    }
]
This configuration (which you should restrict to your domain instead of *) ensures the browser can upload (PUT/POST) and fetch (GET) from S3 directly.
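Going back to the custom-policy option for the IAM user, here is a sketch of a minimal policy scoped to the example bucket above. This is an illustration, not an official template: adjust the bucket name and the action list to your needs.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::my-laravel-app-files/*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::my-laravel-app-files"
        }
    ]
}
```

Note the two resource ARNs: object-level actions (PutObject, GetObject) apply to the objects (`/*`), while ListBucket applies to the bucket itself.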
Step 3: Configure Laravel with AWS Credentials (.env)
Laravel’s filesystem config is located in config/filesystems.php, and by default it already includes an s3 disk configuration stub. We will use it by setting the appropriate environment variables in our .env file.
Open your project’s .env file and add/update the following lines with your AWS credentials and bucket info (replace placeholder values with your actual keys and names):
AWS_ACCESS_KEY_ID=your-iam-access-key-id
AWS_SECRET_ACCESS_KEY=your-iam-secret-key
AWS_DEFAULT_REGION=your-bucket-region # e.g., us-east-1
AWS_BUCKET=your-bucket-name # e.g., my-laravel-app-files
AWS_USE_PATH_STYLE_ENDPOINT=false # false for AWS S3, true for some S3-compatible services
Laravel 10+ will automatically read these values. The config/filesystems.php file uses these env vars for the s3 disk configuration. For example, it sets 'key' => env('AWS_ACCESS_KEY_ID') and 'bucket' => env('AWS_BUCKET'), etc., linking your .env to the S3 client settings.
Do not hard-code keys or secrets in the config or code; keeping them in .env is important for security and flexibility.
Note: If you use a custom S3-compatible service (like DigitalOcean Spaces, MinIO, etc.), you might also set AWS_ENDPOINT to the service’s endpoint URL and enable path-style addressing if needed. Otherwise, for Amazon S3, the default endpoints will be used.
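For instance, a local MinIO setup might use something like the following (the endpoint and flag values are placeholders for illustration):

```ini
# MinIO (and some other S3-compatible services) require path-style URLs
AWS_ENDPOINT=http://127.0.0.1:9000
AWS_USE_PATH_STYLE_ENDPOINT=true
```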
Step 4: Building the File Upload Feature
Now, let’s write the Laravel code to upload files from our application to the S3 bucket using Laravel’s Storage API:
1. Create a File Upload Form (Blade Template): In your Laravel app, create a simple form for uploading a file. For example, in resources/views/upload.blade.php:
<form action="{{ route('upload') }}" method="POST" enctype="multipart/form-data">
@csrf
<label>Select file to upload:</label>
<input type="file" name="file" required>
<button type="submit">Upload to S3</button>
</form>
This form posts to a route named upload, which we will define next. It includes enctype="multipart/form-data" so that file data can be sent.
2. Define a Route and Controller: In routes/web.php, add a route for the form display and one for handling the form submission, for example:
use App\Http\Controllers\S3UploadController;
Route::get('/upload', [S3UploadController::class, 'showForm']);
Route::post('/upload', [S3UploadController::class, 'uploadFile'])->name('upload');
Next, create the controller app/Http/Controllers/S3UploadController.php:
namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Storage;

class S3UploadController extends Controller
{
    public function showForm()
    {
        return view('upload');
    }

    public function uploadFile(Request $request)
    {
        // Validate that a file was uploaded (and optionally validate size/type)
        $request->validate([
            'file' => 'required|file|max:10240', // e.g., limit 10MB
        ]);

        $file = $request->file('file'); // Illuminate\Http\UploadedFile instance

        // Use Laravel's Storage facade to put the file on the S3 disk
        $path = Storage::disk('s3')->put('uploads', $file);

        // Optionally, make the file publicly accessible (see Step 7 for details)
        // Storage::disk('s3')->setVisibility($path, 'public');

        // Get the URL of the uploaded file (if it's public)
        $url = Storage::disk('s3')->url($path);

        return back()->with('status', "File uploaded to S3 at path: $path");
    }
}
Let’s break down what happens in uploadFile:
- We validate the incoming request to ensure a file is present. You can adjust rules (like MIME types, etc.) as needed.
- We retrieve the uploaded file via $request->file('file').
- We call Storage::disk('s3')->put('uploads', $file). Laravel takes care of reading the file and streaming it to S3 using the AWS SDK. The first argument 'uploads' is the directory in our bucket where the file will be placed. Laravel will generate a filename automatically if a file object is given. The $path returned is the full path of the file in the bucket (e.g., uploads/filename.jpg). This one-liner replaces the older approach of manually calling AWS SDK methods or file_get_contents; it’s concise and efficient.
- By default, files on S3 will be stored as private (not publicly accessible). If you need the file to be publicly readable (for example, images that will be accessed via a URL), you can either configure the disk’s default visibility to public in config/filesystems.php or call setVisibility as shown (or use the storePublicly method as shown later). We’ll discuss visibility in Step 7.
- We retrieve the file’s URL via Storage::disk('s3')->url($path). This will give the public URL if the file’s visibility is public or if you’ve set up a base AWS_URL (for custom domains/CDN). If the file is private, you won’t be able to access it via this URL without a signed request.
- Finally, we redirect back (perhaps to the form page) with a status message. You could also pass the $url to the view to display or link the file.
At this point, you should be able to upload a file through the form, and it will be saved to your S3 bucket under the uploads/ prefix. You can verify by checking the S3 console for the new object.
Laravel 10 & Flysystem v3 advantages: The code above uses the latest Flysystem v3 adapter. Under the hood, Laravel streams the file to S3, which is memory-efficient for large files. This means even if you upload a multi-megabyte file, Laravel won’t load the entire file into RAM at once – it reads and sends chunks, which helps avoid timeouts and memory issues.
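To illustrate the streaming point, you can also hand put a stream resource yourself when the file already lives on local disk (the local path below is hypothetical):

```php
use Illuminate\Support\Facades\Storage;

// Stream a large local file to S3 without loading it fully into memory.
$stream = fopen('/tmp/large-backup.zip', 'r'); // hypothetical local file

Storage::disk('s3')->put('uploads/large-backup.zip', $stream);

// Flysystem may close the stream for you; close defensively if still open.
if (is_resource($stream)) {
    fclose($stream);
}
```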
Step 5: Using Pre-Signed URLs for Direct-to-S3 Uploads
Uploading through your Laravel server (as in Step 4) works for many cases, but for very large files or high-volume uploads, you might want to have users upload directly to S3 to offload your application server. This is where pre-signed URLs come in.
A pre-signed URL is a temporary URL that grants permission to upload (or download) a specific file to S3 without needing AWS credentials on the client side. Essentially, your Laravel app can generate a URL that the client’s browser can use to PUT a file directly to S3, bypassing your Laravel server. This greatly reduces load on your server (since the file data goes straight to S3) and can improve scalability.
How to generate a pre-signed upload URL in Laravel:
- Decide on upload target details: You typically decide what key (path/filename in the bucket) the upload should use and what constraints to enforce (file type, size, etc.). For example, you might let users upload to the uploads/ folder with a specific filename or a UUID.
- Generate the URL using the AWS SDK: We’ll use the AWS SDK for PHP (which we installed earlier) to create a presigned URL. You can add a method in a controller like S3UploadController for this. For security, you might only allow logged-in users to hit this endpoint. Example:
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Storage;
use Illuminate\Support\Str;

class S3UploadController extends Controller {
    // ...
    public function getUploadUrl(Request $request) {
        // (Optional) Validate input such as the desired filename, etc.
        $filename = $request->input('filename', Str::uuid() . '.bin');

        // Reuse our disk's credentials/config: Laravel exposes the
        // underlying Aws\S3\S3Client via getClient() on the s3 disk
        $s3Client = Storage::disk('s3')->getClient();

        // Specify the parameters for the upload
        $bucket = config('filesystems.disks.s3.bucket');
        $key = "uploads/{$filename}";
        $expires = "+30 minutes"; // URL validity period

        // Generate a pre-signed URL for a PUT request
        $command = $s3Client->getCommand('PutObject', [
            'Bucket' => $bucket,
            'Key' => $key,
            // 'ACL' => 'public-read' works only if the bucket allows ACLs
            // You can also specify 'ContentType' for the expected file MIME
        ]);
        $presignedRequest = $s3Client->createPresignedRequest($command, $expires);

        return response()->json([
            'url' => (string) $presignedRequest->getUri(),
            'key' => $key,
        ]);
    }
}
In the above code, we retrieve the underlying S3 client from Laravel’s Storage disk (this saves us from manually instantiating S3Client and handling config). We then prepare a PutObject command with our bucket name and target key (the file path in the bucket). Finally, createPresignedRequest generates a signed URL that’s valid for the specified time. We return this URL (and the key for reference) as JSON to the caller.
Alternatively, you could instantiate Aws\S3\S3Client with credentials from your configuration directly and follow the same steps; the result is the same. The AWS SDK v3 method createPresignedRequest is used under the hood in both cases.
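If you prefer not to reuse the disk’s client, a manual instantiation might look like the following sketch, which reads the same values the s3 disk uses (config() is preferred over env() so it still works with Laravel’s config cache enabled):

```php
use Aws\S3\S3Client;

$s3Client = new S3Client([
    'version'     => 'latest',
    'region'      => config('filesystems.disks.s3.region'),
    'credentials' => [
        'key'    => config('filesystems.disks.s3.key'),
        'secret' => config('filesystems.disks.s3.secret'),
    ],
]);
```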
Client-side usage of the URL: The front-end (could be a JavaScript function, or a Livewire component) will request this endpoint (e.g., via AJAX) to get the pre-signed URL. Once it has the URL, it can upload the file directly. For example, using Axios or Fetch in JS:
// Assume we got `data.url` from the Laravel endpoint and have a file from an <input>
const file = document.querySelector('#myFileInput').files[0];
const uploadUrl = data.url;
axios.put(uploadUrl, file, {
headers: {
'Content-Type': file.type // Ensure the content type matches the file
},
onUploadProgress: progressEvent => {
const percent = Math.round((progressEvent.loaded / progressEvent.total) * 100);
console.log('Upload progress:', percent, '%');
// Here you can update a progress bar in your UI
}
}).then(response => {
console.log('File uploaded directly to S3');
}).catch(error => {
console.error('Upload failed', error);
});
The above example uses Axios to perform a PUT to the S3 pre-signed URL. We set the Content-Type header appropriately and use the onUploadProgress callback to receive progress events (more on progress in Step 6). If the PUT is successful (HTTP 200), the file is now in S3 at the specified key. Note that this request never touched your Laravel server; it went straight to AWS.
Saving metadata / informing your app: After a successful direct upload, you might want to inform your Laravel app about the new file (for example, to save a record in the database). This can be done by making another request to your app (e.g., calling a route to finalize the upload, sending the file key or other metadata). This extra step is often needed because the Laravel app was bypassed during the file transfer, so it needs to be explicitly told about the new file.
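As a sketch of such a finalize endpoint (the route path, validation rules, and the Upload model are assumptions for illustration, not part of the original tutorial):

```php
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

// routes/web.php: the client calls this after a successful direct upload,
// sending back the S3 object key it received from getUploadUrl().
Route::post('/uploads/finalize', function (Request $request) {
    $data = $request->validate([
        'key' => 'required|string|starts_with:uploads/',
    ]);

    // Persist a record of the new file, e.g. in an "uploads" table
    // (hypothetical model and columns):
    // Upload::create(['path' => $data['key'], 'user_id' => $request->user()->id]);

    return response()->json(['status' => 'ok', 'key' => $data['key']]);
})->middleware('auth');
```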
Step 6: Tracking File Upload Progress (Modern Frontend)
When dealing with file uploads, especially large ones, providing feedback to the user is important. Modern tools make it easy to display upload progress:
- Using Axios or Fetch (with JavaScript): As shown above, Axios provides an onUploadProgress callback which gives you the bytes sent vs. total bytes. You can use this to update a progress bar or percentage text. The Fetch API does not expose upload progress directly, so for fetch-based code you would fall back to an XMLHttpRequest with progress events (or the Streams API).
- Using Livewire (Laravel Livewire): Livewire is a popular Laravel package for building dynamic interfaces with minimal JavaScript, and it has built-in support for file uploads. By default, Livewire uploads the file via your Laravel app, but it can be configured to use S3 for temporary storage as well. In either case, Livewire emits JavaScript events during upload. Specifically, an <input type="file" wire:model="photo"> will trigger events like livewire-upload-start, livewire-upload-progress, and livewire-upload-finish. You can catch these in JavaScript or, even easier, use Alpine.js for a reactive progress bar. For example, in a Blade view with Alpine integrated:
<div x-data="{ progress: 0 }"
x-on:livewire-upload-progress="progress = $event.detail.progress">
<input type="file" wire:model="photo">
<div x-show="progress > 0">
<progress max="100" x-bind:value="progress"></progress>
<span x-text="progress + '%'"></span>
</div>
</div>
Livewire dispatches the livewire-upload-progress event with a detail.progress value (0 to 100) as the file uploads. The Alpine snippet above listens for that event and updates a <progress> bar accordingly. This provides a seamless user experience without you writing any custom AJAX code.
Using Vue or other frameworks: If you’re building your front-end with Vue.js or React, you can similarly use their data binding and lifecycle hooks to display progress. For instance, in Vue, you might use the Axios approach shown above and bind the percent value to a component’s data, which updates the DOM. There are also third-party components and libraries (like Uppy, Dropzone, etc.) that handle uploads and progress bars out of the box.
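For completeness, here is a framework-free sketch of the XMLHttpRequest approach mentioned above (the function names are my own, not from any library):

```javascript
// Pure helper: convert loaded/total bytes into a whole percentage.
function percentComplete(loaded, total) {
  if (!total) return 0; // avoid division by zero when total is unknown
  return Math.round((loaded / total) * 100);
}

// Browser-only: PUT a File/Blob to a pre-signed URL, reporting progress.
function uploadWithProgress(url, file, onProgress) {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.open('PUT', url);
    xhr.setRequestHeader('Content-Type', file.type);
    xhr.upload.onprogress = (e) => {
      if (e.lengthComputable) onProgress(percentComplete(e.loaded, e.total));
    };
    xhr.onload = () =>
      xhr.status === 200 ? resolve() : reject(new Error('HTTP ' + xhr.status));
    xhr.onerror = () => reject(new Error('Network error'));
    xhr.send(file);
  });
}
```

The onProgress callback receives a 0-100 integer you can bind straight to a progress bar.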
Step 7: S3 File Visibility – Public vs Private
By default, objects uploaded to S3 through Laravel are private – meaning they require AWS credentials or a signed URL to access. This is desirable for sensitive files. However, sometimes you want files (e.g. user profile images) to be publicly accessible via a URL. Laravel’s filesystem abstraction makes it easy to control this:
Default disk visibility: You can set a default visibility in the disk config. For example, in config/filesystems.php under the s3 disk, add 'visibility' => 'public' if you want all uploads to S3 to be public by default. This corresponds to an S3 object ACL of “public-read”. Be cautious: making everything public may not be what you want for user-uploaded data. (Also note that newly created S3 buckets disable ACLs and block public access by default, so you may need to adjust those bucket settings for public visibility to work.)
Setting visibility per file: If not set globally, you can specify visibility when uploading. The Storage::put method accepts an optional third parameter. For example:
Storage::disk('s3')->put('uploads/file.jpg', $contents, 'public');
This uploads the object and makes it public in one go (ACL = public-read). In our earlier code using ->put('uploads', $file), we could do ->put('uploads', $file, 'public') to achieve the same.
If you’re using the UploadedFile->store() method, Laravel provides storePublicly() and storePubliclyAs() to always use public visibility. For example:
$path = $request->file('avatar')->storePublicly('avatars', 's3');
In summary, for public files you can either set them public on upload or serve them through a CDN such as CloudFront. For private files, keep them private and serve them via signed URLs or through your app with authentication checks. Laravel’s storage API provides the hooks (as shown above) to handle both scenarios conveniently.
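For the private case, Laravel’s S3 disk can generate those signed download URLs directly via temporaryUrl (the object path shown is just an example):

```php
use Illuminate\Support\Facades\Storage;

// Time-limited signed URL for a private object.
$url = Storage::disk('s3')->temporaryUrl(
    'uploads/secret-report.pdf',
    now()->addMinutes(5) // link expires after 5 minutes
);
```

Anyone holding this URL can download the object until it expires, so you would typically generate it per request, after your own authorization checks.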
Step 8: Deployment to AWS (Laravel Forge)
Once your Laravel application is working locally with S3, you’ll likely want to deploy it to a production environment on AWS. One modern approach is to use Laravel Forge for easy deployment and server management. Laravel Forge is a service that can provision and manage web servers on various providers (including AWS EC2) with minimal effort. Here’s why you might consider it and how it fits in:
- Server Provisioning: Through Forge, you can create a new AWS EC2 server (or Lightsail instance, etc.) in a few clicks. Forge will install a Laravel-optimized LEMP stack (Linux, Nginx, MySQL, PHP) for you, including the correct PHP version and extensions.
- Deployment Automation: You can hook your project’s repository (GitHub, GitLab, etc.) to Forge. Every time you push, Forge can deploy your code. It handles composer installation, database migrations, etc., according to a customizable deployment script. This means your S3 configuration in .env will be in place on the server and your code stays up to date.
- Environment Management: Forge provides a UI to manage your .env variables on the server, so you can securely add your AWS credentials there (never commit them to code).
- Queue and Cron Management: If your file uploads dispatch jobs (e.g., to process images), Forge can manage queue workers and cron jobs easily from its dashboard.
- SSL and Security: Forge can provision SSL certificates via Let’s Encrypt and perform basic firewall setup. This is important if you are allowing file uploads; you want your site to be HTTPS.
In short, Laravel Forge automates the deployment of Laravel applications to AWS and handles routine server maintenance, letting you focus on code. There are other deployment options too (e.g., AWS Elastic Beanstalk, Laravel Vapor for serverless, or manually managing an EC2 instance), but Forge is very developer-friendly for Laravel projects.
Once deployed, double-check that your production server has the correct .env variables for AWS and that the server’s IAM role or keys have access to the S3 bucket. If deploying to AWS EC2, you can also assign an IAM Role with S3 access to the instance instead of using keys; in that case, your app can connect to S3 without embedded keys (the AWS SDK will use the IAM role credentials automatically).
Note: You can use Laravel Forge to create highly scalable applications on AWS.

Common Questions about Uploading Files to Amazon S3 Using Laravel
What is Amazon S3, and how does it work with Laravel?
Amazon S3 (Simple Storage Service) is a scalable cloud storage service provided by AWS. It allows you to store and retrieve any amount of data. S3 integrates seamlessly with Laravel through Laravel’s built-in filesystem abstraction, making it easy to upload, retrieve, and organize files.
Can I store files in private folders on Amazon S3 using Laravel?
Yes, you can store files in private folders on Amazon S3 using Laravel. Configuring the appropriate permissions and access control policies ensures that the files stored under specific prefixes are private and only accessible to authenticated users (for example, via signed URLs).
Can Laravel track file upload progress?
Laravel doesn’t natively track file upload progress out of the box, but you can implement this functionality on the front end using JavaScript and AJAX (for example, Axios progress callbacks or Livewire’s upload events).