From localhost to live HTTPS URL with AWS and Laravel/LiveWire

Peter Quill
10 min read · Feb 1, 2021

Over the past few weeks, I was putting together a simple homepage with Laravel, featuring a few Livewire components such as an email subscription form and polling-based tables. It’s a simple, introduction-type landing page.

I thought I’d capture the steps of going from localhost to a live HTTPS URL with AWS, predominantly because content online overcomplicates this process.

Steps in a nutshell:

  • Launch an Amazon Linux 2 EC2 instance.
  • Set up NGINX and PHP-FPM.
  • Set up HTTPS with Let’s Encrypt.
  • Bring in the code.
  • Resolve group and user issues.
  • Compile assets.

Launch Amazon Linux 2 EC2

Go to EC2, make sure the Region you are in is where you want to launch the instance. Click Launch Instances.

Step 1: Choose an Amazon Machine Image (AMI)

Choose Amazon Linux 2 AMI. It’s not the only free-tier eligible AMI, however, there are inherent benefits in using Amazon Linux 2 when working with AWS.

Step 2: Choose an Instance Type

Choose t2.micro and click Next. At the moment of writing, that’s the only free-tier eligible instance type.

Step 3: Configure Instance Details

Change nothing and click Next.

Step 4: Add Storage

Change nothing and click Next.

Step 5: Add Tags

Change nothing and click Next.

Step 6: Configure Security Group

If you already have a security group for public-facing EC2s, choose it; avoid creating a new security group with every EC2 launch.

If not, we want to make sure:

  • we, and only we, can access the instance via SSH
  • we allow HTTP and HTTPS traffic

The security group should allow inbound SSH (port 22) from your IP only, plus HTTP (port 80) and HTTPS (port 443) from anywhere (0.0.0.0/0 and ::/0).

Proceed to Review and Launch and then Launch.

Once you click Launch, you’ll be greeted with a “select a key pair” dialogue window.

If you have a key pair already, you can choose it or generate a new one. I generate a new key pair for every instance for security reasons. Regardless, make sure you have the PEM file, as without it you will not be able to SSH into the instance.

The final click — Launch Instance.

The last bit to discuss before moving on is the public IPv4 address. It has been auto-assigned and will change if you Stop and Start the instance (a Reboot, whether via the AWS Console or sudo reboot, keeps it). As we will be pointing our domain at this IP address, we might want to consider allocating an Elastic IP address for the instance (cost implied).
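If you go the Elastic IP route, it can also be done from the AWS CLI. A sketch, assuming the CLI is configured for your account; the instance ID below is a placeholder:

```shell
# Allocate a new Elastic IP in the VPC scope and capture its allocation ID.
ALLOCATION_ID=$(aws ec2 allocate-address --domain vpc \
  --query 'AllocationId' --output text)
# Associate it with the instance (replace the placeholder instance ID).
aws ec2 associate-address --instance-id i-0123456789abcdef0 \
  --allocation-id "$ALLOCATION_ID"
```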

NGINX & PHP-FPM

SSH to the instance.

$ ssh -i "~/.ssh/key-pair.pem" ec2-user@IPv4Address

Amazon Linux 2 has an amazon-linux-extras command which is meant to simplify management of the most common services.

# See what's available.
$ amazon-linux-extras list
# Install NGINX, PHP-FPM.
$ sudo amazon-linux-extras install nginx1
$ sudo amazon-linux-extras install php7.4

One might think amazon-linux-extras will take care of the run-on-boot flag; it does not. Let's enable it.

$ sudo systemctl enable nginx
$ sudo systemctl enable php-fpm

Configuring NGINX is fairly simple if you don’t try to solve all the world’s problems at once. Currently, we just want a phpinfo(); test to pass.

# Go root, rather than sudo-prefix every command.
$ sudo su
# Jump to default sites folder.
$ cd /etc/nginx/conf.d/
# Create .conf for your site.
$ touch my-site.com.conf
# Open with any text-editor, vi/nano.
$ vi my-site.com.conf
# Paste the below in.
server {
    server_name my-site.com;
    root /var/www/my-site.com/public;
    # Load default nginx configuration for PHP upstream.
    include /etc/nginx/default.d/php.conf;
}
# Save and restart nginx.
$ service nginx restart
# Exit root.
$ exit
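One habit worth adopting before any restart: have NGINX validate the configuration first, so a typo doesn’t take the site down.

```shell
# Checks syntax and reports the offending file/line on error.
sudo nginx -t
```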

Worth noting: NGINX has a default PHP upstream configuration; use it. We should have a good reason before configuring a new or different PHP upstream per site.

Final few steps.

# Create directory and index.php
$ mkdir /var/www/my-site.com
$ touch /var/www/my-site.com/index.php
# Open with any text-editor, vi/nano.
$ vi /var/www/my-site.com/index.php
# Paste the below and save.
<?php phpinfo();

Alright, let’s test it.

Go to http://IPv4Address/ and you should see the PHP info page.

HTTPS://domain

Next stop: putting HTTPS and a domain on top of the IPv4 address.

Regardless of where your domain is parked, you should be able to edit its DNS records. Add or edit the A record; the value should be the IPv4 address.

The change should propagate across the globe whilst we are sorting out HTTPS.
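You can check propagation from your own machine at any point; dig should return the instance's IPv4 address (my-site.com standing in for your domain):

```shell
# Prints the current A record as resolvers see it.
dig +short my-site.com A
```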

We will be leveraging the Let’s Encrypt project to get a free, auto-renewable SSL certificate.

# Go root, rather than sudo-prefix every command.
$ sudo su
# Install Certbot packages.
$ yum install -y certbot python2-certbot-nginx
# Run Certbot
$ certbot
...
Follow Certbot dialogue process, read everything.
Make sure you provide a valid email address as you will receive notifications about renewals and issues, if any.
...
# Restart nginx and exit root.
$ service nginx restart
$ exit

Let’s take a look at how our NGINX site configuration has been changed by certbot and briefly discuss it.

server {
    server_name my-site.com;
    root /var/www/my-site.com/public;
    # Load default nginx configuration for PHP upstream.
    include /etc/nginx/default.d/php.conf;

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/my-site.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/my-site.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = my-site.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name my-site.com;
    return 404; # managed by Certbot
}

It is messy, but thankfully we have the line-level comment managed by Certbot everywhere certbot did its job. We can see it has added an HTTP-to-HTTPS redirect and the configuration necessary to enable HTTPS for our domain. Great.
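Auto-renewal is the main reason to use certbot, so it’s worth confirming up front that renewals will succeed when the time comes (the exact timer/cron name may vary by package):

```shell
# Simulate a renewal without touching the real certificate.
sudo certbot renew --dry-run
# Check that a renewal timer is in place.
systemctl list-timers | grep -i certbot
```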

At this point, we should see the same PHP info page, but under https://my-site.com/.

Achievement unlocked! Let’s take a break and get some coffee.

Bring in the code

We need to install EPEL as we’ll need to install git, composer and nodejs.

$ sudo amazon-linux-extras install epel
$ sudo yum install git

If you tried to git clone now, you would get a permission error (not if your repository is public, though).

Let's make sure GitHub allows access. You might be using a different provider, e.g. Bitbucket; regardless, there will be a place where you can add an SSH key to allow access.

In GitHub, it’s under Settings > SSH and GPG keys.

# Generate key for ec2-user.
$ ssh-keygen
... enter, enter, enter...
# Copy the output and create entry at your git provider.
$ cat ~/.ssh/id_rsa.pub

We now should have no issues performing git clone.

$ cd /var/www/my-site.com
$ git clone git@...

Let’s sort out composer.

# Install composer.
$ sudo yum install composer
# Install dependencies.
$ cd /var/www/my-site.com
$ composer install

I’m quite sure you’ll encounter a memory limit issue here. We are running a t2.micro instance: free and thus not packing much power. Let’s add a swap disk.

# Go root, rather than sudo-prefix every command.
$ sudo su
$ /bin/dd if=/dev/zero of=/var/swap.1 bs=1M count=2048
$ /sbin/mkswap /var/swap.1
$ /bin/chmod 0600 /var/swap.1
$ /sbin/swapon /var/swap.1
# Exit root.
$ exit

I’m okay with running swapon and swapoff when I need to deal with composer on small instances. You might want to consider upgrading the instance type (cost implied), or adding the swap disk to fstab if composer operations are frequent, say for continuous delivery.
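If you do decide to make the swap permanent, an fstab entry does it (assuming the /var/swap.1 file created above):

```shell
# Mount the swap file automatically on boot.
echo '/var/swap.1 swap swap defaults 0 0' | sudo tee -a /etc/fstab
```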

Let’s recap where we are.

  • We have a working public HTTPS domain, https://my-site.com/
  • We have code and composer dependencies installed.

The next steps would be to set up the .env file, run migrations and access the site for a test. While performing these actions, you’ll face a file-write permission issue, which leads us to the next topic.
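For completeness, those next steps usually look like this (a sketch; your .env values and flags will differ):

```shell
cd /var/www/my-site.com
cp .env.example .env         # then fill in APP_URL, DB credentials, etc.
php artisan key:generate
php artisan migrate --force  # --force is required in production
```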

Different group and user of the stack

This is the most difficult part of the whole process and gives a proper appreciation for Laravel Forge and similar services.

The issue is that all stack services have different groups:

  • by default, NGINX uses a group with the same name as its user, which is nginx
  • by default, PHP-FPM uses apache as both group and user
  • CLI commands such as php artisan run under the ec2-user group and user

To unwind the problem:

  • you run php artisan migrate; this writes a few cache files alongside laravel.log, and if the files did not exist before, they are created under the ec2-user group and user.
  • you access the site via https://my-site.com/; PHP-FPM tries to write to laravel.log but has no permission, as its group and user are apache.

There are other angles where the issue will surface but you get the drift.

Before we go and chmod 0777 everything (pun intended), let's try to resolve it in a secure way.

It’s important to understand the implications of changing default configurations of root services.

Services such as NGINX and PHP-FPM have their own files maintained in locations such as /etc/ and /var/log/; if you change their group, they will be unable to access these files, resulting in a failure to boot.

The most common approach is to create a dedicated group such as www and add the relevant service users to it. I like to use ec2-user as the web group instead.

I use the CLI a lot. When I SSH into an instance, I’m ec2-user and I want no restrictions on reading, writing and finding anything site-related. As a result, ec2-user needs to be able to access nginx and apache group files. To achieve this, we add ec2-user to the nginx and apache groups.

$ sudo usermod -a -G nginx ec2-user
$ sudo usermod -a -G apache ec2-user

The above tweaks do not fully solve the problem, as the files might be created with chmod 0644, not allowing group members to amend them. As a result, we might still hit a file-write permission issue when mixing php artisan and web access. Luckily for us, Laravel allows changing the chmod for the single and daily log channels. Let’s change it to allow group access.

$ vi /var/www/my-site.com/config/logging.php
...
'single' => [
    'driver' => 'single',
    'path' => storage_path('logs/laravel.log'),
    'level' => 'debug',
    'permission' => 0664,
],
'daily' => [
    'driver' => 'daily',
    'path' => storage_path('logs/laravel.log'),
    'level' => 'debug',
    'days' => 14,
    'permission' => 0664,
],
...
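To see why the default mode is a problem, here is a quick local demonstration of the two modes, using a throwaway temp file:

```shell
# Demonstrate why 0644 blocks group writes while 0664 allows them.
f=$(mktemp)
chmod 0644 "$f"
stat -c '%a' "$f"   # 644: owner can write, group can only read
chmod 0664 "$f"
stat -c '%a' "$f"   # 664: group members can write too
rm "$f"
```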

Files created by PHP-FPM, such as uploads, will be owned by the apache user by default. Let’s change the user to ec2-user.

# Go root, rather than sudo-prefix every command.
$ sudo su
# Open with any text-editor, vi/nano.
$ vi /etc/php-fpm.d/www.conf
...
user = ec2-user
...
# Restart php-fpm and exit root.
$ service php-fpm restart
$ exit

At this point, you should be able to use the CLI (php artisan), access the web with file uploads working and, in general, have a working site.
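A quick smoke test that the permissions actually line up, writing to the log as each user (paths as above, run as ec2-user; apache here is the default service user from earlier):

```shell
cd /var/www/my-site.com
# Write as the CLI user.
touch storage/logs/laravel.log
# Write as the service user; this fails if group permissions are wrong.
sudo -u apache touch storage/logs/laravel.log
ls -l storage/logs/laravel.log
```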

If anyone has a better way of resolving the file-write issue between NGINX, PHP-FPM and the CLI, please leave it in the comments.

Front-end assets

There’s a bit of a gotcha with getting nodejs in via EPEL: the version is 6.17.1. You can check this by running the command below.

$ yum info nodejs
Loaded plugins: extras_suggestions, langpacks, priorities, update-motd
226 packages excluded due to repository priority protections
Available Packages
Name : nodejs
Arch : x86_64
Epoch : 1
Version : 6.17.1
Release : 1.el7
Size : 4.7 M
Repo : epel/x86_64
Summary : JavaScript runtime
URL : http://nodejs.org/
...

Let’s get nvm in. I’m a big fan of this project; it allows great fluidity when working on multiple projects with different stack/version requirements.

# Install nvm.
$ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash
# Source it and install node
$ . ~/.nvm/nvm.sh
$ nvm install node
# Double check
$ node --version

Great, let’s install the JS dependencies, compile assets and see if everything runs by visiting https://my-site.com/

$ cd /var/www/my-site.com
$ npm install
$ npm run prod

Troubleshooting

The first visit renders the site fine, but with one Livewire issue:

http://my-site.com/livewire/livewire.js is a 404.

At first glance, one might think it’s Livewire; potentially run discover or publish the config to investigate.

# Never hurts when moving servers.
$ php artisan livewire:discover

The issue, however, is with the PHP upstream and the fact that it is not routing requests via index.php. Let’s take a look. Below is the default PHP upstream which ships with a fresh NGINX installation.

$ cat /etc/nginx/default.d/php.conf
# pass the PHP scripts to FastCGI server
#
# See conf.d/php-fpm.conf for socket configuration
#
index index.php index.html index.htm;
location ~ \.(php|phar)(/.*)?$ {
    fastcgi_split_path_info ^(.+\.(?:php|phar))(/.*)$;
    fastcgi_intercept_errors on;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    fastcgi_pass php-fpm;
}

We can see a location configuration for PHP files, but nothing for general routing via index.php, which is necessary for the Laravel stack. Let’s add that.

# Go root, rather than sudo-prefix every command.
$ sudo su
# Open with any text-editor, vi/nano.
$ vi /etc/nginx/default.d/php.conf
...
# pass the PHP scripts to FastCGI server
#
# See conf.d/php-fpm.conf for socket configuration
#
index index.php index.html index.htm;
location ~ \.(php|phar)(/.*)?$ {
    fastcgi_split_path_info ^(.+\.(?:php|phar))(/.*)$;
    fastcgi_intercept_errors on;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    fastcgi_pass php-fpm;
}
# Add this general routing block.
location / {
    try_files $uri $uri/ /index.php$is_args$args;
}
...
# Restart nginx and exit root.
$ service nginx restart
$ exit
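To confirm the fix, a quick status check (my-site.com standing in for your domain; we expect a 200 now instead of the 404):

```shell
# Print only the HTTP status code of the Livewire asset.
curl -s -o /dev/null -w "%{http_code}\n" https://my-site.com/livewire/livewire.js
```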

Voilà, we got ourselves a running HTTPS site.

Conclusion

There’s a process to it, but only for the first few runs. After that, it becomes second nature and takes less than an hour to complete.

You can always take this further and create launch templates, or use Terraform to have all of this a few clicks away.

Alternatively, use the aforementioned Laravel Forge or a similar service.
