There are multiple ways to deploy your Nodejs app, be it on-cloud or on-premises. However, it is not just about deploying your application, but deploying it correctly. Security is an important aspect that must not be ignored; if it is, the application won't stand long and has a high chance of being compromised. Hence, we are here to help you with the steps to deploy a Nodejs app to AWS EC2. We will show you exactly how to deploy a Nodejs app to a server using Docker containers, RDS Amazon Aurora, and Nginx with HTTPS, and how to access it using a domain name.
Here are the tools you will need to deploy a Nodejs application on AWS EC2.
As I said, we will deploy the Nodejs app to a server using Docker containers, RDS Amazon Aurora, and Nginx with HTTPS, and access it using the domain name. Let's first understand the architecture before we get our hands dirty.
The Nodejs app deployed to the EC2 instance using Docker will be available on port 3000. This sample Nodejs app fetches data from the RDS Amazon Aurora instance created in the same VPC as the EC2 instance. The Amazon Aurora DB instance will be private and hence accessible only within that VPC. The Nodejs application deployed on the EC2 instance could be accessed using its public IP on port 3000, but we won't do that.
Accessing applications on non-standard ports is not recommended, hence we will have Nginx acting as a reverse proxy with SSL termination. Users will access the application using the domain name, and these requests will be forwarded to Nginx. Nginx will check each request and, based on the API path, redirect it to the Nodejs app. The application will also be terminated with SSL, so the communication between the client and the server will be secured and protected.
Here is the architecture diagram that clarifies how we will deploy the Nodejs app to AWS.
Before we proceed with deploying the Nodejs app to AWS, it is assumed that you already have the following prerequisites.
Go to https://aws.amazon.com/console/ and log in to your account.
After you log in successfully, click in the search bar and search for EC2. Click on the result to visit the EC2 dashboard and create an EC2 instance.
Here, click on “Launch instances” to configure and create an EC2 instance.
Select the “Ubuntu Server 20.04 LTS” AMI.
I would recommend selecting t3.small for test purposes only; it has 2 vCPUs and 2 GB RAM. You can choose the instance type as per your needs.
You can keep the default settings and proceed. Here, I have selected the default VPC; if you want, you can select your own VPC. Note that I will be creating the instance in a public subnet.
It is better to allocate larger disk space, say 30 GB. The rest can be left at the defaults.
Assign "Name" and "Environment" tags with any values of your choice. You may even skip this step.
Allow connections to port 22 only from your IP. If you allow 0.0.0.0/0, your instance will accept SSH connections from anyone on port 22.
Review the configuration once and, if everything looks fine, click on "Launch" to create the instance.
Before the instance gets created, it needs a key-pair. You can either create a new key-pair or use the existing one. Click on the “Launch instances” button that will initiate the instance creation.
To go to the console and check your instance, click on the “View instances” button.
Here, you can see that the instance has been created and is in the "pending" state. Within a minute or two, you should see your instance up and running.
Meanwhile, let’s create an RDS Instance.
Again click in the search bar at the top of the page and this time search for “RDS”. Click on the result to visit the RDS Dashboard.
On the RDS Dashboard, click on the “Create database” button to configure and create the RDS instance.
Choose the “Easy create” method, “Amazon Aurora” Engine type, “Dev/Test” DB instance size as follows
Scroll down a bit and specify the “DB cluster identifier” as “my-Nodejs-database”. You can specify any name of your choice as it is just a name given to the RDS Instance; however, I would suggest using the same name so that you do not get confused while following the next steps.
Also, specify a master username as “admin”, its password, and then click on “Create database”.
This will initiate the RDS Amazon Aurora instance creation. Note that for production or live environments, you must not use simple usernames and passwords.
Here, you can see that the instance is in the “Creating” state. In around 5-10 minutes, you should have the instance up and running.
Make a few notes here:
Now, you can connect to the instance we created. I will not go into the details of connecting to the instance; I believe you already know how.
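For reference, connecting is typically done over SSH with the key-pair downloaded at launch. A sketch — the key path and public IP below are placeholders you must substitute:

```
# placeholders — use your own key file and the instance's public IP
chmod 400 ~/Downloads/my-keypair.pem
ssh -i ~/Downloads/my-keypair.pem ubuntu@<ec2-public-ip>
```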
We will need a MySQL client to connect to the RDS Amazon Aurora instance and create a database in it. Connect to the EC2 instance and execute the following commands from it.
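On an Ubuntu EC2 instance, the MySQL client can presumably be installed via apt; a sketch:

```
sudo apt-get update
sudo apt-get install -y mysql-client
```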
We will need a table in our RDS Amazon Aurora instance to store our application data. To create a table, connect to the Amazon RDS Aurora instance using the MySQL client we installed on the EC2 instance in the previous step.
Copy the Database Endpoint from the Amazon Aurora Instance.
Execute the following command with the correct values.
Here, my command looks as follows
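The command would be of the following shape — the endpoint placeholder is the one copied in the previous step, and "admin" is the master username we set while creating the database:

```
mysql -h <your-aurora-cluster-endpoint> -u admin -p
```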
Once you get connected to the Amazon RDS Aurora instance, execute the following commands to create a table named “users”.
show databases;
use main;
CREATE TABLE IF NOT EXISTS users (
  id int NOT NULL AUTO_INCREMENT,
  username varchar(30),
  email varchar(255),
  age int,
  PRIMARY KEY (id)
);
select * from users;
Refer to the following screenshot to understand command executions.
Now, let’s create a directory where we will store all our codebase and configuration files.
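A sketch of creating the directory (the name Nodejs-docker matches the paths used later in this guide):

```shell
# a directory to hold the code and configuration used in the rest of this guide
mkdir -p ~/Nodejs-docker
cd ~/Nodejs-docker
```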
Clone my GitHub repository containing all the code. This is an optional step; I have included all the code in this document.
Note: This is an optional step. If you copy all the files from the repository to the application directory then you do not need to create files in the upcoming steps; however, you will still need to make the necessary changes.
Docker is a containerization tool used to package our software application into an image that can be used to create Docker Containers. Docker helps to build, share and deploy our applications easily.
The first step of Dockerization is installing Docker.
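On Ubuntu 20.04, one common way to install it is via apt (a sketch; installing from the official Docker repository is an alternative if you need a newer version):

```
sudo apt-get update
sudo apt-get install -y docker.io
# allow the ubuntu user to run docker without sudo (re-login required)
sudo usermod -aG docker ubuntu
docker --version
```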
Once you have Docker installed, the next step is to Dockerize the app. Dockerizing a Nodejs app means writing a Dockerfile with a set of instructions to create a Docker Image.
Let's create a Dockerfile and a sample Nodejs app.
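The Dockerfile might look like the following minimal sketch — the base image, the app.js entry point, and port 3000 are assumptions based on this tutorial's setup:

```Dockerfile
# minimal sketch — assumes the app's entry point is app.js and it listens on port 3000
FROM node:14-alpine
WORKDIR /usr/src/app
# install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install --production
COPY . .
EXPOSE 3000
CMD ["node", "app.js"]
```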
In the above file, change the values of the following variables to those applicable to your RDS Amazon Aurora instance:
To access the application, we need to add a rule in the Security Group to allow connections on port 3000. As I said earlier, we can access the application on port 3000, but it is not recommended. Keep reading to know our recommendations.
Learn more with our blog How to Dockerize a Node.js application.
Now we have our Nodejs App Docker Container running.
In this section, we tried to access the application's APIs directly using the public IP and port of the EC2 instance. However, exposing non-standard ports to the external world in the security group is not at all recommended. Also, we accessed the application over HTTP, which means the communication between the browser and the application was not secure, and an attacker could read the network packets.
To overcome this scenario, it is recommended to use Nginx.
Let’s create an Nginx conf that will be used within the Nginx Container through a Docker Volume. Create a file and copy the following content in the file, alternatively, you can copy the content from here as well.
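A minimal sketch of such a conf — the server names, certificate paths, and the upstream service name nodejs are assumptions matching this tutorial's setup:

```nginx
server {
    listen 80;
    server_name nodejs.devopslee.com www.nodejs.devopslee.com;

    # serve Let's Encrypt HTTP-01 challenges from the shared webroot
    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }

    # redirect everything else to HTTPS
    location / {
        rewrite ^ https://$host$request_uri? permanent;
    }
}

server {
    listen 443 ssl;
    server_name nodejs.devopslee.com www.nodejs.devopslee.com;

    ssl_certificate /etc/letsencrypt/live/nodejs.devopslee.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/nodejs.devopslee.com/privkey.pem;
    ssl_dhparam /etc/ssl/certs/dhparam-2048.pem;

    # forward requests to the Nodejs container
    location / {
        proxy_pass http://nodejs:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```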
In the above file, make changes in the 3 lines mentioned below. Replace my subdomain.domain, i.e. nodejs.devopslee.com, with the one that you want and have.
Our Nodejs application runs on the non-standard port 3000. Nodejs provides a way to use HTTPS; however, configuring the protocol and managing SSL certificates (which expire periodically) within the application code base is a burden we would rather avoid.
To overcome these scenarios, we need to have Nginx in front of it with SSL termination and forward user requests to Nodejs. Nginx is a special type of web server that can act as a reverse proxy, load balancer, mail proxy, and HTTP cache. Here, we will be using Nginx as a reverse proxy to redirect requests to our Nodejs application and have SSL termination.
Apache is also a web server and can act as a reverse proxy. It supports SSL termination as well; however, a few things differentiate Nginx from Apache. For the following reasons, Nginx is usually preferred over Apache. Let's look at them briefly.
Let’s install docker-compose as we will need it.
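One common way to install it is to download the binary from the GitHub releases page — a sketch; version 1.29.2 is an assumption, so pick the current release:

```
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
```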
  webserver:
    image: nginx:mainline-alpine
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - web-root:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
      - dhparam:/etc/ssl/certs
    depends_on:
      - nodejs
    networks:
      - app-network

  certbot:
    image: certbot/certbot
    container_name: certbot
    volumes:
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
      - web-root:/var/www/html
    depends_on:
      - webserver
    command: certonly --webroot --webroot-path=/var/www/html --email my@email.com --agree-tos --no-eff-email --staging -d nodejs.devopslee.com -d www.nodejs.devopslee.com
    #command: certonly --webroot --webroot-path=/var/www/html --email my@email.com --agree-tos --no-eff-email --force-renewal -d nodejs.devopslee.com -d www.nodejs.devopslee.com

volumes:
  certbot-etc:
  certbot-var:
  web-root:
    driver: local
    driver_opts:
      type: none
      device: /home/ubuntu/Nodejs-docker/views/
      o: bind
  dhparam:
    driver: local
    driver_opts:
      type: none
      device: /home/ubuntu/Nodejs-docker/dhparam/
      o: bind

networks:
  app-network:
    driver: bridge
In the above file, make the changes in the line mentioned below. Replace my subdomain.domain, i.e. nodejs.devopslee.com, with the one that you want and have, and change the email to your personal email.
--email EMAIL: the email used for registration and as a recovery contact.
This time, expose ports 80 and 443 in the security group attached to the EC2 instance. Also, remove the rule for port 3000 since it is no longer needed, because the application now works through port 443.
Here, I have created a sub-domain "nodejs.devopslee.com" that will be used to access the sample Nodejs application by domain name rather than by IP.
You can create your sub-domain on AWS if you already have your domain.
Create two "Type A" record sets in the hosted zone with the EC2 instance's public IP as the value.
One Recordset will be subdomain.domain.com and the other will be www.subdomain.domain.com.
Here, I have created nodejs.devopslee.com and www.nodejs.devopslee.com, both pointing to the public IP of the EC2 instance.
Note: I have not assigned an Elastic IP to the EC2 instance. It is recommended to assign an Elastic IP and use it in the record set, so that you don't need to update the IP in the record set when you stop and start your EC2 instance, as public IPs change after an instance is stopped and started.
Now, copy the values of the "Type NS" record set; we will need these in the next steps.
Go to the hosted zone of your domain and create a new "Record" for your subdomain.domain.com, adding the NS values you copied in the previous step.
Now, you have a sub-domain that you can use to access your application.
In my case, I can use nodejs.devopslee.com to access the Nodejs application. We are not done yet; the next step is to secure our Nodejs web application.
Let's generate the Diffie-Hellman parameters that will be used by Nginx.
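A sketch of generating them with openssl, assuming the dhparam directory that the docker-compose file bind-mounts into the Nginx container:

```
mkdir -p ~/Nodejs-docker/dhparam
sudo openssl dhparam -out ~/Nodejs-docker/dhparam/dhparam-2048.pem 2048
```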
We are all set to start our Nodejs app using docker-compose.
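The command is presumably along these lines, run from the project directory:

```
cd ~/Nodejs-docker
docker-compose up
```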
This will start our Nodejs app on port 3000 and Nginx with SSL on ports 80 and 443. Nginx will redirect requests to the Nodejs app when accessed using the domain. It will also start a Certbot client that will obtain our certificates.
After you run the above command, you will see some output as follows. You should see a "Successfully received certificates" message.
Note: The above docker-compose command will start the containers and stay attached to the terminal, since we have not used the -d option to detach it.
You are all set, now hit the URL in the browser and you should have your Nodejs application available on HTTPS.
You can also try to hit the application using the curl command
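For example (the domain is mine; substitute your own):

```
# -I shows only the response headers, enough to confirm HTTPS works
curl -I https://nodejs.devopslee.com
```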
Certificates generated using Let's Encrypt are valid for 90 days, hence we need a way to renew them automatically so that we don't end up with expired certificates.
To automate this process, let’s create a script that will renew certificates for us and a cronjob to schedule the execution of this script.
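A minimal sketch of such a renew-cert.sh — the paths and the webserver service name assume this tutorial's setup, and the --dry-run flag only simulates renewal (remove it once you have verified the setup):

```shell
# write the renewal script into the project directory
mkdir -p ~/Nodejs-docker
cat > ~/Nodejs-docker/renew-cert.sh <<'EOF'
#!/bin/bash
cd /home/ubuntu/Nodejs-docker
# ask certbot to renew; --dry-run only simulates renewal (remove it after testing)
docker-compose run --rm certbot renew --dry-run \
  && docker-compose kill -s SIGHUP webserver
EOF
chmod +x ~/Nodejs-docker/renew-cert.sh
```

Reloading Nginx with SIGHUP after a successful renewal makes it pick up the new certificate without downtime.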
*/5 * * * * /home/ubuntu/Nodejs-docker/renew-cert.sh >> /var/log/cron.log 2>&1
In the above screenshot, you can see a "Simulating renewal of an existing certificate…" message. This is because we have specified the "--dry-run" option in the script.
This time you won't see the "Simulating renewal of an existing certificate…" message. Instead, the script will check whether the certificates are due for renewal; if so, it will renew them, otherwise it will report "Certificates not yet due for renewal".
We are done with setting up our Nodejs application using Docker on AWS EC2 instance; however, there are other things that come into the picture when you want to deploy a highly available application for production and other environments.
The next step is to use an Orchestrator like ECS or EKS to manage our Nodejs application at the production level. Replication, Auto-scaling, Load Balancing, Traffic Routing, and Monitoring container health does not come out of the box with Docker and Docker-Compose. For managing containers and microservices architecture at scale, you need a Container Orchestration tool like ECS or EKS.
Also, we did not use any Docker Repository to store our Nodejs app Docker Image. You can use AWS ECR, a fully managed AWS container registry offering high-performance hosting.
If you want to create a cloud-native architecture, check out our video What is a Cloud-Native Architecture and how to adopt it?
Deploying a Nodejs app to AWS does not mean just creating a Nodejs application and deploying it on an AWS EC2 instance with a self-managed database. Various aspects — containerizing the Nodejs app, SSL termination, a domain for the app — come into the picture when you want to improve your software development speed, deployment, security, reliability, and data redundancy.
In this article, we saw the steps to dockerize the sample Nodejs application, use AWS RDS Amazon Aurora, and deploy the Nodejs application on an AWS EC2 instance using Docker and Docker-Compose. We enabled SSL termination for the sub-domain used to access the Nodejs application. We saw the steps to automate domain validation and SSL certificate creation using Certbot, along with a way to automate the renewal of certificates, which are valid for 90 days.
This is enough to get started with a sample Nodejs application; however, when it comes to managing your real-time applications, 100s of microservices, 1000s of containers, volumes, networking, secrets, egress-ingress, you need a Container Orchestration tool. There are various tools like self-hosted Kubernetes, AWS ECS, and AWS EKS that you can leverage to manage the container life cycle in your real-world applications.
Deploying a Nodejs app to AWS with SSL termination inside the app would require changes in the Nodejs code. So, rather than making the HTTPS configuration in the code, managing it on our own, and worrying about it, it is better to use Nginx, which can handle SSL termination and act as a reverse proxy to redirect requests to our Nodejs application.
When communication between a client and a server — i.e. between the browser and the Nodejs application — takes place over an insecure connection, there is a high chance of data theft and attacks on the server. To avoid such risks, it is always recommended to enable SSL termination and communicate over a secured connection.
Managing a few containers using the docker CLI, or tens of containers using docker-compose, is fine. This no longer holds true when you have hundreds or thousands of microservices and containers across multiple environments like Dev, QA, Staging, and Prod. For not only managing containers but also log management, monitoring, networking, load balancing, testing, and secrets management, you need a Container Orchestrator. There are various container orchestration tools like ECS or EKS that can help you manage your containers and other moving parts.
Yes, of course. You can deploy your Nodejs app on any Cloud. However, while choosing a Cloud provider there are a few areas of consideration as follows that one must think of.
Certifications & standards
Global infrastructure
Data redundancy
Low-latency content delivery
Affordable compute, network, and storage solutions
Pricing model
Technologies & service roadmap
Contracts, commercials & SLAs