Deploy spring-boot application on AWS using ElasticBeanstalk, RDS, and Terraform

Continuous deployment is a crucial part of any continuous software delivery portfolio. The moment we attach the prefix "Continuous" to any software delivery phase, it is taken as given that the phase must be repeatable, maintainable, scalable and automated.

With the growing adoption of Agile, a continuous software delivery footprint has become a must-have part of any company's engineering portfolio and software development hygiene. Accordingly, investing in a robust, predictable and reliable deployment strategy has gained proportionate importance.

Due to this shift, tools like Puppet, Chef, Ansible and Vagrant have seen increased adoption and popularity over the years for managing deployments, from self-serve environments all the way to full-scale production environments.

Terraform by HashiCorp is a newer offering in the same space, with a very powerful feature set for defining and managing deployments and environments. More can be read about Terraform on its official site.

In this article, we will look at how to deploy a simple microservice developed in Spring Boot to AWS using Terraform, with the help of Elastic Beanstalk and RDS. What we will not cover is how to develop a simple Spring Boot application in Java; there are many official tutorials available to get started with that.

About the application: The application under consideration is a sample REST-over-HTTP service developed in Spring Boot, which requires a relational database like MySQL at the backend to store configuration and transactional data. I have chosen this particular stack as it remains one of the most widely adopted designs for microservices.
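For reference, a minimal sketch of how such a service's datasource configuration might look (the property keys are the standard Spring Boot datasource keys; the port and placeholder values are assumptions for this example):

```properties
# src/main/resources/application.properties
server.port=5000
spring.datasource.url=jdbc:mysql://localhost:3306/mydb?useSSL=false
spring.datasource.username=db-user
spring.datasource.password=db-password
```

Spring Boot's relaxed binding lets environment variables such as SPRING_DATASOURCE_URL override these properties, which is exactly how we will inject the RDS connection details through Elastic Beanstalk later in this article.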

In order to deploy and make the above application available, we need a server to host our web service as well as a MySQL instance. With the emergence of public cloud providers, provisioning infrastructure for your applications is no longer as tedious as it used to be. So instead of taking on the overhead of creating and managing our own compute instance (VM) and installing the supporting software on it, we will use some of the managed service offerings from AWS to host our application.

AWS Elastic Beanstalk: Popularly known as EB, it is a managed web-server offering from AWS that can host your web service once you make your application artefact available to it. In this context, managed means you don't need to worry about underlying infrastructure components like VMs and load balancers; EB manages those for you and also gives you options like on-demand scaling, auto-scaling and monitoring for your application.

AWS Relational Database Service: Popularly known as RDS, it is, as the name suggests, a managed relational database offering from AWS that supports popular open-source databases like MySQL and PostgreSQL as well as native AWS databases like Aurora. As with EB, being a managed service means we won't need to worry about the underlying infrastructure components, and we get additional options for our database instance such as automated backups and high availability.

Now that we have discussed all the components involved, here are the steps, in their order of execution, to make our application available:

1. Provision an RDS mysql instance
2. Save the connection URL of the newly created instance
3. Provision an EB Application instance
4. Upload your application artefact to an S3 bucket
5. Create an Application Version for the EB Application created in Step 3 and point to the application artefact uploaded in Step 4
6. Provision an EB Application environment and associate it with the version created in Step 5
7. Update the EB Application environment variables to point to the RDS mysql instance created in Step 1
8. Wait for your application to provision and validate the health URL

By following the above steps, we can get our service up and running in an environment, ready to be tested or distributed depending on the environment under consideration.

Once we have the list of required steps, the first thing to do to make the process repeatable and reliable is to automate it. This is where all the previously listed tools come into the picture, and here we are going to discuss how Terraform helps us with this.

The top-level object that we need to define for Terraform is called a 'provider'. Terraform currently supports a wide range of providers, which are listed here.

In our case, the provider we are going to use is `aws`. Below is a snippet of how the provider file looks:

  • provider.tf
provider "aws" {
  region = "us-east-1"
  access_key = "your-access-key"
  secret_key = "your-secret"
}
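Hard-coding the access and secret keys in provider.tf is fine for a quick experiment, but those keys then risk ending up in version control. A safer variant, assuming you have credentials configured via environment variables or the AWS CLI's shared credentials file, is to omit them entirely:

```hcl
provider "aws" {
  region = "us-east-1"
  # credentials are picked up automatically from
  # AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY or ~/.aws/credentials
}
```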

Now that we have informed Terraform about the provider, the next object we need to describe is the 'resource'. A resource definition is how Terraform understands what resource we need to provision for the given provider.

For our example, we need to provision the following resources:

 - RDS mysql instance
 - EB Application Instance
 - EB Application Version
 - EB Application Environment

We will keep a separate .tf file for each of the resources mentioned above:

  • rds_mysql_instance.tf
resource "aws_db_instance" "rds-db" {
  identifier = "mydbinstance"
  allocated_storage = "20"
  engine = "mysql"
  engine_version = "5.7.21"
  instance_class = "db.t2.small"
  name = "mydb"
  username = "db-user"
  password = "db-password"
  publicly_accessible = true
  vpc_security_group_ids = ["rds_security_group_id"]
  skip_final_snapshot = true
}
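Hard-coding the database username and password (and marking the instance publicly accessible) is acceptable for a demo, but for anything shared you would typically pull these from Terraform variables instead. A minimal sketch, assuming the variable names below:

```hcl
variable "db_username" {}

variable "db_password" {}

# referenced in the resource as:
#   username = "${var.db_username}"
#   password = "${var.db_password}"
# values can be supplied via terraform.tfvars or -var flags at plan/apply time
```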
  • ebs_application.tf
resource "aws_elastic_beanstalk_application" "ebs-app" {
  depends_on = ["aws_db_instance.rds-db"]
  name = "my-sample-application"
  description = "This is a demo elastic beanstalk environment"
  appversion_lifecycle {
    service_role = "arn:aws:iam::12345678932:role/aws-service-role/elasticbeanstalk.amazonaws.com/AWSServiceRoleForElasticBeanstalk"
    max_count = 128
    delete_source_from_s3 = false
  }
}
  • ebs_application_version.tf
resource "aws_elastic_beanstalk_application_version" "ebs-app-ver" {
  depends_on = ["aws_elastic_beanstalk_application.ebs-app"]
  application = "${aws_elastic_beanstalk_application.ebs-app.name}"
  bucket = "my.application.artefact"
  key = "deployables/my-sample-application.jar"
  name = "v1"
}
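Step 4 (uploading the artefact to an S3 bucket) can itself be handled by Terraform rather than done by hand. A sketch, assuming the jar has been built locally under target/ and using the AWS provider's aws_s3_bucket_object resource:

```hcl
resource "aws_s3_bucket_object" "app-artefact" {
  bucket = "my.application.artefact"
  key    = "deployables/my-sample-application.jar"
  source = "target/my-sample-application.jar"
}
```

If you adopt this, the application version resource should also declare `depends_on = ["aws_s3_bucket_object.app-artefact"]` so the upload completes before the version is created.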
  • ebs_environment.tf
resource "aws_elastic_beanstalk_environment" "ebs-env" {

  depends_on = ["aws_elastic_beanstalk_application_version.ebs-app-ver"]
  name = "my-sample-application-dev"
  application = "${aws_elastic_beanstalk_application.ebs-app.name}"
  solution_stack_name = "64bit Amazon Linux 2018.03 v2.7.4 running Java 8"
  cname_prefix = "my-sample-application-dev"
  version_label = "${aws_elastic_beanstalk_application_version.ebs-app-ver.name}"

  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "SERVER_PORT"
    value     = "5000"
  }

  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "SPRING_DATASOURCE_URL"
    value     = "jdbc:mysql://${aws_db_instance.rds-db.endpoint}/${aws_db_instance.rds-db.name}?useSSL=false"
  }

  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "SPRING_DATASOURCE_USERNAME"
    value     = "db-username"
  }

  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "SPRING_DATASOURCE_PASSWORD"
    value     = "db-password"
  }

}
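The apply output shown later in this article comes from output variables, which we also need to define. A minimal outputs.tf sketch covering a couple of them, using attributes exported by the aws_elastic_beanstalk_environment resource:

```hcl
output "ebs_environment_name" {
  value = "${aws_elastic_beanstalk_environment.ebs-env.name}"
}

output "cname" {
  value = "${aws_elastic_beanstalk_environment.ebs-env.cname}"
}
```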

Now that we have all the Terraform AWS resources described as per our requirement, it's time to put Terraform into action and let it provision the required infra for us. Before execution, it is recommended to have all the above .tf files stored in a single folder.

From an execution point of view, Terraform has a set of lifecycle phases that help us plan, manage and execute deployments. Listed below are the lifecycle phases in their order of execution:

  • terraform init – When executed, this command checks for the required provider and downloads/installs the provider plugin into the local directory. This is a one-time activity.
  • terraform plan – When executed, this command recursively goes through all the *.tf files in your current working directory and prints the deployment plan on screen for verification. You can additionally use other supported parameters to store the plan output for later reference. Once you execute the command, you will get output similar to the following:
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + aws_db_instance.rds-db
      id:                                           <computed>
      allocated_storage:                            "20"
      auto_minor_version_upgrade:                   "true"
      copy_tags_to_snapshot:                        "false"
      engine:                                       "mysql"
      engine_version:                               "5.7.21"
      identifier:                                   "mydbinstance"
      instance_class:                               "db.t2.small"
      name:                                         "mydb"
      password:                                     <sensitive>
      publicly_accessible:                          "true"
      replicas.#:                                   <computed>
      resource_id:                                  <computed>
      skip_final_snapshot:                          "true"
      status:                                       <computed>

  + aws_elastic_beanstalk_application.ebs-app
      id:                                           <computed>
      appversion_lifecycle.#:                       "1"
      appversion_lifecycle.0.delete_source_from_s3: "false"
      appversion_lifecycle.0.max_count:             "128"
      appversion_lifecycle.0.service_role:          "arn:aws:iam::12345678923:role/aws-service-role/elasticbeanstalk.amazonaws.com/AWSServiceRoleForElasticBeanstalk"
      description:                                  "This is a sample elastic beanstalk environment"
      name:                                         "my-sample-application"

  + aws_elastic_beanstalk_application_version.ebs-app-ver
      id:                                           <computed>
      application:                                  "my-sample-application"
      bucket:                                       "my.application.artefact"
      force_delete:                                 "false"
      key:                                          "deployables/my-sample-application.jar"
      name:                                         "v1"

  + aws_elastic_beanstalk_environment.ebs-env
      id:                                           <computed>
      application:                                  "my-sample-application"
      cname_prefix:                                 "my-sample-application-dev"
      name:                                         "my-sample-application-dev"
      setting.#:                                    "5"
      setting.1206567429.name:                      "SPRING_DATASOURCE_USERNAME"
      setting.1206567429.namespace:                 "aws:elasticbeanstalk:application:environment"
      setting.1206567429.value:                     "my-user"
      setting.1337171872.name:                      "SERVER_PORT"
      setting.1337171872.namespace:                 "aws:elasticbeanstalk:application:environment"
      setting.1337171872.value:                     "5000"
      setting.4249243620.name:                      "SPRING_DATASOURCE_PASSWORD"
      setting.4249243620.namespace:                 "aws:elasticbeanstalk:application:environment"
      setting.4249243620.value:                     "my-password"
      setting.~2041297091.name:                     "SPRING_DATASOURCE_URL"
      setting.~2041297091.namespace:                "aws:elasticbeanstalk:application:environment"
      setting.~2041297091.value:                    "jdbc:mysql://${aws_db_instance.rds-db.endpoint}/${aws_db_instance.rds-db.name}?useSSL=false"
      solution_stack_name:                          "64bit Amazon Linux 2018.03 v2.7.4 running Java 8"
      tier:                                         "WebServer"
      version_label:                                "v1"
      wait_for_ready_timeout:                       "20m"

Plan: 4 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
  • terraform apply – Once you have confirmed the deployment plan published by terraform plan and are ready to provision the infrastructure, this command triggers the Terraform execution, and the actual provisioning of components starts against the AWS infrastructure. The command gives the user interactive feedback on execution progress and, at the end of execution, prints the defined output variables, as shown below:
Outputs:

application = my-sample-application
cname = my-sample-application-dev.us-east-1.elasticbeanstalk.com
ebs_application_health_url = http://my-sample-application-dev.us-east-1.elasticbeanstalk.com/my-sample-application/health
ebs_application_name = my-sample-application
ebs_environment_id = <generated-environment-id>
ebs_environment_name = my-sample-application-dev
elb_dns_name = my-sample-application-dev.us-east-1.elasticbeanstalk.com
instances = [
    <generated-instance-id>
]
load_balancers = [
    <generated-load-balance-id>
]
settings = [
    {
        name = SPRING_DATASOURCE_USERNAME,
        namespace = aws:elasticbeanstalk:application:environment,
        value = my-user
    },
    {
        name = SERVER_PORT,
        namespace = aws:elasticbeanstalk:application:environment,
        value = 5000
    },
    {
        name = SPRING_DATASOURCE_URL,
        namespace = aws:elasticbeanstalk:application:environment,
        value = jdbc:mysql://mydbinstance.abcdfgez.us-east-1.rds.amazonaws.com:3306/mydb?useSSL=false
    },
    {
        name = SPRING_DATASOURCE_PASSWORD,
        namespace = aws:elasticbeanstalk:application:environment,
        value = my-password
    }
]
  • terraform destroy – If your application infrastructure was deployed using Terraform as above, Terraform also provides a way to roll back and decommission all the infrastructure components it created. Like the previous command, this gives interactive feedback on the state of progress while the resources are decommissioned.

This brings us to the end of this article, where we not only looked at Terraform as a powerful infrastructure provisioning tool but also touched upon a few application-level offerings from one of the popular public cloud providers, AWS.

Terraform has many advanced features and concepts that we did not cover in the scope of this article, but they are worth investing time in to help develop a maintainable, scalable and reliable deployment automation strategy.

Continuous Deployment is a critical stage of the continuous software delivery portfolio, but it's not the only stage!
