This script installs the SMARTER example onto a single AWS EC2 instance using Helm charts.
This figure shows the components of the application and where they reside.
It assumes that the environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN are set correctly so Terraform can access AWS. Set the following variables to correct values: region (provider "aws"): the AWS region in which to allocate the EC2 instance.
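For example, the credentials can be exported in the shell before running Terraform (the values below are placeholders; use the credentials for your own AWS account):

```shell
# Placeholder credentials: replace with the values for your AWS account.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEY"
export AWS_SECRET_ACCESS_KEY="example-secret-key"
export AWS_SESSION_TOKEN="example-session-token"
```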
Required variables:
- letsencrypt_email
Optional variables:
- deployment_name: prefix to apply to object names.
- AWS_EC2_instance_type: instance type to be used.
- AWS_VPC_subnet_id: subnet ID; the default subnet of the VPC is used if this is not defined.
A template.tfvars file is provided that can be copied so that all the variables are set in one file and passed with the option -var-file="smarter-variables.tfvars", where smarter-variables.tfvars is the name of the file used to set the variables. Commented-out variables are ignored.
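A hypothetical smarter-variables.tfvars might look like this (all values are illustrative; the variable names match the list above):

```hcl
# Required
letsencrypt_email = "admin@example.com"

# Optional -- uncomment to override the defaults
# deployment_name       = "smarter-demo"
# AWS_EC2_instance_type = "t3.xlarge"
# AWS_VPC_subnet_id     = "subnet-0123456789abcdef0"
```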
terraform init
# optional: terraform plan -var-file="smarter-variables.tfvars"
terraform apply -var-file="smarter-variables.tfvars"
Alternatively, the variables can be passed directly on the command line:
terraform init
# optional: terraform plan -var "letsencrypt_email=<valid email>"
terraform apply -var "letsencrypt_email=<valid email>"
Please note that the full installation of k3s and the Helm charts on the EC2 instance can take up to 15 minutes (typically around 10), with different parts of the system becoming available at different times. To follow the installation, the command below prints the current log and follows it:
ssh -i ssh/<deployment-name>-prod-k3s.pem ubuntu@<EC2 instance allocated> "tail -f /var/log/cloud-init-output.log"
Terraform will output the name of the EC2 instance allocated and the password/ID it generated.
The Grafana web interface can be accessed at https://grafana.<External IP of EC2 separated with dashes>.sslip.io with user admin and password <password/ID>.
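The sslip.io hostname simply embeds the instance's public IP with the dots replaced by dashes; as a sketch (203.0.113.10 is a placeholder IP):

```shell
# Convert a public IP to the dash-separated form used in the sslip.io hostnames.
EC2_IP="203.0.113.10"
GRAFANA_HOST="grafana.$(echo "$EC2_IP" | tr '.' '-').sslip.io"
echo "$GRAFANA_HOST"   # grafana.203-0-113-10.sslip.io
```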
An ssh directory will be created locally containing a private/public SSH key pair that can be used to access the instance with the following command:
ssh -i ssh/<deployment-name>-prod-k3s.pem ubuntu@<EC2 instance allocated>
K3s cloud on the instance (running the cloud containers) can be accessed by setting KUBECONFIG to /etc/rancher/k3s/k3s.yaml; this should already be set for the ubuntu user at the end of the installation. K3s edge, which manages the edge devices and the applications running on them, can be accessed by setting KUBECONFIG to $(pwd)/k3s.yaml.<password/ID>, which will also be available at the end of the installation.
Helm was used to install the charts and can be used to manage them once the correct KUBECONFIG is set.
The edge devices (a Raspberry Pi 4, for example) can be installed by running the following script. The script installs a k3s agent and configures it to be a node of the k3s edge cluster running on the EC2 instance.
wget -O - https://k3s.<External IP of EC2 separated with dashes>.sslip.io/k3s-start.sh.<password/ID> | bash -s -
The token and the k3s.yaml file can be downloaded with:
wget https://k3s.<External IP of EC2 separated with dashes>.sslip.io/token.<password/ID>
wget https://k3s.<External IP of EC2 separated with dashes>.sslip.io/k3s.yaml.<password/ID>
Use the AWS credentials provided in the "Get credentials for ProjAdmins" page. Terraform expects the following environment variables: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN.
If an error is reported about "default subnet not found", no subnet is defined as the default for the VPC. A subnet can be specified using the AWS_VPC_subnet_id variable.
Log in to the EC2 machine using the ssh command:
ssh -i ssh/<deployment-name>-prod-k3s.pem ubuntu@<EC2 instance allocated>
Please take a look at the logs at /var/log/cloud-init-output.log and /var/log/cloud-init.log on the EC2 machine to determine where the installation failed. The script that is executed is called part-002.