George McKinney Adventures in Software Development

May 17, 2025

AWS EC2 Auto Scaling Group

Filed under: AWS — georgemck @ 12:23 pm

Auto Scaling Group using a Launch Template and a Load Balancer

The following commands use the AWS Command Line Interface (CLI).

 

1. aws ec2 create-security-group --group-name ELBSG --description "ELB Security Group"

2. aws ec2 authorize-security-group-ingress --group-id sg-XXXXXXXXXXXX --protocol tcp --port 80 --cidr 0.0.0.0/0

3. aws ec2 describe-subnets

4. aws elbv2 create-load-balancer --name CLELB --subnets subnet-XXXXXXXXXXXX subnet-XXXXXXXXXXXX

5. aws ec2 create-launch-template --launch-template-name auto-scaling-template --version-description version1 --launch-template-data '{"ImageId":"ami-0953476d60561c955","InstanceType":"t2.micro"}'

6. aws autoscaling create-auto-scaling-group --auto-scaling-group-name ASG01 --launch-template LaunchTemplateId=lt-XXXXXXXXXXXX --min-size 3 --max-size 5 --vpc-zone-identifier "subnet-XXXXXXXXXXXX,subnet-XXXXXXXXXXXX"

7. aws ec2 describe-instances
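Each step above requires pasting an ID from the previous step. As a sketch of how the same workflow might be scripted end to end, the IDs can be captured with --query; the AMI, security-group name, and ASG name are the ones used above, everything else is a placeholder, and a default VPC with at least two subnets is assumed:

```shell
#!/bin/bash
# Sketch: chain the steps above, capturing IDs instead of pasting them by hand.
set -euo pipefail

SG_ID=$(aws ec2 create-security-group \
  --group-name ELBSG --description "ELB Security Group" \
  --query 'GroupId' --output text)

aws ec2 authorize-security-group-ingress \
  --group-id "$SG_ID" --protocol tcp --port 80 --cidr 0.0.0.0/0

# Pick the first two subnets returned for the account's default VPC.
read -r SUBNET1 SUBNET2 <<< "$(aws ec2 describe-subnets \
  --query 'Subnets[0:2].SubnetId' --output text)"

aws elbv2 create-load-balancer --name CLELB --subnets "$SUBNET1" "$SUBNET2"

LT_ID=$(aws ec2 create-launch-template \
  --launch-template-name auto-scaling-template \
  --version-description version1 \
  --launch-template-data '{"ImageId":"ami-0953476d60561c955","InstanceType":"t2.micro"}' \
  --query 'LaunchTemplate.LaunchTemplateId' --output text)

aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name ASG01 \
  --launch-template "LaunchTemplateId=$LT_ID" \
  --min-size 3 --max-size 5 \
  --vpc-zone-identifier "$SUBNET1,$SUBNET2"
```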

 

February 27, 2025

AWS Transfer Usage for SFTP

Filed under: AWS,Security — georgemck @ 3:14 am

AWS Transfer is an SFTP service within AWS. Using SSH keys, an SFTP connection can be established to upload and download files to AWS (most likely to S3). There are two approaches to restricting what the user can do. The simple approach is the built-in AWS Transfer feature, the restricted directory: while creating the user, check the restricted box and select the home directory and optional path. The second approach uses an IAM policy, where you can set fine-grained permissions through Effect, Action, and Resource (by ARN):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion",
        "s3:GetObjectVersion"
      ],
      "Resource": "arn:aws:s3:::my.bucket.com/restricted-home/path/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::my.bucket.com"
    }
  ]
}

Note: to create a service-managed user, follow the standard instructions for generating an SSH key on macOS, Linux/Unix, or Windows.

Create an SSH key with RSA:
ssh-keygen -t rsa -b 4096 -f Zbsg

Connect to SFTP
sftp -i Zbsg Zbsg@sftp.myserver.com

Once connected (if you have permission), you can upload a file with the put command:
sftp> put file.txt
Uploading file.txt to /file.txt
file.txt 100% 0 0.0KB/s 00:00
sftp>

Download a file
sftp -i Zbsg Zbsg@sftp.myserver.com:file.txt file.txt
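For scripted transfers, sftp's batch mode can run the same commands non-interactively. A sketch using the example key and host from above (remote-file.txt is a hypothetical file name):

```shell
# Non-interactive transfer: -b - reads batch commands from stdin.
sftp -i Zbsg -b - Zbsg@sftp.myserver.com <<'EOF'
put file.txt
get remote-file.txt
EOF
```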

November 14, 2024

Working with AWS SDK in JavaScript v3

Filed under: AWS — georgemck @ 12:10 pm

https://github.com/aws/aws-sdk-js-v3

https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/javascript_code_examples.html

https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/

https://aws.amazon.com/blogs/compute/managing-aws-lambda-runtime-upgrades/

October 3, 2024

Deploying WordPress on AWS Lightsail

Filed under: AWS,Lightsail,Linux — georgemck @ 8:31 am

After logging into AWS and launching the Lightsail service, it is necessary to create an instance. The WordPress instance is based on the Debian Linux operating system as packaged by Bitnami. Lightsail applications run standalone from AWS at large, though they can be integrated with it. When an instance is created, a hosted zone for the domain is created in the Route 53 service. It is barebones, containing only the SOA and NS records. If you need to migrate an existing website into AWS/Lightsail, you must add the remaining name service records (MX, CNAME, TXT, etc.). This can be done by exporting the existing zone and importing it into Route 53, leaving out the existing SOA and NS records (optional, depending on whether you will still need them).

Accessing the database directly is not allowed. To use the phpMyAdmin included in Lightsail, you must create an SSH tunnel; Bitnami documents the procedure, including a video walkthrough.
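In outline, the tunnel forwards a local port to the web server on the instance; the key file name and IP address below are placeholders:

```shell
# Forward local port 8888 to the instance's web server, then browse to
# http://127.0.0.1:8888/phpmyadmin while the tunnel is open.
ssh -N -L 8888:127.0.0.1:80 -i LightsailDefaultKey.pem bitnami@203.0.113.10
```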

To access the instance over SFTP with a program like FileZilla, follow the AWS documentation.

If you add files to WordPress, it is likely that there will be permissions issues; the admin dashboard will show messages about not being able to access certain files or folders. You can change the permissions by connecting to the instance using SSH (this procedure is shown as a prerequisite for opening phpMyAdmin). After successfully connecting, you can change ownership of the problematic files and folders with the following commands, which must be updated for the target machine.

sudo chown -R daemon:daemon uploads
sudo chown -R daemon:daemon application/config
sudo chown -R daemon:daemon /opt/bitnami/apache/htdocs/temp

In order for the domain name to resolve on the internet, you will need to update the hosted zone: add an A record pointing to the IP address of the Lightsail instance. This will work but will not support HTTPS (that is, there is no SSL certificate). AWS documents how to add an SSL certificate to Lightsail; however, Bitnami has a better solution for this.
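Bitnami's solution is its bncert-tool, which (on images that include it) walks through issuing and auto-renewing a Let's Encrypt certificate:

```shell
# Run Bitnami's HTTPS configuration tool on the instance; it prompts for the
# domain(s) and sets up Let's Encrypt issuance, renewal, and redirects.
sudo /opt/bitnami/bncert-tool
```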

February 26, 2024

AWS Transcribe CLI Workflow

Filed under: AWS,ffmpeg — georgemck @ 6:28 pm

Recently I needed to create transcriptions for a number of videos. I decided to use Amazon Transcribe rather than typing them out by hand, with ffmpeg and S3 to lighten the load.

 

-- 1. separate audio from the video file
ffmpeg -i input.mp4 -vn -acodec pcm_s16le -ar 44100 -ac 2 output.wav

-- 2. upload audio to S3 bucket
aws s3 cp output.wav s3://transcribe-for-canvas

-- 3. extract the text from the audio through transcription
aws transcribe start-transcription-job --transcription-job-name canvascaptions --media MediaFileUri=s3://transcribe-for-canvas/output.wav --output-bucket-name transcribe-for-canvas --subtitles Formats=srt --language-code en-US --region us-east-1

-- 4. check on the transcription progress
aws transcribe get-transcription-job --transcription-job-name canvascaptions

-- 5. download the transcription files
aws s3 cp s3://transcribe-for-canvas . --recursive
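Step 4 can be repeated until the job finishes. A small sketch of a polling loop, using the same job name as above:

```shell
# Poll the job status until Transcribe reports COMPLETED or FAILED.
while true; do
  STATUS=$(aws transcribe get-transcription-job \
    --transcription-job-name canvascaptions \
    --query 'TranscriptionJob.TranscriptionJobStatus' --output text)
  echo "status: $STATUS"
  if [ "$STATUS" = "COMPLETED" ] || [ "$STATUS" = "FAILED" ]; then
    break
  fi
  sleep 15
done
```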

 

ffmpeg cheatsheet

Filed under: ffmpeg — georgemck @ 10:15 am

 

 

-- Combine video and audio
ffmpeg -i 'video.mp4' -i 'audio.m4a' -c copy -map 0:0 -map 1:0 output.mp4

-- Extract audio from video
ffmpeg -i input.mp4 -vn -acodec pcm_s16le -ar 44100 -ac 2 output.wav

 

 

February 14, 2024

LAMP Server on AWS EC2 Amazon Linux 2023 AMI

Filed under: Amazon Linux,AWS,Fedora — georgemck @ 2:40 pm

#!/bin/bash
dnf upgrade -y
dnf install -y httpd wget php-fpm php-mysqli php-json php php-devel
dnf install -y mariadb105-server
systemctl start httpd
systemctl enable httpd
systemctl is-enabled httpd
usermod -a -G apache ec2-user
chown -R ec2-user:apache /var/www
chmod 2775 /var/www && find /var/www -type d -exec sudo chmod 2775 {} \;
find /var/www -type f -exec sudo chmod 0664 {} \;
echo "<?php phpinfo(); ?>" > /var/www/html/phpinfo.php
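Note that the script installs MariaDB but never starts it. A sketch of the usual follow-up steps, run interactively since mysql_secure_installation prompts for input:

```shell
# Start MariaDB now, enable it at boot, and run the interactive hardening script.
sudo systemctl start mariadb
sudo systemctl enable mariadb
sudo mysql_secure_installation
```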

February 7, 2024

AL2023 PostgreSQL DNF Available

Filed under: AL2023,Amazon Linux,AWS,Cloud,Linux — georgemck @ 2:09 pm

This is a list of packages available in Amazon Linux 2023, as of today, on Elastic Beanstalk.

 

postgresql-odbc.x86_64
13.01.0000-5.amzn2023.0.1

postgresql-odbc-tests.x86_64
13.01.0000-5.amzn2023.0.1

postgresql15-contrib.x86_64
15.5-1.amzn2023.0.1

postgresql15-docs.x86_64
15.5-1.amzn2023.0.1

postgresql15-llvmjit.x86_64
15.5-1.amzn2023.0.1

postgresql15-plperl.x86_64
15.5-1.amzn2023.0.1

postgresql15-plpython3.x86_64
15.5-1.amzn2023.0.1

postgresql15-pltcl.x86_64
15.5-1.amzn2023.0.1

postgresql15-private-devel.x86_64
15.5-1.amzn2023.0.1

postgresql15-server-devel.x86_64
15.5-1.amzn2023.0.1

postgresql15-static.x86_64
15.5-1.amzn2023.0.1

postgresql15-test.x86_64
15.5-1.amzn2023.0.1

postgresql15-test-rpm-macros.noarch
15.5-1.amzn2023.0.1

postgresql15-upgrade.x86_64
15.5-1.amzn2023.0.1

postgresql15-upgrade-devel.x86_64
15.5-1.amzn2023.0.1

 

This is useful when launching an instance, to see which packages are available in the repository:

dnf list available postg*

Nice reading on Elastic Beanstalk

September 4, 2023

Create Lambda Layer on AWS CloudShell

Filed under: AWS,CloudShell — georgemck @ 10:51 pm

This is a mess. I will edit it later

 

This is the command history from building a Lambda layer in Python. I had to add a C compiler, compile Python 3.9, install a specific requests module, upload the layer to S3, and add it to the Lambda function before I could use it, but after all that, it worked.

 

[cloudshell-user@ip-10-2-31-49 packaging]$ history 100

============================================================================

 

1 sudo yum -y update
2 python -V
3 wget https://www.python.org/ftp/python/3.9.16/Python-3.9.16.tgz
4 tar xvf Python-3.9.16.tgz
5 cd Python-3.9.16/
6 ./configure --enable-optimizations
7 sudo make altinstall
8 ls
9 ./configure --enable-optimizations
10 sudo yum groupinstall "Development Tools"
11 gcc --version
12 ./configure --enable-optimizations
13 sudo make altinstall
14 python -V
15 ls /usr/local/bin/
16 ls /usr/local/bin/python3.9
17 /usr/local/bin/python3.9 -V
18 alias python='/usr/local/bin/python3.9'
19 python -V
20 gcc --version
21 pwd
22 cd ..
23 cd ~
24 ls
25 mkdir packaging
26 cd packaging/
27 python3.9 -m venv layer_package
28 source layer_package/bin/activate
29 pip install requests
30 pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host files.pythonhosted.org requests
31 pip config set global.trusted-host "pypi.org files.pythonhosted.org pypi.python.org" --trusted-host=pypi.python.org --trusted-host=pypi.org --trusted-host=files.pythonhosted.org
32 pip install requests
33 pip install --trusted-host pypi.python.org linkchecker
34 pip -V
35 pip install --trusted-host pypi.python.org requests
36 pip install --trusted-host files.pythonhosted.org --trusted-host pypi.org --trusted-host pypi.python.org requests
37 yum install openssl-devel
38 sudo yum install openssl-devel
39 cd /usr/src
40 deactivate
41 sudo yum install openssl-devel
42 ls
43 cd ~
44 ls
45 cd Python-3.9.16
46 ls
47 ./configure --enable-optimizations
48 ls /usr/local/bin/
49 ls /usr/local/bin/python3.9
50 rm -r /usr/local/bin/python3.9
51 sudo rm -r /usr/local/bin/python3.9
52 ls /usr/local/bin/python3.9
53 sudo make altinstall
54 python -V
55 cd ~/packaging/
56 python3.9 -m venv layer_package
57 source layer_package/bin/activate
58 pip install requests
59 deactivate
60 ls
61 mkdir python
62 cp -r layer_package/lib/python3.9/site-packages/* python/
63 zip -r lambda-layer-requests-python3.9-x86_64.zip python
64 ls
65 aws s3 ls
66 aws s3 mb csa-va-lambda-layers-python-requests
67 aws s3 mb s3://csa-va-lambda-layers-python-requests
68 aws s3 ls
69 ls
70 aws s3 cp lambda-layer-requests-python3.9-x86_64.zip s3://csa-va-lambda-layers-python-requests
71 ls
72 ls python/
73 ls python/urllib3
74 sudo rm -r python/urllib3
75 ls python/requests
76 sudo rm -r python/requests
77 pip install requests==2.28.2 -t ./python --no-user
78 pip3 install requests==2.28.2 -t ./python --no-user
79 ls python/
80 zip -r lambda-layer-requests-python3.9-x86_64-2.zip python
81 ls
82 aws s3 cp lambda-layer-requests-python3.9-x86_64-2.zip s3://csa-va-lambda-layers-python-requests
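Distilled from the history above, the essential steps were roughly the following (same bucket name and requests version as in the transcript; assumes pip3, zip, and the AWS CLI are available):

```shell
#!/bin/bash
# Sketch: build a requests Lambda layer zip and publish it to S3.
set -euo pipefail

mkdir -p packaging/python && cd packaging

# A Python Lambda layer expects packages under a top-level python/ directory.
pip3 install requests==2.28.2 -t ./python --no-user

zip -r lambda-layer-requests-python3.9-x86_64.zip python

aws s3 mb s3://csa-va-lambda-layers-python-requests
aws s3 cp lambda-layer-requests-python3.9-x86_64.zip \
  s3://csa-va-lambda-layers-python-requests
```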

Install gcc on AWS CloudShell

Filed under: AWS,CloudShell — georgemck @ 9:53 pm

Recently, I had the need to compile Python 3.9 on AWS CloudShell which was necessary to create a Lambda Layer for the requests module.

This required adding a C compiler to CloudShell. The steps are:

Step 1: Update packages.

  sudo yum update

Step 2: Install GCC

  sudo yum groupinstall "Development Tools"

Step 3: Check version

  gcc --version

 

[cloudshell-user@ip-10-2-31-49]$ gcc --version
gcc (GCC) 7.3.1 20180712 (Red Hat 7.3.1-15)
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

