Becoming an AWS Cloud Engineer

Yeah, it finally happened. I knew it would happen sooner rather than later. The power of being a cloud engineer is having a more cost-effective alternative to a traditional VPS. My original timetable was to have a multi-user, containerized version of my Django React Job Tracker supporting MongoDB. Then I could put that container in the cloud. The thing is, that’s still well over 40 development hours away. But then some… unexpected events occurred and… It’s time.

This site is now hosted on AWS. 🙂

I was told by a mentor that all it took to become a Cloud Engineer was to create just one site, and maybe an app, on any cloud service. I have also heard that AWS is the gold standard, and the job listings I’ve seen, when they mention cloud development, spell out AWS far more often than Google Cloud Platform or Azure.

What’s been holding me back? Why now? I’ve been “planning” on jumping into AWS. Just as soon as… I can get my Django-React app containerized. That’s likely still a few weeks away. Then the first week of October, I had a scare: my main site wasn’t coming up for a few hours. Eeek! Plus, I have two prospective jobs, and both will be in AWS land. So…

Going back to my training over the summer, coffee in hand, I finally took the plunge! Creating the root AWS account was actually pretty simple. And my very next step was to briefly check out some Management Console training, then… turn on MFA! Turning on MFA for root was very easy and straightforward.

The next set of challenges: figuring out Identity and Access Management (IAM). You’d think it would be simple, but there is the IAM core and the IAM Identity Center (formerly known as Single Sign-On), and I kept somehow getting dumped into the latter. Further complicating matters was understanding AWS IAM’s implementation of policies. As an experienced programmer, I’m not used to all of those provisioning terms. Luckily, I stumbled upon ‘Amazon Q’ early on and used it to coach me through the terminology. With that, I was able to create a user in the correct spot, properly set up that user’s authentication, and then add the appropriate permissions.
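For anyone curious, here’s roughly what that console work would look like in boto3. This is a minimal sketch: the user name and temporary password are placeholders, and the broad AdministratorAccess managed policy is standing in for whatever permissions actually make sense for you.

    import boto3

    iam = boto3.client("iam")

    # Create a regular (non-root) user for day-to-day work.
    iam.create_user(UserName="day-to-day-admin")  # placeholder name

    # Give the user console access, forcing a password change at first sign-in.
    iam.create_login_profile(
        UserName="day-to-day-admin",
        Password="ChangeMe-Temp-Passw0rd!",  # placeholder; use a real secret
        PasswordResetRequired=True,
    )

    # Attach an AWS-managed policy instead of hand-writing one.
    iam.attach_user_policy(
        UserName="day-to-day-admin",
        PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
    )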

This is when I thought I could create a simple static site using Amazon S3. I created a bucket and struggled to associate a Route 53 record with it. One of my discoveries: the bucket name can’t (or maybe just shouldn’t, and I couldn’t figure out the workaround) be any random string; it needs to match the domain name itself. This was a circular frustration for a while, as I was hoping to start with a static-page implementation: nope. Nothing worked. That’s when I broke down and created the Lightsail WordPress instance (not a container).
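For the record, here’s my rough understanding of the static-site setup in boto3 form. It’s a sketch only, with a placeholder domain, and it still leaves out the public-read bucket policy and the Route 53 alias record that actually point the domain at the S3 website endpoint.

    import boto3

    s3 = boto3.client("s3", region_name="us-west-2")

    # The bucket name has to match the domain being served.
    bucket = "www.example.com"  # placeholder domain

    s3.create_bucket(
        Bucket=bucket,
        CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
    )

    # Turn on static website hosting for the bucket.
    s3.put_bucket_website(
        Bucket=bucket,
        WebsiteConfiguration={
            "IndexDocument": {"Suffix": "index.html"},
            "ErrorDocument": {"Key": "error.html"},
        },
    )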

Setting up the Lightsail instance was initially easy. Looking through the list of regions, the one closest to me, Northern California, wasn’t available. The closest Lightsail instances are in… Oregon. Good enough. I selected the tiniest plan available, $5 per month, with the “Linux WordPress” image. The console gave me the info to set up WordPress on this particular image. Directly pulling up the IP address, everything worked fine! But… not the domain name. Grrr…
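The console walked me through all of that, but as far as I can tell the same instance could be spun up from boto3 along these lines. It’s a sketch with a made-up instance name, and it looks up the cheapest bundle rather than guessing at its ID.

    import boto3

    lightsail = boto3.client("lightsail", region_name="us-west-2")  # Oregon

    # Find the cheapest plan instead of hard-coding a bundle ID.
    bundles = lightsail.get_bundles()["bundles"]
    smallest = min(bundles, key=lambda b: b["price"])

    # One WordPress instance on the smallest bundle.
    lightsail.create_instances(
        instanceNames=["wordpress-blog"],   # placeholder name
        availabilityZone="us-west-2a",
        blueprintId="wordpress",            # the "Linux WordPress" image
        bundleId=smallest["bundleId"],
    )

    # The public IP shows up once the instance is running.
    info = lightsail.get_instance(instanceName="wordpress-blog")
    print(info["instance"]["publicIpAddress"])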

Calming back down a bit… I put the Lightsail instance’s IP address into a brand-new Route 53 hosted zone (after deleting any old zones). This time, a “missed step” showed up that made complete sense: Route 53 gives you a list of four name servers to update your domain registration with, which was what had me confused during the prior week. My other discovery was that I was finally able to view my old VPS provider’s website (they had locked me out for half a year…). Logging into that account, I replaced their two DNS entries with the four AWS Route 53 name servers and…
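In boto3 terms, what I clicked through roughly amounts to the following sketch. The domain is a placeholder, and an IP from the documentation range stands in for the Lightsail instance’s address.

    import boto3

    r53 = boto3.client("route53")

    # Create the hosted zone; CallerReference just has to be unique per request.
    zone = r53.create_hosted_zone(Name="example.com", CallerReference="blog-migration-1")
    zone_id = zone["HostedZone"]["Id"]

    # These four name servers are what go into the registrar's DNS settings.
    print(zone["DelegationSet"]["NameServers"])

    # Point the domain at the Lightsail instance's public IP with an A record.
    r53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "example.com",
                    "Type": "A",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],  # placeholder IP
                },
            }]
        },
    )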

2-3 hours later, the URL finally worked…

Mostly.

HTTP worked perfectly.  But these days, everybody uses HTTPS, which requires a certificate.

I would call the place this left me: Certificate Hell.

I don’t use that phrase lightly. As programmers, we don’t handle advanced configurations. We deal with data structures, algorithms, performance, code… what certificate? Where do I create it? Where do I log it?!

HTTPS requires a certificate. AWS does have a Certificate Manager (ACM), but apparently that is NOT what to use here. Nope: Lightsail has its own certificate-creation features. This confused me even more, since in most environments that would mean it’s all automatic and I don’t have to do anything. Nope. Even with Q, the confusion remained: which certificate do I use?! And… in this day and age, and using AWS, do I really need to be using a… command prompt to finish the certificate process?!
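For what it’s worth, requesting a certificate through Lightsail’s own API (rather than ACM) looks roughly like this in boto3. The names are placeholders, and as far as I can tell these Lightsail-managed certificates attach to Lightsail distributions and load balancers rather than to the instance itself, which seems to be why the command-prompt step on the instance was still unavoidable.

    import boto3

    lightsail = boto3.client("lightsail", region_name="us-west-2")

    # Request a certificate managed by Lightsail (not ACM) for the domain.
    lightsail.create_certificate(
        certificateName="blog-cert",                    # placeholder name
        domainName="example.com",                       # placeholder domain
        subjectAlternativeNames=["www.example.com"],
    )

    # Validation is via DNS: Lightsail hands back CNAME records to add in Route 53.
    certs = lightsail.get_certificates(
        certificateName="blog-cert",
        includeCertificateDetails=True,
    )
    print(certs["certificates"][0]["certificateDetail"]["domainValidationRecords"])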

I think that my biggest frustrations with the process, as beneficial as Q was for other things, were two canned responses that I would continually get:

  • “I cannot help with anything security-related”
  • “This change may take minutes to several days”

Which is why it took me a full week to get through all of this and reach the final result. And ironically, when things did not make sense with Amazon Q, the place to turn was… ChatGPT!

Would I have done this setup any other way?  Nope.  Sure, there were some rather tense moments dealing with the frustration of things not making sense and not working.  However, I got to learn more about some of the basic components of AWS:

  • Identity and Access Management (and the associated IAM Identity Center / Single Sign-On)
  • S3 Buckets (I’ll revisit later on)
  • Route 53
  • Certificate Manager
  • Lightsail

In the not-too-distant future, I’ll spend more time figuring out the S3 portion, then eventually Fargate (for the Python container), Lambda (at first used to turn Fargate on), CloudWatch (to turn Fargate off), and DynamoDB (to host app data).