Tyler Adams

Senior DevOps Engineer

About Me

I'm a Senior DevOps Engineer with over 9 years of experience building automation, scalable infrastructure, and deployment pipelines in enterprise environments. I focus on enabling development teams, simplifying delivery processes, and making systems more reliable and maintainable. I like solving hard problems, building tools that get reused, and keeping things just the right amount of automated.

I take pride in owning the work that I deliver — not just building things and closing out tickets, but being accountable for what I do end-to-end. I care deeply about quality, clarity, and standards, and I try to bring that mindset to every team that I’m a part of. "As DevOps Engineers, we’re not the team that follows instructions — we’re the team that writes them."

About This Site

This site is my attempt at a professional landing page — lightweight, fast, and managed using real-world tools and practices. The goal? Be resume-friendly, technically interesting, and extremely low-maintenance.

Overview

This site exists for one simple reason: I wanted a clean, professional-looking, and cost-efficient place to host my resume. But I also wanted it to reflect how I approach work — keep it simple, make it solid, and don't over-engineer it into something difficult to maintain.

Hosting

Everything here is hosted on AWS. This is just a simple static webpage hosted on S3 with CloudFront layered in front for TLS and global distribution. I used ACM to issue a cert for both tadms.com and www.tadms.com via DNS validation in Namecheap. DNS points to the CloudFront distribution.
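The certificate step described above can be sketched with the AWS CLI. This is an illustrative sketch, not the exact commands used (the setup was a mix of console and CLI); note that certificates for CloudFront must be issued in us-east-1, and the `<certificate-arn>` placeholder would be the ARN returned by the first command.

```shell
# Request an ACM cert covering both the apex and www hostnames.
# CloudFront only accepts certificates issued in us-east-1.
aws acm request-certificate \
  --region us-east-1 \
  --domain-name tadms.com \
  --subject-alternative-names www.tadms.com \
  --validation-method DNS

# Look up the CNAME records ACM wants created for DNS validation,
# then add them in the DNS provider (Namecheap in this case).
aws acm describe-certificate \
  --region us-east-1 \
  --certificate-arn <certificate-arn> \
  --query 'Certificate.DomainValidationOptions[].ResourceRecord'
```

Once the CNAME records resolve, ACM marks the certificate issued and it can be attached to the CloudFront distribution.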

Automation

Could I have wired up a GitHub Actions pipeline for git-event-driven CI/CD? Absolutely. Could I have used Terraform for infrastructure-as-code? Also yes. Will I do all of that some day just for fun? Possibly. But for something this simple, it felt like more overhead than value, so for now I set everything up manually through the console and CLI and documented it well.

Deployment

My deployment mechanism is just a simple local Bash function wrapped around the AWS CLI. Not because it's necessary, but because I live in my Bash terminal, and turning things into repeatable commands is second nature. Even if it only saves me a few seconds, it serves as my documentation-as-code for how I've done things.

function deploy_site {
  if [ "$1" == "-h" ] || [ -z "$1" ] ; then
    echo "Usage:    deploy_site [site]"
    echo "Example:  deploy_site www.tadms.com"
    return 0
  fi
  local site_name="$1"
  local site_dir="/mnt/c/www/${site_name}"
  local site_bucket="s3://${site_name}"
  # Fail fast if the local site directory is missing
  if [ ! -d "$site_dir" ] ; then
    echo "Error: ${site_dir} does not exist" >&2
    return 1
  fi
  # Sync local files to the bucket, skipping the .git directory
  if aws s3 sync "${site_dir}" "${site_bucket}" --exclude ".git/*" ; then
    echo "Successfully synced ${site_dir} to ${site_bucket}"
  else
    echo "Error: failed to sync ${site_dir} to ${site_bucket}" >&2
    return 1
  fi
}
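One thing the sync alone doesn't cover: CloudFront caches objects at the edge, so an updated page may not be visible until the cache expires. A follow-up invalidation handles that. This is a sketch — the distribution ID below is a placeholder, and in practice it could be looked up once and folded into the deploy function.

```shell
# Tell CloudFront to drop its cached copies so the freshly synced
# files are served immediately. EXXXXXXXXXXXXX is a placeholder --
# substitute the real distribution ID.
aws cloudfront create-invalidation \
  --distribution-id EXXXXXXXXXXXXX \
  --paths "/*"
```

Invalidating `/*` is the blunt-but-simple option; for a site this small it stays well within CloudFront's free monthly invalidation allowance.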

Costs

The best part? The whole thing costs me almost nothing — just a few cents per month for storage and requests.