Accidentally deleted my private subnet route tables in the process though, which broke S3 access for ~20 minutes. Could've been much worse #fail #outage
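(Why deleting route tables kills S3: a gateway-type VPC endpoint works by injecting a prefix-list route into each route table associated with it, so when the tables went away, so did the S3 routes. The association is one resource in Terraform; a sketch in current syntax, with made-up names:)

    resource "aws_vpc_endpoint_route_table_association" "private_s3" {
      # re-adds the S3 prefix-list route to the private route table
      vpc_endpoint_id = aws_vpc_endpoint.s3.id
      route_table_id  = aws_route_table.private.id
    }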

permalink

Shaved another $35 (40%) off my #AWS bill by disabling the NAT on my app subnets. Yay immutable infrastructure and VPC endpoints #win
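(The mechanism, for the curious: a gateway-type S3 endpoint carries S3 traffic over AWS's own network at no charge, so app subnets whose only outbound dependency is S3 don't need a NAT gateway at all. A minimal Terraform sketch, with made-up names and an example region:)

    resource "aws_vpc_endpoint" "s3" {
      vpc_id            = aws_vpc.main.id
      service_name      = "com.amazonaws.us-east-1.s3"
      vpc_endpoint_type = "Gateway"
      route_table_ids   = [aws_route_table.private.id]
    }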

permalink

I made a thing! burnfastburnbright.com. Bootstrap 4, Route 53 domains, and Terraform made this really easy. Went from 0 to 100 in about 1.5 hours. #win
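(The Route 53 part is genuinely tiny. Roughly this, in Terraform, with a placeholder IP standing in for the real one:)

    resource "aws_route53_zone" "bfbb" {
      name = "burnfastburnbright.com"
    }

    resource "aws_route53_record" "apex" {
      zone_id = aws_route53_zone.bfbb.zone_id
      name    = "burnfastburnbright.com"
      type    = "A"
      ttl     = 300
      records = ["203.0.113.10"]
    }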

permalink

New in v0.117.0: nuked the time-elements webcomponents due to Firefox breakage (cut the JS payload by a factor of 4), plus infra improvements to resume handling #win

permalink

The v0.116.0 deploy was done using a spot instance with Packer. A bigger instance for half the price #aws
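(In Packer's amazon-ebs builder this is basically one field. A sketch in Packer's newer HCL syntax; everything here besides spot_price is a placeholder:)

    source "amazon-ebs" "hyperbola" {
      region        = "us-east-1"
      instance_type = "c5.large"   # roomier than the usual on-demand choice
      spot_price    = "auto"       # "auto" lets Packer set the max bid
      source_ami    = "ami-00000000"
      ssh_username  = "ubuntu"
      ami_name      = "hyperbola-spot-build"
    }

    build {
      sources = ["source.amazon-ebs.hyperbola"]
    }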

permalink

New features in v0.116.0: Bootstrap 4, removal of the RSS and Atom feeds, a 100% webpack frontend build, CSS purification improvements, and healthz middleware

permalink

Most frequently used commands, redux hyperbo.la/lifestream/146

permalink

LOL that was only six years ago ... don't let your dreams stay dreams: hyperbo.la/lifestream/51 #aws

permalink

Welp, that didn't last long. CloudFlare only queries a subset of NS records to check for liveness and has determined that I no longer use CloudFlare. Working on purging them from #terraform and the registrar now #fail

permalink

It is a good thing that I've automated things well enough that I don't need the bastion #win

permalink

Even more cost savings: a dynamically provisioned bastion CloudFormation stack #terraform #aws
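(One way to express that in Terraform is a count-gated stack, so the bastion costs nothing the vast majority of the time it's down. Names and the template path here are made up:)

    variable "bastion_enabled" {
      type    = bool
      default = false
    }

    resource "aws_cloudformation_stack" "bastion" {
      count         = var.bastion_enabled ? 1 : 0
      name          = "bastion"
      template_body = file("${path.module}/bastion.yml")

      parameters = {
        VpcId = aws_vpc.main.id
      }
    }

terraform apply -var=bastion_enabled=true brings it up; applying with the default tears it back down.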

permalink

More cost savings: the RAM footprint of a hyperbola backend is 143MB, so I switched from t2.micro (1GB of RAM) to t2.nano (512MB) #aws #win

permalink

I accidentally created a CNAME (instead of an A record) for an IP today. Lots of confusing errors from nslookup, ssh, and host. Meanwhile dig appeared to resolve the record. #fail
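(What tripped everything up: dig prints the CNAME answer verbatim and looks fine, but anything that actually needs an address then has to chase "203.0.113.10." as a hostname, which fails. Illustrated in Terraform with a made-up record name and a placeholder IP; you'd only ever keep one of these:)

    # wrong: a CNAME's target must be a hostname, not an IP
    resource "aws_route53_record" "broken" {
      zone_id = aws_route53_zone.main.zone_id
      name    = "db.hyperbo.la"
      type    = "CNAME"
      ttl     = 300
      records = ["203.0.113.10"]
    }

    # right: an A record holds the IP directly
    resource "aws_route53_record" "fixed" {
      zone_id = aws_route53_zone.main.zone_id
      name    = "db.hyperbo.la"
      type    = "A"
      ttl     = 300
      records = ["203.0.113.10"]
    }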

permalink

hyperbola: now with multi-homed DNS. AWS Route 53 and CloudFlare, made possible by Terraform. (In the process, I upgraded hyperbo.la mail to a 2048-bit DKIM key) #win #redundancy #devops
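(Multi-homed means both providers are declared in the same Terraform config and every record is written twice. A sketch of the shape; the cloudflare provider's argument names have drifted across versions, so don't take this verbatim:)

    resource "aws_route53_record" "www" {
      zone_id = aws_route53_zone.main.zone_id
      name    = "www.hyperbo.la"
      type    = "A"
      ttl     = 300
      records = ["203.0.113.10"]
    }

    resource "cloudflare_record" "www" {
      zone_id = var.cloudflare_zone_id
      name    = "www"
      type    = "A"
      value   = "203.0.113.10"
      proxied = false  # plain DNS only, no CloudFlare proxy in front
    }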

permalink

I accomplished this migration with ~no downtime #win. I spun up the new infrastructure and then deployed new AMIs with updated service records. I did have ~2 minutes of 500s when I accidentally overwrote the old mysql DNS record due to a bad copypasta #fail

permalink

Switched from 3 to 2 backend machines. 1 is enough to handle the load I get, so 2 is the bare minimum that still gives redundancy #aws

permalink

Removed the dependency on redis by switching to Django's built-in database cache backend. My redis cluster was used only for admin sessions and for caching a sidebar on the lifestream page. Unnecessary overhead #aws

permalink

Switched DB instance type from db.t2.small to db.t2.micro. From running my Linode I knew that MySQL never used more than ~400MB of RAM, so this was safe. My database is tiny #aws

permalink

Switched from Aurora to a multi-AZ RDS instance. I don't need the complex topologies that Aurora allows, and Aurora forced me to use an overprovisioned instance type #aws
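(multi_az on a plain RDS instance is a single flag, and unlike Aurora it runs happily on the small burstable classes. A sketch with made-up names:)

    resource "aws_db_instance" "hyperbola" {
      identifier        = "hyperbola"
      engine            = "mysql"
      instance_class    = "db.t2.small"
      allocated_storage = 20
      multi_az          = true

      db_name  = "hyperbola"
      username = "hyperbola"
      password = var.db_password
    }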

permalink

Now that I've shown I can go all out with the most expensive #AWS components, today I exercised my cost-efficiency and right-sizing muscles. I cut my AWS bill in half with the steps detailed in the posts above.

permalink

3 AZs I feel so alive #aws

permalink

Just bumped the backend ASG from 1 -> 3 t2.micros. With this change, all parts of hyperbola (redis, mysql, backend, lb) are now multi-AZ #win
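(Multi-AZ for the backend just means the ASG spans a subnet in each AZ. A sketch with made-up names, assuming count-based subnets:)

    resource "aws_autoscaling_group" "backend" {
      name                 = "hyperbola-backend"
      min_size             = 3
      max_size             = 3
      desired_capacity     = 3
      vpc_zone_identifier  = aws_subnet.app[*].id  # one subnet in each of the 3 AZs
      launch_configuration = aws_launch_configuration.backend.name
    }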

permalink

The magic incantation required to get lifestream archive views working locally:

    mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root mysql

(it loads the system zoneinfo into MySQL's time_zone tables, which timezone-aware date queries need)

permalink

Today's shipped email featuring subtly modified lyrics from Kanye's Flashing Lights

permalink

The magic command to make Homebrew work after uninstalling Xcode:

    sudo xcode-select -switch /Library/Developer/CommandLineTools

#win

permalink