Results for #fail

23:07 utc dec 28 2018 permalink

The most expensive part of hyperbola's #aws infrastructure is the SSM PrivateLink endpoint in 3 AZs #fail #cost

07:39 utc nov 17 2018 permalink

Just accidentally truncated my .bash_history. Restored from backup but the latest was 55 days ago. #fail
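A common guard against exactly this kind of loss — a sketch for ~/.bashrc, assuming bash; the sizes are arbitrary:

```shell
# ~/.bashrc sketch — reduce the blast radius of history accidents
shopt -s histappend          # append to the history file instead of overwriting it
HISTSIZE=100000              # lines kept in memory
HISTFILESIZE=200000          # lines kept in ~/.bash_history
PROMPT_COMMAND='history -a'  # flush each command to disk as it runs
```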

16:49 utc nov 11 2018 permalink

I was hard down for ~5min last night while rolling out secrets in parameter store. 0.149.0, 0.149.1, 0.149.2, and 0.149.3 were bad releases #fail. 0.149.4 is stable; postmortem pending.

23:12 utc nov 04 2018 permalink

The computers did exactly what I told them to do 😕 #fail

23:11 utc nov 04 2018 permalink

Add in some manual #terraform state edits and some deletions in the #aws console and we're recovered #fail #win

23:05 utc nov 04 2018 permalink

The cleanup script didn't error because my set flags were in the shebang, but #packer was invoking the script via bash instead of executing it directly #fail So many yaks.
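To illustrate the trap (script name hypothetical): the kernel only honors shebang flags when the script is executed directly, so when a tool runs `bash cleanup.sh` the shebang is just a comment and the flags vanish. The fix is to set them in the body:

```shell
#!/bin/sh
# cleanup.sh (hypothetical) — don't put -e up in the shebang line;
# when packer runs `bash cleanup.sh`, that line is never parsed for flags.
set -eu   # setting the flags here makes every invocation strict

echo "cleanup running with errexit active"
```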

23:04 utc nov 04 2018 permalink

This change was introduced in 0.146.0 but did not manifest due to a bug in the cleanup script. I was not passing -y to apt autoremove, which caused the command to abort and end the script with an error. #fail
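The fix in the cleanup script is small — a sketch of the relevant fragment, using apt's standard non-interactive knobs:

```shell
# Build-script fragment: without -y, apt-get waits for a confirmation it
# can never get in a non-interactive build and exits nonzero, which
# (once set -e actually applies) kills the script.
export DEBIAN_FRONTEND=noninteractive
apt-get autoremove -y
```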

22:57 utc nov 04 2018 permalink

Got into an undeployable state due to differences in #provisioning between local and prod environments #fail

02:25 utc oct 28 2018 permalink

I accidentally skipped v0.139.0 today because prettier barfed while cutting the release and I forgot to reset my git tree. I guess I forgot to run prettier on my whole repo when I enabled it. #fail

02:48 utc apr 03 2018 permalink

Sometimes using the #AWS cost and usage reports is just not fun, mostly because the myriad columns are undocumented. #fail

06:10 utc feb 24 2018 permalink

Semantic versioning is a lie (looking at you #packer). My config stopped working because a key was deprecated between 1.1.x and 1.2.x. Somehow this prevented the config from validating. #fail

06:13 utc feb 11 2018 permalink

So it turns out I shouldn't have ignored that MySQL backtrace when printing the help text of my new django management command in dev. That's why it hung when building the AMI. #fail One line fix:

06:12 utc feb 11 2018 permalink

I have pinned dependencies everywhere _except_ my AMI build pipeline. Bitten by the packer 1.2.0 upgrade breaking the ansible-local provisioner. #fail
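One cheap mitigation is to pin the builder itself and fail fast on drift; a sketch, with the helper name and versions hypothetical:

```shell
# Refuse to build when a tool's version doesn't match the pin.
check_pin() {  # usage: check_pin <tool> <actual> <pinned>
  if [ "$2" != "$3" ]; then
    echo "$1 is $2 but the pipeline pins $3" >&2
    return 1
  fi
}

# In the AMI pipeline, before invoking packer (version string assumed
# to look like "Packer v1.1.3"):
# check_pin packer "$(packer version | awk '{print $2; exit}')" v1.1.3
```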

17:57 utc dec 02 2017 permalink

Found a lifestream bug during my Django 2.0 upgrade that was never exercised on the live site because I've never had more than 40 posts in a month #fail

05:58 utc nov 30 2017 permalink

Accidentally deleted my private subnet route tables in the process, though, which broke S3 access for ~20 minutes. Could've been much worse #fail #outage

02:37 utc nov 11 2017 permalink

welp that didn't last long. CloudFlare only queries a subset of NS records to check for liveness and has determined that I no longer use CloudFlare. Working on purging them from #terraform and registrar now #fail

01:45 utc nov 06 2017 permalink

I accidentally created a CNAME (instead of an A record) for an IP today. Lots of confusing errors from nslookup, ssh, and host. Meanwhile dig appeared to resolve the record. #fail
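This class of mistake is easy to lint for before applying DNS changes; a sketch (function name hypothetical) that routes dotted-quad values to A records:

```shell
# Pick the record type a value actually needs: IPv4 addresses get A
# records; a CNAME must point at another hostname, never at a bare IP.
record_type_for() {
  case "$1" in
    *[!0-9.]*) echo CNAME ;;  # contains letters etc.: a hostname
    *.*.*.*)   echo A     ;;  # dotted quad: must be an A record
    *)         echo CNAME ;;  # anything else: punt to CNAME
  esac
}
```

The confusion in the post makes sense: dig reports the record exactly as stored, while nslookup, ssh, and host go on to chase the CNAME target as if it were a hostname and fail.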

15:54 utc nov 04 2017 permalink

I accomplished this migration with ~no downtime #win. I spun up the new infrastructure and then deployed new AMIs with updated service records. I did have ~2 minutes of 500s when I accidentally overwrote the old mysql DNS record due to a bad copypasta #fail

18:51 utc oct 28 2017 permalink

#history throwback to the time that my wiki was spammed by a bot that turned all the pages into link spam for discount pharmaceuticals #fail

03:23 utc oct 06 2017 permalink

I skipped #django 1.11.4 and 1.11.5. Finally did an upgrade today to 1.11.6 #win. My dependency upgrades for python, js, and ansible are too coarse-grained. Pulled in ansible changes which blocked the deployment #fail