Posted on October 20, 2018 in Uncategorized by adam

I’ve gone on about this before but have clarified my thoughts further recently. I tried to put this on twitter but the thread didn’t post correctly. So here is Adam’s Keynote Heuristic.

A keynote should be one of the following:

  • Controversial – ‘x is dead’, ‘you are all doing x wrong’, etc. The talk shows how the claim is correct (even if inflammatorily phrased) and then issues a call to action to flip it around so it is no longer true. Using this approach I might do ‘Your problems can’t be solved with automation’, which would illustrate how most of the problems I get asked to solve with automation are actually people problems (or process problems, which really are also people problems), and then hypothesize on how to solve them. Or maybe ‘all the problems in automation have been solved’ (which is my current fun one), where I ridiculously generalize things into patterns we were using a decade ago, show how they have been updated for today, and end with a call to action to overhaul the Selenium docs.
  • Story time – This is literally, ‘I have cool stories to tell.’ Typically this person is not from the domain the event is for, but their stories wrap up nicely into actionable things the attendees can apply. ‘So there was this other time where I forgot to unstub the payment gateways…’
  • Something from left field – I find this one actually the most interesting: someone takes a different field entirely and applies it to the audience of the event. I’ve done this for testing audiences around the time I ‘fixed’ the dryer and coached lacrosse. The anecdotes need to be fun. I often try to snag the loot bags from conferences sharing the same facilities to get insights I can steal for this sort of talk.
  • State of the Union – As the name implies, these keynotes reflect a bit on the past to set context for the future. Ideally, it’s a 1/3 past, 2/3 future split.

Too often keynotes are really just track talks with a personality/name-brand attached. Which doesn’t diminish their content; it’s just that they are misnamed.

Posted on January 6, 2018 in Uncategorized by adam
  • Draw a picture. It will help.
Posted on January 5, 2018 in Uncategorized by adam
  • Ulysses S. Grant was only 40 at Shiloh.
  • I really dislike people saying ‘our’ or ‘my’ team when they are not a player or owner. Parents and fans don’t get to say ‘my’ team.
  • Hard things are hard until they are easy. Risky things are risky until they are de-risked. The two are annoyingly related.
  • I have no clue what is happening in the X-Men books.
Posted on January 4, 2018 in What I Learned Yesterday by adam
  • Hockey [ok, sports] parents are crazy
  • It’s not the day after that hurts, it’s the day after the day after. (Snowboarding crash.)
  • Keep track of your time while doing it, as trying to remember what you did (or learned…) after the fact rarely works out well.
  • Vagrant based development environments are brilliant. Yes, yes, Docker ones are the new hotness…
  • There is no equivalent of -s -N inside the mysql CLI once you are in the tool, so either you get column names and grids for all queries, or you don’t.
  • Test your reporting
  • Then test it again.
Posted on January 2, 2018 in Uncategorized by adam
  • I still hate installing doors
  • There is always more implicit knowledge than documentation. (Which resulted in me changing my laptop issuance policy to ‘issue once configured’ from the current ‘here is your laptop and instructions’. The next evolution will be central management of policy, etc.)
  • Black Mirror is overrated.
  • Another reason to disable form autofill – Ad targeters are pulling data from your browser’s password manager. (See also the central management of policy bullet.)
Posted on January 1, 2018 in Uncategorized by adam
  • A reminder; it’s always a people problem
  • A reminder; trust, but verify
  • Things I can never remember;
    • show index from <table>
    • show full processlist
  • The default polling interval for Laravel workers is 3 seconds. If you have certain queues that are more time-sensitive, you need to adjust accordingly, but you can only go as low as 1 second. This is another known limitation of Laravel that matters when you build larger apps beyond toys. File under ‘things I shouldn’t have to do’.
  • Sometimes it’s not [just] a misbehaving query that is causing a problem, but a query running in a loop. And you are one of only 5 or 6 people who can trigger it due to environmental stuff.
  • Environment matters. Our current nasty performance problem will only be triggered in production.
  • I have completely lost all comfort doing frontside turns on a snowboard
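The worker-polling point above generalizes beyond Laravel: an idle polling loop’s sleep interval is a floor on job pickup latency. Here is a minimal sketch in Python (illustrative only; this is not Laravel’s actual implementation, and the function names are mine):

```python
import time

def poll_queue(pop_job, handle, sleep_seconds=3.0, max_iterations=None):
    """Generic polling worker. Worst-case pickup latency on an idle
    queue is roughly sleep_seconds (cf. Laravel's --sleep, default 3,
    minimum 1)."""
    done = 0
    while max_iterations is None or done < max_iterations:
        job = pop_job()
        if job is not None:
            handle(job)
        else:
            time.sleep(sleep_seconds)  # idle wait: the latency floor
        done += 1

# Usage: a queue that is empty on the first poll, then yields a job.
jobs = ["send-email", None]        # pop() returns None first, then the job
handled = []
poll_queue(jobs.pop, handled.append, sleep_seconds=0.01, max_iterations=2)
print(handled)  # → ['send-email']
```

The job only gets picked up on the second pass, after a full sleep, which is exactly why a 3-second default hurts time-sensitive queues.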
Posted on January 1, 2018 in Uncategorized by adam
  • created_at, the field created when using timestamps in Laravel Models/Migrations, is not indexed by default. So any time you use the built-in latest() or oldest() functions on an Eloquent query, you are doing a full table scan. There was an issue raised, but of course it got shut down by the maintainer. Seriously, it annoys the hell out of me how small-scale the Laravel team thinks. Scaling wut? At least when DHH was creating Rails he had a huge application to build/maintain. What’s Laravel got? Oh, a bunch of ecosystem stuff…
  • Observability throughout a distributed system is a pain. I’ve been trying to diagnose a performance bottleneck, and tracking a request through three different systems is a challenge. I need to somehow inject an id into incoming messages and flow it through. And into messages that originate in the system on their way out.
  • I found the source of the bottleneck at least. It looked like it was in an Eloquent save() call. Buuuut, it’s in an Event Observer on the ‘created’ event. I so wanted to blame the framework…
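The id-injection idea from the observability bullet can be sketched roughly like this (Python for illustration; the header name and message shape are my assumptions, not anything from these systems):

```python
import uuid

CORRELATION_KEY = "X-Correlation-Id"  # hypothetical header name

def ensure_correlation_id(message: dict) -> dict:
    """Attach an id to an incoming message if it doesn't already carry one,
    so the edge of the system is the only place ids get minted."""
    headers = message.setdefault("headers", {})
    headers.setdefault(CORRELATION_KEY, str(uuid.uuid4()))
    return message

def propagate(incoming: dict, outgoing: dict) -> dict:
    """Copy the id onto a message this system originates, so the trace
    survives the hop to the next system."""
    outgoing.setdefault("headers", {})[CORRELATION_KEY] = (
        incoming["headers"][CORRELATION_KEY]
    )
    return outgoing

msg = ensure_correlation_id({"body": "charge card"})       # arriving message
out = propagate(msg, {"body": "emit receipt"})             # outbound message
assert out["headers"][CORRELATION_KEY] == msg["headers"][CORRELATION_KEY]
```

With the same id on every hop, grepping three systems’ logs for one request becomes a single search instead of a reconstruction exercise.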
Posted on December 30, 2017 in What I Learned Yesterday by adam
  • I’ve always heard rumour that AWS billing is a nightmare, and I sorta got a peek at that yesterday. Part of what I need to figure out is how to attribute our AWS spend to various projects (for tax purposes as well as project profitability), and since I was in CloudFront doing something, I poked around a bit. Yeeaaaah, what a mess. The billing page just has requests by region. The CloudFront usage report just has stats and resource ids, but not tags. I’ve not spent much time with this, so I’m sure there is a way to get the per-item spend based on the tag, but I suspect there will be some scripting in the future to figure that out rather than one convenient report I didn’t have to build myself.
  • Speaking of AWS tags, they appear to be case sensitive, so ‘Client’ is different than ‘client’ which is a pain. There are very few places where case matters. Stop it.
  • Now the reason I am using tags is in the vain hope that the ‘resource allocation tags’ feature of billing works as one might expect. Except you have to go into the page and enable them. Which I have only just now done. (For both ‘Client’ and ‘client’.)
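For the scripting the billing bullet anticipates, one plausible route is AWS Cost Explorer’s `get_cost_and_usage` API, which can group spend by a cost-allocation tag. A sketch of the request shape (the tag key `Client` is the one from the post; the boto3 call itself is commented out since it needs credentials and an account with the tag activated):

```python
# Group spend by a cost-allocation tag via Cost Explorer
# (boto3's ce.get_cost_and_usage). The dates here are just examples.
params = {
    "TimePeriod": {"Start": "2017-12-01", "End": "2017-12-31"},
    "Granularity": "MONTHLY",
    "Metrics": ["UnblendedCost"],
    "GroupBy": [{"Type": "TAG", "Key": "Client"}],
}
# import boto3
# ce = boto3.client("ce")
# response = ce.get_cost_and_usage(**params)
print(params["GroupBy"])  # → [{'Type': 'TAG', 'Key': 'Client'}]
```

Note this only returns anything useful after the tag has been activated on the billing page, which circles back to the bullet above.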
Posted on December 29, 2017 in What I Learned Yesterday by adam
  • The MySQL Timestamp format only goes to a single second of accuracy, so when you pull records and order them by a timestamp column, rows created in the same second are not guaranteed to come back in creation order. When creation order really matters, you have to do something like
    ORDER BY created_at DESC, id DESC
  • Consistency matters. But so do boundaries. Just because something is called ‘data’ internally doesn’t mean it has to be externally. Especially if ‘info’ is a better name externally. What’s worse though is when you have used ‘info’ in one api response, but ‘data’ everywhere else. Worse still, when you are the person who introduced the inconsistency.
  • AWS OpsWorks for Puppet Enterprise would make my life somewhat easier, but would also add $200 USD a month to our bill. A couple more deals and I’ll totally pull the trigger on this.
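The timestamp tie-break above can be demonstrated in a few lines (SQLite standing in for MySQL here; the behaviour being shown, an id tie-breaker on equal timestamps, is the same idea):

```python
import sqlite3

# Two rows created in the same second tie on created_at, so id is
# needed as a deterministic tie-breaker.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, created_at TEXT)")
conn.executemany(
    "INSERT INTO orders (id, created_at) VALUES (?, ?)",
    [(1, "2017-12-29 10:00:00"), (2, "2017-12-29 10:00:00")],
)
rows = conn.execute(
    "SELECT id FROM orders ORDER BY created_at DESC, id DESC"
).fetchall()
print(rows)  # → [(2,), (1,)] – newest row first despite the timestamp tie
```

Without the `id DESC` the order of the two tied rows is up to the engine, which is exactly the bug that bites when ‘latest record’ actually matters.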
Posted on November 19, 2014 in Uncategorized by adam

Here is the other talk I did at Øredev this year. The original pitch was to show a single-character commit and walk it through to production. Which is in itself a pretty bold idea for 40 minutes, but… that pitch was made 7 months ago with the belief we would have Continuous Delivery to production in place. We ended up not hitting that goal, so the talk became more of an experience report around things we (I) learned while doing it. I would guess they are still about a year away from achieving it given what I know about priorities etc.

Below is the video, then the deck, and the original ‘script’ I wrote for the talk. Which, in my usual manner, I deviated from on stage at pretty much every turn. But stories were delivered, mistakes were confessed to, and lots of hallway conversations were generated, so I’m calling it a win.

CONTINUOUS DELIVERY IN A .NET WORLD from Øredev Conference on Vimeo.

I’ll admit to having been off the speaking circuit for awhile, and the landscape could have changed significantly, but when last I was really paying attention, most, if not all, talks about Continuous Delivery focused on the ‘cool’ stacks such as Rails, Node, etc. Without any data to back up this claim at all, I would hazard a guess that there are more .NET apps out there, especially behind the corporate firewall, than those other stacks. Possibly combined. This means there are a whole lot of people being ignored by the literature. Or at least by everyone not being promoted by a tool vendor… This gap needs to be addressed; companies live and die based on these internal applications, and there is no reason why they should have crappy process around them just because they are internal.

I’ve been working in a .NET shop for the last 19 months and we’re agonizingly close to having Continuous Delivery into production… but still not quite there yet. Frustrating … but great fodder for a talk about actually doing this in an existing application [‘legacy’] context.

Not surprisingly, the high-level bullets are pretty much the same as with other stacks, but there are of course variations on the themes in some cases.

Have a goal
Saying ‘we want to do Continuous Delivery’ is not an achievable business goal. You need to be able to articulate what success looks like. Previously, success has looked like ‘do an update while the CEO is giving an investor pitch’. What is yours?

Get ‘trunk’ deliverable
Could you drop ‘trunk’ [or whatever your version control setup calls it] into production at a moment’s notice? Likely not. While it seems easy, I think this is actually the hardest part about everything. Why? Simple … it takes discipline. And that is hard. Really hard. Especially when the pressure ramps up, as people fall back to their training in those situations, and if you aren’t training to be disciplined…

So what does disciplined mean to me, right now…

  • feature flags (existence and removal of)
  • externalized configuration
  • non assumption of installation location
  • stop branching!!
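The first two bullets work together: a feature flag read from externalized configuration is what lets unfinished work sit dark in trunk. A minimal sketch (Python for brevity rather than our actual .NET code; the env-var naming convention is my invention):

```python
import os

def flag_enabled(name: str, default: bool = False) -> bool:
    """Read a feature flag from externalized configuration, here the
    environment, e.g. FEATURE_NEW_CHECKOUT=1. No code change or
    redeploy is needed to flip behaviour."""
    value = os.environ.get(f"FEATURE_{name.upper()}", "")
    return value.lower() in ("1", "true", "on") if value else default

os.environ["FEATURE_NEW_CHECKOUT"] = "1"   # set by ops, not by a deploy
if flag_enabled("new_checkout"):
    print("new checkout path")  # shipped in trunk, dark until flipped
else:
    print("old checkout path")
```

The ‘removal of’ part of the bullet is the discipline half: once a flag is permanently on, delete it and both the dead branch and the conditional, or the flags themselves become the legacy.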

Figure out your database
This, I think, is actually the hardest part of a modern application. And is really kinda related to the previous point. You need to be able to deploy your application with, and without, database updates going out. That means…

  • your tooling needs to support that
  • your build chains needs to support that
  • your application needs to support that (forwards and backwards compatible)
  • your process needs to support that

This is not simple. Personally, I love the ‘migration’ approach. Unfortunately… our DBA didn’t.
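One common way to make the app deployable with or without the schema change is the expand/contract pattern: ship only additive, backwards-compatible changes first, then tighten up later. A sketch (SQLite standing in for the real database; table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('adam')")

# Expand: add a nullable column. Old application code that omits it
# keeps working, so the schema can deploy independently of the app.
conn.execute("ALTER TABLE users ADD COLUMN email TEXT")
conn.execute("INSERT INTO users (name) VALUES ('legacy-writer')")  # old code path

# Backfill; only once no old code runs would a 'contract' migration
# enforce NOT NULL or drop superseded columns.
conn.execute("UPDATE users SET email = name || '@example.com' WHERE email IS NULL")
rows = conn.execute("SELECT name, email FROM users ORDER BY id").fetchall()
print(rows)
```

Each expand step is a small, reversible migration, which is why the migration approach and forwards/backwards compatibility belong in the same breath.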

Convention over Configuration FTW
I’m quite convinced of two things: this is why RoR and friends ‘won’, and why most talks deal with them rather than .NET. To really win at Continuous Delivery [or at least without going insane], you need to standardize your projects. The solution file goes here. Images go here. CSS goes here. Yes, the ‘default’ project layout does have some of that stuff already figured out, but it is waaaaay too easy to go off script in the name of ‘configurability’. Stop that! Every single one of our .NET builds at 360 is slightly different because of that, which means we have to spend time wiring them up and dealing with their snowflake-ness. I should have been able to ‘just’ apply a [TeamCity] template to the job and give it some variables…

Make things small [and modular]
This is something that has started to affect us more and more. And something that comes by default in the RoR community with their prevalence of gems. If something has utility and is going to be used across multiple projects, make it a NuGet package. The first candidate for this could be your logging infrastructure. Then your notifications infrastructure. I have seen so much duplicate code…

Not all flows are created equal
This is a recent realization, though having said that, is a pretty obvious one as well. Not all projects, not all teams, not all applications have the same process for achieving whatever your Continuous Delivery goal is. Build your chains accordingly.

Automate what should be automated
I get accused of splitting hairs on this one, but Continuous Delivery is not about ‘push a button, magic, production!’. It is all about automating what should be automated, and doing by hand what should be done by hand. But! Also being able to short-circuit gates when necessary.

It is also about automating the right things with the right tools. Are they meant for .NET, or was .NET an afterthought? Are they a flash in the pan, or are they going to be around? Do their project assumptions align with yours?

Infrastructure matters
For Continuous Delivery to really work, and this is why it’s often mentioned in the same breath as DevOps (we’ll ignore that whole problem of ‘if you have devops you aren’t doing devops’…), the management of your infrastructure and environments needs to be fully automated as well. This is very much in the bucket of ‘what should be automated’. Thankfully, the tooling has caught up to Windows, so you should be working on this right from the start. Likely in tandem with getting trunk deliverable.

But even still, there are going to have to be things that you need to drop down to the shell and do. We made a leap forward towards our goal when we let Octopus start to control IIS. But they don’t expose enough hooks for the particular needs of our application so we have to use the IIS cmdlets to do what we need afterwards. And there is absolutely nothing wrong with this approach.

It’s all predicated on people
Lastly, and most importantly, you need to have the right people in place. If you don’t, then it doesn’t matter how well you execute on the above items, you /will/ fail.
