Enter Sandman on Classroom Instruments

This is so great... and I was wondering about the best way to get my kids into Metallica...

Nov 18th, 2016

Publishing This Blog With Bitbucket Pipelines

Earlier this year, Atlassian released Bitbucket Pipelines, its cloud CI offering, as a beta product coupled with their hosted Git/Hg source control service. I like to know as much as possible about my products, but I'm not much of a software developer1, so finding a use case to try out Pipelines was puzzling.

Then I remembered that I build this very blog. (Oops) It's not much of a job to run an octopress/jekyll build, but my current setup was highly dependent on a single machine at my house always being on with Dropbox working. It was a little too fragile. I host this blog on nearlyfreespeech.net, and though I know there are other hosting options out there that might do the whole build and deploy process natively for me2, I (also) try to never shy away from a challenge.

At Summit last month, Atlassian released Pipelines as GA, and set an intro price of FREE for the remainder of the year.3 I was out of excuses, so in my spare minutes over the past month I have tried to get a simple jekyll build of the site to work in pipelines.

And, of course, ran into one problem after another.

First, Pipelines runs its build in a Docker container. This isn't a negative, actually, but it added a complexity with which I had relatively little experience. I figured the simplest way to get started was to use an existing ruby image matching the version on my Mac and just install dependencies as part of the build. Jekyll, though, requires a JavaScript runtime, and no matter what I tried in the build to install one, the best option was to get node working, since that's how my current build works. At that point, though, my build script was tens of steps each run, meaning my build-minute use was going to be super high each time I wanted to publish something.

Rather than build my own Dockerfile, which was really tempting, I decided to use another image in the Docker Hub that has both ruby and node already set up. It's not far from what I was about to do myself, so no use re-inventing...

...which was good, because second, I really didn't want to have to install Docker. Every previous time I tried to install Docker and have it work reliably, the VirtualBox piece just died on me at some point. (More on this in a moment, though, because I'm rarely this lucky.)

My blog is already a private bitbucket repository. I was able to skip a few parts of the setup and just enable Pipelines on my existing repo, though I chose to create a branch for this work which I merged in once I was happy with the results. I also wanted my repo to be as simple as possible, so I spent some time adding items to my .gitignore and expanding the rake task for cleanup to remove the generated site.
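
For anyone curious, that cleanup looks roughly like the sketch below. The entries and paths are illustrative rather than an exact copy of my repo; an octopress/jekyll setup typically generates into a directory that has no business being under version control:

```bash
# Illustrative only -- the exact ignore entries depend on the repo layout.
cat >> .gitignore <<'EOF'
public/
_deploy/
.sass-cache/
.jekyll-metadata
EOF

# The expanded cleanup rake task then just blows away the generated site
# (e.g. removing ./public) so a fresh `rake generate` starts from nothing.
```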

An aside: Now that I work for Atlassian, I have a parallel set of accounts to the ones I had before as a customer. This means I have two bitbucket accounts, and as a result my normal method of keeping my bitbucket ssh key in my local keyring failed to choose the right key when working with my blog repo locally. Enter [git aliasing](https://developer.atlassian.com/blog/2016/04/different-ssh-keys-multiple-bitbucket-accounts/), which is super handy.
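
The linked post has the full write-up; the general idea (sketched below with made-up key names, and not necessarily the article's exact recipe) is to give each account its own key and its own SSH host alias, then point the repo's remote at the right alias:

```bash
# ~/.ssh/config -- one host alias per bitbucket account (names are hypothetical)
cat >> ~/.ssh/config <<'EOF'
Host bitbucket-personal
    HostName bitbucket.org
    IdentityFile ~/.ssh/id_rsa_personal
    IdentitiesOnly yes

Host bitbucket-work
    HostName bitbucket.org
    IdentityFile ~/.ssh/id_rsa_atlassian
    IdentitiesOnly yes
EOF

# Then make the blog repo use the personal alias instead of bitbucket.org
git remote set-url origin git@bitbucket-personal:myuser/blog.git
```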

That fixed, I looked at how my current site generation task is run4 and tried to replicate that via the bitbucket-pipelines.yml:

```yaml
image: starefossen/ruby-node:latest

pipelines:
  default:
    - step:
        script:
          - bundle install
          - rake generate
          - rake deploy
```

After that, I knew I'd need to get key-based SSH set up to actually deploy the content:5

```yaml
image: starefossen/ruby-node:latest

pipelines:
  default:
    - step:
        script:
          - mkdir -p ~/.ssh
          - cat my_known_hosts >> ~/.ssh/known_hosts
          - (umask 077 ; echo $SSH_KEY_VAR | base64 --decode > ~/.ssh/id_rsa)
          - bundle install
          - rake generate
          - rake deploy
```
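
Getting the key into that SSH_KEY_VAR secured variable is the mirror image of the decode step above. Something like the following, assuming a dedicated deploy key at ~/.ssh/blog_deploy (the path and name are illustrative), with the output pasted into a secured environment variable in the Pipelines settings:

```bash
# Generate a dedicated deployment key (no passphrase, since CI can't type one)
ssh-keygen -t rsa -b 4096 -f ~/.ssh/blog_deploy -N ''

# Base64-encode the private key on one line and copy it to the clipboard (macOS),
# then paste it into the SSH_KEY_VAR secured variable in the repo's Pipelines settings.
base64 < ~/.ssh/blog_deploy | tr -d '\n' | pbcopy

# The matching public key (~/.ssh/blog_deploy.pub) goes into authorized_keys on the host.
```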

And while I expected that to work, I quickly found out that rsync wasn't part of the ruby-node docker image:

```yaml
image: starefossen/ruby-node:latest

pipelines:
  default:
    - step:
        script:
          - apt-get update
          - apt-get install -y rsync
          - mkdir -p ~/.ssh
          - cat my_known_hosts >> ~/.ssh/known_hosts
          - (umask 077 ; echo $SSH_KEY_VAR | base64 --decode > ~/.ssh/id_rsa)
          - bundle install
          - rake generate
          - rake deploy
```

The build itself ran nearly flawlessly, but there was one major problem: the results of rake generate did not match what I had at home, and for a little while I had a very ugly, completely empty web site. (Oops)

After a respite and far too many (frustrating) re-runs, I decided I had no choice but to bite the bullet and get Docker installed locally. In the last 8-ish months, though, Docker has fixed the problem that had been bugging me for ever and ever, and removed the need to have a separate VM application running on the machine. Replicating my pipeline environment was therefore trivial6 and I ran through the build steps without any issues.
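
"Replicating the environment" amounted to little more than starting the same image the pipeline uses and running the same script by hand. Roughly (the mount path here is illustrative):

```bash
# Start an interactive container from the same image the pipeline uses,
# with the blog checkout mounted at /blog
docker run -it --rm -v "$(pwd)":/blog -w /blog starefossen/ruby-node:latest bash

# ...then, inside the container, run the same steps as bitbucket-pipelines.yml:
#   apt-get update && apt-get install -y rsync
#   bundle install
#   rake generate
```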

Which was infuriating, because it worked perfectly fine.

So, you know, it has to be the environment.

Turns out I was over-zealous in my cleanup efforts and had removed the Gemfile.lock from the repo, meaning that it was grabbing all of the latest dependency versions from rubygems. Somewhere in there was an update that broke generation altogether. The lock file was still there locally (because I had run bundle install outside of Docker at some point in the troubleshooting process), so once it was pushed back to the remote, the pipeline build completed flawlessly. And deployed. And I was happy.
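
The fix, for anyone following along, is just to make sure the lock file is tracked so the CI build resolves exactly the same gem versions as the local one. Something like:

```bash
# Make sure Gemfile.lock isn't listed in .gitignore, then commit it
git add Gemfile.lock
git commit -m "Pin gem versions so the Pipelines build matches local"
git push
```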

Each time my site builds, I will be consuming about 2.5 build-minutes. I can publish 20ish posts per month (HA!) for free for the foreseeable future.

This was a great learning experience. Prior to this, the majority of my CI experience had been using Bamboo to kick off mvn commands. I'd highly recommend using pipelines for simple build tasks, even if it's for things like unit tests, content generation, publishing to a remote, whatever.


  1. My weak-arse perl skills don’t count, nor do I write unit tests for my hack-jobs, which leaves me without any sort of CI benefit…

  2. I had been using the free aerobatic.io hosting in bitbucket for a bit, and I could move to Github and host there with a CNAME, but I’m happy with my setup.

  3. Starting in January the cost goes up to a whopping $0 for 50 build-minutes, or $10/mo for 1000 if for some reason I need that much.

  4. Via hazel, as a simple shell script.

  5. I appreciate that Atlassian made it trivial to store a local key securely via a protected environment variable.

  6. No, seriously. Maybe 5 minutes, while the kids ran around screaming and hitting each other with light sabers.

Nov 10th, 2016

Cubs Win! Cubs Win! Holy Cow!

Nov 3rd, 2016

Wintergatan

Breathtakingly awesome. The videos explaining how it works (Part 1 and Part 2) are fascinating. He plays the song on the keyboard at the end of part 2 to explain how he modifies a repeated set of melodies. It's almost as good on just a piano.

Sep 5th, 2016

Leaving on a Rocket Ship

In mid-2005, I was working as the Residential Network Support Manager for Wellesley College. It was my job to coordinate all of the technical support for the 2,000+ students, train two dozen students with almost no tech support background to do field and email support, and plan out the welcome program for getting new students acclimated to their network and software and such. Though I had been relatively successful there, I really didn't enjoy my job. There was a vast cultural difference between myself and the other staff, and my best efforts to push for material changes were met with an attitude of "that's not the way we do things", which was prevalent in EdTech, especially at smaller schools.

So I started job-hunting, and through a roller-coaster of life events, finally took a job doing application support for a small-ish healthcare tech startup called eScription. They made a suite of tools for Computer Aided Medical Transcription using a mixture of proprietary and open-source speech recognition tools and home-grown applications. The solution was sold as a SaaS platform with some on-premise workflow tools and a user-facing client embedded in Microsoft Word, and the entire stack was supported by a (then) 7-person team. We had about four dozen customers at the time, mostly individual hospitals around the country. The job gave me all sorts of opportunities to flex my unix muscles, and I grew with the organization, eventually managing one of the support teams.

About two years later, I moved out of Support and into R&D, owning one of those workflow products. Shortly after that, eScription was acquired by Nuance, and we were quickly assimilated into the vast acquisitive machine, joining Dictaphone and Commissure among the Healthcare division's more recent acquisitions. eScription was the go-forward platform for background-speech transcription1, and though we bled original staff members like a stuck pig, we generally flourished.

In late 2010, my team was spun off of eScription along with some other staff in the division to start a skunkworks project to deliver our first NLP solution to the healthcare market. Though we stumbled a bit those first couple of years and had a failed partnership or two, we learned from our mistakes and put some narrowly-focused products in the market. After a couple of additional acquisitions in 2014, we developed more of a footing, and now deliver the top-ranked Clinical Documentation Improvement platform in the industry, along with the under-pinnings for intelligent Radiology assistance, structured documentation generation from narrative medical text, and Computer Assisted Coding for medical billing. It's a platform I am proud to have helped build from the ground up.

...and now I'm done. Though it has been a great ride, my days at Nuance have come to an end. I am going to miss my team so very much, but after just shy of 11 years, I am ready for my next adventure: In mid-September, I am going to be joining Atlassian as a Technical Account Manager working remotely with East Coast enterprise customers. Over the last three years, I've become intimately familiar with their tools and services, and have gotten to know many of their staff. Atlassian looks like a great place to work, and I am looking forward to finding out for myself in just a few short weeks.

When the TAM program was introduced2, I pushed to have Nuance purchase this service as we were growing our footprint of users from a few dozen to a few hundred (which then turned into a couple thousand). It was one of the best decisions we made. Our TAM has been instrumental in supporting our division's adoption of every tool that Atlassian makes, from JIRA to Bamboo, and ensuring we follow best practices along the way. I'm such an unabashed fan that I'm quoted on the TAM web site (scroll to the bottom):

Our TAM gave us product advice that was able to save a department of 300 people roughly four hours a night. —Matt Shelton, Engineering Manager, Nuance Communications

Admittedly I'm just as much of a fan of the Premier Support team -- these two services are worth every penny for an enterprise customer.3

This is going to be a very different job from the one (or, really, the many) that I do now. For starters, I haven't been an individual contributor since 2006! Having only my own work products to focus on will be a change. Not working directly for an R&D group will also be a big shift, but I haven't been able to do much that is customer-facing in a while, and I am excited to get out there again. I'm going to be adding some DevOps experience and CI/CD exposure to the team, and given Atlassian's trajectory there is so much room to grow. I can barely wait.

Here's to getting on the rocket ship!


  1. If ever there were to be a niche…

  2. Late 2013/Early 2014, if memory serves.

  3. This wasn’t supposed to be an ad read.

Aug 27th, 2016

Maven Extension for Feature Branch Isolation

Last year I had the privilege of speaking at the Atlassian Summit on the topic of selecting a branching model when migrating a team to Git. I had a blast, but I also came away with a small amount of regret: During the presentation, I mentioned that my team had used a custom maven extension to automate the process of isolating our build artifacts in Artifactory. We had been, and at the time still were, working with our legal department to obtain clearance to publish the extension as an open source project.

That process, unfortunately, took a lot longer than I had expected. Thankfully, the wait is over. I am incredibly pleased to finally release our Maven Feature Branch Extension to the general public. The extension is published under the Apache Software License, the same license version that Maven 3.x uses. Take it, use it, fork it, etc. We'll track issues in our bitbucket project and try to get to them as quickly as we can. We'll also happily accept pull requests.

I am deeply embarrassed that there is a very real possibility that someone's work might have been stalled waiting on me for nine months. I have tried to reach out to everyone that asked for this extension at Summit, posted comments on this blog, and emailed me directly. I may have missed some and if you fall into that category, please accept my most sincere apologies. I'll be at Summit this year; I'd be happy to buy you a beer.

Jul 29th, 2016

Open Feedback

A few weeks ago, Becky Hansmeyer wrote a post about how some of her "favorite people on Twitter seem to be…well, avoiding Twitter…and how that made [me] kinda sad". I wholeheartedly agreed with her original post and was both fascinated and excited (for her, this person I don't know personally) to hear this post be the topic of discussion on this past week's Analog(ue).

I've been ruminating on the concepts that Becky, Mike, and Casey all touched on for a few days. I wanted to share it with them, but I thought... is it my place to do so? I enjoy reading what Becky writes about app development1, and enjoy listening to Mike and Casey talk about their lives. Does that give me any sort of right to tell them what I think about what they wrote/said? I know the latter pair receive a fair amount of feedback. I've sent Casey some regarding his other podcast (ATP), but why was it ok for me to do that?

We live in both a highly-connected and highly-insular world. The capability of our words to impact people we don't even know is more a reality now than ever before. There can be many positives to this, though it also leads to a logical fallacy: because we can say it, and we live in a place where "Congress shall make no law abridging" us from having our God-given right to an opinion, we therefore believe that we have the right to broadcast any opinion, no matter how completely uninformed, disrespectful, unkind, etc. We think that our opinion about a topic we just got all rage-y about today is as valid for the masses to consume as that of the critic, lawmaker, or victim who has first-hand experience with that issue, and may have had it for far longer than we could imagine. Our perceived right to broadcast our indignation leads to a world full of angry people, many of whom don't have a clue where their rage truly originates.

And the culture uses this to its benefit, all the while creating victims of groupthink rage. The mob mentality takes over before anyone can reasonably ask themselves "is it right for me to be angry?"2

Sometimes it is right. Most of the time, though, we're spectators in a globally-viewable personal conversation, weighing in on someone else's business or throwing trash on their lawn.

(And some of the time it's just the election season.)

Becky wrote a follow-up post and hit the content I wanted to touch on squarely on the head. It made me feel like it was 2004 all over again, when conversations between personal content creators (cough bloggers cough) were managed through trackback links and comment threads, pingomatic and technorati. Content was harder to find, and it was harder to randomly stumble into rage -- you either had to explicitly leave a comment with your email address attached, or write your own darn blog post to respond to someone else's ideas.

Now all you have to do is fire off a potentially-anonymous tweet, and that's why I don't blame folks who want to take their content into walled gardens instead of the public square.

But what about me and my thoughts? Should anyone listen to what I have to say in response to the opinions they shared publicly? Nope. I mean... they can if they want, but something has changed in our culture such that we expect they should, and so we send unfiltered responses without questioning whether it's our place to do so.

And if I'm completely honest, some part of me wants to anyway despite knowing all of this. I want to be able to send criticism and be told I'm right because on one level I'm self-centered and consider myself important.

Instead I'm going to throw back to 2004 and be perfectly content with what I have here.


  1. And, let’s be honest: her dog is adorable. More corgis! Corgis for everyone!

  2. Jonah 4:9

Mar 16th, 2016

Frederick Douglass Daguerreotype

My father-in-law sent us a video about one of his long-ongoing research projects, the study of an early Daguerreotype of Frederick Douglass.1 For a nerd with any sort of science leaning, this is a pretty interesting topic. Every time we visit my wife's home, he has some new story about this project with twists, turns, "plot thickens" moments, etc. It's great to see the attention start to ramp up a bit!

I'm really quite proud to be related to this guy. He does some pretty cool work in the nano scale up at the UofR.


  1. In the video, he’s the guy doing all of the work on the microscope, pointing out findings, etc.

Jan 7th, 2016

Creative Branching Models for Multiple Release Streams

A couple of weeks ago, I had the incredible opportunity to speak at the 2015 Atlassian Summit in San Francisco, CA. The conference lasts about three days and covers a wide range of topics from Software Engineering to Process Improvement to Team Dynamics to Enhancing Communication and so on. Most of the sessions are focused on using Atlassian's tools to accomplish a given goal, but many are generally applicable to anyone in a Software Engineering field.

My presentation covered my team's migration from Subversion to Git with a long time spent talking about the work we needed to do to keep our multi-module build setup in Maven whilst using git-flow as a branching model and making our engineers do as few manual steps as possible. I was limited to about 30 minutes, so I wrote a brief series of posts to cover everything I couldn't say on stage:

  1. Git or SVN? How Nuance Healthcare chose a Git branching model
  2. Dealing with Maven dependencies when switching to Git
  3. Pulling the Trigger: Migrating from SVN to Git

Video

Slides

This was my first time presenting at a technical conference. It was a great experience that I hope to repeat in the future.

Nov 17th, 2015

Pulling the Trigger: Migrating from SVN to Git

Note: The official version of this post can be found at https://www.atlassian.com/git/articles/migrating-svn-git-branching-workflow.

We're moving to Git, and we figured out how to use git-flow and Maven together in an efficient development workflow. Before I get into what our workflow looks like now, it's important to know where we came from.

Back In the Olden Days...

In our previous world, before migrating from SVN to Git, all of our version management was manual. Development took place on trunk simultaneously across all active features for the team. When a developer committed their changes to SVN, Bamboo would kick off a snapshot build (1.0.0-SNAPSHOT). If their work passed integration testing, they'd run the release plan after manually verifying nobody else had run a subsequent snapshot1. That release build (1.0.0-1)2 would be quickly smoke tested and then handed to QA for functional validation.

We were "releasing" all the time; every build that went to QA was from a Maven release goal without incrementing the minor or patch number. We had Bamboo tack on that -buildnumber to each release so that we could track specific releases to QA,.

Then, once QA blessed the "last one" for that release, we'd increment the minor version using mvn versions:set. This meant that if yours truly wasn't completely on top of his game, we ran the risk of forgetting to increase that version number and building another "release" of 1.0.0-x after we had already shipped that version to production. Big mess. Big pain. But it meant that every build out of development had a clear, trackable, permanent number.
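
For reference, that version bump is the Versions plugin's set goal. A sketch of the kind of invocation we'd run (the version string here is made up):

```bash
# Bump the project (and its modules) to the next development version
mvn versions:set -DnewVersion=1.1.0-SNAPSHOT

# Keep the change (versions:revert would roll it back instead)
mvn versions:commit
```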

That was "good", but it was a tracking nightmare.

We didn't want to do that anymore. We only wanted to release when something was going to go out the door, customer-ready. Also, no more build numbers in production—major.minor.patch only.

However, we also wanted to ensure QA had a way to track a specific delivery to a specific release. QA couldn't be testing SNAPSHOT releases all the time. Thankfully, JIRA Software makes some of this easy by showing links between JIRA issues, Bitbucket repositories and build plans containing builds for that issue.

In the end, we decided that we would adhere to a few rules:

  1. Developers perform integration testing from their feature branch, which they are keeping up to date with the develop branch.
  2. Once integration testing is complete and passing, they issue a pull-request and have their code pulled into the develop branch.
  3. They promote a QA candidate cut of our common project, and then trigger a QA candidate cut of develop for their product project. Since we need to use a specific common project release at build time3, we prompt a developer for the aforementioned common QA candidate release version.
  4. QA tests QA candidate builds only. Never snapshots.

We repeat these steps for all features, bugs found in QA, etc. Then, once we're all done with a version's efforts, we follow git-flow and branch to a release branch by promoting a specific QA candidate build in Bamboo. This creates our first release candidate set, and QA performs final testing, regression testing, etc.

Building It Right

Getting your builds just right can be tricky. Before we moved everybody off of SVN and our original Bamboo build plans, we set up a POC project and then worked out all of our Bamboo plans using those. In the end, we found the following plans met our needs:

  1. develop: Triggered from Bitbucket Server, it creates SNAPSHOT builds with every change pulled in from pull requests. The build plan has three stages:
    1. Snapshot - Builds snapshots and doesn't touch versions.
    2. QA Build - Builds a "QA Candidate" release numbered X.Y.Z-qa{buildnumber} from the same git commit used to create that snapshot.
    3. Promote To Release - Creates the release branch and increments develop's minor version number, all from the same git commit used to create the QA build.
  2. feature: Set up with a default build plan, but it mostly watches for new branches created with feature/* in the name, using Bamboo's plan branches feature, and creates a branch plan for each one it finds. It has only one stage, which builds SNAPSHOTs.4 Also, because we don't want feature to build by itself, we point its repository to a branch called "fake-branch" so that it never triggers.
  3. release and hotfix: These have the same set of build and release steps so they get to share a plan. The plan also points to a "fake-branch" and only cuts branch plan-based releases with the following two stages:
    1. Build Candidate Release - This creates a release numbered X.Y.Z-rc{buildnumber}.
    2. Finalize Release - This sets the final release number, merges up to master, and creates a tag. Then it removes the release branch which disables the plan.
  4. support: This is nearly identical to release and hotfix, but it never merges to master. Instead, we increment the minor version at the end of the Finalize Release stage.

Because feature, release_hotfix, and support plans are all running branch plans, when you view them in Bamboo, you see “Never built” for each plan. The first time this happened everyone had a blank look on their face... where are my builds?? but then we realized this made sense.

We have so many build plans5 that if bamboo displayed all of the plans inline including branch plans, you'd be scrolling forever and a day. The information overload would actually be less helpful. So we click once more to see our branch plan status. It's not a big deal, but it would be nice to find an easy way to see them all.6

Migration Process

Once we were satisfied that the POC project worked the way we wanted it to, and the developer workflow was consistent and reliable, we ran an import of our SVN codebase into Git and ran through several more iterations of testing each possible workflow to iron out build issues, workflow oddities, etc.
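
There are a few ways to do that import; a common one (and a reasonable sketch of the shape of ours, though the URLs and names here are illustrative) is git svn with an authors file so SVN usernames map to real Git identities:

```bash
# authors.txt maps SVN usernames to Git identities, one per line, e.g.:
#   jsmith = Jane Smith <jane.smith@example.com>

git svn clone https://svn.example.com/repo \
    --stdlayout \
    --authors-file=authors.txt \
    converted-repo

cd converted-repo
# Note: SVN tags and branches come across as remote-tracking refs and need to be
# converted to real Git tags/branches before pushing to the new origin.
git remote add origin ssh://git@bitbucket.example.com/proj/repo.git
git push -u origin master
```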

We found some things like my note above about cleaning the Maven workspace every time you run a feature branch build in Bamboo, and certain times when we needed various flags for a specific maven lifecycle. Generally, these were easy to figure out, but once in a while there was much shaking of fists and gnashing of teeth.7

After all of that, we decided it was actually now or never, and announced our migration date. As a part of the migration, we decided to do a bunch of cleanup so that our Git repo had a nice starting place. We did things like:

  • Bulk code re-formatting so that we could enable some stricter style checks as a part of the build process
  • Converted all line endings to UNIX
  • Squashed three top-level dependency projects into one and refactored all dependent code as a result

These things out of the way, we kicked everybody out of SVN, made it read-only and did a final pull into Git. We conducted a training session with all of our developers to go through our workflow one more time and show it in action with the actual codebase.

It was the smoothest migration I've ever experienced. We froze SVN around 5pm on a Monday and by 10pm we were done with all of our initial build issues worked out. There were no major issues; some things required a lot of waiting. Training was the next morning and we were doing feature work by lunch.

New Developer Flow

Once through the migration, we were able to see how this workflow would work in the real world. When a developer starts work on a feature (ABC-4321), they need to do a few things to get started:

  1. From JIRA Software, in the Development area of the issue, click on Create Branch.
  2. This opens a screen within Bitbucket Server that lets them select the branch type (usually feature), repository, and branch name. Because of our Maven extension I mentioned in the previous post, the branch name is always the JIRA issue key, no description.
  3. Repeat steps 1-2 for each of the associated projects for that feature, always using the single, same issue key.
  4. Check out the branch locally8: git pull && git checkout -b feature/ABC-4321 origin/feature/ABC-4321

This workflow is straightforward, repeatable and reliable. Developers can work in isolation and pull in contributed changes from develop as they move forward. The branching action can feel a bit repetitive if, say, a user story has work in all four of our product verticals and the common project. We've been thinking about automating this with some sort of JIRA workflow post-function to call the Bitbucket Server REST API, but that might be overkill for something that isn't costing us too much developer time.
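
If we ever do automate it, the call itself is simple enough. A hedged sketch against Bitbucket Server's branch-utils REST resource (double-check the endpoint against your server version's REST docs; the host, project key, and repo slug below are made up):

```bash
curl -u "$BB_USER:$BB_PASSWORD" \
     -H "Content-Type: application/json" \
     -X POST \
     -d '{"name": "feature/ABC-4321", "startPoint": "refs/heads/develop"}' \
     "https://bitbucket.example.com/rest/branch-utils/1.0/projects/PROJ/repos/my-repo/branches"
```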

Lessons Learned

This process to get us from SVN to git with a shiny new workflow was a long one — from kickoff to migration we took almost seven months. A vast majority of that time was spent wrestling with maven.

I'll admit that we had some staffing concerns along the way; in parallel to this work, the same group of engineers working on the migration were supporting a department of 800 people on the Atlassian toolset, providing production support for our platform applications, and working on other operational R&D projects. Once we finally put three people on the migration nearly full-time, we were done in about a month.

Despite all of it, we learned a lot:

  1. No amount of preparation truly prepares you for the real thing. For instance, each release build type failed the first time we tried it, for one reason or another. Once we fixed one project's build for that type, we copied the config to the others and haven't had a single repeat.
  2. This workflow generates a lot of builds. So. Many. Builds. We needed to double our Bamboo agents to keep up.
  3. This workflow generates a lot of build artifacts. Within the first week we ran out of disk space on our Artifactory instance and had to spend a solid day manually purging old release candidates and QA builds no longer needed. Then we needed to think of a way to ensure that when feature branches are removed, we also remove all of their branch-specific snapshot artifacts.
  4. The team doesn't really like needing to pick between a hotfix and a support branch. It makes sense to be able to cut a hotfix, but most of the time they want a support branch. We might decide to only use hotfixes on special occasions when the merge would truly be straightforward.
  5. The combination of JIRA Software, Bitbucket Server and Bamboo are seriously killer. Watching someone start work in JIRA Software, create a branch and immediately have a branch plan built and ready to validate their work is beautiful.
  6. Pull Requests in Bitbucket Server are the greatest thing since sliced bread. Between keeping a push-happy engineer at bay or making sure we're ready for an offshore team's contributions, we couldn't be happier with the pull-request process. Given we perform code inspection in Crucible rather than at pull-request, we're able to use it for quick sanity checks as well.
  7. Our previous SVN-based tags had been tagged by a service account that was performing the build. Since that user wasn't real, when we tried to create branches from tags, our git-hook to validate the user was valid for a given commit failed. I wrote an article on my personal blog on how to change the author of a single commit in Git, which came in handy the first time we needed to create a support branch from an old SVN-based tag... which was the day after the migration!
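
That article aside, the usual recipe for fixing the author of one old commit is an interactive rebase. A rough sketch (not necessarily the exact steps in my post), bearing in mind that this rewrites history for everything after that commit:

```bash
# Start an interactive rebase from the parent of the offending commit,
# and change that commit's line from "pick" to "edit"
git rebase -i <bad-commit-sha>^

# When the rebase stops at it, rewrite just the author and continue
git commit --amend --no-edit --author="Real Person <real.person@example.com>"
git rebase --continue
```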

All in all, our migration was a great success. It didn't solve every problem my team has had, but it certainly solved many and gave us a more stable footing to move forward.


  1. An uncomfortable number of HipChat conversations in our Developers room went to asking if anyone needed to commit any changes before a release build was made. Prior to HipChat: Lync, email, or yelling over a cube wall.

  2. The -1 is an incrementing build number for the plan that never resets to 1. It didn’t take too long for release builds to have numbers in the high 100s.

  3. The POM still refers to a -SNAPSHOT release of common here, and we can’t risk the build pulling the wrong SNAPSHOT.

  4. It also forcibly cleans up the workspace every time it builds. YMMV, but we found this to be necessary.

  5. Across the org we have something like 120 plans in this particular Bamboo instance, growing all the time. Filtered just to my team’s plans, we’re roughly half of that list. With all of our branches we’re pushing 200.

  6. One of my engineers wrote a greasemonkey script that lets him see all branch plans. It only works if you aren’t a Bamboo admin due to the number of visible plans. I’m working on whipping up a dedicated AtlasBoard for myself.

  7. We only burned maven in effigy once or twice.

  8. The exact steps here vary depending on if the developer is using the command line or using Eclipse to switch branches.

Nov 1st, 2015