Written on February 2, 2018
On Buffer’s marketing team, we have been experimenting with a new squad format for the past few months. Here’s how I ran the squad for a recent product cycle.
While the marketing team continues to work on our ongoing projects, such as the blog and social media, the marketing squad focuses on a specific meaningful metric and tries to move the needle. We focused on churn for six weeks (the length of our product cycle, which we also follow in the marketing team) and on mobile acquisition for another six weeks.
For those two cycles, participation was voluntary; anyone could opt in or out before a cycle started. I joined as a member of the squad for the first cycle and was offered the opportunity to lead the squad for the second. This was the first time I had led a team at work, so I was really excited about it (though I very much saw the role as helping to organize and coordinate things in the squad to keep it running smoothly).
The squad processes
Here’s an overview of how I ran the squad, iterating from the previous cycle with the things we learned.
We followed the growth machine model by Brian Balfour. Even though the model is meant for growth teams, we found it useful for us. (I’m sure our processes aren’t perfect. If you have any suggestions for us, I would love to hear them!)
1. Decide on the metric and plan for the cycle
In between each product cycle, we have a two-week period for reflection, rest, and planning for the next cycle. During this time, I worked with my team lead, Kevan Lee, to decide on the metric for the squad. We eventually decided on mobile downloads for new-to-Buffer users for a number of reasons:
- After looking at the breakdown of our Monthly Recurring Revenue (MRR) growth components, we discovered that the two areas that would make a meaningful impact on our MRR growth are new paying customers and churn.
- The way our marketing team is set up is more suited for top-of-the-funnel, awareness and acquisition projects than bottom-of-the-funnel, retention projects, which we tried in the previous cycle.
- As a marketing team, we have always focused on acquisition for Buffer’s web application, so there might be hidden opportunities around our mobile apps that could make a meaningful impact.
- We had several good sources of data to rely on, such as Looker, Apple App Store analytics, and Google Play Store Console.
Once we decided on the metric, I created a document with all the information we required for the cycle. This served as the backbone of the cycle, a place where everyone could go to find out what was going on. It included the context of the squad (who’s involved, the duration, etc.), our goal and projection, the data and data sources we have, and a brief timeline.
The bulk of the document is where we state and track our process: brainstorm, prioritize, test, implement, analyze, and record learnings. Here are a few screenshots of the document:
The context of the project
Our brainstorming prompts
2. Kickoff and brainstorm
On the first day of the cycle, we had a kickoff sync (our term for a video call), where I shared the document and the key information and we brainstormed.
The approach I took with the brainstorm is this: Everyone brainstormed on their own for 10 minutes before adding their ideas to the document. Then each of us presented our ideas while the rest chimed in whenever we saw an opportunity to build on the idea.
Research has shown that brainstorming alone leads to more ideas, and more good ideas, than brainstorming in groups.
The idea of the brainstorm was to get creative and list as many ideas as possible first. We also recognized that we might not have the full context of an idea without researching it, so we avoided criticizing ideas at this stage. This also prevented us from creating any stop energy.
Only in the next stage did we evaluate the ideas.
3. Prioritization of ideas
Next, we prioritized our ideas using the ICE score system.
The ICE score system is a framework by Sean Ellis of GrowthHackers. Here’s what ICE stands for:
- Impact is the predicted effect this idea, if executed successfully, will have on our metric.
- Confidence is how sure we are that the idea will succeed; low for things we’ve never done before, high for things we’ve already experimented with.
- Ease relates to resources: what would we need to implement the idea? Time? Money? Engineering help?
We scored each ICE element on a scale of 1 to 5 (low to high) and then totaled the points (maximum 15).
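The scoring and ranking above can be sketched in a few lines of Python. (The ideas and their scores below are hypothetical, purely for illustration, not from our actual cycle.)

```python
def ice_score(impact, confidence, ease):
    """Total ICE score: each component on a 1-5 scale, maximum 15."""
    for component in (impact, confidence, ease):
        if not 1 <= component <= 5:
            raise ValueError("Each ICE component is scored from 1 to 5")
    return impact + confidence + ease

# Hypothetical ideas with (impact, confidence, ease) scores
ideas = {
    "Optimize App Store listing keywords": (4, 3, 5),
    "Run paid app-install ads": (5, 2, 2),
    "Add mobile download links to blog posts": (3, 4, 4),
}

# Rank ideas by total ICE score, highest first
ranked = sorted(ideas.items(), key=lambda kv: ice_score(*kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{ice_score(*scores):>2}  {name}")
```

Sorting by the total keeps the prioritization mechanical, so the discussion can focus on whether the individual 1–5 scores are fair rather than on which idea "feels" most promising.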
Here are some ideas that I scored and ranked:
(☝️ One thing I learned during the cycle is that I could craft more specific hypotheses and be more quantitative about the predicted impact.)
This step helps ensure that we work on the most impactful ideas given the resources we have. In the previous cycle, we picked ideas to work on before scoring them with this system, which I felt might mean we weren’t working on the most impactful ideas first. So I swapped the sequence: prioritize before picking the ideas.
4. Experimentation and learnings
Then, we ran experiments, analyzed our results, and recorded our learnings.
This stage took up the bulk of the cycle. We picked experiments to run based on their ICE scores to ensure that we were working on the most impactful ideas first. For smaller experiments, one person would usually run with the idea on their own; for bigger experiments, a few of us would collaborate.
We were lucky to have Matt Allen, our data analyst, who allocated 50 percent of his time to help with marketing data. He helped us with our experiment planning, getting the necessary data, and analyzing our results. Oh, and making sure that we weren’t p-hacking!
The most important part of this stage and the entire cycle is recording our learnings.
As mobile acquisition is an area we had not explored before, we expected that many experiments would fail. But the main focus for us was to maximize our learnings: How does the app store (listing, ads, etc.) work? Why did an experiment succeed or fail? How can we improve the experiment based on what we have learned?
To help us be more intentional about maximizing our learnings, I created a document (linked from the main document) for us to record our learnings every week.
At the end of the cycle, we had a retrospective sync to reflect on the cycle, discuss our learnings, and suggest ideas for the next cycle and beyond.
While the cycle wasn’t spectacular in terms of mobile download numbers, everyone in the squad seemed encouraged by the learnings and potential improvements we took away from the six weeks. (I’m impressed by what we learned.)
An experiment in its own right
The squad itself was an experiment to see whether this team structure would work well for us. Overall, we are happy with it: it made ownership of metrics clearer and led to more collaboration between team members. Hence, we are implementing this structure for the entire marketing team, which will be split into two squads focused on branding and acquisition.
Of course, I don’t think we have everything figured out yet. We’ll be learning and tweaking the system as we go. But I’m excited about what we will achieve in this new team structure.
I’m honored to have been asked to be the liaison for the acquisition squad, helping to set goals, manage projects, and keep things running smoothly. I’d love to get any advice on this. If you know anyone who is doing something similar, I’d be grateful if you could introduce me. Thank you!