The process of shipping a feature can be explained with the rock, pebble, and sand exercise you probably did in elementary school. Picture a jar ready to be filled. First you fill the jar with rocks, and when the final rock reaches the top, you think the jar is full. Then you are introduced to pebbles, which, you realize, find plenty of space among the rocks. And when you believe the jar can't possibly hold more, you are handed sand, which surprises you with how much more you need to pour before the jar is actually full. Some teachers go further and add water, but you get the gist.
The rock, pebble, and sand exercise is, in fact, often used as an analogy for communicating priorities, where the larger components represent the higher-priority tasks. But what I am going to talk about is something different. I think it is a great framework for engineers to set realistic timelines for building features and, more importantly, for product teams to better understand what shipping a feature requires, especially in a startup environment.
The jar itself represents the feature as a concept: still high-level, rudimentary, and not yet specced out. For example, at Fiber, we had to come up with a way to help our customers better manage replies from their email campaigns and nurture leads. Our solution was to build our own inbox system designed to nurture leads and send reminder notifications for prospects who had not replied within a couple of days, along with a couple more features to nudge our customers to proactively follow up on high-intent replies and push through to close deals. While thinking about the feature, we knew which external product's APIs to use and, vaguely, how to build it. Surprisingly, this is just the container. At every stage, it is easy to fall into the trap of thinking that what you know is all there is, which leads to underestimating the work of shipping a feature and, in turn, to inaccurate goals and timelines.
The rock is a more detailed implementation plan for the feature. It involves thinking through what we want the feature to do and assessing feasibility from a semi-technical perspective. For the inbox feature above, Michael scoped out a mock that covered the detailed behavior: marking a thread to add people to an exclusion list on hard rejects, a "remind me later" feature, and showing detailed campaign and prospect information for every email thread. The mock also involved reading the SmartLead documentation to check whether they provided the webhooks needed to build what we wanted, and sketching the skeletal logic for how the information from each webhook would be processed to serve the feature. This is, in fact, about as far as a team member who is not building the product can go, and going beyond this is usually an inefficient use of time. The rocks then gradually fill the jar as the engineer starts to implement what the outlined plan suggests. Surprisingly, this is only the rock. What more can there be?
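To make the "skeletal logic" concrete, here is a minimal sketch of what routing a reply webhook might look like. The payload fields and category names below are illustrative assumptions for this post, not SmartLead's actual schema:

```javascript
// Hypothetical sketch of the skeletal webhook logic: given a parsed reply
// event, decide what the inbox should do with it. Field names and categories
// are assumptions, not SmartLead's real payload shape.
function routeReplyEvent(event) {
  if (event.category === "hard_reject") {
    // Hard rejects add the prospect to an exclusion list.
    return { action: "add_to_exclusion_list", prospectId: event.prospectId };
  }
  if (event.category === "high_intent") {
    // High-intent replies nudge the user to follow up promptly.
    return { action: "notify_user", prospectId: event.prospectId };
  }
  // Everything else simply lands in the inbox thread.
  return { action: "append_to_thread", prospectId: event.prospectId };
}
```

At the rock stage, a sketch like this is the right level of detail: it pins down the decisions without committing to data models or infrastructure.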
Pebbles represent the technical implementation and all the additional work you discover is required as you go. The devil is in the details. This entails designing data models, wiring up APIs for input/output, writing unit tests, dev-ops work, and even dealing with bad API documentation. For the inbox, we built the data models for email threads and messages, and edited an existing model to track when emails were sent using SmartLead's email-sent webhook. We also explored fuzzy-search libraries so users could search their inbox efficiently, set up notifications, implemented infinite scrolling with paginated queries to keep memory usage manageable, stored SmartLead's native IDs to track each record's origin, dealt with SmartLead's rate limiting by adding dev-ops work like Bottleneck and Redis, and more. Along with the actual coding, a decent amount of time and effort goes into making the feature actually functional, and it can surprise you how much more work you discover along the way (in our case, dev-ops work like preventing rate-limit errors and dealing with bad documentation). If you're fortunate, filling in the pebbles may not be this daunting for some features.
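The rate-limiting pebble is a good example of this hidden work. Libraries like Bottleneck handle it for you, but the underlying idea is a token bucket: requests spend tokens, and tokens refill at the provider's allowed rate. Here is a minimal, dependency-free sketch of that idea (the class and its options are illustrative, not Bottleneck's or SmartLead's real API):

```javascript
// Minimal token-bucket sketch of client-side rate limiting. A clock is
// injected so the behavior is deterministic and testable.
class TokenBucket {
  constructor({ capacity, refillPerSecond, now = () => Date.now() }) {
    this.capacity = capacity;           // max burst size
    this.refillPerSecond = refillPerSecond; // sustained request rate
    this.tokens = capacity;             // start full
    this.now = now;
    this.lastRefill = now();
  }

  // Returns true if a request may proceed now, false if it must wait.
  tryAcquire() {
    const elapsed = (this.now() - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerSecond);
    this.lastRefill = this.now();
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Usage with a fake clock:
let t = 0;
const bucket = new TokenBucket({ capacity: 2, refillPerSecond: 1, now: () => t });
console.log(bucket.tryAcquire()); // true
console.log(bucket.tryAcquire()); // true
console.log(bucket.tryAcquire()); // false (bucket empty)
t += 1000;                        // one second passes, one token refills
console.log(bucket.tryAcquire()); // true
```

In production you would also need the Redis-backed coordination Bottleneck offers so that multiple server processes share one limit, which is exactly the kind of pebble that only shows up once you start building.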
Finally, testing is the sand. You can't release a feature without testing, yet it is one of the most overlooked parts of the process, even by engineers. A period of intensive testing must be accounted for when the roadmap is discussed.
Shipping a feature always involves unforeseen events. Again, the devil is in the details. A high-functioning team is mature enough to understand the true process of shipping a feature and uses some framework like this internally to communicate progress and set realistic expectations.