I’ve been having a lot of fruitful chats with informal advisors (people who will indulge me jawing about music discovery over coffee, basically) and got a lot of feedback from the mailing list on competitive research (on which: expect more soon). This post is a glimpse inside the sausage factory, things that I’ve been thinking about lately in no particular order.
A ‘deep tool’ – this sounds evocative! And I don’t know what the phrase means, exactly. Without a precise definition, I’ll say that what I’d expect from a deep tool is intelligence on my behalf. I don’t want to have to manually ‘activate’ it all the time – the best outcome for me, particularly if I’m allowing a piece of software access to my habits and inputs, is that it observes and learns indirectly.
Unfortunately, this tech is most often used for nefarious/ad-tech purposes. I am not setting out to create, say, a system that recommends new music simply to prime you for purchase. (That you might want to support artists is, of course, a consideration.)
That said: discounting the interesting outcomes of a ‘personal intelligence’ purely because most of the use cases so far have been unsavory feels reactionary. The difference, to my mind, is agency. Surveillance tech is gathering information on me, but not on my behalf. A system that works for me, for my benefit, is a different animal.
There’s no inherent value in creating ‘a system’. We’re in an interesting space in software delivery, I think – the cost of creating any kind of system is approaching zero as more operational pieces become componentized or abstracted away. This wasn’t always true. A lot of startups succeeded purely on the basis of unlocking behaviours and outcomes that weren’t previously possible.
Music discovery is obviously ‘a system’ already – I could write a lengthy treatise just on Spotify’s recommendations, for example. They’re out there doing it, at scale, and pretty successfully! I hope to uncover new value by finding the gaps in that system. Clearly, a very large number of people are being served very well by existing approaches – but just as clearly, I think there’s a large group of folks who aren’t.
This need is impossible to quantify (at present) because we tend to design for positive use cases. We know what attracts and retains people, but we’re less informed about what disincentivizes them, what stops them from engaging in the first place. Any inherent value in 100DB doesn’t come from what it is, but specifically what it isn’t – it can’t be “Spotify except better” because Spotify is already very good at being Spotify.
In every project I’ve ever worked on, personal or professional, finding success has been about both diligent experimentation and throwing spaghetti at the wall. I know what I think will solve a given problem, and I’m usually sort of right and sort of wrong. Course correction is not surprising to anyone who’s worked in product.
But: it’s extremely easy, as a design-minded person, to rush to a conclusion – to immediately latch onto the first answer to the question being posed. I’m not giving away state secrets if I share that the initial concept for 100DB was basically “Product Hunt for music”. It’s still an interesting idea! But I’m trying not to solve the problem right away. I’m deliberately making space for surprise, allowing the shape of the problem to emerge. Where’s the balance point, though? I could do research and analysis for the next six months and it could very well be time well spent, or … not. “The best time to plant a tree was 20 years ago”, etc.
Conversations over the last couple of weeks have suggested a way forward: solve microproblems – i.e., throw enough spaghetti to begin to understand where your spaghetti is landing (and who’s eating it, if we want to really extend this metaphor). The first step doesn’t need to be creating something that solves a specific problem; it can be literally any kind of output at all. The MVP of a music recommendation platform might be a single track that I publish manually every week on a WordPress blog. Is this useful? Probably not! But we can begin to approach true utility in small bites, rather than trying to get there all at once.
This feels like a helpful riff on the Wizard of Oz prototyping approach. Instead of trying to fake something grander right away, take a stepwise, additive approach to functionality.