
AI Reflections After Getting Lots of Feedback

June 21, 2024

Written by Tela Gallagher Mathias, COO and Managing Partner, PhoenixTeam

I’ve been doing a lot of feedback sessions for Phoenix Burst (www.phoenixburst.ai), which is a passion product for me. To date, we’ve been focused on moving left to right across the product development lifecycle, meaning from the “concept” end of “concept to cash.” We’ve wanted to understand the limits of generative technology for analyzing, decomposing, and improving existing knowledge bases so that we could create the core artifacts of software development: requirements, user stories, acceptance criteria, test cases, and synthetic test data. Basically, the “write requirements” and “test solution” parts of Figure 1.

One of the prevailing feedback themes has been along the lines of:

“That’s nice and all, but what about when I need to make modifications to an existing system? So much of what we do is to improve or adapt systems we already have. When I look in my backlog, at least half the user stories have nothing to do with our goals, what are you doing about THAT?”

What a great question, and I am glad that so many of you have echoed that sentiment. Having been in the software business for 25 years, I’ve thought a lot about this problem and have seen it in action many times. I have delivered some very valuable software: call it an 11 on a scale of one to ten. I have also delivered some software that was kind of meh, maybe a six out of ten. And then there are all the bad ideas and semi-built software that never made it to market for whatever reason.

I’ve had two great pivots in my product management philosophy. The first came when I attended a course taught by Marty Cagan in New York after his first book was published. The second was a Large-Scale Scrum (LeSS) class taught by Gene Gendel. Marty’s class completely altered my thinking about how a product team should be organized and what the role of the product manager is. It taught me the true definition of minimum viable product (MVP) and opened my eyes to hypothesis testing. Honestly, it made me very depressed. I had been thinking about it all wrong for far too much of my career. It took me about six months to really digest what it meant for me and my industry.

Gene’s class taught me that many of our delivery problems in software are organizational in nature and, therefore, very difficult to solve. It also forced me to really understand, and be able to take apart, the systems we use to create software products, and to uncover the levers we can pull to improve outcomes. That was also very depressing, since I do not control the budget or budget decisions for any of my customers, which makes it hard, almost impossible in fact, to address the organizational problems.

So that was a tough couple of years, but I digress. In any event, I’ve concluded that there is no one-size-fits-all answer. For me, my business, and my customers, it’s more important that I meet them where they are and apply the approach that has the best chance of creating better outcomes. Sometimes these are small gains and sometimes they are huge gains, but I am less depressed about the situation than I used to be.

So, what do all my customers have in common? They have goals. They also have systems they want to retire, replace, modernize, or otherwise improve. And they all want the same thing: they want their goals to be met by their systems. Simple enough. Figure 2 shows what customers want.

Unfortunately, it can be devilishly difficult to know whether our systems are meeting our goals, and it is even harder to prove that a goal was met. Also, as we’ve covered before, it takes too long and costs too much. Generally, the process doesn’t look like Figure 2; it looks more like Figure 3, with myriad twists and turns, on-ramps and off-ramps. It gets very murky trying to get from my goal to my system. I’ll call this the murky middle.

Generally, there is no direct line from goal to system and from system back to goal; it gets lost in the murky middle. We do not know how much of our system is valuable. We do not know which parts are valuable and which are not. We cannot easily quantify the potential impact of a change. We do not know which item in our backlog will be the most valuable if we prioritize it.

The murky middle is the messy part of software development; it’s where those artifacts we’ve figured out how to generate in Phoenix Burst come into play. That got me thinking: if everyone has goals, and everyone has systems, what if we generate the requirements from the existing systems, generate the value propositions from the goals, and then see if they match up? That would allow us to determine if our systems are meeting our goals. By extension, it might allow us to determine the value of a change, and we could probably even use this process to look at our backlog and find the most valuable things in it. How cool would that be? Cooler than cool. THAT would be a giant leap forward in our goal to make making software not suck. Or at the very least, it would tell us if the software we made sucked.
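To make the experiment a little more concrete, here is a minimal sketch of the matching idea in Python. Everything in it is hypothetical: the requirements and value propositions are invented examples, and a simple token-overlap score stands in for whatever matching technique the real experiment would actually use.

    # Minimal sketch of the goal-to-system matching experiment. All names and
    # data are hypothetical; Jaccard token overlap stands in for whatever
    # matching technique the real experiment would use.

    def tokens(text: str) -> set[str]:
        """Lowercase word set, ignoring short filler words."""
        return {w for w in text.lower().split() if len(w) > 3}

    def overlap(a: str, b: str) -> float:
        """Jaccard similarity between two snippets of text."""
        ta, tb = tokens(a), tokens(b)
        return len(ta & tb) / len(ta | tb) if (ta or tb) else 0.0

    # Requirements generated from the existing system (one side of the murky middle).
    requirements = [
        "Validate borrower income documents before underwriting",
        "Generate a monthly audit report of loan status changes",
    ]

    # Value propositions generated from the goals (the other side).
    value_props = [
        "Reduce underwriting errors caused by missing income documentation",
        "Improve regulatory audit readiness",
    ]

    # Score every requirement against its best-matching value proposition.
    for req in requirements:
        best = max(value_props, key=lambda vp: overlap(req, vp))
        score = overlap(req, best)
        status = "aligned" if score >= 0.15 else "unclear value"
        print(f"{status:>13} ({score:.2f}): {req} -> {best}")

Even a toy scorer like this illustrates the payoff: every requirement gets paired with its best-matching value proposition and a score, and the low scorers are exactly the backlog items whose tie to our goals is unclear.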

So that’s our next experiment. I hope you’ll get in touch if any of this resonates, and I’ll let you know what we figure out.
