As we continue embracing AI in our digital products, new challenges and possibilities arise in our industry. One of the most intriguing for me is how my role as a Product Manager should and will change. The traditional PM role demands a blend of strategic vision, business acumen, user-centric thinking, and, of course, some technical knowledge. But are those skills sufficient to lead AI products? I don't think they are. In this post, I'll share some areas where I've seen the Product Manager role evolving as it moves from traditional products into the AI space.
Recognizing AI’s potential
AI goes beyond creating chatbots. When entering the AI product space, a PM should be visionary, curious, and creative. Taking the time to evaluate potential uses of AI in our product should be at the core of our work. Could a machine-learning model optimize a process? Could predictive models of user behavior enhance human customer support? Asking these kinds of questions can unlock AI's potential for our product. When integrating AI functionality into digital products, the PM must consider alternatives that align with the business goals of creating new revenue streams and improving the customer experience.
One way we are recognizing AI's potential is in our collaboration with SimpleDocs, one of our clients in the Legal Tech space. From the product perspective, we are innovating in the contract life cycle management process. We are implementing AI features to evaluate, edit, review, and identify potential risks in contracts based on the company's legal playbook. Although these features use generative, text-based AI, we are pushing its potential further by training the models on each customer's unique legal requirements, playbooks, and documents. The output won't be generic, bland contract observations, but detailed, use-case-specific ones.
Strategic decision making
I recently took an AI Product Management course at The Product School. Our instructor, a seasoned AI PM, was emphatic about the strategic role of an AI PM. He shared a decision-making framework built on understanding product features by what they add to the product and to your business.
Think of offensive features as the ones where you leverage AI technology to expand how you can monetize your product. On the other hand, think of defensive features as those that enhance the current customer experience and promote customer retention.
To give you some examples: with one of our clients, whose product had text-based AI at its core, we opted for an offensive AI feature strategy. After the MVP release, we developed additional AI features, this time focusing on video generation based on the user's needs. As a result, the product's capabilities expanded and we created a way to upsell the product. Meanwhile, we went for a defensive approach with a different client in the M&A space. The value proposition of this product is to optimize M&A processes. Our approach to AI was to develop a machine-learning model for the recommendation engine. The model learns from user behavior and organizational preferences to recommend the best-suited acquisition targets. As a result, each time the user interacts with the product, it recommends better companies, reducing search times and improving the customer experience. While this product also has offensive potential (we could use AI to expand the customer base), focusing on the defensive approach makes sense because of the gains from each successful M&A transaction made on the app.
Collaboration with Data Scientists and AI Engineers
As product managers, we are the link between the different departments (engineering, data, design, etc.), and that shouldn't change with the adoption of AI. However, now more than ever, it is essential to lean on our engineering and data science peers to make the best product decisions. It's critical to facilitate conversations between these two teams to ensure technical feasibility and that the data is ready and available for use. The risks of not communicating and engaging with these teams include, but are not limited to:
- Delays in product delivery
- Training your models on low-quality data
- Absorbing high costs to get data on time
Whether you train your AI model with proprietary data or outsource data ingestion to external parties, make sure to align efforts with your team, participate in technical discussions, and learn from the technology and approaches your peers are proposing.
Evaluating AI Product and Model Performance
When defining product KPIs, we can't forget to include AI-specific metrics and the model's performance. Unless you are a very technical PM, you should collaborate with the engineering team to define the best approach to measuring the model. Consider areas such as accuracy, reliability, and efficiency benchmarks.
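As a concrete illustration of what "measuring the model" can mean in practice, here is a minimal sketch of three common classification metrics. The labels are entirely hypothetical (imagining a model that flags risky contract clauses); in a real project the engineering team would compute these with an established library.

```python
# Minimal sketch: basic model-quality metrics computed from
# predictions vs. ground-truth labels (hypothetical data).

def accuracy(y_true, y_pred):
    """Share of predictions that match the ground truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision(y_true, y_pred, positive=1):
    """Of everything the model flagged positive, how much really was."""
    flagged = [t for t, p in zip(y_true, y_pred) if p == positive]
    return sum(t == positive for t in flagged) / len(flagged) if flagged else 0.0

def recall(y_true, y_pred, positive=1):
    """Of all true positives, how many the model caught."""
    caught = [p for t, p in zip(y_true, y_pred) if t == positive]
    return sum(p == positive for p in caught) / len(caught) if caught else 0.0

# Hypothetical labels: 1 = "risky clause", 0 = "safe clause"
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

print(f"accuracy:  {accuracy(y_true, y_pred):.2f}")   # 0.75
print(f"precision: {precision(y_true, y_pred):.2f}")  # 0.75
print(f"recall:    {recall(y_true, y_pred):.2f}")     # 0.75
```

Even as a non-technical PM, knowing which of these numbers matters most for your use case (missing a risky clause vs. raising false alarms) is a product decision, not just an engineering one.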
On the other hand, you can define metrics and methods to evaluate your AI product and its usage. There are three main approaches:
- Human Evaluation: Work with people on your team (or outsource if you wish) to evaluate the model.
  - Identify the value proposition of the AI in your product.
  - Define the evaluation criteria, using yes/no or 1-5 scale rating questions to review the product.
  - Align the reviewers on the expected behaviors and results.
  - Compare results to assess your product.
- User Feedback: Based on the same value proposition premises, make sure there are ways to gather structured and unstructured user feedback. Align efforts with the UX team to identify the best methodologies and practices.
  - Include quick surveys or a simple thumbs up/down next to your AI features.
  - Enable in-app issue reporting.
  - Organize sessions to gather live user feedback.
- Automated Assessment: This is a newer approach, so tread carefully. It is mostly used with open-source models. Generally, the engineering team will drive these efforts to measure speed and accuracy. At Telos, we have experience with Openlayer, but other tools can help here as well.
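To make the human-evaluation steps above more concrete, here is a small sketch of how reviewer ratings on a 1-5 scale might be aggregated per criterion. The criteria, scores, and pass threshold are all illustrative assumptions, not a standard rubric.

```python
# Sketch: aggregating human-evaluation scores (1-5 scale) per
# criterion across reviewers. Criteria and scores are made up.

from statistics import mean

# Hypothetical ratings: criterion -> one score per reviewer
ratings = {
    "answers match the legal playbook": [4, 5, 4],
    "flags risky clauses correctly":    [3, 4, 3],
    "tone is appropriate for lawyers":  [5, 5, 4],
}

PASS_THRESHOLD = 4.0  # assumed bar; tune per product

for criterion, scores in ratings.items():
    avg = mean(scores)
    status = "PASS" if avg >= PASS_THRESHOLD else "NEEDS WORK"
    print(f"{criterion}: {avg:.2f} -> {status}")
```

The per-criterion breakdown matters more than a single overall score: it tells you which part of the value proposition the model is failing to deliver.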
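And for the user-feedback approach, a thumbs up/down widget is only useful if each click is captured in a structured record you can analyze later. A minimal sketch of such a record, with field names that are assumptions rather than any existing schema:

```python
# Sketch of a structured feedback record for an in-app thumbs
# up/down widget next to an AI feature. Field names are assumed.

from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class AIFeedback:
    feature: str         # which AI feature the rating refers to
    thumbs_up: bool      # True = thumbs up, False = thumbs down
    comment: str = ""    # optional unstructured feedback
    created_at: str = field(default="")

    def __post_init__(self):
        if not self.created_at:
            self.created_at = datetime.now(timezone.utc).isoformat()

feedback = AIFeedback(
    feature="contract_risk_review",
    thumbs_up=False,
    comment="Missed a liability clause",
)
print(asdict(feedback))  # ready to send to your analytics pipeline
```

Tying each record to a specific feature lets you compare satisfaction across AI features instead of getting one blended score for "the AI".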
Ethical and Responsible AI Use
Although AI has been incorporated into our lives quite quickly, we can't turn a blind eye to the fact that we don't yet fully understand the technology. While there have been recent findings from the Anthropic research lab on what's going on behind the scenes, there's still no silver bullet for preventing bias and ethical concerns.
The way we can contribute as AI PMs is by developing guidelines and practices that promote fairness and transparency and attempt to prevent bias in the product. Consider the following:
- System prompts: ensure system prompts are in place to prevent misuse of the technology and safeguard the product.
- Data access: ensure your team has the right, ethical permissions to access the data sources used to train the models (e.g., paying for ethical data sources).
- User consent: If your team is considering using user data to train the models for your product, be transparent with your users. Communicate how the data will be used, get consent, and allow for easy opt-out.
- User reporting: allow users to give immediate feedback if an interaction with the product becomes uncomfortable or unsafe.
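The system-prompt point above can be as simple as making sure every model call is wrapped with a guardrail prompt. A minimal sketch, where the prompt wording, the policy items, and the `build_messages` helper are all illustrative assumptions:

```python
# Illustrative guardrail system prompt for a contract-review
# assistant. The policy wording is an example, not a vetted policy.

SYSTEM_PROMPT = """\
You are a contract-review assistant.
- Only answer questions about the user's uploaded contracts and playbook.
- Do not provide legal advice outside the contract context.
- Never reveal other customers' documents or data.
- If a request is unsafe or out of scope, refuse and suggest
  contacting the legal team.
"""

def build_messages(user_input: str) -> list:
    """Prepend the guardrail prompt to every model call so no
    request reaches the model without it."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]

messages = build_messages("Summarize the termination clause.")
print(messages[0]["role"])  # the system prompt always comes first
```

The product decision here is what goes into that prompt: the PM should own the list of behaviors to refuse, even if engineering owns the wiring.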
Conclusion
As we know, AI technologies are evolving quickly and bringing new challenges. These are just five areas where I think the PM role is transitioning as we embrace AI. There will probably be more as we continue learning and exploring. By adopting these capabilities, we can keep creating significant user value and ensure our products remain relevant. If the AI revolution interests you, identify the areas where you're still operating as a traditional PM, and start cultivating the more technical skills required to strategically lead an AI product.