10 Pitfalls of Developing Impactful AI, and How to Prevent Them
Everyone talks about AI, but how do you get real, impactful solutions off the drawing board and fundamentally improve the way people work? Wolters Kluwer Schulinck and Mozaik have partnered from ideation to the commercial launch of highly successful AI applications. But let's be honest: it was no walk in the park.

The journey of creating a successful AI-powered product is complex and fraught with challenges. At the recent Data Expo conference, Dennis Maas, Head of Product at Wolters Kluwer Schulinck, and Vincent Hoogsteder, Partner at Mozaik, shared firsthand experiences from our partnership, offering a realistic look at what it takes to bring an AI product from concept to market.
Here’s a summary of the key takeaways:
The 10 Pitfalls to Avoid:
- Measuring quality at the water cooler: Relying on subjective feedback is risky. Instead, we recommend setting up an evaluation framework to move from subjective discussions to automated, objective measurements.
- Prioritizing tech leaps over small tweaks: It's easy to get caught up in the hype of new technology. However, we found that an experimentation mindset, where even small tweaks to prompts and content are valued, often yields the best results.
- Being complacent about the speed of learning: In the fast-paced world of AI, waiting weeks for feedback can slow down progress. To counter this, we integrated legal experts into the team to create a daily and personal feedback loop.
- Thinking from existing paradigms: A common mistake is to simply use AI to improve existing processes, like creating content summaries. The real breakthrough came when we shifted our focus from the product to the customer's workflow.
- Choosing features over quality: While it can be tempting to focus on developing new features, we decided to go "all-in" on quality, adopting a "less is more" approach.
- Letting managers decide what to build: Managers don't always have the deep technical understanding that AI projects require. To avoid this, we let the people with the deepest knowledge take the lead: engineers, product managers, designers, and legal experts.
- Forgetting the human element: Fear of the unknown is a significant hurdle. We addressed this by designing our AI as a "CoPilot" and even teaching it to say "I don't know" and refer users to legal experts for sensitive cases.
- Testing and releasing like traditional software: The non-deterministic nature of AI requires a different approach. We implemented analytics covering evaluations, latency, feedback, and engagement, allowing us to act on any significant changes.
- Overestimating the importance of latency: The initial versions were slow, with response times of over a minute. Despite our concerns, we released the product and received an overwhelmingly positive response from customers, proving that quality can trump speed.
- Believing too much of the LLM vendor marketing lingo: New and "better" models are released at a rapid pace, but they can have unexpected effects on quality and latency. Our solution was to build an infrastructure that allows for easy switching between LLMs and to rigorously test every new release.
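To make the first pitfall more concrete, here is a minimal sketch of what an automated evaluation framework can look like. All names (`EvalCase`, `score_answer`, `run_suite`) are hypothetical illustrations, not Schulinck's actual implementation; a real setup would call the production LLM instead of a stub and use far richer scoring.

```python
from dataclasses import dataclass, field

@dataclass
class EvalCase:
    """One question with objective pass/fail criteria."""
    question: str
    required_phrases: list = field(default_factory=list)   # facts the answer must contain
    forbidden_phrases: list = field(default_factory=list)  # e.g. hedges or hallucination markers

def score_answer(answer: str, case: EvalCase) -> float:
    """Score 0..1: fraction of required phrases present, 0 if anything forbidden appears."""
    text = answer.lower()
    if any(p.lower() in text for p in case.forbidden_phrases):
        return 0.0
    hits = sum(p.lower() in text for p in case.required_phrases)
    return hits / len(case.required_phrases)

def run_suite(model_fn, cases) -> float:
    """Average score across the suite; re-run after every prompt, content, or model change."""
    return sum(score_answer(model_fn(c.question), c) for c in cases) / len(cases)
```

Running this suite in CI turns "the answers feel worse this week" into a single number that can be tracked over time.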
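The last pitfall mentions infrastructure for easy switching between LLMs. A common way to get there is a thin adapter layer, sketched below with hypothetical names (`LLMClient`, `VendorAClient`, and stubbed responses); real implementations would call each vendor's API behind the shared interface.

```python
from abc import ABC, abstractmethod

class LLMClient(ABC):
    """Uniform interface the application codes against, regardless of vendor."""
    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...

class VendorAClient(LLMClient):
    def complete(self, prompt: str) -> str:
        # A real client would call vendor A's API here.
        return f"[vendor-a] {prompt}"

class VendorBClient(LLMClient):
    def complete(self, prompt: str) -> str:
        # A real client would call vendor B's API here.
        return f"[vendor-b] {prompt}"

REGISTRY = {"vendor-a": VendorAClient, "vendor-b": VendorBClient}

def get_client(name: str) -> LLMClient:
    """Select the model via configuration, not code changes."""
    return REGISTRY[name]()
```

With this in place, trying a newly released model is a one-line config change followed by a run of the evaluation suite, rather than a rewrite.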
Ready to learn more?
For a deeper dive into these lessons learned, you can download the full presentation slides here.
“This Vincent guy really, really knows his shit!”
As one happy customer put it.