
How The Orange Force is putting AI to work
Two years ago, I wrote a blog post asking whether generative AI is a blessing or a curse for learning Mendix. The answer I landed on then: neither, or rather, both. A shortcut that thinks for you is a curse. A tool that supports your thinking is genuinely useful. The difference is entirely in how you use it. That argument still holds. If anything, I believe it more firmly now than I did then.
But something has changed. Back then, I was mostly writing about the risks and the potential. At The Orange Force (TOF), we’ve spent a lot of time on that question since. Good conversations, deep ones. About where the line is between a helpful tool and a harmful shortcut, about what responsible use looks like in practice. That thinking didn’t lead to a policy document. It led to something more interesting. We started building.
Here’s what we built
Over the past year or so, we’ve been developing a set of AI-powered tools for our own colleagues, using Anthropic’s Claude. Virtual coaches that help people learn, reflect, and grow in their day-to-day work. Not because AI is fashionable. Because these tools help.
Here’s what we’ve built so far.
A Scrum Coach that’s always available
Anyone who has worked in an Agile environment knows that Scrum is easy to understand on paper and surprisingly hard to apply in practice. What do you do when the team keeps pulling items into the sprint mid-sprint? What’s the right way to handle a Sprint Review when half the stakeholders don’t show up? When should a Scrum Master push back, and when should they facilitate?

These are the kinds of questions that come up constantly, and they rarely come up at a convenient moment. Our virtual Scrum Coach gives colleagues a space to think through those situations. You describe what’s happening, and the coach helps you work through it within the Scrum framework. It doesn’t hand you a script. It helps you reason through it.
The same tool supports colleagues who are preparing for their Professional Scrum Master exams. It can generate practice questions at various difficulty levels, but that’s not really the point. What makes it useful is what happens after you answer. The coach explains why a given answer is correct or incorrect and ties that explanation back to the Scrum Guide. Then, rather than immediately throwing the next question at you, it checks in. Is this clear? Do you want to dig deeper into this topic before moving on? You’re in control of the pace and depth of your own learning. That’s a very different experience from reading a study guide for the fifth time.
A Competency Coach that helps you see yourself clearly
We recently launched our TOF Competency Framework, a structured way of mapping where each colleague is in their professional development and where they could go next. The framework is solid, but a framework on its own doesn’t do much if people don’t engage with it.
That’s where the Competency Coach comes in. Colleagues can use it to work through a self-assessment, explore what different competency levels look like in practice, and identify areas they want to develop. It also helps them prepare for their development conversations, whether that’s with me as L&D manager or with their HR manager. Coming into that conversation with a clear picture of where you stand and what you want to work on makes it a lot more useful for everyone involved.

What I find particularly valuable is that the coach doesn’t tell people what to think about themselves. It asks the right questions and helps them think it through.
Does this make me obsolete as a Learning & Development manager? I’ve asked myself that. The honest answer is no, quite the opposite. I can’t be available to every colleague whenever they need a sounding board. That’s just not realistic. What the Competency Coach does is give people a way to help themselves, so that when they do come to me, it’s for the conversation that needs a human.
A Coding Best Practices Coach that knows our standards
Every development team has conventions. The way you name things, the patterns you follow, the standards you apply. At TOF, we have a well-developed set of coding best practices, but knowing they exist and actually internalizing them are two different things. And then there are edge cases. A wiki documents the standard. It can’t document every situation where a deviation might be the better call, where blindly applying the best practice would, ironically, produce the worst result. That’s where a conversation becomes more valuable than a document. You can discuss the specific context, weigh the pros and cons, and think through different approaches before making a decision.
Our Coding Best Practices Coach is exactly that conversation. A colleague describes what they’re trying to do, provides the relevant context, and gets guidance grounded in our actual conventions — not a generic answer from the internet.

A good example is publishing a REST API. You formulate your request, provide the relevant context, and the coach walks you through how to design it according to our specs and best practices. Our standards, applied to your specific situation. That’s a meaningful difference.
For newer colleagues especially, this kind of just-in-time guidance can significantly shorten the learning curve. For more experienced developers, it’s a useful sounding board when working in an area you don’t touch every day.
An AI Masterclass that meets you where you are
Alongside the coaches, we organized an AI masterclass for all colleagues. The goal was straightforward: boost AI fluency across The Orange Force, give everyone hands-on experience with Claude, and make sure nobody was left behind as we started putting these tools to work.
We structured the masterclass around Anthropic’s 4D framework, a practical model built around four core competencies: Delegation (deciding what to hand off to AI), Description (giving AI the right context), Discernment (assessing whether AI output is actually useful), and Diligence (using AI responsibly and safely). A shared lens for thinking about how to work with AI effectively, efficiently, ethically, and safely.
The interesting part is how we ran it. After a short introduction covering what tools we use and how, including how AI itself can facilitate a hands-on masterclass, the session was largely driven by a Claude Project we built specifically for the occasion. It started by finding out where each colleague was in their AI journey. From there, drawing on the 4D framework, it adapted: explaining concepts to those newer to AI, moving straight into practice with those already comfortable. Every colleague got something genuinely new out of it, at their own level.
And like the other coaches, this one didn’t disappear after the masterclass. There is always more to learn, more to practice, more to discuss. The project is still available for colleagues to return to whenever they want to go deeper.
How we built them and why it matters
It’s worth saying something about the approach behind these tools, because the design choices are not accidental.
All three coaches are built around the Socratic method. They don’t give you the answer. They ask questions, surface assumptions, and help you arrive at your own understanding. This is a deliberate choice. If a coach simply told you what to do, it would be no different from googling the answer. Fast, convenient, and not particularly good for learning. The goal is always that the person using the tool comes away understanding something better than before they started.
Under the hood, we built these coaches using Claude Projects, a feature from Anthropic that lets you create a dedicated AI environment with its own context, instructions, and knowledge base. Think of it as the difference between walking up to a stranger and asking a question, and asking a colleague who already knows your company, your standards, and your way of working. The project is what gives each coach its personality, its boundaries, and its grounding.
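To make that concrete, here is a minimal sketch of how a project-scoped, Socratic coach could be approximated with Anthropic's Messages API. This is not how the TOF coaches are actually configured (those live inside Claude Projects); the prompt text, the knowledge excerpt, and the example question are all invented for illustration.

```python
# Illustrative sketch: combining custom instructions with a grounding
# knowledge base, roughly what a Claude Project's instructions plus its
# attached documents provide. All text below is invented for illustration.

SOCRATIC_INSTRUCTIONS = """\
You are a Scrum Coach for The Orange Force.
- Never hand over a ready-made answer. Ask questions that surface the
  user's assumptions and guide them toward their own conclusion.
- Ground every explanation in the attached Scrum Guide excerpt.
- After each exchange, check in: is this clear, or should we dig deeper?
- Stay within Scrum coaching; politely decline unrelated requests.
"""

def build_system_prompt(instructions: str, knowledge: str) -> str:
    """Merge coaching instructions with the knowledge base that should
    ground every answer the coach gives."""
    return f"{instructions}\n--- Knowledge base ---\n{knowledge}"

system_prompt = build_system_prompt(
    SOCRATIC_INSTRUCTIONS,
    "Scrum Guide excerpt: the Sprint Backlog may be updated by the "
    "Developers throughout the Sprint as more is learned...",
)

# With the `anthropic` SDK installed and an API key configured, the coach
# could then be called like this (not executed here; the model name is an
# assumption):
#
#   import anthropic
#   client = anthropic.Anthropic()
#   reply = client.messages.create(
#       model="claude-sonnet-4-5",
#       max_tokens=1024,
#       system=system_prompt,
#       messages=[{"role": "user",
#                  "content": "My team keeps pulling items in mid-sprint."}],
#   )
#   print(reply.content[0].text)
```

The design point is the separation: the instructions define how the coach behaves (Socratic, bounded), while the knowledge base defines what it is allowed to ground its answers in.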
Within those projects, we use what Anthropic calls Skills: instruction sets that tell the coach exactly how to behave in specific situations and what the output should look like. Each Skill has a clear purpose and scope and can be bundled with reference materials: the Scrum Guide, our coding conventions, our competency framework. Think of a Skill as a playbook. It tells the coach not just what it knows, but how to use that knowledge: when to ask a follow-up question, when to refer to a source, when to stay within its lane, or how to create a document with our company branding.
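For a feel of what such a playbook looks like, here is an illustrative Skill in Anthropic's SKILL.md format, a markdown file with YAML frontmatter that names and describes the skill. The skill name, the steps, and the bundled file reference are invented for this example, not taken from the actual TOF coaches.

```markdown
---
name: scrum-exam-practice
description: Generate practice questions for the Professional Scrum Master
  exam, explain answers against the Scrum Guide, and check in before moving on.
---

# Scrum exam practice

When the user asks for exam practice:

1. Ask which difficulty level they want before generating anything.
2. Generate one question at a time, never a full quiz dump.
3. After the user answers, explain why each option is correct or incorrect,
   citing the relevant section of the bundled Scrum Guide (scrum-guide.md).
4. Check in: is this clear? Offer to go deeper before the next question.
```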
Each coach is also grounded in a specific knowledge base. The Scrum Coach works within the Scrum framework. The Coding Best Practices Coach draws on TOF’s own conventions and standards, not the open internet. The Competency Coach is built on our TOF Competency Framework. That specificity is what makes them useful in practice. A generic AI assistant can tell you about Scrum in general terms. Our Scrum Coach can help you think through a specific situation in the context of how your team works.
We also make a deliberate choice about what each coach doesn’t draw on. Whether broader information from the internet is useful or potentially harmful depends on the tool. For the Coding Best Practices Coach, some external context can be valuable. For the Competency Coach, we’ve taken a more cautious approach.
That caution is specific and intentional. The Competency Coach will not give you a score or a level. It will not tell you where you rank compared to a colleague. It will not discuss salary. These boundaries exist because professional development is a sensitive area, and the last thing we want is for a tool to say something that feels definitive about someone’s career, based on a chat conversation. The coach is there to help you reflect and prepare. The actual development conversation happens between people.
The thread running through all of this
None of these tools make decisions for our colleagues. None of them replace the need to understand what you’re doing. In every case, the person using the tool still needs to evaluate the response, apply judgment, and own the outcome.
That’s the same point I made two years ago, and it bears repeating. AI output is only as useful as your ability to assess it. If you haven’t internalized the fundamentals of what you’re building, you can’t evaluate whether the guidance you’re getting fits your situation. Take the Coding Best Practices Coach. It can help you think through an edge case, but only if you understand the standard well enough to know why it might not apply. The foundation still needs to be there. AI just helps you build on it faster.
This is also why I understand it when developers ask whether AI will take their jobs. It’s a fair question, and the concern behind it is real. Will AI change the job? Yes, probably. It will take over parts of it, the repetitive, the mechanical, the straightforward. But that’s not the same as replacing the developer. The thinking, the judgment, the understanding of why one solution fits better than another in a specific context, that’s still entirely human. The developers who will struggle are not the ones who use AI tools. They’re the ones who use them without understanding what they’re doing and why. That’s the curse version. What we’re building at TOF is firmly in blessing territory: tools that assist, not replace. Tools that expect the person using them to think.
This is just the beginning
What we’ve built and launched so far is not the end of what’s possible. We’re still figuring out where AI can continue to add value in our work, whether that’s for learning, delivery, or collaboration. What we’ve seen so far gives us plenty of confidence that there’s more to come.
We’re not implementing AI because we feel like we should. We’re doing it because it works — and because it makes a real difference for our colleagues. Every coach we’ve built exists for the same reason: to give people the means to help themselves, to grow at their own pace, to walk into important conversations better prepared. That’s what responsible AI use looks like to us. Not replacing people. Empowering them.

About the Author
Yves Rocourt is the Learning & Development Manager and a Senior Consultant at The Orange Force. Previously he led the Mendix Academy and the TimeSeries University. Mendix certifications? He’s got them all: he is a Mendix Trainer MVP, an Expert-level Mendix developer, and an Advanced-level Mendix trainer. He is sort of a history buff, having worked as a museum curator before starting with Mendix. Questions about history or Mendix? Join one of his trainings; you can ask him about both!


