Defending your soft skills training budget in the boardroom: how do you measure what people have actually started doing differently?

You know the situation. You have designed a great training program. The participants were enthusiastic, the evaluation forms looked good, and the trainer clearly did their best. But three months later, you are sitting across from the director who asks: “What has it actually delivered?” And you notice that your answer is not truly convincing.

This is the central problem of L&D in 2026. Not the lack of beautiful programs and annual plans. Not the lack of engaged employees. But the absence of an answer to the one question that matters today: has anything changed in how people behave on the shop floor? And if so, how do you demonstrate that?

Proving the ROI of soft skills training is a persistent challenge for many L&D professionals. And that is exactly the problem PractAIce aims to solve: not by building more attractive training sessions, but by making behavioral change visible and measurable.

The problem with scores that convince no one

Most training evaluations stop at what Kirkpatrick calls the first or second level: how satisfied were participants, and what did they learn? These are useful signals, but they say little about what actually changes in the workplace. According to data from the American Society for Training and Development, Kirkpatrick Level 3 — behavioral change — is measured in only 25% of cases, and Level 4 — organizational results — in only 15%.

This is not because L&D professionals do not want to measure it. It is because it is difficult. Behavioral change in soft skills, such as how someone gives feedback, how someone handles resistance, or whether someone remains calm in a difficult conversation, is by definition elusive if you do not have a structured way to observe it.

And yet, it is precisely this data that management demands. According to research by Brandon Hall Group, 57% of L&D leaders experience increasing pressure to demonstrate ROI, and that pressure has only grown in recent years. At the same time, 75% of organizations say that better aligning L&D with business strategy is their top priority for 2025. The gap between what L&D delivers and what the business needs lies not in the quality of the training, but in the measurability of the results.

Why soft skills are so difficult to measure and what that means for your approach

There is a reason why management training and soft skills training are so often dismissed as “nice to have”: the effects are visible in behavior, not in figures. And behavior is difficult to quantify. Especially if the measurement only takes place afterward, separate from the training itself.

The Kirkpatrick model provides a clear framework for this. Level 3 of the model measures whether participants actually apply what they have learned in their work — and that is, as Kirkpatrick himself states, the first level where you truly say something about the effectiveness of a training. Everything before that, such as satisfaction scores and knowledge tests, is internal evidence for the training function itself. What the business needs starts at Level 3.

But how do you obtain Level 3 data for soft skills? Traditionally through manager observations, 360-degree feedback, or performance reviews. These methods have value, but they are time-consuming, subjective, and arrive too late, often weeks or months after the training has taken place. By that time, the link between what was learned and the measured behavior has often faded.

That is the gap. And it is a structural gap, not a gap you close with a better satisfaction survey.

How PractAIce generates behavioral data that actually convinces

PractAIce works differently from classic training, and that difference is precisely what matters for the ROI question. Every time an employee engages in an AI role-play, the platform generates detailed behavioral data: how did someone score on active listening? How was feedback structured? How did someone respond to escalation in the conversation? And how effectively did someone handle conflict?

This data is not based on self-reporting or a manager’s impression. It is based on what someone actually did in the simulation, how often, and whether that improved over multiple sessions. That is Kirkpatrick Level 3 data: not added afterward, but integrated into the design of the platform from the ground up.

For L&D professionals, this changes how you enter boardroom discussions. Instead of "95% of participants rated the training as good or very good," you can say: "employees scored an average of 54% on constructive feedback skills at the start; after four practice sessions, that was 71%, with a clear upward curve in individual progress." That is a conversation you can have with the director.

Soft skills training that sticks: why repetition is key

One of the most consistent findings in training research is that one-off learning is not sufficient for behavioral change. People learn something, return to the workplace, and the pressure of everyday life pulls them back into old patterns. That is not a lack of motivation; it is neurology.

Deliberate practice, the concept developed by psychologist K. Anders Ericsson, shows that expertise arises not from experience or talent, but from focused, repeated practice on specific behavioral aspects, with immediate feedback after each attempt. For soft skills, this means: not one role-play in a group session, but going through the same scenario ten times until the new behavior becomes automatic.

That is what PractAIce makes possible. Employees can repeat a scenario without depending on a trainer or a colleague. They practice at their own pace, at a time that suits them, and step by step build the behavioral repertoire the training intended. For management training and leadership development, this is particularly relevant: those skills erode fastest when there is no structured opportunity to practice after the training.

What L&D professionals can do differently

It starts with a shift in how you set up training. Kirkpatrick recommends designing backward: start at Level 4 with the question of what business result you want to achieve. Then work back to what behavior is needed for that (Level 3), what people need to learn to show that behavior (Level 2), and how the training itself should be designed and received (Level 1).

In practice, for soft skills training, this means: determine in advance which competencies you want to develop, make them observable, and build in a way to collect those observations. So, not afterward, but as part of the training design itself.

PractAIce supports this approach directly. L&D professionals can assemble scenarios per competency that align with their own organizational context — the types of conversations employees have in their specific roles, with dynamics recognizable to their team. These customized role-plays not only provide a better learning experience, they also generate more relevant behavioral data.

And that is what changes the conversation with the director. Not “we have trained 120 employees,” but: “we can show that the feedback culture in team X has concretely improved, and here is the data that supports that.”

Frequently asked questions about measuring soft skills training effectiveness

How do you measure the ROI of soft skills training?
The best way to measure the ROI of soft skills training is by defining behavioral indicators you want to see improve beforehand. Consider: how often someone gives feedback, how someone responds in conflict situations, how someone scores on active listening in simulated conversations. PractAIce generates this behavioral data automatically per practice session, making pre- and post-measurement possible without additional observation burden for managers.

What is the Kirkpatrick model and why is Level 3 so important for L&D?
The Kirkpatrick model divides training evaluation into four levels: reaction, learning, behavior, and results. Level 3 (behavior) is the first level that truly says something about whether training has worked in practice. Satisfaction scores (Level 1) and knowledge tests (Level 2) measure what happened in the training; Level 3 measures what changes in the workplace afterward. For soft skills in particular, that is the crucial evidence.

How do I convince management of the value of soft skills training?
The strongest arguments are concrete and behavioral. Not “the training was well-rated,” but “employees scored demonstrably higher on specific communication competencies after the program, and here is the progress over multiple practice moments.” PractAIce makes this type of reporting possible without you having to manually collect observation data.

Is management training via AI more effective than classroom training?
Not necessarily more effective in content, but certainly in transfer. The fundamental problem with classroom management training is that there is no structural practice environment after the training day. AI-driven practice via PractAIce fills exactly that gap: employees can repeat scenarios when they need to, with immediate feedback, without dependency on a trainer.

In conclusion: the conversation you want to be able to have

L&D is not an activity center. It is a strategic function that contributes to how people collaborate, communicate, and perform. But that contribution is only visible if you make it measurable.

Proving the ROI of soft skills training is not a matter of better evaluation forms. It starts with the design: what behavior do you want to see, how do you make it observable, and how do you build in structured practice so that what is learned truly sticks? That 80% of professionals say soft skills such as communication will only become more important because of AI does not make the question any smaller, but it does make it more urgent.

PractAIce is built for L&D professionals who want to be able to have that conversation with management. Not with gut feeling, but with data. Not sure whether it fits your organization? A demo shows more than an article ever can.