Ensuring accuracy becomes part of the journey

  • Minimising hallucinations became a top priority for HUB24’s Innovation Lab, to maintain trust and compliance in the AI solutions it develops.
  • Expanded governance and diagnostics were introduced to track and analyse AI-generated errors.
  • Advances in open-source models and reasoning capabilities have reduced hallucinations and improved accuracy.

In the heavily regulated financial advice industry, AI hallucinations, where a model gives wrong or misleading information, pose a real challenge.

For this reason, as generative AI’s role grew, HUB24’s Innovation Lab turned its attention to reducing hallucinations, drawing on its experience with traditional machine learning.

“To bring the confidence and accuracy of traditional machine learning into generative AI, we expanded our governance framework,” said Dr Evan Morrison, Head of the Innovation Lab at HUB24.

This included additional monitoring and diagnostics to better understand the causes of hallucinations.
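As a rough illustration of what such diagnostics can look like, the sketch below wraps a generic text-generation callable, compares each sentence of the response against a set of known facts, and records anything unsupported for later review. The names here (audit_response, generate_fn, known_facts) are illustrative assumptions, not HUB24’s actual tooling, and the substring check stands in for the more sophisticated entailment-style checks a production system would use.

    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import Callable

    @dataclass
    class DiagnosticRecord:
        """One logged model interaction, kept for hallucination analysis."""
        prompt: str
        response: str
        unsupported_claims: list[str]
        timestamp: str

    def audit_response(prompt: str,
                       generate_fn: Callable[[str], str],
                       known_facts: set[str]) -> DiagnosticRecord:
        """Run the model, then flag sentences not backed by any known fact."""
        response = generate_fn(prompt)
        # Naive diagnostic: a sentence is "unsupported" if no known fact
        # appears inside it. Real systems would use entailment or
        # retrieval-based checks instead of substring matching.
        unsupported = [
            s.strip() for s in response.split(".")
            if s.strip()
            and not any(fact.lower() in s.lower() for fact in known_facts)
        ]
        return DiagnosticRecord(
            prompt=prompt,
            response=response,
            unsupported_claims=unsupported,
            timestamp=datetime.now(timezone.utc).isoformat(),
        )

    # Usage with a stubbed model, purely for illustration:
    record = audit_response(
        "What is the current cash rate?",
        generate_fn=lambda p: "The cash rate is 4%. It doubled last week.",
        known_facts={"cash rate is 4%"},
    )
    print(record.unsupported_claims)  # ['It doubled last week']

Logging flagged claims alongside the originating prompt gives the team a dataset for analysing when and why hallucinations occur, rather than just knowing that they do.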

“We have also worked to find models that don’t hallucinate as much, using techniques to ground responses with factual data.” Grounding anchors the model in verified information, pulling it back on track when it starts to drift off on a tangent.
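One common grounding pattern is sketched below: retrieve the most relevant reference documents for a question, then constrain the model to answer only from that context. The toy word-overlap retriever and the names retrieve, grounded_answer, and generate_fn are assumptions for illustration; production retrieval would typically use embedding-based search rather than keyword overlap.

    from typing import Callable

    def retrieve(question: str, documents: list[str],
                 top_k: int = 2) -> list[str]:
        """Rank documents by word overlap with the question (toy retriever)."""
        q_words = set(question.lower().split())
        ranked = sorted(documents,
                        key=lambda d: len(q_words & set(d.lower().split())),
                        reverse=True)
        return ranked[:top_k]

    def grounded_answer(question: str,
                        documents: list[str],
                        generate_fn: Callable[[str], str]) -> str:
        """Constrain the model to retrieved context to curb hallucination."""
        context = "\n".join(retrieve(question, documents))
        prompt = (
            "Answer using ONLY the context below. If the answer is not "
            "in the context, say you don't know.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}"
        )
        return generate_fn(prompt)

    # Usage with a stub that echoes its prompt, purely for illustration:
    docs = ["The platform fee is charged quarterly.",
            "Statements are issued at the end of each financial year."]
    print(grounded_answer("When is the platform fee charged?",
                          docs, lambda p: p))

Because the model is told to answer only from supplied context, and to admit when the context is silent, it has far less room to invent facts.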

The rise of open-source models and improvements in reasoning capabilities have also driven a decline in hallucinations. Newer models are trained to reason more carefully before answering, reducing errors and improving output quality.

“Our goal has always been to maximise accuracy with the highest possible confidence,” said Morrison.

Return to our Productivity Hub for insights on AI, innovation, and how we’re enabling better outcomes for advisers and their clients.