MIT researchers and the MIT-IBM Watson AI Lab are developing ways to test how trustworthy an AI's predictions are before the AI is put to practical use. This work is highly relevant, because AI is being adopted in industries where accuracy is of utmost importance.
Critical fields such as medicine, law, and engineering are using AI for analysis, diagnoses, and other tasks.
Even though AI cannot and does not replace these critical roles, it does serve as a useful assistant in these industries. The only way this can happen successfully, though, is if it is used responsibly.
How Does It Work?
The researchers described the process in a paper. To compare models, the MIT team introduced the idea of neighbourhood consistency.
This method involves setting up reliable reference points and checking how closely different models agree on these points when looking at a test data point.
The MIT news page reports, "They do this by training a set of foundation models that are slightly different from one another.
"Then they use their algorithm to assess the consistency of the representations each model learns about the same test data point. If the representations are consistent, it means the model is reliable."
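To make the idea concrete, here is a minimal sketch of a neighbourhood-consistency check. This is not the authors' actual algorithm; the function name, the use of Euclidean distance, and the overlap score are illustrative assumptions. The idea is: embed the same reference points and the same test point under each model in the ensemble, find each model's nearest reference neighbours for the test point, and score how much those neighbour sets agree.

```python
import numpy as np

def neighbourhood_consistency(test_embeddings, reference_embeddings, k=5):
    """Illustrative sketch, not the MIT algorithm.

    test_embeddings: one 1-D array per model -- the test point's
        representation under each model in the ensemble.
    reference_embeddings: one 2-D array (n_refs x dim) per model --
        the same reference points embedded by each model.
    Returns a score in [0, 1]: the average pairwise overlap between
    the k-nearest-reference sets found under each model.
    """
    neighbour_sets = []
    for test_vec, refs in zip(test_embeddings, reference_embeddings):
        # Euclidean distance from the test point to every reference point
        dists = np.linalg.norm(refs - test_vec, axis=1)
        # Indices of the k closest reference points under this model
        neighbour_sets.append(set(np.argsort(dists)[:k]))
    # Average overlap across all pairs of models
    scores = []
    for i in range(len(neighbour_sets)):
        for j in range(i + 1, len(neighbour_sets)):
            scores.append(len(neighbour_sets[i] & neighbour_sets[j]) / k)
    return float(np.mean(scores))
```

If every model places the test point near the same reference points, the score approaches 1 and the representation can be treated as more reliable; if the models disagree, the score falls towards 0, flagging the prediction as less trustworthy.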
AI is known for hallucinations and inaccuracies. Different organisations are working to find solutions. Not too long ago, Oxford's Computer Science researchers revealed an algorithm that also uses testing to determine whether AI is hallucinating.
These universities are contributing research that could help, and even save, millions of people across industries.
"All models can be wrong, but models that know when they are wrong are more useful. Our method allows you to quantify how reliable a representation model is for any given input data," says senior author and research lead, Navid Azizan.
How AI Inaccuracies Impact Us
In finance, recognising fraud is important. Institutions need to make sure sensitive data is well-handled, and AI needs to be used in ways that catch errors. MIT's method helps flag unreliable predictions before they can cause harm.
Health care is no different. Beyond handling sensitive data, AI is being used to examine organs such as the brain, and it cannot afford to produce inaccuracies.
For startups, AI may be used for predictions, data analysis, and, depending on the industry, handling sensitive information as well. Ecommerce startups also need to make sure their automation systems are producing the right results.
AI and cybersecurity are closely linked, as experts in these industries work together to make sure that AI's development remains in the best interests of the public, over and above being accurate.
This method helps keep AI systems accountable by providing a way to assess how likely a model is to produce false or misleading outputs.
Right now, the main improvement the researchers say needs further exploration is finding a way to run the process with fewer models. That would cut down the computation and costs involved.