Software tools: AI has to align with local regulatory and ethical standards

AI (artificial intelligence) has been part of many of the tools we use for a while now. But in the ever-evolving landscape of AI, businesses and organisations need to make sure that all the tools their staff use comply with local laws and privacy regulations.
And for AI to be embraced and relied upon, it must demonstrate unwavering reliability and safety. This trust is not simply a matter of theoretical reassurance; it hinges on tangible actions and strategies implemented by organizations and developers.

In this article, we delve into the critical pillars that underpin responsible AI implementation: accountability, inclusiveness, reliability and safety, explainability, fairness, transparency, and privacy and security. Each of these pillars plays a pivotal role in ensuring that AI systems not only perform as intended but also align with ethical and regulatory standards. Let’s embark on a journey through these fundamental aspects of trustworthy AI.

Is Nuance Dragon Medical One using trustworthy AI?

Nuance, the maker of Dragon, is a Microsoft company, and the AI used in Dragon speech recognition follows the principles and guidelines outlined in this article.
Dragon Medical One, including its use of AI to improve accuracy, is approved by Te Whatu Ora. Dragon speech recognition is a trusted tool in public healthcare and has been used in numerous public and private hospitals and practices across New Zealand and Australia for years.
Dragon is a secure, cloud-based speech recognition solution hosted in Microsoft Azure data centres in Australia and protected with 256-bit encryption.

The hosting infrastructure complies with the NZ Privacy Act and the AU Privacy Act, the latter of which includes the thirteen Australian Privacy Principles (APPs).
Dragon Medical One doesn’t store any data and complies with the Health Information Privacy Code, which sets requirements for the storage and security of personal information.

Microsoft, Nuance’s parent company, established an advisory committee for AI, ethics, and effects in engineering and research in 2017. The committee’s core responsibility is to advise on issues, technologies, processes, and best practices for responsible AI.

From an ethical perspective, AI should be fair and inclusive, be accountable for its decisions, and not discriminate against people on the basis of race, disability, or background.

Accountability

Accountability is an essential pillar of responsible AI. The people who design and deploy an AI system need to be accountable for its actions and decisions, especially as we progress toward more autonomous systems.

Organizations should consider establishing an internal review body that provides oversight, insights, and guidance about developing and deploying AI systems. This guidance might vary depending on the company and region, and it should reflect an organization’s AI journey.

Inclusiveness

Inclusiveness mandates that AI should consider all human races and experiences. Inclusive design practices can help developers understand and address potential barriers that could unintentionally exclude people. Where possible, organizations should use speech-to-text, text-to-speech, and visual recognition technology to empower people who have hearing, visual, and other impairments.

Reliability and safety

For AI systems to be trusted, they need to be reliable and safe. It’s important for a system to perform as it was originally designed and to respond safely to new situations. It should also be resilient to intended or unintended manipulation.

An organization should establish rigorous testing and validation for operating conditions to ensure that the system responds safely to edge cases. It should integrate A/B testing and champion/challenger methods into the evaluation process.
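As an illustration, a champion/challenger evaluation can be as simple as comparing a candidate model against the incumbent on held-out data before promotion. The sketch below assumes two trained scikit-learn-style classifiers; the function name and the promotion margin are illustrative, not a prescribed policy.

```python
# Minimal champion/challenger sketch: promote a candidate model only if it
# beats the incumbent on held-out data by a clear margin. The margin and the
# scikit-learn-style interface are assumptions.
from sklearn.metrics import accuracy_score

def choose_production_model(champion, challenger, X_eval, y_eval, margin=0.02):
    """Return the model that should serve production traffic."""
    champion_acc = accuracy_score(y_eval, champion.predict(X_eval))
    challenger_acc = accuracy_score(y_eval, challenger.predict(X_eval))
    # Require a clear win so evaluation noise doesn't flip the decision.
    return challenger if challenger_acc >= champion_acc + margin else champion
```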

An AI system’s performance can degrade over time. An organization needs to establish a robust monitoring and model-tracking process to reactively and proactively measure the model’s performance (and retrain it for modernization, as necessary).
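A monitoring process can be sketched as a rolling check of live accuracy against a baseline captured at deployment. The following is a minimal illustration rather than a production monitoring stack; the window size and tolerance are assumptions.

```python
# Illustrative drift check: compare rolling live accuracy to the accuracy
# recorded at deployment and flag the model for retraining when it degrades.
# Window size and tolerance are assumptions; real systems also track inputs.
from collections import deque
from statistics import mean

class ModelMonitor:
    def __init__(self, baseline_accuracy, tolerance=0.05, window=500):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)   # rolling correctness flags

    def record(self, prediction, actual):
        self.outcomes.append(prediction == actual)

    def needs_retraining(self):
        # Judge only once the window is full, to avoid noisy early alarms.
        if len(self.outcomes) < self.outcomes.maxlen:
            return False
        return mean(self.outcomes) < self.baseline - self.tolerance
```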

Explainability

Explainability helps data scientists, auditors, and business decision makers ensure that AI systems can justify their decisions and how they reach their conclusions. Explainability also helps ensure compliance with company policies, industry standards, and government regulations.

A data scientist should be able to explain to a stakeholder how they achieved certain levels of accuracy and what influenced the outcome. Likewise, to comply with the company’s policies, an auditor needs a tool that validates the model. And a business decision maker gains trust from a model that is transparent.

Tools for Explainability

Microsoft has developed InterpretML, an open-source toolkit that helps organizations achieve model explainability. It supports these models:

  • Glass-box models are interpretable because of their structure. For these models, the Explainable Boosting Machine (EBM) is a state-of-the-art algorithm based on decision trees or linear models. EBM provides lossless explanations and is editable by domain experts.
  • Black-box models are harder to interpret because of their complex internal structure, such as a neural network. Explainers like Local Interpretable Model-agnostic Explanations (LIME) or SHapley Additive exPlanations (SHAP) interpret these models by analyzing the relationship between the input and the output.
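As a minimal sketch of the glass-box path, the snippet below trains an EBM with InterpretML and renders its global explanation. It assumes `pip install interpret` and scikit-learn; the dataset is just an illustrative choice.

```python
# Minimal glass-box sketch with InterpretML (`pip install interpret`):
# train an Explainable Boosting Machine and view its global explanation.
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

ebm = ExplainableBoostingClassifier()   # interpretable by construction
ebm.fit(X, y)

# Per-feature contribution curves that a domain expert can inspect
# (and, because EBMs are additive, edit).
show(ebm.explain_global())
```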

Fairlearn integrates with Azure Machine Learning and is an open-source toolkit available through the SDK and the AutoML graphical user interface. It uses explainers to understand what mainly influences the model, and it relies on domain experts to validate these influences.

To learn more about explainability, explore model interpretability in Azure Machine Learning.

Fairness

Fairness is a core ethical principle that all humans aim to understand and apply. This principle is even more important when AI systems are being developed. Key checks and balances need to make sure that the system’s decisions don’t discriminate against, or express a bias toward, a group or individual based on gender, race, sexual orientation, or religion.

Microsoft provides an AI fairness checklist that offers guidance and solutions for AI systems. These solutions are loosely categorized into five stages: envision, prototype, build, launch, and evolve. Each stage lists recommended due-diligence activities that help minimize the impact of unfairness in the system.

Fairlearn integrates with Azure Machine Learning and helps data scientists and developers assess and improve the fairness of their AI systems. It provides unfairness-mitigation algorithms and an interactive dashboard that visualizes the fairness of the model. An organization should use the toolkit to closely assess the fairness of the model while it’s being built; this activity should be an integral part of the data science process.
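A minimal assess-and-mitigate loop with Fairlearn might look like the sketch below, which uses synthetic data, accuracy as the disaggregated metric, and a demographic-parity constraint purely as illustrative choices (`pip install fairlearn`).

```python
# Sketch of a Fairlearn assess-and-mitigate loop on synthetic data.
# The sensitive attribute, metric, and demographic-parity constraint are
# illustrative choices, not a complete fairness methodology.
import numpy as np
from fairlearn.metrics import MetricFrame
from fairlearn.reductions import DemographicParity, ExponentiatedGradient
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
sensitive = rng.choice(["group_a", "group_b"], size=1000)
y = (X[:, 0] + 0.5 * (sensitive == "group_a") + rng.normal(size=1000) > 0).astype(int)

# Assess: break a metric down per sensitive group to reveal disparities.
model = LogisticRegression().fit(X, y)
frame = MetricFrame(metrics=accuracy_score, y_true=y,
                    y_pred=model.predict(X), sensitive_features=sensitive)
print(frame.by_group)          # per-group accuracy
print(frame.difference())      # largest gap between groups

# Mitigate: retrain under a demographic-parity constraint.
mitigator = ExponentiatedGradient(LogisticRegression(),
                                  constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=sensitive)
fair_predictions = mitigator.predict(X)
```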

Learn how to mitigate unfairness in machine learning models.

Transparency

Achieving transparency helps the team understand:

  • The data and algorithms that were used to train the model.
  • The transformation logic that was applied to the data.
  • The final model that was generated.
  • The model’s associated assets.

This information offers insights about how the model was created, so the team can reproduce it in a transparent way. Snapshots within Azure Machine Learning workspaces support transparency by recording or retaining all training-related assets and metrics involved in the experiment.
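For illustration, the same record-keeping idea can be expressed with MLflow tracking, which an Azure Machine Learning workspace can serve as a backend for; the parameter names and values below are placeholders, not a real experiment.

```python
# Sketch: recording training-related assets with MLflow tracking so the
# experiment can be reproduced. Parameter names and values are placeholders.
import mlflow

with mlflow.start_run(run_name="transparent-training-run"):
    mlflow.log_param("algorithm", "logistic_regression")
    mlflow.log_param("training_data", "training_set_v3")       # assumed name
    mlflow.log_param("transformation", "standard_scaling")
    mlflow.log_metric("accuracy", 0.93)                        # placeholder
    # The serialized model and other assets can be attached as artifacts:
    # mlflow.log_artifact("model.pkl")
```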

Privacy and security

A data holder is obligated to protect the data in an AI system. Privacy and security are an integral part of this system.

Personal data needs to be secured, and access to it shouldn’t compromise an individual’s privacy. Azure differential privacy helps protect and preserve privacy by randomizing data and adding noise to conceal personal information from data scientists.
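The snippet below is a conceptual sketch of the underlying idea (the Laplace mechanism), not the Azure implementation: noise calibrated to how much one record could change the result is added to an aggregate, so no single individual can be inferred from the output.

```python
# Conceptual sketch of the Laplace mechanism behind differential privacy:
# clip values, compute an aggregate, and add calibrated noise.
import numpy as np

def private_mean(values, lower, upper, epsilon=1.0):
    """Differentially private mean of values clipped to [lower, upper]."""
    values = np.clip(values, lower, upper)
    # One record can shift the clipped mean by at most this much.
    sensitivity = (upper - lower) / len(values)
    noise = np.random.default_rng().laplace(scale=sensitivity / epsilon)
    return values.mean() + noise

ages = np.array([34, 45, 29, 61, 52, 38, 47])
print(private_mean(ages, lower=0, upper=100))   # noisy, privacy-preserving mean
```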

Human-AI guidelines

Human-AI design guidelines consist of 18 principles grouped into four stages: initially, during interaction, when wrong, and over time. These principles help an organization produce a more inclusive and human-centric AI system.

Initially

  • Clarify what the system can do. If the AI system uses or generates metrics, it’s important to show them all and how they’re tracked.
  • Clarify how well the system can do what it does. Help users understand that AI isn’t completely accurate. Set expectations for when the AI system might make mistakes.
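For example, a prediction interface can surface the model’s own confidence and flag low-confidence answers instead of presenting everything as certain. The sketch below assumes a classifier exposing scikit-learn’s `predict_proba`; the threshold is an arbitrary, tunable value.

```python
# Sketch: surface model confidence so users know when to double-check.
# Assumes a classifier with scikit-learn's predict_proba; the 0.7
# threshold is an arbitrary, tunable value.
def present_prediction(model, features, threshold=0.7):
    probabilities = model.predict_proba([features])[0]
    label = probabilities.argmax()
    confidence = probabilities[label]
    if confidence < threshold:
        return f"Possibly class {label} ({confidence:.0%} confident). Please verify."
    return f"Class {label} ({confidence:.0%} confident)."
```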

During interaction

  • Show contextually relevant information. Provide visual information related to the user’s current context and environment, such as nearby hotels, returning details close to the target destination and date.
  • Mitigate social biases. Make sure that the language and behavior don’t introduce unintended stereotypes or biases. For example, an autocomplete feature needs to be inclusive of gender identity.

When wrong

  • Support efficient dismissal. Provide an easy mechanism to ignore or dismiss undesirable features or services.
  • Support efficient correction. Provide an intuitive way of making it easier to edit, refine, or recover.
  • Make clear why the system did what it did. Use explainable AI to offer insights about the AI system’s assertions.

Over time

  • Remember recent interactions. Retain a history of interactions for future reference.
  • Learn from user behavior. Personalize the interaction based on the user’s behavior.
  • Update and adapt cautiously. Limit disruptive changes, and update based on the user’s profile.
  • Encourage granular feedback. Gather user feedback from their interactions with the AI system.

AI business consumers

AI business consumers (business experts) close the feedback loop and provide input to the AI designer. AI systems are evaluated on the quality of their predictive decision-making and on potential bias implications such as fairness and ethical measures, privacy and compliance, and business efficiency. Here are some considerations for business consumers:

  • Feedback loops belong to a business’s ecosystem. Data that shows a model’s bias, errors, prediction speed, and fairness establishes trust and balance between the AI designer, administrator, and officers. Human-centric assessment should gradually improve AI over time. Minimizing AI learning from multidimensional, complex data can help prevent biased learning. This technique is called less-than-one-shot (LO-shot) learning.
  • Using interpretability design and tools holds AI systems accountable for potential biases. Model bias and fairness issues should be flagged and fed to an alerting and anomaly detection system that learns from this behavior and automatically addresses biases.
  • Each predictive value should be broken down into individual features or vectors by importance or impact. The system should deliver thorough prediction explanations that can be exported into a business report for audit and compliance reviews, customer transparency, and business readiness (see the sketch after this list).
  • Due to increasing global security and privacy risks, best practices for resolving data violations during inference require complying with regulations in individual industry verticals. Examples include alerts about noncompliance with PHI and personal data, or alerts about violation of national/regional security laws.
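A minimal sketch of that per-prediction breakdown uses SHAP attributions exported to a CSV file for an audit report. It assumes `pip install shap` and scikit-learn; the dataset, model, and file name are illustrative.

```python
# Sketch: per-prediction feature attributions with SHAP, exported to CSV
# for an audit or compliance report. Dataset, model, and file name are
# illustrative assumptions.
import pandas as pd
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.Explainer(model, X)
explanation = explainer(X.iloc[:100])   # explain a batch of predictions

# One row per prediction, one signed contribution per feature.
report = pd.DataFrame(explanation.values, columns=X.columns)
report.to_csv("prediction_explanations.csv", index=False)
```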

The responsible use of AI, particularly in the realm of speech recognition, demands that we align our technologies with both ethical principles and regulatory standards. As AI continues to evolve and integrate into our daily lives, the importance of ensuring compliance with local laws and regulations cannot be overstated.
We have to safeguard privacy and uphold the legal framework that governs our use of these powerful tools. By diligently checking and adhering to local laws and regulations, we can foster a future where AI serves as a force for good, benefiting society while respecting the rights and values of individuals.

Feel free to contact us if you have questions or need advice on the compliance of speech recognition software.

Find out how speech-to-text technology and digital dictation will help you work smarter.
