CANADA'S LEADING INFORMATION SOURCE FOR THE METALWORKING INDUSTRY


BUSINESS BUILDER: Will AI regulations protect your business?

AI can also help your company reduce costs, improve efficiency and increase customer satisfaction. So how can you reap these benefits while staying safe and secure? PHOTO by Pexels.

This BUSINESS BUILDER is provided by the experts at BDC.

Many of us have been cautious when it comes to AI—data privacy, copyright infringement and inaccuracies are all worries. Yet, AI can also help your company reduce costs, improve efficiency and increase customer satisfaction. So how can you reap these benefits while staying safe and secure?

Although existing laws for consumer protection, human rights and criminal law apply to AI, they were not developed with it in mind. As a result, they generally lack the scope and level of detail to regulate this complex, multi-layered technology effectively.

That is about to change. Governments worldwide have already taken steps to govern AI. Regulations that could impact your business are coming, and it might be sooner than you think.

Will upcoming AI regulations protect your business?

While many of the AI laws being proposed are still in draft form, we can still take cues from the kinds of issues regulators are looking at. Data privacy, discrimination and copyright infringement are just a few of their concerns.

Businesses that use AI-powered tools should continue to manage risks with these best practices:

  • Do not share personal or proprietary data unless privacy is guaranteed
  • Ask someone knowledgeable to check the accuracy of AI outputs
  • Avoid publishing AI-generated content verbatim in case it is copyrighted

Ensure your employees are aware of these rules.

Although regulations on general AI use are coming, countries around the globe are at different stages of putting them together.

What kinds of AI regulations can businesses expect?

Canada, China, Europe and the United States (U.S.) have already signalled their intent to regulate, and certain trends are emerging. We believe that organizations that develop AI will be expected to:

  • Explain how models work (e.g., logic, criteria, etc.)
  • Describe how models use data (e.g., what kind, where from, usage, storage, etc.)
  • Clarify the choices they offer users of their AI (e.g., opt-in, opt-out, data erasure, etc.)
  • Clearly indicate when AI is used (e.g., you are interacting with a bot)
  • Demonstrate fairness and a lack of bias in automated decisions, and provide evidence of internal safeguards to minimize bias

These regulations aim to protect those who use products with AI capabilities.

How are AI regulations taking shape around the world?

Countries around the globe are at different stages of putting regulations together. Some countries, including Canada, have already proposed legislation or are in the process of finalizing it. Other countries, such as the U.S., have drafted some general principles for companies developing AI applications—however, these are non-binding, which means there are no consequences for not adhering to them. And some countries have no legislation or principles at all.

AI regulations in Canada

In 2022, Canada introduced Bill C-27, the Digital Charter Implementation Act—a framework that aims to ensure trust, privacy and responsible innovation in the digital realm. As part of Bill C-27, Canada also introduced the Artificial Intelligence and Data Act (AIDA), which seeks to protect individuals and their interests from the potentially harmful aspects of AI systems.

At present, AIDA outlines six regulatory requirements:

  • Human oversight and monitoring
  • Transparency
  • Fairness and equity
  • Safety
  • Accountability
  • Validity and robustness

According to Innovation, Science and Economic Development Canada (ISED), businesses with AI-powered products will be expected to implement accountability mechanisms, such as internal governance processes and policies, to ensure they meet their obligations under the act. If all stays on track, AIDA should come into force no sooner than 2025.

In addition, a recent court ruling indicates that businesses will need to take measures to ensure their AI-powered tools are accurate. In a landmark case, a judge found a large Canadian company legally liable for the misinformation its online chatbot provided to one of its customers.

AI regulations in the U.S.

At the federal level, the Biden administration introduced the Blueprint for an AI Bill of Rights in October 2022. This document outlines a set of five principles and practices to help guide companies that develop, deploy and manage automated systems and AI technologies.

The five principles include:

  • Safe and effective systems
  • Algorithmic discrimination protections
  • Data privacy
  • Notice and explanation (to ensure users know when they are interacting with AI)
  • Human alternatives, consideration and fallback (to ensure users can easily opt-out or access a person for help)

Meanwhile, several states have enacted privacy laws related to AI. Most of these laws require companies to disclose when AI is used for automated decision-making and to provide ways for consumers to opt out of this type of data processing. Some states impose additional transparency requirements to ensure that businesses disclose how their systems work.

Other steps toward AI regulation in the U.S.

On July 5, 2023, the city of New York enacted the AI Bias Law, which requires companies to regularly audit their hiring algorithms for bias and publish their findings.

In August 2023, a U.S. judge ruled that AI-generated artwork cannot be copyrighted. According to the U.S. Copyright Office, this includes any artwork that is automatically generated by a machine or mechanical process without the creative input of a human author.

The Federal Trade Commission (FTC) has also become more active in policing and investigating AI-powered products. For example, it is currently investigating OpenAI to determine whether the company sufficiently informs users when its technology generates false information.

AI regulations in Europe

The European Union (E.U.) passed a provisional agreement called the Artificial Intelligence Act (AI Act) in 2023.

The E.U.’s AI Act classifies AI into four levels of risk:

Unacceptable risk

These technologies are prohibited with very few exceptions and could include capabilities like:

  • cognitive behavioural manipulation
  • government social scoring
  • real-time biometric identification, such as facial recognition

High risk

Technologies in this category must meet a long list of requirements to ensure safety. In addition, companies must share details about their system in a publicly accessible database. Technologies in this category could include:

  • autonomous vehicles
  • drones
  • medical devices

Limited risk

These AI systems need to meet minimal transparency requirements. They include technologies such as:

  • chatbots
  • generative AI

Minimal risk

Most of the AI in this tier has already been deployed and is being used today, but certain responsibilities may be assigned to creators, distributors and users to ensure transparency. Examples of AI in this category include:

  • email spam filters
  • video games
  • personalized movie recommendations
  • voice assistants

As in the U.S., individual European countries are also working on their own AI regulations. For example, in October 2023 Spain proposed regulating AI-generated simulations of people's images and voices.
