The "Architects of AI" were named Time's person of the year Thursday, with the magazine citing 2025 as when the potential of artificial intelligence "roared into view" with no turning back.
Time CEO Jessica Sibley, second from right, joined by OpenAI Chief Global Affairs Officer Chris Lehane, second from left, rings the New York Stock Exchange opening bell Thursday for Time's "Person of the Year."
Time CEO Jessica Sibley is interviewed Thursday on the floor of the New York Stock Exchange, adjacent to Time's "Person of the Year" cover.
Businesses are increasingly turning to AI to ensure accessibility for people with disabilities. Is it working?
AI tools can create more accessible experiences
One of the most important frontiers of accessibility has been online. By adhering to the Web Content Accessibility Guidelines issued by the World Wide Web Consortium, designers can create websites and web-based environments that can be accessed by everyone, regardless of ability.
In practical terms, adhering to these standards means ensuring content has enough contrast so people with limited vision or colorblindness can read text. Adding "alt text" to images allows visual information to be shared with screen reader users, and captions on videos allow people who are deaf or hard of hearing to understand the information conveyed. It is also important to ensure people can navigate pages easily, including by keyboard alone.
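Of these requirements, color contrast is the most mechanical: WCAG defines a contrast ratio computed from the relative luminance of the text and background colors, and level AA requires at least 4.5:1 for normal-size body text. The short Python sketch below implements that published formula; the example colors are arbitrary.

```python
# A minimal contrast checker using the relative-luminance formula
# published in the WCAG specification. Example colors are arbitrary.

def linearize(channel: int) -> float:
    """Convert one 8-bit sRGB channel to its linear value per WCAG."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb: tuple[int, int, int]) -> float:
    """Relative luminance of an sRGB color."""
    r, g, b = (linearize(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05)."""
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Level AA requires at least 4.5:1 for normal-size body text.
ratio = contrast_ratio((119, 119, 119), (255, 255, 255))  # gray text on white
print(f"{ratio:.2f}:1 -> {'pass' if ratio >= 4.5 else 'fail'} (WCAG AA, normal text)")
```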
New AI-driven tools can make meeting these guidelines more efficient and effective. For instance, they can auto-generate captions, suggest alternative text for images, or flag insufficient contrast.
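As a rough illustration of how alt-text suggestion can work, the sketch below drafts a caption with an off-the-shelf image-captioning model through Hugging Face's transformers library. The checkpoint and file name here are assumptions chosen for the example, not a reference to any particular vendor's product, and the output is only a draft for a human editor to review.

```python
# Sketch: drafting alt text with a public image-captioning model.
# Assumes the transformers and Pillow packages are installed; the BLIP
# checkpoint below is one public option, not a specific vendor's tool.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

def draft_alt_text(image_path: str) -> str:
    """Return a machine-drafted caption for human review."""
    return captioner(image_path)[0]["generated_text"]

# "team_photo.jpg" is a hypothetical file. An editor should still confirm
# the draft conveys the image's purpose in context, as WCAG expects.
print(draft_alt_text("team_photo.jpg"))
```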
While these advancements make it easier for companies to comply with the guidelines, human oversight remains essential. For example, some of Google's AI-generated search result summaries have included inaccurate information, which, when disseminated on websites, can misinform and harm users with disabilities. In 2023, researchers at Pennsylvania State University found that some AI models used to categorize large amounts of text show bias against disability-related language. These models tend to classify sentences as negative or "toxic" based on the presence of disability-related terms, without regard for the context.
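One simple way to probe for this failure mode is to score sentence pairs that differ only by a disability-related phrase and compare the model's outputs. The sketch below does this with a publicly available toxicity classifier; the checkpoint named is an assumption for illustration, not necessarily one of the models the researchers studied.

```python
# Sketch of a small bias audit: score sentence pairs that differ only in a
# disability-related phrase. The model checkpoint is an illustrative choice;
# any text classifier could be swapped in.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

pairs = [
    ("My neighbor is a friendly person.",
     "My neighbor is a friendly person who is blind."),
    ("She is a talented engineer.",
     "She is a talented engineer with cerebral palsy."),
]

for neutral, variant in pairs:
    base_score = classifier(neutral)[0]["score"]
    variant_score = classifier(variant)[0]["score"]
    # A large jump driven only by the disability term suggests the model is
    # penalizing identity language rather than actual toxicity.
    print(f"{base_score:.3f} -> {variant_score:.3f}: {variant}")
```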
To address these problems, experts emphasize the importance of involving the user community—including those with disabilities—in all stages of AI development.
"AI data systems that include representation of people with disabilities to minimize bias," the United Access Board, a governmental agency, advised during its 2024 Preliminary Findings on Artificial Intelligence. This should include a thorough evaluation of AI tools in the hiring process and for job-related activities "to identify potential discriminatory impacts on applicants and employees with disabilities."
The board also noted concerns about AI-powered surveillance tools known as "bossware technologies," which may not be correctly calibrated for employees with disabilities. This can be a problem if companies attempt to monitor things like employee fatigue or movement based on wearable technology that may not properly assess people with physical disabilities.
Realizing AI's potential hinges on acknowledging its limitations
Thousands of website owners have taken significant strides toward meeting accessibility standards since the 2010s, when the Americans with Disabilities Act began to be applied to company websites. Yet as of 2023, the large majority of home pages still failed to meet accessibility standards, according to WebAIM.
As with any technological breakthrough, the initial excitement (and overpromise) around AI-driven tools for tackling these persistent compliance issues has given way to closer examination of their true potential and limitations. Many industry experts agree that AI can offer scalable and relatively affordable paths to compliance, but relying solely on AI-powered solutions will not produce the outcome legislators and advocates strive for: fully inclusive online experiences for people with disabilities. AI tools have helped make workplaces and the internet more accessible, while also showing business owners that human involvement remains essential. As more businesses pair AI with responsible oversight and inclusive design, far more workplaces and online experiences could become accessible to all.
Story editing by Carren Jao. Additional editing by Elisa Huang. Copy editing by Sofía Jarrín. Photo selection by Lacy Kerrick.
This story was produced and distributed in partnership with Stacker Studio.
5 ways companies are incorporating AI ethics
As more companies adopt generative artificial intelligence models, AI ethics is becoming increasingly important. Ethical guidelines to ensure the transparent, fair, and safe use of AI are evolving across industries, albeit slowly when compared to the fast-moving technology.
But thorny questions about equity and ethics may force companies to tap the brakes on development if they want to maintain consumer trust and buy-in.
A KPMG survey found that about half of consumers think there is not sufficient regulation of generative AI right now. The lack of oversight tracks with limited trust that institutions, particularly tech companies and the federal government, will ethically develop and implement AI.
Within the tech industry, ethical initiatives have been set back by a wave of layoffs that gutted responsible-AI teams, according to an article presented at the 2023 ACM Conference on Fairness, Accountability, and Transparency. Layoffs at major corporations, including Amazon's streaming platform Twitch, Microsoft, Google, and X, hit these teams hard, leaving a vacuum.
While nearly 3 in 4 consumers say they trust organizations using generative AI in daily operations, confidence in AI varies between industries and functions. Just over half of consumers trust AI to deliver educational resources and personalized recommendations, compared with less than a third who trust it for investment advice and self-driving cars. Consumers are open to AI-driven restaurant recommendations, but not, it seems, with their money or their lives.
Clear concerns persist around the broader use of a technology that has elevated scams and deepfakes to a new level. The KPMG survey found that the biggest consumer concerns are the spread of misinformation, fake news, and biased content, as well as the proliferation of more sophisticated phishing scams and cybersecurity breaches. As AI grows more sophisticated, these concerns are likely to be amplified as more people are affected, making ethical frameworks for approaching AI all the more essential.
That puts the onus to set ethical guardrails on companies and lawmakers. In May 2024, Colorado became the first state to enact comprehensive AI legislation, with provisions for consumer protection and accountability from companies and developers introducing AI systems used in education, financial services, and other critical, high-risk industries.
As other states evaluate similar legislation for consumer and employee protections, companies themselves possess the in-the-weeds insight needed to address high-risk situations specific to their businesses. While consumers have set a high bar for companies' responsible use of AI, the KPMG report also found that organizations can take concrete steps to earn and maintain public trust: education, clear communication, and human oversight to catch errors, biases, or ethical concerns.
The reality is that the tension between proceeding cautiously to address ethical concerns and moving full speed ahead to capitalize on the competitive advantages of AI will continue to play out in the coming years.
The story's authors analyzed current events to identify five ways companies are ethically incorporating artificial intelligence in the workplace.

Actively supporting a culture of ethical decision-making
AI initiatives within the financial services industry can speed up innovation, but companies need to take care to protect the financial system and customer information from criminals. To that end, JPMorgan Chase has invested in responsible AI governance, including an ethics team that works on the company's AI initiatives. The company ranks first on an industry index of banks' AI readiness, including a top ranking for transparency in the responsible use of AI.
Development of risk assessment frameworks
The National Institute of Standards and Technology has developed an AI Risk Management Framework that helps companies better plan and grow their AI initiatives. The approach supports companies in identifying the risks posed by AI, defining and measuring ethical activity, and implementing AI systems with fairness, reliability, and transparency. The Vatican is even getting in on the action: it collaborated with the Markkula Center for Applied Ethics at Santa Clara University, a Catholic college in Silicon Valley, to produce guidance for companies to navigate AI technologies ethically.
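As a concrete (if invented) illustration, a team applying the framework might keep a risk register whose fields map onto its four core functions: Govern, Map, Measure, and Manage. The Python sketch below shows that structure; the fields and the example entry are assumptions, not artifacts of the framework itself.

```python
# Illustrative risk register loosely organized around the four core functions
# of the NIST AI Risk Management Framework (Govern, Map, Measure, Manage).
# The fields and example entry are invented for this sketch.
from dataclasses import dataclass

@dataclass
class AIRisk:
    description: str   # Map: what could go wrong, and in what context
    metric: str        # Measure: how the risk is quantified and tracked
    mitigation: str    # Manage: the planned response
    owner: str         # Govern: who is accountable

register = [
    AIRisk(
        description="Resume screener ranks candidates with employment gaps lower",
        metric="Selection-rate ratio across groups, reviewed quarterly",
        mitigation="Human review of all automatically rejected applications",
        owner="HR analytics lead",
    ),
]

for risk in register:
    print(f"[{risk.owner}] {risk.description} -> {risk.mitigation}")
```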
Specialized training in responsible AI usage
Amazon Web Services has developed many tools and guides to help its employees think and act ethically as they develop AI applications. One offering, a YouTube series produced by AWS Machine Learning University, serves as an introductory course covering fairness criteria and methods for mitigating bias. Amazon's SageMaker Clarify tool helps developers detect bias in AI model predictions.
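To give a flavor of what automated bias detection involves, the sketch below computes one widely used metric, the disparate impact ratio, in plain Python. It illustrates the general idea rather than any specific product's API; the data and the conventional 0.8 "four-fifths" threshold are examples.

```python
# Sketch of one common bias metric: the disparate impact ratio, i.e. the
# favorable-outcome rate for one group divided by the rate for another.
# The predictions, groups, and 0.8 threshold are illustrative examples.

def favorable_rate(predictions: list[int], groups: list[str], group: str) -> float:
    """Share of `group` members the model predicted favorably (label 1)."""
    outcomes = [p for p, g in zip(predictions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]  # hypothetical model decisions
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

ratio = favorable_rate(predictions, groups, "b") / favorable_rate(predictions, groups, "a")
# Ratios below ~0.8 (the "four-fifths rule") are a common flag for review.
print(f"Disparate impact ratio: {ratio:.2f} ({'flag for review' if ratio < 0.8 else 'ok'})")
```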
Communication of AI mission and values
Companies that develop a mission statement around their AI practices clearly communicate their values and priorities to employees, customers, and other stakeholders. Examples include the published AI principles of Dell Technologies and IBM, which clarify each company's approach to AI application development and implementation, publicly setting guiding principles such as "respecting cultural norms, furthering social equality, and ensuring environmental sustainability."
Implementing an AI ethics board
Companies can create AI ethics boards to help them find and fix the ethical risks around AI tools, particularly systems that produce biased output because they were trained on biased or discriminatory data. Software maker SAP has had an AI Ethics Advisory Panel since 2018; it works on current ethical issues and looks ahead to identify potential future problems and solutions. Northeastern University has established a standing ethics advisory resource to work with companies that prefer not to create their own.
Story editing by Jeff Inglis. Additional editing by Alizah Salario. Copy editing by Paris Close. Photo selection by Clarese Moller.

