GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.
Chief Information Officers and other technology decision makers continually seek new and better ways to evaluate and manage their investments in innovation, especially the technologies that may drive consequential decisions affecting human rights. As Artificial Intelligence (AI) becomes more prominent in vendor offerings, there is an increasing need to identify, manage, and mitigate the unique risks that AI-based technologies may bring.
Cisco is committed to maintaining a responsible, fair, and reflective approach to the governance, implementation, and use of AI technologies in our solutions. The Cisco Responsible AI initiative maximizes the potential benefits of AI while mitigating bias or inappropriate use of these technologies.
Gartner® Research recently published “Innovation Insight for Bias Detection/Mitigation, Explainable AI and Interpretable AI,” offering guidance on the best ways to incorporate AI-based solutions that facilitate the “understanding, trust and performance accountability required by stakeholders.” This article describes Cisco’s approach to Responsible AI governance and features this Gartner report.
At Cisco, we are committed to managing AI development in a way that augments our focus on security, privacy, and human rights. The Cisco Responsible AI initiative and framework governs the application of responsible AI controls in our product development lifecycle, how we manage incidents that arise, how we engage externally, and the use of AI across Cisco’s solutions, services, and enterprise operations.
Our Responsible AI framework comprises:
- Guidance and Oversight by a committee of senior executives across Cisco businesses, engineering, and operations to drive adoption and guide leaders and developers on issues, technologies, processes, and practices related to AI
- Lightweight Controls implemented within Cisco’s Secure Development Lifecycle compliance framework, including unique AI requirements
- Incident Management that extends Cisco’s existing Incident Response system with a small team that reviews, responds, and works with engineering to resolve AI-related incidents
- Industry Leadership to proactively engage, monitor, and influence industry associations and related bodies on emerging Responsible AI standards
- External Engagement with governments to understand global perspectives on AI’s benefits and risks, and to monitor, analyze, and influence legislation, emerging policy, and regulations affecting AI in all Cisco markets.
We base our Responsible AI initiative on principles consistent with Cisco’s operating practices and directly applicable to the governance of AI innovation. These principles (Transparency, Fairness, Accountability, Privacy, Security, and Reliability) are used to upskill our development teams, map to controls in the Cisco Secure Development Lifecycle, and embed Security by Design, Privacy by Design, and Human Rights by Design in our solutions. And our principle-based approach empowers customers to take part in a continuous feedback cycle that informs our development process.
We strive to meet the highest standards of these principles when developing, deploying, and operating AI-based solutions to respect human rights, encourage innovation, and serve Cisco’s purpose to power an inclusive future for all.
Check out Gartner's recommendations for integrating AI into an organization's data strategies in this newsletter, and learn more about Cisco’s approach to Responsible Innovation by reading our introduction, “Transparency Is Key: Introducing Cisco Responsible AI.”
We’d love to hear what you think. Ask a question, comment below, and stay connected with Cisco Secure on social!
Cisco Secure Social Channels