As you might guess, pinning down the specifics underlying these principles is extremely hard. Harder still is turning those broad principles into something tangible and detailed enough to be used when crafting AI systems.
I think you can guess where this is heading. If the humans whose decisions the data is patterned on have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic that data. There is no semblance of common sense or other sentient faculties in AI-crafted models per se.
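To make this concrete, here is a minimal sketch, using entirely hypothetical hiring data, of how naive pattern matching reproduces whatever disparities are baked into historical decisions. The "model" below is deliberately crude (a per-group majority vote, not a real ML algorithm), but it illustrates the point: if past decisions favored one group, the fitted pattern does too.

```python
from collections import defaultdict

# Hypothetical historical hiring decisions: (group, approved)
# Group A was approved 8 of 10 times; group B only 3 of 10 times.
history = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 3 + [("B", 0)] * 7

# A crude "model": tally outcomes per group, then predict the
# majority historical outcome for that group.
counts = defaultdict(lambda: [0, 0])  # group -> [rejections, approvals]
for group, approved in history:
    counts[group][approved] += 1

def predict(group):
    rejected, approved = counts[group]
    return 1 if approved > rejected else 0

print(predict("A"))  # 1: mimics the 80% approval pattern for group A
print(predict("B"))  # 0: mimics the 70% rejection pattern for group B
```

The model never "decides" to be biased; it simply reflects the statistical regularities of the decisions it was patterned on, which is exactly the dynamic an AI audit is meant to surface.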
Along those lines, a notable research paper on the auditing of AI, entitled “Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims,” has proposed that an AI audit would usually examine three sets of mechanisms underlying the AI: institutional mechanisms, software mechanisms, and hardware mechanisms.

They define institutional mechanisms this way: “Institutional mechanisms are processes that shape or clarify the incentives of the people involved in AI development, make their behavior more transparent, or enable accountability...”
And hardware mechanisms are defined this way: “Computing hardware enables the training, testing, and use of AI systems. Hardware relevant to AI development ranges from sensors, networking, and memory to, perhaps most crucially, processing power. Concerns about the security and other properties of computing hardware, as well as methods to address those concerns in a verifiable manner, long precede the current growth in adoption of AI.”
Also worth noting is the related research paper “Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing.”