As you might guess, pinning down the specifics underlying these principles is extremely hard. Harder still is turning those broad principles into something tangible and detailed enough to guide the crafting of AI systems.
Per the overall NIST RMF, here is a definition of risk: “Risk is a measure of the extent to which an entity is threatened by a potential circumstance or event. Risk is also a function of the adverse impacts that arise if the circumstance or event occurs, and the likelihood of occurrence.”
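NIST does not prescribe a single formula, but the definition above, risk as a function of adverse impact and likelihood of occurrence, is often illustrated with the common simplification of multiplying the two. The sketch below assumes an invented helper (`risk_score`) and arbitrary scales for likelihood and impact; it is illustrative only, not part of the NIST framework.

```python
def risk_score(likelihood: float, impact: float) -> float:
    """Simplified risk measure: likelihood times impact.

    Assumes likelihood is a probability in [0, 1] and impact is a
    severity rating on an arbitrary scale (e.g., 1 to 5). These scales
    are illustrative assumptions, not taken from the NIST RMF.
    """
    if not 0.0 <= likelihood <= 1.0:
        raise ValueError("likelihood must be between 0 and 1")
    return likelihood * impact

# Example: an event with a 20% chance of occurring and an impact of 4
print(risk_score(0.2, 4))  # prints 0.8
```

Even this toy version makes the definition's two-part structure visible: a high-impact event with a near-zero likelihood can score lower than a moderate-impact event that is very likely to occur.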
NIST also recognizes that an AI RMF, as a proposed standard, has to be readily usable, be updated as technology advances, and embody other core criteria: “A risk management framework should provide a structured, yet flexible, approach for managing enterprise and societal risk resulting from the incorporation of AI systems into products, processes, organizations, systems, and societies.”
External stakeholders would encompass a wide array of entities, including trade groups, advocacy groups, civil society organizations, and others. The general public consists of consumers and others who experience the risk associated with untoward AI.

Unfortunately, there is no particular number or assigned value that we can give to the amount of tolerable or acceptable risk that we might find worthwhile or societally permissible.