Progressive Levels of Plan Goodness (Executing the Ideal Plan for Robots Is Not Necessarily Optimal for Humans)

[Figure: Policy → Physical Plan → People Plan, v1]

Some specific examples of plans that would work well with robots but won't work well with humans:
– Gun confiscation. If benevolent robots enforce all the laws, it might work, but if lawmakers and law enforcement are Stalin, the Egyptian military, or even just some racist cops, this is not going to work the way you had hoped.
– Even modestly complex income tax provisions like the US Earned Income Tax Credit. As of around 2015, 25% of the payments were either over or under the correct amount. When poor people, who need the subsidy precisely because they have no money, wind up paying tax professionals to make sure they claim the credit correctly, things have gone wrong.
– Pretty much any health care system that does not use checklists and formal handoffs. Even though many tasks are routine, so you would expect human and robot performance to be similar, checklists significantly improve quality of care in a number of common clinical situations.

Hence we need to estimate our plans' results as they will be implemented by the actual performers, not as they would be executed by the highest-quality operators we can imagine.

To that end, here is a suggested approach for allocating resources against a problem where humans do most of the work (a rough sketch in code follows the list):

  • First, allocate the level of resources notionally required to perform the tasks, assuming perfect execution, plus some margin for known incompetence, mistakes, corruption, etc.
  • Next, allocate people to identifying incompetence, mistakes, and corruption for corrective action, until you either reach the threshold of acceptable error or stop gaining net benefit (the cost of the additional reporter exceeds the cost of the remaining obvious deficiencies). Such corrective actions include sample inspections and sting operations.
  • Next, do time-and-motion studies to determine which staff are executing to the current known acceptable standard. This leads to retraining, implementation of checklists, or purchases of automation.
  • Finally, engage in the ongoing practice, at a low level of effort, of experimenting with alternative approaches, business process restructuring, etc.
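
To make the stopping rule in the second bullet concrete, here is a minimal sketch in Python. Everything in it is assumed for illustration: the function names, the geometric decay of remaining deficiencies, and every number. A real program would estimate those quantities from sample inspections and sting operations rather than invent them.

    # Illustrative sketch only: the numbers and the cost model are made up.
    # It shows the stopping rule from the second bullet: keep adding oversight
    # staff while each one removes more deficiency cost than they cost to employ,
    # and stop early once remaining error falls below an acceptable threshold.

    BASE_STAFF = 100                 # notionally required for perfect execution
    ERROR_MARGIN = 0.15              # margin for known incompetence, mistakes, corruption
    REPORTER_COST = 80_000           # assumed annual cost of one additional reporter
    ACCEPTABLE_ERROR_COST = 200_000  # assumed threshold of acceptable remaining error

    def remaining_deficiency_cost(reporters: int) -> float:
        """Assumed model: each reporter catches a diminishing share of what remains.
        A real program would estimate this curve empirically."""
        initial_cost = 2_000_000.0
        catch_rate = 0.25  # fraction of remaining deficiency cost each reporter removes
        return initial_cost * (1 - catch_rate) ** reporters

    def allocate() -> tuple[int, int]:
        """Return (line staff, oversight staff) under the staged allocation rule."""
        line_staff = round(BASE_STAFF * (1 + ERROR_MARGIN))
        reporters = 0
        while True:
            current = remaining_deficiency_cost(reporters)
            if current <= ACCEPTABLE_ERROR_COST:
                break  # acceptable error threshold reached
            marginal_benefit = current - remaining_deficiency_cost(reporters + 1)
            if marginal_benefit <= REPORTER_COST:
                break  # the next reporter costs more than the deficiencies they would remove
            reporters += 1
        return line_staff, reporters

    if __name__ == "__main__":
        staff, reporters = allocate()
        print(f"line staff: {staff}, oversight staff: {reporters}")

The decay model is only there to make the loop terminate; the point is the two stopping conditions, which mirror the bullet above: stop when the remaining error is acceptable, or when the next reporter costs more than the deficiencies they would remove.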