Error correction:
The steps to get back on track
But step size unknown
Outside of gotcha questions on midterms and finals, I think schooling for the most part tends to avoid edge cases and boundary conditions. That's not necessarily unreasonable. After all, why emphasize the less likely scenarios over the more likely ones? That's certainly fine and dandy as long as problems stay within reasonable expectations, but the longer you spend on any given problem, the more likely it is to stray outside expected norms.
Whatever the job and whatever the project, there's always some feedback loop, where everyone's trying to hit the target goal as quickly and as efficiently as possible, trying to minimize overshoot and waste. Regular feedback tells us how far we are from the goal, allowing us to modulate our behavior and effort as appropriate. If we're lucky, it's a straightforward single-input, single-output system, where it's easy to track the overall behavior of everything. More likely than not, it's a multi-input, multi-output system with each input struggling in isolation. For complex systems, I feel that we should be upfront about the lack of complete knowledge, since how sub-teams modulate and optimize their output despite the fog of war (so to speak) is the big question. Everything else, like how well the plant performs relative to spec or expectations, should show significantly less variation.
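For the lucky SISO case, the whole loop fits in a few lines. Here's a minimal sketch in Python; the first-order plant model and the gain value are illustrative assumptions, not tuned to anything real:

```python
# A minimal SISO feedback loop: proportional control driving a toy
# first-order plant toward a setpoint. Gains and dynamics are illustrative.

def simulate_p_control(setpoint=1.0, kp=0.8, steps=30, dt=0.1):
    output = 0.0
    for t in range(steps):
        error = setpoint - output          # feedback: distance to the goal
        command = kp * error               # modulate effort in proportion
        output += dt * (command - output)  # first-order plant response
        print(f"t={t:2d}  error={error:+.3f}  output={output:.3f}")

simulate_p_control()
```

Even this toy shows proportional control's well-known steady-state offset: the output settles short of the setpoint, which is its own small lesson about effort versus outcome.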
Even with incomplete information, there's a lot we can do for a small control loop chugging away, ignorant of its interconnectivity and place in a much larger system. For sure, we can be adaptive, k, so let's not dampen those spirits, bb. I assume we can work at least a little better than the simplest control loops, with the option of adjusting our gains and feedback with a bit more fidelity and frequency. In fact, I still think responsiveness and the ability to make adjustments are significantly greater advantages than any other particularly "human" trait when comparing us to machines. This way, there's quite a lot any single subcomponent can do without knowing anything about the much larger system in which it exists.
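As a sketch of what "adjusting our gains" might look like for one small loop that knows nothing about the bigger system, here's an illustrative adaptation rule (the grow and shrink factors are arbitrary assumptions, not a standard tuning method): back off when the error grows, press a bit harder when it shrinks.

```python
# One small loop, ignorant of the larger system, retuning its own gain from
# local feedback alone. The grow/shrink factors are arbitrary assumptions.

def adaptive_gain_loop(setpoint=1.0, gain=2.5, steps=25, dt=0.1):
    output, prev_error = 0.0, float("inf")
    for t in range(steps):
        error = setpoint - output
        if abs(error) > abs(prev_error):
            gain *= 0.5                    # overshooting: be cautious
        else:
            gain *= 1.05                   # making progress: push a bit harder
        output += dt * (gain * error - output)  # same toy first-order plant
        prev_error = error
        print(f"t={t:2d}  gain={gain:5.2f}  error={error:+.3f}")

adaptive_gain_loop()
```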
That's all well and good, but these cases that we're used to and taught to recognize all seem to assume that we can in fact generate sufficient output to correct for perceived errors and reach the goal. What if that's not actually the case? What if 100% does not and cannot get us to where we want to be? This isn't some Disney redemption story where an extra 10% comes out of nowhere and/or we get to leverage some previously ignored loophole. At some point, we are what we are, limited not by effort or desire, but by our own selves.
Significant issues arise when we knowingly or unknowingly push something past its limits. In the best case scenario, the component just saturates and fails to match the given command. All the prior optimal control theory and planning goes to waste, built on invalid assumptions and misplaced optimism. Over time, as the real output continues to lag behind the expected output, error grows unbeknownst to the system designers, and even if system conditions return to a manageable state, we're prone to overcorrection, continuing to drive the system further from the desired goals.
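Control folks have a name for exactly this failure mode: integrator windup. Here's a hedged sketch reusing the same toy first-order plant as above, with all values illustrative: the actuator saturates at u_max, the integral term keeps banking error it can never act on, and once the demand drops back to a manageable level, the stored-up correction drives a long overshoot.

```python
# Integrator windup: while the actuator is pinned at its limit, the integral
# term keeps accumulating error it cannot act on; when conditions return to
# normal, the banked correction causes sustained overcorrection.

def windup_demo(setpoint=5.0, kp=1.0, ki=0.5, u_max=1.0, steps=80, dt=0.1):
    output, integral = 0.0, 0.0
    for t in range(steps):
        if t == 40:
            setpoint = 0.5               # conditions return to a manageable state
        error = setpoint - output
        integral += error * dt           # keeps accumulating while saturated
        command = kp * error + ki * integral
        actual = max(-u_max, min(u_max, command))  # the component's real limit
        output += dt * (actual - output)
        print(f"t={t:2d}  command={command:+7.2f}  actual={actual:+.2f}  output={output:.3f}")

windup_demo()
```

The textbook remedy, anti-windup, simply stops accumulating (or clamps the integral) while the actuator is saturated: the loop formally admitting what its component can and cannot do.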
It seems that the most effective way to correct this is to recognize the mismatch between real and expected capabilities, and then to adjust the overall system design. That requires changes at the supervisory and architect level. For sure, renegotiating expectations and projections with stakeholders is no easy feat, depending on the fervor with which the initial promises were made. It's particularly difficult if the original system is the first of its kind, or if unforeseen system characteristics turn out to be more than negligible nuisances.
In fact, it's probably more reasonable to assume that the control system itself is unlikely to change. We go back to our overburdened sub-component, the 5' 6" (5' 7" on a good day!) out-of-shape baller being asked to dunk on Shaq. Looney Tunes MJ he is not. What chance and what hope is there? He could have faith in the system architects, for surely they must've done something relevant before; they must've put the system together with the intention of success, grounded in their past experiences.
Nah. Absent the whole system view and having exhausted whatever faith was left, isn't the more reasonable step to go off-script? This is the real world, not a crude block diagram in an overpriced textbook. Sub-components can collude and adjust, not just operate in isolation with rigid, blind resolve. We can get away with accepting short-term local losses, even if the system plan made no such accommodations and tolerances. Gains are suggestions, feedback is noisy and misleading, and perhaps none of us know when we'll get to where we're headed. Maybe the best structure is the one that at times allows itself to be torn down from within, to be built back up better than before.