How Correctly To Predict From A Logical And Economic Perspective

Summary:
– Make a spreadsheet with facts, critical estimation parameters, model identification, model prediction results, and value evaluations for those prediction results.
Sections:
– Core Concepts
– Types Of Facts And Significance
– Notes On Critical Estimation Parameters
– Model Templates/Best Classification Of Predictions Via Models
– Low-Knowledge/Zero-Knowledge Situations
– When Is A Prediction Spreadsheet Complete? – Adjustments From The Ideal To Economically Optimal Decision Trees
– A Few Examples To Demonstrate The Correctness Of The Approach


Core Concepts

In a correct metaphysical sense, prediction actually is a correlation of the most distant pasts with the more recent pasts and present, since we never live in the future, only in an apparently successive stream of presents.
Following the orthodox objective materialist view of the average human, the future is the current set of objects, plus some changes that occur according to the laws of physics and possibly some mental states or divine interventions, resulting in a new set of objects.
In either case, we start with a known set of knowledge-objects and transform them via rules into a new set of knowledge-objects, with the constraint that the old set of knowledge-objects had to be the ones actually experienced, and the new set of knowledge-objects should be (to the extent that we assert it is knowable) the ones we actually experience in the present that arrives (or at least, for the present that does arrive, we understand the reasons for the deviations).

I believe it is most efficient to analyze/compose prediction in this fashion:
– Facts: those items that can be measured directly and for which we have some reasonable certainty, or at least for which we can put a probability bound on their existence
– Critical estimation parameters (CEPs): those items that cannot be measured directly, but that we know or have good reason to believe exist, as reflected in data points (facts) that indicate the bounds within which that parameter must lie
– Models: the mappings of the facts and CEPs to the futures in question
– Model predictions: possibly 1-1 with models, but since models often output ensembles or a probability mass determined by random variables, sometimes the mapping from a model to its predictions for a given set of facts and CEPs will be 1-to-many.
You don't add anything to the metaphysics, and as you will see, each input into the models that determine the predicted futures is not only easily separated and worked on as needed, but also readily corresponds to the classes of phenomena we attempt to predict.
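To make the decomposition concrete, here is a minimal sketch in Python of one pass over such a prediction spreadsheet. Every name here (Fact, CEP, Model, SpreadsheetRow, run) is a hypothetical illustration rather than an established library, and the value evaluation is left as a caller-supplied function.

    # Minimal sketch of the prediction-spreadsheet decomposition; all class and
    # function names here are hypothetical illustrations, not an established library.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Fact:                      # directly measurable, with some certainty bound
        name: str
        value: float
        probability: float = 1.0     # confidence that the fact exists as measured

    @dataclass
    class CEP:                       # not directly measurable, only bounded by facts
        name: str
        low: float
        high: float

    @dataclass
    class Model:                     # maps facts and CEPs to one or more predicted futures
        name: str
        predict: Callable[[Dict[str, Fact], Dict[str, CEP]], List[Dict[str, float]]]

    @dataclass
    class SpreadsheetRow:            # one model prediction plus its value evaluation
        model_name: str
        predicted_future: Dict[str, float]
        value_evaluation: float      # worth of this predicted future to the decision maker

    def run(models: List[Model], facts: Dict[str, Fact], ceps: Dict[str, CEP],
            value_of: Callable[[Dict[str, float]], float]) -> List[SpreadsheetRow]:
        rows = []
        for m in models:
            for future in m.predict(facts, ceps):   # a model may emit an ensemble (1-to-many)
                rows.append(SpreadsheetRow(m.name, future, value_of(future)))
        return rows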

I have attempted to organize this by constructing a sort of general model approach, which includes by reference, for example, the standard models of physics. The construction of this model will proceed by outlining first its most quantitative contributors to the most complete/accurate version of the future, and then proceeding to those that are least significant/least likely to be accurate.


Types Of Facts And Significance

Of course, if a model depends on certain facts and CEPs (and basically every model must; even the mountains must exist at a certain place for them to remain in place in the predicted future), those facts and CEPs must exist as represented in our calculations for that model. What does it mean for a fact or CEP to exist? There are four ways in which a phenomenon can be said to exist (noting of course that the time of existence might vary):

– The world of the self (solipsism)
– The world of the human-observable (first-order objective materialism)
– The world of the indirectly influencing (micro and hidden phenomena, like quantum mechanics)
– The world of the unobserved and unnoticed (when a tree falls in the forest and you didn’t even know it existed)

The sensations of oneself: you don't doubt their existence in your own world. They might not represent a future reality, but at least they exist now, and you know that for this moment they do exist. If they are used for nothing else, they are at least the original means of value evaluation of model predictions.

The world of the immediately observable is where most facts used by our models come from. There are several dimensions:
– An item that actually exists now, with observable aspects
– The record of the previous existence, or current existence (or periodic existence, or rough pattern, so on…) of that phenomenon
– (used frequently and fairly important and so mentioned even though not strictly in the physical world) The claim of the existence of something that was human-observed

The indirectly influencing phenomena never truly are observed by humans. Some explanation:

Things like quantum mechanics and massive simulation only meaningfully exist in controlled settings, manipulated only by machines. While people postulate that quantum phenomena govern the entire universe, in real life particles are being measured all the time by interactions with nearby matter. Furthermore, no one is actually measuring 99+% of the quantum interactions as laid out by the physics. Likewise, when we go to perform a serious simulation such as detailed social dynamics or the weather, we aren’t measuring anything remotely close to the quantities that we know already exist and impact the model. Consequently, such models are suggestive and not great at predicting in an open system.

The immediate consequence of this is that these physics only apply to the specific isolated systems where they are contained or observed. In any other usage, they don’t matter, and therefore don’t need to be taught, only mentioned for future reference and mental preparation.

The more interesting consequence is that, effectively, all work on these types of small and isolated phenomena occurs only in specialized facilities, AND that any work upon either the phenomena themselves, or on the machines that manipulate them, can only be done via computers or similar indirect means (with the exception of macro-component assembly). Consequently, almost all of the work can be done away from the actual site where the equipment is housed, and certainly none of the fine design work and fabrication is done by hand. This means computers and similar abstract control mechanisms effectively mediate either the observation or the manipulation of these phenomena. It is not merely that the sufficiently advanced technology appears as magic: it in fact exists in a metaphysically separate (but related and linked) world.

As for the unobserved and unnoticed phenomena, these do factor into prediction models, as various forms of ignorance/limitations of measurement. We have been surprised by things that we never measured or noticed in the past; yet they appeared in our later past, and some continue to appear in our present. Often we might model this as tail risk, SWAGs, and inherent uncertainty/wobble. If we did not account for these things, we would over-constrain the future, and our retrospectives would wind up affirmatively denying the existence of items measured in the past or present but not accounted for in our models.


Notes On Critical Estimation Parameters

See page “Correctly Bounding Critical Estimation Parameters”.


Model Templates/Best Classification Of Predictions Via Models

Here I attempt to follow the generally understood knowledge taxonomies, but most importantly, I attempt to use the simplest/easiest generator functions and/or templates to partition the prediction space, giving priority to physically realized and predictable items (because they are the bulk of the space and the most useful, being the most reliable), but also including theoretical possibilities so as to express and realize items such as gods, new future dynamics, and random behaviors.

There are two sorting orders for these:
– First, the correlation of facts and bounded CEPs according to all model rules, ranked in accordance with the number of future facts/accurate futures predicted.
– Second, deconflict on isolation (the conditions under which the correlation is valid)/situation. That is to say, some models may contradict, but if there is a spatial or other isolation construct that allows each to co-exist and most accurately predict the sum of the future facts, then use that construct, somehow joining the two models to predict more facts. If neither model gets an obviously larger quantity of facts right, then the matter remains ambiguous; that is why I leave "models" in as a quantity in the composition of prediction, because there may be knowledge to be gained by computing two different models/rule sets that neither one alone could completely yield. (A minimal sketch of this ranking and joining follows this list.)
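A minimal sketch of the two sorting orders, assuming the hypothetical spreadsheet types above. Scoring against realized facts, the tolerance, and the isolation predicate are all illustrative assumptions, not a prescribed implementation.

    # Minimal sketch of the two sorting orders. Scoring against realized facts,
    # the tolerance, and the isolation predicate are all illustrative assumptions.
    from typing import Callable, Dict, List, Tuple

    def facts_predicted(predicted: Dict[str, float], realized: Dict[str, float],
                        tol: float = 1e-6) -> int:
        # count the future facts that the prediction got right, within a tolerance
        return sum(1 for k, v in predicted.items()
                   if k in realized and abs(realized[k] - v) <= tol)

    def rank_models(predictions: Dict[str, Dict[str, float]],
                    realized: Dict[str, float]) -> List[Tuple[str, int]]:
        # first order: rank models by the number of future facts correctly predicted
        scored = [(name, facts_predicted(p, realized)) for name, p in predictions.items()]
        return sorted(scored, key=lambda t: t[1], reverse=True)

    def join_if_isolated(pred_a: Dict[str, float], pred_b: Dict[str, float],
                         in_domain_a: Callable[[str], bool]) -> Dict[str, float]:
        # second order: if an isolation construct assigns each fact to one model's
        # domain, two contradicting models can co-exist and jointly predict more facts
        joined = {k: v for k, v in pred_b.items() if not in_domain_a(k)}
        joined.update({k: v for k, v in pred_a.items() if in_domain_a(k)})
        return joined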

In those two orders, here are the best model templates I’ve come up with:

– Newtonian physics and other well-defined, often repeated experimental or real-life situations, with prediction accuracy over 99%. This suggests that the correlations are extremely high and are well isolated, so that they would apply to a very broad range of futures.
– Situations where prediction accuracy is over 99% but only in limited conditions and where an explanation for the limited applicability is not known. One example is fixing problems with machines, where there is often a clear way to reproduce a problem, but the root cause of the issue is yet to be discovered. The correlations are very high, indicating the validity of the correlation, but the amount of isolation from other events is not known, limiting the predictive power. For example, the machine could magically start working again, and we wouldn’t know why it returned to normal operation.
– Situations where prediction accuracy in the short term is high but long-term prediction accuracy is low (e.g. the weather), suggesting that the correlation in the short term is likely to be accurate and that it can be relied on in that context, but that there are known issues with either the core correlation itself, isolation of events, or data issues causing loss of precision and/or accuracy of the results.
– Situations where the short-term prediction is unreliable but the long-term prediction is accurate. The issue with these is that there are a number of intervening states which may have higher correlation than the supposed correlation, which casts doubt on both the validity of the correlation and of the isolation from other events. Two examples where this type of correlation is claimed would be climate change due to human activity and the democratic peace theory.
– Situations where the prediction accuracy is low but higher than a random distribution among cases. Everything is in doubt but it’s better than nothing. Predictions of human behavior would often fall in this category.
– Seemingly random events, with few or no clear correlations. Here I also bucket in zero-knowledge analysis and the afterlife, as the use/analysis/techniques are most similar.

If two or more model templates accurately predict the data, the one that is of highest correlation should be used.

The derivation of these model templates is detailed below for both metaphysically correct and objective-materialist views (note that I am mostly ignoring the prediction of human behavior in this text, leaving it to other writing).

In a metaphysical context, following the hypothesis that the transition to the future will be like the transitions from the older past to the newer past and the present (which is roughly consistent with one's own knowledge about the relationships of the past and present), we seek the simplest-to-compute approaches/rules that most correctly reconstruct the series of newer situations, given the older situations.
– The vast majority of the future facts follow from the present, given the geography of the land, the chemistry/physics, and so on…what we consider the physical phenomena according to the generally accepted science e.g. Newtonian physics. Especially in Newtonian physics and similar macro-transformations, the formulation of the rules is simple and the computations are tractable, so it’s unlikely that we need further to optimize them.
Reviewing the remainder, we see several classes of phenomena that are not predicted by the broad rules of physical science.
– One class of phenomena is the "looks the same, but unexplained predictable deviation from a known rule". The future can be predicted for these items by tagging them and then applying the class-specific rules to them. Based on our past experience, these items usually do have a physics-based explanation for the deviation in behavior, but the class exists because of the limitations of our measurement and recording capability.
– Another observed phenomenon is that the "model doesn't transitively work between pasts". The simplest version is that for past x, applying the model yields the correct statement of past x+1, but not of pasts x+2, x+3, etc. Hence we begin to lose the ability to predict at arbitrarily long intervals. That could be because the inputs/facts at past x, when evaluated through an otherwise sufficient model, are not sufficiently related/accurate with respect to past x+1 to allow past x+2 to use the same model. Another possibility is that the model introduces errors/has insufficient precision in its prediction of past x+1 that carry over into the inputs for past x+2 and make it invalid (vs. a re-measurement at past x+1, which would hold). (A minimal sketch of this error propagation follows this list.)
– Another version of "model doesn't transitively work between pasts" is when past x and past x+n, for n sufficiently large, correlate, but pasts x+1 through x+n-1 may not. Effectively, the facts/CEPs/inputs for past x+n are those accumulated from past x through past x+n-1. Hence the model, instead of operating instantaneously, only operates over an accumulation of inputs to predict past x+n. There are a wide variety of real-life reasons why this might be so: physical accumulation of objects subject to a large, irregular decay function; multiple highly relevant inputs that are uncorrelated but of which one on average dominates; or even a simple random or irregular-periodicity process (such as a coin flip) being repeated enough times that an event winds up occurring.
– The final generalized version, before an effectively random outcome, is that certain facts or their count/frequency leading up to past x+n can be systematically placed to some bounded precision, but only via some complicated, arbitrary function. There are several possible reasons, even given correct and complete inputs: there is an accumulator/irregular process which does build over time, but its accumulation/trigger is small relative to the decays/interference it experiences; the inputs feed a stochastic probability in the model, effectively a decay function (see the note on the objective materialist view below for further explanation); or the pasts x+1, x+2… are operating on each other, and the fact content of each past is not pre-determined (e.g. it could be stochastic in each step) by application of models using inputs from past x; in other words, the model error is complicated enough that its propagation has to be corrected or adjusted using that complicated function. However, in my experience, the most likely reason why this behavior is seen is incomplete or wrong input data.
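A minimal sketch of the first form of non-transitivity: a model whose single-step prediction is nearly perfect accumulates error when its own output is fed back in as the input for the next past, whereas predicting one step from a re-measurement keeps the error constant. The linear "true" process and the 1% bias are arbitrary assumptions chosen only to show the effect.

    # Minimal sketch: a nearly correct one-step model loses accuracy when its own
    # output is fed back as the input for the next past, while predicting one step
    # from a re-measurement keeps the error constant.
    def true_step(x: float) -> float:
        return x + 1.0                     # the actual transition between pasts

    def model_step(x: float) -> float:
        return x + 1.01                    # an almost-correct model with a small bias

    state_true, state_model = 0.0, 0.0
    for n in range(1, 11):
        state_true = true_step(state_true)
        state_model = model_step(state_model)             # error compounds across pasts
        one_step_from_remeasure = model_step(state_true)  # predict the next past from a fresh measurement
        print(n,
              round(abs(state_model - state_true), 3),                         # grows with n
              round(abs(one_step_from_remeasure - true_step(state_true)), 3))  # stays at 0.01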

The objective materialist view of the situation is somewhat less complicated, as the supposition of objects that exist independent of the observer, that are perceived in a somewhat predictable way, and with which other observers can interact, supplies a physical ruleset that bounds the transition space. So, whether or not the model or measurements/facts/observations are accurate, ultimately past x+1 is a more or less deterministic function of the inputs and past x. In that case, any limitations in predictive power are due either to limited facts or to imperfect models. Hence the prediction decomposition as previously outlined more than suffices, as critical estimation parameters somewhat aggregate the concepts of incomplete/erroneous/ambiguous/loosely-correlated input data.
In some such views, randomly determined phenomena exist (or at least cannot effectively be measured); therefore some aspects of past x+1 could be randomly determined, and that deviation, along with any such deviations in pasts x+2, x+3…, will propagate through past x+n. The structure of the randomly determined phenomena in these views effectively defines a complicated model-error function, which is one of the possible reasons listed above, in the metaphysically correct view, for a low but non-random prediction accuracy.

The preference for simplicity in these models, and for forcing the use of the highest-correlation model template, is justified by the added value of simplicity of explanation and reliability of computation, both of which reduce work and therefore are more valuable than a more complicated model template.


Low-Knowledge/Zero-Knowledge Situations

The optimal prediction approach is to reject any attempt immediately to predict such phenomena, and to admit the limitation of prediction technique (agnosticism) in these cases. At the same time, we also admit that the count of unexpected phenomena increases over time, and that the occurrence/recognition of such phenomena drastically increases when measurement quantity and precision increase. Hence as a side note, we also predict that improvements in measurement will improve performance in prediction for these cases.

Explanation:

First it is necessary further to clarify what it means for a low-knowledge/zero-knowledge situation to exist. If we had a roughly reliable means of eliciting or observing the phenomenon, then we could not say we had low knowledge in this sense. Likewise, no one ever had built anything remotely resembling an atomic bomb before the Age of Industry, so the existence of the phenomenon in a rigorous way never was proven (the sun does seem to operate according to some of these same principles, but no one ever launched a satellite then to have any confirmation of such speculation, either). Yet the zero-knowledge phenomenon still did exist, although it never had been exercised by humans. Religious miracles are a low-knowledge phenomenon. There’s no reliable way of eliciting or observing them, yet they do seem to crop up from time to time, at least that’s what’s claimed; also there intermittently are events that can be interpreted as religious miracles for which there is no reliable scientific explanation, such as miraculous recovery from disease.

Consider the atomic bomb. Clearly there is a regular principle behind it, it has a very observable effect of high magnitude that influences our decision tree, and now the physics are recognized as reliable. Dozens of devices were detonated and some documentation of that testing is available. However, measurement limitations, and therefore the inability to distinguish the true science of the atomic bomb from other beliefs, made it impossible to have predicted the emergence of the atomic bomb in the past without also predicting a massive number of events and situations that never happened.

Consider how one calculates the yield of an atomic bomb; a significant yield is the true demonstration of the significance of the phenomenon, otherwise it would be a firecracker and could be ignored. Massively oversimplifying, it has to do with the enrichment level (i.e. the proportion of a certain type of atom) of the device. Say you are handed a device that you are told is an atomic bomb. How would you determine the yield? Ostensibly you would run the material through a centrifuge, run a scanner to catch stray particles, etc. You would be applying a number of scientific techniques that only came to exist around the time that the atomic theory that led to the bomb was being explored. What happens if you don't have those available? You might try a proxy such as weighing the material; but that presupposes that you actually knew the weights that would be indicated, which presupposes you know enough atomic theory to rebuild this device. What happens if you don't know the atomic theory, and are simply told this is a massively destructive weapon? How would you know? This is the limitation of measurements; without them you have no way of predicting the behavior of a given device without actually detonating it. It could have yield values from zero all the way up to the enrichment maximum, which, without atomic theory, you wouldn't even know; so from your perspective it could be anything. In history, this was a problem the WWII Germans ran into when they attempted to develop an atomic bomb: they massively overestimated the amount of fissile material required to make a useful device. Of course they were surprised to see that the US Manhattan Project team got one to work.

Consider another angle of this argument: let’s say someone walks up to you with something that looks like a fully-cased and armed military bomb and claims it is a nuclear fusion device. You only see the casing. How do you know if it is a
– Fusion device
– Fission device
– Fuel-air explosive
– High explosive
– Training device
– Demilitarized ordnance
– Just a fancy case used as a time capsule
In this example, you have no idea, because you have no way to take the measurements (such as tearing apart the case) that would confirm one possibility or another. If you try to guess amongst these possibilities, you will predict incorrectly most of the time.

Further developing the previous argument, let’s say you don’t know what the types of military bomb are. To venture a guess, you would be pulling concepts out of the air and attempting to synthesize them into a concept of how this particular device, if it’s even a military bomb, might work. The number of ways you could mix and match those concepts is extremely large (easily millions of different combinations), so your predictions would be even more laughably incorrect. Even though at least the Greeks actually had contemplated a theory of atoms thousands of years prior to the Atomic Age, because of their limited measurement means, they had no way of bounding their speculation to a number of possible configurations that could have come anywhere near the concept of the atomic bomb. In particular, there was no immediately obvious test that they could have run even to determine which of these configurations were the most likely.

The situation becomes even more complicated when there may be no measurably constructive hypothesis. Consider the Millerites and at least almost all other messianic/millenarian movements. Many people have prophesied the return of a prophet or other massively significant religious figure, but to our knowledge very few to none actually appeared. Consequently at least almost all the futures predicted by all these people turned out to be completely erroneous. Furthermore, many of these claims are poorly bounded and can apply to many possible future events, so any claim that might actually have been satisfied by a religious miracle or the return of one prophet or another itself becomes indicative of weakly-correlated knowledge at best. The claim of an atomic-bomb potential is only one claim out of many, many billions of prospective claims, some of which (mainly those following the higher-correlation model templates above) are true, but many of which are wrong; in particular, for low-knowledge/zero-knowledge claims, most are wrong, and many hugely wrong. Even the atomic scientists made many mistakes and had many limitations in their tentative theories.

When we consider how this maps onto a correct metaphysical view, seeing the emergence of facts related to atomic behavior basically coming out of nowhere 100-200 years (or whatever you like) prior to the first detonation, it might be tempting to place a predecessor or promulgate some rule to allow the atomic bomb to be predicted; this would improve the overall prediction accuracy. However, that is historically incorrect: without the measurement capability, there certainly are no available facts/CEPs (note the CEPs could not be reasonably bounded) sufficient to the precision of a model (even if the model were precisely known at the time) that would determine the existence of the atomic-bomb potential without also predicting a bunch of other things that didn't happen. When we consider prediction from the present, we must also consider the limitations of our own measurements, and hence we are in a situation similar to the Greeks' with regard to any such new phenomena.
In other words, in a metaphysical sense, you would be predicting phenomena from no sufficient facts or CEPs, implying that a correct prediction would be only a function of the model. However, that also clearly is an error with respect to the prediction of past x+n, because the uranium always existed and the chemistry/atomic physics seems to have been constant. As such, if the situation at past x+n (1800) led, via the mere existence of the model, to past x+n+1 (1945) with a bomb, then all pasts x, x+1, x+2… would also have led to an atomic bomb.

From an economically efficient decision-tree perspective, accepting the large numbers of false predictions would massively change the decision tree from what turned out to be optimal.
Note for reference, the objective materialist view of the time was even blinder to this type of prediction, since without a measurement, it’s assumed that there is no object, and intuitively, space separates objects.


When Is A Prediction Spreadsheet Complete? – Adjustments From The Ideal To Economically Optimal Decision Trees

Even though a prediction spreadsheet may be internally self-consistent and verified, how do we know that it suffices for the purpose of making a decision – that is, when we exclude things from it, how do we bound the impact of those exclusions on our realized value?

There are five prediction-related knowledge concepts this text will consider; each one of these concepts represents a necessary concession or trade between prediction accuracy, the work to perform the prediction, and the overall value derived from the activities of prediction:
– The full prediction of every future fact (the ideal of information completeness)
– The full prediction of every knowable future fact (reduction of the ideal based on limitations of prediction and measurement)
– The correct decision tree according to an individual’s values, which does not require the prediction of every future fact (the value-maximization of prediction, considering the economic value associated with the work involved to obtain perfectly accurate predictions, with the assumption that the intellectual and/or aesthetic value of obtaining a better prediction is less than the value associated with avoiding economic costs of unneeded prediction work)
– The (possibly multiple, and possibly ambiguous) economically efficient decision tree that corresponds to an individual's value-maximizing activity, even if the decision tree is not correct in every case. Some decisions (and the predictions on which they are based) may be sub-optimized because the value lost in the work to obtain correct decision flows exceeds the value gained by making those more correct decisions.
– The non-optimal decision trees, which successively approximate the economically efficient decision trees according to the amount of effort expended in their construction
Of course there are blends of these, but interpolation accurately characterizes them.

As agents assumed to have some sort of free will and also with the philosophical objective to maximize value, of course the economically efficient decision tree is the best. So how does one know whether the economically efficient decision tree has been obtained, and if not, roughly to what extent does a given tree approximate an economically efficient or correct tree?

To illustrate the issues:
Of course, the very first adjustment from ideal that everyone makes is to discard predictions that don’t seem to relate to one’s own life or business in any way, or for which there are very clear economic feasibility bounds.
Common but important ones are:
– The "secretary's dilemma" (the classic secretary problem) or satisficing behavior when choosing a mate
– Reaching a stopping point when contemplating optimal investments
– Various heuristics when deciding what political candidates to support or what policies are important or not important, given the size of law codes
Even at this point, already we have difficulties:
– The set of futures for the mate, based on our historical knowledge, spans the entire spectrum of the possible decisions, even with some care given to the selection.
– Often the optimal investments are small companies that turn into very large ones, hence these are the ones most likely to be missed when reaching an evaluation stopping point. Furthermore, a short position is difficult exactly to time, so knowledge of overall economic activity (mostly driven by large and mid-size companies) doesn't benefit a short as much as it benefits a lucrative long position. Moreover, as markets become more efficient, short-run movements become more and more significant to overall returns, but that increased decision volume vs. decreased reward per decision imposes a more restrictive economic feasibility constraint.
– Evaluating the trustworthiness and decision accuracy of an individual candidate is fundamentally difficult, and it gets harder as the decision gets more important or more sensitive to manipulation. Hence assessing the true effectiveness of the decision (vs. merely acting according to principles) becomes a difficult problem, e.g. because of issues in obtaining accurate counterfactuals.

If we more or less knew the probability spectra of the sets of futures that could result from a given decision-making approach (see notes below), then an ordering criterion follows by ranking those sets of futures, and hence you can state that a given spreadsheet considers all alternatives whose prediction work falls below a known cutoff based on the added value of those futures (and any challenge to its completeness can proceed from the assertion that there is a more optimal result); a minimal sketch of such an ordering follows this paragraph. Particularly in the case of the "secretary's dilemma", the distinctions between the sets of futures become relatively harder to discern, yet there still is high volatility in the result. That is, making spontaneous decisions underperforms more methodical approaches, but once you start to put an arranged-marriage system against a delayed-decision love-marriage system, the difference in the performance of the two isn't clearly distinguishable. Yet there is a large amount of value to be gained in improving these decisions, and over the course of human history we also have seen variation over time and as the ostensible function of external influences such as economic hardship and royal political alliances.
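A minimal sketch of such an ordering: rank decision alternatives by the expected value of their sets of futures, and treat the spreadsheet as complete once every alternative whose prediction work falls under the cutoff has been considered. The alternatives, probabilities, values, and work costs below are hypothetical.

    # Minimal sketch: rank alternatives by the expected value of their sets of
    # futures, and keep only those whose prediction work falls under the cutoff.
    # The alternatives, probabilities, values, and work costs are hypothetical.
    from typing import Dict, List, Tuple

    def expected_value(futures: List[Tuple[float, float]]) -> float:
        # futures: (probability, value) pairs for one alternative's set of futures
        return sum(p * v for p, v in futures)

    def considered_alternatives(alternatives: Dict[str, dict], work_cutoff: float) -> List[str]:
        # any challenge to completeness must name an excluded alternative whose
        # added value would justify prediction work above this cutoff
        kept = [(name, expected_value(a["futures"]) - a["prediction_work"])
                for name, a in alternatives.items() if a["prediction_work"] <= work_cutoff]
        return [name for name, net in sorted(kept, key=lambda t: t[1], reverse=True)]

    alternatives = {
        "research deeply": {"futures": [(0.6, 10.0), (0.4, 2.0)], "prediction_work": 3.0},
        "quick heuristic": {"futures": [(0.5, 8.0), (0.5, 1.0)], "prediction_work": 0.5},
    }
    print(considered_alternatives(alternatives, work_cutoff=2.0))  # only the cheap alternative survives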

If we considered the matter as a function of the incompleteness of the spreadsheet, then we would find deficiency in:
– The facts
– The critical estimation parameters (not really an issue as you will see)
– The models
– Maybe the measurements of the outcomes of the past are deficient, although this is not the focus of the current discussion since you ostensibly know from the past states that you have this problem

The CEPs exist to simplify the understanding of the future outcomes, by providing a bounded uncertainty or a discrete set of resolutions that allow the futures to be resolved using the facts and the model in question. Hence, although the refinement of the CEP bounds might pose an economic obstacle, the CEP usually expresses something you can't feasibly measure to better bounds: that's why you would have created it in the first place. So, likely there isn't economically feasible improvement to be had from improved CEPs as such.
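A minimal sketch of a CEP used this way: the parameter is carried only as a bounded interval, and the bounds are pushed through the model (assumed monotone here) to bound the predicted future, instead of spending effort trying to measure the parameter itself more tightly. The example model and numbers are hypothetical.

    # Minimal sketch: a CEP carried as a bounded interval, with the bounds pushed
    # through a (monotone, hypothetical) model to bound the predicted future,
    # instead of spending effort measuring the parameter itself more tightly.
    from typing import Callable, Tuple

    def bound_future(model: Callable[[float, float], float],
                     fact: float, cep_bounds: Tuple[float, float]) -> Tuple[float, float]:
        lo, hi = cep_bounds
        outputs = (model(fact, lo), model(fact, hi))   # evaluate at both ends of the CEP bound
        return min(outputs), max(outputs)

    # hypothetical usage: a measured fact of 100.0 and a CEP known only to lie in [0.2, 0.5]
    future_low, future_high = bound_future(lambda f, c: f * (1.0 + c), 100.0, (0.2, 0.5))
    print(future_low, future_high)   # 120.0 150.0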

The decision tree is defined by the values of the facts (and CEPs) as transformed by the model into the futures. Hence, to say that the economic improvement of a decision is infeasible is to say that the improvement gained in either facts or models is outweighed by the additional work required to measure those facts or to run/discover those models. Here are some situations which might allow such an assessment:
– The assertion that the spreadsheet approach(es) is economically efficient, with a demonstration that additional facts/better models also have been employed in the past in roughly similar situations, but that the results did not sufficiently improve.
– That measuring additional facts to the level required by the model (assuming the model is correct) will not improve the balance, because the measurements are too costly, relative to the accuracy gained, to yield a sufficiently more favorable future prediction.
– That we know of better models than the ones used in the spreadsheet, but either they are infeasible to run at all, or the possible value to be gained from using them, as indicated by the suboptimality of the currently predicted future vs. the ideal state, is not justified by the expense of using them.
– That we know what error function is being propagated, and that this error function has some systematic bias or tendency towards some set of futures; and that regardless of which futures are predicted at either side of that error bound, the correct decisions for a given value system won't change (obviating further work, by sensitivity analysis; a minimal sketch follows this list).
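A minimal sketch of the sensitivity-analysis shortcut in the last item: if the decision chosen under a given value system is the same at both ends of the propagated error bound, further refinement of the prediction is not economically justified. The decision function and the rain-forecast numbers are hypothetical.

    # Minimal sketch of the sensitivity-analysis shortcut: if the chosen action is
    # the same at both ends of the propagated error bound, stop refining the
    # prediction. The decision function and forecast numbers are hypothetical.
    from typing import Callable, Dict

    def decision_is_insensitive(decide: Callable[[Dict[str, float]], str],
                                future_low: Dict[str, float],
                                future_high: Dict[str, float]) -> bool:
        # decide() maps a predicted future to the action chosen under a value system
        return decide(future_low) == decide(future_high)

    choose = lambda f: "umbrella" if f["rain_mm"] > 1.0 else "no umbrella"
    if decision_is_insensitive(choose, {"rain_mm": 5.0}, {"rain_mm": 12.0}):
        print("decision unchanged across the error bound; no further prediction work needed")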

There are some other common choices that exhibit a more clearly bounded set of futures, hence easier to quantify the decision optimality:
– Staying in a particular house, job, or other situation where it’s slightly suboptimal but the work and risk to change is high relative to the suboptimality
– Choosing to pay more money and get less quality on average for goods and services when the cost of research for optimality is high relative to the anticipated waste/suboptimality
– Choosing to abandon certain hobbies or pursuits because other opportunities are more lucrative, hence reducing the number of decisions in the tree


A Few Examples To Demonstrate The Correctness Of The Approach

Consider that the future will not be roughly like the present or past. If that were the case, then we would roughly invert the present, moving the mountains and seas, ruining all the cities and building new ones of a different character, having children that look and behave differently from our current ones…even on a partially random basis, is that really the case? Is that what we’ve seen? Or do we see that in situations of radical change other than of human behavior, large changes may occur, but are gradual?

Consider also the case of hysteresis and fatigue/failure. One would think this were a counterexample to the claim that the future is roughly like the present or past, since the result of the (x+1)-th operation on object y is a different outcome than that of operations x, x-1, etc.
This theory posits several decision rules that result in the prediction of eventual failure, keeping the past, present, and future consistent:
– The incorporation of phenomena that were unknown and unmeasured at the time of prediction, yet could be measured after the fact. In this case the physics of the matter would posit a consistency with the previous state of object y that makes the past like the present, and that relationship holds across a wide range of different situations/phenomena/experiments.
– Explicit recognition that unmeasured variables exist and contribute to the result, and that there may be no way to back out a seemingly random result to an unambiguous concept of initial states. In other words, the theory does not assert complete prediction in every case, unless there is a measurement technique to the initial states sufficient to the task (most visibly seen in human behavior).
– The past has many examples of such fatigue/failure. Hence those would be incorporated into the decision rules (e.g. mechanical engineering), and so, to the extent that the sub-theories of physics, materials science, etc. can predict future events based on past phenomena, the decision rules would at least likely propose the different outcome of the (x+1)-th operation for the future in question.
– To generalize this statement, only an irregular phenomenon with no clear trend in the past could be considered a counterexample. However, due to advances in science and more experimentation, we have seen that phenomena that were irregular with no clear trend in the past turned out to have relatively clear and predictable behaviors, based on states that newly became measurable thanks to advances in technology. Hence there is a decision rule in this theory stating that irregular phenomena of this sort are at least highly unlikely, and that most likely there is some sort of state/decision-rule space where there could at least be a probability-cloud-type answer. First, from the preceding description, and based upon the very large amounts of future state correctly predicted by the theory, such a phenomenon must not have much impact on other phenomena (otherwise the mountains would move) and, by exclusion, could only account for a few future states. So the theory is 99%+ correct in a state-space accounting, and in an "is this rule correct" accounting it is easily 80-90% correct. Such an irregular phenomenon also could not be considered repeatable, otherwise it could be covered by the decision rules of the theory. Because it is irregular and not repeatable, it is difficult to categorize or bound, because it seems to behave or manifest differently in every case. Hence the influence of the phenomenon winds up being represented as a small-percentage error in the theory's future state space: it has to be attributed to something measurable, yet it cannot be isolated to anything that might really be driving it, so it is modeled as small rates of random failure (a minimal sketch of this follows the list).
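A minimal sketch of that last representation: the irregular, non-repeatable residual is carried as a small random failure rate layered over otherwise deterministic predictions. The 0.5% rate and the function names are arbitrary assumptions.

    # Minimal sketch: an irregular, non-repeatable residual that cannot be isolated
    # to a cause is represented as a small random failure rate layered over
    # otherwise deterministic predictions. The 0.5% rate is an arbitrary assumption.
    import random

    def predict_with_residual(deterministic_outcome: str,
                              failure_rate: float = 0.005,
                              rng: random.Random = random.Random(0)) -> str:
        # most of the state space follows the deterministic rules; a small fraction
        # is attributed to the unmodeled irregular phenomenon
        return "unexpected failure" if rng.random() < failure_rate else deterministic_outcome

    outcomes = [predict_with_residual("operates as designed") for _ in range(1000)]
    print(outcomes.count("unexpected failure"), "unexpected failures out of 1000 predictions")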

This theory gives insight into why climate science is so difficult to get right. Climate scientists try to figure out the average temperature decades from now, from a fundamental training data set measured over roughly fifteen decades, with a long-term trend that hasn't decreased for long periods, with the inability to precisely isolate every one of the numerous contributing variables assumed by the related physics, and without the computing power even fully to compute on the measurements that are available according to the models they have worked up to date, which themselves diverge from each other based on the quality and character of the specific measurement precisions. Consequently they have too many possible solutions/model results to their problem, especially since they must incorporate error bars and multiple scenarios to account for phenomena that skew the year-to-year results, which they already acknowledge. Because there are too many admissible solutions with no obvious way to eliminate the ambiguity, they must pick some, effectively making their statements a partially random function.

Likewise, social sciences such as macroeconomics face these same types of issues, plus they deal with people, whose actions are fundamentally hard to predict. The macroeconomist at least gets the benefit of some more bounding assumptions and semi-rationality on the part of the people in his domain.