When To Use Experiments/History Instead of Theory/Prediction Techniques

The most important consideration is the cost of the experiment (including failure cost) vs. the benefit of applying the available theory to the available knowledge to get closer to a right answer on the first few tries. The optimal approach winds up being:
– For physical problems, theory probably should be applied if the predictions of the theory are not heavily defined by the experiments.
– Furthermore, if the theory is heavily defined by the experiments, then learning the theory doesn’t help much anyway. When solving a particular problem under particular conditions, it may therefore make sense to do the experiments even if that is slightly suboptimal, because the cost of learning an extremely complicated theory that may not produce the most accurate answer is a poor gamble against the fixed and typically knowable, bounded cost of performing experiments.
– For problems about the utility of the self, theoretical predictions beyond the basics aren’t highly accurate, so experiment unless the items in question are fairly valuable and subject to loss (primarily health).
– For problems in dealing with others, only the non-routine behavior of certain highly rational individuals will be knowable to the level of applying a relatively concise set of decision rules, and your own access to that knowledge is likely to be highly limited. Consequently, in any doubtful situation where the individual is not displaying a routine pattern of behavior, experimentation is the only practical option.


Explanation:

What does it mean for “a theory to be heavily defined by the experiments”? For a prediction technique/theory, there is a set of all predicted outcomes given the initial conditions plugged into the theory, along with their respective probabilities. When a theory is heavily defined by the experiments, the experiments that humans have performed to date and recorded in history books are a large proportion of that set, and they tend to represent the deviations or changes in slope or characteristic that prevent the theory from having a simple mathematical form, such as a linear or step-function/transition relationship to the initial conditions. From a mathematical perspective, you would say that the known range (results) of the prediction function is closely approximated by an interpolation of the data points of the experiments to date, and that the prediction function is complicated and irregular. From a historically empirical perspective, attempts to extrapolate from previous points have often failed in such theories, so the probability of an experiment having the predicted outcome outside the previously experimented bounds is relatively low.
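A minimal sketch of this idea, with invented numbers and a hypothetical irregular outcome function (nothing below comes from the essay itself): treat the recorded experiments as data points, approximate the “theory” by interpolating them, and compare prediction error inside and outside the experimented range.

# Sketch (all values hypothetical): a theory heavily defined by experiments
# behaves like an interpolation over the recorded data points. Inside the
# experimented range the interpolation tracks the irregular true outcome
# well; outside that range, naive extrapolation fails.
import numpy as np

def true_outcome(x):
    # Hypothetical irregular outcome function with no simple closed form.
    return np.sin(3 * x) + 0.5 * np.tanh(5 * (x - 2)) + 0.1 * x ** 2

# "Experiments to date" only cover x in [0, 2].
experiment_x = np.linspace(0.0, 2.0, 15)
experiment_y = true_outcome(experiment_x)

def predict(x):
    # Piecewise-linear interpolation of the recorded experiments; np.interp
    # clamps to the nearest endpoint outside the sampled range, standing in
    # for naive extrapolation beyond the experimented bounds.
    return np.interp(x, experiment_x, experiment_y)

inside = np.linspace(0.1, 1.9, 50)    # within previously experimented bounds
outside = np.linspace(2.1, 3.0, 50)   # beyond those bounds

print("mean error inside experimented range :",
      np.mean(np.abs(predict(inside) - true_outcome(inside))))
print("mean error outside experimented range:",
      np.mean(np.abs(predict(outside) - true_outcome(outside))))

Running this, the error inside the experimented range is small, while the error beyond it is an order of magnitude or more larger, which is the sense in which extrapolation from previous points tends to fail for such theories.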

Because a theory that is heavily defined by the experiments can be largely approximated by knowledge of those experiments, there is a question of whether it is easier to learn the complicated theory or to learn the particular experiments that define the theory over the particular set of problems you care about. If the subset of known experiments in that range is small and the cost of the experiments is relatively low, then performing or even re-doing the experiments will be the most practical course, especially if the computational costs of running the prediction techniques are high.
With my limited knowledge, I believe that the Newtonian physics of macroscopic bodies is an area where theory makes sense to apply, while applied chemistry is an area where the predictions of specific interactions in particular situations are heavily defined by the experiments, even though there are many general (and often computationally infeasible) theories, at the macro scale and in quantum mechanics, that provide rules for the interactions.
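The cost comparison above can be made concrete with a rough expected-cost sketch. All of the figures below are invented for illustration; only the shape of the comparison matters, not the numbers.

# Rough sketch with made-up numbers: expected cost of learning and applying a
# complicated theory vs. simply performing the experiments for the problems
# at hand. None of these figures come from the essay.

def expected_cost_theory(learning_cost, compute_cost_per_use, uses,
                         p_wrong, failure_cost):
    # Theory route: pay once to learn it, pay to compute each prediction,
    # and absorb the failure cost whenever the prediction is wrong anyway.
    return learning_cost + uses * (compute_cost_per_use + p_wrong * failure_cost)

def expected_cost_experiment(cost_per_experiment, uses):
    # Experiment route: a fixed, knowable, bounded cost per trial.
    return uses * cost_per_experiment

theory = expected_cost_theory(learning_cost=200, compute_cost_per_use=5,
                              uses=10, p_wrong=0.3, failure_cost=50)
experiment = expected_cost_experiment(cost_per_experiment=20, uses=10)
print("expected cost, theory route    :", theory)      # 200 + 10 * (5 + 15) = 400
print("expected cost, experiment route:", experiment)  # 10 * 20 = 200

With these particular hypothetical numbers the experiments win; with cheap computation, a low error rate, or many repeated uses, the theory route would win instead, which is the tradeoff the bullet list at the top is pointing at.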

For the question of how good you are at performing certain actions with your own body and skill, almost any physical assessment of technique or knowledge is very cheap and easy to perform, no theory needed, and the cost of developing a theory would far exceed the experimental cost. The same dynamic applies when medical treatments are performed, as the progress of a disease or condition varies (sometimes erratically) over time and with treatment, so even if the theory does correctly predict a certain course under the current conditions, the conditions will change with treatment. Further, in the case of conditions like cancer, a macro theory of the genetics and biochemistry involved requires so many data points across the body, and is so heavily data-intensive for an uncertain result, that monitoring tests still make economic sense even with personalized medicine, simply because not all data points will be sampled. As for matters of personal utility and preference, such as sexual orientation, food preference, and vocational enjoyment, there is no general theory or strong predictor for many of these items, so they have to be re-learned by each new person through experiment.

Every problem in dealing with the self, even with the assumption that the self is a rational decision maker seeking wisdom, is as hard or harder when it comes to predicting others’ behavior and reactions. You can’t readily determine their internal utility if they want to conceal it, and it takes far more effort, and may be infeasible, to gather the same data points about them that you would gather for yourself. Others’ routine behaviors are highly predictable, but there the experiments are the theory: there is no practical distinction between a theory that continuously predicts one result (showing up for work) and simply observing the individual show up for work for even a week. Predicting with any precision when a person will leave work, for example, is not possible across a mass of people without declared intentions, and even those intentions will be changed by circumstances that the person, with imperfect knowledge of the future, cannot foresee.