To understand how something complex works, one has two main tools: the mechanistic and the holistic approach. The mechanistic approach assumes that something can be understood by breaking it into parts (aka analysis) and then by combining the parts to form the whole (aka synthesis). However, this approach doesn't always account for everything, as there is behavior and there are characteristics not explainable by the parts themselves. Considering that the whole is more than its parts, the holistic approach studies the interactions between the parts that lead to such unexpected effects (aka synergies), the challenge being to identify the characteristics, circumstances or conditions that lead to or are related to these effects. Thus, when these two tools are combined over multiple iterations, one can get closer to the essence.
When breaking things into parts, we first need to look at the thing or object of study from a bird's-eye view and identify what might look like parts. Even if the object of study looks amorphous, experience coupled with intuition and perseverance can offer a starting point, and from there one can iteratively take things apart until one decides to stop. When and where one stops is a question of the depth that is possible, as allowed by the object itself, by the techniques available and by our grasp of them, and of the depth that is intended, the level chosen for approximation.
Between the whole and the lowest perceived components, one has the luxury of experimenting by breaking things apart (physically and/or mentally) and putting things together to form unitary parts, parts that typically explain one or more functions or characteristics, or even the whole. In addition, one can play with the object, consider it in a range of contexts, extrapolate its characteristics, and identify behavior not explainable by the parts themselves. In the process one arrives at a set of facts (things known or proved to be true) and suppositions, expressed as beliefs (things held as true without proof), assumptions (things accepted as true without proof) or hypotheses (things whose truth value is not known, typically because of limited evidence).
One thus builds a (mental) model, an abstraction of the object of study. The parts and the relations existing between them form the skeleton of the model, while the facts and suppositions attempt to give the model form. Unfortunately, models seldom accommodate all the facts, therefore what one ignores or includes in the model can make an important difference in whether the model is of any use. One is thus forced to advance theories on how the skeleton can accommodate the form, and how the form reflects the facts and suppositions.
Fortunately, simple models can prove to be useful, especially when they allow approximating the real thing within the considered context. However, the better the approximation one needs and/or the broader the context, the more complex the models can become, especially when the number of facts considered important increases. This means that two models or theories can each be useful or correct in their own contexts yet lose their applicability when considered in another context.
Having a repository of models to choose from is usually helpful, especially in understanding more about the object studied. The appropriate use of a model also depends on understanding its range of applicability within a context or across contexts, and the advantages and disadvantages of using it. Knowing when to use a model is as important as knowing when not to use it, while understanding the magnitude of the error associated with a model can make us aware of the risks of the model and of decisions made based on it.
Originally published at http://the-web-of-knowledge.blogspot.com.