Data are produced by sampling into discrete and countable things a world which is continuous, multiple, and ultimately more nuanced than any possible representation. To sample is to essentialize in some way. It is to turn what is temporal and contingent into something static, categorical, and unchanging.
Thus to sample the world into data both adds something and takes something away. Erasure: we lose the play in phenomena, the multiplicity, the ambiguity and ambivalence. Excess: who has performed the sampling, with what technology, under what schemata, how and where is it represented and stored, and how has this data been imparted validity? How much is it worth? There is no datum, only the dataset.
Data always speaks of an institution; it is given authority by the discourse of Science or of Industry. It is this that separates data from language — whereas the intersubjective concepts with which we grasp the world are constantly changing but can be changed by no one in particular, data is a private language we take on institutional faith and/or training.
Further, that which is countable is a special class of concept — it is also that which is computable, that which can be parsed and calculated. Machine interpretation is second-order interpretation via code. Where do we go from here?
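As a toy sketch of this double movement — erasure and excess in one gesture — consider quantizing a continuous function into a dataset. The function, level count, and schema label here are all hypothetical illustrations, not anything proposed in the text: quantization discards the continuous value (erasure), while each record accrues schema metadata it never asked for (excess).

```python
import math

def sample(f, n, levels):
    """Sample a continuous function f on [0, 1] at n points,
    quantized to a fixed number of discrete levels."""
    data = []
    for i in range(n):
        t = i / (n - 1)
        # Erasure: the continuous value is snapped to one of `levels` bins.
        q = round(f(t) * (levels - 1)) / (levels - 1)
        # Excess: every record carries the institutional schema that made it.
        data.append({"t": t, "value": q, "schema": f"uint-{levels}"})
    return data

dataset = sample(math.sin, 5, 4)
```

Each element of `dataset` is now static and categorical — there is no datum here apart from the schema that frames it.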
At a certain point, numbers begin to flow again. Colliding schemata become qualitative — surrounded by data, we cannot help but de-sample, intuit, visceralize. Let’s play with that visceralization.