[Retrieved earlier notes from a small brown notebook. Ref. FK 3:568:] "... a plurality of (file) formats & metadata, meaning a proliferation of information nodes holding the 'same' content, would increase the probability of survival (security in numbers) ..." But what about the aggregation of data, then, and the necessary homogenization of data types, allowed values, backbone ontologies / controlled vocabularies, etc.? Will it nevertheless be possible to preserve the variation in content through this process, by means of an hour-glass model: a multiplicity of data and formats at the top (as input), aggregation & homogenization as the "waist", and a multiplicity of content nodes at the bottom (as output)?
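The hour-glass idea can be sketched in code. This is a minimal illustrative sketch, not anything described in the note itself: all names (`to_core`, `to_json_node`, `to_xml_node`, the field names) are hypothetical. Heterogeneous input records are homogenized into a minimal core schema (the "waist"), with non-core fields preserved in an `extras` slot so variation is not silently discarded, and each core record is then fanned out to multiple output nodes.

```python
# Illustrative "hour-glass" pipeline (all names hypothetical): many input
# shapes -> one minimal core schema (the waist) -> many output nodes.
import json
import xml.etree.ElementTree as ET


def to_core(record: dict) -> dict:
    """Waist: map any input record onto a minimal controlled vocabulary.
    Fields outside the core schema are kept under 'extras' so the
    variation in the original content survives homogenization."""
    core_keys = {"id", "title", "creator"}
    core = {k: record.get(k) for k in core_keys}
    core["extras"] = {k: v for k, v in record.items() if k not in core_keys}
    return core


def to_json_node(core: dict) -> str:
    """One output node: a JSON serialization of the core record."""
    return json.dumps(core, sort_keys=True)


def to_xml_node(core: dict) -> str:
    """Another output node: an XML serialization of the core fields."""
    root = ET.Element("record")
    for k in ("id", "title", "creator"):
        ET.SubElement(root, k).text = str(core.get(k))
    return ET.tostring(root, encoding="unicode")


# Top of the hour-glass: the "same" content in two different input shapes.
inputs = [
    {"id": "FK-3-568", "title": "Notebook entry", "creator": "FK", "ink": "brown"},
    {"id": "FK-3-568", "title": "Notebook entry", "creator": "FK", "format": "scan"},
]

# Waist and bottom: homogenize, then emit each record to several nodes.
for rec in inputs:
    core = to_core(rec)
    nodes = [to_json_node(core), to_xml_node(core)]
```

The design choice that matters here is the `extras` field: a waist that drops everything outside the core schema would maximize homogenization but destroy the plurality the note wants to preserve, whereas carrying the residue through lets each output node decide how much variation to re-expose.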