There is an emerging problem within the scientific community – its rapid growth, together with the advancement of technology, increases information flow to unbearable levels. Under these conditions the existing system for research evaluation is becoming more and more obsolete, creating a bottleneck in the “knowledge production pipeline” and resulting in the loss of a considerable amount of data. To illustrate this point, “knowledge production” can be formalized as follows:
1) Science is a Machine operating the “knowledge production pipeline”. It extracts, filters and refines information from the Universe.
2) The end product of the pipeline is structured Knowledge, ready to be “consumed” by society, industry or by the Science Machine itself (knowledge reuse creates a positive feedback loop).
3) The pipeline consists of four operational segments, represented by the corresponding sets of tools: extraction of information, documentation, evaluation and knowledge dissemination.
The main bottleneck arises in the research evaluation segment. Scientists (the operators of the pipeline) still perform this process manually. Increasing automation of the preceding “extraction machinery” and the shift to electronic “documentation machinery” create a growing disproportion between the number of operators and the amount of information to be handled. In other words, while we keep perfecting the tools for knowledge extraction (thus increasing the information flow), the machinery required for knowledge evaluation is worn out and losing its ability to handle this flow.
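The disproportion described above can be sketched as a toy queueing model (an illustrative assumption, not a model given in the text, and all numbers are made up): if extraction output grows a little every year while manual evaluation capacity stays flat, the backlog of unevaluated results eventually grows without bound.

```python
# Toy sketch of the evaluation bottleneck: extraction output grows
# each year, manual evaluation capacity stays constant, so the
# unevaluated backlog accumulates. All numbers are illustrative.

def backlog_over_time(years, extracted0, growth, eval_capacity):
    """Return the unevaluated backlog at the end of each year."""
    backlog = 0.0
    extracted = extracted0
    history = []
    for _ in range(years):
        backlog += extracted                    # new findings enter the queue
        backlog -= min(backlog, eval_capacity)  # operators evaluate what they can
        history.append(backlog)
        extracted *= growth                     # extraction tooling keeps improving
    return history

# Capacity (120/yr) initially exceeds output (100/yr, growing 10%/yr),
# yet within a decade the backlog is several years of capacity deep.
history = backlog_over_time(years=10, extracted0=100, growth=1.1,
                            eval_capacity=120)
print(history)
```

The point of the sketch is only that any fixed evaluation capacity is eventually outrun by exponentially improving extraction, no matter how generous the initial margin.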
Another problem is the inefficiency of existing documentation tools – paper laboratory journals and local electronic data stores provide very limited access. In fact, this problem also points to the obsoleteness of science evaluation practices: since no credit is given for knowledge that bypasses the peer-reviewed publishing process, scientists lack the incentive to make negative results and “raw” data publicly available.
So essentially everything boils down to the necessity of delivering new metrics for research activities and partially automating the evaluation process. This does not mean eliminating peer review, but making it more efficient.
And, above all else, academia has to start giving credit to researchers who perform peer review and make its results publicly available, thereby encouraging this type of scientific activity and removing bias and injustice from the process.
~ ~ ~
- Mechanisms for citation are the key to changing the academic credit culture
- Peer review: the myth of the noble scientist
- Giving credit, filtering, and blogs versus traditional research papers
- Rewards, output and academia
- Experimenting with Peer-Review
~ ~ ~ ~ ~