Abstracts 2012


July 1st:
Tutorial Day

Bibliometric Crash Course
Wolfgang Glänzel, Centre for R&D Monitoring (ECOOM), Katholieke Universiteit Leuven, Belgium / Juan Gorraiz, Bibliometrics Department, University of Vienna, Austria / Christian Gumpenberger, Bibliometrics Department, University of Vienna, Austria / Stefan Hornbostel, Institute for Research Information and Quality Assurance (iFQ), Germany / Sybille Hinze, Institute for Research Information and Quality Assurance (iFQ), Germany

Navigation, search and analysis features of the Web of Knowledge Platform
Malgorzata Krasowska, Business Development Manager, Thomson Reuters

This Thomson Reuters workshop on the Web of Knowledge platform will consist of three distinct parts: navigation, customization and the new search and analysis features of WOK 5.6 (launched May 6th 2012, with a focus on Web of Science and the BIOSIS Citation Index); the use and features of two additional WOK resources, EndNote Web and ResearcherID; and the search, interpretation and export of data from Journal Citation Reports.
In the first part of the workshop, we will explore multiple search techniques in the Web of Knowledge, navigating both the all-databases search and individual databases. Exercises will highlight the analysis features of each database, various author search options, enhancements of the “Marked List” option, the creation of citation reports, and the use of the cited reference search in Web of Science. We will review export options and look at the connections between Web of Science and ResearcherID, and between the WOK platform and EndNote Web.
The second part will present EndNote Web, a bibliographic management service. In a few exercises we will learn how to import data into an EndNote Web library, format bibliographies, and organize, share and manage references. We will also insert citations into texts using the Cite While You Write function and format Word documents in specific bibliographic styles.
Our final module will focus on Journal Citation Reports, a tool for evaluating scholarly journals in the sciences and social sciences. Besides general navigation features, we will explain the key metrics (among them the Impact Factor and the Five-Year Impact Factor), methods of finding journals in a specific category, ways of selecting them, and exporting the data to Excel.
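For orientation, the two-year Impact Factor that JCR reports for a journal in year y is the ratio of current citations to recent citable items (a standard definition, reproduced here for reference):

    \mathrm{IF}_y = \frac{C_y(y-1) + C_y(y-2)}{N_{y-1} + N_{y-2}}

where C_y(y-i) denotes the citations received in year y by items the journal published in year y-i, and N_{y-i} the number of citable items it published in year y-i; the Five-Year Impact Factor extends both sums over the five preceding years.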


SciVerse Scopus: introduction and walk-through demonstration
Jan-Albert Majoor, Account Development Manager Elsevier S&T

- Introduction to SciVerse Scopus
- Performing a Document Search
- Performing an Author Search and an Affiliation Search
- Citation Tracker and ‘Analyze Results’
- Working with ‘Journal Analytics’
- CRIS and More Information


July 2nd:
Introduction to Scientometrics: Theoretical and Practical Aspects
The Philosophical Foundations of (Biblio/Sciento/Infor)Metrics
Nicola De Bellis, Medical Library - University of Modena and Reggio Emilia, Italy

The lecture will provide a critical overview of the philosophical background to the current explosion of (biblio/sciento/infor)metrics in the science policy arena. The idea, stemming from 19th-century positivist theories, that social facts can be studied in an “objective” fashion by means of natural science methods will be followed through its three main scientometric ramifications: 1) John Desmond Bernal's advocacy of scientific methods in the administration of science, within the framework of a true revolution in the overall scientific information system; 2) Robert Merton's functionalist theory of science as a norm-driven, self-governing social institution, in which documents and the bibliographic links among them contain almost all that matters for the system to work properly; 3) Eugene Garfield's and Henry Small's theories of bibliographic citations as concept symbols, which turned the newborn Science Citation Index into the missing link between research quantity and quality as measured by bibliometric indicators of scientific performance. The key role played by Derek de Solla Price in setting up a paradigm binding together the empiricist, mathematical and evaluative souls of (biblio/sciento/infor)metrics will be outlined in this connection. At the same time, a succinct literature review of the “reasons to cite” and of the dark sides of scientists' behavior uncovered by constructivist sociologists and information scientists will highlight the weaknesses of any reductive view of the research process. To conclude the discussion, a quick reference to the French tradition of co-word analysis will demonstrate the resilience of bibliometric techniques to a simplistic opposition between normativism and constructivism, suggesting that a coherent theoretical framework for quantitative science studies has yet to be developed.

History and Institutionalization of Scientometrics
Stefan Hornbostel, Institute for Research Information and Quality Assurance (iFQ), Germany

This lecture describes the context from which the field of scientometrics/bibliometrics has emerged. The discipline of scientometrics is characterised as a research field at the intersection of information science and science studies. Its emergence is closely linked to the growth of scientific information in the 20th century and the evolution from ‘little science’ to what de Solla Price called ‘big science’. The thematic and geographic diffusion of scientometrics since the 1960s, its present structure, and the growing number of contemporary applications are discussed. Special attention is paid to the institutionalization process of the field; important milestones in the development of the field and in its institutionalization are presented.
Finally, the consequences of the ‘perspective shift’ in bibliometrics through science policy use, economic interests and utilisation within the scientific reputation system are discussed, as well as the enormous acceleration of the field's development caused by the IT revolution of the last fifteen years.

Scientometric indicators in use: an overview
Sybille Hinze, Institute for Research Information and Quality Assurance (iFQ), Germany

The use of scientometric indicators dates back to the 1960s and 1970s in the United States, where the first Science Indicators report was published in 1973. Since then a variety of indicators has emerged, aimed at reflecting various aspects of science and technology and their development. The presentation will give an overview of indicators and their use in science policy making. The specific focus will be on indicators used in the context of research evaluation. In particular, indicators applied to measuring research performance at the various levels of aggregation, i.e. the macro, meso and micro levels, will be introduced. A range of aspects reflecting research performance will be addressed, such as research productivity and its dynamic development, the impact of research, collaboration, and thematic specialization. The options and limitations of the indicators introduced will be discussed.
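As one concrete example of the kind of normalized performance indicator discussed here, the mean normalized citation score of a unit's n publications compares observed with expected citation counts (notation is ours; this is the standard CWTS-style definition):

    \mathrm{MNCS} = \frac{1}{n} \sum_{i=1}^{n} \frac{c_i}{e_i}

where c_i is the number of citations received by publication i and e_i is the average number of citations of all publications of the same field, publication year and document type; values above 1 indicate above-world-average impact.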

Mathematical Foundation of Scientometrics
Wolfgang Glänzel, Centre for R&D Monitoring (ECOOM), Katholieke Universiteit Leuven, Belgium

Scientometrics, like all metrics fields, relies on mathematical models, notably on mathematical statistics. The lecture briefly describes the mathematical foundation and basic postulates of bibliometrics, explains what publications and citations stand for, and how observations have to be assigned to the actual units of analysis. Although straightforward deterministic models can be used to describe many phenomena analysed in bibliometrics, the probabilistic approach provides the groundwork for more sophisticated models and indicators based on stochastic methods. The lecture introduces, in particular, models for publication and citation processes and shows how scientometric indicators can be derived from these models. Special attention is paid to the typically “fat” tails of scientometric distributions. Another important issue arising from the stochastic approach is statistical reliability, notably the asymptotic unbiasedness and consistency of estimators, and the construction of confidence intervals for indicators.
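To make the “fat tail” issue concrete (a standard fact, added here for illustration): if citation counts have a Pareto-type tail,

    P(X > x) \sim C\, x^{-\alpha}, \qquad \alpha > 0,

then the moment \mathbb{E}[X^k] is finite only for k < \alpha, so for small \alpha even the variance or the mean may not exist; this is precisely why the reliability of estimators and the construction of confidence intervals for indicators require special care.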

The application of Network analysis in science studies: Common theoretical background for broad applications
Bart Thijs, Centre for R&D Monitoring (ECOOM), Dept MSI, Katholieke Universiteit Leuven, Belgium

Network analysis applies powerful statistical tools and analytical techniques to uncover the relations within science and its structure and development. This particular application of techniques stemming from many different science fields is often referred to as mapping of science. It can be applied to all entities associated with science, such as disciplines, journals, institutions and researchers. Specialized scientometric tools like co-citation analysis, bibliographic coupling and co-word analysis were already developed and applied in the 1970s and 1980s. In recent years, new techniques such as text mining and hybrid approaches have been added to the bibliometrician’s toolbox. This lecture will cover both the classical approaches and the new techniques in an application-oriented manner within a solid theoretical framework.

Numerical representation of relations
Just as traditional cartography tries to model and communicate spatial information, mapping of science is about modeling quantitative relations between entities. In this process, three crucial decisions have to be made:

  1. Which are the entities to be plotted?
  2. Which quantitative measure will be used to describe the relation among entities?
  3. Which analytical tool is appropriate?

This lecture will focus mainly on different measures of relations between entities.
Relations based on citations and references include bibliographic coupling, co-citation and cross-citation. Other direct links between entities include co-authorship, institutional collaboration and international collaboration. Lexical approaches such as co-word analysis and text mining will also be covered.
Each of these measures has its own properties, which can have strong implications for the applicability of the analytical techniques. In order to improve the discriminative power of these measures, new hybrid approaches have been proposed.
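To make two of these measures concrete, a minimal sketch in Python (toy data; variable names are ours, not part of the lecture material):

    import numpy as np

    # Rows = citing documents, columns = cited references;
    # A[i, j] = 1 if document i cites reference j (toy data).
    A = np.array([[1, 1, 0, 1],
                  [1, 0, 1, 1],
                  [0, 1, 1, 0]])

    # Bibliographic coupling: documents related through shared references.
    # coupling[i, k] = number of references cited by both documents i and k.
    coupling = A @ A.T

    # Co-citation: references related by being cited together.
    # cocitation[j, l] = number of documents citing both references j and l.
    cocitation = A.T @ A

    print(coupling)    # off-diagonal entries = shared-reference counts
    print(cocitation)  # off-diagonal entries = co-citation counts

In practice the raw counts are usually normalized (e.g. with Salton's cosine) before any mapping step.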

Analytical techniques and visualization
The lecture will also deal with several analytical tools and visualization techniques suitable for capturing the underlying structure. Clustering techniques such as k-means or Ward’s hierarchical clustering are proven methods for classifying entities. Neural networks and self-organizing maps can also be applied in bibliometrics. Network theory describes the entities and their relations in terms of nodes and ties (or vertices and edges). Tools like Pajek offer a wide range of visualizations for presenting the relations between entities or nodes. The lecture will show how these tools can be applied to a broad range of entities, from individual authors through institutes and countries to complete fields or sets of journals, revealing the structure and dynamics of science at each of these levels.
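As a small illustration of the clustering step (a sketch using SciPy; the similarity matrix would in practice come from one of the measures above, and Ward's method formally assumes Euclidean distances):

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    # Toy symmetric similarity matrix between five entities (e.g. journals).
    S = np.array([[1.0, 0.8, 0.7, 0.1, 0.2],
                  [0.8, 1.0, 0.6, 0.2, 0.1],
                  [0.7, 0.6, 1.0, 0.1, 0.1],
                  [0.1, 0.2, 0.1, 1.0, 0.9],
                  [0.2, 0.1, 0.1, 0.9, 1.0]])

    D = 1.0 - S                                      # similarities -> distances
    Z = linkage(squareform(D), method="ward")        # Ward's hierarchical clustering
    labels = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into 2 clusters
    print(labels)                                    # two groups: {0, 1, 2} and {3, 4}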

The SciVal Suite - Helping Institutions to Establish, Evaluate and Execute Research Strategies
Valérie Thiel, SciVal Consultant Elsevier




July 3rd:
Procedures and Indicators: State-of-the-Art

The many faces of collaboration and teamwork in scientific research
Donald de B. Beaver, Williams College, USA

It is fashionable to say today that the single-authored paper is essentially extinct, but such papers still make up about 10 per cent of published research. Collaborative research tends to be associated with selective, prestigious journals with high impact factors, and the frequency of multiple authorship associated with those journals (95-99%) is not representative of the whole. During the 20th century, collaboration became the dominant mode of conducting research, growing from about 6% of all papers in 1900 to 90% today. What explains its popularity?
One thought is that collaboration is a response to a brainpower shortage: it enables the extraction of a fraction of a research paper from the many who would otherwise not publish on their own. Another is that collaborative research is associated with greater productivity, an important factor during the century in which “publish or perish” became the norm for basic research. Still another is that collaborative research bears greater epistemic authority than the single-authored paper, standing as it does with one leg in the context of discovery and the other in the context of justification.

Advanced bibliometric methods for evaluation of research groups, ranking and benchmarking of universities, and mapping of research related to socio-economic issues
Anthony van Raan, CWTS - Centre for Science and Technology Studies, Leiden University, The Netherlands

We present an overview of new developments in ‘measuring science’ based on bibliometric methods. Measurement of research performance is addressed, including aspects such as interdisciplinarity, collaboration, ‘knowledge users’, scientific excellence, and the role of journals. Advanced bibliometric methods are an indispensable element, next to peer review, in research evaluation procedures, particularly at the level of research groups, university departments, institutes, and special research programs of research councils and charities supporting scientific research.
Central topics are: the construction of performance indicators on a solid mathematical basis; their empirical behavior and functionality; network-based definition of research fields and proper normalization procedures; the distribution of impact within fields and within journals; and the potential and limitations of bibliometric indicators for engineering, the social sciences and the humanities.
A special focus will be on the ranking and benchmarking of universities, particularly the new Leiden Ranking in comparison with the THE and Shanghai rankings.
Next we discuss recent developments in network analysis and science mapping. We show the potential of bibliometric science mapping as a unique instrument to discover patterns in the structure of scientific fields, to identify processes of knowledge dissemination, to visualize the dynamics of scientific developments, and to assess research related to important socio-economic themes.


Journal-level classifications - current state of the art
Éric Archambault, Science-Metrix, Canada

Journal-level classifications play an important role for researchers who want to send a paper to a journal in a field of research relevant to the manuscript's content. Importantly, journal-level classifications have also been used to produce statistics on scientific production. We will briefly examine the origin of journal classifications in bibliometrics, with the pioneering work of CHI Research in the 1970s. Current classifications will be examined, as well as the various techniques that can be brought to bear when building a classification, including clustering techniques and the use of human expertise.

Meso-level retrieval: field delineation and hybrid methods
Michel Zitt, INRA, LERECO Lab, Research Department SAE2, France

Context
Finding an acceptable delimitation of the meso-level (fields, topics) is essential to the quality of thematic analysis in bibliometrics. The meso-level stands on a fuzzy borderline. Delineation can rely on typical term-based retrieval techniques, with a bottom-up juxtaposition of queries to cover a large area, making it very difficult to avoid noisy results. An alternative is the top-down approach based on the statistical breakdown of large scientific sets/networks, even at the level of science as a whole; classical and recent methods alike encounter the limitations of large-scale classification. Whatever the method, the delineation of fields typically meets another type of difficulty: the institutional and community perceptions of field frontiers, and the possibly incompatible expectations of the scientists/experts asked to supervise the process and of the final users.

Focus on hybrid methods
Bibliometrics is anchored in a few fundamental networks: citation, linguistic/lexical, collaboration, and affiliation sensu lato (institutional, territorial). The delineation of fields and topics involves connections, strategies and perceptions reflected in these various networks. A sensible association of these sources, in terms of both quality and cost-efficiency, helps to achieve an acceptable delineation. Which networks should be combined? Early combined metrics, serial or parallel processes? Which granularity? Examples will be shown of the hybridization of the two best-performing networks, terms and citations, which have contrasting but not antagonistic properties.
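One simple form of such hybridization, shown here as a sketch (the weight lam is a free parameter to be tuned; more refined schemes combine the measures at the level of angles or distributions):

    import numpy as np

    def hybrid_similarity(sim_citation, sim_text, lam=0.5):
        """Convex combination of a citation-based and a lexical
        similarity matrix -- one simple hybridization scheme."""
        return lam * sim_citation + (1.0 - lam) * sim_text

    # Toy 3x3 document-document similarity matrices for illustration.
    sim_cit = np.array([[1.0, 0.6, 0.1],
                        [0.6, 1.0, 0.2],
                        [0.1, 0.2, 1.0]])
    sim_txt = np.array([[1.0, 0.3, 0.4],
                        [0.3, 1.0, 0.1],
                        [0.4, 0.1, 1.0]])

    print(hybrid_similarity(sim_cit, sim_txt, lam=0.75))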

New developments in bibliometrics and research assessment
Henk Moed, Senior Scientific Advisor, Elsevier

This presentation consists of two parts. The first part is an introduction to the use of bibliometric indicators in research assessment, aimed at showing the boundaries of the playing field and highlighting important rules of the game. It underlines the potential value of bibliometrics in consolidating academic freedom. It stresses the relevance of assessing the societal impact of research, but emphasizes at the same time that one must be cautious with the actual application of such indicators in a policy context.

The second part identifies major trends in the field of bibliometrics and focuses on the creation of large, compound databases by combining different datasets. Typical examples are the integration of citation indexes with patent databases and with “usage” data on the number of times articles are downloaded in full-text format from publication archives; the analysis of full texts to characterize the context of citations; and the combination of bibliometric indicators with statistics obtained from national surveys. Significant outcomes of studies based on such compound databases are presented, and their technical and conceptual difficulties and limitations are discussed.



Exploring the application of existing bibliometric indicators and measures to books
Patricia Brennan, Director Product Strategy, Thomson Reuters

With the launch of the Web of Science Book Citation Index by Thomson Reuters in 2011, a new era of citation analysis has begun. Previously, while it was possible to identify book citations within the Web of Science citation data, it was not possible to link citations to known book source items, nor to get a view of book output alongside other formats such as proceedings or journal articles. This presentation will review the Book Citation Index data and explore new possibilities, not only for expanded indicators but also for the role of books and their citations in understanding the published scholarly literature. The potential use of these new data in bibliometric studies, in the evaluation of research productivity and impact, and for better understanding the relationships between scholarly publications will be explored.


Scientometrics, Technometrics, Innovation: Policy Aspects
Koenraad Debackere, Katholieke Universiteit (K.U.) Leuven

Science and technology have become major items on the policy agenda. As a consequence, the need for appropriate indicator development to underpin science and technology policy has increased dramatically over the last decades. The field of scientometrics has been one of the primary contributors to the development and use of indicators for science, technology and innovation (STI) policy. The interaction between the field of scientometrics and its accomplishments on the one hand, and its uses in the STI policy arena on the other, therefore warrants closer attention. During this presentation, the focus will be on (1) mapping the various indicator needs of STI policy, (2) linking those needs to recent scientometric developments, (3) listing the do’s and don’ts of deploying scientometric indicators in the policy arena, and (4) presenting some illustrative cases. This overview will familiarize ESSS participants with the judicious application of advances in scientometrics and technometrics to STI policy.

Mapping and Visualisation of Science
Katy Börner, Royal Netherlands Academy of Arts and Sciences (KNAW), The Netherlands / Cyberinfrastructure for Network Science Center, Director, School of Library and Information Science, Indiana University, Bloomington, IN

Recent developments in data mining, information visualization, and science of science studies make it possible to study science and technology (S&T) at multiple levels using a systems science approach. At the micro-level, the impact of single individuals or specific works can be examined. At the meso-level, the expertise profiles of institutions can be compared or the trajectories of student cohorts can be modeled. The macro-level provides a 10,000-foot view of the continuously evolving geospatial and topical landscape of science and the global import/export activities unfolding over both spaces. See sample studies and maps in the international Places & Spaces: Mapping Science exhibit (http://scimaps.org) and the Atlas of Science (http://scimaps.org/atlas).
The first part of this talk will present research results and case studies that aim to increase our scientific understanding of the inner workings of S&T. The second part introduces the Science of Science (Sci2) Tool (http://sci2.cns.iu.edu), a modular toolset specifically designed for the study of science. It supports the temporal, geospatial, topical, and network analysis and visualization of scholarly datasets at the micro (individual), meso (local), and macro (global) levels and is used by major funding agencies in the US and Europe as well as researchers in more than 40 countries.
References:
  • Scharnhorst, Andrea, Börner, Katy, and van den Besselaar, Peter, eds. (2012). Models of Science Dynamics. Springer Verlag.
  • Börner, Katy. (2010). Atlas of Science: Visualizing What We Know. Cambridge, MA: MIT Press.
  • Shiffrin, Richard M., and Börner, Katy, eds. (2004). Mapping Knowledge Domains. Proceedings of the National Academy of Sciences of the United States of America 101 (Suppl. 1).

Others can be found at http://cns.iu.edu/publications

Relevant Links
Mapping Science exhibit: http://scimaps.org
Science of Science Tool: http://sci2.cns.iu.edu


July 4th:
Seminars Day 1
Data Cleaning and Processing
Matthias Winterhager, Bielefeld University, Institute of Science and Technology Studies (IWT), Germany

The quality of bibliometric analyses depends heavily on the appropriate handling of the relevant raw data fields. Depending on the level of aggregation and the target objects under study, various accuracy issues can arise with citation links and several data elements (document type, author, institution, country, journal, field and discipline).
We will take a close look at the relevant data fields in modern citation databases such as Web of Science or Scopus to see whether they are “ready to use” for all kinds of bibliometric studies. The main problems of data quality will be shown, and the major types of errors and their consequences will be discussed.
Standardisation, verification and the introduction of identifiers can help to overcome problems of data quality. The data processing approaches of the German competence centre for bibliometrics will be demonstrated.
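A tiny sketch of the kind of standardisation step involved (the variant list and function names are ours and purely illustrative; real pipelines rely on large curated variant lists and identifier systems):

    import unicodedata

    # Hypothetical mapping of known spelling variants to one standard form.
    INSTITUTION_VARIANTS = {
        "univ bielefeld": "Bielefeld University",
        "bielefeld univ": "Bielefeld University",
        "universitat bielefeld": "Bielefeld University",
    }

    def normalize(raw: str) -> str:
        """Lower-case, strip accents and collapse whitespace before lookup."""
        text = unicodedata.normalize("NFKD", raw)
        text = "".join(c for c in text if not unicodedata.combining(c))
        return " ".join(text.lower().split())

    def standardize_institution(raw: str) -> str:
        return INSTITUTION_VARIANTS.get(normalize(raw), raw)  # fall back to raw

    print(standardize_institution("Universität Bielefeld"))  # Bielefeld University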

Introduction to Bibliometric Data Sources
Wolfgang Glänzel, Centre for R&D Monitoring (ECOOM), Katholieke Universiteit Leuven, Belgium / Juan Gorraiz, Bibliometrics Department, University of Vienna, Austria

This talk addresses the specific requirements that bibliographic data sources must meet with regard to their suitability for bibliometric applications. Furthermore, relevant issues such as coverage, representativeness and selection criteria are considered.
Any appropriate bibliography can serve as a data source for a bibliometric study; however, comparative studies and large-scale analyses require large standardized data sources such as those provided by bibliographic databases.
After a short general introduction providing background information, the main features of bibliographic databases are discussed, with special focus on the question of which features are useful, essential or indispensable for bibliometric use. Most databases are designed for information retrieval and are thus not necessarily fit for metric applications. In this context some basic features are introduced using examples from different databases. A distinction is made between specialized subject databases and multidisciplinary databases. In particular, the opportunities and limitations of the three major multidisciplinary data sources – Web of Science, Scopus and Google Scholar – are discussed.
In addition, subject-specific databases (e.g. MathSciNet, SciFinder), patent databases (e.g. Derwent Innovations Index, Espacenet/PATSTAT) and pilot projects for citation indexing on the web (e.g. Citebase, CiteSeerX – all based on open access archives) are presented and examined critically regarding their potential for data enrichment in bibliometric analyses.


Impact Measures
Wolfgang Glänzel, Centre for R&D Monitoring (ECOOM), Katholieke Universiteit Leuven, Belgium / Juan Gorraiz, Bibliometrics Department, University of Vienna, Austria



July 5th:
Seminars Day 2
Mapping Science (on the basis of Bibexcel Software)
Olle Persson, Sociology Department, Umeå universitet, Sweden

Purpose: To introduce the basic skills needed to produce maps with special reference to bibliometric data.
Learning outcomes:
The students learn how to:
(1) Prepare data, including converting downloaded records, extracting and editing data;
(2) Calculate measures of relatedness, including citations, co-citations and shared references, and keyword analysis;
(3) Make maps using Pajek and similar drawing software (a minimal Pajek input file is sketched below).
Teaching method: Short lectures with exercises.
Students should download the latest versions of Bibexcel and Pajek from the Internet.
Study material will be made available before the course starts.
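For orientation, a minimal Pajek network file (.net) for a toy three-author co-authorship network, with edge weights counting jointly authored papers (illustrative data only):

    *Vertices 3
    1 "Author A"
    2 "Author B"
    3 "Author C"
    *Edges
    1 2 5
    1 3 2

Bibexcel can generate such files from processed bibliographic records, and Pajek reads them directly for layout and drawing.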

Network Analysis with R
Bülent Özel, Department of Computer Science, Istanbul Bilgi University (IBU), Turkey

Social network analysis has increasingly been used in the field of scientometrics. It is primarily adopted to explore, predict and model patterns of collaboration in science and in science-related issues. R is both a programming language, based on the S language, and an advanced environment for statistical computation. R as a whole is acknowledged to be a powerful tool that simplifies many statistical computations; it is freely available and licensed under the GNU General Public License. A number of research teams have contributed functional and powerful packages for conducting social network analysis in R. In this short tutorial, I will begin with a brief introduction to the R programming environment and its data structures. In the first part, using base network packages, we will build and visualize networks. Having introduced how to compute node-level and graph-level centrality measures and other network measures such as transitivity, reciprocity, cliques and components, we will analyse network covariates and conduct network regression. Both built-in datasets and a co-authorship dataset provided for this short course will be used for the exemplary analyses. In the last part of the course, I will introduce the RSiena package, which may enable scientometricians to conduct longitudinal and dynamic network analysis.
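The course itself works in R; purely as a language-neutral preview of the node- and graph-level measures mentioned above, here is a sketch of the same quantities in Python with networkx (toy data):

    import networkx as nx

    # Toy directed collaboration network for illustration.
    G = nx.DiGraph()
    G.add_edges_from([("A", "B"), ("B", "A"), ("B", "C"),
                      ("C", "A"), ("D", "A")])

    # Node-level centrality measures.
    print(nx.in_degree_centrality(G))
    print(nx.betweenness_centrality(G))

    # Graph-level measures.
    print(nx.reciprocity(G))                 # share of mutual ties
    U = G.to_undirected()
    print(nx.transitivity(U))                # global clustering coefficient
    print(list(nx.find_cliques(U)))          # maximal cliques
    print(list(nx.connected_components(U)))  # components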


July 6th:
Seminars Day 3
Workshop: Research Evaluation in Practice 1

Workshop: Research Evaluation in Practice 2

July 7th:
Panel Discussion
Bibliometrics and Ethics: Distorted behaviour resulting from policy use and misuse of bibliometric data?