June 8, 2015

Open Access Conference 2015: Learning from Experience

OPEN ACCESS CONFERENCE 2015:
LEARNING FROM EXPERIENCE

San Jose State University
Dr. Martin Luther King, Jr. Library
October 23, 2015

Call For Proposals

In celebration of Open Access Week, San Jose State University will be hosting
its biennial one-day conference on October 23, 2015, on all things Open: Open
Access, Open Educational Resources, Open Education. For the conference
sessions, we are interested in what worked, what didn’t work, and why in areas
related to:

–Open access publishing and institutional repositories
–Sustainability, scalability, and assessment of open access
–Open educational resources and open access applications in the classroom
–Massive open online courses (MOOCs): copyright issues, assessment, challenges
–Outreach, promotion, and overcoming resistance to open initiatives

We will consider proposals for individual presentations and panels organized
around a theme. Presentations are scheduled for either 60 minutes or 30
minutes. In the final program, 30-minute sessions will be paired. We are also
offering a lightning round session consisting of 5- to 10-minute presentations
showcasing your best ideas and your most educational failures.

Submit your presentation proposal to:
http://scholarworks.sjsu.edu/cgi/ir_submit.cgi?context=oa-un-conference

Deadline for proposals: July 31, 2015
Notification of accepted proposals expected: August 15, 2015

Please feel free to contact the Program Planning Committee with questions:

Ann Agee, Program Proposal Coordinator, at ann.agee@sjsu.edu and
Christina Mune, Program Planning Chair, at christina.mune@sjsu.edu

May 26, 2015

Interview with Joachim Schöpfel, author of Learning from the BRICS: Open Access to Scientific Information in Emerging Countries

Joachim Schöpfel is a lecturer in Library and Information Sciences at the University of Lille 3 (France), director of the French Digitization Centre for PhD theses (ANRT), and a member of the GERiiCO research laboratory. He teaches on LIS topics, including intellectual property. His research interests are scientific information and communication, especially open access and grey literature. Litwin Books recently published his book, Learning from the BRICS: Open Access to Scientific Information in Emerging Countries. Joachim agreed to be interviewed here about it.

Joachim, thank you for agreeing to do this interview. I’d like to start by asking you to tell readers a bit about yourself and what got you interested in the topic of this book.

After a PhD in Psychology at the University of Hamburg (Germany), I worked for nearly 20 years in the French public information industry before returning to academic life. As an author and information manager, I have always been interested in open access as a set of tools and services designed to facilitate scientific communication. Most of my publications are freely available in the French open repository HAL. I am also interested in the development of the open access movement, in France, Germany, and other European countries, but also in other regions of the world. Because of their economic and demographic dynamics, the BRICS countries play a particular role in global policies, be it security, public health, ecology, innovation, research, or education. That is why, a couple of years ago, I became interested in open access initiatives and projects in these countries.

For readers’ info, the BRICS countries are the “emerging” economies of Brazil, Russia, India, China, and South Africa. What are some of the basic differences regarding the open access movement in those countries, versus the U.S. and Europe?

Perhaps the most important difference is that the scientific output of the BRICS countries has largely been neglected and underrepresented by the international databases and catalogs, and partly still is. Language, culture, politics – all of this may explain the underrepresentation, but today these countries want to be visible and to have an impact on the global landscape of scientific research.

Another difference is that they had, and partly still have, more difficulty than the U.S. or Europe in getting access to the core of scientific information. Again, language plays a role, but so do the economics of scientific information and technical infrastructures. Open access therefore has a different and sometimes more crucial meaning for the BRICS countries: it is a vector for the global dissemination of their scientific results and a way to get access to larger amounts of information than before.

So what does the book say about the open access situation in the countries discussed? Could you tell us its scope and outline it?

The book shows that all of these emerging countries are developing open access policies. Yet diversity and differences prevail. Each country pursues the open access strategy that best fits its economic, financial, political, and scientific situation. Each strategy is specific and different, except for Brazil and South Africa, which have started a bilateral collaboration for OA journal publishing on the SciELO platform. However, all of the countries face the same double challenge: how to increase the visibility and global impact of their scientific output, and how to improve access to scientific and technical information for their research and higher education. Open access can be an answer to both.

Can you give a few interesting examples of the differences between the open access policies in these countries?

Public policy concerning open access journals differs considerably between the countries. While Brazil and, to a lesser degree, South Africa invest in a central public platform for OA journals (SciELO, SciELO-SA), the other countries, in particular China and India, follow a different strategy based on larger numbers of separate OA servers. Another difference is the role of the public sector. Russia, for instance, but also Brazil, seem to consider free access to research output part of the social and political responsibility of the State, i.e. national or regional authorities; open access, whether the gold or green road, should not be controlled by commercial publishing houses. On the other hand, India, and perhaps even more so China, foster individual, institutional, and often corporate initiatives, without a clear distinction between for-profit and not-for-profit dissemination. A third difference is related to their global strategy. While some countries, such as Brazil and South Africa, focus more on regional collaborations, others (China, Russia, India) appear to seek global impact in competition with Western countries, which means, for instance, that the question of English-language content and visibility in international initiatives are of prime importance to them.

The title, Learning from the BRICS, suggests that there are lessons to be learned from these countries in going forward with OA in the West. What do you think are some of the lessons to be derived for us?

One lesson is that there is no single solution or magic recipe for open access, and that a pragmatic and flexible approach fitted to local conditions matters more than preconceived ideas about what should be done. Perhaps there is no unique or dominant model of open access. Perhaps there never will be. Perhaps, too, there is no need for a unique model, be it green or gold; diversity may be a better option for sustainable development. Another lesson is the need for a strong commitment to open access shared by scientific and political authorities in order to increase the impact of a country’s research output and the availability of scientific information. In the words of one of the book’s authors, Abel Packer of SciELO (Brazil): “National research policies that favour open access is the main factor to advance open access”. Yet, as our book shows, this commitment must also be shared by the local and domain-specific research communities in order to transform national policy into a success story. This is the third lesson: learning from each other does not only mean learning from failures, mistakes, and dead ends but, above all, learning from success. More than the understanding of problems and challenges, perhaps the real message of our book is the importance of success stories. The development of open access depends on the promotion of successful initiatives, such as SciELO in Latin America. Expect success, focus on it, and coordinate scientific and political efforts in favour of open science.

Thank you for this interview, Joachim. Your insights are very much appreciated, by me and I’m sure by our readers. Litwin Books is privileged to have published this title.

Thanks to Litwin Books for support and interest!

May 5, 2015

Learning from the BRICS: Open Access to Scientific Information in Emerging Countries

Learning from the BRICS: Open Access to Scientific Information in Emerging Countries

Editor: Joachim Schöpfel
Price: $35.00
Published: May 2015
ISBN: 978-1-936117-84-0
Printed on acid-free paper

The market for scientific and technical information (STI) has been dominated by publishers from the United States, Great Britain, Germany, and the Netherlands. This book takes a look at the interesting developments in publishing coming from the countries with emerging economies known as the BRICS (Brazil, Russia, India, China, and South Africa), which account for 40% of the world’s population and whose GDPs make up 18% of the world’s economy. Each of these countries has a unique economic system as well as differing systems of academic higher education and research. As a result, they have each developed different models of academic publishing for the dissemination of their research results, many of which are based on principles of open access.

This book closes a gap in the literature of academic publishing by examining the strategies employed in STI publishing in these countries. As a growing part of the international STI market, they will impact the ways in which information is produced and made available in the future. The models examined here can serve as alternative options for information delivery in developed countries, and may serve as more sustainable models for emerging economies in Africa and Latin America.

Brazil, Russia, India, China, and South Africa have all developed their own ways to open access, based on specific blends of the green and gold roads, public investment, and private initiatives. What they have in common is their commitment to research as a driver of economic and societal development and to open science as a way to enhance the quality, impact, and accessibility of scientific information. Open access is not an end in itself but a means to better science.

Twelve established information professionals and scientists from seven countries contribute to this book and help the reader understand the open access situation in the emerging countries. How are they doing what they are doing, and why? Where are the bottlenecks, and what are the challenges? What can be learned? Each chapter is introduced by “Facts & Figures,” a section with basic data on each country’s economic performance, research and development, scientific output, and open access publishing.

Brazil: The first chapter presents the open access journal platform SciELO, the most important open access server for scientific journals worldwide, with an impact well beyond Brazil.

Russia: Chapter two provides a general overview of institutional initiatives for the free dissemination of public research on the Internet, especially in the field of grey literature, in a society with strong traditions of the public interest prevailing over private intellectual property.

India: Along with a detailed description of the open access movement in India, the third chapter reports on awareness and acceptance of institutional repositories and open access journals among Indian scientific communities.

China: The author presents the results of a recent survey on the development of open access journals in China. This is interesting insofar as only very few titles are known and indexed outside of China.

South Africa: The last chapter shows how open access can increase its impact and also protect local content, and how it can build on African cultural traditions and values of Ubuntu, i.e. relatedness, sharing and generosity.

Each chapter tells a story, and each story is different. A virtual roundtable concludes the book, with a focus on shared values and engagement in the international community of open access and open science. This book provides an important overview of publishing trends in BRICS nations and will be of interest to anyone concerned with the future of academic publishing, including librarians, higher education researchers, and publishers. It also provides insights regarding copyright issues, the economics of publishing and STI, and international affairs.

Joachim Schöpfel is a lecturer in Library and Information Sciences at the University of Lille 3 (France), director of the French Digitization Centre for PhD theses (ANRT), and a member of the GERiiCO research laboratory. He was manager of the INIST (CNRS) scientific library from 1999 to 2008. He teaches library marketing, auditing, intellectual property, and information science. His research interests are scientific information and communication, especially open access and grey literature.

Available from Amazon or your favorite library vendor.

January 2, 2013

Thanks to all who contributed…

December 21, 2011

Data Mining

Libraryland is abuzz about a new role we can play in the pursuit of scientific knowledge: data curation. Data curation serves, in particular, the new scientific methodology that goes under the name e-science. E-science involves the collection of data sets that are made widely available to the research community. Researchers then “mine” these data sets, using automated systems to find statistically significant relationships within the data. The library’s role is to curate the data, i.e., to identify, acquire, and manage the data sets through the course of their life cycle. As exciting as this new methodology is, we should be aware of its weaknesses. E-science can be a valuable addition to traditional scientific methodology, but by itself it is no panacea.

In a commentary entitled “Implications of the Principle of Question Propagation for Comparative-Effectiveness and ‘Data Mining’ Research” in the Journal of the American Medical Association, 305(3), 2011, Mia and Benjamin Djulbegovic argue that data mining does not provide definitive answers to research questions. Instead, it should be considered merely a hypothesis-generating technique. Their first point had already been demonstrated vividly by a piece of data mining research entitled “Testing Multiple Statistical Hypotheses Resulted in Spurious Associations: A Study of Astrological Signs and Health,” published in the Journal of Clinical Epidemiology, 59(9), 2006, by Peter Austin et al. Their research showed that residents of Ontario, Canada who were born under the astrological sign of Leo had a higher chance of suffering a gastrointestinal hemorrhage than others in the population, and that those born under the sign of Sagittarius had a higher probability of being hospitalized for a humerus fracture. These results were statistically significant, even after being tested against an independent validation cohort. The study “emphasizes the hazards of testing multiple, non-prespecified hypotheses.” In other words, it warns us that, given enough data points, one can, after the fact, find any number of ways to connect them.
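
To make the multiple-comparisons trap concrete, here is a minimal Python simulation (the people, signs, and health outcomes are all invented and randomly generated; this is my own sketch of the statistical point, not Austin et al.’s data or method). It assigns people random “signs” and random, unrelated health outcomes, tests every sign/outcome pair for an association, and finds that roughly five percent of the tests come out “significant” at p < 0.05 anyway.

    # Sketch of the multiple-comparisons problem: with purely random data,
    # about 5% of association tests will reach p < 0.05 by chance alone.
    import numpy as np
    from scipy.stats import chi2_contingency

    rng = np.random.default_rng(0)

    n_people = 20_000
    n_outcomes = 100   # hypothetical health outcomes, none actually related to sign
    base_rate = 0.02   # each outcome affects about 2% of people, regardless of sign

    signs = rng.integers(0, 12, size=n_people)  # 12 "astrological signs", assigned at random

    spurious, total = 0, 0
    for _ in range(n_outcomes):
        outcome = rng.random(n_people) < base_rate
        for sign in range(12):
            in_sign = signs == sign
            table = np.array([
                [np.sum(outcome & in_sign),  np.sum(outcome & ~in_sign)],
                [np.sum(~outcome & in_sign), np.sum(~outcome & ~in_sign)],
            ])
            # p-value of a chi-square test of independence (no continuity correction)
            if chi2_contingency(table, correction=False)[1] < 0.05:
                spurious += 1
            total += 1

    print(f"{spurious} of {total} sign/outcome tests are 'significant' at p < 0.05 "
          f"({spurious / total:.1%}), even though every association is pure chance.")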

The second point in Djulbegovic and Djulbegovic, that data mining should be used as a hypothesis-generating technique, is, on the other hand, undermined by Austin et al., who point out that the statistical methods at the heart of data mining are not able to distinguish real from spurious associations. Data mining employs the automated examination of enormous bodies of data. Its usefulness is thought to be proportional to the size of the data set that it collates; however, as the data set becomes larger and the number of attributes that serve as potential relata increases, the number of potential relationships increases exponentially. Importantly, the number of spurious associations increases along with it. With enough data, no significance test will be stringent enough to provide assurance against the kind of results found in Austin et al. What is needed, according to Austin et al., is a “pre-specified plausible hypothesis”: for statistical analysis to be valuable, the researcher must begin with a hypothesis, preferably a plausible one.
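
The scale of the problem is easy to see with a back-of-the-envelope calculation (the attribute counts below are my own illustrative numbers, not figures from Austin et al. or the Djulbegovics). Counting only pairwise associations, the number of candidate hypotheses grows with the square of the number of attributes, the expected number of chance “discoveries” at p < 0.05 grows with it, and the per-test Bonferroni threshold needed to keep the family-wise error rate at 5% shrinks accordingly.

    # How the pool of candidate hypotheses, and the expected number of chance
    # findings, grows with the number of recorded attributes.
    from math import comb

    alpha = 0.05
    for n_attributes in (10, 100, 1_000, 10_000):
        pairwise_tests = comb(n_attributes, 2)              # one test per pair of attributes
        expected_false_positives = alpha * pairwise_tests   # if no association is real
        bonferroni_cutoff = alpha / pairwise_tests          # per-test threshold for 5% family-wise error
        print(f"{n_attributes:>6} attributes: {pairwise_tests:>12,} pairwise tests, "
              f"~{expected_false_positives:>10,.0f} chance 'findings' at p < 0.05, "
              f"Bonferroni cutoff p < {bonferroni_cutoff:.1e}")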

What exactly is a pre-specified plausible hypothesis, and how can we generate one if data mining can’t do it for us? The question was posed some sixty years ago, in different terms, by the philosopher Nelson Goodman, who believed that a critical question for epistemology was how to distinguish between “projectible and non-projectible hypotheses.” One can more or less replace “pre-specified plausible hypothesis” with Goodman’s term “projectible hypothesis.” According to Goodman, when we seek to determine which hypotheses are (or are not) projectible, we do not come to the problem “empty-headed but with some stock of knowledge” that we use to make the determination. Projectible hypotheses will be those that do not conflict with other hypotheses that have been supported in the past. They will commonly use the same terminology as previously supported hypotheses, terminology that has become “entrenched” in the language. This goes a long way toward explaining why we don’t find the link between one’s astrological sign and medical conditions plausible: twenty-first-century Western medicine is not accustomed to linking astrological signs to ailments and so must find any hypothesis that does so implausible.

If Goodman is correct, then data mining is of little use without an historical understanding of the field of science to which the data pertains. Library administrators should keep this in mind when allocating resources. Clearly, purchasing data sets is a necessary part of serving our research patrons, but the emphasis must be not on the mere accumulation of data but on the selection of data that is critical to continuing the scientific discourse. While data sets that distinguish astrological signs are clearly insignificant for medicine, there are many other attributes forming the basis of data sets that are more or less reasonable, and librarians must be able to perform the complex task of distinguishing the more from the less. It is the curation of data that is important, i.e., the acquisition and management of data sets through the whole of their life cycle, and most importantly, the curation of data sets that are of interest and value to the scholarly and research community.

Here, we have another argument for allocating library resources to pay for librarians with deep subject expertise. As e-science develops, vendors will make more and more data sets available, regardless of their actual worth to researchers. To effectively choose the data sets that are of value, librarians must have a thorough understanding of the research needs of their patrons. To do this, they must have a deep understanding of the field. Unfortunately, with the excitement swirling around e-science, the mere access to large data sets threatens to become the be-all and end-all in collection management. If we aren’t careful, we may find ourselves with mountains of data from which everything and nothing can be concluded.

October 13, 2011

Thoughts on VuStuff II

I spent the better part of Wednesday at VuStuff II, a small regional gathering hosted by Villanova University’s Falvey Memorial Library that focused on the intersection of technology and scholarly communication in libraries. The attendees were an interesting mix of people from academic and special libraries, and included library directors, archivists, systems librarians, special collections librarians, reference librarians, technical services librarians, and more. In the group discussion session, some of us regretted the lack of representation from public libraries; it sounds like outreach to that sector is now on the agenda for next year.

I’ve been impressed with what’s going on at Villanova for a while now. Not only are they doing some of the most interesting, cutting-edge work that I’ve seen in terms of presenting digital content from their special collections, but the culture of their library work environment is very different from (and I might judge it as “better” than) what I know of in other libraries and work settings. This is an outsider’s view, based on perceptions gleaned from what people who work there have told me and things that I’ve read. The following are some of the things I find particularly intriguing and feel might serve as a good model for other places to consider:

1) Falvey library staff are given time to explore special projects based on their own interests. By doing this, the library is taking a risk – some work hours may indeed be “wasted,” but new products and new services may be born. A lot of workplaces harp on the need for employees to be “creative,” “collaborative,” and “innovative,” but very few actually provide the time and space to support their staff in doing this.

2) Falvey funds technology. Money for digital projects and technology-based services is written into the budget. Many workplaces expect staff to “make do” with no financial support or else fund projects on an ad hoc basis. Falvey models the fact that superior technology-based projects require dedicated, ongoing funding.

3) Falvey diversifies the responsibility for technology. No single staff position is responsible for technology initiatives; rather, various aspects of technology are integrated into the job descriptions of numerous library staff members. This means that if a library staff position is cut or a staff member leaves, technology initiatives don’t evaporate along with that change.

4) Falvey supports open access. The VuFind product they’ve developed for use as a flexible library resource portal is available for free under a GPL open source license. The digital library content they present is available freely to anyone (with a few exceptions for some materials with outside restrictions). Instead of partnering with commercial interests to market a product, Falvey keeps to the ideal of libraries providing information and resources free of charge.

I think that Joe Lucia, Villanova’s university librarian and the director of Falvey Memorial Library, deserves a lot of credit for his leadership in these areas. I missed his opening remarks at the conference, but found his questions and comments throughout the sessions to be interesting and thought-provoking. He seems to be looking further forward than many library directors, asking questions like “What does it mean for libraries if the ILS as we know it is dead in the next five to eight years?” and “What does it mean if 80% of the content of our book collections is available electronically?” A word to the wise: the two books he specifically mentioned were Siva Vaidhyanathan’s The Googlization of Everything and R. David Lankes’ The Atlas of New Librarianship.

The presentations at the conference were informative and sometimes inspiring. Amy Baker of the University of Pittsburgh described her institution’s project to preserve archival mining maps, spurred by a mining accident in western Pennsylvania. Carried out in conjunction with the Pennsylvania Department of Environmental Protection, the project is a good example of a university/government partnership that provides publicly available information in order to help protect people and property. It reminded me that, while librarians and archivists rarely see our work as possibly having life-or-death consequences, sometimes it does.

Eric Lease Morgan of the University of Notre Dame demonstrated the Catholic Research Resources Alliance website (the “Catholic Portal”) and explained how it uses VuFind to draw together metadata from various formats and sources into one seamless interface. I was particularly interested in its ability to perform full-text searches and construct keyword-in-context (KWIC) concordances. I’m not sure how well known or well utilized this site is, but I think it holds a great deal of potential for researchers in literature, history, religious studies, and other fields to mine text data for a variety of purposes.
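
For readers unfamiliar with the term, a keyword-in-context (KWIC) concordance simply lists every occurrence of a search term along with a fixed window of the words around it. The toy Python sketch below is my own illustration of the basic idea, not a description of how VuFind or the Catholic Portal implements it.

    # Minimal keyword-in-context (KWIC) concordance: show each hit with a few
    # words of surrounding context. Sample text and window size are invented.
    import re

    def kwic(text, term, window=4):
        words = re.findall(r"\w+", text.lower())
        lines = []
        for i, word in enumerate(words):
            if word == term.lower():
                left = " ".join(words[max(0, i - window):i])
                right = " ".join(words[i + 1:i + 1 + window])
                lines.append(f"{left:>35}  [{word}]  {right}")
        return lines

    sample = ("The library preserves the newspaper, and the newspaper records the "
              "life of the parish; the library makes the newspaper searchable.")
    for line in kwic(sample, "newspaper"):
        print(line)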

Eric Zino of the LYRASIS library network explained the Mass Digitization Collaborative, undertaken to help libraries digitize selected resources in a cost-effective way. Unique items of historical value have been the major focus, although participating libraries are free to choose any materials they wish to include (provided copyright restrictions are met). Digitized materials are made publicly available via the Internet Archive and can also be hosted locally. This project underscored the benefits of libraries working together to cut costs, minimize staff time spent on projects, produce consistent products, and share content more broadly.

I missed the final presentation of the conference, which was Rob Behary of Duquesne University speaking on his library’s project to digitize the Pittsburgh Catholic newspaper. His presentation highlighted some of the benefits of moving from microfilm to digital content. Most librarians will agree that efforts like this, to preserve smaller regional publications with a unique focus or viewpoint, are an important service that libraries should be involved in.

All in all, this was an interesting day with plenty of time for networking built in. I enjoyed reconnecting with former colleagues and students, and meeting some new people as well. It was particularly rewarding to be with a group of people who were interested in moving library services forward into the 21st century, while still retaining the traditional library value of open access to information. I suspect that organizers may be seeking larger quarters for future VuStuff gatherings as its reputation continues to grow.