As the director of the Samuel J. Wood and C.V. Starr Biomedical Information Center, Terrie Wheeler believes that one important role for a medical library is to lessen the administrative burden on researchers so they can “move the science forward,” as she puts it. NEJM LibraryHub spoke to her recently about some of the ways her library does this.
Q: Tell us about the Weill Cornell Medicine Data Core, which the library helped develop and now administers.
A: Data Core is a secure computing and storage environment where our researchers and their collaborators can put patient data they want to analyze. Researchers are granted access to the data if the applicable data use agreements, which the library manages, permit it and our Institutional Review Board approves. Initially, the Data Core data sets were available only to the Department of Population Health Sciences, but now they’re available to the entire college. The Data Core is a cloud-based Windows environment accessed through a custom app. Inside the Data Core, we install all the statistical analysis tools researchers may need — RStudio, SAS, Stata, GraphPad, etc. Data cannot be exported from the Core until HITRUST-certified librarians have reviewed it to make sure it is de-identified.
Other schools have data cores; however, I don’t know of another school where the library administers one. We try to put ourselves in the shoes of researchers. Their deadlines are short and they need their data fast, so we need to be responsive.
We are also seeking approval to host New York State Medicaid data in our Data Core. This required filling out over 100 pages of security and systems questionnaires, and it took 14 months to complete! We have now submitted the application to New York State and are awaiting approval. Once approved, the library will be able to submit additional requests on behalf of other researchers, with a much faster approval time. In a similar manner, the Data Core has successfully negotiated access to other valuable data sets, allowing researchers to focus on data analysis instead of acquisition and paperwork. We like to think of our facilitation of access to these patient data sets as “interlibrary loans.”
The Data Core became an invaluable asset during the recent COVID-19 pandemic. Within days, researchers on the Research Informatics Team set up a COVID-19 research data repository in the Data Core. The Data Core team must ensure access to and availability of computational resources, and requests for access and support skyrocketed, increasing by 100 percent over our pre-COVID rate.
Q: Can you talk to us about the library’s grant writing service? How did this come about?
A: When I first interviewed for the director’s job, I was told by our research dean that the library should start a grant-writing service. When I got the job and started asking my staff about what was important to them and what types of skills they wanted to grow, I discovered that one of our staff members had been an editor in New Zealand and he was actually quite good at scientific writing. So he now heads up our editing service. Another grant editor works at our front desk and is also a professor at a community college.
The team, which has about four people doing editing part-time, does light to medium editing. We’ll help with grammar and structure. We’ll make sure the researcher is meeting all the NIH requirements. Maybe the scientist hasn’t put the punch at the top. Maybe he or she has gotten muddled in the details of the science and lost the bigger picture, so the editor will help reconstruct that bigger picture. A couple of our librarians are excellent statisticians, so they can do a quick review of the statistics if needed. But we will not touch the science because that’s the scientist’s bailiwick.
And we’re making a difference in a big way. Since the grant editing service began about four years ago, we have helped bring in $41 million in grant funding. We usually target junior researchers, many of whom speak English as a second language, although a recent success was a senior researcher who needed help with a multi-consortium resubmission. The NIH gives you two chances, so the second time around he worked closely with us and got the grant.
Q: Are there other ways that the library tries to reduce the administrative burden for researchers?
A: The library has developed, maintains, and oversees Weill Cornell’s installation of VIVO, an open-source researcher profiling system. VIVO has about 86,000 views a month and pulls information from multiple authoritative systems, ensuring that researchers have a rich and accurate web presence.
We also focus on providing high-quality bibliometrics to our users. With our publications reporting system, VIVO Dashboard, we take each article that a researcher has written and benchmark it against 200 other articles published in the same year, in the same research area, and of the same article type — research articles to research articles, review articles to review articles.
Administrators can access a dashboard that shows where a researcher’s citation impact falls in the percentile ranking for the discipline. The dashboard also allows the school to see the return on investment in different research areas over time. We can track return on investment for individuals, divisions, departments, and the entire school. I write the promotion and tenure letters that go before the Board of Overseers, and we include the research impact information from VIVO Dashboard. Of course, it’s just one number, and you’re looking at it in context with other things.
In the next year, we will deploy ReCiter, a homegrown open-source publication management system, which uses machine learning and available identity data to allow administrators to easily maintain publication lists for thousands of scholars.
Q: This last question doesn’t fit into the category of reducing administrative burden, but we know it’s something you’re very proud of. Can you tell us about SMARTFest?
A: It began the year before I arrived at the library as a modest event — an opportunity for students, faculty, researchers, and others to meet with ITS [Information Technologies and Services] and library staff, view demos of services, and learn about significant IT projects planned for the upcoming year. Now in its seventh year, SMARTFest has grown into a campus-wide event featuring library, IT, and educational vendors targeting every kind of user, along with demonstrations, high-end food, and raffle prizes. A critical part of SMARTFest is keeping our users informed about the library and IT services available to them. This year we had over 1,600 attendees and 25 vendors, and we pulled in about $24,000. The library and ITS, which co-sponsors SMARTFest with us, had 27 booths of our own. The event gets bigger every year. We have four levels of sponsorship for vendors. The event takes place in the middle of February, when it’s snowy and cold, so it’s our big mid-winter celebration.
SMARTFest’s biggest takeaway is that it empowers our library team and ITS — who bear the brunt of everybody’s complaining when IT equipment doesn’t work. SMARTFest is the one day a year when library and ITS expertise can shine. It’s just incredible to see everyone so proud of what they do. Last year, I had people attend from other universities to see how we do it. Even the vice provost of administration is a big fan. After it’s over, my staff and ITS toast each other and celebrate with champagne, stories, and a will to make this event even bigger and better the following year.