The data service portal Aila facilitates access to data and serves as the archive's tool for data dissemination. One of its key features is the ability to control access to datasets according to the conditions set by data producers. In addition to descriptive DDI2 metadata, it facilitates the creation of structural and long-term preservation metadata. Aila and Metka define the software platform used when building new tools and services at the archive.
All metadata are repurposed from a single authoritative source. Finally, the poster introduces a remote entitlement management concept aimed at managing the workflow for granting access to datasets that are available for download only after an explicit permit from the depositor, principal investigator, research group, or IRB.
The early years of Islandora were supported by a multi-year Atlantic Innovation Fund grant, which provided funding for developers, project management, interns, travel, and all of the other bits and pieces that get a software development project off the ground. During that time the Islandora community grew and flourished, but long-term sustainability needed clarity. When that grant was slated to come to an end, we needed to find a new way to help sustain the project.
The Islandora Foundation was born from that need. In the two years since the formation of the Islandora Foundation was announced at Open Repositories, the project has welcomed more than a dozen supporting institutions, hosted Islandora Camps all over the world, and put out two fully community-driven software releases, with dozens of new modules built and contributed by the Islandora community.
The Islandora project has made the journey from a grant-funded project incubated in a university library to a non-profit that exists in symbiosis with the community it serves. This journey, and its place in the larger community of digital repositories and the institutions that use and support them, is the subject of this poster, which details the nine-year history of the Islandora Foundation.

Funding agencies and institutions are increasingly asking researchers to better manage and share their digital research data. Yet meeting those needs should not be the only consideration in the design and implementation of open repositories for data.
What do researchers expect to get out of this process?
How can we design our data repositories to best fit research needs and expectations, as well as those of the organization?

This institution-focused repository is designed for researchers to self-deposit their research data. The data then undergo a workflow of curatorial review, metadata enhancement, and digital preservation by a team of data curators in the library. The result is well-documented research data that are broadly disseminated through an openly accessible discovery interface (DSpace 4).
Before marketing our service to campus, we performed three usability tests with our target population: academic research faculty with data they must share publicly. The results of our user testing revealed a handful of configuration and interface design changes that would streamline and enhance our service.

Zenodo, a CERN-operated research data repository for the long tail of science, launched its GitHub integration a little over a year ago, enabling researchers to easily preserve their research software and make it citable. This poster will give an overview of the uploaded software packages in terms of programming languages, subjects, number of contributors, countries, etc.
We will further explore curation of research software and its integration into existing subject-specific repositories.

Digital Preservation the Hard Way: recovering from an accidental deletion, with just a database snapshot and a backup tape. An awesome tool set for digital preservation is available to all institutions who use DSpace. This is not a story of how we used this tool set. This is a story of how we recovered from an accidental deletion of a significant number of items, collections, and communities--an entire campus's ETDs: missing items, missing bitstreams. In other words: here's how to do it the wrong way, but you'd really be better off doing things the right way.
This poster should be sufficient to serve as a guide for actually recovering from an accidental deletion of materials in DSpace, if one only has a database snapshot and a tape backup of a DSpace assetstore. It will also serve as a reminder of the digital preservation tool set available for DSpace, as well as why these tools exist.
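As an illustration of why the database snapshot matters in such a recovery: in DSpace's traditional assetstore, a bitstream's on-disk location is derived from the `internal_id` column in the database, so recovering that table is what lets you map the files on the restored assetstore back to their items. A minimal sketch of the path reconstruction (the assetstore root is an assumed local path):

```python
def assetstore_path(internal_id: str, root: str = "/dspace/assetstore") -> str:
    """Reconstruct a bitstream's on-disk path from its internal_id.

    DSpace's traditional assetstore nests each bitstream under three
    two-digit directories taken from the first six digits of the
    internal_id, with the full internal_id as the file name.
    """
    subdirs = [internal_id[i:i + 2] for i in range(0, 6, 2)]
    return "/".join([root] + subdirs + [internal_id])

# A bitstream row recovered from the snapshot can now be matched
# against files restored from the tape backup.
path = assetstore_path("11423211242176299363325612354322")
```

With the deleted rows re-inserted from the snapshot, each restored file can be verified against its expected path before the items are re-registered.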
Many libraries provide web services such as library catalogs and institutional repositories. Each web service contains author profiles, which are not necessarily accessible from another service; sometimes only the author name is shared, which may be ambiguous. In this project, I developed an add-on module for Next-L Enju that enables synchronization of profile information among these three components through ORCID, so that librarians can create a correct link from our library catalog to the institutional repository.
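A sketch of how such a module might talk to ORCID: validate the iD's check character locally (the ISO 7064 mod 11-2 scheme described in ORCID's identifier documentation), then pull the public record from the v3.0 public API. The function names are illustrative, not part of Next-L Enju:

```python
import urllib.request

ORCID_PUBLIC_API = "https://pub.orcid.org/v3.0"  # public, read-only endpoint

def orcid_checksum_ok(orcid_id: str) -> bool:
    """Validate an ORCID iD's final check character (ISO 7064 mod 11-2)."""
    digits = orcid_id.replace("-", "")
    total = 0
    for ch in digits[:-1]:
        total = (total + int(ch)) * 2
    check = (12 - total % 11) % 11
    expected = "X" if check == 10 else str(check)
    return digits[-1] == expected

def fetch_record(orcid_id: str) -> bytes:
    """Fetch a researcher's public ORCID record as JSON (network call)."""
    req = urllib.request.Request(
        f"{ORCID_PUBLIC_API}/{orcid_id}/record",
        headers={"Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

Validating the check character before the API call catches mistyped iDs early, so the catalog never links to a record that cannot exist.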
In this poster, I would like to introduce a case study of its workflow and implementation.

Cerberus was designed to store the important digital assets created as part of the mission of the university, including scholarly, administrative, and archival objects, but we needed a way to easily promote the scholarly content (research publications, presentations, datasets, and theses and dissertations). We were able to highlight the scholarly content by introducing the notion of communities, which we used to create relationships between collections, users, and files.
The community structure has not only neatly organized repository content according to the existing Northeastern structure; it has also made it easier for the system to leverage the relationships between objects to enhance the discoverability of scholarly content in the repository.

Because of the diversity of the public-facing distribution and repository systems available today, and anticipating the development of "The Next Great Repository," this new system needed to be repository- and distribution-system agnostic. WVU Libraries wanted the repository managers to be able to develop new, and modify existing, metadata entry forms with no, or minimal, support from systems.
This solution should be able to gracefully handle the changing understanding of what today's researchers' metadata requirements are. Lastly, this system needed to be able to convert digital objects from their archival format to their web presentation formats (resize images, combine TIFFs into single PDFs, apply watermarks, etc.). MFCS provides a drag-and-drop form creation platform, as well as a robust data entry system that provides all the needed tools for digital collection management.
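The archival-to-presentation conversion can be thought of as a dispatch from file type to an ordered list of derivative steps. A hypothetical sketch (the step names and mapping are illustrative only, not MFCS's actual pipeline):

```python
from pathlib import Path

# Hypothetical mapping from archival file extension to the ordered
# derivative steps a conversion pipeline would run for that format.
DERIVATIVE_STEPS = {
    ".tif": ["resize", "combine_to_pdf", "watermark"],
    ".tiff": ["resize", "combine_to_pdf", "watermark"],
    ".jpg": ["resize", "watermark"],
    ".pdf": ["watermark"],
}

def plan_conversion(filename: str) -> list[str]:
    """Return the ordered derivative steps for one archival file;
    unknown formats get no conversion."""
    ext = Path(filename).suffix.lower()
    return DERIVATIVE_STEPS.get(ext, [])
```

Keeping the mapping in data rather than code means repository managers could adjust conversion behavior per collection without developer support, in the same spirit as the form builder.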
The API provides easy-to-use methods for batch migrating data into, and exporting data out of, the archival system for use in other repository systems (Hydra, DLXS, etc.).

The history of the Open University UK institutional repository, Open Research Online, is one of changing requirements as defined by its research community, institutional administrators, and external HE policy.
How the repository has responded to these changes has ensured its success. However, how we manage the potentially competing requirements of compliance monitoring and Open Access dissemination will determine the future of the repository.

The distributed submission policies of many repositories make standardizing metadata input very difficult.
Out of that 50, two full-time staff members have management responsibilities for the repository. We are taking a pragmatic approach to the issue of limited clean-up capacity by transforming our training process. Training focuses on clearly communicating repository-wide metadata standards, collaboratively creating collection-specific metadata guidelines as needed, and providing detailed input guidelines for each Dublin Core (DC) metadata field.
We are working one-on-one with student workers to familiarize them with the new guidelines and are communicating with repository submitters via listservs and in-person meetings. The new guidelines were rolled out recently, and we expect to see a decrease in the number of records requiring editing. We will present examples from our new guidelines and suggestions for successful communication methods with stakeholders, and provide information regarding the incidence of errors since implementing the new training.

Using these as a development test bed, our project demonstrates how multiple repositories of diverse resources can exchange and connect related information via complementary workflows and metadata sharing.
Our poster maps out how we are building cross-links between our data and scholarship repositories: on the one hand, establishing relationships between resources upon submission by researchers; on the other, establishing technical connections between repositories on which to build future interoperability.

Obtaining metadata and content for your repository can be challenging. Wait until after publication, and you can likely harvest the metadata - but then you may not be able to get the content. Authors have the manuscript to hand when they get notification of acceptance for publication - but then the metadata has to be manually entered, and they may not have all of it, requiring that it be updated later.
This poster shows a new capture process and workflow that encourages authors to deposit their manuscript when it is accepted for publication, then automatically combines it with harvested metadata after publication to complete the repository record.

As open linked data gains traction, vastly more information becomes available and discoverable online. But the era is over where a profile service can be built from scratch, with any expectation of completeness, much less staying current over time.
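The post-publication metadata harvest in a capture workflow like the one above could, for example, look the accepted manuscript's DOI up in the Crossref REST API once the article is out. Crossref is an assumed source here; the poster does not name one:

```python
import json
import urllib.parse
import urllib.request

CROSSREF_API = "https://api.crossref.org/works"

def metadata_url(doi: str) -> str:
    """Build the Crossref works URL for a DOI (the DOI is
    percent-encoded so its slash survives as a path segment)."""
    return f"{CROSSREF_API}/{urllib.parse.quote(doi, safe='')}"

def harvest_metadata(doi: str) -> dict:
    """Fetch post-publication metadata to merge into the repository
    record alongside the author-deposited manuscript (network call)."""
    with urllib.request.urlopen(metadata_url(doi)) as resp:
        return json.load(resp)["message"]
```

The harvested record supplies the fields authors rarely have at acceptance time (volume, issue, pages, final publication date), so the author's deposit step stays minimal.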
How can a service that needs this information harvest what it needs from the Internet, and use it in a way that can be trusted by all?

System for cross-organizational big data analysis of Japanese institutional repositories. As one of its applications, the IRDB content analysis system provides statistical information by content type and format.
It allows users to compare cross-organizational data. The system can be divided into two major components. One is the log repository, which collects filtered access logs from Japanese IRs as a data source; the filtering has to be done on each IR server, since it includes the elimination of private data. The other component is the user interface. The system enables users, including repository managers, to analyze both content and access logs to that content across Japanese IRs.
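Because the filtering must run on each IR server before logs are shared, a typical step is to replace the client IP with a salted hash, so the central log repository never receives personal data but can still count distinct visitors. A minimal sketch (it assumes combined-log-format lines; this is not the project's actual filter):

```python
import hashlib

def anonymize_line(line: str, salt: str) -> str:
    """Replace the client IP (first field of a combined-format access
    log line) with a salted hash before the line leaves the IR server.

    The same IP with the same salt always hashes to the same token,
    so distinct-visitor counts remain possible downstream.
    """
    ip, _, rest = line.partition(" ")
    digest = hashlib.sha256((salt + ip).encode()).hexdigest()[:16]
    return f"{digest} {rest}"
```

Each IR would keep its salt local; rotating it periodically prevents tokens from being linkable across long time spans.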
New Content in Digital Repositories: The Changing Research Landscape, by Natasha Simons and Joanna Richardson. A volume in the Chandos Information Professional Series.
Susan W. Parham, Patricia Hswe, Amanda L. Westra. The last decade has seen a dramatic increase in calls for greater accessibility to research results and the datasets underlying them. As research institutions endeavor to meet these demands, repositories are emerging as a potential solution for sharing and publishing research data. To develop new curation services, repository managers and developers need to understand how researchers plan to manage, share, and archive their data.
The information gleaned from these evaluations can be leveraged to improve research data management services and infrastructure, from data management training to data curation repositories. This poster will introduce the analytic rubric developed through a collaboration among five U.S. institutions. The focus will be on examining the intentions of researchers toward data sharing and archiving, as expressed through a preliminary review of DMPs across these institutions.
The aim of this repository model is to facilitate the use of 3D models and fabrication in the classroom at multiple levels of the curriculum. This project addresses the lack of cross-over between existing learning object repositories and 3D object repositories, and provides a guiding model for how repository systems and projects can facilitate bringing 3D modeling and fabrication into the open education community. Information packages served from our repository contain a renderable or printable 3D model or set of models, along with a set of curricular elements that help contextualize the model(s) in the learning environment.
We discuss the inception of the repository project, the results of a number of pilot projects, and our plans for future development.

The Georgetown University Library has developed an application named the FileAnalyzer to facilitate the ingest of large collections of content into DSpace. The application can inventory a collection of files to be ingested and prepare ingest folders from a metadata spreadsheet.
Once Georgetown University adopted these workflows, the backlog of collections to be ingested was eliminated. This workshop will demonstrate the DSpace ingest workflows that are supported by the FileAnalyzer. Participants will learn how to install the FileAnalyzer and run several of the tasks that can be useful for DSpace collection management. Using demo.
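Prepared ingest folders for DSpace follow the Simple Archive Format: one directory per item containing a `dublin_core.xml` and a `contents` file listing the bitstreams. A sketch of generating such folders from a metadata spreadsheet (the CSV column names are an assumed layout, not FileAnalyzer's actual schema):

```python
import csv
from pathlib import Path
from xml.sax.saxutils import escape

def dublin_core_xml(title: str, author: str) -> str:
    """Render a minimal dublin_core.xml body for one item."""
    return (
        "<dublin_core>\n"
        f'  <dcvalue element="title" qualifier="none">{escape(title)}</dcvalue>\n'
        f'  <dcvalue element="contributor" qualifier="author">{escape(author)}</dcvalue>\n'
        "</dublin_core>\n"
    )

def build_archive(spreadsheet: Path, out_root: Path) -> None:
    """Turn a CSV with title, author, filename columns into numbered
    Simple Archive Format item folders ready for `dspace import`."""
    with spreadsheet.open(newline="", encoding="utf-8") as fh:
        for n, row in enumerate(csv.DictReader(fh), start=1):
            item_dir = out_root / f"item_{n:03d}"
            item_dir.mkdir(parents=True, exist_ok=True)
            (item_dir / "dublin_core.xml").write_text(
                dublin_core_xml(row["title"], row["author"]), encoding="utf-8")
            (item_dir / "contents").write_text(row["filename"] + "\n",
                                               encoding="utf-8")
```

Generating the folders from the spreadsheet up front is what lets a batch tool turn a backlog of collections into a single `dspace import` run.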
Lastly, the session will discuss the framework for modifying the FileAnalyzer to implement institution-specific customizations.

Symplectic has been integrating its flagship research management system, Elements, with institutional repositories.