Monday, December 6, 2010

Reflection on the semester

Now that the class is starting to wrap up with student presentations and a quiz this week, I feel like this is a good time to end my blog. This was my very first blog, hopefully not my last, and I feel like it was a good experience. I liked having a very focused, topic-driven post each week where I was also allowed to be a little more informal and express my likes, dislikes, and the concepts I had a hard time grasping. I typically wrote my posts (or started to) every Sunday or Monday, which gave me time to think about the class and the readings from a few days earlier. Overall, I think my blog posts accurately reflect how I connected with each unit. Some weeks went better than others, and there were definitely some weeks when I did not want to post at all.

My favorite blog post, and also my favorite time during the semester, was on 11/12/10 ("A little more with SFX"). Following my initial read-through of the syllabus, I expected to enjoy the first half of the semester because the second half consisted of concepts that were brand new to me and "scary" because they pertained to the technology side of ER management. However, I think this post shows a turning point in my attitude towards the link resolver, DOI, and OpenURL technology in libraries. I automatically assumed that I would have a difficult time understanding the material, but after I forced myself to present on Find It (about which I previously knew nothing), I gained a deeper understanding than I ever thought I would. Between my notes from class around this time (first two weeks of November) and my blog posts, I feel like I could stand in front of a group of people and teach these concepts. It's interesting how the topic I feared most ended up being my favorite.

My overall thoughts about ERM have drastically changed. The first half of the semester was filled with some personal challenges which slightly distracted me from my school work, but I also didn't feel connected to the material. I also went into this class with low confidence in my ability to understand some of the technology that ER librarians work with on a daily basis. I now know that working with electronic resources is a necessary skill for future librarians, and there are so many different facets to it (just thinking back on all of our presenters) that there is bound to be a favorite work area. Some librarians may like the licensing and negotiating (the business side) of electronic resource management, whereas others may be more drawn to the various technologies that are used to keep everything running smoothly, like management systems and link resolvers.

Even though I struggled through parts of the class, the work (reading, presenting, writing, etc) made me feel like I have really added to my skills as a librarian and my ability to understand and work in a rapidly growing section of the industry.

Thursday, December 2, 2010

Reading Notes for Unit 13: ERM Librarian

I want to get a few of my reading notes down before I go to class tomorrow. An electronic resources librarian often shares some of the same tasks as his/her colleagues (reference work, bibliographic instruction), but must treat electronic resources management with more business sense and build relationships with vendors outside of the library setting.

Marian Through the Looking Glass...
- This is a newer position for the library profession meaning that directors/boards have to carefully design the ER position to make sure that all aspects of it are accomplished. This can be difficult because many librarians may not know much about the ER realm of librarianship.
- In a matter of five years, ER spending went from a reported 8.85% to 22.01%
- Description of the ER management position: "an increasing number of...position announcements, a greater diversity of functional areas involved, a wider variety of types of institutions placing advertisements, and the emergence of distinctions between 'electronic' and 'digital' positions in terms of job responsibilities."
- Common position duties: purchase management; renewals and cancellations; pricing negotiations; AND covering technical problems (additionally, ER librarians will work with link-resolver software, federated searching software, and managing usage data).
- All of the above areas of the position point to a tech services librarian instead of a service-focused librarian. However, the job of an ER librarian really differs from library to library depending on their needs and current staff.
- There still aren't many training opportunities for ER librarians. We can take a class like Electronic Resource Management, but otherwise, it is up to librarians to look for learning opportunities and take the initiative to learn the material themselves.

How to Survive as a New Serialist
- This was more of an article that I would keep around for good references (websites, webinars, blogs, workshops, etc).
- One interesting suggestion: try to look at the job of the ER librarian through the ILS - what already exists and what will be needed for training? How can you improve the current system? What is going to be needed to make it run smoothly?

Process Mapping for Electronic Resources
- Because the ERM landscape rapidly changed in the last decade (and still is changing) many libraries are at different points in how they choose to approach managing their electronic resources and are attempting to define the skills and role taken up by the ER librarian.
- Process Mapping:
- "synonymous with business process reengineering (BPR)"
- rooted in Total Quality Management (TQM)
- process maps help an organization better visualize workflow and the functions of a particular process. Moreover, they can help employees understand what areas need improvement or need to be changed all together.
- Even though libraries technically aren't businesses, they are organizations with many areas of work with budget constraints. Especially in the ERM realm, librarians need to take a business approach. It's an area of librarianship where a service oriented field merges with the corporate world of vendors and publishers.
- Process maps not only show what areas need change, but they also show what sections of work are already working well for the organization.
- Communication is key with process mapping. Once the map has been created and analyzed, effective communication carries the project to the next step and makes the needed changes happen.
- "They [libraries] are increasingly turning to proven business practices that allow them to evaluate and design new methods of delivery of resources and services." (103)

Friday, November 19, 2010

What is a handle system anyway?

Today's class clarified some of the concepts we've been learning about this semester and also answered some questions that I've actually had since the beginning of library school (yikes!). Like this one: what is a DOI and why would people confuse me even more by referring to it as a "dewey" (pronounced the same way)?

We covered 4 questions/learning objectives, and I'm going to lay them out here:

1. What's the issue with regular links? The issue is that regular old links identify locations, but not the actual item. The problem is that locations of information change and they may move servers, which breaks the link. So, it looks like we need a solution...

2. What does the term "local control" mean and why is it important? This is in relation to OpenURL and DOIs, which I will explain in just a minute. Local control is important because it gives the library (the agency that PAYS for subscriptions and access) the ability to control where a link takes the user. A library maintains its own link resolver server - it wants to point patrons to the library's purchased resources, NOT the publisher's website, where the user will be prompted for a credit card number before proceeding.

3. What is a "handle system?" A handle system is an index that tracks the location of a certain item. If the location changes, it's updated in the index. When the user clicks on a link (or item), their "request" will run through the index and will then be directed to the properly updated location of what they're looking for. Some URLs will actually contain the word "handle" in them.
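To make the idea concrete, here is a toy sketch of a handle index in Python. All of the identifiers and URLs are invented for illustration; a real handle system is a distributed service, not a single dictionary, but the lookup-and-update logic is the same.

```python
# Toy handle index: the handle (identifier) never changes, while the
# location it points to can be updated behind the scenes.
handle_index = {
    "10.1000/example.123": "http://old-server.example.org/article123",
}

def resolve(handle):
    """Follow a handle through the index to its current location."""
    return handle_index[handle]

def update_location(handle, new_url):
    """When content moves servers, only the index entry changes."""
    handle_index[handle] = new_url

# The publisher moves the article; a plain URL bookmark would now be
# broken, but the handle still resolves to the right place.
update_location("10.1000/example.123", "http://new-server.example.org/article123")
```

The point of the sketch: the user's link (the handle) stays stable, and only the index entry is edited when content moves.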

4. DOI, OpenURL - what is the difference?

All of these fit under the "handle system" umbrella.
DOIs (pronounced like Dewey): an identifier assigned to an object by the publisher, made up of a prefix and a suffix (referring to the publisher and the item number). The publisher determines whether this goes to the title level, chapter level, or I suppose, even the page level. DOIs will always point back to the publisher page, so it's hard to have local control with this type of identifier. They're great for citations because they point to the authoritative resource.

OpenURL: the library's personal link resolver identifier. This will point to resources only the library subscribes to or their ILL page. Many publishers will also assign an OpenURL to a resource because they are often used with link resolver software, like SFX. These are often much longer and have detailed information like author's last name, page numbers, etc.
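A quick sketch of the structural difference between the two. The DOI and the resolver address below are made up for illustration; the OpenURL here just shows the general idea of citation metadata packed into query parameters, not the exact syntax of the standard.

```python
from urllib.parse import urlencode

def split_doi(doi):
    """A DOI is prefix/suffix: the prefix names the registrant
    (publisher), the suffix names the item. Everything up to the
    first slash is the prefix."""
    prefix, suffix = doi.split("/", 1)
    return prefix, suffix

# Hypothetical DOI: publisher prefix "10.1000", item suffix after the slash.
parts = split_doi("10.1000/jsmith.2010.42")

def build_openurl(resolver_base, **citation):
    """An OpenURL carries the citation metadata itself as query
    parameters, aimed at the library's own link resolver."""
    return resolver_base + "?" + urlencode(citation)

# Invented resolver address and citation details.
url = build_openurl(
    "http://resolver.library.example.edu/findit",
    aulast="Smith", title="Serials Review", volume="36", spage="123",
)
```

Notice how the DOI is opaque (it only identifies), while the OpenURL is self-describing (author, title, pages), which is why OpenURLs tend to be so much longer.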

Reading notes on Friday's material:

This is a slight shift from DOI's and OpenURLs over to some readings on eBooks:

"Comparison Points and Decision Points" (outlines vendors for audiobooks)
One of the biggest points made about all e-book formats is that the user does not want to fiddle with technology or feel like they have to learn it. The user wants to listen to or read their good book, not figure out how to make it work. In this article, we look at five different audiobook vendors: Audible, NetLibrary, OverDrive, Tumble, and Playaway. The author addresses size and quality of a library's ebook collection as an important factor, especially since digital audiobook collections are currently not very large. Audible currently boasts the largest collection, with over 14,100 books. Large collections may also lead to overlapping and duplicate titles within a vendor's collection. While this may not always seem efficient, it also gives the user a choice when selecting a version and voice, and allows them to compare prices.

I found that just like regular libraries, audiobook vendors have to advertise their new purchases (specific listeners may want more books from their favorite author) and continually bring in new materials. Additionally, content is also occasionally removed from collections.

The author mentions that content should be arranged according to the age of the content: "One simple three-way method of slicing up a collection is into frontlist, backlist, and public-domain titles..." (17) (recently published, older but still protected by copyright, works that are out of copyright and into the public domain). Peters continues on to analyze ebook collections by:

- subject and genre strengths: publisher supplied genres and subject headings may result in a different form of categorization.
- content characteristics: this may include abridged vs. unabridged
- narrators: with human narrated audiobooks, many listeners may have developed a preference for a favorite voice. They are typically narrated by actors/personalities; authors; and professional narrators.
- sound quality: not usually a big deal because repeated use does not damage the sound.
- languages other than English: it's important for a vendor to have works in other languages (Spanish, French, German, Italian, etc).
- purchase and lease options: vendors may offer a purchase plan or a lease plan, and some libraries like to have the ability to swap out underperforming titles
- cost component: different for each vendor, library must consider what will work best for their budget
- licensing and agreement terms
- key features and accessibility issues
- a few features that a library should consider are: placeholding, bookmarking, skip back, sampling, nonlinear navigation

I didn't list all of the key points librarians should consider, but managing and selecting an ebook vendor will be a major task for an ER librarian!

"An Overview of Digital Audiobooks for Libraries"

This article, also written by Peters, breaks down the major services of Audible, OverDrive, NetLibrary, and TumbleTalkingBooks into six categories and provides his recommendations:

1. Usage model: all are either single user or concurrent users
2. file format: MP3; Windows Media Audio; Flash
3. number of sound qualities: 4; 1; 2
4. supported devices: various vendors support many devices (as long as they are file format compliant)
5. ownership or subscription: Tumble has the best pricing model (subscribe: select and swap)
6. size of collection: ranges from 100 titles (Tumble) to 23,000 titles (Audible)

The moral of this article: each vendor comes with its pros and cons and no two vendors are alike. It really comes down to what your library needs and which vendor has the most to offer.

Friday, November 12, 2010

A little more with SFX...

I can honestly say that if I hadn't done a presentation last week on Find It, I would have been so confused in class this morning. I'm so relieved that I put in the initial work, not just to get my presentation completed, but also because I gave myself a solid background understanding of OpenURLs and link resolvers. Judith Louer, from CTS, came in today to show our class the workings behind SFX and what she sees every day at work. After doing my own research about it, I knew it was complicated and a huge job, and listening to her speak confirmed this. Once again, Find It works by combining the forces of our library (but I guess it's more than a library...it's a HUGE library system for a research institute with 42,000 students), all of our vendors (according to Sue Detinger, we work with over 2000), and Ex Libris, who "manages" the software. This adds up to a lot of chaos. Today, I learned that the bibliographic information provided by the vendors for each article/journal doesn't really match up with the library's cataloging practices. Ugh. One more thing to complicate Find It - making sure our data is adjusted to fit our format. It was nice to hear from a person who contributes to keeping it all running.

The readings for this week were great - they were easy to understand (which I sometimes need with tech information) and were also interesting. I won't write too much about them, but they confirmed/solidified things I already know/have heard of in other classes. I especially liked learning about CrossRef and how it works due to a collaborative effort between several publishers.

I'm still a little bit shaky on the difference between OpenURLs (and the information they carry) and DOIs. I'll have more on that next week...

Notes on the readings: (there is some overlap here with my other readings and entries, so these aren't comprehensive reading notes, just the most important points)

"E-Journal Management Tools" (Jeff Weddle and Jill Grogg)
This article summarizes several different management tools for e-journals. Because of the explosion of e-journals in the past decade, learning how to work with and organize these resources has become a major part of librarianship (and the job of the ER librarian(s)). I will briefly summarize each tool.

1. A-Z lists: This is one of the first ways that libraries decided to manage their journals. However, now that one journal may have several different points of access, the management of these lists is too costly and complicated. However, many vendors now provide the lists to the library and the journals can now be categorized differently, like by subject. Depending on the size of the A-Z lists, libraries may want to outsource this management work to the vendor, or they may be able to manage it on their own.

2. OpenURLs and Link Resolvers: Because of my presentation, I've already spent a lot of time covering this aspect of e-resources, but want to include the two essential elements mentioned in the article. In order for the framework to function, these elements must be in place:
a. "localized control (often via the knowledge base)"
b. "standardized transport of metadata, specifically the metadata which describes the users' desired information object."

Note: localized control is covered in next week's blog.

3. DOI and CrossRef: DOI is a persistent object identifier, not a location identifier. CrossRef is a database that works to connect DOIs with their URL (they also work with openURLs). However, the DOI is assigned by the publisher and will go back to the publisher's website unless a link resolver is used to direct it elsewhere.

4. Link Resolvers:
a. LinkSource (EBSCO)
b. SFX (Ex Libris: used by UW-Madison)
c. OL2 (Fretwell-Downing)
d. Article Linker (Serials Solutions/ProQuest)

5. Federated Searching: Most students don't know what kind of search they're running when using a "subject based database list" or the "articles tab" on UW library's website. A federated search allows the user to search across multiple databases, but will only pull up about 30 citations (or some other designated amount) from each database. While the federated search features may be limiting, the user is able to search multiple databases with one click.
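The "limited citations per database" behavior described above can be sketched in a few lines. The database names and canned results here are entirely invented; a real federated search engine sends live queries over protocols like Z39.50, but the merge-and-cap logic looks roughly like this.

```python
# Toy federated search: pull results from several "databases" at once,
# but cap the number of citations taken from each source (the article
# mentions a designated amount, such as 30).
DATABASES = {
    "DatabaseA": ["A-article-%d" % i for i in range(100)],  # 100 hits
    "DatabaseB": ["B-article-%d" % i for i in range(10)],   # 10 hits
}

def federated_search(databases, per_db_limit=30):
    """Merge results across databases, taking at most
    per_db_limit citations from each one."""
    results = []
    for name, hits in databases.items():
        results.extend(hits[:per_db_limit])
    return results

hits = federated_search(DATABASES)
# 30 capped hits from DatabaseA plus all 10 from DatabaseB.
```

This is exactly the trade-off the article notes: one click searches everything, but the per-source cap means the user never sees the full result set from any single database.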

"CrossRef" (Amy E. Brand)

This article explains the history and workings behind CrossRef, a nonprofit organization created by publishers to run a cross-publisher citation linking system and act as an official DOI registration agency. Here are some points I would like to highlight:
- CrossRef adds between 2 and 3 million DOI records per year and in the future will include: patents, technical reports, gov docs, datasets, and images.
- A DOI consists of a prefix (the content owner, like CrossRef) and a suffix (item information provided by the publisher - may include year, journal acronym, etc).
- DOIs are very reliable because they are attached to an item NOT a location because locations constantly change.
- A CrossRef shortcoming: it does not take the researcher or the institution into account
- The publishers foot the bill for the CrossRef service, and it's supposed to be invisible to the user. However, it seems like it's the most worthwhile for an institution's library to work with CrossRef AND maintain their own local control.

The Process:
1. Publisher exports metadata to CrossRef.
2. DOI is requested based on metadata.
3. System exports articles with their DOI attached.
4. Users retrieve articles through the assigned DOI.
How do OpenURLs and DOI's work together (with Crossref)?
"The DOI and OpenURL work together in several ways. First the DOI directory itself, where link resolution occurs in the CorssRef platform, is OpenURL enabled. This means that it can recognize a user with access to a local resolver. When such a user clicks on a DOI, the CrossRef system redirects that DOI back to the user's local resolver , and it allows the DOI to be used as a key to pull metadata out of the CrossRef database, metadata that is needed to create the OpenRUL targeting the local resolver. As a result, the institutional user clicking on a DOI is directed to appropriate resources." I typically don't include quotes this long; however, this paragraph made the relationship between DOIs and OpenURLs click for me.

"On the Road to the OpenURL"
Back in 1999, several groups met to discuss reference linking and its challenges. This group included: National Information Standards Organization (NISO); Digital Library Federation (DLF); National Federation of Abstracting and Information Services; and the Society for Scholarly Publishing. They came up with three components of a successful reference linking system:
1. identifiers for works
2. "a mechanism for discovering the identifier from a citation"
3. the ability to take the reader/researcher from the identifier to a specific item

The answer (or one of)....an OpenURL.
- internal linking vs. external linking: internal means staying within one system, but this is often too confining. External linking allows the user to move between their current system, ILL, doc delivery services, online bookstores, and library catalogs.
- OpenURLs provide the local control that libraries need (whereas DOIs are more for publisher websites).
- an OpenURL identifies an item through its metadata, not a copy of the item. Example: when we work with Find It, the user may be directed to more than one copy of the item from different vendors. The link resolver matches the item up with the appropriately matching metadata.
"Beyond OpenURL: Technologies for Linking Library Resources"
This article provides an overview of linking tools used in libraries and covers where we have gone since moving away from static URLs and where we need to go in the future.
- presently working with OpenURLs and DOIs. This started about 10 years ago with the CrossRef initiative.
- DOIs don't rely on a knowledge base to complete the link. They go from request to DOI software to the content provider (most likely the publisher).
- dynamic linking: the link resolver works with the request and is able to point the user to additional materials like dictionaries and subject encyclopedias.
- conceptual and associative linking: this is the "more like this" linking (commonly seen on Amazon)
Additional Web 2.0/Library 2.0 tools: blogs, wikis, social network (College library on Facebook), chat (which I love), and RSS feeds. I also read about blikis (blog/wiki) and while they seem like a good concept, the name seems a little ridiculous. Maybe I'll just have to try one out.

Saturday, November 6, 2010

Data Standards

This blog post is going to be a little bit different - just reading notes this week. I gave a presentation on Find It on Friday and we worked on an activity involving COUNTER. So, here are the things we read to prepare ourselves for class:

"Library Standards and E-Resource Management: A Survey of Current Initiatives and Standards Efforts" by Oliver Pesch


- the e-journal life cycle! acquire - provide access (I think Find It fits under this part) - administer - support - evaluate - renew: remember, each step in the e-journal life cycle takes a lot of time and work, and it's an ongoing process.
- requires working with multiple vendors and different systems
- so, we use the help of:
NISO (National Information Standards Organization)
EDItEUR - focused on international standards for e-commerce of books and journal subscriptions
COUNTER - usage statistics!
DLF - digital library federation
ICEDIS - works at the publisher and vendor level to develop a set of standards
UKSG (United Kingdom Serials Group)

Just a few notes for "Standards for the Management of Electronic Resources" (Yue)

- Standards = interoperability, efficiency, and quality!
- the largest area of growth for libraries has been in e-journals, and without an initial set of standards at the onset of this area, different formats and ways of managing serials emerged.
- ONIX as the first electronic assessment tool for serials
- MARC and ER = square peg in a round hole. We need XML! Can I quote Steve Paling on this? Yes, "MARC must die!"
- OpenURL - works well for linking to full text. Static URLs don't work in this case because of the fluid nature of the e-journal market. Now we have the openURL resolution system known as SFX where a source is directed to a target by a link resolver.

COUNTER Current developments and future plans and our COUNTER activity:

- the main goal of libraries/librarians is not to spend their days looking at and finding usage statistics - COUNTER makes this easier for them
- COUNTER report format - requires vendors to provide only reports 'relevant' to their product(s) - most supply only a few report types.
JR1 - # of successful full-text article requests by month and journal
JR1a - the same, for subscription archives
JR2 - turnaways by month and journal (due to simultaneous user limit)
Less common reports:
JR3 (optional) - number of successful item requests and turnaways by month, journal, and page type
JR4 (optional) - total searches run by month and service
JR5 - number of successful full-text article requests by year and journal


Database Reports:
DB1 total number of searches and sessions by month and database
DB2 turnaways by month and database
DB3 total number of searches and sessions by month and service (branded group of online info products)

Consortium Reports
CR1 # of successful full-text article/e-book requests by month
CR2 # of searches by database

Compliance auditing covers:
report format compliance (manual review)
article request counting (test scripts)
database session/search counting (test scripts)

- COUNTER counts don’t account for: automated search filtering (bots, crawlers, LOCKSS, etc.)
- HTML vs. PDF downloads - some services display HTML full text along with the abstract - is this a “download?”
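The JR1-style tally from the notes above can be sketched as a small aggregation. The usage log below is entirely made up; a real COUNTER report is produced by the vendor from server logs (after filtering out bots and the like), but the core counting is just this.

```python
from collections import defaultdict

# Invented raw usage log: one (month, journal) entry per successful
# full-text article request.
usage_log = [
    ("2010-01", "Serials Review"),
    ("2010-01", "Serials Review"),
    ("2010-01", "Library Journal"),
    ("2010-02", "Serials Review"),
]

def jr1_report(log):
    """JR1-style tally: successful full-text article requests
    counted by month and journal."""
    counts = defaultdict(int)
    for month, journal in log:
        counts[(month, journal)] += 1
    return dict(counts)

report = jr1_report(usage_log)
```

This is also where the "is an HTML view a download?" question bites: whatever events you decide to put in the log is what the report counts, which is why the standard has to define a "successful request" so carefully.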

We also covered CORE (cost of resource exchange) and SUSHI (Standard Usage Statistics Harvesting Initiative)

“Library Standards and e-resource management”

- E-Journal lifecycle:

- 1. Acquire: titles, prices, subscriptions, license terms, etc.

- 2. Provide access: cataloging, holdings lists, proxy support, searching and linking

- 3. Administer: use rights and restrictions, holdings, title list changes

- 4. Support: contacts, trouble shooting

- 5. Evaluate: usage data, cost data

- 6. Renew: title lists, business terms, renewal orders, invoices (groups help create standards as management resources)

“Standards for the Management of ER”

- Promote interoperability, efficiency, and quality

- Another way to look at the lifecycle:

- 1. Selection

- 2. Acquisition

- 3. Administration

- 4. Access control

- 5. Assessment

“COUNTER: Current Developments and Future Plans”

- Usage statistics as part of the librarian’s toolkit

- Vendors have a practical standard for usage stats on their major product lines

- Standard Usage Statistics Harvesting Initiative (SUSHI): automated retrieval of the COUNTER usage reports into local systems (via an XML schema). The harvested reports indicate the intensity of use and popularity of a database.

- Journal usage factor: total usage (COUNTER JR1 Data)/total # of articles published online (within a specific date range)
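The usage factor formula is simple enough to show directly. The numbers below are invented purely to illustrate the arithmetic.

```python
def journal_usage_factor(total_usage, articles_published):
    """Journal usage factor = total usage (from COUNTER JR1 data)
    divided by the total number of articles published online
    within the same date range."""
    return total_usage / articles_published

# Hypothetical journal: 12,000 full-text requests against 400
# articles published in the period gives a usage factor of 30.
factor = journal_usage_factor(12000, 400)
```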

- PIRUS: Publisher and Institutional Repository Usage Statistics: an invaluable tool in demonstrating the value of individual publications and entire online collections.



Thursday, November 4, 2010

Electronic Resource Management Systems

Quite honestly, I had a difficult time connecting with the material in this unit until we went to the computer lab to take a look at a management system (ERMes). It was difficult for me to conceptualize, probably because every library works with a different type of electronic resource management system, or works with one that they have created themselves. I can see how deciding on what type of system to use and who will be using it is entirely dependent on the size and the needs of individual libraries. You certainly cannot take a "one-size-fits-all" approach with these systems. During class, we looked at several different systems: EBSCONet ERM Essentials; a homegrown system from Columbia; Innovative Interfaces' Millennium; and Serials Solutions 360.

There are a few benefits and challenges that come with each product. A few of the benefits: ERM Systems improve overall management communication, there is auto population of data, and an ability to update information automatically/quite easily.

If you're working with EBSCONet, there is the added bonus of automatic management of all EBSCO materials. If your library subscribes to many of EBSCO's databases, this might be a good option for you. However, for all other vendor products, ERM data must be entered manually, which is tedious and takes time. Any ERM system that requires manual data entry (and I'm not sure any system we went over doesn't require at least some) leaves room for error. Misspellings, typos, and inconsistent license entries occur because usually a group of people contributes to data entry, not just one person. This can throw an entire section off. This product was recommended for small- to medium-sized university libraries. One library (Kent State, I believe) even said this system would help them record and remember deadlines and contract deals. For them, it was better than working with a Google Calendar and Excel spreadsheets.

Innovative Interfaces' Millennium and Serials Solutions 360 can come as part of a package deal (link resolver software AND an ERM system all in one!). But once again, it depends on the type of library. At the University of Wisconsin, we use Ex Libris' link resolver software (SFX), but their ERM system (Verde) would never work for us due to the breadth and depth of our collection.

Each unit in this class adds one more corporation to the list of commercial services provided for libraries, specifically academic libraries. And with my Find It presentation this week, we'll be able to add yet another! Up to this point, the biggest lesson I have learned is to be patient and thoughtful with purchasing decisions - each product is slightly different and could potentially hinder certain areas of librarians' work.

A few Reading Notes:

Unit 9: Electronic Resource Management Systems

- Title by title management no longer works

- Homegrown systems became popular in the late 1990s/ early 2000s

- Strong focus on data standards, issues related to license expression and usage data

- Most companies now offer an ERM system as part of the ILS (interoperability is one advantage to working with one company’s product, but they are also at the mercy of the company for updates – could end up under supported)

- a few things to watch out for: does the system work well with what you already have?

- Is it reliable and sustainable?

- Cost?

- What is the cost of advancements vs the benefits to the user?

- Refer back to ERM checklist

- Implementation of a system: staffing (who should be involved, how they will be structured within the library, training, etc)

- Communication across library departments is key (especially with managing work flow)

- Questions to ask before selecting a certain system:

- 1. “What elements are important to include for your library?”

- 2. “What elements are repetitive across license agreements and provide little value or are inconsequential in describing?”

- 3. “Who will be responsible for providing consistent interpretation of license language and meaning?”

- 4. “What tools or resources are available to assist individuals in the mapping process?”

Sunday, October 24, 2010

DRM to TPM

I have a pretty general idea of Digital Rights Management, and after learning about TPM (Technological Protection Measures) I have learned that TPM is just one aspect that fits under the DRM umbrella. One of the biggest things we talked about in class was the difference between authentication and authorization, two major components of TPM. The two are different, but work together to provide access. Authentication answers the question of "who" and authorization answers the question of "what may the authorized person do?"

The authentication process works with IP addresses - libraries and publishers keep a list of "ok" IP addresses (those that are approved to access the database). When a person logs in to, say, UW Libraries and their databases, the username and password are matched against a list of approved users, and the user is then assigned an approved IP address. The directory holding that preapproved user information is an LDAP (Lightweight Directory Access Protocol) server. For a place like UW-Madison, thousands of students are added to and taken off this list each year, in addition to thousands of "guest access" passes that can be easily created.
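The IP-based side of authentication boils down to a range check, which Python's standard `ipaddress` module handles directly. The campus range below is an invented example; in practice the library and publisher each maintain their own list of approved ranges.

```python
import ipaddress

# Invented campus IP range standing in for the list of "ok"
# addresses that the library and publisher maintain.
APPROVED_RANGES = [ipaddress.ip_network("144.92.0.0/16")]

def is_authenticated(addr):
    """IP authentication: is this address inside an approved range?
    (Authorization - what the user may then DO - is a separate step.)"""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in APPROVED_RANGES)
```

A login through the library's proxy effectively moves an off-campus user inside an approved range, which is how the username/password step and the IP check fit together.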

The authorization process gets down to specifics. David Millman's article "Authentication and Authorization" addresses this process well: "The authorization decision is, in other words, given someone's identity, what may they do? What information may they see; what may they create or destroy; what may they change?" We've already started to see tighter authorization requirements on our own campus, and they're predicted to be further restricted in coming years. Currently, there are a handful of databases where the user must be in a specific library to gain access, and certain departments already have additional login information required beyond the regular UW-Net ID and password. A question was raised regarding the Wisconsin Institutes for Discovery in which a relationship between the public and private sector is formed to perform biomedical research - what kind of authorization will be required for these researchers? Especially with the private sector researchers? This will not only be an interesting challenge with authorization, but in licensing as well.

Below are my messy reading notes on a few of the readings:

“Every Library’s Nightmare?”

- “TPM are configurations of hardware and software used to control access to, or use of, a digital work by restricting particular uses such as saving or printing.”

- Hard restrictions: secure-container TPM, where there is a physical limitation built into the hardware.

- ISSUES: user dissatisfaction; interoperability problems; blocked archival activities; increased staffing needed to handle these issues.

- Soft restrictions: discourage use, but are not impossible to get around. Now almost accepted as part of e-resources (just the way things are). These change our expectations of vendors.

- Occurs in resources that are 1. Digital and 2. Licensed.

- These restrictions would be impossible on paper copies

- Soft restriction types: 1. Extent of use 2. Restriction by Frustration (often done with awkward chunking) 3. Obfuscation (poorly designed interfaces that do not properly show the capabilities) 4. Interface Omission (tasks only possible through browser or computer commands, left out of the interface) 5. Restriction by Decomposition (breaks down into files, makes it hard to save or e-mail) 6. Restriction by Warning (proclaims limitations and “misuse may result in...” language).

- Hard restriction types: 1. No copying or pasting of text 2. Secure container TPM (ex: only posting low resolution images)

“Technologies Employed to Control Access to or Use of Digital Cultural Collections”

- Digitized works are often harder to control and restrict access to, so that’s where TPM comes in (it sits under the umbrella of DRM – “a broader set of concerns and practices associated with managing rights from both a licensor and a licensee perspective”).

- Usage controls manipulate the resource itself (same as a hard restriction?)

- Libraries are more likely than archives/museums to employ a system that restricts or controls access/use.

- Common systems are: authentication and authorization; IP range restrictions; network based ID systems

“Authentication and Authorization”

- Authentication: validating an assertion of identity (identity code and password)

- Other examples include:

1. Shared secrets (like a shared password) 2. Public key encryption 3. Smart cards (not sure if I’ve ever seen this before, or if this method is even used anymore) 4. Biometric (personal physical characteristics) 5. Digital Signatures

- Authorization: access control or access management, or permitted to perform some kind of operation on a computer system.

- Divided into three categories: 1. “whether a subject may retrieve an object”; 2. “whether a subject may create, change, or destroy an object”; 3. the extent to which the person can change the authorization rules.



ERM article

A good friend of mine from undergrad (who now works in student development at a community college in Tacoma, WA) carefully follows library news. I've tried to convince him to apply to library school, and I feel like I'm getting closer. He occasionally sends me articles about libraries, and the most recent is from the "Chronicle of Higher Education." The article, titled "Library Inc," nicely sums up the pricing issues and commercial interests of vendors for someone who may not know much about them. The author seems to give a hopeless prediction for the future of libraries (which I don't agree with), but it's nevertheless a good find. The comments that follow the article are perhaps the most amusing part, and they point out the controversy that exists among librarians when it comes to ERM.

Tuesday, October 19, 2010

TEACH

Technology, Education, and Copyright Harmonization Act of 2002 - It turns out the TEACH Act is finally a refreshing point in the ongoing copyright mess. Thomas Lipinski from the School of Information Studies at UW-Milwaukee came and spoke to us and provided us with some useful tools. Portions of the TEACH Act are somewhat hard to understand, so it was helpful to have an expert there to wade through the material with the class.

Perhaps the two most important points I immediately picked up on were that the TEACH Act applies only to accredited, non-profit higher education institutions (which makes the Act apply to a very specific area of education), and that it treats different categories of works differently, which means you don't always have to default to fair use. The language within these different categories is somewhat vague, but either way, it provides some clarification in areas that would otherwise default to fair use. It also seems it was written so that the Act complies with the DMCA - there are pieces of copyright law, especially the DMCA, that contradict or clash with other pieces of the law, so it's nice to have something that works in accordance with the DMCA.

TEACH doesn't completely cover fair use and copyright, nor does it always work in an institution's favor - it excludes any material the University already knew was being used illegally, and faculty still need to limit themselves to materials they know they will actually use (and if they then have to resort to fair use, it's much easier to).

Sunday, October 10, 2010

Let's get down to business...

I'm not exactly sure why, but this week's unit on pricing models with our guest speakers from WiLS made e-resource management seem so real. Some of it probably has to do with money, and that may be what I needed to solidify the tricky and complicated task of working with e-resources in the library setting. We started out the class with a few basics, like pricing models (here is my quick and dirty list):

1. Carnegie Classification (tiered) - this is also what WiLS uses most
2. FTE - relevant for large, research universities
3. Usage-based stats: less predictable for the budget; the publisher is also in charge of documenting usage
4. Pay-per-use: see "tokens" below
5. Simultaneous user limit: can be used in combination with other pricing, but it limits how many people can access the database at one time (I think of the databases at the Business School Library)
6. Paying for an article: if it is needed immediately, they will buy it; otherwise it's best to resort to ILL (I tell people that at UW-Madison, you should never have to buy an article)
7. Consortial pricing: not every university will pay the same amount in one consortium - it may depend on how many full-time students they have
8. Tokens (under pay-per-use): a token provides access to a single article for 24 hours; universities get a certain # of "free" tokens per year - these are monitored by the librarians and can be doled out to different departments (tokens are usually an add-on, but could be a pricing model in and of themselves)
9. Backfiles: an accessory to either increase or decrease your price - they are a new way of making money for the publisher
10. Outright purchase: particularly for e-books - the content lives on the publisher's server, so you pay a maintenance fee even after you've purchased it.
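The token model in the list above lends itself to a small accounting sketch. The allotment size and method names here are invented for illustration; real token counts and monitoring are set by the license:

```python
# Hypothetical token accounting for the pay-per-use model: each token buys
# 24-hour access to one article, drawn from a yearly allotment that the
# librarians monitor. The numbers are made up.
class TokenPool:
    def __init__(self, yearly_allotment: int):
        self.remaining = yearly_allotment

    def request_article(self) -> bool:
        """Spend one token for 24-hour article access, if any remain."""
        if self.remaining > 0:
            self.remaining -= 1
            return True
        return False

pool = TokenPool(yearly_allotment=3)
print([pool.request_article() for _ in range(4)])  # [True, True, True, False]
```

The budget unpredictability mentioned under usage-based pricing shows up here too: once the "free" tokens run out, every further request becomes a purchasing decision.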

And, of course, there was my post on the Big Deal, so I'm not sure how much more I need to say about that. Ultimately, pricing comes down to what will work best for the university. No one pricing model acts as a one-size-fits-all option. Initially, the Big Deal may have seemed this way, but it's not for every library.

I really enjoyed our guest speakers from WiLS (Wisconsin Library Services) and how they described themselves as librarians for librarians. Before they came in, I knew absolutely nothing about them (other than the fact that every few days, a WiLS runner comes into the music library to ask me for some obscure microfilm for ILL). It's run exactly like a business in that they have to make things happen (like setting up contracts between libraries and vendors) in order to get paid. WiLS basically takes the pain out of negotiating and pricing for libraries - and they typically get them a 15-20% discount on subscriptions.

I knew that WiLS and ILL were connected, but after listening to Eric (name?) I realized just how they are connected and how the university libraries fit into the ILL + WiLS equation. That, along with their work with cooperative purchasing/consortial licensing and their constant effort to streamline processes while making enough money, makes it obvious to me that libraries would be foolish to do it all on their own. E-resource management, if a library decides to do everything by itself, is a full-time job for multiple people.

Here are a few points on negotiating that cleared up some confusion about the process for me:

1. It's a merging of two worlds - for-profit and non-profit organizations
2. Always answer (at least) two questions: what will the market bear? and what is a fair price based on the needs of the library?
3. A group like WiLS lessens the work for the vendor as well by streamlining the process
4. WiLS is the billing agent (and also handles renewals in subsequent years)

My favorite ILL tidbit (besides learning how they work with universities): 120,000 requests annually! I now understand why he said that ILL is in the top 3 services libraries offer.

Thursday, October 7, 2010

Big Deal or Not to Big Deal?

Just a few thoughts before going into class tomorrow - I want to get the "facts" about the Big Deal laid out before I learn more about this supposed one-size-fits-all e-resource option:

Pros:
1. Overall, per article, per word, per whatever, buying in bulk is cheaper (which includes journals).
2. Price increases are predictable - it's capped, so the library isn't thrown any surprises when it comes time to renew.
3. Smaller institutions who may not have a lot to begin with, could benefit from a deal like this.

Cons: (buzz phrase for cons - wiggle room)
1. It takes over the library's budget - it may be very difficult if the library has other journal subscriptions they want to maintain that aren't part of the Big Deal.
2. Large universities and research institutions already have big collections, so would they really benefit from a predetermined package that may provide some overlap or journals that are substandard compared to what they already have?
3. The Big Deal takes away the library's say in their collection.

No cancellations, no selections, huge part of the budget = no wiggle room

I liked the article on the UC system and their protest against Nature. I wonder what happened with that? I guess I'll find out tomorrow. :)

Wednesday, October 6, 2010

Georgia State (according to the New York Times)

After reading the Georgia State material from last week, I was curious to see how it was portrayed in the media. As usual, I turned to the New York Times (even though I'm sure there are many other articles I could look into) and found their/Katie Hafner's take on the situation:

Click here for the article.

There are three things that stood out to me:

1. This quote sums up fair use/copyright issues so nicely...
"Indeed, as the printed word is put in digital form, holding onto rights seems to many like climbing up the slippery sides of a glass." (makes sense to me, which leads into my second point!)

2. She ends the article with a quote from a non-profit article repository:
"Sometimes a bit of slack can help us all discover a winning formula." I do think that Georgia State was probably a bit too relaxed with their e-reserves, like not having a password. But, I can't help but think that we make some of these issues complicated for ourselves and each other just by the way we approach them.

3. Hafner raises the profit vs. non-profit question - how much of our "publicly" run universities are tied into for-profit deals, especially when we work through so many corporations to make access to information/research possible?

Sunday, October 3, 2010

Unit 5: Part 1 (Georgia and E-Reserves)

Before I jump into our class discussion on the Georgia case, I want to make sure I get down a couple of my own thoughts regarding this week's reading.

1. Fair Use is not clear cut - this is especially visible in the Linda Neyer chapter on Copyright and Fair Use where she explains the three sets of guidelines for e-reserves.
a. classroom guidelines (1976) - mostly deals with copying and one-time use
b. ALA recommendations (1982)
c. CONFU (1994) - the biggest problem here is that the determined CONFU guidelines never became official because CONFU was somewhat of a bust (the ARL Bimonthly Report we read stated: "Libraries and higher education associations rejected the draft CONFU electronic reserves guidelines because they were highly proscriptive and did not provide the necessary flexibility inherent in fair use.")

So now the question is, as a library, which set of guidelines do you follow? Or do you make up your own? In class we talked about fitting somewhere on the liberal-conservative range in terms of how you approach e-reserves. There are certain ways you can push the boundaries a bit without getting in trouble. In Georgia State's case, it's probably a good idea to have a password-protected system even when taking a more liberal approach.

I want to address the four Fair Use points made in the ARL report. Even though they are a bit broad, I find that each point either provides a starting place, or a confusing point to question when looking at the use of a work.
1. "The character of use." Perhaps the most important thing here is that the work on e-reserve does not act as a supplement to a textbook. It should complement what has already been purchased by the student.

2. "The nature of the work to be used." I'm not entirely sure about this one - it seems obvious that all types of materials should be used (fiction, non-fiction, music, film, art, etc.). However, copyright for music is treated differently than for books. Each type of material comes with its own set of complications.

3. "The Amount Used." In 1976 Congress supported the "Agreement on Guidelines for Classroom Copying..." in which they specifically defined the length that could be used through total percentage of a work and words used. While this may be one of the first quantifiable guidelines I've come across when dealing with Fair Use, it seems like wishful thinking to apply length restrictions. What about the use of creative works? How do you measure how much of a work of art to use? Applying an objective measure to a subjective work may not be the best solution.

4. "The effect of the use on the market for or value of the work." The ARL report states that this factor may be less important if the library sticks to the previous three factors. However, I think this entirely depends on the University and if they have a University Press. There may be several factors that contribute to #4 that relate only to the library.

One point someone brought up in class was to remember that "if you don't use it, you lose it" with fair use. I guess I hadn't thought of that until this class. There is a fine balance between knowing when to ask permission and knowing how/when to claim fair use.

More to come on the Carrie Kruse talk!

Saturday, September 25, 2010

Holy cow...licensing!

I realize that I was absent from my blog last week, but no worries, I have plenty to cover today. Last spring's policy course (Information Ethics) was fascinating and overwhelming at the same time - a book a week, a crash course in copyright...overall a lot of information to take in. Now that I've been in e-resource management for four weeks, I realize how specialized and detailed all of the fields in copyright are.

Last week (9/17/10), we focused on the ProCD vs. Zeidenberg case. After reading the case, I was left somewhat confused, but had gathered the main points. It was only after hearing Anuj Desai talk about the case that its truly complicated nature emerged. The buzzword of ProCD vs. Zeidenberg? PREEMPTION. Federal law trumps state law, and state trumps local. But here is where the case gets jumbled (or one of the places): the court held that licensing agreements, or contracts, are not preempted by copyright law, so their terms can restrict what copyright alone would permit. Throw in the concept of "public domain" and we have ourselves a messy case.

Here are a few of my notes from Anuj's talk and my reading:
- OVERALL, there was a conflict between the copyright statute and the licensing agreement
- Zeidenberg's point: this information is in the public domain (in absence of a contract)
- arbitrage possibilities
1. the only people who bring cases are the copyright holders
2. takes fair use out of the game
3. Obviousness of the terms of use (it must be obvious to the user, not just the library)
4. in general, licensing pricing is correlated with use

CONTU (t=technology) and CONFU (fair use)
We spent some additional time, this week and last week, going over CONTU and CONFU and examining the conference goals and outcomes. We covered several different areas in class, but my group focused on the distance learning portion of CONFU. Portions of the document didn't make much sense to me - there is difficulty in defining and creating restrictions (I guess this applies to most things when it comes to information technology). Here is what I noticed in the reading:

1. The rhetoric is designed around non-profit institutions - how does this all change once it gets into the "for-profit" area of education?
2. no asynchronous distance learning
3. must take place in a classroom (which now defeats the reason why many people choose on-line courses)
4. there was an attempt to separate dramatic from non-dramatic, but I could never figure out what that was
5. subsequent performances must have permission

and my favorite point...

6. This should be revisited in 3 - 5 years.

And now onto Licensing Day! Yay!
(quick note: this class has really made me realize that after I graduate and leave UW-Madison, I will most likely never again have access to so much information and so many databases. I'm not sure people realize how much we have until our access has been denied.)

Here are a few of the points I found most interesting about licensing:

- libraries spend an average of 250 hours per year on licensing
- the majority of legal issues that arise in licensing are with corporate and legal libraries (19%).

I don't want to spend too much time going over the minutiae of licensing, but I must admit that I found it quite dry but interesting (can I say that without sounding too contradictory?). The boring points written in fine print ultimately define the level of access. Before looking at a few example licenses, I assumed that all licenses fit a specific layout and order, but not necessarily. I often had to search under various headings to find the most important points, like usage rights, user control, enforcement clauses, etc. Some points were also split up under two different headings. Reading and interpreting these documents really does take time and a certain eye for detail. There will be more on this in my licensing assignment on JSTOR...

Sunday, September 12, 2010

Last spring semester, I wrote a paper on the Obama Hope poster copyright debacle (otherwise known as the Fairey v Associated Press case). I often referred to Jessica Litman's Digital Copyright book as a quick reference for copyright and the DMCA. I'm happy to be returning to the Litman book, because she fleshes out the complicated nature of copyright in a way that is understandable for all of us non-copyright attorneys out there.

During class, we covered the most important points from the first 6 chapters. Here are a few of my favorites/the points that further clarified copyright law:

1. Benefits of copyright law should be split between creators and the public ("If creators can't gain some benefit from their creations, they may not bother to make new works" (15)).
- I like reading/thinking about the progression of copyright law. In the Copyright manual, we read about the Statute of Anne and how it was put in place to ultimately give the government some control over what was published, as a means of censorship. 200 years later, copyright morphed into the American law that supposedly encouraged creativity. Does the incentive provided by copyright still encourage creativity? After looking at its complicated nature, changes, interpretations, etc, it seems as though it may do the opposite. If we look at elements of copyright like the Moral Rights Act, and the overall idea of property rights, I believe copyright law may stifle the creation of new works because it maintains an old work for decades at a time.

2. Four metaphors (ordered from "old way" of copyright to the "new way"):
a. Balance
b. Bargain: Public gives limited exclusive rights in order to encourage production.
c. Incentive: greater protection and increased limitations increase incentives (in other words, every time we expand copyright law, its limitations and exceptions are further narrowed).
d. Property rights: intellectual property ought to be treated like other property rights

3. Additionally, we discussed exactly why copyright law is so complicated - it's due to everything from the lawmakers' approach when creating the laws to the application of the law to the constantly shifting area of technology. Is it possible to create a law (and apply it) for a highly fluid and subjective topic? Well, yes, but it needs a lot of work and everyone needs a lot of help. Thanks (I think) to the attorneys who specialize in copyright.

Sunday, September 5, 2010

Electronic Resources Management and Licensing - The Start

After completing the Information Ethics and Policy course last semester, I felt I had a strong understanding of copyright and policy surrounding information resources. Now, I get the chance to gain a specialized knowledge of policy in the electronic resources management arena.

Here are a few things I want to focus on from the Unit 1 readings:

Electronic Resources and Academic Libraries: 1980 - 2000
- One issue in the 1980s and 1990s (among others): the serials crisis. Libraries struggled with the access vs. ownership debate and purchasing for reasons of "just in time" or "just in case." This library debacle still exists - how many resources do libraries acquire or decide to keep in off-site storage, just in case someone may need it? Are our libraries (including the UW libraries) avoiding or maintaining the notion of the library as a storage space instead of a research facility? I suppose it depends - the School of Business Library maintains a small collection of old reference books, but has mostly done away with outdated print material, opting for current business databases. However, the Mills Music Library frequently struggles with managing the large storage room and cage of ancient 78s and music education material from the 1960s and 70s.

- "The question then, is how to determine which resources to provide by immediate full-text access, delayed full-text, or as citations, and most importantly, how to pay for all of these" (3).
Especially in the academic setting, students and faculty want instant gratification when it comes to doc delivery and access to on-line articles. When libraries (like Mills Music Library) decide to keep massive print collections in an off-site space where the patron may not browse material, the library's catalog must maintain a high standard of information about each print item. Otherwise, the materials lose their value. In order to be useful to patrons (especially those who are accustomed to instant access to information), on-line catalogs must strive to reach the same standard of accessibility as many commercial, full-text databases.

Class discussion from 9/3 and "History of Traditional and Electronic Scientific Journal Publishing": Here are just a few notes I took from this article and some of the points that were brought up in class.

- NSF wound down research and funding for libraries until the Digital Libraries Initiative (late 90s)
- Early 1990s: massive increase in the number of computers in the U.S., increase in the number of e-journals (deal with the access vs ownership issue)
- use of on-line resources and the CD-ROM in the 1990s
- Libraries had to figure out how to convert the articles to a common digital format - this resulted in extra staffing and overhead costs
- Red Sage: early work with e-publishing
- Overall, there has been (and continues to be) growth in the amount of e-resources produced
- Instead of relying on publishers, a direct link from author to reader emerged.
- Researchers now find articles through a 3rd party (intermediaries and aggregators). The difference between these two parties is still not entirely clear to me.

I must admit that despite my work in Information Ethics and Policy, I remained somewhat clueless about this area of research. I use on-line databases everyday, but never realized the complexity and controversy surrounding electronic resource management. I'm ready to dive in.