Monday, December 6, 2010

Reflection on the semester

Now that the class is starting to wrap up with student presentations and a quiz this week, I feel like this is a good time to end my blog. This was my very first blog, hopefully not my last, and I feel like it was a good experience. I liked having a very focused, topic-driven post each week where I was also allowed to be a little more informal and express my likes and dislikes and write about the concepts that I had a hard time grasping. I typically wrote my posts (or started to) every Sunday or Monday, which gave me time to think back on the class and readings from a few days earlier. Overall, I think my blog posts accurately reflect how I connected with each unit. Some weeks went better than others, and there were definitely some weeks that I did not want to post at all.

My favorite blog post, and also my favorite time during the semester, was on 11/12/10 ("A little more with SFX"). Following my initial read-through of the syllabus, I expected to enjoy the first half of the semester because the second half consisted of concepts that were brand new to me and "scary" because they pertained to the technology side of ER management. However, I think this post shows a turning point in my attitude towards the link resolver, DOI, and OpenURL technology in libraries. I automatically assumed that I would have a difficult time understanding the material, but after I forced myself to present on Find It (about which I previously knew nothing), I gained a deeper understanding than I ever thought I would. Between my notes from class around this time (first two weeks of November) and my blog posts, I feel like I could stand in front of a group of people and teach these concepts. It's interesting how the topic I feared most ended up being my favorite.

My overall thoughts about ERM have drastically changed. The first half of the semester was filled with some personal challenges which slightly distracted me from my school work, but I also didn't feel connected to the material. I also went into this class with low confidence in my ability to understand some of the technology that ER librarians work with on a daily basis. I now know that working with electronic resources is a necessary skill for future librarians, and there are so many different facets to it (just thinking back on all of our presenters) that there is bound to be a favorite work area. Some librarians may like the licensing and negotiating (the business side) of electronic resource management, whereas others may be more drawn to the various technologies that are used to keep everything running smoothly, like management systems and link resolvers.

Even though I struggled through parts of the class, the work (reading, presenting, writing, etc.) made me feel like I have really added to my skills as a librarian and to my ability to understand and work in a rapidly growing section of the industry.

Thursday, December 2, 2010

Reading Notes for Unit 13: ERM Librarian

I want to get a few of my reading notes down before I go to class tomorrow. An electronic resources librarian often shares some of the same tasks as his/her colleagues (reference work, bibliographic instruction), but must treat electronic resource management with more business sense and build relationships with vendors outside of the library setting.

Marian Through the Looking Glass...
- This is a newer position for the library profession meaning that directors/boards have to carefully design the ER position to make sure that all aspects of it are accomplished. This can be difficult because many librarians may not know much about the ER realm of librarianship.
- In a matter of five years, ER spending went from a reported 8.85% to 22.01%
- Description of the ER management position: "an increasing number of...position announcements, a greater diversity of functional areas involved, a wider variety of types of institutions placing advertisements, and the emergence of distinctions between 'electronic' and 'digital' positions in terms of job responsibilities."
- Common position duties: purchase management; renewals and cancellations; pricing negotiations; AND covering technical problems (additionally, ER librarians will work with link-resolver software, federated searching software, and managing usage data).
- All of the above areas of the position point to a tech services librarian instead of a service-focused librarian. However, the job of an ER librarian really differs from library to library depending on their needs and current staff.
- There still aren't many training opportunities for an ER librarian. We can take a class like Electronic Resource Management, but otherwise it is up to librarians to look for learning opportunities and take the initiative to learn the material themselves.

How to Survive as a New Serialist
- This was more of an article that I would keep around for good references (websites, webinars, blogs, workshops, etc).
- One interesting point: look at the job of the ER librarian from the perspective of the ILS - what already exists and what will be needed for training? How can you improve the current system? What is going to be needed to make it run smoothly?

Process Mapping for Electronic Resources
- Because the ERM landscape has changed rapidly in the last decade (and is still changing), many libraries are at different points in how they choose to approach managing their electronic resources and are attempting to define the skills and role taken up by the ER librarian.
- Process Mapping:
- "synonymous with business process reengineering (BPR)"
- rooted in Total Quality Management (TQM)
- process maps help an organization better visualize workflow and the functions of a particular process. Moreover, they can help employees understand what areas need improvement or need to be changed altogether.
- Even though libraries technically aren't businesses, they are organizations with many areas of work and with budget constraints. Especially in the ERM realm, librarians need to take a business approach. It's an area of librarianship where a service-oriented field merges with the corporate world of vendors and publishers.
- Process maps not only show what areas need change, but they also show what sections of work are already working well for the organization.
- Communication is key with process mapping. Once the map has been created and analyzed, effective communication will carry the project to the next step and is needed to make the changes actually happen.
- "They [libraries] are increasingly turning to proven business practices that allow them to evaluate and design new methods of delivery of resources and services." (103)

Friday, November 19, 2010

What is a handle system anyway?

Today's class clarified some of the concepts we've been learning about this semester and also answered some questions that I've actually had since the beginning of library school (yikes!). Like this one: what is a DOI and why would people confuse me even more by referring to it as a "dewey" (pronounced the same way)?

We covered 4 questions/learning objectives, and I'm going to lay them out here:

1. What's the issue with regular links? The issue is that regular old links identify locations, not the actual item. The problem is that information changes location and moves between servers, which breaks the link. So, it looks like we need a solution...

2. What does the term "local control" mean and why is it important? This is in relation to OpenURL and DOIs, which I will explain in just a minute. Local control is important because it gives the library (the agency that PAYS for subscriptions and access) control over where a link takes the user. A library maintains its own link resolver server - it wants to point patrons to the library's purchased resources, NOT the publisher's website, where the user will be prompted for a credit card number before proceeding.

3. What is a "handle system?" A handle system is an index that tracks the location of a certain item. If the location changes, the index is updated. When the user clicks on a link (or item), their "request" runs through the index and is then directed to the current location of what they're looking for. Some URLs will actually contain the word "handle."
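Since I learn best by example, here's a toy sketch (in Python - purely my own illustration, with made-up handles and URLs) of how I picture the handle index working:

```python
# Toy sketch of a handle system: the handle stays the same forever,
# but the location it points to can be updated in the index.
# All handles and URLs here are made up.

handle_index = {
    "1234/fake.item.42": "http://oldserver.example.edu/docs/item42.pdf",
}

def resolve(handle):
    """Look up the current location for a persistent handle."""
    return handle_index.get(handle)

# The item moves to a new server: only the index entry changes,
# and every published link to the handle keeps working.
handle_index["1234/fake.item.42"] = "http://newserver.example.edu/item42.pdf"

print(resolve("1234/fake.item.42"))
# -> http://newserver.example.edu/item42.pdf
```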

4. DOI, OpenURL - what is the difference?

All of these fit under the "handle system" umbrella.
DOIs (pronounced like Dewey): an identifier assigned to an object by the publisher, made up of a prefix and a suffix (referring to the publisher and the item number). The publisher determines if this goes to the title level, chapter level, or, I suppose, even the page level. DOIs will always point back to the publisher page, so it's hard to have local control with this type of identifier. They're great for citations because they point to the authoritative resource.

OpenURL: the library's personal link resolver identifier. This will point only to resources the library subscribes to, or to its ILL page. Many publishers will also assign an OpenURL to a resource because they are often used with link resolver software, like SFX. These are often much longer and carry detailed information like the author's last name, page numbers, etc.
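To keep the two straight in my head, I wrote myself a little sketch (Python again, with invented values - the resolver address is hypothetical) contrasting the shape of a DOI with the metadata-carrying query string of an OpenURL:

```python
# Rough sketch: the shape of a DOI vs. an OpenURL. All values invented.
from urllib.parse import urlencode

doi = "10.1234/jlib.2010.001"
prefix, suffix = doi.split("/", 1)
print(prefix)  # 10.1234 -> the publisher's prefix
print(suffix)  # jlib.2010.001 -> the publisher's own item number

# A DOI link points back to the publisher by default:
publisher_link = "https://dx.doi.org/" + doi
print(publisher_link)

# An OpenURL instead packs citation metadata into a query string aimed
# at the library's own link resolver (hypothetical base URL):
metadata = {
    "genre": "article",
    "issn": "0000-0000",
    "date": "2010",
    "aulast": "Smith",
    "spage": "17",
}
openurl = "https://resolver.library.example.edu/findit?" + urlencode(metadata)
print(openurl)
```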

Reading notes on Friday's material:

This is a slight shift from DOIs and OpenURLs over to some readings on eBooks:

"Comparison Points and Decision Points" (outlines vendors for audiobooks)
One of the biggest points made about all e-book formats is that the user does not want to fiddle with technology or feel like they have to learn it. The user wants to listen to or read their good book, not figure out how to make it work. In this article, we look at five different audiobook vendors: Audible, NetLibrary, OverDrive, Tumble, and Playaway. The author addresses the size and quality of a library's e-book collection as an important factor, especially since digital audiobook collections are currently not very large. Audible currently boasts the largest collection, with over 14,100 books. Large collections may also contain overlapping and duplicate titles. While this may not always seem efficient, it also gives the user a choice when selecting a version and voice and allows them to compare prices.

I found that just like regular libraries, audiobook vendors have to advertise their new purchases (specific listeners may want more books from their favorite author) and continually bring in new materials. Additionally, content is also occasionally removed from collections.

The author mentions that content should be arranged according to the age of the content: "One simple three-way method of slicing up a collection is into frontlist, backlist, and public-domain titles..." (17) (recently published, older but still protected by copyright, works that are out of copyright and into the public domain). Peters goes on to analyze e-book collections by:

- subject and genre strengths: publisher supplied genres and subject headings may result in a different form of categorization.
- content characteristics: this may include abridged vs. unabridged
- narrators: with human narrated audiobooks, many listeners may have developed a preference for a favorite voice. They are typically narrated by actors/personalities; authors; and professional narrators.
- sound quality: not usually a big deal because repeated use does not degrade the sound.
- languages other than English: it's important for a vendor to have works in other languages (Spanish, French, German, Italian, etc).
- purchase and lease options: vendors may offer a purchase plan or a lease plan, and some libraries like to have the ability to swap out underperforming titles
- cost component: different for each vendor, library must consider what will work best for their budget
- licensing and agreement terms
- key features and accessibility issues
- a few features that a library should consider are: placeholding, bookmarking, skip back, sampling, nonlinear navigation

I didn't list all of the key points librarians should consider, but managing and selecting an ebook vendor will be a major task for an ER librarian!

"An Overview of Digital Audiobooks for Libraries"

This article, also written by Peters, breaks down the major services of Audible, OverDrive, NetLibrary, and TumbleTalkingBooks into 6 categories and provides recommendations:

1. Usage model: all are either single user or concurrent users
2. File format: MP3; Windows Media Audio; Flash
3. Number of sound qualities: 4; 1; 2
4. Supported devices: various vendors support many devices (as long as they are file-format compliant)
5. Ownership or subscription: Tumble has the best pricing model (subscribe: select and swap)
6. Size of collection: ranges from 100 titles (Tumble) to 23,000 titles (Audible)

The moral of this article: each vendor comes with its pros and cons and no two vendors are alike. It really comes down to what your library needs and which vendor has the most to offer.

Friday, November 12, 2010

A little more with SFX...

I can honestly say that if I hadn't done a presentation last week on Find It, I would have been so confused in class this morning. I'm so relieved that I put in the initial work, not just to get my presentation completed, but also because I provided myself with a solid background understanding of OpenURLs and link resolvers. Judith Louer, from CTS, came in today to show our class the workings behind SFX and what she sees every day at work. After doing my own research about it, I knew it was complicated and knew it was a huge job, and listening to her speak confirmed this. Once again, Find It works by combining the forces of our library (but I guess it's more than a library...it's a HUGE library system for a research institution with 42,000 students), all of our vendors (according to Sue Detinger, we work with over 2000), and Ex Libris, who "manages" the software. This adds up to a lot of chaos. Today, I learned that the bibliographic information provided by the vendors for each article/journal doesn't really match up with the library's cataloging practices. Ugh. One more thing to complicate Find It - making sure vendor data is adjusted to fit our format. It was nice to hear from the person who contributes to keeping it all running.

The readings for this week were great - they were easy to understand (which I sometimes need with tech information) and were also interesting. I won't write too much about them, but they confirmed/solidified things I already know/have heard of in other classes. I especially liked learning about CrossRef and how it works due to a collaborative effort between several publishers.

I'm still a little bit shaky on the difference between OpenURLs (and the information they carry) and DOIs. I'll have more on that next week...

Notes on the readings: (there is some overlap here with my other readings and entries, so these aren't comprehensive reading notes, just the most important points)

"E-Journal Management Tools" (Jeff Weddle and Jill Grogg)
This article summarizes several different management tools for e-journals. Because of the explosion of e-journals in the past decade, learning how to work with and organize these resources has become a major part of librarianship (and the job of the ER librarian(s)). I will briefly summarize each tool.

1. A-Z lists: This is one of the first ways that libraries managed their e-journals. Now that one journal may have several different points of access, maintaining these lists by hand has become too costly and complicated. Many vendors now provide the lists to the library, and the journals can be categorized differently, like by subject. Depending on the size of the A-Z lists, libraries may want to outsource this management work to the vendor, or they may be able to manage it on their own.

2. OpenURLs and Link Resolvers: Because of my presentation, I've already spent a lot of time covering this aspect of e-resources, but want to include the two essential elements mentioned in the article. In order for the framework to function, these elements must be in place:
a. "localized control (often via the knowledge base)"
b. "standardized transport of metadata, specifically the metadata which describes the users' desired information object."

Note: localized control is covered in next week's blog.

3. DOI and CrossRef: A DOI is a persistent object identifier, not a location identifier. CrossRef is a database that works to connect DOIs with their URLs (they also work with OpenURLs). However, the DOI is assigned by the publisher and will go back to the publisher's website unless a link resolver is used to direct it elsewhere.

4. Link Resolvers:
a. LinkSource (EBSCO)
b. SFX (Ex Libris: used by UW-Madison)
c. OL2 (Fretwell-Downing)
d. Article Linker (Serials Solutions/ProQuest)

5. Federated Searching: Most students don't know what kind of search they're running when using a "subject-based database list" or the "articles tab" on the UW library website. A federated search allows the user to search across multiple databases, but will only pull up about 30 citations (or some other designated amount) from each database. While the federated search features may be limiting, the user is able to search multiple databases with one click.
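Here's how I picture the mechanics, as a minimal sketch (Python, with stand-in search functions and fake results - no real vendor API looks like this):

```python
# Minimal sketch of federated searching: one query fanned out to several
# databases, capped at N citations pulled from each. All data is fake.

CAP = 30  # the designated per-database limit mentioned above

def search_database(db_name, query):
    # Stand-in for a real vendor search API; returns fake citations.
    return [f"{db_name}: result {i} for '{query}'" for i in range(1, 101)]

def federated_search(query, databases):
    results = []
    for db in databases:
        hits = search_database(db, query)
        results.extend(hits[:CAP])  # keep only the first CAP citations
    return results

combined = federated_search("openurl", ["DatabaseA", "DatabaseB", "DatabaseC"])
print(len(combined))  # 90 -> 30 citations from each of the three databases
```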

"CrossRef" (Amy E. Brand)

This article explains the history and workings behind CrossRef, a nonprofit organization created by publishers to run a cross-publisher citation linking system and act as an official DOI registration agency. Here are some points I would like to highlight:
- CrossRef adds between 2 and 3 million DOI records per year and in the future will include: patents, technical reports, gov docs, datasets, and images.
- A DOI consists of a prefix (identifying the content owner, registered through an agency like CrossRef) and a suffix (item information provided by the publisher - may include year, journal acronym, etc.).
- DOIs are very reliable because they are attached to an item, NOT a location - locations constantly change.
- A CrossRef shortcoming: it does not take the researcher or the institution into account
- The publishers foot the bill for the CrossRef service, and it's supposed to be invisible to the user. Still, it seems most worthwhile for an institution's library to work with CrossRef AND maintain its own local control.

The Process:
1. Publisher exports metadata to CrossRef.
2. DOI is requested based on metadata.
3. System exports articles with their DOI attached.
4. Users retrieve articles through the assigned DOI.
How do OpenURLs and DOIs work together (with CrossRef)?
"The DOI and OpenURL work together in several ways. First the DOI directory itself, where link resolution occurs in the CorssRef platform, is OpenURL enabled. This means that it can recognize a user with access to a local resolver. When such a user clicks on a DOI, the CrossRef system redirects that DOI back to the user's local resolver , and it allows the DOI to be used as a key to pull metadata out of the CrossRef database, metadata that is needed to create the OpenRUL targeting the local resolver. As a result, the institutional user clicking on a DOI is directed to appropriate resources." I typically don't include quotes this long; however, this paragraph made the relationship between DOIs and OpenURLs click for me.

"On the Road to the OpenURL"
Back in 1999, several groups met to discuss reference linking and its challenges. This group included: National Information Standards Organization (NISO); Digital Library Federation (DLF); National Federation of Abstracting and Information Services; and the Society for Scholarly Publishing. They came up with three components of a successful reference linking system:
1. identifiers for works
2. "a mechanism for discovering the identifier from a citation"
3. the ability to take the reader/researcher from the identifier to a specific item

The answer (or one of them)... an OpenURL.
- internal vs. external linking: internal linking stays within one system, but this is often too confining. External linking allows the user to move between their current system, ILL, doc delivery services, online bookstores, and library catalogs.
- OpenURLs provide the local control that libraries need (whereas DOIs are more for publisher websites).
- an OpenURL identifies an item through its metadata, not one particular copy of the item. Example: when we work with Find It, the user may be directed to more than one copy of the item from different vendors. The link resolver matches the request's metadata against the library's holdings.

"Beyond OpenURL: Technologies for Linking Library Resources"
This article provides an overview of linking tools used in libraries and covers where we have gone since moving away from static URLs and where we need to go in the future.
- presently working with OpenURLs and DOIs. This started about 10 years ago with the CrossRef initiative.
- DOIs don't rely on a knowledge base to complete the link. They go from request to DOI software to the content provider (most likely the publisher).
- dynamic linking: the link resolver works with the request and is able to point the user to additional materials like dictionaries and subject encyclopedias.
- conceptual and associative linking: this is the "more like this" linking (commonly seen on Amazon)
Additional Web 2.0/Library 2.0 tools: blogs, wikis, social networks (College Library on Facebook), chat (which I love), and RSS feeds. I also read about blikis (blog/wiki hybrids), and while they seem like a good concept, the name sounds a little ridiculous. Maybe I'll just have to try one out.

Saturday, November 6, 2010

Data Standards

This blog post is going to be a little bit different - just reading notes this week. I gave a presentation on Find It on Friday and we worked on an activity involving COUNTER. So, here are the things we read to prepare ourselves for class:

"Library Standards and E-Resource Management: A Survey of Current Initiatives and Standards Efforts" by Oliver Pesch


- the e-journal life cycle! acquire - provide access (I think Find It fits under this part) - administer - support - evaluate - renew: remember, each step in the e-journal life cycle takes a lot of time and work, and it's an ongoing process.
- requires working with multiple vendors and different systems
- so, we use the help of:
NISO (National Information Standards Organization)
EDItEUR - focused on international standards for the e-commerce of books and journal subscriptions
COUNTER - usage statistics!
DLF (Digital Library Federation)
ICEDIS - works at the publisher and vendor level to develop a set of standards
UKSG (United Kingdom Serials Group)

Just a few notes for "Standards for the Management of Electronic Resources" (Yue)

- Standards = interoperability, efficiency, and quality!
- the largest area of growth for libraries has been in e-journals, and without an initial set of standards at the onset of this area, different formats and ways of managing serials emerged.
- ONIX as the first electronic assessment tool for serials
- MARC and ER = square peg in a round hole. We need XML! Can I quote Steve Paling on this? Yes, "MARC must die!"
- OpenURL - works well for linking to full text. Static URLs don't work in this case because of the fluid nature of the e-journal market. Now we have the OpenURL resolution system known as SFX, where a source is directed to a target by a link resolver.
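A tiny sketch of that source-to-target step, the way I understand it (Python; the knowledge base entries are made up, and real resolvers like SFX are far more elaborate):

```python
# Tiny sketch of link resolution: a source citation is checked against a
# knowledge base of the library's holdings to pick a target. Data is fake.

knowledge_base = {
    # ISSN -> (vendor platform, first year of full-text coverage)
    "0000-0000": ("VendorPlatformA", 1995),
    "1111-1111": ("VendorPlatformB", 2005),
}

def resolve_citation(issn, year):
    holding = knowledge_base.get(issn)
    if holding and year >= holding[1]:
        return f"full text at {holding[0]}"
    return "no full text -> offer the ILL page instead"

print(resolve_citation("0000-0000", 2001))  # full text at VendorPlatformA
print(resolve_citation("1111-1111", 1999))  # no full text -> ILL page
```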

"COUNTER: Current Developments and Future Plans" and our COUNTER activity:

- the main goal of libraries/librarians is not to spend their days looking at and finding usage statistics - COUNTER makes this easier for them
- COUNTER report format - requires vendors to provide only reports ‘relevant’ to their product(s) - most supply only a few report types.
Journal Reports:
JR1 - # of successful full-text article requests by month and journal
JR1a - the same, but for subscription archives
JR2 - turnaways by month and journal (due to simultaneous user limit)
Less common:
JR3 (optional) - number of successful item requests and turnaways by month, journal, and page type
JR4 (optional) - total searches run by month and service
JR5 - number of successful full-text article requests by year and journal


Database Reports:
DB1 total number of searches and sessions by month and database
DB2 turnaways by month and database
DB3 total number of searches and sessions by month and service (branded group of online info products)

Consortium Reports:
CR1 # of successful full-text article/e-book requests by month
CR2 # of searches by database
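Most of these reports boil down to counts by month, so here's a toy sketch of tallying JR1-style data (Python, with made-up usage records):

```python
# Toy sketch of a JR1-style tally: successful full-text article requests
# counted by journal and month. All records are made up.
from collections import Counter

requests = [  # (journal, month) for each successful full-text request
    ("Journal of Fake Studies", "2010-10"),
    ("Journal of Fake Studies", "2010-10"),
    ("Journal of Fake Studies", "2010-11"),
    ("Annals of Example Research", "2010-11"),
]

jr1 = Counter(requests)
for (journal, month), count in sorted(jr1.items()):
    print(f"{journal:28} {month}  {count}")
```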

Compliance checks: report format compliance (manual review); article request counting (test scripts); database session/search counting (test scripts)

- The counts don't account for: automated search filtering (bots, crawlers, LOCKSS, etc.)
- HTML vs. PDF downloads: some services display HTML full text along with the abstract - is this a "download?"

We also covered CORE (Cost of Resource Exchange) and SUSHI (Standardized Usage Statistics Harvesting Initiative)

“Library Standards and e-resource management”

- E-journal lifecycle:
1. Acquire: titles, prices, subscriptions, license terms, etc.
2. Provide access: cataloging, holdings lists, proxy support, searching and linking
3. Administer: use rights and restrictions, holdings, title list changes
4. Support: contacts, troubleshooting
5. Evaluate: usage data, cost data
6. Renew: title lists, business terms, renewal orders, invoices (groups help create standards as management resources)

“Standards for the Management of ER”

- Promote interoperability, efficiency, and quality

- Another way to look at the lifecycle:
1. Selection
2. Acquisition
3. Administration
4. Access control
5. Assessment

“COUNTER: Current Developments and Future Plans”

- Usage statistics as part of the librarian’s toolkit

- Vendors have a practical standard for usage stats on their major product lines

- SUSHI (Standardized Usage Statistics Harvesting Initiative): automated retrieval of COUNTER usage reports into local systems (via an XML schema). The harvested data indicates the intensity of use and popularity of a database.

- Journal usage factor: total usage (COUNTER JR1 data) / total # of articles published online (within a specific date range) - a quick arithmetic sketch follows these notes

- PIRUS: Publisher and Institutional Repository Usage Statistics: an invaluable tool in demonstrating the value of individual publications and entire online collections.
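The usage factor really is just simple division; here's a quick sketch with invented numbers:

```python
# Quick sketch of the journal usage factor, using invented numbers.
total_usage = 12000       # total full-text requests (COUNTER JR1 data)
articles_published = 400  # articles published online in the same date range

usage_factor = total_usage / articles_published
print(usage_factor)  # 30.0 -> average uses per article over the period
```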



Thursday, November 4, 2010

Electronic Resource Management Systems

Quite honestly, I had a difficult time connecting with the material in this unit until we went to the computer lab to take a look at a management system (ERMes). It was difficult for me to conceptualize, probably because every library works with a different type of electronic resource management system, or works with one that it has created itself. I can see how deciding on what type of system to use and who will be using it is entirely dependent on the size and the needs of individual libraries. You certainly cannot take a "one-size-fits-all" approach with these systems. During class, we looked at several different systems: EBSCONet ERM Essentials; a homegrown system from Columbia; Innovative Interfaces' Millennium; and Serials Solutions 360.

There are a few benefits and challenges that come with each product. A few of the benefits: ERM systems improve overall management communication, data can be auto-populated, and information can be updated automatically/quite easily.

If you're working with EBSCONet, there is the added bonus of automatic management of all EBSCO materials. If your library subscribes to many of EBSCO's databases, this might be a good option for you. However, for all other vendor products, ERM data must be entered manually, which is tedious and takes time. Any ERM system that requires manual data entry (and I'm not sure we went over one that doesn't require at least some) leaves room for error. Misspellings, typos, and inconsistently entered license information occur because a group of people will usually contribute to data entry, not just one person. This can throw an entire section off. This product was recommended for small- to medium-sized university libraries. One library (Kent State, I believe) even said this system would help them record and remember deadlines and contract deals. For them, it was better than working with a Google Calendar and Excel spreadsheets.

Innovative Interfaces' Millennium and Serials Solutions 360 can come as part of a package deal (link resolver software AND an ERM system all in one!). But once again, it depends on the type of library. At the University of Wisconsin, we use Ex Libris' link resolver software (SFX), but their ERM system (Verde) would never work for us due to the breadth and depth of our collection.

Each unit in this class adds one more corporation to the list of commercial services provided for libraries, specifically academic libraries. And with my Find It presentation this week, we'll be able to add yet another! Up to this point, the biggest lesson I have learned is to be patient and thoughtful with purchasing decisions - each product is slightly different and could potentially hinder certain areas of librarians' work.

A few Reading Notes:

Unit 9: Electronic Resource Management Systems

- Title by title management no longer works

- Homegrown systems became popular in the late 1990s/ early 2000s

- Strong focus on data standards, issues related to license expression and usage data

- Most companies now offer an ERM system as part of the ILS (interoperability is one advantage of working with one company's product, but libraries are also at the mercy of the company for updates – the system could end up undersupported)

- A few things to watch out for: does the system work well with what you already have?

- Is it reliable and sustainable?

- Cost?

- What is the cost of advancements vs the benefits to the user?

- Refer back to ERM checklist

- Implementation of a system: staffing (who should be involved, how they will be structured within the library, training, etc)

- Communication across library departments is key (especially with managing work flow)

- Questions to ask before selecting a certain system:
1. “What elements are important to include for your library?”
2. “What elements are repetitive across license agreements and provide little value or are inconsequential in describing?”
3. “Who will be responsible for providing consistent interpretation of license language and meaning?”
4. “What tools or resources are available to assist individuals in the mapping process?”

Sunday, October 24, 2010

DRM to TPM

I have a pretty general idea of Digital Rights Management, and after learning about TPM (Technological Protection Measures), I now know that TPM is just one aspect that fits under the DRM umbrella. One of the biggest things we talked about in class was the difference between authentication and authorization, two major components of TPM. The two are different, but work together to provide access. Authentication answers the question of "who?" and authorization answers the question of "what may the authenticated person do?"

The authentication process works with IP addresses - libraries and publishers keep a list of "ok" IP addresses (those that are approved to access the database). When a person logs in to, say, UW Libraries and their databases, the username and password are matched against a list of approved users, and then they are assigned an approved IP address. The directory holding that preapproved user information is called an LDAP server (Lightweight Directory Access Protocol). For a place like UW-Madison, thousands of students are added to and taken off this list each year, in addition to thousands of "guest access" passes that can be easily created.
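A very rough sketch of the two steps, with made-up data (the tiny "directory" and IP range here are placeholders, not how UW actually stores anything):

```python
# Very rough sketch of authentication vs. authorization, with fake data:
# authenticate against a directory, then authorize by approved IP range.
import ipaddress

directory = {"student123": "s3cret"}  # stand-in for the LDAP server
approved_network = ipaddress.ip_network("192.0.2.0/24")  # the "ok" IPs

def authenticate(username, password):
    """Authentication: WHO is this? (identity check against the directory)"""
    return directory.get(username) == password

def authorized(ip):
    """Authorization: may this connection reach the database? (IP check)"""
    return ipaddress.ip_address(ip) in approved_network

if authenticate("student123", "s3cret") and authorized("192.0.2.77"):
    print("access granted to the database")
```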

The authorization process gets down to specifics. David Millman's article "Authentication and Authorization" addresses this process well: "The authorization decision is, in other words, given someone's identity, what may they do? What information may they see; what may they create or destroy; what may they change?" We've already started to see tighter authorization requirements on our own campus, and they're predicted to be further restricted in coming years. Currently, there are a handful of databases where the user must be in a specific library to gain access, and certain departments already have additional login information required beyond the regular UW-Net ID and password. A question was raised regarding the Wisconsin Institutes for Discovery in which a relationship between the public and private sector is formed to perform biomedical research - what kind of authorization will be required for these researchers? Especially with the private sector researchers? This will not only be an interesting challenge with authorization, but in licensing as well.

Below are my messy reading notes on a few of the readings:

“Every Library’s Nightmare?”

- “TPM are configurations of hardware and software used to control access to, or use of, a digital work by restricting particular uses such as saving or printing.”

- Hard restrictions: secure-container TPM, where there is a physical limitation built into the hardware.

- ISSUES: user dissatisfaction; interoperability problems; blocked archival activities; increased staffing needed to handle these issues.

- Soft restrictions: discourage use, but not impossible to get around. Now almost accepted as part of e-resources (just the way things are). These change our expectations from vendors.

- Occurs in resources that are 1. Digital and 2. Licensed.

- These restrictions would be impossible on paper copies

- Soft restriction types: 1. Extent of use 2. Restriction by Frustration (often done with awkward chunking) 3. Obfuscation (poorly designed interfaces that do not properly show the capabilities) 4. Interface Omission (tasks only possible through browser or computer commands, left out of the interface) 5. Restriction by Decomposition (breaks the work down into files, making it hard to save or e-mail) 6. Restriction by Warning (proclaims limitations and “misuse may result in...” language).

- Hard restriction types: 1. No copying or pasting of text 2. Secure container TPM (ex: only posting low resolution images)

“Technologies Employed to Control Access to or Use of Digital Cultural Collections”

- Digitized works are often harder to control and restrict access to, so that’s where TPM comes in (it sits under the umbrella of DRM – “a broader set of concerns and practices associated with managing rights from both a licensor and a licensee perspective.”)

- Usage controls manipulate the resource itself (same as a hard restriction?)

- Libraries are more likely than archives/museums to employ a system that restricts or controls access/use.

- Common systems are: authentication and authorization; IP range restrictions; network based ID systems

“Authentication and Authorization”

- Authentication: validating an assertion of identity (identity code and password)

- Other examples include:

1. Shared secrets (like a shared password) 2. Public key encryption 3. Smart cards (not sure if I’ve ever seen this before, or if this method is even used anymore) 4. Biometric (personal physical characteristics) 5. Digital signatures

- Authorization: also called access control or access management - whether someone is permitted to perform some kind of operation on a computer system.

- Divided into three categories: 1. “whether a subject may retrieve an object” 2. “whether a subject may create, change, or destroy an object” 3. the extent to which the person can change the authorization rules.