Over the past few years, 3D printing has become a frequent subject of discussion in the library world. Library makerspaces have proliferated. Patrons have printed innovative and essential objects, including prosthetic hands. Now that 3D printers are becoming a common feature in public libraries, we’re beginning to see discussion of potential legal dangers, and the formulation of rules to keep this new technology from creating problems. Debate is ongoing about whether 3D printers represent a major source of innovation or a passing fad. We at Unbound even weighed in on the subject last year. Whether or not we’ll all have 3D printers in our homes one day, it’s undeniable that they’ve made an impact. Today we’ll be taking a look at two new technologies that synchronize with 3D printers to expand the scope of what they can produce. One of them is currently finding its way into libraries across the country, and the other is still a laboratory prototype.
3D scanners are tools for turning physical objects into digital models. Just as the 3D printer took manufacturing from a factory scale into the home, 3D scanners make three-dimensional imaging possible without a large, expensive device like an MRI machine. Indeed, many 3D scanners on the market today are produced by the same companies that make 3D printers. The devices make sense together—one could scan an object and then use the resulting model to print a replica of it. But how accurate will that replica be?
Here at SLIS, our tech lab owns a 3D scanner called the MakerBot Digitizer. To learn more about its efficacy, I consulted with Linnea Johnson (Manager of Technology) and Gabriel Sanna (SLIS student). The Digitizer is surprisingly lightweight and streamlined. I was expecting a heavy, brutalist kind of object, but instead I was presented with something resembling a small, abstracted replica of the Enterprise. We tested out the device by scanning Gabriel’s sunglasses.
As you can see, the scanned glasses were not entirely accurate. The general shape was mostly preserved, but the details were rough. In the area around the lenses, however, things got a bit strange. The scanner works by tracing the object with two lasers as it rotates on a built-in turntable. The glare on the lenses disrupted this process and made it difficult for the system to get an accurate read on them. MakerBot recommends coating reflective objects in a thin layer of baby powder to remedy this issue, but we elected not to fill every crevice in Gabriel’s glasses with fragrant powder. Next we scanned a small statue from Linnea’s office, and this time we printed the result.
Absent the glare, this time the scanner captured the overall shape of the object much more accurately. The finer details were captured at a pretty low resolution, but you can still tell what you’re looking at. When we printed the object, it lost additional resolution and came out looking pretty vague (and orange). Admittedly, this process represents the minimal-effort use case for the scanner. Canny users could take the model the scanner produces and correct it manually before printing, for a much higher-quality end product.
Still, it seems clear that prosumer 3D scanners have a ways to go. A trained and dedicated user could certainly save a good deal of 3D modeling time, as long as they were willing to put some effort into correcting the original scan. However, the device isn’t magic—a novice user should not expect an immediately accurate replica of their object in most circumstances. We are still in the fairly early stages of this technology, and I expect it to improve dramatically in the coming years.
If the 3D scanner is still young, this next technology hasn’t even been born yet. At the SIGGRAPH (ACM Special Interest Group on GRAPHics and Interactive Techniques) 2015 conference, a research group from Zhejiang University will be presenting a new process they call “computational hydrographic printing.”
Hydrographic printing has been around for some time. It’s a method for printing color and patterns onto objects. Basically, you produce a colored plastic film, rest it on top of a pool of water, spray the film with chemicals that activate bonding agents within the plastic, and then dunk an object into the water, allowing the film to wrap around the object and coat it with color. Hydrographic printing is good at applying solid colors and patterns, but it’s not so great when you need accuracy. Dipping an object by hand is an imprecise art. It’s difficult to keep a steady hand during the entire process, and nearly impossible to compensate for the way the film stretches and warps as it bonds to the object. You’d be fine printing a brick wall pattern onto a mask, but trying to print a face would work out maybe once in a hundred times.
That’s where computational hydrographic printing comes in. The Zhejiang University researchers have developed a new device and process that can print hydrographically with an accuracy and consistency that was previously impossible. Their technique couples cheap, off-the-shelf components (binder clips, an Xbox Kinect, a linear DC motor) with advanced physics simulation software that can predict how the film will stretch when an object is immersed.
By simulating the dunk in advance, the software is able to generate a design that is distorted just enough to compensate for the stretching of the film when placed onto the object. This design can be printed onto the film using a common inkjet printer. Their rig incorporates an aluminum arm that grips the object, as well as a 3D visual feedback system that ensures that the arm remains perfectly steady and maintains its position, for a perfectly executed dunk. The result is so consistent that a single object can be immersed multiple times, to print complex designs onto all sides of it, as seen in the above gif.
If this technology continues to develop, it could easily sit alongside a 3D printer in a library’s makerspace. 3D printers already use digital models, and that same model could be fed directly into a hydrographic printer. After 3D printing an object, all you would have to do is send that 3D model and a 2D design into the hydrographic printer, and you could paint your object with excellent accuracy.
It isn’t hard to imagine a future library makerspace where you could use a 3D scanner to model an object, use a 3D printer to create a physical copy of it, and then use a hydrographic printer to paint it. It wouldn’t quite be a Star Trek replicator, but it wouldn’t be too shabby either.
At this very moment, a team of intrepid scientists, artists, and adventurers is traversing one of the world’s most untouched habitats: Botswana’s Okavango Delta and its dangerous source rivers, the Cuito and the Cubango. Beginning with a wetland bird survey in 2012, The Okavango Wilderness Project has undertaken yearly expeditions to the Okavango, each one expanding in scope. Now, on the 2015 trip, the Cuito River is receiving particular attention. Due to the 27-year Angolan civil war (which ended in 2002), the lands along the river are seeded with live land mines. The explorers aim to travel the full length of the river, a feat never before attempted due to its inhospitable terrain. They hope to raise awareness of the many conservation issues affecting the Delta and its source rivers, advocate for legislative protections, gather useful scientific data, and share the cultures of those who live near the Delta. To these ends they have assembled a crew with a diverse set of specialties: ichthyologists, ornithologists, herpetologists, botanists, photographers, polers, and more. Of special interest to Unbound is their data artist, Jer Thorp, and the data reporting methods he has orchestrated.
This year, the Into the Okavango expedition is recording and broadcasting a very thorough suite of data. Their mekoro (traditional dugout canoes) are equipped with GPS sensors that record their locations at all times. Members of the crew are wearing heart rate monitors that record their biometrics. They have brought a Data Boat equipped with sensors that measure water and air quality. They are uploading photos to Instagram, tweeting, and creating podcasts and field recordings. As they reach certain planned points on their route, they are deploying stationary, autonomous, solar-powered sensor platforms that will automatically record water temperature and pH over time and report back over the internet.
Their process for recording animal sightings is particularly clever. When one of their experts spots an animal of interest, they call out the species name and use a custom-made Android app to take photos of it and report the sighting. The app sends the data to a Raspberry Pi computer in the back of the boat, and an antenna connected to the Pi sends the data on to the internet. The sightings incorporate GPS data so you can track where each one occurred along the route, allowing for maps of sightings like the one at the top of this post. Some of the large corpus of data they collect is reported live, but most of it reaches the internet in one transmission every evening. When the explorers have made camp, they sort through the records of the day and upload them to the expedition’s website.
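The sighting pipeline described above lends itself to a simple sketch. The Python snippet below shows the kind of GPS-tagged JSON record such an app might hand off to the Raspberry Pi; the field names and values are purely illustrative and are not the expedition’s actual schema.

```python
import json
from datetime import datetime, timezone

def make_sighting(species, lat, lon, photo_filename):
    """Bundle one wildlife sighting into a JSON-ready record.

    Loosely mirrors the kind of payload a sighting app might hand
    off to an onboard computer; the schema here is hypothetical.
    """
    return {
        "species": species,
        "location": {"lat": lat, "lon": lon},  # from the boat's GPS
        "photo": photo_filename,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

record = make_sighting("African fish eagle", -18.42, 21.85, "IMG_0042.jpg")
payload = json.dumps(record)  # what would travel from app to Pi to web
print(payload)
```

In the real workflow, the serialized record would then travel from the Pi’s antenna up to the expedition’s servers in the evening batch upload.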
All of the data uploaded to the website is comprehensive and completely open to the public. The site features an interactive map that intuitively displays the crew’s geographic location over time, paired with their photos, tweets, and wildlife sightings. They also provide an open API that allows anyone to interface with the complete database using their own software. By providing such immediate and open access, the expedition team hopes to give people at home a personal relationship with the research data, and by extension with the Okavango Delta itself. Ideally, their data will persist as a legacy for future researchers to use in ways the team never expected.
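To make the open-API idea concrete, here is a short Python sketch of what consuming such an endpoint might look like. The response shape, field names, and species records below are invented for illustration; the expedition’s real API may be organized quite differently.

```python
import json

# A made-up response of the kind an open expedition API might return;
# the real endpoint and field names may differ.
sample_response = json.loads("""
{
  "results": [
    {"species": "Nile crocodile", "lat": -18.40, "lon": 21.80},
    {"species": "African fish eagle", "lat": -18.42, "lon": 21.85},
    {"species": "Nile crocodile", "lat": -18.45, "lon": 21.90}
  ]
}
""")

def sightings_of(response, species):
    """Collect every sighting of one species from a decoded response."""
    return [r for r in response["results"] if r["species"] == species]

# With live data, the decoded response would come from an HTTP GET
# against the expedition's API rather than the string above.
crocs = sightings_of(sample_response, "Nile crocodile")
print(len(crocs))  # → 2
```

Anyone following along at home could build maps or charts on top of queries like this, which is exactly the kind of personal engagement with the data the team is hoping for.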
I first became aware of Jer Thorp’s work when he visited Simmons College for the IMLS-sponsored conference Envisioning Our Information Future and How to Educate for it. His experience as an artist embedded in information institutions lent him a valuable perspective on the issues and ideas we discussed there. He is known for software-based art that visualizes and conceptualizes scientific data in creative ways. His Sustained Silent Reading, a 2010 installation at the Gottesman Library at Columbia University, used semantic analysis to find important people, places, and things in a text and display their relationships in a stylized diagram. His contributions to Manhattan’s 9/11 Memorial allowed the victims’ names to be organized by their relationships to each other, with coworkers’ and family members’ names placed together. Since 2013, Thorp has been lending his talent for data manipulation to The Okavango Wilderness Project.
Thorp did not physically attend the 2013 expedition, but he created an open API for their data from home, allowing anyone following along online to create their own tools and visualizations using the data. In 2014 he expanded this API and traveled with the explorers, helping them report their data in real time. At one rather notable point during that trip he found himself face to face with a hippopotamus. Undaunted, he returned in 2015 to refine the team’s process for reporting data and to assist them in the field.
After spending several weeks preparing the expedition’s technology and venturing on the first leg of the journey, Thorp’s segment of this year’s trip ended on the 28th of May. The systems he designed will continue to serve up data for the rest of the expedition.
Thorp’s design of the expedition’s data workflow could provide a blueprint for librarians with an interest in real-time open access data. In this expedition, Thorp’s job facilitating his colleagues’ research was not entirely dissimilar to a faculty services librarian’s role when supporting academic research. Librarians’ expertise in information seeking, manipulating data, and facilitating information access could make them good candidates for joining future expeditions of this type. Perhaps the time is right for a new breed of adventure librarian?
If you’ve heard of any librarians undertaking similar projects, let us know in the comments!
The question of whether we still need libraries continues to be raised in various forums, from the New York Times (da Loba, 2012) to Forbes (Denning, 2015) and NPR (Weeks, 2015), to personal and community blogs. Those who question the library’s value tend to point to the—seemingly—ubiquitous access to information in the digital age. Why do we need libraries when we can find almost any information instantaneously online? Responses to this question are often couched in terms of bridging the digital divide and providing access to computers and the internet for those who do not have (and perhaps cannot afford) it at home, or providing access to higher-end technology and multimedia production tools, as in the case of makerspaces. While these arguments are legitimate, they (and the questions that prompt them) ignore the basic fact that in our knowledge economy, information is treated as a commodity—an entity which has economic value and can be owned, bought, and sold. As such, vast quantities of information are not freely available, regardless of access to technology. This issue of information as a commodity has important consequences for society and for libraries, and it is crucial for librarians to engage with the issue as professionals and as citizens in a democratic society. In this blog post, I would like to consider some of the implications of the commodification of information in the hopes of spurring further conversation on the topic.
Our market orientation, the idea that we are treating more things as commodities and more organizations as businesses, is evidenced in the information professions by our increasing use of business terminology—referring to patrons as “information consumers” and discussing marketing and branding of our institutions and services. Although living in an information/knowledge economy highlights the issue, the idea of commodifying information is not a new one. Traue (1997) traces the shift in orientation from information as a public good to a consumer product. He begins with the invention of the printing press, which helped to separate information—a packaged product—from knowledge—an internalized understanding. Once information was separate it could be owned, and copyright laws were developed to help protect that ownership and promote creation of new products. Later, Shannon and Weaver’s theory of communication introduced the possibility of quantifying information into discrete units, which in turn allows for buying and selling. Traue also touches on certain government policies, including those of President Reagan, which moved the production of certain government information into private hands, thereby increasing the cost of access.
Advances in technology have added another layer of complexity. Technology has made the tools for producing and disseminating information more widely available, meaning more people can participate in the process of creating and sharing information. Further, the Internet, the Web, wireless technologies, and mobile devices have increased the ability to access information from almost anywhere at any time. While it would be reasonable to assume that these changes have led to an abundance of information that is easily and freely accessible, the reality is more complicated. In fact, while information itself is more abundant than ever before, access is often limited. In many cases, publishers and other copyright holders appear to be exploiting the uncertainties arising from new technologies to exercise even greater control over the information they produce. For example, many of the major publishing houses and vendors such as EBSCO currently operate on a model whereby libraries cannot buy materials like books or journal subscriptions outright, but instead pay licensing fees to access information for a limited time, after which they must pay for continued access or lose the content (Bessner, 2002). Similarly, several of the largest book publishers refuse to license ebook titles to libraries, or impose artificial limitations, such as automatically removing a title after a certain number of circulations (Maier, 2014). Another approach is the “pay per” model, in which individuals are charged for access to single resources. For example, a Google search for the topic “commodification of information” returns several scholarly articles. In many cases, only the abstract of each article is freely available online; readers who wish to access the full text are prompted to pay as much as $40 for a single article.
In many cases, these same readers could request the article for free through interlibrary loan at their local library, but that option is not displayed on the result screen, and many people do not realize this option exists. These pricing models can lead to decreased access for both libraries and users.
In addition to affecting whether and how users access information, the commodification of information can also influence what gets created and published, since publishers only want to invest in material that will sell. Similarly, grant makers and foundations might not fund research in unpopular areas. In terms of scholarly communication, Lawson, Sanders, and Smith (2015) suggest that, as a result of this market orientation, certain research questions and topics will become marginalized as researchers choose not to pursue areas for which they cannot secure funding or which are not likely to get published. In an interview for Inside Higher Ed, Hans Radder suggests that this economic focus could lead to bad science, as scientific researchers are influenced by the pharmaceutical companies and other industries that fund their research. Further, he notes that “commodified research tends to focus on short-term economic gain, while a significant social function of academic research has always been to provide a more general ‘knowledge infrastructure’ that can be drawn upon when confronted with novel future challenges” (Jaschik, 2010). Commercial publications are likewise driven by market forces. Indeed, some critics suggest that the lack of diverse literary characters, especially in children’s books, is due in large part to the perception that there isn’t a large enough market to support such books (We Need Diverse Books, n.d.). As a result, whole communities of people cannot see themselves reflected in the books they read.
Access to quality information is a necessity. People need information to inform decision making, from whom to vote for in elections, to what to eat in order to be healthy, to which products are the best use of their money. In fact, it has been argued that access to information, though not codified in the United States Constitution or Bill of Rights, is actually a human right that underpins all other human rights (Bishop, 2011; Weeramantry, 1995). Thus, creating barriers to the access of information becomes a social justice issue, as “access to information, as well as the requisite education and skills necessary to participate effectively under current economic conditions is heavily influenced by social class” (Adair, 2010). This inequality involves more than access to technology such as computers, smartphones, and the internet; it also involves access to education and the ability to pay for information sources and services. It affects not just individuals but communities, institutions, and even whole nations that may lack the technological infrastructure to access information or the financial capital to pay for it. For instance, Lawson, Sanders, and Smith (2015) raise concerns about the production and dissemination of scholarly information, including the rising subscription costs of academic journals, which limit access to those who cannot afford such prices and effectively “provide a privileged and stratified access to this scholarly information and knowledge” (p. 2). Further, they note that in many cases the research being reported in these articles and journals was funded by government and taxpayer money, raising the question of why the general public cannot freely access information whose creation it paid for.
Given that the basic mission of libraries is to provide free and equitable access to information, the commodification of information raises both challenges and opportunities for librarians. To begin with, librarians have a role to play in raising awareness of and promoting the use of open access publications (Lawson, Sanders, & Smith, 2015). While traditional publishing models charge the end user—whether individuals, libraries, or other entities—to access the information they produce, open access resources make their publications available to users for free. While open access journals have gained some popularity, especially in the sciences, some scholars and critics still question their quality and refuse to publish with them. Librarians can help scholars and researchers understand the value of open access, highlighting the fact that articles in open access journals tend to reach a wider audience and get cited more frequently, and explaining that many open access journals are peer-reviewed just like traditionally published journals, so the quality of research should be comparable. Librarians can also help these scholars and researchers learn which copyrights they retain when publishing, and whether they can deposit copies of traditionally published articles in open access repositories.
Librarians also have a role to play in helping patrons develop critical information literacy skills. As noted above, many different interests influence publishing, which can lead to the spread of “bad science,” propaganda, and other misinformation. Unfortunately, once it is digested, poor information can be difficult to correct and can lead to misunderstandings and poor decision-making. As professionals skilled in research and in assessing the authority and credibility of sources, librarians can help users develop an informed skepticism, questioning material and digging deeper into sources in order to evaluate them before forming opinions or making decisions based on that information. Through formal education programs, tutorials and self-paced guides to resources, or one-on-one instruction on an as-needed basis, librarians can work with patrons to develop the competencies needed to evaluate and use information effectively.
Finally, librarians should raise awareness about the impacts of information commodification, and the role of the library with regard to this issue. In fact, I would suggest that the debate around whether we need libraries could be reframed around the issue of commodification of information—in a knowledge society, information has economic value. As such, it is not, nor is it ever likely to be, entirely free. Therefore, libraries are necessary to continue serving their original purpose of helping to provide free and equitable access to information in all formats. It is important for people to understand that vast quantities of information are not freely accessible, even with sophisticated technology and high-speed online access, and to consider how limitations on access affect participation in a democratic society. Public libraries in the United States, though funded by taxpayer money and operating within a local government structure, have a mission that requires them to act independently of these external influences, effectively creating what Habermas termed a “public sphere” where citizens can access information and debate topics (Webster, 1995). In an economic system where information has value and can be owned, bought, and sold, access to the information necessary for everyday decision making and participation in a democracy is limited to those who can afford it; libraries can help to bridge that gap and make information more widely accessible. By not engaging with this issue, librarians are potentially undercutting that role and the value that they bring to their communities.
Adair, S. (2010). The commodification of information and social inequality. Critical Sociology, 36(2): 243-263.
Lawson, S., Sanders, K., & Smith, L. (2015). Commodification of the information profession: A critique of higher education under neo-liberalism. Journal of Librarianship and Scholarly Communication, 3(1): eP1182. http://jlsc-pub.org/jlsc/vol3/iss1/1/
In the 2012 science fiction film Robot & Frank, a near-future public library is entirely staffed by just one librarian… and one robot. The robot, named Mr. Darcy, mans the circulation desk, shelves books, and answers reference questions. The librarian handles occasional administrative work. The main character, a patron old enough to remember the way libraries used to be, laments that the human touch has left the building.
This concern has been around for decades. Will machines replace librarians, leaving many unemployed? Will libraries become totally automated? Will robotics alienate librarians from their patrons? Unbound has surveyed the current state of the art in library robotics and found little cause for alarm. In public libraries, academic libraries, and museums, it seems that robots are bringing people closer together, not driving them apart.
Connecticut’s Westport Library has undertaken a similar robotics project, but on a different scale. Instead of five hundred simpler robots, they’ve acquired two enormously complex ones. The humanoid child-sized robots, named “Nancy” and “Vincent,” are part of French robotics company Aldebaran’s NAO line. The NAOs are equipped with video cameras, directional microphones, tactile sensors, Wi-Fi connectivity, sonar rangefinders, and a complex range of motion. With these input methods available, the robots can be programmed to exhibit complex behaviors. Examples include turning their heads to look at people who are speaking to them, identifying and manipulating objects, and pulling information from the internet to add to a conversation. Nancy and Vincent are charismatic machines, and they attract patrons into the library to see what they’re capable of. Like CPL’s Finch programs, the Westport Library holds training sessions where they teach patrons how to program the NAOs. They hold a series of classes of increasing complexity, Levels One through Three. Those who have completed at least the Level One class are invited to a weekly Open Lab, where attendees can work on their own original NAO coding projects, with assistance from a professional developer. They also hold weekly “Robot Viewing” sessions where they show off some of the behaviors that have been developed thus far – including a “Thriller” dance.
In academic libraries, robotics technology is being used to ease space constraints and make materials more readily accessible. The University of Technology, Sydney (UTS) has recently installed an enormous automated storage and retrieval system (called the Library Retrieval System, or LRS) underneath its library. UTS’s system takes the form of six large robotic cranes that tend to thousands of closely packed bins of books. When a patron requests a stored book from the online catalog, the LRS automatically springs into action. One of the cranes retrieves the appropriate bin and brings it to an employee, who pulls the requested book. The book is then delivered to the library’s hold shelf, where the patron can pick it up. The entire process generally takes about fifteen minutes. The LRS allows for extremely dense storage of books, obviating the need for an expensive and unwieldy off-campus storage facility and freeing up space in the library for new student-centered services like collaborative study spaces, makerspaces, and multimedia editing stations.
The LRS naturally raises questions about discoverability – if everything is packed into metal boxes underground, then how will students stumble upon works they didn’t know they were looking for? The library has taken steps to mitigate this problem. The collection’s most commonly used materials are still shelved out in the open for students to browse. Additionally, the UTS libraries have added new features to their online catalog to enhance discoverability. Their “collection ribbon” is an intuitive visual way to narrow search results, and their “shelf view” feature displays books as they would appear on the shelf, surrounded by the books they would have been shelved with. They intend to add recommendation-related features in the future. By quietly and efficiently freeing up space, the LRS seems primed to create new opportunities for human-to-human interaction in the library.
Museums are currently exploring robots for their capacity to improve accessibility. The de Young Fine Arts Museum in San Francisco recently acquired a pair of telepresence robots. These machines open up the gallery floor to patrons with disabilities that would otherwise prevent them from visiting in person. The robots, called BeamPros, are almost like mobile Skype avatars. A patron can reserve a Beam tour in advance, log into the robot from their computer at home, and then pilot it around the museum. The BeamPro is a wheeled frame, five feet two inches tall, holding up a screen, a microphone, speakers, and a camera. A live video feed of the patron’s face is displayed on the screen, and the camera picks up high-resolution video of the space for the pilot. A second camera points down toward the ground, allowing the pilot to avoid obstacles.
The BeamPro is fully mobile and under the full control of the pilot. Unlike a prerecorded video tour or interactive website, the BeamPro grants its user agency and physical presence. A person piloting a BeamPro can move about the space as they wish, and linger on a particular work of art as long as they like. They can interact with other museum patrons and with employees. Henry Evans, an advocate for and user of this technology, has this to say: “In five years, I would like to see museums all around the world at least experimenting with this technology, and in 10 years for it to be ubiquitous. It will be the next great ‘democratization of culture.’” Currently, Suitable Technologies, the developers of the BeamPro, are partnering with five museums around the country to pilot this remarkable use for their product. CBS News covered one of these partnerships in this video news report.
Despite the often-dire predictions of science fiction, in today’s landscape robots seem to be a largely beneficial presence in libraries and museums. Whether they’re used for education, organization, or accessibility, these machines are facilitating human connections, not eroding them. If we continue on the path we’re on, then Robot & Frank‘s Mr. Darcy will not be diminishing the library’s human touch any time soon.
Have you had any experience with robotics in a library, museum, or archive setting? Let us know about it in the comments!
You are teaching a college history course and you want to make copies of an essay for your students, but you don’t have the rights to it. You’re a rap musician and you want to sample an old jazz song, but you can’t find any definitive proof of who owns the rights to it, so you can’t ask permission. You’re a filmmaker working on a horror movie about a cult, and you want to use copyrighted historical footage of the Jonestown cult in your film. In each of these cases, Fair Use could save you from major legal consequences.
Fair Use is a doctrine of copyright law that allows limited use of a copyrighted work without the permission of its rights holder. Without Fair Use, derivative works including parody, satire, educational excerpts, and more would leave their creators exposed to infringement claims.
This past week was Fair Use Week, an annual celebration to spread awareness of Fair Use and to advocate for its continued strength. As our contribution to the cause, we here at Unbound would like to walk you through a hypothetical example of how Fair Use can be beneficial, from two different perspectives: education and the arts.
Let’s say a filmmaker is working on a documentary about media representations of the city of Sarasota, Florida. Across the street, a college professor is teaching a class about Sarasota’s history. Both of them acquire a copy of “White Sandy Beaches,” a short video commissioned by the Sarasota County government in 1953 to promote tourism. The filmmaker would like to use a clip from the video in her film, in order to comment on its factually inaccurate depiction of the city’s beaches. The history professor would like to show her class this video, as it includes unique footage of an important speech by a former mayor of Sarasota.
The history professor has used footage like this before in class, and knows that she doesn't have to worry about rights issues when displaying it in the classroom. However, this semester she would like to include the full video in the students' out-of-class studies and make it available to them as an online stream. She visits the university library and speaks with the librarian in charge of digitization.
The librarian decides to consult the ARL Code of Best Practices in Fair Use for Academic and Research Libraries (PDF link). The first section, "Supporting Teaching and Learning with Access to Library Materials via Digital Technologies", lays out both the requirements and the ideal procedures for making course material available online. Normally, streaming a copyrighted work online without permission would constitute infringement, but there are several factors that make it possible to do so legally in this case through Fair Use. This video was originally intended as television advertising for a city, to draw in tourists. Taking that video and placing it into an educational context transforms its usage such that the librarian would not be impinging on its original purpose. It is being viewed on an online instructional technology platform instead of television, and it is being used to teach students history rather than to advertise. These differences in context and usage make this work a good fit for Fair Use. If the librarian were digitizing a work that was originally created for academic educational purposes, then this usage would stand on shakier ground, as it would not significantly change the context or purpose of the work. If that were the case, it would be best to use only an excerpt.
Though this video is a good fit for educational Fair Use, there are still best practices to follow when making it available. It's important that only enrolled students, and faculty/staff working in an educational capacity, have access to the digitized video (this can be legally problematic when teaching MOOCs, as any member of the public can enroll and access the work). The video should be available only during the semester in which the class is taught. Students accessing the video should be presented with a full scholarly attribution, and a paragraph explaining the students' rights and responsibilities when accessing the material under Fair Use. With these measures in place, streaming the video is unlikely to create any legal trouble.
The documentary filmmaker has a much greater challenge in store for her, as she is not creating this work in a non-profit capacity or as part of an educational institution. If she is taken to court by the rights holder of "White Sandy Beaches," then she will not have an air-tight defense in the same way our academic librarian would. In the domain of the arts, Fair Use can be fairly subjective, and rulings can turn in large part on the interpretation of a case's judge. Fair Use is codified into law as a series of guidelines, not hard rules. The four factors that are considered in a Fair Use case are as follows:
The purpose and character of the use, including whether such use is of commercial nature or is for nonprofit educational purposes
The nature of the copyrighted work
The amount and substantiality of the portion used in relation to the copyrighted work as a whole
The effect of the use upon the potential market for, or value of, the copyrighted work
The filmmaker's best bet would be to first find the copyright holder of the video and ask for permission to use it. If she can officially license an excerpt of the video for her film, then she will not have to worry about Fair Use in the first place. In many cases, though, licensing is untenable for financial reasons. In our case, the filmmaker simply cannot identify the current rights holder. She attempted to seek out the film's copyright record using the US Copyright Office's web search but was unable to find it. Because ephemeral promotional videos like this one were relatively obscure and in use for only a short time, tracking down their owners can often pose a challenge. It's quite possible that the creators never filed an officially documented copyright claim on this film in the first place. Even so, they still own the copyright and can bring a legal challenge in the case of infringement.
In a world without Fair Use, this would be the end of the line for the filmmaker. Without any way to get permission from the rights holder, any usage of material from “White Sandy Beaches” would leave her open to a lawsuit for copyright infringement. She would have to make do without the excerpt, and her future audience would not have the chance to appreciate these rare images of the past from a city that is rarely filmed.
Luckily, her usage of this video does arguably fall under the realm of Fair Use, and there is a path she can take to safely show clips from it in her film. The Documentary Filmmakers’ Statement of Best Practices in Fair Use (PDF link) specifically discusses situations like this. If a filmmaker is “employing copyrighted material as the object of social, political, or cultural critique,” then such usage generally constitutes Fair Use. In the same way that an essay about a play would need to be able to quote that play, a film discussing another film must be able to quote that film. Because the filmmaker in our example is critiquing media representations of Sarasota, it is legally permissible for her to show clips of those media representations as part of that critique.
The one critical point to consider in this usage is this: the filmmaker's use of "White Sandy Beaches" must not damage the original work's market value by taking its place. In order to minimize the chance of that happening, the filmmaker would be best off using only portions of "White Sandy Beaches" and not the entire film. She would need to ensure that the footage is used to a different effect than the source material originally intended. As her documentary will be about media representation and not about selling Sarasota to tourists, the filmmaker does not have to worry about taking over the original film's market share. The two films will occupy very different places on the market. A Fair Use defense never guarantees a win in court, due to Fair Use's openness to interpretation, but this is certainly a strong enough case that the filmmaker could feel confident moving forward.
Through the careful application of Fair Use best practices, both librarians and artists are able to educate and entertain in ways that would otherwise be illegal. Because Fair Use is determined on a case-by-case basis, best practices are constantly evolving as court decisions create new precedents. New innovations in technology and communications have created possibilities for media appropriation that were unimaginable in 1976 when Fair Use was first codified: fanfiction, fan art, YouTube supercuts, music mash-ups, and so many other internet-based artworks are driving the law to keep up. MOOCs, which can accommodate many thousands of students from across the world, are pushing the boundaries of what Fair Use will allow. With luck and with strong advocacy, we’ll keep the protections we have, and see Fair Use expand to cover these new cases, keeping innovation in education and the arts possible.
The New York Public Library explains how Fair Use helps them accomplish education and preservation goals.