By Tim Ribaric and Cecile Farnum
There is a chorus in the media focused on the impact artificial intelligence will have on people’s working lives. Pundits are attempting to predict the future of work, motivated by the rapid and seemingly sudden explosion of generative AI tools that kicked off in the fall of 2022, when ChatGPT captured the imagination of the world.
It’s not as if AI tools didn’t exist before this — it was just that this tool, put in the hands of literally everyone, unleashed imaginations (and millions in venture capital). Now, it seems like there is an entrant in every area of the knowledge economy trying to sell an AI-based tool to bring about this new world of work.
This might seem far-fetched, but there is ample evidence that organized labour is facing this challenge already. Think back to the recent strikes in the United States by both the Screen Actors Guild and the Writers Guild of America. Many issues were at play in those disputes, but one key issue that received a lot of coverage in the press was the Hollywood studios’ desire to use AI tools and techniques to replace workers.
These plans even included creating digital avatars of performers that could be used by the studios forever, and without further compensation. Ultimately, the guilds were successful in bargaining protections against such measures, aided by a long and drawn-out strike.
It’s perhaps time to consider that a similar strategy will be needed by academic workers in the upcoming years. More specifically, a strategy for protecting the work of professional librarians.
Why librarians specifically? Consider that traditional faculty members have robust mechanisms to protect their intellectual property. Course materials for teaching are confined to course management systems, where access is mediated by the professor in question and usually shut down when the class is completed. (This is not a hard rule, but it reflects common practice closely enough.)
Research outputs are similarly protected by the individual faculty member. They choose what venues they will write for, and what content they will use for these purposes. Librarian work, by contrast, is ‘done out in the open.’
Librarians spend a large amount of time facilitating access to research material through many means, such as arranging subscriptions, completing in-class instruction, and creating online instructional content. This work is often done without enforcing any serious restrictions, except perhaps making sure the person seeking help is affiliated with the institution in question.
This transparency leads to a perception that this work might be ripe for automation with AI. Could a chatbot be devised that would replace the time-honoured librarian skill of the reference interview?
We’re guessing no, but that doesn’t mean some service provider won’t attempt it. Cash-strapped and haggard admin teams in university libraries across the country might just go for such a solution as it (potentially) would save money and (again potentially) increase service levels.
What recourse would there be if such a service were shopped around by a vendor? We’re going to guess not much. New norms would be established around these tools to guide expectations of what they could reliably do, but that misses the bigger point, and the battle would already be lost. Contract protections regarding AI tools, like those the guilds won, would at least produce defensible and, more importantly, grievable guidelines.
Organized labour has created many protections and provisions that probably seemed impossible before they were created. The old story about the weekend being the result of union efforts is a good example of this. It probably seemed impossible in the minds of workers to have any weekly time off, besides a brief respite on Sunday to attend church — and yet, here we are.
The really challenging question is what this collective agreement language would look like. It would probably have to articulate carefully what AI tools are, and what parts of the work are not eligible for automation via these tools.
If we look more closely at the recent contract language won by the Writers Guild of America, it tries to protect the labour of writers in film and television, restricting the ability of studios to use writing primarily generated by AI without the involvement of professional writers. We can imagine a similar clause in future collective agreements that affirms the humanness of librarians as central to the role, for example: “The University agrees that because neither traditional AI nor generative AI is a person, it cannot independently perform the duties and responsibilities of a librarian, as articulated in the collective agreement.”
Similarly, the Screen Actors Guild memorandum of agreement provides some protections against the use of synthetic performers created through generative AI. Collective agreement language to protect librarians from any future avatar or other ‘synthetic performers’ could potentially read as:
“The parties acknowledge the importance of human performance in librarianship and will not consider synthetic performers as equivalents.”
It may sound like science fiction, but we’re already seeing AI have an impact on labour in other fields. This might be tricky territory for our faculty associations and unions. It certainly is something that we all collectively need to think about, though, considering that these technologies show no signs of slowing down.
Tim Ribaric (@elibtronic) is the Digital Scholarship Librarian at Brock University and former member and chair of the CAUT Librarians’ and Archivists’ Committee.
Cecile Farnum is a liaison librarian at Toronto Metropolitan University Libraries who has participated in several rounds of collective bargaining and served on the CAUT Librarians’ and Archivists’ Committee.