    South African Journal of Libraries and Information Science

    On-line version ISSN 2304-8263; Print version ISSN 0256-8861

    SAJLIS vol. 91 n. 1, Pretoria, 2025

    https://doi.org/10.7553/91-1-2477 

    RESEARCH ARTICLES

     

    Lost in the algorithm: navigating the ethical maze of AI in libraries

     

     

    Sara Ezzeldin Aly Ibrahim

    Research Assistant at the Center for Documentation of Cultural and Natural Heritage, Cairo, Egypt. sara.ezzeldin@bibalex.org. ORCID: 0009-0002-7436-0191

     

     


    ABSTRACT

    As libraries integrate artificial intelligence (AI), they face a complex ethical maze. This paper explores seven maze chambers in AI-powered libraries. "Data Bias and User Representation" examines how AI algorithms may perpetuate biases, leading to unfair recommendations and limited access for diverse users. "Privacy and Patron Confidentiality" highlights risks in collecting and analysing user data, stressing the importance of anonymisation and user control. "Algorithmic Transparency and User Trust" explores strategies like Explainable AI (XAI) to help users understand and trust AI decision-making processes. "The Human Librarian in the Age of AI" addresses the evolving role of librarians, emphasising the need to balance AI efficiency with human expertise. "AI and Accessibility for Diverse Users" looks at how AI can improve accessibility for people with disabilities or language barriers, while also stressing the importance of mitigating biases to ensure inclusivity. "Ethical Procurement and Vendor Practices" and "Community Engagement and Open Dialogue" focus on responsible vendor selection and transparent communication with users. Acknowledging the pioneering efforts of the Bibliotheca Alexandrina, the paper calls for ongoing ethical considerations to ensure AI benefits all library patrons, fostering inclusive, user-centred institutions in the digital age.

    Keywords: AI ethics, human-AI symbiosis, inclusive AI design, privacy, data bias


     

     

    1 Introduction

    We are witnessing the fourth industrial revolution, where artificial intelligence (AI) is embedded in many domains and in every aspect of our daily lives, whether we are conscious of its role or not. Given the polarised feelings about AI, we all have a responsibility to understand it in order to join the conversation logically. As previously stated (Aly 2023):

    AI is an exploration of methods by which "computers can mimic human capabilities" ... while Machine Learning (ML), a subfield of AI, shows computers a large amount of data and instructs them to learn from it and eventually the computers are taught to do this "mimicking" on their own (Lakshmanan et al. 2021). Nowadays, the hype the field is getting is due to the "darling" of ML techniques also known as Deep Learning (DL) (Vaughan 2020). DL, its name stems from achieving its learning goal through deep Neural Networks (NNs), is aided by tremendous computing power and vast amounts of data to achieve unmatchable results compared to other ML techniques (Géron 2019).

    Within the spectrum of AI, two primary categories are often discussed (Bostrom 2014): weak AI and strong AI. Strong AI, or artificial general intelligence (AGI), is an AI system with generalised human cognitive abilities that can intelligently tackle unfamiliar tasks. Surprisingly, all AI systems today are considered weak AI, meaning they excel at specific tasks.

    According to Encyclopaedia Britannica, ethics is the branch of philosophy that studies morality. It is concerned with right and wrong, as well as evaluating human conduct, including our interactions with technology (Pant et al. 2024). Consequently, understanding AI is essential for ensuring its ethical development and use.

    One domain now standing at a crossroads is libraries, where AI presents a compelling opportunity to revolutionise these institutions (EBSCO 2024). However, this exciting development is accompanied by a complex web of ethical considerations that demand careful attention. While some research has explored the potential benefits of AI in libraries, as illustrated in Table 1 (EBSCO 2024; Ashikuzzaman 2024; Pacific University Library 2023; Marshall & DuBose 2024), a growing body of research is highlighting the ethical challenges associated with implementing AI in libraries (Mishra 2023). This paper aims to contribute to this critical area by delving into the "ethical maze" of AI-powered libraries.

    The metaphor of a maze aptly captures the interconnected challenges and opportunities that arise from AI integration. The paper is structured as a navigation of this intricate maze, confronting seven key ethical chambers:

    Data Bias and User Representation: Can AI algorithms inherit and perpetuate biases present in the data they are trained on?

    Privacy and Patron Confidentiality: What are the potential privacy risks associated with collecting and analysing user data for AI applications?

    Algorithmic Transparency and User Trust: How does AI arrive at recommendations and search results?

    The Human Librarian in the Age of AI: Are human librarians reduced to mere means?

    AI and Accessibility for Diverse Users: Can AI facilitate accessibility for patrons with disabilities or language barriers?

    Ethical Procurement and Vendor Practices: How should libraries select AI vendors?

    Community Engagement and Open Dialogue: How can libraries reassure patrons and encourage them to accept AI?

    Through a systematic review of existing research, this paper addresses the above thorny ethical problems. It seeks to answer the main research question: "How can libraries ethically integrate AI to enhance user experience, ensure inclusivity, and maintain trust while addressing challenges related to data bias, privacy, transparency, accessibility, procurement practices and community engagement?" The paper then examines the preparations undertaken by the Bibliotheca Alexandrina, a pioneer in information access, in navigating the demanding landscape of this era to advance its social mission through the responsible use of AI technologies (IFLA 2020). This is achieved by starting in-house with its staff and reaching out to patrons to promote ethical AI in libraries. The paper offers a guide through the maze, with morally responsible AI-powered libraries as the conceptual goal to uphold. It also suggests that AI morality in libraries is a question that needs to be asked and evaluated regularly as this technology evolves.

     

    2 Chamber 1: Data Bias and User Representation

    Data serves as the lifeblood of AI, meaning that the performance and reliability of AI algorithms hinge on the quality and representativeness of the data they are trained on (Nagarajan 2024). If the data used to train these algorithms reflects existing biases, the algorithms themselves can perpetuate and amplify those biases. This phenomenon can lead to unfair recommendations, exclusion of diverse viewpoints, limited access for marginalised user groups and, ultimately, patron mistrust of libraries, undermining the core principles of library inclusivity. The IBM team divides the sources of bias in AI into three categories (IBM Data and AI Team 2023), as illustrated in Figure 1.

    One might argue that human librarians can, in fact, have personal biases that influence their recommendations. For instance, they might recommend materials that align with their own views and subconsciously overlook others. However, this can be corrected when patrons explain their interests further. Moreover, a biased librarian has far fewer interactions, and far less reach, than an AI-powered system.

    Users from underrepresented communities or with specific research interests not reflected in the training data could be left without access to relevant resources, hindering their research efforts. Content personalisation, which can affect intellectual freedom, is one of the key AI applications. According to a 2019 UNESCO study, two main concepts describe the potential adverse effects of algorithmic curation (IFLA 2020):

    Filter bubbles: Limit the scope of information a user is exposed to by delivering content tailored to their interests, based on user characteristics and past engagement (IFLA 2020).

    Echo chamber: A phenomenon where exposure to similar or repeated information can reinforce and strengthen a user's views or beliefs (IFLA 2020).

    Only when computational and statistical sources of bias are added to human and systemic biases do we get the full picture of the biases affecting AI systems. Only once all of them are identified can we begin to create strategies to mitigate bias, ensure the core principles of library inclusivity are applied and avoid unintentional infringement on user privacy (NIST 2022).
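    To make the idea of auditing for representativeness concrete, the following minimal sketch (in Python, and not drawn from any particular library system) compares the share of each group in a hypothetical loans dataset with a reference distribution and flags groups that fall short. The field name "language", the reference shares and the tolerance are illustrative assumptions only.

    from collections import Counter

    def representation_report(records, field, reference_shares, tolerance=0.05):
        """Compare each group's share of the training data with a reference share
        (e.g., its share of the full catalogue or of registered patrons) and flag
        groups that are under-represented beyond the tolerance."""
        counts = Counter(r[field] for r in records)
        total = sum(counts.values())
        flagged = {}
        for group, expected in reference_shares.items():
            observed = counts.get(group, 0) / total if total else 0.0
            if observed + tolerance < expected:  # under-represented beyond tolerance
                flagged[group] = {"observed": round(observed, 3), "expected": expected}
        return flagged

    # Toy data: Arabic-language loans fall well below their assumed catalogue share,
    # so they are flagged for review before the data is used for training.
    loans = [{"language": "English"}] * 80 + [{"language": "Arabic"}] * 10 + [{"language": "French"}] * 10
    print(representation_report(loans, "language", {"English": 0.5, "Arabic": 0.35, "French": 0.15}))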

     

    3 Chamber 2: Privacy and Patron Confidentiality

    The American Library Association (2006) states that:

    All people ... possess a right to privacy and confidentiality in their library use. Protecting user privacy and confidentiality has long been an integral part of the mission of libraries ... The right to privacy includes the right to open inquiry without having the subject of one's interest examined or scrutinised by others, in person or online. Confidentiality exists when a library is in possession of personally identifiable information about its users and keeps that information private on their behalf.

    Libraries cannot fulfil their mission without protecting their patrons' privacy and confidentiality (American Library Association 2006). The integration of AI in libraries raises several concerns regarding patron privacy. The following are some key aspects to consider:

    Data Collection Practices: Content personalisation and intelligent recommendations often rely on the collection of user data. Thus, transparency in data collection practices is crucial. Libraries have an ethical obligation to clearly inform users about what data is being collected, how it will be used and for what purposes (Saeidnia 2023). Techniques like anonymisation can be employed to further protect user privacy. An anonymised dataset can still be used for AI training purposes while minimising the risk of identifying individual users. However, it is important to acknowledge that complete anonymisation might not always be possible, and libraries should be transparent about the limitations of these techniques.

    Data Retention and Disposal: Libraries need to establish clear policies for data retention and disposal. Determining how long collected user data will be retained and outlining a secure disposal process are crucial for user privacy. Additionally, libraries should provide users with mechanisms to request the deletion of their data at any point, adhering to the principles of "data minimisation" and user control over their information; a minimal pseudonymisation and retention sketch follows this list.

    Surveillance and User Tracking: The potential for AI-powered libraries to use surveillance technologies raises significant privacy concerns. Facial recognition software, for example, could be used to track user movements within the library. While this might sound like a security measure, it raises ethical questions about user surveillance and the potential for chilling effects on user behaviour. Studies have shown that users under mass surveillance often feel pressured to adjust their behaviour (Murray et al. 2024). Libraries should resist the urge to implement such invasive technologies without clear guidelines and transparent communication with users regarding the purpose and limitations of any monitoring systems in place (Fortier & Burkell 2015).
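    As a concrete illustration of the anonymisation and retention practices described above, the following minimal sketch pseudonymises patron identifiers with a salted hash and purges records older than a declared retention window. The field names, the handling of the salt and the one-year window are assumptions made for illustration, not a prescribed implementation, and salted hashing reduces rather than eliminates re-identification risk.

    import hashlib
    from datetime import datetime, timedelta, timezone

    SALT = b"rotate-this-secret-regularly"   # in practice stored securely, never alongside the data
    RETENTION = timedelta(days=365)          # hypothetical retention policy

    def pseudonymise(record):
        """Replace the patron identifier with a salted hash and drop direct identifiers."""
        token = hashlib.sha256(SALT + record["patron_id"].encode()).hexdigest()
        return {"patron_token": token,
                "item_subject": record["item_subject"],
                "timestamp": record["timestamp"]}

    def purge_expired(records, now=None):
        """Keep only records younger than the declared retention window."""
        now = now or datetime.now(timezone.utc)
        return [r for r in records if now - r["timestamp"] <= RETENTION]

    raw = [{"patron_id": "P-1042", "item_subject": "Egyptology",
            "timestamp": datetime.now(timezone.utc) - timedelta(days=400)},
           {"patron_id": "P-2217", "item_subject": "Machine learning",
            "timestamp": datetime.now(timezone.utc) - timedelta(days=30)}]

    training_ready = [pseudonymise(r) for r in purge_expired(raw)]
    print(training_ready)   # only the recent record survives, with no raw patron ID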

    To combat the above concerns and bolster patrons' trust, several strategies can be implemented:

    Privacy-by-Design Principles: These should be prioritised throughout every stage of the development and implementation of AI-powered systems.

    Privacy Impact Assessments (PIAs): Conducting them prior to the implementation of any AI-powered system is crucial. PIAs involve a systematic process for identifying the potential privacy risks associated with AI systems, evaluating the impact on users, and developing mitigation strategies.

    Independent Oversight and Accountability Mechanisms: Advisory boards comprising privacy experts, librarians and user representatives can evaluate the ethical implications of AI projects and ensure compliance with data privacy regulations (Saeidnia 2023). Independent audits of AI systems can further enhance accountability and provide valuable feedback for improving privacy safeguards.

    Continuous Improvement and Adaptability: Libraries should adopt a continuous improvement mindset, regularly reviewing and adapting their data privacy practices to reflect emerging technologies and evolving user expectations.

     

    4 Chamber 3: Algorithmic Transparency and User Trust

    It is imperative for library patrons to have trust in their AI-powered libraries if the potential benefits of AI are to be reaped. A key factor in establishing this trust is algorithmic transparency, whereby patrons are provided with clear insights into how AI algorithms operate within the library environment. The challenge of algorithmic transparency stems from the inherent complexity of many AI algorithms used in libraries. These algorithms can be intricate mathematical models that process vast amounts of data. One cannot simply see inside a "black box", but a glass one is in the making.

    Transparency is achieved only when patrons know how their data is processed and used to generate recommendations, search results or other AI outputs. A transparent AI equates to a trustworthy AI. When transparency is lacking, several concerns arise, such as:

    Mistrust and Scepticism: Patrons might distrust AI recommendations if they do not understand the rationale behind them. For instance, when users encounter a biased recommendation, they may suspect that the algorithm prioritises certain resources over others without clear justification, raising concerns about the system's objectivity and overall credibility.

    Loss of Control and Agency: Without transparency, patrons might feel a sense of diminished control over their information and how it is used within AI systems. This can be particularly alarming for patrons seeking sensitive information or conducting research on private topics.

    Reduced User Engagement: If patrons mistrust AI-powered features, they might be less likely to engage with them.

    Fortunately, several strategies can be employed to demystify AI algorithms and foster a culture of transparency within libraries:

    Explainable AI (XAI) Techniques: XAI plays a central role and offers a range of techniques to make AI algorithms more interpretable. It is defined by IBM as "a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms" (IBM 2024). These techniques can provide patrons with explanations for recommendations, highlighting the factors that influenced the outcome. Additionally, libraries can use visualisation tools to present the decision-making process of the AI system in a user-friendly format, allowing patrons to visualise how their data contributes to the final recommendation (Dwivedi et al. 2023); a minimal explanation sketch follows this list.

    Transparency by Design: Transparency should be prioritised throughout every stage of the development and implementation of AI-powered systems. This involves clearly communicating the purpose of AI features, the types of data being used and the potential limitations of the algorithms themselves. An example can be seen in the privacy policy of OpenAI, the owner of ChatGPT, which states, "you should not rely on the factual accuracy of output from our models" and highlights that AI-generated outputs rely on probabilistic predictions rather than guaranteed factual accuracy (OpenAI 2024). Libraries should ensure that patrons are aware of such limitations to foster trust and empower them to critically evaluate the AI-provided results.

    Mitigating Algorithmic Bias: Libraries should employ diverse training datasets, implement bias detection techniques and continuously evaluate the performance of AI systems to identify and address potential biases.

    Addressing the "Filter Bubble" Effect: Libraries can mitigate this by incorporating features that allow users to explore diverse viewpoints and broaden their information landscape. For instance, libraries can present users with a curated selection of resources alongside personalised recommendations, ensuring exposure to a wider range of perspectives.

    Managing User Expectations: AI is not a perfect substitute for human expertise or critical thinking skills. Libraries should emphasise the role of AI as a tool to enhance the research experience, not replace it. By setting realistic expectations, libraries can foster a culture of responsible AI use and empower users to critically evaluate the information presented by AI systems.

    Accountability and Independent Oversight: Independent audits of AI systems can provide valuable feedback for improving transparency and mitigating potential risks.
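    As a concrete illustration of the feature-level explanations that XAI aims for, the following minimal sketch scores catalogue items by subject overlap and returns, alongside the recommendation, the shared subject headings that produced it. The titles, subject headings and the Jaccard scoring rule are illustrative assumptions, not any particular vendor's method.

    def recommend_with_explanation(user_subjects, catalogue):
        """Score items by subject overlap and return the best match with its rationale."""
        best = None
        for title, subjects in catalogue.items():
            overlap = user_subjects & subjects
            score = len(overlap) / len(user_subjects | subjects)   # Jaccard similarity
            if best is None or score > best[1]:
                best = (title, score, overlap)
        title, score, overlap = best
        return {"recommendation": title,
                "score": round(score, 2),
                "because_you_read_about": sorted(overlap)}

    # Invented patron profile and catalogue entries, purely for illustration.
    profile = {"papyrology", "ancient libraries", "conservation"}
    catalogue = {"Digitising Fragile Manuscripts": {"conservation", "digitisation", "papyrology"},
                 "Modern Library Architecture":    {"architecture", "public space"}}
    print(recommend_with_explanation(profile, catalogue))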

     

    5 Chamber 4: The Human Librarian in the Age of AI

    Is AI writing the final chapter for human librarians? Fabio Zanzotto (2019) describes AI as the biggest knowledge thief of modern times, arguing that learning AI machines are "extracting knowledge from unaware skilled or unskilled workers by analysing their interactions. By passionately doing their jobs, these workers are digging their own graves." Today, the focus is on fostering a symbiotic relationship where human expertise and AI capabilities work in tandem to empower patrons.

    Human librarians have endured over time and cannot be replaced, at least for now, as they possess valuable strengths that cannot be taught to AI models:

    Critical Thinking and Information Literacy Skills: In today's information-saturated world, where users require guidance in navigating the vast and often unreliable online landscape, human librarians' expertise is crucial. AI algorithms can struggle with these nuances, making human librarians essential for fostering information literacy and responsible research practices (Frederick 2020).

    Empathy, Emotional Intelligence, and User-Centred Approach: Human librarians can connect with users at an emotional level, understand their unique research needs and tailor their assistance accordingly. This empathetic and user-centred approach builds trust and fosters a positive learning environment within the library. While AI can be programmed to personalise recommendations, it currently lacks the ability to build genuine rapport or cater for the emotional aspects of the research process, and it may give inappropriate responses.

    Expertise in Subject Areas and Specialised Knowledge: Subject librarians who have advanced degrees and specialised knowledge in specific subject areas provide in-depth research assistance, curate relevant resources and connect users with valuable domain-specific information beyond what AI algorithms might readily indicate.

    Creativity and Problem-Solving Skills: When users encounter research roadblocks or require assistance with nonstandard information needs, a librarian's creativity can offer fresh perspectives and lead to new avenues of discovery. AI, while adept at pattern recognition and data analysis, often struggles with the kind of out-of-the-box thinking that human librarians can bring to the table (Frederick 2020).

    Champions of Ethics and Morals: During the Ottawa Conference, several speakers highlighted that machines lack a moral compass. Ethics and morals can have context-specific aspects and ambiguities that are difficult to "teach" a machine to adjudicate. The unique perspective of librarians may prove useful in addressing this limitation of AI (Frederick 2020).

    Human librarians can collaborate with AI to free up their valuable time for more complex research queries and personalised assistance by following these survival approaches:

    Complementary Service Models: Combine the strengths of human librarians and AI, for instance through a two-tiered approach where AI-powered chatbots provide basic information and answer frequently asked questions while human librarians tackle higher-level inquiries; a minimal routing sketch follows this list.

    AI-augmented Research Workflows: By automating tedious tasks like literature searches or data analysis, librarians can focus on higher-level tasks like interpreting research findings and providing strategic research guidance.

    Human-in-the-Loop Artificial Intelligence (HIT-AI): HIT-AI can be viewed as "a possible antidote to the poisoning of the job market" and a way of "giving the right value to the knowledge producers". Under this model, AI systems have a clear knowledge lifecycle and a clear creditor (who can, where possible, profit monetarily) for the knowledge used in a specific deployment or situation (Zanzotto 2019).

    Training and Upskilling Librarians: Equipping human librarians with the necessary skillset allows them to effectively bridge the human-AI divide, understand the limitations of AI systems and use them as powerful tools to enhance their information services.
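    The two-tiered service model mentioned above can be pictured with the following minimal sketch: an automated layer answers recognised frequently asked questions and everything else is escalated to a human librarian. The FAQ entries and the simple keyword-matching rule are placeholders for whatever knowledge base and matching logic a library actually deploys.

    FAQ = {
        "opening hours": "The library is open 09:00-19:00, Sunday to Thursday.",
        "renew a loan": "You can renew loans from your account page or at any service desk.",
    }

    def route_query(query):
        """Answer from the FAQ when a known topic is clearly mentioned; otherwise escalate."""
        q = query.lower()
        for topic, answer in FAQ.items():
            if topic in q:
                return {"handled_by": "chatbot", "answer": answer}
        return {"handled_by": "librarian",
                "answer": "Your question has been passed to a librarian who will reply shortly."}

    print(route_query("What are your opening hours on Monday?"))
    print(route_query("I need primary sources on 19th-century Alexandrian trade."))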

     

    6 Chamber 5: AI and Accessibility for Diverse Users

    AI-powered library systems must be designed with inclusivity in mind for all patrons within libraries. This means ensuring that these systems are accessible to a diverse range of users, including those with disabilities, varying cognitive abilities and literacy levels, and those who speak different languages. This chamber explores the ethical imperative of AI accessibility and outlines strategies for creating AI-powered libraries that are truly inclusive.

    Libraries serve a diverse population with varying needs and abilities. According to the World Health Organization (WHO) (2023), "an estimated 1.3 billion people - or 16% of the global population - experience a significant disability today". These potential patrons may require assistive technologies and must be considered in order to achieve the most inclusive design possible. Table 2 offers a glimpse into some user groups with specific accessibility considerations when implementing AI.

    However, it is important to acknowledge that AI itself is not a silver bullet for accessibility. To ensure AI-powered libraries are accessible to all, several strategies can be implemented:

    User-Centred Design: Involving patrons with disabilities in the design and development of AI systems is crucial. Their firsthand experiences and insights can help identify potential accessibility barriers and ensure AI systems are designed with inclusivity in mind.

    Accessibility Standards and Guidelines: Libraries should adhere to established accessibility standards and guidelines, such as the Web Content Accessibility Guidelines (WCAG), which provide a framework for ensuring digital content is accessible to people with disabilities; one small automatable check is sketched after this list.

    Regular Testing and Evaluation: Regularly testing AI systems with users from diverse backgrounds helps identify and address accessibility issues. This iterative process ensures that AI systems remain inclusive as technologies evolve.

    Accessibility Training for Staff: Library staff should be trained in accessibility best practices and how to use AI-powered library systems with users who have disabilities. This empowers staff to provide informed and inclusive assistance to all users.
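    One small, automatable slice of such testing is illustrated by the following sketch, which uses only the Python standard library to flag images lacking alternative text in a page of the library's web interface. Real WCAG conformance covers far more success criteria, so this is an example of folding checks into regular testing rather than a complete audit.

    from html.parser import HTMLParser

    class MissingAltChecker(HTMLParser):
        """Collect the sources of <img> elements whose alt text is missing or empty."""
        def __init__(self):
            super().__init__()
            self.missing = []

        def handle_starttag(self, tag, attrs):
            if tag == "img":
                attrs = dict(attrs)
                if not attrs.get("alt"):                       # absent or empty alt text
                    self.missing.append(attrs.get("src", "<unknown source>"))

    # Hypothetical snippet of a catalogue page; only the first image lacks alt text.
    page = '<img src="reading-room.jpg"><img src="logo.png" alt="Bibliotheca Alexandrina logo">'
    checker = MissingAltChecker()
    checker.feed(page)
    print(checker.missing)   # ['reading-room.jpg']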

     

    7 Chamber 6: Ethical Procurement and Vendor Practices

    Libraries, as trusted institutions dedicated to information access and education, have a responsibility to ensure that the AI technologies they implement align with their ethical commitments and core values. Ethical procurement practices go beyond simply acquiring functional technology; they delve into the social and environmental impact of the development process. In the context of AI, the following are reasons why ethical procurement is crucial (Saleh 2023):

    Mitigating Bias and Algorithmic Fairness: Ethical procurement requires assessing vendor practices for mitigating bias in AI development and ensuring algorithms are fair and objective in their recommendations and decision-making processes.

    Data Privacy and Security: Ethical procurement requires scrutiny of vendor data security measures to ensure sensitive user data is protected against unauthorised access, misuse or breaches.

    Transparency and Accountability: Ethical procurement necessitates choosing vendors who prioritise transparency to foster user trust, enable libraries to understand the rationale behind AI recommendations and hold vendors accountable for responsible development practices. Libraries must overcome the challenge that vendors may not always readily disclose detailed information about their AI development practices. This is evident when examining the wave of lawsuits filed against AI giants in Table 3.

    Environmental Sustainability: Since the development and training of AI models can require significant computational resources, ethical procurement involves considering a vendor's commitment to environmental sustainability and minimising the environmental impact of AI adoption within the library.

    Libraries can establish strong vendor relationships that promote ethical AI practices by employing the following strategies:

    Developing Clear Procurement Policies: Libraries should develop clear procurement policies that explicitly outline ethical considerations for AI vendors. These policies should address issues such as data privacy, bias mitigation, algorithmic transparency and environmental sustainability. By clearly communicating these expectations, libraries set the stage for responsible AI development from the outset.

    Vendor Risk Assessment: Libraries should conduct thorough vendor risk assessments before acquiring AI technologies. These assessments should evaluate a vendor's track record on data security, responsible AI development practices and commitment to environmental sustainability. This allows libraries to make informed decisions about which vendors will best uphold their ethical standards; a weighted-scorecard sketch follows this list.

    Open Communication and Collaboration: Building trusting partnerships with vendors is essential. Libraries should maintain open communication channels with vendors to discuss ethical concerns, address potential issues related to bias or data privacy and explore collaborative initiatives to foster the responsible development of AI for libraries.

    Independent Audits and Certifications: Libraries can consider partnering with independent organisations that specialise in auditing AI systems for bias, fairness and adherence to ethical principles. Additional certifications, such as those related to data privacy like General Data Protection Regulation, can offer further assurance that vendors prioritise responsible practices.

    Supporting Open-Source AI Initiatives: By supporting open-source AI projects, libraries can contribute to the creation of transparent and accessible AI technologies that prioritise responsible development practices.
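    One way to make the vendor risk assessment described above explicit is a weighted scorecard, sketched minimally below. The criteria, weights and acceptance threshold are illustrative assumptions that a real procurement policy would define for itself.

    CRITERIA_WEIGHTS = {
        "data_privacy_and_security": 0.30,
        "bias_mitigation_evidence":  0.25,
        "algorithmic_transparency":  0.25,
        "environmental_sustainability": 0.20,
    }
    ACCEPTANCE_THRESHOLD = 0.7   # hypothetical minimum weighted score

    def assess_vendor(scores):
        """Combine per-criterion scores (0.0-1.0) into a weighted total and a verdict."""
        total = sum(CRITERIA_WEIGHTS[c] * scores.get(c, 0.0) for c in CRITERIA_WEIGHTS)
        return {"weighted_score": round(total, 2),
                "recommendation": "proceed" if total >= ACCEPTANCE_THRESHOLD else "seek clarification or reject"}

    print(assess_vendor({"data_privacy_and_security": 0.9,
                         "bias_mitigation_evidence": 0.6,
                         "algorithmic_transparency": 0.7,
                         "environmental_sustainability": 0.8}))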

    Two remaining constraints on achieving ethical procurement can be simplified to "time and money":

    The Everchanging Landscape of AI: AI is a dynamic field, making it challenging for libraries to stay abreast of emerging ethical considerations and adapt their procurement practices accordingly.

    Cost Considerations: Ethical AI development may involve additional costs associated with data security measures, bias mitigation techniques and transparency tools. Libraries need to balance ethical considerations with budgetary constraints.

     

    8 Chamber 7: Community Engagement and Open Dialogue

    Community engagement is crucial for building trust in AI-powered libraries. Ultimately, it ensures that AI implementation within libraries serves the greater good and reflects the values and priorities of the community it aims to empower. Open dialogue with library users and stakeholders benefits AI implementation in identifying community needs and priorities, mitigating bias and algorithmic fairness, promoting transparency and user trust, identifying ethical considerations and building community ownership.

    Sceptics, who may rightfully still hold a low opinion of AI, point to valid challenges encountered in AI-powered libraries, such as:

    Balancing Technical Complexity with Public Understanding: Effectively communicating the technical intricacies of AI to a diverse audience is essential. Libraries need to develop clear and accessible language to ensure the community can understand AI capabilities and limitations.

    Encouraging Participation from Diverse Groups: Reaching out to various community segments and ensuring participation from historically marginalised groups can be challenging. Libraries need to employ targeted outreach strategies to ensure diverse voices are represented in the conversation about AI.

    Mitigating Misinformation and Fearmongering: Misinformation can fuel anxieties surrounding AI. Libraries can play a crucial role in combating misinformation by providing AI facts and facilitating constructive dialogue focused on responsible development and implementation.

    Libraries can employ various strategies to encourage open dialogue and community engagement regarding AI:

    Community Forums and Workshops: Organising public forums and workshops creates a platform for open discussions about AI in libraries. These sessions can be used to educate the community about AI capabilities, potential benefits and drawbacks, patrons' rights in the AI ecosystem (e.g., data privacy rights, opt-out of personalised features or data collection practices, accessibility options) and solicit feedback on proposed AI implementations.

    Citizen Science Initiatives: Libraries can partner with universities or research institutions to involve community members in citizen science projects related to AI development. This allows community members to contribute directly to shaping AI technologies for the library environment.

    Surveys and Focus Groups: Conducting surveys and focus groups allows libraries to gather in-depth feedback from community members on their concerns (e.g., privacy preferences, AI transparency), expectations and hopes regarding AI within the library. This feedback can be used to inform the development and implementation of AI systems.

    Interactive Exhibits and Demonstrations: Libraries can create interactive exhibits and demonstrations that showcase how AI technologies work within the library. This allows users to experience AI firsthand and engage in discussions about its potential applications and limitations.

    Engaging Diverse Community Voices: It is crucial to ensure that community engagement efforts involve voices from diverse backgrounds and perspectives. This ensures that the concerns and priorities of all community members are heard and addressed when developing AI-powered library services.

     

    9 An Example: Bibliotheca Alexandrina

    Drawing inspiration from the ancient Library of Alexandria and reclaiming its mantle, the modern Bibliotheca Alexandrina (BA) aims to be a pioneer in the digital age while upholding ethical principles (Bibliotheca Alexandrina 2002). The BA caters for all users, regardless of any disabilities or special needs, to fulfil its mission of promoting the development of an independent, self-confident, and literate citizenry through the provision of open access to cultural, intellectual and informational resources of all types.

    Central to the BA's mission is its commitment to intellectual freedom, which is reflected in its role as one of the depository libraries for the World Intellectual Property Organization (WIPO). Considering the ethical challenges posed by AI, such as issues surrounding intellectual property and the identity of AI system creditors and copyright holders, the BA is well-positioned to contribute to these critical discussions. Its dedication to community engagement and open dialogue is evident through the numerous conferences, lectures, webinars and workshops it hosts, many of which address the growing societal concerns surrounding AI in Egypt.

    Over the years, the BA has increasingly focused on AI, as evidenced by the rising number of events featuring the keyword "artificial intelligence" in its official news section: three mentions in 2022, 12 in 2023 and 23 in 2024. Embracing the global trend of AI adoption, the BA encourages its staff to explore, critique and develop AI tools. For example, the information and communication technology (ICT) sector is currently developing a custom BA chatbot, while Scopus AI is being used to empower the academic community in their research endeavours. Furthermore, the BA prioritises capacity building for the age of AI by continuously training its librarians, equipping them with the skills to navigate and assist others in using online AI tools effectively. With over 330 computers available, librarians can provide hands-on guidance in integrating AI tools into research and other activities.

    The BA also integrates generative AI into its daily operations. Staff members use AI features embedded in software such as the Adobe Suite to produce a variety of creative and academic outputs. By adopting AI technologies responsibly, the BA aims to serve as a moral exemplar, demonstrating how AI can be harnessed for societal benefit while upholding ethical standards. In doing so, the BA underscores its dual commitment to innovation and the ethical use of emerging technologies.

     

    10 Conclusion

    Libraries, once quiet sanctuaries of knowledge, are now intricate ecosystems where humans and algorithms coexist. The integration of AI into libraries presents a significant opportunity to enhance the user experience and improve service delivery. However, it also raises complex ethical considerations that must be addressed to ensure that AI technologies are implemented responsibly and in alignment with the core values of libraries, namely, privacy, inclusivity, transparency and accountability.

    Throughout this paper, we navigated the ethical maze of AI in libraries, examining the seven key chambers that represent the most pressing concerns. From data bias and user representation to the evolving role of the human librarian in the age of AI, we have explored how these challenges must be mitigated to avoid undermining the trust of library patrons. Privacy and patron confidentiality, especially given the growing concerns over AI's data usage, demand vigilant attention, as do the issues of algorithmic transparency and user trust, which must be central to any AI integration strategy.

    We also discussed the importance of ethical procurement and the potential legal challenges that libraries face when adopting AI tools, particularly considering the wave of allegations against AI companies like OpenAI, Microsoft and GitHub. These cases underscore the need for libraries to be proactive in selecting AI vendors who align with their ethical values and ensure that AI technologies respect privacy and intellectual property rights. The ongoing legal challenges and ethical debates demonstrate the evolving landscape of AI, urging libraries to continuously evaluate the impact of AI and adjust their policies and practices accordingly.

    A significant takeaway from this discussion is the role of community engagement. As libraries move forward with AI adoption, engaging patrons and fostering open dialogue will be critical to building trust and addressing concerns. Libraries must embrace a user-centred approach, ensuring that AI tools are accessible, fair and aligned with the values of the communities they serve.

    Looking ahead, libraries must not only focus on the immediate opportunities AI provides but also on the long-term ethical implications. The Bibliotheca Alexandrina, for example, demonstrates how libraries can pioneer AI adoption while upholding ethical principles, serving as a beacon for others to follow. As the landscape of AI continues to evolve, libraries must remain committed to a continuous process of evaluation, ensuring that AI technologies are used in ways that benefit all patrons and advance the broader mission of knowledge, access and equity.

    Finally, it is of utmost importance to keep the conversation going, since AI is here to stay. The rapid pace at which AI technologies are advancing means that their ethical, legal and societal implications will continue to evolve. Libraries must maintain an ongoing dialogue, not only within their own institutions but also with the broader public and policymakers, to ensure that AI serves the best interests of society. AI is not a passing trend but a transformative force that will shape the future of information access, education and community engagement. By fostering continuous reflection, collaboration and adaptation, libraries can lead the way in integrating AI responsibly, ensuring that it remains a force for good in society.

    Moreover, national actions and strategies are essential for supporting libraries in navigating the complexities of AI. Governments and policymakers must recognise the crucial role that libraries play in ensuring equitable access to AI technologies and must develop national frameworks that provide clear guidelines for AI implementation in public institutions. These frameworks should include ethical standards, privacy protection and regulatory measures that address the risks associated with AI adoption, ensuring that libraries are equipped to handle the challenges and opportunities that arise.

    National strategies should also emphasise public education on AI and its ethical implications, ensuring that communities understand how AI impacts their lives and what protections are in place to safeguard their rights. Collaborative efforts between libraries, AI developers, policymakers and advocacy groups will be essential to ensure that AI technologies serve the public good while respecting fundamental ethical principles.

    In conclusion, AI in libraries is not simply a technological advancement but also a moral responsibility. Libraries have the opportunity and the obligation to lead in the ethical use of AI, ensuring that as these systems grow and evolve, they remain a force for good in society. The journey through the ethical maze of AI is complex, but it is essential that libraries continue to prioritise ethical considerations, transparency and community involvement. By doing so, they will safeguard their role as trusted institutions in the digital age, helping to navigate the challenges of the fourth industrial revolution with integrity and foresight.

     

    References

    Aly, S.E. 2023. Arabic calligraphy styles guide: An evaluation study in preparation for classification. MSc Dissertation, University of East London.

    American Library Association. 2006. Privacy: An interpretation of the Library Bill of Rights. [Online] https://www.ala.org/advocacy/intfreedom/librarybill/interpretations/privacy (Accessed 15 June 2024).

    American Library Association. 2017. Equity, diversity, inclusion: An interpretation of the Library Bill of Rights. [Online] https://www.ala.org/advocacy/intfreedom/librarybill/interpretations/EDI (Accessed 15 June 2024).

    Ashikuzzaman, M. 2024. Impact of artificial intelligence (AI) on library services. [Online] https://www.lisedunetwork.com/impact-of-artificial-intelligence-on-library-services/ (Accessed 9 March 2024).

    Baker and Hostetler. 2024. Case tracker: Artificial intelligence, copyrights and class actions. [Online] https://www.bakerlaw.com/services/artificial-intelligence-ai/case-tracker-artificial-intelligence-copyrights-and-class-actions/ (Accessed 30 July 2024).

    Bostrom, N. 2014. Superintelligence: Paths, dangers, strategies. Oxford University Press.

    Dwivedi, R. et al. 2023. Explainable AI (XAI): Core ideas, techniques, and solutions. ACM Computing Surveys, 55(9).

    EBSCO. 2024. EBSCO information services defines guiding principles for the responsible use of artificial intelligence. [Online] https://www.ebsco.com/blogs/ebscopost/ebsco-information-services-defines-guiding-principles-responsible-use-artificial (Accessed 15 July 2024).

    Fortier, A. and Burkell, J. 2015. Hidden online surveillance: What librarians should know to protect their own privacy and that of their patrons. Information Technology and Libraries, 34(3): 59-72.

    Frederick, D.E. 2020. Librarians in the era of artificial intelligence and the data deluge. Library Hi Tech News, 37(7).

    Géron, A. 2019. Hands-on machine learning with scikit-learn, Keras, and TensorFlow: Concepts, tools, and techniques to build intelligent systems, 2nd ed. O'Reilly Media.

    Gunter, D. 2024. AI challenges for librarians. [Online] https://www.researchinformation.info/analysis-opinion/ai-challenges-librarians (Accessed 15 July 2024).

    IBM. 2024. Explainable AI. [Online] https://www.ibm.com/topics/explainable-ai (Accessed 15 June 2024).

    IBM Data and AI Team. 2023. Shedding light on AI bias with real world examples. [Online] https://www.ibm.com/blog/shedding-light-on-ai-bias-with-real-world-examples/ (Accessed 15 June 2024).

    IFLA. 2020. IFLA statement on libraries and artificial intelligence. [Online] https://www.ifla.org/publications/node/93397 (Accessed 15 June 2024).

    Day, J.M. 2000. Guidelines for library services to deaf people. International Federation of Library Associations and Institutions.

    Lakshmanan, V., Görner, M. and Gillard, R. 2021. Practical machine learning for computer vision. O'Reilly Media.

    Marshall, D. and DuBose, J. 2024. AI in academic libraries: The future is now. Public Services Quarterly, 20(2): 150-155.

    Mishra, S. 2023. Ethical implications of artificial intelligence and machine learning in libraries and information centers: A framework, challenges, and best practices. Library Philosophy and Practice (e-journal), 7753.

    Murray, D., Fussey, P., Hove, K., Wakabi, W., Kimumwe, P., Saki, O. and Stevens, A. 2024. The chilling effects of surveillance and human rights: Insights from qualitative research in Uganda and Zimbabwe. Journal of Human Rights Practice, 16(1): 397-412. doi: 10.1093/jhuman/huad020

    Nagarajan, N. 2024. A comprehensive review of AI's dependence on data. International Journal of Artificial Intelligence and Data Science (IJADS), 1(1).

    NIST. 2022. There's more to AI bias than biased data, NIST report highlights. [Online] https://www.nist.gov/news-events/news/2022/03/theres-more-ai-bias-biased-data-nist-report-highlights (Accessed 20 April 2024).

    OpenAI. 2024. OpenAI privacy policy. [Online] https://openai.com/privacy/ (Accessed 7 April 2024).

    Pacific University Library. 2023. Navigating the future: The role of AI in academic libraries. [Online] https://www.lib.pacificu.edu/navigating-the-future-the-role-of-ai-in-academic-libraries/ (Accessed 13 June 2024).

    Pant, A., Hoda, R., Spiegler, S.V., Tantithamthavorn, C. and Turhan, B. 2024. Ethics in the age of AI: An analysis of AI practitioners' awareness and challenges. ACM Transactions on Software Engineering and Methodology, 33(3): 1-35. doi: 10.1145/3635715

    Ramgir, V.N. and Patil, H.J. 2023. Assistive technologies in libraries for visually impaired users. In Digital transformation in libraries and information centres. Today & Tomorrow's Printers and Publishers.

    Saeidnia, H.R. 2023. Ethical artificial intelligence (AI): Confronting bias and discrimination in the library and information industry. Library Hi Tech News. doi: 10.1108/LHTN-10-2023-0182.

    Saleh, E.E. 2023. هل تحتاج مؤسسات المعلومات إلى تطوير استراتيجية لمواجهة الذكاء الاصطناعي (Do information institutes need strategies to face AI?), 34. [Online] https://arab-afli.org/journal/index.php/afli/article/view/150/84.

    Bibliotheca Alexandrina. 2002. About the BA. [Online] https://www.bibalex.org/en/page/about (Accessed 14 May 2024).

    University of Washington. 2024. Universal access: Making library resources accessible to people with disabilities. [Online] https://www.washington.edu/doit/universal-access-making-library-resources-accessible-people-disabilities#Services (Accessed 20 July 2024).

    Vaughan, D. 2020. Analytical skills for AI and data science. O'Reilly Media.

    World Health Organization. 2023. Disability. [Online] https://www.who.int/news-room/fact-sheets/detail/disability-and-health (Accessed 17 July 2024).

    Zanzotto, F.M. 2019. Human-in-the-loop artificial intelligence. Journal of Artificial Intelligence Research, 64. doi: 10.1613/jair.1.11345

     

     

    Received: 27 October 2024
    Accepted: 4 February 2025