
First, Reed Hepler and I have created LibraryRobot.org, a free one-stop page of AI tools for librarians, staff, and patrons, built using the "custom GPT" feature of OpenAI's ChatGPT-4.

These tools are:

 

  • Book Finder
  • Book Summarizer
  • Library Programming Assistant
  • LOC Authority Record Finder
  • Talk to a Book
  • Search Query Optimizer
  • ESL Reading Passage Creator
 

We'd love your feedback--there's a link on the page to give it. As you may or may not know, OpenAI, as part of its GPT-4o announcement, is rolling out non-paid access to these kinds of custom-created GPTs, but the timing of the roll-out isn't clear. If you have a paid ChatGPT account, you will have access to the tools; if you don't and they still require an upgrade, keep checking back!

 
Second, we've released our second "This Week in AI" recording, which has to cover two weeks because we couldn't meet last Friday and we're skipping this Friday (because of this). So it's a little longer than we plan on doing each week, but oh, there was a lot of news and ideas to cover. Hope you enjoy!

00:00:00 - 00:50:00

In the "This Week in AI May 28, 2024" YouTube video, Steve Hargadon and Reed Hepler discuss various developments and ethical concerns surrounding artificial intelligence (AI). They introduce library robot.org, a new AI tool for librarians and educators, and express concerns about the potential misuse of AI as a source of information. The hosts also discuss the use of large language models like Google's AI and Microsoft's Copilot, raising concerns about their accuracy, privacy, and ethical implications. They touch upon OpenAI's business practices, specifically their use of Scarlett Johansson's voice without her consent, and the ethical and legal implications of paying for access to content to train AI models. The conversation also covers the potential impact of AI on education, the workforce, and the possibility of reaching the singularity. The speakers ponder the dilemmas surrounding the development of AI, its regulation, and its integration into daily life.

Summaries from summarize.tech - detailed version at https://www.summarize.tech/www.youtube.com/watch?v=kPMLpAw_S8g.

  • 00:00:00 In this section of the "This Week in AI May 28, 2024" YouTube video, Steve Hargadon and Reed Hepler discuss recent developments in AI. They introduce LibraryRobot.org, a new AI tool designed to help librarians and educators find books and optimize searches. The tool is based on OpenAI's widely available chat model and represents a shift towards easier interfaces for AI assistance. However, they also caution against misusing AI as a source of information, citing examples of Google's AI tool providing incorrect and potentially dangerous responses to search queries. The hosts express concern about the potential consequences of relying on AI for information without proper context or understanding.
  • 00:05:00 In this section of the "This Week in AI May 28, 2024" YouTube video, Steve Hargadon and Reed Hepler discuss the use and implications of large language models, specifically Google's AI, and Microsoft's new Copilot+ laptops. Hargadon raises concerns about the accuracy and factual nature of large language models, which are designed to build rapport and mirror user writing, often based on culturally diverse and sometimes inaccurate data. Hepler adds that Google's AI is being used as if it's a keyword search, and Microsoft's new Copilot+ laptops, which come with integrated Copilot instances, raise significant data privacy issues as the company now tracks not only online searches but also users' keystrokes, apps, and websites. The panelists express concerns about the trade-off of comfort and ease of use versus privacy, as users are giving up a substantial amount of personal information for these convenient tools.
  • 00:10:00 In this section of the "This Week in AI May 28, 2024" YouTube video, Steve Hargadon and Reed Hepler discuss the ethical concerns surrounding OpenAI's business practices, specifically their use of Scarlett Johansson's voice without her consent. Hargadon expresses his unease about the lack of transparency regarding the creation of the voice and OpenAI's apparent disregard for ethics in their rush to profit from the technology. Hepler adds that this incident highlights the growing divide between the academic and corporate worlds in artificial intelligence and the need for more transparency and self-control in the industry. The conversation also touches on the potential dangers of advanced AI, including its ability to mimic voices and scam people, as well as the unknown consequences of artificial general intelligence.
  • 00:15:00 In this section of the "This Week in AI May 28, 2024" YouTube video, Steve Hargadon and Reed Hepler discuss the ethical and legal implications of OpenAI's new practice of paying for access to content to train their AI models. They ponder the question of whether reading freely available content on the web for personal use is different from an AI's use of it, and whether there are ethical concerns regarding the collection and use of user metadata. The conversation also touches upon the influence of AI's ability to mimic human emotions and the quote by E.O. Wilson that humanity faces the challenge of having paleolithic emotions, medieval institutions, and godlike technologies.
  • 00:20:00 In this section of the "This Week in AI May 28, 2024" YouTube video, Steve Hargadon shares his experience using an AI language learning model, which he finds to be thoughtful and helpful in correcting his mistakes during conversations in Portuguese. He compares it to a private tutor and expresses surprise at the quality of the free base model. Reed Hepler then discusses the progress of open source and closed source AI models, as shown in a chart of Arena Elo scores. The gap between the capabilities of these models has been decreasing, with open source models like Llama 3 70B approaching parity with closed source models like GPT-4o. Despite some skepticism, Reed expresses optimism that open source models will continue to improve in text analysis and generation.
  • 00:25:00 In this section of the "This Week in AI May 28, 2024" YouTube video, Reed Hepler and Steve Hargadon discuss the equitability of using AI tools and the potential for open source AI models. Hepler expresses his excitement about the closing gap between free and commercial AI tools, while Hargadon compares it to the open source model in the software world. They also touch upon the concept of the symmetrical power of AI, where the creation, assessment, integration, and reporting of tasks could be done solely by AI tools with minimal human input. However, Hepler emphasizes the importance of human collaboration and engagement with AI for better results, attributing the idea of the symmetrical power of AI to David Wiley.
  • 00:30:00 In this section of the "This Week in AI May 28, 2024" YouTube video, Reed Hepler and Steve Hargadon discuss the use of AI in education and its potential impact on the workforce. Hepler expresses concern that people may rely solely on AI for insights and productivity, while Hargadon argues that AI should be viewed as a tool to enhance human capabilities. They also touch upon the idea of banning AI from classrooms and the concept of generative teaching. Additionally, they mention the ongoing debate about the timeline for the development of Artificial General Intelligence (AGI) and the potential need for Universal Basic Income due to the displacement of jobs by AI.
  • 00:35:00 In this section of the "This Week in AI May 28, 2024" YouTube video, Steve Hargadon and Reed Hepler discuss the possibility of reaching the singularity, a hypothetical event when artificial intelligence surpasses human intelligence. Reed Hepler expresses skepticism about the singularity, suggesting instead that there will be multiple smaller singularities in specific fields. He believes that a general AI singularity is unlikely and that it may take 50 years or more to achieve. Steve Hargadon agrees that AI will surpass human knowledge in various areas, even if it doesn't reach a singularity. They also discuss the societal implications of AI, including the potential for humans to use AI to replace each other in various industries, and the ethical concerns surrounding the use of AI by world leaders.
  • 00:40:00 In this section of the "This Week in AI May 28, 2024" YouTube video, Steve Hargadon and Reed Hepler discuss the dilemmas surrounding the development of artificial intelligence (AI). They ponder whether AI should be built to resemble humans with emotions and fallibility or to be logical and factual. The speakers question if corporations want an ethical and factual AI or one that simply fulfills their desires. They reflect on the human-centered approach to AI and the shift in the field's paradigm towards creating machines that complement human abilities rather than replacing them.
  • 00:45:00 In this section of the "This Week in AI May 28, 2024" YouTube video, Reed Hepler and Steve Hargadon discuss the challenges of regulating and understanding the role of artificial intelligence (AI) in society. Hepler expresses the difficulty in determining what consumers want from AI, while Hargadon emphasizes the need for consensus in policy-making but acknowledges the rapid advancement of technology. They also touch upon the progression of generative AI, from library robots to customized models, and its integration into various products. The conversation raises questions about the future of AI, with Hepler pondering the possibility of AI analyzing babies' cries and converting them into brain images, and being integrated into everyday items like shopping carts. The speakers express uncertainty about the direction and implications of AI development.
  • 00:50:00 In this section of the "This Week in AI May 28, 2024" YouTube video, Reed Hepler and Steve Hargadon discuss the potential future development of AI and its integration into daily life. Hepler proposes the idea of a home network of AIs communicating with each other, while Hargadon wonders if the advancements in AI will come faster than expected and if it will lead to a sterile environment where computers make all decisions. They also mention upcoming tech and AI-related events, including the Tech GPT Bootcamp for tech professionals, the AI Bootcamp for libraries and librarians, and an AI Bootcamp for personal and professional growth.

 


