We've released our latest "This Week in AI" recording, now back on Fridays. Hope you enjoy!

AI summary provided by summarize.tech: https://www.summarize.tech/www.youtube.com/watch?v=rQA5oYNrYR8.

 

00:00:00 - 00:20:00

In the August 16, 2024, episode of "This Week in AI," hosts Steve Hargadon and Reed Hepler discuss recent developments and ethical considerations in artificial intelligence. They begin with the limitations of large language models and the disillusionment that arises when users realize these tools are shaped by their specific training rather than by logic, even as they represent enormous informational power. The conversation then turns to the use of AI in businesses, such as J.P. Morgan Chase's AI assistant, LLM Suite, and the potential impact on small businesses. Google's next steps in AI, including voice interaction and the ability to manipulate other apps, are also discussed. The hosts express concerns about the delayed release of new AI features from Microsoft and emphasize the importance of human-centered considerations when creating and using AI tools. They share their personal experiences with the imperfections of large language models and look forward to future advancements in AI technology. The episode concludes with a discussion of the potential negative impact of labeling products as "AI-enhanced" and the importance of recognizing the limitations and ethical considerations of AI.

  • 00:00:00 In this section, Steve Hargadon and Reed Hepler discuss the latest developments in AI, focusing on how large language models can be used to control and shape narratives. The conversation begins with a sense of disillusionment about the limitations of large language models, which are not logical but are shaped by their specific training, even as they represent enormous informational power. The news segment starts with J.P. Morgan Chase offering its employees an artificial intelligence assistant, called LLM Suite, built on OpenAI's technology. This chatbot app is designed to help with tasks such as emails, reports, and data analysis. By mandating the use of this tool and limiting access to alternatives, the company is essentially controlling the narrative within the organization. Steve and Reed note the similarities between this approach and customizing a GPT model, which still leaves room for error and still shapes the narrative within an organization. The conversation then turns to the practical pathway to increased profitability that AI offers large companies, as seen in J.P. Morgan's efficiency gains and Walmart's hundred-fold improvement in database updating, a development that may make it harder for small businesses to compete against large corporations. The discussion concludes with the importance of considering power dynamics and human-centered concerns when dealing with AI.
  • 00:05:00 In this section of "This Week in AI - 16 August 2024", hosts Reed Hepler and Steve Hargadon discuss various developments in artificial intelligence, with a significant focus on the balance of power and the disruption caused by using AI for different purposes. They mention an enormous data breach affecting more than 2 billion people's records, emphasizing the importance of privacy and the limits of companies' promises. Google's next Gemini move is another focus, with Google reportedly moving toward practical, tangible things users can accomplish through voice interaction; Reed Hepler believes that AI manipulating other apps on users' behalf is the future of consumer products. They also touch on consumers' disillusionment with AI and the potential emotional impact of voice modes like ChatGPT's. The hosts express concern about the cognitive triggers and feelings such AI can evoke, which they connect to delayed releases even as capability levels rise.
  • 00:10:00 In this section of the "This Week in AI" YouTube video from August 16, 2024, Steve Hargadon and Reed Hepler discuss the delayed release of new AI features from Microsoft and the ethical considerations surrounding their implementation. Microsoft's voice and image interaction capabilities have been pushed back indefinitely, leading to concerns about transparency and the digital divide between those who have access and those who do not. The speakers also touch upon the issue of companies overpromising AI capabilities and the importance of human-centered considerations when creating and using AI tools. Reed Hepler shares his blog post on the topic, which focuses on ethical considerations and the need for a plan when implementing AI. The conversation concludes with the observation that AI is becoming more integrated into everyday programs and tools, making it essential to recognize its potential implications.
  • 00:15:00 In this section of the "This Week in AI" YouTube video from August 16, 2024, Steve Hargadon and Reed Hepler discuss the limitations of large language models in reasoning and understanding context. Hargadon explains that these models struggle to reason in both directions: a model that can state who Tom Cruise's mother is may still fail to answer the reverse question about her son. Hepler adds that people are getting fired for assuming that AI knows everything and using it as a source, citing the example of a reporter who wrote articles containing made-up quotes produced by AI tools. The conversation then shifts to the potential negative impact of labeling products as "AI-enhanced," with Hepler expressing concern that consumers may be disappointed or disillusioned when the products don't live up to their expectations.
  • 00:20:00 In this section of the "This Week in AI" YouTube video from August 16, 2024, hosts Reed Hepler and Steve Hargadon discuss their experiences with large language models and the imperfections they have encountered. Hargadon expresses disappointment with the lack of promised features in ChatGPT's voice mode and shares how he has come to rely on Perplexity as a productivity tool. Despite the occasional shortcomings, both agree that these tools have been incredibly useful and look forward to future developments. The conversation touches on the incentives companies have to exaggerate the capabilities of their products and on how Perplexity compares with other tools like ChatGPT for Internet search. Overall, the hosts express excitement about advancements in AI technology and the potential they hold for productivity and innovation.