Authors File Lawsuit Against OpenAI and Microsoft for Alleged Copyright Infringement in AI Training



A group of 11 nonfiction authors has filed a lawsuit in Manhattan federal court accusing OpenAI and Microsoft of misusing their written works to train the models behind OpenAI’s widely used chatbot ChatGPT and other artificial intelligence software.

The authors, including Pulitzer Prize winners Taylor Branch, Stacy Schiff, and Kai Bird, who co-wrote the J. Robert Oppenheimer biography “American Prometheus,” said in Tuesday’s filing that the companies infringed their copyrights by using their work to train OpenAI’s GPT large language models.

As of Wednesday, representatives for OpenAI, Microsoft, and the authors had not responded to requests for comment.

Last month, writer and Hollywood Reporter editor Julian Sancton initiated the proposed class-action lawsuit. This legal action is part of a series of cases brought by groups of copyright owners, including renowned authors such as John Grisham, George R.R. Martin, and Jonathan Franzen, alleging the misuse of their work in AI training by OpenAI and other tech companies. The companies, including OpenAI, have consistently denied these allegations.

Notably, Sancton’s lawsuit is the first author-initiated legal action against OpenAI that also names Microsoft as a defendant. Microsoft has invested billions of dollars in the artificial intelligence startup and has integrated OpenAI’s systems into its own products.

According to the amended complaint filed on Monday, OpenAI allegedly “scraped” the authors’ works, along with a substantial amount of other copyrighted material from the internet, without permission. This material was purportedly used to teach GPT models how to respond to human text prompts. The lawsuit contends that Microsoft has been “deeply involved” in training and developing these models, making it equally liable for copyright infringement.

The authors are seeking an unspecified amount of monetary damages and are requesting the court to issue an order for the companies to cease infringing on their copyrights.

Cloud Dominance: The Growing Concerns Over Big Tech’s Control in AI Development

When engaging with AI chatbots such as Google’s Bard or OpenAI’s ChatGPT, users are interacting with a product shaped by four critical components: the engineering behind the chatbot’s AI model, the vast training data it ingested to learn how to respond to prompts, the specialized semiconductor chips used for training (a process that can take months), and, increasingly, the cloud platforms that have emerged as the fourth essential ingredient.

Cloud platforms aggregate the computing power of sought-after semiconductor chips, offering online storage and services to AI companies in need of substantial processing capabilities and a secure space for their training data. This dependence on cloud services significantly influences the dynamics of the broader AI industry, positioning cloud companies at the core of a transformative technology expected to impact work, leisure, and education.

The cloud market, dominated by a few major players like Amazon, Microsoft, and Google, has prompted concerns about potential anticompetitive influence over the future of AI. Policymakers, including Senator Elizabeth Warren, emphasize the need for regulation to prevent these tech giants from consolidating power and endangering competition, consumer privacy, innovation, and national security.

While the public cloud market is projected to grow by over 20% to $679 billion next year, AI’s share of this expenditure could range from 30% to 50% within five years, according to industry analysts. This shift places a spotlight on the limited number of cloud platforms capable of delivering the massive processing power increasingly demanded by AI developers.

Government scrutiny is on the rise, with the Federal Trade Commission (FTC) and President Joe Biden expressing concerns about competition in cloud markets impacting AI development. The FTC warns against a potential stranglehold on essential inputs for AI development, and Biden’s executive order emphasizes the need to address risks arising from dominant firms’ control over semiconductors, computing power, cloud storage, and data.

Exclusive agreements between AI companies and cloud providers, hefty fees for data withdrawal, and the potential for cloud credits to lock in customers have raised competition concerns. Critics fear inflated pricing, anticompetitive practices, and exploitative contract terms that could hinder the development and accessibility of AI services.

Cloud providers defend their record, citing a highly competitive market that benefits the U.S. economy. They argue that customers negotiate extensively on various aspects, including price, storage capacity, and contract terms. However, concerns persist among regulators worldwide, reflecting broader apprehensions towards Big Tech’s concentration of power in digital markets.

As the AI industry continues to evolve, the debate over the role and influence of cloud platforms in shaping its trajectory intensifies. Some AI companies intentionally avoid exclusive ties with cloud vendors, highlighting the significant power wielded by cloud firms in the market.

TikTok’s Evolution: From Short-Form Sensation to Long-Form Ambitions

In 2020, TikTok emerged as a cultural phenomenon, captivating users with its short, snappy dancing and comedy clips during the early days of the Covid-19 pandemic. This triggered a short-form video arms race among social media giants like Facebook, Instagram, and YouTube, all vying to replicate TikTok’s success. However, in a surprising turn, TikTok is now steering its course towards longer videos, challenging the very essence of its initial appeal.

This Saturday marks the official phase-out of TikTok’s original “Creator Fund,” signaling a shift toward the new “Creativity Program Beta.” Under this program, content creators seeking monetization will need to produce videos exceeding one minute in length. While this move aligns TikTok with the more lucrative long-form content model, some creators express frustration, fearing a departure from the platform’s roots as a hub for short, easily digestible content.

Nicki Apostolou, a TikTok creator with nearly 150,000 followers who focuses on Native American history and culture, voices her concern: “I don’t always have a minute of content in me.” The sentiment echoes among creators who joined TikTok for its short-form appeal and feel alienated by the platform’s shift toward a “mini YouTube” model.

TikTok spokesperson Zachary Kizer justifies the move, citing feedback from the community and the need to evolve. The shift towards longer-form content is seen as a strategic business decision, aiming to keep users engaged for extended periods and attract advertisers with more monetization possibilities.

Over the past three years, TikTok has incrementally increased video length limits, currently testing 15-minute uploads. The new Creativity Program targets adult creators with 10,000 or more followers, promising higher pay for videos surpassing the one-minute mark.

While TikTok encourages creators with the prospect of increased payments and deeper audience engagement, critics argue that the platform risks losing its distinct identity. The challenge for creators lies in adapting to the demands of longer content, with concerns about the dwindling attention spans of today’s audience.

Despite these apprehensions, TikTok reports that creators making longer-form content have more than doubled their earnings over the past year. The platform insists that video recommendations are based on user preferences rather than video length, aiming to allay short-form creators’ fears of being marginalized.

As TikTok embraces this evolution, creators like Aly Tabizon express both excitement and concern. Monetizing short astrology videos has been “life-changing,” yet the transition to longer content may pose challenges, given the prevailing eight to ten-second attention span. Tabizon, however, remains open to experimentation, acknowledging the potential for greater pay.

For some, the shift to longer videos raises issues of resource constraints. Laura Riegle, a TikTok creator known for short, snappy content, highlights the increased time and effort required for long-form videos, posing challenges for creators with limited free time.

TikTok, recognizing the evolving landscape, offers alternative monetization avenues such as subscriptions and tips. However, skepticism persists among creators who find these methods akin to “busking on the street” and potentially unsustainable.

As TikTok navigates this transition, the platform faces the delicate task of balancing the demands of longer-form content with the expectations and preferences of its diverse creator community.

Google Initiates Cookie Slaughter: Chrome’s Tracking Overhaul Begins January 4th

Google has announced that its long-awaited phase-out of third-party cookies will begin on January 4th. In the initial phase, cookies will be blocked for 1% of Chrome users, roughly 30 million people. This marks the first step in Google’s Privacy Sandbox project, which is designed to replace traditional cookies with an alternative tracking system that purportedly offers stronger privacy protections.

For the past three decades, websites and tech companies have heavily relied on “third-party cookies” to track consumers online. The prevalence of these cookies has allowed businesses, including Google, to collaboratively monitor users’ online activities, raising concerns about privacy infringement.

In lieu of cookies, Google has introduced a new suite of tools that empowers the Chrome browser to internally track users’ online behavior. This data remains on the user’s device, with the browser categorizing individuals into distinct groups, or “Ad Topics,” such as “Yoga Fan” or “Young Conservative.” While websites can inquire about these categories, they are unable to pinpoint the user’s identity, a departure from the conventional use of cookies.
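
To make the shift concrete, here is a minimal Python sketch of the general idea described above: browsing history stays on the device, the browser maps it to a handful of coarse interest categories, and a website can ask only for those categories, never for the history or the user’s identity. The tiny site-to-topic table and the class and method names are illustrative assumptions, not Chrome’s actual Topics taxonomy or its browser-side API.

```python
# Simplified, hypothetical model of on-device interest categorization,
# loosely inspired by the approach described above. This is NOT Chrome's
# implementation or the real Topics API; the taxonomy below is a placeholder.
from collections import Counter

# Hypothetical mapping from visited sites to coarse interest topics.
SITE_TO_TOPIC = {
    "yogajournal.example": "Fitness & Yoga",
    "runnersworld.example": "Fitness & Yoga",
    "politicsdaily.example": "News & Politics",
    "chessclub.example": "Games",
}

class BrowserProfile:
    """Keeps browsing history on the device and exposes only coarse topics."""

    def __init__(self):
        self._history = []  # never leaves the device in this model

    def visit(self, site: str) -> None:
        self._history.append(site)

    def top_topics(self, k: int = 3) -> list[str]:
        """What a website may ask for: a few coarse topics, no identity."""
        counts = Counter(
            SITE_TO_TOPIC[s] for s in self._history if s in SITE_TO_TOPIC
        )
        return [topic for topic, _ in counts.most_common(k)]

profile = BrowserProfile()
for site in ["yogajournal.example", "runnersworld.example", "politicsdaily.example"]:
    profile.visit(site)

# A site embedding an ad can learn coarse interests, but not who the user is
# or which specific pages they visited.
print(profile.top_topics())  # e.g. ['Fitness & Yoga', 'News & Politics']
```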

Chrome still tracks user activity, unlike Firefox and Safari, which already block third-party cookies by default, but Google’s revamped approach is a notable stride for privacy: it discloses far less about users and their browsing than cookies do.

Victor Wong, Google’s senior director of product management for Privacy Sandbox, emphasized the significant shift, stating, “We are making one of the largest changes to how the Internet works at a time when people, more than ever, are relying on the free services and content that the web offers.”

While these Privacy Sandbox cookie replacements are currently available on the Chrome browser as an optional tool, their adoption signifies a substantial shift given Chrome’s dominance in the browser market. Users have the flexibility to disable these features in their settings if they find them undesirable.

The impending changes may cause disruptions, given the integral role cookies play in various online functions. Google acknowledges potential issues and is actively working to identify and retain essential cookies while phasing out intrusive ones. Users can disable the new “Tracking Protection” tool on demand, and Chrome will prompt users to disable it for specific websites if complications arise.

Come January 4th, a select 1% of users will experience “Tracking Protection” by default, denoted by a distinctive eyeball logo in the URL bar. As Google progresses with its cookie elimination initiative, this transformation stands as a significant milestone in shaping the future landscape of internet privacy.

OpenAI’s Superalignment Team Focuses on AI Governance Amid Leadership Shake-Up

Amid the fallout from Sam Altman’s abrupt departure from OpenAI and the chaos that followed, OpenAI’s Superalignment team remains steadfast in its mission to tackle the challenge of controlling AI that surpasses human intelligence. While the leadership turmoil unfolds, the team, led by Ilya Sutskever, is actively working on strategies to steer and regulate superintelligent AI systems.

This week, members of the Superalignment team, including Collin Burns, Pavel Izmailov, and Leopold Aschenbrenner, presented their latest work at NeurIPS, the annual machine learning conference in New Orleans. Their primary goal is to ensure that AI systems behave as intended, especially as they venture into the realm of superintelligence.

The Superalignment initiative, launched in July, is part of OpenAI’s broader efforts to govern AI systems with intelligence surpassing that of humans. Collin Burns acknowledged the difficulty in aligning models smarter than humans, posing a significant challenge for the research community.


A figure illustrating the Superalignment team’s AI-based analogy for aligning superintelligent systems.

Despite the recent leadership changes, Ilya Sutskever continues to lead the Superalignment team, raising questions given his involvement in Altman’s ouster. The Superalignment concept has sparked debates within the AI research community, with some questioning its timing and others considering it a distraction from more immediate regulatory concerns.

While Altman drew comparisons between OpenAI and the Manhattan Project, emphasizing the need to protect against catastrophic risks, skepticism remains about the imminent development of superintelligent AI systems with world-ending capabilities. Critics argue that focusing on such concerns diverts attention from pressing issues like algorithmic bias and the toxicity of AI.

The Superalignment team is actively developing governance and control frameworks for potential superintelligent AI systems. Their approach uses a less capable AI model to supervise a more capable one, as a stand-in for a human supervisor overseeing a superintelligent system.
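
In this weak-to-strong setup, a weaker model’s imperfect labels are used to train a stronger model, and the question is how much capability the student recovers despite the noisy supervision. The sketch below reproduces only the shape of that experiment with small scikit-learn classifiers; the model choices, synthetic dataset, and sizes are assumptions for illustration, not the team’s actual GPT-scale experiments.

```python
# Toy illustration of weak-to-strong supervision: a small "weak" model's
# (imperfect) predictions are used as training labels for a larger "strong"
# model. This mirrors only the shape of the idea; the actual research uses
# large language models, not scikit-learn classifiers.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=4000, n_features=40, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Weak supervisor": a deliberately limited model trained on little ground truth.
weak = LogisticRegression(max_iter=200).fit(X_train[:200], y_train[:200])
weak_labels = weak.predict(X_train)  # noisy labels; no ground truth from here on

# "Strong student": a higher-capacity model trained only on the weak labels.
strong = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500,
                       random_state=0).fit(X_train, weak_labels)

print("weak supervisor accuracy:", weak.score(X_test, y_test))
print("strong student accuracy: ", strong.score(X_test, y_test))
# The question the team studies: how much of the weak-to-strong gap the
# student closes despite never seeing ground-truth labels.
```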

In a surprising move, OpenAI announced a $10 million grant program to support technical research on superintelligent alignment. The funding, including a contribution from former Google CEO Eric Schmidt, is aimed at encouraging research from academic labs, nonprofits, individual researchers, and graduate students. The move has prompted speculation about Schmidt’s commercial interests in AI.

Despite concerns, the Superalignment team assures that their research, along with the work supported by grants, will be shared publicly, adhering to OpenAI’s mission of contributing to the safety of AI models for the benefit of humanity. The team remains committed to addressing one of the most critical technical challenges of our time: aligning superhuman AI systems to ensure their safety and benefit for all.

Google Launches Duet AI for Developers with Powerful Gemini Model Integration

Google has officially released Duet AI for Developers, a suite of AI-powered assistance tools designed for code completion and generation. The company has announced the general availability of the tool, revealing plans to incorporate Google’s robust Gemini model in the upcoming weeks.

While code completion and generation tools have become commonplace, Google stands out by collaborating with 25 companies. These partners, including Confluent, HashiCorp, and MongoDB, are contributing datasets to assist developers in building and troubleshooting applications specific to their platforms.

The collaborative effort extends beyond code completion, with partners such as Datadog, JetBrains, and Langchain providing documentation and knowledge sources. This data aims to enhance the Duet AI for Developers chat experience, offering information on creating test automation, resolving production issues, and addressing vulnerabilities.

Richard Seroter, Chief Evangelist for Google Cloud, highlighted the ambition to eliminate developer toil and enhance the coding experience using AI. The goal is to create an AI assistant that integrates seamlessly into developers’ tools while incorporating Google’s expertise.

The integration involves training the model on the latest cloud-native practices and incorporating it into the Google Cloud Console, along with popular IDEs that developers commonly use. Seroter emphasized that Google views the Duet AI product family, including Duet AI in Security Operations, as enterprise-grade, with features such as enterprise access controls and Google’s indemnification guarantee.

Google’s approach aligns with the broader industry narrative that AI coding tools, including Duet AI, complement developers’ skills rather than replace them. Productivity gains have been reported: Turing, an AI-powered tech services company, saw a 33% increase in developer productivity after adopting Duet AI for Developers.

Duet AI for Developers currently supports over 20 languages, including C, C++, Java, JavaScript, and Python. Beyond coding capabilities, it features AI log summarization and error explanation integrated with Google’s Cloud Logging. Additionally, Smart Actions provide one-click shortcuts for tasks like unit test generation.

Duet AI for Developers is free until the end of January 2024. After that, it will cost $19 per user per month with an annual commitment.

Revolutionizing Conversations: Kobie AI Unlocks Interactive Dialogue with Historical Figures

Kobie Fuller’s experiment with generative AI, known as Kobie AI, highlights a more constructive side of the technology. One notable application is the ability to interact with historical figures such as James Lowry, an influential yet lesser-known figure in the Black experience in America.

Image: James Lowry

James Lowry AI for DEI: Transforming Insights into Interactive Conversations

James Lowry, whose history is deeply intertwined with the Black experience in America, is brought to life through Kobie Fuller’s AI experiment. The tool, Kobie AI, allows users to engage with Lowry’s experiences, particularly focusing on diversity, equity, and inclusion (DEI). By feeding Lowry’s book, “Change Agent,” into a large language model, users can now pose questions and receive sophisticated and in-depth answers based on Lowry’s actual words and deeds.
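
Kobie AI’s internals have not been published, but one common way to build this kind of “ask the book” experience is retrieval-augmented generation: split the book into chunks, retrieve the passages most relevant to a question, and pass them to a language model as context. The sketch below assumes that approach; the embed() and complete() functions, chunk size, and prompt wording are hypothetical placeholders for whatever embedding and LLM APIs such a system would actually use.

```python
# Sketch of a retrieval-augmented "ask the book" pipeline. embed() and
# complete() are hypothetical stand-ins for real embedding and LLM APIs;
# the chunk size and prompt are illustrative assumptions.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: return a vector for `text` from some embedding model."""
    raise NotImplementedError("plug in an embedding API here")

def complete(prompt: str) -> str:
    """Placeholder: return a language-model completion for `prompt`."""
    raise NotImplementedError("plug in an LLM API here")

def chunk(text: str, size: int = 1200) -> list[str]:
    """Split the book into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def answer_from_book(book_text: str, question: str, k: int = 4) -> str:
    chunks = chunk(book_text)
    chunk_vecs = np.stack([embed(c) for c in chunks])
    q_vec = embed(question)
    # Cosine similarity between the question and every chunk.
    sims = chunk_vecs @ q_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q_vec) + 1e-9)
    # Keep the k most relevant passages as context for the model.
    context = "\n\n".join(chunks[i] for i in np.argsort(sims)[-k:])
    prompt = (
        "Answer the question using only the passages below, in the author's "
        f"own voice.\n\nPassages:\n{context}\n\nQuestion: {question}\nAnswer:")
    return complete(prompt)
```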

Unlocking Wisdom: Kobie AI’s Role in Preserving and Sharing Life Experiences

Lowry, who dedicated his life to promoting investment in historically underrepresented communities, authored the book as a means of sharing his experiences with the world. Recognizing that not everyone will read the entire book, Lowry sees AI as a powerful tool to allow people to grasp the essence of his journey by simply asking questions.

Interactive Learning: Kobie AI as a Teaching Tool for Future Generations

The AI platform begins with a prompt inviting users to explore DEI topics and seek wisdom from Lowry’s life journey. Students, historians, DEI professionals, or anyone interested can inquire about DEI issues or delve into specific moments in Lowry’s life, creating an interactive dialogue that can serve as a teaching tool for understanding the experiences of a Black man in American business.

Generative AI’s Potential: Transforming Historical Narratives

As Kobie Fuller continues to explore the capabilities of this technology, the interactive dialogue with James Lowry is just one example of how generative AI can be a powerful vehicle for understanding diverse experiences. From facilitating conversations about DEI to immortalizing the wisdom of historical figures, Kobie AI showcases the transformative potential of AI in shaping our understanding of the past.

Epic Games Scores Legal Victory Against Google in Monopoly Case

Three years after initiating legal action against tech giants Apple and Google, Epic Games, the creator of Fortnite, has secured a significant win. The jury in the case of Epic v. Google has rendered its verdict, concluding that Google transformed its Google Play app store and Google Play Billing service into an illegal monopoly.

Following just a few hours of deliberation, the jury unanimously affirmed Google’s monopoly power in the Android app distribution and in-app billing services markets. It found that Google engaged in anticompetitive practices within these markets, causing harm to Epic, and that Google established an illegal tie between the Google Play app store and Google Play Billing payment services. Google’s distribution agreements, its “Project Hug” deals with game developers, and its dealings with OEMs were all deemed anticompetitive.

In response, Google’s vice president of government affairs and public policy, Wilson White, stated that the company plans to appeal the verdict. White emphasized that the trial underscored Google’s fierce competition with Apple and with other app stores on Android devices and gaming consoles.

Epic Games celebrated the verdict in a blog post, asserting, “Today’s verdict is a win for all app developers and consumers around the world. It proves that Google’s app store practices are illegal, and they abuse their monopoly to extract exorbitant fees, stifle competition, and reduce innovation.”

This legal triumph is noteworthy, particularly in contrast to Epic’s legal battle against Apple two years ago, which it lost. Epic v. Google focused on undisclosed revenue-sharing agreements between Google, smartphone manufacturers, and major game developers, deals that Google executives internally believed would suppress rival app stores and that exposed Google’s apprehension about Epic. Unlike the Apple case, which was decided by a judge, this outcome was determined by a jury.

The specific remedies and implications of this victory are yet to be determined by Judge James Donato. Epic did not seek monetary damages but aims for a court declaration granting app developers the freedom to introduce their own app stores and billing systems on Android. The judge will meet with both parties in January to discuss potential remedies.

While Epic CEO Tim Sweeney suggested potential financial gains in the hundreds of millions or even billions if relieved from paying Google’s fees, Judge Donato has already indicated that he won’t grant an anti-circumvention provision as an additional measure.

Google’s Wilson White reiterated their commitment to challenging the verdict, emphasizing the openness and choice provided by Android and Google Play compared to other major mobile platforms.

Google’s AI-Powered Note-Taking App, NotebookLM, Launches Widely in the US with New Features

Google’s experimental AI-driven note-taking application, NotebookLM, is now widely accessible in the United States, accompanied by several new features. The company says NotebookLM is “beginning” to use Google’s Gemini Pro model to enhance document understanding and reasoning.

Already capable of tasks such as summarizing imported documents, extracting key points, and answering questions about note sources, NotebookLM now offers the ability to transform notes into various document formats. Users can select the desired notes, and the app will automatically suggest formats like outlines or study guides. Additionally, users have the option to specify a custom format, such as an email, script outline, newsletter, and more.

The updated NotebookLM introduces suggested actions based on user activities within the app. For instance, if a user is writing a note, NotebookLM may automatically provide tools to refine prose or suggest related ideas from sources. Other new features include the ability to save useful responses as notes, share notes with others, and direct the app’s AI focus on specific sources during interactions.

Google is also raising some of NotebookLM’s limits: users can now include up to 20 sources per notebook, each with a capacity of up to 200,000 words. Originally introduced as “Project Tailwind” at Google’s I/O conference in May, NotebookLM was initially available to a limited group of testers before this wider release. The expansion grants access to all users aged 18 and older in the US and comes shortly after Google unveiled Gemini, its GPT-4 competitor.