Apple Faces Sales Ban in the US for Watch Series 9 and Watch Ultra 2 as Biden Administration Declines Veto


In a significant development, Apple is now prohibited from selling the Watch Series 9 and Watch Ultra 2 in the United States, after the Biden administration declined today to override the import ban imposed by the International Trade Commission (ITC).

The removal of both devices from Apple’s official website occurred on December 21st, followed by their withdrawal from store shelves after December 24th. A statement from the Office of US Trade Representative Katherine Tai, reported by CNBC, revealed that the agency “decided not to reverse the ITC’s determination” after careful consideration.

Responding to the ban, an unidentified Apple spokesperson, as reported by Reuters, confirmed the company’s intention to appeal the ITC decision. The spokesperson stated, “We strongly disagree with the USITC decision and resulting exclusion order, and are taking all measures to return Apple Watch Series 9 and Apple Watch Ultra 2 to customers in the U.S. as soon as possible.”

The ITC imposed the ban after determining that Apple had infringed patents covering blood oxygen measurement technology held by the medical device maker Masimo. Additionally, the ITC directed Apple to cease selling any previously imported devices containing the infringing technology. Despite Apple’s attempt to pause the order during the appeal process, the ITC denied the request. The final opportunity for intervention rested with President Joe Biden, who did not veto the ban.

It’s important to note that the sales ban only impacts Apple’s stores in the US. Customers still have the option to purchase the Watch Series 9 or Watch Ultra 2 at retailers such as Best Buy and Target while supplies last. Apple will continue to offer the Watch SE, which lacks a blood oxygen sensor and remains unaffected by the ban.

The future steps for Apple remain uncertain. Analysts, including my colleague Victoria Song, explore potential paths Apple could take, such as implementing software changes to the blood oxygen sensor or disabling the sensor on imported devices. However, these approaches may not be sufficient to satisfy the ITC, leading to speculation that Apple might consider settling with Masimo as an alternative solution.

Urgent Update Required: Google Chrome Faces Critical Vulnerability Exploited by Malicious Actors

Google Chrome users are urged to take immediate action as a severe vulnerability has been identified in the popular web browser. This particular security flaw, categorized as CVE-2023-7024, is a heap buffer overflow within WebRTC, as disclosed by Google. The gravity of the situation is compounded by the fact that the vulnerability is not only known but is actively being exploited by malicious entities.

Heap buffer overflows, such as the one affecting Google Chrome, occur when a program writes data past the end of a buffer allocated on the heap, allowing an attacker to corrupt adjacent memory and potentially execute code of their choosing. Google has officially confirmed that an exploit for this vulnerability exists in the wild, making it a pressing concern for users.

To safeguard against potential security breaches, users are advised to ensure their Chrome browser is updated to version 120.0.6099.130 on Windows PCs, or alternatively, version 120.0.6099.129 for Mac or Linux. Taking prompt action is crucial, as failure to update may leave systems exposed to exploitation.

To check and update Chrome, users can access the Settings page by clicking the three-dot menu in the top-right corner of the browser. From there, navigate to the left-side panel and select ‘About Chrome’ at the bottom of the list. This action will automatically check for updates and apply any necessary upgrades.

It’s important to note that after the update, users must close all instances of the Chrome browser and reopen it to ensure the upgrade is applied. Failure to address this vulnerability promptly may result in compromised security, so users are strongly advised to verify their browser version without delay.
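One subtlety in verifying a browser version against the patched builds above: version strings compare incorrectly as plain text (lexicographically, a build ending in “.99” would sort above “.130”), so a careful check compares numeric components. A minimal sketch, with hypothetical helper names:

```python
# Hypothetical helper for checking a Chrome version string against the
# minimum patched builds reported for CVE-2023-7024. Plain string comparison
# would be wrong ("...99" > "...130" lexicographically), so each dotted
# component is compared as an integer instead.
PATCHED = {
    "windows": "120.0.6099.130",
    "mac": "120.0.6099.129",
    "linux": "120.0.6099.129",
}

def parse(version: str) -> tuple[int, ...]:
    """Split a dotted version string into a tuple of integers."""
    return tuple(int(part) for part in version.split("."))

def is_patched(installed: str, platform: str) -> bool:
    """True if the installed build is at or above the patched build."""
    return parse(installed) >= parse(PATCHED[platform])

if __name__ == "__main__":
    print(is_patched("120.0.6099.110", "windows"))  # False: still vulnerable
    print(is_patched("120.0.6099.130", "windows"))  # True
```

Tuple comparison makes this check trivial in Python: `(120, 0, 6099, 110) >= (120, 0, 6099, 130)` correctly evaluates component by component.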

Authors File Lawsuit Against OpenAI and Microsoft for Alleged Copyright Infringement in AI Training


A lawsuit has been filed in Manhattan federal court by a group of 11 nonfiction authors, accusing OpenAI and Microsoft of misusing their written works to train the models behind OpenAI’s widely used chatbot ChatGPT and other artificial intelligence-based software.

The authors, including Pulitzer Prize winners Taylor Branch, Stacy Schiff, and Kai Bird, who co-wrote the J. Robert Oppenheimer biography “American Prometheus,” made their case on Tuesday, asserting that the companies violated their copyrights by utilizing their work in training OpenAI’s GPT large language models.

As of Wednesday, representatives for OpenAI, Microsoft, and the authors have not responded to requests for comment.

Last month, writer and Hollywood Reporter editor Julian Sancton initiated the proposed class-action lawsuit. This legal action is part of a series of cases brought by groups of copyright owners, including renowned authors such as John Grisham, George R.R. Martin, and Jonathan Franzen, alleging the misuse of their work in AI training by OpenAI and other tech companies. The companies, including OpenAI, have consistently denied these allegations.

Notably, Sancton’s lawsuit is the first author-initiated legal action against OpenAI that also names Microsoft as a defendant. Microsoft has invested billions of dollars in the artificial intelligence startup and has seamlessly integrated OpenAI’s systems into its products.

According to the amended complaint filed on Monday, OpenAI allegedly “scraped” the authors’ works, along with a substantial amount of other copyrighted material from the internet, without permission. This material was purportedly used to teach GPT models how to respond to human text prompts. The lawsuit contends that Microsoft has been “deeply involved” in training and developing these models, making it equally liable for copyright infringement.

The authors are seeking an unspecified amount of monetary damages and are requesting the court to issue an order for the companies to cease infringing on their copyrights.

Cloud Dominance: The Growing Concerns Over Big Tech’s Control in AI Development

When engaging with AI chatbots such as Google’s Bard or OpenAI’s ChatGPT, users are actually interacting with a product shaped by four critical components: the engineering prowess behind the chatbot’s AI model, the extensive training data it processed to understand user prompts, the sophisticated semiconductor chips used for training (a process that can take months), and now cloud platforms, emerging as the fourth essential ingredient.

Cloud platforms aggregate the computing power of sought-after semiconductor chips, offering online storage and services to AI companies in need of substantial processing capabilities and a secure space for their training data. This dependence on cloud services significantly influences the dynamics of the broader AI industry, positioning cloud companies at the core of a transformative technology expected to impact work, leisure, and education.

The cloud market, dominated by a few major players like Amazon, Microsoft, and Google, has prompted concerns about potential anticompetitive influence over the future of AI. Policymakers, including Senator Elizabeth Warren, emphasize the need for regulation to prevent these tech giants from consolidating power and endangering competition, consumer privacy, innovation, and national security.

While the public cloud market is projected to grow by over 20% to $679 billion next year, AI’s share of this expenditure could range from 30% to 50% within five years, according to industry analysts. This shift places a spotlight on the limited number of cloud platforms capable of delivering the massive processing power increasingly demanded by AI developers.

Government scrutiny is on the rise, with the Federal Trade Commission (FTC) and President Joe Biden expressing concerns about competition in cloud markets impacting AI development. The FTC warns against a potential stranglehold on essential inputs for AI development, and Biden’s executive order emphasizes the need to address risks arising from dominant firms’ control over semiconductors, computing power, cloud storage, and data.

Exclusive agreements between AI companies and cloud providers, hefty fees for data withdrawal, and the potential for cloud credits to lock in customers have raised competition concerns. Critics fear inflated pricing, anticompetitive practices, and exploitative contract terms that could hinder the development and accessibility of AI services.

Cloud providers defend their record, citing a highly competitive market that benefits the U.S. economy. They argue that customers negotiate extensively on various aspects, including price, storage capacity, and contract terms. However, concerns persist among regulators worldwide, reflecting broader apprehensions towards Big Tech’s concentration of power in digital markets.

As the AI industry continues to evolve, the debate over the role and influence of cloud platforms in shaping its trajectory intensifies. Some AI companies intentionally avoid exclusive ties with cloud vendors, highlighting the significant power wielded by cloud firms in the market.

TikTok’s Evolution: From Short-Form Sensation to Long-Form Ambitions

In 2020, TikTok emerged as a cultural phenomenon, captivating users with its short, snappy dancing and comedy clips during the early days of the Covid-19 pandemic. This triggered a short-form video arms race among social media giants like Facebook, Instagram, and YouTube, all vying to replicate TikTok’s success. However, in a surprising turn, TikTok is now steering its course towards longer videos, challenging the very essence of its initial appeal.

This Saturday marks the official phase-out of TikTok’s original “Creator Fund,” signaling a shift toward the new “Creativity Program Beta.” Under this program, content creators seeking monetization will need to produce videos exceeding one minute in length. While this move aligns TikTok with the more lucrative long-form content model, some creators express frustration, fearing a departure from the platform’s roots as a hub for short, easily digestible content.

Nicki Apostolou, a TikTok creator focusing on Native American history and culture, with nearly 150,000 followers, voices concerns, stating, “I don’t always have a minute of content in me.” The sentiment echoes among creators who joined TikTok for its short-form appeal, feeling alienated by the platform’s shift towards a “mini YouTube” model.

TikTok spokesperson Zachary Kizer justifies the move, citing feedback from the community and the need to evolve. The shift towards longer-form content is seen as a strategic business decision, aiming to keep users engaged for extended periods and attract advertisers with more monetization possibilities.

Over the past three years, TikTok has incrementally increased video length limits, currently testing 15-minute uploads. The new Creativity Program targets adult creators with 10,000 or more followers, promising higher pay for videos surpassing the one-minute mark.

While TikTok encourages creators with the prospect of increased payments and deeper audience engagement, critics argue that the platform risks losing its distinct identity. The challenge for creators lies in adapting to the demands of longer content, with concerns about the dwindling attention spans of today’s audience.

Despite apprehensions, TikTok reports creators making longer-form content have more than doubled their earnings in the past year. The platform insists that video recommendations are based on user preferences rather than length, aiming to allay fears of marginalized short-form creators.

As TikTok embraces this evolution, creators like Aly Tabizon express both excitement and concern. Monetizing short astrology videos has been “life-changing,” yet the transition to longer content may pose challenges, given the prevailing eight- to ten-second attention span. Tabizon, however, remains open to experimentation, acknowledging the potential for greater pay.

For some, the shift to longer videos raises issues of resource constraints. Laura Riegle, a TikTok creator known for short, snappy content, highlights the increased time and effort required for long-form videos, posing challenges for creators with limited free time.

TikTok, recognizing the evolving landscape, offers alternative monetization avenues such as subscriptions and tips. However, skepticism persists among creators who find these methods akin to “busking on the street” and potentially unsustainable.

As TikTok navigates this transition, the platform faces the delicate task of balancing the demands of longer-form content with the expectations and preferences of its diverse creator community.

Google Initiates Cookie Slaughter: Chrome’s Tracking Overhaul Begins January 4th

In a groundbreaking move, Google has announced that its long-awaited dismantling of internet cookies will kick off on January 4th. The initial phase will block cookies for 1% of Chrome users, approximately 30 million people. This marks the inaugural step in Google’s Privacy Sandbox project, designed to replace traditional cookies with an alternative tracking system that purportedly offers enhanced privacy features.

For the past three decades, websites and tech companies have heavily relied on “third-party cookies” to track consumers online. The prevalence of these cookies has allowed businesses, including Google, to collaboratively monitor users’ online activities, raising concerns about privacy infringement.

In lieu of cookies, Google has introduced a new suite of tools that empowers the Chrome browser to internally track users’ online behavior. This data remains on the user’s device, with the browser categorizing individuals into distinct groups, or “Ad Topics,” such as “Yoga Fan” or “Young Conservative.” While websites can inquire about these categories, they are unable to pinpoint the user’s identity, a departure from the conventional use of cookies.
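The data flow described above can be sketched in a few lines. This is an illustrative toy, not Google’s actual Topics implementation: the point is that the browsing history never leaves the device, and a querying website receives only coarse topic labels.

```python
# Illustrative sketch of the on-device ad-topic idea (NOT Google's actual
# Privacy Sandbox code): the browser classifies history locally and exposes
# only coarse topic labels to websites, never URLs or identity.
TOPIC_KEYWORDS = {
    "Yoga Fan": {"yoga", "meditation", "pilates"},
    "Young Conservative": {"politics", "debate"},
    "Travel": {"flights", "hotels"},
}

def infer_topics(history: list[str]) -> list[str]:
    """Map keywords found in visited pages to coarse topics, on device."""
    matched = []
    for topic, keywords in TOPIC_KEYWORDS.items():
        if any(any(kw in page for kw in keywords) for page in history):
            matched.append(topic)
    return matched

def site_query(history: list[str]) -> list[str]:
    # All a website can learn: topic names, not the underlying history.
    return infer_topics(history)

print(site_query(["myyoga.example/poses", "cheap-flights.example"]))
# ['Yoga Fan', 'Travel']
```

The contrast with third-party cookies is structural: a cookie lets many sites jointly reconstruct a user’s full browsing trail, while here the trail stays local and only the derived categories cross the boundary.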

Chrome will continue to track user activity, a practice that browsers like Firefox and Safari have already curtailed, but Google’s revamped version still represents a notable stride in privacy preservation: the new system discloses far less information about users and their internet activities.

Victor Wong, Google’s senior director of product management for Privacy Sandbox, emphasized the significant shift, stating, “We are making one of the largest changes to how the Internet works at a time when people, more than ever, are relying on the free services and content that the web offers.”

While these Privacy Sandbox cookie replacements are currently available on the Chrome browser as an optional tool, their adoption signifies a substantial shift given Chrome’s dominance in the browser market. Users have the flexibility to disable these features in their settings if they find them undesirable.

The impending changes may cause disruptions, given the integral role cookies play in various online functions. Google acknowledges potential issues and is actively working to identify and retain essential cookies while phasing out intrusive ones. Users can disable the new “Tracking Protection” tool on demand, and Chrome will prompt users to disable it for specific websites if complications arise.

Come January 4th, a select 1% of users will experience “Tracking Protection” by default, denoted by a distinctive eyeball logo in the URL bar. As Google progresses with its cookie elimination initiative, this transformation stands as a significant milestone in shaping the future landscape of internet privacy.

OpenAI’s Superalignment Team Focuses on AI Governance Amid Leadership Shake-Up

Amidst the fallout of Sam Altman’s abrupt departure from OpenAI and the subsequent chaos, OpenAI’s Superalignment team remains steadfast in its mission to tackle the challenge of controlling AI that surpasses human intelligence. While the leadership turmoil unfolds, the team, led by Ilya Sutskever, is actively working on strategies to steer and regulate superintelligent AI systems.

This week, members of the Superalignment team, including Collin Burns, Pavel Izmailov, and Leopold Aschenbrenner, presented their latest work at NeurIPS, the annual machine learning conference in New Orleans. Their primary goal is to ensure that AI systems behave as intended, especially as they venture into the realm of superintelligence.

The Superalignment initiative, launched in July, is part of OpenAI’s broader efforts to govern AI systems with intelligence surpassing that of humans. Collin Burns acknowledged the difficulty in aligning models smarter than humans, posing a significant challenge for the research community.


A figure illustrating the Superalignment team’s AI-based analogy for aligning superintelligent systems.

Despite the recent leadership changes, Ilya Sutskever continues to lead the Superalignment team, raising questions given his involvement in Altman’s ouster. The Superalignment concept has sparked debates within the AI research community, with some questioning its timing and others considering it a distraction from more immediate regulatory concerns.

While Altman drew comparisons between OpenAI and the Manhattan Project, emphasizing the need to protect against catastrophic risks, skepticism remains about the imminent development of superintelligent AI systems with world-ending capabilities. Critics argue that focusing on such concerns diverts attention from pressing issues like algorithmic bias and the toxicity of AI.

The Superalignment team is actively developing governance and control frameworks for potential superintelligent AI systems. Their approach involves using a less sophisticated AI model to guide a more advanced one, akin to a human supervisor guiding a superintelligent AI system.
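The weak-to-strong setup described above can be made concrete with a toy analogy. The sketch below is an assumption-laden illustration, not the team’s actual method: a deliberately limited “weak supervisor” labels data imperfectly, and a more capable “student” model is trained on those imperfect labels, mirroring a human supervising a system smarter than themselves.

```python
import random

# Toy analogy of weak-to-strong supervision (illustrative only, not
# OpenAI's code): a weak supervisor with a blind spot produces imperfect
# labels, and a stronger student model is trained on them. The research
# question is how much the student inherits or outgrows that blind spot.

random.seed(0)

def true_rule(x):
    """Ground truth that the weak supervisor only approximates."""
    return 1 if x[0] + x[1] > 1.0 else 0

def weak_supervisor(x):
    """Sees only the first feature: coarse, systematically imperfect labels."""
    return 1 if x[0] > 0.5 else 0

data = [(random.random(), random.random()) for _ in range(400)]
weak_labels = [weak_supervisor(x) for x in data]

# "Strong student": a simple perceptron trained on the weak labels, but
# with access to both features the supervisor cannot fully use.
w = [0.0, 0.0]
b = 0.0
for _ in range(50):
    for x, y in zip(data, weak_labels):
        pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        err = y - pred  # perceptron update toward the (weak) label
        w[0] += 0.1 * err * x[0]
        w[1] += 0.1 * err * x[1]
        b += 0.1 * err

def student(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

test_set = [(random.random(), random.random()) for _ in range(200)]
acc = sum(student(x) == true_rule(x) for x in test_set) / len(test_set)
print(f"student accuracy vs ground truth: {acc:.2f}")
```

In this toy, the student largely reproduces its supervisor’s rule; the team’s research asks under what conditions a genuinely stronger model can instead recover more of the ground truth than its weak teacher can express.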

In a surprising move, OpenAI announced a $10 million grant program to support technical research on superintelligent alignment. The funding, including a contribution from former Google CEO Eric Schmidt, is aimed at encouraging research from academic labs, nonprofits, individual researchers, and graduate students. The move has prompted speculation about Schmidt’s commercial interests in AI.

Despite concerns, the Superalignment team assures that their research, along with the work supported by grants, will be shared publicly, adhering to OpenAI’s mission of contributing to the safety of AI models for the benefit of humanity. The team remains committed to addressing one of the most critical technical challenges of our time: aligning superhuman AI systems to ensure their safety and benefit for all.

Google Launches Duet AI for Developers with Powerful Gemini Model Integration

Google has officially released Duet AI for Developers, a suite of AI-powered assistance tools designed for code completion and generation. The company has announced the general availability of the tool, revealing plans to incorporate Google’s robust Gemini model in the upcoming weeks.

While code completion and generation tools have become commonplace, Google stands out by collaborating with 25 companies. These partners, including Confluent, HashiCorp, and MongoDB, are contributing datasets to assist developers in building and troubleshooting applications specific to their platforms.

The collaborative effort extends beyond code completion, with partners such as Datadog, JetBrains, and Langchain providing documentation and knowledge sources. This data aims to enhance the Duet AI for Developers chat experience, offering information on creating test automation, resolving production issues, and addressing vulnerabilities.

Richard Seroter, Chief Evangelist for Google Cloud, highlighted the ambition to eliminate developer toil and enhance the coding experience using AI. The goal is to create an AI assistant that integrates seamlessly into developers’ tools while incorporating Google’s expertise.

The integration involves training the model on the latest cloud-native practices and incorporating it into the Google Cloud Console, along with popular IDEs that developers commonly use. Seroter emphasized that Google views the Duet AI product family, including Duet AI in Security Operations, as enterprise-grade, with features such as enterprise access controls and Google’s indemnification guarantee.

Google’s approach aligns with the broader industry narrative, emphasizing that AI coding tools, including Duet AI, are complementary to developers’ skills rather than replacements. Productivity gains have been reported, with Turing, an AI-powered tech services company, experiencing a 33% increase after adopting Duet AI for Developers.

Duet AI for Developers currently supports over 20 languages, including C, C++, Java, JavaScript, and Python. Beyond coding capabilities, it features AI log summarization and error explanation integrated with Google’s Cloud Logging. Additionally, Smart Actions provide one-click shortcuts for tasks like unit test generation.

Duet AI for Developers will be available for free until the end of January 2024. After that, the subscription will cost $19 per user per month with an annual commitment.

Revolutionizing Conversations: Kobie AI Unlocks Interactive Dialogue with Historical Figures

In a groundbreaking approach to artificial intelligence (AI), Kobie Fuller’s innovative use of generative AI, known as Kobie AI, is shedding light on the positive aspects of technology. One notable application is the ability to interact with historical figures such as James Lowry, an influential yet lesser-known figure in the Black experience in America.

Image: James Lowry

James Lowry AI for DEI: Transforming Insights into Interactive Conversations

James Lowry, whose history is deeply intertwined with the Black experience in America, is brought to life through Kobie Fuller’s AI experiment. The tool, Kobie AI, allows users to engage with Lowry’s experiences, particularly focusing on diversity, equity, and inclusion (DEI). By feeding Lowry’s book, “Change Agent,” into a large language model, users can now pose questions and receive sophisticated and in-depth answers based on Lowry’s actual words and deeds.
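The pattern of “feeding a book into a large language model” is typically a retrieval step: the book is split into passages, the most relevant passage for a question is found, and that passage is handed to the model as context. The sketch below shows that retrieval step in a generic, assumption-laden form (placeholder passages, bag-of-words similarity); it is not Kobie AI’s actual pipeline.

```python
import math
from collections import Counter

# Generic retrieval sketch (NOT Kobie AI's actual implementation): split a
# book into passages, score each against a question with bag-of-words
# cosine similarity, and pass the best passage to an LLM as context.

passages = [
    "Placeholder passage about investing in underrepresented communities.",
    "Placeholder passage about a career in management consulting.",
]  # in practice, chunks of the book's actual text

def vectorize(text: str) -> Counter:
    """Crude bag-of-words vector: token counts from lowercased text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_passage(question: str) -> str:
    """Return the passage most similar to the question."""
    q = vectorize(question)
    return max(passages, key=lambda p: cosine(q, vectorize(p)))

context = best_passage("How should we invest in underrepresented communities?")
# `context` would then be prepended to the question in the LLM prompt,
# grounding the answer in the author's own words.
```

A production system would use learned embeddings rather than word counts, but the shape is the same: the model’s “sophisticated and in-depth answers” are anchored to retrieved source text rather than to the model’s memory alone.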

Unlocking Wisdom: Kobie AI’s Role in Preserving and Sharing Life Experiences

Lowry, who dedicated his life to promoting investment in historically underrepresented communities, authored the book as a means of sharing his experiences with the world. Recognizing that not everyone will read the entire book, Lowry sees AI as a powerful tool to allow people to grasp the essence of his journey by simply asking questions.

Interactive Learning: Kobie AI as a Teaching Tool for Future Generations

The AI platform begins with a prompt inviting users to explore DEI topics and seek wisdom from Lowry’s life journey. Students, historians, DEI professionals, or anyone interested can inquire about DEI issues or delve into specific moments in Lowry’s life, creating an interactive dialogue that can serve as a teaching tool for understanding the experiences of a Black man in American business.

Generative AI’s Potential: Transforming Historical Narratives

As Kobie Fuller continues to explore the capabilities of this technology, the interactive dialogue with James Lowry is just one example of how generative AI can be a powerful vehicle for understanding diverse experiences. From facilitating conversations about DEI to immortalizing the wisdom of historical figures, Kobie AI showcases the transformative potential of AI in shaping our understanding of the past.