Microsoft Unveils Revolutionary AI Key on Keyboards, Integrating Copilot for Enhanced User Experience

In a move it bills as groundbreaking, Microsoft has unveiled its most significant keyboard change in three decades: a dedicated artificial intelligence (AI) key that gives users direct access to Copilot, Microsoft’s AI assistant, on the latest Windows 11 PCs.

This innovation comes as a result of Microsoft’s substantial investment in OpenAI, the driving force behind the AI capabilities of Copilot. The integration of AI into various products, including Microsoft 365 and Bing search, marked a notable milestone for the tech giant in 2023.

Notably, Microsoft’s rival Apple has offered a Siri button on the Touch Bar of some of its MacBooks for several years.

Copilot, designed to assist users with tasks such as searching, composing emails, and creating images, is at the forefront of Microsoft’s technological advancements.

Yusuf Mehdi, Microsoft’s executive vice president, referred to this development as a “transformative” moment, drawing parallels to the introduction of the Windows key nearly 30 years ago. Mehdi emphasized that the AI key would “simplify” and “amplify” the overall user experience.

Microsoft will showcase keyboards featuring the new Copilot key at the upcoming CES tech event in Las Vegas next week, and the key is expected to appear on new products starting in February.

When Copilot was integrated into Office 365 products like Word, PowerPoint, and Teams, it demonstrated its ability to summarize meetings, compose emails, and create presentations. The tool has also found its way into Microsoft’s Bing search engine.

According to Professor John Tucker, a computer scientist at Swansea University, the introduction of a dedicated key is a “natural step” that underscores the company’s commitment to the feature’s potential to engage users across its products. He noted, however, that keyboards have changed remarkably little over the past 30 years, which he said is not a point of pride.

While Windows 11 users can currently access Copilot by pressing the Windows key + C, the new AI key signifies Microsoft’s emphasis on the feature and its potential to unify users across its product ecosystem.

It’s worth noting that Google, the world’s leading search engine, has its own AI system called Bard. Microsoft’s partner, OpenAI, introduced the powerful AI tool ChatGPT in 2022, prompting competitors to hurriedly release their own versions. Copilot itself is built upon OpenAI’s GPT-4 large language model.

The UK’s competition watchdog is currently examining Microsoft’s relationship with OpenAI, after boardroom upheaval at the startup highlighted how closely the two companies are intertwined.

Authors File Lawsuit Against OpenAI and Microsoft for Alleged Copyright Infringement in AI Training



A group of 11 nonfiction authors has filed a lawsuit in Manhattan federal court accusing OpenAI and Microsoft (MSFT.O) of misusing their written works to train the models behind OpenAI’s widely used chatbot ChatGPT and other artificial intelligence-based software.

The authors, including Pulitzer Prize winners Taylor Branch, Stacy Schiff, and Kai Bird, co-author of the J. Robert Oppenheimer biography “American Prometheus,” argued in Tuesday’s filing that the companies infringed their copyrights by using their work to train OpenAI’s GPT large language models.

As of Wednesday, representatives for OpenAI, Microsoft, and the authors have not responded to requests for comment.

Last month, writer and Hollywood Reporter editor Julian Sancton initiated the proposed class-action lawsuit. This legal action is part of a series of cases brought by groups of copyright owners, including renowned authors such as John Grisham, George R.R. Martin, and Jonathan Franzen, alleging the misuse of their work in AI training by OpenAI and other tech companies. The companies, including OpenAI, have consistently denied these allegations.

Notably, Sancton’s lawsuit is the first author-initiated legal action against OpenAI that also names Microsoft as a defendant. Microsoft has invested billions of dollars in the artificial intelligence startup and has seamlessly integrated OpenAI’s systems into its products.

According to the amended complaint filed on Monday, OpenAI allegedly “scraped” the authors’ works, along with a substantial amount of other copyrighted material from the internet, without permission. This material was purportedly used to teach GPT models how to respond to human text prompts. The lawsuit contends that Microsoft has been “deeply involved” in training and developing these models, making it equally liable for copyright infringement.

The authors are seeking an unspecified amount of monetary damages and are requesting the court to issue an order for the companies to cease infringing on their copyrights.

Cloud Dominance: The Growing Concerns Over Big Tech’s Control in AI Development

When engaging with AI chatbots such as Google’s Bard or OpenAI’s ChatGPT, users are actually interacting with a product shaped by four critical components: the engineering behind the chatbot’s AI model, the extensive training data it processed to learn how to respond to prompts, the specialized semiconductor chips used for training (a process that can take months), and, increasingly, the cloud platforms that tie the other three together.

Cloud platforms aggregate the computing power of sought-after semiconductor chips, offering online storage and services to AI companies in need of substantial processing capabilities and a secure space for their training data. This dependence on cloud services significantly influences the dynamics of the broader AI industry, positioning cloud companies at the core of a transformative technology expected to impact work, leisure, and education.

The cloud market, dominated by a few major players like Amazon, Microsoft, and Google, has prompted concerns about potential anticompetitive influence over the future of AI. Policymakers, including Senator Elizabeth Warren, emphasize the need for regulation to prevent these tech giants from consolidating power and endangering competition, consumer privacy, innovation, and national security.

While the public cloud market is projected to grow by over 20% to $679 billion next year, AI’s share of this expenditure could range from 30% to 50% within five years, according to industry analysts. This shift places a spotlight on the limited number of cloud platforms capable of delivering the massive processing power increasingly demanded by AI developers.

Government scrutiny is on the rise, with the Federal Trade Commission (FTC) and President Joe Biden expressing concerns about competition in cloud markets impacting AI development. The FTC warns against a potential stranglehold on essential inputs for AI development, and Biden’s executive order emphasizes the need to address risks arising from dominant firms’ control over semiconductors, computing power, cloud storage, and data.

Exclusive agreements between AI companies and cloud providers, hefty fees for data withdrawal, and the potential for cloud credits to lock in customers have raised competition concerns. Critics fear inflated pricing, anticompetitive practices, and exploitative contract terms that could hinder the development and accessibility of AI services.

Cloud providers defend their record, citing a highly competitive market that benefits the U.S. economy. They argue that customers negotiate extensively on various aspects, including price, storage capacity, and contract terms. However, concerns persist among regulators worldwide, reflecting broader apprehensions towards Big Tech’s concentration of power in digital markets.

As the AI industry continues to evolve, the debate over the role and influence of cloud platforms in shaping its trajectory intensifies. Some AI companies intentionally avoid exclusive ties with cloud vendors, highlighting the significant power wielded by cloud firms in the market.

OpenAI’s Superalignment Team Focuses on AI Governance Amid Leadership Shake-Up

Amid the fallout of Sam Altman’s abrupt departure from OpenAI and the subsequent chaos, OpenAI’s Superalignment team remains steadfast in its mission to tackle the challenge of controlling AI systems that surpass human intelligence. While the leadership turmoil unfolds, the team, led by Ilya Sutskever, is actively working on strategies to steer and regulate superintelligent AI systems.

This week, members of the Superalignment team, including Collin Burns, Pavel Izmailov, and Leopold Aschenbrenner, presented their latest work at NeurIPS, the annual machine learning conference in New Orleans. Their primary goal is to ensure that AI systems behave as intended, especially as they venture into the realm of superintelligence.

The Superalignment initiative, launched in July, is part of OpenAI’s broader efforts to govern AI systems with intelligence surpassing that of humans. Collin Burns acknowledged the difficulty in aligning models smarter than humans, posing a significant challenge for the research community.


A figure illustrating the Superalignment team’s AI-based analogy for aligning superintelligent systems.

Despite the recent leadership changes, Ilya Sutskever continues to lead the Superalignment team, raising questions given his involvement in Altman’s ouster. The Superalignment concept has sparked debates within the AI research community, with some questioning its timing and others considering it a distraction from more immediate regulatory concerns.

While Altman drew comparisons between OpenAI and the Manhattan Project, emphasizing the need to protect against catastrophic risks, skepticism remains about the imminent development of superintelligent AI systems with world-ending capabilities. Critics argue that focusing on such concerns diverts attention from pressing issues like algorithmic bias and the toxicity of AI.

The Superalignment team is actively developing governance and control frameworks for potential superintelligent AI systems. Their approach involves using a less sophisticated AI model to guide a more advanced one, akin to a human supervisor guiding a superintelligent AI system.
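The weak-to-strong idea described above can be sketched with a toy example. In this hypothetical setup (illustrative only, not OpenAI’s actual code or models), a crude “weak” labeler that inspects just one feature supervises a more capable “strong” learner that sees all features, standing in for a human supervisor guiding a superintelligent system:

```python
# Toy weak-to-strong supervision sketch (hypothetical, illustrative only).
# A crude "weak" supervisor produces imperfect labels; a more capable
# "strong" learner is then trained solely on those labels.

def weak_labeler(point):
    # Weak supervisor: looks only at the first coordinate.
    return 1 if point[0] > 0.5 else 0

def train_strong(points):
    # "Strong" learner: a 1-nearest-neighbour memoriser over weakly
    # labelled data (a stand-in for fine-tuning a larger model).
    return [(p, weak_labeler(p)) for p in points]

def strong_predict(model, query):
    # Predict by copying the weak label of the closest training point.
    def dist(entry):
        p, _ = entry
        return sum((a - b) ** 2 for a, b in zip(p, query))
    return min(model, key=dist)[1]

training_points = [(0.9, 0.1), (0.2, 0.8), (0.7, 0.7), (0.1, 0.2)]
model = train_strong(training_points)
label = strong_predict(model, (0.85, 0.15))  # inherits the weak label of (0.9, 0.1)
```

The open research question the team is studying is whether the strong learner can generalize beyond the weak supervisor’s mistakes rather than simply imitate them, something this toy memoriser deliberately does not capture.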

In a surprising move, OpenAI announced a $10 million grant program to support technical research on superintelligent alignment. The funding, including a contribution from former Google CEO Eric Schmidt, is aimed at encouraging research from academic labs, nonprofits, individual researchers, and graduate students. The move has prompted speculation about Schmidt’s commercial interests in AI.

Despite concerns, the Superalignment team assures that their research, along with the work supported by grants, will be shared publicly, adhering to OpenAI’s mission of contributing to the safety of AI models for the benefit of humanity. The team remains committed to addressing one of the most critical technical challenges of our time: aligning superhuman AI systems to ensure their safety and benefit for all.

Revolutionizing Conversations: Kobie AI Unlocks Interactive Dialogue with Historical Figures

In a novel application of generative artificial intelligence (AI), Kobie Fuller’s experiment, known as Kobie AI, highlights the technology’s positive potential. One notable use is the ability to converse with historical figures such as James Lowry, an influential yet lesser-known figure in the Black experience in America.

Image: James Lowry

James Lowry AI for DEI: Transforming Insights into Interactive Conversations

James Lowry, whose history is deeply intertwined with the Black experience in America, is brought to life through Kobie Fuller’s AI experiment. The tool, Kobie AI, allows users to engage with Lowry’s experiences, particularly focusing on diversity, equity, and inclusion (DEI). By feeding Lowry’s book, “Change Agent,” into a large language model, users can now pose questions and receive sophisticated and in-depth answers based on Lowry’s actual words and deeds.

Unlocking Wisdom: Kobie AI’s Role in Preserving and Sharing Life Experiences

Lowry, who dedicated his life to promoting investment in historically underrepresented communities, authored the book as a means of sharing his experiences with the world. Recognizing that not everyone will read the entire book, Lowry sees AI as a powerful tool to allow people to grasp the essence of his journey by simply asking questions.

Interactive Learning: Kobie AI as a Teaching Tool for Future Generations

The AI platform opens with a prompt inviting users to explore DEI topics and seek wisdom from Lowry’s life journey. Students, historians, DEI professionals, and anyone else interested can ask about DEI issues or delve into specific moments in Lowry’s life, creating an interactive dialogue that can serve as a teaching tool for understanding the experiences of a Black man in American business.

Generative AI’s Potential: Transforming Historical Narratives

As Kobie Fuller continues to explore the capabilities of this technology, the interactive dialogue with James Lowry is just one example of how generative AI can be a powerful vehicle for understanding diverse experiences. From facilitating conversations about DEI to immortalizing the wisdom of historical figures, Kobie AI showcases the transformative potential of AI in shaping our understanding of the past.

Google’s Gemini Demo Raises Questions About Transparency and Accuracy

In a recent demonstration video titled “Hands-on with Gemini: Interacting with multimodal AI,” Google showcased its GPT-4 competitor, Gemini, with high expectations. However, a Bloomberg opinion piece highlights concerns about the video’s accuracy and transparency.

According to Bloomberg, Google admitted that parts of the video were staged, with edits made to accelerate the outputs, as disclosed in the video description. The implied voice interaction between a human user and the AI, touted in the demonstration, was revealed to be non-existent. Instead, the actual demo involved the creation of interactions by “using still image frames from the footage and prompting via text,” rather than responding to real-time drawing or object changes on the table.

The lack of a disclaimer about the actual input method raises questions about the readiness of Gemini, portraying a less impressive capability than the video implies. While Google denies any wrongdoing, referencing a post by Gemini’s co-lead, Oriol Vinyals, stating that “all the user prompts and outputs in the video are real,” critics argue that the tech giant should exercise more sensitivity in its presentations, especially given the increased scrutiny from both the industry and regulatory authorities on AI practices.

Google Unveils Major Upgrade for Bard, its AI Chatbot, Empowered by Gemini

Google has announced a substantial update for Bard, its generative AI chatbot and competitor to ChatGPT. The company claims that this update will significantly boost Bard’s capabilities by integrating Gemini, Google’s latest and most advanced AI model. The incorporation of Gemini is expected to enhance Bard’s reasoning, planning, understanding, and other functionalities.

Gemini is available in three sizes – Ultra, Pro, and Nano – making it adaptable for deployment on a range of devices, from mobile phones to data centers.

The rollout of Gemini to Bard will occur in two phases. Initially, Bard will receive an upgrade with a specially tuned version of Gemini Pro. In the following year, Google plans to introduce Bard Advanced, offering users access to the top AI model, starting with Gemini Ultra.

The version of Bard featuring Gemini Pro will initially be available in English across more than 170 countries and territories globally, with more languages and regions, including the EU and U.K., to follow soon.

Before its public launch, Gemini Pro underwent industry-standard benchmark testing. Google reports that it outperformed GPT-3.5 in six of eight benchmarks, including MMLU, a broad multitask language understanding benchmark, and grade-school math reasoning. However, GPT-3.5 is more than a year old, leading some, including TechCrunch’s Kyle Wiggers, to view this as more of a catch-up move than an outright leap.

The enhancements brought by Gemini are expected to make Bard more proficient in tasks such as content understanding and summarization, reasoning, brainstorming, writing, and planning.

Sissie Hsiao, VP and GM of Assistant and Bard at Google, described this upgrade as the “biggest single quality improvement of Bard since we’ve launched.”

Gemini Pro will initially power text-based prompts in Bard, with plans to expand to multimodal support (texts, images, or other modalities) in the coming months.

In 2024, Bard Advanced will debut, offering a new experience powered by Gemini’s most capable model, known as Gemini Ultra. This model can comprehend and act on various types of information, including text, images, audio, video, and code, with multimodal reasoning capabilities. Gemini Ultra can also understand, explain, and generate high-quality code in popular programming languages, according to Google.

Google will launch a trusted tester program for Bard Advanced before a broader release early next year. Additionally, the company will subject Bard Advanced to additional safety checks before its official launch.

This update follows several improvements to Bard over the past eight months, including features like answering questions about YouTube videos and integrating with various Google apps and services. With Gemini, Google aims to bring users “the best AI collaborator in the world,” acknowledging that Bard is not quite there yet.

ChatGPT’s First Birthday: A Year of Impact, Hype, and Turmoil

Revolutionizing Silicon Valley’s Fortunes

In a surprising turn of events, ChatGPT, OpenAI’s groundbreaking chatbot, celebrates its first birthday amid a year of upheavals and triumphs. The user-friendly generative AI technology managed to captivate the world, breathing new life into Silicon Valley and sparking an AI arms race.

Unveiling the AI Revolution

A year since its public release, ChatGPT continues to be a driving force behind the ongoing AI revolution. Tech giants invest billions, nations hoard essential chips, and the promises and challenges of generative AI echo through boardrooms and homes worldwide.

Pandora’s Box: Navigating the Impact

ChatGPT’s impact on society unfolds with a mix of excitement and challenges. Schools grapple with integrating the technology into education, artists face disruption, and the labor market experiences shifts, both positive and negative. As AI’s long-prophesied impacts emerge, society collectively navigates the implications of an open Pandora’s box.

Imagining a New Equitable Future

The future of generative AI sparks debates about its impact on society’s equity. As jobs shift from human to AI hands, questions arise about societal adaptation. The transition poses challenges, urging the public to pressure policymakers for creative solutions, such as taxing AI companies or experimenting with universal basic income.

OpenAI’s Rollercoaster Ride

OpenAI, the creator of ChatGPT, faces internal strife, with CEO Sam Altman ousted and reinstated within a span of five days. The turmoil casts a shadow on the organization’s mission of ensuring AI benefits humanity, raising questions about its ability to keep itself together while navigating the transformative potential of AI.

OpenAI’s Leadership Shake-Up Resolved: Sam Altman Reinstated, New Board Faces Criticism for Lack of Diversity

The recent power struggle at OpenAI, which unfolded following the dismissal of co-founder Sam Altman, has come to a conclusion, with Altman making a return. However, the resolution raises questions about the future of the organization.

OpenAI emerges from the episode looking like a different company, and not necessarily a better one. Sam Altman, the former president of Y Combinator, is back in charge, but the legitimacy of his reinstatement is under scrutiny. The new board of directors is drawing criticism for its lack of diversity, as it consists entirely of white men, and there are concerns that OpenAI’s original philanthropic goals could give way to more profit-driven interests.

The original structure of OpenAI featured a six-person board: Altman, chief scientist Ilya Sutskever, president Greg Brockman, entrepreneur Tasha McCauley, Quora CEO Adam D’Angelo, and Helen Toner of Georgetown’s Center for Security and Emerging Technology. This nonprofit board controlled the for-profit side of OpenAI, holding decision-making power over its activities, investments, and overall direction, in line with the mission of ensuring that artificial general intelligence benefits humanity.

However, with the involvement of investors and powerful partners, challenges emerged. Altman’s sudden removal led to discontent among OpenAI’s backers, including Microsoft CEO Satya Nadella and Vinod Khosla of Khosla Ventures, who expressed a desire for Altman’s return. Legal action was even contemplated by several major backers if negotiations failed to reinstate Altman.

After days of turmoil, a resolution was reached. Altman and Brockman returned, subject to a background investigation. A new transitional board was established, meeting one of Altman’s demands. OpenAI is set to maintain its structure, with capped profits for investors and a board empowered to make decisions not solely driven by revenue.

Despite claims of victory for the “good guys,” questions linger about the legitimacy of Altman’s return. Accusations of not being consistently candid and prioritizing growth over mission were leveled against him. The new board, consisting of Bret Taylor, Adam D’Angelo, and Larry Summers, raises concerns about diversity and inclusivity, with all-male initial appointments potentially violating European board seat regulations.

The lack of diversity in the board composition has drawn criticism from AI academics and experts. Concerns about Summers’ history of making unflattering remarks about women further fuel apprehensions. Critics argue that a board lacking deep knowledge of responsible AI use in society, coupled with a lack of diversity, is not a promising start for a company as influential as OpenAI.

The decision not to include well-known AI ethicists like Timnit Gebru or Margaret Mitchell in the initial board appointment process raises questions about OpenAI’s commitment to addressing challenges related to AI bias and responsible use. The absence of such voices may impact the board’s ability to consistently prioritize these important issues.