OpenAI’s Response to Elon Musk Lawsuit Reveals Discord Over Company’s Mission

OpenAI has responded to Elon Musk’s lawsuit with revelations of internal discord over the company’s mission and funding. Last week, Musk sued the ChatGPT company, alleging a deviation from its original nonprofit mission in favor of profit-driven motives.

In its defense, OpenAI published excerpts from Musk’s emails from the company’s early days, showing that he acknowledged the need for substantial funding to support its ambitious AI projects.

According to the released emails, Musk argued that solely relying on fundraising would not suffice for OpenAI’s success in building a generative AI platform. Instead, he suggested seeking alternative revenue streams to ensure the company’s sustainability.

In a November 22, 2015 email to CEO Sam Altman, Musk proposed a funding commitment of over $1 billion, promising to cover any shortfall beyond that. However, OpenAI claims Musk only contributed $45 million, while other donors raised $90 million.

Musk’s emails also revealed that on February 1, 2018, he suggested Tesla acquire OpenAI, a proposal the company rejected. Musk parted ways with OpenAI later that year.

Expressing concerns in a December 2018 email about OpenAI’s relevance without substantial resources, Musk emphasized the need for billions in funding annually.

After Musk’s departure, OpenAI formed a for-profit entity, OpenAI LP, in 2019; within a few years, the company’s valuation had climbed to $90 billion. Microsoft later committed $13 billion in a partnership with OpenAI.

Musk’s lawsuit alleges a breach of contract, claiming that OpenAI’s partnership with Microsoft violated its founding charter. He seeks a jury trial and demands reimbursement of profits received by the company’s executives.

OpenAI, originally established to mitigate the risks of artificial general intelligence, insists it has not deviated from its mission. The company affirms its commitment to product safety and improving people’s lives through its technology.

In a blog post, OpenAI expressed disappointment over the legal dispute with Musk, whom they admired but accused of hindering their progress toward their mission.

“We’re sad that it’s come to this with someone whom we’ve deeply admired—someone who inspired us to aim higher, then told us we would fail, started a competitor, and then sued us when we started making meaningful progress towards OpenAI’s mission without him,” the company stated.

OpenAI intends to move to dismiss all of Musk’s claims and reaffirms its dedication to advancing AI while prioritizing ethical considerations.

Microsoft Discloses Ongoing Russian Hack Attempts Despite Previous Breach

Microsoft (MSFT.O) revealed on Friday that hackers associated with Russia’s foreign intelligence service were once again attempting to breach its systems, using data stolen from corporate emails back in January to seek new access to the tech giant, whose products are used extensively across the U.S. national security establishment.

This disclosure has raised concerns among analysts regarding the safety of systems and services provided by Microsoft, one of the world’s largest software makers. The company supplies digital services and infrastructure to the U.S. government, amplifying worries about national security risks.

Microsoft has attributed the intrusions to a Russian state-sponsored group known as Midnight Blizzard, or Nobelium. The Russian embassy in Washington did not immediately respond to requests for comments on Microsoft’s statement, nor did it respond to previous statements regarding Midnight Blizzard’s activities.

The breach, initially disclosed by Microsoft in January, targeted corporate email accounts, including those of senior company leaders, as well as cybersecurity, legal, and other functions. The tech firm stated in a recent blog that evidence showed Midnight Blizzard utilizing information obtained from the corporate email systems to gain unauthorized access or attempt to do so.

Jerome Segura, principal threat researcher at Malwarebytes’ ThreatDown Labs, called it unsettling that the attack is ongoing despite Microsoft’s efforts to cut off access. Particularly worrying, he said, is that customers have little reassurance while Microsoft is still learning about the intrusion as it unfolds.

Microsoft confirmed the hackers gained access to source code repositories and internal systems. Because Microsoft owns GitHub, a public repository for software code, analysts worry such information could be exploited to compromise software and introduce backdoors.

Microsoft revealed that the hackers used a “password spray” attack, in which a few common passwords are tried across many accounts, to break into staff emails, and that the volume of attempts had increased significantly compared with the January breach. Adam Meyers, a senior vice president at CrowdStrike, highlighted the severity of the situation, emphasizing how deeply the hackers had infiltrated Microsoft.
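The password-spray pattern differs from classic brute force: instead of hammering one account with many passwords, which quickly trips lockout policies, the attacker tries a handful of common passwords across a large set of accounts. A minimal illustrative sketch of the idea follows; all account names, passwords, and the toy credential checker are hypothetical, not details from the Microsoft incident.

```python
# Illustrative sketch of a password-spray pattern: a few common passwords
# tried across many accounts, staying under per-account lockout thresholds.
# All names and credentials here are hypothetical.

COMMON_PASSWORDS = ["Winter2024!", "Password1", "Welcome123"]

def spray(accounts, check_credentials, max_tries_per_account=3):
    """Try each common password against every account, capping attempts
    per account so a lockout policy is never triggered."""
    compromised = {}
    for password in COMMON_PASSWORDS[:max_tries_per_account]:
        for user in accounts:
            if user in compromised:
                continue  # already compromised; don't waste an attempt
            if check_credentials(user, password):
                compromised[user] = password
    return compromised

# Toy directory standing in for a real identity provider.
directory = {"alice": "Winter2024!", "bob": "x9#kQ-strong-pw", "carol": "Welcome123"}
hits = spray(directory, lambda user, pw: directory[user] == pw)
```

The defensive takeaway is the same one Microsoft's disclosure implies: per-account lockouts alone do not stop spraying, which is why guidance typically emphasizes multi-factor authentication and detection of many failed logins spread across accounts.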

Midnight Blizzard has a history of targeting governments, diplomatic entities, and non-governmental organizations, according to analysts. Microsoft believes the group targeted them due to the company’s extensive research into Midnight Blizzard’s operations, dating back to at least 2021.

Microsoft’s threat intelligence team has been investigating Nobelium since then, particularly following its role in the SolarWinds supply-chain attack. The persistence of the breach attempts, despite Microsoft’s countermeasures, underscores the threat actor’s sustained commitment and resources.

As the investigation continues, Microsoft is reaching out to affected customers to assist them in taking mitigating measures. However, the company has not disclosed the names of the affected customers.

House Committee Advances Bill Targeting TikTok Amid Spying Concerns

The House Energy and Commerce Committee made significant strides on Thursday by advancing a bill aimed at potentially imposing a nationwide ban on TikTok across all electronic devices. This move reignites lawmakers’ scrutiny of one of the globe’s most popular social media platforms while emphasizing lingering apprehensions regarding TikTok’s alleged risk of Chinese government espionage.

The legislation, which garnered unanimous support from the committee, proposes prohibiting TikTok from US app stores unless the social media platform, boasting approximately 170 million American users, swiftly separates from its China-linked parent company, ByteDance.

Should the bill be enacted, ByteDance would have 165 days, or just over five months, to divest TikTok. Failure to comply by the specified deadline would render it illegal for app store operators like Apple and Google to offer TikTok for download. Furthermore, the bill contemplates similar restrictions for other applications “controlled by foreign adversary companies.”

Representative Cathy McMorris Rodgers, the chair of the panel and a Republican from Washington, asserted, “Today, we will take the first step in creating long-overdue laws to protect Americans from the threat posed by apps controlled by our adversaries, and to send a very strong message that the US will always stand up for our values and freedom.”

New Jersey Representative Frank Pallone, the ranking Democrat on the committee, likened the bill to past endeavors aimed at regulating US airwaves. He underscored insights from national security officials obtained during a closed-door hearing earlier that day, expressing serious consideration for the concerns raised by the intelligence community.

Introduced earlier in the week with bipartisan backing by Representative Mike Gallagher, a Republican from Wisconsin, and Representative Raja Krishnamoorthi, a Democrat from Illinois, the legislation has also garnered support from the White House and House Speaker Mike Johnson.

Following its clearance by the committee, the TikTok legislation is slated for a floor vote next week, as indicated by House Majority Leader Steve Scalise. However, its prospects in the Senate remain uncertain, with no companion bill and Senate Commerce Committee Chair Maria Cantwell of Washington offering no firm commitment to advancing the proposal.

TikTok, in response, is mounting opposition efforts, including mobilizing its user base. The company has issued full-screen pop-ups within the app, cautioning users about the bill’s potential implications on their constitutional right to free expression. The notification urges users to contact their congressional representatives to oppose the bill, leading to a surge in phone calls to House offices, according to multiple congressional staffers.

Despite criticisms labeling the bill as a TikTok ban, Representative Mike Gallagher rebuffed such characterizations, emphasizing that the bill places the onus on TikTok to sever ties with the Chinese Communist Party. He clarified that TikTok could continue to operate provided ByteDance no longer owns the company.

TikTok disputed lawmakers’ framing, asserting that despite claims the bill gives the company options, the legislation would amount to a complete ban of TikTok in the United States.

During Thursday’s session, Representative Dan Crenshaw of Texas dismissed suggestions that lawmakers lacked understanding of the technology they sought to regulate, underscoring his familiarity with social media platforms.

Beyond potentially barring TikTok from app stores, the bill could also limit TikTok traffic or content from being carried by various internet hosting services, extending its impact beyond TikTok, Apple, and Google.

The bill’s proponents cite long-standing concerns regarding China’s intelligence laws, which raise apprehensions about Beijing potentially accessing TikTok user data. While the US government has not publicly presented evidence of such access, cybersecurity experts highlight it as a hypothetical but significant concern.

Efforts to regulate TikTok have faced obstacles, including legal challenges and concerns regarding constitutional rights. The bill’s sponsors assert that it targets foreign adversary control rather than speech content. However, critics, including the American Civil Liberties Union and the Computer and Communications Industry Association, argue that the bill jeopardizes Americans’ free speech rights and infringes upon the rights of private businesses.

Anthropic Unveils Claude 3: A Multimodal AI Competing with GPT-4

Anthropic, a prominent AI startup backed by substantial venture capital investments, with potentially hundreds of millions more committed, has introduced its latest technological advancement, Claude 3. The company asserts that Claude 3 stands on par with OpenAI’s GPT-4 in terms of performance.

Claude 3, the newest iteration of Anthropic’s GenAI, comprises a suite of models including Claude 3 Haiku, Claude 3 Sonnet, and the most powerful, Claude 3 Opus. Anthropic claims these models exhibit enhanced capabilities in analysis and forecasting compared to rivals like GPT-4 (excluding GPT-4 Turbo) and Google’s Gemini 1.0 Ultra (excluding Gemini 1.5 Pro).

A notable feature of Claude 3 is its status as Anthropic’s inaugural multimodal GenAI, enabling analysis of both text and images akin to certain versions of GPT-4 and Gemini. Claude 3 can process various visual formats such as photos, charts, graphs, and technical diagrams sourced from PDFs, slideshows, and other document types.

Setting itself apart from competitors, Claude 3 can analyze multiple images simultaneously in a single request, facilitating comparison and contrast, according to Anthropic.

However, there are constraints on Claude 3’s image processing capabilities. Anthropic has restricted the models from identifying individuals, likely due to ethical and legal concerns. Additionally, Claude 3 may struggle with “low-quality” images (under 200 pixels) and tasks involving spatial reasoning or object counting.

Furthermore, Claude 3 is limited to image analysis and does not generate artwork at this time.

Anthropic assures customers that Claude 3 demonstrates improvements in following multi-step instructions, producing structured output, conversing in multiple languages, and providing answers with greater accuracy. Additionally, the models will soon include citation of answer sources for user verification.

According to Anthropic, Claude 3’s enhancements are attributed to an expanded context window, allowing the models to take in more information and produce more expressive, engaging responses.

While the models are not without their imperfections, Anthropic plans to continually update Claude 3 to address issues such as bias and hallucination. The company envisions Claude 3 eventually interacting with other systems and delivering advanced capabilities comparable to the ambitions OpenAI has reportedly described.

Anthropic offers Opus and Sonnet models presently, with Haiku slated for release later in the year. Pricing for Claude 3 varies based on model and usage.

In summary, Anthropic’s Claude 3 represents a significant advancement in AI technology, positioned to compete with industry leaders like OpenAI while paving the way for future developments in AI self-teaching and interaction.

Adobe Unveils Project Music GenAI Control: AI-Driven Music Creation and Editing Tool

In a groundbreaking move aimed at revolutionizing music creation for amateurs and professionals alike, Adobe announced its latest innovation, Project Music GenAI Control, during the Hot Pod Summit in Brooklyn. This generative AI experiment is designed to empower users to effortlessly create and customize music without requiring any prior professional audio experience.

Using text prompts, users can initiate the generation of music tailored to specific styles like “happy dance” or “sad jazz.” Adobe’s integrated editing controls then enable users to personalize the generated music by adjusting parameters such as tempo, intensity, and structure. Additionally, users have the flexibility to remix sections of music and generate repeating loops, ideal for applications like content creation or providing background tracks.

Moreover, Project Music GenAI Control boasts the capability to adjust generated audio based on a reference melody and extend audio clips to accommodate various needs, such as animations or podcast segments. While details about the user interface for editing generated audio remain undisclosed, Adobe assures users of a seamless editing experience.

Although similar tools like Google’s MusicLM and Meta’s AudioCraft are in development, they primarily focus on generating audio via text prompts without robust editing support. Unlike these tools, Project Music GenAI Control empowers users with comprehensive control over their music, akin to the pixel-level precision offered by Photoshop.

Collaborating with the University of California and Carnegie Mellon University’s School of Computer Science, Adobe emphasizes that Project Music GenAI is still in its early stages. While the integration of its features into Adobe’s existing editing tools like Audition and Premiere Pro is anticipated, a public release date for the tool remains undisclosed.

For those eager to stay updated on Project Music GenAI’s development and other experiments by Adobe, visit the Adobe Labs website.

Instagram’s Threads Surpasses X (formerly Twitter) in Alt-Twitter Wars

In the ongoing battle for dominance in the alternative Twitter landscape, Instagram’s Threads has emerged as the current frontrunner, surpassing X (formerly Twitter) in daily downloads globally. While app downloads may not perfectly reflect usage, they serve as indicators of market trends, and Threads is currently leading the charge.

Meta’s Twitter-like app has seen a significant surge in daily downloads, recording triple X’s figures on iOS globally and more than double on Google Play. This marks a notable change from previous months, when Threads and X were neck-and-neck in downloads, particularly on iOS.

The momentum for Threads began building towards the end of the previous year, with daily installs exceeding half a million across both Google Play and iOS. Despite a slight dip in January, Threads consistently outpaces X in daily downloads on both platforms, indicating a widening gap between the two rivals.

On February 25, 2024, Threads recorded 486,803 installs on Google Play and 342,228 on iOS, compared to X’s 225,408 and 112,625 downloads, respectively. Similarly, on February 22, Threads boasted 382,999 iOS installs versus X’s 113,649, showcasing its dominance in the market.

Meta’s CEO, Mark Zuckerberg, announced during the company’s fourth-quarter earnings call that Threads had reached an impressive milestone of 130 million monthly active users, with Instagram head Adam Mosseri highlighting its success in specific markets like Japan.

Despite X’s claim of 500 million monthly active users, concerns persist about the authenticity of those figures, particularly given reports of problems with verified bot accounts. This issue, coupled with X’s struggles after rebranding from Twitter, has weighed on its download numbers and revenue.

In contrast, decentralized alternatives like Mastodon and Bluesky have failed to gain significant traction compared to Threads and X. Mastodon’s official mobile app and Bluesky, though showing initial promise, have not posed a substantial challenge to the established players in the alt-Twitter space.

While Bluesky recently opened its doors to the public and introduced federation, allowing users to run their own servers, its download numbers remain modest compared to Threads and X. However, the future trajectory of Bluesky as a decentralized alternative remains uncertain, with potential for growth over time.

In summary, Instagram’s Threads has emerged as the leading contender in the alt-Twitter wars, signaling a shift in the microblogging landscape and Meta’s increasing influence over the digital news ecosystem.

Google Apologizes for AI Blunder Injecting Diversity with Disregard for Historical Context

Google issued an apology, or something close to it, following another embarrassing AI misstep this week. The blunder involved image generation in Gemini, which injected diversity into images without considering historical context, leading to laughable results.

Gemini, Google’s flagship conversational AI platform, utilizes the Imagen 2 model to generate images upon request. However, users recently discovered that requesting images depicting certain historical scenarios or figures resulted in nonsensical representations. For example, the Founding Fathers, known historically as white slave owners, were depicted as a multicultural group including people of color.

This oversight quickly became fodder for online commentators and was dragged into the ongoing discourse on diversity, equity, and inclusion within the tech sector. Critics accused Google of perpetuating a “woke mind virus” and labeled it an ideological echo chamber.

Google attributed the issue to a workaround implemented to address systemic bias in training data. When generating images, the model defaults to representations most common in its training data, often resulting in over-representation of white individuals due to biases in available images.

However, Google acknowledged the need for diversity in generated images to cater to its global user base. It emphasized the importance of providing a variety of representations, especially in scenarios where users do not specify certain characteristics.

The problem stemmed from a lack of implicit instructions in situations requiring consideration of historical context. While the model was designed to provide diverse outputs, it failed to differentiate between scenarios where diversity was appropriate and those where historical accuracy was paramount.

Google’s SVP, Prabhakar Raghavan, admitted the model’s overcautious behavior and its tendency to refuse certain prompts incorrectly, leading to embarrassing and inaccurate results.

While Google stopped short of a full apology, Raghavan’s framing raises questions about accountability: the errors were attributed to the model, but responsibility ultimately lies with the developers who built and trained it.

Mistakes are inevitable in AI models, but holding their developers accountable is crucial to ensuring transparency in AI development.

Google Pay to Shut Down in the United States in June, Consolidating with Google Wallet

Google has announced its decision to shut down Google Pay in the United States by June, citing the widespread adoption of Google Wallet as the primary payment app. This move aims to streamline Google’s payment services, reducing confusion among users.

Following the shutdown, the standalone Google Pay app will only be accessible in Singapore and India. The company rationalizes this decision as a step towards consolidating its payment apps, positioning Google Wallet as the singular platform for payment features.

Google notes that Google Wallet is used five times more than the Google Pay app in the United States, and says Google Pay’s key features will remain accessible directly from Google Wallet.

Effective June 4, users in the United States will lose the ability to send, request, or receive money through the Google Pay app. They are encouraged to transfer their Google Pay balance to their bank account via the app before the deadline. Any remaining funds can be managed through the Google Pay website.

For users accustomed to finding offers and deals through the Google Pay app, Google assures that these features will still be available through the new deals destination on Google Search.

Google Wallet remains the primary mobile payment solution in the United States, enabling various functionalities like in-store payments, boarding passes, transit access, loyalty card storage, digital ID storage, and car ignition via a digital key.

Google underscores the global reach of Google Pay, with millions of users across more than 180 countries utilizing the platform for online, mobile, and in-store transactions.

Match Group Partners with OpenAI to Boost Work Efficiency with AI

Match Group has announced a significant enterprise collaboration with OpenAI, the creator of the AI chatbot, in a recent press release drafted with assistance from ChatGPT. This venture encompasses more than 1,000 enterprise licenses for the renowned dating app conglomerate, which includes popular platforms like Tinder, Match, OKCupid, and Hinge. The integration of AI technology aims to support Match Group employees in their daily tasks and is part of Match’s substantial investment of over $20 million in AI for the year 2024.

Although press releases typically exude enthusiasm for company developments, the release authored with ChatGPT veers into exaggerated territory. It boasts about ChatGPT being the ultimate “wingman” for employees, describes the Chief Technology Officer’s overwhelming excitement, and even incorporates a cringe-worthy analogy about AI safety akin to a “prenup with technology.” The release further includes a quote purportedly from ChatGPT itself, expressing dubious excitement about the collaboration.

Beyond the theatrics of the press release, Match Group plans to leverage OpenAI’s technology, specifically ChatGPT, to enhance various aspects of its operations, including coding, design, analysis, template creation, and communication tasks. Access to OpenAI’s tools will be restricted to trained and licensed Match Group personnel to safeguard corporate data. Additionally, employees will undergo mandatory training on responsible AI use, its capabilities, and its limitations, aligned with the company’s existing privacy practices and AI principles.

While Match Group did not disclose the financial details of the agreement or its impact on the company’s finances, it anticipates that AI tools will significantly enhance team productivity. Executives highlighted Match Group’s commitment to AI during the fourth-quarter earnings call, emphasizing its role in evolving existing products and developing new ones. AI is expected to revolutionize various aspects of the dating app experience, including profile creation, matching algorithms, and post-match guidance, with a focus on enhancing user safety.

CEO Bernard Kim underscored the strategic importance of AI to Match Group’s future, emphasizing its potential to elevate user experiences and product quality. The company is also exploring the creation of standalone AI-powered apps, with plans to commence testing in 2024. A dedicated innovation team is spearheading AI integration across Match’s app portfolio, supported by the expertise of Match’s acquisition, Hyperconnect.

Despite inquiries about Match Group’s broader AI initiatives leveraging OpenAI technology, the company refrained from providing details. However, Match Group has committed significant resources, allocating $20 million to $30 million towards AI innovation in 2024.
