Meta hired Yang Song from OpenAI to lead generative and multimodal AI research at Meta Superintelligence Labs. The move signals accelerating productization of generative AI features, intensifying competition for senior AI talent, and a shift toward multimodal AI capabilities across consumer platforms.
On September 25, 2025, Wired reported that Meta had hired Yang Song, a leading AI researcher who previously led the strategic explorations team at OpenAI, as a research principal at Meta Superintelligence Labs. Song is known for foundational work on score-based diffusion models and multimodal generative systems. The hire is a clear signal that Meta is investing in generative and multimodal AI to move research into products at platform scale.
Competition for senior AI talent is now a strategic lever for the largest technology companies. Groups at OpenAI, Google Brain and other top labs pushed foundation models and multimodal systems from demos to capabilities that deliver high-quality images, audio and text together. Meta has reorganized around ambitious AI goals and created Meta Superintelligence Labs to pursue advanced, potentially AGI-scale research. Recruiting researchers with deep experience in generative modeling helps Meta shorten research and development timelines and influence which platforms lead in new consumer- and developer-facing features.
Yang Song brings an academic and industry pedigree, with influential publications on score-based diffusion models and multimodal generative systems. His work underpins advances in image and audio synthesis and in research that combines text, image and audio generation. His move to Meta Superintelligence Labs underscores how firms pursue talent to accelerate generative AI solutions and multimodal AI capabilities.
Diffusion models: Generative techniques that start from random noise and iteratively refine it into a coherent image, audio clip or other output. These models produce high-quality, diverse samples and are widely used in modern image and audio synthesis (see the illustrative sketch after these definitions).
Multimodal generative systems: Models that accept and produce different types of data, for example turning text prompts into images or generating descriptive audio from images. In plain language, multimodal AI lets systems work across several data types at once, enabling richer experiences in messaging, creative tools and augmented reality.
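To make the first definition concrete, here is a minimal, illustrative Python sketch of score-based sampling: it starts from pure noise and repeatedly nudges samples toward a target distribution using the gradient of the log density (the "score"), the quantity at the heart of Song's score-based diffusion work. The one-dimensional Gaussian target, step size and iteration count are toy assumptions chosen for clarity, not anything from Meta or OpenAI; production systems learn the score with a neural network over images or audio.

```python
# Toy, illustrative sketch: Langevin-style sampling with an analytically known
# score, showing the "start from noise, refine step by step" idea behind
# score-based diffusion models. The 1-D Gaussian target, step size and
# iteration count are assumptions chosen for clarity.
import numpy as np

rng = np.random.default_rng(0)

MU, SIGMA = 3.0, 0.5  # toy target distribution N(3.0, 0.5^2)

def score(x):
    # Gradient of the log density of the Gaussian target.
    return (MU - x) / SIGMA**2

x = rng.standard_normal(10_000)  # start from pure noise

step = 1e-3
for _ in range(5_000):
    # Small gradient step toward higher probability, plus fresh noise.
    x = x + 0.5 * step * score(x) + np.sqrt(step) * rng.standard_normal(x.shape)

print(f"mean ~ {x.mean():.2f} (target 3.0), std ~ {x.std():.2f} (target 0.5)")
```

In a real diffusion model the analytic score is replaced by a learned network and the noise level is annealed over the course of sampling, but the refine-from-noise loop is the same basic mechanism.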
What does this hire mean in practical terms?
Who is hiring AI experts at Meta? Meta Superintelligence Labs and other Meta AI units are actively recruiting senior researchers and engineers focused on generative AI and multimodal AI.
How will this affect integrations and partnerships? Talent flows shape which platforms release new capabilities first. Expect earlier availability of advanced generative AI solutions via platform APIs, SDKs and creator tools, which may shift where partners choose to integrate.
This hire aligns with a broader trend: leaders with deep model expertise are being recruited to convert research into platform-scale products. For businesses that rely on third-party platforms, competition for AI talent will shape where the most accessible and compelling generative AI solutions appear.
Yang Song's move to Meta Superintelligence Labs is more than a personnel change. It signals that Meta is doubling down on generative and multimodal AI research and shortening the path from breakthroughs to consumer products. As talent moves between leading labs, innovation will likely accelerate, and the need for careful governance, transparent product design and clear integration strategies will grow. Businesses and policymakers should watch which capabilities appear on which platforms first and prepare for a landscape where richer multimodal AI features are embedded across everyday apps.