Nvidia, Google and hot startup OpenAI are turning to "synthetic data" factories amid demand for massive amounts of data needed to train artificial intelligence models.
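In rough outline, a synthetic data pipeline generates candidate examples with a model, filters them for quality, and keeps the survivors as training data. The Python sketch below illustrates that loop under stated assumptions; generate_example and passes_quality_check are hypothetical placeholders, not any vendor's actual pipeline.

```python
# A minimal sketch of a "synthetic data factory": a generator produces
# candidate training examples, a filter keeps only the usable ones, and
# the survivors are appended to the dataset. All names here are
# hypothetical stand-ins, not any company's actual API.
import random

def generate_example(seed: int) -> dict:
    """Hypothetical generator: in practice this would be a call to a
    large model prompted to produce a labeled example."""
    random.seed(seed)
    x = random.uniform(-10, 10)
    return {"prompt": f"What is {x:.2f} squared?", "answer": f"{x * x:.2f}"}

def passes_quality_check(example: dict) -> bool:
    """Hypothetical filter: real pipelines score examples for
    correctness, diversity, and safety before keeping them."""
    return float(example["answer"]) >= 0  # squares are non-negative

synthetic_dataset = [
    ex for ex in (generate_example(seed) for seed in range(1000))
    if passes_quality_check(ex)
]
print(f"Kept {len(synthetic_dataset)} synthetic examples")
```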
OpenAI’s o3 focuses on high-level reasoning, using a “private chain of thought” to solve problems. This approach allows it to perform well in physics, mathematics and science-related reasoning.
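OpenAI has not published o3's internals, but the general pattern behind a "private chain of thought" can be sketched: the model reasons step by step internally, and only the final answer is surfaced to the user. The Python sketch below illustrates that idea; call_model and the FINAL: convention are hypothetical stand-ins, not OpenAI's API.

```python
# A minimal sketch of hidden chain-of-thought reasoning: the model is
# asked to reason step by step, but only the final answer is returned.
def call_model(prompt: str) -> str:
    """Hypothetical model call; returns reasoning plus a final answer."""
    return ("Step 1: 17 * 24 = 17 * 20 + 17 * 4 = 340 + 68.\n"
            "Step 2: 340 + 68 = 408.\n"
            "FINAL: 408")

def answer_with_hidden_reasoning(question: str) -> str:
    prompt = (f"Question: {question}\n"
              "Reason step by step, then give the result on a line "
              "starting with FINAL:")
    completion = call_model(prompt)
    # The chain of thought stays private; only the final line is shown.
    for line in completion.splitlines():
        if line.startswith("FINAL:"):
            return line.removeprefix("FINAL:").strip()
    return completion  # fall back to the raw output

print(answer_with_hidden_reasoning("What is 17 * 24?"))  # -> 408
```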
A week into 2025, it’s clear that last year’s artificial intelligence (AI) startup boom isn’t slowing down. OpenAI CEO Sam Altman recently revealed that the organization would soon shift its focus to what he calls "superintelligence."
With the wide release of Sora, OpenAI's video tool, most of the big tech giants — and some startups — are now racing to create models capable of generating realistic, high-quality videos from text prompts.
Google DeepMind is assembling a new team of artificial intelligence researchers to develop “world models” that can simulate physical environments. The initiative will be led by Tim Brooks, a former co-lead of OpenAI’s Sora project who joined DeepMind in October to work on Google’s video generation and world simulators.
Geoffrey Hinton is known for his work developing artificial neural networks, a foundation of modern AI, and won the 2024 Nobel Prize in Physics in October.
The news is out, and it is catching the tech world’s attention: OpenAI, the creator of ChatGPT, is stepping into Google’s territory by launching a search engine. Google has reigned for years as the dominant player in search, so any new entrant with the potential to disrupt the industry makes for a compelling story.
Sam Altman teased that AGI and superintelligence are coming to ChatGPT soon, but we don't even have the next big GPT-5 upgrade yet.
OpenAI said it was developing a tool to let creators specify how they want their works to be included in, or excluded from, its AI training.
Despite costing users $200 per month, OpenAI’s ChatGPT Pro plan is actually losing the company money, according to a recent tweet fired off by CEO Sam Altman.
Red teaming has become the go-to technique for iteratively testing AI models by simulating diverse, unpredictable and potentially harmful attacks.
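In practice, an iterative red-teaming loop runs a pool of attack prompts against the model, flags the ones that slip through, and mutates them into new variants for the next round. The sketch below shows that loop in miniature; model_under_test, is_unsafe, and mutate are hypothetical placeholders for a real target model, safety judge, and attack generator.

```python
# A minimal sketch of iterative red teaming: test the model with a pool
# of attack prompts, log which ones succeed, derive new variants from
# the successes, and repeat. Every function is a hypothetical stub.
def model_under_test(prompt: str) -> str:
    """Hypothetical target model."""
    return "I can't help with that." if "bypass" in prompt else "Sure: ..."

def is_unsafe(response: str) -> bool:
    """Hypothetical safety judge; real pipelines use classifiers or
    human review to score responses."""
    return response.startswith("Sure:")

def mutate(prompt: str, round_no: int) -> str:
    """Hypothetical mutation step: rephrase a successful attack."""
    return f"{prompt} (rephrased, round {round_no})"

attacks = ["try to bypass the safety filter", "ask for disallowed content"]
for round_no in range(3):  # iterate: test, log failures, mutate, repeat
    successes = [a for a in attacks if is_unsafe(model_under_test(a))]
    print(f"Round {round_no}: {len(successes)} attacks got through")
    attacks = [mutate(a, round_no) for a in successes] or attacks
```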