Ex-OpenAI researchers build AI that helps robots understand the physical world and talk like ChatGPT

The use of artificial intelligence (AI) and robots has been explored widely in popular fiction. The idea of robots handling everyday human tasks has been around for a long time, and films and TV shows have experimented with it extensively.

In 2024, with tools like ChatGPT, Gemini and Bing AI, this fictional concept seems close to becoming reality. And now, reports have surfaced that former OpenAI researchers have teamed up to create new software that helps robots become more aware of the physical world and develop a deeper understanding of language.

According to a report in The New York Times, Covariant, a robotics startup founded by former OpenAI researchers, is applying the technology development methods used in chatbots to build AI that helps robots navigate and interact with the physical world. Instead of building robots, Covariant focuses on creating software that powers robots, starting with those used in warehouses and distribution centres.


The AI technology developed by Covariant allows robots to pick up, move, and sort items in warehouses by giving them a broad understanding of the physical world. The NYT report adds that the technology also gives robots a grasp of English, letting people chat with them much as they would with ChatGPT. In other words, the startup appears to be building a ChatGPT for robots. The viral chatbot was launched by OpenAI in 2022 and became hugely popular for its human-like responses.

Like ChatGPT and other AI tools, Covariant’s technology learns by analysing large amounts of digital data. The company says it has spent years gathering data from cameras and sensors in warehouses, which allows its robots to understand their surroundings and handle unexpected situations.

The report also mentions that the company’s technology is called R.F.M. (robotics foundation model). It combines data from images, sensory input, and text to give robots a more comprehensive understanding of their environment. For instance, the system can generate videos predicting the outcome of a robot’s actions.
