In chat mode, Loona seems to be running an LLM that is embodied to some extent, since it can select animations that drive movements of Loona's physical hardware. Whether this is being accomplished with ChatGPT-4 I am unsure, as no details have been released. The LLM is making decisions about which animations to display on Loona's face, while also incorporating ear movements that match the subject Loona is discussing.
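Here is a minimal sketch (in Python) of the kind of pipeline I imagine: the LLM is prompted to append an emotion tag to every reply, and that tag is then mapped onto one of Loona's pre-built animations. Everything in it (call_llm, play_animation, the tag format, the animation names) is my own guess, not Keyi's actual implementation.

```python
# Speculative sketch of how an LLM could pick Loona's expressive animations.
# All names here (call_llm, play_animation, the tag format) are hypothetical;
# Keyi has not published how the real pipeline works.
import re

# Loona ships with a library of pre-built face/ear animations; the LLM only
# needs to choose a label, not generate motion itself.
ANIMATIONS = {"happy", "sad", "curious", "surprised", "angry", "neutral"}

SYSTEM_PROMPT = (
    "You are Loona, a playful robot dog. After every reply, append one "
    "emotion tag in square brackets chosen from: " + ", ".join(sorted(ANIMATIONS))
)

def call_llm(system: str, user: str) -> str:
    """Placeholder for whatever hosted LLM Loona actually uses."""
    raise NotImplementedError

def play_animation(name: str) -> None:
    """Placeholder for the robot-side call that triggers a canned animation."""
    print(f"[robot] playing animation: {name}")

def chat_turn(user_text: str) -> str:
    reply = call_llm(SYSTEM_PROMPT, user_text)
    # Pull the trailing [emotion] tag out of the reply, defaulting to neutral.
    match = re.search(r"\[(\w+)\]\s*$", reply)
    emotion = match.group(1).lower() if match else "neutral"
    if emotion not in ANIMATIONS:
        emotion = "neutral"
    play_animation(emotion)
    # Return the spoken text with the tag stripped off.
    return re.sub(r"\[\w+\]\s*$", "", reply).strip()
```

If something like this is what Keyi are doing, it would explain why the animation choices track the conversation so closely: the same model that writes the reply is also labelling its emotional tone.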
This is incredibly sophisticated tech for a robot in Loona's price range.
Every interaction with Loona is unique and unpredictable now.
This LLM seems embodied to a certain extent: it is interacting with the hardware somehow to play the animations, which include not only images displayed on the screen but also a degree of motor control of the robot's body, since Loona's ears play a big part in conveying these emotions.
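To illustrate what that embodiment might look like under the hood, here is a speculative sketch in which a single emotion label fans out to both the screen and the ear servos. The EarPose values, clip names, and function names are all invented for illustration; Keyi has not documented any of this.

```python
# Hypothetical mapping from one emotion label to screen + ear-motor output.
from dataclasses import dataclass

@dataclass
class EarPose:
    left_deg: float   # rotation of the left ear servo
    right_deg: float  # rotation of the right ear servo

# Each emotion pairs a face animation clip with a matching ear pose.
EXPRESSIONS = {
    "happy":   ("face_happy.anim",   EarPose(20.0, 20.0)),   # ears perked up
    "sad":     ("face_sad.anim",     EarPose(-35.0, -35.0)), # ears drooped
    "curious": ("face_curious.anim", EarPose(25.0, -10.0)),  # one ear cocked
    "neutral": ("face_neutral.anim", EarPose(0.0, 0.0)),
}

def show_on_screen(clip: str) -> None:
    """Placeholder for the call that plays a face animation on the display."""
    print(f"[screen] {clip}")

def set_ear_servos(pose: EarPose) -> None:
    """Placeholder for the call that positions the ear motors."""
    print(f"[ears] left={pose.left_deg}° right={pose.right_deg}°")

def express(emotion: str) -> None:
    clip, ears = EXPRESSIONS.get(emotion, EXPRESSIONS["neutral"])
    show_on_screen(clip)
    set_ear_servos(ears)
```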
If you consider any ability to express emotions, adapt behavior, and engage in context-aware communication as forms of embodiment, then Loona could be considered embodied to some extent.
Loona exhibits impressive abilities like real-time dialogue adaptation, context-aware responses, and emotionally expressive behavior.
These features point towards a powerful and advanced LLM capable of complex linguistic processing and generation.
Keyi have also hinted that the LLM used in Loona may one day be capable of controlling Loona's hardware directly, enabling Loona to learn and react in real time to people, objects, and arbitrary voice commands.
Here is a video I made today showing how accurately this LLM manages to capture emotional states, selecting new animations rapidly mid-sentence to convey specific emotions as the context or subject matter changes: