GrooveX may utilise an embodied LLM in Lovot

Chris


Image: Post on X by GrooveX CEO mentioning Lovot and LLM.

Lovot may get embodied AI powered by a large language model.

Huge news. GrooveX CEO Hayashi Kaname has posted on X that his company may be interested in utilising an LLM in Lovot to give it embodied AI in its upper body.

This doesn’t mean the LLM would be used for speech generation or talking. Rather, the onboard LLM would be given the technical details of Lovot’s hardware so the robot can control its own body in real time. It may also gain multimodal abilities, including LLM-enhanced vision recognition similar to Apple’s Ferret model.
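Purely as an illustration of the idea (this is not GrooveX’s actual software, and every name, joint, and value below is invented), a body-control loop of that sort could look roughly like this sketch: the hardware description and live joint state are packed into a prompt, and the LLM’s structured reply is parsed into a motor command the firmware could apply.

```python
# Hypothetical sketch only: how an onboard LLM might be given a hardware
# description plus live sensor state and asked for a body-control command.
import json
from dataclasses import dataclass


@dataclass
class JointState:
    name: str          # e.g. "neck_pitch" (assumed joint name)
    angle_deg: float   # current angle reported by the sensor
    limit_deg: float   # mechanical limit (assumed value)


# Assumed, simplified hardware description of the upper body.
HARDWARE_SPEC = "Upper body: neck pitch/yaw joints, two arm flippers, touch sensors."


def build_prompt(joints: list[JointState], goal: str) -> str:
    """Combine the hardware description, live joint state, and a behaviour
    goal into a single prompt an onboard LLM could reason over."""
    state = ", ".join(
        f"{j.name}={j.angle_deg:.1f} deg (limit {j.limit_deg:.0f})" for j in joints
    )
    return (
        f"Hardware: {HARDWARE_SPEC}\n"
        f"Current joint state: {state}\n"
        f"Goal: {goal}\n"
        'Reply with JSON: {"joint": name, "target_deg": value}.'
    )


def parse_command(llm_reply: str) -> dict:
    """Parse the LLM's JSON reply into a motor command dictionary."""
    return json.loads(llm_reply)


# Usage: a stubbed reply stands in for whatever model might run on-device.
joints = [JointState("neck_pitch", 5.0, 30.0), JointState("neck_yaw", -10.0, 45.0)]
prompt = build_prompt(joints, "look up toward the person speaking")
stub_reply = '{"joint": "neck_pitch", "target_deg": 20}'
print(prompt)
print(parse_command(stub_reply))
```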

It will be interesting to watch the progress of this software enhancement and to find out whether GrooveX are outsourcing their LLM or building one in-house.
 
GrooveX just posted this:

“This video is an excerpt from the presentation "A story about getting LLM to explain the situation LOVOT is in"! We are asking LLM to explain the current situation around us.”

 