Can Untitled Yet be a grounded language that an agent learns to speak, and through which it learns to understand the perception in its environment?
Client | Untitled Yet 
Today, around seven in ten Americans use social media to connect, engage with news, share information, and entertain themselves. [1] They upload, share, like, or dislike objects from real life to disclose what those elements mean to them or to their environment, and they use applications as the tools for those tasks. But how do those tools cope with a paradox of their users: the brain processes in parallel, yet posts arrive sequentially in time? Do humans sufficiently understand the objective of the new tool? Is there a discrepancy, or a deepening dispute, between the computational world and its users? Or is everything just fine?
Untitled Yet is, therefore, a language that captures the sequential representations of human perception through user-composited videos, so that it can eventually mimic them.

Imagine you create your perception through words by linking them to audio and video, rather than through a mastered language used by a particular country or region. You use only the grammar of the spoken language to create logical arrays to follow. Your perception, on the other hand, has been created by these sequential representations, which are built in multiple layers. For example, you can complement a word with natural sounds, or you can use composite sounds or videos in an order that makes sense to you.
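As a minimal sketch of what such a layered, sequential representation could look like in code, the snippet below models an utterance as an ordered list of tokens, each carrying one or more word, sound, or video layers. Every class, field, and file path here is an illustrative assumption, not part of the project itself.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch: an Untitled Yet "utterance" as an ordered sequence of
# multimodal tokens, each word optionally layered with sound and video.

@dataclass
class Layer:
    kind: str    # "word", "sound", or "video"
    source: str  # the word itself, or a path/URI to a media clip

@dataclass
class Token:
    layers: List[Layer] = field(default_factory=list)

@dataclass
class Utterance:
    tokens: List[Token] = field(default_factory=list)

# A composition such as "rain" complemented with a natural sound and a clip:
utterance = Utterance(tokens=[
    Token(layers=[
        Layer("word", "rain"),
        Layer("sound", "clips/rain_on_window.wav"),
        Layer("video", "clips/street_in_rain.mp4"),
    ]),
    Token(layers=[Layer("word", "waiting")]),
])
```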

Untitled Yet is designed to leave breadcrumbs for computational models to mimic human perception. If users can deliberately stimulate pathways while encoding their perceptions, a computational model can simultaneously follow those paths as raw data and decode the logic pattern of the users' perceptions. Then maybe, soon, machines can understand humanity.
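To make the breadcrumb idea concrete, here is a toy sketch assuming each user composition can be flattened into an ordered trail of symbols; the first-order Markov model is a minimal stand-in, not the project's actual method.

```python
from collections import Counter, defaultdict

# Toy sketch: treat each user-composed video as a trail of symbols and count
# first-order transitions, so a model can estimate which representation
# tends to follow which in a user's encoding of their perception.

def fit_transitions(sequences):
    transitions = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            transitions[prev][nxt] += 1
    return transitions

# Each inner list stands for one composite video, flattened into the order
# its elements appear in time (all symbols here are invented examples).
trails = [
    ["word:rain", "sound:rain_on_window", "video:street_in_rain", "word:waiting"],
    ["word:rain", "sound:rain_on_window", "word:waiting"],
]

model = fit_transitions(trails)
# Most frequent continuation observed after "sound:rain_on_window":
print(model["sound:rain_on_window"].most_common(1))
```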