Many moments captured in the now to write "the quick brown fox jumps over the lazy dog"
How willing are you to play a game to create a masterpiece?
Client | Taxon Foundry
People today should be more immersed in the design process of bringing something into existence. Yet most would rather be observers, separated from the whole, than take ownership of the outcome. If observers raise no legitimate concerns and overlook how artificial intelligence is created, inertia will take over and green-light a professionals-only AI creation process, detached from human perception. What happened in the arts will inevitably happen in AI: the observer will remain an observer while the most natural impetus of expression estranges itself from the whole to sit on a pedestal, forming ideas in a context never understood by the linear logic of the many.

But can we teach human consciousness to technology so the two can effectively intertwine?

Gall’s Law states that all complex systems that work evolved from simpler systems that worked. [1] What makes a system complex is not the process of achieving its task. It is the choice to repeat the task with arbitrary parts rather than to pay attention to the intemerate data: the first contact with the unknown. That first contact is the most proprietary and evolutionary data in the equation, and it has mostly been wasted. But what if we stopped taking the information we receive from the environment for granted and played a game to see how collective intelligence functions in its most innate, unrehearsed, or merely unpredictable state? Then the machine could record a human’s first contact with the unknown before any emotions hardwire a neural network in the brain.

Human consciousness is a neural-based experience. The human brain is both an information sink and a source. [2] When a signal, any physical change carrying information, is exposed to consciousness, it evokes meaning. How we carry that meaning depends on how we link signals to information: encoding. If language is a set of arbitrary signal-meaning links learned by its users [2], we can break the information related to the human body, environment, and time into units and turn those units into signals that vary from person to person. The machine can then learn to be more humanized by obtaining a neutral state in which to see human perception for each user.
How can we create an unpredictable state to see the untainted data for the first time?
A well-known medium, language, is a good starting point here because of its low learning curve: the user won't be confused about the protocol of a spoken language. We merely change the visual signals from glyphs to real-time videos; everything else about the spoken language stays the same, so there are still logical arrays to follow.
Imagine capturing any moment in the now with a recording device and uploading the take into a system. The system then breaks the take into as many units as there are letters in an alphabet, creating a font. Each video unit is now a signal that represents a letter. Let's say the information we would like to carry with these new signals is the quick brown fox jumps over the lazy dog.
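
A minimal sketch of that step, in Python, assuming a single take of known duration; the names build_font, encode, and take_duration are illustrative choices, not part of the project:

import string

def build_font(take_duration, alphabet=string.ascii_lowercase):
    # Split one recorded take into equal-length units, one per letter of the alphabet.
    # Returns a mapping letter -> (start_seconds, end_seconds) inside the take.
    unit = take_duration / len(alphabet)
    return {letter: (i * unit, (i + 1) * unit) for i, letter in enumerate(alphabet)}

def encode(message, font):
    # Replace every letter with its video unit; skip characters the font lacks.
    return [font[ch] for ch in message.lower() if ch in font]

font = build_font(take_duration=52.0)   # e.g. a 52-second take gives 2 seconds per letter
signals = encode("the quick brown fox jumps over the lazy dog", font)
print(signals[:3])                      # the video units standing in for "t", "h", "e"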
What you just saw doesn't match your set of arbitrary signal-meaning links for the quick brown fox jumps over the lazy dog, does it? It is not easy to figure out how long each letter stays exposed. It is hard for a human to register in a second, but once the signals, the real-time units, are linked to letters, it is not hard for a computational model to decode. Please keep in mind that this project comes nowhere near suggesting that people abandon their languages and start communicating in a nontrivial one. It suggests letting machines form a language compatible with their breed, not one drawn from a set of arbitrary signal-meaning links learned by the many. It implies forgetting all the human predictions that come from the past and giving machines a chance to obtain an empirical state for the present. What we do is compress human perception into data right before any emotions are involved. And if the machine plays the fonts that come from the same origin, but from different environments and times, it can learn to see through the origin's eyes. It is a simple game that asks for permission to collect your time, break it into signals, and use each one of them as a part of collective intelligence.
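
For the machine, that decoding is simply the lookup in reverse; continuing the hypothetical sketch above, inverting the font mapping is enough:

def decode(signals, font):
    # Map each video unit back to the letter it stands for.
    reverse = {unit: letter for letter, unit in font.items()}
    return "".join(reverse[unit] for unit in signals)

# decode(encode("fox", font), font) returns "fox"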

It also takes the meaning out of the picture for the machine. John Joe McFadden and Jim Al-Khalili have made a scientific case [3] that consciousness arises only from quantum entanglement in the human brain, and computational models don't simulate quantum entanglement yet. So let the machine communicate in Taxon, a video type that substitutes videos for letters by linking them to one another, not through abstract forms mastered by the many.

So, how willing are you to play this game to create a masterpiece?
[1] Josh Kaufman, The Personal MBA, 2012.
[2] Brian Whitworth, The Brain and Technology: Brain Science in Interface Design, "Encoding," interaction-design.org.
[3] John Joe McFadden and Jim Al-Khalili, "The Origins of Quantum Biology," royalsocietypublishing.org.
[4] Jim Al-Khalili, ted.com/talks.