Will unknown remain unknown? Are we making a computer or a user?

Published April 20, 2017 by anirbanbandyo

Often we get confused: are we making a real computer, or are we making a user? After the computer is constructed, there is no user; the computer runs by itself. But then, who is the original driver? The metric of primes, created by us exclusively for the computer, runs the show inside its core hardware. What is a metric? And why would a metric have the power to do such remarkable things?

When we work on building a metric, as in the good old days of astrophysics, we become as devoted to it as others are to the Turing machine. In astrophysics, theoreticians kept a space-time metric at hand; while doing complex mathematics, students would consult the metric from time to time and retrieve all the essential data needed to solve planetary problems. Similarly, for artificial intelligence we have introduced a new metric of primes. The idea is to hack nature and build a computer that can generate most of the patterns we see in nature, so that the unknown becomes known. How to build an effective prime-metric architecture will remain a matter of investigation for a long time, but it cannot be ruled out that using a prime metric as the prime decision-maker is a new concept altogether.

The existing information theory is based on the idea of the known. Now we have introduced a new information theory, FIT (Fractal Information Theory), in which we provide tools to bridge two known domains through an unknown path. This is an important departure from the information theory that has existed for the last century.

What is the trick by which I come to know the unknown? We can do it if we build a universal metric that holds all possible solutions, just like the space-time metric, which has been used for nearly a century with little modification to discover physical phenomena that were never known before. If we are not surprised that a space-time metric formulated in the 1920s can keep yielding new discoveries over a century, we should not be surprised by a similar metric for AI. Of course, this is not yet an accepted culture in AI, but we feel that people will get accustomed to it through the simple DIY (Do It Yourself) kits we are building now.

Imagine you have two parts of a piece of music, and you have a kit that joins the two parts with a new passage in the middle, and that new music makes sense to your mind. Similar things would hold for handling large data: the kit would generate unseen patterns in big data. The reason we want a DIY kit is that everyone in the world could get free access to the information revolution we envision. This is not about making money but about transforming how we live in a world of unpredictability, where economics is worse than astrology and viruses are transforming themselves in patterns for which we do not even have any data.

The beauty of our computing is that we get the total picture at once. Then, as time passes and more information arrives, reliability grows from 50% to 66% to 72% to 76% to… the journey moves on toward 99% reliability, beyond which it is not possible to go. Absolute reliability is a trademark of the existing computers; for us, "zooming the unknown" as a function of time and more detailed input is the key.

Our product computer would be a toy that changes one's perspective on this world. Beyond playing games: if we are in an unknown territory about which we have absolutely no information, our product computer, or user, can provide a good overview instantly, with a 66% success rate. Uncharted territories grow every day with the data explosion. If we humans do not have a technology to estimate what lies in uncharted territory, we cannot do anything, and accidents would wreak massive havoc on human society.

  1. Imagine a virus silently evolving into a dangerous species. A prime-metric hardware would perpetually track the virus's evolution, perfecting its prediction as it monitors, and could thus estimate the threat well in advance.
  2. From the microwave background data of the universe, it could partially estimate the structure of the universe.
  3. From the massive data flow of the internet, it could find patterns of threats, such as cyber attacks.
  4. It could monitor and predict climate change in cases where future predictions are currently impossible due to complexity.
  5. It could monitor an individual's health over years and learn about health crises well in advance. Health problems exclusive to a person could be identified and cared for.
  6. Economics would become a scientific subject of study, as the computer would build predictive models and perfect them over time.
  7. Social science and psychology would become scientific subjects, as verifiable predictive models would exist that could be accepted or rejected by logic.
  8. General science would get a tool to study the absolute properties of a system, not a fitted model; thus even scientific studies would gain a better cross-check of their conclusions.
  9. The evolution of life could be tracked scientifically, not just into the past but also predicted into the future.
  10. Life-like machines of the future would come into being, with their own operational lifetimes; after a certain time they would die, just like living systems.
