
Step one: find what it is

In this view, everything in the world is data arriving as input to the AGI brain, and the main question is always to find out who or what a thing is.

Whether it is implemented as a neural network model or as an ordinary program, we will call the processing unit an Operation.

Take the classic problem of deciding whether a picture shows a cat: there we focus on the cat picture as the input data. In AGI, however, the input data itself is not the main issue; every possible real-world impact on you is input data, and the task is to find out what it is.
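As a minimal sketch (the `Operation` alias and the `is_cat` stub below are my own illustrative names, not from any real library), an Operation is just a judgment over raw input data:

```python
from typing import Callable

# In this article's sense, an Operation is any unit that maps raw
# input data to a judgment. Whether it is a hand-written program or
# a trained neural network is an implementation detail.
Operation = Callable[[bytes], bool]

def is_cat(image_bytes: bytes) -> bool:
    """Hypothetical Operation: does this picture show a cat?

    A real system would run a classifier here; this stub only
    illustrates the input/output shape of an Operation.
    """
    raise NotImplementedError("stand-in for a trained model or program")
```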

My early philosophical thinking developed an information_pack concept. I think the starting point of the human brain's core reasoning is to find out what a thing is, that is, to define the more abstract concept of "one".

Talking is, in summary, a thin slice of complex brain activity.

Studying the English language, we find two basic syntactic patterns:

  - subject = subject_complement

  - subject =>(verb) object
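These two patterns can be written as simple data shapes; the class names below are my own illustrative assumptions, not part of the original text:

```python
from dataclasses import dataclass

# Pattern 1: subject = subject_complement  ("the apple is red")
@dataclass
class Predication:
    subject: str
    complement: str  # an attribute asserted of the subject

# Pattern 2: subject =>(verb) object  ("the cat eats the apple")
@dataclass
class Action:
    subject: str
    verb: str
    object: str
```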

Suppose every AGI model and all of its data were converted into a common data vector space, and assume that space is built on the English language. If we keep simplifying it further and further until we cannot simplify any more, we arrive at the subject, which then looks more like a conclusion than a main issue.

Here are some basic assumptions:

  1. Every neural cell is a unique basic operation.

  2. When the real world impacts the brain as input data, the brain must do something, that is, perform an operation: it finds something. The introspection process is split into tiny time slices, and within one tiny time slice the brain performs many different operations; these different operations are built up from the basic operations that step one talks about.

  3. An operation outputs just two values, True or False; we call this output an Attribute. For example, if the eye moves around a shape in a circle and the operation outputs True, we say the thing has the attribute circle. But note: up to this point we still have not figured out what the thing is. The thing is just a thing, not yet different from any other thing; it appears as a "thing" at all only because the brain's inner introspection process is split into tiny time slices.

  4. Finally, the information_pack concept emerges. An InformationPack, such as the Zhouyi hexagram 泰卦 (tai gua), is:

{Attribute op1: True, Attribute op2: True, Attribute op3: True, Attribute op4: False, Attribute op5: False, Attribute op6: False}

Or, for a simpler example (sketched in code below), the information pack apple is:

{this thing is red, this thing is round, ...}
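As a minimal sketch of this structure (all names below are illustrative assumptions, not from the original text), an information pack is just a mapping from attribute-operations to their True/False outputs, plus a machine-generated unique name:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class InformationPack:
    # Machine-generated unique name: the AGI brain can create and
    # distinguish packs without any human-supplied label (see the
    # notice below).
    uid: str = field(default_factory=lambda: uuid.uuid4().hex)
    # Attribute-operation name -> its True/False output (assumption 3).
    attributes: dict[str, bool] = field(default_factory=dict)

# The apple example from the text, written as a pack:
apple = InformationPack(attributes={
    "is_red": True,
    "is_round": True,
    # ... further operations would add further attributes
})

# The hexagram 泰卦 example: six operations, three True and three False.
tai_gua = InformationPack(attributes={f"op{i}": i <= 3 for i in range(1, 7)})
```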

IMPORTANT NOTICE: The information pack's name is unique and is handled by the AGI brain itself. That is to say, even if the AGI brain never talks with us, it can still develop many information pack concepts on its own.

Step two: talk with a human

Every AGI brain has its own information pack system, which we call an unlabeled information pack system. Is the second step to label it by human hand? No. The fact is that we all live in the same universe, the same real world, so eventually every AGI brain's information pack system converges toward some common state, and we base the labeling work on that common state. Some details are omitted here, but my assumption is that step two will become a very mature procedure in the future.

By the end of step two, the AGI brain can produce simple utterances such as "This is an apple", "This is a chair", and so on.

Of course, we do not automatically label the whole system; throughout its whole life, the AGI brain still needs to keep learning new labels.
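A rough sketch of step two's labeling under the convergence assumption, continuing the InformationPack sketch above (the function and the lexicon shape are my own assumptions):

```python
def label_pack(pack: InformationPack,
               lexicon: dict[frozenset[str], str]) -> str | None:
    """Attach a human word to an unlabeled pack.

    Because every brain converges toward a common attribute state,
    a shared lexicon can map characteristic attribute sets to words.
    Returns None when no label is known yet, so learning continues.
    """
    true_attrs = frozenset(k for k, v in pack.attributes.items() if v)
    return lexicon.get(true_attrs)

# A tiny shared lexicon built on the assumed common state:
lexicon = {frozenset({"is_red", "is_round"}): "apple"}
print(label_pack(apple, lexicon))  # -> apple
```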

Step three: keep learning

world data =>[program] => output

world data =>[model] => output
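Both arrows describe the same shape of function from world data to output; only where the mapping comes from differs. A hedged sketch with placeholder bodies:

```python
def program(world_data: bytes) -> bool:
    # Hand-written rule: the mapping is fixed by the programmer.
    return len(world_data) > 0  # placeholder rule

def model(world_data: bytes) -> bool:
    # Learned mapping: the behavior comes from training, not code.
    # A real system would run a trained network here.
    raise NotImplementedError("stand-in for a trained network")
```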

With deep learning technology we will still do some extra work to train our AGI brain, but here are the facts:

  1. We do not hand-process data and feed it into the AGI brain; we let the world data impact the AGI brain directly.

  2. Perhaps some AGI brains will not need to learn a human language (or perhaps only because of cost), but most AGI brains will learn at least one human language, together with some extra work to adjust their information pack system to that language, such as English. Future AGI brain training will then mostly be label work, and this label work will be done by talking with the brain directly, as in the sketch below.
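A minimal sketch of such dialogue-driven label work, continuing the earlier sketches (the function name and the flow are assumptions):

```python
def teach_by_talking(pack: InformationPack,
                     lexicon: dict[frozenset[str], str],
                     word: str) -> None:
    """One round of label work done purely through dialogue.

    A human points at a thing ("this is a chair"); the brain has
    already formed an unlabeled pack for it, so we only record the
    word against the pack's characteristic attributes.
    """
    true_attrs = frozenset(k for k, v in pack.attributes.items() if v)
    lexicon[true_attrs] = word

chair = InformationPack(attributes={"has_legs": True, "is_red": False})
teach_by_talking(chair, lexicon, "chair")
```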