
Inspired by John Tukey's vision of the evolution from Statistics to Data Science, expressed in his article on Statistics and Data Science, which presented how the statistician's approach would need to evolve into a new scenario. I am also inspired by the vision of John Chambers and Bill Cleveland, who created a framework for Tukey's vision that guides our data capability nowadays. Here I intend to contribute a framework designed to create a general artificial intelligence and to present a theory of evolution, analogous to Statistics becoming Data Science, from Computational Science to Artificial Intelligence, comparing the current method used by Computer Science to create intelligent systems with the method necessary to build Artificial Intelligence.

My characterization of Artificial Intelligence envisions AI as the use of computational tools, such as Neural Networks, Machine Learning and Deep Learning training algorithms, to analyze real-life events, with the evaluation metrics, values and functions attributed by the participants of the event. It introduces a much larger design phase to understand and identify the participants' experience, evaluation and preferences while defining the metrics, values and functions used to analyze real-life events. Only after this process do we develop the computational schema, tools, algorithms, structure and framework. Artificial intelligence built this way is a framework for understanding the world we live in, by creating models that emulate real life and analyzing them from our perspective of the real world.

AI is about how humans create memories, classify and store them, and how we optimize the paths between those memories to make sense of the world while acting as conscious living beings. Rather than just understanding how our brain cells connect, how our human sensors measure the world, and how we transport that data across our body to the brain, we need to understand the process of labeling, storing, evaluating, connecting and activating, only when necessary, all of that information while receiving new information for at least 18 hours per day. It sounds exhausting just to read it.

As children we have more neurons than as adults, so how do we get smarter over time with fewer neurons every day? The answer lies in how we "create" new neurons every day and how we optimize the process of flowing through different neurons until we finally reach the right one, or the right combination of neurons or information.

It is still necessary to understand how we feel the moment and store the information we are experiencing, what we do to record that information, how our brain breaks down that amount of data and stores it in a single place, and how it arranges the right connections that make that single place accessible to all other neurons through just a few connections in strategic places.

I will use this paper to recommend a process to design a General Artificial Intelligence, from project and design perspectives to coding, training and testing. My goal is to improve the current method used by many data scientists and AI developers from computational science backgrounds.

I will define a process to design an AI capable of: using sensor data to understand the world; transforming data into relevant information; classifying information according to its field and level; storing information in a strategic spot in memory according to its field and level of detail; and archiving and combining different pieces of information to help the user answer questions, deciding where to store and how to access information that might be relevant to enhance the user's capability of analysis.
The AI's goal is to help the user learn about subjects and answer questions about them.

The process will be based on:

Design Thinking: To select which inputs and outputs will be used by the algorithms, and to divide them into levels and fields, we will hold workshops to gain a better understanding of:

● How humans make choices;

● How humans classify their preferences;

● How humans describe their learning method;

● How humans describe their experiences;

● What humans define as brain work and how they level it;

● What humans define as knowledge and how they level it;

● How humans separate or classify their knowledge;

Model Thinking: Using models to understand the data and transform it into information. The models will be based on, and used as, the following:

● Regular Thinking: Representing common sense, used as a bias to create a user-centered intelligence focused on the user's perspective.

● Business Thinking: Representing the financial sense, used as the structure for algorithms related to finance and business.

● Scientific Thinking: Representing the natural laws and health, used as the structure for algorithms related to health and user mechanics.

● Engineering Thinking: Representing the problem/solution sense, used to optimize paths and relate information.

The models will also be used to test the outputs of the algorithms, using different models of analysis to select outputs that (a) have better probabilities of suiting the problem, (b) are more relevant to the field and level of the task, and (c) have better general performance across all kinds of tests.
System Thinking: To create the way the AI stores and accesses data and manages the acquired inputs, using linear algebra and complex systems to create a structure to:

● Transform data into information;

● Label the information into field and level;

● Store information in a strategic spot;

● Transform the user's task into level and field;

● Find and manage information relevant to the user's task;

● Answer questions made by the user using the stored information;

● Improve the system according to user feedback and performance;
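The steps above can be sketched end to end as a single flow. This is a minimal sketch with placeholder stubs for every component; the function names and the trivial labeling logic are assumptions for illustration, not the actual design:

```python
# Minimal stubs standing in for the real components described above.
memory = {}

def transform(raw):
    return raw.strip().lower()              # data -> information (placeholder)

def label(info):
    return ("general", 1)                   # -> (field, level) (placeholder)

def store(field, level, info):
    memory.setdefault((field, level), []).append(info)

def find(field, level):
    return memory.get((field, level), [])

def answer(task, facts):
    return facts[-1] if facts else "unknown"

def handle(raw_data, task):
    """End-to-end flow: transform, label, store, then answer from storage."""
    info = transform(raw_data)              # transform data into information
    field, level = label(info)              # label into field and level
    store(field, level, info)               # store in a strategic spot
    t_field, t_level = label(task)          # translate the task into field and level
    return answer(task, find(t_field, t_level))

reply = handle("Dogs are mammals.", "What should my dog eat?")
```

Each stub would be replaced by the corresponding component from the list above.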

Sources of data:
Location Data: Geographic coordinates;
Talking With: Optical sensor (face recognition);
Date: Time and hours at the geographic coordinates;
Subject of Talk: Audio sensor (natural language processing);
Statement: Audio sensor (voice recognition);
Health Info: Health sensor (e.g. Apple Watch);
Motion Reading: Optical sensor (body language and surrounding space).
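The sources listed above could be gathered into one record per observed event. A minimal sketch in Python; the class name, field names and sample values are illustrative, not part of any defined schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EventRecord:
    """One observation of the user's context, built from the sensors above."""
    location: tuple               # geographic coordinates (lat, lon)
    talking_with: Optional[str]   # face-recognition result, if anyone is present
    timestamp: str                # date and time at those coordinates
    subject: Optional[str]        # topic extracted from audio by NLP
    statement: Optional[str]      # transcribed user statement (voice recognition)
    health: Optional[dict]        # health-sensor readings, e.g. heart rate
    motion: Optional[str]         # body-language / surrounding-space reading

record = EventRecord(
    location=(-23.55, -46.63),
    talking_with="colleague",
    timestamp="2024-05-01T14:30",
    subject="tennis",
    statement="Roger Federer has a strong serve.",
    health={"heart_rate": 72},
    motion="seated",
)
```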

Below I explain how each data source is used and analyzed:

Location: Shows where the user is, giving relevant information about the level and/or field of the data research we need to do.

Talking With: Shows whom the user is talking to, giving relevant information about the level and/or field of the data research we need to do.

Date: Shows at which point of the user's routine he is when performing a task or storing data, giving relevant information about the level and/or field of the data research we need to do.

Subject of Conversation: Shows previous conversations like this one: what requests were made at the time, what information was used, what information was stored, and the paths previously used in this subject, or registers of tasks equal to requests made before.

Statement: Shows what the user wants and defines the task of the AI; statements are translated into queries to get the requested information.

Health Info: Shows information about the health condition of the user when he requests the task, giving relevant information about how the AI should respond: within what kind of statement, and prioritizing what kind of thinking, field or level.

For the development of the system, the user must decide what kind of data he will feed his Artificial Intelligence with. Based on the selected data, the system has proper training algorithms and classifiers, which are used to analyze and store the data according to the given inputs. To store this information a complex system is necessary, one that saves the data separated by level and field using a proper algorithm.
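A minimal sketch of such a store, keyed by field and level; the class name and interface are assumptions for illustration:

```python
class CellStore:
    """Saves statements separated by field and level, as described above."""
    def __init__(self):
        self._cells = {}   # (field, level) -> list of statements

    def store(self, field, level, statement):
        self._cells.setdefault((field, level), []).append(statement)

    def fetch(self, field, level):
        return self._cells.get((field, level), [])

store = CellStore()
store.store("Academic", 1, "Dogs are animals.")
store.store("Academic", 2, "Dogs are mammals and omnivorous.")
```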

To read the stored data, the system uses the subject of conversation and the statement to find the proper field and level, using the same logic that was used to store the data. After finding the proper data, the system then uses decision algorithms framed in categorical thinking selected by the user, such as Regular Thinking, Business Thinking, Scientific Thinking, System Thinking, Psychological Thinking, Engineering Thinking, Behavioral Thinking, Religious Thinking, etc.

Algorithm to define Level and Subject:

The level of the statement is directly related to the level of detail the phrase has. For example, the phrase "Roger Federer has a strong left-handed serve due to his physiology" carries a lot of detail: first, his name is mentioned with first and last name; second, it mentions his good performance serving with his left hand; and finally it attributes this characteristic to his physiology. All the words are mapped into different "input boxes" that attribute values, using positive and negative prime numbers to classify them, each with its relative field and level. At this point it is important to make clear that the inputs determine the field where the information will be stored; as the user defines which inputs to use, he indirectly defines the fields that will be covered by his AI, making his AI unique and closer to what the user believes. The weights are first used to concatenate words based on the frequency with which they are used together; these combinations receive different values in each input's dictionary. This helps characterize the field of the statement by evaluating words frequently used together as patterns that identify fields. The phrase is also passed through a weight evaluation based on a Recursive Neural Network to parse the phrase properly. The context weight is used to highlight the environment or field within which the statement was made.
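A minimal sketch of the word-to-input-box scoring described above, using the Federer phrase; the value dictionaries and the particular prime values are invented for illustration:

```python
# Hypothetical value dictionaries: each Input Category assigns a signed
# prime value to the words it recognizes.
value_dicts = {
    "person":  {"roger": 3, "federer": 2},
    "sport":   {"serve": 5, "left-handed": 7},
    "science": {"physiology": 11},
}

def score_statement(words):
    """Sum each Input Category's word values over the statement."""
    return {cat: sum(vals.get(w, 0) for w in words)
            for cat, vals in value_dicts.items()}

words = "roger federer has a strong left-handed serve due to his physiology".split()
scores = score_statement(words)
# Categories with non-zero scores suggest the statement's field; the
# overall magnitude hints at its level of detail.
```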

The same logic is used to store information that comes as verbal statements: the word value is fixed and can be either positive or negative. Words can appear in different Input Categories at the same time, and the same is true for tuples of words, but with different values in each Input Category; this helps us when evaluating fields.

The statement is stored at a level and in a field based on the words present in it, which guide the level and field classification; the information is then stored according to that level and field.

We provide the statement, level, field and information to a neural network trained to store this data structure in a strategic spot, using its level and field, so it can be used in association with other related data to complete tasks given by the user, providing the information held by this data-cell.

This means that where the data is stored matters for both the storage and the access process, making it a key process in the whole system. To make this step accurate, users must at first define the relative field and level of each statement, to check whether the system is getting it right.

The algorithm for storing and archiving data-cells is created by the user in terms of his own perspective and preferences; these algorithms are achieved by analyzing user choices and preferences from a high-level point of view and then creating training data with levels and fields.

After a few manual checks, the analysis model should be accurate in the user's field and level classification, making it possible to start building its Input Categories and Value Dictionaries. These will be used by the algorithm that stores data-cells in strategic spots, which learns by itself how to improve and how to access relevant information to perform the user's task.

The system must then be trained with the user's classification training data, to become capable of storing and accessing data-cells by itself. New information must be classified and stored one piece at a time, improving the store/access algorithm's accuracy at classifying information on the user's terms. The access algorithm is expected to improve even further by allowing the user to evaluate the artificial intelligence's performance: the older the artificial intelligence is, the better and more accurate its access algorithm becomes.

The storage algorithm, on the other hand, gets more accurate through software updates, as a greater number of training models and pattern-recognition algorithms become available to better understand the user's perspectives, preferences, expectations and needs.

The access algorithm works with the following logic. First, it identifies the context in which the task was requested, defining field weights by determining the "b" values of the context factor evaluated for each Input Category. Then it evaluates the field where the most relevant data-cells might be. Fields are separated in terms of vectors: the statement-phrase vector shows which fields may contain relevant information for the given task, by providing the level, or size, of the vector and its Input Category values. The fields present in this vector are then analyzed in terms of their data-cells, by comparing the task vector with the data-cell vectors to see how they relate. The aim is to find data-cell vectors related to the task vector in terms of direction, planes and other geometric features; the data-cells that match are then selected and their information evaluated. The level classification helps to (a) select fields and (b) indicate how much detail is needed, by increasing the number of data-cells that are relatively close to the task vector and allowing a level comparison between them.
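The geometric comparison between the task vector and the data-cell vectors can be sketched with cosine similarity; the vectors, threshold and cell names below are invented for illustration:

```python
import math

def cosine(u, v):
    """Directional similarity between two Input-Category vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def select_cells(task_vec, cells, threshold=0.8):
    """Keep the data-cells whose vectors point roughly in the task's direction."""
    return [cid for cid, vec in cells.items()
            if cosine(task_vec, vec) >= threshold]

cells = {
    "cell_a": (2.0, 5.0, 0.0),    # weight mostly in the task's categories
    "cell_b": (0.0, 0.0, 11.0),   # weight in an unrelated category
}
selected = select_cells((1.0, 3.0, 0.0), cells)
```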

As shown before, each level has a different range of data-cells, varying with the modulus of the statement. The fields are partitioned into different quadrants of the space: field A being (positive, all others None), B (positive, positive, all others None), C (negative, all others None), and so on, with A, B, C being fields composed of Input Categories. Example:

User defines:
Field – Professional – Input Categories: (work, person, function, financial); all other Input Categories are set to None.
Field – Academic – Input Categories: (book, author, theory, science, formula, theorem); all other Input Categories are set to None.
Field – Religious – Input Categories: (person, function, beliefs, theory); all other Input Categories are set to None.
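The field definitions in this example can be sketched as sets of active Input Categories, with every other category treated as None; the helper below is an illustration, not the actual storage scheme:

```python
# Each field names the Input Categories it uses; all others are None.
fields = {
    "Professional": {"work", "person", "function", "financial"},
    "Academic":     {"book", "author", "theory", "science", "formula", "theorem"},
    "Religious":    {"person", "function", "beliefs", "theory"},
}

def candidate_fields(active_categories):
    """Fields whose Input Categories overlap the statement's active ones."""
    return sorted(name for name, cats in fields.items()
                  if cats & active_categories)

# "person" is shared by two fields, yet the fields stay distinguishable
# because each field assigns its own values to the same words.
```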

Fields can have the same Input Categories, but within a given field the values of words and signals, or the combination of signals, or the combination of Input Categories, is different. It is usual to say that the fields are disjoint, even when they have matching Input Categories.

As may be clearer now, the fields are regions in a multi-dimensional array, or a tensor of rank n, n being the number of Input Categories used to define and classify real-life events. The levels are mapped into 5 ranges of values.

Notice that during the data-cell/task-vector evaluation process, the real value of the level is used to compare a data-cell's level of information with a given task's requirement. The level classification of 1, 2 and 3 only matters when deciding how many data-cells the task vector will analyze: a Level 1 task vector only accesses Level 1 data-cells, a Level 2 task vector accesses Level 1 and Level 2 data-cells, a Level 3 task vector accesses Levels 1, 2 and 3, and so on, all of them according to their field/context.
Example:
1 – I need to eat to live.
2 – Eating provides us energy to perform tasks.
3 – Carbs are broken into sugar that is used as fuel in our cells.
4 – Our cells break carbs through process X in the cell structure named Y and convert sugar into energy through process Z in the cell structure W.
5 – Explain processes X, Y, Z and W.
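The cumulative access rule — a level-N task reads cells of levels 1 through N — can be sketched as follows, reusing the eating example; the function names are illustrative:

```python
statements = {
    1: "I need to eat to live.",
    2: "Eating provides us energy to perform tasks.",
    3: "Carbs are broken into sugar that is used as fuel in our cells.",
}

def accessible_levels(task_level):
    """A level-N task vector may read data-cells of levels 1 through N."""
    return list(range(1, task_level + 1))

def gather(task_level):
    """Collect every statement the task is allowed to access, in level order."""
    return [statements[l] for l in accessible_levels(task_level) if l in statements]
```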

As we increase the level, we need more connections between pieces of information; to develop higher levels we need more basic information, and the relations we create between low-level pieces of information allow our A.I. to learn more. It is like raising a person: first we teach the basics and later we teach more complex things. But the information must be stored in the right places. A third-level piece of information can use second-level information from other fields to learn more about the third-level content.
Progressive Neural Network with Deep Learning
All neurons have information stored. Besides processing the input, the neurons also add information to the input and pass it to the next neuron. Our neurons do not just process information; they also hold information about the subject. For example, if I ask what my dog should eat: first it processes the question, then it starts passing this statement through my neurons; each neuron then adds information and gives new inputs to the next neuron, such as "my dog is a mammal", then "mammals are omnivorous", so the next neuron takes the input, adds something else, and so on. My point is that information should be stored and processed in the same place, and every neuron, besides processing the information, should also add some information.
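The dog example can be sketched as a chain of neurons, each of which both forwards the statement and contributes its own stored fact; the facts and structure are illustrative:

```python
def neuron(fact):
    """A neuron that holds one piece of information besides processing input."""
    def fire(statement, gathered):
        gathered.append(fact)         # the neuron adds its stored information
        return statement, gathered    # and passes everything to the next neuron
    return fire

chain = [
    neuron("my dog is a mammal"),
    neuron("mammals are omnivorous"),
]

statement, gathered = "What should my dog eat?", []
for fire in chain:
    statement, gathered = fire(statement, gathered)
# 'gathered' now holds the facts accumulated along the path.
```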

Storing Information:
Dogs are mammals and omnivorous. Transform the information into a vector.

The level of information is how specific the information is, so it should be stored far from the center; it is measured by the size/value of the vector. When performing a "thinking process" the machine should start by gathering information from the lower levels, such as "dogs are animals", then dig deeper, going through the neurons, gathering information and saving it to evaluate what is needed to answer the task.

“What My dog Should eat?”


Neuron Structure:

Input Data Mixer

To answer a question, the neuron should communicate with other neurons simultaneously in different fields. Information is stored in neurons following the same rules it follows to answer statements: the same path to respond, the same path to learn.

Machine Learning:

Phase 1 (Initial Stage):

Phase 2 (Dynamic Learning):

Phase 3 (Manual Overhaul):

Phase 4 (Ramification):

Phase 5 (Manual Overhaul):

Phase 6 (Dynamic Behavioral):

Find information about subgroup IV:

● Creating routes;
● Process of information analysis;
● Available Data;
● Relations;

Phase 7 (Restructuring):

Neural Network: