
[What’s really ‘scary’ is that the construct below could be implemented in a genetic programming environment, and various versions of it could be ‘bred and grown’, producing ‘optimal’ individuals (based on preselected criteria, such as exhibiting truly conscious behavior). Those individuals could then be replicated on dedicated hardware, with real senses and arms, in a real local environment.]
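
(A minimal Python sketch of what that breeding loop might look like. The genome fields, fitness score, and operators are hypothetical placeholders of my own; a real run would score a full ACOMAS simulation against the preselected criteria, not a toy function.)

```python
import random

# Hypothetical genome: a few tunable ACOMAS parameters (names are illustrative).
def random_genome():
    return {
        "filter_threshold": random.random(),   # how aggressively filters cut sensation
        "play_vs_goal_bias": random.random(),  # motivation: play/experiment vs. goal list
        "register_scan_rate": random.randint(1, 100),
    }

def mutate(genome):
    child = dict(genome)
    key = random.choice(list(child))
    if isinstance(child[key], float):
        child[key] = min(1.0, max(0.0, child[key] + random.gauss(0, 0.1)))
    else:
        child[key] = max(1, child[key] + random.randint(-5, 5))
    return child

def fitness(genome):
    # Stand-in for the preselected criteria ("exhibiting truly conscious behavior").
    return 1.0 - abs(genome["play_vs_goal_bias"] - 0.5) - abs(genome["filter_threshold"] - 0.3)

population = [random_genome() for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                                    # selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

print("best individual:", max(population, key=fitness))
```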

[An object-oriented problem decomposition of consciousness]

A COnscious MAchine Simulator – ACOMAS


Salvatore Gerard Micheal, modified 15/JAN/09

Objects
2 cross verifying senses (simulated)
hearing (stereo)
seeing (stereo)
motivation subsystem (specs to be determined)
information filters (4 controlled hierarchically)
supervisory symbol register (same as below)
short-term/primary symbol register (8^3 symbols arranged in 8x8x8 array)
rule-base (self verifying and extending)
rule-base symbol register (same as above)
3D visualization register (10^12 bits arranged in 10000x10000x10000 array)
models of reality (at least 2)
morality (unmodifiable and uncircumventable)
don’t steal, kill, lie, or harm
goal list (modifiable, prioritized)
output devices
robotic arms (simulated – 2)
voice-synthesizer and speaker
video display unit
local environment (simulated)
operator (simulated, controlled by operator)
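
(To make the decomposition concrete: a hypothetical Python skeleton of the object list above. The class names, attributes, and sparse storage choices are my own guesses at the spec, and every body is a stub.)

```python
class Sense:                                  # stereo hearing / stereo seeing (simulated)
    def __init__(self, name):
        self.name = name
    def sample(self):
        return None                           # stub: would return a raw simulated sensation

class SymbolRegister:                         # supervisory, primary, and rule-base registers
    def __init__(self, shape=(8, 8, 8)):      # primary: 8^3 symbols in an 8x8x8 array
        self.shape = shape
        self.symbols = {}                     # sparse contents, keyed by (x, y, z)

class VisualizationRegister:
    def __init__(self, side=10_000):          # 10^12 bits as a 10000^3 cube
        self.side = side
        self.on_bits = set()                  # stored sparsely in this sketch

class RuleBase:
    def __init__(self):
        self.rules = []                       # self-verifying/extending logic not sketched

class Morality:                               # unmodifiable and uncircumventable
    FORBIDDEN = ("steal", "kill", "lie", "harm")
    def permits(self, action):
        return action not in self.FORBIDDEN

class ACOMAS:
    def __init__(self):
        self.senses = [Sense("hearing"), Sense("seeing")]
        self.supervisor = SymbolRegister()
        self.primary = SymbolRegister()
        self.rule_register = SymbolRegister()
        self.rule_base = RuleBase()
        self.visualization = VisualizationRegister()
        self.models = [{}, {}]                # at least 2 models of reality
        self.morality = Morality()
        self.goals = []                       # modifiable, prioritized goal list
```

(The motivation subsystem, information filters, and output devices would hang off the same class; I have left them out to keep the stub short.)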

Purpose
test feasibility of construct for an actual conscious machine
discover timing requirements for working prototype
discover specifications of objects
discover implications/consequences of enhanced intelligence
(humans have 7 short-term symbol registers)
discover implications/consequences of emotionless consciousness

Specifications – The registers are the core of the device – the (qualified) ‘controllers’ of
the system (acting on the goal list) and the reasoners of the system (identifying rules), all
constrained by morality. The goal list should be instantaneously modifiable. For instance,
an operator can request “show me your goal list” .. “delete that item” or “move that item
to the top” .. “learn the rules of chess” and the device should comply immediately.
Otherwise, the device plays with its environment – learning new rules and proposing new
experiments.
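
(A sketch of such an instantaneously modifiable goal list, assuming a simple priority-ordered list. The method names mirror the operator commands above but are otherwise my own invention.)

```python
class GoalList:
    """Prioritized, instantaneously modifiable goal list (index 0 = top priority)."""
    def __init__(self):
        self._goals = []

    def show(self):                           # "show me your goal list"
        for i, goal in enumerate(self._goals, 1):
            print(f"{i}. {goal}")

    def add(self, goal, top=False):           # "learn the rules of chess"
        if top:
            self._goals.insert(0, goal)
        else:
            self._goals.append(goal)

    def delete(self, index):                  # "delete that item" (1-based)
        del self._goals[index - 1]

    def move_to_top(self, index):             # "move that item to the top"
        self._goals.insert(0, self._goals.pop(index - 1))

goals = GoalList()
goals.add("play with environment: learn new rules, propose experiments")
goals.add("learn the rules of chess", top=True)   # operator directive, effective at once
goals.show()
```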

The purpose of the cross-verifying senses is to reinforce the ‘sense of identity’ established
by the senses, registers, and models of reality. The reason for ‘at least 2’ models is to
provide a ‘means-ends’ basis for problem solving – one model to represent the local
environment ‘as is’ and another to represent the desired outcome of the top goal. The
purpose of arranging the short-term register in a 3D array is to give the capacity for
‘novel thought’ processes (humans have a tendency to think in linear, sequential terms).
The reason for designing a self-verifying and self-extending rule-base is that rule-base
maintenance tends to be a data- and processing-intensive activity – if we designed the
primary task of the device to be a ‘rule-base analyzer’, the device would undoubtedly
spend the bulk of its time on related tasks (thereby creating a rule-base analyzer and not a
conscious machine). The ‘models of reality’ could be as simple as a list of objects and
locations, or they could be a virtual reality implemented on a dedicated machine. The
same applies to the ‘local environment’. For operator convenience, the simulated local
environment should take the form of a virtual reality, so the operator would interact with
the device in a virtual world (in the simulated version). In this version the senses, robot
arms, and operator presence would all be virtual. This should be made clear to the device
so that any transition to ‘real reality’ would not have destructive consequences.
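
(One way to read the ‘at least 2 models’ requirement in code: each model maps objects to locations, and the differences between the ‘as is’ model and the desired model become candidate sub-goals. The representation below is a deliberate simplification of mine.)

```python
# Two models of reality as object -> location mappings:
# 'as is' (the local environment now) and the desired outcome of the top goal.
as_is   = {"block_a": (0, 0), "block_b": (3, 1), "cup": (5, 5)}
desired = {"block_a": (0, 0), "block_b": (0, 1), "cup": (5, 5)}

def means_ends_differences(current, goal):
    """Objects whose current location differs from the goal model."""
    return {obj: (current.get(obj), goal[obj])
            for obj in goal
            if current.get(obj) != goal[obj]}

# Each difference is a candidate sub-goal for the primary register to act on.
for obj, (src, dst) in means_ends_differences(as_is, desired).items():
    print(f"sub-goal: move {obj} from {src} to {dst}")
```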

Modifications: the specs for the visualization register at this time were ‘either-or’ –
either a restricted VR (as described in the specs-01 document) or a bit-array. Since
I’m not a VR programmer, it was simpler for me to specify my “best guess” at the
requirements for a bit-array. A dedicated rule-base symbol register was added
because that subsystem will likely be “register hogging” and won’t allow the device
to freely “pay attention” to anything else but rule-base development. The
supervisory symbol register was likewise added, to “free up” the primary symbol register
for “attentive tasks”. Its purpose is exactly what the name says: to take directives
directly from the motivation subsystem and “tell” the rest of the system what to do.
For instance, the rule-base register may be currently scanning the rule-base for
consistency (since there were no immediate tasks assigned). The primary symbol
register is telling the robotic arms to push a set of toy blocks over (since it is in
play/experiment mode – to see what happens). The supervisory symbol register just
received a directive from motivation: try to make Sam laugh with a joke. A possible
scenario is described in specs-01. (The directive would have to entail “researching”
what a joke is – in the rule-base, what qualifies as “laughing”, and any other
definitions required to satisfy the directive. If those researches were not satisfied, the
directive would have to be discarded or questions asked of the operator: “Sam,
what’s a joke?”) After outlining the ‘conscious part’ in a schematic ‘block
diagram’, I realized information filters would be required, implemented in a
hierarchical fashion (this basic design was approached in ’95). Senses feed:
motivation, supervisor, primary register, and visualization register. But through
filters: motivation controls its own and the supervisor filter; supervisor controls the
filters feeding the registers. Directives/info flows directly from: motivation to
supervisor and supervisor to registers. Signals before and after filters would be
analogous to sensations and perceptions: the hierarchical ‘filter control structure’
decides what sensations are perceived by lower registers – thereby controlling
sensation impact and actual register content. Whether or not humans actually think
like this, I believe the structure is rich enough to at least mimic human
consciousness. The crux, ‘the Achilles Heel’, becomes motivation. The motivation
subsystem must be flexible and focusable. It cannot be overly flexible (completely
random) or overly rigid (focusing only on the goal list). Its control of the filters
(including its own) must be adaptable and expedient. Its purpose is to guide the
system away from inane repetition, ‘blind alleys’ (unnavigable logical sequences and
unprovable physics), and catastrophic failure; simultaneously, guide the system
toward robust and reliable solutions to problems, expedient play/experimentation,
and engaging conversation with the operator. If this sounds impossible, try to raise a
baby without any experience! You learn fast! ;)
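
(The filter control structure above, as a hypothetical Python sketch: four filters, with motivation holding its own and the supervisor’s, and the supervisor holding the two register filters. The salience scores and thresholds are invented for illustration.)

```python
class Filter:
    """Pass a sensation through only if its salience clears the threshold."""
    def __init__(self, threshold=0.5):
        self.threshold = threshold          # adjusted by whoever controls this filter
    def perceive(self, sensation, salience):
        return sensation if salience >= self.threshold else None

# Four hierarchically controlled filters.
motivation_filter = Filter(0.2)   # motivation controls its own...
supervisor_filter = Filter(0.4)   # ...and the supervisor's
primary_filter    = Filter(0.6)   # supervisor controls the filters
visual_filter     = Filter(0.6)   # feeding the registers

def feed_senses(sensation, salience):
    # Senses feed motivation, supervisor, primary register, and visualization
    # register -- but only through the filters (sensation vs. perception).
    return {
        "motivation":    motivation_filter.perceive(sensation, salience),
        "supervisor":    supervisor_filter.perceive(sensation, salience),
        "primary":       primary_filter.perceive(sensation, salience),
        "visualization": visual_filter.perceive(sensation, salience),
    }

# A faint sound reaches motivation but never the primary register:
print(feed_senses("faint tapping on camera", salience=0.3))
```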

My ultimate purpose in creating a conscious machine is not ego or self-aggrandizement. I
simply want to see if it can be done – and, if it can, to create something creative with
potential. My mother argues a robot can never ‘procreate’ because it is not ‘flesh and
blood’; it can never have insight or other elusive human qualities. I argue that robots can
‘procreate’ in their own way and are limited only by their
creators. If we can ‘distill’ the essence of consciousness in a construct (like above), if we
can implement it on a set of computer hardware and software, if we give that construct
the capacity for growth, if that construct has even a minimal creative ability (such as with
GA/GP), and critically limit its behavior by morality (such as above), we have created a
sentient being (not just an artificial/synthetic consciousness). I focus on morality because,
if such devices became widespread, they would undoubtedly be abused to perform
‘unsavory’ tasks, which would have fatal legal consequences for inventor and producer
alike.

In this context, I propose we establish ‘robot rights’ before they are developed in order to
provide a framework for dealing with abuses and ‘violations of law’. Now, all this may
seem like science fiction to most. But I contend we have focused far too long on what we
call ‘AI’ and expert systems. For too long we have blocked real progress in machine
intelligence by one of two things: mystifying ‘the human animal’ (by basically saying it
can’t be done) – or – staring at an inappropriate paradigm. It’s good to understand
linguistics and vision – without that understanding, perhaps we could not implement
certain portions of the construct above. But unless we focus on the mechanisms of
consciousness, we will never model it, simulate it, or create it artificially.

ACOMAS Simulation Run 246 Log:


Observing the display of visualization register: noticed many geometric shapes “flying
about” in seeming random orbits – forming and deforming seemingly at random (could
not detect any obvious pattern in formation, kind of object, nor orbit). Noted the register
log and it seemed to be working on goal 7 – trying to determine sub-goals that might
fulfill the primary. No speech output. Robotic arms stationary. Stereoscopic cameras
stationary. At 7:47PM EST, all output blanked. The visualization register display blanked,
the register log stopped scrolling. No observable activity. Checked the power feeds and
UPS – all seemed working fine. Looked in the cameras. Tapped on them lightly saying
“Are you okay?” Nothing. No response. A few moments later – speech output: (a
somewhat husky voice) “I’m fine Dave. How are you?” On the visualization register
display was an image of 2001’s Hal’s “red eye” camera. I chuckled and replied “That’s
not funny Hal.”

I’ve gotten some flak from an astrosciences-forum reader about ACOMAS. We
exchanged a few personal insults and “ended” the “discussion” by disagreeing to
disagree. (Not even agreeing how we were to disagree.) I asked them to post their
“decomposition of consciousness” but they refused. They were content to promulgate the
mystical perspective insisting that “I knew nothing about consciousness” and that “I was
doomed to chaotic failure”. I firmly and adamantly disagree. I propose that the exchange
above is not only possible but probable – and not programmed, but spontaneous.

At the moment, I have only two viable candidates for visualization register: restricted VR
or “pixel cube”. Restricted in such a way that a complete “server version” of VR is not
required. (We don’t need to implement a complete server VR such as Perfect World with
its obviously imperfect laws of physics; we can satisfy the design requirements with a
limited subset of VR capability and domain.) A limited space and a limited set of objects,
with one exception: the capacity to make additional objects from the given set. The pixel
cube could be implemented by a 3D array of bits: on or off. The array would have to be
enormous. I suggest something like one million cubed. This would allow the
representation of objects in 3D. For instance, visualizing a coffee cup or chair would be
“child’s play” because those objects should be in ACOMA’s local environment. (Note the
drop of “S” for the non-simulated version.) Again, the content of the registers, at any one
moment, is something that still must be addressed. I have hopes that it is both approachable
and feasible.
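
(Storing 10^12 dense bits is impractical in a simulation, so one would likely store only the ‘on’ bits. A minimal sparse sketch of the pixel cube, with a crude voxel box standing in for the coffee cup; the class and its methods are my own guesses at the spec.)

```python
class PixelCube:
    """Sparse stand-in for the 10000^3 visualization register: store only 'on' bits."""
    def __init__(self, side=10_000):
        self.side = side
        self.on = set()                              # set of (x, y, z) coordinates

    def set_bit(self, x, y, z):
        if all(0 <= v < self.side for v in (x, y, z)):
            self.on.add((x, y, z))

    def fill_box(self, lo, hi):                      # crude voxelized object
        for x in range(lo[0], hi[0]):
            for y in range(lo[1], hi[1]):
                for z in range(lo[2], hi[2]):
                    self.set_bit(x, y, z)

cube = PixelCube()
cube.fill_box((100, 100, 100), (110, 110, 115))      # a 10x10x15 'coffee cup' placeholder
print(len(cube.on), "bits on")                       # 1500
```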

Sam Micheal, 15/JAN/2009

ACOMA – A COnscious MAchine


Can it be done?
Can it be designed by me?
Sam Micheal

It’s ‘official’; I’m ‘nuts’. I have been officially told by a university professor of computer
science: “This problem is too big for you Sam.” Really? Is that so? Are you 1000% sure?

As a person ‘in love’ (understatement) with systems science, physics, and AI, I have
taken so many courses from engineering disciplines – I have lost count – where and
when.. I DO remember a computer vision course I took. I DO remember some basic
precepts. I DO remember how we know almost nothing about scene recognition (this was
about 15 years ago so perhaps we know a little more now). But if you actually READ my
proposal, it says NOTHING about dependency on scene recognition. In fact, it depends
not one IOTA on anything ‘in development’.
This is the ‘beauty’ of our current system. “Instead of pursuing this avenue of
investigation, which I doubt you have any real experience in..” [italics added] he
continues to suggest I restrict myself to more ‘tame’ and approachable areas in computer
science. I thanked him for his traditional concern. But his ‘concern’ was itself dismissive.
His department is focused on computer science education. Why should they care about
conscious machines? “They would have done it by now if they could.” (He voiced almost
the same sentiment in the same letter.) Wow; what a ‘revelation’. And all this said
without actually reading my proposal.

Perspective; perspective; perspective. Read Modern Heuristics by Michalewicz. If you
can understand that, you’re smart. If you can apply it, you’re smarter. Now, I’m not
saying I’m that smart. ;) But I am saying I have some insights about the problem. Key
word: insights. What’s another key word? Intuition. Now, let me review a recent
conversation with my mother about consciousness..

“The reason AI people have not developed conscious machines is because they have
focused on intelligence NOT consciousness. And they have made the critical conceptual
error in thinking that consciousness is dependent on immature technologies like computer
vision. It is NOT. I contend consciousness is physical; we can understand it physically.
However, much more elusive are concepts like intuition and inspiration. I contend we
will develop conscious machines way before we will develop machines with intuition and
inspiration.”

My design is more than just ‘physical’; it is information-dependent. There is a thing in
my design called a rule-base. Is this the same thing as a database? Is it constructed with
data mining? Maybe. Maybe not. I try to define some general specifications. I believe I
have a construct that is ‘rich’ enough (diverse and sophisticated enough) to at least mimic
consciousness. And I try to provide much more than consciousness. I design structures
that will assist intelligence and self-awareness. Hopefully, these will enhance
consciousness. The idea is this: I think it is difficult to create consciousness from scratch
– but not impossible. If we can create a device that is minimally aware and also give it
some capabilities: intelligence, self-awareness (via model), and some capacity for
visualization (which to me is Very important), we may achieve what most have said is
impossible – machine consciousness. My construct is perhaps too dependent on
visualization. My original specification exceeded current technology (one million bits
cubed). Because that is impossible by current standards, I had to cut it down by a factor
of one million. Can the thing still be self-aware with limited visualization capability? I
don’t know. But it’s worth trying.
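
(For scale: one million bits cubed is (10^6)^3 = 10^18 bits, roughly 125 petabytes. Cut by a factor of one million, that leaves 10^12 bits, about 125 gigabytes – exactly the 10000x10000x10000 cube in the object list above.)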

It’s certainly worth more than “This problem is too big for you Sam.”

Sam Micheal, 17/JAN/2009
