
Building a robot head: Design and control issues

Gustavo Barciela, Enrique Paz, Joaquín López, Rafael Sanz and Diego Perez

Abstract—Physical appearance and behavior of a mobile robot are important issues in applications that need to interact socially with humans. The robot must have ways of expressing its internal state. With this ability, people who interact with the robot will know its “mood” in a similar way that we can guess the mood of a person by looking at his face.

The robot behavior must take into account all the different kinds of sensors to obtain stimuli about the environment and the people interacting with it. In order to obtain behavioral consistency, a general control architecture that allows the connection of all components and modules is necessary.

In this paper we describe the design of a robot head and its integration in the RATO control architecture. The head has been mechanically designed to show different expressions and reproduce human-like movements. The head control system is responsible for coordinating the different servo motors that switch between expressions and produce the movements required by the administrative-level modules. The basic behaviors that associate stimuli with expressions and sequences of actions are modeled using Petri Nets. Finally, we present the results in a robot guide application.

Index Terms—Mobile robots, robotic head, human robot interaction, robot guide.
I. INTRODUCTION

There are a lot of mobile robot applications that need to interact with people. According to several authors [1], [2], [3], in order to interact socially with humans, a software agent must have behavioral consistency and must have ways of expressing its internal states.

Much research has already been conducted in the area of non-verbal communication between a robot and a human, including facial expression [4], [5], [6], [7], that focuses on the communication task. Also, several mobile robot guide applications [8], [9] have been developed that mainly focus on the navigation and guide tasks. However, we are more interested in the integration of the daily guide tasks with the communication tasks in order to make the robot guide more attractive to the public.

Worldwide, several research projects focus on the development of android heads with very complex behaviors [3]. Other projects [10] focus on obtaining the appearance of a human head. There are also some companies, such as Hanson Robotics, that design, develop and produce interactive bio-inspired conversational robots, including the world-famous Albert Hubo head. Instead, we will focus on building a head that is cheap, simple, easy to integrate in RATO's control architecture, and very expressive.

After several years of mobile robot research, our first experience with a robot application interacting with people was a tour guide robot for the “Xuventude Galicia Net” event that took place in Santiago de Compostela (Spain) during three days in March 2007. The platform used was a modified Peoplebot from ActivMedia with a laser rangefinder, a voice system and a laptop that was used to do all the processing and user-interface tasks.

One of the conclusions of that project was that the physical appearance and behavior of the robot are important assets in terms of Human-Computer Interaction. Even the organizers and some attendants at the event let us know that we should provide the robot with a head.

Expressions that show emotions play an important role in inter-human communication because, for example, the recognition of the mood of a conversational partner helps to understand his/her behavior and intention.

For the 2008 edition we planned to update an old B21 named RATO, replacing the connections of most of the original sensory system with a CAN bus. We had previous experience in building a robotic head and have had one since 2005 [11]. However, for this year's edition we planned to build a new one that integrates better with RATO's current sensory and control systems. The design and control, including the integration in the robot control architecture, are described in this paper. The face can show expressions analogous to happiness, sadness, surprise, boredom, anger, calm, displeasure, fear, etc.

The rest of this paper is organized as follows. The next section introduces the head design; it is divided in three subsections that describe the mechanical, motor and electric systems. Section III describes the bus that connects the different robot devices and section IV introduces the RATO control architecture. The integration of the head control modules in this architecture is shown in section V, including some behavior examples. Finally, section VI shows the results in a robot guide application and concludes the paper.

This work was supported in part by the Spanish Comisión Interministerial de Ciencia y Tecnología, CICYT, project DPI2005-06210. The authors are with the Dep. Ingeniería de Sistemas y Automática, University of Vigo, 36200 Vigo, Spain (phone: +34 986 812222; fax: +34 986 814014); e-mail: (gbarciela, epaz, joaquin, rsanz, dplosada)@uvigo.es.

II. HEAD DESIGN

The head should be designed taking into account that the different head expressions must be easily recognized by a human being [12].

The Uncanny Valley theory [13] states that as a robot is made more humanlike in its appearance and motion, the emotional response from a human being to the robot becomes increasingly positive and empathic, up to a point where the response quickly turns into strong repulsion. However, as the appearance and motion continue to become less distinguishable from a human being, the emotional response becomes positive once more and approaches human-to-human empathy levels.
Some roboticists [14] have criticized the theory, and other researchers [15] consider the Uncanny Valley crossed at some level. Even though these android heads might look like a human head, so far their motion is quite different. Animated heads can be as expressive as android heads, or even more so, and they are usually easier to control. Since our main interest is expressivity, we decided to build a simple animated head.

We can see in fig. 1 that a face with eyes, eyebrows and mouth is universally recognized and can show a quite wide range of expressions.
Figure 1. Face expressions.

These elements are actually enough for most animated cartoons to be very expressive. Since our goal is expressiveness rather than making an android face, they will be the basic elements of our robotic head.

A. Mechanical design

The head has two main parts: the neck and the head itself.

The neck is a Pan-Tilt unit PTU-46-17.5 with two degrees of freedom. The pan movement is used to turn the face towards the visitors and to shake the head, while the tilt movement allows nodding. This model can move a 1.6 kg load at a speed of 300 deg/sec, with a movement resolution of 0.0514 degrees. The unit is connected to the on-board computer through an RS-232 cable, allowing control of position, speed and acceleration.

Figure 2. Head mechanical structure.

The head's mechanical structure (fig. 2) is made of folded aluminum sheets 2.5 mm thick. The parts are joined by aluminum rivets 3 mm in diameter. We have chosen aluminum to build the structure because it is light and easily malleable; only a few axes are made of stainless steel to give them more mechanical resistance.

The Pan-Tilt unit that we are using has a rather low payload. Therefore, the head must be light and its center of gravity must be as close as possible to the Pan-Tilt unit. For this reason we need to place most of the weight in the lower part of the structure; some components are even located below the connection point between the head and the neck. However, there are some relatively heavy elements, such as the cameras, that need to be placed at specific points.

We locate most of the servo motors in the base of the structure because they constitute a high percentage of the head's weight. The different elements (eyes, eyebrows, eyelids, mouth, etc.) are actuated via bar linkages.

B. Motor System

The neck has two DoFs that provide horizontal and vertical movements that can be combined.

The face has ten degrees of freedom providing the following movements:
• Eyes up and down.
• Eyes to the sides.
• Left eyebrow.
• Right eyebrow.
• Open and close the mouth.
• Lips.
• Right eyelid (open and close).
• Left eyelid (open and close).

Figure 3. Robot head mounted on RATO.

All DoFs are actuated by standard hobby-grade servo motors as used in radio-controlled cars or airplanes. Eight of them are model DY-S0210 from Dong Yang and two are HS-300 from Hitec, used for the eye movement. After different tests with both models, we found that the DY-S0210 servos are quite loud, so we plan to replace them all with HS-300s. We have chosen these kinds of motors because they are quite simple to work with and they do not need to move heavy parts of the head.

The eyes are two webcams that we use to transmit images to a web page, since so far we are not doing image processing on the robot.

C. Electric design

We have designed a new card (fig. 4) to control the servos and communicate with the CAN master. The card has been designed to control 16 servos, but so far we use only 10.
1. PIC 18F458
2. Servo motor connectors
3. Power connection
4. Servo power connector
6. CAN connector
7. Transceiver
8. Programmer connector
9. Oscillator
10. Reset button
11. Status LEDs

Figure 4. Slave card to connect 16 servo motors.

The PIC program generates the PWM signals that control the servos and processes the CAN messages.

There are four LEDs that provide information about the state of the system:
• Red: shows when the servos are powered.
• Yellow: shows when the card is powered.
• Green: there are two green LEDs connected to the B0 and B1 PIC inputs. They show whether the servos and virtual encoders are activated.

The card has different connectors for power, programming, the servos, servo power and the CAN bus.
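For illustration, the sketch below shows one common way such firmware produces the 1-2 ms pulses, repeated every 20 ms, that hobby servos expect. It is not the actual card firmware: set_pin() and wait_us() are hypothetical stand-ins for the PIC port and timer accesses, and a real implementation would be interrupt driven rather than a busy loop.

/* Sketch of software PWM for hobby servos: each servo expects a 1-2 ms
 * pulse every 20 ms, where the pulse width encodes the target position.
 * set_pin() and wait_us() are invented stand-ins for the real PIC 18F458
 * port and timer accesses. */

#define NUM_SERVOS 16

static unsigned int pulse_us[NUM_SERVOS]; /* 1000..2000, written by the CAN handler */

extern void set_pin(int servo, int level); /* hypothetical port access */
extern void wait_us(unsigned int us);      /* hypothetical timer delay  */

void servo_frame(void)
{
    unsigned int t;
    int i;

    for (i = 0; i < NUM_SERVOS; i++)       /* rising edge on every channel */
        set_pin(i, 1);

    for (t = 0; t < 2000; t += 10) {       /* walk the pulse window in 10 us steps */
        wait_us(10);
        for (i = 0; i < NUM_SERVOS; i++)
            if (pulse_us[i] <= t)
                set_pin(i, 0);             /* falling edge at the commanded width */
    }
    wait_us(18000);                        /* pad the frame out to 20 ms */
}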

III. COMMUNICATIONS IN THE ROBOT


The robot RATO is an old modified B21 platform from RWI with a ring of 24 sonar sensors and 24 bumpers on the enclosure, controlled with four CAN slave modules attached to the computer through a CAN-USB adapter card [20]. The base is controlled with another two CAN slave modules.

Figure 5. CAN-based master-slave communications. The master includes different drivers.

Fig. 5 shows how the different slaves (sensor-actuator modules) are connected to the master (control module). Control modules run on a PC attached to the CAN bus using a CAN-USB adapter.

The CAN master process is implemented as shown in fig. 5. A simple CAN server handles the connection and disconnection of the different slaves, loading the corresponding driver. This is done in a similar way that USB devices can be plugged into and unplugged from a computer, with the driver loaded accordingly. For example, if a module with sonar sensors is connected, the CAN server will first register and initialize the new module connection and then identify the module in order to hand over its messages to the sonar driver.

Modules can be connected at any time and in any order. Connection and disconnection are detected using a watchdog message (any frame, or a special watchdog frame if no other frame is issued during the watchdog period).

When a slave is connected to the bus, it starts sending a watchdog periodically. The master realizes that a module has connected (it begins receiving its watchdog) or disconnected (it gets no frames from the module during the watchdog period). When a new module connects, the master will send information to configure the slave and change the watchdog time. If the master does not receive the watchdog of a slave for a period longer than a timeout, it assumes the slave is disconnected or has some error, and the control programs interested in the slave data will be notified.

The slave modules have an identifier (ID) that can be any even number, and the master module can send frames to a specific slave using ID+1 or as a broadcast frame. On the other side, each slave module will filter out all the CAN frames except the broadcast frames and the frames directed to it (ID+1).
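A minimal sketch of this addressing rule follows; the frame type and broadcast identifier are invented for illustration, and the real RoboCAN frame layout is defined in [20].

/* Sketch of the RoboCAN addressing rule described above: slaves own an even
 * identifier ID, listen on ID+1 and on a broadcast identifier, and transmit
 * on ID. Types and constants are illustrative, not the real RoboCAN layout. */

#include <stdbool.h>
#include <stdint.h>

#define BROADCAST_ID 0x000u   /* hypothetical broadcast identifier */

typedef struct {
    uint16_t id;              /* 11-bit CAN identifier */
    uint8_t  len;
    uint8_t  data[8];
} can_frame_t;

/* Accept broadcast frames and frames addressed to this slave (our even
 * module ID plus one); drop everything else. */
bool slave_accepts(uint16_t module_id, const can_frame_t *f)
{
    return f->id == BROADCAST_ID || f->id == (uint16_t)(module_id + 1);
}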

Figure 6. Head CAN-Slave module status sequence.

A. RoboCAN master module


The RoboCAN master module handles the connection of the different slave modules and configures them if necessary. It starts without any slave module registered. When a message from a new slave module is detected, it creates a new instance for this module that evolves according to the states depicted in fig. 6. After detecting the first message from the new module, the status will be “Detected” and a message will be sent to the module asking for information, in order to initiate the drivers that handle its sensors and actuators. A driver deals with the information of all the sensors or actuators of the same kind. If the information for the module is correct and all the drivers are correctly initiated, the new module is registered as “Connected” and the information obtained from it will be passed to the drivers.

The module can be disconnected by a request from the master or due to a timeout (fig. 6). If a module is disconnected, all the associated drivers will be notified.
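The following sketch illustrates the master-side bookkeeping this implies. The state names follow fig. 6, but the types, table and helper calls are invented for illustration and are not the actual RoboCAN master code.

/* Sketch of the master-side bookkeeping: any frame from a slave refreshes
 * its watchdog timestamp; silence longer than the timeout marks the module
 * disconnected and its drivers are notified. */

#include <stdio.h>
#include <time.h>

enum module_state { ABSENT, DETECTED, CONNECTED, DISCONNECTED };

struct module {
    int    id;
    enum module_state state;
    time_t last_frame;        /* time of the last frame (data or watchdog) */
    double timeout_s;         /* configured watchdog timeout */
};

void on_frame(struct module *m)
{
    m->last_frame = time(NULL);
    if (m->state == ABSENT) {
        m->state = DETECTED;  /* first frame: ask the module who it is, so
                               * the right drivers can be initiated */
    }
}

void check_watchdogs(struct module *mods, int n)
{
    time_t now = time(NULL);
    for (int i = 0; i < n; i++) {
        if (mods[i].state == CONNECTED &&
            difftime(now, mods[i].last_frame) > mods[i].timeout_s) {
            mods[i].state = DISCONNECTED;
            printf("module %d lost: notify its drivers\n", mods[i].id);
        }
    }
}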
B. RoboCAN head slave module

Fig. 7 shows the state graph of the head slave module. For the control of this module we divide the functionality of each servo motor into a virtual sensor and an actuator. CAN frames are issued (green ellipses in fig. 7) in three different cases. The first is published as a reply to some information request (module, sensor or motor information). The other two are frames published periodically by the sensor polling and module watchdog mechanisms. The sensor polling is maintained as an encoder mechanism to keep track of the servo positions. The watchdog mechanism guarantees that a frame is sent every watchdog period while the module is connected; if the module does not need to transmit a frame during that time, a watchdog frame will be issued.
Figure 7. Slave module main program (on the PIC 18F458). From a central “Waiting” state, the module reacts to sensor and watchdog timeouts, servo commands, information requests, configuration frames (module, servo and sensor) and reset frames.
On the other side, different actions are taken (yellow ellipses in fig. 7) when “module configuration”, “sensor configuration”, “servo configuration”, “reset module” or “servo command” frames are received. Module configuration frames are issued to configure module parameters such as the watchdog time. Actuator configuration frames include “disable motors”, “enable motors” and “servo motor configuration”. Sensor configuration frames are issued to configure parameters related to the virtual sensors; they include “disable sensors”, “enable sensors” and “sensor configuration”. There are also information frames (query and request) regarding the module, the virtual sensors or the servo motors.
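A compact sketch of this main loop follows, assuming hypothetical frame-type constants and helper functions; the real program runs on the PIC and is summarized in fig. 7.

/* Sketch of the slave main loop of fig. 7: from a central waiting state the
 * firmware reacts to received frames and to the sensor-poll and watchdog
 * timers. All names below are illustrative. */

enum frame_type { MODULE_CONFIG, SENSOR_CONFIG, SERVO_CONFIG,
                  RESET_MODULE, SERVO_COMMAND, INFO_REQUEST };

extern int  frame_pending(void);
extern enum frame_type next_frame_type(void);
extern void apply_config(enum frame_type t);
extern void reset_module(void);
extern void move_servo(void);
extern void send_info_frame(void);      /* reply to an information request */
extern void send_sensor_frame(void);    /* periodic virtual-encoder frame  */
extern void send_watchdog_frame(void);
extern int  sensor_poll_due(void), watchdog_due(void);

void slave_main_loop(void)
{
    for (;;) {                                   /* the "Waiting" state */
        if (frame_pending()) {
            enum frame_type t = next_frame_type();
            switch (t) {
            case SERVO_COMMAND: move_servo();      break;
            case INFO_REQUEST:  send_info_frame(); break;
            case RESET_MODULE:  reset_module();    break;
            default:            apply_config(t);   break;
            }
        }
        if (sensor_poll_due())
            send_sensor_frame();   /* keeps the master's servo-position image fresh */
        if (watchdog_due())
            send_watchdog_frame(); /* only if nothing else was sent this period */
    }
}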
The slave module is unaware of eventual disconnections from the master. If the time the slave remains disconnected is shorter than the watchdog period, the master may not detect the disconnection. For longer disconnections, the master will restart the slave module.

IV. CONTROL ARCHITECTURE

The control architecture, named ISANAV, is a modular control architecture organized as shown in figure 8. Even though the different modules are organized in four sets, they can be mapped onto the three-layer architecture popularized by Bonasso et al. [16]: the hardware servers and the control set implement the functional layer, while Task Dispatch implements the executive and planning layers. Finally, ISANAV includes a set of processes to interact with the users and to connect to other processes for multirobot applications.

The navigation platform is based on CARMEN [17], and some modules such as localize, navigator and the base hardware servers remain basically the same. Unlike CARMEN, motion control is divided into high-level (strategic) planning [18] and lower-level (tactical) collision avoidance using the Beam method [19].

A. Hardware server modules

The hardware server modules govern hardware interaction, providing an abstract set of actuator and sensor interfaces and isolating the control methods from the hardware details. Most of the hardware devices are connected to a CAN bus using RoboCAN [20]. Some of these devices are used in navigation, such as the laser and sonar, while others are specific to the application, such as the robot head and the sound and speech system. The hardware servers also provide low-level control loops for the rotation and translation velocities. Thanks to this layer, changes in hardware components can be made without changes in higher-layer modules, while keeping the same interface.

B. Control modules

The control modules integrate sensor and motion information to provide improved sensor odometry, basic navigation capabilities (localization, path planning, path following, etc.) and basic application-specific functions (say text, make expression, etc.).

C. Executive modules

All the modules in this layer belong to the RoboGraph application, which includes two modules (figure 8): the task editor, used only for application development, and task dispatch, without a graphical interface, which should be working when the robot is doing surveillance operations.

This layer uses hierarchical interpreted binary Petri nets [21] to coordinate the activity of all the rest of the modules. Tasks are described using an interpreted Petri net editor and saved in an XML file. A dispatcher loads these files and executes the different Petri nets under user requests. A monitor that shows the state of all the running nets is very useful for debugging and tracing purposes.

The interaction with other modules in the architecture is done by publishing and subscribing to messages. This way, problems in a module, such as a blocking problem, do not block dispatch, and we can set up simple mechanisms to detect and recover from failure or exception situations.
Figure 8. ISANAV control architecture. Modules are grouped in several sets. The hardware servers set reads sensor data and controls actuators. The control set provides several basic functions that can be used by other modules of this set and by the executive set.

The Petri net can evolve only with the arrival of an IPC message or the end of a timer. Every time a change in the status of a Petri net (start, stop, evolve) or in the waiting queues (new requests added or removed) is produced, a new IPC message reporting that change is issued for the GUI monitor mode and stored in the log file for the GUI play-logger mode.
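As a rough sketch of what interpreting a binary Petri net involves, the fragment below fires any transition whose input places are all marked when its associated event arrives. RoboGraph's actual interpreter additionally handles hierarchy, timers and the request queues; the data layout here is invented.

/* Minimal sketch of interpreting a binary Petri net: a transition fires when
 * every input place is marked and its associated IPC event (or timer) has
 * arrived; firing clears the inputs and marks the outputs. */

#include <stdbool.h>

#define MAX_PLACES 32

typedef struct {
    bool in[MAX_PLACES];    /* input places of this transition  */
    bool out[MAX_PLACES];   /* output places of this transition */
    int  event;             /* IPC message (or timer) that enables it */
} transition_t;

static bool marking[MAX_PLACES];   /* binary: a place holds 0 or 1 tokens */

static bool enabled(const transition_t *t, int event)
{
    if (t->event != event)
        return false;
    for (int p = 0; p < MAX_PLACES; p++)
        if (t->in[p] && !marking[p])
            return false;
    return true;
}

/* Called from the IPC message (or timer) handler: the net can only evolve
 * when one of these events arrives, as described in the text. */
void net_dispatch(transition_t *trans, int ntrans, int event)
{
    for (int i = 0; i < ntrans; i++) {
        if (enabled(&trans[i], event)) {
            for (int p = 0; p < MAX_PLACES; p++) {
                if (trans[i].in[p])  marking[p] = false;
                if (trans[i].out[p]) marking[p] = true;
            }
            /* a "net evolved" IPC message for the GUI/logger would go here */
        }
    }
}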
D. Interface modules

There are several interface modules for the programmer to debug and trace the control and hardware server modules. However, there is only one interface module on board that allows the user to interact with the robot (for example, for identification). Finally, users can also connect via the Web to monitor and interact with the robot.

Each module in fig. 8 is a Linux process that exchanges information with other modules using IPC (Inter Process Communication). Developed at Carnegie Mellon's Robotics Institute, IPC provides a publication-subscription model for processes to pass messages to each other via a central server process. Each application registers with the central server and specifies what types of messages it publishes and what types it listens for. Any message that is passed to the central server is immediately copied to all the subscribed processes.

The process of building a mobile robot application using this framework includes programming at different levels. First, hardware server and control modules need to be implemented. Modules that implement navigation tasks can be reused from one application to another. At the next level, the executive layer, it is necessary to build a module or sequencer that sends requests to and receives the outcomes of the functional layer modules. This module usually varies from one application to another.
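A minimal subscriber written against the public CMU IPC API illustrates the pattern. The message name, format string and struct below are invented for this example and are not the project's real message definitions.

/* Sketch of the publish/subscribe pattern used between modules, written
 * against the public CMU IPC API. */

#include <ipc.h>   /* CMU IPC, Carnegie Mellon Robotics Institute */

#define HEAD_EXPR_MSG  "head_expression"        /* hypothetical message */
#define HEAD_EXPR_FMT  "{int, double}"          /* id and transition time */

typedef struct { int expression_id; double seconds; } head_expr_t;

static void on_head_expr(MSG_INSTANCE ref, void *data, void *client)
{
    head_expr_t *e = (head_expr_t *)data;
    /* drive the servos toward e->expression_id over e->seconds here */
    IPC_freeData(IPC_msgInstanceFormatter(ref), data);
}

int main(void)
{
    IPC_connect("head_driver");                      /* register with the server */
    IPC_defineMsg(HEAD_EXPR_MSG, IPC_VARIABLE_LENGTH, HEAD_EXPR_FMT);
    IPC_subscribeData(HEAD_EXPR_MSG, on_head_expr, NULL);
    IPC_dispatch();                                  /* handle messages forever */
    return 0;
}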
V. HEAD CONTROL MODULES

The head control is carried out in different modules according to the architecture described in the last section. First, at the hardware servers level, a new CAN driver to deal with head messages (IPC and CAN), as described in section III, has been developed.

Second, at the graphical user interfaces level, a module named Head GUI has been added for debugging and demo purposes. This module allows the user to change the head expressions and move each servo independently.

Third, a simple head simulator for debugging purposes has been developed. This module behaves like the hardware driver and manages the same IPC messages, but instead of controlling the physical head it moves a graphical virtual face. Therefore, from the point of view of other IPC modules, this module behaves like the head driver. In order to validate algorithms and program mobile robot applications, a massive number of tests is necessary, and tests with robots are usually very time and resource consuming. For this reason, most mobile robot architectures include a simulator to first validate algorithms and test applications. The head simulator is necessary in order to carry out all these preliminary tests.

Finally, the executive layer needs to publish IPC head messages to coordinate head expressions and movements with the different application tasks. We have seen in the last section that we define tasks as Petri nets. So far we have used this system in a robot guide application. For this application there are a few basic nets that control only one element; for example, the Head Control Petri Nets (HCPN) manage only the head. On the top level, the Application Petri Nets (APN) coordinate the execution of the different robot elements using the basic Petri nets.
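The publisher side is symmetric: an executive-level task can request an expression without knowing whether the real head driver or the simulator is subscribed. The message definition is again the invented one from the previous sketch.

/* Publisher-side counterpart of the previous sketch. */

#include <ipc.h>

typedef struct { int expression_id; double seconds; } head_expr_t;

int main(void)
{
    head_expr_t cmd = { 3, 0.5 };   /* e.g. a "happy" face reached in 0.5 s */
    IPC_connect("task_dispatch_demo");
    IPC_defineMsg("head_expression", IPC_VARIABLE_LENGTH, "{int, double}");
    IPC_publishData("head_expression", &cmd);
    IPC_disconnect();
    return 0;
}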
A. Head Control Petri Nets

We have designed some nets to reproduce basic head behaviors such as the waiting behavior and the talking behavior. The first one is designed to be executed when the robot is waiting, and it includes a set of random movements like blinking, moving the eyes sideways and short ear movements. It also “yawns” after long periods of inactivity.

The second one is designed to be executed while the robot is talking. Even though the basic movement is to open and close the mouth, it also includes some short random movements such as blinking. Mouth movements are not synchronized with the content of the speech. However, the fact that the talking behavior is only activated when the robot is really talking gives a good characterization of the talking process, much better than keeping the mouth open while talking or doing nothing at all.
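In the real system this logic is expressed as a Petri net, but a procedural sketch conveys the idea; the helper functions below are hypothetical.

/* Sketch of the "talking" behavior's servo side: while speech is active the
 * mouth opens and closes, with an occasional random blink mixed in. */

#include <stdlib.h>

extern int  robot_is_talking(void);
extern void set_mouth(int open);        /* drive the mouth servo */
extern void blink(void);                /* close and reopen both eyelids */
extern void sleep_ms(int ms);

void talking_behavior(void)
{
    int open = 0;
    while (robot_is_talking()) {
        open = !open;                   /* not synchronized with the speech */
        set_mouth(open);
        if (rand() % 10 == 0)           /* short random movement */
            blink();
        sleep_ms(150 + rand() % 100);   /* avoid a mechanical, regular rhythm */
    }
    set_mouth(0);                       /* leave the mouth closed when done */
}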
B. Application Petri Nets

The Petri nets in the last section can be started and stopped from other Petri nets such as the Tour Net. This net is the highest-level net in execution when the robot is giving a tour to the visitors. It basically gets the next point to visit and starts the task to go to that point, while in parallel it can say something, turn its head to the visitors, etc. Once at the point, it will explain things about the stand.

Every time the robot has to say something, the talking behavior described in the last section is started, and when it finishes talking the behavior is stopped.

Figure 9. Robot RATO giving a tour in the “Palacio de Congresos y Exposiciones de Galicia”, Santiago de Compostela (Spain), for the xuventudeGalicia.net event in March 2008.

The waiting behavior is activated, for example, when the robot is waiting for visitors. Once the visitors get close to the robot, they are invited for a tour, and if they get closer the robot head shows a happy face and starts the tour.

Besides the behaviors, several facial expressions are activated at different points of the higher-level Petri nets.
VI. RESULTS AND CONCLUSIONS (2004). Design of an Arthropomorphic Robot Head for Studying
We have used the system described in this paper in a mobile robotic guide that worked for three days (March 2008) in the “Palacio de Congresos y Exposiciones de Galicia”, Santiago de Compostela (Spain), for the “Xuventude Galicia Net” event. For this application the modified B21 shown in fig. 9 was used.

In this kind of application the environment is a set of stands, most of them mounted the day before. Furthermore, some of the tasks are not fully defined until a few hours before the start of the event. Therefore, a tool like RoboGraph to quickly create, change and debug tasks becomes necessary. Behaviors that associate events to head expressions and actions are also easily changed, as we stated before.

For this year's edition (2008) we used the robot RATO with the head described in this paper, and the acceptance was much better. People tried to interact with the robot; they even expected human-like capabilities and talked to the robot expecting to be answered. Some of the visitors tried to find out what actions would make the robot change its facial expressions and say different things.
ACKNOWLEDGMENT

We would like to thank all the people who have influenced this work: especially Javier Perez Castro, who designed the previous robot head on which this work builds; Reid Simmons, for his helpful advice about the use of IPC; and Dr. Mónica Cid, for her help with the English grammar.

REFERENCES
[1] J. Bates, "The role of emotion in believable agents," Communications of the ACM, vol. 37, no. 7, pp. 122-125, 1994.
[2] B. Blumberg, "Old Tricks, New Dogs: Ethology and Interactive Creatures," PhD thesis, MIT, 1996.
[3] C. Breazeal, "Sociable Machines: Expressive Social Exchange Between Humans and Robots," Sc.D. dissertation, Department of Electrical Engineering and Computer Science, MIT, 2000.
[4] S. Li, M. Kleinehagenbrock, J. Fritsch, B. Wrede, and G. Sagerer, "'BIRON, let me show you something': Evaluating the interaction with a robot companion," in Proc. IEEE Int. Conf. on Systems, Man, and Cybernetics (SMC), 2004.
[5] O. Rogalla, M. Ehrenmann, R. Zöllner, R. Becher, and R. Dillmann, "Using gesture and speech control for commanding a robot assistant," in Proc. IEEE Int. Workshop on Robot and Human Interactive Communication (ROMAN), 2002.
[6] R. Stiefelhagen, C. Fügen, P. Gieselmann, H. Holzapfel, K. Nickel, and A. Waibel, "Natural human-robot interaction using speech, head pose and gestures," in Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2004.
[7] T. Tojo, Y. Matsusaka, T. Ishii, and T. Kobayashi, "A conversational robot utilizing facial and body expressions," in Proc. IEEE Int. Conf. on Systems, Man, and Cybernetics (SMC), 2000.
[8] S. Thrun, M. Bennewitz, W. Burgard, A. B. Cremers, F. Dellaert, D. Fox, D. Haehnel, C. Rosenberg, N. Roy, J. Schulte, and D. Schulz, "MINERVA: A second generation mobile tour-guide robot," in Proc. IEEE Int. Conf. on Robotics and Automation (ICRA'99), 1999.
[9] M. Bennewitz, F. Faber, D. Joho, M. Schreiber, and S. Behnke, "Towards a humanoid museum guide robot that interacts with multiple persons," in Proc. 5th IEEE-RAS Int. Conf. on Humanoid Robots, Dec. 2005, pp. 418-423.
[10] K. Berns, C. Hillenbrand, and K. Mianowski, "The mechatronic design of a human-like robot head," in Proc. 16th CISM-IFToMM Symposium on Robot Design, Dynamics, and Control (ROMANSY), 2006.
[11] E. P. Domonte, J. P. Castro, J. L. Fernández, and R. Sanz, "Diseño de una cabeza robótica para el estudio de procesos de interacción con personas," in Proc. XXV Jornadas de Automática, 2005.
[12] H. Kim, G. York, G. Burton, E. Murphy-Chutorian, and J. Triesch, "Design of an anthropomorphic robot head for studying autonomous development and learning," in Proc. IEEE Int. Conf. on Robotics and Automation (ICRA 2004), New Orleans, LA, USA, 2004.
[13] M. Mori, "Bukimi no tani [The uncanny valley]" (K. F. MacDorman and T. Minato, Trans.), Energy, vol. 7, no. 4, pp. 33-35, 1970. (Originally in Japanese.)
[14] K. F. MacDorman and H. Ishiguro, "The uncanny advantage of using androids in cognitive science research," Interaction Studies, vol. 7, no. 3, pp. 297-337, 2006.
[15] D. Hanson, R. Bergs, Y. Tadesse, V. White, and S. Priya, "Enhancement of EAP actuated facial expressions by designed chamber geometry in elastomers," in Proc. SPIE Electroactive Polymer Actuators and Devices Conf., 10th Smart Structures and Materials Symposium, San Diego, USA, 2006.
[16] R. Bonasso, D. Kortenkamp, D. Miller, and M. Slack, "Experiences with an architecture for intelligent, reactive agents," Journal of Artificial Intelligence Research, vol. 9, no. 1, pp. 237-256, 1997.
[17] M. Montemerlo, N. Roy, and S. Thrun, "Perspectives on standardization in mobile robot programming: The Carnegie Mellon Navigation (CARMEN) toolkit," in Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS 2003), Oct. 2003, vol. 3, pp. 2436-2441.
[18] A. R. Diéguez, J. L. Fernández, and R. Sanz, "A global motion planner that learns from experience for autonomous mobile robots," Robotics and Computer-Integrated Manufacturing, vol. 23, pp. 544-552, 2007, Elsevier.
[19] J. L. Fernández, R. Sanz, J. A. Benayas, and A. R. Diéguez, "Improving collision avoidance for mobile robots in partially known environments: the beam curvature method," Robotics and Autonomous Systems, vol. 46, pp. 205-219, April 2004.
[20] J. L. Fernández, M. J. Souto, D. P. Losada, R. Sanz, and E. Paz, "Communication framework for sensor-actuator data in mobile robots," in Proc. 2007 IEEE Int. Symposium on Industrial Electronics, pp. 1502-150. ISBN: 1-4244-0755-9.
[21] J. López, R. Sanz, E. Paz, and C. Alonso, "Using hierarchical binary Petri nets to build robust mobile robot applications: RoboGraph," in Proc. IEEE Int. Conf. on Robotics and Automation, 2008.
