
Linux Graphics Stack

This post is a brief and simple introduction to the Linux graphics stack. I will focus on giving
enough context to understand the role that Mesa and 3D drivers in general play in the stack,
and leave it to follow-up posts to dive deeper into the guts of Mesa in general and the Intel DRI
driver specifically.

A bit of history
In order to understand some of the particularities of the current graphics stack it is important
to understand how it had to adapt to new challenges throughout the years.

You see, nowadays things are significantly more complex than they used to be, but in the early
times there was only a single piece of software that had direct access to the graphics hardware:
the X server. This approach made the graphics stack simpler because it didn’t need to
synchronize access to the graphics hardware between multiple clients.

In these early days applications would do all their drawing indirectly, through the X server. By
using Xlib they would send rendering commands over the X11 protocol that the X server would
receive, process and translate to actual hardware commands on the other side of a socket.
Notice that this “translation” is the job of a driver: it takes a bunch of hardware agnostic
rendering commands as its input and translates them into hardware commands as expected by
the targeted GPU.
Since the X server was the only piece of software that could talk to the graphics hardware by
design, these drivers were written specifically for it, became modules of the X server itself and
an integral part of its architecture. These userspace drivers are called DDX drivers in X server
argot and their role in the graphics stack is to support 2D operations as exported by Xlib and
required by the X server implementation.

DDX drivers in the X server (image via wikipedia)


In my Ubuntu system, for example, the DDX driver for my Intel GPU comes via the
xserver-xorg-video-intel package, and there are similar packages for other GPU vendors.

3D graphics
The above covers 2D graphics as that is what the X server used to be all about. However, the
arrival of 3D graphics hardware changed the scenario significantly, as we will see now.

In Linux, 3D graphics is implemented via OpenGL, so people expected an implementation of this
standard that would take advantage of the fancy new 3D hardware, that is, a hardware
accelerated libGL.so. However, in a system where only the X server was allowed to access the
graphics hardware we could not have a libGL.so that talked directly to the 3D hardware.
Instead, the solution was to provide an implementation of OpenGL that would send OpenGL
commands to the X server through an extension of the X11 protocol and let the X server
translate these into actual hardware commands as it had been doing for 2D commands before.
We call this Indirect Rendering, since applications do not send rendering commands directly to
the graphics hardware, and instead, render indirectly through the X server.

OpenGL with Indirect Rendering (image via wikipedia)


Unfortunately, developers would soon realize that this solution was not sufficient for intensive
3D applications, such as games, that need to render large amounts of 3D primitives while
maintaining high frame rates. The problem was clear: wrapping OpenGL calls in the X11
protocol was not a viable solution.

In order to achieve good performance in 3D applications we needed them to access the
hardware directly, and that would require rethinking a large chunk of the graphics stack.

Enter Direct Rendering Infrastructure (DRI)


Direct Rendering Infrastructure is the new architecture that allows X clients to talk to the
graphics hardware directly. Implementing DRI required changes to various parts of the graphics
stack including the X server, the kernel and various client libraries.
Although the term DRI usually refers to the complete architecture, it is often also used to refer
only to the specific part of it that involves the interaction of applications with the X server, so
be aware of this dual meaning when you read about this stuff on the Internet.

Another important part of DRI is the Direct Rendering Manager (DRM). This is the kernel side of
the DRI architecture. Here, the kernel handles sensitive aspects like hardware locking, access
synchronization, video memory and more. DRM also provides userspace with an API that it can
use to submit commands and data in a format that is adequate for modern GPUs, which
effectively allows userspace to communicate with the graphics hardware.
Notice that many of these things have to be done specifically for the target hardware so there
are different DRM drivers for each GPU. In my Ubuntu system the DRM module for my Intel
GPU is provided via the libdrm-intel1:amd64 package.

OpenGL with Direct Rendering (image via wikipedia)


DRI/DRM provide the building blocks that enable userspace applications to access the graphics
hardware directly in an efficient and safe manner, but in order to use OpenGL we need another
piece of software that, using the infrastructure provided by DRI/DRM, implements the OpenGL
API while respecting the X server requirements.

Enter Mesa
Mesa is a free software implementation of the OpenGL specification, and as such, it provides
a libGL.so, which OpenGL based programs can use to output 3D graphics in Linux. Mesa can
provide accelerated 3D graphics by taking advantage of the DRI architecture to gain direct
access to the underlying graphics hardware in its implementation of the OpenGL API.
When our 3D application runs in an X11 environment it will output its graphics to a surface
(window) allocated by the X server. Notice, however, that with DRI this will happen without
intervention of the X server, so naturally there is some synchronization to do between the two,
since the X server still owns the window Mesa is rendering to and is the one in charge of
displaying its contents on the screen. This synchronization between the OpenGL application and
the X server is part of DRI. Mesa’s implementation of GLX (the extension of the OpenGL
specification that addresses the X11 platform) uses DRI to talk to the X server and accomplish
this.
Mesa also has to use DRM for many things. Communication with the graphics hardware
happens by sending commands (for example “draw a triangle”) and data (for example the
vertex coordinates of the triangle, their color attributes, normals, etc). This process usually
involves allocating a bunch of buffers in the graphics hardware where all these commands and
data are copied so that the GPU can access them and do its work. This is enabled by the DRM
driver, which is the one piece that takes care of managing video memory and which offers APIs
to userspace (Mesa in this case) to do this for the specific target hardware. DRM is also required
whenever we need to allocate and manage video memory in Mesa, so things like creating
textures, uploading data to textures, and allocating color, depth or stencil buffers all require
using the DRM APIs for the target hardware.

OpenGL/Mesa in the context of 3D Linux games (image via wikipedia)


What’s next?
Hopefully I have managed to explain what the role of Mesa is in the Linux graphics stack and
how it works together with the Direct Rendering Infrastructure to enable efficient 3D graphics
via OpenGL. In the next post we will cover Mesa in more detail: we will see that it is actually a
framework where multiple OpenGL drivers live together, including both hardware and software
variants; we will also have a look at its directory structure, identify its main modules, introduce
the Gallium framework and more.
TCP/IP model
o The TCP/IP model was developed prior to the OSI model.
o The TCP/IP model is not identical to the OSI model.
o The TCP/IP model consists of five layers: the application layer, transport layer, network
layer, data link layer and physical layer.
o The first four layers provide physical standards, network interface, internetworking, and
transport functions that correspond to the first four layers of the OSI model, while the top
three OSI layers are represented in the TCP/IP model by a single layer called the application
layer.
o TCP/IP is a hierarchical protocol made up of interactive modules, and each of them
provides specific functionality.

Here, hierarchical means that each upper-layer protocol is supported by two or more lower-level
protocols.

Functions of TCP/IP layers:

Network Access Layer

o The network access layer is the lowest layer of the TCP/IP model.
o It is the combination of the Physical layer and the Data Link layer defined in the OSI
reference model.
o It defines how the data should be sent physically through the network.
o This layer is mainly responsible for the transmission of the data between two devices on
the same network.
o The functions carried out by this layer are encapsulating the IP datagram into frames
transmitted by the network and mapping of IP addresses into physical addresses.
o The protocols used by this layer are Ethernet, Token Ring, FDDI, X.25 and Frame Relay.

Internet Layer

o The internet layer is the second layer of the TCP/IP model.
o The internet layer is also known as the network layer.
o The main responsibility of the internet layer is to send packets from any network and
have them arrive at the destination irrespective of the route they take.

The following protocols are used in this layer:

IP Protocol: The IP protocol is used in this layer, and it is the most significant part of the entire
TCP/IP suite.

Following are the responsibilities of this protocol:

o IP Addressing: This protocol implements logical host addresses known as IP addresses.
The IP addresses are used by the internet and higher layers to identify the device and to
provide internetwork routing.
o Host-to-host communication: It determines the path through which the data is to be
transmitted.
o Data Encapsulation and Formatting: The IP protocol accepts data from the transport
layer protocol and encapsulates it into a message known as an IP datagram.
o Fragmentation and Reassembly: The limit imposed on the size of the IP datagram by the
data link layer protocol is known as the Maximum Transmission Unit (MTU). If the size of
the IP datagram is greater than the MTU, the IP protocol splits the datagram into smaller
units so that they can travel over the local network. Fragmentation can be done by the
sender or by an intermediate router. At the receiver side, all the fragments are
reassembled to form the original message.
o Routing: When an IP datagram is delivered within the same local network, this is known
as direct delivery. When source and destination are on distant networks, the IP datagram
is sent indirectly, by routing it through various devices such as routers.
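The fragmentation and reassembly behaviour described above can be sketched in a few lines of Python. This is an illustrative model, not real IP code: it tracks only the fragment offset (in 8-byte units, as IP does) and a more-fragments flag, and ignores headers entirely.

```python
# Toy model of IP fragmentation and reassembly. Real IP carries the offset
# and more-fragments flag in the datagram header; here we keep them in tuples.

def fragment(payload: bytes, mtu: int):
    """Split payload into (offset_in_8_byte_units, more_fragments, chunk) tuples."""
    # Every non-final fragment's size must be a multiple of 8, as in IP.
    chunk_size = (mtu // 8) * 8
    fragments = []
    for offset in range(0, len(payload), chunk_size):
        chunk = payload[offset:offset + chunk_size]
        more = (offset + chunk_size) < len(payload)   # more fragments follow?
        fragments.append((offset // 8, more, chunk))
    return fragments

def reassemble(fragments):
    """Rebuild the original payload, regardless of arrival order."""
    parts = sorted(fragments, key=lambda f: f[0])     # sort by offset
    return b"".join(chunk for _, _, chunk in parts)

data = bytes(range(100))
frags = fragment(data, mtu=40)
# Reassembly works even if fragments arrive in reverse order.
assert reassemble(list(reversed(frags))) == data
```

Because the offsets are carried with each fragment, the receiver can reassemble correctly no matter which route (and therefore order) each fragment took.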

ARP Protocol

o ARP stands for Address Resolution Protocol.


o ARP is a network layer protocol which is used to find the physical address from the IP
address.
o Two terms are mainly associated with the ARP protocol:
o ARP request: When a sender wants to know the physical address of a device, it
broadcasts an ARP request to the network.
o ARP reply: Every device attached to the network receives and processes the ARP
request, but only the intended recipient recognizes its own IP address and sends
back its physical address in the form of an ARP reply. The sender then adds the
physical address both to its cache memory and to the datagram header.
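To make the ARP request concrete, here is a small Python sketch that packs the fixed 28-byte ARP payload for IPv4 over Ethernet following the RFC 826 field layout. The MAC and IP addresses used are made-up examples.

```python
import struct

def arp_request(sender_mac: bytes, sender_ip: bytes, target_ip: bytes) -> bytes:
    """Build the 28-byte ARP request payload for IPv4 over Ethernet."""
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,         # hardware type: Ethernet
        0x0800,    # protocol type: IPv4
        6,         # hardware address length (MAC)
        4,         # protocol address length (IPv4)
        1,         # opcode: 1 = request, 2 = reply
        sender_mac, sender_ip,
        b"\x00" * 6,   # target MAC unknown: that is what we are asking for
        target_ip,
    )

pkt = arp_request(b"\xaa\xbb\xcc\xdd\xee\xff",
                  b"\xc0\xa8\x01\x02",    # 192.168.1.2
                  b"\xc0\xa8\x01\x01")    # 192.168.1.1
assert len(pkt) == 28             # fixed ARP payload size for IPv4/Ethernet
assert pkt[6:8] == b"\x00\x01"    # opcode field says "request"
```

The all-zero target MAC field is what the reply fills in: the recipient answers with opcode 2 and its own hardware address in that slot.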

ICMP Protocol

o ICMP stands for Internet Control Message Protocol.


o It is a mechanism used by hosts or routers to send notifications regarding datagram
problems back to the sender.
o A datagram travels from router to router until it reaches its destination. If a router is
unable to route the data because of unusual conditions such as disabled links or network
congestion, the ICMP protocol is used to inform the sender that the datagram is
undeliverable.
o The ICMP protocol mainly uses two terms:
o ICMP Test: ICMP Test is used to test whether the destination is reachable or not.
o ICMP Reply: ICMP Reply is used to check whether the destination device is
responding or not.
o The core responsibility of the ICMP protocol is to report the problems, not correct them.
The responsibility of the correction lies with the sender.
o ICMP can send the messages only to the source, but not to the intermediate routers
because the IP datagram carries the addresses of the source and destination but not of
the router that it is passed to.
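ICMP messages, like other protocols in the suite, are protected by the Internet checksum (the one's-complement sum of 16-bit words). The following Python sketch builds an ICMP echo request ("ping") message; it is an illustration of the message layout only, with no IP header or socket I/O, so it is not a working ping.

```python
import struct

def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, complemented (RFC 1071 style)."""
    if len(data) % 2:
        data += b"\x00"                        # pad to a 16-bit boundary
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total >> 16:                         # fold carry bits back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def echo_request(ident: int, seq: int, payload: bytes) -> bytes:
    # type 8 = echo request, code 0; checksum covers the whole ICMP message
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

msg = echo_request(ident=1, seq=1, payload=b"ping")
# A receiver validates by checksumming the entire message: the result is 0.
assert internet_checksum(msg) == 0
```

The zero-result property on the receiver side is what lets a host detect a damaged ICMP message before interpreting its type and code fields.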

Transport Layer

The transport layer is responsible for the reliability, flow control, and error correction of data
being sent over the network.

The two protocols used in the transport layer are User Datagram protocol and Transmission
control protocol.

o User Datagram Protocol (UDP)


o It provides connectionless service and end-to-end delivery of transmission.
o It is an unreliable protocol, as it detects errors but does not specify them.
o The User Datagram Protocol detects the error, and the ICMP protocol reports to
the sender that the user datagram has been damaged.
o UDP consists of the following fields:
Source port address: the address of the application program that created
the message.
Destination port address: the address of the application program that
receives the message.
Total length: the total length of the user datagram in bytes.
Checksum: a 16-bit field used for error detection.
o UDP does not specify which packet is lost. UDP contains only a checksum; it does
not contain any ID of a data segment.
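The four header fields listed above can be packed with Python's struct module. In this sketch the port numbers are arbitrary examples, and the checksum is left as 0, which UDP over IPv4 interprets as "no checksum computed".

```python
import struct

def udp_datagram(src_port: int, dst_port: int, payload: bytes) -> bytes:
    """Build a UDP datagram: 8-byte header (four 16-bit fields) + payload."""
    length = 8 + len(payload)      # total length field: header + data, in bytes
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)  # checksum 0
    return header + payload

dgram = udp_datagram(12345, 53, b"hello")
src, dst, length, csum = struct.unpack("!HHHH", dgram[:8])
assert (src, dst, length, csum) == (12345, 53, 13, 0)
```

Notice how small the header is compared to TCP's: two ports, a length and a checksum are all the state UDP carries, which is exactly why it cannot identify or recover a lost segment.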

o Transmission Control Protocol (TCP)


o It provides full transport layer services to applications.
o It creates a virtual circuit between the sender and receiver, and the circuit is
active for the duration of the transmission.
o TCP is a reliable protocol, as it detects errors and retransmits the damaged
segments. It ensures that all segments are received and acknowledged before the
transmission is considered complete and the virtual circuit is discarded.
o At the sending end, TCP divides the whole message into smaller units known as
segments; each segment contains a sequence number, which is required for
reordering the segments to form the original message.
o At the receiving end, TCP collects all the segments and reorders them based on
their sequence numbers.
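The segmentation and reordering behaviour can be modelled in a few lines of Python. This is a toy illustration of sequence numbering only; it does not model acknowledgements, retransmission or the virtual circuit.

```python
import random

def segmentize(message: bytes, segment_size: int):
    """Split a message into (sequence_number, data) segments."""
    return [(seq, message[off:off + segment_size])
            for seq, off in enumerate(range(0, len(message), segment_size))]

def reorder(segments):
    """Receiver side: sort segments by sequence number and rejoin them."""
    return b"".join(data for _, data in sorted(segments))

msg = b"TCP reorders out-of-order segments by sequence number"
segments = segmentize(msg, 8)
random.shuffle(segments)          # simulate out-of-order arrival
assert reorder(segments) == msg   # the original message is reconstructed
```

The sequence number plays the same role here as in real TCP: it gives the receiver enough information to put segments back in order no matter how the network delivered them.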

Application Layer

o The application layer is the topmost layer in the TCP/IP model.


o It is responsible for handling high-level protocols and issues of representation.
o This layer allows the user to interact with the application.
o When one application layer protocol wants to communicate with another application
layer, it forwards its data to the transport layer.
o There is some ambiguity in the application layer: not every application can be placed
inside it, only those that interact with the communication system. For example, a text
editor is not considered part of the application layer, while a web browser is, since it uses
the HTTP protocol (an application layer protocol) to interact with the network.

Following are the main protocols used in the application layer:

o HTTP: HTTP stands for Hypertext Transfer Protocol. This protocol allows us to access
data over the World Wide Web. It transfers data in the form of plain text, audio and
video. It is known as the Hypertext Transfer Protocol because it is efficient in a hypertext
environment, where there are rapid jumps from one document to another.
o SNMP: SNMP stands for Simple Network Management Protocol. It is a framework used
for managing the devices on the internet by using the TCP/IP protocol suite.
o SMTP: SMTP stands for Simple Mail Transfer Protocol. The TCP/IP protocol that supports
e-mail is known as the Simple Mail Transfer Protocol. This protocol is used to send data
to another e-mail address.
o DNS: DNS stands for Domain Name System. An IP address is used to uniquely identify
the connection of a host to the internet. But people prefer to use names instead of
addresses. Therefore, the system that maps a name to an address is known as the
Domain Name System.
o TELNET: It is an abbreviation for Terminal Network. It establishes the connection
between the local computer and remote computer in such a way that the local terminal
appears to be a terminal at the remote system.
o FTP: FTP stands for File Transfer Protocol. FTP is a standard internet protocol used for
transmitting the files from one computer to another computer.
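As a small concrete example of an application layer wire format, the following Python sketch shows how DNS encodes a domain name in a query question: each label is preceded by a length byte, the name is terminated by a zero byte, and the question ends with a query type (1 = A record) and class (1 = IN). This illustrates only the question section, not a full DNS message.

```python
import struct

def encode_question(name: str) -> bytes:
    """Encode a DNS question for an A record: length-prefixed labels + type/class."""
    encoded = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"                                 # zero byte terminates the name
    return encoded + struct.pack("!HH", 1, 1)   # QTYPE=A, QCLASS=IN

q = encode_question("www.example.com")
assert q[:4] == b"\x03www"                 # label "www" preceded by its length
assert q.endswith(b"\x00\x00\x01\x00\x01") # terminator, QTYPE=1, QCLASS=1
```

The length-prefixed labels are why dots never appear on the wire: the structure of the name is carried explicitly, so the resolver needs no delimiter characters.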
Wi-Fi and 802.11
The 802.11 standard is defined through several specifications of WLANs. It defines an
over-the-air interface between a wireless client and a base station or between two wireless
clients.
There are several specifications in the 802.11 family −
• 802.11 − This pertains to wireless LANs and provides 1- or 2-Mbps transmission in the
2.4-GHz band using either frequency-hopping spread spectrum (FHSS) or direct-sequence
spread spectrum (DSSS).
• 802.11a − This is an extension to 802.11 that pertains to wireless LANs and goes as fast
as 54 Mbps in the 5-GHz band. 802.11a employs the orthogonal frequency division
multiplexing (OFDM) encoding scheme as opposed to either FHSS or DSSS.
• 802.11b − The 802.11 high rate WiFi is an extension to 802.11 that pertains to wireless
LANs and yields a connection as fast as 11 Mbps transmission (with a fallback to 5.5, 2,
and 1 Mbps depending on strength of signal) in the 2.4-GHz band. The 802.11b
specification uses only DSSS. Note that 802.11b was actually an amendment to the
original 802.11 standard added in 1999 to permit wireless functionality to be analogous
to hard-wired Ethernet connections.
• 802.11g − This pertains to wireless LANs and provides 20+ Mbps in the 2.4-GHz band.
Here is the technical comparison between the three major WiFi standards.

Feature                WiFi (802.11b)                    WiFi (802.11a/g)
Primary Application    Wireless LAN                      Wireless LAN
Frequency Band         2.4 GHz ISM                       2.4 GHz ISM (g), 5 GHz U-NII (a)
Channel Bandwidth      25 MHz                            20 MHz
Half/Full Duplex       Half                              Half
Radio Technology       Direct Sequence Spread Spectrum   OFDM (64 channels)
Bandwidth Efficiency   <=0.44 bps/Hz                     <=2.7 bps/Hz
Modulation             QPSK                              BPSK, QPSK, 16-, 64-QAM
FEC                    None                              Convolutional Code
Encryption             Optional - RC4 (AES in 802.11i)   Optional - RC4 (AES in 802.11i)
Mobility               In development                    In development
Mesh                   Vendor Proprietary                Vendor Proprietary
Access Protocol        CSMA/CA                           CSMA/CA

The basic frame format required for all MAC implementations is defined in the IEEE 802.3
standard, though several optional formats are used to extend the protocol's basic capability.
An Ethernet frame starts with the Preamble and SFD, both of which work at the physical layer.
The Ethernet header contains both the Source and Destination MAC addresses, after which the
payload of the frame is present. The last field is the CRC, which is used to detect errors. Now,
let's study each field of the basic frame format.
Ethernet (IEEE 802.3) Frame Format

• PREAMBLE – The Ethernet frame starts with a 7-byte Preamble. This is a pattern of
alternating 0's and 1's which indicates the start of the frame and allows sender and
receiver to establish bit synchronization. Initially, the Preamble (PRE) was introduced to
allow for the loss of a few bits due to signal delays, but today's high-speed Ethernet
doesn't need the Preamble to protect the frame bits.
The Preamble indicates to the receiver that a frame is coming and allows the receiver to
lock onto the data stream before the actual frame begins.
• Start of frame delimiter (SFD) – This is a 1-byte field which is always set to 10101011. The
SFD indicates that the upcoming bits are the start of the frame, beginning with the
destination address. Sometimes the SFD is considered part of the Preamble, which is why
the Preamble is described as 8 bytes in many places. The SFD warns the station or stations
that this is the last chance for synchronization.
• Destination Address – This is a 6-byte field which contains the MAC address of the
machine for which the data is destined.
• Source Address – This is a 6-byte field which contains the MAC address of the source
machine. As the Source Address is always an individual (unicast) address, the least
significant bit of the first byte is always 0.
• Length – Length is a 2-byte field which indicates the length of the frame's data field. This
16-bit field could hold values from 0 to 65535, but the length cannot be larger than 1500
because of Ethernet's limitations.
• Data – This is the place where the actual data is inserted, also known as the Payload.
Both the IP header and data will be inserted here if the Internet Protocol is used over
Ethernet. The maximum data present may be as long as 1500 bytes. If the data length is
less than the minimum length of 46 bytes, then padding 0's are added to meet the
minimum possible length.
• Cyclic Redundancy Check (CRC) – CRC is a 4-byte field. This field contains a 32-bit hash
code of the data, which is generated over the Destination Address, Source Address,
Length and Data fields. If the checksum computed by the destination is not the same as
the checksum value sent, the data received is corrupted.
Note – The size of an Ethernet IEEE 802.3 frame varies from 64 bytes to 1518 bytes, including a
data length of 46 to 1500 bytes.

Brief overview on Extended Ethernet Frame (Ethernet II Frame) :

The standard IEEE 802.3 basic frame format is discussed above in detail. Now let's see the
extended Ethernet frame header, with which we can get a payload even larger than 1500 bytes.

DA [Destination MAC Address] : 6 bytes
SA [Source MAC Address] : 6 bytes
Type [0x8870 (Ethertype)] : 2 bytes
DSAP [802.2 Destination Service Access Point] : 1 byte
SSAP [802.2 Source Service Access Point] : 1 byte
Ctrl [802.2 Control Field] : 1 byte
Data [Protocol Data] : > 46 bytes
FCS [Frame Checksum] : 4 bytes
Although the length field is missing in an Ethernet II frame, the frame length is known by virtue
of the frame being accepted by the network interface.
Bluetooth Stack

Bluetooth has several protocol layers which form the Bluetooth protocol stack.
The controller and host stacks are the two main stacks in Bluetooth: the controller stack
contains the radio interface part, and the host stack deals with the high-level data.
Controller stack functionality includes (but is not limited to): CRC generation and verification,
AES encryption, and preamble, access address and air-protocol framing.
If the controller stack is physically separated, then an HCI (Host Controller Interface) layer
handles the communication between the host stack and the controller stack, mostly over USB
or UART (using AT commands).

You can find two types of Bluetooth solutions on the market:
1. SoC: a single IC runs the application, the host, and the controller.
2. Dual IC over HCI: one IC runs the application and the host, and interfaces via HCI with a
second IC that contains the controller (physical layer).

Bluetooth Protocol Stack. Image courtesy of THE BLUETOOTH REVOLUTION - Michael Trieu

The well-known nRF51822 IC is an example of the SoC type, and the HC-05 is an example of the
dual-IC-over-HCI type.

Finally, several Bluetooth stacks exist: BlueZ for Linux, BlueDroid for Android, and Apple has its
own.
Android System Architecture
The Android software stack generally consists of a Linux kernel and a collection of C/C++
libraries that are exposed through an application framework, which provides services and
management of the applications and runtime.

Linux Kernel

Android was created on the open source kernel of Linux. One main reason for choosing this
kernel was that it provided proven core features on which to develop the Android operating
system. The features of Linux kernel are:

1. Security:

The Linux kernel handles the security between the application and the system.

2. Memory Management:

It efficiently handles the memory management thereby providing the freedom to develop
our apps.

3. Process Management:

It manages processes well, allocating resources to them whenever they need them.

4. Network Stack:

It effectively handles the network communication.

5. Driver Model:

It ensures that applications can work with the device hardware. Hardware manufacturers
can build their drivers into the Linux build.

Libraries:

Running on top of the kernel, the Android framework was developed with various features.
It consists of various C/C++ core libraries along with numerous open source tools. Some of
these are:
1. The Android runtime:

The Android runtime consists of the core Java libraries and ART (the Android RunTime). Older
versions of Android (4.x and earlier) used the Dalvik runtime.

2. OpenGL (graphics library):

This cross-language, cross-platform application program interface (API) is used to produce
2D and 3D computer graphics.

3. WebKit:

This open source web browser engine provides all the functionality to display web content
and to simplify page loading.

4. Media frameworks:

These libraries allow you to play and record audio and video.

5. Secure Socket Layer (SSL):

These libraries are there for Internet security.

Android Runtime:

It is the third section of the architecture. It provides one of the key components, the Dalvik
Virtual Machine, which acts like a Java Virtual Machine specially designed for Android. Android
uses its own custom VM designed to ensure that multiple instances run efficiently on a single
device.
The Dalvik VM uses the device's underlying Linux kernel to handle low-level functionality,
including security, threading and memory management.
Application Framework

The Android team has built on a known set of proven libraries, and all of these are exposed
through Android interfaces. These interfaces wrap up the various libraries and make them
useful for developers, who don't have to build any of the functionality provided by Android.
Some of these interfaces include:

1. Activity Manager:

It manages the activity lifecycle and the activity stack.

2. Telephony Manager:

It provides access to telephony services and related subscriber information, such as phone
numbers.

3. View System:

It builds the user interface by handling the views and layouts.

4. Location manager:

It finds the device’s geographic location.

Android is an open source, Linux-based software stack created for a wide array of devices and
form factors. The following diagram shows the major components of the Android platform.
Figure 1. The Android software stack.
The Linux Kernel

The foundation of the Android platform is the Linux kernel. For example, the Android Runtime
(ART) relies on the Linux kernel for underlying functionalities such as threading and low-level
memory management.

Using a Linux kernel allows Android to take advantage of key security features and allows
device manufacturers to develop hardware drivers for a well-known kernel.

Hardware Abstraction Layer (HAL)

The hardware abstraction layer (HAL) provides standard interfaces that expose device hardware
capabilities to the higher-level Java API framework. The HAL consists of multiple library
modules, each of which implements an interface for a specific type of hardware component,
such as the camera or Bluetooth module. When a framework API makes a call to access device
hardware, the Android system loads the library module for that hardware component.

Android Runtime

For devices running Android version 5.0 (API level 21) or higher, each app runs in its own
process and with its own instance of the Android Runtime (ART). ART is written to run multiple
virtual machines on low-memory devices by executing DEX files, a bytecode format designed
specially for Android that's optimized for minimal memory footprint. Build toolchains, such
as Jack, compile Java sources into DEX bytecode, which can run on the Android platform.

Some of the major features of ART include the following:

• Ahead-of-time (AOT) and just-in-time (JIT) compilation


• Optimized garbage collection (GC)
• On Android 9 (API level 28) and higher, conversion of an app package's Dalvik Executable
format (DEX) files to more compact machine code.
• Better debugging support, including a dedicated sampling profiler, detailed diagnostic
exceptions and crash reporting, and the ability to set watchpoints to monitor specific fields

Prior to Android version 5.0 (API level 21), Dalvik was the Android runtime. If your app runs well
on ART, then it should work on Dalvik as well, but the reverse may not be true.
Android also includes a set of core runtime libraries that provide most of the functionality of
the Java programming language, including some Java 8 language features, that the Java API
framework uses.

Native C/C++ Libraries

Many core Android system components and services, such as ART and HAL, are built from
native code that requires native libraries written in C and C++. The Android platform provides
Java framework APIs to expose the functionality of some of these native libraries to apps. For
example, you can access OpenGL ES through the Android framework’s Java OpenGL API to add
support for drawing and manipulating 2D and 3D graphics in your app.

If you are developing an app that requires C or C++ code, you can use the Android NDK to
access some of these native platform libraries directly from your native code.

Java API Framework

The entire feature-set of the Android OS is available to you through APIs written in the Java
language. These APIs form the building blocks you need to create Android apps by simplifying
the reuse of core, modular system components and services, which include the following:

• A rich and extensible View System you can use to build an app’s UI, including lists, grids, text
boxes, buttons, and even an embeddable web browser
• A Resource Manager, providing access to non-code resources such as localized strings, graphics,
and layout files
• A Notification Manager that enables all apps to display custom alerts in the status bar
• An Activity Manager that manages the lifecycle of apps and provides a common navigation back
stack
• Content Providers that enable apps to access data from other apps, such as the Contacts app,
or to share their own data

Developers have full access to the same framework APIs that Android system apps use.
System Apps

Android comes with a set of core apps for email, SMS messaging, calendars, internet browsing,
contacts, and more. Apps included with the platform have no special status among the apps the
user chooses to install. So a third-party app can become the user's default web browser, SMS
messenger, or even the default keyboard (some exceptions apply, such as the system's Settings
app).

The system apps function both as apps for users and to provide key capabilities that developers
can access from their own app. For example, if your app would like to deliver an SMS message,
you don't need to build that functionality yourself—you can instead invoke whichever SMS app
is already installed to deliver a message to the recipient you specify.
