
GETTING STARTED

CONNECTING YOUR CORE


EXAMPLES
CORE CODE (FIRMWARE)
CLOUD CODE (API)
WEB IDE (BUILD)
HARDWARE DATASHEET
SHIELDS AND KITS
TINKER
COMMAND LINE
TROUBLESHOOTING

GETTING STARTED
What's in the box
Step 1: Power the Core
Step 2: Install the App
Step 3: Connect your Core to the Cloud!
NOW DO THINGS!
Blink an LED with Tinker
Put Code on Your Core
WAIT, WHAT IS THIS THING?
Buttons
LEDs
Pins

GETTING STARTED
What's in the box

Congratulations on being the owner of a brand new Spark Core! Go ahead, open the box, and
let's talk about what you see. Your box should include:
(1) Spark Core: The reason you bought it!
(1) Breadboard: A breadboard makes it easy to wire components to the Core without solder.
See Wikipedia for more information.
(1) USB cable: The included USB cable is great for powering the Spark Core and more
technical things we'll cover later.
Step 1: Power the Core

Plug the included USB cable into the Spark Core and your computer. The Core should start
blinking blue. Does your Core have a u.FL connector? Make sure to connect an antenna to it now!
Not blinking blue?
Maybe it's already been configured. Hold down the MODE button until it starts blinking
blue, then continue.

Step 2: Install the App

Download the Spark app on your phone. It's called "Spark Core", or you can click one of these links:
iPhone

Android

Now use the app to sign up for an account!


Step 3: Connect your Core to the Cloud!

Make sure your phone is connected to the WiFi you want to use (it'll show up in the SSID blank
on the app), then enter your password and click CONNECT!
This may take a little while, but don't worry. It should go through the following colors:
Blinking blue: Listening for Wi-Fi credentials
Solid blue: Getting Wi-Fi info from app
Blinking green: Connecting to the Wi-Fi network
Blinking cyan: Connecting to the Spark Cloud
Blinking magenta: Updating to the newest firmware
Breathing cyan: Connected!

See an animation

Did your phone not find any Cores?


Is it blinking blue?
Give it another go.
Is it blinking green and not getting to cyan?
Try it again by holding the MODE button on the core until it begins flashing blue, then
double-check your network name and password.
Is it now breathing cyan, but the app didn't find any Cores?
Uh oh. Your Core's on the network, but the app took too long to find it. We're going to
claim your Core manually.
Something else altogether?
Give the Connecting Your Core page a read-through and if you're still stuck, search the
community.

NOW DO THINGS!
Blink an LED with Tinker

The Spark app should now be on the Tinker screen, as shown to the right.
Tap D7 then digitalWrite in the popup. Now when you tap the D7 circle the tiny blue LED
should turn off or on! This is because the LED shares a connection to the Core with the pin
labeled D7.
You could hook your own LED up to the Core on another pin and do the same thing, use
digitalRead to tell that a switch has been pressed, or analogRead to see the position of a
knob.
You can always get Tinker back on the Core by following these instructions.
Put Code on Your Core

Now let's control the blue LED using code instead of Tinker. If you click here or on Build on
the main page, you'll be in the IDE, where we can write code and upload it to the Core. Log in
with the same email and password you used to sign up in the app, and we're off!
Click "BLINK AN LED" under the Example apps title. This code turns D7 (labeled led2) on and
off, once a second. Click the lightning bolt icon in the upper left and it will upload, or "flash",
this code onto your Core. You'll see a series of status colors on the main LED, and then the
little blue LED blinking. Magic!
You can find more info on the Web IDE (Build) page.

WAIT, WHAT IS THIS THING?


The Spark Core is a Wi-Fi development kit for internet-connected hardware. It is, in essence,
the "brains" of a connected hardware product or project.
The Core has on board a microcontroller, which is a small, low-cost, low-power computer that
can run a single application. The microcontroller runs the show; it runs your software and
tells the rest of the Core what to do. It doesn't have an Operating System the way that your
computer does; it just runs a single application (often called firmware or an embedded
application), which can be simple, just a few lines of code, or very complex, depending on
what you want to do.
Microcontrollers are particularly good at controlling things; hence the name. They have a set
of "pins" (little spider leg type things sticking off the chip) that are called GPIO (General
Purpose Input and Output) pins, or I/O pins. They can be hooked to sensors or buttons to
listen to the world, or they can be hooked to lights and motors to act upon the world. The
microcontroller's pins are directly connected to the headers on the sides of the Core
so you can easily access them; specifically, the pins labeled D0 to D7 and A0 to A7 are hooked
directly to the microcontroller's GPIO pins.
The microcontroller can also communicate with other chips using common protocols like
Serial (also called UART), SPI, or I2C (also called Wire). You can then make the Core more
powerful by connecting it to special-purpose chips like motor drivers or shift registers.
Sometimes we'll wrap up these chips on a Shield, an accessory to the Core that makes it easy
to extend the Core.
The Core also has a Wi-Fi module, which connects it to your local Wi-Fi network in the same
way that your computer or smartphone might connect to a Wi-Fi network. The Core is
programmed to stay connected to the internet by default, so long as it can find and connect
to a network.
When the Core connects to the internet, it establishes a connection to the Spark Cloud. By
connecting to the Cloud, the Core becomes accessible from anywhere through a simple REST
API. This API is designed to make it very easy to interface with the Core through a web app or
mobile app in a secure, private way, so that only you and those you trust can access the Core.
Buttons
There are two buttons on the Core: the RESET button (when holding the Core with its USB port to the top, it's the button on the right) and the MODE button (on the left).

The RESET button will put the Core in a hard reset, effectively depowering and repowering
the microcontroller. This is a good way to restart the application that you've downloaded onto
the Core.
The MODE button serves three functions:
Hold down the MODE button for three seconds to put the Core into Smart Config mode to
connect it to your local Wi-Fi network. The LED should start flashing blue.
Hold down the MODE button for ten seconds to clear the Core's memory of Wi-Fi networks.
Hold down the MODE button, tap on the RESET button and wait for three seconds to enter
Bootloader mode, where you can reprogram the Core over USB or JTAG. Release the MODE
button when you see the LED flashing yellow. If you do this by accident, simply hit the RESET
button to leave Bootloader mode.
Hold down the MODE button, tap once on the RESET button and wait for ten seconds to do
a Factory Reset, where the Core is reprogrammed with the software that was installed on
the Core in the factory (the Tinker application). The LED should turn white for three
seconds and begin flashing quickly; when the LED switches to another color the Core has
been reset. This is useful if you encounter bugs with your firmware, or if you just want to
get back to Tinker.
LEDs
There are two LEDs on the Core. The big fat one in the middle is a full-color RGB LED that
shows you the status of the Core's internet connection. The other small blue LED is the user
LED; it's hooked up to D7, so when you turn the D7 pin HIGH or LOW, it turns on and off,
respectively.
The RGB LED could show the following states:
Flashing blue: Listening mode, waiting for network information.
Solid blue: Smart Config complete, network information found.
Flashing green: Connecting to local Wi-Fi network.
Flashing cyan: Connecting to Spark Cloud.
High-speed flashing cyan: Spark Cloud handshake.
Slow breathing cyan: Successfully connected to Spark Cloud.
Flashing yellow: Bootloader mode, waiting for new code via USB or JTAG.
White pulse: Start-up, the Core was powered on or reset.
Flashing white: Factory Reset initiated.
Solid white: Factory Reset complete; rebooting.
Flashing magenta: Updating firmware.
Solid magenta: May have lost connection to the Spark Cloud. Pressing the Reset (RST)
button will attempt the update again.
The RGB LED can also let you know if there were errors in establishing an internet connection.
A red LED means an error has occurred. These errors might include:

Two red flashes: Connection failure due to bad internet connection. Check your network
connection.
Three red flashes: The Cloud is inaccessible, but the internet connection is fine. Check our
Twitter feed to see if there have been any reported outages; if not, visit our support page
for help.
Four red flashes: The Cloud was reached but the secure handshake failed. Visit our
support page for help.
Flashing yellow/red: Bad credentials for the Spark Cloud. Contact the Spark team
(hello@spark.io).
Pins
The Core has 24 pins that you can connect a circuit to. These pins are:
VIN: Connect an unregulated power source here with a voltage between 3.6V and 6V to
power the Core. If you're powering the Core over USB, this pin should not be used.
3V3: This pin will output a regulated 3.3V power rail that can be used to power any
components outside the Core. (Also, if you have your own 3.3V regulated power source, you
can plug it in here to power the Core).
3V3*: This is a separate low-noise regulated 3.3V power rail designed for analog circuitry
that may be susceptible to noise from the digital components. If you're using any sensitive
analog sensors, power them from 3V3* instead of from 3V3.
!RST: You can reset the Core (same as pressing the RESET button) by connecting this pin to
GND.
GND: These pins are your ground pins.
D0 to D7: These are the bread and butter of the Spark Core: 8 GPIO (General Purpose
Input/Output) pins. They're labeled "D" because they are "Digital" pins, meaning they can't
read the values of analog sensors. Some of these pins have additional peripherals (SPI,
JTAG, etc.) available; keep reading to find out more.
A0 to A7: These pins are 8 more GPIO pins, to bring the total count up to 16. These pins are
just like D0 to D7, but they are "Analog" pins, which means they can read the values of
analog sensors (technically speaking they have an ADC peripheral). As with the Digital pins,
some of these pins have additional peripherals available.
TX and RX: These pins are for communicating over Serial/UART. TX represents the
transmitting pin, and RX represents the receiving pin.
PWM Pins
When you want to use the analogWrite() function on the Core, for instance to smoothly dim
the brightness of LEDs, you need to use pins that have a timer peripheral. People often call
these PWM pins, since what they do is called Pulse Width Modulation. The Core has 8 PWM
pins: A0, A1, A4, A5, A6, A7, D0 and D1.


CONNECTING YOUR CORE


Listening Mode
Smart Config with the Spark app
Connect over USB
CLAIMING YOUR CORE
APPENDIX
DFU Mode (Device Firmware Upgrade)
Factory Reset
Smart Config with the TI app
Coming soon: Hard-code credentials

CONNECTING YOUR CORE

The easiest way to connect the Spark Core to Wi-Fi is using the Spark mobile app for iPhone or
Android. But in case that's not working for you or you don't have an iOS/Android phone, there
are other methods as well.
For all of the following methods, the Spark Core must be in Listening Mode, which you'll know
by its flashing blue LED.

LISTENING MODE
The Core boots into listening mode by default, so if your Core is brand new, it should go
straight into listening mode. Otherwise, hold the MODE button for three seconds. The RGB
LED will be flashing blue in this mode. To completely clear all stored Wi-Fi credentials,
continue to hold the MODE button for 10 seconds until the RGB LED flashes blue quickly,
signaling that all profiles have been deleted. The RGB LED should now be flashing blue again.

SMART CONFIG WITH THE SPARK APP

Once you've downloaded the Spark Core app from the App Store or Google Play, you should
create an account. Afterwards, you'll be asked to connect your Core using a process called
Smart Config. If your Core has a u.FL connector, you must connect an external antenna before
initiating Smart Config.
NOTE: Your phone must be connected to the Wi-Fi network that you want to connect the Core
to. Wi-Fi hotspots generated from the phone you are running this app on will typically yield an
error claiming there is no Wi-Fi available; in that case, Connect over USB and enter your
hotspot credentials manually.
When connected to Wi-Fi, the app will automatically fill the SSID field with the name of the
network that your phone is connected to. Enter your Wi-Fi password and hit connect.
NOTE: In places like a conference or workshop where multiple Cores are connected, Smart
Config is not preferred. Claiming a Core over USB prevents accidentally claiming someone
else's Core.

Smart Config can take up to a minute, so be patient. The closer your phone is to your Spark
Core, the faster it will connect. Once the Core hears the signal, it will go through the following
sequence of lights:
Solid blue: Credentials captured
Flashing green: Connecting to Wi-Fi network
Flashing cyan: Connecting to Spark Cloud
Breathing cyan: Connected to Spark Cloud

Once the Spark Core has connected, your phone will "claim" the Core and attach it to your
account. Then you'll get to name your Core. If you're uncertain, you can confirm that the claim
process was successful by logging into the Spark Web IDE and clicking the "Cores" icon at the
bottom of the page. Is your Core listed? Great! The world is perfect.
NOTE: The Core MUST be online (breathing cyan) in order for the claiming process to work. If
the Spark Core has been claimed by someone else, the app won't recognize it. If you need to
transfer a Spark Core to another account, email us at hello@spark.io.

If you are connecting multiple Cores, you'll go through this naming process for each Core.
You'll know which one is which by the rainbow signal.

Once you've finished naming your Cores, you can control them with Tinker! Try digitalWrite on
D7 to turn on the user LED.
For more information on how the seemingly magical Smart Config mode works, check out this
community thread by GHawkins.

CONNECT OVER USB


You can also connect the Spark Core to your Wi-Fi network over USB by communicating
through Serial. NOTE: This only works when the Spark Core is in Listening Mode (i.e. the RGB
LED is blinking blue).
First, you'll need to download a serial terminal application.
For Windows users, we recommend PuTTY. You'll also need to install the Windows driver:
Windows driver for Spark Core
For Mac users, either CoolTerm or screen work; CoolTerm provides a nice GUI.
For Linux command line usage, GNU Screen works great. (On OS X, the command line
invocation might look something like screen /dev/cu.usbmodem1411 9600 . On Ubuntu, it looks
something like screen /dev/ttyACM0 9600 . Device location may vary; poke around in the /dev
directory if you don't find it immediately.)
How-to
Plug your Spark Core into your computer over USB. When the Spark Core is in Listening Mode,
open a serial port over USB using the standard settings, which should be:
Baudrate: 9600
Data Bits: 8
Parity: none
Stop Bits: 1
Once you've opened a serial connection, you have two commands at your disposal by hitting
either w or i on the keyboard. Here's what they do:
w: Set up your Wi-Fi SSID and password
i: ("i" as in identify) Read out the Spark Core ID

NOTE: If you connect your Core over USB for the first time, you will also need to manually claim
your Core to connect it with your account. Please see the section below on claiming your Core
for more details.

CLAIMING YOUR CORE


Once your Core is connected, it needs to be claimed in order to be associated with your
account. This is what lets you control your Core and keeps anyone else from doing so.
If you use the mobile app to set up your Core, it should claim it automatically. However, if you
connect your Core over USB, or if the claiming process is unsuccessful, you can claim it
manually.
The easiest way to manually claim a Core over USB is to use the Spark Command Line
Interface. Once you have this installed, you can simply type spark setup and follow the
instructions.
Alternatively, if you have trouble installing the command line tool, you can get the Core's ID
over serial and claim it via the build site. You can do this by opening a serial connection to
the Core and pressing the i key (see the above instructions for connecting over USB). It should
show you a number like this:

# Example Core ID
55ff68064989495329092587
Then open up Spark Build and click the 'Cores' icon. Click the button that says 'Add a Core',
and enter your ID in the text box.

APPENDIX
DFU MODE (DEVICE FIRMWARE UPGRADE)
If you wish to program a Core with custom firmware via USB, you'll need to use this
mode. This mode triggers the on-board bootloader that accepts firmware binary files via the
dfu-util utility.
Procedure:
1. Hold down BOTH buttons
2. Release only the RST button, while holding down the MODE button.
3. Wait for the LED to start flashing yellow

4. Release the MODE button


The Core is now in DFU mode.

FACTORY RESET
A factory reset restores the firmware on the Core to the default Tinker app and clears all your
Wi-Fi credentials.
Procedure:
The procedure is the same as the one described above (DFU Mode), but in this case you should
continue holding down the MODE button until you see the Core change from flashing yellow
to flashing white. Then release the button. The Core should reboot once the factory reset is
complete.
1. Hold down BOTH buttons
2. Release only the RST button, while holding down the MODE button.
3. Wait for the LED to start flashing yellow (continue to hold the MODE button)
4. The LED will turn solid white (continue to hold the MODE button)
5. Finally, the LED will blink white rapidly
6. Release the MODE button
Note: The video here is a continuation of the video from above (DFU Mode).

SMART CONFIG WITH THE TI APP


Smart Config with the Texas Instruments CC3000 app is similar to the process above,
although you don't need an account with Spark, and TI also has a Java applet that can work
from a Mac, Windows, or Linux computer.
Follow the instructions on Texas Instruments' website:
CC3000 Smart Config @ Texas Instruments
The only thing that's different is that you'll need to activate the optional AES key and type
sparkdevices2013 .
NOTE: TI's Android app is not available in Google Play; you'll have to download it off of their
website and side-load the apk yourself.

COMING SOON: HARD-CODE CREDENTIALS


Currently there is not a mechanism to hard-code your SSID and password into the firmware
for the Spark Core. We're working on it!


ANNOTATED EXAMPLES
BLINK AN LED
CONTROL LEDS OVER THE 'NET
MEASURING THE TEMPERATURE
LOCAL COMMUNICATION
TEXTING THE CORE
AN INTERNET BUTTON

ANNOTATED EXAMPLES
Here you will find a bunch of examples to get you started with your new Spark Core!

BLINK AN LED

Blinking an LED is the "Hello World" example of the microcontroller world. It's a nice way to
warm up and start your journey into the land of embedded hardware.
For this example, you will need a Spark Core (duh!), a Breadboard, an LED, a Resistor (we will
soon find out a suitable value) and a USB cable.
Connect everything together as shown in the picture. The LED is connected to pin D0 of the
Core. The positive (longer) pin of the LED is connected to D0 and its negative (shorter) pin is
connected to ground via a resistor.

But wait, what's the value of the resistor again?


Here's how we find that out:
According to Ohm's Law: Voltage = Current x Resistance
Therefore, Resistance = Voltage / Current
In our case, the output voltage of the Core is 3.3V, but the LED (typically) has a forward voltage
drop of around 2.0V. So the voltage across the resistor would be:
3.3V - 2.0V = 1.3V
The current required to light up an LED varies anywhere from 2mA to 20mA; the more the
current, the brighter the LED. But generally it's a good idea to drive the LED toward the lower
limit to prolong its life span. We will choose a drive current of 5mA.
Hence, Resistance = 1.3V / 5mA = 260 Ohms
NOTE: Since there is so much variation in the values of the forward voltage drop of LEDs
depending upon type, size, color, manufacturer, etc., you could successfully use a resistor
value anywhere from 220 Ohms to 1K Ohms.
In the picture above, we used a 1K resistor (Brown Black Red).

Now on to the actual program:

// Program to blink an LED connected to pin D0
// of the Spark Core.

// We name pin D0 as led
int led = D0;

// This routine runs only once upon reset
void setup()
{
  // Initialize D0 pin as output
  pinMode(led, OUTPUT);
}

// This routine loops forever
void loop()
{
  digitalWrite(led, HIGH);   // Turn ON the LED
  delay(1000);               // Wait for 1000mS = 1 second
  digitalWrite(led, LOW);    // Turn OFF the LED
  delay(1000);               // Wait for 1 second
}

CONTROL LEDS OVER THE 'NET

Now that we know how to blink an LED, how about we control it over the Internet? This is
where the fun begins.
Let's hook up two LEDs this time.
Here is the algorithm:
Set up the pins as outputs that have LEDs connected to them
Create and register a Spark function (this gets called automagically when you make an API
request to it)
Parse the incoming command and take appropriate actions

// -----------------------------------
// Controlling LEDs over the Internet
// -----------------------------------

// name the pins
int led1 = D0;
int led2 = D1;

// This routine runs only once upon reset
void setup()
{
  // Register our Spark function here
  Spark.function("led", ledControl);

  // Configure the pins to be outputs
  pinMode(led1, OUTPUT);
  pinMode(led2, OUTPUT);

  // Initialize both the LEDs to be OFF
  digitalWrite(led1, LOW);
  digitalWrite(led2, LOW);
}

// This routine loops forever
void loop()
{
  // Nothing to do here
}

// This function gets called whenever there is a matching API request
// the command string format is l<led number>,<state>
// for example: l1,HIGH or l1,LOW
//              l2,HIGH or l2,LOW
int ledControl(String command)
{
  int state = 0;

  // find out the pin number and convert the ascii to integer
  int pinNumber = (command.charAt(1) - '0') - 1;

  // Sanity check to see if the pin numbers are within limits
  if (pinNumber < 0 || pinNumber > 1) return -1;

  // find out the state of the led
  if (command.substring(3,7) == "HIGH") state = 1;
  else if (command.substring(3,6) == "LOW") state = 0;
  else return -1;

  // write to the appropriate pin
  digitalWrite(pinNumber, state);
  return 1;
}
The API request will look something like this:

POST /v1/devices/{DEVICE_ID}/led
# EXAMPLE REQUEST IN TERMINAL
# Core ID is 0123456789abcdef
# Your access token is 123412341234

curl https://api.spark.io/v1/devices/0123456789abcdef/led \
-d access_token=123412341234 \
-d params=l1,HIGH
Note that the API endpoint is 'led', not 'ledControl'. This is because the endpoint is defined by
the first argument of Spark.function(), which is a string of characters, rather than the second
argument, which is a function.
To better understand the concept of making API calls to your Core over the cloud, check out
the Cloud API reference.

MEASURING THE TEMPERATURE

We have now learned how to send custom commands to the Core and control the hardware.
But how about reading data back from the Core?
In this example, we will hook up a temperature sensor to the Core and read the values over
the internet with a web browser.

We have used a widely available analog temperature sensor called TMP36 from Analog
Devices. You can download the datasheet here.
Notice how we are powering the sensor from the 3V3* pin instead of the regular 3V3. This is
because the 3V3* pin gives out a clean, LC-filtered voltage, ideal for analog applications like
these. If the readings you get are noisy or inconsistent, add a 0.01uF (10nF) ceramic capacitor
between the analog input pin (in this case, A7) and GND as shown in the setup. Ideally, the
sensor should be placed away from the Core so that the heat dissipated by the Core does not
affect the temperature readings.

// ------------------
// Read temperature
// ------------------

// Create a variable that will store the temperature value
int temperature = 0;

void setup()
{
  // Register a Spark variable here
  Spark.variable("temperature", &temperature, INT);

  // Connect the temperature sensor to A7 and configure it
  // to be an input
  pinMode(A7, INPUT);
}

void loop()
{
  // Keep reading the temperature so when we make an API
  // call to read its value, we have the latest one
  temperature = analogRead(A7);
}
The returned value from the Core is going to be in the range from 0 to 4095. You can easily
convert this value to an actual temperature reading by using the following formula:

voltage = (sensor reading x 3.3) / 4095
Temperature (in Celsius) = (voltage - 0.5) x 100
The API request will look something like this:

GET /v1/devices/{DEVICE_ID}/temperature
# EXAMPLE REQUEST IN TERMINAL
# Core ID is 0123456789abcdef
# Your access token is 123412341234
curl -G https://api.spark.io/v1/devices/0123456789abcdef/temperature \
-d access_token=123412341234

LOCAL COMMUNICATION
Now let's imagine you want to control your Core locally, so you build a simple server app to
which the Core will directly connect. One puzzle to solve is that you don't know in advance the
IP address of your Core or of the laptop that will run the server. How can the Core and the
server discover each other?
In this example, we will register a Spark function to pass the server IP address to the Core.
Once we've established the local connection, we'll be able to control the Core without the data
going through the Spark Cloud.

TCPClient client;
First, we construct the client that will connect to our local server.

void ipArrayFromString(byte ipArray[], String ipString) {


int dot1 = ipString.indexOf('.');
ipArray[0] = ipString.substring(0, dot1).toInt();

int dot2 = ipString.indexOf('.', dot1 + 1);

ipArray[1] = ipString.substring(dot1 + 1, dot2).toInt();


dot1 = ipString.indexOf('.', dot2 + 1);
ipArray[2] = ipString.substring(dot2 + 1, dot1).toInt();
ipArray[3] = ipString.substring(dot1 + 1).toInt();
}
Then we need a function for translating the IP address String into the array of four bytes
needed by the TCP client.
We work our way progressively through the string, saving the positions of the dots and the
numeric substrings between them.

int connectToMyServer(String ip) {


byte serverAddress[4];

ipArrayFromString(serverAddress, ip);

if (client.connect(serverAddress, 9000)) {
return 1; // successfully connected
} else {
return -1; // failed to connect
}
}
Here's the Spark function we're going to register. Like all Spark functions, it takes a String
parameter and returns an int. We allocate an array of 4 bytes for the IP address, then call
ipArrayFromString() to convert the String into an array.
After that, we simply call client.connect() with the newly received address! Super simple!

void setup() {
Spark.function("connect", connectToMyServer);
for (int pin = D0; pin <= D7; ++pin) {
pinMode(pin, OUTPUT);

}
}
In setup() we only have two jobs:
Register the Spark function
Set D0 to D7 as output pins

void loop() {
if (client.connected()) {
if (client.available()) {

char pin = client.read() - '0' + D0;


char level = client.read();
if ('h' == level) {
digitalWrite(pin, HIGH);
} else {
digitalWrite(pin, LOW);
}
}
}
}
In loop() we first check whether the client is connected to the server. If not, we don't do
anything.
If the client is connected, then we ask whether any commands have been received over local
communication. If not, again, we don't do anything.
However, if we are connected and have received a command then we use the command to
perform a digitalWrite() .
Example server and firmware on github

TEXTING THE CORE


coming soon!

AN INTERNET BUTTON
coming soon!


SPARK CORE FIRMWARE


CLOUD FUNCTIONS
Spark.variable()
Spark.function()
Spark.publish()
Spark.subscribe()
Spark.connect()
Spark.disconnect()
Spark.connected()
Spark.process()
Spark.deviceID()
Spark.sleep()
Spark.syncTime()
WIFI
WiFi.on()
WiFi.off()
WiFi.connect()
WiFi.disconnect()
WiFi.connecting()
WiFi.ready()
WiFi.listen()
WiFi.listening()
WiFi.setCredentials()
WiFi.clearCredentials()
WiFi.hasCredentials()

WiFi.macAddress()
WiFi.SSID()
WiFi.RSSI()
WiFi.ping()
WiFi.localIP()
WiFi.subnetMask()
WiFi.gatewayIP()
INPUT/OUTPUT
Setup
pinMode()
I/O
digitalWrite()
digitalRead()
analogWrite()
analogRead()
COMMUNICATION
Serial
begin()
end()
available()
peek()
write()
read()
print()
println()
flush()
SPI
begin()
end()
setBitOrder()
setClockDivider()
setDataMode()
transfer()
Wire
begin()
requestFrom()
beginTransmission()
endTransmission()
write()
available()
read()
onReceive()
onRequest()
IPAddress
IPAddress
TCPServer
TCPServer

begin()
available()
write()
print()
println()
TCPClient
TCPClient
connected()
connect()
write()
print()
println()
available()
read()
flush()
stop()
UDP
begin()
available()
beginPacket()
endPacket()
write()
parsePacket()
read()
stop()
remoteIP()
remotePort()
LIBRARIES
Servo
attach()
write()
writeMicroseconds()
read()
attached()
detach()
RGB
control(user_control)
controlled()
color(red, green, blue)
brightness(val)
Time
hour()
hourFormat12()
isAM()
isPM()
minute()
second()
day()

weekday()
month()
year()
now()
zone()
setTime()
timeStr()
OTHER FUNCTIONS
Time
millis()
micros()
delay()
delayMicroseconds()
Interrupts
attachInterrupt()
detachInterrupt()
interrupts()
noInterrupts()
Math
min()
max()
abs()
constrain()
map()
pow()
sqrt()
EEPROM
read()
write()
ADVANCED: SYSTEM MODES
Automatic mode
Semi-automatic mode
Manual mode
LANGUAGE SYNTAX
Structure
setup()
loop()
Control structures
if
Comparison Operators
if...else
for
switch case
while
do... while
break

continue
return
goto
Further syntax
; (semicolon)
{} (curly braces)
// (single line comment)
/* */ (multi-line comment)
#define
#include
Arithmetic operators
= (assignment operator)
+ - * / (addition subtraction multiplication division)
% (modulo)
Boolean operators
&& (and)
|| (or)
! (not)
Bitwise operators
& (bitwise and)
| (bitwise or)
^ (bitwise xor)
~ (bitwise not)
<< (bitwise left shift), >> (bitwise right shift)
Compound operators
++ (increment), -- (decrement)
compound arithmetic
&= (compound bitwise and)
|= (compound bitwise or)
String Class
String()
charAt()
compareTo()
concat()
endsWith()
equals()
equalsIgnoreCase()
getBytes()
indexOf()
lastIndexOf()
length()
replace()
reserve()
setCharAt()
startsWith()
substring()
toCharArray()
toInt()
toLowerCase()

toUpperCase()
trim()
VARIABLES
Constants
HIGH | LOW
INPUT, OUTPUT, INPUT_PULLUP, INPUT_PULLDOWN
true | false
Data Types
void
boolean
char
unsigned char
byte
int
unsigned int
word
long
unsigned long
short
float
double
string - char array
String - object
array

SPARK CORE FIRMWARE


CLOUD FUNCTIONS
Spark.variable()
Expose a variable through the Spark Cloud so that it can be called with GET

/v1/devices/{DEVICE_ID}/{VARIABLE} .

// EXAMPLE USAGE
int analogvalue = 0;
double tempC = 0;
char *message = "my name is spark";

void setup()
{
  // variable name max length is 12 characters long
  Spark.variable("analogvalue", &analogvalue, INT);
  Spark.variable("temp", &tempC, DOUBLE);
  Spark.variable("mess", message, STRING);
  pinMode(A0, INPUT);
}

void loop()
{
  // Read the analog value of the sensor (TMP36)
  analogvalue = analogRead(A0);

  // Convert the reading into degrees Celsius
  tempC = (((analogvalue * 3.3) / 4095) - 0.5) * 100;
  delay(200);
}

Currently, up to 10 Spark variables may be defined and each variable name is limited to a
max of 12 characters.
There are three supported data types:

INT
DOUBLE
STRING (maximum string size is 622 bytes)

# EXAMPLE REQUEST IN TERMINAL


# Core ID is 0123456789abcdef
# Your access token is 123412341234
curl "https://api.spark.io/v1/devices/0123456789abcdef/analogvalue?access_token=123412341234"
curl "https://api.spark.io/v1/devices/0123456789abcdef/temp?access_token=123412341234"
curl "https://api.spark.io/v1/devices/0123456789abcdef/mess?access_token=123412341234"
# In return you'll get something like this:
960
27.44322344322344
my name is spark

Spark.function()
Expose a function through the Spark Cloud so that it can be called with POST

/v1/devices/{DEVICE_ID}/{FUNCTION} .

// SYNTAX TO REGISTER A SPARK FUNCTION
Spark.function("funcKey", funcName);
//                ^
//                |
//  (max of 12 characters long)

Currently the application supports the creation of up to 4 different Spark functions.


In order to register a Spark function, the user provides the funcKey , which is the string name
used to make a POST request and a funcName , which is the actual name of the function that
gets called in the Spark app. The Spark function can return any integer; -1 is commonly used
for a failed function call.
The length of the funcKey is limited to a max of 12 characters.
A Spark function is set up to take one argument of the String datatype. This argument length
is limited to a max of 64 characters.

// EXAMPLE USAGE
int brewCoffee(String command);

void setup()
{

// register the Spark function


Spark.function("brew", brewCoffee);
}

void loop()
{

// this loops forever


}
// this function automagically gets called upon a matching POST request
int brewCoffee(String command)
{
// look for the matching argument "coffee" <-- max of 64 characters long
if(command == "coffee")
{
// do something here
activateWaterHeater();
activateWaterPump();
return 1;
}
else return -1;
}

COMPLEMENTARY API CALL


POST /v1/devices/{DEVICE_ID}/{FUNCTION}
# EXAMPLE REQUEST
curl https://api.spark.io/v1/devices/0123456789abcdef/brew \

-d access_token=123412341234 \
-d "args=coffee"
The API request will be routed to the Spark Core and will run your brew function. The
response will have a return_value key containing the integer returned by brew.
Spark.publish()
Publish an event through the Spark Cloud that will be forwarded to all registered callbacks,
subscribed streams of Server-Sent Events, and Cores listening via Spark.subscribe() .
This feature allows the Core to generate an event based on a condition. For example, you
could connect a motion sensor to the Core and have the Core generate an event whenever
motion is detected.
Spark events have the following properties:
name (1–63 ASCII characters)
public/private (default public)
ttl (time to live, 0–16777215 seconds, default 60) !! NOTE: The user-specified ttl value is not
yet implemented, so changing this property will not currently have any impact.
optional data (up to 63 bytes)
Anyone may subscribe to public events; think of them like tweets. Only the owner of the Core
will be able to subscribe to private events.
A Core may not publish events beginning with a case-insensitive match for "spark". Such
events are reserved for officially curated data originating from the Spark Cloud.
For the time being there exists no way to access a previously published but TTL-unexpired
event.
Publish a public event with the given name, no data, and the default TTL of 60 seconds.

// SYNTAX
Spark.publish(const char *eventName);
Spark.publish(String eventName);
// EXAMPLE USAGE
Spark.publish("motion-detected");
Publish a public event with the given name and data, with the default TTL of 60 seconds.

// SYNTAX
Spark.publish(const char *eventName, const char *data);
Spark.publish(String eventName, String data);

// EXAMPLE USAGE
Spark.publish("temperature", "19 F");
Publish a public event with the given name, data, and TTL.

// SYNTAX
Spark.publish(const char *eventName, const char *data, int ttl);
Spark.publish(String eventName, String data, int ttl);
// EXAMPLE USAGE
Spark.publish("lake-depth/1", "28m", 21600);
Publish a private event with the given name, data, and TTL. In order to publish a private
event, you must pass all four parameters.

// SYNTAX
Spark.publish(const char *eventName, const char *data, int ttl, PRIVATE);
Spark.publish(String eventName, String data, int ttl, PRIVATE);
// EXAMPLE USAGE
Spark.publish("front-door-unlocked", NULL, 60, PRIVATE);

COMPLEMENTARY API CALL


GET /v1/events/{EVENT_NAME}
# EXAMPLE REQUEST
curl -H "Authorization: Bearer {ACCESS_TOKEN_GOES_HERE}" \
https://api.spark.io/v1/events/motion-detected
# Will return a stream that echoes text when your event is published
event: motion-detected
data: {"data":"23:23:44","ttl":"60","published_at":"2014-05-28T19:20:34.638Z","coreid"

Spark.subscribe()
Subscribe to events published by Cores.
This allows Cores to talk to each other very easily. For example, one Core could publish events
when a motion sensor is triggered and another could subscribe to these events and respond
by sounding an alarm.

int i = 0;

void myHandler(const char *event, const char *data)
{
  i++;
  Serial.print(i);
  Serial.print(event);
  Serial.print(", data: ");
  if (data)
    Serial.println(data);
  else
    Serial.println("NULL");
}

void setup()
{
  Spark.subscribe("temperature", myHandler);
  Serial.begin(9600);
}

To use Spark.subscribe() , define a handler function and register it in setup() .


You can listen to events published only by your own Cores by adding a MY_DEVICES constant.

// only events from my Cores


Spark.subscribe("the_event_prefix", theHandler, MY_DEVICES);
In the near future, you'll also be able to subscribe to events from a single Core by specifying
the Core's ID.

// Subscribe to events published from one Core


// COMING SOON!
Spark.subscribe("motion/front-door", motionHandler, "55ff70064989495339432587");

A subscription works like a prefix filter. If you subscribe to "foo", you will receive any event
whose name begins with "foo", including "foo", "fool", "foobar", and "food/indian/sweet-curry-beans".
Received events will be passed to a handler function similar to Spark.function() . A
subscription handler (like myHandler above) must return void and take two arguments, both
of which are C strings ( const char * ).
The first argument is the full name of the published event.
The second argument (which may be NULL) is any data that came along with the event.

Spark.subscribe() returns a bool indicating success.

NOTE: A Core can register up to 4 event handlers. This means you can call Spark.subscribe()
a maximum of 4 times; after that it will return false .
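Since Spark.subscribe() returns a bool, a sketch can check for a failed registration. The handler name below is illustrative:

// EXAMPLE USAGE (a sketch; the handler name is illustrative)
void tempHandler(const char *event, const char *data)
{
  // ... react to the event here ...
}

void setup()
{
  Serial.begin(9600);
  bool ok = Spark.subscribe("temperature", tempHandler);
  if (!ok)
  {
    // all 4 handler slots were already taken
    Serial.println("Subscription failed");
  }
}

void loop() {}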
Spark.connect()

Spark.connect() connects the Spark Core to the Cloud. This will automatically activate the
Wi-Fi module and attempt to connect to a Wi-Fi network if the Core is not already connected
to a network.

void setup() {}
void loop() {
if (Spark.connected() == false) {
Spark.connect();
}

}
After you call Spark.connect() , your loop will not be called again until the Core finishes
connecting to the Cloud. Typically, you can expect a delay of approximately one second.
In most cases, you do not need to call Spark.connect() ; it is called automatically when the
Core turns on. Typically you only need to call Spark.connect() after disconnecting with

Spark.disconnect() or when you change the system mode.


Spark.disconnect()

Spark.disconnect() disconnects the Spark Core from the Spark Cloud.

int counter = 10000;

void doConnectedWork() {
  digitalWrite(D7, HIGH);
  Serial.println("Working online");
}

void doOfflineWork() {
  digitalWrite(D7, LOW);
  Serial.println("Working offline");
}

bool needConnection() {
  --counter;
  if (0 == counter)
    counter = 10000;
  return (2000 > counter);
}

void setup() {
  pinMode(D7, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  if (needConnection()) {
    if (!Spark.connected())
      Spark.connect();
    doConnectedWork();
  } else {
    if (Spark.connected())
      Spark.disconnect();
    doOfflineWork();
  }
}
While this function will disconnect from the Spark Cloud, it will keep the connection to the Wi-Fi network. If you would like to completely deactivate the Wi-Fi module, use WiFi.off() .
NOTE: When the Core is disconnected, many features are not possible, including over-the-air
updates, reading Spark.variables, and calling Spark.functions.
If you disconnect from the Cloud, you will NOT BE ABLE to flash new firmware over the air. A
factory reset should resolve the issue.
Spark.connected()
Returns true when connected to the Spark Cloud, and false when disconnected from the
Spark Cloud.

// SYNTAX
Spark.connected();
RETURNS

boolean (true or false)


// EXAMPLE USAGE
void setup() {
Serial.begin(9600);
}

void loop() {
if (Spark.connected()) {
Serial.println("Connected!");
}

delay(1000);
}

Spark.process()

Spark.process() checks the Wi-Fi module for incoming messages from the Cloud, and
processes any messages that have come in. It also sends keep-alive pings to the Cloud, so if
it's not called frequently, the connection to the Cloud may be lost.

void setup() {
Serial.begin(9600);
}

void loop() {
while (1) {
Spark.process();
redundantLoop();

}
}

void redundantLoop() {
Serial.println("Well that was unnecessary.");
}

Spark.process() is a blocking call, and blocks for a few milliseconds. Spark.process() is


called automatically after every loop() and during delays. Typically you will not need to call

Spark.process() unless you block in some other way and need to maintain the connection to
the Cloud, or you change the system mode. If the user puts the Core into MANUAL mode, the
user is responsible for calling Spark.process() . The more frequently this function is called,
the more responsive the Core will be to incoming messages, the more likely the Cloud
connection will stay open, and the less likely that the CC3000's buffer will overrun.
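As a minimal sketch of the MANUAL-mode case described above (the structure is illustrative), the user connects once and then services the connection on every pass through loop():

// EXAMPLE USAGE (a minimal MANUAL-mode sketch)
SYSTEM_MODE(MANUAL);

void setup()
{
  Spark.connect();   // in MANUAL mode, connecting is up to the user
}

void loop()
{
  if (Spark.connected())
  {
    Spark.process(); // service the Cloud connection on every pass
  }
  // ... do other work here ...
}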
Spark.deviceID()

Spark.deviceID() provides an easy way to extract the device ID of your Core. It returns a
String object of the device ID, which is used frequently in Sparkland to identify your Core.

// EXAMPLE USAGE
void setup()
{
  // Make sure your Serial Terminal app is closed before powering your Core
  Serial.begin(9600);
  // Now open your Serial Terminal, and hit any key to continue!
  while(!Serial.available()) Spark.process();

  String myID = Spark.deviceID();
  // Prints out the device ID over Serial
  Serial.println(myID);
}

void loop() {}


Spark.sleep()

Spark.sleep() can be used to dramatically improve the battery life of a Spark-powered


project by temporarily deactivating the Wi-Fi module, which is by far the biggest power draw.

// SYNTAX
Spark.sleep(int seconds);

// EXAMPLE USAGE: Put the Wi-Fi module in standby (low power) for 5 seconds
Spark.sleep(5);
// The Core LED will flash green during sleep
Spark.sleep(int seconds) does NOT stop the execution of user code (it is a non-blocking call).
User code will continue running while the Wi-Fi module is in standby mode. During sleep,
WiFi.status() will return WIFI_OFF. Once the sleep time has expired and the Wi-Fi module
attempts reconnection, WiFi.status() will return WIFI_CONNECTING and then WIFI_ON.

Spark.sleep() can also be used to put the entire Core into a deep sleep mode. In this
particular mode, the Core shuts down the Wi-Fi chipset (CC3000) and puts the microcontroller
in a stand-by mode. When the Core awakens from deep sleep, it will reset the Core and run
all user code from the beginning with no values being maintained in memory from before the
deep sleep. As such, it is recommended that deep sleep be called only after all user code has
completed.

// SYNTAX
Spark.sleep(SLEEP_MODE_DEEP, int seconds);

// EXAMPLE USAGE: Put the Core into deep sleep for 60 seconds
Spark.sleep(SLEEP_MODE_DEEP,60);
// The Core LED will shut off during deep sleep
The Core will automatically wake up and reestablish the WiFi connection after the specified
number of seconds.
In standard sleep mode, the Core current consumption is in the range of: 30mA to 38mA

In deep sleep mode, the Core current consumption is around: 3.2 μA


Spark.syncTime()
Synchronize the time with the Spark Cloud. This happens automatically when the Core
connects to the Cloud. However, if your Core runs continuously for a long time, you may want
to synchronize once per day or so.

#define ONE_DAY_MILLIS (24 * 60 * 60 * 1000)


unsigned long lastSync = millis();

void loop() {
if (millis() - lastSync > ONE_DAY_MILLIS) {

// Request time synchronization from the Spark Cloud


Spark.syncTime();
lastSync = millis();

}
}

WIFI
WiFi.on()

WiFi.on() turns on the Wi-Fi module. Useful when you've turned it off, and you changed your
mind.
Note that WiFi.on() does not need to be called unless you have changed the system mode or
you have previously turned the Wi-Fi module off.
WiFi.off()

WiFi.off() turns off the Wi-Fi module. Useful for saving power, since most of the power draw
of the Spark Core is the Wi-Fi module.
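The two calls can be paired to duty-cycle the radio. A rough sketch, assuming the user is managing the connection (e.g. SEMI_AUTOMATIC or MANUAL mode) and the one-minute interval is arbitrary:

// EXAMPLE USAGE (a sketch; the timing is illustrative)
void loop()
{
  WiFi.off();     // radio off to save power
  delay(60000);   // do low-power work for a minute
  WiFi.on();      // radio back on; stored credentials are kept
  delay(60000);
}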
WiFi.connect()
Attempts to connect to the Wi-Fi network. If there are no credentials stored, this will enter
listening mode. If there are credentials stored, this will try the available credentials until
connection is successful. When this function returns, the device may not have an IP address
on the LAN; use WiFi.ready() to determine the connection status.
WiFi.disconnect()

Disconnects from the Wi-Fi network, but leaves the Wi-Fi module on.
WiFi.connecting()
This function will return true once the Core is attempting to connect using stored Wi-Fi
credentials, and will return false once the Core has successfully connected to the Wi-Fi
network.
WiFi.ready()
This function will return true once the Core is connected to the network and has been
assigned an IP address, which means that it's ready to open TCP sockets and send UDP
datagrams. Otherwise it will return false .
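The two functions are commonly used together. A sketch that blocks until the Core has an IP address (assuming the connection is being managed manually) might look like this:

// EXAMPLE USAGE (a sketch)
void setup()
{
  WiFi.connect();           // start connecting with stored credentials
  while (!WiFi.ready())     // wait until an IP address is assigned
  {
    Spark.process();        // keep servicing the system while waiting
  }
  // safe to open TCP sockets and send UDP datagrams from here
}

void loop() {}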
WiFi.listen()
This will enter listening mode, which opens a Serial connection to get Wi-Fi credentials over
USB, and also listens for credentials over Smart Config.
WiFi.listening()
This will return true once WiFi.listen() has been called and will return false once the
Core has been given some Wi-Fi credentials to try, either over USB or Smart Config.
WiFi.setCredentials()
Allows the user to set credentials for the Wi-Fi network from within the code. These
credentials will be added to the CC3000's memory, and the Core will automatically attempt to
connect to this network in the future.

// Connects to an unsecured network.


WiFi.setCredentials(SSID);
WiFi.setCredentials("My_Router_Is_Big");
// Connects to a network secured with WPA2 credentials.
WiFi.setCredentials(SSID, PASSWORD);
WiFi.setCredentials("My_Router", "mypasswordishuge");
// Connects to a network with a specified authentication procedure.
// Options are WPA2, WPA, or WEP.
WiFi.setCredentials(SSID, PASSWORD, AUTH);
WiFi.setCredentials("My_Router", "wepistheworst", WEP);

WiFi.clearCredentials()

This will clear all saved credentials from the CC3000's memory. This will return true on
success and false if the CC3000 has an error.
WiFi.hasCredentials()
Will return true if there are Wi-Fi credentials stored in the CC3000's memory.
WiFi.macAddress()

WiFi.macAddress() returns the MAC address of the device.

byte mac[6];

void setup() {
  Serial.begin(9600);
  while (!Serial.available()) Spark.process();

  WiFi.macAddress(mac);

  Serial.print(mac[5],HEX);
  Serial.print(":");
  Serial.print(mac[4],HEX);
  Serial.print(":");
  Serial.print(mac[3],HEX);
  Serial.print(":");
  Serial.print(mac[2],HEX);
  Serial.print(":");
  Serial.print(mac[1],HEX);
  Serial.print(":");
  Serial.println(mac[0],HEX);
}

void loop() {}

WiFi.SSID()

WiFi.SSID() returns the SSID of the network the Core is currently connected to as a char* .
WiFi.RSSI()

WiFi.RSSI() returns the signal strength of a Wi-Fi network from -127 to -1dB as an int .
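Neither call takes arguments. A short sketch printing both over USB serial:

// EXAMPLE USAGE (a sketch)
void setup()
{
  Serial.begin(9600);
  while(!Serial.available()) Spark.process();

  Serial.print("Connected to: ");
  Serial.println(WiFi.SSID());   // prints the current network's SSID
  Serial.print("Signal strength: ");
  Serial.println(WiFi.RSSI());   // closer to -1 is stronger
}

void loop() {}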
WiFi.ping()

WiFi.ping() allows you to ping an IP address and returns the number of packets received as

an int . It takes two forms:

WiFi.ping(IPAddress remoteIP) takes an IPAddress and pings that address.


WiFi.ping(IPAddress remoteIP, uint8_t nTries) takes an IPAddress and a number of tries,
and pings that address the specified number of times.
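For example (using the gateway as a convenient target — any reachable IP address works):

// EXAMPLE USAGE (a sketch)
void setup()
{
  Serial.begin(9600);
  while(!Serial.available()) Spark.process();

  IPAddress target = WiFi.gatewayIP();
  int received = WiFi.ping(target, 5);  // send 5 pings
  Serial.print("Packets received: ");
  Serial.println(received);             // 5 means no loss
}

void loop() {}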
WiFi.localIP()

WiFi.localIP() returns the local IP address assigned to the Core as an IPAddress .

void setup() {
  Serial.begin(9600);
  while(!Serial.available()) Spark.process();

  // Prints out the local IP over Serial.
  Serial.println(WiFi.localIP());
}

WiFi.subnetMask()

WiFi.subnetMask() returns the subnet mask of the network as an IPAddress .

void setup() {
  Serial.begin(9600);
  while(!Serial.available()) Spark.process();

  // Prints out the subnet mask over Serial.
  Serial.println(WiFi.subnetMask());
}

WiFi.gatewayIP()

WiFi.gatewayIP() returns the gateway IP address of the network as an IPAddress .

void setup() {
  Serial.begin(9600);
  while(!Serial.available()) Spark.process();

  // Prints out the gateway IP over Serial.
  Serial.println(WiFi.gatewayIP());
}

INPUT/OUTPUT
SETUP
pinMode()

pinMode() configures the specified pin to behave either as an input (with or without an
internal weak pull-up or pull-down resistor), or an output.

// SYNTAX
pinMode(pin,mode);
pinMode() takes two arguments, pin : the number of the pin whose mode you wish to set and
mode : INPUT, INPUT_PULLUP, INPUT_PULLDOWN or OUTPUT.
pinMode() does not return anything.

// EXAMPLE USAGE
int button = D0;                      // button is connected to D0
int LED = D1;                         // LED is connected to D1

void setup()
{
  pinMode(LED, OUTPUT);               // sets pin as output
  pinMode(button, INPUT_PULLDOWN);    // sets pin as input
}

void loop()
{
  // blink the LED as long as the button is pressed
  while(digitalRead(button) == HIGH) {
    digitalWrite(LED, HIGH);          // sets the LED on
    delay(200);                       // waits for 200mS
    digitalWrite(LED, LOW);           // sets the LED off
    delay(200);                       // waits for 200mS
  }
}

I/O

digitalWrite()
Write a HIGH or a LOW value to a digital pin.

// SYNTAX
digitalWrite(pin, value);
If the pin has been configured as an OUTPUT with pinMode(), its voltage will be set to the
corresponding value: 3.3V for HIGH, 0V (ground) for LOW.

digitalWrite() takes two arguments, pin : the number of the pin whose value you wish to set
and value : HIGH or LOW .

digitalWrite() does not return anything.

// EXAMPLE USAGE
int LED = D1;              // LED connected to D1

void setup()
{
  pinMode(LED, OUTPUT);    // sets pin as output
}

void loop()
{
  digitalWrite(LED, HIGH); // sets the LED on
  delay(200);              // waits for 200mS
  digitalWrite(LED, LOW);  // sets the LED off
  delay(200);              // waits for 200mS
}

digitalRead()
Reads the value from a specified digital pin , either HIGH or LOW .

// SYNTAX
digitalRead(pin);
digitalRead() takes one argument, pin : the number of the digital pin you want to read.
digitalRead() returns HIGH or LOW .

// EXAMPLE USAGE
int button = D0;   // button is connected to D0
int LED = D1;      // LED is connected to D1
int val = 0;       // variable to store the read value

void setup()
{
  pinMode(LED, OUTPUT);            // sets pin as output
  pinMode(button, INPUT_PULLDOWN); // sets pin as input
}

void loop()
{
  val = digitalRead(button); // read the input pin
  digitalWrite(LED, val);    // sets the LED to the button's value
}

analogWrite()
Writes an analog value (PWM wave) to a pin. Can be used to light a LED at varying
brightnesses or drive a motor at various speeds. After a call to analogWrite(), the pin will
generate a steady square wave of the specified duty cycle until the next call to analogWrite()
(or a call to digitalRead() or digitalWrite() on the same pin). The frequency of the PWM signal is
approximately 500 Hz.
On the Spark Core, this function works on pins A0, A1, A4, A5, A6, A7, D0 and D1.
The analogWrite function has nothing to do with the analog pins or the analogRead function.

// SYNTAX
analogWrite(pin, value);
analogWrite() takes two arguments, pin : the number of the pin whose value you wish to set
and value : the duty cycle: between 0 (always off) and 255 (always on).

analogWrite() does not return anything.

// EXAMPLE USAGE
int ledPin = D1;     // LED connected to digital pin D1
int analogPin = A0;  // potentiometer connected to analog pin A0
int val = 0;         // variable to store the read value

void setup()
{
  pinMode(ledPin, OUTPUT);     // sets the pin as output
}

void loop()
{
  val = analogRead(analogPin); // read the input pin
  analogWrite(ledPin, val/16); // analogRead values go from 0 to 4095,
                               // analogWrite values from 0 to 255.
  delay(10);
}

analogRead()
Reads the value from the specified analog pin. The Spark Core has 8 channels (A0 to A7) with
a 12-bit resolution. This means that it will map input voltages between 0 and 3.3 volts into
integer values between 0 and 4095. This yields a resolution between readings of: 3.3 volts /
4096 units or, 0.0008 volts (0.8 mV) per unit.

// SYNTAX
analogRead(pin);
analogRead() takes one argument, pin : the number of the analog input pin to read from (A0
to A7).

analogRead() returns an integer value ranging from 0 to 4095.

// EXAMPLE USAGE
int ledPin = D1;     // LED connected to digital pin D1
int analogPin = A0;  // potentiometer connected to analog pin A0
int val = 0;         // variable to store the read value

void setup()
{
  pinMode(ledPin, OUTPUT);     // sets the pin as output
}

void loop()
{
  val = analogRead(analogPin); // read the input pin
  analogWrite(ledPin, val/16); // analogRead values go from 0 to 4095,
                               // analogWrite values from 0 to 255.
  delay(10);
}

COMMUNICATION
SERIAL

Used for communication between the Spark Core and a computer or other devices. The Core
has two serial channels:

Serial: This channel communicates through the USB port and when connected to a
computer, will show up as a virtual COM port.

Serial1: This channel is available via the Core's TX and RX pins.


Serial2: This channel is optionally available via the Core's D1(TX) and D0(RX) pins. To use
Serial2, add #include "Serial2.h" near the top of your Spark App's main code file.
To use the TX/RX (Serial1) or D1/D0 (Serial2) pins to communicate with your personal
computer, you will need an additional USB-to-serial adapter. To use them to communicate
with an external TTL serial device, connect the TX pin to your device's RX pin, the RX to your
device's TX pin, and the ground of your Core to your device's ground.
NOTE: Please take into account that the voltage levels on these pins run at 0V to 3.3V and
should not be connected directly to a computer's RS232 serial port, which operates at +/- 12V
and can damage the Core.
begin()
Sets the data rate in bits per second (baud) for serial data transmission. For communicating
with the computer, use one of these rates: 300, 600, 1200, 2400, 4800, 9600, 14400, 19200,
28800, 38400, 57600, or 115200. You can, however, specify other rates - for example, to
communicate over pins TX and RX with a component that requires a particular baud rate.

// SYNTAX
Serial.begin(speed);    // serial via USB port
Serial1.begin(speed);   // serial via TX and RX pins
Serial2.begin(speed);   // serial via D1(TX) and D0(RX) pins

speed : parameter that specifies the baud rate (long)


begin() does not return anything

// EXAMPLE USAGE
void setup()
{
  Serial.begin(9600);   // open serial over USB
  // On Windows it will be necessary to implement the following line:
  // Make sure your Serial Terminal app is closed before powering your Core
  // Now open your Serial Terminal, and hit any key to continue!
  while(!Serial.available()) SPARK_WLAN_Loop();

  Serial1.begin(9600);  // open serial over TX and RX pins

  Serial.println("Hello Computer");
  Serial1.println("Hello Serial 1");
}

void loop() {}

end()
Disables serial communication, allowing the RX and TX pins to be used for general input and
output. To re-enable serial communication, call Serial1.begin() .

// SYNTAX
Serial1.end();

available()
Get the number of bytes (characters) available for reading from the serial port. This is data
that's already arrived and stored in the serial receive buffer (which holds 64 bytes).

// EXAMPLE USAGE
void setup()
{
Serial.begin(9600);
Serial1.begin(9600);
}

void loop()
{

// read from port 0, send to port 1:


if (Serial.available())
{
int inByte = Serial.read();
Serial1.write(inByte);
}
// read from port 1, send to port 0:
if (Serial1.available())
{
int inByte = Serial1.read();
Serial.write(inByte);
}
}

peek()

Returns the next byte (character) of incoming serial data without removing it from the
internal serial buffer. That is, successive calls to peek() will return the same character, as will
the next call to read() .

// SYNTAX
Serial.peek();
Serial1.peek();
peek() returns an int : the first byte of incoming serial data available (or -1 if no data is
available).

write()
Writes binary data to the serial port. This data is sent as a byte or series of bytes; to send the
characters representing the digits of a number use the print() function instead.

// SYNTAX
Serial.write(val);
Serial.write(str);
Serial.write(buf, len);
Parameters:

val : a value to send as a single byte


str : a string to send as a series of bytes
buf : an array to send as a series of bytes
len : the length of the buffer
write() will return the number of bytes written, though reading that number is optional.

// EXAMPLE USAGE
void setup()
{
  Serial.begin(9600);
}

void loop()
{
  Serial.write(45); // send a byte with the value 45

  int bytesSent = Serial.write("hello"); // send the string "hello" and return the length
}

read()
Reads incoming serial data.

// SYNTAX
Serial.read();
Serial1.read();
read() returns an int : the first byte of incoming serial data available (or -1 if no data is available).

// EXAMPLE USAGE
int incomingByte = 0; // for incoming serial data

void setup() {
Serial.begin(9600); // opens serial port, sets data rate to 9600 bps
}

void loop() {
  // send data only when you receive data:
  if (Serial.available() > 0) {
    // read the incoming byte:
    incomingByte = Serial.read();

    // say what you got:
    Serial.print("I received: ");
    Serial.println(incomingByte, DEC);
  }
}
print()
Prints data to the serial port as human-readable ASCII text. This command can take many
forms. Numbers are printed using an ASCII character for each digit. Floats are similarly
printed as ASCII digits, defaulting to two decimal places. Bytes are sent as a single character.
Characters and strings are sent as is. For example:
Serial.print(78) gives "78"
Serial.print(1.23456) gives "1.23"
Serial.print('N') gives "N"
Serial.print("Hello world.") gives "Hello world."
An optional second parameter specifies the base (format) to use; permitted values are BIN
(binary, or base 2), OCT (octal, or base 8), DEC (decimal, or base 10), HEX (hexadecimal, or base
16). For floating point numbers, this parameter specifies the number of decimal places to use.

For example:
Serial.print(78, BIN) gives "1001110"
Serial.print(78, OCT) gives "116"
Serial.print(78, DEC) gives "78"
Serial.print(78, HEX) gives "4E"
Serial.println(1.23456, 0) gives "1"
Serial.println(1.23456, 2) gives "1.23"
Serial.println(1.23456, 4) gives "1.2346"
println()
Prints data to the serial port as human-readable ASCII text followed by a carriage return
character (ASCII 13, or '\r') and a newline character (ASCII 10, or '\n'). This command takes the
same forms as Serial.print() .

// SYNTAX
Serial.println(val);
Serial.println(val, format);
Parameters:

val : the value to print - any data type


format : specifies the number base (for integral data types) or number of decimal places
(for floating point types)

println() returns the number of bytes written ( size_t ), though reading that number is optional.

EXAMPLE
// reads an analog input on analog pin A0, prints the value out.

int analogValue = 0;  // variable to hold the analog value

void setup()
{
  // Make sure your Serial Terminal app is closed before powering your Core
  Serial.begin(9600);
  // Now open your Serial Terminal, and hit any key to continue!
  while(!Serial.available()) SPARK_WLAN_Loop();
}

void loop() {
  // read the analog input on pin A0:
  analogValue = analogRead(A0);

  // print it out in many formats:
  Serial.println(analogValue);      // print as an ASCII-encoded decimal
  Serial.println(analogValue, DEC); // print as an ASCII-encoded decimal
  Serial.println(analogValue, HEX); // print as an ASCII-encoded hexadecimal
  Serial.println(analogValue, OCT); // print as an ASCII-encoded octal
  Serial.println(analogValue, BIN); // print as an ASCII-encoded binary

  // delay 10 milliseconds before the next reading:
  delay(10);
}

flush()
Waits for the transmission of outgoing serial data to complete.

// SYNTAX
Serial.flush();
Serial1.flush();
flush() neither takes a parameter nor returns anything

SPI
This library allows you to communicate with SPI devices, with the Spark Core as the master
device.

begin()
Initializes the SPI bus by setting SCK, MOSI, and SS to outputs, pulling SCK and MOSI low, and
SS high.
Note that once the pin is configured, you can't use it anymore as a general I/O, unless you call
the SPI.end() method on the same pin.

// SYNTAX
SPI.begin();

end()
Disables the SPI bus (leaving pin modes unchanged).

// SYNTAX
SPI.end();

setBitOrder()

Sets the order of the bits shifted out of and into the SPI bus, either LSBFIRST (least-significant
bit first) or MSBFIRST (most-significant bit first).

// SYNTAX
SPI.setBitOrder(order);
Where the parameter order can be either LSBFIRST or MSBFIRST .
setClockDivider()
Sets the SPI clock divider relative to the system clock. The available dividers are 2, 4, 8, 16, 32,
64, 128 or 256. The default setting is SPI_CLOCK_DIV4, which sets the SPI clock to one-quarter
the frequency of the system clock.

// SYNTAX
SPI.setClockDivider(divider);
Where the parameter divider can be:

SPI_CLOCK_DIV2
SPI_CLOCK_DIV4
SPI_CLOCK_DIV8
SPI_CLOCK_DIV16
SPI_CLOCK_DIV32
SPI_CLOCK_DIV64
SPI_CLOCK_DIV128
SPI_CLOCK_DIV256
setDataMode()
Sets the SPI data mode: that is, clock polarity and phase. See the Wikipedia article on SPI for
details.

// SYNTAX
SPI.setDataMode(mode);
Where the parameter mode can be:

SPI_MODE0
SPI_MODE1
SPI_MODE2
SPI_MODE3

transfer()
Transfers one byte over the SPI bus, both sending and receiving.

// SYNTAX
SPI.transfer(val);
Where the parameter val is the byte to send out over the SPI bus.
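Putting the SPI calls together, a sketch that configures the bus and exchanges one byte with a device. The chip-select pin (A2) and the 0x42 command byte are illustrative, not from any particular device:

// EXAMPLE USAGE (a sketch; SS pin choice and command byte are illustrative)
void setup()
{
  pinMode(A2, OUTPUT);                  // use A2 as the slave-select pin
  SPI.begin();
  SPI.setBitOrder(MSBFIRST);
  SPI.setClockDivider(SPI_CLOCK_DIV16); // system clock / 16
  SPI.setDataMode(SPI_MODE0);

  digitalWrite(A2, LOW);                // select the slave
  byte reply = SPI.transfer(0x42);      // send one byte, receive one byte
  digitalWrite(A2, HIGH);               // deselect the slave
}

void loop() {}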

WIRE

This library allows you to communicate with I2C / TWI devices. On the Spark Core, D0 is the
Serial Data Line (SDA) and D1 is the Serial Clock (SCL). Both of these pins run at 3.3V logic but
are tolerant to 5V.
begin()
Initiate the Wire library and join the I2C bus as a master or slave. This should normally be
called only once.

// SYNTAX
Wire.begin();
Wire.begin(address);
Parameters: address : the 7-bit slave address (optional); if not specified, join the bus as a
master.
requestFrom()
Used by the master to request bytes from a slave device. The bytes may then be retrieved
with the available() and read() functions.
If true, requestFrom() sends a stop message after the request, releasing the I2C bus.
If false, requestFrom() sends a restart message after the request. The bus will not be
released, which prevents another master device from requesting between messages. This
allows one master device to send multiple requests while in control.
The default value is true.

// SYNTAX
Wire.requestFrom(address, quantity);
Wire.requestFrom(address, quantity, stop);
Parameters:

address : the 7-bit address of the device to request bytes from


quantity : the number of bytes to request
stop : boolean. true will send a stop message after the request, releasing the bus. false
will continually send a restart after the request, keeping the connection active.
Returns: byte : the number of bytes returned from the slave device.
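A master-side read combines requestFrom() with available() and read() . The slave address 44 here is illustrative (it mirrors the write example further down):

// EXAMPLE USAGE (a sketch; the slave address is illustrative)
void setup()
{
  Wire.begin();       // join i2c bus as master
  Serial.begin(9600);
}

void loop()
{
  Wire.requestFrom(44, 6);  // request 6 bytes from slave device #44

  while(Wire.available())   // slave may send less than requested
  {
    char c = Wire.read();   // receive a byte as character
    Serial.print(c);
  }

  delay(500);
}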
beginTransmission()
Begin a transmission to the I2C slave device with the given address. Subsequently, queue
bytes for transmission with the write() function and transmit them by calling
endTransmission().

// SYNTAX
Wire.beginTransmission(address);
Parameters: address : the 7-bit address of the device to transmit to.

endTransmission()
Ends a transmission to a slave device that was begun by beginTransmission() and transmits
the bytes that were queued by write() .
If true, endTransmission() sends a stop message after transmission, releasing the I2C bus.
If false, endTransmission() sends a restart message after transmission. The bus will not be
released, which prevents another master device from transmitting between messages. This
allows one master device to send multiple transmissions while in control.
The default value is true.

// SYNTAX
Wire.endTransmission();
Wire.endTransmission(stop);
Parameters: stop : boolean. true will send a stop message, releasing the bus after
transmission. false will send a restart, keeping the connection active.
Returns: byte , which indicates the status of the transmission:
0: success
1: data too long to fit in transmit buffer
2: received NACK on transmit of address
3: received NACK on transmit of data
4: other error
write()
Writes data from a slave device in response to a request from a master, or queues bytes for
transmission from a master to slave device (in-between calls to beginTransmission() and
endTransmission()).

// Syntax
Wire.write(value);
Wire.write(string);
Wire.write(data, length);
Parameters:

value: a value to send as a single byte
string: a string to send as a series of bytes
data: an array of data to send as bytes
length: the number of bytes to transmit

Returns: byte

write() will return the number of bytes written, though reading that number is optional.

// EXAMPLE USAGE
byte val = 0;

void setup()
{
  Wire.begin(); // join i2c bus
}

void loop()
{
  Wire.beginTransmission(44); // transmit to device #44 (0x2c)
                              // device address is specified in datasheet
  Wire.write(val);            // sends value byte
  Wire.endTransmission();     // stop transmitting

  val++;                      // increment value
  if(val == 64)               // if reached 64th position (max)
  {
    val = 0;                  // start over from lowest value
  }
  delay(500);
}

available()
Returns the number of bytes available for retrieval with read() . This should be called on a
master device after a call to requestFrom() or on a slave inside the onReceive() handler.

// SYNTAX
Wire.available();
Returns: The number of bytes available for reading.
read()
Reads a byte that was transmitted from a slave device to a master after a call to
requestFrom() or was transmitted from a master to a slave. read() inherits from the Stream
utility class.

// SYNTAX
Wire.read();
Returns: The next byte received

// EXAMPLE USAGE

void setup()
{
  Wire.begin();       // join i2c bus (address optional for master)
  Serial.begin(9600); // start serial for output
}

void loop()
{
  Wire.requestFrom(2, 6);  // request 6 bytes from slave device #2

  while(Wire.available())  // slave may send less than requested
  {
    char c = Wire.read();  // receive a byte as character
    Serial.print(c);       // print the character
  }

  delay(500);
}

onReceive()
Registers a function to be called when a slave device receives a transmission from a master.
Parameters: handler : the function to be called when the slave receives data; this should take
a single int parameter (the number of bytes read from the master) and return nothing, e.g.:

void myHandler(int numBytes)


onRequest()
Register a function to be called when a master requests data from this slave device.
Parameters: handler : the function to be called, takes no parameters and returns nothing, e.g.:

void myHandler()

IPADDRESS
IPAddress
Creates an IP address that can be used with TCPServer, TCPClient, UDP, and Network objects.

// EXAMPLE USAGE
IPAddress localIP;

IPAddress server(8,8,8,8);
IPAddress IPfromInt( 167772162UL ); // 10.0.0.2 as 10*256^3+0*256^2+0*256+2
uint8_t server[] = { 10, 0, 0, 2};
IPAddress IPfromBytes( server );
The IPAddress class also allows for comparisons.

if (IPfromInt == IPfromBytes)
{
  Serial.println("Same IP addresses");
}

You can also use indexing to get or change individual bytes in the IP address.

// PING ALL HOSTS ON YOUR SUBNET EXCEPT YOURSELF


IPAddress localIP = Network.localIP();
uint8_t myLastAddrByte = localIP[3];
for(uint8_t ipRange=1; ipRange<255; ipRange++)
{
if (ipRange != myLastAddrByte)
{
localIP[3] = ipRange;
Network.ping(localIP);
}
}
You can also assign to an IPAddress from an array of uint8's or a 32-bit unsigned integer.

IPAddress IPfromInt;
IPfromInt = 167772162UL;  // 10.0.0.2 as 10*256^3+0*256^2+0*256+2
uint8_t server[] = { 10, 0, 0, 2 };
IPAddress IPfromBytes;
IPfromBytes = server;
Finally, an IPAddress can be used directly with print.

// PRINT THE CORE'S IP ADDRESS IN
// THE FORMAT 192.168.0.10
IPAddress myIP = Network.localIP();
Serial.println(myIP);  // prints the core's IP address

TCPSERVER

TCPServer
Create a server that listens for incoming connections on the specified port.

// SYNTAX
TCPServer server = TCPServer(port);

Parameters: port: the port to listen on (int)

// EXAMPLE USAGE
// telnet defaults to port 23
TCPServer server = TCPServer(23);
TCPClient client;

void setup()
{
  // start listening for clients
  server.begin();

  // Make sure your Serial Terminal app is closed before powering your Core
  Serial.begin(9600);
  // Now open your Serial Terminal, and hit any key to continue!
  while(!Serial.available()) SPARK_WLAN_Loop();

  Serial.println(Network.localIP());
  Serial.println(Network.subnetMask());
  Serial.println(Network.gatewayIP());
  Serial.println(Network.SSID());
}

void loop()
{
  if (client.connected()) {
    // echo all available bytes back to the client
    while (client.available()) {
      server.write(client.read());
    }
  } else {
    // if no client is yet connected, check for a new connection
    client = server.available();
  }
}

begin()

Tells the server to begin listening for incoming connections.

// SYNTAX
server.begin();

available()
Gets a client that is connected to the server and has data available for reading. The
connection persists when the returned client object goes out of scope; you can close it by
calling client.stop() .

available() inherits from the Stream utility class.


write()
Write data to all the clients connected to a server. This data is sent as a byte or series of
bytes.

// Syntax
server.write(val);
server.write(buf, len);
Parameters:

val: a value to send as a single byte (byte or char)
buf: an array to send as a series of bytes (byte or char)
len: the length of the buffer

Returns: byte: write() returns the number of bytes written. It is not necessary to read this.
print()
Print data to all the clients connected to a server. Prints numbers as a sequence of digits,
each an ASCII character (e.g. the number 123 is sent as the three characters '1', '2', '3').

// Syntax
server.print(data);
server.print(data, BASE);

Parameters:

data: the data to print (char, byte, int, long, or string)
BASE (optional): the base in which to print numbers: BIN for binary (base 2), DEC for decimal (base 10), OCT for octal (base 8), HEX for hexadecimal (base 16).

Returns: byte: print() will return the number of bytes written, though reading that number
is optional
println()
Print data, followed by a newline, to all the clients connected to a server. Prints numbers as a
sequence of digits, each an ASCII character (e.g. the number 123 is sent as the three
characters '1', '2', '3').

// Syntax
server.println();
server.println(data);
server.println(data, BASE);
Parameters:

data (optional): the data to print (char, byte, int, long, or string)
BASE (optional): the base in which to print numbers: BIN for binary (base 2), DEC for
decimal (base 10), OCT for octal (base 8), HEX for hexadecimal (base 16).

TCPCLIENT
TCPClient
Creates a client which can connect to a specified internet IP address and port (defined in the
client.connect() function).

// SYNTAX
TCPClient client;
// EXAMPLE USAGE

TCPClient client;
byte server[] = { 74, 125, 224, 72 }; // Google

void setup()
{
  // Make sure your Serial Terminal app is closed before powering your Core
  Serial.begin(9600);
  // Now open your Serial Terminal, and hit any key to continue!
  while(!Serial.available()) SPARK_WLAN_Loop();

  Serial.println("connecting...");

  if (client.connect(server, 80))
  {
    Serial.println("connected");
    client.println("GET /search?q=unicorn HTTP/1.0");
    client.println("Host: www.google.com");
    client.println("Content-Length: 0");
    client.println();
  }
  else
  {
    Serial.println("connection failed");
  }
}

void loop()
{
  if (client.available())
  {
    char c = client.read();
    Serial.print(c);
  }

  if (!client.connected())
  {
    Serial.println();
    Serial.println("disconnecting.");
    client.stop();
    for(;;);
  }
}

connected()
Whether or not the client is connected. Note that a client is considered connected if the
connection has been closed but there is still unread data.

// SYNTAX
client.connected();
Returns true if the client is connected, false if not.
connect()

Connects to a specified IP address and port. The return value indicates success or failure.
Also supports DNS lookups when using a domain name.

// SYNTAX
client.connect(ip, port);
client.connect(URL, port);

Parameters:

ip: the IP address that the client will connect to (array of 4 bytes)
URL: the domain name the client will connect to (string, ex.: "spark.io")
port: the port that the client will connect to (int)

Returns true if the connection succeeds, false if not.
write()
Write data to the server the client is connected to. This data is sent as a byte or series of
bytes.

// SYNTAX
client.write(val);
client.write(buf, len);
Parameters:

val: a value to send as a single byte (byte or char)
buf: an array to send as a series of bytes (byte or char)
len: the length of the buffer

Returns: byte: write() returns the number of bytes written. It is not necessary to read this
value.
print()
Print data to the server that a client is connected to. Prints numbers as a sequence of digits,
each an ASCII character (e.g. the number 123 is sent as the three characters '1', '2', '3').

// Syntax
client.print(data);
client.print(data, BASE);

Parameters:

data: the data to print (char, byte, int, long, or string)
BASE (optional): the base in which to print numbers: BIN for binary (base 2), DEC for decimal (base 10), OCT for octal (base 8), HEX for hexadecimal (base 16).

Returns: byte: print() will return the number of bytes written, though reading that number
is optional
println()
Print data, followed by a carriage return and newline, to the server a client is connected to.
Prints numbers as a sequence of digits, each an ASCII character (e.g. the number 123 is sent
as the three characters '1', '2', '3').

// Syntax
client.println();
client.println(data);
client.println(data, BASE);
Parameters:

data (optional): the data to print (char, byte, int, long, or string)
BASE (optional): the base in which to print numbers: BIN for binary (base 2), DEC for
decimal (base 10), OCT for octal (base 8), HEX for hexadecimal (base 16).
available()
Returns the number of bytes available for reading (that is, the amount of data that has been
written to the client by the server it is connected to).

// SYNTAX
client.available();
Returns the number of bytes available.
read()
Read the next byte received from the server the client is connected to (after the last call to
read()).

// SYNTAX
client.read();

Returns the next byte (or character), or -1 if none is available.


flush()
Discard any bytes that have been written to the client but not yet read.

// SYNTAX
client.flush();

stop()
Disconnect from the server.

// SYNTAX
client.stop();

UDP
This class enables UDP messages to be sent and received.
The UDP protocol implementation has known issues that will require extra consideration
when programming with it. Please refer to the Known Issues category of the Community for
details. There are also numerous working examples and workarounds in the searchable
Community topics.
begin()
Initializes the UDP library and network settings.

// EXAMPLE USAGE
// UDP Port used for two way communication
unsigned int localPort = 8888;

// An UDP instance to let us send and receive packets over UDP
UDP Udp;

void setup() {
  // start the UDP
  Udp.begin(localPort);

  // Print your device IP Address via serial
  Serial.begin(9600);
  Serial.println(Network.localIP());
}

void loop() {
  // Check if data has been received
  if (Udp.parsePacket() > 0) {
    // Read first char of data received
    char c = Udp.read();
    // Ignore other chars
    Udp.flush();

    // Store sender ip and port
    IPAddress ipAddress = Udp.remoteIP();
    int port = Udp.remotePort();

    // Echo back data to sender
    Udp.beginPacket(ipAddress, port);
    Udp.write(c);
    Udp.endPacket();
  }
}

available()
Get the number of bytes (characters) available for reading from the buffer. This is data that's
already arrived.
This function can only be successfully called after UDP.parsePacket() .

available() inherits from the Stream utility class.

// SYNTAX
UDP.available()
Returns the number of bytes available to read.
beginPacket()
Starts a connection to write UDP data to the remote connection.

// SYNTAX
UDP.beginPacket(remoteIP, remotePort);

Parameters:

remoteIP: the IP address of the remote connection (4 bytes)
remotePort: the port of the remote connection (int)

It returns nothing.
endPacket()
Called after writing UDP data to the remote connection.

// SYNTAX
UDP.endPacket();
Parameters: NONE
write()
Writes UDP data to the remote connection. Must be wrapped between beginPacket() and
endPacket(). beginPacket() initializes the packet of data; it is not sent until endPacket() is
called.

// SYNTAX
UDP.write(message);
UDP.write(buffer, size);
Parameters:

message: the outgoing message (char)
buffer: an array to send as a series of bytes (byte or char)
size: the length of the buffer

Returns:

byte: returns the number of characters sent. This does not have to be read.
parsePacket()
Checks for the presence of a UDP packet, and reports the size. parsePacket() must be called
before reading the buffer with UDP.read() .

// SYNTAX
UDP.parsePacket();

Parameters: NONE
Returns:

int : the size of a received UDP packet


read()
Reads UDP data from the specified buffer. If no arguments are given, it will return the next
character in the buffer.
This function can only be successfully called after UDP.parsePacket() .

// SYNTAX
UDP.read();
UDP.read(packetBuffer, MaxSize);
Parameters:

packetBuffer: buffer to hold incoming packets (char)
MaxSize: maximum size of the buffer (int)

Returns:

char: returns the characters in the buffer

stop()
Disconnect from the server. Release any resource being used during the UDP session.

// SYNTAX
UDP.stop();
Parameters: NONE
remoteIP()
Gets the IP address of the remote connection. This function must be called after
UDP.parsePacket().

// SYNTAX
UDP.remoteIP();

Parameters: NONE

Returns:

4 bytes: the IP address of the remote connection
remotePort()
Gets the port of the remote UDP connection. This function must be called after
UDP.parsePacket().

// SYNTAX
UDP.remotePort();
Parameters: NONE
Returns:

int : the port of the UDP connection to a remote host

LIBRARIES
SERVO
This library allows a Spark Core to control RC (hobby) servo motors. Servos have integrated
gears and a shaft that can be precisely controlled. Standard servos allow the shaft to be
positioned at various angles, usually between 0 and 180 degrees. Continuous rotation servos
allow the rotation of the shaft to be set to various speeds.

// EXAMPLE CODE

Servo myservo; // create servo object to control a servo
               // a maximum of eight servo objects can be created

int pos = 0;   // variable to store the servo position

void setup()
{
  myservo.attach(A0); // attaches the servo on the A0 pin to the servo object
}

void loop()
{
  for(pos = 0; pos < 180; pos += 1) // goes from 0 degrees to 180 degrees
  {                                 // in steps of 1 degree
    myservo.write(pos);             // tell servo to go to position in variable 'pos'
    delay(15);                      // waits 15ms for the servo to reach the position
  }
  for(pos = 180; pos>=1; pos-=1)    // goes from 180 degrees to 0 degrees
  {
    myservo.write(pos);             // tell servo to go to position in variable 'pos'
    delay(15);                      // waits 15ms for the servo to reach the position
  }
}

NOTE: Unlike Arduino, you do not need to include Servo.h ; it is included automatically.
attach()
Set up a servo on a particular pin. Note that, on the Spark Core, Servo can only be attached to
pins with a timer (A0, A1, A4, A5, A6, A7, D0, and D1).

// SYNTAX
servo.attach(pin)

write()
Writes a value to the servo, controlling the shaft accordingly. On a standard servo, this will set
the angle of the shaft (in degrees), moving the shaft to that orientation. On a continuous
rotation servo, this will set the speed of the servo (with 0 being full-speed in one direction,
180 being full speed in the other, and a value near 90 being no movement).

// SYNTAX
servo.write(angle)

writeMicroseconds()
Writes a value in microseconds (uS) to the servo, controlling the shaft accordingly. On a
standard servo, this will set the angle of the shaft. On standard servos a parameter value of
1000 is fully counter-clockwise, 2000 is fully clockwise, and 1500 is in the middle.

// SYNTAX
servo.writeMicroseconds(uS)
Note that some manufacturers do not follow this standard very closely, so servos often
respond to values between 700 and 2300. Feel free to increase these endpoints until the
servo no longer continues to increase its range. Note however that attempting to drive a servo
past its endpoints (often indicated by a growling sound) is a high-current state, and should be
avoided.
Continuous-rotation servos will respond to the writeMicroseconds() function in an analogous
manner to the write() function.
read()
Read the current angle of the servo (the value passed to the last call to write()). Returns an
integer from 0 to 180 degrees.

// SYNTAX
servo.read()

attached()
Check whether the Servo variable is attached to a pin. Returns a boolean.

// SYNTAX
servo.attached()

detach()
Detach the Servo variable from its pin.

// SYNTAX
servo.detach()

RGB
This library allows the user to control the RGB LED on the front of the Spark Core.

// EXAMPLE CODE

// take control of the LED
RGB.control(true);

// red, green, blue, 0-255.
// the following sets the RGB LED to white:
RGB.color(255, 255, 255);

// wait one second
delay(1000);

// scales brightness of all three colors, 0-255.
// the following sets the RGB LED brightness to 25%:
RGB.brightness(64);

// wait one more second
delay(1000);

// resume normal operation
RGB.control(false);

control(user_control)
User can take control of the RGB LED, or give control back to the Spark Core firmware.

// take control of the RGB LED


RGB.control(true);
// resume normal operation
RGB.control(false);

controlled()
Returns Boolean true when the RGB LED is under user control, or false when it is not.

// take control of the RGB LED


RGB.control(true);
// Print true or false depending on whether
// the RGB LED is currently under user control.
// In this case it prints "true".
Serial.println(RGB.controlled());
// resume normal operation
RGB.control(false);

color(red, green, blue)


Set the color of the RGB with three values, 0 to 255 (0 is off, 255 is maximum brightness for
that color). User must take control of the RGB LED before calling this method.

// Set the RGB LED to red

RGB.color(255, 0, 0);
// Sets the RGB LED to cyan
RGB.color(0, 255, 255);
// Sets the RGB LED to white
RGB.color(255, 255, 255);

brightness(val)
Scale the brightness value of all three RGB colors with one value, 0 to 255 (0 is 0%, 255 is
100%). This setting persists after RGB.control() is set to false , and will govern the overall
brightness of the RGB LED under normal system operation. User must take control of the RGB
LED before calling this method.

// Scale the RGB LED brightness to 25%


RGB.brightness(64);
// Scale the RGB LED brightness to 50%
RGB.brightness(128);
// Scale the RGB LED brightness to 100%
RGB.brightness(255);

TIME
The Spark Core synchronizes time with the Spark Cloud during the handshake. From then, the
time is continually updated on the Core. This reduces the need for external libraries to
manage dates and times.
hour()
Retrieve the hour for the current or given time. Integer is returned without a leading zero.

// Print the hour for the current time


Serial.print(Time.hour());
// Print the hour for the given time, in this case: 4
Serial.print(Time.hour(1400647897));
Optional parameters: Integer (Unix timestamp)
Returns: Integer 0-23

hourFormat12()
Retrieve the hour in 12-hour format for the current or given time. Integer is returned without
a leading zero.

// Print the hour in 12-hour format for the current time


Serial.print(Time.hourFormat12());
// Print the hour in 12-hour format for the given time, in this case: 3
Serial.print(Time.hourFormat12(1400684400));
Optional parameters: Integer (Unix timestamp)
Returns: Integer 1-12
isAM()
Returns true if the current or given time is AM.

// Print true or false depending on whether the current time is AM


Serial.print(Time.isAM());
// Print whether the given time is AM, in this case: true
Serial.print(Time.isAM(1400647897));
Optional parameters: Integer (Unix timestamp)
Returns: Unsigned 8-bit integer: 0 = false, 1 = true
isPM()
Returns true if the current or given time is PM.

// Print true or false depending on whether the current time is PM


Serial.print(Time.isPM());
// Print whether the given time is PM, in this case: false
Serial.print(Time.isPM(1400647897));
Optional parameters: Integer (Unix timestamp)
Returns: Unsigned 8-bit integer: 0 = false, 1 = true
minute()

Retrieve the minute for the current or given time. Integer is returned without a leading zero.

// Print the minute for the current time


Serial.print(Time.minute());
// Print the minute for the given time, in this case: 51
Serial.print(Time.minute(1400647897));
Optional parameters: Integer (Unix timestamp)
Returns: Integer 0-59
second()
Retrieve the seconds for the current or given time. Integer is returned without a leading zero.

// Print the second for the current time


Serial.print(Time.second());
// Print the second for the given time, in this case: 51
Serial.print(Time.second(1400647897));
Optional parameters: Integer (Unix timestamp)
Returns: Integer 0-59
day()
Retrieve the day for the current or given time. Integer is returned without a leading zero.

// Print the day for the current time


Serial.print(Time.day());
// Print the day for the given time, in this case: 21
Serial.print(Time.day(1400647897));
Optional parameters: Integer (Unix timestamp)
Returns: Integer 1-31
weekday()
Retrieve the weekday for the current or given time.

1 = Sunday
2 = Monday
3 = Tuesday
4 = Wednesday
5 = Thursday
6 = Friday
7 = Saturday

// Print the weekday number for the current time


Serial.print(Time.weekday());
// Print the weekday for the given time, in this case: 4
Serial.print(Time.weekday(1400647897));
Optional parameters: Integer (Unix timestamp)
Returns: Integer 1-7
month()
Retrieve the month for the current or given time. Integer is returned without a leading zero.

// Print the month number for the current time


Serial.print(Time.month());
// Print the month for the given time, in this case: 5
Serial.print(Time.month(1400647897));
Optional parameters: Integer (Unix timestamp)
Returns: Integer 1-12
year()
Retrieve the 4-digit year for the current or given time.

// Print the current year


Serial.print(Time.year());
// Print the year for the given time, in this case: 2014
Serial.print(Time.year(1400647897));
Optional parameters: Integer (Unix timestamp)

Returns: Integer
now()
Retrieve the current time as seconds since January 1, 1970 (commonly known as "Unix time"
or "epoch time")

// Print the current Unix timestamp


Serial.print(Time.now()); // 1400647897
Returns: Integer
zone()
Set the time zone offset (+/-) from UTC. The Spark Core will remember this offset until reboot.
NOTE: This function does not observe daylight savings time.

// Set time zone to Eastern USA standard time
Time.zone(-5);
Parameters: floating point offset from UTC in hours, from -12.0 to 13.0
setTime()
Set the Spark Core's time to the given timestamp.
NOTE: This will override the time set by the Spark Cloud. If the cloud connection drops, the
reconnection handshake will set the time again.
Also see: Spark.syncTime()

// Set the time to 2014-10-11 13:37:42


Time.setTime(1413034662);
Parameters: Unix timestamp (integer)
timeStr()
Return string representation for the given time.

Serial.print(Time.timeStr()); // Wed May 21 01:08:47 2014

Returns: String

OTHER FUNCTIONS
Note that most of the functions in newlib described at
https://sourceware.org/newlib/libc.html are available for use in addition to the functions
outlined below.

TIME
millis()
Returns the number of milliseconds since the Spark Core began running the current
program. This number will overflow (go back to zero), after approximately 49 days.

unsigned long time = millis();

// EXAMPLE USAGE
unsigned long time;

void setup()
{
  Serial.begin(9600);
}

void loop()
{
  Serial.print("Time: ");
  time = millis();
  // prints time since program started
  Serial.println(time);
  // wait a second so as not to send massive amounts of data
  delay(1000);
}
Note: The return value of millis() is an unsigned long; errors may be generated if a
programmer tries to do math with other datatypes such as ints.
micros()
Returns the number of microseconds since the Spark Core began running the current
program. This number will overflow (go back to zero), after approximately 59.65 seconds.

unsigned long time = micros();

// EXAMPLE USAGE
unsigned long time;

void setup()
{
  Serial.begin(9600);
}

void loop()
{
  Serial.print("Time: ");
  time = micros();
  // prints time since program started
  Serial.println(time);
  // wait a second so as not to send massive amounts of data
  delay(1000);
}

delay()
Pauses the program for the amount of time (in miliseconds) specified as parameter. (There
are 1000 milliseconds in a second.)

SYNTAX
delay(ms);
ms is the number of milliseconds to pause (unsigned long)

// EXAMPLE USAGE
int ledPin = D1; // LED connected to digital pin D1

void setup()
{
  pinMode(ledPin, OUTPUT); // sets the digital pin as output
}

void loop()
{
  digitalWrite(ledPin, HIGH); // sets the LED on
  delay(1000);                // waits for a second
  digitalWrite(ledPin, LOW);  // sets the LED off
  delay(1000);                // waits for a second
}
NOTE: the parameter for delay() is an unsigned long; errors may be generated if a programmer
tries to do math with other datatypes such as ints.


delayMicroseconds()
Pauses the program for the amount of time (in microseconds) specified as parameter. There
are a thousand microseconds in a millisecond, and a million microseconds in a second.

SYNTAX
delayMicroseconds(us);
us is the number of microseconds to pause (unsigned int)

// EXAMPLE USAGE
int outPin = D1; // digital pin D1

void setup()
{
  pinMode(outPin, OUTPUT); // sets the digital pin as output
}

void loop()
{
  digitalWrite(outPin, HIGH); // sets the pin on
  delayMicroseconds(50);      // pauses for 50 microseconds
  digitalWrite(outPin, LOW);  // sets the pin off
  delayMicroseconds(50);      // pauses for 50 microseconds
}

INTERRUPTS
attachInterrupt()
Specifies a function to call when an external interrupt occurs. Replaces any previous function
that was attached to the interrupt.

// EXAMPLE USAGE
void blink(void);
int ledPin = D1;
volatile int state = LOW;

void setup()
{
  pinMode(ledPin, OUTPUT);
  attachInterrupt(D0, blink, CHANGE);
}

void loop()
{
  digitalWrite(ledPin, state);
}

void blink()
{
  state = !state;
}
The Spark Core currently supports external interrupts on the following pins:
D0, D1, D2, D3, D4, A0, A1, A3, A4, A5, A6, A7

attachInterrupt(pin, function, mode);

Parameters:

pin: the pin number
function: the function to call when the interrupt occurs; this function must take no
parameters and return nothing. This function is sometimes referred to as an interrupt
service routine (ISR).
mode: defines when the interrupt should be triggered. Three constants are predefined as
valid values:
CHANGE to trigger the interrupt whenever the pin changes value,
RISING to trigger when the pin goes from low to high,
FALLING for when the pin goes from high to low.

The function does not return anything.
NOTE: Inside the attached function, delay() won't work and the value returned by millis()
will not increment. Serial data received while in the function may be lost. You should declare
as volatile any variables that you modify within the attached function.
Using Interrupts: Interrupts are useful for making things happen automatically in
microcontroller programs, and can help solve timing problems. Good tasks for using an
interrupt may include reading a rotary encoder, or monitoring user input.
If you wanted to ensure that a program always caught the pulses from a rotary encoder, never
missing a pulse, it would be very tricky to write a program to do anything else, because the
program would need to constantly poll the sensor lines for the encoder in order to catch
pulses when they occurred. Other sensors have a similar interface dynamic too, such as a
sound sensor trying to catch a click, or an infrared slot sensor (photo-interrupter) trying to
catch a coin drop. In all of these situations, using an interrupt can free the microcontroller to
get some other work done while not missing the input.
detachInterrupt()
Turns off the given interrupt.

detachInterrupt(pin);
pin is the pin number of the interrupt to disable.
interrupts()
Re-enables interrupts (after they've been disabled by noInterrupts() ). Interrupts allow
certain important tasks to happen in the background and are enabled by default. Some
functions will not work while interrupts are disabled, and incoming communication may be
ignored. Interrupts can slightly disrupt the timing of code, however, and may be disabled for
particularly critical sections of code.

// EXAMPLE USAGE
void setup() {}

void loop()
{
  noInterrupts(); // disable interrupts
  //
  // put critical, time-sensitive code here
  //
  interrupts();   // enable interrupts
  //
  // other code here
  //
}
interrupts() neither accepts a parameter nor returns anything.
noInterrupts()
Disables interrupts (you can re-enable them with interrupts() ). Interrupts allow certain
important tasks to happen in the background and are enabled by default. Some functions will
not work while interrupts are disabled, and incoming communication may be ignored.
Interrupts can slightly disrupt the timing of code, however, and may be disabled for
particularly critical sections of code.

noInterrupts() neither accepts a parameter nor returns anything.

MATH
Note that in addition to functions outlined below all of the newlib math functions described
at sourceware.org are also available for use by simply including the math.h header file thus:

#include "math.h"
min()
Calculates the minimum of two numbers.

min(x, y)

x is the first number, any data type
y is the second number, any data type

The function returns the smaller of the two numbers.

// EXAMPLE USAGE
sensVal = min(sensVal, 100); // assigns sensVal to the smaller of sensVal or 100
// ensuring that it never gets above 100.

NOTE: Perhaps counter-intuitively, max() is often used to constrain the lower end of a
variable's range, while min() is used to constrain the upper end of the range.
WARNING: Because of the way the min() function is implemented, avoid using other functions
inside the brackets; it may lead to incorrect results.

min(a++, 100);  // avoid this - yields incorrect results

a++;
min(a, 100);    // use this instead - keep other math outside the function

max()
Calculates the maximum of two numbers.

max(x, y)

x is the first number, any data type
y is the second number, any data type

The function returns the larger of the two numbers.

// EXAMPLE USAGE
sensVal = max(sensVal, 20); // assigns sensVal to the larger of sensVal or 20
                            // (effectively ensuring that it is at least 20)


NOTE: Perhaps counter-intuitively, max() is often used to constrain the lower end of a
variable's range, while min() is used to constrain the upper end of the range.
WARNING: Because of the way the max() function is implemented, avoid using other functions
inside the brackets; it may lead to incorrect results.

max(a--, 0); // avoid this - yields incorrect results

a--;
max(a, 0);   // use this instead - keep other math outside the function

abs()
Computes the absolute value of a number.

abs(x);
where x is the number
The function returns x if x is greater than or equal to 0, and returns -x if x is less than 0.
WARNING: Because of the way the abs() function is implemented, avoid using other functions
inside the brackets; it may lead to incorrect results.

abs(a++); // avoid this - yields incorrect results

a++;
abs(a);   // use this instead - keep other math outside the function

constrain()
Constrains a number to be within a range.

constrain(x, a, b);
x is the number to constrain (all data types); a is the lower end of the range (all data types);
b is the upper end of the range (all data types)
The function will return:
x : if x is between a and b
a : if x is less than a
b : if x is greater than b

// EXAMPLE USAGE
sensVal = constrain(sensVal, 10, 150);

// limits range of sensor values to between 10 and 150

map()
Re-maps a number from one range to another. That is, a value of fromLow would get mapped
to toLow , a value of fromHigh to toHigh , values in-between to values in-between, etc.

map(value, fromLow, fromHigh, toLow, toHigh);


Does not constrain values to within the range, because out-of-range values are sometimes
intended and useful. The constrain() function may be used either before or after this
function, if limits to the ranges are desired.
Note that the "lower bounds" of either range may be larger or smaller than the "upper
bounds" so the map() function may be used to reverse a range of numbers, for example

y = map(x, 1, 50, 50, 1);


The function also handles negative numbers well, so that this example

y = map(x, 1, 50, 50, -100);


is also valid and works well.
The map() function uses integer math so will not generate fractions, when the math might
indicate that it should do so. Fractional remainders are truncated, and are not rounded or
averaged.
Parameters:

value : the number to map


fromLow : the lower bound of the value's current range
fromHigh : the upper bound of the value's current range
toLow : the lower bound of the value's target range
toHigh : the upper bound of the value's target range
The function returns the mapped value

// EXAMPLE USAGE
// Map an analog value to 8 bits (0 to 255)
void setup() {}

void loop()
{

int val = analogRead(0);

val = map(val, 0, 4095, 0, 255);


analogWrite(9, val);

}
Appendix: For the mathematically inclined, here's the whole function

long map(long x, long in_min, long in_max, long out_min, long out_max)
{
  return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min;
}

pow()
Calculates the value of a number raised to a power. pow() can be used to raise a number to a
fractional power. This is useful for generating exponential mapping of values or curves.

pow(base, exponent);
base is the number (float) exponent is the power to which the base is raised (float)
The function returns the result of the exponentiation (double)
EXAMPLE TBD
sqrt()
Calculates the square root of a number.

sqrt(x)
x is the number, any data type
The function returns the number's square root (double)

EEPROM
The EEPROM emulator allocates 100 bytes of the Spark Core's built-in flash memory to act as
EEPROM. Unlike "true" EEPROM, flash doesn't suffer from write "wear". The EEPROM functions
can be used to store small amounts of data in flash that will persist even after the Core resets
or wakes from deep sleep.
read()
Read a byte of data from the emulated EEPROM.

read(address)

address is the address (int) of the EEPROM location (0-99) to read

// EXAMPLE USAGE
// Read the value of the second byte of EEPROM
int addr = 1;
uint8_t value = EEPROM.read(addr);

write()
Write a byte of data to the emulated EEPROM.

write(address, value)
address is the address (int) of the EEPROM location (0-99) to write to; value is the byte of
data (uint8_t) to write

// EXAMPLE USAGE
// Write a byte value to the second byte of EEPROM
int addr = 1;
uint8_t val = 0x45;
EEPROM.write(addr, val);

ADVANCED: SYSTEM MODES


By default, the Spark Core connects to the Cloud and processes messages automatically.
However, there are many cases where a user will want to take control of that connection.
There are three available system modes: AUTOMATIC , SEMI_AUTOMATIC , and MANUAL . These
modes describe how connectivity is handled.
The SYSTEM_MODE() call must appear before the setup() function. By default, the Core is in
AUTOMATIC mode.
Automatic mode
The automatic mode of connectivity provides the default behavior of the Spark Core, which is
that:

SYSTEM_MODE(AUTOMATIC);

void setup() {
  // This won't be called until the Core is connected
}

void loop() {
  // Neither will this
}
When the Core starts up, it automatically tries to connect to Wi-Fi and the Spark Cloud.
Once a connection with the Spark Cloud has been established, the user code starts
running.
Messages to and from the Cloud are handled in between runs of the user loop; the user
loop automatically alternates with Spark.process() .

Spark.process() is also called during any delay() of at least 1 second.


If the user loop blocks for more than about 20 seconds, the connection to the Cloud will be
lost. To prevent this from happening, the user can call Spark.process() manually.
If the connection to the Cloud is ever lost, the Core will automatically attempt to reconnect.
This re-connection will block from a few milliseconds up to 8 seconds.

SYSTEM_MODE(AUTOMATIC) does not need to be called, because it is the default state;
however the user can invoke it to make the mode explicit.
In automatic mode, the user can still call Spark.disconnect() to disconnect from the Cloud,
but is then responsible for re-connecting to the Cloud by calling Spark.connect() .
Semi-automatic mode
The semi-automatic mode will not attempt to connect the Core to the Cloud automatically.
However once the Core is connected to the Cloud (through some user intervention), messages
will be processed automatically, as in the automatic mode above.

SYSTEM_MODE(SEMI_AUTOMATIC);

void setup() {
  // This is called immediately
}

void loop() {
  if (buttonIsPressed()) {
    Spark.connect();
  } else {
    doOfflineStuff();
  }
}
The semi-automatic mode is therefore much like the automatic mode, except:
When the Core boots up, the user code will begin running immediately.
When the user calls Spark.connect() , the user code will be blocked, and the Core will
attempt to negotiate a connection. This connection will block until either the Core connects
to the Cloud or an interrupt is fired that calls Spark.disconnect() .

Manual mode
The "manual" mode puts the Spark Core's connectivity completely in the user's control. This
means that the user is responsible for both establishing a connection to the Spark Cloud and
handling communications with the Cloud by calling Spark.process() on a regular basis.

SYSTEM_MODE(MANUAL);

void setup() {
  // This will run automatically
}

void loop() {
  if (buttonIsPressed()) {
    Spark.connect();
  }

  if (Spark.connected()) {
    Spark.process();
    doOtherStuff();
  }
}
When using manual mode:
The user code will run immediately when the Core is powered on.
Once the user calls Spark.connect() , the Core will attempt to begin the connection
process.
Once the Core is connected to the Cloud ( Spark.connected() == true ), the user must call
Spark.process() regularly to handle incoming messages and keep the connection alive.
The more frequently Spark.process() is called, the more responsive the Core will be to
incoming messages.
If Spark.process() is called less frequently than every 20 seconds, the connection with the
Cloud will die. It may take a couple of additional calls of Spark.process() for the Core to
recognize that the connection has been lost.

LANGUAGE SYNTAX
The following documentation is based on the Arduino reference which can be found here.

STRUCTURE
setup()

The setup() function is called when an application starts. Use it to initialize variables, pin
modes, start using libraries, etc. The setup function will only run once, after each powerup or
reset of the Spark Core.

// EXAMPLE USAGE
int button = D0;
int LED = D1;
//setup initializes D0 as input and D1 as output
void setup()
{
pinMode(button, INPUT_PULLDOWN);
pinMode(LED, OUTPUT);
}

void loop()
{

// ...
}

loop()
After creating a setup() function, which initializes and sets the initial values, the loop()
function does precisely what its name suggests, and loops consecutively, allowing your
program to change and respond. Use it to actively control the Spark Core.

// EXAMPLE USAGE
int button = D0;
int LED = D1;
//setup initializes D0 as input and D1 as output
void setup()
{
  pinMode(button, INPUT_PULLDOWN);
  pinMode(LED, OUTPUT);
}
//loops to check if button was pressed,
//if it was, then it turns ON the LED,
//else the LED remains OFF
void loop()
{
  if (digitalRead(button) == HIGH)
    digitalWrite(LED, HIGH);
  else
    digitalWrite(LED, LOW);
}

CONTROL STRUCTURES
if

if , which is used in conjunction with a comparison operator, tests whether a certain


condition has been reached, such as an input being above a certain number.

SYNTAX
if (someVariable > 50)
{
// do something here
}
The program tests to see if someVariable is greater than 50. If it is, the program takes a
particular action. Put another way, if the statement in parentheses is true, the statements
inside the brackets are run. If not, the program skips over the code.
The brackets may be omitted after an if statement. If this is done, the next line (defined by
the semicolon) becomes the only conditional statement.

if (x > 120) digitalWrite(LEDpin, HIGH);

if (x > 120)
  digitalWrite(LEDpin, HIGH);

if (x > 120){ digitalWrite(LEDpin, HIGH); }

if (x > 120)
{
  digitalWrite(LEDpin1, HIGH);
  digitalWrite(LEDpin2, HIGH);
}

// all are correct
The statements being evaluated inside the parentheses require the use of one or more
operators:
Comparison Operators

x == y (x is equal to y)
x != y (x is not equal to y)
x <  y (x is less than y)
x >  y (x is greater than y)
x <= y (x is less than or equal to y)
x >= y (x is greater than or equal to y)

WARNING: Beware of accidentally using the single equal sign (e.g. if (x = 10) ). The single
equal sign is the assignment operator, and sets x to 10 (puts the value 10 into the variable x).
Instead use the double equal sign (e.g. if (x == 10) ), which is the comparison operator, and
tests whether x is equal to 10 or not. The latter statement is only true if x equals 10, but the
former statement will always be true.
This is because C evaluates the statement if (x=10) as follows: 10 is assigned to x
(remember that the single equal sign is the assignment operator), so x now contains 10. Then
the 'if' conditional evaluates 10, which always evaluates to TRUE, since any non-zero number
evaluates to TRUE. Consequently, if (x = 10) will always evaluate to TRUE, which is not the
desired result when using an 'if' statement. Additionally, the variable x will be set to 10, which
is also not a desired action.

if can also be part of a branching control structure using the if...else construction.
if...else
if/else allows greater control over the flow of code than the basic if statement, by allowing
multiple tests to be grouped together. For example, an analog input could be tested and one
action taken if the input was less than 500, and another action taken if the input was 500 or
greater. The code would look like this:

SYNTAX
if (pinFiveInput < 500)
{
// action A
}

else
{

// action B
}
else can be followed by another if test, so that multiple, mutually exclusive tests can be run
at the same time.
Each test will proceed to the next one until a true test is encountered. When a true test is
found, its associated block of code is run, and the program then skips to the line following the
entire if/else construction. If no test proves to be true, the default else block is executed, if
one is present, and sets the default behavior.
Note that an else if block may be used with or without a terminating else block and vice versa.
An unlimited number of such else if branches is allowed.

if (pinFiveInput < 500)
{
  // do Thing A
}
else if (pinFiveInput >= 1000)
{
  // do Thing B
}
else
{
  // do Thing C
}
Another way to express branching, mutually exclusive tests, is with the switch case
statement.
for
The for statement is used to repeat a block of statements enclosed in curly braces. An
increment counter is usually used to increment and terminate the loop. The for statement is
useful for any repetitive operation, and is often used in combination with arrays to operate on
collections of data/pins.
There are three parts to the for loop header:

SYNTAX
for (initialization; condition; increment)
{
//statement(s);
}
The initialization happens first and exactly once. Each time through the loop, the condition is
tested; if it's true, the statement block and the increment are executed, then the condition is
tested again. When the condition becomes false, the loop ends.

// EXAMPLE USAGE
// slowly make the LED glow brighter
int ledPin = D1; // LED in series with 470 ohm resistor on pin D1

void setup()
{
  // set ledPin as an output
  pinMode(ledPin, OUTPUT);
}

void loop()
{

for (int i=0; i <= 255; i++){

analogWrite(ledPin, i);
delay(10);
}
}
The C for loop is much more flexible than for loops found in some other computer
languages, including BASIC. Any or all of the three header elements may be omitted, although
the semicolons are required. Also the statements for initialization, condition, and increment
can be any valid C statements with unrelated variables, and use any C datatypes including
floats. These types of unusual for statements may provide solutions to some rare
programming problems.
For example, using a multiplication in the increment line will generate a logarithmic
progression:

for(int x = 2; x < 100; x = x * 1.5)
{
  Serial.print(x);
}
//Generates: 2,3,4,6,9,13,19,28,42,63,94
Another example, fade an LED up and down with one for loop:

// slowly fade the LED up and down
int ledPin = D1; // LED in series with 470 ohm resistor on pin D1

void setup()
{
  // set ledPin as an output
  pinMode(ledPin, OUTPUT);
}

void loop()
{
  int x = 1;
  for (int i = 0; i > -1; i = i + x)
  {
    analogWrite(ledPin, i);
    if (i == 255) x = -1; // switch direction at peak
    delay(10);
  }
}

switch case
Like if statements, switch ... case controls the flow of programs by allowing programmers to

specify different code that should be executed in various conditions. In particular, a switch
statement compares the value of a variable to the values specified in case statements. When
a case statement is found whose value matches that of the variable, the code in that case
statement is run.
The break keyword exits the switch statement, and is typically used at the end of each case.
Without a break statement, the switch statement will continue executing the following
expressions ("falling-through") until a break, or the end of the switch statement is reached.

SYNTAX

switch (var)
{

case label:

// statements
break;
case label:
// statements
break;
default:
// statements
}
var is the variable whose value is compared to the various cases; label is a value to compare
the variable to

// EXAMPLE USAGE
switch (var)
{
case 1:
// do something when var equals 1
break;
case 2:
// do something when var equals 2
break;
default:
// if nothing else matches, do the
// default (which is optional)
}

while

while loops will loop continuously, and infinitely, until the expression inside the parentheses
() becomes false. Something must change the tested variable, or the while loop will never
exit. This could be in your code, such as an incremented variable, or an external condition,
such as testing a sensor.

SYNTAX
while(expression)
{
// statement(s)
}
expression is a (boolean) C statement that evaluates to true or false.

// EXAMPLE USAGE
var = 0;
while(var < 200)
{
// do something repetitive 200 times
var++;
}

do... while
The do loop works in the same manner as the while loop, with the exception that the
condition is tested at the end of the loop, so the do loop will always run at least once.

SYNTAX

do
{

// statement block
} while (test condition);

// EXAMPLE USAGE
do
{
  delay(50);         // wait for sensors to stabilize
  x = readSensors(); // check the sensors
} while (x < 100);

break

break is used to exit from a do , for , or while loop, bypassing the normal loop condition. It is
also used to exit from a switch statement.

// EXAMPLE USAGE
for (int x = 0; x < 255; x++)
{
digitalWrite(ledPin, x);

sens = analogRead(sensorPin);
if (sens > threshold)
{
x = 0;
break; // exit for() loop on sensor detect
}
delay(50);
}

continue
The continue statement skips the rest of the current iteration of a loop ( do , for , or while ). It
continues by checking the conditional expression of the loop, and proceeding with any
subsequent iterations.

// EXAMPLE USAGE
for (x = 0; x < 255; x++)
{
if (x > 40 && x < 120) continue;

// create jump in values

digitalWrite(PWMpin, x);
delay(50);
}

return
Terminate a function and return a value from a function to the calling function, if desired.

EXAMPLE
// A function to compare a sensor input to a threshold
int checkSensor()
{
if (analogRead(0) > 400) return 1;
else return 0;
}
The return keyword is handy to test a section of code without having to "comment out" large
sections of possibly buggy code.

void loop()
{

// brilliant code idea to test here

return;

// the rest of a dysfunctional sketch here


// this code will never be executed
}

goto
Transfers program flow to a labeled point in the program

USAGE
label:

goto label; // sends program flow to the label


TIP: The use of goto is discouraged in C programming, and some authors of C programming
books claim that the goto statement is never necessary, but used judiciously, it can simplify
certain programs. The reason that many programmers frown upon the use of goto is that
with the unrestrained use of goto statements, it is easy to create a program with undefined
program flow, which can never be debugged.
With that said, there are instances where a goto statement can come in handy, and simplify
coding. One of these situations is to break out of deeply nested for loops, or if logic blocks,
on a certain condition.

// EXAMPLE USAGE
for(byte r = 0; r < 255; r++) {
for(int g = 255; g > -1; g--) {
for(byte b = 0; b < 255; b++) {
if (analogRead(0) > 250) {
goto bailout;
}
// more statements ...
}
}
}
bailout:
// Code execution jumps here from
// goto bailout; statement

FURTHER SYNTAX
; (semicolon)
Used to end a statement.

int a = 13;
Tip: Forgetting to end a line with a semicolon will result in a compiler error. The error text may
be obvious, and refer to a missing semicolon, or it may not. If an impenetrable or seemingly
illogical compiler error comes up, one of the first things to check is a missing semicolon in
the immediate vicinity, preceding the line at which the compiler complained.
{} (curly braces)
Curly braces (also referred to as just "braces" or as "curly brackets") are a major part of the C
programming language. They are used in several different constructs, outlined below, and
this can sometimes be confusing for beginners.

//The main uses of curly braces


//Functions
void myfunction(datatype argument){
  statement(s)
}
//Loops

while (boolean expression)


{

statement(s)
}

do
{

statement(s)
} while (boolean expression);

for (initialisation; termination condition; incrementing expr)


{

statement(s)
}
//Conditional statements
if (boolean expression)
{
statement(s)
}

else if (boolean expression)


{

statement(s)
}

else
{

statement(s)
}
An opening curly brace "{" must always be followed by a closing curly brace "}". This is a
condition that is often referred to as the braces being balanced.
Beginning programmers, and programmers coming to C from the BASIC language often find
using braces confusing or daunting. After all, the same curly braces replace the RETURN
statement in a subroutine (function), the ENDIF statement in a conditional and the NEXT
statement in a FOR loop.
Because the use of the curly brace is so varied, it is good programming practice to type the
closing brace immediately after typing the opening brace when inserting a construct which
requires curly braces. Then insert some carriage returns between your braces and begin
inserting statements. Your braces, and your attitude, will never become unbalanced.
Unbalanced braces can often lead to cryptic, impenetrable compiler errors that can
sometimes be hard to track down in a large program. Because of their varied usages, braces
are also incredibly important to the syntax of a program and moving a brace one or two lines
will often dramatically affect the meaning of a program.
// (single line comment)

/* */ (multi-line comment)
Comments are lines in the program that are used to inform yourself or others about the way
the program works. They are ignored by the compiler, and not exported to the processor, so
they don't take up any space on the Spark Core.
A comment's only purpose is to help you understand (or remember) how your program works,
or to inform others how your program works. There are two different ways of marking a line
as a comment:

// EXAMPLE USAGE
x = 5; // This is a single line comment. Anything after the slashes is a comment
// to the end of the line
/* this is multiline comment - use it to comment out whole blocks of code
if (gwb == 0) {
// single line comment is OK inside a multiline comment
x = 3;
/* but not another multiline comment - this is invalid */
}
// don't forget the "closing" comment - they have to be balanced!
*/

TIP: When experimenting with code, "commenting out" parts of your program is a convenient
way to remove lines that may be buggy. This leaves the lines in the code, but turns them into
comments, so the compiler just ignores them. This can be especially useful when trying to
locate a problem, or when a program refuses to compile and the compiler error is cryptic or
unhelpful.
#define

#define is a useful C component that allows the programmer to give a name to a constant
value before the program is compiled. Defined constants don't take up any program memory
space on the chip. The compiler will replace references to these constants with the defined
value at compile time.

#define constantName value

Note that the # is necessary.


This can have some unwanted side effects though, if for example, a constant name that had
been #defined is included in some other constant or variable name. In that case the text
would be replaced by the #defined number (or text).

// EXAMPLE USAGE
#define ledPin 3
// The compiler will replace any mention of ledPin with the value 3 at compile time.

In general, the const keyword is preferred for defining constants and should be used instead
of #define.
TIP: There is no semicolon after the #define statement. If you include one, the compiler will
throw cryptic errors further down the page.

#define ledPin 3; // this is an error


Similarly, including an equal sign after the #define statement will also generate a cryptic
compiler error further down the page.

#define ledPin = 3 // this is also an error


#include

#include is used to include outside libraries in your application code. This gives the
programmer access to a large group of standard C libraries (groups of pre-made functions),
and also libraries written especially for Spark Core.
Note that #include, similar to #define, has no semicolon terminator, and the compiler will
yield cryptic error messages if you add one.

ARITHMETIC OPERATORS
= (assignment operator)
Stores the value to the right of the equal sign in the variable to the left of the equal sign.
The single equal sign in the C programming language is called the assignment operator. It
has a different meaning than in algebra class where it indicated an equation or equality. The
assignment operator tells the microcontroller to evaluate whatever value or expression is on
the right side of the equal sign, and store it in the variable to the left of the equal sign.

// EXAMPLE USAGE
int sensVal;              // declare an integer variable named sensVal
sensVal = analogRead(A0); // store the (digitized) input voltage at analog pin A0 in sensVal

TIP: The variable on the left side of the assignment operator ( = sign ) needs to be able to hold
the value stored in it. If it is not large enough to hold a value, the value stored in the variable
will be incorrect.
Don't confuse the assignment operator = (single equal sign) with the comparison operator

== (double equal signs), which evaluates whether two expressions are equal.
+ - * / (addition, subtraction, multiplication, division)
These operators return the sum, difference, product, or quotient (respectively) of the two
operands. The operation is conducted using the data type of the operands, so, for example,
9 / 4 gives 2 since 9 and 4 are ints. This also means that the operation can overflow if the
result is larger than that which can be stored in the data type (e.g. adding 1 to an int with the
value 2,147,483,647 gives -2,147,483,648). If the operands are of different types, the "larger"
type is used for the calculation.
If one of the numbers (operands) are of the type float or of type double, floating point math
will be used for the calculation.

// EXAMPLE USAGES
y = y + 3;
x = x - 7;
i = j * 6;
r = r / 5;

// SYNTAX
result = value1 + value2;
result = value1 - value2;
result = value1 * value2;
result = value1 / value2;

value1 and value2 can be any variable or constant.
TIPS:
Know that integer constants default to int, so some constant calculations may overflow
(e.g. 50 * 50,000,000 will yield a negative result).
Choose variable sizes that are large enough to hold the largest results from your
calculations
Know at what point your variable will "roll over" and also what happens in the other
direction e.g. (0 - 1) OR (0 + 2147483648)
For math that requires fractions, use float variables, but be aware of their drawbacks: large
size, slow computation speeds
Use the cast operator e.g. (int)myFloat to convert one variable type to another on the fly.
% (modulo)
Calculates the remainder when one integer is divided by another. It is useful for keeping a
variable within a particular range (e.g. the size of an array). It is defined so that
a % b == a - ((a / b) * b) .
result = dividend % divisor
dividend is the number to be divided and divisor is the number to divide by.
result is the remainder
The remainder operation can have unexpected behavior when some of the operands are
negative. If the dividend is negative, then the result will be the smallest negative equivalency
class. In other words, when a is negative, (a % b) == (a mod b) - b where (a mod b) follows
the standard mathematical definition of mod. When the divisor is negative, the result is the
same as it would be if it was positive.

// EXAMPLE USAGES
x = 9 % 5;   // x now contains 4
x = 5 % 5;   // x now contains 0
x = 4 % 5;   // x now contains 4
x = 7 % 5;   // x now contains 2
x = -7 % 5;  // x now contains -2
x = 7 % -5;  // x now contains 2
x = -7 % -5; // x now contains -2

// EXAMPLE CODE
// update one value in an array each time through a loop
int values[10];
int i = 0;

void setup() {}

void loop()
{
  values[i] = analogRead(A0);
  i = (i + 1) % 10; // modulo operator rolls over variable
}
TIP: The modulo operator does not work on floats. For floats, an equivalent expression to
a % b is a - (b * ((int)(a / b)))

BOOLEAN OPERATORS
These can be used inside the condition of an if statement.
&& (and)
True only if both operands are true, e.g.

if (digitalRead(D2) == HIGH && digitalRead(D3) == HIGH)


{

// read two switches


// ...
}
//is true only if both inputs are high.

|| (or)
True if either operand is true, e.g.

if (x > 0 || y > 0)
{

// ...
}
//is true if either x or y is greater than 0.

! (not)
True if the operand is false, e.g.

if (!x)
{

// ...
}
//is true if x is false (i.e. if x equals 0).
WARNING: Make sure you don't mistake the boolean AND operator, && (double ampersand)
for the bitwise AND operator & (single ampersand). They are entirely different beasts.
Similarly, do not confuse the boolean || (double pipe) operator with the bitwise OR operator
| (single pipe).
The bitwise not ~ (tilde) looks much different than the boolean not ! (exclamation point or
"bang" as the programmers say) but you still have to be sure which one you want where.

if (a >= 10 && a <= 20){} // true if a is between 10 and 20

BITWISE OPERATORS
& (bitwise and)
The bitwise AND operator in C++ is a single ampersand, &, used between two other integer
expressions. Bitwise AND operates on each bit position of the surrounding expressions
independently, according to this rule: if both input bits are 1, the resulting output is 1,
otherwise the output is 0. Another way of expressing this is:

0 0 1 1    operand1
0 1 0 1    operand2
----------
0 0 0 1    (operand1 & operand2) - returned result

// EXAMPLE USAGE
int a = 92;    // in binary: 0000000001011100
int b = 101;   // in binary: 0000000001100101
int c = a & b; // result: 0000000001000100, or 68 in decimal
One of the most common uses of bitwise AND is to select a particular bit (or bits) from an
integer value, often called masking.
| (bitwise or)
The bitwise OR operator in C++ is the vertical bar symbol, |. Like the & operator, | operates
independently on each bit in its two surrounding integer expressions, but what it does is
different (of course). The bitwise OR of two bits is 1 if either or both of the input bits is 1,
otherwise it is 0. In other words:

0 0 1 1    operand1
0 1 0 1    operand2
----------
0 1 1 1    (operand1 | operand2) - returned result

// EXAMPLE USAGE
int a = 92;    // in binary: 0000000001011100
int b = 101;   // in binary: 0000000001100101
int c = a | b; // result: 0000000001111101, or 125 in decimal

^ (bitwise xor)
There is a somewhat unusual operator in C++ called bitwise EXCLUSIVE OR, also known as
bitwise XOR. (In English this is usually pronounced "eks-or".) The bitwise XOR operator is
written using the caret symbol ^. This operator is very similar to the bitwise OR operator |,
only it evaluates to 0 for a given bit position when both of the input bits for that position are
1:

0 0 1 1    operand1
0 1 0 1    operand2
----------
0 1 1 0    (operand1 ^ operand2) - returned result

Another way to look at bitwise XOR is that each bit in the result is a 1 if the input bits are
different, or 0 if they are the same.

// EXAMPLE USAGE
int x = 12;    // binary: 1100
int y = 10;    // binary: 1010
int z = x ^ y; // binary: 0110, or decimal 6
The ^ operator is often used to toggle (i.e. change from 0 to 1, or 1 to 0) some of the bits in an
integer expression. In a bitwise XOR operation, if there is a 1 in the mask bit, that bit is
inverted; if there is a 0, the bit is not inverted and stays the same.
~ (bitwise not)
The bitwise NOT operator in C++ is the tilde character ~. Unlike & and |, the bitwise NOT
operator is applied to a single operand to its right. Bitwise NOT changes each bit to its
opposite: 0 becomes 1, and 1 becomes 0. For example:

0 1    operand1
----
1 0    ~operand1

int a = 103; // binary: 0000000001100111
int b = ~a;  // binary: 1111111110011000 = -104

You might be surprised to see a negative number like -104 as the result of this operation. This
is because the highest bit in an int variable is the so-called sign bit. If the highest bit is 1, the
number is interpreted as negative. This encoding of positive and negative numbers is
referred to as two's complement. For more information, see the Wikipedia article on two's
complement.
As an aside, it is interesting to note that for any integer x, ~x is the same as -x-1.
At times, the sign bit in a signed integer expression can cause some unwanted surprises.
<< (bitwise left shift), >> (bitwise right shift)
There are two bit shift operators in C++: the left shift operator << and the right shift operator
>>. These operators cause the bits in the left operand to be shifted left or right by the
number of positions specified by the right operand.
More on bitwise math may be found here.

variable << number_of_bits


variable >> number_of_bits
variable can be byte, int, or long; number_of_bits is an integer <= 32

// EXAMPLE USAGE
int a = 5;      // binary: 0000000000000101
int b = a << 3; // binary: 0000000000101000, or 40 in decimal
int c = b >> 3; // binary: 0000000000000101, or back to 5 like we started with

When you shift a value x by y bits (x << y), the leftmost y bits in x are lost, literally shifted out
of existence:

int a = 5;
// binary: 0000000000000101
int b = a << 14; // binary: 0100000000000000 - the first 1 in 101 was discarded

If you are certain that none of the ones in a value are being shifted into oblivion, a simple way
to think of the left-shift operator is that it multiplies the left operand by 2 raised to the right
operand power. For example, to generate powers of 2, the following expressions can be
employed:

1 <<  0 == 1
1 <<  1 == 2
1 <<  2 == 4
1 <<  3 == 8
...
1 <<  8 == 256
1 <<  9 == 512
1 << 10 == 1024

When you shift x right by y bits (x >> y), and the highest bit in x is a 1, the behavior depends
on the exact data type of x. If x is of type int, the highest bit is the sign bit, determining
whether x is negative or not, as we have discussed above. In that case, the sign bit is copied
into lower bits, for esoteric historical reasons:

int x = -16;    // binary: 1111111111110000
int y = x >> 3; // binary: 1111111111111110
This behavior, called sign extension, is often not the behavior you want. Instead, you may wish
zeros to be shifted in from the left. It turns out that the right shift rules are different for
unsigned int expressions, so you can use a typecast to suppress ones being copied from the
left:

int x = -16;                  // binary: 1111111111110000
int y = (unsigned int)x >> 3; // binary: 0001111111111110
If you are careful to avoid sign extension, you can use the right-shift operator >> as a way to
divide by powers of 2. For example:

int x = 1000;
int y = x >> 3; // integer division of 1000 by 8, causing y = 125

COMPOUND OPERATORS
++ (increment), -- (decrement)
Increment or decrement a variable

SYNTAX
x++; // increments x by one and returns the old value of x
++x; // increments x by one and returns the new value of x
x--; // decrements x by one and returns the old value of x
--x; // decrements x by one and returns the new value of x

where x is an integer or long (possibly unsigned)

// EXAMPLE USAGE
x = 2;
y = ++x;
// x now contains 3, y contains 3
y = x--;
// x contains 2 again, y still contains 3

compound arithmetic
+= (compound addition)
-= (compound subtraction)
*= (compound multiplication)
/= (compound division)
Perform a mathematical operation on a variable with another constant or variable. The += (et
al) operators are just a convenient shorthand for the expanded syntax.

SYNTAX
x += y; // equivalent to the expression x = x + y;
x -= y; // equivalent to the expression x = x - y;
x *= y; // equivalent to the expression x = x * y;
x /= y; // equivalent to the expression x = x / y;
x can be any variable type; y can be any variable type or constant

// EXAMPLE USAGE
x = 2;
x += 4;  // x now contains 6
x -= 3;  // x now contains 3
x *= 10; // x now contains 30
x /= 2;  // x now contains 15

&= (compound bitwise and)
The compound bitwise AND operator (&=) is often used with a variable and a constant to force
particular bits in a variable to the LOW state (to 0). This is often referred to in programming
guides as "clearing" or "resetting" bits.

SYNTAX
x &= y; // equivalent to x = x & y;

x can be a char, int or long variable; y can be an integer constant, char, int, or long

0 0 1 1    operand1
0 1 0 1    operand2
----------
0 0 0 1    (operand1 & operand2) - returned result

Bits that are "bitwise ANDed" with 0 are cleared to 0, so if myByte is a byte variable,
myByte & B00000000 == 0;
Bits that are "bitwise ANDed" with 1 are unchanged, so myByte & B11111111 == myByte;
Note: because we are dealing with bits in a bitwise operator - it is convenient to use the
binary formatter with constants. The numbers are still the same value in other
representations, they are just not as easy to understand. Also, B00000000 is shown for clarity,
but zero in any number format is zero (hmmm something philosophical there?)
Consequently - to clear (set to zero) bits 0 & 1 of a variable, while leaving the rest of the
variable unchanged, use the compound bitwise AND operator (&=) with the constant
B11111100

1 0 1 0 1 0 1 0    variable
1 1 1 1 1 1 0 0    mask
---------------
1 0 1 0 1 0 0 0    variable unchanged in the high six bits, low two bits cleared
Here is the same representation with the variable's bits replaced with the symbol x

x x x x x x x x    variable
1 1 1 1 1 1 0 0    mask
---------------
x x x x x x 0 0    variable unchanged in the high six bits, low two bits cleared
So if: myByte = B10101010; then myByte &= B11111100; leaves myByte == B10101000;

|= (compound bitwise or)


The compound bitwise OR operator (|=) is often used with a variable and a constant to "set"
(set to 1) particular bits in a variable.

SYNTAX
x |= y;

// equivalent to x = x | y;

x can be a char, int or long variable; y can be an integer constant, char, int, or long

0 0 1 1    operand1
0 1 0 1    operand2
----------
0 1 1 1    (operand1 | operand2) - returned result

Bits that are "bitwise ORed" with 0 are unchanged, so if myByte is a byte variable,
myByte | B00000000 == myByte;
Bits that are "bitwise ORed" with 1 are set to 1, so: myByte | B11111111 == B11111111;
Consequently - to set bits 0 & 1 of a variable, while leaving the rest of the variable unchanged,
use the compound bitwise OR operator (|=) with the constant B00000011

1 0 1 0 1 0 1 0    variable
0 0 0 0 0 0 1 1    mask
---------------
1 0 1 0 1 0 1 1    variable unchanged in the high six bits, low two bits set

Here is the same representation with the variable's bits replaced with the symbol x

x x x x x x x x    variable
0 0 0 0 0 0 1 1    mask
---------------
x x x x x x 1 1    variable unchanged in the high six bits, low two bits set

So if: myByte = B10101010; then myByte |= B00000011; leaves myByte == B10101011;

STRING CLASS
The String class allows you to use and manipulate strings of text in more complex ways than
character arrays do. You can concatenate Strings, append to them, search for and replace
substrings, and more. It takes more memory than a simple character array, but it is also more
useful.
For reference, character arrays are referred to as strings with a small s, and instances of the
String class are referred to as Strings with a capital S. Note that constant strings, specified in
"double quotes" are treated as char arrays, not instances of the String class.
String()
Constructs an instance of the String class. There are multiple versions that construct Strings
from different data types (i.e. format them as sequences of characters), including:
a constant string of characters, in double quotes (i.e. a char array)
a single constant character, in single quotes
another instance of the String object
a constant integer or long integer
a constant integer or long integer, using a specified base
an integer or long integer variable
an integer or long integer variable, using a specified base
Constructing a String from a number results in a string that contains the ASCII representation
of that number. The default is base ten, so

String thisString = String(13) gives you the String "13". You can use other bases, however.
For example, String thisString = String(13, HEX) gives you the String "D", which is the
hexadecimal representation of the decimal value 13. Or if you prefer binary, String

thisString = String(13, BIN) gives you the String "1101", which is the binary representation
of 13.

SYNTAX:

String(val)
String(val, base)
Parameters:
val: a variable to format as a String - string, char, byte, int, long, unsigned int, unsigned
long
base (optional) - the base in which to format an integral value
Returns: an instance of the String class

// EXAMPLES
String stringOne = "Hello String";                    // using a constant String
String stringOne = String('a');                       // converting a constant char into a String
String stringTwo = String("This is a string");        // converting a constant string into a String object
String stringOne = String(stringTwo + " with more");  // concatenating two strings
String stringOne = String(13);                        // using a constant integer
String stringOne = String(analogRead(0), DEC);        // using an int and a base
String stringOne = String(45, HEX);                   // using an int and a base (hexadecimal)
String stringOne = String(255, BIN);                  // using an int and a base (binary)
String stringOne = String(millis(), DEC);             // using a long and a base

charAt()
Access a particular character of the String.

SYNTAX:

string.charAt(n)
Parameters:

string : a variable of type String


n : the character to access
Returns: the n'th character of the String
compareTo()
Compares two Strings, testing whether one comes before or after the other, or whether
they're equal. The strings are compared character by character, using the ASCII values of the
characters. That means, for example, that 'a' comes before 'b' but after 'A'. Numbers come
before letters.

SYNTAX:

string.compareTo(string2)
Parameters:
string: a variable of type String
string2: another variable of type String
Returns:

a negative number: if string comes before string2
0: if string equals string2
a positive number: if string comes after string2
concat()
Combines, or concatenates two strings into one new String. The second string is appended to
the first, and the result is placed in a new String.

SYNTAX:

string.concat(string, string2)
Parameters:
string, string2: variables of type String
Returns: new String that is the combination of the original two Strings
endsWith()
Tests whether or not a String ends with the characters of another String.

SYNTAX:

string.endsWith(string2)
Parameters:
string: a variable of type String
string2: another variable of type String
Returns:
true: if string ends with the characters of string2
false: otherwise
equals()
Compares two strings for equality. The comparison is case-sensitive, meaning the String
"hello" is not equal to the String "HELLO".

SYNTAX:

string.equals(string2)
Parameters:
string, string2: variables of type String
Returns:
true: if string equals string2
false: otherwise
equalsIgnoreCase()
Compares two strings for equality. The comparison is not case-sensitive, meaning the
String("hello") is equal to the String("HELLO").

SYNTAX:

string.equalsIgnoreCase(string2)
Parameters:
string, string2: variables of type String
Returns:
true: if string equals string2 (ignoring case)
false: otherwise
getBytes()
Copies the string's characters to the supplied buffer.

SYNTAX:

string.getBytes(buf, len)
Parameters:
string: a variable of type String
buf: the buffer to copy the characters into (byte [])
len: the size of the buffer (unsigned int)
Returns: None

indexOf()
Locates a character or String within another String. By default, searches from the beginning
of the String, but can also start from a given index, allowing for the locating of all instances of
the character or String.

SYNTAX:

string.indexOf(val)
string.indexOf(val, from)
Parameters:
string: a variable of type String
val: the value to search for - char or String
from: the index to start the search from
Returns: The index of val within the String, or -1 if not found.
lastIndexOf()
Locates a character or String within another String. By default, searches from the end of the
String, but can also work backwards from a given index, allowing for the locating of all
instances of the character or String.

SYNTAX:

string.lastIndexOf(val)
string.lastIndexOf(val, from)
Parameters:
string: a variable of type String
val: the value to search for - char or String
from: the index to work backwards from
Returns: The index of val within the String, or -1 if not found.
length()
Returns the length of the String, in characters. (Note that this doesn't include a trailing null
character.)

SYNTAX:

string.length()
Parameters:
string: a variable of type String
Returns: The length of the String in characters.
replace()
The String replace() function allows you to replace all instances of a given character with
another character. You can also use replace to replace substrings of a string with a different
substring.

SYNTAX:

string.replace(substring1, substring2)
Parameters:
string: a variable of type String
substring1: another variable of type String
substring2: another variable of type String
Returns: another String containing the new string with replaced characters.
reserve()
The String reserve() function allows you to allocate a buffer in memory for manipulating
strings.

SYNTAX:

string.reserve(size)
Parameters:
size: unsigned int declaring the number of bytes in memory to save for string
manipulation
Returns: None

//EXAMPLE

String myString;

void setup() {
  // initialize serial and wait for port to open:
  Serial.begin(9600);
  while (!Serial) {
    ; // wait for serial port to connect. Needed for Leonardo only
  }
  myString.reserve(26);
  myString = "i=";
  myString += "1234";
  myString += ", is that ok?";

  // print the String:
  Serial.println(myString);
}

void loop() {
  // nothing to do here
}

setCharAt()
Sets a character of the String. Has no effect on indices outside the existing length of the
String.

SYNTAX:

string.setCharAt(index, c)
Parameters:
string: a variable of type String
index: the index to set the character at
c: the character to store to the given location
Returns: None
startsWith()
Tests whether or not a String starts with the characters of another String.

SYNTAX:

string.startsWith(string2)

Parameters:
string, string2: variables of type String
Returns:
true: if string starts with the characters of string2
false: otherwise
substring()
Get a substring of a String. The starting index is inclusive (the corresponding character is
included in the substring), but the optional ending index is exclusive (the corresponding
character is not included in the substring). If the ending index is omitted, the substring
continues to the end of the String.

SYNTAX:

string.substring(from)
string.substring(from, to)
Parameters:
string: a variable of type String
from: the index to start the substring at
to (optional): the index to end the substring before
Returns: the substring
toCharArray()
Copies the string's characters to the supplied buffer.

SYNTAX:

string.toCharArray(buf, len)
Parameters:
string: a variable of type String
buf: the buffer to copy the characters into (char [])
len: the size of the buffer (unsigned int)
Returns: None

toInt()
Converts a valid String to an integer. The input string should start with an integral number. If
the string contains non-integral numbers, the function will stop performing the conversion.

SYNTAX:

string.toInt()
Parameters:
string: a variable of type String
Returns: long (If no valid conversion could be performed because the string doesn't start with
an integral number, a zero is returned.)
toLowerCase()
Get a lower-case version of a String. toLowerCase() modifies the string in place.

SYNTAX:

string.toLowerCase()
Parameters:
string: a variable of type String
Returns: None
toUpperCase()
Get an upper-case version of a String. toUpperCase() modifies the string in place.

SYNTAX:

string.toUpperCase()
Parameters:
string: a variable of type String
Returns: None

trim()
Get a version of the String with any leading and trailing whitespace removed.

SYNTAX:

string.trim()
Parameters:
string: a variable of type String
Returns: None

VARIABLES
CONSTANTS
HIGH | LOW
When reading or writing to a digital pin there are only two possible values a pin can take or be set to: HIGH and LOW.

HIGH
The meaning of HIGH (in reference to a pin) is somewhat different depending on whether a
pin is set to an INPUT or OUTPUT . When a pin is configured as an INPUT with pinMode, and
read with digitalRead, the microcontroller will report HIGH if a voltage of 3 volts or more is
present at the pin.
A pin may also be configured as an INPUT with pinMode, and subsequently made HIGH with
digitalWrite. This will enable the internal 40K pullup resistor, which will steer the input pin to a
HIGH reading unless it is pulled LOW by external circuitry. This is how INPUT_PULLUP works as
well.
When a pin is configured to OUTPUT with pinMode , and set to HIGH with digitalWrite , the pin
is at 3.3 volts. In this state it can source current, e.g. light an LED that is connected through a
series resistor to ground, or to another pin configured as an output, and set to LOW.

LOW
The meaning of LOW also has a different meaning depending on whether a pin is set to INPUT
or OUTPUT . When a pin is configured as an INPUT with pinMode , and read with digitalRead ,

the microcontroller will report LOW if a voltage of 1.5 volts or less is present at the pin.
When a pin is configured to OUTPUT with pinMode, and set to LOW with digitalWrite, the pin is
at 0 volts. In this state it can sink current, e.g. light an LED that is connected through a series
resistor to +3.3 volts, or to another pin configured as an output, and set to HIGH.
INPUT, OUTPUT, INPUT_PULLUP, INPUT_PULLDOWN
Digital pins can be used as INPUT, INPUT_PULLUP, INPUT_PULLDOWN or OUTPUT. Changing a
pin with pinMode() changes the electrical behavior of the pin.
Pins Configured as INPUT
The Spark Core's pins configured as INPUT with pinMode() are said to be in a high-impedance
state. Pins configured as INPUT make extremely small demands on the circuit
that they are sampling, equivalent to a series resistor of 100 Megohms in front of the pin. This
makes them useful for reading a sensor, but not powering an LED.
If you have your pin configured as an INPUT , you will want the pin to have a reference to
ground, often accomplished with a pull-down resistor (a resistor going to ground).
Pins Configured as INPUT_PULLUP or INPUT_PULLDOWN
The STM32 microcontroller has internal pull-up resistors (resistors that connect to power
internally) and pull-down resistors (resistors that connect to ground internally) that you can
access. If you prefer to use these instead of external resistors, you can use these arguments in
pinMode().
Pins Configured as OUTPUT
Pins configured as OUTPUT with pinMode() are said to be in a low-impedance state. This
means that they can provide a substantial amount of current to other circuits. STM32 pins
can source (provide positive current) or sink (provide negative current) up to 20 mA
(milliamps) of current to other devices/circuits. This makes them useful for powering LEDs
but useless for reading sensors. Pins configured as outputs can also be damaged or
destroyed if short circuited to either ground or 3.3 volt power rails. The amount of current
provided by the pin is also not enough to power most relays or motors, and some interface
circuitry will be required.
true | false
There are two constants used to represent truth and falsity in the Arduino language: true,
and false.

false
false is the easier of the two to define. false is defined as 0 (zero).

true
true is often said to be defined as 1, which is correct, but true has a wider definition. Any
integer which is non-zero is true, in a Boolean sense. So -1, 2 and -200 are all defined as true,
too, in a Boolean sense.
Note that the true and false constants are typed in lowercase unlike HIGH, LOW, INPUT, &

OUTPUT.

DATA TYPES
Note: The Spark Core uses a 32-bit ARM based microcontroller, and hence the datatype
lengths are different from a standard 8-bit system (e.g. Arduino Uno).
void
The void keyword is used only in function declarations. It indicates that the function is
expected to return no information to the function from which it was called.

//EXAMPLE
// actions are performed in the functions "setup" and "loop"
// but no information is reported to the larger program

void setup()
{

// ...
}

void loop()
{

// ...
}

boolean
A boolean holds one of two values, true or false . (Each boolean variable occupies one byte
of memory.)

//EXAMPLE

int LEDpin = D0;     // LED on D0
int switchPin = A0;  // momentary switch on A0, other side connected to ground

boolean running = false;

void setup()
{
  pinMode(LEDpin, OUTPUT);
  pinMode(switchPin, INPUT_PULLUP);
}

void loop()
{
  if (digitalRead(switchPin) == LOW)
  {
    // switch is pressed - pullup keeps pin high normally
    delay(100);                    // delay to debounce switch
    running = !running;            // toggle running variable
    digitalWrite(LEDpin, running); // indicate via LED
  }
}

char
A data type that takes up 1 byte of memory that stores a character value. Character literals
are written in single quotes, like this: 'A' (for multiple characters - strings - use double quotes:
"ABC"). Characters are stored as numbers however. You can see the specific encoding in the
ASCII chart. This means that it is possible to do arithmetic on characters, in which the ASCII
value of the character is used (e.g. 'A' + 1 has the value 66, since the ASCII value of the capital
letter A is 65). See Serial.println reference for more on how characters are translated to
numbers. The char datatype is a signed type, meaning that it encodes numbers from -128 to
127. For an unsigned, one-byte (8 bit) data type, use the byte data type.

//EXAMPLE

char myChar = 'A';
char myChar = 65; // both are equivalent

unsigned char
An unsigned data type that occupies 1 byte of memory. Same as the byte datatype. The
unsigned char datatype encodes numbers from 0 to 255. For consistency of Arduino
programming style, the byte data type is to be preferred.

//EXAMPLE

unsigned char myChar = 240;

byte
A byte stores an 8-bit unsigned number, from 0 to 255.

//EXAMPLE
byte b = 0x11;

int
Integers are your primary data-type for number storage. On the Core, an int stores a 32-bit (4-byte) value. This yields a range of -2,147,483,648 to 2,147,483,647 (minimum value of -2^31
and a maximum value of (2^31) - 1). int's store negative numbers with a technique called 2's
complement math. The highest bit, sometimes referred to as the "sign" bit, flags the number
as a negative number. The rest of the bits are inverted and 1 is added.
Other variations:

int32_t : 32 bit signed integer
int16_t : 16 bit signed integer
int8_t  : 8 bit signed integer
unsigned int
The Core stores a 4 byte (32-bit) value, ranging from 0 to 4,294,967,295 (2^32 - 1). The
difference between unsigned ints and (signed) ints, lies in the way the highest bit, sometimes
referred to as the "sign" bit, is interpreted.
Other variations:

uint32_t : 32 bit unsigned integer
uint16_t : 16 bit unsigned integer
uint8_t  : 8 bit unsigned integer
word

word stores a 32-bit unsigned number, from 0 to 4,294,967,295.


long
Long variables are extended size variables for number storage, and store 32 bits (4 bytes),
from -2,147,483,648 to 2,147,483,647.
unsigned long

Unsigned long variables are extended size variables for number storage, and store 32 bits (4
bytes). Unlike standard longs unsigned longs won't store negative numbers, making their
range from 0 to 4,294,967,295 (2^32 - 1).
short
A short is a 16-bit data-type. This yields a range of -32,768 to 32,767 (minimum value of -2^15
and a maximum value of (2^15) - 1).
float
Datatype for floating-point numbers, a number that has a decimal point. Floating-point
numbers are often used to approximate analog and continuous values because they have
greater resolution than integers. Floating-point numbers can be as large as 3.4028235E+38
and as low as -3.4028235E+38. They are stored as 32 bits (4 bytes) of information.
Floating point numbers are not exact, and may yield strange results when compared. For
example 6.0 / 3.0 may not equal 2.0. You should instead check that the absolute value of the
difference between the numbers is less than some small number. Floating point math is also
much slower than integer math in performing calculations, so should be avoided if, for
example, a loop has to run at top speed for a critical timing function. Programmers often go to
some lengths to convert floating point calculations to integer math to increase speed.
double
Double precision floating point number. On the Core, doubles have 8-byte (64 bit) precision.
string - char array
A string can be made out of an array of type char and null-terminated.

// EXAMPLES

char Str1[15];
char Str2[8] = {'a', 'r', 'd', 'u', 'i', 'n', 'o'};
char Str3[8] = {'a', 'r', 'd', 'u', 'i', 'n', 'o', '\0'};
char Str4[ ] = "arduino";
char Str5[8] = "arduino";
char Str6[15] = "arduino";

Possibilities for declaring strings:
Declare an array of chars without initializing it, as in Str1
Declare an array of chars (with one extra char) and the compiler will add the required null character, as in Str2
Explicitly add the null character, Str3
Initialize with a string constant in quotation marks; the compiler will size the array to fit the string constant and a terminating null character, Str4
Initialize the array with an explicit size and string constant, Str5
Initialize the array, leaving extra space for a larger string, Str6
Null termination: Generally, strings are terminated with a null character (ASCII code 0). This
allows functions (like Serial.print()) to tell where the end of a string is. Otherwise, they would
continue reading subsequent bytes of memory that aren't actually part of the string. This
means that your string needs to have space for one more character than the text you want it
to contain. That is why Str2 and Str5 need to be eight characters, even though "arduino" is
only seven - the last position is automatically filled with a null character. Str4 will be
automatically sized to eight characters, one for the extra null. In Str3, we've explicitly included
the null character (written '\0') ourselves. Note that it's possible to have a string without a
final null character (e.g. if you had specified the length of Str2 as seven instead of eight). This
will break most functions that use strings, so you shouldn't do it intentionally. If you notice
something behaving strangely (operating on characters not in the string), however, this could
be the problem.
Single quotes or double quotes? Strings are always defined inside double quotes ("Abc") and
characters are always defined inside single quotes('A').
Wrapping long strings

//You can wrap long strings like this:


char myString[] = "This is the first line"
" this is the second line"
" etcetera";
Arrays of strings: It is often convenient, when working with large amounts of text, such as a
project with an LCD display, to setup an array of strings. Because strings themselves are
arrays, this is actually an example of a two-dimensional array. In the code below, the
asterisk after the datatype char "char*" indicates that this is an array of "pointers". All array
names are actually pointers, so this is required to make an array of arrays. Pointers are one of
the more esoteric parts of C for beginners to understand, but it isn't necessary to understand
pointers in detail to use them effectively here.

//EXAMPLE

char* myStrings[] = {"This is string 1", "This is string 2",
                     "This is string 3", "This is string 4",
                     "This is string 5", "This is string 6"};

void setup(){
  Serial.begin(9600);
}

void loop(){
  for (int i = 0; i < 6; i++) {
    Serial.println(myStrings[i]);
    delay(500);
  }
}

String - object
More info can be found here.
array
An array is a collection of variables that are accessed with an index number.
Creating (Declaring) an Array: All of the methods below are valid ways to create (declare) an
array.

int myInts[6];
int myPins[] = {2, 4, 8, 3, 6};
int mySensVals[6] = {2, 4, -8, 3, 2};
char message[6] = "hello";
You can declare an array without initializing it as in myInts.
In myPins we declare an array without explicitly choosing a size. The compiler counts the
elements and creates an array of the appropriate size. Finally you can both initialize and size
your array, as in mySensVals. Note that when declaring an array of type char, one more
element than your initialization is required, to hold the required null character.
Accessing an Array: Arrays are zero indexed, that is, referring to the array initialization above,
the first element of the array is at index 0, hence

mySensVals[0] == 2, mySensVals[1] == 4 , and so forth. It also means that in an array with ten
elements, index nine is the last element. Hence:

int myArray[10] = {9,3,2,4,3,2,7,8,9,11};

// myArray[9]  contains 11
// myArray[10] is invalid and contains random information (other memory address)

For this reason you should be careful in accessing arrays. Accessing past the end of an array
(using an index number greater than your declared array size - 1) is reading from memory
that is in use for other purposes. Reading from these locations is probably not going to do
much except yield invalid data. Writing to random memory locations is definitely a bad idea
and can often lead to unhappy results such as crashes or program malfunction. This can also
be a difficult bug to track down. Unlike BASIC or Java, the C compiler does no checking to see
if array access is within legal bounds of the array size that you have declared.
To assign a value to an array: mySensVals[0] = 10;
To retrieve a value from an array: x = mySensVals[4];
Arrays and FOR Loops: Arrays are often manipulated inside for loops, where the loop
counter is used as the index for each array element. To print the elements of an array over
the serial port, you could do something like the following code example. Take special note to a
MACRO called arraySize() which is used to determine the number of elements in myPins . In
this case, arraySize() returns 5, which causes our for loop to terminate after 5 iterations. Also
note that arraySize() will not return the correct answer if passed a pointer to an array.

int myPins[] = {2, 4, 8, 3, 6};


for (int i = 0; i < arraySize(myPins); i++) {
Serial.println(myPins[i]);
}


SPARK CLOUD API


INTRODUCTION
Authentication
How to send your access token
Generate a new access token
List all your tokens
Deleting an access token
Errors
Versioning
BASIC FUNCTIONS
Controlling a Core
Reading data from a Core
Variables
Events
Verifying and Flashing new firmware
Flash a Core with source code
Flash a Core with a pre-compiled binary

SPARK CLOUD API

INTRODUCTION
The Spark Cloud API is a REST API. REST means a lot of things, but first and foremost it means
that we use the URL in the way that it's intended: as a "Uniform Resource Locator".
In this case, the unique "resource" in question is your Spark Core. Every Spark Core has a URL,
which can be used to GET variables, POST a function call, or PUT new firmware. The variables
and functions that you have written in your firmware are exposed as subresources within the
Spark Core.
All requests to the Spark Core come through our API server using TLS security.

PROTOCOL AND HOST


https://api.spark.io
There are a number of API calls available, which are summarized here, and described in more
detail below.
List devices the currently authenticated user has access to.

GET /v1/devices
Get basic information about the given Core, including the custom variables and functions it
has exposed.

GET /v1/devices/{DEVICE_ID}
Update the Core, including the display name or the firmware (either binary or source).

PUT /v1/devices/{DEVICE_ID}
Request the current value of a variable exposed by the core, e.g.,
GET /v1/devices/0123456789abcdef01234567/temperature

GET /v1/devices/{DEVICE_ID}/{VARIABLE}
Call a function exposed by the core, with arguments passed in request body, e.g.,
POST /v1/devices/0123456789abcdef01234567/brew

POST /v1/devices/{DEVICE_ID}/{FUNCTION}
Open a stream of Server-Sent Events

GET /v1/events[/:event_name]
GET /v1/devices/events[/:event_name]
GET /v1/devices/{DEVICE_ID}/events[/:event_name]

AUTHENTICATION
Just because you've connected your Spark Core to the internet doesn't mean anyone else
should have access to it. Permissions for controlling and communicating with your Spark Core
are managed with OAuth2.

# You type in your terminal


curl https://api.spark.io/v1/devices/0123456789abcdef01234567/brew \
-d access_token=9876987698769876987698769876987698769876
# Response status is 200 OK, which means
# the Core says, "Yes ma'am!"
# Sneaky Pete tries the same thing in his terminal
curl https://api.spark.io/v1/devices/0123456789abcdef01234567/brew \
-d access_token=1234123412341234123412341234123412341234
# Response status is 403 Forbidden, which means
# the Core says, "You ain't the boss of me."
# LESSON: Protect your access token.
Your access token can be found in the Spark Build web IDE on the 'Settings' page.
Spark Build
When you connect your Spark Core to the Cloud for the first time, it will be associated with
your account, and only you will have permission to control your Spark Core using your
access token.
If you need to transfer ownership of the Core to another user, the easiest way is to log
into the Spark Build site, click on the 'Cores' drawer on the bottom left, click the
small 'right arrow' by the Core you want to release, and then click "Remove Core". The
person you are transferring the Core to can then go through the normal claiming process.
In the future, you will be able to provision access to your Spark Core to other accounts and to
third-party app developers; however, these features are not yet available.
How to send your access token

There are three ways to send your access token in a request.


In an HTTP Authorization header (always works)
In the URL query string (only works with GET requests)
In the request body (only works for POST & PUT when body is URL-encoded)
In these docs, you'll see example calls written using a terminal program called curl which may
already be available on your machine.
Example commands will always start with curl .
To send a custom header using curl, use the -H flag. The access token is called a
"Bearer" token and goes in the standard HTTP Authorization header.

curl -H "Authorization: Bearer 38bb7b318cc6898c80317decb34525844bc9db55"


https://...
The query string is the part of the URL after a ? question mark. To send the access token in
the query string just add access_token=38bb... . Because your terminal thinks the question
mark is special, we escape it with a backslash.

curl https://api.spark.io/v1/devices\?access_token=38bb7b318cc6898c80317decb34525844bc9db55

The request body is how form contents are submitted on the web. Using curl, each parameter
you send, including the access token is preceded by a -d flag. By default, if you add a -d flag,
curl assumes that the request is a POST. If you need a different request type, you have to
specifically say so with the -X flag, for example -X PUT .

curl -d access_token=38bb7b318cc6898c80317decb34525844bc9db55
https://...
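The three delivery mechanisms above can also be sketched with Python's standard library. This is a local illustration only: the Request objects are constructed but never sent, and the token is the example value from these docs.

```python
from urllib.parse import urlencode
from urllib.request import Request

TOKEN = "38bb7b318cc6898c80317decb34525844bc9db55"
BASE = "https://api.spark.io/v1/devices"

# 1. HTTP Authorization header (works for every request type)
req_header = Request(BASE, headers={"Authorization": "Bearer " + TOKEN})

# 2. Query string (GET requests only)
req_query = Request(BASE + "?" + urlencode({"access_token": TOKEN}))

# 3. URL-encoded request body; attaching a body makes the request a
#    POST, mirroring curl's -d behaviour
req_body = Request(BASE, data=urlencode({"access_token": TOKEN}).encode())

print(req_header.get_header("Authorization"))
print(req_query.full_url)
print(req_body.get_method())  # POST
```

As with the curl examples, only one of the three mechanisms is needed per request.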

Generate a new access token

POST /oauth/token
# Using curl in your terminal
curl https://api.spark.io/oauth/token -u spark:spark \
-d grant_type=password -d username=joe@example.com -d password=SuperSecret
# A typical JSON response will look like this
{
"access_token": "254406f79c1999af65a7df4388971354f85cfee9",
"token_type": "bearer",

"expires_in": 7776000
}

When creating a new access token, you need to specify several additional pieces of info.
You must give a valid client ID and password in HTTP Basic Auth. Any client ID will work right
now, so we suggest spark:spark . In the POST body, you need three parameters:
grant_type=password
username=YOUR_EMAIL@ADDRE.SS
password=YOUR_PASSWORD
For now, Spark Build will list the single most recently created token.
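The JSON response shown above can be consumed like this (a sketch using the example body from these docs; expires_in is a lifetime in seconds):

```python
import json

# Example response body from POST /oauth/token, as shown above
response_body = """{
  "access_token": "254406f79c1999af65a7df4388971354f85cfee9",
  "token_type": "bearer",
  "expires_in": 7776000
}"""

token = json.loads(response_body)
days_valid = token["expires_in"] // 86400  # 7776000 seconds = 90 days
print(token["access_token"], token["token_type"], days_valid)
```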
List all your tokens

GET /v1/access_tokens
# Using curl in your terminal
curl https://api.spark.io/v1/access_tokens -u joe@example.com:SuperSecret
# Example JSON response
[
{
"token": "b5b901e8760164e134199bc2c3dd1d228acf2d98",
"expires_at": "2014-04-27T02:20:36.177Z",
"client": "spark"
},
{
"token": "ba54b6bb71a43b7612bdc7c972914604a078892b",
"expires_at": "2014-04-27T06:31:08.991Z",
"client": "spark"
}
]
You can list all your access tokens by passing your email address and password in an HTTP
Basic Auth header to /v1/access_tokens .
Deleting an access token

DELETE /v1/access_tokens/:token
# Using curl in your terminal
curl https://api.spark.io/v1/access_tokens/b5b901e8760164e134199bc2c3dd1d228acf2d98 \
-u joe@example.com:SuperSecret -X DELETE

# Example JSON response


{
"ok": true
}

If you have a bunch of unused tokens and want to clean up, you can delete tokens.
Just as for listing them, send your username and password in an HTTP Basic Auth header.

ERRORS
The Spark Cloud uses traditional HTTP response codes to provide feedback from the Core
regarding the validity of the request and its success or failure. As with other HTTP resources,
response codes in the 200 range indicate success; codes in the 400 range indicate failure due
to the information provided; codes in the 500 range indicate failure within Spark's server
infrastructure.

200 OK - API call successfully delivered to the Core and executed.


400 Bad Request - Your request is not understood by the Core,
or the requested subresource (variable/function) has not been exposed.
401 Unauthorized - Your access token is not valid.
403 Forbidden - Your access token is not authorized to interface with this Core.
404 Not Found - The Core you requested is not currently connected to the cloud.
408 Timed Out - The cloud experienced a significant delay when trying to reach the Core.
500 Server errors - Fail whale. Something's wrong on our end.
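For friendlier error reporting in a client, the codes above can be turned into a small lookup (the one-line explanations are paraphrased from the list above; the helper name is ours):

```python
# Maps Spark Cloud status codes to short explanations (paraphrased
# from the docs' error list)
SPARK_STATUS = {
    200: "OK - API call delivered to the Core and executed",
    400: "Bad Request - request not understood, or subresource not exposed",
    401: "Unauthorized - access token not valid",
    403: "Forbidden - token not authorized for this Core",
    404: "Not Found - Core not currently connected to the cloud",
    408: "Timed Out - the cloud could not reach the Core in time",
}

def explain(code: int) -> str:
    # Any 5xx is a server-side failure on Spark's end
    if 500 <= code < 600:
        return "Server error - something's wrong on Spark's end"
    return SPARK_STATUS.get(code, "Unexpected status %d" % code)

print(explain(404))
print(explain(503))
```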

VERSIONING
The API endpoints all start with /v1 to represent the first official version of the Spark Cloud
API. The existing API is stable, and we may add new endpoints with the /v1 prefix.
If in the future we make backwards-incompatible changes to the API, the new endpoints will
start with something different, probably /v2 . If we decide to deprecate any /v1 endpoints,
we'll give you lots of notice and a clear upgrade path.

BASIC FUNCTIONS
CONTROLLING A CORE
To control a Core, you must first define and expose functions in the Core firmware. You then
call these functions remotely using the Spark Cloud API.

/* FIRMWARE */
int brew(String args)
{
// parse brew temperature and duration from args
// ...
activate_heating_element(temperature);
start_water_pump(duration_seconds);

// int status_code = ...
return status_code;
}

Let's say, as an example, you create a Spark-powered coffeemaker. Within the firmware, we
might expect to see something like this brew function.

/* FIRMWARE */
void setup()
{
Spark.function("brew", brew);
}
In a normal coffeemaker, brew might be called when a button on the front of the coffeemaker
is pressed.
To make this function available through the Spark Cloud, simply add a Spark.function call to
your setup() .
This exposes the brew function so that it can be called through the API. When this code is
present in the firmware, you can make this API call.

POST /v1/devices/{DEVICE_ID}/{FUNCTION}
# EXAMPLE REQUEST
curl https://api.spark.io/v1/devices/0123456789abcdef01234567/brew \
-d access_token=1234123412341234123412341234123412341234 \
-d "args=202,230"

The API request will be routed to the Spark Core and will run your brew function. The
response will have a return_value key containing the integer returned by brew .

// EXAMPLE RESPONSE
{
"id": "0123456789abcdef01234567",
"name": "prototype99",
"connected": true,
"return_value": 42
}
All Spark functions take a String as the only argument and must return a 32-bit integer.
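From the client side, that contract looks like this (a sketch: the POST is only constructed, not sent, and the response body is the example above):

```python
import json
from urllib.parse import urlencode
from urllib.request import Request

# Build the POST /v1/devices/{DEVICE_ID}/{FUNCTION} request; the whole
# argument list travels as one String in "args" - the firmware's brew()
# is responsible for splitting "202,230" itself
req = Request(
    "https://api.spark.io/v1/devices/0123456789abcdef01234567/brew",
    data=urlencode({
        "access_token": "1234123412341234123412341234123412341234",
        "args": "202,230",
    }).encode(),
)

# The example response from the docs; return_value is brew()'s integer
response = json.loads(
    '{"id": "0123456789abcdef01234567", "name": "prototype99", '
    '"connected": true, "return_value": 42}'
)
print(req.get_method(), response["return_value"])
```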

READING DATA FROM A CORE


Variables
Imagine you have a temperature sensor attached to the A0 pin of your Spark Core and your
firmware has exposed the value of the sensor as a Spark variable.

/* FIRMWARE */
int temperature = 0;

void setup()
{
Spark.variable("temperature", &temperature, INT);
pinMode(A0, INPUT);
}

void loop()
{
temperature = analogRead(A0);
}
You can now make a GET request, even with your browser, to read the sensor at any time. The
API endpoint is /v1/devices/{DEVICE_ID}/{VARIABLE} and as always, you have to include your
access token.

# EXAMPLE REQUEST IN TERMINAL


# Core ID is 0123456789abcdef01234567
# Your access token is 1234123412341234123412341234123412341234
curl "https://api.spark.io/v1/devices/0123456789abcdef01234567/temperature?access_token=1234123412341234123412341234123412341234"

NOTE: Variable names are truncated after the 12th character: temperature_sensor is
accessible as temperature_
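The truncation rule is just a 12-character cut, which a one-line helper makes concrete (the helper name is ours):

```python
# Spark variable names are truncated after the 12th character
def exposed_name(name: str) -> str:
    return name[:12]

print(exposed_name("temperature_sensor"))  # temperature_
print(exposed_name("temperature"))         # short names pass through
```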
Events
Registering a callback
In the build section of the Spark website, you will be able to register a URL on your own server
to which we will POST each time one of your Spark Cores publishes a certain event. This
feature is still in progress, and will be released later in March.
Subscribing to events
You can make an API call that will open a stream of Server-Sent Events (SSEs). You will make
one API call that opens a connection to the Spark Cloud. That connection will stay open, unlike
normal HTTP calls which end quickly. Very little data will come to you across the connection
unless your Spark Core publishes an event, at which point you will be immediately notified.
To subscribe to an event stream, make a GET request to one of the following endpoints. This
will open a Server-Sent Events (SSE) stream, i.e., a TCP socket that stays open. In each case,
the event name filter in the URI is optional.
SSE resources:
http://dev.w3.org/html5/eventsource/
https://developer.mozilla.org/en-US/docs/Server-sent_events/Using_server-sent_events
http://www.html5rocks.com/en/tutorials/eventsource/basics/
Subscribe to the firehose of public events, plus private events published by devices one
owns:

GET /v1/events[/:event_name]
# EXAMPLE
curl -H "Authorization: Bearer 38bb7b318cc6898c80317decb34525844bc9db55"
https://api.spark.io/v1/events/temperature
Subscribe to all events, public and private, published by devices one owns:

GET /v1/devices/events[/:event_name]
# EXAMPLE
curl -H "Authorization: Bearer 38bb7b318cc6898c80317decb34525844bc9db55"
https://api.spark.io/v1/devices/events/temperature

Subscribe to events from one specific device. If the API user owns the device, then she will
receive all events, public and private, published by that device. If the API user does not own
the device she will only receive public events.

GET /v1/devices/:device_id/events[/:event_name]
# EXAMPLE
curl -H "Authorization: Bearer 38bb7b318cc6898c80317decb34525844bc9db55"
https://api.spark.io/v1/devices/55ff70064939494339432586/events/temperature
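Whatever endpoint you subscribe to, the data arrives in the standard SSE wire format of "event:" and "data:" lines separated by blank lines (see the SSE resources above). A minimal parser sketch, fed a hypothetical temperature event:

```python
def parse_sse(chunk: str):
    """Collect {'event': ..., 'data': ...} dicts from raw SSE text."""
    events, current = [], {}
    for line in chunk.splitlines():
        if line.startswith("event:"):
            current["event"] = line[len("event:"):].strip()
        elif line.startswith("data:"):
            current["data"] = line[len("data:"):].strip()
        elif line == "" and current:
            # A blank line terminates one event
            events.append(current)
            current = {}
    return events

# Hypothetical event as it might arrive on the stream
sample = 'event: temperature\ndata: {"data":"23.4"}\n\n'
print(parse_sse(sample))
```

A real client would read the open socket incrementally; this sketch only shows the framing.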

VERIFYING AND FLASHING NEW FIRMWARE


All your Spark firmware coding can happen entirely in the build section of the website.
However, if you prefer to use your own text editor or IDE, you can! It just means that instead of
hitting the "Flash" or "Verify" buttons, you'll make API calls that reference a file.
Flash a Core with source code
If you have written a source code file that defines setup() and loop() functions, you can
flash it to your Spark Core with an HTTP PUT request.

# HTTP REQUEST DEFINITION


PUT /v1/devices/{DEVICE_ID}
Content-Type: multipart/form-data

Send the source code file as "file" in request body.


The API request should be encoded as multipart/form-data with a file field populated. Your
filename does not matter. In particular, the extension can be .c, .cpp, .ino, or anything else
you prefer.
This API request will submit your firmware to be compiled into a Spark binary, after which, if
compilation was successful, the binary will be flashed to your Core wirelessly.

# EXAMPLE REQUEST IN TERMINAL


# Flash a Core with a file called "my-firmware-app.cpp"
curl -X PUT -F file=@my-firmware-app.cpp \
"https://api.spark.io/v1/devices/0123456789abcdef01234567?access_token=1234123412341234123412341234123412341234"

There are three possible response formats:


A successful response, in which both compilation and flashing succeed.

Note that the LED on your Core will blink magenta while updating.
A failure due to compilation errors.
A failure due to inability to transmit the binary to the core.

// EXAMPLE SUCCESSFUL RESPONSE


{
"ok": true,
"firmware_binary_id": "12345"
}

// EXAMPLE COMPILE FAILURE RESPONSE


{
"ok": false,
"errors": ["Compile error"],
"output": ".... lots of debug output from the compiler..."
}

// EXAMPLE FLASH FAILURE RESPONSE


{
"ok": false,
"firmware_binary_id": "1234567",
"errors": ["Device is not connected."]
}
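The three response shapes can be told apart by inspecting the body (a sketch using the example bodies above; the classification labels are ours):

```python
import json

def flash_outcome(body: str) -> str:
    """Classify a flash response as success, compile failure, or
    delivery failure."""
    r = json.loads(body)
    if r.get("ok"):
        return "flashed"
    # A binary ID with ok=false means compilation succeeded but the
    # binary never reached the Core (e.g. the device is offline)
    if "firmware_binary_id" in r:
        return "compiled but not delivered"
    return "compile failed"

print(flash_outcome('{"ok": true, "firmware_binary_id": "12345"}'))
print(flash_outcome('{"ok": false, "firmware_binary_id": "1234567", '
                    '"errors": ["Device is not connected."]}'))
```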

Flash a Core with a pre-compiled binary


If you want to compile the firmware yourself and send a binary instead of a source file, you
can do that too! Just add file_type=binary to the request body, and we will skip the
compilation stage altogether. The response format will look like those shown above.

# EXAMPLE REQUEST IN TERMINAL TO FLASH A BINARY


curl -X PUT -F file=@my-firmware-app.bin -F file_type=binary \
"https://api.spark.io/v1/devices/0123456789abcdef01234567?access_token=1234123412341234123412341234123412341234"


FLASH APPS WITH SPARK BUILD


Logging In
Web IDE
Keyboard Shortcuts
Spark Apps and Libraries
Flashing Your First App
Account Information
Using Libraries
Contribute a library
Wait, what is firmware?

FLASH APPS WITH SPARK BUILD


LOGGING IN
When you're ready to reprogram your Spark Core, head over to our IDE:
Spark Build

Creating an account is a simple one-step process. When presented with the login screen,
simply enter your email address (careful!), and desired account password. Press the big
friendly "Sign Up" button, and you'll reach the Spark Build home page.

If you've already logged into Spark Build before, click the "Let me log in" text beneath the Sign
Up button, and you'll be presented with a login for existing users. Don't worry--if you already
have an account and accidentally click the "Sign Up" button, we'll still log you into your
existing account.

WEB IDE

Spark Build is an Integrated Development Environment, or IDE; that means that you can do
software development in an easy-to-use application, which just so happens to run in your
web browser.
Spark Build starts with the navigation bar on the left. On the top, there are three buttons,
which serve important functions:
Flash: Flashes the current code to the Spark Core. This initiates an over-the-air firmware
update and loads the new software onto your Spark Core.
Verify: This compiles your code without actually flashing it to the Core; if there are any
errors in your code, they will be shown in the debug console on the bottom of the screen.
Save: Saves any changes you've made to your code.
At the bottom, there are five more buttons to navigate through the IDE:
Code: Shows a list of your firmware applications and lets you select which one to
edit/flash.
Library: Explore libraries submitted by other users, and develop your own.
Docs: Brings you to the documentation for Spark.
Cores: Shows a list of your Spark Cores, so you can choose which to flash, and get more
information on each Core.
Settings: Change your password, log out, or get your access token for API calls.

KEYBOARD SHORTCUTS

Missing your keyboard shortcuts? This cheatsheet will help.

SPARK APPS AND LIBRARIES

The heart of Spark Build is the "Spark Apps" section, which displays the name of the current
app in your editor, as well as a list of your other applications and community-supported
example apps.
The application you've got open in the editor is displayed under the "Current App" header.
You'll notice that this "HELLOWORLD" sample application has only one file, but firmware with
associated libraries/multiple files are fully supported.
From this pane, you've got a lot of buttons and actions available to you that can help you grow
and manage your library of kick-ass applications:
Create: You can create a new application by clicking the "Create New App" button. Give it a
sweet name and press enter! Your app is now saved to your account and ready for editing.
Delete: Click the "Remove App" button to remove it forever from your Spark library.
Rename: You can rename your Spark App by simply double-clicking on the title of your app
under the "Current App" header. You can modify the "Optional description" field in the
same way.
My Apps: Tired of working on your current project? Select the name of another app under
the "My apps" header to open it in a tab of the Spark Build editor.
Files: This header lists all known files associated with the open application. Click on a
supporting file in your application to open it as an active tab in the editor.
Examples: The "Example apps" header lists a continuously growing number of community-supported example apps. Use these apps as references for developing your own, or fork
them outright to extend their functionality.

FLASHING YOUR FIRST APP


The best way to get started with the IDE is to start writing code:
Connect: Make sure your Core is powered and "breathing" Cyan, which indicates that it's
connected to the Spark Cloud and ready to be updated.
Get Code: Try clicking on the "Blink an LED" example under the "Example apps" header.
The Spark Build editor should display the code for the example application in an active
tab. Alternatively, you can copy and paste this snippet of code into a new application in the
Build IDE.

//D7 LED Flash Example

int LED = D7;

void setup() {
pinMode(LED, OUTPUT);
}

void loop() {
digitalWrite(LED, HIGH);
delay(1000);
digitalWrite(LED, LOW);
delay(1000);
}
Select Your Core: The next step is to make sure that you've selected which of your Cores to
flash code to. Click on the "Cores" icon at the bottom left side of the navigation pane, and
click on the star next to the Core you'd like to update. Once you've selected a Core, the star
associated with it will turn yellow. (If you only have one core, there is no need to select it,
you can continue on to the next step).
Flash: Click the "Flash" button, and your code will be sent wirelessly to your Core. If the
flash was successful, the LED on your Core will begin flashing magenta.

Fork: Wish the timing of that LED flash was a little bit faster? Try clicking on the "Fork This
Example" button after selecting the "Blink An LED" example application. You've now got a
personal copy of that application that you can modify, save, and flash to all of your Cores.
Edit: Try changing the values in the delay() function from 1000 to 250, which changes the
timing interval from 1000 milliseconds to only 250 milliseconds. Click the Verify button,
then the Flash button. Is your Core's LED blinking faster? Well done :)

ACCOUNT INFORMATION
There are a couple of other neat bells and whistles in Spark Build. The Spark Build IDE is the
best tool for viewing important information about your Core, managing Cores associated with
your Spark account, and "unclaiming" them so they can be transferred to your buddy.

Core ID: You can view your Core's Device ID by clicking on the "Cores" icon at the bottom of
the navigation pane, then clicking the dropdown arrow next to the Core of interest.
Unclaim: You can "Unclaim" a Core by pressing the "Remove Core" button that is revealed
by clicking the dropdown arrow. Once a Core has been unclaimed, it is available to be
reassociated with any Spark user's account.

API Key: You can find your most recent API Key listed under the "Settings" tab in your
account. You can press the "Reset Token" button to assign a new API Key to your account.
Note that pressing this button will require you to update any hard-coded API Credentials
in your Spark-powered projects!

USING LIBRARIES

When you want to reuse code across multiple applications, Spark Libraries are your friend.
Spark Libraries are easily shareable, extensible packages built by the community to help with
common problems many Spark applications encounter. They are hosted on GitHub and easily
pulled into the IDE where they can be included in apps and shared with others.
You can include a library in an application by opening the library drawer, finding a library
that will work for your project, and clicking the "include in app" button. This will add an

#include statement to your code that will expose all the capabilities of the library to your
code.

CONTRIBUTE A LIBRARY

Adding a library to the IDE starts by creating an open source GitHub repository where your
code will live. At minimum, this repository needs a spark.json file, some documentation,
some example firmware files, and some Arduino/C++ files. The import and validation process
is designed to be forgiving and easy to interpret, so don't be scared; the IDE will walk you
through what is required to get your library set to go.
The easiest way to generate library boilerplate code is to follow the instructions on the getting
started section of the uber-library-example , a project designed to illustrate and document
what a library is supposed to look like.

WAIT, WHAT IS FIRMWARE?


An embedded system like the Spark Core doesn't have an Operating System like a traditional
computer. Instead, it runs a single application, often called firmware, which runs whenever
the system is powered.
Firmware is so-called because it's harder than software and softer than hardware. Hardware
is fixed during manufacturing, and doesn't change. Software can be updated anytime, so it's
very flexible. Firmware is somewhere in between; hardware companies do issue firmware
updates, but they tend to be very infrequent, because upgrading firmware can be difficult.
In our case, because the Spark Core is connected to the internet, updating firmware is quite
trivial; we send it over the network, and we have put in place safeguards to keep you from
"bricking" the Core.

When you flash code onto the Spark Core, you are doing an over-the-air firmware update.
This firmware update overwrites almost all of the software on the Spark Core; the only piece
that is untouched is the bootloader, which manages the process of loading new firmware and
ensures you can always update the firmware over USB or through a factory reset.


SPARK CORE DATASHEET


Subsystems
Microcontroller
Wi-Fi module
External FLASH
Power regulator
RF circuit
Pins and I/O
Digital pins
Analog Inputs
Analog Outputs
Serial (UART)
SPI
I2C
JTAG
Memory mapping
Internal Flash Memory Map
External Flash Memory Map
Electrical characteristics
Power
RF
Types of Cores
Physical layout

SPARK CORE DATASHEET


SUBSYSTEMS
Microcontroller

Spark Core v1.0 uses the STM32F103CB, an ARM 32-bit Cortex-M3 based microcontroller, for its
brain power. You can download the datasheet here.
Some of its key features are as follows:
ARM 32-bit Cortex-M3 CPU Core
72MHz operating frequency, 1.25 DMIPS/MHz (Dhrystone 2.1)
128KB of Flash memory
20KB of SRAM
12 bit ADC
USB 2.0 full-speed interface
USART, SPI and I2C interfaces
JTAG Debug mode

Wi-Fi module

Core v1.0 uses TI's CC3000 module for the WiFi communications.
Some of the key features of the CC3000 module are as follows:
IEEE 802.11 b/g compliant
Radio Performance
TX power: +18.0 dBm at 11 Mbps, CCK
RX sensitivity: -88 dBm, 8% PER, 11 Mbps
Operating temperature: -20°C to 70°C
Wireless security subsystem
WEP
WPA Personal
WPA2 Personal
FCC, IC, and CE certified with a chip antenna
SPI host interface
The datasheet is available here.

External FLASH

In addition to having 128KB of internal flash memory for storing the firmware, the Core also
features an external SPI based flash memory chip - SST25VF016B. This memory space (a total
of 2MB) is used to store the factory reset firmware, a backup firmware, a copy of the firmware
sent during an Over The Air (OTA) update, and the keys. Part of the space is also available to the
user who can use it to store log data, user parameters, etc. A detailed description of the
memory mapping can be found further down this document in the memory mapping section.
Since the flash memory is non-volatile, it retains the data even after turning off the power.
According to the manufacturer of the chip, the data retention of this memory is greater than
100 years, which we reckon should be good enough for now. Also, note that the maximum
read-write endurance is limited to 100,000 cycles. meh.
Power regulator

The entire Core, including all of the on board peripherals run at 3.3V DC. So, in order to power
the Core from the USB port or an external power supply, we need to downconvert the voltage
before feeding it into the Core. We went through a couple of iterations before choosing
Microchip's MCP1825S-3302E power regulator which comfortably meets the specs.
Some of its key features are:
500mA output current
Input voltage range of 3.6 to 6.0V (for 3.3V output)
Low Dropout (LDO) voltage of 210mV at 500mA
SOT-223 package that sits nicely on the other side of the USB connector. The connector
also acts as an additional heat sink.
Short Circuit Current Limiting and Overtemperature Protection
This means you can power the Core via the USB port, or via the VIN pin from an external
power supply ranging from 3.6V to 6.0V DC. Ideal power sources include: a 3.6V LiPo
battery, a 4xAA battery pack, a backup USB battery, or a USB wall charger.

RF circuit

The RF circuit is probably where we spent the most time during hardware design. RF
design is like voodoo black magic, so we sought guidance from industry experts before
finalizing the component values and placement.
You can download a copy of the RF test report here.

PINS AND I/O

The Spark Core offers a total of 18 I/O pins to the user: D0 to D7 , A0 to A7 and two pins that are
preset to serial - TX and RX . All of these I/O pins run at 3.3V, and the user should keep this in
mind before attaching any external peripherals to them. The only exceptions are the
following pins, which are tolerant to 5V inputs:

D0, D1, D3, D4, D5, D6 and D7


Click here to view a larger pinout diagram
Digital pins
Each pin on the Core can either be configured as input (with or without pull-up or pull-down)
or as output (push-pull or open-drain) using the pinMode() function.
After setting them up, the user can then write to or read from the pins using digitalWrite()
and digitalRead() functions respectively.
Each of these pins can individually source/sink a maximum of 20mA. In the input mode, the
user can activate internal pull-up or pull-down resistors (typically equal to 40K ohms). By
default these are deactivated.
Analog Inputs
Pins A0 to A7 can be set up as analog inputs, can measure voltages of up to 3.3V, and are
internally referenced to VDD. The user can read the pins using the analogRead() function, which
returns a 12-bit value.
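As a quick sanity check on what a 12-bit reading means, the raw value maps back to a voltage like this (a hypothetical helper, assuming the 3.3V full scale noted above):

```python
def adc_to_volts(raw: int) -> float:
    """Convert a 12-bit analogRead() result to volts (3.3V reference)."""
    return raw * 3.3 / 4095  # 4095 is the maximum 12-bit reading

print(round(adc_to_volts(4095), 2))  # full scale: 3.3
print(round(adc_to_volts(2048), 2))  # roughly mid scale
```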
Analog Outputs

This term is misleading and misused but is widely adopted in the Arduino community. The
pins that are set to output an analog value don't actually output an analog voltage but rather
produce a PWM signal whose duty cycle can be varied thus varying the total average power of
the signal. On the Core, the PWM signals have a resolution of 8 bits and run at a frequency of
500Hz.
Having said that, the user can send analog values to the pins using the function
analogWrite().
This feature is only available on the following pins: A0, A1, A4, A5, A6, A7, D0 and D1.
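Electrically, the 8-bit analogWrite() value sets a duty cycle on the 500Hz PWM signal, and the average voltage scales with it (a hypothetical helper illustrating the arithmetic, assuming the 3.3V rail):

```python
def pwm_average_volts(value: int, vcc: float = 3.3) -> float:
    """Average voltage of a PWM output for an 8-bit analogWrite value."""
    duty = value / 255  # 8-bit resolution: 0 = always low, 255 = always high
    return duty * vcc

print(round(pwm_average_volts(255), 2))  # full on: 3.3V average
print(round(pwm_average_volts(128), 2))  # roughly half
```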
Serial (UART)

The Core features two serial ports. The first one is a CDC (Communications Device Class)
available over the USB port. When configured, it will show up as a virtual COM port on the
computer.
The second one is a hardware USART available via the TX and RX pins on the Core.
Both of these serial ports can be configured and used using the serial functions.
NOTE: Please take into account that the voltage levels on these pins run at 0V to 3.3V; the pins
should not be connected directly to a computer's RS232 serial port, which operates at +/- 12V
and can damage the Core.


SPI

The Serial Peripheral Interface is available on pins:

A2: SS (Slave Select)


A3: SCK (Serial Clock)
A4: MISO (Master In Slave Out)
A5: MOSI (Master Out Slave In)
NOTE: All of these pins run at 3.3V logic levels.
I2C

I2C communication pins are multiplexed with the standard GPIO pins D0 and D1.

D0: SDA (Serial Data Line)


D1: SCL (Serial Clock)
Both of these pins run at 3.3V logic level but are tolerant to 5V inputs.
JTAG

In addition to having the ability to load new firmware over USB and WiFi, the users also have
direct access to the STM32 chip via the JTAG channel. In order to do this, you will need a JTAG
shield and a JTAG programmer. You could make your own JTAG shield or buy one from us.
Currently we have only tested the ST-LINK/V2 programmer successfully.
The hardware files for the JTAG shield are available here.

MEMORY MAPPING
Internal Flash Memory Map
The STM32 has a total of 128KB internal flash memory which is divided into three main
regions by us. Beginning at the top of the memory space is where the bootloader is saved and
locked. The second region is reserved for storing system flags and the third region holds the
actual user firmware.
Memory Address | Content                | Size
0x08000000     | Bootloader             | 19 KB max
0x08004C00     | System Flags           | 1 KB max
0x08005000     | Core Firmware Location | 108 KB max
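The region sizes follow from the address differences; a small arithmetic check, assuming the 128KB total stated above:

```python
# (name, start address, end address) for each internal flash region
regions = [
    ("Bootloader",   0x08000000, 0x08004C00),
    ("System Flags", 0x08004C00, 0x08005000),
    ("Firmware",     0x08005000, 0x08000000 + 128 * 1024),
]

# Size of each region in KB, from the address span
sizes_kb = {name: (end - start) // 1024 for name, start, end in regions}
print(sizes_kb)  # bootloader 19KB, flags 1KB, firmware 108KB
```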

External Flash Memory Map


The external flash memory gives us an additional 2MB of storage space. This space is used to
store the public and private keys, the factory reset firmware, a back-up firmware and a copy
of the firmware sent Over The Air (OTA). The rest of the memory space is available to the
user.
Memory Address | Content                         | Size
0x00000        | Reserved                        | 4KB
0x01000        | Public Key                      | 294 Bytes - 4KB max
0x02000        | Private Key                     | 612 Bytes
0x20000        | Factory Reset Firmware Location | 128 KB max
0x40000        | BackUp Firmware Location        | 128 KB max
0x60000        | OTA Firmware Location           | 128 KB max
0x80000        | End of OTA Firmware - NOT USED  |
0x200000       | End of Flash Memory             |
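The three firmware slots and the user-available remainder fall out of the start addresses; a quick arithmetic check:

```python
# Start addresses of the three 128KB firmware slots in external flash
slots = {
    "factory_reset": 0x20000,
    "backup":        0x40000,
    "ota":           0x60000,
}

SLOT_KB = (0x40000 - 0x20000) // 1024     # each slot spans 128KB
unused_kb = (0x200000 - 0x80000) // 1024  # space left for the user
print(SLOT_KB, unused_kb)  # 128KB slots, 1536KB (1.5MB) unused
```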

ELECTRICAL CHARACTERISTICS
Power
Parameter                         | Min    | Max
Input Voltage (at VIN)            | 3.6 V  | 6.0 V
Total Current Consumption         | 50mA   | 300mA
Current Consumption in Deep Sleep | 3.2 µA |
Current per I/O pin               | 8mA    | 20mA

RF

With the on-board chip antenna, the peak return loss (S11) has been measured and verified
to be in excess of 20dB.
The transmission loss for the u.FL connector has been measured to be approximately 0.5 to
0.75dB.

TYPES OF CORES

Currently the Core is available in two flavors: with an on-board chip antenna, or with a uFL
connector for attaching an external antenna.
Chip Antenna
This version of the Core comes with an on board chip antenna that gives a signal strength
similar to a mobile phone. In most cases, a simple plastic enclosure will not affect the signal
strength of the chip antenna.
uFL Connector
If you want to improve the signal strength of the Core, you can connect an external antenna
with the help of the uFL connector. Most antennas that are designed to operate at 2.4GHz or
are WiFi rated will do the job. You can also make your own cantenna!

PHYSICAL LAYOUT

The header pins on the Core are spaced at an interval of 0.1", which is the standard pitch size
for proto-boards and breadboards. The physical layout of the Core was inspired by the
Arduino Pro Mini board.
Mechanical drawings of the Core are available here.
Parameter | Value
Length    | 1.47"
Width     | 0.8"
Height    | 0.5"
Weight    | 14 grams


SHIELDS AND ACCESSORIES


SHIELD SHIELD
Operation
Specifications
Pin mapping
RELAY SHIELD
Operation
Specifications
Setting up the Relay Shield
PROGRAMMER SHIELD (JTAG)
Specifications
Setting up the programmer
BATTERY SHIELD
Operation
Specifications
Setting up the shield
SPARK MAKER KIT
1. Ceramic Capacitors (10 each)
2. Electrolytic Capacitor 100uF (5)
3. Headers
4. LEDs
5. RGB LED (1)
6. NPN Transistor (1)

7. Diode (6)
8. Micro Servo (1)
9. Deluxe Jumper Wire Pack (1)
10. USB Micro B Cable (1)
11. Mini DC Motor (1)
12. Vibration Motor (1)
13. Piezo Buzzer (1)
14. Mini Pushbuttons (3)
15. DPDT Switch (2)
16. Shift Register IC (1)
17. Tilt Sensor (2)
18. Temperature Sensor (1)
19. Thermistor (2)
20. Force-Sensitive Resistor (1)
21. Photo Resistors (2)
22. Resistors
23. Rotary Potentiometer (1)
24. Proto-board (1)
25. Spark Core - u.FL or CA (1)
SPARK RC CAR KIT
Kit Contents
Example code
Motor Driver Shield Specifications
Datasheets

SHIELDS AND ACCESSORIES


SHIELD SHIELD
This shield is essentially an adapter that allows the user to connect Arduino compatible
shields to the Spark Core. There are two functions that this shield performs: pin mapping of
the Spark Core to the Arduino pin layout and voltage translation of 3.3V to/from 5V.

OPERATION

We use a Texas Instruments TXB0108PWR to translate between the Spark Core's 3.3V logic
level and the Arduino's 5V logic.
Due to the limited number of pin-to-function combinations, we have only mapped three
analog channels: A0 , A1 and A2 . Unlike the other IO pins, the analog pins are rated at a max
of only 3.3V, NOT 5.0V. Please remember NOT to exceed this voltage at any time.
The shield has an onboard voltage regulator and can be powered from 7V to 15V DC. You
could also power it via the USB plug on the Spark Core alone but the current would be limited
to 500mA.

SPECIFICATIONS

Operating voltage: 7 to 15V DC


Current consumption: 7mA at 9V DC without the Core plugged in, and 150mA with the Core.
Dimensions: 3.79" x 2.1"
Weight: 40g
The picture shows a robot shield interfaced with the Spark Core via the Shield Shield.

PIN MAPPING
Arduino | Spark Core | Peripherals
0       | RX         | Serial1 RX
1       | TX         | Serial1 TX
2       | D2         |
3       | D0         | PWM
4       | D3         |
5       | D1         | PWM
6       | A7         | PWM
7       | D4         |
8       | D5         |
9       | D6         |
10      | A2         | SS
11      | A5         | PWM, MOSI
12      | A4         | PWM, MISO
13      | A3         | SCK
A0      | A0         | PWM*, ADC**
A1      | A1         | PWM*, ADC**
A2      | A6         | PWM*, ADC**

* Note: These pins can also function as 3.3V PWM outputs or 3.3V Servo outputs.
** Note: ADC inputs are 3.3V max.
IMPORTANT: The Shield Shield does not map the Spark Core pins to like-numbered pins on
the Arduino. In other words, D0 on the Spark Core is not the same as D0 on the Arduino.
Please review the pin mapping table to the right and plan accordingly.
Shield Shield Hardware files

RELAY SHIELD

The Relay Shield, in combination with the Spark Core, allows you to control high power
devices over the internet. Want to control a lamp, fan or garden sprinklers? Then this is a
solution for you!

OPERATION

The schematic for the relay shield is simple and self-explanatory. The shield has four relays
that are controlled by pins D0, D1, D2 and D3 on the Core. Each relay is triggered via an NPN
transistor that takes a control signal from the Core and switches the relay coil ON and OFF,
which in turn makes or breaks the electrical contact on the output. There is also a flyback
diode connected across the coil to help protect the transistor from the high voltage transients
caused during switching.
The relays are SPDT (Single Pole Double Throw) type, which means they have three terminals
at the output: COMMON (COMM), Normally Open (NO) and Normally Closed (NC). You can
connect the load either between the COMM and NO or between the COMM and NC terminals.
When connected between COMM and NO, the output remains open/disconnected when the
relay is turned OFF and closes/connects when the relay is turned ON. In the latter case, the
output remains closed/connected when the relay is OFF and opens/disconnects when the
relay is ON.
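The NO/NC behaviour described above boils down to a single line of logic. This is just an illustration of the description, not shield code:

```cpp
// Models the output contact of an SPDT relay.
// useNO = true  -> load wired between COMM and NO: closed only when the relay is ON
// useNO = false -> load wired between COMM and NC: closed only when the relay is OFF
bool contactClosed(bool relayOn, bool useNO) {
    return useNO ? relayOn : !relayOn;
}
```

In other words, the NC terminal simply inverts the relay's state, which is handy for loads that should stay powered when the Core is off.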

SPECIFICATIONS
Operating voltage: 7 to 15V DC
Current consumption: 150mA min to 290mA (at 9V DC)
Relay Max Voltage: 250V AC
Relay Max Current: 10Amp at 125V AC

Relay Part Number: JS1-5V-F (Data Sheet)


Dimensions: 3.5" x 3.3"
Weight: 100g

SETTING UP THE RELAY SHIELD

Turning ON a relay is as simple as setting the associated pin to HIGH.


The picture shows a sample setup where the relay is used as a switch to control a light bulb.

int RELAY1 = D0;
int RELAY2 = D1;
int RELAY3 = D2;
int RELAY4 = D3;

void setup()
{
  // Initialize the relay control pins as outputs
  pinMode(RELAY1, OUTPUT);
  pinMode(RELAY2, OUTPUT);
  pinMode(RELAY3, OUTPUT);
  pinMode(RELAY4, OUTPUT);

  // Initialize all relays to an OFF state
  digitalWrite(RELAY1, LOW);
  digitalWrite(RELAY2, LOW);
  digitalWrite(RELAY3, LOW);
  digitalWrite(RELAY4, LOW);

  // Register the Spark function
  Spark.function("relay", relayControl);
}

void loop()
{
  // This loops forever
}

// command format: r1,HIGH
int relayControl(String command)
{
  int relayState = 0;

  // parse the relay number
  int relayNumber = command.charAt(1) - '0';

  // do a sanity check
  if (relayNumber < 1 || relayNumber > 4) return -1;

  // find out the state of the relay
  if (command.substring(3,7) == "HIGH") relayState = 1;
  else if (command.substring(3,6) == "LOW") relayState = 0;
  else return -1;

  // write to the appropriate relay
  digitalWrite(relayNumber - 1, relayState);
  return 1;
}

An example API request to this function would look something like this:

POST /v1/devices/{DEVICE_ID}/relay
# EXAMPLE REQUEST
curl https://api.spark.io/v1/devices/0123456789abcdef/relay \
-d access_token=123412341234 -d params=r1,HIGH
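Since the interesting part of relayControl is the string parsing, you can check that logic off-device. The sketch below mirrors it in plain C++ (std::string stands in for the Wiring String class, and the actual pin write is left out):

```cpp
#include <string>

// Mirrors the parsing in relayControl above: commands look like "r1,HIGH".
// On success, writes the relay number and the state to set (1 or 0) into the
// output parameters and returns 1; returns -1 on a malformed command,
// matching the firmware function's convention.
int parseRelayCommand(const std::string& command, int& relayNumber, int& outState) {
    if (command.size() < 4) return -1;
    relayNumber = command[1] - '0';
    if (relayNumber < 1 || relayNumber > 4) return -1;
    if (command.substr(3, 4) == "HIGH") { outState = 1; return 1; }
    if (command.substr(3, 3) == "LOW")  { outState = 0; return 1; }
    return -1;
}
```

Testing the parser this way before flashing saves a round trip through the cloud for every malformed-command edge case.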
USE EXTREME CAUTION WHEN DEALING WITH HIGH VOLTAGE!!
Relay Shield Hardware Files

PROGRAMMER SHIELD (JTAG)

The programmer shield is a simple adapter that lets you connect a JTAG programmer to the
Spark Core. If you need complete control over your Core and are comfortable with the ARM
development environment, you will need this shield as an interface between the JTAG
programmer and the Core.

SPECIFICATIONS
Compatible JTAG programmers: STLink V2 (the only one tested)
Dimensions: 2.2" x 1.55"
Weight: 20g

SETTING UP THE PROGRAMMER

If you are using the STLink V2, you can download the supporting drivers and utilities from
their website.
All of the hardware files for the JTAG shield are available for download.
JTAG Shield Hardware Files

BATTERY SHIELD

The battery shield is a LiPo battery charger and voltage regulator combined into one. You can
use it to power your Core with any 3.7V LiPo battery and charge it at the same time via the
USB port. The shield is built around Microchip's MCP73871 battery charge management
controller and TI's TPS61200 boost converter for up converting 3.7V to 5.0V.

OPERATION
The MCP73871 is an intelligent battery charge management controller that allows one to
charge the battery and power the system simultaneously. There is also an under-voltage
lockout which protects the battery from draining completely. The TPS61200 converts the 3.7V
to 4.1V battery output into a regulated 5V to power the Core or potentially any other hardware
(cellphones?!). The charge current to the battery is set to 500mA.
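Since the charge current is fixed at 500mA, a back-of-the-envelope charge time is just capacity over current. A sketch (this ignores the constant-voltage taper phase near full charge, so real charge times run somewhat longer):

```cpp
// Rough LiPo charge-time estimate at the shield's fixed 500 mA charge rate.
// E.g. a 1000 mAh pack takes roughly 1000 / 500 = 2 hours, plus the
// taper phase that this estimate deliberately ignores.
double chargeHoursEstimate(double capacityMilliampHours) {
    return capacityMilliampHours / 500.0;
}
```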

SPECIFICATIONS
Works with any 3.7V Lithium Polymer battery.
Simultaneously charges the battery and powers the Core.
Dimensions: 2.3" x 0.61"
Weight: 20g
Datasheet: MCP73871 and TPS61200

SETTING UP THE SHIELD

In order to just charge the battery, simply plug the battery into the JST connector (CAUTION:
Remember to check the polarity of the battery header!!) and a USB cable into the microB
socket as shown in the picture.
You will see the BLUE power LED light up on the shield and either the YELLOW (indicating
charging in progress) or GREEN (indicating charging complete) LED light up.

To summarize the LED functions:


Blue LED: Power indicator for the USB cable. Lights up only when the USB cable is plugged
in.
Yellow LED: Charging in progress indicator. Is ON when the battery is charging. Turns OFF
when charging complete.
Green LED: Charge Complete Indicator. This LED lights up when the battery is completely
charged.
You can also power the Spark Core while the battery is charging, but remember that charging
might be slower since the current will be split between the Core and the battery.

When powering the Core via the battery alone, the blue LED will NOT light up.
TIP: Remember to unplug the battery from the shield when not in use. If you leave the battery
connected to the battery shield, it will eventually drain it.
CAUTION: Check the battery polarity and its voltage rating
Battery Shield Hardware Files

SPARK MAKER KIT


1. Ceramic Capacitors (10 each)

These are standard ceramic capacitors. They are widely used in analog circuits as bypass/
decoupling capacitors, in timers, filters, etc. The kit comes with:
10nF (0.01uF) - Number code: 103
100nF (0.1uF) - Number code: 104
Note: These are non-polar capacitors which means they can be oriented both ways.
2. Electrolytic Capacitor 100uF (5)

Electrolytic capacitors offer larger values and are polar. These capacitors are ideal for
decoupling power supplies, as transient suppressors, and in timing circuits.
Note: These are polar capacitors. The longer lead denotes positive while the shorter one
denotes negative. They are also known to "pop" when subjected to voltages higher than their
ratings.
3. Headers

These are standard 0.1" pitch headers that can be cut to size. Very handy when building
circuits on breadboard or PCBs alike.
8-Pin Female Headers (5)
40-Pin Male Breakaway Headers (2)
40-Pin Male Breakaway Dual-Headers (1)
4. LEDs

These are general purpose 3mm LEDs. You can never have enough of them! Use a resistor
(220 ohms to 1K ohms) in series when hooking them up to the Spark Core.
Red (5)
Green (5)
Note: The longer lead is positive (anode) while the shorter one is negative (cathode).
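The 220 ohm to 1K ohm range above falls straight out of Ohm's law. A sketch of the arithmetic (the 2.0 V forward drop is an assumed typical red-LED figure, not a value from this kit's datasheet):

```cpp
// Series resistor for an LED: R = (Vsupply - Vforward) / I.
// With the Core's 3.3 V rail, an assumed 2.0 V red-LED forward drop and
// ~6 mA of current, this lands near the 220-ohm end of the range above;
// smaller currents push you toward the 1K end.
double ledSeriesResistor(double vSupply, double vForward, double currentAmps) {
    return (vSupply - vForward) / currentAmps;
}
```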
5. RGB LED (1)

So, Red and Green aren't enough for you? Want to make a bazillion different colors? Then this
RGB LED will do it for ya. You can mix colors by connecting each pin to an analogWrite
compatible pin on the Core and feeding them different values. Let the disco party begin!
This LED has four pins, one for each color and a common anode (+) pin.
Datasheet

6. NPN Transistor (1)

The S9013 is a general purpose small-signal NPN transistor rated at 40V, 500mA.
You can use this transistor to switch small loads like relays, mini motors, buzzers, etc.
Datasheet

7. Diode (6)

1N4004 is a general purpose diode rated at 400V, 1000mA with a forward voltage drop of 1.1V.
Excellent as a fly-back diode or as a general rectifying diode.
Datasheet

8. Micro Servo (1)

Emax ES08A is a mini RC servo motor.


Operating Voltage: 4.8 to 6.0VDC
Stall Torque: 1.8Kg/cm
Speed: 0.10sec/degree at no load
Datasheet
Wiring:
Yellow - Signal (D0, D1, A0, A1, A4, A5, A6, A7)
Orange - +5V (VIN)
Brown - Ground (GND)
Note: The Ground pin may vary as Brown or Black, the +5V pin may vary as Orange or Red,
and the Signal pin may vary as Yellow or White (sometimes Orange if there is already a Red
and a Black wire).
9. Deluxe Jumper Wire Pack (1)

Multi-colored and stripped. You can never have enough of these either.
10. USB Micro B Cable (1)
A custom Spark USB cable for your Core! We were really excited to have our logo printed on
them.
11. Mini DC Motor (1)

This is a simple DC motor that you can switch using the NPN transistor provided in the kit.
Datasheet

12. Vibration Motor (1)

Wanna give your next Spark Core project a tactile feedback? This vibration motor serves the
purpose nicely. Use the NPN transistor to switch it.
Datasheet

13. Piezo Buzzer (1)

Add an audible feedback to your project with this buzzer. The longer lead is positive and the
shorter is negative. You will need a transistor to drive it.
Note: The sound gets annoying after a while. Use it sparingly!
Operating Voltage: 4.0 to 7.0 V DC
Oscillation Frequency: 2.3KHz
Current: 30mA
Sound Pressure: 85dB
Datasheet

14. Mini Pushbuttons (3)

These are nifty little switches that plug nicely into a breadboard or a proto-board. They are
normally-open type and are rated at 12V, 50mA.
15. DPDT Switch (2)

This is a tiny Double Pole Double Throw (DPDT) Switch with 6 legs.
16. Shift Register IC (1)

The 74HC595 is an 8-bit serial-in, parallel-out shift register commonly used as an output
expander. You can drive up to 8 outputs from only 3 lines (using one chip), and you can daisy
chain multiple of these to get even more outputs.
Datasheet
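To get a feel for the serial-in, parallel-out behaviour before wiring anything up, here is a small software model of the shifting. This is purely illustrative, not driver code for the chip:

```cpp
#include <cstdint>

// Software model of an 8-bit serial-in parallel-out shift register:
// each clock pulse shifts the register left by one and brings the data
// bit in at the bottom. After 8 clocks, a full byte sits on the outputs.
uint8_t shiftInBit(uint8_t reg, bool dataBit) {
    return static_cast<uint8_t>((reg << 1) | (dataBit ? 1 : 0));
}

// Clock in a full byte, MSB first, the way shiftOut(..., MSBFIRST, value)
// would drive the chip's data and clock lines.
uint8_t shiftInByte(uint8_t value) {
    uint8_t reg = 0;
    for (int i = 7; i >= 0; --i) {
        reg = shiftInBit(reg, (value >> i) & 1);
    }
    return reg;
}
```

Daisy-chaining works the same way: the bit shifted out of one chip's top becomes the bit shifted into the next.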

17. Tilt Sensor (2)

The SW-200D is a tiny tilt sensor that internally connects its two terminals together when
tilted by more than 30 degrees. The magic happens with the use of gravity and a tiny metal
ball. You can use it to detect tilt, orientation or vibration.
Datasheet

18. Temperature Sensor (1)

The TMP36 is a low voltage, precision centigrade temperature sensor. It provides a voltage
output that is linearly proportional to the Celsius (centigrade) temperature. The TMP36 does
not require any external calibration to provide typical accuracies of ±1°C at +25°C and ±2°C
over the −40°C to +125°C temperature range.
Here is an example of how you could use it with the Core.
Datasheet
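The TMP36's output is 500 mV at 0°C plus 10 mV per degree, so converting a raw Core ADC reading is two lines of arithmetic. A sketch, assuming the Core's 3.3 V reference and 12-bit ADC described elsewhere in this document:

```cpp
// Convert a raw 12-bit ADC reading (0-4095, 3.3 V full scale) to degrees C
// for a TMP36: the sensor outputs 500 mV at 0 degrees C plus 10 mV/degree.
double tmp36CelsiusFromAdc(int reading) {
    double volts = reading * 3.3 / 4095.0;
    return (volts - 0.5) * 100.0;
}
```

For example, a reading of about 931 counts corresponds to roughly 0.75 V on the pin, which is about 25°C.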

19. Thermistor (2)

A thermistor is a temperature-dependent resistor. This one is an NTC type (Negative
Temperature Coefficient), which means its resistance decreases as the temperature
increases.
Unlike the TMP36, you will need to use this as a part of a voltage divider circuit as nicely
described in this tutorial.
Datasheet
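The math behind the voltage divider is simple to sketch. This assumes the thermistor forms the lower leg of a divider with a fixed resistor (for example one of this kit's 10K-ohm parts) and the Core's 12-bit, 3.3 V ADC; see the linked tutorial for the actual wiring:

```cpp
// Recover the thermistor's resistance from a voltage-divider reading.
// Divider: 3.3 V -- fixedResistor -- (ADC pin) -- thermistor -- GND.
// reading is the raw 12-bit ADC value (0-4095).
double thermistorOhms(int reading, double fixedResistor) {
    double vOut = reading * 3.3 / 4095.0;
    // Vout = 3.3 * Rtherm / (Rfixed + Rtherm)  =>  solve for Rtherm
    return fixedResistor * vOut / (3.3 - vOut);
}
```

A mid-scale reading means the thermistor's resistance equals the fixed resistor's, which is why the fixed resistor is usually chosen close to the thermistor's room-temperature value.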

20. Force-Sensitive Resistor (1)

Manufacturer Part Number: Interlink 30-81794. This is a force-sensitive resistor with a 0.5"
diameter and an operating force from 10g to 1000g. Its resistance decreases with an
increase in applied pressure.
Datasheet

21. Photo Resistors (2)

A photoresistor is a light-dependent resistor whose resistance decreases as the intensity of
light striking it increases. You can use it to detect the ambient light level in the surroundings,
detect shadows, or use it as part of a burglar alarm system.
Datasheet

22. Resistors

There are three different resistor values in this kit. All of them are rated at 5%, 1/4 Watt.
330-Ohm (10)
1K-Ohm (10)
10K-Ohm (10)
23. Rotary Potentiometer (1)

This is a variable resistor whose value can be changed by simply turning the knob.
24. Proto-board (1)

This is a 7" x 9" general purpose dot-matrix prototyping PCB.


25. Spark Core - u.FL or CA (1)
Your very own Spark Core, ready to take over the world, one byte at a time.

SPARK RC CAR KIT

The RC car kit is a two-wheeled differentially driven platform that you can control using a
Spark Core.
Kit Contents
NOTE: This is no longer available for purchase through Spark, however, if you still want to
build this, check out the links to the individual components below.
RC Car Chassis kit (buy from DFRobot)
Motor Driver Shield (buy from DFRobot)
Battery (NiCad, Alkaline, LiPo, Li Ion : 6V to 12V DC) (buy from DFRobot)
Spark Shield Shield (buy from Spark)
Spark Core (buy from Spark)
Assemble the RC Car chassis as shown in the tutorial here. (Without the electronics)
Pin 4: Connects with M1
Pin 5: Connects with E1 (PWM)
Pin 6: Connects with E2 (PWM)
Pin 7: Connects with M2
Where E1 and E2 control the speed of the motors, while M1 and M2 change the direction.

Connect the left and right motor terminals to M2+,M2-,M1+ and M1- respectively.

The motors can run from a voltage in the range of 5V to 9V DC. The jumpers can be set to
draw power from Vin on the Shield Shield below.

Example code
A simple example for controlling the RC Car is described below:

int leftMotorEnable  = D1;
int rightMotorEnable = A7;
int leftMotorDir     = D3;
int rightMotorDir    = D4;

void setup()
{
  // Register the Spark function
  Spark.function("rccar", rcCarControl);

  pinMode(leftMotorDir, OUTPUT);
  pinMode(leftMotorEnable, OUTPUT);
  pinMode(rightMotorDir, OUTPUT);
  pinMode(rightMotorEnable, OUTPUT);
  pinMode(D7, OUTPUT);
}

void loop()
{
  // Nothing to do here
}

/*******************************************************************************
 * Function Name  : rcCarControl
 * Description    : Parses the incoming API commands and sets the motor control
 *                  pins accordingly
 * Input          : RC Car commands, e.g.: rc,FORWARD
 *                                         rc,BACK
 * Output         : Motor signals
 * Return         : 1 on success and -1 on fail
 *******************************************************************************/
int rcCarControl(String command)
{
  if (command.substring(3,7) == "STOP")
  {
    digitalWrite(leftMotorEnable, LOW);
    digitalWrite(rightMotorEnable, LOW);
    digitalWrite(leftMotorDir, LOW);
    digitalWrite(rightMotorDir, LOW);
    return 1;
  }

  if (command.substring(3,7) == "BACK")
  {
    digitalWrite(leftMotorDir, LOW);
    digitalWrite(rightMotorDir, HIGH);
    digitalWrite(leftMotorEnable, HIGH);
    digitalWrite(rightMotorEnable, HIGH);
    return 1;
  }

  if (command.substring(3,10) == "FORWARD")
  {
    digitalWrite(leftMotorDir, HIGH);
    digitalWrite(rightMotorDir, LOW);
    digitalWrite(leftMotorEnable, HIGH);
    digitalWrite(rightMotorEnable, HIGH);
    return 1;
  }

  if (command.substring(3,8) == "RIGHT")
  {
    digitalWrite(leftMotorDir, HIGH);
    digitalWrite(rightMotorDir, HIGH);
    digitalWrite(leftMotorEnable, HIGH);
    digitalWrite(rightMotorEnable, HIGH);
    return 1;
  }

  if (command.substring(3,7) == "LEFT")
  {
    digitalWrite(leftMotorDir, LOW);
    digitalWrite(rightMotorDir, LOW);
    digitalWrite(leftMotorEnable, HIGH);
    digitalWrite(rightMotorEnable, HIGH);
    return 1;
  }

  // If none of the commands matched, return failure
  return -1;
}

To send API commands:

# Sending a command to go forward
curl https://api.spark.io/v1/devices/0123456789abcdef/rccar \
  -d access_token=123412341234 -d params=rc,FORWARD

Motor Driver Shield Specifications


The motor driver shield is based around the L298 Full-bridge motor driver chip.
Logic voltage: 5V DC
Logic Supply Current: 36mA
Motor drive voltage: 7V to 12V DC
Motor drive current: 2Amp Max
Datasheets

L298 datasheet
Motor Driver Shield Manual


TINKERING WITH "TINKER"


The Tinker app
The Tinker firmware
Using Tinker with Your Code
The Tinker API
digitalWrite
analogWrite
digitalRead
analogRead

TINKERING WITH "TINKER"


THE TINKER APP

The Tinker section of the Spark mobile app makes it very easy to start playing with your Spark
Core without writing any code. It's great for early development, and often it will do everything
you need to get your project off of the ground.
The app consists of 16 pins in vertical rows - 8 analog pins on the left, 8 digital pins on the
right. These pins represent the 16 GPIO (General Purpose Input and Output) pins on the
Spark Core, and are organized the same way.
To begin, tap any of the pins. A menu will pop up showing the functions that pin has
available. Each pin can have up to four possible functions:
digitalWrite: Sets the pin to HIGH or LOW, which either connects it to 3.3V (the maximum
voltage of the system) or to GND (ground). Pin D7 is connected to an on-board LED; if you
set pin D7 to HIGH, the LED will turn on, and if you set it to LOW, it will turn off.
analogWrite: Sets the pin to a value between 0 and 255, where 0 is the same as LOW and
255 is the same as HIGH. This is sort of like sending a voltage between 0 and 3.3V, but
since this is a digital system, it uses a mechanism called Pulse Width Modulation, or PWM.
You could use analogWrite to dim an LED, as an example.
digitalRead: This will read the digital value of a pin, which can be read as either HIGH or
LOW. If you were to connect the pin to 3.3V, it would read HIGH; if you connect it to GND, it
would read LOW. Anywhere in between, it'll probably read whichever one it's closer to, but
it gets dicey in the middle.
analogRead: This will read the analog value of a pin, which is a value from 0 to 4095, where
0 is LOW (GND) and 4095 is HIGH (3.3V). All of the analog pins (A0 to A7) can handle this.
analogRead is great for reading data from sensors.
To change the function of the pin, simply tap and hold on the pin, and the function select
menu will come back up. Any further questions? Come talk to us in the forums!

THE TINKER FIRMWARE


The Tinker firmware is the default application program stored in the Spark Core when it
ships from the factory. You can always get back to it by putting the Core into factory reset
mode, or by re-flashing your Core with Tinker in the Spark app.
To reflash Tinker from within the app:
iOS Users: Tap the list button at the top left. Then tap the arrow next to your desired Core
and tap the "Re-flash Tinker" button in the pop out menu.
Android Users: With your desired Core selected, tap the options button in the upper right
and tap the "Reflash Tinker" option in the drop down menu.
The Tinker app is a great example of how to build a very powerful application with not all that
much code. You can have a look at the latest release here.

USING TINKER WITH YOUR CODE


I know what you're thinking: this is amazing, but I really want to use Tinker while my code is
running so I can see what's happening! Now you can.
Combine your code with this framework, flash it to your Core, and Tinker away.

int tinkerDigitalRead(String pin);
int tinkerDigitalWrite(String command);
int tinkerAnalogRead(String pin);
int tinkerAnalogWrite(String command);

//PUT YOUR VARIABLES HERE

void setup()
{
    Spark.function("digitalread", tinkerDigitalRead);
    Spark.function("digitalwrite", tinkerDigitalWrite);
    Spark.function("analogread", tinkerAnalogRead);
    Spark.function("analogwrite", tinkerAnalogWrite);
    //PUT YOUR SETUP CODE HERE
}

void loop()
{
    //PUT YOUR LOOP CODE HERE
}

int tinkerDigitalRead(String pin) {
    int pinNumber = pin.charAt(1) - '0';
    if (pinNumber < 0 || pinNumber > 7) return -1;
    if (pin.startsWith("D")) {
        pinMode(pinNumber, INPUT_PULLDOWN);
        return digitalRead(pinNumber);
    }
    else if (pin.startsWith("A")) {
        pinMode(pinNumber + 10, INPUT_PULLDOWN);
        return digitalRead(pinNumber + 10);
    }
    return -2;
}

int tinkerDigitalWrite(String command) {
    bool value = 0;
    int pinNumber = command.charAt(1) - '0';
    if (pinNumber < 0 || pinNumber > 7) return -1;
    if (command.substring(3,7) == "HIGH") value = 1;
    else if (command.substring(3,6) == "LOW") value = 0;
    else return -2;
    if (command.startsWith("D")) {
        pinMode(pinNumber, OUTPUT);
        digitalWrite(pinNumber, value);
        return 1;
    }
    else if (command.startsWith("A")) {
        pinMode(pinNumber + 10, OUTPUT);
        digitalWrite(pinNumber + 10, value);
        return 1;
    }
    else return -3;
}

int tinkerAnalogRead(String pin) {
    int pinNumber = pin.charAt(1) - '0';
    if (pinNumber < 0 || pinNumber > 7) return -1;
    if (pin.startsWith("D")) {
        pinMode(pinNumber, INPUT);
        return analogRead(pinNumber);
    }
    else if (pin.startsWith("A")) {
        pinMode(pinNumber + 10, INPUT);
        return analogRead(pinNumber + 10);
    }
    return -2;
}

int tinkerAnalogWrite(String command) {
    int pinNumber = command.charAt(1) - '0';
    if (pinNumber < 0 || pinNumber > 7) return -1;
    String value = command.substring(3);
    if (command.startsWith("D")) {
        pinMode(pinNumber, OUTPUT);
        analogWrite(pinNumber, value.toInt());
        return 1;
    }
    else if (command.startsWith("A")) {
        pinMode(pinNumber + 10, OUTPUT);
        analogWrite(pinNumber + 10, value.toInt());
        return 1;
    }
    else return -2;
}

THE TINKER API


When the Tinker firmware is installed on your Spark Core, it will respond to certain API
requests from your mobile app, which mirror the four basic GPIO functions (digitalWrite,
analogWrite, digitalRead, analogRead). These API requests can also be made from another
application, so you can build your own web or mobile app around the Tinker firmware.
digitalWrite

Sets the pin to HIGH or LOW, which either connects it to 3.3V (the maximum voltage of the
system) or to GND (ground). Pin D7 is connected to an on-board LED; if you set pin D7 to HIGH,
the LED will turn on, and if you set it to LOW, it will turn off.

POST /v1/devices/{DEVICE_ID}/digitalwrite
# EXAMPLE REQUEST IN TERMINAL
# Core ID is 0123456789abcdef
# Your access token is 123412341234
curl https://api.spark.io/v1/devices/0123456789abcdef/digitalwrite \
-d access_token=123412341234 -d params=D0,HIGH
The parameters must be the pin (A0 to A7, D0 to D7), followed by either HIGH or LOW,
separated by a comma. The return value will be 1 if the write succeeds, and -1 if it fails.
analogWrite
Sets the pin to a value between 0 and 255, where 0 is the same as LOW and 255 is the same
as HIGH. This is sort of like sending a voltage between 0 and 3.3V, but since this is a digital
system, it uses a mechanism called Pulse Width Modulation, or PWM. You could use
analogWrite to dim an LED, as an example.

POST /v1/devices/{DEVICE_ID}/analogwrite
# EXAMPLE REQUEST IN TERMINAL
# Core ID is 0123456789abcdef
# Your access token is 123412341234
curl https://api.spark.io/v1/devices/0123456789abcdef/analogwrite \
-d access_token=123412341234 -d params=A0,215
The parameters must be the pin (A0 to A7, D0 to D7), followed by an integer value from 0 to
255, separated by a comma. The return value will be 1 if the write succeeds, and -1 if it fails.
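Since the value sets a PWM duty cycle, the time-averaged voltage the pin delivers is just a linear map of the 0 to 255 range onto the 3.3V rail. A quick sketch of that relationship:

```cpp
// Approximate time-averaged voltage a PWM pin delivers for a given
// analogWrite value: value/255 of the Core's 3.3 V rail.
double pwmAverageVolts(int value) {
    return value / 255.0 * 3.3;
}
```

So the params=A0,215 example above drives the pin at roughly 2.8 V on average.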
digitalRead
This will read the digital value of a pin, which can be read as either HIGH or LOW. If you were
to connect the pin to 3.3V, it would read HIGH; if you connect it to GND, it would read LOW.
Anywhere in between, it'll probably read whichever one it's closer to, but it gets dicey in the
middle.

POST /v1/devices/{DEVICE_ID}/digitalread
# EXAMPLE REQUEST IN TERMINAL
# Core ID is 0123456789abcdef

# Your access token is 123412341234


curl https://api.spark.io/v1/devices/0123456789abcdef/digitalread \
-d access_token=123412341234 -d params=D0
The parameter must be the pin (A0 to A7, D0 to D7). The return value will be 1 if the pin is
HIGH, 0 if the pin is LOW, and -1 if the read fails.
analogRead
This will read the analog value of a pin, which is a value from 0 to 4095, where 0 is LOW (GND)
and 4095 is HIGH (3.3V). All of the analog pins (A0 to A7) can handle this. analogRead is great
for reading data from sensors.

POST /v1/devices/{DEVICE_ID}/analogread
# EXAMPLE REQUEST IN TERMINAL
# Core ID is 0123456789abcdef
# Your access token is 123412341234
curl https://api.spark.io/v1/devices/0123456789abcdef/analogread \
-d access_token=123412341234 -d params=A0
The parameters must be the pin (A0 to A7, D0 to D7). The return value will be between 0 and
4095 if the read succeeds, and -1 if it fails.
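Converting the returned count back to the voltage on the pin is a one-liner, assuming the 3.3V full scale described above:

```cpp
// Convert an analogRead result (0-4095) back to the approximate
// pin voltage, given the Core's 3.3 V full-scale ADC range.
double adcCountsToVolts(int counts) {
    return counts * 3.3 / 4095.0;
}
```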


SPARK CLI
INSTALLING
Advanced Install
Upgrading to the latest version
Running from source (advanced)
GETTING STARTED
spark setup
spark help
BLINK AN LED WITH TINKER
UPDATE YOUR CORE REMOTELY
COMMAND REFERENCE
spark setup wifi
spark login
spark logout
spark list
spark core add
spark core rename
spark core remove
spark flash
spark compile
spark call
spark get
spark monitor

spark identify
spark subscribe
spark serial list
spark serial monitor
spark keys doctor
spark keys new
spark keys load
spark keys save
spark keys send
spark keys server

SPARK CLI
The Spark CLI is a powerful tool for interacting with your cores and the Spark Cloud. The CLI
uses node.js and can run on Windows, Mac OS X, and Linux fairly easily. It's also open source
so you can edit and change it, and even send in your changes as pull requests if you want to
share!

INSTALLING
First, make sure you have node.js installed!
Next, open a command prompt or terminal, and install by typing:

# how to install the spark-cli


$ npm install -g spark-cli
$ spark cloud login

ADVANCED INSTALL
To use the local flash and key features you'll also need to install dfu-util, and openssl. They
are freely available and open-source, and there are installers and binaries for most major
platforms.
Here are some great tutorials on the community for full installs:
Installing on Ubuntu 12.04
Installing on Ubuntu 14.04
Installing on Windows

UPGRADING TO THE LATEST VERSION


To upgrade Spark-CLI, enter the following command:

# how to update the spark-cli


$ npm update -g spark-cli

RUNNING FROM SOURCE (ADVANCED)


To grab the CLI source and play with it locally

# how to get the source code for the CLI
$ git clone git@github.com:spark/spark-cli.git
$ cd spark-cli/js
$ node app.js help

GETTING STARTED
These next two commands are all you need to get started setting up an account, claiming a
core, and discovering new features.
spark setup
This command will guide you through logging in or creating a new account as well as claiming
your core!

# how to setup your account and your core


$ spark setup

spark help
Shows you what commands are available and how to use them. You can also give the name of
a command for detailed help.

# how to get help


$ spark help
$ spark help keys

BLINK AN LED WITH TINKER


If you're just opening a new core, chances are it's already loaded with Tinker, the app we load
at the factory. If you don't have Tinker, or if you've been using the Build IDE already, let's load
it quickly by typing:

# How to re-load tinker onto a core


$ spark flash my_new_core_name tinker
Including:
/usr/local/lib/node_modules/spark-cli/binaries/spark_tinker.bin
attempting to flash firmware to your core my_new_core_name
flash core said {"id":"0123456789ABCDEFGHI","status":"Update started"}
Let's make sure your core is online and loaded with Tinker. We should see the four
characteristic functions exposed by Tinker: "digitalwrite", "digitalread", "analogwrite", and
"analogread".

# how to show all your cores and their functions and variables
$ spark list

Checking with the cloud...


Retrieving cores... (this might take a few seconds)

my_core_name (0123456789ABCDEFGHI) 0 variables, and 4 functions


Functions:
int digitalread(String args)
int digitalwrite(String args)
int analogread(String args)
int analogwrite(String args)
Let's try turning on the LED attached to pin D7 on your core.

# how to call a function on your core


$ spark call my_core_name digitalwrite D7,HIGH
1
$ spark call my_core_name digitalwrite D7,LOW
1
Nice! You should have seen the small blue LED turn on, and then off.

UPDATE YOUR CORE REMOTELY


You can write whole apps and flash them remotely from the command line, just as you would
from the Build IDE. Let's write a small blink sketch to try it out.

Copy and paste the following program into a file called blinky.ino

// Copy me to blinky.ino
#define PIN D7
int state = 0;

void setup() {
  // tell the core we want to write to this pin
  pinMode(PIN, OUTPUT);
}

void loop() {
  // alternate the PIN between high and low
  digitalWrite(PIN, (state) ? HIGH : LOW);

  // invert the state
  state = !state;

  // wait half a second
  delay(500);
}
Then let's compile that program to make sure it's valid code. The CLI will automatically
download the compiled binary of your program if everything went well, and show you the URL.
The server will also keep a copy of your binary around for about 24 hours.

# how to compile a program without flashing to your core


$ spark compile blinky.ino
Including:
blinky.ino
attempting to compile firmware
pushing file: blinky.ino
grabbing binary from: https://api.spark.io/v1/binaries/01234567890ABCDEFGH
saved firmware to firmware_123456781234.bin
Compiled firmware downloaded.
Now that we have a valid program, let's flash it to our core! We can use either the source code
again, or we can send our binary.

# how to flash a program to your core (from source code)


$ spark flash my_core_name blinky.ino
# OR - how to flash a pre-compiled binary to your core
$ spark flash my_core_name firmware_123456781234.bin
Including:
firmware_123456781234.bin

attempting to flash firmware to your core my_core_name


flash core said {"id":"01234567890ABCDEFGH","status":"Update started"}

COMMAND REFERENCE
spark setup wifi
A helpful shortcut for adding another WiFi network to a core connected over USB. Make sure
your core is connected via a USB cable and is slowly blinking blue (listening mode).

# how to just update your wifi settings.


# Make sure your core is connected and in listening mode first
$ spark setup wifi

spark login
Login and save an access token for interacting with your account on the Spark Cloud.

# how to login - creates and saves an access token for your session with the CLI
$ spark login

spark logout
Logout and optionally revoke the access token for your CLI session.

# how to remove your saved access token, and optionally revoke that token as well
$ spark logout

spark list
Generates a list of the cores you own and displays information about their status, including
which variables and functions are available.

# how to show what cores of yours are online


# and what functions and variables are available
$ spark list

Checking with the cloud...


Retrieving cores... (this might take a few seconds)

my_core_name (0123456789ABCDEFGHI) 0 variables, and 4 functions


Functions:
int digitalWrite(string)
int digitalRead(string)
int analogWrite(string)
int analogRead(string)

spark core add


Adds a new core to your account

# how to add a new core to your account


$ spark cloud claim 0123456789ABCDEFGHI
Claiming core 0123456789ABCDEFGHI
Successfully claimed core 0123456789ABCDEFGHI

spark core rename


Assigns a new name to a core you've claimed

# how to change the name of your core


$ spark core rename 0123456789ABCDEFGHI "pirate frosting"

spark core remove


Removes a core from your account so someone else can claim it.

# how to remove a core from your account


$ spark core remove 0123456789ABCDEFGHI
Are you sure? Please Type yes to continue: yes
releasing core 0123456789ABCDEFGHI
server said { ok: true }
Okay!

spark flash
Sends a firmware binary, a source file, or a directory of source files, or a known app to your
core.
Flashing a directory
You can setup a directory of source files and libraries for your project, and the CLI will use
those when compiling remotely. You can also create spark.include and / or a spark.ignore

file in that directory that will tell the CLI specifically which files to use or ignore.

# how to compile and flash a directory of source code to your core


$ spark flash 0123456789ABCDEFGHI my_project

Flashing one or more source files

# how to compile and flash a list of source files to your core


$ spark flash 0123456789ABCDEFGHI app.ino library1.cpp library1.h

Flashing a known app


Two pre-built apps are included with the CLI to help you get back on track. Tinker, and the
CC3000 patching app. You can flash these both over the cloud or locally via USB and dfu-util
through the CLI.

# how to flash a "known app" like tinker, or the cc3000 patcher to your core
$ spark flash 0123456789ABCDEFGHI tinker
$ spark flash 0123456789ABCDEFGHI cc3000
# how to flash if your core is blinking yellow and connected over usb
# requires dfu-util
$ spark flash --usb tinker
$ spark flash --usb cc3000

Compiling remotely and Flashing locally


To work locally, but use the cloud compiler, simply use the compile command, and then the
local flash command after. Make sure you connect your core via USB and place it into dfu
mode.

# how to compile a directory of source code and tell the CLI where to save the results
$ spark compile my_project_folder --saveTo firmware.bin
# OR
# how to compile a list of source files
$ spark compile app.ino library1.cpp library1.h --saveTo firmware.bin
# how to flash a pre-compiled binary over usb to your core
# make sure your core is flashing yellow and connected via USB
# requires dfu-util to be installed
$ spark flash --usb firmware.bin

spark compile
Compiles one or more source files, or a directory of source files, and downloads a firmware
binary.
compiling a directory
You can setup a directory of source files and libraries for your project, and the CLI will use
those when compiling remotely. You can also create spark.include and / or a spark.ignore
file in that directory that will tell the CLI specifically which files to use or ignore. Those files
are just plain text with one line per filename

# how to compile a directory of source code


$ spark compile my_project_folder

example spark.include
The spark.include and spark.ignore files are just regular text files with one filename per line.
If your directory has one of these files, the CLI will use it to try and determine what to include
or ignore when compiling your app.

# spark.include
application.cpp
library1.h
library1.cpp

example spark.ignore

# spark.ignore
.ds_store
logo.png
old_version.cpp
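As an illustrative sketch (not the CLI's actual implementation), resolving a spark.include / spark.ignore pair amounts to filtering the include list against the ignore list. The file contents below are made up for the example:

```shell
# Illustrative sketch only -- not the CLI's real resolution logic.
cd "$(mktemp -d)"
printf 'application.cpp\nlibrary1.h\nlibrary1.cpp\nold_version.cpp\n' > spark.include
printf 'old_version.cpp\n' > spark.ignore

# keep each spark.include line that is not also listed in spark.ignore
grep -v -x -F -f spark.ignore spark.include
```

Here the filter keeps application.cpp, library1.h, and library1.cpp, and drops old_version.cpp.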

Compiling one or more source files

# how to compile a list of source files


$ spark compile app.ino library1.cpp library1.h

spark call
Calls a function on one of your cores. Use spark list to see which cores are online and what
functions are available.

# how to call a function on your core


$ spark call 0123456789ABCDEFGHI digitalWrite "D7,HIGH"
1

spark get
Retrieves a variable value from one of your cores. Use spark list to see which cores are
online and what variables are available.

# how to get a variable value from a core


$ spark get 0123456789ABCDEFGHI temperature
72.1

spark monitor
Pulls the value of a variable at a set interval, and optionally displays a timestamp.
The minimum delay for now is 500 ms (a check catches anything lower).
Hitting CTRL + C in the console will exit the monitoring.

# how to poll for a variable value from one or more cores continuously
$ spark monitor 0123456789ABCDEFGHI temperature 5000
$ spark monitor 0123456789ABCDEFGHI temperature 5000 --time
$ spark monitor all temperature 5000
$ spark monitor all temperature 5000 --time
$ spark monitor all temperature 5000 --time > my_temperatures.csv
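Once you've redirected timestamped readings into a file, you can post-process it with standard tools. This sketch averages the value column with awk; the exact column layout written by spark monitor is an assumption here (timestamp, then value, comma-separated), and the sample data is made up, so adjust the field separator to match your actual capture:

```shell
# Made-up sample of a captured monitor log; the real column layout
# produced by spark monitor may differ -- adjust -F to match.
cat > my_temperatures.csv <<'EOF'
2014-07-02T23:15:00.409Z, 71.9
2014-07-02T23:15:05.409Z, 72.1
2014-07-02T23:15:10.409Z, 72.5
EOF

# average the second (value) column
awk -F', ' '{ sum += $2; n++ } END { printf "%.1f\n", sum / n }' my_temperatures.csv
```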

spark identify
Retrieves your core id when the core is connected via USB and in listening mode (flashing
blue).

# helps get your core id via usb and serial
# make sure your core is connected and blinking blue
$ spark identify
$ spark identify 1
$ spark identify COM3
$ spark identify /dev/cu.usbmodem12345

$ spark identify
0123456789ABCDEFGHI

spark subscribe
Subscribes to published events on the cloud, and pipes them to the console. Special core
name "mine" will subscribe to events from just your cores.

# opens a connection to the API so you can stream events coming from your cores
$ spark subscribe
$ spark subscribe mine
$ spark subscribe eventName
$ spark subscribe eventName mine
$ spark subscribe eventName CoreName
$ spark subscribe eventName 0123456789ABCDEFGHI

spark serial list


Shows currently connected Spark Cores acting as serial devices over USB.

# shows a list of cores connected via serial usb


$ spark serial list

spark serial monitor


Starts listening to the specified serial device, and echoes to the terminal

# opens a read-only serial monitor for a particular core
$ spark serial monitor
$ spark serial monitor 1
$ spark serial monitor COM3
$ spark serial monitor /dev/cu.usbmodem12345

spark keys doctor


Helps you update your keys, or recover your core when the keys on the server are out of sync
with the keys on your core. The spark keys tools require both dfu-util and openssl to be
installed.
Connect your core in dfu mode, and run this command to replace the unique cryptographic
keys on your core. Automatically attempts to send the new public key to the cloud as well.

# helps repair key issues on a core


$ spark keys doctor 0123456789ABCDEFGHI

spark keys new


Generates a new public / private keypair that can be used on a core.

# generates a new public/private keypair


$ spark keys new
running openssl genrsa -out core.pem 1024
running openssl rsa -in core.pem -pubout -out core.pub.pem
running openssl rsa -in core.pem -outform DER -out core.der
New Key Created!
# generates a new public/private keypair with the filename mykey
$ spark keys new mykey
running openssl genrsa -out mykey.pem 1024
running openssl rsa -in mykey.pem -pubout -out mykey.pub.pem
running openssl rsa -in mykey.pem -outform DER -out mykey.der
New Key Created!
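If you want to double-check that the .der file really is the DER form of the .pem, you can re-derive and compare it using the same openssl tool the keys commands already require. This is just a sanity-check sketch, not part of the CLI:

```shell
# generate a keypair exactly as spark keys new does
openssl genrsa -out mykey.pem 1024 2>/dev/null
openssl rsa -in mykey.pem -pubout -out mykey.pub.pem 2>/dev/null
openssl rsa -in mykey.pem -outform DER -out mykey.der 2>/dev/null

# re-derive the DER from the PEM and compare byte-for-byte
openssl rsa -in mykey.pem -outform DER 2>/dev/null | cmp -s - mykey.der \
  && echo "keys match"
```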

spark keys load


Copies a .DER formatted private key onto your core's external flash. Make sure your core is
connected and in dfu mode. The spark keys tools require both dfu-util and openssl to be
installed. Make sure any key you load is sent to the cloud with spark keys send core.pub.pem

# loads a key to your core via USB


# make sure your core is connected and blinking yellow
# requires dfu-util
$ spark keys load core.der
...
Saved!

spark keys save


Copies a .DER formatted private key from your core's external flash to your computer. Make
sure your core is connected and in dfu mode. The spark keys tools require both dfu-util
and openssl to be installed.

# creates a backup of the private key from your core to a file


# requires dfu-util
$ spark keys save core.der
...
Saved!

spark keys send

Sends a core's public key to the cloud for use in opening an encrypted session with your core.
Please make sure your core has the corresponding private key loaded using the spark keys

load command.

# sends a new public key to the API for use for your core
$ spark keys send 0123456789ABCDEFGHI core.pub.pem
submitting public key succeeded!

spark keys server


Switches the server public key stored on the core's external flash. This command is important
when changing which server your core is connecting to, and the server public key helps
protect your connection. Your core will stay in DFU mode after this command, so that you can
load new firmware to connect to your server.
Coming Soon - more commands to make it easier to change the server settings on your core!

# changes the public server key stored on your core


# (useful when switching servers)
$ spark keys server my_server.der
Okay! New keys in place, your core will not restart.


TROUBLESHOOTING
Can't get connected?
STEP 0: Check the basics
STEP 1: Set up your Core over USB
STEP 2: Try another network
STEP 3: Reboot and clear memory
STEP 4: Check your router settings
STEP 5: Search the forums
STEP 6: Post a report
OTHER PROBLEMS
I can't talk to my Core
My Core won't start up
My Core is behaving erratically
TROUBLESHOOT BY COLOR
Flashing blue
Flashing green
Flashing yellow
Flashing red
Flashing orange (red/yellow)
Flashing green then red
Pulsing White
Main LED off, small blue LED dim
LEDs off and unresponsive
DEEP UPDATE
Overview

Flash via Spark Build IDE


Flash via Spark CLI
Flash via USB with dfu-util
Troubleshooting
Verify the deep update worked
It won't work, help!
KNOWN ISSUES
Spark UDP - numerous issues
RECENTLY RESOLVED ISSUES
Flashing Cyan
Spark.publish() breaks inside of Spark.function()
Flashing Blue
Inaccurate analog readings
Serial1 UART missing data
Long delays break connectivity
Can't init. peripherals in constructors

TROUBLESHOOTING
CAN'T GET CONNECTED?
There are many reasons that your Spark Core might not be able to connect to your network.
There are many types of Wi-Fi networks, and the Spark Core and the CC3000 do not support
all of them. We consider it an important goal of ours to connect easily and painlessly to as
many networks as possible, and your feedback is extremely valuable so that we can get
better.
The Spark Core works best with a traditional home network: simple networks with WPA/WPA2
or WEP security (or unsecured), with a single router from a reputable company (Apple,
Netgear, Linksys, D-Link, etc.) without any fancy settings. The more your network diverges
from the norm, the more likely you are to encounter issues.
There are known issues with the following types of networks:
802.11n-only networks. The Spark Core is 802.11b/g. Most 802.11n networks are
backwards compatible with 802.11b/g, but if yours is not, the Spark Core will not connect.
Networks with "captive portal" security. A captive portal is the little website that comes
up to ask you to sign in to a network or sign an agreement, like at a Starbucks. The Spark
Core can't navigate these portals.
Enterprise networks. We have had mixed results connecting the Spark Core to enterprise
networks, although we don't yet have a great understanding of what's causing the issue.
This is something that we are working to improve.

Complex networks. Networks with multiple routers, with non-standard firewalls, and with
non-standard settings.
Channels above 11. This is primarily an international issue; if you are outside the U.S.,
your Wi-Fi router might run at channels 12, 13, or 14, which the CC3000 does not support.
Please use channels numbered 11 or lower.
So, let's dig in. If your Spark Core is not connecting to your Wi-Fi network, we recommend
following these steps:

STEP 0: CHECK THE BASICS


There are a few common issues to check first:
Check your Wi-Fi credentials (SSID and password) to make sure you typed them correctly.
Make sure you're in range of your Wi-Fi network. If your phone or computer has a poor
connection in the same location, try moving closer to your Wi-Fi access point.
If you're using a u.FL Core, make sure you have an antenna attached, and that it's firmly
connected.
Make sure your Core has enough power to transmit Wi-Fi signals (300mA in bursts). Try a
different power source, or unplug components that draw a lot of power.

STEP 1: SET UP YOUR CORE OVER USB


On some networks, Smart Config does not work, but the Core can connect to the network just
fine. We've implemented a back-up mechanism so you can set up your Core over USB. Don't
forget that you'll need to claim your Core manually as well if you haven't already!
Setup with USB

STEP 2: TRY ANOTHER NETWORK


There are many reasons that your Core might not connect; some of them have to do with the
Spark Core; some have to do with your mobile device sending the Wi-Fi credentials; some
have to do with the network. If your Core doesn't connect, try another Wi-Fi network. This will
quickly help you figure out which type of issue you might be seeing.

STEP 3: REBOOT AND CLEAR MEMORY

Often, electronics start behaving again after you shut them off and turn them back on. Try:
Closing your mobile app and re-opening it
Un-plugging the Spark Core and plugging it back in
Clear the Spark Core's memory of Wi-Fi networks by holding the MODE button for 10
seconds. After 3 seconds, the light should start flashing blue; after 10 seconds, it should
do a quick burst of blue flashes. That means the memory has been cleared.
Restoring the Spark Core's firmware to the factory default. Getting this right can be tricky;
see this video for illustration.

STEP 4: CHECK YOUR ROUTER SETTINGS


There are a million ways router settings could cause problems, but here's a few things to look
out for:
Use DHCP. Although the Spark Core can handle static IP addresses, it's not configured for
it out of the box, so you'll have to dig into the source code.
Turn off access control and firewalls. Not permanently, but temporarily, to see if it
resolves the issue. If it does, you can hopefully just tweak your settings to accommodate
the Core rather than taking down your security. The only change you may need to make to
your router is to open up outgoing port 5683, the default CoAP port the Spark Core uses to
connect to the Spark Cloud. If your core flashes cyan and occasionally flashes red, router
issues are likely the culprit.

STEP 5: SEARCH THE FORUMS


It's possible that other folks have encountered the same issues that you have. The best way to
check and learn from others is to search the forums; search for your particular symptoms or
for your Wi-Fi router make and model to find relevant posts.

Visit the forums

STEP 6: POST A REPORT


We would love to hear about your issues, regardless of whether you were able to resolve
them, so that we can improve our platform. If you haven't been able to resolve your issue,
hopefully we or the community will be able to help.
Please post issues with connectivity either as responses to this topic or, if they are dissimilar
from other reported issues, as their own topic. When you make a post, please include:
Router make and model
Network security (unsecured, WEP, WPA2, etc.)
Environment (home, small office, enterprise, public network, etc.)
Network topology (number of routers and/or range extenders, estimated number of
devices connected to network)
Internet Service Provider
Any network settings that might diverge from the norm

OTHER PROBLEMS
I CAN'T TALK TO MY CORE
Once your Core is connected, it needs to be claimed in order to be associated with your
account. This is what lets you control your Core and keeps anyone else from doing so.
If you use the mobile app to set up your Core, it should claim it automatically. However if you
connect your Core over USB, or if the claiming process is unsuccessful, you can claim it
manually.
Head over to our connection page to learn about this:
Claiming your Core

MY CORE WON'T START UP


If your Core won't start up (the LED never comes on), here are a few things to check:
Is the Core receiving sufficient power? If you're not sure, connect a multimeter to the 3.3V
pin and GND and see if you get 3.3V, as expected. Try connecting the Core to another

power source.
Have any components been damaged? Visually inspect both sides of the Core.

MY CORE IS BEHAVING ERRATICALLY


If you're seeing unexpected behavior with your Core, here are a few things to check:
Is the Core receiving sufficient power? The Core might behave erratically if it's plugged into
an unpowered USB hub and not receiving enough power. In addition, if you have
components that draw a lot of power (motors, for instance), you might need more power
than your computer can supply. Try using a USB power supply or providing more power
directly to the VIN or 3.3V pins.
If you have a u.FL Core, is an antenna connected? Are you within range of the Wi-Fi router?

TROUBLESHOOT BY COLOR
The Spark Core has an RGB LED positioned on the front that displays the connectivity status
of the Core. This LED can help you debug your Core and resolve any issues that you might
encounter.

FLASHING BLUE

What's the Core doing? My Core is flashing blue.
What's the problem? Your Core doesn't have Wi-Fi credentials to join your local network.
How do I fix it?
Right now, your Core does not have the information it needs to connect to your local Wi-Fi
network. If you haven't already, try using the Spark Core app for iPhone or Android to send
your network credentials to your Core. Detailed instructions can be found here.
If that doesn't work, try the steps below:
1. If your network router supports 802.11n, make sure that it also supports legacy network
protocols, and that it is configured into that mode (the Core supports 802.11b/g networks)
2. If you have a Core with a u.FL connector, make sure the antenna is attached
3. Try rebooting the Core and clearing its memory.
4. Try configuring your Core over USB. Instructions can be found here.
5. If all else fails, please contact the Spark team and provide us with the brand and model of
your smartphone.

FLASHING GREEN
What's the Core doing? My Core is flashing green, but doesn't progress to flashing cyan.
What's the problem? Your Core has received Wi-Fi credentials (an SSID and password), but
still can't connect to the Wi-Fi network.
How do I fix it?
Please complete the following steps:
1. Check the basics.
2. Try a new power source. You should be powering your Core with a power supply that is
capable of providing 500mA of current. We recommend the 5V/1A wall wart power
supplies that are commonly used for charging cell phones.
3. If your network has a landing page or splash page, the Core will not be able to connect; try
configuring it onto a different network.
4. Try rebooting the Core and clearing its memory.
5. Try a factory reset. Hold down both buttons, then release the RST button, while holding
down the MODE button. The LED should begin flashing yellow. Continue holding down the
MODE button until you see the Core change from flashing yellow to flashing white. Then
release the button. The Core should begin flashing blue after the factory reset is complete.
6. Try manually re-running the patch programmer to update the CC3000s firmware over USB.
You can find detailed instructions here.
7. If none of the above are successful, please contact the Spark team and provide us with the
brand and model number of your access point.

FLASHING YELLOW
What's the Core doing? My Core starts flashing yellow when I plug it in or when I hit the
RST button.
What's the problem? Your Core is missing important firmware.
How do I fix it?
Please complete the following steps:

1. Try hitting the RST button to make sure you did not accidentally configure your Core into
DFU mode.
2. Try a factory reset. Hold down both buttons, then release the RST button, while holding
down the MODE button. The LED should begin flashing yellow. Continue holding down the
MODE button until you see the Core change from flashing yellow to flashing white. Then
release the button. The Core should begin flashing blue after the factory reset is complete.
3. If a factory reset is unsuccessful, then we have to write the firmware over DFU. You can
accomplish this by following the steps below:
Install dfu-util for your system, either using homebrew on a Mac, http://dfu-util.gnumonks.org/ on Windows, or by building from source on Linux:

opkg install libusb-1.0-dev


wget http://dfu-util.gnumonks.org/releases/dfu-util-0.7.tar.gz
tar xvf dfu-util-0.7.tar.gz
cd dfu-util-0.7
./configure
make
sudo make install
If you install those you should be able to run, with your core connected over USB:

sudo dfu-util -l
This should give you a list containing something like [1d50:607f]. If that's the case, then
we can install the missing firmware (found here: https://s3.amazonaws.com/spark-website/factory_firmware.bin)

dfu-util -d 1d50:607f -a 1 -s 0x00020000 -D factory_firmware.bin


dfu-util -d 1d50:607f -a 0 -s 0x08005000:leave -D factory_firmware.bin
You can reboot your Core and it should start slow flashing blue, or start flashing green if
everything worked.
If none of these steps are successful, please contact the Spark team.

FLASHING RED
What's the Core doing? My Core is flashing red lights at different intervals when I power it
on.
What's the problem? The Core is reporting a panic code, which could be caused by one of a
large number of potential firmware issues.
How do I fix it?

The panic code is signified by a series of flashing red LED blinks. First, the LED will spell SOS (
... - - - ... ), followed by a number of flashes, followed by another SOS message.
The meaning of the panic codes is described below. 8 flashes, signifying out of heap memory,
is the most common issue.
1. Hard fault
2. Non-maskable interrupt fault
3. Memory Manager fault
4. Bus fault
5. Usage fault
6. Invalid length
7. Exit
8. Out of heap memory
9. SPI over-run
10. Assertion failure
11. Invalid case
12. Pure virtual call
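The numbered table above lends itself to a small lookup helper. In this sketch, the panic_name function is made up for illustration (it is not part of the CLI); it maps a counted number of red flashes between the SOS patterns to its meaning:

```shell
# Hypothetical helper: translate a counted flash number to a panic name.
panic_name() {
  case "$1" in
    1)  echo "Hard fault" ;;
    2)  echo "Non-maskable interrupt fault" ;;
    3)  echo "Memory Manager fault" ;;
    4)  echo "Bus fault" ;;
    5)  echo "Usage fault" ;;
    6)  echo "Invalid length" ;;
    7)  echo "Exit" ;;
    8)  echo "Out of heap memory" ;;
    9)  echo "SPI over-run" ;;
    10) echo "Assertion failure" ;;
    11) echo "Invalid case" ;;
    12) echo "Pure virtual call" ;;
    *)  echo "Unknown panic code" ;;
  esac
}

panic_name 8   # the most common panic
```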

FLASHING ORANGE (RED/YELLOW)


What's the Core doing? My Core is flashing yellow/red/orange lights after it connects to Wi-Fi.
What's the problem? A decryption error occurred during the handshake with the Spark
Cloud.
How do I fix it?
Please complete the following steps:
1. A full set of instructions for resolving this issue can be found at the following location on
the Spark Community forums. If the steps included in the link below are unsuccessful,
please contact the Spark team.
Replacing your Spark Cloud credentials

FLASHING GREEN THEN RED


What's the Core doing? My Core starts flashing green to connect to my network, then the
LED turns red.
What's the problem? Your Core is facing a networking issue and cannot connect to the
Cloud.
How do I fix it?

There are two potential failure modes here: either your home network does not have a
working internet connection, or we are having issues with our servers.
1. Try power cycling your router to resolve any transient networking hiccups in your home
Wi-Fi network
2. Try going to a website like Google on your computer or laptop to verify that your Wi-Fi
network is connected to the internet and is capable of serving up web pages
3. Check www.spark.io/status to see if there is a known issue with the Spark Cloud
4. If you're still seeing this issue, please contact the Spark team.

PULSING WHITE
What's the Core doing? The main LED on my Core slowly pulses white, even if I reset it or
perform a factory reset.
What's the problem? The CC3000 on the Core is having trouble initializing due to a
potential hardware issue.
How do I fix it?
In general, if the LED on your Core starts breathing white, the best thing to do is to reach out
to the Spark team. Refer to this issue in your email, and Spark's Technical Support staff will
help you resolve the problem directly.

MAIN LED OFF, SMALL BLUE LED DIM


What's the Core doing? The main LED on my Spark Core is off, but the small blue LED in the
upper right corner is dimly glowing.
What's the problem? Your Core is missing firmware.
How do I fix it?
1. Try a factory reset. Hold down both buttons, then release the RST button, while holding
down the MODE button. The LED should begin flashing yellow. Continue holding down the
MODE button until you see the Core change from flashing yellow to flashing white. Then
release the button. The Core should begin flashing blue after the factory reset is complete.
2. If you see no flashing lights during factory reset, then your Core may be temporarily
nonfunctional. If you have a JTAG shield, contact the Spark team so we can help walk you
through re-installing the Core firmware. If you do not have a JTAG shield, please contact
the Spark team to let us know, and we'll help you take next steps.

LEDS OFF AND UNRESPONSIVE


What's the Core doing? My Core isn't showing any LED activity when I power it over USB.
What's the problem? Your Core is not receiving power.
How do I fix it?

Please complete the following steps:


1. Try powering the Core with a different USB cable and power supply (different USB port on
your computer, for example)
2. If a different USB cable and power supply does not fix the issue, your Core may have a
hardware short. Please contact the Spark team for further debugging.

DEEP UPDATE
A deep update is a firmware update that reaches deep into the internals of a core and
updates the firmware of peripheral modules like the CC3000. Periodically, as enhancements
and bugfixes become available for components on the Core, we'll release new deep updates
to keep your hardware always running the latest, greatest firmware within your application
and the other underlying flashable components. Our first deep update release,

deep_update_2014_06 is the maiden voyage of this feature, designed to apply the CC3000
patch, fix the flashing cyan issue, and dramatically improve the stability and performance of
the Core.

OVERVIEW
There are multiple ways to apply the CC3000 deep update described below. Regardless of
which path you choose, all of them will invoke the same behaviors once the binary has been
flashed to the Core. This firmware employs the following logic:
1. Selectively apply the patch if needed, if the CC3000 firmware version is less than "1.28".
2. Restart, reconnect to cloud, auto-upgrade to the latest Tinker via an over-the-air firmware
update.
3. Restart, reconnect to cloud, publish spark/cc3000-patch-version (latest Tinker does this).
In step one, while the CC3000 firmware is being upgraded, the LED will blink orange. It looks
very similar to the bootloader mode's blinking yellow; if you look closely, it is in fact orange! :)
Sometimes over-the-air firmware updates can fail. If your Core freezes while blinking magenta,
just reset it and try again.
If you want a preview of what to expect, please check out these videos that illustrate
what a deep update looks like on a Core.
This video illustrates what a deep update looks like when the OTA firmware update fails a
couple of times, but ultimately succeeds.

FLASH VIA SPARK BUILD IDE


The easiest way to apply deep_update_2014_06 is to simply log into the Spark Build IDE.
When you login, you'll be prompted with instructions and links that will show you the way.
Once all of your claimed cores have had the deep update applied to them, you'll no longer be
prompted. Note: You'll need to have a Core connected and breathing cyan for this to work.
If you're on a noisy Wi-Fi network or you've had trouble flashing wirelessly in the past, you
might want to consider using one of the alternate USB-based approaches described below.
Flash via Spark CLI
The Spark CLI is a Swiss Army knife of a command line tool that can be used to do all kinds
of cool things...like flash a deep update to your core. The README provides some nice
documentation about how to install it and how to do a deep update over USB. The process is
pretty simple:
Install or Upgrade the CLI (requires Node.js):

npm install -g spark-cli


Connect a Core to your computer via USB and put it into dfu-mode.
Run the flash command:

spark flash --usb deep_update_2014_06


This installs the deep update from a binary that is packaged with the Spark CLI, so you don't
have to download it.

FLASH VIA USB WITH DFU-UTIL


Note: If you can, you should use the spark-cli described above; it's simpler and easier.
To flash a Core over USB without the spark-cli, you'll need the dfu-util utility and the

deep_update_2014_06 binary.
You can install dfu-util via the instructions provided here.
You can download the deep_update_2014_06 binary here.
To flash the deep update binary to the core, first put it into dfu-mode:
Hold down BOTH buttons
Release only the RST button, while holding down the MODE button.

Wait for the LED to start flashing yellow


Release the MODE button
Then, with your lovely Core blinking yellow, type:

dfu-util -d 1d50:607f -a 0 -s 0x8005000:leave -D \
~/Downloads/deep_update_2014_06.bin
# ^^ YOU WILL NEED TO CHANGE THIS PATH
#    TO POINT AT THE FILE YOU DOWNLOADED
After that completes, the core will reset and commence the deep update.

TROUBLESHOOTING
Verify the deep update worked
The following gets very technical; it is provided in case you're unsure whether the patch
worked, or you need to inspect the state of your Core more closely.
If a core requires a deep update, the API will tell you via the list devices endpoint:

# Which cores require a deep update?


curl 'https://api.spark.io/v1/devices?access_token=9aa51...'
#        REPLACE WITH YOUR ACCESS TOKEN ^^^^^^^^
[
{
"id": "51ff69065067545755380687",
"name": "joe_prod_core2",
"last_app": null,
"last_heard": "2014-07-02T23:15:00.409Z",
"connected": true,
"requires_deep_update": true # <--- NEW KEY/VALUE
}
]
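If you save a response like that to a file, you can pull out just the cores that still need the update. This is a rough sketch using grep on a compact, one-object-per-line copy of the response; the sample data below is illustrative, and a real workflow would use a proper JSON tool instead:

```shell
# Made-up sample response, compacted to one device object per line.
cat > devices.json <<'EOF'
[{"id":"51ff69065067545755380687","name":"joe_prod_core2","connected":true,"requires_deep_update":true},
{"id":"51ff6b0e5e675f5755380687","name":"jfg_core","connected":true}]
EOF

# keep only the objects flagged with requires_deep_update, then grab ids
grep -o '{[^}]*"requires_deep_update":true[^}]*}' devices.json \
  | grep -o '"id":"[^"]*"'
```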
The API will also tell you what firmware version the CC3000 is running for a particular core.
This is handy for verifying that the patch was successfully applied.

# Before applying the patch, the version is less than the newest
curl 'https://api.spark.io/v1/devices/51fab...?access_token=9aa51091b8...'
#        REPLACE WITH CORE ID ^^^^^^^^   REPLACE ACCESS TOKEN ^^
# Note the cc3000_patch_version and requires_deep_update keys
{

"id": "51ff6b0e5e675f5755380687",
"name": "jfg_core",
"connected": true,
"variables": {},
"functions": [],
"cc3000_patch_version": "1.23", # <--- THE RADIO FIRMWARE VERSION
"requires_deep_update": true   # <--- REQUIRES UPDATE
}

# After applying the patch, the requires_deep_update key is not there and the version is the newest
curl 'https://api.spark.io/v1/devices/51fab...?access_token=9aa51091b8...'
#        REPLACE WITH CORE ID ^^^^^^^^   REPLACE ACCESS TOKEN ^^
# Note the updated cc3000_patch_version key
{
"id": "51ff69065067545755380687",
"name": "joe_prod_core2",
"connected": true,
"variables": {},
"functions": [
"digitalread",
"digitalwrite",
"analogread",
"analogwrite"
],
"cc3000_patch_version": "1.28" # <-- AH SO FRESH, DEEP UPDATE DONE
}
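Checking a reported cc3000_patch_version against the 1.28 target can also be scripted. The needs_patch helper below is made up for illustration and assumes plain major.minor version strings like those shown in the responses above:

```shell
# Hypothetical helper: succeeds (exit 0) when the reported CC3000
# firmware version is older than 1.28 and still needs the deep update.
needs_patch() {
  awk -v v="$1" 'BEGIN {
    split(v, a, ".")
    if (a[1] > 1 || (a[1] == 1 && a[2] >= 28)) exit 1  # already up to date
    exit 0                                             # needs the deep update
  }'
}

needs_patch "1.23" && echo "deep update required"
needs_patch "1.28" || echo "radio firmware is current"
```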

It won't work, help!


If you've tried both the IDE and command line approaches to applying the deep update and
neither of them is working for you:
1. Search the community site for "deep update"; maybe someone else has encountered your
issue and can help you out.
2. If you can't find a thread that sounds similar to the problem you're experiencing, post a
thread that includes the words "deep update" in the title. Be sure to include detailed
information about what you've tried so far and what the failure condition looks like, so it's
easy for others to help you.
3. If you've tried the suggestions here and on the community site and nothing seems to
be working, please contact hello@spark.io. Again, please provide information about what
you've tried, how it's failing, relevant threads that didn't work, etc.

KNOWN ISSUES

SPARK UDP - NUMEROUS ISSUES


Status: Acknowledged
Forum Thread: https://community.spark.io/t/udp-issues-and-workarounds/4975
Description
There are numerous issues with Spark UDP. The central one is that received datagram
boundaries are not preserved by Spark UDP. This and other issues, together with some
workarounds, are detailed at the forum thread linked to above.

RECENTLY RESOLVED ISSUES


FLASHING CYAN
Status: Solution available, see below.
Forum Thread: https://community.spark.io/t/bug-bounty-kill-the-cyan-flash-of-death/1322
Description
With certain Wi-Fi networks, the Spark Core will sometimes enter a state where the status LED
will flash cyan. Flashing cyan means that the Spark Core can no longer communicate with the
Spark Cloud server. If this happens, the Spark Core is programmed to try to reconnect. When
the Core reconnects, the status LED will go back to 'Breathing Cyan'.
The Spark Core is equipped with a Texas Instruments (TI) CC3000 Wi-Fi module to facilitate
wireless networking. The CC3000 module has its own closed-source firmware that was
created by TI. Unfortunately, it was discovered that the firmware on the CC3000 module itself
has an issue that can cause the module to stop responding. In this case, the Spark Core
entered a permanent state of flashing cyan referred to as the 'Cyan Flash of Death' or CFOD. A
manual reset was required to reconnect the Spark Core.
As of July 2014, after months of iterating with TI to develop a patch that resolves the issue, we
finally have a working patch and a streamlined approach to rolling it out. See the deep update
section above for complete documentation of the fix.

SPARK.PUBLISH() BREAKS INSIDE OF SPARK.FUNCTION()
Status: Resolved as of v0.2.1

Forum Thread: https://community.spark.io/t/spark-publish-crashing-core/3463


Description
If Spark.publish() is called within a function registered via Spark.function(), the Core may
become unresponsive for a short period of time, and the cloud API call may return a 408
timed-out error.
A fix can be applied in the user code that will work around this issue. A simple explanation
can be found in post #10 of the forum thread.

FLASHING BLUE
Status: Resolved as of v0.2.0
Github Issue: https://github.com/spark/core-firmware/issues/144
Forum Thread: https://community.spark.io/t/status-led-flashing-blue/2915
Description
In some cases after attempting to connect to a Wi-Fi network and failing repeatedly, the Core
will step back into listening mode, and will stop attempting to connect to the internet.
This issue has been resolved, and the fix was pushed with firmware v0.2.0 on March 25.

INACCURATE ANALOG READINGS


Status: Resolved as of v0.2.0
Forum Thread: https://community.spark.io/t/odd-analog-readings/906
Forum Thread: https://community.spark.io/t/odd-analog-readings-part-2/2718
Description
Timing issues were causing analog readings to return incorrectly; this has now been fixed
with this commit.
This issue has been resolved, and the fix was pushed with firmware v0.2.0 on March 25.

SERIAL1 UART MISSING DATA


Status: Resolved
Description

Previously, the Serial1 UART used polling, and data could be dropped if the user code did not
read frequently enough. Serial1 is now interrupt-driven, so this is no longer an issue.

LONG DELAYS BREAK CONNECTIVITY


Status: Resolved
Forum Thread: https://community.spark.io/t/known-issue-long-delays-or-blocking-code-kills-the-connection-to-the-cloud/950
Description
Long delays can keep messages from being sent to the Cloud, which can cause the connection
with the Cloud to abruptly die.
We recently released an update to process Cloud messages during long delays, making this
issue significantly less of a problem. It is still possible to block the connection to the Cloud
with a long series of very short delays, but longer delays will no longer cause issues.

CAN'T INIT. PERIPHERALS IN CONSTRUCTORS


Status: Resolved
Forum Thread: https://community.spark.io/t/serial1-begin-in-class-constructor-hangs-core/3133
Description
Calling peripheral initialization functions such as Serial1.begin() from a global object's
constructor would hang the Core, because constructors ran before the Core's hardware was
initialized. Constructors are now called after the Core is initialized, so this is no longer an
issue.
