
Table of Contents
Introduction
Chapter 1: Getting Started
Chapter 2: The Basics
Chapter 3: Syntax and Data-Types
Chapter 4: Functions and Definitions
Chapter 5: Control Flow
Chapter 6: Types in Detail
Chapter 7: Signatures and Structures
Chapter 8: Functors, Applicatives and Monads
Chapter 9: Metaprogramming
Chapter 10: The AST and Macros
Chapter 11: Syntax Macros
Chapter 12: I/O
Chapter 13: JVM Interop
Chapter 14: Concurrency
Chapter 15: Persistent Data Structures
Chapter 16: Testing
Conclusion
Appendix A: Import Syntax
Appendix B: Math in Lux
Appendix C: Pattern-Matching Macros
Appendix D: The Art of Piping
Appendix E: Lux Implementation Details
Appendix F: Structure Auto-Selection
Appendix G: Lexers and Regular Expressions

Introduction
The Lux programming language is a functional language belonging to the Lisp family. It
features a flexible and expressive static type-system, and it is meant to run on a variety of
different platforms.

Lux is currently in development. Some of the features expected of the language have yet to
be added (in particular, more compilers to support more platforms).

Despite this, Lux has already come far enough in its development that it can be used to write
a variety of programs that can run on the Java Virtual Machine, the first platform targeted by
Lux.

The semantics of Lux are in no way tied to those of the JVM, and as such, Lux should be
understood as a universal language; meant to express programs in a way that is as cross-
platform as possible, while at the same time able to tap into the richness that each particular
platform has got to offer.

Besides the focus on targeting multiple platforms, Lux's design also covers several other
important topics in computer science and software engineering.

Lux is committed to the functional style of program design, being a purely-functional
programming language, while also adopting eager evaluation over lazy evaluation, to
promote simpler reasoning about the performance and behavior of programs.

Lux also offers novel features in the area of meta-programming, with first-class types that
can be examined and constructed at compile-time, monadic macros with access to the state
of the compiler, and a style of macro definition that promotes composition and easy
interaction between different macros.

While the richness and variety of what Lux has got to offer is much larger than what can be
described in this introduction, hopefully I've already mentioned enough to stimulate the
curiosity of those interested in advanced concepts in programming languages and computer
science, and those engineers seeking powerful tools for both program design and
implementation.

Lux is both a simple and a complex language. Its design allows you to make effective
programs with just a small subset of what it has to offer, but the goal of the language is to
provide its users with an arsenal of powerful tools to suit their various needs in their projects.


Finally, I must note that Lux is a practical language, meant for day-to-day usage by software
engineers, instead of just research and experimentation by academics. It may seem
unnecessary to point that out, but both Lisp-like languages and functional languages have
earned a reputation for being academic in nature. While Lux's design does involve a lot of
advanced ideas in computer science, it's with the intention of turning Lux into a powerful and
effective tool for day-to-day software engineering.

It is my hope that within these pages the reader will find both a host of new ideas to enrich
their perspective on programming, and the promise of great power, should they choose to
add Lux to their arsenal of programming languages.

I wish you, my dear reader, good luck on this journey, and much fun!


Chapter 1: Getting Started


Where you will learn how to set up a development environment for Lux.

Before any coding can happen, it is necessary to set up everything you need to become a
productive Lux programmer.

Question #1: How do I write Lux code?

Text editor support is a fundamental thing for any language, and Lux already covers some of
that. The catch is that there's only support for Emacs at the moment.

The plugin is called lux-mode, and you can obtain it from this website:
https://github.com/LuxLang/lux/tree/master/lux-mode.

The instructions for how to install it are at the link and it won't take much time.

Question #2: How do I build Lux programs?

Currently, Lux doesn't have a build tool of its own.

Instead, we're going to piggy-back on Leiningen, a build tool originally meant for Clojure, but
that can be customized for our purposes.

To install it, go to this website: http://leiningen.org/.

Question #3: How do I use Leiningen for Lux?

To find out, let's create a sample project that will have everything we need.

These are the steps:

1. Create a directory called my_project .
2. Create a new project file at my_project/project.clj .
3. Add this to the project file:

(defproject my_project "0.1.0-SNAPSHOT"
  :plugins [[com.github.luxlang/lein-luxc "0.5.0"]]
  :dependencies []
  :lux {:program "main"}
  :source-paths ["source"]
  )


This will fetch the Lux plugin for Leiningen, alongside the Lux standard library.

It also tells the plugin that we'll be using the my_project/source directory for our source
code, instead of Leiningen's default src directory.

The file containing our program will be my_project/source/main.lux .

4. Create source/main.lux and add this code to it:

(;module: {#;doc "This will be our program's main module."}
  lux
  (lux (codata io)
       [cli #+ program:]))

(program: args
  (io (log! "Hello, world!")))

As you can see, this is nothing more than a very simple "Hello, world!" program to test
things out.

Everything will be explained later in the rest of the book.

5. In your terminal, go to my_project , and execute lein lux build .


When it's done, you should see a message like this:

...
Compilation complete!
"Elapsed time: 13022.487488 msecs"
[COMPILATION END]

A directory named target will have been created, containing everything that was
compiled, alongside an executable JAR file.

6. Run the program with this: java -jar target/jvm/program.jar

7. Smile :)

Question #4: Where can I find documentation for Lux?

An especially useful source of information is the documentation for the standard library, at
https://github.com/LuxLang/lux/wiki/Standard-Library.

You can also explore the Lux repository on GitHub for more information at
https://github.com/LuxLang/lux.

Question #5: Bro, do you even REPL?


REPL support for Lux is still a work in progress, but you can play around with it.

There is the lein lux repl command, but for reasons I can't explain right now, that won't
actually run the REPL, but instead it will spit out a command that will run it for you.

If you just want to run the REPL, you can get the job done with this command:

eval "$(lein lux repl)"

You can play with it for a while and type exit when you're done.

Question #6: Where do I talk about Lux?

Right now, there are 2 places where the Lux community can gather and talk.

There is the Gitter group, for more dynamic and chatty communication:
https://gitter.im/LuxLang/lux.

And there is the Google group, for announcements and more asynchronous email-like
communication: https://groups.google.com/forum/#!forum/lux-programming-language.

Now you know how to set up a Lux-based project using Leiningen, and you have hopefully
configured your Emacs to use lux-mode.

We can proceed to the actual teaching of the language!

See you in the next chapter!


Chapter 2: The Basics


Where you will learn the fundamentals of Lux programming.

Modules
Lux programs are made of modules.

A module is a file containing Lux code, and bearing the extension .lux at the end of its
name (like our main.lux file).

Modules contain a single module statement, various definitions and a few other kinds of
statements as top-level code (that is, code that is not nested within other code).

Definitions
Definitions are the top-level or global values that are declared within a module. They may be
of different types, such as constant values or functions, or even fancier things like types,
signatures or structures (more on those in later chapters).

Also, definitions may be private to a module, or exported so other modules can refer to
them. By default, all definitions are private.

Value
Values are just entities which carry some sort of information. Every value has a type, which
describes its properties.

Lux supports a variety of basic and composite values:

Bool: true and false boolean values.
Nat: Unsigned integers (64-bit longs, in the JVM environment).
Int: Signed integers (64-bit longs, in the JVM environment).
Real: Signed floats (64-bit doubles, in the JVM environment).
Frac: Unsigned numbers in the interval [0,1) (with 64-bit precision, in the JVM
environment).
Char: Characters.
Text: Strings.
Unit: A special value that sort-of represents an empty value or a non-value.
Function: A first-class function or procedure which may be invoked, or passed around
like other values.


Tuple: An ordered group of heterogeneous values which may be handled as a single
entity.
Variant: A value of a particular type, from a set of heterogeneous options.

Note: The Bool , Nat , Int , Real , Frac and Char values in Lux are actually
represented using boxed/wrapper classes in the JVM, instead of the primitive values.

That means Bool corresponds to java.lang.Boolean , Int corresponds to
java.lang.Long , Real corresponds to java.lang.Double , and Char corresponds to
java.lang.Character .

The reason is that the infrastructure currently provided by the JVM, with its innate
class/object system, is better suited for working with objects than for working with
primitive values, with some important features available only for objects (like collection
classes, for example).

To minimize friction while inter-operating with the host platform, Lux uses the
boxed/wrapper classes instead of primitives, and (un)boxing is done automatically
when necessary while working with the JVM.

Types
Types are descriptions of values that the compiler uses to make sure that programs are
correct and invalid operations (such as multiplying two bools) are never performed.

The thing that makes Lux types special is that they are first-class values, the same as bools
and ints (albeit, a little more complex). They are data-structures, and they even have a
type... named Type (I know, it's so meta). We'll talk more about that in later chapters.

Macros
Macros are special functions that get invoked at compile time, and that have access to the
full state of the compiler.

The reason they run during compilation is that they can perform transformations on code,
which is a very useful thing to implement various features, DSLs (domain-specific
languages) and optimizations. We'll also explore macros further in later chapters.

Comments
We haven't seen any comments yet being used in Lux code, but Lux offers 2 varieties:

1. Single-line comments:


## They look like this.
## They all start with 2 continuous # characters and go on until the end of the line.

2. Multi-line comments:

#( They look like this.
   They can span as many lines as you need them to.
   #( And they can even be nested inside one another. )# )#
#(Psss! The space-padding I added is entirely optional; but don't tell anyone!)#

Expressions
An expression is code that may perform calculations in order to generate a value.

Data literals (like int, tuple or function literals) are expressions, but so are function calls,
pattern-matching and other complex code which yields values.

Macro calls can also be involved if the macro in question generates code that constitutes an
expression.

Statements
Statements look similar to expressions, except that their purpose is not to produce a value,
but to communicate something to the compiler. This is a bit of a fuzzy line, since some things
which also communicate stuff to the compiler are actually expressions (for example, type
annotations, which we'll see in the next chapter).

Examples of statements are module statements and definitions of all kinds (such as program
definitions).

Programs
Lux doesn't have special "main" functions/procedures/methods that you define, but the
program: macro accomplishes the same thing and works similarly.

It takes a list of command-line inputs and must produce some sort of action to be performed
as the program's behavior. That action must be of type (IO Unit) , which just means it is a
synchronous process which produces a Unit value once it is finished.

Command-Line Interface


Lux programs can have graphical user interfaces, and in the future they may run in various
environments with much different means of interfacing with users, or other programs.

But as a bare minimum, the Lux standard library provides the means to implement
command-line interfaces, through the functionality in the lux/cli module.

That module implements a variety of parsers for implementing rich command-line argument
processing, and you should definitely take a look at it once you're ready to write your first
serious Lux program.

Functional Programming
This is the main paradigm behind Lux, and there are a few concepts that stem from it which
you should be familiar with:

Immutable Values: The idea is that once you have created a value of any type, it's
frozen forever. Any changes you wish to introduce must be done by creating a new
value with the differences you want. Think, for instance, of the number 5. If you have 2
variables with the same number, and you decide to change the value in one variable to
8, you wouldn't want the other variable to be affected. Well, the same idea applies to all
values. This is clearly a departure from the imperative and object-oriented style of
having all data be mutable, but it introduces a level of safety and reliability in functional
programs that is missing in the imperative style.
First-Class Functions: This just means that functions are values like any other. In most
languages, functions/methods/procedures are more like features you register in the
compiler for later use, but that just remain static in the background until you invoke
them. In functional programming, you can actually pass functions as arguments to other
functions, or return them as well. You can store functions in variables and inside data-
structures, and you can even produce new functions on the fly at run-time.
Closures: Functions that get generated at run-time can also "capture" their environment
(the set of local variables within the function's reach), and become closures. This is the
name for a function which "closes over" its environment, making it capable to access
those values long after the function was originally created. This allows you to create
functions as templates which get customized at run-time with values from their
environment.
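Since Lux code can't run on this page, here is a minimal sketch of the last two ideas in Python (an analogy only, not Lux code): functions being stored and passed around like any other value, and a closure capturing a variable from its environment.

```python
# Python analogy (not Lux): first-class functions and closures.

def make_adder(n):
    # 'add' is created at run time and captures 'n' from its
    # environment, which makes it a closure.
    def add(x):
        return x + n
    return add

add_five = make_adder(5)                  # functions can live in variables...
operations = [add_five, make_adder(10)]   # ...and inside data structures
```

Here, `make_adder` acts as a template: each call produces a new function customized with a different captured value, just as described above.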

Now, let's talk a bit more about the program we saw last time.

In the previous chapter we compiled and ran a Lux program, but nothing has been explained
yet. Let's review the code and see in detail what was done.


(;module: {#;doc "This will be our program's main module."}
  lux
  (lux (codata io)
       [cli #+ program:]))

(program: args
  (io (log! "Hello, world!")))

The first part of this program is the module declaration.

All Lux modules automatically import the lux module, but they don't locally import every
single definition, so everything would have to be accessed by using the lux; prefix or the
; (short-cut) prefix.

To avoid that, we import the lux module in a plain way.

By the way, what I just explained about the lux module is the reason why we couldn't
just use the module macro as module: .

Also, the lux module is where the log! function resides.

Then we import the lux/codata/io module, also in a plain way. What that means is that
we're locally importing every single exported/public definition within that module, so we can
access it easily in our own module (more on that in a moment). Also, we're not giving this
module any alias, and instead we will always refer to it explicitly as lux/codata/io , should
we ever want to prefix something. Notice how we express nested modules (up to arbitrary
depths) by simply nesting in parentheses. The lux/codata/io module, by the way, is where
we get the io macro that we use later.

Finally, we import the lux/cli module. Notice how the syntax is a bit different in this case.
Here, we're saying that we don't want to locally import any definition within it, except
program: . Also, we're giving the lux/cli module the shorter alias cli .

Now, let's analyse the actual code!

We're defining the entry point of our program (what in many other languages is referred to as
the main function/procedure/method). We'll be receiving all the command-line arguments in
a (List Text) called args , and we must produce a value of type (IO Unit) .

We'll go into more detail about what IO and Unit mean in the next chapter.

Suffice it to say that the log! function will produce a value of type Unit after
printing/logging our "Hello, world!" text, and the io macro will wrap that in the IO type.

That (IO Unit) value will then be run by the system at run-time, giving us the result we
want.


Now that we've discussed some of the basics of what goes on inside of Lux programs, it's
time for us to explore the language in a little bit more depth.

See you in the next chapter!


Chapter 3: Syntax and Data-Types


Where you will learn what Lux code is made of.

Syntax for data-types


Bools look like this:

true false

Nats look like this:

+10 +0 +20 +123_456_789

Ints look like this:

10 0 -20 123_456_789

Reals look like this:

123.456 -456.789 0.001 123_456.789

Fracs look like this:

.456 .789 .001

Chars look like this:

#"a" #"\n" #"\u1234"

Texts look like this:

"This is a single-line text"

"And this one is multi-line.

Mind you, the columns must align on each start of a line, or the compiler will complain.

But empty lines can just stay empty, so you don't need to pad them with white-space."

Unit looks like this:

[]


Tuples look like this:


[10 ["nested" #tuple] true]

Variants look like this:

#Foo (#Bar 10 20.0 "thirty")

Records look like this:

{#name "Lux" #paradigm #Functional #platforms (list #JVM)}

As you can see, underscores ( _ ) can be used as separators for the numeric literals.

From looking at this, we can see a few interesting bits we haven't discussed.

One is that the hash ( # ) character is overloaded.

In the last chapter we saw it being used for comments, but now we're seeing it being used
as a prefix for characters, but also as a prefix for some weird "label" thingies (more on that in
a moment).

To avoid reserving many characters for the language, Lux overloads the hash ( # ) character
in situations where it can be used unambiguously. That way, most characters can be used
by anyone without fear of stepping on the feet of the language.

Regarding those label thingies we saw earlier, they're called tags, and the reason they're not
mentioned as part of Lux's data-types is that they're not really data-types; they're just part of
the language syntax.

They're used as part of the syntax for data-types, but they're not data-types in themselves.

Also, you can't just use anything you want as a tag, as you first have to declare them.

We'll talk more about tags a bit later, when we talk about defining types.

Also, just from looking at the syntax for unit and tuples, you can see that they look quite
similar. The reason is that unit is actually the empty tuple. I know it sounds odd, but for the
most part you just have to think of unit as a kind of empty value, considering that it contains
no information inside.

It might sound especially odd that we have an "empty" value at all in the first place, but
as it turns out, it's quite useful in a variety of situations.

You're about to see one of those pretty soon.


In the section for variants, you can see 2 different alternatives, and you might wonder how
they differ.

Well, a variant is a pair of a tag and a single value. That's right, I said single value; so you
might be wondering how come we're associating 3 values with the #Bar tag.

It's pretty simple, actually. Whenever you're trying to create a variant with more than one
value, Lux just wraps all the values inside a tuple for you.

So, (#Bar 10 20.0 "thirty") is the same as (#Bar [10 20.0 "thirty"])

Now, you might be thinking: what's up with that #Foo variant?

Well, sometimes you only care about a variant for its tag, and not for any value it may hold
(for example, if you're trying to use a variant type as an enumeration). In that case, you'll
want to pair the tag with an empty value (since it has to be paired with something).

That's right! You've just witnessed unit in action and you didn't even know it. If you just write
the name of the tag without any parentheses, Lux will stick a unit in there for you.

That means #Foo is the same as (#Foo [])
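The tag-plus-single-value encoding can be modeled outside Lux. Below is a rough Python sketch (an analogy only; this is not Lux's actual runtime representation) where a variant is a (tag, value) pair, several values get wrapped in a tuple, and a lone tag gets paired with an empty tuple standing in for unit.

```python
# Python analogy (not Lux's real representation): a variant is a pair
# of a tag and a *single* value.

def variant(tag, *values):
    if len(values) == 0:
        return (tag, ())          # lone tag: paired with unit, like #Foo
    if len(values) == 1:
        return (tag, values[0])   # a single value stays as-is
    return (tag, values)          # several values: wrapped in a tuple, like #Bar

foo = variant("Foo")                       # ~ #Foo, i.e. (#Foo [])
bar = variant("Bar", 10, 20.0, "thirty")   # ~ (#Bar 10 20.0 "thirty")
```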

You might have noticed that I mentioned records in this chapter, but not in the previous
chapter, where I also talked about the basic data-types Lux offers.

The reason is that records are a bit of a magic trick in Lux. That means records are not really
a data-type that's distinct from the other ones. In fact, records just offer you an alternative
syntax for writing tuples.

That's right! {#name "Lux" #paradigm #Functional #platforms (list #JVM)} could mean the
same as ["Lux" #Functional (list #JVM)] , depending on the ordering imposed by the tags.

Remember when I said that you needed to declare your tags? Well, depending on the order
in which you declare them, that means that #name could point to the first element in the
tuple, or to another position entirely. Also, in the same way that tags have a numeric value
when it comes to their usage in tuples/records, that's also the case for variants.

For example, the List type has two tags: #;Nil and #;Cons . The #;Nil tag has value
0, while the #;Cons tag has value 1. That's what allows Lux to be able to identify which
option it's working with at runtime when you're dealing with variants.
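To make the tag-ordering idea concrete, here is a small Python sketch (an analogy, not how Lux is implemented, and the tag order shown is hypothetical) of a record being laid out as a tuple according to the declaration order of its tags.

```python
# Python analogy (not Lux): record syntax as sugar over a tuple, with
# each tag assigned a position by its (hypothetical) declaration order.
TAG_ORDER = {"name": 0, "paradigm": 1, "platforms": 2}

def record_to_tuple(fields):
    # Place each field's value in the slot its tag points to.
    slots = [None] * len(TAG_ORDER)
    for tag, value in fields.items():
        slots[TAG_ORDER[tag]] = value
    return tuple(slots)

rec = record_to_tuple({"paradigm": "Functional",
                       "name": "Lux",
                       "platforms": ["JVM"]})
# The same tuple comes out no matter the order the fields were written in.
```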


Tags belong to the module in which they were declared, and you must use the module name
(or an alias) as a prefix when using tags. That is why I've written #;Nil and #;Cons ,
instead of #Nil and #Cons . However, you may forgo the prefixes if you're referring to tags
which were defined in the same module in which they're being used.

Finally, you may have noticed that, unlike all other data-types, variants re-use some syntax
that you've already seen before: the parentheses. Clearly, we didn't build our program by
creating a bunch of variants, so, what's going on?

Well, the parentheses delimit the syntax of what is called a form in Lux. This is actually an
old concept that's very familiar to those with experience in other Lisp-like languages.
Basically, a form is a composite expression or statement.

When the form starts with a tag, Lux interprets that to be a variant.

Types for data-types


"But, wait!", you might say. "We didn't talk about functions!"

Patience, young grasshopper. We'll talk about those in the next chapter.

For now, let's talk about types.

The type-annotation macro is called : (I know, real cute). You use it like this: (: Some-Type
some-value) .

There is also a separate macro for type-coercions that's called :! , which is used the
same way. However, you should probably steer clear of that one, unless you know what
you're doing, since you can trick the compiler into thinking a value belongs to any type you
want by using it.

Now that we know about type annotations, I'll show you some types by giving you some
valid Lux expressions:

(: Bool true)

(: Nat +123)

(: Int 123)

(: Real 456.789)

(: Frac .789)

(: Char #"a")

(: Text "YOLO")

(type: Some-Enum #primitive #tuple #variant)

(: [Int [Text Some-Enum] Bool] [10 ["nested" #tuple] true])


(type: Quux #Foo (#Bar Int Real Text))

(: Quux #Foo)

(: Quux (#Bar 10 20.0 "thirty"))

(type: Lang {#name Text #paradigm Paradigm #platforms (List Platform)})

(: Lang {#name "Lux" #paradigm #Functional #platforms (list #JVM)})

(: Lang ["Lux" #Functional (list #JVM)])

(: [Text Paradigm (List Platform)] {#name "Lux" #paradigm #Functional #platforms (list #JVM)})

By the way, the value of a type-annotation or a type-coercion expression is just the
value being annotated/coerced. So (: Bool true) simply yields true .

What is that type: thingie?

It's just a macro for defining types. We'll learn more about it in a future chapter.

The tags that get mentioned in the type definition get automatically declared, and the order
in which they appear determines their value. #Foo came first, so its value is 0. #Bar , as
you may guess, gets the value 1.

Also, you might be wondering what's the difference between List and list . Well, the first
one is the type of lists (or a type-constructor for lists, however you want to look at it). The
second one is a macro for constructing actual list values. List can only take one argument
(the type of the element values). list can take any number of arguments (the elements
that make up the list).

Again, we haven't mentioned functions. But if you're impatient to learn about them, just turn
the page (or scroll down) to find out how to make them!

See you in the next chapter!


Chapter 4: Functions and Definitions


Where you will learn how to build your own Lux code.

OK, so you've seen several explanations and details so far, but you haven't really seen how
to make use of all of this information.

No worries. You're about to find out.

First, let's talk about how to make your own functions.

(lambda [x] (i.* x x))

Here's the first example. This humble function multiplies an Int by itself. You may have
heard of it before; it's called the square function.

What is its type?

Well, I'm glad you asked.

(: (-> Int Int) (lambda [x] (i.* x x)))

That -> thingie you see there is a macro for generating function types. It works like this:

(-> arg-1T arg-2T ... arg-nT returnT)

The types of the arguments and the return type can be any type you want (even other
function types, but more on that later).

How do we use our function? Just put it at the beginning of a form:

((lambda [x] (i.* x x)) 5)
=> 25

Cool, but... inconvenient.

It would be awful to have to use functions that way.

How do we use the square function without having to inline its definition (kinda like the
log! function we used previously)?


Well, we just need to define it!

(def: square
(lambda [x]
(i.* x x)))

Or, alternatively:

(def: square
(-> Int Int)
(lambda [x]
(i.* x x)))

Notice how the def: macro can take the type of its value before the value itself, so we don't
need to wrap it in the type-annotation : macro.

Now, we can use the square function more conveniently.

(square 7)
=> 49

Nice!

Also, I forgot to mention another form of the def: macro which is even more convenient:

(def: (square x)
(-> Int Int)
(i.* x x))

The def: macro is very versatile, and it allows us to define constants and functions.

If you omit the type, the compiler will try to infer it for you, and you'll get an error if there are
any ambiguities.

You'll also get an error if you add the types but there's something funny with your code
and things don't match up.

Error messages keep improving on each release, but in general you'll be getting the file,
line and column on which an error occurs, and if it's a type-checking error, you'll usually get
the type that was expected and the actual type of the offending expression... in multiple
levels, as the type-checker analyses things in several steps. That way, you can figure out
what's going on by seeing the more localized error alongside the more general, larger-scope
error.


Functions, of course, can take more than one argument, and you can even refer to a
function within its own body (also known as recursion).

Check this one out:

(def: (factorial' acc n)
  (-> Nat Nat Nat)
  (if (n.= +0 n)
    acc
    (factorial' (n.* n acc) (n.dec n))))

(def: (factorial n)
(-> Nat Nat)
(factorial' +1 n))

And if we just had the function expression itself, it would look like this:

(lambda factorial' [acc n]
  (if (n.= +0 n)
    acc
    (factorial' (n.* n acc) (n.dec n))))

Yep. Lambda expressions can have optional names.

Here, we're defining the factorial function by counting down on the input and multiplying
some accumulated value on each step. We're using an intermediary function factorial' to
have access to an accumulator for keeping the in-transit output value, and we're using an
if expression (one of the many macros in the lux module) coupled with a recursive call
to iterate until the input is 0 and we can just return the accumulated value.

As it is (hopefully) easy to see, the if expression takes a test as its first argument, a
"then" expression as its second argument, and an "else" expression as its third argument.

Both the n.= and the n.* functions operate on nats, and n.dec is a function for
decreasing nats; that is, to subtract +1 from the nat.

You might be wondering what's up with those i. and n. prefixes.

The reason they exist is that Lux math functions are not polymorphic on the numeric
types, and so there are similar functions for each type, with different prefixes to
distinguish them (in particular, n. for nats, i. for ints, r. for reals, and f. for
fracs).

I know it looks annoying, but later in the book you'll discover a way to do math on any
Lux number without having to worry about types and prefixes.


Also, it might be good to explain that Lux functions can be partially applied. This means that
if a function takes N arguments, and you give it M arguments, where M < N, then instead of
getting a compilation error, you'll just get a new function that takes the remaining arguments
and then runs as expected.

That means, our factorial function could have been implemented like this:

(def: factorial
(-> Nat Nat)
(factorial' +1))

Or, to make it shorter:

(def: factorial (factorial' +1))

Nice, huh?
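For comparison, partial application can be mimicked in Python (an analogy, not Lux: Python doesn't partially apply automatically, so functools.partial does the job explicitly).

```python
# Python analogy (not Lux): Lux partially applies automatically;
# Python needs functools.partial to get the same effect.
from functools import partial

def factorial_helper(acc, n):
    # Count down on 'n', multiplying into the accumulator at each step.
    return acc if n == 0 else factorial_helper(n * acc, n - 1)

# Supplying only the first argument yields a new one-argument function,
# mirroring (def: factorial (factorial' +1)).
factorial = partial(factorial_helper, 1)
```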

You might be wondering why the function-definition macro is called lambda , instead of
function or fun or fn.

The reason is mostly historical: older lisps named their function-definition macros
lambda, plus the theoretical foundation of both lisps and functional programming is the
Lambda Calculus, so it just felt more natural to call it lambda .

We've seen how to make our own definitions, which are the fundamental components in Lux
programs.

We've also seen how to make functions, which is how you make your programs do things.

Next, we'll make things more interesting, with branching, loops and pattern-matching!

See you in the next chapter!


Chapter 5: Control Flow


Where you will learn how to give intelligence to your code.

Branching
So far, we've only seen very simple examples of program/function logic that are very
straightforward.

But real world programs are far from straightforward, and there's a lot of testing and
decision-making involved in how they operate.

The first important concept to master in any programming language is how to make
decisions and branch the path our program takes, and Lux offers 1 primary way of doing
this: pattern-matching.

But before we head into that, let's first see 2 weaker mechanisms for branching that might
be familiar to programmers coming from other programming languages.

If
We've already met the humble if expression in the previous chapter. As explained there,
the expression takes the following form:

(if test
  then
  else)

Where test , then and else are arbitrary Lux expressions.

In terms of types, it works like this:

(: X (if (: Bool test)
       (: X then)
       (: X else)))

Here is an example:


(if true
"Oh, yeah!"
"Aw, hell naw!")

So, both branches must produce the same type for the type-checker to let it pass.

Cond
cond is like a more general version of the if macro.

For those of you coming from conventional programming languages, cond is like a chain of
if-else statements/expressions, with a default else branch at the end, in case all fails.

It looks like this:

(cond test-1 then-1
      test-2 then-2
      ...
      test-n then-n
      else)

And, in terms of types, it looks like this:

(: X (cond (: Bool test-1) (: X then-1)
           (: Bool test-2) (: X then-2)
           ...
           (: Bool test-n) (: X then-n)
           (: X else)))

Here is an example:

(cond (n.even? num) "even"
      (n.odd? num) "odd"
      ## else-branch
      "???")

So, it's easy to intuit how cond would desugar into several nested if expressions.
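To make that desugaring concrete, here is a sketch (hand-expanded, not compiler output) of how the example above would nest:

```lux
## The cond example from above...
(cond (n.even? num) "even"
      (n.odd? num) "odd"
      ## else-branch
      "???")

## ...would desugar into nested ifs like these:
(if (n.even? num)
  "even"
  (if (n.odd? num)
    "odd"
    "???"))
```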

Also, I'd like to point out that both if and cond are macros, instead of native Lux
syntax.

The reason for that is simply that they can both be implemented in terms of pattern-
matching.


Pattern-Matching
Some of you may not be familiar with the concept of pattern-matching if you come from non-
functional programming languages, or from FP languages that lack pattern-matching (e.g.
Clojure).

Pattern-matching is similar to the branching expressions that we saw previously, except that
instead of being based on making boolean tests, it's based on comparing patterns against
data, and executing a branch if the pattern matches that data.

The beautiful thing is that the pattern can be very complicated (even involving the binding of
variables), and so the testing can be very powerful.

We can see its power by looking at some examples.

For instance, the factorial' function you saw in the previous chapter could have been
written like this:

(def: (factorial' acc n)
  (-> Nat Nat Nat)
  (case n
    +0 acc
    _ (factorial' (n.* n acc) (n.dec n))))

As you may imagine, case is the pattern-matching macro. It takes the data you want to
pattern-match against (in this case, the n variable), and then tests it against several
patterns until it finds a match, in which case it executes its branch.

Here, we test if n equals +0 . If it does, we just return the acc value. Otherwise, we have
a default branch with a pattern that doesn't test anything called _ . That will handle the case
where the input is greater than 0.

The "default" branch works because we're binding the value of n onto another variable
called _ , and binding always succeeds, which is why we can use that branch as a default.

However, since it is variable binding, that means we could have used _ instead of n
during our calculations. Like this:

(def: (factorial' acc n)
  (-> Nat Nat Nat)
  (case n
    +0 acc
    _ (factorial' (n.* _ acc) (n.dec _))))


However, as a convention, _ is used as the name for values we don't care about and don't
plan to use in our code.

Pattern-matching doesn't limit itself only to nats, and can also be used with bools, ints, reals,
fracs, chars, text, tuples, records, variants, and much more!

Regarding the "much more" claim, you should check out Appendix C, where I discuss a
powerful extension mechanism called pattern-matching macros.

Here are a couple more examples so you can see the possibilities.

(let [test true]
  (case test
    true "Oh, yeah!"
    false "Aw, hell naw!"))

(case (list 1 2 3)
  (#;Cons x (#;Cons y (#;Cons z #;Nil)))
  (#;Some (i.+ x (i.* y z)))

  _
  #;None)

In the first example, you'll notice that we have rewritten the prior if example in terms of
pattern-matching. Also, you'll notice the introduction of a new macro, called let .

let is the way we create local variables in Lux. Its syntax looks like this:

(let [var-1 expr-1
      var-2 expr-2
      ...
      var-n expr-n]
  body)

Where the types of the variables will correspond to those of their matching expressions, and
the type of the let expression will be the same as that of its body.

Also, remember when I told you that you can use pattern-matching to bind variables?

Well, guess what! let is implemented in terms of case , and it just gives you a more
convenient way to bind variables than to go through all the trouble of doing pattern-
matching.
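As a sketch of that relationship (hand-expanded, not the literal macro output), a let with a single variable behaves like a one-branch case whose pattern is just that variable:

```lux
## This let expression...
(let [x +5]
  (n.* x x))

## ...behaves like this pattern-match, where the pattern x
## always succeeds and binds the value +5:
(case +5
  x
  (n.* x x))
```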


Now, in the second example, we're deconstructing a list in order to extract its individual
elements.

The List type is defined like this:

(type: (List a)
  #Nil
  (#Cons a (List a)))

#;Nil represents the empty list, while #;Cons constructs a list by prepending an element
to the beginning of another list.

With pattern-matching, we're opening our list up to 3 levels in order to extract its 3 elements
and do a simple math calculation.

If the match succeeds, we produce a value of type (Maybe Int) by wrapping our result with
the #;Some tag, from the Maybe type. If the match fails, we just produce nothing, by using
the #;None tag, also from the Maybe type.

While List allows you to group an arbitrary number of values into a single structure,
Maybe is for values you may or may not have.
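Speaking of Maybe, its definition follows the same shape as List (this is a sketch from memory; check the standard library for the authoritative version):

```lux
(type: (Maybe a)
  ## No value present.
  #None
  ## Exactly one value, of type a.
  (#Some a))
```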

Also, you may have noticed how different (and uncomfortable!) it is to pattern-match
against a list, since you have to use its real syntax, with its tags; whereas to build the
list we can just piggy-back on the list macro.

Don't worry too much about it, because there's a better way to do it that also allows us
to use the list macro. If you're curious about it, head over to Appendix C to learn
more about pattern-matching macros.

Looping
Alright. So, we know several ways to branch, and also how to bind variables. But we know
life isn't just about making decisions. Sometimes, you just have to do your work over and
over again until you're done.

That's what looping is for!

Recursion
In functional programming, recursion is the main mechanism for looping in your code.

Recursion is nothing more than the capacity for a function to call itself (often with different
parameters than the initial call). It's not hard to see how this mechanism can be used to loop
in any way you want, and we've already seen examples of recursion in action.


(def: (factorial' acc n)
  (-> Nat Nat Nat)
  (if (n.= +0 n)
    acc
    (factorial' (n.* n acc) (n.dec n))))

The factorial' function calls itself with an ever-increasing accumulator (that will hold the
eventual result), and an ever-decreasing input value.
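To see the accumulator at work, here is a quick hand-evaluated trace of a small call:

```lux
(factorial' +1 +3)
## => (factorial' +3 +2)
## => (factorial' +6 +1)
## => (factorial' +6 +0)
## => +6
```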

Recursion in many languages is seen as a potentially dangerous operation, since
programming languages have what are called "stacks": structures holding the parameters
to functions, and the return addresses to which results are sent once those functions are
done.

Every function call you issue puts a new frame onto the stack, and if enough frames are
pushed, eventually the stack "overflows" its capacity, causing the program to fail.

However, an old trick that has been employed for a long time in programming languages is
called tail-call optimization, and it allows you to optimize recursive calls that are in a "tail
position"; that is, a position where the result of the call can just be returned immediately,
instead of needing any further processing.

Our example factorial' function has its recursive call in the tail position (and thus can
benefit from that optimization).

This alternative doesn't:

(def: (factorial' acc n)
  (-> Nat Nat Nat)
  (if (n.= +0 n)
    acc
    (n.+ +0 (factorial' (n.* n acc) (n.dec n)))))

Can you spot the difference?

In the alternative, the result of the recursive call would have to be sent to the n.+ function
as its second parameter, so it wouldn't be a tail-call.

The beautiful thing about tail-call optimization (or TCO) is that it removes the recursion
altogether, eliminating the possibility for stack overflows.

Pretty neat, huh?


For the sake of correctness, I must point out that Lux doesn't implement full tail-call
optimization, since that would require low-level capabilities that (currently) the JVM
doesn't offer.

For that reason, going forward, I will refer to what Lux does as tail-recursion
optimization (or TRO), instead.

Loop
Some of you may be more familiar with the for loops or the while loops of other programming
languages and need some time to wrap your heads around recursion. That's OK.

Lux also offers a macro that gives you a slightly similar experience to those kinds of loops,
which can get your mind off recursion just a little bit.

To see it in action, let's rewrite (once more!) our factorial function:

(def: (factorial n)
  (-> Nat Nat)
  (loop [acc +1
         n n]
    (if (n.= +0 n)
      acc
      (recur (n.* n acc) (n.dec n)))))

We have eliminated our dependency on the factorial' function.

Just like with let , we're creating some local variables, but these are going to change on
each iteration. Then, in the body, we perform the usual if test, and if the number is not 0,
we use the recur operator (which only works inside of loop ) to update the values of the
variables for the next iteration.

Piping
Piping isn't really control-flow per se, but I include it here because it is a powerful technique
for organizing code that involves taking multiple steps to perform a calculation.

It's based on using a single macro, called |> , which allows you to write deeply nested code
in a flat way, making it much easier to read and understand, and also giving it a bit of an
imperative flavor.

Here is a simple example to see how it works:


(|> elems (map to-text) (interpose " ") (fold Text/append ""))

=>

(fold Text/append ""
      (interpose " "
                 (map to-text elems)))

If you read carefully, you'll see that each element (from left to right) gets lodged at the end of
the next expression and the pattern continues until everything has been nested.

A good convention to follow in functional programming (and especially in Lux) is that the
most important argument to a function (its subject) ought to be the last argument the
function takes. One of the really cool benefits of this convention is that your code becomes
very amenable to piping, as the nesting is only done in one way.

It's not hard to see how much easier to read and understand the piped version is, compared
to the resulting code.

Also, you might be interested to know that piping can also be extended in really cool
ways (similarly to how pattern-matching can be extended). The way is to use piping
macros (you may be noticing a theme here).

If you want to know more about those, feel free to check out Appendix D, where I
review them in detail.

Oh, and before I forget, there is also a macro for doing reverse piping (which can be very
useful in some situations).

Our previous example would look like this:

(<| (fold Text/append "") (interpose " ") (map to-text) elems)

Higher-Order Functions
I can't finish this chapter without talking about one of the coolest features in the world of
functional programming.

So far, we have seen several control-flow mechanisms that could potentially exist in any
language/paradigm, but now we'll talk about something endogenous to the FP landscape.

You already know that in the world of functional programming, functions are first-class
values. That just means functions can be treated like other values (such as ints, bools and
texts). You can create new functions at run-time, you can pass functions around as
arguments to other functions and you can combine functions in arbitrary ways.


Well, we haven't really seen that in action yet.

It's time to put that theory into practice... with an example:

(def: (iterate-list f xs)
  (All [a b] (-> (-> a b) (List a) (List b)))
  (case xs
    #;Nil
    #;Nil

    (#;Cons x xs')
    (#;Cons (f x) (iterate-list f xs'))))

This is a function that allows you to transform lists in arbitrary ways.

However, you may notice that we're seeing many new things. For instance, what is that All
thingie over there, and what does it do? Is it even a type?

Well, it's not a type. It's actually a macro for creating types (in the same way that -> is a
macro that creates types). The difference is that All allows you to create universally-
quantified types. That's just a fancy way of saying that your types are not fixed to working in
a particular way, but are flexible enough to allow some variability.

Here, it's being used to make a function that can take lists with elements of any type
(denoted by the type variable a ), and can produce lists with elements of any other type
(denoted by the type variable b ), so long as you give it a function f that transforms
elements of a into elements of b .

That... is... mind-blowing!

In other programming languages, whenever you want to process the elements of a
sequence (say, an array) you have to write something like a for loop with some index
variable, some condition... and then the code for actually working with the data.

But here, we're pretty much defining a function that takes care of all the ceremony, so you
just need to give it the operation you wish to perform on each element, and the data.

You could use it like this:

(iterate-list (i.* 5) (list 0 1 2 3 4 5 6 7 8 9))

## => (list 0 5 10 15 20 25 30 35 40 45)

Pretty cool!

But this is just scratching the surface of what's possible.


As it turns out, higher-order functions (that is, functions which take or produce other
functions) are at the foundation of many advanced techniques in functional programming.
Mastering this little trick will prove invaluable to you as you delve deeper into the mysteries
of functional programming.

Alright.

We've seen quite a lot so far.

But, don't get complacent!

You've only seen the basics of Lux, and the next chapters are going to expose some of the
more advanced features of the language.

Brace yourself, great power is coming!

See you in the next chapter!


Chapter 6: Types in Detail


Where you will learn the truth behind types.

We've talked about Lux types already, but only in a very high-level way.

In this chapter, you'll see how types are constructed, and hopefully that will give you some
insight that helps you better understand the subjects of later chapters.

(type: #rec Type
  (#HostT Text (List Type))
  #VoidT
  #UnitT
  (#SumT Type Type)
  (#ProdT Type Type)
  (#LambdaT Type Type)
  (#BoundT Nat)
  (#VarT Nat)
  (#ExT Nat)
  (#UnivQ (List Type) Type)
  (#ExQ (List Type) Type)
  (#AppT Type Type)
  (#NamedT Ident Type))

This is the type of types.

Crazy, right?

But as I've said before, Lux types are values like any other.

Type is a variant type, which just means that there are multiple options for type values.

Also, you may have noticed that #rec tag in the definition. You need to add it
whenever you're defining a recursive type that takes no parameters.

So, the definition of List doesn't need it, but the definition of Type does.

Let's go over each of them.

(#HostT Text (List Type))


This is what connects Lux's type-system with the host platform's. These types represent
classes (in the JVM), with their respective parameters, if they have them (as would be the
case for ArrayList<User> in the JVM).

#VoidT
#UnitT

You've already met unit, but what is that "void" thingie?

Well, remember when I told you that unit was the empty tuple? You can think of void as the
empty variant. But more on that in a bit.

(#SumT Type Type)
(#ProdT Type Type)

You may have noticed that none of those options are called #TupleT or #VariantT . The
reason is that variants and tuples are just names for mathematical constructs called "sums"
and "products". Funny names, right?

Well, mathematicians see variants as a way of "adding" types, and tuples as a way of
"multiplying" types. Of course, it's a bit difficult to picture that if you're thinking of numbers.

But a way to see variants is as an "OR" operation for types: you get this option OR that
option. Conversely, tuples are like an "AND" operation for types: you get this type AND that
type.

But, you may be wondering: "why do #SumT and #ProdT only take 2 types, instead of a list
like #HostT does?"

Well, as it turns out, you don't need a list of types to implement variants and tuples, because
you can actually chain #SumT and #ProdT with other instances of themselves to get the
same effect.

What do I mean?

Well, let me show you. To the left, you'll see the type as it's written in normal Lux code, and
to the right you'll see the type value it generates.


[]                   => #UnitT
[Bool]               => Bool
[Bool Int]           => (#ProdT Bool Int)
[Bool Int Real]      => (#ProdT Bool (#ProdT Int Real))
[Bool Int Real Char] => (#ProdT Bool (#ProdT Int (#ProdT Real Char)))

(|)                    => #VoidT
(| Bool)               => Bool
(| Bool Int)           => (#SumT Bool Int)
(| Bool Int Real)      => (#SumT Bool (#SumT Int Real))
(| Bool Int Real Char) => (#SumT Bool (#SumT Int (#SumT Real Char)))

You can see where this is going.

If I have a way to pair up 2 types, and I can nest that, then I can chain things as much as I
want to get the desired length.

What is a tuple/variant of 1 type? It's just the type itself; no pairing required.

And what happens when the tuple/variant has 0 types? That's when #UnitT and #VoidT
come into play, being the empty tuple and the empty variant respectively.

You might observe that you've been taught how to create unit values (with [] ), but not
how to create void values.

The reason is that it's technically impossible to make instances of void.

Think of it this way: there is only 1 possible instance of unit, which is [] . That means
every [] in Lux is actually the same value (kind of like every 5 is the same 5
everywhere). For void, there are 0 possible instances (hence its name).

It's odd that there exists a type without instances; but, just like unit, it comes in handy in
certain circumstances.

This embedding means that [true 123 456.789 #"X"] is the same as [true [123 456.789
#"X"]] , and the same as [true [123 [456.789 #"X"]]] .

It also means 5 is the same as [5] , and [[5]] , and [[[[[5]]]]] .

As far as the compiler is concerned, there are no differences.

That might sound crazy, but there are some really cool benefits to all of this. If you're curious
about that, you can check out Appendix E for more information on how Lux handles this sort
of stuff.


(#LambdaT Type Type)

Now that we have discussed variant and tuple types, it shouldn't come as a surprise that a
similar trick can be done with function types.

You see, if you can implement functions of 1 argument, you can implement functions of N
arguments, where N >= 1.

All I need to do is to embed the rest of the function as the return value to the outer function.
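For instance, the type of our 2-argument factorial' function can be read as nested 1-argument function types (again, a hand-expanded sketch):

```lux
## A 2-argument function type...
(-> Nat Nat Nat)

## ...is the same as a function that returns another function...
(-> Nat (-> Nat Nat))

## ...which, as a type value, is just nested #LambdaT instances:
(#LambdaT Nat (#LambdaT Nat Nat))
```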

It might sound like this whole business of embedding tuples, variants and functions
inside one another must be super inefficient; but trust me: Lux has taken care of that.

The Lux compiler features many optimizations that compile things down in a way that
gives you maximum efficiency. So, to a large extent, these embedded encodings are
there for the semantics of the language, but not as something that you'll pay for at run-
time.

One of the cool benefits of this approach to functions is Lux's capacity to have partially
applied functions.

Yep, that's a direct consequence of this theoretical model.

(#BoundT Nat)

This type is there mostly for keeping track of type-parameters in universal and existential
quantification.

We'll talk about those later. But, suffice it to say that #BoundT helps them do their magic.

(#VarT Nat)

These are type variables.

They are used during type-inference by the compiler, but they're also part of what makes
universal quantification (with the All macro) able to adjust itself to the types you use it
with.


Type-variables start unbound (which means they're not associated with any type), but once
they have been successfully matched with another type, they become bound to it, and every
time you use them afterwards it's as if you're working with the original type.

Type-variables, however, can't be "re-bound" once they have been set, to avoid
inconsistencies during type-checking.

(#ExT Nat)

An existential type is an interesting concept (which is related, but not the same as existential
quantification).

You can see it as a type that exists, but is unknown to you. It's like receiving a type in a box
you can't open.

What can you do with it, then? You can compare it to other types, and the comparison will
only succeed if it is matched against itself.

It may sound like a useless thing, but it can power some advanced techniques.

(#UnivQ (List Type) Type)

This is what the All macro generates. This is universal quantification.

That (List Type) you see there is meant to be the "context" of the universal quantification.
It's kind of like the environment of a function closure, only with types.

The other Type there is the body of the universal quantification.

To understand better what's going on, let's transform the type of our iterate-list function
from Chapter 5 into its type value.

(All [a b] (-> (-> a b) (List a) (List b)))

=>

(#UnivQ #;Nil
        (#UnivQ #;Nil
                (-> (-> (#BoundT +3) (#BoundT +1))
                    (List (#BoundT +3))
                    (List (#BoundT +1)))))

I didn't transform the type entirely to avoid unnecessary verbosity.


As you can see, I do the same embedding trick to have universal quantification with multiple
parameters.

Also, a and b are just nice syntactic labels that get transformed into #BoundT instances.

The reason the type-parameters have those IDs is due to a technique called De Bruijn
Indices. You can read more about it here: https://en.wikipedia.org/wiki/De_Bruijn_index.

(#ExQ (List Type) Type)

Existential quantification works pretty much the same way as universal quantification.

Its associated macro is Ex .

Whereas universal quantification works with type-variables, existential quantification works
with existential types.

(#AppT Type Type)

This is the opposite of quantification.

#AppT is what you use to parameterize your quantified types; to customize them as you
need.

With #AppT , (List Int) transforms into (#AppT List Int) .

For multi-parameter types, like Dict (from lux/data/dict ), (Dict Text User) would
become (#AppT (#AppT Dict Text) User) .

As you can see, the nesting is slightly different than how it is for tuples, variants and
functions.

(#NamedT Ident Type)

#NamedT is less of a necessity and more of a convenience.

The type-system would work just fine without it, but users of the language probably wouldn't
appreciate it while reading documentation or error messages.


#NamedT is what gives the name "List" to the List type, so you can actually read about it
everywhere without getting bogged down in implementation details.

You see, Lux's type system is structural in nature, rather than nominal (the dominating style
in programming languages).

That means all that matters is how a type is built; not what you call it.

That implies 2 types with different names, but the exact same value, would actually type-
check in your code.

That may sound odd (if you come from Java or other languages with nominal types), but it's
actually very convenient and enables you to do some pretty nifty tricks.
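As a hypothetical sketch (both type names here are made up for illustration):

```lux
(type: Age Nat)
(type: Weight Nat)

## Both names wrap the exact same underlying type (Nat), so a
## function declared to take an Age would also accept a Weight,
## since only the structure matters to the type-checker.
```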

For more information on that, head over to Appendix E.

#NamedT gives Lux's type-system a bit of a nominal feel, for the convenience of
programmers.

Regarding Error Messages


When you get error messages from the type-checker during your coding sessions, types will
show up in intuitive ways most of the time, with a few exceptions you might want to know.

Existential types show up in error messages like e:246 (where 246 is the ID of the type).
Whereas type-variables show up like v:278 .

Those types tend to show up when there are errors in the definition of some polymorphic
function.

You may be tired of reading about types, considering that they are (to a large degree) an
implementation detail of the language.

However, one of the key features of Lux is that types can be accessed and manipulated by
programmers (often in macros) to implement various powerful features.

In the next chapter, you'll get acquainted with one such feature.

See you in the next chapter!


Chapter 7: Signatures and Structures


Where types and values collide.

You endured all that tedious talk about types; but it wasn't for nothing.

Now, you'll see types in action, as they take a new shape... and a new purpose.

Many programming languages have some kind of module system or polymorphism system.

You know what I'm talking about.

Object-oriented languages have classes with methods that can be overridden by their
subclasses. The moment you call one of those methods on an object, the run-time system
selects the correct implementation for you, based on the class hierarchy.

Or maybe you come from Haskell, where they have type-classes, which basically perform the
same process, but during compilation. Types are checked, instances get picked, and the
proper functions and constants get plugged in.

Or maybe you come from the world of ML (especially Standard ML), where they have a module
system based on... (drumroll, please) signatures and structures. In those systems, the
function implementations you want don't get selected for you automatically (you have to pick
them yourself), but you tend to have more control when it comes to choosing what to use.

OK. Now you know the origin of Lux's module system. I... um... borrowed it from the SML
guys.

But I added my own little twist.

You see, module/polymorphism systems in programming languages tend to live in a
mysterious world that is removed from the rest of the language. It's a similar situation as with
types.

Remember Lux's type system? Most languages keep their types separate from their values.
Types are just some cute annotations you put in your code to keep the compiler happy. Lux's
types, on the other hand, are alive; for they are values. Nothing stops you from using them,
transforming them and analyzing them in ways that go beyond the language designer's
imagination (that would be me).


Well, there's a similar story to tell about module/polymorphism systems. The run-
time/compiler chooses everything for you; and even when you choose for yourself, you're
still somewhat limited in what you can do. Modules are not values, and there is a
fundamental division between them and the rest of the language.

But not in Lux.

Lux's module system is actually based on regular types and values. And because types are
values, that means it's just values all the way down.

But, how does it work?

Read on!

Signatures
Signatures are like interfaces in other programming languages. They provide a description of
the functionality expected of proper implementations. They have a list of expected member
values/functions, with their associated types.

Here's an example:

(sig: #export (Ord a)
  (: (Eq a)
     eq)

  (: (-> a a Bool)
     <)

  (: (-> a a Bool)
     <=)

  (: (-> a a Bool)
     >)

  (: (-> a a Bool)
     >=))

That signature definition comes from the lux/control/ord module, and it deals with ordered
types; that is, types for which you can compare their values in ways that imply some sort of
sequential order.

It's polymorphic/parameterized because this signature must be able to adapt to any type that
fits its requirements.

Also, you may notice that it has a member called eq , of type (Eq a) . The reason is that
signatures can expand upon (or be based on) other signatures (such as Eq ).


How do signatures differ from types?

They don't. They're actually implemented as types. Specifically, as record/tuple types.

You see, if I can create a record type with one field for every expected definition in a
signature, then that's all I need.
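As a rough sketch of the idea (not the literal macro expansion), the Ord signature from before boils down to a record/tuple type like this:

```lux
(type: (Ord a)
  [(Eq a)          ## eq
   (-> a a Bool)   ## <
   (-> a a Bool)   ## <=
   (-> a a Bool)   ## >
   (-> a a Bool)]) ## >=
```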

Structures
They are the other side of the coin.

If signatures are record types, then that means structures must be actual records.

Let's take a look at how you make one:

(struct: #export Ord<Real> (ord;Ord Real)
  (def: eq Eq<Real>)
  (def: < r.<)
  (def: <= r.<=)
  (def: > r.>)
  (def: >= r.>=))

This structure comes from lux/data/number .

As you may notice, structures have names; unlike in object-oriented languages where the
"structure" would just be the implemented methods of a class, or Haskell where instances
are anonymous.

Also, the convention is Name-of-Signature<Name-of-Type> .

(struct: #export Monoid<List> (All [a] (Monoid (List a)))
  (def: unit #;Nil)
  (def: (append xs ys)
    (case xs
      #;Nil ys
      (#;Cons x xs') (#;Cons x (append xs' ys)))))

Here is another example, from the lux/data/struct/list module.

The reason why structures have names (besides the fact that they are definitions like any
other), is that you can actually construct multiple valid structures for the same combination of
signatures and parameter types. That would require you to distinguish each structure in
some way in order to use it. This is one cool advantage over Haskell's type-classes and
instances, where you can only have one instance for any combination of type-class and
parameter.


Haskellers often resort to "hacks" such as using newtype to try to get around this
limitation.

The upside of having the run-time/compiler pick the implementation for you is that you can
avoid some boilerplate when writing polymorphic code.

The upside of picking the implementation yourself is that you get more control and
predictability over what's happening (which is especially cool when you consider that
structures are first-class values).

What's the big importance of structures being first-class values? Simple: it means you can
create your own structures at run-time based on arbitrary data and logic, and you can
combine and transform structures however you want.

Standard ML offers something like that through a mechanism they call "functors" (unrelated to
the concept of "functor" we'll see in a later chapter), but those are more like magical functions
that the compiler uses to combine structures in limited ways.

In Lux, we dispense with the formalities and just use regular old functions and values to get
the job done.

How to use structures


We've put functions and values inside our structures.

It's time to get them out and use them.

There are 2 main ways to use the stuff inside your structures: open and :: . Let's check
them out.

## Opens a structure and generates a definition for each of
## its members (including nested members).
## For example:
(open Number<Int> "i:")
## Will generate:
(def: i:+ (:: Number<Int> +))
(def: i:- (:: Number<Int> -))
(def: i:* (:: Number<Int> *))
## ...

The open macro serves as a statement that creates private/un-exported definitions in your
module for every member of a particular structure. You may also give it an optional prefix for
the definitions, in case you want to avoid any name clash.

You might want to check out Appendix C to discover a pattern-matching macro version
of open called ^open .


## Allows accessing the value of a structure's member.
(:: Codec<Text,Int> encode)

## Also allows using that value as a function.
(:: Codec<Text,Int> encode 123)

:: is for when you want to use individual parts of a structure immediately in your code,
instead of opening them first.

Psst! Did you notice that :: is piping-enabled?

Also, you don't really need to worry about boilerplate related to using structures. There is a
module called lux/type/auto which gives you a macro called ::: for using structures
without actually specifying which one you need.

The macro infers everything for you based on the types of the arguments, the expected
return-type of the expression, and the structures available in the environment.

For more information, head over to Appendix F.

Structures as Values
I can't emphasize enough that structures are values. To exemplify that, here's a function
from the lux/control/monad module that takes in a structure (among other things) and uses
it within its code:

(def: #export (mapM monad f xs)
  (All [M a b]
    (-> (Monad M) (-> a (M b)) (List a) (M (List b))))
  (case xs
    #;Nil
    (:: monad wrap #;Nil)

    (#;Cons x xs')
    (do monad
      [_x (f x)
       _xs (mapM monad f xs')]
      (wrap (#;Cons _x _xs)))))

Monad is a signature, and the mapM function takes arbitrary structures that implement it,
working with any of them without an issue.
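The same idea can be sketched in Python (the names, the dict encoding, and the `("Some", x)` / `None` representation of Maybe are all illustrative assumptions, not Lux's or any library's API): the monad is just an ordinary argument, so one `map_m` serves every implementation.

```python
# A generic mapM analogue: the "monad" is an ordinary value passed in,
# carrying wrap/bind functions.
def map_m(monad, f, xs):
    if not xs:
        return monad["wrap"]([])
    head, tail = xs[0], xs[1:]
    return monad["bind"](
        f(head),
        lambda y: monad["bind"](
            map_m(monad, f, tail),
            lambda ys: monad["wrap"]([y] + ys)))

# A Maybe-like monad: ("Some", x) for presence, None for absence.
maybe_monad = {
    "wrap": lambda x: ("Some", x),
    "bind": lambda m, f: f(m[1]) if m is not None else None,
}

# A function that can fail: halving only works for even numbers.
safe_half = lambda n: ("Some", n // 2) if n % 2 == 0 else None

print(map_m(maybe_monad, safe_half, [2, 4, 6]))  # ("Some", [1, 2, 3])
print(map_m(maybe_monad, safe_half, [2, 3]))     # None
```

Because `maybe_monad` is a plain value, you could just as easily pass in a list monad or an error monad, and `map_m` would not change at all.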


Signatures and structures are the main mechanism for writing polymorphic code in Lux, and
they allow flexible and precise selection of implementations.

It may be the case that in the future Lux adds new mechanisms for achieving the same
goals (I believe in having variety), but the spirit of implementing things in terms of accessible
values anybody can manipulate will likely underlie every such mechanism.

Now that we've discussed signatures and structures, it's time to talk about a very special
family of signatures.

See you in the next chapter!


Chapter 8: Functors, Applicatives and Monads
Where I will try to explain something really confusing, and you'll pretend you understand to
avoid hurting my feelings.

OK. It's time to get serious.

The following topics are known to be troublesome to teach, so I can only promise you that I
will try really, really hard not to say something confusing (or stupid).

Functors
Functors, applicatives and monads are all mathematical concepts from Category Theory.
You may have heard of it before: it's a branch of abstract mathematics that many draw
inspiration from when developing new tools and techniques for functional programming.

But I will not go down the route of explaining things to you from a mathematical
perspective... as I'm not confident that's gonna work.

Imagine that you have some data (maybe an Int , or a Text ). You can work with that data:
you can pass it around, apply functions to it, print it to the console, or pattern-match against
it.

Well, imagine a functor as some kind of wrapper on your data. You can still access what's
inside, but the wrapper itself offers special superpowers, and each wrapper is different.

For instance, there's one wrapper that allows us to have (or not to have) our data.

Schrödinger's wrapper (although most people prefer to call it Maybe ).

That wrapper happens to be a type. All functor wrappers are types.

But not just any type. You see, functors have requirements.


(sig: #export (Functor f)
  (: (All [a b]
       (-> (-> a b) (f a) (f b)))
     map))

This is the Functor signature, from lux/control/functor .

As you can see, it only has a single member: the map function.

The parameter type f is very special, because instead of being a simple type (like Int or
Bool ), it's actually a parameterized type (with a single parameter). That explains why it's
being used the way it is in the type of map .

Not every parameterized type can be a functor, but if the type is something that you can
open to work with its inner elements, then it becomes a good candidate.

And you would be surprised how many things fit that requirement.

Remember that Maybe type we talked about? Let's see how it plays with Functor .

(type: (Maybe a)
#;None
(#;Some a))

We've seen Maybe before, but now we can check out how it's implemented.

By the way, it lives in the lux module, so you don't need to import anything.

Here is its Functor implementation.

(struct: #export Functor<Maybe> (Functor Maybe)
  (def: (map f ma)
    (case ma
      #;None #;None
      (#;Some a) (#;Some (f a)))))

This one lives in the lux/data/maybe module, though.

We'll know how everything fits if we fill in the blanks for map 's type:

(All [a b]
  (-> (-> a b) (Maybe a) (Maybe b)))

So, the job of map here is to take a Maybe containing some a value, and somehow
transform it into a b , without escaping the Maybe .


By looking at the Functor implementation, we can see how this works out.

We can actually pattern-match against the entire input and handle the different cases, using
the given function to transform our a into a b .

Not that hard.
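The pattern is easy to see in a Python sketch (an analogue, not Lux code: here #;None is modeled as None and (#;Some a) as the tuple ("Some", a)): pattern-match on the input, transform the contents, and stay inside the wrapper.

```python
# A sketch of Maybe's map in Python.
def maybe_map(f, ma):
    if ma is None:          # the #;None case: nothing to transform
        return None
    _tag, a = ma            # the (#;Some a) case: unwrap the value
    return ("Some", f(a))   # transform it, and re-wrap the result

print(maybe_map(lambda n: n + 1, ("Some", 41)))  # ("Some", 42)
print(maybe_map(lambda n: n + 1, None))          # None
```

Notice that `maybe_map` never "escapes" the wrapper: a present value stays present and an absent one stays absent; only the contents change.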

Oh, and remember our iterate-list function from chapter 5?

Turns out, that's just the Functor<List> implementation from lux/data/struct/list :

(struct: #export _ (Functor List)
  (def: (map f ma)
    (case ma
      #;Nil #;Nil
      (#;Cons a ma') (#;Cons (f a) (map f ma')))))

Not bad.

Did you notice that underscore?

If the name of a structure can be trivially derived from its type-signature, you can just
use an underscore, and have the struct: macro fill in the blanks for you.

Of course, that only works if you're fine using the conventional name.

In the case of List , the wrapper superpower it provides is the capacity to handle multiple
values as a group. This can be used for some really cool techniques; like implementing non-
deterministic computations by treating every list element as a branching value (but let's not
go down that rabbit-hole for now).

The power of functors is that they allow you to decorate your types with extra functionality.
You can still access the inner data and the map function will take advantage of the
wrapper's properties to give you that extra power you want.

You can implement things like stateful computations, error-handling, logging, I/O,
asynchronous concurrency and many other crazy things with the help of functors.

However, to make them really easy to use, you might want to add some extra layers of
functionality.

Applicatives
One thing you may have noticed about the Functor signature is that you have a way to
operate on functor values, but you don't have any (standardized) means of creating them.


I mean, you can use the list and list& macros to create lists and the #;Some and
#;None tags for Maybe , but there is no unified way for creating any functor value.

Well, let me introduce you to Applicative :

(sig: #export (Applicative f)
  (: (F;Functor f)
     functor)
  (: (All [a]
       (-> a (f a)))
     wrap)
  (: (All [a b]
       (-> (f (-> a b)) (f a) (f b)))
     apply))

This signature extends Functor with both the capacity to wrap a normal value inside the
functor, and to apply a function wrapped in the functor, on a value that's also wrapped in it.

Sweet!

Wrapping makes working with functors so much easier because you don't need to memorize
a bunch of tags, macros or functions in order to create the structures that you need.

And being able to apply wrapped functions to wrapped values simplifies a lot of
computations where you may want to work with multiple values, all within the functor.

To get a taste for it, let's check out another functor type.

Remember what I said about error-handling?

(type: #export (Error a)
  (#Error Text)
  (#Success a))

This type expresses errors as Text messages (and it lives in the lux/data/error module).

Here are the relevant Functor and Applicative implementations:


(struct: #export _ (Functor Error)
  (def: (map f ma)
    (case ma
      (#;Error msg) (#;Error msg)
      (#;Success datum) (#;Success (f datum)))))

(struct: #export _ (Applicative Error)
  (def: functor Functor<Error>)

  (def: (wrap a)
    (#;Success a))

  (def: (apply ff fa)
    (case ff
      (#;Success f)
      (case fa
        (#;Success a)
        (#;Success (f a))

        (#;Error msg)
        (#;Error msg))

      (#;Error msg)
      (#;Error msg))))

The apply function is a bit more complicated than map , but it still (roughly) follows the
same pattern:

1. Unwrap the data.
2. Handle all the cases.
3. Generate a wrapped output.
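Those three steps translate almost mechanically into a Python sketch (again an analogue, modeling the two variants as ("Error", msg) and ("Success", x) tuples):

```python
# A sketch of the Error applicative's apply: unwrap both inputs,
# handle every case, and wrap the result again.
def error_apply(ff, fa):
    if ff[0] == "Error":        # the function itself failed: propagate it
        return ff
    if fa[0] == "Error":        # the argument failed: propagate that instead
        return fa
    f, a = ff[1], fa[1]
    return ("Success", f(a))    # both succeeded: apply and re-wrap

print(error_apply(("Success", lambda n: n * 2), ("Success", 21)))
# ("Success", 42)
print(error_apply(("Error", "no function"), ("Success", 21)))
# ("Error", "no function")
```

The payoff is that error propagation happens once, inside `apply`, rather than being re-written at every call site.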

Applicatives are really nice and all, but we can still take things further with these functors.

Monads
If you listen to functional programmers, you'll likely get the impression that the invention of
monads rivals the invention of the wheel. It is this incredibly powerful and fundamental
abstraction for a lot of functional programs.

Monads extend applicatives by adding one key operation that allows you to concatenate or
compound monadic values.

Why is that important?


Because, until now, we have only been able to use pure functions on our functorial values.
That is, functions which don't cause (or require) any kind of side-effect during their
computation.

Think about it: we can compute on the elements of lists, but if our function itself generates
lists, we can't just merge all the output lists together. We just end up with a lousy list of
lists.

Oh, and the functions you apply to your Error values had better not fail, because neither
Functor<Error> nor Applicative<Error> is going to do anything about it.

Enter the Monad :

(sig: #export (Monad m)
  (: (A;Applicative m)
     applicative)
  (: (All [a]
       (-> (m (m a)) (m a)))
     join))

The thing about Monad is that, with it, you can give map functions that themselves
generate wrapped values (and take advantage of their special properties), and then you can
collapse/merge/combine those values into a "joined" value by using (you guessed it) the
join function.

Let's see that in action:

(;module:
  lux
  (lux/data/struct [list]))

(open list;Monad<List>)

(def: foo (|> (list 1 2 3 4)
              (map (list;repeat 3))
              join))

## The value of 'foo' is:
(list 1 1 1 2 2 2 3 3 3 4 4 4)

It's magic!

Not really. It's just the Monad<List> :


(def: unit (All [a] (List a)) #;Nil)

(def: (append xs ys)
  (All [a] (-> (List a) (List a) (List a)))
  (case xs
    #;Nil ys
    (#;Cons x xs') (#;Cons x (append xs' ys))))

(struct: #export _ (Monad List)
  (def: applicative Applicative<List>)

  (def: (join list-of-lists)
    (|> list-of-lists reverse (fold append unit))))

The fold function is for doing incremental iterative computations.

Here, we're using it to build the total output list by concatenating all the input lists in our
list-of-lists .
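The whole map-then-join recipe boils down to plain list concatenation, which a short Python sketch makes obvious (an analogue of Monad<List>, not Lux code):

```python
# The List monad's join, sketched in Python: collapse one layer of nesting.
def join(list_of_lists):
    out = []
    for inner in list_of_lists:
        out.extend(inner)      # concatenate every inner list, in order
    return out

# map with a list-producing function, then join: the 'foo' example above.
repeat3 = lambda x: [x] * 3
print(join([repeat3(x) for x in [1, 2, 3, 4]]))
# [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4]
```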

Monads are incredibly powerful, since being able to use the special power of our functor
inside our mapping functions allows us to layer that power in complex ways.

But... you're probably thinking that writing a bunch of map s followed by join s is a very
tedious process. And, you're right!

If functional programmers had to subject themselves to that kind of tedium all the time,
they'd probably not be so excited about monads.

Time for the VIP treatment.

The do Macro
Macros always show up at the right time to save us from our hurdles!

## Macro for easy concatenation of monadic operations.
(do Monad<Maybe>
  [x (f1 123)
   #let [y (f2 x)] ## #let enables you to use full-featured let-expressions
   z (f3 y)]
  (wrap (f4 z)))

The do macro allows us to write monadic code with great ease (it's almost as if we're just
making let bindings).

Just tell it which Monad implementation you want, and it will write all the steps in your
computation piece by piece using map and join without you having to waste your time
with all the boilerplate.
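To see what "map and join without the boilerplate" actually means, here is a Python sketch of one desugared do-step for Maybe (an analogue with invented helper names; each step is exactly a map followed by a join, which together form the familiar `bind`):

```python
# What the do macro does behind the scenes, sketched for Maybe.
def maybe_map(f, ma):
    return None if ma is None else ("Some", f(ma[1]))

def maybe_join(mma):                 # Maybe (Maybe a) -> Maybe a
    return None if mma is None else mma[1]

def bind(ma, f):                     # one do-step: map, then join
    return maybe_join(maybe_map(f, ma))

# Roughly: (do Monad<Maybe> [x (f1 123) z (f3 x)] (wrap (f4 z)))
f1 = lambda n: ("Some", n + 1)
f3 = lambda n: ("Some", n * 2)
f4 = lambda n: n - 8
result = bind(f1(123), lambda x: bind(f3(x), lambda z: ("Some", f4(z))))
print(result)  # ("Some", 240)
```

Nesting those lambdas by hand is exactly the tedium the do macro writes for you.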


Finally, whatever you write as the body of the do must result in a functorial/monadic
value (in this case, a Maybe value).

Remember: join may collapse/merge/combine layers of the functor, but it never escapes
it, and neither can you.

Also, I can't forget to tell you that the do macro binds the monad instance you're using
to the local variable @ , so you may refer to the monad using that short name
anywhere within the do expression.

This can come in handy when calling some of the functions in the lux/control/monad
module, which take monad structures as parameters.

Functors, applicatives and monads have a bad reputation for being difficult to understand,
but hopefully I didn't botch this explanation too much.

Personally, I think the best way to understand functors and monads is to read different
implementations of them for various types (and maybe write a few of your own).

For that, feel free to peruse the Lux Standard Library at your leisure.

This is the sort of thing that you need to learn by intuition, and kind of get the feel for.

Hopefully, you'll be able to do just that over the next chapters, because we're going to
be exploring a lot of monads from here on.

So, buckle-up, cowboy. This ride is about to get bumpy.

See you in the next chapter!


Chapter 9: Metaprogramming
Where we go meta. For real.

Metaprogramming is the art of making programs... that make programs.

There are many techniques and tools for achieving this, but one that is very familiar to Lisp
fans is to use macros to generate code at compile-time.

However, we're not going to talk about macros on this chapter.

Instead, I'll reveal the infrastructure that makes macros possible, and we'll discuss macros
on the next chapter.

The Compiler Type


The Lux compiler was designed to integrate very well with the language itself.

Most compilers are just programs that take source code and emit some binary executable or
some byte-code. But the Lux compiler opens itself for usage within Lux programs and
provides Lux programmers with a wealth of information.

The Compiler type enters the stage.

(type: Compiler
{#info Compiler-Info
#source Source
#cursor Cursor
#modules (List [Text Module])
#envs (List Scope)
#type-vars (Bindings Int Type)
#expected (Maybe Type)
#seed Int
#scope-type-vars (List Int)
#host Void})

By the way, the Compiler type and other weird types you may not recognize there are
all defined in the lux module. Check the documentation in the Standard Library for
more details.

The Compiler type represents the state of the Lux compiler at any given point.


It's not a reflection of that state, or a subset of it. It is the state of the Lux compiler; and, as
you can see, it contains quite a lot of information about compiled modules, the state of the
type-checker, the lexical and global environments, and more.

Heck, you can even access the yet-unprocessed source code of a module at any given time.

That's pretty neat.

You can actually write computations that can read and even modify (careful with that one)
the state of the compiler. This turns out to be massively useful when implementing a variety
of powerful macros.

For example, remember the open and :: macros from chapter 7?

They actually look up the typing information for the structures you give them to figure out the
names of members and generate the code necessary to get that functionality going.

And that is just the tip of the iceberg.

The module: macro uses module information to generate all the necessary code for locally
importing foreign definitions, and some macros for doing host interop analyze the
annotations of local definitions to help you write shorter code when importing Java
classes/methods/fields or defining your own.

The possibilities are really vast when it comes to using the information provided by the
Compiler state.

The Lux Type


But, how do I use it?

Well, that is where the Lux type and the lux/compiler module come into play.

Yeah, I'm aware that it's weird there's a type with the same name as the language, but I
couldn't figure out a better name.

The lux/compiler module houses many functions for querying the Compiler state for
information, and even to change it a little bit (in safe ways).

I won't go into detail about what's available, but you'll quickly get an idea of what you can do
if you read the documentation for it in the Standard Library.

However, one thing I will say is that those functions rely heavily on the Lux type, which is
defined thus:


(type: (Lux a)
(-> Compiler (Either Text [Compiler a])))

The Lux type is defined in the lux module, although most functions that deal with it
are in the lux/compiler module. Also, lux/compiler contains Functor<Lux> and
Monad<Lux> .

The Lux type is a functor, and a monad, but it is rather complicated.

You saw some functor/applicative/monad examples in the last chapter, but this is more
colorful.

Lux instances are functions that, given an instance of the Compiler state, will perform
some calculations which may fail (with an error message); but if they succeed, they return a
value, plus a (possibly updated) instance of the Compiler .
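That shape is easy to model in Python (a sketch only: the state fields, the helper names, and the `("error", msg)` / `(state, value)` encoding are illustrative, not the Lux API): each "Lux action" is a function from state to either an error or an updated state plus a value.

```python
# A sketch of the Lux type's shape: state in, (error | (new_state, value)) out.
def next_seed(state):
    # Succeed: hand back an updated state and the old seed as the value.
    new_state = {**state, "seed": state["seed"] + 1}
    return (new_state, state["seed"])

def fail(message):
    # An action that always fails with a message, ignoring the state.
    def action(state):
        return ("error", message)
    return action

state0 = {"seed": 0}
state1, s0 = next_seed(state0)
state2, s1 = next_seed(state1)
print(s0, s1, state2["seed"])  # 0 1 2
```

Chaining such actions (threading the state, short-circuiting on error) is precisely what Monad<Lux> automates.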

Lux metaprogramming is based heavily on the Lux type, and macros themselves rely on it
for many of their functionalities, as you'll see in the next chapter.

Where do Compiler instances come from?


Clearly, Compiler instances are data, but the compiler is not available at all times.

The compiler is only ever present during... well... compilation.

And that is precisely when all of your Compiler -dependent code will be run.

Basically, in order for you to get your hands on that sweet compiler information, your code
must be run at compile-time. But only macro code can ever do that, so you will have to wait
until the next chapter to learn how this story ends.

Definition Annotations
Another important piece of information you should be aware of is that definitions don't just
have values and types associated with them, but also arbitrary meta-data which you can
customize as much as you want.

The relevant types in the lux module are


(type: #rec Ann-Value
  (#BoolM Bool)
  (#IntM Int)
  (#RealM Real)
  (#CharM Char)
  (#TextM Text)
  (#IdentM Ident)
  (#ListM (List Ann-Value))
  (#DictM (List [Text Ann-Value])))

and

(type: Anns
(List [Ident Ann-Value]))

You can add annotations to definitions in the many definition macros offered in the standard
library. All you need to do is pass in some record syntax, with tags being the Ident part of
the annotations, and the associated value being either an explicit variant Ann-Value , or
some function or macro call that would produce such a value.

Here's an example from lux :

(def: #export (is left right)
  {#;doc (doc "Tests whether the 2 values are identical (not just \"equal\")."
              "This one should succeed:"
              (let [value 5]
                (is 5 5))

              "This one should fail:"
              (is 5 (+ 2 3)))}
  (All [a] (-> a a Bool))
  (_lux_proc ["lux" "=="] [left right]))

The (optional) annotations always go after the declaration or name of the thing being
defined.

Note that all tag usage within annotation records should be prefixed, to avoid potential
confusion, as different modules could be using annotation tags with similar names.

The lux/compiler module contains various functions for reading and exploring the definition
annotations, and some modules in the standard library (for example, the lux/host module)
make heavy use of annotations to figure out properties of definitions which may be useful
during code-generation and parsing in macros.

And also, as you can appreciate from the previous example, some macros may be designed
to be used during annotation specification.


This chapter feels a little empty because the topic only makes sense within the context of
macros. But macros by themselves are a huge subject, and involve more machinery than
you've seen so far.

However, I wanted to give you a taste of what's possible in order to whet your appetite, while
keeping the chapter focused.

In the next chapter, I'll complete this puzzle, and you'll be given access to a power greater
than you've ever known (unless you've already been a lisper for a while).

See you in the next chapter!


Chapter 10: The AST and Macros


Where magic turns into science.

I've talked about many macros in this book.

There's a macro for this and a macro for that.

You use macros for defining stuff, for making types and functions and lists, for doing pattern-
matching, and for control-flow.

There's a macro for everything. And yet, I haven't even shown you how one gets defined.

Quiet your mind, young grasshopper. You're about to be enlightened.

But first, you need to learn a few things.

The AST
The word AST stands for Abstract Syntax Tree.

An AST is a representation of the syntax of a programming language; compilers use them
to analyze the source-code (by type-checking it, for example), and then to generate the
binary/byte-code output.

You might think that's none of your business. Only compiler writers have to worry about that
stuff, right?

Oh, you have much to learn, young grasshopper.

You see, the power of macros lies in the fact that (to some extent) users of the language
can play the role of language designers and implementers.

Macros allow you to implement your own features in the language and to have them look
and feel just like native features.

I mean, beyond the native syntax for writing numbers, text, tuples, variants and records,
every single thing you have written so far has involved macros.

Module statements? Yep, macros.

Definition statements? Yep, macros.

Function expressions? Yep, macros.


And you'd have never suspected those weren't native Lux features had I not told you they
were macros.

Now, just imagine making your own!

But macros work with the Lux AST, so that's the first thing you need to master.

Check it out:

(type: (Meta m v)
{#meta m
#datum v})

(type: Cursor
{#module Text
#line Int
#column Int})

(type: (AST' w)
(#BoolS Bool)
(#IntS Int)
(#RealS Real)
(#CharS Char)
(#TextS Text)
(#SymbolS Ident)
(#TagS Ident)
(#FormS (List (w (AST' w))))
(#TupleS (List (w (AST' w))))
(#RecordS (List [(w (AST' w)) (w (AST' w))])))

(type: AST
(Meta Cursor (AST' (Meta Cursor))))

These types are all in the lux module.

The AST type is the one you'll be interacting with, but all it does is wrap (recursively) the
incomplete AST' type, giving it some meta-data to know where each AST node comes from
in your source-code.
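This pairing of syntax with source positions is not unique to Lux. For comparison, Python's standard `ast` module exposes that language's own syntax tree, and every node likewise carries its line and column:

```python
import ast

# Parse a tiny expression and inspect the resulting AST node.
tree = ast.parse("1 + 2", mode="eval")
node = tree.body                      # the BinOp node for `1 + 2`

print(type(node).__name__)            # BinOp
print(node.lineno, node.col_offset)   # 1 0: the node's source position
```

Lux's Cursor plays the same role as `lineno` / `col_offset` here: it lets the compiler (and your macros) report exactly where in the source any given node came from.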

The real magic is in the AST' type, where you can see all the alternative syntactic
elements.

Most of it is self-explanatory, but you may not recognize #SymbolS . A symbol is lisp-speak
for what is called an identifier in most other programming languages. map is a symbol, as is
lux/data/struct/list;reverse . They are the things we use to refer to variables, types,
definitions and modules.


The Ident type (from the lux module) is just a [Text Text] type. The first part holds the
module/prefix of the symbol/tag, and the second part holds the name itself. So
lux/data/struct/list;reverse becomes ["lux/data/struct/list" "reverse"] , and map
becomes ["" "map"] .

list;reverse would become ["lux/data/struct/list" "reverse"] anyway, because aliases get resolved prior to analysis and macro expansion.

Forms are (syntactic structures delimited by parentheses) , and tuples are [syntactic
structures delimited by brackets] . Records {#have lists #of pairs} of AST s instead of
single AST s, because everything must come in key-value pairs.

Quotations
We know everything we need to extract information from the AST type, but how do we build
AST values?

Do we have to build it with our bare hands using variants and tuples?

That sounds... exhausting.

Well, we don't have to. There are actually many nice tools for making our lives easier.

One nice resource within our reach is the lux/macro/ast module, which contains a variety
of functions for building AST values, so we don't have to worry about cursors and variants
and all that stuff.

But, even with that, things would get tedious. Imagine having to generate an entire function
definition (or something even larger), by having to call a bunch of functions for every small
thing you want.

Well, don't fret. The Lux Standard Library already comes with a powerful mechanism for
easily generating any code you want and you don't even need to import it (it's in the lux
module).

## Quotation as a macro.
(' "YOLO")

Quotation is a mechanism that allows you to write the code you want to generate, and then
builds the corresponding AST value.

The ' macro is the simplest version, which does exactly what I just described.


This would turn the text "YOLO" into [{#;module "" #;line -1 #;column -1} (#;TextS "YOLO")] . If you want to know what that would look like with the tools at lux/macro/ast , it would be: (text "YOLO") .

The beautiful thing is that (' (you can use the #"'" #macro [to generate {arbitrary ASTs} without] worrying (about the "complexity"))) .

## Hygienic quasi-quotation as a macro. Unquote (~) and unquote-splice (~@) must also be used as forms.
## All unprefixed macros will receive their parent module's prefix if imported; otherwise will receive the prefix of the module on which the quasi-quote is being used.
(` (def: (~ name)
     (lambda [(~@ args)]
       (~ body))))

This is a variation on the ' macro that allows you to do templating with the code you want
to generate.

Everything you write will be generated as is, except those forms which begin with ~ or ~@ .

~ means: evaluate this expression and use its AST value.

~@ means: the value of this expression is a list of ASTs, and I want to splice all of them into the surrounding AST node.

With these tools, you can introduce a lot of complexity and customization into your code
generation, which would be a major hassle if you had to build the AST nodes yourself.

You may be wondering what "hygienic" means in this context. It just means that if
you use any symbol in your template which may refer to an in-scope definition or local
variable, the symbol will be resolved to it.

Any symbol that does not correspond to any known in-scope definition or variable will
trigger a compile-time error.

This ensures that if you make a mistake writing your template code, it will be easy to
spot during development. Also, it will be harder to collide (by mistake) with user code if
you, for instance, write the code for making a local variable named foo , and then the
person using your macro uses a different foo somewhere in their code.

## Unhygienic quasi-quotation as a macro. Unquote (~) and unquote-splice (~@) must also be used as forms.
(`' (def: (~ name)
      (lambda [(~@ args)]
        (~ body))))


Finally, there is this variation, which removes the hygiene check.

Out of the 3 variations, the one you'll most likely use is the 2nd one, since it provides both
safety and power.

Macros
Now that you know how to generate code like a pro, it's time to see how macros get made.

First, let's check the type of macros:

(type: Macro
(-> (List AST) (Lux (List AST))))

From the lux module.

You might remember from the previous chapter that you can only access the Compiler
state inside of macros. Now, you can see how everything connects.

You define macros by using the macro: macro (so meta...):

(macro: #export (ident-for tokens)
  (case tokens
    (^template [<tag>]
      (^ (list [_ (<tag> [prefix name])]))
      (:: Monad<Lux> wrap (list (` [(~ (ast;text prefix)) (~ (ast;text name))]))))
    ([#;SymbolS] [#;TagS])

    _
    (compiler;fail "Wrong syntax for ident-for")))

Here's another example:

(macro: #export (default tokens)
  (case tokens
    (^ (list else maybe))
    (do Monad<Lux>
      [g!temp (compiler;gensym "")]
      (wrap (list (` (case (~ maybe)
                       (#;Some (~ g!temp))
                       (~ g!temp)

                       #;None
                       (~ else))))))

    _
    (compiler;fail "Wrong syntax for default")))


You may want to read Appendix C to learn about the pattern-matching macros used in
these examples.

As you can see, I'm using both quotation and the functions from the lux/macro/ast module
to generate code here.

I'm also using the gensym function from lux/compiler , which generates unique symbols for
usage within code templates in order to avoid collision with any code provided by the user of
the macro.

The macro receives the raw list of AST tokens and must process them manually to extract
any information it needs for code generation. After that, a new list of AST tokens must be
generated.

If there are any macros in the output, they will be expanded further until only primitive/native
syntax remains that the Lux compiler can then analyze and compile.
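That expand-until-primitive loop can be sketched in miniature in Python (an illustration only: forms are modeled as nested lists, and the tiny `when` macro is invented for the example):

```python
# A sketch of the expansion loop: keep expanding macros in the output
# until only primitive forms remain.
MACROS = {
    # a tiny `when` macro: (when test body) => (if test body nil)
    "when": lambda args: ["if", args[0], args[1], "nil"],
}

def expand(form):
    if isinstance(form, list) and form and form[0] in MACROS:
        # A macro call: expand it, then re-examine the result,
        # since the output may itself contain macro calls.
        return expand(MACROS[form[0]](form[1:]))
    if isinstance(form, list):
        return [expand(sub) for sub in form]   # expand nested forms too
    return form                                # atoms pass through unchanged

print(expand(["when", "ready?", ["print", "go"]]))
# ['if', 'ready?', ['print', 'go'], 'nil']
```

The key point is the recursive call on the macro's output: expansion only stops once nothing in the tree names a macro anymore.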

You have learned how to use one of the greatest superpowers that Lux has to offer.

But, if you're like me, you might be getting the nagging feeling that something is not right
here.

I mean, if I have to pattern-match against the code I receive; what happens when my macros
have complex inputs?

Clearly, analyzing the input code is far more difficult than generating it with the quoting
macros.

Don't worry about it. Because in the next chapter, you will meet a more sophisticated method
of macro definition that will make writing complex macros a breeze.

See you in the next chapter!


Chapter 11: Syntax Macros


Where science turns into magic once more.

You've now learned how to create your own macros to make your own custom syntax, and
the features involved.

I would advise you to take a look at the many macros in the Lux Standard Library for
inspiration as to what can be accomplished.

In the meantime, let's find out how to take our macro chops to the next level.

The lux/macro/syntax module houses some powerful tools.

For starters, it's the home of the Syntax type:

(type: (Syntax a)
(-> (lux;List lux;AST) (lux/data/error;Error [(lux;List lux;AST) a])))

Note: This is also a functorial/monadic type.

Syntax is the type of syntax-parsers: parsers which analyze AST nodes to extract arbitrary information.

The Syntax type works with streams of inputs instead of single elements, and it often
consumes some of those inputs, which is why the output involves an updated list of AST s.

There are many such syntax-parsers (and combinators) in the lux/macro/syntax module,
and you should definitely take a look at what's available in the documentation.

But in that module there is also another mechanism for defining macros: the syntax:
macro.


## A more advanced way to define macros than macro:.
## The inputs to the macro can be parsed in complex ways through the use of syntax parsers.
## The macro body is also (implicitly) run in the Lux/Monad, to save some typing.
## Also, the compiler state can be accessed through the *state* binding.
(syntax: #export (object [#let [imports (class-imports *state*)]]
                         [#let [class-vars (list)]]
                         [super (s;opt (super-class-decl^ imports class-vars))]
                         [interfaces (s;tuple (s;some (super-class-decl^ imports class-vars)))]
                         [constructor-args (constructor-args^ imports class-vars)]
                         [methods (s;some (overriden-method-def^ imports))])
  (let [def-code ($_ Text/append "anon-class:"
                     (spaced (list (super-class-decl$ (default object-super-class super))
                                   (with-brackets (spaced (map super-class-decl$ interfaces)))
                                   (with-brackets (spaced (map constructor-arg$ constructor-args)))
                                   (with-brackets (spaced (map (method-def$ id) methods))))))]
    (wrap (list (` (;_lux_proc ["jvm" (~ (ast;text def-code))] []))))))

This sample is a macro for making anonymous classes that lives in lux/host .

The difference between macro: and syntax: is that syntax: allows you to parse, in a
structured manner, the inputs to your macro, thereby reducing considerably the complexity
necessary for making "big" macros.

Also, because you're using syntax-parsers for the hard work, you can write reusable parsers
that you can share throughout your macros, if you want to have common syntax. You can
compose your parsers, or use parsers from someone else's library.
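To make this concrete, here is a minimal sketch of a syntax: macro of my own invention (the named macro below is not part of the standard library; s;local-symbol and ast;text are used the same way as in this chapter's samples):

```lux
## A hypothetical example macro: it parses a name (as a local symbol)
## and an arbitrary expression, and expands into a tuple pairing the
## name's text with that expression.
(syntax: #export (named [name s;local-symbol] expr)
  (wrap (list (` [(~ (ast;text name)) (~ expr)]))))

## (named foo (i.+ 1 2))
## would expand into:
## ["foo" (i.+ 1 2)]
```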

There is already a small module called lux/macro/syntax/common which houses a few reusable parsers and generators.

It will grow over time, as more syntax becomes standardized in the Lux Standard
Library and it makes more sense to re-use resources.

Additionally, syntax: binds the Compiler value on a variable called *state* , so you can
use it during your parsing.

What do those syntax-parsers look like?

Here are some samples:


(def: (type-param^ imports)
  (-> ClassImports (Syntax TypeParam))
  (s;either (do Monad<Syntax>
              [param-name s;local-symbol]
              (wrap [param-name (list)]))
            (tuple^ (do Monad<Syntax>
                      [param-name s;local-symbol
                       _ (s;symbol! ["" "<"])
                       bounds (s;many (generic-type^ imports (list)))]
                      (wrap [param-name bounds])))))

(def: body^
  (Syntax (List AST))
  (s;tuple (s;many s;any)))


(type: #rec Infix
  (#Const AST)
  (#Call (List AST))
  (#Infix Infix AST Infix))

(def: (infix^ _)
  (-> Unit (Syntax Infix))
  ($_ s;alt
      ($_ s;either
          (Syntax/map ast;bool s;bool)
          (Syntax/map ast;int s;int)
          (Syntax/map ast;real s;real)
          (Syntax/map ast;char s;char)
          (Syntax/map ast;text s;text)
          (Syntax/map ast;symbol s;symbol)
          (Syntax/map ast;tag s;tag))
      (s;form (s;many s;any))
      (s;tuple (s;either (do Monad<Syntax>
                           [_ (s;tag! ["" "and"])
                            init-subject (infix^ [])
                            init-op s;any
                            init-param (infix^ [])
                            steps (s;some (s;seq s;any (infix^ [])))]
                           (wrap (product;right (fold (lambda [[op param] [subject [_subject _op _param]]]
                                                        [param [(#Infix _subject _op _param)
                                                                (` and)
                                                                (#Infix subject op param)]])
                                                      [init-param [init-subject init-op init-param]]
                                                      steps))))
                         (do Monad<Syntax>
                           [_ (wrap [])
                            init-subject (infix^ [])
                            init-op s;any
                            init-param (infix^ [])
                            steps (s;some (s;seq s;any (infix^ [])))]
                           (wrap (fold (lambda [[op param] [_subject _op _param]]
                                         [(#Infix _subject _op _param) op param])
                                       [init-subject init-op init-param]
                                       steps)))))))

In all of these cases, s is being used as an alias for lux/macro/syntax .

And here are some samples of syntax macros:


(syntax: #export (^stream& [patterns (s;form (s;many s;any))] body [branches (s;some s;any)])
  {#;doc (doc "Allows destructuring of streams in pattern-matching expressions."
              "Caveat emptor: Only use it for destructuring, and not for testing values within the streams."
              (let [(^stream& x y z _tail) (some-stream-func 1 2 3)]
                (func x y z)))}
  (with-gensyms [g!s]
    (let [body+ (` (let [(~@ (List/join (List/map (lambda [pattern]
                                                    (list (` [(~ pattern) (~ g!s)])
                                                          (` (cont;run (~ g!s)))))
                                                  patterns)))]
                     (~ body)))]
      (wrap (list& g!s body+ branches)))))

(syntax: #export (?> [branches (s;many (s;seq body^ body^))]
                     [?else (s;opt body^)]
                     prev)
  {#;doc (doc "Branching for pipes."
              "Both the tests and the bodies are piped-code, and must be given inside a tuple."
              "If a last else-pipe isn't given, the piped-argument will be used instead."
              (|> 5
                  (?> [even?] [(* 2)]
                      [odd?] [(* 3)]
                      [(_> -1)])))}
  (with-gensyms [g!temp]
    (wrap (list (` (let% [(~ g!temp) (~ prev)]
                     (cond (~@ (do Monad<List>
                                 [[test then] branches]
                                 (list (` (|> (~ g!temp) (~@ test)))
                                       (` (|> (~ g!temp) (~@ then))))))
                           (~ (case ?else
                                (#;Some else)
                                (` (|> (~ g!temp) (~@ else)))

                                _
                                g!temp)))))))))

(syntax: (@runnable expr)
  (wrap (list (`' (object [java.lang.Runnable]
                    []
                    (java.lang.Runnable run [] void
                      (exec (~ expr)
                        [])))))))


By the way, the body of syntax: runs inside a (do Monad<Lux> [] ...) expression, so
you have immediate access to wrap for simple macros, like the last one.

This may be a short chapter, but not because its subject is small.

The opportunities that syntax-parsers open are fantastic, as it puts within your reach macros
which would otherwise be much harder to implement correctly.

Don't worry about complex inputs: your macros can implement entire new embedded programming languages if you want them to. Syntax-parsers can generate any data-type you want, so you can easily translate the information in the input syntax to whatever data-model you need.

But, now that we've spent 3 chapters about metaprogramming in Lux, I think it's fair that we
clear our minds a little by looking at other subjects.

You're going to learn how to go beyond Lux and interact with everything and everyone.

See you in the next chapter!


Chapter 12: I/O


Where you will learn how to interact with the outside world.

I/O (short for input and output) is a very important subject in programming.

Arguably, it's the only reason why we make programs in the first place.

Software wouldn't be very useful if we couldn't use files, or interact with foreign devices (like
sensors, or other computers), or interact with our users through GUIs.

I/O is fundamental to software development; but, like every fundamental thing in


programming, there are many approaches to it, and some of them are in opposition to
others.

I would say there are 2 main schools of thought when it comes to the role of I/O in computer
programs.

Implicit I/O
These guys say that all programs are fundamentally about I/O.

Everything, in one way or another, revolves around I/O, and it is a pervasive force that defines what computing is about from its very foundations.

Most programming languages you may be familiar with subscribe to this idea (either by
choice of the language designer(s) or by simply following standard practice).

Here, we see operations which have side-effects (such as printing to standard output, reading files, or connecting to another computer over the network) being treated like operations which lack them (like adding two numbers), with the side-effects seen as some kind of magical property exhibited by those operations, which neither the language nor its libraries are obligated to handle in any special way.

Side-effects tend to be mentioned in documentation, but the lack of separation means that programmers must be wary of what they're using if they want to avoid the unfortunate consequences of careless coding.

A common pitfall involves concurrency, where operations executed on multiple threads, with access to common resources, can cause data-corruption and other problems during run-time.


However, programmers who subscribe to this philosophy don't tend to see that as a sign that
their languages or tools lack anything, but that programmers need to be careful and mindful
of what they code, and that the effectful nature of programming is something to be
embraced, rather than constrained.

Explicit I/O
These guys are a very different breed.

Just like static typing is used to impose some order in programs and to make explicit that
which only lived in the minds (and comments) of programmers; explicit I/O acknowledges
that effectful computations are fundamentally different from pure computations (that is,
computations which just process their inputs and calculate their outputs).

Making I/O explicit, rather than being a denial of the very nature of programs, is an
acknowledgement of this reality, and a reification of something which only used to live in the
minds of programmers (and their comments).

By making I/O explicit, it can be separated from pure computations (thus, avoiding many
common mistakes); it can be isolated and handled in safe places (thus avoiding its often
pervasive nature); and it can be better integrated into the design of programs by making it a
more concrete element.

In a way, it can even be said that immutable data-types (a staple of functional programming
that Lux embraces) are just another form of explicit I/O.

Being able to change data after it was created can easily propagate changes beyond your
local scope and affect threads and data-structures you aren't even aware of in ways that
may be troublesome.

But, just like having immutable data-types makes change explicit and a concrete part of
one's design, making I/O explicit achieves a similar goal, with regards to one's interactions
with the outside world.

While immutable data-types deal with the mechanics of the inner world of programs, explicit
I/O deals with interaction with the outside world; from the very near to the very far.

Lux chooses explicit I/O as its underlying model.

It may seem odd that I have to justify the choice of explicit I/O in Lux; but, at the time of
this writing, implicit I/O is the industry standard, with many even doubting the benefits of
an alternative approach.

The only major language which also adopts this model is Haskell (from which Lux takes
heavy inspiration), in its efforts to maintain theoretical purity.


How does Explicit I/O Work?


Now that I have (hopefully) convinced you that this is a good idea, how does it work?

Well, by using types; of course!

If you head to the lux/codata/io module, you will find the following type definition:

(type: #export (IO a)
  (-> Void a))

IO is a type that embodies this model, by making it something that the type-system can

recognize and work with.

But there is something odd with that definition: it's a function, but it also takes something...
impossible.

The function part is easy to explain: to separate I/O from the rest of the language, effectful
computations must be delayed until the proper time comes to execute them. That is the
reason why Lux programs must produce values of type (IO Unit) , instead of just running
and returning anything. The run-time system can then take those values and execute them
separately.

That might sound like a pointless task, but you may want to run those operations on different
contexts (like running them on different threads/processes, for example).

Also, it may be interesting to know that IO happens to be a monadic type; allowing you to compose effectful computations with ease.

But what about Void over there? Didn't you say you can't create void values in chapter 6?

Good observation (and good memory)!

Lux takes care to do some... black magic to get its hands on a Void value in order to run
those IO computations.

For the most part, that Void over there is just a means to enforce a separation between
normal code and effectful code that most programmers will respect.

I like using Void instead of Unit (or any other type) because it gives I/O a sort of
creatio ex nihilo vibe that's very epic, but also illustrates that these otherworldly values
are not just calculating something in your programs, but may actually be pulling data
out of nowhere by their interactions with the outside world.

Additionally, if you explore the lux/codata/io module, you'll find the means to "run" these
I/O operations through the run function.


The io macro, on the other hand, gives you the way to label effectful code as such, by
wrapping it inside the IO type.

However, it's unlikely you'll need to use the io macro yourself very often, as some
tools we'll see later can take care of that for you.
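To sketch how these pieces fit together, here is a small example of my own (the greet definitions are hypothetical; io, run and the log! function discussed below are the tools named in this chapter):

```lux
## An effectful computation, labeled as such by wrapping it with io.
(def: greet
  (IO Unit)
  (io (log! "Hello, world!")))

## Because IO is monadic, effectful computations compose via do.
(def: greet-twice
  (IO Unit)
  (do Monad<IO>
    [_ greet
     _ greet]
    (wrap [])))

## At the "edge" of the program, run finally executes the delayed effects.
(run greet-twice)
```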

What I/O Operations are Available?


Sadly, this is one area where Lux (currently) falls short.

There are 2 reasons for that:

1. Lux's cross-platform ideals.
2. Lux's youth.

Different platforms (e.g. JVM, Node.js, web browsers, CLR) offer different tools for doing I/O
(or maybe even no tools), which makes providing cross-platform I/O functionality a bit of a
challenge.

As Lux grows (as a language and as a community), many of the gaps regarding I/O will be
filled and there will (hopefully) be some sort of standard I/O infrastructure that will be
relatively platform-independent.

In the meantime, I/O will depend on the capabilities of the host platform, accessed through
the mechanisms Lux provides for doing host inter-operation (which is, coincidentally, the
subject of the next chapter).

This is not as bad as it sounds; and one way or another, programmers will always want to
reach to their host platforms to access features beyond what Lux can provide.

And this is fine.

Rather than trying to be what everybody needs and wants all the time, Lux chooses to focus on the subset of things it can do really well, and leaves the rest in the hands of the platform, and the clever programmers who use both.

But, I must confess, there is one way of doing I/O Lux does provide in a cross-platform
way, which is the log! function (which prints to standard output).

However, log! has type (-> Text Unit), instead of the expected (-> Text (IO Unit)). The reason is that it was meant for casual logging (as done while debugging), instead of serious I/O, and I felt that forcing people to use IO while logging would have made the process too tedious.


This chapter has been mostly theoretical, as I had to make the case for why explicit I/O is a
valuable tool before I could actually explain what it was.

Here, you have learned how to walk outside and yell.

Next, I shall teach you how to walk inside and whisper... to the host platform.

See you in the next chapter!


Chapter 13: JVM Interop


Where you will cross the great divide.

No language is an island, and even compiled-to-native languages need to have some FFI
(foreign function interface) to interact with C, Fortran or some other language.

There's a ton of awesome infrastructure out there that was implemented in other
technologies, and there is no reason for us not to take a piece of that pie.

The beautiful thing about the inter-operation mechanism offered by Lux is that it can be used
to interact with any language running on the JVM (not only Java).

Although, due to its simplicity, it's better suited for interacting with programs originally
written in Java, as other languages tend to implement some tricks that make the
classes they generate a little bit... funky.

So, what do you need to work with the JVM?

Basically, just 3 things:

1. The means to consume the resources it provides (classes/methods/fields/objects).


2. The means to provide your own resources (class definitions).
3. The means to access its special features (such as synchronization).

Let's explore them.

By the way, the only module relevant to this chapter is lux/host .

Importing Classes, Methods & Fields


It's all done with the help of the jvm-import macro:

## Allows importing JVM classes, and using them as types.
## Their methods, fields and enum options can also be imported.
## Also, classes which get imported into a module can also be referred-to with their short names in other macros that require JVM classes.
## Examples:
(jvm-import java.lang.Object
  (new [])
  (equals [Object] boolean)
  (wait [int] #io #try void))


## Special options can also be given for the return values.
## #? means that the values will be returned inside a Maybe type. That way, null becomes #;None.
## #try means that the computation might throw an exception, and the return value will be wrapped by the Error type.
## #io means the computation has side effects, and will be wrapped by the IO type.
## These options must show up in the following order [#io #try #?] (although, each option can be used independently).
(jvm-import java.lang.String
  (new [(Array byte)])
  (#static valueOf [char] String)
  (#static valueOf #as int-valueOf [int] String))

(jvm-import #long (java.util.List e)
  (size [] int)
  (get [int] e))

(jvm-import (java.util.ArrayList a)
  ([T] toArray [(Array T)] (Array T)))

## #long makes it so the class-type that is generated is of the fully-qualified name.
## In this case, it avoids a clash between the java.util.List type, and Lux's own List type.
(jvm-import java.lang.Character$UnicodeScript
  (#enum ARABIC CYRILLIC LATIN))

## All enum options to be imported must be specified.
(jvm-import #long (lux.concurrency.promise.JvmPromise A)
  (resolve [A] boolean)
  (poll [] A)
  (wasResolved [] boolean)
  (waitOn [lux.Function] void)
  (#static [A] make [A] (JvmPromise A)))

## It should also be noted, the only types that may show up in method arguments or return values may be Java classes, arrays, primitives, void or type-parameters.
## Lux types, such as Maybe, can't be named (otherwise, they'd be confused for Java classes).
## Also, the names of the imported members will look like ClassName.MemberName.
## E.g.:
(Object.new [])

(Object.equals [other-object] my-object)

(java.util.List.size [] my-list)

Character$UnicodeScript.LATIN

This will be the tool you use the most when working with the JVM.


As you have noticed, it works by creating functions (and constant values) for all the class
members you need. It also creates Lux type definitions matching the classes you import, so
that you may easily refer to them when you write your own types later in regular Lux code.
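As a small sketch of that workflow (the System import is my own example, and treating JVM long as Lux's Int is an assumption about the primitive mapping):

```lux
(jvm-import java.lang.System
  (#static nanoTime [] #io long))

## The generated function is named ClassName.MemberName, and the #io
## option wraps its return value in the IO type.
(def: now
  (IO Int)
  (System.nanoTime []))
```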

It must be noted that jvm-import requires that you only import methods and fields from
their original declaring classes/interfaces.

What that means is that if class A declares/defines method foo , and class B
extends A , to import foo , you must do it by jvm-import 'ing it from A , instead of
B .

Writing Classes
Normally, you'd use the class: macro:

## Allows defining JVM classes in Lux code.
## For example:
(class: #final (JvmPromise A) []

  (#private resolved boolean)
  (#private datum A)
  (#private waitingList (java.util.List lux.Function))

  (#public [] new [] []
    (exec (:= .resolved false)
      (:= .waitingList (ArrayList.new []))
      []))
  (#public [] resolve [{value A}] boolean
    (let [container (.new! [])]
      (synchronized _jvm_this
        (if .resolved
          false
          (exec (:= .datum value)
            (:= .resolved true)
            (let [sleepers .waitingList
                  sleepers-count (java.util.List.size [] sleepers)]
              (map (lambda [idx]
                     (let [sleeper (java.util.List.get [(l2i idx)] sleepers)]
                       (Executor.execute [(@runnable (lux.Function.apply [(:! Object value)] sleeper))]
                                         executor)))
                   (i.range 0 (i.dec (i2l sleepers-count)))))
            (:= .waitingList (null))
            true)))))
  (#public [] poll [] A
    .datum)
  (#public [] wasResolved [] boolean
    (synchronized _jvm_this
      .resolved))
  (#public [] waitOn [{callback lux.Function}] void
    (synchronized _jvm_this
      (exec (if .resolved
              (lux.Function.apply [(:! Object .datum)] callback)
              (:! Object (java.util.List.add [callback] .waitingList)))
        [])))
  (#public #static [A] make [{value A}] (lux.concurrency.promise.JvmPromise A)
    (let [container (.new! [])]
      (exec (.resolve! (:! (host lux.concurrency.promise.JvmPromise [Unit]) container) [(:! Unit value)])
        container))))

## The vector corresponds to parent interfaces.
## An optional super-class can be specified before the vector. If not specified, java.lang.Object will be assumed.
## Fields and methods defined in the class can be used with special syntax.
## For example:
## .resolved, for accessing the "resolved" field.
## (:= .resolved true) for modifying it.
## (.new! []) for calling the class's constructor.
## (.resolve! container [value]) for calling the "resolve" method.

And, for anonymous classes, you'd use object :

## Allows defining anonymous classes.
## The 1st vector corresponds to parent interfaces.
## The 2nd vector corresponds to arguments to the super class constructor.
## An optional super-class can be specified before the 1st vector. If not specified, java.lang.Object will be assumed.
(object [java.lang.Runnable]
  []
  (java.lang.Runnable (run) void
    (exec (do-something some-input)
      [])))

Also, it's useful to know that you can always use the short name for classes in the
java.lang package, even if you haven't imported them.

Special Features
Accessing class objects.

## Loads the class as a java.lang.Class object.
(class-for java.lang.String)

Test instances.


"Checks whether an object is an instance of a particular class.
Caveat emptor: Can't check for polymorphism, so avoid using parameterized classes."
(instance? String "YOLO")

Synchronizing threads.

"Evaluates body, while holding a lock on a given object."
(synchronized object-to-be-locked
  (exec (do-something ...)
    (do-something-else ...)
    (finish-the-computation ...)))

Calling multiple methods consecutively

(do-to vreq
(HttpServerRequest.setExpectMultipart [true])
(ReadStream.handler
[(object [(Handler Buffer)]
[]
((Handler A) handle [] [(buffer A)] void
(run (do Monad<IO>
[_ (frp;write (Buffer.getBytes [] buffer) body)]
(wrap []))))
)])
(ReadStream.endHandler
[[(object [(Handler Void)]
[]
((Handler A) handle [] [(_ A)] void
(exec (do Monad<Async>
[#let [_ (run (frp;close body))]
response (handler (request$ vreq body))]
(respond! response vreq))
[]))
)]]))

do-to is inspired by Clojure's own doto macro. The difference is that, whereas

Clojure's version pipes the object as the first argument to the method, Lux's pipes it
at the end (which is where method functions take their object values).
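A simpler sketch of do-to, assuming a StringBuilder import along these lines (like Clojure's doto, the whole expression should yield the object being threaded):

```lux
(jvm-import java.lang.StringBuilder
  (new [])
  (append [String] StringBuilder)
  (toString [] String))

## The builder gets piped in as the last argument of each method call.
(do-to (StringBuilder.new [])
  (StringBuilder.append ["Hello, "])
  (StringBuilder.append ["world!"]))
```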

The lux/host module offers much more, but you'll have to discover it yourself by heading
over to the documentation for the Standard Library.

Using Lux from Java


It may be the case that you want to consume Lux code from some Java code-base (perhaps
as part of some mixed-language project).


The current way to do it, is to create a Lux project that exposes classes defined with the
class: macro.

Once compiled, the resulting JAR can be imported to a Java project and consumed just like
any other JAR.

Here is some example code:

(;module:
  lux
  (lux (data text/format)
       host))

(class: MyAPI []
  (#public [] (new) []
    [])
  (#public #static (foo) java.lang.Object
    "LOL")
  (#public #static (bar [input long]) java.lang.String
    (%i input)))

Here is the example project.clj file:

(defproject lux_java_interop "0.1.0"
  :plugins [[com.github.luxlang/lein-luxc "0.5.0"]]
  :lux {:program "api"}
  :source-paths ["source"])

Host platform inter-operation is a feature whose value can hardly be overstated, for there are many important features that could never be implemented without the means provided by it.

We're actually going to explore one such feature in the next chapter, when we abandon our
old notions of sequential program execution and explore the curious world of concurrency.

See you in the next chapter!


Chapter 14: Concurrency


Where you will harness the power of modern computing.

Concurrency is one of the most important subjects in modern programming because of the
pressing need to improve the efficiency of programs; coupled with the troublesome limitation
that increasing the speed of individual CPUs is becoming harder and harder.

The solution, at the moment, is to increase the number of CPUs in the machine to allow
programmers to run their code in parallel.

The problem is that new models of computation which take concurrency into account have
had to be developed to address this need, but nobody knows yet which of the many
alternatives is the right one, or if there is even a right one at all.

Until now, most programmers just thought about their programs in a purely sequential
manner; and, as we explore some of these concurrency models, you'll notice that they try to
restore some of this feeling, while at the same time scheduling work to happen concurrently,
without (almost) any programmer involvement.

Lux takes the approach of providing the means to use multiple concurrency models, since I
couldn't decide on which was the right one.

Some languages (like Erlang and Go) choose to commit themselves to one model or
another, but I'm not confident that the software industry (as a whole) is experienced enough
with concurrency as to declare any one model the winner.

The result: more variety.

And watch out, because the amount of concurrency models may increase with future Lux
releases.

Anyhow, let's quit the chit-chat and dive in!

Promises
This is my favorite one, because it can be used for almost anything, whereas I see the other
modules as more specialized tools for certain use cases.

This model is based on concepts you may be familiar with: futures and promises.


Futures are basically concurrent and asynchronous computations which run and yield a
value, which you can later access.

Promises are more like storage locations (which may be set only once), which you can use
for communication by setting their values in one process, and reading it from another.

Some languages offer both (often used in conjunction), while others only offer one (while
also kind of giving it properties of the other).

I pretty much came to the conclusion that, for all intents and purposes, their similarities were
much greater than their differences.

So, I just fused them.

And so, Lux implements promises in the lux/concurrency/promise module, by means of the
Promise type.

You can run IO computations concurrently using the future function (which returns a
Promise that will contain the result of the computation).

You can also bind functions to Promise values in order to be notified when they have been
resolved (the term for when the value of the Promise is set).

By means of this ability to watch Promise values, it's possible to implement Functor, Applicative and Monad structures for Promise, which is precisely what is done in the standard library.

The result is that, through the do macro, you can implement complex concurrent and
asynchronous computations that look and feel just like synchronous ones.

If you're curious about how that looks, take a peek:

(def: #export (seq left right)
  {#;doc "Sequencing combinator."}
  (All [a b] (-> (Promise a) (Promise b) (Promise [a b])))
  (do Monad<Promise>
    [a left
     b right]
    (wrap [a b])))

Oh, and did I mention there are combinators in that module?

If you didn't know there was some magic going on in the Promise type, you wouldn't have suspected this was concurrent code. It looks just like any other old synchronous code you might have used with any other monad.

Pretty neat, huh?
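As another sketch (the arithmetic is mine; future and Monad<Promise> are the pieces described above):

```lux
## Both computations are kicked off concurrently; the do block then
## waits for both results before combining them.
(def: concurrent-sum
  (Promise Int)
  (do Monad<Promise>
    [x (future (io (i.+ 1 2)))
     y (future (io (i.+ 3 4)))]
    (wrap (i.+ x y))))
```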


Functional Reactive Programming


FRP is based on the idea of values that change over time, and structuring your programs to
dynamically respond to those changes in a reactive way.

The way it's implemented in Lux is through the Chan type in lux/concurrency/frp (itself
implemented on top of Promise ). Chan instances are (potentially infinite) sequences of
values that you can process in various ways as new values come into being. Chan
instances may be closed, but they may also go on forever if you'd like them to.

By the way, "chan" is just short for channel.

The lux/concurrency/frp module offers various functions for processing channels in various ways (some of them generating new channels), and the Chan type also happens to be a monad, so you can write fairly complex and powerful code with it.

Software Transactional Memory


Implemented in the lux/concurrency/stm module.

STM is quite a different beast from the other 2 approaches, in that they address the problem of how to propagate information within the system, while STM deals with how to keep data in one place, where it can be accessed and modified concurrently by multiple processes.

It works by having variables which may be read and written to, but only within transactions,
which could be seen as descriptions of changes to be made to one (or more) variables in an
atomic, consistent and isolated way.

Let's break down those last 3 terms:

Atomic: This just means that if more than 1 change needs to be made in a transaction, either all get done, or none. There is no room for partial results.
Consistent: This just means that transactional computations will take the set of variables they operate on from one valid state to another. This is largely a consequence of transactions being atomic.
Isolated: This means that transactions run in isolation (or, without interference) from one another, thereby ensuring no transaction may see or modify any in-transaction value being computed somewhere else, and they all get the impression that they are the only transaction running at any given time.

For those of you familiar with relational databases, this might remind you of their ACID
properties (with the caveat that Lux STM is non-durable, as it's done entirely in memory).


The way it works is by running multiple transactions concurrently, and then committing their results to the affected variables. If 2 transactions modify any common variables, the first one to commit wins, and the second one is re-calculated to take into account the changes to those variables. This implies that transactions are sensitive to some "version" of the variables they involve, and that is correct. That is the mechanism used to avoid collisions and ensure no inconsistencies ever arise.

The relevant types are Var , which corresponds to the variables, and STM which are
computations which transform transactions in some way and yield results.

Like IO and unlike Promise , just writing STM computations doesn't actually run them, and
you must call the commit function to actually schedule the system to execute them
(receiving a Promise value for the result of the transaction).
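A sketch of what that might look like; commit is the function named above, but the var, read and write combinators (and their argument orders) are assumptions about the module's API:

```lux
## A transactional counter (var, read and write are assumed names).
(def: counter
  (Var Int)
  (var 0))

## Describing a transaction doesn't run it; commit schedules it and
## yields a Promise of its result.
(def: increment!
  (Promise Int)
  (commit (do Monad<STM>
            [value (read counter)
             _ (write (i.inc value) counter)]
            (wrap (i.inc value)))))
```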

You may also follow variables to get channels of their values if you're interested in tracking them.

The Actor Model


Buyer Beware: Of all the concurrency modules, this is the one most likely to change by
the next release, so be careful when you use it.

The actor model is also very different from the other models in that, while they deal with
computations which produce values concurrently (even if they also do other things), the
actor model is all about processes running concurrently and communicating with one
another.

You can't run an actor and just wait for it to finish to get the result. For all you know, it may
never end and just run forever.

Also, interaction with actors is based on message passing, and an actor may consume an
indefinite number of such messages (and send messages to other actors).

The relevant module is the lux/concurrency/actor module, and the relevant type is:

(type: (Actor s m)
{#mailbox (lux/concurrency/stm;Var m)
#kill-signal (lux/concurrency/promise;Promise lux;Unit)
#obituary (lux/concurrency/promise;Promise [(lux;Maybe lux;Text) s (lux;List m)])})

Actors have mailboxes in which they receive their messages.

By following the mailbox vars, the actors can react to all the incoming messages.


It's also possible to kill an actor (although it can also die "naturally" if it encounters a failure
condition during normal execution).

And if it dies, you'll receive its state at the time of death, a list of unconsumed messages from its mailbox and (possibly) a message detailing the cause of death.

Just from this definition, it's easy to see that actors are stateful (a necessity for modeling a
variety of complex behaviors).

To create an actor, you must first create a behavior of type Behavior:

(type: (Behavior s m)
  {#step (-> (Actor s m) m s (lux/concurrency/promise;Promise (lux/data/error;Error s)))
   #end (-> (lux;Maybe lux;Text) s (lux/concurrency/promise;Promise lux;Unit))})

They are pairs of functions to be run on each iteration of the actor, and when it dies (at its
end).

You can then call the spawn function with an initial state and a compatible procedure.

But writing complex actors with multiple options for its messages can be messy with these
tools, so a macro was made to simplify that.

"Allows defining an actor, with a set of methods that can be called on it.
The methods can return asynchronous outputs.
The methods can access the actor's state through the *state* variable.
The methods can also access the actor itself through the *self* variable."
(actor: Adder
  Int

  (method: (add! [offset Int])
    [Int Int]
    (let [new-state (i.+ offset *state*)]
      (wrap (#;Success [new-state [*state* new-state]]))))

  (stop:
    (exec (log! (format "Cause of death: " (default "???" *cause*)))
      (log! (format "Current state: " (%i *state*)))
      (wrap []))))

You can have as many methods as you want, and refer to the state of the actor through the
*state* variable ( *self* being the variable you use to refer to the actor itself).

For every method you define, a function will be defined in your module with the same name,
and taking the same arguments. That function will always take the actor itself as its last
argument, and will return a Promise of the return type.
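To make this concrete, here is a hypothetical usage sketch. The generated add! function is described above, but the exact spelling of the spawning call is an assumption for illustration, not a verbatim API:

```lux
## Hypothetical sketch: spawning an Adder and invoking its add! method.
## `spawn` is an assumed name from lux/concurrency/actor.
(do promise;Monad<Promise>
  [adder (spawn 0 adder-behavior)       ## start with an initial state of 0
   [before after] (add! 5 adder)]       ## before = old state, after = new state
  (wrap []))
```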

You can either die with an #;Error value, or continue on to the next message with a
#;Success containing an updated actor state, and a return value for the method. The type of
the return value must match the type following the method signature.

stop: creates the end method and gives you access to the (possible) cause of death with
the *cause* variable. It expects a (Promise Unit) return value, and the body of stop:
(as well as the other methods) runs implicitly inside a Monad<Promise> do expression.

In this chapter, you have learned how to use the many tools Lux offers to tap into the multi-
processor power of modern computing systems.

But if you think about it, being able to hold onto values or pass them around concurrently is
rather useless unless you have some important and complex data to move around in the first
place; and so far we have only dealt with fairly simple data-structures.

Well, read the next chapter if you want to learn how to take your data to the next level with
the help of persistent data structures.

See you in the next chapter!

Chapter 15: Persistent Data Structures


Where you will learn a new way to organize your data.

So far, you know how to use tuples, variants and records to structure your data.

You're also familiar with lists, and you might even have come up with clever ways to use lists
to implement key-value structures, like property lists.

But the reality is that there are many different ways to organize data, and how you
implement the mechanisms to do so will have an impact on the performance of your
programs.

That is why there are so many different types of data-structures, and so many different
implementations for them.

But not all such implementations fit into the functional paradigm of keeping all your data
immutable, and most implementations of data-structures are actually mutable, and meant for
imperative programming.

Now, let's not be naïve. Everybody can figure out that making a data-structure immutable
and just copying the whole thing every time you want to make a change would make those
data-structures prohibitively expensive to use.

Lucky for us, there is a way to have immutable data-structures that have reasonable
performance. The reason why they are fast enough to be used, is that they are designed to
re-use as many nodes as they can whenever an update needs to be done, in order to avoid
wasteful re-work wherever possible.

Make no mistake, they are still not as fast as their mutable counterparts (which you can still
access by doing host-interop), but they are designed with high-performance in mind, and so
they tend to be fast enough for most use-cases.

Currently, Lux offers 4 varieties of these data-structures.

Vectors
Located in lux/data/struct/vector .

These are similar to lists in that they are sequential data-structures, but there are a few
differences:

1. Whereas lists prepend values to the front, vectors append them to the back.
2. Random access on lists has a complexity of O(N), whereas it's O(log N) for vectors.

Vectors are a great substitute for lists whenever random access is a must, and their design
ensures updates are as cheap as possible.
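As a small hypothetical sketch of what working with vectors looks like (the function names here are assumptions for illustration; check the lux/data/struct/vector documentation for the real API):

```lux
## Build a vector, then read and update by index.
## `vector`, `at` and `put` are assumed names.
(def: my-vector (vector;vector 10 20 30))

(vector;at +1 my-vector)      ## a (Maybe Int): the value at index 1
(vector;put +1 99 my-vector)  ## a new vector; the original is untouched
```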

Queues
Located in lux/data/struct/queue .

Queues are the classic first-in first-out (FIFO) data-structure.

Use them whenever processing order matters, but you need to add to the back (unlike lists,
where order matters but add to the front).
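A hypothetical sketch (the names are assumptions; consult the lux/data/struct/queue documentation):

```lux
## FIFO discipline: the first value pushed is the first value popped.
(def: my-queue
  (|> queue;empty
      (queue;push 1)
      (queue;push 2)))

(queue;peek my-queue)  ## the oldest element, as a Maybe
(queue;pop my-queue)   ## the queue without its oldest element
```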

Dictionaries
Located in lux/data/struct/dict .

This is your standard key-value data-structure.

Known by other names (tables, maps, etc), dictionaries give you efficient access and
updating functionality.

All you need to do is give it a Hash instance (from lux/control/hash ) for your "key" type,
and you're good to go.

There are already instances for Nat , Int , Real and Text in the lux/data/number
and the lux/data/text modules.
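For illustration, a hypothetical sketch of dictionary usage (function names are assumptions; the real API lives in lux/data/struct/dict ):

```lux
## A Text-keyed dictionary, built from the Hash<Text> instance.
(def: ages
  (|> (dict;new text;Hash<Text>)
      (dict;put "Alice" 30)
      (dict;put "Bob" 25)))

(dict;get "Alice" ages)  ## a (Maybe Int) holding 30
(dict;get "Eve" ages)    ## #;None: no such key
```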

Sets
Located in lux/data/struct/set .

This is similar to dictionaries in that a Hash implementation is needed, but instead of it
being a key-value data-structure, it only stores values (and then tells you if any given value
is a member of the set).

This is a useful data-structure for modelling group membership and keeping track of things.
Plus, there are several set-theoretic operations defined in that module.
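Again, a hypothetical sketch (names are assumptions; see lux/data/struct/set for the real functions):

```lux
## Membership testing and a set-theoretic operation.
(def: evens (set;from-list number;Hash<Nat> (list +0 +2 +4)))
(def: odds (set;from-list number;Hash<Nat> (list +1 +3 +5)))

(set;member? evens +2)  ## true
(set;union evens odds)  ## a set holding all six numbers
```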

Persistent Data Structures and Software Transactional Memory
This is a killer combination.

Instead of using mutable data-structures for your changing program data, you can just use
persistent data-structures, with the mutability being delegated to the STM system.

This will make concurrently working with these data-structures a piece of cake, since you
never have to worry about synchronizing/locking anything to avoid simultaneous updating, or
any of the other crazy things programmers have to do to avoid data corruption.
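A hypothetical sketch of the combination, reusing the STM vocabulary from Chapter 14 (the dictionary calls and exact function names are assumptions for illustration):

```lux
## A shared dictionary, updated atomically through the STM system.
(def: shared-ages
  (stm;var (dict;new text;Hash<Text>)))

## Concurrent writers never corrupt the dictionary: each update is a transaction.
(stm;commit (do stm;Monad<STM>
              [ages (stm;read shared-ages)]
              (stm;write (dict;put "Alice" 30 ages) shared-ages)))
```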

Not-So-Persistent Data Structures (Arrays)


Located in lux/data/struct/array .

This is probably not the best place to talk about this, but I don't really know where to place
this information, and there's not enough to say about it to merit its own appendix.

The lux/data/struct/array module features mutable arrays you can use if you need fast
access and mutation and are willing to run the risks involved with using mutable data.

Another possible use is to implement other data-structures (and, as it turns out, dictionaries,
vectors and sets all rely on arrays in their implementations).

Also, it's worth noting that in the JVM, this corresponds to object arrays. If you want
primitive arrays, you should check out the functions for that in lux/host .

I won't go into detail as to the functions available because the documentation for the
standard library enumerates everything in detail.

Go take a look!

Most of what you need to know about using these data-structures is already covered in
detail by the standard library documentation.

These data-structures are very easy to use and offer decent performance, so you're
encouraged to use them to model all your data aggregation code.

The next chapter is going to be slightly different, in that we're going to be learning not how to
write programs, but how to test them.

See you in the next chapter!

Chapter 16: Testing


Where you will learn how to avoid annoying bug reports.

Automated testing is a fundamental aspect of modern software development.

Long gone are the days of manual, ad-hoc testing.

With modern testing tools and frameworks, it's (somewhat) easy to increase the quality of
programs by implementing comprehensive test suites that can cover large percentages of a
program's functionality and behavior.

Lux doesn't lag behind, and includes a testing module as part of its standard library.

The lux/test module contains the machinery you need to write unit-testing suites for your
programs.

Not only that, but the Leiningen plugin for Lux also includes a command for testing, in the
form of lein lux test .

How do you set that up? Let's take a look at the project.clj file for the Lux standard library
itself.

(defproject com.github.luxlang/lux-stdlib "0.5.0"
  :description "Standard library for the Lux programming language."
  :url "https://github.com/LuxLang/stdlib"
  :license {:name "Mozilla Public License (Version 2.0)"
            :url "https://www.mozilla.org/en-US/MPL/2.0/"}
  :plugins [[com.github.luxlang/lein-luxc "0.5.0"]]
  :deploy-repositories [["releases" {:url "https://oss.sonatype.org/service/local/staging/deploy/maven2/"
                                     :creds :gpg}]
                        ["snapshots" {:url "https://oss.sonatype.org/content/repositories/snapshots/"
                                      :creds :gpg}]]
  :pom-addition [:developers [:developer
                              [:name "Eduardo Julian"]
                              [:url "https://github.com/eduardoejp"]]]
  :repositories [["snapshots" "https://oss.sonatype.org/content/repositories/snapshots/"]
                 ["releases" "https://oss.sonatype.org/service/local/staging/deploy/maven2/"]]
  :source-paths ["source"]
  :test-paths ["test"]
  :lux {:tests "tests"}
  )

The :tests parameter is similar to the :program parameter in that it specifies the name of
a module file (this time, inside the test directory).

Here are the contents of the file:

(;module:
  lux
  (lux (control monad)
       (codata [io])
       (concurrency [promise])
       [cli #+ program:]
       [test])
  (test lux
        (lux ["_;" cli]
             ["_;" host]
             ["_;" pipe]
             ["_;" lexer]
             ["_;" regex]
             (codata ["_;" io]
                     ["_;" env]
                     ["_;" state]
                     ["_;" cont]
                     (struct ["_;" stream]))
             (concurrency ["_;" actor]
                          ["_;" atom]
                          ["_;" frp]
                          ["_;" promise]
                          ["_;" stm])
             (control [effect])
             (data [bit]
                   [bool]
                   [char]
                   [error]
                   [ident]
                   [identity]
                   [log]
                   [maybe]
                   [number]
                   [product]
                   [sum]
                   [text]
                   (error [exception])
                   (format [json])
                   (struct [array]
                           [dict]
                           [list]
                           [queue]
                           [set]
                           [stack]
                           [tree]
                           ## [vector]
                           [zipper])
                   (text [format]))
             ["_;" math]
             (math ["_;" ratio]
                   ["_;" complex]
                   ## ["_;" random]
                   ["_;" simple])
             ## ["_;" macro]
             (macro ["_;" ast]
                    ["_;" syntax]
                    (poly ["poly_;" eq]
                          ["poly_;" text-encoder]
                          ["poly_;" functor]))
             ["_;" type]
             (type ["_;" check]
                   ["_;" auto]))))

## [Program]
(program: args
  (test;run))

This looks very weird.

There's almost nothing going on here, yet this is the most important file in the whole test
suite (this is where everything comes together and the tests are run).

But where do those tests come from? Nothing is being defined here.

Well, the run macro from lux/test pulls in all the tests from the imported modules to run
them later once the program starts.

To know how tests work, let's take a look at one of those modules.

From test/lux/concurrency/promise .

(;module:
  lux
  (lux (control monad)
       (data [number]
             text/format
             [error #- fail])
       (concurrency ["&" promise])
       (codata function
               [io #- run])
       ["R" random]
       pipe)
  lux/test)

(test: "Promises"
  ($_ seq
      (do &;Monad<Promise>
        [running? (&;future (io true))]
        (assert "Can run IO actions in separate threads."
                running?))

      (do &;Monad<Promise>
        [_ (&;wait +500)]
        (assert "Can wait for a specified amount of time."
                true))

      (do &;Monad<Promise>
        [[left right] (&;seq (&;future (io true))
                             (&;future (io false)))]
        (assert "Can combine promises sequentially."
                (and left (not right))))

      (do &;Monad<Promise>
        [?left (&;alt (&;delay +100 true)
                      (&;delay +200 false))
         ?right (&;alt (&;delay +200 true)
                       (&;delay +100 false))]
        (assert "Can combine promises alternatively."
                (case [?left ?right]
                  [(#;Left true) (#;Right false)]
                  true

                  _
                  false)))

      (do &;Monad<Promise>
        [?left (&;either (&;delay +100 true)
                         (&;delay +200 false))
         ?right (&;either (&;delay +200 true)
                          (&;delay +100 false))]
        (assert "Can combine promises alternatively [Part 2]."
                (and ?left (not ?right))))

      (assert "Can poll a promise for its value."
              (and (|> (&;poll (:: &;Monad<Promise> wrap true))
                       (case> (#;Some true) true _ false))
                   (|> (&;poll (&;delay +200 true))
                       (case> #;None true _ false))))

      (assert "Can't re-resolve a resolved promise."
              (and (not (io;run (&;resolve false (:: &;Monad<Promise> wrap true))))
                   (io;run (&;resolve true (: (&;Promise Bool) (&;promise))))))

      (do &;Monad<Promise>
        [?none (&;time-out +100 (&;delay +200 true))
         ?some (&;time-out +200 (&;delay +100 true))]
        (assert "Can establish maximum waiting times for promises to be fulfilled."
                (case [?none ?some]
                  [#;None (#;Some true)]
                  true

                  _
                  false)))
      ))

You define a test using the test: macro, which just takes a description instead of a name.

The seq function (from lux/test ) allows you to combine tests sequentially so that one
runs after the other, and if one fails, everything fails.

Also, all tests are assumed to be asynchronous and produce promises, which is why
you see so much monadic code in the promise monad.

The assert function checks if a condition is true, and raises the given message as an error
otherwise.

Oh, and there's also a type for tests:

(type: #export Test
  {#;doc "Tests are asynchronous process which may fail."}
  (Promise (Error Unit)))

There is also a more advanced way to write tests called property-based testing.

The idea is this: unit tests could also be called "testing by example", because you pick the
particular inputs and outputs that you want to test your code against.

The problem is that you might miss some cases which could cause your tests to fail, or you
might introduce some bias when picking cases, such that faulty code could still pass the
tests.

With property-based testing, the inputs to your tests are generated randomly, and you're
then forced to check that the invariants you expect are still kept.

The testing framework can then produce hundreds of variations, exposing your code to far
more rigorous testing than unit tests could provide.

Lux already comes with great support for this, by producing random values thanks to the
lux/random module, and integrating them in the tests with the help of the test: macro.

For example:

## From test/lux in the standard library's test-suite.
(;module:
  lux
  lux/test
  (lux (control monad)
       (codata [io])
       [math]
       ["R" random]
       (data [text "T/" Eq<Text>]
             text/format)
       [compiler]
       (macro ["s" syntax #+ syntax:])))

(test: "Value identity."
  [size (|> R;nat (:: @ map (|>. (n.% +100) (n.max +10))))
   x (R;text size)
   y (R;text size)]
  ($_ seq
      (assert "Every value is identical to itself, and the 'id' function doesn't change values in any way."
              (and (is x x)
                   (is x (id x))))

      (assert "Values created separately can't be identical."
              (not (is x y)))
      ))

By just adding bindings to your test, you're immediately working on the Random monad and
can use any of the generators and combinators offered by the lux/random module.

By default, your test will be run 100 times, but you can configure how many times you want
with the #times option:

(test: "Some test..."
  #times +12345
  [... ...]
  ...
  )

And if your test fails, you'll be shown the seed that the random number generator used to
produce the failing inputs.

If you want to re-create the conditions that led to the failure, just use the #seed option:

(test: "Some test..."
  #seed +67890
  [... ...]
  ...
  )

If you want to learn more about how to write tests, feel free to check out the test-suite for the
Lux standard library. It's very comprehensive and filled with good examples. You can find it
here: https://github.com/LuxLang/lux/tree/master/stdlib/test.

Without tests, the reliability of programs becomes a matter of faith, not engineering.

Automated tests can be integrated into processes of continuous delivery and integration to
increase the confidence of individuals and teams that real value is being delivered, and that
the customer won't be dissatisfied by buggy software.

Now that you know how to test your programs, you know everything you need to know to be
a Lux programmer.

Conclusion
This may be a short book, but I hope it did its job of showing you the power that Lux has got
to offer.

The beauty of a programming language is that it is a tool for the mind.

It gives you the structure and order you need to turn your thoughts into logical machines that
consume and transform and move around information on computers.

But, the structure a programming language provides is not constraining, but enabling.

It gives you the power to perform amazingly complex tasks with relatively little effort.

My mission with Lux has been (and continues to be) to create a language that maximizes
the effectiveness of programmers, by making it as easy as possible to achieve great levels
of complexity, without getting buried by it.

Lux is still in its adolescence.

What you have learned is Lux version 0.5.0.

In future releases, much more power will be added to the language, more platforms will be
within reach of Lux programmers, and better performance will be achieved, with little to no
effort on the side of programmers.

The future of Lux is bright, and this is just the beginning of an amazing adventure for you,
my dear reader.

As a parting gift, I wish to show you this repository: https://github.com/LuxLang/tutorial1. It
contains a Lux program that will compile and run properly for version 0.5.0.

The program is a simple TODO-list web application.

When you run it, head over to http://localhost:8080/, and you will find a simple form.

Play around with it; and when you're done, head over to the source code and figure out how
it works.

I was tempted to write a chapter explaining how it works, but I felt that it would be more fun
for you to figure it out by reading the source code and maybe playing with it by changing a
few things and maybe adding your own features.

It's a small program and it's decently documented, so it shouldn't be a difficult task, and
you'll be able to see a simple Lux program in action.

Have fun with Lux, and don't forget to say hi on the Gitter channel and the Google group!

Appendix A: Import Syntax


You've already seen some import syntax, but now you'll see all the options available.

If you recall Chapter 1, there was this example code:

(;module:
lux
(lux (codata io)
[cli #+ program:]))

Here, we can see flat imports, nested imports and more specialized import syntax.

Flat is simple: you just mention the name of a module and all of its exported definitions will
be locally imported in your module.

This may cause some issues if you import 2 definitions with the same name from different
modules, or if you get a definition from one module, but then write your own definition with
the same name in your code.

In those circumstances, the compiler will complain, saying that you can't re-define X; where
X is the name of the definition.

It might be harder to figure out what you're re-defining, so you might want to avoid flat
imports outside of the lux module to avoid potential clashes; or get acquainted with the
Lux documentation to know what is defined where.

Also, nested imports can be done in more than one way.

We've already seen (lux (codata io)) , but that can also be done like (lux codata/io) , or
like (lux/codata io) , or even like lux/codata/io .

In all cases, you would get a flat import.

Oh, and you can also use these variations when dealing with the specialized imports.

And speaking of those, lets see what other options they give us.

(lux [cli #+ program:])

This one gives lux/cli the shorter cli alias, plus it locally imports the program: macro.

Everything else inside that module would remain hidden.

(lux [cli #- program:])

That one would do the opposite.

Everything inside lux/cli but program: would be locally imported.

(lux [cli #*])

This third alternative would locally import everything (just like a flat import), while also giving
lux/cli the shorter cli alias.

Oh, and #+ and #- can take more than one parameter. For example:

(lux [cli #+ program: run Monad<CLI>])

Now that we've mentioned a monad, I should also point out that you can open structures
inside your imported modules right there on the import syntax, and you can even use
different prefixes for different structures.

(lux [cli #+ program: run "CLI/" Monad<CLI> "" Functor<CLI>])

Here, I'm opening 2 structures (with different prefixes).

This import will give me access to the CLI/wrap and map functions, coming from
Monad<CLI> and Functor<CLI> respectively.

And, just like for #+ and #- , you can provide multiple parameters after the prefix texts.

You may be wondering if you can combine #+ and #- the same way as the prefixes,
but you can only pick one of #+ , #- or #* , and (if present) it must always show up
before the opening prefixes.

You can also have custom aliases with the specialized syntax by giving a text format before
the name of the module.

(lux ["my-cli" cli])

In this example, the alias will be my-cli , so my-cli;program: will access the program:
macro.

Alternatively:

(lux ["my-;" cli])

By using the semicolon, I can refer to the name of the module, thereby accomplishing the
same goal.

Finally, I would like to point out that absolute addressing of module paths isn't the only
option.

Lux also supports relative addressing by using . or .. at the start of a module path.

For example, if you have the following directory structure:

foo
  bar
  baz
    quux

Where each name may refer to a .lux file and (potentially) to a directory housing other
.lux files; if you're located in the baz module, you could have the following import
statement:

(;import lux
(.. [bar])
(. [quux]))

Thereby accessing both your sibling module, and your child module.

The quux module could also have the following import statement:

(;import lux
["dad" ..])

So, as you can see, relative paths are well supported within Lux import syntax.

However, you must remember that you may only use the relative path syntax at the
beginning of a module path, and at no point afterwards.

There is an older syntax for specialized imports that is still available, but I won't explain it
here since it does pretty much the same as this new syntax, but with higher verbosity.

You may encounter it if you read the source code of the standard library some day.

Appendix B: Math in Lux


Math in Lux is a bit different from what you might be used to in other languages.

For starters, Lux is a lisp, which means that it uses prefix syntax for everything.

That means familiar operations such as 3 + 5 or 8 = 8 get written as (+ 3 5) and (= 8 8) .

There's also the issue of different operators for different types.

Whereas other programming languages often overload the math operators + , - , > , etc.
for all numeric (and some non-numeric) types, Lux actually offers different versions for
different types.

The Nat operators are as follows: n.+ , n.- , n.* , n./ , n.% , n.= , n.< , n.<= ,
n.> , n.>= .

The Int operators are as follows: i.+ , i.- , i.* , i./ , i.% , i.= , i.< , i.<= ,
i.> , i.>= .

The Real operators are as follows: r.+ , r.- , r.* , r./ , r.% , r.= , r.< , r.<= ,
r.> , r.>= .

The Frac operators are as follows: f.+ , f.- , f.* , f./ , f.% , f.= , f.< , f.<= ,
f.> , f.>= .

The differences may look trivial, but since the numeric types are treated differently in Lux,
you must be aware of which function-set you're using when working with your data.
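For instance (a small sketch; recall that Nat literals carry a leading + sign):

```lux
(n.+ +1 +2)    ## Nat addition
(i.+ 1 2)      ## Int addition
(r.+ 1.0 2.0)  ## Real addition

(n.= +3 (n.+ +1 +2))  ## true: comparison also has per-type variants
```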

However, this is not the biggest difference in Lux's math operators in comparison to other
languages. The difference that takes the crown is the ordering of the arguments.

What do I mean?

In most languages you'd write 4 - 7 .

In other lisps you'd write (- 4 7) .

But in Lux, you'd write (- 7 4) .

What is going on!? This is so bizarre!

OK, OK. Chill out.

What's going on is that in functional programming, there is this convention of putting the
most significant argument to a function as the last one. In the case of math functions, this
would be the argument on which you're operating. I call it "the subject" of the function.

In the case of the subtraction operation, it would be the 4, since you're subtracting 7 from it.

In most lisps, the order of the arguments is such that the subject is the first argument; but
not so in Lux.

Now, you may be wondering: what could possibly be the benefit of making this bizarre
change?

Piping. Piping convinced me to make this change.

You see; this might look very impractical to those accustomed to the old way, but when
you're writing complex calculations with many levels of nesting, being able to pipe your
operations helps a lot, and this style of doing math lends itself perfectly for it.

Consider this:

(|> x (i./ scale) (pow 3) (i.- shift))

If I was using the traditional way of doing math, I wouldn't be able to pipe it, and it would look
like this:

(i.- (pow (i./ x scale)
          3)
     shift)

pow is the power function, located in lux/math .

You can complain all you want about me breaking with tradition, but that just looks ugly.

So, I'm just going to say it: I broke with tradition because tradition wasn't helping, and I'm not
one to just comply because I'm supposed to.

However, I'm not without a heart; and I know that a lot of people would prefer to have a
more... traditional way of doing math.

So, for you guys, I've introduced a special macro in the lux/math module.

It's called infix , and it allows you to do infix math, with nested expressions.

Here's an example:

(infix [[3 pow 2] i.+ [5 i.* 8]])

So, that corresponds to 3^2 + 5*8 .

Note that infix doesn't enforce any grouping rules, since you can actually use it with
arbitrary functions that you import or define (even with partially applied functions).

The rule is simple, the argument to the left of the operator will be taken first, and then the
argument to the right.

So [3 pow 2] becomes (pow 2 3) , and [5 i.* 8] becomes (i.* 8 5) . Thus, the infix
syntax is transformed into Lux's prefix variation.

Also, having to use a prefix every time you want to use a math operator can quickly get
tedious; especially if you're writing a lot of math-intensive code.

To make things easier, there is a module named lux/math/simple , which introduces
unprefixed macros for all the common numeric operations.

Those macros analyse their inputs' types in order to automatically choose for you the correct
functions in each case.

That way, you get the benefit of having polymorphic math operators, through the magic of
meta-programming.

Oh, and you can definitely use those macros in conjunction with the infix macro!

So, check lux/math/simple in the documentation for the Standard Library and have fun with
your mathy code.

I know that Lux's way of doing math is a bit... foreign; but I made that change to ensure math
fit the rest of the language perfectly.

Hopefully you'll come to see that getting used to the new way is a piece of cake and has its
advantages as soon as you write complex calculations.

Appendix C: Pattern-Matching Macros


Pattern-matching is a native Lux feature, and yet case is a macro.

Why, you may wonder. What does being a macro add to the mix?

Well, as it turns out, by making case be a macro, Lux can perform some compile-time
calculations which ultimately enable a set of really cool features to be implemented: custom
pattern-matching.

Most languages with pattern-matching have a fixed set of rules and patterns for how
everything works. Not so with Lux.

Lux provides a set of default mechanisms, but by using macros where patterns are located,
case can expand those macro calls to get the myriad benefits they offer.

But enough chit-chat. Let's see them in action.

PM-Macros in the Standard Library

(case (list 1 2 3)
  (^ (list x y z))
  (#;Some (+ x (* y z)))

  _
  #;None)

You may remember how annoying it was to pattern-match against lists in the Chapter 5
example.

Well, by using the ^ PM-macro, you can use any normal macros you want inside the
pattern to profit from their code-construction capacities.

## Multi-level pattern matching.
## Useful in situations where the result of a branch depends on further refinements on the values being matched.
## For example:
(case (split (size static) uri)
  (^=> (#;Some [chunk uri']) [(Text/= static chunk) true])
  (match-uri endpoint? parts' uri')

  _
  (#;Left (format "Static part " (%t static) " doesn't match URI: " uri)))

## Short-cuts can be taken when using boolean tests.
## The example above can be rewritten as...
(case (split (size static) uri)
  (^=> (#;Some [chunk uri']) (Text/= static chunk))
  (match-uri endpoint? parts' uri')

  _
  (#;Left (format "Static part " (%t static) " doesn't match URI: " uri)))

I love ^=> .

It's one of those features you don't need often, but when you do, it saves the day.

The possibilities are endless when it comes to the refinement you can do, and when you
consider what you'd have to do to get the same results without it, it's easy to see how much
code it saves you.

## Allows you to simultaneously bind and de-structure a value.
(def: (hash (^@ set [Hash<a> _]))
  (List/fold (lambda [elem acc] (n.+ (:: Hash<a> hash elem) acc))
             +0
             (to-list set)))

^@ is for when you want to deal with a value both as a whole and in parts, but you want to
save yourself the hassle of this 2-part operation.

## Same as the "open" macro, but meant to be used as a pattern-matching macro for generating local bindings.
## Can optionally take a "prefix" text for the generated local bindings.
(def: #export (range (^open) from to)
  (All [a] (-> (Enum a) a a (List a)))
  (range' <= succ from to))

^open allows you to open structures using local variables during pattern-matching.

It's excellent when taking structures as function arguments, or when opening structures
locally in let expressions.

## Or-patterns.
(type: Weekday
  #Monday
  #Tuesday
  #Wednesday
  #Thursday
  #Friday
  #Saturday
  #Sunday)

(def: (weekend? day)
  (-> Weekday Bool)
  (case day
    (^or #Saturday #Sunday)
    true

    _
    false))

^or patterns allow you to have multiple patterns with the same branch.

It's a real time-saver.

## It's similar to do-template, but meant to be used during pattern-matching.
(def: (beta-reduce env type)
  (-> (List Type) Type Type)
  (case type
    (#;HostT name params)
    (#;HostT name (List/map (beta-reduce env) params))

    (^template [<tag>]
      (<tag> left right)
      (<tag> (beta-reduce env left) (beta-reduce env right)))
    ([#;SumT] [#;ProdT])

    (^template [<tag>]
      (<tag> left right)
      (<tag> (beta-reduce env left) (beta-reduce env right)))
    ([#;LambdaT]
     [#;AppT])

    (^template [<tag>]
      (<tag> old-env def)
      (case old-env
        #;Nil
        (<tag> env def)

        _
        type))
    ([#;UnivQ]
     [#;ExQ])

    (#;BoundT idx)
    (default type (list;at idx env))

    (#;NamedT name type)
    (beta-reduce env type)

    _
    type
    ))

^template is ^or 's big brother.

Whereas ^or demands that you provide the exact patterns you want (and with a single
branch body), ^template lets you provide templates for both the patterns and bodies, and
then fills the blanks with all the given parameters.

You can save yourself quite a lot of typing (and debugging) by reusing a lot of pattern-
matching code with this macro.

It's a great asset!

## Allows you to extract record members as local variables with the same names.
## For example:
(let [(^slots [#foo #bar #baz]) quux]
  (f foo bar baz))

^slots is great for working with records, as it allows you to create local variables with the
names of the tags you specify.

Now you can work with record member values with ease.

## Allows destructuring of streams in pattern-matching expressions.
## Caveat emptor: Only use it for destructuring, and not for testing values within the streams.
(let [(^stream& x y z _tail) (some-stream-func 1 2 3)]
  (func x y z))

^stream& hails from the lux/codata/struct/stream module, and it's quite special, because it
allows you to de-structure something you normally wouldn't be able to: functions.

You see, Lux streams (as defined in lux/codata/struct/stream ) are implemented using
functions.

The reason is that they are infinite in scope, and having an infinite data-structure would be...
well... impossible (unless you manage to steal a computer with infinite RAM from space
aliens).

Well, no biggie!

^stream& does some black magic to make sure you can de-structure your stream just like
any other data-structure.

How to Make your Own


The technique is very simple.

Just create a normal macro that will take as a first argument a form containing all the
"parameters" to your pattern-matching macro.

Its second argument will be the body associated with the pattern.

After that, you'll receive all the branches (pattern + body) following the call to PM-macro.

You can process all your inputs as you wish, but in the end you must produce an even
number of outputs (even, because the outputs must take the form of pattern+body pairs).


These will be further macro-expanded until all macros have been dealt with and only
primitive patterns remain, so case can just go ahead and do normal pattern-matching.

You may wonder: why do I receive the body?

The answer is simple: depending on your macro, you may be trying to provide some advanced functionality that requires altering (or replicating) the code of the body.

For example, ^or copies the body for every alternative pattern you give it, and ^template
must both copy and customize the bodies (and patterns) according to the parameters you
give it.

But receiving the next set of branches takes the cake for the weirdest input.

What gives?

It's simple: some macros are so advanced that they require altering not just their bodies, but
anything that comes later.

A great example of that is the ^=> macro (which is actually the reason those inputs are
given to PM-macros in the first place).

^=> performs some large-scale transformations on your code which require getting access to the rest of the code after a given usage of ^=>.

However, most of the time, you'll just return the branches (and sometimes the body)
unchanged.

To make things easier to understand, here is the implementation of the ^ macro, from the
lux module:

(macro: #export (^ tokens)
  (case tokens
    (#Cons [_ (#FormS (#Cons pattern #Nil))] (#Cons body branches))
    (do Monad<Lux>
      [pattern+ (macro-expand-all pattern)]
      (case pattern+
        (#Cons pattern' #Nil)
        (wrap (list& pattern' body branches))

        _
        (fail "^ can only expand to 1 pattern.")))

    _
    (fail "Wrong syntax for ^ macro")))


The ^ prefix given to PM-macros was chosen simply to make them stand out when you see them in code.

There is nothing special attached to that particular character.


Appendix D: The Art of Piping


I'm a big fan of piping.

No kidding.

Piping is my favorite way of writing code.

It's almost like a game to me.

I try to figure out ways to get my code to be more pipe-sensitive, to see how far I can get
while piping my code.

My personal record is 14 steps.

Anyhow, after looking at some of the innovations in Clojure in the piping department, I decided to come up with my own tricks to try to get Lux to become a piping superpower.

I added the lux/pipe module, which contains several macros meant to be used within the |> macro, and which extend it with awesome capabilities.

Take a look at these babies:

Piping Macros in the Standard Library

## Loops for pipes.
## Both the testing and calculating steps are pipes and must be given inside tuples.
(|> 1
    (!> [(i.< 10)]
        [i.inc]))

!> takes a test tuple and a body tuple.

The reason is that each of those tuples represents the steps on an implicit piping macro (oh,
yeah!).

So [(i.< 10)] is like (|> value (i.< 10)) , and [i.inc] is like (|> value i.inc) .

Which value? Whatever has been piped into !> from the underlying |> call (in this case,
the value 1 ).


## Branching for pipes.
## Both the tests and the bodies are piped-code, and must be given inside a tuple.
## If a last else-pipe isn't given, the piped-argument will be used instead.
(|> 5
    (?> [i.even?] [(i.* 2)]
        [i.odd?] [(i.* 3)]
        [(_> -1)]))

We have looping, and now we have branching; with a cond -inspired piping macro
(complete with else branch, just in case).

But what's that thing over there? That _> thing?

Well, it's another piping macro. Of course!

## Ignores the piped argument, and begins a new pipe.
(|> 20
    (i.* 3)
    (i.+ 4)
    (_> 0 i.inc))

_> establishes a new piping sequence that ignores any previous one.

Useful in certain kinds of situations.

## Gives the name '@' to the piped-argument, within the given expression.
(|> 5
    (@> (+ @ @)))

@> binds the current value piped into it so you can refer to it multiple times within its body.

Pretty nifty, huh?


## Pattern-matching for pipes.
## The bodies of each branch are NOT pipes; just regular values.
(|> 5
    (case> 0 "zero"
           1 "one"
           2 "two"
           3 "three"
           4 "four"
           5 "five"
           6 "six"
           7 "seven"
           8 "eight"
           9 "nine"
           _ "???"))

Yeah, that's right!

I just couldn't resist rolling full-blown pattern-matching into this.

You'll thank me later.

## Monadic pipes.
## Each step in the monadic computation is a pipe and must be given inside a tuple.
(|> 5
    (%> Monad<Identity>
        [(i.* 3)]
        [(i.+ 4)]
        [i.inc]))

And just to show you I'm serious, I did the unthinkable.

Piped macro expressions!

How to Make your Own Piping Macros


They're easier to make than pattern-matching macros.

All you need is a macro that takes anything you want as parameters, but always takes as its
last argument the computation so far, as it has been constructed by the |> macro prior to
the call to your piping macro.

As an example, here's the definition for @> :


(syntax: #export (@> [body s;any]
                     prev)
  (wrap (list (` (let% [(~' @) (~ prev)]
                   (~ body))))))

All this looks like madness, but I just couldn't contain myself.

Piping is one of the few ways of writing code that just amuses me when I do it.

These macros can keep you in the flow while you're writing complex code, so you don't have
to switch so much between piping-code and non-piping-code.

Oh... and did I mention the |>. macro that generates for you a single-argument function that will immediately pipe its argument through all the steps you give it?

(filter (|>. (member? -defs) not)
        *defs)

Yeah. This is real!


Appendix E: Lux Implementation Details


If you've read Chapter 6, you'll have encountered Lux's funny way of encoding tuples, variants and functions.

You may be wondering: how can this possibly have good performance?

And: what benefit can this possibly have?

I'll tackle those questions one at a time.

How can this possibly have good performance?


First, let me explain how things get compiled down in the JVM.

Tuples are compiled as object arrays. That means an n-tuple is (roughly) an n-array.

The reason why I say "roughly" will be explained shortly.

Variants, on the other hand, are 3-arrays.

The first element is the int value of its associated tag.

The second element is a kind of boolean flag used internally by the Lux run-time
infrastructure.

The third element contains the variant's value.

Finally, functions produce custom classes, and function values are just objects of those
classes.

These classes contain everything the function needs: its compiled code, its environment/closure, and any partially-applied arguments it may have.

How, then, can all of this be made efficient?

Does applying a function f to arguments a , b and c create intermediate function values because you can only apply it one argument at a time?

Do tuples consume a lot of memory because everything gets nested?

Not really.

With regards to tuples, remember what I said: an n-tuple is (roughly) an n-array.


If you write [true 12 34.56 #"7" "eight"] , Lux will actually compile it down as a 5-array,
instead of a series of nested 2-arrays.

However, if you have a variable foo which contains the last two arguments, and you build
your tuple like [true 12 34.56 foo] , Lux will compile it as a 4-array, with the last element
pointing to the [#"7" "eight"] sub-tuple.

But, as I said in Chapter 6, Lux treats both the same.

How does that work?

Well, Lux knows how to work with both flat and nested tuples and it can do so efficiently; so
ultimately it doesn't matter. It will all be transparent to you.

When it comes to variants, the situation is similar in some ways, but different in others.

Regardless, Lux also knows how to work with the different cases efficiently (which is
important for pattern-matching, not just for variant/tuple construction).

Finally, we have to consider functions.

Merging nested functions into a single one that can work like all the nested versions turns
out to be pretty easy.

Just allocate enough space for all the (potentially) partially-applied arguments, plus space
for the environment/closure.

If you invoke the function with all the arguments, you just run it.

If you invoke it with less than needed, you just use the space you have to store the partial
arguments and generate a single new instance with the extra data (instead of generating a
new function object for every argument you apply).

And if you're invoking a partially applied function, then you run it with the partial arguments
and the new arguments.

Piece of cake.
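As a rough sketch of what this means in practice (the function and its uses here are illustrative examples, not taken from the standard library):

```
## A 3-argument function; like all Lux functions, it is
## conceptually a nesting of single-argument functions.
(def: (add3 x y z)
  (-> Int Int Int Int)
  (i.+ x (i.+ y z)))

## Applying only one argument does not allocate a chain of nested
## closures; it produces a single partially-applied function value
## that stores the argument until the rest arrive.
(def: add-five
  (-> Int Int Int)
  (add3 5))

## Supplying the remaining arguments runs the compiled code directly.
(add-five 10 20) ## => 35
```

One function object, one argument-storage array, no intermediate allocations per argument.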

What benefit can this possibly have?

I already explained in Chapter 6 how the nested nature of Lux functions enables partial
application (a useful day-to-day feature that saves you from writing a lot of boilerplate).

What about tuples and variants?

Well, the cool thing is that this makes your data-structures composable, a property that
enables you to implement many really cool features.


One that I really like and has turned out to be very useful to me, is that you can use
combinators for various data-types that produce single bits of data, and you can fuse them
to generate composite data-types, with minimal plumbing.

You can see combinators as functions that allow you to provide an extra layer of
functionality on top of other components, or that allow you to fuse components to get
more complex ones.

Here are some examples from the lux/host module, where I have some types and syntax-
parsers for the many macros implemented there:

(type: Privacy-Modifier
  #PublicPM
  #PrivatePM
  #ProtectedPM
  #DefaultPM)

(def: privacy-modifier^
  (Syntax Privacy-Modifier)
  (let [(^open) Monad<Syntax>]
    ($_ s;alt
        (s;tag! ["" "public"])
        (s;tag! ["" "private"])
        (s;tag! ["" "protected"])
        (wrap []))))

Here, I have a variant type, and I'm creating a syntax-parser that produces instances of it by
simply combining smaller parsers (that just produce unit values, if they succeed) through the
s;alt combinator.

These syntax-parsers and combinators are defined in the lux/meta/syntax module.

s;alt is a combinator for generating variant types.

Its tuple counterpart is called s;seq (also, remember that records are tuples, so you'd use the same function).

This wouldn't be possible if variant types weren't nested/composable, forcing me to write custom ad-hoc code instead of taking advantage of common, reusable infrastructure.

Here's an example of s;seq in action:


(type: Arg-Decl
  {#arg-name Text
   #arg-type Generic-Type})

(def: (arg-decl^ imports type-vars)
  (-> Class-Imports (List Type-Param) (Syntax Arg-Decl))
  (s;form (s;seq s;local-symbol (generic-type^ imports type-vars))))

The cool thing is that these combinators show up not just in syntax parsers, but also in
command-line argument parsing, lexing, concurrency operations, error-handling and in many
other contexts.

The nested/composable semantics of Lux entities provide a flexibility that enables powerful
features (such as this) to be built on top.


Appendix F: Structure Auto-Selection


If you've used Lux structures already (with the :: macro), you've probably noticed that you
need to use and pass around the specific structures you need every time you want to call
some signature's function, or access some constant value.

That can become tiresome if you need to do it all the time, and especially if you come from languages that do method-selection for you automatically.

Object-oriented languages do polymorphism in an easy way, because they link objects to the
method table of their associated classes, and when you call a method on an object, the run-
time system can figure out where the code that needs to be run lies within the program's
memory.

Languages with type-classes, such as Haskell, perform that look-up at compile-time, by using the type-information present in the compilation context to figure out which implementation (or instance) of a type-class is suitable to each particular circumstance.

Lux, on the other hand, forces you to be specific with the structures that you're going to use.

While that gives you a level of power and flexibility you wouldn't otherwise have in other
languages, it also introduces the problem that when what you want doesn't warrant that level
of power, you have to pay the tax it involves nonetheless.

But, that sounds like a raw deal.

Why do you have to pay for something you're not taking advantage of?

Clearly, there is an asymmetry here.

There is a feature that is most useful in the few instances when you want full power. At any
other point, it's a hindrance.

Well... there is an alternative.

The Lux Standard Library includes a module called lux/type/auto , which provides a macro
called ::: , that serves as an easier-to-use alternative to the :: macro.

Instead of requiring the structure you want to use, it only requires the name of the function you want to call, and the arguments.

Then, at compile-time, it does some type-checking and some look-ups and selects a
structure for you that will satisfy those requirements.


That structure can come from the local-var environment, from the definitions in your own
module, or even from the exported definitions of the modules you're importing.

That way, you can use :: whenever you need precision and power, and use :::
whenever you're doing more lightweight programming.

Fantastic!

This is how you'd use it:

## Equality for nats
(:: number;Eq<Nat> = x y)
## vs
(::: = x y)

## Equality for lists of nats
(:: (list;Eq<List> number;Eq<Nat>) =
    (list;n.range +0 +9)
    (list;n.range +0 +9))
## vs
(::: =
     (list;n.range +0 +9)
     (list;n.range +0 +9))

## Doing functor mapping
(:: list;Functor<List> map n.inc (list;n.range +0 +9))
## vs
(::: map n.inc (list;n.range +0 +9))

Thanks to structure auto-selection, you don't have to choose between power and ease of
use.

Just do a flat-import of the lux/type/auto module, and you'll have the ::: macro available and ready for action.


Appendix G: Lexers and Regular Expressions
Working with text is a pretty common and fundamental thing in day-to-day programming.

Lux's approach to doing it is with the use of composable, monadic lexers.

The idea is that a lexer is a function that takes some text input, performs some calculations
which consume that input, and then returns some value, and the remaining, unconsumed
input.

Of course, the lexer may fail, in which case the user should receive some meaningful error
message to figure out what happened.

The lux/lexer library provides a type, and a host of combinators, for building and working
with lexers.

Check it out:

(type: #export (Lexer a)
  (-> Text (Error [Text a])))
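Because Lexer forms a monad, small lexers compose into larger ones with do-notation. The sketch below shows only the shape of such composition; digits^ and dash^ are assumed to be sub-lexers you have already built from the library's combinators, not actual names from lux/lexer:

```
## A composite lexer that consumes "<digits>-<digits>" and
## returns both runs of digits as a pair.
(def: digits-pair^
  (Lexer [Text Text])
  (do Monad<Lexer>
    [left digits^   ## consume a first run of digits
     _ dash^        ## consume the literal "-" separator
     right digits^] ## consume a second run of digits
    (wrap [left right])))
```

Each step consumes part of the input and hands the rest to the next step; if any step fails, the whole lexer fails with that step's error.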

A good example of lexers being used is the lux/data/format/json module, which implements full JSON serialization, and which uses lexers when working with text inputs to the JSON parser.

However, programmers coming from other programming languages may be familiar with a different approach to text processing that has been popular for a number of years now: regular expressions.

Regular expressions offer a short syntax for building lexers that is great for writing quick text-processing tools.

Lux also offers support for this style in its lux/regex module, which offers the regex
macro.

The regex macro, in turn, compiles the given syntax into a lexer, which means you can
combine both approaches, for maximum flexibility.

Here are some samples for regular expressions:


## Literals
(regex "a")

## Wildcards
(regex ".")

## Escaping
(regex "\\.")

## Character classes
(regex "\\d")

(regex "\\p{Lower}")

(regex "[abc]")

(regex "[a-z]")

(regex "[a-zA-Z]")

(regex "[a-z&&[def]]")

## Negation
(regex "[^abc]")

(regex "[^a-z]")

(regex "[^a-zA-Z]")

(regex "[a-z&&[^bc]]")

(regex "[a-z&&[^m-p]]")

## Combinations
(regex "aa")

(regex "a?")

(regex "a*")

(regex "a+")

## Specific amounts
(regex "a{2}")

## At least
(regex "a{1,}")

## At most
(regex "a{,1}")

## Between
(regex "a{1,2}")

## Groups
(regex "a(.)c")

(regex "a(b+)c")

(regex "(\\d{3})-(\\d{3})-(\\d{4})")

(regex "(\\d{3})-(?:\\d{3})-(\\d{4})")

(regex "(?<code>\\d{3})-\\k<code>-(\\d{4})")

(regex "(?<code>\\d{3})-\\k<code>-(\\d{4})-\\0")

(regex "(\\d{3})-((\\d{3})-(\\d{4}))")

## Alternation
(regex "a|b")

(regex "a(.)(.)|b(.)(.)")

Another awesome feature of the regex macro is that it will build fully type-safe code for
you.

This is important because the groups and alternations that you use in your regular
expression will affect its output.

For example:

## This returns a single piece of text
(regex "a{1,}")

## But this one returns a pair of texts
## The first is the whole match: aXc
## And the second is the thing that got matched: the X itself
(regex "a(.)c")

## That means, these are the types of these regular-expressions:
(: (Lexer Text)
   (regex "a{1,}"))

(: (Lexer [Text Text])
   (regex "a(.)c"))


The benefits of lexers are that they are a bit easier to understand when reading them (due to
their verbosity), and that they are very easy to combine (thanks to their monadic nature, and
the combinator library).

The benefits of regular expressions are their familiarity to a lot of programmers, and how
quick they are to write.

Ultimately, it makes the most sense to provide both mechanisms to Lux programmers, and
let everyone choose whatever they find most useful.
