
Sensing Room Occupancy Distributions

for Smart Lighting Applications

Kim L. Boyer
Professor and Head
Quan Wang and Xinchi Zhang
Electrical, Computer, and Systems Engineering
Rensselaer Polytechnic Institute
US NSF Engineering Research Center on Smart Lighting
Target and Motivation
We want to build an occupancy-sensitive, smart lighting system

[Diagram: color-controllable LED fixtures and sensors in a room; the sensors infer occupancy, which determines what light should be delivered]

Minimize energy consumption


Maximize human comfort,
well-being, productivity


Target and Motivation
to produce something like this:

[Figure panels: ground truth, 3D reconstruction, occupancy map]


The socket-and-bulb model is ending; new business
models are emerging, enabled (in fact, forced) by
efficient, reliable solid-state lighting.
The Future of Lighting?
The end of the bulb and socket model
Lighting company shakeout underway
Lighting will do more:
Visible light communications
Sensing
Adaptation to ambient, occupancy, activity
Human health, well-being, productivity
Internet of Things
Lighting fixtures will provide services
Sensor Options
Sensor Type | Output | Privacy Concern | Cost | Ambient Lighting | Key Components
Webcam | RGB image | Significant | $$ | Not too dark | CCD/CMOS array
Scanning LiDAR | Depth map | Moderate | $$$$ | Anything | Laser, scanning mirror, avalanche photodiode
Time of Flight | Depth map | Moderate | $$$ | Anything | Laser, avalanche photodiode
RGB-D (e.g. Kinect) | RGB & depth | Significant | $$$ | Not too dark | IR laser, IR sensor, RGB camera
Non-imaging color sensor | A few real numbers | None | $ (cheap) | Must be controllable | Photodiodes, filter
Non-Imaging Color Sensors
Flora Color Sensor - TCS34725
Retail $10 (qty. 1) => Cheap!
Color-filtered photodiodes => Privacy preserving
Drawback: Little information in the output
Solution: Distributed sensors, multiple measurements
Many sensors distributed in the room => spatial information
Many measurements under different lighting conditions => spectral information
Current Sensor (an RPI Design)
Flora sensor chip
Raspberry Pi controller, network connection
3D-printed housing, simple lens, non-directional
Issues are speed, sensitivity, spectrum, network

Mounted in the ceiling


Smart Lighting ERC Testbed
Smart Space Testbed (SST)
12 color-controllable LED fixtures
For each LED, we can specify the intensity of 3 channels: R, G, B
12 color sensors (photodiodes + color filters)
Each sensor measures luminous flux of 4 channels: R, G, B, White
Sensors can be installed anywhere (well, almost anywhere)

[Photo: Smart Space Testbed (SST), showing the 12 LED fixtures and color sensors]


Light Transport Model (LTM)
x: input vector to the LED fixtures, 36 dimensions (3 channels × 12 LEDs)
y: output vector from the color sensors, 48 dimensions (4 channels × 12 sensors)
Affine relationship:

y = Ax + b

A: light transport matrix, 48×36, lighting independent
NB: A depends only on occupancy
b: sensor response to ambient light, 48 entries

Matrix A is a very good signature for occupancy!
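As a concrete illustration, here is a minimal numpy sketch of the affine LTM with the testbed's dimensions; the matrix values are random placeholders, not calibrated data.

```python
import numpy as np

N_LEDS, LED_CHANNELS = 12, 3          # 12 fixtures x R,G,B  -> 36-dim input
N_SENSORS, SENSOR_CHANNELS = 12, 4    # 12 sensors x R,G,B,W -> 48-dim output
DIM_X = N_LEDS * LED_CHANNELS
DIM_Y = N_SENSORS * SENSOR_CHANNELS

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, (DIM_Y, DIM_X))  # light transport matrix (placeholder)
b = rng.uniform(0.0, 5.0, DIM_Y)           # ambient-light response (placeholder)

x = rng.uniform(0.0, 255.0, DIM_X)  # drive levels for the 36 LED channels
y = A @ x + b                       # predicted 48 color-sensor readings
```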


Make it Linear: Eliminate b
Start with the affine light transport model (LTM):

y0 = Ax0 + b          (measure initial conditions once)

Apply random perturbations, i = 1, …, N measurements:

(y0 + Δyi) = A(x0 + Δxi) + b

Subtract the y0 equation:

Δyi = AΔxi          (i = 1, …, N equations to solve for A)

x0: desired base input light signal vector, from the control module
Δxi: ith randomly generated perturbation vector
x0 + Δxi: ith total input light signal vector
y0 + Δyi: ith resulting vector of color sensor responses
Solving for A: The Light Transport Matrix
From N measurements, we obtain a linear system:

[Δy1 Δy2 … ΔyN] = A [Δx1 Δx2 … ΔxN]
ΔY (48×N) = A (48×36) ΔX (36×N)

In the current testbed, N = 200 >> 36. With N >> the number
of columns in A, the system is overdetermined and we can use
the Moore-Penrose pseudo-inverse to estimate A (see the
sketch below):

A = ΔY ΔXᵀ (ΔX ΔXᵀ)⁻¹
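A minimal numpy sketch of this estimation step; the variable names are mine, and ΔX, ΔY are assumed to stack the perturbations and responses as columns.

```python
import numpy as np

def estimate_ltm(dX: np.ndarray, dY: np.ndarray) -> np.ndarray:
    """Least-squares estimate of the light transport matrix A.

    dX: 36 x N matrix of input perturbations (one per column)
    dY: 48 x N matrix of sensor-response changes (one per column)
    """
    # Equivalent to the pseudo-inverse solution A = dY dX^T (dX dX^T)^-1,
    # but np.linalg.lstsq is numerically more stable.
    A_T, *_ = np.linalg.lstsq(dX.T, dY.T, rcond=None)
    return A_T.T  # 48 x 36
```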
Possible Run Time Considerations
The number of measurements may be time-constrained
If the system is underdetermined, there is no unique solution
Add a constraint: the room is sparsely occupied
(a reasonable assumption for changes since calibration)
Let A0 be the light transport matrix for the empty room
E = A0 − A is the difference matrix; it measures the change
in the light transport model vs. the empty room.
If necessary, estimate E under two assumptions (see the sketch below):

Changes are sparse
Changes are similar from path to path
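One way this could be realized (a sketch under the sparsity assumption only; it ignores the path-to-path similarity prior and is not the authors' exact formulation) is an L1-regularized, row-by-row fit of E:

```python
import numpy as np
from sklearn.linear_model import Lasso

def estimate_difference_matrix(A0, dX, dY, alpha=0.1):
    """Sparse estimate of E = A0 - A from a limited number of measurements.

    A0: 48 x 36 empty-room light transport matrix
    dX: 36 x N input perturbations; dY: 48 x N response changes
    Since dY = A dX = (A0 - E) dX, the residual R = A0 dX - dY obeys R = E dX.
    """
    R = A0 @ dX - dY
    E = np.zeros_like(A0)
    for i in range(A0.shape[0]):
        # minimize ||dX^T e_i - r_i||^2 + alpha * ||e_i||_1 for row i of E
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
        lasso.fit(dX.T, R[i])
        E[i] = lasso.coef_
    return E
```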


Perturbation Modulation
[Diagram: a low-amplitude perturbation added to the base light yields perturbation-modulated light, in a sense → adjust → sense cycle]
Perturbation Modulation
Total input light = base light (x0) + perturbation (Δx)
The set of perturbation patterns should be rich in variation,
to capture sufficient scene information over the entire space
The amplitude must be small, for human comfort and
well-being (x0-dominant)
The amplitude must be large enough to achieve reliable
sensor readings
Perturbation Modulation

Two alternating stages: sensing and adjustment


Perturbation Ordering
The set of patterns should be applied in a sequence
that maximizes comfort and minimizes visibility.
Neighboring patterns in the sequence should be as
similar as possible, to produce gradual changes.

[Diagram: perturbation vectors Δx1 … Δx4 around the origin, ordered so that consecutive patterns are close]
Model: The well-known Traveling Salesman Problem


Solution: We use a genetic algorithm (see the sketch below)
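A minimal sketch of this ordering step (my own illustrative implementation, not the authors' code): treat each perturbation pattern as a city, use Euclidean distance between patterns as the tour cost, and evolve tours with a simple mutation-driven genetic algorithm.

```python
import numpy as np

def tour_length(order, D):
    """Total cyclic tour length over the pairwise distance matrix D."""
    return sum(D[order[i], order[(i + 1) % len(order)]] for i in range(len(order)))

def order_patterns(patterns, pop_size=100, generations=500, seed=0):
    """Order perturbation patterns (n x d array) so consecutive ones are similar."""
    rng = np.random.default_rng(seed)
    n = len(patterns)
    D = np.linalg.norm(patterns[:, None, :] - patterns[None, :, :], axis=-1)
    population = [rng.permutation(n) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda o: tour_length(o, D))
        survivors = population[: pop_size // 2]     # selection: keep the best half
        children = []
        for parent in survivors:
            child = parent.copy()
            i, j = sorted(rng.integers(0, n, size=2))
            child[i:j + 1] = child[i:j + 1][::-1]   # mutation: reverse a segment
            children.append(child)
        population = survivors + children
    return min(population, key=lambda o: tour_length(o, D))
```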
Occupancy Sensing
With perturbation-modulated lighting, we obtain:
The light transport matrix A, or
The difference from the unoccupied room, E = A0 − A
Now what? Three approaches; let's consider them in turn:
Machine learning: classify the occupancy patterns
Light blockage model: reconstruct the 3D scene
Light reflection model: estimate the 2D occupancy map
Approach 1: Learning Occupancy Using SVM
The most obvious approach is direct classification:

Collect (much) training data → train SVM classifiers →
compute A at run time → classify
(a sketch follows the results below)
Approach 1: Learning Occupancy Using SVM

We divided the room into six regions, and defined 15 categories:

Empty room
One person in one of the 6 regions
A small group in one of the 6 regions
Large group gathered
Large group scattered
Approach 1: Learning Occupancy Using SVM
Classification results are promising:

Mean average precision (mAP) is 78.69%; random guess would be 7.95%


But the machine learning approach has its limitations:
1. Limited number of categories
2. Cross-subject generalization is difficult (overfitting problem)
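A minimal scikit-learn sketch of the classification pipeline above (the features, kernel, and hyperparameters are illustrative assumptions): flatten each recovered A matrix into a feature vector and train a multi-class SVM over the 15 categories.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_occupancy_classifier(A_matrices, labels):
    """A_matrices: list of 48x36 arrays; labels: category index (0..14)."""
    X = np.array([A.ravel() for A in A_matrices])  # 1728 features per sample
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X, labels)
    return clf

def classify_occupancy(clf, A_runtime):
    """Classify a light transport matrix recovered at run time."""
    return clf.predict(A_runtime.ravel()[None, :])[0]
```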
Approach 2: 3D Scene Reconstruction
Let the light transport matrix of the empty room be A0
Let E = A0 − A, also 48×36 (the difference matrix)
We aggregate E to a 12×12 matrix Ê by summing, for each
fixture-sensor pair, over the (currently) three color channels
(see the sketch below)

[Diagram: the per-channel blocks of E (R, G, B per fixture; R, G, B, W per sensor) summed for each fixture-sensor pair]
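A minimal numpy sketch of this aggregation; the channel ordering (sensors grouped in 4-channel blocks along rows, fixtures in 3-channel blocks along columns) is an assumption for illustration.

```python
import numpy as np

def aggregate_difference_matrix(E: np.ndarray) -> np.ndarray:
    """Aggregate the 48x36 difference matrix E into a 12x12 matrix E_hat.

    Each output entry sums all channel combinations for one
    fixture-sensor pair (assumed block layout; see lead-in).
    """
    blocks = E.reshape(12, 4, 12, 3)   # sensors x sensor-channels x fixtures x LED-channels
    return blocks.sum(axis=(1, 3))     # one cumulative value per fixture-sensor pair
```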
Light Blockage Model: Understanding Ê
Each entry of Ê captures the cumulative effect of a set of
light paths from one fixture to one sensor: the direct path
plus the wall and diffuse reflection paths

[Diagram: the direct path and the reflection paths (wall, diffuse) from an LED fixture to a color sensor]

Light Blockage Model: Understanding Ê
In Ê, direct path blockage dominates reflection path blockage

[Diagram: blocking the direct path perturbs the sensor reading far more (>>) than blocking a wall-reflection path]

Therefore, the entries of Ê can serve as indicators of the
probability of obstruction along the corresponding 3D
fixture-to-sensor path.
Light Blockage Model
A positive value in Ê indicates an occlusion somewhere along
the corresponding light path.
With two or more intersecting light paths occluded, it is very
likely that their intersection point is occupied.

[Diagram: light paths between LED fixtures and color sensors; the paths blocked by a human subject intersect at the occupied location]
3D Volume Rendering
Assume there are M direct light paths (M = 12×12 = 144)
Light path m corresponds to entry Êm of the aggregated difference matrix
For a point P in the room, let the distance from P to line m be dm(P)
The confidence that point P is occupied is then an Êm-weighted
sum of Gaussian kernels of these distances, with a normalization
factor (see the sketch below):

c(P) = (1/Z) Σm Êm exp(−dm(P)² / 2σ²)
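A minimal sketch of this confidence computation for one voxel (the endpoint coordinates, σ, and the normalization choice are illustrative assumptions):

```python
import numpy as np

def point_to_segment_distance(P, a, b):
    """Distance from point P to the segment a-b (all length-3 arrays)."""
    ab = b - a
    t = np.clip(np.dot(P - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(P - (a + t * ab))

def confidence(P, paths, e_hat, sigma=20.0):
    """Occupancy confidence at point P.

    paths: list of (fixture_pos, sensor_pos) pairs, one per direct light path
    e_hat: the corresponding entries of the aggregated difference matrix
    """
    c = sum(e * np.exp(-point_to_segment_distance(P, a, b) ** 2 / (2 * sigma ** 2))
            for (a, b), e in zip(paths, e_hat))
    return c / len(paths)  # simple normalization (1/Z)
```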
Testbed Configuration
Room size:
85.5″ × 135″ × 86.4″

Spatial resolution:
1 voxel = 1″ × 1″ × 1″
Kernel width: σ = 20″

Positions of sensors:
6 sensors on each wall
3D Reconstruction Complexity Analysis
Algorithm implemented in C++, compiled with MEX

Takes about 18 seconds on a MacBook to render one volume (87 × 136 × 88 voxels):
≈1 million voxels × 144 lines ≈ 0.15 billion operations
Each operation:
Compute a point-to-line distance
Compute a Gaussian kernel value

Acceleration: pre-computation of Gaussian kernels and hashing

Rendering then takes only 2 seconds on a MacBook, but requires 1 GB more memory
Even faster?
Parallel computing: multithreading, GPU
Lower resolution
Current bottleneck is sensing time, not rendering
3D Scene Reconstruction Results

[Six result figures, one per occupancy scenario (the last a single-region case); each shows the ground truth, camera images (unused by the algorithm), the z-integral confidence map, and the 3D confidence map]

Approach 3: Light Reflection Model
Limitations of the blockage model:
Sensors must be on the walls; what about furniture?
For a large room, the model fails far from the walls
Alternative: ceiling-mounted sensors only
No direct light path from fixture to sensor
Non-directional sensors; no height information
Therefore, 3D reconstruction is not possible
We can analyze reflection with geometrical optics
and photometry; what can we do with that?
Approach 3: Light Reflection Model
2D Occupancy Estimation
Using geometrical optics and photometry, we compute a reflection
kernel Ri,j for each fixture-sensor pair (i, j)

The confidence map of the occupancy is then a weighted linear
combination of the reflection kernels (see the sketch below):

c(p) = Σi,j wi,j Ri,j(p)
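A minimal sketch of the weighted combination (the kernels here are random placeholders; the authors derive their reflection kernels from geometrical optics and photometry, and the weights wi,j from the sensed data):

```python
import numpy as np

def occupancy_confidence_map(kernels: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Weighted linear combination of per-pair reflection kernels.

    kernels: (n_pairs, H, W) array, one 2D reflection kernel R_ij per
             fixture-sensor pair (placeholder values in this sketch)
    weights: length n_pairs vector of combination weights w_ij
    """
    return np.tensordot(weights, kernels, axes=1)  # (H, W) confidence map

# Illustrative usage with made-up kernels on a coarse floor grid:
rng = np.random.default_rng(0)
kernels = rng.uniform(0.0, 1.0, (144, 86, 135))
conf_map = occupancy_confidence_map(kernels, rng.uniform(0.0, 1.0, 144))
```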
2D Occupancy Results

[Three result figures, one per occupancy scenario; each shows the ground truth, camera images (unused by the algorithm), and the 2D confidence map]
Quantitative Evaluation
For a 2D confidence map, we can compute the correlation
coefficient with manually generated floor-plane ground truth
(see the sketch below)

Floor-plane ground truth:
Model each person or chair as a disk of radius 10″
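A minimal sketch of this evaluation step (grid units and positions are illustrative): rasterize the disks, then take the Pearson correlation with the confidence map.

```python
import numpy as np

def disk_ground_truth(shape, centers, radius=10.0):
    """Floor-plane ground truth: one disk (radius in inches) per person or chair."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    gt = np.zeros(shape)
    for cy, cx in centers:
        gt[(ys - cy) ** 2 + (xs - cx) ** 2 <= radius ** 2] = 1.0
    return gt

def correlation_coefficient(conf_map, gt):
    """Pearson correlation between the 2D confidence map and the ground truth."""
    return float(np.corrcoef(conf_map.ravel(), gt.ravel())[0, 1])
```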
Quantitative Evaluation
Figure of Merit: Mean average correlation coefficient
(mACC) over all occupancy scenarios. (higher is better)
Summary of the Three Approaches

Approach | Sensor Mount | Room Size | Generalizes | Number of Classes | Training Data? | Spatial Position Recovery?
Machine Learning | Wall or ceiling | Any | Poorly | Limited | Needs a lot of it | No
Blockage Model | Wall | Small | Well | No limit | None | Yes
Reflection Model | Ceiling | Any | Well | No limit | None | Yes
Ongoing Work
Weakly directional sensors
Compound Eye concept
Triangulation from pairs of sensor responses in the E
matrix (~ tri-diagonally dominant, if permuted correctly)
Infer approximate height information
Perturbation Patterns
Minimal spanning set (size, basis) to effectively interrogate
the complete space in minimal time
Dimensionality, locality requirements in pattern design
Formal measure of pattern-change visibility vs.
mathematical pattern distance
Do problems exist even below perceptibility?
By how much do we need to over-determine the system?
Public Policy Implications;
Obstacles to Adoption
Should such systems be subject to regulation? If so, on what basis?
Privacy concerns, real and imagined. Worse with better sensors?
Public acceptance, trust, use in public spaces. Disclosure needed?
Fail-safe mode, ensuring personal well-being. Verification?
Residential use, requirements?
Human vulnerability to the modulation of the light? Standards?
Susceptibility to sabotage, unauthorized access?
Realizing net energy savings. (Computers use electricity.)
Outdoor applications? Safety, security vs. fear of being watched?
Conclusions and Next (Technical) Steps
Novel occupancy sensing for smart lighting systems:
Uses very few low-cost color sensors
Very different from PIR, LiDAR, or ultrasonic sensing
Faster, directional sensors will improve precision

Next Steps
Faster, less-obtrusive sensors (which we have built)
Directional sensors (compound eyes, but no images)
Rough depth from ceiling-mounted directional sensors
Combined approaches
Well-developed design rules for smart spaces (by type)
Thank You!

To Read More About It
Quan Wang, Xinchi Zhang, and Kim Boyer, "3D Scene Estimation for
Smart Light Delivery with Perturbation-Modulated Light Sensing,"
Journal of Solid State Lighting, 2014, 1:17, ISSN 2196-1107,
doi:10.1186/s40539-014-0017-2.
Quan Wang, Meng Wang, and Kim Boyer, "Learning Room Occupancy
Patterns from Sparsely Recovered Light Transport Models,"
International Conference on Pattern Recognition, Stockholm,
Sweden, August 2014.
Quan Wang, Xinchi Zhang, and Kim Boyer, "3D Scene Estimation with
Perturbation-Modulated Light and Distributed Sensors," IEEE
Workshop on Perception Beyond the Visible Spectrum, Columbus, OH,
June 2014.
Xinchi Zhang, Quan Wang, and Kim L. Boyer, "Illumination
Adaptation with Rapid-Response Color Sensors," SPIE Optical
Engineering and Applications, San Diego, CA, August 2014.
