
IEEE TRANSACTIONS ON NUCLEAR SCIENCE, VOL. 62, NO. 3, JUNE 2015

FPGA Based Data Read-Out System of the


Belle II Pixel Detector
Dmytro Levit, Igor Konorov, Daniel Greenwald, and Stephan Paul

Abstract—The upgrades of the Belle experiment and the KEKB accelerator aim to increase the data set of the experiment by a factor 50. This will be achieved by increasing the luminosity of the accelerator, which requires a significant upgrade of the detector. A new pixel detector based on DEPFET technology will be installed to handle the increased reaction rate and provide better vertex resolution. One of the features of the DEPFET detector is a long integration time, which increases the detector occupancy up to 3%. The detector will generate about 22 GB/s of data. An FPGA-based two-level read-out system, the Data Handling Hub, was developed for the Belle II pixel detector. The system consists of 40 read-out and eight controller modules. All modules are built in TCA form factor using a Xilinx Virtex-6 FPGA and can utilize up to 4 GB of DDR3 RAM. The system was successfully tested in the beam test at DESY in January 2014. The functionality and the architecture of the Belle II Data Handling Hub system as well as the performance of the system during the beam test are presented in the paper.

Index Terms—Data acquisition, field programmable gate arrays, high energy physics instrumentation computing.

I. INTRODUCTION

THE Belle II experiment [1] is a successor of the Belle experiment, the B-Factory located in the High Energy Accelerator Research Organization, KEK, in Tsukuba, Japan. The experiment will focus on precise measurements of flavor physics reactions at low energies (10 GeV) to observe signatures of new particles and to detect deviations from the Standard Model predictions. The upgrade of the Belle experiment aims to increase the recorded data set of the B-Factory by a factor 50. Besides the upgrade of the accelerator SuperKEKB [2], which will increase the luminosity by adopting a nano-beam collision scheme at the interaction point, a significant detector upgrade is required to cope with the increased reaction rate.

The Belle II detector consists of the following subdetectors. The innermost part of Belle II is the silicon vertex detector, which consists of a two-layer silicon pixel detector and a four-layer silicon strip detector. The silicon vertex detector is surrounded by the central drift chamber, which reconstructs the tracks of the charged particles and measures their momenta. The particle identification system is represented by the focusing aerogel ring imaging Cherenkov detector and the time-of-propagation counters. The energy of the electrons and photons is measured in the high-resolution CsI(Tl) electromagnetic calorimeter. The outermost detector of the experiment detects long-lived kaons and muons with resistive plate chambers and scintillator detectors.

Manuscript received June 15, 2014; revised December 22, 2014; accepted April 10, 2015. Date of publication May 21, 2015; date of current version June 12, 2015. This work was supported in part by the European Commission under the FP7 Research Infrastructures project AIDA under Grant 262025, by the German Ministry of Education and Research, Excellence Cluster Universe, and by the Maier-Leibnitz-Laboratory.
The authors are with the Physikdepartment E18, Technische Universität München, 85748 Garching, Germany (e-mail: dmytro.levit@tum.de; igor.konorov@cern.ch; daniel.greenwald@tum.de; stephan.paul@tum.de).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TNS.2015.2424713
0018-9499 © 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

II. PIXEL DETECTOR

The silicon pixel detector will be installed as the innermost detector layer in Belle II. The detector is an active-pixel detector built using DEpleted P-channel Field Effect Transistor (DEPFET) [3] technology. This technology allows us to build a detector with a very low material budget per layer [4], which reduces multiple scattering and provides excellent spatial resolution, improving the vertex resolution.

The matrix of a pixel detector module, called a half ladder, consists of DEPFET pixels and has a long integration time. Two half ladders are glued together on the far edge of the silicon frame and form a mechanical module, a ladder. The pixel detector, composed of 20 mechanical modules, is arranged in two cylindrical layers around the interaction point of the accelerator. The inner layer consists of eight modules with an average radius of 14 mm and a sensitive length of 90 mm. The outer layer consists of 12 modules with an average radius of 22 mm and a sensitive length of 123 mm [4].

The half ladder forms the read-out unit. The DEPFET matrix is operated in a rolling-shutter mode by three kinds of application-specific integrated circuits (ASICs) bump bonded on the silicon frame of the half ladder: six SwitcherB, four drain current digitizers (DCD), and four data handling processors (DHP). The pixels in the matrix are controlled by two lines: the gate line activates the transistors and the clear line erases the charge accumulated in the internal gate of the DEPFET. The gate and clear lines of the DEPFET matrix are steered by the SwitcherB [5] ASIC. These chips are connected in a daisy chain that propagates the externally generated detector read-out sequence. The gate and clear lines of each four rows in the matrix are controlled by the same SwitcherB channel in a so-called 4-fold read out: four detector rows are digitized in the same read-out cycle. The read-out cycle consists of activating four detector rows and clearing the rows at the end of the cycle. The period of the read-out cycle is 100 ns.

The digitization of the drain current is performed in the DCD ASIC [6]. A DCD has 256 8-bit analog to digital converter

(ADC) channels with common-mode correction operated at 305 MHz. The digitized values are serialized and sent to the DHP ASIC at a total data rate of 20.48 Gb/s.
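The quoted link rate can be cross-checked from the figures given above: 256 ADC channels each deliver one 8-bit sample per 100 ns read-out cycle. A quick arithmetic check (assuming exactly one sample per channel per cycle, which is what the 4-fold read out implies):

```python
# Cross-check of the DCD-to-DHP data rate from the numbers quoted in the text:
# 256 ADC channels, 8 bits per sample, one sample per 100 ns read-out cycle.
channels = 256
bits_per_sample = 8
cycle_s = 100e-9  # read-out cycle period

rate_bps = channels * bits_per_sample / cycle_s
print(rate_bps / 1e9)  # 20.48 (Gb/s), matching the quoted total data rate
```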
The DHP ASIC [7] generates the control sequence for the
SwitcherB and synchronizes it with the signal digitization in
the DCD. Initial data processing, e.g., digital common-mode
correction, zero suppression, or pedestal subtraction, is per-
formed in the chip. The DHP can withstand an average detector
occupancy up to 3% with negligible data loss. The processed
data is serialized and sent to the data-read-out hardware over a
1.52 Gb/s 8b/10b simplex Aurora [8] channel. Control of the
read-out process is performed by the read-out hardware over
four LVDS lines.
All ASICs on the half ladder are connected in a JTAG chain. The JTAG chain is used for slow control and configuration of the ASICs, which is performed by the read-out hardware.

Fig. 1. Data read-out chain of the Belle II pixel detector.
The total data rate at the trigger rate of 30 kHz and an occupancy of 3% is estimated at 22 GB/s, which is 10 times higher than the combined data rate of all other detectors in Belle II. The read-out topology of the half ladder, which consists of four independent Aurora links, poses a major challenge for the read-out hardware: the four data streams must be synchronized before the data are processed. The data processing prepares the data from the silicon pixel detector for the online data reduction, e.g., through cluster recovery and subevent building.
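The synchronization task described above can be illustrated in software. This is a minimal sketch, not the firmware: frames arrive on four independent links tagged with a trigger number and must be regrouped per trigger before further processing. All names are illustrative.

```python
from collections import defaultdict

def synchronize(streams):
    """Regroup frames from independent links by trigger number.

    `streams` is a list of per-link frame lists; each frame is a
    (trigger_number, payload) tuple. Yields (trigger_number, payloads)
    only for triggers for which every link has delivered a frame.
    """
    pending = defaultdict(dict)  # trigger number -> {link index: payload}
    n_links = len(streams)
    for link, frames in enumerate(streams):
        for trigger, payload in frames:
            pending[trigger][link] = payload
    for trigger in sorted(pending):
        if len(pending[trigger]) == n_links:
            yield trigger, [pending[trigger][l] for l in range(n_links)]

# Four links deliver frames for trigger 1; only three have frames for trigger 2,
# so trigger 2 is held back until its fourth frame arrives.
links = [[(1, "a"), (2, "a2")], [(1, "b"), (2, "b2")], [(1, "c")], [(1, "d"), (2, "d2")]]
print(list(synchronize(links)))  # [(1, ['a', 'b', 'c', 'd'])]
```

In hardware this regrouping is done with FIFOs and trigger tags rather than dictionaries, but the invariant is the same: an event is complete only when all four links have contributed.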

III. DHH SYSTEM

The read-out hardware of the Belle II pixel detector is represented by the data handling hub system (DHH), which has a two-layer architecture (Fig. 1). The first-layer modules, the data handling engines (DHE), are interfaced over two 15-m Infiniband cables to the front-end electronics on the half ladders. The modules receive data, perform cluster reconstruction and classification, and provide a JTAG master interface for slow control of the front-end ASICs. The second-layer modules, the data handling concentrators (DHC), are connected to five DHE modules over 6.25 Gb/s Aurora channels. A DHC module collects hit information from the DHE modules and performs subevent building based on the trigger number. Additionally, a DHC module provides the interfaces to the trigger and clock distribution system and to slow control. The synchronization information and Ethernet frames are distributed to the DHE modules over high-speed serial links.

A. Hardware

The DHE and DHC modules are built in the advanced mezzanine card (AMC) form factor and share a hardware design (Fig. 2). The main element of the module is a Xilinx Virtex-6 LX130T field-programmable gate array (FPGA). The module is connected to a half ladder using two Infiniband connectors. The DDR3 SODIMM slot allows up to 4 GB of memory and a bandwidth up to 6 GB/s. The configurable clock synthesizer is used as the clock source in stand-alone setups and as a jitter cleaner in setups with an external clock. There is a designated connector for a current-mirror mezzanine board used for the characterization of the ADCs in the DCD, and a connector for the MMC mezzanine board [9].

Fig. 2. DHE AMC module.

Six FPGA modules and an interface AMC module are installed on the custom-designed ATCA (Advanced Telecom Computing Architecture) carrier board (Fig. 3). The carrier board provides galvanic isolation between the DHE and DHC modules as well as interconnections between the modules. This is necessary since the sensor half ladder has a ground connection to the DHE module through the copper cables and the detector ground is defined elsewhere. The galvanic isolation is implemented by providing a separate 12 V power supply for each FPGA module, generated by DC/DC converters from the 48 V of the ATCA power bus. The isolation of the high-speed links between the DHE and DHC modules is done on the rear transition module using fiber optics and SFP+ optical transceivers. The ATCA board is managed by the IPMI mezzanine board [10].

B. System Synchronization

The pixel detector in the Belle II experiment is synchronized with the SuperKEKB accelerator. The synchronization is done by the Belle II trigger and time distribution system (B2TT) [11]. The B2TT system broadcasts a 127.21 MHz clock that is generated from the accelerator's radio frequency clock of 508.84 MHz and synchronized to the beam revolution cycle of 100 kHz [12]. The trigger and synchronization information is distributed source-synchronously with the B2TT clock as an 8b/10b encoded serial data stream.

The DHC synthesizes a 76.33 MHz clock from the B2TT clock. The new clock is then distributed to all connected DHE modules and is used as a clock source for the front-end
electronics. The trigger information is propagated synchronously through the custom clock-domain-crossing core before being distributed to the DHE modules. This core is programmed with the assumption that one clock is a fraction of the second clock. Therefore, after a defined time period, the phase relation between the clocks repeats. Data are written to a register at the beginning of this period in one clock domain and read from it at the end of the period in the other. This reduces the time resolution but is well within the trigger interval of 190 ns [11].

The 76.33 MHz clock and data are distributed synchronously from the DHC to the DHE modules over a dedicated high-speed data link. The phase information of the source clock is preserved by using 8b/10b encoding, which makes clock recovery on the receiver side possible. Bypassing the elastic buffers in the transmitter and receiver also fixes the latency for the transmission of the trigger information. The recovered clock on the receiver side has a fixed phase relation to the source clock.

Fig. 3. ATCA carrier board and rear transition module.

C. Data Processing

The data flow in the DHE is shown in Fig. 4: data are received from the DHPs by four independent Aurora cores. The DHP data stream is divided into DEPFET frames only where the data pointer in the DHP memory transitions from the last row to the first row. Data that belong to overlapping triggers are disentangled in the DHE.

Fig. 4. Data flow in the DHE.

The algorithm consists of a job allocator, data-storage cores, and a data reader. The number of data-storage cores defines the maximum number of overlapping triggers that the DHE can process. The job allocator uses a round-robin algorithm to activate the data-storage cores and provides information about the trigger. This information includes the trigger number, the number of the first row that will be processed by the DHP for this trigger, and the expected DHP frame ID. Because the DHE and DHP share the same clock, the number of the first row is calculated in the DHE and corresponds to the data pointer in the DHP. The activated data-storage core analyzes the data stream to filter exactly one DEPFET frame from the DHP data stream and stores the data frames in a FIFO. Finally, the data reader, also using a round-robin algorithm, reads finished events from the data-storage cores.

Another problem is addressed in the data-storage core. If a DEPFET event is received in two DHP frames, the row number inside the event jumps between the two frames. The correction of this condition is implemented in the core: the frames are stored in separate FIFOs, read in reverse order, and merged into a single frame. Finally, the single DHP frame is stored with the trigger tag in the DDR3 memory.

The DDR3 memory provides four FIFO-like LocalLink interfaces for storing detector frames. The custom core is built around the Xilinx Memory Interface Generator (MIG) [13] core so that the interfaces share the same DDR3 memory unit. The data are written into an intermediate FIFO that keeps the data until access to the memory interface is granted by the memory arbiter core. The arbiter divides the single address space of the DDR3 memory into several memory regions that are treated as ring buffers and maintains a set of pointers for each ring buffer. The size of the ring buffers can be changed dynamically in the slow-control registers. If data are present in the intermediate FIFO, then the next time the arbiter activates this memory region, a block of data is written to the DDR3 memory. The read process is similar to the write: the arbiter reads data from the ring buffer and writes it into the intermediate FIFO, where it is decoded and prepared for the LocalLink interface. The overhead of the algorithm goes asymptotically to 6.25%, since 16 bits of the 256-bit vector in the MIG interface are used for service information. The last vector of a frame may not be filled completely, and therefore the overhead also oscillates with the frame size (Fig. 5).

Fig. 5. Overhead of the external FIFO core (simulation).

The buffered data are then optionally processed by the cluster-recovery algorithm. The clustering of the data is beneficial for the effective implementation of the data-reduction algorithms. The clustering algorithm takes advantage of the fact

that the hits are already ordered in an incrementing row-number sequence. The core of the algorithm is implemented as a chain of finite state machines representing one detector row. Each state machine processes hits of two neighboring columns. Therefore, the algorithm has 32 state machines for 64 detector columns, corresponding to the number of columns handled by one DHP chip. The clustering process receives one hit per clock. The state machine checks the status of its direct neighbours, and if one of the neighbours is active, the state machine takes its cluster number; otherwise the next free cluster number is used. When there are two active neighbours, the clusters are merged and the state machine assigns the smallest cluster number to the hit. To remap preliminary cluster numbers to final ones, there is a table that stores the new cluster number at the address of the old cluster number. After the preliminary cluster numbers have been assigned to all hits, a remapping of the cluster numbers is performed in the order of the pixels stored in the hit FIFO.

Since only the smallest cluster numbers are propagated, the cluster-number remapping requires at most two look-ups in the table to find the true cluster number. Once the true cluster number is found, the value is written into the look-up table to speed up the following remapping. Since the remapping algorithm is deterministic and the hits are received in the correct order, the data stream can be processed in a pipeline. Finally, the four data streams are merged together by remapping clusters that are located on the frame border.

The verification of the algorithm was performed in firmware and in software in parallel. Data in the pixel detector data format were generated by software running on a dedicated PC. The data were then loaded into the FPGA over Ethernet and sent to the software for verification. The clustered data were downloaded from the FPGA and compared with the results of the cluster recovery in hardware. This test proved that the cluster-recovery algorithm in the FPGA recovers the clusters in the data stream correctly.

The data ready to be sent downstream are formatted into DHE frames that provide service information on the data type and event number. The DHE frames are then sent to the DHC module for subevent building.

D. Subevent Builder

The DHC performs online subevent building in firmware. The data flow in the subevent-building algorithm is shown in Fig. 6: the received frames are buffered in a Xilinx Block RAM FIFO [14] and then written into one of four intermediate FIFOs, also implemented as Xilinx Block RAM FIFOs, that correspond to the four outgoing links. The active intermediate FIFO is determined by a round-robin algorithm. Overflow of the buffering FIFO is prevented by using the Native Flow Control feature of the Aurora protocol. This feature allows us to suspend data transmission over the Aurora link until the fill level of the FIFO falls under a predefined threshold. During this time period the data are buffered in the external memory of the DHE modules.

Fig. 6. Sub-event builder algorithm.

The frames stored in the intermediate FIFOs are then read out by framing state machines in the event framer cores. The order in which the intermediate FIFOs are read by the framing state machines is static. Each intermediate FIFO corresponds to a DHE–DHC link. If a DHE–DHC link is not established, the framing state machine ignores the corresponding intermediate FIFO, generates an empty frame for this link to preserve the subevent data structure, and switches to reading the next FIFO. The state machines enclose data frames with the same event number by a header and a trailer frame, thereby building a subevent that contains information from up to five detector modules. Finally, the complete subevents are written into a large FIFO that is implemented in the external memory. The data from the external FIFOs are directly sent to the outgoing links.

Since the data frames already contain event-number information, the consistency of the subevent is checked directly in the framing state machines. The subevent-builder core also provides the possibility to mask incoming and outgoing channels. In this case, the round-robin algorithm is altered to bypass the inactive channels.

E. Slow Control

The slow control of the system is implemented as an abstraction layer between high-level system-control algorithms in the Experimental Physics and Industrial Control System (EPICS) and low-level hardware registers. The DHE and DHC are controlled directly over Ethernet using the UDP-based protocol IPBus [15]. The front-end ASICs are connected in a JTAG chain and are controlled by a hardware master in the DHE. A middleware library was developed that translates the IPBus and JTAG protocols for the high-level EPICS slow-control network.

The ATCA carrier board provides only a single Ethernet connection for the whole read-out system. All modules on the carrier board are accessible by a unique IP address. This is done by implementing a simple Ethernet hub in the FPGA logic of the DHC to share this connection with the DHE modules. The hub broadcasts all received Ethernet frames to the DHE modules and to the IPBus client on the DHC. The Ethernet frames are transmitted over the same Aurora link that is used to read out data from the DHE. The replies are multiplexed with the data stream in the DHE. The multiplexer assigns high priority

to the data and low priority to the Ethernet stream. The received
replies on the DHC are then transmitted into the slow-control
Ethernet network.
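The strict-priority arbitration between the two streams can be sketched as follows (a software analogy, not the firmware: the DHE multiplexer works word by word on hardware FIFOs, while this sketch drains two Python lists; names are illustrative):

```python
def multiplex(data_frames, ethernet_frames):
    """Merge two frame queues onto one link with strict priority:
    a detector-data frame is always sent before any waiting Ethernet
    reply, so slow control never delays the data stream."""
    sent = []
    while data_frames or ethernet_frames:
        if data_frames:                      # data has absolute priority
            sent.append(("data", data_frames.pop(0)))
        else:                                # link idle: send a reply
            sent.append(("eth", ethernet_frames.pop(0)))
    return sent

print(multiplex(["d1", "d2"], ["e1"]))
# [('data', 'd1'), ('data', 'd2'), ('eth', 'e1')]
```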
The JTAG master is implemented as a two-level system. The hardware master in the DHE consists of a state machine that executes external commands. The control registers of the hardware master are mapped to IPBus registers. The software master is built as an EPICS driver using the asynPortDriver class. The main task of this master is the generation of JTAG commands and their transmission to the hardware master on the DHE over IPBus. The software maintains not only the knowledge of the JTAG registers available in the ASICs, but also of the bit fields inside the registers. Every bit field that controls a specific function in an ASIC is connected to an EPICS register in the software. If a JTAG register access is scheduled, the software constructs the bitstream using the cached values of the bit fields and writes the bitstream commands into the corresponding DHE module over IPBus.

Fig. 7. DEPFET module bonded on the Hybrid6 board. Courtesy of M. Schnell.
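The caching idea behind the software master can be illustrated with a small sketch (not the actual EPICS driver; register layout and field names are hypothetical): each JTAG register is described by named bit fields, writes update only the cache, and the full register word is rebuilt from the cached values so that unrelated fields are preserved when one field changes.

```python
class JtagRegisterCache:
    """Sketch of the software master's bit-field cache.

    `fields` maps a field name to (bit offset, bit width). Writes go to
    the cache; bitstream() assembles the full register word from all
    cached values, as the software master does before shipping the
    bitstream to the hardware master over IPBus.
    """

    def __init__(self, fields):
        self.fields = fields
        self.values = {name: 0 for name in fields}

    def set_field(self, name, value):
        _, width = self.fields[name]
        assert value < (1 << width), "value does not fit in the bit field"
        self.values[name] = value

    def bitstream(self):
        word = 0
        for name, (offset, width) in self.fields.items():
            word |= (self.values[name] & ((1 << width) - 1)) << offset
        return word

# Hypothetical two-field register: an enable bit at offset 0
# and a 4-bit gain setting at offsets 1-4.
reg = JtagRegisterCache({"enable": (0, 1), "gain": (1, 4)})
reg.set_field("gain", 5)
reg.set_field("enable", 1)
print(bin(reg.bitstream()))  # 0b1011: gain 0101 shifted left by one, plus the enable bit
```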

IV. SYSTEM TEST AT DESY IN JANUARY 2014

The first test of the full system was performed at DESY in January 2014. A beam of electrons and positrons produced from the bremsstrahlung of the DESY II accelerator was used for the test. The goal of the campaign was to test the integration of the data-acquisition and online-data-reduction systems of the Belle II vertex detector. The test was performed on a detector prototype consisting of one layer of the pixel detector and four layers of the double-sided silicon strip detector [16]. While the detector provides enough planes for track reconstruction, the AIDA telescope was used to estimate the performance of the detector. Three planes of the telescope were installed before the detector and another three planes were installed downstream. The detector and the telescope were installed inside the Persistent Current superconducting MAGnet (PCMAG), which allowed us to study the detector performance in magnetic fields up to 1 T.

A. Pixel Detector Module

The pixel detector module in the beam test, the Hybrid 6 board (Fig. 7), carries a DEPFET matrix. Three pairs of DHP and DCD ASICs and four SwitcherB ASICs are installed on the module. The main difference of this module compared with the Belle II design is the control of the SwitcherB ASICs: the ASICs are not controlled by the DHP, but by the DHE instead. The Hybrid 6 board offers two additional Infiniband connectors for the SwitcherB control signals.

Fig. 8. SODIMM adapter board.

B. Switcher Control by the DHE Module

The DHE is extended by an adapter board for the SODIMM slot (Fig. 8). This board is connected to the Hybrid 6 board by two Infiniband connectors. The control functionality of the DHE requires the extension of the firmware with two additional cores: the switcher sequencer and the JTAG player. The general-purpose IO pins of the FPGA are used to drive the signals on the adapter board. The faster switcher sequencer signals, which are switched at 250 MHz, are driven by the data strobe (DQS) lines of the DDR3 interface. These lines are already routed as differential pairs with controlled impedance on the DHE PCB and are therefore directly connected to the Infiniband connector. Five of the single-ended data (DQ) lines of the DDR3 interface are used for the slow JTAG communication. To preserve the signal integrity over the long Infiniband cable, the single-ended JTAG signals are converted to differential signals by an LVDS converter chip on the adapter board. The LVDS converter chip is powered by 3.3 V connected to the power pin of the Serial Presence Detect interface on the SODIMM socket.

The sequence of the signals for the SwitcherB ASICs, which controls the rolling-shutter operation of the detector, is generated by the sequencer core. The sequencer is implemented as a dual-port memory. The sequence is programmed by the slow-control system and read continuously until the end of the sequence in the memory is reached or a frame synchronization signal is received. Then the read pointer resets and the frame cycle repeats.

The implementation of the switcher slow control over JTAG imitates the main JTAG master. The second JTAG player in firmware is controlled by a second instance of the software master, which maps the switcher registers to EPICS registers. This eliminates the need for modification of the GUI and high-level logic.

C. Detector Read-Out Chain

The data read-out chain of the detectors is shown in Fig. 9. The fast trigger signal generated by the scintillators is transmitted to the Trigger Logic Unit (TLU) to be used as a level 1 trigger. The TLU assigns consecutive event numbers to the triggers and
transmits them to the B2TT system. The B2TT system passes along this information and triggers the read-out of the pixel detector and the silicon vertex detector.

Fig. 9. Read-out chain at the beam test setup.

Once the trigger is received by the DHC module, the trigger frame is transmitted to the DHE module, where the trigger signal for the front-end electronics is generated. In response to the trigger signal, the DHP chips send up to two data frames in the zero-suppressed format. The frames are then formatted into the DHE format and sent to the DHC. The DHC formats the data in the DHC format and sends them to the Online Selection Node (ONSEN) [17]. A copy of the DHE data is also sent over Ethernet to a standalone DAQ PC, where the data are monitored by the data quality monitor and forwarded to the EUDAQ PC to be merged with the telescope data.

The ONSEN system is designed to perform online data reduction by using the information from the outer detectors of Belle II. The data from the outer detectors are used to generate Regions of Interest (ROI), the intersection points of the particle tracks with the planes of the pixel detector, by extrapolating the tracks down to the interaction point. The ROIs are received by ONSEN from two sources: the FPGA-based silicon-strip-detector-only track finder Data Concentrator (DATCON) and the software-based high-level trigger. Both ROI sources were used in the beam test, although they used data from the same detector. The pixel data were filtered in the ONSEN by checking the data against the ROIs. The filtered data were then sent to the event builder farm, which merged the data streams from the pixel detector and the silicon strip detector.

D. System Performance at the Beam Test

A dry run was performed on the DHE–DHC setup to test the performance of the system before declaring the system operational. An artificial trigger with a defined frequency was generated by the B2TT system, and the data were read out from the DEPFET module and sent to the ONSEN module. No high-level trigger was generated during the dry run, and therefore the run duration was limited by the capacity of the memory in the ONSEN system. The system ran at a trigger rate of 5 kHz with a continuous data stream of 17 MB/s for 6 minutes without errors (Fig. 10).

Fig. 10. Data and trigger rates test at the beam test setup.

The read-out system was in continuous operation for seven days with an average trigger rate of 300 Hz. The rate of the data stream generated by the detector averaged around 200 kB/s. During the beam test, approximately 60 million good events were recorded by the system.

V. CONCLUSIONS

The read-out system for the Belle II pixel detector was developed and integrated into the EPICS slow control and the pixel detector read-out chain. The full read-out chain was evaluated in the beam test at DESY. During the test we built a data acquisition chain which resembles the future Belle II pixel detector read-out chain: an external trigger, data read-out modules, subevent builder, online data reduction using data from the outer detectors, data quality monitoring, event builder, and compatibility with the Belle II analysis software framework [18]. With this setup we successfully operated the DEPFET sensor and gained a better understanding of the system behaviour. The performance of the system was tested at a trigger rate of 5 kHz, which is lower than the designed trigger rate of 30 kHz. The test of the subevent-building algorithm and the system performance measurements at the designed trigger rate remain open issues for the next system test with the final version of the DEPFET sensor in autumn 2015.

REFERENCES

[1] T. Abe et al., "Belle II Technical Design Report," Tsukuba, Japan, KEK Report 2010-1, 2010, arXiv:1011.0352.
[2] Y. Ohnishi et al., "Accelerator design at SuperKEKB," Progr. Theor. Exper. Phys., vol. 2013, no. 3, 2013.
[3] J. Kemmer and G. Lutz, "New detector concepts," Nucl. Instrum. Methods Phys. Res. Sec. A: Accel., Spectrometers, Detectors Assoc. Equip., vol. 253, no. 3, pp. 365-377, 1987.
[4] S. Furletov, "The Belle II pixel vertex detector," J. Instrum., vol. 7, no. 01, p. C01014, 2012.
[5] I. Peric, P. Fischer, J. Knopf, and T. H. H. Nguyen, "DCDB and SWITCHERB, the readout ASICS for Belle II DEPFET pixel detector," in Proc. IEEE Nucl. Sci. Symp. Med. Imaging Conf. (NSS/MIC), Oct. 2011, pp. 1536-1539.
[6] J. Knopf, P. Fischer, C. Kreidl, and I. Peric, "A 256 channel 8-bit current digitizer ASIC for the Belle-II PXD," J. Instrum., vol. 6, no. 01, p. C01085, 2011.
[7] M. Lemarenko et al., "Test results of the data handling processor for the DEPFET pixel vertex detector," J. Instrum., vol. 8, no. 01, p. C01032, 2013.
[8] "Aurora 8B/10B Protocol Specification," SP002, Xilinx, San Jose, CA, USA, 2014.
[9] J. Cachemiche, "Module Management Controller mezzanine board—Specification," no. 3.0, 2011 [Online]. Available: https://espace.cern.ch/ph-dep-ESE-BE-uTCAEvaluationProject/MMC_project/Shared Documents/MMC mezzanine/Specification/MMC_specification_v3.0.pdf
[10] N. D. Dayot, "Development of an ATCA IPMI controller mezzanine board to be used in the ATCA developments for the ATLAS Liquid Argon upgrade," J. Instrum., vol. 7, no. 01, p. C01020, 2012 [Online]. Available: http://stacks.iop.org/1748-0221/7/i=01/a=C01020
[11] M. Nakao, C. Lim, M. Friedl, and T. Uchida, "Minimizing dead time of the Belle II data acquisition system with pipelined trigger flow control," IEEE Trans. Nucl. Sci., vol. 60, no. 5, pp. 3729-3734, Oct. 2013.
[12] M. Nakao, "Timing distribution for the Belle II data acquisition system," J. Instrum., vol. 7, no. 01, p. C01028, 2012.
[13] "Virtex-6 FPGA Memory Interface Solutions," User Guide 406, Xilinx, San Jose, CA, USA, 2011, pp. 17-152.
[14] "Virtex-6 FPGA Memory Resources," User Guide 363, Xilinx, San Jose, CA, USA, 2014, pp. 43-66.
[15] R. Frazier, G. Iles, D. Newbold, and A. Rose, "Software and firmware for controlling CMS trigger and readout hardware via gigabit ethernet," Phys. Proc., vol. 37, pp. 1892-1899, 2012.
[16] M. Friedl et al., "The Belle II silicon vertex detector," Nucl. Instrum. Methods Phys. Res. Sec. A: Accel., Spectrometers, Detectors Assoc. Equip., vol. 732, pp. 83-86, 2013.
[17] B. Spruck et al., "The Belle II pixel detector data acquisition and reduction system," IEEE Trans. Nucl. Sci., vol. 60, no. 5, pp. 3709-3713, Oct. 2013.
[18] D. Kim, "The software library of the coming Belle II experiment and its simulation package," in Proc. IEEE Nucl. Sci. Symp. Med. Imaging Conf. (NSS/MIC), Oct. 2013, pp. 1-4.
