HALCON/.NET
Reference Manual
This manual describes the operators of HALCON, version 8.0.2, in .NET syntax. It was generated on May 13,
2008.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in
any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without prior written
permission of the publisher.
Copyright © 1997-2008 by MVTec Software GmbH, München, Germany
Contents

1 Classification 1
1.1 Gaussian-Mixture-Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
AddSampleClassGmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
ClassifyClassGmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
ClearAllClassGmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
ClearClassGmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
ClearSamplesClassGmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
CreateClassGmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
EvaluateClassGmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
GetParamsClassGmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
GetPrepInfoClassGmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
GetSampleClassGmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
GetSampleNumClassGmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
ReadClassGmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
ReadSamplesClassGmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
TrainClassGmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
WriteClassGmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
WriteSamplesClassGmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.2 Hyperboxes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
ClearSampset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
CloseAllClassBox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
CloseClassBox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
CreateClassBox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
DescriptClassBox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
EnquireClassBox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
EnquireRejectClassBox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
GetClassBoxParam . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
LearnClassBox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
LearnSampsetBox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
ReadClassBox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
ReadSampset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
SetClassBoxParam . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
TestSampsetBox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
WriteClassBox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
1.3 Neural-Nets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
AddSampleClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
ClassifyClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
ClearAllClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
ClearClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
ClearSamplesClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
CreateClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
EvaluateClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
GetParamsClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
GetPrepInfoClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
GetSampleClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
GetSampleNumClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
ReadClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
ReadSamplesClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
TrainClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
WriteClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
WriteSamplesClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
1.4 Support-Vector-Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
AddSampleClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
ClassifyClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
ClearAllClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
ClearClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
ClearSamplesClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
CreateClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
GetParamsClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
GetPrepInfoClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
GetSampleClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
GetSampleNumClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
GetSupportVectorClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
GetSupportVectorNumClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
ReadClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
ReadSamplesClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
ReduceClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
TrainClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
WriteClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
WriteSamplesClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
2 File 63
2.1 Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
ReadImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
ReadSequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
WriteImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
2.2 Misc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
DeleteFile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
FileExists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
ListFiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
ReadWorldFile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
2.3 Region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
ReadRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
WriteRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
2.4 Text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
CloseAllFiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
CloseFile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
FnewLine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
FreadChar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
FreadLine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
FreadString . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
FwriteString . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
OpenFile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
2.5 Tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
ReadTuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
WriteTuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
2.6 XLD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
ReadContourXldArcInfo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
ReadContourXldDxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
ReadPolygonXldArcInfo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
ReadPolygonXldDxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
WriteContourXldArcInfo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
WriteContourXldDxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
WritePolygonXldArcInfo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
WritePolygonXldDxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
3 Filter 89
3.1 Arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
AbsImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
AddImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
DivImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
InvertImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
MaxImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
MinImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
MultImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
ScaleImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
SqrtImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
SubImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
3.2 Bit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
BitAnd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
BitLshift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
BitMask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
BitNot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
BitOr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
BitRshift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
BitSlice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
BitXor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
3.3 Color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
CfaToRgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
GenPrincipalCompTrans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
LinearTransColor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
PrincipalComp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Rgb1ToGray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Rgb3ToGray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
TransFromRgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
TransToRgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
3.4 Edges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
CloseEdges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
CloseEdgesLength . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
DerivateGauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
DiffOfGauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
EdgesColor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
EdgesColorSubPix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
EdgesImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
EdgesSubPix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
FreiAmp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
FreiDir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
HighpassImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
InfoEdges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
KirschAmp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
KirschDir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
Laplace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
LaplaceOfGauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
PrewittAmp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
PrewittDir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
Roberts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
RobinsonAmp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
RobinsonDir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
SobelAmp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
SobelDir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
3.5 Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
AdjustMosaicImages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
CoherenceEnhancingDiff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Emphasize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
EquHistoImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
Illuminate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
MeanCurvatureFlow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
ScaleImageMax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
ShockFilter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
3.6 FFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
ConvolFft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
ConvolGabor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
CorrelationFft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
EnergyGabor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
FftGeneric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
FftImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
FftImageInv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
GenBandfilter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
GenBandpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
GenDerivativeFilter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
GenFilterMask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
GenGabor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
GenGaussFilter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
GenHighpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
GenLowpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
GenSinBandpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
GenStdBandpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
OptimizeFftSpeed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
OptimizeRftSpeed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
PhaseDeg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
PhaseRad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
PowerByte . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
PowerLn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
PowerReal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
ReadFftOptimizationData . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
RftGeneric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
WriteFftOptimizationData . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
3.7 Geometric-Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
AffineTransImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
AffineTransImageSize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
GenBundleAdjustedMosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
GenCubeMapMosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
GenProjectiveMosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
GenSphericalMosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
MapImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
MirrorImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
PolarTransImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
PolarTransImageExt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
PolarTransImageInv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
ProjectiveTransImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
ProjectiveTransImageSize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
RotateImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
ZoomImageFactor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
ZoomImageSize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
3.8 Inpainting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
HarmonicInterpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
InpaintingAniso . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
InpaintingCed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
InpaintingCt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
InpaintingMcf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
InpaintingTexture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
3.9 Lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
BandpassImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
LinesColor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
LinesFacet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
LinesGauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
3.10 Match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
ExhaustiveMatch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
ExhaustiveMatchMg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
GenGaussPyramid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
Monotony . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
3.11 Misc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
ConvolImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
ExpandDomainGray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
GrayInside . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
GraySkeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
LutTrans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
TopographicSketch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
3.12 Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
AddNoiseDistribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
AddNoiseWhite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
GaussDistribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
NoiseDistributionMean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
SpDistribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
3.13 Optical-Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
OpticalFlowMg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
UnwarpImageVectorField . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
VectorFieldLength . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
3.14 Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
CornerResponse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
DotsImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
PointsFoerstner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
PointsHarris . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
PointsSojka . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
3.15 Smoothing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
AnisotropeDiff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
AnisotropicDiffusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
BinomialFilter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
EliminateMinMax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
EliminateSp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
FillInterlace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
GaussImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
InfoSmooth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
IsotropicDiffusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
MeanImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
MeanN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
MeanSp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
MedianImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
MedianSeparate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
MedianWeighted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
MidrangeImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
RankImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
SigmaImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
SmoothImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
TrimmedMean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
3.16 Texture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
DeviationImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
EntropyImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
TextureLaws . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
3.17 Wiener-Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
GenPsfDefocus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
GenPsfMotion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
SimulateDefocus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
SimulateMotion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
WienerFilter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
WienerFilterNi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
4 Graphics 305
4.1 Drawing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
DragRegion1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
DragRegion2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
DragRegion3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
DrawCircle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
DrawCircleMod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
DrawEllipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
DrawEllipseMod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
DrawLine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
DrawLineMod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
DrawNurbs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
DrawNurbsInterp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
DrawNurbsInterpMod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
DrawNurbsMod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
DrawPoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
DrawPointMod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
DrawPolygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
DrawRectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
DrawRectangle1Mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
DrawRectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
DrawRectangle2Mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
DrawRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
DrawXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
DrawXldMod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
4.2 Gnuplot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
GnuplotClose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
GnuplotOpenFile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
GnuplotOpenPipe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
GnuplotPlotCtrl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
GnuplotPlotFunct1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
GnuplotPlotImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
4.3 LUT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
DispLut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
DrawLut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
GetFixedLut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
GetLut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
GetLutStyle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
QueryLut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
SetFixedLut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
SetLut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
SetLutStyle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
WriteLut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
4.4 Mouse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
GetMbutton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
GetMposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
GetMshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
QueryMshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
SetMshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
4.5 Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
DispArc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
DispArrow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
DispChannel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
DispCircle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
DispColor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
DispCross . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
DispDistribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
DispEllipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
DispImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
DispLine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
DispObj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
DispPolygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
DispRectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
DispRectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
DispRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
DispXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
4.6 Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
GetComprise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
GetDraw . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
GetFix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
GetHsi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
GetIcon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
GetInsert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
GetLineApprox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
GetLineStyle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
GetLineWidth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
GetPaint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
GetPart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
GetPartStyle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
GetPixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
GetRgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
GetShape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
QueryAllColors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
QueryColor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
QueryColored . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
QueryGray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
QueryInsert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
QueryLineWidth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
QueryPaint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
QueryShape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
SetColor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
SetColored . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
SetComprise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
SetDraw . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
SetFix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
SetGray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
SetHsi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
SetIcon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
SetInsert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
SetLineApprox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
SetLineStyle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
SetLineWidth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
SetPaint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
SetPart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
SetPartStyle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
SetPixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
SetRgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
SetShape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
4.7 Text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
GetFont . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
GetStringExtents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
GetTposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
GetTshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
NewLine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
QueryFont . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
QueryTshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
ReadChar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
ReadString . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
SetFont . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
SetTposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
SetTshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
WriteString . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
4.8 Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
ClearRectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
ClearWindow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
CloseWindow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
CopyRectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
DumpWindow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
DumpWindowImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
GetOsWindowHandle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
GetWindowAttr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
GetWindowExtents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
GetWindowPointer3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
GetWindowType . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
MoveRectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
NewExternWindow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
OpenTextwindow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
OpenWindow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
QueryWindowType . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
SetWindowAttr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
SetWindowDc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
SetWindowExtents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
SetWindowType . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
SlideImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
5 Image 439
5.1 Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
GetGrayval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
GetImagePointer1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
GetImagePointer1Rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
GetImagePointer3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
GetImageTime . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
5.2 Acquisition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
CloseAllFramegrabbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
CloseFramegrabber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
GetFramegrabberLut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
GetFramegrabberParam . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
GrabData . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
GrabDataAsync . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
GrabImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 449
GrabImageAsync . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
GrabImageStart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
InfoFramegrabber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
OpenFramegrabber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
SetFramegrabberLut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
SetFramegrabberParam . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
5.3 Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
AccessChannel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
AppendChannel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
ChannelsToImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
Compose2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
Compose3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
Compose4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
Compose5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
Compose6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
Compose7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
CountChannels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Decompose2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
Decompose3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
Decompose4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
Decompose5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
Decompose6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
Decompose7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
ImageToChannels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
5.4 Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
CopyImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
GenImage1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
GenImage1Extern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
GenImage1Rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
GenImage3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
GenImageConst . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
GenImageGrayRamp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
GenImageInterleaved . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
GenImageProto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
GenImageSurfaceFirstOrder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
GenImageSurfaceSecondOrder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
RegionToBin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
RegionToLabel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
RegionToMean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
5.5 Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
AddChannels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
ChangeDomain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
FullDomain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
GetDomain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
Rectangle1Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
ReduceDomain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
5.6 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
AreaCenterGray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
CoocFeatureImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
CoocFeatureMatrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
EllipticAxisGray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494
EntropyGray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
EstimateNoise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
FitSurfaceFirstOrder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
FitSurfaceSecondOrder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
FuzzyEntropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
FuzzyPerimeter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502
GenCoocMatrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504
GrayHisto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
GrayHistoAbs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
GrayProjections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
Histo2dim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
Intensity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
MinMaxGray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
MomentsGrayPlane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
PlaneDeviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
SelectGray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
ShapeHistoAll . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
ShapeHistoPoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
5.7 Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
ChangeFormat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
CropDomain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
CropDomainRel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
CropPart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
CropRectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
TileChannels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
TileImages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
TileImagesOffset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
5.8 Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
OverpaintGray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
OverpaintRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526
PaintGray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
PaintRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528
PaintXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
SetGrayval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
5.9 Type-Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
ComplexToReal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
ConvertImageType . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
RealToComplex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
RealToVectorField . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
VectorFieldToReal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
6 Lines 535
6.1 Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
ApproxChain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
ApproxChainSimple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
6.2 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
LineOrientation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
LinePosition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
PartitionLines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
SelectLines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
SelectLinesLongest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
7 Matching 547
7.1 Component-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
ClearAllComponentModels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
ClearAllTrainingComponents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
ClearComponentModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
ClearTrainingComponents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
ClusterModelComponents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
CreateComponentModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552
CreateTrainedComponentModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
FindComponentModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
GenInitialComponents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
GetComponentModelParams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 569
GetComponentModelTree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570
GetComponentRelations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 572
GetFoundComponentModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 574
GetTrainingComponents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 576
InspectClusteredComponents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
ModifyComponentRelations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
ReadComponentModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581
ReadTrainingComponents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
TrainModelComponents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
WriteComponentModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 588
WriteTrainingComponents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 588
7.2 Correlation-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
ClearAllNccModels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
ClearNccModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
CreateNccModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
FindNccModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
GetNccModelOrigin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 595
GetNccModelParams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 596
ReadNccModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
SetNccModelOrigin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
WriteNccModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 598
7.3 Gray-Value-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 598
AdaptTemplate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 598
BestMatch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
BestMatchMg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
BestMatchPreMg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 602
BestMatchRot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
BestMatchRotMg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 605
ClearAllTemplates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 606
ClearTemplate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
CreateTemplate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
CreateTemplateRot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 609
FastMatch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610
FastMatchMg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611
ReadTemplate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
SetOffsetTemplate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613
SetReferenceTemplate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 614
WriteTemplate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 614
7.4 Shape-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 615
ClearAllShapeModels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 615
ClearShapeModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 615
CreateAnisoShapeModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 616
CreateScaledShapeModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
CreateShapeModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
DetermineShapeModelParams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 629
FindAnisoShapeModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 632
FindAnisoShapeModels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 637
FindScaledShapeModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 642
FindScaledShapeModels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
FindShapeModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
FindShapeModels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
GetShapeModelContours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 659
GetShapeModelOrigin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 659
GetShapeModelParams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 660
InspectShapeModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 661
ReadShapeModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663
SetShapeModelOrigin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663
WriteShapeModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664
8 Matching-3D 665
AffineTransObjectModel3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 665
ClearAllObjectModel3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 666
ClearAllShapeModel3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 666
ClearObjectModel3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 666
ClearShapeModel3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
ConvertPoint3dCartToSpher . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
ConvertPoint3dSpherToCart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 669
CreateCamPoseLookAtPoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671
CreateShapeModel3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 673
FindShapeModel3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 678
GetObjectModel3dParams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 682
GetShapeModel3dContours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 683
GetShapeModel3dParams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 684
ProjectObjectModel3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 685
ProjectShapeModel3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 686
ReadObjectModel3dDxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 688
ReadShapeModel3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 689
TransPoseShapeModel3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 690
WriteShapeModel3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 691
9 Morphology 693
9.1 Gray-Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 693
DualRank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 693
GenDiscSe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 695
GrayBothat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 695
GrayClosing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 696
GrayClosingRect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 697
GrayClosingShape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 698
GrayDilation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 699
GrayDilationRect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
GrayDilationShape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
GrayErosion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 701
GrayErosionRect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 702
GrayErosionShape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 703
GrayOpening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 704
GrayOpeningRect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 704
GrayOpeningShape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705
GrayRangeRect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 707
GrayTophat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 707
ReadGraySe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 708
9.2 Region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 709
BottomHat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 709
Boundary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 710
Closing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 711
ClosingCircle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 713
ClosingGolay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 714
ClosingRectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 715
Dilation1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
Dilation2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 717
DilationCircle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 719
DilationGolay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 720
DilationRectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 721
DilationSeq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 723
Erosion1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 724
Erosion2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 726
ErosionCircle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 727
ErosionGolay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 728
ErosionRectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 730
ErosionSeq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 731
Fitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 732
GenStructElements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 733
GolayElements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 734
HitOrMiss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 737
HitOrMissGolay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 738
HitOrMissSeq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 739
MinkowskiAdd1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 740
MinkowskiAdd2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 741
MinkowskiSub1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 743
MinkowskiSub2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 744
MorphHat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 745
MorphSkeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 747
MorphSkiz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 748
Opening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 749
OpeningCircle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 750
OpeningGolay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 751
OpeningRectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 752
OpeningSeg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 753
Pruning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 754
Thickening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755
ThickeningGolay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 757
ThickeningSeq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 758
Thinning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
ThinningGolay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 760
ThinningSeq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 761
TopHat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 762
10 OCR 765
10.1 Hyperboxes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 765
CloseAllOcrs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 765
CloseOcr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 765
CreateOcrClassBox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 766
DoOcrMulti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 768
DoOcrSingle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 769
InfoOcrClassBox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 770
OcrChangeChar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 771
OcrGetFeatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 771
ReadOcr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 772
TestdOcrClassBox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 773
TraindOcrClassBox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 774
TrainfOcrClassBox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 774
WriteOcr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 775
10.2 Lexica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 776
ClearAllLexica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 776
ClearLexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 776
CreateLexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 776
ImportLexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 777
InspectLexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 778
LookupLexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 778
SuggestLexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 779
10.3 Neural-Nets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 779
ClearAllOcrClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 779
ClearOcrClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 780
CreateOcrClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 780
DoOcrMultiClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 784
DoOcrSingleClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 785
DoOcrWordMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 786
GetFeaturesOcrClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 788
GetParamsOcrClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 789
GetPrepInfoOcrClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 790
ReadOcrClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 791
TrainfOcrClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 792
WriteOcrClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 793
10.4 Support-Vector-Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 794
ClearAllOcrClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 794
ClearOcrClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 794
CreateOcrClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 795
DoOcrMultiClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 799
DoOcrSingleClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 799
DoOcrWordSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 800
GetFeaturesOcrClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 802
GetParamsOcrClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 803
GetPrepInfoOcrClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 804
GetSupportVectorNumOcrClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 805
GetSupportVectorOcrClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 806
ReadOcrClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 807
ReduceOcrClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 807
TrainfOcrClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 808
WriteOcrClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 809
10.5 Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 810
SegmentCharacters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 810
SelectCharacters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 812
TextLineOrientation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 815
TextLineSlant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 816
10.6 Training-Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 818
AppendOcrTrainf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 818
ConcatOcrTrainf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 819
ReadOcrTrainf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 819
ReadOcrTrainfNames . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 820
ReadOcrTrainfSelect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 821
WriteOcrTrainf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 821
WriteOcrTrainfImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 822
11 Object 825
11.1 Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 825
CountObj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 825
GetChannelInfo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 826
GetObjClass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 827
TestEqualObj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 828
TestObjDef . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 829
11.2 Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 830
ClearObj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 830
ConcatObj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 831
CopyObj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 832
GenEmptyObj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 833
IntegerToObj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 834
ObjToInteger . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 835
SelectObj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 836
12 Regions 839
12.1 Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 839
GetRegionChain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 839
GetRegionContour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 840
GetRegionConvex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 840
GetRegionPoints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 841
GetRegionPolygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 842
GetRegionRuns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 843
12.2 Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 844
GenCheckerRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 844
GenCircle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 845
GenEllipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 846
GenEmptyRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 848
GenGridRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 848
GenRandomRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 850
GenRandomRegions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 851
GenRectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 853
GenRectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 854
GenRegionContourXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 856
GenRegionHisto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 856
GenRegionHline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 857
GenRegionLine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 858
GenRegionPoints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 859
GenRegionPolygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 860
GenRegionPolygonFilled . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 861
GenRegionPolygonXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 862
GenRegionRuns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 863
LabelToRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 864
12.3 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 865
AreaCenter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 865
Circularity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 866
Compactness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 867
ConnectAndHoles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 868
Contlength . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 868
Convexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 870
DiameterRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 871
Eccentricity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 872
EllipticAxis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 873
EulerNumber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 874
FindNeighbors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 875
GetRegionIndex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 876
GetRegionThickness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 876
HammingDistance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 877
HammingDistanceNorm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 878
InnerCircle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 879
InnerRectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 881
MomentsRegion2nd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 881
MomentsRegion2ndInvar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 883
MomentsRegion2ndRelInvar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 884
MomentsRegion3rd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 884
MomentsRegion3rdInvar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 885
MomentsRegionCentral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 886
MomentsRegionCentralInvar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 887
OrientationRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 888
Rectangularity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 889
Roundness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 890
RunlengthDistribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 891
RunlengthFeatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 892
SelectRegionPoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 893
SelectRegionSpatial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 894
SelectShape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 895
SelectShapeProto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 898
SelectShapeStd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 900
SmallestCircle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 901
SmallestRectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 902
SmallestRectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 903
SpatialRelation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 905
12.4 Geometric-Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 906
AffineTransRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 906
MirrorRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 907
MoveRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 908
PolarTransRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 909
PolarTransRegionInv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 911
ProjectiveTransRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 913
TransposeRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 914
ZoomRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 915
12.5 Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 916
Complement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 916
Difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 917
Intersection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 918
SymmDifference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 918
Union1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 919
Union2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 920
12.6 Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 921
TestEqualRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 921
TestRegionPoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 921
TestSubsetRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 922
12.7 Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 923
BackgroundSeg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 923
ClipRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 924
ClipRegionRel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 925
Connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 926
DistanceTransform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 927
EliminateRuns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 928
ExpandRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 929
FillUp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 931
FillUpShape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 931
HammingChangeRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 932
Interjacent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 933
JunctionsSkeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 935
MergeRegionsLineScan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 936
PartitionDynamic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 937
PartitionRectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 937
RankRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 938
RemoveNoiseRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 940
ShapeTrans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 940
Skeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 941
SortRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 942
SplitSkeletonLines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 943
SplitSkeletonRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 944
13 Segmentation 947
13.1 Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 947
AddSamplesImageClassGmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 947
AddSamplesImageClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 948
AddSamplesImageClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 949
Class2dimSup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 950
Class2dimUnsup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 952
ClassNdimBox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 954
ClassNdimNorm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 955
ClassifyImageClassGmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 957
ClassifyImageClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 958
ClassifyImageClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 959
LearnNdimBox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 960
LearnNdimNorm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 961
13.2 Edges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 963
DetectEdgeSegments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 963
HysteresisThreshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 964
NonmaxSuppressionAmp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 966
NonmaxSuppressionDir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 966
13.3 Regiongrowing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 968
ExpandGray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 968
ExpandGrayRef . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 970
ExpandLine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 972
Regiongrowing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 973
RegiongrowingMean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 975
RegiongrowingN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 976
13.4 Threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 980
AutoThreshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 980
BinThreshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 981
CharThreshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 982
CheckDifference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 983
DualThreshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 985
DynThreshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 987
FastThreshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 988
HistoToThresh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 989
Threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 991
ThresholdSubPix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 992
VarThreshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 993
ZeroCrossing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 994
ZeroCrossingSubPix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 995
13.5 Topography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 996
CriticalPointsSubPix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 996
LocalMax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 997
LocalMaxSubPix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 998
LocalMin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 999
LocalMinSubPix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1000
Lowlands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1001
LowlandsCenter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1002
Plateaus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1003
PlateausCenter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1004
Pouring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1005
SaddlePointsSubPix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1007
Watersheds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1008
WatershedsThreshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1009
14 System 1011
14.1 Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1011
CountRelation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1011
GetModules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1012
ResetObjDb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1013
14.2 Error-Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1014
GetCheck . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1014
GetErrorText . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1014
GetSpy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1015
QuerySpy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1016
SetCheck . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1016
SetSpy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1018
14.3 Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1020
GetChapterInfo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1020
GetKeywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1020
GetOperatorInfo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1021
GetOperatorName . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1022
GetParamInfo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1023
GetParamNames . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1025
GetParamNum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1025
GetParamTypes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1026
QueryOperatorInfo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1027
QueryParamInfo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1028
SearchOperator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1028
14.4 Operating-System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1029
CountSeconds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1029
SystemCall . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1029
WaitSeconds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1030
14.5 Parallelization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1030
CheckParHwPotential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1030
LoadParKnowledge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1031
StoreParKnowledge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1032
14.6 Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1032
GetSystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1032
SetSystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1037
14.7 Serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1042
ClearSerial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1042
CloseAllSerials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1043
CloseSerial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1043
GetSerialParam . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1044
OpenSerial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1045
ReadSerial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1045
SetSerialParam . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1046
WriteSerial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1047
14.8 Sockets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1048
CloseSocket . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1048
GetNextSocketDataType . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1048
GetSocketDescriptor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1049
GetSocketTimeout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1049
OpenSocketAccept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1050
OpenSocketConnect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1051
ReceiveImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1052
ReceiveRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1052
ReceiveTuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1053
ReceiveXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1053
SendImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1054
SendRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1054
SendTuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1055
SendXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1055
SetSocketTimeout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1056
SocketAcceptConnect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1057
15 Tools 1059
15.1 2D-Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1059
AffineTransPixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1059
AffineTransPoint2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1060
BundleAdjustMosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1061
HomMat2dCompose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1064
HomMat2dDeterminant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1064
HomMat2dIdentity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1065
HomMat2dInvert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1065
HomMat2dRotate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1066
HomMat2dRotateLocal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1067
HomMat2dScale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1069
HomMat2dScaleLocal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1070
HomMat2dSlant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1071
HomMat2dSlantLocal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1073
HomMat2dToAffinePar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1074
HomMat2dTranslate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1075
HomMat2dTranslateLocal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1077
HomMat2dTranspose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1078
HomMat3dProject . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1078
HomVectorToProjHomMat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1080
ProjMatchPointsRansac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1082
ProjectiveTransPixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1084
ProjectiveTransPoint2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1085
VectorAngleToRigid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1086
VectorFieldToHomMat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1088
VectorToHomMat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1088
VectorToProjHomMat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1089
VectorToRigid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1091
VectorToSimilarity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1092
15.2 3D-Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1093
AffineTransPoint3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1093
ConvertPoseType . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1095
CreatePose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1096
GetPoseType . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1100
HomMat3dCompose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1100
HomMat3dIdentity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1101
HomMat3dInvert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1102
HomMat3dRotate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1102
HomMat3dRotateLocal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1104
HomMat3dScale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1106
HomMat3dScaleLocal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1108
HomMat3dToPose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1109
HomMat3dTranslate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1110
HomMat3dTranslateLocal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1111
PoseToHomMat3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1112
ReadPose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1113
SetOriginPose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1114
WritePose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1115
15.3 Background-Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1116
CloseAllBgEsti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1116
CloseBgEsti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1117
CreateBgEsti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1118
GetBgEstiParams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1120
GiveBgEsti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1122
RunBgEsti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1123
SetBgEstiParams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1124
UpdateBgEsti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1126
15.4 Barcode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1127
ClearAllBarCodeModels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1127
ClearBarCodeModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1128
CreateBarCodeModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1128
FindBarCode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1129
GetBarCodeObject . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1130
GetBarCodeParam . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1131
GetBarCodeResult . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1133
SetBarCodeParam . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1134
15.5 Calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1136
CaltabPoints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1136
CamMatToCamPar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1137
CamParToCamMat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1138
CameraCalibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1139
ChangeRadialDistortionCamPar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1147
ChangeRadialDistortionContoursXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1148
ChangeRadialDistortionImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1149
ContourToWorldPlaneXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1150
CreateCaltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1151
DispCaltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1153
FindCaltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1155
FindMarksAndPose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1156
GenCaltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1158
GenImageToWorldPlaneMap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1161
GenRadialDistortionMap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1164
GetCirclePose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1165
GetLineOfSight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1166
GetRectanglePose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1168
HandEyeCalibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1171
ImagePointsToWorldPlane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1179
ImageToWorldPlane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1180
Project3dPoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1182
RadiometricSelfCalibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1184
ReadCamPar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1187
SimCaltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1189
StationaryCameraSelfCalibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1191
WriteCamPar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1196
15.6 Datacode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1199
ClearAllDataCode2dModels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1199
ClearDataCode2dModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1200
CreateDataCode2dModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1200
FindDataCode2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1203
GetDataCode2dObjects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1207
GetDataCode2dParam . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1209
GetDataCode2dResults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1211
QueryDataCode2dParams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1218
ReadDataCode2dModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1219
SetDataCode2dParam . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1220
WriteDataCode2dModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1225
15.7 Fourier-Descriptor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1226
AbsInvarFourierCoeff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1226
Fourier1dim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1227
Fourier1dimInv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1228
InvarFourierCoeff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1229
MatchFourierCoeff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1230
MoveContourOrig . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1231
PrepContourFourier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1231
15.8 Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1232
AbsFunct1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1232
ComposeFunct1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1233
CreateFunct1dArray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1233
CreateFunct1dPairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1234
DerivateFunct1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1235
DistanceFunct1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1235
Funct1dToPairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1236
GetPairFunct1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1236
GetYValueFunct1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1237
IntegrateFunct1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1237
InvertFunct1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1238
LocalMinMaxFunct1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1238
MatchFunct1dTrans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1239
NegateFunct1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1240
NumPointsFunct1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1241
ReadFunct1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1241
SampleFunct1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1242
ScaleYFunct1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1242
SmoothFunct1dGauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1243
SmoothFunct1dMean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1244
TransformFunct1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1244
WriteFunct1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1245
XRangeFunct1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1245
YRangeFunct1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1246
ZeroCrossingsFunct1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1246
15.9 Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1247
AngleLl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1247
AngleLx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1248
DistanceCc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1249
DistanceCcMin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1250
DistanceLc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1251
DistanceLr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1252
DistancePc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1253
DistancePl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1253
DistancePp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1255
DistancePr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1256
DistancePs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1257
DistanceRrMin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1258
DistanceRrMinDil . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1259
DistanceSc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1260
DistanceSl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1261
DistanceSr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1262
DistanceSs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1263
GetPointsEllipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1265
IntersectionLl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1266
ProjectionPl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1267
15.10 Grid-Rectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1268
ConnectGridPoints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1268
CreateRectificationGrid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1269
FindRectificationGrid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1270
GenArbitraryDistortionMap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1271
GenGridRectificationMap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1272
15.11 Hough . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1274
HoughCircleTrans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1274
HoughCircles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1274
HoughLineTrans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1275
HoughLineTransDir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1276
HoughLines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1277
HoughLinesDir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1278
SelectMatchingLines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1280
15.12 Image-Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1281
ClearAllVariationModels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1281
ClearTrainDataVariationModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1282
ClearVariationModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1282
CompareExtVariationModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1283
CompareVariationModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1284
CreateVariationModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1285
GetThreshImagesVariationModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1287
GetVariationModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1288
PrepareDirectVariationModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1288
PrepareVariationModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1290
ReadVariationModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1291
TrainVariationModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1292
WriteVariationModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1293
15.13 Kalman-Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1293
FilterKalman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1293
ReadKalman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1297
SensorKalman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1300
UpdateKalman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1301
15.14 Measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1303
CloseAllMeasures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1303
CloseMeasure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1303
FuzzyMeasurePairing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1304
FuzzyMeasurePairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1307
FuzzyMeasurePos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1309
GenMeasureArc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1311
GenMeasureRectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1313
MeasurePairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1316
MeasurePos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1318
MeasureProjection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1320
MeasureThresh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1320
ResetFuzzyMeasure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1322
SetFuzzyMeasure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1322
SetFuzzyMeasureNormPair . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1324
TranslateMeasure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1326
15.15 OCV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1327
CloseAllOcvs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1327
CloseOcv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1327
CreateOcvProj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1328
DoOcvSimple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1329
ReadOcv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1330
TraindOcvProj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1331
WriteOcv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1332
15.16 Shape-from . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1333
DepthFromFocus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1333
EstimateAlAm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1334
EstimateSlAlLr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1335
EstimateSlAlZc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1335
EstimateTiltLr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1336
EstimateTiltZc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1336
PhotStereo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1337
SelectGrayvaluesFromChannels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1337
SfsModLr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1338
SfsOrigLr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1340
SfsPentland . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1341
ShadeHeightField . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1342
15.17 Stereo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1344
BinocularCalibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1344
BinocularDisparity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1347
BinocularDistance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1350
DisparityToDistance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1353
DisparityToPoint3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1354
DistanceToDisparity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1355
EssentialToFundamentalMatrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1356
GenBinocularProjRectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1358
GenBinocularRectificationMap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1360
IntersectLinesOfSight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1363
MatchEssentialMatrixRansac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1364
MatchFundamentalMatrixRansac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1368
MatchRelPoseRansac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1372
Reconst3dFromFundamentalMatrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1376
RelPoseToFundamentalMatrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1377
VectorToEssentialMatrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1379
VectorToFundamentalMatrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1381
VectorToRelPose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1384
15.18 Tools-Legacy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1386
Decode1dBarCode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1386
Decode2dBarCode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1388
Discrete1dBarCode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1389
Find1dBarCode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1390
Find1dBarCodeRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1394
Find1dBarCodeScanline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1396
Find2dBarCode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1398
Gen1dBarCodeDescr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1401
Gen1dBarCodeDescrGen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1403
Gen2dBarCodeDescr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1404
Get1dBarCode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1406
Get1dBarCodeScanline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1407
Get2dBarCode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1410
Get2dBarCodePos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1414
16 Tuple 1417
16.1 Arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1417
TupleAbs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1417
TupleAcos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1417
TupleAdd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1418
TupleAsin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1418
TupleAtan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1419
TupleAtan2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1419
TupleCeil . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1420
TupleCos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1420
TupleCosh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1420
TupleCumul . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1421
TupleDeg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1421
TupleDiv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1422
TupleExp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1422
TupleFabs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1423
TupleFloor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1423
TupleFmod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1423
TupleLdexp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1424
TupleLog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1424
TupleLog10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1425
TupleMax2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1425
TupleMin2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1426
TupleMod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1426
TupleMult . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1427
TupleNeg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1427
TuplePow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1428
TupleRad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1428
TupleSgn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1429
TupleSin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1429
TupleSinh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1429
TupleSqrt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1430
TupleSub . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1430
TupleTan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1431
TupleTanh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1431
16.2 Bit-Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1431
TupleBand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1431
TupleBnot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1432
TupleBor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1432
TupleBxor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1433
TupleLsh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1433
TupleRsh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1434
16.3 Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1435
TupleEqual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1435
TupleGreater . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1435
TupleGreaterEqual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1436
TupleLess . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1436
TupleLessEqual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1437
TupleNotEqual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1437
16.4 Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1438
TupleChr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1438
TupleChrt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1438
TupleInt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1439
TupleIsNumber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1439
TupleNumber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1439
TupleOrd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1440
TupleOrds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1440
TupleReal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1441
TupleRound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1441
TupleString . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1441
16.5 Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1443
TupleConcat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1443
TupleGenConst . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1443
TupleRand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1444
16.6 Element-Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1444
TupleInverse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1444
TupleSort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1445
TupleSortIndex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1445
16.7 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1446
TupleDeviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1446
TupleLength . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1446
TupleMax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1446
TupleMean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1447
TupleMedian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1447
TupleMin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1448
TupleSum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1448
16.8 Logical-Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1449
TupleAnd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1449
TupleNot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1449
TupleOr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1450
TupleXor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1450
16.9 Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1451
TupleFind . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1451
TupleFirstN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1451
TupleLastN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1452
TupleRemove . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1452
TupleSelect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1453
TupleSelectRange . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1453
TupleSelectRank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1454
TupleStrBitSelect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1454
TupleUniq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1455
16.10 String-Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1455
TupleEnvironment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1455
TupleRegexpMatch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1456
TupleRegexpReplace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1458
TupleRegexpSelect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1459
TupleRegexpTest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1459
TupleSplit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1460
TupleStrFirstN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1461
TupleStrLastN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1461
TupleStrchr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1462
TupleStrlen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1463
TupleStrrchr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1463
TupleStrrstr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1464
TupleStrstr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1464
17 XLD 1467
17.1 Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1467
GetContourXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1467
GetLinesXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1467
GetParallelsXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1468
GetPolygonXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1469
17.2 Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1470
GenContourNurbsXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1470
GenContourPolygonRoundedXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1472
GenContourPolygonXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1472
GenContourRegionXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1473
GenContoursSkeletonXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1474
GenCrossContourXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1475
GenEllipseContourXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1476
GenParallelsXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1477
GenPolygonsXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1478
GenRectangle2ContourXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1479
ModParallelsXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1480
17.3 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1481
AreaCenterPointsXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1481
AreaCenterXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1482
CircularityXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1483
CompactnessXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1484
ContourPointNumXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1485
ConvexityXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1485
DiameterXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1486
DistEllipseContourPointsXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1487
DistEllipseContourXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1488
DistRectangle2ContourPointsXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1490
EccentricityPointsXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1491
EccentricityXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1492
EllipticAxisPointsXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1493
EllipticAxisXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1494
FitCircleContourXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1496
FitEllipseContourXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1498
FitLineContourXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1501
FitRectangle2ContourXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1503
GetContourAngleXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1505
GetContourAttribXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1506
GetContourGlobalAttribXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1507
GetRegressParamsXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1507
InfoParallelsXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1508
LengthXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1509
LocalMaxContoursXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1510
MaxParallelsXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1511
MomentsAnyPointsXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1511
MomentsAnyXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1513
MomentsPointsXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1514
MomentsXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1515
OrientationPointsXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1516
OrientationXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1517
QueryContourAttribsXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1518
QueryContourGlobalAttribsXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1518
SelectContoursXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1519
SelectShapeXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1520
SelectXldPoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1522
SmallestCircleXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1523
SmallestRectangle1Xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1524
SmallestRectangle2Xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1525
TestSelfIntersectionXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1526
TestXldPoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1526
17.4 Geometric-Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1527
AffineTransContourXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1527
AffineTransPolygonXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1528
GenParallelContourXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1529
PolarTransContourXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1530
PolarTransContourXldInv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1532
ProjectiveTransContourXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1534
17.5 Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1534
DifferenceClosedContoursXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1534
DifferenceClosedPolygonsXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1535
IntersectionClosedContoursXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1536
IntersectionClosedPolygonsXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1537
SymmDifferenceClosedContoursXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1538
SymmDifferenceClosedPolygonsXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1539
Union2ClosedContoursXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1540
Union2ClosedPolygonsXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1541
17.6 Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1542
AddNoiseWhiteContourXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1542
ClipContoursXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1543
CloseContoursXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1544
CombineRoadsXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1545
CropContoursXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1546
MergeContLineScanXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1547
RegressContoursXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1548
SegmentContoursXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1549
ShapeTransXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1551
SmoothContoursXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1552
SortContoursXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1552
SplitContoursXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1553
UnionAdjacentContoursXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1554
UnionCocircularContoursXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1555
UnionCollinearContoursExtXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1557
UnionCollinearContoursXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1559
UnionStraightContoursHistoXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1561
UnionStraightContoursXld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1562
18 Classes 1565
18.1 HBarCode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1565
18.2 HBgEsti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1565
18.3 HClassBox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1566
18.4 HClassGmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1567
18.5 HClassMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1568
18.6 HClassSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1569
18.7 HComponentModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1570
18.8 HComponentTraining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1570
18.9 HDataCode2D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1571
18.10 HFile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1572
18.11 HFramegrabber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1572
18.12 HFunction1D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1573
18.13 HGnuplot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1574
18.14 HHomMat2D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1575
18.15 HHomMat3D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1577
18.16 HImage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1578
18.17 HInfo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1597
18.18 HLexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1598
18.19 HMeasure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1599
18.20 HMisc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1600
18.21 HNCCModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1603
18.22 HOCRBox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1604
18.23 HOCRMlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1605
18.24 HOCRSvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1606
18.25 HOCV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1606
18.26 HObject . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1607
18.27 HObjectModel3D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1608
18.28 HOperatorSet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1608
18.29 HPose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1608
18.30 HRegion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1610
18.31 HSerial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1620
18.32 HShapeModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1621
18.33 HShapeModel3D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1622
18.34 HSocket . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1623
18.35 HSystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1624
18.36 HTemplate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1625
18.37 HTuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1626
18.38 HVariationModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1630
18.39 HWindow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1631
18.40 HXLD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1637
18.41 HXLDCont . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1639
18.42 HXLDExtPara . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1643
18.43 HXLDModPara . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1643
18.44 HXLDPara . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1644
18.45 HXLDPoly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1645
Index 1647
Chapter 1
Classification
1.1 Gaussian-Mixture-Models
static void HOperatorSet.AddSampleClassGmm ( HTuple GMMHandle,
HTuple features, HTuple classID, HTuple randomize )
ClearClassGmm clears the Gaussian Mixture Model (GMM) given by GMMHandle and frees all memory required for the GMM. After calling ClearClassGmm, the GMM can no longer be used; the handle GMMHandle becomes invalid.
Parameter
exactly one parameter: The parameter determines the exact number of centers to be used for all classes.
exactly two parameters: The first parameter determines the minimum number of centers, the second the maximum number of centers for all classes.
exactly 2 · NumClasses parameters: Alternatingly, each first parameter determines the minimum number of centers and each second parameter the maximum number of centers for the respective class.
When upper and lower bounds are specified, the optimum number of centers will be determined with the help of
the Mimimum Message Length Criterion (MML). In general, we recommend to start the training with (too) many
centers as maximum and the expected number of centers as minimum.
Each center is described by the parameters center mj , covariance matrix Cj , and mixing coefficient Pj . These pa-
rameters are calculated from the training data by means of the Expectation Maximization (EM) algorithm. A GMM
can approximate an arbitrary probability density, provided that enough centers are being used. The covariance ma-
trices Cj have the dimensions numDim · numDim (numComponents · numComponents if preprocessing is
used) and are symmetric. Further constraints can be given by covarType:
For covarType = ’spherical’, Cj is a scalar multiple of the identity matrix, C_j = s_j^2 I. The center density function p(x|j) is

p(x|j) = \frac{1}{(2\pi s_j^2)^{d/2}} \exp\left(-\frac{\|x - m_j\|^2}{2 s_j^2}\right)
For covarType = ’diag’, Cj is a diagonal matrix, C_j = diag(s_{j,1}^2, \ldots, s_{j,d}^2). The center density function p(x|j) is

p(x|j) = \frac{1}{(2\pi)^{d/2}\left(\prod_{i=1}^{d} s_{j,i}^2\right)^{1/2}} \exp\left(-\sum_{i=1}^{d} \frac{(x_i - m_{j,i})^2}{2 s_{j,i}^2}\right)
For covarType = ’full’, Cj is a positive definite matrix. The center density function p(x|j) is

p(x|j) = \frac{1}{(2\pi)^{d/2} |C_j|^{1/2}} \exp\left(-\frac{1}{2}(x - m_j)^T C_j^{-1} (x - m_j)\right)
The complexity of the calculations increases from covarType = ’spherical’ over covarType = ’diag’ to
covarType = ’full’. At the same time the flexibility of the centers increases. In general, ’spherical’ therefore
needs higher values for numCenters than ’full’.
The procedure to use GMM is as follows: First, a GMM is created by CreateClassGmm. Then, training vectors
are added by AddSampleClassGmm, afterwards they can be written to disk with WriteSamplesClassGmm.
With TrainClassGmm the classifier center parameters (defined above) are determined. Furthermore, they can
be saved with WriteClassGmm for later classifications.
From the mixing probabilities Pj and the center density function p(x|j), the probability density function p(x) can be calculated by:

p(x) = \sum_{j=1}^{ncomp} P(j)\, p(x|j)
The probability density function p(x) can be evaluated with EvaluateClassGmm for a feature vector x.
ClassifyClassGmm sorts the p(x) and therefore discovers the most probable class of the feature vector.
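As a numeric illustration of this sum (plain Python, not HALCON; the spherical center parameters below are invented):

```python
import math

def center_density(x, m, s2):
    # Spherical center density p(x|j), see the formulas above
    d = len(x)
    sq = sum((a - b) ** 2 for a, b in zip(x, m))
    return (2 * math.pi * s2) ** (-d / 2) * math.exp(-sq / (2 * s2))

def gmm_density(x, centers):
    # p(x) = sum_j P(j) * p(x|j); the mixing coefficients must sum to 1
    return sum(pj * center_density(x, m, s2) for m, s2, pj in centers)

# Two centers: (center, variance, mixing coefficient)
centers = [([0.0, 0.0], 1.0, 0.7), ([3.0, 3.0], 1.0, 0.3)]
p = gmm_density([0.0, 0.0], centers)
```

At the first center, the second (distant) center contributes almost nothing, so p is close to 0.7 times the peak density 1/(2π).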
The parameters preprocessing and numComponents can be used to preprocess the training data and reduce its dimensions. These parameters are explained in the description of the operator CreateClassMlp.
CreateClassGmm initializes the coordinates of the centers with random numbers. To ensure that the results of
training the classifier with TrainClassGmm are reproducible, the seed value of the random number generator is
passed in randSeed.
Parameter
. numDim (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Number of dimensions of the feature space.
Default Value : 3
Suggested values : NumDim ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction : NumDim ≥ 1
. numClasses (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Number of classes of the GMM.
Default Value : 5
Suggested values : NumClasses ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
Restriction : NumClasses ≥ 1
. numCenters (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; HTuple (int / long)
Number of centers per class.
Default Value : 1
Suggested values : NumCenters ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30}
Restriction : NumCenters ≥ 1
. covarType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Type of the covariance matrices.
Default Value : "spherical"
List of values : CovarType ∈ {"spherical", "diag", "full"}
. preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Type of preprocessing used to transform the feature vectors.
Default Value : "normalization"
List of values : Preprocessing ∈ {"none", "normalization", "principal_components",
"canonical_variates"}
. numComponents (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Preprocessing parameter: Number of transformed features (ignored for preprocessing = ’none’ and
preprocessing = ’normalization’).
Default Value : 10
Suggested values : NumComponents ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction : NumComponents ≥ 1
. randSeed (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Seed value of the random number generator that is used to initialize the GMM with random values.
Default Value : 42
. GMMHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; HClassGmm / HTuple (IntPtr)
GMM handle.
Result
If the parameters are valid, the operator CreateClassGmm returns the value 2 (H_MSG_TRUE). If necessary
an exception handling is raised.
Parallelization Information
CreateClassGmm is processed completely exclusively without parallelization.
Possible Successors
AddSampleClassGmm, AddSamplesImageClassGmm
Alternatives
CreateClassMlp, CreateClassSvm, CreateClassBox
See also
ClearClassGmm, TrainClassGmm, ClassifyClassGmm, EvaluateClassGmm,
ClassifyImageClassGmm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Mario A.T. Figueiredo: “Unsupervised Learning of Finite Mixture Models”; IEEE Transactions on Pattern Analy-
sis and Machine Intelligence, Vol. 24, No. 3; March 2002.
Module
Foundation
The a-posteriori probability of each class is computed as

p(i|x) = \sum_{j=1}^{ncomp} P(j)\, p(x|j)

and returned for each class in classProb. The formulas for the calculation of the center density function p(x|j) are described with CreateClassGmm.
The probability density of the feature vector is computed as a sum of the posterior class probabilities

p(x) = \sum_{i=1}^{nclasses} Pr(i)\, p(i|x)
and is returned in density. Here, Pr(i) are the prior class probabilities as computed by TrainClassGmm.
density can be used for novelty detection, i.e., to reject feature vectors that do not belong to any of the trained classes. However, since density depends on the scaling of the feature vectors, and since density is a probability density and consequently does not need to lie between 0 and 1, novelty detection can typically be performed more easily with KSigmaProb (see below).
A k-sigma error ellipsoid is defined as a locus of points for which

(x - \mu)^T C^{-1} (x - \mu) = k^2
In the one dimensional case this is the interval [µ − kσ, µ + kσ]. For any 1D Gaussian distribution, it is true
that approximately 65% of the occurrences of the random variable are within this range for k = 1, approximately
95% for k = 2, approximately 99% for k = 3, etc. Hence, the probability that a Gaussian distribution will
generate a random variable outside this range is approximately 35%, 5%, and 1%, respectively. This probability is
called k-sigma probability and is denoted by P [k]. P [k] can be computed numerically for univariate as well as for
multivariate Gaussian distributions, where it should be noted that for the same values of k, P^{(N)}[k] > P^{(N+1)}[k] (here N and (N+1) denote dimensions). For Gaussian mixture models the k-sigma probability is computed as:

P_{GMM}[x] = \sum_{j=1}^{ncomp} P(j)\, P_j[k_j], \quad \text{where} \quad k_j^2 = (x - \mu_j)^T C_j^{-1} (x - \mu_j)

These values are then weighted with the class priors, normalized, and returned for each class in KSigmaProb, such that

KSigmaProb[i] = \frac{Pr(i)}{Pr_{max}}\, P_{GMM}[x]
KSigmaProb can be used for novelty detection. Typically, feature vectors having values below 0.0001
should be rejected. The parameter rejectionThreshold in ClassifyImageClassGmm is based on the
KSigmaProb values of the features.
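For the one-dimensional case, the k-sigma probability P[k] (the probability that a Gaussian random variable falls outside [µ − kσ, µ + kσ]) can be computed directly from the error function. A plain-Python sketch, not HALCON code:

```python
import math

def k_sigma_prob_1d(k):
    # P[k]: probability that a 1D Gaussian variable lies outside
    # the interval [mu - k*sigma, mu + k*sigma]
    return 1.0 - math.erf(k / math.sqrt(2.0))

# Approximately 0.32, 0.05, and 0.003 for k = 1, 2, 3, matching the
# rough percentages quoted in the text above: the larger k, the less
# likely an observation falls outside the ellipsoid.
outside = [k_sigma_prob_1d(k) for k in (1, 2, 3)]
```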
Before calling EvaluateClassGmm, the GMM must be trained with TrainClassGmm.
The position of the maximum value of classProb is usually interpreted as the class of the feature vector and the corresponding value as the probability of the class. In this case, ClassifyClassGmm should be used instead of EvaluateClassGmm, because ClassifyClassGmm directly returns the class and the corresponding probability.
Parameter
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; HClassGmm / HTuple (IntPtr)
GMM handle.
. features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Feature vector.
. classProb (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
A-posteriori probability of the classes.
. density (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Probability density of the feature vector.
. KSigmaProb (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Normalized k-sigma-probability for the feature vector.
Result
If the parameters are valid, the operator EvaluateClassGmm returns the value 2 (H_MSG_TRUE). If necessary
an exception handling is raised.
Parallelization Information
EvaluateClassGmm is reentrant and processed without parallelization.
Possible Predecessors
TrainClassGmm, ReadClassGmm
Alternatives
ClassifyClassGmm
See also
CreateClassGmm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Mario A.T. Figueiredo: “Unsupervised Learning of Finite Mixture Models”; IEEE Transactions on Pattern Analy-
sis and Machine Intelligence, Vol. 24, No. 3; March 2002.
Module
Foundation
Result
If the parameters are valid, the operator GetPrepInfoClassGmm returns the value 2 (H_MSG_TRUE). If
necessary an exception handling is raised.
GetPrepInfoClassGmm may return the error 9211 (Matrix is not positive definite) if preprocessing =
’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Parallelization Information
GetPrepInfoClassGmm is reentrant and processed without parallelization.
Possible Predecessors
AddSampleClassGmm, ReadSamplesClassGmm
Possible Successors
ClearClassGmm, CreateClassGmm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation
Return a training sample from the training data of a Gaussian Mixture Model (GMM).
GetSampleClassGmm reads out a training sample from the Gaussian Mixture Model (GMM) given by
GMMHandle that was stored with AddSampleClassGmm or AddSamplesImageClassGmm. The index
of the sample is specified with numSample. The index is counted from 0, i.e., numSample must be a number
between 0 and numSamples − 1, where numSamples can be determined with GetSampleNumClassGmm.
The training sample is returned in features and classID. features is a feature vector of length numDim,
while classID is its class (see AddSampleClassGmm and CreateClassGmm).
GetSampleClassGmm can, for example, be used to reclassify the training data with ClassifyClassGmm in
order to determine which training samples, if any, are classified incorrectly.
Parameter
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; HClassGmm / HTuple (IntPtr)
GMM handle.
. numSample (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; HTuple (int / long)
Index of the stored training sample.
Result
If the parameters are valid, the operator GetSampleClassGmm returns the value 2 (H_MSG_TRUE). If neces-
sary an exception handling is raised.
Parallelization Information
GetSampleClassGmm is reentrant and processed without parallelization.
Possible Predecessors
AddSampleClassGmm, AddSamplesImageClassGmm, ReadSamplesClassGmm,
GetSampleNumClassGmm
Possible Successors
ClassifyClassGmm, EvaluateClassGmm
See also
CreateClassGmm
Module
Foundation
int HClassGmm.GetSampleNumClassGmm ( )
Return the number of training samples stored in the training data of a Gaussian Mixture Model (GMM).
GetSampleNumClassGmm returns in numSamples the number of training samples that are stored in the Gaus-
sian Mixture Model (GMM) given by GMMHandle. GetSampleNumClassGmm should be called before the
individual training samples are read out with GetSampleClassGmm, e.g., for the purpose of reclassifying the
training data (see GetSampleClassGmm).
Parameter
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; HClassGmm / HTuple (IntPtr)
GMM handle.
. numSamples (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Number of stored training samples.
Result
If the parameters are valid, the operator GetSampleNumClassGmm returns the value 2 (H_MSG_TRUE). If
necessary an exception handling is raised.
Parallelization Information
GetSampleNumClassGmm is reentrant and processed without parallelization.
Possible Predecessors
AddSampleClassGmm, AddSamplesImageClassGmm, ReadSamplesClassGmm
Possible Successors
GetSampleClassGmm
See also
CreateClassGmm
Module
Foundation
It should be noted that the training samples must have the correct dimensionality. The feature vectors stored in fileName must have the length numDim that was specified with CreateClassGmm, and enough classes must have been created in CreateClassGmm. If this is not the case, an error message is returned.
It is possible to read files of samples that were written with WriteSamplesClassSvm or
WriteSamplesClassMlp.
Parameter
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; HClassGmm / HTuple (IntPtr)
GMM handle.
. fileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; HTuple (string)
File name.
Result
If the parameters are valid, the operator ReadSamplesClassGmm returns the value 2 (H_MSG_TRUE). If
necessary an exception handling is raised.
Parallelization Information
ReadSamplesClassGmm is processed completely exclusively without parallelization.
Possible Predecessors
CreateClassGmm
Possible Successors
TrainClassGmm
Alternatives
AddSampleClassGmm
See also
WriteSamplesClassGmm, WriteSamplesClassMlp, ClearSamplesClassGmm
Module
Foundation
regularize is added to each main diagonal element of the covariance matrix, which prevents this element from
becoming smaller than regularize. A recommended value for regularize is 0.0001. If regularize is
set to 0.0, no regularization is performed.
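The effect of regularize can be sketched as follows (plain Python, not the actual HALCON implementation; the function name is made up):

```python
def regularize_covariance(cov, regularize=0.0001):
    # Add `regularize` to each main-diagonal element of the covariance
    # matrix; since variances are non-negative, no diagonal element can
    # end up smaller than `regularize`, which prevents singular matrices.
    return [[v + regularize if i == j else v for j, v in enumerate(row)]
            for i, row in enumerate(cov)]

# A degenerate covariance with a zero variance becomes invertible:
cov = regularize_covariance([[0.0, 0.0], [0.0, 2.0]])
```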
The centers are initially randomly distributed. In individual cases, the algorithm may yield relatively high errors because the initial random values determined by randSeed in CreateClassGmm lead to local minima. In this case, a new GMM with a different value for randSeed should be generated to test whether a significantly smaller error can be obtained.
It should be noted that, depending on the number of centers, the type of covariance matrix, and the number of
training samples, the training can take from a few seconds to several hours.
On output, TrainClassGmm returns in centers the number of centers per class that have been found to be optimal by the EM algorithm. These values can be used as a reference in numCenters (in CreateClassGmm) for future GMMs. If the number of centers found by training a new GMM on integer training data is unexpectedly high, this might be corrected by adding noise to the training data via the randomize parameter of AddSampleClassGmm.
iter contains the number of performed iterations per class. If a value in iter equals maxIter, the training
algorithm has been terminated prematurely (see above).
Parameter
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; HClassGmm / HTuple (IntPtr)
GMM handle.
. maxIter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Maximum number of iterations of the expectation maximization algorithm
Default Value : 100
Suggested values : MaxIter ∈ {10, 20, 30, 50, 100, 200}
. threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Threshold for relative change of the error for the expectation maximization algorithm to terminate.
Default Value : 0.001
Suggested values : Threshold ∈ {0.001, 0.0001}
Restriction : (Threshold ≥ 0.0) ∧ (Threshold ≤ 1.0)
. classPriors (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Mode to determine the a-priori probabilities of the classes
Default Value : "training"
List of values : ClassPriors ∈ {"training", "uniform"}
. regularize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Regularization value for preventing covariance matrix singularity.
Default Value : 0.0001
Restriction : (Regularize ≥ 0.0) ∧ (Regularize < 1.0)
. centers (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Number of centers found per class
. iter (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Number of executed iterations per class
Example (Syntax: HDevelop)
Result
If the parameters are valid, the operator TrainClassGmm returns the value 2 (H_MSG_TRUE). If necessary an
exception handling is raised.
Parallelization Information
TrainClassGmm is processed completely exclusively without parallelization.
Possible Predecessors
AddSampleClassGmm, ReadSamplesClassGmm
Possible Successors
EvaluateClassGmm, ClassifyClassGmm, WriteClassGmm
Alternatives
ReadClassGmm
See also
CreateClassGmm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Mario A.T. Figueiredo: “Unsupervised Learning of Finite Mixture Models”; IEEE Transactions on Pattern Analy-
sis and Machine Intelligence, Vol. 24, No. 3; March 2002.
Module
Foundation
of training samples, and hence to improve the performance of the GMM by training it with an extended data set
(see TrainClassGmm).
The file fileName is overwritten by WriteSamplesClassGmm. Nevertheless, extending the database of
training samples is easy because ReadSamplesClassGmm and AddSampleClassGmm add the training
samples to the training samples that are already stored in memory with the GMM.
The created file can be read with ReadSamplesClassMlp if a multilayer perceptron (MLP) classificator should be used instead. The class of a training sample in the GMM corresponds to the component of the MLP target vector that is set to 1.0.
Parameter
1.2 Hyperboxes
Parameter
. classifHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . class_box ; HClassBox / HTuple (IntPtr)
Classificator’s handle number.
Result
CreateClassBox returns 2 (H_MSG_TRUE) if the parameter is correct. An exception handling is raised if a
classificator with this name already exists or there is not enough memory.
Parallelization Information
CreateClassBox is local and processed completely exclusively without parallelization.
Possible Successors
LearnClassBox, EnquireClassBox, WriteClassBox, CloseClassBox, ClearSampset
See also
LearnClassBox, EnquireClassBox, CloseClassBox
Module
Foundation
unknown by indicating the symbol ’∗’ instead of a number. If you specify n values, then all following values, i.e., the attributes n+1 up to ’max’, are automatically assumed to be undefined.
See LearnClassBox for more details about the functionality of the classificator.
You may call LearnClassBox and EnquireClassBox alternately, so that classification is already possible during the learning phase. This way you can see when a satisfactory behavior has been reached.
Parameter
. classifHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . class_box ; HClassBox / HTuple (IntPtr)
Classificator’s handle number.
. featureList (input_control) . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double / int / long / string)
Array of attributes which has to be classified.
Default Value : 1.0
. classVal (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Number of the class to which the array of attributes has been assigned.
Result
EnquireClassBox returns 2 (H_MSG_TRUE).
Parallelization Information
EnquireClassBox is local and processed completely exclusively without parallelization.
Possible Predecessors
CreateClassBox, LearnClassBox, SetClassBoxParam
Possible Successors
LearnClassBox, WriteClassBox, CloseClassBox
Alternatives
EnquireRejectClassBox
See also
TestSampsetBox, LearnClassBox, LearnSampsetBox
Module
Foundation
Result
EnquireRejectClassBox returns 2 (H_MSG_TRUE).
Parallelization Information
EnquireRejectClassBox is local and processed completely exclusively without parallelization.
Possible Predecessors
CreateClassBox, LearnClassBox, SetClassBoxParam
Possible Successors
LearnClassBox, WriteClassBox, CloseClassBox
Alternatives
EnquireClassBox
See also
TestSampsetBox, LearnClassBox, LearnSampsetBox
Module
Foundation
in sampKey, training cyclically restarts at the beginning. If the error falls below the value stopError, the training sequence is terminated prematurely. The error is calculated as N / errorN, where N is the number of examples that were classified incorrectly during the last errorN training examples. Typically, errorN is the number of examples in sampKey and NSamples is a multiple of it. If you want a data set with 100 examples to run 5 times at most and training to terminate once the error is lower than 5%, the corresponding values are NSamples = 500, errorN = 100, and stopError = 0.05. A protocol of the training activity is written to the file outfile.
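The parameter arithmetic from the example above can be written out explicitly (plain Python; the helper function name is made up for illustration):

```python
def learn_sampset_params(n_examples, max_passes, max_error_rate):
    # NSamples:  total training presentations (data set size * passes)
    # errorN:    window over which the error N / errorN is evaluated
    # stopError: error rate below which training stops early
    return {"NSamples": n_examples * max_passes,
            "errorN": n_examples,
            "stopError": max_error_rate}

# 100 examples, at most 5 passes, stop below 5% error:
params = learn_sampset_params(100, 5, 0.05)
```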
Parameter
. classifHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . class_box ; HClassBox / HTuple (IntPtr)
Classificator’s handle number.
. sampKey (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . feature_set ; HTuple (int / long)
Number of the data set to train.
. outfile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; HTuple (string)
Name of the protocol file.
Default Value : "training_prot"
. NSamples (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Number of arrays of attributes to learn.
Default Value : 500
. stopError (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Classification error for termination.
Default Value : 0.05
. errorN (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Error during the assignment.
Default Value : 100
Result
LearnSampsetBox returns 2 (H_MSG_TRUE). An exception handling is raised if key sampKey does not exist
or there are problems while opening the file.
Parallelization Information
LearnSampsetBox is local and processed completely exclusively without parallelization.
Possible Predecessors
CreateClassBox
Possible Successors
TestSampsetBox, EnquireClassBox, WriteClassBox, CloseClassBox, ClearSampset
See also
TestSampsetBox, EnquireClassBox, LearnClassBox, ReadSampset
Module
Foundation
Result
ReadClassBox returns 2 (H_MSG_TRUE). An exception handling is raised if it was not possible to open file
fileName or the file has the wrong format.
Parallelization Information
ReadClassBox is local and processed completely exclusively without parallelization.
Possible Predecessors
CreateClassBox
Possible Successors
TestSampsetBox, EnquireClassBox, WriteClassBox, CloseClassBox, ClearSampset
See also
CreateClassBox, WriteClassBox
Module
Foundation
SetClassBoxParam modifies parameters which influence the training sequence during calls of LearnClassBox. Only the parameters of the specified classificator are modified; all other classificators remain unchanged. ’min_samples_for_split’ is the minimum number of examples that have to be trained in one cuboid of this classificator before the cuboid is allowed to split itself. ’split_error’ indicates the critical error: if it is exceeded, the cuboid splits itself, provided more than ’min_samples_for_split’ examples have been trained. ’prop_constant’ controls the extension of the cuboids. It is proportional to the average distance of the training examples in a cuboid to the center of the cuboid. More precisely:

extension × prop = average distance from the expectation value.

This relation holds in every dimension. Hence, inside a cuboid, the dimensions of the feature space are assumed to be independent.
The parameters are set to problem-independent default values, which should not be modified without reason. The parameters are only relevant during a learning sequence; they do not influence the behavior of EnquireClassBox.
Default setting:
’min_samples_for_split’ = 80,
’split_error’ = 0.1,
’prop_constant’ = 0.25
Parameter
1.3 Neural-Nets
static void HOperatorSet.AddSampleClassMlp ( HTuple MLPHandle,
HTuple features, HTuple target )
should only be used if the MLP is trained in the same process that uses the MLP for evaluation with
EvaluateClassMlp or for classification with ClassifyClassMlp. In this case, the memory required
for the training samples can be freed with ClearSamplesClassMlp, and hence memory can be saved. In
the normal usage, in which the MLP is trained offline and written to a file with WriteClassMlp, it is typically
unnecessary to call ClearSamplesClassMlp because WriteClassMlp does not save the training samples,
and hence the online process, which reads the MLP with ReadClassMlp, requires no memory for the training
samples.
Parameter
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; HClassMlp / HTuple (IntPtr)
MLP handle.
Result
If the parameters are valid, the operator ClearSamplesClassMlp returns the value 2 (H_MSG_TRUE). If
necessary an exception handling is raised.
Parallelization Information
ClearSamplesClassMlp is processed completely exclusively without parallelization.
Possible Predecessors
TrainClassMlp, WriteSamplesClassMlp
See also
CreateClassMlp, ClearClassMlp, AddSampleClassMlp, ReadSamplesClassMlp
Module
Foundation
a_j^{(1)} = \sum_{i=1}^{n_i} w_{ji}^{(1)} x_i + b_j^{(1)}, \quad j = 1, \ldots, n_h

z_j = \tanh a_j^{(1)}, \quad j = 1, \ldots, n_h

Here, the matrix w_{ji}^{(1)} and the vector b_j^{(1)} are the weights of the input layer (first layer) of the MLP. In the hidden layer (second layer), the activations z_j are transformed in a first step by using linear combinations of the variables in an analogous manner as above:
a_k^{(2)} = \sum_{j=1}^{n_h} w_{kj}^{(2)} z_j + b_k^{(2)}, \quad k = 1, \ldots, n_o

Here, the matrix w_{kj}^{(2)} and the vector b_k^{(2)} are the weights of the second layer of the MLP.
The activation function used in the output layer can be determined by setting outputFunction. For
outputFunction = ’linear’, the data are simply copied:
y_k = a_k^{(2)}, \quad k = 1, \ldots, n_o
This type of activation function should be used for regression problems (function approximation). This activation
function is not suited for classification problems.
For outputFunction = ’logistic’, the activations are computed as follows:
y_k = \frac{1}{1 + \exp(-a_k^{(2)})}, \quad k = 1, \ldots, n_o
This type of activation function should be used for classification problems with multiple (numOutput) indepen-
dent logical attributes as output. This kind of classification problem is relatively rare in practice.
For outputFunction = ’softmax’, the activations are computed as follows:
y_k = \frac{\exp(a_k^{(2)})}{\sum_{l=1}^{n_o} \exp(a_l^{(2)})}, \quad k = 1, \ldots, n_o
This type of activation function should be used for common classification problems with multiple (numOutput)
mutually exclusive classes as output. In particular, outputFunction = ’softmax’ must be used for the classifi-
cation of pixel data with ClassifyImageClassMlp.
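The complete forward computation for the softmax case can be sketched in a few lines. This is plain Python, not HALCON; the tiny network and its weight values are arbitrary:

```python
import math

def mlp_forward(x, w1, b1, w2, b2):
    # First layer: a1 = W1 x + b1, hidden activations z = tanh(a1)
    a1 = [sum(w * v for w, v in zip(row, x)) + b for row, b in zip(w1, b1)]
    z = [math.tanh(a) for a in a1]
    # Second layer: a2 = W2 z + b2, then softmax output activations
    a2 = [sum(w * v for w, v in zip(row, z)) + b for row, b in zip(w2, b2)]
    exps = [math.exp(a) for a in a2]
    s = sum(exps)
    return [e / s for e in exps]

# 2 inputs, 3 hidden units, 2 mutually exclusive output classes
y = mlp_forward([0.5, -1.0],
                [[0.1, 0.2], [0.3, -0.1], [-0.2, 0.4]], [0.0, 0.1, -0.1],
                [[0.5, -0.3, 0.2], [-0.4, 0.6, 0.1]], [0.0, 0.0])
```

By construction the softmax outputs are positive and sum to 1, which is what makes them interpretable as class probabilities.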
The parameters preprocessing and numComponents can be used to specify a preprocessing of the feature
vectors. For preprocessing = ’none’, the feature vectors are passed unaltered to the MLP. numComponents
is ignored in this case.
For all other values of preprocessing, the training data set is used to compute a transformation of the feature
vectors during the training as well as later in the classification or evaluation.
For preprocessing = ’normalization’, the feature vectors are normalized by subtracting the mean of the
training vectors and dividing the result by the standard deviation of the individual components of the training
vectors. Hence, the transformed feature vectors have a mean of 0 and a standard deviation of 1. The normalization
does not change the length of the feature vector. numComponents is ignored in this case. This transformation
can be used if the mean and standard deviation of the feature vectors differ substantially from 0 and 1, respectively,
or for data in which the components of the feature vectors are measured in different units (e.g., if some of the data
are gray value features and some are region features, or if region features are mixed, e.g., ’circularity’ (unit: scalar)
and ’area’ (unit: pixel squared)). In these cases, the training of the net will typically require fewer iterations than
without normalization.
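A minimal sketch of the ’normalization’ preprocessing (plain Python; HALCON performs this transformation internally during training and classification):

```python
import math

def normalize(samples):
    # Subtract the per-component mean of the training vectors and divide
    # by the per-component standard deviation, so that each transformed
    # component has mean 0 and standard deviation 1.
    n, d = len(samples), len(samples[0])
    mean = [sum(s[i] for s in samples) / n for i in range(d)]
    std = [math.sqrt(sum((s[i] - mean[i]) ** 2 for s in samples) / n)
           for i in range(d)]
    return [[(s[i] - mean[i]) / std[i] for i in range(d)] for s in samples]

# Components on very different scales (e.g., circularity vs. area):
normed = normalize([[1.0, 100.0], [2.0, 200.0], [3.0, 300.0]])
```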
For preprocessing = ’principal_components’, a principal component analysis is performed. First, the feature
vectors are normalized (see above). Then, an orthogonal transformation (a rotation in the feature space) that decorrelates the training vectors is computed. After the transformation, the mean of the training vectors is 0 and the covariance matrix of the training vectors is a diagonal matrix. The transformation is chosen such that most of the variation in the data is contained in the first components of the transformed feature vector. With this, it is possible to omit the transformed features in the last components of the feature vector, which typically are mainly influenced by noise, without losing a large amount of information. The parameter numComponents can be used to determine how many of the transformed feature vector components should be used. Up to numInput
components can be selected. The operator GetPrepInfoClassMlp can be used to determine how much in-
formation each transformed component contains. Hence, it aids the selection of numComponents. Like data
normalization, this transformation can be used if the mean and standard deviation of the feature vectors differ
substantially from 0 and 1, respectively, or for feature vectors in which the components of the data are measured in
different units. In addition, this transformation is useful if it can be expected that the features are highly correlated.
In contrast to the above three transformations, which can be used for all MLP types, the transformation spec-
ified by preprocessing = ’canonical_variates’ can only be used if the MLP is used as a classifier with
outputFunction = ’softmax’. The computation of the canonical variates is also called linear discriminant
analysis. In this case, a transformation that first normalizes the training vectors and then decorrelates the training
vectors on average over all classes is computed. At the same time, the transformation maximally separates the mean
values of the individual classes. As for preprocessing = ’principal_components’, the transformed compo-
nents are sorted by information content, and hence transformed components with little information content can be
omitted. For canonical variates, up to min(numOutput−1, numInput) components can be selected. Also in this
case, the information content of the transformed components can be determined with GetPrepInfoClassMlp.
Like principal component analysis, canonical variates can be used to reduce the amount of data without losing a
large amount of information, while additionally optimizing the separability of the classes after the data reduction.
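The canonical variates can be sketched via the classical within-class and between-class scatter matrices of linear discriminant analysis. This is an illustrative reimplementation under standard LDA assumptions, not the HALCON code; canonical_variates is a hypothetical name.

```python
import numpy as np

def canonical_variates(samples, labels):
    """Minimal sketch of the 'canonical_variates' transformation described
    above: decorrelate the training vectors on average over all classes
    while maximally separating the class means (illustrative)."""
    classes = np.unique(labels)
    d = samples.shape[1]
    overall_mean = samples.mean(axis=0)
    s_within = np.zeros((d, d))
    s_between = np.zeros((d, d))
    for c in classes:
        x = samples[labels == c]
        mean_c = x.mean(axis=0)
        s_within += (x - mean_c).T @ (x - mean_c)
        diff = (mean_c - overall_mean)[:, None]
        s_between += len(x) * (diff @ diff.T)
    # Directions maximizing between-class vs. within-class scatter.
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(s_within, s_between))
    order = np.argsort(eigvals.real)[::-1]
    num = min(len(classes) - 1, d)  # up to min(numOutput - 1, numInput)
    return eigvecs.real[:, order[:num]]

rng = np.random.default_rng(1)
# Two classes separated along the first feature axis.
a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
b = rng.normal(loc=[3.0, 0.0], scale=0.5, size=(50, 2))
samples = np.vstack([a, b])
labels = np.array([0] * 50 + [1] * 50)
w = canonical_variates(samples, labels)
print(w.shape)  # (2, 1): one component for two classes
```

For two classes in a two-dimensional feature space, min(numOutput − 1, numInput) = 1 component remains, and its direction points along the axis that separates the class means.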
For the last two types of transformations (’principal_components’ and ’canonical_variates’), the actual number of
input units of the MLP is determined by numComponents, whereas numInput determines the dimensionality
of the input data (i.e., the length of the untransformed feature vector). Hence, by using one of these two transfor-
mations, the number of input variables, and thus usually also the number of hidden units can be reduced. With this,
the time needed to train the MLP and to evaluate and classify a feature vector is typically reduced.
Usually, numHidden should be selected in the order of magnitude of numInput and numOutput. In many
cases, much smaller values of numHidden already lead to very good classification results. If numHidden is
chosen too large, the MLP may overfit the training data, which typically leads to bad generalization properties, i.e.,
the MLP learns the training data very well, but does not return very good results on unknown data.
CreateClassMlp initializes the above described weights with random numbers. To ensure that the results of
training the classifier with TrainClassMlp are reproducible, the seed value of the random number generator is
passed in randSeed. If the training results in a relatively large error, it sometimes may be possible to achieve a
smaller error by selecting a different value for randSeed and retraining an MLP.
After the MLP has been created, typically training samples are added to the MLP by repeatedly calling
AddSampleClassMlp or ReadSamplesClassMlp. After this, the MLP is typically trained using
TrainClassMlp. Hereafter, the MLP can be saved using WriteClassMlp. Alternatively, the MLP can
be used immediately after training to evaluate data using EvaluateClassMlp or, if the MLP is used as a
classifier (i.e., for outputFunction = ’softmax’), to classify data using ClassifyClassMlp.
A comparison of the MLP and the support vector machine (SVM) (see CreateClassSvm) typically shows
that SVMs are generally faster at training, especially for huge training sets, and achieve slightly better recognition
rates than MLPs. The MLP is faster at classification and should therefore be preferred in time-critical applications.
Please note that this guideline assumes optimal tuning of the parameters.
Parameter
Result
If the parameters are valid, the operator CreateClassMlp returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Parallelization Information
CreateClassMlp is processed completely exclusively without parallelization.
Possible Successors
AddSampleClassMlp
Alternatives
CreateClassSvm, CreateClassGmm, CreateClassBox
See also
ClearClassMlp, TrainClassMlp, ClassifyClassMlp, EvaluateClassMlp
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation
Compute the information content of the preprocessed feature vectors of a multilayer perceptron.
GetPrepInfoClassMlp computes the information content of the training vectors that have been transformed
with the preprocessing given by preprocessing. preprocessing can be set to ’principal_components’
or ’canonical_variates’. The preprocessing methods are described with CreateClassMlp. The information
content is derived from the variations of the transformed components of the feature vector, i.e., it is computed
solely based on the training data, independent of any error rate on the training data. The information content is
computed for all relevant components of the transformed feature vectors (NumInput for ’principal_components’
and min(NumOutput − 1, NumInput) for ’canonical_variates’, see CreateClassMlp), and is returned in
informationCont as a number between 0 and 1. To convert the information content into a percentage, it
simply needs to be multiplied by 100. The cumulative information content of the first n components is returned
in the n-th component of cumInformationCont, i.e., cumInformationCont contains the sums of the
first n elements of informationCont. To use GetPrepInfoClassMlp, a sufficient number of samples
must be added to the multilayer perceptron (MLP) given by MLPHandle by using AddSampleClassMlp or
ReadSamplesClassMlp.
informationCont and cumInformationCont can be used to decide how many components of the
transformed feature vectors contain relevant information. An often used criterion is to require that the trans-
formed data must represent x% (e.g., 90%) of the data. This can be decided easily from the first value
of cumInformationCont that lies above x%. The number thus obtained can be used as the value for
NumComponents in a new call to CreateClassMlp. The call to GetPrepInfoClassMlp already re-
quires the creation of an MLP, and hence the setting of NumComponents in CreateClassMlp to an initial
value. However, when GetPrepInfoClassMlp is called, it is typically not known how many components
are relevant, and hence how to set NumComponents in this call. Therefore, the following two-step approach
should typically be used to select NumComponents: In a first step, an MLP with the maximum number for
NumComponents is created (NumInput for ’principal_components’ and min(NumOutput − 1, NumInput)
for ’canonical_variates’). Then, the training samples are added to the MLP and are saved in a file using
WriteSamplesClassMlp. Subsequently, GetPrepInfoClassMlp is used to determine the information
content of the components, and with this NumComponents. After this, a new MLP with the desired number of
components is created, and the training samples are read with ReadSamplesClassMlp. Finally, the MLP is
trained with TrainClassMlp.
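The x% selection rule described above can be sketched in a few lines. choose_num_components is a hypothetical helper, and the information values are made up for illustration; in practice they come from cumInformationCont.

```python
import numpy as np

def choose_num_components(information_cont, fraction=0.9):
    """Sketch of the selection rule described above: pick the smallest n
    whose cumulative information content reaches the desired fraction
    (e.g. 90%) of the data (illustrative, not HALCON code)."""
    cum = np.cumsum(information_cont)  # what cumInformationCont returns
    # Index of the first cumulative value lying above the threshold (1-based).
    return int(np.searchsorted(cum, fraction) + 1)

# Hypothetical per-component information content (sums to 1.0).
info = [0.55, 0.25, 0.12, 0.05, 0.03]
print(choose_num_components(info, 0.9))  # 3: 0.55 + 0.25 + 0.12 = 0.92
```

The returned value would then be used as NumComponents in the second call to CreateClassMlp.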
Parameter
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; HClassMlp / HTuple (IntPtr)
MLP handle.
. preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Type of preprocessing used to transform the feature vectors.
Default Value : "principal_components"
List of values : Preprocessing ∈ {"principal_components", "canonical_variates"}
. informationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Relative information content of the transformed feature vectors.
. cumInformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Cumulative information content of the transformed feature vectors.
Example (Syntax: HDevelop)
Result
If the parameters are valid, the operator GetPrepInfoClassMlp returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
GetPrepInfoClassMlp may return the error 9211 (Matrix is not positive definite) if preprocessing =
’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Parallelization Information
GetPrepInfoClassMlp is reentrant and processed without parallelization.
Possible Predecessors
AddSampleClassMlp, ReadSamplesClassMlp
Possible Successors
ClearClassMlp, CreateClassMlp
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation
Result
If the parameters are valid, the operator GetSampleClassMlp returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Parallelization Information
GetSampleClassMlp is reentrant and processed without parallelization.
Possible Predecessors
AddSampleClassMlp, ReadSamplesClassMlp, GetSampleNumClassMlp
Possible Successors
ClassifyClassMlp, EvaluateClassMlp
See also
CreateClassMlp
Module
Foundation
int HClassMlp.GetSampleNumClassMlp ( )
Return the number of training samples stored in the training data of a multilayer perceptron.
GetSampleNumClassMlp returns in numSamples the number of training samples that are stored in the mul-
tilayer perceptron (MLP) given by MLPHandle. GetSampleNumClassMlp should be called before the
individual training samples are accessed with GetSampleClassMlp, e.g., for the purpose of reclassifying the
training data (see GetSampleClassMlp).
Parameter
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; HClassMlp / HTuple (IntPtr)
MLP handle.
. numSamples (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Number of stored training samples.
Result
If MLPHandle is valid, the operator GetSampleNumClassMlp returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Parallelization Information
GetSampleNumClassMlp is reentrant and processed without parallelization.
Possible Predecessors
AddSampleClassMlp, ReadSamplesClassMlp
Possible Successors
GetSampleClassMlp
See also
CreateClassMlp
Module
Foundation
For weightTolerance and errorTolerance, values between 0.00001 and 1 should typically be used. The optimization is terminated if the weight change is
smaller than weightTolerance and the change of the error value is smaller than errorTolerance. In any
case, the optimization is terminated after at most maxIterations iterations. It should be noted that, depending
on the size of the MLP and the number of training samples, the training can take from a few seconds to several
hours.
On output, TrainClassMlp returns the error of the MLP with the optimal weights on the training samples
in error. Furthermore, errorLog contains the error value as a function of the number of iterations. With
this, it is possible to decide whether a second training of the MLP with the same training data without creating
the MLP anew makes sense. If errorLog is regarded as a function, it should drop off steeply initially, while
leveling out very flatly at the end. If errorLog is still relatively steep at the end, it usually makes sense to
call TrainClassMlp again. It should be noted, however, that this mechanism should not be used to train the
MLP successively with maxIterations = 1 (or other small values for maxIterations) because this will
substantially increase the number of iterations required to train the MLP.
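The termination rule can be summarized in a short sketch. should_stop is a hypothetical helper mirroring the description above, not a HALCON operator.

```python
def should_stop(weight_change, error_change, iteration,
                weight_tolerance=1.0, error_tolerance=0.01,
                max_iterations=200):
    """Sketch of the termination rule described above: stop when both the
    weight change and the error change fall below their tolerances, or
    when the iteration limit is reached (illustrative, not HALCON code)."""
    if iteration >= max_iterations:
        return True
    return (weight_change < weight_tolerance
            and error_change < error_tolerance)

print(should_stop(0.5, 0.001, 10))  # True: both tolerances satisfied
print(should_stop(0.5, 0.5, 10))    # False: error still changing
print(should_stop(5.0, 0.5, 200))   # True: iteration limit reached
```

Note that both tolerance conditions must hold simultaneously; a large error change alone keeps the optimization running until maxIterations.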
Parameter
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; HClassMlp / HTuple (IntPtr)
MLP handle.
. maxIterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Maximum number of iterations of the optimization algorithm.
Default Value : 200
Suggested values : MaxIterations ∈ {20, 40, 60, 80, 100, 120, 140, 160, 180, 200, 220, 240, 260, 280,
300}
. weightTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Threshold for the difference of the weights of the MLP between two iterations of the optimization algorithm.
Default Value : 1.0
Suggested values : WeightTolerance ∈ {1.0, 0.1, 0.01, 0.001, 0.0001, 0.00001}
Restriction : WeightTolerance ≥ 1.0e-8
. errorTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Threshold for the difference of the mean error of the MLP on the training data between two iterations of the
optimization algorithm.
Default Value : 0.01
Suggested values : ErrorTolerance ∈ {1.0, 0.1, 0.01, 0.001, 0.0001, 0.00001}
Restriction : ErrorTolerance ≥ 1.0e-8
. error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Mean error of the MLP on the training data.
. errorLog (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Mean error of the MLP on the training data as a function of the number of iterations of the optimization
algorithm.
Example (Syntax: HDevelop)
* Train an MLP
create_class_mlp (NIn, NHidden, NOut, 'softmax', 'normalization', 1, 42, MLPHandle)
read_samples_class_mlp (MLPHandle, 'samples.mtf')
train_class_mlp (MLPHandle, 100, 1, 0.01, Error, ErrorLog)
write_class_mlp (MLPHandle, 'classifier.mlp')
clear_class_mlp (MLPHandle)
Result
If the parameters are valid, the operator TrainClassMlp returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
TrainClassMlp may return the error 9211 (Matrix is not positive definite) if Preprocessing =
’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each class.
Parallelization Information
TrainClassMlp is processed completely exclusively without parallelization.
Possible Predecessors
AddSampleClassMlp, ReadSamplesClassMlp
Possible Successors
EvaluateClassMlp, ClassifyClassMlp, WriteClassMlp
Alternatives
ReadClassMlp
See also
CreateClassMlp
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation
1.4 Support-Vector-Machines
Result
If the parameters are valid, the operator AddSampleClassSvm returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Parallelization Information
AddSampleClassSvm is processed completely exclusively without parallelization.
Possible Predecessors
CreateClassSvm
Possible Successors
TrainClassSvm, WriteSamplesClassSvm, GetSampleNumClassSvm, GetSampleClassSvm
Alternatives
ReadSamplesClassSvm
See also
ClearSamplesClassSvm, GetSupportVectorClassSvm
Module
Foundation
Bernhard Schölkopf, Alexander J. Smola: “Learning with Kernels”; MIT Press, London; 1999.
Module
Foundation
$$ f(z) = \operatorname{sign}\left( \sum_{i=1}^{n_{sv}} \alpha_i y_i \langle x_i, z \rangle + b \right) $$
Here, x_i are the support vectors, y_i encodes their class membership (±1), and α_i are the weight coefficients. The
distance of the hyperplane to the origin is b. The α_i and b are determined during training with TrainClassSvm.
Note that only a subset of the original training set (n_sv: the number of support vectors) is necessary for the definition
of the decision boundary and therefore data vectors that are not support vectors are discarded. The classification
speed depends on the evaluation of the dot product between support vectors and the feature vector to be classified,
and hence depends on the length of the feature vector and the number n_sv of support vectors.
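For illustration, the decision function can be evaluated directly for a linear kernel. This is a toy model with hand-picked support vectors and weights, not a trained SVM; svm_decide is a hypothetical name.

```python
import numpy as np

def svm_decide(support_vectors, y, alpha, b, z):
    """Sketch of the decision function above for a linear kernel:
    f(z) = sign(sum_i alpha_i * y_i * <x_i, z> + b) (illustrative)."""
    dots = support_vectors @ z  # <x_i, z> for each support vector
    return int(np.sign(np.dot(alpha * y, dots) + b))

# Tiny hand-made model: two support vectors separating the half-planes
# left and right of the vertical axis.
sv = np.array([[1.0, 0.0], [-1.0, 0.0]])
y = np.array([1.0, -1.0])
alpha = np.array([1.0, 1.0])
b = 0.0
print(svm_decide(sv, y, alpha, b, np.array([2.0, 3.0])))   # 1
print(svm_decide(sv, y, alpha, b, np.array([-2.0, 3.0])))  # -1
```

The cost of one classification is visible here: one dot product per support vector, which is why classification time grows with n_sv and the feature vector length.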
For classification problems in which the classes are not linearly separable the algorithm is extended in two ways.
First, during training a certain amount of errors (overlaps) is compensated with the use of slack variables. This
means that the α_i are bounded from above by a regularization constant. To enable an intuitive control of the amount of
training errors, the Nu-SVM version of the training algorithm is used. Here, the regularization parameter nu is an
asymptotic upper bound on the fraction of training errors and an asymptotic lower bound on the fraction of support
vectors. As a rule of thumb, the parameter nu should be set to the prior expectation of the application’s specific
error ratio, e.g., 0.01 (corresponding to a maximum training error of 1%). Please note that a too big value for nu
might lead to an infeasible training problem, i.e., the SVM cannot be trained correctly (see TrainClassSvm
for more details). Since this can only be determined during training, an exception can only be raised there. In this
case, a new SVM with nu chosen smaller must be created.
Second, because the above SVM exclusively calculates dot products between the feature vectors, it is possible to
incorporate a kernel function into the training and testing algorithm. This means that the dot products are substi-
tuted by a kernel function, which implicitly performs the dot product in a higher dimensional feature space. Given
the appropriate kernel transformation, an originally not linearly separable classification task becomes linearly sep-
arable in the higher dimensional feature space.
Different kernel functions can be selected with the parameter kernelType. For kernelType = ’linear’ the
dot product, as specified in the above formula is calculated. This kernel should solely be used for linearly or nearly
linearly separable classification tasks. The parameter kernelParam is ignored here.
The radial basis function (RBF) kernelType = ’rbf’ is the best choice for a kernel function because it achieves
good results for many classification tasks. It is defined as:
$$ K(x, z) = e^{-\gamma \cdot \| x - z \|^2} $$
Here, the parameter kernelParam is used to select γ. The intuitive meaning of γ is the amount of influence of
a support vector upon its surroundings. A big value of γ (small influence on the surroundings) means that each
training vector becomes a support vector. The training algorithm learns the training data “by heart”, but lacks any
generalization ability (over-fitting). Additionally, the training/classification times grow significantly. A too small
value for γ (big influence on the surroundings) leads to few support vectors defining the separating hyperplane
(under-fitting). One typical strategy is to select a small γ-nu pair and consecutively increase the values as long as
the recognition rate increases.
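A minimal sketch of the RBF kernel and the effect of γ (illustrative, not HALCON code):

```python
import numpy as np

def rbf_kernel(x, z, gamma):
    """The RBF kernel described above: K(x, z) = exp(-gamma * ||x - z||^2)."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

x = np.array([0.0, 0.0])
z = np.array([1.0, 0.0])
# A big gamma shrinks a support vector's influence on its surroundings,
# a small gamma widens it.
print(rbf_kernel(x, z, gamma=10.0))  # ~4.5e-05: almost no influence at distance 1
print(rbf_kernel(x, z, gamma=0.1))   # ~0.9: strong influence at distance 1
```

With a very large γ, each training vector influences only its immediate neighborhood, so every vector tends to become a support vector (over-fitting); a very small γ yields few support vectors (under-fitting).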
With kernelType = ’polynomial_homogeneous’ or ’polynomial_inhomogeneous’, polynomial kernels can be
selected. They are defined in the following way:

$$ K(x, z) = \langle x, z \rangle^{d} \quad \text{(homogeneous)} \qquad K(x, z) = ( \langle x, z \rangle + 1 )^{d} \quad \text{(inhomogeneous)} $$

The degree d of the polynomial kernel must be set with kernelParam. Please note that a too high polynomial
degree (d > 10) might result in numerical problems.
As a rule of thumb, the RBF kernel provides a good choice for most of the classification problems and should
therefore be used in almost all cases. Nevertheless, the linear and polynomial kernels might be better suited
for certain applications and can be tested for comparison. Please note that the novelty-detection mode and the
ReduceClassSvm operator are provided only for the RBF kernel.
mode specifies the general classification task, which is either how to break down a multi-class decision problem to
binary sub-cases or whether to use a special classifier mode called ’novelty-detection’. mode = ’one-versus-all’
creates a classifier where each class is compared to the rest of the training data. During testing the class with the
largest output (see the classification formula without sign) is chosen. mode = ’one-versus-one’ creates a binary
classifier between each single class. During testing a vote is cast and the class with the majority of the votes
is selected. The optimal mode for multi-class classification depends on the number of classes. Given n classes
’one-versus-all’ creates n classifiers, whereas ’one-versus-one’ creates n(n − 1)/2. Note that for a binary decision
task ’one-versus-one’ would create exactly one, whereas ’one-versus-all’ unnecessarily creates two symmetric
classifiers. For few classes (3-10) ’one-versus-one’ is faster for training and testing, because the sub-classifiers
are each trained on less data and result in fewer support vectors overall. In case of many classes ’one-versus-all’
is preferable, because ’one-versus-one’ generates a prohibitively large amount of sub-classifiers, as their number
grows quadratically with the number of classes.
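The sub-classifier counts for both modes can be checked with a few lines; num_subclassifiers is a hypothetical helper, not a HALCON operator.

```python
def num_subclassifiers(n_classes, mode):
    """Number of binary sub-classifiers created by each mode above:
    'one-versus-all' builds n, 'one-versus-one' builds n*(n-1)/2."""
    if mode == "one-versus-all":
        return n_classes
    if mode == "one-versus-one":
        return n_classes * (n_classes - 1) // 2
    raise ValueError("unknown mode")

for n in (2, 5, 20):
    print(n, num_subclassifiers(n, "one-versus-all"),
          num_subclassifiers(n, "one-versus-one"))
# n = 2:  2 vs  1  -- 'one-versus-one' builds exactly one classifier
# n = 5:  5 vs 10
# n = 20: 20 vs 190 -- quadratic growth favors 'one-versus-all' here
```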
A special case of classification is mode = ’novelty-detection’, where the test data is classified with regard to
membership to the training data. The separating hyperplane lies around the training data and thereby implicitly
divides the training data from the rejection class. The advantage is that the rejection class is not defined explicitly,
which is difficult to do in certain applications like texture classification. The resulting support vectors all lie
at the border. With the parameter nu, the ratio of outliers in the training data set is specified.
The parameters preprocessing and numComponents can be used to specify a preprocessing of the feature
vectors. For preprocessing = ’none’, the feature vectors are passed unaltered to the SVM. numComponents
is ignored in this case.
For all other values of preprocessing, the training data set is used to compute a transformation of the feature
vectors during the training as well as later in the classification.
For preprocessing = ’normalization’, the feature vectors are normalized. In case of a polynomial kernel, the
minimum and maximum value of the training data set is transformed to -1 and +1. In case of the RBF kernel, the
data is normalized by subtracting the mean of the training vectors and dividing the result by the standard deviation
of the individual components of the training vectors. Hence, the transformed feature vectors have a mean of 0 and
a standard deviation of 1. The normalization does not change the length of the feature vector. numComponents
is ignored in this case. This transformation can be used if the mean and standard deviation of the feature vectors
differ substantially from 0 and 1, respectively, or for data in which the components of the feature vectors are
measured in different units (e.g., if some of the data are gray value features and some are region features, or if
region features are mixed, e.g., ’circularity’ (unit: scalar) and ’area’ (unit: pixel squared)). The normalization
transformation should be performed in general, because it increases the numerical stability during training/testing.
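A sketch of the kernel-dependent normalization described above (illustrative; normalize_features is a hypothetical name, not HALCON code):

```python
import numpy as np

def normalize_features(samples, kernel_type):
    """Sketch of the 'normalization' preprocessing described above:
    min-max scaling to [-1, +1] for polynomial kernels, zero-mean /
    unit-deviation scaling for the RBF kernel (illustrative)."""
    if kernel_type.startswith("polynomial"):
        lo = samples.min(axis=0)
        hi = samples.max(axis=0)
        return 2.0 * (samples - lo) / (hi - lo) - 1.0
    # RBF kernel: subtract the mean, divide by the standard deviation.
    return (samples - samples.mean(axis=0)) / samples.std(axis=0)

# Two features measured in very different units.
data = np.array([[0.0, 100.0], [5.0, 200.0], [10.0, 300.0]])
poly = normalize_features(data, "polynomial_homogeneous")
rbf = normalize_features(data, "rbf")
print(poly.min(), poly.max())  # -1.0 1.0
print(rbf.mean(axis=0))        # [0. 0.]
```

Either way, both features end up on a comparable scale, which is what makes the subsequent training numerically stable.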
For preprocessing = ’principal_components’, a principal component analysis (PCA) is performed. First, the
feature vectors are normalized (see above). Then, an orthogonal transformation (a rotation in the feature space)
that decorrelates the training vectors is computed. After the transformation, the mean of the training vectors is
0 and the covariance matrix of the training vectors is a diagonal matrix. The transformation is chosen such that
the transformed features with the most variation are contained in the first components of the transformed
feature vector. With this, it is possible to omit the transformed features in the last components of the feature vector,
which typically are mainly influenced by noise, without losing a large amount of information. The parameter
numComponents can be used to determine how many of the transformed feature vector components should
be used. Up to numFeatures components can be selected. The operator GetPrepInfoClassSvm can be
used to determine how much information each transformed component contains. Hence, it aids the selection of
numComponents. Like data normalization, this transformation can be used if the mean and standard deviation of
the feature vectors differ substantially from 0 and 1, respectively, or for feature vectors in which the components
of the data are measured in different units. In addition, this transformation is useful if it can be expected that the
features are highly correlated. Please note that the RBF kernel is very robust against the dimensionality reduction
performed by PCA and should therefore be the first choice when speeding up the classification time.
The transformation specified by preprocessing = ’canonical_variates’ first normalizes the training vectors
and then decorrelates the training vectors on average over all classes. At the same time, the transformation maxi-
mally separates the mean values of the individual classes. As for preprocessing = ’principal_components’,
the transformed components are sorted by information content, and hence transformed components with little in-
formation content can be omitted. For canonical variates, up to min(numClasses − 1, numFeatures) compo-
nents can be selected. Also in this case, the information content of the transformed components can be determined
with GetPrepInfoClassSvm. Like principal component analysis, canonical variates can be used to reduce
the amount of data without losing a large amount of information, while additionally optimizing the separability of
the classes after the data reduction. The computation of the canonical variates is also called linear discriminant
analysis.
For the last two types of transformations (’principal_components’ and ’canonical_variates’), the length of input
data of the SVM is determined by numComponents, whereas numFeatures determines the dimensionality of
the input data (i.e., the length of the untransformed feature vector). Hence, by using one of these two transforma-
tions, the size of the SVM with respect to data length is reduced, leading to shorter training/classification times by
the SVM.
After the SVM has been created with CreateClassSvm, typically training samples are added to the SVM by
repeatedly calling AddSampleClassSvm or ReadSamplesClassSvm. After this, the SVM is typically
trained using TrainClassSvm. Hereafter, the SVM can be saved using WriteClassSvm. Alternatively, the
SVM can be used immediately after training to classify data using ClassifyClassSvm.
A comparison of the SVM and the multi-layer perceptron (MLP) (see CreateClassMlp) typically shows that
SVMs are generally faster at training, especially for huge training sets, and achieve slightly better recognition rates
than MLPs. The MLP is faster at classification and should therefore be preferred in time-critical applications. Please
note that this guideline assumes optimal tuning of the parameters.
Parameter
. numFeatures (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Number of input variables (features) of the SVM.
Default Value : 10
Suggested values : NumFeatures ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction : NumFeatures ≥ 1
. kernelType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
The kernel type.
Default Value : "rbf"
List of values : KernelType ∈ {"linear", "rbf", "polynomial_inhomogeneous",
"polynomial_homogeneous"}
. kernelParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Additional parameter for the kernel function. In case of the RBF kernel, the value for γ; for the polynomial
kernels, the degree.
Default Value : 0.02
Suggested values : KernelParam ∈ {0.01, 0.02, 0.05, 0.1, 0.5}
. nu (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Regularization constant of the SVM.
Default Value : 0.05
Suggested values : Nu ∈ {0.0001, 0.001, 0.01, 0.05, 0.1, 0.2, 0.3}
Restriction : (Nu > 0.0) ∧ (Nu < 1.0)
. numClasses (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Number of classes.
Default Value : 5
Suggested values : NumClasses ∈ {2, 3, 4, 5, 6, 7, 8, 9, 10}
Restriction : NumClasses ≥ 1
. mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
The mode of the SVM.
Default Value : "one-versus-one"
List of values : Mode ∈ {"novelty-detection", "one-versus-all", "one-versus-one"}
. preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Type of preprocessing used to transform the feature vectors.
Default Value : "normalization"
List of values : Preprocessing ∈ {"none", "normalization", "principal_components",
"canonical_variates"}
. numComponents (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Preprocessing parameter: Number of transformed features (ignored for preprocessing = ’none’ and
preprocessing = ’normalization’).
Default Value : 10
Suggested values : NumComponents ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction : NumComponents ≥ 1
. SVMHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; HClassSvm / HTuple (IntPtr)
SVM handle.
Example (Syntax: HDevelop)
Result
If the parameters are valid, the operator CreateClassSvm returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Parallelization Information
CreateClassSvm is processed completely exclusively without parallelization.
Possible Successors
AddSampleClassSvm
Alternatives
CreateClassMlp, CreateClassGmm, CreateClassBox
See also
ClearClassSvm, TrainClassSvm, ClassifyClassSvm
References
Bernhard Schölkopf, Alexander J.Smola: “Learning with Kernels”; MIT Press, London; 1999.
John Shawe-Taylor, Nello Cristianini: “Kernel Methods for Pattern Analysis”; Cambridge University Press, Cam-
bridge; 2004.
Module
Foundation
Compute the information content of the preprocessed feature vectors of a support vector machine.
GetPrepInfoClassSvm computes the information content of the training vectors that have been transformed
with the preprocessing given by preprocessing. preprocessing can be set to ’principal_components’
or ’canonical_variates’. The preprocessing methods are described with CreateClassSvm. The information
content is derived from the variations of the transformed components of the feature vector, i.e., it is computed solely
based on the training data, independent of any error rate on the training data. The information content is computed
for all relevant components of the transformed feature vectors (NumFeatures for ’principal_components’ and
min(NumClasses − 1, NumFeatures) for ’canonical_variates’, see CreateClassSvm), and is returned
in informationCont as a number between 0 and 1. To convert the information content into a percentage, it
simply needs to be multiplied by 100. The cumulative information content of the first n components is returned
in the n-th component of cumInformationCont, i.e., cumInformationCont contains the sums of the
first n elements of informationCont. To use GetPrepInfoClassSvm, a sufficient number of samples
must be added to the support vector machine (SVM) given by SVMHandle by using AddSampleClassSvm or
ReadSamplesClassSvm.
informationCont and cumInformationCont can be used to decide how many components of the
transformed feature vectors contain relevant information. An often used criterion is to require that the trans-
formed data must represent x% (e.g., 90%) of the data. This can be decided easily from the first value
of cumInformationCont that lies above x%. The number thus obtained can be used as the value for
NumComponents in a new call to CreateClassSvm. The call to GetPrepInfoClassSvm already
requires the creation of an SVM, and hence the setting of NumComponents in CreateClassSvm to an
HALCON 8.0.2
52 CHAPTER 1. CLASSIFICATION
initial value. However, when GetPrepInfoClassSvm is called, it is typically not known how many com-
ponents are relevant, and hence how to set NumComponents in this call. Therefore, the following two-step
approach should typically be used to select NumComponents: In a first step, an SVM with the maximum num-
ber for NumComponents is created (NumFeatures for ’principal_components’ and min(NumClasses −
1, NumFeatures) for ’canonical_variates’). Then, the training samples are added to the SVM and are saved in
a file using WriteSamplesClassSvm. Subsequently, GetPrepInfoClassSvm is used to determine the
information content of the components, and with this NumComponents. After this, a new SVM with the desired
number of components is created, and the training samples are read with ReadSamplesClassSvm. Finally,
the SVM is trained with TrainClassSvm.
Parameter
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; HClassSvm / HTuple (IntPtr)
SVM handle.
. preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Type of preprocessing used to transform the feature vectors.
Default Value : "principal_components"
List of values : Preprocessing ∈ {"principal_components", "canonical_variates"}
. informationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Relative information content of the transformed feature vectors.
. cumInformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Cumulative information content of the transformed feature vectors.
Example (Syntax: HDevelop)
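A sketch of the two-step selection of NumComponents described above (the file name ’samples.mtf’, the kernel parameters, and the 90% threshold are placeholders):

```
* Step 1: create an SVM with the maximum number of components
create_class_svm (NumFeatures, 'rbf', 0.02, 0.05, NumClasses,
                  'one-versus-all', 'principal_components',
                  NumFeatures, SVMHandle)
read_samples_class_svm (SVMHandle, 'samples.mtf')
get_prep_info_class_svm (SVMHandle, 'principal_components',
                         InformationCont, CumInformationCont)
* Select the smallest number of components that represents
* at least 90% of the data
NumComp := 1
for I := 0 to |CumInformationCont| - 1 by 1
    if (CumInformationCont[I] < 0.9)
        NumComp := I + 2
    endif
endfor
clear_class_svm (SVMHandle)
* Step 2: create the final SVM with the selected number of components
create_class_svm (NumFeatures, 'rbf', 0.02, 0.05, NumClasses,
                  'one-versus-all', 'principal_components',
                  NumComp, SVMHandle)
read_samples_class_svm (SVMHandle, 'samples.mtf')
train_class_svm (SVMHandle, 0.001, 'default')
```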
Result
If the parameters are valid the operator GetPrepInfoClassSvm returns the value 2 (H_MSG_TRUE). If
necessary, an exception handling is raised.
GetPrepInfoClassSvm may return the error 9211 (Matrix is not positive definite) if preprocessing =
’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Parallelization Information
GetPrepInfoClassSvm is reentrant and processed without parallelization.
Possible Predecessors
AddSampleClassSvm, ReadSamplesClassSvm
Possible Successors
ClearClassSvm, CreateClassSvm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation
Return a training sample from the training data of a support vector machine.
GetSampleClassSvm reads out a training sample from the support vector machine (SVM) given by
SVMHandle that was added with AddSampleClassSvm or ReadSamplesClassSvm. The index
of the sample is specified with indexSample. The index is counted from 0, i.e., indexSample must be a number between 0 and NumSamples − 1, where NumSamples can be determined with GetSampleNumClassSvm. The training sample is returned in features and target. features is a
feature vector of length NumFeatures (see CreateClassSvm), while target is the index of the class,
ranging between 0 and NumClasses-1 (see AddSampleClassSvm).
GetSampleClassSvm can, for example, be used to reclassify the training data with ClassifyClassSvm in
order to determine which training samples, if any, are classified incorrectly.
Parameter
* Train an SVM
create_class_svm (NumFeatures, ’rbf’, 0.01, 0.01, NumClasses,
’one-versus-all’, ’normalization’, NumFeatures,
SVMHandle)
read_samples_class_svm (SVMHandle, ’samples.mtf’)
train_class_svm (SVMHandle, 0.001, ’default’)
* Reclassify the training samples
get_sample_num_class_svm (SVMHandle, NumSamples)
for I := 0 to NumSamples-1 by 1
get_sample_class_svm (SVMHandle, I, Data, Target)
classify_class_svm (SVMHandle, Data, 1, Class)
if (Class # Target)
* Sample has been classified incorrectly
endif
endfor
clear_class_svm (SVMHandle)
Result
If the parameters are valid the operator GetSampleClassSvm returns the value 2 (H_MSG_TRUE). If neces-
sary, an exception handling is raised.
Parallelization Information
GetSampleClassSvm is reentrant and processed without parallelization.
Possible Predecessors
AddSampleClassSvm, ReadSamplesClassSvm, GetSampleNumClassSvm,
GetSupportVectorClassSvm
Possible Successors
ClassifyClassSvm
See also
CreateClassSvm
Module
Foundation
int HClassSvm.GetSampleNumClassSvm ( )
Return the number of training samples stored in the training data of a support vector machine.
GetSampleNumClassSvm returns in numSamples the number of training samples that are stored in the sup-
port vector machine (SVM) given by SVMHandle. GetSampleNumClassSvm should be called before the
individual training samples are accessed with GetSampleClassSvm, e.g., for the purpose of reclassifying the
training data (see GetSampleClassSvm).
Parameter
double HClassSvm.GetSupportVectorClassSvm (
HTuple indexSupportVector )
Return the index of a support vector from a trained support vector machine.
The operator GetSupportVectorClassSvm maps support vectors of a trained SVM (given in SVMHandle)
to the original training data set. The index of the SV is specified with indexSupportVector. The index is
counted from 0, i.e., indexSupportVector must be a number between 0 and NumSupportVectors − 1, where NumSupportVectors can be determined with GetSupportVectorNumClassSvm. The
index of this SV in the training data is returned in index. This index can be used for a query with
GetSampleClassSvm to obtain the feature vectors that become support vectors. GetSampleClassSvm
can, for example, be used to visualize the support vectors.
Note that when using TrainClassSvm with a mode different from ’default’ or reducing the SVM with
ReduceClassSvm, the returned index will always be -1, i.e., it will be invalid. The reason for this is that
a consistent mapping between SV and training data becomes impossible.
Parameter
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; HClassSvm / HTuple (IntPtr)
SVM handle.
. indexSupportVector (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; HTuple (int / long)
Index of the stored support vector.
. index (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Index of the support vector in the training set.
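Example (Syntax: HDevelop)

A sketch of mapping the support vectors of a trained SVM back to the training samples, e.g., for visualization (variable names are placeholders; it is assumed that GetSupportVectorNumClassSvm returns the total number of SVs in its first output parameter):

```
* Map the support vectors back to the training set
get_support_vector_num_class_svm (SVMHandle, NumSV, NumSVPerSVM)
for I := 0 to NumSV - 1 by 1
    get_support_vector_class_svm (SVMHandle, I, Index)
    * Index refers to the sample added with add_sample_class_svm
    get_sample_class_svm (SVMHandle, Index, Features, Target)
    * Features can now be visualized
endfor
```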
Result
If the parameters are valid the operator GetSupportVectorClassSvm returns the value 2 (H_MSG_TRUE). If necessary, an exception handling is raised.
Parallelization Information
GetSupportVectorClassSvm is reentrant and processed without parallelization.
Possible Predecessors
TrainClassSvm, GetSupportVectorNumClassSvm
Possible Successors
GetSampleClassSvm
See also
CreateClassSvm
Module
Foundation
Result
If SVMHandle is valid the operator GetSupportVectorNumClassSvm returns the value 2 (H_MSG_TRUE). If necessary, an exception handling is raised.
Parallelization Information
GetSupportVectorNumClassSvm is reentrant and processed without parallelization.
Possible Predecessors
TrainClassSvm
Possible Successors
GetSampleClassSvm
See also
CreateClassSvm
Module
Foundation
Approximate a trained support vector machine by a reduced support vector machine for faster classification.
As described in CreateClassSvm, the classification time of an SVM depends on the number of kernel evaluations between the support vectors and the feature vectors. While the length of the data vectors can be reduced in a preprocessing step like ’principal_components’ or ’canonical_variates’ (see CreateClassSvm for details),
the number of resulting SV depends on the complexity of the classification problem. The number of SVs is deter-
mined during training. To further reduce classification time, the number of SVs can be reduced by approximating
the original separating hyperplane with fewer SVs than originally required. For this purpose, a copy of the orig-
inal SVM provided by SVMHandle is created and returned in SVMHandleReduced. This new SVM has the
same parametrization as the original SVM, but a different SV expansion. The training samples that are included in
SVMHandle are not copied. The original SVM is not modified by ReduceClassSvm.
The reduction method is selected with method. Currently, only a bottom-up approach is supported, which iteratively merges SVs. The algorithm stops if either the minimum number of SVs is reached (minRemainingSV) or if the accumulated maximum error exceeds the threshold maxError. Note that the approximation reduces the complexity of the hyperplane and thereby degrades the classification rate. A common approach is therefore to start from a small maxError, e.g., 0.001, and to increase its value step by step. To control the reduction
ratio, at each step the number of remaining SVs is determined with GetSupportVectorNumClassSvm and
the classification rate is checked on a separate test data set with ClassifyClassSvm.
Parameter
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; HClassSvm / HTuple (IntPtr)
Original SVM handle.
. method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Type of postprocessing to reduce number of SV.
Default Value : "bottom_up"
List of values : Method ∈ {"bottom_up"}
. minRemainingSV (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Minimum number of remaining SVs.
Default Value : 2
Suggested values : MinRemainingSV ∈ {2, 3, 4, 5, 7, 10, 15, 20, 30, 50}
Restriction : MinRemainingSV ≥ 2
. maxError (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Maximum allowed error of reduction.
Default Value : 0.001
Suggested values : MaxError ∈ {0.0001, 0.0002, 0.0005, 0.001, 0.002, 0.005, 0.01, 0.02, 0.05}
Restriction : MaxError > 0.0
. SVMHandleReduced (output_control) . . . . . . . . . . . . . . . . . . . class_svm ; HClassSvm / HTuple (IntPtr)
SVMHandle of reduced SVM.
Example (Syntax: HDevelop)
* Train an SVM
create_class_svm (NumFeatures, ’rbf’, 0.01, 0.01, NumClasses,
’one-versus-all’, ’normalization’, NumFeatures,
SVMHandle)
read_samples_class_svm (SVMHandle, ’samples.mtf’)
train_class_svm (SVMHandle, 0.001, ’default’)
* Create a reduced SVM
reduce_class_svm (SVMHandle, ’bottom_up’, 2, 0.01, SVMHandleReduced)
write_class_svm (SVMHandleReduced, ’classifier.svm’)
clear_class_svm (SVMHandleReduced)
clear_class_svm (SVMHandle)
Result
If the parameters are valid the operator ReduceClassSvm returns the value 2 (H_MSG_TRUE). If necessary, an exception handling is raised.
Parallelization Information
ReduceClassSvm is processed completely exclusively without parallelization.
Possible Predecessors
TrainClassSvm, GetSupportVectorNumClassSvm
Possible Successors
ClassifyClassSvm, WriteClassSvm, GetSupportVectorNumClassSvm
See also
TrainClassSvm
Module
Foundation
Train a support vector machine.
TrainClassSvm trains the support vector machine (SVM) given in SVMHandle. Before the SVM can be
trained, the training samples to be used for the training must be added to the SVM using AddSampleClassSvm
or ReadSamplesClassSvm.
Technically, training an SVM means solving a convex quadratic optimization problem. This implies that the training is guaranteed to terminate after a finite number of steps at the global optimum. In order to recognize termination, the gradient of the function that is optimized internally must fall below a threshold, which is set in epsilon. By default, a value of 0.001 should be used for epsilon, since this yields the best results in practice. A value that is too large leads to premature termination and might result in suboptimal solutions, whereas a value that is too small makes the optimization take longer, often without changing the recognition rate significantly. Nevertheless, if longer training times are acceptable, a value smaller than 0.001 might be chosen. There are two common reasons for changing epsilon: First, if you specified a very small value for Nu when calling CreateClassSvm, e.g., Nu = 0.001, a smaller epsilon might significantly improve the recognition rate. A second case is the determination of the optimal kernel function and its parameterization (e.g., the KernelParam-Nu pair for the RBF kernel) with the computationally intensive n-fold cross validation. Here, choosing a larger epsilon reduces the computation time without changing the parameters of the optimal kernel that would be obtained with the default epsilon. After the optimal KernelParam-Nu pair has been found, the final training is conducted with a small epsilon.
The duration of the training depends on the training data, in particular on the number of resulting support vectors
(SVs), and epsilon. It can lie between seconds and several hours. It is therefore recommended to choose the
SVM parameter Nu in CreateClassSvm so that as few SVs as possible are generated without decreasing the
recognition rate. Special care must be taken with the parameter Nu in CreateClassSvm so that the optimization starts from a feasible region. If Nu is chosen too large, i.e., too many training errors are permitted, an exception handling is raised. In this case, an SVM with the same training data, but with a smaller Nu must be trained.
With the parameter trainMode you can choose between different training modes. Normally, you train an SVM
without additional information and trainMode is set to ’default’. If multiple SVMs for the same data set but
with different kernels are trained, subsequent training runs can reuse optimization results and thus speed up the overall training time of all runs. For this mode, an SVM handle of a previously trained SVM is passed in trainMode. Note that the SVM handle passed in SVMHandle and the SVM handle passed in trainMode must have
the same training data, the same mode and the same number of classes (see CreateClassSvm). The application
for this training mode is the evaluation of different kernel functions given the same training set. In the literature
this is referred to as alpha seeding.
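Alpha seeding can be sketched as follows (hypothetical handles; both SVMs must hold the same training data, the same mode, and the same number of classes):

```
* SVMHandle1 and SVMHandle2 were created with different kernels
* but hold the same training samples
train_class_svm (SVMHandle1, 0.001, 'default')
* Reuse the optimization results of SVMHandle1 as the starting point
train_class_svm (SVMHandle2, 0.001, SVMHandle1)
```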
With trainMode = ’add_sv_to_train_set’ it is possible to append the support vectors that were generated by
a previous call of TrainClassSvm to the currently saved training set. This mode has two typical application
areas: First, it is possible to train an SVM gradually. For this, the complete training set is divided into disjoint chunks. The first chunk is trained normally using trainMode = ’default’. Afterwards, the previous training
set is removed with ClearSamplesClassSvm, the next chunk is added with AddSampleClassSvm and
trained with trainMode = ’add_sv_to_train_set’. This is repeated until all chunks are trained. This approach has
the advantage that even huge training data sets can be trained efficiently with respect to memory consumption. A
second application area for this mode is that a general purpose classifier can be specialized by adding characteristic
training samples and then retraining it. Please note that the preprocessing (as described in CreateClassSvm)
is not changed when training with trainMode = ’add_sv_to_train_set’.
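The chunk-wise training described above can be sketched as follows (the calls that add the samples of each chunk are elided):

```
* Chunk 1: train normally
* (samples of the first chunk were added with add_sample_class_svm)
train_class_svm (SVMHandle, 0.001, 'default')
* For each further chunk: discard the old samples, keep the SVs
clear_samples_class_svm (SVMHandle)
* ... add the samples of the next chunk with add_sample_class_svm ...
train_class_svm (SVMHandle, 0.001, 'add_sv_to_train_set')
```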
Parameter
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; HClassSvm / HTuple (IntPtr)
SVM handle.
. epsilon (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Stop parameter for training.
Default Value : 0.001
Suggested values : Epsilon ∈ {0.00001, 0.0001, 0.001, 0.01, 0.1}
. trainMode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (string / int / long)
Mode of training. For normal operation: ’default’. If SVs already included in the SVM should be used for
training: ’add_sv_to_train_set’. For alpha seeding: the respective SVM handle.
Default Value : "default"
List of values : TrainMode ∈ {"default", "add_sv_to_train_set"}
Example (Syntax: HDevelop)
* Train an SVM
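Following the pattern of the examples for GetSampleClassSvm and ReduceClassSvm (the file names are placeholders):

```
create_class_svm (NumFeatures, 'rbf', 0.01, 0.01, NumClasses,
                  'one-versus-all', 'normalization', NumFeatures,
                  SVMHandle)
read_samples_class_svm (SVMHandle, 'samples.mtf')
train_class_svm (SVMHandle, 0.001, 'default')
write_class_svm (SVMHandle, 'classifier.svm')
clear_class_svm (SVMHandle)
```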
Result
If the parameters are valid the operator TrainClassSvm returns the value 2 (H_MSG_TRUE). If necessary, an
exception handling is raised.
Parallelization Information
TrainClassSvm is processed completely exclusively without parallelization.
Possible Predecessors
AddSampleClassSvm, ReadSamplesClassSvm
Possible Successors
ClassifyClassSvm, WriteClassSvm
Alternatives
ReadClassSvm
See also
CreateClassSvm
References
John Shawe-Taylor, Nello Cristianini: “Kernel Methods for Pattern Analysis”; Cambridge University Press, Cam-
bridge; 2004.
Bernhard Schölkopf, Alexander J. Smola: “Learning with Kernels”; MIT Press, London; 1999.
Module
Foundation
See also
CreateClassSvm, ReadClassSvm, WriteSamplesClassSvm
Module
Foundation
File
2.1 Images
HALCON also searches for images in the subdirectory ’images’ (images for the program examples). The environment variable HALCONROOT is used for the HALCON directory.
Attention
If CMYK or YCCK JPEG files are read, HALCON assumes that these files follow the Adobe Photoshop convention
that the CMYK channels are stored inverted, i.e., 0 represents 100% ink coverage, rather than 0% ink as one would
expect. The images are converted to RGB images using this convention. If the JPEG file does not follow this
convention, but stores the CMYK channels in the usual fashion, InvertImage must be called after reading the
image.
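In this case, the correction can be sketched as follows (the file name is a placeholder):

```
* Read a CMYK JPEG that does NOT follow the Adobe convention
read_image (Image, 'cmyk_file')
* Undo the implicit inversion applied during the RGB conversion
invert_image (Image, ImageInverted)
```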
If PNG images that contain an alpha channel are read, the alpha channel is returned as the second or fourth channel
of the output image, unless the alpha channel contains exactly two different gray values, in which case a one or
three channel image with a reduced domain is returned, in which the points in the domain correspond to the points
with the higher gray value in the alpha channel.
Parameter
/* Reading an image: */
read_image(Image,’monkey’).
Result
If the parameters are correct the operator ReadImage returns the value 2 (H_MSG_TRUE). Otherwise an ex-
ception handling is raised.
Parallelization Information
ReadImage is reentrant and processed without parallelization.
Possible Successors
DispImage, Threshold, Regiongrowing, CountChannels, Decompose3, ClassNdimNorm,
GaussImage, FillInterlace, ZoomImageSize, ZoomImageFactor, CropPart, WriteImage,
Rgb1ToGray
Alternatives
ReadSequence
See also
SetSystem, WriteImage
Module
Foundation
Read images.
The operator ReadSequence reads unformatted image data from a file and returns a “suitable” image. The
image data must be filled consecutively pixel by pixel and line by line.
Any file headers (of length headerSize bytes) are skipped. The parameters sourceWidth and sourceHeight indicate the size of the stored image. destWidth and destHeight indicate the size of the output image. In the simplest case these parameters are the same. However, sub-areas can also be read. The upper left corner of the required image area can be specified via startRow and startColumn.
The pixel types ’bit’, ’byte’, ’short’ (16 bits, unsigned), ’signed_short’ (16 bits, signed), ’long’ (32 bits, signed), ’swapped_long’ (32 bits, with swapped segments), and ’real’ (32 bit floating point numbers) are supported. Furthermore, the operator ReadSequence enables the extraction of the components of an RGB image, if triples of three bytes (in the sequence “red”, “green”, “blue”) are stored in the image file. For the red component the pixel type ’r_byte’ must be chosen, and correspondingly ’g_byte’ or ’b_byte’ for the green and blue components, respectively.
’MSBFirst’ (most significant bit first) or the inversion thereof (’LSBFirst’) can be chosen for the bit order
(bitOrder). The byte orders (byteOrder) ’MSBFirst’ (most significant byte first) or ’LSBFirst’, respectively,
are processed analogously. Finally an alignment (pad) can be set at the end of the line: ’byte’, ’short’ or ’long’. If
a whole image sequence is stored in the file a single image (beginning at Index 1) can be chosen via the parameter
index.
Image files are searched for in the current directory (determined by the environment variable) and in the image directory of HALCON. The image directory of HALCON is preset to ’.’ and ’/usr/local/halcon/images’ in a UNIX environment and can be set via the operator SetSystem. More than one image directory can be indicated. This is done by separating the individual directories by a colon.
Furthermore the search path can be set via the environment variable HALCONIMAGES (same structure as ’im-
age_dir’). Example:
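For instance (the directories shown are placeholders):

```
setenv HALCONIMAGES "/usr/local/halcon/images:/home/user/images"
```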
HALCON also searches for images in the subdirectory ’images’ (images for the program examples). The environment variable HALCONROOT is used for the HALCON directory.
Attention
If files of pixel type ’real’ are read and the byte order is chosen incorrectly (i.e., differently from the byte order in which the data is stored in the file), program errors and even crashes due to floating point exceptions may result.
Parameter
. image (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Image read.
. headerSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Number of bytes for file header.
Default Value : 0
Typical range of values : 0 ≤ HeaderSize
. sourceWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; HTuple (int / long)
Number of image columns of the stored image.
Default Value : 512
Typical range of values : 1 ≤ SourceWidth
. sourceHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; HTuple (int / long)
Number of image lines of the stored image.
Default Value : 512
Typical range of values : 1 ≤ SourceHeight
’tiff’ TIFF format, 3-channel images (RGB): 3 samples per pixel; other images (gray value images): 1 sample per pixel, 8 bits per sample, uncompressed, 72 dpi; file extension: *.tif
’bmp’ Windows BMP format, 3-channel images (RGB): 3 bytes per pixel; other images (gray value images): 1 byte per pixel; file extension: *.bmp
’jpeg’ JPEG format (lossy compression); together with the format string, the quality value determining the compression rate can be provided, e.g., ’jpeg 30’. Attention: images that are to be processed further should not be compressed with the JPEG format because of the loss of information.
’jp2’ JPEG-2000 format (lossless and lossy compression); together with the format string, the quality value determining the compression rate can be provided (e.g., ’jp2 40’). This value corresponds to the ratio of the size of the compressed image to the size of the uncompressed image (in percent). Since lossless JPEG-2000 compression already reduces the file size significantly, only smaller values (typically smaller than 50) influence the file size. If no value is provided for the compression (and only then), the image is compressed losslessly. The image can contain an arbitrary number of channels. Possible types are byte, cyclic, direction, int1, uint2, int2, and int4. In the case of int4 it is only possible to store images with at most 24 bits of precision (otherwise an exception handling is raised). If an image with a reduced domain is written, the region is stored as a 1-bit alpha channel.
’png’ PNG format (lossless compression); together with the format string, a compression level between 0 and 9 can
be specified, where 0 corresponds to no compression and 9 to the best possible compression. Alternatively,
the compression can be selected with the following strings: ’best’, ’fastest’, and ’none’. Hence, examples for
correct parameters are ’png’, ’png 7’, and ’png none’. Images of type byte and uint2 can be stored in PNG
files. If an image with a reduced domain is written, the region is stored as the alpha channel, where the points
within the domain are stored as the maximum gray value of the image type and the points outside the domain
are stored as the gray value 0. If an image with a full domain is written, no alpha channel is stored.
’ima’ The data is written binary, line by line (without header or carriage return). The size of the image and the pixel type are stored in the description file ’fileName.exp’. All HALCON pixel types except complex and vector_field can be written. Only the first channel of the image is stored in the file. The file extension is: ’.ima’
Parameter
2.2 Misc
static void HOperatorSet.DeleteFile ( HTuple fileName )
static void HMisc.DeleteFile ( string fileName )
Delete a file.
DeleteFile deletes the file given by fileName.
Parameter
. fileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename ; HTuple (string)
Name of the file to be deleted.
Result
DeleteFile returns the value 2 (H_MSG_TRUE) if the file exists and could be deleted. Otherwise, an exception
is raised.
Parallelization Information
DeleteFile is reentrant and processed without parallelization.
Module
Foundation
ReadWorldFile reads a geocoding from an ARC/INFO world file with the file name fileName and returns
it as a homogeneous 2D transformation matrix in worldTransformation. To find the file fileName, all
directories contained in the HALCON system variable ’image_dir’ (usually this is the content of the environment
variable HALCONIMAGES) are searched (see ReadImage). This transformation matrix can be used to trans-
form XLD contours to the world coordinate system before writing them with WriteContourXldArcInfo.
If the matrix worldTransformation is inverted by calling HomMat2dInvert, the resulting matrix can
be used to transform contours that have been read with ReadContourXldArcInfo to the image coordinate
system.
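A sketch of a typical use (the file names are placeholders):

```
* Read the geocoding and transform contours to world coordinates
read_world_file ('image.tfw', WorldTransformation)
affine_trans_contour_xld (Contours, WorldContours, WorldTransformation)
write_contour_xld_arc_info (WorldContours, 'contours.gen')
* Inverse direction: map contours read from ARC/INFO back to the image
hom_mat2d_invert (WorldTransformation, WorldToImage)
read_contour_xld_arc_info (ContoursArcInfo, 'contours.gen')
affine_trans_contour_xld (ContoursArcInfo, ImageContours, WorldToImage)
```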
Parameter
. fileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; HTuple (string)
Name of the ARC/INFO world file.
. worldTransformation (output_control) . . . . . . . hom_mat2d-array ; HHomMat2D / HTuple (double)
Transformation matrix from image to world coordinates.
Result
If the parameters are correct and the world file could be read, the operator ReadWorldFile returns the value 2
(H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
ReadWorldFile is reentrant and processed without parallelization.
Possible Successors
HomMat2dInvert, AffineTransContourXld, AffineTransPolygonXld
See also
WriteContourXldArcInfo, ReadContourXldArcInfo, WritePolygonXldArcInfo,
ReadPolygonXldArcInfo
Module
Foundation
2.3 Region
Result
If the parameter values are correct the operator ReadRegion returns the value 2 (H_MSG_TRUE). Otherwise
an exception handling is raised.
Parallelization Information
ReadRegion is reentrant and processed without parallelization.
Possible Predecessors
ReadImage
Possible Successors
ReduceDomain, DispRegion
See also
WriteRegion, ReadImage
Module
Foundation
regiongrowing(Img,Segmente,3,3,5,10)
write_region(Segmente,’result1’).
Result
If the parameter values are correct the operator WriteRegion returns the value 2 (H_MSG_TRUE). Otherwise
an exception handling is raised.
Parallelization Information
WriteRegion is reentrant and processed without parallelization.
Possible Predecessors
OpenWindow, ReadImage, ReadRegion, Threshold, Regiongrowing
See also
ReadRegion
Module
Foundation
2.4 Text
static void HOperatorSet.CloseAllFiles ( )
static void HMisc.CloseAllFiles ( )
Close all open files.
CloseAllFiles closes all open files.
Attention
CloseAllFiles exists solely for the purpose of implementing the “reset program” functionality in HDevelop.
CloseAllFiles must not be used in any application.
Result
If it is possible to close the files the operator CloseAllFiles returns the value 2 (H_MSG_TRUE). Otherwise
an exception handling is raised.
Parallelization Information
CloseAllFiles is processed completely exclusively without parallelization.
Alternatives
CloseFile
Module
Foundation
open_file(’/tmp/data.txt’,’input’,FileHandle)
// ....
close_file(FileHandle).
Result
If the file handle is correct CloseFile returns the value 2 (H_MSG_TRUE). Otherwise an exception handling
is raised.
Parallelization Information
CloseFile is processed completely exclusively without parallelization.
Possible Predecessors
OpenFile
See also
OpenFile
Module
Foundation
The operator FnewLine writes a line feed to the output file. At the same time, the output buffer is flushed.
Parameter
fwrite_string(FileHandle,’Good Morning’)
fnew_line(FileHandle)
Result
If an output file is open and it can be written to the file the operator FnewLine returns the value 2
(H_MSG_TRUE). Otherwise an exception handling is raised.
Parallelization Information
FnewLine is reentrant and processed without parallelization.
Possible Predecessors
FwriteString
See also
FwriteString
Module
Foundation
repeat >
fread_char(FileHandle:Char)
(if(Char = ’nl’) > fnew_line(FileHandle)) |
(if(Char != ’nl’) > fwrite_string(FileHandle,Char))
until(Char = ’eof’).
Result
If an input file is open the operator FreadChar returns 2 (H_MSG_TRUE). Otherwise an exception handling is raised.
Parallelization Information
FreadChar is reentrant and processed without parallelization.
Possible Predecessors
OpenFile
Possible Successors
CloseFile
Alternatives
FreadString, ReadString, FreadLine
See also
OpenFile, CloseFile, FreadString, FreadLine
Module
Foundation
do {
fread_line(FileHandle,&Line,&IsEOF) ;
} while(IsEOF==0) ;
Result
If the file is open and a suitable line is read FreadLine returns the value 2 (H_MSG_TRUE). Otherwise an
exception handling is raised.
Parallelization Information
FreadLine is reentrant and processed without parallelization.
Possible Predecessors
OpenFile
Possible Successors
CloseFile
Alternatives
FreadChar, FreadString
See also
OpenFile, CloseFile, FreadChar, FreadString
Module
Foundation
Result
If a file is open and a suitable string is read FreadString returns the value 2 (H_MSG_TRUE). Otherwise an
exception handling is raised.
Parallelization Information
FreadString is reentrant and processed without parallelization.
Possible Predecessors
OpenFile
Possible Successors
CloseFile
Alternatives
FreadChar, ReadString, FreadLine
See also
OpenFile, CloseFile, FreadChar, FreadLine
Module
Foundation
Parameter
Result
If the writing procedure was carried out successfully the operator FwriteString returns the value 2
(H_MSG_TRUE). Otherwise an exception handling is raised.
Parallelization Information
FwriteString is reentrant and processed without parallelization.
Possible Predecessors
OpenFile
Possible Successors
CloseFile
Alternatives
WriteString
See also
OpenFile, CloseFile, SetSystem
Module
Foundation
Result
If the parameters are correct the operator OpenFile returns the value 2 (H_MSG_TRUE). Otherwise an excep-
tion handling is raised.
Parallelization Information
OpenFile is processed completely exclusively without parallelization.
Possible Successors
FwriteString, FreadChar, FreadString, FreadLine, CloseFile
See also
CloseFile, FwriteString, FreadChar, FreadString, FreadLine
Module
Foundation
2.5 Tuple
2.6 XLD
static void HOperatorSet.ReadContourXldArcInfo (
out HObject contours, HTuple fileName )
Result
If the parameters are correct and the file could be read, the operator ReadContourXldArcInfo returns the
value 2 (H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
ReadContourXldArcInfo is reentrant and processed without parallelization.
Possible Successors
HomMat2dInvert, AffineTransContourXld
See also
ReadWorldFile, WriteContourXldArcInfo, ReadPolygonXldArcInfo
Module
Foundation
• POLYLINE
– 2D curves made up of line segments
– Closed 2D curves made up of line segments
• LWPOLYLINE
• LINE
• POINT
• CIRCLE
• ARC
• ELLIPSE
• SPLINE
• BLOCK
• INSERT
The x and y coordinates of the DXF entities are stored in the column and row coordinates, respectively, of the XLD
contours contours.
If the file has been created with the operator WriteContourXldDxf, all attributes and global attributes that
were originally defined for the XLD contours are read. This means that ReadContourXldDxf supports all the
extended data written by the operator WriteContourXldDxf. The reading of these attributes can be switched
off by setting the generic parameter ’read_attributes’ to ’false’. Generic parameters are set by specifying the
parameter name(s) in genParamNames and the corresponding value(s) in genParamValues.
DXF entities of the type CIRCLE, ARC, ELLIPSE, and SPLINE are approximated by XLD contours. The
accuracy of this approximation can be controlled with the two generic parameters ’min_num_points’ and
’max_approx_error’ (for SPLINE only ’max_approx_error’). The parameter ’min_num_points’ defines the mini-
mum number of sampling points that are used for the approximation. Note that the parameter ’min_num_points’
always refers to the full circle or ellipse, respectively, even for ARCs or elliptical arcs, i.e., if ’min_num_points’ is
set to 50 and a DXF entity of the type ARC is read that represents a semi-circle, this semi-circle is approximated
by at least 25 sampling points. The parameter ’max_approx_error’ defines the maximum deviation of the XLD
contour from the ideal circle or ellipse, respectively (unit: pixel). For the determination of the accuracy of the
approximation both criteria are evaluated. Then, the criterion that leads to the more accurate approximation is
used.
Internally, the following default values are used for the generic parameters:
’read_attributes’ = ’true’
’min_num_points’ = 20
’max_approx_error’ = 0.25
To achieve a more accurate approximation, either the value for ’min_num_points’ must be increased or the value
for ’max_approx_error’ must be decreased.
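The interaction of the two criteria can be sketched for a full circle. The sagitta model below (a regular n-gon deviates from its circumscribed circle by radius · (1 − cos(π/n))) is an assumed approximation model, not HALCON's implementation; it only illustrates that the criterion yielding the finer sampling wins:

```python
import math

def circle_sample_count(radius, min_num_points=20, max_approx_error=0.25):
    """Illustrative sketch: number of sampling points for a full circle.
    n_err is the smallest n whose polygon stays within max_approx_error
    (sagitta model); the finer of the two criteria is used."""
    n_err = math.ceil(math.pi / math.acos(1.0 - max_approx_error / radius))
    return max(min_num_points, n_err)
```

For a small circle the 'min_num_points' criterion dominates; for a large circle the 'max_approx_error' criterion requires more points.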
Parameter
Result
If the parameters are correct and the file could be read, the operator ReadPolygonXldArcInfo returns the
value 2 (H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
ReadPolygonXldArcInfo is reentrant and processed without parallelization.
Possible Successors
HomMat2dInvert, AffineTransPolygonXld
See also
ReadWorldFile, WritePolygonXldArcInfo, ReadContourXldArcInfo
Module
Foundation
The output parameter dxfStatus contains information about the number of polygons that were read and, if
necessary, warnings that parts of the DXF file could not be interpreted.
The operator ReadPolygonXldDxf supports the following DXF entities:
• POLYLINE
– 2D curves made up of line segments
– Closed 2D curves made up of line segments
• LWPOLYLINE
• LINE
• POINT
• CIRCLE
• ARC
• ELLIPSE
• SPLINE
• BLOCK
• INSERT
The x and y coordinates of the DXF entities are stored in the column and row coordinates, respectively, of the XLD
polygons polygons.
DXF entities of the type CIRCLE, ARC, ELLIPSE, and SPLINE are approximated by XLD polygons. The
accuracy of this approximation can be controlled with the two generic parameters ’min_num_points’ and
’max_approx_error’ (for SPLINE only ’max_approx_error’). Generic parameters are set by specifying the pa-
rameter name(s) in genParamNames and the corresponding value(s) in genParamValues. The parameter
’min_num_points’ defines the minimum number of sampling points that are used for the approximation. Note that
the parameter ’min_num_points’ always refers to the full circle or ellipse, respectively, even for ARCs or elliptical
arcs, i.e., if ’min_num_points’ is set to 50 and a DXF entity of the type ARC is read that represents a semi-circle,
this semi-circle is approximated by at least 25 sampling points. The parameter ’max_approx_error’ defines the
maximum deviation of the XLD polygon from the ideal circle or ellipse, respectively (unit: pixel). For the deter-
mination of the accuracy of the approximation both criteria are evaluated. Then, the criterion that leads to the more
accurate approximation is used.
Internally, the following default values are used for the generic parameters:
’min_num_points’ = 20
’max_approx_error’ = 0.25
To achieve a more accurate approximation, either the value for ’min_num_points’ must be increased or the value
for ’max_approx_error’ must be decreased.
Note that reading a DXF file with ReadPolygonXldDxf results in exactly the same geometric information as
reading the file with ReadContourXldDxf. However, the resulting data structure is different.
Parameter
. polygons (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_poly(-array) ; HXLDPoly
Read XLD polygons.
. fileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; HTuple (string)
Name of the DXF file.
. genParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; HTuple (string)
Names of the generic parameters that can be adjusted for the DXF input.
Default Value : []
List of values : GenParamNames ∈ {"min_num_points", "max_approx_error"}
. genParamValues (input_control) . . . . . . . . . attribute.value(-array) ; HTuple (double / int / long / string)
Values of the generic parameters that can be adjusted for the DXF input.
Default Value : []
Suggested values : GenParamValues ∈ {0.1, 0.25, 0.5, 1, 2, 5, 10, 20}
. dxfStatus (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; HTuple (string)
Status information.
Result
If the parameters are correct and the file could be read the operator ReadPolygonXldDxf returns the value 2
(H_MSG_TRUE). Otherwise, an exception is raised.
Parallelization Information
ReadPolygonXldDxf is reentrant and processed without parallelization.
Possible Predecessors
WritePolygonXldDxf
See also
WritePolygonXldDxf, ReadContourXldDxf
Module
Foundation
Result
If the parameters are correct and the file could be written, the operator WriteContourXldArcInfo returns
the value 2 (H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
WriteContourXldArcInfo is reentrant and processed without parallelization.
Possible Predecessors
AffineTransContourXld
See also
ReadWorldFile, ReadContourXldArcInfo, WritePolygonXldArcInfo
Module
Foundation
DXF                           Explanation
1000  contour attributes      Meaning
1002  {                       Beginning of the value list
1070  3                       Number of attributes (here: 3)
1040  5.00434303              Value of the first attribute
1040  126.8638916             Value of the second attribute
1040  4.99164152              Value of the third attribute
1002  }                       End of the value list
The global attributes are written in the following format as extended data of each POLYLINE:
DXF                                Explanation
1000  global contour attributes    Meaning
1002  {                            Beginning of the value list
1070  5                            Number of global attributes (here: 5)
1040  0.77951831                   Value of the first global attribute
1040  0.62637949                   Value of the second global attribute
1040  103.94314575                 Value of the third global attribute
1040  0.21434096                   Value of the fourth global attribute
1040  0.21921949                   Value of the fifth global attribute
1002  }                            End of the value list
The names of the attributes are written in the following format as extended data of each POLYLINE:
DXF                                  Explanation
1000  names of contour attributes    Meaning
1002  {                              Beginning of the value list
1070  3                              Number of attribute names (here: 3)
1000  angle                          Name of the first attribute
1000  response                       Name of the second attribute
1000  edge_direction                 Name of the third attribute
1002  }                              End of the value list
The names of the global attributes are written in the following format as extended data of each POLYLINE:
DXF                                         Explanation
1000  names of global contour attributes    Meaning
1002  {                                     Beginning of the value list
1070  5                                     Number of global attribute names (here: 5)
1000  regr_norm_row                         Name of the first global attribute
1000  regr_norm_col                         Name of the second global attribute
1000  regr_dist                             Name of the third global attribute
1000  regr_mean_dist                        Name of the fourth global attribute
1000  regr_dev_dist                         Name of the fifth global attribute
1002  }                                     End of the value list
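The extended-data layout shown in the tables above can be sketched as a small generator. The function below is purely illustrative (it is not part of HALCON or of WriteContourXldDxf); it emits the group codes in the order shown: 1000 for strings, 1040 for floating-point values, 1070 for the count, and 1002 bracketing the list:

```python
def xdata_value_list(meaning, values):
    """Emit one DXF extended-data value list (as a flat list of lines)
    in the layout shown above: group code line followed by value line."""
    lines = ["1000", meaning, "1002", "{", "1070", str(len(values))]
    for v in values:
        # floats use group code 1040, strings use group code 1000
        code = "1040" if isinstance(v, float) else "1000"
        lines += [code, str(v)]
    return lines + ["1002", "}"]
```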
Parameter
Result
If the parameters are correct and the file could be written, the operator WritePolygonXldArcInfo returns
the value 2 (H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
WritePolygonXldArcInfo is reentrant and processed without parallelization.
Possible Predecessors
AffineTransPolygonXld
See also
ReadWorldFile, ReadPolygonXldArcInfo, WriteContourXldArcInfo
Module
Foundation
Filter
3.1 Arithmetic
static void HOperatorSet.AbsImage ( HObject image,
out HObject imageAbs )
HImage HImage.AbsImage ( )
Calculate the absolute value (modulus) of an image.
The operator AbsImage calculates the absolute gray values of images of any type and stores the result in
imageAbs. The power spectrum of complex images is calculated as a ’real’ image. The operator AbsImage
generates a logical copy of unsigned images.
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Image(s) for which the absolute gray values are to be calculated.
. imageAbs (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Result image(s).
Example (Syntax: HDevelop)
Result
The operator AbsImage returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no input
images available) is set via the operator SetSystem(’no_object_result’,<Result>).
Parallelization Information
AbsImage is reentrant and automatically parallelized (on tuple level, channel level, domain level).
See also
ConvertImageType, PowerByte
Module
Foundation
The operator AddImage adds two images. The gray values (g1, g2) of the input images (image1 and image2)
are transformed as follows:

g’ := (g1 + g2) ∗ mult + add

If an overflow or an underflow occurs, the values are clipped. An exception are int2 images with mult equal
to 1 and add equal to 0: in this case the underflow and overflow check is skipped to reduce the runtime. The
resulting image is stored in imageResult.
It is possible to add byte images with int2, uint2 or int4 images and to add int4 to int2 or uint2 images. In this case
the result will be of type int2 or int4 respectively.
Several images can be processed in one call. In this case both input parameters contain the same number of images
which are then processed in pairs. An output image is generated for every pair.
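As an illustration, the addition g’ = (g1 + g2) ∗ mult + add with clipping can be sketched in NumPy. This is a simplified model for byte images only, not HALCON code; the defaults chosen here are arbitrary:

```python
import numpy as np

def add_image(image1, image2, mult=1.0, add=0.0):
    """Sketch of the AddImage arithmetic for byte images:
    g' = (g1 + g2) * mult + add, clipped to the range 0..255."""
    result = (image1.astype(np.float64) + image2) * mult + add
    return np.clip(result, 0, 255).astype(np.uint8)
```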
Please note that the runtime of the operator varies with different control parameters. For frequently used combina-
tions special optimizations are used. Additionally, for byte, int2, uint2, and int4 images special optimizations are
implemented that use SIMD technology. The actual application of these special optimizations is controlled by the
system parameter ’mmx_enable’ (see SetSystem). If ’mmx_enable’ is set to ’true’ (and the SIMD instruction
set is available), the internal calculations are performed using SIMD technology.
Attention
Note that SIMD technology performs best on large, compact input regions. Depending on the input region and
the capabilities of the hardware the execution of AddImage might even take significantly more time with
SIMD technology than without. In this case, the use of SIMD technology can be avoided by SetSystem
(’mmx_enable’,’false’).
Parameter

Example (Syntax: HDevelop)

read_image(Image0,"fabrik")
disp_image(Image0,WindowHandle)
read_image(Image1,"Affe")
disp_image(Image1,WindowHandle)
add_image(Image0,Image1,Result,2.0,10.0)
disp_image(Result,WindowHandle)
Result
The operator AddImage returns the value 2 (H_MSG_TRUE) if the parameters are correct. The
behavior in case of empty input (no input images available) is set via the operator SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
AddImage is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
SubImage, MultImage
See also
SubImage, MultImage
Module
Foundation
Example (Syntax: HDevelop)

read_image(Image0,"fabrik")
disp_image(Image0,WindowHandle)
read_image(Image1,"Affe")
disp_image(Image1,WindowHandle)
div_image(Image0,Image1,Result,2.0,10.0)
disp_image(Result,WindowHandle)
Result
The operator DivImage returns the value 2 (H_MSG_TRUE) if the parameters are correct. The
behavior in case of empty input (no input images available) is set via the operator SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
DivImage is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
AddImage, SubImage, MultImage
See also
AddImage, SubImage, MultImage
Module
Foundation
HImage HImage.InvertImage ( )
Invert an image.
The operator InvertImage inverts the gray values of an image. For images of the ’byte’ and ’cyclic’ type the
result is calculated as:
g’ = 255 − g
In the case of signed types the values are negated. The resulting image has the same pixel type as the input image.
Several images can be processed in one call. An output image is generated for every input image.
Parameter
Example (Syntax: HDevelop)

read_image(Orig,"fabrik")
invert_image(Orig,Invert)
disp_image(Invert,WindowHandle)
Parallelization Information
InvertImage is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
Watersheds
Alternatives
ScaleImage
See also
ScaleImage, AddImage, SubImage
Module
Foundation
Example (Syntax: HDevelop)

read_image(Bild1,"affe")
read_image(Bild2,"fabrik")
max_image(Bild1,Bild2,Max)
disp_image(Max,WindowHandle)
Result
If the parameter values are correct, the operator MaxImage returns the value 2 (H_MSG_TRUE). The
behavior in case of empty input (no input images available) is set via the operator SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
MaxImage is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
MinImage
See also
MinImage
Module
Foundation
Parameter
g’ := g1 ∗ g2 ∗ mult + add
Example (Syntax: HDevelop)

read_image(Image0,"fabrik")
disp_image(Image0,WindowHandle)
read_image(Image1,"Affe")
disp_image(Image1,WindowHandle)
mult_image(Image0,Image1,Result,2.0,10.0)
disp_image(Result,WindowHandle)
Result
The operator MultImage returns the value 2 (H_MSG_TRUE) if the parameters are correct. The
behavior in case of empty input (no input images available) is set via the operator SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
MultImage is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
AddImage, SubImage, DivImage
See also
AddImage, SubImage, DivImage
Module
Foundation
g’ := g ∗ mult + add

For example, the gray values can be spread over the full range [0, 255] with:

mult = 255 / (GMax − GMin)
add = −mult ∗ GMin

The values for GMin and GMax can be determined, e.g., with the operator MinMaxGray.
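The full-range scaling g’ = g ∗ mult + add with mult = 255 / (GMax − GMin) and add = −mult ∗ GMin can be sketched as plain Python (an illustration of the formulas, not HALCON code):

```python
def full_range_scaling(gmin, gmax):
    """Return (mult, add) that map the gray value range [gmin, gmax]
    of a byte image onto the full range [0, 255]."""
    mult = 255.0 / (gmax - gmin)
    add = -mult * gmin
    return mult, add

def scale_gray(g, mult, add):
    # g' := g * mult + add
    return g * mult + add
```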
Please note that the runtime of the operator varies with different control parameters. For frequently used combi-
nations special optimizations are used. Additionally, special optimizations are implemented that use fixed point
arithmetic (for int2 and uint2 images), and further optimizations that use SIMD technology (for byte, int2, and uint2
images). The actual application of these special optimizations is controlled by the system parameters ’int_zooming’
and ’mmx_enable’ (see SetSystem). If ’int_zooming’ is set to ’true’, the internal calculation is performed us-
ing fixed point arithmetic, leading to much shorter execution times. However, the accuracy of the transformed
gray values is slightly lower in this mode. The difference to the more accurate calculation (using ’int_zooming’
= ’false’) is typically less than two gray levels. If ’mmx_enable’ is set to ’true’(and the SIMD instruction set is
available), the internal calculations are performed using fixed point arithmetic and SIMD technology. In this case
the setting of ’int_zooming’ is ignored.
Attention
Note that SIMD technology performs best on large, compact input regions. Depending on the input region and
the capabilities of the hardware the execution of ScaleImage might even take significantly more time with
SIMD technology than without. In this case, the use of SIMD technology can be avoided by SetSystem
(’mmx_enable’,’false’).
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Image(s) whose gray values are to be scaled.
. imageScaled (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Scaled result image(s).
. mult (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double / int / long)
Scale factor.
Default Value : 0.01
Suggested values : Mult ∈ {0.001, 0.003, 0.005, 0.008, 0.01, 0.02, 0.03, 0.05, 0.08, 0.1, 0.5, 1.0}
Minimum Increment : 0.001
Recommended Increment : 0.1
. add (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double / int / long)
Offset.
Default Value : 0
Suggested values : Add ∈ {0, 10, 50, 100, 200, 500}
Minimum Increment : 0.01
Recommended Increment : 1.0
Example (Syntax: HDevelop)
Result
The operator ScaleImage returns the value 2 (H_MSG_TRUE) if the parameters are correct. The
behavior in case of empty input (no input images available) is set via the operator SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
ScaleImage is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
MinMaxGray
Alternatives
MultImage, AddImage, SubImage
See also
MinMaxGray
Module
Foundation
HImage HImage.SqrtImage ( )
Calculate the square root of an image.
SqrtImage calculates the square root of the input image image and stores the result in the image sqrtImage
of the same pixel type. If image is of a signed pixel type, negative pixel values are mapped
to zero in sqrtImage.
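The per-pixel behavior can be sketched as follows (illustrative only; HALCON applies it to whole images of the respective pixel type):

```python
import math

def sqrt_pixel(g):
    """Sketch of SqrtImage per pixel: negative values of signed
    pixel types are mapped to zero."""
    return math.sqrt(g) if g > 0 else 0.0
```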
Parameter
Example (Syntax: HDevelop)

read_image(Image0,"fabrik")
disp_image(Image0,WindowHandle)
read_image(Image1,"Affe")
disp_image(Image1,WindowHandle)
sub_image(Image0,Image1,Result,2.0,10.0)
disp_image(Result,WindowHandle)
Result
The operator SubImage returns the value 2 (H_MSG_TRUE) if the parameters are correct. The
behavior in case of empty input (no input images available) is set via the operator SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
SubImage is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
DualThreshold
Alternatives
MultImage, AddImage
See also
AddImage, MultImage, DynThreshold, CheckDifference
Module
Foundation
3.2 Bit
static void HOperatorSet.BitAnd ( HObject image1, HObject image2,
out HObject imageAnd )
Example (Syntax: HDevelop)

read_image(Image0,'affe')
disp_image(Image0,WindowHandle)
read_image(Image1,'fabrik')
disp_image(Image1,WindowHandle)
bit_and(Image0,Image1,ImageBitA)
disp_image(ImageBitA,WindowHandle)
Result
If the images are correct (type and number), the operator BitAnd returns the value 2 (H_MSG_TRUE).
The behavior in case of empty input (no input images available) is set via the operator SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
BitAnd is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
BitMask, AddImage, MaxImage
See also
BitMask, AddImage, MaxImage
Module
Foundation
Example (Syntax: C)

read_image(&ByteImage,"fabrik");
convert_image_type(ByteImage,&Int2Image,"int2");
bit_lshift(Int2Image,&FullInt2Image,8);
Result
If the images are correct (type) and shift has a valid value, the operator BitLshift returns the value
2 (H_MSG_TRUE). The behavior in case of empty input (no input images available) is set via the operator
SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
BitLshift is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
ScaleImage
See also
BitRshift
Module
Foundation
HImage HImage.BitNot ( )
Complement all bits of the pixels.
The operator BitNot calculates the “complement” of all pixels of the input image bit by bit. The semantics of
the “complement” operation corresponds to that of C (“∼”) for the respective types (signed char, unsigned char,
short, unsigned short, int/long). Only the pixels within the definition range of the image are processed.
Several images can be processed in one call. An output image is generated for every input image.
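For byte pixels the "complement" described above amounts to 255 − g; the following sketch (illustrative, not HALCON code) mimics C's "~" on an unsigned 8-bit value:

```python
def bit_not_byte(g):
    """Sketch of BitNot for byte pixels: C's '~' restricted to
    8 bits, which equals 255 - g."""
    return ~g & 0xFF
```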
Parameter
Example (Syntax: HDevelop)

read_image(Image0,'affe')
disp_image(Image0,WindowHandle)
bit_not(Image0,ImageBitN)
disp_image(ImageBitN,WindowHandle)
Result
If the images are correct (type), the operator BitNot returns the value 2 (H_MSG_TRUE). The
behavior in case of empty input (no input images available) is set via the operator SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
BitNot is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
BitOr, BitAnd, AddImage
See also
BitSlice, BitMask
Module
Foundation
Example (Syntax: HDevelop)

read_image(Image0,'affe')
disp_image(Image0,WindowHandle)
read_image(Image1,'fabrik')
disp_image(Image1,WindowHandle)
bit_or(Image0,Image1,ImageBitO)
disp_image(ImageBitO,WindowHandle)
Result
If the images are correct (type and number), the operator BitOr returns the value 2 (H_MSG_TRUE).
The behavior in case of empty input (no input images available) is set via the operator SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
BitOr is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
BitAnd, AddImage
See also
BitXor, BitAnd
Module
Foundation
Example (Syntax: C)

bit_rshift(Int2Image,&ReducedInt2Image,8);
convert_image_type(ReducedInt2Image,&ByteImage,"byte");
Result
If the images are correct (type) and shift has a valid value, the operator BitRshift returns the value
2 (H_MSG_TRUE). The behavior in case of empty input (no input images available) is set via the operator
SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
BitRshift is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
ScaleImage
See also
BitLshift
Module
Foundation
Example (Syntax: C)

read_image(&ByteImage,"fabrik");
for (bit=1; bit<=8; bit++)
{
  bit_slice(ByteImage,&Slice,bit);
  threshold(Slice,&Region,0,255);
  disp_region(Region,WindowHandle);
  clear_obj(Slice); clear_obj(Region);
}
Result
If the images are correct (type) and bit has a valid value, the operator BitSlice returns the value 2
(H_MSG_TRUE). The behavior in case of empty input (no input images available) is set via the operator
SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
BitSlice is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
Threshold, BitOr
Alternatives
BitMask
See also
BitAnd, BitLshift
Module
Foundation
Example (Syntax: HDevelop)

read_image(Image0,'affe')
disp_image(Image0,WindowHandle)
read_image(Image1,'fabrik')
disp_image(Image1,WindowHandle)
bit_xor(Image0,Image1,ImageBitX)
disp_image(ImageBitX,WindowHandle)
Result
If the parameter values are correct, the operator BitXor returns the value 2 (H_MSG_TRUE). The behavior
in case of empty input (no input images available) can be determined by the operator SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
BitXor is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
BitOr, BitAnd, AddImage
See also
BitOr, BitAnd
Module
Foundation
3.3 Color
static void HOperatorSet.CfaToRgb ( HObject CFAImage,
out HObject RGBImage, HTuple CFAType, HTuple interpolation )
CfaToRgb converts a single-channel color filter array image into an RGB image. Hence, the operator
CfaToRgb is normally used if the images are not being grabbed using the HALCON frame grabber interface
(GrabImage or GrabImageAsync), but are grabbed using function calls from the frame grabber SDK, and
are passed to HALCON using GenImage1 or GenImage1Extern.
In single-chip CCD cameras, a color filter array in front of the sensor provides (subsampled) color information.
The most frequently used filter is the so-called Bayer filter. The color filter array has the following layout in this
case:
G B G B G B ···
R G R G R G ···
G B G B G B ···
R G R G R G ···
···
Each gray value of the input image CFAImage corresponds to the brightness of the pixel behind the corresponding
color filter. Hence, in the above layout, the pixel (0,0) corresponds to a green color value, while the pixel (0,1)
corresponds to a blue color value. The layout of the Bayer filter is completely determined by the first two elements
of the first row of the image, and can be chosen with the parameter CFAType. In particular, this enables the correct
conversion of color filter array images that have been cropped out of a larger image (e.g., using CropPart or
CropRectangle1). The algorithm that is used to interpolate the RGB values is determined by the parameter
interpolation. Currently, the only possible choice is ’bilinear’.
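The mapping from pixel position to filter color for the layout shown above can be sketched as follows (an illustration of the described layout, not of HALCON's interpolation; only this one layout is modeled here):

```python
def bayer_color(row, col):
    """Which filter color covers pixel (row, col) in the layout shown
    above: first row G B G B ..., second row R G R G ..."""
    pattern = [["g", "b"], ["r", "g"]]
    return pattern[row % 2][col % 2]
```

This reproduces the statement in the text that pixel (0,0) corresponds to a green color value and pixel (0,1) to a blue one.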
Parameter
Compute the transformation matrix of the principal component analysis of multichannel images.
GenPrincipalCompTrans computes the transformation matrix of a principal components analysis of mul-
tichannel images. This is useful for images obtained, e.g., with the thematic mapper of the Landsat satellite.
Because the spectral bands are highly correlated, it is desirable to transform them to uncorrelated images. This
saves storage, since the bands containing little information can be discarded, and it is also useful with respect to a
later classification step.
The operator GenPrincipalCompTrans takes one or more multichannel images multichannelImage
and computes the transformation matrix trans for the principal components analysis, as well as its inverse
transInv. All input images must have the same number of channels. The principal components analysis is
performed based on the collection of data of all images. Hence, GenPrincipalCompTrans facilitates using
the statistics of multiple images.
If n is the number of channels, trans and transInv are matrices of dimension n × (n + 1), which describe an
affine transformation of the multichannel gray values. They can be used to transform a multichannel image with
LinearTransColor. For information purposes, the mean gray value of the channels and the n × n covariance
matrix of the channels are returned in mean and cov, respectively. The parameter infoPerComp contains the
relative information content of each output channel.
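The quantities described above can be sketched in NumPy. This is an illustrative eigendecomposition-based construction under the usual PCA assumptions, not HALCON's implementation; input is assumed to be the collected pixel data as a (num_pixels, n) array:

```python
import numpy as np

def principal_comp_trans(pixels):
    """Sketch of a PCA transformation matrix for n-channel pixel data.
    Returns an n x (n+1) affine matrix mapping pixels to decorrelated
    components, the channel mean, the covariance matrix, and the
    relative information content per component."""
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]          # largest variance first
    eigval, eigvec = eigval[order], eigvec[:, order]
    # affine part: y = E^T x - E^T mean, stored as [E^T | -E^T mean]
    trans = np.hstack([eigvec.T, (-eigvec.T @ mean)[:, None]])
    info = eigval / eigval.sum()
    return trans, mean, cov, info
```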
Parameter
. multichannelImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . multichannel-image(-array) ; HImage
Multichannel input image.
. trans (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Transformation matrix for the computation of the PCA.
. transInv (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Transformation matrix for the computation of the inverse PCA.
. mean (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Mean gray value of the channels.
. cov (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real-array ; HTuple (double)
Covariance matrix of the channels.
. infoPerComp (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Information content of the transformed channels.
Result
The operator GenPrincipalCompTrans returns the value 2 (H_MSG_TRUE) if the parameters are correct.
Otherwise an exception is raised.
Parallelization Information
GenPrincipalCompTrans is reentrant and processed without parallelization.
Possible Successors
LinearTransColor
Alternatives
PrincipalComp
Module
Foundation
LinearTransColor performs an affine transformation of the color values of the multichannel image image
and returns the result in imageTrans. The affine transformation of the color values is described by the
transformation matrix transMat. If n is the number of channels in image, transMat is a homogeneous
n × (n + 1) matrix that is stored row by row. Homogeneous means that the left n × n submatrix of transMat
describes a linear transformation of the color values, while the last column of transMat describes a constant offset of the color
values. The transformation matrix is typically computed with GenPrincipalCompTrans. It can, however,
also be specified directly. For example, a transformation from RGB to YIQ, which is described by the following
transformation:

Y = 0.299 · R + 0.587 · G + 0.144 · B
I = 0.595 · R − 0.276 · G − 0.333 · B + 128
Q = 0.209 · R − 0.522 · G + 0.287 · B + 128

corresponds to the tuple

[0.299, 0.587, 0.144, 0.0, 0.595, −0.276, −0.333, 128.0, 0.209, −0.522, 0.287, 128.0]
Here, it should be noted that the above transformation is unnormalized, i.e., the resulting color values can lie
outside the range [0, 255]. The transformation ’yiq’ in TransFromRgb additionally scales the rows of the matrix
(except for the constant offset) appropriately.
To avoid a loss of information, LinearTransColor returns an image of type real. If a different image type is
desired, the image can be transformed with ConvertImageType.
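The row-by-row storage of the homogeneous matrix can be sketched in NumPy (an illustration of the described data layout, not HALCON code; the image is assumed to be a float array of shape (height, width, n)):

```python
import numpy as np

def linear_trans_color(image, trans_mat):
    """Sketch of the affine color transform: trans_mat is the
    homogeneous n x (n+1) matrix stored row by row as a flat tuple
    (like the YIQ tuple above).  Returns a float ('real') image."""
    n = image.shape[2]
    m = np.asarray(trans_mat, dtype=float).reshape(n, n + 1)
    linear, offset = m[:, :n], m[:, n]   # n x n submatrix and offset column
    return image @ linear.T + offset
```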
Parameter
The operator PrincipalComp takes a (multichannel) image multichannelImage and transforms it to the
output image PCAImage, which contains the same number of channels, using the principal components analysis.
The parameter infoPerComp contains the relative information content of each output channel.
Parameter
. multichannelImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . multichannel-image ; HImage
Multichannel input image.
. PCAImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . multichannel-image ; HImage
Multichannel output image.
. infoPerComp (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Information content of each output channel.
Result
The operator PrincipalComp returns the value 2 (H_MSG_TRUE) if the parameters are correct. Otherwise an
exception is raised.
Parallelization Information
PrincipalComp is reentrant and processed without parallelization.
Alternatives
GenPrincipalCompTrans
See also
LinearTransColor
Module
Foundation
HImage HImage.Rgb1ToGray ( )
Transform an RGB image into a gray scale image.
Rgb1ToGray transforms an RGB image into a gray scale image. The three channels of the RGB image are passed
as the first three channels of the input image. The image is transformed according to the following formula:
gray = 0.299 · red + 0.587 · green + 0.114 · blue
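A minimal sketch of this conversion (plain Python, not the HALCON API; the 0.299/0.587/0.114 weights are the standard luminance weights and are assumed here to match the operator's behavior):

```python
def rgb1_to_gray(r, g, b):
    """Luminance-weighted RGB -> gray conversion for one pixel."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# The weights sum to 1, so pure white keeps its full brightness.
gray = rgb1_to_gray(255, 255, 255)
```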
Parameter
. RGBImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Three-channel RGB image.
. grayImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Gray scale image.
Parallelization Information
Rgb1ToGray is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
Compose3
Alternatives
TransFromRgb, Rgb3ToGray
Module
Foundation
Transform 3 images of an RGB image into a gray image.
Rgb3ToGray transforms the three channels of an RGB image, passed as three separate images, into a gray scale
image.
Parameter
Parallelization Information
Rgb3ToGray is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
Decompose3
Alternatives
Rgb1ToGray, TransFromRgb
Module
Foundation
Transform an image from the RGB color space to an arbitrary color space.
TransFromRgb transforms an image from the RGB color space to an arbitrary color space (colorSpace).
The three channels of the image are passed as three separate images on input and output.
The operator TransFromRgb supports the image types byte, uint2, int4, and real. In the case of int4 images, the
images should not contain negative values. In the case of real images, all values should lie between 0 and 1. If not,
the results of the transformation may not be reasonable.
Certain scalings are performed according to the image type:
• Considering byte and uint2 images, the domain of color space values is generally mapped to the full domain
of [0..255] or [0..65535], respectively. Because of this, the origin of signed values (e.g., CIELab or YIQ) may
not be at the center of the domain.
• Hue values are represented by angles of [0..2π] and are coded for the particular image types differently:
– byte-images map the angle domain on [0..255].
– uint2/int4-images are coded in angle minutes [0..21600].
– real-images are coded in radians [0..2π] .
• Saturation values are represented by percentages of [0..100] and are coded for the particular image type
differently:
– byte-images map the saturation values to [0..255].
– uint2/int4-images map the saturation values to [0..10000].
– real-images map the saturation values to [0..1].
’yiq’
Range of values:
Y ∈ [0; 1.03], I ∈ [−0.609; 0.595], Q ∈ [−0.522; 0.496]
’yuv’
Range of values:
Y ∈ [0; 1], U ∈ [−0.436; 0.436], V ∈ [−0.615; 0.496]
’argyb’
  [ A  ]   [ 0.30    0.59    0.11 ] [ R ]
  [ Rg ] = [ 0.50   −0.50    0.00 ] [ G ]
  [ Yb ]   [ 0.25    0.25   −0.50 ] [ B ]
Range of values:
A ∈ [0; 1], Rg ∈ [−0.5; 0.5], Yb ∈ [−0.5; 0.5]
’ciexyz’
  [ X ]   [ 0.412453   0.357580   0.180423 ] [ R ]
  [ Y ] = [ 0.212671   0.715160   0.072169 ] [ G ]
  [ Z ]   [ 0.019334   0.119193   0.950227 ] [ B ]
The primary colors used correspond to sRGB and CIE Rec. 709, respectively. D65 is used as the white point.
Used primary colors (x, y):
red := (0.6400, 0.3300), green := (0.3000, 0.6000), blue := (0.1500, 0.0600), white65 := (0.3127, 0.3290)
Range of values:
X ∈ [0; 0.950456], Y ∈ [0; 1], Z ∈ [0; 1.088754]
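The ’ciexyz’ transform is a plain matrix product; a sketch (not the HALCON API) showing that the RGB white point (1, 1, 1) maps exactly to the upper bounds of the value ranges given above:

```python
# RGB -> XYZ matrix of the 'ciexyz' transform (sRGB/Rec. 709 primaries,
# D65 white point), as listed in the reference.
CIEXYZ = [
    [0.412453, 0.357580, 0.180423],
    [0.212671, 0.715160, 0.072169],
    [0.019334, 0.119193, 0.950227],
]

def rgb_to_xyz(r, g, b):
    """Apply the 'ciexyz' matrix to one (R, G, B) triple of real values."""
    return tuple(m[0] * r + m[1] * g + m[2] * b for m in CIEXYZ)

# White (1, 1, 1): each output is the corresponding row sum of the matrix.
x, y, z = rgb_to_xyz(1.0, 1.0, 1.0)
```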
’hsi’
  [ M1 ]   [ 2/√6   −1/√6   −1/√6 ] [ R ]
  [ M2 ] = [ 0       1/√2   −1/√2 ] [ G ]
  [ I1 ]   [ 1/√3    1/√3    1/√3 ] [ B ]

  H = arctan(M2 / M1)
  S = √(M1² + M2²)
  I = I1 / √3

Range of values:
H ∈ [0; 2π], S ∈ [0; √(2/3)], I ∈ [0; 1]
’ihs’
I = (R + G + B) / 3
min = min(R, G, B)
if (I == 0)
    H = 0
    S = 1
else
    S = 1 - min / I
    if (S == 0)
        H = 0
    else
        A = (R + R - G - B) / 2
        T = (R - G) * (R - G) + (R - B) * (G - B)
        C = sqrt(T)
        if (C == 0)
            H = 0
        else
            H = acos(A / C)
        fi
        if (B > G)
            H = 2 * pi - H
        fi
    fi
fi
Range of values:
I ∈ [0; 1], H ∈ [0; 2π], S ∈ [0; 1]
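The pseudocode above translates to Python roughly as follows (a sketch, not the HALCON implementation; it assumes I is the channel mean and min the smallest channel value, and it renames the temporary that would otherwise shadow the blue channel B):

```python
from math import acos, sqrt, pi

def rgb_to_ihs(r, g, b):
    """RGB (values in [0, 1]) -> (I, H, S) following the pseudocode above."""
    i = (r + g + b) / 3.0
    lo = min(r, g, b)
    if i == 0:
        return 0.0, 0.0, 1.0
    s = 1.0 - lo / i
    if s == 0:
        return i, 0.0, s
    a = (r + r - g - b) / 2.0
    t = (r - g) * (r - g) + (r - b) * (g - b)
    c = sqrt(t)
    h = 0.0 if c == 0 else acos(a / c)
    if b > g:                 # original blue channel decides the half-circle
        h = 2 * pi - h
    return i, h, s

# Saturated red: intensity 1/3, hue 0, full saturation.
i, h, s = rgb_to_ihs(1.0, 0.0, 0.0)
```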
’cielab’
  [ X ]   [ 0.412453   0.357580   0.180423 ] [ R ]
  [ Y ] = [ 0.212671   0.715160   0.072169 ] [ G ]
  [ Z ]   [ 0.019334   0.119193   0.950227 ] [ B ]

  L = 116 · f(Y/Yw) − 16
  a = 500 · (f(X/Xw) − f(Y/Yw))
  b = 200 · (f(Y/Yw) − f(Z/Zw))

where
  f(t) = t^(1/3)                     for t > (24/116)³
  f(t) = (841/108) · t + 16/116      otherwise

Black point B:
(Rb, Gb, Bb) = (0, 0, 0)
White point W = (Rw, Gw, Bw), according to image type:
Wbyte = (255, 255, 255), Wuint2 = (2^16 − 1, 2^16 − 1, 2^16 − 1),
Wint4 = (2^31 − 1, 2^31 − 1, 2^31 − 1), Wreal = (1.0, 1.0, 1.0)
Range of values:
L ∈ [0; 100], a ∈ [−86.1813; 98.2352], b ∈ [−107.8617; 94.4758]
(Scaled to the maximum gray value in the case of byte and uint2. In the case of int4 L and a are scaled
to the maximum gray value, b is scaled to the minimum gray value, such that the origin stays at 0.)
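For real images the ’cielab’ formulas can be sketched as follows (plain Python, not the HALCON API; (Xw, Yw, Zw) are obtained by transforming the RGB white point (1, 1, 1), which is this sketch's reading of the definition above):

```python
def f(t):
    # piecewise function from the CIELab definition
    if t > (24.0 / 116.0) ** 3:
        return t ** (1.0 / 3.0)
    return 841.0 / 108.0 * t + 16.0 / 116.0

def rgb_to_lab(r, g, b, rgb_white=(1.0, 1.0, 1.0)):
    """'cielab' transform for real images with white point (1, 1, 1)."""
    def to_xyz(r, g, b):
        return (0.412453 * r + 0.357580 * g + 0.180423 * b,
                0.212671 * r + 0.715160 * g + 0.072169 * b,
                0.019334 * r + 0.119193 * g + 0.950227 * b)
    x, y, z = to_xyz(r, g, b)
    xw, yw, zw = to_xyz(*rgb_white)
    L = 116.0 * f(y / yw) - 16.0
    a = 500.0 * (f(x / xw) - f(y / yw))
    b2 = 200.0 * (f(y / yw) - f(z / zw))
    return L, a, b2

# White maps to L = 100 with a = b = 0; black maps to L = a = b = 0.
L, a, b = rgb_to_lab(1.0, 1.0, 1.0)
```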
’i1i2i3’
  [ I1 ]   [  0.333   0.333   0.333 ] [ R ]
  [ I2 ] = [  1.0     0.0    −1.0   ] [ G ]
  [ I3 ]   [ −0.5     1.0    −0.5   ] [ B ]
Range of values:
I1 ∈ [0; 1], I2 ∈ [−1; 1], I3 ∈ [−1; 1]
’ciexyz2’
  [ X ]   [ 0.620   0.170   0.180 ] [ R ]
  [ Y ] = [ 0.310   0.590   0.110 ] [ G ]
  [ Z ]   [ 0.000   0.066   1.020 ] [ B ]
Range of values:
X ∈ [0; 0.970], Y ∈ [0; 1.010], Z ∈ [0; 1.086]
’ciexyz3’
  [ X ]   [ 0.618   0.177   0.205 ] [ R ]
  [ Y ] = [ 0.299   0.587   0.114 ] [ G ]
  [ Z ]   [ 0.000   0.056   0.944 ] [ B ]
Range of values:
X ∈ [0; 1], Y ∈ [0; 1], Z ∈ [0; 1]
’ciexyz4’
  [ X ]   [ 0.476   0.299   0.175 ] [ R ]
  [ Y ] = [ 0.262   0.656   0.082 ] [ G ]
  [ Z ]   [ 0.020   0.161   0.909 ] [ B ]
Used primary colors (x, y, z):
red := (0.628, 0.346, 0.026), green := (0.268, 0.588, 0.144), blue := (0.150, 0.070, 0.780),
white65 := (0.313, 0.329, 0.358)
Range of values:
X ∈ [0; 0.951], Y ∈ [0; 1], Z ∈ [0; 1.088]
Parameter
. imageRed (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Input image (red channel).
. imageGreen (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Input image (green channel).
. imageBlue (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Input image (blue channel).
. imageResult1 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Color-transformed output image (channel 1).
. imageResult2 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Color-transformed output image (channel 2).
. imageResult3 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Color-transformed output image (channel 3).
. colorSpace (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Color space of the output image.
Default Value : "hsv"
List of values : ColorSpace ∈ {"cielab", "hsv", "hsi", "yiq", "yuv", "argyb", "ciexyz", "ciexyz2",
"ciexyz3", "ciexyz4", "hls", "ihs", "i1i2i3"}
Result
TransFromRgb returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behaviour can
be set via SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
TransFromRgb is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
Decompose3
Possible Successors
Compose3
Alternatives
Rgb1ToGray, Rgb3ToGray
See also
TransToRgb
Module
Foundation
Transform an image from an arbitrary color space to the RGB color space.
TransToRgb transforms an image from an arbitrary color space (colorSpace) to the RGB color space. The
three channels of the image are passed as three separate images on input and output.
The operator TransToRgb supports the image types byte, uint2, int4, and real. The domain of the input images
must match the domain provided by a corresponding transformation with TransFromRgb. If not, the results of
the transformation may not be reasonable.
This includes some scalings in the case of certain image types and transformations:
• Considering byte and uint2 images, the domain of color space values is expected to be spread to the full
domain of [0..255] or [0..65535], respectively. This includes a shift in the case of signed values, such that the
origin of signed values (e.g., CIELab or YIQ) may not be at the center of the domain.
• Hue values are represented by angles of [0..2π] and are coded for the particular image types differently:
– byte-images map the angle domain on [0..255].
– uint2/int4-images are coded in angle minutes [0..21600].
– real-images are coded in radians [0..2π] .
• Saturation values are represented by percentages of [0..100] and are coded for the particular image type
differently:
– byte-images map the saturation values to [0..255].
– uint2/int4-images map the saturation values to [0..10000].
– real-images map the saturation values to [0..1].
’yiq’
Domain:
Y ∈ [0; 1.03], I ∈ [−0.609; 0.595], Q ∈ [−0.522; 0.496]
’argyb’
  [ R ]   [ 1.00    1.29    0.22 ] [ A  ]
  [ G ] = [ 1.00   −0.71    0.22 ] [ Rg ]
  [ B ]   [ 1.00    0.29   −1.78 ] [ Yb ]
Domain:
A ∈ [0; 1], Rg ∈ [−0.5; 0.5], Yb ∈ [−0.5; 0.5]
’ciexyz’
  [ R ]   [  3.240479   −1.53715    −0.498535 ] [ X ]
  [ G ] = [ −0.969256    1.875991    0.041556 ] [ Y ]
  [ B ]   [  0.055648   −0.204043    1.057311 ] [ Z ]
The primary colors used correspond to sRGB and CIE Rec. 709, respectively. D65 is used as the white point.
Used primary colors (x, y):
red := (0.6400, 0.3300), green := (0.3000, 0.6000), blue := (0.1500, 0.0600), white65 := (0.3127, 0.3290)
Domain:
X ∈ [0; 0.950456], Y ∈ [0; 1], Z ∈ [0; 1.088754]
’cielab’
  fy = (L + 16)/116
  fx = a/500 + fy
  fz = fy − b/200

  X = Xw · fx³                       for fx > 24/116
  X = (fx − 16/116) · Xw · 108/841   otherwise
  Y = Yw · fy³                       for fy > 24/116
  Y = (fy − 16/116) · Yw · 108/841   otherwise
  Z = Zw · fz³                       for fz > 24/116
  Z = (fz − 16/116) · Zw · 108/841   otherwise

  [ R ]   [  3.240479   −1.53715    −0.498535 ] [ X ]
  [ G ] = [ −0.969256    1.875991    0.041556 ] [ Y ]
  [ B ]   [  0.055648   −0.204043    1.057311 ] [ Z ]

Black point B:
(Rb, Gb, Bb) = (0, 0, 0)
White point W = (Rw, Gw, Bw), according to image type:
Wbyte = (255, 255, 255), Wuint2 = (2^16 − 1, 2^16 − 1, 2^16 − 1),
Wint4 = (2^31 − 1, 2^31 − 1, 2^31 − 1), Wreal = (1.0, 1.0, 1.0)
Domain:
L ∈ [0; 100], a ∈ [−94.3383; 90.4746], b ∈ [−101.3636; 84.4473]
(Scaled to the maximum gray value in the case of byte and uint2. In the case of int4, L and a are scaled
to the maximum gray value, b is scaled to the minimum gray value, such that the origin stays at 0.)
’hls’
Hi = integer(H * 6)
Hf = fraction(H * 6)
if (L <= 0.5)
max = L * (S + 1)
else
max = L + S - (L * S)
fi
min = 2 * L - max
if (S == 0)
R = L
G = L
B = L
else
if (Hi == 0)
R = max
G = min + Hf * (max - min)
B = min
elif (Hi == 1)
R = min + (1 - Hf) * (max - min)
G = max
B = min
elif (Hi == 2)
R = min
G = max
B = min + Hf * (max - min)
elif (Hi == 3)
R = min
G = min + (1 - Hf) * (max - min)
B = max
elif (Hi == 4)
R = min + Hf * (max - min)
G = min
B = max
elif (Hi == 5)
R = max
G = min
B = min + (1 - Hf) * (max - min)
fi
fi
Domain:
H ∈ [0; 2π], L ∈ [0; 1], S ∈ [0; 1]
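The ’hls’ pseudocode can be sketched in Python as follows (not the HALCON implementation; it assumes H in [0, 2π) is first normalized so that "H * 6" selects one of six hue sectors):

```python
from math import pi, floor

def hls_to_rgb(h, l, s):
    """'hls' -> RGB following the pseudocode above, h in [0, 2*pi)."""
    if s == 0:
        return l, l, l
    h6 = (h / (2 * pi)) * 6.0          # assumed normalization of the hue
    hi, hf = int(floor(h6)), h6 - floor(h6)
    mx = l * (s + 1) if l <= 0.5 else l + s - l * s
    mn = 2 * l - mx
    ramps = {                          # one (R, G, B) ramp per hue sector
        0: (mx, mn + hf * (mx - mn), mn),
        1: (mn + (1 - hf) * (mx - mn), mx, mn),
        2: (mn, mx, mn + hf * (mx - mn)),
        3: (mn, mn + (1 - hf) * (mx - mn), mx),
        4: (mn + hf * (mx - mn), mn, mx),
        5: (mx, mn, mn + (1 - hf) * (mx - mn)),
    }
    return ramps[hi]

# Hue 0, mid lightness, full saturation: pure red.
r, g, b = hls_to_rgb(0.0, 0.5, 1.0)
```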
’hsi’
  M1 = S · sin H
  M2 = S · cos H
  I1 = √3 · I

  [ R ]   [  2/√6    0      1/√3 ] [ M1 ]
  [ G ] = [ −1/√6    1/√2   1/√3 ] [ M2 ]
  [ B ]   [ −1/√6   −1/√2   1/√3 ] [ I1 ]

Domain:
H ∈ [0; 2π], S ∈ [0; √(2/3)], I ∈ [0; 1]
’hsv’
if (S == 0)
R = V
G = V
B = V
else
Hi = integer(H)
Hf = fraction(H)
if (Hi == 0)
R = V
G = V * (1 - (S * (1 - Hf)))
B = V * (1 - S)
elif (Hi == 1)
R = V * (1 - (S * Hf))
G = V
B = V * (1 - S)
elif (Hi == 2)
R = V * (1 - S)
G = V
B = V * (1 - (S * (1 - Hf)))
elif (Hi == 3)
R = V * (1 - S)
G = V * (1 - (S * Hf))
B = V
elif (Hi == 4)
R = V * (1 - (S * (1 - Hf)))
G = V * (1 - S)
B = V
elif (Hi == 5)
R = V
G = V * (1 - S)
B = V * (1 - (S * Hf))
fi
fi
Domain:
H ∈ [0; 2π], S ∈ [0; 1], V ∈ [0; 1]
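The ’hsv’ pseudocode condenses to the following sketch (not the HALCON implementation; as with ’hls’, it assumes H in [0, 2π) is mapped to six hue sectors before taking integer and fractional parts):

```python
from math import pi, floor

def hsv_to_rgb(h, s, v):
    """'hsv' -> RGB following the pseudocode above, h in [0, 2*pi)."""
    if s == 0:
        return v, v, v
    h6 = (h / (2 * pi)) * 6.0          # assumed normalization of the hue
    hi, hf = int(floor(h6)), h6 - floor(h6)
    p = v * (1 - s)                    # V * (1 - S)
    q = v * (1 - s * hf)               # V * (1 - S * Hf)
    t = v * (1 - s * (1 - hf))         # V * (1 - S * (1 - Hf))
    # one (R, G, B) permutation of (v, p, q, t) per hue sector
    return [(v, t, p), (q, v, p), (p, v, t),
            (p, q, v), (t, p, v), (v, p, q)][hi]

# Hue pi (sector 3), full saturation and value: cyan.
r, g, b = hsv_to_rgb(pi, 1.0, 1.0)
```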
’ciexyz4’
  [ R ]   [  2.750   −1.149   −0.426 ] [ X ]
  [ G ] = [ −1.118    2.026    0.033 ] [ Y ]
  [ B ]   [  0.138   −0.333    1.104 ] [ Z ]
Domain:
X ∈ [0; 0.951], Y ∈ [0; 1], Z ∈ [0; 1.088]
Parameter
. imageInput1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Input image (channel 1).
. imageInput2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Input image (channel 2).
. imageInput3 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Input image (channel 3).
. imageRed (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Red channel.
. imageGreen (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Green channel.
. imageBlue (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Blue channel.
. colorSpace (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Color space of the input image.
Default Value : "hsv"
List of values : ColorSpace ∈ {"hsi", "yiq", "yuv", "argyb", "ciexyz", "ciexyz4", "cielab", "hls", "hsv"}
Result
TransToRgb returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behaviour can be
set via SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
TransToRgb is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
Decompose3
Possible Successors
Compose3, DispColor
See also
Decompose3
Module
Foundation
3.4 Edges
Example (Syntax: C)
sobel_amp(Image,&EdgeAmp,"sum_abs",5);
threshold(EdgeAmp,&EdgeRegion,40.0,255.0);
skeleton(EdgeRegion,&ThinEdge);
close_edges(ThinEdge,EdgeAmp,&CloseEdges,15);
skeleton(CloseEdges,&ThinCloseEdges);
Result
CloseEdges returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behaviour can be
set via SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
CloseEdges is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
EdgesImage, SobelAmp, Threshold, Skeleton
Possible Successors
Skeleton
Alternatives
CloseEdgesLength, Dilation1, Closing
See also
GraySkeleton
Module
Foundation
sobel_amp(Image,&EdgeAmp,"sum_abs",5);
threshold(EdgeAmp,&EdgeRegion,40.0,255.0);
skeleton(EdgeRegion,&ThinEdge);
close_edges_length(ThinEdge,EdgeAmp,&CloseEdges,15,3);
Result
CloseEdgesLength returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behaviour
can be set via SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
CloseEdgesLength is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
EdgesImage, SobelAmp, Threshold, Skeleton
Alternatives
CloseEdges, Dilation1, Closing
References
M. Üsbeck: “Untersuchungen zur echtzeitfähigen Segmentierung”; Studienarbeit, Bayerisches Forschungszentrum
für Wissensbasierte Systeme (FORWISS), Erlangen, 1993.
Module
Foundation
  φ = atan2(∂g(x,y)/∂y, ∂g(x,y)/∂x)

  TR = ∂²g(x,y)/∂x² + ∂²g(x,y)/∂y²

  A = E·G − F²
  E = 1 + (∂g(x,y)/∂x)²
  F = (∂g(x,y)/∂x) · (∂g(x,y)/∂y)
  G = 1 + (∂g(x,y)/∂y)²
Parameter
read_image(&Image,"mreut");
derivate_gauss(Image,&Gauss,3.0,"x");
zero_crossing(Gauss,&ZeroCrossings);
Parallelization Information
DerivateGauss is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
ZeroCrossing, DualThreshold
Alternatives
Laplace, LaplaceOfGauss, BinomialFilter, GaussImage, SmoothImage,
IsotropicDiffusion
See also
ZeroCrossing, DualThreshold
Module
Foundation
  sigma1 = sigma / sqrt( −2 · log(1/sigFactor) / (sigFactor² − 1) )
  sigma2 = sigma1 / sigFactor
For a sigFactor = 1.6, according to Marr, an approximation to the Mexican-Hat-Operator results. The resulting
image is stored in diffOfGauss.
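A sketch of the sigma splitting and the resulting 1-D "Mexican hat" profile (plain Python, not the HALCON implementation; it assumes the formula sigma1 = sigma / sqrt(−2·log(1/sigFactor)/(sigFactor² − 1)) as reconstructed from the reference):

```python
from math import log, sqrt, exp, pi

def dog_sigmas(sigma, sig_factor):
    """Split sigma into the two Gaussian smoothing sigmas of the DoG."""
    sigma1 = sigma / sqrt(-2.0 * log(1.0 / sig_factor)
                          / (sig_factor ** 2 - 1.0))
    sigma2 = sigma1 / sig_factor
    return sigma1, sigma2

def gauss(x, s):
    """1-D normalized Gaussian."""
    return exp(-x * x / (2.0 * s * s)) / (s * sqrt(2.0 * pi))

s1, s2 = dog_sigmas(3.0, 1.6)
# Difference of the two Gaussians at x = 0: negative, since the wider
# Gaussian (sigma1 > sigma2) has the lower peak.
dog_at_0 = gauss(0.0, s1) - gauss(0.0, s2)
```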
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Input image
. diffOfGauss (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
LoG image.
. sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Smoothing parameter of the Laplace operator to approximate.
Default Value : 3.0
Suggested values : Sigma ∈ {2.0, 3.0, 4.0, 5.0}
Typical range of values : 0.2 ≤ Sigma ≤ 50.0
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Sigma > 0.0
. sigFactor (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Ratio of the standard deviations used (Marr recommends 1.6).
Default Value : 1.6
Typical range of values : 0.1 ≤ SigFactor ≤ 10.0
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : SigFactor > 0.0
Example (Syntax: HDevelop)
read_image(Image,’fabrik’)
diff_of_gauss(Image,Laplace,2.0,1.6)
zero_crossing(Laplace,ZeroCrossings).
Complexity
The execution time depends linearly on the number of pixels and the size of sigma.
Result
DiffOfGauss returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behaviour can
be set via SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
DiffOfGauss is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
ZeroCrossing, DualThreshold
Alternatives
Laplace, DerivateGauss
References
D. Marr: “Vision (A computational investigation into human representation and processing of visual information)”;
New York, W.H. Freeman and Company; 1982.
Module
Foundation
EdgesColor extracts color edges from the input image image. To define color edges, the multi-channel image
image is regarded as a mapping f : R2 7→ Rn , where n is the number of channels in image. For such functions,
there is a natural extension of the gradient: the metric tensor G, which can be used to calculate for every direction,
given by the direction vector v, the rate of change of f in the direction v. For notational convenience, G will be
regarded as a two-dimensional matrix. Thus, the rate of change of the function f in the direction v is given by
v T Gv, where
        [ fxᵀfx   fxᵀfy ]   [ Σi ∂fi/∂x · ∂fi/∂x    Σi ∂fi/∂x · ∂fi/∂y ]
    G = [                ] = [                                          ]
        [ fxᵀfy   fyᵀfy ]   [ Σi ∂fi/∂x · ∂fi/∂y    Σi ∂fi/∂y · ∂fi/∂y ]

where the sums run over i = 1, …, n.
The partial derivatives of the images, which are necessary to calculate the metric tensor, are calculated with the
corresponding edge filters, analogously to EdgesImage. For filter = ’canny’, the partial derivatives of
the Gaussian smoothing masks are used (see DerivateGauss), for filter = ’deriche1’ and filter = ’deriche2’ the
corresponding Deriche filters, for filter = ’shen’ the corresponding Shen filters, and for filter = ’sobel_fast’
the Sobel filter. Analogously to single-channel images, the gradient direction is defined by the vector v in which the
rate of change f is maximum. The vector v is given by the eigenvector corresponding to the largest eigenvalue of
G. The square root of the eigenvalue is the equivalent of the gradient magnitude (the amplitude) for single-channel
images, and is returned in imaAmp. For single-channel images, both definitions are equivalent. Since the gradient
magnitude may be larger than what can be represented in the input image data type (byte or uint2), it is stored in
the next larger data type (uint2 or int4) in imaAmp. The eigenvector also is used to define the edge direction. In
contrast to single-channel images, the edge direction can only be defined modulo 180 degrees. Like in the output
of EdgesImage, the edge directions are stored in 2-degree steps, and are returned in imaDir. Points with edge
amplitude 0 are assigned the edge direction 255 (undefined direction). For speed reasons, the edge directions are
not computed explicitly for filter = ’sobel_fast’. Therefore, imaDir is an empty object in this case.
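The eigenvalue computation behind the color edge amplitude can be illustrated at a single pixel (a plain-Python sketch, not the HALCON implementation; the 2 × 2 symmetric eigenvalue problem has a closed-form solution):

```python
from math import sqrt

def color_edge_amp(grads):
    """Color edge amplitude at one pixel. grads is a list of
    (dfi/dx, dfi/dy) pairs, one per channel; builds the 2x2 metric tensor G
    and returns the square root of its largest eigenvalue."""
    gxx = sum(dx * dx for dx, _ in grads)
    gxy = sum(dx * dy for dx, dy in grads)
    gyy = sum(dy * dy for _, dy in grads)
    # eigenvalues of [[gxx, gxy], [gxy, gyy]] in closed form
    mean = (gxx + gyy) / 2.0
    dev = sqrt(((gxx - gyy) / 2.0) ** 2 + gxy * gxy)
    return sqrt(mean + dev)

# For a single channel this reduces to the usual gradient magnitude:
amp = color_edge_amp([(3.0, 4.0)])
```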
The “filter width” (i.e., the amount of smoothing) can be chosen arbitrarily for all filters except ’sobel_fast’ (where
the filter width is 3 × 3 and alpha is ignored), and can be estimated by calling InfoEdges for concrete values
of the parameter alpha. It decreases for increasing alpha for the Deriche and Shen filters and increases for
the Canny filter, where it is the standard deviation of the Gaussian on which the Canny operator is based. “Wide”
filters exhibit a larger invariance to noise, but also a decreased ability to detect small details. Non-recursive filters,
such as the Canny filter, are realized using filter masks, and thus the execution time increases for increasing filter
width. In contrast, the execution time for recursive filters does not depend on the filter width. Thus, arbitrary
filter widths are possible using the Deriche and Shen filters without increasing the run time of the operator. The
resulting advantage in speed compared to the Canny operator naturally increases for larger filter widths. As border
treatment, the recursive operators assume that the images are zero outside of the image, while the Canny operator
mirrors the gray value at the image border. Comparable filter widths can be obtained by the following choices of
alpha:
EdgesColor optionally applies a non-maximum-suppression (NMS = ’nms’/’inms’/’hvnms’; ’none’ if not
desired) and a hysteresis threshold operation (low, high; at least one negative if not desired) to the resulting
edge image. Conceptually, this corresponds to the following calls:
nonmax_suppression_dir(...,NMS,...)
hysteresis_threshold(...,Low,High,1000,...)
For ’sobel_fast’, the same non-maximum-suppression is performed for all values of NMS except ’none’. Further-
more, the hysteresis threshold operation is always performed. Additionally, for ’sobel_fast’ the resulting edges are
thinned to a width of one pixel.
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Input image.
. imaAmp (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; HImage
Edge amplitude (gradient magnitude) image.
J.Canny: “Finding Edges and Lines in Images”; Report, AI-TR-720; M.I.T. Artificial Intelligence Lab., Cambridge;
1983.
J.Canny: “A Computational Approach to Edge Detection”; IEEE Transactions on Pattern Analysis and Machine
Intelligence; PAMI-8, vol. 6; pp. 679-698; 1986.
R.Deriche: “Using Canny’s Criteria to Derive a Recursively Implemented Optimal Edge Detector”; International
Journal of Computer Vision; vol. 1, no. 2; pp. 167-187; 1987.
R.Deriche: “Fast Algorithms for Low-Level Vision”; IEEE Transactions on Pattern Analysis and Machine Intelli-
gence; PAMI-12, no. 1; pp. 78-87; 1990.
J. Shen, S. Castan: “An Optimal Linear Operator for Step Edge Detection”; Computer Vision, Graphics, and Image
Processing: Graphical Models and Image Processing, vol. 54, no. 2; pp. 112-133; 1992.
Module
Foundation
Extract subpixel precise color edges using Deriche, Shen, or Canny filters.
EdgesColorSubPix extracts subpixel precise color edges from the input image image. The definition of color
edges is given in the description of EdgesColor. The same edge filters as in EdgesColor can be selected:
’canny’, ’deriche1’, ’deriche2’, and ’shen’. In addition, a fast Sobel filter can be selected with ’sobel_fast’. The
filters are specified by the parameter filter.
The “filter width” (i.e., the amount of smoothing) can be chosen arbitrarily. For a detailed description of this
parameter see EdgesColor. This parameter is ignored for filter = ’sobel_fast’.
The extracted edges are returned as subpixel precise XLD contours in edges. For all edge operators except for
’sobel_fast’, the following attributes are defined for each edge point (see GetContourAttribXld):
’edge_direction’ Edge direction
’angle’ Direction of the normal vectors to the contour (oriented such that the normal vectors point to
the right side of the contour as the contour is traversed from start to end point; the angles are
given with respect to the row axis of the image.)
’response’ Edge amplitude (gradient magnitude)
EdgesColorSubPix links the edge points into edges by using an algorithm similar to a hysteresis threshold
operation, which is also used in EdgesSubPix and LinesGauss. Points with an amplitude larger than high
are immediately accepted as belonging to an edge, while points with an amplitude smaller than low are rejected.
All other points are accepted as edges if they are connected to accepted edge points (see also LinesGauss and
HysteresisThreshold).
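The linking step can be pictured with a simplified grid-based sketch (plain Python, not the actual HALCON edge linking): amplitudes above high seed edges, and amplitudes between low and high survive only if connected to a seed.

```python
def hysteresis(amp, low, high):
    """Hysteresis thresholding on a 2-D amplitude grid (list of lists).
    Points above `high` seed edges; points with amplitude >= `low` are kept
    only if 8-connected to a seed."""
    h, w = len(amp), len(amp[0])
    keep = [[False] * w for _ in range(h)]
    stack = [(r, c) for r in range(h) for c in range(w) if amp[r][c] > high]
    for r, c in stack:
        keep[r][c] = True
    while stack:                       # flood-fill from the seed points
        r, c = stack.pop()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (0 <= rr < h and 0 <= cc < w and not keep[rr][cc]
                        and amp[rr][cc] >= low):
                    keep[rr][cc] = True
                    stack.append((rr, cc))
    return keep

amp = [[0, 5, 9],
       [0, 0, 5],
       [5, 0, 0]]
# The 9 seeds its two adjacent 5s; the isolated 5 at (2, 0) is dropped.
kept = hysteresis(amp, low=4, high=8)
```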
Because edge extractors are often unable to extract certain junctions, a mode that tries to extract these missing
junctions by different means can be selected by appending ’_junctions’ to the values of filter that are described
above. This mode is analogous to the mode for completing junctions that is available in EdgesSubPix and
LinesGauss.
The edge operator ’sobel_fast’ has the same semantics as all the other edge operators. Internally, however, it is
based on significantly simplified variants of the individual processing steps (hysteresis thresholding, edge point
linking, and extraction of the subpixel edge positions). Therefore, ’sobel_fast’ in some cases may return slightly
less accurate edge positions and may select different edge parts.
Parameter
R.Deriche: “Fast Algorithms for Low-Level Vision”; IEEE Transactions on Pattern Analysis and Machine Intelli-
gence; PAMI-12, no. 1; pp. 78-87; 1990.
J. Shen, S. Castan: “An Optimal Linear Operator for Step Edge Detection”; Computer Vision, Graphics, and Image
Processing: Graphical Models and Image Processing, vol. 54, no. 2; pp. 112-133; 1992.
Module
2D Metrology
edge direction                              r
from bottom to top               0/+        0
from lower right to upper left   +/−        ]0, 90[
from right to left               +/0        90
from upper right to lower left   +/+        ]90, 180[
from top to bottom               0/+        180
from upper left to lower right   −/+        ]180, 270[
from left to right               +/0        270
from lower left to upper right   −/−        ]270, 360[
Points with edge amplitude 0 are assigned the edge direction 255 (undefined direction).
The “filter width” (i.e., the amount of smoothing) can be chosen arbitrarily for all filters except ’sobel_fast’ (where
the filter width is 3×3 and alpha is ignored), and can be estimated by calling InfoEdges for concrete values of
the parameter alpha. It decreases for increasing alpha for the Deriche, Lanser and Shen filters and increases for
the Canny filter, where it is the standard deviation of the Gaussian on which the Canny operator is based. “Wide”
filters exhibit a larger invariance to noise, but also a decreased ability to detect small details. Non-recursive filters,
such as the Canny filter, are realized using filter masks, and thus the execution time increases for increasing filter
width. In contrast, the execution time for recursive filters does not depend on the filter width. Thus, arbitrary filter
widths are possible using the Deriche, Lanser and Shen filters without increasing the run time of the operator. The
resulting advantage in speed compared to the Canny operator naturally increases for larger filter widths. As border
treatment, the recursive operators assume that the images are zero outside of the image, while the Canny operator
repeats the gray value at the image’s border. Comparable filter widths can be obtained by the following choices of
alpha:
The originally proposed recursive filters (’deriche1’, ’deriche2’, ’shen’) return a biased estimate of the amplitude
of diagonal edges. This bias is removed in the corresponding modified version of the operators (’lanser1’, ’lanser2’
und ’mshen’), while maintaining the same execution speed.
For relatively small filter widths (11 × 11, i.e., alpha = 0.5 for ’lanser2’), all filters yield similar results. Only for
“wider” filters differences begin to appear: the Shen filters begin to yield qualitatively inferior results. However,
they are the fastest of the implemented operators — closely followed by the Deriche operators.
EdgesImage optionally offers to apply a non-maximum-suppression (NMS = ’nms’/’inms’/’hvnms’; ’none’ if
not desired) and hysteresis threshold operation (low,high; at least one negative if not desired) to the resulting
edge image. Conceptually, this corresponds to the following calls:
nonmax_suppression_dir(...,NMS,...)
hysteresis_threshold(...,Low,High,999,...)
For ’sobel_fast’, the same non-maximum-suppression is performed for all values of NMS except ’none’. Further-
more, the hysteresis threshold operation is always performed. Additionally, for ’sobel_fast’ the resulting edges are
thinned to a width of one pixel.
Parameter
read_image(Image,’fabrik’)
edges_image(Image,Amp,Dir,’lanser2’,0.5,’none’,-1,-1)
hysteresis_threshold(Amp,Margin,20,30,30).
Result
EdgesImage returns 2 (H_MSG_TRUE) if all parameters are correct and no error occurs during execution. If the
input is empty the behaviour can be set via SetSystem(’no_object_result’,<Result>). If necessary,
an exception is raised.
Parallelization Information
EdgesImage is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
InfoEdges
Possible Successors
Threshold, HysteresisThreshold, CloseEdgesLength
Alternatives
SobelDir, FreiDir, KirschDir, PrewittDir, RobinsonDir
See also
InfoEdges, NonmaxSuppressionAmp, HysteresisThreshold, BandpassImage
References
S.Lanser, W.Eckstein: “Eine Modifikation des Deriche-Verfahrens zur Kantendetektion”; 13. DAGM-Symposium,
München; Informatik Fachberichte 290; Seite 151 - 158; Springer-Verlag; 1991.
S.Lanser: “Detektion von Stufenkanten mittels rekursiver Filter nach Deriche”; Diplomarbeit; Technische Univer-
sität München, Institut für Informatik, Lehrstuhl Prof. Radig; 1991.
J.Canny: “Finding Edges and Lines in Images”; Report, AI-TR-720; M.I.T. Artificial Intelligence Lab., Cambridge;
1983.
J.Canny: “A Computational Approach to Edge Detection”; IEEE Transactions on Pattern Analysis and Machine
Intelligence; PAMI-8, vol. 6; S. 679-698; 1986.
R.Deriche: “Using Canny’s Criteria to Derive a Recursively Implemented Optimal Edge Detector”; International
Journal of Computer Vision; vol. 1, no. 2; S. 167-187; 1987.
R.Deriche: “Optimal Edge Detection Using Recursive Filtering”; Proc. of the First International Conference on
Computer Vision, London; S. 501-505; 1987.
R.Deriche: “Fast Algorithms for Low-Level Vision”; IEEE Transactions on Pattern Analysis and Machine Intelli-
gence; PAMI-12, no. 1; S. 78-87; 1990.
S.Castan, J.Zhao und J.Shen: “Optimal Filter for Edge Detection Methods and Results”; Proc. of the First Euro-
pean Conference on Computer Vision, Antibes; Lecture Notes on computer Science; no. 427; S. 12-17; Springer-
Verlag; 1990.
Module
Foundation
Extract sub-pixel precise edges using Deriche, Lanser, Shen, or Canny filters.
EdgesSubPix detects step edges using recursively implemented filters (according to Deriche, Lanser and Shen)
or the conventionally implemented “derivative of Gaussian” filter (using filter masks) proposed by Canny. Thus,
the following edge operators are available:
’deriche1’, ’lanser1’, ’deriche2’, ’lanser2’, ’shen’, ’mshen’, ’canny’, ’sobel’, and ’sobel_fast’
(parameter filter).
The extracted edges are returned as sub-pixel precise XLD contours in edges. For all edge operators except
’sobel_fast’, the following attributes are defined for each edge point (see GetContourAttribXld):
’edge_direction’ Edge direction
’angle’ Direction of the normal vectors to the contour (oriented such that the normal vectors point to
the right side of the contour as the contour is traversed from start to end point; the angles are
given with respect to the row axis of the image.)
’response’ Edge amplitude (gradient magnitude)
The “filter width” (i.e., the amount of smoothing) can be chosen arbitrarily for all edge operators except ’sobel’
and ’sobel_fast’, and can be estimated by calling InfoEdges for concrete values of the parameter alpha. It
decreases for increasing alpha for the Deriche, Lanser and Shen filters and increases for the Canny filter, where
it is the standard deviation of the Gaussian on which the Canny operator is based. “Wide” filters exhibit a larger
invariance to noise, but also a decreased ability to detect small details. Non-recursive filters, such as the Canny
filter, are realized using filter masks, and thus the execution time increases for increasing filter width. In contrast,
the execution time for recursive filters does not depend on the filter width. Thus, arbitrary filter widths are possible
using the Deriche, Lanser and Shen filters without increasing the run time of the operator. The resulting advantage
in speed compared to the Canny operator naturally increases for larger filter widths. As border treatment, the
recursive operators assume the image to be zero outside of the image, while the Canny operator repeats the
gray value at the image’s border. Comparable filter widths can be obtained by the following choices of alpha:
The originally proposed recursive filters (’deriche1’, ’deriche2’, ’shen’) return a biased estimate of the amplitude
of diagonal edges. This bias is removed in the corresponding modified versions of the operators (’lanser1’, ’lanser2’,
and ’mshen’), while maintaining the same execution speed.
For relatively small filter widths (e.g., 11 × 11, i.e., for alpha(’lanser2’) = 0.5), all filters yield similar results. Only for
“wider” filters do differences begin to appear: the Shen filters begin to yield qualitatively inferior results. However,
they are the fastest of the implemented operators that support arbitrary mask sizes, closely followed by the Deriche
operators. The two Sobel filters, which use a fixed mask size of (3 × 3), are faster than the other filters. Of these
two, the filter ’sobel_fast’ is significantly faster than ’sobel’.
EdgesSubPix links the edge points into edges by using an algorithm similar to a hysteresis threshold operation,
which is also used in LinesGauss. Points with an amplitude larger than high are immediately accepted as
belonging to an edge, while points with an amplitude smaller than low are rejected. All other points are accepted
as edges if they are connected to accepted edge points (see also LinesGauss and HysteresisThreshold).
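The linking scheme above can be sketched in plain Python/NumPy. This is an illustrative fixed-point iteration assuming 8-connectivity, not HALCON's actual implementation:

```python
import numpy as np

def hysteresis_threshold(amp, low, high):
    """Select edge pixels: amplitude > high is accepted immediately,
    amplitude < low is rejected, and in-between pixels are accepted
    only if they are 8-connected to an accepted pixel."""
    strong = amp > high
    candidate = amp >= low
    accepted = strong.copy()
    # Grow the accepted set into candidate pixels until stable
    # (simple fixed point; HALCON uses a more efficient scheme).
    changed = True
    while changed:
        changed = False
        for r, c in np.argwhere(candidate & ~accepted):
            r0, r1 = max(r - 1, 0), min(r + 2, amp.shape[0])
            c0, c1 = max(c - 1, 0), min(c + 2, amp.shape[1])
            if accepted[r0:r1, c0:c1].any():
                accepted[r, c] = True
                changed = True
    return accepted
```

A weak response that touches a strong one survives, while an isolated weak response of the same amplitude is discarded.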
Because edge extractors are often unable to extract certain junctions, a mode that tries to extract these missing
junctions by different means can be selected by appending ’_junctions’ to the values of filter that are described
above. This mode is analogous to the mode for completing junctions that is available in LinesGauss.
The edge operator ’sobel_fast’ has the same semantics as all the other edge operators. Internally, however, it is
based on significantly simplified variants of the individual processing steps (hysteresis thresholding, edge point
linking, and extraction of the subpixel edge positions). Therefore, ’sobel_fast’ in some cases may return slightly
less accurate edge positions and may select different edge parts.
Example (Syntax: HDevelop)
read_image(Image,’fabrik’)
edges_sub_pix(Image,Edges,’lanser2’,0.5,20,40).
Complexity
Let A be the number of pixels in the domain of image. Then the runtime complexity is O(A ∗ Sigma) for the
Canny filter and O(A) for the recursive Lanser, Deriche, and Shen filters.
Let S = Width ∗ Height be the number of pixels of image. Then EdgesSubPix requires at least 60 ∗ S bytes
of temporary memory during execution for all edge operators except ’sobel_fast’. For ’sobel_fast’, at least 9 ∗ S
bytes of temporary memory are required.
Result
EdgesSubPix returns 2 (H_MSG_TRUE) if all parameters are correct and no error occurs during execution.
HImage HImage.FreiAmp ( )
Detect edges (amplitude) using the Frei-Chen operator.
FreiAmp calculates an approximation of the first derivative of the image data and is used as an edge detector. The
filter is based on the following filter masks:
1 √2 1
A= 0 0 0
−1 −√2 −1
1 0 −1
B= √2 0 −√2
1 0 −1
The result image contains the maximum response of the masks A and B.
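This "maximum of two mask responses" scheme can be sketched in plain Python/NumPy. This is illustrative only; HALCON's border treatment and normalization are not reproduced, and the √2 weights of the classic Frei-Chen masks are assumed:

```python
import numpy as np

SQRT2 = 2 ** 0.5
# Frei-Chen masks: A responds to horizontal edges, B to vertical ones.
A = np.array([[1, SQRT2, 1], [0, 0, 0], [-1, -SQRT2, -1]])
B = np.array([[1, 0, -1], [SQRT2, 0, -SQRT2], [1, 0, -1]])

def frei_amp(image):
    """Per pixel, the maximum absolute response of masks A and B
    (border pixels are left at zero for simplicity)."""
    img = image.astype(float)
    out = np.zeros_like(img)
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            win = img[r - 1:r + 2, c - 1:c + 2]
            out[r, c] = max(abs((A * win).sum()), abs((B * win).sum()))
    return out
```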
Example (Syntax: HDevelop)
read_image(Image,’fabrik’)
frei_amp(Image,Frei_amp)
threshold(Frei_amp,Edges,128,255).
Result
FreiAmp always returns 2 (H_MSG_TRUE). If the input is empty, the behaviour can be set via SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
FreiAmp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
BinomialFilter, GaussImage, SigmaImage, MedianImage, SmoothImage
Alternatives
SobelAmp, KirschAmp, PrewittAmp, RobinsonAmp, Roberts
See also
BandpassImage, LaplaceOfGauss
Module
Foundation
1 0 −1
B= √2 0 −√2
1 0 −1
The result image contains the maximum response of the masks A and B. The edge directions are returned in
imageEdgeDir, and are stored in 2-degree steps, i.e., an edge direction of x degrees with respect to the horizontal
axis is stored as x/2 in the edge direction image. Furthermore, the direction of the change of intensity is taken into
account. Let [Ex , Ey ] denote the image gradient. Then the following edge directions are returned as r/2:
edge direction                  [Ex, Ey]   r
from bottom to top                 0/−      0
from lower right to upper left     +/−      ]0, 90[
from right to left                 +/0      90
from upper right to lower left     +/+      ]90, 180[
from top to bottom                 0/+      180
from upper left to lower right     −/+      ]180, 270[
from left to right                 −/0      270
from lower left to upper right     −/−      ]270, 360[
Points with edge amplitude 0 are assigned the edge direction 255 (undefined direction).
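One consistent reading of this table maps the gradient to the stored direction value as follows. This plain-Python sketch interprets the sign convention from the table above and is not HALCON code:

```python
import math

def edge_direction(ex, ey):
    """Quantize the gradient [Ex, Ey] into the 2-degree steps of the
    direction image: direction r in [0, 360) is stored as r / 2;
    zero-amplitude pixels get the undefined value 255."""
    if ex == 0 and ey == 0:
        return 255                     # undefined direction
    # Convention assumed: r = 0 for [0, -], r = 90 for [+, 0], etc.
    r = math.degrees(math.atan2(ex, -ey)) % 360.0
    return int(r / 2.0)                # stored in 2-degree steps
```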
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Input image.
. imageEdgeAmp (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Edge amplitude (gradient magnitude) image.
. imageEdgeDir (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Edge direction image.
Example (Syntax: HDevelop)
read_image(Image,’fabrik’)
frei_dir(Image,Frei_dirA,Frei_dirD)
threshold(Frei_dirA,Res,128,255).
Result
FreiDir always returns 2 (H_MSG_TRUE). If the input is empty, the behaviour can be set via SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
FreiDir is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
BinomialFilter, GaussImage, SigmaImage, MedianImage, SmoothImage
Possible Successors
HysteresisThreshold, Threshold, GraySkeleton, NonmaxSuppressionDir, CloseEdges,
CloseEdgesLength
Alternatives
EdgesImage, SobelDir, RobinsonDir, PrewittDir, KirschDir
See also
BandpassImage, LaplaceOfGauss
Module
Foundation
1 1 1 1 1 1 1
1 1 1 1 1 1 1
1 1 1 −35 1 1 1
1 1 1 1 1 1 1
1 1 1 1 1 1 1
This corresponds to applying a mean operator (MeanImage), and then subtracting the original gray value. A
value of 128 is added to the result, i.e., zero crossings occur for 128.
This filter emphasizes high frequency components (edges and corners). The cutoff frequency is determined by the
size (height × width) of the filter matrix: the larger the matrix, the smaller the cutoff frequency is.
At the image borders the pixels’ gray values are mirrored. In case of over- or underflow the gray values are clipped
(255 and 0, resp.).
Attention
If even values are passed for height or width, the operator uses the next larger odd value instead. Thus, the
center of the filter mask is always uniquely determined.
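The described behaviour can be sketched in plain Python/NumPy. This is illustrative only; HALCON's exact rounding is not reproduced:

```python
import numpy as np

def highpass_image(image, width, height):
    """Mean filter minus original, plus 128, clipped to [0, 255].
    Even mask sizes are rounded up to the next odd value; gray values
    are mirrored at the image borders, as described above."""
    width += 1 - width % 2             # force odd width
    height += 1 - height % 2           # force odd height
    img = image.astype(np.int32)
    ph, pw = height // 2, width // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="reflect")
    out = np.empty(img.shape, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            mean = padded[r:r + height, c:c + width].mean()
            out[r, c] = mean - img[r, c] + 128.0
    return np.clip(out, 0, 255).astype(np.uint8)
```

On a constant image the mean equals the original everywhere, so the result is uniformly 128 (the zero-crossing level).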
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Input image.
. highpass (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
High-pass-filtered result image.
. width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; HTuple (int / long)
Width of the filter mask.
Default Value : 9
Suggested values : Width ∈ {3, 5, 7, 9, 11, 13, 17, 21, 29, 41, 51, 73, 101}
Typical range of values : 3 ≤ Width ≤ 501
Minimum Increment : 2
Recommended Increment : 2
Restriction : (Width ≥ 3) ∧ odd(Width)
. height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; HTuple (int / long)
Height of the filter mask.
Default Value : 9
Suggested values : Height ∈ {3, 5, 7, 9, 11, 13, 17, 21, 29, 41, 51, 73, 101}
Typical range of values : 3 ≤ Height ≤ 501
Minimum Increment : 2
Recommended Increment : 2
Restriction : (Height ≥ 3) ∧ odd(Height)
Example (Syntax: C)
highpass_image(Image,&Highpass,7,5);
threshold(Highpass,&Region,60.0,255.0);
skeleton(Region,&Skeleton);
Result
HighpassImage returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behaviour
can be set via SetSystem(’no_object_result’,<Result>). If necessary, an exception handling is
raised.
Parallelization Information
HighpassImage is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
Threshold, Skeleton
Alternatives
MeanImage, SubImage, ConvolImage, BandpassImage
See also
DynThreshold
Module
Foundation
The parameter mode (’edge’/’smooth’) is used to determine whether the corresponding edge or smoothing operator
is to be sampled. The Canny operator (which uses the Gaussian for smoothing) is implemented using conventional
filter masks, while all other filters are implemented recursively. Therefore, for the Canny filter the coefficients of
the one-dimensional impulse responses f (n) with n ≥ 0 are returned in coeffs in addition to the filter width.
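For the Canny case, the positive half of such a sampled Gaussian can be sketched in plain Python. This is a rough illustration only: the truncation threshold is an assumption, and unlike InfoEdges (which returns scaled integer coefficients), this sketch returns floats:

```python
import math

def gauss_coeffs(sigma, rel_eps=1e-3):
    """Positive half f(n), n >= 0, of a sampled 1D Gaussian, truncated
    where coefficients drop below rel_eps times the peak; the full
    filter width is then 2 * len(coeffs) - 1."""
    peak = 1.0 / (math.sqrt(2.0 * math.pi) * sigma)
    coeffs = []
    n = 0
    while True:
        v = peak * math.exp(-(n * n) / (2.0 * sigma * sigma))
        if v < rel_eps * peak:
            break
        coeffs.append(v)
        n += 1
    width = 2 * len(coeffs) - 1
    return width, coeffs
```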
Parameter
. filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Name of the edge operator.
Default Value : "lanser2"
List of values : Filter ∈ {"deriche1", "lanser1", "deriche2", "lanser2", "shen", "mshen", "canny"}
. mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
1D edge filter (’edge’) or 1D smoothing filter (’smooth’).
Default Value : "edge"
List of values : Mode ∈ {"edge", "smooth"}
. alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Filter parameter: small values result in strong smoothing, and thus less detail (opposite for ’canny’).
Default Value : 0.5
Typical range of values : 0.2 ≤ Alpha ≤ 50.0
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Alpha > 0.0
. size (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Filter width in pixels.
. coeffs (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; HTuple (int / long)
For Canny filters: Coefficients of the “positive” half of the 1D impulse response.
Example (Syntax: HDevelop)
read_image(Image,’fabrik’)
info_edges(’lanser2’,’edge’,0.5,Size,Coeffs)
edges_image(Image,Amp,Dir,’lanser2’,0.5,’none’,-1,-1)
hysteresis_threshold(Amp,Margin,20,30,30).
Result
InfoEdges returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty, the behaviour can be
set via SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
InfoEdges is reentrant and processed without parallelization.
Possible Successors
EdgesImage, Threshold, Skeleton
See also
EdgesImage
Module
Foundation
HImage HImage.KirschAmp ( )
Detect edges (amplitude) using the Kirsch operator.
KirschAmp calculates an approximation of the first derivative of the image data and is used as an edge detector.
The filter is based on the following filter masks:
−3 −3 5
−3 0 5
−3 −3 5

−3 5 5
−3 0 5
−3 −3 −3

5 5 5
−3 0 −3
−3 −3 −3

5 5 −3
5 0 −3
−3 −3 −3

5 −3 −3
5 0 −3
5 −3 −3

−3 −3 −3
5 0 −3
5 5 −3

−3 −3 −3
−3 0 −3
5 5 5

−3 −3 −3
−3 0 5
−3 5 5
The result image contains the maximum response of all masks.
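The eight masks are rotations of one another. A plain-Python sketch (illustrative, not HALCON's implementation) that generates them by rotating the border weights in 45-degree steps and takes the per-pixel maximum response:

```python
import numpy as np

# Border positions of a 3x3 mask, listed clockwise from the top left.
POS = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]

def kirsch_masks():
    """All eight Kirsch masks, generated from the first one by
    rotating the eight border weights one position at a time."""
    m = np.array([[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]])
    masks = [m]
    for _ in range(7):
        ring = [m[p] for p in POS]
        ring = ring[1:] + ring[:1]     # rotate weights one step
        m = m.copy()
        for p, v in zip(POS, ring):
            m[p] = v
        masks.append(m)
    return masks

def kirsch_amp(image):
    """Per-pixel maximum response over all eight masks
    (border pixels are left at zero for simplicity)."""
    img = image.astype(float)
    out = np.zeros_like(img)
    masks = kirsch_masks()
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            win = img[r - 1:r + 2, c - 1:c + 2]
            out[r, c] = max((k * win).sum() for k in masks)
    return out
```

Each mask's weights sum to zero, so constant regions produce a zero response.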
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Input image.
. imageEdgeAmp (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Edge amplitude (gradient magnitude) image.
Example (Syntax: HDevelop)
read_image(Image,’fabrik’)
kirsch_amp(Image,Kirsch_amp)
threshold(Kirsch_amp,Edges,128,255).
Result
KirschAmp always returns 2 (H_MSG_TRUE). If the input is empty, the behaviour can be set via SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
KirschAmp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
BinomialFilter, GaussImage, SigmaImage, MedianImage, SmoothImage
Alternatives
SobelAmp, FreiAmp, PrewittAmp, RobinsonAmp, Roberts
See also
BandpassImage, LaplaceOfGauss
Module
Foundation
KirschDir calculates an approximation of the first derivative of the image data and is used as an edge detector.
The filter is based on the following filter masks:
−3 −3 5
−3 0 5
−3 −3 5

−3 5 5
−3 0 5
−3 −3 −3

5 5 5
−3 0 −3
−3 −3 −3

5 5 −3
5 0 −3
−3 −3 −3

5 −3 −3
5 0 −3
5 −3 −3

−3 −3 −3
5 0 −3
5 5 −3

−3 −3 −3
−3 0 −3
5 5 5

−3 −3 −3
−3 0 5
−3 5 5
The result image contains the maximum response of all masks. The edge directions are returned in
imageEdgeDir, and are stored as x/2. They correspond to the direction of the mask yielding the maximum
response.
Example (Syntax: HDevelop)
read_image(Image,’fabrik’)
kirsch_dir(Image,Kirsch_dirA,Kirsch_dirD)
threshold(Kirsch_dirA,Res,128,255).
Result
KirschDir always returns 2 (H_MSG_TRUE). If the input is empty, the behaviour can be set via SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
KirschDir is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
BinomialFilter, GaussImage, SigmaImage, MedianImage, SmoothImage
Possible Successors
HysteresisThreshold, Threshold, GraySkeleton, NonmaxSuppressionDir, CloseEdges,
CloseEdgesLength
Alternatives
EdgesImage, SobelDir, RobinsonDir, PrewittDir, FreiDir
See also
BandpassImage, LaplaceOfGauss
Module
Foundation
’n_4’

    1
1  −4  1
    1

’n_8’

1  1  1
1 −8  1
1  1  1

’n_8_isotropic’

10  22  10
22 −128 22
10  22  10
For the three filter masks, the following normalizations of the resulting gray values are applied (i.e., the result is
divided by the given divisor): ’n_4’: normalization by 1, ’n_8’: normalization by 2, and ’n_8_isotropic’:
normalization by 32.
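As an illustration of the mask-plus-normalization scheme, a plain-Python sketch of the ’n_8’ variant follows. This is not HALCON code, and the integer rounding of the normalization is an assumption:

```python
import numpy as np

def laplace_n8(image):
    """Signed 'n_8' Laplace: convolve with the 3x3 mask above and
    normalize by dividing the result by 2 (border pixels skipped;
    HALCON's exact rounding for odd sums may differ)."""
    mask = np.array([[1,  1, 1],
                     [1, -8, 1],
                     [1,  1, 1]])
    img = image.astype(np.int32)
    out = np.zeros_like(img)
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            out[r, c] = (mask * img[r - 1:r + 2, c - 1:c + 2]).sum() // 2
    return out
```

A single bright pixel yields a strong negative response at its own position and positive responses at its neighbors, which is the sign pattern whose zero crossings mark edges.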
For a Laplace operator with size 3 × 3, the corresponding filter is applied directly, while for larger filter sizes
the input image is first smoothed using a Gaussian filter (see GaussImage) or a binomial filter (see
BinomialFilter) of size maskSize-2. The Gaussian filter is selected for the above values of resultType.
Here, maskSize = 5, 7, 9, 11, or 13 must be used. The binomial filter is selected by appending ’_binomial’ to the
above values of resultType. Here, maskSize can be selected between 5 and 39. Furthermore, it is possible
to select different amounts of smoothing for the column and row direction by passing two values in maskSize.
Here, the first value of maskSize corresponds to the mask width (smoothing in the column direction), while the
second value corresponds to the mask height (smoothing in the row direction) of the binomial filter. Therefore,
laplace(O:R:’absolute’,MaskSize,N:)

is equivalent to

gauss_image(O:G:MaskSize-2:)
laplace(G:R:’absolute’,3,N:)

and

laplace(O:R:’absolute_binomial’,MaskSize,N:)

is equivalent to

binomial_filter(O:B:MaskSize-2,MaskSize-2:)
laplace(B:R:’absolute’,3,N:)
Laplace either returns the absolute value of the Laplace-filtered image (resultType = ’absolute’) in a byte
or uint2 image, or the signed result (resultType = ’signed’ or ’signed_clipped’). Here, the output image type
has the same number of bytes per pixel as the input image (i.e., int1 or int2) for ’signed_clipped’, while the output
image has the next larger number of bytes per pixel (i.e., int2 or int4) for ’signed’.
Example (Syntax: C)
read_image(&Image,"mreut");
laplace(Image,&Laplace,"signed",3,"n_8_isotropic");
zero_crossing(Laplace,&ZeroCrossings);
Result
Laplace returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty, the behaviour can be set
via SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
Laplace is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
ZeroCrossing, DualThreshold, Threshold
Alternatives
DiffOfGauss, LaplaceOfGauss, DerivateGauss
See also
HighpassImage, EdgesImage
Module
Foundation
∆g(x, y) = ∂²g(x, y)/∂x² + ∂²g(x, y)/∂y²

The derivatives in LaplaceOfGauss are calculated by appropriate derivatives of the Gaussian, resulting in the
following formula for the convolution mask:

∆G_σ(x, y) = 1/(2πσ⁴) · ((x² + y²)/(2σ²) − 1) · exp(−(x² + y²)/(2σ²))
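The convolution mask can be tabulated directly from this formula. The following plain-Python sketch is illustrative only; HALCON's actual discretization and normalization are not reproduced:

```python
import math

def log_kernel_value(x, y, sigma):
    """Evaluate the Laplacian-of-Gaussian formula above at (x, y)."""
    s2 = sigma * sigma
    q = (x * x + y * y) / (2.0 * s2)
    return (q - 1.0) * math.exp(-q) / (2.0 * math.pi * s2 * s2)

def log_kernel(size, sigma):
    """Tabulate an odd-sized size x size mask centered on the origin."""
    h = size // 2
    return [[log_kernel_value(x, y, sigma) for x in range(-h, h + 1)]
            for y in range(-h, h + 1)]
```

The value is negative at the center, crosses zero on the circle x² + y² = 2σ², and decays to zero outside, which is the familiar "Mexican hat" profile.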
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Input image.
. imageLaplace (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Laplace filtered image.
. sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double / int / long)
Smoothing parameter of the Gaussian.
Default Value : 2.0
Suggested values : Sigma ∈ {0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0, 7.0}
Typical range of values : 0.7 ≤ Sigma ≤ 5.0
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : (Sigma > 0.7) ∧ (Sigma ≤ 25.0)
Example (Syntax: C)
read_image(&Image,"mreut");
laplace_of_gauss(Image,&Laplace,2.0);
zero_crossing(Laplace,&ZeroCrossings);
Parallelization Information
LaplaceOfGauss is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
ZeroCrossing, DualThreshold
Alternatives
Laplace, DiffOfGauss, DerivateGauss
See also
DerivateGauss
Module
Foundation
HImage HImage.PrewittAmp ( )
Detect edges (amplitude) using the Prewitt operator.
PrewittAmp calculates an approximation of the first derivative of the image data and is used as an edge detector.
The filter is based on the following filter masks:
1 1 1
A= 0 0 0
−1 −1 −1
1 0 −1
B= 1 0 −1
1 0 −1
The result image contains the maximum response of the masks A and B.
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Input image.
. imageEdgeAmp (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Edge amplitude (gradient magnitude) image.
Example (Syntax: HDevelop)
read_image(Image,’fabrik’)
prewitt_amp(Image,Prewitt)
threshold(Prewitt,Edges,128,255).
Result
PrewittAmp always returns 2 (H_MSG_TRUE). If the input is empty, the behaviour can be set via SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
PrewittAmp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
BinomialFilter, GaussImage, SigmaImage, MedianImage, SmoothImage
Possible Successors
Threshold, GraySkeleton, NonmaxSuppressionAmp, CloseEdges, CloseEdgesLength
Alternatives
SobelAmp, KirschAmp, FreiAmp, RobinsonAmp, Roberts
See also
BandpassImage, LaplaceOfGauss
Module
Foundation
1 1 1
A= 0 0 0
−1 −1 −1
1 0 −1
B= 1 0 −1
1 0 −1
The result image contains the maximum response of the masks A and B. The edge directions are returned in
imageEdgeDir, and are stored in 2-degree steps, i.e., an edge direction of x degrees with respect to the horizontal
axis is stored as x/2 in the edge direction image. Furthermore, the direction of the change of intensity is taken into
account. Let [Ex , Ey ] denote the image gradient. Then the following edge directions are returned as r/2:
edge direction                  [Ex, Ey]   r
from bottom to top                 0/−      0
from lower right to upper left     +/−      ]0, 90[
from right to left                 +/0      90
from upper right to lower left     +/+      ]90, 180[
from top to bottom                 0/+      180
from upper left to lower right     −/+      ]180, 270[
from left to right                 −/0      270
from lower left to upper right     −/−      ]270, 360[
Points with edge amplitude 0 are assigned the edge direction 255 (undefined direction).
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Input image.
. imageEdgeAmp (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Edge amplitude (gradient magnitude) image.
. imageEdgeDir (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Edge direction image.
Example (Syntax: HDevelop)
read_image(Image,’fabrik’)
prewitt_dir(Image,PrewittA,PrewittD)
threshold(PrewittA,Edges,128,255).
Result
PrewittDir always returns 2 (H_MSG_TRUE). If the input is empty, the behaviour can be set via SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
PrewittDir is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
BinomialFilter, GaussImage, SigmaImage, MedianImage, SmoothImage
Possible Successors
HysteresisThreshold, Threshold, GraySkeleton, NonmaxSuppressionDir, CloseEdges,
CloseEdgesLength
Alternatives
EdgesImage, SobelDir, RobinsonDir, FreiDir, KirschDir
See also
BandpassImage, LaplaceOfGauss
Module
Foundation
Roberts calculates the first derivative of an image and is used as an edge operator. The following mask describes
a part of the image:

A B
C D

If an overflow occurs, the result is clipped. The result of the operator is stored at the pixel with the coordinates of
“D”.
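The formulas for the three filter types are not reproduced above. As an illustration only, here is a plain-Python sketch of the classic Roberts cross (|A − D| + |B − C|) on the 2 × 2 neighborhood, storing the result at D's position as described; the exact HALCON definitions of ’roberts_max’, ’gradient_max’, and ’gradient_sum’ may differ:

```python
import numpy as np

def roberts_cross(image):
    """Classic Roberts cross: |A - D| + |B - C| over each 2x2 window
    A B / C D, stored at D's position; clipped to [0, 255]."""
    img = image.astype(np.int32)
    out = np.zeros_like(img)
    for r in range(1, img.shape[0]):
        for c in range(1, img.shape[1]):
            a, b = img[r - 1, c - 1], img[r - 1, c]
            cc, d = img[r, c - 1], img[r, c]
            out[r, c] = abs(a - d) + abs(b - cc)
    return np.clip(out, 0, 255)
```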
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Input image.
. imageRoberts (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Roberts-filtered result images.
. filterType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Filter type.
Default Value : "gradient_sum"
List of values : FilterType ∈ {"roberts_max", "gradient_max", "gradient_sum"}
Example (Syntax: HDevelop)
read_image(Image,’fabrik’)
roberts(Image,Roberts,’roberts_max’)
threshold(Roberts,Margin,128,255).
Result
Roberts returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty, the behaviour can be set
via SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
Roberts is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
BinomialFilter, GaussImage
Possible Successors
Threshold, Skeleton
Alternatives
EdgesImage, SobelAmp, FreiAmp, KirschAmp, PrewittAmp
See also
Laplace, HighpassImage, BandpassImage
Module
Foundation
HImage HImage.RobinsonAmp ( )
Detect edges (amplitude) using the Robinson operator.
RobinsonAmp calculates an approximation of the first derivative of the image data and is used as an edge detector.
In RobinsonAmp the following four of the originally proposed eight 3 × 3 filter masks are convolved with the
image. The other four masks are obtained by multiplication by −1. All masks contain only the values 0, 1, −1, 2, and −2.
−1 0 1
−2 0 2
−1 0 1

2 1 0
1 0 −1
0 −1 −2

0 1 2
−1 0 1
−2 −1 0

1 2 1
0 0 0
−1 −2 −1
Example (Syntax: HDevelop)
read_image(Image,’fabrik’)
robinson_amp(Image,Robinson_amp)
threshold(Robinson_amp,Edges,128,255).
Result
RobinsonAmp always returns 2 (H_MSG_TRUE). If the input is empty, the behaviour can be set via
SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
RobinsonAmp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
BinomialFilter, GaussImage, SigmaImage, MedianImage, SmoothImage
Alternatives
SobelAmp, FreiAmp, PrewittAmp, KirschAmp, Roberts
See also
BandpassImage, LaplaceOfGauss
Module
Foundation
−1 0 1
−2 0 2
−1 0 1

2 1 0
1 0 −1
0 −1 −2

0 1 2
−1 0 1
−2 −1 0

1 2 1
0 0 0
−1 −2 −1
The result image contains the maximum response of all masks. The edge directions are returned in
imageEdgeDir, and are stored as x/2. They correspond to the direction of the mask yielding the maximum
response.
Example (Syntax: HDevelop)
read_image(Image,’fabrik’)
robinson_dir(Image,Robinson_dirA,Robinson_dirD)
threshold(Robinson_dirA,Res,128,255).
Result
RobinsonDir always returns 2 (H_MSG_TRUE). If the input is empty, the behaviour can be set via
SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
RobinsonDir is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
BinomialFilter, GaussImage, SigmaImage, MedianImage, SmoothImage
Possible Successors
HysteresisThreshold, Threshold, GraySkeleton, NonmaxSuppressionDir, CloseEdges,
CloseEdgesLength
Alternatives
EdgesImage, SobelDir, KirschDir, PrewittDir, FreiDir
See also
BandpassImage, LaplaceOfGauss
Module
Foundation
1 2 1
A= 0 0 0
−1 −2 −1
1 0 −1
B= 2 0 −2
1 0 −1
These masks are used differently, according to the selected filter type. (In the following, a and b denote the results
of convolving an image with A and B for one particular pixel.)

’sum_sqrt’       √(a² + b²) / 4
’sum_abs’        (|a| + |b|) / 4
’thin_sum_abs’   (thin(|a|) + thin(|b|)) / 4
’thin_max_abs’   max(thin(|a|), thin(|b|)) / 4
’x’              b / 4
’y’              a / 4
Here, thin(x) is equal to x for a vertical maximum (mask A) and a horizontal maximum (mask B), respectively,
and 0 otherwise. Thus, for ’thin_sum_abs’ and ’thin_max_abs’ the gradient image is thinned. For the filter types
’x’ and ’y’, the output image is of type int1 if the input image is of type byte, and of type int2 otherwise. For a Sobel
operator with size 3 × 3, the corresponding filters A and B are applied directly, while for larger filter sizes the input
image is first smoothed using a Gaussian filter (see GaussImage) or a binomial filter (see BinomialFilter)
of size size-2. The Gaussian filter is selected for the above values of filterType. Here, size = 5, 7, 9, 11, or
13 must be used. The binomial filter is selected by appending ’_binomial’ to the above values of filterType.
Here, size can be selected between 5 and 39. Furthermore, it is possible to select different amounts of smoothing
in the column and row directions by passing two values in size. Here, the first value of size corresponds
to the mask width (smoothing in the column direction), while the second value corresponds to the mask height
(smoothing in the row direction) of the binomial filter. The binomial filter can only be used for images of type
byte and uint2. Since smoothing reduces the edge amplitudes, in this case the edge amplitudes are multiplied by a
factor of 2 to prevent information loss. Therefore,
sobel_amp(I,E,FilterType,S)

is equivalent to

scale_image(I,F,2,0)
gauss_image(F,G,S-2)
sobel_amp(G,E,FilterType,3)

or to

scale_image(I,F,2,0)
binomial_filter(F,G,S[0]-2,S[1]-2)
sobel_amp(G,E,FilterType,3).
For SobelAmp, special optimizations are implemented for filterType = ’sum_abs’ that use SIMD technology.
The actual application of these special optimizations is controlled by the system parameter ’mmx_enable’ (see
SetSystem). If ’mmx_enable’ is set to ’true’ (and the SIMD instruction set is available), the internal calculations
are performed using SIMD technology. Note that SIMD technology performs best on large, compact input regions.
Depending on the input region and the capabilities of the hardware the execution of SobelAmp might even take
significantly more time with SIMD technology than without.
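The 3 × 3 ’sum_abs’ case can be sketched in plain Python/NumPy. This is illustrative only; HALCON's border treatment, rounding, and SIMD path are not reproduced:

```python
import numpy as np

# 3x3 Sobel masks: A responds to horizontal edges, B to vertical ones.
A = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]])
B = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]])

def sobel_amp_sum_abs(image):
    """'sum_abs' amplitude (|a| + |b|) / 4 for the 3x3 Sobel masks
    (border pixels are left at zero for simplicity)."""
    img = image.astype(float)
    out = np.zeros_like(img)
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            win = img[r - 1:r + 2, c - 1:c + 2]
            a = (A * win).sum()
            b = (B * win).sum()
            out[r, c] = (abs(a) + abs(b)) / 4.0
    return out
```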
Example (Syntax: HDevelop)
read_image(Image,’fabrik’)
sobel_amp(Image,Amp,’sum_abs’,3)
threshold(Amp,Edg,128,255).
Result
SobelAmp returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty, the behaviour can be set
via SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
SobelAmp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
BinomialFilter, GaussImage, MeanImage, AnisotropicDiffusion, SigmaImage
Possible Successors
Threshold, NonmaxSuppressionAmp, GraySkeleton
Alternatives
FreiAmp, Roberts, KirschAmp, PrewittAmp, RobinsonAmp
See also
Laplace, HighpassImage, BandpassImage
Module
Foundation
1 2 1
A= 0 0 0
−1 −2 −1
1 0 −1
B= 2 0 −2
1 0 −1
These masks are used differently, according to the selected filter type. (In the following, a and b denote the results
of convolving an image with A and B for one particular pixel.)

’sum_sqrt’   √(a² + b²) / 4
’sum_abs’    (|a| + |b|) / 4
For a Sobel operator with size 3 × 3, the corresponding filters A and B are applied directly, while for larger
filter sizes the input image is first smoothed using a Gaussian filter (see GaussImage) or a binomial filter (see
BinomialFilter) of size size-2. The Gaussian filter is selected for the above values of filterType. Here,
size = 5, 7, 9, 11, or 13 must be used. The binomial filter is selected by appending ’_binomial’ to the above values
of filterType. Here, size can be selected between 5 and 39. Furthermore, it is possible to select different
amounts of smoothing in the column and row directions by passing two values in size. Here, the first value of
size corresponds to the mask width (smoothing in the column direction), while the second value corresponds to
the mask height (smoothing in the row direction) of the binomial filter. The binomial filter can only be used for
images of type byte and uint2. Since smoothing reduces the edge amplitudes, in this case the edge amplitudes are
multiplied by a factor of 2 to prevent information loss. Therefore,
sobel_dir(I:Amp,Dir:FilterType,S:)

is equivalent to

scale_image(I,F,2,0)
gauss_image(F,G,S-2)
sobel_dir(G:Amp,Dir:FilterType,3:)

or to

scale_image(I,F,2,0)
binomial_filter(F,G,S[0]-2,S[1]-2)
sobel_dir(G:Amp,Dir:FilterType,3:).
The edge directions are returned in edgeDirection, and are stored in 2-degree steps, i.e., an edge direction of x
degrees with respect to the horizontal axis is stored as x/2 in the edge direction image. Furthermore, the direction
of the change of intensity is taken into account. Let [Ex , Ey ] denote the image gradient. Then the following edge
directions are returned as r/2:
edge direction                  [Ex, Ey]   r
from bottom to top                 0/−      0
from lower right to upper left     +/−      ]0, 90[
from right to left                 +/0      90
from upper right to lower left     +/+      ]90, 180[
from top to bottom                 0/+      180
from upper left to lower right     −/+      ]180, 270[
from left to right                 −/0      270
from lower left to upper right     −/−      ]270, 360[
Points with edge amplitude 0 are assigned the edge direction 255 (undefined direction).
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Input image.
. edgeAmplitude (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Edge amplitude (gradient magnitude) image.
. edgeDirection (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Edge direction image.
. filterType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Filter type.
Default Value : "sum_abs"
List of values : FilterType ∈ {"sum_abs", "sum_sqrt", "sum_abs_binomial", "sum_sqrt_binomial"}
. size (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; HTuple (int / long)
Size of filter mask.
Default Value : 3
List of values : Size ∈ {3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39}
Example (Syntax: HDevelop)
read_image(Image,’fabrik’)
sobel_dir(Image,Amp,Dir,’sum_abs’,3)
threshold(Amp,Edg,128,255).
Result
SobelDir returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behaviour can be set
via SetSystem(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
SobelDir is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
BinomialFilter, GaussImage, MeanImage, AnisotropicDiffusion, SigmaImage
Possible Successors
NonmaxSuppressionDir, HysteresisThreshold, Threshold
Alternatives
EdgesImage, FreiDir, KirschDir, PrewittDir, RobinsonDir
See also
Roberts, Laplace, HighpassImage, BandpassImage
Module
Foundation
3.5 Enhancement
static void HOperatorSet.AdjustMosaicImages ( HObject images,
out HObject correctedImages, HTuple from, HTuple to,
HTuple referenceImage, HTuple homMatrices2D, HTuple estimationMethod,
HTuple estimateParameters, HTuple OECFModel )
estimationMethod is used for choosing whether a fast but less accurate, or a slower but more accurate
determination method should be used. This is done by setting estimationMethod either to ’standard’ or
’gold_standard’. The availability of the individual methods depends on the selected estimateParameters,
which determines the model to be used for estimating the radiometric adjustment terms. It is always pos-
sible to determine the amount of vignetting in the images by selecting ’vignetting’. However, if selected,
estimationMethod must be set to ’gold_standard’. For the remainder of the radiometric adjustment three
different options are available:
1. Image adjustment with the additive model. This should only be used to adjust images with very small differences
in exposure or white balance. To choose this method, estimateParameters must be set to ’add_gray’. This
model can be selected either exclusively and only with estimationMethod = ’standard’ or in combination
with estimateParameters = ’vignetting’ and only with estimationMethod = ’gold_standard’.
2. Image adjustment with the linear model. In this model, images are expected to be taken with a camera using
a linear transfer function. The adjustment terms are consequently represented as multiplication factors. To select
this model, estimateParameters must be set to ’mult_gray’. It can be called with estimationMethod
= ’standard’ or estimationMethod = ’gold_standard’. A combined call with estimateParameters =
’vignetting’ is also possible, estimationMethod must be set to ’gold_standard’ in that case.
3. Image adjustment with the calibrated model. In this model, images are assumed to be taken with a camera using
a nonlinear transfer function. A function of the OECF class selected with OECFModel is used to approximate
the actually used OECF in the process of image acquisition. As with the linear model, the correction terms
are represented as multiplication factors. This model can be selected by choosing estimateParameters =
[’mult_gray’,’response’] and must be called with estimationMethod = ’gold_standard’. It is possible to
determine the amount of vignetting as well in this case by choosing estimateParameters = ’vignetting’.
This model is similar to the linear model. However, in this case the camera may have a nonlinear response. This
means that before the gray values of the images can be multiplied by their respective correction factor, the gray
values must be backprojected to a linear response. To do so, the camera’s response must be determined. Since the
response usually does not change over an image sequence, this parameter is assumed to be constant throughout the
whole image sequence.
In principle, any kind of function could be used as an OECF. As in the operator
RadiometricSelfCalibration, a polynomial might be used, but for typical images in a mosaicking
application this would not work very well, because polynomial fitting has too many parameters that need to be
determined. Instead, only simpler types of response functions can be estimated.
Currently, only so-called Laguerre-functions are available.
The response of a Laguerre-type OECF is determined by only one parameter called Phi. In a first step, the whole
gray value spectrum (in case of 8bit images the values 0 to 255) is converted to floating point numbers in the
interval [0:1]. Then, the OECF backprojection is calculated based on this and the resulting gray values are once
again converted to the original interval.
The inverse transform of the gray values back to linear values based on a Laguerre-type OECF is described by the
following equation:
I_l = I_nl + (2/π) · arctan( (Phi · sin(π · I_nl)) / (1 − Phi · cos(π · I_nl)) )
with I_l the linear gray value and I_nl the (nonlinear) gray value.
The parameter OECFModel is only used if the calibrated model has been chosen. Otherwise, any input for
OECFModel will be ignored.
The parameter estimateParameters can also be used to influence the performance and memory consumption
of the operator. With ’no_cache’ the internal caching mechanism can be disabled. This switch only has an
influence if estimationMethod is set to ’gold_standard’; otherwise it is ignored. When caching is disabled,
the operator uses far less memory, but has to recalculate the corresponding gray value pairs in each iteration of
the minimization algorithm. Therefore, disabling caching is only advisable if all physical memory is used up at
some point of the calculation and the operating system starts using swap space.
A second option to influence the performance is using subsampling. When setting estimateParameters to
’subsampling_2’, images are internally zoomed down by a factor of 2. Despite the suggested value list, not only
factors of 2 and 4 are available, but any integer number might be specified by appending it to subsampling_ in
estimateParameters. With this, the amount of image data is tremendously reduced, which leads to a much
faster computation of the internal minimization. In fact, using moderate subsampling might even lead to better
results since it also decreases the influence of slightly misaligned pixels. Although subsampling also influences
the minimization if estimationMethod is set to ’standard’, it is mostly useful for ’gold_standard’.
Some more general remarks on using AdjustMosaicImages in applications:
• Estimation of vignetting will only work well if significant vignetting is visible in the images. Otherwise, the
operator may lead to erratic results.
• Estimation of the response is rather slow because the problem is quite complex. Therefore, it is advisable not
to determine the response in time critical applications. Apart from this, the response can only be determined
correctly if there are relatively large brightness differences between the images.
• It is not possible to correct saturation. If there are saturated areas in an image, they will remain saturated.
• AdjustMosaicImages can only be used to correct different brightness in images that is caused by different
exposure (shutter time, aperture) or different light intensity. It cannot be used to correct brightness differences
caused by inhomogeneous illumination within each image.
Parameter
Result
If the parameters are valid, the operator AdjustMosaicImages returns the value 2 (H_MSG_TRUE). If
necessary, an exception handling is raised.
Parallelization Information
AdjustMosaicImages is reentrant and processed without parallelization.
Possible Predecessors
StationaryCameraSelfCalibration
Possible Successors
GenSphericalMosaic
References
David Hasler, Sabine Süsstrunk: Mapping colour in image stitching applications. Journal of Visual Communication
and Image Representation, 15(1):65-90, 2004.
Module
Foundation
u_t = div(G(u) · ∇u)
formulated by Weickert. With a 2 × 2 coefficient matrix G that depends on the gray values in image, this is an
enhancement of the mean curvature flow or intrinsic heat equation
u_t = div(∇u / |∇u|) · |∇u| = curv(u) · |∇u|
on the gray value function u defined by the input image image at a time t0 = 0. The smoothing operator
MeanCurvatureFlow is a direct application of the mean curvature flow equation. The discrete diffusion equa-
tion is solved in iterations time steps of length theta, so that the output image imageCED contains the
gray value function at the time iterations · theta.
To detect the edge direction more robustly, in particular on noisy input data, an additional isotropic smoothing
step can precede the computation of the gray value gradients. The parameter sigma determines the magnitude of
the smoothing by means of the standard deviation of a corresponding Gaussian convolution kernel, as used in the
operator IsotropicDiffusion for isotropic image smoothing.
While the matrix G is given by
G_MCF(u) = I − (1 / |∇u|²) · ∇u (∇u)ᵀ,
in the case of the operator MeanCurvatureFlow, where I denotes the unit matrix, GMCF is again smoothed
componentwise by a Gaussian filter of standard deviation rho for CoherenceEnhancingDiff. Then, the
final coefficient matrix
is constructed from the eigenvalues λ1 , λ2 and eigenvectors w1 , w2 of the resulting intermediate matrix, where the
functions
g1(p) = 0.001
g2(p) = 0.001 + 0.999 · exp(−1/p)
The operator Emphasize emphasizes high-frequency areas of the image (edges and corners). The resulting
image appears sharper.
First the procedure carries out a filtering with the low pass (MeanImage). The resulting gray values (res) are
calculated from the obtained gray values (mean) and the original gray values (orig) as follows:
res := round((orig − mean) · factor) + orig
factor serves as a measure of the increase in contrast. The division frequency is determined via the size of
the filter matrix: the larger the matrix, the lower the division frequency.
As an edge treatment the gray values are mirrored at the edges of the image. Overflow and/or underflow of gray
values is clipped.
Parameter
read_image(Image,’mreut’)
disp_image(Image,WindowHandle)
draw_region(Region,WindowHandle)
reduce_domain(Image,Region,Mask)
emphasize(Mask,Sharp,7,7,2.0)
disp_image(Sharp,WindowHandle).
Result
If the parameter values are correct the operator Emphasize returns the value 2 (H_MSG_TRUE)
The behavior in case of empty input (no input images available) is set via the operator SetSystem
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
Emphasize is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
DispImage
Alternatives
MeanImage, SubImage, Laplace, AddImage
See also
MeanImage, HighpassImage
Module
Foundation
HImage HImage.EquHistoImage ( )
Histogram linearisation of images
The operator EquHistoImage enhances the contrast. The starting point is the histogram of the input images.
The following simple gray value transformation f (g) is carried out for byte images:
f(g) = 255 · Σ_{x=0...g} h(x)
h(x) describes the relative frequency of the occurrence of the gray value x. For uint2 images, the only difference
is that the value 255 is replaced with a different maximum value. The maximum value is computed from the
number of significant bits stored with the input image, provided that this value is set. If not, the value of the system
parameter ’int2_bits’ is used (see SetSystem), if this value is set (i.e., different from -1). If none of the two
values is set, the number of significant bits is set to 16.
This transformation linearises the cumulative histogram. Maxima in the original histogram are ’spread’ and
thus the contrast in image regions with these frequently occurring gray values is increased. Seemingly homogeneous
regions receive more easily visible structures. On the other hand, of course, the noise in the image increases
correspondingly. Minima in the original histogram are dually ’compressed’. The transformed histogram contains
gaps, but the remaining gray values occur at approximately the same frequency (’histogram equalization’).
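For byte images the transformation can be sketched in C. This is an illustrative implementation of the formula f(g) = 255 · Σ h(x) above, not the HALCON code:

```c
#include <stddef.h>

/* Histogram equalization for a byte image: builds the cumulative
 * histogram and maps each gray value g to 255 * H(g), where H is the
 * cumulative relative frequency. Illustrative sketch only. */
static void equ_histo_byte(unsigned char *img, size_t n)
{
    size_t hist[256] = {0};
    unsigned char lut[256];
    size_t i, cum = 0;

    if (n == 0)
        return;
    for (i = 0; i < n; ++i)
        hist[img[i]]++;                        /* absolute histogram   */
    for (i = 0; i < 256; ++i) {
        cum += hist[i];                        /* cumulative histogram */
        lut[i] = (unsigned char)(255 * cum / n);
    }
    for (i = 0; i < n; ++i)
        img[i] = lut[img[i]];                  /* apply transformation */
}
```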
Attention
The operator EquHistoImage primarily serves for optical processing of images for a human viewer. For
example, the (local) contrast spreading can lead to a detection of fictitious edges.
Parameter
Illuminate image.
The operator Illuminate enhances contrast. Very dark parts of the image are ’illuminated’ more strongly,
very light ones are ’darkened’. Let orig be the original gray value and mean the corresponding gray value of
the low-pass-filtered image obtained via the operator MeanImage with filter size maskHeight × maskWidth.
For byte images val equals 127; for int2 and uint2 images val equals the median value. The resulting gray
value new is:
new := round((val − mean) · factor) + orig
The low pass should have rather large dimensions (30 × 30 to 200 × 200). In general, the larger the low-pass
mask is chosen, the larger factor should be as well.
The following ’spotlight effect’ should be noted: if, for example, a dark object is in front of a light wall, both the
object and the wall, which is already light in the immediate proximity of the object contours, are lightened by the
operator Illuminate. This corresponds roughly to the effect produced when the object is illuminated by a
strong spotlight. The same applies to light objects in front of a darker background. In this case, however, the
fictitious ’spotlight’ darkens the objects.
Parameter
read_image(Image,’fabrik’)
disp_image(Image,WindowHandle)
illuminate(Image,Better,40,40,0.55)
disp_image(Better,WindowHandle).
Result
If the parameter values are correct the operator Illuminate returns the value 2 (H_MSG_TRUE)
The behavior in case of empty input (no input images available) is set via the operator SetSystem
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
Illuminate is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
DispImage
Alternatives
ScaleImageMax, EquHistoImage, MeanImage, SubImage
See also
Emphasize, GrayHisto
Module
Foundation
u_t = div(∇u / |∇u|) · |∇u| = curv(u) · |∇u|
to the gray value function u defined by the input image image at a time t0 = 0. The discretized equation is solved
in iterations time steps of length theta, so that the output image contains the gray value function at the time
iterations · theta.
The mean curvature flow causes a smoothing of image in the direction of the edges in the image, i.e. along the
contour lines of u, while perpendicular to the edge direction no smoothing is performed and hence the boundaries
of image objects are not smoothed. To detect the edge direction more robustly, in particular on noisy input data,
an additional isotropic smoothing step can precede the computation of the gray value gradients. The parameter
sigma determines the magnitude of the smoothing by means of the standard deviation of a corresponding Gaussian
convolution kernel, as used in the operator IsotropicDiffusion for isotropic image smoothing.
Parameter
HImage HImage.ScaleImageMax ( )
Maximum gray value spreading in the value range 0 to 255.
The operator ScaleImageMax calculates the minimum and maximum and scales the image to the maximum
value range of a byte image. This way the dynamics (value range) is fully exploited. The number of different gray
scales does not change, but in general the visual impression is enhanced. The gray values of images of the real,
int2, uint2 and int4 type are scaled to the range 0 to 255 and returned as byte images.
Attention
The output always is an image of the type byte.
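The scaling can be sketched in C. The linear spread g' = 255 · (g − min) / (max − min) is assumed from the description; the function is illustrative, not the HALCON code:

```c
#include <stddef.h>

/* Spread the gray values of an integer image to the full byte range
 * [0, 255], as described above. Illustrative sketch only. */
static void scale_image_max(const int *in, unsigned char *out, size_t n)
{
    size_t i;
    int min, max;

    if (n == 0)
        return;
    min = max = in[0];
    for (i = 1; i < n; ++i) {              /* find value range */
        if (in[i] < min) min = in[i];
        if (in[i] > max) max = in[i];
    }
    for (i = 0; i < n; ++i)                /* scale to [0, 255] */
        out[i] = (max == min)
            ? 0
            : (unsigned char)(255L * (in[i] - min) / (max - min));
}
```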
Parameter
u_t = s · |∇u|
on the function u defined by the gray values in image at a time t0 = 0. The discretized equation is solved in
iterations time steps of length theta, so that the output image sharpenedImage contains the gray value
function at the time iterations · theta.
The decision between dilation and erosion is made using the sign function s ∈ {−1, 0, +1} applied to a conventional
edge detector. The detector of Canny,
s = −sgn( D²u(∇u/|∇u|, ∇u/|∇u|) ),
is available with mode = ’canny’, and the detector of Marr/Hildreth (the Laplace operator),
s = −sgn(∆u),
is available with mode = ’laplace’.
Parallelization Information
ShockFilter is reentrant and automatically parallelized (on tuple level).
References
F. Guichard, J. Morel; “A Note on Two Classical Shock Filters and Their Asymptotics”; Michael Kerckhove (Ed.):
Scale-Space and Morphology in Computer Vision, LNCS 2106, pp. 75-84; Springer, New York; 2001.
G. Aubert, P. Kornprobst; “Mathematical Problems in Image Processing”; Applied Mathematical Sciences 147;
Springer, New York; 2002.
Module
Foundation
3.6 FFT
static void HOperatorSet.ConvolFft ( HObject imageFFT,
HObject imageFilter, out HObject imageConvol )
gen_highpass(Highpass,0.2,’n’,’dc_edge’,Width,Height)
fft_generic(Image,ImageFFT,’to_freq’,-1,’none’,’dc_edge’,’complex’)
convol_fft(ImageFFT,Highpass,ImageConvol)
fft_generic(ImageConvol,ImageResult,’from_freq’,1,’none’,’dc_edge’,’byte’)
Result
ConvolFft returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior can be set
via SetSystem(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
ConvolFft is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
FftImage, FftGeneric, RftGeneric, GenHighpass, GenLowpass, GenBandpass,
GenBandfilter
Possible Successors
PowerByte, PowerReal, PowerLn, FftImageInv, FftGeneric, RftGeneric
Alternatives
ConvolGabor
See also
GenGabor, GenHighpass, GenLowpass, GenBandpass, ConvolGabor, FftImageInv
Module
Foundation
gen_gabor(Filter,1.4,0.4,1.0,1.5,’n’,’dc_edge’,512,512)
fft_generic(Image,ImageFFT,’to_freq’,-1,’none’,’dc_edge’,’complex’)
convol_gabor(ImageFFT,Filter,Gabor,Hilbert,’dc_edge’)
fft_generic(Gabor,GaborInv,’from_freq’,1,’none’,’dc_edge’,’byte’)
fft_generic(Hilbert,HilbertInv,’from_freq’,1,’none’,’dc_edge’,’byte’)
energy_gabor(GaborInv,HilbertInv,Energy)
Result
ConvolGabor returns 2 (H_MSG_TRUE) if all images are of correct type. If the input is empty the behavior can
be set via SetSystem(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
ConvolGabor is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
FftImage, FftGeneric, GenGabor
Possible Successors
PowerByte, PowerReal, PowerLn, FftImageInv, FftGeneric
Alternatives
ConvolFft
See also
ConvolImage
Module
Foundation
CorrelationFft calculates the correlation of the Fourier-transformed input images in the frequency do-
main. The correlation is calculated by multiplying imageFFT1 with the complex conjugate of imageFFT2.
It should be noted that in order to achieve a correct scaling of the correlation in the spatial domain, the operators
FftGeneric or RftGeneric with Norm = ’none’ must be used for the forward transform and FftGeneric
or RftGeneric with Norm = ’n’ for the reverse transform. If imageFFT1 and imageFFT2 contain the same
number of images, the corresponding images are correlated pairwise. Otherwise, imageFFT2 must contain only
one single image. In this case, the correlation is performed for each image of imageFFT1 with imageFFT2 .
Attention
The filtering is always performed on the entire image, i.e., the domain of the image is ignored.
Parameter
. imageFFT1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Fourier-transformed input image 1.
. imageFFT2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Fourier-transformed input image 2.
Number of elements : (ImageFFT2 = ImageFFT1) ∨ (ImageFFT2 = 1)
. imageCorrelation (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Correlation of the input images in the frequency domain.
Example (Syntax: HDevelop)
Result
CorrelationFft returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior can
be set via SetSystem(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
CorrelationFft is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
FftGeneric, FftImage, RftGeneric
Possible Successors
FftGeneric, FftImageInv, RftGeneric
Module
Foundation
Often the calculation of the energy is preceded by the convolution of an image with a Gabor filter and the
Hilbert transform of the Gabor filter (see ConvolGabor). In this case, the first channel of the image passed
to EnergyGabor is the Gabor-filtered image, transformed back into the spatial domain (see FftImageInv),
and the second channel the result of the convolution with the Hilbert transform, also transformed back into the
spatial domain. The local energy is a measure for the local contrast of structures (e.g., edges and lines) in the
image.
Parameter
fft_image(Image,&FFT);
gen_gabor(&Filter,1.4,0.4,1.0,1.5,512);
convol_gabor(FFT,Filter,&Gabor,&Hilbert);
fft_image_inv(Gabor,&GaborInv);
fft_image_inv(Hilbert,&HilbertInv);
energy_gabor(GaborInv,HilbertInv,&Energy);
Result
EnergyGabor returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior can be
set via SetSystem(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
EnergyGabor is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
GenGabor, ConvolGabor, FftImageInv
Module
Foundation
F(m, n) = (1/c) · Σ_{k=0}^{M−1} Σ_{l=0}^{N−1} e^{s·2πi(km/M + ln/N)} · f(k, l)
Opinions vary on whether the sign s in the exponent should be set to 1 or -1 for the forward transform, i.e., the
transform for going to the frequency domain. There is also disagreement on the magnitude of the normalizing
factor c. This is sometimes set to 1 for the forward transform, sometimes to MN, and sometimes (in the case of
the unitary FFT) to √(MN). Especially in image processing applications the DC term is shifted to the center of
the image.
FftGeneric allows these choices to be selected individually. The parameter direction selects the logical
direction of the FFT. (This parameter is not superfluous; it is needed to determine how to shift the image if the
DC term is to rest in the center of the image.) Possible values are ’to_freq’ and ’from_freq’. The parameter
exponent is used to determine the sign of the exponent. It can be set to 1 or -1. The normalizing factor can be
set with norm, and can take on the values ’none’, ’sqrt’ and ’n’. The parameter mode determines the location of
the DC term of the FFT. It can be set to ’dc_center’ or ’dc_edge’.
In any case, the user must ensure the consistent use of the parameters. This means that the normalizing factors
used for the forward and backward transform must yield M N when multiplied, the exponents must be of opposite
sign, and mode must be equal for both transforms.
A consistent combination is, for example (’to_freq’,-1,’n’,’dc_edge’) for the forward transform and
(’from_freq’,1,’none’,’dc_edge’) for the reverse transform. In this case, the FFT can be interpreted as interpo-
lation with trigonometric basis functions. Another possible combination is (’to_freq’,-1,’sqrt’,’dc_center’) and
(’from_freq’,1,’sqrt’,’dc_center’).
The parameter resultType can be used to specify the result image type of the reverse transform (direction
= ’from_freq’). In the forward transform (direction = ’to_freq’), resultType must be set to ’complex’.
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Input image.
. imageFFT (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Fourier-transformed image.
. direction (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Calculate forward or reverse transform.
Default Value : "to_freq"
List of values : Direction ∈ {"to_freq", "from_freq"}
. exponent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Sign of the exponent.
Default Value : -1
List of values : Exponent ∈ {-1, 1}
. norm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Normalizing factor of the transform.
Default Value : "sqrt"
List of values : Norm ∈ {"none", "sqrt", "n"}
. mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Location of the DC term in the frequency domain.
Default Value : "dc_center"
List of values : Mode ∈ {"dc_center", "dc_edge"}
. resultType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Image type of the output image.
Default Value : "complex"
List of values : ResultType ∈ {"complex", "byte", "int1", "int2", "uint2", "int4", "real", "direction",
"cyclic"}
Example (Syntax: C)
/* simulation of fft */
my_fft(Hobject In, Hobject *Out)
{
fft_generic(In,Out,"to_freq",-1,"sqrt","dc_center","complex");
}
/* simulation of fft_image_inv */
my_fft_image_inv(Hobject In, Hobject *Out)
{
fft_generic(In,Out,"from_freq",1,"sqrt","dc_center","byte");
}
Result
FftGeneric returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior can be
set via SetSystem(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
FftGeneric is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
OptimizeFftSpeed, ReadFftOptimizationData
Possible Successors
ConvolFft, ConvolGabor, ConvertImageType, PowerByte, PowerReal, PowerLn,
PhaseDeg, PhaseRad, EnergyGabor
Alternatives
FftImage, FftImageInv, RftGeneric
Module
Foundation
HImage HImage.FftImage ( )
Compute the fast Fourier transform of an image.
FftImage calculates the Fourier transform of the input image (image), i.e., it transforms the image into the
frequency domain. The algorithm used is the fast Fourier transform. This corresponds to the call
FftGeneric(Image,ImageFFT,’to_freq’,-1,’sqrt’,’dc_center’,’complex’).
Attention
The filtering is always done on the entire image, i.e., the region of the image is ignored.
Parameter
HImage HImage.FftImageInv ( )
Compute the inverse fast Fourier transform of an image.
FftImageInv calculates the inverse Fourier transform of the input image (image), i.e., it transforms the image
back into the spatial domain. This corresponds to the call
FftGeneric(Image,ImageFFT,’from_freq’,1,’sqrt’,’dc_center’,’byte’).
Attention
The filtering is always done on the entire image, i.e., the region of the image is ignored.
Parameter
Result
GenBandfilter returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception handling is
raised.
Parallelization Information
GenBandfilter is reentrant and processed without parallelization.
Possible Successors
ConvolFft
Alternatives
GenCircle, PaintRegion
See also
GenHighpass, GenLowpass, GenBandpass, GenGaussFilter, GenDerivativeFilter
Module
Foundation
parameter norm can be used to specify the normalization factor of the filter. If FftGeneric is used with norm =
’n’, the normalization in the FFT can be avoided. mode can be used to determine where the DC term of the
filter lies or whether the filter is to be used in the real-valued FFT. If FftGeneric is used, ’dc_edge’ can be
used to gain efficiency. If FftImage and FftImageInv are used for filtering, norm = ’none’ and mode =
’dc_center’ must be used. If RftGeneric is used, mode = ’rft’ must be used. The resulting image contains an
annulus with the value 255, and the value 0 outside of this annulus.
Parameter
Result
GenBandpass returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception handling is
raised.
Parallelization Information
GenBandpass is reentrant and processed without parallelization.
Possible Successors
ConvolFft
See also
GenHighpass, GenLowpass, GenBandfilter, GenGaussFilter, GenDerivativeFilter
Module
Foundation
Result
GenDerivativeFilter returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception
handling is raised.
Parallelization Information
GenDerivativeFilter is reentrant and processed without parallelization.
Possible Predecessors
FftImage, FftGeneric, RftGeneric
Possible Successors
ConvolFft
See also
FftImageInv, GenGaussFilter, GenLowpass, GenBandpass, GenBandfilter,
GenHighpass
Module
Foundation
Parallelization Information
GenFilterMask is reentrant and processed without parallelization.
Possible Successors
FftImage, FftGeneric
See also
ConvolImage
Module
Foundation
Parameter
. imageFilter (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . multichannel-image ; HImage
Gabor and Hilbert filter.
. angle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Angle range, inversely proportional to the range of orientations.
Default Value : 1.4
Suggested values : Angle ∈ {1.0, 1.2, 1.4, 1.6, 2.0, 2.5, 3.0, 5.0, 6.0, 10.0, 20.0, 30.0, 50.0, 70.0, 100.0}
Typical range of values : 1.0 ≤ Angle ≤ 500.0
Minimum Increment : 0.001
Recommended Increment : 0.1
. frequency (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Distance of the center of the filter to the DC term.
Default Value : 0.4
Suggested values : Frequency ∈ {0.0, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.50, 0.55, 0.60, 0.65,
0.699}
Typical range of values : 0.0 ≤ Frequency ≤ 0.7
Minimum Increment : 0.00001
Recommended Increment : 0.005
. bandwidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Bandwidth range, inversely proportional to the range of frequencies being passed.
Default Value : 1.0
Suggested values : Bandwidth ∈ {0.1, 0.3, 0.7, 1.0, 1.5, 2.0, 3.0, 5.0, 7.0, 10.0, 15.0, 20.0, 30.0, 50.0}
Typical range of values : 0.05 ≤ Bandwidth ≤ 100.0
Minimum Increment : 0.001
Recommended Increment : 0.1
. orientation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Angle of the principal orientation.
Default Value : 1.5
Suggested values : Orientation ∈ {0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4, 2.6, 2.8, 3.0,
3.14}
Typical range of values : 0.0 ≤ Orientation ≤ 3.1416
Minimum Increment : 0.0001
Recommended Increment : 0.05
. norm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Normalizing factor of the filter.
Default Value : "none"
List of values : Norm ∈ {"none", "n"}
. mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Location of the DC term in the frequency domain.
Default Value : "dc_center"
List of values : Mode ∈ {"dc_center", "dc_edge"}
. width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Width of the image (filter).
Default Value : 512
List of values : Width ∈ {128, 160, 192, 256, 320, 384, 512, 640, 768, 1024, 2048, 4096, 8192}
. height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Height of the image (filter).
Default Value : 512
List of values : Height ∈ {120, 128, 144, 240, 256, 288, 480, 512, 576, 1024, 2048, 4096, 8192}
Example (Syntax: HDevelop)
gen_gabor(Filter,1.4,0.4,1.0,1.5,’n’,’dc_edge’,512,512)
fft_generic(Image,ImageFFT,’to_freq’,-1,’none’,’dc_edge’,’complex’)
convol_gabor(ImageFFT,Filter,Gabor,Hilbert,’dc_edge’)
fft_generic(Gabor,GaborInv,’from_freq’,1,’none’,’dc_edge’,’byte’)
fft_generic(Hilbert,HilbertInv,’from_freq’,1,’none’,’dc_edge’,’byte’)
energy_gabor(GaborInv,HilbertInv,Energy)
HALCON 8.0.2
176 CHAPTER 3. FILTER
Result
GenGabor returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception handling is raised.
Parallelization Information
GenGabor is reentrant and processed without parallelization.
Possible Predecessors
FftImage, FftGeneric
Possible Successors
ConvolGabor
Alternatives
GenBandpass, GenBandfilter, GenHighpass, GenLowpass
See also
FftImageInv, EnergyGabor
Module
Foundation
Result
GenGaussFilter returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception handling
is raised.
Parallelization Information
GenGaussFilter is reentrant and processed without parallelization.
Possible Predecessors
FftImage, FftGeneric, RftGeneric
Possible Successors
ConvolFft
See also
FftImageInv, GenGaussFilter, GenLowpass, GenBandpass, GenBandfilter,
GenHighpass
Module
Foundation
be used. The resulting image has an inner part with the value 0, and an outer part with the value determined by the
normalization factor.
Parameter
. imageHighpass (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Highpass filter in the frequency domain.
. frequency (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Cutoff frequency.
Default Value : 0.1
Suggested values : Frequency ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : Frequency ≥ 0
. norm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Normalizing factor of the filter.
Default Value : "none"
List of values : Norm ∈ {"none", "n"}
. mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Location of the DC term in the frequency domain.
Default Value : "dc_center"
List of values : Mode ∈ {"dc_center", "dc_edge", "rft"}
. width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Width of the image (filter).
Default Value : 512
List of values : Width ∈ {128, 160, 192, 256, 320, 384, 512, 640, 768, 1024, 2048, 4096, 8192}
. height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Height of the image (filter).
Default Value : 512
List of values : Height ∈ {120, 128, 144, 240, 256, 288, 480, 512, 576, 1024, 2048, 4096, 8192}
Example (Syntax: HDevelop)
Result
GenHighpass returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception handling is
raised.
Parallelization Information
GenHighpass is reentrant and processed without parallelization.
Possible Successors
ConvolFft
See also
ConvolFft, GenLowpass, GenBandpass, GenBandfilter, GenGaussFilter,
GenDerivativeFilter
Module
Foundation
GenLowpass generates an ideal lowpass filter in the frequency domain. The parameter frequency determines
the cutoff frequency of the filter as a fraction of the maximum (horizontal and vertical) frequency that can be
represented in an image of size width × height, i.e., frequency should lie between 0 and 1. To achieve a
maximum efficiency of the filtering operation, the parameter norm can be used to specify the normalization factor
of the filter. If FftGeneric is used with norm = ’n’, the normalization in the FFT can be avoided. mode can be
used to determine where the DC term of the filter lies or whether the filter should be used in the real-valued FFT.
If FftGeneric is used, ’dc_edge’ can be used to gain efficiency. If FftImage and FftImageInv are used
for filtering, norm = ’none’ and mode = ’dc_center’ must be used. If RftGeneric is used, mode = ’rft’ must
be used. The resulting image has an inner part with the value set to the normalization factor, and an outer part with
the value 0.
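As an illustration of the description above (outside HALCON, using NumPy rather than the operator itself), the following sketch builds an ideal lowpass mask: the cutoff frequency is a fraction of the maximum representable frequency, the inner part carries the normalization factor, the outer part is 0, and ’dc_edge’ moves the DC term to the corner. The function name and layout are assumptions for this example only.

```python
import numpy as np

def gen_lowpass(frequency, width, height, norm="none", mode="dc_center"):
    """Sketch of an ideal lowpass filter mask (not the HALCON implementation)."""
    # Frequency coordinates normalized to [-1, 1) with the DC term in the center.
    fy = np.linspace(-1.0, 1.0, height, endpoint=False)
    fx = np.linspace(-1.0, 1.0, width, endpoint=False)
    dist = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    # Inner part gets the normalization factor, outer part 0.
    factor = 1.0 / (width * height) if norm == "n" else 1.0
    filt = np.where(dist <= frequency, factor, 0.0)
    if mode == "dc_edge":
        filt = np.fft.ifftshift(filt)  # move the DC term to the corner
    return filt

mask = gen_lowpass(0.1, 512, 512)
print(mask.shape, mask[256, 256])  # the center pixel (DC term) passes
```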
Parameter
Result
GenLowpass returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception handling is
raised.
Parallelization Information
GenLowpass is reentrant and processed without parallelization.
Possible Successors
ConvolFft
See also
GenHighpass, GenBandpass, GenBandfilter, GenGaussFilter, GenDerivativeFilter
Module
Foundation
Parallelization Information
OptimizeRftSpeed is reentrant and processed without parallelization.
Possible Successors
RftGeneric, WriteFftOptimizationData
Alternatives
ReadFftOptimizationData
See also
OptimizeFftSpeed
Module
Foundation
HImage HImage.PhaseDeg ( )
Return the phase of a complex image in degrees.
PhaseDeg computes the phase of a complex image in degrees. The following formula is used:
phase = (90 / π) · atan2(imaginary part, real part) .
Hence, imagePhase contains half the phase angle. For negative phase angles, 180 is added.
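As a concept illustration (plain NumPy, not the HALCON operator), the following sketch applies the same rule: half the phase angle in degrees, with 180 added for negative results, so the output lies in [0, 180).

```python
import numpy as np

# Half the phase angle in degrees; 180 is added for negative angles
# (illustration of the formula above, not the HALCON implementation).
def phase_deg(z):
    half = np.degrees(np.arctan2(z.imag, z.real)) / 2.0
    return np.where(half < 0, half + 180.0, half)

z = np.array([1 + 1j, -1 + 0j, 0 - 1j])  # phases 45, 180, -90 degrees
print(phase_deg(z))                       # half-angles: 22.5, 90.0, 135.0
```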
Parameter
Example (Syntax: C)
read_image(&Image,"affe");
disp_image(Image,WindowHandle);
fft_image(Image,&FFT);
phase_deg(FFT,&Phase);
disp_image(Phase,WindowHandle);
Result
PhaseDeg returns 2 (H_MSG_TRUE) if the image is of correct type. If the input is empty the behavior can be
set via SetSystem(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
PhaseDeg is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
FftImage, FftGeneric, RftGeneric
Possible Successors
DispImage
Alternatives
PhaseRad
See also
FftImageInv
Module
Foundation
HImage HImage.PhaseRad ( )
Return the phase of a complex image in radians.
PhaseRad computes the phase of a complex image in radians. The following formula is used:
phase = atan2(imaginary part, real part) .
Parameter
. imageComplex (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Input image in frequency domain.
. imagePhase (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Phase of the image in radians.
Example (Syntax: C)
read_image(&Image,"affe");
disp_image(Image,WindowHandle);
fft_image(Image,&FFT);
phase_rad(FFT,&Phase);
disp_image(Phase,WindowHandle);
Result
PhaseRad returns 2 (H_MSG_TRUE) if the image is of correct type. If the input is empty the behavior can be
set via SetSystem(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
PhaseRad is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
FftImage, FftGeneric, RftGeneric
Possible Successors
DispImage
Alternatives
PhaseDeg
See also
FftImageInv, FftGeneric, RftGeneric
Module
Foundation
HImage HImage.PowerByte ( )
Return the power spectrum of a complex image.
PowerByte computes the power spectrum from the real and imaginary parts of a Fourier-transformed image (see
FftImage), i.e., the modulus of the frequencies. The result image is of type ’byte’. The following formula is
used:
√(realpart² + imaginarypart²) .
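For illustration outside HALCON, the same computation can be sketched with NumPy: the modulus of each frequency, clipped and cast to byte for a PowerByte-style result (the clipping to 255 is an assumption of this sketch).

```python
import numpy as np

# Power spectrum of a Fourier-transformed image: modulus of each
# frequency; clipped/cast to byte for a PowerByte-style output (sketch).
img = np.zeros((8, 8))
img[2:6, 2:6] = 100.0
fft = np.fft.fft2(img)
power = np.sqrt(fft.real ** 2 + fft.imag ** 2)   # same as np.abs(fft)
power_byte = np.clip(power, 0, 255).astype(np.uint8)
print(power[0, 0])        # DC term = sum of all gray values = 1600.0
print(power_byte[0, 0])   # clipped to 255
```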
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Input image in frequency domain.
. powerByte (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Power spectrum of the input image.
Example (Syntax: C)
read_image(&Image,"affe");
disp_image(Image,WindowHandle);
fft_image(Image,&FFT);
power_byte(FFT,&Power);
disp_image(Power,WindowHandle);
Result
PowerByte returns 2 (H_MSG_TRUE) if the image is of correct type. If the input is empty the behavior can be
set via SetSystem(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
PowerByte is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
FftImage, FftGeneric, RftGeneric, ConvolFft, ConvolGabor
Possible Successors
DispImage
Alternatives
AbsImage, ConvertImageType, PowerReal, PowerLn
See also
FftImage, FftGeneric, RftGeneric
Module
Foundation
HImage HImage.PowerLn ( )
Return the power spectrum of a complex image.
PowerLn computes the power spectrum from the real and imaginary parts of a Fourier-transformed image (see
FftImage), i.e., the modulus of the frequencies. Additionally, the natural logarithm is applied to the result. The
result image is of type ’real’. The following formula is used:
ln √(realpart² + imaginarypart²) .
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Input image in frequency domain.
. imageResult (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Power spectrum of the input image.
Example (Syntax: C)
read_image(&Image,"monkey");
disp_image(Image,WindowHandle);
fft_image(Image,&FFT);
power_ln(FFT,&Power);
disp_image(Power,WindowHandle);
Result
PowerLn returns 2 (H_MSG_TRUE) if the image is of correct type. If the input is empty the behavior can be set
via SetSystem(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
PowerLn is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
FftImage, FftGeneric, RftGeneric, ConvolFft, ConvolGabor
Possible Successors
DispImage, ConvertImageType, ScaleImage
Alternatives
AbsImage, ConvertImageType, PowerReal, PowerByte
See also
FftImage, FftGeneric, RftGeneric
Module
Foundation
HImage HImage.PowerReal ( )
Return the power spectrum of a complex image.
PowerReal computes the power spectrum from the real and imaginary parts of a Fourier-transformed image (see
FftImage), i.e., the modulus of the frequencies. The result image is of type ’real’. The following formula is
used:
√(realpart² + imaginarypart²) .
Parameter
Example (Syntax: C)
read_image(&Image,"affe");
disp_image(Image,WindowHandle);
fft_image(Image,&FFT);
power_real(FFT,&Power);
disp_image(Power,WindowHandle);
Result
PowerReal returns 2 (H_MSG_TRUE) if the image is of correct type. If the input is empty the behavior can be
set via SetSystem(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
PowerReal is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
FftImage, FftGeneric, RftGeneric, ConvolFft, ConvolGabor
Possible Successors
DispImage, ConvertImageType, ScaleImage
Alternatives
AbsImage, ConvertImageType, PowerByte, PowerLn
See also
FftImage, FftGeneric, RftGeneric
Module
Foundation
The parameter resultType can be used to specify the result image type of the reverse transform (direction
= ’from_freq’). In the forward transform (direction = ’to_freq’), resultType must be set to ’complex’.
The parameter direction determines whether the transform should be performed to the frequency domain or back
into the spatial domain. For direction = ’to_freq’ the input image must have a real-valued type, i.e., a complex
image may not be used as input. All image types that can be converted into an image of type real are supported. In
this case, the output is a complex image of dimension (w/2 + 1) × h, where w and h are the width and height of
the input image. In this mode, the exponent -1 is used in the transform (see FftGeneric). For direction =
’from_freq’, the input image must be complex. In this case, the size of the input image is insufficient to determine
the size of the output image. This must be done by setting width to a valid value, i.e., to 2w − 2 or 2w − 1, where
w is the width of the complex image. In this mode, the exponent 1 is used in the transform.
The normalizing factor can be set with norm, and can take on the values ’none’, ’sqrt’ and ’n’. The user must
ensure the consistent use of the parameters. This means that the normalizing factors used for the forward and
backward transform must yield wh when multiplied.
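The size relationship described above can be illustrated with NumPy's real-valued FFT, which uses the same storage scheme (this is a sketch of the concept, not a call into HALCON):

```python
import numpy as np

# A real-valued FFT stores only half the conjugate-symmetric spectrum:
# for a w x h real input, the complex output has w//2 + 1 columns.
h, w = 480, 640
img = np.random.default_rng(0).normal(size=(h, w))
spec = np.fft.rfft2(img)
print(spec.shape)              # (480, 321) == (h, w//2 + 1)
# The backward transform needs the original width, since w cannot be
# recovered from w//2 + 1 alone (both 2w-2 and 2w-1 map to it):
back = np.fft.irfft2(spec, s=(h, w))
print(np.allclose(back, img))  # True
```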
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Input image.
. imageFFT (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Fourier-transformed image.
. direction (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Calculate forward or reverse transform.
Default Value : "to_freq"
List of values : Direction ∈ {"to_freq", "from_freq"}
. norm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Normalizing factor of the transform.
Default Value : "sqrt"
List of values : Norm ∈ {"none", "sqrt", "n"}
. resultType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Image type of the output image.
Default Value : "complex"
List of values : ResultType ∈ {"complex", "byte", "int1", "int2", "uint2", "int4", "real", "direction",
"cyclic"}
. width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Width of the image for which the runtime should be optimized.
Default Value : 512
Suggested values : Width ∈ {128, 160, 192, 256, 320, 384, 512, 640, 768, 1024, 2048}
Result
RftGeneric returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior can be
set via SetSystem(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
RftGeneric is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
OptimizeRftSpeed, ReadFftOptimizationData
Possible Successors
ConvolFft, ConvertImageType, PowerByte, PowerReal, PowerLn, PhaseDeg, PhaseRad
Alternatives
FftGeneric, FftImage, FftImageInv
Module
Foundation
WriteFftOptimizationData stores the data for the optimization of the runtime of the FFT that were
determined with OptimizeFftSpeed in the file given by fileName. The data can be loaded with
ReadFftOptimizationData.
Parameter
3.7 Geometric-Transformations
static void HOperatorSet.AffineTransImage ( HObject image,
out HObject imageAffinTrans, HTuple homMat2D, HTuple interpolation,
HTuple adaptImageSize )
weighted Bilinear interpolation. The gray value is determined from the four nearest pixels through bilinear
interpolation. If the affine transformation contains a scaling with a scale factor < 1, a kind of Gaussian
filter is used to prevent aliasing effects (best quality, slow).
In addition, the system parameter ’int_zooming’ (see SetSystem) affects the accuracy of the transformation. If
’int_zooming’ is set to ’true’, the transformation for byte, int2 and uint2 images is carried out internally using fixed
point arithmetic, leading to much shorter execution times. However, the accuracy of the transformed gray values
is smaller in this case. For byte images, the differences to the more accurate calculation (using ’int_zooming’ =
’false’) is typically less than two gray levels. Correspondingly, for int2 and uint2 images, the gray value differences
are less than 1/128 times the dynamic gray value range of the image, i.e., they can be as large as 512 gray levels if
the entire dynamic range of 16 bit is used. Additionally, if a large scale factor is applied and a large output image
is obtained, then undefined gray values at the lower and at the right image border may result. The maximum width
Bmax of this border of undefined gray values can be estimated as Bmax = 0.5 · S · I / 2^15, where S is the scale
factor in one dimension and I is the size of the output image in the corresponding dimension. For real images, the
parameter ’int_zooming’ does not affect the accuracy, since the internal calculations are always done using floating
point arithmetic.
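The border estimate above is simple arithmetic; the following sketch evaluates it for two illustrative parameter sets (the concrete scale factors and image sizes here are examples, not values from the manual):

```python
# Worst-case width of the undefined border when fixed-point arithmetic
# ('int_zooming' = 'true') meets a large scale factor and output image:
# Bmax = 0.5 * S * I / 2**15.
def max_border(scale, out_size):
    return 0.5 * scale * out_size / 2 ** 15

print(max_border(4.0, 8192))    # 0.5 pixel: negligible
print(max_border(32.0, 32768))  # 16.0 pixels: clearly visible
```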
The size of the target image can be controlled by the parameter adaptImageSize: With value ’true’ the size
will be adapted so that no clipping occurs at the right or lower edge. With value ’false’ the target image has the
same size as the input image. Note that, independent of adaptImageSize, the image is always clipped at the
left and upper edge, i.e., all image parts that have negative coordinates after the transformation are clipped.
Attention
The region of the input image is ignored.
The used coordinate system is the same as in AffineTransPixel. This means that in fact not homMat2D
is applied but a modified version. Therefore, applying AffineTransImage corresponds to the following
chain of transformations, which is applied to each point (Row_i, Col_i) of the image (input and output pixels as
homogeneous vectors):
⎛ RowTrans_i ⎞   ⎛ 1 0 −0.5 ⎞              ⎛ 1 0 +0.5 ⎞   ⎛ Row_i ⎞
⎜ ColTrans_i ⎟ = ⎜ 0 1 −0.5 ⎟ · homMat2D · ⎜ 0 1 +0.5 ⎟ · ⎜ Col_i ⎟
⎝     1      ⎠   ⎝ 0 0   1  ⎠              ⎝ 0 0   1  ⎠   ⎝   1   ⎠
As an effect, you might get unexpected results when creating affine transformations based on coordinates that are
derived from the image, e.g., by operators like AreaCenterGray. For example, if you use this operator to
calculate the center of gravity of a rotationally symmetric image and then rotate the image around this point using
HomMat2dRotate, the resulting image will not lie on the original one. In such a case, you can compensate this
effect by applying the following translations to homMat2D before using it in AffineTransImage:
hom_mat2d_translate(HomMat2D, 0.5, 0.5, HomMat2DTmp)
hom_mat2d_translate_local(HomMat2DTmp, -0.5, -0.5, HomMat2DAdapted)
affine_trans_image(Image, ImageAffinTrans, HomMat2DAdapted, ’constant’,
’false’)
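The compensation above amounts to conjugating homMat2D with half-pixel translations, T(+0.5) · homMat2D · T(−0.5). The following NumPy sketch (an illustration, not HALCON code; the example rotation center is arbitrary) shows that the fixed point of the adapted matrix shifts by half a pixel, as intended:

```python
import numpy as np

def translate(dr, dc):
    # Homogeneous 2D translation in (row, col) coordinates.
    return np.array([[1.0, 0.0, dr], [0.0, 1.0, dc], [0.0, 0.0, 1.0]])

# Example: rotation by 180 degrees about the point (10, 10).
c, s = np.cos(np.pi), np.sin(np.pi)
rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
hom = translate(10, 10) @ rot @ translate(-10, -10)

# Adapted matrix: prepend T(+0.5), append T(-0.5).
adapted = translate(0.5, 0.5) @ hom @ translate(-0.5, -0.5)

# Its fixed point is shifted by half a pixel, from (10, 10) to (10.5, 10.5):
p = adapted @ np.array([10.5, 10.5, 1.0])
print(np.round(p, 6))   # ≈ [10.5, 10.5, 1.0]
```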
Parameter
Example (Syntax: HDevelop)
hom_mat2d_identity(Matrix1)
hom_mat2d_scale(Matrix1,0.5,0.5,256.0,256.0,Matrix2)
hom_mat2d_rotate(Matrix2,3.14,256.0,256.0,Matrix3)
hom_mat2d_translate(Matrix3,-128.0,-128.0,Matrix4)
affine_trans_image(Image,TransImage,Matrix4,’constant’,’false’)
draw_rectangle2(WindowHandle,L,C,Phi,L1,L2)
hom_mat2d_identity(Matrix1)
get_system(’width’,Width)
get_system(’height’,Height)
hom_mat2d_translate(Matrix1,Height/2.0-L,Width/2.0-C,Matrix2)
hom_mat2d_rotate(Matrix2,3.14-Phi,Height/2.0,Width/2.0,Matrix3)
hom_mat2d_scale(Matrix3,Height/(2.0*L2),Width/(2.0*L1),
Height/2.0,Width/2.0,Matrix4)
affine_trans_image(Image,TransImage,Matrix4,’constant’,’false’)
Result
If the matrix homMat2D represents an affine transformation (i.e., not a projective transformation),
AffineTransImage returns 2 (H_MSG_TRUE). If the input is empty the behavior can be set via SetSystem
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
AffineTransImage is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
HomMat2dIdentity, HomMat2dTranslate, HomMat2dRotate, HomMat2dScale
Alternatives
AffineTransImageSize, ZoomImageSize, ZoomImageFactor, MirrorImage, RotateImage,
AffineTransRegion
See also
SetPartStyle
Module
Foundation
Apply an arbitrary affine 2D transformation to an image and specify the output image size.
AffineTransImageSize applies an arbitrary affine 2D transformation, i.e., scaling, rotation, translation, and
slant (skewing), to the images given in image and returns the transformed images in imageAffinTrans.
The affine transformation is described by the homogeneous transformation matrix given in homMat2D, which
can be created using the operators HomMat2dIdentity, HomMat2dScale, HomMat2dRotate,
HomMat2dTranslate, etc., or be the result of operators like VectorAngleToRigid.
The components of the homogeneous transformation matrix are interpreted as follows: The row coordinate of the
image corresponds to x and the col coordinate corresponds to y of the coordinate system in which the transforma-
tion matrix was defined. This is necessary to obtain a right-handed coordinate system for the image. In particular,
this assures that rotations are performed in the correct direction. Note that the (x,y) order of the matrices quite
naturally corresponds to the usual (row,column) order for coordinates in the image.
The region of the input image is ignored, i.e., assumed to be the full rectangle of the image. The region of the
resulting image is set to the transformed rectangle of the input image. If necessary, the resulting image is filled
with zero (black) outside of the region of the original image.
Generally, transformed points will lie between pixel coordinates. Therefore, an appropriate interpolation scheme
has to be used. The interpolation can also be used to avoid aliasing effects for scaled images. The quality and
speed of the interpolation can be set by the parameter interpolation:
none Nearest-neighbor interpolation: The gray value is determined from the nearest pixel’s gray value (pos-
sibly low quality, very fast).
constant Bilinear interpolation. The gray value is determined from the four nearest pixels through bilinear
interpolation. If the affine transformation contains a scaling with a scale factor < 1, a kind of mean
filter is used to prevent aliasing effects (medium quality and run time).
weighted Bilinear interpolation. The gray value is determined from the four nearest pixels through bilinear
interpolation. If the affine transformation contains a scaling with a scale factor < 1, a kind of Gaussian
filter is used to prevent aliasing effects (best quality, slow).
In addition, the system parameter ’int_zooming’ (see SetSystem) affects the accuracy of the transformation. If
’int_zooming’ is set to ’true’, the transformation for byte, int2 and uint2 images is carried out internally using fixed
point arithmetic, leading to much shorter execution times. However, the accuracy of the transformed gray values
is smaller in this case. For byte images, the differences to the more accurate calculation (using ’int_zooming’ =
’false’) is typically less than two gray levels. Correspondingly, for int2 and uint2 images, the gray value differences
are less than 1/128 times the dynamic gray value range of the image, i.e., they can be as large as 512 gray levels if
the entire dynamic range of 16 bit is used. Additionally, if a large scale factor is applied and a large output image
is obtained, then undefined gray values at the lower and at the right image border may result. The maximum width
Bmax of this border of undefined gray values can be estimated as Bmax = 0.5 · S · I / 2^15, where S is the scale
factor in one dimension and I is the size of the output image in the corresponding dimension. For real images, the
parameter ’int_zooming’ does not affect the accuracy, since the internal calculations are always done using floating
point arithmetic.
The size of the target image is specified by the parameters width and height. Note that the image is always
clipped at the left and upper edge, i.e., all image parts that have negative coordinates after the transformation are
clipped. If the affine transformation (in particular, the translation) is chosen appropriately, a part of the image
can be transformed as well as cropped in one call. This is useful, for example, when using the variation model
(see CompareVariationModel), because with this mechanism only the parts of the image that should be
examined, are transformed.
Attention
The region of the input image is ignored.
The used coordinate system is the same as in AffineTransPixel. This means that in fact not homMat2D is
applied but a modified version. Therefore, applying AffineTransImageSize corresponds to the following
chain of transformations, which is applied to each point (Row_i, Col_i) of the image (input and output pixels as
homogeneous vectors):
⎛ RowTrans_i ⎞   ⎛ 1 0 −0.5 ⎞              ⎛ 1 0 +0.5 ⎞   ⎛ Row_i ⎞
⎜ ColTrans_i ⎟ = ⎜ 0 1 −0.5 ⎟ · homMat2D · ⎜ 0 1 +0.5 ⎟ · ⎜ Col_i ⎟
⎝     1      ⎠   ⎝ 0 0   1  ⎠              ⎝ 0 0   1  ⎠   ⎝   1   ⎠
As an effect, you might get unexpected results when creating affine transformations based on coordinates that are
derived from the image, e.g., by operators like AreaCenterGray. For example, if you use this operator to
calculate the center of gravity of a rotationally symmetric image and then rotate the image around this point using
HomMat2dRotate, the resulting image will not lie on the original one. In such a case, you can compensate this
effect by applying the following translations to homMat2D before using it in AffineTransImageSize:
hom_mat2d_translate(HomMat2D, 0.5, 0.5, HomMat2DTmp)
hom_mat2d_translate_local(HomMat2DTmp, -0.5, -0.5, HomMat2DAdapted)
affine_trans_image_size(Image, ImageAffinTrans, HomMat2DAdapted,
                        ’constant’, Width, Height)
Parameter
The origin of mosaicImage and its size are automatically chosen so that all of the input images are completely
visible.
The order in which the images are added to the mosaic is given by the array stackingOrder. The first index in
this array will end up at the bottom of the image stack while the last one will be on top. If ’default’ is given instead
of an array of integers, the canonical order (images in the order used in images) will be used.
The parameter transformRegion can be used to determine whether the domains of images are also trans-
formed. Since the transformation of the domains costs runtime, this parameter should be used to specify whether
this is desired or not. If transformRegion is set to ’false’ the domain of the input images is ignored and the
complete images are transformed.
On output, the parameter transMat2D contains a 3 × 3 projective transformation matrix that describes the
translation that was necessary to transform all images completely into the output image.
Parameter
Result
If the parameters are valid, the operator GenCubeMapMosaic returns the value 2 (H_MSG_TRUE). If necessary
an exception handling is raised.
Parallelization Information
GenCubeMapMosaic is reentrant and processed without parallelization.
Possible Predecessors
StationaryCameraSelfCalibration
Alternatives
GenSphericalMosaic, GenProjectiveMosaic
References
Lourdes Agapito, E. Hayman, I. Reid: “Self-Calibration of Rotating and Zooming Cameras”; International Journal
of Computer Vision; vol. 45, no. 2; pp. 107–127; 2001.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
Matching
Parameter
Example (Syntax: HDevelop)
gen_empty_obj (Images)
for J := 1 to 6 by 1
read_image (Image, ’mosaic/pcb_’+J$’02’)
concat_obj (Images, Image, Images)
endfor
From := [1,2,3,4,5]
To := [2,3,4,5,6]
Num := |From|
ProjMatrices := []
for J := 0 to Num-1 by 1
F := From[J]
T := To[J]
select_obj (Images, F, ImageF)
select_obj (Images, T, ImageT)
points_foerstner (ImageF, 1, 2, 3, 200, 0.3, ’gauss’, ’false’,
RowJunctionsF, ColJunctionsF, CoRRJunctionsF,
CoRCJunctionsF, CoCCJunctionsF, RowAreaF,
ColAreaF, CoRRAreaF, CoRCAreaF, CoCCAreaF)
points_foerstner (ImageT, 1, 2, 3, 200, 0.3, ’gauss’, ’false’,
RowJunctionsT, ColJunctionsT, CoRRJunctionsT,
CoRCJunctionsT, CoCCJunctionsT, RowAreaT,
ColAreaT, CoRRAreaT, CoRCAreaT, CoCCAreaT)
proj_match_points_ransac (ImageF, ImageT, RowJunctionsF,
ColJunctionsF, RowJunctionsT,
ColJunctionsT, ’ncc’, 21, 0, 0, 480, 640,
0, 0.5, ’gold_standard’, 1, 4364537,
ProjMatrix, Points1, Points2)
ProjMatrices := [ProjMatrices,ProjMatrix]
endfor
gen_projective_mosaic (Images, MosaicImage, 2, From, To, ProjMatrices,
’default’, ’false’, MosaicMatrices2D)
Parallelization Information
GenProjectiveMosaic is reentrant and processed without parallelization.
Possible Predecessors
ProjMatchPointsRansac, VectorToProjHomMat2d, HomVectorToProjHomMat2d
See also
ProjectiveTransImage, ProjectiveTransImageSize, ProjectiveTransRegion,
ProjectiveTransContourXld, ProjectiveTransPoint2d, ProjectiveTransPixel
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2000.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
Matching
means that the gray values are taken from the points of the input image to whose center the pixel in the mosaic
image has the smallest distance on the sphere. This mode has the advantage that vignetting and uncorrected radial
distortions are less noticeable in the mosaic image because they typically are symmetric with respect to the image
center. Alternatively, with the choice of parameters described in the following, a mode can be selected
that has the same effect as if the images were painted successively into the mosaic image. Here, the order in which
the images are added to the mosaic image is important. Therefore, an array of integer values can be passed in
stackingOrder. The first index in this array will end up at the bottom of the image stack while the last one
will be on top. If ’default’ is given instead of an array of integers, the canonical order (images in the order used
in images) will be used. Hence, if neither ’voronoi’ nor ’default’ are used, stackingOrder must contain a
permutation of the numbers 1,...,n, where n is the number of images passed in images. It should be noted that
the mode ’voronoi’ cannot always be used. For example, at least two images must be passed to use this mode.
Furthermore, for very special configurations of the positions of the image centers on the sphere, the Voronoi cells
cannot be determined uniquely. With stackingOrder = ’blend’, an additional mode is available, which blends
the images of the mosaic smoothly. This way seams between the images become less apparent. The seam lines
between the images are the same as in ’voronoi’. This mode leads to visually more appealing images, but requires
significantly more resources. If the mode ’voronoi’ or ’blend’ cannot be used for whatever reason the mode is
switched internally to ’default’ automatically.
The parameter interpolation can be used to select the desired interpolation mode for creating the mosaic.
Bilinear and bicubic interpolation is available.
Parameter
. images (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image-array ; HImage
Input images.
. mosaicImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; HImage
Output image.
. cameraMatrices (input_control) . . . . . . . . . . . . hom_mat2d-array ; HHomMat2D [ ] / HTuple (double)
(Array of) 3 × 3 projective camera matrices that determine the interior camera parameters.
. rotationMatrices (input_control) . . . . . . . . . . hom_mat2d-array ; HHomMat2D [ ] / HTuple (double)
Array of 3 × 3 transformation matrices that determine rotation of the camera in the respective image.
. latMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; HTuple (double / int / long)
Minimum latitude of points in the spherical mosaic image.
Default Value : -90
Suggested values : LatMin ∈ {-100, -90, -80, -70, -60, -50, -40, -30, -20, -10}
Restriction : LatMin ≤ 90
. latMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; HTuple (double / int / long)
Maximum latitude of points in the spherical mosaic image.
Default Value : 90
Suggested values : LatMax ∈ {10, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction : (LatMax ≥ -90) ∧ (LatMax > LatMin)
. longMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; HTuple (double / int / long)
Minimum longitude of points in the spherical mosaic image.
Default Value : -180
Suggested values : LongMin ∈ {-200, -180, -160, -140, -120, -100, -90, -80, -70, -60, -50, -40, -30, -20, -10}
Restriction : LongMin ≤ 180
. longMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; HTuple (double / int / long)
Maximum longitude of points in the spherical mosaic image.
Default Value : 180
Suggested values : LongMax ∈ {10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 120, 140, 160, 180, 200}
Restriction : (LongMax ≥ -90) ∧ (LongMax > LongMin)
. latLongStep (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; HTuple (double / int / long)
Latitude and longitude angle step width.
Default Value : 0.1
Suggested values : LatLongStep ∈ {0, 0.02, 0.05, 0.1, 0.2, 0.5, 1}
Restriction : LatLongStep ≥ 0
. stackingOrder (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; HTuple (string / int / long)
Mode of adding the images to the mosaic image.
Default Value : "voronoi"
Suggested values : StackingOrder ∈ {"blend", "voronoi", "default"}
HALCON 8.0.2
202 CHAPTER 3. FILTER
Result
If the parameters are valid, the operator GenSphericalMosaic returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Parallelization Information
GenSphericalMosaic is reentrant and processed without parallelization.
Possible Predecessors
StationaryCameraSelfCalibration
Alternatives
GenCubeMapMosaic, GenProjectiveMosaic
References
Lourdes Agapito, E. Hayman, I. Reid: “Self-Calibration of Rotating and Zooming Cameras”; International Journal
of Computer Vision; vol. 45, no. 2; pp. 107–127; 2001.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
Matching
read_image(Image,’affe’)
disp_image(Image,WindowHandle)
mirror_image(Image,MirImage,’row’)
disp_image(MirImage,WindowHandle)
Parallelization Information
MirrorImage is reentrant and automatically parallelized (on tuple level, channel level).
Alternatives
HomMat2dRotate, AffineTransImage, RotateImage
See also
RotateImage, HomMat2dRotate
Module
Foundation
Parameter
read_image(Image,’affe’)
disp_image(Image,WindowHandle)
polar_trans_image(Image,PolarImage,100,100,314,200)
disp_image(PolarImage,WindowHandle)
Parallelization Information
PolarTransImage is reentrant and automatically parallelized (on tuple level, channel level).
Alternatives
PolarTransImageExt
See also
PolarTransImageInv, PolarTransRegion, PolarTransRegionInv,
PolarTransContourXld, PolarTransContourXldInv, AffineTransImage
Module
Foundation
enlargements of the image. For interpolation = ’bilinear’, the gray values are interpolated bilinearly, leading
to longer runtimes, but also to significantly improved results.
The parameter transformRegion can be used to determine whether the domain of image is also transformed.
Since the transformation of the domain costs runtime, this parameter should be used to specify whether this is
desired or not. If transformRegion is set to ’false’ the domain of the input image is ignored and the complete
image is transformed.
The projective transformation matrix could for example be created using the operator
VectorToProjHomMat2d.
In a homography the points to be projected are represented by homogeneous vectors of the form (x, y, w). A
Euclidean point can be derived as (x’, y’) = (x/w, y/w).
Just like in AffineTransImage, x represents the row coordinate while y represents the column coordinate
in ProjectiveTransImage. With this convention, affine transformations are a special case of projective
transformations in which the last row of homMat2D is of the form (0, 0, c).
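The projection and the row/column convention can be illustrated with a few lines of Python (a sketch of the math only, not the HALCON API):

```python
def proj_trans_point(hom_mat, row, col):
    """Apply a 3x3 homography to a point, using the convention that
    x is the row and y the column coordinate."""
    h = hom_mat
    x = h[0][0] * row + h[0][1] * col + h[0][2]
    y = h[1][0] * row + h[1][1] * col + h[1][2]
    w = h[2][0] * row + h[2][1] * col + h[2][2]
    return x / w, y / w   # Euclidean (row', col') after dividing by w

# An affine matrix: the last row has the form (0, 0, c),
# so w is constant and the division amounts to a uniform scaling.
H = [[1, 0, 0], [0, 1, 0], [0, 0, 2]]
print(proj_trans_point(H, 4, 6))  # -> (2.0, 3.0)
```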
For images of type byte or uint2 the system parameter ’int_zooming’ selects between fast calculation in fixed point
arithmetics (’int_zooming’ = ’true’) and highly accurate calculation in floating point arithmetics (’int_zooming’ =
’false’). Especially for interpolation = ’bilinear’, however, fixed point calculation can lead to minor gray
value deviations since the faster algorithm achieves an accuracy of no more than 1/16 pixel. Therefore, when
applying large scale factors ’int_zooming’ = ’false’ is recommended.
Parameter
Apply a projective transformation to an image and specify the output image size.
ProjectiveTransImageSize applies the projective transformation (homography) determined by the homo-
geneous transformation matrix homMat2D on the input image image and stores the result into the output image
transImage.
transImage will be clipped at the output dimensions height×width. Apart from this,
ProjectiveTransImageSize is identical to its alternative version ProjectiveTransImage.
Parameter
RotateImage rotates the image image counterclockwise by phi degrees about its center. This operator is
much faster if phi is a multiple of 90 degrees than the general operator AffineTransImage. For rotations by
90, 180, and 270 degrees, the region is rotated accordingly. For all other rotations the region is set to the maximum
region, i.e., to the extent of the resulting image. The effect of the parameter interpolation is the same as in
AffineTransImage. It is ignored for rotations by 90, 180, and 270 degrees. The size of the resulting image is
the same as that of the input image, with the exception of rotations by 90 and 270 degrees, where the width and
height will be exchanged.
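The fast path for multiples of 90 degrees amounts to pure index shuffling, which is why no interpolation is needed there. A Python sketch of the idea on a list-of-lists image (illustrative only, not HALCON code):

```python
def rotate_90_multiple(img, phi):
    """Rotate a 2D list-of-lists image counterclockwise by phi degrees,
    where phi must be a multiple of 90 (the fast path described above)."""
    if phi % 90 != 0:
        raise ValueError("only multiples of 90 degrees handled here")
    for _ in range((phi // 90) % 4):
        # one CCW quarter turn: transpose, then reverse the row order
        img = [list(row) for row in zip(*img)][::-1]
    return img

img = [[1, 2, 3],
       [4, 5, 6]]                        # 2 rows x 3 columns
print(rotate_90_multiple(img, 90))       # 3 rows x 2 columns
```

Note how width and height are exchanged for 90 and 270 degrees, exactly as stated in the text.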
Attention
The angle phi is given in degrees, not in radians.
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Input image.
. imageRotate (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Rotated image.
. phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; HTuple (double / int / long)
Rotation angle.
Default Value : 90
Suggested values : Phi ∈ {90, 180, 270}
Typical range of values : 0 ≤ Phi ≤ 360
Minimum Increment : 0.001
Recommended Increment : 0.2
. interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Type of interpolation.
Default Value : "constant"
List of values : Interpolation ∈ {"none", "constant", "weighted"}
Example (Syntax: HDevelop)
read_image(Image,’affe’)
disp_image(Image,WindowHandle)
rotate_image(Image,RotImage,270,’constant’)
disp_image(RotImage,WindowHandle)
Parallelization Information
RotateImage is reentrant and automatically parallelized (on tuple level, channel level).
Alternatives
HomMat2dRotate, AffineTransImage
See also
MirrorImage
Module
Foundation
the following two cases: First, if ZoomImageFactor is used on an uint2 or int2 image with high dynamics
(i.e. images containing values close to the respective limits) in combination with scale factors smaller than 0.5,
then the gray values of the output image may be erroneous. Second, if interpolation is set to a value other
than ’none’, a large scale factor is applied, and a large output image is obtained, then undefined gray values at the
lower and at the right image border may result. The maximum width Bmax of this border of undefined gray values
can be estimated as Bmax = 0.5 · S · I / 2^15, where S is the scale factor in one dimension and I is the size of the
output image in the corresponding dimension. In both cases, it is recommended to set ’int_zooming’ to ’false’ via
the operator SetSystem.
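The border estimate is easy to evaluate for concrete numbers. A small Python sketch:

```python
def max_undefined_border(scale, out_size):
    """Estimate the maximum width (in pixels) of the undefined border
    that fixed-point zooming may produce: Bmax = 0.5 * S * I / 2**15."""
    return 0.5 * scale * out_size / 2**15

# e.g. scaling a 1000-pixel-wide image by 8 gives 8000 output columns
print(max_undefined_border(8.0, 8000))  # just under one pixel
```

For moderate scale factors and image sizes the border stays well below one pixel; it only becomes relevant when both the scale factor and the output size are large, which is exactly the case where ’int_zooming’ = ’false’ is recommended.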
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Input image.
. imageZoomed (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Scaled image.
. scaleWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; HTuple (double)
Scale factor for the width of the image.
Default Value : 0.5
Suggested values : ScaleWidth ∈ {0.25, 0.5, 1.5, 2.0}
Typical range of values : 0.001 ≤ ScaleWidth ≤ 10.0
Minimum Increment : 0.001
Recommended Increment : 0.1
. scaleHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; HTuple (double)
Scale factor for the height of the image.
Default Value : 0.5
Suggested values : ScaleHeight ∈ {0.25, 0.5, 1.5, 2.0}
Typical range of values : 0.001 ≤ ScaleHeight ≤ 10.0
Minimum Increment : 0.001
Recommended Increment : 0.1
. interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Type of interpolation.
Default Value : "constant"
List of values : Interpolation ∈ {"none", "constant", "weighted"}
Example (Syntax: HDevelop)
read_image(Image,’affe’)
disp_image(Image,WindowHandle)
zoom_image_factor(Image,ZooImage,0.5,0.5,’constant’)
disp_image(ZooImage,WindowHandle)
Parallelization Information
ZoomImageFactor is reentrant and automatically parallelized (on tuple level, channel level).
Alternatives
ZoomImageSize, AffineTransImage, HomMat2dScale
See also
HomMat2dScale, AffineTransImage
Module
Foundation
ZoomImageSize scales the image image to the size given by width and height. The parameter
interpolation determines the type of interpolation used (see AffineTransImage).
Attention
If the system parameter ’int_zooming’ is set to ’true’, the internally used integer arithmetic may lead to errors in the
following two cases: First, if ZoomImageSize is used on an uint2 or int2 image with high dynamics (i.e. images
containing values close to the respective limits) in combination with scale factors (ratio of output to input image
size) smaller than 0.5, then the gray values of the output image may be erroneous. Second, if interpolation is
set to a value other than ’none’, a large scale factor is applied, and a large output image is obtained, then undefined
gray values at the lower and at the right image border may result. The maximum width Bmax of this border of
undefined gray values can be estimated as Bmax = 0.5 · S · I / 2^15, where S is the scale factor in one dimension
and I is the size of the output image in the corresponding dimension. In both cases, it is recommended to set
’int_zooming’ to ’false’ via the operator SetSystem.
Parameter
read_image(Image,’affe’)
disp_image(Image,WindowHandle)
zoom_image_size(Image,ZooImage,200,200,’constant’)
disp_image(ZooImage,WindowHandle)
Parallelization Information
ZoomImageSize is reentrant and automatically parallelized (on tuple level, channel level).
Alternatives
ZoomImageFactor, AffineTransImage, HomMat2dScale
See also
HomMat2dScale, AffineTransImage
Module
Foundation
3.8 Inpainting
The operator InpaintingAniso uses anisotropic diffusion according to the model of Perona and Malik to
continue image edges that cross the border of the region region and to connect them inside of region.
With this, the structure of the edges in region is made consistent with the surrounding image matrix, so that
an occlusion of errors or unwanted objects in the input image, a so-called inpainting, is less visible to the human
beholder, since no obvious artefacts or smudges remain.
Considering the image as a gray value function u, the algorithm is a discretization of the partial differential equation
ut = div(g(|∇u|², c) ∇u)
with the initial value u = u0 defined by image at a time t0 = 0. The equation is iterated iterations times in
time steps of length theta, so that the output image inpaintedImage contains the gray value function at the
time iterations · theta.
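An explicit time stepping of this equation can be sketched in a few lines. The following Python example performs one Perona-Malik step on a 1D signal — a simplification of the 2D scheme the operator uses, with the diffusivity 1/(1 + x/c²) assumed for g:

```python
def perona_malik_step_1d(u, theta, c):
    """One explicit time step of 1D Perona-Malik diffusion
    u_t = d/dx( g(u_x^2, c) u_x ) with g(x, c) = 1 / (1 + x / c**2)."""
    n = len(u)
    flux = []
    for i in range(n - 1):
        grad = u[i + 1] - u[i]                  # forward difference
        g = 1.0 / (1.0 + grad * grad / c**2)    # diffusivity at this edge
        flux.append(g * grad)
    new_u = list(u)
    for i in range(n):
        left = flux[i - 1] if i > 0 else 0.0    # zero-flux boundaries
        right = flux[i] if i < n - 1 else 0.0
        new_u[i] = u[i] + theta * (right - left)
    return new_u

u = [0.0, 0.0, 100.0, 100.0]      # a sharp edge of amplitude 100
print(perona_malik_step_1d(u, 0.2, 5.0))
```

Because the gradient across the edge (100) is far above the contrast parameter c = 5, the diffusivity there is tiny and the edge is almost perfectly preserved, while flat areas would be smoothed.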
The primary goal of the anisotropic diffusion, which is also referred to as nonlinear isotropic diffusion, is the
elimination of image noise in constant image patches while preserving the edges in the image. The distinction
between edges and constant patches is achieved using the threshold contrast on the magnitude of the gray
value differences between adjacent pixels. contrast is referred to as the contrast parameter and is abbreviated
with the letter c. If the edge information is distributed in an environment of the already existing edges by smoothing
the edge amplitude matrix, it is furthermore possible to continue edges into the computation area region. The
standard deviation of this smoothing process is determined by the parameter rho.
The algorithm used is basically the same as in the anisotropic diffusion filter AnisotropicDiffusion, except
that here, border treatment is not done by mirroring the gray values at the border of region. Instead, this
procedure is only applicable on regions that keep a distance of at least 3 pixels to the border of the image matrix
of image, since the gray values on this band around region are used to define the boundary conditions for the
respective differential equation and thus assure consistency with the neighborhood of region. Please note that
the inpainting progress is restricted to those pixels that are included in the ROI of the input image image. If the
ROI does not include the entire region region, a band around the intersection of region and the ROI is used to
define the boundary values.
The result of the diffusion process depends on the gray values in the computation area of the input image image.
It must be pointed out that already existing image edges are preserved within region. In particular, this holds
for gray value jumps at the border of region, which can result for example from a previous inpainting with
constant gray value. If the procedure is to be used for inpainting, it is recommended to apply the operator
HarmonicInterpolation first to remove all unwanted edges inside the computation area and to minimize the
gray value difference between adjacent pixels, unless the input image already contains information inside region
that should be preserved.
The variable diffusion coefficient g can be chosen to follow different monotonically decreasing functions with
values between 0 and 1 and determines the response of the diffusion process to an edge. With the parameter mode,
the following functions can be selected:
g1(x, c) = 1 / √(1 + 2x/c²)
Choosing the function g1 by setting mode to ’parabolic’ guarantees that the associated differential equation is
parabolic, so that a well-posedness theory exists for the problem and the procedure is stable for an arbitrary step
size theta. In this case however, there remains a slight diffusion even across edges of an amplitude larger than c.
g2(x, c) = 1 / (1 + x/c²)
The choice of ’perona-malik’ for mode, as used in the publication of Perona and Malik, does not possess the
theoretical properties of g1 , but in practice it has proved to be sufficiently stable and is thus widely used. The
theoretical instability results in a slight sharpening of strong edges.
g3(x, c) = 1 − exp(−C · c⁸/x⁴)
The function g3 with the constant C = 3.31488, proposed by Weickert, and selectable by setting mode to
’weickert’ is an improvement of g2 with respect to edge sharpening. The transition between smoothing and
sharpening happens very abruptly at x = c².
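The three diffusivities can be compared numerically. The formulas in the following Python sketch follow the standard formulations (Perona-Malik, and Weickert with C = 3.31488) and should be checked against the manual's exact definitions; x stands for the squared gradient magnitude:

```python
import math

C = 3.31488  # Weickert's constant

def g1(x, c):   # mode 'parabolic'
    return 1.0 / math.sqrt(1.0 + 2.0 * x / c**2)

def g2(x, c):   # mode 'perona-malik'
    return 1.0 / (1.0 + x / c**2)

def g3(x, c):   # mode 'weickert'
    return 1.0 - math.exp(-C * c**8 / x**4)

# All three are near 1 in flat areas (x << c^2) and drop toward 0
# across strong edges (x >> c^2); g1 keeps a slight residual diffusion.
c = 5.0
for g in (g1, g2, g3):
    print(round(g(1.0, c), 3), round(g(1e4, c), 3))
```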
Furthermore, the choice of the value ’shock’ is possible for mode to select a contrast invariant modification of the
anisotropic diffusion. In this variant, the generation of edges is not achieved by variation of the diffusion coefficient
g, but the constant coefficient g = 1 and thus isotropic diffusion is used. Additionally, a shock filter of type
ut = −sgn(Δu)|∇u|
is applied, which, just like a negative diffusion coefficient, causes a sharpening of the edges, but works independently
of the absolute value of |∇u|. In this mode, contrast does not have the meaning of a contrast parameter,
but specifies the ratio between the diffusion and the shock filter part applied at each iteration step. Hence, the
value 0 would correspond to pure isotropic diffusion, as used in the operator IsotropicDiffusion. The
parameter is scaled in such a way that diffusion and sharpening cancel each other out for contrast = 1. A
value contrast > 1 should not be used, since it would make the algorithm unstable.
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Input image.
. region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; HRegion
Inpainting region.
. inpaintedImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Output image.
. mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Type of edge sharpening algorithm.
Default Value : "weickert"
List of values : Mode ∈ {"weickert", "perona-malik", "parabolic", "shock"}
. contrast (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Contrast parameter.
Default Value : 5.0
Suggested values : Contrast ∈ {0.5, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0}
Restriction : Contrast > 0
. theta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Step size.
Default Value : 0.5
Suggested values : Theta ∈ {0.5, 1.0, 5.0, 10.0, 30.0, 100.0}
Restriction : Theta > 0
. iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Number of iterations.
Default Value : 10
Suggested values : Iterations ∈ {1, 3, 10, 100, 500}
Restriction : Iterations ≥ 1
. rho (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Smoothing coefficient for edge information.
Default Value : 3.0
Suggested values : Rho ∈ {0.0, 0.1, 0.5, 1.0, 3.0, 10.0}
Restriction : Rho ≥ 0
Parallelization Information
InpaintingAniso is reentrant and automatically parallelized (on tuple level).
Alternatives
HarmonicInterpolation, InpaintingCt, InpaintingMcf, InpaintingTexture,
InpaintingCed
References
J. Weickert; “Anisotropic Diffusion in Image Processing”; PhD Thesis; Fachbereich Mathematik, Universität
Kaiserslautern; 1996.
P. Perona, J. Malik; “Scale-space and edge detection using anisotropic diffusion”; Transactions on Pattern Analysis
and Machine Intelligence 12(7), pp. 629-639; IEEE; 1990.
G. Aubert, P. Kornprobst; “Mathematical Problems in Image Processing”; Applied Mathematical Sciences 147;
Springer, New York; 2002.
Module
Foundation
ut = div(G(u)∇u)
formulated by Weickert. With a 2 × 2 coefficient matrix G that depends on the gray values in image, this is an
enhancement of the mean curvature flow or intrinsic heat equation
ut = div(∇u/|∇u|)|∇u| = curv(u)|∇u|
on the gray value function u defined by the input image image at a time t0 = 0. The smoothing oper-
ator MeanCurvatureFlow is a direct application of the mean curvature flow equation. With the opera-
tor InpaintingMcf, it can also be used for image inpainting. The discrete diffusion equation is solved in
iterations time steps of length theta, so that the output image inpaintedImage contains the gray value
function at the time iterations · theta.
To detect the image direction more robustly, in particular on noisy input data, an additional isotropic smoothing
step can precede the computation of the gray value gradients. The parameter sigma determines the magnitude of
the smoothing by means of the standard deviation of a corresponding Gaussian convolution kernel, as used in the
operator IsotropicDiffusion for isotropic image smoothing.
Similar to the operator InpaintingMcf, the structure of the image data in region is simplified by smoothing
the level lines of image. By this, image errors and unwanted objects can be removed from the image, while the
edges in the neighborhood are extended continuously. This procedure is called image inpainting. The objective is
to introduce a minimum amount of artefacts or smoothing effects, so that the image manipulation is least visible to
a human beholder.
While the matrix G is given by
G_MCF(u) = I − (1/|∇u|²) ∇u(∇u)^T,
in the case of the operator InpaintingMcf, where I denotes the unit matrix, G_MCF is again smoothed
componentwise by a Gaussian filter of standard deviation rho for CoherenceEnhancingDiff. Then, the final
coefficient matrix
is constructed from the eigenvalues λ1 , λ2 and eigenvectors w1 , w2 of the resulting intermediate matrix, where the
functions
g1(p) = 0.001
g2(p) = 0.001 + 0.999 · exp(−1/p)
Parallelization Information
InpaintingCed is reentrant and automatically parallelized (on tuple level).
Alternatives
HarmonicInterpolation, InpaintingCt, InpaintingAniso, InpaintingMcf,
InpaintingTexture
References
J. Weickert, V. Hlavac, R. Sara; “Multiscale texture enhancement”; Computer analysis of images and patterns,
Lecture Notes in Computer Science, Vol. 970, pp. 230-237; Springer, Berlin; 1995.
J. Weickert, B. ter Haar Romeny, L. Florack, J. Koenderink, M. Viergever; “A review of nonlinear diffusion
filtering”; Scale-Space Theory in Computer Vision, Lecture Notes in Comp. Science, Vol. 1252, pp. 3-28;
Springer, Berlin; 1997.
Module
Foundation
• The order of the pixels to process is given by their Euclidean distance to the boundary of the region to inpaint.
• A new value ui is computed as a weighted average of already known values uj within a disc of radius
epsilon around the current pixel. The disc is restricted to already known pixels.
• The size of this scheme’s mask depends on epsilon.
The initially used image data comes from a stripe of thickness epsilon around the region to inpaint. Thus,
epsilon must be at least 1 for the scheme to work, but should be greater. The maximum value for epsilon
depends on the gray values that should be transported into the region. Choosing epsilon = 5 works well in
many cases.
Since the goal is to close broken contour lines, the direction of the level lines must be estimated and used in the
weight. This estimated direction is called the coherence direction, and is computed by means of the structure tensor
S.
S = Gρ ∗ Dv(Dv)^T
and
v = Gσ ∗ u
where ∗ denotes the convolution, u denotes the gray value image, D the derivative and G Gaussian kernels with
standard deviation σ and ρ. These standard deviations are defined by the operator’s parameters sigma and rho.
sigma should have the size of the noise or unimportant small objects, which are then not considered in the
estimation step by the pre-smoothing. rho gives the size of the window around a pixel that will be used for direction
estimation. The coherence direction c then is given by the eigendirection of S with respect to the minimal
eigenvalue λ, i.e.
Sc = λc, |c| = 1
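Extracting the eigendirection of the minimal eigenvalue from a symmetric 2 × 2 tensor is elementary. A Python sketch (illustrative only):

```python
import math

def coherence_direction(sxx, sxy, syy):
    """Unit eigenvector of the symmetric 2x2 structure tensor
    [[sxx, sxy], [sxy, syy]] belonging to the minimal eigenvalue;
    this is the estimated direction along the level lines."""
    # eigenvalues of a symmetric 2x2 matrix via trace and determinant
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    lam_min = tr / 2.0 - disc
    # (S - lam_min I) c = 0; a valid eigenvector is (lam_min - syy, sxy)
    if abs(sxy) > 1e-12:
        vx, vy = lam_min - syy, sxy
    else:
        vx, vy = (1.0, 0.0) if sxx <= syy else (0.0, 1.0)
    norm = math.hypot(vx, vy)
    return vx / norm, vy / norm

# Gradient purely along x: tensor diag(4, 0); level lines run along y.
print(coherence_direction(4.0, 0.0, 0.0))  # -> (0.0, 1.0)
```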
For multichannel or color images, the scheme above is applied to each channel separately, but the weights must be
the same for all channels to propagate information in the same direction. Since the weight depends on the coherence
direction, the common direction is given by the eigendirection of a composite structure tensor. If u1 , ..., un denote
the n channels of the image, the channel structure tensors S1 , ..., Sn are computed and then combined to the
composite structure tensor S:
S = Σᵢ₌₁ⁿ aᵢ Sᵢ
The coefficients ai are passed in channelCoefficients, which is a tuple of length n or length 1. If the tuple’s
length is 1, the arithmetic mean is used, i.e., ai = 1/n. If the length of channelCoefficients matches the
number of channels, the ai are set to
aᵢ = channelCoefficientsᵢ / Σⱼ₌₁ⁿ channelCoefficientsⱼ
in order to get a well-defined convex combination. Hence, the channelCoefficients must be greater than or
equal to zero and their sum must be greater than zero. If the tuple’s length is neither 1 nor the number of channels
or the requirement above is not satisfied, the operator returns an error message.
The purpose of using other channelCoefficients than the arithmetic mean is to adapt to different color
codes. The coherence direction is a geometrical information of the composite image, which is given by high
contrasts such as edges. Thus the more contrast a channel has, the more geometrical information it contains, and
consequently the greater its coefficient should be chosen (relative to the others). For RGB images, [0.299, 0.587,
0.114] is a good choice.
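The normalization of channelCoefficients into the convex weights ai can be sketched directly from the rules above (Python, illustrative only):

```python
def channel_weights(channel_coefficients, num_channels):
    """Turn the channelCoefficients tuple into convex-combination
    weights a_i, following the rules described in the text."""
    cc = list(channel_coefficients)
    if len(cc) == 1:
        return [1.0 / num_channels] * num_channels   # arithmetic mean
    if len(cc) != num_channels:
        raise ValueError("length must be 1 or the number of channels")
    if any(c < 0 for c in cc) or sum(cc) <= 0:
        raise ValueError("coefficients must be >= 0 with positive sum")
    s = sum(cc)
    return [c / s for c in cc]                       # sums to 1

print(channel_weights([0.299, 0.587, 0.114], 3))  # the RGB choice above
print(channel_weights([1], 3))                    # arithmetic mean
```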
The weight in the scheme is the product of a directional component and a distance component. If p is the 2D
coordinate vector of the current pixel to be inpainted and q the 2D coordinate of a pixel in the neighborhood (the
disc restricted to already known pixels), the directional component measures the deviation of the vector p − q
from the coherence direction. If the deviation exponentially scaled by β is large, a low directional component is
assigned, whereas if it is small, a large directional component is assigned. β is controlled by kappa (in percent):
β = 20 ∗ epsilon ∗ kappa/100
kappa defines how important it is to propagate information along the coherence direction, so a large kappa
yields sharp edges, while a low kappa allows for more diffusion.
A special case is when kappa is zero: In this case the directional component of the weight is constant (one).
The direction estimation step is then skipped to save computational costs and the parameters sigma, rho,
channelCoefficients become meaningless, i.e., the propagation of information is not based on the
structures visible in the image.
The distance component is 1/|p − q|. Consequently, if q is far away from p, a low distance component is assigned,
whereas if it is near to p, a high distance component is assigned.
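Putting the two components together, a weight function might look as follows in Python. Note that the exact directional decay used by InpaintingCt is not given in the text, so the exponential form below is an assumption for illustration:

```python
import math

def ct_weight(p, q, coherence_dir, epsilon, kappa):
    """Sketch of a coherence-transport weight for a known pixel q seen
    from pixel p: the distance component 1/|p-q| times a directional
    component that decays with the deviation of p-q from the coherence
    direction. The exponential decay here is an assumed form."""
    dx, dy = p[0] - q[0], p[1] - q[1]
    dist = math.hypot(dx, dy)
    beta = 20.0 * epsilon * kappa / 100.0
    if kappa == 0:
        directional = 1.0            # direction estimation is skipped
    else:
        # deviation: component of the unit vector (p-q)/|p-q|
        # perpendicular to the coherence direction
        dev = abs(dx * coherence_dir[1] - dy * coherence_dir[0]) / dist
        directional = math.exp(-beta * dev)
    return directional / dist        # distance component 1/|p-q|

c = (1.0, 0.0)                       # coherence direction along x
# a neighbor along the coherence direction outweighs one across it:
print(ct_weight((5, 5), (3, 5), c, 5, 25) > ct_weight((5, 5), (5, 3), c, 5, 25))
```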
Parameter
Parallelization Information
InpaintingCt is reentrant and automatically parallelized (on tuple level).
Alternatives
HarmonicInterpolation, InpaintingAniso, InpaintingMcf, InpaintingCed,
InpaintingTexture
References
Folkmar Bornemann, Tom März: “Fast Image Inpainting Based On Coherence Transport”; Journal of Mathemati-
cal Imaging and Vision; vol. 28, no. 3; pp. 259-278; 2007.
Module
Foundation
ut = div(∇u/|∇u|)|∇u| = curv(u)|∇u|
on the gray value function u defined in the region region by the input image image at a time t0 = 0.
The discretized equation is solved in iterations time steps of length theta, so that the output image
inpaintedImage contains the gray value function at the time iterations · theta.
A stationary state of the mean curvature flow equation, which is also the basis of the operator
MeanCurvatureFlow, has the special property that the level lines of u all have the curvature 0. This means that
after sufficiently many iterations there are only straight edges left inside the computation area of the output image
inpaintedImage. By this, the structure of objects inside of region can be simplified, while the remaining
edges are continuously connected to those of the surrounding image matrix. This allows for a removal of image
errors and unwanted objects in the input image, a so-called image inpainting, which is only weakly visible to a
human beholder since there remain no obvious artefacts or smudges.
To detect the image direction more robustly, in particular on noisy input data, an additional isotropic smoothing
step can precede the computation of the gray value gradients. The parameter sigma determines the magnitude of
the smoothing by means of the standard deviation of a corresponding Gaussian convolution kernel, as used in the
operator IsotropicDiffusion for isotropic image smoothing.
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Input image.
. region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; HRegion
Inpainting region.
. inpaintedImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Output image.
. sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Smoothing for derivative operator.
Default Value : 0.5
Suggested values : Sigma ∈ {0.0, 0.1, 0.5, 1.0}
Restriction : Sigma ≥ 0
. theta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Time step.
Default Value : 0.5
Suggested values : Theta ∈ {0.1, 0.2, 0.3, 0.4, 0.5}
Restriction : 0 < Theta ≤ 0.5
. iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Number of iterations.
Default Value : 10
Suggested values : Iterations ∈ {1, 5, 10, 20, 50, 100, 500}
Restriction : Iterations ≥ 1
Parallelization Information
InpaintingMcf is reentrant and automatically parallelized (on tuple level).
Alternatives
HarmonicInterpolation, InpaintingCt, InpaintingAniso, InpaintingCed,
InpaintingTexture
References
M. G. Crandall, P. Lions; “Convergent Difference Schemes for Nonlinear Parabolic Equations and Mean Curvature
Motion”; Numer. Math. 75 pp. 17-41; 1996.
G. Aubert, P. Kornprobst; “Mathematical Problems in Image Processing”; Applied Mathematical Sciences 147;
Springer, New York; 2002.
Module
Foundation
3.9 Lines
static void HOperatorSet.BandpassImage ( HObject image,
out HObject imageBandpass, HTuple filterType )
filterType: ’lines’
In contrast to the edge operator SobelAmp this filter detects lines instead of edges, i.e., two closely adjacent
edges.
0 −2 −2 −2 0
−2 0 3 0 −2
−2 3 12 3 −2
−2 0 3 0 −2
0 −2 −2 −2 0
At the border of the image the gray values are mirrored. Over- and underflows of gray values are clipped. The
resulting images are returned in imageBandpass.
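The filtering with the mask above can be reproduced in a few lines of Python. This is a sketch only: HALCON's exact scaling of the raw convolution sum is not specified here, so the unscaled sum is assumed:

```python
MASK = [[ 0, -2, -2, -2,  0],
        [-2,  0,  3,  0, -2],
        [-2,  3, 12,  3, -2],
        [-2,  0,  3,  0, -2],
        [ 0, -2, -2, -2,  0]]

def bandpass_lines(img):
    """Convolve a 2D list-of-lists byte image with the 5x5 line mask,
    mirroring gray values at the border and clipping to [0, 255]."""
    h, w = len(img), len(img[0])

    def mirror(i, n):                 # reflect out-of-range indices
        if i < 0:
            return -i
        if i >= n:
            return 2 * n - 2 - i
        return i

    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            s = 0
            for dr in range(-2, 3):
                for dc in range(-2, 3):
                    s += (MASK[dr + 2][dc + 2]
                          * img[mirror(r + dr, h)][mirror(c + dc, w)])
            out[r][c] = min(255, max(0, s))   # clip over-/underflow
    return out

# A one-pixel-wide vertical line of gray value 10 on background 0:
img = [[0, 0, 10, 0, 0] for _ in range(5)]
print(bandpass_lines(img)[2][2])   # strong response on the line
```

Note that the mask coefficients sum to zero, so constant image patches yield a response of 0, while a thin line produces a strong positive response at its center.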
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Input images.
. imageBandpass (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Bandpass-filtered images.
. filterType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Filter type: currently only ’lines’ is supported.
Default Value : "lines"
List of values : FilterType ∈ {"lines"}
Example (Syntax: C)
bandpass_image(Image,&LineImage,"lines");
threshold(LineImage,&Lines,60.0,255.0);
skeleton(Lines,&ThinLines);
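The effect of the 5 × 5 ’lines’ mask shown above can be reproduced with a plain convolution. The following NumPy/SciPy sketch is an illustration only (it assumes an 8-bit single-channel image, whereas the real operator additionally handles multichannel images); border mirroring and clipping follow the description above:

```python
import numpy as np
from scipy.ndimage import convolve

# The 5x5 'lines' bandpass mask from the text above.
LINES_MASK = np.array([
    [ 0, -2, -2, -2,  0],
    [-2,  0,  3,  0, -2],
    [-2,  3, 12,  3, -2],
    [-2,  0,  3,  0, -2],
    [ 0, -2, -2, -2,  0],
])

def bandpass_lines(image):
    """Convolve an 8-bit image with the 'lines' mask.

    Gray values are mirrored at the image border, and over- and
    underflows are clipped to [0, 255], as described for BandpassImage.
    """
    out = convolve(image.astype(np.int32), LINES_MASK, mode='mirror')
    return np.clip(out, 0, 255).astype(np.uint8)
```

Note that the mask coefficients sum to zero, so flat image regions yield a response of 0, while one-pixel-wide lines produce a strong positive response.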
Result
BandpassImage returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behaviour
can be set via SetSystem(’no_object_result’,<Result>). If necessary, an exception handling is
raised.
Parallelization Information
BandpassImage is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
Threshold, Skeleton
Alternatives
ConvolImage, TopographicSketch, TextureLaws
See also
HighpassImage, GraySkeleton
Module
Foundation
If extractWidth is set to ’true’ the line width is extracted for each line point. Since the line extractor is
unable to extract certain junctions for differential-geometric reasons, it tries to extract these junctions by
different means if completeJunctions is set to ’true’.
LinesColor links the line points into lines by using an algorithm similar to a hysteresis threshold operation,
which is also used in LinesGauss and EdgesColorSubPix. Points with an amplitude larger than high
are immediately accepted as belonging to a line, while points with an amplitude smaller than low are rejected.
All other points are accepted as lines if they are connected to accepted line points (see also LinesGauss).
Here, amplitude means the line amplitude of the dark line (see LinesGauss and LinesFacet). This value
corresponds to the third directional derivative of the smoothed input image in the direction perpendicular to the
line.
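The hysteresis linking described above can be sketched as a two-threshold connected-component selection; the following NumPy/SciPy snippet illustrates the principle only (it is not HALCON's sub-pixel implementation):

```python
import numpy as np
from scipy import ndimage

def hysteresis(amplitude, low, high):
    """Accept pixels with amplitude > high outright; accept pixels with
    amplitude > low only if they are connected (8-neighborhood) to an
    accepted pixel."""
    strong = amplitude > high
    weak = amplitude > low
    # Label connected components of the 'weak' mask and keep those
    # components that contain at least one 'strong' pixel.
    labels, n = ndimage.label(weak, structure=np.ones((3, 3)))
    keep = np.unique(labels[strong])
    return np.isin(labels, keep[keep > 0])
```

Weak responses that are not linked to any strong response are rejected, which is exactly the behavior described for points between low and high.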
For the choice of the thresholds high and low one has to keep in mind that the third directional derivative depends
on the amplitude and width of the line as well as the choice of sigma. The value of the third derivative depends
linearly on the amplitude, i.e., the larger the amplitude, the larger the response. For the width of the line there
is an inverse dependence: The wider the line is, the smaller the response gets. This holds analogously for the
dependence on sigma: The larger sigma is chosen, the smaller the third derivative will be. This means that
for larger smoothing correspondingly smaller values for high and low should be chosen.
The extracted lines are returned in a topologically sound data structure in lines. This means that lines are
correctly split at junction points.
LinesColor defines the following attributes for each line point if extractWidth was set to ’false’:
’angle’ The angle of the direction perpendicular to the line (oriented such that the normal vectors point to
the right side of the line as the line is traversed from start to end point; the angles are given with
respect to the row axis of the image.)
’response’ The magnitude of the second derivative
If extractWidth was set to ’true’, additionally the following attributes are defined:
’width_left’ The line width to the left of the line
’width_right’ The line width to the right of the line
All these attributes can be queried via the operator GetContourAttribXld.
Attention
In general, but in particular if the line width is to be extracted, sigma ≥ w/√3 should be selected, where w is
the width (half the diameter) of the lines in the image. As the lowest allowable value, sigma ≥ w/2.5 must be
selected. If, for example, lines with a width of 4 pixels (diameter 8 pixels) are to be extracted, sigma ≥ 2.3
should be selected. If it is expected that staircase lines are present in at least one channel, and if such lines should
be extracted, in addition to the above restriction, sigma ≤ w should be selected. This is necessary because
staircase lines turn into normal step edges for large amounts of smoothing, and therefore no longer appear as dark
lines in the amplitude image of the color edge filter.
Parameter
lead to worse localization of the line. The parameters of the polynomial are used to calculate the line direction
for each pixel. Pixels which exhibit a local maximum in the second directional derivative perpendicular to the
line direction are marked as line points. The line points found in this manner are then linked to contours. This
is done by immediately accepting line points that have a second derivative larger than high. Points that have
a second derivative smaller than low are rejected. All other line points are accepted if they are connected to
accepted points by a connected path. This is similar to a hysteresis threshold operation with infinite path length (see
HysteresisThreshold). However, this function is not used internally since it does not allow the extraction
of sub-pixel precise contours.
The gist of how to select the thresholds in the description of LinesGauss also holds for this operator. A value
of Sigma = 1.5 there roughly corresponds to a maskSize of 5 here.
The extracted lines are returned in a topologically sound data structure in lines. This means that lines are
correctly split at junction points.
LinesFacet defines the following attributes for each line point:
’angle’ The angle of the direction perpendicular to the line
’response’ The magnitude of the second derivative
These attributes can be queried via the operator GetContourAttribXld.
Attention
The smaller the filter size maskSize is chosen, the more short, fragmented lines will be extracted. This can lead
to considerably longer execution times.
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; HImage
Input image.
. lines (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; HXLDCont
Extracted lines.
. maskSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Size of the facet model mask.
Default Value : 5
List of values : MaskSize ∈ {3, 5, 7, 9, 11}
. low (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double / int / long)
Lower threshold for the hysteresis threshold operation.
Default Value : 3
Suggested values : Low ∈ {0, 0.5, 1, 2, 3, 4, 5, 8, 10}
Typical range of values : 0 ≤ Low ≤ 20
Recommended Increment : 0.5
Restriction : Low ≥ 0
. high (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double / int / long)
Upper threshold for the hysteresis threshold operation.
Default Value : 8
Suggested values : High ∈ {0, 0.5, 1, 2, 3, 4, 5, 8, 10, 12, 15, 18, 20, 25}
Typical range of values : 0 ≤ High ≤ 35
Recommended Increment : 0.5
Restriction : (High ≥ 0) ∧ (High ≥ Low)
. lightDark (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Extract bright or dark lines.
Default Value : "light"
List of values : LightDark ∈ {"dark", "light"}
Example (Syntax: HDevelop)
Complexity
Let A be the number of pixels in the domain of image. Then the runtime complexity is O(A ∗ MaskSize).
Let S = Width ∗ Height be the number of pixels of image. Then LinesFacet requires at least 55 ∗ S bytes of
temporary memory during execution.
Result
LinesFacet returns 2 (H_MSG_TRUE) if all parameters are correct and no error occurs during execution. If the
input is empty the behaviour can be set via SetSystem(’no_object_result’,<Result>). If necessary,
an exception handling is raised.
Parallelization Information
LinesFacet is reentrant and processed without parallelization.
Possible Successors
GenPolygonsXld
Alternatives
LinesGauss
See also
BandpassImage, DynThreshold, TopographicSketch
References
A. Busch: “Fast Recognition of Lines in Digital Images Without User-Supplied Parameters”. In H. Ebner, C.
Heipke, K. Eder, eds., “Spatial Information from Digital Photogrammetry and Computer Vision”, International
Archives of Photogrammetry and Remote Sensing, Vol. 30, Part 3/1, pp. 91-97, 1994.
Module
2D Metrology
For the choice of the thresholds high and low one has to keep in mind that the second directional derivative
depends on the amplitude and width of the line as well as the choice of sigma. The value of the second derivative
depends linearly on the amplitude, i.e., the larger the amplitude, the larger the response. For the width of the
line there is an approximately inverse exponential dependence: The wider the line is, the smaller the response
gets. This holds analogously for the dependence on sigma: The larger sigma is chosen, the smaller the second
derivative will be. This means that for larger smoothing correspondingly smaller values for high and low have
to be chosen. Two examples help to illustrate this: If 5 pixel wide lines with an amplitude larger than 100 are to be
extracted from an image with a smoothing of sigma = 1.5, high should be chosen larger than 14. If, on the other
hand, 10 pixel wide lines with an amplitude larger than 100 and a sigma = 3 are to be detected, high should be
chosen larger than 3.5. For the choice of low values between 0.25 high and 0.5 high are appropriate.
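The numeric examples above follow from the response of an ideal bar-shaped line profile: after smoothing with a Gaussian of standard deviation sigma, the second derivative at the center of a bar of amplitude a and half-width w has magnitude 2·a·w/(sigma³·√(2π))·exp(−w²/(2·sigma²)). The following sketch reproduces both numbers under this ideal-bar assumption (it is an illustration, not part of HALCON):

```python
import math

def line_response(amplitude, width, sigma):
    """Second-derivative response at the center of a bar-shaped line.

    `width` is the half-width w of the line (half the diameter).
    The response magnitude is 2*a*|g_sigma'(w)|, where g_sigma' is the
    first derivative of the Gaussian smoothing kernel.
    """
    g1 = width / (sigma ** 3 * math.sqrt(2.0 * math.pi)) \
         * math.exp(-width ** 2 / (2.0 * sigma ** 2))
    return 2.0 * amplitude * g1

# 5 pixel wide lines (w = 2.5), amplitude 100, sigma = 1.5:
print(round(line_response(100, 2.5, 1.5), 1))   # -> 14.7, hence high > 14
# 10 pixel wide lines (w = 5), amplitude 100, sigma = 3:
print(round(line_response(100, 5.0, 3.0), 1))   # -> 3.7, hence high > 3.5
```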
The extracted lines are returned in a topologically sound data structure in lines. This means that lines are
correctly split at junction points.
LinesGauss defines the following attributes for each line point if extractWidth was set to ’false’:
’angle’ The angle of the direction perpendicular to the line
’response’ The magnitude of the second derivative
If extractWidth was set to ’true’ and correctPositions to ’false’, the following attributes are defined in
addition to the above ones:
’width_left’ The line width to the left of the line
’width_right’ The line width to the right of the line
Finally, if correctPositions was set to ’true’, additionally the following attributes are defined:
’asymmetry’ The asymmetry of the line point
’contrast’ The contrast of the line point
Here, the asymmetry is positive if the asymmetric part, i.e., the part with the weaker gradient, is on the right side of
the line, while it is negative if the asymmetric part is on the left side of the line. All these attributes can be queried
via the operator GetContourAttribXld.
Attention
In general, but in particular if the line width is to be extracted, sigma ≥ w/√3 should be selected, where w is
the width (half the diameter) of the lines in the image. As the lowest allowable value, sigma ≥ w/2.5 must be
selected. If, for example, lines with a width of 4 pixels (diameter 8 pixels) are to be extracted, sigma ≥ 2.3
should be selected.
Parameter
Complexity
Let A be the number of pixels in the domain of image. Then the runtime complexity is O(A ∗ Sigma).
Let S = Width ∗ Height be the number of pixels of image. Then LinesGauss requires at least 55 ∗ S bytes of
temporary memory during execution.
Result
LinesGauss returns 2 (H_MSG_TRUE) if all parameters are correct and no error occurs during execution. If the
input is empty the behaviour can be set via SetSystem(’no_object_result’,<Result>). If necessary,
an exception handling is raised.
Parallelization Information
LinesGauss is reentrant and processed without parallelization.
Possible Successors
GenPolygonsXld
Alternatives
LinesFacet
See also
BandpassImage, DynThreshold, TopographicSketch
References
C. Steger: “Extracting Curvilinear Structures: A Differential Geometric Approach”. In B. Buxton, R. Cipolla, eds.,
“Fourth European Conference on Computer Vision”, Lecture Notes in Computer Science, Volume 1064, Springer
Verlag, pp. 630-641, 1996.
C. Steger: “Extraction of Curved Lines from Images”. In “13th International Conference on Pattern Recognition”,
Volume II, pp. 251-255, 1996.
C. Steger: “An Unbiased Detector of Curvilinear Structures”. Technical Report FGBV-96-03, Forschungsgruppe
Bildverstehen (FG BV), Informatik IX, Technische Universität München, July 1996.
Module
2D Metrology
3.10 Match
static void HOperatorSet.ExhaustiveMatch ( HObject image,
HObject regionOfInterest, HObject imageTemplate,
out HObject imageMatch, HTuple mode )
HImage HImage.ExhaustiveMatch ( HRegion regionOfInterest,
HImage imageTemplate, string mode )
whereby X[i][j] indicates the gray value in the ith column and jth row of the image X. (l, c) is the center of
the region of imageTemplate. u and v are chosen so that all points of the template will be reached; i, j
run across the regionOfInterest. At the image frame only those parts of imageTemplate are
considered which lie inside the image (i.e., u and v are restricted correspondingly). Range of values: 0 -
255 (255 is the best fit).
’dfd’ Calculating the average “displaced frame difference”:
imageMatch[i][j] = ( Σu,v |image[i − u][j − v] − imageTemplate[l − u][c − v]| ) / AREA(imageTemplate)
The terms are the same as in ’norm_correlation’. AREA(X) means the area of the region X. Range of values:
0 (best fit) - 255.
Calculating the normalized correlation as well as the “displaced frame difference” is (with regard to the
area of imageTemplate) very time consuming. Therefore it is important to restrict the input region
(regionOfInterest) if possible, i.e., to apply the filter only in a very confined “region of interest”.
As far as quality is concerned, both modes return comparable results, but the mode ’dfd’ is faster by a factor
of about 3.5.
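The ’dfd’ mode described above can be sketched numerically with a direct, unoptimized loop. The sketch below simplifies in two ways: the template is assumed rectangular with odd dimensions and only placements fully inside the image are evaluated, whereas the real operator restricts u and v at the image frame:

```python
import numpy as np

def dfd_map(image, template):
    """Average displaced frame difference for every placement of the
    (odd-sized) template centered inside the image; positions where the
    template would leave the image are set to NaN."""
    th, tw = template.shape
    area = th * tw
    H, W = image.shape
    out = np.full((H, W), np.nan)
    for i in range(th // 2, H - th // 2):
        for j in range(tw // 2, W - tw // 2):
            win = image[i - th // 2:i + th // 2 + 1,
                        j - tw // 2:j + tw // 2 + 1]
            out[i, j] = np.abs(win - template).sum() / area
    return out
```

A perfect match yields 0, the best fit, consistent with the value range stated above.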
Parameter
read_image(Image,’monkey’)
disp_image(Image,WindowHandle)
draw_rectangle2(WindowHandle,Row,Column,Phi,Length1,Length2)
gen_rectangle2(Rectangle,Row,Column,Phi,Length1,Length2)
reduce_domain(Image,Rectangle,Template)
exhaustive_match(Image,Image,Template,ImageMatch,’dfd’)
invert_image(ImageMatch,ImageInvert)
local_max(ImageInvert,Maxima)
union1(Maxima,AllMaxima)
add_channels(AllMaxima,ImageInvert,FitMaxima)
threshold(FitMaxima,BestFit,230.0,255.0)
disp_region(BestFit,WindowHandle).
Result
If the parameter values are correct, the operator ExhaustiveMatch returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available) the behaviour can be set via SetSystem
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
ExhaustiveMatch is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
DrawRegion, DrawRectangle1
Possible Successors
LocalMax, Threshold
Alternatives
ExhaustiveMatchMg
Module
Foundation
Parameter
read_image(&Image,"monkey");
disp_image(Image,WindowHandle);
draw_rectangle2(WindowHandle,&Row,&Column,&Phi,&Length1,&Length2);
gen_rectangle2(&Rectangle,Row,Column,Phi,Length1,Length2);
reduce_domain(Image,Rectangle,&Template);
exhaustive_match_mg(Image,Template,&ImageMatch,"dfd",1,30);
invert_image(ImageMatch,&ImageInvert);
local_max(ImageInvert,&BestFit);
disp_region(BestFit,WindowHandle);
Result
If the parameter values are correct, the operator ExhaustiveMatchMg returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available) the behaviour can be set via SetSystem
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
ExhaustiveMatchMg is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
DrawRegion, DrawRectangle1
Possible Successors
Threshold, LocalMax
Alternatives
ExhaustiveMatch
See also
GenGaussPyramid
Module
Foundation
gen_gauss_pyramid(Image,&Pyramid,"weighted",0.5);
count_obj(Pyramid,&num);
for (i=1; i<=num; i++)
{
select_obj(Pyramid,&Single,i);
disp_image(Single,WindowHandle);
clear(Single);
}
Parallelization Information
GenGaussPyramid is reentrant and automatically parallelized (on channel level).
Possible Successors
ImageToChannels, CountObj, SelectObj, CopyObj
Alternatives
ZoomImageSize, ZoomImageFactor
See also
AffineTransImage
Module
Foundation
Parallelization Information
Monotony is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
BinomialFilter, GaussImage, MedianImage, MeanImage, SmoothImage, InvertImage
Possible Successors
Threshold, ExhaustiveMatch, DispImage
Alternatives
LocalMax, TopographicSketch, CornerResponse
Module
Foundation
3.11 Misc
static void HOperatorSet.ConvolImage ( HObject image,
out HObject imageResult, HTuple filterMask, HTuple margin )
All image points are convolved with the filter mask. If an overflow or underflow occurs, the resulting gray value
is clipped. Hence, if filters that result in negative output values are used (e.g., derivative filters) the input image
should be of type int2. If a filename is given in filterMask the filter mask is read from a text file with the
following structure:
<Mask size>
<Inverse weight of the mask>
<Matrix>
The first line contains the size of the filter mask, given as two numbers separated by white space (e.g., 3 3 for
3 × 3). Here, the first number defines the height of the filter mask, while the second number defines its width. The
next line contains the inverse weight of the mask, i.e., the number by which the convolution of a particular image
point is divided. The remaining lines contain the filter mask as integer numbers (separated by white space), one
line of the mask per line in the file. The file must have the extension “.fil”. This extension must not be passed to
the operator. If the filter mask is to be computed from a tuple, the tuple given in filterMask must also satisfy
the structure described above. However, in this case the line feed is omitted.
For example, let’s assume we want to use the following filter mask:
         1 2 1
(1/16) · 2 4 2
         1 2 1
If the filter mask should be generated from a file, then the file should look like this:
3 3
16
1 2 1
2 4 2
1 2 1
In contrast, if the filter mask should be generated from a tuple, then the following tuple must be passed in
filterMask:
[3,3,16,1,2,1,2,4,2,1,2,1]
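The tuple layout described above can be sketched as follows; this is an illustration of the format only (border handling and the integer clipping of the real operator are simplified, and the helper names are hypothetical):

```python
import numpy as np
from scipy.ndimage import convolve

def mask_from_tuple(t):
    """Parse a ConvolImage-style filter tuple into (mask, inverse_weight).

    Layout: [height, width, inverse_weight, mask entries row by row].
    """
    h, w, inv_weight = int(t[0]), int(t[1]), t[2]
    mask = np.array(t[3:], dtype=float).reshape(h, w)
    return mask, inv_weight

def convol_image(image, t, mode='mirror'):
    """Convolve with the mask and divide by the inverse weight."""
    mask, inv_weight = mask_from_tuple(t)
    return convolve(image.astype(float), mask, mode=mode) / inv_weight

# The 3x3 smoothing mask from the text above, given as a tuple:
smoothed = convol_image(np.full((5, 5), 16.0),
                        [3, 3, 16, 1, 2, 1, 2, 4, 2, 1, 2, 1])
```

Since the mask entries sum to 16 and the inverse weight is 16, a constant image is reproduced unchanged.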
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Image to be convolved.
. imageResult (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . multichannel-image(-array) ; HImage
Convolved result image.
. filterMask (input_control) . . . . . . . . . . . . . . . . . . . . . . . filename.read(-array) ; HTuple (string / int / long)
Filter mask as file name or tuple.
Default Value : "sobel"
Suggested values : FilterMask ∈ {"sobel", "laplace4", "lowpas_3_3"}
. margin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string / int / long / double)
Border treatment.
Default Value : "mirrored"
Suggested values : Margin ∈ {"mirrored", "cyclic", "continued", 0, 30, 60, 90, 120, 150, 180, 210, 240,
255}
Parallelization Information
ConvolImage is reentrant and automatically parallelized (on tuple level, channel level).
Module
Foundation
domain depending on the filter width. This may lead to undesirable side effects especially in the border region
of the domain. For example, if the foreground (domain) and the background of the image differ strongly in
brightness, the result of a filter operation may lead to undesired darkening or brightening at the border of the
domain. In order to avoid this drawback, the domain is expanded by ExpandDomainGray in a preliminary
stage, copying the gray values of the border pixels to the outside of the domain. In addition, the domain itself
is also expanded to reflect the newly set pixels. Therefore, in many cases it is reasonable to reduce the domain
again (ReduceDomain or ChangeDomain) after using ExpandDomainGray and call the filter operation
afterwards. expansionRange should be set to half of the filter width.
Parameter
read_image(Fabrik, ’fabrik.tif’);
gen_rectangle2(Rectangle_Label,243,320,-1.55,62,28);
reduce_domain(Fabrik, Rectangle_Label, Fabrik_Label);
/* Character extraction without gray value expansion: */
mean_image(Fabrik_Label,Label_Mean_normal,31,31);
dyn_threshold(Fabrik_Label,Label_Mean_normal,Characters_normal,10,’dark’);
dev_display(Fabrik);
dev_display(Characters_normal);
/* The characters in the border region are not extracted ! */
stop();
/* Character extraction with gray value expansion: */
expand_domain_gray(Fabrik_Label, Label_expanded,15);
reduce_domain(Label_expanded,Rectangle_Label, Label_expanded_reduced);
mean_image(Label_expanded_reduced,Label_Mean_expanded,31,31);
dyn_threshold(Fabrik_Label,Label_Mean_expanded,Characters_expanded,10,’dark’);
dev_display(Fabrik);
dev_display(Characters_expanded);
/* Now, even in the border region the characters are recognized */
Complexity
Let L be the perimeter of the domain. Then the runtime complexity is approximately O(L ∗ expansionRange).
Result
ExpandDomainGray returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception han-
dling is raised.
Parallelization Information
ExpandDomainGray is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
ReduceDomain
Possible Successors
ReduceDomain, MeanImage, DynThreshold
See also
ReduceDomain, MeanImage
Module
Foundation
HImage HImage.GrayInside ( )
Calculate the lowest possible gray value on an arbitrary path to the image border for each point in the image.
GrayInside determines the “cheapest” path to the image border for each point in the image, i.e., the path on
which the lowest gray values have to be overcome. The resulting image contains the difference of the gray value
of the particular point and the maximum gray value on the path. Bright areas in the result image therefore signify
that these areas (which are typically dark in the original image) are surrounded by bright areas. Dark areas in the
result image signify that there are only small gray value differences between them and the image border (which
doesn’t mean that they are surrounded by dark areas; a small “gap” of dark values suffices). The value 0 (black) in
the result image signifies that only darker or equally bright pixels exist on the path to the image border.
The operator is implemented by first segmenting the image into basins and watersheds using the Watersheds
operator. If the image is regarded as a gray value mountain range, basins are the places where water accumulates
and the mountain ridges are the watersheds. Then, the watersheds are distributed to adjacent basins, thus leaving
only basins. The border of the domain (region) of the original image is now searched for the lowest gray value,
and the region in which it resides is given its result values. If the lowest gray value resides on the image border,
all result values can be calculated immediately using the gray value differences to the darkest point. If the smallest
found gray value lies in the interior of a basin, the lowest possible gray value has to be determined from the already
processed adjacent basins in order to compute the new values. An 8-neighborhood is used to determine adjacency.
The found region is subtracted from the regions yet to be processed, and the whole process is repeated. Thus, the
image is “stripped” from the outside.
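The “cheapest path” defined above can also be computed directly as a minimax-path search: for every pixel, find the smallest achievable maximum gray value on any 8-connected path to the border (the pixel itself included), then subtract the pixel's own gray value. The following Dijkstra-style sketch implements this definition for illustration; the real operator uses the watershed decomposition described above for efficiency:

```python
import heapq
import numpy as np

def gray_inside(image):
    """For each pixel: minimax barrier to the border minus its own gray
    value. A result > 0 means the pixel is enclosed by brighter values."""
    H, W = image.shape
    barrier = np.full((H, W), np.inf)
    heap = []
    # Seed the search with all border pixels.
    for i in range(H):
        for j in range(W):
            if i in (0, H - 1) or j in (0, W - 1):
                barrier[i, j] = image[i, j]
                heapq.heappush(heap, (float(image[i, j]), i, j))
    # Dijkstra-like sweep with path cost = maximum gray value on the path.
    while heap:
        b, i, j = heapq.heappop(heap)
        if b > barrier[i, j]:
            continue  # stale entry
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W:
                    nb = max(b, float(image[ni, nj]))
                    if nb < barrier[ni, nj]:
                        barrier[ni, nj] = nb
                        heapq.heappush(heap, (nb, ni, nj))
    return barrier - image
```

A dark pixel surrounded by a bright ring gets a large result (it must “overcome” the ring), while every border pixel gets 0.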
Analogously to Watersheds, it is advisable to apply a smoothing operation before calling GrayInside, e.g.,
BinomialFilter or GaussImage, in order to reduce the number of regions that result from the watershed
algorithm, and thus to speed up the processing.
Parameter
read_image(Bild,’coin’)
gauss_image (Bild,G_Bild,11)
open_window (0,0,512,512,0,’visible’,’’,WindowHandle)
gray_inside(G_Bild,Ausgabebild)
disp_image (Ausgabebild,WindowHandle).
Result
GrayInside always returns 2 (H_MSG_TRUE).
Parallelization Information
GrayInside is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
BinomialFilter, GaussImage, SmoothImage, MeanImage, MedianImage
Possible Successors
SelectShape, AreaCenter, CountObj
See also
Watersheds
Module
Foundation
HImage HImage.GraySkeleton ( )
Thinning of gray value images.
GraySkeleton applies a gray value thinning operation to the input image image. Figuratively, the gray
value “mountain range” is reduced to its ridge lines by setting the gray value of “hillsides” to the gray value
at the corresponding valley bottom. The resulting ridge lines are at most two pixels wide. This operator is es-
pecially useful for thinning edge images, and is thus an alternative to NonmaxSuppressionAmp. In con-
trast to NonmaxSuppressionAmp, GraySkeleton preserves contours, but is much slower. In contrast to
Skeleton, this operator changes the gray values of an image while leaving its region unchanged.
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Image to be thinned.
. graySkeleton (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Thinned image.
Example (Syntax: HDevelop)
Result
GraySkeleton returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior can
be set via SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
GraySkeleton is reentrant and automatically parallelized (on tuple level, channel level).
Possible Successors
MeanImage
Alternatives
NonmaxSuppressionAmp, NonmaxSuppressionDir, LocalMax
See also
Skeleton, GrayDilationRect
Module
Foundation
Parameter
def_tab(Tab,I) :- I = 255, Tab = 0.
def_tab([Tk|Ts],I) :-
    Tk is 255 - I,
    Iw is I - 1,
    def_tab(Ts,Iw).
Result
The operator LutTrans returns the value 2 (H_MSG_TRUE) if the parameters are correct. Otherwise an excep-
tion is raised.
Parallelization Information
LutTrans is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Module
Foundation
sym := 255 − (255 / maskSize) · Σi=1..maskSize (|g(i) − g(−i)| / 255)^exponent
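Assuming g(i) and g(−i) denote the gray values at signed offsets i from the current pixel along the examined direction, the symmetry measure above can be sketched for a single 1-D profile as follows (an illustration, not the operator's implementation):

```python
def symmetry_score(profile, mask_size, exponent):
    """Point symmetry of a 1-D gray value profile around its center.

    `profile` must have length 2*mask_size + 1; g(i) and g(-i) are the
    values i pixels to either side of the center element.
    """
    c = mask_size
    s = sum((abs(profile[c + i] - profile[c - i]) / 255.0) ** exponent
            for i in range(1, mask_size + 1))
    return 255.0 - 255.0 / mask_size * s

# A perfectly symmetric profile scores the maximum of 255:
print(symmetry_score([10, 80, 200, 80, 10], mask_size=2, exponent=0.5))  # -> 255.0
```

Larger asymmetries reduce the score, with exponent controlling how strongly small asymmetries are weighted.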
read_image(Image,’monkey’)
symmetry(Image,ImageSymmetry,70,0.0,0.5)
threshold(ImageSymmetry,SymmPoints,170,255)
Result
If the parameter values are correct, the operator Symmetry returns the value 2 (H_MSG_TRUE). The
behavior in case of empty input (no input images available) is set via the operator SetSystem
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
Symmetry is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
Threshold
Module
Foundation
HImage HImage.TopographicSketch ( )
Compute the topographic primal sketch of an image.
TopographicSketch computes the topographic primal sketch of the input image image. This is done by
approximating the image locally by a bicubic polynomial (“facet model”). It serves to calculate the first and
second partial derivatives of the image, and thus to classify the image into 11 classes. These classes are coded in
the output image sketch as numbers from 1 to 11. The classes are as follows:
Peak 1
Pit 2
Ridge 3
Ravine 4
Saddle 5
Flat 6
Hillside Slope 7
Hillside Convex 8
Hillside Concave 9
Hillside Saddle 10
Hillside Inflection 11
In order to obtain the separate classes as regions, a threshold operation has to be applied to the result image with
the appropriate thresholds.
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Image for which the topographic primal sketch is to be computed.
. sketch (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Label image containing the 11 classes.
Example (Syntax: HDevelop)
Complexity
Let n be the number of pixels in the image. Then O(n) operations are performed.
Result
TopographicSketch returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behav-
ior can be set via SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
TopographicSketch is reentrant and automatically parallelized (on tuple level, channel level).
Possible Successors
Threshold
References
R. Haralick, L. Shapiro: “Computer and Robot Vision, Volume I”; Reading, Massachusetts, Addison-Wesley;
1992; Chapter 8.13.
Module
Foundation
3.12 Noise
static void HOperatorSet.AddNoiseDistribution ( HObject image,
out HObject imageNoise, HTuple distribution )
respective frequency for a gray value decrease of 2, and so on. Analogously, the value at position 257 defines the
frequency of pixels for which the gray value is increased by 1.
The distribution represents salt and pepper noise if at most one value at a position smaller than 256 is not
equal to zero and at most one value at a position larger than 256 is not equal to zero. In case of salt and pepper
noise, the noisified pixels are set to the minimum (pepper) and maximum (salt) values that can be represented by
imageNoise if the amount of pepper is indicated by the value at position 0 and the amount of salt is indicated
by the value at position 512 in the tuple.
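The 513-entry layout described above can be sketched as follows. The indexing (position 0 = pepper, 256 = unchanged, 512 = salt) follows the text; the percent semantics of the two distribution arguments are an assumption, and the helper names are hypothetical:

```python
import numpy as np

def sp_distribution(pepper_pct, salt_pct):
    """513-entry noise distribution: position 0 holds the pepper
    frequency, position 512 the salt frequency, position 256 the
    fraction of unchanged pixels (all values in percent)."""
    dist = np.zeros(513)
    dist[0] = pepper_pct
    dist[512] = salt_pct
    dist[256] = 100.0 - pepper_pct - salt_pct
    return dist

def add_noise_distribution(image, dist, rng=None):
    """Apply the distribution to a byte image: pepper pixels are set to
    the minimum (0), salt pixels to the maximum (255); all other
    positions shift the gray value by (position - 256)."""
    rng = rng or np.random.default_rng(0)
    pos = rng.choice(513, size=image.shape, p=dist / dist.sum())
    noisy = image.astype(int) + (pos - 256)
    noisy[pos == 0] = 0        # pepper
    noisy[pos == 512] = 255    # salt
    return np.clip(noisy, 0, 255).astype(np.uint8)
```

With at most one nonzero entry below position 256 and one above it, the distribution degenerates to pure salt-and-pepper noise, as stated above.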
Parameter
read_image(Image,’mreut’)
disp_image(Image,WindowHandle)
sp_distribution(30,30,Dist)
add_noise_distribution(Image,ImageNoise,Dist)
disp_image(ImageNoise,WindowHandle).
Result
AddNoiseDistribution returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the
behaviour can be set via SetSystem(’no_object_result’,<Result>). If necessary, an exception
handling is raised.
Parallelization Information
AddNoiseDistribution is reentrant and automatically parallelized (on tuple level, channel level, domain
level).
Possible Predecessors
GaussDistribution, SpDistribution, NoiseDistributionMean
Alternatives
AddNoiseWhite
See also
SpDistribution, GaussDistribution, NoiseDistributionMean, AddNoiseWhite
Module
Foundation
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Input image.
. imageNoise (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Noisy image.
Number of elements : ImageNoise = Image
. amp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Maximum noise amplitude.
Default Value : 60.0
Suggested values : Amp ∈ {1.0, 2.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Typical range of values : 1.0 ≤ Amp ≤ 1000.0
Minimum Increment : 1.0
Recommended Increment : 10.0
Restriction : Amp > 0
Example (Syntax: HDevelop)
read_image(Image,’fabrik’)
disp_image(Image,WindowHandle)
add_noise_white(Image,ImageNoise,90)
disp_image(ImageNoise,WindowHandle).
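A minimal sketch of the operator's effect, assuming additive noise uniformly distributed in [−amp, amp] (the exact noise model is not restated in this excerpt, so treat that distribution as an assumption):

```python
import numpy as np

def add_noise_white(image, amp, rng=None):
    """Add uniformly distributed noise in [-amp, amp] to a byte image,
    clipping the result to [0, 255]."""
    rng = rng or np.random.default_rng(0)
    noise = rng.uniform(-amp, amp, size=image.shape)
    return np.clip(image.astype(float) + noise, 0, 255).astype(np.uint8)
```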
Result
AddNoiseWhite returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behaviour
can be set via SetSystem(’no_object_result’,<Result>). If necessary, an exception handling is
raised.
Parallelization Information
AddNoiseWhite is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
AddNoiseDistribution
See also
AddNoiseDistribution, NoiseDistributionMean, GaussDistribution, SpDistribution
Module
Foundation
read_image(Image,’fabrik’)
disp_image(Image,WindowHandle)
gauss_distribution(30,Dist)
add_noise_distribution(Image,ImageNoise,Dist)
disp_image(ImageNoise,WindowHandle).
Parallelization Information
GaussDistribution is reentrant and processed without parallelization.
Possible Successors
AddNoiseDistribution
Alternatives
SpDistribution, NoiseDistributionMean
See also
SpDistribution, AddNoiseWhite, NoiseDistributionMean
Module
Foundation
Parallelization Information
NoiseDistributionMean is reentrant and processed without parallelization.
Possible Predecessors
DrawRegion, GenCircle, GenEllipse, GenRectangle1, GenRectangle2, Threshold,
ErosionCircle, BinomialFilter, GaussImage, SmoothImage, SubImage
Possible Successors
AddNoiseDistribution, DispDistribution
See also
MeanImage, GaussDistribution
Module
Foundation
read_image(Image,’fabrik’)
disp_image(Image,WindowHandle)
sp_distribution(30,30,Dist)
add_noise_distribution(Image,ImageNoise,Dist)
disp_image(ImageNoise,WindowHandle)
Parallelization Information
SpDistribution is reentrant and processed without parallelization.
Possible Successors
AddNoiseDistribution
Alternatives
GaussDistribution, NoiseDistributionMean
See also
GaussDistribution, NoiseDistributionMean, AddNoiseWhite
Module
Foundation
3.13 Optical-Flow
where w = (u, v, 1) is the optical flow vector field to be determined (with a time step of 1 in the third coordinate).
The image sequence is regarded as a continuous function f (x), where x = (r, c, t) and (r, c) denotes the position
and t the time. Furthermore, ED (w) denotes the data term, while ES (w) denotes the smoothness term, and α is a
regularization parameter that determines the smoothness of the solution. The regularization parameter α is passed
in flowSmoothness. While the data term encodes assumptions about the constancy of the object features in
consecutive images, e.g., the constancy of the gray values or the constancy of the first spatial derivative of the
gray values, the smoothness term encodes assumptions about the (piecewise) smoothness of the solution, i.e., the
smoothness of the vector field to be determined.
The FDRIG algorithm is based on the minimization of an energy functional that contains the following assump-
tions:
Constancy of the gray values: It is assumed that corresponding pixels in consecutive images of an image sequence
have the same gray value, i.e., that f (r + u, c + v, t + 1) = f (r, c, t). This can be written more compactly as
f (x + w) = f (x) using vector notation.
Constancy of the spatial gray value derivatives: It is assumed that corresponding pixels in consecutive images of an image sequence additionally have the same spatial gray value derivatives, i.e., that ∇2 f(r + u, c + v, t + 1) = ∇2 f(r, c, t) also holds, where ∇2 f = (∂r f, ∂c f). This can be written more compactly as ∇2 f(x + w) = ∇2 f(x).
In contrast to the gray value constancy, the gradient constancy has the advantage that it is invariant to additive global
illumination changes.
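The invariance can be checked directly: an additive global illumination change shifts all gray values by a constant and therefore leaves the spatial derivatives unchanged. A minimal numerical sketch:

```python
import numpy as np

# Arbitrary test image and a globally brightened copy
f = (np.arange(36, dtype=float).reshape(6, 6)) ** 1.5
g = f + 40.0              # additive global illumination change
grad_f = np.gradient(f)   # spatial derivatives of f (rows, cols)
grad_g = np.gradient(g)   # identical: the constant offset differentiates to 0
```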
Large displacements: It is assumed that large displacements, i.e., displacements larger than one pixel, occur. Under
this assumption, it makes sense to consciously abstain from using the linearization of the constancy assumptions
in the model that is typically proposed in the literature.
Statistical robustness in the data term: To reduce the influence of outliers, i.e., points that violate the constancy assumptions, they are penalized in a statistically robust manner, i.e., the customary non-robust quadratic penalization ΨD(s²) = s² is replaced by a linear penalization via ΨD(s²) = √(s² + ε²), where ε = 0.001 is a fixed regularization constant.
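The difference between the two penalizers can be sketched numerically (the helper names are illustrative):

```python
import math

EPS = 0.001  # the fixed regularization constant from the text

def psi_quadratic(s2):
    # customary non-robust quadratic penalization
    return s2

def psi_robust(s2):
    # statistically robust, approximately linear penalization
    return math.sqrt(s2 + EPS ** 2)

# For a residual s = 100 (an outlier), the quadratic penalty is 10000x
# that of s = 1, while the robust penalty is only about 100x larger,
# so single outliers influence the energy far less.
```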
Preservation of discontinuities in the flow field I: The solution is assumed to be piecewise smooth. While the actual smoothness is achieved by penalizing the first derivatives of the flow, |∇2 u|² + |∇2 v|², the use of a statistically robust (linear) penalty function ΨS(s²) = √(s² + ε²) with ε = 0.001 provides the desired preservation of edges in the movement in the flow field to be determined. This type of smoothness term is called flow-driven and isotropic.
Taking into account all of the above assumptions, the energy functional of the FDRIG algorithm can be written as

EFDRIG(w) = ∫ ΨD( |f(x + w) − f(x)|² + γ |∇2 f(x + w) − ∇2 f(x)|² ) dr dc   (gray value constancy, gradient constancy)
          + α ∫ ΨS( |∇2 u(x)|² + |∇2 v(x)|² ) dr dc   (smoothness assumption)
Here, α is the regularization parameter passed in flowSmoothness, while γ is the gradient constancy weight
passed in gradientConstancy. These two parameters, which constitute the model parameters of the FDRIG
algorithm, are described in more detail below.
The DDRAW algorithm is based on the minimization of an energy functional that contains the following assump-
tions:
Constancy of the gray values: It is assumed that corresponding pixels in consecutive images of an image sequence
have the same gray value, i.e., that f (x + w) = f (x).
Large displacements: It is assumed that large displacements, i.e., displacements larger than one pixel, occur. Under
this assumption, it makes sense to consciously abstain from using the linearization of the constancy assumptions
in the model that is typically proposed in the literature.
Statistical robustness in the data term: To reduce the influence of outliers, i.e., points that violate the constancy assumptions, they are penalized in a statistically robust manner, i.e., the customary non-robust quadratic penalization ΨD(s²) = s² is replaced by a linear penalization via ΨD(s²) = √(s² + ε²), where ε = 0.001 is a fixed regularization constant.
Preservation of discontinuities in the flow field II: The solution is assumed to be piecewise smooth. In contrast to
the FDRIG algorithm, which allows discontinuities everywhere, the DDRAW algorithm only allows discontinuities
at the edges in the original image. Here, the local smoothness is controlled in such a way that the flow field is sharp
across image edges, while it is smooth along the image edges. This type of smoothness term is called data-driven
and anisotropic.
All assumptions of the DDRAW algorithm can be combined into the following energy functional:

EDDRAW(w) = ∫ ΨD( |f(x + w) − f(x)|² ) dr dc   (gray value constancy)
          + α ∫ ( ∇2 u(x)ᵀ PNE(∇2 f(x)) ∇2 u(x) + ∇2 v(x)ᵀ PNE(∇2 f(x)) ∇2 v(x) ) dr dc ,   (smoothness assumption)
where PNE(∇2 f(x)) is a normalized projection matrix orthogonal to ∇2 f(x). This matrix ensures that the smoothness of the flow field is only assumed along the image edges. In contrast, no assumption is made with respect to the smoothness across the image edges, so that discontinuities in the solution may occur across the image edges. In this respect, εS = 0.001 serves as a regularization parameter that prevents the projection matrix PNE(∇2 f(x)) from becoming singular. In contrast to the FDRIG algorithm, there is only one model parameter for the DDRAW algorithm: the regularization parameter α. As mentioned above, α is described in more detail below.
As for the two approaches described above, the CLG algorithm uses certain assumptions:
Constancy of the gray values: It is assumed that corresponding pixels in consecutive images of an image sequence
have the same gray value, i.e., that f (x + w) = f (x).
Small displacements: In contrast to the two approaches above, it is assumed that only small displacements can occur, i.e., displacements in the order of a few pixels. This facilitates a linearization of the constancy assumptions in the model, and leads to the approximation f(x) + ∇3 f(x)ᵀ w(x) = f(x), i.e., ∇3 f(x)ᵀ w(x) = 0 should hold. Here, ∇3 f(x) denotes the gradient in the spatial as well as the temporal domain.
Local constancy of the solution: Furthermore, it is assumed that the flow field to be computed is locally constant. This facilitates the integration of the image data in the data term over the respective neighborhood of each pixel. This, in turn, increases the robustness of the algorithm against noise. Mathematically, this can be achieved by reformulating the quadratic data term as (∇3 f(x)ᵀ w(x))² = w(x)ᵀ ∇3 f(x) ∇3 f(x)ᵀ w(x). By performing a local Gaussian-weighted integration over a neighborhood specified by the integration scale ρ (passed in integrationSigma), the following data term is obtained: w(x)ᵀ (Gρ ∗ (∇3 f(x) ∇3 f(x)ᵀ)) w(x). Here, Gρ ∗ · denotes a convolution of the 3 × 3 matrix ∇3 f(x) ∇3 f(x)ᵀ with a Gaussian filter with a standard deviation of ρ (see DerivateGauss).
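The smoothed data term can be sketched as follows (a NumPy illustration of the Gaussian-smoothed spatio-temporal structure tensor; the finite-difference derivatives and the zero-padded convolution are simplifications of this sketch, not the HALCON implementation):

```python
import numpy as np

def _gauss_kernel(sigma):
    r = max(1, int(round(3 * sigma)))
    x = np.arange(-r, r + 1, dtype=float)
    k = np.exp(-x * x / (2.0 * sigma * sigma))
    return k / k.sum()

def _smooth(img, sigma):
    # separable Gaussian convolution ('same' size, zero-padded borders)
    k = _gauss_kernel(sigma)
    tmp = np.apply_along_axis(np.convolve, 0, img, k, 'same')
    return np.apply_along_axis(np.convolve, 1, tmp, k, 'same')

def clg_structure_tensor(f1, f2, rho):
    """Sketch: the entries of G_rho * (grad3 f . grad3 f^T) for one image
    pair. Spatial derivatives via central differences (np.gradient),
    temporal derivative via f2 - f1 (time step 1). Illustrative only."""
    fr, fc = np.gradient(f1.astype(float))
    ft = f2.astype(float) - f1.astype(float)
    g = (fr, fc, ft)
    # symmetric 3x3 tensor field; each entry smoothed with std rho
    return [[_smooth(a * b, rho) for b in g] for a in g]
```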
General smoothness of the flow field: Finally, the solution is assumed to be smooth everywhere in the image. This
particular type of smoothness term is called homogeneous.
All of the above assumptions can be combined into the following energy functional:

ECLG(w) = ∫ w(x)ᵀ ( Gρ ∗ (∇3 f(x) ∇3 f(x)ᵀ) ) w(x) dr dc   (gray value constancy)
        + α ∫ ( |∇2 u(x)|² + |∇2 v(x)|² ) dr dc ,   (smoothness assumption)
The corresponding model parameters are the regularization parameter α as well as the integration scale ρ (passed
in integrationSigma), which determines the size of the neighborhood over which to integrate the data term.
These two parameters are described in more detail below.
To compute the optical flow vector field for two consecutive images of an image sequence with the FDRIG,
DDRAW, or CLG algorithm, the solution that best fulfills the assumptions of the respective algorithm must be
determined. From a mathematical point of view, this means that a minimization of the above energy functionals
should be performed. For the FDRIG and DDRAW algorithms, so-called coarse-to-fine warping strategies play an
important role in this minimization, because they enable the calculation of large displacements. Thus, they are a
suitable means to handle the omission of the linearization of the constancy assumptions numerically in these two
approaches.
To calculate large displacements, coarse-to-fine warping strategies use two concepts that are closely interlocked:
The successive refinement of the problem (coarse-to-fine) and the successive compensation of the current image
pair by already computed displacements (warping). Algorithmically, such coarse-to-fine warping strategies can be
described as follows:
1. First, both images of the current image pair are zoomed down to a very coarse resolution level.
2. Then, the optical flow vector field is computed on this coarse resolution.
3. The vector field is then transferred to the next finer resolution level: It is applied there to the second image of the image sequence, i.e., the problem on the finer resolution level is compensated by the already computed optical flow field. This step is also known as warping.
4. The modified problem (difference problem) is now solved on the finer resolution level, i.e., the optical flow
vector field is computed there.
5. The steps 3-4 are repeated until the finest resolution level is reached.
6. The final result is computed by adding up the vector fields from all resolution levels.
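The steps above can be sketched as a generic loop (Python pseudostructure; pyramid1/pyramid2, solve_increment, warp, and upsample are hypothetical placeholders, not HALCON operators):

```python
def coarse_to_fine_flow(pyramid1, pyramid2, solve_increment, warp, upsample):
    """Sketch of steps 1-6. pyramid1/pyramid2: image pyramids ordered
    coarse -> fine; solve_increment(a, b) computes the flow between two
    images; warp(img, flow) compensates an image by a flow field;
    upsample(flow) transfers a flow field to the next finer level.
    All callbacks are hypothetical placeholders."""
    flow = None
    for a, b in zip(pyramid1, pyramid2):        # coarse -> fine
        if flow is not None:
            flow = upsample(flow)               # step 3: transfer the flow ...
            b = warp(b, flow)                   # ... and compensate (warping)
        increment = solve_increment(a, b)       # steps 2/4: difference problem
        flow = increment if flow is None else flow + increment  # step 6
    return flow
```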
This incremental computation of the optical flow vector field has the following advantage: While the coarse-to-fine
strategy ensures that the displacements on the finest resolution level are very small, the warping strategy ensures
that the displacements remain small for the incremental displacements (optical flow vector fields of the difference
problems). Since small displacements can be computed much more accurately than larger displacements, the
accuracy of the results typically increases significantly by using such a coarse-to-fine warping strategy. However,
instead of having to solve a single correspondence problem, an entire hierarchy of these problems must now be
solved. For the CLG algorithm, such a coarse-to-fine warping strategy is unnecessary since the model already
assumes small displacements.
The maximum number of resolution levels (warping levels), the resolution ratio between two consecutive resolution
levels, as well as the finest resolution level can be specified for the FDRIG as well as the DDRAW algorithm.
Details can be found below.
The minimization of functionals is mathematically very closely related to the minimization of functions: Just as a zero crossing of the first derivative is a necessary condition for the minimum of a function, the fulfillment of the so-called Euler-Lagrange equations is a necessary condition for the minimizing function of a functional (the minimizing function corresponds to the desired optical flow vector field in this case). The Euler-Lagrange equations are partial differential equations. By discretizing these Euler-Lagrange equations using finite
differences, large sparse nonlinear equation systems result for the FDRIG and DDRAW algorithms. Because
coarse-to-fine warping strategies are used, such an equation system must be solved for each resolution level, i.e.,
for each warping level. For the CLG algorithm, a single sparse linear equation system must be solved.
To ensure that the above nonlinear equation systems can be solved efficiently, the FDRIG and DDRAW algorithms use bidirectional multigrid methods. From a numerical point of view, these strategies are among the fastest methods for
solving large linear and nonlinear equation systems. In contrast to conventional nonhierarchical iterative methods,
e.g., the different linear and nonlinear Gauss-Seidel variants, the multigrid methods have the advantage that correc-
tions to the solution can be determined efficiently on coarser resolution levels. This, in turn, leads to a significantly
faster convergence. The basic idea of multigrid methods additionally consists of hierarchically computing these
correction steps, i.e., the computation of the error on a coarser resolution level itself uses the same strategy and
efficiently computes its error (i.e., the error of the error) by correction steps on an even coarser resolution level.
Depending on whether one or two error correction steps are performed per cycle, a so-called V or W cycle is
obtained. The corresponding strategies for stepping through the resolution hierarchy are as follows for two to four
resolution levels:
[Figure: V-cycle and W-cycle traversal of the resolution hierarchy for two, three, and four resolution levels, from fine (level 1) to coarse (level 4).]
Here, iterations on the original problem are denoted by large markers, while small markers denote iterations on
error correction problems.
Algorithmically, a correction cycle can be described as follows:
1. In the first step, several (few) iterations using an iterative linear or nonlinear basic solver are performed (e.g., a variant of the Gauss-Seidel solver). This step is called the pre-relaxation step.
2. In the second step, the current error is computed to correct the current solution (the solution after step 1).
For efficiency reasons, the error is calculated on a coarser resolution level. This step, which can be performed
iteratively several times, is called the coarse grid correction step.
3. In a final step, again several (few) iterations using the iterative linear or nonlinear basic solver of step 1 are performed. This step is called the post-relaxation step.
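A correction cycle of this kind can be sketched with a toy 1D multigrid solver for the model system −u″ = f (illustrative only; the actual HALCON solvers are the point- and line-coupled Gauss-Seidel variants described below):

```python
import numpy as np

def relax(u, b, iters):
    # Gauss-Seidel sweeps for the 1D system -u[i-1] + 2u[i] - u[i+1] = b[i]
    for _ in range(iters):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + b[i])
    return u

def residual(u, b):
    r = np.zeros_like(u)
    r[1:-1] = b[1:-1] - (-u[:-2] + 2.0 * u[1:-1] - u[2:])
    return r

def restrict(r):
    # full weighting onto the coarse grid; the factor 4 rescales the
    # right-hand side for the unscaled coarse-grid stencil
    rc = r[::2].copy()
    rc[1:-1] = 0.25 * r[1:-3:2] + 0.5 * r[2:-2:2] + 0.25 * r[3:-1:2]
    return 4.0 * rc

def prolong(e, n):
    # linear interpolation back to the fine grid with n points
    out = np.zeros(n)
    out[::2] = e
    out[1:-1:2] = 0.5 * (e[:-1] + e[1:])
    return out

def cycle(u, b, pre, post, gamma):
    """One correction cycle: gamma = 1 gives a V cycle, gamma = 2 a W cycle."""
    if len(u) <= 3:
        return relax(u, b, 50)            # coarsest level: solve (almost) exactly
    u = relax(u, b, pre)                  # 1. pre-relaxation step
    r = restrict(residual(u, b))
    e = np.zeros_like(r)
    for _ in range(gamma):                # 2. coarse grid correction step(s)
        e = cycle(e, r, pre, post, gamma)
    u = u + prolong(e, len(u))
    return relax(u, b, post)              # 3. post-relaxation step
```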
In addition, the solution can be initialized in a hierarchical manner. Starting from a very coarse variant of the
original (non)linear equation system, the solution is successively refined. To do so, interpolated solutions of
coarser variants of the equation system are used as the initialization of the next finer variant. On each resolution
level itself, the V or W cycles described above are used to efficiently solve the (non)linear equation system on that
resolution level. The corresponding multigrid methods are called full multigrid methods in the literature. The full
multigrid algorithm can be visualized as follows:
[Figure: resolution-level traversal of a full multigrid algorithm with hierarchical initialization, from coarse to fine.]
This example represents a full multigrid algorithm that uses two W correction cycles per resolution level of the
hierarchical initialization. The interpolation steps of the solution from one resolution level to the next are denoted by i and the two W correction cycles by w1 and w2. Iterations on the original problem are denoted by large markers,
while small markers denote iterations on error correction problems.
In the multigrid implementation of the FDRIG, DDRAW, and CLG algorithm, the following parameters can be
set: whether a hierarchical initialization is performed; the number of coarse grid correction steps; the maximum
number of correction levels (resolution levels); the number of pre-relaxation steps; the number of post-relaxation
steps. These parameters are described in more detail below.
The basic solver for the FDRIG algorithm is a point-coupled fixed-point variant of the linear Gauss-Seidel algo-
rithm. The basic solver for the DDRAW algorithm is an alternating line-coupled fixed-point variant of the same
type. The number of fixed-point steps can be specified for both algorithms with a further parameter. The basic
solver for the CLG algorithm is a point-coupled linear Gauss-Seidel algorithm. The transfer of the data between
the different resolution levels is performed by area-based interpolation and area-based averaging, respectively.
After the algorithms have been described, the effects of the individual parameters are discussed in the following.
The input images, along with their domains (regions of interest) are passed in image1 and image2. The com-
putation of the optical flow vector field vectorField is performed on the smallest surrounding rectangle of the
intersection of the domains of image1 and image2. The domain of vectorField is the intersection of the
two domains. Hence, by specifying reduced domains for image1 and image2, the processing can be focused
and runtime can potentially be saved. It should be noted, however, that all methods compute a global solution of
the optical flow. In particular, it follows that the solution on a reduced domain need not (and cannot) be identical
to the solution on the full domain restricted to the reduced domain.
smoothingSigma specifies the standard deviation of the Gaussian kernel that is used to smooth both input
images. The larger the value of smoothingSigma, the larger the low-pass effect of the Gaussian kernel, i.e., the
smoother the preprocessed image. Usually, smoothingSigma = 0.8 is a suitable choice. However, other values
in the interval [0, 2] are also possible. Larger standard deviations should only be considered if the input images are
very noisy. It should be noted that larger values of smoothingSigma lead to slightly longer execution times.
integrationSigma specifies the standard deviation ρ of the Gaussian kernel Gρ that is used for the local
integration of the neighborhood information of the data term. This parameter is used only in the CLG algorithm and
has no effect on the other two algorithms. Usually, integrationSigma = 1.0 is a suitable choice. However,
other values in the interval [0, 3] are also possible. Larger standard deviations should only be considered if the
input images are very noisy. It should be noted that larger values of integrationSigma lead to slightly longer
execution times.
flowSmoothness specifies the weight α of the smoothness term with respect to the data term. The larger the
value of flowSmoothness, the smoother the computed optical flow field. It should be noted that choosing
flowSmoothness too small can lead to unusable results, even though statistically robust penalty functions are
used, in particular if the warping strategy needs to predict too much information outside of the image. For byte
images with a gray value range of [0, 255], values of flowSmoothness around 20 for the flow-driven FDRIG
algorithm and around 1000 for the data-driven DDRAW algorithm and the homogeneous CLG algorithm typically
yield good results.
gradientConstancy specifies the weight γ of the gradient constancy with respect to the gray value constancy.
This parameter is used only in the FDRIG algorithm. For the other two algorithms, it does not influence the results.
For byte images with a gray value range of [0, 255], a value of gradientConstancy = 5 is typically a good
choice, since then both constancy assumptions are used to the same extent. For large changes in illumination, how-
ever, significantly larger values of gradientConstancy may be necessary to achieve good results. It should be
noted that for large values of the gradient constancy weight the smoothness parameter flowSmoothness must
also be chosen larger.
The parameters of the multigrid solver and for the coarse-to-fine warping strategy can be specified with the
generic parameters MGParamName and MGParamValue. Usually, it suffices to use one of the four default
parameter sets via MGParamName = ’default_parameters’ and MGParamValue = ’very_accurate’, ’accurate’,
’fast_accurate’, or ’fast’. The default parameter sets are described below. If the parameters should be speci-
fied individually, MGParamName and MGParamValue must be set to tuples of the same length. The values
corresponding to the parameters specified in MGParamName must be specified at the corresponding position in
MGParamValue.
MGParamName = ’warp_zoom_factor’ can be used to specify the resolution ratio between two consecutive warp-
ing levels in the coarse-to-fine warping hierarchy. ’warp_zoom_factor’ must be selected from the open interval
(0, 1). For performance reasons, ’warp_zoom_factor’ is typically set to 0.5, i.e., the number of pixels is halved in
each direction for each coarser warping level. This leads to an increase of 33% in the calculations that need to be
performed with respect to an algorithm that does not use warping. Values for ’warp_zoom_factor’ close to 1 can
lead to slightly better results. However, they require a disproportionately larger computation time, e.g., 426% for
’warp_zoom_factor’ = 0.9.
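The quoted overheads follow from a geometric series: with zoom factor z, warping level k contains z^(2k) times the pixels of the finest level, so the extra work relative to processing only the finest level is the sum of z^(2k) over k ≥ 1, assuming cost proportional to the pixel count. A quick check (sketch):

```python
def warping_overhead(zoom_factor):
    """Extra computation relative to a single-level algorithm, assuming
    cost proportional to the pixel count z**(2k) on warping level k."""
    q = zoom_factor ** 2
    return q / (1.0 - q)          # sum of q**k for k >= 1

# warping_overhead(0.5) -> 1/3  (~33% more work)
# warping_overhead(0.9) -> ~4.26 (~426% more work)
```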
MGParamName = ’warp_levels’ can be used to restrict the warping hierarchy to a maximum number of levels.
For ’warp_levels’ = 0, the largest possible number of levels is used. If the image size does not allow the specified number of levels to be used (taking the resolution ratio ’warp_zoom_factor’ into account), the largest possible number of levels is used. Usually, ’warp_levels’ should be set to 0.
MGParamName = ’warp_last_level’ can be used to specify the number of warping levels for which the flow
increment should no longer be computed. Usually, ’warp_last_level’ is set to 1 or 2, i.e., a flow increment is
computed for each warping level, or the finest warping level is skipped in the computation. Since in the latter case
the computation is performed on an image of half the resolution of the original image, the gained computation
time can be used to compute a more accurate solution, e.g., by using a full multigrid algorithm with additional
iterations. The more accurate solution is then interpolated to the full resolution.
The three parameters that specify the coarse-to-fine warping strategy are only used in the FDRIG and DDRAW
algorithms. They are ignored for the CLG algorithm.
MGParamName = ’mg_solver’ can be used to specify the general multigrid strategy for solving the (non)linear
equation system (in each warping level). For ’mg_solver’ = ’multigrid’, a normal multigrid algorithm (without
coarse-to-fine initialization) is used, while for ’mg_solver’ = ’full_multigrid’ a full multigrid algorithm (with
coarse-to-fine initialization) is used. Since a resolution reduction of 0.5 is used between two consecutive levels of
the coarse-to-fine initialization (in contrast to the resolution reduction in the warping strategy, this value is hard-
coded into the algorithm), the use of a full multigrid algorithm results in an increase of the computation time by
approximately 33% with respect to the normal multigrid algorithm. Setting ’mg_solver’ to ’full_multigrid’ typically yields numerically more accurate results than ’mg_solver’ = ’multigrid’.
MGParamName = ’mg_cycle_type’ can be used to specify whether a V or W correction cycle is used per multigrid
level. Since a resolution reduction of 0.5 is used between two consecutive levels of the respective correction cycle,
using a W cycle instead of a V cycle increases the computation time by approximately 50%. Using ’mg_cycle_type’
= ’w’ typically yields numerically more accurate results than ’mg_cycle_type’ = ’v’.
MGParamName = ’mg_levels’ can be used to restrict the multigrid hierarchy for the coarse-to-fine initialization
as well as for the actual V or W correction cycles. For ’mg_levels’ = 0, the largest possible number of levels is
used. If the image size does not allow the specified number of levels to be used, the largest possible number of levels
is used. Usually, ’mg_levels’ should be set to 0.
MGParamName = ’mg_cycles’ can be used to specify the total number of V or W correction cycles that are being
performed. If a full multigrid algorithm is used, ’mg_cycles’ refers to each level of the coarse-to-fine initialization.
Usually, one or two cycles are sufficient to yield a sufficiently accurate solution of the equation system. Typically,
the larger ’mg_cycles’, the more accurate the numerical results. This parameter enters almost linearly into the
computation time, i.e., doubling the number of cycles leads approximately to twice the computation time.
MGParamName = ’mg_pre_relax’ can be used to specify the number of iterations that are performed on each
level of the V or W correction cycles using the iterative basic solver before the actual error correction is performed.
Usually, one or two pre-relaxation steps are sufficient. Typically, the larger ’mg_pre_relax’, the more accurate the
numerical results.
MGParamName = ’mg_post_relax’ can be used to specify the number of iterations that are performed on each
level of the V or W correction cycles using the iterative basic solver after the actual error correction is performed.
Usually, one or two post-relaxation steps are sufficient. Typically, the larger ’mg_post_relax’, the more accurate
the numerical results.
As when increasing the number of correction cycles, increasing the number of pre- and post-relaxation steps increases the computation time asymptotically linearly. However, no additional restriction and prolongation operations (zooming down and up of the error correction images) are performed. Consequently, a moderate increase in the number of relaxation steps only leads to a slight increase in the computation times.
MGParamName = ’mg_inner_iter’ can be used to specify the number of iterations to solve the linear equation
systems in each fixed-point iteration of the nonlinear basic solver. Usually, one iteration is sufficient to achieve a
sufficient convergence speed of the multigrid algorithm. The increase in computation time is slightly smaller than
for the increase in the relaxation steps. This parameter only influences the FDRIG and DDRAW algorithms since
for the CLG algorithm no nonlinear equation system needs to be solved.
As described above, usually it is sufficient to use one of the default parameter sets for the parameters described
above by using MGParamName = ’default_parameters’ and MGParamValue = ’very_accurate’, ’accurate’,
’fast_accurate’, or ’fast’. If necessary, individual parameters can be modified after the default parameter set has
been chosen by specifying a subset of the above parameters and corresponding values after ’default_parameters’ in
MGParamName and MGParamValue (e.g., MGParamName = [’default_parameters’,’warp_zoom_factor’] and
MGParamValue = [’accurate’,0.6]).
The default parameter sets use the following values for the above parameters:
’default_parameters’ = ’very_accurate’: ’warp_zoom_factor’ = 0.5, ’warp_levels’ = 0, ’warp_last_level’ = 1,
’mg_solver’ = ’full_multigrid’, ’mg_cycle_type’ = ’w’, ’mg_levels’ = 0, ’mg_cycles’ = 1, ’mg_pre_relax’ = 2,
’mg_post_relax’ = 2, ’mg_inner_iter’ = 1.
’default_parameters’ = ’accurate’: ’warp_zoom_factor’ = 0.5, ’warp_levels’ = 0, ’warp_last_level’ = 1,
’mg_solver’ = ’multigrid’, ’mg_cycle_type’ = ’v’, ’mg_levels’ = 0, ’mg_cycles’ = 1, ’mg_pre_relax’ = 1,
’mg_post_relax’ = 1, ’mg_inner_iter’ = 1.
’default_parameters’ = ’fast_accurate’: ’warp_zoom_factor’ = 0.5, ’warp_levels’ = 0, ’warp_last_level’ = 2,
’mg_solver’ = ’full_multigrid’, ’mg_cycle_type’ = ’w’, ’mg_levels’ = 0, ’mg_cycles’ = 1, ’mg_pre_relax’ = 2,
’mg_post_relax’ = 2, ’mg_inner_iter’ = 1.
’default_parameters’ = ’fast’: ’warp_zoom_factor’ = 0.5, ’warp_levels’ = 0, ’warp_last_level’ = 2, ’mg_solver’
= ’multigrid’, ’mg_cycle_type’ = ’v’, ’mg_levels’ = 0, ’mg_cycles’ = 1, ’mg_pre_relax’ = 1, ’mg_post_relax’ =
1, ’mg_inner_iter’ = 1.
It should be noted that for the CLG algorithm the two modes ’fast_accurate’ and ’fast’ are identical to the modes
’very_accurate’ and ’accurate’ since the CLG algorithm does not use a coarse-to-fine warping strategy.
Parameter
Result
If the parameter values are correct, the operator OpticalFlowMg returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available), the behavior can be set via SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
OpticalFlowMg is reentrant and automatically parallelized (on tuple level).
Possible Successors
Threshold, VectorFieldLength
See also
UnwarpImageVectorField
References
T. Brox, A. Bruhn, N. Papenberg, and J. Weickert: High accuracy optic flow estimation based on a theory for
warping. In T. Pajdla and J. Matas, editors, Computer Vision - ECCV 2004, volume 3024 of Lecture Notes in
Computer Science, pages 25–36. Springer, Berlin, 2004.
A. Bruhn, J. Weickert, C. Feddern, T. Kohlberger, and C. Schnörr: Variational optical flow computation in real-
time. IEEE Transactions on Image Processing, 14(5):608-615, May 2005.
H.-H. Nagel and W. Enkelmann: An investigation of smoothness constraints for the estimation of displacement
vector fields from image sequences. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(5):565-
593, September 1986.
Ulrich Trottenberg, Cornelis Oosterlee, Anton Schüller: Multigrid. Academic Press, Inc., San Diego, 2000.
Module
Foundation
Result
If the parameter values are correct, the operator UnwarpImageVectorField returns the value 2
(H_MSG_TRUE). If the input is empty (no input images are available), the behavior can be set via SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
UnwarpImageVectorField is reentrant and automatically parallelized (on domain level, tuple level).
Possible Predecessors
OpticalFlowMg
Module
Foundation
VectorFieldLength computes the lengths of the vectors of the vector field vectorField and returns them
in length. The parameter mode can be used to specify how the lengths are computed. For mode = ’length’,
the Euclidean length of the vectors is computed. For mode = ’squared_length’, the square of the length of the
vectors is computed. This avoids having to compute a square root internally, which is a costly operation on many
processors, and hence saves runtime on these processors.
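The two modes can be sketched as follows (a NumPy sketch for flow components u and v; not the HALCON implementation):

```python
import numpy as np

def vector_field_length(u, v, mode='length'):
    """Sketch of the two modes of VectorFieldLength for flow components
    u, v (row and column components of the vector field)."""
    sq = u * u + v * v
    if mode == 'length':
        return np.sqrt(sq)                 # Euclidean length
    if mode == 'squared_length':
        return sq                          # no square root -> faster
    raise ValueError("mode must be 'length' or 'squared_length'")
```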
Parameter
. vectorField (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; HImage
Input vector field.
. length (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; HImage
Length of the vectors of the vector field.
. mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Mode for computing the length of the vectors.
Default Value : "length"
List of values : Mode ∈ {"length", "squared_length"}
Result
If the parameter values are correct, the operator VectorFieldLength returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available), the behavior can be set via SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
VectorFieldLength is reentrant and automatically parallelized (on domain level, tuple level).
Possible Predecessors
OpticalFlowMg
Possible Successors
Threshold
Module
Foundation
3.14 Points
static void HOperatorSet.CornerResponse ( HObject image,
out HObject imageCorner, HTuple size, HTuple weight )
R(x, y) = A(x, y) · B(x, y) − C²(x, y) − Weight · (A(x, y) + B(x, y))²
A(x, y) = W(u, v) ∗ (∇x I(x, y))²
B(x, y) = W(u, v) ∗ (∇y I(x, y))²
C(x, y) = W(u, v) ∗ (∇x I(x, y) ∇y I(x, y))
where I is the input image and R the output image of the filter. The operator GaussImage is used for smoothing
(W ), the operator SobelAmp is used for calculating the derivative (∇).
The corner response function is invariant with regard to rotation. In order to achieve a suitable dependency of the
function R(x, y) on the local gradient, the parameter weight must be set to 0.04. With this, only gray value
corners will return positive values for R(x, y), while straight edges will receive negative values. The output image
type is identical to the input image type. Therefore, the negative output values are set to 0 if byte images are
used as input images. If this is not desired, the input image should be converted into a real or int2 image with
ConvertImageType.
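The formulas above can be traced with a small pure-Python sketch. It is only an approximation of the operator: simple central differences stand in for SobelAmp, and a 3 × 3 box mean stands in for the Gaussian smoothing W; the nested-list image is illustrative.

```python
def corner_response(img, weight=0.04):
    """Corner response R = A*B - C^2 - weight*(A + B)^2 on a nested list."""
    h, w = len(img), len(img[0])
    gx = [[0.0] * w for _ in range(h)]
    gy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2.0  # stand-in for SobelAmp
            gy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2.0
    gx2 = [[v * v for v in row] for row in gx]
    gy2 = [[v * v for v in row] for row in gy]
    gxy = [[a * b for a, b in zip(ra, rb)] for ra, rb in zip(gx, gy)]

    def box(src, y, x):  # 3x3 box mean standing in for the Gaussian W
        return sum(src[y + i][x + j] for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0

    R = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            A, B, C = box(gx2, y, x), box(gy2, y, x), box(gxy, y, x)
            R[y][x] = A * B - C * C - weight * (A + B) ** 2
    return R

# bright square whose gray value corner sits at (3, 3)
img = [[100 if (y >= 3 and x >= 3) else 0 for x in range(7)] for y in range(7)]
R = corner_response(img)
print(R[3][3] > 0, R[5][3] < 0)  # True True
```

On this test image the corner of the bright square yields a positive response, while a point on the straight edge yields a negative one, matching the behavior described above.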
Parameter
Example (Syntax: C)
read_image(&Fabrik,"fabrik");
corner_response(Fabrik,&CornerResponse,3,0.04);
local_max(CornerResponse,&LocalMax);
disp_image(Fabrik,WindowHandle);
set_color(WindowHandle,"red");
disp_region(LocalMax,WindowHandle);
Parallelization Information
CornerResponse is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
LocalMax, Threshold
See also
GaussImage, SobelAmp, ConvertImageType
References
C.G. Harris, M.J. Stephens, “A combined corner and edge detector”; Proc. of the 4th Alvey Vision Conference;
August 1988; pp. 147-152.
H. Breit, “Bestimmung der Kameraeigenbewegung und Gewinnung von Tiefendaten aus monokularen Bildfolgen”;
Diplomarbeit am Lehrstuhl für Nachrichtentechnik der TU München; 30. September 1990.
Module
Foundation
1/336 ·
            −21 −21 −21
        −21  16  16  16 −21
    −21  16  16  16  16  16 −21
    −21  16  16  16  16  16 −21
    −21  16  16  16  16  16 −21
        −21  16  16  16 −21
            −21 −21 −21
The parameter filterType selects whether dark, light, or all dots in the image should be enhanced. The
pixelShift can be used either to increase the contrast of the output image (pixelShift > 0) or to dampen
the values in extremely bright areas that would be cut off otherwise (pixelShift = −1).
Parameter
is calculated, where Ix,c and Iy,c are the first derivatives of each image channel and S stands for a smoothing.
If smoothing is ’gauss’, the derivatives are computed with Gaussian derivatives of size sigmaGrad and the
smoothing is performed by a Gaussian of size sigmaInt. If smoothing is ’mean’, the derivatives are computed
with a 3 × 3 Sobel filter (and hence sigmaGrad is ignored) and the smoothing is performed by a sigmaInt ×
sigmaInt mean filter. Then
inhomogeneity = Trace M
isotropy = 4 · Det M / (Trace M)²
are calculated, where isotropy describes the degree of the isotropy of the texture in the image. Image points
that have an inhomogeneity greater than or equal to threshInhom and at the same time an isotropy greater
than or equal to threshShape are subsequently examined further.
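The two selection measures can be checked directly on a 2 × 2 matrix M (here M stands for the smoothed gradient matrix referred to above; the helper name is illustrative):

```python
def foerstner_measures(M):
    """inhomogeneity = Trace M and isotropy = 4 * Det M / (Trace M)^2 for a 2x2 M."""
    (a, b), (c, d) = M
    trace = a + d
    det = a * d - b * c
    return trace, 4.0 * det / (trace * trace)

# equal gradient spread in both directions -> perfectly isotropic (isotropy = 1)
print(foerstner_measures([[2.0, 0.0], [0.0, 2.0]]))  # (4.0, 1.0)
# a single dominant gradient direction -> isotropy = 0
print(foerstner_measures([[4.0, 0.0], [0.0, 0.0]]))  # (4.0, 0.0)
```

Points pass the first step only if both values reach their thresholds (threshInhom and threshShape).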
In the second step, two optimization functions are calculated for the resulting points. Essentially, these
optimization functions average for each point the distances to the edge directions (for junction points) and the
gradient directions (for area points) within an observation window around the point. If smoothing is ’gauss’,
the averaging is performed by a Gaussian of size sigmaPoints, if smoothing is ’mean’, the averaging is performed
by a sigmaPoints × sigmaPoints mean filter. The local minima of the optimization functions determine
the extracted points. Their subpixel precise position is returned in (rowJunctions, colJunctions) and
(rowArea, colArea).
In addition to their position, for each extracted point the elements coRRJunctions, coRCJunctions, and
coCCJunctions (and coRRArea, coRCArea, and coCCArea, respectively) of the corresponding covariance
matrix are returned. This matrix facilitates conclusions about the precision of the calculated point position. To
obtain the actual values, it is necessary to estimate the amount of noise in the input image and to multiply all
components of the covariance matrix with the variance of the noise. (To estimate the amount of noise, apply
Intensity to homogeneous image regions or PlaneDeviation to image regions where the gray values form
a plane. In both cases the amount of noise is returned in the parameter Deviation.) This is illustrated by the
example program %HALCONROOT%\examples\hdevelop\Filter\Points\points_foerstner_ellipses.dev.
It lies in the nature of this operator that corners often result in two distinct points: One junction point, where the
edges of the corner actually meet, and one area point inside the corner. Such doublets will be eliminated
automatically if eliminateDoublets is ’true’. To do so, each pair of one junction point and one area point is examined.
If the points lie within each others’ observation window of the optimization function, for both points the precision
of the point position is calculated and the point with the lower precision is rejected. If eliminateDoublets is
’false’, every detected point is returned.
Attention
Note that only odd values for sigmaInt and sigmaPoints are allowed if smoothing is ’mean’. Even
values will automatically be replaced by the next larger odd value.
Parameter
C. Fuchs: “Extraktion polymorpher Bildstrukturen und ihre topologische und geometrische Gruppierung”. Volume
502, Series C, Deutsche Geodätische Kommission, München, 1998.
Module
Foundation
where Gσ stands for a Gaussian smoothing of size sigmaSmooth and Ix,c and Iy,c are the first derivatives of
each image channel, computed with Gaussian derivatives of size sigmaGrad. The resulting points are the positive
local extrema of
If necessary, they can be restricted to points with a minimum filter response of threshold. The coordinates of
the points are calculated with subpixel accuracy.
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; HImage
Input image.
. sigmaGrad (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Amount of smoothing used for the calculation of the gradient.
Default Value : 0.7
Suggested values : SigmaGrad ∈ {0.7, 0.8, 0.9, 1.0, 1.2, 1.5, 2.0, 3.0}
Typical range of values : 0.7 ≤ SigmaGrad ≤ 50.0
Recommended Increment : 0.1
Restriction : SigmaGrad > 0.0
. sigmaSmooth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Amount of smoothing used for the integration of the gradients.
Default Value : 2.0
Suggested values : SigmaSmooth ∈ {0.7, 0.8, 0.9, 1.0, 1.2, 1.5, 2.0, 3.0}
Typical range of values : 0.7 ≤ SigmaSmooth ≤ 50.0
Recommended Increment : 0.1
Restriction : SigmaSmooth > 0.0
. alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Weight of the squared trace of the squared gradient matrix.
Default Value : 0.04
Suggested values : Alpha ∈ {0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08}
Typical range of values : 0.001 ≤ Alpha ≤ 0.1
Minimum Increment : 0.001
Recommended Increment : 0.01
Restriction : Alpha > 0.0
result than pixels with a smaller distance. Typically, it is not necessary to modify the default value 0.75 of sigmaD.
As a further criterion, the angle is calculated, by which the gray value edges change their direction in the corner
point. A point can only be accepted as a corner when this angle is greater than minAngle.
The position of the detected corner points is returned in (row, col). row and col are calculated with subpixel
accuracy if subpix is ’true’, and only with pixel accuracy if subpix is ’false’.
Parameter
3.15 Smoothing
Example (Syntax: HDevelop)
read_image(Image,’fabrik’)
anisotrope_diff(Image,Aniso,80,1,5,8)
sub_image(Image,Aniso,Sub,2.0,127)
disp_image(Sub,WindowHandle).
Complexity
For each pixel: O(Iterations ∗ 18).
Result
If the parameter values are correct the operator AnisotropeDiff returns the value 2 (H_MSG_TRUE).
The behavior in case of empty input (no input images available) is set via the operator SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
AnisotropeDiff is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
ReadImage, GrabImage
Possible Successors
Regiongrowing, Threshold, SubImage, DynThreshold, AutoThreshold
Alternatives
SigmaImage, RankImage
See also
SmoothImage, BinomialFilter, GaussImage, SigmaImage, RankImage, EliminateMinMax
References
P. Perona, J. Malik: “Scale-space and edge detection using anisotropic diffusion”, IEEE transaction on pattern
analysis and machine intelligence, Vol. 12, No. 7, July 1990.
Module
Foundation
u_t = div( g(|∇u|², c) ∇u )
with the initial value u = u0 defined by image at a time t0 . The equation is iterated iterations times in
time steps of length theta, so that the output image imageAniso contains the gray value function at the time
t0 + iterations · theta.
The goal of the anisotropic diffusion is the elimination of image noise in constant image patches while preserving
the edges in the image. The distinction between edges and constant patches is achieved using the threshold
contrast on the size of the gray value differences between adjacent pixels. contrast is referred to as the
contrast parameter and abbreviated with the letter c.
The variable diffusion coefficient g can be chosen to follow different monotonically decreasing functions with
values between 0 and 1 and determines the response of the diffusion process to an edge. With the parameter mode,
the following functions can be selected:
g1(x, c) = 1 / √(1 + 2x/c²)
Choosing the function g1 by setting mode to ’parabolic’ guarantees that the associated differential equation is
parabolic, so that a well-posedness theory exists for the problem and the procedure is stable for an arbitrary step
size theta. In this case however, there remains a slight diffusion even across edges of a height larger than c.
g2(x, c) = 1 / (1 + x/c²)
The choice of ’perona-malik’ for mode, as used in the publication of Perona and Malik, does not possess the
theoretical properties of g1 , but in practice it has proved to be sufficiently stable and is thus widely used. The
theoretical instability results in a slight sharpening of strong edges.
g3(x, c) = 1 − exp(−C · c⁸/x⁴)
The function g3 with the constant C = 3.31488, proposed by Weickert and selectable by setting mode to
’weickert’, is an improvement of g2 with respect to edge sharpening. The transition between smoothing and
sharpening happens very abruptly at x = c².
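The three diffusion coefficients can be sketched directly from the formulas above (the argument x is the squared gradient magnitude |∇u|²; the helper names are illustrative):

```python
import math

def g1(x, c):  # 'parabolic'
    return 1.0 / math.sqrt(1.0 + 2.0 * x / c**2)

def g2(x, c):  # 'perona-malik'
    return 1.0 / (1.0 + x / c**2)

def g3(x, c, C=3.31488):  # 'weickert'
    return 1.0 - math.exp(-C * c**8 / x**4) if x > 0 else 1.0

c = 10.0
# g2 drops to exactly 1/2 at x = c^2; g3 switches much more abruptly around x = c^2
print(g2(c * c, c), round(g3(0.9 * c * c, c), 3), round(g3(3.0 * c * c, c), 3))
```

All three functions stay near 1 for small squared gradients (constant patches, strong diffusion) and fall towards 0 across strong edges, with g3 showing the sharpest cutoff.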
Parameter
as follows:
b_ij = 1/2^(m+n−2) · (m−1 choose i) · (n−1 choose j)
Here, i = 0, . . . , m − 1 and j = 0, . . . , n − 1. The binomial filter performs approximately the same smoothing as
a Gaussian filter with σ = √(n − 1)/2, where for simplicity it is assumed that m = n. In detail, the relationship
between n and σ is:
n σ
3 0.7523
5 1.0317
7 1.2505
9 1.4365
11 1.6010
13 1.7502
15 1.8876
17 2.0157
19 2.1361
21 2.2501
23 2.3586
25 2.4623
27 2.5618
29 2.6576
31 2.7500
33 2.8395
35 2.9262
37 3.0104
If different values are chosen for maskHeight and maskWidth, the above relation between n and σ still holds
and refers to the amount of smoothing in the row and column directions.
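A sketch of the coefficient formula and the σ approximation (math.comb supplies the binomial coefficients; note the table above lists the exact values, while √(n−1)/2 is only the approximation):

```python
import math

def binomial_mask(m, n):
    """Separable m x n binomial filter coefficients b_ij, normalized to sum 1."""
    norm = 2.0 ** (m + n - 2)
    return [[math.comb(m - 1, i) * math.comb(n - 1, j) / norm
             for j in range(n)] for i in range(m)]

def binomial_sigma(n):
    """Approximate Gaussian sigma matched by an n-tap binomial filter."""
    return math.sqrt(n - 1) / 2.0

mask = binomial_mask(3, 3)
print(mask[1][1])         # 0.25 (center weight of the 3x3 mask)
print(binomial_sigma(9))  # sqrt(8)/2 ~ 1.414; the table lists the exact 1.4365
```

The 3 × 3 case reproduces the familiar 1-2-1 weighting in both directions.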
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Input image.
. imageBinomial (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Smoothed image.
. maskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Filter width.
Default Value : 5
List of values : MaskWidth ∈ {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37}
. maskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Filter height.
Default Value : 5
List of values : MaskHeight ∈ {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37}
Result
If the parameter values are correct the operator BinomialFilter returns the value 2 (H_MSG_TRUE).
The behavior in case of empty input (no input images available) is set via the operator SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
BinomialFilter is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
ReadImage, GrabImage
Possible Successors
Regiongrowing, Threshold, SubImage, DynThreshold, AutoThreshold
Alternatives
GaussImage, SmoothImage, DerivateGauss, IsotropicDiffusion
See also
MeanImage, AnisotropicDiffusion, SigmaImage, GenLowpass
Module
Foundation
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; HImage
Image to smooth.
. filteredImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; HImage
Smoothed image.
. maskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; HTuple (int / long)
Width of filter mask.
Default Value : 3
Suggested values : MaskWidth ∈ {3, 5, 7, 9}
Typical range of values : 3 ≤ MaskWidth ≤ width(Image)
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskWidth)
. maskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; HTuple (int / long)
Height of filter mask.
Default Value : 3
Suggested values : MaskHeight ∈ {3, 5, 7, 9}
Typical range of values : 3 ≤ MaskHeight ≤ height(Image)
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskHeight)
. gap (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double)
Gap between local maximum/minimum and all other gray values of the neighborhood.
Default Value : 1.0
Suggested values : Gap ∈ {1.0, 2.0, 5.0, 10.0}
. mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Replacement rule (1 = next minimum/maximum, 2 = average, 3 = median).
Default Value : 3
List of values : Mode ∈ {1, 2, 3}
Result
EliminateMinMax returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty
EliminateMinMax returns with an error message.
Parallelization Information
EliminateMinMax is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
WienerFilter, WienerFilterNi
See also
MeanSp, MeanImage, MedianImage, MedianWeighted, BinomialFilter, GaussImage,
SmoothImage
References
M. Imme: “A Noise Peak Elimination Filter”; pp. 204-211 in CVGIP Graphical Models and Image Processing, Vol.
53, No. 2, March 1991.
M. Lückenhaus: “Grundlagen des Wiener-Filters und seine Anwendung in der Bildanalyse”; Diplomarbeit;
Technische Universität München, Institut für Informatik; Lehrstuhl Prof. Radig; 1995.
Module
Foundation
The operator EliminateSp replaces all gray values outside the indicated gray value interval (minThresh to
maxThresh) with the neighboring mean values. Only those neighboring pixels which also fall within the gray
value interval are used for averaging. If no such pixel is present in the vicinity the original gray value is used. The
gray values in the input image falling within the gray value interval are also adopted without change.
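This replacement rule can be sketched in pure Python (nested lists standing in for HImage; the clipped neighborhood at the border is a simplification, and the helper name is illustrative):

```python
def eliminate_sp(img, mask_w, mask_h, min_t, max_t):
    """Replace gray values outside [min_t, max_t] by the mean of those
    neighbors inside the interval; keep the original value if none exist."""
    h, w = len(img), len(img[0])
    rw, rh = mask_w // 2, mask_h // 2
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if min_t <= img[y][x] <= max_t:
                continue  # in-interval pixels are adopted without change
            vals = [img[j][i]
                    for j in range(max(0, y - rh), min(h, y + rh + 1))
                    for i in range(max(0, x - rw), min(w, x + rw + 1))
                    if min_t <= img[j][i] <= max_t]
            if vals:
                out[y][x] = sum(vals) / len(vals)
    return out

img = [[100, 100, 100],
       [100, 255, 100],
       [100,   0, 100]]
res = eliminate_sp(img, 3, 3, 1, 254)
print(res[1][1], res[2][1])  # 100.0 100.0 -- salt and pepper pixels removed
```

Only neighbors that themselves lie inside the interval contribute to the mean, exactly as described above.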
Attention
If even values instead of odd values are given for maskHeight or maskWidth, the routine uses the next larger
odd values instead (this way the center of the filter mask is always explicitly determined).
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Input image.
. imageFillSP (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Smoothed image.
. maskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; HTuple (int / long)
Width of filter mask.
Default Value : 3
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11}
Typical range of values : 3 ≤ MaskWidth ≤ 512 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskWidth)
. maskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; HTuple (int / long)
Height of filter mask.
Default Value : 3
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11}
Typical range of values : 3 ≤ MaskHeight ≤ 512 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskHeight)
. minThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Minimum gray value.
Default Value : 1
Suggested values : MinThresh ∈ {1, 5, 7, 9, 11, 15, 23, 31, 43, 61, 101}
. maxThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Maximum gray value.
Default Value : 254
Suggested values : MaxThresh ∈ {5, 7, 9, 11, 15, 23, 31, 43, 61, 101, 200, 230, 250, 254}
Restriction : MinThresh ≤ MaxThresh
Example (Syntax: HDevelop)
read_image(Image,’mreut’)
disp_image(Image,WindowHandle)
eliminate_sp(Image,ImageMeansp,3,3,101,201)
disp_image(ImageMeansp,WindowHandle).
Parallelization Information
EliminateSp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
DispImage
Alternatives
MeanSp, MeanImage, MedianImage, EliminateMinMax
See also
BinomialFilter, GaussImage, SmoothImage, AnisotropicDiffusion, SigmaImage,
EliminateMinMax
Module
Foundation
read_image(Image,’video_bild’)
fill_interlace(Image,New,’odd’)
sobel_amp(New,Sobel,’sum_abs’,3).
Complexity
For each pixel: O(2).
Result
If the parameter values are correct the operator FillInterlace returns the value 2 (H_MSG_TRUE).
The behavior in case of empty input (no input images available) is set via the operator SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
FillInterlace is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
ReadImage, GrabImage
Possible Successors
SobelAmp, EdgesImage, Regiongrowing, DiffOfGauss, Threshold, DynThreshold,
AutoThreshold, MeanImage, BinomialFilter, GaussImage, AnisotropicDiffusion,
SigmaImage, MedianImage
See also
MedianImage, BinomialFilter, GaussImage, CropPart
Module
Foundation
The operator GaussImage smooths images using the discrete Gaussian. The smoothing effect increases with
increasing filter size. The following filter sizes (size) are supported (the sigma value of the Gaussian function is
indicated in brackets):
3 (0.65)
5 (0.87)
7 (1.43)
9 (1.88)
11 (2.31)
For border treatment the gray values of the images are reflected at the image borders.
The operator BinomialFilter can be used as an alternative to GaussImage. BinomialFilter is
significantly faster than GaussImage. It should be noted that the mask size in BinomialFilter does not
lead to the same amount of smoothing as the mask size in GaussImage. Corresponding mask sizes can be
determined based on the respective values of the Gaussian smoothing parameter sigma.
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Image to be smoothed.
. imageGauss (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Filtered image.
. size (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Required filter size.
Default Value : 5
List of values : Size ∈ {3, 5, 7, 9, 11}
Example (Syntax: HDevelop)
gauss_image(Input,Gauss,7)
regiongrowing(Gauss,Segments,7,7,5,100).
Complexity
For each pixel: O(Size ∗ 2).
Result
If the parameter values are correct the operator GaussImage returns the value 2 (H_MSG_TRUE).
The behavior in case of empty input (no input images available) is set via the operator SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
GaussImage is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
ReadImage, GrabImage
Possible Successors
Regiongrowing, Threshold, SubImage, DynThreshold, AutoThreshold
Alternatives
BinomialFilter, SmoothImage, DerivateGauss, IsotropicDiffusion
See also
MeanImage, AnisotropicDiffusion, SigmaImage, GenLowpass
Module
Foundation
The operator InfoSmooth returns an estimate of the width of the smoothing filters used in the routine
SmoothImage. For this purpose the underlying continuous impulse responses of the filter are sampled until a
filter coefficient is smaller than five percent of the maximum coefficient (at the origin). alpha is the filter parameter
(see SmoothImage). Currently four filters are supported (parameter filter):
The Gaussian filter is conventionally implemented with filter masks (the other three are recursive filters). In the case
of the Gaussian filter, the filter coefficients (of the one-dimensional impulse response f (n) with n ≥ 0) are returned in
coeffs in addition to the filter size.
Parameter
. filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Name of required filter.
Default Value : "deriche2"
List of values : Filter ∈ {"deriche1", "deriche2", "shen", "gauss"}
. alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Filter parameter: small values cause strong smoothing (reversed in case of ’gauss’).
Default Value : 0.5
Suggested values : Alpha ∈ {0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0, 7.0, 10.0}
Typical range of values : 0.01 ≤ Alpha ≤ 50.0
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Alpha > 0.0
. size (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Width of filter is approx. size x size pixels.
. coeffs (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; HTuple (int / long)
In case of the Gaussian filter: coefficients of the “positive” half of the 1D impulse response.
Example (Syntax: HDevelop)
info_smooth(’deriche2’,0.5,Size,Coeffs)
smooth_image(Input,Smooth,’deriche2’,7).
Result
If the parameter values are correct the operator InfoSmooth returns the value 2 (H_MSG_TRUE). Otherwise
an exception is raised.
Parallelization Information
InfoSmooth is reentrant and processed without parallelization.
Possible Predecessors
ReadImage
Possible Successors
SmoothImage
See also
SmoothImage
Module
Foundation
IsotropicDiffusion returns the same results as the operator DerivateGauss when choosing ’none’ for
its parameter Component. If the gray value matrix is larger than the ROI of image the two operators differ
since DerivateGauss takes the gray values outside of the ROI into account, while IsotropicDiffusion
mirrors the values at the boundary of the ROI in any case. The computational complexity increases linearly with
the value of sigma.
If iterations has a positive value the smoothing process is considered as an application of the heat equation
u_t = ∆u
on the gray value function u with the initial value u = u0 defined by the gray values of image at a time t0 . This
equation is then solved up to a time t0 + sigma²/2, which is equivalent to the above convolution, using an iterative
procedure for parabolic partial differential equations. The computational complexity is proportional to the value
of iterations and independent of sigma in this case. For small values of iterations, the computational
accuracy is very low, however. For this reason, choosing iterations < 3 is not recommended.
For smaller values of sigma, the convolution implementation is typically the faster method. Since the runtime of
the partial differential equation solver only depends on the number of iterations and not on the value of sigma, it
is typically faster for large values of sigma if few iterations are chosen (e.g., iterations = 3 ).
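The iterative branch can be sketched in one dimension: explicit heat-equation steps up to t0 + sigma²/2 with mirrored boundary values. This is illustrative only; the operator works on 2D images and uses a proper solver for parabolic partial differential equations.

```python
def isotropic_diffusion_1d(u, sigma, iterations):
    """Explicit Euler steps for u_t = u_xx up to t = sigma^2 / 2,
    with values mirrored at the boundary (sketch of the iterative branch)."""
    t_end = 0.5 * sigma * sigma
    dt = t_end / iterations
    u = list(u)
    for _ in range(iterations):
        padded = [u[0]] + u + [u[-1]]  # mirror at the boundary
        u = [ui + dt * (padded[i] - 2.0 * ui + padded[i + 2])
             for i, ui in enumerate(u)]
    return u

u0 = [0.0, 0.0, 8.0, 0.0, 0.0]
u = isotropic_diffusion_1d(u0, sigma=1.0, iterations=10)
# smoothing lowers the peak while the mirrored boundary preserves the total mass
print(round(sum(u), 6), u[2] < 8.0)  # 8.0 True
```

Note that too few iterations make the explicit scheme inaccurate, in line with the recommendation above to choose iterations ≥ 3.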
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Input image.
. smoothedImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Output image.
. sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Standard deviation of the Gauss distribution.
Default Value : 1.0
Suggested values : Sigma ∈ {0.1, 0.5, 1.0, 3.0, 10.0, 20.0, 50.0}
Restriction : Sigma > 0
. iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Number of iterations.
Default Value : 10
Suggested values : Iterations ∈ {0, 3, 10, 100, 500}
Restriction : Iterations ≥ 0
Parallelization Information
IsotropicDiffusion is reentrant and automatically parallelized (on tuple level).
Module
Foundation
Attention
If even values instead of odd values are given for maskHeight or maskWidth, the routine uses the next larger
odd values instead (this way the center of the filter mask is always explicitly determined).
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Image to be smoothed.
. imageMean (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Smoothed image.
. maskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; HTuple (int / long)
Width of filter mask.
Default Value : 9
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11, 15, 23, 31, 43, 61, 101}
Typical range of values : 1 ≤ MaskWidth ≤ 501
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskWidth)
. maskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; HTuple (int / long)
Height of filter mask.
Default Value : 9
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11, 15, 23, 31, 43, 61, 101}
Typical range of values : 1 ≤ MaskHeight ≤ 501
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskHeight)
Example (Syntax: HDevelop)
read_image(Image,’fabrik’)
mean_image(Image,Mean,3,3)
disp_image(Mean,WindowHandle).
Complexity
For each pixel: O(15).
Result
If the parameter values are correct the operator MeanImage returns the value 2 (H_MSG_TRUE).
The behavior in case of empty input (no input images available) is set via the operator SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
MeanImage is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
ReduceDomain, Rectangle1Domain
Possible Successors
DynThreshold, Regiongrowing
Alternatives
BinomialFilter, GaussImage, SmoothImage
See also
AnisotropicDiffusion, SigmaImage, ConvolImage, GenLowpass
Module
Foundation
HImage HImage.MeanN ( )
Average gray values over several channels.
The operator MeanN generates the pixel-by-pixel mean value of all channels. For each coordinate point the sum
of all gray values at this coordinate is calculated. The result is the mean of the gray values (sum divided by the
number of channels). The output image has one channel.
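A minimal sketch of the channel averaging (nested lists standing in for a multichannel HImage; the helper name is illustrative):

```python
def mean_n(channels):
    """Pixel-wise mean over a list of equally sized single-channel images."""
    n = len(channels)
    h, w = len(channels[0]), len(channels[0][0])
    return [[sum(ch[y][x] for ch in channels) / n for x in range(w)]
            for y in range(h)]

r = [[10, 20]]
g = [[30, 40]]
b = [[50, 60]]
print(mean_n([r, g, b]))  # [[30.0, 40.0]]
```

The result has a single channel, as stated above.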
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . multichannel-image(-array) ; HImage
Multichannel gray image.
. imageMean (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; HImage
Result of averaging.
Example (Syntax: C)
compose3(Channel1,Channel2,Channel3,&MultiChannel);
mean_n(MultiChannel,&Mean);
Parallelization Information
MeanN is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
Compose2, Compose3, Compose4, AddChannels
Possible Successors
DispImage
See also
CountChannels
Module
Foundation
read_image(Image,’mreut’)
disp_image(Image,WindowHandle)
mean_sp(Image,ImageMeansp,3,3,101,201)
disp_image(ImageMeansp,WindowHandle).
Parallelization Information
MeanSp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
DispImage
Alternatives
MeanImage, MedianImage, MedianSeparate, EliminateMinMax
See also
AnisotropicDiffusion, SigmaImage, BinomialFilter, GaussImage, SmoothImage,
EliminateMinMax
Module
Foundation
The indicated mask (= region of the mask image) is put over the image to be filtered in such a way that the center
of the mask touches all pixels of the objects once. For each of these pixels all neighboring pixels covered by the
mask are sorted in an ascending sequence according to their gray values. Thus, each of these sorted gray value
sequences contains exactly as many gray values as the mask has pixels. From these sequences the median is
selected and entered as resulting gray value at the corresponding output image.
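The sorting-based selection can be sketched as follows (the mask is given as a list of offsets; clipping at the image border is used here for simplicity, standing in for the operator's border treatment):

```python
def median_image(img, offsets):
    """Median over the mask offsets for every pixel of a nested-list image."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            vals = sorted(img[y + dy][x + dx] for dy, dx in offsets
                          if 0 <= y + dy < h and 0 <= x + dx < w)
            out[y][x] = vals[len(vals) // 2]
    return out

# 3x3 square mask as an offset list
mask = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
img = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
print(median_image(img, mask)[1][1])  # 0 -- the single outlier is removed
```

Each sorted sequence contains exactly as many gray values as mask pixels fall inside the image, and the middle element is written to the output.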
Parameter
read_image(Image,’fabrik’)
median_image(Image,Median,’circle’,3,’continued’)
disp_image(Median,WindowHandle).
Complexity
For each pixel: O(√F ∗ 5) with F = area of maskType.
Result
If the parameter values are correct the operator MedianImage returns the value 2 (H_MSG_TRUE).
The behavior in case of empty input (no input images available) is set via the operator SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
MedianImage is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
ReadImage
Possible Successors
Threshold, DynThreshold, Regiongrowing
Alternatives
RankImage
See also
GenCircle, GenRectangle1, GrayErosionRect, GrayDilationRect
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, pp. 318-319
Module
Foundation
read_image(Image,’fabrik’)
median_separate(Image,MedianSeparate,5,5,3)
disp_image(MedianSeparate,WindowHandle).
Complexity
For each pixel: O(40).
Parallelization Information
MedianSeparate is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
TextureLaws, SobelAmp, DeviationImage
Possible Successors
LearnNdimNorm, LearnNdimBox, MedianSeparate, Regiongrowing, AutoThreshold
Alternatives
MedianImage
See also
RankImage
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, p. 319
Module
Foundation
’gauss’ (maskSize = 3)
1 2 1
2 4 2
1 2 1
’inner’ (maskSize = 3)
1 1 1
1 3 1
1 1 1
In contrast to MedianImage, the operator MedianWeighted preserves gray value corners.
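The weighted-median idea behind the masks above can be sketched as follows (a Python illustration, not the HALCON implementation): each gray value covered by the mask enters the sort sequence as often as its weight indicates, and the median of this expanded sequence is taken.

```python
# Sketch of a weighted median over one filter window.

def weighted_median(values, weights):
    """Median of `values`, where each value is repeated by its weight."""
    expanded = []
    for v, w in zip(values, weights):
        expanded.extend([v] * w)
    expanded.sort()
    return expanded[len(expanded) // 2]

# 3x3 'gauss' weights from above, flattened row by row
gauss = [1, 2, 1, 2, 4, 2, 1, 2, 1]
window = [10, 10, 10, 10, 200, 10, 10, 10, 10]  # center outlier
print(weighted_median(window, gauss))  # -> 10
```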
Parameter
read_image(Image,’fabrik’)
median_weighted(Image,MedianWeighted,’gauss’,3)
disp_image(MedianWeighted,WindowHandle).
Complexity
For each pixel: O(F ∗ log F ) with F = area of maskType.
Parallelization Information
MedianWeighted is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
ReadImage
Possible Successors
Threshold, DynThreshold, Regiongrowing
Alternatives
MedianImage, TrimmedMean, SigmaImage
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, page 319
Module
Foundation
The indicated mask (= region of the mask image) is put over the image to be filtered in such a way that the center
of the mask touches all pixels once.
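The per-pixel computation of MidrangeImage (presumably the mean of the minimum and the maximum gray value under the mask) can be sketched in Python for illustration; this is an assumption based on the operator name, not HALCON code:

```python
# Sketch of a midrange computation over one filter window
# (integer gray values, integer division as a simplification).

def midrange(values):
    return (min(values) + max(values)) // 2

print(midrange([10, 20, 30, 200]))  # -> 105
```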
Parameter
read_image(Image,’fabrik’)
draw_region(Region,WindowHandle)
midrange_image(Image,Region,Midrange,’mirrored’)
disp_image(Midrange,WindowHandle).
Complexity
For each pixel: O(√F ∗ 5) with F = area of mask.
Result
If the parameter values are correct, the operator MidrangeImage returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no input images available) can be set via the operator SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
MidrangeImage is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
ReadImage, DrawRegion, GenCircle, GenRectangle1
Possible Successors
Threshold, DynThreshold, Regiongrowing
Alternatives
SigmaImage
See also
GenCircle, GenRectangle1, GrayErosionRect, GrayDilationRect, GrayRangeRect
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, page 319
Module
Foundation
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . multichannel-image(-array) ; HImage
Image to be filtered.
. mask (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; HRegion
Region serving as filter mask.
. imageRank (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . multichannel-image(-array) ; HImage
Filtered image.
. rank (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Rank of the output gray value in the sorted sequence of input gray values inside the filter mask. Typical value
(median): area(mask) / 2.
Default Value : 5
Suggested values : Rank ∈ {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31}
Typical range of values : 1 ≤ Rank ≤ 512
Minimum Increment : 1
Recommended Increment : 2
. margin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string / int / long / double)
Border treatment.
Default Value : "mirrored"
Suggested values : Margin ∈ {"mirrored", "cyclic", "continued", 0, 30, 60, 90, 120, 150, 180, 210, 240,
255}
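The role of the rank parameter can be illustrated with a small Python sketch (an illustration only, not HALCON code; 1-based rank counting is an assumption made here): rank 1 selects the minimum of the sorted window, the middle rank the median, and the highest rank the maximum.

```python
# Sketch of a rank-order selection over one filter window.

def rank_select(values, rank):
    """Gray value at the given 1-based rank of the sorted window."""
    return sorted(values)[rank - 1]

window = [7, 3, 9, 1, 5]
print(rank_select(window, 1))  # minimum -> 1
print(rank_select(window, 3))  # median  -> 5
print(rank_select(window, 5))  # maximum -> 9
```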
Example (Syntax: HDevelop)
read_image(Image,’fabrik’)
draw_region(Region,WindowHandle)
rank_image(Image,Region,ImageRank,5,’mirrored’)
disp_image(ImageRank,WindowHandle).
Complexity
For each pixel: O(√F ∗ 5) with F = area of mask.
Result
If the parameter values are correct, the operator RankImage returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no input images available) can be set via the operator SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
RankImage is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
ReadImage, DrawRegion, GenCircle, GenRectangle1
Possible Successors
Threshold, DynThreshold, Regiongrowing
Alternatives
SigmaImage
See also
GenCircle, GenRectangle1, GrayErosionRect, GrayDilationRect
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, pages 318-320
Module
Foundation
The operator SigmaImage carries out a non-linear smoothing of the gray values of all input images (image). All pixels are checked in a rectangular window (maskHeight × maskWidth). All pixels of the window that differ from the current pixel by less than sigma are used to calculate the new pixel value, which is the average of the chosen pixels. If all differences are larger than sigma, the gray value remains unchanged.
Attention
If even values instead of odd values are given for maskHeight or maskWidth, the routine uses the next larger
odd values instead (this way the center of the filter mask is always explicitly determined).
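The per-pixel rule described above can be sketched in Python (an illustration only, not the HALCON implementation):

```python
# Sketch of the sigma filter rule: average only those window pixels
# whose gray value differs from the center pixel by less than sigma;
# if no pixel qualifies, keep the center value unchanged.

def sigma_average(center, window, sigma):
    chosen = [v for v in window if abs(v - center) < sigma]
    if not chosen:
        return center
    return sum(chosen) // len(chosen)

window = [100, 102, 98, 250, 101]  # 250 is an edge/outlier pixel
print(sigma_average(100, window, 10))  # -> 100
```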
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Image to be smoothed.
. imageSigma (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Smoothed image.
. maskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; HTuple (int / long)
Height of the mask (number of lines).
Default Value : 5
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskHeight ≤ 101
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskHeight)
. maskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; HTuple (int / long)
Width of the mask (number of columns).
Default Value : 5
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskWidth ≤ 101
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskWidth)
. sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Max. deviation to the average.
Default Value : 3
Suggested values : Sigma ∈ {3, 5, 7, 9, 11, 20, 30, 50}
Typical range of values : 0 ≤ Sigma ≤ 255
Minimum Increment : 1
Recommended Increment : 2
Example (Syntax: HDevelop)
read_image(Image,’fabrik’)
sigma_image(Image,ImageSigma,5,5,3)
disp_image(ImageSigma,WindowHandle).
Complexity
For each pixel: O(maskHeight× maskWidth).
Result
If the parameter values are correct, the operator SigmaImage returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no input images available) can be set via the operator SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
SigmaImage is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
ReadImage
Possible Successors
Threshold, DynThreshold, Regiongrowing
Alternatives
AnisotropicDiffusion, RankImage
See also
SmoothImage, BinomialFilter, GaussImage, MeanImage
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, page 325
Module
Foundation
The “filter width” (i.e., the range of the filter and thereby the result of the filter) can be of any size. For the Deriche and Shen filters it decreases with increasing filter parameter alpha, whereas for the Gaussian filter it increases (here alpha corresponds to the standard deviation of the Gaussian function). An approximation of the appropriate filter width alpha is performed by the operator InfoSmooth.
Non-recursive filters like the Gaussian filter are often implemented using filter masks. In this case the runtime of the operator increases with the size of the filter mask. The runtime of the recursive filters remains constant, except that the border treatment becomes somewhat more time consuming. The Gaussian filter is therefore slow in comparison to the recursive ones, but in contrast to them it is isotropic (the filter ’deriche2’ is only weakly direction sensitive). Comparable smoothing results are achieved by choosing the following parameter values:
alpha(’deriche2’) = alpha(’deriche1’) / 2
alpha(’shen’) = alpha(’deriche1’) / 2
alpha(’gauss’) = 1.77 / alpha(’deriche1’)
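The relations above can be wrapped in a small helper (a Python sketch for illustration; the function name is made up, not a HALCON call):

```python
# Given an alpha for 'deriche1', compute the parameter values for the
# other filters that yield comparable smoothing.

def equivalent_alphas(alpha_deriche1):
    return {
        "deriche1": alpha_deriche1,
        "deriche2": alpha_deriche1 / 2,
        "shen": alpha_deriche1 / 2,
        "gauss": 1.77 / alpha_deriche1,
    }

print(equivalent_alphas(0.5))
```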
Parameter
info_smooth(’deriche2’,0.5,Size,Coeffs)
smooth_image(Input,Smooth,’deriche2’,7)
Result
If the parameter values are correct, the operator SmoothImage returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no input images available) can be set via the operator SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
SmoothImage is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
ReadImage
Possible Successors
Threshold, DynThreshold, Regiongrowing
Alternatives
BinomialFilter, GaussImage, MeanImage, DerivateGauss, IsotropicDiffusion
See also
InfoSmooth, MedianImage, SigmaImage, AnisotropicDiffusion
References
R. Deriche: “Fast Algorithms for Low-Level Vision”; IEEE Transactions on Pattern Analysis and Machine Intelligence; PAMI-12, no. 1; pp. 78-87; 1990.
Module
Foundation
The indicated mask (= region of the mask image) is put over the image to be filtered in such a way that the center
of the mask touches all pixels once. For each of these pixels all neighboring pixels covered by the mask are sorted
in an ascending sequence according to their gray values. Thus, each of these sorted gray value sequences contains
exactly as many gray values as the mask has pixels. If F is the area of the mask, the average of these sequences is calculated as follows: the first (F − number)/2 gray values are ignored, then the following number gray values are summed up and divided by number, and the remaining (F − number)/2 gray values are again ignored.
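The trimmed-mean rule above can be sketched in Python for one filter window (an illustration only, not HALCON code):

```python
# Sketch of the trimmed mean: sort the F gray values under the mask,
# discard (F - number)/2 values at each end, and average the
# remaining `number` central values.

def trimmed_mean(values, number):
    values = sorted(values)
    skip = (len(values) - number) // 2
    kept = values[skip:skip + number]
    return sum(kept) // number

window = [0, 10, 11, 12, 255]  # two outliers, F = 5
print(trimmed_mean(window, 3))  # averages 10, 11, 12 -> 11
```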
Parameter
read_image(Image,’fabrik’)
draw_region(Region,WindowHandle)
trimmed_mean(Image,Region,TrimmedMean,5,’mirrored’)
disp_image(TrimmedMean,WindowHandle).
Result
If the parameter values are correct, the operator TrimmedMean returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no input images available) can be set via the operator SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
TrimmedMean is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
ReadImage, DrawRegion, GenCircle, GenRectangle1
Possible Successors
Threshold, DynThreshold, Regiongrowing
Alternatives
SigmaImage, MedianWeighted, MedianImage
See also
GenCircle, GenRectangle1, GrayErosionRect, GrayDilationRect
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, page 320
Module
Foundation
3.16 Texture
static void HOperatorSet.DeviationImage ( HObject image,
out HObject imageDeviation, HTuple width, HTuple height )
Parameter
read_image(Image,’fabrik’)
disp_image(Image,WindowHandle)
deviation_image(Image,Deviation,9,9)
disp_image(Deviation,WindowHandle).
Result
DeviationImage returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior can be set via SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
DeviationImage is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
DispImage
Alternatives
EntropyImage, EntropyGray
See also
ConvolImage, TextureLaws, Intensity
Module
Foundation
read_image(Image,’fabrik’)
disp_image(Image,WindowHandle)
entropy_image(Image,Entropy1,9,9)
disp_image(Entropy1,WindowHandle).
Result
EntropyImage returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior can be set via SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
EntropyImage is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
DispImage
Alternatives
EntropyGray
See also
EnergyGabor, EntropyGray
Module
Foundation
l = [ 1 2 1]
e = [−1 0 1]
s = [−1 2 −1]
l = [ 1 4 6 4 1]
e = [−1 −2 0 2 1]
s = [−1 0 2 0 −1]
r = [ 1 −4 6 −4 1]
w = [−1 2 0 −2 1]
l = [ 1 6 15 20 15 6 1]
e = [−1 −4 −5 0 5 4 1]
s = [−1 −2 1 4 1 −2 −1]
r = [−1 −2 −1 4 −1 −2 −1]
w = [−1 0 3 0 −3 0 1]
o = [−1 6 −15 20 −15 6 −1]
For most of the filters the resulting gray values must be modified by a shift. This makes the different textures in
the output image more comparable to each other, provided suitable filters are used.
The name of the filter is composed of the letters of the two vectors used, where the first letter denotes convolution
in the column direction while the second letter denotes convolution in the row direction.
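The construction described above can be sketched in Python: a Laws mask is presumably the outer product of a column vector and a row vector, so e.g. the ’ls’ filter convolves with l in the column direction and s in the row direction (an illustration only, not the HALCON implementation).

```python
# Build a 2D Laws mask as the outer product of two 1D vectors.

def laws_mask(col_vec, row_vec):
    return [[c * r for r in row_vec] for c in col_vec]

l = [1, 2, 1]
s = [-1, 2, -1]
for row in laws_mask(l, s):
    print(row)
# [-1, 2, -1]
# [-2, 4, -2]
# [-1, 2, -1]
```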
Parameter
Result
TextureLaws returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior can be set via SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
TextureLaws is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
MeanImage, BinomialFilter, GaussImage, MedianImage, Histo2dim, LearnNdimNorm,
LearnNdimBox, Threshold
Alternatives
ConvolImage
See also
Class2dimSup, ClassNdimNorm
References
Laws, K.I. “Textured image segmentation”; Ph.D. dissertation, Dept. of Engineering, Univ. Southern California,
1980
Module
Foundation
3.17 Wiener-Filter
static void HOperatorSet.GenPsfDefocus ( out HObject psf,
HTuple PSFwidth, HTuple PSFheight, HTuple blurring )
This representation conforms to that of the impulse response parameter of the HALCON-operator
WienerFilter. So one can use GenPsfDefocus to generate an impulse response for Wiener filtering.
Parameter
The blurring affects all parts of the image uniformly. blurring controls the extent of blurring: it specifies the number of pixels (lying one after another) that are affected by the blurring. This number is determined by the velocity of the motion and the exposure time. If blurring is a negative number, a corresponding blurring in the reverse direction is simulated. If angle is a negative number, it is interpreted clockwise. If angle exceeds 360 or falls below -360, it is reduced modulo 360 to a value in [0..360] or [−360..0], respectively. The result image of GenPsfMotion contains a spatial domain impulse response of the specified blurring. Its representation presumes the origin in the upper left corner. This results in the following disposition of an N × M sized image:
• first rectangle (“upper left”): image coordinates xb = 0..(N/2) − 1, yb = 0..(M/2) − 1
- conforms to the fourth quadrant of the Cartesian coordinate system, encloses values of the impulse response at position x = 0..N/2 and y = 0..−M/2
• second rectangle (“upper right”): image coordinates xb = N/2..N − 1, yb = 0..(M/2) − 1
- conforms to the third quadrant of the Cartesian coordinate system, encloses values of the impulse response at position x = −N/2..−1 and y = −1..−M/2
• third rectangle (“lower left”): image coordinates xb = 0..(N/2) − 1, yb = M/2..M − 1
- conforms to the first quadrant of the Cartesian coordinate system, encloses values of the impulse response at position x = 1..N/2 and y = M/2..0
• fourth rectangle (“lower right”): image coordinates xb = N/2..N − 1, yb = M/2..M − 1
- conforms to the second quadrant of the Cartesian coordinate system, encloses values of the impulse response at position x = −N/2..−1 and y = M/2..1
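This quadrant disposition amounts to a cyclic shift of an impulse response stored with its center in the middle of the image, so that the origin ends up in the upper left corner. A hedged Python sketch (the same rearrangement that numpy.fft.ifftshift performs, shown here in pure Python):

```python
# Cyclically shift rows and columns by half the image size so that a
# center-origin PSF becomes a corner-origin PSF.

def to_corner_origin(image):
    h, w = len(image), len(image[0])
    shifted_rows = image[h // 2:] + image[:h // 2]
    return [row[w // 2:] + row[:w // 2] for row in shifted_rows]

centered = [
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 1, 0],  # PSF peak at the image center
    [0, 0, 0, 0],
]
print(to_corner_origin(centered)[0][0])  # peak moves to the origin -> 1
```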
This representation conforms to that of the impulse response parameter of the HALCON-operator
WienerFilter. So one can use GenPsfMotion to generate an impulse response for Wiener filtering a
motion blurred image.
Parameter
. psf (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Impulse response of motion-blur.
. PSFwidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Width of impulse response image.
Default Value : 256
Suggested values : PSFwidth ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ PSFwidth
. PSFheight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Height of impulse response image.
Default Value : 256
Suggested values : PSFheight ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ PSFheight
. blurring (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Degree of motion-blur.
Default Value : 20.0
Suggested values : Blurring ∈ {5.0, 10.0, 20.0, 30.0, 40.0}
. angle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Angle between direction of motion and x-axis (anticlockwise).
Default Value : 0
Suggested values : Angle ∈ {0, 45, 90, 180, 270}
. type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
PSF prototype, i.e., type of motion.
Default Value : 3
List of values : Type ∈ {1, 2, 3, 4, 5}
Result
GenPsfMotion returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
GenPsfMotion is reentrant and processed without parallelization.
Possible Predecessors
GenPsfMotion, SimulateDefocus, GenPsfDefocus
Possible Successors
SimulateMotion, WienerFilter, WienerFilterNi
See also
SimulateMotion, SimulateDefocus, GenPsfDefocus, WienerFilter, WienerFilterNi
References
Anil K. Jain: Fundamentals of Digital Image Processing; Prentice-Hall International Inc., Englewood Cliffs, New Jersey, 1989
M. Lückenhaus: “Grundlagen des Wiener-Filters und seine Anwendung in der Bildanalyse” (diploma thesis); Technische Universität München, Institut für Informatik, Lehrstuhl Prof. Radig; 1995.
Kha-Chye Tan, Hock Lim, B. T. G. Tan: “Restoration of Real-World Motion-Blurred Images”; pp. 291-299 in: CVGIP Graphical Models and Image Processing, Vol. 53, No. 3, May 1991
Module
Foundation
The simulated blurring affects all parts of the image uniformly. blurring controls the extent of blurring: it specifies the number of pixels (lying one after another) that are affected by the blurring. This number is determined by the velocity of the motion and the exposure time. If blurring is a negative number, a corresponding blurring in the reverse direction is simulated. If angle is a negative number, it is interpreted clockwise. If angle exceeds 360 or falls below -360, it is reduced modulo 360 to a value in [0..360] or [−360..0], respectively.
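The angle normalization described above can be sketched with Python’s sign-preserving fmod (an illustration only, not HALCON code):

```python
# Reduce an angle outside [-360, 360] modulo 360, preserving its sign.

import math

def normalize_angle(angle):
    return math.fmod(angle, 360.0)

print(normalize_angle(450))   # -> 90.0
print(normalize_angle(-450))  # -> -90.0
```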
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Image to be blurred.
. movedImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Motion blurred image.
. blurring (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Extent of blurring.
Default Value : 20.0
Suggested values : Blurring ∈ {5.0, 10.0, 20.0, 30.0, 40.0}
. angle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Angle between direction of motion and x-axis (anticlockwise).
Default Value : 0
Suggested values : Angle ∈ {0, 45, 90, 180, 270}
. type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Impulse response of motion blur.
Default Value : 3
List of values : Type ∈ {1, 2, 3, 4, 5}
Result
SimulateMotion returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty
SimulateMotion returns with an error message.
Parallelization Information
SimulateMotion is reentrant and processed without parallelization.
Possible Predecessors
GenPsfMotion, GenPsfMotion
Possible Successors
SimulateDefocus, WienerFilter, WienerFilterNi
See also
GenPsfMotion, SimulateDefocus, GenPsfDefocus
References
Anil K. Jain: Fundamentals of Digital Image Processing; Prentice-Hall International Inc., Englewood Cliffs, New Jersey, 1989
M. Lückenhaus: “Grundlagen des Wiener-Filters und seine Anwendung in der Bildanalyse” (diploma thesis); Technische Universität München, Institut für Informatik, Lehrstuhl Prof. Radig; 1995.
Kha-Chye Tan, Hock Lim, B. T. G. Tan: “Restoration of Real-World Motion-Blurred Images”; pp. 291-299 in: CVGIP Graphical Models and Image Processing, Vol. 53, No. 3, May 1991
Module
Foundation
So WienerFilter needs a smoothed version of the input image to estimate the power spectral density of the noise and of the original image. One can use one of the smoothing HALCON filters (e.g., EliminateMinMax) to obtain this version. WienerFilter further needs the impulse response that describes the specific degradation. This impulse response (represented in the spatial domain) must fit into an image of HALCON image type ’real’. Two HALCON operators exist for generating an impulse response for motion blur and out-of-focus blur (see GenPsfMotion, GenPsfDefocus). The representation of the impulse response presumes the origin in the upper left corner (see GenPsfMotion for the resulting disposition of an N × M sized image). The Wiener filtering proceeds in the following steps:
• estimation of the power spectrum density of the original image by using the smoothed version of the corrupted
image,
• estimation of the power spectrum density of the noise by subtracting the smoothed version from the unsmoothed version,
• building the Wiener filter kernel with the quotient of power spectrum densities of noise and original image
and with the impulse response,
• processing the convolution of image and Wiener filter frequency response.
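The kernel-building step listed above follows the textbook Wiener filter formula: for each frequency bin, W = conj(H) / (|H|² + Snn/Sff), where H is the degradation frequency response and Snn/Sff the noise-to-signal power ratio. A hedged sketch for a single bin (pure Python, not the HALCON implementation):

```python
# Wiener filter frequency response for one frequency bin.

def wiener_kernel(h, noise_to_signal):
    """W = conj(H) / (|H|^2 + Snn/Sff) for a complex bin value h."""
    return h.conjugate() / (abs(h) ** 2 + noise_to_signal)

# With no noise, the filter reduces to plain inverse filtering 1/H.
print(wiener_kernel(0.5 + 0j, 0.0))
```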
Result
WienerFilter returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty WienerFilter
returns with an error message.
Parallelization Information
WienerFilter is reentrant and processed without parallelization.
Possible Predecessors
GenPsfMotion, SimulateMotion, SimulateDefocus, GenPsfDefocus
Alternatives
WienerFilterNi
See also
SimulateMotion, GenPsfMotion, SimulateDefocus, GenPsfDefocus
References
M. Lückenhaus: “Grundlagen des Wiener-Filters und seine Anwendung in der Bildanalyse” (diploma thesis); Technische Universität München, Institut für Informatik, Lehrstuhl Prof. Radig; 1995
Azriel Rosenfeld, Avinash C. Kak: Digital Picture Processing; Computer Science and Applied Mathematics, Academic Press, New York/San Francisco/London 1982
Module
Foundation
WienerFilterNi estimates the noise term as follows: the user defines a region within the image that is suitable for noise estimation (as homogeneous as possible, since edges or textures impair the noise estimation). After smoothing within this region with an (unweighted) median filter and subtracting the smoothed from the unsmoothed version, the average noise amplitude of the region is computed within WienerFilterNi. Together with the average gray value of the region, this amplitude allows estimating the quotient of the power spectral densities of noise and original image (in contrast to WienerFilter, WienerFilterNi assumes a rather constant quotient within the whole image). The user can set the width and height of the rectangular (median) filter mask to influence the noise estimation (maskWidth, maskHeight). WienerFilterNi further needs the impulse response that describes the specific degradation. This impulse response (represented in the spatial domain) must fit into an image of HALCON image type ’real’. Two HALCON operators exist for generating an impulse response for motion blur and out-of-focus blur (see GenPsfMotion, GenPsfDefocus). The representation of the impulse response presumes the origin in the upper left corner (see GenPsfMotion for the resulting disposition of an N × M sized image). The Wiener filtering proceeds in the following steps:
• estimating the quotient of the power spectrum densities of noise and original image,
• building the Wiener filter kernel with the quotient of power spectrum densities of noise and original image
and with the impulse response,
• processing the convolution of image and Wiener filter frequency response.
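The noise estimation step described above (median-smooth the region, subtract, average the absolute residuals) can be sketched in Python; a 1D illustration only, whereas WienerFilterNi works on a 2D region:

```python
# Estimate a noise amplitude from a homogeneous region: smooth with a
# running median, subtract from the original values, and average the
# absolute residuals.

def estimate_noise_amplitude(values, half_width=1):
    n = len(values)
    residuals = []
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        window = sorted(values[lo:hi])
        smoothed = window[len(window) // 2]
        residuals.append(abs(values[i] - smoothed))
    return sum(residuals) / n

noisy = [100, 104, 100, 96, 100, 104]
print(estimate_noise_amplitude(noisy))  # -> 2.0
```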
Result
WienerFilterNi returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty
WienerFilterNi returns with an error message.
Parallelization Information
WienerFilterNi is reentrant and processed without parallelization.
Possible Predecessors
GenPsfMotion, SimulateMotion, SimulateDefocus, GenPsfDefocus
Alternatives
WienerFilter
See also
SimulateMotion, GenPsfMotion, SimulateDefocus, GenPsfDefocus
References
M. Lückenhaus: “Grundlagen des Wiener-Filters und seine Anwendung in der Bildanalyse” (diploma thesis); Technische Universität München, Institut für Informatik, Lehrstuhl Prof. Radig; 1995
Azriel Rosenfeld, Avinash C. Kak: Digital Picture Processing; Computer Science and Applied Mathematics, Academic Press, New York/San Francisco/London 1982
Module
Foundation
Graphics
4.1 Drawing
draw_region(Obj,WindowHandle)
drag_region1(Obj,New,WindowHandle)
disp_region(New,WindowHandle)
position(Obj,_,Row1,Column1,_,_,_,_)
position(New,_,Row2,Column2,_,_,_,_)
disp_arrow(WindowHandle,Row1,Column1,Row2,Column2,1.0)
fwrite_string([’Transformation: (’,Row2-Row1,’,’,Column2-Column1,’)’])
fnew_line().
Result
DragRegion1 returns 2 (H_MSG_TRUE), if a region is entered, the window is valid and the needed drawing
mode (see SetInsert) is available. If necessary, an exception is raised. You may determine the behavior after an empty input with SetSystem(’no_object_result’,<Result>).
Parallelization Information
DragRegion1 is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow
Possible Successors
ReduceDomain, DispRegion, SetColored, SetLineWidth, SetDraw, SetInsert
Alternatives
GetMposition, MoveRegion
See also
SetInsert, SetDraw, AffineTransImage
Module
Foundation
Parallelization Information
DragRegion2 is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow
Possible Successors
ReduceDomain, DispRegion, SetColored, SetLineWidth, SetDraw, SetInsert,
AffineTransImage
Alternatives
GetMposition, MoveRegion, DragRegion1, DragRegion3
See also
SetInsert, SetDraw, AffineTransImage
Module
Foundation
mode (see SetInsert) is available. If necessary, an exception is raised. You may determine the behavior after an empty input with SetSystem(’no_object_result’,<Result>).
Parallelization Information
DragRegion3 is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, GetMposition
Possible Successors
ReduceDomain, DispRegion, SetColored, SetLineWidth, SetDraw, SetInsert,
AffineTransImage
Alternatives
GetMposition, MoveRegion, DragRegion1, DragRegion2
See also
SetInsert, SetDraw, AffineTransImage
Module
Foundation
read_image(Image,’affe’)
draw_circle(WindowHandle,Row,Column,Radius)
gen_circle(Circle,Row,Column,Radius)
reduce_domain(Image,Circle,GrayCircle)
disp_image(GrayCircle,WindowHandle).
Result
DrawCircle returns 2 (H_MSG_TRUE) if the window is valid and the needed drawing mode (see SetInsert)
is available. If necessary, an exception is raised.
Parallelization Information
DrawCircle is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow
Possible Successors
ReduceDomain, DispRegion, SetColored, SetLineWidth, SetDraw, SetInsert
Alternatives
DrawCircleMod, DrawEllipse, DrawRegion
See also
GenCircle, DrawRectangle1, DrawRectangle2, DrawPolygon, SetInsert
Module
Foundation
read_image(Image,’affe’)
draw_circle_mod(WindowHandle,20,20,15,Row,Column,Radius)
gen_circle(Circle,Row,Column,Radius)
reduce_domain(Image,Circle,GrayCircle)
disp_image(GrayCircle,WindowHandle).
Result
DrawCircleMod returns 2 (H_MSG_TRUE) if the window is valid and the needed drawing mode (see
SetInsert) is available. If necessary, an exception is raised.
Parallelization Information
DrawCircleMod is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow
Possible Successors
ReduceDomain, DispRegion, SetColored, SetLineWidth, SetDraw, SetInsert
Alternatives
DrawCircle, DrawEllipse, DrawRegion
See also
GenCircle, DrawRectangle1, DrawRectangle2, DrawPolygon, SetInsert
Module
Foundation
read_image(Image,’affe’)
draw_ellipse(WindowHandle,Row,Column,Phi,Radius1,Radius2)
gen_ellipse(Ellipse,Row,Column,Phi,Radius1,Radius2)
reduce_domain(Image,Ellipse,GrayEllipse)
sobel_amp(GrayEllipse,Sobel,’sum_abs’,3)
disp_image(Sobel,WindowHandle).
Result
DrawEllipse returns 2 (H_MSG_TRUE), if the window is valid and the needed drawing mode (see
SetInsert) is available. If necessary, an exception is raised.
Parallelization Information
DrawEllipse is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow
Possible Successors
ReduceDomain, DispRegion, SetColored, SetLineWidth, SetDraw, SetInsert
Alternatives
DrawEllipseMod, DrawCircle, DrawRegion
See also
GenEllipse, DrawRectangle1, DrawRectangle2, DrawPolygon, SetInsert
Module
Foundation
read_image(Image,’affe’)
draw_ellipse_mod(WindowHandle,RowIn,ColumnIn,PhiIn,Radius1In,Radius2In,Row,Column,Phi,Radius1,Radius2)
gen_ellipse(Ellipse,Row,Column,Phi,Radius1,Radius2)
reduce_domain(Image,Ellipse,GrayEllipse)
sobel_amp(GrayEllipse,Sobel,’sum_abs’,3)
disp_image(Sobel,WindowHandle).
Result
DrawEllipseMod returns 2 (H_MSG_TRUE), if the window is valid and the needed drawing mode (see
SetInsert) is available. If necessary, an exception is raised.
Parallelization Information
DrawEllipseMod is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow
Possible Successors
ReduceDomain, DispRegion, SetColored, SetLineWidth, SetDraw, SetInsert
Alternatives
DrawEllipse, DrawCircle, DrawRegion
See also
GenEllipse, DrawRectangle1, DrawRectangle2, DrawPolygon, SetInsert
Module
Foundation
Draw a line.
DrawLine returns the parameters for a line that has been created interactively by the user in the window.
To create a line, press the left mouse button to determine the start point of the line. While keeping the button pressed you may “drag” the line in any direction. After another mouse click in the middle of the created line you can move it. If you click on one end point of the created line, you may move this point. Pressing the right mouse button terminates the procedure.
After the procedure has been terminated, the line is no longer visible in the window.
Parameter
get_system(’width’,Width)
get_system(’height’,Height)
set_part(WindowHandle,0,0,Width-1,Height-1)
read_image(Image,’affe’)
disp_image(Image,WindowHandle)
draw_line(WindowHandle,Row1,Column1,Row2,Column2)
set_part(WindowHandle,Row1,Column1,Row2,Column2)
disp_image(Image,WindowHandle)
fwrite_string([’Clipping = (’,Row1,’,’,Column1,’)’])
fwrite_string([’,(’,Row2,’,’,Column2,’)’])
fnew_line().
Result
DrawLine returns 2 (H_MSG_TRUE), if the window is valid and the needed drawing mode (see SetInsert)
is available. If necessary, an exception is raised.
Parallelization Information
DrawLine is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow
Possible Successors
ReduceDomain, DispLine, SetColored, SetLineWidth, SetDraw, SetInsert
See also
DrawLineMod, GenRectangle1, DrawCircle, DrawEllipse, SetInsert
Module
Foundation
Draw a line.
DrawLineMod returns the parameters of a line that has been created interactively by the user in the window.
The coordinates of the start point (row1In, column1In) and of the end point (row2In, column2In) are
expected as input. If you click on one of the end points of the created line, you may move that point. After
another mouse click in the middle of the created line you can move the whole line.
Pressing the right mouse button terminates the procedure.
After the procedure has been terminated, the line is no longer visible in the window.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window identifier.
. row1In (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y ; HTuple (double)
Row index of the first point of the line.
HALCON 8.0.2
314 CHAPTER 4. GRAPHICS
get_system('width',Width)
get_system('height',Height)
set_part(WindowHandle,0,0,Width-1,Height-1)
read_image(Image,'affe')
disp_image(Image,WindowHandle)
draw_line_mod(WindowHandle,10,20,55,124,Row1,Column1,Row2,Column2)
set_part(WindowHandle,Row1,Column1,Row2,Column2)
disp_image(Image,WindowHandle)
fwrite_string(['Clipping = (',Row1,',',Column1,')'])
fwrite_string([',(',Row2,',',Column2,')'])
fnew_line().
Result
DrawLineMod returns 2 (H_MSG_TRUE), if the window is valid and the needed drawing mode is available. If
necessary, an exception handling is raised.
Parallelization Information
DrawLineMod is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow
Possible Successors
ReduceDomain, DispLine, SetColored, SetLineWidth, SetDraw, SetInsert
Alternatives
DrawLine, DrawEllipse, DrawRegion
See also
GenCircle, DrawRectangle1, DrawRectangle2
Module
Foundation
Attention
In contrast to DrawNurbs, each point specified by the user influences the whole curve. Thus, if one point is
moved, the whole curve can and will change. To minimize these effects, it is recommended to use a small degree
(3-5) and to place the points such that they are approximately equally spaced. In general, odd degrees will
perform slightly better than even degrees.
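The global influence of each point can be illustrated with a small sketch that is independent of HALCON. A Bézier curve is used here purely as a stand-in for a curve whose basis functions are nonzero over the whole parameter range: moving a single control point then shifts the entire curve, not just its neighborhood.

```python
from math import comb

def bezier(ctrl, t):
    """Evaluate a 2D Bezier curve (global-support basis) at parameter t."""
    n = len(ctrl) - 1
    x = sum(comb(n, i) * t**i * (1 - t)**(n - i) * px for i, (px, _) in enumerate(ctrl))
    y = sum(comb(n, i) * t**i * (1 - t)**(n - i) * py for i, (_, py) in enumerate(ctrl))
    return x, y

ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
moved = [(0.0, 0.0), (1.0, 5.0), (3.0, 2.0), (4.0, 0.0)]  # one inner point moved

# The curve changes everywhere in the open interval, not only near the moved point:
for t in (0.1, 0.5, 0.9):
    print(bezier(ctrl, t), bezier(moved, t))
```

Only the end points (t = 0 and t = 1) stay fixed; every interior parameter value is affected, which is why approximately equally spaced points and a small degree keep the interaction manageable.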
Parameter
See also
DrawNurbsInterpMod, DrawNurbs, GenContourNurbsXld
Module
Foundation
• To move the curve, click with the left mouse button on the cross in the center and then drag it to the new
position, i.e., keep the mouse button pressed while moving the mouse.
• To rotate it, click with the left mouse button on the arrow and then drag it, till the curve has the right direction.
• Scaling is achieved by dragging the double arrow. To keep the ratio, the parameter keepRatio has to be
set to true.
Edit Mode
In this mode, the curve is displayed together with its interpolation points and the start and end tangent. Start and
end point are marked by an additional square. You can perform the following modifications:
• To append new points, click with the left mouse button in the window and a new point is added at this position.
• You can delete the point appended last by pressing the Ctrl key.
• To move a point, drag it with the mouse.
• To insert a point on the curve, click on the desired position on the curve.
• To close or open the curve, respectively, click on the first or on the last point.
control polygon if edit is set to true. Similarly, you can only rotate, move or scale it if rotate, move, and
scale, respectively, are set to true.
DrawNurbsMod starts in the transformation mode. In this mode, the curve is displayed together with 3 symbols:
a cross in the middle and an arrow to the right if rotate is set to true, and a double-headed arrow to the upper
right if scale is set to true. To switch into the edit mode, press the Shift key; by pressing it again, you can switch
back into the transformation mode.
Transformation Mode
• To move the curve, click with the left mouse button on the cross in the center and then drag it to the new
position, i.e., keep the mouse button pressed while moving the mouse.
• To rotate it, click with the left mouse button on the arrow and then drag it, till the curve has the right direction.
• Scaling is achieved by dragging the double arrow. To keep the ratio, the parameter keepRatio has to be
set to true.
Edit Mode
In this mode, the curve is displayed together with its control polygon. Start and end point are marked by an
additional square and the point which was handled last is surrounded by a circle representing its weight. You can
perform the following modifications:
• To append control points, click with the left mouse button in the window and a new point is added at this
position.
• You can delete the point appended last by pressing the Ctrl key.
• To move a point, drag it with the mouse.
• To insert a point on the control polygon, click on the desired position on the polygon.
• To close or open the curve, respectively, click on the first or on the last control point.
• You can modify the weight of a control point by first clicking on the point itself (if it is not already the point
which was modified or created last) and then dragging the circle around the point.
get_system('width',Width)
get_system('height',Height)
set_part(WindowHandle,0,0,Width-1,Height-1)
read_image(Image,'affe')
disp_image(Image,WindowHandle)
draw_point(WindowHandle,Row1,Column1)
disp_line(WindowHandle,Row1-2,Column1,Row1+2,Column1)
disp_line(WindowHandle,Row1,Column1-2,Row1,Column1+2)
disp_image(Image,WindowHandle)
fwrite_string(['Clipping = (',Row1,',',Column1,')'])
fnew_line().
Result
DrawPoint returns 2 (H_MSG_TRUE), if the window is valid and the needed drawing mode is available. If
necessary, an exception handling is raised.
Parallelization Information
DrawPoint is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow
Possible Successors
ReduceDomain, DispLine, SetColored, SetLineWidth, SetDraw, SetInsert
See also
DrawPointMod, DrawCircle, DrawEllipse, SetInsert
Module
Foundation
Draw a point.
DrawPointMod returns the parameters of a point that has been created interactively by the user in the window.
The coordinates rowIn and columnIn are expected as input. While keeping the left mouse button pressed, you
may "drag" the point in any direction. Pressing the right mouse button terminates the procedure.
After the procedure has been terminated, the point is no longer visible in the window.
Parameter
get_system('width',Width)
get_system('height',Height)
set_part(WindowHandle,0,0,Width-1,Height-1)
read_image(Image,'affe')
disp_image(Image,WindowHandle)
draw_point_mod(WindowHandle,Row1,Column1)
disp_line(WindowHandle,Row1-2,Column1,Row1+2,Column1)
disp_line(WindowHandle,Row1,Column1-2,Row1,Column1+2)
disp_image(Image,WindowHandle)
fwrite_string(['Clipping = (',Row1,',',Column1,')'])
fnew_line().
Result
DrawPointMod returns 2 (H_MSG_TRUE), if the window is valid and the needed drawing mode is available. If
necessary, an exception handling is raised.
Parallelization Information
DrawPointMod is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow
Possible Successors
ReduceDomain, DispLine, SetColored, SetLineWidth, SetDraw, SetInsert
See also
DrawPoint, DrawCircle, DrawEllipse, SetInsert
Module
Foundation
draw_polygon(Polygon,WindowHandle)
shape_trans(Polygon,Filled,'convex')
disp_region(Filled,WindowHandle).
Result
If the window is valid, DrawPolygon returns 2 (H_MSG_TRUE). If necessary, an exception handling is raised.
Parallelization Information
DrawPolygon is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow
Possible Successors
ReduceDomain, DispRegion, SetColored, SetLineWidth, SetDraw
Alternatives
DrawRegion, DrawCircle, DrawRectangle1, DrawRectangle2, Boundary
See also
ReduceDomain, FillUp, SetColor
Module
Foundation
get_system('width',Width)
get_system('height',Height)
set_part(WindowHandle,0,0,Width-1,Height-1)
read_image(Image,'affe')
disp_image(Image,WindowHandle)
draw_rectangle1(WindowHandle,Row1,Column1,Row2,Column2)
set_part(WindowHandle,Row1,Column1,Row2,Column2)
disp_image(Image,WindowHandle)
fwrite_string(['Clipping = (',Row1,',',Column1,')'])
fwrite_string([',(',Row2,',',Column2,')'])
fnew_line().
Result
DrawRectangle1 returns 2 (H_MSG_TRUE), if the window is valid and the needed drawing mode (see
SetInsert) is available. If necessary, an exception handling is raised.
Parallelization Information
DrawRectangle1 is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow
Possible Successors
ReduceDomain, DispRegion, SetColored, SetLineWidth, SetDraw, SetInsert
Alternatives
DrawRectangle1Mod, DrawRectangle2, DrawRegion
See also
GenRectangle1, DrawCircle, DrawEllipse, SetInsert
Module
Foundation
get_system('width',Width)
get_system('height',Height)
set_part(WindowHandle,0,0,Width-1,Height-1)
read_image(Image,'affe')
disp_image(Image,WindowHandle)
draw_rectangle1_mod(WindowHandle,Row1In,Column1In,Row2In,Column2In,Row1,Column1,Row2,Column2)
set_part(WindowHandle,Row1,Column1,Row2,Column2)
disp_image(Image,WindowHandle)
fwrite_string(['Clipping = (',Row1,',',Column1,')'])
fwrite_string([',(',Row2,',',Column2,')'])
fnew_line().
Result
DrawRectangle1Mod returns 2 (H_MSG_TRUE), if the window is valid and the needed drawing mode (see
SetInsert) is available. If necessary, an exception handling is raised.
Parallelization Information
DrawRectangle1Mod is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow
Possible Successors
ReduceDomain, DispRegion, SetColored, SetLineWidth, SetDraw, SetInsert
Alternatives
DrawRectangle1, DrawRectangle2, DrawRegion
See also
GenRectangle1, DrawCircle, DrawEllipse, SetInsert
Module
Foundation
Parameter
Attention
The output object’s gray values are not defined.
Parameter
read_image(Image,'fabrik')
disp_image(Image,WindowHandle)
draw_region(Region,WindowHandle)
reduce_domain(Image,Region,New)
regiongrowing(New,Segmente,5,5,6,50)
set_colored(WindowHandle,12)
disp_region(Segmente,WindowHandle).
Result
If the window is valid, DrawRegion returns 2 (H_MSG_TRUE). If necessary, an exception handling is raised.
Parallelization Information
DrawRegion is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow
Possible Successors
ReduceDomain, DispRegion, SetColored, SetLineWidth, SetDraw
Alternatives
DrawCircle, DrawEllipse, DrawRectangle1, DrawRectangle2
See also
DrawPolygon, ReduceDomain, FillUp, SetColor
Module
Foundation
• move contour points by clicking with the left mouse button on a point marked by a rectangle and keep the
mouse button pressed while moving the mouse,
• insert contour points by clicking with the left mouse button in the vicinity of a line and then move the mouse
to the position where you want the new point to be placed, and
• delete contour points by selecting the point which should be deleted with the left mouse button and then press
the Ctrl key.
By pressing the Shift key, you can switch into the transformation mode. In this mode you can rotate, move, and
scale the contour as a whole, but only if you set the parameters rotate, move, and scale, respectively, to true.
Instead of the pick points, 3 symbols are displayed with the contour: a cross in the middle and an arrow to the right
if rotate is set to true, and a double-headed arrow to the upper right if scale is set to true.
You can
• move the contour by clicking the left mouse button on the cross in the center and then dragging it to the new
position,
• rotate it by clicking with the left mouse button on the arrow and then dragging it, till the contour has the right
direction, and
• scale it by dragging the double arrow. To keep the ratio the parameter keepRatio has to be set to true.
• To move the contour, click with the left mouse button on the cross in the center and then drag it to the new
position, i.e., keep the mouse button pressed while moving the mouse.
• To rotate it, click with the left mouse button on the arrow and then drag it, till the contour has the right
direction.
• Scaling is achieved by dragging the double arrow. To keep the ratio the parameter keepRatio has to be set
to true.
Edit Mode
In this mode, the contour is displayed together with 5 pick points, which are located in the middle and
at the corners of the surrounding rectangle. If the contour is closed, the pick points are displayed as squares,
otherwise they are shaped like a 'u'. By clicking on a pick point, you can close an open contour and vice versa.
Depending on the state of the contour, you can perform different modifications.
Open contours (pick points shaped like a 'u'):
• To append points, click with the left mouse button in the window and a new point is added at this position.
• You can delete the point appended last by pressing the Ctrl key.
• To move or insert points, you must first close the contour by clicking on one of the pick points.
4.2 Gnuplot
Possible Predecessors
GnuplotOpenPipe, GnuplotOpenFile, GnuplotPlotImage
See also
GnuplotOpenPipe, GnuplotOpenFile, GnuplotPlotImage
Module
Foundation
void HGnuplot.GnuplotOpenPipe ( )
Open a pipe to a gnuplot process for visualization of images and control values.
GnuplotOpenPipe opens a pipe to a gnuplot sub-process with which images can subsequently be visualized
as 3D plots (GnuplotPlotImage) or control values can be visualized as 2D plots (GnuplotPlotCtrl).
The sub-process must be terminated after displaying the last plot by calling GnuplotClose. The corresponding
identifier for the gnuplot output stream is returned in gnuplotFileID.
Attention
GnuplotOpenPipe is only implemented for Unix because gnuplot for Windows (wgnuplot) cannot be
controlled by an external process.
Parameter
variables samples and isosamples. The parameters viewRotX and viewRotZ determine the rotation of the plot
with respect to the viewer. viewRotX is the rotation of the coordinate system about the x-axis, while viewRotZ
is the rotation of the plot about the z-axis. These two parameters correspond directly to the first two parameters
of the 'set view' command in gnuplot. The parameter hidden3D determines whether hidden surfaces should be
removed. This is equivalent to the 'set hidden3d' command in gnuplot. If a single image is passed to the operator,
it is displayed in a separate plot. If multiple images are passed, they are displayed in the same plot.
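The correspondence to gnuplot commands can be sketched as follows. This is plain Python that only constructs the command strings described above; the function name and the default sampling density are illustrative assumptions, and the HALCON operator itself manages the gnuplot process.

```python
def gnuplot_view_commands(view_rot_x, view_rot_z, hidden3d, samples=40):
    """Build the gnuplot setup commands matching the parameters described
    above: sampling density, 'set view', and hidden-surface removal."""
    cmds = [
        "set samples %d" % samples,
        "set isosamples %d" % samples,
        # the first two 'set view' parameters: rotation about x- and z-axis
        "set view %g, %g" % (view_rot_x, view_rot_z),
    ]
    cmds.append("set hidden3d" if hidden3d else "unset hidden3d")
    return cmds

print(gnuplot_view_commands(60, 30, True))
```

These strings would be written to the pipe opened by GnuplotOpenPipe before plotting.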
Parameter
4.3 LUT
static void HOperatorSet.DispLut ( HTuple windowHandle, HTuple row,
HTuple column, HTuple scale )
set_lut(WindowHandle,'color')
disp_lut(WindowHandle,256,256,1)
get_mbutton(WindowHandle,_,_,_)
set_lut(WindowHandle,'sqrt')
disp_lut(WindowHandle,128,128,2).
Result
DispLut returns 2 (H_MSG_TRUE) if the hardware supports a look-up-table, the window is valid and the
parameters are correct. Otherwise an exception handling is raised.
Parallelization Information
DispLut is reentrant, local, and processed without parallelization.
Possible Predecessors
SetLut
See also
OpenWindow, OpenTextwindow, DrawLut, SetLut, SetFix, SetPixel, WriteLut, GetLut,
SetColor
Module
Foundation
read_image(Image,'fabrik')
disp_image(Image,WindowHandle)
draw_lut(WindowHandle)
write_lut(WindowHandle,'my_lut').
...
read_image(Image,'fabrik')
set_lut(WindowHandle,'my_lut').
Result
DrawLut returns 2 (H_MSG_TRUE) if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
DrawLut is reentrant, local, and processed without parallelization.
Possible Successors
SetLutStyle, SetLut, WriteLut, DispLut
Alternatives
SetFix, SetRgb
See also
WriteLut, SetLut, GetLut, DispLut
Module
Foundation
string HWindow.GetFixedLut ( )
Get fixing of "look-up-table" (lut) for "real color images".
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window identifier.
. mode (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Mode of fixing.
Default Value : "true"
List of values : Mode ∈ {"true", "false"}
Parallelization Information
GetFixedLut is reentrant, local, and processed without parallelization.
Possible Successors
SetFixedLut
Module
Foundation
HTuple HWindow.GetLut ( )
Get current look-up-table (lut).
GetLut returns the name or the values of the look-up-table (lut) of the window, currently used by DispImage
(or indirectly by DispRegion, etc.) for output. To set a look-up-table, use SetLut. If the current table is a
system table without any modification (by SetFix), the name of the table is returned. If it is a modified table,
a table read from a file, or a table for output with pseudo real colors, the RGB values of the table are returned.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window identifier.
. lookUpTable (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; HTuple (string / int / long)
Name of look-up-table or tuple of RGB-values.
Result
GetLut returns 2 (H_MSG_TRUE) if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
GetLut is reentrant, local, and processed without parallelization.
Possible Successors
DrawLut, SetLut
Alternatives
SetFix, GetPixel
See also
SetLut, DrawLut
Module
Foundation
hue: 0.0
saturation: 1.0
intensity: 1.0
Parameter
HTuple HWindow.QueryLut ( )
Query all available look-up-tables (lut).
QueryLut returns the names of all look-up-tables available on the currently used device. These tables can be set
with SetLut. A table named 'default' is always available.
Parameter
Colors in S descend from applications that were active before starting HALCON and should not get lost. Graphic
colors in G are used for operators such as DispRegion, DispCircle, etc., and are set uniquely within all
look-up-tables. Output in a graphic color therefore always has the same look, even if different look-up-tables
are used. SetColor and SetRgb set graphic colors. Gray values resp. colors in B are used by DispImage
to display an image. They can change according to the current look-up-table. There are two exceptions to this
concept:
• SetGray allows setting colors of the area B for operators such as DispRegion,
• SetFix allows modification of graphic colors.
For common monitors only one look-up-table can be loaded per screen, whereas SetLut can be activated
separately for each window. This problem is solved as follows: the look-up-table that is assigned to the
"active window" is always activated (a window is set into the state "active" by the window manager).
A look-up-table can also be used with truecolor displays. In this case the look-up-table is simulated in software.
This means that the look-up-table is applied each time an image is displayed.
Windows NT specific: if the graphics card is used in a mode different from truecolor, you must display the image
after setting the look-up-table.
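The software simulation amounts to mapping every gray value through the table whenever an image is displayed. A minimal sketch of that lookup step (plain Python with a hypothetical helper, not a HALCON call):

```python
def apply_lut(gray_pixels, lut_rgb):
    """Map 8-bit gray values through a look-up-table of (r, g, b) entries,
    as a truecolor driver would do in software at display time."""
    if len(lut_rgb) != 256:
        raise ValueError("expected one RGB entry per gray value")
    return [lut_rgb[g] for g in gray_pixels]

# identity gray table: gray value g is shown as the color (g, g, g)
linear = [(g, g, g) for g in range(256)]
print(apply_lut([0, 128, 255], linear))
```

On a real 256-color display the same table would instead be loaded into the hardware colormap once, which is why the truecolor case costs time on every display operation.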
QueryLut lists the names of all look-up-tables. They differ from each other in the area used for gray values.
Within this area the following behavior is defined:
gray value tables (1-7 image levels)
'default': Only the two basic colors (generally black and white) are used.
color tables (real color, static gray value steps)
'default': Table proposed by the hardware.
gray value tables (256 colors)
'default': Same as 'linear'.
'linear': Gray values increase linearly from 0 (black) to 255 (white).
'inverse': Inverse function of 'linear'.
'sqr': Gray values increase according to a square function.
'inv_sqr': Inverse function of 'sqr'.
'cube': Gray values increase according to a cubic function.
'inv_cube': Inverse function of 'cube'.
'sqrt': Gray values increase according to a square-root function.
'inv_sqrt': Inverse function of 'sqrt'.
'cubic_root': Gray values increase according to a cubic-root function.
'inv_cubic_root': Inverse function of 'cubic_root'.
color tables (256 colors)
'color1': Linear transition from red via green to blue.
'color2': Smooth transition from yellow via red and blue to green.
'color3': Smooth transition from yellow via red, blue, green, and red to blue.
'color4': Smooth transition from yellow via red to blue.
'three': Displays the three colors red, green, and blue.
'six': Displays the six basic colors yellow, red, magenta, blue, cyan, and green.
'twelve': Displays 12 colors.
'twenty_four': Displays 24 colors.
'rainbow': Displays the spectral colors from red via green to blue.
'temperature': Temperature table from black via red and yellow to white.
'change1': Color change after every pixel within the table, alternating the six basic colors.
'change2': Fivefold color change from green via red to blue.
'change3': Threefold color change from green via red to blue.
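The gray-value tables follow simple transfer functions suggested by their names. The following sketch (plain Python) generates a few of them; the formulas are inferred from the names above, so treat this as an illustration rather than HALCON's exact tables.

```python
def make_gray_lut(name):
    """Build a 256-entry gray look-up-table for some of the names listed above.
    Each entry maps an input gray value (index) to an output gray value."""
    funcs = {
        "linear":  lambda t: t,
        "inverse": lambda t: 1.0 - t,
        "sqr":     lambda t: t * t,
        "sqrt":    lambda t: t ** 0.5,
        "cube":    lambda t: t ** 3,
    }
    f = funcs[name]
    return [round(255 * f(i / 255.0)) for i in range(256)]

print(make_gray_lut("linear")[:4])  # -> [0, 1, 2, 3]
```

'sqr' and 'cube' darken the midtones (useful for spreading bright regions), while 'sqrt' and 'cubic_root' brighten them; each 'inv_*' table mirrors its counterpart.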
A look-up-table can be read from a file. Every line of such a file must contain three numbers in the range of 0 to
255, with the first number describing the amount of red, the second the amount of green, and the third the amount
of blue of the represented display color. The number of lines can vary. The first line contains the information for
the first gray value and the last line for the last value. If there are fewer lines than gray values, the available
values are distributed over the whole interval. If there are more lines than gray values, a number of (uniformly
distributed) lines is ignored. The file name must conform to "lookUpTable.lut". Within the parameter, the
name is specified without the file extension. HALCON searches for the file in the current directory and after that in
a specified directory (see SetSystem('lut_dir',<path>)). It is also possible to call SetLut with a
tuple of RGB values. These are set directly. The number of parameter values must conform to the number of
pixels currently used within the look-up-table.
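The distribution of file lines over the table entries can be sketched as follows. This is plain Python; the exact interpolation HALCON uses is not documented here, so this simply spreads n file lines uniformly over 256 slots by nearest-index sampling, which covers both the "fewer lines" and "more lines" cases described above.

```python
def spread_lut(file_rgb, size=256):
    """Spread n RGB lines from a .lut file over `size` table entries.
    With fewer lines than entries, values are repeated over the interval;
    with more lines, uniformly distributed lines are skipped."""
    n = len(file_rgb)
    return [file_rgb[min(i * n // size, n - 1)] for i in range(size)]

# a 2-line file fills half the table with each color
two = [(0, 0, 0), (255, 255, 255)]
table = spread_lut(two)
print(table[0], table[255])
```

With a 512-line file, every second line would be ignored by the same index mapping.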
Attention
SetLut can only be used with monitors supporting 256 gray levels/colors.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window identifier.
. lookUpTable (input_control) . . . . . . . . . . . . . . . . . . . . . filename.read(-array) ; HTuple (string / int / long)
Name of look-up-table, values of look-up-table (RGB) or file name.
Default Value : "default"
Suggested values : LookUpTable ∈ {"default", "linear", "inverse", "sqr", "inv_sqr", "cube", "inv_cube",
"sqrt", "inv_sqrt", "cubic_root", "inv_cubic_root", "color1", "color2", "color3", "color4", "three", "six",
"twelve", "twenty_four", "rainbow", "temperature", "cyclic_gray", "cyclic_temperature", "hsi", "change1",
"change2", "change3"}
Example (Syntax: HDevelop)
read_image(Image,'affe')
query_lut(WindowHandle,LUTs)
for(1,|LUTs|,i)
set_lut(WindowHandle,LUTs[i])
fwrite_string(['current table ',LUTs[i]])
fnew_line()
get_mbutton(WindowHandle,_,_,_)
loop().
Result
SetLut returns 2 (H_MSG_TRUE) if the hardware supports a look-up-table and the parameter is correct.
Otherwise an exception handling is raised.
Parallelization Information
SetLut is reentrant, local, and processed without parallelization.
Possible Predecessors
QueryLut, DrawLut, GetLut
Possible Successors
WriteLut
Alternatives
DrawLut, SetFix, SetPixel
See also
GetLut, QueryLut, DrawLut, SetFix, SetColor, SetRgb, SetHsi, WriteLut
Module
Foundation
Parameter
read_image(Image,'affe')
set_lut(WindowHandle,'color')
repeat()
get_mbutton(WindowHandle,Row,Column,Button)
eval(Row/300.0,Saturation)
eval(Column/512.0,Hue)
set_lut_style(WindowHandle,Hue,Saturation,1.0)
until(Button = 1).
Result
SetLutStyle returns 2 (H_MSG_TRUE) if the window is valid and the parameter is correct. Otherwise an
exception handling is raised.
Parallelization Information
SetLutStyle is reentrant, local, and processed without parallelization.
Possible Predecessors
GetLutStyle
Possible Successors
SetLut
Alternatives
SetLut, ScaleImage
See also
GetLutStyle
Module
Foundation
Attention
WriteLut is only suitable for systems using 256 colors.
Parameter
read_image(Image,'affe')
disp_image(Image,WindowHandle)
draw_lut(WindowHandle)
write_lut(WindowHandle,'test_lut').
Result
WriteLut returns 2 (H_MSG_TRUE) if the window with the required properties (256 colors) is valid and the
parameter (file name) is correct. Otherwise an exception handling is raised.
Parallelization Information
WriteLut is reentrant, local, and processed without parallelization.
Possible Predecessors
DrawLut, SetLut
See also
SetLut, DrawLut, SetPixel, GetPixel
Module
Foundation
4.4 Mouse
static void HOperatorSet.GetMbutton ( HTuple windowHandle,
out HTuple row, out HTuple column, out HTuple button )
void HWindow.GetMbutton ( out int row, out int column, out int button
)
1: Left button,
2: Middle button,
4: Right button.
The operator waits until a button is pressed in the output window. If more than one button is pressed, the sum of
the individual buttons’ values is returned. The origin of the coordinate system is located in the left upper corner
of the window. The row coordinates increase towards the bottom, while the column coordinates increase towards
the right. For graphics windows, the coordinates of the lower right corner are (image height-1,image width-1)
(see OpenWindow, ResetObjDb), while for text windows they are (window height-1,window width-1) (see
OpenTextwindow).
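Since simultaneously pressed buttons are summed, the returned value is a bitmask over the values 1, 2, and 4. Decoding it is straightforward, as this HALCON-independent Python sketch shows:

```python
def decode_buttons(value):
    """Split the summed button value (1=left, 2=middle, 4=right) into names."""
    names = {1: "left", 2: "middle", 4: "right"}
    return [name for bit, name in names.items() if value & bit]

print(decode_buttons(5))  # left and right pressed together: 1 + 4 -> ['left', 'right']
```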
Attention
GetMbutton only returns if a mouse button is pressed in the window.
Parameter
0: No button,
1: Left button,
2: Middle button,
4: Right button.
The origin of the coordinate system is located in the left upper corner of the window. The row coordinates increase
towards the bottom, while the column coordinates increase towards the right. For graphics windows, the
coordinates of the lower right corner are (image height-1,image width-1) (see OpenWindow, ResetObjDb), while
for text windows they are (window height-1,window width-1) (see OpenTextwindow).
Attention
GetMposition fails (returns FAIL) if the mouse pointer is not located within the window. In this case, no values
are returned.
Parameter
string HWindow.GetMshape ( )
Query the current mouse pointer shape.
GetMshape returns the name of the pointer shape set for the window. The mouse pointer shape can be used in
the operator SetMshape.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window identifier.
. cursor (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Mouse pointer name.
Result
GetMshape returns the value 2 (H_MSG_TRUE).
Parallelization Information
GetMshape is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, OpenTextwindow, QueryMshape
Possible Successors
SetMshape
See also
SetMshape, QueryMshape
Module
Foundation
Parameter
4.5 Output
open_window(0,0,-1,-1,'root','visible','',WindowHandle)
set_draw(WindowHandle,'fill')
set_color(WindowHandle,'white')
set_insert(WindowHandle,'not')
Row = 100
Column = 100
disp_arc(WindowHandle,Row,Column,3.14,Row+10,Column+10)
close_window(WindowHandle).
Result
DispArc returns 2 (H_MSG_TRUE).
Parallelization Information
DispArc is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, SetDraw, SetColor, SetColored, SetLineWidth, SetRgb, SetHsi
Alternatives
DispCircle, DispEllipse, DispRegion, GenCircle, GenEllipse
See also
OpenWindow, OpenTextwindow, SetColor, SetDraw, SetRgb, SetHsi
Module
Foundation
set_color(WindowHandle,['red','green'])
disp_arrow(WindowHandle,[10,10],[10,10],[118,110],[118,118],1.0).
Result
DispArrow returns 2 (H_MSG_TRUE).
Parallelization Information
DispArrow is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, SetDraw, SetColor, SetColored, SetLineWidth, SetRgb, SetHsi
Alternatives
DispLine, GenRegionPolygon, DispRegion
See also
OpenWindow, OpenTextwindow, SetColor, SetDraw, SetLineWidth
Module
Foundation
Result
If the used images contain valid values and a correct output mode is set, DispChannel returns 2
(H_MSG_TRUE). Otherwise an exception handling is raised.
Parallelization Information
DispChannel is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, SetRgb, SetLut, SetHsi
Alternatives
DispImage, DispColor
See also
OpenWindow, OpenTextwindow, ResetObjDb, SetLut, DrawLut, DumpWindow
Module
Foundation
DispCircle displays one or several circles in the output window. A circle is described by its center (row,
column) and its radius (radius). If the given coordinates are not within the window, the circle is clipped
accordingly.
The procedures used to control the display of regions (e.g., SetColor, SetGray, SetDraw) can also be used
with circles. Several circles can be displayed with one call by using tuple parameters. For the use of colors with
several circles, see SetColor.
Attention
The center of the circle must be within the window.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window identifier.
. row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.y(-array) ; HTuple (double / int / long)
Row index of the center.
Default Value : 64
Suggested values : Row ∈ {0, 64, 128, 256}
Typical range of values : 0 ≤ Row ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.x(-array) ; HTuple (double / int / long)
Column index of the center.
Default Value : 64
Suggested values : Column ∈ {0, 64, 128, 256}
Typical range of values : 0 ≤ Column ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.radius(-array) ; HTuple (double / int / long)
Radius of the circle.
Default Value : 64
Suggested values : Radius ∈ {0, 64, 128, 256}
Typical range of values : 0 ≤ Radius ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
Restriction : Radius > 0.0
Example (Syntax: HDevelop)
open_window(0,0,-1,-1,'root','visible','',WindowHandle)
set_draw(WindowHandle,'fill')
set_color(WindowHandle,'white')
set_insert(WindowHandle,'not')
repeat()
get_mbutton(WindowHandle,Row,Column,Button)
disp_circle(WindowHandle,Row,Column,(Row + Column) mod 50)
until(Button = 1)
close_window(WindowHandle).
Result
DispCircle returns 2 (H_MSG_TRUE).
Parallelization Information
DispCircle is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, SetDraw, SetColor, SetColored, SetLineWidth, SetRgb, SetHsi
Alternatives
DispEllipse, DispRegion, GenCircle, GenEllipse
See also
OpenWindow, OpenTextwindow, SetColor, SetDraw, SetRgb, SetHsi
Module
Foundation
Result
If the used image contains valid values and a correct output mode is set, DispColor returns 2 (H_MSG_TRUE).
Otherwise an exception handling is raised.
Parallelization Information
DispColor is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, SetRgb, SetLut, SetHsi
Alternatives
DispChannel, DispObj
See also
DispImage, OpenWindow, OpenTextwindow, ResetObjDb, SetLut, DrawLut, DumpWindow
Module
Foundation
DispDistribution displays a distribution in the window. The parameters are the same as in SetPaint(WindowHandle,'histogram') or GenRegionHisto. Noise distributions can be generated with operations like GaussDistribution or NoiseDistributionMean.
Parameter
open_window(0,0,-1,-1,'root','visible','',WindowHandle)
set_draw(WindowHandle,'fill')
set_color(WindowHandle,'white')
set_insert(WindowHandle,'not')
read_image(Image,'affe')
draw_region(Region,WindowHandle)
noise_distribution_mean(Region,Image,21,Distribution)
disp_distribution(WindowHandle,Distribution,100,100,3).
Parallelization Information
DispDistribution is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, SetDraw, SetColor, SetColored, SetLineWidth, SetRgb, SetHsi,
NoiseDistributionMean, GaussDistribution
See also
GenRegionHisto, SetPaint, GaussDistribution, NoiseDistributionMean
Module
Foundation
Displays ellipses.
DispEllipse displays one or several ellipses in the output window. An ellipse is described by the center
(centerRow, centerCol), the orientation phi (in radians) and the radii of the major and the minor axis
(radius1 and radius2).
The procedures used to control the display of regions (e.g. SetDraw, SetGray, SetColor) can also be used
with ellipses. Several ellipses can be displayed with one call by using tuple parameters. For the use of colors with
several ellipses, see SetColor.
Attention
The center of the ellipse must be within the window.
Parameter
set_color(WindowHandle,'red')
draw_region(MyRegion,WindowHandle)
elliptic_axis(MyRegion,Ra,Rb,Phi)
area_center(MyRegion,_,Row,Column)
disp_ellipse(WindowHandle,Row,Column,Phi,Ra,Rb).
Result
DispEllipse returns 2 (H_MSG_TRUE), if the parameters are correct. Otherwise an exception handling is
raised.
Parallelization Information
DispEllipse is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, SetDraw, SetColor, SetColored, SetLineWidth, SetRgb, SetHsi,
EllipticAxis, AreaCenter
Alternatives
DispCircle, DispRegion, GenEllipse, GenCircle
See also
OpenWindow, OpenTextwindow, SetColor, SetRgb, SetHsi, SetDraw, SetLineWidth
Module
Foundation
Parameter
Result
If the used image contains valid values and a correct output mode is set, DispImage returns 2 (H_MSG_TRUE).
Otherwise an exception handling is raised.
Parallelization Information
DispImage is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, SetRgb, SetLut, SetHsi, ScaleImage, ConvertImageType, MinMaxGray
Alternatives
DispObj, DispColor
See also
OpenWindow, OpenTextwindow, ResetObjDb, SetComprise, SetPaint, SetLut, DrawLut,
PaintGray, ScaleImage, ConvertImageType, DumpWindow
Module
Foundation
disp_rectangle1_margin(WindowHandle,Row1,Column1,Row2,Column2):
disp_line(WindowHandle,Row1,Column1,Row1,Column2)
disp_line(WindowHandle,Row1,Column2,Row2,Column2)
disp_line(WindowHandle,Row2,Column2,Row2,Column1)
disp_line(WindowHandle,Row2,Column1,Row1,Column1).
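The rectangle-margin procedure above can be mirrored as a small sketch (a hypothetical helper, not part of HALCON) that produces the four border lines in the same order as the disp_line calls:

```python
def rectangle_margin_lines(row1, col1, row2, col2):
    """Return the four border lines of an axis-aligned rectangle as
    (start_row, start_col, end_row, end_col) tuples, in the same order
    as the disp_line calls above."""
    return [
        (row1, col1, row1, col2),  # top edge
        (row1, col2, row2, col2),  # right edge
        (row2, col2, row2, col1),  # bottom edge
        (row2, col1, row1, col1),  # left edge
    ]

lines = rectangle_margin_lines(10, 20, 30, 40)
```

Each line ends where the next one starts, so the four lines form a closed margin.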
Result
DispLine returns 2 (H_MSG_TRUE).
Parallelization Information
DispLine is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, SetRgb, SetLut, SetHsi, SetDraw, SetColor, SetColored, SetLineWidth
Alternatives
DispArrow, DispRectangle1, DispRectangle2, DispRegion, GenRegionPolygon,
GenRegionPoints
See also
OpenWindow, OpenTextwindow, SetColor, SetRgb, SetHsi, SetInsert, SetLineWidth
Module
Foundation
Result
If the used object is valid and a correct output mode is set, DispObj returns 2 (H_MSG_TRUE). Otherwise an
exception handling is raised.
Parallelization Information
DispObj is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, SetRgb, SetLut, SetHsi, ScaleImage, ConvertImageType, MinMaxGray
Alternatives
DispColor, DispImage, DispXld, DispRegion
See also
OpenWindow, OpenTextwindow, ResetObjDb, SetComprise, SetPaint, SetLut, DrawLut,
PaintGray, ScaleImage, ConvertImageType, DumpWindow
Module
Foundation
/* display a rectangle */
disp_rectangle1_margin1(long WindowHandle,
long Row1, long Column1,
long Row2, long Column2)
{
Htuple Row, Col;
/* five points: the first corner is repeated to close the polygon */
create_tuple(&Row,5);
create_tuple(&Col,5);
set_i(Row,Row1,0);
set_i(Col,Column1,0);
set_i(Row,Row1,1);
set_i(Col,Column2,1);
set_i(Row,Row2,2);
set_i(Col,Column2,2);
set_i(Row,Row2,3);
set_i(Col,Column1,3);
set_i(Row,Row1,4);
set_i(Col,Column1,4);
T_disp_polygon(WindowHandle,Row,Col);
destroy_tuple(Row);
destroy_tuple(Col);
}
Result
DispPolygon returns 2 (H_MSG_TRUE).
Parallelization Information
DispPolygon is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, SetRgb, SetLut, SetHsi, SetDraw, SetColor, SetColored, SetLineWidth
Alternatives
DispLine, GenRegionPolygon, DispRegion
See also
OpenWindow, OpenTextwindow, SetColor, SetRgb, SetHsi, SetInsert, SetLineWidth
Module
Foundation
set_color(WindowHandle,'green')
draw_region(MyRegion,WindowHandle)
smallest_rectangle1(MyRegion,R1,C1,R2,C2)
disp_rectangle1(WindowHandle,R1,C1,R2,C2).
Result
DispRectangle1 returns 2 (H_MSG_TRUE).
Parallelization Information
DispRectangle1 is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, SetRgb, SetLut, SetHsi, SetDraw, SetColor, SetColored, SetLineWidth
Alternatives
DispRectangle2, GenRectangle1, DispRegion, DispLine, SetShape
See also
OpenWindow, OpenTextwindow, SetColor, SetDraw, SetLineWidth
Module
Foundation
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window identifier.
. centerRow (input_control) . . . . . . . . . . . . . . . . . . rectangle2.center.y(-array) ; HTuple (double / int / long)
Row index of the center.
Default Value : 48
Suggested values : CenterRow ∈ {0, 64, 128, 256, 511}
Typical range of values : 0 ≤ CenterRow ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
. centerCol (input_control) . . . . . . . . . . . . . . . . . . rectangle2.center.x(-array) ; HTuple (double / int / long)
Column index of the center.
Default Value : 64
Suggested values : CenterCol ∈ {0, 64, 128, 256, 511}
Typical range of values : 0 ≤ CenterCol ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
. phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.angle.rad(-array) ; HTuple (double / int / long)
Orientation of rectangle in radians.
Default Value : 0.0
Suggested values : Phi ∈ {0.0, 0.785398, 1.570796, 3.1415926, 6.283185}
Typical range of values : 0.0 ≤ Phi ≤ 6.283185 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
. length1 (input_control) . . . . . . . . . . . . . . . . . . . . . . rectangle2.hwidth(-array) ; HTuple (double / int / long)
Half of the length of the longer side.
Default Value : 48
Suggested values : Length1 ∈ {0, 64, 128, 256, 511}
Typical range of values : 0 ≤ Length1 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
. length2 (input_control) . . . . . . . . . . . . . . . . . . . . . rectangle2.hheight(-array) ; HTuple (double / int / long)
Half of the length of the shorter side.
Default Value : 32
Suggested values : Length2 ∈ {0, 64, 128, 256, 511}
Typical range of values : 0 ≤ Length2 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Length2 < Length1
Example (Syntax: HDevelop)
set_color(WindowHandle,'green')
draw_region(MyRegion,WindowHandle)
elliptic_axis(MyRegion,Ra,Rb,Phi)
area_center(MyRegion,_,Row,Column)
disp_rectangle2(WindowHandle,Row,Column,Phi,Ra,Rb).
Result
DispRectangle2 returns 2 (H_MSG_TRUE), if the parameters are correct. Otherwise an exception handling
is raised.
Parallelization Information
DispRectangle2 is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, SetRgb, SetLut, SetHsi, SetDraw, SetColor, SetColored, SetLineWidth
Alternatives
DispRegion, GenRectangle2, DispRectangle1, SetShape
See also
OpenWindow, OpenTextwindow, DispRegion, SetColor, SetDraw, SetLineWidth
Module
Foundation
/* Symbolic representation: */
set_draw(WindowHandle,'margin')
set_color(WindowHandle,'red')
set_shape(WindowHandle,'ellipse')
disp_region(SomeSegments,WindowHandle).
Result
DispRegion returns 2 (H_MSG_TRUE).
Parallelization Information
DispRegion is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, SetRgb, SetLut, SetHsi, SetShape, SetLineStyle, SetInsert, SetFix,
SetDraw, SetColor, SetColored, SetLineWidth
Alternatives
DispObj, DispArrow, DispLine, DispCircle, DispRectangle1, DispRectangle2,
DispEllipse
See also
OpenWindow, OpenTextwindow, SetColor, SetColored, SetDraw, SetShape, SetPaint,
SetGray, SetRgb, SetHsi, SetPixel, SetLineWidth, SetLineStyle, SetInsert, SetFix,
PaintRegion, DumpWindow
Module
Foundation
4.6 Parameters
static void HOperatorSet.GetComprise ( HTuple windowHandle,
out HTuple mode )
See also
SetComprise, DispImage, DispColor
Module
Foundation
string HWindow.GetDraw ( )
Get the current region fill mode.
GetDraw returns the region fill mode of the output window. It is used by operators such as DispRegion, DispCircle, DispArrow, DispRectangle1, DispRectangle2, etc. The region fill mode is set with SetDraw.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window_id.
. mode (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Current region fill mode.
Result
GetDraw returns 2 (H_MSG_TRUE), if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
GetDraw is reentrant and processed without parallelization.
Possible Successors
SetDraw, DispRegion
See also
SetDraw, DispRegion, SetPaint
Module
Foundation
string HWindow.GetFix ( )
Get the fixing mode of the current look-up table (lut).
Use GetFix to query the fixing mode of the current look-up table (the look-up table of the valid window), as set previously with SetFix.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window identifier.
. mode (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Current Mode of fixing.
Result
GetFix returns 2 (H_MSG_TRUE) if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
GetFix is reentrant, local, and processed without parallelization.
Possible Successors
SetFix, SetPixel, SetRgb
See also
SetFix
Module
Foundation
Result
GetIcon always returns 2 (H_MSG_TRUE).
Parallelization Information
GetIcon is reentrant and processed without parallelization.
Possible Predecessors
SetIcon
Possible Successors
DispRegion
Module
Foundation
string HWindow.GetInsert ( )
Get the current display mode.
GetInsert returns the display mode of the output window. It is used by procedures like DispRegion,
DispLine, DispRectangle1, etc. The mode is set with SetInsert. Possible values for mode can
be queried with the procedure QueryInsert.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window_id.
. mode (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Display mode.
Result
GetInsert returns 2 (H_MSG_TRUE) if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
GetInsert is reentrant and processed without parallelization.
Possible Predecessors
QueryInsert
Possible Successors
SetInsert, DispImage
See also
SetInsert, QueryInsert, DispRegion, DispLine
Module
Foundation
int HWindow.GetLineApprox ( )
Get the current approximation error for contour display.
GetLineApprox returns a parameter that controls the approximation error for region contour display in the
window. It is used by the procedure DispRegion. approximation controls the polygon approximation
for contour display (0 ⇔ no approximation). approximation is only important for displaying the contour of
objects, especially if a line style was set with SetLineStyle.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window_id.
. approximation (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Current approximation error for contour display.
Result
GetLineApprox returns 2 (H_MSG_TRUE) if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
GetLineApprox is reentrant and processed without parallelization.
Possible Successors
SetLineApprox, SetLineStyle, DispRegion
See also
GetRegionPolygon, SetLineApprox, SetLineStyle, DispRegion
Module
Foundation
HTuple HWindow.GetLineStyle ( )
Get the current graphic mode for contours.
GetLineStyle returns the display mode for contours when displaying regions. It is used by procedures like DispRegion, DispLine, DispPolygon, etc. style is set with the procedure SetLineStyle and is only relevant for displaying the contours of objects.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window_id.
. style (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; HTuple (int / long)
Template for contour display.
Result
GetLineStyle returns 2 (H_MSG_TRUE) if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
GetLineStyle is reentrant, local, and processed without parallelization.
See also
SetLineStyle, DispRegion
Module
Foundation
int HWindow.GetLineWidth ( )
Get the current line width for contour display.
GetLineWidth returns the line width for region display in the window. It is used by procedures like
DispRegion, DispLine, DispPolygon, etc. width is set with the procedure SetLineWidth. width
is only important for displaying the contour of objects.
Parameter
HTuple HWindow.GetPaint ( )
Get the current display mode for grayvalues.
GetPaint returns the display mode for grayvalues in the window. mode is used by the procedure DispImage.
GetPaint is used for temporary changes of the grayvalue display mode. The current value is queried, then
changed (with procedure SetPaint) and finally the old value is written back. The available modes can be
viewed with the procedure QueryPaint. mode is the name of the display mode. If a mode can be customized
with parameters, the parameter values are passed in a tuple after the mode name. The order of values is the same
as in SetPaint.
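The query-change-restore pattern described above can be sketched in Python. The window object here is a hypothetical stand-in used only to illustrate the pattern, not the HALCON API:

```python
class FakeWindow:
    """Hypothetical stand-in for a HALCON window; for illustration only."""
    def __init__(self):
        self._paint = ['default']      # current grayvalue display mode

    def get_paint(self):               # analogous to GetPaint
        return list(self._paint)

    def set_paint(self, mode):         # analogous to SetPaint
        self._paint = list(mode)

def with_temporary_paint(window, temp_mode, draw):
    """Query the current mode, change it temporarily, write the old value back."""
    saved = window.get_paint()         # query the current value
    window.set_paint(temp_mode)        # change it for the display call
    try:
        draw()                         # e.g. a DispImage call
    finally:
        window.set_paint(saved)        # restore the old value
```

After the call, the window's paint mode is exactly what it was before, even if the display call raised an error.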
Parameter
void HWindow.GetPart ( out int row1, out int column1, out int row2,
out int column2 )
int HWindow.GetPartStyle ( )
Get the current interpolation mode for grayvalue display.
GetPartStyle returns the interpolation mode used for displaying an image part in the window. An interpolation
takes place if the output window is larger than the image format or the image output format (see SetPart).
HALCON supports three interpolation modes:
Result
GetPartStyle returns 2 (H_MSG_TRUE) if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
GetPartStyle is reentrant and processed without parallelization.
Possible Successors
SetPartStyle, DispRegion, DispImage
See also
SetPartStyle, SetPart, DispImage, DispColor
Module
Foundation
HTuple HWindow.GetPixel ( )
Get the current color lookup table index.
GetPixel returns the internal coding of the output grayvalue or color, respectively, for the window. If the output mode is set to color(s) or grayvalue(s) (see SetColor or SetGray), then the color or grayvalues are transformed for internal use. The internal code is then used for (physical) screen display. The transformation depends on the mapping characteristics and the condition of the output device and can be different in different program runs. Don't confuse the term "pixel" with the term "pixel" in image processing (the other procedure is GetGrayval). Here a pixel is meant to be the color lookup table index.
With GetPixel it is possible to save the output mode without knowing whether colors or grayvalues are used. pixel is set with the procedure SetPixel.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window_id.
. pixel (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; HTuple (int / long)
Index of the current color look-up table.
Result
GetPixel returns 2 (H_MSG_TRUE) if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
GetPixel is reentrant and processed without parallelization.
Possible Successors
SetPixel, DispRegion, DispImage
See also
SetPixel, SetFix
Module
Foundation
void HWindow.GetRgb ( out HTuple red, out HTuple green, out HTuple blue
)
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window_id.
. red (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; HTuple (int / long)
The current color’s red value.
. green (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; HTuple (int / long)
The current color’s green value.
. blue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; HTuple (int / long)
The current color’s blue value.
Result
GetRgb returns 2 (H_MSG_TRUE) if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
GetRgb is reentrant and processed without parallelization.
Possible Successors
SetRgb, DispRegion, DispImage
See also
SetRgb
Module
Foundation
string HWindow.GetShape ( )
Get the current region output shape.
GetShape returns the shape in which regions are displayed. The available shapes can be queried with
QueryShape and then changed with SetShape.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window_id.
. displayShape (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Current region output shape.
Result
GetShape returns 2 (H_MSG_TRUE) if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
GetShape is reentrant and processed without parallelization.
Possible Predecessors
QueryShape
Possible Successors
SetShape, DispRegion
See also
SetShape, QueryShape, DispRegion
Module
Foundation
HTuple HWindow.QueryAllColors ( )
Query all color names.
QueryAllColors returns the names of all colors that are known to HALCON. That doesn't mean that these colors are available on every screen. On some screens there may only be a subset of colors available (see QueryColor). Before opening the first window, SetSystem can be used to define which and how many colors should be used. The HALCON colors are used to display regions (DispRegion, DispPolygon, DispCircle, etc.). They can be defined with SetColor.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window_id.
. colors (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; HTuple (string)
Color names.
Example (Syntax: HDevelop)
query_all_colors(WindowHandle,Colors)
<interactive selection from Colors provides ActColors>
set_system('graphic_colors',ActColors)
open_window(0,0,1,1,'root','invisible','',WindowHandle)
query_color(WindowHandle,F)
close_window(WindowHandle)
fwrite_string(['Setting Colors: ',F]).
Result
QueryAllColors always returns 2 (H_MSG_TRUE).
Parallelization Information
QueryAllColors is reentrant, local, and processed without parallelization.
Possible Successors
SetSystem, SetColor, DispRegion
See also
QueryColor, SetSystem, SetColor, DispRegion, OpenWindow, OpenTextwindow
Module
Foundation
HTuple HWindow.QueryColor ( )
Query all color names displayable in the window.
QueryColor returns the names of all colors that are usable for region output (DispRegion, DispPolygon, DispCircle, etc.). On a b/w screen QueryColor returns 'black' and 'white'. These two "colors" are displayable on any screen. In addition to 'black' and 'white', several grayvalues (e.g. 'dim gray') are returned on screens capable of grayvalues. A list of all displayable colors is returned for screens with a color lookup table. The returned tuple of colors begins with b/w, followed by the three primaries ('red', 'green', 'blue') and several grayvalues. Before opening the first window it is furthermore possible to define the color list with SetSystem('graphic_colors',...). QueryAllColors(WindowHandle,Colors) returns a list of all available colors for the SetSystem('graphic_colors',...) call. For screens with truecolor output the same list is returned by QueryColor. The list of available colors (to HALCON) must not be confused with the list of displayable colors. For screens with truecolor output the available colors are only a small subset of the displayable colors. Colors that are not directly available to HALCON can be chosen manually with SetRgb or SetHsi. If colors are chosen that are known to HALCON but cannot be displayed, HALCON can choose a similar color. To use this feature, SetCheck('~color') must be set.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window_id.
. colors (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; HTuple (string)
Color names.
open_window(0,0,-1,-1,'root','invisible','',WindowHandle)
query_color(WindowHandle,Colors)
close_window(WindowHandle)
fwrite_string(['Displayable colors: ',Colors]).
Result
QueryColor returns 2 (H_MSG_TRUE), if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
QueryColor is reentrant, local, and processed without parallelization.
Possible Successors
SetColor, DispRegion
See also
QueryAllColors, SetColor, DispRegion, OpenWindow, OpenTextwindow
Module
Foundation
regiongrowing(Image,Seg,5,5,6,100)
query_colored(Colors)
set_colored(WindowHandle,Colors[1])
disp_region(Seg,WindowHandle).
Result
QueryColored always returns 2 (H_MSG_TRUE).
Parallelization Information
QueryColored is reentrant and processed without parallelization.
Possible Successors
SetColored, SetColor, DispRegion
Alternatives
QueryColor
See also
SetColored, SetColor
Module
Foundation
HTuple HWindow.QueryInsert ( )
Query the possible graphic modes.
QueryInsert returns the possible modes in which pixels can be displayed in the output window. New pixels may, e.g., overwrite old ones. In most cases there is a functional relationship between old and new values.
Possible display functions:
See also
SetInsert, GetInsert
Module
Foundation
HTuple HWindow.QueryPaint ( )
Query the grayvalue display modes.
QueryPaint returns the names of all grayvalue display modes (e.g. 'gray', '3D-plot', 'contourline', etc.) for the output window. These modes are used by SetPaint. QueryPaint only returns the names of the display modes, not the additional parameters that may be necessary for some modes.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window_id.
. mode (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; HTuple (string)
Grayvalue display mode names.
Result
QueryPaint returns 2 (H_MSG_TRUE) if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
QueryPaint is reentrant, local, and processed without parallelization.
Possible Successors
GetPaint, SetPaint, DispImage
See also
SetPaint, GetPaint, DispImage
Module
Foundation
Parameter
set_color(WindowHandle,['red','green'])
disp_circle(WindowHandle,[100,200,300],[200,300,100],[100,100,100]).
Result
SetColor returns 2 (H_MSG_TRUE) if the window is valid and the passed colors are displayable on the screen.
Otherwise an exception handling is raised.
Parallelization Information
SetColor is reentrant, local, and processed without parallelization.
Possible Predecessors
QueryColor
Possible Successors
DispRegion
Alternatives
SetRgb, SetHsi
See also
GetRgb, DispRegion, SetFix, SetPaint
Module
Foundation
Possible Predecessors
QueryColored, SetColor
Possible Successors
DispRegion
See also
QueryColored, SetColor, DispRegion
Module
Foundation
open_window(0,0,-1,-1,'root','visible','',WindowHandle)
read_image(Image,'fabrik')
threshold(Image,Seg,100,255)
set_system('init_new_image','false')
sobel_amp(Image,Sob,'sum_abs',3)
disp_image(Sob,WindowHandle)
get_comprise(WindowHandle,Mode)
fwrite_string(['Current mode for gray values: ',Mode])
fnew_line()
set_comprise(WindowHandle,'image')
get_mbutton(WindowHandle,_,_,_)
disp_image(Sob,WindowHandle)
fwrite_string(['Current mode for gray values: image'])
fnew_line().
Result
SetComprise returns 2 (H_MSG_TRUE) if mode is correct and the window is valid. Otherwise an exception
handling is raised.
Parallelization Information
SetComprise is reentrant and processed without parallelization.
Possible Predecessors
GetComprise
Possible Successors
DispImage
See also
GetComprise, DispImage, DispColor
Module
Foundation
Parameter
set_gray(WindowHandle,[100,200])
disp_circle(WindowHandle,[100,200,300],[200,300,100],[100,100,100]).
Result
SetGray returns 2 (H_MSG_TRUE) if grayValues is displayable and the window is valid. Otherwise an
exception handling is raised.
Parallelization Information
SetGray is reentrant, local, and processed without parallelization.
Possible Successors
DispRegion
See also
GetPixel, SetColor
Module
Foundation
H = (2π · Hue)/255
I = (√6 · Intensity)/255
M1 = (sin(H) · Saturation)/(255 · √6)
M2 = (cos(H) · Saturation)/(255 · √2)
R = (2·M1 + I)/(4·√6)
G = (−M1 + M2 + I)/(4·√6)
B = (−M1 − M2 + I)/(4·√6)
Red = R · 255
Green = G · 255
Blue = B · 255
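Transcribed literally into Python, the HSI-to-RGB conversion reads as follows. This is a sketch only: the placement of the radicals in the printed formulas is partly ambiguous in the original typesetting, and the constants have not been validated against HALCON's actual implementation.

```python
import math

def hsi_to_rgb(hue, saturation, intensity):
    """HSI (each 0..255) to RGB (0..255), following the printed formulas."""
    h = (2.0 * math.pi * hue) / 255.0
    i = (math.sqrt(6.0) * intensity) / 255.0
    m1 = (math.sin(h) * saturation) / (255.0 * math.sqrt(6.0))
    m2 = (math.cos(h) * saturation) / (255.0 * math.sqrt(2.0))
    r = (2.0 * m1 + i) / (4.0 * math.sqrt(6.0))
    g = (-m1 + m2 + i) / (4.0 * math.sqrt(6.0))
    b = (-m1 - m2 + i) / (4.0 * math.sqrt(6.0))
    return (r * 255.0, g * 255.0, b * 255.0)
```

Two structural properties hold regardless of the exact scaling constants: with saturation 0 the M terms vanish, so all three channels are equal (a gray), and the M terms always cancel in the sum R + G + B, so the channel sum depends only on the intensity.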
If only one combination is passed, all output will take place in that color. If a tuple of colors is passed, the output color of regions and geometric objects cycles through the tuple modulo the number of colors. HALCON always begins output with the first color passed. Note that the number of output colors depends on the number of objects that are displayed in one procedure call. If only single objects are displayed, they always appear in the first color, even if they consist of more than one connected component.
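The cyclic color assignment can be sketched as follows (assign_colors is an illustrative helper, not a HALCON operator):

```python
def assign_colors(num_objects, colors):
    """Assign a color to each object, cycling through the tuple of colors
    modulo its length, starting with the first color passed."""
    return [colors[i % len(colors)] for i in range(num_objects)]
```

For example, assign_colors(5, ['red', 'green']) alternates between the two colors, starting with 'red'; with a single color, every object receives that color.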
Selected colors are used until the next call of SetColor, SetPixel, SetRgb or SetGray. Colors are bound to windows, i.e. only the colors of the valid window can be set. Region output colors are used by operators like DispRegion, DispLine, DispRectangle1, DispRectangle2, DispArrow, etc. They are also used by procedures with grayvalue output in certain output modes (e.g. '3D-plot', 'histogram', 'contourline', etc. See SetPaint).
Attention
The selected intensities may not be available for the selected hues. In that case, the intensities will be lowered
automatically.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window_id.
. hue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; HTuple (int / long)
Hue for region output.
Default Value : 30
Typical range of values : 0 ≤ Hue ≤ 255
Restriction : (0 ≤ Hue) ∧ (Hue ≤ 255)
. saturation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; HTuple (int / long)
Saturation for region output.
Default Value : 255
Typical range of values : 0 ≤ Saturation ≤ 255
Restriction : (0 ≤ Saturation) ∧ (Saturation ≤ 255)
. intensity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; HTuple (int / long)
Intensity for region output.
Default Value : 84
Typical range of values : 0 ≤ Intensity ≤ 255
Restriction : (0 ≤ Intensity) ∧ (Intensity ≤ 255)
Result
SetHsi returns 2 (H_MSG_TRUE) if the window is valid and the output colors are displayable. Otherwise an
exception handling is raised.
Parallelization Information
SetHsi is reentrant, local, and processed without parallelization.
Possible Predecessors
GetHsi
Possible Successors
DispRegion
See also
GetHsi, GetPixel, TransFromRgb, TransToRgb, DispRegion
Module
Foundation
draw_region(&Icon,WindowHandle);
set_icon(Icon);
set_shape(WindowHandle,"icon");
disp_region(Region,WindowHandle);
Result
SetIcon returns 2 (H_MSG_TRUE) if exactly one region is passed. Otherwise an exception handling is raised.
Parallelization Information
SetIcon is reentrant and processed without parallelization.
Possible Predecessors
GenCircle, GenEllipse, GenRectangle1, GenRectangle2, DrawRegion
Possible Successors
SetShape, DispRegion
Module
Foundation
Not all display functions may be available, depending on the physical display. However, "copy" is always available.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window_id.
. mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Name of the display function.
Default Value : "copy"
List of values : Mode ∈ {"copy", "xor", "complement"}
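The three display functions can be sketched for 8-bit pixel values as follows. Treating 'complement' as a bitwise complement of the old value is an assumption for illustration; the manual does not spell out its exact definition.

```python
def insert_pixel(old, new, mode):
    """Combine an old and a new 8-bit pixel value according to the
    display function set with SetInsert."""
    if mode == 'copy':
        return new                # new pixels simply overwrite old ones
    if mode == 'xor':
        return old ^ new          # functional relationship between old and new
    if mode == 'complement':
        return old ^ 0xFF         # assumption: invert the old value
    raise ValueError('unknown display function: %s' % mode)
```

Drawing the same value twice in 'xor' mode restores the original pixel, which is why xor modes are commonly used for rubber-band graphics.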
Result
SetInsert returns 2 (H_MSG_TRUE) if the parameter is correct and the window is valid. Otherwise an exception handling is raised.
Parallelization Information
SetInsert is reentrant, local, and processed without parallelization.
Possible Predecessors
QueryInsert, GetInsert
Possible Successors
DispRegion
See also
GetInsert, QueryInsert
Module
Foundation
/* Calling */
set_line_approx(WindowHandle,Approximation)
set_draw(WindowHandle,'margin')
disp_region(Obj,WindowHandle).
/* corresponds with */
get_region_polygon(Obj,Approximation,Row,Col)
disp_polygon(WindowHandle,Row,Col).
Result
SetLineApprox returns 2 (H_MSG_TRUE) if the parameter is correct and the window is valid. Otherwise an
exception handling is raised.
Parallelization Information
SetLineApprox is reentrant and processed without parallelization.
Possible Predecessors
GetLineApprox
Possible Successors
DispRegion
Alternatives
GetRegionPolygon, DispPolygon
See also
GetLineApprox, SetLineStyle, SetDraw, DispRegion
Module
Foundation
style contains up to five pairs of values. The first value of each pair is the length of the visible contour part, the second is the length of the invisible part. The value pairs are used cyclically for contour output.
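The cyclic use of the value pairs can be sketched as follows (dash_segments is an illustrative helper; it returns the visible runs along a contour of a given length):

```python
def dash_segments(style, contour_length):
    """Expand a line-style tuple [visible, invisible, ...] cyclically and
    return the (start, end) intervals of the visible contour parts."""
    segments = []
    pos = 0
    pair = 0
    while pos < contour_length:
        visible = style[(2 * pair) % len(style)]
        invisible = style[(2 * pair + 1) % len(style)]
        end = min(pos + visible, contour_length)
        segments.append((pos, end))       # one visible run
        pos = end + invisible             # skip the invisible part
        pair += 1
    return segments
```

For example, a style of [20, 7] along a contour of length 54 yields visible runs (0, 20) and (27, 47), with the pattern restarting from the first pair once the tuple is exhausted.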
Attention
SetLineStyle performs an implicit polygon approximation (see SetLineApprox(WindowHandle,3)). This
approximation can only be increased with SetLineApprox.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window_id.
. style (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; HTuple (int / long)
Contour pattern.
Default Value : []
Example (Syntax: HDevelop)
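A minimal sketch, assuming a valid window handle and a region to display; the pattern [20,7] draws 20 visible pixels followed by 7 invisible ones.

```hdevelop
set_draw(WindowHandle,'margin')
* dashed border: 20 pixels visible, 7 pixels invisible, repeated cyclically
set_line_style(WindowHandle,[20,7])
disp_region(Region,WindowHandle)
```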
Result
SetLineStyle returns 2 (H_MSG_TRUE) if the parameter is correct and the window is valid. Otherwise an
exception handling is raised.
Parallelization Information
SetLineStyle is reentrant, local, and processed without parallelization.
Possible Predecessors
GetLineStyle
Possible Successors
DispRegion
See also
GetLineStyle, SetLineApprox, DispRegion
Module
Foundation
Result
SetLineWidth returns 2 (H_MSG_TRUE) if the parameter is correct and the window is valid. Otherwise an
exception handling is raised.
Parallelization Information
SetLineWidth is reentrant and processed without parallelization.
Possible Predecessors
QueryLineWidth, GetLineWidth
Possible Successors
DispRegion
See also
GetLineWidth, QueryLineWidth, SetDraw, DispRegion
Module
Foundation
• Only the name of the mode is passed: the defaults or the most recently used values are used, respectively.
Example: SetPaint(WindowHandle,’contourline’)
• All values are passed: all output characteristics can be set.
Example: SetPaint(WindowHandle,[’contourline’,10,1])
• Only the first n values are passed: only the passed values are changed.
Example: SetPaint(WindowHandle,[’contourline’,10])
• Some of the values are replaced by an asterisk (’*’): The value of the replaced parameters is not changed.
Example: SetPaint(WindowHandle,[’contourline’,’*’,1])
If the current mode is ’default’, HALCON chooses a suitable algorithm for the output of 2- and 3-channel images.
No SetPaint call is necessary in this case.
Apart from SetPaint there are other operators that affect the output of grayvalues. The most important of
them are SetPart, SetPartStyle, SetLut and SetLutStyle. Some output modes display gray-
values using region output (e.g. ’histogram’, ’contourline’, ’3D-plot’, etc.). In these modes, parameters set with
SetColor, SetRgb, SetHsi, SetPixel, SetShape, SetLineWidth and SetInsert influence
grayvalue output. This can lead to unexpected results when using SetShape(’convex’) together with
SetPaint(WindowHandle,’histogram’): in this case the convex hull of the histogram is displayed.
Modes:
• one-channel images:
’default’ optimal display on given hardware
’gray’ grayvalue output
’mean’ mean grayvalue
’dither4_1’ binary image, dithering matrix 4x4
’dither4_2’ binary image, dithering matrix 4x4
’dither4_3’ binary image, dithering matrix 4x4
’dither8_1’ binary image, dithering matrix 8x8
’floyd_steinberg’ binary image, optimal grayvalue simulation
[’threshold’,Threshold ]
’threshold’ binary image, threshold: 128 (default)
[’threshold’,200 ] binary image, arbitrary threshold (here: 200)
[’histogram’,Line,Column,Scale ]
’histogram’ grayvalue output as histogram.
position default: max. size, in the window center
[’histogram’,256,256,2 ] grayvalue output as histogram, any parameter values.
positioning: window center (here (256,256))
size: (here 2, half the max. size)
[’component_histogram’,Line,Column,Scale ]
’component_histogram’ output as histogram of the connected components.
Positioning: default
[’component_histogram’,256,256,1 ] output as histogram of the connected components.
Positioning: (here (256, 256))
Scaling: (here 1, max. size)
[’row’,Line,Scale ]
’row’ output of the grayvalue profile along the given line.
line: image center (default)
Scaling: 50
[’row’,100,20 ] output of the grayvalue profile of line 100 with a scaling of 0.2 (20 percent)
[’column’,Column,Scale ]
’column’ output of the grayvalue profile along the given column.
column: image center (default)
Scaling: 50
[’column’,100,20 ] output of the grayvalue profile of column 100 with a scaling of 0.2 (20 percent)
[’contourline’,Step,Colored ]
’contourline’ grayvalue output as contour lines: the grayvalue difference per line is defined by the
parameter ’Step’ (default: 30, i.e. max. 8 lines for 256 grayvalues). The lines can be displayed in
a given color (see SetColor) or in the grayvalues they represent. This behaviour is defined by the
parameter ’Colored’ (0 = color, 1 = grayvalues). Default is color.
[’contourline’,15,1 ] grayvalue output as contour lines with a step of 15 and gray output.
[’3D-plot’, Step, Colored, EyeHeight, EyeDistance, ScaleGray, LinePos, ColumnPos]
’3D-plot’ grayvalues are interpreted as 3D data: the greater the value, the ’higher’ the assumed moun-
tain. Lines with the given step (second parameter value) are drawn along the x- and y-axes. The third pa-
rameter (Colored) determines whether the output should be in color (default) or grayvalues. To define the
projection of the 3D data, use the parameters EyeHeight and EyeDistance. The projection parameters
take values from 0 to 255. ScaleGray defines a factor by which the grayvalues are multiplied for the
’height’ interpretation (given in percent; 100 means no scaling). Depending on the values of EyeHeight
and EyeDistance, the image can be shifted out of place. Use RowPos and ColumnPos to move the whole
output. Values from -127 to 127 are possible.
[’3D-plot’, 5, 1, 110, 160, 150, 70, -10 ] line step: 5 pixel
Colored: yes (1)
EyeHeight: 110
EyeDistance: 160
ScaleGray: 1.5 (150)
RowPos: 70 pixel down
ColumnPos: 10 pixel right
[’3D-plot_hidden’, Step, Colored, EyeHeight, EyeDistance, ScaleGray, LinePos, ColumnPos]
’3D-plot_hidden’ like ’3D-plot’, but computes hidden lines.
• Two-channel images:
’default’ output the first channel.
• Three-channel images:
’default’ output as RGB image with ’median_cut’.
’television’ color addition algorithm for RGB images (three components necessary for DispImage).
Images are displayed via a fixed color lookup table. Fast, but non-optimal color resolution. Only recom-
mended on bright screens.
’grid_scan’ grid-scan algorithm for RGB images (three components necessary for DispImage). An op-
timized color lookup table is generated for each image. Slower than ’television’. Disadvantages: Hard
color boundaries (no dithering). Different color lookup table for every image.
’grid_scan_floyd_steinberg’ grid-scan with Floyd-Steinberg dithering for smooth color boundaries.
’median_cut’ median-cut algorithm for RGB images (three components necessary for DispImage). Sim-
ilar to grid-scan. Disadvantages: Hard color boundaries (no dithering). Different color lookup table for
every image.
’median_cut_floyd_steinberg’ median-cut algorithm with Floyd-Steinberg dithering for smooth color
boundaries.
• Vector field images:
[’vector_field’, Step, MinLength, ScaleLength ]
’vector_field’ Output a vector field. In this mode, a circle is drawn for each vector at the position of
the pixel. Furthermore, a line segment is drawn with the current vector. The step size for drawing
the vectors, i.e., the distance between the drawn vectors, can be set with the parameter Step. Short
vectors can be suppressed with the third parameter value (MinLength). The fourth parameter value
scales the vector length. Note that setting ’vector_field’ only changes the internal parameters
Step, MinLength, and ScaleLength; the current display mode is not changed.
Vector field images are always displayed as vector field, no matter which mode is selected with
SetPaint.
[’vector_field’,16,2,3 ] Output of every 16th vector that is longer than 2 pixels. Each vector is multiplied
by 3 for output.
Attention
• Display of color images (’television’, ’grid_scan’, etc.) changes the color lookup tables.
• If a wrong color mode is set, the error message may not appear until the DispImage call.
• Grayvalue output may be influenced by region output parameters. This can yield unexpected results.
Parameter
read_image(Image,’fabrik’)
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandle)
query_paint(WindowHandle,Modi)
fwrite_string([’available gray value modes: ’,Modi])
fnew_line()
disp_image(Image,WindowHandle)
get_mbutton(WindowHandle,_,_,_)
set_color(WindowHandle,’red’)
set_draw(WindowHandle,’margin’)
set_paint(WindowHandle,’histogram’)
disp_image(Image,WindowHandle)
set_color(WindowHandle,’blue’)
set_paint(WindowHandle,[’histogram’,100,100,3])
disp_image(Image,WindowHandle)
set_color(WindowHandle,’yellow’)
set_paint(WindowHandle,[’row’,100])
disp_image(Image,WindowHandle)
get_mbutton(WindowHandle,_,_,_)
clear_window(WindowHandle)
set_paint(WindowHandle,[’contourline’,10,1])
disp_image(Image,WindowHandle)
set_lut(WindowHandle,’color’)
get_mbutton(WindowHandle,_,_,_)
clear_window(WindowHandle)
set_part(WindowHandle,100,100,300,300)
set_paint(WindowHandle,’3D-plot’)
disp_image(Image,WindowHandle).
Result
SetPaint returns 2 (H_MSG_TRUE) if the parameter is correct and the window is valid. Otherwise an exception
handling is raised.
Parallelization Information
SetPaint is reentrant, local, and processed without parallelization.
Possible Predecessors
QueryPaint, GetPaint
Possible Successors
DispImage
See also
GetPaint, QueryPaint, DispImage, SetShape, SetRgb, SetColor, SetGray
Module
Foundation
row1 = column1 = row2 = column2 = -1: The window size is chosen as the image part, i.e. no zooming of
the image will be performed.
row1, column1 > -1 and row2 = column2 = -1: The size of the last displayed image (in this window) is
chosen as the image part, i.e. the image can be displayed completely in the window. For this the image
will be zoomed if necessary.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window_id.
. row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y ; HTuple (int / long)
Row of the upper left corner of the chosen image part.
Default Value : 0
. column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x ; HTuple (int / long)
Column of the upper left corner of the chosen image part.
Default Value : 0
. row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.y ; HTuple (int / long)
Row of the lower right corner of the chosen image part.
Default Value : -1
Restriction : (Row2 ≥ Row1) ∨ (Row2 = -1)
. column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.x ; HTuple (int / long)
Column of the lower right corner of the chosen image part.
Default Value : -1
Restriction : (Column2 ≥ Column1) ∨ (Column2 = -1)
Example (Syntax: HDevelop)
get_system(’width’,Width)
get_system(’height’,Height)
set_part(WindowHandle,0,0,Height-1,Width-1)
disp_image(Image,WindowHandle)
draw_rectangle1(WindowHandle:Row1,Column1,Row2,Column2)
set_part(WindowHandle,Row1,Column1,Row2,Column2)
disp_image(Image,WindowHandle).
Result
SetPart returns 2 (H_MSG_TRUE) if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
SetPart is reentrant and processed without parallelization.
Possible Predecessors
GetPart
Possible Successors
SetPartStyle, DispImage, DispRegion
Alternatives
AffineTransImage
See also
GetPart, SetPartStyle, DispRegion, DispImage, DispColor
Module
Foundation
and 0 to 255 color images with 8 bit planes. It is different from the ’pixel’ ("picture element") in image processing.
Therefore HALCON distinguishes between pixel and image element (or grayvalue).
The current value can be queried with GetPixel.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window_id.
. pixel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer(-array) ; HTuple (int / long)
Color lookup table index.
Default Value : 128
Typical range of values : 0 ≤ Pixel ≤ 255
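A hedged sketch (valid window and region assumed; names are illustrative): the value passed is an index into the color lookup table, not a grayvalue.

```hdevelop
* set lookup-table index 200 as the 'color' for region output
set_pixel(WindowHandle,200)
disp_region(Region,WindowHandle)
```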
Result
SetPixel returns 2 (H_MSG_TRUE) if the parameter is correct and the window is valid. Otherwise an exception
handling is raised.
Parallelization Information
SetPixel is reentrant, local, and processed without parallelization.
Possible Predecessors
GetPixel
Possible Successors
DispImage, DispRegion
Alternatives
SetRgb, SetColor, SetHsi
See also
GetPixel, SetLut, DispRegion, DispImage, DispColor
Module
Foundation
’original’: The shape is displayed unchanged. Nevertheless, modifications via parameters like SetLineWidth or
SetLineApprox can take place. This also holds for all other modes.
’outer_circle’: Each region is displayed by the smallest surrounding circle. (See SmallestCircle.)
’inner_circle’: Each region is displayed by the largest included circle. (See InnerCircle.)
’ellipse’: Each region is displayed by an ellipse with the same moments and orientation (See EllipticAxis.)
’rectangle1’: Each region is displayed by the smallest surrounding rectangle parallel to the coordinate axes. (See
SmallestRectangle1.)
’rectangle2’: Each region is displayed by the smallest surrounding rectangle. (See SmallestRectangle2.)
’convex’: Each region is displayed by its convex hull. (See Convexity.)
’icon’: Each region is displayed by the icon set with SetIcon, placed at the center of gravity.
Attention
Caution is advised for grayvalue output procedures with output parameter settings that use region output,
e.g. DispImage with SetPaint(WindowHandle,’histogram’) and
SetShape(WindowHandle,’convex’). In that case the convex hull of the grayvalue histogram is displayed.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window_id.
. shape (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Region output mode.
Default Value : "original"
List of values : Shape ∈ {"original", "convex", "outer_circle", "inner_circle", "rectangle1", "rectangle2",
"ellipse", "icon"}
Example (Syntax: HDevelop)
read_image(Image,’fabrik’)
regiongrowing(Image,Seg,5,5,6,100)
set_colored(WindowHandle,12)
set_shape(WindowHandle,’rectangle2’)
disp_region(Seg,WindowHandle).
Result
SetShape returns 2 (H_MSG_TRUE) if the parameter is correct and the window is valid. Otherwise an exception
handling is raised.
Parallelization Information
SetShape is reentrant and processed without parallelization.
Possible Predecessors
SetIcon, QueryShape, GetShape
Possible Successors
DispRegion
See also
GetShape, QueryShape, DispRegion
Module
Foundation
4.7 Text
static void HOperatorSet.GetFont ( HTuple windowHandle,
out HTuple font )
string HWindow.GetFont ( )
Get the current font.
GetFont queries the name of the font used in the output window. The font is used by the operators
WriteString, ReadString etc. The font is set by the operator SetFont. Text windows as well as windows
for image display use fonts. Both types of windows have a default font that can be modified with SetSystem
(’default_font’,Fontname) prior to opening the window. A list of all available fonts can be obtained
using QueryFont.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window identifier.
. font (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Name of the current font.
Example (Syntax: HDevelop)
get_font(WindowHandle,CurrentFont)
set_font(WindowHandle,MyFont)
write_string(WindowHandle,[’The name of my Font is:’,MyFont])
new_line(WindowHandle)
set_font(WindowHandle,CurrentFont)
Result
GetFont returns 2 (H_MSG_TRUE).
Parallelization Information
GetFont is reentrant and processed without parallelization.
Possible Predecessors
OpenWindow, OpenTextwindow, QueryFont
Possible Successors
SetFont
See also
SetFont, QueryFont, OpenWindow, OpenTextwindow, SetSystem
Module
Foundation
See also
SetTposition, SetFont
Module
Foundation
string HWindow.GetTshape ( )
Get the shape of the text cursor.
GetTshape queries the shape of the text cursor for the output window. A new cursor shape is set by the operator
SetTshape.
A text cursor marks the current position for text output (it can also be invisible). It is different from the mouse
cursor (although both are called ’cursor’ where the context rules out confusion). The available
shapes for the text cursor can be queried with QueryTshape.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window identifier.
. textCursor (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Name of the current text cursor.
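A hedged sketch of saving and restoring the cursor shape (a valid window handle is assumed; available shape names come from QueryTshape):

```hdevelop
get_tshape(WindowHandle,OldShape)
query_tshape(WindowHandle,Shapes)
* switch to the first available shape, then restore the original one
set_tshape(WindowHandle,Shapes[0])
set_tshape(WindowHandle,OldShape)
```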
Result
GetTshape returns 2 (H_MSG_TRUE) if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
GetTshape is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, OpenTextwindow, SetFont
Possible Successors
SetTshape, SetTposition, WriteString, ReadString, ReadChar
See also
SetTshape, QueryTshape, WriteString, ReadString
Module
Foundation
HTuple HWindow.QueryFont ( )
Query the available fonts.
QueryFont queries the fonts available for text output in the output window. They can be set with the operator
SetFont. Fonts are used by the operators WriteString, ReadChar, ReadString and NewLine.
Attention
The available fonts may differ considerably from machine to machine. Therefore QueryFont will return different
fonts on different machines.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window identifier.
. font (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; HTuple (string)
Tuple with available font names.
Example (Syntax: HDevelop)
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandle)
set_check(’~text’)
query_font(WindowHandle,Fontlist)
set_color(WindowHandle,’white’)
for i=0 to |Fontlist|-1 by 1
set_font(WindowHandle,Fontlist[i])
write_string(WindowHandle,Fontlist[i])
new_line(WindowHandle)
endfor
Result
QueryFont returns 2 (H_MSG_TRUE).
Parallelization Information
QueryFont is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, OpenTextwindow
Possible Successors
SetFont, WriteString, ReadString, ReadChar
See also
SetFont, WriteString, ReadString, ReadChar, NewLine
Module
Foundation
HTuple HWindow.QueryTshape ( )
Query all shapes available for text cursors.
QueryTshape queries the available shapes of text cursors for the output window. The retrieved shapes can be
used by the operator SetTshape.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window identifier.
. textCursor (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; HTuple (string)
Names of the available text cursors.
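A minimal sketch (valid window handle assumed):

```hdevelop
query_tshape(WindowHandle,Shapes)
write_string(WindowHandle,['available cursor shapes: ',Shapes])
* use the first available cursor shape
set_tshape(WindowHandle,Shapes[0])
```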
Result
QueryTshape returns 2 (H_MSG_TRUE).
Parallelization Information
QueryTshape is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, OpenTextwindow
Possible Successors
SetTshape, WriteString, ReadString
See also
SetTshape, GetShape, SetTposition, WriteString, ReadString
Module
Foundation
Attention
The window has to be a text window.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window identifier.
. inString (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Default string (visible before input).
Default Value : ""
. length (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Maximum number of characters.
Default Value : 32
Restriction : Length > 0
. outString (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Read string.
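A hedged sketch (the window must be a text window; handle and position are illustrative):

```hdevelop
set_tposition(WindowHandle,24,10)
* show the default string 'none' and let the user edit up to 32 characters
read_string(WindowHandle,'none',32,UserInput)
new_line(WindowHandle)
write_string(WindowHandle,['you entered: ',UserInput])
```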
Result
ReadString returns 2 (H_MSG_TRUE) if the text window is valid and a string of maximal length fits within
the right window boundary. Otherwise an exception handling is raised.
Parallelization Information
ReadString is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenTextwindow, SetFont
Alternatives
ReadChar, FreadString, FreadChar
See also
SetTposition, NewLine, OpenTextwindow, SetFont, SetColor
Module
Foundation
-FontName-Height-Width-Italic-Underlined-Strikeout-Bold-CharSet-
where “Italic”, “Underlined”, “Strikeout” and “Bold” can take the values 1 and 0 to activate or de-
activate the corresponding feature. “Charset” can be used to select the character set, if it differs
from the default one. You can use the names of the defines (ANSI_CHARSET, BALTIC_CHARSET,
CHINESEBIG5_CHARSET, DEFAULT_CHARSET, EASTEUROPE_CHARSET, GB2312_CHARSET,
GREEK_CHARSET, HANGUL_CHARSET, MAC_CHARSET, OEM_CHARSET, RUSSIAN_CHARSET,
SHIFTJIS_CHARSET, SYMBOL_CHARSET, JOHAB_CHARSET, HEBREW_CHARSET, ARA-
BIC_CHARSET) or the integer value.
All parameters besides “FontName” and “Height” are optional; however, it is only possible to omit parameters from
the end of the string. A minus sign is required at the beginning and at the end of the string. To use the default
setting, a * can be used for the corresponding feature. Examples:
• -Arial-10-*-1-*-*-1-ANSI_CHARSET-
• -Arial-10-*-1-*-*-1-
• -Arial-10-
Please refer to the Windows documentation (Fonts and Text in the MSDN) for a detailed discussion.
On UNIX environments the font is specified by a string with the following components:
-FOUNDRY-FAMILY_NAME-WEIGHT_NAME-SLANT-SETWIDTH_NAME-ADD_STYLE_NAME-PIXEL_SIZE
-POINT_SIZE-RESOLUTION_X-RESOLUTION_Y-SPACING-AVERAGE_WIDTH-CHARSET_REGISTRY
-CHARSET_ENCODING,
where FOUNDRY identifies the organisation that supplied the font. The actual name of the font is given in
FAMILY_NAME (e.g. ’courier’). WEIGHT_NAME describes the typographic weight of the font in human-readable
form (e.g. ’medium’, ’semibold’, ’demibold’, or ’bold’). SLANT is one of the following codes:
• r for Roman
• i for Italic
• o for Oblique
• ri for Reverse Italic
• ro for Reverse Oblique
• ot for Other
SETWIDTH_NAME describes the proportionate width of the font (e.g. ’normal’). ADD_STYLE_NAME iden-
tifies additional typographic style information (e.g. ’serif’ or ’sans serif’) and is empty in most cases.
The PIXEL_SIZE is the height of the font on the screen in pixels, while POINT_SIZE is the print size the font
was designed for. RESOLUTION_Y and RESOLUTION_X contain the vertical and horizontal resolution of the
font. SPACING may be one of the following three codes:
• p for Proportional,
• m for Monospaced, or
• c for CharCell.
The AVERAGE_WIDTH is the mean width of the characters in the font. The character set encoded in the font
is described in CHARSET_REGISTRY and CHARSET_ENCODING (e.g. ISO8859-1).
An example of a valid string for font would be
’-adobe-courier-medium-r-normal--12-120-75-75-m-70-iso8859-1’,
which is a 12-pixel, medium-weight courier font. As on Windows systems, not all fields have to be specified; a *
can be used instead:
’-adobe-courier-medium-r-*--12-*-*-*-*-*-*-*’.
Please refer to "X Logical Font Description Conventions" for detailed information on individual parameters.
Attention
The available fonts may differ considerably between machines. It is therefore advisable to use wildcards, font
tables, and/or the operator QueryFont.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window identifier.
. font (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Name of new font.
Example (Syntax: HDevelop)
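A hedged sketch; because font names differ between machines, the name is taken from QueryFont rather than hard-coded:

```hdevelop
query_font(WindowHandle,Fonts)
* pick the first available font; a wildcard font string could be used instead
set_font(WindowHandle,Fonts[0])
write_string(WindowHandle,'text in the newly set font')
```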
Result
SetFont returns 2 (H_MSG_TRUE) if the font name is correct. Otherwise an exception handling is raised.
Parallelization Information
SetFont is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, OpenTextwindow
Possible Successors
QueryFont
See also
GetFont, QueryFont, OpenTextwindow, OpenWindow
Module
Foundation
Possible Predecessors
OpenWindow, OpenTextwindow
Possible Successors
SetTshape, WriteString, ReadString
Alternatives
NewLine
See also
ReadString, SetTshape, WriteString
Module
Foundation
WriteString can output all three types of data used in HALCON. The conversion to a string is guided by the
following rules:
4.8 Window
static void HOperatorSet.ClearRectangle ( HTuple windowHandle,
HTuple row1, HTuple column1, HTuple row2, HTuple column2 )
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window identifier.
. row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y(-array) ; HTuple (int / long)
Line index of upper left corner.
Default Value : 10
Typical range of values : 0 ≤ Row1 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x(-array) ; HTuple (int / long)
Column index of upper left corner.
Default Value : 10
Typical range of values : 0 ≤ Column1 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.y(-array) ; HTuple (int / long)
Row index of lower right corner.
Default Value : 118
Typical range of values : 0 ≤ Row2 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
Restriction : Row2 > Row1
. column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.x(-array) ; HTuple (int / long)
Column index of lower right corner.
Default Value : 118
Typical range of values : 0 ≤ Column2 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
Restriction : Column2 ≥ Column1
Example (Syntax: HDevelop)
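A hedged sketch (window and image assumed to exist): a rectangle drawn by the user is cleared afterwards.

```hdevelop
disp_image(Image,WindowHandle)
* let the user draw the rectangle to be cleared
draw_rectangle1(WindowHandle,Row1,Column1,Row2,Column2)
clear_rectangle(WindowHandle,Row1,Column1,Row2,Column2)
```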
Result
If an output window exists and the specified parameters are correct ClearRectangle returns 2
(H_MSG_TRUE). If necessary an exception handling is raised.
Parallelization Information
ClearRectangle is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, SetDraw, SetColor, SetColored, SetLineWidth, SetRgb, SetHsi,
DrawRectangle1
Alternatives
ClearWindow, DispRectangle1
See also
OpenWindow, OpenTextwindow
Module
Foundation
ClearWindow deletes all entries in the output window. The window (background and edge) is reset to its original
state. Parameters assigned to this window (e.g., with SetColor, SetPaint, etc.) remain unmodified.
Parameter
clear_window(WindowHandle).
Result
If the output window is valid ClearWindow returns 2 (H_MSG_TRUE). If necessary an exception handling is
raised.
Parallelization Information
ClearWindow is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, OpenTextwindow
Alternatives
ClearRectangle, DispRectangle1
See also
OpenWindow, OpenTextwindow
Module
Foundation
read_image(Image,’affe’)
open_window(0,0,-1,-1,’root’,’buffer’,’’,WindowHandle)
disp_image(Image,WindowHandle)
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandleDestination)
repeat()
get_mbutton(WindowHandleDestination,Row,Column,Button)
copy_rectangle(WindowHandle,WindowHandleDestination,20,90,120,390,Row,Column)
until(Button = 1)
close_window(WindowHandleDestination)
close_window(WindowHandle)
clear(Image).
Result
If the output window is valid and the specified parameters are correct CopyRectangle returns 2
(H_MSG_TRUE). If necessary an exception handling is raised.
Parallelization Information
CopyRectangle is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, OpenTextwindow
Possible Successors
CloseWindow
Alternatives
MoveRectangle, SlideImage
See also
OpenWindow, OpenTextwindow
Module
Foundation
DumpWindow writes the contents of the window to a file. The file can then be processed further, e.g., by
printers or other programs. The contents are prepared for the specified device (device), i.e., they are
formatted in such a way that the file can either be printed directly or processed further by a graphics
program.
To transform grayvalues, the current color lookup table of the window is used, i.e., the values set with
SetLutStyle are not taken into account.
Possible values for device
Attention
Under UNIX, the graphics window must be completely visible on the root window, because otherwise the contents
of the window cannot be read due to limitations in X Windows. If larger graphical displays are to be written to a
file, the window type ’pixmap’ can be used.
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window identifier.
. device (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; HTuple (string / int / long)
Name of the target device or of the graphic format.
Default Value : "postscript"
List of values : Device ∈ {"postscript", "tiff", "bmp", "jpeg", "jp2", "png", "jpeg 100", "jpeg 80", "jpeg 60",
"jpeg 40", "jpeg 20", "jp2 50", "jp2 40", "jp2 30", "jp2 20", "png best", "png fastest", "png none"}
. fileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; HTuple (string)
File name (without extension).
Default Value : "halcon_dump"
Example (Syntax: HDevelop)
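A hedged sketch using the default device and file name (window and image assumed valid):

```hdevelop
disp_image(Image,WindowHandle)
* 'halcon_dump' is the file name without extension, as documented above
dump_window(WindowHandle,'postscript','halcon_dump')
```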
Result
If the appropriate window is valid and the specified parameters are correct DumpWindow returns 2
(H_MSG_TRUE). If necessary an exception handling is raised.
Parallelization Information
DumpWindow is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, SetDraw, SetColor, SetColored, SetLineWidth, OpenTextwindow,
DispRegion
Possible Successors
SystemCall
See also
OpenWindow, OpenTextwindow, SetSystem, DumpWindowImage
Module
Foundation
GetOsWindowHandle returns the operating system window handle of the HALCON window windowHandle
in OSWindowHandle. Under UNIX, additionally the operating system display handle is returned in
OSDisplayHandle. The operating system window handle can be used to access the window using func-
tions from the operating system, e.g., to draw in a user-defined manner into the window. Under Windows,
OSWindowHandle can be cast to a variable of type HWND. Under UNIX systems, OSWindowHandle can
be cast into a variable of type Window, while OSDisplayHandle can be cast into a variable of type Display.
Parameter
/* Draw a line into a HALCON window under UNIX using X11 calls. */
#include "HalconC.h"
#include <X11/X.h>
#include <X11/Xlib.h>
/* Draw a line into a HALCON window under Windows using GDI calls. */
#include "HalconC.h"
#include "windows.h"
LOGBRUSH logbrush;
POINT point;
static DWORD dashes[] = { 20, 20 };
Result
If the window is valid GetOsWindowHandle returns 2 (H_MSG_TRUE). Otherwise, an exception handling is
raised.
Parallelization Information
GetOsWindowHandle is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, OpenTextwindow
Module
Foundation
Parameter
Possible Predecessors
OpenWindow, SetDraw, SetColor, SetColored, SetLineWidth, OpenTextwindow
See also
OpenWindow, SetWindowAttr
Module
Foundation
open_window(100,100,200,200,’root’,’visible’,’’,WindowHandle)
fwrite_string(’Move the window with the mouse!’)
fnew_line()
repeat()
get_mbutton(WindowHandle,_,_,Button)
get_window_extents(WindowHandle,Row,Column,Width,Height)
fwrite_string([’(’,Row,’,’,Column,’)’])
fnew_line()
until(Button = 4).
Result
If the window is valid, GetWindowExtents returns 2 (H_MSG_TRUE). If necessary, an exception is
raised.
Parallelization Information
GetWindowExtents is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, SetDraw, SetColor, SetColored, SetLineWidth, OpenTextwindow
See also
SetWindowExtents, OpenWindow, OpenTextwindow
Module
Foundation
string HWindow.GetWindowType ( )
Get the window type.
GetWindowType determines the type of the output device (i.e., the graphics software) for the window.
You may query the available types of output devices with the operator QueryWindowType. A typical use of
GetWindowType is the development of machine-independent software. Possible values
are:
Parameter
open_window(100,100,200,200,’root’,’visible’,’’,WindowHandle)
get_window_type(WindowHandle,WindowType)
fwrite_string([’Window type: ’,WindowType])
fnew_line().
Result
If the window is valid, GetWindowType returns 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
GetWindowType is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, OpenTextwindow
See also
QueryWindowType, SetWindowType, GetWindowPointer3, OpenWindow, OpenTextwindow
Module
Foundation
Result
If the window is valid and the specified parameters are correct, MoveRectangle returns 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Parallelization Information
MoveRectangle is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, OpenTextwindow
Alternatives
CopyRectangle
See also
OpenWindow, OpenTextwindow
Module
Foundation
HALCON:
set_color(WindowHandle,"green");
disp_region(WindowHandle,region);
Windows NT:
HPEN penold;
HPEN penGreen = CreatePen(PS_SOLID,1,RGB(0,255,0));
penold = (HPEN)SelectObject(WINHDC,penGreen);
disp_region(WindowHandle,region);
Interactive operators, for example DrawRegion, DrawCircle or GetMbutton, cannot be used in this
window. The following operators can be used:
• Output of gray values: SetPaint, SetComprise, ( SetLut and SetLutStyle after output)
• Regions: SetColor, SetRgb, SetHsi, SetGray, SetPixel, SetShape, SetLineWidth,
SetInsert, SetLineStyle, SetDraw
• Image part: SetPart
• Text: SetFont
You may query the currently set values by calling operators like GetShape. As some parameters are determined
by the hardware (resolution/colors), you may query the currently available resources by calling operators like
QueryColor.
The parameter WINHWnd is used to pass the window handle of the Windows NT window, in which output should
be done. The parameter WINHDC is used to pass the device context of the window WINHWnd. This device context
is used in the output routines of HALCON.
The origin of the coordinate system of the window resides in the upper left corner (coordinates: (0,0)). The row
index grows downward (maximum: height-1), the column index grows to the right (maximum: width-1).
You may use the value -1 for the parameters width and height. In that case the corresponding value is chosen
automatically. This is important in particular if the aspect ratio of the pixels is not 1.0 (see SetSystem). If one
of the two parameters is set to -1, it is computed from the other one and the aspect ratio of the
pixels. If both parameters are set to -1, they are set to the current image format.
The position and size of a window may change during the runtime of a program. This may be achieved by calling
SetWindowExtents, but also through external influences (window manager). For the latter case the operator
GetWindowExtents is provided.
Opening a window causes the assignment of a default font. It is used in connection with operators like
WriteString, and you may change it by calling SetFont after OpenWindow. Alternatively,
you may specify a default font by calling SetSystem(’default_font’,<Fontname>)
before opening the window (and all following windows; see also QueryFont).
You may set the color of graphics and font, which is used for output operators like DispRegion or
DispCircle, by calling SetRgb, SetHsi, SetGray or SetPixel. Calling SetInsert specifies
how graphics is combined with the existing content of the window. For example, after calling
SetInsert(’not’) you can erase text again by writing it a second time at the same position.
The content of the window is not saved if other windows overlap it. This must be handled by the program
code that manages the Windows NT window in the calling program.
For graphical output ( DispImage, DispRegion, etc.) you may adjust the window by calling the operator
SetPart in order to display a logical part of the image format. In particular, this implies that only this
part (appropriately scaled) of images and regions is displayed. Before you close the external window, you have to close
the HALCON window.
Steps to use new_extern_window:
Attention
Note that parameters as row, column, width and height are constrained through the output device, i.e., the
size of the Windows NT desktop.
Parameter
HTuple m_tHalconWindow ;
Hobject m_objImage ;
WM_CREATE:
/* here you should create your extern halcon window*/
HTuple tWnd, tDC ;
::set_check("~father") ;
tWnd = (INT)((INT*)&m_hWnd) ;
tDC = (INT)(INT*)GetWindowDC() ;
::new_extern_window(tWnd, tDC, 0, 0, sizeTotal.cx, sizeTotal.cy, &m_tHalconWindow) ;
::set_check("father") ;
WM_PAINT:
/* here you can draw halcon objects */
long l = 0 ;
if (m_tHalconWindow != -1) {
/* don't forget to set the dc !! */
HTuple tDC((INT)(INT*)&pDC->m_hDC) ;
HTuple tDCNull((INT)(INT*)&l) ;
::set_window_dc(m_tHalconWindow,tDC) ;
::disp_obj(pDoc->m_objImage, m_tHalconWindow) ;
/* release the graphic objects */
::set_window_dc(m_tHalconWindow, tDCNull) ;
}
WM_CLOSE:
/* close the halcon window */
if (m_tHalconWindow != -1) {
::close_window(m_tHalconWindow) ;
}
Result
If the values of the specified parameters are correct, NewExternWindow returns 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Parallelization Information
NewExternWindow is reentrant, local, and processed without parallelization.
Possible Predecessors
ResetObjDb
Possible Successors
SetColor, QueryWindowType, GetWindowType, SetWindowType, GetMposition,
SetTposition, SetTshape, SetWindowExtents, GetWindowExtents, QueryColor,
SetCheck, SetSystem
Alternatives
OpenWindow, OpenTextwindow
See also
OpenWindow, DispRegion, DispImage, DispColor, SetLut, QueryColor, SetColor,
SetRgb, SetHsi, SetPixel, SetGray, SetPart, SetPartStyle, QueryWindowType,
GetWindowType, SetWindowType, GetMposition, SetTposition, SetWindowExtents,
GetWindowExtents, SetWindowAttr, SetCheck, SetSystem
Module
Foundation
<Host>:0.0.
For windows of type ’X-Window’ and ’WIN32-Window’ the parameter fatherWindow can be used to
determine the father window for the window to be opened. In case the control ’father’ is set via SetCheck,
fatherWindow relates to the ID of a HALCON window; otherwise (SetCheck(’~father’)) it relates to the ID
of an operating system window. If fatherWindow is passed the value 0 or ’root’, then under Windows and Unix
the desktop and the root window become the father window, respectively. In this case, the value of the control
’father’ (set via SetCheck) is irrelevant.
The position and size of a window may change during the runtime of a program. This may be achieved by calling
SetWindowExtents, but also through external influences (window manager). In the latter case the operator
GetWindowExtents is provided.
Opening a window causes the assignment of a so-called default font. It is used in connection with
operators like WriteString, and you may change it by calling SetFont after
OpenTextwindow. Alternatively, you may specify a default font by calling
SetSystem(’default_font’,<Fontname>) before opening the window (and all following windows; see
also QueryFont).
You may set the color of the font ( WriteString, ReadString) by calling SetColor, SetRgb, SetHsi,
SetGray or SetPixel. Calling SetInsert specifies how the text or graphics is combined
with the existing content of the window. For example, after calling SetInsert(’not’) you can
erase text again by writing it a second time at the same position.
Normally, every output (e.g., WriteString, DispRegion, DispCircle, etc.) in a window is terminated
by a "flush". This causes the data to be fully visible on the display after the output operator terminates. However,
this is not necessary in all cases, in particular if output is performed permanently or a mouse procedure
is active. In these cases it is more efficient to buffer the data until enough is available. You may
switch off the flushing by calling SetSystem(’flush_graphic’,’false’).
The content of windows is saved (if this is supported by the driver software); i.e., it is preserved even
if the window is hidden by other windows. However, this is not necessary in all cases: If you use a textual
window, e.g., only as a parent window for other windows, you may suppress this backup mechanism and thereby
save the required memory. You achieve this by calling SetSystem
(’backing_store’,’false’) before opening the window.
Difference: graphical window - textual window
• In contrast to graphical windows ( OpenWindow), you may specify more parameters (color, edge) for a
textual window while opening it.
• Only textual windows can be used for the input of user data ( ReadString).
• Using textual windows, the output of images, regions and graphics is "clipped" at the edges, whereas
graphical windows "zoom" the output to the window size.
• The coordinate system (e.g., with GetMbutton or GetMposition) consists of display coordinates,
independent of the image size. The maximum coordinates are equal to the size of the window minus 1. In
contrast, graphical windows ( OpenWindow) always use a coordinate system that corresponds to
the image format.
The parameter mode specifies the mode of the window. It can have the following values:
’visible’: Normal mode for textual windows: The window is created according to the parameters and all inputs
and outputs are possible.
’invisible’: Invisible windows are not shown on the display. Parameters like row, column, borderWidth,
borderColor, backgroundColor and fatherWindow do not have any meaning. Output to these
windows has no effect. Input ( ReadString, mouse, etc.) is not possible. You may use these windows
to query representation parameters for an output device without opening a (visible) window. Typical queries
are, e.g., QueryColor and GetStringExtents.
’transparent’: These windows are transparent: the window itself is not visible (edge and background), but
all the other operations are possible and all output is displayed. Parameters like borderColor and
backgroundColor do not have any meaning. A common use for this mode is the creation of mouse
sensitive regions.
’buffer’: These windows are also invisible. The output of images, regions and graphics is not visible on
the display, but is stored in memory. Parameters like row, column, borderWidth, borderColor,
backgroundColor and fatherWindow do not have any meaning. You may use buffer windows to
prepare output (in the background) and finally copy it with CopyRectangle into a visible window. Another
usage might be the rapid processing of image regions during interactive manipulations. Textual input and
mouse interaction are not possible in this mode.
Attention
Keep in mind that parameters like row, column, width and height are restricted by the output
device. If a father window (fatherWindow <> ’root’) is specified, the coordinates are relative to this window.
Parameter
open_textwindow(0,0,900,600,1,’black’,’slate blue’,’root’,’visible’,
’’,Father)
open_textwindow(10,10,300,580,3,’red’,’blue’,Father,’visible’,
’’,WindowHandle)
open_window(10,320,570,580,Father,’visible’,’’,WindowHandle)
set_color(WindowHandle,’red’)
read_image(Image,’affe’)
disp_image(Image,WindowHandle)
repeat()
get_mposition(WindowHandle,Row,Column,Button)
get_grayval(Image,Row,Column,1,Gray)
write_string(WindowHandle,[’ Position (’,Row,’,’,Column,’) ’])
write_string(WindowHandle,[’Gray value (’,Gray,’) ’])
new_line(WindowHandle)
until(Button = 4)
close_window(WindowHandle)
clear_obj(Image).
Result
If the values of the specified parameters are correct, OpenTextwindow returns 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Parallelization Information
OpenTextwindow is reentrant, local, and processed without parallelization.
Possible Predecessors
ResetObjDb
Possible Successors
SetColor, QueryWindowType, GetWindowType, SetWindowType, GetMposition,
SetTposition, SetTshape, SetWindowExtents, GetWindowExtents, QueryColor,
SetCheck, SetSystem
Alternatives
OpenWindow
See also
WriteString, ReadString, NewLine, GetStringExtents, GetTposition, SetColor,
QueryWindowType, GetWindowType, SetWindowType, GetMposition, SetTposition,
SetTshape, SetWindowExtents, GetWindowExtents, QueryColor, SetCheck, SetSystem
Module
Foundation
public HWindow ( int row, int column, int width, int height,
HTuple fatherWindow, string mode, string machine)
public HWindow ( int row, int column, int width, int height,
int fatherWindow, string mode, string machine)
void HWindow.OpenWindow ( int row, int column, int width, int height,
HTuple fatherWindow, string mode, string machine )
void HWindow.OpenWindow ( int row, int column, int width, int height,
int fatherWindow, string mode, string machine )
window with the logical window number windowHandle and remain assigned to the window until they are
overwritten. You may use the following configuration operators:
• Output of gray values: SetPaint, SetComprise, ( SetLut and SetLutStyle after output)
• Regions: SetColor, SetRgb, SetHsi, SetGray, SetPixel, SetShape, SetLineWidth,
SetInsert, SetLineStyle, SetDraw
• Image clipping: SetPart
• Text: SetFont
You may query the currently set values by calling operators like GetShape. As some parameters are determined
by the hardware (resolution/colors), you may query the currently available resources by calling QueryColor.
The origin of the coordinate system of the window resides in the upper left corner (coordinates: (0,0)). The row
index grows downward (maximum: height-1), the column index grows to the right (maximum: width-1).
Keep in mind that the range of the coordinate system is independent of the window size; it is specified
only through the image format (see ResetObjDb).
The parameter machine indicates the name of the computer that has to open the window. In the case of an
X window, with TCP/IP only the name is given; with DECnet a colon is additionally appended to the name. The
"server" and the "screen" are not specified. If the empty string is passed, the environment variable DISPLAY
is used; it indicates the target computer. The name is given in the usual syntax
<Host>:0.0.
For windows of type ’X-Window’ and ’WIN32-Window’ the parameter fatherWindow can be used to
determine the father window for the window to be opened. In case the control ’father’ is set via SetCheck,
fatherWindow relates to the ID of a HALCON window; otherwise (SetCheck(’~father’)) it relates to the ID
of an operating system window. If fatherWindow is passed the value 0 or ’root’, then under Windows and Unix
the desktop and the root window become the father window, respectively. In this case, the value of the control
’father’ (set via SetCheck) is irrelevant.
You may use the value -1 for the parameters width and height. In that case the corresponding value is
chosen automatically. This is important in particular if the aspect ratio of the pixels is not 1.0 (see SetSystem):
If one of the two parameters is set to -1, it is computed from the other one and the aspect ratio
of the pixels. If both parameters are set to -1, they are set to the maximum image format that is currently
used (further information about the currently used maximum image format can be found in the description of
GetSystem using ’width’ or ’height’).
The position and size of a window may change during the runtime of a program. This may be achieved by calling
SetWindowExtents, but also through external influences (window manager). In the latter case the operator
GetWindowExtents is provided.
Opening a window causes the assignment of a so-called default font. It is used in connection with
operators like WriteString, and you may change it by calling SetFont after
OpenWindow. Alternatively, you may specify a default font by calling SetSystem
(’default_font’,<Fontname>) before opening the window (and all following windows; see also
QueryFont).
You may set the color of graphics and font, which is used for output operators like DispRegion or
DispCircle, by calling SetRgb, SetHsi, SetGray or SetPixel. Calling SetInsert specifies
how graphics is combined with the existing content of the window. For example, after calling
SetInsert(’not’) you can erase text again by writing it a second time at the same position.
Normally, every output (e.g., DispImage, DispRegion, DispCircle, etc.) in a window is terminated by
a so-called "flush". This causes the data to be fully visible on the display after the output operator terminates.
However, this is not necessary in all cases, in particular if output is performed permanently or a mouse
procedure is active. In these cases it is more efficient to buffer the data until enough is available.
You may switch off the flushing by calling SetSystem(’flush_graphic’,’false’).
The content of windows is saved (if this is supported by the driver software); i.e., it is preserved even if the
window is hidden by other windows. However, this is not necessary in all cases: If the content of a window is
permanently rebuilt ( CopyRectangle), you may suppress this backup mechanism and thereby save
the required memory. This is done by calling SetSystem(’backing_store’,’false’) before opening
the window. In doing so you save not only memory but also computation time, which is significant for the output of
video clips (see CopyRectangle).
For graphical output ( DispImage, DispRegion, etc.) you may adjust the window by calling the operator
SetPart in order to display a logical part of the image format. In particular, this implies that only this
part (appropriately scaled) of images and regions is displayed.
Difference: graphical window - textual window
• For graphical windows the layout is not as flexible as for textual windows.
• Only textual windows can be used for the input of user data ( ReadString).
• During the output of images, regions and graphics, graphical windows perform a "zooming":
independent of the size and aspect ratio of the window, images are transformed such that they fill
the window completely. Textual windows, in contrast, ignore the size of the window (except that
clipping is performed if necessary).
• For graphical windows the coordinate system of the window corresponds to the coordinate system of
the image format. For textual windows, the coordinate system is always equal to the display coordinates,
independent of the image size.
The parameter mode determines the mode of the window. It may have the following values:
’visible’: Normal mode for graphical windows: The window is created according to the parameters and all input
and output are possible.
’invisible’: Invisible windows are not shown on the display. Parameters like row, column and
fatherWindow do not have any meaning. Output to these windows has no effect. Input ( ReadString,
mouse, etc.) is not possible. You may use these windows to query representation parameters for an
output device without opening a (visible) window. Typical queries are, e.g., QueryColor and
GetStringExtents.
’transparent’: These windows are transparent: the window itself is not visible (edge and background), but all
the other operations are possible and all output is displayed. A common use for this mode is the creation of
mouse sensitive regions.
’buffer’: These windows are also invisible. The output of images, regions and graphics is not visible on the
display, but is stored in memory. Parameters like row, column and fatherWindow do not have any
meaning. You may use buffer windows to prepare output (in the background) and finally copy it with
CopyRectangle into a visible window. Another usage might be the rapid processing of image regions
during interactive manipulations. Textual input and mouse interaction are not possible in this mode.
Attention
Keep in mind that parameters like row, column, width and height are constrained by the output
device. If you specify a father window (fatherWindow <> ’root’), the coordinates are relative to this window.
Parameter
open_window(0,0,400,-1,’root’,’visible’,’’,WindowHandle)
read_image(Image,’fabrik’)
disp_image(Image,WindowHandle)
write_string(WindowHandle,’File, fabrik.ima’)
new_line(WindowHandle)
get_mbutton(WindowHandle,_,_,_)
set_lut(WindowHandle,’temperature’)
set_color(WindowHandle,’blue’)
write_string(WindowHandle,’temperature’)
new_line(WindowHandle)
write_string(WindowHandle,’Draw Rectangle’)
new_line(WindowHandle)
draw_rectangle1(WindowHandle,Row1,Column1,Row2,Column2)
set_part(Row1,Column1,Row2,Column2)
disp_image(Image,WindowHandle)
new_line(WindowHandle).
Result
If the values of the specified parameters are correct, OpenWindow returns 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Parallelization Information
OpenWindow is reentrant, local, and processed without parallelization.
Possible Predecessors
ResetObjDb
Possible Successors
SetColor, QueryWindowType, GetWindowType, SetWindowType, GetMposition,
SetTposition, SetTshape, SetWindowExtents, GetWindowExtents, QueryColor,
SetCheck, SetSystem
Alternatives
OpenTextwindow
See also
DispRegion, DispImage, DispColor, SetLut, QueryColor, SetColor, SetRgb, SetHsi,
SetPixel, SetGray, SetPart, SetPartStyle, QueryWindowType, GetWindowType,
SetWindowType, GetMposition, SetTposition, SetWindowExtents, GetWindowExtents,
SetWindowAttr, SetCheck, SetSystem
Module
Foundation
Parameter
. windowTypes (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; HTuple (string)
Names of available window types.
Result
QueryWindowType always returns 2 (H_MSG_TRUE).
Parallelization Information
QueryWindowType is reentrant, local, and processed without parallelization.
Possible Predecessors
ResetObjDb
Module
Foundation
’border_width’ Width of the window border in pixels. Not implemented under Windows.
’border_color’ Color of the window border. Not implemented under Windows.
’background_color’ Background color of the window.
Attention
You have to call SetWindowAttr before calling OpenWindow.
Parameter
. attributeName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Name of the attribute that should be modified.
List of values : AttributeName ∈ {"border_width", "border_color", "background_color", "window_title"}
. attributeValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string / int / long)
Value of the attribute that should be set.
List of values : AttributeValue ∈ {0, 1, 2, "white", "black", "MyName", "default"}
Result
If the parameters are correct, SetWindowAttr returns 2 (H_MSG_TRUE). If necessary, an exception
is raised.
Parallelization Information
SetWindowAttr is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, SetDraw, SetColor, SetColored, SetLineWidth, OpenTextwindow
See also
OpenWindow, GetWindowAttr
Module
Foundation
hWnd = CreateWindow(...)
new_extern_window(hWnd, hDC, 0,0,400,-1,WindowHandle)
set_window_dc(WindowHandle, hDC)
read_image(Image,’fabrik’)
disp_image(Image,WindowHandle)
write_string(WindowHandle,’File, fabrik.ima’)
new_line(WindowHandle)
get_mbutton(WindowHandle,_,_,_)
set_lut(WindowHandle,’temperature’)
set_color(WindowHandle,’blue’)
write_string(WindowHandle,’temperature’)
new_line(WindowHandle)
write_string(WindowHandle,’Draw Rectangle’)
new_line(WindowHandle)
draw_rectangle1(WindowHandle,Row1,Column1,Row2,Column2)
set_part(Row1,Column1,Row2,Column2)
disp_image(Image,WindowHandle)
new_line(WindowHandle).
Result
If the values of the specified parameters are correct, SetWindowDc returns 2 (H_MSG_TRUE). If necessary, an
exception is raised.
Parallelization Information
SetWindowDc is reentrant, local, and processed without parallelization.
Possible Predecessors
NewExternWindow
Possible Successors
DispImage, DispRegion
See also
NewExternWindow, DispRegion, DispImage, DispColor, SetLut, QueryColor, SetColor,
SetRgb, SetHsi, SetPixel, SetGray, SetPart, SetPartStyle, QueryWindowType,
GetWindowType, SetWindowType, GetMposition, SetTposition, SetWindowExtents,
GetWindowExtents, SetWindowAttr, SetCheck, SetSystem
Module
Foundation
Possible Predecessors
OpenWindow, OpenTextwindow
See also
OpenWindow, OpenTextwindow, QueryWindowType, GetWindowType
Module
Foundation
read_image(Image,’fabrik’)
sobel_amp(Image,Amp,’sum_abs’,3)
open_window(0,0,-1,-1,’root’,’buffer’,’’,Buffer1)
disp_image(Amp,Buffer1)
sobel_dir(Image,Dir,’sum_abs’,3)
open_window(0,0,-1,-1,’root’,’buffer’,’’,Buffer2)
disp_image(Dir,Buffer2)
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandle)
slide_image(Buffer1,Buffer2,WindowHandle).
Result
If both windows exist and are valid, SlideImage returns 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Parallelization Information
SlideImage is reentrant, local, and processed without parallelization.
Possible Predecessors
OpenWindow, OpenTextwindow
Alternatives
CopyRectangle, GetMposition
See also
OpenWindow, OpenTextwindow, MoveRectangle
Module
Foundation
Image
5.1 Access
static void HOperatorSet.GetGrayval ( HObject image, HTuple row,
HTuple column, out HTuple grayval )
Hobject Bild;
char typ[128];
long width,height;
unsigned char *ptr;
read_image(&Bild,"fabrik");
get_image_pointer1(Bild,(long*)&ptr,typ,&width,&height);
Result
The operator GetImagePointer1 returns the value 2 (H_MSG_TRUE) if exactly one image was passed.
The behavior in case of empty input (no input images available) is set via the operator SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
GetImagePointer1 is reentrant and processed without parallelization.
Possible Predecessors
ReadImage
Alternatives
SetGrayval, GetGrayval, GetImagePointer3
See also
PaintRegion, PaintGray
Module
Foundation
Access to the image data pointer and the image data inside the smallest rectangle of the domain of the input image.
The operator GetImagePointer1Rect returns the pointer pixelPointer which points to the beginning of
the image data inside the smallest rectangle of the domain of image. verticalPitch corresponds to the width
of the input image image multiplied with the number of bytes per pixel (horizontalBitPitch / 8). width
and height correspond to the size of the smallest rectangle of the input region. horizontalBitPitch is the
horizontal distance (in bits) between two neighbouring pixels. bitsPerPixel is the number of used bits per
pixel. GetImagePointer1Rect is symmetrical to GenImage1Rect.
Attention
The operator GetImagePointer1Rect should only be used for entry into newly created images, since other-
wise the gray values of other images might be overwritten (see relational structure).
Parameter
Hobject image,reg,imagereduced;
char typ[128];
long width,height,vert_pitch,hori_bit_pitch,bits_per_pix, winID;
unsigned char *ptr;
open_window(0,0,512,512,0,"visible","",&winID);
read_image(&image,"monkey");
draw_region(&reg,winID);
reduce_domain(image,reg,&imagereduced);
get_image_pointer1_rect(imagereduced,(long*)&ptr,&width,&height,
&vert_pitch,&hori_bit_pitch,&bits_per_pix);
Result
The operator GetImagePointer1Rect returns the value 2 (H_MSG_TRUE) if exactly one image was
passed. The behavior in case of empty input (no input images available) is set via the operator SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
GetImagePointer1Rect is reentrant and processed without parallelization.
Possible Predecessors
ReadImage, GenImage1Rect
Alternatives
SetGrayval, GetGrayval, GetImagePointer3, GetImagePointer1
See also
PaintRegion, PaintGray, GenImage1Rect
Module
Foundation
language via the pointer is possible. An image is stored in HALCON as a vector of image lines. The three
channels must have the same pixel type and the same size.
Attention
Only one image can be passed. The operator GetImagePointer3 should only be used for entry into newly
created images, since otherwise the gray values of other images might be overwritten (see relational structure).
Parameter
5.2 Acquisition
Module
Foundation
Grab images and preprocessed image data from the specified image acquisition device.
The operator GrabData grabs images and preprocessed image data via the image acquisition device specified by
acqHandle. The desired operational mode of the image acquisition device as well as a suitable image part can
be adjusted via the operator OpenFramegrabber. Additional interface-specific settings can be specified via
SetFramegrabberParam. Depending on the current configuration of the image acquisition device, the
preprocessed image data can be returned in terms of images (image), regions (region), XLD contours
(contours), and control data (data).
Parameter
. image (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Grabbed image data.
. region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Preprocessed image regions.
. contours (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; HXLDCont
Preprocessed XLD contours.
. acqHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . framegrabber ; HFramegrabber / HTuple (IntPtr)
Handle of the acquisition device to be used.
. data (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; HTuple (string / double / int / long)
Preprocessed control data.
Example (Syntax: HDevelop)
Result
If the image acquisition device is open and supports image acquisition via GrabData, the operator
GrabData returns the value 2 (H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
GrabData is reentrant and processed without parallelization.
Possible Predecessors
OpenFramegrabber, GrabImageStart, SetFramegrabberParam
Possible Successors
GrabData, GrabDataAsync, GrabImageStart, GrabImage, GrabImageAsync,
SetFramegrabberParam, CloseFramegrabber
See also
OpenFramegrabber, InfoFramegrabber, SetFramegrabberParam
Module
Foundation
Grab images and preprocessed image data from the specified image acquisition device and start the next
asynchronous grab.
The operator GrabDataAsync grabs images and preprocessed image data via the image acquisition device specified
by acqHandle and starts the next asynchronous grab. The desired operational mode of the image acquisition device
as well as a suitable image part can be adjusted via the operator OpenFramegrabber. Additional interface-
specific settings can be specified via SetFramegrabberParam. The segmented image regions are returned in
region. Depending on the current configuration of the image acquisition device, the preprocessed image data
can be returned in terms of images (image), regions (region), XLD contours (contours), and control data
(data).
The grab of the next image is finished by calling GrabDataAsync or GrabImageAsync. If more than
maxDelay ms have passed since the asynchronous grab was started, the asynchronously grabbed image is con-
sidered as too old and a new image is grabbed. If a negative value is assigned to maxDelay this control mechanism
is deactivated.
Please note that if you call the operators GrabImage or GrabData after GrabDataAsync, the asynchronous
grab started by GrabDataAsync is aborted and a new image is grabbed (and waited for).
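The maxDelay semantics described above can be sketched as follows. This is only an illustration of the freshness check (the function name and the millisecond bookkeeping are assumptions, not part of the HALCON API):

```c
#include <stdbool.h>

/* Decide whether an asynchronously grabbed image is still usable.
 * elapsed_ms: milliseconds since the asynchronous grab was started.
 * max_delay_ms: the maxDelay parameter; a negative value disables
 * the check, so the buffered image is always accepted. */
bool async_image_too_old(double elapsed_ms, double max_delay_ms)
{
    if (max_delay_ms < 0.0)
        return false;              /* control mechanism deactivated */
    return elapsed_ms > max_delay_ms;
}
```

If this function returned true, the operator would discard the buffered image and grab a new one.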
Parameter
Result
If the image acquisition device is open and supports the image acquisition via GrabDataAsync, the operator
GrabDataAsync returns the value 2 (H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
GrabDataAsync is reentrant and processed without parallelization.
Possible Predecessors
OpenFramegrabber, GrabImageStart, SetFramegrabberParam
Possible Successors
GrabDataAsync, GrabImageAsync, SetFramegrabberParam, CloseFramegrabber
See also
OpenFramegrabber, InfoFramegrabber, SetFramegrabberParam
Module
Foundation
Result
If the image could be acquired successfully, the operator GrabImage returns the value 2 (H_MSG_TRUE).
Otherwise an exception is raised.
Parallelization Information
GrabImage is reentrant and processed without parallelization.
Possible Predecessors
OpenFramegrabber, SetFramegrabberParam
Possible Successors
GrabImage, GrabImageStart, GrabImageAsync, CloseFramegrabber
See also
OpenFramegrabber, InfoFramegrabber, SetFramegrabberParam
Module
Foundation
’default’,’default’,’default’,-1,-1,AcqHandle)
// Grab image + start next grab
grab_image_async(Image1,AcqHandle,-1.0)
// Process Image1 ...
// Finish asynchronous grab + start next grab
grab_image_async(Image2,AcqHandle,-1.0)
// Process Image2 ...
close_framegrabber(AcqHandle)
Result
If the image acquisition device is open and supports asynchronous grabbing, the operator GrabImageAsync
returns the value 2 (H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
GrabImageAsync is reentrant and processed without parallelization.
Possible Predecessors
GrabImageStart, OpenFramegrabber, SetFramegrabberParam
Possible Successors
GrabImageAsync, GrabDataAsync, SetFramegrabberParam, CloseFramegrabber
See also
GrabImageStart, OpenFramegrabber, InfoFramegrabber, SetFramegrabberParam
Module
Foundation
grab_image_start(AcqHandle,-1.0)
// Grab image + start next grab
grab_image_async(Image1,AcqHandle,-1.0)
// Process Image1 ...
// Finish asynchronous grab + start next grab
grab_image_async(Image2,AcqHandle,-1.0)
// Process Image2 ...
close_framegrabber(AcqHandle)
Result
If the image acquisition device is open and supports asynchronous grabbing, the operator GrabImageStart
returns the value 2 (H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
GrabImageStart is reentrant and processed without parallelization.
Possible Predecessors
OpenFramegrabber, SetFramegrabberParam
Possible Successors
GrabImageAsync, GrabDataAsync, SetFramegrabberParam, CloseFramegrabber
See also
OpenFramegrabber, InfoFramegrabber, SetFramegrabberParam
Module
Foundation
’bits_per_channel’: List of all supported values for the parameter ’BitsPerChannel’, see OpenFramegrabber.
’camera_type’: Description and list of all supported values for the parameter ’CameraType’, see
OpenFramegrabber.
’color_space’: List of all supported values for the parameter ’ColorSpace’, see OpenFramegrabber.
’defaults’: Interface-specific default values in valueList, see OpenFramegrabber.
’device’: List of all supported values for the parameter ’Device’, see OpenFramegrabber.
’external_trigger’: List of all supported values for the parameter ’ExternalTrigger’, see OpenFramegrabber.
’field’: List of all supported values for the parameter ’Field’, see OpenFramegrabber.
’general’: General information (in information).
’horizontal_resolution’: List of all supported values for the parameter ’HorizontalResolution’, see
OpenFramegrabber.
’image_height’: List of all supported values for the parameter ’ImageHeight’, see OpenFramegrabber.
’image_width’: List of all supported values for the parameter ’ImageWidth’, see OpenFramegrabber.
’info_boards’: Information about actually installed boards or cameras. This data is especially useful for the
auto-detect mechanism of ActivVisionTools and for the Image Acquisition Assistant in HDevelop.
’line_in’: List of all supported values for the parameter ’LineIn’, see OpenFramegrabber.
’parameters’: List of all interface-specific parameters which are accessible via SetFramegrabberParam or
GetFramegrabberParam.
’parameters_readonly’: List of all interface-specific parameters which are only accessible via
GetFramegrabberParam.
’parameters_writeonly’: List of all interface-specific parameters which are only accessible via
SetFramegrabberParam.
’port’: List of all supported values for the parameter ’Port’, see OpenFramegrabber.
’revision’: Version number of the image acquisition interface.
’start_column’: List of all supported values for the parameter ’StartColumn’, see OpenFramegrabber.
’start_row’: List of all supported values for the parameter ’StartRow’, see OpenFramegrabber.
’vertical_resolution’: List of all supported values for the parameter ’VerticalResolution’, see
OpenFramegrabber.
Please also check the directory doc/html/manuals for documentation about specific image acquisition interfaces.
Parameter
Result
If the parameter values are correct and the specified image acquisition interface is available,
InfoFramegrabber returns the value 2 (H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
InfoFramegrabber is processed completely exclusively without parallelization.
Possible Predecessors
OpenFramegrabber
Possible Successors
OpenFramegrabber
See also
OpenFramegrabber
Module
Foundation
BitsPerChannel Number of bits, which are transferred from the image acquisition device per pixel and image
channel (typically 5, 8, 10, 12, or 16).
ColorSpace Output color format of the grabbed images (typically ’gray’ or ’raw’ for single-channel or ’rgb’ or
’yuv’ for three-channel images).
Generic Generic parameter with device-specific meaning which can be queried by InfoFramegrabber.
ExternalTrigger Activation of external triggering (if available).
CameraType More detailed specification of the desired image acquisition device (typically the type of the analog
video format or the name of the desired camera configuration file).
Device Device name of the image acquisition device.
Port Port the image acquisition device is connected to.
LineIn Camera input line of multiplexer (if available).
The operator OpenFramegrabber returns a handle (acqHandle) to the opened image acquisition device.
Attention
Due to the multitude of supported image acquisition devices, OpenFramegrabber contains a large number of
parameters. However, not all parameters are needed for a specific image acquisition device.
Parameter
Result
If the parameter values are correct and the desired image acquisition device could be opened,
OpenFramegrabber returns the value 2 (H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
OpenFramegrabber is processed completely exclusively without parallelization.
Possible Predecessors
InfoFramegrabber
Possible Successors
GrabImage, GrabData, GrabImageStart, GrabImageAsync, GrabDataAsync,
SetFramegrabberParam
See also
InfoFramegrabber, CloseFramegrabber, GrabImage
Module
Foundation
Parameter
. acqHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . framegrabber ; HFramegrabber / HTuple (IntPtr)
Handle of the acquisition device to be used.
. param (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; HTuple (string)
Parameter name.
Suggested values : Param ∈ {"color_space", "continuous_grabbing", "external_trigger", "grab_timeout",
"image_height", "image_width", "port", "start_column", "start_row", "volatile"}
. value (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; HTuple (string / double / int / long)
Parameter value to be set.
Result
If the image acquisition device is open and the specified parameter / parameter value is supported, the operator
SetFramegrabberParam returns the value 2 (H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
SetFramegrabberParam is reentrant and processed without parallelization.
Possible Predecessors
OpenFramegrabber
Possible Successors
GrabImage, GrabData, GrabImageStart, GrabImageAsync, GrabDataAsync,
GetFramegrabberParam
See also
OpenFramegrabber, InfoFramegrabber, GetFramegrabberParam
Module
Foundation
5.3 Channel
static void HOperatorSet.AccessChannel ( HObject multiChannelImage,
out HObject image, HTuple channel )
Parallelization Information
AccessChannel is reentrant and processed without parallelization.
Possible Predecessors
CountChannels
Possible Successors
DispImage
Alternatives
Decompose2, Decompose3, Decompose4, Decompose5
See also
CountChannels
Module
Foundation
HImage HImage.ChannelsToImage ( )
Convert one-channel images into a multichannel image.
The operator ChannelsToImage converts several one-channel images into a multichannel image. The new
definition domain is the average of the definition domains of the input images.
Parameter
Parallelization Information
ChannelsToImage is reentrant and processed without parallelization.
Possible Successors
CountChannels, DispImage
Module
Foundation
Parallelization Information
Compose3 is reentrant and automatically parallelized (on tuple level).
Possible Successors
DispImage
Alternatives
AppendChannel
See also
Decompose3
Module
Foundation
Parameter
. image1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; HImage
Input image 1.
. image2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; HImage
Input image 2.
. image3 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; HImage
Input image 3.
. image4 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; HImage
Input image 4.
. image5 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; HImage
Input image 5.
. multiChannelImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . multichannel-image(-array) ; HImage
Multichannel image.
Parallelization Information
Compose5 is reentrant and automatically parallelized (on tuple level).
Possible Successors
DispImage
Alternatives
AppendChannel
See also
Decompose5
Module
Foundation
Alternatives
AppendChannel
See also
Decompose6
Module
Foundation
HTuple HImage.CountChannels ( )
Count channels of image.
The operator CountChannels counts the number of channels of all input images.
Parameter
. multiChannelImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
One- or multichannel image.
. channels (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; HTuple (int / long)
Number of channels.
Example (Syntax: C)
read_image(&Color,"patras");
count_channels(Color,&num_channels);
for (i=1; i<=num_channels; i++)
{
access_channel(Color,&Channel,i);
disp_image(Channel,WindowHandle);
clear_obj(Channel);
}
Parallelization Information
CountChannels is reentrant and processed without parallelization.
Possible Successors
AccessChannel, AppendChannel, DispImage
See also
AppendChannel, AccessChannel
Module
Foundation
Parallelization Information
Decompose4 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
CountChannels
Possible Successors
DispImage
Alternatives
AccessChannel, ImageToChannels
See also
Compose4
Module
Foundation
Parameter
. multiChannelImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . multichannel-image(-array) ; HImage
Multichannel image.
. image1 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; HImage
Output image 1.
. image2 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; HImage
Output image 2.
. image3 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; HImage
Output image 3.
. image4 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; HImage
Output image 4.
. image5 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; HImage
Output image 5.
. image6 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; HImage
Output image 6.
. image7 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; HImage
Output image 7.
Parallelization Information
Decompose7 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
CountChannels
Possible Successors
DispImage
Alternatives
AccessChannel, ImageToChannels
See also
Compose7
Module
Foundation
HImage HImage.ImageToChannels ( )
Convert a multichannel image into one-channel images.
The operator ImageToChannels generates a one-channel image for each channel of the multichannel image in
multiChannelImage. The definition domains are adopted from the input image. As many images are created
as multiChannelImage has channels.
Parameter
. multiChannelImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . multichannel-image ; HImage
Multichannel image to be decomposed.
. images (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image-array ; HImage
Generated one-channel images.
Parallelization Information
ImageToChannels is reentrant and processed without parallelization.
Possible Predecessors
CountChannels
Possible Successors
DispImage
Alternatives
AccessChannel, Decompose2, Decompose3, Decompose4, Decompose5
Module
Foundation
5.4 Creation
static void HOperatorSet.CopyImage ( HObject image,
out HObject dupImage )
HImage HImage.CopyImage ( )
Copy an image and allocate new memory for it.
CopyImage copies the input image into a new image with the same domain as the input image. In contrast to
HALCON operators such as CopyObj, physical copies of all channels are created. This can be used, for example,
to modify the gray values of the new image (see GetImagePointer1).
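The distinction from a handle-copying operator such as CopyObj can be illustrated with a toy image structure. This is purely illustrative (HALCON's internal image representation is more involved); it only demonstrates the deep-copy property that makes the result safe to modify:

```c
#include <stdlib.h>
#include <string.h>

/* A toy image: deep_copy duplicates the pixel buffer, so writing
 * into the copy leaves the original untouched. This is the property
 * CopyImage provides, in contrast to shallow copies that share the
 * underlying channel data. */
typedef struct { unsigned char *pixels; int n; } ToyImage;

ToyImage deep_copy(const ToyImage *src)
{
    ToyImage dst;
    dst.n = src->n;
    dst.pixels = malloc((size_t)src->n);
    memcpy(dst.pixels, src->pixels, (size_t)src->n);
    return dst;
}
```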
Parameter
public HImage ( string type, int width, int height, IntPtr pixelPointer)
void HImage.GenImage1 ( string type, int width, int height,
IntPtr pixelPointer )
Result
If the parameter values are correct, the operator GenImage1 returns the value 2 (H_MSG_TRUE). Otherwise an
exception is raised.
Parallelization Information
GenImage1 is reentrant and processed without parallelization.
Possible Predecessors
GenImageConst, GetImagePointer1
Alternatives
GenImage3, GenImageConst, GetImagePointer1
See also
ReduceDomain, PaintGray, PaintRegion, SetGrayval
Module
Foundation
In contrast to GenImage1, the memory for the new image is not newly allocated by HALCON and thus is not
copied either. This means that the memory space that pixelPointer points to must be released when the
object image is deleted. This is done by the procedure clearProc provided by the caller. This procedure must have the
following signature
void ClearProc(void* ptr);
and will be called using the __cdecl calling convention when image is deleted. If the memory is not to be released
(in the case of frame grabbers or static memory), a procedure with an empty body or the NULL pointer can be
passed. Analogous to the parameter pixelPointer, the pointer has to be passed to the procedure by casting it to long.
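A minimal clear procedure matching this signature might look as follows. The call counter is only added here so the behavior can be observed in a test; HALCON itself merely requires that the memory is released:

```c
#include <stdlib.h>

static int clear_calls = 0;   /* observation only; not needed by HALCON */

/* Matches the required signature: releases the pixel buffer when the
 * image object is deleted. */
void ClearProc(void *ptr)
{
    free(ptr);
    ++clear_calls;
}
```

For static memory, the body would simply be empty (or the NULL pointer passed instead).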
Parameter
. image (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Created HALCON image.
. type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Pixel type.
Default Value : "byte"
List of values : Type ∈ {"int1", "int2", "uint2", "int4", "byte", "real", "direction", "cyclic"}
. width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; HTuple (int / long)
Width of image.
Default Value : 512
Suggested values : Width ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Width ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Width ≥ 1
. height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; HTuple (int / long)
Height of image.
Default Value : 512
Suggested values : Height ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Height ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Height ≥ 1
. pixelPointer (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; HTuple (IntPtr)
Pointer to the first gray value.
. clearProc (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; HTuple (IntPtr)
Pointer to the procedure re-releasing the memory of the image when deleting the object.
Default Value : 0
Example (Syntax: C)
Result
The operator GenImage1Extern returns the value 2 (H_MSG_TRUE) if the parameter values are correct.
Otherwise an exception is raised.
Parallelization Information
GenImage1Extern is reentrant and processed without parallelization.
Alternatives
GenImage1, GenImageConst, GetImagePointer1
See also
ReduceDomain, PaintGray, PaintRegion, SetGrayval
Module
Foundation
Create an image with a rectangular domain from a pointer on the pixels (with storage management).
The operator GenImage1Rect creates an image of size (verticalPitch/(horizontalBitPitch / 8))
* height. The pixels pointed to by pixelPointer are stored line by line. Since the type of the parameter
pixelPointer is generic (long) a cast must be used for the call. verticalPitch determines the distance
(in bytes) between pixel m in row n and pixel m in row n+1 inside of memory. All rows of the ’input image’ have
the same vertical pitch. The width of the output image equals verticalPitch / (horizontalBitPitch /
8). The height of input and output image are equal. The domain of the output image image is a rectangle of the
size width * height. The parameter horizontalBitPitch is the horizontal distance (in bits) between two
neighbouring pixels. bitsPerPixel is the number of used bits per pixel.
If doCopy is set to ’true’, the image data pointed to by pixelPointer is copied and memory for the new image is
newly allocated by HALCON. Otherwise the image data is not duplicated, and the memory space that pixelPointer
points to must be released when the object image is deleted. This is done by the procedure clearProc provided
by the caller. This procedure must have the following signature
void ClearProc(void* ptr);
and will be called using the __cdecl calling convention when image is deleted. If the memory is not to be released
(in the case of frame grabbers or static memory), a procedure with an empty body or the NULL pointer can be passed.
Analogous to the parameter pixelPointer, the pointer has to be passed to the procedure by casting it to
long. If doCopy is ’true’, clearProc is irrelevant. The operator GenImage1Rect is symmetrical to
GetImagePointer1Rect.
Parameter
unsigned char *image = malloc(640*480);
for (r=0; r<480; r++)
  for (c=0; c<640; c++)
    image[r*640+c] = c % 255;
gen_image1_rect(new,(long)image,400,480,640,8,8,"false",(long)free);
}
Result
The operator GenImage1Rect returns the value 2 (H_MSG_TRUE) if the parameter values are correct.
Otherwise an exception is raised.
Parallelization Information
GenImage1Rect is reentrant and processed without parallelization.
Possible Successors
GetImagePointer1Rect
Alternatives
GenImage1, GenImage1Extern
See also
GetImagePointer1Rect
Module
Foundation
The operator GenImage3 creates a three-channel image of the size width × height. The pixels in
pixelPointerRed, pixelPointerGreen and pixelPointerBlue are stored line-sequentially. The
type of the given pixels (pixelPointerRed etc.) must correspond to the name of the pixels (type). The
storage for the new image is newly created by HALCON. Thus, it can be released after the call. Since the type of
the parameters (pixelPointerRed etc.) is generic (long), a cast must be used for the call.
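The expected line-sequential layout of the three channel buffers can be illustrated as follows. This sketch only prepares the buffers (filled with arbitrary gradient demo data); the actual GenImage3 call, which requires a HALCON installation, is omitted:

```c
/* Fill three separate byte buffers with line-sequentially stored
 * channel data, as expected by GenImage3. Each buffer holds one
 * channel; pixel (r,c) of a channel sits at index r*width + c. */
void fill_rgb_buffers(unsigned char *red, unsigned char *green,
                      unsigned char *blue, int width, int height)
{
    for (int r = 0; r < height; r++)
        for (int c = 0; c < width; c++) {
            red[r * width + c]   = (unsigned char)(c % 256);
            green[r * width + c] = (unsigned char)(r % 256);
            blue[r * width + c]  = (unsigned char)((r + c) % 256);
        }
}
```

The three buffer pointers would then be passed as pixelPointerRed, pixelPointerGreen, and pixelPointerBlue.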
Parameter
. imageRGB (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Created image with new image matrix.
. type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Pixel type.
Default Value : "byte"
List of values : Type ∈ {"byte", "direction", "cyclic", "int1", "int2", "uint2", "int4", "real"}
. width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; HTuple (int / long)
Width of image.
Default Value : 512
Suggested values : Width ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Width ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
. height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; HTuple (int / long)
Height of image.
Default Value : 512
Suggested values : Height ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Height ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
. pixelPointerRed (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; HTuple (IntPtr)
Pointer to first red value (channel 1).
. pixelPointerGreen (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; HTuple (IntPtr)
Pointer to first green value (channel 2).
. pixelPointerBlue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; HTuple (IntPtr)
Pointer to first blue value (channel 3).
Example (Syntax: C)
main()
{
  Hobject rgb;
  long WindowHandle;
  open_window(0,0,768,525,0,"","",&WindowHandle);
  NewRGBImage(&rgb);
  disp_color(rgb,WindowHandle);
  clear_obj(rgb);
}
Result
If the parameter values are correct, the operator GenImage3 returns the value 2 (H_MSG_TRUE). Otherwise an
exception is raised.
Parallelization Information
GenImage3 is reentrant and processed without parallelization.
Possible Predecessors
GenImageConst, GetImagePointer1
Possible Successors
DispColor
Alternatives
GenImage1, Compose3, GenImageConst
See also
ReduceDomain, PaintGray, PaintRegion, SetGrayval, GetImagePointer1, Decompose3
Module
Foundation
gen_image_const(&New,"byte",width,height);
get_image_pointer1(New,(long*)&pointer,type,&width,&height);
for (row=0; row<height; row++)
  for (col=0; col<width; col++)
    pointer[row*width+col] = (row + col) % 256;
Result
If the parameter values are correct, the operator GenImageConst returns the value 2 (H_MSG_TRUE).
Otherwise an exception is raised.
Parallelization Information
GenImageConst is reentrant and processed without parallelization.
Possible Successors
PaintRegion, ReduceDomain, GetImagePointer1, CopyObj
Alternatives
GenImage1, GenImage3
See also
ReduceDomain, PaintGray, PaintRegion, SetGrayval, GetImagePointer1
Module
Foundation
The size of the image is determined by width and height. The gray values are of the type byte. Gray values
outside the valid area are clipped.
Parameter
Parameter
See also
ReduceDomain, PaintGray, PaintRegion, SetGrayval
Module
Foundation
The size of the image is determined by width and height. The gray values are of the type type. Gray values
outside the valid area are clipped.
Parameter
. imageSurface (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Created image with new image matrix.
. type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Pixel type.
Default Value : "byte"
List of values : Type ∈ {"byte", "uint2", "real"}
. alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double)
First order coefficient in vertical direction.
Default Value : 1.0
Suggested values : Alpha ∈ {-2.0, -1.0, -0.5, -0.0, 0.5, 1.0, 2.0}
Minimum Increment : 0.000001
Recommended Increment : -0.005
. beta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double)
First order coefficient in horizontal direction.
Default Value : 1.0
Suggested values : Beta ∈ {-2.0, -1.0, -0.5, -0.0, 0.5, 1.0, 2.0}
Minimum Increment : 0.000001
Recommended Increment : -0.005
. gamma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double)
Zero-order coefficient.
Default Value : 1.0
Suggested values : Gamma ∈ {-2.0, -1.0, -0.5, -0.0, 0.5, 1.0, 2.0}
Minimum Increment : 0.000001
Recommended Increment : -0.005
. row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double)
Line coordinate of the apex of the surface.
Default Value : 256.0
Suggested values : Row ∈ {0.0, 128.0, 256.0, 512.0}
Minimum Increment : 0.000001
Recommended Increment : -0.005
. col (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double)
Column coordinate of the apex of the surface.
Default Value : 256.0
Suggested values : Col ∈ {0.0, 128.0, 256.0, 512.0}
Minimum Increment : 0.000001
Recommended Increment : -0.005
. width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; HTuple (int / long)
Width of image.
Default Value : 512
Suggested values : Width ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Width ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Width ≥ 1
. height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; HTuple (int / long)
Height of image.
Default Value : 512
Suggested values : Height ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Height ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Height ≥ 1
Result
If the parameter values are correct, GenImageSurfaceFirstOrder returns the value 2 (H_MSG_TRUE).
Otherwise an exception is raised.
Parallelization Information
GenImageSurfaceFirstOrder is reentrant and processed without parallelization.
Possible Predecessors
FitSurfaceFirstOrder
Possible Successors
SubImage
See also
GenImageGrayRamp, GenImageSurfaceSecondOrder
Module
Foundation
imageSurface(r, c) = alpha∗(r−row)∗∗2 + beta∗(c−col)∗∗2 + gamma∗(r−row)∗(c−col) + delta∗(r−row) + epsilon∗(c−col) + zeta
The size of the image is determined by width and height. The gray values are of the type type. Gray values
outside the valid area are clipped.
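Evaluating such a second-order surface at a single point, including the clipping of gray values to a valid range, can be sketched as follows. The coefficient names follow the formula; in particular the zero-order coefficient (named zeta here, by analogy with gamma in the first-order variant) is a reconstruction, and the clipping shown is for the byte range. This is an illustration, not the HALCON implementation:

```c
/* Evaluate the second-order gray-value surface at (r, c) around the
 * apex (row, col) and clip the result to the byte range [0, 255]. */
double surface2(double alpha, double beta, double gamma,
                double delta, double epsilon, double zeta,
                double row, double col, double r, double c)
{
    double dr = r - row, dc = c - col;
    double v = alpha * dr * dr + beta * dc * dc
             + gamma * dr * dc + delta * dr + epsilon * dc + zeta;
    if (v < 0.0)   v = 0.0;      /* clip to the valid byte range */
    if (v > 255.0) v = 255.0;
    return v;
}
```

For ’uint2’ or ’real’ images the clipping bounds would differ accordingly.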
Parameter
Module
Foundation
Possible Predecessors
Threshold, Connection, Regiongrowing, Pouring
Possible Successors
GetGrayval
Alternatives
RegionToLabel, PaintRegion, SetGrayval
See also
GenImageProto, PaintGray
Module
Foundation
can be set via SetSystem(’no_object_result’,<Result>) and the behavior in case of an empty input
region via SetSystem(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
RegionToLabel is reentrant and processed without parallelization.
Possible Predecessors
Threshold, Regiongrowing, Connection, ExpandRegion
Possible Successors
GetGrayval, GetImagePointer1
Alternatives
RegionToBin, PaintRegion
See also
LabelToRegion
Module
Foundation
read_image(Image,’fabrik’)
region_growing(Image,Regions,3,3,6,100)
region_to_mean(Regions,Image,Disp)
disp_image(Disp,WindowHandle)
set_draw(WindowHandle,’margin’)
set_color(WindowHandle,’black’)
disp_region(Regions,WindowHandle)
Result
RegionToMean returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty, the behavior can
be set via SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
RegionToMean is reentrant and processed without parallelization.
Possible Predecessors
Regiongrowing, Connection
Possible Successors
DispImage
Alternatives
PaintRegion, Intensity
Module
Foundation
5.5 Domain
static void HOperatorSet.AddChannels ( HObject regions,
HObject image, out HObject grayRegions )
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Input image.
. newDomain (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; HRegion
New definition domain.
. imageNew (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Image with new definition domain.
Parallelization Information
ChangeDomain is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
GetDomain
Alternatives
ReduceDomain
See also
FullDomain, GetDomain, Intersection
Module
Foundation
HImage HImage.FullDomain ( )
Expand the domain of an image to maximum.
The operator FullDomain sets a rectangle of the full image size as the new definition domain. This
means that all pixels of the matrix are included in further operations. Thus the same definition domain is obtained
as by reading or generating an image. The size of the matrix is not changed.
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Input image.
. imageFull (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Image with maximum definition domain.
Parallelization Information
FullDomain is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
GetDomain
Alternatives
ChangeDomain, ReduceDomain
See also
GetDomain, GenRectangle1
Module
Foundation
HRegion HImage.GetDomain ( )
Get the domain of an image.
The operator GetDomain returns the definition domains of all input images as a region.
Parameter
Alternatives
ChangeDomain, ReduceDomain, AddChannels
See also
FullDomain, GetDomain, Intersection
Module
Foundation
5.6 Features
static void HOperatorSet.AreaCenterGray ( HObject regions,
HObject image, out HTuple area, out HTuple row, out HTuple column )
Compute the area and center of gravity of a region in a gray value image.
AreaCenterGray computes the area and center of gravity of the regions regions that have gray values which
are defined by the image image. This operator is similar to AreaCenter, but in contrast to that operator, the
gray values of the image are taken into account while computing the area and center of gravity.
The area A of a region R in the image with the gray values g(r, c) is defined as

    A = Σ_{(r,c) ∈ R} g(r, c).

This means that the area is defined by the volume of the gray value function g(r, c). The center of gravity is defined
by the first two normalized moments of the gray values g(r, c), i.e., by (m_{1,0}, m_{0,1}), where

    m_{p,q} = (1 / A) · Σ_{(r,c) ∈ R} r^p c^q g(r, c).
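As a cross-check, the two formulas can be evaluated directly in a few lines of Python (illustrative only; the tiny image and region are invented, and this is not the HALCON implementation):

```python
def area_center_gray(region, gray):
    """Gray-value area and center of gravity of `region` (a list of (r, c)
    pixel coordinates) in the 2D gray value array `gray`."""
    area = sum(gray[r][c] for r, c in region)               # A = sum of g(r, c)
    row = sum(r * gray[r][c] for r, c in region) / area     # m_{1,0}
    col = sum(c * gray[r][c] for r, c in region) / area     # m_{0,1}
    return area, row, col

# On a uniform image the gray value center coincides with the geometric one.
img = [[4, 4], [4, 4]]
region = [(0, 0), (0, 1), (1, 0), (1, 1)]
a, r, c = area_center_gray(region, img)
```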
Parameter
. regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Region(s) to be examined.
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; HImage
Gray value image.
. area (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Gray value volume of the region.
. row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; HTuple (double)
Row coordinate of the gray value center of gravity.
. column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; HTuple (double)
Column coordinate of the gray value center of gravity.
Result
AreaCenterGray returns 2 (H_MSG_TRUE) if all parameters are correct and no error occurs during execution.
If the input is empty the behavior can be set via SetSystem(’no_object_result’,<Result>). If
necessary, an exception handling is raised.
Parallelization Information
AreaCenterGray is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Regiongrowing, Connection
Alternatives
AreaCenter
See also
AreaCenterXld, EllipticAxisGray
Module
Foundation
The call of CoocFeatureImage corresponds to the consecutive execution of the operators GenCoocMatrix
and CoocFeatureMatrix. If several direction matrices of the co-occurrence matrix are to be evaluated
consecutively, it is more efficient to generate the matrix via GenCoocMatrix and then call the operator
CoocFeatureMatrix for the resulting matrix. The parameter direction specifies the direction of the
neighborhood, either as an angle or as ’mean’. In the case of ’mean’ the mean value over all four directions is calculated.
Parameter
The operator CoocFeatureMatrix calculates the gray value features from the part of the input matrix gen-
erated by GenCoocMatrix corresponding to the direction matrix indicated by the parameters LdGray and
Direction according to the following formulae:
Energy:

    Energy = Σ_{i,j=0}^{width} c_{i,j}²

Contrast:

    Contrast = Σ_{i,j=0}^{width} (i − j)² · c_{i,j}
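Both features can be computed directly from a normalized co-occurrence matrix c_{i,j}; the following Python sketch (not HALCON code, matrix invented) mirrors the formulas:

```python
def cooc_energy_contrast(c):
    """Energy and contrast of a (normalized) co-occurrence matrix c."""
    n = len(c)
    energy = sum(c[i][j] ** 2 for i in range(n) for j in range(n))
    contrast = sum((i - j) ** 2 * c[i][j] for i in range(n) for j in range(n))
    return energy, contrast

# All mass on the diagonal: identical neighbor pairs only, so contrast is 0.
m = [[0.5, 0.0], [0.0, 0.5]]
energy, contrast = cooc_energy_contrast(m)
```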
Attention
The region of the input image is disregarded.
Parameter
See also
Intensity, MinMaxGray, EntropyGray, SelectGray
Module
Foundation
Compute the orientation and major axes of a region in a gray value image.
The operator EllipticAxisGray calculates the length of the axes and the orientation of the ellipse having the
“same orientation” and the “aspect ratio” as the input region. Several input regions can be passed in regions as
tuples. The length of the major axis ra and the minor axis rb as well as the orientation of the major axis with
regard to the x-axis (phi) are determined. The angle is returned in radians. The calculation is done analogously to
EllipticAxis. The only difference is that in EllipticAxisGray the gray value moments are used instead
of the region moments. The gray value moments are derived from the input image image. For the definition of
the gray value moments, see AreaCenterGray.
Parameter
. regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Region(s) to be examined.
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; HImage
Gray value image.
. ra (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Major axis of the region.
. rb (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Minor axis of the region.
. phi (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; HTuple (double)
Angle enclosed by the major axis and the x-axis.
Result
EllipticAxisGray returns 2 (H_MSG_TRUE) if all parameters are correct and no error occurs during execu-
tion. If the input is empty the behavior can be set via SetSystem(’no_object_result’,<Result>). If
necessary, an exception handling is raised.
Parallelization Information
EllipticAxisGray is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Regiongrowing, Connection
Possible Successors
GenEllipse
Alternatives
EllipticAxis
See also
AreaCenterGray
Module
Foundation
Anisotropy coefficient:

    Anisotropy = ( Σ_{i=0}^{k} rel[i] · log2(rel[i]) ) / Entropy

where

    rel[i] = histogram of relative gray value frequencies
    i      = gray value of the input image (0 … 255)
    k      = smallest possible gray value with Σ_{i=0}^{k} rel[i] ≥ 0.5
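A plain Python sketch of the entropy and of the threshold index k used by the anisotropy coefficient (illustrative only; the sign convention H = −Σ rel[i]·log2 rel[i] is an assumption derived from the assertion 0 ≤ Entropy ≤ 8 below):

```python
import math

def entropy_and_k(rel):
    """Entropy of a relative gray value histogram `rel` (sums to 1) and the
    smallest gray value k whose cumulative frequency reaches 0.5."""
    entropy = -sum(p * math.log2(p) for p in rel if p > 0)  # assumed sign
    cum = 0.0
    for k, p in enumerate(rel):
        cum += p
        if cum >= 0.5:
            break
    return entropy, k

# Uniform 256-bin histogram: entropy is exactly 8 bit, k is the midpoint.
h, k = entropy_and_k([1 / 256] * 256)
```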
Parameter
. regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions where the features are to be determined.
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Gray value image.
. entropy (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Information content (entropy) of the gray values.
Assertion : (0 ≤ Entropy) ∧ (Entropy ≤ 8)
. anisotropy (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Measure of the symmetry of gray value distribution.
Complexity
If F is the area of the region the runtime complexity is O(F + 255).
Result
The operator EntropyGray returns the value 2 (H_MSG_TRUE) if an image with defined gray values is entered
and the parameters are correct. The behavior in case of empty input (no input images available) is set via the
operator SetSystem(’no_object_result’,<Result>), the behavior in case of empty region is set via
SetSystem(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
EntropyGray is reentrant and automatically parallelized (on tuple level).
Alternatives
SelectGray
See also
EntropyImage, GrayHisto, GrayHistoAbs, FuzzyEntropy, FuzzyPerimeter
Module
Foundation
To estimate the noise, one of the following four methods can be selected in method:
• ’foerstner’: If method is set to ’foerstner’, first for each pixel a homogeneity measure is computed based
on the first derivatives of the gray values of image. By thresholding the homogeneity measure one obtains
the homogeneous regions in the image. The threshold is computed based on a starting value for the image
noise. The starting value is obtained by applying the method ’immerkaer’ (see below) in the first step. It
is assumed that the gray value fluctuations within the homogeneous regions are solely caused by the image
noise. Furthermore it is assumed that the image noise is Gaussian distributed. The average homogeneity
measure within the homogeneous regions is then used to calculate a refined estimate for the image noise.
The refined estimate leads to a new threshold for the homogeneity. The described process is iterated until the
estimated image noise remains constant between two successive iterations. Finally, the standard deviation of
the estimated image noise is returned in Sigma.
Note that in some cases the iteration falsely converges to the value 0. This happens, for example, if the gray
value histogram of the input image contains gaps that are caused either by an automatic radiometric scaling
of the camera or frame grabber, respectively, or by a manual spreading of the gray values using a scaling
factor > 1.
Also note that the result obtained by this method is independent of the value passed in percent.
• ’immerkaer’: If method is set to ’immerkaer’, first the following filter mask is applied to the input image:
          1  −2   1
    M =  −2   4  −2  .
          1  −2   1
The advantage of this method is that M is almost insensitive to image structure but only depends on the noise
in the image. Assuming a Gaussian distributed noise, its standard deviation is finally obtained as
    Sigma = √(π / 2) · (1 / 6N) · Σ_Image |Image ∗ M| ,
where N is the number of image pixels to which M is applied. Note that the result obtained by this method
is independent of the value passed in percent.
• ’least_squares’: If method is set to ’least_squares’, the fluctuations of the gray values with respect to a
locally fitted gray value plane are used to estimate the image noise. First, a homogeneity measure is computed
based on the first derivatives of the gray values of image. Homogeneous image regions are determined by
selecting the percent percent most homogeneous pixels in the domain of the input image, i.e., pixels with
small magnitudes of the first derivatives. For each homogeneous pixel a gray value plane is fitted to its 3 × 3
neighborhood. The differences between the gray values within the 3 × 3 neighborhood and the locally fitted
plane are used to estimate the standard deviation of the noise. Finally, the average standard deviation over all
homogeneous pixels is returned in sigma.
• ’mean’: If method is set to ’mean’, the noise estimation is based on the difference between the input
image and a noiseless version of the input image. First, a homogeneity measure is computed based on the
first derivatives of the gray values of image. Homogeneous image regions are determined by selecting
the percent percent most homogeneous pixels in the domain of the input image, i.e., pixels with small
magnitudes of the first derivatives. A mean filter is applied to the homogeneous image regions in order to
eliminate the noise. It is assumed that the difference between the input image and the thus obtained noiseless
version of the image represents the image noise. Finally, the standard deviation of the differences is returned
in sigma. It should be noted that this method requires large connected homogeneous image regions to be
able to reliably estimate the noise.
Note that the methods ’foerstner’ and ’immerkaer’ assume a Gaussian distribution of the image noise, whereas the
methods ’least_squares’ and ’mean’ can be applied to images with arbitrarily distributed noise. In general, the
method ’foerstner’ returns the most accurate results, while the method ’immerkaer’ offers the fastest computation.
If the image noise could not be estimated reliably, the error 3175 is raised. This may happen if the image does not
contain enough homogeneous regions, if the image was artificially created, or if the noise is not of Gaussian type.
In order to avoid this error, it might be useful in some cases to try one of the following modifications in dependence
of the estimation method that is passed in method:
• Increase the size of the input image domain (useful for all methods).
• Increase the value of the parameter percent (useful for methods ’least_squares’ and ’mean’).
• Use the method ’immerkaer’, instead of the methods ’foerstner’, ’least_squares’, or ’mean’. The method
’immerkaer’ does not rely on the existence of homogeneous image regions, and hence is almost always
applicable.
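The ’immerkaer’ formula is easy to reproduce outside HALCON; the sketch below convolves a made-up image with the mask M and applies the formula (a plain illustration, not the EstimateNoise operator):

```python
import math

M = [[1, -2, 1], [-2, 4, -2], [1, -2, 1]]

def estimate_noise_immerkaer(img):
    """sigma = sqrt(pi/2) * 1/(6N) * sum |img * M|, where the sum runs over
    the N interior pixels to which the 3x3 mask M can be applied."""
    h, w = len(img), len(img[0])
    total, n = 0.0, 0
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            resp = sum(M[i][j] * img[r - 1 + i][c - 1 + j]
                       for i in range(3) for j in range(3))
            total += abs(resp)
            n += 1
    return math.sqrt(math.pi / 2) * total / (6 * n)

# M annihilates constant (and planar) gray value profiles, so a noise-free
# flat image yields sigma = 0.
flat = [[100] * 8 for _ in range(8)]
sigma_flat = estimate_noise_immerkaer(flat)
```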
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Input image.
. method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Method to estimate the image noise.
Default Value : "foerstner"
List of values : Method ∈ {"foerstner", "immerkaer", "least_squares", "mean"}
. percent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double / int / long)
Percentage of used image points.
Default Value : 20
Suggested values : Percent ∈ {1, 2, 5, 7, 10, 15, 20, 30, 40, 50}
Restriction : (0 < Percent) ∧ (Percent ≤ 50.)
. sigma (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Standard deviation of the image noise.
Assertion : Sigma ≥ 0
Example (Syntax: HDevelop)
Result
If the parameters are valid, the operator EstimateNoise returns the value 2 (H_MSG_TRUE). If necessary an
exception is raised. If the image noise could not be estimated reliably, the error 3175 is raised.
Parallelization Information
EstimateNoise is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
GrabImage, GrabImageAsync, ReadImage, ReduceDomain
Possible Successors
BinomialFilter, GaussImage, MeanImage, SmoothImage
Alternatives
NoiseDistributionMean, Intensity, MinMaxGray
See also
GaussDistribution, AddNoiseDistribution
References
W. Förstner: "‘Image Preprocessing for Feature Extraction in Digital Intensity, Color and Range Images"‘, Springer
Lecture Notes on Earth Sciences, Summer School on Data Analysis and the Statistical Foundations of Geomatics,
1999
J. Immerkaer: "‘Fast Noise Variance Estimation"‘, Computer Vision and Image Understanding, Vol. 64, No. 2, pp.
300-302, 1996
Module
Foundation
Calculate gray value moments and approximation by a first order surface (plane).
The operator FitSurfaceFirstOrder calculates the gray value moments and the parameters of the approxi-
mation of the gray values by a first order surface. The calculation is done by minimizing the distance between the
gray values and the surface. A first order surface is described by the following formula:

    image(r, c) = alpha · (r − r_center) + beta · (c − c_center) + gamma

r_center and c_center are the center coordinates of the intersection of the input region with the full image domain.
In the minimization process the parameters alpha, beta, and gamma are calculated.
The algorithm used for the fitting can be selected via algorithm:
’regression’ Standard ’least squares’ line fitting.
’huber’ Weighted ’least squares’ fitting, where the impact of outliers is decreased based on the approach of
Huber.
’tukey’ Weighted ’least squares’ fitting, where the impact of outliers is decreased based on the approach of
Tukey.
The parameter clippingFactor (a scaling factor for the standard deviation) controls the amount of damping
of outliers: the smaller the value chosen for clippingFactor, the more outliers are detected. The detection of
outliers is repeated; the parameter iterations specifies the number of iterations. In the mode ’regression’
this value is ignored.
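For the default ’regression’ mode the fit can be sketched in plain Python. The sketch assumes a rectangular region, so the centered row and column coordinates are orthogonal and the least-squares solution decouples into three closed-form terms; the outlier-weighted modes and the test data are not part of the original (illustrative only):

```python
def fit_plane(region, gray):
    """Least-squares fit of g(r, c) ~ alpha*(r-rc) + beta*(c-cc) + gamma
    over a rectangular pixel region (list of (r, c) coordinates)."""
    n = len(region)
    rc = sum(r for r, _ in region) / n
    cc = sum(c for _, c in region) / n
    vals = {(r, c): gray[r][c] for r, c in region}
    gamma = sum(vals.values()) / n                 # mean gray value
    alpha = (sum((r - rc) * vals[(r, c)] for r, c in region)
             / sum((r - rc) ** 2 for r, c in region))
    beta = (sum((c - cc) * vals[(r, c)] for r, c in region)
            / sum((c - cc) ** 2 for r, c in region))
    return alpha, beta, gamma

# Exact planar data on a 4x4 block: the parameters are recovered exactly.
region = [(r, c) for r in range(4) for c in range(4)]
img = [[2 * (r - 1.5) + 3 * (c - 1.5) + 5 for c in range(4)] for r in range(4)]
a, b, g = fit_plane(region, img)
```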
Parameter
    image(r, c) = alpha · (r − r_center)² + beta · (c − c_center)² + gamma · (r − r_center) · (c − c_center) + delta · (r − r_center) + epsilon · (c − c_center) + zeta

r_center and c_center are the center coordinates of the intersection of the input region with the full image domain.
In the minimization process the parameters alpha through zeta are calculated.
The algorithm used for the fitting can be selected via algorithm:
’regression’ Standard ’least squares’ fitting.
’huber’ Weighted ’least squares’ fitting, where the impact of outliers is decreased based on the approach of
Huber.
’tukey’ Weighted ’least squares’ fitting, where the impact of outliers is decreased based on the approach of
Tukey.
The parameter clippingFactor (a scaling factor for the standard deviation) controls the amount of damping
of outliers: the smaller the value chosen for clippingFactor, the more outliers are detected. The detection of
outliers is repeated; the parameter iterations specifies the number of iterations. In the mode ’regression’
this value is ignored.
Parameter
. regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions to be checked.
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Corresponding gray values.
. algorithm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Algorithm for the fitting.
Default Value : "regression"
List of values : Algorithm ∈ {"regression", "tukey", "huber"}
. iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Maximum number of iterations (unused for ’regression’).
Default Value : 5
Restriction : Iterations ≥ 0
    H(X) = (1 / (M · N · ln 2)) · Σ_l T_e(l) · h(l)

where M × N is the size of the image and h(l) is the histogram of the image.
Here, u(x(m, n)) is a fuzzy membership function defining the fuzzy set (see FuzzyPerimeter). The same
restrictions hold as in FuzzyPerimeter.
Parameter
Result
The operator FuzzyEntropy returns the value 2 (H_MSG_TRUE) if the parameters are correct. Otherwise an
exception is raised.
Parallelization Information
FuzzyEntropy is reentrant and automatically parallelized (on tuple level).
See also
FuzzyPerimeter
References
M.K. Kundu, S.K. Pal: ‘"Automatic selection of object enhancement operator with quantitative justification based
on fuzzy set theoretic measures”; Pattern Recognition Letters 11; 1990; pp. 811-829.
Module
Foundation
    p(X) = Σ_{m=1}^{M} Σ_{n=1}^{N−1} |µX(x_{m,n}) − µX(x_{m,n+1})| + Σ_{m=1}^{M−1} Σ_{n=1}^{N} |µX(x_{m,n}) − µX(x_{m+1,n})|
where M × N is the size of the image, and u(x(m, n)) is the fuzzy membership function (i.e., the input image).
This implementation uses Zadeh’s Standard-S function, which is defined as follows:
    µX(x) = 0                              for x ≤ a
    µX(x) = 2 · ((x − a) / (c − a))²       for a < x ≤ b
    µX(x) = 1 − 2 · ((x − c) / (c − a))²   for b < x ≤ c
    µX(x) = 1                              for c ≤ x
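Both the membership function and the perimeter sum are straightforward to sketch in Python (illustrative only; the crossover point b = (a + c)/2 is the usual convention for the Standard-S function and is an assumption here):

```python
def s_function(x, a, c):
    """Zadeh's Standard-S membership function with crossover b = (a + c)/2."""
    b = (a + c) / 2
    if x <= a:
        return 0.0
    if x <= b:
        return 2 * ((x - a) / (c - a)) ** 2
    if x <= c:
        return 1 - 2 * ((x - c) / (c - a)) ** 2
    return 1.0

def fuzzy_perimeter(mu):
    """p(X): summed membership differences between horizontal and vertical
    neighbors of the membership image mu."""
    M, N = len(mu), len(mu[0])
    p = sum(abs(mu[m][n] - mu[m][n + 1]) for m in range(M) for n in range(N - 1))
    p += sum(abs(mu[m][n] - mu[m + 1][n]) for m in range(M - 1) for n in range(N))
    return p

# A vertical 0/1 edge of height 2 contributes twice to the perimeter.
p = fuzzy_perimeter([[0.0, 1.0], [0.0, 1.0]])
```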
Parameter
Result
The operator FuzzyPerimeter returns the value 2 (H_MSG_TRUE) if the parameters are correct. Otherwise
an exception is raised.
Parallelization Information
FuzzyPerimeter is reentrant and automatically parallelized (on tuple level).
See also
FuzzyEntropy
References
M.K. Kundu, S.K. Pal: ‘"Automatic selection of object enhancement operator with quantitative justification based
on fuzzy set theoretic measures”; Pattern Recognition Letters 11; 1990; pp. 811-829.
Module
Foundation
[The printed example (a small gray value image and the co-occurrence matrices computed from it) could not be recovered from this extraction.]
Parameter
. regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Region to be checked.
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Image providing the gray values.
. matrix (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Co-occurrence matrix (matrices).
. ldGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Number of gray values to be distinguished (2^ldGray).
Default Value : 6
List of values : LdGray ∈ {1, 2, 3, 4, 5, 6, 7, 8}
Typical range of values : 1 ≤ LdGray ≤ 256 (lin)
Minimum Increment : 1
Recommended Increment : 1
. direction (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Direction of neighbor relation.
Default Value : 0
List of values : Direction ∈ {0, 45, 90, 135}
Result
The operator GenCoocMatrix returns the value 2 (H_MSG_TRUE) if an image with defined gray values is
entered and the parameters are correct. The behavior in case of empty input (no input images available) is set via
the operator SetSystem(’no_object_result’,<Result>), the behavior in case of empty region is set
via SetSystem(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
GenCoocMatrix is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
DrawRegion, GenCircle, GenEllipse, GenRectangle1, GenRectangle2, Threshold,
ErosionCircle, BinomialFilter, GaussImage, SmoothImage, SubImage
Alternatives
CoocFeatureImage
See also
CoocFeatureMatrix
Module
Foundation
    horProjection(r) = (1 / n(r + r0)) · Σ_{(r+r0, c+c0) ∈ region} image(r + r0, c + c0)

    vertProjection(c) = (1 / n(c + c0)) · Σ_{(r+r0, c+c0) ∈ region} image(r + r0, c + c0)
Here, (r0 , c0 ) denotes the upper left corner of the smallest enclosing axis-parallel rectangle of the input region (see
SmallestRectangle1), and n(x) denotes the number of region points in the corresponding row r + r0 or
column c + c0 . Hence, the horizontal projection returns a one-dimensional function that reflects the vertical gray
value changes. Likewise, the vertical projection returns a function that reflects the horizontal gray value changes.
If mode = ’rectangle’ is selected, the projection is performed in the direction of the major axes of the smallest
enclosing rectangle of arbitrary orientation of the input region (see SmallestRectangle2). Here, the horizontal
projection direction corresponds to the larger axis, while the vertical direction corresponds to the smaller axis. In
this mode, all gray values within the smallest enclosing rectangle of arbitrary orientation of the input region are
used to compute the projections.
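For mode ’simple’ the projections reduce to row and column means of the gray values inside the region, relative to its axis-parallel bounding box. A plain Python sketch (with an invented 2×2 image, not the HALCON operator):

```python
def gray_projections(region, img):
    """'simple' mode: mean gray value per row and per column of `region`,
    indexed relative to the bounding box corner (r0, c0)."""
    r0 = min(r for r, _ in region)
    c0 = min(c for _, c in region)
    rows, cols = {}, {}
    for r, c in region:
        rows.setdefault(r, []).append(img[r][c])
        cols.setdefault(c, []).append(img[r][c])
    h = max(rows) - r0 + 1
    w = max(cols) - c0 + 1
    hor = [sum(rows[r0 + i]) / len(rows[r0 + i]) if r0 + i in rows else 0.0
           for i in range(h)]
    vert = [sum(cols[c0 + j]) / len(cols[c0 + j]) if c0 + j in cols else 0.0
            for j in range(w)]
    return hor, vert

img = [[1, 2], [3, 4]]
region = [(0, 0), (0, 1), (1, 0), (1, 1)]
hor, vert = gray_projections(region, img)
```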
Parameter
. region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; HRegion
Region to be processed.
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Grayvalues for projections.
. mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Method to compute the projections.
Default Value : "simple"
List of values : Mode ∈ {"simple", "rectangle"}
. horProjection (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Horizontal projection.
. vertProjection (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real-array ; HTuple (double)
Vertical projection.
Parallelization Information
GrayProjections is reentrant and processed without parallelization.
Module
1D Metrology
The operator Histo2dim calculates the two-dimensional histogram of two images within regions. The gray
values of channel 1 (imageCol) are interpreted as row index, those of channel 2 (imageRow) as column index.
The gray value at a point (g1, g2) of the output image histo2Dim indicates the frequency of the gray value
combination (g1, g2), with g1 indicating the row index and g2 the column index.
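The indexing described above can be sketched in plain Python (illustrative only; the two 2×2 byte images are invented):

```python
def histo_2dim(region, img_col, img_row):
    """256x256 histogram: entry [g1][g2] counts pixels of `region` where
    img_col has gray value g1 (row index) and img_row has g2 (column index)."""
    hist = [[0] * 256 for _ in range(256)]
    for r, c in region:
        hist[img_col[r][c]][img_row[r][c]] += 1
    return hist

a = [[0, 1], [1, 1]]                       # channel 1 -> row index
b = [[2, 2], [3, 2]]                       # channel 2 -> column index
h = histo_2dim([(0, 0), (0, 1), (1, 0), (1, 1)], a, b)
```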
Parameter
Example (Syntax: HDevelop)
read_image(Image,’affe’)
texture_laws(Image,Texture,’el’,1,5)
draw_region(Region,WindowHandle)
histo_2dim(Region,Texture,Image,Histo2Dim)
disp_image(Histo2Dim,WindowHandle)
Complexity
If F is the area of the region, the runtime complexity is O(F + 256²).
Result
The operator Histo2dim returns the value 2 (H_MSG_TRUE) if both images have defined gray val-
ues. The behavior in case of empty input (no input images available) is set via the operator SetSystem
(’no_object_result’,<Result>), the behavior in case of empty region is set via SetSystem
(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
Histo2dim is reentrant and processed without parallelization.
Possible Predecessors
Decompose3, Decompose2, DrawRegion
Possible Successors
Threshold, Class2dimSup, Pouring, LocalMax, GraySkeleton
Alternatives
GrayHisto, GrayHistoAbs
See also
GetGrayval
Module
Foundation
The operator Intensity calculates the mean and the deviation of the gray values in the input image within
regions. If R is a region, p a pixel from R with the gray value g(p), and F the area of the region (F = |R|), the
features are defined by:
    mean = ( Σ_{p ∈ R} g(p) ) / F

    deviation = √( ( Σ_{p ∈ R} (g(p) − mean)² ) / F )
Attention
The calculation of deviation does not follow the usual definition if the region of the image contains only one
pixel. In this case 0.0 is returned.
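The two features can be reproduced in a few lines of Python (illustrative only, not the HALCON operator; note the division by F, i.e., the population deviation, and the one-pixel special case from the Attention note):

```python
def intensity(region, img):
    """Mean and deviation of the gray values inside `region`.
    For a single-pixel region the deviation is defined as 0.0."""
    vals = [img[r][c] for r, c in region]
    f = len(vals)
    mean = sum(vals) / f
    if f == 1:
        return mean, 0.0
    dev = (sum((v - mean) ** 2 for v in vals) / f) ** 0.5
    return mean, dev

mean_v, dev_v = intensity([(0, 0), (0, 1)], [[2, 4]])
single_mean, single_dev = intensity([(0, 0)], [[7]])
```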
Parameter
. regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions the features of which are to be calculated.
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Gray value image.
. mean (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Mean gray value of a region.
. deviation (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Deviation of gray values within a region.
Complexity
If F is the area of the region, the runtime complexity is O(F ).
Result
The operator Intensity returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no input
images available) is set via the operator SetSystem(’no_object_result’,<Result>), the behavior
in case of empty region is set via SetSystem(’empty_region_result’,<Result>). If necessary an
exception handling is raised.
Parallelization Information
Intensity is reentrant and automatically parallelized (on tuple level).
Possible Successors
Threshold
Alternatives
SelectGray, MinMaxGray
See also
MeanImage, GrayHisto, GrayHistoAbs
Module
Foundation
The operator MinMaxGray creates the histogram of the absolute frequencies of the gray values within regions
of the input image image (see GrayHisto) and calculates the number of pixels that corresponds to percent
percent of the area of the input image. Then it goes inwards on both sides of the histogram by this number of
pixels and determines the smallest and the largest gray value:
e.g.:
Area = 60, percent = 5, i.e. 3 pixels
histogram = [2,8,0,7,13,0,0,. . . ,0,10,10,5,3,1,1]
⇒ Maximum = 255, Minimum = 0, Range = 255
MinMaxGray returns: Maximum = 253, Minimum = 1, Range = 252
For images of type int4 and real, the above calculation is not performed via histograms, but using a rank selection
algorithm. If percent is set to 50, min = max = median. If percent is 0, no histogram is calculated in order
to reduce the runtime.
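The clipping logic of the worked example above can be reproduced in plain Python (illustrative only; the rounding of percent · area / 100 to a pixel count, and the fall-back to 1 pixel for percent = 0, are assumptions of this sketch):

```python
def min_max_gray(hist, area, percent):
    """Clipped min/max: walk n = area*percent/100 pixels inwards from both
    ends of the histogram `hist` of absolute gray value frequencies."""
    n = max(1, round(area * percent / 100.0))   # percent 0 -> true min/max
    cum = 0
    for lo, freq in enumerate(hist):            # from the dark end
        cum += freq
        if cum >= n:
            break
    cum = 0
    for hi in range(len(hist) - 1, -1, -1):     # from the bright end
        cum += hist[hi]
        if cum >= n:
            break
    return lo, hi, hi - lo

# The example from the text: area 60, 5 percent -> 3 pixels.
hist = [2, 8, 0, 7, 13] + [0] * 245 + [10, 10, 5, 3, 1, 1]
lo, hi, rng = min_max_gray(hist, 60, 5)
```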
Parameter
. regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions, the features of which are to be calculated.
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Gray value image.
. percent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double / int / long)
Percentage below (above) the absolute maximum (minimum).
Default Value : 0
Suggested values : Percent ∈ {0, 1, 2, 5, 7, 10, 15, 20, 30, 40, 50}
Restriction : (0 ≤ Percent) ∧ (Percent ≤ 50)
. min (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
“Minimum” gray value.
. max (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
“Maximum” gray value.
Assertion : Max ≥ Min
. range (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Difference between Max and Min.
Assertion : Range ≥ 0
Example (Syntax: HDevelop)
Result
The operator MinMaxGray returns the value 2 (H_MSG_TRUE) if the input image has the defined gray values
and the parameters are correct. The behavior in case of empty input (no input images available) is set via the
operator SetSystem(’no_object_result’,<Result>). The behaviour in case of an empty region
is set via the operator SetSystem(’empty_region_result’,<Result>). If necessary an exception
handling is raised.
Parallelization Information
MinMaxGray is reentrant and processed without parallelization.
Possible Predecessors
DrawRegion, GenCircle, GenEllipse, GenRectangle1, Threshold, Regiongrowing
Possible Successors
Threshold
Alternatives
SelectGray, Intensity
See also
GrayHisto, ScaleImage, ScaleImageMax, LearnNdimNorm
Module
Foundation
    MRow = (1 / F²) · Σ_{(r,c) ∈ regions} (r − r̄) · (image(r, c) − mean)

    MCol = (1 / F²) · Σ_{(r,c) ∈ regions} (c − c̄) · (image(r, c) − mean)
Thus alpha indicates the gradient in the direction of the line axis (“down”), beta the gradient in the direction of
the column axis (to the “right”).
Parameter
. regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions to be checked.
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Corresponding gray values.
. MRow (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Mixed moments along a line.
. MCol (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Mixed moments along a column.
. alpha (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Parameter Alpha of the approximating plane.
. beta (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Parameter Beta of the approximating plane.
. mean (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Mean gray value.
Result
The operator MomentsGrayPlane returns the value 2 (H_MSG_TRUE) if an image with the defined gray
values (byte) is entered and the parameters are correct. The behavior in case of empty input (no input images
available) is set via the operator SetSystem(’no_object_result’,<Result>), the behavior in case of
an empty region via SetSystem(’empty_region_result’,<Result>).
HALCON 8.0.2
512 CHAPTER 5. IMAGE
Attention
It should be noted that the calculation of deviation does not follow the usual definition. It is defined to return
the value 0.0 for an image with only one pixel.
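The deviation from the approximating plane can be sketched in plain Python (a hypothetical illustration, not HALCON code; the exact normalization HALCON uses may differ from this root-mean-square form). The plane is fitted as for MomentsGrayPlane, and the single-pixel case returns 0.0 as noted above:

```python
def plane_deviation(rows, cols, gray):
    # RMS deviation of the gray values from the least-squares plane
    F = len(gray)
    if F == 1:
        return 0.0                     # by definition (see the note above)
    r0, c0 = sum(rows) / F, sum(cols) / F
    mean = sum(gray) / F
    s_rr = sum((r - r0) ** 2 for r in rows)
    s_cc = sum((c - c0) ** 2 for c in cols)
    s_rc = sum((r - r0) * (c - c0) for r, c in zip(rows, cols))
    s_rg = sum((r - r0) * (g - mean) for r, g in zip(rows, gray))
    s_cg = sum((c - c0) * (g - mean) for c, g in zip(cols, gray))
    det = s_rr * s_cc - s_rc ** 2      # assumes a non-degenerate region
    alpha = (s_cc * s_rg - s_rc * s_cg) / det
    beta = (s_rr * s_cg - s_rc * s_rg) / det
    sq = sum((g - (mean + alpha * (r - r0) + beta * (c - c0))) ** 2
             for r, c, g in zip(rows, cols, gray))
    return (sq / F) ** 0.5
```

A region whose gray values lie exactly on a plane yields deviation 0.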
Parameter
. regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions, of which the plane deviation is to be calculated.
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Gray value image.
. deviation (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Deviation of the gray values within a region.
Complexity
If F is the area of the region, the runtime complexity amounts to O(F).
Result
The operator PlaneDeviation returns the value 2 (H_MSG_TRUE) if image is of the type byte.
The behavior in case of empty input (no input images available) is set via the operator SetSystem
(’no_object_result’,<Result>), the behavior in case of an empty region is set via SetSystem
(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
PlaneDeviation is reentrant and automatically parallelized (on tuple level).
Alternatives
Intensity, GenImageGrayRamp, SubImage
See also
MomentsGrayPlane
Module
Foundation
Parameter
Complexity
If F is the area of the input region and N the mean number of connected components, the runtime complexity is
O(255 · (√F + √F · N)).
Result
The operator ShapeHistoAll returns the value 2 (H_MSG_TRUE) if an image with the defined gray values
is entered. The behavior in case of empty input (no input images) is set via the operator SetSystem
(’no_object_result’,<Result>), the behavior in case of an empty region is set via SetSystem
(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
ShapeHistoAll is reentrant and processed without parallelization.
Possible Successors
HistoToThresh, Threshold, GenRegionHisto
Alternatives
ShapeHistoPoint
See also
Connection, Convexity, Compactness, ConnectAndHoles, EntropyGray, GrayHisto,
SetPaint, CountObj
Module
Foundation
5.7 Format
static void HOperatorSet.ChangeFormat ( HObject image,
out HObject imagePart, HTuple width, HTuple height )
HImage HImage.CropDomain ( )
Cut out of defined gray values.
The operator CropDomain cuts a rectangular area from the input images. This rectangle is the smallest
surrounding rectangle of the domain of the input image. The new definition domain includes all pixels of the new
image. The new image matrix has the size of the rectangle.
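The effect can be sketched in plain Python (a hypothetical illustration, not HALCON code): find the bounding box of the domain mask and slice the image matrix to it.

```python
def crop_domain(image, domain):
    # image: 2-D list of gray values; domain: boolean mask of the same size
    rows = [r for r, line in enumerate(domain) if any(line)]
    cols = [c for c in range(len(domain[0])) if any(line[c] for line in domain)]
    r1, r2 = min(rows), max(rows)      # smallest surrounding rectangle
    c1, c2 = min(cols), max(cols)      # of the domain
    return [line[c1:c2 + 1] for line in image[r1:r2 + 1]]
```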
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Input image.
. imagePart (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Image area.
Parallelization Information
CropDomain is reentrant and automatically parallelized (on tuple level).
Possible Successors
DispImage
Alternatives
CropPart, CropRectangle1, ChangeFormat, ReduceDomain
See also
ZoomImageSize, ZoomImageFactor
Module
Foundation
HImage HImage.CropDomainRel ( int top, int left, int bottom, int right
)
Module
Foundation
HImage HImage.CropPart ( int row, int column, int width, int height )
Cut out a rectangular image area.
The operator CropPart cuts a rectangular area from the input images. The area is indicated by a rectangle
(upper left corner and size). The area must lie completely within the image. The definition domain includes all
pixels of the new image. The new image matrix has the size of the rectangle.
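A minimal sketch in plain Python (hypothetical illustration, not HALCON code), including the check that the rectangle lies within the image:

```python
def crop_part(image, row, column, width, height):
    # rectangle given by its upper left corner (row, column) and size
    if (row < 0 or column < 0
            or row + height > len(image) or column + width > len(image[0])):
        raise ValueError("the rectangle must lie within the image")
    return [line[column:column + width] for line in image[row:row + height]]
```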
Parameter
The output image tiledImage contains a single-channel image, where the Num input channels have been tiled into
numColumns columns. In particular, this means that TileChannels cannot tile color images. For this pur-
pose, TileImages can be used. The parameter tileOrder determines the order in which the images are
copied into the output in the cases in which this is not already determined by numColumns (i.e., if numColumns
!= 1 and numColumns != Num). If tileOrder = ’horizontal’ the images are copied in the horizontal direction,
i.e., the second channel of image will be to the right of the first channel. If tileOrder = ’vertical’ the images
are copied in the vertical direction, i.e., the second channel of image will be below the first channel. The domain
of tiledImage is obtained by copying the domain of image to the corresponding locations in the output im-
age. If Num is not a multiple of numColumns the output image will have undefined gray values in the lower right
corner of the image. The output domain will reflect this.
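The tiling order can be sketched in plain Python (a hypothetical illustration, not HALCON code; unused tiles are simply filled with 0 here, whereas HALCON leaves them undefined and excludes them from the output domain):

```python
def tile_channels(channels, num_columns, tile_order="vertical"):
    # channels: list of equally sized 2-D lists (the channels of one image)
    num = len(channels)
    h, w = len(channels[0]), len(channels[0][0])
    num_rows = -(-num // num_columns)            # ceiling division
    tiled = [[0] * (w * num_columns) for _ in range(h * num_rows)]
    for i, chan in enumerate(channels):
        if tile_order == "horizontal":
            tr, tc = divmod(i, num_columns)      # next channel to the right
        else:
            tc, tr = divmod(i, num_rows)         # next channel below
        for r in range(h):
            for c in range(w):
                tiled[tr * h + r][tc * w + c] = chan[r][c]
    return tiled
```

For four 1×1 channels and numColumns = 2, ’horizontal’ fills the first output row before the second, while ’vertical’ fills the first output column before the second.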
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . multichannel-image(-array) ; HImage
Input image.
. tiledImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; HImage
Tiled output image.
. numColumns (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Number of columns to use for the output image.
Default Value : 1
Suggested values : NumColumns ∈ {1, 2, 3, 4, 5, 6, 7}
Restriction : NumColumns ≥ 1
. tileOrder (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Order of the input images in the output image.
Default Value : "vertical"
List of values : TileOrder ∈ {"horizontal", "vertical"}
Example (Syntax: HDevelop)
Result
TileChannels returns 2 (H_MSG_TRUE) if all parameters are correct and no error occurs during execution.
If the input is empty the behavior can be set via SetSystem(’no_object_result’,<Result>). If
necessary, an exception is raised.
Parallelization Information
TileChannels is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
AppendChannel
Alternatives
TileImages, TileImagesOffset
See also
ChangeFormat, CropPart, CropRectangle1
Module
Foundation
TileImages tiles multiple input image objects, which must contain the same number of channels, into a large
image. The input image object images contains Num images, which may be of different size. The output image
tiledImage contains as many channels as the input images. In the output image the Num input images have been
tiled into numColumns columns. Each tile has the same size, which is determined by the maximum width and
height of all input images. If an input image is smaller than the tile size it is copied to the center of the respective
tile. The parameter tileOrder determines the order in which the images are copied into the output in the cases
in which this is not already determined by numColumns (i.e., if numColumns != 1 and numColumns != Num).
If tileOrder = ’horizontal’ the images are copied in the horizontal direction, i.e., the second image of images
will be to the right of the first image. If tileOrder = ’vertical’ the images are copied in the vertical direction,
i.e., the second image of images will be below the first image. The domain of tiledImage is obtained by
copying the domains of images to the corresponding locations in the output image. If Num is not a multiple of
numColumns the output image will have undefined gray values in the lower right corner of the image. The output
domain will reflect this.
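The tile layout with centering can be sketched in plain Python (hypothetical single-channel illustration, not HALCON code; integer division places images whose size difference is odd slightly toward the upper left, and unused pixels are filled with 0 here):

```python
def tile_images(images, num_columns, tile_order="vertical"):
    # images: list of single-channel 2-D lists, possibly of different sizes
    num = len(images)
    th = max(len(img) for img in images)         # tile height
    tw = max(len(img[0]) for img in images)      # tile width
    num_rows = -(-num // num_columns)            # ceiling division
    tiled = [[0] * (tw * num_columns) for _ in range(th * num_rows)]
    for i, img in enumerate(images):
        if tile_order == "horizontal":
            tr, tc = divmod(i, num_columns)      # next image to the right
        else:
            tc, tr = divmod(i, num_rows)         # next image below
        dr = (th - len(img)) // 2                # center smaller images
        dc = (tw - len(img[0])) // 2             # within their tile
        for r, line in enumerate(img):
            for c, v in enumerate(line):
                tiled[tr * th + dr + r][tc * tw + dc + c] = v
    return tiled
```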
Parameter
Result
TileImages returns 2 (H_MSG_TRUE) if all parameters are correct and no error occurs during execution. If the
input is empty the behavior can be set via SetSystem(’no_object_result’,<Result>). If necessary,
an exception is raised.
Parallelization Information
TileImages is reentrant and automatically parallelized (on channel level).
Possible Predecessors
AppendChannel
Alternatives
TileChannels, TileImagesOffset
See also
ChangeFormat, CropPart, CropRectangle1
Module
Foundation
Tile multiple image objects into a large image with explicit positioning information.
TileImagesOffset tiles multiple input image objects, which must contain the same number of channels, into
a large image. The input image object images contains Num images, which may be of different size. The output
image tiledImage contains as many channels as the input images. The size of the output image is determined
by the parameters width and height. The position of the upper left corner of the input images in the output
images is determined by the parameters offsetRow and offsetCol. Both parameters must contain exactly
Num values. Optionally, each input image can be cropped to an arbitrary rectangle that is smaller than the input
image. To do so, the parameters row1, col1, row2, and col2 must be set accordingly. If any of these four
parameters is set to -1, the corresponding input image is not cropped. In any case, all four parameters must contain
Num values. If the input images are cropped the position parameters offsetRow and offsetCol refer to the
upper left corner of the cropped image. If the input images overlap each other in the output image (while taking
into account their respective domains), the image with the higher index in images overwrites the image data of
the image with the lower index. The domain of tiledImage is obtained by copying the domains of images to
the corresponding locations in the output image.
Attention
If the input images all have the same size and tile the output image exactly, the operator TileImages usually
will be slightly faster.
Parameter
/* Example 1 */
/* Grab 2 (multi-channel) NTSC images, crop the bottom 5 lines off */
/* of each image, the right 5 columns off of the first image, and */
/* the left five columns off of the second image, and put the cropped */
/* images side-by-side. */
gen_empty_obj (Images)
for I := 1 to 2 by 1
grab_image_async (ImageGrabbed, FGHandle, -1)
concat_obj (Images, ImageGrabbed, Images)
endfor
tile_images_offset (Images, TiledImage, [0,635], [0,0], [0,0],
[0,5], [474,474], [634,639])
/* Example 2 */
/* Enlarge image by 15 rows and columns on all sides */
EnlargeColsBy := 15
EnlargeRowsBy := 15
get_image_pointer1 (Image, Pointer, Type, WidthImage, HeightImage)
tile_images_offset (Image, EnlargedImage, EnlargeRowsBy, EnlargeColsBy,
-1, -1, -1, -1, WidthImage + EnlargeColsBy*2,
HeightImage + EnlargeRowsBy*2)
Result
TileImagesOffset returns 2 (H_MSG_TRUE) if all parameters are correct and no error occurs during execu-
tion. If the input is empty the behavior can be set via SetSystem(’no_object_result’,<Result>). If
necessary, an exception is raised.
Parallelization Information
TileImagesOffset is reentrant and automatically parallelized (on channel level).
Possible Predecessors
AppendChannel
Alternatives
TileChannels, TileImages
See also
ChangeFormat, CropPart, CropRectangle1
Module
Foundation
5.8 Manipulation
/* Copy a circular part of the image ’monkey’ into a new image (New1): */
read_image(Image,’monkey’)
gen_circle(Circle,200,200,150)
reduce_domain(Image,Circle,Mask)
/* New image with black (0) background */
gen_image_proto(Image,New1,0.0)
/* Copy a part of the image ’monkey’ into New1 */
overpaint_gray(New1,Mask)
Result
OverpaintGray returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception is raised.
Parallelization Information
OverpaintGray is reentrant and processed without parallelization.
Possible Predecessors
ReadImage, GenImageConst, GenImageProto
Alternatives
GetImagePointer1, PaintGray, SetGrayval, CopyImage
See also
PaintRegion, OverpaintRegion
Module
Foundation
The parameter type determines whether the region should be painted filled (’fill’) or whether only its boundary
should be painted (’margin’).
If you do not want to modify image itself, you can use the operator PaintRegion, which returns the result in
a newly created image.
Attention
OverpaintRegion modifies the content of an already existing image (image). Besides, even other image
objects may be affected: For example, if you created image via CopyObj from another image object (or
vice versa), OverpaintRegion will also modify the image matrix of this other image object. Therefore,
OverpaintRegion should only be used to overpaint newly created image objects. Typical operators for this
task are, e.g., GenImageConst (creates a new image with a specified size), GenImageProto (creates an
image with the size of a specified prototype image) or CopyImage (creates an image as the copy of a specified
image).
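The aliasing described above is ordinary shared-buffer reference semantics; a minimal Python analogy (not HALCON code, the helper name is invented):

```python
image = [[0, 0], [0, 0]]
alias = image                          # shares the buffer (like CopyObj)
copy = [line[:] for line in image]     # independent buffer (like CopyImage)

def overpaint(img, pixels, grayval):
    for r, c in pixels:                # modifies the matrix in place
        img[r][c] = grayval

overpaint(image, [(0, 0), (0, 1)], 255)
# the alias sees the change, the copy does not
```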
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Image in which the regions are to be painted.
. region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions to be painted into the input image.
. grayval (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; HTuple (double / int / long)
Desired gray values of the regions.
Default Value : 255.0
Suggested values : Grayval ∈ {0.0, 1.0, 2.0, 5.0, 10.0, 16.0, 32.0, 64.0, 128.0, 253.0, 254.0, 255.0}
. type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Paint regions filled or as boundaries.
Default Value : "fill"
List of values : Type ∈ {"fill", "margin"}
Example (Syntax: HDevelop)
gen_rectangle1(Rectangle,100.0,100.0,300.0,300.0)
/* generate a black image */
gen_image_const(New1,"byte", 768, 576)
/* paint a white rectangle */
overpaint_region(New1,Rectangle,255.0,’fill’)
Result
OverpaintRegion returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior
can be set via SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
OverpaintRegion is reentrant and processed without parallelization.
Possible Predecessors
ReadImage, GenImageConst, GenImageProto, ReduceDomain
Alternatives
SetGrayval, PaintRegion, PaintXld
See also
ReduceDomain, SetDraw, PaintGray, OverpaintGray, GenImageConst
Module
Foundation
/* Copy a circular part of the image ’monkey’ into the image ’fabrik’: */
read_image(Image,’monkey’)
gen_circle(Circle,200,200,150)
reduce_domain(Image,Circle,Mask)
read_image(Image2,’fabrik’)
/* Copy a part of the image ’monkey’ into ’fabrik’ */
paint_gray(Mask,Image2,MixedImage)
Result
PaintGray returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception is raised.
Parallelization Information
PaintGray is reentrant and processed without parallelization.
Possible Predecessors
ReadImage, GenImageConst, GenImageProto
Alternatives
GetImagePointer1, SetGrayval, CopyImage, OverpaintGray
See also
PaintRegion, OverpaintRegion
Module
Foundation
The parameter type determines whether the region should be painted filled (’fill’) or whether only its boundary
should be painted (’margin’).
As an alternative to PaintRegion, you can use the operator OverpaintRegion, which directly paints the
regions into image.
Parameter
. region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions to be painted into the input image.
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Image in which the regions are to be painted.
. imageResult (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Image containing the result.
. grayval (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; HTuple (double / int / long)
Desired gray values of the regions.
Default Value : 255.0
Suggested values : Grayval ∈ {0.0, 1.0, 2.0, 5.0, 10.0, 16.0, 32.0, 64.0, 128.0, 253.0, 254.0, 255.0}
. type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Paint regions filled or as boundaries.
Default Value : "fill"
List of values : Type ∈ {"fill", "margin"}
Example (Syntax: HDevelop)
read_image(Image,’monkey’)
gen_rectangle1(Rectangle,100.0,100.0,300.0,300.0)
/* paint a white rectangle */
paint_region(Rectangle,Image,ImageResult,255.0,’fill’)
Result
PaintRegion returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior can be
set via SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
PaintRegion is reentrant and processed without parallelization.
Possible Predecessors
ReadImage, GenImageConst, GenImageProto, ReduceDomain
Alternatives
SetGrayval, OverpaintRegion, PaintXld
See also
ReduceDomain, PaintGray, OverpaintGray, SetDraw, GenImageConst
Module
Foundation
Parameter
. XLD (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xld(-array) ; HXLD
XLD objects to be painted into the input image.
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Image in which the xld objects are to be painted.
. imageResult (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Image containing the result.
. grayval (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; HTuple (double / int / long)
Desired gray value of the xld object.
Default Value : 255.0
Suggested values : Grayval ∈ {0.0, 1.0, 2.0, 5.0, 10.0, 16.0, 32.0, 64.0, 128.0, 253.0, 254.0, 255.0}
Example (Syntax: HDevelop)
Result
PaintXld returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior can be set
via SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
PaintXld is reentrant and processed without parallelization.
Possible Predecessors
ReadImage, GenImageConst, GenImageProto, GenContourPolygonXld, ThresholdSubPix
Alternatives
SetGrayval, PaintGray, PaintRegion
See also
GenImageConst
Module
Foundation
Parallelization Information
SetGrayval is reentrant and processed without parallelization.
Possible Predecessors
ReadImage, GetImagePointer1, GenImageProto, GenImage1
Alternatives
GetImagePointer1, PaintGray, PaintRegion
See also
GetGrayval, GenImageConst, GenImage1, GenImageProto
Module
Foundation
5.9 Type-Conversion
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Image whose image type is to be changed.
. imageConverted (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Converted image.
. newType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Desired image type (i.e., type of the gray values).
Default Value : "byte"
List of values : NewType ∈ {"int1", "int2", "uint2", "int4", "byte", "real", "direction", "cyclic", "complex"}
Result
ConvertImageType returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior
can be set via SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
ConvertImageType is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
ScaleImage
See also
ScaleImage, AbsImage
Module
Foundation
Parameter
Lines
6.1 Access
static void HOperatorSet.ApproxChain ( HTuple row, HTuple column,
HTuple minWidthCoord, HTuple maxWidthCoord, HTuple threshStart,
HTuple threshEnd, HTuple threshStep, HTuple minWidthSmooth,
HTuple maxWidthSmooth, HTuple minWidthCurve, HTuple maxWidthCurve,
HTuple weight1, HTuple weight2, HTuple weight3,
out HTuple arcCenterRow, out HTuple arcCenterCol, out HTuple arcAngle,
out HTuple arcBeginRow, out HTuple arcBeginCol,
out HTuple lineBeginRow, out HTuple lineBeginCol,
out HTuple lineEndRow, out HTuple lineEndCol, out HTuple order )
Parameter
set_d(t3,0.3,0);
set_d(t4,0.9,0);
set_d(t5,0.2,0);
set_d(t6,0.4,0);
set_d(t7,2.4,0);
set_i(t8,2,0);
set_i(t9,12,0);
set_d(t10,1.0,0);
set_d(t11,1.0,0);
set_d(t12,1.0,0);
T_approx_chain(Rows,Columns,t1,t2,t3,t4,t5,t6,t7,t8,t9,t10,t11,t12,
&Bzl,&Bzc,&Br,&Bwl,&Bwc,&Ll0,&Lc0,&Ll1,&Lc1,&order);
nob = length_tuple(Bzl);
nol = length_tuple(Ll0);
/* draw lines and arcs */
set_i(WindowHandleTuple,WindowHandle,0);
set_line_width(WindowHandle,4);
if (nob>0) T_disp_arc(Bzl,Bzc,Br,Bwl,Bwc);
set_line_width(WindowHandle,1);
if (nol>0) T_disp_line(WindowHandleTuple,Ll0,Lc0,Ll1,Lc1);
Result
The operator ApproxChain returns the value 2 (H_MSG_TRUE) if the parameters are correct. Otherwise an
exception is raised.
Parallelization Information
ApproxChain is reentrant and processed without parallelization.
Possible Predecessors
SobelAmp, EdgesImage, GetRegionContour, Threshold, HysteresisThreshold
Possible Successors
SetLineWidth, DispArc, DispLine
Alternatives
GetRegionPolygon, ApproxChainSimple
See also
GetRegionChain, SmallestCircle, DispCircle, DispLine
Module
Foundation
connection(RK1,&Rand);
/* fetch chain code */
T_get_region_contour(Rand,&Rows,&Columns);
firstline = get_i(Tline,0);
firstcol = get_i(Tcol,0);
/* approximation with lines and circular arcs */
T_approx_chain_simple(Rows,Columns,
&Bzl,&Bzc,&Br,&Bwl,&Bwc,&Ll0,&Lc0,&Ll1,&Lc1,&order);
nob = length_tuple(Bzl);
nol = length_tuple(Ll0);
/* draw lines and arcs */
set_i(WindowHandleTuple,WindowHandle,0);
set_line_width(WindowHandle,4);
if (nob>0) T_disp_arc(Bzl,Bzc,Br,Bwl,Bwc);
set_line_width(WindowHandle,1);
if (nol>0) T_disp_line(WindowHandleTuple,Ll0,Lc0,Ll1,Lc1);
Result
The operator ApproxChainSimple returns the value 2 (H_MSG_TRUE) if the parameters are correct. Other-
wise an exception is raised.
Parallelization Information
ApproxChainSimple is reentrant and processed without parallelization.
Possible Predecessors
SobelAmp, EdgesImage, GetRegionContour, Threshold, HysteresisThreshold
Possible Successors
SetLineWidth, DispArc, DispLine
Alternatives
GetRegionPolygon, ApproxChain
See also
GetRegionChain, SmallestCircle, DispCircle, DispLine
Module
Foundation
6.2 Features
static void HOperatorSet.LineOrientation ( HTuple rowBegin,
HTuple colBegin, HTuple rowEnd, HTuple colEnd, out HTuple phi )
Attention
If only one feature is used the value of operation is meaningless. Several features are processed according to
the sequence in which they are passed.
Parameter
. rowBeginIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y-array ; HTuple (int / long)
Row coordinates of the starting points of the input lines.
. colBeginIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x-array ; HTuple (int / long)
Column coordinates of the starting points of the input lines.
. rowEndIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y-array ; HTuple (int / long)
Row coordinates of the ending points of the input lines.
. colEndIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x-array ; HTuple (int / long)
Column coordinates of the ending points of the input lines.
. feature (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; HTuple (string)
Features to be used for selection.
List of values : Feature ∈ {"length", "row", "column", "phi"}
. operation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Desired combination of the features.
List of values : Operation ∈ {"and", "or"}
. min (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; HTuple (string / int / long / double)
Lower limits of the features or ’min’.
Default Value : "min"
. max (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; HTuple (string / int / long / double)
Upper limits of the features or ’max’.
Default Value : "max"
. rowBeginOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y-array ; HTuple (int / long)
Row coordinates of the starting points of the lines fulfilling the conditions.
. colBeginOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x-array ; HTuple (int / long)
Column coordinates of the starting points of the lines fulfilling the conditions.
. rowEndOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y-array ; HTuple (int / long)
Row coordinates of the ending points of the lines fulfilling the conditions.
. colEndOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x-array ; HTuple (int / long)
Column coordinates of the ending points of the lines fulfilling the conditions.
. failRowBOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y-array ; HTuple (int / long)
Row coordinates of the starting points of the lines not fulfilling the conditions.
. failColBOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x-array ; HTuple (int / long)
Column coordinates of the starting points of the lines not fulfilling the conditions.
. failRowEOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y-array ; HTuple (int / long)
Row coordinates of the ending points of the lines not fulfilling the conditions.
. failColEOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x-array ; HTuple (int / long)
Column coordinates of the ending points of the lines not fulfilling the conditions.
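The partitioning logic can be sketched in plain Python (a hypothetical illustration, not HALCON code; the ’min’/’max’ default handling is omitted, and the orientation convention assumed for ’phi’ is an assumption):

```python
import math

def partition_lines(lines, features, operation, mins, maxs):
    # lines: list of (row_begin, col_begin, row_end, col_end) tuples
    def value(line, feature):
        r1, c1, r2, c2 = line
        if feature == "length":
            return math.hypot(r2 - r1, c2 - c1)
        if feature == "row":
            return (r1 + r2) / 2.0               # center row
        if feature == "column":
            return (c1 + c2) / 2.0               # center column
        if feature == "phi":
            return math.atan2(r1 - r2, c2 - c1)  # assumed convention
        raise ValueError(feature)

    def fulfills(line):
        tests = [lo <= value(line, f) <= hi
                 for f, lo, hi in zip(features, mins, maxs)]
        return all(tests) if operation == "and" else any(tests)

    passed = [ln for ln in lines if fulfills(ln)]
    failed = [ln for ln in lines if not fulfills(ln)]
    return passed, failed
```

With a single feature the value of operation is irrelevant, as noted above.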
Result
The operator PartitionLines returns the value 2 (H_MSG_TRUE) if the parameter values are correct. Oth-
erwise an exception is raised.
Parallelization Information
PartitionLines is reentrant and processed without parallelization.
Possible Predecessors
SobelAmp, EdgesImage, Threshold, HysteresisThreshold, SplitSkeletonRegion,
SplitSkeletonLines
Possible Successors
SetLineWidth, DispLine
Alternatives
LineOrientation, LinePosition, SelectLines, SelectLinesLongest
See also
SelectLines, SelectLinesLongest, DetectEdgeSegments, SelectShape
Module
Foundation
Attention
If only one feature is used the value of operation is meaningless. Several features are processed according to
the sequence in which they are passed.
Parameter
Matching
7.1 Component-Based
Parallelization Information
ClearAllTrainingComponents is processed completely exclusively without parallelization.
Possible Predecessors
TrainModelComponents, WriteTrainingComponents
See also
ClearTrainingComponents
Module
Matching
Possible Predecessors
TrainModelComponents, WriteTrainingComponents
See also
ClearAllTrainingComponents
Module
Matching
HRegion HComponentTraining.ClusterModelComponents (
HImage trainingImages, string ambiguityCriterion,
double maxContourOverlap, double clusterThreshold )
HRegion HImage.ClusterModelComponents (
HComponentTraining componentTrainingID, string ambiguityCriterion,
double maxContourOverlap, double clusterThreshold )
Adopt new parameters that are used to create the model components into the training result.
With ClusterModelComponents you can modify parameters after a first training has been performed using
TrainModelComponents. ClusterModelComponents sets the criterion ambiguityCriterion that
is used to solve the ambiguities, the maximum contour overlap maxContourOverlap, and the cluster threshold
of the training result componentTrainingID to the specified values. A detailed description of these parameters
can be found in the documentation of TrainModelComponents. By modifying these parameters, the way in
which the initial components are merged into rigid model components changes. For example, the greater the cluster
threshold is chosen, the fewer initial components are merged.
The rigid model components are returned in modelComponents. In order to receive reasonable results, it is
essential that the same training images that were used to perform the training with TrainModelComponents
are passed in trainingImages. The pose of the newly clustered components within the training images is
determined using shape-based matching. As in TrainModelComponents, one can decide whether the shape
models should be pregenerated by using SetSystem(’pregenerate_shape_models’,...). Further-
more, SetSystem(’border_shape_models’,...) can be used to determine whether the models must
lie completely within the training images or whether they can extend partially beyond the image border.
Thus, you can select suitable parameter values interactively by repeatedly calling
InspectClusteredComponents with different parameter values and then setting the chosen values
by using ClusterModelComponents.
Parameter
Result
If the parameter values are correct, the operator ClusterModelComponents returns the value 2
(H_MSG_TRUE). If the input is empty (no input images are available) the behavior can be set via SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
ClusterModelComponents is processed completely exclusively without parallelization.
Possible Predecessors
TrainModelComponents, InspectClusteredComponents
Possible Successors
GetTrainingComponents, CreateTrainedComponentModel, ModifyComponentRelations,
WriteTrainingComponents, GetComponentRelations, ClearTrainingComponents,
ClearAllTrainingComponents
Module
Matching
HComponentModel HImage.CreateComponentModel (
HRegion componentRegions, HTuple variationRow, HTuple variationColumn,
HTuple variationAngle, double angleStart, double angleExtent,
HTuple contrastLowComp, HTuple contrastHighComp, HTuple minSizeComp,
HTuple minContrastComp, HTuple minScoreComp, HTuple numLevelsComp,
HTuple angleStepComp, string optimizationComp, HTuple metricComp,
HTuple pregenerationComp, out HTuple rootRanking )
HComponentModel HImage.CreateComponentModel (
HRegion componentRegions, int variationRow, int variationColumn,
double variationAngle, double angleStart, double angleExtent,
int contrastLowComp, int contrastHighComp, int minSizeComp,
int minContrastComp, double minScoreComp, int numLevelsComp,
double angleStepComp, string optimizationComp, string metricComp,
string pregenerationComp, out int rootRanking )
Prepare a component model for matching based on explicitly specified components and relations.
CreateComponentModel prepares patterns, which are passed in the form of a model image modelImage
and several regions componentRegions, as a component model for matching. The output parameter
componentModelID is a handle for this model, which is used in subsequent calls to FindComponentModel.
In contrast to CreateTrainedComponentModel, no preceding training with TrainModelComponents
needs to be performed before calling CreateComponentModel.
Each of the regions passed in componentRegions describes a separate model component. Later, the index of
a component region in componentRegions is used to denote the model component. The reference point of a
component is the center of gravity of its associated region, which is passed in componentRegions. It can be
calculated by calling AreaCenter.
The relative movements (relations) of the model components can be set by passing variationRow,
variationColumn, and variationAngle. Because directly passing the relations is complicated, the variations of the model components are passed instead of the relations themselves. The variations describe the movements of the components independently of each other relative to their poses in the model image modelImage. The parameters
variationRow and variationColumn describe the movement of the components in row and column direction by ± 1/2 variationRow and ± 1/2 variationColumn, respectively. The parameter variationAngle describes the angle variation of the component by ± 1/2 variationAngle. Based on these values, the relations
are automatically computed. The three parameters must either contain one element, in which case the parameter is
used for all model components, or must contain the same number of elements as componentRegions, in which
case each parameter element refers to the corresponding model component in componentRegions.
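The broadcast rule and the ± 1/2 variation intervals described above can be mirrored in a short sketch (illustrative Python, not the HALCON API; the function names are hypothetical):

```python
def expand(param, n):
    """Broadcast a one-element parameter tuple to n components,
    following the rule described for variationRow etc."""
    if len(param) == 1:
        return list(param) * n
    if len(param) != n:
        raise ValueError("parameter must contain 1 or n elements")
    return list(param)

def variation_interval(reference, variation):
    """A component at coordinate `reference` in the model image may
    move within reference +/- variation/2."""
    return (reference - variation / 2.0, reference + variation / 2.0)

# Two components sharing a row variation of 20 pixels:
row_variation = expand([20], 2)                       # [20, 20]
interval = variation_interval(100.0, row_variation[0])  # (90.0, 110.0)
```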
The parameters angleStart and angleExtent determine the range of possible rotations of the component
model in an image.
Internally, a separate shape model is built for each model component (see CreateShapeModel). There-
fore, the parameters contrastLowComp, contrastHighComp, minSizeComp, minContrastComp,
minScoreComp, numLevelsComp, angleStepComp, optimizationComp, metricComp, and
pregenerationComp correspond to the parameters of CreateShapeModel, with the following differences: First, in the parameter contrast of CreateShapeModel the upper as well as the lower threshold for the hysteresis threshold method can be passed. Additionally, a third value, which suppresses small connected contour regions, can be passed. In contrast, the operator CreateComponentModel offers three separate parameters contrastHighComp, contrastLowComp, and minSizeComp in order to set these three values. Consequently, the automatic computation of the contrast threshold(s) also differs. If both hysteresis thresholds should be determined automatically, both contrastLowComp and contrastHighComp must be set to ’auto’. In contrast, if only one threshold value should be determined, contrastLowComp must be set to ’auto’ while contrastHighComp must be set to an arbitrary value different from ’auto’. Secondly, the parameter optimization of CreateShapeModel provides the possibility to reduce the number of model points as well as the possibility to completely pregenerate the shape model. In contrast, the operator CreateComponentModel uses the separate parameter pregenerationComp to decide whether the shape models should be completely pregenerated or not. A third difference concerns the parameter minScoreComp: when using shape-based matching, this parameter need not be passed when preparing the shape model with CreateShapeModel, but only during the search with FindShapeModel. In contrast, when preparing the component model it is favorable to analyze rotational symmetries of the model components and similarities between the model components. However, this analysis only leads to meaningful results if the value for minScoreComp that is used during the search (see FindComponentModel) is already approximately known.
In addition to the parameters contrastLowComp, contrastHighComp, and minSizeComp, the parameters minContrastComp, numLevelsComp, angleStepComp, and optimizationComp can also be determined automatically by passing ’auto’ for the respective parameters.
All component-specific input parameters (parameter names terminate with the suffix Comp) must either contain
one element, in which case the parameter is used for all model components, or must contain the same number of
elements as the number of regions in componentRegions, in which case each parameter element refers to the
corresponding element in componentRegions.
In addition to the individual shape models, the component model also contains information about the way the
single model components must be searched relative to each other using FindComponentModel in order to
minimize the computation time of the search. For this, the components are represented in a tree structure. First, the
component that stands at the root of this search tree (root component) is searched. Then, the remaining components
are searched relative to the pose of their predecessor in the search tree.
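The search strategy above (root first, then each component relative to its predecessor) can be sketched as a simple tree traversal (illustrative Python; the tree representation is hypothetical, not HALCON's internal structure):

```python
from collections import deque

def search_order(tree, root):
    """Order in which components would be searched: the root first, then
    each remaining component relative to its predecessor in the tree."""
    order, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        order.append(node)
        queue.extend(tree.get(node, ()))
    return order

# A three-component model: 0 is the root; 1 and 2 are searched relative to 0.
print(search_order({0: [1, 2]}, 0))  # [0, 1, 2]
```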
The root component can be passed as an input parameter of FindComponentModel during the search. To what
extent a model component is suited to act as the root component depends on several factors. In principle, a model
component that can be found in the image with a high probability should be chosen. Therefore, a component
that is sometimes occluded to a high degree or that is missing in some cases is not well suited to act as the root
component. Additionally, the computation time that is associated with the root component during the search
can serve as a criterion. A ranking of the model components that is based on the latter criterion is returned in
rootRanking. In this parameter the indices of the model components are sorted in ascending order according
to their associated search time, i.e., rootRanking[0] contains the index of the model component that, chosen
as root component, allows the fastest search. Note that the ranking returned in rootRanking represents only
a coarse estimation. Furthermore, the calculation of the root ranking assumes that the image size as well as the
value of the system parameter ’border_shape_models’ are identical when calling CreateComponentModel
and FindComponentModel.
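One plausible way to consume rootRanking is to pick the best-ranked component that is not known to be frequently occluded or missing (illustrative sketch; the helper function is hypothetical, not part of the HALCON API):

```python
def choose_root(root_ranking, excluded=()):
    """Return the first entry of the ranking that is not excluded, i.e. the
    admissible root component with the fastest estimated search."""
    for idx in root_ranking:
        if idx not in excluded:
            return idx
    raise ValueError("no admissible root component")

# ranking as returned in rootRanking (hypothetical values):
root = choose_root([2, 0, 1], excluded={2})  # -> 0
```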
Parameter
Result
If the parameters are valid, the operator CreateComponentModel returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Parallelization Information
CreateComponentModel is processed completely exclusively without parallelization.
Possible Predecessors
DrawRegion, ConcatObj
Possible Successors
FindComponentModel
Alternatives
CreateTrainedComponentModel
See also
CreateShapeModel, FindShapeModel
Module
Matching
HComponentModel HComponentTraining.CreateTrainedComponentModel (
double angleStart, double angleExtent, HTuple minContrastComp,
HTuple minScoreComp, HTuple numLevelsComp, HTuple angleStepComp,
string optimizationComp, HTuple metricComp, HTuple pregenerationComp,
out HTuple rootRanking )
HComponentModel HComponentTraining.CreateTrainedComponentModel (
double angleStart, double angleExtent, int minContrastComp,
double minScoreComp, int numLevelsComp, double angleStepComp,
string optimizationComp, string metricComp, string pregenerationComp,
out int rootRanking )
HTuple HComponentModel.CreateTrainedComponentModel (
HComponentTraining componentTrainingID, double angleStart,
double angleExtent, HTuple minContrastComp, HTuple minScoreComp,
HTuple numLevelsComp, HTuple angleStepComp, string optimizationComp,
HTuple metricComp, HTuple pregenerationComp )
int HComponentModel.CreateTrainedComponentModel (
HComponentTraining componentTrainingID, double angleStart,
double angleExtent, int minContrastComp, double minScoreComp,
int numLevelsComp, double angleStepComp, string optimizationComp,
string metricComp, string pregenerationComp )
using CreateShapeModel but only during the search using FindShapeModel. In contrast, when preparing the component model it is favorable to analyze rotational symmetries of the model components and similarities between the model components. However, this analysis only leads to meaningful results if the value for minScoreComp that is used during the search (see FindComponentModel) is already approximately known. After the search with FindComponentModel the pose parameters of the components in a search image are returned. Note that the pose parameters refer to the reference points of the components. The reference point of a component is the center of gravity of its associated region that is returned in modelComponents of TrainModelComponents.
The parameters minContrastComp, numLevelsComp, angleStepComp, and optimizationComp can
be automatically determined by passing ’auto’ for the respective parameters.
All component-specific input parameters (parameter names terminate with the suffix Comp) must either contain
one element, in which case the parameter is used for all model components, or must contain the same number
of elements as the number of model components contained in componentTrainingID, in which case each
parameter element refers to the corresponding component in componentTrainingID.
In addition to the individual shape models, the component model also contains information about the way the
single model components must be searched relative to each other using FindComponentModel in order to
minimize the computation time of the search. For this, the components are represented in a tree structure. First, the
component that stands at the root of this search tree (root component) is searched. Then, the remaining components
are searched relative to the pose of their predecessor in the search tree.
The root component can be passed as an input parameter of FindComponentModel during the search. To what
extent a model component is suited to act as root component depends on several factors. In principle, a model
component that can be found in the image with a high probability should be chosen. Therefore, a component that
is sometimes occluded to a high degree or that is missing in some cases is not well suited to act as root component.
Additionally, the computation time that is associated with the root component during the search can serve as a
criterion. A ranking of the model components that is based on the latter criterion is returned in rootRanking.
In this parameter the indices of the model components are sorted in ascending order according to their associated computation time, i.e., rootRanking[0] contains the index of the model component that, chosen as root
component, allows the fastest search. Note that the ranking returned in rootRanking represents only a coarse
estimation. Furthermore, the calculation of the root ranking assumes that the image size as well as the value of the
system parameter ’border_shape_models’ are identical when calling CreateTrainedComponentModel and
FindComponentModel.
Parameter
Result
If the parameters are valid, the operator CreateTrainedComponentModel returns the value 2
(H_MSG_TRUE). If necessary an exception is raised.
Parallelization Information
CreateTrainedComponentModel is processed completely exclusively without parallelization.
Possible Predecessors
TrainModelComponents, ReadTrainingComponents
Possible Successors
FindComponentModel
Alternatives
CreateComponentModel
See also
CreateShapeModel, FindShapeModel
Module
Matching
nent can be set with ifRootNotFound (see below). Also, the computation time that is associated with the
root component during the search can serve as a criterion. A ranking of the model components that is based
on the latter criterion is returned in rootRanking of the operator CreateTrainedComponentModel or
CreateComponentModel, respectively. If the complete ranking is passed in rootComponent, the first value
rootComponent[0] is automatically selected as the root component. The domain of the image image deter-
mines the search space for the reference point, i.e., the allowed positions, of the root component. The parameters
angleStartRoot and angleExtentRoot specify the allowed angle range within which the root component
is searched. If necessary, the range of rotations is clipped to the range given when the component model was
created with CreateTrainedComponentModel or CreateComponentModel, respectively. The angle
range for each component can be queried with GetShapeModelParams after requesting the corresponding
shape model handles with GetComponentModelParams.
The position and rotation of the model components of all found component model instances are returned in
rowComp, columnComp, and angleComp. The coordinates rowComp and columnComp are the coordi-
nates of the origin (reference point) of the component in the search image. If the component model was created
with CreateTrainedComponentModel by training, the origin of the component is the center of gravity of
the respective returned contour region in modelComponents of the operator TrainModelComponents.
Otherwise, if the component model was created manually with CreateComponentModel, the origin of the
component is the center of gravity of the corresponding passed component region componentRegion of the op-
erator CreateComponentModel. Since the relations between the components in componentModelID refer
to this reference point, the origin of the components must not be modified by using SetShapeModelOrigin.
Additionally, the score of each found component instance is returned in scoreComp. The score is a number
between 0 and 1, and is an approximate measure of how much of the component is visible in the image. If,
for example, half of the component is occluded, the score cannot exceed 0.5. While scoreComp represents
the score of the instances of the single components, score contains the score of the instances of the entire
component model. More precisely, score contains the weighted mean of the associated values of scoreComp.
The weighting is performed according to the number of model points within the respective component.
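The weighting described above can be reproduced numerically (illustrative sketch; the per-component point counts are hypothetical values, not taken from a real model):

```python
def instance_score(score_comp, num_points):
    """Weighted mean of the component scores, weighted by the number of
    model points in each component."""
    total = sum(num_points)
    return sum(s * n for s, n in zip(score_comp, num_points)) / total

# Three components; the largest one is half occluded:
s = instance_score([1.0, 0.5, 1.0], [100, 400, 100])
# (100*1.0 + 400*0.5 + 100*1.0) / 600 = 2/3
```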
In order to assign the values in rowComp, columnComp, angleComp, and scoreComp to the as-
sociated model component, the index of the model component (see CreateComponentModel and
TrainModelComponents, respectively) is returned in modelComp. Furthermore, for each found instance
of the component model its associated component matches are given in modelStart and modelEnd. Thus,
the matches of the components that correspond to the first found instance of the component model are given
by the interval of indices [modelStart[0],modelEnd[0]]. The indices refer to the parameters rowComp,
columnComp, angleComp, scoreComp, and modelComp. Assume, for example, that two instances of the
component model, which consists of three components, are found in the image, where for one instance only two
components (component 0 and component 2) could be found. Then the returned parameters could, for exam-
ple, have the following elements: rowComp = [100,200,300,150,250], columnComp = [200,210,220,400,425],
angleComp = [0,0.1,-0.2,0.1,0.2], scoreComp = [1,1,1,1,1], modelComp = [0,1,2,0,2], modelStart
= [0,3], modelEnd = [2,4], score = [1,1]. The operator GetFoundComponentModel can be used to
visualize the result of the search and to extract the component matches of a certain component model instance.
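The index bookkeeping in this example can be unpacked as follows (illustrative Python mirroring the returned tuples, not the HALCON API):

```python
def matches_per_instance(model_start, model_end, model_comp, row_comp):
    """Split the flat per-match tuples into one list of
    (component index, row) pairs per found model instance."""
    return [
        [(model_comp[i], row_comp[i]) for i in range(s, e + 1)]
        for s, e in zip(model_start, model_end)
    ]

# Values from the example above; the second instance is missing component 1:
inst = matches_per_instance([0, 3], [2, 4], [0, 1, 2, 0, 2],
                            [100, 200, 300, 150, 250])
# inst[1] -> [(0, 150), (2, 250)]
```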
By default, the components are searched at image positions where the components lie completely within the im-
age. This means that the components will not be found if they extend beyond the borders of the image, even
if they would achieve a score greater than minScoreComp (see below). This behavior can be changed with
SetSystem(’border_shape_models’,’true’), which will cause components that extend beyond the
image border to be found if they achieve a score greater than minScoreComp. Here, points lying outside the
image are regarded as being occluded, i.e., they lower the score. It should be noted that the runtime of the search
will increase in this mode.
The parameter minScore determines what score a potential match of the component model must at least have to
be regarded as an instance of the component model in the image. If the component model can be expected never
to be occluded in the images, minScore may be set as high as 0.8 or even 0.9. If a missing or strongly occluded
root component must be assumed, and hence ifRootNotFound is set to ’select_new_root’ (see below), the larger the value chosen for minScore, the faster the search. Otherwise, the value of this parameter only slightly influences the
computation time.
The maximum number of model instances to be found can be determined with numMatches. If more than
numMatches instances with a score greater than minScore are found in the image, only the best numMatches
instances are returned. If fewer than numMatches are found, only that number is returned, i.e., the parameter
minScore takes precedence over numMatches. If all model instances exceeding minScore in the image
should be found, numMatches must be set to 0.
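The interplay of minScore and numMatches can be sketched as a filter followed by a truncation (illustrative; HALCON's actual selection additionally involves the overlap test described below):

```python
def select_instances(scores, min_score, num_matches):
    """Keep instances exceeding min_score, best first; num_matches == 0
    means 'return all of them'."""
    kept = sorted((s for s in scores if s > min_score), reverse=True)
    return kept if num_matches == 0 else kept[:num_matches]

print(select_instances([0.9, 0.4, 0.8], 0.5, 0))  # [0.9, 0.8]
print(select_instances([0.9, 0.4, 0.8], 0.5, 1))  # [0.9]
```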
In some cases, found instances only differ in the pose of one or a few components. The parameter maxOverlap
determines by what fraction (i.e., a number between 0 and 1) two instances may at most overlap in order to
consider them as different instances, and hence to return them separately. If two instances overlap each other by
more than maxOverlap only the best instance is returned. The calculation of the overlap is based on the smallest
enclosing rectangles of arbitrary orientation (see SmallestRectangle2) of the found component instances. If
maxOverlap = 0, the found instances may not overlap at all, while for maxOverlap = 1 no check for overlap
is performed, and hence all instances are returned.
The parameter ifRootNotFound specifies the behavior of the operator when dealing with a missing or
strongly occluded root component. This parameter strongly influences the computation time of the operator. If
ifRootNotFound is set to ’stop_search’, it is assumed that the root component is always found in the image.
Consequently, for instances for which the root component could not be found the search for the remaining compo-
nents is not continued. If ifRootNotFound is set to ’select_new_root’, different components are successively
chosen as the root component and searched within the full search space. The order in which the selection of the
root component is performed corresponds to the order passed in rootRanking. The poses of the found in-
stances of all root components are then used to start the recursive search for the remaining components. Hence,
it is possible to find instances even if the original root component is not found. However, the computation time
of the search increases significantly in comparison to the search when choosing ’stop_search’. The number of
root components to search depends on the value specified for minScore. The higher the value chosen for minScore, the fewer root components must be searched, and thus the faster the search is performed. If the number
of elements in rootComponent is less than the number of required root components during the search, the root
components are completed by the automatically computed order (see CreateTrainedComponentModel or
CreateComponentModel).
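Completing a partial rootComponent list from the automatic ranking, as described, might look like this (illustrative sketch; the helper is hypothetical):

```python
def complete_roots(user_roots, auto_ranking):
    """Append automatically ranked root components that the caller did
    not list, preserving both orders."""
    roots = list(user_roots)
    roots.extend(r for r in auto_ranking if r not in roots)
    return roots

print(complete_roots([2], [0, 1, 2]))  # [2, 0, 1]
```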
The parameter ifComponentNotFound specifies the behavior of the operator when dealing with missing or
strongly occluded components other than the root component. Here, it can be stated in which way components
that must be searched relative to the pose of another (predecessor) component should be treated if the predecessor
component was not found. If ifComponentNotFound is set to ’prune_branch’, such components are not
searched at all and are also treated as ’not found’. If ifComponentNotFound is set to ’search_from_upper’,
such components are searched relative to the pose of the predecessor component of the predecessor component. If
ifComponentNotFound is set to ’search_from_best’, such components are searched relative to the pose of the
already found component from which the relative search can be performed with minimum computational effort.
The parameter posePrediction determines whether the pose of components that could not be found should
be estimated. If posePrediction is set to ’none’, only the poses of the found components are returned. In
contrast, if posePrediction is set to ’from_neighbors’ or ’from_all’, the poses of components that could not
be found are estimated and returned with a score of scoreComp = 0.0. The estimation of the poses is then either
based on the poses of the found neighboring components in the search tree (’from_neighbors’) or on the poses of
all found components (’from_all’).
Internally, the shape-based matching is used for the component-based matching in order to search the individ-
ual components (see FindShapeModel). Therefore, the parameters minScoreComp, subPixelComp,
numLevelsComp, and greedinessComp have the same meaning as the corresponding parameters in
FindShapeModel. These parameters must either contain one element, in which case the parameter is used for
all components, or must contain the same number of elements as model components in componentModelID,
in which case each parameter element refers to the corresponding component in componentModelID.
numLevelsComp may also contain two elements or twice the number of elements as model components. The
first value determines the number of pyramid levels to use. The second value determines the lowest pyramid level
to which the found matches are tracked. If different values should be used for different components, the number
of pyramid levels and the lowest pyramid level must be specified interleaved in numLevelsComp. If, for ex-
ample, two components are contained in componentModelID, and the number of pyramid levels is 5 for the
first component and 4 for the second component, and the lowest pyramid level is 2 for the first component and 1
for the second component, numLevelsComp = [5,2,4,1] must be selected. Further details can be found in the
documentation of FindShapeModels.
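The interleaved layout of numLevelsComp can be decoded as follows (illustrative Python; only the two-element and 2n-element layouts discussed above are handled):

```python
def decode_num_levels(num_levels_comp, num_components):
    """Return one (num_levels, lowest_level) pair per component."""
    if len(num_levels_comp) == 2:
        # one pair applies to every component
        return [tuple(num_levels_comp)] * num_components
    if len(num_levels_comp) == 2 * num_components:
        it = iter(num_levels_comp)
        return list(zip(it, it))   # consume interleaved pairs
    raise ValueError("unsupported layout")

print(decode_num_levels([5, 2, 4, 1], 2))  # [(5, 2), (4, 1)]
```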
Parameter
|numLevelsComp| = 2n).
Default Value : 0
List of values : NumLevelsComp ∈ {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
. greedinessComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
“Greediness” of the search heuristic for the components (0: safe but slow; 1: fast but matches may be missed).
Default Value : 0.9
Suggested values : GreedinessComp ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Minimum Increment : 0.01
Recommended Increment : 0.05
Restriction : (0 ≤ GreedinessComp) ∧ (GreedinessComp ≤ 1)
. modelStart (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; HTuple (int / long)
Start index of each found instance of the component model in the tuples describing the component matches.
. modelEnd (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; HTuple (int / long)
End index of each found instance of the component model in the tuples describing the component matches.
. score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Score of the found instances of the component model.
. rowComp (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; HTuple (double)
Row coordinate of the found component matches.
. columnComp (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; HTuple (double)
Column coordinate of the found component matches.
. angleComp (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; HTuple (double)
Rotation angle of the found component matches.
. scoreComp (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Score of the found component matches.
. modelComp (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; HTuple (int / long)
Index of the found components.
Result
If the parameter values are correct, the operator FindComponentModel returns the value 2
(H_MSG_TRUE). If the input is empty (no input image available) the behavior can be set via SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
FindComponentModel is reentrant and processed without parallelization.
Possible Predecessors
CreateTrainedComponentModel, CreateComponentModel, ReadComponentModel
Possible Successors
GetFoundComponentModel
Alternatives
FindShapeModels
See also
FindShapeModel, FindShapeModels, GetShapeModelParams, GetComponentModelParams,
TrainModelComponents, SetShapeModelOrigin, SmallestRectangle2
Module
Matching
When using the second possibility, i.e., the components of the component model are approximately
known, the training by using TrainModelComponents can be performed without previously execut-
ing GenInitialComponents. If this is desired, the initial components can be specified by the
user and directly passed to TrainModelComponents. Furthermore, if the components as well as
the relative movements (relations) of the components are known, GenInitialComponents as well as
TrainModelComponents need not be executed. In fact, by immediately passing the components as well
as the relations to CreateComponentModel, the component model can be created without any training.
In both cases, however, GenInitialComponents can be used to evaluate the effect of the feature ex-
traction parameters contrastLow, contrastHigh, and minSize of TrainModelComponents and
CreateComponentModel, and hence to find suitable parameter values for a certain application.
For this, the image regions for the (initial) components must be explicitly given, i.e., for each (initial) component
a separate image from which the (initial) component should be created is passed. In this case, modelImage
contains multiple image objects. The domain of each image object is used as the region of interest for calculating
the corresponding (initial) component. The image matrix of all image objects in the tuple must be identical, i.e.,
modelImage cannot be constructed in an arbitrary manner using ConcatObj, but must be created from the
same image using AddChannels or equivalent calls. If this is not the case, an error message is returned. If
the parameters contrastLow, contrastHigh, or minSize only contain one element, this value is applied
to the creation of all (initial) components. In contrast, if different values for different (initial) components should
be used, tuples of values can be passed for these three parameters. In this case, the tuples must have a length
that corresponds to the number of (initial) components, i.e., the number of image objects in modelImage. The
contour regions of the (initial) components are returned in initialComponents.
Thus, the second possibility is equivalent to the function of InspectShapeModel within the shape-based
matching. However, in contrast to InspectShapeModel, GenInitialComponents does not return the
contour regions on multiple image pyramid levels. Therefore, if the number of pyramid levels to be used should be
chosen manually, preferably InspectShapeModel should be called individually for each (initial) component.
For both described possibilities the parameters contrastLow, contrastHigh, and minSize can be determined automatically. If both hysteresis thresholds should be determined automatically, both contrastLow and contrastHigh must be set to ’auto’. In contrast, if only one threshold value should be determined, contrastLow must be set to ’auto’ while contrastHigh must be set to an arbitrary value different from ’auto’.
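The threshold-selection rule stated above can be expressed compactly (illustrative sketch; the mode names are hypothetical, not HALCON terminology):

```python
def auto_threshold_mode(contrast_low, contrast_high):
    """'both'   -> both hysteresis thresholds are estimated automatically,
       'single' -> a single threshold is estimated automatically,
       'none'   -> the threshold(s) were given explicitly."""
    if contrast_low == 'auto':
        return 'both' if contrast_high == 'auto' else 'single'
    return 'none'

print(auto_threshold_mode('auto', 'auto'))  # both
print(auto_threshold_mode('auto', 30))      # single
```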
If the input image modelImage has one channel the representation of the model is created with the method that is
used in CreateComponentModel or CreateTrainedComponentModel for the metrics ’use_polarity’,
’ignore_global_polarity’, and ’ignore_local_polarity’. If the input image has more than one channel the represen-
tation is created with the method that is used for the metric ’ignore_color_polarity’.
Parameter
Result
If the parameter values are correct, the operator GenInitialComponents returns the value 2
(H_MSG_TRUE). If the input is empty (no input images are available) the behavior can be set via SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
GenInitialComponents is reentrant and processed without parallelization.
Possible Predecessors
DrawRegion, AddChannels, ReduceDomain
Possible Successors
TrainModelComponents
Alternatives
InspectShapeModel
Module
Matching
HTuple HComponentModel.GetComponentModelParams (
out HTuple rootRanking, out HShapeModel[] shapeModelIDs )
double HComponentModel.GetComponentModelParams (
out int rootRanking, out HShapeModel shapeModelIDs )
Result
If the handle of the component model is valid, the operator GetComponentModelParams returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
GetComponentModelParams is reentrant and processed without parallelization.
Possible Predecessors
CreateTrainedComponentModel, CreateComponentModel
See also
GetShapeModelParams
Module
Matching
HRegion HComponentModel.GetComponentModelTree (
out HRegion relations, HTuple rootComponent, HTuple image,
out HTuple startNode, out HTuple endNode, out HTuple row,
out HTuple column, out HTuple phi, out HTuple length1,
out HTuple length2, out HTuple angleStart, out HTuple angleExtent )
HRegion HComponentModel.GetComponentModelTree (
out HRegion relations, int rootComponent, string image,
out int startNode, out int endNode, out double row, out double column,
out double phi, out double length1, out double length2,
out double angleStart, out double angleExtent )
phi, length1, and length2 (see GenRectangle2). The orientation relation is described by the starting
angle angleStart and the angle extent angleExtent.
For the root component as well as for components that do not have a predecessor in the current image or that
have not been found in the current image, an empty region is returned and the corresponding values of the seven
parameters are set to 0.
Parameter
Result
If the parameters are valid, the operator GetComponentModelTree returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Parallelization Information
GetComponentModelTree is reentrant and processed without parallelization.
Possible Predecessors
CreateTrainedComponentModel, CreateComponentModel
See also
TrainModelComponents
Module
Matching
HRegion HComponentTraining.GetComponentRelations (
int referenceComponent, HTuple image, out HTuple row,
out HTuple column, out HTuple phi, out HTuple length1,
out HTuple length2, out HTuple angleStart, out HTuple angleExtent )
HRegion HComponentTraining.GetComponentRelations (
int referenceComponent, string image, out double row,
out double column, out double phi, out double length1,
out double length2, out double angleStart, out double angleExtent )
Return the relations between the model components that are contained in a training result.
GetComponentRelations returns the relations between model components after training them with
TrainModelComponents. With the parameter referenceComponent, you can select a reference
component. GetComponentRelations then returns the relations between the reference component and all other
components in the model image (if image = ’model_image’ or image = 0) or in a training image (if image ≥ 1).
In order to obtain the relations in the ith training image, image must be set to i. The result of the training returned
by TrainModelComponents must be passed in componentTrainingID. referenceComponent
describes the index of the reference component and must lie between 0 and n-1, where n is the number of model
components (see TrainModelComponents).
The relations are returned in the form of regions in relations as well as in the form of numerical values in row,
column, phi, length1, length2, angleStart, and angleExtent.
The region object tuple relations is designed as follows: For each component a separate region is returned.
Consequently, relations contains n regions, where the order of the regions within the tuple is determined by the
index of the corresponding components. The positions of all components in the image are represented by circles
with a radius of 3 pixels. For each component other than the reference component referenceComponent,
additionally the position relation and the orientation relation relative to the reference component are represented.
The position relation is represented by a rectangle and the orientation relation is represented by a circle
sector with a radius of 30 pixels. The center of the circle is placed at the mean relative position of the
component. The rectangle describes the movement of the reference point of the respective component relative to the
pose of the reference component, while the circle sector describes the variation of the relative orientation (see
TrainModelComponents). A relative orientation of 0 corresponds to the relative orientation of both
components in the model image. If both components appear in the same relative orientation in all images, the circle
sector consequently degenerates to a straight line.
In addition to the region object tuple relations, the relations are also returned in the form of numerical values in
row, column, phi, length1, length2, angleStart, and angleExtent. These parameters are tuples
of length n and contain the relations of all components relative to the reference component, where the order of
the values within the tuples is determined by the index of the corresponding component. The position relation is
described by the parameters of the corresponding rectangle row, column, phi, length1, and length2 (see
GenRectangle2). The orientation relation is described by the starting angle angleStart and the angle extent
angleExtent. For the reference component only the position within the image is returned in row and column.
All other values are set to 0.
If the reference component has not been found in the current image, an array of empty regions is returned and the
corresponding parameter values are set to 0.
The operator GetComponentRelations is particularly useful in order to visualize the result of the
training that was performed with TrainModelComponents. With this, it is possible to evaluate the
variations that are contained in the training images. Sometimes it might be reasonable to restart the training with
TrainModelComponents while using a different set of training images.
Parameter
. relations (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Region representation of the relations.
. componentTrainingID (input_control) . . . . . . component_training ; HComponentTraining /
HTuple (IntPtr)
Handle of the training result.
. referenceComponent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Index of reference component.
Restriction : ReferenceComponent ≥ 0
. image (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string / int / long)
Image for which the component relations are to be returned.
Default Value : "model_image"
Suggested values : Image ∈ {"model_image", 0, 1, 2, 3, 4, 5, 6, 7, 8}
. row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.y(-array) ; HTuple (double)
Row coordinate of the center of the rectangle representing the relation.
. column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.x(-array) ; HTuple (double)
Column coordinate of the center of the rectangle representing the relation.
. phi (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.angle.rad(-array) ; HTuple (double)
Orientation of the rectangle representing the relation (radians).
Assertion : ((−pi/2) < Phi) ∧ (Phi ≤ (pi/2))
. length1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.width(-array) ; HTuple (double)
First radius (half length) of the rectangle representing the relation.
Assertion : Length1 ≥ 0.0
. length2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.height(-array) ; HTuple (double)
Second radius (half width) of the rectangle representing the relation.
Assertion : (Length2 ≥ 0.0) ∧ (Length2 ≤ Length1)
. angleStart (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; HTuple (double)
Smallest relative orientation angle.
. angleExtent (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; HTuple (double)
Extent of the relative orientation angles.
Result
If the handle of the training result is valid, the operator GetComponentRelations returns the value 2
(H_MSG_TRUE). If necessary an exception is raised.
Parallelization Information
GetComponentRelations is reentrant and processed without parallelization.
Possible Predecessors
TrainModelComponents
Possible Successors
TrainModelComponents
See also
GenRectangle2
Module
Matching
HRegion HComponentModel.GetFoundComponentModel (
HTuple modelStart, HTuple modelEnd, HTuple rowComp, HTuple columnComp,
HTuple angleComp, HTuple scoreComp, HTuple modelComp, int modelMatch,
string markOrientation, out HTuple rowCompInst,
out HTuple columnCompInst, out HTuple angleCompInst,
out HTuple scoreCompInst )
Parameter
. foundComponents (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; HRegion
Found components of the selected component model instance.
. componentModelID (input_control) . . . . . component_model ; HComponentModel / HTuple (IntPtr)
Handle of the component model.
. modelStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; HTuple (int / long)
Start index of each found instance of the component model in the tuples describing the component matches.
. modelEnd (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; HTuple (int / long)
End index of each found instance of the component model in the tuples describing the component matches.
. rowComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; HTuple (double)
Row coordinate of the found component matches.
. columnComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; HTuple (double)
Column coordinate of the found component matches.
. angleComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; HTuple (double)
Rotation angle of the found component matches.
. scoreComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Score of the found component matches.
. modelComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; HTuple (int / long)
Index of the found components.
. modelMatch (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Index of the found instance of the component model to be returned.
. markOrientation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Mark the orientation of the components.
Default Value : "false"
List of values : MarkOrientation ∈ {"true", "false"}
. rowCompInst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; HTuple (double)
Row coordinate of all components of the selected model instance.
. columnCompInst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; HTuple (double)
Column coordinate of all components of the selected model instance.
. angleCompInst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; HTuple (double)
Rotation angle of all components of the selected model instance.
. scoreCompInst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Score of all components of the selected model instance.
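The role of modelStart and modelEnd can be illustrated with plain index arithmetic. The following is a hypothetical sketch in Python, not HALCON code; the sample values are invented, and it assumes that modelEnd holds inclusive end indices, as the wording "end index" suggests:

```python
# Hypothetical sketch of how modelStart/modelEnd address the flat match
# tuples returned by FindComponentModel (illustration only, not HALCON code).
row_comp = [10.0, 12.0, 55.0, 57.0, 58.0]  # rows of all component matches
model_start = [0, 2]  # first match index of each found model instance
model_end = [1, 4]    # last match index of each found model instance (inclusive)

model_match = 1  # corresponds to modelMatch: select the second instance
rows_of_instance = row_comp[model_start[model_match]:
                            model_end[model_match] + 1]
print(rows_of_instance)  # [55.0, 57.0, 58.0]
```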
Result
If the parameters are valid, the operator GetFoundComponentModel returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Parallelization Information
GetFoundComponentModel is reentrant and processed without parallelization.
Possible Predecessors
FindComponentModel
See also
TrainModelComponents, CreateComponentModel
Module
Matching
HRegion HComponentTraining.GetTrainingComponents (
HTuple components, HTuple image, string markOrientation,
out HTuple row, out HTuple column, out HTuple angle, out HTuple score
)
HRegion HComponentTraining.GetTrainingComponents (
string components, string image, string markOrientation,
out double row, out double column, out double angle, out double score
)
* Visualize the initial components.
get_training_components (TrainingComponents, ComponentTrainingID,
’initial_components’, i, ’false’,
Row, Column, Angle, Score)
* Visualize the final poses of the model components.
get_training_components (TrainingComponents, ComponentTrainingID,
’model_components’, i, ’false’,
Row, Column, Angle, Score)
endfor
Result
If the handle of the training result is valid, the operator GetTrainingComponents returns the value 2
(H_MSG_TRUE). If necessary an exception is raised.
Parallelization Information
GetTrainingComponents is reentrant and processed without parallelization.
Possible Predecessors
TrainModelComponents
Possible Successors
TrainModelComponents
See also
FindShapeModel
Module
Matching
HRegion HComponentTraining.InspectClusteredComponents (
string ambiguityCriterion, double maxContourOverlap,
double clusterThreshold )
Result
If the handle of the training result is valid, the operator InspectClusteredComponents returns the value 2
(H_MSG_TRUE). If necessary an exception is raised.
Parallelization Information
InspectClusteredComponents is reentrant and processed without parallelization.
Possible Predecessors
TrainModelComponents
Possible Successors
ClusterModelComponents
Module
Matching
void HComponentTraining.ModifyComponentRelations (
string referenceComponent, string toleranceComponent,
double positionTolerance, double angleTolerance )
HRegion HComponentTraining.TrainModelComponents (
HImage modelImage, HRegion initialComponents, HImage trainingImages,
HTuple contrastLow, HTuple contrastHigh, HTuple minSize,
HTuple minScore, HTuple searchRowTol, HTuple searchColumnTol,
HTuple searchAngleTol, string trainingEmphasis,
string ambiguityCriterion, double maxContourOverlap,
double clusterThreshold )
HRegion HComponentTraining.TrainModelComponents (
HImage modelImage, HRegion initialComponents, HImage trainingImages,
int contrastLow, int contrastHigh, int minSize, double minScore,
int searchRowTol, int searchColumnTol, double searchAngleTol,
string trainingEmphasis, string ambiguityCriterion,
double maxContourOverlap, double clusterThreshold )
TrainModelComponents should be used in cases where the relations of the components are not known
and should be trained automatically. In contrast, if the relations are known, no training needs to be
performed with TrainModelComponents. Instead, the component model can be directly created with
CreateComponentModel.
If the initial components have been automatically created by using GenInitialComponents,
initialComponents contains the contour regions of the initial components. In contrast, if the initial
components should be defined by the user, they can be directly passed in initialComponents. However,
instead of the contour regions for each initial component, its enclosing region must be passed in the tuple. The
(contour) regions refer to the model image modelImage. If the initial components have been obtained using
GenInitialComponents, the model image should be the same as in GenInitialComponents. Please
note that each initial component is part of at most one rigid model component. This is because during the training
initial components can be merged into rigid model components if required (see below). However, they cannot be
split and distributed to several rigid model components.
TrainModelComponents uses the following approach to perform the training: In the first step, the initial
components are searched in all training images. In some cases, one initial component may be found in a training
image more than once. Thus, in the second step, the resulting ambiguities are solved, i.e., the most probable pose
of each initial component is found. Consequently, after solving the ambiguities, at most one pose of each initial
component is available in each training image. In the next step, the poses are analyzed and those initial components
that do not show any relative movement are clustered to the final rigid model components. Finally, in the last step,
the relations between the model components are computed by analyzing their relative poses over the sequence of
training images. The parameters that are associated with the mentioned steps are explained in the following.
The training is performed based on several training images, which are passed in trainingImages. Each
training image must show at most one instance of the compound object, and the training images should cover the
full range of allowed relative movements of the model components. If, for example, the component model of an
on/off switch should be trained, one training image that shows the switch turned off is sufficient if the switch in the
model image is turned on, or vice versa.
The principle of the training is to find the initial components in all training images and to analyze their
poses. For this, a shape model is created for each initial component (see CreateShapeModel),
which is then used to determine the poses (position and orientation) of the initial components in the
training images (see FindShapeModel). Depending on the mode that is set by using SetSystem
(’pregenerate_shape_models’,...), the shape model is either pregenerated completely or computed
online during the search. The mode influences the computation time as well as the robustness of the
training. Furthermore, it should be noted that if single-channel images are used in modelImage as well as in
trainingImages, the metric ’use_polarity’ is used internally for CreateShapeModel, while if multichannel
images are used in either modelImage or trainingImages, the metric ’ignore_color_polarity’ is used.
Finally, it should be noted that while the number of channels in modelImage and trainingImages may be
different, e.g., to facilitate model generation from synthetically generated images, the number of channels in all
the images in trainingImages must be identical. For further details see CreateShapeModel. The
creation of the shape models can be influenced by choosing appropriate values for the parameters contrastLow,
contrastHigh, and minSize. These parameters have the same meaning as in GenInitialComponents
and can be automatically determined by passing ’auto’: If both hysteresis thresholds should be determined
automatically, both contrastLow and contrastHigh must be set to ’auto’. In contrast, if only one
threshold value should be determined, contrastLow must be set to ’auto’ while contrastHigh must be set
to an arbitrary value different from ’auto’. If the initial components have been automatically created by
GenInitialComponents, the parameters contrastLow, contrastHigh, and minSize should be set
to the same values as in GenInitialComponents.
To influence the search for the initial components, the parameters minScore, searchRowTol,
searchColumnTol, searchAngleTol, and trainingEmphasis can be set. The parameter minScore
determines what score a potential match must at least have to be regarded as an instance of the initial component
in the training image. The larger minScore is chosen, the faster the training is. If the initial components can
be expected never to be occluded in the training images, minScore may be set as high as 0.8 or even 0.9 (see
FindShapeModel).
By default, the components are searched only at points in which the component lies completely within the
respective training image. This means that a component will not be found if it extends beyond the borders of the
image, even if it would achieve a score greater than minScore. This behavior can be changed with SetSystem
(’border_shape_models’,’true’), which will cause components that extend beyond the image border
to be found if they achieve a score greater than minScore. Here, points lying outside the image are regarded as
being occluded, i.e., they lower the score. It should be noted that the runtime of the training will increase in this
mode.
When dealing with a high number of initial components and many training images, the training may take a long
time (up to several minutes). In order to speed up the training, it is possible to restrict the search space for the single
initial components in the training images. For this, the poses of the initial components in the model image are used
as reference poses. The parameters searchRowTol and searchColumnTol specify the position tolerance
region relative to the reference position in which the search is performed. Assume, for example, that the position of
an initial component in the model image is (100,200), searchRowTol is set to 20, and searchColumnTol
is set to 10. Then, this initial component is searched in the training images only within the axis-aligned rectangle
that is determined by the upper left corner (80,190) and the lower right corner (120,210). The same holds for
the orientation, which is restricted by the angle tolerance searchAngleTol to the range
[-searchAngleTol,+searchAngleTol] around the reference orientation. Thus, it is possible to considerably reduce the
computational effort during the training by an adequate acquisition of the training images. If one of the three
parameters is set to -1, no restriction of the search space is applied in the corresponding dimension.
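The arithmetic of the example above can be sketched as follows (plain Python for illustration, not a HALCON call; the function name is hypothetical):

```python
import math

def search_rectangle(ref_row, ref_col, row_tol, col_tol):
    """Axis-aligned search rectangle around a component's reference position.
    A tolerance of -1 lifts the restriction in the corresponding dimension."""
    row_tol = math.inf if row_tol == -1 else row_tol
    col_tol = math.inf if col_tol == -1 else col_tol
    upper_left = (ref_row - row_tol, ref_col - col_tol)
    lower_right = (ref_row + row_tol, ref_col + col_tol)
    return upper_left, lower_right

# Component at (100, 200) in the model image, tolerances 20 and 10:
print(search_rectangle(100, 200, 20, 10))  # ((80, 190), (120, 210))
```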
The input parameters contrastLow, contrastHigh, minSize, minScore, searchRowTol,
searchColumnTol, and searchAngleTol must either contain one element, in which case the parameter is
used for all initial components, or must contain the same number of elements as the initial components contained
in initialComponents, in which case each parameter element refers to the corresponding initial component
in initialComponents.
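This one-or-n convention can be sketched as follows (illustrative Python, not part of HALCON; `expand_param` is a hypothetical helper name):

```python
# Sketch of the broadcasting rule for the per-component input parameters:
# either one value for all initial components, or one value per component.
def expand_param(values, num_components):
    if len(values) == 1:
        return values * num_components  # one value applies to all components
    if len(values) == num_components:
        return values                   # one value per initial component
    raise ValueError("parameter must have 1 or num_components elements")

print(expand_param([30], 3))          # [30, 30, 30]
print(expand_param([20, 30, 40], 3))  # [20, 30, 40]
```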
The parameter trainingEmphasis offers another possibility to influence the computation time of the training
and to simultaneously affect its robustness. If trainingEmphasis is set to ’speed’, on the one hand the training
is comparatively fast, on the other hand it may happen in some cases that some initial components are not found in
the training images or are found at a wrong pose. Consequently, this would lead to an incorrect computation of the
rigid model components and their relations. The poses of the found initial components in the individual training
images can be examined by using GetTrainingComponents. If erroneous matches occur the training should
be restarted with trainingEmphasis set to ’reliability’. This results in a higher robustness at the cost of a
longer computation time.
Furthermore, during the pose determination of the initial components, ambiguities may occur if the initial
components are rotationally symmetric or if several initial components are identical or at least similar to each other.
To solve the ambiguities, the most probable pose is calculated for each initial component in each training
image. For this, the individual ambiguous poses are evaluated. The pose of an initial component receives a good
evaluation if the relative pose of the initial component with respect to the other initial components is similar to
the corresponding relative pose in the model image. The method to evaluate this similarity can be chosen with
ambiguityCriterion. In almost all cases, the best results are obtained with ’rigidity’, which assumes the
rigidity of the compound object. The more the rigidity of the compound object is violated by the pose of the initial
component, the worse its evaluation is. In the case of ’distance’, only the distance between the initial components
is considered during the evaluation. Hence, the pose of the initial component receives a good evaluation if its
distances to the other initial components are similar to the corresponding distances in the model image. Accordingly,
when choosing ’orientation’, only the relative orientation is considered during the evaluation. Finally, the
simultaneous consideration of distance and orientation can be achieved by choosing ’distance_orientation’. In contrast to
’rigidity’, the relative pose of the initial components is not considered when using ’distance_orientation’.
The process of solving the ambiguities can be further influenced by the parameter maxContourOverlap. This
parameter describes the extent by which the contours of two initial component matches may overlap each other.
Let the letters ’I’ and ’T’, for example, be two initial components that should be searched in a training image
that shows the string ’IT’. Then, the initial component ’T’ should be found at its correct pose. In contrast, the
initial component ’I’ will be found at its correct pose (’I’) but also at the pose of the ’T’ because of the
similarity of the two components. To discard the wrong match of the initial component ’I’, an appropriate value for
maxContourOverlap can be chosen: If overlapping matches should be tolerated, maxContourOverlap
should be set to 1. If overlapping matches should be completely avoided, maxContourOverlap should be set
to 0. By choosing a value between 0 and 1, the maximum percentage of overlapping contour pixels can be adjusted.
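The effect of maxContourOverlap can be sketched with a simplified overlap test (illustrative Python using pixel sets with made-up coordinates; the actual contour test is internal to HALCON):

```python
# Simplified stand-in for the contour overlap test: contours are modeled as
# sets of (row, col) pixels, and we measure the overlapping fraction.
def overlap_fraction(contour_a, contour_b):
    """Fraction of contour_a's pixels that also belong to contour_b."""
    shared = contour_a & contour_b
    return len(shared) / len(contour_a)

match_i = {(0, 0), (1, 0), (2, 0), (3, 0)}          # match of component 'I'
match_t = {(1, 0), (2, 0), (3, 0), (1, 1), (1, 2)}  # overlapping match of 'T'

max_contour_overlap = 0.5
frac = overlap_fraction(match_i, match_t)
print(frac)                         # 0.75
print(frac <= max_contour_overlap)  # False -> this match would be discarded
```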
The decision which initial components can be clustered to rigid model components is made based on the poses
of the initial components in the model image and in the training images. Two initial components are merged
if they do not show any relative movement over all images. If, in the case of the above-mentioned switch, the
training images showed the same switch state as the model image, the algorithm would merge the
respective initial components because it assumes that the entire switch is one rigid model component. The extent
by which initial components are merged can be influenced with the parameter clusterThreshold. This cluster
threshold is based on the probability that two initial components belong to the same rigid model component. Thus,
clusterThreshold describes the minimum probability which two initial components must have in order to be
merged. Since the threshold is based on a probability value, it must lie in the interval between 0 and 1. The greater
the threshold is chosen, the smaller the number of initial components that are merged. If a threshold of 0 is chosen,
all initial components are merged into one rigid component, while for a threshold of 1 no merging is performed
and each initial component is adopted as one rigid model component.
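The threshold decision can be sketched as follows (illustrative Python; the probability model itself is internal to HALCON, and the component names and probability values below are made up):

```python
# Sketch of the clusterThreshold decision: a pair of initial components is
# merged if its estimated probability of belonging to the same rigid model
# component reaches the threshold.
def merge_pairs(pair_probabilities, cluster_threshold):
    return [pair for pair, p in pair_probabilities.items()
            if p >= cluster_threshold]

# Hypothetical pairwise probabilities for the on/off switch example:
probs = {("switch_body", "lever"): 0.9, ("switch_body", "label"): 0.3}
print(merge_pairs(probs, 0.5))  # [('switch_body', 'lever')]
```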
The final rigid model components are returned in modelComponents. Later, the index of a component region
in modelComponents is used to denote the model component. The poses of the components in the training
images can be examined by using GetTrainingComponents.
After the determination of the model components, their relative movements are analyzed by determining the
movement of one component with respect to a second component for each pair of components. For this, the components
are referred to their reference points. The reference point of a component is the center of gravity of its contour
region, which is returned in modelComponents. It can be calculated by calling AreaCenter. Finally, the
relative movement is represented by the smallest enclosing rectangle of arbitrary orientation of the reference point
movement and by the smallest enclosing angle interval of the relative orientation of the second component over all
images. The determined relations can be inspected by using GetComponentRelations.
Parameter
Result
If the parameter values are correct, the operator TrainModelComponents returns the value 2
(H_MSG_TRUE). If the input is empty (no input images are available) the behavior can be set via SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
TrainModelComponents is processed completely exclusively without parallelization.
Possible Predecessors
GenInitialComponents
Possible Successors
InspectClusteredComponents, ClusterModelComponents, ModifyComponentRelations,
WriteTrainingComponents, GetTrainingComponents, GetComponentRelations,
CreateTrainedComponentModel, ClearTrainingComponents,
ClearAllTrainingComponents
See also
CreateShapeModel, FindShapeModel
Module
Matching
Result
If the file name is valid (write permission), the operator WriteTrainingComponents returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
WriteTrainingComponents is reentrant and processed without parallelization.
Possible Predecessors
TrainModelComponents
Module
Matching
7.2 Correlation-Based
static void HOperatorSet.ClearAllNccModels ( )
static void HMisc.ClearAllNccModels ( )
Free the memory of all NCC models.
The operator ClearAllNccModels frees the memory of all NCC models that were created by
CreateNccModel. After calling ClearAllNccModels, no model can be used any longer.
Attention
ClearAllNccModels exists solely for the purpose of implementing the “reset program” functionality in
HDevelop. ClearAllNccModels must not be used in any application.
Result
ClearAllNccModels always returns 2 (H_MSG_TRUE).
Parallelization Information
ClearAllNccModels is processed completely exclusively without parallelization.
Possible Predecessors
CreateNccModel, ReadNccModel, WriteNccModel
Alternatives
ClearNccModel
Module
Matching
is too small or angleExtent too big, it may happen that the model no longer fits into the (virtual) memory. In
this case, either angleStep must be enlarged or angleExtent must be reduced. In any case, it is desirable
that the model completely fits into the main memory, because this avoids paging by the operating system, and
hence the time to find the object will be much smaller. Since angles can be determined with subpixel resolution
by FindNccModel, angleStep ≥ 1° can be selected for models of a diameter smaller than about 200
pixels. If angleStep = ’auto’ or 0 is selected, CreateNccModel automatically determines a suitable angle
step length based on the size of the model. The automatically computed angle step length can be queried using
GetNccModelParams.
The parameter metric determines the conditions under which the model is recognized in the image. If metric
= ’use_polarity’, the object in the image and the model must have the same contrast. If, for example, the model is
a bright object on a dark background, the object is found only if it is also brighter than the background. If metric
= ’ignore_global_polarity’, the object is found in the image also if the contrast reverses globally. In the above
example, the object hence is also found if it is darker than the background. The runtime of FindNccModel will
increase slightly in this case.
The center of gravity of the domain (region) of the model image template is used as the origin (reference point)
of the model. A different origin can be set with SetNccModelOrigin.
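As a sketch of the default origin computation, the center of gravity of a pixel region can be computed like this (plain Python for illustration; in HALCON this is provided by AreaCenter, and the region below is made up):

```python
# Sketch: the model origin defaults to the center of gravity of the
# template's domain, modeled here as a set of (row, col) pixels.
def area_center(region):
    n = len(region)
    row = sum(r for r, c in region) / n  # mean row coordinate
    col = sum(c for r, c in region) / n  # mean column coordinate
    return row, col

region = {(0, 0), (0, 2), (2, 0), (2, 2)}
print(area_center(region))  # (1.0, 1.0)
```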
Parameter
. template (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; HImage
Input image whose domain will be used to create the model.
. numLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long / string)
Maximum number of pyramid levels.
Default Value : "auto"
List of values : NumLevels ∈ {"auto", 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
. angleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; HTuple (double)
Smallest rotation of the pattern.
Default Value : -0.39
Suggested values : AngleStart ∈ {-3.14, -1.57, -0.79, -0.39, -0.20, 0.0}
. angleExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; HTuple (double)
Extent of the rotation angles.
Default Value : 0.79
Suggested values : AngleExtent ∈ {6.29, 3.14, 1.57, 0.79, 0.39}
Restriction : AngleExtent ≥ 0
. angleStep (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; HTuple (double / string)
Step length of the angles (resolution).
Default Value : "auto"
Suggested values : AngleStep ∈ {"auto", 0, 0.0175, 0.0349, 0.0524, 0.0698, 0.0873}
Restriction : (AngleStep ≥ 0) ∧ (AngleStep ≤ (pi/16))
. metric (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Match metric.
Default Value : "use_polarity"
List of values : Metric ∈ {"use_polarity", "ignore_global_polarity"}
. modelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ncc_model ; HNCCModel / HTuple (IntPtr)
Handle of the model.
Result
If the parameters are valid, the operator CreateNccModel returns the value 2 (H_MSG_TRUE). If the
parameter numLevels is chosen such that the model contains too few points, the error 8510 is raised.
Parallelization Information
CreateNccModel is processed completely exclusively without parallelization.
Possible Predecessors
DrawRegion, ReduceDomain, Threshold
Possible Successors
FindNccModel, GetNccModelParams, ClearNccModel, WriteNccModel,
SetNccModelOrigin
Alternatives
CreateShapeModel, CreateScaledShapeModel, CreateAnisoShapeModel,
CreateTemplateRot
Module
Matching
Here, n denotes the number of points in the template, R denotes the domain (ROI) of the template, m_t is the mean
gray value of the template

    m_t = \frac{1}{n} \sum_{(u,v) \in R} t(u,v)

and s_t^2 is the variance of the gray values of the template

    s_t^2 = \frac{1}{n} \sum_{(u,v) \in R} (t(u,v) - m_t)^2

m_i(r,c) is the mean gray value of the image at position (r,c) over all points of the template (i.e., the template
points are shifted by (r,c))

    m_i(r,c) = \frac{1}{n} \sum_{(u,v) \in R} i(r+u, c+v)

and s_i^2(r,c) is the variance of the gray values of the image at position (r,c) over all points of the template

    s_i^2(r,c) = \frac{1}{n} \sum_{(u,v) \in R} (i(r+u, c+v) - m_i(r,c))^2
The NCC measures how well the template and image correspond at a particular point (r, c). It assumes values
between −1 and 1. The larger the absolute value of the correlation, the larger the degree of correspondence
between the template and image. A value of 1 means that the gray values in the image are a linear transformation
of the gray values in the template:
i(r + u, c + v) = a · t(u, v) + b
where a > 0. Similarly, a value of −1 means that the gray values in the image are a linear transformation of the
gray values in the template with a < 0. Hence, in this case the template occurs with a reversed polarity in the
image. Because of the above property, the NCC is invariant to linear illumination changes.
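Taken together, these quantities define the normalized cross correlation. The following Python sketch (purely illustrative; the data layout is invented, and this is not the HALCON implementation) evaluates ncc(r, c) directly from the definitions above:

```python
import math

def ncc(template, image, r, c):
    """Evaluate ncc(r, c) from the quantities defined above.

    template: dict mapping template points (u, v) in R to gray values t(u, v)
    image:    2-D list of gray values i(row, col)
    """
    n = len(template)
    m_t = sum(template.values()) / n                        # mean of the template
    s2_t = sum((t - m_t) ** 2 for t in template.values()) / n
    patch = {p: image[r + p[0]][c + p[1]] for p in template}
    m_i = sum(patch.values()) / n                           # mean over the shifted points
    s2_i = sum((v - m_i) ** 2 for v in patch.values()) / n
    cov = sum((template[p] - m_t) * (patch[p] - m_i) for p in template) / n
    return cov / math.sqrt(s2_t * s2_i)

# i = 2*t + 5 (a > 0): a perfect linear transformation yields ncc = 1
t = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 3.0, (1, 1): 4.0}
img = [[7.0, 9.0], [11.0, 13.0]]
score = ncc(t, img, 0, 0)   # 1.0
```

With a < 0 (e.g., i = −t + 5), the same computation yields −1, the reversed-polarity case described above.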
The NCC as defined above is used if the NCC model has been created with Metric = ’use_polarity’. If the model
has been created with Metric = ’ignore_global_polarity’, the absolute value of ncc(r, c) is used as the score.
It should be noted that the NCC is very sensitive to occlusion and clutter as well as to nonlinear illumination changes in the image. If a model should be found in the presence of occlusion, clutter, or nonlinear illumination changes, the search should be performed using shape-based matching (see, e.g., CreateShapeModel).
The domain of the image image determines the search space for the reference point of the model, i.e., for the center of gravity of the domain (region) of the image that was used to create the NCC model with CreateNccModel. A different origin set with SetNccModelOrigin is not taken into account here. The model is searched only at those points of the domain of the image at which the model lies completely within the image. This means that the model will not be found if it extends beyond the borders of the image, even if it would achieve a score greater than minScore (see below).
The parameters angleStart and angleExtent determine the range of rotations for which the model is
searched. If necessary, the range of rotations is clipped to the range given when the model was created with
CreateNccModel. In particular, this means that the angle ranges of the model and the search must truly overlap.
The angle range in the search is not adapted modulo 2π. To simplify the presentation, all angles in the remainder
of the paragraph are given in degrees, whereas they have to be specified in radians in FindNccModel. Hence,
if the model, for example, was created with angleStart = −20◦ and angleExtent = 40◦ and the angle
search space in FindNccModel is, for example, set to angleStart = 350◦ and angleExtent = 20◦ , the
model will not be found, even though the angle ranges would overlap if they were regarded modulo 360◦ . To find
the model, in this example it would be necessary to select angleStart = −10◦ .
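The non-modulo overlap test described above can be sketched as a plain interval intersection (illustrative Python; the helper is invented for this example, with angles in radians as required by FindNccModel):

```python
import math

def angle_ranges_overlap(model_start, model_extent, search_start, search_extent):
    """True if the search angle interval intersects the model's interval.
    As described above, the intervals are NOT reduced modulo 2*pi."""
    return (max(model_start, search_start)
            < min(model_start + model_extent, search_start + search_extent))

deg = math.pi / 180.0
# Model created with angleStart = -20 deg, angleExtent = 40 deg:
found_350 = angle_ranges_overlap(-20 * deg, 40 * deg, 350 * deg, 20 * deg)  # False
found_m10 = angle_ranges_overlap(-20 * deg, 40 * deg, -10 * deg, 20 * deg)  # True
```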
The parameter minScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger minScore is chosen, the faster the search is.
The maximum number of instances to be found can be determined with numMatches. If more than
numMatches instances with a score greater than minScore are found in the image, only the best numMatches
instances are returned. If fewer than numMatches are found, only that number is returned, i.e., the parameter
minScore takes precedence over numMatches.
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rotations
are found in the image. If the model has repeating structures it may happen that multiple instances with identical
rotations are found at similar positions in the image. The parameter maxOverlap determines by what fraction
(i.e., a number between 0 and 1) two instances may at most overlap in order to consider them as different instances,
and hence to be returned separately. If two instances overlap each other by more than maxOverlap only the
best instance is returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary
orientation (see SmallestRectangle2) of the found instances. If maxOverlap = 0, the found instances
may not overlap at all, while for maxOverlap = 1 all instances are returned.
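The interplay of minScore, numMatches, and maxOverlap can be sketched as a greedy selection over score-sorted candidates (illustrative Python; the overlap function is a caller-supplied stand-in, whereas HALCON derives the overlap from the smallest enclosing rectangles of arbitrary orientation):

```python
def select_instances(candidates, min_score, num_matches, max_overlap, overlap):
    """Greedy sketch of the minScore / numMatches / maxOverlap logic.

    candidates: list of (score, instance) pairs
    overlap:    function returning the overlap fraction in [0, 1] of two instances
    """
    found = []
    for score, inst in sorted(candidates, key=lambda c: -c[0]):
        if score < min_score or len(found) == num_matches:
            break  # minScore takes precedence over numMatches
        if all(overlap(inst, kept) <= max_overlap for _, kept in found):
            found.append((score, inst))
    return found

# Toy overlap: 1.0 for instances at the same position, 0.0 otherwise
same = lambda a, b: 1.0 if a == b else 0.0
matches = select_instances([(0.9, 'A'), (0.8, 'A'), (0.7, 'B'), (0.4, 'C')],
                           0.5, 5, 0.0, same)
scores = [s for s, _ in matches]   # [0.9, 0.7]
```

The 0.8 candidate is discarded because it fully overlaps the better instance at 'A', and the 0.4 candidate falls below minScore.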
The parameter subPixel determines whether the instances should be extracted with subpixel accuracy. If
subPixel is set to ’false’, the model’s pose is only determined with pixel accuracy and the angle resolution
that was specified with CreateNccModel. If subPixel is set to ’true’, the position as well as the rotation are
determined with subpixel accuracy. In this mode, the model’s pose is interpolated from the score function. This
mode costs almost no computation time and achieves a high accuracy. Hence, subPixel should usually be set to
’true’.
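The idea of interpolating the pose from the score function can be illustrated with a 1-D parabola fit through three score samples (a generic textbook technique, shown here only as an analogy; HALCON's actual interpolation scheme is not specified in this section):

```python
def refine_peak(s_minus, s_0, s_plus):
    """Fit a parabola through the score at the discrete maximum and its two
    neighbors; return the subpixel offset in (-0.5, 0.5) to add to the
    discrete position."""
    denom = s_minus - 2.0 * s_0 + s_plus
    if denom == 0.0:
        return 0.0
    return 0.5 * (s_minus - s_plus) / denom

# Scores sampled from a parabola whose true peak lies at offset +0.3
offset = refine_peak(-0.69, 0.91, 0.51)   # ~0.3
```

As the text notes, this kind of refinement costs almost no computation time, which is why subPixel = 'true' is usually recommended.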
The number of pyramid levels used during the search is determined with numLevels. If necessary, the number of
levels is clipped to the range given when the shape model was created with CreateNccModel. If numLevels
is set to 0, the number of pyramid levels specified in CreateNccModel is used. Optionally, numLevels can
contain a second value that determines the lowest pyramid level to which the found matches are tracked. Hence, a
value of [4,2] for numLevels means that the matching starts at the fourth pyramid level and tracks the matches
to the second lowest pyramid level (the lowest pyramid level is denoted by a value of 1). This mechanism can
be used to decrease the runtime of the matching. It should be noted, however, that in general the accuracy of the
extracted pose parameters is lower in this mode than in the normal mode, in which the matches are tracked to the
lowest pyramid level. If the lowest pyramid level to use is chosen too large, it may happen that the desired accuracy
cannot be achieved, or that wrong instances of the model are found because the model is not specific enough on the
higher pyramid levels to facilitate a reliable selection of the correct instance of the model. In this case, the lowest
pyramid level to use must be set to a smaller value.
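The meaning of a two-valued numLevels such as [4,2] can be sketched as follows (illustrative Python; the averaging pyramid is a crude stand-in for HALCON's internal image pyramid):

```python
def halve(img):
    """One pyramid step: average 2x2 blocks of a 2-D list of gray values."""
    return [[(img[2*r][2*c] + img[2*r][2*c+1]
              + img[2*r+1][2*c] + img[2*r+1][2*c+1]) / 4.0
             for c in range(len(img[0]) // 2)]
            for r in range(len(img) // 2)]

image = [[float((r + c) % 7) for c in range(16)] for r in range(16)]
pyramid = [image]                  # level 1 is the original image
for _ in range(3):                 # build levels 2..4
    pyramid.append(halve(pyramid[-1]))

start_level, last_level = 4, 2     # numLevels = [4, 2]
# Matching starts on level 4 and tracks the matches down to level 2,
# never touching the full-resolution level 1:
searched = [(level, len(pyramid[level - 1]))
            for level in range(start_level, last_level - 1, -1)]
# searched == [(4, 2), (3, 4), (2, 8)]
```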
Parameter
Result
If the parameter values are correct, the operator FindNccModel returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available), the behavior can be set via SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
FindNccModel is reentrant and processed without parallelization.
Possible Predecessors
CreateNccModel, ReadNccModel, SetNccModelOrigin
Possible Successors
ClearNccModel
Alternatives
FindShapeModel, FindScaledShapeModel, FindAnisoShapeModel, FindShapeModels,
FindScaledShapeModels, FindAnisoShapeModels, BestMatchRotMg
Module
Matching
Result
If the handle of the model is valid, the operator GetNccModelOrigin returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Parallelization Information
GetNccModelOrigin is reentrant and processed without parallelization.
Possible Predecessors
CreateNccModel, ReadNccModel, SetNccModelOrigin
Possible Successors
FindNccModel
See also
AreaCenter
Module
Matching
Possible Successors
FindNccModel, GetNccModelOrigin
See also
AreaCenter
Module
Matching
7.3 Gray-Value-Based
Parallelization Information
AdaptTemplate is reentrant and processed without parallelization.
Possible Predecessors
CreateTemplate, CreateTemplateRot, ReadTemplate
Possible Successors
SetReferenceTemplate, BestMatch, FastMatch, FastMatchMg, SetOffsetTemplate,
BestMatchMg, BestMatchPreMg, BestMatchRot, BestMatchRotMg
Module
Matching
alternative method, the mode whichLevels with value ’original’ can be used. In this case not only the position
with the lowest error but all positions with an error below maxError are analyzed further on the next higher
resolution level. This method is slower, but it is more stable, and the probability of missing the correct position is
very low. In this case it is often possible to start with a lower resolution (a higher pyramid level, i.e., a larger value
for numLevels), which leads to a reduced runtime. Besides the values ’all’ and ’original’ for whichLevels,
you can explicitly specify the pyramid level at which to switch between “match all” and “best match”. Here 0
corresponds to ’original’ and numLevels - 1 is equivalent to ’all’. A value in between is in most cases a good
compromise between speed and stable detection. A larger value for whichLevels results in a reduced runtime;
a smaller value results in a more stable detection. The value of numLevels has to be equal to or smaller than the
value used to create the template.
The position of the found match is returned in row and column. The corresponding error is given
in error. If no position with an error below maxError is found, a value of 255 for error and 0 for row
and column is returned. If the desired object is missed (no object found or wrong position), you have to set
maxError higher or whichLevels lower. Check also whether the illumination has changed (see
SetOffsetTemplate).
The maximum error of the position (without noise) is 0.1 pixel. The average error is 0.03 pixel.
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Input image inside of which the pattern has to be found.
. templateID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . template ; HTemplate / HTuple (IntPtr)
Template number.
. maxError (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Maximal average difference of the grayvalues.
Default Value : 30
Suggested values : MaxError ∈ {0, 1, 2, 3, 4, 5, 6, 7, 9, 11, 15, 17, 20, 30, 40, 50, 60, 70}
Typical range of values : 0 ≤ MaxError ≤ 255
Minimum Increment : 1
Recommended Increment : 3
. subPixel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Exactness in subpixels in case of ’true’.
Default Value : "false"
List of values : SubPixel ∈ {"true", "false"}
. numLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Number of the used resolution levels.
Default Value : 4
List of values : NumLevels ∈ {1, 2, 3, 4, 5, 6}
. whichLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long / string)
Resolution level up to which the method “best match” is used.
Default Value : 2
Suggested values : WhichLevels ∈ {"all", "original", 0, 1, 2, 3, 4, 5, 6}
. row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; HTuple (double)
Row position of the best match.
. column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; HTuple (double)
Column position of the best match.
. error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Average divergence of the grayvalues in the best match.
Result
If the parameter values are correct, the operator BestMatchMg returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available), the behavior can be set via SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
BestMatchMg is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
CreateTemplate, ReadTemplate, AdaptTemplate, DrawRegion, DrawRectangle1,
ReduceDomain, SetReferenceTemplate, SetOffsetTemplate
Alternatives
FastMatch, FastMatchMg, BestMatch, BestMatchPreMg, BestMatchRot, BestMatchRotMg,
ExhaustiveMatch, ExhaustiveMatchMg
Module
Matching
time and the higher sensitivity to changes in the shape of the pattern. The mode ’gradient’ is slightly faster but more
sensitive to noise.
The maximum error for matching has typically to be chosen higher when using the edge amplitude. The mode
chosen by grayValues leads automatically to calling the appropriate filter during matching — if necessary.
As an alternative to the gradient approach the operator SetOffsetTemplate can be used, if the change in
illumination is known.
The parameter optimize specifies whether the pattern is to be optimized for runtime. This optimization results
in a longer time to create the template but reduces the time for matching. In addition, the optimization leads to a
more stable matching, i.e., the probability of missing good matches is reduced. The optimization process selects
the most stable and significant gray values to be tested first during the matching process. Using this technique, a
wrong match can be eliminated very early.
The reference position for the template is its center of gravity, i.e., if you apply the template to the original image,
the center of gravity is returned. This default reference can be adapted using the operator
SetReferenceTemplate.
In subpixel mode a special position correction is calculated, which is added after each matching: the template is
applied to the original image, and the difference between the found position and the center of gravity is used as a
correction vector. This is important for patterns in a textured context or for asymmetric patterns. For most
templates this correction vector is close to zero.
If the pattern is no longer used, it has to be freed by the operator ClearTemplate in order to deallocate the
memory.
Before its use, the template, which is stored independently of the image size, can be adapted explicitly to a specific
image size by using AdaptTemplate.
Parameter
Alternatives
CreateTemplateRot, ReadTemplate
Module
Matching
M = (A · 12 · angleExtent) / angleStep
After the transformation, a number (templateID) is assigned to the template for being used in the further
process.
A description of the other parameters can be found at the operator CreateTemplate.
Attention
Be aware that, depending on the resolution, a large number of precalculated patterns have to be created, which may
result in a large amount of memory being needed.
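Plugging example numbers into the formula above gives a feeling for how quickly the number of precalculated patterns grows (illustrative only; here A is assumed to denote the area of the template in pixels, and the chosen values are arbitrary):

```python
A = 128 * 128            # assumed: area of the template in pixels
angle_extent = 6.28      # full rotation range in radians
angle_step = 0.0175      # about 1 degree

# Per the formula above: M = (A * 12 * angleExtent) / angleStep
M = A * 12 * angle_extent / angle_step   # roughly 7e7
```

Halving angle_step doubles M, which is why a coarse angle resolution is advisable for large templates.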
Parameter
The difference between FastMatch and ExhaustiveMatch is that the matching for one position is stopped
if the error is too high. This leads to a reduced runtime, but one might miss correct matches. The runtime of the
operator depends mainly on the size of the domain of image. Therefore it is important to restrict the domain as
far as possible, i.e., to apply the operator only in a very confined “region of interest”. The parameter maxError
determines the maximal error which the searched position is allowed to show. The lower this value is, the faster
the operator runs.
All points which show a matching error smaller than maxError are returned in the output region matches.
This region can be used for further processing, for example, by using Connection and BestMatch to find all
the matching objects. If no point has a match error below maxError, the empty region (i.e., no points) is returned.
Parameter
showing an error small enough in the scaled-down image (error smaller than maxError) will be refined at the
corresponding positions in the original image (image).
The runtime of matching depends on the parameter maxError: the larger the value, the longer the processing
time, because more points of the pattern have to be tested. If maxError is too low, the pattern will not be found.
The value therefore has to be optimized for every application.
numLevel indicates the number of levels of the pyramid, including the original image. Optionally, a second value
can be given. This value specifies the number (0..n) of the lowest level which is used for the matching. The region
found up to this level will then be zoomed to the size of the original level. This can be used to decrease the runtime
in cases where the accuracy does not have to be as high.
Parameter
7.4 Shape-Based
the highest pyramid level. If this procedure would lead to a model with no pyramid levels, i.e., if the number of
model points is already too small on the lowest pyramid level, CreateAnisoShapeModel returns with an
error message. If numLevels is set to ’auto’ (or 0 for backwards compatibility), CreateAnisoShapeModel
determines the number of pyramid levels automatically. The automatically computed number of pyramid levels can
be queried using GetShapeModelParams. In rare cases, it might happen that CreateAnisoShapeModel
determines a value for the number of pyramid levels that is too large or too small. If the number of pyramid levels is
chosen too large, the model may not be recognized in the image or it may be necessary to select very low parameters
for MinScore or Greediness in FindAnisoShapeModel in order to find the model. If the number of pyramid
levels is chosen too small, the time required to find the model in FindAnisoShapeModel may increase. In
these cases, the number of pyramid levels should be selected using the output of InspectShapeModel.
The parameters angleStart and angleExtent determine the range of possible rotations, in which
the model can occur in the image. Note that the model can only be found in this range of angles by
FindAnisoShapeModel. The parameter angleStep determines the step length within the selected range
of angles. Hence, if subpixel accuracy is not specified in FindAnisoShapeModel, this parameter specifies
the accuracy that is achievable for the angles in FindAnisoShapeModel. angleStep should be chosen
based on the size of the object. Smaller models do not have many different discrete rotations in the image, and
hence angleStep should be chosen larger for smaller models. If angleExtent is not an integer multiple of
angleStep, angleStep is modified accordingly.
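One plausible reading of "modified accordingly" is to shrink the step until the extent is an integer multiple of it (an assumption for illustration; the exact internal rule is not documented in this section):

```python
import math

def adjust_step(extent, step):
    """Shrink the step so that extent becomes an integer multiple of it.
    Hypothetical helper illustrating one possible adjustment rule."""
    n = max(1, math.ceil(extent / step))
    return extent / n

step_a = adjust_step(0.7, 0.2)   # -> 0.175, since 0.7 = 4 * 0.175
step_b = adjust_step(0.6, 0.2)   # unchanged, since 0.6 is already 3 * 0.2
```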
The parameters scaleRMin, scaleRMax, scaleCMin, and scaleCMax determine the range of possible
anisotropic scales of the model in the row and column direction. A scale of 1 in both scale factors corresponds to
the original size of the model. The parameters scaleRStep and scaleCStep determine the step length within
the selected range of scales. Hence, if subpixel accuracy is not specified in FindAnisoShapeModel, these pa-
rameters specify the accuracy that is achievable for the scales in FindAnisoShapeModel. Like angleStep,
scaleRStep and scaleCStep should be chosen based on the size of the object. If the respective range of
scales is not an integer multiple of scaleRStep and scaleCStep, scaleRStep and scaleCStep are
modified accordingly.
Note that the transformations are treated internally such that the scalings are applied first, followed by the rotation.
Therefore, the model should usually be aligned such that it appears horizontally or vertically in the model image.
If a complete pregeneration of the model is selected (see below), the model is pre-generated for the selected angle
and scale range and stored in memory. The memory required to store the model is proportional to the num-
ber of angle steps, the number of scale steps, and the number of points in the model. Hence, if angleStep,
scaleRStep, or scaleCStep are too small or angleExtent or the range of scales are too big, it may
happen that the model no longer fits into the (virtual) memory. In this case, angleStep, scaleRStep, or
scaleCStep must be enlarged or angleExtent or the range of scales must be reduced. In any case, it is desir-
able that the model completely fits into the main memory, because this avoids paging by the operating system, and
hence the time to find the object will be much smaller. Since angles can be determined with subpixel resolution by
FindAnisoShapeModel, angleStep ≥ 1◦ and scaleRStep, scaleCStep ≥ 0.02 can be selected for
models of a diameter smaller than about 200 pixels. If angleStep = ’auto’ or scaleRStep, scaleCStep =
’auto’ (or 0 for backwards compatibility in both cases) is selected, CreateAnisoShapeModel automatically
determines a suitable angle or scale step length, respectively, based on the size of the model. The automatically
computed angle and scale step lengths can be queried using GetShapeModelParams.
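The stated proportionality can be turned into a rough size estimate for a completely pregenerated model (illustrative Python; bytes_per_point and the step counting with "+1" are assumptions for illustration, not HALCON figures):

```python
def pregenerated_size(num_points, angle_extent, angle_step,
                      scale_r_extent, scale_r_step,
                      scale_c_extent, scale_c_step,
                      bytes_per_point=16):
    """Order-of-magnitude memory estimate: proportional to the number of
    angle steps x row-scale steps x column-scale steps x model points."""
    angle_steps = max(1, round(angle_extent / angle_step) + 1)
    sr_steps = max(1, round(scale_r_extent / scale_r_step) + 1)
    sc_steps = max(1, round(scale_c_extent / scale_c_step) + 1)
    return num_points * angle_steps * sr_steps * sc_steps * bytes_per_point

# 1000 model points, full rotation in ~1 deg steps, +/-10% scale in 2% steps:
size = pregenerated_size(1000, 6.28, 0.0175, 0.2, 0.02, 0.2, 0.02)
# size == 696960000, i.e., roughly 0.7 GB for this single pyramid level
```

Such numbers show why enlarging the steps or reducing the ranges quickly becomes necessary for anisotropically scaled models.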
If a complete pregeneration of the model is not selected, the model is only created in a reference pose on each
pyramid level. In this case, the model must be transformed to the different angles and scales at runtime in
FindAnisoShapeModel. Because of this, the recognition of the model might require slightly more time.
For particularly large models, it may be useful to reduce the number of model points by setting optimization
to a value different from ’none’. If optimization = ’none’, all model points are stored. In all other cases, the
number of points is reduced according to the value of optimization. If the number of points is reduced, it may
be necessary in FindAnisoShapeModel to set the parameter Greediness to a smaller value, e.g., 0.7 or 0.8.
For small models, the reduction of the number of model points does not result in a speed-up of the search because
in this case usually significantly more potential instances of the model must be examined. If optimization
is set to ’auto’, CreateAnisoShapeModel automatically determines the reduction of the number of model
points.
Optionally, a second value can be passed in optimization. This value determines whether the model is pre-
generated completely or not. To do so, the second value of optimization must be set to either ’pregeneration’
or ’no_pregeneration’. If the second value is not used (i.e., if only one value is passed), the mode that is set
with SetSystem(’pregenerate_shape_models’,...) is used. With the default value (’pregener-
ate_shape_models’ = ’false’), the model is not pregenerated completely. The complete pregeneration of the model
normally leads to slightly lower runtimes because the model does not need to be transformed at runtime. However,
in this case, the memory requirements and the time required to create the model are significantly higher. It should
also be noted that it cannot be expected that the two modes return exactly identical results because transforming
the model at runtime necessarily leads to different internal data for the transformed models than pregenerating
the transformed models. For example, if the model is not pregenerated completely, FindAnisoShapeModel
typically returns slightly lower scores, which may require setting a slightly lower value for MinScore than for a
completely pregenerated model. Furthermore, the poses obtained by interpolation may differ slightly in the two
modes. If maximum accuracy is desired, the pose of the model should be determined by least-squares adjustment.
The parameter contrast determines the contrast the model points must have. The contrast is a measure for
local gray value differences between the object and the background and between different parts of the object.
contrast should be chosen such that only the significant features of the template are used for the model.
contrast can also contain a tuple with two values. In this case, the model is segmented using a method sim-
ilar to the hysteresis threshold method used in EdgesImage. Here, the first element of the tuple determines
the lower threshold, while the second element determines the upper threshold. For more information about the
hysteresis threshold method, see HysteresisThreshold. Optionally, contrast can contain a third value
as the last element of the tuple. This value determines a threshold for the selection of significant model compo-
nents based on the size of the components, i.e., components that have fewer points than the minimum size thus
specified are suppressed. This threshold for the minimum size is divided by two for each successive pyramid
level. If small model components should be suppressed, but hysteresis thresholding should not be performed,
nevertheless three values must be specified in contrast. In this case, the first two values can simply be set
to identical values. The effect of this parameter can be checked in advance with InspectShapeModel. If
contrast is set to ’auto’, CreateAnisoShapeModel determines the three above described values auto-
matically. Alternatively, only the contrast (’auto_contrast’), the hysteresis thresholds (’auto_contrast_hyst’), or
the minimum size (’auto_min_size’) can be determined automatically. The remaining values that are not deter-
mined automatically can additionally be passed in the form of a tuple. Also various combinations are allowed:
If, for example, [’auto_contrast’,’auto_min_size’] is passed, both the contrast and the minimum size are deter-
mined automatically. If [’auto_min_size’,20,30] is passed, the minimum size is determined automatically while
the hysteresis thresholds are set to 20 and 30, etc. In certain cases, it might happen that the automatic determi-
nation of the contrast thresholds is not satisfying. For example, a manual setting of these parameters should be
preferred if certain model components should be included or suppressed because of application-specific reasons
or if the object contains several different contrasts. Therefore, the contrast thresholds should be automatically
determined with DetermineShapeModelParams and subsequently verified using InspectShapeModel
before calling CreateAnisoShapeModel.
With minContrast, it can be determined which contrast the model must at least have in the recognition per-
formed by FindAnisoShapeModel. In other words, this parameter separates the model from the noise in
the image. Therefore, a good choice is the range of gray value changes caused by the noise in the image. If,
for example, the gray values fluctuate within a range of 10 gray levels, minContrast should be set to 10. If
multichannel images are used for the model and the search images, and if the parameter metric is set to ’ig-
nore_color_polarity’ (see below) the noise in one channel must be multiplied by the square root of the number
of channels to determine minContrast. If, for example, the gray values fluctuate within a range of 10 gray
levels in a single channel and the image is a three-channel image, minContrast should be set to 17. Obviously,
minContrast must be smaller than contrast. If the model should be recognized in very low contrast im-
ages, minContrast must be set to a correspondingly small value. If the model should be recognized even if it
is severely occluded, minContrast should be slightly larger than the range of gray value fluctuations created
by noise in order to ensure that the position and rotation of the model are extracted robustly and accurately by
FindAnisoShapeModel. If minContrast is set to ’auto’, the minimum contrast is determined automati-
cally based on the noise in the model image. Consequently, an automatic determination only makes sense if the
image noise during the recognition is similar to the noise in the model image. Furthermore, in some cases it is
advisable to increase the automatically determined value in order to increase the robustness against occlusions (see
above). The automatically computed minimum contrast can be queried using GetShapeModelParams.
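The square-root rule for multichannel images stated above can be computed directly (a small sketch of that rule, nothing more):

```python
import math

def min_contrast_multichannel(noise_range, num_channels):
    """Scale the single-channel noise range by sqrt(#channels), as described
    above for metric = 'ignore_color_polarity'."""
    return round(noise_range * math.sqrt(num_channels))

three_channel = min_contrast_multichannel(10, 3)   # 17
single_channel = min_contrast_multichannel(10, 1)  # 10
```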
The parameter metric determines the conditions under which the model is recognized in the image. If metric
= ’use_polarity’, the object in the image and the model must have the same contrast. If, for example, the
model is a bright object on a dark background, the object is found only if it is also brighter than the back-
ground. If metric = ’ignore_global_polarity’, the object is found in the image also if the contrast reverses
globally. In the above example, the object hence is also found if it is darker than the background. The runtime of
FindAnisoShapeModel will increase slightly in this case. If metric = ’ignore_local_polarity’, the model
is found even if the contrast changes locally. This mode can, for example, be useful if the object consists of a
part with medium gray value, within which either darker or brighter sub-objects lie. Since in this case the runtime
of FindAnisoShapeModel increases significantly, it is usually better to create several models that reflect the
possible contrast variations of the object with CreateAnisoShapeModel, and to match them simultaneously
with FindAnisoShapeModels. The above three metrics can only be applied to single-channel images. If a
multichannel image is used as the model image or as the search image only the first channel will be used (and no er-
ror message will be returned). If metric = ’ignore_color_polarity’, the model is found even if the color contrast
changes locally. This is, for example, the case if parts of the object can change their color, e.g., from red to green. In
particular, this mode is useful if it is not known in advance in which channels the object is visible. In this mode, the
runtime of FindAnisoShapeModel can also increase significantly. The metric ’ignore_color_polarity’ can be
used for images with an arbitrary number of channels. If it is used for single-channel images it has the same effect
as ’ignore_local_polarity’. It should be noted that for metric = ’ignore_color_polarity’ the number of channels
in the model creation with CreateAnisoShapeModel and in the search with FindAnisoShapeModel
can be different. This can, for example, be used to create a model from a synthetically generated single-channel
image. Furthermore, it should be noted that the channels do not need to contain a spectral subdivision of the light
(like in an RGB image). The channels can, for example, also contain images of the same object that were obtained
by illuminating the object from different directions.
The center of gravity of the domain (region) of the model image template is used as the origin (reference point)
of the model. A different origin can be set with SetShapeModelOrigin.
Parameter
these cases, the number of pyramid levels should be selected using the output of InspectShapeModel.
The parameters angleStart and angleExtent determine the range of possible rotations, in which
the model can occur in the image. Note that the model can only be found in this range of angles by
FindScaledShapeModel. The parameter angleStep determines the step length within the selected range
of angles. Hence, if subpixel accuracy is not specified in FindScaledShapeModel, this parameter specifies
the accuracy that is achievable for the angles in FindScaledShapeModel. angleStep should be chosen
based on the size of the object. Smaller models do not have many different discrete rotations in the image, and
hence angleStep should be chosen larger for smaller models. If angleExtent is not an integer multiple of
angleStep, angleStep is modified accordingly.
The parameters scaleMin and scaleMax determine the range of possible scales (sizes) of the model. A scale of
1 corresponds to the original size of the model. The parameter scaleStep determines the step length within the
selected range of scales. Hence, if subpixel accuracy is not specified in FindScaledShapeModel, this param-
eter specifies the accuracy that is achievable for the scales in FindScaledShapeModel. Like angleStep,
scaleStep should be chosen based on the size of the object. If the range of scales is not an integer multiple of
scaleStep, scaleStep is modified accordingly.
If a complete pregeneration of the model is selected (see below), the model is pre-generated for the selected angle
and scale range and stored in memory. The memory required to store the model is proportional to the number
of angle steps, the number of scale steps, and the number of points in the model. Hence, if angleStep or
scaleStep are too small or angleExtent or the range of scales are too big, it may happen that the model
no longer fits into the (virtual) memory. In this case, either angleStep or scaleStep must be enlarged or
angleExtent or the range of scales must be reduced. In any case, it is desirable that the model completely fits
into the main memory, because this avoids paging by the operating system, and hence the time to find the object
will be much smaller. Since angles can be determined with subpixel resolution by FindScaledShapeModel,
angleStep ≥ 1° and scaleStep ≥ 0.02 can be selected for models of a diameter smaller than about 200
pixels. If angleStep = ’auto’ or scaleStep = ’auto’ (or 0 for backwards compatibility in both cases) is
selected, CreateScaledShapeModel automatically determines a suitable angle or scale step length, respec-
tively, based on the size of the model. The automatically computed angle and scale step lengths can be queried
using GetShapeModelParams.
If a complete pregeneration of the model is not selected, the model is only created in a reference pose on each
pyramid level. In this case, the model must be transformed to the different angles and scales at runtime in
FindScaledShapeModel. Because of this, the recognition of the model might require slightly more time.
For particularly large models, it may be useful to reduce the number of model points by setting optimization
to a value different from ’none’. If optimization = ’none’, all model points are stored. In all other cases,
the number of points is reduced according to the value of optimization. If the number of points is reduced,
it may be necessary in FindScaledShapeModel to set the parameter Greediness to a smaller value, e.g.,
0.7 or 0.8. For small models, the reduction of the number of model points does not result in a speed-up of the
search because in this case usually significantly more potential instances of the model must be examined. If
optimization is set to ’auto’, CreateScaledShapeModel automatically determines the reduction of the
number of model points.
Optionally, a second value can be passed in optimization. This value determines whether the model is pre-
generated completely or not. To do so, the second value of optimization must be set to either ’pregeneration’
or ’no_pregeneration’. If the second value is not used (i.e., if only one value is passed), the mode that is set
with SetSystem(’pregenerate_shape_models’,...) is used. With the default value (’pregener-
ate_shape_models’ = ’false’), the model is not pregenerated completely. The complete pregeneration of the model
normally leads to slightly lower runtimes because the model does not need to be transformed at runtime. However,
in this case, the memory requirements and the time required to create the model are significantly higher. It should
also be noted that it cannot be expected that the two modes return exactly identical results because transforming
the model at runtime necessarily leads to different internal data for the transformed models than pregenerating the
transformed models. For example, if the model is not pregenerated completely, FindScaledShapeModel
typically returns slightly lower scores, which may require setting a slightly lower value for MinScore than for a
completely pregenerated model. Furthermore, the poses obtained by interpolation may differ slightly in the two
modes. If maximum accuracy is desired, the pose of the model should be determined by least-squares adjustment.
The parameter contrast determines the contrast the model points must have. The contrast is a measure for
local gray value differences between the object and the background and between different parts of the object.
contrast should be chosen such that only the significant features of the template are used for the model.
contrast can also contain a tuple with two values. In this case, the model is segmented using a method sim-
ilar to the hysteresis threshold method used in EdgesImage. Here, the first element of the tuple determines
the lower threshold, while the second element determines the upper threshold. For more information about the
hysteresis threshold method, see HysteresisThreshold. Optionally, contrast can contain a third value
as the last element of the tuple. This value determines a threshold for the selection of significant model compo-
nents based on the size of the components, i.e., components that have fewer points than the minimum size thus
specified are suppressed. This threshold for the minimum size is divided by two for each successive pyramid
level. If small model components should be suppressed but hysteresis thresholding should not be performed, three
values must nevertheless be specified in contrast. In this case, the first two values can simply be set
to identical values. The effect of this parameter can be checked in advance with InspectShapeModel. If
contrast is set to ’auto’, CreateScaledShapeModel determines the three above described values auto-
matically. Alternatively, only the contrast (’auto_contrast’), the hysteresis thresholds (’auto_contrast_hyst’), or
the minimum size (’auto_min_size’) can be determined automatically. The remaining values that are not deter-
mined automatically can additionally be passed in the form of a tuple. Also various combinations are allowed:
If, for example, [’auto_contrast’,’auto_min_size’] is passed, both the contrast and the minimum size are deter-
mined automatically. If [’auto_min_size’,20,30] is passed, the minimum size is determined automatically while
the hysteresis thresholds are set to 20 and 30, etc. In certain cases, the automatic determination of the contrast
thresholds may not be satisfactory. For example, a manual setting of these parameters should be
preferred if certain model components should be included or suppressed because of application-specific reasons
or if the object contains several different contrasts. In these cases, the contrast thresholds should be determined
automatically with DetermineShapeModelParams and subsequently verified using InspectShapeModel
before calling CreateScaledShapeModel.
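The per-level halving of the minimum-size threshold described above (the optional third value of contrast) can be illustrated as follows; this is a sketch of the stated rule, not HALCON internals:

```python
def min_size_per_level(min_size, num_levels):
    """Minimum component size on each pyramid level: the threshold given
    in contrast applies to the lowest level and is divided by two for
    each successive (coarser) level. Illustrative sketch only."""
    return [min_size / (2 ** level) for level in range(num_levels)]

# A minimum size of 40 points on level 0 becomes 20, 10, 5 on levels 1-3.
sizes = min_size_per_level(40, 4)   # [40.0, 20.0, 10.0, 5.0]
```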
With minContrast, it can be determined which contrast the model must at least have in the recognition per-
formed by FindScaledShapeModel. In other words, this parameter separates the model from the noise in
the image. Therefore, a good choice is the range of gray value changes caused by the noise in the image. If,
for example, the gray values fluctuate within a range of 10 gray levels, minContrast should be set to 10. If
multichannel images are used for the model and the search images, and if the parameter metric is set to ’ig-
nore_color_polarity’ (see below) the noise in one channel must be multiplied by the square root of the number
of channels to determine minContrast. If, for example, the gray values fluctuate within a range of 10 gray
levels in a single channel and the image is a three-channel image, minContrast should be set to 17. Obviously,
minContrast must be smaller than contrast. If the model should be recognized in very low contrast im-
ages, minContrast must be set to a correspondingly small value. If the model should be recognized even if it
is severely occluded, minContrast should be slightly larger than the range of gray value fluctuations created
by noise in order to ensure that the position and rotation of the model are extracted robustly and accurately by
FindScaledShapeModel. If minContrast is set to ’auto’, the minimum contrast is determined automat-
ically based on the noise in the model image. Consequently, an automatic determination only makes sense if the
image noise during the recognition is similar to the noise in the model image. Furthermore, in some cases it is
advisable to increase the automatically determined value in order to increase the robustness against occlusions (see
above). The automatically computed minimum contrast can be queried using GetShapeModelParams.
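The square-root scaling of minContrast for multichannel images described above can be computed directly; this is a sketch of the stated rule, not a HALCON call:

```python
import math

def min_contrast_multichannel(noise_range, num_channels):
    """Scale the single-channel noise range by the square root of the
    number of channels, as described for metric = 'ignore_color_polarity'."""
    return round(noise_range * math.sqrt(num_channels))

# Noise of 10 gray levels per channel in a three-channel image:
min_contrast_multichannel(10, 3)   # -> 17, matching the example in the text
```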
The parameter metric determines the conditions under which the model is recognized in the image. If metric
= ’use_polarity’, the object in the image and the model must have the same contrast. If, for example, the
model is a bright object on a dark background, the object is found only if it is also brighter than the back-
ground. If metric = ’ignore_global_polarity’, the object is found in the image also if the contrast reverses
globally. In the above example, the object hence is also found if it is darker than the background. The runtime
of FindScaledShapeModel will increase slightly in this case. If metric = ’ignore_local_polarity’, the
model is found even if the contrast changes locally. This mode can, for example, be useful if the object consists
of a part with medium gray value, within which either darker or brighter sub-objects lie. Since in this case the
runtime of FindScaledShapeModel increases significantly, it is usually better to create several models that
reflect the possible contrast variations of the object with CreateScaledShapeModel, and to match them si-
multaneously with FindScaledShapeModels. The above three metrics can only be applied to single-channel
images. If a multichannel image is used as the model image or as the search image, only the first channel will
be used (and no error message will be returned). If metric = ’ignore_color_polarity’, the model is found even
if the color contrast changes locally. This is, for example, the case if parts of the object can change their color,
e.g., from red to green. In particular, this mode is useful if it is not known in advance in which channels the
object is visible. In this mode, the runtime of FindScaledShapeModel can also increase significantly. The
metric ’ignore_color_polarity’ can be used for images with an arbitrary number of channels. If it is used for
single-channel images it has the same effect as ’ignore_local_polarity’. It should be noted that for metric =
’ignore_color_polarity’ the number of channels in the model creation with CreateScaledShapeModel and
in the search with FindScaledShapeModel can be different. This can, for example, be used to create a model
from a synthetically generated single-channel image. Furthermore, it should be noted that the channels do not need
to contain a spectral subdivision of the light (like in an RGB image). The channels can, for example, also contain
images of the same object that were obtained by illuminating the object from different directions.
The center of gravity of the domain (region) of the model image template is used as the origin (reference point)
of the model. A different origin can be set with SetShapeModelOrigin.
Parameter
The operator CreateShapeModel prepares a template, which is passed in the image template, as a shape
model used for matching. The ROI of the model is passed as the domain of template.
The model is generated using multiple image pyramid levels and is stored in memory. If a complete pregeneration
of the model is selected (see below), the model is generated at multiple rotations on each level. The output
parameter modelID is a handle for this model, which is used in subsequent calls to FindShapeModel.
The number of pyramid levels is determined with the parameter numLevels. It should be chosen as large as pos-
sible because by this the time necessary to find the object is significantly reduced. On the other hand, numLevels
must be chosen such that the model is still recognizable and contains a sufficient number of points (at least four)
on the highest pyramid level. This can be checked using the output of InspectShapeModel. If not enough
model points are generated, the number of pyramid levels is reduced internally until enough model points are found
on the highest pyramid level. If this procedure would lead to a model with no pyramid levels, i.e., if the number
of model points is already too small on the lowest pyramid level, CreateShapeModel returns with an error
message. If numLevels is set to ’auto’ (or 0 for backwards compatibility), CreateShapeModel determines
the number of pyramid levels automatically. The automatically computed number of pyramid levels can be queried
using GetShapeModelParams. In rare cases, it might happen that CreateShapeModel determines a value
for the number of pyramid levels that is too large or too small. If the number of pyramid levels is chosen too large,
the model may not be recognized in the image or it may be necessary to select very low parameters for MinScore
or Greediness in FindShapeModel in order to find the model. If the number of pyramid levels is chosen too
small, the time required to find the model in FindShapeModel may increase. In these cases, the number of
pyramid levels should be selected using the output of InspectShapeModel.
The parameters angleStart and angleExtent determine the range of possible rotations, in which the model
can occur in the image. Note that the model can only be found in this range of angles by FindShapeModel. The
parameter angleStep determines the step length within the selected range of angles. Hence, if subpixel accuracy
is not specified in FindShapeModel, this parameter specifies the accuracy that is achievable for the angles in
FindShapeModel. angleStep should be chosen based on the size of the object. Smaller models do not
possess many different discrete rotations in the image, and hence angleStep should be chosen larger for smaller
models. If angleExtent is not an integer multiple of angleStep, angleStep is modified accordingly.
If a complete pregeneration of the model is selected (see below), the model is pre-generated for the selected
angle range and stored in memory. The memory required to store the model is proportional to the number of
angle steps and the number of points in the model. Hence, if angleStep is too small or angleExtent too
big, it may happen that the model no longer fits into the (virtual) memory. In this case, either angleStep
must be enlarged or angleExtent must be reduced. In any case, it is desirable that the model completely
fits into the main memory, because this avoids paging by the operating system, and hence the time to find the
object will be much smaller. Since angles can be determined with subpixel resolution by FindShapeModel,
angleStep ≥ 1° can be selected for models of a diameter smaller than about 200 pixels. If angleStep = ’auto’
(or 0 for backwards compatibility) is selected, CreateShapeModel automatically determines a suitable angle
step length based on the size of the model. The automatically computed angle step length can be queried using
GetShapeModelParams.
If a complete pregeneration of the model is not selected, the model is only created in a reference pose on each
pyramid level. In this case, the model must be transformed to the different angles and scales at runtime in
FindShapeModel. Because of this, the recognition of the model might require slightly more time.
For particularly large models, it may be useful to reduce the number of model points by setting optimization
to a value different from ’none’. If optimization = ’none’, all model points are stored. In all other cases, the
number of points is reduced according to the value of optimization. If the number of points is reduced, it may
be necessary in FindShapeModel to set the parameter Greediness to a smaller value, e.g., 0.7 or 0.8. For
small models, the reduction of the number of model points does not result in a speed-up of the search because in
this case usually significantly more potential instances of the model must be examined. If optimization is set
to ’auto’, CreateShapeModel automatically determines the reduction of the number of model points.
Optionally, a second value can be passed in optimization. This value determines whether the model is pre-
generated completely or not. To do so, the second value of optimization must be set to either ’pregeneration’
or ’no_pregeneration’. If the second value is not used (i.e., if only one value is passed), the mode that is set
with SetSystem(’pregenerate_shape_models’,...) is used. With the default value (’pregener-
ate_shape_models’ = ’false’), the model is not pregenerated completely. The complete pregeneration of the model
normally leads to slightly lower runtimes because the model does not need to be transformed at runtime. However,
in this case, the memory requirements and the time required to create the model are significantly higher. It should
also be noted that it cannot be expected that the two modes return exactly identical results because transforming
the model at runtime necessarily leads to different internal data for the transformed models than pregenerating the
transformed models. For example, if the model is not pregenerated completely, FindShapeModel typically
returns slightly lower scores, which may require setting a slightly lower value for MinScore than for a completely
pregenerated model. Furthermore, the poses obtained by interpolation may differ slightly in the two modes. If
maximum accuracy is desired, the pose of the model should be determined by least-squares adjustment.
The parameter contrast determines the contrast the model points must have. The contrast is a measure for
local gray value differences between the object and the background and between different parts of the object.
contrast should be chosen such that only the significant features of the template are used for the model.
contrast can also contain a tuple with two values. In this case, the model is segmented using a method similar
to the hysteresis threshold method used in EdgesImage. Here, the first element of the tuple determines the lower
threshold, while the second element determines the upper threshold. For more information about the hysteresis
threshold method, see HysteresisThreshold. Optionally, contrast can contain a third value as the last
element of the tuple. This value determines a threshold for the selection of significant model components based
on the size of the components, i.e., components that have fewer points than the minimum size thus specified are
suppressed. This threshold for the minimum size is divided by two for each successive pyramid level. If small
model components should be suppressed but hysteresis thresholding should not be performed, three values must
nevertheless be specified in contrast. In this case, the first two values can simply be set to identical values. The
effect of this parameter can be checked in advance with InspectShapeModel. If contrast is set to ’auto’,
CreateShapeModel determines the three above described values automatically. Alternatively, only the contrast
(’auto_contrast’), the hysteresis thresholds (’auto_contrast_hyst’), or the minimum size (’auto_min_size’) can be
determined automatically. The remaining values that are not determined automatically can additionally be passed
in the form of a tuple. Also various combinations are allowed: If, for example, [’auto_contrast’,’auto_min_size’]
is passed, both the contrast and the minimum size are determined automatically. If [’auto_min_size’,20,30] is
passed, the minimum size is determined automatically while the hysteresis thresholds are set to 20 and 30, etc.
In certain cases, the automatic determination of the contrast thresholds may not be satisfactory. For
example, a manual setting of these parameters should be preferred if certain model components should be included
or suppressed because of application-specific reasons or if the object contains several different contrasts. In these
cases, the contrast thresholds should be determined automatically with DetermineShapeModelParams and
subsequently verified using InspectShapeModel before calling CreateShapeModel.
With minContrast, it can be determined which contrast the model must at least have in the recognition per-
formed by FindShapeModel. In other words, this parameter separates the model from the noise in the image.
Therefore, a good choice is the range of gray value changes caused by the noise in the image. If, for example, the
gray values fluctuate within a range of 10 gray levels, minContrast should be set to 10. If multichannel images
are used for the model and the search images, and if the parameter metric is set to ’ignore_color_polarity’ (see
below) the noise in one channel must be multiplied by the square root of the number of channels to determine
minContrast. If, for example, the gray values fluctuate within a range of 10 gray levels in a single channel
and the image is a three-channel image, minContrast should be set to 17. Obviously, minContrast must
be smaller than contrast. If the model should be recognized in very low contrast images, minContrast
must be set to a correspondingly small value. If the model should be recognized even if it is severely occluded,
minContrast should be slightly larger than the range of gray value fluctuations created by noise in order to
ensure that the position and rotation of the model are extracted robustly and accurately by FindShapeModel. If
minContrast is set to ’auto’, the minimum contrast is determined automatically based on the noise in the model
image. Consequently, an automatic determination only makes sense if the image noise during the recognition is
similar to the noise in the model image. Furthermore, in some cases it is advisable to increase the automatically
determined value in order to increase the robustness against occlusions (see above). The automatically computed
minimum contrast can be queried using GetShapeModelParams.
The parameter metric determines the conditions under which the model is recognized in the image. If metric
= ’use_polarity’, the object in the image and the model must have the same contrast. If, for example, the model is
a bright object on a dark background, the object is found only if it is also brighter than the background. If metric
= ’ignore_global_polarity’, the object is found in the image also if the contrast reverses globally. In the above
example, the object hence is also found if it is darker than the background. The runtime of FindShapeModel
will increase slightly in this case. If metric = ’ignore_local_polarity’, the model is found even if the contrast
changes locally. This mode can, for example, be useful if the object consists of a part with medium gray value,
within which either darker or brighter sub-objects lie. Since in this case the runtime of FindShapeModel in-
creases significantly, it is usually better to create several models that reflect the possible contrast variations of the
object with CreateShapeModel, and to match them simultaneously with FindShapeModels. The above
three metrics can only be applied to single-channel images. If a multichannel image is used as the model image or
as the search image, only the first channel will be used (and no error message will be returned). If metric =
’ignore_color_polarity’, the model is found even if the color contrast changes locally. This is, for example, the case if
parts of the object can change their color, e.g., from red to green. In particular, this mode is useful if it is not known
in advance in which channels the object is visible. In this mode, the runtime of FindShapeModel can also in-
crease significantly. The metric ’ignore_color_polarity’ can be used for images with an arbitrary number of chan-
nels. If it is used for single-channel images it has the same effect as ’ignore_local_polarity’. It should be noted that
for metric = ’ignore_color_polarity’ the number of channels in the model creation with CreateShapeModel
and in the search with FindShapeModel can be different. This can, for example, be used to create a model
from a synthetically generated single-channel image. Furthermore, it should be noted that the channels do not need
to contain a spectral subdivision of the light (like in an RGB image). The channels can, for example, also contain
images of the same object that were obtained by illuminating the object from different directions.
The center of gravity of the domain (region) of the model image template is used as the origin (reference point)
of the model. A different origin can be set with SetShapeModelOrigin.
Parameter
. template (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; HImage
Input image whose domain will be used to create the model.
. numLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long / string)
Maximum number of pyramid levels.
Default Value : "auto"
List of values : NumLevels ∈ {"auto", 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
. angleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; HTuple (double)
Smallest rotation of the pattern.
Default Value : -0.39
Suggested values : AngleStart ∈ {-3.14, -1.57, -0.79, -0.39, -0.20, 0.0}
. angleExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; HTuple (double)
Extent of the rotation angles.
Default Value : 0.79
Suggested values : AngleExtent ∈ {6.29, 3.14, 1.57, 0.79, 0.39}
Restriction : AngleExtent ≥ 0
. angleStep (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; HTuple (double / string)
Step length of the angles (resolution).
Default Value : "auto"
Suggested values : AngleStep ∈ {"auto", 0.0175, 0.0349, 0.0524, 0.0698, 0.0873}
Restriction : (AngleStep ≥ 0) ∧ (AngleStep ≤ (pi/16))
. optimization (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; HTuple (string)
Kind of optimization and optionally method used for generating the model.
Default Value : "auto"
List of values : Optimization ∈ {"auto", "none", "point_reduction_low", "point_reduction_medium",
"point_reduction_high", "pregeneration", "no_pregeneration"}
. metric (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Match metric.
Default Value : "use_polarity"
List of values : Metric ∈ {"use_polarity", "ignore_global_polarity", "ignore_local_polarity",
"ignore_color_polarity"}
. contrast (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; HTuple (int / long / string)
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum
size of the object parts.
Default Value : "auto"
Suggested values : Contrast ∈ {"auto", "auto_contrast", "auto_contrast_hyst", "auto_min_size", 10, 20,
30, 40, 60, 80, 100, 120, 140, 160}
. minContrast (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (int / long / string)
Minimum contrast of the objects in the search images.
Default Value : "auto"
Suggested values : MinContrast ∈ {"auto", 1, 2, 3, 5, 7, 10, 20, 30, 40}
Restriction : MinContrast < Contrast
. modelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model ; HShapeModel / HTuple (IntPtr)
Handle of the model.
Result
If the parameters are valid, the operator CreateShapeModel returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised. If the parameters numLevels and contrast are chosen such that the model contains
too few points, the error 8510 is raised.
Parallelization Information
CreateShapeModel is processed completely exclusively without parallelization.
Possible Predecessors
DrawRegion, ReduceDomain, Threshold
Possible Successors
FindShapeModel, FindShapeModels, GetShapeModelParams, ClearShapeModel,
WriteShapeModel, SetShapeModelOrigin
Alternatives
CreateScaledShapeModel, CreateAnisoShapeModel, CreateTemplateRot
See also
SetSystem, GetSystem
Module
Matching
Find the best matches of an anisotropic scale invariant shape model in an image.
The operator FindAnisoShapeModel finds the best numMatches instances of the anisotropic scale invariant
shape model modelID in the input image image. The model must have been created previously by calling
CreateAnisoShapeModel or ReadShapeModel.
The position, rotation, and scale in the row and column direction of the found instances of the model are returned
in row, column, angle, scaleR, and scaleC. The coordinates row and column are the coordinates of the
origin of the shape model in the search image. By default, the origin is the center of gravity of the domain (region)
of the image that was used to create the shape model with CreateAnisoShapeModel. A different origin can
be set with SetShapeModelOrigin.
Note that the coordinates row and column do not exactly correspond to the position of the model in the search
image. Thus, you cannot directly use them. Instead, the values are optimized for creating the transformation matrix
with which you can use the results of the matching for various tasks, e.g., to align ROIs for other processing steps.
The example below shows how to create this matrix and use it to display the model at the found position in the
search image and to calculate the exact coordinates.
Additionally, the score of each found instance is returned in score. The score is a number between 0 and 1, which
is an approximate measure of how much of the model is visible in the image. If, for example, half of the model is
occluded, the score cannot exceed 0.5.
The domain of the image image determines the search space for the reference point of the model, i.e.,
for the center of gravity of the domain (region) of the image that was used to create the shape model with
CreateAnisoShapeModel. A different origin set with SetShapeModelOrigin is not taken into ac-
count. The model is searched within those points of the domain of the image, in which the model lies completely
within the image. This means that the model will not be found if it extends beyond the borders of the image, even
if it would achieve a score greater than minScore (see below). This behavior can be changed with SetSystem
(’border_shape_models’,’true’), which will cause models that extend beyond the image border to be
found if they achieve a score greater than minScore. Here, points lying outside the image are regarded as being
occluded, i.e., they lower the score. It should be noted that the runtime of the search will increase in this mode.
The parameters angleStart and angleExtent determine the range of rotations for which the model is
searched. The parameters scaleRMin, scaleRMax, scaleCMin, and scaleCMax determine the range of
scales in the row and column directions for which the model is searched. If necessary, both ranges are clipped to the
range given when the model was created with CreateAnisoShapeModel. In particular, this means that the
angle ranges of the model and the search must truly overlap. The angle range in the search is not adapted modulo
2π. To simplify the presentation, all angles in the remainder of the paragraph are given in degrees, whereas they
have to be specified in radians in FindAnisoShapeModel. Hence, if the model, for example, was created with
angleStart = −20° and angleExtent = 40° and the angle search space in FindAnisoShapeModel
is, for example, set to angleStart = 350° and angleExtent = 20°, the model will not be found, even
though the angle ranges would overlap if they were regarded modulo 360°. To find the model, in this example it
would be necessary to select angleStart = −10°.
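The behavior described above (angle ranges are compared without reduction modulo 2π) can be reproduced with a small check; the function below is an illustrative sketch of that rule, not HALCON's internal test:

```python
import math

def ranges_overlap(model_start, model_extent, search_start, search_extent):
    """True if the search angle range truly intersects the model's range.
    Ranges are deliberately NOT adapted modulo 2*pi, mirroring the
    behavior described for FindAnisoShapeModel. Illustrative sketch."""
    return (search_start < model_start + model_extent and
            model_start < search_start + search_extent)

deg = math.pi / 180.0
# Model created with angleStart = -20 deg, angleExtent = 40 deg:
ranges_overlap(-20 * deg, 40 * deg, 350 * deg, 20 * deg)   # False: not found
ranges_overlap(-20 * deg, 40 * deg, -10 * deg, 20 * deg)   # True: found
```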
Furthermore, it should be noted that in some cases instances with a rotation or scale that is slightly outside the
specified range are found. This may happen if the specified range is smaller than the range given when the model
was created.
The parameter minScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger minScore is chosen, the faster the search is. If the model can be expected
never to be occluded in the images, minScore may be set as high as 0.8 or even 0.9. If the matches are not tracked
to the lowest pyramid level (see below) it might happen that instances with a score slightly below minScore are
found.
The maximum number of instances to be found can be determined with numMatches. If more than
numMatches instances with a score greater than minScore are found in the image, only the best numMatches
instances are returned. If fewer than numMatches are found, only that number is returned, i.e., the parameter
minScore takes precedence over numMatches.
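The precedence of minScore over numMatches amounts to the following selection logic (illustrative Python sketch; the function name is hypothetical):

```python
def select_matches(scores, min_score, num_matches):
    """Return at most num_matches scores that exceed min_score.

    minScore takes precedence: if fewer candidates pass the threshold,
    fewer matches are returned. (Sketch of the rule described above.)
    """
    passed = sorted((s for s in scores if s > min_score), reverse=True)
    return passed[:num_matches]

scores = [0.95, 0.87, 0.62, 0.41]
assert select_matches(scores, 0.5, 2) == [0.95, 0.87]  # best 2 of 3 passing
assert select_matches(scores, 0.9, 3) == [0.95]        # only 1 passes
```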
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rota-
tions are found in the image. The parameter maxOverlap determines by what fraction (i.e., a number between
0 and 1) two instances may at most overlap in order to consider them as different instances, and hence to be
returned separately. If two instances overlap each other by more than maxOverlap only the best instance is
returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary orientation (see
SmallestRectangle2) of the found instances. If maxOverlap = 0, the found instances may not overlap at
all, while for maxOverlap = 1 all instances are returned.
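The elimination of overlapping instances can be sketched as a greedy suppression. Two simplifications in this sketch are assumptions: it uses axis-aligned boxes, whereas HALCON uses the smallest enclosing rectangle of arbitrary orientation, and it divides by the smaller area to obtain the overlap fraction:

```python
def overlap_fraction(a, b):
    """Intersection area divided by the smaller box area.

    Boxes are axis-aligned (r1, c1, r2, c2) tuples; HALCON's actual
    computation uses arbitrarily oriented rectangles.
    """
    r1, c1 = max(a[0], b[0]), max(a[1], b[1])
    r2, c2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, r2 - r1) * max(0, c2 - c1)
    smaller = min((a[2] - a[0]) * (a[3] - a[1]),
                  (b[2] - b[0]) * (b[3] - b[1]))
    return inter / smaller

def suppress(matches, max_overlap):
    """Keep only the best-scoring instance of any pair overlapping by
    more than max_overlap (greedy, best-first)."""
    kept = []
    for score, box in sorted(matches, reverse=True):
        if all(overlap_fraction(box, kb) <= max_overlap for _, kb in kept):
            kept.append((score, box))
    return kept

matches = [(0.9, (0, 0, 10, 10)),
           (0.8, (5, 5, 15, 15)),    # overlaps the first by 0.25
           (0.7, (20, 20, 30, 30))]
assert [s for s, _ in suppress(matches, 0.2)] == [0.9, 0.7]
assert len(suppress(matches, 1.0)) == 3   # maxOverlap = 1: keep everything
```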
The parameter subPixel determines whether the instances should be extracted with subpixel accuracy. If
subPixel is set to ’none’ (or ’false’ for backwards compatibility) the model’s pose is only determined with
pixel accuracy and the angle and scale resolution that was specified with CreateAnisoShapeModel. If
subPixel is set to ’interpolation’ (or ’true’) the position as well as the rotation and scale are determined with
subpixel accuracy. In this mode, the model’s pose is interpolated from the score function. This mode costs almost
no computation time and achieves an accuracy that is high enough for most applications. In some applications,
however, the accuracy requirements are extremely high. In these cases, the model’s pose can be determined
through a least-squares adjustment, i.e., by minimizing the distances of the model points to their corresponding
image points. In contrast to ’interpolation’, this mode requires additional computation time. The different modes
for least-squares adjustment (’least_squares’, ’least_squares_high’, and ’least_squares_very_high’) can be used
to determine the accuracy with which the minimum distance is being searched. The higher the accuracy is cho-
sen, the longer the subpixel extraction will take, however. Usually, subPixel should be set to ’interpolation’.
If least-squares adjustment is desired, ’least_squares’ should be chosen because this results in the best tradeoff
between runtime and accuracy.
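The valid values of subPixel form a ladder of increasing accuracy and runtime, which can be captured in a small sketch (illustrative Python, not part of the HALCON API):

```python
# The subPixel modes ordered by increasing runtime (and accuracy) of the
# pose refinement, as described above. Illustrative list only.
SUBPIXEL_MODES = ['none', 'interpolation', 'least_squares',
                  'least_squares_high', 'least_squares_very_high']

def refinement_rank(mode):
    """Position in the runtime/accuracy ladder (hypothetical helper)."""
    return SUBPIXEL_MODES.index(mode)

# 'interpolation' is nearly free; the least-squares modes cost extra time.
assert refinement_rank('interpolation') < refinement_rank('least_squares')
```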
The number of pyramid levels used during the search is determined with numLevels. If necessary, the number
of levels is clipped to the range given when the shape model was created with CreateAnisoShapeModel.
If numLevels is set to 0, the number of pyramid levels specified in CreateAnisoShapeModel is used.
Optionally, numLevels can contain a second value that determines the lowest pyramid level to which the found
matches are tracked. Hence, a value of [4,2] for numLevels means that the matching starts at the fourth pyramid
level and tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value
of 1). This mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in
general the accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the
matches are tracked to the lowest pyramid level. Hence, if a high accuracy is desired, subPixel should be set
to at least ’least_squares’. If the lowest pyramid level to use is chosen too large, it may happen that the desired
accuracy cannot be achieved, or that wrong instances of the model are found because the model is not specific
enough on the higher pyramid levels to facilitate a reliable selection of the correct instance of the model. In this
case, the lowest pyramid level to use must be set to a smaller value.
The parameter greediness determines how “greedily” the search should be carried out. If greediness = 0,
a safe search heuristic is used, which always finds the model if it is visible in the image. However, the search will
be relatively time consuming in this case. If greediness = 1, an unsafe search heuristic is used, which may
cause the model not to be found in rare cases, even though it is visible in the image. For greediness = 1, the
maximum search speed is achieved. In almost all cases, the shape model will be found for greediness = 0.9.
Parameter
Result
If the parameter values are correct, the operator FindAnisoShapeModel returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available) the behavior can be set via SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
FindAnisoShapeModel is reentrant and processed without parallelization.
Possible Predecessors
CreateAnisoShapeModel, ReadShapeModel, SetShapeModelOrigin
Possible Successors
ClearShapeModel
Alternatives
FindShapeModel, FindScaledShapeModel, FindShapeModels, FindScaledShapeModels,
FindAnisoShapeModels, BestMatchRotMg
See also
SetSystem, GetSystem
Module
Matching
Find the best matches of multiple anisotropic scale invariant shape models.
The operator FindAnisoShapeModels finds the best numMatches instances of the anisotropic scale invari-
ant shape models that are passed in modelIDs in the input image image. The models must have been created
previously by calling CreateAnisoShapeModel or ReadShapeModel.
Hence, in contrast to FindAnisoShapeModel, multiple models can be searched in the same image in one
call. This changes the semantics of all input parameters to some extent. All input parameters must either con-
tain one element, in which case the parameter is used for all models, or must contain the same number of ele-
ments as modelIDs, in which case each parameter element refers to the corresponding element in modelIDs.
(numLevels may also contain either two or twice the number of elements as modelIDs; see below.) As usual,
the domain of the input image image is used to restrict the search space for the reference point of the models
modelIDs. Consistent with the above semantics, the input image image can therefore contain a single image
object or an image object tuple containing multiple image objects. If image contains a single image object, its
domain is used as the region of interest for all models in modelIDs. If image contains multiple image ob-
jects, each domain is used as the region of interest for the corresponding model in modelIDs. In this case, the
image matrix of all image objects in the tuple must be identical, i.e., image cannot be constructed in an arbi-
trary manner using ConcatObj, but must be created from the same image using AddChannels or equivalent
calls. If this is not the case, an error message is returned. The above semantics also hold for the input con-
trol parameters. Hence, for example, minScore can contain a single value or the same number of values as
modelIDs. In the first case, the value of minScore is used for all models in modelIDs, while in the sec-
ond case the respective value of the elements in minScore is used for the corresponding model in modelIDs.
An extension to these semantics holds for numMatches and maxOverlap. If numMatches contains one el-
ement, FindAnisoShapeModels returns the best numMatches instances of the model irrespective of the
type of the model. If, for example, two models are passed in modelIDs and numMatches = 2 is selected, it
can happen that two instances of the first model and no instances of the second model, one instance of the first
model and one instance of the second model, or no instances of the first model and two instances of the second
model are returned. If, on the other hand, numMatches contains multiple values, the number of instances re-
turned of the different models corresponds to the number specified in the respective entry in numMatches. If,
for example, numMatches = [1, 1] is selected, one instance of the first model and one instance of the second
model is returned. For a detailed description of the semantics of numMatches, see below. A similar extension
of the semantics holds for maxOverlap. If a single value is passed for maxOverlap, the overlap is com-
puted for all found instances of the different models, irrespective of the model type, i.e., instances of the same
or of different models that overlap too much are eliminated. If, on the other hand, multiple values are passed in
maxOverlap, the overlap is only computed for found instances of the model that have the same model type,
i.e., only instances of the same model that overlap too much are eliminated. In this mode, models of different
types may overlap completely. For a detailed description of the semantics of maxOverlap, see below. Hence,
a call to FindAnisoShapeModels with multiple values for modelIDs, numMatches and maxOverlap
has the same effect as multiple independent calls to FindAnisoShapeModel with the respective parameters.
However, a single call to FindAnisoShapeModels is considerably more efficient.
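The broadcasting of the control parameters over modelIDs can be sketched like this (illustrative Python; the model handles are hypothetical placeholders):

```python
def broadcast(value, model_ids, name):
    """Expand a per-call parameter to one entry per model.

    A single element applies to all models; otherwise the tuple must have
    exactly one entry per model ID. (Sketch of the semantics above, not a
    HALCON function.)
    """
    values = value if isinstance(value, list) else [value]
    if len(values) == 1:
        return values * len(model_ids)
    if len(values) != len(model_ids):
        raise ValueError(f"{name}: expected 1 or {len(model_ids)} values")
    return values

model_ids = ["id_a", "id_b"]   # hypothetical model handles
assert broadcast(0.7, model_ids, "minScore") == [0.7, 0.7]
assert broadcast([0.7, 0.8], model_ids, "minScore") == [0.7, 0.8]
```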
The type of the found instances of the models is returned in model. The elements of model are indices into the
tuple modelIDs, i.e., they can contain values from 0 to |modelIDs| − 1. Hence, a value of 0 in an element of
model corresponds to an instance of the first model in modelIDs.
The position, rotation, and scale in the row and column direction of the found instances of the model are returned
in row, column, angle, scaleR, and scaleC. The coordinates row and column are the coordinates of the
origin of the shape model in the search image. By default, the origin is the center of gravity of the domain (region)
of the image that was used to create the shape model with CreateAnisoShapeModel. A different origin can
be set with SetShapeModelOrigin.
Note that the coordinates row and column do not exactly correspond to the position of the model in the search
image. Thus, you cannot directly use them. Instead, the values are optimized for creating the transformation matrix
with which you can use the results of the matching for various tasks, e.g., to align ROIs for other processing steps.
The example given for FindAnisoShapeModel shows how to create this matrix and use it to display the model
at the found position in the search image and to calculate the exact coordinates.
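Under the stated assumptions (model points given relative to the model origin; the sign conventions of the rotation depend on the image coordinate system), the transformation can be sketched as a plain 2×3 affine matrix: scale, then rotate, then translate to (row, column). This is generic Python for illustration; in HALCON the equivalent matrix is built with the HomMat2d operators:

```python
import math

def pose_to_matrix(row, col, angle, scale_r, scale_c):
    """2x3 affine mapping model coordinates (relative to the model origin)
    into the search image. Generic sketch, not HALCON's exact convention."""
    ca, sa = math.cos(angle), math.sin(angle)
    return [[ca * scale_r, -sa * scale_c, row],
            [sa * scale_r,  ca * scale_c, col]]

def apply(mat, point):
    r, c = point
    return (mat[0][0] * r + mat[0][1] * c + mat[0][2],
            mat[1][0] * r + mat[1][1] * c + mat[1][2])

# The model origin (0, 0) lands exactly on the returned (row, column).
m = pose_to_matrix(120.0, 240.0, math.radians(30), 1.0, 1.0)
assert apply(m, (0.0, 0.0)) == (120.0, 240.0)
```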
Additionally, the score of each found instance is returned in score. The score is a number between 0 and 1, which
is an approximate measure of how much of the model is visible in the image. If, for example, half of the model is
occluded, the score cannot exceed 0.5.
The domain of the image image determines the search space for the reference point of the model, i.e.,
for the center of gravity of the domain (region) of the image that was used to create the shape model with
CreateAnisoShapeModel. A different origin set with SetShapeModelOrigin is not taken into ac-
count. The model is searched within those points of the domain of the image, in which the model lies completely
within the image. This means that the model will not be found if it extends beyond the borders of the image, even
if it would achieve a score greater than minScore (see below). This behavior can be changed with SetSystem
(’border_shape_models’,’true’), which will cause models that extend beyond the image border to be
found if they achieve a score greater than minScore. Here, points lying outside the image are regarded as being
occluded, i.e., they lower the score. It should be noted that the runtime of the search will increase in this mode.
The parameters angleStart and angleExtent determine the range of rotations for which the model is
searched. The parameters scaleRMin, scaleRMax, scaleCMin, and scaleCMax determine the range of
scales in the row and column directions for which the model is searched. If necessary, both ranges are clipped to the
range given when the model was created with CreateAnisoShapeModel. In particular, this means that the an-
gle ranges of the model and the search must truly overlap. The angle range in the search is not adapted modulo 2π.
To simplify the presentation, all angles in the remainder of the paragraph are given in degrees, whereas they have
to be specified in radians in FindAnisoShapeModels. Hence, if the model, for example, was created with
angleStart = −20° and angleExtent = 40° and the angle search space in FindAnisoShapeModels
is, for example, set to angleStart = 350° and angleExtent = 20°, the model will not be found, even
though the angle ranges would overlap if they were regarded modulo 360°. To find the model, in this example it
would be necessary to select angleStart = −10°.
Furthermore, it should be noted that in some cases instances with a rotation or scale that is slightly outside the
specified range are found. This may happen if the specified range is smaller than the range given when the model
was created.
The parameter minScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger minScore is chosen, the faster the search is. If the model can be expected
never to be occluded in the images, minScore may be set as high as 0.8 or even 0.9. If the matches are not tracked
to the lowest pyramid level (see below) it might happen that instances with a score slightly below minScore are
found.
The maximum number of instances to be found can be determined with numMatches. If more than
numMatches instances with a score greater than minScore are found in the image, only the best numMatches
instances are returned. If fewer than numMatches are found, only that number is returned, i.e., the parameter
minScore takes precedence over numMatches.
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rota-
tions are found in the image. The parameter maxOverlap determines by what fraction (i.e., a number between
0 and 1) two instances may at most overlap in order to consider them as different instances, and hence to be
returned separately. If two instances overlap each other by more than maxOverlap only the best instance is
returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary orientation (see
SmallestRectangle2) of the found instances. If maxOverlap = 0, the found instances may not overlap at
all, while for maxOverlap = 1 all instances are returned.
The parameter subPixel determines whether the instances should be extracted with subpixel accuracy. If
subPixel is set to ’none’ (or ’false’ for backwards compatibility) the model’s pose is only determined with
pixel accuracy and the angle and scale resolution that was specified with CreateAnisoShapeModel. If
subPixel is set to ’interpolation’ (or ’true’) the position as well as the rotation and scale are determined with
subpixel accuracy. In this mode, the model’s pose is interpolated from the score function. This mode costs almost
no computation time and achieves an accuracy that is high enough for most applications. In some applications,
however, the accuracy requirements are extremely high. In these cases, the model’s pose can be determined
through a least-squares adjustment, i.e., by minimizing the distances of the model points to their corresponding
image points. In contrast to ’interpolation’, this mode requires additional computation time. The different modes
for least-squares adjustment (’least_squares’, ’least_squares_high’, and ’least_squares_very_high’) can be used
to determine the accuracy with which the minimum distance is being searched. The higher the accuracy is cho-
sen, the longer the subpixel extraction will take, however. Usually, subPixel should be set to ’interpolation’.
If least-squares adjustment is desired, ’least_squares’ should be chosen because this results in the best tradeoff
between runtime and accuracy.
The number of pyramid levels used during the search is determined with numLevels. If necessary, the number
of levels is clipped to the range given when the shape model was created with CreateAnisoShapeModel.
If numLevels is set to 0, the number of pyramid levels specified in CreateAnisoShapeModel is used.
Optionally, numLevels can contain a second value that determines the lowest pyramid level to which the found
matches are tracked. Hence, a value of [4,2] for numLevels means that the matching starts at the fourth pyramid
level and tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value
of 1). This mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in
general the accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the
matches are tracked to the lowest pyramid level. Hence, if a high accuracy is desired, subPixel should be set to at
least ’least_squares’. If the lowest pyramid level to use is chosen too large, it may happen that the desired accuracy
cannot be achieved, or that wrong instances of the model are found because the model is not specific enough on
the higher pyramid levels to facilitate a reliable selection of the correct instance of the model. In this case, the
lowest pyramid level to use must be set to a smaller value. If the lowest pyramid level is specified separately for
each model, numLevels must contain twice the number of elements as modelIDs. In this case, the number
of pyramid levels and the lowest pyramid level must be specified interleaved in numLevels. If, for example,
two models are specified in modelIDs, the number of pyramid levels is 5 for the first model and 4 for the second
model, and the lowest pyramid level is 2 for the first model and 1 for the second model, numLevels = [5 , 2 , 4 , 1 ]
must be selected. If exactly two models are specified in modelIDs, a special case occurs. If in this case the lowest
pyramid level is to be specified, the number of pyramid levels and the lowest pyramid level must be specified
explicitly for both models, even if they are identical, because specifying two values in numLevels is interpreted
as the explicit specification of the number of pyramid levels for the two models.
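The interpretation of numLevels for multiple models can be sketched as follows (illustrative Python; the model handles are hypothetical):

```python
def split_num_levels(num_levels, model_ids):
    """Interpret numLevels for a multi-model search.

    One value per model: number of pyramid levels only, tracked to level 1.
    Twice as many values: (levels, lowest level) pairs, interleaved per
    model. With exactly two models, two values always mean one levels entry
    per model, so the interleaved form must be given explicitly.
    (Sketch of the rules described above, not a HALCON function.)
    """
    n = len(model_ids)
    if len(num_levels) == n:
        return [(lv, 1) for lv in num_levels]
    if len(num_levels) == 2 * n:
        it = iter(num_levels)
        return list(zip(it, it))
    raise ValueError("numLevels must hold |modelIDs| or 2*|modelIDs| values")

two_models = ["id_a", "id_b"]   # hypothetical model handles
# Example from the text: 5 levels / lowest 2 for the first model,
# 4 levels / lowest 1 for the second.
assert split_num_levels([5, 2, 4, 1], two_models) == [(5, 2), (4, 1)]
# Two values for two models: per-model level counts, tracked to level 1.
assert split_num_levels([5, 4], two_models) == [(5, 1), (4, 1)]
```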
The parameter greediness determines how “greedily” the search should be carried out. If greediness = 0,
a safe search heuristic is used, which always finds the model if it is visible in the image. However, the search will
be relatively time consuming in this case. If greediness = 1, an unsafe search heuristic is used, which may
cause the model not to be found in rare cases, even though it is visible in the image. For greediness = 1, the
maximum search speed is achieved. In almost all cases, the shape model will be found for greediness = 0.9.
Parameter
found if they achieve a score greater than minScore. Here, points lying outside the image are regarded as being
occluded, i.e., they lower the score. It should be noted that the runtime of the search will increase in this mode.
The parameters angleStart and angleExtent determine the range of rotations for which the model is
searched. The parameters scaleMin and scaleMax determine the range of scales for which the model
is searched. If necessary, both ranges are clipped to the range given when the model was created with
CreateScaledShapeModel. In particular, this means that the angle ranges of the model and the search
must truly overlap. The angle range in the search is not adapted modulo 2π. To simplify the presentation,
all angles in the remainder of the paragraph are given in degrees, whereas they have to be specified in radians
in FindScaledShapeModel. Hence, if the model, for example, was created with angleStart = −20°
and angleExtent = 40° and the angle search space in FindScaledShapeModel is, for example, set to
angleStart = 350° and angleExtent = 20°, the model will not be found, even though the angle ranges
would overlap if they were regarded modulo 360°. To find the model, in this example it would be necessary to
select angleStart = −10°.
Furthermore, it should be noted that in some cases instances with a rotation or scale that is slightly outside the
specified range are found. This may happen if the specified range is smaller than the range given when the model
was created.
The parameter minScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger minScore is chosen, the faster the search is. If the model can be expected
never to be occluded in the images, minScore may be set as high as 0.8 or even 0.9. If the matches are not tracked
to the lowest pyramid level (see below) it might happen that instances with a score slightly below minScore are
found.
The maximum number of instances to be found can be determined with numMatches. If more than
numMatches instances with a score greater than minScore are found in the image, only the best numMatches
instances are returned. If fewer than numMatches are found, only that number is returned, i.e., the parameter
minScore takes precedence over numMatches.
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rota-
tions are found in the image. The parameter maxOverlap determines by what fraction (i.e., a number between
0 and 1) two instances may at most overlap in order to consider them as different instances, and hence to be
returned separately. If two instances overlap each other by more than maxOverlap only the best instance is
returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary orientation (see
SmallestRectangle2) of the found instances. If maxOverlap = 0, the found instances may not overlap at
all, while for maxOverlap = 1 all instances are returned.
The parameter subPixel determines whether the instances should be extracted with subpixel accuracy. If
subPixel is set to ’none’ (or ’false’ for backwards compatibility) the model’s pose is only determined with
pixel accuracy and the angle and scale resolution that was specified with CreateScaledShapeModel. If
subPixel is set to ’interpolation’ (or ’true’) the position as well as the rotation and scale are determined with
subpixel accuracy. In this mode, the model’s pose is interpolated from the score function. This mode costs almost
no computation time and achieves an accuracy that is high enough for most applications. In some applications,
however, the accuracy requirements are extremely high. In these cases, the model’s pose can be determined
through a least-squares adjustment, i.e., by minimizing the distances of the model points to their corresponding
image points. In contrast to ’interpolation’, this mode requires additional computation time. The different modes
for least-squares adjustment (’least_squares’, ’least_squares_high’, and ’least_squares_very_high’) can be used
to determine the accuracy with which the minimum distance is being searched. The higher the accuracy is cho-
sen, the longer the subpixel extraction will take, however. Usually, subPixel should be set to ’interpolation’.
If least-squares adjustment is desired, ’least_squares’ should be chosen because this results in the best tradeoff
between runtime and accuracy.
The number of pyramid levels used during the search is determined with numLevels. If necessary, the number
of levels is clipped to the range given when the shape model was created with CreateScaledShapeModel.
If numLevels is set to 0, the number of pyramid levels specified in CreateScaledShapeModel is used. Optionally,
numLevels can contain a second value that determines the lowest pyramid level to which the found matches are
tracked. Hence, a value of [4,2] for numLevels means that the matching starts at the fourth pyramid level and
tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value of 1). This
mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in general the
accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the matches
are tracked to the lowest pyramid level. Hence, if a high accuracy is desired, subPixel should be set to at least
’least_squares’. If the lowest pyramid level to use is chosen too large, it may happen that the desired accuracy
cannot be achieved, or that wrong instances of the model are found because the model is not specific enough on the
higher pyramid levels to facilitate a reliable selection of the correct instance of the model. In this case, the lowest
pyramid level to use must be set to a smaller value.
The parameter greediness determines how “greedily” the search should be carried out. If greediness = 0,
a safe search heuristic is used, which always finds the model if it is visible in the image. However, the search will
be relatively time consuming in this case. If greediness = 1, an unsafe search heuristic is used, which may
cause the model not to be found in rare cases, even though it is visible in the image. For greediness = 1, the
maximum search speed is achieved. In almost all cases, the shape model will be found for greediness = 0.9.
Parameter
Result
If the parameter values are correct, the operator FindScaledShapeModel returns the value 2
(H_MSG_TRUE). If the input is empty (no input images are available) the behavior can be set via SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
FindScaledShapeModel is reentrant and processed without parallelization.
Possible Predecessors
CreateScaledShapeModel, ReadShapeModel, SetShapeModelOrigin
Possible Successors
ClearShapeModel
Alternatives
FindShapeModel, FindAnisoShapeModel, FindShapeModels, FindScaledShapeModels,
FindAnisoShapeModels, BestMatchRotMg
See also
SetSystem, GetSystem
Module
Matching
ment, FindScaledShapeModels returns the best numMatches instances of the model irrespective of the
type of the model. If, for example, two models are passed in modelIDs and numMatches = 2 is selected, it
can happen that two instances of the first model and no instances of the second model, one instance of the first
model and one instance of the second model, or no instances of the first model and two instances of the second
model are returned. If, on the other hand, numMatches contains multiple values, the number of instances re-
turned of the different models corresponds to the number specified in the respective entry in numMatches. If,
for example, numMatches = [1, 1] is selected, one instance of the first model and one instance of the second
model is returned. For a detailed description of the semantics of numMatches, see below. A similar extension
of the semantics holds for maxOverlap. If a single value is passed for maxOverlap, the overlap is com-
puted for all found instances of the different models, irrespective of the model type, i.e., instances of the same
or of different models that overlap too much are eliminated. If, on the other hand, multiple values are passed in
maxOverlap, the overlap is only computed for found instances of the model that have the same model type,
i.e., only instances of the same model that overlap too much are eliminated. In this mode, models of different
types may overlap completely. For a detailed description of the semantics of maxOverlap, see below. Hence,
a call to FindScaledShapeModels with multiple values for modelIDs, numMatches and maxOverlap
has the same effect as multiple independent calls to FindScaledShapeModel with the respective parameters.
However, a single call to FindScaledShapeModels is considerably more efficient.
The type of the found instances of the models is returned in model. The elements of model are indices into the
tuple modelIDs, i.e., they can contain values from 0 to |modelIDs| − 1. Hence, a value of 0 in an element of
model corresponds to an instance of the first model in modelIDs.
The position, rotation, and scale of the found instances of the model are returned in row, column, angle,
and scale. The coordinates row and column are the coordinates of the origin of the shape model in the
search image. By default, the origin is the center of gravity of the domain (region) of the image that was
used to create the shape model with CreateScaledShapeModel. A different origin can be set with
SetShapeModelOrigin.
Note that the coordinates row and column do not exactly correspond to the position of the model in the search
image. Thus, you cannot directly use them. Instead, the values are optimized for creating the transformation matrix
with which you can use the results of the matching for various tasks, e.g., to align ROIs for other processing steps.
The example given for FindScaledShapeModel shows how to create this matrix and use it to display the
model at the found position in the search image and to calculate the exact coordinates.
Additionally, the score of each found instance is returned in score. The score is a number between 0 and 1, which
is an approximate measure of how much of the model is visible in the image. If, for example, half of the model is
occluded, the score cannot exceed 0.5.
The domain of the image image determines the search space for the reference point of the model, i.e.,
for the center of gravity of the domain (region) of the image that was used to create the shape model with
CreateScaledShapeModel. A different origin set with SetShapeModelOrigin is not taken into ac-
count. The model is searched within those points of the domain of the image, in which the model lies completely
within the image. This means that the model will not be found if it extends beyond the borders of the image, even
if it would achieve a score greater than minScore (see below). This behavior can be changed with SetSystem
(’border_shape_models’,’true’), which will cause models that extend beyond the image border to be
found if they achieve a score greater than minScore. Here, points lying outside the image are regarded as being
occluded, i.e., they lower the score. It should be noted that the runtime of the search will increase in this mode.
The parameters angleStart and angleExtent determine the range of rotations for which the model is
searched. The parameters scaleMin and scaleMax determine the range of scales for which the model
is searched. If necessary, both ranges are clipped to the range given when the model was created with
CreateScaledShapeModel. In particular, this means that the angle ranges of the model and the search
must truly overlap. The angle range in the search is not adapted modulo 2π. To simplify the presentation, all
angles in the remainder of the paragraph are given in degrees, whereas they have to be specified in radians in
FindScaledShapeModels. Hence, if the model, for example, was created with angleStart = −20°
and angleExtent = 40° and the angle search space in FindScaledShapeModels is, for example, set to
angleStart = 350° and angleExtent = 20°, the model will not be found, even though the angle ranges
would overlap if they were regarded modulo 360°. To find the model, in this example it would be necessary to
select angleStart = −10°.
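The rule above can be sketched in a few lines (shown here in plain Java for illustration; this is not a HALCON operator): the two angle ranges are compared as plain intervals on the real line, without any modulo-2π normalization.

```java
// Sketch of the angle-range rule described above (not a HALCON API):
// the search range must literally intersect the model range; no
// wrapping modulo 2*pi is performed by the matching operators.
public class AngleRanges {
    // Returns true if [searchStart, searchStart+searchExtent] intersects
    // [modelStart, modelStart+modelExtent] as plain intervals.
    public static boolean trulyOverlap(double modelStart, double modelExtent,
                                       double searchStart, double searchExtent) {
        double modelEnd = modelStart + modelExtent;
        double searchEnd = searchStart + searchExtent;
        return searchStart < modelEnd && modelStart < searchEnd;
    }

    public static void main(String[] args) {
        double d = Math.PI / 180.0;  // degrees to radians
        // Model created with angleStart = -20 deg, angleExtent = 40 deg.
        // A search range of [350 deg, 370 deg] does NOT overlap, even
        // though it would modulo 360 deg ...
        System.out.println(trulyOverlap(-20 * d, 40 * d, 350 * d, 20 * d)); // false
        // ... whereas angleStart = -10 deg works:
        System.out.println(trulyOverlap(-20 * d, 40 * d, -10 * d, 20 * d)); // true
    }
}
```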
Furthermore, it should be noted that in some cases instances with a rotation or scale that is slightly outside the
specified range are found. This may happen if the specified range is smaller than the range given when the model
was created.
HALCON 8.0.2
648 CHAPTER 7. MATCHING
The parameter minScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger minScore is chosen, the faster the search is. If the model can be expected
never to be occluded in the images, minScore may be set as high as 0.8 or even 0.9. If the matches are not tracked
to the lowest pyramid level (see below) it might happen that instances with a score slightly below minScore are
found.
The maximum number of instances to be found can be determined with numMatches. If more than
numMatches instances with a score greater than minScore are found in the image, only the best numMatches
instances are returned. If fewer than numMatches are found, only that number is returned, i.e., the parameter
minScore takes precedence over numMatches.
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rota-
tions are found in the image. The parameter maxOverlap determines by what fraction (i.e., a number between
0 and 1) two instances may at most overlap in order to consider them as different instances, and hence to be
returned separately. If two instances overlap each other by more than maxOverlap only the best instance is
returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary orientation (see
SmallestRectangle2) of the found instances. If maxOverlap = 0, the found instances may not overlap at
all, while for maxOverlap = 1 all instances are returned.
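The elimination step can be sketched as follows (plain Java, not a HALCON call). For brevity the sketch uses axis-aligned boxes and one plausible definition of the overlap fraction (intersection area divided by the smaller box area), whereas HALCON computes the overlap on the smallest enclosing rectangle of arbitrary orientation.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of the maxOverlap elimination described above,
// using axis-aligned boxes {row1, col1, row2, col2}.
public class OverlapFilter {
    // Overlap fraction: intersection area divided by the smaller box area
    // (an assumed definition for this sketch).
    static double overlapFraction(double[] a, double[] b) {
        double h = Math.max(0, Math.min(a[2], b[2]) - Math.max(a[0], b[0]));
        double w = Math.max(0, Math.min(a[3], b[3]) - Math.max(a[1], b[1]));
        double areaA = (a[2] - a[0]) * (a[3] - a[1]);
        double areaB = (b[2] - b[0]) * (b[3] - b[1]);
        return (h * w) / Math.min(areaA, areaB);
    }

    // Boxes must be sorted by descending score; of any group of mutually
    // overlapping candidates, only the best instance survives.
    public static List<double[]> suppress(List<double[]> sortedBoxes, double maxOverlap) {
        List<double[]> kept = new ArrayList<>();
        for (double[] box : sortedBoxes) {
            boolean tooMuch = false;
            for (double[] k : kept)
                if (overlapFraction(box, k) > maxOverlap) { tooMuch = true; break; }
            if (!tooMuch) kept.add(box);
        }
        return kept;
    }
}
```

With maxOverlap = 0 any intersection eliminates the weaker instance; with maxOverlap = 1 nothing is eliminated, matching the behavior described above.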
The parameter subPixel determines whether the instances should be extracted with subpixel accuracy. If
subPixel is set to ’none’ (or ’false’ for backwards compatibility) the model’s pose is only determined with
pixel accuracy and the angle and scale resolution that was specified with CreateScaledShapeModel. If
subPixel is set to ’interpolation’ (or ’true’) the position as well as the rotation and scale are determined with
subpixel accuracy. In this mode, the model’s pose is interpolated from the score function. This mode costs almost
no computation time and achieves an accuracy that is high enough for most applications. In some applications,
however, the accuracy requirements are extremely high. In these cases, the model’s pose can be determined
through a least-squares adjustment, i.e., by minimizing the distances of the model points to their corresponding
image points. In contrast to ’interpolation’, this mode requires additional computation time. The different modes
for least-squares adjustment (’least_squares’, ’least_squares_high’, and ’least_squares_very_high’) can be used
to determine the accuracy with which the minimum distance is being searched. The higher the accuracy is cho-
sen, the longer the subpixel extraction will take, however. Usually, subPixel should be set to ’interpolation’.
If least-squares adjustment is desired, ’least_squares’ should be chosen because this results in the best tradeoff
between runtime and accuracy.
The number of pyramid levels used during the search is determined with numLevels. If necessary, the number
of levels is clipped to the range given when the shape model was created with CreateScaledShapeModel.
If numLevels is set to 0, the number of pyramid levels specified in CreateScaledShapeModel is used.
Optionally, numLevels can contain a second value that determines the lowest pyramid level to which the found
matches are tracked. Hence, a value of [4,2] for numLevels means that the matching starts at the fourth pyramid
level and tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value
of 1). This mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in
general the accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the
matches are tracked to the lowest pyramid level. Hence, if a high accuracy is desired, subPixel should be set to at
least ’least_squares’. If the lowest pyramid level to use is chosen too large, it may happen that the desired accuracy
cannot be achieved, or that wrong instances of the model are found because the model is not specific enough on
the higher pyramid levels to facilitate a reliable selection of the correct instance of the model. In this case, the
lowest pyramid level to use must be set to a smaller value. If the lowest pyramid level is specified separately for
each model, numLevels must contain twice the number of elements as modelIDs. In this case, the number
of pyramid levels and the lowest pyramid level must be specified interleaved in numLevels. If, for example,
two models are specified in modelIDs, the number of pyramid levels is 5 for the first model and 4 for the second
model, and the lowest pyramid level is 2 for the first model and 1 for the second model, numLevels = [5, 2, 4, 1]
must be selected. If exactly two models are specified in modelIDs, a special case occurs. If in this case the lowest
pyramid level is to be specified, the number of pyramid levels and the lowest pyramid level must be specified
explicitly for both models, even if they are identical, because specifying two values in numLevels is interpreted
as the explicit specification of the number of pyramid levels for the two models.
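The interleaved encoding can be illustrated as follows (a plain-Java sketch of the parameter semantics described above, not HALCON code):

```java
// Unpacks a numLevels tuple into {numLevels, lowestLevel} pairs per model,
// following the interleaving rule described above. For numLevels = [5, 2, 4, 1]
// and two models, model 0 uses 5 pyramid levels down to level 2, and
// model 1 uses 4 levels down to level 1.
public class NumLevelsSpec {
    public static int[][] unpack(int[] numLevels, int numModels) {
        int[][] perModel = new int[numModels][2];  // {numLevels, lowestLevel}
        if (numLevels.length == 2 * numModels) {
            // Interleaved: number of levels and lowest level per model.
            for (int i = 0; i < numModels; i++) {
                perModel[i][0] = numLevels[2 * i];
                perModel[i][1] = numLevels[2 * i + 1];
            }
        } else if (numLevels.length == numModels) {
            // One value per model; track to the lowest level (1) by default.
            // Note: for exactly two models, two values are always read this way.
            for (int i = 0; i < numModels; i++) {
                perModel[i][0] = numLevels[i];
                perModel[i][1] = 1;
            }
        } else {
            throw new IllegalArgumentException(
                "numLevels must contain |modelIDs| or 2*|modelIDs| values");
        }
        return perModel;
    }
}
```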
The parameter greediness determines how “greedily” the search should be carried out. If greediness = 0,
a safe search heuristic is used, which always finds the model if it is visible in the image. However, the search will
be relatively time consuming in this case. If greediness = 1, an unsafe search heuristic is used, which may
cause the model not to be found in rare cases, even though it is visible in the image. For greediness = 1, the
maximum search speed is achieved. In almost all cases, the shape model will still be found for greediness =
0.9.
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Input image in which the models should be found.
. modelIDs (input_control) . . . . . . . . . . . . . . . . shape_model(-array) ; HShapeModel [ ] / HTuple (IntPtr)
Handle of the models.
. angleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; HTuple (double)
Smallest rotation of the models.
Default Value : -0.39
Suggested values : AngleStart ∈ {-3.14, -1.57, -0.78, -0.39, -0.20, 0.0}
. angleExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; HTuple (double)
Extent of the rotation angles.
Default Value : 0.78
Suggested values : AngleExtent ∈ {6.29, 3.14, 1.57, 0.78, 0.39, 0.0}
Restriction : AngleExtent ≥ 0
. scaleMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; HTuple (double)
Minimum scale of the models.
Default Value : 0.9
Suggested values : ScaleMin ∈ {0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : ScaleMin > 0
. scaleMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; HTuple (double)
Maximum scale of the models.
Default Value : 1.1
Suggested values : ScaleMax ∈ {1.0, 1.1, 1.2, 1.3, 1.4, 1.5}
Restriction : ScaleMax ≥ ScaleMin
. minScore (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Minimum score of the instances of the models to be found.
Default Value : 0.5
Suggested values : MinScore ∈ {0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0 ≤ MinScore ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.05
. numMatches (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; HTuple (int / long)
Number of instances of the models to be found.
Default Value : 1
Suggested values : NumMatches ∈ {0, 1, 2, 3, 4, 5, 10, 20}
. maxOverlap (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Maximum overlap of the instances of the models to be found.
Default Value : 0.5
Suggested values : MaxOverlap ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0 ≤ MaxOverlap ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.05
. subPixel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; HTuple (string)
Subpixel accuracy if not equal to ’none’.
Default Value : "least_squares"
List of values : SubPixel ∈ {"none", "interpolation", "least_squares", "least_squares_high",
"least_squares_very_high"}
. numLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; HTuple (int / long)
Number of pyramid levels used in the matching (and lowest pyramid level to use if |numLevels| = 2).
Default Value : 0
List of values : NumLevels ∈ {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
. greediness (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
“Greediness” of the search heuristic (0: safe but slow; 1: fast but matches may be missed).
Default Value : 0.9
Suggested values : Greediness ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0 ≤ Greediness ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.05
overlap. The angle range in the search is not adapted modulo 2π. To simplify the presentation, all angles in the re-
mainder of the paragraph are given in degrees, whereas they have to be specified in radians in FindShapeModel.
Hence, if the model, for example, was created with angleStart = −20° and angleExtent = 40° and the
angle search space in FindShapeModel is, for example, set to angleStart = 350° and angleExtent =
20°, the model will not be found, even though the angle ranges would overlap if they were regarded modulo 360°.
To find the model, in this example it would be necessary to select angleStart = −10°.
Furthermore, it should be noted that in some cases instances with a rotation that is slightly outside the specified
range of rotations are found. This may happen if the specified range of rotations is smaller than the range given
when the model was created.
The parameter minScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger minScore is chosen, the faster the search is. If the model can be expected
never to be occluded in the images, minScore may be set as high as 0.8 or even 0.9. If the matches are not tracked
to the lowest pyramid level (see below) it might happen that instances with a score slightly below minScore are
found.
The maximum number of instances to be found can be determined with numMatches. If more than
numMatches instances with a score greater than minScore are found in the image, only the best numMatches
instances are returned. If fewer than numMatches are found, only that number is returned, i.e., the parameter
minScore takes precedence over numMatches.
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rota-
tions are found in the image. The parameter maxOverlap determines by what fraction (i.e., a number between
0 and 1) two instances may at most overlap in order to consider them as different instances, and hence to be
returned separately. If two instances overlap each other by more than maxOverlap only the best instance is
returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary orientation (see
SmallestRectangle2) of the found instances. If maxOverlap = 0, the found instances may not overlap at
all, while for maxOverlap = 1 all instances are returned.
The parameter subPixel determines whether the instances should be extracted with subpixel accuracy. If
subPixel is set to ’none’ (or ’false’ for backwards compatibility) the model’s pose is only determined with
pixel accuracy and the angle resolution that was specified with CreateShapeModel. If subPixel is set
to ’interpolation’ (or ’true’) the position as well as the rotation are determined with subpixel accuracy. In this
mode, the model’s pose is interpolated from the score function. This mode costs almost no computation time
and achieves an accuracy that is high enough for most applications. In some applications, however, the accuracy
requirements are extremely high. In these cases, the model’s pose can be determined through a least-squares ad-
justment, i.e., by minimizing the distances of the model points to their corresponding image points. In contrast to
’interpolation’, this mode requires additional computation time. The different modes for least-squares adjustment
(’least_squares’, ’least_squares_high’, and ’least_squares_very_high’) can be used to determine the accuracy with
which the minimum distance is being searched. The higher the accuracy is chosen, the longer the subpixel extrac-
tion will take, however. Usually, subPixel should be set to ’interpolation’. If least-squares adjustment is desired,
’least_squares’ should be chosen because this results in the best tradeoff between runtime and accuracy.
The number of pyramid levels used during the search is determined with numLevels. If necessary, the num-
ber of levels is clipped to the range given when the shape model was created with CreateShapeModel. If
numLevels is set to 0, the number of pyramid levels specified in CreateShapeModel is used. Optionally,
numLevels can contain a second value that determines the lowest pyramid level to which the found matches are
tracked. Hence, a value of [4,2] for numLevels means that the matching starts at the fourth pyramid level and
tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value of 1). This
mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in general the
accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the matches
are tracked to the lowest pyramid level. Hence, if a high accuracy is desired, subPixel should be set to at least
’least_squares’. If the lowest pyramid level to use is chosen too large, it may happen that the desired accuracy
cannot be achieved, or that wrong instances of the model are found because the model is not specific enough on the
higher pyramid levels to facilitate a reliable selection of the correct instance of the model. In this case, the lowest
pyramid level to use must be set to a smaller value.
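To illustrate what the pyramid levels mean, the following sketch (plain Java; HALCON builds and uses its pyramids internally) computes the image size at a given level and performs one mean-pyramid reduction step:

```java
// Illustration of the image pyramid underlying numLevels: each level
// halves the resolution, so level 4 of a 640x480 image is 80x60.
public class Pyramid {
    // Level 1 is the original image; each further level halves both sides.
    public static int[] sizeAtLevel(int width, int height, int level) {
        return new int[]{width >> (level - 1), height >> (level - 1)};
    }

    // One pyramid step: average each 2x2 block (a common mean pyramid).
    public static double[][] downsample(double[][] img) {
        int h = img.length / 2, w = img[0].length / 2;
        double[][] out = new double[h][w];
        for (int r = 0; r < h; r++)
            for (int c = 0; c < w; c++)
                out[r][c] = (img[2*r][2*c] + img[2*r][2*c+1]
                           + img[2*r+1][2*c] + img[2*r+1][2*c+1]) / 4.0;
        return out;
    }
}
```

This also shows why too high a lowest level hurts accuracy: a pose found at level 4 is only known to within roughly 8 pixels of the original resolution.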
The parameter greediness determines how “greedily” the search should be carried out. If greediness = 0,
a safe search heuristic is used, which always finds the model if it is visible in the image. However, the search will
be relatively time consuming in this case. If greediness = 1, an unsafe search heuristic is used, which may
cause the model not to be found in rare cases, even though it is visible in the image. For greediness = 1, the
maximum search speed is achieved. In almost all cases, the shape model will still be found for greediness =
0.9.
Parameter
Result
If the parameter values are correct, the operator FindShapeModel returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available) the behavior can be set via SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
FindShapeModel is reentrant and processed without parallelization.
Possible Predecessors
CreateShapeModel, ReadShapeModel, SetShapeModelOrigin
Possible Successors
ClearShapeModel
Alternatives
FindScaledShapeModel, FindAnisoShapeModel, FindScaledShapeModels,
FindShapeModels, FindAnisoShapeModels, BestMatchRotMg
See also
SetSystem, GetSystem
Module
Matching
in the respective entry in numMatches. If, for example, numMatches = [1, 1] is selected, one instance of
the first model and one instance of the second model is returned. For a detailed description of the semantics of
numMatches, see below. A similar extension of the semantics holds for maxOverlap. If a single value is passed
for maxOverlap, the overlap is computed for all found instances of the different models, irrespective of the model
type, i.e., instances of the same or of different models that overlap too much are eliminated. If, on the other hand,
multiple values are passed in maxOverlap, the overlap is only computed for found instances of the model that
have the same model type, i.e., only instances of the same model that overlap too much are eliminated. In this mode,
models of different types may overlap completely. For a detailed description of the semantics of maxOverlap,
see below. Hence, a call to FindShapeModels with multiple values for modelIDs, numMatches and
maxOverlap has the same effect as multiple independent calls to FindShapeModel with the respective
parameters. However, a single call to FindShapeModels is considerably more efficient.
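The per-model limit can be sketched as follows (plain Java, not HALCON code; the sketch assumes the found instances are already sorted by descending score):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the per-model numMatches semantics described above: when
// numMatches has one entry per model, the limit is applied separately
// to the instances of each model (modelIdx holds indices into modelIDs).
public class PerModelLimit {
    // Returns the positions of the instances that survive the limit.
    public static List<Integer> select(int[] modelIdx, int[] numMatches) {
        int[] taken = new int[numMatches.length];
        List<Integer> kept = new ArrayList<>();
        for (int i = 0; i < modelIdx.length; i++) {
            int m = modelIdx[i];
            if (taken[m] < numMatches[m]) { taken[m]++; kept.add(i); }
        }
        return kept;
    }
}
```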
The type of the found instances of the models is returned in model. The elements of model are indices into the
tuple modelIDs, i.e., they can contain values from 0 to |modelIDs| − 1. Hence, a value of 0 in an element of
model corresponds to an instance of the first model in modelIDs.
The position and rotation of the found instances of the model is returned in row, column, and angle. The
coordinates row and column are the coordinates of the origin of the shape model in the search image. By default,
the origin is the center of gravity of the domain (region) of the image that was used to create the shape model with
CreateShapeModel. A different origin can be set with SetShapeModelOrigin.
Note that the coordinates row and column do not exactly correspond to the position of the model in the search
image. Thus, you cannot directly use them. Instead, the values are optimized for creating the transformation matrix
with which you can use the results of the matching for various tasks, e.g., to align ROIs for other processing steps.
The example given for FindShapeModel shows how to create this matrix and use it to display the model at the
found position in the search image and to calculate the exact coordinates.
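As a sketch of the underlying geometry (plain Java; in a HALCON program the matrix would typically be created with VectorAngleToRigid from the match results), a match at (row, column, angle) corresponds to a rigid 2D transformation that rotates model points about the origin and translates them to the found position. The exact sign conventions of HALCON's row/column coordinate system are not reproduced here.

```java
// Rigid 2D transform sketch for a match at (row, col, angle):
// rotation by angle, followed by translation to (row, col).
public class RigidTransform2D {
    // 2x3 homogeneous matrix [[cos, -sin, row], [sin, cos, col]].
    public static double[][] fromMatch(double row, double col, double angle) {
        double ca = Math.cos(angle), sa = Math.sin(angle);
        return new double[][]{
            {ca, -sa, row},
            {sa,  ca, col}
        };
    }

    // Maps a model point (given relative to the model origin) into the
    // search image.
    public static double[] apply(double[][] m, double pRow, double pCol) {
        return new double[]{
            m[0][0] * pRow + m[0][1] * pCol + m[0][2],
            m[1][0] * pRow + m[1][1] * pCol + m[1][2]
        };
    }
}
```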
Additionally, the score of each found instance is returned in score. The score is a number between 0 and 1, which
is an approximate measure of how much of the model is visible in the image. If, for example, half of the model is
occluded, the score cannot exceed 0.5.
The domain of the image image determines the search space for the reference point of the model, i.e.,
for the center of gravity of the domain (region) of the image that was used to create the shape model with
CreateShapeModel. A different origin set with SetShapeModelOrigin is not taken into account. The
model is searched for only at those points of the domain of the image at which the model lies completely within
the image. This means that the model will not be found if it extends beyond the borders of the image, even if it
would achieve a score greater than minScore (see below). This behavior can be changed with SetSystem
(’border_shape_models’,’true’), which will cause models that extend beyond the image border to be
found if they achieve a score greater than minScore. Here, points lying outside the image are regarded as being
occluded, i.e., they lower the score. It should be noted that the runtime of the search will increase in this mode.
The parameters angleStart and angleExtent determine the range of rotations for which the model is
searched. If necessary, the range of rotations is clipped to the range given when the model was created with
CreateShapeModel. In particular, this means that the angle ranges of the model and the search must truly over-
lap. The angle range in the search is not adapted modulo 2π. To simplify the presentation, all angles in the remain-
der of the paragraph are given in degrees, whereas they have to be specified in radians in FindShapeModels.
Hence, if the model, for example, was created with angleStart = −20° and angleExtent = 40° and the
angle search space in FindShapeModels is, for example, set to angleStart = 350° and angleExtent =
20°, the model will not be found, even though the angle ranges would overlap if they were regarded modulo 360°.
To find the model, in this example it would be necessary to select angleStart = −10°.
Furthermore, it should be noted that in some cases instances with a rotation that is slightly outside the specified
range of rotations are found. This may happen if the specified range of rotations is smaller than the range given
when the model was created.
The parameter minScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger minScore is chosen, the faster the search is. If the model can be expected
never to be occluded in the images, minScore may be set as high as 0.8 or even 0.9. If the matches are not tracked
to the lowest pyramid level (see below) it might happen that instances with a score slightly below minScore are
found.
The maximum number of instances to be found can be determined with numMatches. If more than
numMatches instances with a score greater than minScore are found in the image, only the best numMatches
instances are returned. If fewer than numMatches are found, only that number is returned, i.e., the parameter
minScore takes precedence over numMatches.
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rota-
tions are found in the image. The parameter maxOverlap determines by what fraction (i.e., a number between
0 and 1) two instances may at most overlap in order to consider them as different instances, and hence to be
returned separately. If two instances overlap each other by more than maxOverlap only the best instance is
returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary orientation (see
SmallestRectangle2) of the found instances. If maxOverlap = 0, the found instances may not overlap at
all, while for maxOverlap = 1 all instances are returned.
The parameter subPixel determines whether the instances should be extracted with subpixel accuracy. If
subPixel is set to ’none’ (or ’false’ for backwards compatibility) the model’s pose is only determined with
pixel accuracy and the angle resolution that was specified with CreateShapeModel. If subPixel is set
to ’interpolation’ (or ’true’) the position as well as the rotation are determined with subpixel accuracy. In this
mode, the model’s pose is interpolated from the score function. This mode costs almost no computation time
and achieves an accuracy that is high enough for most applications. In some applications, however, the accuracy
requirements are extremely high. In these cases, the model’s pose can be determined through a least-squares ad-
justment, i.e., by minimizing the distances of the model points to their corresponding image points. In contrast to
’interpolation’, this mode requires additional computation time. The different modes for least-squares adjustment
(’least_squares’, ’least_squares_high’, and ’least_squares_very_high’) can be used to determine the accuracy with
which the minimum distance is being searched. The higher the accuracy is chosen, the longer the subpixel extrac-
tion will take, however. Usually, subPixel should be set to ’interpolation’. If least-squares adjustment is desired,
’least_squares’ should be chosen because this results in the best tradeoff between runtime and accuracy.
The number of pyramid levels used during the search is determined with numLevels. If necessary, the num-
ber of levels is clipped to the range given when the shape model was created with CreateShapeModel. If
numLevels is set to 0, the number of pyramid levels specified in CreateShapeModel is used. Optionally,
numLevels can contain a second value that determines the lowest pyramid level to which the found matches are
tracked. Hence, a value of [4,2] for numLevels means that the matching starts at the fourth pyramid level and
tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value of 1). This
mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in general the
accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the matches
are tracked to the lowest pyramid level. Hence, if a high accuracy is desired, subPixel should be set to at least
’least_squares’. If the lowest pyramid level to use is chosen too large, it may happen that the desired accuracy
cannot be achieved, or that wrong instances of the model are found because the model is not specific enough on the
higher pyramid levels to facilitate a reliable selection of the correct instance of the model. In this case, the lowest
pyramid level to use must be set to a smaller value. If the lowest pyramid level is specified separately for each
model, numLevels must contain twice the number of elements as modelIDs. In this case, the number of pyra-
mid levels and the lowest pyramid level must be specified interleaved in numLevels. If, for example, two models
are specified in modelIDs, the number of pyramid levels is 5 for the first model and 4 for the second model, and
the lowest pyramid level is 2 for the first model and 1 for the second model, numLevels = [5, 2, 4, 1] must
be selected. If exactly two models are specified in modelIDs, a special case occurs. If in this case the lowest
pyramid level is to be specified, the number of pyramid levels and the lowest pyramid level must be specified
explicitly for both models, even if they are identical, because specifying two values in numLevels is interpreted
as the explicit specification of the number of pyramid levels for the two models.
The parameter greediness determines how “greedily” the search should be carried out. If greediness = 0,
a safe search heuristic is used, which always finds the model if it is visible in the image. However, the search will
be relatively time consuming in this case. If greediness = 1, an unsafe search heuristic is used, which may
cause the model not to be found in rare cases, even though it is visible in the image. For greediness = 1, the
maximum search speed is achieved. In almost all cases, the shape model will always be found for greediness =
0.9.
Parameter
Possible Successors
ClearShapeModel
Alternatives
FindScaledShapeModels, FindAnisoShapeModels, FindShapeModel,
FindScaledShapeModel, FindAnisoShapeModel, BestMatchRotMg
See also
SetSystem, GetSystem
Module
Matching
The operator GetShapeModelOrigin returns the origin (reference point) of the shape model modelID. The
origin is specified relative to the center of gravity of the domain (region) of the image that was used to create the
shape model with CreateShapeModel, CreateScaledShapeModel, or CreateAnisoShapeModel.
Hence, an origin of (0,0) means that the center of gravity of the domain of the model image is used as the origin.
An origin of (-20,-40) means that the origin lies to the upper left of the center of gravity.
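As a minimal sketch of this convention (plain Java, not a HALCON call): the absolute reference point in the model image is simply the center of gravity of the model region plus the returned origin offset.

```java
// Sketch of the origin convention of GetShapeModelOrigin: the returned
// origin is an offset relative to the center of gravity of the model
// region, so (-20, -40) lies 20 rows above and 40 columns to the left.
public class ModelOrigin {
    public static double[] absoluteReferencePoint(double cogRow, double cogCol,
                                                  double originRow, double originCol) {
        return new double[]{cogRow + originRow, cogCol + originCol};
    }
}
```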
Parameter
Note that the parameters optimization and contrast that also can be determined automatically dur-
ing the model creation cannot be queried by using GetShapeModelParams. If their value is of interest
DetermineShapeModelParams should be used instead.
Parameter
InspectShapeModel creates a representation of a shape model. The operator is particularly useful in or-
der to determine the parameters numLevels and contrast, which are used in CreateShapeModel,
CreateScaledShapeModel, or CreateAnisoShapeModel, quickly and conveniently. The repre-
sentation of the model is created on multiple image pyramid levels, where the number of levels is de-
termined by numLevels. In contrast to CreateShapeModel, CreateScaledShapeModel, and
CreateAnisoShapeModel, the model is only created for the rotation and scale of the object in the input
image, i.e., for a rotation of 0° and a scale of 1. As output, InspectShapeModel creates an image object modelImages containing
the images of the individual levels of the image pyramid as well as a region in modelRegions for each pyra-
mid level that represents the model at the respective pyramid level. The individual objects can be accessed with
SelectObj. If the input image image has one channel the representation of the model is created with the method
that is used in CreateShapeModel, CreateScaledShapeModel or CreateAnisoShapeModel for
the metrics ’use_polarity’, ’ignore_global_polarity’, and ’ignore_local_polarity’. If the input image has more than
one channel the representation is created with the method that is used for the metric ’ignore_color_polarity’. As
described for CreateShapeModel, CreateScaledShapeModel, and CreateAnisoShapeModel,
the number of pyramid levels should be chosen as large as possible, while taking into account that the model must
be recognizable on the highest pyramid level and must have enough model points. The parameter contrast
should be chosen such that only the significant features of the template object are used for the model. contrast
can also contain a tuple with two values. In this case, the model is segmented using a method similar to the
hysteresis threshold method used in EdgesImage. Here, the first element of the tuple determines the lower
threshold, while the second element determines the upper threshold. For more information about the hysteresis
threshold method, see HysteresisThreshold. Optionally, contrast can contain a third value as the last
element of the tuple. This value determines a threshold for the selection of significant model components based
on the size of the components, i.e., components that have fewer points than the minimum size thus specified are
suppressed. This threshold for the minimum size is divided by two for each successive pyramid level. If small
model components should be suppressed without performing hysteresis thresholding, three values must
nevertheless be specified in contrast; in this case, the first two values can simply be set to identical
values. In its typical use, InspectShapeModel is called interactively multiple times with different parameters
for numLevels and contrast, until a satisfactory model is obtained. After this, CreateShapeModel,
CreateScaledShapeModel, or CreateAnisoShapeModel are called with the parameters thus obtained.
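The hysteresis rule can be sketched in one dimension as follows (plain Java; EdgesImage and HysteresisThreshold operate on images and regions, so this only illustrates the principle):

```java
// One-dimensional sketch of hysteresis thresholding: values at or above
// the upper threshold are accepted as seeds, and values between the two
// thresholds are accepted only if they are connected to an accepted value.
public class Hysteresis1D {
    public static boolean[] threshold(double[] v, double low, double high) {
        boolean[] keep = new boolean[v.length];
        for (int i = 0; i < v.length; i++)
            keep[i] = v[i] >= high;                      // seed points
        boolean changed = true;
        while (changed) {                                // grow into weak points
            changed = false;
            for (int i = 0; i < v.length; i++) {
                if (keep[i] || v[i] < low) continue;
                boolean neighbor = (i > 0 && keep[i - 1])
                                || (i + 1 < v.length && keep[i + 1]);
                if (neighbor) { keep[i] = true; changed = true; }
            }
        }
        return keep;
    }
}
```

In the example below, the value 12 survives because it touches the strong value 30, while the isolated 14 is suppressed.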
Parameter
Module
Foundation
Result
If the handle of the model is valid, the operator SetShapeModelOrigin returns the value 2 (H_MSG_TRUE).
If necessary an exception is raised.
Parallelization Information
SetShapeModelOrigin is processed completely exclusively without parallelization.
Possible Predecessors
CreateShapeModel, CreateScaledShapeModel, CreateAnisoShapeModel,
ReadShapeModel
Possible Successors
FindShapeModel, FindScaledShapeModel, FindAnisoShapeModel, FindShapeModels,
FindScaledShapeModels, FindAnisoShapeModels, GetShapeModelOrigin
See also
AreaCenter
Module
Matching
Matching-3D
Parameter
. objectModel3DID (input_control) . . . . . . . . object_model_3d ; HObjectModel3D / HTuple (IntPtr)
Handle of the 3D object model.
Result
If the handle of the model is valid, the operator ClearObjectModel3d returns the value 2 (H_MSG_TRUE).
If necessary, an exception is raised.
Parallelization Information
ClearObjectModel3d is processed completely exclusively without parallelization.
Possible Predecessors
ReadObjectModel3dDxf
See also
ClearAllObjectModel3d
Module
3D Metrology
and radius. The longitude is returned in the range [−π, +π] while the latitude is returned in the range
[−π/2, +π/2]. Furthermore, the latitude of the north pole is π/2, and hence, the latitude of the south pole is −π/2.
The orientation of the spherical coordinate system with respect to the Cartesian coordinate system can be specified
with the parameters equatPlaneNormal and zeroMeridian.
equatPlaneNormal determines the normal of the equatorial plane (longitude == 0) pointing to the north pole
(positive latitude) and may take the following values:
’x’: The equatorial plane is the yz plane. The positive x axis points to the north pole.
’-x’: The equatorial plane is the yz plane. The positive x axis points to the south pole.
’y’: The equatorial plane is the xz plane. The positive y axis points to the north pole.
’-y’: The equatorial plane is the xz plane. The positive y axis points to the south pole.
’z’: The equatorial plane is the xy plane. The positive z axis points to the north pole.
’-z’: The equatorial plane is the xy plane. The positive z axis points to the south pole.
The position of the zero meridian can be specified with the parameter zeroMeridian. For this, the coordinate
axis (lying in the equatorial plane) that points to the zero meridian must be passed. The following values for
zeroMeridian are valid:
’x’: The positive x axis points in the direction of the zero meridian.
’-x’: The negative x axis points in the direction of the zero meridian.
’y’: The positive y axis points in the direction of the zero meridian.
’-y’: The negative y axis points in the direction of the zero meridian.
’z’: The positive z axis points in the direction of the zero meridian.
’-z’: The negative z axis points in the direction of the zero meridian.
Only reasonable combinations of equatPlaneNormal and zeroMeridian are permitted, i.e., the normal
of the equatorial plane must not be parallel to the direction of the zero meridian. For example, the combination
equatPlaneNormal=’y’ and zeroMeridian=’-y’ is not permitted.
Note that in order to guarantee a consistent conversion back from spherical to Cartesian coordinates by us-
ing ConvertPoint3dSpherToCart, the same values must be passed for equatPlaneNormal and
zeroMeridian as were passed to ConvertPoint3dCartToSpher.
The operator ConvertPoint3dCartToSpher can be used, for example, to convert a given camera position
into spherical coordinates. If multiple camera positions are converted in this way, one obtains a pose range (in
spherical coordinates), which can be passed to CreateShapeModel3d in order to create a 3D shape model.
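As an illustrative sketch (not the HALCON operator itself), the conversion for one fixed orientation, equatPlaneNormal=’z’ and zeroMeridian=’x’, reduces to the familiar formulas:

```python
import math

def cart_to_spher(x, y, z):
    """Cartesian -> (longitude, latitude, radius), assuming the
    equatorial plane is the xy plane (equatPlaneNormal = 'z') and the
    positive x axis points to the zero meridian (zeroMeridian = 'x')."""
    radius = math.sqrt(x * x + y * y + z * z)
    latitude = math.asin(z / radius)   # in [-pi/2, +pi/2]
    longitude = math.atan2(y, x)       # in [-pi, +pi]
    return longitude, latitude, radius

# A point on the positive z axis lies at the north pole.
lon, lat, r = cart_to_spher(0.0, 0.0, 2.0)
print(lat, r)  # latitude pi/2, radius 2.0
```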
Parameter
’x’: The equatorial plane is the yz plane. The positive x axis points to the north pole.
’-x’: The equatorial plane is the yz plane. The positive x axis points to the south pole.
’y’: The equatorial plane is the xz plane. The positive y axis points to the north pole.
’-y’: The equatorial plane is the xz plane. The positive y axis points to the south pole.
’z’: The equatorial plane is the xy plane. The positive z axis points to the north pole.
’-z’: The equatorial plane is the xy plane. The positive z axis points to the south pole.
The position of the zero meridian can be specified with the parameter zeroMeridian. For this, the coordinate
axis (lying in the equatorial plane) that points to the zero meridian must be passed. The following values for
zeroMeridian are valid:
’x’: The positive x axis points in the direction of the zero meridian.
’-x’: The negative x axis points in the direction of the zero meridian.
’y’: The positive y axis points in the direction of the zero meridian.
’-y’: The negative y axis points in the direction of the zero meridian.
’z’: The positive z axis points in the direction of the zero meridian.
’-z’: The negative z axis points in the direction of the zero meridian.
Only reasonable combinations of equatPlaneNormal and zeroMeridian are permitted, i.e., the normal
of the equatorial plane must not be parallel to the direction of the zero meridian. For example, the combination
equatPlaneNormal=’y’ and zeroMeridian=’-y’ is not permitted.
Note that in order to guarantee a consistent conversion back from Cartesian to spherical coordinates by us-
ing ConvertPoint3dCartToSpher, the same values must be passed for equatPlaneNormal and
zeroMeridian as were passed to ConvertPoint3dSpherToCart.
The operator ConvertPoint3dSpherToCart can be used, for example, to convert a camera position that
is given in spherical coordinates into Cartesian coordinates. The result can then be utilized to create a complete
camera pose by passing the Cartesian coordinates to CreateCamPoseLookAtPoint.
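The inverse conversion, for the same assumed orientation (equatPlaneNormal=’z’, zeroMeridian=’x’), can be sketched as (illustrative, not the HALCON operator):

```python
import math

def spher_to_cart(longitude, latitude, radius):
    """(longitude, latitude, radius) -> Cartesian, assuming the
    equatorial plane is the xy plane and the positive x axis points
    to the zero meridian."""
    x = radius * math.cos(latitude) * math.cos(longitude)
    y = radius * math.cos(latitude) * math.sin(longitude)
    z = radius * math.sin(latitude)
    return x, y, z

# Round trip: converting the spherical coordinates of the point
# (1, 3, 2) back with identical orientation parameters reproduces it.
x, y, z = spher_to_cart(math.atan2(3, 1), math.asin(2 / math.sqrt(14)),
                        math.sqrt(14))
print(round(x, 9), round(y, 9), round(z, 9))  # 1.0 3.0 2.0
```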
Parameter
’x’: The reference plane is the yz plane of the world coordinate system. The projected x axis of the world coordi-
nate system points upwards in the image plane.
’-x’: The reference plane is the yz plane of the world coordinate system. The projected x axis of the world
coordinate system points downwards in the image plane.
’y’: The reference plane is the xz plane of the world coordinate system. The projected y axis of the world coordi-
nate system points upwards in the image plane.
’-y’: The reference plane is the xz plane of the world coordinate system. The projected y axis of the world
coordinate system points downwards in the image plane.
’z’: The reference plane is the xy plane of the world coordinate system. The projected z axis of the world coordi-
nate system points upwards in the image plane.
’-z’: The reference plane is the xy plane of the world coordinate system. The projected z axis of the world
coordinate system points downwards in the image plane.
Alternatively to the above values, an arbitrary normal vector can be specified in refPlaneNormal, which is not
restricted to the coordinate axes. For this, a tuple of three values representing the three components of the normal
vector must be passed.
Note that the position of the optical center and the point at which the camera looks must differ from each other.
Furthermore, the normal vector of the reference plane and the z axis of the camera must not be parallel. Otherwise,
the camera pose is not well-defined.
CreateCamPoseLookAtPoint is particularly useful if a 3D object model or a 3D shape model should be visu-
alized from a certain camera position. In this case, the pose that is created by CreateCamPoseLookAtPoint
can be passed to ProjectObjectModel3d or ProjectShapeModel3d, respectively.
It is also possible to pass tuples of different lengths for different input parameters. In this case, the maximum
number of parameter values over all input control parameters is computed internally. This number is taken as
the number of output camera poses. Then, all input parameters can contain a single value or the same number of
values as output camera poses. In the first case, the single value is used for the computation of all camera poses,
while in the second case the respective value of the element in the parameter is used for the computation of the
corresponding camera pose.
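The broadcasting rule described above can be sketched as follows (a hypothetical helper, not part of HALCON):

```python
def broadcast_params(*params):
    """Broadcast input tuples as described above: every parameter must
    hold either a single value or as many values as the longest tuple."""
    n = max(len(p) for p in params)
    out = []
    for p in params:
        if len(p) == 1:
            out.append(list(p) * n)    # single value reused for all poses
        elif len(p) == n:
            out.append(list(p))
        else:
            raise ValueError("tuple length must be 1 or %d" % n)
    return out

# Three camera x positions combined with one shared y position.
print(broadcast_params([0.1, 0.2, 0.3], [0.5]))
```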
Parameter
void HShapeModel3D.CreateShapeModel3d (
HObjectModel3D objectModel3DID, HTuple camParam, double refRotX,
double refRotY, double refRotZ, string orderOfRotation,
double longitudeMin, double longitudeMax, double latitudeMin,
double latitudeMax, double camRollMin, double camRollMax,
double distMin, double distMax, int minContrast, HTuple genParamNames,
HTuple genParamValues )
void HShapeModel3D.CreateShapeModel3d (
HObjectModel3D objectModel3DID, HTuple camParam, double refRotX,
double refRotY, double refRotZ, string orderOfRotation,
double longitudeMin, double longitudeMax, double latitudeMin,
double latitudeMax, double camRollMin, double camRollMax,
double distMin, double distMax, int minContrast, string genParamNames,
int genParamValues )
about the different pose types). Note, however, that GetShapeModel3dParams always returns the pose using
the pose type 0. Finally, poses that are given in one of the two coordinate systems can be transformed to the other
coordinate system by using TransPoseShapeModel3d.
The parameter minContrast determines the minimum edge contrast that the model must have in the recognition
performed by FindShapeModel3d. In other words, this parameter separates the model from the noise in
the image. Therefore, a good choice is the range of gray value changes caused by the noise in the image. If,
for example, the gray values fluctuate within a range of 10 gray levels, minContrast should be set to 10. If
multichannel images are used for the search images, the noise in one channel must be multiplied by the square root
of the number of channels to determine minContrast. If, for example, the gray values fluctuate within a range
of 10 gray levels in a single channel and the image is a three-channel image, minContrast should be set to 17.
If the model should be recognized in very low contrast images, minContrast must be set to a correspondingly
small value. If the model should be recognized even if it is severely occluded, minContrast should be slightly
larger than the range of gray value fluctuations created by noise in order to ensure that the pose of the model is
extracted robustly and accurately by FindShapeModel3d.
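The square-root rule for multichannel images can be sketched as follows (an illustrative helper only; rounding to the nearest integer is an assumption):

```python
import math

def min_contrast_for_noise(noise_range, num_channels):
    """Scale the single-channel noise range by the square root of the
    number of channels, as prescribed above for multichannel search
    images."""
    return round(noise_range * math.sqrt(num_channels))

# 10 gray levels of noise per channel in a three-channel image:
print(min_contrast_for_noise(10, 3))  # 17
```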
The parameters described above are application-dependent and must always be specified when creating a 3D
shape model. In addition, there are some generic parameters that can optionally be used to influence the model
creation. For most applications these parameters need not be specified but can be left at their default values.
If desired, these parameters and their corresponding values can be specified by using genParamNames and
genParamValues, respectively. The following values for genParamNames are possible:
’num_levels’: For efficiency reasons the model views are generated on multiple pyramid levels. On higher levels
fewer views are generated than on lower levels. With the parameter ’num_levels’ the number of pyramid
levels on which model views are generated can be specified. It should be chosen as large as possible because
this significantly reduces the time needed to find the model. On the other hand, the number of
levels must be chosen such that the shape representations of the views on the highest pyramid level are
still recognizable and contain a sufficient number of points (at least four). If not enough model points are
generated for a certain view, the view is deleted from the model and replaced by a view on a lower pyramid
level. If not enough model points are generated for any view on a pyramid level, the number of levels is
reduced internally until enough model points are found for at least one view on the highest pyramid level. If
this procedure would lead to a model with no pyramid levels, i.e., if the number of model points is too small
for all views already on the lowest pyramid level, CreateShapeModel3d returns an error message. If
’num_levels’ is set to ’auto’ (default value), CreateShapeModel3d determines the number of pyramid
levels automatically. In this case all model views on all pyramid levels are automatically checked to determine
whether their shape representations are still recognizable. If the shape representation of a certain view is found
not to be recognizable, the view is deleted from the model and replaced by a view on a lower pyramid level.
Note that if ’num_levels’ is set to ’auto’, the number of pyramid levels can be different for different views.
In rare cases, it might happen that CreateShapeModel3d determines a value for the number of pyramid
levels that is too large or too small. If the number of pyramid levels is chosen too large, the model may
not be recognized in the image or it may be necessary to select very low values for minScore or
greediness in FindShapeModel3d in order to find the model. If the number of pyramid levels is
chosen too small, the time required to find the model in FindShapeModel3d may increase. In these cases,
the views on the pyramid levels should be checked by using the output of GetShapeModel3dContours.
Suggested values: ’auto’, 3, 4, 5, 6
Default value: ’auto’
’optimization’: For models with particularly large model views, it may be useful to reduce the number of model
points by setting ’optimization’ to a value different from ’none’. If ’optimization’ = ’none’, all model points
are stored. In all other cases, the number of points is reduced according to the value of ’optimization’.
If the number of points is reduced, it may be necessary in FindShapeModel3d to set the parame-
ter greediness to a smaller value, e.g., 0.7 or 0.8. For models with small model views, the reduction
of the number of model points does not result in a speed-up of the search because in this case usually
significantly more potential instances of the model must be examined. If ’optimization’ is set to ’auto’,
CreateShapeModel3d automatically determines the reduction of the number of model points for each
model view.
List of values: ’auto’, ’none’, ’point_reduction_low’, ’point_reduction_medium’, ’point_reduction_high’
Default value: ’auto’
’metric’: This parameter determines the conditions under which the model is recognized in the image. Cur-
rently, only the metric ’ignore_segment_polarity’ is supported, which recognizes an object even if the con-
trast changes locally.
List of values: ’ignore_segment_polarity’
’min_face_angle’: 3D edges are only included in the shape representations of the views if the angle between
the two 3D faces that are incident with the 3D object model edge is at least ’min_face_angle’. If
’min_face_angle’ is set to 0.0, all edges are included. If ’min_face_angle’ is set to π (equivalent to 180
degrees), only the silhouette of the 3D object model is included. This parameter can be used to suppress
edges within curved surfaces, e.g., the surface of a cylinder or cone. Curved surfaces are approximated by
multiple planar faces. The edges between such neighboring planar faces should not be included in the shape
representation because they also do not appear in real images of the model. Thus, ’min_face_angle’ should
be set sufficiently high to suppress these edges. The effect of different values for ’min_face_angle’ can be
inspected by using ProjectObjectModel3d before calling CreateShapeModel3d. Note that if
edges that are not visible in the search image are included in the shape representation, the performance (ro-
bustness and speed) of the matching may decrease considerably.
Suggested values: rad(10), rad(20), rad(30), rad(45)
Default value: rad(15)
’min_size’: This value determines a threshold for the selection of significant model components based on the size
of the components, i.e., connected components that have fewer points than the specified minimum size are
suppressed. This threshold for the minimum size is divided by two for each successive pyramid level.
Suggested values: ’auto’, 0, 3, 5, 10, 20
Default value: ’auto’
’model_tolerance’: The parameter specifies the tolerance of the projected 3D object model edges in the image,
given in pixels. The higher the value is chosen, the fewer views need to be generated. Consequently, a higher
value results in models that are less memory consuming and faster to find with FindShapeModel3d. On
the other hand, if the value is chosen too high, the robustness of the matching will decrease. Therefore, this
parameter should only be modified with care. For most applications, a good compromise between speed and
robustness is obtained when setting ’model_tolerance’ to 1.
Suggested values: 0, 1, 2
Default value: 1
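To relate the ’min_face_angle’ default to a concrete tessellation: if a curved surface is approximated by a regular arrangement of n planar faces, neighboring faces meet at an angle of roughly 2π/n between their normals. The following sketch (not a HALCON call; the regular-tessellation model is an assumption) checks whether such edges would be suppressed:

```python
import math

def faces_suppressed(num_faces, min_face_angle):
    """For a cylinder approximated by num_faces planar faces, the
    angle between neighboring faces (measured between their normals)
    is 2*pi/num_faces.  Edges between them are suppressed when this
    angle falls below min_face_angle."""
    angle_between_faces = 2 * math.pi / num_faces
    return angle_between_faces < min_face_angle

# A 36-face cylinder has 10 degrees between neighboring faces, so the
# default of rad(15) suppresses its tessellation edges:
print(faces_suppressed(36, math.radians(15)))  # True
```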
Parameter
Possible Predecessors
ReadObjectModel3dDxf, ProjectObjectModel3d, GetObjectModel3dParams
Possible Successors
FindShapeModel3d, WriteShapeModel3d, ProjectShapeModel3d,
GetShapeModel3dParams, GetShapeModel3dContours
See also
ConvertPoint3dCartToSpher, ConvertPoint3dSpherToCart,
CreateCamPoseLookAtPoint, TransPoseShapeModel3d
Module
3D Metrology
will be relatively time consuming in this case. If greediness = 1, an unsafe search heuristic is used, which
may cause the model not to be found in rare cases, even though it is visible in the image. For greediness =
1, the maximum search speed is achieved. In almost all cases, the 3D shape model will always be found for
greediness = 0.9.
The number of pyramid levels used during the search is determined with numLevels. If necessary, the number
of levels is clipped to the range given when the 3D shape model was created with CreateShapeModel3d. If
numLevels is set to 0, the number of pyramid levels specified in CreateShapeModel3d is used. Optionally,
numLevels can contain a second value that determines the lowest pyramid level to which the found matches are
tracked. Hence, a value of [4,2] for numLevels means that the matching starts at the fourth pyramid level and
tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value of 1). This
mechanism can be used to decrease the runtime of the matching. If the lowest pyramid level to use is chosen too
large, it may happen that the desired accuracy cannot be achieved, or that wrong instances of the model are found
because the model is not specific enough on the higher pyramid levels to facilitate a reliable selection of the correct
instance of the model. In this case, the lowest pyramid level to use must be set to a smaller value.
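The interpretation of numLevels can be sketched as follows (a hypothetical helper; the exact clipping behavior is an assumption based on the description above):

```python
def resolve_num_levels(num_levels, model_num_levels):
    """Interpret the numLevels parameter as described above: 0 selects
    the value stored in the model, and an optional second element gives
    the lowest pyramid level to which matches are tracked (level 1 is
    the original resolution)."""
    values = num_levels if isinstance(num_levels, (list, tuple)) else [num_levels]
    top = values[0] if values[0] != 0 else model_num_levels
    top = min(top, model_num_levels)   # clip to the model's range
    lowest = values[1] if len(values) > 1 else 1
    return top, lowest

# [4, 2]: start matching on level 4, track matches down to level 2.
print(resolve_num_levels([4, 2], 5))  # (4, 2)
```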
In addition to the parameters described above, there are some generic parameters that can optionally be used to
influence the matching. For most applications these parameters need not be specified but can be left at their default
values. If desired, these parameters and their corresponding values can be specified by using genParamNames
and genParamValues, respectively. The following values for genParamNames are possible:
• If the pose range in which the model is to be searched is smaller than the pose range that was specified during
the model creation with CreateShapeModel3d, the pose range can be restricted appropriately with the
following parameters. If the values lie outside the pose range of the model, the values are automatically
clipped to the pose range of the model.
’longitude_min’: Sets the minimum longitude of the pose range.
Suggested values: rad(-45), rad(-30), rad(-15)
Default value: rad(-180)
’longitude_max’: Sets the maximum longitude of the pose range.
Suggested values: rad(15), rad(30), rad(45)
Default value: rad(180)
’latitude_min’: Sets the minimum latitude of the pose range.
Suggested values: rad(-45), rad(-30), rad(-15)
Default value: rad(-90)
’latitude_max’: Sets the maximum latitude of the pose range.
Suggested values: rad(15), rad(30), rad(45)
Default value: rad(90)
’cam_roll_min’: Sets the minimum camera roll angle of the pose range.
Suggested values: rad(-45), rad(-30), rad(-15)
Default value: rad(-180)
’cam_roll_max’: Sets the maximum camera roll angle of the pose range.
Suggested values: rad(15), rad(30), rad(45)
Default value: rad(180)
’dist_min’: Sets the minimum camera-object-distance of the pose range.
Suggested values: 0.05, 0.1, 0.5, 1.0
Default value: 0
’dist_max’: Sets the maximum camera-object-distance of the pose range.
Suggested values: 0.05, 0.1, 0.5, 1.0
Default value: (∞)
• Further generic parameters that do not concern the pose range can be specified:
’num_matches’: With this parameter the maximum number of instances to be found can be determined.
If more than the specified number of instances with a score greater than minScore are found in the
image, only the best ’num_matches’ instances are returned. If fewer than ’num_matches’ are found,
only that number is returned, i.e., the parameter minScore takes precedence over ’num_matches’. If
’num_matches’ is set to 0, all matches that satisfy the score criterion are returned. Note that the matching
becomes slower the more matches are to be found.
Suggested values: 0, 1, 2, 3
Default value: 1
’max_overlap’: It may happen that multiple instances with similar positions but with different orientations
are found in the image. The parameter ’max_overlap’ determines by what fraction (i.e., a number be-
tween 0 and 1) two instances may at most overlap in order to consider them as different instances, and
hence to be returned separately. If two instances overlap each other by more than the specified value only
the best instance is returned. The calculation of the overlap is based on the smallest enclosing rectangle
of arbitrary orientation (see SmallestRectangle2) of the found instances. If ’max_overlap’ = 0,
the found instances may not overlap at all, while for ’max_overlap’ = 1 all instances are returned.
Suggested values: 0.0, 0.2, 0.4, 0.6, 0.8, 1.0
Default value: 0.5
’pose_refinement’: This parameter determines whether the poses of the instances should be refined after
the matching. If ’pose_refinement’ is set to ’none’ the model’s pose is only determined with a limited
accuracy. In this case, the accuracy depends on several sampling steps that are used inside the matching
process and therefore cannot be predicted very well. Consequently, ’pose_refinement’ should only be
set to ’none’ when the computation time is of primary concern and an approximate pose is sufficient.
In all other cases the pose should be determined through a least-squares adjustment, i.e., by minimiz-
ing the distances of the model points to their corresponding image points. In order to achieve a high
accuracy, this refinement is directly performed in 3D. Therefore, the refinement requires additional com-
putation time. The different modes for least-squares adjustment (’least_squares’, ’least_squares_high’,
and ’least_squares_very_high’) can be used to determine the accuracy with which the minimum distance
is searched for. The higher the accuracy is chosen, the longer the pose refinement will take, however.
For most applications ’least_squares_high’ should be chosen because this results in the best tradeoff
between runtime and accuracy.
List of values: ’none’, ’least_squares’, ’least_squares_high’, ’least_squares_very_high’
Default value: ’least_squares_high’
’outlier_suppression’: This parameter only takes effect if ’pose_refinement’ is set to a value other than
’none’, and hence, a least-squares adjustment is performed. Then, in some cases it might be useful
to apply a robust outlier suppression during the least-squares adjustment. This might be necessary, for
example, if a high degree of clutter is present in the image, which prevents the least-squares adjustment
from finding the optimum pose. In this case, ’outlier_suppression’ should be set to either ’medium’
(eliminates a medium proportion of outliers) or ’high’ (eliminates a high proportion of outliers). How-
ever, in most applications, no robust outlier suppression is necessary, and hence, ’outlier_suppression’ can
be left at ’none’. It should be noted that activating the outlier suppression significantly increases the
computation time.
List of values: ’none’, ’medium’, ’high’
Default value: ’none’
’cov_pose_mode’: This parameter only takes effect if ’pose_refinement’ is set to a value other than ’none’,
and hence, a least-squares adjustment is performed. ’cov_pose_mode’ determines the mode in which
the accuracies that are computed during the least-squares adjustment are returned in covPose. If
’cov_pose_mode’ is set to ’standard_deviations’, the 6 standard deviations of the 6 pose parameters
are returned for each match. In contrast, if ’cov_pose_mode’ is set to ’covariances’, covPose contains
the 36 values of the complete 6 × 6 covariance matrix of the 6 pose parameters.
List of values: ’standard_deviations’, ’covariances’
Default value: ’standard_deviations’
’border_model’: The model is searched within those points of the domain of the image in which the model
lies completely within the image. This means that the model will not be found if it extends beyond
the borders of the image, even if it would achieve a score greater than minScore. This behavior can
be changed by setting ’border_model’ to ’true’, which will cause models that extend beyond the image
border to be found if they achieve a score greater than minScore. Here, points lying outside the image
are regarded as being occluded, i.e., they lower the score. It should be noted that the runtime of the
search will increase in this mode.
List of values: ’false’, ’true’
Default value: ’false’
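The automatic clipping of the search pose range to the model's pose range, mentioned above for ’longitude_min’ / ’longitude_max’ and the other pose-range parameters, can be sketched as (illustrative only):

```python
def clip_pose_range(requested, model_range):
    """Clip a requested (min, max) search interval to the interval that
    was specified when the 3D shape model was created."""
    lo = max(requested[0], model_range[0])
    hi = min(requested[1], model_range[1])
    return lo, hi

# Requesting [-60, +200] degrees of longitude on a model built for
# [-45, +45] degrees yields the model's own range:
print(clip_pose_range((-60.0, 200.0), (-45.0, 45.0)))  # (-45.0, 45.0)
```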
Parameter
Result
If the parameter values are correct, the operator FindShapeModel3d returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available) the behavior can be set via SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
FindShapeModel3d is reentrant and processed without parallelization.
Possible Predecessors
CreateShapeModel3d, ReadShapeModel3d
Possible Successors
ProjectShapeModel3d
See also
ConvertPoint3dCartToSpher, ConvertPoint3dSpherToCart,
CreateCamPoseLookAtPoint, TransPoseShapeModel3d
Module
3D Metrology
’reference_point’: 3D coordinates of the reference point of the model. The reference point is the center of the
smallest enclosing axis-parallel cuboid (see parameter ’bounding_box1’).
’bounding_box1’: Smallest enclosing axis-parallel cuboid (min_x, min_y, min_z, max_x, max_y, max_z).
Parameter
’cam_param’: Interior parameters of the camera that is used for the matching.
’ref_rot_x’: Reference orientation: Rotation around x-axis or x component of the Rodrigues vector (in radians or
without unit).
’ref_rot_y’: Reference orientation: Rotation around y-axis or y component of the Rodrigues vector (in radians or
without unit).
’ref_rot_z’: Reference orientation: Rotation around z-axis or z component of the Rodrigues vector (in radians or
without unit).
’order_of_rotation’: Meaning of the rotation values of the reference orientation.
’longitude_min’: Minimum longitude of the model views.
’longitude_max’: Maximum longitude of the model views.
’latitude_min’: Minimum latitude of the model views.
’latitude_max’: Maximum latitude of the model views.
’cam_roll_min’: Minimum camera roll angle of the model views.
’cam_roll_max’: Maximum camera roll angle of the model views.
’dist_min’: Minimum camera-object-distance of the model views.
’dist_max’: Maximum camera-object-distance of the model views.
’min_contrast’: Minimum contrast of the objects in the search images.
’num_levels’: User-specified number of pyramid levels.
’num_levels_max’: Maximum number of used pyramid levels over all model views.
’optimization’: Kind of optimization by reducing the number of model points.
’metric’: Match metric.
’min_face_angle’: Minimum 3D face angle for which 3D object model edges are included in the 3D shape model.
’min_size’: Minimum size of the projected 3D object model edge (in number of pixels) to include the projected
edge in the 3D shape model.
’model_tolerance’: Maximum acceptable tolerance of the projected 3D object model edges (in pixels).
’num_views_per_level’: Number of model views per pyramid level. For each pyramid level the number of views
stored in the 3D shape model is returned. Thus, the number of returned elements corresponds to the
number of used pyramid levels, which can be queried with ’num_levels_max’.
’reference_pose’: Reference position and orientation of the 3D shape model. The returned pose describes the pose
of the internally used reference coordinate system of the 3D shape model with respect to the coordinate
system that is used in the underlying 3D object model.
’reference_point’: 3D coordinates of the reference point of the underlying 3D object model.
’bounding_box1’: Smallest enclosing axis-parallel cuboid of the underlying 3D object model in the following
order: [min_x, min_y, min_z, max_x, max_y, max_z].
A detailed description of the parameters can be looked up with the operator CreateShapeModel3d.
It is possible to query the values of several parameters with a single operator call by passing a tuple containing the
names of all desired parameters to genParamNames. As a result, a tuple of the same length with the
corresponding values is returned in genParamValues. Note that this is only possible for parameters that
return a single value.
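The tuple-based query can be sketched with a plain dictionary standing in for the 3D shape model (names and values are illustrative; this is not the HALCON API):

```python
def query_params(model_params, names):
    """Map a tuple of parameter names to a tuple of the corresponding
    single values, mimicking the query described above."""
    return [model_params[name] for name in names]

# Hypothetical parameter store; the names mirror those listed above.
params = {"num_levels_max": 5, "min_contrast": 10,
          "metric": "ignore_segment_polarity"}
print(query_params(params, ["num_levels_max", "metric"]))
```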
Parameter
. shapeModel3DID (input_control) . . . . . . . . . . . . shape_model_3d ; HShapeModel3D / HTuple (IntPtr)
Handle of the 3D shape model.
. genParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; HTuple (string)
Names of the generic parameters that are to be queried for the 3D shape model.
Default Value : "num_levels_max"
List of values : GenParamNames ∈ {"cam_param", "ref_rot_x", "ref_rot_y", "ref_rot_z",
"order_of_rotation", "longitude_min", "longitude_max", "latitude_min", "latitude_max", "cam_roll_min",
"cam_roll_max", "dist_min", "dist_max", "min_contrast", "num_levels", "num_levels_max", "optimization",
"metric", "min_face_angle", "min_size", "model_tolerance", "num_views_per_level", "reference_pose",
"reference_point", "bounding_box1"}
. genParamValues (output_control) . . . . . . . attribute.value(-array) ; HTuple (string / int / long / double)
Values of the generic parameters.
Result
If the parameters are valid, the operator GetShapeModel3dParams returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Parallelization Information
GetShapeModel3dParams is reentrant and processed without parallelization.
Possible Predecessors
CreateShapeModel3d, ReadShapeModel3d
Possible Successors
FindShapeModel3d
See also
ConvertPoint3dCartToSpher, ConvertPoint3dSpherToCart,
CreateCamPoseLookAtPoint, TransPoseShapeModel3d
Module
3D Metrology
hidden by faces of the 3D object model. If hiddenSurfaceRemoval is set to ’false’, all projected edges are
returned. This is faster than a projection with hiddenSurfaceRemoval set to ’true’.
3D edges are only projected if the angle between the two 3D faces that are incident with the 3D edge is at least
minFaceAngle. If minFaceAngle is set to 0.0, all edges are projected. If minFaceAngle is set to π
(equivalent to 180 degrees), only the silhouette of the 3D object model is returned. This parameter can be used to
suppress edges within curved surfaces, e.g., the surface of a cylinder or cone.
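The minFaceAngle test can be sketched via the angle between the face normals (illustrative; equating the face angle with the angle between outward unit normals is an assumption about the convention):

```python
import math

def edge_angle(normal_a, normal_b):
    """Angle between the two faces incident with an edge, computed
    from their unit normals.  Edges are projected only if this angle
    is at least minFaceAngle."""
    dot = sum(a * b for a, b in zip(normal_a, normal_b))
    dot = max(-1.0, min(1.0, dot))     # guard against rounding
    return math.acos(dot)

# Two perpendicular faces meet at an angle of pi/2, so their common
# edge survives any minFaceAngle up to 90 degrees:
print(edge_angle((0.0, 0.0, 1.0), (1.0, 0.0, 0.0)) >= math.radians(90))
```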
Parameter
The interior camera parameters camParam describe the projection characteristics of the camera (see
WriteCamPar). The pose describes the position and orientation of the world coordinate system with respect to
the camera coordinate system.
The parameter hiddenSurfaceRemoval can be used to switch on or to switch off the removal of hidden
surfaces. If hiddenSurfaceRemoval is set to ’true’, only those projected edges are returned that are not
hidden by faces of the 3D object model. If hiddenSurfaceRemoval is set to ’false’, all projected edges are
returned. This is faster than a projection with hiddenSurfaceRemoval set to ’true’.
3D edges are only projected if the angle between the two 3D faces that are incident with the 3D edge is at least
minFaceAngle. If minFaceAngle is set to 0.0, all edges are projected. If minFaceAngle is set to π
(equivalent to 180 degrees), only the silhouette of the 3D object model is returned. This parameter can be used to
suppress edges within curved surfaces, e.g., the surface of a cylinder.
ProjectShapeModel3d and ProjectObjectModel3d return the same result if the 3D object model that
was used to create the 3D shape model is passed to ProjectObjectModel3d.
ProjectShapeModel3d is especially useful for visualizing the matches that are returned by
FindShapeModel3d if the underlying 3D object model is no longer available.
Parameter
• POLYLINE
– Polyface meshes
• 3DFACE
• LINE
• CIRCLE
• ARC
• ELLIPSE
• SOLID
• BLOCK
• INSERT
Two-dimensional elements like the DXF entities CIRCLE or ELLIPSE are interpreted as faces even if they
are not extruded. If necessary, they are closed. Two-dimensional linear elements that consist of just two points are
not used because they do not define a face. Thus, elements of the type LINE are only used if they are extruded.
The curved surface of extruded DXF entities of the type CIRCLE, ARC, and ELLIPSE is approximated by planar
faces. The accuracy of this approximation can be controlled with the two generic parameters ’min_num_points’
and ’max_approx_error’. The parameter ’min_num_points’ defines the minimum number of sampling points
that are used for the approximation of the DXF element CIRCLE, ARC, or ELLIPSE. Note that the parameter
’min_num_points’ always refers to the full circle or ellipse, respectively, even for ARCs or elliptical arcs, i.e., if
’min_num_points’ is set to 50 and a DXF entity of the type ARC is read that represents a semi-circle, this semi-
circle is approximated by at least 25 sampling points. The parameter ’max_approx_error’ defines the maximum
deviation of the XLD contour from the ideal circle or ellipse, respectively. The determination of this deviation
is carried out in the units used in the DXF file. For the determination of the accuracy of the approximation both
criteria are evaluated. Then, the criterion that leads to the more accurate approximation is used.
Internally, the following default values are used for the generic parameters:
’min_num_points’ = 20
’max_approx_error’ = 0.25
To achieve a more accurate approximation, either the value for ’min_num_points’ must be increased or the value
for ’max_approx_error’ must be decreased.
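The interaction of the two criteria can be sketched as follows (Python for brevity; not HALCON code, and the exact internal formula is an assumption based on the description above). For a circle of radius r approximated by a regular n-gon, the maximum deviation of a chord from the circle (the sagitta) is r · (1 − cos(π/n)); the criterion that demands more sampling points wins:

```python
import math

# Sketch of the two approximation criteria (not HALCON code). The sagitta
# formula and its inversion are an illustrative assumption.
def circle_sampling_points(radius, min_num_points=20, max_approx_error=0.25):
    if max_approx_error >= radius:
        n_error = 3                          # any coarse polygon already meets the bound
    else:
        n_error = math.ceil(math.pi / math.acos(1.0 - max_approx_error / radius))
    return max(min_num_points, n_error)

# 'min_num_points' always refers to the full circle, so an ARC spanning
# arc_angle radians gets the proportional share of the sampling points.
def arc_sampling_points(radius, arc_angle, min_num_points=20, max_approx_error=0.25):
    n_full = circle_sampling_points(radius, min_num_points, max_approx_error)
    return max(2, math.ceil(n_full * arc_angle / (2.0 * math.pi)))
```

For a large circle the 'max_approx_error' criterion dominates (e.g., radius 100 with the default 0.25 requires 45 points), while for a small circle the 'min_num_points' criterion dominates, mirroring the semicircle example in the text.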
One possible way to create a suitable DXF file is to create a 3D model of the object with the CAD program
AutoCAD. Ensure that the surface of the object is modelled, not only its edges. Lines that, e.g., define object
edges, will not be used by HALCON, because they do not define the surface of the object. Once the modelling is
completed, you can store the model in DWG format. To convert the DWG file into a DXF file that is suitable for
HALCON’s 3D matching, carry out the following steps:
• Export the 3D CAD model to a 3DS file using the 3dsout command of AutoCAD. This will triangulate the
object’s surface, i.e., the model will only consist of planes. (Users of AutoCAD 2007 or newer versions can
download this command utility from Autodesk’s web site.)
• Open a new empty sheet in AutoCAD.
• Import the 3DS file into this empty sheet with the 3dsin command of AutoCAD.
• Save the object into a DXF R12 file.
Users of other CAD programs should ensure that the surface of the 3D model is triangulated before it is exported
into the DXF file. If the CAD program is not able to carry out the triangulation, it is often possible to save the 3D
model in the proprietary format of the CAD program and to convert it into a suitable DXF file by using a CAD file
format converter that is able to perform the triangulation.
Parameter
. fileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; HTuple (string)
Name of the DXF file
. scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (string / int / long / double)
Scale or unit.
Default Value : "m"
Suggested values : Scale ∈ {"m", "cm", "mm", "microns", "µm", 1.0, 0.01, 0.001, "1.0e-6", 0.0254, 0.3048,
0.9144}
. genParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; HTuple (string)
Names of the generic parameters that can be adjusted for the DXF input.
Default Value : []
List of values : GenParamNames ∈ {"min_num_points", "max_approx_error"}
. genParamValues (input_control) . . . . . . . . . . . . . . . number(-array) ; HTuple (double / int / long / string)
Values of the generic parameters that can be adjusted for the DXF input.
Default Value : []
Suggested values : GenParamValues ∈ {0.1, 0.25, 0.5, 1, 2, 5, 10, 20}
. objectModel3DID (output_control) . . . . . . . object_model_3d ; HObjectModel3D / HTuple (IntPtr)
Handle of the read 3D object model.
. dxfStatus (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; HTuple (string)
Status information.
Result
ReadObjectModel3dDxf returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception
is raised.
Parallelization Information
ReadObjectModel3dDxf is processed completely exclusively without parallelization.
Possible Successors
AffineTransObjectModel3d, ProjectObjectModel3d
Module
3D Metrology
The operator ReadShapeModel3d reads a 3D shape model, which has been written with
WriteShapeModel3d, from the file fileName.
Parameter
Transform a pose that refers to the coordinate system of a 3D object model to a pose that refers to the reference
coordinate system of a 3D shape model and vice versa.
The operator TransPoseShapeModel3d transforms the pose poseIn into the pose poseOut by using the
transformation direction specified in transformation. In the majority of cases, the operator will be used to
transform a camera pose that is given with respect to the source coordinate system to a camera pose that refers to
the target coordinate system.
The pose can be transformed between two coordinate systems. The first coordinate system is the reference coordi-
nate system of the 3D shape model that is passed in shapeModel3DID. The origin of the reference coordinate
system lies at the reference point of the underlying 3D object model. The orientation of the reference coordi-
nate system is determined by the reference orientation that was specified when creating the 3D shape model with
CreateShapeModel3d.
The second coordinate system is the world coordinate system, i.e., the coordinate system of the 3D object model
that underlies the 3D shape model. This coordinate system is implicitly determined by the coordinates that are
stored in the DXF file that was read by using ReadObjectModel3dDxf.
If transformation is set to ’ref_to_model’, it is assumed that poseIn refers to the reference coordinate
system of the 3D shape model. The resulting output pose poseOut in this case refers to the coordinate system of
the 3D object model.
If transformation is set to ’model_to_ref’, it is assumed that poseIn refers to the coordinate system of the
3D object model. The resulting output pose poseOut in this case refers to the reference coordinate system of the
3D shape model.
The relative pose of the two coordinate systems can be queried by passing ’reference_pose’ for genParamNames
in the operator GetShapeModel3dParams.
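The two transformation directions can be sketched with 4×4 homogeneous matrices (Python for brevity; not HALCON code — HALCON poses are 7-tuples, and the composition convention shown here is an assumption). The 'reference_pose' queried via GetShapeModel3dParams plays the role of the relative pose of the two coordinate systems:

```python
# Sketch of TransPoseShapeModel3d with poses as 4x4 homogeneous matrices
# (not HALCON code; the composition convention is an assumption).
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rigid_inverse(m):
    # inverse of a rigid transform: transpose the rotation, re-rotate the translation
    r = [[m[j][i] for j in range(3)] for i in range(3)]
    t = [-sum(r[i][j] * m[j][3] for j in range(3)) for i in range(3)]
    return [r[0] + [t[0]], r[1] + [t[1]], r[2] + [t[2]], [0.0, 0.0, 0.0, 1.0]]

def trans_pose(reference_pose, pose_in, transformation):
    if transformation == 'ref_to_model':
        return mat_mul(reference_pose, pose_in)
    if transformation == 'model_to_ref':
        return mat_mul(rigid_inverse(reference_pose), pose_in)
    raise ValueError("transformation must be 'ref_to_model' or 'model_to_ref'")
```

Transforming a pose in one direction and back in the other recovers the original pose, which is the invariant the two parameter values guarantee.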
Parameter
Morphology
9.1 Gray-Values
A rank filtering is calculated according to the following scheme: The given mask is moved over the image to be
filtered in such a way that the center of the mask touches every pixel once. For each of these pixels, all neighboring
pixels covered by the mask are sorted in ascending order of their gray values. Each sorted sequence contains as
many gray values as the mask has points. The element at the rank selected by modePercent (a percentile between
0 and 100) is taken as the result gray value at the corresponding position in the result image.
If modePercent is 0, the operator is equivalent to a gray value opening (GrayOpening). If modePercent
is 50, the operator corresponds to a twice-applied median filter (MedianImage). With modePercent set to
100, DualRank calculates a gray value closing (GrayClosing). Parameter values between these extremes
produce a smooth transition between these operators.
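The rank selection can be sketched in 1-D with mirrored borders (Python for brevity; not HALCON code — DualRank additionally chains two such rank passes with a dual rank, which is omitted here):

```python
# 1-D rank filter sketch with mirrored borders (not HALCON code).
# mode_percent 0 selects the minimum (erosion-like), 50 the median,
# 100 the maximum (dilation-like).
def rank_filter(values, mask_size, mode_percent):
    half = mask_size // 2
    padded = values[half:0:-1] + values + values[-2:-half - 2:-1]  # mirror borders
    rank = round(mode_percent / 100.0 * (mask_size - 1))
    return [sorted(padded[i:i + mask_size])[rank] for i in range(len(values))]
```

Running the same input through the three extreme settings shows the smooth transition from minimum over median to maximum selection.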
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Image to be filtered.
. imageRank (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . multichannel-image(-array) ; HImage
Filtered Image.
read_image(Image,’fabrik’)
dual_rank(Image,ImageOpening,’circle’,10,10,’mirrored’)
disp_image(ImageOpening,WindowHandle).
Complexity
For each pixel: O(√F · 10) with F = area of the structuring element.
Result
If the parameter values are correct the operator DualRank returns the value 2 (H_MSG_TRUE). The
behavior in case of empty input (no input images available) is set via the operator SetSystem
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
DualRank is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
ReadImage
Possible Successors
Threshold, DynThreshold, SubImage, Regiongrowing
Alternatives
RankImage, GrayClosing, GrayOpening, MedianImage
See also
GenCircle, GenRectangle1, GrayErosionRect, GrayDilationRect, SigmaImage
References
W. Eckstein, O. Munkelt “Extracting Objects from Digital Terrain Model” Remote Sensing and Reconstruction for
Threedimensional Objects and Scenes, SPIE Symposium on Optical Science, Engineering, and Instrumentation,
July 1995, San Diego
Module
Foundation
bothat(i, s) = (i • s) − i,
i.e., the difference of the closing of the image with s and the image (see GrayClosing). For the generation of
structuring elements, see ReadGraySe.
Parameter
i • s = (i ⊕ s) ⊖ s ,
i.e., a dilation of the image with s followed by an erosion with s (see GrayDilation and GrayErosion).
For the generation of structuring elements, see ReadGraySe.
Parameter
Parallelization Information
GrayClosing is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
ReadGraySe
Alternatives
DualRank
See also
Closing, GrayDilation, GrayErosion
Module
Foundation
i • s = (i ⊕ s) ⊖ s ,
i.e., a dilation of the image with s followed by an erosion with s (see GrayDilationRect and
GrayErosionRect).
Parameter
Alternatives
GrayClosing, GrayClosingShape
See also
ClosingRectangle1, GrayDilationRect, GrayErosionRect
Module
Foundation
i • s = (i ⊕ s) ⊖ s ,
i.e., a dilation of the image with s followed by an erosion with s (see GrayDilationShape and
GrayErosionShape).
Attention
Note that GrayClosingShape requires considerably more time for mask sizes of type float than for mask sizes
of type integer. This is especially true for rectangular masks with different width and height!
Parameter
Here, S is the domain of the structuring element s, i.e., the pixels z where s(z) > 0 (see ReadGraySe).
Parameter
If the parameters maskHeight or maskWidth are of the type integer and are even, they are changed to the next
larger odd value. In contrast, if at least one of the two parameters is of the type float, the input image image is
transformed with both the next larger and the next smaller odd mask size, and the output image imageMax is
interpolated from the two intermediate images. Therefore, note that GrayDilationShape returns different
results for mask sizes of, e.g., 4 and 4.0!
In case of the values ’rhombus’ and ’octagon’ for the maskShape control parameter, maskHeight and
maskWidth must be equal. The parameter value ’octagon’ for maskShape denotes an equilateral octagonal
mask which is a suitable approximation for a circular structure. At the border of the image the gray values are
mirrored.
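The distinction between integer and float mask sizes can be sketched for a 1-D maximum filter (Python for brevity; not HALCON code — the linear interpolation weight is an assumption based on the description above):

```python
# Sketch of the integer/float mask-size behavior for a 1-D maximum filter
# (not HALCON code; the interpolation weight is an assumption).
def max_filter(img, width):
    half = width // 2
    padded = img[half:0:-1] + img + img[-2:-half - 2:-1]   # mirrored borders
    return [max(padded[i:i + width]) for i in range(len(img))]

def gray_dilation_sized(img, size):
    if isinstance(size, int):
        # even integer sizes are changed to the next larger odd value
        return [float(v) for v in max_filter(img, size + 1 if size % 2 == 0 else size)]
    # float sizes: filter with the enclosing odd sizes and interpolate
    lo = int(size) if int(size) % 2 == 1 else int(size) - 1
    lo = max(lo, 1)
    w = (size - lo) / 2.0
    a, b = max_filter(img, lo), max_filter(img, lo + 2)
    return [(1.0 - w) * x + w * y for x, y in zip(a, b)]
```

The sketch reproduces the stated effect: an integer size of 4 is rounded up to 5, while a float size of 4.0 interpolates between the results for sizes 3 and 5, so the two calls return different results.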
Attention
Note that GrayDilationShape requires considerably more time for mask sizes of type float than for mask
sizes of type integer. This is especially true for rectangular masks with different width and height!
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Image for which the maximum gray values are to be calculated.
. imageMax (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Image containing the maximum gray values.
. maskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; HTuple (double / int / long)
Height of the filter mask.
Default Value : 11
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 1.0 ≤ MaskHeight
. maskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; HTuple (double / int / long)
Width of the filter mask.
Default Value : 11
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 1.0 ≤ MaskWidth
. maskShape (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Shape of the mask.
Default Value : "octagon"
List of values : MaskShape ∈ {"rectangle", "rhombus", "octagon"}
Result
GrayDilationShape returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
GrayDilationShape is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
GrayDilation, GrayDilationRect
See also
GrayOpeningShape, GrayClosingShape, GraySkeleton
Module
Foundation
Here, S is the domain of the structuring element s, i.e., the pixels z where s(z) > 0 (see ReadGraySe).
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Input image.
. SE (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Structuring element.
. imageErosion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Gray-eroded image.
Result
GrayErosion returns 2 (H_MSG_TRUE) if the structuring element is not the empty region. Otherwise, an
exception is raised.
Parallelization Information
GrayErosion is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
ReadGraySe
Possible Successors
GrayDilation, SubImage
Alternatives
GrayErosionRect
See also
GrayOpening, GrayClosing, Erosion1, GraySkeleton
Module
Foundation
Result
GrayErosionRect returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior
can be set via SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
GrayErosionRect is reentrant and automatically parallelized (on tuple level, channel level, domain level).
See also
GrayDilationRect
Module
Foundation
Result
GrayErosionShape returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
GrayErosionShape is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
GrayErosion, GrayErosionRect
See also
GrayOpeningShape, GrayClosingShape, GraySkeleton
Module
Foundation
i ◦ s = (i ⊖ s) ⊕ s ,
i.e., an erosion of the image with s followed by a dilation with s (see GrayErosion and GrayDilation).
For the generation of structuring elements, see ReadGraySe.
Parameter
GrayOpeningRect applies a gray value opening to the input image image with a rectangular mask of
size (maskHeight, maskWidth). The resulting image is returned in imageOpening. If the parameters
maskHeight or maskWidth are even, they are changed to the next larger odd value. At the border of the image
the gray values are mirrored.
The gray value opening of an image i with a rectangular structuring element s is defined as
i ◦ s = (i ⊖ s) ⊕ s ,
i.e., an erosion of the image with s followed by a dilation with s (see GrayErosionRect and
GrayDilationRect).
Parameter
GrayOpeningShape applies a gray value opening to the input image image with the structuring element of
shape maskShape. The mask’s offset values are 0 and its horizontal and vertical size is defined by maskHeight
and maskWidth. The resulting image is returned in imageOpening.
If the parameters maskHeight or maskWidth are of the type integer and are even, they are changed to the next
larger odd value. In contrast, if at least one of the two parameters is of the type float, the input image image is
transformed with both the next larger and the next smaller odd mask size, and the output image imageOpening
is interpolated from the two intermediate images. Therefore, note that GrayOpeningShape returns different
results for mask sizes of, e.g., 4 and 4.0!
In case of the values ’rhombus’ and ’octagon’ for the maskShape control parameter, maskHeight and
maskWidth must be equal. The parameter value ’octagon’ for maskShape denotes an equilateral octagonal
mask which is a suitable approximation for a circular structure. At the border of the image the gray values are
mirrored.
The gray value opening of an image i with a structuring element s is defined as
i ◦ s = (i ⊖ s) ⊕ s ,
i.e., an erosion of the image with s followed by a dilation with s (see GrayErosionShape and
GrayDilationShape).
Attention
Note that GrayOpeningShape requires considerably more time for mask sizes of type float than for mask sizes
of type integer. This is especially true for rectangular masks with different width and height!
Parameter
tophat(i, s) = i − (i ◦ s),
i.e., the difference of the image and its opening with s (see GrayOpening). For the generation of structuring
elements, see ReadGraySe.
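The top hat defined above is the dual of the bottom hat: it extracts narrow bright structures that the opening removes. A 1-D sketch with a flat structuring element and mirrored borders (Python for brevity; not HALCON code):

```python
# 1-D sketch of gray value opening and top hat with a flat structuring
# element and mirrored borders (not HALCON code).
def gray_erosion(img, width):
    half = width // 2
    padded = img[half:0:-1] + img + img[-2:-half - 2:-1]
    return [min(padded[i:i + width]) for i in range(len(img))]

def gray_dilation(img, width):
    half = width // 2
    padded = img[half:0:-1] + img + img[-2:-half - 2:-1]
    return [max(padded[i:i + width]) for i in range(len(img))]

def gray_opening(img, width):
    # opening: erosion followed by dilation with the same element
    return gray_dilation(gray_erosion(img, width), width)

def gray_tophat(img, width):
    # tophat(i, s) = i - (i opening s): extracts narrow bright structures
    return [v - o for v, o in zip(img, gray_opening(img, width))]
```

A single bright peak narrower than the mask is removed by the opening, so the top hat returns exactly that peak.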
Parameter
Alternatives
GenDiscSe
See also
ReadImage, PaintRegion, PaintGray, CropPart
Module
Foundation
9.2 Region
read_image (Image,’/bilder/name.ext’)
threshold (Image,Regions,128,255)
gen_circle (Circle,0,0,16)
bottom_hat (Regions,Circle,RegionBottomHat).
Result
BottomHat returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Alternatives
Closing, Difference
See also
TopHat, MorphHat, GrayBothat, Opening
Module
Foundation
#include "HalconCpp.h"
int main()
{
HWindow w;
HRegion circ1 = HRegion::GenCircle (20, 10, 10.5);
circ1.Display (w);
w.Click ();
return(0);
}
Complexity
Let F be the area of the input region. Then the runtime complexity for one region is
O(3 · √F) .
Result
Boundary returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
int main()
{
cout << "Reproduction of ’closing ()’ using " << endl;
cout << "’dilation()’ and ’minkowski_sub1()’" << endl;
HByteImage img("monkey");
HWindow w;
return(0);
}
Complexity
Let F 1 be the area of the input region, and F 2 be the area of the structuring element. Then the runtime complexity
for one region is:
O(2 · √F1 · √F2) .
Result
Closing returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Complexity
Let F 1 be the area of the input region. Then the runtime complexity for one region is:
O(4 · √F1 · radius) .
Result
ClosingCircle returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Possible Successors
ReduceDomain, SelectShape, AreaCenter, Connection
Alternatives
RankRegion, FillUp, Closing, ClosingCircle, ClosingGolay
See also
Dilation1, MinkowskiSub1, Erosion1, Opening
Module
Foundation
Result
ClosingGolay returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Result
ClosingRectangle1 returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty
or no input region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
For each point m in M a translation of the region R is performed. The union of all these translations is the dilation
of R with M . Dilation1 is similar to the operator MinkowskiAdd1, the difference is that in Dilation1
the structuring element is mirrored at the origin. The position of structElement is meaningless, since the
displacement vectors are determined with respect to the center of gravity of M .
The parameter iterations determines the number of iterations which are to be performed with the structuring
element. The result of iteration n − 1 is used as input for iteration n. From the above definition it follows that an
empty region is generated in case of an empty structuring element.
Structuring elements (structElement) can be generated with operators such as GenCircle,
GenRectangle1, GenRectangle2, GenEllipse, DrawRegion, GenRegionPolygon,
GenRegionPoints, etc.
Attention
A dilation always results in enlarged regions. Closely spaced regions which may touch or overlap as a result of
the dilation are still treated as two separate regions. If the desired behavior is to merge them into one region, the
operator Union1 has to be called first.
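The definition above — the union of all translations of the region, with the structuring element mirrored at its center of gravity — can be sketched with regions as sets of pixel coordinates (Python for brevity; not HALCON code, and rounding the center of gravity to a pixel is an assumption):

```python
# Sketch of Dilation1 with regions as sets of (row, col) pixels
# (not HALCON code).
def dilation1(region, struct_elem, iterations=1):
    if not struct_elem:
        return set()                       # empty structuring element -> empty region
    cr = round(sum(r for r, _ in struct_elem) / len(struct_elem))
    cc = round(sum(c for _, c in struct_elem) / len(struct_elem))
    disp = [(cr - r, cc - c) for r, c in struct_elem]   # mirrored displacements
    out = set(region)
    for _ in range(iterations):
        out = {(r + dr, c + dc) for r, c in out for dr, dc in disp}
    return out
```

Each iteration feeds the result of the previous one back in, so a single pixel dilated twice with a 1×3 element grows into a 1×5 run.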
Parameter
. region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions to be dilated.
. structElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; HRegion
Structuring element.
. regionDilation (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Dilated regions.
. iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
Complexity
Let F 1 be the area of the input region, and F 2 be the area of the structuring element. Then the runtime complexity
for one region is:
O(√F1 · √F2 · iterations) .
Result
Dilation1 returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
which are to be performed with the structuring element. The result of iteration n − 1 is used as input for iteration
n.
An empty region is generated in case of an empty structuring element.
Structuring elements (structElement) can be generated with operators such as GenCircle,
GenRectangle1, GenRectangle2, GenEllipse, DrawRegion, GenRegionPolygon,
GenRegionPoints, etc.
Attention
A dilation always results in enlarged regions. Closely spaced regions which may touch or overlap as a result of
the dilation are still treated as two separate regions. If the desired behavior is to merge them into one region, the
operator Union1 has to be called first.
Parameter
Result
Dilation2 returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
See also
Erosion1, Erosion2, Opening, Closing
Module
Foundation
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
int main()
{
cout << "Reproduction of ’dilation_circle ()’" << endl;
cout << "First = original image " << endl;
cout << "Blue = after dilation " << endl;
cout << "Red = before dilation " << endl;
HByteImage img("monkey");
HWindow w;
return(0);
}
Complexity
Let F 1 be the area of an input region. Then the runtime complexity for one region is:
O(2 · radius · √F1) .
Result
DilationCircle returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Attention
Not all values of rotation are valid for any Golay element. For some of the values of rotation, the resulting
regions are identical to the input regions.
Parameter
. region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions to be dilated.
. regionDilation (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Dilated regions.
. golayElement (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Structuring element from the Golay alphabet.
Default Value : "h"
List of values : GolayElement ∈ {"l", "m", "d", "c", "e", "i", "f", "f2", "h", "k"}
. iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
. rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Rotation of the Golay element. Depending on the element, not all rotations are valid.
Default Value : 0
List of values : Rotation ∈ {0, 2, 4, 6, 8, 10, 12, 14, 1, 3, 5, 7, 9, 11, 13, 15}
Complexity
Let F be the area of an input region. Then the runtime complexity for one region is:
O(3 · √F) .
Result
DilationGolay returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Otherwise, an exception is raised.
Parallelization Information
DilationGolay is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Regiongrowing, Connection, Union1, Watersheds, ClassNdimNorm
Possible Successors
ReduceDomain, SelectShape, AreaCenter, Connection
Alternatives
Dilation1, Dilation2, DilationSeq
See also
ErosionGolay, OpeningGolay, ClosingGolay, HitOrMissGolay, ThinningGolay,
ThickeningGolay, GolayElements
Module
Foundation
DilationRectangle1 applies a dilation with a rectangular structuring element to the input regions region.
The size of the structuring rectangle is width × height. The operator results in enlarged regions, and the holes
smaller than the rectangular mask in the interior of the regions are closed.
DilationRectangle1 is a very fast operation because the height of the rectangle enters only logarithmically
into the runtime complexity, while the width does not enter at all. This leads to excellent runtime efficiency, even
in the case of very large rectangles (edge length > 100).
Attention
DilationRectangle1 is applied to each input region separately. If gaps between different regions are to be
closed, Union1 or Union2 has to be called first.
To enlarge a region by the same amount in all directions, width and height must be odd. If this is not the case,
the region is dilated by a larger amount at the right or at the bottom, respectively, than at the left or at the top.
Parameter
. region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions to be dilated.
. regionDilation (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Dilated regions.
. width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; HTuple (int / long)
Width of the structuring rectangle.
Default Value : 11
Suggested values : Width ∈ {1, 2, 3, 4, 5, 11, 15, 21, 31, 51, 71, 101, 151, 201}
Typical range of values : 1 ≤ Width ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
. height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; HTuple (int / long)
Height of the structuring rectangle.
Default Value : 11
Suggested values : Height ∈ {1, 2, 3, 4, 5, 11, 15, 21, 31, 51, 71, 101, 151, 201}
Typical range of values : 1 ≤ Height ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
Example (Syntax: C++)
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
int main()
{
cout << "Reproduction of ’dilation_rectangle ()’" << endl;
cout << "First = original image " << endl;
cout << "Blue = after dilation " << endl;
cout << "Red = after segmentation " << endl;
HByteImage img("monkey");
HWindow w;
return(0);
}
Complexity
Let F 1 be the area of an input region and H be the height of the rectangle. Then the runtime complexity for one
region is:
O(√F1 · ld(H)) .
Result
DilationRectangle1 returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty
or no input region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Result
DilationSeq returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
For each point m in M a translation of the region R is performed. The intersection of all these translations is
the erosion of R with M . Erosion1 is similar to the operator MinkowskiSub1, the difference is that in
Erosion1 the structuring element is mirrored at the origin. The position of structElement is meaningless,
since the displacement vectors are determined with respect to the center of gravity of M .
The parameter iterations determines the number of iterations which are to be performed with the structuring
element. The result of iteration n − 1 is used as input for iteration n. From the above definition it follows that the
maximum region is generated in case of an empty structuring element.
Structuring elements (structElement) can be generated with operators such as GenCircle,
GenRectangle1, GenRectangle2, GenEllipse, DrawRegion, GenRegionPolygon,
GenRegionPoints, etc.
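The erosion definition above — the intersection of all translations of the region, with the structuring element mirrored at its center of gravity — can be sketched with regions as sets of pixel coordinates (Python for brevity; not HALCON code, and rounding the center of gravity to a pixel is an assumption; a symmetric element is used in the usage example, so the mirroring direction does not matter there):

```python
# Sketch of Erosion1 with regions as sets of (row, col) pixels
# (not HALCON code). A pixel survives if every mirrored displacement of
# the structuring element stays inside the region.
def erosion1(region, struct_elem, iterations=1):
    cr = round(sum(r for r, _ in struct_elem) / len(struct_elem))
    cc = round(sum(c for _, c in struct_elem) / len(struct_elem))
    disp = [(cr - r, cc - c) for r, c in struct_elem]
    out = set(region)
    for _ in range(iterations):
        out = {(r, c) for r, c in out
               if all((r + dr, c + dc) in out for dr, dc in disp)}
    return out
```

Eroding a 1×5 run with a 1×3 element strips one pixel from each end; a second iteration strips one more.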
Parameter
Result
Erosion1 returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Result
Erosion2 returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"

int main()
{
  cout << "Simulation of 'erosion_circle()'" << endl;
  HByteImage img("monkey");
  HWindow w;
  return 0;
}
Complexity
Let F1 be the area of an input region. Then the runtime complexity for one region is:

O(2 · radius · √F1) .
Result
ErosionCircle returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
The rotation number rotation determines which rotation of the element should be used, and whether the foreground
(even) or background version (odd) of the selected element should be used. The Golay elements, together
with all possible rotations, are described with the operator GolayElements. The operator works by shifting
the structuring element over the region to be processed (region). For all positions of the structuring element
fully contained in the region, the corresponding reference point (relative to the structuring element) is added to the
output region. This means that the intersection of all translations of the structuring element within the region is
computed.
The parameter iterations determines the number of iterations which are to be performed with the structuring
element. The result of iteration n − 1 is used as input for iteration n.
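The shifting scheme described above can be sketched in plain Python on pixel sets. This is illustrative only, not HALCON code, and it uses an arbitrary structuring element instead of a Golay element pair; the helper name and the default reference point are made up. A reference point is kept exactly when the shifted element is fully contained in the region:

```python
def erode_by_containment(region, struct_elem, ref=(0, 0)):
    """Shift `struct_elem` so that its reference point `ref` lands on
    each candidate pixel; keep the pixel iff the shifted element is
    fully contained in `region`."""
    out = set()
    for (pr, pc) in region:
        shifted = {(pr + mr - ref[0], pc + mc - ref[1])
                   for (mr, mc) in struct_elem}
        if shifted <= region:       # element fully inside the region
            out.add((pr, pc))
    return out
```

The containment test and the intersection-of-translations view are two descriptions of the same erosion.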
Attention
Not all values of rotation are valid for every Golay element. For some values of rotation, the resulting
regions are identical to the input regions.
Parameter
. region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions to be eroded.
. regionErosion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Eroded regions.
. golayElement (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Structuring element from the Golay alphabet.
Default Value : "h"
List of values : GolayElement ∈ {"l", "m", "d", "c", "e", "i", "f", "f2", "h", "k"}
. iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
. rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Rotation of the Golay element. Depending on the element, not all rotations are valid.
Default Value : 0
List of values : Rotation ∈ {0, 2, 4, 6, 8, 10, 12, 14, 1, 3, 5, 7, 9, 11, 13, 15}
Complexity
Let F be the area of an input region. Then the runtime complexity for one region is:

O(3 · √F) .
Result
ErosionGolay returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Module
Foundation
Result
ErosionRectangle1 returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty
or no input region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Possible Predecessors
Threshold, Regiongrowing, Watersheds, ClassNdimNorm
Possible Successors
ReduceDomain, SelectShape, AreaCenter, Connection
Alternatives
Erosion1, MinkowskiSub1
See also
GenRectangle1
Module
Foundation
Result
ErosionSeq returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Parallelization Information
ErosionSeq is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Regiongrowing, Watersheds, ClassNdimNorm
Possible Successors
Connection, ReduceDomain, SelectShape, AreaCenter
Alternatives
ErosionGolay, Erosion1, Erosion2
See also
DilationSeq, HitOrMissSeq, ThinningSeq
Module
Foundation
P = ⋃_{i=1..n} (R ◦ Mi)

Q = ⋂_{i=1..n} (P • Mi)
Regions larger than the structuring elements are preserved, while small gaps are closed.
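The two-step definition above (union of the openings, then intersection of the closings) can be sketched in plain Python on pixel sets, using naive origin-centered dilation and erosion helpers. This is a sketch, not HALCON code, and it assumes structuring elements symmetric about the origin; all names are made up:

```python
def translate(points, v):
    return {(r + v[0], c + v[1]) for (r, c) in points}

def dilate(region, elem):
    # union of the translations of `region` by the points of `elem`
    return set().union(*(translate(region, m) for m in elem))

def erode(region, elem):
    # keep p iff `elem` shifted to p is fully contained in the region
    return {p for p in region if translate(elem, p) <= region}

def opening(region, elem):
    return dilate(erode(region, elem), elem)

def closing(region, elem):
    return erode(dilate(region, elem), elem)

def fitting(region, elems):
    p = set().union(*(opening(region, m) for m in elems))
    return set.intersection(*(closing(p, m) for m in elems))
```

With a single horizontal three-pixel element, a row of pixels with a one-pixel gap keeps its long runs (larger than the element) while the gap is closed.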
Parameter
. region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions to be processed.
. structElements (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Structuring elements.
. regionFitted (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Fitted regions.
Result
Fitting returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Alternatives
Opening, Closing, Connection, SelectShape
Module
Foundation
[Diagram: the eight structuring elements M1 to M8 generated for type "noise", shown as 3×3 masks of the
characters 'x' and 'h'.]
Parameter
. structElements (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Generated structuring elements.
. type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Type of structuring element to generate.
Default Value : "noise"
List of values : Type ∈ {"noise"}
. row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; HTuple (int / long)
Row coordinate of the reference point.
Default Value : 1
Suggested values : Row ∈ {0, 1, 10, 50, 100, 200, 300, 400}
Typical range of values : −∞ ≤ Row ≤ ∞ (lin)
. column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; HTuple (int / long)
Column coordinate of the reference point.
Default Value : 1
Suggested values : Column ∈ {0, 1, 10, 50, 100, 200, 300, 400}
Typical range of values : −∞ ≤ Column ≤ ∞ (lin)
Result
GenStructElements returns 2 (H_MSG_TRUE) if all parameters are correct. Otherwise, an exception is
raised.
Parallelization Information
GenStructElements is reentrant and processed without parallelization.
Possible Successors
Fitting, HitOrMiss, Opening, Closing, Erosion2, Dilation2
See also
GolayElements
Module
Foundation
[Diagram: the Golay elements with all of their rotations: m(0,1) … m(14,15), d(0,1) … d(14,15),
f(0,1) … f(14,15), f2(0,1) … f2(14,15), k(0,1) … k(14,15), and c(0,1) … c(14,15). Each element pair is
shown as a mask of foreground points ('•'), background points ('◦'), and don't-care points ('·').]
Parameter
. structElement1 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; HRegion
Structuring element for the foreground.
. structElement2 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; HRegion
Structuring element for the background.
. golayElement (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Name of the structuring element.
Default Value : "l"
List of values : GolayElement ∈ {"l", "m", "d", "c", "e", "i", "f", "f2", "h", "k"}
. rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Rotation of the Golay element. Depending on the element, not all rotations are valid.
Default Value : 0
List of values : Rotation ∈ {0, 2, 4, 6, 8, 10, 12, 14}
. row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; HTuple (int / long)
Row coordinate of the reference point.
Default Value : 16
Suggested values : Row ∈ {0, 16, 32, 128, 256}
Typical range of values : 0 ≤ Row ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; HTuple (int / long)
Column coordinate of the reference point.
Default Value : 16
Suggested values : Column ∈ {0, 16, 32, 128, 256}
Typical range of values : 0 ≤ Column ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
Result
GolayElements returns 2 (H_MSG_TRUE) if all parameters are correct. Otherwise, an exception is raised.
Parallelization Information
GolayElements is reentrant and processed without parallelization.
Possible Successors
HitOrMiss
Alternatives
GenRegionPoints, GenStructElements, GenRegionPolygonFilled
See also
DilationGolay, ErosionGolay, OpeningGolay, ClosingGolay, HitOrMissGolay,
ThickeningGolay
References
J. Serra: "‘Image Analysis and Mathematical Morphology"’. Volume I. Academic Press, 1982
Module
Foundation
Result
HitOrMiss returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Complexity
Let F be the area of an input region. Then the runtime complexity for one region is:

O(6 · √F) .
Result
HitOrMissGolay returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Otherwise, an exception is raised.
Parallelization Information
HitOrMissGolay is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Regiongrowing, Connection, Union1, Watersheds, ClassNdimNorm
Possible Successors
ReduceDomain, SelectShape, AreaCenter, Connection
Alternatives
HitOrMissSeq, HitOrMiss
See also
ErosionGolay, DilationGolay, OpeningGolay, ClosingGolay, ThinningGolay,
ThickeningGolay, GolayElements
Module
Foundation
Result
HitOrMissSeq returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
For each point m in M a translation of the region R is performed. The union of all these translations is the
Minkowski addition of R with M. MinkowskiAdd1 is similar to the operator Dilation1; the difference
is that in Dilation1 the structuring element is mirrored at the origin. The position of structElement is
irrelevant, since the displacement vectors are determined with respect to the center of gravity of M.
The parameter iterations determines the number of iterations which are to be performed with the structuring
element. The result of iteration n − 1 is used as input for iteration n. From the above definition it follows that an
empty region is generated in case of an empty structuring element.
Structuring elements (structElement) can be generated with operators such as GenCircle,
GenRectangle1, GenRectangle2, GenEllipse, DrawRegion, GenRegionPolygon,
GenRegionPoints, etc.
Attention
A Minkowski addition always results in enlarged regions. Closely spaced regions which may touch or overlap as
a result of the dilation are still treated as two separate regions. If the desired behavior is to merge them into one
region, the operator Union1 has to be called first.
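The union-of-translations definition can be sketched in plain Python on pixel sets (a sketch, not HALCON code; the helper names are made up, and the sign convention of the displacement only matters for elements that are not symmetric about their center of gravity):

```python
def centroid(points):
    """Center of gravity of a point set, rounded to pixel coordinates."""
    n = len(points)
    return (round(sum(r for r, _ in points) / n),
            round(sum(c for _, c in points) / n))

def minkowski_add(region, struct_elem):
    """Union of the translations of `region`; displacements are taken
    relative to the centroid of `struct_elem`, so the absolute position
    of the structuring element does not influence the result."""
    cr, cc = centroid(struct_elem)
    out = set()
    for (mr, mc) in struct_elem:
        out |= {(r + mr - cr, c + mc - cc) for (r, c) in region}
    return out
```

Adding a centered cross to a single pixel, for instance, produces that pixel plus its four neighbors, regardless of where the cross sits in the plane.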
Parameter
. region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions to be dilated.
. structElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; HRegion
Structuring element.
. regionMinkAdd (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Dilated regions.
. iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
Complexity
Let F1 be the area of the input region and F2 the area of the structuring element. Then the runtime complexity
for one region is:

O(√F1 · √F2 · iterations) .
Result
MinkowskiAdd1 returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
The parameter iterations determines the number of iterations which are to be performed with the structuring
element. The result of iteration n − 1 is used as input for iteration n.
An empty region is generated in case of an empty structuring element.
Structuring elements (structElement) can be generated with operators such as GenCircle,
GenRectangle1, GenRectangle2, GenEllipse, DrawRegion, GenRegionPolygon,
GenRegionPoints, etc.
Attention
A Minkowski addition always results in enlarged regions. Closely spaced regions which may touch or overlap as
a result of the dilation are still treated as two separate regions. If the desired behavior is to merge them into one
region, the operator Union1 has to be called first.
Parameter
Result
MinkowskiAdd2 returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Possible Successors
ReduceDomain, SelectShape, AreaCenter, Connection
Alternatives
MinkowskiAdd1, Dilation1
See also
TransposeRegion
Module
Foundation
Erode a region.
MinkowskiSub1 computes the Minkowski subtraction of the input regions with a structuring element. By
applying MinkowskiSub1 to a region, its boundary gets smoothed. In the process, the area of the region is
reduced. Furthermore, connected regions may be split. Such regions, however, remain logically one region. The
Minkowski subtraction is a set-theoretic region operation. It uses the intersection operation.
Let M (structElement) and R (region) be two regions, where M is the structuring element and R is the
region to be processed. Furthermore, let m be a point in M. Then the displacement vector v_m = (dx, dy) is
defined as the difference of the center of gravity of M and the vector m. Let t_{v_m}(R) denote the translation of a
region R by the vector v_m. Then

MinkowskiSub1(R, M) := ⋂_{m ∈ M} t_{v_m}(R)
For each point m in M a translation of the region R is performed. The intersection of all these translations is the
Minkowski subtraction of R with M. MinkowskiSub1 is similar to the operator Erosion1; the difference
is that in Erosion1 the structuring element is mirrored at the origin. The position of structElement is
irrelevant, since the displacement vectors are determined with respect to the center of gravity of M.
The parameter iterations determines the number of iterations which are to be performed with the structuring
element. The result of iteration n − 1 is used as input for iteration n. From the above definition it follows that the
maximum region is generated in case of an empty structuring element.
Structuring elements (structElement) can be generated with operators such as GenCircle,
GenRectangle1, GenRectangle2, GenEllipse, DrawRegion, GenRegionPolygon,
GenRegionPoints, etc.
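The relationship to Erosion1 stated above can be made concrete in plain Python on pixel sets (a sketch, not HALCON code; all names are made up): mirroring the structuring element at its center of gravity turns the Minkowski subtraction into the erosion, and for an asymmetric element the two results differ.

```python
def centroid(points):
    """Center of gravity of a point set, rounded to pixel coordinates."""
    n = len(points)
    return (round(sum(r for r, _ in points) / n),
            round(sum(c for _, c in points) / n))

def minkowski_sub(region, struct_elem):
    """Intersection of the translations of `region` by
    v_m = centroid(struct_elem) - m."""
    cr, cc = centroid(struct_elem)
    out = None
    for (mr, mc) in struct_elem:
        t = {(r + cr - mr, c + cc - mc) for (r, c) in region}
        out = t if out is None else out & t
    return out if out is not None else set()

def erosion1(region, struct_elem):
    """Minkowski subtraction with the element mirrored at its
    (rounded) center of gravity."""
    cr, cc = centroid(struct_elem)
    mirrored = {(2 * cr - mr, 2 * cc - mc) for (mr, mc) in struct_elem}
    return minkowski_sub(region, mirrored)
```

For an element symmetric about its center of gravity the two operations agree, which is why the distinction is often glossed over.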
Parameter
. region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions to be eroded.
. structElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; HRegion
Structuring element.
. regionMinkSub (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Eroded regions.
. iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
Complexity
Let F1 be the area of the input region and F2 the area of the structuring element. Then the runtime complexity
for one region is:

O(√F1 · √F2 · iterations) .
Result
MinkowskiSub1 returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Result
MinkowskiSub2 returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
MorphHat computes the union of the regions that are removed by an Opening operation with the regions that
are added by a Closing operation. Hence this is the union of the results of TopHat and BottomHat. The
position of structElement does not influence the result.
Structuring elements (structElement) can be generated with operators such as GenCircle,
GenRectangle1, GenRectangle2, GenEllipse, DrawRegion, GenRegionPolygon,
GenRegionPoints, etc.
Attention
The individual regions are processed separately.
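The composition described above (pixels removed by the opening united with pixels added by the closing) can be sketched in plain Python on pixel sets. This is illustrative only, not HALCON code; it assumes a structuring element symmetric about the origin, and the helper names are made up:

```python
def translate(points, v):
    return {(r + v[0], c + v[1]) for (r, c) in points}

def dilate(region, elem):
    # union of the translations of `region` by the points of `elem`
    return set().union(*(translate(region, m) for m in elem))

def erode(region, elem):
    # keep p iff `elem` shifted to p is fully contained in the region
    return {p for p in region if translate(elem, p) <= region}

def morph_hat(region, elem):
    """Union of the pixels removed by an opening (top hat) and the
    pixels added by a closing (bottom hat)."""
    opened = dilate(erode(region, elem), elem)
    closed = erode(dilate(region, elem), elem)
    return (region - opened) | (closed - region)
```

With a horizontal three-pixel element, an isolated pixel is picked up by the top hat while one-pixel gaps between runs are picked up by the bottom hat.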
Parameter
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"

int main()
{
  cout << "Reproduction of 'dilation_circle()'" << endl;
  cout << "First = original image " << endl;
  cout << "Red   = after segmentation " << endl;
  cout << "Blue  = after erosion " << endl;
  HByteImage img("monkey");
  HWindow w;
  return 0;
}
Result
MorphHat returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Parallelization Information
MorphHat is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Regiongrowing, Connection, Union1, Watersheds, ClassNdimNorm,
GenCircle, GenEllipse, GenRectangle1, GenRectangle2, DrawRegion,
GenRegionPoints, GenStructElements, GenRegionPolygonFilled
Possible Successors
ReduceDomain, SelectShape, AreaCenter, Connection
Alternatives
TopHat, BottomHat, Union2
See also
Opening, Closing
Module
Foundation
HRegion HRegion.MorphSkeleton ( )
Compute the morphological skeleton of a region.
MorphSkeleton computes the skeleton of the input regions (region) using morphological transformations.
The computation yields a disconnected skeleton (gaps in the diagonals) having a width of one or two pixels. The
calculation uses the Golay element ’h’, i.e., an 8-neighborhood. This is equivalent to the maximum-norm.
Parameter
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Result
MorphSkiz returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Alternatives
Skeleton, ThinningSeq, MorphSkeleton, Interjacent
See also
Thinning, HitOrMissSeq, Difference
Module
Foundation
Complexity
Let F1 be the area of the input region and F2 the area of the structuring element. Then the runtime complexity
for one region is:

O(2 · √F1 · √F2) .
Result
Opening returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Complexity
Let F1 be the area of the input region. Then the runtime complexity for one region is:

O(4 · √F1 · radius) .
Result
OpeningCircle returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Result
OpeningGolay returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Result
OpeningRectangle1 returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty
or no input region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Parameter
. region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions to be opened.
. structElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; HRegion
Structuring element (position-invariant).
. regionOpening (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; HRegion
Opened regions.
Example (Syntax: HDevelop)
/* Simulation of opening_seg */
opening_seg(Region,StructElement,RegionOpening):
  erosion1(Region,StructElement,H1,1) >
  connection(H1,H2) >
  dilation1(H2,StructElement,RegionOpening,1) >
  clear_obj([H1,H2]).
Complexity
Let F1 be the area of the input region and F2 the area of the structuring element. Then the runtime complexity
for one region is:

O(√F1 · √F2 · √F1) .
Result
OpeningSeg returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Parameter
. region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions to be processed.
. regionPrune (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .region(-array) ; HRegion
Result of the pruning operation.
. length (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Length of the branches to be removed.
Default Value : 2
Suggested values : Length ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 1 ≤ Length ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
Complexity
Let F be the area of the input region. Then the runtime complexity for one region is:

O(length · 3 · √F) .
Result
Pruning returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Otherwise, an exception is raised.
Parallelization Information
Pruning is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
MorphSkiz, Skeleton, ThinningSeq
Possible Successors
ReduceDomain, SelectShape, AreaCenter, Connection
See also
MorphSkeleton, JunctionsSkeleton
Module
Foundation
Parameter
. region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions to be processed.
. structElement1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; HRegion
Structuring element for the foreground.
. structElement2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; HRegion
Structuring element for the background.
. regionThick (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .region(-array) ; HRegion
Result of the thickening operator.
. row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; HTuple (int / long)
Row coordinate of the reference point.
Default Value : 16
Suggested values : Row ∈ {0, 2, 4, 8, 16, 32, 128}
Typical range of values : 0 ≤ Row ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; HTuple (int / long)
Column coordinate of the reference point.
Default Value : 16
Suggested values : Column ∈ {0, 2, 4, 8, 16, 32, 128}
Typical range of values : 0 ≤ Column ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50, 70, 100, 200, 400}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
Complexity
Let F be the area of an input region, F1 the area of structuring element 1, and F2 the area of structuring
element 2. Then the runtime complexity for one object is:

O(iterations · √F · (√F1 + √F2)) .
Result
Thickening returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Module
Foundation
Result
ThickeningGolay returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or
no input region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Result
ThickeningSeq returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Result
Thinning returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Result
ThinningGolay returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
’l’ Skeleton, similar to Skeleton. This structuring element is also used in MorphSkiz.
’m’ A skeleton with many “hairs” and multiple (parallel) branches.
’d’ A skeleton without multiple branches, but with many gaps, similar to MorphSkeleton.
’c’ Uniform erosion of the region.
’e’ One pixel wide lines are shortened. This structuring element is also used in MorphSkiz.
’i’ Isolated points are removed. (Only iterations = 1 is useful.)
’f’ Y-junctions are eliminated. (Only iterations = 1 is useful.)
’f2’ One pixel long branches and corners are removed. (Only iterations = 1 is useful.)
’h’ A kind of inner boundary, which, however, is thicker than the result of Boundary, is generated. (Only
iterations = 1 is useful.)
’k’ Junction points are eliminated, but also new ones are generated.
The Golay elements, together with all possible rotations, are described with the operator GolayElements.
Parameter
. region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions to be processed.
. regionThin (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Result of the thinning operator.
. golayElement (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Structuring element from the Golay alphabet.
Default Value : "l"
List of values : GolayElement ∈ {"l", "m", "d", "c", "e", "i", "f", "f2", "h", "k"}
Result
ThinningSeq returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Parameter
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
OCR
10.1 Hyperboxes
Parallelization Information
CloseOcr is reentrant and processed without parallelization.
Possible Predecessors
WriteOcrTrainf
Possible Successors
ReadOcr
Module
OCR/OCV
Parameter
Classify characters.
The operator DoOcrMulti assigns a class to every character (character). For gray value features, the gray values from the surrounding rectangles of the regions are used; they are taken from the parameter image. For each character the corresponding class is returned in classVal, together with a confidence value in confidence. The confidence value indicates the similarity between the input pattern and the assigned character.
Parameter
Possible Successors
WriteOcr
Module
OCR/OCV
Result
If the parameters are correct, the operator OcrGetFeatures returns the value 2 (H_MSG_TRUE). Otherwise
an exception will be raised.
Parallelization Information
OcrGetFeatures is reentrant and processed without parallelization.
Possible Predecessors
CreateOcrClassBox, ReadOcr, ReduceDomain, Threshold, Connection
Possible Successors
LearnClassBox
See also
TrainfOcrClassBox, TraindOcrClassBox
Module
OCR/OCV
The operator TrainfOcrClassBox trains the classifier ocrHandle via the indicated training files. Any
number of files can be indicated. The parameter avgConfidence provides information about the success of
the training: It contains the average confidence of the trained characters measured by a re-classification. The
confidence of mismatched characters is set to 0 (thus, the average confidence will be decreased significantly).
Attention
The names of the characters in the file must fit the network.
Parameter
Possible Predecessors
TraindOcrClassBox, TrainfOcrClassBox
Possible Successors
DoOcrMulti, DoOcrSingle
See also
ReadOcr, DoOcrMulti, TraindOcrClassBox, TrainfOcrClassBox
Module
OCR/OCV
10.2 Lexica
static void HOperatorSet.ClearAllLexica ( )
static void HMisc.ClearAllLexica ( )
Clear all lexica.
ClearAllLexica clears all lexica and releases their resources. All existing lexicon handles are invalid after this call, and referring to a lexicon by name in expressions is likewise no longer possible.
Attention
ClearAllLexica exists solely for the purpose of implementing the “reset program” functionality in HDevelop.
ClearAllLexica must not be used in any application.
Parallelization Information
ClearAllLexica is processed completely exclusively without parallelization.
See also
ClearLexicon
Module
OCR/OCV
Clear a lexicon.
ClearLexicon clears a lexicon and releases its resources.
Parameter
CreateLexicon creates a new lexicon based on a tuple of words. By specifying a unique textual name, you
can later refer to the lexicon from syntax expressions like those used, e.g., by DoOcrWordMlp.
Note that lexicon support in HALCON is currently not aimed at natural languages. Rather, it is intended as a
post-processing step in OCR applications that only need to distinguish between a limited set of not more than a
few thousand valid words, e.g., country or product names. MVTec itself does not provide any lexica.
Parameter
Alternatives
CreateLexicon
See also
LookupLexicon, SuggestLexicon
Module
OCR/OCV
HTuple HLexicon.InspectLexicon ( )
Query all words from a lexicon.
InspectLexicon returns a tuple of all words in the lexicon in the parameter words.
Parameter
. lexiconHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . .lexicon ; HLexicon / HTuple (IntPtr)
Handle of the lexicon.
. words (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .string(-array) ; HTuple (string)
List of all words.
Parallelization Information
InspectLexicon is reentrant and processed without parallelization.
Alternatives
LookupLexicon
See also
CreateLexicon
Module
OCR/OCV
10.3 Neural-Nets
static void HOperatorSet.ClearAllOcrClassMlp ( )
static void HMisc.ClearAllOcrClassMlp ( )
Clear all OCR classifiers.
ClearAllOcrClassMlp clears all OCR classifiers that were created with CreateOcrClassMlp and frees
all memory required for the classifiers. After calling ClearAllOcrClassMlp, no classifiers can be used any
longer.
Attention
ClearAllOcrClassMlp exists solely for the purpose of implementing the “reset program” functionality in
HDevelop. ClearAllOcrClassMlp must not be used in any application.
Result
ClearAllOcrClassMlp always returns 2 (H_MSG_TRUE).
Parallelization Information
ClearAllOcrClassMlp is processed completely exclusively without parallelization.
Possible Predecessors
DoOcrSingleClassMlp, EvaluateClassMlp
Alternatives
ClearOcrClassMlp
See also
CreateOcrClassMlp, ReadOcrClassMlp, WriteOcrClassMlp, TrainfOcrClassMlp
Module
OCR/OCV
After the classifier has been created, it is trained using TrainfOcrClassMlp. After this, the classifier can be
saved using WriteOcrClassMlp. Alternatively, the classifier can be used immediately after training to classify
characters using DoOcrSingleClassMlp or DoOcrMultiClassMlp.
HALCON provides a number of pretrained OCR classifiers (see Solution Guide I, chapter ’OCR’, section ’Pretrained OCR Fonts’). These pretrained OCR classifiers can be read directly with ReadOcrClassMlp and make it possible to read a wide variety of different fonts without the need to train an OCR classifier. Therefore, it is recommended to check whether one of the pretrained OCR classifiers can be used successfully. If this is the case, it is not necessary to create and train an OCR classifier.
A comparison of the MLP and the support vector machine (SVM) (see CreateOcrClassSvm) typically shows that SVMs are generally faster at training, especially for huge training sets, and achieve slightly better recognition rates than MLPs. The MLP is faster at classification and should therefore be preferred in time-critical applications. Please note that this guideline assumes optimal tuning of the parameters.
Parameter
. widthCharacter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Width of the rectangle to which the gray values of the segmented character are zoomed.
Default Value : 8
Suggested values : WidthCharacter ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 20}
Typical range of values : 4 ≤ WidthCharacter ≤ 20
. heightCharacter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Height of the rectangle to which the gray values of the segmented character are zoomed.
Default Value : 10
Suggested values : HeightCharacter ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 20}
Typical range of values : 4 ≤ HeightCharacter ≤ 20
. interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Interpolation mode for the zooming of the characters.
Default Value : "constant"
List of values : Interpolation ∈ {"none", "constant", "weighted"}
. features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; HTuple (string)
Features to be used for classification.
Default Value : "default"
List of values : Features ∈ {"default", "pixel", "pixel_invar", "pixel_binary", "gradient_8dir",
"projection_horizontal", "projection_horizontal_invar", "projection_vertical", "projection_vertical_invar",
"ratio", "anisometry", "width", "height", "zoom_factor", "foreground", "foreground_grid_9",
"foreground_grid_16", "compactness", "convexity", "moments_region_2nd_invar",
"moments_region_2nd_rel_invar", "moments_region_3rd_invar", "moments_central",
"moments_gray_plane", "phi", "num_connect", "num_holes", "cooc", "num_runs", "chord_histo"}
Result
If the parameters are valid, the operator CreateOcrClassMlp returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
CreateOcrClassMlp is processed completely exclusively without parallelization.
Possible Successors
TrainfOcrClassMlp
Alternatives
CreateOcrClassSvm, CreateOcrClassBox
See also
DoOcrSingleClassMlp, DoOcrMultiClassMlp, ClearOcrClassMlp, CreateClassMlp,
TrainClassMlp, ClassifyClassMlp
Module
OCR/OCV
Alternatives
DoOcrWordMlp, DoOcrSingleClassMlp
See also
CreateOcrClassMlp, ClassifyClassMlp
Module
OCR/OCV
Alternatives
DoOcrMultiClassMlp
See also
CreateOcrClassMlp, ClassifyClassMlp
Module
OCR/OCV
Parameter
Compute the information content of the preprocessed feature vectors of an OCR classifier.
GetPrepInfoOcrClassMlp computes the information content of the training vectors that have been
transformed with the preprocessing given by preprocessing. preprocessing can be set to ’principal_components’ or ’canonical_variates’. The OCR classifier OCRHandle must have been created with
CreateOcrClassMlp. The preprocessing methods are described with CreateClassMlp. The informa-
tion content is derived from the variations of the transformed components of the feature vector, i.e., it is computed
solely based on the training data, independent of any error rate on the training data. The information content is
computed for all relevant components of the transformed feature vectors (NumInput for ’principal_components’
and min(NumOutput − 1, NumInput) for ’canonical_variates’, see CreateClassMlp), and is returned in
informationCont as a number between 0 and 1. To convert the information content into a percentage, it sim-
ply needs to be multiplied by 100. The cumulative information content of the first n components is returned in
the n-th component of cumInformationCont, i.e., cumInformationCont contains the sums of the first n
elements of informationCont. To use GetPrepInfoOcrClassMlp, a sufficient number of samples must
be stored in the training files given by trainingFile (see WriteOcrTrainf).
informationCont and cumInformationCont can be used to decide how many components of the
transformed feature vectors contain relevant information. An often used criterion is to require that the trans-
formed data must represent x% (e.g., 90%) of the total data. This can be decided easily from the first value
of cumInformationCont that lies above x%. The number thus obtained can be used as the value for
NumComponents in a new call to CreateOcrClassMlp. The call to GetPrepInfoOcrClassMlp al-
ready requires the creation of a classifier, and hence the setting of NumComponents in CreateOcrClassMlp
to an initial value. However, if GetPrepInfoOcrClassMlp is called it is typically not known how
many components are relevant, and hence how to set NumComponents in this call. Therefore, the fol-
lowing two-step approach should typically be used to select NumComponents: In a first step, a classi-
fier with the maximum number for NumComponents is created (NumInput for ’principal_components’ and
min(NumOutput − 1, NumInput) for ’canonical_variates’). Then, the training samples are saved in a training
file using WriteOcrTrainf. Subsequently, GetPrepInfoOcrClassMlp is used to determine the infor-
mation content of the components, and with this NumComponents. After this, a new classifier with the desired
number of components is created, and the classifier is trained with TrainfOcrClassMlp.
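The selection criterion described above can be sketched in a few lines of plain Python (this is an illustration of the criterion only, not a HALCON call; the 90% threshold is an example value):

```python
# Sketch: pick NumComponents as the smallest n whose cumulative
# information content reaches the desired fraction of the total.

def select_num_components(information_cont, threshold=0.9):
    """Return the smallest n such that the first n components
    carry at least `threshold` of the total information content."""
    cum = 0.0
    for n, info in enumerate(information_cont, start=1):
        cum += info  # running sum = n-th entry of cumInformationCont
        if cum >= threshold:
            return n
    return len(information_cont)

# Example: relative information content of the transformed components
info = [0.55, 0.25, 0.12, 0.05, 0.03]
print(select_num_components(info, 0.9))  # prints 3
```

The number obtained this way can then be used as NumComponents when creating the final classifier with CreateOcrClassMlp.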
Parameter
. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_mlp ; HOCRMlp / HTuple (IntPtr)
Handle of the OCR classifier.
. trainingFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read(-array) ; HTuple (string)
Name(s) of the training file(s).
Default Value : "ocr.trf"
. preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Type of preprocessing used to transform the feature vectors.
Default Value : "principal_components"
List of values : Preprocessing ∈ {"principal_components", "canonical_variates"}
. informationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Relative information content of the transformed feature vectors.
. cumInformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Cumulative information content of the transformed feature vectors.
Example (Syntax: HDevelop)
Result
If the parameters are valid, the operator GetPrepInfoOcrClassMlp returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
GetPrepInfoOcrClassMlp may return the error 9211 (Matrix is not positive definite) if preprocessing
= ’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Parallelization Information
GetPrepInfoOcrClassMlp is reentrant and processed without parallelization.
Possible Predecessors
CreateOcrClassMlp, WriteOcrTrainf, AppendOcrTrainf, WriteOcrTrainfImage
Possible Successors
ClearOcrClassMlp, CreateOcrClassMlp
Module
OCR/OCV
Result
If the parameters are valid, the operator TrainfOcrClassMlp returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
TrainfOcrClassMlp may return the error 9211 (Matrix is not positive definite) if Preprocessing =
’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Parallelization Information
TrainfOcrClassMlp is processed completely exclusively without parallelization.
Possible Predecessors
CreateOcrClassMlp, WriteOcrTrainf, AppendOcrTrainf, WriteOcrTrainfImage
Possible Successors
DoOcrSingleClassMlp, DoOcrMultiClassMlp, WriteOcrClassMlp
Alternatives
ReadOcrClassMlp
See also
TrainClassMlp
Module
OCR/OCV
Possible Predecessors
TrainfOcrClassMlp
Possible Successors
ClearOcrClassMlp
See also
CreateOcrClassMlp, ReadOcrClassMlp, WriteClassMlp, ReadClassMlp
Module
OCR/OCV
10.4 Support-Vector-Machines
See also
CreateOcrClassSvm, ReadOcrClassSvm, WriteOcrClassSvm, TrainfOcrClassSvm
Module
OCR/OCV
The parameter features can contain the following feature names for the classification of the characters. By
specifying ’default’, the features ’ratio’ and ’pixel_invar’ are selected.
After the classifier has been created, it is trained using TrainfOcrClassSvm. After this, the classifier can be
saved using WriteOcrClassSvm. Alternatively, the classifier can be used immediately after training to classify
characters using DoOcrSingleClassSvm or DoOcrMultiClassSvm.
A comparison of SVM and the multi-layer perceptron (MLP) (see CreateOcrClassMlp) typically shows that SVMs are generally faster at training, especially for huge training sets, and achieve slightly better recognition rates than MLPs. The MLP is faster at classification and should therefore be preferred in time-critical applications. Please note that this guideline assumes optimal tuning of the parameters.
Parameter
. widthCharacter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Width of the rectangle to which the gray values of the segmented character are zoomed.
Default Value : 8
Suggested values : WidthCharacter ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 20}
Typical range of values : 4 ≤ WidthCharacter ≤ 20
. heightCharacter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Height of the rectangle to which the gray values of the segmented character are zoomed.
Default Value : 10
Suggested values : HeightCharacter ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 20}
Typical range of values : 4 ≤ HeightCharacter ≤ 20
. interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Interpolation mode for the zooming of the characters.
Default Value : "constant"
List of values : Interpolation ∈ {"none", "constant", "weighted"}
. features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; HTuple (string)
Features to be used for classification.
Default Value : "default"
List of values : Features ∈ {"default", "pixel", "pixel_invar", "pixel_binary", "gradient_8dir",
"projection_horizontal", "projection_horizontal_invar", "projection_vertical", "projection_vertical_invar",
"ratio", "anisometry", "width", "height", "zoom_factor", "foreground", "foreground_grid_9",
"foreground_grid_16", "compactness", "convexity", "moments_region_2nd_invar",
"moments_region_2nd_rel_invar", "moments_region_3rd_invar", "moments_central",
"moments_gray_plane", "phi", "num_connect", "num_holes", "cooc", "num_runs", "chord_histo"}
. characters (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; HTuple (string)
All characters of the character set to be read.
Default Value : ["0","1","2","3","4","5","6","7","8","9"]
. kernelType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
The kernel type.
Default Value : "rbf"
List of values : KernelType ∈ {"linear", "rbf", "polynomial_inhomogeneous",
"polynomial_homogeneous"}
. kernelParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Additional parameter for the kernel function.
Default Value : 0.02
Suggested values : KernelParam ∈ {0.01, 0.02, 0.05, 0.1, 0.5}
. nu (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Regularization constant of the SVM.
Default Value : 0.05
Suggested values : Nu ∈ {0.0001, 0.001, 0.01, 0.05, 0.1, 0.2, 0.3}
Restriction : (Nu > 0.0) ∧ (Nu < 1.0)
. mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
The mode of the SVM.
Default Value : "one-versus-one"
List of values : Mode ∈ {"one-versus-all", "one-versus-one"}
. preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Type of preprocessing used to transform the feature vectors.
Default Value : "normalization"
List of values : Preprocessing ∈ {"none", "normalization", "principal_components",
"canonical_variates"}
Result
If the parameters are valid, the operator CreateOcrClassSvm returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
CreateOcrClassSvm is processed completely exclusively without parallelization.
Possible Successors
TrainfOcrClassSvm
Alternatives
CreateOcrClassMlp, CreateOcrClassBox
See also
DoOcrSingleClassSvm, DoOcrMultiClassSvm, ClearOcrClassSvm, CreateClassSvm,
TrainClassSvm, ClassifyClassSvm
Module
OCR/OCV
Parameter
. character (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; HRegion
Character to be recognized.
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Gray values of the character.
. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_svm ; HOCRSvm / HTuple (IntPtr)
Handle of the OCR classifier.
. num (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Number of best classes to determine.
Default Value : 1
Suggested values : Num ∈ {1, 2, 3, 4, 5}
. classVal (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; HTuple (string)
Result of classifying the character with the SVM.
Result
If the parameters are valid, the operator DoOcrSingleClassSvm returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
DoOcrSingleClassSvm is reentrant and processed without parallelization.
Possible Predecessors
TrainfOcrClassSvm, ReadOcrClassSvm
Alternatives
DoOcrMultiClassSvm
See also
CreateOcrClassSvm, ClassifyClassSvm
Module
OCR/OCV
used are identical to those returned by DoOcrSingleClassSvm for a single character. It does so by testing
all possible corrections for which the classification result is changed for at most numCorrections character
regions.
In case the expression is a lexicon and the above procedure did not yield a result, the most similar word in
the lexicon is returned as long as it requires less than numCorrections edit operations for the correction (see
SuggestLexicon).
The resulting word is graded by a score between 0.0 (no correction found) and 1.0 (original word correct), which
is dominated by the number of corrected characters but also adds a minor penalty for ignoring the second best class
or even all best classes (in case of lexica).
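The lexicon fallback can be illustrated with a minimal plain-Python sketch. This is an approximation only: the distance is a standard Levenshtein edit distance, and the score shown is a simplification of the grading described above (the actual score also penalizes ignored classes):

```python
# Sketch of the lexicon fallback: return the most similar lexicon word
# if it is within `num_corrections` edit operations, otherwise report
# no correction (score 0.0).

def edit_distance(a, b):
    """Levenshtein distance: insertions, deletions, substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def suggest(word, lexicon, num_corrections):
    best = min(lexicon, key=lambda w: edit_distance(word, w))
    dist = edit_distance(word, best)
    if dist > num_corrections:
        return word, 0.0                  # no correction found
    return best, 1.0 - dist / max(len(best), 1)

print(suggest("GERMANV", ["GERMANY", "FRANCE"], 2))
```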
Parameter
Possible Successors
DoOcrSingleClassSvm, DoOcrMultiClassSvm
See also
TrainfOcrClassSvm, GetParamsClassSvm
Module
OCR/OCV
Compute the information content of the preprocessed feature vectors of an SVM-based OCR classifier.
GetPrepInfoOcrClassSvm computes the information content of the training vectors that have been
transformed with the preprocessing given by preprocessing. preprocessing can be set to ’principal_components’ or ’canonical_variates’. The OCR classifier OCRHandle must have been created with
CreateOcrClassSvm. The preprocessing methods are described with CreateClassSvm. The information
content is derived from the variations of the transformed components of the feature vector, i.e., it is computed solely
based on the training data, independent of any error rate on the training data. The information content is computed
for all relevant components of the transformed feature vectors (NumFeatures for ’principal_components’ and
min(NumClasses − 1, NumFeatures) for ’canonical_variates’, see CreateClassSvm), and is returned
in informationCont as a number between 0 and 1. To convert the information content into a percentage, it
simply needs to be multiplied by 100. The cumulative information content of the first n components is returned in
the n-th component of cumInformationCont, i.e., cumInformationCont contains the sums of the first n
elements of informationCont. To use GetPrepInfoOcrClassSvm, a sufficient number of samples must
be stored in the training files given by trainingFile (see WriteOcrTrainf).
informationCont and cumInformationCont can be used to decide how many components of the
transformed feature vectors contain relevant information. An often used criterion is to require that the trans-
formed data must represent x% (e.g., 90%) of the total data. This can be decided easily from the first value
of cumInformationCont that lies above x%. The number thus obtained can be used as the value for
NumComponents in a new call to CreateOcrClassSvm. The call to GetPrepInfoOcrClassSvm al-
ready requires the creation of a classifier, and hence the setting of NumComponents in CreateOcrClassSvm
to an initial value. However, if GetPrepInfoOcrClassSvm is called it is typically not known how
many components are relevant, and hence how to set NumComponents in this call. Therefore, the fol-
lowing two-step approach should typically be used to select NumComponents: In a first step, a classifier
with the maximum number for NumComponents is created (NumFeatures for ’principal_components’ and
min(NumClasses − 1, NumFeatures) for ’canonical_variates’). Then, the training samples are saved in a
training file using WriteOcrTrainf. Subsequently, GetPrepInfoOcrClassSvm is used to determine the
information content of the components, and with this NumComponents. After this, a new classifier with the
desired number of components is created, and the classifier is trained with TrainfOcrClassSvm.
Parameter
Result
If the parameters are valid, the operator GetPrepInfoOcrClassSvm returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
GetPrepInfoOcrClassSvm may return the error 9211 (Matrix is not positive definite) if preprocessing
= ’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Parallelization Information
GetPrepInfoOcrClassSvm is reentrant and processed without parallelization.
Possible Predecessors
CreateOcrClassSvm, WriteOcrTrainf, AppendOcrTrainf, WriteOcrTrainfImage
Possible Successors
ClearOcrClassSvm, CreateOcrClassSvm
Module
OCR/OCV
Parameter
double HOCRSvm.GetSupportVectorOcrClassSvm (
HTuple indexSupportVector )
Return the index of a support vector from a trained OCR classifier that is based on support vector machines.
The operator GetSupportVectorOcrClassSvm maps support vectors of a trained SVM-based OCR
classifier (given in OCRHandle) to the original training data set. The index of the SV is specified with
indexSupportVector. The index is counted from 0, i.e., indexSupportVector must be a number between 0 and NumSupportVectors − 1, where NumSupportVectors can be determined with GetSupportVectorNumOcrClassSvm. The index of this SV in the training data is returned in index.
GetSupportVectorOcrClassSvm can, for example, be used to visualize the support vectors. To do so, the
train file that has been used to train the SVM must be read with ReadOcrTrainf. The value returned in index
must be incremented by 1 and can then be used to select the support vectors with SelectObj from the training
characters. If more than one train file has been used in TrainfOcrClassSvm, index behaves as if all train files had been merged into one train file with ConcatOcrTrainf.
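The mapping from such a merged index back to a character in one of several train files can be sketched as follows (plain Python with hypothetical file sizes; this only mirrors the merged-file indexing described above):

```python
# Sketch: given the 0-based index returned for a support vector and the
# number of characters per train file, locate the file and the 1-based
# position within it (SelectObj counts objects from 1).

def locate_training_sample(sv_index_in_training_data, file_sizes):
    """Return (file number, 1-based position) for a merged training index."""
    remaining = sv_index_in_training_data
    for file_no, size in enumerate(file_sizes):
        if remaining < size:
            return file_no, remaining + 1  # +1 for SelectObj-style indexing
        remaining -= size
    raise IndexError("index exceeds total number of training characters")

print(locate_training_sample(5, [4, 3, 6]))  # falls into the second file
```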
Parameter
See also
CreateOcrClassSvm, ReadOcrTrainf, AppendOcrTrainf, ConcatOcrTrainf
Module
OCR/OCV
Result
If the parameters are valid, the operator TrainfOcrClassSvm returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
TrainfOcrClassSvm may return the error 9211 (Matrix is not positive definite) if Preprocessing =
’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Parallelization Information
TrainfOcrClassSvm is processed completely exclusively without parallelization.
Possible Predecessors
CreateOcrClassSvm, WriteOcrTrainf, AppendOcrTrainf, WriteOcrTrainfImage
Possible Successors
DoOcrSingleClassSvm, DoOcrMultiClassSvm, WriteOcrClassSvm
Alternatives
ReadOcrClassSvm
See also
TrainClassSvm
Module
OCR/OCV
Result
If the parameters are valid, the operator WriteOcrClassSvm returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
WriteOcrClassSvm is reentrant and processed without parallelization.
Possible Predecessors
TrainfOcrClassSvm
Possible Successors
ClearOcrClassSvm
See also
CreateOcrClassSvm, ReadOcrClassSvm, WriteClassSvm, ReadClassSvm
Module
OCR/OCV
10.5 Tools
static void HOperatorSet.SegmentCharacters ( HObject region,
HObject image, out HObject imageForeground,
out HObject regionForeground, HTuple method, HTuple eliminateLines,
HTuple dotPrint, HTuple strokeWidth, HTuple charWidth,
HTuple charHeight, HTuple thresholdOffset, HTuple contrast,
out HTuple usedThreshold )
’local_contrast_best’ This method extracts text that differs locally from the background. It is therefore suited for images with inhomogeneous illumination. The enhancement of the text borders leads to a more accurate determination of the outline of the text, which is especially useful if the background is highly textured. The parameter contrast defines the minimum contrast, i.e., the minimum gray value difference between symbols and background.
’local_auto_shape’ The minimum contrast is estimated automatically such that the number of very small regions
is reduced. This method is especially suitable for noisy images. The parameter thresholdOffset can
be used to adjust the threshold. Let g(x, y) be the gray value at position (x, y) in the input image. The
threshold condition is determined by:
g(x, y) ≤ usedThreshold + thresholdOffset.
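The threshold condition can be illustrated with a small plain-Python sketch (not a HALCON call; usedThreshold stands in for the automatically estimated value, and the image is a toy example):

```python
# Sketch of the condition g(x, y) <= usedThreshold + thresholdOffset:
# a pixel belongs to the (dark) text foreground if its gray value is
# at most the estimated threshold plus the user-supplied offset.

def segment_dark_text(gray, used_threshold, threshold_offset=0):
    """Boolean mask of pixels with g(x, y) <= usedThreshold + thresholdOffset."""
    limit = used_threshold + threshold_offset
    return [[g <= limit for g in row] for row in gray]

img = [[200, 40, 210],
       [ 35, 220, 30]]
mask = segment_dark_text(img, used_threshold=100, threshold_offset=10)
print(sum(sum(row) for row in mask))  # number of foreground pixels: prints 3
```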
Set eliminateLines to ’true’ if the extraction of characters is disturbed by lines that are horizontal or vertical with respect to the lines of text. The elimination is influenced by the maximum of charWidth and the maximum of charHeight. For further information see the description of these parameters.
dotPrint: Should be set to ’true’ if dot-printed characters are to be read, otherwise to ’false’.
strokeWidth: Specifies the stroke width of the text. It is used to calculate the internally used mask sizes for determining the characters. These mask sizes are also influenced by the parameters dotPrint, the average charWidth, and the average charHeight.
charWidth: This can be a tuple with up to three values. The first value is the average width of a character, the second is the minimum width, and the third is the maximum width. If the minimum is not set or equals -1, the operator automatically derives this value from the average charWidth. The same applies if the maximum is not set. Some examples:
[10] sets the average character width to 10, the minimum and maximum are calculated by the operator.
[10,-1,20] sets the average character width to 10, the minimum value is calculated by the system, and the maximum
is set to 20.
[10,5,20] sets the average character width to 10, the minimum to 5, and the maximum to 20.
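The interpretation of the charWidth tuple can be sketched with a hypothetical helper (plain Python; the actual derivation of unset bounds from the average is internal to the operator, so unresolved bounds are returned as None here):

```python
# Hypothetical helper illustrating the tuple forms [avg], [avg, min, max]
# described above, where -1 marks a bound left to the operator.

def parse_char_extent(values):
    """values: [avg] or [avg, min, max]; -1 marks an unset bound."""
    avg = values[0]
    lo = values[1] if len(values) > 1 else -1
    hi = values[2] if len(values) > 2 else -1
    return {"average": avg,
            "minimum": None if lo == -1 else lo,  # None: derived internally
            "maximum": None if hi == -1 else hi}

print(parse_char_extent([10]))          # both bounds left to the operator
print(parse_char_extent([10, -1, 20]))  # minimum derived, maximum fixed at 20
print(parse_char_extent([10, 5, 20]))   # all three values given
```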
charHeight: This can be a tuple with up to three values. The first value is the average height of a character, the second is the minimum height, and the third is the maximum height. If the minimum is not set or equals -1, the operator automatically derives this value from the average charHeight. The same applies if the maximum is not set. Some examples:
[10] sets the average character height to 10, the minimum and maximum are calculated by the operator.
[10,-1,20] sets the average character height to 10, the minimum value is calculated by the system, and the maximum
is set to 20.
[10,5,20] sets the average character height to 10, the minimum to 5, and the maximum to 20.
thresholdOffset: This parameter can be used to adjust the threshold, which is used when the segmentation
method ’local_auto_shape’ is chosen.
contrast: Defines the minimum contrast between the text and the background. This parameter is used if the
segmentation method ’local_contrast_best’ is selected.
usedThreshold: After the execution, this parameter returns the threshold used to segment the characters.
imageForeground returns the image that was internally used for the segmentation.
Parameter
Result
If the input parameters are set correctly, the operator SegmentCharacters returns the value 2
(H_MSG_TRUE). Otherwise an exception will be raised.
Parallelization Information
SegmentCharacters is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
TextLineOrientation
Possible Successors
SelectCharacters, Connection
Alternatives
Threshold
Module
Foundation
connected to a character. If you have more than one region with text, you can of course handle them without
merging them. The region for SelectCharacters typically comes from SegmentCharacters, but any
other segmentation operator can be used as well.
The process of the selection can be partitioned into four parts. All steps are influenced by the parameters
strokeWidth, charHeight, and charWidth. If you lose small objects like dots, adapt the minimum
charWidth and the minimum charHeight. Some parameters, however, affect the result of one particular
step; a closer description follows below. With the parameter stopAfter you can terminate after a specified
step.
In the first step, ’step1_select_candidates’, charWidth and the charHeight are used to select the candidates.
The result of this step is also affected by clutterSizeMax.
In the next step, ’step2_partition_characters’, the parameter partitionMethod and the parameter
partitionLines influence the result.
Step three, ’step3_connect_fragments’, uses the parameters connectFragments and dotPrint. If
dot-printed characters have to be detected and some dots are not connected to the character, there are two ways to
overcome this problem: you can increase the fragmentDistance and/or decrease the strokeWidth.
In the last step, ’step4_select_characters’, the result is affected by the parameters diacriticMarks and
punctuation.
dotPrint: Should be set to ’true’ if dot prints should be read, else to ’false’.
strokeWidth: Specifies the stroke width of the text. It is used to calculate internally used mask sizes to
determine the characters. These mask sizes are also influenced by the parameter dotPrint, the average
charWidth, and the average charHeight.
charWidth: This can be a tuple with up to three values. The first value is the average width of a character. The
second is the minimum width of a character and the third is the maximum width of a character. If the minimum is
not set or is equal to -1, the operator automatically sets these values depending on the average charWidth. The
same applies if the maximum is not set. Some examples:
[10] sets the average character width to 10, the minimum and maximum are calculated by the operator.
[10,-1,20] sets the average character width to 10, the minimum value is calculated by the system, and the maximum
is set to 20.
[10,5,20] sets the average character width to 10, the minimum to 5, and the maximum to 20.
charHeight: This can be a tuple with up to three values. The first value is the average height of a character. The
second is the minimum height of a character and the third is the maximum height of a character. If the minimum
is not set or is equal to -1, the operator automatically sets these values depending on the average charHeight.
The same applies if the maximum is not set. Some examples:
[10] sets the average character height to 10, the minimum and maximum are calculated by the operator.
[10,-1,20] sets the average character height to 10, the minimum value is calculated by the system, and the maximum
is set to 20.
[10,5,20] sets the average character height to 10, the minimum to 5, and the maximum to 20.
punctuation: Set this parameter to ’true’ if the operator also has to detect punctuation marks (e.g. .,:’‘"),
otherwise they will be suppressed.
diacriticMarks: Set this parameter to ’true’ if the text in your application contains diacritic marks (e.g. â,é,ö),
or to ’false’ to suppress them.
partitionMethod: If neighboring characters are printed close to each other, they may be partly merged. With
this parameter you can specify the method to partition such characters. The possible values are ’none’, which
means no partitioning is performed. ’fixed_width’ means that the partitioning assumes a constant character width.
If the width of the extracted region is well above the average charWidth, the region is split into parts that have
the given average charWidth. The partitioning starts at the left border of the region. ’variable_width’ means
that the characters are partitioned at the position where they have the thinnest connection. This method can be
selected for characters that are printed with a variable-width font or if many consecutive characters are extracted as
one symbol. It could be helpful to call TextLineSlant and/or use TextLineOrientation before calling
SelectCharacters.
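The ’fixed_width’ idea can be sketched as splitting a region's column extent into chunks of the average character width, starting at the left border. This is illustrative only; the operator's exact split policy (including what counts as "well above" the average width) is internal to HALCON and assumed here.

```python
def partition_fixed_width(col_min, col_max, avg_width):
    """Split the column extent [col_min, col_max] of a merged region into
    parts of avg_width, starting at the left border. Regions that are not
    well above avg_width are left intact; the 1.5 factor is an assumption."""
    width = col_max - col_min + 1
    if width <= 1.5 * avg_width:        # "well above" threshold is assumed
        return [(col_min, col_max)]
    parts = []
    left = col_min
    while left <= col_max:
        right = min(left + avg_width - 1, col_max)
        parts.append((left, right))
        left = right + 1
    return parts
```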
partitionLines: If some text lines or some characters of different text lines are connected, set this parameter
to ’true’.
fragmentDistance: This parameter influences the connection of character fragments. If too much is con-
nected, set the parameter to ’narrow’ or ’medium’. In the case that more fragments should be connected, set
the parameter to ’medium’ or ’wide’. The connection is also influenced by the maximum of charWidth and
charHeight. See also connectFragments.
connectFragments: Set this parameter to ’true’ if the extracted symbols are fragmented, i.e., if a symbol is
not extracted as one region but broken up into several parts. See also fragmentDistance and stopAfter in
the step ’step3_connect_fragments’.
clutterSizeMax: If the extracted characters contain clutter, i.e., small regions near the actual symbols, increase
this value. If parts of the symbols are missing, decrease this value.
stopAfter: Use this parameter in case the operator does not produce the desired results. The operator then
stops after the execution of the selected step and provides the corresponding results. To run all steps, set
stopAfter to ’completion’.
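The four selection steps and the effect of stopAfter can be pictured as a staged pipeline with an early exit. The step processing below is a placeholder, not HALCON code; only the step names and the control flow come from the description above.

```python
# Step names as documented for SelectCharacters; processing is a stand-in.
STEPS = ["step1_select_candidates", "step2_partition_characters",
         "step3_connect_fragments", "step4_select_characters"]

def select_characters(region, stop_after="completion"):
    result = region
    for step in STEPS:
        result = f"{result}->{step}"    # placeholder for the real processing
        if step == stop_after:
            return result               # early exit with intermediate result
    return result                       # stop_after == "completion": all steps ran
```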
Parameter
Result
If the input parameters are set correctly, the operator SelectCharacters returns the value 2 (H_MSG_TRUE).
Otherwise an exception will be raised.
Parallelization Information
SelectCharacters is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
SegmentCharacters, TextLineSlant
Possible Successors
DoOcrSingle, DoOcrMulti
Alternatives
Connection
Module
Foundation
The search area can be restricted by the parameters orientationFrom and orientationTo, which also
influences the runtime of the operator.
With the calculated angle orientationAngle and operators like AffineTransImage, the region region
of the image image can be rotated such that the text lines lie horizontally in the image. This may simplify the
character segmentation for OCR applications.
Parameter
. region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Area of text lines.
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; HImage
Input image.
. charHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Height of the text lines.
Default Value : 25
Typical range of values : 1 ≤ CharHeight
Restriction : CharHeight ≥ 1
. orientationFrom (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; HTuple (double)
Minimum rotation of the text lines.
Default Value : -0.523599
Typical range of values : -1.570796 ≤ OrientationFrom ≤ 1.570796
Restriction : ((−pi/2) ≤ OrientationFrom) ∧ (OrientationFrom ≤ OrientationTo)
. orientationTo (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; HTuple (double)
Maximum rotation of the text lines.
Default Value : 0.523599
Typical range of values : -1.570796 ≤ OrientationTo ≤ 1.570796
Restriction : ((−pi/2) ≤ OrientationTo) ∧ (OrientationTo ≤ (pi/2))
. orientationAngle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; HTuple (double)
Calculated rotation angle of the text lines.
Example (Syntax: HDevelop)
read_image(Image,’letters’)
text_line_orientation(Image,Image,50,rad(-80),rad(80),OrientationAngle)
rotate_image(Image,ImageRotate,-OrientationAngle/rad(180)*180,’constant’)
Result
If the input parameters are set correctly, the operator TextLineOrientation returns the value 2
(H_MSG_TRUE). Otherwise an exception will be raised.
Parallelization Information
TextLineOrientation is reentrant and automatically parallelized (on tuple level).
Possible Successors
RotateImage, AffineTransImage, AffineTransImageSize
Module
Foundation
are segmented by the operator TextLineSlant itself. If more than one region is passed, the numerical values
of the orientation angle are stored in a tuple, the position of a value in the tuple corresponding to the position of
the region in the input tuple.
charHeight specifies the approximate height of the text lines in the region region. It is assumed that the
text lines are darker than the background.
The search area can be restricted by the parameters slantFrom and slantTo, which also influences the
runtime of the operator.
With the calculated slant angle slantAngle and operators for affine transformations, the slant can be removed
from the characters. This may simplify the character separation for OCR applications. For correct results, all
characters of a region should have nearly the same slant.
Parameter
hom_mat2d_identity(HomMat2DIdentity)
read_image(Image,’dot_print_slanted’)
/* correct slant */
text_line_slant(Image,Image,50,rad(-45),rad(45),SlantAngle)
hom_mat2d_slant(HomMat2DIdentity,-SlantAngle,’x’,0,0,HomMat2DSlant)
affine_trans_image(Image,Image,HomMat2DSlant,’constant’,’true’)
Result
If the input parameters are set correctly, the operator TextLineSlant returns the value 2 (H_MSG_TRUE).
Otherwise an exception will be raised.
Parallelization Information
TextLineSlant is reentrant and automatically parallelized (on tuple level).
Possible Successors
HomMat2dSlant, AffineTransImage, AffineTransImageSize
Module
Foundation
10.6 Training-Files
Hobject Image, Dark, Character, SingleCharacter;
Htuple  Class;
long    num, WindowHandle;
int     i;
char    name[128] = "characters.trf";  /* training file to append to */
char    class[128];                    /* class name entered by the user */
read_image(&Image,"character.tiff");
bin_threshold(Image,&Dark);
connection(Dark,&Character);
count_obj(Character,&num);
create_tuple(&Class,num);
open_window(0,0,-1,-1,0,"","",&WindowHandle);
set_color(WindowHandle,"red");
/* object indices in HALCON are 1-based */
for (i=1; i<=num; i++) {
  select_obj(Character,&SingleCharacter,i);
  clear_window(WindowHandle);
  disp_region(SingleCharacter,WindowHandle);
  printf("class of character %d ?\n",i);
  scanf("%127s",class);
  append_ocr_trainf(SingleCharacter,Image,class,name);
}
Result
If the parameters are correct, the operator AppendOcrTrainf returns the value 2 (H_MSG_TRUE). Otherwise
an exception will be raised.
Parallelization Information
AppendOcrTrainf is processed completely exclusively without parallelization.
Possible Predecessors
Threshold, Connection, CreateOcrClassBox, ReadOcr
Possible Successors
TrainfOcrClassBox, InfoOcrClassBox, WriteOcr, DoOcrMulti, DoOcrSingle
Alternatives
WriteOcrTrainf, WriteOcrTrainfImage
Module
OCR/OCV
Parameter
. characters (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image-array ; HImage
Images read from file.
. trainFileNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read(-array) ; HTuple (string)
Names of the training files.
Default Value : ""
. characterNames (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; HTuple (string)
Names of the read characters.
Result
If the parameter values are correct the operator ReadOcrTrainf returns the value 2 (H_MSG_TRUE). Other-
wise an exception handling is raised.
Parallelization Information
ReadOcrTrainf is reentrant and processed without parallelization.
Possible Predecessors
WriteOcrTrainf
Possible Successors
DispImage, SelectObj, ZoomImageSize
Alternatives
ReadOcrTrainfSelect
See also
TrainfOcrClassBox
Module
OCR/OCV
For each character (region) in character the corresponding class name must be specified in classVal. The gray
values are passed via the parameter image. If no file extension is specified in fileName the extension ’.trf’ is
appended to the file name. The version of the file format used for writing data can be defined by the parameter
’ocr_trainf_version’ of the operator SetSystem.
Parameter
Parallelization Information
WriteOcrTrainfImage is reentrant and processed without parallelization.
Possible Predecessors
Threshold, Connection, CreateOcrClassBox, ReadOcr
Possible Successors
TrainfOcrClassBox, InfoOcrClassBox, WriteOcr, DoOcrMulti, DoOcrSingle
Alternatives
WriteOcrTrainf, AppendOcrTrainf
Module
OCR/OCV
Object
11.1 Information
static void HOperatorSet.CountObj ( HObject objects, out HTuple number
)
int HObject.CountObj ( )
int HImage.CountObj ( )
int HRegion.CountObj ( )
int HXLD.CountObj ( )
int HXLDCont.CountObj ( )
int HXLDPoly.CountObj ( )
int HXLDPara.CountObj ( )
int HXLDModPara.CountObj ( )
int HXLDExtPara.CountObj ( )
Number of objects in a tuple.
The operator CountObj determines for the object parameter objects the number of objects it contains. In
this connection it should be noted that an object is not the same as a connection component (see Connection). For
example, a region consisting of three connected parts still counts as one object.
Attention
In Prolog and Lisp the length of the list is not necessarily identical with the number of objects. This is the case
when the list contains object keys that were created in compact mode (keys from compact and normal mode can
be mixed). See in this connection SetSystem(’compact_object’,<true/false>).
Parameter
826 CHAPTER 11. OBJECT
Parallelization Information
CountObj is reentrant and processed without parallelization.
See also
CopyObj, ObjToInteger, Connection, SetSystem
Module
Foundation
’creator’ Output of the names of the procedures which initially created the image components (not the object).
’type’ Output of the type of image component (’byte’, ’int1’, ’int2’, ’uint2’, ’int4’, ’real’, ’direction’, ’cyclic’,
’complex’, ’vector_field’). The component 0 is of type ’region’ or ’xld’.
In the tuple channel the numbers of the components about which information is required are stated. After
carrying out GetChannelInfo, information contains a tuple of strings (one string per entry in channel)
with the required information.
Parameter
HTuple HObject.GetObjClass ( )
HTuple HImage.GetObjClass ( )
HTuple HRegion.GetObjClass ( )
HTuple HXLD.GetObjClass ( )
HTuple HXLDCont.GetObjClass ( )
HTuple HXLDPoly.GetObjClass ( )
HTuple HXLDPara.GetObjClass ( )
HTuple HXLDModPara.GetObjClass ( )
HTuple HXLDExtPara.GetObjClass ( )
Name of the class of an image object.
GetObjClass returns the name of the corresponding class to each object. The following classes are possible:
’image’ Object with region (definition domain) and at least one channel.
’region’ Object with a region without gray values.
’xld_cont’ XLD object as contour
’xld_poly’ XLD object as polygon
’xld_parallel’ XLD object with parallel polygons
Parameter
. objectVal (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object(-array) ; HObject
Image objects to be examined.
. classVal (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; HTuple (string)
Name of class.
Result
If the parameter values are correct the operator GetObjClass returns the value 2 (H_MSG_TRUE). Otherwise
an exception is raised.
Parallelization Information
GetObjClass is reentrant and automatically parallelized (on tuple level).
Possible Successors
DispImage, DispRegion, DispXld
See also
GetChannelInfo, CountRelation
Module
Foundation
int HObject.TestObjDef ( )
int HImage.TestObjDef ( )
int HRegion.TestObjDef ( )
int HXLD.TestObjDef ( )
int HXLDCont.TestObjDef ( )
int HXLDPoly.TestObjDef ( )
int HXLDPara.TestObjDef ( )
int HXLDModPara.TestObjDef ( )
int HXLDExtPara.TestObjDef ( )
Test whether an object is already deleted.
The operator TestObjDef checks whether the object still exists in the HALCON operator data base (i.e. whether
the surrogate is still valid). If that is the case, isDefined is set to TRUE, otherwise to FALSE. This check is
especially useful before deleting an object if it is not certain whether the object has already been deleted by a
prior deleting operator (ClearObj).
Attention
The parameter isDefined can be TRUE even if the object was already deleted because the surrogates of deleted
objects are re-used for new objects. In this context see the example.
Parameter
circle(&Circle,100.0,100.0,100.0);
test_obj_def(Circle,IsDefined);
printf("Result for test_obj_def (H_MSG_TRUE): %d\n",IsDefined);
clear_obj(Circle);
test_obj_def(Circle,IsDefined);
printf("Result for test_obj_def (H_MSG_FALSE): %d\n",IsDefined);
gen_rectangle1(&Rectangle,200.0,200.0,300.0,300.0);
test_obj_def(Circle,IsDefined);
printf("Result for test_obj_def (H_MSG_TRUE!!!): %d\n",IsDefined);
Complexity
The runtime complexity is O(1).
Result
The operator TestObjDef returns the value 2 (H_MSG_TRUE) if the parameters are correct. The
behavior in case of empty input (no input objects available) is set via the operator SetSystem
(’no_object_result’,<Result>).
Parallelization Information
TestObjDef is reentrant and processed without parallelization.
Possible Predecessors
ClearObj, GenCircle, GenRectangle1
See also
SetCheck, ClearObj, ResetObjDb
Module
Foundation
11.2 Manipulation
gen_circle(&Circle,200.0,400.0,23.0);
gen_rectangle1(&Rectangle,23.0,44.0,203.0,201.0);
concat_obj(Circle,Rectangle,&CircleAndRectangle);
clear_obj(Circle); clear_obj(Rectangle);
disp_region(CircleAndRectangle,WindowHandle);
Complexity
Runtime complexity: O(|objects1| + |objects2|);
Memory complexity of the result objects: O(|objects1| + |objects2|)
Result
ConcatObj returns 2 (H_MSG_TRUE) if all objects are contained in the HALCON database. If the input is
empty the behavior can be set via SetSystem(’no_object_result’,<Result>). If necessary, an exception
is raised.
count_obj(Regions,Num)
for(1,Num,i)
copy_obj(Regions,Single,i,1)
get_region_polygon(Single,5.0,Line,Column)
disp_polygon(WindowHandle,Line,Column)
clear_obj(Single)
loop().
Complexity
Runtime complexity: O(|objects| + numObj);
Memory complexity of the result object: O(numObj)
Result
CopyObj returns 2 (H_MSG_TRUE) if all objects are contained in the HALCON database and all parameters are
correct. If the input is empty the behavior can be set via SetSystem(’no_object_result’,<Result>).
If necessary, an exception is raised.
Parallelization Information
CopyObj is reentrant and processed without parallelization.
Possible Predecessors
CountObj
Alternatives
SelectObj
See also
CountObj, ConcatObj, ObjToInteger, CopyImage
Module
Foundation
Complexity
Runtime complexity: O(|objects| + number)
Result
ObjToInteger returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty the behavior can
be set via SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
ObjToInteger is reentrant and processed without parallelization.
Possible Predecessors
TestObjDef
Alternatives
CopyObj, SelectObj, CopyImage, GenImageProto
See also
IntegerToObj, CountObj
Module
Foundation
count_obj(Regions,&Num);
for (i=1; i<=Num; i++)
{
select_obj(Regions,&Single,i);
T_get_region_polygon(Single,5.0,&Row,&Column);
T_disp_polygon(WindowHandleTuple,Row,Column);
destroy_tuple(Row);
destroy_tuple(Column);
clear_obj(Single);
}
Complexity
Runtime complexity: O(|objects|)
Result
SelectObj returns 2 (H_MSG_TRUE) if all objects are contained in the HALCON database and
all parameters are correct. If the input is empty the behavior can be set via SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
SelectObj is reentrant and processed without parallelization.
Possible Predecessors
CountObj
Alternatives
CopyObj
See also
CountObj, ConcatObj, ObjToInteger
Module
Foundation
Regions
12.1 Access
static void HOperatorSet.GetRegionChain ( HObject region,
out HTuple row, out HTuple column, out HTuple chain )
void HRegion.GetRegionChain ( out int row, out int column,
out HTuple chain )
3 2 1
4 ∗ 0
5 6 7
The operator GetRegionChain returns the code in the form of a tuple. In case of an empty region the parame-
ters row and column are zero and chain is the empty tuple.
Attention
Holes of the region are ignored. Only one region may be passed, and it must have exactly one connection compo-
nent.
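The direction numbering shown above is the usual 8-neighborhood (Freeman) chain code. The following sketch shows how a start pixel and such a code expand into pixel coordinates (rows grow downward, columns rightward, as in HALCON images); it is an illustrative decoder, not the operator itself:

```python
# (d_row, d_col) offsets for chain directions 0..7, matching the diagram:
#   3 2 1
#   4 * 0
#   5 6 7
OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
           (0, -1), (1, -1), (1, 0), (1, 1)]

def decode_chain(row, col, chain):
    """Expand a start pixel and a chain-code tuple into pixel coordinates."""
    points = [(row, col)]
    for d in chain:
        dr, dc = OFFSETS[d]
        row, col = row + dr, col + dc
        points.append((row, col))
    return points
```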
Parameter
840 CHAPTER 12. REGIONS
Parallelization Information
GetRegionChain is reentrant and processed without parallelization.
Possible Predecessors
SobelAmp, Threshold, Skeleton, EdgesImage, GenRectangle1, GenCircle
Possible Successors
ApproxChain, ApproxChainSimple
See also
CopyObj, GetRegionContour, GetRegionPolygon
Module
Foundation
The operator GetRegionConvex returns the convex hull of a region as a polygon. The polygon is the minimal
set of line (rows) and column (columns) coordinates describing the hull of the region. The polygon pixels
lie on the region. The polygon starts at the smallest line number; in this line, at the pixel with the largest column
index. The rotation direction is clockwise. The first pixel of the polygon is identical with the last. The operator
GetRegionConvex returns the coordinates in the form of tuples. An empty region is passed as an empty tuple.
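The ordering rules (clockwise traversal, start at the smallest row and there at the largest column, closed polygon) can be reproduced with a standard convex-hull routine. This is an illustrative re-implementation of the output conventions, not HALCON's algorithm:

```python
def convex_hull_clockwise(points):
    """points: iterable of (row, col) pixels. Returns the hull as a closed
    polygon (first point repeated at the end), traversed clockwise on
    screen, starting at the smallest row / largest column there."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts + pts[:1]
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:                       # monotone-chain construction
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    hull = lower[:-1] + upper[:-1]
    hull.reverse()                      # clockwise in image coordinates
    start = hull.index(min(hull, key=lambda p: (p[0], -p[1])))
    hull = hull[start:] + hull[:start]  # start at smallest row, largest column
    return hull + hull[:1]              # close the polygon
```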
Parameter
GetRegionPoints returns the coordinates in the form of tuples. An empty region is passed as empty tuple.
Attention
Only one region may be passed.
Parameter
Result
The operator GetRegionPoints normally returns the value 2 (H_MSG_TRUE). If more than one connection
component is passed an exception handling is caused. The behavior in case of empty input (no input regions
available) is set via the operator SetSystem(’no_object_result’,<Result>).
Parallelization Information
GetRegionPoints is reentrant and processed without parallelization.
Possible Predecessors
SobelAmp, Threshold, Connection
Alternatives
GetRegionRuns
See also
CopyObj, GenRegionPoints
Module
Foundation
Possible Predecessors
SobelAmp, Threshold, Skeleton, EdgesImage
See also
CopyObj, GenRegionPolygon, DispPolygon, GetRegionChain, GetRegionContour,
SetLineApprox
Module
Foundation
12.2 Creation
static void HOperatorSet.GenCheckerRegion (
out HObject regionChecker, HTuple widthRegion, HTuple heightRegion,
HTuple widthPattern, HTuple heightPattern )
gen_checker_region(Checker,512,512,32,64)
set_draw(WindowHandle,’fill’)
set_part(WindowHandle,0,0,511,511)
disp_region(Checker,WindowHandle)
Complexity
The required storage (in bytes) for the region is:
O((widthRegion ∗ heightRegion)/widthPattern)
Result
The operator GenCheckerRegion returns the value 2 (H_MSG_TRUE) if the parameter values are correct.
Otherwise an exception handling is raised. The clipping according to the current image format is set via the
operator SetSystem(’clip_region’,<’true’/’false’>).
Parallelization Information
GenCheckerRegion is reentrant and processed without parallelization.
Possible Successors
PaintRegion
Alternatives
GenGridRegion, GenRegionPolygonFilled, GenRegionPoints, GenRegionRuns,
GenRectangle1, ConcatObj, GenRandomRegion, GenRandomRegions
See also
HammingChangeRegion, ReduceDomain
Module
Foundation
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandle)
read_image(Image,’meer’)
gen_circle(Circle,300.0,200.0,150.5)
reduce_domain(Image,Circle,Mask)
disp_color(Mask,WindowHandle).
Complexity
Runtime complexity: O(radius ∗ 2)
Storage complexity (byte): O(radius ∗ 8)
Result
If the parameter values are correct, the operator GenCircle returns the value 2 (H_MSG_TRUE).
Otherwise an exception handling is raised. The clipping according to the current image format is set
via the operator SetSystem(’clip_region’,<’true’/’false’>). If an empty region is cre-
ated by clipping (the circle is completely outside of the image format) the operator SetSystem
(’store_empty_region’,<true/false>) determines whether the empty region is put out.
Parallelization Information
GenCircle is reentrant and processed without parallelization.
Possible Successors
PaintRegion, ReduceDomain
Alternatives
GenEllipse, GenRegionPolygonFilled, GenRegionPoints, GenRegionRuns, DrawCircle
See also
DispCircle, SetShape, SmallestCircle, ReduceDomain
Module
Foundation
Create an ellipse.
The operator GenEllipse generates one or more ellipses with the center (row, column), the orientation phi
and the half-radii radius1 and radius2. The angle is indicated in arc measure according to the x axis in
mathematically positive direction. More than one region can be created by passing tuples of parameter values.
The center must be located within the image coordinates. The coordinate system runs from (0,0) (upper left corner)
to (Width-1,Height-1). See GetSystem and ResetObjDb in this context. If the ellipse reaches beyond the
edge of the image it is clipped to the current image format according to the value of the system flag ’clip_region’ (
SetSystem).
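The angle convention (arc measure relative to the x/column axis, mathematically positive, i.e. counter-clockwise with the y axis pointing up while image rows grow downward) can be sketched as follows. This generates individual boundary points, not the filled region that GenEllipse produces:

```python
import math

def ellipse_point(row, col, phi, radius1, radius2, t):
    """Boundary point of the ellipse at parameter angle t (radians).
    phi is measured against the column (x) axis, mathematically positive;
    since image rows grow downward, the row offset is subtracted."""
    x = radius1 * math.cos(t)          # point in ellipse-local coordinates
    y = radius2 * math.sin(t)
    c, s = math.cos(phi), math.sin(phi)
    return (row - (x * s + y * c),     # y axis points up mathematically
            col + (x * c - y * s))
```

For phi = 0 and t = 0 this yields the rightmost point of the ellipse; increasing t moves the point counter-clockwise in the mathematical sense, i.e. upward on screen first.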
Parameter
. ellipse (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Created ellipse(s).
. row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.center.y(-array) ; HTuple (double / int / long)
Line index of center.
Default Value : 200.0
Suggested values : Row ∈ {0.0, 10.0, 20.0, 50.0, 100.0, 256.0, 300.0, 400.0}
Typical range of values : 1.0 ≤ Row ≤ 1024.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.center.x(-array) ; HTuple (double / int / long)
Column index of center.
Default Value : 200.0
Suggested values : Column ∈ {0.0, 10.0, 20.0, 50.0, 100.0, 256.0, 300.0, 400.0}
Typical range of values : 1.0 ≤ Column ≤ 1024.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.angle.rad(-array) ; HTuple (double / int / long)
Orientation of the longer radius (Radius1).
Default Value : 0.0
Suggested values : Phi ∈ {-1.178097, -0.785398, -0.392699, 0.0, 0.392699, 0.785398, 1.178097}
Typical range of values : -1.178097 ≤ Phi ≤ 1.178097 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
. radius1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.radius1(-array) ; HTuple (double / int / long)
Longer radius.
Default Value : 100.0
Suggested values : Radius1 ∈ {2.0, 5.0, 10.0, 20.0, 50.0, 100.0, 256.0, 300.0, 400.0}
Typical range of values : 1.0 ≤ Radius1 ≤ 1024.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
Restriction : Radius1 > 0
. radius2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.radius2(-array) ; HTuple (double / int / long)
Shorter radius.
Default Value : 60.0
Suggested values : Radius2 ∈ {1.0, 2.0, 4.0, 5.0, 10.0, 20.0, 50.0, 100.0, 256.0, 300.0, 400.0}
Typical range of values : 1.0 ≤ Radius2 ≤ 1024.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
Restriction : (Radius2 > 0) ∧ (Radius2 ≤ Radius1)
Example (Syntax: HDevelop)
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandle)
set_insert(WindowHandle,’xor’)
repeat()
get_mbutton(WindowHandle,Row,Column,Button)
gen_ellipse(Ellipse,Row,Column,Column / 300.0,
(Row mod 100)+1,(Column mod 50) + 1)
disp_region(Ellipse,WindowHandle)
clear_obj(Ellipse)
until(Button = 1).
Complexity
Runtime complexity: O(radius1 ∗ 2)
Storage complexity (byte): O(radius1 ∗ 8)
Result
If the parameter values are correct, the operator GenEllipse returns the value 2 (H_MSG_TRUE). Other-
wise an exception handling is raised. The clipping according to the current image format is set via the operator
SetSystem(’clip_region’,<’true’/’false’>).
Parallelization Information
GenEllipse is reentrant and processed without parallelization.
Possible Successors
PaintRegion, ReduceDomain
Alternatives
GenCircle, GenRegionPolygonFilled, DrawEllipse
See also
DispEllipse, SetShape, SmallestCircle, ReduceDomain
Module
Foundation
and columnSteps in the column direction. In the ’lines’ mode, rowSteps or columnSteps, respectively,
can be set to zero; in this case only columns or lines, respectively, are created.
Attention
If a very small pattern is chosen (rowSteps < 4 or columnSteps < 4) the created region requires much
storage.
In the ’points’ mode rowSteps and columnSteps must not be set to zero.
Parameter
. regionGrid (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; HRegion
Created lines/pixel region.
. rowSteps (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; HTuple (int / long / double)
Step width in line direction or zero.
Default Value : 10
Suggested values : RowSteps ∈ {0, 2, 3, 4, 5, 7, 10, 15, 20, 30, 50, 100}
Typical range of values : 0 ≤ RowSteps ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : (RowSteps > 1) ∨ (RowSteps = 0)
. columnSteps (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; HTuple (int / long / double)
Step width in column direction or zero.
Default Value : 10
Suggested values : ColumnSteps ∈ {0, 2, 3, 4, 5, 7, 10, 15, 20, 30, 50, 100}
Typical range of values : 0 ≤ ColumnSteps ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : (ColumnSteps > 1) ∨ (ColumnSteps = 0)
. type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Type of created pattern.
Default Value : "lines"
List of values : Type ∈ {"lines", "points"}
. width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; HTuple (int / long)
Maximum width of pattern.
Default Value : 512
Suggested values : Width ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Width ≤ 1024 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Width ≥ 1
. height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; HTuple (int / long)
Maximum height of pattern.
Default Value : 512
Suggested values : Height ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Height ≤ 1024 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Height ≥ 1
Example (Syntax: HDevelop)
read_image(Image,’fabrik’)
gen_grid_region(Raster,10,10,’lines’,512,512)
reduce_domain(Image,Raster,Mask)
sobel_amp(Mask,GridSobel,’sum_abs’,3)
disp_image(GridSobel,WindowHandle).
Complexity
The necessary storage (in bytes) for the region is:
O((ImageWidth/ColumnSteps) ∗ (ImageHeight/RowSteps))
Result
If the parameter values are correct the operator GenGridRegion returns the value 2 (H_MSG_TRUE). Other-
wise an exception handling is raised. The clipping according to the current image format is set via the operator
SetSystem(’clip_region’,<’true’/’false’>).
Parallelization Information
GenGridRegion is reentrant and processed without parallelization.
Possible Successors
ReduceDomain, PaintRegion
Alternatives
GenRegionLine, GenRegionPolygon, GenRegionPoints, GenRegionRuns
See also
GenCheckerRegion, ReduceDomain
Module
Foundation
Parallelization Information
GenRandomRegion is reentrant and processed without parallelization.
Possible Successors
PaintRegion, ReduceDomain
See also
GenCheckerRegion, HammingChangeRegion, AddNoiseDistribution, AddNoiseWhite,
ReduceDomain
Module
Foundation
exception handling is raised. The clipping according to the current image format is determined by the operator
SetSystem(’clip_region’,<’true’/’false’>).
Parallelization Information
GenRandomRegions is reentrant and processed without parallelization.
Possible Successors
PaintRegion
Module
Foundation
read_image(Image,’fabrik’)
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandle)
disp_image(Image,WindowHandle)
draw_rectangle1(WindowHandle,Row1,Column1,Row2,Column2)
gen_rectangle1(Rectangle,Row1,Column1,Row2,Column2)
reduce_domain(Image,Rectangle,Mask)
emphasize(Mask,Emphasize,9,9,1.0)
disp_image(Emphasize,WindowHandle).
Result
If the parameter values are correct, the operator GenRectangle1 returns the value 2 (H_MSG_TRUE). Otherwise an exception is raised. The clipping according to the current image format is set via the operator SetSystem(’clip_region’,<’true’/’false’>).
Parallelization Information
GenRectangle1 is reentrant and processed without parallelization.
Possible Successors
PaintRegion, ReduceDomain
Alternatives
GenRectangle2, GenRegionPolygon, FillUp, GenRegionRuns, GenRegionPoints,
GenRegionLine
See also
DrawRectangle1, ReduceDomain, SmallestRectangle1
Module
Foundation
Parameter
The indicated coordinates stand for two consecutive pixels in the tuple.
Parameter
. region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; HRegion
Created region.
. rows (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.y(-array) ; HTuple (int / long)
Lines of the pixels in the region.
Default Value : 100
Suggested values : Rows ∈ {0, 10, 30, 50, 100, 200, 300, 500}
Typical range of values : −∞ ≤ Rows ≤ ∞ (lin)
Minimum Increment : 1
Recommended Increment : 1
. columns (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.x(-array) ; HTuple (int / long)
Columns of the pixels in the region.
Default Value : 100
Suggested values : Columns ∈ {0, 10, 30, 50, 100, 200, 300, 500}
Typical range of values : −∞ ≤ Columns ≤ ∞ (lin)
Minimum Increment : 1
Recommended Increment : 1
Number of elements : Columns = Rows
Complexity
F shall be the number of pixels. If the pixels are sorted in ascending order, the runtime complexity is O(F); otherwise it is O(log(F) ∗ F).
Result
The operator GenRegionPoints returns the value 2 (H_MSG_TRUE) if the pixels are located within the image format. Otherwise an exception is raised. The clipping according to the current image format is set via the operator SetSystem(’clip_region’,<’true’/’false’>). If an empty region is created (by the clipping or by an empty input), the operator SetSystem(’store_empty_region’,<true/false>) determines whether the empty region or an empty object tuple is returned.
Parallelization Information
GenRegionPoints is reentrant and processed without parallelization.
Possible Predecessors
GetRegionPoints
Possible Successors
PaintRegion, ReduceDomain
Alternatives
GenRegionPolygon, GenRegionRuns, GenRegionLine
See also
ReduceDomain
Module
Foundation
/* Polygon-approximation*/
get_region_polygon(Region,7,Row,Column)
/* store it as a region */
gen_region_polygon(Pol,Row,Column)
/* fill up the hole */
fill_up(Pol,Filled).
Result
If the base points are correct, the operator GenRegionPolygon returns the value 2 (H_MSG_TRUE). Otherwise an exception is raised. The clipping according to the current image format is set via the operator SetSystem(’clip_region’,<’true’/’false’>). If an empty region is created (by the clipping or by an empty input), the operator SetSystem(’store_empty_region’,<true/false>) determines whether the empty region or an empty object tuple is returned.
Parallelization Information
GenRegionPolygon is reentrant and processed without parallelization.
Possible Predecessors
GetRegionPolygon, DrawPolygon
Alternatives
GenRegionPolygonFilled, GenRegionPoints, GenRegionRuns
See also
FillUp, ReduceDomain, GetRegionPolygon, DrawPolygon
Module
Foundation
/* Polygon approximation */
T_get_region_polygon(Region,7,&Row,&Column);
T_gen_region_polygon_filled(&Pol,Row,Column);
/* fill up with original gray value */
reduce_domain(Image,Pol,&New);
Result
If the base points are correct, the operator GenRegionPolygonFilled returns the value 2 (H_MSG_TRUE). Otherwise an exception is raised. The clipping according to the current image format is set via the operator SetSystem(’clip_region’,<’true’/’false’>). If an empty region is created (by the clipping or by an empty input), the operator SetSystem(’store_empty_region’,<true/false>) determines whether the empty region or an empty object tuple is returned.
Parallelization Information
GenRegionPolygonFilled is reentrant and processed without parallelization.
Possible Predecessors
GetRegionPolygon, DrawPolygon
Alternatives
GenRegionPolygon, GenRegionPoints, DrawPolygon
See also
GenRegionPolygon, ReduceDomain, GetRegionPolygon, GenRegionRuns
Module
Foundation
Parameter
. region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; HRegion
Created region.
. row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . chord.y(-array) ; HTuple (int / long)
Lines of the runs.
Default Value : 100
Suggested values : Row ∈ {0, 50, 100, 200, 300, 500}
Typical range of values : −∞ ≤ Row ≤ ∞ (lin)
Minimum Increment : 1
Recommended Increment : 10
. columnBegin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . chord.x1(-array) ; HTuple (int / long)
Columns of the starting points of the runs.
Default Value : 50
Suggested values : ColumnBegin ∈ {0, 50, 100, 200, 300, 500}
Typical range of values : −∞ ≤ ColumnBegin ≤ ∞ (lin)
Minimum Increment : 1
Recommended Increment : 10
Number of elements : ColumnBegin = Row
. columnEnd (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . chord.x2(-array) ; HTuple (int / long)
Columns of the ending points of the runs.
Default Value : 200
Suggested values : ColumnEnd ∈ {50, 100, 200, 300, 500}
Typical range of values : −∞ ≤ ColumnEnd ≤ ∞ (lin)
Minimum Increment : 1
Recommended Increment : 10
Number of elements : ColumnEnd = Row
Restriction : ColumnEnd ≥ ColumnBegin
Complexity
F shall be the number of pixels. If the pixels are sorted in ascending order, the runtime complexity is O(F); otherwise it is O(log(F) ∗ F).
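The runlength encoding used by GenRegionRuns — one chord per line segment, given by its row and its first and last column — can be illustrated outside HALCON. The following plain-Python sketch (an illustration, not a HALCON API) converts a binary mask into runs and back:

```python
# Illustrative sketch (plain Python, not HALCON code): encode a binary
# mask as runs (row, column_begin, column_end) and decode them again.

def mask_to_runs(mask):
    runs = []
    for r, line in enumerate(mask):
        c = 0
        while c < len(line):
            if line[c]:
                start = c
                while c < len(line) and line[c]:
                    c += 1
                runs.append((r, start, c - 1))  # inclusive end column
            else:
                c += 1
    return runs

def runs_to_mask(runs, height, width):
    mask = [[0] * width for _ in range(height)]
    for r, cb, ce in runs:
        for c in range(cb, ce + 1):
            mask[r][c] = 1
    return mask

mask = [[0, 1, 1, 0, 1],
        [1, 1, 0, 0, 0]]
runs = mask_to_runs(mask)
print(runs)  # [(0, 1, 2), (0, 4, 4), (1, 0, 1)]
```

Note how each run satisfies the restriction ColumnEnd ≥ ColumnBegin and how the number of rows, begin columns, and end columns is necessarily equal.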
Result
If the data is correct, the operator GenRegionRuns returns the value 2 (H_MSG_TRUE); otherwise an exception is raised. The clipping according to the current image format is set via the operator SetSystem(’clip_region’,<’true’/’false’>). If an empty region is created (by the clipping or by an empty input), the operator SetSystem(’store_empty_region’,<true/false>) determines whether the empty region or an empty object tuple is returned.
Parallelization Information
GenRegionRuns is reentrant and processed without parallelization.
Possible Predecessors
GetRegionRuns
Alternatives
GenRegionPoints, GenRegionPolygon, GenRegionLine, GenRegionPolygonFilled
See also
ReduceDomain
Module
Foundation
HRegion HImage.LabelToRegion ( )
Extract regions with equal gray values from an image.
LabelToRegion segments an image into regions of equal gray value. One output region is generated for each gray value occurring in the image. This is similar to calling Threshold multiple times and accumulating the results with ConcatObj. Another related operator is Regiongrowing. However, LabelToRegion does not perform a Connection operation on the resulting regions, i.e., they may be disconnected. A typical application of LabelToRegion is the segmentation of label images, hence its name.
The number of output regions is limited by the system parameter ’max_outp_obj_par’, which can be read via GetSystem(::’max_outp_obj_par’:<Number>).
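As an illustration of the segmentation behavior described above — one, possibly disconnected, region per gray value — the following plain-Python sketch groups pixel coordinates by gray value. It is not the HALCON implementation, merely a model of the result:

```python
from collections import defaultdict

# Illustrative sketch (not the HALCON implementation): one "region"
# (a set of (row, col) pixel coordinates) per gray value occurring in
# the image. Regions may be disconnected, as with LabelToRegion.

def label_to_region(image):
    regions = defaultdict(set)
    for r, line in enumerate(image):
        for c, gray in enumerate(line):
            regions[gray].add((r, c))
    return dict(regions)

image = [[0, 0, 1],
         [2, 1, 1]]
regions = label_to_region(image)
print(sorted(regions))     # [0, 1, 2]
print(sorted(regions[1]))  # [(0, 2), (1, 1), (1, 2)]
```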
Attention
LabelToRegion is not implemented for images of type ’real’. The input images must not contain negative gray
values.
Parameter
12.3 Features
static void HOperatorSet.AreaCenter ( HObject regions,
out HTuple area, out HTuple row, out HTuple column )
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
int main()
{
Tuple area, row, column;
img.Display (w);
w.Click ();
reg.Display (w);
w.Click ();
cout << "Total number of regions: " << reg.Num () << endl;
return(0);
}
Complexity
If F is the area of a region, the mean runtime complexity is O(√F).
Result
The operator AreaCenter returns the value 2 (H_MSG_TRUE) if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator SetSystem
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via SetSystem(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
AreaCenter is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Regiongrowing, Connection
See also
SelectShape
Module
Foundation
HTuple HRegion.Circularity ( )
Shape factor for the circularity (similarity to a circle) of a region.
The operator Circularity calculates the similarity of the input region with a circle.
Calculation: If F is the area of the region and max is the maximum distance from the center to all contour pixels,
the shape factor C is defined as:
C = F / (max² ∗ π)
The shape factor C of a circle is 1. If the region is long or has holes C is smaller than 1. The operator
Circularity especially responds to large bulges, holes and unconnected regions.
In case of an empty region the operator Circularity returns the value 0 (if no other behavior was set (see
SetSystem)). If more than one region is passed the numerical values of the shape factor are stored in a tuple, the
position of a value in the tuple corresponding to the position of the region in the input tuple.
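The shape factor above can be reproduced outside HALCON. The following Python sketch (an illustration, not the HALCON operator) applies C = F / (max² ∗ π) to a rasterized disk; regions are modeled as sets of (row, col) pixels:

```python
import math

# Illustrative sketch of the circularity formula C = F / (max^2 * pi):
# F is the region area, max the largest distance from the center of
# gravity to any region pixel. Plain Python, not a HALCON call.

def circularity(region):
    f = len(region)
    r0 = sum(r for r, _ in region) / f
    c0 = sum(c for _, c in region) / f
    max_dist = max(math.hypot(r - r0, c - c0) for r, c in region)
    return f / (max_dist ** 2 * math.pi)

# A rasterized disk of radius 20: the result lies close to the ideal 1.
disk = {(r, c) for r in range(-20, 21) for c in range(-20, 21)
        if r * r + c * c <= 20 * 20}
print(round(circularity(disk), 2))
```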
Parameter
fwrite_string(FileId,[’rectangle: ’,M[2]])
fnew_line(FileId)
fwrite_string(FileId,[’ellipse: ’,M[3]])
fnew_line(FileId)
fwrite_string(FileId,[’circle: ’,M[4]])
fnew_line(FileId)
Result
The operator Circularity returns the value 2 (H_MSG_TRUE) if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator SetSystem
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via SetSystem(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
Circularity is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Regiongrowing, Connection
Alternatives
Roundness, Compactness, Convexity, Eccentricity
See also
AreaCenter, SelectShape
Module
Foundation
HTuple HRegion.Compactness ( )
Shape factor for the compactness of a region.
The operator Compactness calculates the compactness of the input regions.
Calculation: If L is the length of the contour (see Contlength) and F the area of the region the shape factor
C is defined as:
C = L² / (4 ∗ F ∗ π)
The shape factor C of a circle is 1. If the region is long or has holes C is larger than 1. The operator
Compactness responds to the course of the contour (roughness) and to holes. In case of an empty region
the operator Compactness returns the value 0 if no other behavior was set (see SetSystem). If more than
one region is passed the numerical values of the shape factor are stored in a tuple, the position of a value in the
tuple corresponding to the position of the region in the input tuple.
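The behavior of this shape factor can be checked with ideal values for the contour length L and the area F. The following Python sketch (an illustration only, not extracted from a pixel region) confirms that a circle yields 1 and a square yields 4/π ≈ 1.273:

```python
import math

# Illustrative sketch of the compactness formula C = L^2 / (4 * F * pi)
# with ideal contour length L and area F (not a HALCON call).

def compactness(contour_length, area):
    return contour_length ** 2 / (4.0 * area * math.pi)

r = 10.0
print(round(compactness(2 * math.pi * r, math.pi * r * r), 3))  # circle: 1.0
a = 10.0
print(round(compactness(4 * a, a * a), 3))  # square: 1.273
```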
Parameter
. regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Region(s) to be examined.
. compactness (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Compactness of the input region(s).
Assertion : (Compactness ≥ 1.0) ∨ (Compactness = 0)
Result
The operator Compactness returns the value 2 (H_MSG_TRUE) if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator SetSystem
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via SetSystem(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
Compactness is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Regiongrowing, Connection
Alternatives
Circularity, Convexity, Eccentricity
See also
Contlength, AreaCenter, SelectShape
Module
Foundation
HTuple HRegion.Contlength ( )
Contour length of a region.
The operator Contlength calculates the total length of the contour (sum of all connection components of
the region) for each region of regions. The distance between two neighboring contour points parallel to the coordinate axes is rated 1, the distance in the diagonal is rated √2. If more than one region is passed, the numerical
values of the contour length are stored in a tuple, the position of a value in the tuple corresponding to the position
of the region in the input tuple. In case of an empty region the operator Contlength returns the value 0.
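The step weighting described above can be sketched for an ordered, closed contour. The following Python code is an illustration (not the HALCON operator); contour points are (row, column) tuples of consecutive 8-connected pixels:

```python
import math

# Illustrative sketch of the contour-length metric: consecutive
# 8-connected contour points contribute 1 for axis-parallel steps and
# sqrt(2) for diagonal steps. `contour` is an ordered, closed point list.

def contour_length(contour):
    total = 0.0
    n = len(contour)
    for i in range(n):
        r1, c1 = contour[i]
        r2, c2 = contour[(i + 1) % n]  # wrap around to close the contour
        total += math.sqrt(2) if (r1 != r2 and c1 != c2) else 1.0
    return total

# A small diamond: four diagonal steps of length sqrt(2).
diamond = [(0, 1), (1, 2), (2, 1), (1, 0)]
print(round(contour_length(diamond), 4))  # 5.6569
```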
Attention
The contour of holes is not calculated.
Parameter
. regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Region(s) to be examined.
. contLength (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Contour length of the input region(s).
Assertion : ContLength ≥ 0
Example (Syntax: C++)
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
HWindow w;
HRegionArray reg;
cout << "Draw " << NumOfElements << " regions " << endl;
w.Click ();
return(0);
}
Result
The operator Contlength returns the value 2 (H_MSG_TRUE) if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
Contlength is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Regiongrowing, Connection
Possible Successors
GetRegionContour
Alternatives
Compactness
See also
AreaCenter, GetRegionContour
Module
Foundation
HTuple HRegion.Convexity ( )
Shape factor for the convexity of a region.
The operator Convexity calculates the convexity of each input region of regions.
Calculation: If Fc is the area of the convex hull and Fo the original area of the region the shape factor C is defined
as:
C = Fo / Fc
The shape factor C is 1 if the region is convex (e.g., rectangle, circle etc.). If there are indentations or holes C is
smaller than 1.
In case of an empty region the operator Convexity returns the value 0 (if no other behavior was set; see SetSystem). If more than one region is passed, the numerical values of the shape factor are stored in a tuple, the position of a value in the tuple corresponding to the position of the region in the input tuple.
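Given the two areas, the shape factor is a simple quotient. The following Python sketch (illustration only; computing the convex hull itself is omitted, the areas are taken as given) shows the formula:

```python
# Illustrative sketch of the convexity formula C = Fo / Fc: the original
# region area over the area of its convex hull. Not a HALCON call.

def convexity(area_original, area_convex_hull):
    return area_original / area_convex_hull

# A convex region fills its hull completely:
print(convexity(400.0, 400.0))  # 1.0
# A region covering only 300 of the 400 pixels of its convex hull:
print(convexity(300.0, 400.0))  # 0.75
```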
Parameter
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see SetSystem).
Attention
It should be noted that, like for all region-moments-based operators, the region’s pixels are regarded as math-
ematical, infinitely small points that are represented by the center of the pixels (see the documentation of
EllipticAxis). This can lead to non-empty regions that have rb = 0. In these cases, the output features
that require a division by rb are set to 0. In particular, regions that contain a single point or regions whose points
lie exactly on a straight line (e.g., one pixel high horizontal regions or one pixel wide vertical regions) have an
anisometry of 0.
Parameter
. regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Region(s) to be examined.
. anisometry (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Shape feature (in case of a circle = 1.0).
Assertion : Anisometry ≥ 1.0
. bulkiness (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Calculated shape feature.
. structureFactor (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Calculated shape feature.
Complexity
If F is the area of the region, the mean runtime complexity is O(√F).
Result
The operator Eccentricity returns the value 2 (H_MSG_TRUE) if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator SetSystem
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via SetSystem(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
Eccentricity is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Regiongrowing, Connection
See also
EllipticAxis, MomentsRegion2nd, SelectShape, AreaCenter
Module
Foundation
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see SetSystem
(’no_object_result’,<Result>)).
Attention
It should be noted that, like for all region-moments-based operators, the region’s pixels are regarded as mathemat-
ical, infinitely small points that are represented by the center of the pixels. This means that ra and rb can assume
the value 0. In particular, for an empty region and a region containing a single point ra = rb = 0 is returned.
Furthermore, for regions whose points lie exactly on a straight line (e.g., one pixel high horizontal regions or one
pixel wide vertical regions), rb = 0 is returned.
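Under the assumption that ra and rb are the semi-axes of the ellipse with the same area-normalized second moments as the region (i.e., derived from the eigenvalues of the second-moment matrix) — an assumption for illustration, not HALCON's published code, and the sign convention of phi may differ — the computation can be sketched in Python:

```python
import math

# Illustrative sketch (assumed formulas, not HALCON's code): semi-axes
# ra >= rb of the ellipse sharing the region's area-normalized second
# moments, from the eigenvalues of the moment matrix. Regions are sets
# of (row, col) pixels; the sign convention of phi may differ.

def elliptic_axis(region):
    f = len(region)
    r0 = sum(r for r, _ in region) / f
    c0 = sum(c for _, c in region) / f
    m20 = sum((r - r0) ** 2 for r, _ in region) / f
    m02 = sum((c - c0) ** 2 for _, c in region) / f
    m11 = sum((r - r0) * (c - c0) for r, c in region) / f
    d = math.sqrt((m20 - m02) ** 2 + 4.0 * m11 ** 2)
    ra = math.sqrt(2.0 * (m20 + m02 + d))
    rb = math.sqrt(2.0 * max(0.0, m20 + m02 - d))
    phi = -0.5 * math.atan2(2.0 * m11, m02 - m20)
    return ra, rb, phi

# A one-pixel-high horizontal region: rb = 0, as noted above.
line = {(0, c) for c in range(11)}
ra, rb, phi = elliptic_axis(line)
print(round(ra, 2), round(rb, 2))  # 6.32 0.0
```

The horizontal one-pixel line reproduces the degenerate case discussed in the Attention paragraph: its points lie exactly on a straight line, so rb = 0.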
Parameter
. regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Region(s) to be examined.
. ra (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Main radius (normalized to the area).
Assertion : Ra ≥ 0.0
. rb (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Secondary radius (normalized to the area).
Assertion : (Rb ≥ 0.0) ∧ (Rb ≤ Ra)
. phi (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Angle between main radius and x axis (arc measure).
Assertion : ((−pi/2) < Phi) ∧ (Phi ≤ (pi/2))
Example (Syntax: HDevelop)
read_image(Image,’fabrik’)
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandle)
regiongrowing(Image,Seg,5,5,6,100)
elliptic_axis(Seg,Ra,Rb,Phi)
area_center(Seg,_,Row,Column)
gen_ellipse(Ellipses,Row,Column,Phi,Ra,Rb)
set_draw(WindowHandle,’margin’)
disp_region(Ellipses,WindowHandle)
Complexity
If F is the area of a region, the mean runtime complexity is O(√F).
Result
The operator EllipticAxis returns the value 2 (H_MSG_TRUE) if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator SetSystem
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via SetSystem(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
EllipticAxis is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Regiongrowing, Connection
Possible Successors
GenEllipse
Alternatives
SmallestRectangle2, OrientationRegion
See also
MomentsRegion2nd, SelectShape, SetShape
References
R. Haralick, L. Shapiro “Computer and Robot Vision” Addison-Wesley, 1992, pp. 73-75
Module
Foundation
HTuple HRegion.EulerNumber ( )
Calculate the Euler number.
The operator EulerNumber calculates the Euler number, i.e., the difference between the number of connection
components and the number of holes.
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
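The definition — components minus holes — can be reproduced with a small flood-fill sketch. This is an illustration in plain Python, not the HALCON implementation; it counts 8-connected foreground components and treats a hole as a 4-connected background component that does not touch the image border:

```python
# Illustrative sketch (not HALCON's implementation) of the Euler number:
# number of 8-connected foreground components minus number of holes,
# where a hole is a 4-connected background component away from the border.

def components(cells, neighbors):
    todo, count = set(cells), 0
    while todo:
        count += 1
        stack = [todo.pop()]
        while stack:
            p = stack.pop()
            for q in neighbors(p):
                if q in todo:
                    todo.remove(q)
                    stack.append(q)
    return count

def euler_number(mask):
    h, w = len(mask), len(mask[0])
    fg = {(r, c) for r in range(h) for c in range(w) if mask[r][c]}
    # pad the background by one ring so the outside forms one component
    bg = {(r, c) for r in range(-1, h + 1) for c in range(-1, w + 1)
          if (r, c) not in fg}
    n8 = lambda p: [(p[0] + dr, p[1] + dc)
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0)]
    n4 = lambda p: [(p[0] - 1, p[1]), (p[0] + 1, p[1]),
                    (p[0], p[1] - 1), (p[0], p[1] + 1)]
    holes = components(bg, n4) - 1  # minus the outside component
    return components(fg, n8) - holes

ring = [[1, 1, 1],
        [1, 0, 1],
        [1, 1, 1]]
print(euler_number(ring))  # one component, one hole -> 0
```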
Parameter
• regions1 is empty:
In this case all regions in regions2 are permutatively checked for neighborhood.
• regions1 consists of one region:
The regions of regions1 are compared to all regions in regions2.
• regions1 consists of the same number of regions as regions2:
Here all regions at the n-th position in regions1 and regions2 are checked for the neighboring relation.
The operator FindNeighbors uses the chessboard distance between neighboring regions. It can be specified
by the parameter maxDistance. Neighboring regions are located at the n-th position in regionIndex1 and
regionIndex2, i.e., the region with index regionIndex1[n] from regions1 is the neighbor of the region
with index regionIndex2[n] from regions2.
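The chessboard distance mentioned above is the Chebyshev metric, taken over all pixel pairs of the two regions. A brute-force Python illustration (not the HALCON implementation), with regions as sets of (row, col) tuples:

```python
# Illustrative sketch of the chessboard (Chebyshev) distance used by
# FindNeighbors: the minimum over all pixel pairs of max(|dr|, |dc|).
# Two regions count as neighbors if this value does not exceed
# MaxDistance. Brute force, for illustration only.

def chessboard_distance(region1, region2):
    return min(max(abs(r1 - r2), abs(c1 - c2))
               for r1, c1 in region1 for r2, c2 in region2)

a = {(0, 0), (0, 1)}
b = {(3, 4)}
print(chessboard_distance(a, b))  # 3
```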
Attention
Covered regions are not found!
Parameter
The returned indices can be used, e.g., in SelectObj to select the regions containing the test pixel.
Attention
If the regions overlap more than one region might contain the pixel. In this case all these regions are returned. If
no region contains the indicated pixel the empty tuple (= no region) is returned.
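The lookup can be illustrated with regions represented as pixel sets. The following Python sketch (not the HALCON operator) returns the indices of all regions containing the test pixel; overlapping regions yield several indices, a miss yields an empty list:

```python
# Illustrative sketch of the lookup performed by GetRegionIndex:
# return the indices of all regions (sets of (row, col) pixels)
# containing the test pixel. Not a HALCON call.

def get_region_index(regions, row, column):
    return [i for i, region in enumerate(regions)
            if (row, column) in region]

regions = [{(0, 0), (0, 1)}, {(0, 1), (1, 1)}, {(5, 5)}]
print(get_region_index(regions, 0, 1))  # [0, 1]
print(get_region_index(regions, 9, 9))  # []
```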
Parameter
. regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; HRegion
Regions to be examined.
. row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; HTuple (int / long)
Line index of the test pixel.
Default Value : 100
Typical range of values : −∞ ≤ Row ≤ ∞ (lin)
. column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; HTuple (int / long)
Column index of the test pixel.
Default Value : 100
Typical range of values : −∞ ≤ Column ≤ ∞ (lin)
. index (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; HTuple (int / long)
Index of the regions containing the test pixel.
Complexity
If F is the area of the region and N is the number of regions, the mean runtime complexity is O(ln(√F) ∗ N).
Result
The operator GetRegionIndex returns the value 2 (H_MSG_TRUE) if the parameters are correct.
The behavior in case of empty input (no input regions available) is set via the operator SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
GetRegionIndex is reentrant and processed without parallelization.
Possible Predecessors
Threshold, Regiongrowing, Connection
Alternatives
SelectRegionPoint
See also
GetMbutton, GetMposition, TestRegionPoint
Module
Foundation
distance between the intersections of the contour with the perpendicular to the main axis in the respective point which are the furthest apart. Additionally, the operator GetRegionThickness returns the histogram of the thicknesses of the region. The length of the histogram corresponds to the largest occurring thickness in the observed region.
Attention
Only one region may be passed. If the region has several connection components, only the first one is investigated.
All other components are ignored.
Parameter
. region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; HRegion
Region to be analysed.
. thickness (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; HTuple (int / long)
Thickness of the region along its main axis.
. histogramm (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; HTuple (int / long)
Histogram of the thickness of the region along its main axis.
Result
The operator GetRegionThickness returns the value 2 (H_MSG_TRUE) if exactly one region is passed.
The behavior in case of empty input (no input regions available) is set via the operator SetSystem
(’no_object_result’,<Result>).
Parallelization Information
GetRegionThickness is reentrant and processed without parallelization.
Possible Predecessors
SobelAmp, Threshold, Connection, SelectShape, SelectObj
See also
CopyObj, EllipticAxis
Module
Foundation
The parameter similarity describes the similarity between the two regions based on the hamming distance
distance:
similarity = 1 − distance / (|regions1| + |regions2|)
If both regions are empty similarity is set to 0. The regions with the same index from both input parameters
are always compared.
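With regions represented as pixel sets, the Hamming distance is the size of the symmetric difference. A Python illustration (not the HALCON operator) of the distance and the derived similarity:

```python
# Illustrative sketch of the Hamming distance between two pixel regions:
# the number of points contained in exactly one of the two regions
# (symmetric difference), plus the derived similarity. Not a HALCON call.

def hamming(region1, region2):
    distance = len(region1 ^ region2)
    total = len(region1) + len(region2)
    similarity = 0.0 if total == 0 else 1.0 - distance / total
    return distance, similarity

a = {(0, 0), (0, 1), (0, 2)}
b = {(0, 1), (0, 2), (0, 3)}
d, s = hamming(a, b)
print(d, round(s, 4))  # 2 0.6667
```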
Attention
In both input parameters the same number of regions must be passed.
Parameter
. regions1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions to be examined.
. regions2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Comparative regions.
. distance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; HTuple (int / long)
Hamming distance of two regions.
Assertion : Distance ≥ 0
. similarity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Similarity of two regions.
Assertion : (0 ≤ Similarity) ∧ (Similarity ≤ 1)
Complexity
If F is the area of a region, the mean runtime complexity is O(√F).
Result
The operator HammingDistance returns the value 2 (H_MSG_TRUE) if the number of objects in both parameters is the same and is not 0. The behavior in case of empty input (no input objects available) is set via the operator SetSystem(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set) is set via SetSystem(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
HammingDistance is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Regiongrowing, Connection
Alternatives
Intersection, Complement, AreaCenter
See also
HammingChangeRegion
Module
Foundation
The parameter similarity describes the similarity between the two regions based on the hamming distance
distance:
similarity = 1 − distance / (|N orm(regions1)| + |regions2|)
’center’: The region is moved so that both regions have the same center of gravity.
If both regions are empty similarity is set to 0. The regions with the same index from both input parameters
are always compared.
Attention
In both input parameters the same number of regions must be passed.
Parameter
The output of the procedure is chosen in such a way that it can be used as an input for the HALCON procedures
DispCircle, GenCircle, and GenEllipseContourXld.
If several regions are passed in regions corresponding tuples are returned as output parameters. In case of an
empty input region all parameters have the value 0.0 if no other behavior was set with SetSystem.
Attention
If several inner circles are present in a region, only the upper leftmost solution is returned.
Parameter
read_image(Image,’fabrik’)
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandle)
regiongrowing(Image,Seg,5,5,6,100)
select_shape(Seg,H,’area’,’and’,100,2000)
inner_circle(H,Row,Column,Radius)
gen_circle(Circles,Row,Column,Radius)
set_draw(WindowHandle,’margin’)
disp_region(Circles,WindowHandle)
Complexity
If F is the area of the region and R is the radius of the inner circle, the runtime complexity is O(√F ∗ R).
Result
The operator InnerCircle returns the value 2 (H_MSG_TRUE) if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator SetSystem
(’no_object_result’,<Result>), the behavior in case of empty region is set via SetSystem
(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
InnerCircle is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Regiongrowing, Connection, RunlengthFeatures
Possible Successors
GenCircle, DispCircle
Alternatives
ErosionCircle, InnerRectangle1
See also
SetShape, SelectShape, SmallestCircle
Module
Foundation
The operator MomentsRegion2nd calculates the moments (m20, m02) and the product of inertia of the axes
through the center parallel to the coordinate axes (m11). Furthermore the main axes of inertia (ia, ib) are
calculated.
Calculation: Z0 and S0 are the coordinates of the center of a region R with the area F . Then the moments Mij
are defined by:
Mij = Σ(Z,S)∈R (Z0 − Z)^i ∗ (S0 − S)^j
ib = h − √(h² − M20 ∗ M02 + M11²)
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see SetSystem).
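The central moments from the formula above can be reproduced directly. The following Python sketch (illustration only, omitting the derived axes of inertia) sums over the pixel coordinates of a region:

```python
# Illustrative sketch of the second-order central moments M20, M02 and
# the product of inertia M11, summed over the (row, col) pixel
# coordinates of a region. Plain Python, not the HALCON operator.

def moments_2nd(region):
    f = len(region)
    z0 = sum(z for z, _ in region) / f   # center row
    s0 = sum(s for _, s in region) / f   # center column
    m20 = sum((z0 - z) ** 2 for z, _ in region)
    m02 = sum((s0 - s) ** 2 for _, s in region)
    m11 = sum((z0 - z) * (s0 - s) for z, s in region)
    return m20, m02, m11

# A one-pixel-high horizontal run of 3 pixels: no row spread, no skew.
region = {(0, 0), (0, 1), (0, 2)}
print(moments_2nd(region))  # (0.0, 2.0, 0.0)
```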
Parameter
Calculation: Z0 and S0 are the coordinates of the center of a region R with the area F . Then the moments Mij
are defined by:
Mij = (1 / F²) ∗ Σ(Z,S)∈R (Z0 − Z)^i ∗ (S0 − S)^j
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see SetSystem).
Parameter
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see SetSystem).
Parameter
Calculation: x̄ and ȳ are the coordinates of the center of a region R with the area Z. Then the moments Mpq are defined by:
Mpq = Σi=1..Z (xi − x̄)^p ∗ (yi − ȳ)^q
wherein x̄ = m10/m00 and ȳ = m01/m00.
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see SetSystem).
Parameter
. regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions to be examined.
. m21 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Moment of 3rd order (line-dependent).
. m12 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Moment of 3rd order (column-dependent).
. m03 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Moment of 3rd order (column-dependent).
. m30 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Moment of 3rd order (line-dependent).
Complexity
If Z is the area of the region the mean runtime complexity is O(√Z).
Result
The operator MomentsRegion3rd returns the value 2 (H_MSG_TRUE) if the input is not empty. The behavior in case of empty input (no input regions available) is set via the operator SetSystem (’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set) is set via SetSystem(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
MomentsRegion3rd is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Regiongrowing, Connection
Alternatives
MomentsRegion2nd
See also
EllipticAxis
Module
Foundation
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see SetSystem).
Parameter
I4 = µ30² µ02³ − 6 µ30 µ21 µ11 µ02² + 6 µ30 µ12 µ02 (2 µ11² − µ20 µ02)
   + µ30 µ03 (6 µ20 µ11 µ02 − 8 µ11³) + 9 µ21² µ20 µ02² − 18 µ21 µ12 µ20 µ11 µ02
   + 6 µ21 µ03 µ20 (2 µ11² − µ20 µ02) + 9 µ12² µ20² µ02 − 6 µ12 µ03 µ11 µ20² + µ03² µ20³
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see SetSystem).
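The I4 formula above transcribes mechanically into code. The following sketch (Python, hypothetical names; mu is assumed to hold central moments keyed by (p, q), e.g. as computed for MomentsRegionCentral) is a term-by-term translation:

```python
# Direct transcription of the I4 invariant above. mu[(p, q)] holds the
# central moment of order (p, q); the helper m() only shortens access.

def invariant_i4(mu):
    m = lambda p, q: mu[(p, q)]
    return (m(3, 0) ** 2 * m(0, 2) ** 3
            - 6 * m(3, 0) * m(2, 1) * m(1, 1) * m(0, 2) ** 2
            + 6 * m(3, 0) * m(1, 2) * m(0, 2) * (2 * m(1, 1) ** 2 - m(2, 0) * m(0, 2))
            + m(3, 0) * m(0, 3) * (6 * m(2, 0) * m(1, 1) * m(0, 2) - 8 * m(1, 1) ** 3)
            + 9 * m(2, 1) ** 2 * m(2, 0) * m(0, 2) ** 2
            - 18 * m(2, 1) * m(1, 2) * m(2, 0) * m(1, 1) * m(0, 2)
            + 6 * m(2, 1) * m(0, 3) * m(2, 0) * (2 * m(1, 1) ** 2 - m(2, 0) * m(0, 2))
            + 9 * m(1, 2) ** 2 * m(2, 0) ** 2 * m(0, 2)
            - 6 * m(1, 2) * m(0, 3) * m(1, 1) * m(2, 0) ** 2
            + m(0, 3) ** 2 * m(2, 0) ** 3)
```

Note that every term contains at least one third-order moment, so I4 vanishes for regions whose third-order central moments are all zero.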
Parameter
. regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions to be examined.
. i1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Moment of 2nd order.
. i2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Moment of 2nd order.
. i3 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Moment of 2nd order.
. i4 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Moment of 3rd order.
Complexity
If Z is the area of the region the mean runtime complexity is O(√Z).
Result
The operator MomentsRegionCentral returns the value 2 (H_MSG_TRUE) if the input is not empty. The behavior in case of empty input (no input regions available) is set via the operator SetSystem (’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set) is set via SetSystem(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
MomentsRegionCentral is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Regiongrowing, Connection
Alternatives
MomentsRegion2nd
See also
EllipticAxis
Module
Foundation
ψ2 = I2 / µ^10
ψ3 = I3 / µ^7
ψ4 = I4 / µ^11
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see SetSystem).
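The scale normalization above is a few divisions; a minimal sketch (Python, hypothetical names; µ is assumed to be the zero-order central moment, i.e. the area, and ψ1 from the preceding page is omitted):

```python
# Sketch of the normalization above: each invariant I_k is divided by
# a power of mu. Exponents follow the formulas above; names are
# illustrative only, not the HALCON API.

def normalize_invariants(i2, i3, i4, mu):
    psi2 = i2 / mu ** 10
    psi3 = i3 / mu ** 7
    psi4 = i4 / mu ** 11
    return psi2, psi3, psi4
```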
Parameter
. regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions to be examined.
. PSI1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Moment of 2nd order.
. PSI2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Moment of 2nd order.
. PSI3 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Moment of 2nd order.
. PSI4 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Moment of 2nd order.
Complexity
If Z is the area of the region the mean runtime complexity is O(√Z).
Result
The operator MomentsRegionCentralInvar returns the value 2 (H_MSG_TRUE) if the input is not empty. The behavior in case of empty input (no input regions available) is set via the operator SetSystem (’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set) is set via SetSystem(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
MomentsRegionCentralInvar is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Regiongrowing, Connection
Alternatives
MomentsRegion2nd
See also
EllipticAxis
Module
Foundation
HTuple HRegion.OrientationRegion ( )
Orientation of a region.
The operator OrientationRegion calculates the orientation of the region. The operator is based on
EllipticAxis. In addition the point on the contour with maximal distance to the center of gravity is cal-
culated. If the column coordinate of this point is less than the column coordinate of the center of gravity the value
of π is added to the angle.
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see SetSystem
(’no_object_result’,<Result>)).
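The disambiguation rule described above can be sketched in a few lines (Python, not the HALCON implementation; function name, pixel-list representation, and the starting angle are assumptions):

```python
import math

# Sketch of the rule above: phi_elliptic is an orientation in
# (-pi/2, pi/2], as EllipticAxis would return it; pixels is a list of
# (row, column) coordinates. The region pixel farthest from the center
# of gravity necessarily lies on the contour.
def orientation_region(pixels, phi_elliptic):
    n = len(pixels)
    cr = sum(r for r, _ in pixels) / n       # center of gravity (row)
    cc = sum(c for _, c in pixels) / n       # center of gravity (column)
    far = max(pixels, key=lambda p: (p[0] - cr) ** 2 + (p[1] - cc) ** 2)
    phi = phi_elliptic + math.pi if far[1] < cc else phi_elliptic
    # wrap the result back into [-pi, pi), matching the documented assertion
    return math.atan2(math.sin(phi), math.cos(phi))
```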
Parameter
. regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Region(s) to be examined.
. phi (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Orientation of region (arc measure).
Assertion : (−pi ≤ Phi) ∧ (Phi < pi)
Complexity
If F is the area of a region the mean runtime complexity is O(√F).
Result
The operator OrientationRegion returns the value 2 (H_MSG_TRUE) if the input is not empty. The behavior in case of empty input (no input regions available) is set via the operator SetSystem (’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set) is set via SetSystem(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
OrientationRegion is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Regiongrowing, Connection
Possible Successors
DispArrow
Alternatives
EllipticAxis, SmallestRectangle2
See also
MomentsRegion2nd, LineOrientation
Module
Foundation
HTuple HRegion.Rectangularity ( )
Shape factor for the rectangularity of a region.
The operator Rectangularity calculates the rectangularity of the input regions.
To determine the rectangularity, first a rectangle is computed that has the same first and second order moments
as the input region. The computation of the rectangularity measure is finally based on the area of the difference
between the computed rectangle and the input region normalized with respect to the area of the rectangle.
For rectangles Rectangularity returns the value 1. The more the input region deviates from a perfect rectan-
gle, the less the returned value for rectangularity will be.
In case of an empty region the operator Rectangularity returns the value 0 if no other behavior was set (see SetSystem). If more than one region is passed the numerical values of the rectangularity are stored in a tuple, the position of a value in the tuple corresponding to the position of the region in the input tuple.
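Only the normalization step lends itself to a compact sketch; computing the moment-matched rectangle itself is omitted. The following Python fragment reads "difference" as the symmetric difference of the two pixel sets, which is one plausible interpretation of the description above (all names hypothetical):

```python
# Sketch of the normalization only: the difference area between the
# input region and the moment-matched rectangle (both given as sets of
# (row, column) pixels) is normalized by the rectangle area. For a
# perfect rectangle the difference is empty and the measure is 1.

def rectangularity(region, rectangle):
    diff = len(region ^ rectangle)       # symmetric difference area
    return max(0.0, 1.0 - diff / len(rectangle))
```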
Attention
For input regions whose orientation cannot be computed using second order moments (as is the case for square regions, for example), the returned rectangularity is underestimated by up to 10%, depending on the orientation of the input region.
Parameter
. regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Region(s) to be examined.
. rectangularity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Rectangularity of the input region(s).
Assertion : (0 ≤ Rectangularity) ∧ (Rectangularity ≤ 1.0)
Result
The operator Rectangularity returns the value 2 (H_MSG_TRUE) if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator SetSystem
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via SetSystem(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
Rectangularity is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Regiongrowing, Connection
Alternatives
Circularity, Compactness, Convexity, Eccentricity
See also
Contlength, AreaCenter, SelectShape
References
P. L. Rosin: “Measuring rectangularity”; Machine Vision and Applications; vol. 11; pp. 191-196; Springer-Verlag,
1999.
Module
Foundation
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see SetSystem).
Parameter
KFactor = numRuns / √Area
wherein Area indicates the area of the region. It should be noted that the K-factor can be smaller than 1.0 (in case
of long horizontal regions).
The L-factor (LFactor) indicates the mean number of runs for each line index occurring in the region.
meanLength indicates the mean length of the runs. The parameter bytes indicates how many bytes are neces-
sary for coding the region with runlengths.
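The statistics above can be illustrated on a binary mask (Python sketch, not HALCON code; function name and the row-list representation are assumptions):

```python
import math

# Sketch of the runlength statistics above: collect the horizontal runs
# of a binary mask (list of rows of 0/1), then derive the number of
# runs, the K-factor numRuns / sqrt(Area), and the mean run length.
def runlength_features(mask):
    runs = []
    for row in mask:
        in_run, length = False, 0
        for v in row + [0]:              # sentinel closes a trailing run
            if v:
                in_run, length = True, length + 1
            elif in_run:
                runs.append(length)
                in_run, length = False, 0
    area = sum(runs)
    num_runs = len(runs)
    k_factor = num_runs / math.sqrt(area)
    mean_length = area / num_runs
    return num_runs, k_factor, mean_length
```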
Attention
None of the features calculated by the operator RunlengthFeatures is rotation invariant, because the runlength coding depends on the direction. The operator RunlengthFeatures is not intended for calculating shape features but for controlling and analysing the efficiency of the runlength coding.
Parameter
Attention
If the regions overlap more than one region might contain the pixel. In this case all these regions are returned. If
no region contains the indicated pixel the empty tuple (= no region) is returned.
Parameter
read_image(Image,’fabrik’)
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandle)
disp_image(Image)
regiongrowing(Image,Seg,3,3,5,0)
set_color(WindowHandle,’red’)
set_draw(WindowHandle,’margin’)
Button := 1
while (Button = 1)
* FileId is assumed to refer to a text file opened beforehand (e.g. with open_file)
fwrite_string(FileId,’Select the region with the mouse (End right button)’)
fnew_line(FileId)
get_mbutton(WindowHandle,Row,Column,Button)
select_region_point(Seg,Single,Row,Column)
disp_region(Single,WindowHandle)
endwhile
Complexity
If F is the area of the region and N is the number of regions, the mean runtime complexity is O(ln(√F) · N).
Result
The operator SelectRegionPoint returns the value 2 (H_MSG_TRUE) if the parameters are correct. The behavior in case of empty input (no input regions available) is set via the operator SetSystem (’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
SelectRegionPoint is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Regiongrowing, Connection
Alternatives
TestRegionPoint
See also
GetMbutton, GetMposition
Module
Foundation
• regions1 is empty:
In this case all regions in regions2 are permutatively checked for neighborhood.
• regions1 consists of one region:
The regions of regions1 are compared to all regions in regions2.
• regions1 consists of the same number of regions as regions2:
The regions at the n-th position in regions1 and regions2 are each checked for a neighboring relation.
The operator SelectRegionSpatial calculates the centers of the regions to be compared and decides according to the angle between the straight line connecting the centers and the x axis whether the direction relation is fulfilled. The relation is fulfilled within the range of -45 degrees to +45 degrees around the coordinate axes. Thus, the direction
relation can be understood in such a way that the center of the second region must be located left (or right, above,
below) of the center of the first region. The indices of the regions fulfilling the direction relation are located at the
n-th position in regionIndex1 and regionIndex2, i.e., the region with the index regionIndex2[n] has
the indicated relation with the region with the index regionIndex1[n]. Access to regions via the index can be
obtained via the operator CopyObj.
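The ±45 degree decision rule described above reduces to comparing the row and column differences of the two centers. A minimal sketch (Python, not the HALCON implementation; function name and return values are illustrative):

```python
# Classify the direction from the center of region 1 to the center of
# region 2. Centers are (row, column) pairs; rows grow downwards.
# Within +-45 degrees of the horizontal axis the column difference
# dominates, otherwise the row difference does.
def direction_relation(center1, center2):
    dr = center2[0] - center1[0]
    dc = center2[1] - center1[1]
    if abs(dc) >= abs(dr):               # within +-45 deg of the x axis
        return "right" if dc > 0 else "left"
    return "below" if dr > 0 else "above"
```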
Parameter
. regions1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Starting regions
. regions2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Comparative regions
. direction (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Desired neighboring relation.
Default Value : "left"
List of values : Direction ∈ {"left", "right", "above", "below"}
. regionIndex1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; HTuple (int / long)
Indices in the input tuples (regions1 or regions2), respectively.
. regionIndex2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; HTuple (int / long)
Indices in the input tuples (regions1 or regions2), respectively.
Result
The operator SelectRegionSpatial returns the value 2 (H_MSG_TRUE) if regions2 is not empty. The
behavior in case of empty parameter regions2 (no input regions available) is set via the operator SetSystem
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set) is set via SetSystem(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
SelectRegionSpatial is reentrant and processed without parallelization.
Possible Predecessors
Threshold, Regiongrowing, Connection
Alternatives
AreaCenter, Intersection
See also
SpatialRelation, FindNeighbors, CopyObj, ObjToInteger
Module
Foundation
If only one feature (features) is used the value of operation is meaningless. Several features are processed
in the sequence in which they are entered.
Parameter
Result
The operator SelectShape returns the value 2 (H_MSG_TRUE) if the input is not empty. The
behavior in case of empty input (no input objects available) is set via the operator SetSystem
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set) is set via SetSystem(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
SelectShape is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Regiongrowing, Connection, RunlengthFeatures
Possible Successors
SelectShape, SelectGray, ShapeTrans, ReduceDomain, CountObj
Alternatives
SelectShapeStd
See also
AreaCenter, Circularity, Compactness, Contlength, Convexity, Rectangularity,
EllipticAxis, Eccentricity, InnerCircle, SmallestCircle, SmallestRectangle1,
SmallestRectangle2, InnerRectangle1, Roundness, ConnectAndHoles,
DiameterRegion, OrientationRegion, MomentsRegion2nd, MomentsRegion2ndInvar,
MomentsRegion2ndRelInvar, MomentsRegion3rd, MomentsRegion3rdInvar,
MomentsRegionCentral, MomentsRegionCentralInvar, SelectObj
Module
Foundation
’distance_dilate’ The minimum distance in the maximum norm from the edge of pattern to the edge of every
region from regions is determined (see DistanceRrMinDil).
’distance_contour’ The minimum Euclidean distance from the edge of pattern to the edge of every region
from regions is determined. (see DistanceRrMin).
’distance_center’ The Euclidean distance from the center of pattern to the center of every region from
regions is determined.
’covers’ It is examined how well the region pattern fits into the regions from regions. If there is no shift
so that pattern is a subset of regions the overlap is 0. If pattern corresponds to the region after a
corresponding shift the overlap is 100. Otherwise the area of the opening of regions with pattern is put
into relation with the area of regions (in percent).
’fits’ It is examined whether pattern can be shifted in such a way that it fits in regions. If this is possible the
corresponding region is copied from regions. The parameters min and max are ignored.
’overlaps_abs’ The area of the intersection of pattern and every region in regions is computed.
’overlaps_rel’ The area of the intersection of pattern and every region in regions is computed. The relative overlap is the ratio of the area of the intersection and the area of the respective region in regions (in percent).
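The two overlap modes reduce to set arithmetic on pixel coordinates. A minimal sketch (Python, illustrative names, not the HALCON API; regions are represented as sets of (row, column) pixels):

```python
# Sketch of 'overlaps_abs' and 'overlaps_rel': the absolute overlap is
# the intersection area, the relative overlap is that area divided by
# the area of the respective region, in percent.
def overlaps(pattern, region):
    inter = len(pattern & region)          # 'overlaps_abs'
    rel = 100.0 * inter / len(region)      # 'overlaps_rel'
    return inter, rel
```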
Parameter
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
{
cout << "Usage: " << argv[0] << " <radius of circle>" << endl;
exit (1);
}
img.Display (w);
w.SetColor ("red");
seg.Display (w);
w.Click ();
return(0);
}
Result
The operator SelectShapeProto returns the value 2 (H_MSG_TRUE) if the input is not empty.
The behavior in case of empty input (no input regions available) is set via the operator SetSystem
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set) is set via SetSystem(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
SelectShapeProto is reentrant and processed without parallelization.
Possible Predecessors
Connection, DrawRegion, GenCircle, GenRectangle1, GenRectangle2, GenEllipse
Possible Successors
SelectGray, ShapeTrans, ReduceDomain, CountObj
Alternatives
SelectShape
See also
Opening, Erosion1, DistanceRrMinDil, DistanceRrMin
Module
Foundation
Parameter
. regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Input regions to be selected.
. selectedRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions with desired shape.
. shape (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Shape features to be checked.
Default Value : "max_area"
List of values : Shape ∈ {"max_area", "rectangle1", "rectangle2"}
. percent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Similarity measure.
Default Value : 70.0
Suggested values : Percent ∈ {10.0, 30.0, 50.0, 60.0, 70.0, 80.0, 90.0, 95.0, 100.0}
Typical range of values : 0.0 ≤ Percent ≤ 100.0 (lin)
Minimum Increment : 0.1
Recommended Increment : 10.0
Parallelization Information
SelectShapeStd is reentrant and processed without parallelization.
Possible Predecessors
Threshold, Regiongrowing, Connection, SmallestRectangle1, SmallestRectangle2
Alternatives
Intersection, Complement, AreaCenter, SelectShape
See also
SmallestRectangle1, SmallestRectangle2, Rectangularity
Module
Foundation
Parameter
read_image(Image,’fabrik’)
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandle)
regiongrowing(Image,Seg,5,5,6,100)
select_shape(Seg,H,’area’,’and’,100,2000)
smallest_circle(H,Row,Column,Radius)
gen_circle(Circles,Row,Column,Radius)
set_draw(WindowHandle,’margin’)
disp_region(Circles,WindowHandle)
Complexity
If F is the area of the region, then the mean runtime complexity is O(√F).
Result
The operator SmallestCircle returns the value 2 (H_MSG_TRUE) if the input is not empty. The behavior in case of empty input (no input regions available) is set via the operator SetSystem (’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set) is set via SetSystem(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
SmallestCircle is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Regiongrowing, Connection, RunlengthFeatures
Possible Successors
GenCircle, DispCircle
Alternatives
EllipticAxis, SmallestRectangle1, SmallestRectangle2
See also
SetShape, SelectShape, InnerCircle
Module
Foundation
The operator SmallestRectangle1 calculates the surrounding rectangle of all input regions (parallel to the coordinate axes). The surrounding rectangle is described by the coordinates of the corner pixels (row1, column1, row2, column2).
If more than one region is passed in regions, the results are stored in tuples, the index of a value in the tuple
corresponding to the index of a region in the input. In case of empty region all parameters have the value 0 if no
other behavior was set (see SetSystem).
Attention
In case of empty region the result of row1,column1, row2 and column2 (all are 0) can lead to confusion.
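The axis-parallel bounding box reduces to minima and maxima over the pixel coordinates, including the documented all-zero result for an empty region. A minimal sketch (Python, hypothetical names, not the HALCON implementation):

```python
# Sketch of SmallestRectangle1 on a pixel list: the corner coordinates
# are the extreme row and column values; an empty region yields all 0.
def smallest_rectangle1(pixels):
    if not pixels:                 # empty region: all results are 0
        return 0, 0, 0, 0
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    return min(rows), min(cols), max(rows), max(cols)
```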
Parameter
. regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions to be examined.
. row1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y(-array) ; HTuple (int / long)
Line index of upper left corner point.
. column1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x(-array) ; HTuple (int / long)
Column index of upper left corner point.
. row2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.y(-array) ; HTuple (int / long)
Line index of lower right corner point.
. column2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.x(-array) ; HTuple (int / long)
Column index of lower right corner point.
Complexity
If F is the area of the region the mean runtime complexity is O(√F).
Result
The operator SmallestRectangle1 returns the value 2 (H_MSG_TRUE) if the input is not empty. The behavior in case of empty input (no input regions available) is set via the operator SetSystem (’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set) is set via SetSystem(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
SmallestRectangle1 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Regiongrowing, Connection, RunlengthFeatures
Possible Successors
DispRectangle1, GenRectangle1
Alternatives
SmallestRectangle2, AreaCenter
See also
SelectShape
Module
Foundation
The procedure is applied when, for example, the location of a scenery of several regions (e.g., printed text on a rect-
angular paper or in rectangular print (justified lines)) must be found. The parameters of SmallestRectangle2
are chosen in such a way that they can be used directly as input for the HALCON-procedures DispRectangle2
and GenRectangle2.
If more than one region is passed in regions the results are stored in tuples, the index of a value in the tuple
corresponding to the index of a region in the input. In case of empty region all parameters have the value 0.0 if no
other behavior was set (see SetSystem).
Parameter
. regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions to be examined.
. row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.y(-array) ; HTuple (double)
Line index of the center.
. column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.x(-array) ; HTuple (double)
Column index of the center.
. phi (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.angle.rad(-array) ; HTuple (double)
Orientation of the surrounding rectangle (arc measure)
Assertion : ((−pi/2) < Phi) ∧ (Phi ≤ (pi/2))
. length1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hwidth(-array) ; HTuple (double)
First radius (half length) of the surrounding rectangle.
Assertion : Length1 ≥ 0.0
. length2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hheight(-array) ; HTuple (double)
Second radius (half width) of the surrounding rectangle.
Assertion : (Length2 ≥ 0.0) ∧ (Length2 ≤ Length1)
Example (Syntax: HDevelop)
read_image(Image,’fabrik’)
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandle)
regiongrowing(Image,Seg,5,5,6,100)
smallest_rectangle2(Seg,Row,Column,Phi,Length1,Length2)
gen_rectangle2(Rectangle,Row,Column,Phi,Length1,Length2)
set_draw(WindowHandle,’margin’)
disp_region(Rectangle,WindowHandle)
Complexity
If F is the area of the region and N is the number of supporting points of the convex hull, the runtime complexity is O(√F + N²).
Result
The operator SmallestRectangle2 returns the value 2 (H_MSG_TRUE) if the input is not empty. The behavior in case of empty input (no input regions available) is set via the operator SetSystem (’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set) is set via SetSystem(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
SmallestRectangle2 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Regiongrowing, Connection, RunlengthFeatures
Possible Successors
DispRectangle2, GenRectangle2
Alternatives
EllipticAxis, SmallestRectangle1
See also
SmallestCircle, SetShape
Module
Foundation
• regions1 is empty:
In this case all regions in regions2 are permutatively checked for neighborhood.
• regions1 consists of one region:
The regions of regions1 are compared to all regions in regions2.
• regions1 consists of the same number of regions as regions2:
regions1 and regions2 are checked for a neighboring relation.
The percentage percent is interpreted in such a way that the area of the second region has to be located really
left/right or above/below the region margins of the first region by at least percent percent. The indices of
the regions that fulfill at least one of these conditions are then located at the n-th position in the output parame-
ters regionIndex1 and regionIndex2. Additionally the output parameters relation1 and relation2
contain at the n-th position the type of relation of the region pair (regionIndex1[n], regionIndex2[n]),
i.e., region with index regionIndex2[n] has the relation relation1[n] and relation2[n] with region with
index regionIndex1[n].
Possible values for relation1 and relation2 are:
In regionIndex1 and regionIndex2 the indices of the regions in the tuples of the input regions (regions1
or regions2), respectively, are entered as image identifiers. Access to chosen regions via the index can be
obtained by the operator CopyObj.
Parameter
. regions1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Starting regions.
. regions2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Comparative regions.
. percent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Percentage of the area of the comparative region which must be located left/right or above/below the region
margins of the starting region.
Default Value : 50
Suggested values : Percent ∈ {0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Typical range of values : 0 ≤ Percent ≤ 100 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : (0 ≤ Percent) ∧ (Percent ≤ 100)
. regionIndex1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; HTuple (int / long)
Indices of the regions in the tuple of the input regions which fulfill the pose relation.
. regionIndex2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; HTuple (int / long)
Indices of the regions in the tuple of the input regions which fulfill the pose relation.
. relation1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; HTuple (string)
Horizontal pose relation in which regionIndex2[n] stands with regionIndex1[n].
. relation2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; HTuple (string)
Vertical pose relation in which regionIndex2[n] stands with regionIndex1[n].
Result
The operator SpatialRelation returns the value 2 (H_MSG_TRUE) if regions2 is not empty and percent is correctly chosen. The behavior in case of empty parameter regions2 (no input regions available) is set via the operator SetSystem(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set) is set via SetSystem(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
SpatialRelation is reentrant and processed without parallelization.
Possible Predecessors
Threshold, Regiongrowing, Connection
Alternatives
AreaCenter, Intersection
See also
SelectRegionSpatial, FindNeighbors, CopyObj, ObjToInteger
Module
Foundation
12.4 Geometric-Transformations
static void HOperatorSet.AffineTransRegion ( HObject region,
out HObject regionAffineTrans, HTuple homMat2D, HTuple interpolate )
As an effect, you might get unexpected results when creating affine transformations based on coordinates that
are derived from the region, e.g., by operators like AreaCenter. For example, if you use this operator to
calculate the center of gravity of a rotationally symmetric region and then rotate the region around this point using
HomMat2dRotate, the resulting region will not lie on the original one. In such a case, you can compensate this
effect by applying the following translations to homMat2D before using it in AffineTransRegion:
hom_mat2d_translate(HomMat2D, 0.5, 0.5, HomMat2DTmp)
hom_mat2d_translate_local(HomMat2DTmp, -0.5, -0.5, HomMat2DAdapted)
affine_trans_region(Region, RegionAffinTrans, HomMat2DAdapted, ’false’)
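What the two translations do to the matrix can be verified numerically: conjugating a rotation about a point (r, c) with ±(0.5, 0.5) translations yields a rotation about (r + 0.5, c + 0.5), i.e. the half-pixel-corrected center. A minimal homogeneous-matrix sketch (Python, row/column convention assumed, not the HALCON API):

```python
import math

# 3x3 homogeneous matrices over (row, column) coordinates.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(dr, dc):
    return [[1, 0, dr], [0, 1, dc], [0, 0, 1]]

# Rotation by 'angle' about the point (r, c): T(r,c) * R * T(-r,-c).
def rotate_about(angle, r, c):
    rot = [[math.cos(angle), -math.sin(angle), 0],
           [math.sin(angle),  math.cos(angle), 0],
           [0, 0, 1]]
    return matmul(translate(r, c), matmul(rot, translate(-r, -c)))

# The compensation above: T(0.5, 0.5) * M * T(-0.5, -0.5).
def compensate(m):
    return matmul(translate(0.5, 0.5), matmul(m, translate(-0.5, -0.5)))
```

With these helpers, compensate(rotate_about(a, r, c)) equals rotate_about(a, r + 0.5, c + 0.5) up to rounding, which is exactly the shift from integer pixel coordinates to pixel centers.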
Parameter
Parameter
read_image(&Image,"monkey");
threshold(Image,&Seg,128.0,255.0);
mirror_region(Seg,&Mirror,"row",512);
disp_region(Mirror,WindowHandle);
Parallelization Information
MirrorRegion is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Connection, Regiongrowing, Pouring
Possible Successors
SelectShape, DispRegion
Alternatives
AffineTransRegion
See also
ZoomRegion
Module
Foundation
point in the input region that is specified by radiusEnd and angleEnd. In the usual mode (angleStart
< angleEnd and radiusStart < radiusEnd), the polar transformation is performed in the mathemati-
cally positive orientation (counterclockwise). Furthermore, points with smaller radii lie in the upper part of the
output region. By suitably exchanging the values of these parameters (e.g., angleStart > angleEnd or
radiusStart > radiusEnd), any desired orientation of the output region can be achieved.
The angles can be chosen from all real numbers. Center point and radii can be real as well. However, if they are
both integers and the difference of radiusEnd and radiusStart equals height−1, calculation will be sped
up through an optimized routine.
The radii and angles are inclusive, which means that the first row of the virtual target image contains the circle
with radius radiusStart and the last row contains the circle with radius radiusEnd. For complete circles,
where the difference between angleStart and angleEnd equals 2π (360 degrees), this also means that the
first column of the target image will be the same as the last.
To avoid this, do not make this difference 2π, but 2π(1 − 1/width) instead.
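The inclusive endpoint mapping and the 2π(1 − 1/width) adjustment can be sketched in plain Python (not HALCON code; the sampling function is illustrative):

```python
import math

# Sketch of the inclusive row/column -> (radius, angle) mapping described
# above (plain Python, not HALCON code).

def polar_coords(row, col, r_start, r_end, a_start, a_end, width, height):
    """Radius and angle sampled at target pixel (row, col), endpoints inclusive."""
    radius = r_start + row * (r_end - r_start) / (height - 1)
    angle = a_start + col * (a_end - a_start) / (width - 1)
    return radius, angle

W, H = 360, 100
# With a full 2*pi span, the first and last columns sample the same angle:
_, a_first = polar_coords(0, 0, 0, 50, 0.0, 2 * math.pi, W, H)
_, a_last = polar_coords(0, W - 1, 0, 50, 0.0, 2 * math.pi, W, H)
print(math.isclose(a_last - a_first, 2 * math.pi))  # duplicate column

# Using 2*pi*(1 - 1/width) instead avoids the duplication; the angular
# step becomes exactly 2*pi/width:
_, a_last2 = polar_coords(0, W - 1, 0, 50, 0.0, 2 * math.pi * (1 - 1 / W), W, H)
print(a_last2 < 2 * math.pi)
```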
The parameter interpolation is used to select the interpolation method ’bilinear’ or ’nearest_neighbor’.
Setting interpolation to ’bilinear’ leads to smoother region boundaries, especially if regions are enlarged.
However, the runtime increases significantly.
If more than one region is passed in region, their polar transformations are computed individually and stored
as a tuple in polarTransRegion. Please note that the indices of an input region and its transformation only
correspond if the system variable ’store_empty_region’ is set to ’true’ (see SetSystem). Otherwise empty
output regions are discarded and the length of the input tuple region is most likely not equal to the length of the
output tuple polarTransRegion.
Attention
If width or height are chosen greater than the dimensions of the current image, the system variable
’clip_region’ should be set to ’false’ (see SetSystem). Otherwise, an output region that does not lie within
the dimensions of the current image can produce an error message.
Parameter
The angles and radii are inclusive, which means that the row coordinate 0 in polarRegion will be mapped
onto a circle with a distance of radiusStart pixels from the specified center and the row with the coordinate
heightIn − 1 will be mapped onto a circle of radius radiusEnd. This applies to angleStart, angleEnd,
and widthIn in an analogous way. If the width of the input region polarRegion corresponds to an angle
interval greater than 2π, the region is cropped such that the length of this interval is 2π.
The parameter interpolation is used to select the interpolation method ’bilinear’ or ’nearest_neighbor’.
Setting interpolation to ’bilinear’ leads to smoother region boundaries, especially if regions are enlarged.
However, the runtime increases significantly.
PolarTransRegionInv is the inverse function of PolarTransRegion.
The call sequence:
polar_trans_region(Region, PolarRegion, Row, Column, rad(360), 0, 0,
Radius, Width, Height, ’nearest_neighbor’)
polar_trans_region_inv(PolarRegion, XYTransRegion, Row, Column, rad(360),
0, 0, Radius, Width, Height, Width, Height, ’nearest_neighbor’)
returns the region Region, restricted to the circle around (Row, Column) with radius Radius, as its output
region XYTransRegion.
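The round trip between the forward and inverse mapping can be sketched in plain Python (not HALCON code; the sign conventions chosen here are illustrative, with rows increasing downward and angles counterclockwise):

```python
import math

# Round-trip sketch of the forward/inverse polar mapping (plain Python,
# not HALCON code): polar pixel -> cartesian point -> polar pixel.

def to_cartesian(row, col, cr, cc, r_start, r_end, a_start, a_end, w, h):
    """Map a pixel of the polar image back into the original image."""
    radius = r_start + row * (r_end - r_start) / (h - 1)
    angle = a_start + col * (a_end - a_start) / (w - 1)
    # mathematically positive orientation: angle measured counterclockwise
    return cr - radius * math.sin(angle), cc + radius * math.cos(angle)

def to_polar(y, x, cr, cc, r_start, r_end, a_start, a_end, w, h):
    """Map an image point to the corresponding polar-image pixel."""
    radius = math.hypot(y - cr, x - cc)
    angle = math.atan2(cr - y, x - cc) % (2 * math.pi)
    row = (radius - r_start) * (h - 1) / (r_end - r_start)
    col = (angle - a_start) * (w - 1) / (a_end - a_start)
    return row, col

args = (100.0, 200.0, 0.0, 50.0, 0.0, 2 * math.pi * 0.99, 360, 100)
y, x = to_cartesian(25, 90, *args)
row, col = to_polar(y, x, *args)
print(round(row, 6), round(col, 6))  # ~ (25.0, 90.0)
```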
If more than one region is passed in polarRegion, their cartesian transformations are computed individually
and stored as a tuple in XYTransRegion. Please note that the indices of an input region and its transformation
only correspond if the system variable ’store_empty_region’ is set to ’true’ (see SetSystem). Otherwise empty
output regions are discarded and the length of the input tuple polarRegion is most likely not equal to the length
of the output tuple XYTransRegion.
Attention
If width or height are chosen greater than the dimensions of the current image, the system variable
’clip_region’ should be set to ’false’ (see SetSystem). Otherwise, an output region that does not lie within
the dimensions of the current image can produce an error message.
Parameter
If ’clip_region’ is set to its default value ’true’ by SetSystem(’clip_region’, ’true’) or if the trans-
formation is degenerated and thus produces infinite regions, the output region is clipped by the rectangle with upper
left corner (0, 0) and lower right corner (’width’, ’height’), where ’width’ and ’height’ are system variables (see
also GetSystem). If ’clip_region’ is ’false’, the output region is not clipped except by the maximum supported
coordinate size MAX_FORMAT. This may result in extremely memory- and time-intensive computations, so use
with care.
Parameter
column = (x + x0) / 2 ,
row = (y + y0) / 2 .
If row and column are set to the origin, the result is the transposition often used in morphology. Hence
TransposeRegion is often used to reflect (transpose) a structuring element.
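Since (row, column) is the midpoint of each original point and its image, every region point p maps to 2·(row, column) − p. A plain-Python sketch (not HALCON code) of this point reflection:

```python
# Sketch of the point reflection defined by the formulas above (plain
# Python, not HALCON): each region point p maps to 2*(row, column) - p.

def transpose_region(points, row, col):
    """Reflect a set of (r, c) pixels through the fixed point (row, col)."""
    return {(2 * row - r, 2 * col - c) for r, c in points}

# Reflecting through the origin yields the transposition used in morphology:
element = {(0, 0), (0, 1), (1, 2)}
print(transpose_region(element, 0, 0))  # {(0, 0), (0, -1), (-1, -2)}
```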
Parameter
Result
TransposeRegion returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty or
no input region can be set via:
• no region: SetSystem(’no_object_result’,<RegionResult>)
• empty region: SetSystem(’empty_region_result’,<RegionResult>)
Otherwise, an exception is raised.
Parallelization Information
TransposeRegion is reentrant and automatically parallelized (on tuple level).
Possible Successors
ReduceDomain, SelectShape, AreaCenter, Connection
See also
Dilation1, Opening, Closing
Module
Foundation
12.5 Sets
static void HOperatorSet.Complement ( HObject region,
out HObject regionComplement )
HRegion HRegion.Complement ( )
Return the complement of a region.
Complement determines the complement of the input region(s).
If the system flag ’clip_region’ is ’true’, which is the default, the difference of the largest image processed so far
(see ResetObjDb) and the input region is returned.
If the system flag ’clip_region’ is ’false’ (see SetSystem), the resulting region would be infinitely large. To avoid
this, the complement is computed only virtually by setting the complement flag of region to TRUE. For succeeding
operations, the de Morgan laws are applied when calculating results. Using Complement with ’clip_region’
set to ’false’ makes sense only to avoid fringe effects, e.g., if the area of interest is bigger or smaller than the
image. In the latter case, the clipping would be set explicitly. If there is no reason to use the operator with
’clip_region’=’false’ but you need the flag for other operations of your program, it is recommended to temporarily
set the system flag to ’true’ and change it back to ’false’ after applying Complement. Otherwise, negative regions
may result from succeeding operations.
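With ’clip_region’ set to ’true’, the complement is an ordinary set difference against a finite image domain. A plain-Python sketch (not HALCON code; the domain size is illustrative):

```python
# Sketch of a clipped complement (plain Python, not HALCON): with
# 'clip_region' = 'true' the complement is taken relative to a finite
# image domain instead of the (infinite) plane.

def complement(region, width, height):
    """All pixels of a width x height domain that are not in region."""
    domain = {(r, c) for r in range(height) for c in range(width)}
    return domain - region

region = {(0, 0), (1, 1)}
comp = complement(region, 2, 2)
print(sorted(comp))  # [(0, 1), (1, 0)]
```

Applying the clipped complement twice returns the original region, which is the finite-domain analogue of the de Morgan bookkeeping described above.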
Parameter
. region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Input region(s).
. regionComplement (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Complemented regions.
Number of elements : RegionComplement = Region
Result
Complement always returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no regions given)
can be set via SetSystem(’no_object_result’,<Result>) and the behavior in case of an empty input
region via SetSystem(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
Complement is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Connection, Regiongrowing, Pouring, ClassNdimNorm
Possible Successors
SelectShape
See also
Difference, Union1, Union2, Intersection, ResetObjDb, SetSystem
Module
Foundation
The resulting region is defined as the input region (region) with all points from sub removed.
Attention
Empty regions are valid for both parameters. On output, empty regions may result. The value of the system flag
’store_empty_region’ determines the behavior in this case.
Parameter
Complexity
Let N be the number of regions, F1 be their average area, and F2 be the total area of all regions in sub. Then
the runtime complexity is O(F1 ∗ log(F1) + N ∗ (√F1 + √F2)).
Result
Difference always returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no regions given)
can be set via SetSystem(’no_object_result’,<Result>) and the behavior in case of an empty input
region via SetSystem(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
Difference is reentrant and processed without parallelization.
Possible Predecessors
Threshold, Connection, Regiongrowing, Pouring, ClassNdimNorm
Possible Successors
SelectShape, DispRegion
See also
Intersection, Union1, Union2, Complement, SymmDifference
Module
Foundation
Let N be the number of regions in region1, F1 be their average area, and F2 be the total area of all regions in
region2. Then the runtime complexity is O(F1 ∗ log(F1) + N ∗ (√F1 + √F2)).
Result
Intersection always returns 2 (H_MSG_TRUE). The behavior in case of empty input (no regions given) can
be set via SetSystem(’no_object_result’,<Result>) and the behavior in case of an empty input
region via SetSystem(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
Intersection is reentrant and processed without parallelization.
Possible Predecessors
Threshold, Connection, Regiongrowing, Pouring
Possible Successors
SelectShape, DispRegion
See also
Union1, Union2, Complement
Module
Foundation
Parameter
. region1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Input region 1.
. region2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Input region 2.
. regionDifference (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Resulting region.
Example (Syntax: HDevelop)
Result
SymmDifference always returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no regions
given) can be set via SetSystem(’no_object_result’,<Result>) and the behavior in case of an
empty input region via SetSystem(’empty_region_result’,<Result>). If necessary, an exception is
raised.
Parallelization Information
SymmDifference is reentrant and processed without parallelization.
Possible Successors
SelectShape, DispRegion
See also
Intersection, Union1, Union2, Complement, Difference
Module
Foundation
HRegion HRegion.Union1 ( )
Return the union of all input regions.
Union1 computes the union of all input regions and returns the result in regionUnion.
Parameter
. region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; HRegion
Regions of which the union is to be computed.
. regionUnion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; HRegion
Union of all input regions.
Number of elements : RegionUnion ≤ Region
Example (Syntax: HDevelop)
Complexity
Let F be the sum of all areas of the input regions. Then the runtime complexity is O(log(√F) ∗ √F).
Result
Union1 always returns 2 (H_MSG_TRUE). The behavior in case of empty input (no regions given) can be set
via SetSystem(’no_object_result’,<Result>) and the behavior in case of an empty input region
via SetSystem(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
Union1 is reentrant and processed without parallelization.
Possible Predecessors
Threshold, Connection, Regiongrowing, Pouring
Possible Successors
SelectShape, DispRegion
Alternatives
Union2
See also
Intersection, Complement
Module
Foundation
12.6 Tests
static void HOperatorSet.TestEqualRegion ( HObject regions1,
HObject regions2, out HTuple isEqual )
Alternatives
Difference, AreaCenter
See also
TestEqualRegion
Module
Foundation
12.7 Transformation
static void HOperatorSet.BackgroundSeg ( HObject foreground,
out HObject backgroundRegions )
HRegion HRegion.BackgroundSeg ( )
Determine the connected components of the background of given regions.
BackgroundSeg determines connected components of the background of the foreground regions given in
foreground. This operator is normally used after an edge operator in order to determine the regions enclosed
by the extracted edges. The connected components are determined using 4-neighborhood.
Parameter
. foreground (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Input regions.
. backgroundRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; HRegion
Connected components of the background.
Example (Syntax: HDevelop)
/* Simulation of background_seg: */
background_seg(Foreground,BackgroundRegions):
complement(Foreground,Background)
get_system(’neighborhood’,Save)
set_system(’neighborhood’,4)
connection(Background,BackgroundRegions)
clear_obj(Background)
set_system(’neighborhood’,Save).
Complexity
Let F be the area of the background, H and W be the height and width of the image, and N be the number of
resulting regions. Then the runtime complexity is O(H + √F ∗ √N).
Result
BackgroundSeg always returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no regions
given) can be set via SetSystem(’no_object_result’,<Result>) and the behavior in case of an
empty input region via SetSystem(’empty_region_result’,<Result>). If necessary, an exception is
raised.
Parallelization Information
BackgroundSeg is reentrant and processed without parallelization.
Possible Predecessors
Threshold, Connection, Regiongrowing, Pouring, ClassNdimNorm
Possible Successors
SelectShape
Alternatives
Complement, Connection
See also
Threshold, HysteresisThreshold, Skeleton, ExpandRegion, SetSystem, SobelAmp,
EdgesImage, Roberts, BandpassImage
Module
Foundation
Possible Predecessors
Threshold, Connection, Regiongrowing, Pouring
Possible Successors
SelectShape, DispRegion
Alternatives
Intersection, GenRectangle1, ClipRegionRel
Module
Foundation
Result
ClipRegionRel returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty input
(no regions given) can be set via SetSystem(’no_object_result’,<Result>) and the behavior in
case of an empty input region via SetSystem(’empty_region_result’,<Result>). If necessary, an
exception is raised.
Parallelization Information
ClipRegionRel is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Connection, Regiongrowing, Pouring
Possible Successors
SelectShape, DispRegion
Alternatives
SmallestRectangle1, Intersection, GenRectangle1, ClipRegion
Module
Foundation
HRegion HRegion.Connection ( )
Compute connected components of a region.
Connection determines the connected components of the input regions given in region. The neighborhood
used for this can be set via SetSystem(’neighborhood’,<4/8>). The default is 8-neighborhood, which
is useful for determining the connected components of the foreground. The maximum number of connected com-
ponents that is returned by Connection can be set via SetSystem(’max_connection’,<Num>). The
default value of 0 causes all connected components to be returned. The inverse operator of Connection is
Union1.
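The effect of the neighborhood setting can be sketched in plain Python (not HALCON code; the flood-fill labeling below is a minimal illustration, not HALCON's implementation):

```python
from collections import deque

# Sketch of connected-component labeling with a configurable neighborhood
# (plain Python, not HALCON): splits a pixel set into 4- or 8-connected parts.

def connection(region, neighborhood=8):
    """Return the connected components of a set of (row, col) pixels."""
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if neighborhood == 8:
        offsets += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    remaining, components = set(region), []
    while remaining:
        queue = deque([remaining.pop()])
        comp = {queue[0]}
        while queue:  # breadth-first flood fill
            r, c = queue.popleft()
            for dr, dc in offsets:
                p = (r + dr, c + dc)
                if p in remaining:
                    remaining.discard(p)
                    comp.add(p)
                    queue.append(p)
        components.append(comp)
    return components

# Two pixels touching only diagonally: one 8-connected component,
# but two 4-connected components.
region = {(0, 0), (1, 1)}
print(len(connection(region, 8)), len(connection(region, 4)))  # 1 2
```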
Parameter
. region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Input region.
. connectedRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; HRegion
Connected components.
Example (Syntax: HDevelop)
read_image(Image,’affe’)
set_colored(WindowHandle,12)
threshold(Image,Light,150.0,255.0)
count_obj(Light,Number1)
fwrite_string(’Number of regions after threshold = ’+Number1)
fnew_line()
disp_region(Light,WindowHandle)
connection(Light,Many)
count_obj(Many,Number2)
fwrite_string(’Number of regions after connection = ’+Number2)
fnew_line()
disp_region(Many,WindowHandle).
Complexity
Let F be the area of the input region and N be the number of generated connected components. Then the runtime
complexity is O(√F ∗ √N).
Result
Connection always returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no regions given)
can be set via SetSystem(’no_object_result’,<Result>) and the behavior in case of an empty input
region via SetSystem(’empty_region_result’,<Result>). If necessary, an exception is raised.
Complexity
The runtime complexity is O(width ∗ height).
Result
DistanceTransform returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
DistanceTransform is reentrant and processed without parallelization.
Possible Predecessors
Threshold, DynThreshold, Regiongrowing
Possible Successors
Threshold
See also
Skeleton
References
P. Soille: “Morphological Image Analysis, Principles and Applications”; Springer Verlag Berlin Heidelberg New
York, 1999.
G. Borgefors: “Distance Transformations in Arbitrary Dimensions”; Computer Vision, Graphics, and Image Pro-
cessing, Vol. 27, pages 321–345, 1984.
P.E. Danielsson: “Euclidean Distance Mapping”; Computer Graphics and Image Processing, Vol. 14, pages 227–
248, 1980.
Module
Foundation
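The chamfer-style algorithms in the references above compute the distance transform in two sweeps over the image, matching the O(width ∗ height) complexity stated for DistanceTransform. A minimal plain-Python sketch for the city-block (L1) metric (illustrative, not HALCON code):

```python
# Two-pass chamfer sketch of a city-block distance transform (plain Python,
# not HALCON): distance of every pixel to the nearest foreground pixel.

def distance_transform(foreground, width, height):
    """dist[r][c] = L1 distance to the nearest pixel in foreground."""
    inf = width + height  # larger than any possible city-block distance
    dist = [[0 if (r, c) in foreground else inf for c in range(width)]
            for r in range(height)]
    # forward pass: propagate distances from top/left neighbors
    for r in range(height):
        for c in range(width):
            if r > 0:
                dist[r][c] = min(dist[r][c], dist[r - 1][c] + 1)
            if c > 0:
                dist[r][c] = min(dist[r][c], dist[r][c - 1] + 1)
    # backward pass: propagate distances from bottom/right neighbors
    for r in reversed(range(height)):
        for c in reversed(range(width)):
            if r < height - 1:
                dist[r][c] = min(dist[r][c], dist[r + 1][c] + 1)
            if c < width - 1:
                dist[r][c] = min(dist[r][c], dist[r][c + 1] + 1)
    return dist

d = distance_transform({(0, 0)}, 3, 3)
print(d[2][2])  # city-block distance from (2, 2) to (0, 0) is 4
```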
Parameter
’image’ The input regions are expanded iteratively until they touch another region or the image border. In this
case, the image border is defined to be the rectangle ranging from (0,0) to (row_max,col_max). Here,
(row_max,col_max) corresponds to the lower right corner of the smallest surrounding rectangle of all input
regions (i.e., of all regions that are passed in regions and forbiddenArea). Because ExpandRegion
processes all regions simultaneously, gaps between regions are distributed evenly to all regions. Overlapping
regions are split by distributing the area of overlap evenly to both regions.
’region’ No expansion of the input regions is performed. Instead, only overlapping regions are split by distributing
the area of overlap evenly to the respective regions. Because the intersection with the original region is
computed after the shrinking operation, gaps in the output regions may result, i.e., the segmentation is not
complete. This can be prevented by calling ExpandRegion a second time with the complement of the
original regions as “forbidden area.”
Parameter
. regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions for which the gaps are to be closed, or which are to be separated.
. forbiddenArea (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; HRegion
Regions in which no expansion takes place.
. regionExpanded (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Expanded or separated regions.
. iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long / string)
Number of iterations.
Default Value : "maximal"
Suggested values : Iterations ∈ {"maximal", 0, 1, 2, 3, 5, 7, 10, 15, 20, 30, 50, 70, 100, 200}
Typical range of values : 0 ≤ Iterations ≤ 1000 (lin)
Minimum Increment : 1
Recommended Increment : 1
. mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Expansion mode.
Default Value : "image"
List of values : Mode ∈ {"image", "region"}
Example (Syntax: HDevelop)
read_image(Image,’fabrik’)
threshold(Image,Light,100,255)
disp_region(Light,WindowHandle)
connection(Light,Seg)
expand_region(Seg,[],Exp1,’maximal’,’image’)
set_colored(WindowHandle,12)
set_draw(WindowHandle,’margin’)
disp_region(Exp1,WindowHandle)
Result
ExpandRegion always returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no regions
given) can be set via SetSystem(’no_object_result’,<Result>), the behavior in case of an empty in-
put region via SetSystem(’empty_region_result’,<Result>), and the behavior in case of an empty
result region via SetSystem(’store_empty_region’,<true/false>). If necessary, an exception is
raised.
Parallelization Information
ExpandRegion is reentrant and processed without parallelization.
Possible Predecessors
Pouring, Threshold, DynThreshold, Regiongrowing
Alternatives
Dilation1
See also
ExpandGray, Interjacent, Skeleton
Module
Foundation
HRegion HRegion.FillUp ( )
Fill up holes in regions.
FillUp fills up holes in regions. The number of regions remains unchanged. The neighborhood type is set via
SetSystem(’neighborhood’,<4/8>) (default: 8-neighborhood).
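Conceptually, a hole is a background component that cannot reach the border. A plain-Python sketch of hole filling (not HALCON code; it uses 4-connected background within the region's bounding extent for illustration):

```python
from collections import deque

# Sketch of hole filling (plain Python, not HALCON): background pixels
# that cannot reach the bounding-box border are holes and get added to
# the region. Background connectivity is 4-connected here.

def fill_up(region, width, height):
    """Return region with all enclosed background holes filled."""
    # flood-fill the background starting from every border pixel
    border = [(r, c) for r in range(height) for c in range(width)
              if (r in (0, height - 1) or c in (0, width - 1))
              and (r, c) not in region]
    outside = set(border)
    queue = deque(border)
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            p = (r + dr, c + dc)
            if (0 <= p[0] < height and 0 <= p[1] < width
                    and p not in region and p not in outside):
                outside.add(p)
                queue.append(p)
    holes = {(r, c) for r in range(height) for c in range(width)
             if (r, c) not in region and (r, c) not in outside}
    return region | holes

# A 3x3 ring with a one-pixel hole in the middle:
ring = {(r, c) for r in range(3) for c in range(3)} - {(1, 1)}
print((1, 1) in fill_up(ring, 3, 3))  # True
```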
Parameter
read_image(&Image,"affe");
threshold(Image,&Seg,120.0,255.0);
fill_up_shape(Seg,&Filled,"area",0.0,200.0);
Result
FillUpShape returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty input
(no regions given) can be set via SetSystem(’no_object_result’,<Result>) and the behavior in
case of an empty input region via SetSystem(’empty_region_result’,<Result>). If necessary, an
exception handling is raised.
Parallelization Information
FillUpShape is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Connection, Regiongrowing, Pouring
Possible Successors
SelectShape, DispRegion
Alternatives
FillUp
See also
SelectShape, Connection, AreaCenter
Module
Foundation
’medial_axis’ This mode is used for regions that do not touch or overlap. The operator will find separating lines
between the regions which partition the background evenly between the input regions. This corresponds to
the following calls:
complement(’full’,Region,Tmp) skeleton(Tmp,Result)
’border’ If the input regions do not touch or overlap this mode is equivalent to Boundary(Region,Result),
i.e., it replaces each region by its boundary. If regions are touching they are aggregated into one region. The
corresponding output region then contains the boundary of the aggregated region, as well as the one pixel
wide separating line between the original regions. This corresponds to the following calls:
boundary(Region,Tmp1,’inner’) union1(Tmp1,Tmp2)
skeleton(Tmp2,Result)
’mixed’ In this mode the operator behaves like the mode ’medial_axis’ for non-overlapping regions. If regions
touch or overlap, again separating lines between the input regions are generated on output, but this time
including the “touching line” between regions, i.e., touching regions are separated by a line in the output
region. This corresponds to the following calls:
erosion1(Region,Mask,Tmp1,1) union1(Tmp1,Tmp2)
complement(’full’,Tmp2,Tmp3) skeleton(Tmp3,Result)
where Mask denotes the following “cross mask”:
×
× × ×
×
Parameter
read_image(Image,’wald1_rot’)
mean(Image,Mean,31,31)
dyn_threshold(Mean,Seg,20)
interjacent(Seg,Graph,’medial_axis’)
disp_region(Graph,WindowHandle)
Result
Interjacent always returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no regions
given) can be set via SetSystem(’no_object_result’,<Result>), the behavior in case of an empty
input region via SetSystem(’empty_region_result’,<Result>), and the behavior in case of an
empty result region via SetSystem(’store_empty_region’,<true/false>). If necessary, an exception
is raised.
Parallelization Information
Interjacent is reentrant and processed without parallelization.
Possible Predecessors
Threshold, Connection, Regiongrowing, Pouring
Possible Successors
SelectShape, DispRegion
See also
ExpandRegion, JunctionsSkeleton, Boundary
Module
Foundation
Complexity
Let F be the area of the input region. Then the runtime complexity is O(F ).
Result
JunctionsSkeleton always returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no
regions given) can be set via SetSystem(’no_object_result’,<Result>), the behavior in case of
an empty input region via SetSystem(’empty_region_result’,<Result>), and the behavior in case
of an empty result region via SetSystem(’store_empty_region’,<true/false>). If necessary, an
exception is raised.
Parallelization Information
JunctionsSkeleton is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Skeleton
Possible Successors
AreaCenter, Connection, GetRegionPoints, Difference
See also
Pruning, SplitSkeletonRegion
Module
Foundation
PartitionRectangle partitions the input region into rectangles with an extent of width times height.
The region is always split into rectangles of equal size; therefore, width and height are adapted to the actual
size of the region. If the region is smaller than the given size, it is returned unchanged. A partition is only
done if the size of the region is at least 1.5 times the size of the rectangle given by the parameters.
Parameter
number = (height ∗ width) / 2 ,
read_image(Image,’affe’)
mean_image(Image,Mean,5,5)
dyn_threshold(Mean,Points,25)
rank_region(Points,Textur,15,15,30)
gen_circle(Mask,10,10,3)
opening1(Textur,Mask,Seg).
Complexity
Let F be the area of the input region. Then the runtime complexity is O(F ∗ 8).
Result
RankRegion returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty input
(no regions given) can be set via SetSystem(’no_object_result’,<Result>) and the behavior in
case of an empty input region via SetSystem(’empty_region_result’,<Result>). If necessary, an
exception is raised.
Parallelization Information
RankRegion is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Connection, Regiongrowing, Pouring, ClassNdimNorm
Possible Successors
SelectShape, DispRegion
Alternatives
ClosingRectangle1, ExpandRegion
See also
RankImage, MeanImage
Module
Foundation
Attention
If type = ’outer_circle’ is selected, it might happen that the resulting circular region does not completely cover the
input region. This is because internally the operators SmallestCircle and GenCircle are used to compute
the outer circle. As described in the documentation of SmallestCircle, the calculated radius can be too small
by up to 1/√2 − 0.5 pixels. Additionally, the circle that is generated by GenCircle is translated by up to 0.5
pixels in both directions, i.e., by up to 1/√2 pixels. Consequently, when adding up both effects, the original region
might protrude beyond the returned circular region by at most 1 pixel.
Parameter
HRegion HRegion.Skeleton ( )
Compute the skeleton of a region.
Skeleton computes the skeleton, i.e., the medial axis, of the input regions. The skeleton is constructed in such
a way that each of its points can be seen as the center point of a circle with the largest possible radius that is still
completely contained in the region.
Parameter
Result
Skeleton returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior in case of empty input (no re-
gions given) can be set via SetSystem(’no_object_result’,<Result>) and the behavior in case of an
empty input region via SetSystem(’empty_region_result’,<Result>). If necessary, an exception is
raised.
Parallelization Information
Skeleton is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
SobelAmp, EdgesImage, BandpassImage, Threshold, HysteresisThreshold
Possible Successors
JunctionsSkeleton, Pruning
Alternatives
MorphSkeleton, Thinning
See also
GraySkeleton, SobelAmp, EdgesImage, Roberts, BandpassImage, Threshold
References
Eckardt, U. “Verdünnung mit Perfekten Punkten”, Proceedings 10. DAGM-Symposium, IFB 180, Zurich, 1988
Module
Foundation
’character’ The regions will be treated like characters in a row and will be sorted according to their order in the
line: If two regions overlap horizontally, they will be sorted with respect to their column values, otherwise
they will be sorted with regard to their row values. To be able to sort a line correctly, all regions in the line
must overlap each other vertically. Furthermore, the regions in adjacent rows must not overlap.
’first_point’ The point with the lowest column value in the first row of the region.
’last_point’ The point with the highest column value in the last row of the region.
’upper_left’ Upper left corner of the surrounding rectangle.
’upper_right’ Upper right corner of the surrounding rectangle.
’lower_left’ Lower left corner of the surrounding rectangle.
’lower_right’ Lower right corner of the surrounding rectangle.
The parameter order determines whether the sorting order is increasing or decreasing: with ’true’ the order is
increasing, with ’false’ it is decreasing.
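The ’character’ mode above can be sketched as a comparator over bounding boxes (plain Python, not HALCON code; as stated above, regions in one text line are assumed to overlap vertically, and this simplified comparator encodes exactly that rule):

```python
import functools

# Simplified sketch of the 'character' sorting mode (plain Python, not
# HALCON): regions that overlap vertically are ordered by column,
# otherwise by row. Each region is its bounding box (r1, c1, r2, c2).

def character_cmp(a, b):
    """Compare two bounding boxes like characters in text lines."""
    overlap = min(a[2], b[2]) - max(a[0], b[0])  # vertical overlap
    if overlap >= 0:
        return a[1] - b[1]  # same line: order by column
    return a[0] - b[0]      # different lines: order by row

boxes = [(10, 50, 20, 60),  # line 1, second character
         (30, 0, 40, 10),   # line 2
         (10, 0, 20, 10)]   # line 1, first character
ordered = sorted(boxes, key=functools.cmp_to_key(character_cmp))
print(ordered)  # line 1 left-to-right first, then line 2
```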
Parameter
read_image(Image,’fabrik’)
edges_image (Image, ImaAmp, ImaDir, ’lanser2’, 0.5, ’nms’, 8, 16)
threshold (ImaAmp, RawEdges, 8, 255)
skeleton (RawEdges, Skeleton)
junctions_skeleton (Skeleton, EndPoints, JuncPoints)
difference (Skeleton, JuncPoints, SkelWithoutJunc)
connection (SkelWithoutJunc, SingleBranches)
select_shape (SingleBranches, SelectedBranches, ’area’, ’and’, 16, 99999)
split_skeleton_lines (SelectedBranches, 3, BeginRow, BeginCol, EndRow,
EndCol).
Result
SplitSkeletonLines always returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no
regions given) can be set via SetSystem(’no_object_result’,<Result>), the behavior in case of
an empty input region via SetSystem(’empty_region_result’,<Result>), and the behavior in case
of an empty result region via SetSystem(’store_empty_region’,<true/false>). If necessary, an
exception is raised.
Parallelization Information
SplitSkeletonLines is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Connection, SelectShape, Skeleton, JunctionsSkeleton, Difference
Possible Successors
SelectLines, PartitionLines, DispLine
See also
SplitSkeletonRegion, DetectEdgeSegments
Module
Foundation
read_image(Image,’fabrik’)
edges_image (Image, ImaAmp, ImaDir, ’lanser2’, 0.5, ’nms’, 8, 16)
threshold (ImaAmp, RawEdges, 8, 255)
skeleton (RawEdges, Skeleton)
junctions_skeleton (Skeleton, EndPoints, JuncPoints)
difference (Skeleton, JuncPoints, SkelWithoutJunc)
connection (SkelWithoutJunc, SingleBranches)
select_shape (SingleBranches, SelectedBranches, ’area’, ’and’, 16, 99999)
split_skeleton_region (SelectedBranches, Lines, 3)
Result
SplitSkeletonRegion always returns the value 2 (H_MSG_TRUE). The behavior in case of empty input
(no regions given) can be set via SetSystem(’no_object_result’,<Result>), the behavior in case of
an empty input region via SetSystem(’empty_region_result’,<Result>), and the behavior in case
of an empty result region via SetSystem(’store_empty_region’,<true/false>). If necessary, an
exception is raised.
Parallelization Information
SplitSkeletonRegion is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Connection, SelectShape, Skeleton, JunctionsSkeleton, Difference
Possible Successors
CountObj, SelectShape, SelectObj, AreaCenter, EllipticAxis, SmallestRectangle2,
GetRegionPolygon, GetRegionContour
See also
SplitSkeletonLines, GetRegionPolygon, GenPolygonsXld
Module
Foundation
Segmentation
13.1 Classification
static void HOperatorSet.AddSamplesImageClassGmm ( HObject image,
HObject classRegions, HTuple GMMHandle, HTuple randomize )
Add training samples from an image to the training data of a Gaussian Mixture Model.
AddSamplesImageClassGmm adds training samples from the image image to the Gaussian Mixture Model (GMM) given by GMMHandle. AddSamplesImageClassGmm is used to store the training samples before a classifier for the pixel classification of multichannel images with ClassifyImageClassGmm is trained. AddSamplesImageClassGmm works analogously to AddSampleClassGmm. The image image must have a number of channels equal to numDim, as specified with CreateClassGmm. The training regions for the numClasses pixel classes are passed in classRegions. Hence, classRegions must be a tuple containing numClasses regions. The order of the regions in classRegions determines the class of the pixels. If there are no samples for a particular class in image, an empty region must be passed at the position of that class in classRegions. With this mechanism it is possible to use multiple images to add training samples for all relevant classes to the GMM by calling AddSamplesImageClassGmm multiple times with different images and suitably chosen regions. The regions in classRegions should contain representative training samples for the respective classes; hence, they need not cover the entire image. The regions in classRegions should not overlap each other, because samples from the overlapping areas would then be assigned to multiple classes in the training data, which may lead to a lower classification performance. Image data of integer type can be particularly badly suited for modeling with a GMM. randomize can be used to overcome this problem, as explained in AddSampleClassGmm.
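The region-to-class bookkeeping described above — one training sample per pixel, labeled by the position of its region in classRegions, accumulated over multiple calls — can be illustrated with a small sketch. This is a conceptual illustration only, not the HALCON implementation; the dictionary-based image representation and the function name are invented for the sketch:

```python
def add_samples_image(image, class_regions, samples):
    """Conceptual sketch of the AddSamplesImageClassGmm bookkeeping:
    every pixel inside the i-th training region is stored as a feature
    vector labeled with class index i.

    image:         dict (row, col) -> tuple of channel values
    class_regions: list of pixel-coordinate sets, one per class; an empty
                   set plays the role of the "empty region" for a class
                   that has no samples in this image
    samples:       dict class index -> list of feature vectors,
                   accumulated across repeated calls
    """
    for cls, region in enumerate(class_regions):
        bucket = samples.setdefault(cls, [])
        for pixel in region:
            bucket.append(image[pixel])   # one training sample per pixel
    return samples
```

Calling the function once per image, with an empty set at the position of every class that is absent from that image, reproduces the multi-image training workflow described above.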
Parameter
Result
If the parameters are valid, the operator AddSamplesImageClassGmm returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
AddSamplesImageClassGmm is processed completely exclusively without parallelization.
Possible Predecessors
CreateClassGmm
Possible Successors
TrainClassGmm, WriteSamplesClassGmm
Alternatives
ReadSamplesClassGmm
See also
ClassifyImageClassGmm, AddSampleClassGmm, ClearSamplesClassGmm,
GetSampleNumClassGmm, GetSampleClassGmm
Module
Foundation
Add training samples from an image to the training data of a multilayer perceptron.
AddSamplesImageClassMlp adds training samples from the image image to the multilayer perceptron (MLP) given by MLPHandle. AddSamplesImageClassMlp is used to store the training samples before a classifier for the pixel classification of multichannel images with ClassifyImageClassMlp is trained. AddSamplesImageClassMlp works analogously to AddSampleClassMlp. Because here the MLP is always used for classification, OutputFunction = ’softmax’ must be specified when the MLP is created with CreateClassMlp. The image image must have a number of channels equal to NumInput, as specified with CreateClassMlp. The training regions for the NumOutput pixel classes are passed in classRegions. Hence, classRegions must be a tuple containing NumOutput regions. The order of the regions in classRegions determines the class of the pixels. If there are no samples for a particular class in image, an empty region must be passed at the position of that class in classRegions. With this mechanism it is possible to use multiple images to add training samples for all relevant classes to the MLP by calling AddSamplesImageClassMlp multiple times with different images and suitably chosen regions. The regions in classRegions should contain representative training samples for the respective classes; hence, they need not cover the entire image. The regions in classRegions should not overlap each other, because samples from the overlapping areas would then be assigned to multiple classes in the training data, which may lead to slower convergence of the training and a lower classification performance.
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; HImage
Training image.
. classRegions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; HRegion
Regions of the classes to be trained.
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; HClassMlp / HTuple (IntPtr)
MLP handle.
Result
If the parameters are valid, the operator AddSamplesImageClassMlp returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
AddSamplesImageClassMlp is processed completely exclusively without parallelization.
Possible Predecessors
CreateClassMlp
Possible Successors
TrainClassMlp, WriteSamplesClassMlp
Alternatives
ReadSamplesClassMlp
See also
ClassifyImageClassMlp, AddSampleClassMlp, ClearSamplesClassMlp,
GetSampleNumClassMlp, GetSampleClassMlp, AddSamplesImageClassSvm
Module
Foundation
Add training samples from an image to the training data of a support vector machine.
AddSamplesImageClassSvm adds training samples from the image image to the support vector machine (SVM) given by SVMHandle. AddSamplesImageClassSvm is used to store the training samples before training a classifier for the pixel classification of multichannel images with ClassifyImageClassSvm. AddSamplesImageClassSvm works analogously to AddSampleClassSvm.
The image image must have a number of channels equal to NumFeatures, as specified with CreateClassSvm. The training regions for the NumClasses pixel classes are passed in classRegions. Hence, classRegions must be a tuple containing NumClasses regions. The order of the regions in classRegions determines the class of the pixels. If there are no samples for a particular class in image, an empty region must be passed at the position of that class in classRegions. With this mechanism it is possible to use multiple images to add training samples for all relevant classes to the SVM by calling AddSamplesImageClassSvm multiple times with different images and suitably chosen regions.
The regions in classRegions should contain representative training samples for the respective classes; hence, they need not cover the entire image. The regions in classRegions should not overlap each other, because samples from the overlapping areas would then be assigned to multiple classes in the training data, which may lead to slower convergence of the training and a lower classification performance.
A further application of this operator is automatic novelty detection, where, e.g., anomalies in color or texture can be detected. For this mode, a training set that defines a sample region (e.g., skin regions for skin detection or samples of the correct texture) is passed to the SVMHandle, which must have been created with Mode = ’novelty-detection’. After training, regions that differ from the trained sample regions are detected (e.g., the rejection class for skin or errors in texture).
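The novelty-detection idea — learn only what the "good" samples look like and flag everything that deviates — can be sketched with a deliberately simple statistical stand-in. Note that this uses per-channel mean and deviation instead of a one-class SVM; all names here are invented for the illustration:

```python
import math

def train_novelty(samples):
    """Per-channel mean and standard deviation of the training samples --
    a simple stand-in for the SVM novelty-detection model."""
    n, dim = len(samples), len(samples[0])
    means = [sum(s[c] for s in samples) / n for c in range(dim)]
    stds = [math.sqrt(sum((s[c] - means[c]) ** 2 for s in samples) / n) or 1.0
            for c in range(dim)]
    return means, stds

def detect_novel(model, pixels, k=3.0):
    """Return the pixels whose value deviates from the trained sample
    distribution by more than k standard deviations in any channel."""
    means, stds = model
    return [p for p, v in pixels.items()
            if any(abs(v[c] - means[c]) > k * stds[c] for c in range(len(means)))]
```

Training on samples of the correct appearance and flagging everything outside the learned distribution mirrors the workflow described above, where regions that differ from the trained sample regions are detected after training.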
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; HImage
Training image.
. classRegions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; HRegion
Regions of the classes to be trained.
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; HClassSvm / HTuple (IntPtr)
SVM handle.
Result
If the parameters are valid, AddSamplesImageClassSvm returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
AddSamplesImageClassSvm is processed completely exclusively without parallelization.
Possible Predecessors
CreateClassSvm
Possible Successors
TrainClassSvm, WriteSamplesClassSvm
Alternatives
ReadSamplesClassSvm
See also
ClassifyImageClassSvm, AddSampleClassSvm, ClearSamplesClassSvm,
GetSampleNumClassSvm, GetSampleClassSvm, AddSamplesImageClassMlp
Module
Foundation
(g_r, g_c) ∈ featureSpace
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
HWindow win;
long nc;
image.Display (win);
win.SetColor ("green");
cout << "Draw the region of interest " << endl;
win.SetDraw ("fill");
win.SetColor ("red");
feats.Display (win);
win.SetColor ("blue");
cd2reg.Display (win);
Complexity
Let A be the area of the input region. Then the runtime complexity is O(256² + A).
Result
Class2dimSup returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior with respect to the input images and output regions can be determined by setting the values of the flags ’no_object_result’, ’empty_region_result’, and ’store_empty_region’ with SetSystem. If necessary, an exception is raised.
Parallelization Information
Class2dimSup is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Histo2dim, Threshold, DrawRegion, Dilation1, Opening, ShapeTrans
Possible Successors
Connection, SelectShape, SelectGray
Alternatives
ClassNdimNorm, ClassNdimBox, Threshold
See also
Histo2dim
Module
Foundation
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
HWindow w;
long nc;
colimg.Display (w);
w.SetDraw ("margin");
w.SetColored (12);
seg.Display (w);
w.Click ();
return (0);
}
Result
Class2dimUnsup returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior with respect to
the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with SetSystem. If necessary, an exception is raised.
Parallelization Information
Class2dimUnsup is reentrant and processed without parallelization.
Possible Predecessors
Decompose2, Decompose3, MedianImage, AnisotropicDiffusion, ReduceDomain
Possible Successors
SelectShape, SelectGray, Connection
Alternatives
Threshold, Histo2dim, Class2dimSup, ClassNdimNorm, ClassNdimBox
Module
Foundation
read_image(Image,’meer’)
disp_image(Image,WindowHandle)
set_color(WindowHandle,’green’)
fwrite_string(’Draw the learning region’)
fnew_line()
draw_region(Reg1,WindowHandle)
reduce_domain(Image,Reg1,Foreground)
set_color(WindowHandle,’red’)
fwrite_string(’Draw Background’)
fnew_line()
draw_region(Reg2,WindowHandle)
reduce_domain(Image,Reg2,Background)
fwrite_string(’Training’)
fnew_line()
create_class_box(ClassifHandle)
learn_ndim_box(Foreground,Background,Image,ClassifHandle)
fwrite_string(’Classification’)
fnew_line()
class_ndim_box(Image,Res,ClassifHandle)
set_draw(WindowHandle,’fill’)
disp_region(Res,WindowHandle)
close_class_box(ClassifHandle)
Complexity
Let N be the number of hyper-cuboids and A be the area of the input region. Then the runtime complexity is O(N · A).
Result
ClassNdimBox returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior with respect to
the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with SetSystem. If necessary, an exception is raised.
Parallelization Information
ClassNdimBox is local and processed completely exclusively without parallelization.
Possible Predecessors
CreateClassBox, LearnClassBox, MedianImage, Compose2, Compose3, Compose4,
Compose5, Compose6, Compose7
Alternatives
ClassNdimNorm, Class2dimSup, Class2dimUnsup
See also
DescriptClassBox
Module
Foundation
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
int main ()
{
HImage image ("meer"),
t1, t2, t3,
m1, m2, m3, m;
HWindow w;
w.SetColor ("green");
image.Display (w);
HRegion empty;
Tuple cen, t;
w.SetColored (12);
reg.Display (w);
cout << "Result of classification" << endl;
return (0);
}
Complexity
Let N be the number of clusters and A be the area of the input region. Then the runtime complexity is O(N · A).
Result
ClassNdimNorm returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior with respect to
the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with SetSystem. If necessary, an exception is raised.
Parallelization Information
ClassNdimNorm is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
LearnNdimNorm, Compose2, Compose3, Compose4, Compose5, Compose6, Compose7
Possible Successors
Connection, SelectShape, ReduceDomain, SelectGray
Alternatives
ClassNdimBox, Class2dimSup, Class2dimUnsup
Module
Foundation
Result
If the parameters are valid, the operator ClassifyImageClassGmm returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
ClassifyImageClassGmm is reentrant and processed without parallelization.
Possible Predecessors
TrainClassGmm, ReadClassGmm
See also
AddSamplesImageClassGmm, CreateClassGmm
Module
Foundation
Result
If the parameters are valid, the operator ClassifyImageClassMlp returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
ClassifyImageClassMlp is reentrant and processed without parallelization.
Possible Predecessors
TrainClassMlp, ReadClassMlp
Alternatives
ClassifyImageClassSvm, ClassNdimBox, ClassNdimNorm, Class2dimSup
See also
AddSamplesImageClassMlp, CreateClassMlp
Module
Foundation
Result
If the parameters are valid, the operator ClassifyImageClassSvm returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
ClassifyImageClassSvm is reentrant and processed without parallelization.
Possible Predecessors
TrainClassSvm, ReadClassSvm, ReduceClassSvm
Alternatives
ClassifyImageClassMlp, ClassNdimBox, ClassNdimNorm, Class2dimSup
See also
AddSamplesImageClassSvm, CreateClassSvm
Module
Foundation
between the rejection class and the classifier classes. Values larger than 0 denote the corresponding ratio of overlap. If no rejection region is given, its value is set to 1. The regions in background do not influence the clustering. They are merely used to check the results that can be expected.
From a user's point of view, the key difference between LearnNdimNorm and LearnNdimBox is that in the latter case the rejection class affects the classification process itself. Here, a hyperplane is generated that separates the foreground and background classes, so that no points in feature space are classified incorrectly. As for LearnNdimNorm, however, an overlap between the foreground and background classes is allowed. This affects the return value quality: the larger the overlap, the smaller this value.
Parameter
. foreground (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Foreground pixels to be trained.
. background (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Background pixels to be trained (rejection class).
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Multi-channel training image.
. metric (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Metric to be used.
Default Value : "euclid"
List of values : Metric ∈ {"euclid", "maximum"}
. distance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double / int / long)
Maximum cluster radius.
Default Value : 10.0
Suggested values : Distance ∈ {1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 10.0, 13.0, 17.0, 24.0, 30.0, 40.0}
Typical range of values : 0.0 ≤ Distance ≤ 511.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 1.0
Restriction : Distance > 0.0
. minNumberPercent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double / int / long)
The ratio of the number of pixels in a cluster to the total number of pixels (in percent) must be larger than
MinNumberPercent (otherwise the cluster is not output).
Default Value : 0.01
Suggested values : MinNumberPercent ∈ {0.001, 0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0}
Typical range of values : 0.0 ≤ MinNumberPercent ≤ 100.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : (0 ≤ MinNumberPercent) ∧ (MinNumberPercent ≤ 100)
. radius (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Cluster radii or half edge lengths.
. center (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Coordinates of all cluster centers.
. quality (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Overlap of the rejection class with the classified objects (1: no overlap).
Assertion : (0 ≤ Quality) ∧ (Quality ≤ 1)
Result
LearnNdimNorm returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior with respect to the
input images can be determined by setting the values of the flags ’no_object_result’ and ’empty_region_result’
with SetSystem. If necessary, an exception is raised.
Parallelization Information
LearnNdimNorm is local and processed completely exclusively without parallelization.
Possible Predecessors
MinMaxGray, SobelAmp, BinomialFilter, GaussImage, ReduceDomain, DiffOfGauss
Possible Successors
ClassNdimNorm, Connection, Dilation1, Erosion1, Opening, Closing, RankRegion,
ShapeTrans, Skeleton
Alternatives
LearnNdimBox, LearnClassBox
See also
ClassNdimNorm, ClassNdimBox, Histo2dim
References
P. Haberäcker, "Digitale Bildverarbeitung"; Hanser-Studienbücher, München, Wien, 1987
Module
Foundation
13.2 Edges
Htuple SobelSize,MinAmplitude,MaxDistance,MinLength;
Htuple RowBegin,ColBegin,RowEnd,ColEnd;
create_tuple(&SobelSize,1);
set_i(SobelSize,5,0);
create_tuple(&MinAmplitude,1);
set_i(MinAmplitude,32,0);
create_tuple(&MaxDistance,1);
set_i(MaxDistance,3,0);
create_tuple(&MinLength,1);
set_i(MinLength,10,0);
T_detect_edge_segments(Image,SobelSize,MinAmplitude,MaxDistance,MinLength,
&RowBegin,&ColBegin,&RowEnd,&ColEnd);
Result
DetectEdgeSegments returns 2 (H_MSG_TRUE) if all parameters are correct. If the input is empty, the behavior can be set via SetSystem(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
DetectEdgeSegments is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
SigmaImage, MedianImage
Possible Successors
SelectLines, PartitionLines, SelectLinesLongest, LinePosition, LineOrientation
Alternatives
SobelAmp, Threshold, Skeleton
Module
Foundation
’hvnms’ A point is labeled as a local maximum if its gray value is larger than or equal to the gray values within a search space of ±5 pixels, either horizontally or vertically. Non-maximum points are removed from the region; gray values remain unchanged.
’loc_max’ A point is labeled as a local maximum if its gray value is larger than or equal to the gray values of its
eight neighbors.
Parameter
NonmaxSuppressionDir suppresses all points in the regions of the image imgAmp whose gray values are not
local (directed) maxima. imgDir is a direction image giving the direction perpendicular to the local maximum
(Unit: 2 degrees, i.e., 50 degrees are coded as 25 in the image). Such images are returned, for example, by
EdgesImage. Two modes of operation can be selected:
’nms’ Each point in the image is tested whether its gray value is a local maximum perpendicular to its direction. In this mode only the two neighbors closest to the given direction are examined. If one of the two gray values is greater than the gray value of the point to be tested, the point is suppressed (i.e., removed from the input region; the corresponding gray value remains unchanged).
’inms’ Like ’nms’. However, the two gray values for the test are obtained by interpolation from four adjacent
points.
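The ’nms’ mode can be sketched as follows. This is a nearest-neighbor simplification without the sub-pixel interpolation of ’inms’; the dictionary-based image representation, the function name, and the coordinate convention of the step vector are all chosen for the sketch, not taken from HALCON:

```python
import math

def nms_directed(amp, edge_dir):
    """Simplified directed non-maximum suppression: a pixel survives only
    if neither of its two neighbors perpendicular to the edge direction
    has a larger amplitude. edge_dir is coded in 2-degree units, as in
    the operator (50 degrees -> 25). Missing neighbors count as
    amplitude -1, so border pixels can survive."""
    kept = set()
    for (r, c), a in amp.items():
        theta = math.radians(edge_dir[(r, c)] * 2)
        # step to the two neighbors across the edge (nearest-neighbor variant)
        dr, dc = round(math.cos(theta)), round(math.sin(theta))
        if amp.get((r + dr, c + dc), -1) <= a and amp.get((r - dr, c - dc), -1) <= a:
            kept.add((r, c))
    return kept
```

For a vertical edge ridge (direction coded as 45, i.e., 90 degrees), only the ridge column survives, while its horizontal neighbors are suppressed.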
Parameter
13.3 Regiongrowing
Fill gaps between regions (depending on gray value or color) or split overlapping regions.
ExpandGray closes gaps between the input regions, which resulted, for example, from the suppression of small regions in a segmentation operator (mode ’image’), or separates overlapping regions (mode ’region’). Both uses result from the expansion of regions. The operator works by adding a one pixel wide “strip” to a region, in which the gray values or colors differ from the gray values or colors of the neighboring pixels on the region's border by at most threshold (in each channel). For images of type ’cyclic’ (e.g., direction images), points with a gray value difference of at least 255 − threshold are also added to the output region.
The expansion takes place only in regions that are not designated as “forbidden” (parameter forbiddenArea). The number of iterations is determined by the parameter iterations. By passing ’maximal’, ExpandGray iterates until convergence, i.e., until no more changes occur. By passing 0 for this parameter, all non-overlapping regions are returned. The two modes of operation (’image’ and ’region’) differ in the following ways:
’image’ The input regions are expanded iteratively until they touch another region or the image border, or the
expansion stops because of too high gray value differences. Because ExpandGray processes all regions
simultaneously, gaps between regions are distributed evenly to all regions with a similar gray value. Over-
lapping regions are split by distributing the area of overlap evenly to both regions.
’region’ No expansion of the input regions is performed. Instead, only overlapping regions are split by distributing
the area of overlap evenly to regions having a matching gray value or color.
Attention
Because regions are only expanded into areas having a matching gray value or color, usually gaps will remain
between the output regions, i.e., the segmentation is not complete.
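The iterative one-pixel-strip expansion of the ’image’ mode can be sketched as follows. This is a single-channel, 4-connected simplification with invented data structures; it omits ’cyclic’ images, forbidden areas, and the splitting of overlapping regions:

```python
def expand_gray(image, labels, threshold, iterations):
    """Sketch of the ’image’ mode on a single-channel image stored as a
    dict (row, col) -> gray value. labels maps already-segmented pixels
    to a region id; each iteration adds a one pixel wide strip of
    unlabeled 4-neighbors whose gray value differs from the adjoining
    border pixel by at most threshold."""
    for _ in range(iterations):
        strip = {}
        for (r, c), region in labels.items():
            for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if nb in image and nb not in labels and nb not in strip:
                    if abs(image[nb] - image[(r, c)]) <= threshold:
                        strip[nb] = region
        if not strip:          # convergence, like Iterations = ’maximal’
            break
        labels.update(strip)
    return labels
```

Pixels with a gray value step larger than threshold stop the expansion, so the final segmentation may leave gaps, as noted above.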
Parameter
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
image.Display (win);
seg.Display (win);
HRegionArray exp = seg.ExpandGray1 (image, empty_region,
"maximal", "image", 32);
win.SetDraw ("margin");
win.SetColored (12);
exp.Display (win);
win.Click ();
return (0);
}
Result
ExpandGray always returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no regions given)
can be set via SetSystem(’no_object_result’,<Result>), the behavior in case of an empty input
region via SetSystem(’empty_region_result’,<Result>), and the behavior in case of an empty
result region via SetSystem(’store_empty_region’,<true/false>). If necessary, an exception is raised.
Parallelization Information
ExpandGray is reentrant and processed without parallelization.
Possible Predecessors
Connection, Regiongrowing, Pouring, ClassNdimNorm
Possible Successors
SelectShape
See also
ExpandGrayRef, ExpandRegion
Module
Foundation
Fill gaps between regions (depending on gray value or color) or split overlapping regions.
ExpandGrayRef closes gaps between the input regions, which resulted, for example, from the suppression of small regions in a segmentation operator (mode ’image’), or separates overlapping regions (mode ’region’). Both uses result from the expansion of regions. The operator works by adding a one pixel wide “strip” to a region, in which the gray values or colors differ from a reference gray value or color by at most threshold (in each channel). For images of type ’cyclic’ (e.g., direction images), points with a gray value difference of at least 255 − threshold are also added to the output region.
The expansion takes place only in regions that are not designated as “forbidden” (parameter forbiddenArea). The number of iterations is determined by the parameter iterations. By passing ’maximal’, ExpandGrayRef iterates until convergence, i.e., until no more changes occur. By passing 0 for this parameter, all non-overlapping regions are returned. The two modes of operation (’image’ and ’region’) differ in the following ways:
’image’ The input regions are expanded iteratively until they touch another region or the image border, or the
expansion stops because of too high gray value differences. Because ExpandGrayRef processes all re-
gions simultaneously, gaps between regions are distributed evenly to all regions with a similar gray value.
Overlapping regions are split by distributing the area of overlap evenly to both regions.
’region’ No expansion of the input regions is performed. Instead, only overlapping regions are split by distributing
the area of overlap evenly to regions having a matching gray value or color.
Attention
Because regions are only expanded into areas having a matching gray value or color, usually gaps will remain
between the output regions, i.e., the segmentation is not complete.
Parameter
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
win.SetDraw ("margin");
win.SetColored (12);
image.Display (win);
return (0);
}
Result
ExpandGrayRef always returns the value 2 (H_MSG_TRUE). The behavior in case of empty input (no re-
gions given) can be set via SetSystem(’no_object_result’,<Result>), the behavior in case of an
empty input region via SetSystem(’empty_region_result’,<Result>), and the behavior in case
of an empty result region via SetSystem(’store_empty_region’,<true/false>). If necessary, an exception is raised.
Parallelization Information
ExpandGrayRef is reentrant and processed without parallelization.
Possible Predecessors
Connection, Regiongrowing, Pouring, ClassNdimNorm
Possible Successors
SelectShape
See also
ExpandGray, ExpandRegion
Module
Foundation
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
win.SetDraw ("margin");
win.SetColored (12);
image.Display (win);
reg.Display (win);
win.Click ();
return (0);
}
Parallelization Information
ExpandLine is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
BinomialFilter, GaussImage, SmoothImage, AnisotropicDiffusion, MedianImage,
AffineTransImage, RotateImage
Possible Successors
Intersection, Opening, Closing
Alternatives
RegiongrowingMean, ExpandGray, ExpandGrayRef
Module
Foundation
For rectangles larger than one pixel, usually the images should be smoothed with a lowpass filter with a size of at least row × column before calling Regiongrowing (so that the gray values at the centers of the rectangles are “representative” for the whole rectangle). If the image contains little noise and the rectangles are small, the smoothing can be omitted in many cases.
The resulting regions are collections of rectangles of the chosen size row × column. Only regions containing at least minSize points are returned.
Regiongrowing is a very fast operation, and thus suited for time-critical applications.
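The raster scheme behind this speed can be sketched as follows: only every row-th/column-th pixel is tested, and neighboring raster points are merged when their gray values differ by at most tolerance. This is a conceptual illustration with invented names, not the HALCON implementation:

```python
def regiongrowing_sketch(gray, height, width, row, col, tolerance, min_size):
    """Grow regions over an (row x col) raster of test points. Each raster
    point stands for a row x col rectangle, so a region with n raster
    points covers n*row*col pixels. gray is a dict (r, c) -> value."""
    points = [(r, c) for r in range(row // 2, height, row)
              for c in range(col // 2, width, col)]
    seen, regions = set(), []
    for seed in points:
        if seed in seen:
            continue
        region, stack = [], [seed]
        seen.add(seed)
        while stack:  # flood fill over neighboring raster points
            r, c = stack.pop()
            region.append((r, c))
            for nb in ((r - row, c), (r + row, c), (r, c - col), (r, c + col)):
                if (nb in gray and nb not in seen
                        and abs(gray[nb] - gray[(r, c)]) <= tolerance):
                    seen.add(nb)
                    stack.append(nb)
        if len(region) * row * col >= min_size:   # MinSize filter
            regions.append(region)
    return regions
```

Because only one pixel per rectangle is examined, the runtime depends on the raster density rather than on the full image size, which is why smoothing the image beforehand matters: the tested center pixel must be representative of its rectangle.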
Attention
column and row are automatically converted to odd values if necessary.
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Input image.
. regions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; HRegion
Segmented regions.
. row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; HTuple (int / long)
Vertical distance between tested pixels (height of the raster).
Default Value : 3
Suggested values : Row ∈ {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21}
Typical range of values : 1 ≤ Row ≤ 99 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : (Row ≥ 1) ∧ odd(Row)
. column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; HTuple (int / long)
Horizontal distance between tested pixels (width of the raster).
Default Value : 3
Suggested values : Column ∈ {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21}
Typical range of values : 1 ≤ Column ≤ 99 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : (Column ≥ 1) ∧ odd(Column)
. tolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double / int / long)
Points with a gray value difference less than or equal to tolerance are accumulated into the same object.
Default Value : 6.0
Suggested values : Tolerance ∈ {1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 12.0, 14.0, 18.0, 25.0}
Typical range of values : 1.0 ≤ Tolerance ≤ 127.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 1.0
Restriction : (0 ≤ Tolerance) ∧ (Tolerance < 127)
. minSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Minimum size of the output regions.
Default Value : 100
Suggested values : MinSize ∈ {1, 5, 10, 20, 50, 100, 200, 500, 1000}
Typical range of values : 1 ≤ MinSize
Minimum Increment : 1
Recommended Increment : 5
Restriction : MinSize ≥ 1
Example (Syntax: HDevelop)
read_image(Image,’fabrik’)
mean_image(Image,Mean,Row,Column)
regiongrowing(Mean,Result,Row,Column,6.0,100)
Complexity
Let N be the number of found regions and M the number of points in one of these regions. Then the runtime
complexity is O(N ∗ log(M ) ∗ M ).
Result
Regiongrowing returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior with respect to
the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with SetSystem. If necessary, an exception is raised.
Parallelization Information
Regiongrowing is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
BinomialFilter, MeanImage, GaussImage, SmoothImage, MedianImage,
AnisotropicDiffusion
Possible Successors
SelectShape, ReduceDomain, SelectGray
Alternatives
RegiongrowingN, RegiongrowingMean, LabelToRegion
Module
Foundation
a = max {|g_A|},  b = max {|g_B|}
MinT ≤ |a − b| ≤ MaxT
’gray-max-ratio’: Ratio of the maximum gray values
a = max {|g_A|},  b = max {|g_B|}
MinT ≤ min(a/b, b/a) ≤ MaxT
’gray-min-diff’: Difference of the minimum gray values
a = min {|g_A|},  b = min {|g_B|}
MinT ≤ |a − b| ≤ MaxT
’gray-min-ratio’: Ratio of the minimum gray values
a = min {|g_A|},  b = min {|g_B|}
MinT ≤ min(a/b, b/a) ≤ MaxT
’variance-diff’: Difference of the variances over all gray values (channels)
MinT ≤ Var(g_B) / Var(g_A) ≤ MaxT
’mean-abs-diff’: Difference of the sum of absolute values over all gray values (channels)
a = Σ_{d,k: k<d} |g_A(d) − g_A(k)|
b = Σ_{d,k: k<d} |g_B(d) − g_B(k)|
MinT ≤ |a − b| / (number of summands) ≤ MaxT
’mean-abs-ratio’: Ratio of the sum of absolute values over all gray values (channels)
a = Σ_{d,k: k<d} |g_A(d) − g_A(k)|
b = Σ_{d,k: k<d} |g_B(d) − g_B(k)|
MinT ≤ min(a/b, b/a) ≤ MaxT
’max-abs-diff’: Difference of the maximum distance of the components
Parameter
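A few of the gray-value tests above can be sketched for the single-channel case. These are illustrative helper functions, not part of the HALCON API:

```python
def gray_max_diff(ga, gb, min_t, max_t):
    """’gray-max-diff’ test for two gray-value lists ga and gb of
    neighboring regions: difference of the maximum gray values."""
    a, b = max(abs(g) for g in ga), max(abs(g) for g in gb)
    return min_t <= abs(a - b) <= max_t

def gray_max_ratio(ga, gb, min_t, max_t):
    """’gray-max-ratio’ test: the smaller of the two ratios a/b and b/a
    must lie between MinT and MaxT."""
    a, b = max(abs(g) for g in ga), max(abs(g) for g in gb)
    return min_t <= min(a / b, b / a) <= max_t

def variance_diff(ga, gb, min_t, max_t):
    """’variance-diff’ test: ratio of the gray-value variances."""
    def var(g):
        m = sum(g) / len(g)
        return sum((x - m) ** 2 for x in g) / len(g)
    return min_t <= var(gb) / var(ga) <= max_t
```

Each test decides whether the two regions are similar enough to be merged, with MinT and MaxT bounding the computed statistic.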
13.4 Threshold
static void HOperatorSet.AutoThreshold ( HObject image,
out HObject regions, HTuple sigma )
AutoThreshold segments a single-channel image using multiple thresholding. First, the absolute histogram of
the gray values is determined. Then, relevant minima are extracted from the histogram, which are used successively
as parameters for a thresholding operation. The thresholds used for byte images are 0, 255, and all minima extracted
from the histogram (after the histogram has been smoothed with a Gaussian filter with standard deviation sigma).
For each gray value interval one region is generated. Thus, the number of regions is the number of minima +
1. For uint2 images, the above procedure is used analogously. However, here the highest threshold is 65535.
Furthermore, the value of sigma (virtually) refers to a histogram with 256 values, although internally histograms
with a higher resolution are used. This is done to facilitate switching between image types without having to
change the parameter sigma. The larger the value of sigma is chosen, the fewer regions will be extracted. This
operator is useful if the regions to be extracted exhibit similar gray values (homogeneous regions).
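The histogram-minima strategy described above can be sketched in a few lines of NumPy. This is an illustrative approximation only, not HALCON's implementation; the exact minima selection ("relevant" minima) may differ.

```python
import numpy as np

def auto_threshold(image, sigma):
    """Multi-threshold segmentation via minima of a smoothed histogram (sketch)."""
    # Absolute gray value histogram of a byte image.
    hist = np.bincount(image.ravel(), minlength=256).astype(float)

    # Smooth the histogram with a Gaussian of standard deviation sigma.
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x * x / (2.0 * sigma * sigma))
    kernel /= kernel.sum()
    smoothed = np.convolve(hist, kernel, mode="same")

    # Minima: interior bins below the left neighbor and not above the right one.
    i = np.arange(1, 255)
    minima = i[(smoothed[i] < smoothed[i - 1]) & (smoothed[i] <= smoothed[i + 1])]

    # Thresholds are 0, the minima, and 255: one region per gray value
    # interval, i.e. number of minima + 1 regions (some may be empty).
    bounds = np.concatenate(([0], minima, [256]))
    return [(image >= lo) & (image < hi)
            for lo, hi in zip(bounds[:-1], bounds[1:])]
```

Each returned boolean mask corresponds to one gray value interval; a larger sigma merges histogram minima and thus yields fewer regions, as stated above.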
Parameter
Parallelization Information
AutoThreshold is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
AnisotropicDiffusion, MedianImage, Illuminate
Possible Successors
Connection, SelectShape, SelectGray
Alternatives
BinThreshold, CharThreshold
See also
GrayHisto, GrayHistoAbs, HistoToThresh, SmoothFunct1dGauss, Threshold
Module
Foundation
HRegion HImage.BinThreshold ( )
Segment an image using an automatically determined threshold.
BinThreshold segments a single-channel gray value image using an automatically determined threshold. First,
the relative histogram of the gray values is determined. Then, relevant minima are extracted from the histogram,
which are used as parameters for a thresholding operation. In order to reduce the number of minima, the histogram
is smoothed with a Gaussian, as in AutoThreshold. The mask size is enlarged until there is only one minimum
in the smoothed histogram. The selected region contains the pixels with gray values from 0 to the minimum. This
operator is useful, for example, for the segmentation of dark characters on light paper.
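The "enlarge the mask until a single minimum remains" idea can be sketched as follows. This is a minimal illustration, not HALCON's algorithm; in particular, the smoothing schedule (doubling sigma) and the minima test are assumptions.

```python
import numpy as np

def bin_threshold(image):
    """Binarize by the single minimum of an increasingly smoothed histogram (sketch)."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    hist /= hist.sum()  # relative histogram

    sigma = 1.0
    minima = np.array([], dtype=int)
    while sigma <= 128:
        # Gaussian smoothing of the histogram; the mask grows with sigma.
        radius = min(100, max(1, int(3 * sigma)))
        x = np.arange(-radius, radius + 1)
        kernel = np.exp(-x * x / (2.0 * sigma * sigma))
        kernel /= kernel.sum()
        smoothed = np.convolve(hist, kernel, mode="same")
        i = np.arange(1, 255)
        minima = i[(smoothed[i] < smoothed[i - 1]) &
                   (smoothed[i] <= smoothed[i + 1])]
        if len(minima) <= 1:
            break
        sigma *= 2.0  # enlarge the smoothing mask and try again

    threshold = minima[0] if len(minima) == 1 else 128
    # Selected region: gray values from 0 up to the minimum (dark pixels).
    return image <= threshold
```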
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Input image.
. region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Dark regions of the image.
Example (Syntax: HDevelop)
Parallelization Information
BinThreshold is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
AnisotropicDiffusion, MedianImage, Illuminate
Possible Successors
Connection, SelectShape, SelectGray
Alternatives
AutoThreshold, CharThreshold
See also
GrayHisto, SmoothFunct1dGauss, Threshold
Module
Foundation
Parameter
Parallelization Information
CharThreshold is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
AnisotropicDiffusion, MedianImage, Illuminate
Possible Successors
Connection, SelectShape, SelectGray
Alternatives
BinThreshold, AutoThreshold, GrayHisto, SmoothFunct1dGauss, Threshold
Module
Foundation
This test is performed for all points of the domain (region) of image, intersected with the domain of the translated
pattern. All points fulfilling the above condition are aggregated in the output region. The two images may be
of different size. Typically, pattern is smaller than image.
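The exact tolerance condition is stated earlier in the manual (outside this excerpt). The sketch below therefore makes two explicit assumptions: the per-pixel test has the form diffLowerBound ≤ g_image − (g_pattern − grayOffset) ≤ diffUpperBound, and ’diff_inside’/’diff_outside’ select the pixels inside/outside that tolerance band. Only the overlap of the image domain with the translated pattern domain is tested.

```python
import numpy as np

def check_difference(image, pattern, mode, diff_lower, diff_upper,
                     gray_offset, add_row, add_col):
    """Tolerance-band comparison of an image against a translated pattern (sketch)."""
    h, w = image.shape
    ph, pw = pattern.shape
    selected = np.zeros((h, w), dtype=bool)

    # Intersect the image domain with the translated pattern domain.
    r0, r1 = max(0, add_row), min(h, add_row + ph)
    c0, c1 = max(0, add_col), min(w, add_col + pw)
    if r0 >= r1 or c0 >= c1:
        return selected

    img = image[r0:r1, c0:c1].astype(int)
    pat = pattern[r0 - add_row:r1 - add_row, c0 - add_col:c1 - add_col].astype(int)

    diff = img - (pat - gray_offset)          # assumed form of the test
    inside = (diff >= diff_lower) & (diff <= diff_upper)
    if mode == "diff_inside":
        selected[r0:r1, c0:c1] = inside       # similar pixels (assumed mapping)
    else:  # "diff_outside"
        selected[r0:r1, c0:c1] = ~inside      # differing pixels (assumed mapping)
    return selected
```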
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Input image.
. pattern (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Comparison image.
. selected (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Points in which the two images are similar/different.
. mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Mode: return similar or different pixels.
Default Value : "diff_outside"
Suggested values : Mode ∈ {"diff_inside", "diff_outside"}
. diffLowerBound (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .number ; HTuple (int / long)
Lower bound of the tolerated gray value difference.
Default Value : -5
Suggested values : DiffLowerBound ∈ {0, -1, -2, -3, -5, -7, -10, -12, -15, -17, -20, -25, -30}
Typical range of values : -255 ≤ DiffLowerBound ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 2
Restriction : (-255 ≤ DiffLowerBound) ∧ (DiffLowerBound ≤ 255)
. diffUpperBound (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .number ; HTuple (int / long)
Upper bound of the tolerated gray value difference.
Default Value : 5
Suggested values : DiffUpperBound ∈ {0, 1, 2, 3, 5, 7, 10, 12, 15, 17, 20, 25, 30}
Typical range of values : -255 ≤ DiffUpperBound ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 2
Restriction : (-255 ≤ DiffUpperBound) ∧ (DiffUpperBound ≤ 255)
. grayOffset (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (int / long)
Offset gray value subtracted from the input image.
Default Value : 0
Suggested values : GrayOffset ∈ {-30, -25, -20, -17, -15, -12, -10, -7, -5, -3, -2, -1, 0, 1, 2, 3, 5, 7, 10, 12,
15, 17, 20, 25, 30}
Typical range of values : -255 ≤ GrayOffset ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 2
Restriction : (-255 ≤ GrayOffset) ∧ (GrayOffset ≤ 255)
. addRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; HTuple (int / long)
Row coordinate by which the comparison image is translated.
Default Value : 0
Suggested values : AddRow ∈ {-200, -100, -20, -10, 0, 10, 20, 100, 200}
Typical range of values : -32000 ≤ AddRow ≤ 32000 (lin)
Minimum Increment : 1
Recommended Increment : 1
/* Simulation of dual_threshold */
dual_threshold(Laplace,Result,MinS,MinG,Threshold):
threshold(Laplace,Tmp1,Threshold,999999)
connection(Tmp1,Tmp2)
select_shape(Tmp2,Tmp3,’area’,’and’,MinS,999999)
select_gray(Laplace,Tmp3,Tmp4,’max’,’and’,MinG,999999)
threshold(Laplace,Tmp5,-999999,-Threshold)
connection(Tmp5,Tmp6)
select_shape(Tmp6,Tmp7,’area’,’and’,MinS,999999)
select_gray(Laplace,Tmp7,Tmp8,’min’,’and’,-999999,-MinG)
concat_obj(Tmp4,Tmp8,Result)
Result
DualThreshold returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior with respect to
the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with SetSystem. If necessary, an exception is raised.
Parallelization Information
DualThreshold is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
MinMaxGray, SobelAmp, BinomialFilter, GaussImage, ReduceDomain, DiffOfGauss,
SubImage, DerivateGauss, LaplaceOfGauss, Laplace, ExpandRegion
Possible Successors
Connection, Dilation1, Erosion1, Opening, Closing, RankRegion, ShapeTrans,
Skeleton
Alternatives
Threshold, DynThreshold, CheckDifference
See also
Connection, SelectShape, SelectGray
Module
Foundation
go ≥ gt + offset
go ≤ gt − offset
Typically, the threshold images are smoothed versions of the original image (e.g., by applying MeanImage,
BinomialFilter, GaussImage, etc.). Then the effect of DynThreshold is similar to applying
Threshold to a highpass-filtered version of the original image (see HighpassImage).
With DynThreshold, contours of an object can be extracted, where the objects’ size (diameter) is determined
by the mask size of the lowpass filter and the amplitude of the objects’ edges:
The larger the mask size is chosen, the larger the found regions become. As a rule of thumb, the mask size should
be about twice the diameter of the objects to be extracted. It is important not to set the parameter offset to zero
because in this case too many small regions will be found (noise). Values between 5 and 40 are a useful choice.
The larger offset is chosen, the smaller the extracted regions become.
All points of the input image fulfilling the above condition are stored jointly in one region. If necessary, the
connected components can be obtained by calling Connection.
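The two threshold conditions quoted above translate directly into NumPy. The box filter below is a hypothetical stand-in for MeanImage; both function names are illustrative, not HALCON API.

```python
import numpy as np

def mean_filter(image, size):
    """Box filter as an illustrative stand-in for MeanImage."""
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    h, w = image.shape
    out = np.zeros((h, w), dtype=float)
    for dr in range(size):
        for dc in range(size):
            out += padded[dr:dr + h, dc:dc + w]
    return out / (size * size)

def dyn_threshold(image, threshold_image, offset, light_dark="light"):
    """Dynamic thresholding against a (smoothed) threshold image.

    Implements the two conditions quoted above:
        'light': g_o >= g_t + offset
        'dark' : g_o <= g_t - offset
    """
    g_o = image.astype(float)
    g_t = np.asarray(threshold_image, dtype=float)
    if light_dark == "light":
        return g_o >= g_t + offset
    return g_o <= g_t - offset
```

A typical call smooths the input with a mask about twice the object diameter and then compares against it, e.g. `dyn_threshold(img, mean_filter(img, 5), 10, "light")`.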
Attention
If offset is chosen between −1 and 1, a very noisy region is usually generated, requiring a large amount of memory. If offset is chosen too large (say, > 60), it may happen that no points fulfill the threshold condition (i.e., an empty region is returned). If offset is chosen too small (say, < −60), it may happen that all points fulfill the threshold condition (i.e., a full region is returned).
Parameter
Complexity
Let A be the area of the input region. Then the runtime complexity is O(A).
Result
DynThreshold returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior with respect to
the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with SetSystem. If necessary, an exception is raised.
Parallelization Information
DynThreshold is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
MeanImage, SmoothImage, BinomialFilter, GaussImage
Possible Successors
Connection, SelectShape, ReduceDomain, SelectGray, RankRegion, Dilation1,
Opening, Erosion1
Alternatives
CheckDifference, Threshold
See also
HighpassImage, SubImage
Module
Foundation
minGray ≤ g ≤ maxGray .
To reduce processing time, the selection is done in two steps: First, all pixels along rows and columns with distance minSize are processed. In the next step, the neighborhood (of size minSize × minSize) of all previously selected points is processed.
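The two-step strategy can be sketched as follows. This is a rough illustration of the speed/completeness trade-off, not the actual algorithm: objects smaller than minSize can be missed entirely, which is exactly why minSize is called the minimum size of objects to be extracted.

```python
import numpy as np

def fast_threshold(image, min_gray, max_gray, min_size):
    """Two-step thresholding: coarse grid scan, then neighborhood refinement (sketch)."""
    h, w = image.shape
    in_range = (image >= min_gray) & (image <= max_gray)
    selected = np.zeros((h, w), dtype=bool)

    # Step 1: test only rows and columns spaced min_size apart.
    hits = []
    for r in range(0, h, min_size):
        for c in range(w):
            if in_range[r, c]:
                hits.append((r, c))
    for c in range(0, w, min_size):
        for r in range(h):
            if in_range[r, c]:
                hits.append((r, c))

    # Step 2: examine the min_size x min_size neighborhood of every hit.
    half = min_size // 2
    for r, c in hits:
        r0, r1 = max(0, r - half), min(h, r + half + 1)
        c0, c1 = max(0, c - half), min(w, c + half + 1)
        selected[r0:r1, c0:c1] |= in_range[r0:r1, c0:c1]
    return selected
```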
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Input image.
. region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Segmented regions.
. minGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double / int / long)
Lower threshold for the gray values.
Default Value : 128
Suggested values : MinGray ∈ {0.0, 10.0, 30.0, 64.0, 128.0, 200.0, 220.0, 255.0}
Typical range of values : 0.0 ≤ MinGray ≤ 255.0 (lin)
Minimum Increment : 1
Recommended Increment : 5.0
. maxGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double / int / long)
Upper threshold for the gray values.
Default Value : 255.0
Suggested values : MaxGray ∈ {0.0, 10.0, 30.0, 64.0, 128.0, 200.0, 220.0, 255.0}
Typical range of values : 0.0 ≤ MaxGray ≤ 255.0 (lin)
Minimum Increment : 1
Recommended Increment : 5.0
. minSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (int / long)
Minimum size of objects to be extracted.
Default Value : 20
Suggested values : MinSize ∈ {5, 10, 15, 20, 25, 30, 40, 50, 60, 70, 100}
Typical range of values : 2 ≤ MinSize ≤ 200 (lin)
Minimum Increment : 1
Recommended Increment : 2
Complexity
Let A be the area of the output region and height the height of image. Then the runtime complexity is O(A + height/minSize).
Result
FastThreshold returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior with respect to
the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with SetSystem. If necessary, an exception is raised.
Parallelization Information
FastThreshold is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
HistoToThresh, MinMaxGray, SobelAmp, BinomialFilter, GaussImage, ReduceDomain,
FillInterlace
Possible Successors
Connection, Dilation1, Erosion1, Opening, Closing, RankRegion, ShapeTrans,
Skeleton
Alternatives
Threshold, GenGridRegion, DilationRectangle1, DynThreshold
See also
Class2dimSup, HysteresisThreshold
Module
Foundation
from the histogram. Before the thresholds are determined, the histogram is smoothed with a Gaussian smoothing
function.
HistoToThresh can process the absolute and relative histograms that are returned by GrayHisto. Note,
however, that here only byte images should be used, because otherwise the returned thresholds cannot easily be
transformed to the thresholds for the actual image. For images of type uint2, the histograms should be computed
with GrayHistoAbs, since then the thresholds can be transformed simply by multiplying them with the quantization selected in GrayHistoAbs. For uint2 images, it is important to ensure that the quantization is chosen in such a manner that the histogram still contains salient information. For example, a
640×480 image with 16 bits per pixel gray value resolution contains on average only 307200/65536 = 4.7 entries
per histogram bin, i.e., the histogram is too sparsely populated to derive any useful statistics from it. To be able to
extract useful thresholds from such a histogram, sigma would have to be set to an extremely large value, which
would lead to very high run times and numerical problems. The quantization in GrayHistoAbs should therefore
normally be chosen such that the histogram contains a maximum of 1024 entries. Hence, for images with more than
10 bits per pixel, the quantization must be chosen greater than 1. The histogram returned by GrayHistoAbs
should furthermore be restricted to the parts that contain salient information. For example, for an image with 12
bits per pixel, the quantization should be set to 4. Only the first 1024 entries of the computed histogram (which
contains 16384 entries in this example) should be passed to HistoToThresh. Finally, minThresh must be
multiplied by 4 (i.e., the quantization), while maxThresh must be multiplied by 4 and increased by 3 (i.e., the
quantization minus 1).
Parameter
. histogramm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . histogram-array ; HTuple (int / long / double)
Gray value histogram.
. sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double)
Sigma for the Gaussian smoothing of the histogram.
Default Value : 2.0
Suggested values : Sigma ∈ {0.5, 1.0, 2.0, 3.0, 4.0, 5.0}
Typical range of values : 0.5 ≤ Sigma ≤ 30.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.2
. minThresh (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; HTuple (int / long)
Minimum thresholds.
. maxThresh (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; HTuple (int / long)
Maximum thresholds.
Example (Syntax: HDevelop)
/* Calculate thresholds from a 12 bit uint2 image and threshold the image. */
gray_histo_abs (Image, Image, 4, AbsoluteHisto)
AbsoluteHisto := AbsoluteHisto[0:1023]
histo_to_thresh (AbsoluteHisto, 16, MinThresh, MaxThresh)
MinThresh := MinThresh*4
MaxThresh := MaxThresh*4+3
threshold (Image, Region, MinThresh, MaxThresh)
Parallelization Information
HistoToThresh is reentrant and processed without parallelization.
Possible Predecessors
GrayHisto
Possible Successors
Threshold
See also
AutoThreshold, BinThreshold, CharThreshold
Module
Foundation
minGray ≤ g ≤ maxGray .
All points of an image fulfilling the condition are returned as one region. If more than one gray value interval is
passed (tuples for minGray and maxGray), one separate region is returned for each interval.
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Input image.
. region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Segmented region.
. minGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; HTuple (double / int / long)
Lower threshold for the gray values.
Default Value : 128.0
Suggested values : MinGray ∈ {0.0, 10.0, 30.0, 64.0, 128.0, 200.0, 220.0, 255.0}
. maxGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; HTuple (double / int / long)
Upper threshold for the gray values.
Default Value : 255.0
Suggested values : MaxGray ∈ {0.0, 10.0, 30.0, 64.0, 128.0, 200.0, 220.0, 255.0}
Restriction : MaxGray ≥ MinGray
Example (Syntax: HDevelop)
read_image(Image,’fabrik’)
sobel_dir(Image,EdgeAmp,EdgeDir,’sum_abs’,3)
threshold(EdgeAmp,Seg,50,255,2)
skeleton(Seg,Rand)
connection(Rand,Lines)
select_shape(Lines,Edges,’area’,’and’,10,1000000)
Complexity
Let A be the area of the input region. Then the runtime complexity is O(A).
Result
Threshold returns 2 (H_MSG_TRUE) if all parameters are correct. The behavior with respect to the
input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with SetSystem. If necessary, an exception is raised.
Parallelization Information
Threshold is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
HistoToThresh, MinMaxGray, SobelAmp, BinomialFilter, GaussImage, ReduceDomain,
FillInterlace
Possible Successors
Connection, Dilation1, Erosion1, Opening, Closing, RankRegion, ShapeTrans,
Skeleton
Alternatives
Class2dimSup, HysteresisThreshold, DynThreshold, BinThreshold, CharThreshold,
AutoThreshold, DualThreshold
See also
ZeroCrossing, BackgroundSeg, Regiongrowing
Module
Foundation
read_image(Image,’fabrik’)
threshold_sub_pix(Image,Border,35)
disp_xld(Border,WindowHandle)
Result
ThresholdSubPix usually returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
ThresholdSubPix is reentrant and processed without parallelization.
Alternatives
Threshold
See also
ZeroCrossingSubPix
Module
2D Metrology
HRegion HImage.ZeroCrossing ( )
Extract zero crossings from an image.
ZeroCrossing returns the zero crossings of the input image as a region. A pixel is accepted as a zero crossing
if its gray value (in image) is zero, or if at least one of its 4-neighbors has a gray value of the opposite sign.
This operator is intended to be used after edge operators returning the second derivative of the image (e.g.,
LaplaceOfGauss), which were possibly followed by a smoothing operator. In this case, the zero crossings
are (candidates for) edges.
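The pixel-precise test above can be sketched in NumPy (illustrative only; whether a zero-valued neighbor counts as a "different sign" is left out here, since such pixels are selected by the first condition anyway):

```python
import numpy as np

def zero_crossing(image):
    """Pixels that are zero or have a 4-neighbor of opposite sign (sketch)."""
    img = image.astype(int)
    result = (img == 0)
    sign = np.sign(img)
    # A strictly negative product of adjacent signs marks a sign change.
    result[:-1, :] |= (sign[:-1, :] * sign[1:, :]) < 0   # vs. neighbor below
    result[1:, :]  |= (sign[1:, :] * sign[:-1, :]) < 0   # vs. neighbor above
    result[:, :-1] |= (sign[:, :-1] * sign[:, 1:]) < 0   # vs. right neighbor
    result[:, 1:]  |= (sign[:, 1:] * sign[:, :-1]) < 0   # vs. left neighbor
    return result
```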
Parameter
HXLDCont HImage.ZeroCrossingSubPix ( )
Extract zero crossings from an image with subpixel accuracy.
ZeroCrossingSubPix extracts the zero crossings of the input image image with subpixel accuracy. The
extracted zero crossings are returned as XLD-contours in zeroCrossings. Thus, ZeroCrossingSubPix
can be used as a sub-pixel precise edge extractor if the input image is a Laplace-filtered image (see Laplace,
LaplaceOfGauss, DerivateGauss).
For the extraction, the input image is regarded as a surface, in which the gray values are interpolated bilinearly
between the centers of the individual pixels. Consistent with the surface thus defined, zero crossing lines are
extracted for each pixel and linked into topologically sound contours. This means that the zero crossing contours
are correctly split at junction points. If the image contains extended areas of constant gray value 0, only the border
of such areas is returned as zero crossings.
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; HImage
Input image.
. zeroCrossings (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; HXLDCont
Extracted zero crossings.
Example (Syntax: HDevelop)
Result
ZeroCrossingSubPix usually returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
ZeroCrossingSubPix is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Laplace, LaplaceOfGauss, DiffOfGauss, DerivateGauss
Alternatives
ZeroCrossing
See also
ThresholdSubPix
Module
2D Metrology
13.5 Topography
Parallelization Information
CriticalPointsSubPix is reentrant and processed without parallelization.
Possible Successors
GenCrossContourXld, DispCross
Alternatives
LocalMinSubPix, LocalMaxSubPix, SaddlePointsSubPix
See also
LocalMin, LocalMax, Plateaus, PlateausCenter, Lowlands, LowlandsCenter
Module
Foundation
HRegion HImage.LocalMax ( )
Detect all local maxima in an image.
LocalMax extracts all points from image having a gray value larger than the gray value of all its
neighbors and returns them in localMaxima. The neighborhood used can be set by SetSystem
(’neighborhood’,<4/8>).
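A minimal sketch of the strict local-maximum test, with the neighborhood as a parameter. How HALCON treats border pixels is not specified here; this sketch pads with the smallest representable value, so border pixels can qualify.

```python
import numpy as np

def local_max(image, neighborhood=8):
    """Points whose gray value is strictly larger than all neighbors (sketch)."""
    img = image.astype(np.int64)
    h, w = img.shape
    # Pad with the minimum value so border pixels have defined neighbors.
    pad = np.full((h + 2, w + 2), np.iinfo(np.int64).min, dtype=np.int64)
    pad[1:-1, 1:-1] = img
    if neighborhood == 8:
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]
    else:  # 4-neighborhood
        offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    result = np.ones((h, w), dtype=bool)
    for dr, dc in offsets:
        result &= img > pad[1 + dr:1 + dr + h, 1 + dc:1 + dc + w]
    return result
```

LocalMin is the same test with the comparison reversed.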
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Input image.
. localMaxima (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .region(-array) ; HRegion
Extracted local maxima as a region.
Number of elements : LocalMaxima = Image
Example (Syntax: C++)
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
image.Display (win);
win.SetColored (12);
maxi.Display (win);
win.Click ();
return (0);
Parallelization Information
LocalMax is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
BinomialFilter, GaussImage, SmoothImage
Possible Successors
GetRegionPoints, Connection
Alternatives
NonmaxSuppressionAmp, Plateaus, PlateausCenter
See also
Monotony, TopographicSketch, CornerResponse, TextureLaws
Module
Foundation
Parallelization Information
LocalMaxSubPix is reentrant and processed without parallelization.
Possible Successors
GenCrossContourXld, DispCross
Alternatives
CriticalPointsSubPix, LocalMinSubPix, SaddlePointsSubPix
See also
LocalMax, Plateaus, PlateausCenter
Module
Foundation
HRegion HImage.LocalMin ( )
Detect all local minima in an image.
LocalMin extracts all points from image having a gray value smaller than the gray value of all its
neighbors and returns them in localMinima. The neighborhood used can be set by SetSystem
(’neighborhood’,<4/8>).
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Image to be processed.
. localMinima (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .region(-array) ; HRegion
Extracted local minima as regions.
Number of elements : LocalMinima = Image
Example (Syntax: C++)
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
image.Display (win);
win.SetColored (12);
mins.Display (win);
win.Click ();
return (0);
Parallelization Information
LocalMin is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
BinomialFilter, GaussImage, SmoothImage
Possible Successors
GetRegionPoints, Connection
Alternatives
GraySkeleton, Lowlands, LowlandsCenter
See also
Monotony, TopographicSketch, CornerResponse, TextureLaws
Module
Foundation
Parallelization Information
LocalMinSubPix is reentrant and processed without parallelization.
Possible Successors
GenCrossContourXld, DispCross
Alternatives
CriticalPointsSubPix, LocalMaxSubPix, SaddlePointsSubPix
See also
LocalMin, Lowlands, LowlandsCenter
Module
Foundation
HRegion HImage.Lowlands ( )
Detect all gray value lowlands.
Lowlands extracts all points from image with a gray value less than or equal to the gray value of its neighbors
(8-neighborhood) and returns them in lowlands. Each lowland is returned as a separate region.
Parameter
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
image.Display (win);
win.SetColored (12);
mins.Display (win);
win.Click ();
return (0);
}
Parallelization Information
Lowlands is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
BinomialFilter, GaussImage, SmoothImage
Possible Successors
AreaCenter, GetRegionPoints, SelectShape
Alternatives
LowlandsCenter, GraySkeleton, LocalMin
See also
Monotony, TopographicSketch, CornerResponse, TextureLaws
Module
Foundation
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
image.Display (win);
win.SetColored (12);
mins.Display (win);
win.Click ();
return (0);
}
Parallelization Information
LowlandsCenter is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
BinomialFilter, GaussImage, SmoothImage
Possible Successors
AreaCenter, GetRegionPoints, SelectShape
Alternatives
Lowlands, GraySkeleton, LocalMin
See also
Monotony, TopographicSketch, CornerResponse, TextureLaws
Module
Foundation
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
image.Display (win);
win.SetColored (12);
maxi.Display (win);
win.Click ();
return (0);
}
Parallelization Information
Plateaus is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
BinomialFilter, GaussImage, SmoothImage
Possible Successors
AreaCenter, GetRegionPoints, SelectShape
Alternatives
PlateausCenter, NonmaxSuppressionAmp, LocalMax
See also
Monotony, TopographicSketch, CornerResponse, TextureLaws
Module
Foundation
HRegion HImage.PlateausCenter ( )
Detect the centers of all gray value plateaus.
PlateausCenter extracts all points from image with a gray value greater than or equal to the gray value of its
neighbors (8-neighborhood) and returns them in plateaus. If several of these points are connected (a plateau),
their center of gravity is returned. Each plateau center is returned as a separate region.
Parameter
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
image.Display (win);
win.SetColored (12);
maxi.Display (win);
win.Click ();
return (0);
}
Parallelization Information
PlateausCenter is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
BinomialFilter, GaussImage, SmoothImage
Possible Successors
AreaCenter, GetRegionPoints, SelectShape
Alternatives
Plateaus, NonmaxSuppressionAmp, LocalMax
See also
Monotony, TopographicSketch, CornerResponse, TextureLaws
Module
Foundation
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Input image.
. regions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; HRegion
Segmented regions.
. mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Mode of operation.
Default Value : "all"
List of values : Mode ∈ {"all", "maxima", "regions"}
. minGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
All gray values smaller than this threshold are disregarded.
Default Value : 0
Suggested values : MinGray ∈ {0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110}
Typical range of values : 0 ≤ MinGray ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : MinGray ≥ 0
. maxGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
All gray values larger than this threshold are disregarded.
Default Value : 255
Suggested values : MaxGray ∈ {100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 200, 210, 220, 230, 240,
250, 255}
Typical range of values : 0 ≤ MaxGray ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : (MaxGray ≤ 255) ∧ (MaxGray > MinGray)
Example (Syntax: HDevelop)
/* Segmentation of a histogram */
read_image(Image,’monkey’)
texture_laws(Image,Texture,’el’,2,5)
draw_region(Region,draw_region)
reduce_domain(Texture,Region,Testreg)
histo_2dim(Testreg,Texture,Region,Histo)
pouring(Histo,Seg,’all’,0,255)
Complexity
Let N be the number of pixels in the input image and M be the number of found segments, where the enclosing
rectangle of the segment i contains mi pixels. Furthermore, let Ki be the number of chords in segment i. Then the
runtime complexity is
Result
Pouring usually returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
Pouring is processed under mutual exclusion against itself and without parallelization.
Possible Predecessors
BinomialFilter, GaussImage, SmoothImage, MeanImage
Alternatives
Watersheds, LocalMax
See also
Histo2dim, ExpandRegion, ExpandGray, ExpandGrayRef
Module
Foundation
Result
SaddlePointsSubPix returns 2 (H_MSG_TRUE) if all parameters are correct and no error oc-
curs during the execution. If the input is empty the behavior can be set via SetSystem
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
SaddlePointsSubPix is reentrant and processed without parallelization.
Possible Successors
GenCrossContourXld, DispCross
Alternatives
CriticalPointsSubPix, LocalMinSubPix, LocalMaxSubPix
Module
Foundation
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"
HRegion watersheds;
HRegionArray basins = gauss.Watersheds (&watersheds);
win.SetColored (12);
basins.Display (win);
win.Click ();
return (0);
}
Result
Watersheds always returns 2 (H_MSG_TRUE). The behavior with respect to the input images and out-
put regions can be determined by setting the values of the flags ’no_object_result’, ’empty_region_result’, and
’store_empty_region’ with SetSystem. If necessary, an exception is raised.
Parallelization Information
Watersheds is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
BinomialFilter, GaussImage, SmoothImage, InvertImage
Possible Successors
ExpandRegion, SelectShape, ReduceDomain, Opening
Alternatives
WatershedsThreshold, Pouring
References
L. Vincent, P. Soille: “Watersheds in Digital Space: An Efficient Algorithm Based on Immersion Simulations”;
IEEE Transactions on Pattern Analysis and Machine Intelligence; vol. 13, no. 6; pp. 583-598; 1991.
Module
Foundation
Parameter
System
14.1 Database
static void HOperatorSet.CountRelation ( HTuple relationName,
out HTuple numOfTuples )
’image’: Image matrices. One matrix may also be the component of more than one image (no redundant storage).
’region’: Regions (the full and the empty region are always available). One region may of course also be the
component of more than one image object (no redundant storage).
’XLD’: eXtended Line Description: contours, polygons, parallels, lines, etc. XLD data types do not have gray
values and are stored with subpixel accuracy.
’object’: Iconic objects. Composed of a region (called region) and optionally image matrices (called image).
’tuple’: In the compact mode, tuples of iconic objects are stored as a surrogate in this relation. Instead of working
with the individual object keys, only this tuple key is used. It depends on the host language, whether the
objects are passed individually (Prolog and C++) or as tuples (C, Smalltalk, Lisp, OPS-5).
Certain database objects are created by the operator ResetObjDb and therefore have to be available
at all times (the undefined gray value component, the objects ’full’ (FULL_REGION in HALCON/C) and
’empty’ (EMPTY_REGION in HALCON/C) as well as the empty and full regions contained therein). When calling
GetChannelInfo, the operator therefore also appears as ’creator’ of the full and empty region.
The procedure can be used, for example, to check the completeness of the ClearObj operation.
1012 CHAPTER 14. SYSTEM
Parameter
. relationName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Relation of interest of the HALCON database.
Default Value : "object"
List of values : RelationName ∈ {"image", "region", "XLD", "object", "tuple"}
. numOfTuples (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Number of tuples in the relation.
Example (Syntax: HDevelop)
reset_obj_db(512,512,3)
count_relation(’image’,I1)
count_relation(’region’,R1)
count_relation(’XLD’,X1)
count_relation(’object’,O1)
count_relation(’tuple’,T1)
read_image(X,’monkey’)
count_relation(’image’,I2)
count_relation(’region’,R2)
count_relation(’XLD’,X2)
count_relation(’object’,O2)
count_relation(’tuple’,T2)
/*
Result: I1 = 1 (undefined image)
R1 = 2 (full and empty region)
X1 = 0 (no XLD data)
O1 = 2 (full and empty objects)
T1 = 0 (always 0 in the normal mode)
*/
Result
If the parameter is correct, the operator CountRelation returns the value 2 (H_MSG_TRUE). Otherwise an
exception is raised.
Parallelization Information
CountRelation is reentrant and processed without parallelization.
Possible Predecessors
ResetObjDb
See also
ClearObj
Module
Foundation
modules, a key is generated that is needed for the license manager. GetModules is normally called at the end
of a program to check the used modules.
Parameter
14.2 Error-Handling
Parameter
. errorNumber (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Number of the HALCON error.
Restriction : (1 ≤ ErrorNumber) ∧ (ErrorNumber ≤ 36000)
. errorText (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Corresponding error text.
Example (Syntax: C)
Herror err;
char message[MAX_STRING];
set_check("~give_error");
err = send_region(region,socket_id);
set_check("give_error");
if (err != H_MSG_TRUE) {
get_error_text((long)err,message);
fprintf(stderr,"my error message: %s\n",message);
exit(1);
}
Result
The operator GetErrorText always returns the value 2 (H_MSG_TRUE).
Parallelization Information
GetErrorText is reentrant and processed without parallelization.
Possible Predecessors
SetCheck
See also
SetCheck
Module
Foundation
Possible Predecessors
ResetObjDb
See also
SetSpy, QuerySpy
Module
Foundation
’color’: If this control mode is activated, only colors may be used which are supported by the display for the
currently active window. Otherwise an error message is displayed.
In case of deactivated control mode and non existent colors, the nearest color is used (see also SetColor,
SetGray, SetRgb).
’text’: If this control mode is activated, the coordinates are checked when setting the text cursor as well
as when displaying strings ( WriteString) to determine whether part of a character would lie outside the
window frame (which is not forbidden in principle by the system).
If the control mode is deactivated, the text is clipped at the window frame.
’data’: (For program development)
Checks the consistency of image objects (regions and gray value components).
’interface’: If this control mode is activated, the interface between the host language and the HALCON proce-
dures is checked during execution (e.g., typing and counting of the values).
’database’: This is a consistency check of the database (e.g., it checks whether an object that is to be deleted
actually exists).
’give_error’: Determines whether errors shall trigger exceptions or not. If this control mode is deactivated,
the application program must provide suitable error handling itself. Please note that errors which are
not reported usually lead to undefined output parameters, which may cause unpredictable behavior of the
program. Details about how to handle exceptions in the different HALCON language interfaces can be found
in the HALCON Programmer’s Guide and the HDevelop User’s Guide.
’father’: If this control mode is activated when calling the operators OpenWindow or OpenTextwindow,
HALCON allows only the number of another HALCON window as the father window of the new window;
otherwise it also allows IDs of operating system windows as the father window.
This control mode is only relevant for windows of type ’X-Window’ and ’WIN32-Window’.
’region’: (For program development)
Checks the consistency of chords (this may lead to a notable speed reduction of routines).
’clear’: Normally, if a list of objects is to be deleted using ClearObj, an exception is raised in case
individual objects do not exist or no longer exist. If the ’clear’ mode is activated, such objects are ignored.
’memory’: (For program development)
Checks the memory blocks freed by the HALCON memory management for consistency and for overwriting of
memory boundaries.
’all’: Activates all control modes.
’none’: Deactivates all control modes.
’default’: Default settings: [’give_error’,’database’]
Parameter
SetSpy(’mode’,’on’),
and deactivated by using
SetSpy(’mode’,’off’).
The debugging tool can also be activated with the help of the environment variable HALCONSPY. Defining
this variable corresponds to calling SetSpy with ’mode’ and ’on’.
The following control modes can be set (in any desired combination) with the help of
classVal/value:
’operator’ When a routine is called, its name and the names of its parameters will be given (in TRIAS notation).
Value: ’on’ or ’off’
default: ’off’
’input_control’ When a routine is called, the names and values of the input control parameters will be given.
Value: ’on’ or ’off’
default: ’off’
’output_control’ When a routine is called, the names and values of the output control parameters are given.
Value: ’on’ or ’off’
default: ’off’
’parameter_values’ Additional information on ’input_control’ and ’output_control’: indicates how many values
per parameter shall be displayed at most (maximum tuple length of the output).
Value: tuple length (integer)
default: 4
’db’ Information concerning the 4 relations in the HALCON database. This is especially valuable when looking
for forgotten ClearObj calls.
Value: ’on’ or ’off’
default: ’off’
’input_gray_window’ Any read access to the gray-value component of an (input) image object causes the
gray-value component to be shown in the indicated window (Window-ID; ’none’ deactivates this control).
Value: Window-ID (integer) or ’none’
default: ’none’
’input_region_window’ Any read access to the region of an (input) iconic object causes this region to be
shown in the indicated window (Window-ID; ’none’ deactivates this control).
Value: Window-ID (integer) or ’none’
default: ’none’
’input_xld_window’ Any read access to an XLD object causes it to be shown in the indicated window (Window-ID;
’none’ deactivates this control).
Value: Window-ID (integer) or ’none’
default: ’none’
’time’ Processing time of the operator
Value: ’on’ or ’off’
default: ’off’
’halt’ Determines whether there is a halt after every individual action (’multiple’) or only at the end of each oper-
ator (’single’). The parameter is only effective if the halt has been activated by ’timeout’ or ’button_window’.
Value: ’single’ or ’multiple’
default: ’multiple’
’timeout’ After every output there will be a halt of the indicated number of seconds.
Value: seconds (real)
default 0.0
’button_window’ Alternative to ’timeout’: after every output, spy waits until the mouse cursor points into
(’button_click’ = ’false’) or the user clicks into (’button_click’ = ’true’) the indicated window
(Window-ID; ’none’ deactivates this control).
Value: Window-ID (integer) or ’none’
default: ’none’
’button_click’ Additional option for ’button_window’: determines whether or not a mouse-click has to be waited
for after an output.
Value: ’on’ or ’off’
default: ’off’
’button_notify’ If ’button_notify’ is activated, spy generates a beep after every output. This is useful in combi-
nation with ’button_window’.
Value: ’on’ or ’off’
default: ’off’
’log_file’ Spy can divert the text output into a file that has been opened with OpenFile.
Value: a file handle (see OpenFile)
’error’ If ’error’ is activated and an internal error occurs, spy will show the internal procedures (file/line) con-
cerned.
Value: ’on’ or ’off’
default: ’off’
’internal’ If ’internal’ is activated, spy displays the internal procedures and their parameters (file/line) while
a HALCON operator is processed.
Value: ’on’ or ’off’
default: ’off’
Parameter
Result
The operator SetSpy returns the value 2 (H_MSG_TRUE) if the parameters are correct. Otherwise an exception
is raised.
Parallelization Information
SetSpy is processed completely exclusively without parallelization.
Possible Predecessors
ResetObjDb
See also
GetSpy, QuerySpy
Module
Foundation
14.3 Information
static void HOperatorSet.GetChapterInfo ( HTuple chapter,
out HTuple info )
The operator GetKeywords returns all the keywords in the online-texts corresponding to those procedures which
have the indicated substring procName in their name. If instead of procName the empty string is transmitted,
the operator GetKeywords returns all keywords. The keywords of an individual procedure can also be called
by using the operator GetOperatorInfo. The online-texts will be taken from the files english.hlp, english.sta,
english.num, english.key and english.idx, which are searched by HALCON in the currently used directory and in
the directory ’help_dir’ (see also GetSystem and SetSystem).
Parameter
’result_state’: Return value of the procedure (TRUE, FALSE, FAIL, VOID or EXCEPTION).
’attention’: Restrictions and advice concerning the correct use of the procedure (optional).
’parameter’: Names of the parameters of the procedure (see also GetParamInfo).
’references’: Literary references (optional).
’module’: The module to which the operator is assigned.
’html_path’: The directory where the HTML documentation of the operator resides.
’warning’: Possible warnings for using the operator.
The texts are taken from the files english.hlp, english.sta, english.key, english.num and english.idx, which
are searched by HALCON in the currently used directory or in the directory ’help_dir’ (respectively
’user_help_dir’) (see also GetSystem and SetSystem). By adding ’.latex’ after the slot name, the text of
slots containing textual information can be made available in LaTeX notation.
Parameter
. pattern (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Substring of the sought names (empty <=> all names).
Default Value : "info"
. procNames (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; HTuple (string)
Detected procedure names.
Result
The operator GetOperatorName returns the value 2 (H_MSG_TRUE) if the help files are available. Otherwise
an exception is raised.
Parallelization Information
GetOperatorName is reentrant and processed without parallelization.
Possible Successors
GetOperatorInfo, GetParamNames, GetParamNum, GetParamTypes
Alternatives
SearchOperator
See also
GetOperatorInfo, GetParamNames, GetParamNum, GetParamTypes
Module
Foundation
or .extent.x, .extent.y),
polygon(.x, .y), contour(.x, .y),
coordinates(.x, .y), chord(.x1, .x2, .y),
chain(.begin.x, .begin.y, .code).
’default_value’: Default value for the parameter (for input control parameters only). This is for information
only (the parameter value must be transmitted explicitly, even if the default value is used). This
entry serves merely as a suggestion, a starting point for one’s own experiments. The values have been selected so that
they normally do not cause any errors but generate something that makes sense.
’multi_value’: ’true’, if more than one value is permitted in this parameter position, otherwise ’false’.
’multichannel’: ’true’, in case the input image object may be multichannel.
’mixed_type’: For control parameters exclusively, and only if value tuples (’multi_value’ = ’true’) and various types
of data are permitted for the parameter values (’type_list’ having more than one value). In this case the slot
indicates whether values of various types may be mixed in one tuple (’true’ or ’false’).
’values’: Selection of values (optional).
’value_list’: In case a parameter can take only a limited number of values, this fact will be indicated explicitly
(optional).
’valuemin’: Minimum value of a value interval.
’valuemax’: Maximum value of a value interval.
’valuefunction’: Function describing the course of the values for a series of tests (lin, log, quadr, ...).
’steprec’: Recommended step width for the parameter values in a series of tests.
’stepmin’: Minimum step width of the parameter values in a series of tests.
’valuenumber’: Expression describing the number of parameters as such or in relation to other parameters.
’assertion’: Expression describing the parameter values as such or in relation to other parameters.
The online-texts will be taken from the files english.hlp, english.sta, english.key, english.num and english.idx
which will be searched by HALCON in the currently used directory or the directory ’help_dir’ (see also
GetSystem and SetSystem).
Parameter
. procName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . proc_name ; HTuple (string)
Name of the procedure on whose parameter more information is needed.
Default Value : "get_param_info"
. paramName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Name of the parameter on which more information is needed.
Default Value : "Slot"
. slot (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Desired information.
Default Value : "description"
List of values : Slot ∈ {"description", "type_list", "default_type", "sem_type", "default_value", "values",
"value_list", "valuemin", "valuemax", "valuefunction", "valuenumber", "assertion", "steprec", "stepmin",
"mixed_type", "multivalue", "multichannel"}
. information (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; HTuple (string)
Information (empty in case there is no information available).
Result
The operator GetParamInfo returns the value 2 (H_MSG_TRUE) if the parameters are correct and the help files
are available. Otherwise an exception is raised.
Parallelization Information
GetParamInfo is processed completely exclusively without parallelization.
Possible Predecessors
GetKeywords, SearchOperator
Alternatives
GetParamNames, GetParamNum, GetParamTypes
See also
QueryParamInfo, GetOperatorInfo, GetOperatorName
Module
Foundation
C-function (CName) called by the procedure. The output parameter type indicates whether the procedure is a
system procedure or a user procedure.
Parameter
. procName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . proc_name ; HTuple (string)
Name of the procedure.
Default Value : "get_param_num"
. CName (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Name of the called C-function.
. inpObjPar (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Number of the input object parameters.
. outpObjPar (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Number of the output object parameters.
. inpCtrlPar (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Number of the input control parameters.
. outpCtrlPar (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Number of the output control parameters.
. type (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
System procedure or user procedure.
Suggested values : Type ∈ {"system", "user"}
Result
The operator GetParamNum returns the value 2 (H_MSG_TRUE) if the name of the procedure exists. Otherwise
an exception is raised.
Parallelization Information
GetParamNum is reentrant and processed without parallelization.
Possible Predecessors
GetKeywords, SearchOperator, GetOperatorName, GetOperatorInfo
Possible Successors
GetParamTypes
Alternatives
GetOperatorInfo, GetParamInfo
See also
GetParamNames, GetParamTypes, GetOperatorName
Module
Foundation
’integer’: an integer.
’integer tuple’: an integer or a tuple of integers.
’real’: a floating point number.
’real tuple’: a floating point number or a tuple of floating point numbers.
’string’: a string.
Parameter
14.4 Operating-System
count_seconds(Start)
/* program segment to be measured */
count_seconds(End)
Seconds := End - Start
Result
The operator CountSeconds always returns the value 2 (H_MSG_TRUE).
Parallelization Information
CountSeconds is reentrant and processed without parallelization.
See also
SetSystem
Module
Foundation
14.5 Parallelization
static void HOperatorSet.CheckParHwPotential ( HTuple allInpPars )
static void HSystem.CheckParHwPotential ( int allInpPars )
Check hardware regarding its potential for parallel processing.
CheckParHwPotential is necessary for an efficient automatic parallelization, which HALCON uses to better utilize multiprocessor hardware in order to speed up the processing of operators. Since the parallelization of operators is done automatically, the user does not need to explicitly prepare or change programs for their parallelization. Thus, all HALCON-based programs can be used unchanged on multiprocessor hardware and nevertheless exploit its potential. CheckParHwPotential checks a given hardware with respect to the parallel processing of HALCON operators. In doing so, it examines every operator that can in principle be sped up by automatic parallelization. Each examined operator is processed several times, both sequentially and in parallel, with a changing set of input parameter values/images. The latter helps to evaluate dependencies between an operator’s input parameter characteristics (e.g., the size of an input image) and the efficiency of its parallel processing. The parameter allInpPars is used as follows: In the normal case, i.e., if allInpPars contains the default value 0 (“false”), only those input parameters are examined that are expected to influence the processing time. Other parameters are skipped so that the whole process is sped up. However, in some rare cases the internal implementation of a HALCON operator may change from one HALCON release to another. A parameter that showed no direct influence on the processing time in former releases may then show such an influence. In this case it is necessary to set allInpPars to 1 (“true”) in order to force the examination of all input parameters. If this happens, the HALCON release notes will most likely contain an appropriate note. Overall, CheckParHwPotential performs several test loops and collects hardware-specific information that enables HALCON to optimize the automatic parallelization for the given hardware. The hardware information is stored so that it can be reused in future HALCON sessions. It is therefore sufficient to run CheckParHwPotential once on each multiprocessor machine that is used for parallel processing. Of course, it should be run again if the hardware of the machine changes (for example, by installing a new CPU), if the operating system of the machine changes, or if the machine gets a new host name. The latter is necessary because HALCON identifies the machine-specific parallelization information by the machine’s host name. If the same multiprocessor machine is used with different operating systems, such as Windows and Linux, CheckParHwPotential must be started once for each operating system in order to correctly measure the rather strong influence of the operating system on the potential of exploiting multiprocessor hardware. Under Windows, HALCON stores the parallelization knowledge belonging to a specific machine in the machine’s registry, using a machine-specific registry key that can be shared by different users. Normally, this key can be written or changed by any user under Windows NT. Under Windows 2000, however, the key may only be changed by users with administrator privileges or by users who at least belong to the “power user” group. For all other users CheckParHwPotential has no effect (but does not return an error). Under Linux/UNIX the parallelization information is stored in a file in the HALCON installation directory ($HALCONROOT). Again this means that CheckParHwPotential must be called by users with the appropriate privileges, here users with write access to the HALCON directory. If HALCON is used within a network under Linux/UNIX, this file contains the information for every computer in the network for which the hardware check has been completed successfully.
Attention
During its test loops, CheckParHwPotential has to start every examined operator several times. The
processing of CheckParHwPotential can therefore take rather a long time. CheckParHwPotential
is based on the automatic parallelization of operators, which is supported only by Parallel HALCON. Thus,
CheckParHwPotential always returns an appropriate error if it is used with a non-parallel HALCON version.
CheckParHwPotential must be called by users with the appropriate privileges for storing the parallelization
information permanently (see the operator’s description above for more details about this subject).
Parameter
. allInpPars (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Check every input parameter?
Default Value : 0
List of values : AllInpPars ∈ {0, 1}
Result
CheckParHwPotential returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
CheckParHwPotential is local and processed completely exclusively without parallelization.
Possible Successors
StoreParKnowledge
See also
StoreParKnowledge, LoadParKnowledge
Module
Foundation
Result
LoadParKnowledge returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
LoadParKnowledge is local and processed completely exclusively without parallelization.
Possible Predecessors
StoreParKnowledge
See also
StoreParKnowledge, CheckParHwPotential
Module
Foundation
14.6 Parameters
static void HOperatorSet.GetSystem ( HTuple query,
out HTuple information )
a + in the list below. By passing the string ’?’ as the parameter query, the names of all system parameters are
returned in information.
The following system parameters can be queried:
Versions
’parallel_halcon’: The currently used variant of HALCON: Parallel HALCON (’true’) or Standard HAL-
CON (’false’)
’version’: HALCON version number, e.g.: 6.0
’last_update’: Date of creation of the HALCON library
’revision’: Revision number of the HALCON library, e.g.: 1
Upper Limits
’max_contour_length’: Maximum number of contour or polygon control points of a region.
’max_images’: Maximum number of images.
’max_channels’: Maximum number of channels of an image.
’max_obj_per_par’: Maximum number of image objects which may be used per parameter in one call.
’max_inp_obj_par’: Maximum number of input parameters.
’max_outp_obj_par’: Maximum number of output parameters.
’max_inp_ctrl_par’: Maximum number of input control parameters.
’max_outp_ctrl_par’: Maximum number of output control parameters.
’max_window’: Maximum number of windows.
’max_window_types’: Maximum number of window systems.
’max_proc’: Maximum number of HALCON procedures (system defined + user defined).
Graphic
+’flush_graphic’: Determines whether the flush operation is called after each visualization operation
in HALCON. Unix operating systems flush the display buffer automatically, so this parameter
has no effect on those systems.
+’int2_bits’: Number of significant bits of int2 images. This number is used when scaling the gray values.
If the value is -1, the gray values are scaled automatically (default).
+’backing_store’: Storage of the window contents in case of overlaps.
+’icon_name’: Name of iconified graphics windows under X-Window. By default the number of the graph-
ics window is displayed.
+’window_name’: (no description available)
+’default_font’: Name of the font to set at opening the window.
+’update_lut’: (no description available)
+’x_package’: Number of bytes which are sent to the X server during each transfer of data.
+’num_gray_4’: Number of colors reserved under X Windows for the output of gray levels (
DispChannel) on a machine with 4 bitplanes (16 colors).
+’num_gray_6’: Number of colors reserved under X Windows concerning the output of graylevels (
DispChannel) on a machine with 6 bitplanes (64 colors).
+’num_gray_8’: Number of colors reserved under X Windows concerning the output of graylevels (
DispChannel) on a machine with 8 bitplanes (256 colors).
+’num_gray_percentage’: HALCON reserves a certain amount of the available colors under X Windows
for the representation of gray levels ( DispImage). This is intended to interfere with other X applications as
little as possible. However, if HALCON does not succeed in reserving a minimum percentage of
’num_gray_percentage’ of the necessary colors on the X server, a certain amount of the lookup table
will be claimed for the HALCON gray levels regardless of the consequences for other applications.
This may result in undesired color shifts when switching between HALCON windows and windows
of other applications, or if (outside HALCON) a window dump is generated. The number of real
gray levels to be reserved depends on the number of available bitplanes on the output machine (see also
’num_gray_*’). Naturally no colors are reserved on monochrome machines; the gray levels are
instead dithered when displayed. If gray-level displays are used, only different shades of gray are
applied (’black’, ’white’, ’gray’, etc.). ’num_gray_percentage’ is only used on machines with 8 bit
pseudo-color displays. For machines with displays with 16 bits or more (true color machines), no colors
are reserved for the display of gray levels in this case.
Note: Before the first window on a machine with x bitplanes is opened, num_gray_x indicates the
number of colors which have to be reserved for the display of gray levels; afterwards, however, it
indicates the number of colors which actually have been reserved.
+’num_graphic_percentage’: Similar to ’num_gray_percentage’, ’num_graphic_percentage’ determines
how many graphics colors (for use with SetColor) should be reserved in the LUT on an 8 bit pseudo-
color display under X Windows.
+’num_graphic_2’: Number of the HALCON graphic colors reserved under X Windows (for
DispRegion etc.) on a machine with 2 bitplanes (4 colors).
+’num_graphic_4’: Number of the HALCON graphic colors reserved under X Windows (for
DispRegion etc.) on a machine with 4 bitplanes (16 colors).
+’num_graphic_6’: Number of the HALCON graphic colors reserved under X Windows (for
DispRegion etc.) on a machine with 6 bitplanes (64 colors).
+’num_graphic_8’: Number of the HALCON graphic colors reserved under X Windows (for
DispRegion etc.) on a machine with 8 bitplanes (256 colors).
Image Processing
+’neighborhood’: Using the 4 or 8 neighborhood.
+’init_new_image’: Initialization of images before applying grayvalue transformations.
+’no_object_result’: Behavior in the case of an empty object list.
+’empty_region_result’: Reaction of procedures to input objects with empty regions when the operation
is actually not meaningful for such objects (e.g. certain region features, segmentation, etc.). Possible return
values:
’true’: the error will be ignored if possible
’false’: the procedure returns FALSE
’fail’: the procedure returns FAIL
’void’: the procedure returns VOID
’exception’: an exception is raised
Parameter
’neighborhood’: This parameter is used with all procedures which examine neighborhood rela-
tions: Connection, GetRegionContour, GetRegionChain, GetRegionPolygon,
GetRegionThickness, Boundary, PaintRegion, DispRegion, FillUp, Contlength,
ShapeHistoAll.
value: 4 or 8
default: 8
’default_font’: Whenever a window is opened, a font is set for the text output, using ’default_font’.
If the preset font cannot be found, another font name can be set before opening the window.
Value: File name of the font
default: fixed
’update_lut’ Determines whether the HALCON color tables are adapted according to their environment or not.
value: ’true’ or ’false’
default: ’false’
’image_dir’: Image files (e.g. for ReadImage and ReadSequence) are looked for in the currently used
directory and in ’image_dir’ (if no absolute paths are indicated). More than one directory name can be indi-
cated (search paths), separated by semicolons (Windows) or colons (Unix). The path can also be determined
using the environment variable HALCONIMAGES.
Value: Name of the file path
default: ’$HALCONROOT/images’ (Unix) or ’%HALCONROOT%/images’ (Windows)
’lut_dir’: Color tables ( SetLut) which are realized as ASCII files are looked for in the currently used
directory and in ’lut_dir’ (if no absolute paths are indicated). If HALCONROOT is set, HALCON
searches for the color tables in the subdirectory ’lut’.
Value: Name of the file path
default: ’$HALCONROOT/lut’ (Unix) or ’%HALCONROOT%/lut’ (Windows)
’help_dir’: The online text files (german.hlp or english.hlp, .sta, .key, .num and .idx) are looked for in the cur-
rently used directory or in ’help_dir’. This system parameter is necessary, for instance, when using the operators
GetOperatorInfo and GetParamInfo. This parameter can also be set via the environment variable
HALCONROOT before initializing HALCON. In this case the variable must indicate the directory above the
help directories (that is, the HALCON home directory), e.g.: ’/usr/local/halcon’
Value: Name of the file path
default: ’$HALCONROOT/help’ (Unix) or ’%HALCONROOT%/help’ (Windows)
’init_new_image’: Determines whether new images shall be initialized to 0 before applying filters. This is not necessary if
the whole image is always filtered or if the data of unfiltered image areas are unimportant.
Value: ’true’ or ’false’
default: ’true’
’no_object_result’: Determines how operators that process iconic objects react if the object tuple is empty (= no objects). Available values for value:
’true’: the error will be ignored
’false’: the procedure returns FALSE
’fail’: the procedure returns FAIL
’void’: the procedure returns VOID
’exception’: an exception is raised
default: ’true’
’empty_region_result’: Controls the reaction of operators to input objects with empty regions in cases where empty regions are not useful (e.g., certain region features, segmentation, etc.). Available values for value:
HALCON 8.0.2
1038 CHAPTER 14. SYSTEM
value: 1, 2, 3
default: 3
’filename_encoding’: This parameter determines how file and directory names are interpreted that are passed as
string parameters to and from HALCON. With the value ’locale’ these names are used unaltered, while with
the value ’utf8’ these names are interpreted as being UTF-8 encoded. In the latter case, HALCON tries to
translate input parameters from UTF-8 to the locale encoding according to the current system settings, and
output parameters from locale to UTF-8 encoding.
value: ’locale’ or ’utf8’
default: ’locale’
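The translation between the two encodings can be sketched in Python; the ’latin-1’ locale codec below is an assumption for illustration, not HALCON’s actual behavior, which follows the current system settings:

```python
# Sketch of 'filename_encoding': with 'utf8', input names are translated
# from UTF-8 to the locale encoding, output names the other way round.
# The locale codec 'latin-1' is an assumption for illustration.

def translate_input(param: bytes, filename_encoding: str,
                    locale_codec: str = "latin-1") -> bytes:
    """Translate an input file-name parameter for internal use."""
    if filename_encoding == "utf8":
        return param.decode("utf-8").encode(locale_codec)
    return param  # 'locale': the name is used unaltered

def translate_output(name: bytes, filename_encoding: str,
                     locale_codec: str = "latin-1") -> bytes:
    """Translate an output file name back to the caller's encoding."""
    if filename_encoding == "utf8":
        return name.decode(locale_codec).encode("utf-8")
    return name

utf8_name = "München".encode("utf-8")              # b'M\xc3\xbcnchen'
print(translate_input(utf8_name, "utf8"))          # b'M\xfcnchen'
```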
’x_package’: The output of image data via the network may cause errors owing to the heavy load on the computer
or on the network. In order to avoid this, the data are transmitted in small packages. If the computer is used
locally, these units can be enlarged at will. This can lead to a notably improved output performance.
value: package size (in bytes)
default: 20480
’int2_bits’: Number of significant bits of int2 images. This number is used when scaling the gray values. If the value is -1, the gray values are scaled automatically (default).
value: -1 or 9..16
default: -1
’num_gray_4’: Number of colors to be reserved under X Windows to allow the output of gray levels (DispChannel) on a machine with 4 bitplanes (16 colors).
Attention! This value may only be changed before the first window has been opened on the machine.
value: 2 - 12
default: 8
’num_gray_6’: Number of colors to be reserved under X Windows to allow the output of gray levels (DispChannel) on a machine with 6 bitplanes (64 colors).
Attention! This value may only be changed before the first window has been opened on the machine.
value: 2 - 62
default: 50
’num_gray_8’: Number of colors to be reserved under X Windows to allow the output of gray levels (DispChannel) on a machine with 8 bitplanes (256 colors).
Attention! This value may only be changed before the first window has been opened on the machine.
value: 2 - 254
default: 140
’num_gray_percentage’: Under X Windows, HALCON reserves a part of the available colors for the representation of gray values (DispChannel). This should interfere with other X applications as little as possible. However, if HALCON does not succeed in reserving a minimum percentage ’num_gray_percentage’ of the necessary colors on the X server, a certain amount of the lookup table is claimed for the HALCON gray levels regardless of the consequences. This may result in undesired color shifts when switching between HALCON windows and windows of other applications, or (outside HALCON) when a window dump is generated. The number of real gray levels to be reserved depends on the number of available bitplanes on the output machine (see also ’num_gray_*’). Naturally, no colors are reserved on monochrome machines; the gray levels are instead dithered when displayed. On gray-level displays, only different shades of gray are used (’black’, ’white’, ’gray’, etc.). ’num_gray_percentage’ is only used on machines with 8 bit pseudo-color displays. On machines with displays of 16 bits or more (true-color machines), no colors are reserved for the display of gray levels.
Note: This value may only be changed before the first window has been opened on the machine. Before the first window is opened on a machine with x bitplanes, num_gray_x indicates the number of colors to be reserved for the display of gray levels; afterwards, it indicates the number of colors that actually have been reserved.
value: 0 - 100
default: 30
’num_graphic_percentage’: Similar to ’num_gray_percentage’, ’num_graphic_percentage’ determines how many graphics colors (for use with SetColor) should be reserved in the LUT on an 8 bit pseudo-color display under X Windows.
default: 60
’int_zooming’: Determines whether the zooming of images is done with integer arithmetic or with floating point arithmetic.
default: ’true’
’icon_name’: Name of iconified graphics windows under X Windows. By default the number of the graphics window is displayed.
default: ’default’
’num_graphic_2’: Number of the graphic colors to be reserved by HALCON under X Windows (concerning the
operators DispRegion etc.) on a machine with 2 bitplanes (4 colors).
Attention: This value may only be changed before the first window has been opened on the machine.
value: 0 - 2
default: 2
’num_graphic_4’: Number of the graphic colors to be reserved by HALCON under X Windows (concerning the
operators DispRegion etc.) on a machine with 4 bitplanes (16 colors).
Attention: This value may only be changed before the first window has been opened on the machine.
value: 0 - 14
default: 5
’num_graphic_6’: Number of the graphic colors to be reserved by HALCON under X Windows (concerning the
operators DispRegion etc.) on a machine with 6 bitplanes (64 colors).
Attention: This value may only be changed before the first window has been opened on the machine.
value: 0 - 62
default: 10
’num_graphic_8’: Number of the graphic colors to be reserved by HALCON under X Windows (concerning the
operators DispRegion etc.) on a machine with 8 bitplanes (256 colors).
Attention: This value may only be changed before the first window has been opened on the machine.
value: 0 - 64
default: 20
’graphic_colors’: HALCON reserves the first num_graphic_x colors from this list of color names as graphic colors. By default, HALCON uses the same list that is returned by QueryAllColors. However, the list can be changed individually; in this case a tuple of color names is passed as value. It is recommended that such a tuple always include the colors ’black’ and ’white’, and optionally also ’red’, ’green’, and ’blue’. If ’default’ is set as value, HALCON reverts to the initial setting. Note: On gray-level machines not the first x colors but the first x shades of gray from the list are reserved.
Attention: This value may only be changed before the first window has been opened on the machine.
value: Tuple of X Windows color names
default: see QueryAllColors
’current_runlength_number’: Regions are stored internally in a run-length code. This parameter determines the maximum number of chords that may be used for representing a region. Note that some operators raise this number on their own if necessary. The value can be enlarged as well as reduced.
value: maximum number of chords
default: 50000
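As an illustration of this run-length representation, here is a minimal Python sketch (not HALCON code) that encodes a binary image into (row, column_begin, column_end) chords:

```python
# Minimal sketch of the run-length ("chord") representation HALCON uses
# internally for regions: each run stores (row, column_begin, column_end).

def encode_chords(binary_rows):
    """Encode a list of 0/1 rows as (row, col_begin, col_end) chords."""
    chords = []
    for r, row in enumerate(binary_rows):
        c = 0
        while c < len(row):
            if row[c]:
                start = c
                while c < len(row) and row[c]:
                    c += 1
                chords.append((r, start, c - 1))  # inclusive end column
            else:
                c += 1
    return chords

region = [[0, 1, 1, 0, 1],
          [1, 1, 0, 0, 0]]
print(encode_chords(region))  # [(0, 1, 2), (0, 4, 4), (1, 0, 1)]
```

The number of chords, not the number of pixels, determines the storage cost of a region, which is what ’current_runlength_number’ bounds.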
’clip_region’: Determines whether the regions of iconic objects of the HALCON database will be clipped to
the currently used image size or not. This is the case for example in procedures like GenCircle,
GenRectangle1 or Dilation1.
See also: ResetObjDb
value: ’true’ or ’false’
default: ’true’
’do_low_error’: Determines whether HALCON should print low-level error messages or not.
value: ’true’ or ’false’
default: ’false’
’reentrant’: Determines whether HALCON must be reentrant for use within a parallel programming environment (e.g., a multithreaded application). This parameter is only of importance for Parallel HALCON, which can process several operators concurrently; it is ignored by the sequentially working HALCON version. If it is set to ’true’, Parallel HALCON internally uses synchronization mechanisms to protect shared data objects from concurrent accesses. While this is inevitable for any truly parallel application, it may cause undesired overhead in an application that works purely sequentially. The latter case can be signalled by setting ’reentrant’ to ’false’. This switches off all internal synchronization mechanisms and thus reduces overhead. Of course, Parallel HALCON then is no longer
thread-safe, which causes another side-effect: Parallel HALCON will then no longer use the internal paral-
lelization of operators, because this needs reentrancy. Setting ’reentrant’ to ’true’ resets Parallel HALCON
to its default state, i.e. it is reentrant (and thread-safe) and it uses the automatic parallelization to speed up
the processing of operators on multiprocessor machines.
value: ’true’ or ’false’
default: Parallel HALCON: ’true’, otherwise: ’false’
’parallelize_operators’: Determines whether Parallel HALCON uses automatic parallelization to speed up the processing of operators on multiprocessor machines. This feature can be switched off by setting ’parallelize_operators’ to ’false’. Even then, Parallel HALCON will remain reentrant (and thread-safe), unless
the parameter ’reentrant’ is changed via SetSystem accordingly. Changing ’parallelize_operators’ can
be helpful, for example, if HALCON operators are called by a multithreaded application that also does the
scheduling and load-balancing of operators and data by itself. Then, it may be undesired that HALCON
performs additional parallelization steps, which may disturb the application’s scheduling and load-balancing
concepts. For more detailed control of the automatic parallelization, single methods of data parallelization can be switched on or off. ’split_tuple’ enables the tuple parallelization method, ’split_channel’ the parallelization on image channels, and ’split_domain’ the parallelization on the image domain. A preceding ’~’ disables the
respective method. The method strings can also be passed within a control tuple to switch on or off methods
of automatic data parallelization at once. E.g., [’split_tuple’,’split_channel’,’split_domain’] is equivalent to
’true’.
The parameter ’parallelize_operators’ is only supported by Parallel HALCON and thus ignored by the se-
quentially working HALCON-Version.
value: ’true’, ’false’, ’split_tuple’, ’split_channel’, ’split_domain’, ’~split_tuple’, ’~split_channel’, ’~split_domain’
default: Parallel HALCON: ’true’, else: ’false’
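The semantics of these method strings can be sketched in Python (an illustration of the rules above, not HALCON code):

```python
# Sketch of the method-string semantics: 'split_tuple', 'split_channel',
# and 'split_domain' enable a parallelization method, a leading '~'
# disables it, and 'true'/'false' toggle all three at once.

ALL_METHODS = {"split_tuple", "split_channel", "split_domain"}

def apply_settings(values, enabled=None):
    """Return the set of enabled methods after applying a value tuple."""
    enabled = set(ALL_METHODS if enabled is None else enabled)
    for v in ([values] if isinstance(values, str) else values):
        if v == "true":
            enabled = set(ALL_METHODS)
        elif v == "false":
            enabled = set()
        elif v.startswith("~"):
            enabled.discard(v[1:])      # '~method' disables one method
        elif v in ALL_METHODS:
            enabled.add(v)              # 'method' enables one method
    return enabled

# ['split_tuple','split_channel','split_domain'] is equivalent to 'true':
assert apply_settings(["split_tuple", "split_channel", "split_domain"],
                      enabled=set()) == ALL_METHODS
```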
’thread_num’: Sets the number of threads used by the automatic parallelization of Parallel HALCON. The number includes the main thread and is restricted to the number of processors for efficiency reasons. Decreasing the number of threads is helpful if processors are occupied by user worker threads besides the threads of the automatic parallelization. With this, the number of processing threads can be adapted to the number of processors for best efficiency. Standard HALCON ignores this parameter value.
value: 1 <= Value <= processor_num
default: Parallel HALCON: processor_num, else: 1
’thread_pool’: Denotes whether Parallel HALCON always creates new threads for automatic parallelization (’false’) or uses an existing pool of threads (’true’). Using a pool is more efficient for automatic parallelization. When switching off automatic parallelization permanently, deactivating the pool can save resources of the operating system. Standard HALCON ignores this parameter value.
value: ’true’, ’false’
default: Parallel HALCON: ’true’, else: ’false’
’clock_mode’: Determines the mode of the measurement of time intervals with CountSeconds. For value=’processor_time’, the time the running HALCON process occupies the CPU is measured. This kind of time measurement is independent of the CPU load caused by other processes, but it features a lower resolution on most systems and is therefore less accurate for small time intervals.
For value=’elapsed_time’, the actual elapsed system time is measured. It includes the waiting time of the current process as well as the CPU time of other processes. Therefore, to get a reliable measurement, make sure that no other process causes any CPU load.
value=’performance_counter’ measures the actual system time by using a performance counter,
which results in a higher resolution. If the system does not support any performance counter,
value=’processor_time’ is used.
value: ’processor_time’, ’elapsed_time’, ’performance_counter’
default: ’performance_counter’
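The difference between the modes can be illustrated with Python’s standard timers, which behave analogously (time.process_time for ’processor_time’, time.perf_counter for ’elapsed_time’ and ’performance_counter’); this is a sketch, not HALCON code:

```python
# 'processor_time' ~ time.process_time() (CPU time of this process only);
# 'elapsed_time' / 'performance_counter' ~ time.perf_counter() (wall clock).
import time

def count_seconds(clock_mode: str) -> float:
    if clock_mode == "processor_time":
        return time.process_time()      # unaffected by sleeping/waiting
    return time.perf_counter()          # actual elapsed time

start_cpu = count_seconds("processor_time")
start_wall = count_seconds("elapsed_time")
time.sleep(0.05)                        # waiting consumes wall time, not CPU
cpu = count_seconds("processor_time") - start_cpu
wall = count_seconds("elapsed_time") - start_wall
print(cpu < wall)  # True: the sleep shows up only in the elapsed time
```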
’max_connection’: Determines the maximum number of regions returned by Connection. For value=0, all regions are returned.
’extern_alloc_funct’: Pointer to an external function for memory allocation of result images.
default: 0
’extern_free_funct’: Pointer to an external function for memory deallocation of result images.
default: 0
’image_cache_capacity’: Upper limit in bytes of the internal image memory cache. To speed up the allocation of new images, HALCON does not free image memory but caches it for reuse. Freed images are cached as long as the upper limit is not reached. This functionality can be switched off by setting ’image_cache_capacity’ to 0.
This parameter is only available in Standard HALCON and ignored in Parallel HALCON.
default: Standard HALCON: 4194304 (4 MByte), else: 0
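The caching strategy can be sketched as a toy Python model (an illustration of the byte-budget idea, not HALCON’s implementation):

```python
class ImageCache:
    """Toy model of 'image_cache_capacity': cache freed buffers up to a limit."""

    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self._free = []                       # cached (unused) buffers

    def _cached_bytes(self) -> int:
        return sum(len(b) for b in self._free)

    def allocate(self, size: int) -> bytearray:
        for i, buf in enumerate(self._free):  # reuse a fitting cached buffer
            if len(buf) == size:
                return self._free.pop(i)
        return bytearray(size)                # cache miss: really allocate

    def free(self, buf: bytearray) -> None:
        if self._cached_bytes() + len(buf) <= self.capacity:
            self._free.append(buf)            # keep for reuse
        # over capacity (or capacity 0): drop it instead of caching

cache = ImageCache(capacity_bytes=4 * 1024 * 1024)
img = cache.allocate(640 * 480)
cache.free(img)
assert cache.allocate(640 * 480) is img       # buffer was reused
```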
’global_mem_cache’: Cache mode of global memory, i.e., memory that is visible beyond an operator. It specifies whether unused global memory should be cached (’shared’) or freed (’idle’). Generally, caching speeds up memory allocation and processing at the cost of memory consumption. Additionally, Parallel HALCON offers the option to cache global memory for each thread separately (’exclusive’). This mode can also accelerate processing at the cost of higher memory consumption. Standard HALCON treats the value ’exclusive’ like the value ’shared’.
value: ’idle’, ’exclusive’, ’shared’
default: ’false’
’temporary_mem_cache’: Flag specifying whether unused temporary memory of an operator should be cached (’true’, default) or freed (’false’). A single-threaded application can be sped up by caching, whereas freeing reduces the memory consumption of a multithreaded application at the expense of speed.
value: ’true’ or ’false’
default: ’true’
’alloctmp_max_blocksize’: Maximum size of memory blocks to be allocated within the temporary memory management. (No effect if ’temporary_mem_cache’ == ’false’.)
value: -1 or >= 0
default: -1
’mmx_enable’: Flag specifying whether MMX operations are used to accelerate selected image processing operators (’true’) or not (’false’). (No effect if ’mmx_supported’ == ’false’; see also the operator GetSystem.)
default: ’true’ if the CPU supports MMX, else ’false’
’language’: Language used for error messages.
value: ’english’ or ’german’
default: ’english’
Parameter
. systemParameter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; HTuple (string)
Name of the system parameter to be changed.
Default Value : "image_dir"
List of values : SystemParameter ∈ {"alloctmp_max_blocksize", "backing_store",
"border_shape_models", "clip_region", "clock_mode", "current_runlength_number", "default_font",
"do_low_error", "empty_region_result", "extern_alloc_funct", "extern_free_funct", "filename_encoding",
"flush_file", "flush_graphic", "global_mem_cache", "graphic_colors", "help_dir", "icon_name",
"image_cache_capacity", "image_dir", "image_dpi", "init_new_image", "int2_bits", "int_zooming",
"language", "lut_dir", "max_connection", "mmx_enable", "neighborhood", "no_object_result",
"num_graphic_2", "num_graphic_4", "num_graphic_6", "num_graphic_8", "num_graphic_percentage",
"num_gray_4", "num_gray_6", "num_gray_8", "num_gray_percentage", "ocr_trainf_version",
"parallelize_operators", "pregenerate_shape_models", "reentrant", "store_empty_region",
"temporary_mem_cache", "thread_num", "thread_pool", "update_lut", "x_package"}
. value (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; HTuple (string / int / long / double)
New value of the system parameter.
Default Value : "true"
Suggested values : Value ∈ {"true", "false", 0, 4, 8, 100, 140, 255}
Result
The operator SetSystem returns the value 2 (H_MSG_TRUE) if the parameters are correct. Otherwise an
exception will be raised.
Parallelization Information
SetSystem is local and processed completely exclusively without parallelization.
Possible Predecessors
ResetObjDb, GetSystem, SetCheck
See also
GetSystem, SetCheck, CountSeconds
Module
Foundation
14.7 Serial
static void HOperatorSet.ClearSerial ( HTuple serialHandle,
HTuple channel )
ClearSerial discards data written to the serial device referred to by serialHandle, but not transmitted
(channel = ’output’), or data received, but not read (channel = ’input’), or performs both these operations at
once (channel = ’in_out’).
Parameter
. serialHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serial_id ; HSerial / HTuple (IntPtr)
Serial interface handle.
. channel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Channel to be cleared: ’input’, ’output’, or ’in_out’.

static void HOperatorSet.CloseSerial ( HTuple serialHandle )

Close a serial device.

Parameter
. serialHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serial_id ; HSerial / HTuple (IntPtr)
Serial interface handle.
Result
If the parameters are correct and the device could be closed, the operator CloseSerial returns the value 2
(H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
CloseSerial is reentrant and processed without parallelization.
Possible Predecessors
OpenSerial
See also
OpenSerial, CloseFile
Module
Foundation
interpreted as the end of a string. WriteSerial always waits until all data has been transmitted, i.e., a timeout for writing cannot be set.
Parameter
. serialHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serial_id ; HSerial / HTuple (IntPtr)
Serial interface handle.
. data (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; HTuple (int / long)
Characters to write (as tuple of integers).
Result
If the parameters are correct and the write to the device was successful, the operator WriteSerial returns the
value 2 (H_MSG_TRUE). Otherwise an exception is raised.
Parallelization Information
WriteSerial is reentrant and processed without parallelization.
Possible Predecessors
OpenSerial
See also
ReadSerial
Module
Foundation
14.8 Sockets
static void HOperatorSet.CloseSocket ( HTuple socket )
Close a socket.
CloseSocket closes a socket that was previously opened with OpenSocketAccept,
OpenSocketConnect, or SocketAcceptConnect. For a detailed example, see OpenSocketAccept.
Parameter
. socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; HSocket / HTuple (IntPtr)
Socket number.
Parallelization Information
CloseSocket is reentrant and processed without parallelization.
See also
OpenSocketAccept, OpenSocketConnect, SocketAcceptConnect
Module
Foundation
string HSocket.GetNextSocketDataType ( )
Determine the HALCON data type of the next socket data.
GetNextSocketDataType determines the data type of the next data present on the socket socket and returns it in dataType.
Parameter
. socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; HSocket / HTuple (IntPtr)
Socket number.
. dataType (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Data type of next HALCON data.
Parallelization Information
GetNextSocketDataType is reentrant and processed without parallelization.
See also
SendImage, ReceiveImage, SendRegion, ReceiveRegion, SendTuple, ReceiveTuple
Module
Foundation
int HSocket.GetSocketDescriptor ( )
Get the socket descriptor of a socket used by the operating system.
GetSocketDescriptor returns the socket descriptor used by the operating system for the socket connection
that is passed in socket. The socket descriptor can be used in operating system calls such as select, read,
write, recv, or send.
Parameter
. socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; HSocket / HTuple (IntPtr)
Socket number.
. socketDescriptor (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Socket descriptor used by the operating system.
Parallelization Information
GetSocketDescriptor is reentrant and processed without parallelization.
Possible Predecessors
OpenSocketAccept, OpenSocketConnect, SocketAcceptConnect
See also
SetSocketTimeout
Module
Foundation
HTuple HSocket.GetSocketTimeout ( )
Get the timeout of a socket.
GetSocketTimeout returns the timeout for the socket connection that is passed in socket. For a description of the possible values of timeout see SetSocketTimeout.
Example
/* Process 1 */
dev_set_colored (12)
open_socket_accept (3000, AcceptingSocket)
/* Busy wait for an incoming connection */
dev_error_var (Error, 1)
dev_set_check (’~give_error’)
OpenStatus := 5
while (OpenStatus # 2)
socket_accept_connect (AcceptingSocket, ’false’, Socket)
OpenStatus := Error
wait_seconds (0.2)
endwhile
dev_set_check (’give_error’)
/* Connection established */
receive_image (Image, Socket)
threshold (Image, Region, 0, 63)
send_region (Region, Socket)
receive_region (ConnectedRegions, Socket)
area_center (ConnectedRegions, Area, Row, Column)
send_tuple (Socket, Area)
send_tuple (Socket, Row)
send_tuple (Socket, Column)
close_socket (Socket)
close_socket (AcceptingSocket)
/* Process 2 */
dev_set_colored (12)
open_socket_connect (’localhost’, 3000, Socket)
read_image (Image, ’fabrik’)
send_image (Image, Socket)
receive_region (Region, Socket)
connection (Region, ConnectedRegions)
send_region (ConnectedRegions, Socket)
receive_tuple (Socket, Area)
receive_tuple (Socket, Row)
receive_tuple (Socket, Column)
close_socket (Socket)
Parallelization Information
OpenSocketAccept is reentrant and processed without parallelization.
Possible Successors
SocketAcceptConnect
See also
OpenSocketConnect, CloseSocket, GetSocketTimeout, SetSocketTimeout, SendImage,
ReceiveImage, SendRegion, ReceiveRegion, SendTuple, ReceiveTuple
Module
Foundation
HImage HSocket.ReceiveImage ( )
void HImage.ReceiveImage ( HSocket socket )
Receive an image over a socket connection.
ReceiveImage reads an image object that was sent over the socket connection determined by socket by
another HALCON process using the operator SendImage. If no image has been sent, the HALCON process
calling ReceiveImage blocks until enough data arrives. For a detailed example, see OpenSocketAccept.
Parameter
HRegion HSocket.ReceiveRegion ( )
void HRegion.ReceiveRegion ( HSocket socket )
Receive regions over a socket connection.
ReceiveRegion reads a region object that was sent over the socket connection determined by socket by
another HALCON process using the operator SendRegion. If no regions have been sent, the HALCON process
calling ReceiveRegion blocks until enough data arrives. For a detailed example, see OpenSocketAccept.
Parameter
. region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Received regions.
. socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; HSocket / HTuple (IntPtr)
Socket number.
Parallelization Information
ReceiveRegion is reentrant and processed without parallelization.
Possible Predecessors
OpenSocketConnect, SocketAcceptConnect, GetSocketTimeout, SetSocketTimeout
See also
SendRegion, SendImage, ReceiveImage, SendTuple, ReceiveTuple,
GetNextSocketDataType
Module
Foundation
HTuple HSocket.ReceiveTuple ( )
Receive a tuple over a socket connection.
ReceiveTuple reads a tuple that was sent over the socket connection determined by socket by another
HALCON process using the operator SendTuple. If no tuple has been sent, the HALCON process calling
ReceiveTuple blocks until enough data arrives. For a detailed example, see OpenSocketAccept.
Parameter
. socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; HSocket / HTuple (IntPtr)
Socket number.
. tuple (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .string ; HTuple (string / double / int / long)
Received tuple.
Parallelization Information
ReceiveTuple is reentrant and processed without parallelization.
Possible Predecessors
OpenSocketConnect, SocketAcceptConnect, GetSocketTimeout, SetSocketTimeout
See also
SendTuple, SendImage, ReceiveImage, SendRegion, ReceiveRegion,
GetNextSocketDataType
Module
Foundation
HXLD HSocket.ReceiveXld ( )
void HXLD.ReceiveXld ( HSocket socket )
Receive an XLD object over a socket connection.
ReceiveXld reads an XLD object that was sent over the socket connection determined by socket by another
HALCON process using the operator SendXld. If no XLD object has been sent, the HALCON process calling
ReceiveXld blocks until enough data arrives. For a detailed example, see SendXld.
Parameter
. XLD (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld(-array) ; HXLD
Received XLD object.
. socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; HSocket / HTuple (IntPtr)
Socket number.
Parallelization Information
ReceiveXld is reentrant and processed without parallelization.
Possible Predecessors
OpenSocketConnect, SocketAcceptConnect, GetSocketTimeout, SetSocketTimeout
See also
SendXld, SendImage, ReceiveImage, SendRegion, ReceiveRegion, SendTuple,
ReceiveTuple, GetNextSocketDataType
Module
Foundation
Parameter
. region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions to be sent.
. socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; HSocket / HTuple (IntPtr)
Socket number.
Parallelization Information
SendRegion is reentrant and processed without parallelization.
Possible Predecessors
OpenSocketConnect, SocketAcceptConnect
See also
ReceiveRegion, SendImage, ReceiveImage, SendTuple, ReceiveTuple,
GetNextSocketDataType
Module
Foundation
/* Process 1 */
dev_set_colored (12)
open_socket_accept (3000, AcceptingSocket)
socket_accept_connect (AcceptingSocket, ’true’, Socket)
receive_image (Image, Socket)
edges_sub_pix (Image, Edges, ’canny’, 1.5, 20, 40)
send_xld (Edges, Socket)
receive_xld (Polygons, Socket)
split_contours_xld (Polygons, Contours, ’polygon’, 1, 5)
gen_parallels_xld (Polygons, Parallels, 10, 30, 0.15, ’true’)
send_xld (Parallels, Socket)
receive_xld (ModParallels, Socket)
receive_xld (ExtParallels, Socket)
stop ()
close_socket (Socket)
close_socket (AcceptingSocket)
/* Process 2 */
dev_set_colored (12)
open_socket_connect (’localhost’, 3000, Socket)
read_image (Image, ’mreut’)
send_image (Image, Socket)
receive_xld (Edges, Socket)
gen_polygons_xld (Edges, Polygons, ’ramer’, 2)
send_xld (Polygons, Socket)
split_contours_xld (Polygons, Contours, ’polygon’, 1, 5)
receive_xld (Parallels, Socket)
mod_parallels_xld (Parallels, Image, ModParallels, ExtParallels,
0.4, 160, 220, 10)
send_xld (ModParallels, Socket)
send_xld (ExtParallels, Socket)
stop ()
close_socket (Socket)
Parallelization Information
SendXld is reentrant and processed without parallelization.
Possible Predecessors
OpenSocketConnect, SocketAcceptConnect
See also
ReceiveXld, SendImage, ReceiveImage, SendRegion, ReceiveRegion, SendTuple,
ReceiveTuple, GetNextSocketDataType
Module
Foundation
any longer. Therefore, in these cases, the only possibility to put the system into a consistent state is to close both
sockets and to open them anew. It should be noted that sometimes while reading data no error message will be
returned if the sending socket is closed while the receiving socket is waiting for data. In these cases, empty data
are returned (either objects or tuples).
The timeout is given in seconds as a floating point number. It can also be set to ’infinite’, causing the read calls to
wait indefinitely.
Parameter
See also
OpenSocketConnect, CloseSocket, GetSocketTimeout, SetSocketTimeout
Module
Foundation
Tools
15.1 2D-Transformations
static void HOperatorSet.AffineTransPixel ( HTuple homMat2D,
HTuple row, HTuple col, out HTuple rowTrans, out HTuple colTrans )
The call
affine_trans_pixel (HomMat2D, Row, Col, RowTrans, ColTrans)
corresponds to the following operator sequence:
affine_trans_point_2d (HomMat2D, Row+0.5, Col+0.5, RowTmp, ColTmp)
RowTrans := RowTmp-0.5
ColTrans := ColTmp-0.5
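The operator sequence above can be written out in plain Python (a sketch; the 2x3 matrix layout [[a, b, c], [d, e, f]] for HomMat2D is an assumption for illustration):

```python
# Pixel coordinates address pixel centers, so AffineTransPixel shifts by
# +0.5 into the continuous image coordinate system, transforms the point,
# and shifts back by -0.5. The 2x3 matrix layout is a hypothetical stand-in.

def affine_trans_point_2d(m, row, col):
    a, b, c = m[0]
    d, e, f = m[1]
    return a * row + b * col + c, d * row + e * col + f

def affine_trans_pixel(m, row, col):
    row_tmp, col_tmp = affine_trans_point_2d(m, row + 0.5, col + 0.5)
    return row_tmp - 0.5, col_tmp - 0.5

# A pure translation by (10, 20) moves pixel (0, 0) to (10, 20):
identity_shift = [[1.0, 0.0, 10.0], [0.0, 1.0, 20.0]]
print(affine_trans_pixel(identity_shift, 0, 0))  # (10.0, 20.0)
```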
Parameter
. homMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; HHomMat2D / HTuple (double)
Input transformation matrix.
. row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; HTuple (double / int / long)
Input pixel(s) (row coordinate).
Default Value : 64
Suggested values : Row ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
If the points to transform are specified in standard image coordinates, their row coordinates must be passed in px
and their column coordinates in py. This is necessary to obtain a right-handed coordinate system for the image. In
particular, this assures that rotations are performed in the correct direction. Note that the (x,y) order of the matrices
quite naturally corresponds to the usual (row,column) order for coordinates in the image.
The transformation matrix can be created using the operators HomMat2dIdentity, HomMat2dRotate,
HomMat2dTranslate, etc., or can be the result of operators like VectorAngleToRigid.
For example, if homMat2D corresponds to a rigid transformation, i.e., if it consists of a rotation and a translation,
the points are transformed as follows:
\[
\begin{pmatrix} q_x \\ q_y \\ 1 \end{pmatrix}
= \begin{pmatrix} R & t \\ 0 & 1 \end{pmatrix}
\cdot \begin{pmatrix} p_x \\ p_y \\ 1 \end{pmatrix}
= \begin{pmatrix} R \cdot \begin{pmatrix} p_x \\ p_y \end{pmatrix} + t \\ 1 \end{pmatrix}
\]
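The rigid case, a rotation R by an angle phi plus a translation t, can be sketched in Python (illustration only, not HALCON code):

```python
import math

# Build a homogeneous 3x3 matrix from a rotation (angle phi) and a
# translation (tx, ty), then apply it to a point (px, py).

def rigid_hom_mat2d(phi, tx, ty):
    c, s = math.cos(phi), math.sin(phi)
    return [[c, -s, tx],
            [s,  c, ty],
            [0,  0,  1]]

def transform(hom, px, py):
    qx = hom[0][0] * px + hom[0][1] * py + hom[0][2]
    qy = hom[1][0] * px + hom[1][1] * py + hom[1][2]
    return qx, qy                      # R * (px, py) + t

# Rotate (1, 0) by 90 degrees and translate by (2, 3):
print(transform(rigid_hom_mat2d(math.pi / 2, 2.0, 3.0), 1.0, 0.0))  # ~ (2.0, 4.0)
```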
Parameter
1 2
3 4
the following projective transformations should be determined, assuming that all images overlap each other: 1→2, 1→3, 1→4, 2→3, 2→4, and 3→4. The indices of the images that determine the respective transformation are given by mappingSource and mappingDest. The indices start at 1. Consequently, in the above example
in the mosaic is given by numImages. It is used to check whether each image can be reached by a chain of
transformations. The index of the reference image is given by referenceImage. On output, this image has the
identity matrix as its transformation matrix.
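The reachability check described above can be sketched as a breadth-first search over the (mappingSource, mappingDest) pairs (illustration only, not HALCON code):

```python
# Every image must be connected to the reference image by a chain of
# pairwise transformations. Pairs are treated as undirected edges, since
# a projective transformation can be inverted.
from collections import deque

def all_images_reachable(num_images, mapping_source, mapping_dest,
                         reference_image):
    neighbors = {i: set() for i in range(1, num_images + 1)}
    for s, d in zip(mapping_source, mapping_dest):
        neighbors[s].add(d)
        neighbors[d].add(s)
    seen, queue = {reference_image}, deque([reference_image])
    while queue:
        for n in neighbors[queue.popleft()] - seen:
            seen.add(n)
            queue.append(n)
    return len(seen) == num_images

# The 2x2 mosaic example: pairs 1->2, 1->3, 1->4, 2->3, 2->4, 3->4.
print(all_images_reachable(4, [1, 1, 1, 2, 2, 3], [2, 3, 4, 3, 4, 4], 1))  # True
```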
The 3 × 3 projective transformation matrices that correspond to the image pairs are passed in homMatrices2D.
Additionally, the coordinates of the matched point pairs in the image pairs must be passed in rows1, cols1,
rows2, and cols2. They can be determined from the output of ProjMatchPointsRansac with
TupleSelect or with the HDevelop function subset. To enable BundleAdjustMosaic to determine
which point pair belongs to which image pair, numCorrespondences must contain the number of found point
matches for each image pair.
The parameter transformation determines the class of transformations that is used in the bundle adjustment
to transform the image points. This can be used to restrict the allowable transformations. For transformation
= ’projective’, projective transformations are used (see VectorToProjHomMat2d). For transformation
= ’affine’, affine transformations are used (see VectorToHomMat2d), for transformation = ’similarity’,
similarity transformations (see VectorToSimilarity), and for transformation = ’rigid’ rigid transfor-
mations (see VectorToRigid).
The resulting bundle-adjusted transformations are returned as an array of 3 × 3 projective transformation matrices
in mosaicMatrices2D. In addition, the points reconstructed by the bundle adjustment are returned in (rows,
cols). The average projection error of the reconstructed points is returned in error. This can be used to check
whether the optimization has converged to useful values.
Parameter
* Assume that Images contains the four images of the mosaic in the
* layout given in the above description. Then the following example
* computes the bundle-adjusted transformation matrices.
From := [1,1,1,2,2,3]
To := [2,3,4,3,4,4]
HomMatrices2D := []
Rows1 := []
Cols1 := []
Rows2 := []
Cols2 := []
NumMatches := []
for J := 0 to |From|-1 by 1
select_obj (Images, From[J], ImageF)
select_obj (Images, To[J], ImageT)
points_foerstner (ImageF, 1, 2, 3, 100, 0.1, ’gauss’, ’true’,
RowsF, ColsF, _, _, _, _, _, _, _, _)
points_foerstner (ImageT, 1, 2, 3, 100, 0.1, ’gauss’, ’true’,
RowsT, ColsT, _, _, _, _, _, _, _, _)
proj_match_points_ransac (ImageF, ImageT, RowsF, ColsF, RowsT, ColsT,
’ncc’, 10, 0, 0, 480, 640, 0, 0.5,
’gold_standard’, 2, 42, HomMat2D,
Points1, Points2)
HomMatrices2D := [HomMatrices2D,HomMat2D]
Rows1 := [Rows1,subset(RowsF,Points1)]
Cols1 := [Cols1,subset(ColsF,Points1)]
Rows2 := [Rows2,subset(RowsT,Points2)]
Cols2 := [Cols2,subset(ColsT,Points2)]
NumMatches := [NumMatches,|Points1|]
endfor
bundle_adjust_mosaic (4, 1, From, To, HomMatrices2D, Rows1, Cols1,
                      Rows2, Cols2, NumMatches, 'rigid',
                      MosaicMatrices, Rows, Cols, Error)
gen_bundle_adjusted_mosaic (Images, MosaicImage, MosaicMatrices,
                            'default', 'false', TransMat2D)
Result
If the parameters are valid, the operator BundleAdjustMosaic returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Parallelization Information
BundleAdjustMosaic is reentrant and processed without parallelization.
Possible Predecessors
ProjMatchPointsRansac
Possible Successors
GenBundleAdjustedMosaic
See also
GenProjectiveMosaic
Module
Matching
For example, if the two input matrices correspond to rigid transformations, i.e., to transformations consisting of a
rotation and a translation, the resulting matrix is calculated as follows:
\[
\mathtt{homMat2DCompose} =
\begin{pmatrix} R_l & t_l \\ 0\;0 & 1 \end{pmatrix}
\cdot
\begin{pmatrix} R_r & t_r \\ 0\;0 & 1 \end{pmatrix}
=
\begin{pmatrix} R_l \cdot R_r & R_l \cdot t_r + t_l \\ 0\;0 & 1 \end{pmatrix}
\]
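The block form of the composition can be verified numerically with a small pure-Python sketch (not HALCON code; the matrices here are full 3 × 3 lists):

```python
# Compose two rigid homogeneous matrices; the translation part of the
# result is Rl * tr + tl, as in the formula above.
def matmul3(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

left  = [[0, -1, 1], [1, 0, 0], [0, 0, 1]]   # 90 deg rotation, t_l = (1, 0)
right = [[1, 0, 0], [0, 1, 2], [0, 0, 1]]    # pure translation, t_r = (0, 2)
composed = matmul3(left, right)
# Translation part: Rl * (0, 2) + (1, 0) = (-2, 0) + (1, 0) = (-1, 0)
print(composed[0][2], composed[1][2])
```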
Parameter
double HHomMat2D.HomMat2dDeterminant ( )
Compute the determinant of a homogeneous 2D transformation matrix.
HomMat2dDeterminant computes the determinant of the homogeneous 2D transformation matrix given by
homMat2D and returns it in determinant.
Parameter
Result
HomMat2dDeterminant always returns 2 (H_MSG_TRUE).
Parallelization Information
HomMat2dDeterminant is reentrant and processed without parallelization.
Possible Predecessors
HomMat2dTranslate, HomMat2dTranslateLocal, HomMat2dScale, HomMat2dScaleLocal,
HomMat2dRotate, HomMat2dRotateLocal, HomMat2dSlant, HomMat2dSlantLocal
Module
Foundation
public HHomMat2D ( )
void HHomMat2D.HomMat2dIdentity ( )
Generate the homogeneous transformation matrix of the identical 2D transformation.
HomMat2dIdentity generates the homogeneous transformation matrix homMat2DIdentity describing the
identical 2D transformation:
\[
\mathtt{homMat2DIdentity} =
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. Thus, homMat2DIdentity is stored as the
tuple [1,0,0,0,1,0].
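The storage convention can be sketched in pure Python (illustrative helpers, not HALCON API): the 6-element tuple holds the first two rows, and the affine last row [0, 0, 1] is implicit.

```python
# Convert between the 6-element affine tuple and the full 3x3 matrix.
def tuple_to_mat(t):
    return [list(t[0:3]), list(t[3:6]), [0, 0, 1]]

def mat_to_tuple(m):
    return m[0] + m[1]

identity = tuple_to_mat([1, 0, 0, 0, 1, 0])
print(identity)  # full 3x3 identity matrix
```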
Parameter
HHomMat2D HHomMat2D.HomMat2dInvert ( )
Invert a homogeneous 2D transformation matrix.
HomMat2dInvert inverts the homogeneous 2D transformation matrix given by homMat2D. The resulting ma-
trix is returned in homMat2DInvert.
Parameter
The point (px,py) is the fixed point of the transformation, i.e., this point remains unchanged when transformed
using homMat2DRotate. To obtain this behavior, first a translation is added to the input transformation matrix
that moves the fixed point onto the origin of the global coordinate system. Then, the rotation is added, and finally
a translation that moves the fixed point back to its original position. This corresponds to the following chain of
transformations:
\[
\mathtt{homMat2DRotate} =
\begin{pmatrix} 1 & 0 & +p_x \\ 0 & 1 & +p_y \\ 0 & 0 & 1 \end{pmatrix}
\cdot
\begin{pmatrix} R & 0 \\ 0\;0 & 1 \end{pmatrix}
\cdot
\begin{pmatrix} 1 & 0 & -p_x \\ 0 & 1 & -p_y \\ 0 & 0 & 1 \end{pmatrix}
\cdot \mathtt{homMat2D}
\]
To perform the transformation in the local coordinate system, i.e., the one described by homMat2D, use
HomMat2dRotateLocal.
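The translate-rotate-translate chain can be sketched in pure Python (illustrative only; the function name mimics but is not the HALCON operator). The key property is that the fixed point (px, py) maps to itself:

```python
import math

# Chain T(+p) * R(phi) * T(-p) * homMat2D, as in the formula above.
def matmul3(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotation(phi):
    c, s = math.cos(phi), math.sin(phi)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def hom_mat2d_rotate(m, phi, px, py):
    chain = matmul3(translate(px, py),
                    matmul3(rotation(phi), translate(-px, -py)))
    return matmul3(chain, m)

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
m = hom_mat2d_rotate(identity, math.pi / 2, 5, 5)
# The fixed point (5, 5) remains unchanged under the transformation.
x = m[0][0] * 5 + m[0][1] * 5 + m[0][2]
y = m[1][0] * 5 + m[1][1] * 5 + m[1][2]
print(round(x, 9), round(y, 9))
```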
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (row,column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix
ra rb tc
rd re tf
0 0 1
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
. homMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; HHomMat2D / HTuple (double)
Input transformation matrix.
. phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; HTuple (double / int / long)
Rotation angle.
Default Value : 0.78
Suggested values : Phi ∈ {0.1, 0.2, 0.3, 0.4, 0.78, 1.57, 3.14}
Typical range of values : 0 ≤ Phi ≤ 6.28318530718
. px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; HTuple (double / int / long)
Fixed point of the transformation (x coordinate).
Default Value : 0
Suggested values : Px ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; HTuple (double / int / long)
Fixed point of the transformation (y coordinate).
Default Value : 0
Suggested values : Py ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. homMat2DRotate (output_control) . . . . . . . . . . . . . . hom_mat2d-array ; HHomMat2D / HTuple (double)
Output transformation matrix.
Result
If the parameters are valid, the operator HomMat2dRotate returns 2 (H_MSG_TRUE). If necessary, an excep-
tion is raised.
Parallelization Information
HomMat2dRotate is reentrant and processed without parallelization.
Possible Predecessors
HomMat2dIdentity, HomMat2dTranslate, HomMat2dScale, HomMat2dRotate,
HomMat2dSlant
Possible Successors
HomMat2dTranslate, HomMat2dScale, HomMat2dRotate, HomMat2dSlant
See also
HomMat2dRotateLocal
Module
Foundation
tion matrix R. In contrast to HomMat2dRotate, it is performed relative to the local coordinate system, i.e., the
coordinate system described by homMat2D; this corresponds to the following chain of transformation matrices:
\[
\mathtt{homMat2DRotate} = \mathtt{homMat2D} \cdot
\begin{pmatrix} R & 0 \\ 0\;0 & 1 \end{pmatrix}
\qquad
R = \begin{pmatrix} \cos(\mathtt{phi}) & -\sin(\mathtt{phi}) \\ \sin(\mathtt{phi}) & \cos(\mathtt{phi}) \end{pmatrix}
\]
The fixed point of the transformation is the origin of the local coordinate system, i.e., this point remains unchanged
when transformed using homMat2DRotate.
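The difference between the local and the global variant is only the multiplication order. A minimal pure-Python sketch (not HALCON code) makes this visible for an input matrix that is a pure translation:

```python
import math

# Local variant post-multiplies (homMat2D * R); the global variant
# pre-multiplies (R * homMat2D). Only the latter rotates the translation.
def matmul3(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotation(phi):
    c, s = math.cos(phi), math.sin(phi)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

m = [[1, 0, 7], [0, 1, 3], [0, 0, 1]]        # translation by (7, 3)
local = matmul3(m, rotation(math.pi / 2))     # translation part unchanged
glob = matmul3(rotation(math.pi / 2), m)      # translation part rotated
print(local[0][2], local[1][2], glob[0][2], glob[1][2])
```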
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (row,column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix
ra rb tc
rd re tf
0 0 1
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
The point (px,py) is the fixed point of the transformation, i.e., this point remains unchanged when transformed
using homMat2DScale. To obtain this behavior, first a translation is added to the input transformation matrix
that moves the fixed point onto the origin of the global coordinate system. Then, the scaling is added, and finally
a translation that moves the fixed point back to its original position. This corresponds to the following chain of
transformations:
\[
\mathtt{homMat2DScale} =
\begin{pmatrix} 1 & 0 & +p_x \\ 0 & 1 & +p_y \\ 0 & 0 & 1 \end{pmatrix}
\cdot
\begin{pmatrix} S & 0 \\ 0\;0 & 1 \end{pmatrix}
\cdot
\begin{pmatrix} 1 & 0 & -p_x \\ 0 & 1 & -p_y \\ 0 & 0 & 1 \end{pmatrix}
\cdot \mathtt{homMat2D}
\]
To perform the transformation in the local coordinate system, i.e., the one described by homMat2D, use
HomMat2dScaleLocal.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (row,column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix
ra rb tc
rd re tf
0 0 1
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
The fixed point of the transformation is the origin of the local coordinate system, i.e., this point remains unchanged
when transformed using homMat2DScale.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (row,column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix
ra rb tc
rd re tf
0 0 1
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
fixed, while for axis = ’y’ the y-axis is slanted and the x-axis remains fixed. The slanting is performed relative to
the global (i.e., fixed) coordinate system; this corresponds to the following chains of transformation matrices:
\[
\mathtt{axis} = \text{'x'}: \quad \mathtt{homMat2DSlant} =
\begin{pmatrix} \cos(\mathtt{theta}) & 0 & 0 \\ \sin(\mathtt{theta}) & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\cdot \mathtt{homMat2D}
\]
\[
\mathtt{axis} = \text{'y'}: \quad \mathtt{homMat2DSlant} =
\begin{pmatrix} 1 & -\sin(\mathtt{theta}) & 0 \\ 0 & \cos(\mathtt{theta}) & 0 \\ 0 & 0 & 1 \end{pmatrix}
\cdot \mathtt{homMat2D}
\]
The point (px,py) is the fixed point of the transformation, i.e., this point remains unchanged when transformed
using homMat2DSlant. To obtain this behavior, first a translation is added to the input transformation matrix
that moves the fixed point onto the origin of the global coordinate system. Then, the slant is added, and finally
a translation that moves the fixed point back to its original position. This corresponds to the following chain of
transformations for axis = ’x’:
\[
\mathtt{homMat2DSlant} =
\begin{pmatrix} 1 & 0 & +p_x \\ 0 & 1 & +p_y \\ 0 & 0 & 1 \end{pmatrix}
\cdot
\begin{pmatrix} \cos(\mathtt{theta}) & 0 & 0 \\ \sin(\mathtt{theta}) & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\cdot
\begin{pmatrix} 1 & 0 & -p_x \\ 0 & 1 & -p_y \\ 0 & 0 & 1 \end{pmatrix}
\cdot \mathtt{homMat2D}
\]
To perform the transformation in the local coordinate system, i.e., the one described by homMat2D, use
HomMat2dSlantLocal.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (row,column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix
ra rb tc
rd re tf
0 0 1
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
. homMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; HHomMat2D / HTuple (double)
Input transformation matrix.
. theta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; HTuple (double / int / long)
Slant angle.
Default Value : 0.78
Suggested values : Theta ∈ {0.1, 0.2, 0.3, 0.4, 0.78, 1.57, 3.14}
Typical range of values : 0 ≤ Theta ≤ 6.28318530718
. axis (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Coordinate axis that is slanted.
Default Value : "x"
List of values : Axis ∈ {"x", "y"}
. px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; HTuple (double / int / long)
Fixed point of the transformation (x coordinate).
Default Value : 0
Suggested values : Px ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
\[
\mathtt{axis} = \text{'x'}: \quad \mathtt{homMat2DSlant} = \mathtt{homMat2D} \cdot
\begin{pmatrix} \cos(\mathtt{theta}) & 0 & 0 \\ \sin(\mathtt{theta}) & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]
\[
\mathtt{axis} = \text{'y'}: \quad \mathtt{homMat2DSlant} = \mathtt{homMat2D} \cdot
\begin{pmatrix} 1 & -\sin(\mathtt{theta}) & 0 \\ 0 & \cos(\mathtt{theta}) & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]
The fixed point of the transformation is the origin of the local coordinate system, i.e., this point remains unchanged
when transformed using homMat2DSlant.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (row,column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix
ra rb tc
rd re tf
0 0 1
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
x-axis. The parameters tx and ty determine the translation of the two coordinate systems. The matrix homMat2D
can be constructed from the six transformation parameters by the following operator sequence:
hom_mat2d_identity (HomMat2DIdentity)
hom_mat2d_scale (HomMat2DIdentity, Sx, Sy, 0, 0, HomMat2DScale)
hom_mat2d_slant (HomMat2DScale, Theta, ’y’, 0, 0, HomMat2DSlant)
hom_mat2d_rotate (HomMat2DSlant, Phi, 0, 0, HomMat2DRotate)
hom_mat2d_translate (HomMat2DRotate, Tx, Ty, HomMat2D)
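The operator sequence above can be mirrored in a pure-Python sketch (illustrative, not HALCON code): each hom_mat2d_* step pre-multiplies its transformation onto the matrix built so far.

```python
import math

# Build an affine matrix from Sx, Sy, Theta, Phi, Tx, Ty by chaining
# scale, slant of the y-axis, rotation, and translation.
def matmul3(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def compose_affine(sx, sy, theta, phi, tx, ty):
    scale = [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]
    slant = [[1, -math.sin(theta), 0], [0, math.cos(theta), 0], [0, 0, 1]]
    c, s = math.cos(phi), math.sin(phi)
    rot = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    trans = [[1, 0, tx], [0, 1, ty], [0, 0, 1]]
    # Translation is applied last, so it ends up leftmost in the product.
    return matmul3(trans, matmul3(rot, matmul3(slant, scale)))

m = compose_affine(1, 1, 0, 0, 4, 2)  # degenerate case: pure translation
print(m)
```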
Parameter
\[
\mathtt{homMat2DTranslate} =
\begin{pmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{pmatrix}
\cdot \mathtt{homMat2D}
\qquad
t = \begin{pmatrix} t_x \\ t_y \end{pmatrix}
\]
To perform the transformation in the local coordinate system, i.e., the one described by homMat2D, use
HomMat2dTranslateLocal.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (row,column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix
ra rb tc
rd re tf
0 0 1
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
. homMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; HHomMat2D / HTuple (double)
Input transformation matrix.
. tx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; HTuple (double / int / long)
Translation along the x-axis.
Default Value : 64
Suggested values : Tx ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. ty (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; HTuple (double / int / long)
Translation along the y-axis.
Default Value : 64
Suggested values : Ty ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. homMat2DTranslate (output_control) . . . . . . . . . . hom_mat2d-array ; HHomMat2D / HTuple (double)
Output transformation matrix.
Result
If the parameters are valid, the operator HomMat2dTranslate returns 2 (H_MSG_TRUE). If necessary, an
exception is raised.
Parallelization Information
HomMat2dTranslate is reentrant and processed without parallelization.
Possible Predecessors
HomMat2dIdentity, HomMat2dTranslate, HomMat2dScale, HomMat2dRotate,
HomMat2dSlant
Possible Successors
HomMat2dTranslate, HomMat2dScale, HomMat2dRotate, HomMat2dSlant
See also
HomMat2dTranslateLocal
Module
Foundation
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (row,column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix
ra rb tc
rd re tf
0 0 1
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
. homMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; HHomMat2D / HTuple (double)
Input transformation matrix.
. tx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; HTuple (double / int / long)
Translation along the x-axis.
Default Value : 64
Suggested values : Tx ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. ty (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; HTuple (double / int / long)
Translation along the y-axis.
Default Value : 64
Suggested values : Ty ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. homMat2DTranslate (output_control) . . . . . . . . . . hom_mat2d-array ; HHomMat2D / HTuple (double)
Output transformation matrix.
Result
If the parameters are valid, the operator HomMat2dTranslateLocal returns 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Parallelization Information
HomMat2dTranslateLocal is reentrant and processed without parallelization.
Possible Predecessors
HomMat2dIdentity, HomMat2dTranslateLocal, HomMat2dScaleLocal,
HomMat2dRotateLocal, HomMat2dSlantLocal
Possible Successors
HomMat2dTranslateLocal, HomMat2dScaleLocal, HomMat2dRotateLocal,
HomMat2dSlantLocal
See also
HomMat2dTranslate
Module
Foundation
HHomMat2D HHomMat2D.HomMat2dTranspose ( )
Transpose a homogeneous 2D transformation matrix.
HomMat2dTranspose transposes the homogeneous 2D transformation matrix given by homMat2D. The result
matrix homMat2DTranspose is always a 3 × 3 matrix, even if the input matrix is represented by a 2 × 3 matrix.
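A pure-Python sketch of this behavior (illustrative, not HALCON code): the implicit affine last row [0, 0, 1] is expanded before transposing, so the result always has 9 values.

```python
# Transpose a hom_mat2d given as a 6-element affine tuple or a full
# 9-element tuple; the output is always a full 3x3 matrix.
def transpose_hom_mat2d(t):
    if len(t) == 6:
        m = [list(t[0:3]), list(t[3:6]), [0, 0, 1]]
    else:
        m = [list(t[0:3]), list(t[3:6]), list(t[6:9])]
    return [[m[j][i] for j in range(3)] for i in range(3)]

print(transpose_hom_mat2d([1, 2, 3, 4, 5, 6]))
```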
Parameter
. homMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; HHomMat2D / HTuple (double)
Input transformation matrix.
. homMat2DTranspose (output_control) . . . . . . . . . . hom_mat2d-array ; HHomMat2D / HTuple (double)
Output transformation matrix.
Result
HomMat2dTranspose always returns 2 (H_MSG_TRUE).
Parallelization Information
HomMat2dTranspose is reentrant and processed without parallelization.
Possible Predecessors
HomMat2dTranslate, HomMat2dTranslateLocal, HomMat2dScale, HomMat2dScaleLocal,
HomMat2dRotate, HomMat2dRotateLocal, HomMat2dSlant, HomMat2dSlantLocal
Possible Successors
HomMat2dCompose, HomMat2dInvert
Module
Foundation
The point (principalPointRow, principalPointCol) is the principal point of the projection and the
point (principalPointRow, principalPointCol, 0) can thus be interpreted as the position of the camera
in a virtual three-dimensional space. The direction of view is along the positive z-axis.
In this virtual space the plane containing the input image as well as the image plane are located at z = focus,
which is focus pixels away from the camera. As a result, using the identity matrix as the input matrix homMat3D
leads to a matrix homMat2D which also represents the identity in 2D.
Consequently, the parameter focus is the “focal distance” of the virtual camera used and its unit is pixels. Its
value influences the degree of perspective distortions. The same input matrix at a bigger focal distance results in
weaker distortions than at a low focal distance.
Let H be the affine 3D matrix with elements hij , (r, c) = (principalPointRow, principalPointCol)
and f = focus.
Then the projective transformation matrix is calculated as follows: First, a 3×4 projection matrix is calculated as:
\[
Q =
\begin{pmatrix} f & 0 & c \\ 0 & f & r \\ 0 & 0 & 1 \end{pmatrix}
\cdot
\begin{pmatrix} 1 & 0 & 0 & -c \\ 0 & 1 & 0 & -r \\ 0 & 0 & 1 & 0 \end{pmatrix}
\cdot
\begin{pmatrix} h_{11} & h_{12} & h_{13} & h_{14} \\ h_{21} & h_{22} & h_{23} & h_{24} \\ h_{31} & h_{32} & h_{33} & h_{34} \\ 0 & 0 & 0 & 1 \end{pmatrix}
\]
Since the image of a plane containing points (x, y, f, 1)T is to be calculated the last two columns of Q can be
joined:
\[
R =
\begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix}
=
\begin{pmatrix} q_{11} & q_{12} & f \cdot q_{13} + q_{14} \\ q_{21} & q_{22} & f \cdot q_{23} + q_{24} \\ q_{31} & q_{32} & f \cdot q_{33} + q_{34} \end{pmatrix}
= Q \cdot
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & f \\ 0 & 0 & 1 \end{pmatrix}
\]
Finally, the columns and rows of R are swapped in a way that the first row of P contains the transformation of the
row coordinates and the second row contains the transformation of the column coordinates so that P can be used
directly in ProjectiveTransImage:
\[
P =
\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\cdot R \cdot
\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]
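The three steps above can be followed in a pure-Python sketch (assumed from the formulas in this section, not taken from HALCON source; function names are hypothetical). With the identity as input 3D matrix, the result is the 2D identity up to the projective scale:

```python
# Build P from a 3x4 affine 3D matrix h, the principal point (r, c),
# and the focal distance f in pixels, following Q -> R -> P above.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def hom_mat3d_project(h, r, c, f):
    k_mat = [[f, 0, c], [0, f, r], [0, 0, 1]]
    center = [[1, 0, 0, -c], [0, 1, 0, -r], [0, 0, 1, 0]]
    h4 = [h[0], h[1], h[2], [0, 0, 0, 1]]          # implicit last row
    q = matmul(matmul(k_mat, center), h4)           # 3x4 projection matrix
    fold = [[1, 0, 0], [0, 1, 0], [0, 0, f], [0, 0, 1]]
    r_mat = matmul(q, fold)                         # join last two columns
    swap = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]
    p = matmul(matmul(swap, r_mat), swap)           # swap rows and columns
    s = p[2][2]                                     # normalize projective scale
    return [[v / s for v in row] for row in p]

identity3d = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
print(hom_mat3d_project(identity3d, 240, 320, 256))
```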
Parameter
. homMat3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; HHomMat3D / HTuple (double)
3 × 4 3D transformation matrix.
. principalPointRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; HTuple (double / int / long)
Row coordinate of the principal point.
Default Value : 256
Suggested values : PrincipalPointRow ∈ {16, 32, 64, 128, 240, 256, 512}
. principalPointCol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; HTuple (double / int / long)
Column coordinate of the principal point.
Default Value : 256
Suggested values : PrincipalPointCol ∈ {16, 32, 64, 128, 256, 320, 512}
. focus (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double / int / long)
Focal length in pixels.
Default Value : 256
Suggested values : Focus ∈ {1, 2, 5, 256, 32768}
If fewer than 4 pairs of points (px, py, pw), (qx, qy, qw) are given, there exists no unique solution; if exactly 4
pairs are supplied, the matrix homMat2D transforms them in exactly the desired way; and if more than 4 point
pairs are given, HomVectorToProjHomMat2d seeks to minimize the transformation error. To achieve such
a minimization, two different algorithms are available. The algorithm to use can be chosen using the parameter
method. For conventional geometric problems, method=’normalized_dlt’ usually yields better results. However,
if one of the coordinates qw or pw equals 0, method=’dlt’ must be chosen.
In contrast to VectorToProjHomMat2d, HomVectorToProjHomMat2d uses homogeneous coordinates
for the points, and hence points at infinity (pw = 0 or qw = 0) can be used to determine the transformation. If
finite points are used, typically pw and qw are set to 1. In this case, VectorToProjHomMat2d can also be
used. VectorToProjHomMat2d has the advantage that one additional optimization method can be used and
that the covariances of the points can be taken into account. If the correspondence between the points has not
been determined, ProjMatchPointsRansac should be used to determine the correspondence as well as the
transformation.
If the points to transform are specified in standard image coordinates, their row coordinates must be passed in px
and their column coordinates in py. This is necessary to obtain a right-handed coordinate system for the image. In
particular, this assures that rotations are performed in the correct direction. Note that the (x,y) order of the matrices
quite naturally corresponds to the usual (row,column) order for coordinates in the image.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (row,column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Parameter
Compute a projective transformation matrix between two images by finding correspondences between points.
Given a set of coordinates of characteristic points (cols1, rows1) and (cols2, rows2) in both input im-
ages image1 and image2, ProjMatchPointsRansac automatically determines corresponding points and
the homogeneous projective transformation matrix homMat2D that best transforms the corresponding points
from the different images into each other. The characteristic points can, for example, be extracted with
PointsFoerstner or PointsHarris.
The transformation is determined in two steps: First, gray value correlations of mask windows around the input
points in the first and the second image are determined and an initial matching between them is generated using
the similarity of the windows in both images.
The size of the mask windows is maskSize × maskSize. Three metrics for the correlation can be selected.
If grayMatchMethod has the value ’ssd’, the sum of the squared gray value differences is used, ’sad’ means
the sum of absolute differences, and ’ncc’ is the normalized cross correlation. This metric is minimized (’ssd’,
’sad’) or maximized (’ncc’) over all possible point pairs. A thus found matching is only accepted if the value of
the metric is below the value of matchThreshold (’ssd’, ’sad’) or above that value (’ncc’).
To increase the algorithm’s performance, the search area for the matchings can be limited. Only points within a
window of 2 · rowTolerance × 2 · colTolerance points are considered. The offset of the center of the
search window in the second image with respect to the position of the current point in the first image is given by
rowMove and colMove.
If the transformation contains a rotation, i.e., if the first image is rotated with respect to the second image, the
parameter rotation may contain an estimate for the rotation angle or an angle interval in radians. A good guess
will increase the quality of the gray value matching. If the actual rotation differs too much from the specified
estimate the matching will typically fail. The larger the given interval, the slower the operator is since the entire
algorithm is run for all relevant angles within the interval.
Once the initial matching is complete, a randomized search algorithm (RANSAC) is used to determine the transfor-
mation matrix homMat2D. It tries to find the matrix that is consistent with a maximum number of correspondences.
For a point to be accepted, its distance from the coordinates predicted by the transformation must not exceed the
threshold distanceThreshold.
Once a choice has been made, the matrix is further optimized using all consistent points. For this optimization, the
estimationMethod can be chosen to either be the slow but mathematically optimal ’gold_standard’ method
or the faster ’normalized_dlt’. Here, the algorithms of VectorToProjHomMat2d are used.
Point pairs that still violate the consistency condition for the final transformation are dropped, the matched points
are returned as control values. points1 contains the indices of the matched input points from the first image,
points2 contains the indices of the corresponding points in the second image.
The parameter randSeed can be used to control the randomized nature of the RANSAC algorithm, and hence to
obtain reproducible results. If randSeed is set to a positive number, the operator yields the same result on every
call with the same parameters because the internally used random number generator is initialized with the seed
value. If randSeed = 0, the random number generator is initialized with the current time. Hence, the results
may not be reproducible in this case.
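The consensus test at the heart of the RANSAC step can be sketched in plain Python (this illustrates the distance criterion described above, not the HALCON implementation; all function names are made up):

```python
import math

def project(hom_mat_2d, row, col):
    """Apply a 3x3 projective matrix (row-major nested lists) to one point."""
    r = hom_mat_2d[0][0] * row + hom_mat_2d[0][1] * col + hom_mat_2d[0][2]
    c = hom_mat_2d[1][0] * row + hom_mat_2d[1][1] * col + hom_mat_2d[1][2]
    w = hom_mat_2d[2][0] * row + hom_mat_2d[2][1] * col + hom_mat_2d[2][2]
    return r / w, c / w

def count_inliers(hom_mat_2d, points1, points2, distance_threshold):
    """Count correspondences whose predicted point lies within the threshold."""
    inliers = 0
    for (r1, c1), (r2, c2) in zip(points1, points2):
        r_pred, c_pred = project(hom_mat_2d, r1, c1)
        if math.hypot(r_pred - r2, c_pred - c2) <= distance_threshold:
            inliers += 1
    return inliers
```

RANSAC repeatedly estimates a candidate matrix from a minimal point subset, scores it with such an inlier count, and keeps the candidate with the largest consensus set.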
Parameter
HALCON 8.0.2
1084 CHAPTER 15. TOOLS
ProjectiveTransPixel corresponds to the following steps (input and output points as homogeneous vectors):

    (RTrans, CTrans, WTrans)^T = homMat2D · (row, col, 1)^T

    rowTrans = RTrans / WTrans
    colTrans = CTrans / WTrans
If a point at infinity (WTrans = 0) is created by the transformation, an error is returned.
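The transformation and the subsequent division by WTrans can be sketched as follows (a plain-Python illustration of the equations above, not the HALCON API):

```python
def projective_trans_pixel(hom_mat_2d, row, col):
    """Apply a 3x3 projective matrix (row-major nested lists) to one pixel."""
    r_t = hom_mat_2d[0][0] * row + hom_mat_2d[0][1] * col + hom_mat_2d[0][2]
    c_t = hom_mat_2d[1][0] * row + hom_mat_2d[1][1] * col + hom_mat_2d[1][2]
    w_t = hom_mat_2d[2][0] * row + hom_mat_2d[2][1] * col + hom_mat_2d[2][2]
    if w_t == 0.0:
        raise ValueError("transformed point lies at infinity (WTrans = 0)")
    # divide by the homogeneous coordinate to obtain pixel coordinates
    return r_t / w_t, c_t / w_t
```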
Parameter
. homMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; HHomMat2D / HTuple (double)
Homogeneous projective transformation matrix.
. row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; HTuple (double / int / long)
Input pixel(s) (row coordinate).
Default Value : 64
Suggested values : Row ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. col (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; HTuple (double / int / long)
Input pixel(s) (column coordinate).
Default Value : 64
Suggested values : Col ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. rowTrans (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; HTuple (double)
Output pixel(s) (row coordinate).
. colTrans (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; HTuple (double)
Output pixel(s) (column coordinate).
Parallelization Information
ProjectiveTransPixel is reentrant and processed without parallelization.
Possible Predecessors
VectorToProjHomMat2d, HomVectorToProjHomMat2d, ProjMatchPointsRansac,
HomMat3dProject
See also
ProjectiveTransImage, ProjectiveTransImageSize, ProjectiveTransRegion,
ProjectiveTransContourXld, ProjectiveTransPoint2d
Module
Foundation
To transform the homogeneous coordinates to Euclidean coordinates, they have to be divided by qw:
    Ex = Qx / Qw
    Ey = Qy / Qw
If the points to transform are specified in standard image coordinates, their row coordinates must be passed in px
and their column coordinates in py. This is necessary to obtain a right-handed coordinate system for the image. In
particular, this assures that rotations are performed in the correct direction. Note that the (x,y) order of the matrices
quite naturally corresponds to the usual (row,column) order for coordinates in the image.
Parameter
The coordinates of the original point are passed in (row1,column1), while the corresponding angle is passed
in angle1. The coordinates of the transformed point are passed in (row2,column2), while the corresponding
angle is passed in angle2. The following equation describes the transformation of the point using homogeneous
vectors:
    (row2, column2, 1)^T = homMat2D · (row1, column1, 1)^T
In particular, the operator VectorAngleToRigid is useful to construct a rigid affine transformation from
the results of the matching operators (e.g., FindShapeModel or BestMatchRotMg), which transforms a
reference image to the current image or (if the parameters are passed in reverse order) from the current image to
the reference image.
homMat2D can be used directly with operators that transform data using affine transformations, e.g.,
AffineTransImage.
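The construction implied by the equation above can be sketched in plain Python (hypothetical helper names; a rotation by angle2 − angle1 about point 1, followed by the translation onto point 2, assuming the mathematically positive rotation direction in (row, column) coordinates):

```python
import math

def vector_angle_to_rigid(row1, col1, angle1, row2, col2, angle2):
    """2x3 rigid transform (row-major) mapping point/angle 1 onto point/angle 2."""
    a = angle2 - angle1
    ca, sa = math.cos(a), math.sin(a)
    # rotate by a about the origin, then translate so that point 1 lands on point 2
    return [
        [ca, -sa, row2 - (ca * row1 - sa * col1)],
        [sa,  ca, col2 - (sa * row1 + ca * col1)],
    ]

def apply(mat, row, col):
    """Apply the 2x3 matrix to one point (implicit homogeneous coordinate 1)."""
    return (mat[0][0] * row + mat[0][1] * col + mat[0][2],
            mat[1][0] * row + mat[1][1] * col + mat[1][2])
```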
Parameter
. row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; HTuple (double / int / long)
Row coordinate of the original point.
. column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; HTuple (double / int / long)
Column coordinate of the original point.
. angle1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; HTuple (double / int / long)
Angle of the original point.
. row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; HTuple (double / int / long)
Row coordinate of the transformed point.
. column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; HTuple (double / int / long)
Column coordinate of the transformed point.
. angle2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; HTuple (double / int / long)
Angle of the transformed point.
. homMat2D (output_control) . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; HHomMat2D / HTuple (double)
Output transformation matrix.
Example (Syntax: HDevelop)
Parallelization Information
VectorAngleToRigid is reentrant and processed without parallelization.
Possible Predecessors
BestMatchRotMg, BestMatchRot
Possible Successors
HomMat2dInvert, AffineTransImage, AffineTransRegion, AffineTransContourXld,
AffineTransPolygonXld, AffineTransPoint2d
Alternatives
VectorToRigid
See also
VectorFieldToHomMat2d
Module
Foundation
HHomMat2D HImage.VectorFieldToHomMat2d ( )
void HHomMat2D.VectorFieldToHomMat2d ( HImage vectorField )
Approximate an affine map from a displacement vector field.
VectorFieldToHomMat2d approximates an affine map from the displacement vector field vectorField.
The affine map is returned in homMat2D.
If the displacement vector field has been computed from the original image Iorig and the second image Ires ,
the internally stored transformation matrix (see AffineTransImage) contains a map that describes how to
transform the first image Iorig to the second image Ires .
Parameter
    sum_i || (qx[i], qy[i], 1)^T − homMat2D · (px[i], py[i], 1)^T ||^2 = minimum
homMat2D can be used directly with operators that transform data using affine transformations, e.g.,
AffineTransImage.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical
coordinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (row,column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Parameter
If fewer than 4 pairs of points (px,py), (qx,qy) are given, there exists no unique solution; if exactly 4 pairs
are supplied, the matrix homMat2D transforms them in exactly the desired way; and if more than 4 point pairs
are given, VectorToProjHomMat2d seeks to minimize the transformation error. To achieve such a
minimization, several different algorithms are available. The algorithm to use can be chosen using the parameter
method. method=’dlt’ uses a fast and simple, but also rather inaccurate, error estimation algorithm, while
method=’normalized_dlt’ offers a good compromise between speed and accuracy. Finally,
method=’gold_standard’ performs a mathematically optimal but slower optimization.
If ’gold_standard’ is used and the input points have been obtained from an operator like PointsFoerstner,
which provides a covariance matrix specifying the accuracy of each point, these covariances can be taken into
account by using the input parameters covYY1, covXX1, covXY1 for the points in the first image and
covYY2, covXX2, covXY2 for the points in the second image. The covariances are symmetric 2 × 2 matrices.
covXX1/covXX2 and covYY1/covYY2 contain the diagonal entries, while covXY1/covXY2 contains the
off-diagonal entry, which appears twice in a symmetric matrix. If a method other than ’gold_standard’ is used or
the covariances are unknown, the covariance parameters can be left empty.
In contrast to HomVectorToProjHomMat2d, points at infinity cannot be used to determine the transformation
in VectorToProjHomMat2d. If this is necessary, HomVectorToProjHomMat2d must be used. If the
correspondence between the points has not been determined, ProjMatchPointsRansac should be used to
determine the correspondence as well as the transformation.
If the points to transform are specified in standard image coordinates, their row coordinates must be passed in px
and their column coordinates in py. This is necessary to obtain a right-handed coordinate system for the image. In
particular, this assures that rotations are performed in the correct direction. Note that the (x,y) order of the matrices
quite naturally corresponds to the usual (row,column) order for coordinates in the image.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical
coordinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (row,column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Parameter
The point correspondences are passed in the tuples (px, py) and (qx,qy), where corresponding points must be
at the same index positions in the tuples. The transformation is always overdetermined. Therefore, the returned
transformation is the transformation that minimizes the distances between the original points (px,py) and the
transformed points (qx,qy), as described in the following equation (points as homogeneous vectors):
    sum_i || (qx[i], qy[i], 1)^T − homMat2D · (px[i], py[i], 1)^T ||^2 = minimum
homMat2D can be used directly with operators that transform data using affine transformations, e.g.,
AffineTransImage.
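The least-squares problem above can be sketched in plain Python via the normal equations (an illustration only; HALCON’s internal algorithm is not specified here, and all helper names are made up):

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [A[i][:] + [b[i]] for i in range(3)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x

def vector_to_hom_mat2d(px, py, qx, qy):
    """Least-squares affine fit: 2x3 matrix minimizing the sum above."""
    ones = [1.0] * len(px)
    basis = (px, py, ones)
    def fit_row(q):
        # normal equations of the design matrix [px py 1]
        A = [[sum(u_i * v_i for u_i, v_i in zip(u, v)) for v in basis] for u in basis]
        b = [sum(u_i * q_i for u_i, q_i in zip(u, q)) for u in basis]
        return solve3(A, b)
    # the two rows of homMat2D can be fitted independently
    return [fit_row(qx), fit_row(qy)]
```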
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical
coordinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (row,column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Parameter
The point correspondences are passed in the tuples (px, py) and (qx,qy), where corresponding points must be
at the same index positions in the tuples. If more than two point correspondences are passed the transformation
is overdetermined. In this case, the returned transformation is the transformation that minimizes the distances
between the original points (px,py) and the transformed points (qx,qy), as described in the following equation
(points as homogeneous vectors):
    sum_i || (qx[i], qy[i], 1)^T − homMat2D · (px[i], py[i], 1)^T ||^2 = minimum
homMat2D can be used directly with operators that transform data using affine transformations, e.g.,
AffineTransImage.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical
coordinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (row,column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Parameter
15.2 3D-Transformations
static void HOperatorSet.AffineTransPoint3d ( HTuple homMat3D,
HTuple px, HTuple py, HTuple pz, out HTuple qx, out HTuple qy,
out HTuple qz )
AffineTransPoint3d applies an arbitrary affine 3D transformation, i.e., scaling, rotation, and translation, to the
input points (px, py, pz). The transformation is described by the homogeneous transformation matrix given in
homMat3D. This corresponds to the following equation (input and output points as homogeneous vectors):
    (qx, qy, qz, 1)^T = homMat3D · (px, py, pz, 1)^T
The transformation matrix can be created using the operators HomMat3dIdentity, HomMat3dScale,
HomMat3dRotate, HomMat3dTranslate, etc., or be the result of PoseToHomMat3d.
For example, if homMat3D corresponds to a rigid transformation, i.e., if it consists of a rotation and a translation,
the points are transformed as follows:
    (qx, qy, qz, 1)^T = [ R  t ; 0 0 0  1 ] · (px, py, pz, 1)^T = ( R · p + t , 1 )^T,   p = (px, py, pz)^T
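Applying such a matrix to a point reduces to one matrix–vector product; a minimal plain-Python sketch (row-major 3 × 4 nested lists instead of HALCON’s tuple representation):

```python
def affine_trans_point_3d(hom_mat_3d, px, py, pz):
    """Apply a 3x4 affine matrix (row-major; implicit last row [0, 0, 0, 1])."""
    # each output coordinate is one row of the matrix times (px, py, pz, 1)
    return tuple(r[0] * px + r[1] * py + r[2] * pz + r[3] for r in hom_mat_3d)
```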
Parameter
Result
ConvertPoseType returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception is
raised.
Parallelization Information
ConvertPoseType is reentrant and processed without parallelization.
Possible Predecessors
CreatePose, HomMat3dToPose, CameraCalibration, HandEyeCalibration
Possible Successors
WritePose
See also
CreatePose, GetPoseType, WritePose, ReadPose
Module
Foundation
HALCON 8.0.2
1096 CHAPTER 15. TOOLS
public HPose ( double transX, double transY, double transZ, double rotX,
double rotY, double rotZ, string orderOfTransform,
string orderOfRotation, string viewOfTransform)
Create a 3D pose.
CreatePose creates the 3D pose pose. A pose describes a rigid 3D transformation, i.e., a transformation
consisting of an arbitrary translation and rotation, with 6 parameters: transX, transY, and transZ specify the
translation along the x-, y-, and z-axis, respectively, while rotX, rotY, and rotZ describe the rotation.
3D poses are typically used in two ways: First, to describe the position and orientation of one coordinate system
relative to another (e.g., the pose of a part’s coordinate system relative to the camera coordinate system - in short:
the pose of the part relative to the camera) and secondly, to describe how coordinates can be transformed between
two coordinate systems (e.g., to transform points from part coordinates into camera coordinates).
When ’gba’ (the default) is passed in orderOfRotation, the rotation is composed as Rgba = Rx(rotX) ·
Ry(rotY) · Rz(rotZ). Please note that you can “read” this chain in two ways: If you start from the right, the
rotations are always performed relative to the global (i.e., fixed or “old”) coordinate system. Thus, Rgba can be
read as follows: First rotate around the z-axis, then around the “old” y-axis, and finally around the “old” x-axis. In
contrast, if you read from left to right, the rotations are performed relative to the local (i.e., “new”) coordinate
system. Then, Rgba corresponds to the following: First rotate around the x-axis, then around the “new” y-axis, and
finally around the “new(est)” z-axis.
Reading Rgba from right to left corresponds to the following sequence of operator calls:
hom_mat3d_identity (HomMat3DIdent)
hom_mat3d_rotate (HomMat3DIdent, RotZ, ’z’, 0, 0, 0, HomMat3DRotZ)
hom_mat3d_rotate (HomMat3DRotZ, RotY, ’y’, 0, 0, 0, HomMat3DRotYZ)
hom_mat3d_rotate (HomMat3DRotYZ, RotX, ’x’, 0, 0, 0, HomMat3DXYZ)
In contrast, reading from left to right corresponds to the following operator sequence:
hom_mat3d_identity (HomMat3DIdent)
hom_mat3d_rotate_local (HomMat3DIdent, RotX, ’x’, 0, 0, 0,
HomMat3DRotX)
hom_mat3d_rotate_local (HomMat3DRotX, RotY, ’y’, 0, 0, 0,
HomMat3DRotXY)
hom_mat3d_rotate_local (HomMat3DRotXY, RotZ, ’z’, 0, 0, 0, HomMat3DXYZ)
When passing ’abg’ in orderOfRotation, the order of the three rotations is reversed, i.e., the rotation corresponds
to the chain Rabg = Rz(rotZ) · Ry(rotY) · Rx(rotX).
If you pass ’rodriguez’ in orderOfRotation, the rotation parameters rotX, rotY, and rotZ are interpreted
as the x-, y-, and z-component of the so-called Rodriguez rotation vector. The direction of the vector defines the
(arbitrary) axis of rotation. The length of the vector usually defines the rotation angle with positive orientation.
Here, a variation of the Rodriguez vector is used, where the length of the vector defines the tangent of half the
rotation angle:
    R_rodriguez = rotation around the axis (rotX, rotY, rotZ)^T by the angle 2 · arctan( sqrt(rotX^2 + rotY^2 + rotZ^2) )
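This Rodriguez variant can be sketched in plain Python (an illustration combining the angle formula above with the axis-angle rotation matrix Ra = u·u^T + cos(phi)·(I − u·u^T) + sin(phi)·S used by HomMat3dRotate; not the HALCON API):

```python
import math

def rodriguez_to_rotation(rx, ry, rz):
    """3x3 rotation matrix from a Rodriguez vector whose length is tan(angle/2)."""
    n = math.sqrt(rx * rx + ry * ry + rz * rz)
    if n == 0.0:
        return [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    phi = 2.0 * math.atan(n)              # length encodes tan of half the angle
    ux, uy, uz = rx / n, ry / n, rz / n   # unit rotation axis
    c, s = math.cos(phi), math.sin(phi)
    uut = [[ux * ux, ux * uy, ux * uz],
           [uy * ux, uy * uy, uy * uz],
           [uz * ux, uz * uy, uz * uz]]
    S = [[0, -uz, uy], [uz, 0, -ux], [-uy, ux, 0]]   # cross-product matrix
    I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    return [[uut[i][j] + c * (I[i][j] - uut[i][j]) + s * S[i][j]
             for j in range(3)] for i in range(3)]
```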
    H_pose = [ R(rotX, rotY, rotZ)  t ; 0 0 0  1 ],   t = (transX, transY, transZ)^T
           = [ I  t ; 0 0 0  1 ] · [ R(rotX, rotY, rotZ)  0 ; 0 0 0  1 ] = H(t) · H(R)
Transformation of coordinates
The following equation describes how a point can be transformed from coordinate system 1 into coordinate sys-
tem 2 with a pose, or more exactly, with the corresponding homogeneous transformation matrix 2 H1 (input and
output points as homogeneous vectors, see also AffineTransPoint3d). Note that to transform points from
coordinate system 1 into system 2, you use the transformation matrix that describes the pose of coordinate system
1 relative to system 2.
    (p2, 1)^T = 2H1 · (p1, 1)^T = ( R(rotX, rotY, rotZ) · p1 + t , 1 )^T,   t = (transX, transY, transZ)^T

If ’Rp-T’ is passed in orderOfTransform, the point is first translated by −t and then rotated:

    H_Rp-T = [ R(rotX, rotY, rotZ)  0 ; 0 0 0  1 ] · [ I  −t ; 0 0 0  1 ] = H(R) · H(−t)
If you select ’coordinate_system’ for viewOfTransform, the sequence of transformations remains constant,
but the rotation angles are negated. Please note that, contrary to its name, this is not equivalent to transforming a
coordinate system!
    H_coordinate_system = [ I  t ; 0 0 0  1 ] · [ R(−rotX, −rotY, −rotZ)  0 ; 0 0 0  1 ],   t = (transX, transY, transZ)^T
You can convert poses into other representation types using ConvertPoseType and query the type using
GetPoseType.
Parameter
Result
CreatePose returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception is
raised.
Parallelization Information
CreatePose is reentrant and processed without parallelization.
Possible Successors
PoseToHomMat3d, WritePose, CameraCalibration, HandEyeCalibration
Alternatives
ReadPose, HomMat3dToPose
See also
HomMat3dRotate, HomMat3dTranslate, ConvertPoseType, GetPoseType,
HomMat3dToPose, PoseToHomMat3d, WritePose, ReadPose
Module
Foundation
For example, if the two input matrices correspond to rigid transformations, i.e., to transformations consisting of a
rotation and a translation, the resulting matrix is calculated as follows:
    homMat3DCompose = [ Rl  tl ; 0 0 0  1 ] · [ Rr  tr ; 0 0 0  1 ] = [ Rl · Rr   Rl · tr + tl ; 0 0 0  1 ]
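The composition rule can be sketched directly (3 × 4 row-major nested lists with the implicit last row [0, 0, 0, 1]; an illustration, not the HALCON API):

```python
def hom_mat3d_compose(left, right):
    """Matrix product of two 3x4 affine matrices (implicit last row [0, 0, 0, 1])."""
    # restore the constant last row so an ordinary 4x4 product can be formed
    L = left + [[0.0, 0.0, 0.0, 1.0]]
    R = right + [[0.0, 0.0, 0.0, 1.0]]
    # the product's last row is again [0, 0, 0, 1], so only 3 rows are returned
    return [[sum(L[i][k] * R[k][j] for k in range(4)) for j in range(4)]
            for i in range(3)]
```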
Parameter
. homMat3DLeft (input_control) . . . . . . . . . . . . . . . . . . hom_mat3d-array ; HHomMat3D / HTuple (double)
Left input transformation matrix.
. homMat3DRight (input_control) . . . . . . . . . . . . . . . . .hom_mat3d-array ; HHomMat3D / HTuple (double)
Right input transformation matrix.
. homMat3DCompose (output_control) . . . . . . . . . . . . . hom_mat3d-array ; HHomMat3D / HTuple (double)
Output transformation matrix.
Result
If the parameters are valid, the operator HomMat3dCompose returns 2 (H_MSG_TRUE). If necessary, an
exception is raised.
Parallelization Information
HomMat3dCompose is reentrant and processed without parallelization.
Possible Predecessors
HomMat3dCompose, HomMat3dTranslate, HomMat3dTranslateLocal, HomMat3dScale,
HomMat3dScaleLocal, HomMat3dRotate, HomMat3dRotateLocal, PoseToHomMat3d
Possible Successors
HomMat3dTranslate, HomMat3dTranslateLocal, HomMat3dScale, HomMat3dScaleLocal,
HomMat3dRotate, HomMat3dRotateLocal
See also
AffineTransPoint3d, HomMat3dIdentity, HomMat3dRotate, HomMat3dTranslate,
PoseToHomMat3d, HomMat3dToPose
Module
Foundation
public HHomMat3D ( )
void HHomMat3D.HomMat3dIdentity ( )
Generate the homogeneous transformation matrix of the identical 3D transformation.
HomMat3dIdentity generates the homogeneous transformation matrix homMat3DIdentity describing the
identical 3D transformation:
    homMat3DIdentity = ( 1 0 0 0 )
                       ( 0 1 0 0 )
                       ( 0 0 1 0 )
                       ( 0 0 0 1 )
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. Thus, homMat3DIdentity is stored as the
tuple [1,0,0,0,0,1,0,0,0,0,1,0].
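The storage convention can be illustrated as follows (nested lists standing in for the matrix; an illustration, not the HALCON API):

```python
def hom_mat3d_to_tuple(m):
    """Flatten a 3x4 matrix row-by-row; the constant last row [0, 0, 0, 1] is omitted."""
    return [v for row in m for v in row]

identity3d = [[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]]
# hom_mat3d_to_tuple(identity3d) yields the 12-element tuple from the text
```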
Parameter
. homMat3DIdentity (output_control) . . . . . . . . . . . hom_mat3d-array ; HHomMat3D / HTuple (double)
Transformation matrix.
Result
HomMat3dIdentity always returns 2 (H_MSG_TRUE).
Parallelization Information
HomMat3dIdentity is reentrant and processed without parallelization.
Possible Successors
HomMat3dTranslate, HomMat3dTranslateLocal, HomMat3dScale, HomMat3dScaleLocal,
HomMat3dRotate, HomMat3dRotateLocal
Alternatives
PoseToHomMat3d
Module
Foundation
HHomMat3D HHomMat3D.HomMat3dInvert ( )
Invert a homogeneous 3D transformation matrix.
HomMat3dInvert inverts the homogeneous 3D transformation matrix given by homMat3D. The resulting
matrix is returned in homMat3DInvert.
Parameter
axis = ’x’:

    homMat3DRotate = [ Rx  0 ; 0 0 0  1 ] · homMat3D

    Rx = ( 1      0          0      )
         ( 0  cos(phi)  −sin(phi)  )
         ( 0  sin(phi)   cos(phi)  )

axis = ’y’:

    homMat3DRotate = [ Ry  0 ; 0 0 0  1 ] · homMat3D

    Ry = (  cos(phi)  0  sin(phi) )
         (     0      1     0     )
         ( −sin(phi)  0  cos(phi) )

axis = ’z’:

    homMat3DRotate = [ Rz  0 ; 0 0 0  1 ] · homMat3D

    Rz = ( cos(phi)  −sin(phi)  0 )
         ( sin(phi)   cos(phi)  0 )
         (    0          0      1 )

axis = [x,y,z]:

    homMat3DRotate = [ Ra  0 ; 0 0 0  1 ] · homMat3D

    Ra = u · u^T + cos(phi) · (I − u · u^T) + sin(phi) · S

    u = axis / ||axis|| = (x’, y’, z’)^T     S = (  0   −z’   y’ )
                                                 (  z’   0   −x’ )
                                                 ( −y’   x’   0  )

with I the 3 × 3 identity matrix.
The point (px,py,pz) is the fixed point of the transformation, i.e., this point remains unchanged when transformed
using homMat3DRotate. To obtain this behavior, first a translation is added to the input transformation matrix
that moves the fixed point onto the origin of the global coordinate system. Then, the rotation is added, and finally
a translation that moves the fixed point back to its original position. This corresponds to the following chain of
transformations:
    homMat3DRotate = [ I  +p ; 0 0 0  1 ] · [ R  0 ; 0 0 0  1 ] · [ I  −p ; 0 0 0  1 ] · homMat3D,   p = (px, py, pz)^T
To perform the transformation in the local coordinate system, i.e., the one described by homMat3D, use
HomMat3dRotateLocal.
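The chain of translations and the rotation can be sketched for the z-axis case (plain Python with hypothetical helper names; an illustration of the chain above, not the HALCON implementation):

```python
import math

def translation(tx, ty, tz):
    """4x4 translation matrix."""
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def rot_z(phi):
    """4x4 rotation about the z-axis."""
    c, s = math.cos(phi), math.sin(phi)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def hom_mat3d_rotate_z(hom_mat_3d, phi, px, py, pz):
    """T(+p) . Rz(phi) . T(-p) . homMat3D: rotation leaving (px, py, pz) fixed."""
    chain = matmul(translation(px, py, pz),
                   matmul(rot_z(phi), translation(-px, -py, -pz)))
    return matmul(chain, hom_mat_3d)
```

Transforming the fixed point with the result leaves it unchanged, which is exactly the behavior described above.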
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. For example, the homogeneous matrix
ra rb rc td
re rf rg th
ri rj rk tl
0 0 0 1
is stored as the tuple [ra, rb, rc, td, re, rf, rg, th, ri, rj, rk, tl].
Parameter
. homMat3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; HHomMat3D / HTuple (double)
Input transformation matrix.
. phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; HTuple (double / int / long)
Rotation angle.
Default Value : 0.78
Suggested values : Phi ∈ {0.1, 0.2, 0.3, 0.4, 0.78, 1.57, 3.14}
Typical range of values : 0 ≤ Phi ≤ 6.28318530718
. axis (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; HTuple (string / double / int / long)
Axis to rotate around.
Default Value : "x"
Suggested values : Axis ∈ {"x", "y", "z"}
. px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.x ; HTuple (double / int / long)
Fixed point of the transformation (x coordinate).
Default Value : 0
Suggested values : Px ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.y ; HTuple (double / int / long)
Fixed point of the transformation (y coordinate).
Default Value : 0
Suggested values : Py ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. pz (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.z ; HTuple (double / int / long)
Fixed point of the transformation (z coordinate).
Default Value : 0
Suggested values : Pz ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. homMat3DRotate (output_control) . . . . . . . . . . . . . . hom_mat3d-array ; HHomMat3D / HTuple (double)
Output transformation matrix.
Result
If the parameters are valid, the operator HomMat3dRotate returns 2 (H_MSG_TRUE). If necessary, an
exception is raised.
Parallelization Information
HomMat3dRotate is reentrant and processed without parallelization.
Possible Predecessors
HomMat3dIdentity, HomMat3dTranslate, HomMat3dScale, HomMat3dRotate
Possible Successors
HomMat3dTranslate, HomMat3dScale, HomMat3dRotate
See also
HomMat3dInvert, HomMat3dIdentity, HomMat3dRotateLocal, PoseToHomMat3d,
HomMat3dToPose, HomMat3dCompose
Module
Foundation
axis = ’x’:

    homMat3DRotate = homMat3D · [ Rx  0 ; 0 0 0  1 ]

    Rx = ( 1      0          0      )
         ( 0  cos(phi)  −sin(phi)  )
         ( 0  sin(phi)   cos(phi)  )

axis = ’y’:

    homMat3DRotate = homMat3D · [ Ry  0 ; 0 0 0  1 ]

    Ry = (  cos(phi)  0  sin(phi) )
         (     0      1     0     )
         ( −sin(phi)  0  cos(phi) )

axis = ’z’:

    homMat3DRotate = homMat3D · [ Rz  0 ; 0 0 0  1 ]

    Rz = ( cos(phi)  −sin(phi)  0 )
         ( sin(phi)   cos(phi)  0 )
         (    0          0      1 )

axis = [x,y,z]:

    homMat3DRotate = homMat3D · [ Ra  0 ; 0 0 0  1 ]

    Ra = u · u^T + cos(phi) · (I − u · u^T) + sin(phi) · S

    u = axis / ||axis|| = (x’, y’, z’)^T     S = (  0   −z’   y’ )
                                                 (  z’   0   −x’ )
                                                 ( −y’   x’   0  )

with I the 3 × 3 identity matrix.
The fixed point of the transformation is the origin of the local coordinate system, i.e., this point remains unchanged
when transformed using homMat3DRotate.
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. For example, the homogeneous matrix
ra rb rc td
re rf rg th
ri rj rk tl
0 0 0 1
is stored as the tuple [ra, rb, rc, td, re, rf, rg, th, ri, rj, rk, tl].
Parameter
. homMat3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; HHomMat3D / HTuple (double)
Input transformation matrix.
. phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; HTuple (double / int / long)
Rotation angle.
Default Value : 0.78
Suggested values : Phi ∈ {0.1, 0.2, 0.3, 0.4, 0.78, 1.57, 3.14}
Typical range of values : 0 ≤ Phi ≤ 6.28318530718
. axis (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; HTuple (string / double / int / long)
Axis to rotate around.
Default Value : "x"
Suggested values : Axis ∈ {"x", "y", "z"}
. homMat3DRotate (output_control) . . . . . . . . . . . . . . hom_mat3d-array ; HHomMat3D / HTuple (double)
Output transformation matrix.
Result
If the parameters are valid, the operator HomMat3dRotateLocal returns 2 (H_MSG_TRUE). If necessary, an
exception is raised.
Parallelization Information
HomMat3dRotateLocal is reentrant and processed without parallelization.
Possible Predecessors
HomMat3dIdentity, HomMat3dTranslateLocal, HomMat3dScaleLocal,
HomMat3dRotateLocal
Possible Successors
HomMat3dTranslateLocal, HomMat3dScaleLocal, HomMat3dRotateLocal
See also
HomMat3dInvert, HomMat3dIdentity, HomMat3dRotate, PoseToHomMat3d,
HomMat3dToPose, HomMat3dCompose
Module
Foundation
The point (px,py,pz) is the fixed point of the transformation, i.e., this point remains unchanged when transformed
using homMat3DScale. To obtain this behavior, first a translation is added to the input transformation matrix
that moves the fixed point onto the origin of the global coordinate system. Then, the scaling is added, and finally
a translation that moves the fixed point back to its original position. This corresponds to the following chain of
transformations:
    homMat3DScale = [ I  +p ; 0 0 0  1 ] · [ S  0 ; 0 0 0  1 ] · [ I  −p ; 0 0 0  1 ] · homMat3D,   p = (px, py, pz)^T
To perform the transformation in the local coordinate system, i.e., the one described by homMat3D, use
HomMat3dScaleLocal.
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. For example, the homogeneous matrix
ra rb rc td
re rf rg th
ri rj rk tl
0 0 0 1
is stored as the tuple [ra, rb, rc, td, re, rf, rg, th, ri, rj, rk, tl].
Parameter
The fixed point of the transformation is the origin of the local coordinate system, i.e., this point remains unchanged
when transformed using homMat3DScale.
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. For example, the homogeneous matrix
ra rb rc td
re rf rg th
ri rj rk tl
0 0 0 1
is stored as the tuple [ra, rb, rc, td, re, rf, rg, th, ri, rj, rk, tl].
Parameter
Possible Predecessors
HomMat3dIdentity, HomMat3dTranslateLocal, HomMat3dScaleLocal,
HomMat3dRotateLocal
Possible Successors
HomMat3dTranslateLocal, HomMat3dScaleLocal, HomMat3dRotateLocal
See also
HomMat3dInvert, HomMat3dIdentity, HomMat3dScale, PoseToHomMat3d,
HomMat3dToPose, HomMat3dCompose
Module
Foundation
HPose HHomMat3D.HomMat3dToPose ( )
Convert a homogeneous transformation matrix into a 3D pose.
HomMat3dToPose converts a homogeneous transformation matrix into the corresponding 3D pose with type
code 0. For details about 3D poses and the corresponding transformation matrices please refer to CreatePose.
A typical application of HomMat3dToPose is that a 3D pose was converted into a homogeneous transformation
matrix to further transform it, e.g., with HomMat3dRotate or HomMat3dTranslate, and now must be
converted back into a pose to use it as input for operators like ImagePointsToWorldPlane.
Parameter
. homMat3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; HHomMat3D / HTuple (double)
Homogeneous transformation matrix.
. pose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; HPose / HTuple (double / int / long)
Equivalent 3D pose.
Number of elements : 7
Example (Syntax: HDevelop)
Result
HomMat3dToPose returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception is
raised.
Parallelization Information
HomMat3dToPose is reentrant and processed without parallelization.
Possible Predecessors
HomMat3dRotate, HomMat3dTranslate, HomMat3dInvert
Possible Successors
CameraCalibration, WritePose, DispCaltab, SimCaltab
See also
CreatePose, CameraCalibration, DispCaltab, SimCaltab, WritePose, ReadPose,
PoseToHomMat3d, Project3dPoint, GetLineOfSight, HomMat3dRotate,
HomMat3dTranslate, HomMat3dInvert, AffineTransPoint3d
Module
Foundation
To perform the transformation in the local coordinate system, i.e., the one described by homMat3D, use
HomMat3dTranslateLocal.
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. For example, the homogeneous matrix
ra rb rc td
re rf rg th
ri rj rk tl
0 0 0 1
is stored as the tuple [ra, rb, rc, td, re, rf, rg, th, ri, rj, rk, tl].
Parameter
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. For example, the homogeneous matrix
ra rb rc td
re rf rg th
ri rj rk tl
0 0 0 1
is stored as the tuple [ra, rb, rc, td, re, rf, rg, th, ri, rj, rk, tl].
Parameter
. homMat3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; HHomMat3D / HTuple (double)
Input transformation matrix.
. tx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.x ; HTuple (double / int / long)
Translation along the x-axis.
Default Value : 64
Suggested values : Tx ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. ty (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.y ; HTuple (double / int / long)
Translation along the y-axis.
Default Value : 64
Suggested values : Ty ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
HHomMat3D HPose.PoseToHomMat3d ( )
Convert a 3D pose into a homogeneous transformation matrix.
PoseToHomMat3d converts a 3D pose pose, e.g., the exterior camera parameters, into the equivalent homo-
geneous transformation matrix homMat3D. For details about 3D poses and the corresponding transformation
matrices please refer to CreatePose.
A typical application of PoseToHomMat3d is that you want to further transform the pose, e.g., rotate or translate
it using HomMat3dRotate or HomMat3dTranslate. In case of the exterior camera parameters, this can be
necessary if the calibration plate cannot be placed such that its coordinate system coincides with the desired world
coordinate system.
Parameter
. pose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; HPose / HTuple (double / int / long)
3D pose.
Number of elements : 7
. homMat3D (output_control) . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; HHomMat3D / HTuple (double)
Equivalent homogeneous transformation matrix.
Example (Syntax: HDevelop)
pose_to_hom_mat3d(FinalPose, cam_H_cal)
* rotate it by 90 degrees around its y-axis to obtain a world coordinate system
* whose y- and z-axes lie in the plane of the calibration plate while the
* x-axis points ’upwards’: cam_H_w = cam_H_cal * RotY(90)
hom_mat3d_identity(HomMat3DIdent)
hom_mat3d_rotate(HomMat3DIdent, rad(90), ’y’, 0, 0, 0,
HomMat3DRotateY)
hom_mat3d_compose(cam_H_cal, HomMat3DRotateY, cam_H_w)
Result
PoseToHomMat3d returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception is
raised.
Parallelization Information
PoseToHomMat3d is reentrant and processed without parallelization.
Possible Predecessors
CameraCalibration, ReadPose
Possible Successors
AffineTransPoint3d, HomMat3dInvert, HomMat3dTranslate, HomMat3dRotate,
HomMat3dToPose
See also
CreatePose, CameraCalibration, WritePose, ReadPose, HomMat3dToPose,
Project3dPoint, GetLineOfSight, HomMat3dRotate, HomMat3dTranslate,
HomMat3dInvert, AffineTransPoint3d
Module
Foundation
Parameter
A typical application of this operator arises when defining a world coordinate system by placing the standard
calibration plate on the plane of measurements. In this case, the exterior camera parameters returned by
CameraCalibration correspond to a coordinate system that lies above the measurement plane, because the
coordinate system of the calibration plate is located on its surface and the plate has a certain thickness. To correct
the pose, call SetOriginPose with the translation vector (0,0,D), where D is the thickness of the calibration
plate.
Parameter
. poseIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; HPose / HTuple (double / int / long)
Original 3D pose.
Number of elements : 7
. DX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Translation of the origin in x-direction.
Default Value : 0
. DY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Translation of the origin in y-direction.
Default Value : 0
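On the level of homogeneous matrices, this thickness correction amounts to translating the pose origin along the local z-axis. A minimal Python sketch of that effect (illustrative only, not HALCON syntax; the pose is written as a row-major 3x4 matrix [R | t], and the thickness value is purely hypothetical):

```python
def translate_origin_local_z(pose_matrix, d):
    # Shift the pose origin by d along the local z-axis (third column of the
    # rotation part), mimicking a translation vector (0, 0, d) as used with
    # SetOriginPose; pose_matrix is a row-major 3x4 matrix [R | t].
    return [[r[0], r[1], r[2], r[3] + r[2] * d] for r in pose_matrix]

# Camera 0.5 m above the plate (identity rotation); 0.00063 m is a purely
# illustrative plate thickness.
cam_H_cal = [[1.0, 0.0, 0.0, 0.0],
             [0.0, 1.0, 0.0, 0.0],
             [0.0, 0.0, 1.0, 0.5]]
cam_H_plane = translate_origin_local_z(cam_H_cal, 0.00063)
print(cam_H_plane[2][3])  # 0.5 m plus the plate thickness
```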
Parameter
read_image(Image1, ’calib-01’)
read_image(Image2, ’calib-02’)
read_image(Image3, ’calib-03’)
* find calibration pattern
find_caltab(Image1, Caltab1, ’caltab.descr’, 3, 112, 5)
find_caltab(Image2, Caltab2, ’caltab.descr’, 3, 112, 5)
find_caltab(Image3, Caltab3, ’caltab.descr’, 3, 112, 5)
* find calibration marks and start poses
StartCamPar := [0.008, 0.0, 0.000011, 0.000011, 384, 288, 768, 576]
find_marks_and_pose(Image1, Caltab1, ’caltab.descr’, StartCamPar,
128, 10, 18, 0.9, 15.0, 100.0, RCoord1, CCoord1,
StartPose1)
find_marks_and_pose(Image2, Caltab2, ’caltab.descr’, StartCamPar,
128, 10, 18, 0.9, 15.0, 100.0, RCoord2, CCoord2,
StartPose2)
find_marks_and_pose(Image3, Caltab3, ’caltab.descr’, StartCamPar,
128, 10, 18, 0.9, 15.0, 100.0, RCoord3, CCoord3,
StartPose3)
* read 3D positions of calibration marks
caltab_points(’caltab.descr’, NX, NY, NZ)
* camera calibration
camera_calibration(NX, NY, NZ, [RCoord1, RCoord2, RCoord3],
[CCoord1, CCoord2, CCoord3], StartCamPar,
[StartPose1, StartPose2, StartPose3], ’all’,
CamParam, NFinalPose, Errors)
* write exterior camera parameters of first calibration image
write_pose(NFinalPose[0:6], ’campose.dat’)
Result
WritePose returns 2 (H_MSG_TRUE) if all parameter values are correct and the file has been written successfully.
If necessary, an exception is raised.
Parallelization Information
WritePose is local and processed completely exclusively without parallelization.
Possible Predecessors
CameraCalibration, HomMat3dToPose
See also
CreatePose, FindMarksAndPose, CameraCalibration, DispCaltab, SimCaltab,
ReadPose, PoseToHomMat3d, HomMat3dToPose
Module
Foundation
15.3 Background-Estimator
Parallelization Information
CloseAllBgEsti is local and processed completely exclusively without parallelization.
Alternatives
CloseBgEsti
See also
CreateBgEsti
Module
Foundation
/* read Init-Image: */
read_image(InitImage,’Init_Image’)
/* initialize BgEsti-Dataset with
fixed gains and threshold adaption: */
create_bg_esti(InitImage,0.7,0.7,’fixed’,0.002,0.02,
’on’,7,10,3.25,15.0,BgEstiHandle)
/* read the next image in sequence: */
read_image(Image1,’Image_1’)
/* estimate the Background: */
run_bg_esti(Image1,Region1,BgEstiHandle)
/* display the foreground region: */
disp_region(Region1,WindowHandle)
/* read the next image in sequence: */
read_image(Image2,’Image_2’)
/* estimate the Background: */
run_bg_esti(Image2,Region2,BgEstiHandle)
/* display the foreground region: */
disp_region(Region2,WindowHandle)
/* etc. */
/* - end of background estimation - */
/* close the dataset: */
close_bg_esti(BgEstiHandle)
Result
CloseBgEsti returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
CloseBgEsti is local and processed completely exclusively without parallelization.
Possible Predecessors
RunBgEsti
See also
CreateBgEsti
Module
Foundation
time constant for the exp-function that raises the threshold in case of a foreground estimation of the pixel. That
means the threshold is raised in regions where movement is detected in the foreground. This way, larger changes
in illumination are tolerated once the background becomes visible again. The main reason for increasing this
tolerance is that illumination changes cannot be predicted while the background is hidden; therefore, no adaptation
of the estimated background image is possible during that time.
Attention
If gainMode was set to ’frame’, the run-time can be extremely long for large values of gain1 or gain2, because
the values for the gains’ table are determined by a simple binary search.
Parameter
/* read Init-Image: */
read_image(InitImage,’Init_Image’)
/* initialize the first BgEsti-Dataset with
fixed gains and threshold adaption: */
create_bg_esti(InitImage,0.7,0.7,’fixed’,0.002,0.02,
’on’,7.0,10,3.25,15.0,BgEstiHandle1)
/* initialize the second BgEsti-Dataset with
frame-oriented gains and fixed threshold: */
create_bg_esti(InitImage,0.7,0.7,’frame’,30.0,4.0,
’off’,9.0,10,3.25,15.0,BgEstiHandle2)
Result
CreateBgEsti returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
CreateBgEsti is local and processed completely exclusively without parallelization.
Possible Successors
RunBgEsti
See also
SetBgEstiParams, CloseBgEsti
Module
Foundation
Parameter
/* read Init-Image:*/
read_image(InitImage,’Init_Image’)
/* initialize BgEsti-Dataset with
fixed gains and threshold adaption: */
create_bg_esti(InitImage,0.7,0.7,’fixed’,0.002,0.02,
’on’,7.0,10,3.25,15.0,BgEstiHandle)
/* read the next image in sequence: */
read_image(Image1,’Image_1’)
/* estimate the Background: */
run_bg_esti(Image1,Region1,BgEstiHandle)
/* display the foreground region: */
disp_region(Region1,WindowHandle)
/* read the next image in sequence: */
read_image(Image2,’Image_2’)
/* estimate the Background: */
run_bg_esti(Image2,Region2,BgEstiHandle)
/* display the foreground region: */
disp_region(Region2,WindowHandle)
/* etc. */
/* change only the gain parameter in dataset: */
get_bg_esti_params(BgEstiHandle,par1,par2,par3,par4,
par5,par6,par7,par8,par9,par10)
set_bg_esti_params(BgEstiHandle,par1,par2,par3,0.004,
0.08,par6,par7,par8,par9,par10)
/* read the next image in sequence: */
read_image(Image3,’Image_3’)
/* estimate the Background: */
run_bg_esti(Image3,Region3,BgEstiHandle)
/* display the foreground region: */
disp_region(Region3,WindowHandle)
/* etc. */
Result
GetBgEstiParams returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
GetBgEstiParams is reentrant and processed without parallelization.
Possible Predecessors
CreateBgEsti
Possible Successors
RunBgEsti
See also
SetBgEstiParams
Module
Foundation
/* read Init-Image: */
read_image(InitImage,’Init_Image’)
/* initialize BgEsti-Dataset with
fixed gains and threshold adaption: */
create_bg_esti(InitImage,0.7,0.7,’fixed’,0.002,0.02,
’on’,7,10,3.25,15.0,BgEstiHandle)
/* read the next image in sequence: */
read_image(Image1,’Image_1’)
/* estimate the Background: */
run_bg_esti(Image1,Region1,BgEstiHandle)
/* give the background image from the active dataset: */
give_bg_esti(BgImage,BgEstiHandle)
/* display the background image: */
disp_image(BgImage,WindowHandle)
Result
GiveBgEsti returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
GiveBgEsti is reentrant and processed without parallelization.
Possible Predecessors
RunBgEsti
Possible Successors
RunBgEsti, CreateBgEsti, UpdateBgEsti
See also
RunBgEsti, UpdateBgEsti, CreateBgEsti
Module
Foundation
The background estimation processes only single-channel images. Therefore the background has to be adapted
separately for every channel.
The background estimation should be used on half- or even quarter-sized images. For this, the input images (and
the initialization image!) have to be reduced using ZoomImageFactor. The advantage is a shorter run-time
on the one hand and a low-pass filtering on the other. The filtering eliminates high-frequency noise and results in a
more reliable estimation. As a result, the threshold (see CreateBgEsti) can be lowered. The foreground region
returned by RunBgEsti then has to be enlarged again for further processing.
Attention
The passed image (presentImage) must have the same type and size as the background image of the current
data set (initialized with CreateBgEsti).
Parameter
/* read Init-Image: */
read_image(InitImage,’Init_Image’)
/* initialize BgEsti-Dataset with
fixed gains and threshold adaption */
create_bg_esti(InitImage,0.7,0.7,’fixed’,0.002,0.02,
’on’,7,10,3.25,15.0,BgEstiHandle)
/* read the next image in sequence: */
read_image(Image1,’Image_1’)
/* estimate the Background: */
run_bg_esti(Image1,Region1,BgEstiHandle)
/* display the foreground region: */
disp_region(Region1,WindowHandle)
/* read the next image in sequence: */
read_image(Image2,’Image_2’)
/* estimate the Background: */
run_bg_esti(Image2,Region2,BgEstiHandle)
/* display the foreground region: */
disp_region(Region2,WindowHandle)
/* etc. */
Result
RunBgEsti returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
RunBgEsti is reentrant and processed without parallelization.
Possible Predecessors
CreateBgEsti, UpdateBgEsti
Possible Successors
RunBgEsti, GiveBgEsti, UpdateBgEsti
See also
SetBgEstiParams, CreateBgEsti, UpdateBgEsti, GiveBgEsti
Module
Foundation
/* read Init-Image:*/
read_image(InitImage,’Init_Image’)
/* initialize BgEsti-Dataset with
fixed gains and threshold adaption: */
create_bg_esti(InitImage,0.7,0.7,’fixed’,0.002,0.02,
’on’,7.0,10,3.25,15.0,BgEstiHandle)
/* read the next image in sequence: */
read_image(Image1,’Image_1’)
/* estimate the Background: */
run_bg_esti(Image1,Region1,BgEstiHandle)
/* display the foreground region: */
disp_region(Region1,WindowHandle)
/* read the next image in sequence: */
read_image(Image2,’Image_2’)
/* estimate the Background: */
run_bg_esti(Image2,Region2,BgEstiHandle)
/* display the foreground region: */
disp_region(Region2,WindowHandle)
/* etc. */
/* change parameter in dataset: */
set_bg_esti_params(BgEstiHandle,0.7,0.7,’fixed’,
0.004,0.08,’on’,9.0,10,3.25,20.0)
/* read the next image in sequence: */
read_image(Image3,’Image_3’)
/* estimate the Background: */
run_bg_esti(Image3,Region3,BgEstiHandle)
/* display the foreground region: */
disp_region(Region3,WindowHandle)
/* etc. */
Result
SetBgEstiParams returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
SetBgEstiParams is reentrant and processed without parallelization.
Possible Predecessors
CreateBgEsti
Possible Successors
RunBgEsti
See also
UpdateBgEsti
Module
Foundation
/* read Init-Image: */
read_image(InitImage,’Init_Image’)
/* initialize BgEsti-Dataset with
fixed gains and threshold adaption */
create_bg_esti(InitImage,0.7,0.7,’fixed’,0.002,0.02,
’on’,7,10,3.25,15.0,BgEstiHandle)
/* read the next image in sequence: */
read_image(Image1,’Image_1’)
/* estimate the Background: */
run_bg_esti(Image1,Region1,BgEstiHandle)
/* use the Region and the information of a knowledge base */
/* to calculate the UpDateRegion */
update_bg_esti(Image1,UpdateRegion,BgEstiHandle)
/* then read the next image in sequence: */
read_image(Image2,’Image_2’)
/* estimate the Background: */
run_bg_esti(Image2,Region2,BgEstiHandle)
/* etc. */
Result
UpdateBgEsti returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
UpdateBgEsti is reentrant and processed without parallelization.
Possible Predecessors
RunBgEsti
Possible Successors
RunBgEsti
See also
RunBgEsti, GiveBgEsti
Module
Foundation
15.4 Barcode
static void HOperatorSet.ClearAllBarCodeModels ( )
static void HMisc.ClearAllBarCodeModels ( )
Delete all bar code models and free the allocated memory.
The operator ClearAllBarCodeModels deletes all bar code models that were created by
CreateBarCodeModel. All memory used by the models is freed. After the operator call, all bar code
handles are invalid.
Attention
ClearAllBarCodeModels exists solely for the purpose of implementing the “reset program” functionality in
HDevelop. ClearAllBarCodeModels must not be used in any application.
Result
The operator ClearAllBarCodeModels returns the value 2 (H_MSG_TRUE) if all bar code models were
freed correctly. Otherwise, an exception will be raised.
Parallelization Information
ClearAllBarCodeModels is processed completely exclusively without parallelization.
Alternatives
ClearBarCodeModel
See also
CreateBarCodeModel, FindBarCode
Module
Bar Code
Note that PharmaCode can be read in forward and in backward direction, both yielding a valid result. Therefore,
both strings are returned in decodedDataStrings, concatenated into a single string and separated by a comma.
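Splitting such a combined result back into the two readings is then a matter of splitting at the comma; a small Python sketch (illustrative only, not HALCON syntax, with made-up digit strings):

```python
def pharmacode_readings(decoded_string):
    # A PharmaCode result holds the forward and the backward reading,
    # concatenated and separated by a comma; return them separately.
    forward, backward = decoded_string.split(",")
    return forward, backward

print(pharmacode_readings("131071,114687"))
# ('131071', '114687')
```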
Parameter
Access iconic objects that were created during the search or decoding of bar code symbols.
With the operator GetBarCodeObject, iconic objects created during the last call of the operator
FindBarCode can be accessed. Besides the name of the object (objectName), the bar code model
(barCodeHandle) must be passed to GetBarCodeObject. In addition, an index to a single decoded symbol
or a single symbol candidate must be passed in candidateHandle. Alternatively, candidateHandle can be
set to ’all’, in which case all objects of the decoded symbols or symbol candidates are returned.
Setting objectName to ’symbol_regions’ will return the regions of successfully decoded symbols. When choosing
’all’ as candidateHandle, the regions of all decoded symbols are retrieved. The order of the returned objects
is the same as in FindBarCode. If there is a total of n decoded symbols, candidateHandle can be chosen
between 0 and (n-1) to get the region of the respective decoded symbol.
Setting objectName to ’candidate_regions’ will return regions of potential bar codes. If there is a total of n
decoded symbols out of a total of m candidates, candidateHandle can be chosen between 0 and (m-1).
With candidateHandle between 0 and (n-1), the originally segmented region of the respective decoded symbol
is retrieved. With candidateHandle between n and (m-1), the region of a potential but undecodable symbol
is returned. In addition, candidateHandle can be set to ’all’ to retrieve all candidate regions at the same time.
Setting objectName to ’scanlines_all’ or ’scanlines_valid’ will return XLD contours representing the particular
detected bars in the scanlines applied to the candidate regions. ’scanlines_all’ represents all scanlines that
FindBarCode would place in order to decode a bar code. ’scanlines_valid’ represents only those scanlines that
could be successfully decoded. For single-row bar codes, there will be at least one ’scanlines_valid’ if the symbol
was successfully decoded, and none if it was not decoded. For stacked bar codes (e.g.
’RSS-14 Stacked’ and ’RSS Expanded Stacked’) this rule applies similarly, but on a per-symbol-row basis rather
than per symbol. Note that GetBarCodeObject returns all XLD contours merged into a single array of XLDs,
and hence there is no way to identify the contours corresponding to separate scanlines. Furthermore, if ’all’ is used
as candidateHandle, the output object will contain the XLD contours of all symbols, in which case the contours
corresponding to separate symbols cannot be identified either. However, the contours can still be used
for visualization purposes.
Parameter
’meas_thresh’: Threshold for the detection of edges in the bar code region.
’max_diff_orient’: Maximal difference in the orientation of edges in a bar code region. The difference of the
orientation angles, given in degrees, refers to neighboring pixels.
Further details on the above parameters can be found with the description of SetBarCodeParam operator.
Parameter
Get the alphanumerical results that were accumulated during the decoding of bar code symbols.
The operator GetBarCodeResult provides access to the alphanumeric results of the find and decode process.
To access a result, the handle of the bar code model (barCodeHandle) and the index of the resulting symbol
(candidateHandle) must be passed. candidateHandle refers to the results in the same order as
returned by the operator FindBarCode. candidateHandle can take values from 0 to (n-1), where n is the
total number of successfully decoded symbols. Alternatively, candidateHandle can be set to ’all’ if all results
are desired. The option ’all’ can be chosen only if each single result is single-valued.
When resultName is set to ’decoded_strings’, the decoded result is returned as a string in a human-readable
format. This decoded string can be returned for a single result, i.e., candidateHandle is, for example, 0, or for
all results simultaneously, i.e., candidateHandle is set to ’all’. Note that the decoded string comprises only
data characters. Start/stop characters are excluded, but can be referred to via ’decoded_reference’. For codes
with a facultative check character, the settings determine whether the check character is returned or not. When
’check_char’ is set to the default value ’absent’, the decoded string contains the check character as a normal data
character. When ’check_char’ is set to ’present’ and the check character is correct, it is omitted from the string.
If the check character is wrong, an empty string is returned.
When choosing ’decoded_reference’ as resultName, the underlying decoded reference data is returned. It
comprises all original characters of the symbol, i.e., data characters, potential start or stop characters, and check
characters if present. For codes taking only numeric data, like, e.g., the EAN/UPC codes, the RSS-14 and RSS Limited
codes, or the 2/5 codes, the decoded reference data takes the same values as the decoded string data, including check
characters. For codes with alphanumeric data, like, e.g., Code 39 or Code 128, the decoded reference data are
the indices into the respective decoding table. For RSS Expanded and RSS Expanded Stacked, the reference values
are the ASCII codes of the decoded data, where the special character FNC1 appears with value 10. Furthermore,
for all codes of the RSS family, the first reference value represents a linkage flag with value 1 if the flag is set
and 0 otherwise. As the decoded reference is a tuple of whole numbers, it can only be queried for a single result,
meaning that candidateHandle has to be the index of the corresponding decoded symbol.
When resultName is set to ’composite_strings’ or ’composite_reference’, then the decoded string or the
reference data of an RSS Composite component is returned, respectively. For further details see the description of the
parameter ’composite_code’ of SetBarCodeParam.
When resultName is set to ’orientation’, the orientation for the specified result is returned. The ’orientation’ of
a bar code is defined as the angle between its reading direction and the horizontal image axis. The angle is positive
in counterclockwise direction and is given in degrees. It can be in the range of [-180.0 . . . 180.0] degrees. Note
that the reading direction is perpendicular to the bars of the bar code. A single angle is returned when only one
result is specified, e.g., by entering 0 for candidateHandle. Otherwise, when candidateHandle is set to
’all’, a tuple containing the angles of all results is returned.
Parameter
’element_size_min’: Minimal size of bar code elements, i.e. the minimal width of bars and spaces. For small bar
codes the value should be reduced to 1.5. In the case of huge bar codes the value should be increased, which
results in a shorter execution time and fewer candidates.
Typical values: [1.5 . . . 10.0]
Default: 2.0
’element_size_max’: Maximal size of bar code elements, i.e., the maximal width of bars and spaces. The value of
’element_size_max’ should be chosen low enough that two neighboring bar codes are not fused into a single
one. On the other hand, the value should be sufficiently high in order to find the complete bar code region.
Typical values: [4.0 . . . 60.0]
Default: 8.0
’element_height_min’: Minimal bar code height. The default value of this parameter is -1, meaning that the bar
code reader automatically derives a reasonable height from the other parameters. Only for very flat or very
high bar codes can a manual adjustment of this parameter be necessary. In the case of a bar code with a height
of less than 16 pixels, the respective height should be set by the user. Note that the minimal value is 8 pixels.
If the bar code is very high, i.e., 70 pixels and more, manually adjusting to the respective height can lead to a
speed-up of the subsequent finding and reading operation.
Typical values: [-1, 8 . . . 64]
Default: -1
’orientation’: Expected bar code orientation. A potential (candidate) bar code contains bars with similar
orientation. The ’orientation’ and ’orientation_tol’ parameters specify the range [’orientation’ -
’orientation_tol’, ’orientation’ + ’orientation_tol’]. FindBarCode processes a candidate bar code only
when the average orientation of its bars lies in this range. If the bar codes are expected to appear only in
certain orientations in the processed images, one can reduce the orientation range accordingly. This enables
an early rejection of false candidates and hence shorter execution times. This adjustment is useful for
images with a lot of texture, which includes fragments that tend to result in false bar code candidates.
The actual orientation angle of a bar code is defined as for GetBarCodeResult(...,’orientation’,...), with
the only difference that for the early rejection of false candidates the reading direction of the bar codes
is ignored, which restricts the relevant orientation values to the range [-90.0 . . . 90.0]. The only exception
to this rule is the PharmaCode symbology, which possesses a forward and a backward reading
direction at the same time: here, ’orientation’ can take values in the range [-180.0 . . . 180.0], and the decoded
result is unique, corresponding to just one reading direction.
Typical values: [-90.0 . . . 90.0]
Default: 0.0
’orientation_tol’: Orientation tolerance. See the explanation of the ’orientation’ parameter. As explained there,
relevant orientation values lie only in the range [-90.0 . . . 90.0], which means that with ’orientation_tol’ =
90 the whole range is spanned. Therefore, valid values for ’orientation_tol’ lie only in the range [0.0
. . . 90.0]. The default value 90.0 means that no restriction is imposed on the bar code candidates.
Typical values: [0.0 . . . 90.0]
Default: 90.0
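The acceptance test sketched above, which ignores the reading direction and therefore compares orientations modulo 180 degrees, might look like this in Python (illustrative only, not HALCON syntax):

```python
def orientation_accepted(bar_angle, orientation, orientation_tol):
    # Check whether the average bar orientation of a candidate lies within
    # [orientation - orientation_tol, orientation + orientation_tol].
    # All angles are in degrees; since the reading direction is ignored,
    # angles are compared modulo 180 degrees.
    diff = (bar_angle - orientation) % 180.0
    diff = min(diff, 180.0 - diff)  # smallest angular distance mod 180
    return diff <= orientation_tol

print(orientation_accepted(85.0, -85.0, 15.0))  # True: 85 and -85 differ by 10 mod 180
print(orientation_accepted(45.0, 0.0, 30.0))    # False: difference of 45 exceeds 30
```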
Appearance of the bar code in the image:
’meas_thresh’: The bar-space-sequence of a bar code is determined with a scanline measuring the position of the
edges. Finding these edges requires a threshold. ’meas_thresh’ defines this threshold which is a relative value
with respect to the dynamic range of the scanline pixels. In the case of disturbances in the bar code region or
a high noise level, the value of ’meas_thresh’ should be increased.
Typical values: [0.05 . . . 0.2]
Default: 0.05
’max_diff_orient’: A potential bar code region contains bars, and hence edges, with a similar orientation. The
value of ’max_diff_orient’ denotes the maximal difference of this orientation between adjacent pixels and is given
in degrees. If a bar code is of bad quality with jagged edges, ’max_diff_orient’ should be set to
larger values. If the bar code is of good quality, ’max_diff_orient’ can be set to smaller values, thus reducing
the number of potential but false bar code candidates.
Typical values: [2 . . . 20]
Default: 10
Bar code specific values:
’check_char’: For bar codes with a facultative check character, this parameter determines whether the check
character is taken into account or not. If the bar code has a check character, ’check_char’ should be set to ’present’
so that the check character is tested. In that case, a bar code result is returned only if the check sum is correct.
For ’check_char’ set to ’absent’, no check sum is computed and bar code results are returned as long as
they were successfully decoded. Bar codes with a facultative check character are, e.g., Code 39, Codabar, 25
Industrial, and 25 Interleaved.
Values: [’absent’, ’present’]
Default: ’absent’
’composite_code’: EAN.UPC bar codes can have an additional 2D composite code component appended. If
’composite_code’ is set to ’CC-A/B’, the composite component will be found and decoded. By default,
’composite_code’ is set to ’none’ and thus it is disabled. If the searched bar code symbol has no attached composite
component, just the result of the bar code itself is returned by FindBarCode. Composite codes are
supported only for bar codes of the RSS family.
Values: [’none’, ’CC-A/B’]
Default: ’none’
Parameter
. barCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . barcode ; HBarCode / HTuple (IntPtr)
Handle of the bar code model.
. genParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; HTuple (string)
Names of the generic parameters that shall be adjusted for finding and decoding bar codes.
Default Value : "element_size_max"
List of values : GenParamNames ∈ {"element_size_min", "element_size_max", "element_height_min",
"orientation", "orientation_tol", "meas_thresh", "max_diff_orient", "check_char", "composite_code"}
15.5 Calibration
static void HOperatorSet.CaltabPoints ( HTuple calTabDescrFile,
out HTuple x, out HTuple y, out HTuple z )
static void HMisc.CaltabPoints ( string calTabDescrFile, out HTuple x,
out HTuple y, out HTuple z )
Read the mark center points from the calibration plate description file.
CaltabPoints reads the mark center points from the calibration plate description file calTabDescrFile
(see GenCaltab) and returns their coordinates in x, y, and z. The mark center points are 3D coordinates in
the calibration plate coordinate system and describe the 3D model of the calibration plate. The calibration plate
coordinate system is located in the middle of the surface of the calibration plate; its z-axis points into the calibration
plate, its x-axis to the right, and its y-axis downwards.
The mark center points are typically used as input parameters for the operator CameraCalibration. This
operator projects the model points into the image, minimizes the distance between the projected points and the
observed 2D coordinates in the image (see FindMarksAndPose), and from this computes the exact values for
the interior and exterior camera parameters.
Parameter
. calTabDescrFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; HTuple (string)
File name of the calibration plate description.
Default Value : "caltab.descr"
List of values : CalTabDescrFile ∈ {"caltab.descr", "caltab_10mm.descr", "caltab_30mm.descr",
"caltab_100mm.descr", "caltab_200mm.descr"}
. x (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
X coordinates of the mark center points in the coordinate system of the calibration plate.
. y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Y coordinates of the mark center points in the coordinate system of the calibration plate.
. z (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Z coordinates of the mark center points in the coordinate system of the calibration plate.
Example (Syntax: HDevelop)
read_image(Image1, ’calib-01’)
* find calibration pattern
find_caltab(Image1, Caltab1, ’caltab.descr’, 3, 112, 5)
* find calibration marks and start poses
StartCamPar := [0.008, 0.0, 0.000011, 0.000011, 384, 288, 768, 576]
find_marks_and_pose(Image1,Caltab1,’caltab.descr’, StartCamPar,
Result
CaltabPoints returns 2 (H_MSG_TRUE) if all parameter values are correct and the file calTabDescrFile
has been read successfully. If necessary, an exception is raised.
Parallelization Information
CaltabPoints is reentrant and processed without parallelization.
Possible Successors
CameraCalibration
See also
FindCaltab, FindMarksAndPose, CameraCalibration, DispCaltab, SimCaltab,
Project3dPoint, GetLineOfSight, GenCaltab
Module
Foundation
Result
If the parameters are valid, the operator CamMatToCamPar returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Parallelization Information
CamMatToCamPar is reentrant and processed without parallelization.
Possible Predecessors
StationaryCameraSelfCalibration
See also
CameraCalibration, CamParToCamMat
Module
Calibration
Result
If the parameters are valid, the operator CamParToCamMat returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Parallelization Information
CamParToCamMat is reentrant and processed without parallelization.
Possible Predecessors
CameraCalibration
See also
StationaryCameraSelfCalibration, CamMatToCamPar
Module
Calibration
The point pw given in world coordinates is transformed into the point pc = (x, y, z)^T in camera coordinates by
the rigid transformation defined by the rotation matrix R and the translation vector t:

    ( pc )   ( R    t )   ( pw )
    (    ) = (        ) · (    )
    ( 1  )   ( 000  1 )   ( 1  )
Then, the point is projected into the image plane, i.e., onto the sensor chip.
For the modeling of this projection process, which is determined by the combination of camera, lens, and frame
grabber used, HALCON provides the following three 3D camera models:
For area scan cameras, the projection of the point pc that is given in camera coordinates into a (sub-)pixel [r,c]
in the image consists of the following steps: First, the point is projected into the image plane, i.e., onto the sensor
chip. If the underlying camera model is an area scan pinhole camera, i.e., if the focal length passed in camParam
is greater than 0, the projection is described by the following equations:
    pc = (x, y, z)^T

    u = Focus · x / z   and   v = Focus · y / z
In contrast, if the focal length is passed as 0 in camParam, the camera model of an area scan telecentric camera
is used, i.e., it is assumed that the optics of the lens of the camera performs a parallel projection. In this case, the
corresponding equations are:
    pc = (x, y, z)^T

    u = x   and   v = y
The image plane coordinates are then modified by the radial lens distortion:

    ũ = 2u / (1 + √(1 − 4κ(u² + v²)))   and   ṽ = 2v / (1 + √(1 − 4κ(u² + v²)))
Finally, the point is transformed from the image plane coordinate system into the image coordinate system, i.e.,
the pixel coordinate system:
    c = ũ / Sx + Cx   and   r = ṽ / Sy + Cy
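As a rough illustration, the three steps of the area scan pinhole projection described above (perspective projection, division-model distortion, and conversion to pixel coordinates) can be sketched in Python. The parameter values below are freely chosen example values, not results of a real calibration:

```python
import math

def project_pinhole(p_c, focus, kappa, sx, sy, cx, cy):
    """Project a 3D point given in camera coordinates (meters) to pixel
    coordinates [r, c], following the area scan pinhole model above."""
    x, y, z = p_c
    # Perspective projection onto the image plane
    u = focus * x / z
    v = focus * y / z
    # Division model for the radial lens distortion
    s = 1.0 + math.sqrt(1.0 - 4.0 * kappa * (u * u + v * v))
    u_t = 2.0 * u / s
    v_t = 2.0 * v / s
    # Image plane coordinates -> pixel coordinates
    c = u_t / sx + cx
    r = v_t / sy + cy
    return r, c

# With kappa = 0, the distortion step is the identity (u_t == u, v_t == v):
r, c = project_pinhole((0.01, 0.02, 0.5), focus=0.008, kappa=0.0,
                       sx=1.1e-5, sy=1.1e-5, cx=384.0, cy=288.0)
```

Setting kappa to a small nonzero value shifts the resulting pixel towards (for negative kappa) or away from (for positive kappa) the image center.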
For line scan cameras, the relative motion between the camera and the object must also be modeled. In HALCON,
the following assumptions are made for this motion:
The motion is described by the motion vector V = (Vx , Vy , Vz )T that must be given in [meter/scanline] in the
camera coordinate system. The motion vector describes the motion of the camera, assuming a fixed object. In fact,
this is equivalent to the assumption of a fixed camera with the object travelling along −V .
The camera coordinate system of line scan cameras is defined as follows: The origin of the coordinate system is the
center of projection. The z-axis is identical to the optical axis and directed so that the visible points have positive z
coordinates. The y-axis is perpendicular to the sensor line and to the z-axis. It is directed so that the motion vector
has a positive y-component. The x-axis is perpendicular to the y- and z-axis, so that the x-, y-, and z-axis form a
right-handed coordinate system.
As the camera moves over the object during the image acquisition, the camera coordinate system also moves
relative to the object, i.e., each image line is imaged from a different position. This means there would
be an individual pose for each image line. To simplify matters, in HALCON all transformations from world
coordinates into camera coordinates and vice versa are based on the pose of the first image line only. The motion
V is taken into account during the projection of the point pc into the image. Consequently, only the pose of the
first image line is returned by the operators FindMarksAndPose and CameraCalibration.
For line scan pinhole cameras, the projection of the point pc that is given in the camera coordinate system into a
(sub-)pixel [r,c] in the image is defined as follows:
Assuming

    pc = (x, y, z)^T,

the following equations hold:

    m · D · ũ = x − t · Vx
    −m · D · pv = y − t · Vy
    m · Focus = z − t · Vz

with

    D = 1 / (1 + κ(ũ² + (pv)²))
    pv = Sy · Cy

Solving these equations for the unknowns m, ũ, and t yields the pixel coordinates:

    c = ũ / Sx + Cx   and   r = t
Camera parameters
The total of 14 camera parameters for area scan cameras and 17 camera parameters for line scan cameras,
respectively, can be divided into the interior and exterior camera parameters:
Interior camera parameters: These parameters describe the characteristics of the camera used, especially the
dimensions of the sensor itself and the projection properties of the combination of lens, camera, and
frame grabber.
For area scan cameras, the above described camera model contains the following 8 parameters:
Focus: Focal length of the lens. 0 for telecentric lenses.
Kappa (κ): Distortion coefficient to model the pincushion- or barrel-shaped distortions caused by the lens.
Sx : Scale factor. For pinhole cameras, it corresponds to the horizontal distance between two neighboring
cells on the sensor. For telecentric cameras, it represents the horizontal size of a pixel in world
coordinates. Attention: This value increases if the image is subsampled!
Sy : Scale factor. For pinhole cameras, it corresponds to the vertical distance between two neighboring
cells on the sensor. For telecentric cameras, it represents the vertical size of a pixel in world
coordinates. Since in most cases the image signal is sampled line-synchronously, this value is determined
by the dimension of the sensor and need not be estimated by the calibration process for pinhole
cameras. Attention: This value increases if the image is subsampled!
Cx : Column coordinate of the image center point (center of the radial distortion).
Cy : Row coordinate of the image center point (center of the radial distortion).
ImageWidth: Width of the sampled image. Attention: This value decreases if the image is subsampled!
ImageHeight: Height of the sampled image. Attention: This value decreases if the image is subsampled!
For line scan cameras, the above described camera model contains the following 11 parameters:
Focus: Focal length of the lens.
Kappa: Distortion coefficient to model the pincushion- or barrel-shaped distortions caused by the lens.
Sx : Scale factor, corresponds to the horizontal distance between two neighboring cells on the sensor.
Attention: This value increases if the image is subsampled!
Sy : Scale factor. During the calibration, it appears only in the form pv = Sy · Cy . pv describes the
distance of the image center point from the sensor line in [meters]. Attention: This value increases
if the image is subsampled!
Cx : Column coordinate of the image center point (center of the radial distortion).
Cy : Distance of the image center point (center of the radial distortion) from the sensor line in [scanlines].
ImageWidth: Width of the sampled image. Attention: This value decreases if the image is subsampled!
ImageHeight: Height of the sampled image. Attention: This value decreases if the image is subsampled!
Vx : X-component of the motion vector.
Vy : Y-component of the motion vector.
Vz : Z-component of the motion vector.
Note that the term focal length is not quite correct; it would be appropriate only for an infinite object
distance. To simplify matters, the term focal length is always used, even when the image distance is meant.
Exterior camera parameters: These 6 parameters describe the 3D pose, i.e., the position and orientation, of the
world coordinate system relative to the camera coordinate system. For line scan cameras, the pose of the
world coordinate system refers to the camera coordinate system of the first image line. Three parameters
describe the translation, three the rotation. See CreatePose for more information about 3D poses. Note
that CameraCalibration operates with all types of 3D poses for NStartPose.
When using the standard calibration plate, the world coordinate system is defined by the coordinate system
of the calibration plate, which is located in the middle of the surface of the calibration plate, with its z-axis
pointing into the calibration plate, its x-axis to the right, and its y-axis downwards.
How to generate an appropriate calibration plate? The simplest method to determine the interior parameters of
a camera is to use the standard calibration plate as generated by the operator GenCaltab. You can
obtain high-precision calibration plates in various sizes and materials from your local distributor. For
small distances between object and lens, it may be sufficient to print the calibration pattern on a laser printer
and mount it on cardboard. Otherwise, especially when using a wide-angle lens, it is possible to print
the PostScript file on a large ink-jet printer and mount it on an aluminum plate. It is very important that
the mark coordinates in the calibration plate description file correspond to the real ones on the calibration
plate with high accuracy. Thus, the calibration plate description file has to be modified in accordance with
the measurement of the calibration plate!
How to take a set of suitable images? If you use the standard calibration plate, you can proceed in the following
way: With the combination of lens (fixed distance!), camera, and frame grabber to be calibrated, a set of
images of the calibration plate has to be taken; see OpenFramegrabber and GrabImage. The following
items have to be considered:
• At least a total of 10 to 20 images should be taken into account.
The value for Sx is calibrated, since the video signal of a camera normally isn’t sampled
pixel-synchronously.
Sy : Since most off-the-shelf cameras have square pixels, the same values are valid for Sy as for Sx .
In contrast to Sx , the value for Sy will not be calibrated for pinhole cameras, because the video
signal of a camera normally is sampled line-synchronously. Thus, the initial value is equal to the
final value. Appropriate initial values are:

                 Full image (768*576)   Subsampling (384*288)
    1/3"-Chip    0.0000055 m            0.0000110 m
    1/2"-Chip    0.0000086 m            0.0000172 m
    2/3"-Chip    0.0000110 m            0.0000220 m
Cx and Cy : Initial values for the coordinates of the image center are half the image width and half the
image height. Notice: The values of Cx and Cy decrease if the image is subsampled! Appropriate initial
values are:

         Full image (768*576)   Subsampling (384*288)
    Cx   384.0                  192.0
    Cy   288.0                  144.0
ImageWidth and ImageHeight: These two parameters are determined by the frame grabber used
and therefore are not calibrated. Appropriate initial values are, for example:

                 Full image (768*576)   Subsampling (384*288)
    ImageWidth   768                    384
For line scan cameras, the following should be considered for the initial values of the single parameters:
Focus: The initial value is the nominal focal length of the lens used, e.g., 0.008 m.
Kappa: Use 0.0 as initial value.
Sx : The initial value for the horizontal distance between two neighboring cells can be taken from the
technical specifications of the camera. Typical initial values are 7e-6 m, 10e-6 m, and 14e-6 m.
Notice: The value of Sx increases if the image is subsampled!
Sy : The initial value for the size of a cell in the direction perpendicular to the sensor line can also be
taken from the technical specifications of the camera. Typical initial values are 7e-6 m, 10e-6 m,
and 14e-6 m. Notice: The value of Sy increases if the image is subsampled! In contrast to Sx , the
value for Sy will NOT be calibrated for line scan cameras, because it appears only in the form pv =
Sy · Cy . Therefore, it cannot be determined separately.
Cx : The initial value for the x-coordinate of the image center is half the image width. Notice: The
value of Cx decreases if the image is subsampled! Appropriate initial values are:

    Image width:   1024   2048   4096   8192
    Cx:             512   1024   2048   4096
Cy : The initial value for the y-coordinate of the image center can normally be set to 0.
ImageWidth and ImageHeight: These two parameters are determined by the used frame grabber and
therefore are not calibrated.
Vx , Vy , Vz : The initial values for the x-, y-, and z-components of the motion vector depend on the image
acquisition setup. Assuming a camera that looks perpendicularly onto a conveyor belt, and that is
rotated around its optical axis such that the sensor line is perpendicular to the conveyor belt, i.e., the
y-axis of the camera coordinate system is parallel to the conveyor belt, the initial values are Vx = Vz =
0. The initial value for Vy can then be determined, e.g., from a line scan image of an object with
known size (e.g., calibration plate, ruler):

    Vy = l[m] / l[row]
with:
l[m] = Length of the object in object coordinates [meter]
l[row] = Length of the object in image coordinates [rows]
If, compared to the above setup, the camera is rotated 30 degrees around its optical axis, i.e., around
the z-axis of the camera coordinate system, the initial values determined above must be changed as
follows:

    Vx' = sin(30) · Vy
    Vy' = cos(30) · Vy
    Vz' = Vz = 0

If, compared to the first setup, the camera is rotated -20 degrees around the x-axis of the camera
coordinate system, the following initial values result:

    Vx' = Vx = 0
    Vy' = cos(-20) · Vy
    Vz' = sin(-20) · Vy
The quality of the initial values for Vx , Vy , and Vz is crucial for the success of the whole calibration.
If they are not precise enough, the calibration may fail.
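The initial-value computations above can be sketched as follows; the object length and row count are invented example numbers, not measurements:

```python
import math

# Vy from a line scan image of an object of known size (example numbers):
object_length_m = 0.1       # length of the object in object coordinates [meter]
object_length_rows = 1000   # length of the object in image coordinates [rows]
vy = object_length_m / object_length_rows  # meters per scanline

# Camera rotated 30 degrees around its optical axis (the z-axis):
vx_z = math.sin(math.radians(30)) * vy
vy_z = math.cos(math.radians(30)) * vy
vz_z = 0.0

# Camera rotated -20 degrees around the x-axis of the camera frame:
vx_x = 0.0
vy_x = math.cos(math.radians(-20)) * vy
vz_x = math.sin(math.radians(-20)) * vy
```

Note that the rotation about the x-axis produces a negative Vz component, i.e., the motion vector gains a component along the optical axis.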
Which camera parameters have to be estimated? The input parameter estimateParams is used to select
which camera parameters to estimate. Usually this parameter is set to ’all’, i.e., all 6 exterior camera
parameters (translation and rotation) and all interior camera parameters are determined. If the interior camera
parameters have already been determined (e.g., by a previous call to CameraCalibration), it is often
desired to determine only the pose of the world coordinate system in camera coordinates (i.e., the exterior
camera parameters). In this case, estimateParams can be set to ’pose’. This has the same effect as
estimateParams = [’alpha’,’beta’,’gamma’,’transx’,’transy’,’transz’]. Otherwise, estimateParams
contains a tuple of strings indicating the combination of parameters to estimate. In addition, parameters can
be excluded from estimation by using the prefix ~. For example, the values [’pose’,’~transx’] have the same
effect as [’alpha’,’beta’,’gamma’,’transy’,’transz’], while [’all’,’~focus’] determines all interior and exterior
parameters except the focal length. The prefix ~ can be used with all parameter values except ’all’.
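The group expansion and ~ exclusion described above can be sketched like this; the exact set of interior parameter names recognized by the operator is assumed here (check the operator reference for the real list):

```python
# Hypothetical parameter name sets for illustration only:
POSE = ['alpha', 'beta', 'gamma', 'transx', 'transy', 'transz']
INTERIOR = ['focus', 'kappa', 'sx', 'cx', 'cy']  # assumed area scan subset
GROUPS = {'pose': POSE, 'all': INTERIOR + POSE}

def expand(estimate_params):
    """Expand group names and apply '~' exclusions, as described above."""
    selected = []
    for p in estimate_params:
        if p.startswith('~'):
            name = p[1:]
            # an exclusion may name a group (except 'all') or a single parameter
            for excl in GROUPS.get(name, [name]):
                if excl in selected:
                    selected.remove(excl)
        else:
            selected.extend(GROUPS.get(p, [p]))
    return selected

# ['pose', '~transx'] -> ['alpha', 'beta', 'gamma', 'transy', 'transz']
result = expand(['pose', '~transx'])
```

Likewise, `expand(['all', '~focus'])` yields every parameter except `focus`, mirroring the ['all','~focus'] example in the text.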
What is the order within the individual parameters? The length of the tuple NStartPose corresponds to the
number of calibration images, e.g., using 15 images leads to a tuple NStartPose of length
15 · 7 = 105 (15 times the 7 exterior camera parameters). The first 7 values correspond to the pose of the
calibration plate in the first image, the next 7 values to the pose in the second image, etc.
This fixed number of calibration images has to be considered within the tuples containing the coordinates of the 3D
model marks and the extracted 2D marks. If 15 images are used, the length of the tuples NRow and NCol
is 15 times the length of the tuples containing the coordinates of the 3D model marks (NX, NY, and NZ). If every
image contains 49 marks, the length of the tuples NRow and NCol is 15 · 49 = 735, while the length of the
tuples NX, NY, and NZ is 49. The order of the values in NRow and NCol is “image after image”, i.e., using
49 marks, the first 3D model point corresponds to the 1st, 50th, 99th, 148th, 197th, 246th, etc. extracted 2D
mark.
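The “image after image” ordering can be made concrete with a small index computation (a sketch using the 15-image, 49-mark example from the text):

```python
num_images = 15
marks_per_image = 49

def mark_index(image, mark):
    """0-based index into NRow/NCol for mark `mark` of image `image`;
    values are ordered image after image."""
    return image * marks_per_image + mark

# The first 3D model point (mark 0) corresponds to the 1st, 50th, 99th, ...
# extracted 2D mark (1-based counting, as in the text):
positions = [mark_index(i, 0) + 1 for i in range(6)]
# positions == [1, 50, 99, 148, 197, 246]
```

The last mark of the last image lands at index 15 · 49 = 735 in 1-based counting, matching the tuple length given above.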
The 3D model points can be read from a calibration plate description file using the operator
CaltabPoints. Initial values for the poses of the calibration plate can be determined by applying
FindMarksAndPose for each image. The tuple NStartPose is set by the concatenation of all these
poses.
What is the meaning of the output parameters? If the camera calibration process is finished successfully, i.e.,
the minimization process has converged, the output parameters camParam and NFinalPose contain the
computed exact values for the interior and exterior camera parameters. The length of the tuple NFinalPose
corresponds to the length of the tuple NStartPose.
The representation types of NFinalPose correspond to the representation type of the first tuple of
NStartPose (see CreatePose). You can convert the representation type by ConvertPoseType.
The computed average errors (errors) give an impression of the accuracy of the calibration. The error
values (deviations in x and y coordinates) are measured in pixels.
Must I use a planar calibration object? No. The operator CameraCalibration is designed such that
the input tuples NX, NY, NZ, NRow, and NCol can contain any 3D/2D correspondences; see the paragraph
above explaining the order of the individual parameters.
Thus, it makes no difference how the required 3D model marks and the corresponding extracted 2D marks are
determined. On the one hand, it is possible to use a 3D calibration pattern; on the other hand, you can also use any
characteristic points (natural landmarks) with known position in the world. By setting estimateParams
to ’pose’, it is thus possible to compute the pose of an object in camera coordinates! For this, at least three
3D/2D correspondences are necessary as input. NStartPose can, e.g., be generated directly as shown in
the program example for CreatePose.
Attention
The minimization process of the calibration depends on the initial values of the interior (startCamParam) and
exterior (NStartPose) camera parameters. The computed average errors (errors) give an impression of the
accuracy of the calibration. The errors (deviations in x and y coordinates) are measured in pixels.
Parameter
Result
CameraCalibration returns 2 (H_MSG_TRUE) if all parameter values are correct and the desired camera
parameters have been determined by the minimization algorithm. If necessary, an exception is raised.
Parallelization Information
CameraCalibration is reentrant and processed without parallelization.
Possible Predecessors
FindMarksAndPose, CaltabPoints, ReadCamPar
Possible Successors
WritePose, PoseToHomMat3d, DispCaltab, SimCaltab
See also
FindCaltab, FindMarksAndPose, DispCaltab, SimCaltab, WriteCamPar, ReadCamPar,
CreatePose, ConvertPoseType, WritePose, ReadPose, PoseToHomMat3d,
HomMat3dToPose, CaltabPoints, GenCaltab
Module
Calibration
• ’fixed’: Only kappa is modified, the other interior camera parameters remain unchanged. In general, this
leads to a change of the visible part of the scene.
• ’fullsize’: The scale factors Sx and Sy and the image center point [Cx , Cy ]T are modified in order to preserve
the visible part of the scene. Thus, all points visible in the original image are also visible in the modified
(rectified) image. In general, this leads to undefined pixels in the modified image.
• ’adaptive’: A trade-off between the other modes: The visible part of the scene is slightly reduced to prevent
undefined pixels in the modified image. Similarly to ’fullsize’, the scale factors and the image center point
are modified.
• ’preserve_resolution’: As in the mode ’fullsize’, all points visible in the original image are also visible in
the modified (rectified) image, i.e., the scale factors Sx and Sy and the image center point [Cx , Cy ]T are
modified. In general, this leads to undefined pixels in the modified image. In contrast to the mode ’fullsize’,
however, the size of the modified image is additionally increased such that the image resolution does not
decrease in any part of the image.
In all modes the radial distortion coefficient κ in camParOut is set to kappa. The transformation of a pixel in
the modified image into the image plane using camParOut results in the same point as the transformation of a
pixel in the original image via camParIn.
Parameter
Possible Successors
ChangeRadialDistortionImage, ChangeRadialDistortionContoursXld,
GenRadialDistortionMap
See also
CameraCalibration, ReadCamPar, ChangeRadialDistortionImage,
ChangeRadialDistortionContoursXld
Module
Calibration
HXLDCont HXLDCont.ChangeRadialDistortionContoursXld (
HTuple camParIn, HTuple camParOut )
Transform an XLD contour into the plane z=0 of a world coordinate system.
The operator ContourToWorldPlaneXld transforms contour points given in contours into the plane z=0
in a world coordinate system and returns the 3D contour points in contoursTrans. The world coordinate
system is chosen by passing its 3D pose relative to the camera coordinate system in worldPose. In camParam
you must pass the interior camera parameters (see WriteCamPar for the sequence of the parameters and the
underlying camera model).
In many cases camParam and worldPose are the result of calibrating the camera with the operator
CameraCalibration. See below for an example.
With the parameter scale you can scale the resulting 3D coordinates. The parameter scale must be specified
as the ratio desired unit/original unit. The original unit is determined by the coordinates of the calibration object.
If the original unit is meters (which is the case if you use the standard calibration plate), you can set the desired
unit directly by selecting ’m’, ’cm’, ’mm’ or ’µm’ for the parameter scale.
Internally, the operator first computes the line of sight between the projection center and the image point in the
camera coordinate system, taking into account the radial distortions. The line of sight is then transformed into the
world coordinate system specified in worldPose. By intersecting the plane z=0 with the line of sight the 3D
coordinates of the transformed contour contoursTrans are obtained.
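The internal steps just described (line of sight in camera coordinates, transformation into the world frame, intersection with z = 0) can be sketched as follows for the distortion-free case κ = 0. The pose is reduced here to a plain rotation matrix R and translation t with p_camera = R · p_world + t, whereas the operator itself takes a 7-element pose tuple; all numeric values are invented examples:

```python
def matvec_T(R, v):
    """Multiply the transpose of the 3x3 matrix R (list of rows) with v."""
    return [sum(R[i][j] * v[i] for i in range(3)) for j in range(3)]

def image_point_to_world_plane(r, c, focus, sx, sy, cx, cy, R, t):
    """Intersect the line of sight of pixel (r, c) with the plane z = 0 of
    the world frame; kappa = 0 is assumed."""
    u = (c - cx) * sx                   # pixel -> image plane coordinates
    v = (r - cy) * sy
    d_c = [u, v, focus]                 # line-of-sight direction, camera frame
    o_w = matvec_T(R, [-t[0], -t[1], -t[2]])  # camera center in world frame
    d_w = matvec_T(R, d_c)                    # direction in world frame
    s = -o_w[2] / d_w[2]                # parameter of the z = 0 intersection
    return [o_w[i] + s * d_w[i] for i in range(3)]

# Camera looking straight down at the plane from 0.5 m, identity rotation:
I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
p = image_point_to_world_plane(288.0, 384.0, focus=0.008,
                               sx=1.1e-5, sy=1.1e-5, cx=384.0, cy=288.0,
                               R=I, t=[0.0, 0.0, 0.5])
# the principal ray hits the world origin: p is (approximately) [0, 0, 0]
```

The scale parameter of the operator would simply multiply the returned coordinates by the chosen unit ratio.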
Parameter
. contours (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; HXLDCont
Input XLD contours to be transformed in image coordinates.
. contoursTrans (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; HXLDCont
Transformed XLD contours in world coordinates.
. camParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; HTuple (double / int / long)
Interior camera parameters.
Number of elements : (CamParam = 8) ∨ (CamParam = 11)
. worldPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; HPose / HTuple (double / int / long)
3D pose of the world coordinate system in camera coordinates.
Number of elements : 7
. scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (string / int / long / double)
Scale or dimension
Default Value : "m"
Suggested values : Scale ∈ {"m", "cm", "mm", "microns", "µm", 1.0, 0.01, 0.001, "1.0e-6", 0.0254, 0.3048,
0.9144}
Example (Syntax: HDevelop)
Result
ContourToWorldPlaneXld returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an
exception is raised.
Parallelization Information
ContourToWorldPlaneXld is reentrant and processed without parallelization.
Possible Predecessors
CreatePose, HomMat3dToPose, CameraCalibration, HandEyeCalibration,
SetOriginPose
See also
ImagePointsToWorldPlane
Module
Calibration
Generate a calibration plate description file and a corresponding PostScript file. (obsolete)
CreateCaltab has been replaced with the operator GenCaltab. The operator is contained and described for
compatibility reasons only.
CreateCaltab generates the description of a standard calibration plate for HALCON. This calibration plate
consists of 49 black circular marks on a white plane which are surrounded by a black frame. The parameter
width sets the width (equal to the height) of the whole calibration plate in meters. Using a width of 0.8 m, the
distance between two neighboring marks becomes 10 cm, and the mark radius and the frame width are set to 2.5
cm. The calibration plate coordinate system is located in the middle of the surface of the calibration plate, its z-axis
points into the calibration plate, its x-axis to the right, and its y-axis downwards.
The file calTabDescrFile contains the calibration plate description, e.g., the number of rows and columns
of the calibration plate, the geometry of the surrounding frame (see FindCaltab), and the coordinates and
the radius of all calibration plate marks given in the calibration plate coordinate system. A file generated by
CreateCaltab looks like the following (comments are marked by a ’#’ at the beginning of a line):
#
# Description of the standard calibration plate
# used for the camera calibration in HALCON
#
# 7 rows X 7 columns
# Distance between mark centers [meter]: 0.1
# Quadratic frame (with outer and inner border) around calibration plate
w 0.025
o -0.41 0.41 0.41 -0.41
i -0.4 0.4 0.4 -0.4
# calibration marks at y = 0 m
-0.3 0 0.025
-0.2 0 0.025
-0.1 0 0.025
0 0 0.025
0.1 0 0.025
0.2 0 0.025
0.3 0 0.025
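A minimal parser for the description-file fragment shown above might look like this. This is a sketch for the record types visible in the excerpt only (comments, the frame lines ’w’, ’o’, ’i’, and mark lines of the form x y radius); the real file format contains further records not handled here:

```python
def parse_caltab_descr(text):
    """Parse a calibration plate description fragment into a frame dict
    and a list of (x, y, radius) marks; unknown records raise ValueError."""
    marks, frame = [], {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue                      # skip blanks and comments
        fields = line.split()
        if fields[0] in ('w', 'o', 'i'):  # frame geometry records
            frame[fields[0]] = [float(f) for f in fields[1:]]
        else:                             # a mark line: x y radius
            x, y, radius = (float(f) for f in fields)
            marks.append((x, y, radius))
    return frame, marks

fragment = """\
# calibration marks at y = 0 m
w 0.025
o -0.41 0.41 0.41 -0.41
-0.3 0 0.025
0 0 0.025
0.3 0 0.025
"""
frame, marks = parse_caltab_descr(fragment)
```

For the 0.8 m plate described above, the parsed mark spacing (0.1 m) and radius (0.025 m) match the values stated in the text.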
The file calTabFile contains the corresponding PostScript description of the calibration plate.
Attention
Depending on the accuracy of the output device used (e.g., laser printer), the printed calibration plate may not
match the values in the calibration plate description file calTabDescrFile exactly. Thus, the coordinates of the
calibration marks in the calibration plate description file may have to be corrected!
Parameter
. width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Width of the calibration plate in meters.
Default Value : 0.8
Suggested values : Width ∈ {1.2, 0.8, 0.6, 0.4, 0.2, 0.1}
Recommended Increment : 0.1
Restriction : 0.0 < Width
. calTabDescrFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; HTuple (string)
File name of the calibration plate description.
Default Value : "caltab.descr"
List of values : CalTabDescrFile ∈ {"caltab.descr", "caltab_10mm.descr", "caltab_30mm.descr",
"caltab_100mm.descr", "caltab_200mm.descr"}
. calTabFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; HTuple (string)
File name of the PostScript file.
Default Value : "caltab.ps"
Example (Syntax: HDevelop)
Result
CreateCaltab returns 2 (H_MSG_TRUE) if all parameter values are correct and both files have been written
successfully. If necessary, an exception is raised.
Parallelization Information
CreateCaltab is processed completely exclusively without parallelization.
Possible Successors
ReadCamPar, CaltabPoints
See also
GenCaltab, FindCaltab, FindMarksAndPose, CameraCalibration, DispCaltab,
SimCaltab
Module
Foundation
Project and visualize the 3D model of the calibration plate in the image.
DispCaltab is used to visualize the calibration marks and the connecting lines between the marks of the
used calibration plate (calTabDescrFile) in the window specified by windowHandle. Additionally, the
x- and y-axes of the plate’s coordinate system are printed on the plate’s surface. For this, the 3D model of
the calibration plate is projected into the image plane using the interior (camParam) and exterior camera
parameters (caltabPose, i.e., the pose of the calibration plate in camera coordinates). The underlying camera
model (pinhole, telecentric, or line scan camera with radial distortion) is described in WriteCamPar and
CameraCalibration.
Typically, DispCaltab is used to verify the result of the camera calibration (see CameraCalibration) by
superimposing it onto the original image. The current line width can be set by SetLineWidth, the current color
can be set by SetColor. Additionally, the font type of the labels of the coordinate axes can be set by SetFont.
The parameter scaleFac influences the number of supporting points used to approximate the elliptic contours of the
calibration marks. You should increase the number of supporting points if the image part in the output window is
displayed with magnification (see SetPart).
Parameter
. windowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; HWindow / HTuple (IntPtr)
Window in which the calibration plate should be visualized.
. calTabDescrFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; HTuple (string)
File name of the calibration plate description.
Default Value : "caltab.descr"
List of values : CalTabDescrFile ∈ {"caltab.descr", "caltab_10mm.descr", "caltab_30mm.descr",
"caltab_100mm.descr", "caltab_200mm.descr"}
. camParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; HTuple (double / int / long)
Interior camera parameters.
Number of elements : (CamParam = 8) ∨ (CamParam = 11)
. caltabPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . pose-array ; HPose / HTuple (double / int / long)
Exterior camera parameters (3D pose of the calibration plate in camera coordinates).
Number of elements : 7
. scaleFac (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Scaling factor for the visualization.
Default Value : 1.0
Suggested values : ScaleFac ∈ {0.5, 1.0, 2.0, 3.0}
Recommended Increment : 0.05
Restriction : 0.0 < ScaleFac
Example (Syntax: HDevelop)
Result
DispCaltab returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception
is raised.
Parallelization Information
DispCaltab is reentrant, local, and processed without parallelization.
Possible Predecessors
CameraCalibration, ReadCamPar, ReadPose
See also
FindMarksAndPose, CameraCalibration, SimCaltab, WriteCamPar, ReadCamPar,
CreatePose, WritePose, ReadPose, Project3dPoint, GetLineOfSight
Module
Foundation
Result
FindCaltab returns 2 (H_MSG_TRUE) if all parameter values are correct and an image region
is found. The behavior in case of empty input (no image given) can be set via SetSystem
(’no_object_result’,<Result>) and the behavior in case of an empty result region via SetSystem
(’store_empty_region’,<true/false>). If necessary, an exception is raised.
Parallelization Information
FindCaltab is reentrant and processed without parallelization.
Possible Predecessors
ReadImage
Possible Successors
FindMarksAndPose
See also
FindMarksAndPose, CameraCalibration, DispCaltab, SimCaltab, CaltabPoints,
GenCaltab
Module
Foundation
Extract the 2D calibration marks from the image and calculate initial values for the exterior camera parameters.
FindMarksAndPose is used to determine the necessary input data for the subsequent camera calibration (see
CameraCalibration): First, the 2D center points [RCoord,CCoord] of the calibration marks within the
region calTabRegion of the input image image are extracted and ordered. Secondly, a rough estimate for
the exterior camera parameters (startPose) is computed, i.e., the 3D pose (= position and orientation) of the
calibration plate relative to the camera coordinate system (see CreatePose for more information about 3D
poses).
In the input image image an edge detector is applied (see EdgesImage, mode ’lanser2’) to the region
calTabRegion, which can be found by applying the operator FindCaltab. The filter parameter for this
edge detection can be tuned via alpha. In the edge image closed contours are searched for: The number of closed
contours must correspond to the number of calibration marks as described in the calibration plate description file
calTabDescrFile, and the contours have to be elliptically shaped. Contours shorter than minContLength are
discarded, as are contours enclosing regions with a diameter larger than maxDiamMarks (e.g., the border of the
calibration plate).
For the detection of contours, a threshold operator is applied to the amplitudes resulting from the edge detector. All
points with a high amplitude (i.e., borders of marks) are selected.
First, the threshold value is set to startThresh. If the search for the closed contours or the successive pose
estimate fails, this threshold value is successively decreased by deltaThresh down to a minimum value of
minThresh.
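The threshold strategy just described (start at startThresh, decrease by deltaThresh down to minThresh until extraction succeeds) can be sketched like this; the `try_extract` callback is a stand-in for the contour search and pose estimate:

```python
def find_with_decreasing_threshold(try_extract, start_thresh, delta_thresh,
                                   min_thresh):
    """Call try_extract(threshold) with successively lowered thresholds
    until it succeeds (returns non-None) or min_thresh is passed."""
    thresh = start_thresh
    while thresh >= min_thresh:
        result = try_extract(thresh)
        if result is not None:
            return thresh, result      # succeeded at this threshold
        thresh -= delta_thresh         # lower the threshold and retry
    return None                        # extraction failed for all thresholds

# Stand-in extractor that only succeeds for thresholds <= 100:
outcome = find_with_decreasing_threshold(
    lambda t: 'contours' if t <= 100 else None,
    start_thresh=128, delta_thresh=10, min_thresh=18)
# outcome == (98, 'contours')
```

With the example values 128/10/18, the thresholds tried are 128, 118, 108, 98, ...; the stand-in extractor first succeeds at 98.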
Each of the found contours is refined with subpixel accuracy (see EdgesSubPix) and subsequently approximated
by an ellipse. The center points of these ellipses represent a good approximation of the desired 2D image
coordinates [RCoord,CCoord] of the calibration mark center points. The order of the values within these two
tuples must correspond to the order of the 3D coordinates of the calibration marks in the calibration plate
description file calTabDescrFile, since this fixes the correspondences between extracted image marks and known
model marks (given by CaltabPoints)! If a triangular orientation mark is defined in a corner of the plate by
the plate description file (see GenCaltab), the mark will be detected and the point order is returned in row-major
order, beginning with the corner mark in the (barycentric) negative quadrant with respect to the defined coordinate
system of the plate. Otherwise, if no orientation mark is defined, the order of the center points is in row-major
order beginning at the upper left corner mark in the image.
Based on the ellipse parameters for each calibration mark, a rough estimate for the exterior camera parameters is
finally computed. For this purpose the fixed correspondences between extracted image marks and known model
marks are used. The estimate startPose describes the pose of the calibration plate in the camera coordinate
system as required by the operator CameraCalibration.
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Input image.
. calTabRegion (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; HRegion
Region of the calibration plate.
. calTabDescrFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; HTuple (string)
File name of the calibration plate description.
Default Value : "caltab.descr"
List of values : CalTabDescrFile ∈ {"caltab.descr", "caltab_10mm.descr", "caltab_30mm.descr",
"caltab_100mm.descr", "caltab_200mm.descr"}
. startCamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; HTuple (double / int / long)
Initial values for the interior camera parameters.
Number of elements : (StartCamParam = 8) ∨ (StartCamParam = 11)
. startThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (int / long)
Initial threshold value for contour detection.
Default Value : 128
List of values : StartThresh ∈ {80, 96, 112, 128, 144, 160}
. deltaThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (int / long)
Loop value for successive reduction of startThresh.
Default Value : 10
List of values : DeltaThresh ∈ {6, 8, 10, 12, 14, 16, 18, 20, 22}
. minThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (int / long)
Minimum threshold for contour detection.
Default Value : 18
List of values : MinThresh ∈ {8, 10, 12, 14, 16, 18, 20, 22}
. alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Filter parameter for contour detection, see EdgesImage.
Default Value : 0.9
Suggested values : Alpha ∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1}
Typical range of values : 0.2 ≤ Alpha ≤ 2.0
Restriction : Alpha > 0.0
. minContLength (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Minimum length of the contours of the marks.
Default Value : 15.0
Suggested values : MinContLength ∈ {10.0, 15.0, 20.0, 30.0, 40.0, 100.0}
Restriction : MinContLength > 0.0
. maxDiamMarks (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Maximum expected diameter of the marks.
Default Value : 100.0
Suggested values : MaxDiamMarks ∈ {50.0, 100.0, 150.0, 200.0, 300.0}
Restriction : MaxDiamMarks > 0.0
. RCoord (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Tuple with row coordinates of the detected marks.
. CCoord (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Tuple with column coordinates of the detected marks.
. startPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . pose-array ; HPose / HTuple (double / int / long)
Estimation for the exterior camera parameters.
Number of elements : 7
Example (Syntax: HDevelop)
HALCON 8.0.2
1158 CHAPTER 15. TOOLS
Result
FindMarksAndPose returns 2 (H_MSG_TRUE) if all parameter values are correct and an estimate for the
exterior camera parameters has been determined successfully. If necessary, an exception is raised.
Parallelization Information
FindMarksAndPose is reentrant and processed without parallelization.
Possible Predecessors
FindCaltab
Possible Successors
CameraCalibration
See also
FindCaltab, CameraCalibration, DispCaltab, SimCaltab, ReadCamPar, ReadPose,
CreatePose, PoseToHomMat3d, CaltabPoints, GenCaltab, EdgesSubPix, EdgesImage
Module
Foundation
The file calTabPSFile contains the corresponding PostScript description of the calibration plate.
Attention
Depending on the accuracy of the used output device (e.g., laser printer), the printed calibration plate may not
match the values in the calibration plate description file calTabDescrFile exactly. Thus, the coordinates of the
calibration marks in the calibration plate description file may have to be corrected!
Parameter
Result
GenCaltab returns 2 (H_MSG_TRUE) if all parameter values are correct and both files have been written
successfully. If necessary, an exception is raised.
Parallelization Information
GenCaltab is processed completely exclusively without parallelization.
Possible Successors
ReadCamPar, CaltabPoints
See also
FindCaltab, FindMarksAndPose, CameraCalibration, DispCaltab, SimCaltab
Module
Foundation
Generate a projection map that describes the mapping between the image plane and the plane z=0 of a world
coordinate system.
GenImageToWorldPlaneMap generates a projection map map, which describes the mapping between the im-
age plane and the plane z=0 (plane of measurements) in a world coordinate system. This map can be used to rectify
an image with the operator MapImage. The rectified image shows neither radial nor perspective distortions; it
corresponds to an image acquired by a distortion-free camera that looks perpendicularly onto the plane of measure-
ments. The world coordinate system is chosen by passing its 3D pose relative to the camera coordinate system in
worldPose. In camParam you must pass the interior camera parameters (see WriteCamPar for the sequence
of the parameters and the underlying camera model).
In many cases camParam and worldPose are the result of calibrating the camera with the operator
CameraCalibration. See below for an example.
The size of the images to be mapped can be specified by the parameters widthIn and heightIn. The pixel
position of the upper left corner of the output image is determined by the origin of the world coordinate system.
The size of the output image can be chosen by the parameters widthMapped, heightMapped, and scale.
widthMapped and heightMapped must be given in pixels.
With the parameter scale you can specify the size of a pixel in the transformed image. There are two typical
scenarios: First, you can scale the image such that pixel coordinates in the transformed image directly correspond
to metric units, e.g., that one pixel corresponds to one micron. This is useful if you want to perform measurements
in the transformed image which will then directly result in metric results. The second scenario is to scale the image
such that its content appears in a size similar to the original image. This is useful, e.g., if you want to perform
shape-based matching in the transformed image.
scale must be specified as the ratio desired pixel size/original unit. A pixel size of 1µm means that a pixel in
the transformed image corresponds to the area 1µm × 1µm in the plane of measurements. The original unit is
determined by the coordinates of the calibration object. If the original unit is meters (which is the case if you use
the standard calibration plate), you can use the parameter values ’m’, ’cm’, ’mm’, ’microns’, or ’µm’ to directly set
the unit of pixel coordinates in the transformed image.
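The relation scale = desired pixel size / original unit can be illustrated as follows (a minimal sketch, assuming the original unit is meters, as for the standard calibration plate; the dictionary of unit strings mirrors the documented values):

```python
# Pixel size in meters per unit string, assuming world coordinates in meters.
UNIT_TO_METERS = {"m": 1.0, "cm": 0.01, "mm": 0.001, "microns": 1e-6, "um": 1e-6}

def scale_factor(unit):
    """scale = desired pixel size / original unit (here: meters)."""
    return UNIT_TO_METERS[unit]

def world_extent_to_pixels(extent_m, unit):
    """Number of pixels a world extent occupies in the rectified image."""
    return extent_m / scale_factor(unit)

# A 5 cm wide region mapped with 'mm'-sized pixels spans 50 pixels.
pixels = world_extent_to_pixels(0.05, "mm")
```

This shows why a smaller scale value yields a larger (more finely sampled) rectified image of the same world region.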
The parameter interpolation specifies whether bilinear interpolation (’bilinear’) should be applied between
the pixels in the input image or whether the gray value of the nearest neighboring pixel (’none’) should be used.
The mapping function is stored in the output image map. map has the same size as the resulting images after
the mapping. If no interpolation is chosen, map consists of one image containing one channel, in which for each
pixel of the resulting image the linearized coordinate of the pixel of the input image is stored that is the nearest
neighbor to the transformed coordinates. If bilinear interpolation is chosen, map consists of one image containing
five channels. In the first channel for each pixel in the resulting image the linearized coordinates of the pixel in
the input image is stored that is in the upper left position relative to the transformed coordinates. The four other
channels contain the weights of the four neighboring pixels of the transformed coordinates which are used for the
bilinear interpolation, in the following order:
2 (upper left)    3 (upper right)
4 (lower left)    5 (lower right)
The second channel, for example, contains the weights of the pixels that lie to the upper left relative to
the transformed coordinates. If several images have to be mapped using the same camera parameters,
GenImageToWorldPlaneMap in combination with MapImage is much more efficient than the operator
ImageToWorldPlane because the mapping function needs to be computed only once.
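The use of such a five-channel map can be sketched as follows (a plain Python stand-in for MapImage, assuming the channel layout described above: linearized coordinates of the upper-left neighbor in channel 1, the four weights in channels 2 to 5):

```python
def apply_bilinear_map(src, src_width, map_coord, map_w2, map_w3, map_w4, map_w5):
    """Apply a 5-channel bilinear mapping to a flat grayscale image.

    src       : input image as a flat list (row-major, width src_width)
    map_coord : per output pixel, linearized coord of the upper-left neighbor
    map_w2..5 : weights of upper-left, upper-right, lower-left, lower-right
    """
    out = []
    for i, c in enumerate(map_coord):
        ul = src[c]                     # upper left neighbor
        ur = src[c + 1]                 # upper right neighbor
        ll = src[c + src_width]         # lower left neighbor
        lr = src[c + src_width + 1]     # lower right neighbor
        out.append(map_w2[i] * ul + map_w3[i] * ur +
                   map_w4[i] * ll + map_w5[i] * lr)
    return out

# 2x2 source image; one output pixel exactly in the middle of the four pixels
src = [0.0, 4.0,
       8.0, 12.0]
out = apply_bilinear_map(src, 2, [0], [0.25], [0.25], [0.25], [0.25])
```

Because the map is computed once and then only looked up, mapping many images amortizes the cost of the projection, which is exactly the advantage over ImageToWorldPlane noted above.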
Parameter
Result
GenImageToWorldPlaneMap returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an
exception is raised.
Parallelization Information
GenImageToWorldPlaneMap is reentrant and processed without parallelization.
Possible Predecessors
CreatePose, HomMat3dToPose, CameraCalibration, HandEyeCalibration,
SetOriginPose
Possible Successors
MapImage
Alternatives
ImageToWorldPlane
See also
MapImage, ContourToWorldPlaneXld, ImagePointsToWorldPlane
Module
Calibration
Generate a projection map that describes the mapping of images corresponding to a changing radial distortion.
GenRadialDistortionMap computes the mapping of images corresponding to a changing radial distortion in
accordance to the interior camera parameters camParIn and camParOut which can be obtained, e.g., using the
operator CameraCalibration. camParIn and camParOut contain the old and the new camera parameters
including the old and the new radial distortion, respectively (also see WriteCamPar for the sequence of the
parameters and the underlying camera model). Each pixel of the potential output image is transformed into the
image plane using camParOut and subsequently projected into a subpixel position of the potential input image
using camParIn.
The mapping function is stored in the output image map. The size of map is given by the camera parameters
camParOut and therefore defines the size of the resulting mapped images using MapImage. The size of the
images to be mapped with MapImage is determined by the camera parameters camParIn. If no interpolation
is chosen (interpolation = ’none’), map consists of one image containing one channel, in which for each
pixel of the output image the linearized coordinate of the pixel of the input image is stored that is the nearest
neighbor to the transformed coordinates. If bilinear interpolation is chosen (interpolation = ’bilinear’),
map consists of one image containing five channels. In the first channel for each pixel in the resulting image
the linearized coordinate of the pixel in the input image is stored that is in the upper left position relative to
the transformed coordinates. The four other channels contain the weights of the four neighboring pixels of the
transformed coordinates which are used for the bilinear interpolation, in the following order:
2 (upper left)    3 (upper right)
4 (lower left)    5 (lower right)
The second channel, for example, contains the weights of the pixels that lie to the upper left relative to the trans-
formed coordinates.
If camParOut was computed via ChangeRadialDistortionCamPar, the mapping describes the effect of
a lens with a modified radial distortion. If the distortion coefficient κ is 0, the mapping corresponds to a rectification.
If several images have to be mapped using the same camera parameters, GenRadialDistortionMap in
combination with MapImage is much more efficient than the operator ChangeRadialDistortionImage
because the transformation must be computed only once.
Parameter
that are based on the first derivative of the image function (e.g., EdgesSubPix) yield edges that are shifted
towards the center of curvature, i.e., extracted ellipses will be slightly too small. Approaches that are based on the
second derivative of the image function (e.g., LaplaceOfGauss followed by ZeroCrossingSubPix) result in
edges that are shifted away from the center of curvature, i.e., extracted ellipses will be slightly too large.
These effects increase with the curvature of the edge and with the size of the filter mask that is used for the
edge extraction. Therefore, to achieve high accuracy, the ellipses should appear large in the image and the filter
parameter should be chosen such that small filter masks are used (see InfoEdges).
Parameter
. contour (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld(-array) ; HXLD
Contours to be examined.
. camParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; HTuple (double / int / long)
Interior camera parameters.
Number of elements : CamParam = 8
. radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; HTuple (double)
Radius of the circle in object space.
Number of elements : (Radius = Contour) ∨ (Radius = 1)
Restriction : Radius > 0.0
. outputType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Type of output parameters.
Default Value : "pose"
List of values : OutputType ∈ {"pose", "center_normal"}
. pose1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
3D pose of the first circle.
Number of elements : (Pose1 = (7 · Contour)) ∨ (Pose1 = (6 · Contour))
. pose2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
3D pose of the second circle.
Number of elements : (Pose2 = (7 · Contour)) ∨ (Pose2 = (6 · Contour))
Result
GetCirclePose returns 2 (H_MSG_TRUE) if all parameter values are correct and the position of the circle has
been determined successfully. If necessary, an exception is raised.
Parallelization Information
GetCirclePose is reentrant and processed without parallelization.
Possible Predecessors
EdgesSubPix
Alternatives
FindMarksAndPose, CameraCalibration
See also
GetRectanglePose, FitEllipseContourXld
Module
3D Metrology
on the focal plane, i.e., for frame cameras, the output parameter QZ is equivalent to the focal length of the camera,
whereas for linescan cameras, QZ also depends on the motion of the camera with respect to the object. The equation
of the line of sight is given by
(X, Y, Z)^T = (PX, PY, PZ)^T + λ · (QX − PX, QY − PY, QZ − PZ)^T
The advantage of representing the line of sight as two points is that it is easier to transform the line in 3D. To do
so, all that is necessary is to apply the operator AffineTransPoint3d to the two points.
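The parametric line and the transformation of its two defining points can be sketched like this (illustrative Python; the homogeneous matrix is a generic stand-in for the transformation used with AffineTransPoint3d):

```python
def point_on_line(p, q, lam):
    """Point on the line of sight: (X, Y, Z) = P + lam * (Q - P)."""
    return tuple(pi + lam * (qi - pi) for pi, qi in zip(p, q))

def transform_point_3d(hom_mat, pt):
    """Apply a 3x4 row-major homogeneous transformation to a 3D point."""
    x, y, z = pt
    return tuple(m[0] * x + m[1] * y + m[2] * z + m[3] for m in hom_mat)

# Line through P=(0,0,0) and Q=(0,0,1), i.e., the optical axis,
# transformed by a pure translation of (1, 2, 3).
trans = [(1, 0, 0, 1), (0, 1, 0, 2), (0, 0, 1, 3)]
p2 = transform_point_3d(trans, (0.0, 0.0, 0.0))
q2 = transform_point_3d(trans, (0.0, 0.0, 1.0))
mid = point_on_line(p2, q2, 0.5)
```

Transforming only the two points and re-parameterizing is exactly why the two-point representation is convenient.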
Parameter
Result
GetLineOfSight returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception
is raised.
Parallelization Information
GetLineOfSight is reentrant and processed without parallelization.
Possible Predecessors
ReadCamPar, CameraCalibration
Possible Successors
AffineTransPoint3d
See also
CameraCalibration, DispCaltab, ReadCamPar, Project3dPoint, AffineTransPoint3d
Module
Calibration
Output
The resulting pose is of code-0 (see CreatePose) and represents the pose of the center of the rectangle. You
can compute the pose of the corners of the rectangle as follows:
set_origin_pose (pose, width/2, -height/2, 0, PoseCorner1)
set_origin_pose (pose, width/2, height/2, 0, PoseCorner2)
set_origin_pose (pose, -width/2, height/2, 0, PoseCorner3)
set_origin_pose (pose, -width/2, -height/2, 0, PoseCorner4)
A rectangle is symmetric with respect to its x, y, and z axes, and one and the same contour can represent a rectangle
in 4 different poses. The angles in pose are normalized to lie in the range [−90, 90] degrees; the remaining 3 of the
4 possible poses can be computed by combining flips around the corresponding axes:
* NOTE: the following code works ONLY for poses of type code-0
* as returned by GetRectanglePose
*
* flip around z-axis
PoseFlippedZ := pose
PoseFlippedZ[5] := PoseFlippedZ[5]+180
* flip around y-axis
PoseFlippedY := pose
PoseFlippedY[4] := PoseFlippedY[4]+180
PoseFlippedY[5] := -PoseFlippedY[5]
* flip around x-axis
PoseFlippedX := pose
PoseFlippedX[3] := PoseFlippedX[3]+180
PoseFlippedX[4] := -PoseFlippedX[4]
PoseFlippedX[5] := -PoseFlippedX[5]
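The flips above only modify the rotation angles of a code-0 pose [x, y, z, rx, ry, rz, code]; an equivalent sketch in plain Python (illustrative, not HALCON code):

```python
def flip_z(pose):
    """Flip a code-0 pose [x, y, z, rx, ry, rz, code] around its z-axis."""
    p = list(pose)
    p[5] += 180
    return p

def flip_y(pose):
    """Flip a code-0 pose around its y-axis."""
    p = list(pose)
    p[4] += 180
    p[5] = -p[5]
    return p

def flip_x(pose):
    """Flip a code-0 pose around its x-axis."""
    p = list(pose)
    p[3] += 180
    p[4] = -p[4]
    p[5] = -p[5]
    return p

pose = [0.1, 0.2, 1.0, 10, 20, 30, 0]   # x, y, z, rx, ry, rz, code
flipped = flip_z(pose)
```

Applying the three flips to one estimated pose yields the full set of 4 equivalent rectangle poses.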
Note that if the rectangle is a square (width == height) the number of alternative poses is 8.
If more than one contour is given in contour, a corresponding tuple of values for both width and height
has to be provided as well. Yet, if only one value is provided for each of these arguments, this value is applied
for each processed contour. A pose is estimated for each processed contour and all poses are concatenated in pose
(see the example below).
The accuracy of the estimated pose depends on the following properties of the input contour:
• the ratio width/height
• the length of the projected contour
• the degree of perspective distortion of the contour
In order to achieve an accurate pose estimation, there are three corresponding criteria that should be considered:
The ratio width/height should fulfill 1/3 < width/height < 3.
For a rectangular object deviating from this criterion, its longer side dominates the determination of its pose. This
causes instability in the estimation of the angle around the longer rectangle’s axis. In the extreme case when one
of the dimensions is 0, the rectangle is in fact a line segment, whose pose cannot be estimated.
Secondly, the length of each side of the contour should be at least 20 pixels. An error is returned if a side of the
contour is less than 5 pixels long.
Thirdly, the more the contour appears projectively distorted, the more stably the algorithm works. Therefore, the
pose of a rectangle tilted w.r.t. the image plane can be estimated accurately, whereas the pose of a rectangle
parallel to the image plane of the camera could be unstable. This is further discussed in the next paragraph.
Additionally, there is a rule of thumb that ensures sufficient projective distortion: the rectangle should be placed
in space such that its extent in the x and y dimensions of the camera coordinate system is not less than 1/10th of its
distance from the camera in the z direction.
GetRectanglePose provides two measures for the accuracy of the estimated pose. error is the average
pixel error between the contour points and the modeled rectangle reprojected on the image. If error exceeds
0.5, this is an indication that the algorithm did not converge properly, and the resulting pose should not be used.
covPose contains 36 entries representing the 6 × 6 covariance matrix of the first 6 entries of pose. The above-
mentioned instability of the angle about the longer rectangle’s axis can be detected by checking that the absolute
values of the variances and covariances of the rotations around the x and y axes (covPose[21], covPose[28],
and covPose[22] == covPose[27]) do not exceed 0.05. Further, unusually increased values of any of the
covariances, and especially of the variances (the 6 values on the diagonal of covPose with indices 0, 7, 14, 21, 28,
and 35, respectively), indicate a poor quality of the pose.
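The suggested stability check on covPose can be written as follows (illustrative Python; the index constants follow the description above, and the 0.05 threshold is the rule of thumb from the text):

```python
def pose_angles_unstable(cov_pose, limit=0.05):
    """Detect instability of the rotations around the x and y axes.

    cov_pose : the 36 entries of the row-major 6x6 covariance matrix of
               the pose; checks variances cov[21], cov[28] and the
               covariance cov[22] (== cov[27]).
    """
    assert len(cov_pose) == 36, "covPose must contain 36 entries"
    indices = (21, 28, 22, 27)
    return any(abs(cov_pose[i]) > limit for i in indices)

cov = [0.0] * 36
cov[21] = 0.2          # large variance of the rotation around the x-axis
unstable = pose_angles_unstable(cov)
```

A pose flagged by this check should be treated with the same caution as one whose error value exceeds 0.5.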
Parameter
. contour (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld(-array) ; HXLD
Contour(s) to be examined.
. camParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; HTuple (double / int / long)
Interior camera parameters.
Number of elements : CamParam = 8
. width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; HTuple (double)
Width of the rectangle in meters.
Restriction : Width > 0
. height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; HTuple (double)
Height of the rectangle in meters.
Restriction : Height > 0
. weightingMode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Weighting mode for the optimization phase.
Default Value : "nonweighted"
List of values : WeightingMode ∈ {"nonweighted", "huber", "tukey"}
. clippingFactor (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double)
Clipping factor for the elimination of outliers (typical: 1.0 for ’huber’ and 3.0 for ’tukey’).
Default Value : 2.0
Suggested values : ClippingFactor ∈ {1.0, 1.5, 2.0, 2.5, 3.0}
Restriction : ClippingFactor > 0
. pose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; HPose / HTuple (double / int / long)
3D pose of the rectangle.
Number of elements : Pose = (7 · Contour)
. covPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; HTuple (double)
Covariances of the pose values.
Number of elements : CovPose = (36 · Contour)
. error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; HTuple (double)
Root-mean-square value of the final residual error.
Number of elements : Error = Contour
Example (Syntax: HDevelop)
* ...
endfor
Result
GetRectanglePose returns 2 (H_MSG_TRUE) if all parameter values are correct and the position of the
rectangle has been determined successfully. If the provided contour(s) cannot be segmented as a quadrangle,
GetRectanglePose returns H_ERR_FIT_QUADRANGLE. If necessary, an exception is raised.
Parallelization Information
GetRectanglePose is reentrant, local, and processed without parallelization.
Possible Predecessors
EdgesSubPix
See also
GetCirclePose, SetOriginPose, CameraCalibration
References
G.Schweighofer and A.Pinz: “Robust Pose Estimation from a Planar Target”; Transactions on Pattern Analysis
and Machine Intelligence (PAMI), 28(12):2024-2030, 2006
Module
3D Metrology
In contrast to the camera calibration, the calibration object is not moved manually. This task is delegated to
the robot which either moves the camera (mounted camera) or the calibration object (stationary camera). The
robot’s movements are assumed to be known and therefore are also used as an input for the calibration (parameter
MRelPoses).
The two hand-eye configurations are discussed in more detail below, followed by general information about the
process of hand-eye calibration.
Moving camera: cam Hcal = cam Htool · tool Hbase · base Hcal
(cam Htool corresponds to camStartPose and camFinalPose, tool Hbase to MRelPoses, and base Hcal to
baseStartPose and baseFinalPose)
From the set of calibration images, the operator HandEyeCalibration determines the two transformations
at the ends of the chain, i.e., the pose of the robot tool in camera coordinates (cam Htool ) and the pose of the
calibration object in the robot base coordinate system (base Hcal ). In the input parameters camStartPose and
baseStartPose, you must specify suitable starting values for these transformations which are constant over
all calibration images. HandEyeCalibration then returns the calibrated values in camFinalPose and
baseFinalPose.
In contrast, the transformation in the middle of the chain, tool Hbase , is known but changes for each calibration
image, because it describes the pose of the robot moving the camera, or to be more exact its inverse pose (pose of
the base coordinate system in robot tool coordinates). You must specify the (inverse) robot poses in the calibration
images in the parameter MRelPoses.
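The chain of transformations can be checked numerically with 4x4 homogeneous matrices (a plain Python sketch under the assumption of pure translations; matmul4 and translation are helper functions, not HALCON operators):

```python
def matmul4(a, b):
    """Multiply two 4x4 homogeneous matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    """4x4 homogeneous matrix for a pure translation."""
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

# Moving camera: cam_H_cal = cam_H_tool * tool_H_base * base_H_cal
cam_H_tool  = translation(0.0, 0.0, 0.5)   # tool 0.5 m in front of the camera
tool_H_base = translation(0.0, -1.0, 0.0)  # base relative to the tool
base_H_cal  = translation(0.2, 0.0, 0.0)   # calibration object near the base
cam_H_cal = matmul4(matmul4(cam_H_tool, tool_H_base), base_H_cal)
```

For pure translations the chained pose is simply the sum of the individual offsets, which makes the composition easy to verify by hand.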
Internally, HandEyeCalibration uses a Newton-type algorithm to minimize an error function based on nor-
mal equations. Analogously to the calibration of the camera itself (see CameraCalibration), the hand-eye
calibration becomes more robust if you use many calibration images that were acquired with different robot poses.
Stationary camera
In this configuration, the robot grasps the calibration object and moves it in front of the camera. Again, the
information extracted from a calibration image, i.e., the pose of the calibration object in camera coordinates (i.e.,
the exterior camera parameters), is equal to a chain of poses or homogeneous transformation matrices, this time
from the calibration object via the robot’s tool to its base and finally to the camera:
Stationary camera: cam Hcal = cam Hbase · base Htool · tool Hcal
(cam Hbase corresponds to camStartPose and camFinalPose, base Htool to MRelPoses, and tool Hcal to
baseStartPose and baseFinalPose)
Analogously to the configuration with a moving camera, the operator HandEyeCalibration determines the
two transformations at the ends of the chain, here the pose of the robot base coordinate system in camera coordi-
nates (cam Hbase ) and the pose of the calibration object relative to the robot tool (tool Hcal ). In the input parame-
ters camStartPose and baseStartPose, you must specify suitable starting values for these transformations.
HandEyeCalibration then returns the calibrated values in camFinalPose and baseFinalPose. Please
note that the names of the parameters baseStartPose and baseFinalPose are misleading for this configu-
ration!
The transformation in the middle of the chain, base Htool , describes the pose of the robot moving the calibration
object, i.e., the pose of the tool relative to the base coordinate system. You must specify the robot poses in the
calibration images in the parameter MRelPoses.
How do I get 3D model points and their projections? 3D model points given in the world coordinate system
(NX, NY, NZ) and their associated projections in the image (NRow, NCol) form the basis of the hand-eye
calibration. In order to be able to perform a successful hand-eye calibration, you need images of the 3D
model points that were obtained for sufficiently many different poses of the manipulator.
In principle, you can use arbitrary known points for the calibration. However, it is usually most convenient
to use the standard calibration plate, e.g., the one that can be generated with GenCaltab. In this case, you
can use the operators FindCaltab and FindMarksAndPose to extract the position of the calibration
plate and of the calibration marks and the operator CaltabPoints to access the 3D coordinates of the
calibration marks (see also the description of CameraCalibration).
The parameter MPointsOfImage specifies the number of 3D model points used for each pose of the
manipulator, i.e., for each image. With this, the 3D model points which are stored in a linearized fashion
in NX, NY, NZ, and their corresponding projections (NRow, NCol) can be associated with the corresponding
pose of the manipulator (MRelPoses). Note that in contrast to the operator CameraCalibration the
3D coordinates of the model points must be specified for each calibration image, not only once.
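The association of the linearized 3D points with their manipulator pose can be sketched as follows (illustrative Python; the names mirror the parameters above, assuming the same number of model points in every image):

```python
def points_of_image(nx, ny, nz, m_points_of_image, image_index):
    """Extract the 3D model points belonging to one calibration image.

    nx, ny, nz        : linearized coordinates over all images (NX, NY, NZ)
    m_points_of_image : number of model points per image (MPointsOfImage)
    image_index       : 0-based index of the calibration image
    """
    start = image_index * m_points_of_image
    end = start + m_points_of_image
    return list(zip(nx[start:end], ny[start:end], nz[start:end]))

# Two calibration images with two model points each
nx = [0.0, 0.1, 0.0, 0.1]
ny = [0.0, 0.0, 0.1, 0.1]
nz = [0.0, 0.0, 0.0, 0.0]
second = points_of_image(nx, ny, nz, 2, 1)
```

The same indexing associates the projections (NRow, NCol) and the poses in MRelPoses with their image.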
How do I acquire a suitable set of images? If a standard calibration plate is used, the following procedure
should be used:
• At least 10 to 20 images from different positions should be taken in which the position of the camera
with respect to the calibration plate is sufficiently different. The position of the calibration plate (moving
camera: relative to the robot’s tool; stationary camera: relative to the robot’s base) must not be changed
between images.
• In each image, the calibration plate must be completely visible (including its border).
• No reflections or other disturbances should be visible on the calibration plate.
• The set of images must show the calibration plate from very different positions of the manipulator.
The calibration plate can and should be visible in different parts of the images. Furthermore, it should
be slightly to moderately rotated around its x- or y-axis, in order to clearly exhibit distortions of the
calibration marks. In other words, the corresponding exterior camera parameters (pose of the calibration
plate in camera coordinates) should take on many different values.
• In each image, the calibration plate should fill at least one quarter of the entire image, in order to ensure
the robust detection of the calibration marks.
• The interior camera parameters of the camera to be used must have been determined earlier and must be
passed in camParam (see CameraCalibration). Note that changes of the image size, the focal
length, the aperture, or the focus lead to a change of the interior camera parameters.
• The camera must not be modified between the acquisition of the individual images, i.e., focal length,
aperture, and focus must not be changed, because all calibration images use the same interior camera
parameters. Please make sure that the depth of focus is sufficient for the expected changes of the distance
between the camera and the calibration plate. Bright lighting conditions for the calibration plate are therefore
important, because they allow smaller apertures, which result in a larger depth of focus.
How do I obtain suitable starting values? Depending on the used hand-eye configuration, you need starting val-
ues for the following poses:
Moving camera
baseStartPose = pose of the calibration object in robot base coordinates
camStartPose = pose of the robot tool in camera coordinates
Stationary camera
baseStartPose = pose of the calibration object in robot tool coordinates
camStartPose = pose of the robot base in camera coordinates
The camera’s coordinate system is oriented such that its optical axis corresponds to the z-axis, the x-axis
points to the right, and the y-axis downwards. The coordinate system of the standard calibration plate is
located in the middle of the surface of the calibration plate, its z-axis points into the calibration plate, its
x-axis to the right, and its y-axis downwards.
For more information about creating a 3D pose please refer to the description of CreatePose which also
contains a short example.
In fact, you need a starting value only for one of the two poses (baseStartPose or camStartPose).
The other can be computed from one of the calibration images. This means that you can pick the pose that is
easier to determine and let HALCON compute the other one for you.
The main idea is to exploit the fact that the two poses for which we need starting values are connected via the
already described chain of transformations, here shown for a configuration with a moving camera:
Moving camera: cam Hcal = cam Htool · tool Hbase · base Hcal
(cam Htool: camStartPose, tool Hbase: MRelPoses, base Hcal: baseStartPose)
In this configuration, it is typically easy to determine a starting value for cam Htool (camStartPose). Thus,
we solve the equation for base Hcal (baseStartPose):
base Hcal = (cam Htool · tool Hbase)^(-1) · cam Hcal
Thus, to compute baseStartPose you need one of the robot poses (e.g., the one in the first image), your
estimate for camStartPose, and the pose of the calibration object in camera coordinates in the selected
image. If you use the standard calibration plate, you typically already obtained its pose when applying the
operator FindMarksAndPose to determine the projections of the marks. An example program can be
found below.
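The computation described above can be sketched in HDevelop as follows; the variable names (CamStartPose, RobPose0, PlatePose0) are illustrative, not prescribed by the operators:

```
* Sketch: compute baseStartPose for a moving camera from an estimated
* camStartPose (cam_H_tool), the robot pose of the first image
* (tool_H_base, as accumulated in MRelPoses), and the pose of the
* calibration plate returned by find_marks_and_pose (cam_H_cal).
pose_to_hom_mat3d (CamStartPose, cam_H_tool)
pose_to_hom_mat3d (RobPose0, tool_H_base)
pose_to_hom_mat3d (PlatePose0, cam_H_cal)
* base_H_cal = (cam_H_tool * tool_H_base)^(-1) * cam_H_cal
hom_mat3d_compose (cam_H_tool, tool_H_base, cam_H_base)
hom_mat3d_invert (cam_H_base, base_H_cam)
hom_mat3d_compose (base_H_cam, cam_H_cal, base_H_cal)
hom_mat3d_to_pose (base_H_cal, BaseStartPose)
```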
For a configuration with a stationary camera, the chain of transformations is:
Stationary camera:   cam_H_cal = cam_H_base · base_H_tool · tool_H_cal
                                 camStartPose   MRelPoses    baseStartPose
In this configuration, it is typically easier to determine a starting value for tool_H_cal (baseStartPose).
Thus, we solve the equation for cam_H_base (camStartPose):
    cam_H_base = cam_H_cal · (base_H_tool · tool_H_cal)^(-1) = cam_H_cal · cal_H_tool · tool_H_base
Thus, to compute camStartPose you need one of the robot poses (e.g., the one in the first image), your
estimate for baseStartPose, and the pose of the calibration object in camera coordinates in the selected
image. If you use the standard calibration plate, you typically already obtained its pose when applying the
operator FindMarksAndPose to determine the projections of the marks. An example program can be
found below.
How do I obtain the poses of the robot? In the parameter MRelPoses you must pass the poses of the robot in
the calibration images (moving camera: pose of the robot base in robot tool coordinates; stationary camera:
pose of the robot tool in robot base coordinates) in a linearized fashion. We recommend creating the robot
poses in a separate program and saving them in files using WritePose. In the calibration program you can then
read and accumulate them in a tuple as shown in the example program below. In addition, we recommend
saving the pose of the robot tool in robot base coordinates independent of the hand-eye configuration. When
using a moving camera, you must then invert the poses after reading them, before accumulating them. This is
also shown in the example program.
Via the cartesian interface of the robot, you can typically obtain the pose of the tool in base coordinates in
a notation that corresponds to the pose representations with the codes 0 or 2 (orderOfRotation = ’gba’
or ’abg’, see CreatePose). In this case, you can directly use the pose values obtained from the robot as
input for CreatePose.
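For example, assuming hypothetical values read from the robot controller (translation in meters, rotation angles in degrees), such a pose could be created as follows:

```
* Pose from cartesian robot values with rotation order 'abg' (code 2);
* the numeric values are placeholders.
create_pose (0.1, -0.05, 0.3, 10.0, 20.0, 30.0, 'Rp+T', 'abg', 'point', RobPose)
```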
If the cartesian interface of your robot describes the orientation in a different way, e.g., with the representation
ZYZ (Rz (ϕ1) · Ry (ϕ2) · Rz (ϕ3)), you can create the corresponding homogeneous transformation matrix
step by step using the operators HomMat3dRotate and HomMat3dTranslate and then convert the
matrix into a pose using HomMat3dToPose. The following example code creates a pose from the ZYZ
representation described above:
hom_mat3d_identity (HomMat3DIdent)
hom_mat3d_rotate (HomMat3DIdent, Phi3, 'z', 0, 0, 0, HomMat3DRotZ)
hom_mat3d_rotate (HomMat3DRotZ, Phi2, 'y', 0, 0, 0, HomMat3DRotYZ)
hom_mat3d_rotate (HomMat3DRotYZ, Phi1, 'z', 0, 0, 0, HomMat3DRotZYZ)
hom_mat3d_translate (HomMat3DRotZYZ, Tx, Ty, Tz, base_H_tool)
hom_mat3d_to_pose (base_H_tool, RobPose)
Please note that the hand-eye calibration only works if the robot poses MRelPoses are specified with high
accuracy!
How can I exclude individual pose parameters from the estimation? HandEyeCalibration estimates a
maximum of 12 pose parameters, i.e., 6 parameters each for the two computed poses camFinalPose and
baseFinalPose. However, it is possible to exclude some of these pose parameters from the estimation.
This means that the starting values of the poses remain unchanged and are assumed constant for the estima-
tion of all other pose parameters. The parameter toEstimate is used to determine which pose parameters
should be estimated. In toEstimate, a list of keywords for the parameters to be estimated is passed. The
possible values are:
baseFinalPose:
’baseTx’ = translation along the x-axis
’baseTy’ = translation along the y-axis
’baseTz’ = translation along the z-axis
’baseRa’ = rotation around the x-axis
’baseRb’ = rotation around the y-axis
’baseRg’ = rotation around the z-axis
’base_pose’ = all 6 baseFinalPose parameters
camFinalPose:
’camTx’ = translation along the x-axis
’camTy’ = translation along the y-axis
’camTz’ = translation along the z-axis
’camRa’ = rotation around the x-axis
’camRb’ = rotation around the y-axis
’camRg’ = rotation around the z-axis
’cam_pose’ = all 6 camFinalPose parameters
In order to estimate all 12 pose parameters, you can pass the keyword ’all’ (or of course a tuple containing
all 12 keywords listed above).
It is useful to exclude individual parameters from the estimation if those pose parameters have already been
measured exactly. To do so, define a string tuple of the parameters that should be estimated, or prefix the
strings of excluded parameters with a '~' sign. For example, toEstimate = ['all','~camTx'] estimates all pose
parameters except the x translation of the camera, whereas toEstimate = ['base_pose','~baseRb'] estimates
the pose of the base apart from the rotation around the y-axis. The latter is equivalent to toEstimate =
['baseTx','baseTy','baseTz','baseRa','baseRg'].
Which terminating criteria can be used for the error minimization? The error minimization terminates either
after a fixed number of iterations or if the error falls below a given minimum error. The parameter
stopCriterion is used to choose between these two alternatives. If ’CountIterations’ is passed, the
algorithm terminates after maxIterations iterations.
If stopCriterion is passed as ’MinError’, the algorithm runs until the error falls below the error threshold
given in minError. If, however, the number of iterations reaches the number given in maxIterations,
the algorithm terminates with an error message.
What is the order of the individual parameters? The length of the tuple MPointsOfImage corresponds to
the number of different positions of the manipulator and thus to the number of calibration images. The
parameter MPointsOfImage determines the number of model points used in the individual positions. If
the standard calibration plate is used, this means 49 points per position (image). If for example 15 images
were acquired, MPointsOfImage is a tuple of length 15, where all elements of the tuple have the value 49.
The number of calibration images, which is determined by the length of MPointsOfImage, must also be
taken into account for the tuples of the 3D model points and the extracted 2D marks, respectively. Hence,
for 15 calibration images with 49 model points each, the tuples NX, NY, NZ, NRow, and NCol must contain
15 · 49 = 735 values each. These tuples are ordered according to the image the respective points lie in, i.e.,
the first 49 values correspond to the 49 model points in the first image. The order of the 3D model points and
the extracted 2D model points must be the same in each image.
The length of the tuple MRelPoses also depends on the number of calibration images. If, for example, 15
images and therefore 15 poses are used, the length of the tuple MRelPoses is 15 · 7 = 105 (15 times 7 pose
parameters). The first seven parameters thus determine the pose of the manipulator in the first image, and so
on.
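Putting this together, a call for 15 images with 49 marks each could look as follows; the tuple names follow the example program below, and the stopping parameters are illustrative:

```
* |XCoord| = |YCoord| = |ZCoord| = |RCoord| = |CCoord| = 15 * 49 = 735,
* |NumMarker| = 15, |MRelPoses| = 15 * 7 = 105
hand_eye_calibration (XCoord, YCoord, ZCoord, RCoord, CCoord, NumMarker, MRelPoses, BaseStartPose, CamStartPose, CamParam, 'all', 'CountIterations', 100, 0.0005, BaseFinalPose, CamFinalPose, NumErrors)
```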
What do the output parameters mean? If stopCriterion was set to ’CountIterations’, the output parame-
ters baseFinalPose and camFinalPose are returned even if the algorithm did not converge. If, how-
ever, stopCriterion was set to ’MinError’, the error must fall below the threshold given in minError
in order for the output parameters to be returned.
The representation type of baseFinalPose and camFinalPose is the same as in the corresponding
starting values. It can be changed with the operator ConvertPoseType. The description of the dif-
ferent representation types and of their conversion can be found with the documentation of the operator
CreatePose.
The parameter numErrors contains a list of (numerical) errors from the individual iterations of the algo-
rithm. Based on the evolution of the errors, it can be decided whether the algorithm has converged for the
given starting values. The error values are returned as 3D deviations in meters. Thus, the last entry of the
error list corresponds to an estimate of the accuracy of the returned pose parameters.
Attention
The quality of the calibration depends on the accuracy of the input parameters (position of the calibration marks,
robot poses MRelPoses, and the starting positions baseStartPose, camStartPose). Based on the returned
error measures numErrors, it can be decided whether the algorithm has converged. Furthermore, the accuracy
of the returned pose can be estimated. The error measures are 3D differences in meters.
Parameter
. NX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Linear list containing all the x coordinates of the calibration points (in the order of the images).
. NY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Linear list containing all the y coordinates of the calibration points (in the order of the images).
. NZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Linear list containing all the z coordinates of the calibration points (in the order of the images).
. NRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Linear list containing all row coordinates of the calibration points (in the order of the images).
. NCol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Linear list containing all the column coordinates of the calibration points (in the order of the images).
. MPointsOfImage (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; HTuple (int / long)
Number of the calibration points for each image.
. MRelPoses (input_control) . . . . . . . . . . . . . . . . . . . . . pose-array ; HPose [ ] / HTuple (double / int / long)
Measured 3D pose of the robot for each image (moving camera: robot base in robot tool coordinates;
stationary camera: robot tool in robot base coordinates).
. baseStartPose (input_control) . . . . . . . . . . . . . . . . . . . pose-array ; HPose / HTuple (double / int / long)
Starting value for the 3D pose of the calibration object in robot base coordinates (moving camera) or in robot
tool coordinates (stationary camera), respectively.
. camStartPose (input_control) . . . . . . . . . . . . . . . . . . . . pose-array ; HPose / HTuple (double / int / long)
Starting value for the 3D pose of the robot tool (moving camera) or robot base (stationary camera),
respectively, in camera coordinates.
. camParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; HTuple (double / int / long)
Interior camera parameters.
. toEstimate (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; HTuple (string)
Parameters to be estimated (max. 12 degrees of freedom).
Default Value : "all"
List of values : ToEstimate ∈ {"all", "base_pose", "cam_pose", "baseTx", "baseTy", "baseTz", "baseRa",
"baseRb", "baseRg", "camTx", "camTy", "camTz", "camRa", "camRb", "camRg"}
read_cam_par('campar.dat', CamParam)
CalDescr := 'caltab.descr'
caltab_points(CalDescr, X, Y, Z)
* initialize the accumulation tuples
RCoord := []
CCoord := []
XCoord := []
YCoord := []
ZCoord := []
NumMarker := []
MRelPoses := []
* process all calibration images
for i := 0 to NumImages-1 by 1
read_image(Image, ’calib_’+i$’02d’)
* find marks on the calibration plate in every image
find_caltab(Image, CalPlate, CalDescr, 3, 150, 5)
find_marks_and_pose(Image, CalPlate, CalDescr, CamParam, 128, 10,
RCoordTmp, CCoordTmp, StartPose)
* accumulate 2D and 3D coordinates of the marks
RCoord := [RCoord, RCoordTmp]
CCoord := [CCoord, CCoordTmp]
XCoord := [XCoord, X]
YCoord := [YCoord, Y]
ZCoord := [ZCoord, Z]
NumMarker := [NumMarker, |RCoordTmp|]
* read pose of the robot tool in robot base coordinates
read_pose(’robpose_’+i$’02d’+’.dat’, RobPose)
* moving camera? invert pose
if (IsMovingCameraConfig=’true’)
pose_to_hom_mat3d(RobPose, base_H_tool)
hom_mat3d_invert(base_H_tool, tool_H_base)
hom_mat3d_to_pose(tool_H_base, RobPose)
endif
* accumulate robot poses
MRelPoses := [MRelPoses, RobPose]
* store the pose of the calibration plate in the first image and the
* corresponding pose of the robot for later use
if (i=0)
cam_P_cal := StartPose
RelPose0 := RobPose
endif
endfor
* obtain starting values: read one, compute the other
if (IsMovingCameraConfig=’true’)
Result
HandEyeCalibration returns 2 (H_MSG_TRUE) if all parameter values are correct and the method con-
verges with an error less than the specified minimum error (if stopCriterion = ’MinError’). If necessary, an
exception handling is raised.
Parallelization Information
HandEyeCalibration is reentrant and processed without parallelization.
Possible Predecessors
FindMarksAndPose
Possible Successors
WritePose, ConvertPoseType, PoseToHomMat3d, DispCaltab, SimCaltab
See also
FindCaltab, FindMarksAndPose, DispCaltab, SimCaltab, WriteCamPar, ReadCamPar,
CreatePose, ConvertPoseType, WritePose, ReadPose, PoseToHomMat3d,
HomMat3dToPose, CaltabPoints, GenCaltab
Module
Calibration
Transform image points into the plane z=0 of a world coordinate system.
The operator ImagePointsToWorldPlane transforms image points which are given in rows and cols into
the plane z=0 in a world coordinate system and returns their 3D coordinates in x and y. The world coordinate
system is chosen by passing its 3D pose relative to the camera coordinate system in worldPose. In camParam
you must pass the interior camera parameters (see WriteCamPar for the sequence of the parameters and the
underlying camera model).
In many cases camParam and worldPose are the result of calibrating the camera with the operator
CameraCalibration. See below for an example.
With the parameter scale you can scale the resulting 3D coordinates. The parameter scale must be specified
as the ratio desired unit/original unit. The original unit is determined by the coordinates of the calibration object.
If the original unit is meters (which is the case if you use the standard calibration plate), you can set the desired
unit directly by selecting ’m’, ’cm’, ’mm’ or ’µm’ for the parameter scale.
Internally, the operator first computes the line of sight between the projection center and the image points
in the camera coordinate system, taking into account the radial distortions. The line of sight is then transformed
into the world coordinate system specified in worldPose. By intersecting the plane z=0 with the line of sight the
3D coordinates x and y are obtained.
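A minimal usage sketch; the variable names are assumed to come from a preceding calibration:

```
* Transform two image points into the plane z=0 of the world coordinate
* system and obtain their coordinates in millimeters.
image_points_to_world_plane (CamParam, WorldPose, [100.0,200.0], [150.0,250.0], 'mm', X, Y)
```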
Parameter
. camParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; HTuple (double / int / long)
Interior camera parameters.
Number of elements : (CamParam = 8) ∨ (CamParam = 11)
. worldPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; HPose / HTuple (double / int / long)
3D pose of the world coordinate system in camera coordinates.
Number of elements : 7
. rows (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.y-array ; HTuple (double / int / long)
Row coordinates of the points to be transformed.
Default Value : 100.0
. cols (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.x-array ; HTuple (double / int / long)
Column coordinates of the points to be transformed.
Default Value : 100.0
. scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (string / int / long / double)
Scale or dimension
Default Value : "m"
Suggested values : Scale ∈ {"m", "cm", "mm", "microns", "µm", 1.0, 0.01, 0.001, "1.0e-6", 0.0254, 0.3048,
0.9144}
Result
ImagePointsToWorldPlane returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an
exception handling is raised.
Parallelization Information
ImagePointsToWorldPlane is reentrant and processed without parallelization.
Possible Predecessors
CreatePose, HomMat3dToPose, CameraCalibration, HandEyeCalibration,
SetOriginPose
See also
ContourToWorldPlaneXld
Module
Calibration
Rectify an image by transforming it into the plane z=0 of a world coordinate system.
ImageToWorldPlane rectifies an image image by transforming it into the plane z=0 (plane of measurements)
in a world coordinate system. The resulting rectified image imageWorld shows neither radial nor perspective
distortions; it corresponds to an image acquired by a distortion-free camera that looks perpendicularly onto the
plane of measurements. The world coordinate system is chosen by passing its 3D pose relative to the camera coor-
dinate system in worldPose. In camParam you must pass the interior camera parameters (see WriteCamPar
for the sequence of the parameters and the underlying camera model).
In many cases camParam and worldPose are the result of calibrating the camera with the operator
CameraCalibration. See below for an example.
The pixel position of the upper left corner of the output image imageWorld is determined by the origin of the
world coordinate system. The size of the output image imageWorld can be chosen by the parameters width,
height, and scale. width and height must be given in pixels.
With the parameter scale you can specify the size of a pixel in the transformed image. There are two typical
scenarios: First, you can scale the image such that pixel coordinates in the transformed image directly correspond
to metric units, e.g., that one pixel corresponds to one micron. This is useful if you want to perform measurements
in the transformed image which will then directly result in metric results. The second scenario is to scale the image
such that its content appears in a size similar to the original image. This is useful, e.g., if you want to perform
shape-based matching in the transformed image.
scale must be specified as the ratio desired pixel size/original unit. A pixel size of 1µm means that a pixel in
the transformed image corresponds to the area 1µm × 1µm in the plane of measurements. The original unit is
determined by the coordinates of the calibration object. If the original unit is meters (which is the case if you use
the standard calibration plate), you can use the parameter values ’m’, ’cm’, ’mm’, ’microns’, or ’µm’ to directly set
the unit of pixel coordinates in the transformed image.
The parameter interpolation specifies whether bilinear interpolation (’bilinear’) should be applied between
the pixels in the input image or whether the gray value of the nearest neighboring pixel (’none’) should be used.
If several images have to be rectified using the same parameters, GenImageToWorldPlaneMap in combination
with MapImage is much more efficient than the operator ImageToWorldPlane because the mapping function
needs to be computed only once.
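The following sketch illustrates this for a sequence of images; the image sizes, scale, and file names are placeholder values:

```
* Compute the mapping once ...
gen_image_to_world_plane_map (Map, CamParam, WorldPose, 768, 576, 640, 480, 0.0005, 'bilinear')
* ... and apply it to every image of the sequence.
for i := 0 to NumImages-1 by 1
    read_image (Image, 'image_'+i$'02d')
    map_image (Image, Map, ImageWorld)
endfor
```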
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Input image.
. imageWorld (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; HImage
Transformed image.
. camParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; HTuple (double / int / long)
Interior camera parameters.
Number of elements : (CamParam = 8) ∨ (CamParam = 11)
. worldPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; HPose / HTuple (double / int / long)
3D pose of the world coordinate system in camera coordinates.
Number of elements : 7
. width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; HTuple (int / long)
Width of the resulting image in pixels.
. height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; HTuple (int / long)
Height of the resulting image in pixels.
. scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (string / int / long / double)
Scale or unit
Default Value : "m"
Suggested values : Scale ∈ {"m", "cm", "mm", "microns", "µm", 1.0, 0.01, 0.001, "1.0e-6", 0.0254, 0.3048,
0.9144}
. interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Type of interpolation.
Default Value : "bilinear"
List of values : Interpolation ∈ {"none", "bilinear"}
Example (Syntax: HDevelop)
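A typical call looks as follows; the output size and scale are placeholder values (640 × 480 pixels, 0.0005 m = 0.5 mm per pixel):

```
* Rectify Image into the plane z=0 given by WorldPose; each pixel of
* ImageWorld then corresponds to 0.5 mm x 0.5 mm in the measurement plane.
image_to_world_plane (Image, ImageWorld, CamParam, WorldPose, 640, 480, 0.0005, 'bilinear')
```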
Result
ImageToWorldPlane returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception
handling is raised.
Parallelization Information
ImageToWorldPlane is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
CreatePose, HomMat3dToPose, CameraCalibration, HandEyeCalibration,
SetOriginPose
Alternatives
GenImageToWorldPlaneMap, MapImage
See also
ContourToWorldPlaneXld, ImagePointsToWorldPlane
Module
Calibration
Parameter
Result
Project3dPoint returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception
handling is raised.
Parallelization Information
Project3dPoint is reentrant and processed without parallelization.
Possible Predecessors
ReadCamPar, AffineTransPoint3d
Possible Successors
GenRegionPoints, GenRegionPolygon, DispPolygon
See also
CameraCalibration, DispCaltab, ReadCamPar, GetLineOfSight, AffineTransPoint3d
Module
Calibration
For functionType = ’discrete’, the response function is described by a discrete function with the relevant
number of gray values (256 for byte images). For functionType = ’polynomial’, the response is described
by a polynomial of degree polynomialDegree. The computation of the response function is slower for
functionType = ’discrete’. However, since a polynomial tends to oscillate in the areas in which no gray
value information can be derived, even if smoothness constraints are imposed as described below, the discrete
model should usually be preferred over the polynomial model.
The parameter smoothness defines (in addition to the constraints on the response function that can be de-
rived from the images) constraints on the smoothness of the response function. If, as described above, the gray
value range can be covered completely and without gaps, the default value of 1 should not be changed. Other-
wise, values > 1 can be used to obtain a stronger smoothing of the response function, while values < 1 lead
to a weaker smoothing. The smoothing is particularly important in areas for which no gray value information
can be derived from the images, i.e., in gaps in the histograms and for gray values smaller than the minimum
gray value of all images or larger than the maximum gray value of all images. In these areas, the smoothness
constraints lead to an interpolation or extrapolation of the response function. Because of the nature of the
internally derived constraints, functionType = ’discrete’ favors an exponential function in the undefined
areas, whereas functionType = ’polynomial’ favors a straight line. Please note that interpolation and
extrapolation are always less reliable than covering the gray value range completely and without gaps. Therefore, in any
case it should be attempted first to acquire the images optimally, before the smoothness constraints are used to
fill in the remaining gaps. In all cases, the response function should be checked for plausibility after the call
to RadiometricSelfCalibration. In particular, it should be checked whether inverseResponse is
monotonic. If this is not the case, a more suitable scene should be used to avoid interpolation, or smoothness
should be set to a larger value. For functionType = ’polynomial’, it may also be necessary to change
polynomialDegree. If, despite these changes, an implausible response is returned, the saturation behavior
of the camera should be checked, e.g., based on the 2D gray value histogram, and the saturated areas should be
masked out by hand, as described above.
When the inverse gray value response function of the camera is determined, the absolute energy falling on the
camera cannot be recovered. This means that inverseResponse can only be determined up to a scale factor.
Therefore, an additional constraint is used to fix the unknown scale factor: the maximum gray value that can occur
should occur for the maximum input gray value, e.g., inverseResponse[255] = 255 for byte images. This
constraint usually leads to the most intuitive results. If, however, a multichannel image (typically an RGB image)
should be radiometrically calibrated (for this, each channel must be calibrated separately), the above constraint
may lead to a different scaling factor being determined for each channel. As a result, gray tones may no longer
appear gray after the correction. In this case, a manual white balancing step must be carried
out by identifying a homogeneous gray area in the original image, and by deriving appropriate scaling factors from
the corrected gray values for two of the three response curves (or, in general, for n − 1 of the n channels). Here,
the response curve that remains invariant should be chosen such that all scaling factors are < 1. With the scaling
factors thus determined, new response functions should be calculated by multiplying each value of a response
function with the scaling factor corresponding to that response function.
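Such a manual white balancing could be sketched as follows; GrayR, GrayG, and GrayB denote the corrected gray values of a homogeneous gray area, the blue channel is assumed to have the smallest value so that both factors are < 1, and all variable names are illustrative:

```
* Scale the red and green inverse response functions so that the gray
* area has equal values in all three channels; the blue curve remains
* invariant.
ScaleR := real(GrayB) / GrayR
ScaleG := real(GrayB) / GrayG
InvRespR := InvRespR * ScaleR
InvRespG := InvRespG * ScaleG
```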
Parameter
Result
If the parameters are valid, the operator RadiometricSelfCalibration returns the value 2
(H_MSG_TRUE). If necessary an exception handling is raised.
Parallelization Information
RadiometricSelfCalibration is reentrant and processed without parallelization.
Possible Predecessors
ReadImage, GrabImage, GrabImageAsync, SetFramegrabberParam, ConcatObj,
ProjMatchPointsRansac, ProjectiveTransImage
Possible Successors
LutTrans
See also
Histo2dim, GrayHisto, GrayHistoAbs, ReduceDomain
Module
Calibration
Focus:foc: 0.00806039;
DOUBLE:0.0:;
"Focal length of the lens [meter]";
Kappa:kappa: -2253.5;
DOUBLE::;
"Radial distortion coefficient [1/(meter*meter)]";
Sx:sx: 1.0629e-05;
DOUBLE:0.0:;
"Width of a cell on the chip [meter]";
Sy:sy: 1.1e-05;
DOUBLE:0.0:;
"Height of a cell on the chip [meter]";
Cx:cx: 378.236;
DOUBLE:0.0:;
"X-coordinate of the image center [pixel]";
Cy:cy: 297.587;
DOUBLE:0.0:;
"Y-coordinate of the image center [pixel]";
ImageWidth:imgw: 768;
INT:1:32767;
"Width of the used calibration images [pixel]";
ImageHeight:imgh: 576;
INT:1:32767;
"Height of the used calibration images [pixel]";
In addition to the 8 parameters of the parameter group Camera:Parameter, the parameter group LinescanCamera:
Parameter contains 3 parameters that describe the motion of the camera with respect to the object. With this,
the parameter group LinescanCamera:Parameter consists of the 11 parameters Focus, Kappa (κ), Sx, Sy, Cx, Cy,
ImageWidth, ImageHeight, Vx, Vy, and Vz. A suitable file can look like the following:
Focus:foc: 0.061;
DOUBLE:0.0:;
"Focal length of the lens [meter]";
Kappa:kappa: -16.9761;
DOUBLE::;
"Radial distortion coefficient [1/(meter*meter)]";
Sx:sx: 1.06903e-05;
DOUBLE:0.0:;
"Width of a cell on the chip [meter]";
Sy:sy: 1e-05;
DOUBLE:0.0:;
"Height of a cell on the chip [meter]";
Cx:cx: 930.625;
DOUBLE:0.0:;
"X-coordinate of the image center [pixel]";
Cy:cy: 149.962;
DOUBLE:0.0:;
"Y-coordinate of the image center [pixel]";
ImageWidth:imgw: 2048;
INT:1:32767;
"Width of the used calibration images [pixel]";
ImageHeight:imgh: 3840;
INT:1:32767;
"Height of the used calibration images [pixel]";
Vx:vx: 1.41376e-06;
DOUBLE::;
"X-component of the motion vector [meter/scanline]";
Vy:vy: 5.45756e-05;
DOUBLE::;
"Y-component of the motion vector [meter/scanline]";
Vz:vz: 3.45872e-06;
DOUBLE::;
"Z-component of the motion vector [meter/scanline]";
Parameter
Result
ReadCamPar returns 2 (H_MSG_TRUE) if all parameter values are correct and the file has been read success-
fully. If necessary an exception handling is raised.
Parallelization Information
ReadCamPar is reentrant and processed without parallelization.
Possible Successors
FindMarksAndPose, SimCaltab, GenCaltab, DispCaltab, CameraCalibration
See also
FindCaltab, FindMarksAndPose, CameraCalibration, DispCaltab, SimCaltab,
WriteCamPar, WritePose, ReadPose, Project3dPoint, GetLineOfSight
Module
Foundation
Result
SimCaltab returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception handling is
raised.
Parallelization Information
SimCaltab is reentrant and processed without parallelization.
Possible Predecessors
CameraCalibration, FindMarksAndPose, ReadPose, ReadCamPar, HomMat3dToPose
Possible Successors
FindCaltab
See also
FindCaltab, FindMarksAndPose, CameraCalibration, DispCaltab, CreatePose,
HomMat3dToPose, Project3dPoint, GenCaltab
Module
Calibration
x = PX .
Here, x is a homogeneous 2D vector, X a homogeneous 3D vector, and P a homogeneous 3×4 projection matrix.
The projection matrix P can be decomposed as follows:
P = K(R|t) .
Here, R is a 3×3 rotation matrix and t is an inhomogeneous 3D vector. These two entities describe
the position (pose) of the camera in 3D space. This convention is analogous to the convention used in
CameraCalibration, i.e., for R = I and t = 0 the x axis points to the right, the y axis downwards, and
the z axis points forward. K is the calibration matrix of the camera (the camera matrix) which can be described as
follows:
        ( a·f  s·f  u )
    K = (  0    f   v )
        (  0    0   1 )
Here, f is the focal length of the camera in pixels, a the aspect ratio of the pixels, s is a factor that models the
skew of the image axes, and (u, v) is the principal point of the camera in pixels. In this convention, the x axis
corresponds to the column axis and the y axis to the row axis.
Since the camera is stationary, it can be assumed that t = 0. With this convention, it is easy to see that the
fourth coordinate of the homogeneous 3D vector X has no influence on the position of the projected 3D point.
Consequently, the fourth coordinate can be set to 0, and it can be seen that X can be regarded as a point at infinity,
and hence represents a direction in 3D. With this convention, the fourth coordinate of X can be omitted, and hence
X can be regarded as inhomogeneous 3D vector which can only be determined up to scale since it represents a
direction. With this, the above projection equation can be written as follows:
x = KRX .
If two images of the same point are taken with a stationary camera, the following equations hold:
x1 = K1 R1 X
x2 = K2 R2 X
and consequently
    x2 = K2 · R2 · R1^(-1) · K1^(-1) · x1 = K2 · R12 · K1^(-1) · x1 = H12 · x1 .
If the camera parameters do not change when taking the two images, K1 = K2 holds. Because of the above,
the two images of the same 3D point are related by a projective 2D transformation. This transformation can be
determined with ProjMatchPointsRansac. It needs to be taken into account that the order of the coordinates
of the projective 2D transformations in HALCON is the opposite of the above convention. Furthermore, it needs to
be taken into account that ProjMatchPointsRansac uses a coordinate system in which the origin of a pixel
lies in the upper left corner of the pixel, whereas StationaryCameraSelfCalibration uses a coordinate
system that corresponds to the definition used in CameraCalibration, in which the origin of a pixel lies in the
center of the pixel. For projective 2D transformations that are determined with ProjMatchPointsRansac
the rows and columns must be exchanged and a translation of (0.5, 0.5) must be applied. Hence, instead of
H12 = K2 R12 K1^-1, the following equations hold in HALCON:
        | 0 1 0.5 |                 | 0 1 -0.5 |
H12  =  | 1 0 0.5 |  K2 R12 K1^-1  | 1 0 -0.5 |
        | 0 0  1  |                 | 0 0   1  |
and
                 | 0 1 -0.5 |       | 0 1 0.5 |
K2 R12 K1^-1  =  | 1 0 -0.5 |  H12  | 1 0 0.5 | .
                 | 0 0   1  |       | 0 0  1  |
From the above equation, constraints on the camera parameters can be derived in two ways. First, the rotation can
be eliminated from the above equation, leading to equations that relate the camera matrices with the projective 2D
transformation between the two images. Let Hij be the projective transformation from image i to image j. Then,
Kj Kj^T = Hij Ki Ki^T Hij^T
Kj^-T Kj^-1 = Hij^-T Ki^-T Ki^-1 Hij^-1
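Both identities can be verified numerically. A minimal Python/numpy sketch with hypothetical values (illustrative only, not part of HALCON), using constant camera parameters Ki = Kj = K so that Hij = K R12 K^-1:

```python
import numpy as np

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# arbitrary rotation about the y axis
a = 0.2
R12 = np.array([[ np.cos(a), 0.0, np.sin(a)],
                [ 0.0,       1.0, 0.0      ],
                [-np.sin(a), 0.0, np.cos(a)]])

Hij = K @ R12 @ np.linalg.inv(K)
Hinv = np.linalg.inv(Hij)
Kinv = np.linalg.inv(K)

# K K^T = H K K^T H^T   (holds because R12 R12^T = I)
assert np.allclose(K @ K.T, Hij @ K @ K.T @ Hij.T)
# K^-T K^-1 = H^-T K^-T K^-1 H^-1
assert np.allclose(Kinv.T @ Kinv, Hinv.T @ Kinv.T @ Kinv @ Hinv)
```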
From the second equation, linear constraints on the camera parameters can be derived. This method is used for
estimationMethod = ’linear’. Here, all source images i given by mappingSource and all destination
images j given by mappingDest are used to compute constraints on the camera parameters. After the camera
parameters have been determined from these constraints, the rotation of the camera in the respective images can
be determined based on the equation Rij = Kj^-1 Hij Ki and by constructing a chain of transformations from the
reference image referenceImage to the respective image. From the first equation above, a nonlinear method
to determine the camera parameters can be derived by minimizing the following error:
E = Σ_{(i,j) ∈ {(s,d)}} || Kj Kj^T − Hij Ki Ki^T Hij^T ||_F^2
Here, analogously to the linear method, {(s, d)} is the set of overlapping images specified by mappingSource
and mappingDest. This method is used for estimationMethod = ’nonlinear’. To start the minimization,
the camera parameters are initialized with the results of the linear method. These two methods are very fast and
return acceptable results if the projective 2D transformations Hij are sufficiently accurate. For this, it is essential
that the images do not have radial distortions. It can also be seen that in the above two methods the camera
parameters are determined independently from the rotation parameters, and consequently the possible constraints
are not fully exploited. In particular, it can be seen that it is not enforced that the projections of the same 3D point
lie close to each other in all images. Therefore, StationaryCameraSelfCalibration offers a complete
bundle adjustment as a third method (estimationMethod = ’gold_standard’). Here, the camera parameters
and rotations as well as the directions in 3D corresponding to the image points (denoted by the vectors X above),
are determined in a single optimization by minimizing the following error:
E = Σ_{i=1}^{n} ( Σ_{j=1}^{m} || xij − Ki Ri Xj ||^2 + (1/σ^2) (ui^2 + vi^2) )
In this equation, only the terms for which the reconstructed direction Xj is visible in image i are taken into account.
The starting values for the parameters in the bundle adjustment are derived from the results of the nonlinear method.
Because of the high complexity of the minimization, the bundle adjustment requires a significantly longer execution
time than the two simpler methods. Nevertheless, because it produces significantly better results, the bundle
adjustment should be preferred.
In each of the three methods the camera parameters that should be computed can be specified. The remaining
parameters are set to a constant value. Which parameters should be computed is determined with the parameter
cameraModel which contains a tuple of values. cameraModel must always contain the value ’focus’ that
specifies that the focal length f is computed. If cameraModel contains the value ’principal_point’ the principal
point (u, v) of the camera is computed. If not, the principal point is set to (imageWidth/2, imageHeight/2).
If cameraModel contains the value ’aspect’ the aspect ratio a of the pixels is determined, otherwise it is set to
1. If cameraModel contains the value ’skew’ the skew of the image axes is determined, otherwise it is set to
0. Only the following combinations of the parameters are allowed: ’focus’, [’focus’, ’principal_point’], [’focus’,
’aspect’], [’focus’, ’principal_point’, ’aspect’], and [’focus’, ’principal_point’, ’aspect’, ’skew’].
Additionally, it is possible to determine the parameter kappa which models radial lens distortions, if
estimationMethod = ’gold_standard’ has been selected and the camera parameters are assumed constant.
In this case, ’kappa’ can also be included in the parameter cameraModel.
When using estimationMethod = ’gold_standard’ to determine the principal point, it is possible to penalize
estimates that lie far away from the image center. This can be done by adding a sigma to the parameter, e.g.,
’principal_point:0.5’. If no sigma is given, the penalty term in the above error equation is omitted.
The parameter fixedCameraParams determines whether the camera parameters can change in each image
or whether they should be assumed constant for all images. To calibrate a camera so that it can later be
used for measuring with the calibrated camera, only fixedCameraParams = ’true’ is useful. The mode
fixedCameraParams = ’false’ is mainly useful to compute spherical mosaics with GenSphericalMosaic
if the camera zoomed or if the focus changed significantly when the mosaic images were taken. If a mosaic with
constant camera parameters should be computed, of course fixedCameraParams = ’true’ should be used. It
should be noted that for fixedCameraParams = ’false’ the camera calibration problem is very poorly
determined, especially for long focal lengths. In these cases, often only the focal length can be determined. Therefore,
it may be necessary to use cameraModel = ’focus’ or to constrain the position of the principal point by using a
small sigma for the penalty term for the principal point.
HALCON 8.0.2
1194 CHAPTER 15. TOOLS
The number of images that are used for the calibration is passed in numImages. Based on the number of images,
several constraints for the camera model must be observed. If only two images are used, even under the assumption
of constant parameters not all camera parameters can be determined. In this case, the skew of the image axes should
be set to 0 by not adding ’skew’ to cameraModel. If fixedCameraParams = ’false’ is used, the full set of
camera parameters can never be determined, no matter how many images are used. In this case, the skew should be
set to 0 as well. Furthermore, it should be noted that the aspect ratio can only be determined accurately if at least
one image is rotated around the optical axis (the z axis of the camera coordinate system) with respect to the other
images. If this is not the case the computation of the aspect ratio should be suppressed by not adding ’aspect’ to
cameraModel.
As described above, to calibrate the camera it is necessary that the projective transformation for each overlapping
image pair is determined with ProjMatchPointsRansac. For example, for a 2×2 block of images in the
following layout
1 2
3 4
the following projective transformations should be determined, assuming that all images overlap each other: 1→2,
1→3, 1→4, 2→3, 2→4, and 3→4. The indices of the images that determine the respective transformation are
given by mappingSource and mappingDest. The indices start at 1. Consequently, in the above example
mappingSource = [1,1,1,2,2,3] and mappingDest = [2,3,4,3,4,4] must be used. The number of images
in the mosaic is given by numImages. It is used to check whether each image can be reached by a chain of
transformations. The index of the reference image is given by referenceImage. On output, this image has the
identity matrix as its transformation matrix.
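For a fully overlapping mosaic, the two index tuples can be generated programmatically. The following Python sketch (the helper function is hypothetical and illustrative, not a HALCON operator) reproduces the mapping for the 2×2 block above:

```python
def mosaic_mappings(num_images):
    """Return (mappingSource, mappingDest) for a mosaic in which every
    image overlaps every other image; indices start at 1."""
    source, dest = [], []
    for i in range(1, num_images + 1):
        for j in range(i + 1, num_images + 1):
            source.append(i)
            dest.append(j)
    return source, dest

# the 2x2 block from the description above
src, dst = mosaic_mappings(4)
assert src == [1, 1, 1, 2, 2, 3]
assert dst == [2, 3, 4, 3, 4, 4]
```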
The 3 × 3 projective transformation matrices that correspond to the image pairs are passed in
homMatrices2D. Additionally, the coordinates of the matched point pairs in the image pairs must
be passed in rows1, cols1, rows2, and cols2. They can be determined from the output of
ProjMatchPointsRansac with TupleSelect or with the HDevelop function subset. To enable
StationaryCameraSelfCalibration to determine which point pair belongs to which image pair,
numCorrespondences must contain the number of found point matches for each image pair.
The computed camera matrices Ki are returned in cameraMatrices as 3 × 3 matrices. For
fixedCameraParams = ’false’, numImages matrices are returned. Since for fixedCameraParams =
’true’ all camera matrices are identical, a single camera matrix is returned in this case. The computed rotations Ri
are returned in rotationMatrices as 3 × 3 matrices. rotationMatrices always contains numImages
matrices.
If estimationMethod = ’gold_standard’ is used, (x, y, z) contains the reconstructed directions Xj. In
addition, error contains the average projection error of the reconstructed directions. This can be used to check
whether the optimization has converged to useful values.
If the computed camera parameters are used to project 3D points or 3D directions into the image i the respective
camera matrix should be multiplied with the corresponding rotation matrix (with HomMat2dCompose).
Parameter
* Assume that Images contains four images in the layout given in the
* above description. Then the following example performs the camera
* self-calibration using these four images.
From := [1,1,1,2,2,3]
To := [2,3,4,3,4,4]
HomMatrices2D := []
Rows1 := []
Cols1 := []
Rows2 := []
Cols2 := []
NumMatches := []
for J := 0 to |From|-1 by 1
select_obj (Images, From[J], ImageF)
select_obj (Images, To[J], ImageT)
points_foerstner (ImageF, 1, 2, 3, 100, 0.1, ’gauss’, ’true’,
RowsF, ColsF, _, _, _, _, _, _, _, _)
points_foerstner (ImageT, 1, 2, 3, 100, 0.1, ’gauss’, ’true’,
RowsT, ColsT, _, _, _, _, _, _, _, _)
Result
If the parameters are valid, the operator StationaryCameraSelfCalibration returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Parallelization Information
StationaryCameraSelfCalibration is reentrant and processed without parallelization.
Possible Predecessors
ProjMatchPointsRansac
Possible Successors
GenSphericalMosaic
See also
GenProjectiveMosaic
References
Lourdes Agapito, E. Hayman, I. Reid: “Self-Calibration of Rotating and Zooming Cameras”; International Journal
of Computer Vision; vol. 45, no. 2; pp. 107–127; 2001.
Module
Calibration
For area scan cameras, the projection of the point pc that is given in camera coordinates into a (sub-)pixel [r,c]
in the image consists of the following steps: First, the point is projected into the image plane, i.e., onto the sensor
chip. If the underlying camera model is an area scan pinhole camera, i.e., if the focal length passed in camParam
is greater than 0, the projection is described by the following equations:
pc = (x, y, z)^T
u = Focus · x/z   and   v = Focus · y/z
In contrast, if the focal length is passed as 0 in camParam, the camera model of an area scan telecentric camera
is used, i.e., it is assumed that the optics of the lens of the camera performs a parallel projection. In this case, the
corresponding equations are:
pc = (x, y, z)^T
u = x   and   v = y
The radial lens distortion is then applied to the image plane coordinates (u, v):
ũ = 2u / (1 + sqrt(1 − 4κ(u^2 + v^2)))   and   ṽ = 2v / (1 + sqrt(1 − 4κ(u^2 + v^2)))
Finally, the point is transformed from the image plane coordinate system into the image coordinate system, i.e.,
the pixel coordinate system:
c = ũ/Sx + Cx   and   r = ṽ/Sy + Cy
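The complete projection chain for area scan cameras can be summarized in a short Python sketch. This is illustrative code only, not part of HALCON; the function name and all numeric values are hypothetical (focal length and pixel sizes in meters):

```python
import math

def project_area_scan(p, focus, kappa, sx, sy, cx, cy):
    """Project a point p = (x, y, z) in camera coordinates to (row, col):
    pinhole model for focus > 0, telecentric model for focus == 0."""
    x, y, z = p
    if focus > 0:                        # perspective projection
        u, v = focus * x / z, focus * y / z
    else:                                # parallel projection
        u, v = x, y
    # division model of the radial lens distortion
    d = 1.0 + math.sqrt(1.0 - 4.0 * kappa * (u * u + v * v))
    ut, vt = 2.0 * u / d, 2.0 * v / d
    # image plane coordinates -> pixel coordinates
    return vt / sy + cy, ut / sx + cx    # (row, col)

# with kappa = 0 the distortion step is the identity
row, col = project_area_scan((0.01, 0.02, 0.5),
                             focus=0.008, kappa=0.0,
                             sx=1e-5, sy=1e-5, cx=320.0, cy=240.0)
assert abs(row - 272.0) < 1e-9 and abs(col - 336.0) < 1e-9
```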
For line scan cameras, the relative motion between the camera and the object must also be modeled. In HALCON,
the following assumptions are made for this motion:
The motion is described by the motion vector V = (Vx , Vy , Vz )T that must be given in [meter/scanline] in the
camera coordinate system. The motion vector describes the motion of the camera, assuming a fixed object. In fact,
this is equivalent to the assumption of a fixed camera with the object travelling along −V .
The camera coordinate system of line scan cameras is defined as follows: The origin of the coordinate system is the
center of projection. The z-axis is identical to the optical axis and directed so that the visible points have positive z
coordinates. The y-axis is perpendicular to the sensor line and to the z-axis. It is directed so that the motion vector
has a positive y-component. The x-axis is perpendicular to the y- and z-axis, so that the x-, y-, and z-axis form a
right-handed coordinate system.
As the camera moves over the object during the image acquisition, the camera coordinate system also moves
relative to the object, i.e., each image line is imaged from a different position. This means there would
be an individual pose for each image line. To simplify matters, in HALCON all transformations from world
coordinates into camera coordinates and vice versa are based on the pose of the first image line only. The motion
V is taken into account during the projection of the point pc into the image. Consequently, only the pose of the
first image line is returned by the operators FindMarksAndPose and CameraCalibration.
For line scan pinhole cameras, the projection of the point pc that is given in the camera coordinate system into a
(sub-)pixel [r,c] in the image is defined as follows:
Assuming
pc = (x, y, z)^T ,
the projection is described by the following equations:
m · D · ũ = x − t · Vx
−m · D · pv = y − t · Vy
m · Focus = z − t · Vz
with
D = 1 / (1 + κ(ũ^2 + (pv)^2))
pv = Sy · Cy
and the resulting pixel coordinates
c = ũ/Sx + Cx   and   r = t
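For the distortion-free case κ = 0 (i.e., D = 1), these equations can be solved in closed form for t, m, and ũ. The following Python sketch works only under this assumption and is illustrative, not HALCON code; the function name and all numeric values are hypothetical:

```python
def project_line_scan(p, focus, sx, sy, cx, cy, v_motion):
    """Project p = (x, y, z), given in the camera coordinate system of the
    first image line, to (row, col) for a line scan pinhole camera with
    kappa = 0 (no radial distortion)."""
    x, y, z = p
    vx, vy, vz = v_motion                 # motion vector in meter/scanline
    pv = sy * cy
    # substituting m = (z - t*vz)/focus into -m*pv = y - t*vy
    # yields a linear equation in t:
    t = (y + pv * z / focus) / (vy + pv * vz / focus)
    m = (z - t * vz) / focus
    ut = (x - t * vx) / m
    return t, ut / sx + cx                # (row, col) = (t, c)

# consistency check: construct a point from known (t, m, ut), then recover it
focus, sx, sy, cx, cy = 0.008, 1e-5, 1e-5, 320.0, 100.0
v = (0.0, 2e-4, 0.0)
t0, m0, ut0 = 50.0, 1.0, 2e-4
pv = sy * cy
p = (m0 * ut0 + t0 * v[0], -m0 * pv + t0 * v[1], m0 * focus + t0 * v[2])
row, col = project_line_scan(p, focus, sx, sy, cx, cy, v)
assert abs(row - 50.0) < 1e-6 and abs(col - 340.0) < 1e-6
```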
The format of the text file camParFile is a (HALCON-independent) generic parameter description. It makes it
possible to group arbitrary sets of parameters hierarchically. The description of a single parameter within a parameter group
consists of the following 3 lines:
Depending on the number of elements of camParam, the parameter groups Camera:Parameter or LinescanCam-
era:Parameter, respectively, are written into the text file camParFile (see ReadCamPar for an example). The
parameter group Camera:Parameter consists of the 8 interior camera parameters of the area scan camera. The
parameter group LinescanCamera:Parameter consists of the 11 interior camera parameters of the line scan camera.
Parameter
. camParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; HTuple (double / int / long)
Interior camera parameters.
Number of elements : (CamParam = 8) ∨ (CamParam = 11)
. camParFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; HTuple (string)
File name of interior camera parameters.
Default Value : "campar.dat"
List of values : CamParFile ∈ {"campar.dat", "campar.initial", "campar.final"}
Example (Syntax: HDevelop)
read_image(Image3, ’calib-03’)
* find calibration pattern
find_caltab(Image1, Caltab1, ’caltab.descr’, 3, 112, 5)
find_caltab(Image2, Caltab2, ’caltab.descr’, 3, 112, 5)
find_caltab(Image3, Caltab3, ’caltab.descr’, 3, 112, 5)
* find calibration marks and start poses
StartCamPar := [Focus, Kappa, Sx, Sy, Cx, Cy, ImageWidth, ImageHeight]
find_marks_and_pose(Image1, Caltab1, ’caltab.descr’, StartCamPar,
128, 10, 18, 0.9, 15.0, 100.0, RCoord1, CCoord1,
StartPose1)
find_marks_and_pose(Image2, Caltab2, ’caltab.descr’, StartCamPar,
128, 10, 18, 0.9, 15.0, 100.0, RCoord2, CCoord2,
StartPose2)
find_marks_and_pose(Image3, Caltab3, ’caltab.descr’, StartCamPar,
128, 10, 18, 0.9, 15.0, 100.0, RCoord3, CCoord3,
StartPose3)
* read 3D positions of calibration marks
caltab_points(’caltab.descr’, NX, NY, NZ)
* camera calibration
camera_calibration(NX, NY, NZ, [RCoord1, RCoord2, RCoord3],
[CCoord1, CCoord2, CCoord3], StartCamPar,
[StartPose1, StartPose2, StartPose3], ’all’,
CamParam, NFinalPose, Errors)
* write interior camera parameters to file
write_cam_par(CamParam, ’campar.dat’)
Result
WriteCamPar returns 2 (H_MSG_TRUE) if all parameter values are correct and the file has been written
successfully. If necessary, an exception is raised.
Parallelization Information
WriteCamPar is local and processed completely exclusively without parallelization.
Possible Predecessors
CameraCalibration
See also
FindCaltab, FindMarksAndPose, CameraCalibration, DispCaltab, SimCaltab,
ReadCamPar, WritePose, ReadPose, Project3dPoint, GetLineOfSight
Module
Foundation
15.6 Datacode
static void HOperatorSet.ClearAllDataCode2dModels ( )
static void HMisc.ClearAllDataCode2dModels ( )
Delete all 2D data code models and free the allocated memory.
The operator ClearAllDataCode2dModels deletes all 2D data code models that were created by
CreateDataCode2dModel or ReadDataCode2dModel. All memory used by the models is freed. After
the operator call, all 2D data code handles are invalid.
Attention
ClearAllDataCode2dModels exists solely for the purpose of implementing the “reset program” functionality
in HDevelop. ClearAllDataCode2dModels must not be used in any application.
Result
The operator ClearAllDataCode2dModels returns the value 2 (H_MSG_TRUE) if all 2D data code models
were freed correctly. Otherwise, an exception will be raised.
Parallelization Information
ClearAllDataCode2dModels is processed completely exclusively without parallelization.
Alternatives
ClearDataCode2dModel
See also
CreateDataCode2dModel, ReadDataCode2dModel
Module
Data Code
The parameter symbolType is used to determine the type of data codes to process. Presently, three types are
supported: ’Data Matrix ECC 200’, ’QR Code’, and ’PDF417’. Data matrix codes of type ECC 000-140 are not
supported. For the QR Code the older Model 1 as well as the new Model 2 can be read. The PDF417 can be read
in its conventional as well as in its compact form (’Compact/Truncated PDF417’).
For all symbol types, the data code reader supports the Extended Channel Interpretation (ECI) protocol. If the
symbol contains an ECI code, all characters with ASCII code 92 (backslash, ’\’) that occur in the normal data
stream are, in compliance with the standard, doubled (’\\’) for the output. This is necessary in order to distinguish
data backslashes from the ECI sequence ’\nnnnnn’.
The information whether the symbol contains ECI codes (and consequently doubled backslashes) or not is stored in
the Symbology Identifier number that can be obtained for every successfully decoded symbol with the help of the
operator GetDataCode2dResults passing the generic parameter ’symbology_ident’. How the code number
encodes additional information about the symbology and the data code reader, like the ECI support, is defined
in the different symbology specifications. For more information see the appropriate standards and the operator
GetDataCode2dResults.
The Symbology Identifier code is not prepended to the output data by the data code reader, even if the symbol
contains an ECI code. If this is needed, e.g., by a subsequent processing unit, the ’symbology_ident’ number
(obtained by the operator GetDataCode2dResults with parameter ’symbology_ident’) can be added to the
data stream manually together with the symbology flag and the symbol code: ’]d’, ’]Q’, or ’]L’ for DataMatrix
codes, QR codes, or PDF417 codes, respectively.
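The escaping convention can be illustrated with a small Python sketch that parses such a reader output string back into ECI codes and literal data. This helper is hypothetical and for illustration only; it is not part of HALCON and assumes exactly the two escape forms described above:

```python
def parse_eci_output(s):
    """Split reader output into ('eci', number) and ('data', char) tokens:
    '\\\\' is a literal backslash, '\\nnnnnn' (six digits) an ECI code."""
    tokens, i = [], 0
    while i < len(s):
        if s.startswith('\\\\', i):          # doubled backslash -> '\'
            tokens.append(('data', '\\'))
            i += 2
        elif s[i] == '\\':                   # ECI sequence \nnnnnn
            tokens.append(('eci', int(s[i + 1:i + 7])))
            i += 7
        else:                                # ordinary data character
            tokens.append(('data', s[i]))
            i += 1
    return tokens

# ECI designator 000026, followed by the data 'A\B' (backslash doubled)
assert parse_eci_output('\\000026A\\\\B') == \
    [('eci', 26), ('data', 'A'), ('data', '\\'), ('data', 'B')]
```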
Standard default settings of the data code model
The default settings of the model were chosen to read a wide range of common symbols within a reasonable
amount of time. However, for run-time reasons some restrictions apply to the symbol (see the following table).
If the model was modified (as described later), it is at any time possible to reset it to these default settings by
passing the generic parameter ’default_parameters’ together with the value ’standard_recognition’ to the operator
SetDataCode2dParam.
operator SetDataCode2dParam. Both operators provide the generic parameters genParamNames and
genParamValues for this purpose. A detailed description of all supported generic parameters can be found
with the operator SetDataCode2dParam.
Another way of adapting the model is to train it based on sample images. Passing the parameter ’train’ to the
operator FindDataCode2d causes the find operator to look for a symbol, determine its parameters, and modify
the model accordingly. More details can be found with the description of the operator FindDataCode2d.
It is possible to query the model parameters with the operator GetDataCode2dParam. The names of all sup-
ported parameters for setting or querying the model are returned by the operator QueryDataCode2dParams.
Store the data code model
Furthermore, the operator WriteDataCode2dModel makes it possible to write the model into a file that can
later be used to create (e.g., in a different application) an identical copy of the model. Such a model copy is created directly by
ReadDataCode2dModel (without calling CreateDataCode2dModel).
Free the data code model
Since memory is allocated during CreateDataCode2dModel and the following operations, the model should
be freed explicitly by the operator ClearDataCode2dModel if it is no longer used.
Parameter
clear_data_code_2d_model (DataCodeHandle)
* (2) Create a model for reading a wide range of Data matrix ECC 200 codes
* (this model will also read light symbols on dark background)
create_data_code_2d_model (’Data Matrix ECC 200’, ’default_parameters’,
’enhanced_recognition’, DataCodeHandle)
* Read an image
read_image (Image, ’datacode/ecc200/ecc200_cpu_010’)
* Read the symbol in the image
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, [], [],
ResultHandles, DecodedDataStrings)
* Clear the model
clear_data_code_2d_model (DataCodeHandle)
Result
The operator CreateDataCode2dModel returns the value 2 (H_MSG_TRUE) if the given parameters are
correct. Otherwise, an exception will be raised.
Parallelization Information
CreateDataCode2dModel is processed completely exclusively without parallelization.
Possible Successors
SetDataCode2dParam, FindDataCode2d
Alternatives
ReadDataCode2dModel
See also
ClearDataCode2dModel, ClearAllDataCode2dModels
Module
Data Code
Detect and read 2D data code symbols in an image or train the 2D data code model.
The operator FindDataCode2d detects 2D data code symbols in the input image (image) and reads the data
that is coded in the symbol. Before calling FindDataCode2d, a model of a class of 2D data codes that matches
the symbols in the images must be created with CreateDataCode2dModel or ReadDataCode2dModel.
The handle returned by these operators is passed to FindDataCode2d in dataCodeHandle. To look for more
than one symbol in an image, the generic parameter ’stop_after_result_num’ can be passed to genParamNames
together with the number of requested symbols as genParamValues.
As a result the operator returns for every successfully decoded symbol the surrounding XLD contour
(symbolXLDs), a result handle, which refers to a candidate structure that stores additional information about
the symbol as well as the search and decoding process (resultHandles), and the string that is encoded in
the symbol (decodedDataStrings). If the string is longer than 1024 characters, it is shortened to 1020
characters followed by ’...’. In this case, accessing the complete string is only possible with the operator
GetDataCode2dResults. Passing the candidate handle from resultHandles together with the generic
parameter ’decoded_data’, GetDataCode2dResults returns a tuple with the ASCII codes of all characters of
the string.
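The truncation rule and the recovery of the complete string can be sketched in Python. Both helpers are hypothetical and illustrative; in HALCON itself, only GetDataCode2dResults returns the ASCII-code tuple:

```python
def truncate_decoded_string(s, limit=1024):
    """Mimic the rule above: strings longer than 1024 characters are
    shortened to 1020 characters followed by '...'."""
    return s[:1020] + '...' if len(s) > limit else s

def string_from_ascii_codes(codes):
    """Rebuild the complete string from an ASCII-code tuple such as the
    one returned for the generic parameter 'decoded_data'."""
    return ''.join(chr(c) for c in codes)

assert truncate_decoded_string('x' * 2000) == 'x' * 1020 + '...'
assert truncate_decoded_string('short') == 'short'
assert string_from_ascii_codes([72, 105]) == 'Hi'
```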
Adjusting the model
If there is a symbol in the image that cannot be read, it should be verified, whether the properties of the symbol
fit the model parameters. Special attention should be paid to the correct polarity (’polarity’, light-on-dark or dark-
on-light), the symbol size (’symbol_size’ for ECC 200, ’version’ for QR Code, ’symbol_rows’ and ’symbol_cols’
for PDF417), the module size (’module_size’ for ECC 200 and QR Code, ’module_width’ and ’module_aspect’
for PDF417), the possibility of a mirroring of the symbol (’mirrored’), and the specified minimum contrast
(’contrast_min’). Further relevant parameters are the gap between neighboring foreground modules and, for ECC 200,
the maximum slant of the L-shaped finder pattern (’slant_max’). The current settings for these parameters are
returned by the operator GetDataCode2dParam. If necessary, the appropriate model parameters can be adjusted
with SetDataCode2dParam.
Also for run-time reasons, it is recommended to adapt the model as closely as possible to the symbols in the images.
In general, the run-time of FindDataCode2d is higher for a more general model than for a more specific one.
One should take into account that a general model leads to a high run-time especially if no valid data code can be
found.
Train the model
Besides setting the model parameters manually with SetDataCode2dParam, the model can also be trained
with FindDataCode2d based on one or several sample images. For this the generic parameter ’train’ must
be passed in genParamNames. The corresponding value passed in genParamValues determines the model
parameters that should be learned. The following values are possible:
It is possible to train several of these parameters in one call of FindDataCode2d by passing the generic pa-
rameter ’train’ in a tuple more than once in conjunction with the appropriate parameters: e.g., genParamNames
= [’train’,’train’] and genParamValues = [’polarity’,’module_size’]. Furthermore, in conjunction with ’train’
= ’all’ it is possible to explicitly exclude single parameters from the training again by passing ’train’ more than once.
The names of the parameters to exclude, however, must be prefixed by ’~’: e.g., genParamNames = [’train’,’train’]
and genParamValues = [’all’,’~contrast’] trains all parameters except the minimum contrast.
For training the model, the following aspects should be considered:
• To use several images for the training, the operator FindDataCode2d must be called with the parameter
’train’ once for every sample image.
• It is also possible to train the model with several symbols in one image. Here, the generic parameter
’stop_after_result_num’ must be passed as a tuple to genParamNames together with ’train’. The num-
ber of symbols in the image is passed in genParamValues together with the training parameters.
• If the training image contains more symbols than the one that shall be used for the training the domain of the
image should be reduced to the symbol of interest with ReduceDomain.
• In an application with very similar images, one image for training may be sufficient if the following assump-
tions are true: The symbol size (in modules) is the same for all symbols used in the application, foreground
and background modules are of the same size and there is no gap between neighboring foreground modules,
the background has no distinct texture; and the contrast of all images is almost the same. Otherwise, several
images should be used for training.
• In applications where the symbol size (in modules) is not fixed, the smallest as well as the biggest symbols
should be used for the training. If this cannot be guaranteed, the limits for the symbol size should be adapted
manually after the training, or the symbol size should be excluded from the training entirely.
• During the first call of FindDataCode2d in the training mode, the trained model parameters are restricted
to the properties of the detected symbol. Any successive training call will, where necessary, extend the
parameter range to cover the already trained symbols as well as the new symbols. Resetting the model with
SetDataCode2dParam to one of its default settings (’default_parameters’ = ’standard_recognition’ or
’enhanced_recognition’) will also reset the training state of the model.
• If FindDataCode2d is not able to read the symbol in the training image, no error or exception is raised.
This can be detected in the program by checking the results of FindDataCode2d: symbolXLDs,
resultHandles, and decodedDataStrings. These tuples will be empty, and the model
will not be modified.
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Input image.
. symbolXLDs (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; HXLDCont
XLD contours that surround the successfully decoded data code symbols.
. dataCodeHandle (input_control) . . . . . . . . . . . . . . . . . . datacode_2d ; HDataCode2D / HTuple (IntPtr)
Handle of the 2D data code model.
. genParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; HTuple (string)
Names of (optional) parameters for controlling the behavior of the operator.
Default Value : []
List of values : GenParamNames ∈ {"train", "stop_after_result_num"}
. genParamValues (input_control) . . . . . . . . . attribute.value(-array) ; HTuple (int / long / double / string)
Values of the optional generic parameters.
Default Value : []
Suggested values : GenParamValues ∈ {"all", "model_type", "symbol_size", "version", "module_size",
"module_shape", "polarity", "mirrored", "contrast", "module_grid", "image_proc", 1, 2, 3}
. resultHandles (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; HTuple (int / long)
Handles of all successfully decoded 2D data code symbols.
. decodedDataStrings (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; HTuple (string)
Decoded data strings of all detected 2D data code symbols in the image.
Example (Syntax: HDevelop)
* Read an image
read_image (Image, ’datacode/ecc200/ecc200_cpu_010’)
* Read the symbol in the image
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, [], [],
ResultHandles, DecodedDataStrings)
* Display all symbols, the strings encoded in them, and the module size
dev_set_color (’green’)
for i := 0 to |ResultHandles| - 1 by 1
SymbolXLD := SymbolXLDs[i+1]
dev_display (SymbolXLD)
get_contour_xld (SymbolXLD, Row, Col)
set_tposition (WindowHandle, max(Row), min(Col))
write_string (WindowHandle, DecodedDataStrings[i])
get_data_code_2d_results (DataCodeHandle, ResultHandles[i],
[’module_height’,’module_width’], ModuleSize)
new_line (WindowHandle)
write_string (WindowHandle, ’module size = ’ + ModuleSize[0] + ’x’ +
ModuleSize[1])
endfor
Result
The operator FindDataCode2d returns the value 2 (H_MSG_TRUE) if the given parameters are correct. Oth-
erwise, an exception will be raised.
Parallelization Information
FindDataCode2d is reentrant and processed without parallelization.
Possible Predecessors
CreateDataCode2dModel, ReadDataCode2dModel, SetDataCode2dParam
Possible Successors
GetDataCode2dResults, GetDataCode2dObjects, WriteDataCode2dModel
See also
CreateDataCode2dModel, SetDataCode2dParam, GetDataCode2dResults,
GetDataCode2dObjects
Module
Data Code
Access iconic objects that were created during the search for 2D data code symbols.
The operator GetDataCode2dObjects provides access to iconic objects that were created during the last call
of FindDataCode2d while searching for and reading the 2D data code symbols. Besides the name of the object
(objectName), the 2D data code model (dataCodeHandle) must be passed to GetDataCode2dObjects.
In addition, in candidateHandle a handle of a result or candidate structure or a string identifying a
group of candidates (see GetDataCode2dResults) must be passed. These handles are returned by
FindDataCode2d for all successfully decoded symbols and by GetDataCode2dResults for a group of
candidates. If these operators return several handles in a tuple, the individual handles can be accessed by normal
tuple operations.
Some objects are not accessible without setting the model parameter ’persistence’ to 1 (see
SetDataCode2dParam). The persistence must be set before calling FindDataCode2d, either while
creating the model with CreateDataCode2dModel or with SetDataCode2dParam.
Currently, the following iconic objects can be retrieved:
Regions of the modules
These region arrays correspond to the areas that were used for the classification. The returned object is a region
array. Hence it cannot be requested for a group of candidates. Therefore, a single result handle must be passed in
candidateHandle. The model persistence must be 1 for this object. In addition, requesting the module ROIs
makes sense only for symbols that were detected as valid symbols. For other candidates, whose processing was
aborted earlier, the module ROIs are not available.
XLD contour
HALCON 8.0.2
1208 CHAPTER 15. TOOLS
This object can be requested for any group of results or for any single candidate or symbol handle. The persistence
setting is of no relevance.
Pyramid images
* Example demonstrating how to access the iconic objects of the data code
* search.
* Get the handles of all candidates that were detected as a symbol but
* could not be read
get_data_code_2d_results (DataCodeHandle, 'all_undecoded', 'handle',
HandlesUndecoded)
* For every undecoded symbol, get the contour and the classified
* module regions
for i := 0 to |HandlesUndecoded| - 1 by 1
* Get the contour of the symbol
dev_set_color ('blue')
get_data_code_2d_objects (SymbolXLD, DataCodeHandle, HandlesUndecoded[i],
'candidate_xld')
* Get the module regions of the foreground modules
dev_set_color ('green')
get_data_code_2d_objects (ModuleFG, DataCodeHandle, HandlesUndecoded[i],
'module_1_rois')
* Get the module regions of the background modules
dev_set_color ('red')
get_data_code_2d_objects (ModuleBG, DataCodeHandle, HandlesUndecoded[i],
'module_0_rois')
* Stop for inspecting the image
stop ()
endfor
Result
The operator GetDataCode2dObjects returns the value 2 (H_MSG_TRUE) if the given parameters are correct and the requested objects are available for the last symbol search. Otherwise, an exception will be raised.
Parallelization Information
GetDataCode2dObjects is reentrant and processed without parallelization.
Possible Predecessors
FindDataCode2d, QueryDataCode2dParams
Possible Successors
GetDataCode2dResults
See also
QueryDataCode2dParams, GetDataCode2dResults, GetDataCode2dParam,
SetDataCode2dParam
Module
Data Code
'symbol_cols_min': minimum number of data columns in the symbol in codewords, i.e., excluding the codewords of the start/stop pattern and of the two row indicators.
'symbol_cols_max': maximum number of data columns in the symbol in codewords, i.e., excluding the codewords of the start/stop pattern and of the two row indicators.
'symbol_rows_min': minimum number of module rows in the symbol.
'symbol_rows_max': maximum number of module rows in the symbol.
It is possible to query the values of several or all parameters with a single operator call by passing a tuple containing the names of all desired parameters to genParamNames. As a result, a tuple of the same length with the corresponding values is returned in genParamValues.
Parameter
. dataCodeHandle (input_control) . . . . . . . . . . . . . . . . . . datacode_2d ; HDataCode2D / HTuple (IntPtr)
Handle of the 2D data code model.
. genParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; HTuple (string)
Names of the generic parameters that are to be queried for the 2D data code model.
Default Value : "contrast_min"
List of values : GenParamNames ∈ {"strict_model", "persistence", "polarity", "mirrored", "contrast_min",
"model_type", "version_min", "version_max", "symbol_size_min", "symbol_size_max", "symbol_cols_min",
"symbol_cols_max", "symbol_rows_min", "symbol_rows_max", "symbol_shape", "module_size_min",
"module_size_max", "module_width_min", "module_width_max", "module_aspect_min",
"module_aspect_max", "module_gap_col_min", "module_gap_col_max", "module_gap_row_min",
"module_gap_row_max", "slant_max", "module_grid", "position_pattern_min"}
. genParamValues (output_control) . . . . . . . .attribute.value(-array) ; HTuple (string / int / long / double)
Values of the generic parameters.
Result
The operator GetDataCode2dParam returns the value 2 (H_MSG_TRUE) if the given parameters are correct.
Otherwise, an exception will be raised.
Parallelization Information
GetDataCode2dParam is reentrant and processed without parallelization.
Possible Predecessors
QueryDataCode2dParams, SetDataCode2dParam, FindDataCode2d
Possible Successors
FindDataCode2d, WriteDataCode2dModel
Alternatives
WriteDataCode2dModel
See also
QueryDataCode2dParams, SetDataCode2dParam, GetDataCode2dResults,
GetDataCode2dObjects, FindDataCode2d
Module
Data Code
Get the alphanumerical results that were accumulated during the search for 2D data code symbols.
The operator GetDataCode2dResults provides access to several alphanumerical results that were calculated while searching for and reading the 2D data code symbols. These results describe the search process in general or one of the investigated candidates, independently of whether it could be read or not. In most cases the results are not related to the symbol with the highest resolution but depend on the pyramid level that was investigated when the reading process was aborted. To access a result, the name of the parameter (resultNames) and the 2D data code model (dataCodeHandle) must be passed. In addition, in candidateHandle a handle of a result or candidate structure or a string identifying a group of candidates must be passed. These handles are returned by FindDataCode2d for all successfully decoded symbols and by GetDataCode2dResults for a group of candidates. If these operators return several handles in a tuple, the individual handles can be accessed by normal tuple operations.
Most results consist of one value. Several of these results can be queried for a specific candidate in a single call.
The values returned in resultValues correspond to the appropriate parameter names in the resultNames
tuple. As an alternative, these results can also be queried for a group of candidates (see below). In this case, only
one parameter can be requested per call, and resultValues contains one value for every candidate.
Furthermore, there exists another group of results that consist of more than one value (e.g., 'bin_module_data'), which are returned as a tuple. These parameters must always be queried exclusively: one result for one specific candidate.
Apart from the candidate-specific results, there are a number of results referring to the search process in general. This is indicated by passing the string 'general' in candidateHandle instead of a candidate handle.
Candidate groups
The following candidate group names are predefined and can be passed as candidateHandle instead of a
single handle:
'general': This value is used for results that refer to the last FindDataCode2d call in general but not to a specific candidate.
'all_candidates': All candidates (including the successfully decoded symbols) that were investigated during the last call of FindDataCode2d.
'all_results': All symbols that were successfully decoded during the last call of FindDataCode2d.
'all_undecoded': All candidates of the last call of FindDataCode2d that were detected as 2D data code symbols but could not be decoded. For these candidates the error correction detected too many errors, or there was a failure while decoding the error-corrected data because of inconsistent data.
'all_aborted': All candidates of the last call of FindDataCode2d that could not be identified as valid 2D data code symbols and for which the processing was aborted.
Supported results
Currently, the access to the following results, which are returned in resultValues, is supported:
General results that do not depend on specific candidates (all data code types) – 'general':
'symbol_rows', 'symbol_cols': ECC 200 and QR Code: detected size of the symbol in modules: number of rows and columns including the finder pattern; PDF417: detected number of rows and data columns (each 17 modules wide) within the symbol (excluding the start/stop patterns and the row indicators).
'module_height', 'module_width': height and width of the modules in pixels.
'contrast': estimation of the symbol's contrast. This value is based on the gradient of the edge between the finder pattern and the background.
'decoded_string': result string that is encoded in the symbol – this query is useful only for successfully decoded symbols. It returns the same string as FindDataCode2d and is subject to the same restrictions concerning the maximum length of 1024 characters. If the result string is longer, the parameter 'decoded_data' can be used to get a tuple with all ASCII characters of the decoded string.
'decoding_error': decoding error – for successfully decoded symbols this is the number of errors that were detected and corrected by the error correction. The number of errors corresponds to the number of codewords that led to errors when trying to read them. If the error correction failed, a negative error code is returned.
'symbology_ident': The Symbology Identifier is used to indicate that the data code contains the FNC1 and/or ECI characters.
FNC1 (Function 1 Character) is used if the data formatting conforms to specific predefined industry standards.
The ECI protocol (Extended Channel Interpretation) is used to change the default interpretation of the encoded data. A 6-digit code number after the ECI character switches the interpretation of the following characters from the default to a specific code page like an international character set. In the output stream the ECI switch is coded as '\nnnnnn'. Therefore, all backslashes ('\', ASCII code 92) that occur in the normal output stream have to be doubled.
The 'symbology_ident' parameter returns only the actual identifier value m (m ∈ [0, 6] for ECC 200 and QR Code, and m ∈ [0, 2] for PDF417) according to the specifications of Data Matrix, QR Code, and PDF417, but not the identifier prefixes ']d', ']Q', and ']L' for Data Matrix, QR Code, and PDF417, respectively. If required, the Symbology Identifier composed of the prefix and the value m has to be prepended to the decoded string (normally only if m > 1) manually. Symbols that contain ECI codes (and hence doubled backslashes) can be recognized by the following identifier values: ECC 200: 4, 5, and 6; QR Code: 2, 4, and 6; PDF417: 1.
• QR Codes:
'version': version number that corresponds to the size of the symbol (version 1 = 21 × 21, version 2 = 25 × 25, ..., version 40 = 177 × 177).
'symbol_size': detected size of the symbol in modules.
'model_type': type of the QR Code model. HALCON supports both the older, original specification for QR Codes (Model 1) and the newer, enhanced form (Model 2).
'mask_pattern_ref', 'error_correction_level': If a candidate is recognized as a QR Code, the first step is to read the format information encoded in the symbol. This includes a code for the pattern that was used for masking the data modules (0 ≤ 'mask_pattern_ref' ≤ 7) and the level of the error correction ('error_correction_level' ∈ ['L', 'M', 'Q', 'H']).
• PDF417:
'module_aspect': module aspect ratio; this corresponds to the ratio of 'module_height' to 'module_width'.
'error_correction_level': If a candidate is recognized as a PDF417 symbol, the first step is to read the format information encoded in the symbol. This includes the error correction level that was used during encoding ('error_correction_level' ∈ [0, 8]).
Results that return a tuple of values and hence can be requested only separately and only for a single candidate:
decoded words within the error correction blocks are counted. As for 2D data codes, the modulation grade indicates how strong the amplitudes, i.e., the extremal intensities, of the bars and spaces are. The decodability grade measures the deviation of the actual length of bars and spaces with respect to their reference length. Finally, the defects grade refers to a measurement of how perfect the reflectance profiles of bars and spaces are.
• PDF417:
'macro_exist': symbols that are part of a group of symbols are called "Macro PDF417" symbols. These symbols contain additional information within a control block. For macro symbols 'macro_exist' returns the value 1, while for conventional symbols 0 is returned.
'macro_segment_index': returns the index of the symbol in the group. For macro symbols this information is obligatory.
'macro_file_id': returns the group identifier as a string. For macro symbols this information is obligatory.
'macro_segment_count': returns the number of symbols that belong to the group. For macro symbols this information is optional.
'macro_time_stamp': returns the time stamp of the source file, expressed as the elapsed time in seconds since 1970-01-01 00:00:00 GMT, as a string. For macro symbols this information is optional.
'macro_checksum': returns the CRC checksum computed over the entire source file using the CCITT-16 polynomial. For macro symbols this information is optional.
'macro_last_symbol': returns 1 if the symbol is the last one within the group of symbols; otherwise 0 is returned. For macro symbols this information is optional.
Status message
The status parameter that can be queried for all candidates reveals why and where in the evaluation phase a candidate was discarded. The following list shows the most important status messages in the order of their generation during the evaluation phase:
'error correction failed' – The error correction failed because there are too many modules that could not be interpreted correctly. Normally, this indicates that the print and/or image quality is too bad, but it may also be provoked by a wrong mirroring specification in the model.
'decoding failed: special decoding reader requested' – The decoded data contains a message for programming the data code reader. This feature is not supported.
'decoding failed: inconsistent data' – The data coded in the symbol is not consistent and therefore cannot be read.
• QR Code:
'aborted: too close to image border' – The symbol candidate is too close to the image border. Only symbols that are completely within the image can be read.
'aborted adjusting: finder patterns' – It is not possible to determine the exact position of the finder pattern in the processing image.
'aborted symbol: different number of rows and columns' – It is not possible to determine a consistent symbol size for both dimensions from the size and the position of the detected finder pattern. When reading Model 2 symbols, this error may occur only with small symbols (< version 7, i.e., smaller than 45 × 45 modules). For bigger symbols the size is coded within the symbol in the version information region; the estimated size is used only as a hint for finding the version information region.
'aborted symbol: invalid size' – The size determined by the size and the position of the detected finder pattern is too small or (only Model 1) too big.
'decoding of version information failed' – While processing a Model 2 symbol, the symbol version as determined by the finder pattern is at least 7 (≥ 45 × 45 modules). However, reading the version from the appropriate region in the symbol failed.
'aborted symbol: size does not fit strict model definition' – Although the deduced symbol size is valid, it is not inside the range predefined by the model.
'decoding of format information failed' – Reading the format information (mask pattern and error correction level) from the appropriate region in the symbol failed.
'error correction failed' – The error correction failed because there are too many modules that could not be interpreted correctly. Normally, this indicates that the print and/or image quality is too bad, but it may also be provoked by a wrong mirroring specification in the model.
'decoding failed: inconsistent data' – The data coded in the symbol is not consistent and therefore cannot be read.
• PDF417:
'aborted: too close to image border' – The symbol candidate is too close to the image border. Only symbols that are completely within the image can be read.
'aborted symbol: size does not fit strict model definition' – Although the deduced symbol size is valid, it is not inside the range predefined by the model.
'error correction failed' – The error correction failed because there are too many modules that could not be interpreted correctly. Normally, this indicates that the print and/or image quality is too bad, but it may also be provoked by a wrong mirroring specification in the model.
'decoding failed: special decoding reader requested' – The decoded data contains a message for programming the data code reader. This feature is not supported.
'decoding failed: inconsistent data' – The data coded in the symbol is not consistent and therefore cannot be read.
While processing a candidate, it is possible that internally several iterations for reading the symbol are performed. If all attempts fail, normally the last abortion state is stored in the candidate structure. For example, if the QR Code model enables symbols with both the Model 1 and the Model 2 specification, FindDataCode2d first tries to interpret the symbol as a Model 2 type. If this fails, a Model 1 interpretation is performed. If this also fails, the status variable is set to the latest failure state of the Model 1 interpretation. In order to get the error state of the Model 2 branch, the 'model_type' parameter of the data code model must be restricted accordingly (with SetDataCode2dParam).
Parameter
. dataCodeHandle (input_control) . . . . . . . . . . . . . . . . . . datacode_2d ; HDataCode2D / HTuple (IntPtr)
Handle of the 2D data code model.
. candidateHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (string / int / long)
Handle of the 2D data code candidate or name of a group of candidates for which the data is required.
Default Value : "all_candidates"
Suggested values : CandidateHandle ∈ {0, 1, 2, "general", "all_candidates", "all_results",
"all_undecoded", "all_aborted"}
. resultNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; HTuple (string)
Names of the results of the 2D data code to return.
Default Value : "status"
Suggested values : ResultNames ∈ {"min_search_level", "max_search_level", "pass_num", "result_num",
"candidate_num", "undecoded_num", "aborted_num", "handle", "pass", "status", "search_level",
"process_level", "polarity", "module_gap", "mirrored", "model_type", "symbol_rows", "symbol_cols",
"symbol_size", "version", "module_height", "module_width", "module_aspect", "slant", "contrast",
"module_grid", "decoded_string", "decoding_error", "symbology_ident", "mask_pattern_ref",
"error_correction_level", "bin_module_data", "raw_coded_data", "corr_coded_data", "decoded_data",
"quality_isoiec15415", "structured_append", "macro_exist", "macro_segment_index", "macro_file_id",
"macro_segment_count", "macro_time_stamp", "macro_checksum", "macro_last_symbol"}
. resultValues (output_control) . . . . . . . . . . attribute.value(-array) ; HTuple (string / int / long / double)
List with the results.
Example (Syntax: HDevelop)
* Example demonstrating how to access the results of the data code search.
* For every undecoded symbol, get the contour, the symbol size, and
* the binary module data
dev_set_color ('red')
for i := 0 to |HandlesUndecoded| - 1 by 1
* Get the contour of the symbol
get_data_code_2d_objects (SymbolXLD, DataCodeHandle, HandlesUndecoded[i],
'candidate_xld')
* Get the symbol size
get_data_code_2d_results (DataCodeHandle, HandlesUndecoded[i],
['symbol_rows','symbol_cols'], SymbolSize)
* Get the binary module data (has to be queried exclusively)
get_data_code_2d_results (DataCodeHandle, HandlesUndecoded[i],
'bin_module_data', BinModuleData)
* Stop for inspecting the data
stop ()
endfor
Result
The operator GetDataCode2dResults returns the value 2 (H_MSG_TRUE) if the given parameters are correct and the requested results are available for the last symbol search. Otherwise, an exception will be raised.
Parallelization Information
GetDataCode2dResults is reentrant and processed without parallelization.
Possible Predecessors
FindDataCode2d, QueryDataCode2dParams
Possible Successors
GetDataCode2dObjects
See also
QueryDataCode2dParams, GetDataCode2dObjects, GetDataCode2dParam,
SetDataCode2dParam
Module
Data Code
The returned parameter list depends only on the type of the data code and not on the current state of the model or
its results.
Parameter
* This example demonstrates how the names of all available model parameters
* can be queried. This is used to request first the settings of the
* untrained and then the settings of the trained model.
Result
The operator QueryDataCode2dParams returns the value 2 (H_MSG_TRUE) if the given parameters are
correct. Otherwise, an exception will be raised.
Parallelization Information
QueryDataCode2dParams is reentrant and processed without parallelization.
Possible Predecessors
CreateDataCode2dModel
Possible Successors
GetDataCode2dParam, GetDataCode2dResults, GetDataCode2dObjects
Module
Data Code
Result
The operator ReadDataCode2dModel returns the value 2 (H_MSG_TRUE) if the named 2D data code file
was found and correctly read. Otherwise, an exception will be raised.
Parallelization Information
ReadDataCode2dModel is processed completely exclusively without parallelization.
Possible Successors
FindDataCode2d
Alternatives
CreateDataCode2dModel
See also
WriteDataCode2dModel, ClearDataCode2dModel, ClearAllDataCode2dModels
Module
Data Code
Attention: If this parameter is set together with a list of other parameters, this parameter must be at the
first position.
'symbol_cols_max': maximum number of data columns in the symbol in codewords, i.e., excluding the codewords of the start/stop pattern and of the two row indicators.
Value range: [1 ... 30]
Default: 20 (enhanced: 30)
'symbol_rows_min': minimum number of module rows in the symbol.
Value range: [3 ... 90]
Default: 5 (enhanced: 3)
'symbol_rows_max': maximum number of module rows in the symbol.
Value range: [3 ... 90]
Default: 45 (enhanced: 90)
'symbol_cols': sets 'symbol_cols_min' and 'symbol_cols_max' to the same value.
'symbol_rows': sets 'symbol_rows_min' and 'symbol_rows_max' to the same value.
When setting the model parameters, attention should be paid especially to the following issues:
• Symbols whose size does not comply with the size restrictions made in the model (with the generic parameters 'symbol_rows*', 'symbol_cols*', 'symbol_size*', or 'version*') will not be read if 'strict_model' is set to 'yes', which is the default. This behavior is useful if symbols of a specific size have to be detected while other symbols should be ignored. On the other hand, neglecting this parameter can lead to problems, e.g., if one symbol of an image sequence is used to adjust the model (including the symbol size), but later in the application the symbol size varies, which is quite common in practice.
• The run-time of FindDataCode2d depends mostly on the following model parameters, especially in cases where the requested number of symbols cannot be found in the image: 'polarity', 'module_size_min' (ECC 200 and QR Code), 'module_size_min' together with 'module_aspect_min' (PDF417), and, if the minimum module size is very small, also the parameters 'module_gap_*' (ECC 200 and QR Code) and, for QR Code, 'position_pattern_min'.
Parameter
* Read an image
read_image (Image, 'datacode/ecc200/ecc200_cpu_010')
* Read the symbol in the image
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, [], [],
ResultHandles, DecodedDataStrings)
* Clear the model
clear_data_code_2d_model (DataCodeHandle)
Result
The operator SetDataCode2dParam returns the value 2 (H_MSG_TRUE) if the given parameters are correct.
Otherwise, an exception will be raised.
Parallelization Information
SetDataCode2dParam is reentrant and processed without parallelization.
Possible Predecessors
CreateDataCode2dModel, ReadDataCode2dModel
Possible Successors
GetDataCode2dParam, FindDataCode2d, WriteDataCode2dModel
Alternatives
ReadDataCode2dModel
See also
QueryDataCode2dParams, GetDataCode2dParam, GetDataCode2dResults,
GetDataCode2dObjects
Module
Data Code
Result
The operator WriteDataCode2dModel returns the value 2 (H_MSG_TRUE) if the passed handle is valid and
if the model can be written into the named file. Otherwise, an exception will be raised.
Parallelization Information
WriteDataCode2dModel is reentrant and processed without parallelization.
Possible Predecessors
SetDataCode2dParam, FindDataCode2d
Alternatives
GetDataCode2dParam
See also
CreateDataCode2dModel, SetDataCode2dParam, FindDataCode2d
Module
Data Code
15.7 Fourier-Descriptor
Normalizing of the Fourier coefficients with respect to the displacement of the starting point.
The operator AbsInvarFourierCoeff normalizes the Fourier coefficients with regard to displacements of the starting point, which occur when an object is rotated. The contour tracer GetRegionContour starts recording the contour in the upper left-hand corner of the region and follows the contour clockwise. If the object is rotated, the starting value for the contour point chain is different, which leads to a phase shift in the frequency space. The following two kinds of normalizing are available:
abs_amount: The phase information will be eliminated; the normalizing does not retain the structure, i.e., if the AZ-invariants are transformed back, no similarity with the pattern can be recognized anymore.
az_invar1: AZ-invariants of the 1st order execute the normalizing with respect to displacing the starting point so that the structure is retained; they are, however, more prone to local and global disturbances, in particular to projective distortions.
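The effect of 'abs_amount' can be illustrated without HALCON: a shift of the starting point multiplies every Fourier coefficient by a pure phase factor, so the coefficient magnitudes remain unchanged. A small self-contained sketch (plain-Python DFT, not HALCON's implementation):

```python
import cmath
import math

def dft(z):
    """Naive DFT of a closed contour given as a list of complex points."""
    n = len(z)
    return [sum(z[k] * cmath.exp(-2j * cmath.pi * i * k / n) for k in range(n))
            for i in range(n)]

# A closed contour sampled at 32 points (a deformed circle, x + i*y).
n = 32
contour = [(2.0 + 0.3 * math.cos(5 * 2 * math.pi * k / n)) *
           cmath.exp(1j * 2 * math.pi * k / n) for k in range(n)]

# The same contour traced from a different starting point.
shifted = contour[7:] + contour[:7]

# The shift only rotates the phase of each coefficient, so the
# magnitudes ('abs_amount') are identical for both tracings.
mags = [abs(c) for c in dft(contour)]
mags_shifted = [abs(c) for c in dft(shifted)]
print(all(abs(a - b) < 1e-9 for a, b in zip(mags, mags_shifted)))  # True
```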
Parameter
. realInvar (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Real parts of the normalized Fourier coefficients.
. imaginaryInvar (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Imaginary parts of the normalized Fourier coefficients.
. coefP (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Normalizing coefficients p.
Default Value : 1
Suggested values : CoefP ∈ {1, 2}
Restriction : CoefP ≥ 1
. coefQ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Normalizing coefficients q.
Default Value : 1
Suggested values : CoefQ ∈ {1, 2}
Restriction : (CoefQ ≥ 1) ∧ (CoefQ ≠ CoefP)
. AZInvar (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Order of the AZ-invariants.
Default Value : "abs_amount"
List of values : AZInvar ∈ {"abs_amount", "az_invar1"}
. realAbsInvar (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Real parts of the normalized Fourier coefficients.
get_region_contour(single,&row,&col);
length_of_contour = length_tuple(row);
move_contour_orig(row,col,&trow,&tcol);
prep_contour_fourier(trow,tcol,"unsigned_area",&param_scale);
fourier_1dim(trow,tcol,param_scale,50,&frow,&fcol);
invar_fourier_coeff(frow,fcol,1,"affine_invar",&invrow,&invcol);
abs_invar_fourier_coeff(invrow,invcol,1,2,"az_invar1",&absrow,&abscol);
fourier_1dim_inv(absrow,abscol,length_of_contour,&fsynrow,&fsyncol);
Parallelization Information
AbsInvarFourierCoeff is reentrant and processed without parallelization.
Possible Predecessors
InvarFourierCoeff
Possible Successors
Fourier1dimInv, MatchFourierCoeff
Module
Foundation
get_region_contour(single,&row,&col);
move_contour_orig(row,col,&trow,&tcol);
prep_contour_fourier(trow,tcol,"unsigned_area",&param_scale);
fourier_1dim(trow,tcol,param_scale,&frow,&fcol);
invar_fourier_coeff(frow,fcol,1,"affine_invar",&invrow,&invcol);
abs_invar_fourier_coeff(invrow,invcol,1,2,"az_invar1",&absrow,&abscol);
Parallelization Information
Fourier1dim is reentrant and processed without parallelization.
Possible Predecessors
PrepContourFourier
Possible Successors
InvarFourierCoeff, DispPolygon
Module
Foundation
get_region_contour(single,&row,&col);
length_of_contour = row.Num();
move_contour_orig(row,col,&trow,&tcol);
prep_contour_fourier(trow,tcol,"unsigned_area",&param_scale);
fourier_1dim(trow,tcol,param_scale,50,&frow,&fcol);
invar_fourier_coeff(frow,fcol,1,"affine_invar",&invrow,&invcol);
abs_invar_fourier_coeff(invrow,invcol,1,2,"az_invar1",&absrow,&abscol);
fourier_1dim_inv(absrow,abscol,length_of_contour,&fsynrow,&fsyncol);
Parallelization Information
Fourier1dimInv is reentrant and processed without parallelization.
Possible Predecessors
InvarFourierCoeff, Fourier1dim
Possible Successors
DispPolygon
Module
Foundation
The control parameter invarType indicates up to which level the affine representation shall be normalized. Please note that indicating a certain level implies that the normalizing is executed with regard to all levels below. For most applications, a subsequent normalizing of the starting point (see AbsInvarFourierCoeff) is recommended.
Parameter
. realCoef (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Real parts of the Fourier coefficients.
. imaginaryCoef (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Imaginary parts of the Fourier coefficients.
. normPar (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Input of the normalizing coefficients.
Default Value : 1
Suggested values : NormPar ∈ {1, 2}
Restriction : NormPar ≥ 1
. invarType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Indicates the level of the affine mappings.
Default Value : "affine_invar"
List of values : InvarType ∈ {"affine_invar", "simil_invar", "congr_invar", "transl_invar"}
. realInvar (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Real parts of the normalized Fourier coefficients.
. imaginaryInvar (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real-array ; HTuple (double)
Imaginary parts of the normalized Fourier coefficients.
Example (Syntax: C++)
prep_contour_fourier(trow,tcol,"unsigned_area",&param_scale);
fourier_1dim(trow,tcol,param_scale,&frow,&fcol);
invar_fourier_coeff(frow,fcol,1,"affine_invar",&invrow,&invcol);
abs_invar_fourier_coeff(invrow,invcol,1,2,"az_invar1",&absrow,&abscol);
Parallelization Information
InvarFourierCoeff is reentrant and processed without parallelization.
Possible Predecessors
Fourier1dim
Possible Successors
AbsInvarFourierCoeff
Module
Foundation
none: No attenuation.
1/index: Absolute amounts of the Fourier coefficients will be divided by their index.
1/(index*index): Absolute amounts of the Fourier coefficients will be divided by their squared index.
The higher the result value, the greater the difference between the pattern and the test contour. If the number of coefficients is not the same, only the first n coefficients will be compared. The parameter maxCoef indicates the number of coefficients to be compared. If maxCoef is set to zero, all coefficients will be used.
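The damping schemes can be illustrated with a small sketch of a weighted coefficient distance. The exact formula used by MatchFourierCoeff is not documented here, so the following only demonstrates how '1/index' and '1/(index*index)' attenuate higher coefficients and how the maxCoef cutoff works:

```python
def damped_fourier_distance(real1, imag1, real2, imag2,
                            max_coef=0, damping="1/index"):
    """Illustrative damped distance between two Fourier coefficient sets.

    Not MatchFourierCoeff's exact formula; it only demonstrates the
    attenuation by 1/index or 1/(index*index) and the maxCoef cutoff.
    """
    n = min(len(real1), len(real2))     # compare only the first n coefficients
    if max_coef > 0:                    # max_coef == 0 means: use all
        n = min(n, max_coef)
    dist = 0.0
    for i in range(n):
        diff = abs(complex(real1[i] - real2[i], imag1[i] - imag2[i]))
        if damping == "1/index":
            diff /= i + 1
        elif damping == "1/(index*index)":
            diff /= (i + 1) ** 2
        dist += diff                    # 'none': no attenuation
    return dist

# Identical coefficient sets yield distance 0; larger values indicate
# greater differences between pattern and test contour.
print(damped_fourier_distance([1.0, 2.0], [0.0, 0.0], [1.0, 2.0], [0.0, 0.0]))  # 0.0
```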
Parameter
. realCoef1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Real parts of the pattern Fourier coefficients.
. imaginaryCoef1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Imaginary parts of the pattern Fourier coefficients.
. realCoef2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Real parts of the Fourier coefficients to be compared.
. imaginaryCoef2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Imaginary parts of the Fourier coefficients to be compared.
. maxCoef (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Total number of Fourier coefficients.
Default Value : 50
Suggested values : MaxCoef ∈ {0, 5, 10, 15, 20, 30, 40, 50, 70, 100, 200, 400}
Restriction : MaxCoef ≥ 0
. damping (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Kind of attenuation.
Default Value : "1/index"
Suggested values : Damping ∈ {"none", "1/index", "1/(index*index)"}
. distance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; HTuple (double)
Similarity of the contours.
Example (Syntax: C++)
prep_contour_fourier(trow,tcol,"unsigned_area",¶m_scale);
fourier_1dim(trow,tcol,param_scale,50,&frow,&fcol);
invar_fourier_coeff(frow,fcol,1,"affine_invar",&invrow,&invcol);
abs_invar_fourier_coeff(invrow,invcol,1,2,
"az_invar1",&absrow,&abscol);
match_fourier_coeff(contur1_row, contur1_col,
contur2_row, contur2_col, 50,
"1/index", &Distance_wert);
Parallelization Information
MatchFourierCoeff is reentrant and processed without parallelization.
Possible Predecessors
InvarFourierCoeff
Module
Foundation
HALCON 8.0.2
1232 CHAPTER 15. TOOLS
Please note that in contrast to the signed or unsigned area the affine mapping of the radian will not be transformed
linearly.
Parameter
get_region_contour(single,&row,&col);
move_contour_orig(row,col,&trow,&tcol);
prep_contour_fourier(trow,tcol,"unsigned_area",¶m_scale);
fourier_1dim(trow,tcol,param_scale,&frow,&fcol);
Parallelization Information
PrepContourFourier is reentrant and processed without parallelization.
Possible Predecessors
MoveContourOrig
Possible Successors
Fourier1dim
Module
Foundation
15.8 Function
static void HOperatorSet.AbsFunct1d ( HTuple function,
out HTuple functionAbsolute )
HFunction1D HFunction1D.AbsFunct1d ( )
Absolute value of the y values.
AbsFunct1d calculates the absolute values of all y values of function.
Parameter
composedFunction(x) = function2(function1(x)) .
composedFunction has the same domain (x-range) as function1. If the range (y-value range) of
function1 is larger than the domain of function2, the parameter border determines the border treatment of
function2. For border=’zero’ values outside the domain of function2 are set to 0, for border=’constant’
they are set to the corresponding value at the border, for border=’mirror’ they are mirrored at the border, and for
border=’cyclic’ they are continued cyclically. To obtain y-values, function2 is interpolated linearly.
Parameter
. function1 (input_control) . . . . . . . . . function_1d-array ; HFunction1D / HTuple (double / int / long)
Input function 1.
. function2 (input_control) . . . . . . . . . function_1d-array ; HFunction1D / HTuple (double / int / long)
Input function 2.
. border (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Border treatment for the input functions.
Default Value : "constant"
List of values : Border ∈ {"zero", "constant", "mirror", "cyclic"}
. composedFunction (output_control) . . . . . . function_1d-array ; HFunction1D / HTuple (double /
int / long)
Composed function.
Parallelization Information
ComposeFunct1d is reentrant and processed without parallelization.
Possible Predecessors
CreateFunct1dPairs, CreateFunct1dArray
Module
Foundation
HALCON 8.0.2
1234 CHAPTER 15. TOOLS
representation of the function which needs more storage (because all (x,y) pairs are stored) and sometimes cannot
be processed as efficiently as functions created by CreateFunct1dArray.
Parameter
Alternatives
CreateFunct1dArray, ReadFunct1d
See also
Funct1dToPairs
Module
Foundation
HALCON 8.0.2
1236 CHAPTER 15. TOOLS
HALCON 8.0.2
1238 CHAPTER 15. TOOLS
Parameter
If mode is set to ’plateaus_center’, areas with a function value that is constant throughout several sampling points
are also considered. If such an area is identified as being a flat extremum, its center coordinate is returned.
Parameter
y1 (x) = a1 y2 (a3 x + a4 ) + a2 .
The transformation parameters are determined by a least-squares minimization of the following function:
n−1
X 2
y1 (xi ) − a1 y2 (a3 xi + a4 ) + a2 .
i=0
The values of the function y2 are obtained by linear interpolation. The parameter border determines the val-
ues of the function function2 outside of its domain. For border=’zero’ these values are set to 0, for
border=’constant’ they are set to the corresponding value at the border, for border=’mirror’ they are mirrored
at the border, and for border=’cyclic’ they are continued cyclically. The calculated transformation parameters
are returned as a 4-tuple in paramsVal. If some of the parameter values are known, the respective parameters can
be excluded from the least-squares adjustment by setting the corresponding value in the tuple useParams to the
value ’false’. In this case, the tuple paramsConst must contain the known value of the respective parameter. If
a parameter is used for the adjustment (useParams = ’true’), the corresponding parameter in paramsConst is
ignored. On output, MatchFunct1dTrans additionally returns the sum of the squared errors chiSquare of
HALCON 8.0.2
1240 CHAPTER 15. TOOLS
the resulting function, i.e., the function obtained by transforming the input function with the transformation param-
eters, as well as the covariance matrix covar of the transformation parameters paramsVal. These parameters
can be used to decide whether a successful matching of the functions was possible.
Parameter
. function1 (input_control) . . . . . . . . . function_1d-array ; HFunction1D / HTuple (double / int / long)
Function 1.
. function2 (input_control) . . . . . . . . . function_1d-array ; HFunction1D / HTuple (double / int / long)
Function 2.
. border (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Border treatment for function 2.
Default Value : "constant"
List of values : Border ∈ {"zero", "constant", "mirror", "cyclic"}
. paramsConst (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; HTuple (double)
Values of the parameters to remain constant.
Default Value : [1.0,0.0,1.0,0.0]
Number of elements : 4
. useParams (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; HTuple (string)
Should a parameter be adapted for it?
Default Value : ["true","true","true","true"]
List of values : UseParams ∈ {"true", "false"}
Number of elements : 4
. paramsVal (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; HTuple (double)
Transformation parameters between the functions.
Number of elements : 4
. chiSquare (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double)
Quadratic error of the output function.
. covar (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; HTuple (double)
Covariance Matrix of the transformation parameters.
Number of elements : 16
Parallelization Information
MatchFunct1dTrans is reentrant and processed without parallelization.
Possible Predecessors
CreateFunct1dArray, CreateFunct1dPairs
See also
GrayProjections
Module
Foundation
HFunction1D HFunction1D.NegateFunct1d ( )
Negation of the y values.
NegateFunct1d negates all y values of function.
Parameter
. function (input_control) . . . . . . . . . . .function_1d-array ; HFunction1D / HTuple (double / int / long)
Input function.
. functionInverted (output_control) . . . . . . function_1d-array ; HFunction1D / HTuple (double /
int / long)
Function with the negated y values.
Parallelization Information
NegateFunct1d is reentrant and processed without parallelization.
Possible Predecessors
CreateFunct1dPairs, CreateFunct1dArray
Module
Foundation
int HFunction1D.NumPointsFunct1d ( )
Number of control points of the function.
NumPointsFunct1d calculates the number of control points of function.
Parameter
HALCON 8.0.2
1242 CHAPTER 15. TOOLS
HALCON 8.0.2
1244 CHAPTER 15. TOOLS
yt (x) = a1 y(a3 x + a4 ) + a2 .
The output function transformedFunction is obtained by transforming the x and y values of the input func-
tion separately with the above formula, i.e., the output function is not sampled again. Therefore, the parameter a3
is restricted to a3 6= 0.0 . To resample a function, the operator SampleFunct1d can be used.
Parameter
. function (input_control) . . . . . . . . . . .function_1d-array ; HFunction1D / HTuple (double / int / long)
Input function.
. paramsVal (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; HTuple (double)
Transformation parameters between the functions.
Number of elements : 4
. transformedFunction (output_control) . . . . . . function_1d-array ; HFunction1D / HTuple (dou-
ble / int / long)
Transformed function.
Parallelization Information
TransformFunct1d is reentrant and processed without parallelization.
Possible Predecessors
CreateFunct1dPairs, CreateFunct1dArray, MatchFunct1dTrans
Module
Foundation
HALCON 8.0.2
1246 CHAPTER 15. TOOLS
HTuple HFunction1D.ZeroCrossingsFunct1d ( )
Calculate the zero crossings of a function.
ZeroCrossingsFunct1d calculates the zero crossings zeroCrossings of the function function. A
linear interpolation is applied to the function between its sampling points so that the coordinates of the zero crossing
can be calculated exactly. If an entire line segment between two sampling points has a value of 0, only the end
points of its supporting interval are returned.
Parameter
. function (input_control) . . . . . . . . . . .function_1d-array ; HFunction1D / HTuple (double / int / long)
Input function
. zeroCrossings (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Zero crossings of the input function
Parallelization Information
ZeroCrossingsFunct1d is reentrant and processed without parallelization.
Possible Predecessors
CreateFunct1dPairs, CreateFunct1dArray, SmoothFunct1dGauss, SmoothFunct1dMean
Module
Foundation
15.9 Geometry
RowA1 := 255
ColumnA1 := 10
RowA2 := 255
ColumnA2 := 501
HALCON 8.0.2
1248 CHAPTER 15. TOOLS
Result
AngleLl returns 2 (H_MSG_TRUE).
Parallelization Information
AngleLl is reentrant and processed without parallelization.
Alternatives
AngleLx
Module
Foundation
Calculate the angle between one line and the vertical axis.
The operator AngleLx calculates the angle between one line and the abscissa. As input the coordinates of two
points on the line (row1,column1, row2,column2) are expected. The calculation is performed as follows: We
interprete the line as a vector with starting point row1,column1 and end point row2,column2. Rotating the
vector counter clockwise onto the abscissa (center of rotation is the intersection point of the abscissa) yields the
angle. The result depends of the order of the points on line. The parameter angle returns the angle in radians,
ranging from −π ≤ angle ≤ π.
Parameter
. row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; HTuple (double / int / long)
Row coordinate the first point of the line.
. column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; HTuple (double / int / long)
Column coordinate of the first point of the line.
. row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; HTuple (double / int / long)
Row coordinate of the second point of the line.
. column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; HTuple (double / int / long)
Column coordinate of the second point of the line.
. angle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; HTuple (double)
Angle between the line and the abscissa [rad].
Example (Syntax: HDevelop)
RowX1 := 255
ColumnX1 := 10
RowX2 := 255
ColumnX2 := 501
disp_line (WindowHandle, RowX1, ColumnX1, RowX2, ColumnX2)
Row1 := 255
Column1 := 255
for i := 1 to 360 by 1
Row2 := 255 + sin(rad(i)) * 200
Column2 := 255 + cos(rad(i)) * 200
disp_line (WindowHandle, Row1, Column1, Row2, Column2)
angle_lx (Row1, Column1, Row2, Column2, Angle)
endfor
Result
AngleLx returns 2 (H_MSG_TRUE).
Parallelization Information
AngleLx is reentrant and processed without parallelization.
Alternatives
AngleLl
Module
Foundation
HALCON 8.0.2
1250 CHAPTER 15. TOOLS
Result
DistanceCc returns 2 (H_MSG_TRUE).
Parallelization Information
DistanceCc is reentrant and processed without parallelization.
Alternatives
DistanceSc, DistancePc, DistanceCcMin
See also
DistanceSr, DistancePr
Module
Foundation
Result
DistanceCcMin returns 2 (H_MSG_TRUE).
Parallelization Information
DistanceCcMin is reentrant and processed without parallelization.
Alternatives
DistanceSc, DistancePc, DistanceCc
See also
DistanceSr, DistancePr
Module
Foundation
HALCON 8.0.2
1252 CHAPTER 15. TOOLS
dev_close_window ()
read_image (Image, ’fabrik’)
dev_open_window (0, 0, 512, 512, ’white’, WindowHandle)
threshold (Image, Region, 180, 255)
connection (Region, ConnectedRegions)
select_shape (ConnectedRegions, SelectedRegions, ’area’, ’and’,
5000, 100000000)
dev_clear_window ()
dev_set_color (’black’)
dev_display (SelectedRegions)
dev_set_color (’red’)
Row1 := 100
Row2 := 400
for Col := 50 to 400 by 4
disp_line (WindowHandle, Row1, Col+100, Row2, Col)
distance_lr (SelectedRegions, Row1, Col+100, Row2, Col,
DistanceMin, DistanceMax)
endfor
Result
DistanceLr returns 2 (H_MSG_TRUE).
Parallelization Information
DistanceLr is reentrant and processed without parallelization.
Alternatives
DistanceLc, DistancePr, DistanceSr, DiameterRegion
See also
HammingDistance, SelectRegionPoint, TestRegionPoint, SmallestRectangle2
Module
Foundation
HALCON 8.0.2
1254 CHAPTER 15. TOOLS
The operator DistancePl calculates the orthogonal distance between points (row,column) and lines, given
by two arbitrary points on the line. The result is passed in distance.
DistancePl calculates the distances between a set of n points and one line as well as the distances between a
set of n points and n lines.
Parameter
Result
DistancePl returns 2 (H_MSG_TRUE).
Parallelization Information
DistancePl is reentrant and processed without parallelization.
Alternatives
DistancePs
See also
DistancePp, DistancePr
Module
Foundation
dev_close_window ()
read_image (Image, ’mreut’)
dev_open_window (0, 0, 512, 512, ’white’, WindowHandle)
dev_display (Image)
dev_set_color (’black’)
threshold (Image, Region, 180, 255)
dev_clear_window ()
dev_display (Region)
connection (Region, ConnectedRegions)
select_shape (ConnectedRegions, SelectedRegions, ’area’, ’and’, 10000, 100000000)
get_region_contour (SelectedRegions, Rows, Columns)
RowPoint := 80
ColPoint := 250
NumberTuple := |Rows|
dev_set_color (’red’)
set_draw (WindowHandle, ’margin’)
disp_circle (WindowHandle, RowPoint, ColPoint, 10)
dev_set_color (’green’)
for i := 1 to NumberTuple by 10
disp_line (WindowHandle, Rows[i], Columns[i]-2, Rows[i], Columns[i]+2)
disp_line (WindowHandle, Rows[i]-2, Columns[i], Rows[i]+2, Columns[i])
distance_pp (RowPoint, ColPoint, Rows[i], Columns[i], Distance)
endfor
HALCON 8.0.2
1256 CHAPTER 15. TOOLS
Result
DistancePp returns 2 (H_MSG_TRUE).
Parallelization Information
DistancePp is reentrant and processed without parallelization.
Alternatives
DistancePs
See also
DistancePl, DistancePr
Module
Foundation
dev_close_window ()
read_image (Image, ’mreut’)
dev_open_window (0, 0, 512, 512, ’white’, WindowHandle)
dev_set_color (’black’)
threshold (Image, Region, 180, 255)
connection (Region, ConnectedRegions)
select_shape (ConnectedRegions, SelectedRegions, ’area’, ’and’,
10000, 100000000)
Row1 := 255
Column1 := 255
dev_clear_window ()
dev_display (SelectedRegions)
dev_set_color (’red’)
for i := 1 to 360 by 1
Row2 := 255 + sin(rad(i)) * 200
Column2 := 255 + cos(rad(i)) * 200
disp_line (WindowHandle, Row1, Column1, Row2, Column2)
Result
DistancePr returns 2 (H_MSG_TRUE).
Parallelization Information
DistancePr is reentrant and processed without parallelization.
Alternatives
DistancePc, DistanceLr, DistanceSr, DiameterRegion
See also
HammingDistance, SelectRegionPoint, TestRegionPoint, SmallestRectangle2
Module
Foundation
HALCON 8.0.2
1258 CHAPTER 15. TOOLS
dev_display (Image)
dev_set_color (’black’)
threshold (Image, Region, 180, 255)
dev_clear_window ()
dev_display (Region)
connection (Region, ConnectedRegions)
select_shape (ConnectedRegions, SelectedRegions, ’area’, ’and’,
10000, 100000000)
get_region_contour (SelectedRegions, Rows, Columns)
RowLine1 := 400
ColLine1 := 50
RowLine2 := 50
ColLine2 := 450
NumberTuple := |Rows|
dev_set_color (’red’)
disp_line (WindowHandle, RowLine1, ColLine1, RowLine2, ColLine2)
dev_set_color (’green’)
for i := 1 to NumberTuple by 10
disp_line (WindowHandle, Rows[i], Columns[i]-2, Rows[i], Columns[i]+2)
disp_line (WindowHandle, Rows[i]-2, Columns[i], Rows[i]+2, Columns[i])
distance_ps (Rows[i], Columns[i], RowLine1, ColLine1, RowLine2, ColLine2,
DistanceMin, DistanceMax)
endfor
Result
DistancePs returns 2 (H_MSG_TRUE).
Parallelization Information
DistancePs is reentrant and processed without parallelization.
Alternatives
DistancePl
See also
DistancePp, DistancePr
Module
Foundation
Attention
Both input parameters must contain the same number of regions. The regions must not be empty.
Parameter
N umberiterations ∗ 2 − 1
.
The mask ’h’ has the effect that precisely the maximum metrics are calculated.
Attention
Both parameters must contain the same number of regions. The regions must not be empty.
HALCON 8.0.2
1260 CHAPTER 15. TOOLS
Parameter
. regions1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions to be examined.
. regions2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; HRegion
Regions to be examined.
. minDistance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; HTuple (int / long)
Minimum distances of the regions.
Assertion : -1 ≤ MinDistance
Result
The operator DistanceRrMinDil returns the value 2 (H_MSG_TRUE) if the input is not empty. Otherwise
an exception handling is raised.
Parallelization Information
DistanceRrMinDil is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Threshold, Regiongrowing, Connection
Alternatives
DistanceRrMin, Dilation1, Intersection
Module
Foundation
Alternatives
DistanceLc, DistancePc, DistanceCc, DistanceCcMin
See also
DistanceSr, DistanceLr, DistancePr, SelectXldPoint, TestXldPoint
Module
Foundation
dev_set_color (’black’)
RowLine1 := 400
ColLine1 := 200
HALCON 8.0.2
1262 CHAPTER 15. TOOLS
RowLine2 := 200
ColLine2 := 400
Rows := 300
Columns := 50
disp_line (WindowHandle, RowLine1, ColLine1, RowLine2, ColLine2)
dev_set_color (’green’)
n := 0
for Rows := 40 to 200 by 4
disp_line (WindowHandle, Rows+n, Columns+n, Rows, Columns+n)
distance_sl (Rows+n, Columns+n, Rows, Columns+n, RowLine1, ColLine1,
RowLine2, ColLine2,DistanceMin, DistanceMax)
n := n+10
endfor
Result
DistanceSl returns 2 (H_MSG_TRUE).
Parallelization Information
DistanceSl is reentrant and processed without parallelization.
Alternatives
DistancePl
See also
DistancePs, DistancePp
Module
Foundation
Result
DistanceSr returns 2 (H_MSG_TRUE).
Parallelization Information
DistanceSr is reentrant and processed without parallelization.
Alternatives
DistanceSc, DistanceLr, DistancePr, DiameterRegion
See also
HammingDistance, SelectRegionPoint, TestRegionPoint, SmallestRectangle2
Module
Foundation
HALCON 8.0.2
1264 CHAPTER 15. TOOLS
Parameter
dev_set_color (’black’)
RowLine1 := 400
ColLine1 := 200
RowLine2 := 240
ColLine2 := 400
Rows := 300
Columns := 50
disp_line (WindowHandle, RowLine1, ColLine1, RowLine2, ColLine2)
dev_set_color (’red’)
n := 0
for Rows := 40 to 200 by 4
disp_line (WindowHandle, Rows, Columns, Rows+n, Columns+n)
distance_ss (Rows, Columns, Rows+n, Columns+n, RowLine1, ColLine1,
RowLine2, ColLine2, DistanceMin, DistanceMax)
n := n+8
endfor
Result
DistanceSs returns 2 (H_MSG_TRUE).
Parallelization Information
DistanceSs is reentrant and processed without parallelization.
Alternatives
DistancePp
See also
DistancePl, DistancePs
Module
Foundation
draw_ellipse(WindowHandle,Row,Column,Phi,Radius1,Radius2)
get_points_ellipse([0,3.14],Row,Column,Phi,Radius1,Radius2,RowPoint,ColPoint)
Result
GetPointsEllipse returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception
is raised.
Parallelization Information
GetPointsEllipse is reentrant and processed without parallelization.
Possible Predecessors
FitEllipseContourXld, DrawEllipse, GenEllipseContourXld
See also
GenEllipseContourXld
Module
Foundation
HALCON 8.0.2
1266 CHAPTER 15. TOOLS
dev_set_color (’black’)
RowLine1 := 350
ColLine1 := 250
RowLine2 := 300
ColLine2 := 300
Rows := 300
Columns := 50
disp_line (WindowHandle, RowLine1, ColLine1, RowLine2, ColLine2)
n := 0
Result
IntersectionLl returns 2 (H_MSG_TRUE).
Parallelization Information
IntersectionLl is reentrant and processed without parallelization.
Module
Foundation
dev_set_color (’black’)
RowLine1 := 400
HALCON 8.0.2
1268 CHAPTER 15. TOOLS
ColLine1 := 200
RowLine2 := 240
ColLine2 := 400
Rows := 300
Columns := 50
disp_line (WindowHandle, RowLine1, ColLine1, RowLine2, ColLine2)
n := 0
for Rows := 40 to 200 by 4
dev_set_color (’red’)
disp_circle (WindowHandle, Rows+n, Columns, 2)
projection_pl (Rows+n, Columns, RowLine1, ColLine1, RowLine2, ColLine2,
RowProj, ColProj)
dev_set_color (’blue’)
disp_line (WindowHandle, RowProj-2, ColProj, RowProj+2, ColProj)
disp_line (WindowHandle, RowProj, ColProj-2, RowProj, ColProj+2)
n := n+8
endfor
Result
ProjectionPl returns 2 (H_MSG_TRUE).
Parallelization Information
ProjectionPl is reentrant and processed without parallelization.
Module
Foundation
15.10 Grid-Rectification
static void HOperatorSet.ConnectGridPoints ( HObject image,
out HObject connectingLines, HTuple row, HTuple col, HTuple sigma,
HTuple maxDist )
HXLD HImage.ConnectGridPoints ( HTuple row, HTuple col, HTuple sigma,
HTuple maxDist )
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; HImage
Input image.
. connectingLines (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld-array ; HXLD
Output contours.
. row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; HTuple (double)
Row coordinates of the grid points.
. col (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; HTuple (double)
Column coordinates of the grid points.
Restriction : number(Col) = number(Row)
. sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; HTuple (int / long / double)
Size of the applied Gaussians.
Default Value : 0.9
Suggested values : Sigma ∈ {0.7, 0.9, 1.1, 1.3, 1.5}
Number of elements : (1 ≤ Sigma) ∧ (Sigma ≤ 3)
Restriction : 0.7 ≤ Sigma
. maxDist (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double / int / long)
Maximum distance of the connecting lines from the grid points.
Default Value : 5.5
Suggested values : MaxDist ∈ {1.5, 3.5, 5.5, 7.5, 9.5}
Restriction : 0.0 ≤ MaxDist
Result
ConnectGridPoints returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception
handling is raised.
Parallelization Information
ConnectGridPoints is reentrant and processed without parallelization.
Possible Predecessors
SaddlePointsSubPix
Possible Successors
GenGridRectificationMap
Module
Calibration
HALCON 8.0.2
1270 CHAPTER 15. TOOLS
Result
FindRectificationGrid returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an
exception handling is raised.
Parallelization Information
FindRectificationGrid is reentrant and processed without parallelization.
Possible Successors
DilationCircle, ReduceDomain
Module
Calibration
Generate a projection map that describes the mapping between an arbitrarily distorted image and the rectified
image.
GenArbitraryDistortionMap computes the mapping map between an arbitrarily distorted image and the
rectified image. Assuming that the points (row,col) form a regular grid in the rectified image, each grid cell,
which is defined by the coordinates (row,col) of its four corners in the distorted image, is projected onto a square
of gridSpacing×gridSpacing pixels. The coordinates of the grid points must be passed line by line in row
and col. gridWidth is the width of the point grid in grid points. To compute the mapping map, additionally
the width imageWidth and height imageHeight of the images to be rectified must be passed.
map consists of one image containing five channels. In the first channel for each pixel in the resulting image, the
linearized coordinates of the pixel in the input image that is in the upper left position relative to the transformed co-
ordinates are stored. The four other channels contain the weights of the four neighboring pixels of the transformed
coordinates, which are used for the bilinear interpolation, in the following order:
2 3
4 5
The second channel, for example, contains the weights of the pixels that lie to the upper left relative to the trans-
formed coordinates.
In contrary to GenGridRectificationMap, GenArbitraryDistortionMap is used when the coor-
dinates (row,col) of the grid points in the distorted image are already known or the relevant part of the image
consist of regular grid structures, which the coordinates can be derived from.
Parameter
HALCON 8.0.2
1272 CHAPTER 15. TOOLS
Compute the mapping between the distorted image and the rectified image based upon the points of a regular grid.
GenGridRectificationMap calculates the mapping between the grid points (row,col), which have been
actually detected in the distorted image image (typically using SaddlePointsSubPix), and the correspond-
ing grid points of the ideal regular point grid. First, all paths that lead from their initial point via exactly four differ-
ent connecting lines back to the initial point are assembled from the grid points (row,col) and the connecting lines
connectingLines (detected by ConnectGridPoints). In case that the input of grid points (row,col)
and of connecting lines connectingLines was meaningful, one such ’mesh’ corresponds to exactly one grid
cell in the rectification grid. Afterwards, the meshes are combined to the point grid. According to the value of
rotation, the point grid is rotated by 0, 90, 180 or 270 degrees. Note that the point grid does not necessarily have
the correct orientation. When passing ’auto’ in rotation, the point grid is rotated such that the black circular
mark in the rectification grid is positioned to the left of the white one (see also CreateRectificationGrid).
Finally, the mapping map between the distorted image and the rectified image is calculated by interpolation be-
tween the grid points. Each grid cell, for which the coordinates (row,col) of all four corner points are known, is
projected onto a square of gridSpacing × gridSpacing pixels.
map consists of one image containing five channels. In the first channel for each pixel in the resulting image, the
linearized coordinates of the pixel in the input image that is in the upper left position relative to the transformed co-
ordinates are stored. The four other channels contain the weights of the four neighboring pixels of the transformed
coordinates, which are used for the bilinear interpolation, in the following order:
2 3
4 5
The second channel, for example, contains the weights of the pixels that lie to the upper left relative to the trans-
formed coordinates.
GenGridRectificationMap additionally returns the calculated meshes as XLD contours in meshes.
In contrary to GenArbitraryDistortionMap, GenGridRectificationMap and its predecessors are
used when the coordinates (row,col) of the grid points in the distorted image are neither known nor can be derived
from the image contents.
Attention
Each input XLD contour connectingLines must own the global attribute ’bright_dark’, as it is described with
ConnectGridPoints!
Parameter
HALCON 8.0.2
1274 CHAPTER 15. TOOLS
15.11 Hough
HALCON 8.0.2
1276 CHAPTER 15. TOOLS
Parameter
. region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; HRegion
Binary edge image in which lines are to be detected.
. houghImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Hough transform for lines.
. angleResolution (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Adjusting the resolution in the angle area.
Default Value : 4
List of values : AngleResolution ∈ {1, 2, 4, 8}
Result
The operator HoughLineTrans returns the value 2 (H_MSG_TRUE) if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator SetSystem
(’no_object_result’,<Result>), the behavior in case of empty region is set via SetSystem
(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
HoughLineTrans is reentrant and processed without parallelization.
Possible Predecessors
Threshold, Skeleton
Possible Successors
Threshold, LocalMax
See also
HoughCircleTrans, GenRegionHline
Module
Foundation
Compute the Hough transform for lines using local gradient direction.
The operator HoughLineTransDir calculates the Hough transform for lines in those regions passed in the
domain of imageDir. To do so, the angles and the lengths of the lines’ normal vectors are registered in the
parameter space (the so-called Hough or accumulator space).
In contrast to HoughLineTrans, additionally the edge direction in imageDir (e.g., returned by SobelDir
or EdgesImage) is taken into account. This results in a more efficient computation and in a reduction of the
noise in the Hough space.
The parameter directionUncertainty describes how much the edge direction of the individual points
within a line is allowed to vary. For example, with directionUncertainty = 10 a horizontal line
(i.e., edge direction = 0 degrees) may contain points with an edge direction between -10 and +10 de-
grees. The higher directionUncertainty is chosen, the higher the computation time will be. For
directionUncertainty = 180 HoughLineTransDir shows the same behavior as HoughLineTrans,
i.e., the edge direction is ignored. directionUncertainty should be chosen at least as high as the step width
of the edge direction stored in imageDir. The minimum step width is 2 degrees (defined by the image type
’direction’).
The result is stored in a newly generated UINT2-Image (houghImage), where the x-axis (i.e., columns) repre-
sents the angle between the normal vector and the x-axis of the original image, and the y-axis (i.e., rows) represents
the distance of the line from the origin.
The angle ranges from -90 to 180 degrees and will be stored with a resolution of 1/angleResolution, which
means that one pixel in x-direction is equivalent to 1/angleResolution degrees and that the houghImage
has a width of 270∗angleResolution+1 pixels. The height of the houghImage corresponds to the distance
between the lower right corner of the surrounding rectangle of the input region and the origin.
The local maxima in the result image are equivalent to the parameter values of the lines in the original image.
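The accumulator scheme described above can be illustrated with a short sketch. The following pure-Python code is illustrative only; the function name hough_lines, the point format, and the fixed max_dist are simplifications for demonstration, not the HALCON implementation:

```python
import math

def hough_lines(points, angle_resolution=1, max_dist=64):
    # Accumulator: columns represent the angle of the normal vector
    # (-90..180 degrees, one column per 1/angle_resolution degrees),
    # rows represent the distance d of the line from the origin.
    step = 1.0 / angle_resolution
    n_angles = 270 * angle_resolution + 1
    acc = [[0] * n_angles for _ in range(max_dist + 1)]
    for row, col in points:
        for j in range(n_angles):
            theta = math.radians(-90 + j * step)
            # Hessian normal form: d = col*cos(theta) + row*sin(theta)
            d = round(col * math.cos(theta) + row * math.sin(theta))
            if 0 <= d <= max_dist:
                acc[d][j] += 1
    return acc

# Points on the horizontal line row = 5; its normal vector points at 90 degrees.
points = [(5, c) for c in range(60)]
acc = hough_lines(points)
votes, d, j = max((acc[d][j], d, j)
                  for d in range(len(acc)) for j in range(len(acc[0])))
# The peak recovers the line parameters: angle -90 + j degrees, distance d.
```

All 60 collinear points vote into the same accumulator cell, so the global maximum directly yields the angle and distance of the line.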
Parameter
. imageDir (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Image containing the edge direction. The edges must be described by the image domain.
. houghImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Hough transform.
. directionUncertainty (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; HTuple (int / long)
Uncertainty of the edge direction (in degrees).
Default Value : 2
Typical range of values : 2 ≤ DirectionUncertainty ≤ 180
Minimum Increment : 2
. angleResolution (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Resolution in the angle area (in 1/degrees).
Default Value : 4
List of values : AngleResolution ∈ {1, 2, 4, 8}
Result
The operator HoughLineTransDir returns the value 2 (H_MSG_TRUE) if the input is not empty. The be-
havior in case of empty input is set via the operator SetSystem(’no_object_result’,<Result>). If
necessary, an exception is raised.
Parallelization Information
HoughLineTransDir is reentrant and processed without parallelization.
Possible Predecessors
EdgesImage, SobelDir, Threshold, HysteresisThreshold, NonmaxSuppressionDir,
ReduceDomain
Possible Successors
BinomialFilter, GaussImage, Threshold, LocalMax, PlateausCenter
See also
HoughLineTrans, HoughLines, HoughLinesDir
Module
Foundation
Detect lines in edge images with the help of the Hough transform and return them in HNF.
The operator HoughLines allows the selection of line-like structures in a region, whereby it is not necessary that
the individual points of a line are connected. This process is based on the Hough transform. The lines are returned
in HNF (Hessian normal form), that is, by the direction and length of their normal vector.
The parameter angleResolution defines the degree of exactness concerning the determination of the angles.
It amounts to 1/angleResolution degree. The parameter threshold determines by how many points
of the original region a line’s hypothesis has to be supported at least in order to be taken over into the output.
The parameters angleGap and distGap define a neighborhood of the points in the Hough image in order to
determine the local maxima. The lines are returned in HNF.
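The suppression of maxima by angleGap and distGap can be sketched as follows. The helper local_maxima is a hypothetical stand-in for the internal peak extraction, not HALCON code:

```python
def local_maxima(acc, threshold, dist_gap, angle_gap):
    # Keep accumulator cells that gather at least `threshold` votes and are
    # strictly greater than every neighbour within a window of
    # (2*dist_gap + 1) x (2*angle_gap + 1) cells; maxima exceeding the
    # threshold but lying next to an even higher maximum are eliminated.
    rows, cols = len(acc), len(acc[0])
    peaks = []
    for d in range(rows):
        for a in range(cols):
            v = acc[d][a]
            if v < threshold:
                continue
            window = (acc[dd][aa]
                      for dd in range(max(0, d - dist_gap), min(rows, d + dist_gap + 1))
                      for aa in range(max(0, a - angle_gap), min(cols, a + angle_gap + 1))
                      if (dd, aa) != (d, a))
            if all(v > w for w in window):
                peaks.append((d, a))
    return peaks

hough = [[0] * 5 for _ in range(5)]
hough[2][2] = 12    # a strong line hypothesis
hough[2][3] = 9     # weaker hypothesis right next to it: suppressed
hough[4][0] = 7     # isolated hypothesis above the threshold
peaks = local_maxima(hough, threshold=5, dist_gap=1, angle_gap=1)
```

The cell with 9 votes exceeds the threshold but lies within the neighborhood of the 12-vote maximum and is therefore not reported.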
Parameter
. regionIn (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; HRegion
Binary edge image in which the lines are to be detected.
. angleResolution (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Adjusting the resolution in the angle area.
Default Value : 4
List of values : AngleResolution ∈ {1, 2, 4, 8}
HALCON 8.0.2
1278 CHAPTER 15. TOOLS
Detect lines in edge images with the help of the Hough transform using local gradient direction and return them in
normal form.
The operator HoughLinesDir selects line-like structures in a region based on the Hough transform. The
individual points of a line can be unconnected. The region is given by the domain of imageDir. The lines are
returned in Hessian normal form (HNF), that is by the direction and length of their normal vector.
In contrast to HoughLines, additionally the edge direction in imageDir (e.g., returned by SobelDir or
EdgesImage) is taken into account. This results in a more efficient computation and in a reduction of the noise
in the Hough space.
The parameter directionUncertainty describes how much the edge direction of the individual points
within a line is allowed to vary. For example, with directionUncertainty = 10 a horizontal line
(i.e., edge direction = 0 degrees) may contain points with an edge direction between -10 and +10 de-
grees. The higher directionUncertainty is chosen, the higher the computation time will be. For
directionUncertainty = 180 HoughLinesDir shows the same behavior as HoughLines, i.e., the
edge direction is ignored. directionUncertainty should be chosen at least as high as the step width of the
edge direction stored in imageDir. The minimum step width is 2 degrees (defined by the image type ’direction’).
The parameter angleResolution defines how accurately the angles are determined. The accuracy amounts to
1/angleResolution degrees. A subsequent smoothing of the Hough space results in an increased stability.
The smoothing filter can be selected by smoothing, the degree of smoothing by the parameter filterSize
(see MeanImage or GaussImage for details). The parameter threshold determines by how many points of
the original region a line’s hypothesis must at least be supported in order to be selected into the output. The param-
eters angleGap and distGap define a neighborhood of the points in the Hough image in order to determine the
local maxima: angleGap describes the minimum distance of two maxima in the Hough image in angle direction
and distGap in distance direction, respectively. Thus, maxima exceeding threshold but lying close to an
even higher maximum are eliminated. This can particularly be helpful when searching for short and long lines
simultaneously. Besides the unsmoothed Hough image houghImage, the lines are returned in HNF (angle,
dist). If the parameter genLines is set to ’true’, additionally those regions in imageDir are returned that
contributed to the local maxima in Hough space. They are stored in the parameter lines.
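The effect of directionUncertainty can be sketched as follows. This is an illustrative sketch only; in particular, the per-point direction is assumed here to be the angle of the line's normal vector estimated at that point, and the function hough_lines_dir is a hypothetical stand-in, not the HALCON implementation:

```python
import math

def hough_lines_dir(points_with_dir, uncertainty_deg=10, angle_step_deg=2, max_dist=64):
    # Like the plain Hough transform, but each point only votes for normal
    # angles within +/- uncertainty_deg of its own estimated direction,
    # which reduces both computation time and noise in the Hough space.
    acc = {}
    votes = 0
    for row, col, dir_deg in points_with_dir:
        for a in range(-90, 181, angle_step_deg):
            # angular difference, taking the 180-degree wrap-around into account
            diff = abs((a - dir_deg + 90) % 180 - 90)
            if diff > uncertainty_deg:
                continue
            theta = math.radians(a)
            d = round(col * math.cos(theta) + row * math.sin(theta))
            if 0 <= d <= max_dist:
                acc[(d, a)] = acc.get((d, a), 0) + 1
                votes += 1
    return acc, votes

# Horizontal line row = 5 with an estimated normal-vector angle of 90 degrees.
pts = [(5, c, 90) for c in range(20)]
acc, votes = hough_lines_dir(pts)
```

With uncertainty_deg = 10 and a step of 2 degrees, each point votes for at most 11 of the 136 possible angles, yet the peak at (d = 5, angle = 90) is preserved.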
Parameter
Select those lines from a set of lines (in HNF) which fit best into a region.
Lines which fit best into a region can be selected from a set of lines which are available in HNF with the help of
the operator SelectMatchingLines; the region itself is also transmitted as a parameter (regionIn). The
width of the lines can be indicated by the parameter lineWidth. The selected lines will be returned in HNF and
as regions (regionLines).
The lines are selected iteratively in a loop: At first, the line showing the greatest overlap with the input region
is selected from the set of input lines. This line is then taken over into the output set, whereby all points
belonging to that line are no longer considered in the subsequent overlap computations. The loop terminates when
the maximum overlap between the region and the remaining lines falls below a certain threshold value (thresh). The
selected lines will be returned as regions as well as in HNF.
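The iterative selection can be sketched as follows, with lines and the region represented as pixel sets; select_matching_lines is an illustrative stand-in, not the HALCON implementation:

```python
def select_matching_lines(region, candidate_lines, thresh):
    # region: set of (row, col) pixels; candidate_lines: list of pixel sets,
    # one per input line in HNF. Iteratively select the line with the
    # greatest overlap; its pixels are then excluded from further overlaps.
    remaining = set(region)
    selected = []
    while True:
        best, best_overlap = None, -1
        for i, line in enumerate(candidate_lines):
            if i in selected:
                continue
            overlap = len(remaining & line)
            if overlap > best_overlap:
                best, best_overlap = i, overlap
        if best is None or best_overlap < thresh:
            break
        selected.append(best)
        remaining -= candidate_lines[best]
    return selected

line_a = {(0, c) for c in range(10)}
line_b = {(r, r) for r in range(10)}     # crosses line_a at (0, 0)
line_c = {(5, c) for c in range(10)}     # touches the region only at (5, 5)
region = line_a | line_b
selected = select_matching_lines(region, [line_a, line_b, line_c], thresh=3)
```

The first two lines are selected in turn; the third overlaps the region in only one point, which is below the threshold.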
Parameter
. regionIn (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; HRegion
Region in which the lines are to be matched.
. regionLines (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .region(-array) ; HRegion
Region array containing the matched lines.
. angleIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hesseline.angle.rad(-array) ; HTuple (double)
Angles (in radians) of the normal vectors of the input lines.
Typical range of values : -1.5707963 ≤ AngleIn ≤ 3.1415927
15.12 Image-Comparison
This mode is identical to CompareVariationModel. For mode = ’light’, region contains all points that
are too bright:
c(x, y) > t_u(x, y) .
For mode = ’dark’, region contains all points that are too dark:
c(x, y) < t_l(x, y) .
Finally, for mode = ’light_dark’, two regions are returned in region. The first region contains the result of mode
= ’light’, while the second region contains the result of mode = ’dark’. The respective regions can be selected
with SelectObj.
Parameter
Result
CompareExtVariationModel returns 2 (H_MSG_TRUE) if all parameters are correct and
if the internal threshold images have been generated with PrepareVariationModel or
PrepareDirectVariationModel.
Parallelization Information
CompareExtVariationModel is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
PrepareVariationModel, PrepareDirectVariationModel
Possible Successors
SelectObj, Connection
Alternatives
CompareVariationModel, DynThreshold
See also
GetThreshImagesVariationModel
Module
Matching
Result
CompareVariationModel returns 2 (H_MSG_TRUE) if all parameters are correct and if the internal threshold
images have been generated with PrepareVariationModel or PrepareDirectVariationModel.
Parallelization Information
CompareVariationModel is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
PrepareVariationModel, PrepareDirectVariationModel
Possible Successors
Connection
Alternatives
CompareExtVariationModel, DynThreshold
See also
GetThreshImagesVariationModel
Module
Matching
Typically, the variation model is used to discriminate correctly manufactured objects (“good objects”) from incor-
rectly manufactured objects (“bad objects”). It is assumed that the discrimination can be done solely based on the
gray values of the object.
The variation model consists of an ideal image of the object to which the images of the objects to be tested are
compared later on with CompareVariationModel or CompareExtVariationModel, and an image
that represents the amount of gray value variation at every point of the object. The size of the images with which
the object model is trained and with which the model is compared later on is passed in width and height,
respectively. The image type of the images used for training and comparison is passed in type.
The variation model is trained using multiple images of good objects. Therefore, it is essential that the training
images show the objects in the same position and rotation. If this cannot be guaranteed by external means, the pose
of the object can, for example, be determined by using matching (see FindShapeModel). The image can then
be transformed to a reference pose with AffineTransImage.
The parameter mode is used to determine how the image of the ideal object and the corresponding variation
image are computed. For mode=’standard’, the ideal image of the object is computed as the mean of all training
images at the respective image positions. The corresponding variation image is computed as the standard deviation
of the training images at the respective image positions. This mode has the advantage that the variation model
can be trained iteratively, i.e., as soon as an image of a good object becomes available, it can be trained with
TrainVariationModel. The disadvantage of this mode is that great care must be taken to ensure that only
images of good objects are trained, because the mean and standard deviation are not robust against outliers, i.e., if
an image of a bad object is trained inadvertently, the accuracy of the ideal object image and that of the variation
image might be degraded.
If it cannot be avoided that the variation model is trained with some images of objects that can contain errors, mode
can be set to ’robust’. In this mode, the image of the ideal object is computed as the median of all training images
at the respective image positions. The corresponding variation image is computed as a suitably scaled median
absolute deviation of the training images and the median image at the respective image positions. This mode has
the advantage that it is robust against outliers. It has the disadvantage that it cannot be trained iteratively, i.e., all
training images must be accumulated using ConcatObj and be trained with TrainVariationModel in a
single call.
In some cases, it is impossible to acquire multiple training images. In this case, a useful variation image cannot
be trained from the single training image. To solve this problem, variations of the training image can be created
synthetically, e.g., by shifting the training image by ±1 pixel in the row and column directions or by using gray
value morphology (e.g., GrayErosionShape and GrayDilationShape), and then training the synthetically
modified images. A different possibility to create the variation model from a single image is to create the
model with mode=’direct’. In this case, the variation model can only be trained by specifying the ideal image and
the variation image directly with PrepareDirectVariationModel. Since the variation typically is large at
the edges of the object, edge operators like SobelAmp, EdgesImage, or GrayRangeRect should be used
to create the variation image.
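The computation for mode=’standard’ can be sketched as follows (illustrative only; images are represented as nested lists rather than HALCON image objects, and the function name is a hypothetical stand-in):

```python
import math

def train_variation_model(images):
    # mode='standard': the ideal image is the per-pixel mean of all training
    # images, the variation image is the per-pixel standard deviation.
    n = len(images)
    height, width = len(images[0]), len(images[0][0])
    mean = [[sum(img[r][c] for img in images) / n for c in range(width)]
            for r in range(height)]
    deviation = [[math.sqrt(sum((img[r][c] - mean[r][c]) ** 2 for img in images) / n)
                  for c in range(width)] for r in range(height)]
    return mean, deviation

# Two 2x2 training images of "good" objects.
good1 = [[10, 100], [50, 200]]
good2 = [[20, 100], [50, 210]]
ideal, variation = train_variation_model([good1, good2])
```

Pixels that are stable across the training images receive a variation of 0, while pixels that fluctuate (here by ±5 gray values) receive a correspondingly larger tolerance.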
Parameter
Complexity
A variation model created with CreateVariationModel requires 12 ∗ width ∗ height bytes of memory
for mode = ’standard’ and mode = ’robust’ for type = ’byte’. For type = ’uint2’ and type =
’int2’, 14 ∗ width ∗ height bytes are required. For mode = ’direct’ and after the training data has been cleared
with ClearTrainDataVariationModel, 2 ∗ width ∗ height bytes are required for type = ’byte’ and
4 ∗ width ∗ height bytes for the other image types.
Result
CreateVariationModel returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
CreateVariationModel is processed completely exclusively without parallelization.
Possible Successors
TrainVariationModel, PrepareDirectVariationModel
See also
PrepareVariationModel, ClearVariationModel, ClearTrainDataVariationModel,
FindShapeModel, AffineTransImage
Module
Matching
HImage HVariationModel.GetThreshImagesVariationModel (
out HImage maxImage )
Return the threshold images used for image comparison by a variation model.
GetThreshImagesVariationModel returns the threshold images of the variation model modelID in
maxImage and minImage. The threshold images must be computed with PrepareVariationModel or
PrepareDirectVariationModel before they can be read out. The formula used for calculating the thresh-
old images is described with PrepareVariationModel or PrepareDirectVariationModel. The
threshold images are used in CompareVariationModel and CompareExtVariationModel to detect
too large deviations of an image with respect to the model. As described with CompareVariationModel
and CompareExtVariationModel, gray values outside the interval given by minImage and maxImage
are regarded as errors.
Parameter
void HVariationModel.PrepareDirectVariationModel (
HImage refImage, HImage varImage, HTuple absThreshold,
HTuple varThreshold )
void HVariationModel.PrepareDirectVariationModel (
HImage refImage, HImage varImage, double absThreshold,
double varThreshold )
Two thresholds are used to compute the threshold images. The parameter absThreshold determines the mini-
mum amount of gray levels by which the image of the current object must differ from the image of the ideal object.
The parameter varThreshold determines a factor relative to the variation image for the minimum difference of
the current image and the ideal image. absThreshold and varThreshold each can contain one or two values.
If two values are specified, different thresholds can be determined for too bright and too dark pixels. In this mode,
the first value refers to too bright pixels, while the second value refers to too dark pixels. If one value is specified,
this value refers to both the too bright and the too dark pixels. Let i(x, y) be the ideal image refImage, v(x, y) the
variation image varImage, a_u = absThreshold[0], a_l = absThreshold[1], b_u = varThreshold[0],
and b_l = varThreshold[1] (or a_u = a_l = absThreshold and b_u = b_l = varThreshold if only one
value is specified). Then the two threshold images t_u and t_l are computed as follows:
t_u(x, y) = i(x, y) + max{a_u, b_u · v(x, y)}
t_l(x, y) = i(x, y) − max{a_l, b_l · v(x, y)} .
If the current image c(x, y) is compared to the variation model using CompareVariationModel, the output
region contains all points that differ substantially from the model, i.e., that fulfill the following condition:
c(x, y) > t_u(x, y) ∨ c(x, y) < t_l(x, y) .
In CompareExtVariationModel, extended comparison modes are available, which return only the too bright
errors, only the too dark errors, or the bright and dark errors as separate regions.
After the threshold images have been created they can be read out with
GetThreshImagesVariationModel.
It should be noted that, to save memory, refImage and varImage are not stored as the ideal and variation
images in the model.
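The threshold computation and the subsequent comparison can be sketched as follows, here for the case of a single value in absThreshold and varThreshold (illustrative only; the function names are hypothetical stand-ins, not the HALCON implementation):

```python
def prepare_direct_variation_model(ideal, variation, abs_threshold, var_threshold):
    # t_u = i + max(a_u, b_u * v),  t_l = i - max(a_l, b_l * v);
    # a single threshold value is used for both the bright and the dark side.
    au = al = abs_threshold
    bu = bl = var_threshold
    t_u = [[i + max(au, bu * v) for i, v in zip(ri, rv)]
           for ri, rv in zip(ideal, variation)]
    t_l = [[i - max(al, bl * v) for i, v in zip(ri, rv)]
           for ri, rv in zip(ideal, variation)]
    return t_u, t_l

def compare_variation_model(image, t_u, t_l):
    # Error region: all points with c > t_u or c < t_l.
    return [(r, c) for r, row in enumerate(image) for c, v in enumerate(row)
            if v > t_u[r][c] or v < t_l[r][c]]

# 1x2 model: the second pixel has a large trained variation, so it tolerates
# larger deviations before being flagged.
t_u, t_l = prepare_direct_variation_model([[100, 100]], [[2, 10]],
                                          abs_threshold=10, var_threshold=2)
errors = compare_variation_model([[115, 90]], t_u, t_l)
```

The first pixel (115) exceeds its upper threshold of 110 and is reported; the second (90) lies within its wider interval [80, 120].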
Parameter
. refImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Reference image of the object.
. varImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; HImage
Variation image of the object.
. modelID (input_control) . . . . . . . . . . . . . . . . . . . variation_model ; HVariationModel / HTuple (IntPtr)
ID of the variation model.
. absThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; HTuple (double / int / long)
Absolute minimum threshold for the differences between the image and the variation model.
Default Value : 10
Suggested values : AbsThreshold ∈ {0, 5, 10, 15, 20, 30, 40, 50}
Restriction : AbsThreshold ≥ 0
. varThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; HTuple (double / int / long)
Threshold for the differences based on the variation of the variation model.
Default Value : 2
Suggested values : VarThreshold ∈ {1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5}
Restriction : VarThreshold ≥ 0
Example (Syntax: HDevelop)
Result
PrepareDirectVariationModel returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
PrepareDirectVariationModel is processed completely exclusively without parallelization.
Possible Predecessors
SobelAmp, EdgesImage, GrayRangeRect
Possible Successors
CompareVariationModel, CompareExtVariationModel,
GetThreshImagesVariationModel, WriteVariationModel
Alternatives
PrepareVariationModel
See also
CreateVariationModel
Module
Matching
t_u(x, y) = i(x, y) + max{a_u, b_u · v(x, y)}
t_l(x, y) = i(x, y) − max{a_l, b_l · v(x, y)} .
If the current image c(x, y) is compared to the variation model using CompareVariationModel, the output
region contains all points that differ substantially from the model, i.e., that fulfill the following condition:
c(x, y) > t_u(x, y) ∨ c(x, y) < t_l(x, y) .
In CompareExtVariationModel, extended comparison modes are available, which return only the too bright
errors, only the too dark errors, or the bright and dark errors as separate regions.
After the threshold images have been created they can be read out with
GetThreshImagesVariationModel. Furthermore, the training data can be deleted with
ClearTrainDataVariationModel to save memory.
Parameter
. modelID (input_control) . . . . . . . . . . . . . . . . . . . variation_model ; HVariationModel / HTuple (IntPtr)
ID of the variation model.
. absThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; HTuple (double / int / long)
Absolute minimum threshold for the differences between the image and the variation model.
Default Value : 10
Suggested values : AbsThreshold ∈ {0, 5, 10, 15, 20, 30, 40, 50}
Restriction : AbsThreshold ≥ 0
Result
TrainVariationModel returns 2 (H_MSG_TRUE) if all parameters are correct.
Parallelization Information
TrainVariationModel is processed completely exclusively without parallelization.
Possible Predecessors
CreateVariationModel, FindShapeModel, AffineTransImage, ConcatObj
Possible Successors
PrepareVariationModel
See also
PrepareVariationModel, CompareVariationModel, CompareExtVariationModel,
ClearVariationModel
Module
Matching
15.13 Kalman-Filter
static void HOperatorSet.FilterKalman ( HTuple dimension,
HTuple model, HTuple measurement, HTuple predictionIn,
out HTuple predictionOut, out HTuple estimate )
Estimate the current state of a system with the help of Kalman filtering.
The operator FilterKalman returns an estimate of the current state (or also a prediction of a future state)
of a discrete, stochastically disturbed, linear system. In practice, Kalman filters are used successfully in image
processing in the analysis of image sequences (background identification, lane tracking with the help of line tracing
or region analysis, etc.). A short introduction concerning the theory of the Kalman filters will be followed by a
detailed description of the routine FilterKalman itself.
KALMAN FILTER: A discrete, stochastically disturbed, linear system is characterized by the following features:
• State x(t): Describes the current state of the system (speeds, temperatures,...).
• Parameter u(t): Inputs from outside into the system.
• Measurement y(t): Measurements gained by observing the system. They indicate the state of the system (or
at least parts of it).
• An output function describing the dependence of the measurements on the state.
• A transition function indicating how the state changes with regard to time, the current value and the parame-
ters.
The output function and the transition function are linear. Their application can therefore be written as a multipli-
cation with a matrix.
The transition function is described with the help of the transition matrix A(t) and the parameter matrix G(t); the
output function is described by the measurement matrix C(t). Hereby A(t) characterizes the dependency of the new
state on the old one, and G(t) indicates the dependency on the parameters. In practice it is rarely possible (or at
least too time consuming) to describe a real system and its behaviour in a complete and exact way. Normally only
a relatively small number of variables will be used to simulate the behaviour of the system. This leads to an error,
the so-called system error (also called system disturbance) v(t).
The output function, too, is usually not exact. Each measurement is faulty. The measurement errors will be called
w(t). Therefore the following system equations arise:
x(t + 1) = A(t)x(t) + G(t)u(t) + v(t)
y(t) = C(t)x(t) + w(t)
The system error v(t) and the measurement error w(t) are not known. As far as systems are concerned which
are interpreted with the help of the Kalman filter, these two errors are considered as Gaussian distributed random
vectors (hence the expression "stochastically disturbed systems"). Therefore the system can be calculated if
the corresponding expected values for v(t) and w(t) as well as the covariance matrices are known.
The estimation of the state of the system is carried out in the same way as in the Gauss-Markov estimation.
However, the Kalman filter is a recursive algorithm which is based only on the current measurements y(t) and the
latest state x(t). The latter implicitly also includes the knowledge about earlier measurements.
A suitable estimate value x_0, which is interpreted as the expected value of a random variable for x(0), must be
indicated for the initial value x(0). This variable should have an expected error value of 0 and the covariance
matrix P_0, which also has to be indicated. At a certain time t the expected values of both disturbances v(t) and
w(t) should be 0 and their covariances should be Q(t) and R(t). x(t), v(t) and w(t) will usually be assumed to be
uncorrelated (any kind of noise process can be modelled; however, the development of the necessary matrices by
the user will be considerably more demanding). The following conditions must be met by the searched estimate
values x_t:
• The estimate values x_t are linearly dependent on the actual value x(t) and on the measurement sequence
y(0), y(1), . . . , y(t).
• x_t is hereby considered to meet its expectations, i.e. E(x_t) = E(x(t)).
• The grade criterion for x_t is the criterion of minimal variance, i.e. the variance of the estimation error, defined
as x(t) − x_t, is as small as possible.
These conditions lead to the following recursive equations:
(K-I)   x̂(t + 1) = A(t)x_t + G(t)u(t)
(K-II)  P̂(t + 1) = A(t)P̃(t)A'(t) + Q(t)
(K-III) K(t) = P̂(t)C'(t) [C(t)P̂(t)C'(t) + R(t)]^(−1)
(K-IV)  x_t = x̂(t) + K(t)(y(t) − C(t)x̂(t))
(K-V)   P̃(t) = P̂(t) − K(t)C(t)P̂(t)
Hereby P̃(t) is the covariance matrix of the estimation error, x̂(t) is the extrapolated (predicted) value of the state,
P̂(t) is the covariance matrix of the prediction error x̂ − x, K(t) is the amplifier matrix (the so-called Kalman
gain), and X' denotes the transpose of a matrix X.
Please note that the prediction of the future state is also possible with the equation (K-I). Sometimes this is very
useful in image processing in order to determine "regions of interest" in the next image.
As mentioned above, it is much more demanding to model arbitrary noise processes. If, for example, the system
noise and the measurement noise are correlated with the corresponding covariance matrix L, the equations for the
Kalman gain and the error covariance matrix have to be modified:
(K-III) K(t) = (P̂(t)C'(t) + L(t)) [C(t)P̂(t)C'(t) + C(t)L(t) + L'(t)C'(t) + R(t)]^(−1)
(K-V)   P̃(t) = P̂(t) − K(t)C(t)P̂(t) − K(t)L'(t)
This means that the user himself has to establish the linear system equations from (K-I) up to (K-V) with respect to
the actual problem. The user must therefore develop a mathematical model upon which the solution to the problem
can be based. Statistical characteristics describing the inaccuracies of the system as well as the measurement
errors, which are to be expected, thereby have to be estimated if they cannot be calculated exactly. Therefore the
following individual steps are necessary:
As mentioned above, the initialization of the system (point 7) requires an estimate x_0 of the state of the system
at time 0 and the corresponding covariance matrix P_0 to be indicated. If the exact initial state is not known,
it is recommended to set the components of the vector x_0 to the average values of the corresponding range, and
to set high values for P_0 (about the size of the squares of the range). After a few iterations (when the number of the
accumulated measurement values in total has exceeded the number of the system values), the values which have
been determined in this way are also usable.
If on the other hand the initial state is known exactly, all entries for P0 have to be set to 0, because P0 describes
the covariances of the error between the estimated value x0 and the actual value x(0).
THE FILTER ROUTINE:
A Kalman filter is dependent on a range of data which can be organized in four groups:
Model parameter: transition matrix A, control matrix G including the parameter u and the measurement matrix
C
Model stochastic: system-error covariance matrix Q, system-error - measurement-error covariance matrix L, and
measurement-error covariance matrix R
Measurement vector: y
History of the system: extrapolation vector x̂ and extrapolation-error covariance matrix P̂
Thereby many systems can work without input "from outside", i.e. without G and u. Furthermore, system errors
and measurement errors are normally not correlated (L is dropped).
Actually the data necessary for the routine will be set by the following parameters:
dimension: This parameter includes the dimensions of the state vector, the measurement vector and the con-
troller vector. dimension thereby is a vector [n,m,p], whereby n indicates the number of the state variables,
m the number of the measurement values and p the number of the controller members. For a system without
a controlling input (i.e. without influence "from outside"), [n,m,0] has to be passed.
model: This parameter includes the lined up matrices (vectors) A,C,Q,G,u and (if necessary) L having been stored
in row-major order. model therefore is a vector of the length n × n + n × m + n × n + n × p + p[+n × m].
The last summand is dropped, in case the system errors and measurement errors are not correlated, i.e. there
is no value for L.
measurement: This parameter includes the matrix R which has been stored in row-major order, and the mea-
surement vector y lined up. measurement therefore is a vector of the dimension m × m + m.
predictionIn / predictionOut: These two parameters include the matrix P̂ (the extrapolation-error co-
variance matrix) which has been stored in row-major order and the extrapolation vector x̂ lined up. This
means, they are vectors of the length n × n + n. predictionIn therefore is an input parameter, which
must contain P̂ (t) and x̂(t) at the current time t. With predictionOut the routine returns the correspond-
ing predictions P̂ (t + 1) and x̂(t + 1).
estimate: With this parameter the routine returns the matrix P̃ (the estimation-error covariance matrix) which
has been stored in row-major order and the estimated state x̃ lined up. estimate therefore is a vector of
the length n × n + n.
Please note that the covariance matrices (Q, R, P̂ , P̃ ) must of course be symmetric.
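For the common special case of a one-dimensional state without control input and without correlated noise, the equations (K-I) to (K-V) reduce to scalar arithmetic. The following sketch is illustrative only and does not use the packed parameter layout of FilterKalman:

```python
def kalman_step(a, q, c, r, y, x_pred, p_pred):
    # One filter iteration for a scalar system x(t+1) = a*x(t) + v(t),
    # y(t) = c*x(t) + w(t), with Var(v) = q and Var(w) = r.
    k = p_pred * c / (c * p_pred * c + r)      # (K-III) Kalman gain
    x_est = x_pred + k * (y - c * x_pred)      # (K-IV)  state estimate
    p_est = p_pred - k * c * p_pred            # (K-V)   estimation-error variance
    x_next = a * x_est                         # (K-I)   prediction, no control input
    p_next = a * p_est * a + q                 # (K-II)  prediction-error variance
    return x_est, p_est, x_next, p_next

# Static system (a=1, q=0) observed directly (c=1) with unit measurement
# noise; the initial state is unknown, hence the large initial covariance.
x_pred, p_pred = 0.0, 1e6
x1, p1, x_pred, p_pred = kalman_step(1.0, 0.0, 1.0, 1.0, 10.0, x_pred, p_pred)
x2, p2, x_pred, p_pred = kalman_step(1.0, 0.0, 1.0, 1.0, 12.0, x_pred, p_pred)
```

With the large initial covariance the first estimate essentially adopts the first measurement; after the second step the estimate approaches the mean of the two measurements, as expected for a static system.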
Parameter
// Typical procedure:
// 1. To initialize the variables, which describe the model, e.g. with
read_kalman(’kalman.init’,Dim,Mod,Meas,Pred)
// Generation of the first measurements (typical of the first image of an
// image series) with an appropriate problem-specific routine (there is a
Result
If the parameter values are correct, the operator FilterKalman returns the value 2 (H_MSG_TRUE). Otherwise
an exception is raised.
Parallelization Information
FilterKalman is reentrant and processed without parallelization.
Possible Predecessors
ReadKalman, SensorKalman
Possible Successors
UpdateKalman
See also
ReadKalman, UpdateKalman, SensorKalman
References
W. Hartinger: "Entwurf eines anwendungsunabhängigen Kalman-Filters mit Untersuchungen im Bereich der
Bildfolgenanalyse"; Diplomarbeit; Technische Universität München, Institut für Informatik, Lehrstuhl Prof.
Radig; 1991.
R.E. Kalman: "A New Approach to Linear Filtering and Prediction Problems"; Transactions ASME, Ser. D: Jour-
nal of Basic Engineering; Vol. 82, pp. 34-45; 1960.
R.E. Kalman, P.L. Falb, M.A. Arbib: "Topics in Mathematical System Theory"; McGraw-Hill Book Company,
New York; 1969.
K.-P. Karmann, A. von Brandt: "Moving Object Recognition Using an Adaptive Background Memory"; Time-
Varying Image Processing and Moving Object Recognition 2 (ed.: V. Cappellini), Proc. of the 3rd International
Workshop, Florence, Italy, May 29-31, 1989; Elsevier, Amsterdam; 1990.
Module
Foundation
Model parameter: transition matrix A, control matrix G including the controller output u and the measurement
matrix C
Model stochastic: system-error covariance matrix Q, system-error - measurement-error covariance matrix L and
measurement-error covariance matrix R
Estimate of the initial state of the system: state x0 and corresponding covariance matrix P0
Many systems do not need entries "from outside", and therefore G and u can be dropped. Furthermore, system
errors and measurement errors are normally not correlated (L is dropped). The characteristics mentioned above can
be stored in an ASCII file and then be read with the help of the operator ReadKalman. This ASCII file must
have the following structure:
Dimension row
+ content row
+ matrix A
+ matrix C
+ matrix Q
[ + matrix G + vector u ]
[ + matrix L ]
+ matrix R
[ + matrix P0 ]
[ + vector x0 ]
dimension: This parameter includes the dimensions of the state vector, the measurement vector and the con-
troller vector. dimension thereby is a vector [n,m,p], whereby n indicates the number of the state variables,
m the number of the measurement values and p the number of the controller members. For a system without
a controlling input (i.e. without influence "from outside"), dimension = [n,m,0].
model: This parameter includes the lined up matrices (vectors) A, C, Q, G, u and (if necessary) L having been
stored in row-major order. model therefore is a vector of the length n×n+n×m+n×n+n×p+p[+n×m].
The last summand is dropped, in case the system errors and measurement errors are not correlated, i.e. there
is no value for L.
measurement: This parameter includes the matrix R which has been stored in row-major order.
measurement therefore is vector of the dimension m × m.
prediction: This parameter includes the matrix P0 (the error covariance matrix of the initial state estimate)
and the initial state estimate x0 lined up. This means, it is a vector of the length n × n + n.
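The lined-up storage scheme can be sketched in a few lines of Python (an illustration of the layout only, not HALCON code; the constant-velocity matrices below are invented example values):

```python
def pack_model(*matrices):
    """Line up matrices (lists of rows) in row-major order, as the
    'model' tuple expects them; vectors (e.g. u) are flat lists."""
    flat = []
    for mat in matrices:
        if mat and isinstance(mat[0], list):   # matrix: concatenate rows
            for row in mat:
                flat.extend(row)
        else:                                  # vector
            flat.extend(mat)
    return flat

# Example: n = 2 state variables, m = 1 measurement value, p = 0
# controller members, uncorrelated errors (so G, u, and L are omitted).
n, m, p = 2, 1, 0
A = [[1.0, 1.0], [0.0, 1.0]]   # transition matrix, n x n
C = [[1.0, 0.0]]               # measurement matrix, m x n
Q = [[0.5, 0.0], [0.0, 0.5]]   # system-error covariance, n x n
model = pack_model(A, C, Q)
assert len(model) == n*n + n*m + n*n + n*p + p   # 4 + 2 + 4 + 0 + 0 = 10
```

The same row-major lining-up applies to the measurement and prediction tuples.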
Parameter
HALCON 8.0.2
1300 CHAPTER 15. TOOLS
Result
If the description file is readable and correct, the operator ReadKalman returns the value 2 (H_MSG_TRUE).
Otherwise an exception is raised.
Parallelization Information
ReadKalman is reentrant and processed without parallelization.
Possible Successors
FilterKalman
See also
UpdateKalman, FilterKalman, SensorKalman
Module
Foundation
Model parameters: transition matrix A, control matrix G including the controller output u, and the measurement
matrix C
Stochastic model: system-error covariance matrix Q, system-error/measurement-error covariance matrix L, and
measurement-error covariance matrix R
Measurement vector: y
History of the system: extrapolation vector x̂ and extrapolation-error covariance matrix P̂
Many systems do not need input "from outside", and therefore G and u can be dropped. Further, system errors
and measurement errors are normally not correlated (L is dropped). Some of the characteristics mentioned above
may change dynamically (from one iteration to the next). The operator UpdateKalman serves to modify parts
of the system according to an update file (ASCII) with the following structure (see also ReadKalman):
Dimension row
+ content row
+ matrix A
+ matrix C
+ matrix Q
+ matrix G + vector u
+ matrix L
+ matrix R
dimensionIn / dimensionOut: These parameters contain the dimensions of the state vector, measurement
vector, and controller vector and are therefore vectors [n,m,p], where n indicates the number of state
variables, m the number of measurement values, and p the number of controller members. n and m are
invariant for a given system, i.e., they must not differ from the corresponding input values of the update file.
For a system without influence "from outside", p = 0.
modelIn / modelOut: These parameters contain the lined-up matrices (vectors) A, C, Q, G, u and, if necessary,
L, stored in row-major order. modelIn / modelOut are therefore vectors of length
n × n + n × m + n × n + n × p + p[+n × m]. The last summand is dropped if system errors and measurement
errors are not correlated, i.e., no value has been set for L.
measurementIn / measurementOut: These parameters contain the matrix R stored in row-major order, and
are therefore vectors of dimension m × m.
Parameter
. fileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; HTuple (string)
Update file for a Kalman filter.
Default Value : "kalman.updt"
. dimensionIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; HTuple (int / long)
The dimensions of the state vector, measurement vector and controller vector.
Default Value : [3,1,0]
Typical range of values : 0 ≤ DimensionIn ≤ 30
. modelIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
The lined up matrices A,C,Q, possibly G and u, and if necessary L which all have been stored in row-major
order.
Default Value : [1.0,1.0,0.5,0.0,1.0,1.0,0.0,0.0,1.0,1.0,0.0,0.0,54.3,37.9,48.0,37.9,34.3,42.5,48.0,42.5,43.7]
Typical range of values : 0.0 ≤ ModelIn ≤ 10000.0
. measurementIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
The matrix R stored in row-major order.
Default Value : [1.2]
Typical range of values : 0.0 ≤ MeasurementIn ≤ 10000.0
. dimensionOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; HTuple (int / long)
The dimensions of the state vector, measurement vector and controller vector.
. modelOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
The lined up matrices A,C,Q, possibly G and u, and if necessary L which all have been stored in row-major
order.
. measurementOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real-array ; HTuple (double)
The matrix R stored in row-major order.
Example (Syntax: HDevelop)
%A+C-Q-G-u-L-R-
%transitions at time t=15:
%2 1 1
%0 2 2
%0 0 2
%
%the results of update_kalman:
%
%DimensionOut = [3,1,0]
%ModelOut = [2.0,1.0,1.0,0.0,2.0,2.0,0.0,0.0,2.0,1.0,0.0,0.0,
% 54.3,37.9,48.0,37.9,34.3,42.5,48.0,42.5,43.7]
%MeasurementOut = [1.2]
Result
If the update file is readable and correct, the operator UpdateKalman returns the value 2 (H_MSG_TRUE).
Otherwise an exception is raised.
Parallelization Information
UpdateKalman is reentrant and processed without parallelization.
Possible Successors
FilterKalman
See also
ReadKalman, FilterKalman, SensorKalman
Module
Foundation
15.14 Measure
static void HOperatorSet.CloseAllMeasures ( )
static void HMisc.CloseAllMeasures ( )
Delete all measure objects.
CloseAllMeasures deletes all measure objects that have been created using GenMeasureRectangle2 or
GenMeasureArc. The memory used for the measure objects is freed.
Attention
CloseAllMeasures exists solely for the purpose of implementing the “reset program” functionality in HDe-
velop. CloseAllMeasures must not be used in any application.
Result
CloseAllMeasures always returns 2 (H_MSG_TRUE).
Parallelization Information
CloseAllMeasures is reentrant and processed without parallelization.
Possible Predecessors
GenMeasureRectangle2, GenMeasureArc, MeasurePos, MeasurePairs
Alternatives
CloseMeasure
Module
1D Metrology
CloseMeasure deletes the measure object given by measureHandle. The memory used for the measure
object is freed.
Parameter
Having extracted subpixel edge locations, the edges are paired. The features of a possible edge pair are evaluated
by a fuzzy function, set by SetFuzzyMeasure. Which edge pairs are selected can be determined with the
parameter fuzzyThresh, which constitutes a threshold on the weight over all fuzzy sets, i.e., the geometric
mean of the weights of the defined fuzzy membership functions. As an extension to FuzzyMeasurePairs,
the pairing algorithm can be restricted by pairing. Currently only ’no_restriction’ is available, which returns all
possible edge pairs, allowing interleaving and inclusion of pairs. Finally, the numPairs best-scored edge pairs
are returned; a value of 0 returns all found edge combinations.
The selected edges are returned as single points, which lie on the major axis of the rectangle or annular arc. The
corresponding edge amplitudes are returned in amplitudeFirst and amplitudeSecond, the fuzzy scores in
fuzzyScore. In addition, the distance between each edge pair is returned in intraDistance, corresponding
to the distance between EdgeFirst[i] and EdgeSecond[i].
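The thresholding on the overall fuzzy weight can be sketched as follows (a Python illustration only; the per-set weights and the threshold are invented numbers):

```python
import math

def overall_score(weights):
    """Geometric mean of the weights that the defined fuzzy sets
    assign to an edge pair (compared against fuzzyThresh)."""
    if any(w <= 0.0 for w in weights):
        return 0.0
    return math.exp(sum(math.log(w) for w in weights) / len(weights))

# A candidate pair weighted by two fuzzy sets, e.g. 'contrast' and 'size':
score = overall_score([0.9, 0.4])   # sqrt(0.9 * 0.4) = 0.6
# With fuzzyThresh = 0.5 this pair passes the threshold:
keep = score >= 0.5
```

A single zero weight thus rejects a pair regardless of the other sets.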
Attention
FuzzyMeasurePairing only returns meaningful results if the assumptions that the edges are straight and
perpendicular to the major axis of the rectangle or annular arc are fulfilled. Thus, it should not be used to extract
edges from curved objects, for example. Furthermore, the user should ensure that the rectangle or annular arc is
as close to perpendicular as possible to the edges in the image. Additionally, sigma must not become larger than
approx. 0.5 * Length1 (for Length1 see GenMeasureRectangle2).
It should be kept in mind that FuzzyMeasurePairing ignores the domain of image for efficiency reasons.
If certain regions in the image should be excluded from the measurement a new measure object with appropriately
modified parameters should be generated.
Parameter
It should be kept in mind that FuzzyMeasurePairs ignores the domain of image for efficiency reasons. If
certain regions in the image should be excluded from the measurement a new measure object with appropriately
modified parameters should be generated.
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; HImage
Input image.
. measureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . measure_id ; HMeasure / HTuple (IntPtr)
Measure object handle.
. sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double)
Sigma of Gaussian smoothing.
Default Value : 1.0
Suggested values : Sigma ∈ {0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Typical range of values : 0.4 ≤ Sigma ≤ 100 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Sigma ≥ 0.4
. ampThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double)
Minimum edge amplitude.
Default Value : 30.0
Suggested values : AmpThresh ∈ {5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 90.0, 110.0}
Typical range of values : 1 ≤ AmpThresh ≤ 255 (lin)
Minimum Increment : 0.5
Recommended Increment : 2
. fuzzyThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double)
Minimum fuzzy value.
Default Value : 0.5
Suggested values : FuzzyThresh ∈ {0.1, 0.3, 0.5, 0.7, 0.9}
Typical range of values : 0.0 ≤ FuzzyThresh ≤ 1.0 (lin)
Recommended Increment : 0.1
. transition (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Select the first gray value transition of the edge pairs.
Default Value : "all"
List of values : Transition ∈ {"all", "positive", "negative"}
. rowEdgeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; HTuple (double)
Row coordinate of the first edge point.
. columnEdgeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; HTuple (double)
Column coordinate of the first edge point.
. amplitudeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real-array ; HTuple (double)
Edge amplitude of the first edge (with sign).
. rowEdgeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; HTuple (double)
Row coordinate of the second edge point.
. columnEdgeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; HTuple (double)
Column coordinate of the second edge point.
. amplitudeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Edge amplitude of the second edge (with sign).
. rowEdgeCenter (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; HTuple (double)
Row coordinate of the center of the edge pair.
. columnEdgeCenter (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; HTuple (double)
Column coordinate of the center of the edge pair.
. fuzzyScore (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Fuzzy evaluation of the edge pair.
. intraDistance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Distance between edges of an edge pair.
. interDistance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Distance between consecutive edge pairs.
Result
If the parameter values are correct the operator FuzzyMeasurePairs returns the value 2 (H_MSG_TRUE).
Otherwise an exception is raised.
Parallelization Information
FuzzyMeasurePairs is reentrant and processed without parallelization.
Possible Predecessors
GenMeasureRectangle2, GenMeasureArc, SetFuzzyMeasure
Possible Successors
CloseMeasure
Alternatives
EdgesSubPix, FuzzyMeasurePairing, MeasurePairs
See also
FuzzyMeasurePos, MeasurePos
Module
1D Metrology
in fuzzyScore. In addition, the distance between consecutive edge points is returned in distance. Here,
Distance[i] corresponds to the distance between Edge[i] and Edge[i+1], i.e., the tuple distance contains one
element less than the tuples rowEdge and columnEdge.
Attention
FuzzyMeasurePos only returns meaningful results if the assumptions that the edges are straight and perpen-
dicular to the major axis of the rectangle are fulfilled. Thus, it should not be used to extract edges from curved
objects, for example. Furthermore, the user should ensure that the rectangle is as close to perpendicular as possible
to the edges in the image. Additionally, sigma must not become larger than approx. 0.5 * Length1 (for Length1
see GenMeasureRectangle2).
It should be kept in mind that FuzzyMeasurePos ignores the domain of image for efficiency reasons. If
certain regions in the image should be excluded from the measurement a new measure object with appropriately
modified parameters should be generated.
Parameter
Parallelization Information
FuzzyMeasurePos is reentrant and processed without parallelization.
Possible Predecessors
GenMeasureRectangle2, GenMeasureArc, SetFuzzyMeasure
Possible Successors
CloseMeasure
Alternatives
EdgesSubPix, MeasurePos
See also
FuzzyMeasurePairing, FuzzyMeasurePairs, MeasurePairs
Module
1D Metrology
Attention
Note that when using bilinear or bicubic interpolation, not only the measurement rectangle but additionally the
margin around the rectangle must fit into the image. The width of the margin (in all four directions) must be at
least one pixel for bilinear interpolation and two pixels for bicubic interpolation. For projection lines that do not
fulfill this condition, no gray value is computed. Thus, no edge can be extracted at these positions.
Parameter
public HMeasure ( HTuple row, HTuple column, HTuple phi, HTuple length1,
HTuple length2, int width, int height, string interpolation)
public HMeasure ( double row, double column, double phi, double length1,
double length2, int width, int height, string interpolation)
GenMeasureRectangle2 prepares the extraction of straight edges which lie perpendicular to the major axis
of a rectangle. The center of the rectangle is passed in the parameters row and column, the direction of the major
axis of the rectangle in phi, and the length of the two axes, i.e., half the diameter of the rectangle, in length1
and length2.
The edge extraction algorithm is described in the documentation of the operator MeasurePos. As discussed
there, different types of interpolation can be used for the calculation of the one-dimensional gray value profile. For
interpolation = ’nearest_neighbor’, the gray values in the measurement are obtained from the gray values of
the closest pixel, i.e., by constant interpolation. For interpolation = ’bilinear’, bilinear interpolation is used,
while for interpolation = ’bicubic’, bicubic interpolation is used.
To perform the actual measurement at optimal speed, all computations that can be used for multiple measurements
are already performed in the operator GenMeasureRectangle2. For this, an optimized data structure, a
so-called measure object, is constructed and returned in measureHandle. The size of the images in which
measurements will be performed must be specified in the parameters width and height.
The system parameter ’int_zooming’ (see SetSystem) affects the accuracy and speed of the calculations used to
construct the measure object. If ’int_zooming’ is set to ’true’, the internal calculations are performed using fixed
point arithmetic, leading to much shorter execution times. However, the geometric accuracy is slightly lower in
this mode. If ’int_zooming’ is set to ’false’, the internal calculations are performed using floating point arithmetic,
leading to the maximum geometric accuracy, but also to significantly increased execution times.
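The difference between the interpolation modes can be illustrated on a toy 2 x 2 image (a Python sketch of the subpixel sampling step only, not of the measure object itself; bicubic interpolation is omitted for brevity):

```python
import math

def sample(img, r, c, interpolation="bilinear"):
    """Gray value at the subpixel position (r, c) of a 2-D list 'img'."""
    if interpolation == "nearest_neighbor":
        return img[int(r + 0.5)][int(c + 0.5)]   # constant interpolation
    r0, c0 = int(math.floor(r)), int(math.floor(c))
    dr, dc = r - r0, c - c0                      # bilinear weights
    return ((1 - dr) * (1 - dc) * img[r0][c0]
            + (1 - dr) * dc * img[r0][c0 + 1]
            + dr * (1 - dc) * img[r0 + 1][c0]
            + dr * dc * img[r0 + 1][c0 + 1])

img = [[0, 10], [20, 30]]
print(sample(img, 0.6, 0.6))                      # ~ 18.0 (bilinear)
print(sample(img, 0.6, 0.6, "nearest_neighbor"))  # 30 (closest pixel)
```

The one-pixel (bilinear) or two-pixel (bicubic) margin requirement mentioned below follows directly from the neighboring pixels each mode reads.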
Attention
Note that when using bilinear or bicubic interpolation, not only the measurement rectangle but additionally the
margin around the rectangle must fit into the image. The width of the margin (in all four directions) must be at
least one pixel for bilinear interpolation and two pixels for bicubic interpolation. For projection lines that do not
fulfill this condition, no gray value is computed. Thus, no edge can be extracted at these positions.
Parameter
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; HImage
Input image.
. measureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . measure_id ; HMeasure / HTuple (IntPtr)
Measure object handle.
. sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double)
Sigma of gaussian smoothing.
Default Value : 1.0
Suggested values : Sigma ∈ {0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Typical range of values : 0.4 ≤ Sigma ≤ 100 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Sigma ≥ 0.4
. threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double)
Minimum edge amplitude.
Default Value : 30.0
Suggested values : Threshold ∈ {5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 90.0, 110.0}
Typical range of values : 1 ≤ Threshold ≤ 255 (lin)
Minimum Increment : 0.5
Recommended Increment : 2
. transition (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Type of gray value transition that determines how edges are grouped to edge pairs.
Default Value : "all"
List of values : Transition ∈ {"all", "positive", "negative", "all_strongest", "positive_strongest",
"negative_strongest"}
. select (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Selection of edge pairs.
Default Value : "all"
List of values : Select ∈ {"all", "first", "last"}
. rowEdgeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; HTuple (double)
Row coordinate of the center of the first edge.
. columnEdgeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; HTuple (double)
Column coordinate of the center of the first edge.
. amplitudeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real-array ; HTuple (double)
Edge amplitude of the first edge (with sign).
. rowEdgeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; HTuple (double)
Row coordinate of the center of the second edge.
. columnEdgeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; HTuple (double)
Column coordinate of the center of the second edge.
. amplitudeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Edge amplitude of the second edge (with sign).
. intraDistance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Distance between edges of an edge pair.
. interDistance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Distance between consecutive edge pairs.
Result
If the parameter values are correct the operator MeasurePairs returns the value 2 (H_MSG_TRUE). Otherwise
an exception is raised.
Parallelization Information
MeasurePairs is reentrant and processed without parallelization.
Possible Predecessors
GenMeasureRectangle2
Possible Successors
CloseMeasure
Alternatives
EdgesSubPix, FuzzyMeasurePairs, FuzzyMeasurePairing
See also
MeasurePos, FuzzyMeasurePos
Module
1D Metrology
Parameter
Extracting points with a particular gray value along a rectangle or an annular arc.
MeasureThresh extracts points at which the gray value within a one-dimensional gray value profile is equal
to the specified threshold threshold. The gray value profile is projected onto the major axis of the measure
rectangle, which is passed with the parameter measureHandle, so the threshold points calculated within the
gray value profile correspond to certain image coordinates on the rectangle’s major axis. These coordinates are
returned as the operator results in rowThresh and columnThresh.
If the gray value profile intersects the threshold line several times, the parameter select determines which
values to return. Possible settings are ’first’, ’last’, ’first_last’ (first and last), or ’all’. For the last two cases
distance returns the distances between the calculated points.
The gray value profile is created by averaging the gray values along all line segments, which are defined by the
measure rectangle as follows:
For every line segment, the average of the gray values of all points with an integer distance to the major axis is
calculated. Due to translation and rotation of the measure rectangle with respect to the image coordinates the input
image image is in general sampled at subpixel positions.
Since this involves some calculations which can be used repeatedly in several projections, the operator
GenMeasureRectangle2 is used to perform these calculations only once in advance. Here, the measure
object measureHandle is generated and different interpolation schemes can be selected.
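The threshold search on the averaged profile can be sketched as follows (a Python illustration; the real operator works on the smoothed, projected profile of the measure object, and the profile values below are invented):

```python
def measure_thresh(profile, threshold, select="all"):
    """Subpixel positions where a 1-D gray value profile crosses
    'threshold', found by linear interpolation between samples."""
    pos = []
    for i in range(len(profile) - 1):
        a, b = profile[i], profile[i + 1]
        if (a - threshold) * (b - threshold) < 0:       # sign change
            pos.append(i + (threshold - a) / (b - a))
        elif a == threshold:                            # exact hit
            pos.append(float(i))
    if select == "first":
        pos = pos[:1]
    elif select == "last":
        pos = pos[-1:]
    elif select == "first_last" and len(pos) > 1:
        pos = [pos[0], pos[-1]]
    dist = [q - p for p, q in zip(pos, pos[1:])]        # as in 'distance'
    return pos, dist

profile = [10.0, 50.0, 200.0, 220.0, 90.0, 20.0]
pos, dist = measure_thresh(profile, 128.0, "first_last")
# Two crossings (rising and falling flank) and one distance between them.
```

Mapping the 1-D positions back to image coordinates along the rectangle's major axis is what yields rowThresh and columnThresh.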
Attention
MeasureThresh only returns meaningful results if the assumptions that the edges are straight and perpendicular
to the major axis of the rectangle are fulfilled. Thus, it should not be used to extract edges from curved objects,
for example. Furthermore, the user should ensure that the rectangle is as close to perpendicular as possible to the
edges in the image. Additionally, sigma must not become larger than approx. 0.5 * Length1 (for Length1 see
GenMeasureRectangle2).
It should be kept in mind that MeasureThresh ignores the domain of image for efficiency reasons. If certain
regions in the image should be excluded from the measurement a new measure object with appropriately modified
parameters should be generated.
Parameter
. image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; HImage
Input image.
. measureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . measure_id ; HMeasure / HTuple (IntPtr)
Measure object handle.
. sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double)
Sigma of gaussian smoothing.
Default Value : 1.0
Suggested values : Sigma ∈ {0.0, 0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Typical range of values : 0.4 ≤ Sigma ≤ 100 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Sigma ≥ 0.0
. threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double)
Threshold.
Default Value : 128.0
Typical range of values : 0 ≤ Threshold ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 0.5
. select (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Selection of points.
Default Value : "all"
List of values : Select ∈ {"all", "first", "last", "first_last"}
. rowThresh (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; HTuple (double)
Row coordinates of points with threshold value.
SetFuzzyMeasure specifies a fuzzy member function, passed in function. The specified fuzzy functions
enable FuzzyMeasurePos and FuzzyMeasurePairs / FuzzyMeasurePairing to evaluate and select
the detected edge candidates. For this purpose, weighting characteristics for different edge features can be defined
by one function each. Such a specified feature is called a fuzzy set. If no function is specified for a fuzzy set,
that feature is not used in the final edge evaluation. Setting a second fuzzy function for a set discards the first
defined function and replaces it with the second. A previously defined fuzzy member function can be discarded
completely by ResetFuzzyMeasure.
Functions for five different fuzzy set types, selected by the setType parameter, can be defined, the subtypes of a
set being mutually exclusive:
• ’contrast’ will use the fuzzy function to evaluate the amplitudes of the edge candidates. When extracting
edge pairs, the fuzzy evaluation is obtained by the geometric average of the fuzzy contrast scores of both
edges.
• The fuzzy function of ’position’ evaluates the distance of each edge candidate to the reference point of the
measure object, generated by GenMeasureArc or GenMeasureRectangle2. The reference point is
located at the beginning, whereas ’position_center’ or ’position_end’ sets the reference point to the middle
or the end of the one-dimensional gray value profile instead. If the fuzzy position evaluation depends on the
position of the object along the profile, ’position_first_edge’ / ’position_last_edge’ sets the reference point at
the position of the first/last extracted edge. When extracting edge pairs the position of a pair is referenced by
the geometric average of the fuzzy position scores of both edges.
• Similar to ’position’, ’position_pair’ evaluates the distance of each edge pair to the reference point of
the measure object. The position of a pair is defined by the center point between both edges. The ob-
ject’s reference can be set by ’position_pair_center’, ’position_pair_end’ and ’position_first_pair’, ’po-
sition_last_pair’, respectively. Contrary to ’position’, this set is only used by FuzzyMeasurePairs/
FuzzyMeasurePairing.
• ’size’ denotes a fuzzy set that evaluates the distance of the two edges of a pair in pixels. This
set is only used by FuzzyMeasurePairs/ FuzzyMeasurePairing. Specifying an upper bound
for the size by terminating the member function with a corresponding fuzzy value of 0.0 will speed up
FuzzyMeasurePairs / FuzzyMeasurePairing because not all possible pairs need to be considered.
• ’gray’ sets a fuzzy function to weight the mean projected gray value between two edges of a pair. This set is
only used by FuzzyMeasurePairs / FuzzyMeasurePairing.
A fuzzy member function is defined as a piecewise linear function by at least two pairs of values, sorted in
ascending order by their x values. The x values represent the edge feature and must lie within the parameter space
of the set type, i.e., in the case of ’contrast’ and ’gray’ and, e.g., byte images, within the range 0.0 ≤ x ≤ 255.0.
In the case of ’size’, x has to satisfy 0.0 ≤ x, whereas in the case of ’position’, x can be any real number. The y values of the
fuzzy function represent the weight of the corresponding feature value and have to satisfy the range 0.0 ≤ y ≤
1.0. Outside of the function’s interval, defined by the smallest and the greatest x values, the y values of the interval
borders are continued constantly. Such fuzzy member functions can be generated by CreateFunct1dPairs.
If more than one set is defined, FuzzyMeasurePos / FuzzyMeasurePairs / FuzzyMeasurePairing
yield the overall fuzzy weighting as the geometric mean of the weights of each set.
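The evaluation of such a piecewise linear member function, including the constant continuation outside its interval, can be sketched as follows (Python; the break points are invented for illustration):

```python
def fuzzy_weight(xs, ys, x):
    """Evaluate a piecewise linear fuzzy member function given as
    value pairs (xs ascending, 0.0 <= ys <= 1.0) at feature value x."""
    if x <= xs[0]:
        return ys[0]               # constant continuation on the left
    if x >= xs[-1]:
        return ys[-1]              # constant continuation on the right
    for i in range(len(xs) - 1):
        if x <= xs[i + 1]:         # linear interpolation inside segment
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])

# A hypothetical 'contrast' set rejecting amplitudes below 20 and
# fully accepting amplitudes above 40:
xs, ys = [20.0, 40.0], [0.0, 1.0]
print(fuzzy_weight(xs, ys, 30.0))   # 0.5
print(fuzzy_weight(xs, ys, 10.0))   # 0.0
print(fuzzy_weight(xs, ys, 255.0))  # 1.0
```

In HALCON itself such value pairs are built with CreateFunct1dPairs and attached via SetFuzzyMeasure.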
Parameter
. measureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . measure_id ; HMeasure / HTuple (IntPtr)
Measure object handle.
. setType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Selection of the fuzzy set.
Default Value : "contrast"
List of values : SetType ∈ {"position", "position_center", "position_end", "position_first_edge",
"position_last_edge", "position_pair_center", "position_pair_end", "position_first_pair", "position_last_pair",
"size", "gray", "contrast"}
. function (input_control) . . . . . . . . . . .function_1d-array ; HFunction1D / HTuple (double / int / long)
Fuzzy member function.
Example (Syntax: HDevelop)
Parallelization Information
SetFuzzyMeasure is reentrant and processed without parallelization.
Possible Predecessors
GenMeasureArc, GenMeasureRectangle2, CreateFunct1dPairs, TransformFunct1d
Possible Successors
FuzzyMeasurePos, FuzzyMeasurePairs
Alternatives
SetFuzzyMeasureNormPair
See also
ResetFuzzyMeasure
Module
1D Metrology
• ’size’ denotes a fuzzy set that evaluates the normalized distance of the two edges of a pair in pixels:
x = d / s (x ≥ 0).
Specifying an upper bound x_max for the size by terminating the member function with a corresponding
fuzzy value of 0.0 will speed up FuzzyMeasurePairs / FuzzyMeasurePairing because not all
possible pairs must be considered. Additionally, this fuzzy set can also be specified as a normalized size
difference by ’size_diff’:
x = (s − d) / s (x ≤ 1),
and as an absolute normalized size difference by ’size_abs_diff’:
x = |s − d| / s (0 ≤ x ≤ 1).
• The fuzzy function of ’position’ evaluates the signed distance p of each edge candidate to the reference point
of the measure object, generated by GenMeasureArc or GenMeasureRectangle2:
x = p / s.
The reference point is located at the beginning, whereas ’position_center’ or ’position_end’ sets the reference
point to the middle or the end of the one-dimensional gray value profile instead. If the fuzzy position
evaluation depends on the position of the object along the profile, ’position_first_edge’ / ’position_last_edge’
sets the reference point at the position of the first/last extracted edge. When extracting edge pairs, the position
of a pair is referenced by the geometric average of the fuzzy position scores of both edges.
• Similar to ’position’, ’position_pair’ evaluates the signed distance of each edge pair to the reference point
of the measure object. The position of a pair is defined by the center point between both edges. The
object’s reference can be set by ’position_pair_center’, ’position_pair_end’ and ’position_first_pair’, ’po-
sition_last_pair’, respectively. Contrary to ’position’, this set is only used by FuzzyMeasurePairs/
FuzzyMeasurePairing.
A normalized fuzzy member function is defined as a piecewise linear function by at least two pairs of values,
sorted in ascending order by their x values. The y values of the fuzzy function represent the weight of the
corresponding feature value and must satisfy the range 0.0 ≤ y ≤ 1.0. Outside of the function’s interval, defined
by the smallest and the greatest x values, the y values of the interval borders are continued constantly. Such fuzzy
member functions can be generated by CreateFunct1dPairs.
If more than one set is defined, FuzzyMeasurePos / FuzzyMeasurePairs / FuzzyMeasurePairing
yield the overall fuzzy weighting as the geometric mean of the weights of each set.
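The three normalized ’size’ arguments can be computed as in this sketch (Python for illustration; d and s are invented values, with s corresponding to pairSize):

```python
def norm_size_arg(set_type, d, s):
    """Argument x handed to the normalized fuzzy set for an edge
    pair of width d, given the favored pair size s (pairSize)."""
    if set_type == "size":
        return d / s              # x >= 0
    if set_type == "size_diff":
        return (s - d) / s        # x <= 1
    if set_type == "size_abs_diff":
        return abs(s - d) / s     # 0 <= x <= 1
    raise ValueError(set_type)

# An edge pair 12 pixels wide with a favored pair size of 10 pixels:
print(norm_size_arg("size", 12.0, 10.0))           # 1.2
print(norm_size_arg("size_diff", 12.0, 10.0))      # -0.2
print(norm_size_arg("size_abs_diff", 12.0, 10.0))  # 0.2
```

Because the argument is normalized by s, the same member function can be reused for different expected pair sizes.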
Parameter
. measureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . measure_id ; HMeasure / HTuple (IntPtr)
Measure object handle.
. pairSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (double / int / long)
Favored width of edge pairs.
Default Value : 10.0
List of values : PairSize ∈ {4.0, 6.0, 8.0, 10.0, 15.0, 20.0, 30.0}
Typical range of values : 0.0 ≤ PairSize
Minimum Increment : 0.1
Recommended Increment : 1.0
. setType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Selection of the fuzzy set.
Default Value : "size_abs_diff"
List of values : SetType ∈ {"size", "size_diff", "size_abs_diff", "position", "position_center",
"position_end", "position_first_edge", "position_last_edge", "position_pair_center", "position_pair_end",
"position_first_pair", "position_last_pair"}
. function (input_control) . . . . . . . . . . .function_1d-array ; HFunction1D / HTuple (double / int / long)
Fuzzy member function.
Example (Syntax: HDevelop)
HALCON 8.0.2
1326 CHAPTER 15. TOOLS
* Create a normalized fuzzy function (30% uncertainty)
create_funct_1d_pairs ([0.7,1.0,1.3], [0.0,1.0,0.0], SizeFunction)
* Set it for an expected pair size of 13.45 pixels
set_fuzzy_measure_norm_pair (MeasureHandle, 13.45, ’size’, SizeFunction)
Parallelization Information
SetFuzzyMeasureNormPair is reentrant and processed without parallelization.
Possible Predecessors
GenMeasureArc, GenMeasureRectangle2, CreateFunct1dPairs
Possible Successors
FuzzyMeasurePairs, FuzzyMeasurePairing
Alternatives
TransformFunct1d, SetFuzzyMeasure
See also
ResetFuzzyMeasure
Module
1D Metrology
Result
If the parameter values are correct, the operator TranslateMeasure returns the value 2 (H_MSG_TRUE).
Otherwise, an exception is raised.
Parallelization Information
TranslateMeasure is reentrant and processed without parallelization.
Possible Predecessors
GenMeasureRectangle2, GenMeasureArc
Possible Successors
MeasurePos, MeasurePairs, FuzzyMeasurePos, FuzzyMeasurePairs,
FuzzyMeasurePairing, MeasureThresh
Alternatives
GenMeasureRectangle2, GenMeasureArc
See also
CloseMeasure
Module
1D Metrology
15.15 OCV
static void HOperatorSet.CloseAllOcvs ( )
static void HMisc.CloseAllOcvs ( )
Clear all OCV tools.
CloseAllOcvs closes all OCV tools which have been opened using CreateOcvProj or ReadOcv. All
handles are invalid after this call.
Attention
CloseAllOcvs exists solely for the purpose of implementing the “reset program” functionality in HDevelop.
CloseAllOcvs must not be used in any application.
Result
CloseAllOcvs always returns 2 (H_MSG_TRUE).
Parallelization Information
CloseAllOcvs is processed completely exclusively without parallelization.
Possible Predecessors
ReadOcv, CreateOcvProj
Alternatives
CloseOcv
Module
OCR/OCV
read_ocv("ocv_file",&ocv_handle);
for (i=0; i<1000; i++)
{
grab_image_async(&Image,fg_handle,-1);
reduce_domain(Image,ROI,&Pattern);
do_ocv_simple(Pattern,ocv_handle,"A",
"true","true","false","true",10,
&Quality);
}
close_ocv(ocv_handle);
Result
CloseOcv returns 2 (H_MSG_TRUE) if the handle is valid. Otherwise, an exception is raised.
Parallelization Information
CloseOcv is processed completely exclusively without parallelization.
Possible Predecessors
ReadOcv, CreateOcvProj
See also
CloseOcr
Module
OCR/OCV
create_ocv_proj("A",&ocv_handle);
draw_region(&ROI,window_handle);
reduce_domain(Image,ROI,&Sample);
traind_ocv_proj(Sample,ocv_handle,"A","single");
Result
CreateOcvProj returns 2 (H_MSG_TRUE) if the parameters are correct. Otherwise, an exception is raised.
Parallelization Information
CreateOcvProj is processed completely exclusively without parallelization.
Possible Successors
TraindOcvProj, WriteOcv, CloseOcv
Alternatives
ReadOcv
See also
CreateOcrClassBox
Module
OCR/OCV
ReadOcv reads an OCV tool from a file. The tool will contain the same information that it contained when it was
saved with WriteOcv. After reading the tool, the training can be completed for those patterns which have not been
trained so far. Otherwise, a pattern comparison can be applied directly by calling DoOcvSimple.
The file extension ’.ocv’ is used; if it is not given with the file name, it will be added automatically.
Parameter
. fileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; HTuple (string)
Name of the file which has to be read.
Default Value : "test_ocv"
. OCVHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocv ; HOCV / HTuple (IntPtr)
Handle of read OCV tool.
Example (Syntax: C++)
read_ocv("ocv_file",&ocv_handle);
for (i=0; i<1000; i++)
{
grab_image_async(&Image,fg_handle,-1);
reduce_domain(Image,ROI,&Pattern);
do_ocv_simple(Pattern,ocv_handle,"A",
"true","true","false","true",10,
&Quality);
}
close_ocv(ocv_handle);
Result
ReadOcv returns 2 (H_MSG_TRUE) if the file is correct. Otherwise, an exception is raised.
Parallelization Information
ReadOcv is processed completely exclusively without parallelization.
Possible Predecessors
WriteOcv
Possible Successors
DoOcvSimple, CloseOcv
See also
ReadOcr
Module
OCR/OCV
the same. However, using multiple calls will normally result in a longer execution time than using one call with all
patterns.
Parameter
create_ocv_proj("A",&ocv_handle);
draw_region(&ROI,window_handle);
reduce_domain(Image,ROI,&Sample);
traind_ocv_proj(Sample,ocv_handle,"A","single");
Result
TraindOcvProj returns 2 (H_MSG_TRUE) if the handle and the training pattern(s) are correct. Otherwise, an
exception is raised.
Parallelization Information
TraindOcvProj is processed completely exclusively without parallelization.
Possible Predecessors
WriteOcrTrainf, CreateOcvProj, ReadOcv, Threshold, Connection, SelectShape
Possible Successors
CloseOcv
See also
TraindOcrClassBox
Module
OCR/OCV
Result
WriteOcv returns 2 (H_MSG_TRUE) if the data is correct and the file can be written. Otherwise, an exception is
raised.
Parallelization Information
WriteOcv is reentrant and processed without parallelization.
Possible Predecessors
TraindOcvProj
Possible Successors
CloseOcv
See also
WriteOcr
Module
OCR/OCV
15.16 Shape-from
Parameter
. multiFocusImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . multichannel-image(-array) ; HImage
Multichannel gray image consisting of multiple focus levels.
. depth (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; HImage
Depth image.
. confidence (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; HImage
Confidence of depth estimation.
. filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; HTuple (string)
Filter used to find sharp pixels.
Default Value : "highpass"
List of values : Filter ∈ {"highpass", "bandpass"}
. selection (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; HTuple (string)
Method used to find sharp pixels.
Default Value : "next_maximum"
List of values : Selection ∈ {"next_maximum", "local"}
compose3(Focus0,Focus1,Focus2,&MultiFocus);
depth_from_focus(MultiFocus,&Depth,&Confidence,’highpass’,’next_maximum’);
mean_image(Depth,&Smooth,15,15);
select_grayvalues_from_channels(MultiChannel,Smooth,SharpImage);
threshold(Confidence,HighConfidence,10,255);
reduce_domain(SharpImage,HighConfidence,ConfidentSharp);
Parallelization Information
DepthFromFocus is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
Compose2, Compose3, Compose4, AddChannels, ReadImage, ReadSequence
Possible Successors
SelectGrayvaluesFromChannels, MeanImage, BinomialFilter, GaussImage, Threshold
See also
CountChannels
Module
3D Metrology
Possible Successors
SfsModLr, SfsOrigLr, SfsPentland, PhotStereo, ShadeHeightField
Module
3D Metrology
HTuple HImage.EstimateTiltZc ( )
Estimate the tilt of a light source.
EstimateTiltZc estimates the tilt of a light source, i.e., the angle between the light source and the x-axis after
projection into the xy-plane, from the input image using the algorithm of Zheng and Chellappa.
Parameter
The operator SelectGrayvaluesFromChannels selects gray values from the different channels of
multichannelImage. The channel number for each pixel is determined from the corresponding pixel value
in indexImage. If multichannelImage and indexImage contain the same number of images, the
corresponding images are processed pairwise. Otherwise, indexImage must contain only one single image. In
this case, the gray value selection is performed for each image of multichannelImage according to
indexImage.
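The selection rule described above can be illustrated on plain nested lists (a minimal sketch with 0-based channel indices and a hypothetical function name; the operator itself works on HALCON image objects):

```python
def select_from_channels(channels, index_image):
    """For each pixel, take the gray value from the channel whose number
    is stored in the corresponding pixel of index_image.
    channels: list of equally sized 2D gray value arrays (channel 0, 1, ...)."""
    rows, cols = len(index_image), len(index_image[0])
    return [[channels[index_image[r][c]][r][c] for c in range(cols)]
            for r in range(rows)]
```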
Parameter
. multichannelImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . multichannel-image(-array) ; HImage
Multi-channel gray value image.
. indexImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; HImage
Image, where pixel values are interpreted as channel index.
Number of elements : (IndexImage = MultichannelImage) ∨ (IndexImage = 1)
. selected (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; HImage
Resulting image.
Example (Syntax: C++)
compose3(Focus0,Focus1,Focus2,&MultiFocus);
depth_from_focus(MultiFocus,&Depth,&Confidence,’highpass’,’next_maximum’);
mean_image(Depth,&Smooth,15,15);
select_grayvalues_from_channels(MultiChannel,Smooth,SharpImage);
Parallelization Information
SelectGrayvaluesFromChannels is reentrant and automatically parallelized (on tuple level, domain
level).
Possible Predecessors
DepthFromFocus, MeanImage
Possible Successors
DispImage
See also
CountChannels
Module
Foundation
Parameter
Result
If all parameters are correct SfsOrigLr returns the value 2 (H_MSG_TRUE). Otherwise, an exception is raised.
Parallelization Information
SfsOrigLr is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
EstimateAlAm, EstimateSlAlLr, EstimateSlAlZc, EstimateTiltLr, EstimateTiltZc
Possible Successors
ShadeHeightField
Module
3D Metrology
15.17 Stereo
static void HOperatorSet.BinocularCalibration ( HTuple NX,
HTuple NY, HTuple NZ, HTuple NRow1, HTuple NCol1, HTuple NRow2,
HTuple NCol2, HTuple startCamParam1, HTuple startCamParam2,
HTuple NStartPose1, HTuple NStartPose2, HTuple estimateParams,
out HTuple camParam1, out HTuple camParam2, out HTuple NFinalPose1,
out HTuple NFinalPose2, out HTuple relPose, out HTuple errors )
According to CameraCalibration the 3D transformation poses of the calibration model to the respective CCS
are returned in NFinalPose1 and NFinalPose2. These transformations are related to relPose according
to the following equation (neglecting differences due to the balancing effects of the multi image calibration):
HomMat3D_NFinalPose2 = INV(HomMat3D_RelPose) * HomMat3D_NFinalPose1,
where HomMat3D_* denotes the homogeneous transformation matrix of the respective pose and INV() inverts a
homogeneous matrix.
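The relation can be checked numerically on 4×4 homogeneous matrices; a plain-Python sketch (in a HALCON program the pose tuples would first be converted to matrices, e.g. with PoseToHomMat3d):

```python
def mat_mul(a, b):
    """Multiply two 4x4 homogeneous transformation matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rigid_inv(m):
    """Invert a rigid transformation [R | t]: the inverse is [R^T | -R^T t]."""
    r = [[m[j][i] for j in range(3)] for i in range(3)]               # R^T
    t = [-sum(r[i][j] * m[j][3] for j in range(3)) for i in range(3)]  # -R^T t
    return [r[0] + [t[0]], r[1] + [t[1]], r[2] + [t[2]], [0.0, 0.0, 0.0, 1.0]]

def final_pose2(rel_pose, final_pose1):
    """HomMat3D_NFinalPose2 = INV(HomMat3D_RelPose) * HomMat3D_NFinalPose1."""
    return mat_mul(rigid_inv(rel_pose), final_pose1)
```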
The computed average errors returned in errors give an impression of the accuracy of the calibration. Using
the determined camera parameters, they denote the average Euclidean distance between the projections of the
mark centers of the calibration model and their extracted image coordinates.
Parameter
. NX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Ordered tuple with all X-coordinates of the calibration marks (in meters).
. NY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Ordered tuple with all Y-coordinates of the calibration marks (in meters).
. NZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Ordered tuple with all Z-coordinates of the calibration marks (in meters).
. NRow1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Ordered tuple with all row-coordinates of the extracted calibration marks of camera 1 (in pixels).
. NCol1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Ordered tuple with all column-coordinates of the extracted calibration marks of camera 1 (in pixels).
. NRow2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Ordered tuple with all row-coordinates of the extracted calibration marks of camera 2 (in pixels).
. NCol2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Ordered tuple with all column-coordinates of the extracted calibration marks of camera 2 (in pixels).
. startCamParam1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double / int / long)
Initial values for the internal projective parameters of the projective camera 1.
. startCamParam2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double / int / long)
Initial values for the internal projective parameters of the projective camera 2.
. NStartPose1 (input_control) . . . . . . . . . . . . . . . . . . . pose-array ; HPose [ ] / HTuple (double / int / long)
Ordered tuple with all initial values for the poses of the calibration model in relation to camera 1.
. NStartPose2 (input_control) . . . . . . . . . . . . . . . . . . . pose-array ; HPose [ ] / HTuple (double / int / long)
Ordered tuple with all initial values for the poses of the calibration model in relation to camera 2.
. estimateParams (input_control) . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; HTuple (string / int / long)
Camera parameters to be estimated.
Default Value : "all"
List of values : EstimateParams ∈ {"all", "pose_rel", "pose1", "pose2", "cam_param1", "cam_param2",
"alpha1", "beta1", "gamma1", "transx1", "transy1", "transz1", "alpha2", "beta2", "gamma2", "transx2",
"transy2", "transz2", "focus1", "kappa1", "cx1", "cy1", "sx1", "sy1", "focus2", "kappa2", "cx2", "cy2", "sx2",
"sy2"}
. camParam1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; HTuple (double / int / long)
Internal parameters of the projective camera 1.
. camParam2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; HTuple (double / int / long)
Internal parameters of the projective camera 2.
. NFinalPose1 (output_control) . . . . . . . . . . . . . . . . . . pose-array ; HPose [ ] / HTuple (double / int / long)
Ordered tuple with all poses of the calibration model in relation to camera 1.
. NFinalPose2 (output_control) . . . . . . . . . . . . . . . . . . pose-array ; HPose [ ] / HTuple (double / int / long)
Ordered tuple with all poses of the calibration model in relation to camera 2.
. relPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; HPose / HTuple (double / int / long)
Pose of camera 2 in relation to camera 1.
. errors (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; HTuple (double)
Average error distances in pixels.
Example (Syntax: HDevelop)
close_all_framegrabbers ()
open_framegrabber (’File’, 1, 1, 0, 0, 0, 0, ’default’, -1, ’default’, -1,
’default’, ’images_l.seq’, ’default’, 0, -1, FGHandle1)
open_framegrabber (’File’, 1, 1, 0, 0, 0, 0, ’default’, -1, ’default’, -1,
’default’, ’images_r.seq’, ’default’, 1, -1, FGHandle2)
Result
BinocularCalibration returns 2 (H_MSG_TRUE) if all parameter values are correct and the desired
parameters have been determined by the minimization algorithm. If necessary, an exception is raised.
Parallelization Information
with
r, c: row and column coordinates of the pixel under examination; r’, c’: the coordinates running over the
correlation window,
g1, g2: gray values of the unprocessed input images,
N = (2m + 1)(2n + 1): size of the correlation window,
ḡ(r, c) = 1/N · Σ_{r’=r−m..r+m} Σ_{c’=c−n..c+n} g(r’, c’): mean value within the correlation window of width
2m + 1 and height 2n + 1.
It should be noted that the matching quality decreases with rising S for the methods ’sad’ and ’ssd’ (the best
quality value is 0) but increases for ’ncc’ (the best quality value is 1.0).
The size of the correlation window, referenced by 2m + 1 and 2n + 1, has to be odd and is passed in
maskWidth and maskHeight. The search space is confined by the minimum and maximum disparity values
minDisparity and maxDisparity. Because pixel values are not defined beyond the image border, the
resulting domain of disparity and score is not set along the image border within a margin of height
(maskHeight − 1)/2 at the top and bottom border and of width (maskWidth − 1)/2 at the left and right border.
For the same reason, the maximum disparity range is reduced at the left and right image border.
Since matching turns out to be highly unreliable when dealing with poorly textured areas, the minimum statistical
spread of gray values within the correlation window can be defined in textureThresh. This threshold is applied
on both input images image1 and image2. In addition, scoreThresh guarantees the matching quality and
defines the maximum (’sad’,’ssd’) or, respectively, minimum (’ncc’) score value of the correlation function. Setting
filter to ’left_right_check’, moreover, increases the robustness of the returned matches, as the result relies on a
concurrent direct and reverse match, whereas ’none’ switches it off.
The number of pyramid levels used to improve the time response of BinocularDisparity is determined by
numLevels. Following a coarse-to-fine scheme, disparity images of higher levels are computed and segmented
into rectangular subimages of similar disparity to reduce the disparity range on the next lower pyramid level.
textureThresh and scoreThresh are applied on every level, and the returned domain of the disparity
and score images arises from the intersection of the resulting domains of every single level. Generally, pyramid
structures are the more advantageous the more the disparity image can be segmented into regions of homogeneous
disparities and the bigger the specified disparity range is. As a drawback, coarse pyramid levels might lose
important texture information, which can result in deficient disparity values.
Finally, the value ’interpolation’ for parameter subDisparity performs subpixel refinement of disparities. It is
switched off by setting the parameter to ’none’.
Parameter
// ...
// read the internal and external stereo parameters
read_cam_par (’cam_left.dat’, CamParam1)
read_cam_par (’cam_right.dat’, CamParam2)
read_pose (’relpos.dat’, RelPose)
Result
BinocularDisparity returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an
exception is raised.
Parallelization Information
BinocularDisparity is reentrant and automatically parallelized (on domain level).
Possible Predecessors
MapImage
Possible Successors
Threshold, DisparityToDistance
Alternatives
BinocularDistance
See also
MapImage, GenBinocularRectificationMap, BinocularCalibration
Module
3D Metrology
camera 2, and the external parameters relPoseRect have to be defined. The latter characterizes the relative
pose of both cameras to each other and specifies a point transformation from the rectified camera system 2 to the
rectified camera system 1. These parameters can be obtained from the operator BinocularCalibration
and GenBinocularRectificationMap. After all, a quality measure for each distance value is returned in
score, containing the best result of the matching function S of a reference pixel. For the matching, the gray
values of the original unprocessed images are used.
The used matching function is defined by the parameter method, which selects one of three kinds of correlation:
• ’sad’: Summed Absolute Differences
S(r, c, d) = 1/N · Σ_{r’=r−m..r+m} Σ_{c’=c−n..c+n} |g1(r’, c’) − g2(r’, c’ + d)|,
with 0 ≤ S(r, c, d) ≤ 255.
• ’ssd’: Summed Squared Differences
S(r, c, d) = 1/N · Σ_{r’=r−m..r+m} Σ_{c’=c−n..c+n} (g1(r’, c’) − g2(r’, c’ + d))²,
with 0 ≤ S(r, c, d) ≤ 65025.
• ’ncc’: Normalized Cross Correlation
S(r, c, d) = [ Σ_{r’=r−m..r+m} Σ_{c’=c−n..c+n} (g1(r’, c’) − ḡ1(r, c)) · (g2(r’, c’ + d) − ḡ2(r, c + d)) ] /
sqrt( Σ_{r’,c’} (g1(r’, c’) − ḡ1(r, c))² · Σ_{r’,c’} (g2(r’, c’ + d) − ḡ2(r, c + d))² ),
with
r, c: row and column coordinates of the pixel under examination; r’, c’: the coordinates running over the
correlation window,
g1, g2: gray values of the unprocessed input images,
N = (2m + 1)(2n + 1): size of the correlation window,
ḡ(r, c) = 1/N · Σ_{r’=r−m..r+m} Σ_{c’=c−n..c+n} g(r’, c’): mean value within the correlation window of width
2m + 1 and height 2n + 1.
It should be noted that the matching quality decreases with rising S for the methods ’sad’ and ’ssd’ (the best
quality value is 0) but increases for ’ncc’ (the best quality value is 1.0).
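The three score functions can be written out directly; a plain-Python sketch on nested-list gray value images (hypothetical helper names, and none of the border, texture-threshold, or domain handling that the operator performs internally):

```python
import math

def _window(img, r, c, m, n, d=0):
    """Gray values of the (2m+1)x(2n+1) window centered at (r, c),
    shifted by the disparity d in column direction."""
    return [img[rr][cc + d]
            for rr in range(r - m, r + m + 1)
            for cc in range(c - n, c + n + 1)]

def match_score(img1, img2, r, c, d, m, n, method):
    w1 = _window(img1, r, c, m, n)
    w2 = _window(img2, r, c, m, n, d)
    N = len(w1)
    if method == "sad":                        # 0 (best) .. 255
        return sum(abs(a - b) for a, b in zip(w1, w2)) / N
    if method == "ssd":                        # 0 (best) .. 65025
        return sum((a - b) ** 2 for a, b in zip(w1, w2)) / N
    # ’ncc’: best value is 1.0
    mean1, mean2 = sum(w1) / N, sum(w2) / N
    num = sum((a - mean1) * (b - mean2) for a, b in zip(w1, w2))
    den = math.sqrt(sum((a - mean1) ** 2 for a in w1) *
                    sum((b - mean2) ** 2 for b in w2))
    return num / den
```

For a window of the second image that is an exact disparity-shifted copy of the first, ’sad’ and ’ssd’ yield 0 and ’ncc’ yields 1.0.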
The size of the correlation window has to be odd and is passed in maskWidth and maskHeight. The
search space is confined by the minimum and maximum disparity values minDisparity and maxDisparity.
Because pixel values are not defined beyond the image border, the resulting domain of distance and score is
generally not set along the image border within a margin of height maskHeight/2 at the top and bottom border
and of width maskWidth/2 at the left and right border. For the same reason, the maximum disparity range is
reduced at the left and right image border.
Since matching turns out to be highly unreliable when dealing with poorly textured areas, the minimum variance
within the correlation window can be defined in textureThresh. This threshold is applied on both input
images image1 and image2. In addition, scoreThresh guarantees the matching quality and defines the
maximum (’sad’,’ssd’) or, respectively, minimum (’ncc’) score value of the correlation function. Setting filter
to ’left_right_check’, moreover, increases the robustness of the returned matches, as the result relies on a concurrent
direct and reverse match, whereas ’none’ switches it off.
The number of pyramid levels used to improve the time response of BinocularDistance is determined by
numLevels. Following a coarse-to-fine scheme, disparity images of higher levels are computed and segmented
into rectangular subimages to reduce the disparity range on the next lower pyramid level. textureThresh and
scoreThresh are applied on every level, and the returned domain of the distance and score images arises
from the intersection of the resulting domains of every single level. Generally, pyramid structures are the more
advantageous the more the distance image can be segmented into regions of homogeneous distance values and the
bigger the specified disparity range is. As a drawback, coarse pyramid levels might lose important texture
information, which can result in deficient distance values.
Finally, the value ’interpolation’ for parameter subDistance increases the refinement and accuracy of the dis-
tance values. It is switched off by setting the parameter to ’none’.
Parameter
. image1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; HImage
Epipolar image of camera 1.
. image2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; HImage
Epipolar image of camera 2.
// ...
// read the internal and external stereo parameters
read_cam_par (’cam_left.dat’, CamParam1)
read_cam_par (’cam_right.dat’, CamParam2)
read_pose (’relpose.dat’, RelPose)
Result
BinocularDistance returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an
exception is raised.
Parallelization Information
BinocularDistance is reentrant and automatically parallelized (on domain level).
Possible Predecessors
MapImage
Possible Successors
Threshold
Alternatives
BinocularDisparity
See also
MapImage, GenBinocularRectificationMap, BinocularCalibration,
DistanceToDisparity, DisparityToDistance
Module
3D Metrology
Transform a disparity value into a distance value in a rectified binocular stereo system.
DisparityToDistance transforms a disparity value into a distance of an object point to the binocular
stereo system. The cameras of this system must be rectified and are defined by the rectified internal parame-
ters camParamRect1 of the projective camera 1 and camParamRect2 of the projective camera 2, and the
external parameters relPoseRect. The latter specifies the relative pose of both cameras to each other by
defining a point transformation from rectified camera system 2 to rectified camera system 1. These parameters can
be obtained from the operator BinocularCalibration and GenBinocularRectificationMap. The
disparity value disparity is defined by the column difference of the image coordinates of two corresponding
points on an epipolar line according to the equation d = c2 − c1 (see also BinocularDisparity). This value
characterizes a set of 3D object points of equal distance to a plane parallel to the rectified image plane of
the stereo system. The distance to the plane z = 0, which is parallel to the rectified image plane and contains
the optical centers of both cameras, is returned in distance.
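For an idealized rectified pair with a common focal length f (in pixels) and baseline b, this relation reduces to z = f · b / d. A sketch under that simplifying assumption (the operator itself works with the full rectified camera parameter tuples; function names are hypothetical):

```python
def disparity_to_distance(f, b, d):
    """Distance z of the object point from the plane z = 0 through the
    optical centers, for focal length f (pixels) and baseline b (meters)."""
    return f * b / d

def distance_to_disparity(f, b, z):
    """Inverse relation: disparity d = c2 - c1 produced at distance z."""
    return f * b / z
```

The two functions are exact inverses of each other, mirroring the operator pair DisparityToDistance / DistanceToDisparity.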
Parameter
Transform an image point and its disparity into a 3D point in a rectified stereo system.
Given an image point of the rectified camera 1, specified by its image coordinates (row1,col1), and its disparity
in a rectified binocular stereo system, DisparityToPoint3d computes the corresponding three-dimensional
object point. The disparity value disparity defines the column difference of the image coordinates of two
corresponding features on an epipolar line according to the equation d = c2 − c1 . The rectified binocular
camera system is specified by its internal camera parameters camParamRect1 of the projective camera 1 and
camParamRect2 of the projective camera 2, and the external parameters relPoseRect defining the pose of
the rectified camera 2 in relation to the rectified camera 1. These camera parameters can be obtained from the
operators BinocularCalibration and GenBinocularRectificationMap. The 3D point is returned
in Cartesian coordinates (x,y,z) of the rectified camera system 1.
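The triangulation itself is simple under idealized pinhole assumptions (a common focal length f in pixels, baseline b, principal point (cx, cy); a hypothetical helper, whereas the operator uses the full rectified parameter tuples):

```python
def disparity_to_point3d(f, b, cx, cy, row1, col1, d):
    """3D point in the rectified camera-1 frame, reconstructed from an
    image point of camera 1 and its disparity."""
    z = f * b / d              # depth from the disparity
    x = (col1 - cx) * z / f    # back-project the image point at depth z
    y = (row1 - cy) * z / f
    return x, y, z
```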
Parameter
. camParamRect1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; HTuple (double / int / long)
Rectified internal camera parameters of the projective camera 1.
Number of elements : 8
. camParamRect2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; HTuple (double / int / long)
Rectified internal camera parameters of the projective camera 2.
Number of elements : 8
. relPoseRect (input_control) . . . . . . . . . . . . . . . . . . . . . pose-array ; HPose / HTuple (double / int / long)
Pose of the rectified camera 2 in relation to the rectified camera 1.
Number of elements : 7
. row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; HTuple (double / int / long)
Row coordinate of a point in the rectified image 1.
. col1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; HTuple (double / int / long)
Column coordinate of a point in the rectified image 1.
. disparity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; HTuple (double / int / long)
Disparity of the images of the world point.
. x (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
X coordinate of the 3D point.
. y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Y coordinate of the 3D point.
. z (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Z coordinate of the 3D point.
Result
DisparityToPoint3d returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an
exception is raised.
Parallelization Information
DisparityToPoint3d is reentrant and processed without parallelization.
Possible Predecessors
BinocularCalibration, GenBinocularRectificationMap
Possible Successors
BinocularDisparity, BinocularDistance
See also
IntersectLinesOfSight
Module
3D Metrology
camParamRect1 of the projective camera 1 and camParamRect2 of the projective camera 2 and the external
parameters relPoseRect. The latter specifies the relative pose of both camera systems to each other by defining a
point transformation from the rectified camera system 2 to the rectified camera system 1. These parameters can
be obtained from the operator BinocularCalibration and GenBinocularRectificationMap. The
distance value is passed in distance and the resulting disparity value disparity is defined by the column
difference of the image coordinates of two corresponding features on an epipolar line according to the equation
d = c2 − c1 .
Parameter
. camParamRect1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; HTuple (double / int / long)
Rectified internal camera parameters of the projective camera 1.
Number of elements : 8
. camParamRect2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; HTuple (double / int / long)
Rectified internal camera parameters of the projective camera 2.
Number of elements : 8
. relPoseRect (input_control) . . . . . . . . . . . . . . . . . . . . . pose-array ; HPose / HTuple (double / int / long)
Point transformation from rectified camera 2 to rectified camera 1.
Number of elements : 7
. distance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; HTuple (double)
Distance of a world point to camera 1.
Restriction : 0 < Distance
. disparity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; HTuple (double / int / long)
Disparity between the images of the point.
Result
DistanceToDisparity returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an
exception is raised.
Parallelization Information
DistanceToDisparity is reentrant and processed without parallelization.
Possible Predecessors
BinocularCalibration, GenBinocularRectificationMap
Possible Successors
BinocularDisparity
Module
3D Metrology
Image coordinates result from 3D direction vectors by multiplication with the camera matrix CamMat:
(col, row, 1)^T = CamMat · (X, Y, 1)^T .
Therefore, the fundamental matrix FMatrix is calculated from the essential matrix EMatrix and the camera
matrices camMat1, camMat2 by the following formula:
FMatrix = INV(CamMat2)^T · EMatrix · INV(CamMat1) .
The transformation of the essential matrix into the fundamental matrix goes along with the propagation of the
covariance matrices covEMat to covFMat. If covEMat is empty, covFMat will be empty, too.
The conversion operator EssentialToFundamentalMatrix is used especially for a subsequent visualiza-
tion of the epipolar line structure via the fundamental matrix, which depicts the underlying stereo geometry.
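The conversion can be sketched with plain 3 × 3 matrix arithmetic; the relation F = K2^(-T) · E · K1^(-1) is the standard one, but the helper functions below are illustrative only, not HALCON's API:

```python
# 3x3 matrix helpers (plain lists, no external libraries).
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(a):
    return [[a[j][i] for j in range(3)] for i in range(3)]

def inverse(a):
    # Inverse via the adjugate; assumes a is invertible. The cyclic index
    # trick below yields the signed cofactors for a 3x3 matrix directly.
    det = (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
         - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
         + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    cof = [[a[(i + 1) % 3][(j + 1) % 3] * a[(i + 2) % 3][(j + 2) % 3]
          - a[(i + 1) % 3][(j + 2) % 3] * a[(i + 2) % 3][(j + 1) % 3]
            for j in range(3)] for i in range(3)]
    return [[cof[j][i] / det for j in range(3)] for i in range(3)]

# F = K2^-T * E * K1^-1 (standard essential-to-fundamental relation).
def essential_to_fundamental(e, k1, k2):
    return matmul(matmul(transpose(inverse(k2)), e), inverse(k1))
```

With identity camera matrices the fundamental matrix equals the essential matrix, which is a quick sanity check of the formula.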
Parameter
HALCON 8.0.2
1358 CHAPTER 15. TOOLS
In the case of a known covariance matrix covFMat of the fundamental matrix FMatrix, the covariance matrix
covFMatRect of the above rectified fundamental matrix is calculated. This can help for an improved stereo
matching process because the covariance matrix defines in terms of probabilities the image domain where to find
a corresponding match.
Similar to the operator GenBinocularRectificationMap the output images map1 and map2 describe
the transformation, also called mapping, of the original images to the rectified ones. The parameter mapping
specifies whether bilinear interpolation (’bilinear_map’) should be applied between the pixels in the input image
or whether the gray value of the nearest neighboring pixel should be taken (’nn_map’). The size and resolution
of the maps and of the transformed images can be adjusted by the parameter subSampling, which applies a
sub-sampling factor to the original images. For example, a factor of two will halve the image sizes. If just the two
homographies are required mapping can be set to ’no_map’ and no maps will be returned. For speed reasons,
this option should be used if for a specific stereo configuration the images must be rectified only once. If the stereo
setup is fixed, the maps should be generated only once and both images should be rectified with MapImage; this
will result in the smallest computational cost for on-line rectification.
When using the maps, the transformed images are of the same size as their maps. Each pixel in the map contains
the description of how the new pixel at this position is generated. The images map1 and map2 are single channel
images if mapping is set to ’nn_map’ and five channel images if it is set to ’bilinear_map’. In the first channel,
which is of type int4, the pixels contain the linear coordinates of their reference pixels in the original image. With
mapping equal to ’nn_map’ this reference pixel is the nearest neighbor to the back-transformed pixel coordinates
of the map. In the case of bilinear interpolation the reference pixel is the next upper left pixel relative to the back-transformed coordinates. The following scheme shows the ordering of the pixels in the original image next to the
back-transformed pixel coordinates, where the reference pixel takes the number 2.
2 3
4 5
The channels 2 to 5, which are of type uint2, contain the weights of the relevant pixels for the bilinear interpolation.
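Evaluating one entry of such a five-channel map can be sketched as follows (pure illustration of the scheme above; the exact channel encoding and weight scaling are HALCON-internal, and the weights here are assumed to be already normalized to sum to one):

```python
# Interpolate one output pixel from a flat gray-value image, given the
# linear index of reference pixel 2 (the upper-left neighbor) and four
# weights for pixels 2, 3, 4, 5 of the scheme:
#     2 3
#     4 5
def apply_bilinear_entry(image, width, ref, weights):
    w2, w3, w4, w5 = weights
    return (w2 * image[ref]
            + w3 * image[ref + 1]
            + w4 * image[ref + width]
            + w5 * image[ref + width + 1])
```

With a weight of 1 on pixel 2 this degenerates to the nearest-neighbor case of a single-channel map.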
Based on the rectified images, the disparity can be computed using BinocularDisparity. In contrast to stereo
with fully calibrated cameras, which uses the operator GenBinocularRectificationMap and its successors,
metric depth information cannot be derived for weakly calibrated cameras. The disparity map gives just a qualitative
depth ordering of the scene.
Parameter
. map1 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Image coding the rectification of the first image.
. map2 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; HImage
Image coding the rectification of the second image.
. FMatrix (input_control) . . . . . . . . . . . . . . . hom_mat2d-array ; HHomMat2D / HTuple (double / int / long)
Fundamental matrix.
. covFMat (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; HTuple (double / int / long)
9 × 9 covariance matrix of the fundamental matrix.
Default Value : []
. width1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Width of the first image.
Default Value : 512
List of values : Width1 ∈ {128, 256, 512, 1024}
Restriction : Width1 > 0
. height1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Height of the first image.
Default Value : 512
List of values : Height1 ∈ {128, 256, 512, 1024}
Restriction : Height1 > 0
. width2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Width of the second image.
Default Value : 512
List of values : Width2 ∈ {128, 256, 512, 1024}
Restriction : Width2 > 0
. height2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Height of the second image.
Default Value : 512
List of values : Height2 ∈ {128, 256, 512, 1024}
Restriction : Height2 > 0
. subSampling (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; HTuple (int / long / double)
Subsampling factor.
Default Value : 1
List of values : SubSampling ∈ {1, 2, 3, 1.5}
. mapping (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Type of mapping.
Default Value : "no_map"
List of values : Mapping ∈ {"no_map", "nn_map", "bilinear_map"}
. covFMatRect (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; HTuple (double)
9 × 9 covariance matrix of the rectified fundamental matrix.
Generate transformation maps that describe the mapping of the images of a binocular camera pair to a common
rectified image plane.
Given a pair of stereo images, rectification determines a transformation of each image plane in a way that pairs of
conjugate epipolar lines become collinear and parallel to the horizontal image axes. The rectified epipolar images
can be thought of as acquired by a new stereo rig, obtained by rotating the original cameras. The camera centers of
this virtual rig are maintained, whereas the image planes coincide, which means that the focal lengths are set equal
and the optical axes are parallel.
To obtain the transformation maps for epipolar images, GenBinocularRectificationMap requires the
internal camera parameters camParam1 of the projective camera 1 and camParam2 of the projective camera 2,
as well as the relative pose relPose defining a point transformation from camera 2 to camera 1. These parameters
can be obtained, e.g., from the operator BinocularCalibration.
The projection onto a common plane has many degrees of freedom which are implicitly restricted by selecting a
certain method in method (currently only one method available):
• ’geometric’ specifies the orientation of the common image plane by the cross product of the base line and the
line of intersection of the original image planes. The new focal lengths are determined in such a way that the
old principal points have the same distance to the new common image plane.
In addition, GenBinocularRectificationMap returns the modified internal and external camera parame-
ters of the rectified stereo rig. camParamRect1 and camParamRect2 contain the modified internal parameters
of camera 1 and camera 2, respectively. The rotation of the rectified camera in relation to the original camera is
specified by camPoseRect1 and camPoseRect2, respectively. Finally, relPoseRect returns the modified
relative pose of the rectified camera system 2 in relation to the rectified camera system 1 defining a translation in x
only. Generally, the transformations are defined in a way that the rectified camera 1 is left of the rectified camera
2. This means that the optical center of camera 2 has a positive x coordinate in the rectified coordinate system of
camera 1.
Parameter
// ...
// read the internal and external stereo parameters
read_cam_par (’cam_left.dat’, CamParam1)
read_cam_par (’cam_right.dat’, CamParam2)
read_pose (’relpos.dat’, RelPose)
// generate the rectification maps once for the fixed stereo setup
gen_binocular_rectification_map (Map1, Map2, CamParam1, CamParam2, RelPose, 1, ’geometric’, ’bilinear’, CamParamRect1, CamParamRect2, CamPoseRect1, CamPoseRect2, RelPoseRect)
// rectify each grabbed image on-line
while (1)
    grab_image_async (Image1, FGHandle1, -1)
    map_image (Image1, Map1, ImageMapped1)
    grab_image_async (Image2, FGHandle2, -1)
    map_image (Image2, Map2, ImageMapped2)
endwhile
Result
GenBinocularRectificationMap returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception is raised.
Parallelization Information
GenBinocularRectificationMap is reentrant and processed without parallelization.
Possible Predecessors
BinocularCalibration
Possible Successors
MapImage
Alternatives
GenImageToWorldPlaneMap
See also
MapImage, GenImageToWorldPlaneMap, ContourToWorldPlaneXld,
ImagePointsToWorldPlane
Module
3D Metrology
Get a 3D point from the intersection of two lines of sight within a binocular camera system.
Given two lines of sight from different cameras, specified by their image points (row1,col1) of camera 1 and
(row2,col2) of camera 2, IntersectLinesOfSight computes the 3D point of intersection of these lines.
The binocular camera system is specified by its internal camera parameters camParam1 of the projective cam-
era 1 and camParam2 of the projective camera 2, and the external parameters relPose defining the pose of
the cameras by a point transformation from camera 2 to camera 1. These camera parameters can be obtained,
e.g., from the operator BinocularCalibration, if the coordinates of the image points (row1,col1) and
(row2,col2) refer to the respective original image coordinate system. In the case of rectified image coordinates
(e.g., obtained from epipolar images), the rectified camera parameters must be passed, as they are returned by the
operator GenBinocularRectificationMap. The ’point of intersection’ is defined as the point with the
shortest distance to both lines of sight. This point is returned in Cartesian coordinates (x,y,z) of camera system 1,
and its distance to the lines of sight is returned in dist.
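The underlying geometry can be sketched as plain vector algebra: find the points of closest approach on the two lines of sight and return their midpoint (a generic construction under the stated definition, not HALCON's implementation; names are hypothetical):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Midpoint of the shortest segment between two 3D lines o + t*d, together
# with its distance to either line. Assumes the lines are not parallel.
def intersect_lines_of_sight(o1, d1, o2, d2):
    w = [a - b for a, b in zip(o1, o2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # zero only for parallel lines
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = [o + t1 * v for o, v in zip(o1, d1)]
    p2 = [o + t2 * v for o, v in zip(o2, d2)]
    mid = [(u + v) / 2 for u, v in zip(p1, p2)]
    diff = [u - v for u, v in zip(p1, p2)]
    dist = dot(diff, diff) ** 0.5 / 2
    return mid, dist
```

For skew lines the returned distance is half the gap between them; for truly intersecting lines it is zero.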
Parameter
Compute the essential matrix for a pair of stereo images by automatically finding correspondences between image
points.
Given a set of coordinates of characteristic points (rows1, cols1) and (rows2, cols2) in the stereo images
image1 and image2 along with known internal camera parameters, specified by the camera matrices camMat1
and camMat2, MatchEssentialMatrixRansac automatically determines the geometry of the stereo setup
and finds the correspondences between the characteristic points. The geometry of the stereo setup is represented
by the essential matrix EMatrix and all corresponding points have to fulfill the epipolar constraint.
The operator MatchEssentialMatrixRansac is designed to deal with a linear camera model. The internal
camera parameters are passed by the arguments camMat1 and camMat2, which are 3 × 3 upper triangular
matrices describing an affine transformation. The relation between a vector (X,Y,1), representing the direction from
the camera to the viewed 3D space point, and its (projective) 2D image coordinates (col,row,1) is:
( col )            ( X )                     ( f/sx    s    cx )
( row ) = CamMat · ( Y )    where   CamMat = (   0   f/sy   cy )
(  1  )            ( 1 )                     (   0     0     1 )
Note the column/row ordering in the point coordinates, which has to be compliant with the x/y notation of the
camera coordinate system. The focal length is denoted by f; sx and sy are scaling factors; s describes a skew factor;
and (cx, cy) indicates the principal point. Essentially, these are the elements known from the camera parameters as
used, for example, in CameraCalibration. Alternatively, the elements of the camera matrix can be described
in a different way, see e.g. StationaryCameraSelfCalibration. Multiplied by the inverse of the camera
matrices the direction vectors in 3D space are obtained from the (projective) image coordinates. For known camera
matrices the epipolar constraint is given by:
(X2, Y2, 1) · EMatrix · (X1, Y1, 1)^T = 0 .
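A candidate correspondence can be checked directly against this constraint (a sketch with hypothetical names; the inputs are the homogeneous direction vectors obtained after multiplying the image coordinates by the inverse camera matrices):

```python
# Residual of the epipolar constraint x2^T * E * x1; zero (up to noise)
# for a true correspondence.
def epipolar_residual(x1, x2, e_matrix):
    ex1 = [sum(e_matrix[i][k] * x1[k] for k in range(3)) for i in range(3)]
    return sum(x2[i] * ex1[i] for i in range(3))
```

For a pure forward translation, for example, points lying on the same radial line through the image center satisfy the constraint exactly.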
The matching process is based on characteristic points, which can be extracted with point operators like
PointsFoerstner or PointsHarris. The matching itself is carried out in two steps: first, gray value
correlations of mask windows around the input points in the first and the second image are determined and an ini-
tial matching between them is generated using the similarity of the windows in both images. Then, the RANSAC
algorithm is applied to find the essential matrix that maximizes the number of correspondences under the epipolar
constraint.
The size of the mask windows is maskSize × maskSize. Three metrics for the correlation can be selected.
If grayMatchMethod has the value ’ssd’, the sum of the squared gray value differences is used, ’sad’ means
the sum of absolute differences, and ’ncc’ is the normalized cross correlation. This metric is minimized (’ssd’,
’sad’) or maximized (’ncc’) over all possible point pairs. A matching found in this way is accepted only if the value
of the metric is below matchThreshold (’ssd’, ’sad’) or above it (’ncc’).
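The three metrics can be sketched on flattened mask windows (plain Python, not HALCON's implementation; the windows are assumed to be equal-sized gray-value sequences):

```python
# Sum of squared gray value differences: minimized for the best match.
def ssd(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Sum of absolute differences: minimized for the best match.
def sad(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

# Normalized cross correlation: maximized for the best match, range [-1, 1].
def ncc(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)
```

Unlike ’ssd’ and ’sad’, the normalized cross correlation is invariant to linear gray value changes between the two windows, at a somewhat higher computational cost.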
To increase the speed of the algorithm, the search area for the matchings can be limited. Only points within a
window of 2 · rowTolerance × 2 · colTolerance points are considered. The offset of the center of the
search window in the second image with respect to the position of the current point in the first image is given by
rowMove and colMove.
If the second camera is rotated around the optical axis with respect to the first camera the parameter rotation
may contain an estimate for the rotation angle or an angle interval in radians. A good guess will increase the quality
of the gray value matching. If the actual rotation differs too much from the specified estimate the matching will
typically fail. In this case, an angle interval should be specified, and rotation is a tuple with two elements. The
larger the given interval, the slower the operator, since the RANSAC algorithm is run over all angle increments
within the interval.
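The two accepted forms of rotation can be sketched as follows (the increment size is an assumption for illustration; HALCON's internal step is not documented here):

```python
import math

# Expand the rotation argument into trial angles: a single value is used
# as-is, a [lo, hi] interval is sampled in fixed increments (step size
# hypothetical).
def rotation_candidates(rotation, step=math.radians(5.0)):
    if len(rotation) == 1:
        return list(rotation)
    lo, hi = rotation
    n = int(round((hi - lo) / step))
    return [lo + i * step for i in range(n + 1)]
```

The number of trial angles grows linearly with the interval width, which matches the slowdown described above.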
After the initial matching is completed a randomized search algorithm (RANSAC) is used to determine the essen-
tial matrix EMatrix. It tries to find the essential matrix that is consistent with a maximum number of correspon-
dences. For a point to be accepted, the distance to its corresponding epipolar line must not exceed the threshold
distanceThreshold.
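The acceptance test can be sketched as a point-to-line distance in the image, where an epipolar line l = (a, b, c) is given in the form a·col + b·row + c = 0 (generic geometry, names hypothetical):

```python
# Distance of an image point to an epipolar line in implicit form.
def point_line_distance(col, row, line):
    a, b, c = line
    return abs(a * col + b * row + c) / (a * a + b * b) ** 0.5

# A correspondence is kept as a RANSAC inlier if the distance of the point
# to its epipolar line stays within the threshold.
def is_inlier(col, row, line, distance_threshold):
    return point_line_distance(col, row, line) <= distance_threshold
```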
The parameter estimationMethod decides whether the relative orientation between the cameras is of a special
type and which algorithm is to be applied for its computation. If estimationMethod is either ’normalized_dlt’
or ’gold_standard’ the relative orientation is arbitrary. Choosing ’trans_normalized_dlt’ or ’trans_gold_standard’
means that the relative motion between the cameras is a pure translation. The typical application for this special
motion case is the scenario of a single fixed camera looking onto a moving conveyor belt. In order to get a unique
solution in the correspondence problem the minimum required number of corresponding points is six in the general
case and three in the special, translational case.
The essential matrix is computed by a linear algorithm if ’normalized_dlt’ or ’trans_normalized_dlt’ is chosen.
With ’gold_standard’ or ’trans_gold_standard’ the algorithm gives a statistically optimal result, and returns the
covariance of the essential matrix covEMat as well. Here, ’normalized_dlt’ and ’gold_standard’ stand for the
direct-linear-transformation and the gold-standard algorithm, respectively. Note that, in general, the found
correspondences differ depending on the deployed estimation method.
The value error indicates the overall quality of the estimation procedure and is the mean Euclidean distance in
pixels between the points and their corresponding epipolar lines.
Point pairs consistent with the mentioned constraints are considered to be in correspondence. points1 contains
the indices of the matched input points from the first image and points2 contains the indices of the corresponding
points in the second image.
For the operator MatchEssentialMatrixRansac a special configuration of scene points and cameras exists:
if all 3D points lie in a single plane and additionally are all closer to one of the two cameras, then the solution for
the essential matrix is not unique but twofold. As a consequence both solutions are computed and returned by the
operator. This means that the output parameters EMatrix, covEMat and error are of double length and the
values of the second solution are simply concatenated behind the values of the first one.
The parameter randSeed can be used to control the randomized nature of the RANSAC algorithm, and hence
to obtain reproducible results. If randSeed is set to a positive number the operator yields the same result on
every call with the same parameters because the internally used random number generator is initialized with the
randSeed. If randSeed = 0 the random number generator is initialized with the current time. In this case the
results may not be reproducible.
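The reproducibility behavior can be sketched with a locally seeded random generator (illustrative only; HALCON's internal generator is not exposed):

```python
import random

# Draw the minimal point samples for a RANSAC loop. A positive seed makes
# the draws reproducible; seed 0 falls back to a time-based initialization.
def ransac_samples(indices, sample_size, iterations, rand_seed):
    rng = random.Random(rand_seed if rand_seed > 0 else None)
    return [rng.sample(indices, sample_size) for _ in range(iterations)]
```

Two runs with the same positive seed draw identical samples, so the estimated matrix and the reported correspondences are identical as well.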
Parameter
. image1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; HImage
Input image 1.
. image2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; HImage
Input image 2.
. rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; HTuple (double / int / long)
Row coordinates of characteristic points in image 1.
Restriction : (length(Rows1) ≥ 6) ∨ (length(Rows1) ≥ 3)
. cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; HTuple (double / int / long)
Column coordinates of characteristic points in image 1.
Restriction : length(Cols1) = length(Rows1)
. rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; HTuple (double / int / long)
Row coordinates of characteristic points in image 2.
Restriction : (length(Rows2) ≥ 6) ∨ (length(Rows2) ≥ 3)
. cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; HTuple (double / int / long)
Column coordinates of characteristic points in image 2.
Restriction : length(Cols2) = length(Rows2)
. camMat1 (input_control) . . . . . . . . . . . . . . . hom_mat2d-array ; HHomMat2D / HTuple (double / int / long)
Camera matrix of the 1st camera.
. camMat2 (input_control) . . . . . . . . . . . . . . . hom_mat2d-array ; HHomMat2D / HTuple (double / int / long)
Camera matrix of the 2nd camera.
. grayMatchMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; HTuple (string)
Gray value comparison metric.
Default Value : "ssd"
List of values : GrayMatchMethod ∈ {"ssd", "sad", "ncc"}
. maskSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Size of gray value masks.
Default Value : 10
Typical range of values : 3 ≤ MaskSize ≤ 15
Restriction : MaskSize ≥ 1
. rowMove (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Average row coordinate shift of corresponding points.
Default Value : 0
Typical range of values : 0 ≤ RowMove ≤ 200
. colMove (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Average column coordinate shift of corresponding points.
Default Value : 0
Typical range of values : 0 ≤ ColMove ≤ 200
. rowTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Half height of matching search window.
Default Value : 200
Typical range of values : 50 ≤ RowTolerance ≤ 200
Restriction : RowTolerance ≥ 1
. colTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; HTuple (int / long)
Half width of matching search win