Sandro Morero s188279 Speaker segmentation and clustering _______________________________________________________________________________________

COMPARISON BETWEEN "CROSS LIKELIHOOD RATIO BASED SPEAKER CLUSTERING USING EIGENVOICE MODELS" AND "PARTIALLY SUPERVISED SPEAKER CLUSTERING" FOR SPEAKER SEGMENTATION AND CLUSTERING

1. SPEAKER SEGMENTATION AND CLUSTERING

Clustering and segmentation are used principally by television and radio studios in order to archive their radio shows and the audio of their TV shows. Some studios have too many hours of recordings to process manually, and for this reason these algorithms are used. This document compares two papers on speaker segmentation and clustering: the first one is "Cross likelihood ratio based speaker clustering using eigenvoice models" and the other one is "Partially Supervised Speaker Clustering".

2. FIRST DOCUMENT

The first document, "Cross likelihood ratio based speaker clustering using eigenvoice models", describes the cross likelihood ratio (CLR) criterion used with eigenvoice modeling. In the baseline system, the audio is first passed through a speech activity detection stage, which separates the audio into speech and non-speech regions. Speaker segmentation is then performed to partition the speech regions into homogeneous speaker segments; this is followed by a Viterbi resegmentation stage, which aims to refine the segment boundary locations. The set of speaker segments is then passed to the speaker clustering stages of the system, which aim to merge the segments containing utterances produced by the same speaker. There are two clustering stages. The initial clustering stage merges only the closest speaker segments and is terminated early, resulting in a set of under-clustered nodes, which is passed into the second clustering stage, where further clustering is performed using more complex models. At the end of the initial clustering stage, the segment boundaries are refined once more via Viterbi resegmentation. The refined segments are then passed into the second clustering stage, which completes the clustering process using the CLR as the decision criterion. In this stage the initial clusters contain considerably more data than the individual speaker segments passed into the first clustering stage. The CLR between each pair of clusters is calculated and agglomerative clustering is performed, iteratively merging the closest pair of clusters until no more suitable merge candidates can be found. The second clustering stage produces the final diarization output, consisting of a relative, show-internal set of speaker labels and their corresponding start and end times. The document demonstrates that by incorporating eigenvoice modeling into the CLR framework it is possible to capitalize on the advantages of each technique, producing a robust speaker clustering system which outperforms traditional approaches based on Gaussian mixture model (GMM) modeling.
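The merging loop of the second clustering stage can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the score_clr argument stands in for the paper's CLR computation between two clusters' adapted models, and its exact formula, the eigenvoice training, and the choice of stopping threshold are not reproduced here.

```python
# Hypothetical sketch of the CLR-driven agglomerative merging loop.
# `score_clr` is assumed to return a similarity score for two clusters
# (higher = more likely the same speaker); it is not defined here.
from itertools import combinations

def cluster_by_clr(clusters, score_clr, threshold):
    """Iteratively merge the closest pair of clusters until no pair
    scores above the merge threshold."""
    clusters = list(clusters)
    while len(clusters) > 1:
        # Score every pair of clusters and keep the most similar one.
        (i, j), best = max(
            (((a, b), score_clr(clusters[a], clusters[b]))
             for a, b in combinations(range(len(clusters)), 2)),
            key=lambda item: item[1],
        )
        if best < threshold:                # no suitable merge candidates left
            break
        merged = clusters[i] + clusters[j]  # pool the segments of both clusters
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return clusters
```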
3. SECOND DOCUMENT

The second document, "Partially Supervised Speaker Clustering", describes another way to perform the clustering and segmentation of the voice, organized as a pipeline of four stages (a figure in the paper illustrates the algorithm). The first step is feature extraction: the process of identifying the most important cues in the measured data, while removing unimportant ones, for a specific task or purpose based on domain knowledge. In order to account for the temporal dynamics of the spectrum, the basic mel-frequency cepstral coefficient (MFCC) or perceptual linear predictive (PLP) features are usually augmented by their first-order derivatives (the delta coefficients) and second-order derivatives (the acceleration coefficients). Higher-order derivatives may be used too, although that is rarely seen. Using an independent training data set, it is possible to simultaneously characterize the temporal dynamics of the spectrum and maximize the discriminative power of the augmented acoustic features within a discriminative learning framework.
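As an illustration of this step, the sketch below computes MFCCs and their delta and acceleration coefficients with librosa; the library choice, file name, and parameter values are assumptions, since the paper does not prescribe them.

```python
# Sketch of the feature extraction step: MFCCs augmented with first- and
# second-order derivatives, giving the usual 39-dimensional frame features.
import numpy as np
import librosa

signal, sr = librosa.load("utterance.wav", sr=16000)   # hypothetical file
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
delta = librosa.feature.delta(mfcc, order=1)           # delta coefficients
accel = librosa.feature.delta(mfcc, order=2)           # acceleration coefficients
features = np.vstack([mfcc, delta, accel])             # (39, n_frames)
```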
The second step is utterance representation: as the name suggests, the task of compactly encoding the acoustic features of an utterance. The rationale behind this representation is that in speaker clustering the linguistic content of the speech signal is considered irrelevant and is normally disregarded. The acoustic features of an utterance are modeled by an m-component GMM, defined as a weighted sum of m component Gaussian densities; the number of free parameters of a GMM depends on the feature dimension and on the number of Gaussian components. A relatively new utterance representation that has emerged in the speaker recognition area is the GMM mean supervector, obtained by concatenating the mean vectors of the Gaussian components of a GMM trained on the acoustic features of a particular utterance.
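A minimal sketch of building such a supervector with scikit-learn is shown below. In practice the GMM is often MAP-adapted from a universal background model rather than trained from scratch, a detail omitted here; the function name and component count are illustrative.

```python
# Sketch of the GMM mean supervector: fit an m-component GMM on one
# utterance's frame features and concatenate the component means.
import numpy as np
from sklearn.mixture import GaussianMixture

def mean_supervector(frames, m=32, seed=0):
    """frames: (n_frames, feature_dim) array for one utterance."""
    gmm = GaussianMixture(n_components=m, covariance_type="diag",
                          random_state=seed).fit(frames)
    return gmm.means_.ravel()   # shape: (m * feature_dim,)
```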

The GMM mean supervector representation allows us to represent an utterance as a single data point in a high-dimensional space, where conventional clustering techniques such as k-means and hierarchical clustering can be naturally applied. A figure in the paper visualizes the GMM mean supervectors of many utterances from five different speakers using 2D scatter plots of their two principal components. In each plot the different speakers are shown in different colors; for each speaker there are about 150 utterances, denoted by small dots. As one can see, the data points belonging to the same speaker tend to cluster together, and the utterances from the same speaker tend to yield a cluster of GMM mean supervectors that scatter in a particular direction of the GMM mean supervector space.

The third stage of the speaker clustering pipeline is the distance metric. The distance metrics that can be used for speaker clustering are closely related to the particular choice of utterance representation: two popular categories, likelihood-based and vector-based distance metrics, are widely used for the two corresponding utterance representations. The distance metric should represent some measure of the distance between two statistical models. A well-known likelihood-based distance metric extensively used for speaker clustering is the Bayesian information criterion (BIC), which indicates how well a model fits the utterance. For the GMM mean supervector representation, since an utterance can be represented as a single data point in a high-dimensional vector space, the most often used distance metric is the Euclidean distance. Because data points belonging to the same speaker tend to cluster together in the GMM mean supervector space, the Euclidean distance is a reasonable choice. However, the data points show very strong directional scattering patterns, and the directions of the data points seem to be more informative and indicative than their magnitudes. This observation motivated the authors to advocate the cosine distance metric instead of the Euclidean distance metric for speaker clustering in the GMM mean supervector space: the cosine distance measures the angle between two vectors and is insensitive to their norms.
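The two metrics can be contrasted in a few lines; the toy vectors below are hypothetical, chosen only to show that the cosine distance ignores magnitude while the Euclidean distance does not.

```python
# Cosine versus Euclidean distance between two vectors pointing in the
# same direction: the cosine distance is zero regardless of magnitude.
import numpy as np

def euclidean(u, v):
    return np.linalg.norm(u - v)

def cosine_distance(u, v):
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

u = np.array([1.0, 2.0, 3.0])
v = 10 * u                       # same direction, ten times the magnitude
print(euclidean(u, v))           # large (about 33.67)
print(cosine_distance(u, v))     # 0.0: identical direction
```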

The final stage is the clustering itself. The document focuses on two types of algorithms: flat clustering, which partitions the data space, and hierarchical clustering, which constructs not a single partition but a nested hierarchy of partitions. The representative flat clustering algorithm is k-means, whose objective is to partition the data space in such a way that the total intra-cluster variance is minimized; it iterates between a cluster-assignment step and a mean-update step until convergence. The representative hierarchical clustering algorithm is agglomerative clustering, whose objective is to obtain a complete hierarchy of clusters in the form of a dendrogram. The algorithm adopts a bottom-up strategy: it starts with each data point as its own cluster, then repeatedly finds the two closest clusters and merges them into a new one. In summary, the document reports the discovery of the directional scattering property of the GMM mean supervector representation of utterances and advocates the use of the cosine distance metric instead of the Euclidean distance metric.
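A short sketch of both families on a matrix of supervectors, assuming scikit-learn: the data here is a random placeholder, and the cluster count would in reality have to be chosen or estimated.

```python
# Flat (k-means) and hierarchical (agglomerative) clustering of
# supervectors, one row per utterance.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

X = np.random.rand(100, 64)     # placeholder for real supervectors

# Flat clustering: k-means minimizes total intra-cluster variance.
flat_labels = KMeans(n_clusters=5, n_init=10).fit_predict(X)

# Hierarchical clustering: bottom-up merging with the cosine distance,
# following the paper's recommendation. (The `metric` keyword was named
# `affinity` before scikit-learn 1.2.)
hier_labels = AgglomerativeClustering(
    n_clusters=5, metric="cosine", linkage="average"
).fit_predict(X)
```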

4. COMPARISON BETWEEN THE TWO DOCUMENTS


These two documents describe different ways of performing speaker segmentation and clustering. They are similar in that both lay out all the steps to follow in order to perform speaker segmentation and clustering, and both model a speaker as a Gaussian mixture model, although they use different ways of estimating the parameters of the Gaussian models. They differ in that they use different algorithms to solve the same problem. The first one uses the cross likelihood ratio criterion and compares this method using the complete formula against the same method with part of it neglected. The second document describes one algorithm for speaker segmentation and clustering, explaining the main differences between flat clustering and hierarchical clustering. The two documents are also similar in that both derive eigen-based quantities from the voice and use them in their equations, which makes it easier to manage different voices from different speakers.

5. MY IDEAS FOR A FUTURE RESEARCH


In my opinion, the difficulty of this task is discriminating between different speakers. This could be done by recording the timbre of the voice: each person has a different timbre, and this could be a good starting point for future research. Every type of timbre could be associated with a number or a binary string and inserted into a database; the difficulty is then to find a single timbre in the database.

This difficulty could be resolved using different types of search structures, such as hash tables or similar.
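As a toy illustration of this idea (and only that: deriving a stable binary code from a voice is precisely the open problem), a hypothetical timbre_signature function could quantize a timbre embedding into a binary string used as a hash-table key:

```python
# Toy sketch: map a (hypothetical) timbre embedding to a binary string
# and store it in a hash table for constant-time lookup.
def timbre_signature(embedding, bits=16):
    """Quantize an embedding into a binary string by thresholding each
    component against the mean. Purely illustrative, not a robust code."""
    mean = sum(embedding) / len(embedding)
    return "".join("1" if x > mean else "0" for x in embedding[:bits])

database = {}                                   # signature -> speaker name
database[timbre_signature([0.1, 0.9, 0.4, 0.7])] = "speaker_A"

query = timbre_signature([0.1, 0.9, 0.4, 0.7])  # same voice, same code
print(database.get(query, "unknown"))           # -> "speaker_A"
```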
