TY - CHAP
T1 - Dynamic Graph Learning for Feature Projection
AU - Zhu, Lei
AU - Li, Jingjing
AU - Zhang, Zheng
N1 - Publisher Copyright:
© 2024, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2024
Y1 - 2024
N2 - High-dimensional features have gained widespread usage in various research fields such as multimedia computing, data mining, pattern recognition, and machine learning. However, high-dimensional features often give rise to the “curse of dimensionality” problem and place significant computational burdens on machine learning models. To alleviate these issues, dimensionality reduction techniques are employed to identify low-dimensional latent subspaces that retain the data similarities observed in the original high-dimensional space. Two common paradigms for dimensionality reduction are feature selection and feature projection. Feature selection identifies a subset of the original features as the low-dimensional representation by discarding irrelevant and noisy features. Feature projection, on the other hand, applies a transformation matrix to generate projected dimensions that preserve the intrinsic data characteristics. Depending on whether semantic labels are used, feature projection methods can be further categorized into two families: unsupervised and supervised feature projection.
AB - High-dimensional features have gained widespread usage in various research fields such as multimedia computing, data mining, pattern recognition, and machine learning. However, high-dimensional features often give rise to the “curse of dimensionality” problem and place significant computational burdens on machine learning models. To alleviate these issues, dimensionality reduction techniques are employed to identify low-dimensional latent subspaces that retain the data similarities observed in the original high-dimensional space. Two common paradigms for dimensionality reduction are feature selection and feature projection. Feature selection identifies a subset of the original features as the low-dimensional representation by discarding irrelevant and noisy features. Feature projection, on the other hand, applies a transformation matrix to generate projected dimensions that preserve the intrinsic data characteristics. Depending on whether semantic labels are used, feature projection methods can be further categorized into two families: unsupervised and supervised feature projection.
UR - https://www.scopus.com/pages/publications/85172416961
U2 - 10.1007/978-3-031-42313-0_2
DO - 10.1007/978-3-031-42313-0_2
M3 - Chapter
AN - SCOPUS:85172416961
T3 - Synthesis Lectures on Computer Science
SP - 15
EP - 32
BT - Synthesis Lectures on Computer Science
PB - Springer Nature
ER -