Co-occurrence matrices act as the input to many unsupervised learning algorithms, including those that learn word embeddings and modern spectral topic models. However, computing these inputs often takes longer than the inference itself. While much thought has been given to implementing fast learning algorithms, the co-occurrence matrix computation is itself well suited to GPU parallelization, yet GPUs and other specialized hardware have never been used to explicitly compute word-to-word co-occurrence matrices.
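Below is a minimal sketch (not the repository's actual implementation) of how a word-to-word co-occurrence count can be parallelized on a GPU. It assumes the corpus is already tokenized into integer word ids and the vocabulary is small enough for a dense count matrix; the kernel name, window size, and toy corpus are illustrative only.

```cuda
// Sketch: one thread per center token; each thread atomically increments the
// count for every neighbour inside a symmetric context window.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void cooccur_kernel(const int *tokens, int n_tokens,
                               int vocab_size, int window,
                               unsigned int *counts /* vocab_size x vocab_size */)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_tokens) return;

    int center = tokens[i];
    for (int d = -window; d <= window; ++d) {
        int j = i + d;
        if (d == 0 || j < 0 || j >= n_tokens) continue;
        int context = tokens[j];
        // Many threads may hit the same cell, so the update must be atomic.
        atomicAdd(&counts[center * vocab_size + context], 1u);
    }
}

int main()
{
    // Toy corpus: integer word ids standing in for a tokenized text.
    const int h_tokens[] = {0, 1, 2, 3, 0, 4};
    const int n_tokens = 6, vocab = 5, window = 2;

    int *d_tokens;
    unsigned int *d_counts;
    cudaMalloc((void **)&d_tokens, n_tokens * sizeof(int));
    cudaMalloc((void **)&d_counts, vocab * vocab * sizeof(unsigned int));
    cudaMemcpy(d_tokens, h_tokens, n_tokens * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemset(d_counts, 0, vocab * vocab * sizeof(unsigned int));

    int threads = 128;
    int blocks = (n_tokens + threads - 1) / threads;
    cooccur_kernel<<<blocks, threads>>>(d_tokens, n_tokens, vocab, window, d_counts);

    unsigned int h_counts[vocab * vocab];
    cudaMemcpy(h_counts, d_counts, sizeof(h_counts), cudaMemcpyDeviceToHost);

    // Print the dense co-occurrence matrix.
    for (int r = 0; r < vocab; ++r) {
        for (int c = 0; c < vocab; ++c)
            printf("%u ", h_counts[r * vocab + c]);
        printf("\n");
    }

    cudaFree(d_tokens);
    cudaFree(d_counts);
    return 0;
}
```

For realistic vocabularies a dense `vocab x vocab` array would not fit in device memory, so a practical version would likely shard the matrix or accumulate into a sparse structure; the dense atomic-add version above is only meant to show where the parallelism comes from.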