This repository is a comprehensive course on deep learning, covering essential topics and their implementations. The course is designed to provide both theoretical understanding and practical skills in various deep learning algorithms and techniques. The content follows state-of-the-art methods and includes hands-on examples in Python.
- CNN Basics: Introduction to convolutional layers, pooling layers, and their role in feature extraction.
- Classification: Using CNNs for image classification tasks, including data preprocessing, model building, and evaluation.
- Object Detection: Techniques such as YOLO, SSD, and Faster R-CNN for detecting objects within images.
- Object Segmentation: Methods such as U-Net and Mask R-CNN for pixel-level classification and segmentation of images.
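The two core CNN operations above can be sketched in plain NumPy (illustrative only; the course's actual implementations use TensorFlow/Keras and PyTorch): a valid 2D convolution for feature extraction, followed by 2x2 max pooling for spatial downsampling.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation (what DL frameworks call 'convolution')."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2d(x, size=2):
    """Non-overlapping max pooling with stride == size."""
    h, w = x.shape
    h, w = h - h % size, w - w % size            # drop ragged edges
    x = x[:h, :w].reshape(h // size, size, w // size, size)
    return x.max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)
edge_kernel = np.array([[1.0, -1.0]])            # horizontal edge detector
features = conv2d(image, edge_kernel)            # shape (6, 5)
pooled = max_pool2d(features)                    # shape (3, 2)
```

Framework layers such as `keras.layers.Conv2D` or `torch.nn.Conv2d` do the same computation with learned kernels, many channels, and vectorized code.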
- Transfer Learning Basics: Understanding the concept of transfer learning and its advantages in deep learning.
- Pretrained Models: Utilizing pretrained models such as VGG, ResNet, Inception, EfficientNet, ConvNeXt, and other state-of-the-art models for various tasks.
- Fine-Tuning: Techniques for fine-tuning pretrained models on specific datasets to improve performance.
- Autoencoder Basics: Introduction to the architecture and purpose of autoencoders.
- Variational Autoencoders (VAE): Introducing a probabilistic approach to learning latent representations.
- Denoising Autoencoders: Autoencoders designed to remove noise from data, improving the quality of the reconstructed output.
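The encoder-bottleneck-decoder structure above can be illustrated with a minimal linear autoencoder in NumPy (a deliberately simplified sketch; real autoencoders use nonlinear layers and a framework optimizer). Data lying near a 2-D subspace of an 8-D space is compressed to 2 latent dimensions and reconstructed:

```python
import numpy as np

rng = np.random.default_rng(1)

# Data living near a 2-D subspace of an 8-D space.
basis = rng.normal(size=(2, 8))
X = rng.normal(size=(500, 2)) @ basis

# Linear autoencoder: 8 -> 2 (encoder bottleneck) -> 8 (decoder).
W_enc = rng.normal(scale=0.1, size=(8, 2))
W_dec = rng.normal(scale=0.1, size=(2, 8))
lr = 0.01
for _ in range(500):
    Z = X @ W_enc                    # latent codes
    X_hat = Z @ W_dec                # reconstruction
    err = X_hat - X                  # d(MSE)/d(X_hat), up to a constant
    W_dec -= lr * Z.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

mse = np.mean((X @ W_enc @ W_dec - X) ** 2)      # reconstruction error
```

A denoising autoencoder uses the same structure but feeds corrupted inputs (e.g. `X + noise`) to the encoder while computing the loss against the clean `X`; a VAE additionally makes the latent code a distribution rather than a point.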
- GAN Basics: Introduction to the architecture of GANs, including the generator and discriminator networks.
- Training GANs: Techniques for training GANs, handling mode collapse, and improving stability.
- Applications of GANs: Image generation, style transfer, and other creative applications.
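The adversarial training loop above (alternating discriminator and generator updates) can be sketched on a 1-D toy problem in NumPy. This is an illustrative sketch, not a practical GAN: the generator is a linear map `G(z) = a*z + b` trying to match samples from N(4, 1), and the discriminator is a logistic classifier, trained with the common non-saturating generator loss:

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0          # generator G(z) = a*z + b, starts far from the data
w, c = 0.0, 0.0          # discriminator D(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

for _ in range(2000):
    # --- discriminator step: tell real samples from fakes ---
    xr = rng.normal(4.0, 1.0, batch)             # real data ~ N(4, 1)
    z = rng.normal(size=batch)
    xf = a * z + b                               # fake samples
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w -= lr * (np.mean((dr - 1) * xr) + np.mean(df * xf))
    c -= lr * (np.mean(dr - 1) + np.mean(df))

    # --- generator step: fool the discriminator (non-saturating loss) ---
    z = rng.normal(size=batch)
    xf = a * z + b
    df = sigmoid(w * xf + c)
    a -= lr * np.mean((df - 1) * w * z)
    b -= lr * np.mean((df - 1) * w)

fake_mean = b            # E[G(z)] = b, since E[z] = 0; should drift toward 4
```

Real GANs replace both linear maps with deep networks, and the stability tricks mentioned above (e.g. for mode collapse) modify exactly this loop.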
- 3D Convolutional Neural Networks (3D-CNN): Extending CNNs to 3D data for tasks such as video analysis.
- Recurrent Neural Networks (RNN): Understanding the architecture and applications of RNNs in sequence data.
- Long Short-Term Memory (LSTM): Advanced RNNs that can capture long-term dependencies in sequence data.
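The recurrence that defines an RNN, h_t = tanh(x_t W_xh + h_{t-1} W_hh + b), can be written directly as a NumPy forward pass (weights here are random placeholders, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

def rnn_forward(xs, W_xh, W_hh, b_h):
    """Run a vanilla RNN over a sequence; return all hidden states."""
    h = np.zeros(W_hh.shape[0])
    hs = []
    for x in xs:                                  # one step per element
        h = np.tanh(x @ W_xh + h @ W_hh + b_h)    # state carries history
        hs.append(h)
    return np.stack(hs)

seq_len, in_dim, hid_dim = 5, 3, 4
xs = rng.normal(size=(seq_len, in_dim))
W_xh = rng.normal(scale=0.5, size=(in_dim, hid_dim))
W_hh = rng.normal(scale=0.5, size=(hid_dim, hid_dim))
b_h = np.zeros(hid_dim)

hs = rnn_forward(xs, W_xh, W_hh, b_h)             # shape (5, 4)
```

An LSTM keeps the same outer loop but replaces the single tanh update with gated input, forget, and output paths, which is what lets it preserve information over long sequences.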
- Transformers: Introduction to transformer architecture, attention mechanisms, and their applications in NLP and beyond.
- Large Language Models: Overview of LLMs such as GPT, BERT, and their capabilities in understanding and generating human language.
- Generative AI: Techniques for creating text, images, and other media using generative models.
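The attention mechanism at the heart of transformers and LLMs is compact enough to sketch in NumPy: scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V, with random matrices standing in for learned projections:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(4)
Q = rng.normal(size=(3, 8))   # 3 query positions, d_k = 8
K = rng.normal(size=(5, 8))   # 5 key/value positions
V = rng.normal(size=(5, 8))
out, weights = attention(Q, K, V)   # out: (3, 8), weights: (3, 5)
```

Multi-head attention runs several such maps in parallel on learned projections of Q, K, and V; stacking these blocks with feed-forward layers yields the transformer architectures behind GPT and BERT.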
Each lecture includes the following implementations:
- TensorFlow/Keras Implementation: Utilizes TensorFlow and Keras for user-friendly implementations of deep learning models with both sequential and functional approaches.
- PyTorch Implementation: Employs the PyTorch framework, known for its flexibility and research-oriented design, to implement neural networks with full control over their architecture and training process.
- Clone the Repository:
git clone https://github.com/qazimsajjad/Deep-Learning-Course.git
- Navigate to the Directory:
cd Deep-Learning-Course
- Install Dependencies (Python 3.9 or later is required):
pip install numpy matplotlib pillow opencv-python scikit-learn torch keras tensorflow
By following this course, you will gain a solid foundation in deep learning and be equipped with the skills to tackle a variety of complex tasks in computer vision, natural language processing, and generative models.
Open the Jupyter Notebooks provided in the repository to explore the deep learning techniques listed above. Each notebook contains detailed explanations, code implementations, and example images to help you understand the concepts.
Kaleem Ullah
Research Assistant, Digital Image Processing (DIP) Lab
Department of Computer Science
Islamia College University, Peshawar, Pakistan
Remote Research Assistant, Visual Analytics Lab (VIS2KNOW)
Department of Applied AI
Sungkyunkwan University, Seoul, South Korea
Imran Nawar
Research Assistant, Digital Image Processing (DIP) Lab
Department of Computer Science
Islamia College Peshawar (Public Sector University), Pakistan