Full-Stack AI Bootcamp: ML to LLMs
Overview:
GIK Institute is organizing the AI Spectrum Bootcamp 2025: From ML to LLMs & Beyond, a comprehensive four-week program designed to equip 50 STEM graduates with cutting-edge knowledge and hands-on expertise in artificial intelligence. Running from 28th July through 22nd August 2025, the Bootcamp offers 140 hours of immersive training covering the entire AI spectrum, from foundational machine learning and classical algorithms to advanced deep learning, computer vision, transformers, MLOps, diffusion models, and the latest generative AI technologies. Through intensive practical sessions and theory classes led by experts in the field, the program aims to help participants build internationally competitive AI skills. The Bootcamp's structure, curriculum, and schedule are outlined in the following sections.
Program Learning Outcomes
- Apply foundational machine learning techniques including linear regression, logistic regression, supervised and unsupervised learning, and classification tasks.
- Develop, train, and optimize neural networks and ensemble models (random forests, XGBoost) using TensorFlow, incorporating regularization and bias-variance analysis (a minimal training sketch follows this list).
- Design and implement advanced deep learning architectures, including fully connected, convolutional, and recurrent networks and vision transformers, applying best practices such as batch normalization, dropout, and hyperparameter tuning.
- Employ state-of-the-art computer vision methods including transfer learning, object detection (RCNNs, YOLO), and image segmentation (FCNs, U-Net, DeepLab).
- Implement and fine-tune natural language processing models, from Word2Vec embeddings to transformers (BERT, GPT), including LLM fine-tuning for tasks such as NER and question answering.
- Apply MLOps principles with tools like Git, MLflow, Docker, and CI/CD pipelines to enable reproducible and scalable ML workflows.
- Develop and deploy generative models including GANs, diffusion models, GPT, WaveGAN, Magenta, DALL-E, Whisper, and Gemini for multimodal generation tasks.
- Utilize vision-language models (CLIP, BLIP, Flamingo) for multimodal applications such as image captioning, image-text retrieval, and visual question answering (VQA).
- Critically evaluate ethical, fairness, transparency, and safety issues in AI, promoting responsible AI development.
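For a taste of the hands-on work these outcomes describe, the sketch below trains a small fully connected classifier in TensorFlow with L2 regularization, dropout, and early stopping. It is an illustrative example only, not official course material: the synthetic dataset, layer sizes, and hyperparameters are placeholder choices.

```python
# Illustrative sketch (not course material): a small fully connected
# classifier in TensorFlow/Keras with the regularization practices the
# outcomes mention (L2 weight penalty, dropout, early stopping).
import numpy as np
import tensorflow as tf

# Synthetic binary-classification data stands in for a real dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20)).astype("float32")
y = (X[:, :2].sum(axis=1) > 0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),
    tf.keras.layers.Dense(
        64, activation="relu",
        kernel_regularizer=tf.keras.regularizers.l2(1e-4)),  # L2 penalty
    tf.keras.layers.Dropout(0.3),  # dropout to curb overfitting
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Early stopping on validation loss is a simple bias-variance check:
# training halts once generalization stops improving.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True)

model.fit(X, y, validation_split=0.2, epochs=30,
          batch_size=32, callbacks=[early_stop], verbose=0)
```

Holding out a validation split and restoring the best weights, as above, is the kind of routine overfitting safeguard the regularization and bias-variance outcomes refer to.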
Registration
Confirm your registration by paying the fee at: http://giki.edu.pk/ai-payment