Frugal AI
Hands-on course on efficient and low-cost AI, covering computational optimization of matrix operations (Python, C, GPU) and neural network compression techniques. Part of the PSL Week programme.
Instructor: David Cornu (course assistant: Gregory Sainton)
Term: 2024-2025
Location: PSL Week
Time: Lectures and practical sessions
Course Overview
This course introduces frugal approaches to artificial intelligence, addressing the growing need for efficient and sustainable ML systems. Students explore the full stack of computational optimisation — from low-level matrix operations in C to GPU acceleration and neural network compression — through hands-on practical work.
Course materials are available at github.com/Deyht/frugal_ai.
Topics Covered
Part I — Matrix multiplication optimisation
- Baseline implementation in Python and C
- Vectorisation and memory access optimisation
- Parallelisation strategies
- GPU acceleration (Google Colab environment)
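As an illustration of the performance gap Part I measures, here is a minimal sketch (not the course's own starter code) contrasting a pure-Python triple-loop matrix multiply with NumPy's vectorised equivalent, which dispatches to an optimised BLAS routine:

```python
import numpy as np

def matmul_naive(A, B):
    """Baseline O(n^3) triple loop: every multiply-add pays
    Python interpreter overhead."""
    n, m, k = len(A), len(B), len(B[0])
    C = [[0.0] * k for _ in range(n)]
    for i in range(n):
        for j in range(k):
            s = 0.0
            for p in range(m):
                s += A[i][p] * B[p][j]
            C[i][j] = s
    return C

def matmul_numpy(A, B):
    """Vectorised multiply: the inner loops run in compiled
    BLAS code instead of the interpreter."""
    return np.asarray(A) @ np.asarray(B)
```

Even at modest sizes the vectorised version is typically orders of magnitude faster, which motivates the C, parallelisation, and GPU steps that follow.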
Part II — Neural network optimisation
- Reducing model size and computational cost
- Weight pruning and quantisation techniques
- Performance vs. efficiency trade-offs
- Practical work with corrected solutions provided
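To make the compression techniques concrete, a minimal NumPy sketch of unstructured magnitude pruning and symmetric int8 quantisation follows; the function names and the 50% sparsity default are illustrative assumptions, not taken from the course materials:

```python
import numpy as np

def prune_by_magnitude(w, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights
    (unstructured pruning)."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

def quantise_int8(w):
    """Symmetric linear quantisation: map floats to int8 with a
    single per-tensor scale."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantise(q, scale):
    """Recover approximate float weights; error is bounded by scale/2."""
    return q.astype(np.float32) * scale
```

Pruning trades accuracy for sparsity (and, with suitable kernels, fewer operations), while int8 quantisation cuts storage and memory bandwidth by 4x versus float32 — the performance-vs-efficiency trade-off studied in the practical work.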
Learning Outcomes
By the end of this course, students will be able to:
- Identify computational bottlenecks in ML pipelines
- Implement and benchmark matrix operations across Python, C and GPU
- Apply neural network compression techniques to reduce inference cost
- Reason about energy and resource efficiency in AI systems
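Benchmarking underpins the second outcome. One simple pattern — a generic sketch, not the course's benchmarking harness — is to take the best of several timed runs, which reduces noise from other processes:

```python
import timeit
import numpy as np

def benchmark(fn, *args, repeats=5):
    """Return the best wall-clock time (seconds) of `fn(*args)`
    over several runs; best-of-N filters out scheduling noise."""
    return min(timeit.repeat(lambda: fn(*args), repeat=repeats, number=1))

# Example: time a 100x100 matrix multiply.
A = np.ones((100, 100))
elapsed = benchmark(np.matmul, A, A)
```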
Prerequisites
- Programming experience in Python
- Basic knowledge of neural networks
- No prior GPU or C programming experience required
Resources
- Lecture slides: frugal_ai.pdf (available in the course repository)
- Practical work instructions: frugal_ai_practical_work_subject.pdf
- All starter and correction files: github.com/Deyht/frugal_ai