
Online Tutorials | With Over 77,000 Stars, the LLM Course Covers Practical Knowledge and Skills From Beginner to Advanced Levels


When "large models" became a topic of everyday conversation, even among the elderly and children, this wave of technology was no longer confined to research papers or investor talk. Still surging forward, it has driven countless tangible changes. As a result, the industrial and application ecosystem around LLMs has expanded rapidly, and more and more people are flocking to it for different reasons: some hope to keep up with the technological frontier, some are looking for new business opportunities, and some are simply drawn in by the frenzy.

But beyond the hype, a more practical problem is emerging: truly understanding and mastering large language models is not easy. From model principles and training methods to inference optimization and application development, the knowledge chain is long and the technology stack is complex. Fragmented information cannot support systematic understanding, and there is a clear gap between beginner and advanced levels.

It is against this backdrop that an open-source project called LLM Course has received widespread attention since its release, garnering 77,000 stars to date. It reorganizes knowledge scattered across papers, blogs, and coding practice into a learning system with a clear structure and a well-defined path.

Unlike scattered tutorials or isolated technical documents, the LLM Course tries to answer a more systematic question: to truly master large language models, what should you learn, in what order, and how do you turn that knowledge into a working application? From basic mathematics and neural networks to model training, alignment, and evaluation, and on to RAG, agents, and deployment, the project breaks the complex LLM technology stack into structured modules, forming a relatively clear learning path.

In short, whether you are a beginner or an experienced developer, you can find suitable learning resources in the LLM Course. To make hands-on practice easier, HyperAI has uploaded a selection of the Notebook demonstrations from the LLM Course to its "Tutorials" section. All runtime environments are fully configured and ready to use out of the box.

Run online:

https://go.hyper.ai/xpEHI

Tutorial details are as follows:

1. Fine-Tuning

Fine-tuning is a key technique for adapting pre-trained models to specific tasks. This module covers several mainstream fine-tuning methods:

* Fine-tuning Llama 3.1 8B using Unsloth

The Unsloth framework offers highly efficient supervised fine-tuning, cutting memory usage by over 70%.

* Fine-tuning LLM using Axolotl    

A one-stop fine-tuning framework that supports multiple models and training strategies.    

* Fine-tuning Llama 2 in Google Colab

Free cloud-based fine-tuning practice: QLoRA method explained in detail    

* Fine-tuning Mistral 7B using DPO    

Direct preference optimization improves model alignment quality.    

* Fine-tuning Mistral 7B using SFT    

Supervised fine-tuning end to end, from data to evaluation
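The adapter-based methods above (LoRA/QLoRA, as used in the Unsloth and Colab tutorials) share one core trick: freeze the pretrained weights and train a small low-rank update instead. As a rough, framework-free illustration with toy dimensions (not code from the course), the arithmetic looks like this:

```python
import numpy as np

# LoRA: instead of updating the full weight matrix W (d_out x d_in),
# train two small matrices B (d_out x r) and A (r x d_in) with r << min(d_out, d_in).
d_out, d_in, r, alpha = 64, 64, 4, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (zero-init)

# Effective weight at inference time: W + (alpha / r) * B @ A
W_eff = W + (alpha / r) * B @ A

# With B zero-initialized, the adapter starts as a no-op:
assert np.allclose(W_eff, W)

# Parameter savings: full fine-tune vs. LoRA
full_params = d_out * d_in
lora_params = r * (d_out + d_in)
print(f"trainable params: {lora_params} vs {full_params} "
      f"({100 * lora_params / full_params:.1f}%)")
```

With these toy shapes the adapter trains only 12.5% of the parameters; at real model scale the ratio is far smaller, which is where the memory savings come from.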

2. Quantization

Quantization is a key technique for lowering model deployment costs; it can reduce model size by more than 75%.

* 4-bit GPTQ quantization

Detailed Explanation of the GPTQ Algorithm: Running Large Models on Consumer Hardware

* Introduction to Weight Quantization

Quantization fundamentals: Comparison of FP32/FP16/INT8/INT4

* GGUF + llama.cpp Quantization

Preferred format for local deployment, optimized for CPU/GPU inference.    

* ExLlamaV2 Quantization

One of the fastest inference engines, detailed explanation of the EXL2 format.
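To build intuition for what these formats do under the hood, here is a minimal NumPy sketch of symmetric INT8 quantization. It is a simplification: real GPTQ/GGUF schemes add per-group scales and error correction, but the storage math is the same.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor INT8 quantization: w is approximated by scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# FP32 -> INT8 shrinks storage by 75% (4 bytes -> 1 byte per weight)
print("size reduction:", 1 - q.nbytes / w.nbytes)   # 0.75
print("max abs error:", np.abs(w - w_hat).max())
```

The maximum round-trip error is at most half a quantization step, which is why the "more than 75%" size reduction quoted above (INT4 saves even more) costs so little accuracy in practice.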

3. Advanced Applications

Explore cutting-edge technologies and advanced applications in the field of LLM.

* Decoding strategies for large language models  

A Complete Guide from Greedy Search to Nucleus Sampling    

* Knowledge Graph Augmentation 

RAG + knowledge graphs: reducing hallucinations and improving accuracy

* LazyMergekit

One-click model merging, allowing you to work with MoE even without a GPU.

* Mergekit Complete Guide

Model merging principles and practices, SLERP/TIES/DARE

* Use Abliteration to remove censorship

Model alignment removal technique to explore the boundaries of model behavior
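The decoding-strategies tutorial above contrasts greedy search with sampling methods. As a toy illustration over a hypothetical 5-token distribution (not code from the course), greedy decoding and nucleus (top-p) sampling can be sketched as:

```python
import numpy as np

def greedy(probs):
    """Greedy search: always pick the single most likely token."""
    return int(np.argmax(probs))

def nucleus_sample(probs, p=0.9, rng=None):
    """Top-p: sample from the smallest set of tokens whose cumulative prob >= p."""
    rng = rng or np.random.default_rng()
    order = np.argsort(probs)[::-1]          # tokens sorted by probability
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, p) + 1     # smallest prefix reaching the threshold
    keep = order[:cutoff]
    kept = probs[keep] / probs[keep].sum()   # renormalize over the nucleus
    return int(rng.choice(keep, p=kept))

# Toy next-token distribution over a 5-token vocabulary
probs = np.array([0.45, 0.25, 0.15, 0.10, 0.05])

print(greedy(probs))                         # always token 0
rng = np.random.default_rng(0)
samples = [nucleus_sample(probs, p=0.9, rng=rng) for _ in range(1000)]
print(sorted(set(samples)))                  # the low-probability tail (token 4) is cut off
```

Greedy search is deterministic and can loop; nucleus sampling keeps diversity while truncating the unreliable tail, which is the trade-off the tutorial walks through in detail.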

4. Toolset

Practical tools to improve development efficiency and make LLM development simpler.

* LLM AutoEval    

Automated model evaluation, one-click run with RunPod    

* LazyAxolotl

One-click cloud-based fine-tuning and startup, no complex configuration required.

* Model family tree

Visualize the relationships between models to understand the evolution of LLM.

* AutoQuant

One-click quantization, supports GGUF/GPTQ/EXL2/AWQ

* AutoAbliteration

Automated alignment removal with custom datasets

* ZeroChat

Chat interface on Hugging Face ZeroGPU (free GPU)

* AutoDedup

Automatic deduplication of datasets: MinHash + semantic deduplication
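The MinHash stage of deduplication can be illustrated with a small self-contained sketch (toy sentences and a plain-hashlib implementation, not the tool's actual code): the fraction of matching signature slots estimates the Jaccard similarity between shingle sets, so near-duplicates can be found without comparing full texts.

```python
import hashlib

def shingles(text, k=3):
    """Character k-grams of a whitespace-normalized, lowercased string."""
    t = " ".join(text.lower().split())
    return {t[i:i + k] for i in range(len(t) - k + 1)}

def minhash(s, num_hashes=64):
    """MinHash signature: for each seeded hash function, the minimum hash over the set."""
    return [min(int(hashlib.md5(f"{seed}:{x}".encode()).hexdigest(), 16) for x in s)
            for seed in range(num_hashes)]

def similarity(sig_a, sig_b):
    """Fraction of matching slots, an unbiased estimate of Jaccard similarity."""
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

a = minhash(shingles("The quick brown fox jumps over the lazy dog"))
b = minhash(shingles("The quick brown fox jumped over the lazy dog"))
c = minhash(shingles("Completely different sentence about quantization"))

print(similarity(a, b))   # high: near-duplicates
print(similarity(a, c))   # low: unrelated texts
```

In a real pipeline the signatures are bucketed with locality-sensitive hashing so only likely duplicates are ever compared, which is what makes dataset-scale deduplication tractable.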

5. Graph Neural Network Course

Graph neural networks are powerful tools for processing non-Euclidean data and are widely used in social networks, recommender systems, and other fields.

* Graph Convolutional Networks (GCNs)    

Essential introductory course to GNNs: Spectral graph theory and message passing    

* Graph Attention Network (GAT)

Application of attention mechanisms on graphs

* GraphSAGE

Large-scale graph sampling aggregation, inductive learning

* Graph Isomorphism Networks (GIN)

Maximum expressive power: the Weisfeiler-Lehman test
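For a taste of what the GCN course covers, here is one graph-convolution layer in NumPy over a toy 4-node graph. It is a sketch of the standard propagation rule H' = ReLU(D^-1/2 (A + I) D^-1/2 H W), not course code:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: normalized neighborhood averaging followed by a linear map and ReLU."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops so each node keeps its own features
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric degree normalization
    return np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Toy graph: 4 nodes in a path 0-1-2-3, mapping 2 input features to 3 hidden units
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 2))   # node feature matrix
W = rng.standard_normal((2, 3))   # learnable weight matrix

out = gcn_layer(A, H, W)
print(out.shape)                  # one row of hidden features per node
```

Stacking such layers lets information flow k hops across the graph in k layers, which is the message-passing view the course builds on.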

6. Other useful tutorials

It covers practical skills in multiple fields such as deep learning fundamentals, reinforcement learning, and data optimization.

* Minecraft Diamond Finder Bot

Reinforcement learning in practice: Q-learning in the MineRL environment

* Pandas row iteration optimization

Tips for improving data processing performance by 100x+

* Tensors in Deep Learning

PyTorch tensor basics, broadcasting, automatic differentiation

* Q Learning Tutorial

Introduction to Reinforcement Learning: Detailed Explanation of the Value Iteration Algorithm
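The Q-learning tutorial centers on the value-iteration update. On a toy 1-D corridor MDP (an illustrative example of ours, not the MineRL environment), repeatedly applying the Bellman optimality update recovers the optimal policy:

```python
import numpy as np

# Tiny MDP: states 0..4 in a corridor, goal at state 4 (terminal).
# Actions: 0 = left, 1 = right. Reward 1 on reaching the goal, else 0.
n_states, n_actions, gamma = 5, 2, 0.9

def step(s, a):
    """Deterministic transition: move left or right, clipped to the corridor."""
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    return s2, float(s2 == n_states - 1)

Q = np.zeros((n_states, n_actions))
for _ in range(100):                    # sweep Bellman optimality updates to convergence
    for s in range(n_states - 1):       # the terminal state keeps Q = 0
        for a in range(n_actions):
            s2, r = step(s, a)
            Q[s, a] = r + gamma * Q[s2].max()

policy = [int(np.argmax(Q[s])) for s in range(n_states - 1)]
print(policy)          # always move right toward the goal
print(Q[0, 1])         # gamma**3: discounted value of the 4-step path to the goal
```

Q-learning proper replaces this full sweep with sampled transitions and a learning rate, but the update target r + gamma * max_a' Q(s', a') is exactly the one shown here.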

7. Linear Programming

Fundamentals of Operations Research: Mathematical Modeling and Solving of Resource Optimization Problems.

* Introduction to Linear Programming

Simplex method, duality theory, sensitivity analysis    

* Integer Programming vs. Linear Programming

Branch and Bound, Cutting Plane Method  

* Constraint Programming

CSP, backtracking search, constraint propagation 

* Non-linear optimization of marketing budget    

Convex optimization, gradient descent, ROI maximization    
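These tutorials rest on the fact that a bounded linear program attains its optimum at a vertex of the feasible region. For a tiny two-variable problem (a made-up example, not taken from the course), that vertex can even be found by brute-force enumeration of constraint intersections:

```python
from itertools import combinations

# Maximize 3x + 2y subject to:
#   x + y <= 4,  x + 3y <= 6,  x >= 0,  y >= 0
# Each constraint is stored as (a, b, c) meaning a*x + b*y <= c.
constraints = [(1, 1, 4), (1, 3, 6), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Solve the 2x2 system where both constraints hold with equality."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None                      # parallel constraint boundaries
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(pt):
    return all(a * pt[0] + b * pt[1] <= c + 1e-9 for a, b, c in constraints)

# Enumerate pairwise intersections, keep the feasible vertices, take the best objective.
vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
print(best, 3 * best[0] + 2 * best[1])   # best vertex and its objective value
```

Enumeration grows combinatorially with problem size, which is precisely why the simplex method (walking vertex to vertex along improving edges) is the subject of the introductory tutorial.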

That concludes the tutorials recommended by HyperAI this time. Everyone is welcome to try them out!

Tutorial Link:

https://go.hyper.ai/xpEHI