Bishnu Khadka
Trying to live life.
- Kathmandu, Nepal
- AI Research Center, ACEM
- Google Scholar
- ORCID
- ResearchGate
- GitHub
Why is Triton gaining popularity?
1 minute read
In a typical machine learning (ML) workflow, we program the feature production, training, and inference. We mostly do that using frameworks, writing high-level programs without having to manage the low-level details required for ML or deep learning (DL). Frameworks such as PyTorch or TensorFlow call CUDA if a GPU is available, and the operations are then performed on the GPU. DL models have achieved state-of-the-art (SOTA) performance in multiple domains thanks to their hierarchical structure of parametric as well as non-parametric layers. CUDA then has to decide how to perform those operations. Libraries like cuBLAS, cuDNN, or PyTorch's built-in kernels are highly optimized for common operations (matrix multiply, convolutions, etc.). But if our application involves specialized algorithms, unique data layouts, or non-standard precision or formats, these stock kernels might not perform well. In that case, you write your own CUDA program for faster execution.
However, CUDA programming is very manual and tedious. It works on the principle of Scalar Program, Blocked Threads: we have to define what each individual thread does and manage the threads ourselves. It is a low-level programming method. Triton was developed to make such specialized algorithms fast while making GPU programming a little less tedious and manual than CUDA. Triton is a higher-level programming method that works on the opposite principle of Blocked Program, Scalar Threads: instead of managing each thread, we write the program over blocks of data, and Triton handles the actual thread-level work based on our memory and data flow, choosing an efficient way to perform the given task/operation. This makes it faster for specialized use cases.
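To make this block-level model concrete, here is a minimal sketch of a vector-addition kernel in Triton, assuming Triton is installed and a CUDA-capable GPU is available; the kernel and helper names are illustrative, not from any particular tutorial.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # One program instance handles one whole block of elements, not one scalar.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                      # guard out-of-bounds lanes
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

# Usage (requires a GPU):
# add(torch.rand(4096, device="cuda"), torch.rand(4096, device="cuda"))
```

Each program instance works on a block of 1,024 elements; Triton decides how that block maps onto hardware threads, which is exactly the Blocked Program, Scalar Threads idea.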
Because of this, Triton has gained popularity and is helping researchers and developers with CUDA-level GPU programming.
Book Notes: LLM Engineer’s Handbook
8 minute read
Notes
- An LLM engineer should have knowledge of the following:
- Data preparation
- Fine-tune LLM
- Inference Optimization
- Product Deployment (MLOps)
- What the book will teach:
- Data Engineering
- Supervised Fine-tuning
- Model Evaluation
- Inference Optimization
- RAG
- For every project there must be planning. The three planning steps the book talks about are as follows:
- Understand the problem
- What do we want to build?
- Why are we building it?
- Minimum Viable Product (MVP) reflecting a real-world scenario.
- Bridge the gap between the idealistic and the reality of what can be built.
- What are the steps required to build it?
- not clear on this part
- System Design step
- Core architecture and design choices
- How are we going to build it?
- What the book covers:
Chapter 1: Understanding the LLM Twin Concept and Architecture
- The chapter covers the following topics:
- Understanding the LLM Twin concept
- Planning the MVP of the LLM Twin product.
- Building ML systems with feature/training/inference pipelines
- Designing the system architecture of the LLM Twin
- The key to the LLM Twin lies in the following:
- What data we collect
- How we preprocess the data
- How we feed the data into the LLM
- How we chain multiple prompts for the desired results
- How we evaluate the generated content
- We have to consider how to do the following (MLOps):
- Ingest, clean, and validate fresh data
- Training versus inference setups
- Compute and serve features in the right environment
- Serve the model in a cost-effective way
- Version, track, and share the datasets and models
- Monitor your infrastructure and models
- Deploy the model on a scalable infrastructure
- Automate the deployments and training
- In every software architecture the layers are Database -> Business Logic -> UI, and any layer can be as complex as required. But what do we require for ML? That is the FTI architecture: Feature -> Training -> Inference.
To conclude, the most important thing you must remember about the FTI pipelines is their interface:
- The feature pipeline takes in data and outputs the features and labels saved to the feature store.
- The training pipeline queries the features store for features and labels and outputs a model to the model registry.
- The inference pipeline uses the features from the feature store and the model from the model registry to make predictions.
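As an illustration of these interfaces, here is a minimal Python sketch, assuming simple in-memory stand-ins for the feature store and model registry; all names are hypothetical and not from the book.

```python
# Minimal sketch of the FTI interfaces; feature_store and model_registry are
# illustrative in-memory dicts standing in for real infrastructure.
feature_store: dict = {}
model_registry: dict = {}

def feature_pipeline(raw_data: list[dict]) -> None:
    # Takes in data and outputs features and labels saved to the feature store.
    feature_store["features"] = [row["text"] for row in raw_data]
    feature_store["labels"] = [row["label"] for row in raw_data]

def training_pipeline() -> None:
    # Queries the feature store and outputs a model to the model registry.
    features, labels = feature_store["features"], feature_store["labels"]
    model_registry["model"] = {"n_examples": len(features)}  # stand-in for a trained model

def inference_pipeline(new_data: str) -> str:
    # Uses features from the feature store and the model from the model registry.
    model = model_registry["model"]
    return f"prediction for {new_data!r} from a model trained on {model['n_examples']} examples"

feature_pipeline([{"text": "hello", "label": 1}])
training_pipeline()
print(inference_pipeline("new sample"))
```

The point of the sketch is only the interface: each pipeline communicates with the others exclusively through the feature store and the model registry.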
Requirements of the ML system from a purely technical perspective:
- Data
- collect
- standardize
- clean the raw data
- create an instruction dataset for fine-tuning an LLM
- chunk and embed the cleaned data. Store the vectorized data into a vector DB for RAG.
- Training
- Fine-tune LLMs of various sizes
- Fine-tune on instruction datasets of multiple sizes.
- Switch between LLM types
- Track and compare experiments.
- Test potential production LLM candidates before deploying them.
- Automatically start the training when new instruction datasets are available.
- Inference
- A REST API interface for clients to interact with the LLM
- Access to the vector DB in real time for RAG.
- Inference with LLMs of various sizes
- Autoscaling based on user requests
- Automatically deploy the LLMs that pass the evaluation step
- LLMOps
- Instruction dataset versioning, lineage, and reusability
- Model versioning, lineage, and reusability
- Experiment tracking
- Continuous training, continuous integration, and continuous delivery (CT/CI/CD)
- Prompt and system monitoring
Chapter 2: Tooling and Installation
- The chapter covers:
- Python ecosystem and project installation
- MLOps and LLMOps tooling
- Databases for storing unstructured and vector data
- Preparing for AWS
- Any Python project needs three fundamental tools: the Python interpreter, dependency management, and a task execution tool.
- Poetry is one of the most popular dependency and virtual environment managers within the Python ecosystem.
- An orchestrator is a system that automates, schedules, and coordinates all your ML pipelines. It ensures that each pipeline—such as data ingestion, preprocessing, model training, and deployment—executes in the correct order and handles dependencies efficiently.
- ZenML is one such orchestrator.
- It orchestrates via pipelines and steps, which are just Python functions: steps are called inside pipeline functions, so the code should be written in a modular way (a minimal sketch appears at the end of this chapter's notes).
- ZenML transforms any step output into an artifact: any file produced during the ML lifecycle.
- Experiment Tracker:
- Training ML models is an entirely iterative and experimental process. Therefore, an experiment tracker is required.
- CometML is one that helps us in this aspect.
- Prompt monitoring
- You cannot use standard logging tools, as prompts are complex and unstructured chains.
- Optik is a simple-to-use prompt monitoring tool compared to the alternatives.
- MongoDB, a NoSQL database.
- Qdrant, a vector database.
- For our MVP, AWS is the perfect option, as it provides robust features for everything we need, such as S3 (object storage), ECR (container registry), and SageMaker (compute for training and inference).
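As referenced above, here is a minimal sketch of ZenML steps and a pipeline, assuming a recent ZenML version where step and pipeline are importable from the top-level package; the step names and data are illustrative, not from the book.

```python
from zenml import pipeline, step

@step
def load_documents() -> list[str]:
    # Stand-in for pulling raw documents from the data warehouse.
    return ["raw document 1", "raw document 2"]

@step
def clean_documents(docs: list[str]) -> list[str]:
    # Simple cleaning; ZenML stores each step's return value as an artifact.
    return [d.strip().lower() for d in docs]

@pipeline
def feature_pipeline():
    docs = load_documents()
    clean_documents(docs)

if __name__ == "__main__":
    feature_pipeline()  # ZenML runs the steps in order and tracks their artifacts
```

Because steps are plain Python functions, they can be tested on their own and reused across pipelines, which is what the "modular code" remark above refers to.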
Chapter 3: Data Engineering
- In this chapter, we will study the following topics:
- Designing the LLM Twin’s data collection pipeline
- Implementing the LLM Twin’s data collection pipeline
- Gathering raw data into the data warehouse
- Collect and curate the dataset
- From raw data, Extract -> Transform -> Load (ETL) into MongoDB:
  - crawling
  - standardizing the data
  - loading into the data warehouse
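As a rough sketch of the load step, here is how standardized documents could be written to MongoDB with pymongo; the connection string, database, and collection names are my own assumptions, not the book's actual code.

```python
from pymongo import MongoClient

# Hypothetical connection details; the book's actual setup may differ.
client = MongoClient("mongodb://localhost:27017")
collection = client["llm_twin"]["raw_documents"]

def load_documents(documents: list[dict]) -> None:
    # Each document is assumed to already be crawled and standardized,
    # e.g. {"author": ..., "platform": ..., "content": ...}.
    if documents:
        collection.insert_many(documents)

load_documents([{"author": "jane", "platform": "blog", "content": "standardized text"}])
```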
Chapter 4: RAG Feature Pipeline
- Retrieval-augmented generation (RAG)
- The chapter teaches you what RAG is and how to implement it.
- The main sections of this chapter are:
- Understanding RAG
- An overview of advanced RAG
- Exploring the LLM Twin’s RAG feature pipeline architecture
- Implementing the LLM Twin’s RAG feature pipeline
Chapter 5: Supervised Fine-Tuning
- SFT refines the model's capabilities (here, the model is a pre-trained model that predicts the next token in a sequence) by learning to predict instruction-answer pairs.
- It adapts the general language understanding of pre-trained LLMs to a specific application, or in this case makes them more conversational.
- In this chapter, the authors cover the following topics:
- Creating a high-quality instruction dataset
- SFT techniques
- Implementing fine-tuning in practice
Chapter 6: Fine-Tuning with Preference Alignment
- SFT cannot capture a human's preference for how a conversation should flow, so we use preference alignment, specifically Direct Preference Optimization (DPO).
- The authors cover the following topics in this chapter:
- Understanding preference datasets
- How to create our own preference dataset
- Direct preference optimization (DPO)
- Implementing DPO in practice to align our model
Chapter 7: Evaluating LLMs
- There is no unified approach to measuring a model's performance, but there are patterns and recipes that we can adapt to specific use cases.
- The chapter covers:
- Model evaluation
- RAG evaluation
- Evaluating TwinLlama-3.1-8B
Chapter 8: Inference Optimization
- Some tasks, like document generation, take hours, while others, like code completion, take very little time; this is why inference optimization is quite important (a small measurement sketch follows this chapter's notes). The things that are optimized are the latency (the time to generate the first token), the throughput (the number of tokens generated per second), and the memory footprint of the LLM.
- The chapter covers:
- Model optimization strategies
- Model parallelism
- Model quantization
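As referenced above, here is a minimal sketch of how time-to-first-token latency and token throughput could be measured around any streaming generation function; generate_stream and fake_stream are hypothetical stand-ins, not APIs from the book.

```python
import time
from typing import Iterable

def measure_inference(generate_stream: Iterable[str]) -> tuple[float, float]:
    """Return (time to first token in seconds, tokens per second)."""
    start = time.perf_counter()
    first_token_time = None
    n_tokens = 0
    for _ in generate_stream:
        if first_token_time is None:
            first_token_time = time.perf_counter() - start  # latency
        n_tokens += 1
    total = time.perf_counter() - start
    throughput = n_tokens / total if total > 0 else 0.0
    return first_token_time or 0.0, throughput

# Toy stand-in for a streaming LLM: yields one token at a time.
def fake_stream():
    for tok in ["Hello", ",", " world", "!"]:
        time.sleep(0.01)
        yield tok

latency, tps = measure_inference(fake_stream())
print(f"time to first token: {latency:.3f}s, throughput: {tps:.1f} tokens/s")
```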
Chapter 9: RAG Inference Pipeline
- Where the magic happens for the RAG system.
- The chapter covers the following topics:
- Understanding the LLM Twin’s RAG inference pipeline
- Exploring the LLM Twin’s advanced RAG techniques
- Implementing the LLM Twin’s RAG inference pipeline
Chapter 10: Inference Pipeline Deployment
- The chapter covers:
- Criteria for choosing deployment types
- Understanding inference deployment types
- Monolithic versus microservices architecture in model serving
- Exploring the LLM Twin’s inference pipeline deployment strategy
- Deploying the LLM Twin service
- Autoscaling capabilities to handle spikes in usage
Chapter 11: MLOps and LLMOps
- This chapter covers:
- The path to LLMOps: Understanding its roots in DevOps and MLOps
- Deploying the LLM Twin’s pipelines to the cloud
- Adding LLMOps to the LLM Twin
Dynamic Arrays
2 minute read
Resources:
- Video: Dynamic Arrays
- Link: The Simple and Elegant Idea behind Efficient Dynamic Arrays
- Link: What if you had to invent a dynamic array?
Dynamic Array
- should be able to change the shape of the array dynamically.
- should be able to add/delete element fast
- should be able to insert/delete an element in the middle.
Since we need to make this as efficient as possible, let's think about what we would have done if we had to invent it ourselves.
- First, we take the functionality of it and try to simplify it as much as possible.
- Here, let’s take only the ‘adding dynamically’ part.
Adding Dynamically
- Say we have a fixed array of 4 elements. How can we make it possible to add a fifth element to it?
- We know that we need to declare the fixed amount of space required for our task beforehand in order to use memory (refer to how memory works).
Alternative #1: Make an array of 5 elements, then copy all the data into the new array.
- Using this, we can make a dynamic array. However, it is very expensive to do this for large amounts of data.
- For example, growing an array to 1M elements one at a time requires on the order of $N^2/2 \approx 500$ billion element copies.
- Assume we are continuously adding elements to the array. For the 5th element, we need about 5 operations (copy the 4 old elements and write the new one). For the 6th element, we first create a new array of size 6 and copy into it, so the running total is about 5+6. For the 7th element, it is 5+6+7, and so on. In big O notation this is $O(N^2)$.
Alternative #2: Make the new array the size of the fixed array plus 8 (say).
- This reduces the number of copying operations by a constant factor, but it is still of $O(N^2)$ complexity.
Alternative #3: Make the new array double the size of the old array.
- Here, the total number of copying operations needed to build an array of size N stays proportional to N, so appending is amortized $O(1)$ and the overall cost is $O(N)$.
- This is a very cool problem, so if you are math savvy, take out a piece of paper and do the math; it is quite fun to think about. Work out why this is $O(N)$ (a short worked version follows this list).
- This is how programming languages implement dynamic arrays.
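For reference, a short worked version of that math (a sketch, assuming we start with capacity 1 and double on every overflow): each time the array grows from capacity $c$ to $2c$, we copy $c$ elements, so the total number of copies made while reaching $N$ elements is

$$1 + 2 + 4 + \dots + N \le 2N,$$

a geometric series. The total work is therefore $O(N)$, and each individual append costs $O(1)$ amortized.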
For deletion, if the number of filled elements drops below half of the array's capacity, we can shrink the array by half (many implementations wait until it drops to a quarter, to avoid repeatedly growing and shrinking). This way, memory usage is also kept in check.
Insertions and deletions in the middle or at the front of the array are also supported, but they still require shifting the remaining elements, so each such operation costs $O(N)$.
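A minimal sketch of such a dynamic array in Python, built on top of a fixed-size list to mimic a raw memory block; the class and method names are my own, not from the linked resources.

```python
class DynamicArray:
    """Grow-by-doubling array that shrinks by halving on deletion, as described above."""

    def __init__(self):
        self._capacity = 1
        self._size = 0
        self._data = [None] * self._capacity  # stand-in for a fixed memory block

    def _resize(self, new_capacity: int) -> None:
        new_data = [None] * new_capacity
        for i in range(self._size):           # copy every existing element
            new_data[i] = self._data[i]
        self._data, self._capacity = new_data, new_capacity

    def append(self, value) -> None:
        if self._size == self._capacity:      # full: double the capacity
            self._resize(2 * self._capacity)
        self._data[self._size] = value
        self._size += 1

    def pop(self):
        if self._size == 0:
            raise IndexError("pop from empty array")
        self._size -= 1
        value = self._data[self._size]
        self._data[self._size] = None
        # Shrink when less than half full (real implementations often wait
        # until a quarter full to avoid thrashing between grow and shrink).
        if 0 < self._size < self._capacity // 2:
            self._resize(self._capacity // 2)
        return value

arr = DynamicArray()
for i in range(10):
    arr.append(i)
print([arr.pop() for _ in range(10)])  # [9, 8, ..., 0]
```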
YouTube: Everything I Learned at Stanford Business School in 28 Minutes
2 minute read
Links:
- Channel: jayhoovy
- Video: https://www.youtube.com/watch?v=vIkRbAvaQjs&t=49s
Notes
1. Stanford MBA Module #1: Business Strategy
- The foundation of a successful company, and the most important thing for a business, is its business strategy. Business strategy is the game plan you employ to create a successful business.
- The way to learn how to create a great business strategy is to study the strategies of multi-billion-dollar companies.
- And the most efficient way to study other companies' strategies is to use Porter's Five Forces.
1.1. Porter’s Five Forces
- The five forces are:
  - Competition: How competitive is your market?
  - Substitutes: How many substitutes are there in the market?
  - New Entry: How susceptible is your product to a new entrant?
  - Buyer's Power: How much power do buyers have over you?
  - Supply Chain: How much of the supply chain do you control?
- To understand this, John Ha, the maker of the video takes the example of Apple.
- Apple has huge competition in the phone, laptop, and PC markets, and elsewhere, from the likes of Samsung, Microsoft, and others. Additionally, there are many substitutes for Apple products. From these first two forces, you would expect Apple to have a hard time. However, they do not; in fact, they thrive in this environment. But why?
- The answer is their ecosystem lock-in. Once you are in the Apple ecosystem, it is very hard to get out of it. Even when a new product is launched, the tendency of an Apple user to switch to a different company is pretty low, so Apple is barely susceptible to new entrants. It does not matter whether your first Apple device is an iPhone, a MacBook, or an iPad; it is most likely the first Apple product of many that you will own, and that is because of the ecosystem Apple has built.
- This ecosystem also pushes people toward buying Apple in the first place. For example, if an iMessage group includes an Android user, texting that person falls back to SMS instead of going over Wi-Fi, which can cost money, so there is social pressure to buy Apple. And once you have bought one Apple product, their superior inter-device functionality means it will most likely not be your last. As a result, buyers have little power over Apple.
- Apple also has a really good supply chain, over which it has a great deal of control. They tend to make most of the components they use, so prices can be controlled. And if another company tries to compete, Apple's superior supply chain makes that very hard.
- These are the reasons why Apple became the first trillion-dollar public US company (the first company overall to reach that mark was PetroChina).