This tutorial will guide you through the process of building a simple C++ program that performs inference on GGUF LLM models using the llama.cpp framework. We will cover the essential steps involved in loading the model, performing inference, and displaying the results. The code for this tutorial can be found here.
Prerequisites
To follow along with this tutorial, you will need the following:
A Linux-based operating system (native or WSL)
CMake installed
A GCC or Clang C++ toolchain installed
Step 1: Setting Up the Project
Let's start by setting up our project. We will be building a C/C++ program that uses llama.cpp to perform inference on GGUF LLM models.
Create a new project directory; we'll call it smol_chat.
Within the project directory, let's clone the llama.cpp repository into a subdirectory called externals. This will give us access to the llama.cpp source code and headers.
mkdir -p externals
cd externals
git clone https://github.com/ggerganov/llama.cpp.git
cd ..
Step 2: Configuring CMake
Now, let's configure our project to use CMake. This will allow us to easily compile and link our C/C++ code with the llama.cpp library.
Create a CMakeLists.txt file in the project directory.
In the CMakeLists.txt file, add the following code:
cmake_minimum_required(VERSION 3.10)
project(smol_chat)

set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

# Build llama.cpp (and its `common` utility library) from the cloned sources.
# Depending on the llama.cpp revision, the common library may need to be
# enabled explicitly when llama.cpp is consumed via add_subdirectory.
set(LLAMA_BUILD_COMMON ON)
add_subdirectory("${CMAKE_CURRENT_SOURCE_DIR}/externals/llama.cpp")

add_executable(smol_chat main.cpp LLMInference.cpp)
target_include_directories(smol_chat PUBLIC ${CMAKE_CURRENT_SOURCE_DIR})
target_link_libraries(smol_chat PRIVATE llama common)
This file specifies the minimum CMake version, sets the C++ standard, pulls the cloned llama.cpp sources into our build with add_subdirectory, declares an executable named smol_chat built from main.cpp and the LLMInference.cpp we write below, adds the project directory to the include path, and links against the llama and common libraries that llama.cpp's own build produces.
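Once the source files from the following steps are in place, the project can be configured and built from the project root with the standard out-of-source CMake workflow:

cmake -B build .
cmake --build build -j

This compiles llama.cpp alongside our own sources and places the smol_chat binary in the build directory.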
Step 3: Defining the LLM Interface
Next, let's define a C++ class that will handle the high-level interactions with the LLM. This class will abstract away the low-level llama.cpp function calls and provide a convenient interface for performing inference.
In the project directory, create a header file called LLMInference.h.
In LLMInference.h, declare the following class:
#pragma once

#include "llama.h"
#include <string>
#include <vector>

class LLMInference {
public:
    LLMInference(const std::string& model_path);
    ~LLMInference();
    void startCompletion(const std::string& query);
    std::string completeNext();

private:
    llama_model*   llama_model_;
    llama_context* llama_context_;
    llama_sampler* llama_sampler_;
    // the element types (and the two buffer names) below are inferred from
    // how the rest of the tutorial uses these members
    std::vector<llama_chat_message> _messages;          // chat history; content is heap-allocated
    std::vector<char>               _formattedMessages; // output buffer for the chat template
    std::vector<llama_token>        _promptTokens;      // tokenized prompt
    llama_batch batch_;
};
This class has a public constructor that takes the path to the GGUF LLM model as an argument and a destructor that deallocates any dynamically allocated objects. It also has two public member functions: startCompletion, which initiates the completion process for a given query, and completeNext, which fetches the next token in the LLM's response sequence. The private members hold the llama.cpp model, context, and sampler handles, along with the chat history, the chat-template output buffer, the tokenized prompt, and the batch that is handed to the decoder.
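Before implementing the class, it helps to see how a caller would drive it. The snippet below is a hypothetical usage sketch: the model path is a placeholder, and it assumes completeNext signals the end of generation by returning a sentinel string such as "[EOG]" once the model emits its end-of-generation token:

#include "LLMInference.h"
#include <iostream>
#include <string>

int main() {
    // placeholder path; any chat-tuned GGUF model works here
    LLMInference llm("models/model.gguf");
    llm.startCompletion("Write a haiku about compilers.");
    // stream the response token by token until the (assumed) end-of-generation sentinel
    for (std::string piece = llm.completeNext(); piece != "[EOG]"; piece = llm.completeNext()) {
        std::cout << piece << std::flush;
    }
    std::cout << std::endl;
    return 0;
}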
Step 4: Implementing LLM Inference Functions
Now, let's define the implementation for the LLMInference class in a file called LLMInference.cpp.
In LLMInference.cpp, include the necessary headers and implement the class methods as follows:
#include "LLMInference.h"
#include "common.h"
#include <cstdlib>
#include <cstring>
#include <iostream>
LLMInference::LLMInference(const std::string& model_path) {
    // load the GGUF model weights from disk
    llama_model_ = llama_load_model_from_file(model_path.c_str(), llama_model_default_params());
    // create an inference context over the loaded model
    llama_context_ = llama_new_context_with_model(llama_model_, llama_context_default_params());
    // build a sampler chain: temperature and min-p shape the token distribution,
    // and a final dist sampler actually draws the next token from it
    llama_sampler_ = llama_sampler_chain_init(llama_sampler_chain_default_params());
    llama_sampler_chain_add(llama_sampler_, llama_sampler_init_temp(0.8f));
    llama_sampler_chain_add(llama_sampler_, llama_sampler_init_min_p(0.0f, 1));
    llama_sampler_chain_add(llama_sampler_, llama_sampler_init_dist(LLAMA_DEFAULT_SEED));
}
LLMInference::~LLMInference() {
    // release the heap-allocated message contents (allocated in startCompletion)
    for (auto& msg : _messages) {
        std::free(const_cast<char*>(msg.content));
    }
    llama_sampler_free(llama_sampler_);
    llama_free(llama_context_);       // free the context before the model it references
    llama_free_model(llama_model_);
}
startCompletion records the query as a user message in the chat history, applies the model's chat template to the conversation so far, and tokenizes the resulting prompt into a batch that completeNext can then decode.
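The exact calls involved are version-sensitive: llama_chat_apply_template, common_tokenize, and llama_batch_get_one have all changed signatures across llama.cpp releases. The following is therefore a sketch against the late-2024 API, not a drop-in implementation:

void LLMInference::startCompletion(const std::string& query) {
    // store the query as a chat message; the content is duplicated on the
    // heap and released in the destructor
    _messages.push_back({ "user", strdup(query.c_str()) });

    // apply the model's chat template to the whole conversation;
    // nullptr selects the template embedded in the GGUF file
    _formattedMessages.resize(llama_n_ctx(llama_context_));
    int len = llama_chat_apply_template(llama_model_, nullptr,
                                        _messages.data(), _messages.size(),
                                        /*add_assistant=*/true,
                                        _formattedMessages.data(),
                                        (int)_formattedMessages.size());
    std::string prompt(_formattedMessages.begin(), _formattedMessages.begin() + len);

    // tokenize the formatted prompt and wrap the tokens in a batch for decoding
    _promptTokens = common_tokenize(llama_context_, prompt,
                                    /*add_special=*/true, /*parse_special=*/true);
    batch_ = llama_batch_get_one(_promptTokens.data(), (int)_promptTokens.size());
}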