Managing Context Effectively with the Model Context Protocol (MCP)

2025/04/28 14:32

This tutorial walks you through a practical implementation of the Model Context Protocol (MCP) by building a ModelContextManager.

```python
import torch
import numpy as np
import typing
from dataclasses import dataclass, field
import time
import gc
from tqdm.notebook import tqdm
from sentence_transformers import SentenceTransformer
from transformers import GPT2Tokenizer, T5ForConditionalGeneration, AutoTokenizer, AutoModelForSeq2SeqLM
import math

# Configuration constants for the context manager
MAX_TOKENS = 8000
DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'
NUM_CHUNKS = 50
CHUNK_SIZE = 100
RELEVANCE_THRESHOLD = 0.1
IMPORTANCE_FACTOR = 1.0
RECENCY_FACTOR = 0.5
VISUALIZE_CONTEXT = True
BATCH_SIZE = 32


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


@dataclass
class ContextChunk:
    content: str
    embedding: np.ndarray
    importance: float = 1.0
    timestamp: float = field(default_factory=time.time)
    metadata: dict = None

    def __post_init__(self):
        if self.metadata is None:
            self.metadata = {}


class ModelContextManager:
    def __init__(self, context_chunks: typing.List[ContextChunk] = None, max_tokens: int = MAX_TOKENS,
                 token_limit: int = 0, gpt2_tokenizer: GPT2Tokenizer = None):
        self.max_tokens = max_tokens
        self.token_limit = token_limit
        self.context_chunks = context_chunks or []
        self.used_tokens = 0
        self.last_chunk_index = 0
        self.total_chunks = 0
        if gpt2_tokenizer is None:
            self.gpt2_tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
        else:
            self.gpt2_tokenizer = gpt2_tokenizer
        self.sentence_transformer = SentenceTransformer('all-mpnet-base-v2')

    def add_chunk(self, chunk_text: str, importance: float = 1.0):
        # Count the chunk's tokens with the GPT-2 tokenizer and track the running total
        encoded_input = self.gpt2_tokenizer(chunk_text, return_tensors='pt')
        chunk_token_count = int(encoded_input["input_ids"].shape[1])
        self.used_tokens += chunk_token_count
        # Embed the chunk so it can later be scored for relevance against a query
        chunk_embedding = self.sentence_transformer.encode(chunk_text, batch_size=BATCH_SIZE)
        new_chunk = ContextChunk(content=chunk_text, embedding=chunk_embedding, importance=importance)
        self.context_chunks.append(new_chunk)
        self.last_chunk_index += 1
        self.total_chunks += 1
        print(f"Added chunk with {chunk_token_count} tokens and importance {importance}. "
              f"Total used tokens: {self.used_tokens}, total chunks: {self.total_chunks}")

    def optimize_context_window(self, query: str, min_chunks: int = 3):
        if len(self.context_chunks) <= min_chunks:
            return []
        query_embedding = self.sentence_transformer.encode(query, batch_size=BATCH_SIZE)
        chunks_to_keep = []
        remaining_tokens = self.max_tokens - self.used_tokens
        if remaining_tokens < 0:
            print("Warning: token limit exceeded by %s tokens" % -remaining_tokens)
        # Walk the stored chunks from newest to oldest
        for i in range(len(self.context_chunks) - 1, -1, -1):
            chunk = self.context_chunks[i]
            # Always keep the most recent chunk
            if i == len(self.context_chunks) - 1:
                chunks_to_keep.append(i)
                continue
            # Score each chunk by importance, age, and semantic relevance to the query
            chunk_importance = chunk.importance * IMPORTANCE_FACTOR
            chunk_recency = (time.time() - chunk.timestamp) * RECENCY_FACTOR
            relevance_score = cosine_similarity(chunk.embedding, query_embedding)
            total_score = chunk_importance + chunk_recency + relevance_score
            if total_score >= RELEVANCE_THRESHOLD:
                encoded_input = self.gpt2_tokenizer(chunk.content, return_tensors='pt')
                chunk_token_count = int(encoded_input["input_ids"].shape[1])
                if remaining_tokens >= chunk_token_count:
                    chunks_to_keep.append(i)
                    remaining_tokens -= chunk_token_count
```
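To see how the pieces fit together, here is a minimal usage sketch of the class as excerpted above. The chunk texts, importance values, and query string are illustrative assumptions and not part of the original tutorial.

```python
# Hypothetical usage of the ModelContextManager excerpt above.
manager = ModelContextManager()

# Add a few example chunks; importance values here are arbitrary.
manager.add_chunk("MCP structures the context passed to a language model.", importance=2.0)
manager.add_chunk("Chunks are embedded with all-mpnet-base-v2 for relevance scoring.")
manager.add_chunk("Unrelated note: remember to water the plants.", importance=0.5)
manager.add_chunk("Old chunks can be dropped once the token budget is exceeded.")

# Score the stored chunks against a query; the excerpt above builds the list of
# chunk indices that fit within the remaining token budget.
manager.optimize_context_window("How does MCP score chunks for relevance?")
```

Each call to add_chunk token-counts and embeds the text, while optimize_context_window scores every stored chunk by importance, age, and cosine similarity to the query before deciding which indices fit the remaining token budget.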
