GTE-ModernColBERT-v1: A New State-of-the-art Model for Long-document Semantic Retrieval

May 12, 2025 at 02:08 am

This paper introduces GTE-ModernColBERT-v1, a new model that builds on the ColBERT architecture and integrates the ModernBERT foundation developed by Alibaba-NLP.

Researchers from LightOn AI have presented GTE-ModernColBERT-v1, a model that builds on the ColBERT architecture and integrates the ModernBERT foundation developed by Alibaba-NLP. Their aim was to distill knowledge from a base model and optimize it on the MS MARCO dataset in order to overcome limitations in context length and semantic preservation. Although the model was trained on 300-token document inputs, it can handle inputs as long as 8192 tokens, making it suitable for indexing and retrieving long documents with minimal information loss. The work is deployed through PyLate, a library that simplifies indexing and querying documents with dense vector models. The model performs token-level semantic matching using the MaxSim operator, which evaluates similarity between individual token embeddings rather than compressing them into a single vector.
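
To make the MaxSim operator concrete, here is a minimal sketch of the late-interaction scoring it describes: each query token is matched against its most similar document token, and those maxima are summed over the query. The 128-dimensional embeddings follow the article; the random arrays are placeholders standing in for real token embeddings.

```python
import numpy as np

def maxsim_score(query_emb: np.ndarray, doc_emb: np.ndarray) -> float:
    """ColBERT-style MaxSim: for each query token embedding, take the
    similarity to its best-matching document token, then sum over the query.

    Both inputs are (num_tokens, 128) L2-normalized token embeddings,
    so the dot product equals cosine similarity.
    """
    sim = query_emb @ doc_emb.T            # (n_query_tokens, n_doc_tokens)
    return float(sim.max(axis=1).sum())    # best doc token per query token

# Toy usage with random unit vectors standing in for real embeddings
rng = np.random.default_rng(0)
q = rng.normal(size=(8, 128))
d = rng.normal(size=(300, 128))
q /= np.linalg.norm(q, axis=1, keepdims=True)
d /= np.linalg.norm(d, axis=1, keepdims=True)
print(maxsim_score(q, d))
```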

GTE-ModernColBERT-v1 transforms text into 128-dimensional dense vectors per token and uses the MaxSim function to compute semantic similarity between query and document tokens, preserving granular context and enabling more refined retrieval. It integrates with PyLate’s Voyager indexing system, which manages large-scale embeddings using an efficient HNSW (Hierarchical Navigable Small World) index. Once documents are embedded and stored, users can retrieve the top-k most relevant documents with the ColBERT retriever. The pipeline supports both full indexing and lightweight reranking on top of first-stage retrieval systems. PyLate also allows the document length to be changed at inference time, so users can handle texts much longer than those the model was originally trained on, an advantage rarely seen in standard embedding models.
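
The indexing-and-retrieval flow described above looks roughly like the following sketch, based on the usage pattern in PyLate’s documentation; the Hugging Face model identifier and the exact call signatures are assumptions that should be checked against the current PyLate release.

```python
# Sketch of the PyLate flow; verify names against the PyLate docs.
from pylate import indexes, models, retrieve

# Load the ColBERT-style encoder (model id assumed from the article)
model = models.ColBERT(model_name_or_path="lightonai/GTE-ModernColBERT-v1")

# Voyager wraps an HNSW index over the token-level embeddings
index = indexes.Voyager(index_folder="pylate-index", index_name="demo",
                        override=True)

documents = [
    "ColBERT scores queries against documents with token-level MaxSim.",
    "HNSW graphs enable fast approximate nearest-neighbor search.",
]
documents_embeddings = model.encode(documents, is_query=False)
index.add_documents(documents_ids=["doc-0", "doc-1"],
                    documents_embeddings=documents_embeddings)

# Retrieve the top-k documents for a query via the ColBERT retriever
retriever = retrieve.ColBERT(index=index)
queries_embeddings = model.encode(["how does late interaction work?"],
                                  is_query=True)
print(retriever.retrieve(queries_embeddings=queries_embeddings, k=2))
```

Because ColBERT keeps one vector per token rather than one per document, the index grows with document length; the approximate HNSW search is what keeps retrieval fast at that scale.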

On the NanoClimate dataset, the model achieved a MaxSim Accuracy@1 of 0.360, Accuracy@5 of 0.780, and Accuracy@10 of 0.860. Precision and recall scores were consistent, with MaxSim Recall@3 reaching 0.289 and Precision@3 at 0.233. These scores highlight the model’s capability to retrieve accurate results even in longer-context retrieval scenarios.
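
For reference, these cutoff metrics follow their standard definitions; the sketch below shows how Accuracy@k (whether any relevant document appears in the top k), Precision@k, and Recall@k are computed for a single query, using made-up document ids.

```python
def accuracy_at_k(ranked, relevant, k):
    """1 if any relevant document appears in the top-k results, else 0."""
    return int(any(doc in relevant for doc in ranked[:k]))

def precision_at_k(ranked, relevant, k):
    """Fraction of the top-k results that are relevant."""
    return sum(doc in relevant for doc in ranked[:k]) / k

def recall_at_k(ranked, relevant, k):
    """Fraction of all relevant documents found in the top-k results."""
    return sum(doc in relevant for doc in ranked[:k]) / len(relevant)

# One toy query: two relevant documents, one retrieved within the top 3
ranked = ["d7", "d2", "d9", "d4"]
relevant = {"d2", "d5"}
print(accuracy_at_k(ranked, relevant, 1))   # 0
print(precision_at_k(ranked, relevant, 3))  # 0.333...
print(recall_at_k(ranked, relevant, 3))     # 0.5
```

Reported benchmark figures such as Accuracy@1 = 0.360 are these per-query values averaged over every query in the dataset.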

When evaluated on the BEIR benchmark, GTE-ModernColBERT outperformed previous models, including ColBERT-small, scoring 54.89 on FiQA2018, 48.51 on NFCorpus, and 83.59 on the TREC-COVID task; its average across these tasks was significantly higher than that of baseline ColBERT variants. Notably, on the LongEmbed benchmark the model achieved a mean score of 88.39 and scored 78.82 on LEMB Narrative QA Retrieval, surpassing leading models such as voyage-multilingual-2 (79.17) and bge-m3 (58.73). These results suggest robust generalization and effective handling of long-context documents, with the model outperforming many contemporaries by almost 10 points on long-context tasks. It also adapts to different retrieval pipelines, supporting both indexing and reranking implementations, making it a versatile solution for scalable semantic search.

Overall, this research makes a meaningful contribution to long-document semantic retrieval. By combining token-level matching with a scalable architecture, GTE-ModernColBERT-v1 addresses several bottlenecks that current models face, offering a reliable method for processing and retrieving semantically rich information from extended contexts and significantly improving precision and recall in longer-context retrieval scenarios.
