Token-Shuffle: Scaling Autoregressive Models to High-Resolution Images
Apr 26, 2025 at 12:38 pm
Autoregressive (AR) models have made significant advances in language generation and are increasingly explored for image synthesis.
Autoregressive (AR) models have achieved remarkable progress in language generation and are increasingly being explored for image synthesis. However, scaling AR models to high-resolution images presents a persistent challenge. Unlike text, which requires relatively few tokens, high-resolution images necessitate thousands of tokens, leading to a quadratic growth in computational cost. As a result, most AR-based multimodal models are constrained to low or medium resolutions, limiting their utility for detailed image generation. While diffusion models have shown promising results at high resolutions, they come with their own limitations, including complex sampling procedures and slower inference. Addressing the token efficiency bottleneck in AR models remains a crucial open problem for enabling scalable and practical high-resolution image synthesis.
In a new contribution from Meta AI, researchers introduce Token-Shuffle, a method designed to reduce the number of image tokens processed by Transformers without altering the fundamental next-token prediction paradigm. The key insight underpinning Token-Shuffle is the recognition of dimensional redundancy in the visual vocabularies used by multimodal large language models (MLLMs). Visual tokens, typically derived from vector quantization (VQ) models, occupy high-dimensional spaces but carry a lower intrinsic information density than text tokens. Token-Shuffle exploits this property by merging spatially local visual tokens along the channel dimension before Transformer processing and subsequently restoring the original spatial structure after inference. This token fusion mechanism allows AR models to handle higher resolutions at significantly reduced computational cost while maintaining visual fidelity.
Token-Shuffle consists of two operations: token-shuffle and token-unshuffle. During input preparation, spatially neighboring tokens are merged using an MLP to form a compressed token that preserves essential local information. For a shuffle window size s, the number of tokens is reduced by a factor of s², leading to a substantial reduction in Transformer FLOPs. After the Transformer layers, the token-unshuffle operation reconstructs the original spatial arrangement, again assisted by lightweight MLPs.
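To make the mechanics concrete, here is a minimal PyTorch-style sketch of the shuffle and unshuffle steps, assuming visual tokens laid out row-major on an h × w grid with embedding dimension dim and a square shuffle window s. The module name, tensor layout, and the use of single linear layers in place of the paper's lightweight MLPs are illustrative assumptions, not the released implementation.

```python
# Illustrative sketch of token-shuffle / token-unshuffle (not the paper's code).
import torch
import torch.nn as nn


class TokenShuffle(nn.Module):
    def __init__(self, dim: int, window: int = 2):
        super().__init__()
        self.s = window
        # Single linear layers stand in for the paper's lightweight MLPs (assumption).
        self.merge = nn.Linear(dim * window * window, dim)    # fuse s*s tokens into one
        self.expand = nn.Linear(dim, dim * window * window)   # split one token back into s*s

    def shuffle(self, x: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # x: (B, h*w, dim) row-major grid -> (B, (h/s)*(w/s), dim), an s^2 token reduction.
        b, _, c = x.shape
        s = self.s
        x = x.view(b, h // s, s, w // s, s, c)                # carve the grid into s x s windows
        x = x.permute(0, 1, 3, 2, 4, 5).reshape(b, (h // s) * (w // s), s * s * c)
        return self.merge(x)                                  # MLP fuses each window along channels

    def unshuffle(self, x: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # Inverse: (B, (h/s)*(w/s), dim) -> (B, h*w, dim), restoring the spatial arrangement.
        b, _, c = x.shape
        s = self.s
        x = self.expand(x).view(b, h // s, w // s, s, s, c)
        x = x.permute(0, 1, 3, 2, 4, 5).reshape(b, h * w, c)
        return x
```

With the default 2×2 window, the Transformer layers see a quarter of the original visual tokens; larger windows shrink the sequence further, at some cost in fine detail, consistent with the ablations reported later in the article.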
By compressing token sequences during Transformer computation, Token-Shuffle enables the efficient generation of high-resolution images, including those at 2048×2048 resolution. Importantly, this approach does not require modifications to the Transformer architecture itself, nor does it introduce auxiliary loss functions or pretraining of additional encoders.
Moreover, the method integrates a classifier-free guidance (CFG) scheduler specifically adapted for autoregressive generation. Rather than applying a fixed guidance scale across all tokens, the scheduler progressively adjusts guidance strength, minimizing early token artifacts and improving text-image alignment.
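The article does not specify the exact schedule, so the sketch below only illustrates the idea of a per-token guidance ramp combined with the standard CFG logit formula; the linear ramp and the max_scale value are assumptions made for illustration.

```python
# Hedged sketch of a per-token CFG schedule for autoregressive decoding.
import torch


def cfg_scale(step: int, total_steps: int, max_scale: float = 7.5) -> float:
    """Ramp guidance from 1.0 (no guidance) toward max_scale as decoding proceeds.
    The linear ramp is an assumption; the paper only states that guidance strength
    is adjusted progressively rather than held fixed."""
    return 1.0 + (max_scale - 1.0) * (step / max(total_steps - 1, 1))


def guided_logits(cond_logits: torch.Tensor,
                  uncond_logits: torch.Tensor,
                  step: int, total_steps: int) -> torch.Tensor:
    """Standard classifier-free guidance combination with a step-dependent scale."""
    w = cfg_scale(step, total_steps)
    return uncond_logits + w * (cond_logits - uncond_logits)
```

Keeping guidance weak for the earliest tokens is what the article credits with reducing early-token artifacts while still improving text-image alignment later in the sequence.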
Token-Shuffle was evaluated on two major benchmarks: GenAI-Bench and GenEval. On GenAI-Bench, using a 2.7B parameter LLaMA-based model, Token-Shuffle achieved a VQAScore of 0.77 on “hard” prompts, outperforming other autoregressive models such as LlamaGen by a margin of +0.18 and diffusion models like LDM by +0.15. In the GenEval benchmark, it attained an overall score of 0.62, setting a new baseline for AR models operating in the discrete token regime.
Large-scale human evaluation on two benchmarks, GenAI-Bench and GenEval
Furthermore, Token-Shuffle was subjected to large-scale human evaluation on two benchmarks, GenAI-Bench and GenEval. The results indicated that compared to LlamaGen, Lumina-mGPT, and diffusion baselines, Token-Shuffle showed improved alignment with textual prompts, reduced visual flaws, and higher subjective image quality in most cases. However, minor degradation in logical consistency was observed relative to diffusion models, suggesting avenues for further refinement.
In terms of visual quality, Token-Shuffle demonstrated the capability to produce detailed and coherent 1024×1024 and 2048×2048 images. Ablation studies revealed that smaller shuffle window sizes (e.g., 2×2) offered the best trade-off between computational efficiency and output quality. Larger window sizes provided additional speedups but introduced minor losses in fine-grained detail.
All in all, the researchers present a simple yet effective method to address the scalability limitations of autoregressive image generation. By merging spatially local visual tokens during the Transformer computation, their approach reduces the computational complexity without altering the fundamental next-token prediction step. This integration of spatial compression with standard AR generation is compatible with existing multimodal generation frameworks, enabling the efficient generation of high-resolution images. The experimental results highlight the potential of Token-Shuffle to push AR models beyond prior resolution limits, making high-fidelity, high-resolution generation more practical and accessible.