Meta Holds Its First-Ever Event for AI Developers, LlamaCon, Announcing That It's Ready to Compete with ChatGPT
Apr 30, 2025 at 04:15 am
Meta held its first-ever event for AI developers, LlamaCon, at the company’s headquarters in Menlo Park, where it announced that it was ready to compete with ChatGPT from OpenAI, as well as Google, AWS, and AI-as-a-service startups.
Meta Founder and CEO Mark Zuckerberg was joined by Co-Founder and CEO of Databricks, Ali Ghodsi.
This is a big deal for Meta and the AI industry: the maker of the popular open-source Llama LLM is now seeking to directly monetize the broad adoption Llama has achieved. Developers simply access the model from the cloud, with no hardware or software to install.
But it is also a big deal for Cerebras and Groq, the two startups Meta selected to serve tokens many times faster than a GPU can. (Nvidia, Cerebras and Groq are all clients of Cambrian-AI Research.) Meta did not disclose pricing: the API is currently in preview, and access to Groq and Cerebras is available only by request. This is the first time either startup has gained a foothold at a hyperscale Cloud Service Provider (CSP). And Meta has made it easy to use; developers simply select Groq or Cerebras in the API call.
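Meta has not published the API schema, so the sketch below is only a guess at what selecting an accelerator in the API call might look like. The endpoint URL, the `accelerator` field, and the model name are all illustrative assumptions, not Meta's documented Llama API.

```python
# Hypothetical sketch of picking an inference provider in a Llama API call.
# The endpoint, the "accelerator" field, and the model name are assumptions
# for illustration -- not Meta's documented schema.
import json
import urllib.request

API_URL = "https://api.llama.example/v1/chat/completions"  # placeholder URL


def build_request(prompt: str, accelerator: str = "cerebras") -> dict:
    """Assemble a chat-completion payload; 'accelerator' selects Cerebras or Groq."""
    if accelerator not in ("cerebras", "groq"):
        raise ValueError("accelerator must be 'cerebras' or 'groq'")
    return {
        "model": "llama-4",          # assumed model identifier
        "accelerator": accelerator,  # assumed provider-selection field
        "messages": [{"role": "user", "content": prompt}],
    }


def send(payload: dict, api_key: str) -> bytes:
    """POST the payload; requires preview access and a real endpoint."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:  # network call, not run here
        return resp.read()


payload = build_request("Summarize LlamaCon in one sentence.", accelerator="groq")
print(payload["accelerator"])  # prints "groq"
```

The point of the sketch is the shape of the developer experience Meta described: switching providers is a one-word change in the request, with no infrastructure changes on the developer's side.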
Cerebras is by far the industry's fastest inference processor (roughly 18x faster than a GPU), and Groq is also some five times faster than any GPU.
“Cerebras is proud to make Llama API the fastest inference API in the world,” said Andrew Feldman, CEO and co-founder of Cerebras. “Developers building agentic and real-time apps need speed. With Cerebras on Llama API, they can build AI systems that are fundamentally out of reach for leading GPU-based inference clouds.”
Llama on Cerebras is far faster than on Google TPUs or Nvidia GPUs.
Andrew’s point is important. Inference at some 100 tokens per second is already faster than a human can read, so “one-shot” requests to a service like ChatGPT run just fine on GPUs. But multi-model agents and reasoning models can increase computational requirements by some 100-fold, opening an opportunity for faster inference from companies like Cerebras and Groq. Meta did not mention the third fast-inference company, SambaNova, but indicated that it is open to other compute options in the future.
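To put rough numbers on that point: the 100 tokens-per-second GPU rate, the roughly 100-fold increase in generated tokens for agentic workloads, and the ~18x Cerebras speedup come from the article; the 500-token response size is an illustrative assumption.

```python
# Back-of-the-envelope latency math using the article's figures.
# 100 tok/s GPU throughput, the ~100x agentic token multiplier, and the
# ~18x Cerebras speedup are from the text; the 500-token response size
# is an illustrative assumption.

GPU_TOKENS_PER_SEC = 100   # roughly faster than a human can read
CEREBRAS_SPEEDUP = 18      # ~18x a GPU, per the article
RESPONSE_TOKENS = 500      # assumed size of one model response

# One-shot chat: a single response, fine on a GPU.
one_shot_gpu = RESPONSE_TOKENS / GPU_TOKENS_PER_SEC               # 5.0 s

# Agentic / reasoning workload: ~100x the generated tokens.
agent_tokens = RESPONSE_TOKENS * 100
agent_gpu = agent_tokens / GPU_TOKENS_PER_SEC                     # 500.0 s
agent_fast = agent_tokens / (GPU_TOKENS_PER_SEC * CEREBRAS_SPEEDUP)

print(f"one-shot on GPU:      {one_shot_gpu:.0f} s")
print(f"agentic chain on GPU: {agent_gpu:.0f} s")
print(f"agentic chain at 18x: {agent_fast:.1f} s")
```

Under these assumptions an agentic workload that takes over eight minutes of generation on a GPU drops to under half a minute at an 18x token rate, which is the gap the fast-inference vendors are selling into.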
It will be interesting to see how well these two new options fare in the tokens-as-a-service world.