Results 1 to 3 of 3

Thread: Cerebras WSE - An impossible-sized new brain for AI

  1. #1
    Member
    Join Date
    Aug 2014
    Location
    Argentina
    Posts
    536
    Thanks
    238
    Thanked 90 Times in 70 Posts

    Cerebras WSE - An impossible-sized new brain for AI

    Quoting Forbes here:

    "With 400,000 programmable processor cores, 18 GB of memory, and an on-chip fabric capable of 25 petabits, the WSE comprises 1.2 trillion transistors in 46,225 mm2 of silicon real estate (for contrast, it is 56x larger than the largest GPU for AI, which is 815mm2)"

    "
    On top of these engineering innovations, the company develop new programmable Sparse Linear Algebra Cores (SLAC) optimized for AI processing. The SLAC skips any function that multiplies by zero, which can significantly speed the multiplication of matrices in the deep learning process while reducing power. The company also reduced the memory stack by eliminating cache and putting large amounts of high-speed memory (18 GB of SRAM) close to the processing cores. All this is connected by what the company calls the Swarm communication fabric, a 2D mesh fabric with 25 petabits of bandwidth that is designed to fit between the processor cores and tiles, including what would normally be die cut area on the wafer."
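
    To make the zero-skipping idea concrete, here is a toy C++ sketch (my own illustration, not Cerebras code; the function and variable names are made up): in a matrix-vector product with sparse weights and activations, every product that would multiply by zero is simply skipped, which is where the claimed speed and power savings come from.

    Code:
    // Toy illustration of the SLAC "skip multiply-by-zero" idea, not Cerebras code.
    #include <cstddef>
    #include <vector>

    // y = W * x, where many entries of W and x are zero (sparse weights/activations).
    std::vector<float> sparse_matvec(const std::vector<std::vector<float>>& W,
                                     const std::vector<float>& x) {
        std::vector<float> y(W.size(), 0.0f);
        for (std::size_t i = 0; i < W.size(); ++i)
            for (std::size_t j = 0; j < x.size(); ++j)
                if (W[i][j] != 0.0f && x[j] != 0.0f)  // skip work whenever a factor is zero
                    y[i] += W[i][j] * x[j];
        return y;
    }

    A dense matrix kernel executes those zero products anyway; hardware that can skip them per element is what the quote credits for the efficiency gain.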

    "
    Because of its design, the Cerebras WSE platform has advantages in latency, bandwidth, processing efficiency, and size. According to Cerebras, the WSE is 56.7 times larger than the largest GPU, has 3,000 times more on-die memory, has 10,000 times more memory bandwidth, and fits into 1/50th of the space of a traditional data center configuration with thousands of server nodes. The company has not discussed the availability of the platform or estimated cost."
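
    (For reference, the size figures in the two quotes are consistent: 46,225 mm² / 815 mm² ≈ 56.7.)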

    https://www.cerebras.net/

    Attached image: ECeNboUUwAAPBVL.jpg (90.5 KB)

    There's much more info on the site. Of course, I'm not affiliated in any capacity with Cerebras, Forbes, or any other company mentioned in the article.
    Last edited by Gonzalo; 12th September 2019 at 02:28. Reason: Company website

  2. #2
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,946
    Thanks
    294
    Thanked 1,286 Times in 728 Posts
    Yeah, I think this kind of hardware (TPU-style) has the potential to turn paq-like compression algorithms into something practical.
    Unfortunately, the Cerebras thing specifically is too expensive: $5k just for the wafer, ~$600k estimated cost for the whole thing.
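
    For intuition, the hot loop of a paq-style compressor is essentially a per-bit dot product plus a gradient update, which is exactly the kind of work such hardware is built for. A rough C++ sketch (my simplification, not actual paq code; the names are made up):

    Code:
    // Rough sketch of a paq-style logistic mixer: combine per-model bit
    // probabilities with a dot product, then adapt the weights per bit.
    // Probabilities are assumed to lie strictly in (0, 1).
    #include <cmath>
    #include <cstddef>
    #include <vector>

    float stretch(float p) { return std::log(p / (1.0f - p)); }      // probability -> logit
    float squash(float x)  { return 1.0f / (1.0f + std::exp(-x)); }  // logit -> probability

    // Mix the models' predictions into one bit probability: a dot product per coded bit.
    float mix(const std::vector<float>& p, const std::vector<float>& w) {
        float dot = 0.0f;
        for (std::size_t i = 0; i < p.size(); ++i)
            dot += w[i] * stretch(p[i]);
        return squash(dot);
    }

    // After the bit is known, nudge the weights to reduce coding cost (gradient step).
    void update(const std::vector<float>& p, std::vector<float>& w,
                float prediction, int bit, float rate = 0.002f) {
        const float err = static_cast<float>(bit) - prediction;
        for (std::size_t i = 0; i < p.size(); ++i)
            w[i] += rate * err * stretch(p[i]);
    }

    paq8-class compressors evaluate and mix a large number of model predictions for every bit of input, so throughput is limited by exactly these dot products; that is the part a massively parallel matrix engine could absorb.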

  3. #3
    Member
    Join Date
    Aug 2014
    Location
    Argentina
    Posts
    536
    Thanks
    238
    Thanked 90 Times in 70 Posts
    Hopefully this is the first step towards mass production and the reduction of costs.

Similar Threads

  1. Brain language-processing speed
    By Sportman in forum The Off-Topic Lounge
    Replies: 0
    Last Post: 6th September 2019, 03:31
  2. https - problem , it is impossible to access the website
    By joerg in forum The Off-Topic Lounge
    Replies: 4
    Last Post: 22nd December 2017, 19:27
  3. QuickLZ - an impossible compressor :)
    By lz77 in forum Data Compression
    Replies: 0
    Last Post: 25th July 2017, 10:33
  4. how the brain compresses images
    By willvarfar in forum Data Compression
    Replies: 12
    Last Post: 13th February 2011, 16:32
