DRAM Memory | JEDEC: DDR5 to Double Bandwidth Over DDR4, NVDIMM-P Specification Due Next Year

There are several memory-related news items: CXL 3.0, OpenCAPI being folded into CXL, and several CXL DDR5 memory modules being presented.


https://www.anandtech.com/show/1752...announced-doubled-speeds-and-flexible-fabrics


https://www.anandtech.com/show/17519/opencapi-to-fold-into-cxl


https://www.techpowerup.com/297415/...-its-first-compute-express-link-memory-module
 
Yes, but it is already possible to use this today on CXL 2.0 systems that support Type 3 devices, which is the type that works as "memory expansion". That SK Hynix system is odd in terms of the 96GB, since back in May Samsung had already announced the launch of its 512GB CXL module, one year after the "announcement" I posted on the previous page, mentioning that Samsung had once again run rings around Micron...

Expanding the Limits of Memory Bandwidth and Density: Samsung’s CXL Memory Expander​

CXL consortium identifies three different device types:

• Type 1 CXL devices are caching devices such as Accelerators and SmartNICs. The Type 1 device can access the host memory through CXL.cache transactions and maintain a local cache that is coherent with the host memory.

• Type 2 CXL devices are GPUs and FPGAs that have memories like DDR and HBM attached to the device. CXL Type 2 devices can directly access the host-attached memory as do CXL Type 1 devices. Additionally, CXL Type 2 devices have local address space that is visible and accessible to the host CPU through CXL.mem transactions.

• Type 3 CXL devices are memory expansion devices that allow host processors to access CXL device memory cache coherently through CXL.mem transactions. CXL Type 3 devices could be used for memory density and memory bandwidth expansion.


For the purpose of this article, we’ll focus on Type 3 CXL devices.
expanding-the-limits-of-memory-bandwidth_4.jpeg


expanding-the-limits-of-memory-bandwidth_3.jpg

An important feature of CXL is that it maintains memory coherency between the direct attached CPU memory and the memory on the CXL device, which means that the host and the CXL device see the same data seamlessly. The CXL host has a home agent serving as a manager that uses the CXL.io and CXL.mem transactions to access the attached memory coherently.
Another major feature of CXL is that it is agnostic of the underlying memory technology as it allows various types of memories (e.g. volatile, persistent, etc.) to be attached to the host through the CXL interface. Moreover, CXL.mem transactions are byte addressable, load/store transactions just like DDR memory. So, attached CXL memory looks like native attached DDR memory to the end application. The CXL 2.0 specification also supports switching and memory pooling. Switching enables memory expansion, and pooling increases the overall system efficiency by allowing dynamic allocation and deallocation of memory resources. CXL integrity and data encryption define mechanisms for providing confidentiality, integrity and replay protection for data passing through the CXL link.

Samsung introduced the industry’s first CXL Type 3 memory expander prototype in May 2021. This prototype memory expander device has been successfully validated on multiple next-generation server CPU platforms. In addition, the CXL memory expander prototype has been tested on the server systems of multiple end customers with real applications and workloads.
expanding-the-limits-of-memory-bandwidth_6-1.jpg

Now, Samsung is testing a new CXL Type 3 DRAM memory expander product built with an application-specific integrated circuit (ASIC) CXL controller – and it’s poised to pave the way for the commercialization of CXL technology. Delivered in an EDSFF (E3.S) form factor, the expander is suitable for next-generation, high-capacity enterprise servers and datacenters.
expanding-the-limits-of-memory-bandwidth_5.jpg

https://semiconductor.samsung.com/n...nd-density-samsungs-cxl-dram-memory-expander/
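To make one point from the quote concrete: because CXL.mem is plain byte-addressable load/store, software does not need to know the memory sits behind CXL. On Linux a Type 3 expander typically shows up as a CPU-less NUMA node (as in the QCT demo further down), so ordinary allocation code works unchanged. A minimal sketch with libnuma, assuming the expander is exposed as node 1 (the node number is an assumption, check numactl --hardware first):

```c
/* Minimal sketch: treat CXL-attached memory as ordinary byte-addressable RAM.
 * Assumes the Type 3 expander is exposed by the kernel as NUMA node 1
 * (assumption; check `numactl --hardware` on the real box).
 * Build with: gcc cxl_sketch.c -o cxl_sketch -lnuma */
#include <numa.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "libnuma: NUMA not available on this system\n");
        return 1;
    }

    const int cxl_node = 1;           /* assumed CPU-less node backed by the expander */
    const size_t size  = 1UL << 30;   /* 1 GiB */

    /* Allocation is simply bound to the chosen node; to the application this
     * is just memory from an allocator, nothing CXL-specific. */
    uint8_t *buf = numa_alloc_onnode(size, cxl_node);
    if (!buf) {
        perror("numa_alloc_onnode");
        return 1;
    }

    /* Plain load/store traffic: the CPU issues the same cacheable accesses it
     * would issue to direct-attached DDR; CXL.mem carries them instead. */
    memset(buf, 0xA5, size);
    printf("first byte on node %d: 0x%02x\n", cxl_node, buf[0]);

    numa_free(buf, size);
    return 0;
}
```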


NOTE: just to point out that, unlike the prototype announced in May 2021, which still relied on an FPGA, the final product will use, as the last quote mentions, a dedicated ASIC (a purpose-built "processor").


Beyond that, and as mentioned on the previous page, Samsung is at the front of the pack when it comes to standing out in the server market. It does not yet have PIM (Processing-in-Memory) ready to launch, but it has already announced the 2nd generation of the SmartSSD (which uses the new Xilinx/AMD Versal ACAP FPGA), news I posted in the SSD topic.

Let's hope there are enough PCIe lanes for all of this :berlusca:
 
Right, a while back a reference to 96 GB showed up in some post about servers and Nemesis11 went :n1qshok:
I think it was in the Meteor Lake Mobile spec leak, where a limit of 96 GB of RAM shows up when using DDR5 (but 64 GB with LPDDR5), which at first glance is rather bizarre.

https://forum.zwame.pt/threads/intel-meteor-lake-2023.1068552/page-3#post-17520692

With these DIMMs appearing, even if they are RDIMMs, that explains it. They should also show up as UDIMMs, SODIMMs, etc. :)
 

QCT Demos Astera Labs Leo CXL Memory Expansion on Intel Sapphire Rapids at SC22​


At SC22, QCT had an interesting demo on the show floor. It was showing off its upcoming 4th Generation Intel Xeon Scalable, codenamed “Sapphire Rapids” platform with a new technology. That technology is the Astera Labs CXL memory expansion card that STH is testing.
The new trick was really CXL.
Here is the QCT riser with the dual-slot Astera Labs Leo development board.
QCT-QuantaGrid-D54Q-2U-Astera-Labs-Intel-Sapphire-Rapids-Demo-at-SC22-4.jpg

Here is a look at the board installed with DIMMs.
QCT-QuantaGrid-D54Q-2U-Astera-Labs-Intel-Sapphire-Rapids-Demo-at-SC22-3.jpg


Astera-Labs-Leo-CXL-Memory-Expansion-Card-with-DIMMs-in-STH-Studio-Cover.jpg

Astera Labs Leo CXL Memory Expansion Card With DIMMs In STH Studio Cover

At SC22, QCT was showing this setup and that the system was running at around 95% of the performance of locally attached DDR5 in the 4th Gen Intel Xeon platform.
QCT-QuantaGrid-D54Q-2U-Astera-Labs-Intel-Sapphire-Rapids-Demo-at-SC22-1.jpg

QCT was also showing that the Leo CXL card shows up as its own NUMA node without CPU cores attached, and has around the same latency as accessing memory on the second socket.
QCT-QuantaGrid-D54Q-2U-Astera-Labs-Intel-Sapphire-Rapids-Demo-at-SC22-2.jpg

https://www.servethehome.com/qct-de...y-expansion-on-intel-sapphire-rapids-at-sc22/
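Out of curiosity, a rough sketch of how one could check the "same latency as the second socket" observation on a Linux box. Node numbers are assumptions (node 0 = local socket, node 1 = the Leo expander or the remote socket, depending on the machine), and this is obviously not STH's or QCT's actual test:

```c
/* Rough pointer-chase latency probe: build a chain on a given NUMA node,
 * follow it with dependent loads and report ns per access.
 * Check `numactl --hardware` first and run pinned to the local socket,
 * e.g. `numactl --cpunodebind=0 ./chase`.
 * Build with: gcc -O2 chase.c -o chase -lnuma */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>

#define ENTRIES (1 << 24)   /* 16M slots * 8 B = 128 MiB, well past the LLC */
#define STEPS   (1 << 24)

static double chase_ns(int node)
{
    size_t bytes = (size_t)ENTRIES * sizeof(uint64_t);
    uint64_t *chain = numa_alloc_onnode(bytes, node);
    if (!chain) { perror("numa_alloc_onnode"); exit(1); }

    /* Sattolo shuffle: one big cycle, so the hardware prefetcher cannot help. */
    for (uint64_t i = 0; i < ENTRIES; i++) chain[i] = i;
    for (uint64_t i = ENTRIES - 1; i > 0; i--) {
        uint64_t r = ((uint64_t)rand() << 16) ^ (uint64_t)rand();
        uint64_t j = r % i;
        uint64_t t = chain[i]; chain[i] = chain[j]; chain[j] = t;
    }

    struct timespec t0, t1;
    volatile uint64_t idx = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (uint64_t s = 0; s < STEPS; s++)   /* dependent loads, one at a time */
        idx = chain[idx];
    clock_gettime(CLOCK_MONOTONIC, &t1);

    numa_free(chain, bytes);
    return ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / STEPS;
}

int main(void)
{
    if (numa_available() < 0) { fprintf(stderr, "NUMA not available\n"); return 1; }
    printf("node 0 (local DDR5)  : %.1f ns/access\n", chase_ns(0));
    printf("node 1 (CXL expander): %.1f ns/access\n", chase_ns(1));
    return 0;
}
```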
 

Patriot Teases First SMI-Powered PCIe 5.0 SSD, New CXL DDR5 Card​



We also spotted Adata's new CXL 1.1 memory module, which can come packing up to 512GB of DDR5 memory that communicates over a PCIe 5.0 x4 bus. This module comes in the E3.S form factor, so it can plug into arrangements similar to the 2.5" NVMe drive bays you see on the front of a server or into custom-built backplanes inside a separate chassis.
A single RISC-V powered Montage MXC (M88MX5891) CXL memory expander ties the banks of DDR5 memory together, allowing 32, 64, 128, 256, or 512GB of DRAM to be placed on a single device that is roughly the size of a 2.5" U.2 SSD. The controller supports the CXL 1.1 and 2.0 RAS spec, with CXL.mem and CXL.io protocols on the menu for memory expansion.
https://www.tomshardware.com/news/p...tm_source=twitter.com&utm_campaign=socialflow
 
@Nemesis11 Apparently Intel's MCR-DIMM, which showed up in the roadmap update from a few days ago, is going to get a JEDEC standard called MRDIMM

Robert Hormuth

Corporate Vice President, Architecture and Strategy, Data Center Solutions Group at AMD


MRDIMM Gen 2 looks interesting


SK Hynix had announced the MCR DIMM last December

SK hynix Inc. (or "the company", www.skhynix.com) announced today that it has developed working samples of the DDR5* Multiplexer Combined Ranks (MCR) Dual In-line Memory Module, the world's fastest server DRAM product. The new product has been confirmed to operate at a data rate of at least 8 Gbps, at least 80% faster than the 4.8 Gbps of existing DDR5 products.

* Buffer: A component that optimizes signal transmission performance between DRAM and CPU. Mainly installed onto modules for servers requiring high performance and reliability

By enabling simultaneous operation of two ranks, the MCR DIMM allows 128 bytes of data to be transferred to the CPU at once, compared with the 64 bytes fetched from a conventional DRAM module. This increase in the amount of data sent to the CPU each time supports a data rate of at least 8 Gbps, twice as fast as a single DRAM.

A close collaboration with business partners Intel and Renesas was key to success. The three companies worked together and cooperated throughout the process from the product design to verification.
https://www.prnewswire.com/news-rel...s-fastest-server-memory-module-301697691.html
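A quick back-of-the-envelope on what that buys per 64-bit channel, using the nominal figures from the quote (peak theoretical numbers, not measurements):

```c
/* Peak theoretical bandwidth of one 64-bit host DRAM channel:
 * bytes per transfer x transfers per second. Figures are the nominal ones
 * from the SK hynix quote (4.8 GT/s DDR5 vs. a minimum 8 GT/s MCR DIMM). */
#include <stdio.h>

int main(void)
{
    const double bus_bytes = 8.0;   /* 64-bit channel = 8 bytes per transfer */
    const double ddr5_gtps = 4.8;   /* standard DDR5-4800                    */
    const double mcr_gtps  = 8.0;   /* MCR DIMM: two ranks time-multiplexed  */

    printf("DDR5-4800 : %.1f GB/s\n", bus_bytes * ddr5_gtps);  /* 38.4 GB/s */
    printf("MCR DIMM  : %.1f GB/s\n", bus_bytes * mcr_gtps);   /* 64.0 GB/s */

    /* 64 / 38.4 = ~1.67x at the pins; each rank individually runs at roughly
     * half the interface rate, which is where the quote's "twice as fast as
     * a single DRAM" comes from, and the "at least 80% faster" presumably
     * refers to data rates above the 8 GT/s minimum. */
    return 0;
}
```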
 
Quite interesting. What I doubt is that something similar will show up in the consumer market anytime soon, and it would actually come in quite handy.
@Nemesis11 Apparently Intel's MCR-DIMM, which showed up in the roadmap update from a few days ago, is going to get a JEDEC standard called MRDIMM
It seems it is not a "direct" carry-over. Apparently AMD was also developing a similar technology (HB-DIMM) that had not yet been announced publicly, and JEDEC's MRDIMM is a merger of MCR-DIMM (SK Hynix + Intel) with HB-DIMM (AMD).
Not really. My understanding is both AMD and Intel were working on very similar proposals, Intel with MCR-DIMM and AMD with HB-DIMM. Both submitted to JEDEC for standardization. The final, converged JEDEC standard is called MRDIMM.
Both are extremely similar conceptually.
The core idea is identical: time-multiplex 2 sets of 64-bit data, one from each rank, using a bunch of data buffers with built-in multiplexers and deliver them over the 64-bit host DRAM channel at twice the data rate.
Differences relate to implementation details like single or dual-die packages, module height, power, etc. To be clear, the info I've seen doesn't go far back enough for me to discern if one was derived from the other in the first place.
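Purely as a software toy to illustrate the multiplexing idea described in the quote (nothing like the real buffer silicon): each internal cycle the data buffer takes one 64-bit beat from each rank and puts both on the host channel, so the same 64-bit bus runs at twice the transfer rate.

```c
/* Toy model of the MCR/MRDIMM idea from the quote: per internal cycle the
 * data buffer takes one 64-bit beat from each rank and emits two beats on
 * the host channel, i.e. the same 64-bit bus at twice the transfer rate.
 * Conceptual illustration only, not a model of the actual buffer. */
#include <stdio.h>
#include <stdint.h>

#define BEATS 4   /* beats produced by each rank in this toy example */

int main(void)
{
    uint64_t rank0[BEATS] = {0xA0, 0xA1, 0xA2, 0xA3};  /* data from rank 0 */
    uint64_t rank1[BEATS] = {0xB0, 0xB1, 0xB2, 0xB3};  /* data from rank 1 */
    uint64_t host_channel[2 * BEATS];                  /* double-rate output */

    /* One internal cycle -> two host-channel transfers (rank 0, then rank 1). */
    for (int cycle = 0; cycle < BEATS; cycle++) {
        host_channel[2 * cycle]     = rank0[cycle];
        host_channel[2 * cycle + 1] = rank1[cycle];
    }

    for (int t = 0; t < 2 * BEATS; t++)
        printf("transfer %d: 0x%02llx\n", t, (unsigned long long)host_channel[t]);
    return 0;
}
```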
 

Samsung Develops Industry’s First CXL DRAM Supporting CXL 2.0​


Samsung Electronics, a world leader in advanced semiconductor technology, today announced its development of the industry’s first 128-gigabyte (GB) DRAM to support Compute Express Link™ (CXL™) 2.0. Samsung worked closely with Intel on this landmark advancement on an Intel® Xeon® platform.
The new CXL DRAM supports the PCIe 5.0 interface (x8 lanes) and provides bandwidth of up to 35GB per second.
https://news.samsung.com/global/samsung-develops-industrys-first-cxl-dram-supporting-cxl-2-0
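For context on the 35GB/s figure, a rough ceiling for a PCIe 5.0 x8 link, assuming 32 GT/s per lane and 128b/130b encoding (how Samsung's number aggregates read and write traffic is not stated in the release):

```c
/* Rough theoretical ceiling for a PCIe 5.0 x8 link, per direction.
 * Assumes 32 GT/s per lane and 128b/130b line encoding; protocol overhead
 * (TLP/DLLP headers, flow control) shaves a few percent more off. */
#include <stdio.h>

int main(void)
{
    const double gt_per_lane = 32.0;          /* PCIe 5.0 raw rate per lane */
    const double encoding    = 128.0 / 130.0; /* 128b/130b */
    const int    lanes       = 8;

    double gbytes = gt_per_lane * encoding * lanes / 8.0;  /* bits -> bytes */
    printf("PCIe 5.0 x%d ceiling: ~%.1f GB/s per direction\n", lanes, gbytes);
    /* ~31.5 GB/s per direction, so a quoted 35 GB/s presumably mixes read
     * and write traffic or reflects a specific access pattern. */
    return 0;
}
```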
 