
  • S_x96x_S

    addikt

    "A closer look at Intel and AMD's different approaches to gluing together CPUs"
    - Epycs or Xeons, more cores = more silicon, and it only gets more complex from here
    - While Intel eventually saw the wisdom of AMD's chiplet strategy, its approach couldn't be more different.
    https://www.theregister.com/2024/10/24/intel_amd_packaging/

    AMD:
    "Placing the memory controllers on the I/O die does come with some pros and cons. On the upside, this means that memory bandwidth, for the most part, scales independently of core count. The downside is potentially higher memory and cache access latencies for certain workloads. We emphasize "potentially" as this kind of thing is highly workload dependent."

    AMD:
    "Perhaps the bigger question is where AMD will take its chiplet architecture next. Looking at AMD's 128-core Turin processors, there's not a lot of room left on the package for more silicon, but the House of Zen still has a few options to choose from.
    First, AMD could simply opt for a bigger package to make room for additional chiplets. Alternatively, the chipmaker could also pack more cores onto a smaller die. However, we suspect that AMD's sixth-gen Epycs could actually end up looking a lot more like its Instinct MI300-series accelerators.
    As you may recall, launched alongside the MI300X GPU was an APU that swapped two of the chip's CDNA3 tiles for a trio of CCDs with 24 Zen 4 cores between them. These compute tiles are stacked atop four I/O dies and are connected to a bank of eight HBM3 modules.
    Now, again, this is just speculation, but it's not hard to imagine AMD doing something similar, swapping out all of that memory and the GPU dies for additional CCDs instead. Such a design would conceivably benefit from higher bandwidth and lower latencies for die-to-die communications too.
    Whether this will actually play out, only time will tell. We don't expect AMD's 6th-gen Epycs to arrive until late 2026."

    INTEL:
    "Intel's I/O dies are also quite a bit skinnier and house a combination of PCIe, CXL, and UPI links for communications with storage, peripherals, and other sockets. Alongside these, we also find a host of accelerators for direct stream (DSA), in-memory analytics (IAA), encrypt/decrypt (QAT), and load balancing.
    We're told that the placement of accelerators on the I/O die was done in part to place them closer to the data as it streams in and out of the chip."
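
    As an aside, on Linux the DSA and IAA instances are the ones surfaced by the idxd driver, so a quick way to see whether a box exposes them is simply to list that driver's sysfs bus. The sketch below is a hypothetical illustration: the /sys/bus/dsa/devices path assumes a recent kernel with the driver bound (QAT and the DLB load balancer sit under separate drivers), and real work-queue configuration is normally done with accel-config rather than by hand.

    ```c
    /* Minimal sketch: list accelerator instances (dsa0, iax1, wq*.*, ...)
     * that the Linux idxd driver exposes on its sysfs bus, if any. */
    #include <dirent.h>
    #include <stdio.h>

    int main(void)
    {
        const char *path = "/sys/bus/dsa/devices";   /* assumption: idxd driver loaded */
        DIR *dir = opendir(path);
        if (!dir) {
            perror(path);            /* no such bus: driver absent or no devices */
            return 1;
        }

        struct dirent *ent;
        while ((ent = readdir(dir)) != NULL) {
            if (ent->d_name[0] == '.')
                continue;            /* skip "." and ".." */
            printf("%s\n", ent->d_name);
        }
        closedir(dir);
        return 0;
    }
    ```

    On a machine without these accelerators (or with the driver unbound) it just prints an error and exits, which by itself tells you whether the on-package accelerators are usable from that OS image.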

    "Going off the renderings intel showed off earlier this year, Clearwater Forest could use up to 12 compute dies per package. The use of silicon interposers is by no means new and offers a number of benefits including higher chip-to-chip bandwidth and lower latencies than you'd typically see in an organic substrate. That's quite the departure from the pair of 144-core compute dies found on Intel's highest core count Sierra Forest parts.
