Everything we know about the GTX 1180, Nvidia's next graphics card - General Hangout & Discussions - InviteHawk - Your Only Source for Free Torrent Invites

Buy, Sell, Trade or Find Free Torrent Invites for Private Torrent Trackers Such As redacted, blutopia, losslessclub, femdomcult, filelist, Chdbits, Uhdbits, empornium, iptorrents, hdbits, gazellegames, animebytes, privatehd, myspleen, torrentleech, morethantv, bibliotik, alpharatio, blady, passthepopcorn, brokenstones, pornbay, cgpeers, cinemageddon, broadcasthenet, learnbits, torrentseeds, beyondhd, cinemaz, u2.dmhy, Karagarga, PTerclub, Nyaa.si, Polishtracker etc.


Nvidia's next-gen GPU architecture is likely coming in August.

It's been a while since Nvidia introduced its last new graphics architecture for gaming GPUs—more than two years to be precise. That last architecture was Pascal, and it has powered everything from the top-tier GTX 1080 and GTX 1080 Ti to the entry-level GTX 1050 and GT 1030. The next generation of Nvidia graphics cards is finally approaching, using the Turing architecture. Here's what we know about the GTX 1180, what we expect in terms of price, specs, and release date, and the winding path we've traveled between Pascal and Turing.

The things we 'know' about GTX 1180

The list of things that we know—that we're absolutely certain are correct—can basically be summarized in a single word: nothing. Nvidia has been extremely tight-lipped about its future GPUs this round, and we're not even sure about the name. Rumors of GTX 1180 and GTX 2080 have been swirling for months, though it looks like the 1180 is going to win out as the official name. We're going to stick with 1180 for the remainder of this piece and are confident enough in the name that it's ensconced in a cheap Photoshop above. (Expect a hasty update if the winds of change start gusting.) We're also not sure what the codename for these new chips will be—GT104 would be an easy choice, but Nvidia used GT part names with the Tesla architecture back in the GTX 280 days (2008-2009). Those were all GT200 labels, though, so GT100 could still happen.

While Nvidia hasn't officially revealed anything, we're 99 percent certain on three things. First, the next generation architecture is codenamed Turing. Second, it will be manufactured using TSMC's 12nm FinFET process. (We may see some Turing GPUs manufactured by Samsung later, as was the case with the GTX 1050/1050 Ti and GT 1030 Pascal parts, but the initial parts will come from TSMC.) Third, the first Turing graphics cards will use GDDR6 memory—not HBM2, due to costs and other factors, but GDDR6 will deliver higher performance than the current GDDR5X. Let's hit those last two in a bit more detail.

What does the move to 12nm from 16nm mean in practice? Various sources indicate TSMC's 12nm is more of a refinement and tweak to the existing 16nm rather than a true reduction in feature sizes. In that sense, 12nm is more of a marketing term than a true die shrink, but optimizations to the process technology over the past two years should help improve clockspeeds, chip density, and power use—the holy trinity of faster, smaller, and cooler running chips.

GDDR6 continues down the path graphics memory has traveled from GDDR5 and GDDR5X. Over its lifetime, GDDR5 has gone from 3.6 GT/s (that's giga-transfers per second, though in practice it's almost the same as Gbit/s) with AMD's HD 4870 back in 2008, to 9 GT/s with the GTX 1060 6GB. GDDR5X reaches 10-14 GT/s by sending more data per clock rather than running at higher clockspeeds. Where the GDDR5 on the GTX 1070 has a base clock of 2002MHz (8,008 MT/s effective), the GDDR5X on the GTX 1080 has a base clock of 1251MHz and sends twice as much data per clock (10,008 MT/s effective). Micron ended up being the only company to produce GDDR5X, with Nvidia as the only customer, shipping it at up to 11 GT/s. GDDR6 will see far broader support, with Micron, Samsung, and SK-Hynix all participating. GDDR6 has an official target range of 14-16 GT/s, and Micron is already showing 18 GT/s modules. GTX 1180 cards are likely to use the faster GDDR6, but the exact clockspeeds remain a question mark.
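Those effective rates all fall out of one simple relationship: the base memory clock times the number of transfers per clock (GDDR5 moves four bits per pin per command clock, GDDR5X eight). A quick back-of-envelope check in Python—the multipliers are the standard data rates for each memory type, not anything specific to these cards:

```python
# Effective memory data rate from base clock and transfers per clock.
# GDDR5 transfers 4 bits per pin per command clock; GDDR5X transfers 8.

def effective_rate_mts(base_clock_mhz: float, transfers_per_clock: int) -> float:
    """Return the effective data rate in MT/s."""
    return base_clock_mhz * transfers_per_clock

# GTX 1070 (GDDR5): 2002MHz base clock
print(effective_rate_mts(2002, 4))   # 8008 MT/s effective
# GTX 1080 (GDDR5X): 1251MHz base clock, twice the data per clock
print(effective_rate_mts(1251, 8))   # 10008 MT/s effective
```

This is why GDDR5X can run a lower base clock than GDDR5 and still come out well ahead on effective throughput.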

Expectations for GTX 1180

Moving on to what we expect from Turing and the GTX 1180, the list grows substantially. Obviously, performance needs to be better than the existing GPUs, and at lower prices for the same level of performance. That doesn't mean we'll see insane performance at low prices, but at the very least we should see GTX 1080 Ti levels of performance fall into the $500-$600 range. Nvidia has multiple paths to delivering higher performance than the GTX 1080 Ti, and which one GTX 1180 takes isn't yet known, so here are the options.

First, Nvidia can go with a larger chip and more cores. Originally slated to arrive last year, Volta morphed into a product that will only see the light of day in supercomputing, machine learning, and professional markets. Volta is incredibly potent, with the Titan V besting the GTX 1080 Ti by up to 30 percent, but it also includes a lot of technology that is of marginal use for gamers—specifically, games don't need double-precision FP64, and they don't need the Tensor cores. The easiest solution to envision is that Turing initially ships with a design similar to GV100, but without the Tensor cores or FP64 units—up to 5,376 CUDA cores would certainly give Turing GPUs a shot in the arm.

More likely is that Turing will be a similar rollout to Pascal. The first GTX 1180 cards will launch this year, but they won't be the full-fat version of Turing. Instead, we'll get GPUs that look a lot like the current GP102, meaning up to 3840 CUDA cores, only with improved efficiency and features and slightly higher clockspeeds. Then in another 9-12 months, we'll get Big Turing and GTX 1180 Ti, with more cores, more memory, and more performance.

But Nvidia isn't locked into any specific core count. If Turing sticks with the 128 CUDA cores per SM, which seems likely, the number of cores present will determine clockspeeds. The Titan V runs 5120 cores at up to 1455MHz, and with a refined 12nm process and changes to the underlying architecture, Turing could run 5120 CUDA cores at 1.5-1.7GHz. Or Nvidia could go with fewer cores and higher clocks, with GTX 1180 potentially being the first Nvidia GPU to ship with stock clocks above 2GHz. But regardless of how Nvidia gets there, we expect performance to be around 25 percent better than the GTX 1080 Ti FE, just like the GTX 1080 was around 25 percent faster than the GTX 980 Ti.
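Those core-count and clockspeed combinations can be compared with a standard theoretical-throughput estimate: peak FP32 performance is cores × 2 (one fused multiply-add, or two floating-point operations, per core per clock) × clockspeed. A rough sketch in Python, using the Titan V figures from above plus two hypothetical Turing configurations—the Turing numbers are assumptions for illustration, not known specs:

```python
# Theoretical peak FP32 throughput: cores * 2 FLOPs (one FMA) * clock.

def peak_tflops(cuda_cores: int, clock_ghz: float) -> float:
    """Peak single-precision TFLOPS, assuming one FMA (2 FLOPs) per core per clock."""
    return cuda_cores * 2 * clock_ghz / 1000

print(round(peak_tflops(5120, 1.455), 1))  # Titan V at boost: 14.9 TFLOPS
# Hypothetical Turing configurations (assumed, not confirmed):
print(round(peak_tflops(5120, 1.6), 1))    # 16.4 TFLOPS: many cores, moderate clocks
print(round(peak_tflops(3840, 2.0), 1))    # 15.4 TFLOPS: fewer cores, 2GHz+ clocks
```

Either route lands in the same theoretical ballpark, which is why the core-count question matters less than it might seem.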

What about the GDDR6—how much VRAM will GTX 1180 have, and how fast will it clock? The safe bet is 8GB, though 12GB and 16GB are also possible. GDDR6 is officially set to run at 14-16 GT/s, but Micron has already talked about 18 GT/s as well. With a 256-bit interface, that would give GTX 1180 anywhere from 448GB/s to 576GB/s of bandwidth, and improvements in the architecture could allow Turing to make better use of the available bandwidth. My bet would be for 16 GT/s GDDR6, with 512GB/s, since that should be available from multiple manufacturers.

More VRAM is an outside possibility, but having 16GB of VRAM on a graphics card is a lot like having 32GB of system memory: only professional applications are likely to use it. Even 8GB of VRAM is mostly overkill for games right now, and with consoles continuing to ship with 8GB, that capacity will remain the major target for many years. Plus, going with 8GB on the GTX 1180 leaves the door open for a 12-24GB 1180 Ti and/or Titan card in the future. 12GB with a 384-bit interface could deliver 672-864GB/s of bandwidth, depending on where it falls in the 14-18 GT/s spectrum. 24GB would be almost purely for content creators and professionals, as well as supercomputing—something the Tesla V100 already addresses better.
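All of the bandwidth figures in the last two paragraphs come from the same formula: bus width (converted from bits to bytes) times the transfer rate. A quick check in Python, using the 256-bit and 384-bit interfaces discussed above:

```python
# Peak memory bandwidth = bus width in bytes * transfer rate in GT/s.

def bandwidth_gbs(bus_width_bits: int, gt_per_sec: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * gt_per_sec

# 256-bit interface (the likely GTX 1180 configuration):
for rate in (14, 16, 18):
    print(f"256-bit @ {rate} GT/s: {bandwidth_gbs(256, rate):.0f} GB/s")
# prints 448, 512, and 576 GB/s

# 384-bit interface (a hypothetical 1180 Ti / Titan):
for rate in (14, 18):
    print(f"384-bit @ {rate} GT/s: {bandwidth_gbs(384, rate):.0f} GB/s")
# prints 672 and 864 GB/s
```

For comparison, the same formula gives the GTX 1080 Ti (352-bit, 11 GT/s GDDR5X) its 484GB/s, so even the low end of the GDDR6 range on a narrower bus is competitive.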

Power requirements are almost certainly going to be higher this round than the GTX 1070/1080, mostly because the process technology hasn't changed enough during the past two years. 250W cards are relatively common these days, with the GTX 1080 using 180W, so the GTX 1180 will probably be in the 200-220W range. 8-pin plus 6-pin PCIe power connections will likely come on the reference (aka Founders Edition) models, and dual 8-pin connectors will ship on enthusiast cards.

And then there's the price, where there's plenty of uncertainty. The graphics card shortage caused by memory supply constraints and cryptocurrency mining demand is basically over, and lots of GPUs are selling at or below MSRP now. I've heard rumors of anywhere from $600 to $1,000 for the GTX 1180, but the higher price rumors mostly came when you couldn't even find a GTX 1080 Ti in stock. Assuming the GTX 1180 hits performance expectations, the initial cards should officially launch at $699-$749 for Founders Edition models, with custom cards coming a month or two later at around $100 less. There's a chance Nvidia might increase initial prices to $799 or more, however—it can always reduce prices in the future if necessary.

When is the GTX 1180 release date?

Previous rumors have pegged the GTX 1180 release date for late July, but current indications are that it might not happen until mid-to-late August. Nvidia was initially on the Hot Chips agenda to discuss its next-generation architecture on August 20 but has since been scrubbed from the schedule. Don't be surprised if the removal was in name only, however, and that Nvidia shows up at Hot Chips for a presentation as initially planned.

What about the GTX 1170?

Nvidia has a well-trodden path by now when it comes to graphics card launches. It starts with a high-end card like the 780/980/1080, and either simultaneously or shortly afterward releases the 'sensible alternative' GTX 770/970/1070. With the 900-series, both parts launched at the same time, while the 700-series had a one-week difference and the 10-series had a two-week gap between the parts. I think the slightly staggered rollout will happen with the 11-series as well, so 2-4 weeks between the 1180 and 1170 launches.

As for specs, again Nvidia's standard practice is to offer a trimmed down version of the same GPU core, essentially harvesting chips that can't work as the full 1180 and selling them as an 1170. The idea is to end up with performance about 20-25 percent slower and a price that's 30-35 percent lower. If the GTX 1180 has 3584-3840 cores, GTX 1170 will have 2560-2880 cores; if the high-end part goes with 5120 cores, expect closer to 3840 cores in the GTX 1170. Base and boost clocks are also traditionally slightly lower on the 70 model cards.
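If those ratios hold, the x70 card is, as usual, the better value: roughly 75-80 percent of the performance for 65-70 percent of the price. A rough Python check of the performance-per-dollar gain—the fractions below are this article's rules of thumb, not confirmed specs:

```python
# Relative value of a cut-down x70 card versus the full x80 part.

def relative_value(perf_fraction: float, price_fraction: float) -> float:
    """Performance per dollar relative to the full part (1.0 = identical value)."""
    return perf_fraction / price_fraction

# 20-25 percent slower at 30-35 percent cheaper:
print(round(relative_value(0.80, 0.70), 2))  # 1.14: ~14% more performance per dollar
print(round(relative_value(0.75, 0.65), 2))  # 1.15: ~15% more performance per dollar
```

That consistent 10-15 percent value edge is why the 970 and 1070 were the enthusiast favorites of their generations.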

In terms of memory, GTX 1170 might use GDDR5/GDDR5X memory, but GDDR6 seems more likely (so that the boards can be the same for the 1170 and 1180). Slightly lower clockspeeds for the RAM are also typical, so maybe 12-14 GT/s instead of 14-16 GT/s. The new parts won't have less memory than the current generation, so 8GB is the most likely amount for VRAM.

Thanks to the reduced core count and slightly lower GPU and RAM clockspeeds, the GTX 1170 will also use less power. A single 8-pin PEG connector might be sufficient, though 8-pin plus 6-pin seems more likely. Power use in the 180-220W range is where this part is likely to land.

Will there be a GTX 1180 Ti or Titan card?

Naturally, but not any time soon. The Titan V exists, and while it's in a different category than previous GTX Titan offerings—notice the lack of GTX in the name, for example—Nvidia isn't likely to retire it any time soon. For professional users, the Volta GV100 remains the biggest and fastest GPU Nvidia makes, and Turing is primarily going after gamers. That means GT104 (or whatever Nvidia calls it) as the high-end Turing part initially, with the potential for GT100/GT102 in the future.

The first GTX Titan launched before the GTX 780, and it included high-end FP64 support. The Titan Black and Titan Xp variants came after the 780 Ti and 1080 Ti, while the Titan X and Titan X (Pascal) came before the 980 Ti and 1080 Ti, respectively. The important distinction is that whenever a Titan card came before the Ti card, Nvidia already had a larger GPU shipping in the professional space. The Titan V already uses Nvidia's largest GV100 processor, and it includes HBM2. Will Nvidia have GT100 and GT104 launch at the same time? I wouldn't count on it, but if a new Titan does show up, expect prices quite a bit higher than the current 1080 Ti (think $999 or more).