- The dark web represents only a fraction of the rest of the internet
- AMD Radeon RX 640 re-branding for Polaris GPUs hinted in new drivers
- Deutsche Telekom and Ericsson push microwave link over 100Gbps
- The best cheap 4K TV deals and sale prices in the US - May 2019
- Ryzen 3000-Series CPUs may support even faster memory at 5,000MHz
- The new Apple TV app is here – a step towards a Netflix rival, or something else?
- Lenovo adds new solutions and devices to its ThinkIoT ecosystem
- Intel Cascade Lake-X release date, news and rumors
- Nvidia Volta release date, news, and features
- Intel Skylake-X release date, news and features
- Nvidia Turing release date, news and features
- Disc-less Xbox One S: everything you need to know about the all digital console
- Intel Lakefield release date, news and rumors
- AMD Navi release date, news and rumors
- Intel Ice Lake release date, news and features
- AMD Vega II release date, news and rumors
- AMD Vega release date, news and features: everything you need to know
- Intel Cannon Lake release date, news, and rumors
- AMD Ryzen Threadripper 3rd Generation release date, news, and rumors
- Intel Whiskey Lake release date, news and rumors
The dark web represents only a fraction of the rest of the internet

Posted: 13 May 2019 12:45 PM PDT

The dark web is often portrayed as a hidden corner of the internet where cybercriminals exchange information and plan cyberattacks, but how does it compare to the rest of the internet? To answer this question, Recorded Future analysed 260,000 onion pages to better understand the reach of the Tor network, starting from a set of onion sites pulled from public lists and its own content. The threat intelligence company quickly realized that the dark web is actually tiny when compared to the “clear web” used by consumers and businesses. Recorded Future's crawling found 55,828 different onion domains, but only 8,416 were observed to be live on the Tor network during its investigation.
The company's findings disprove the misconception that the relationship between the surface web and dark web has an iceberg shape, with the surface web being the small portion of the World Wide Web above the water and the dark web below accounting for the majority. The opposite is actually true: the surface web is vastly larger than the dark web. There are an estimated 200 million unique and active surface web domains, which means that the current live onion site network makes up less than 0.005 percent of the size of the World Wide Web.

The dark web

Recorded Future also found that the onion sites that make up the dark web are disorganized, unreliable and plagued with scams. One popular scam, typosquatting, has been used to successfully defraud users of over 400 popular onion sites, earning thousands of dollars in Bitcoin from victims. Uptime is also an issue on the dark web, with popular onion sites well below the 99.999 percent or “five nines” availability expected from reputable companies on the surface web. Onion sites also regularly disappear permanently, with or without explanation.

When it comes to the languages used on the dark web, onion sites are far more homogeneous than the surface web. Recorded Future observed that 86 percent of onion sites have English as their primary language, followed by Russian at 2.8 percent and German at 1.6 percent. On the surface web, by contrast, researchers report that English is the top language at only 54 percent. By better understanding the origin and nature of the dark web, common misconceptions surrounding it can be dispelled, while greater awareness can help consumers avoid falling victim to the scams and other illegal activity it contains.
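The headline ratio is easy to sanity-check. A quick back-of-the-envelope calculation, using the 8,416 live onion domains and the estimated 200 million surface-web domains quoted above, confirms the "less than 0.005 percent" figure:

```python
# Sanity-checking the "less than 0.005 percent" claim with the figures above.
live_onion_domains = 8_416           # live onion domains observed by Recorded Future
surface_web_domains = 200_000_000    # estimated unique, active surface-web domains

share_percent = live_onion_domains / surface_web_domains * 100
print(f"Dark web share of the World Wide Web: {share_percent:.4f}%")
# → Dark web share of the World Wide Web: 0.0042%
```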
AMD Radeon RX 640 re-branding for Polaris GPUs hinted in new drivers

Posted: 13 May 2019 12:33 PM PDT

While we continue to wait for AMD to push graphics processors forward with Navi and its 7-nanometer architecture, a leak suggests that may not be the only thing AMD is preparing. A new graphics card name was discovered in AMD's Adrenalin 19.4.3 graphics drivers by a TechPowerUp reader. The newly identified card is a Radeon RX 640, which would put it on the lower end of the performance spectrum per AMD's naming convention. But this also suggests it's neither a Vega nor a Navi GPU. Given that it follows the naming scheme of AMD's Polaris-based GPUs, it's a safe bet this Radeon RX 640 could be an upcoming GPU that continues to use the Polaris architecture. That's not nearly as exciting for tech enthusiasts as the impending release of Navi, but it would make sense for the budget end of the market.
Making sense of it

The new Radeon RX 640 is likely going to be new in name alone. TechPowerUp speculates that the Radeon RX 640 would be a re-branding of the current RX 550X, which would be in keeping with what AMD has done in the past. Last year, AMD pulled a similar move with a refresh of its Polaris GPUs, adding an X suffix. This led to new graphics cards like the Radeon RX 550X that were fundamentally the same as their predecessors: the Radeon RX 500X series GPUs had the same number of stream processors and compute units, the same frequencies and the same underlying architecture as the RX 500 series before them. In other words, the RX 640 will be anything but new. What it should be, though, is affordable. It can likely find its place as an affordable dedicated graphics card in budget builds, letting AMD remain relevant in the budget section of the market while the Radeon VII and upcoming Navi graphics cards find their footing in the higher end of the market.
Via HotHardware
Deutsche Telekom and Ericsson push microwave link over 100Gbps

Posted: 13 May 2019 11:40 AM PDT

After demonstrating that it was possible to take wireless backhaul to 40Gbps earlier this year, Deutsche Telekom and Ericsson have returned to take a 1.5-kilometre link to 100Gbps. The trial was once again conducted at the Deutsche Telekom Service Center in Athens, and it utilized Ericsson's Mini-Link 6352 microwave solution. To achieve such improved speeds, the recent trial used 8x8 line-of-sight Multiple-Input Multiple-Output (MIMO) in a 2.5GHz channel within the 70 to 80GHz range. The telecom and network hardware manufacturer said that transmission rates were consistently above 100Gbps, with a peak speed of 140Gbps, which means the technology could be used in 5G networks.
Head of Product Area Networks at Ericsson, Par Narvinger, explained how microwave could be used to supplement fibre connections, or even as a replacement when fibre is unavailable, saying: "This trial signifies the successful establishment of true fibre capacities over the air using microwave. This means that microwave will be even more relevant for communications service providers in creating redundant networks as a back-up for fibre, or as a way of closing a fibre ring when fibre is not a viable solution. By carrying such high capacities, microwave further establishes itself as a key transport technology, capable of delivering the performance requirements of 5G."

Spectrum sharing

Back in February at MWC, Ericsson and Intel showcased simultaneous spectrum sharing between 4G and 5G. The software is able to support both mobile networks at the same time thanks to the “intelligent scheduling algorithms” developed by the two companies. New hardware is also not required, as the software can be activated on all Ericsson radio system equipment released since 2015. According to Ericsson and Intel, the split of simultaneous 4G and 5G capacity is adjusted every millisecond according to the active devices using the network. Ericsson Group president and CEO Borje Ekholm noted at an event in Singapore that security concerns surrounding networking hardware vendors such as Huawei are holding back the development and rollout of 5G networks.

Via ZDNet
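The quoted figures imply a remarkable spectral efficiency. Taking the 2.5GHz channel and eight spatial streams (8x8 line-of-sight MIMO) at face value, a rough sketch of the aggregate and per-stream efficiency looks like this; the per-stream split is our own illustration, not a figure Ericsson has published:

```python
# Implied spectral efficiency of the trial, from the figures quoted above.
channel_hz = 2.5e9   # 2.5GHz channel in the 70-80GHz range
streams = 8          # 8x8 line-of-sight MIMO -> eight spatial streams

for label, rate_bps in [("sustained", 100e9), ("peak", 140e9)]:
    se = rate_bps / channel_hz                  # aggregate bit/s/Hz
    print(f"{label}: {se:.0f} bit/s/Hz aggregate, {se / streams:.1f} bit/s/Hz per stream")
# → sustained: 40 bit/s/Hz aggregate, 5.0 bit/s/Hz per stream
# → peak: 56 bit/s/Hz aggregate, 7.0 bit/s/Hz per stream
```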
The best cheap 4K TV deals and sale prices in the US - May 2019

Posted: 13 May 2019 11:19 AM PDT

We've gone through top retailers such as Amazon and Walmart to find the best cheap TV sales and deals that are currently going on (we've also got a roundup of UK retailers). You can find Black Friday-like prices on a variety of best-selling 4K TVs from brands such as Samsung, Sony, LG and more. You'll find discounts on 4K TVs, smart TVs, different model sizes, and a variety of prices to fit all TV viewing needs. Whether you want a small TV with a price tag to match or something to show all the colors of the rainbow (and a few million more) with HDR, we've found plenty of options. Read on to find the TV you want at a great price!

TechRadar's cheap TV deal of the week

Cheap TV deals (40-49 inch)

Cheap TV deals (50-59 inch)

Cheap TV deals (60-85 inch)

More cheap TV sales

Not found the right cheap TV for you today? Or maybe you'd prefer to directly browse the TVs at your favourite retailers instead of our highlights of the best cheap TV deals? We're updating this page on a regular basis, so you may have better luck another day. If you want to take a look for yourself now though, here are the direct links to the full collection of TV deals at multiple stores. Learn more about Prime Day with our guide on Amazon Prime Day 2019: everything you need to know for the July deals event.
Ryzen 3000-Series CPUs may support even faster memory at 5,000MHz

Posted: 13 May 2019 10:43 AM PDT

AMD's Ryzen processors (CPUs) have always liked extra-fast memory (RAM), and it looks like that may continue to be the case with Ryzen 3000-Series processors as well. The upcoming CPUs will support up to DDR4-5000, according to a leak from @1usmus on Twitter. The leak comes from Yuri Bubly, a developer and AMD partner who has created a DRAM calculator specifically for Ryzen processors. This suggests the new generation of processors will be just as hungry for extra-fast RAM as previous generations.
A speedy, powerful future for Ryzen

This is just one more leak pointing to an extra high-performance future for Ryzen processors when they launch with AMD's new Zen 2 architecture. On top of the boosts they'll get from the move to a 7-nanometer process, the new Ryzen processors are now expected to come out with some serious core counts as well. Another recent leak suggested a 16-core Ryzen 3000-Series processor is coming, likely at the top of the lineup, with a 12-core model expected alongside it. That leak was of an engineering sample that may not have had the clock speeds the retail versions will have. Two 3000-Series Ryzen 9 processors have been suggested with 16 cores and a max speed of 5.1GHz on the highest-end version.

Combining 16-core Ryzen processors at 4.3GHz (5.1GHz boost) with even faster RAM than was available to previous generations means Ryzen will have that much more to offer for users looking to get the most out of their computers' processing power. The highest-end Ryzen CPU is sure to come at a premium, and DDR4-5000 will undoubtedly be pricey as well (if even available as a stock speed without overclocking). And there's the price of a new motherboard to factor in as well. But all this points to an even faster future for AMD, and may put even more pressure on Intel to move on from Coffee Lake.

Via Hot Hardware
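Faster memory matters because transfer rate translates directly into theoretical bandwidth. As a rough sketch, assuming a standard dual-channel setup with 64-bit channels and ignoring real-world timings, here is what DDR4-5000 would offer over common speeds today:

```python
# Theoretical peak bandwidth for dual-channel DDR4 at various transfer rates.
def dual_channel_bandwidth_gbs(mt_per_s: int) -> float:
    """Two 64-bit (8-byte) channels; returns peak GB/s."""
    return mt_per_s * 8 * 2 / 1000

for speed in (2666, 3200, 5000):
    print(f"DDR4-{speed}: {dual_channel_bandwidth_gbs(speed):.1f} GB/s")
# → DDR4-2666: 42.7 GB/s
# → DDR4-3200: 51.2 GB/s
# → DDR4-5000: 80.0 GB/s
```

In other words, DDR4-5000 would give roughly half again the peak bandwidth of the DDR4-3200 kits commonly paired with Ryzen today.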
The new Apple TV app is here – a step towards a Netflix rival, or something else?

Posted: 13 May 2019 10:05 AM PDT

Apple has unveiled big changes to its TV app, rolling out visual improvements and setting things up to integrate the hotly anticipated Apple TV Plus streaming service later this year. The app is now live on Apple TV devices, iPads, iPhones and iPods (assuming you're a good little Apple user and have updated recently...), as well as – somewhat remarkably, for those that haven't heard – on Samsung TVs. What you'll be getting is a new tile-based interface, a bundle of new 'channels' to watch, and new sections, including one dedicated to children.
Here we take a closer look at the new app – these are the key things you need to know about what you'll be getting if you're a user of Apple's service.

A new range of features

One of the key new features – for parents, at least – is the addition of the kids' portal. This is a new area, and one that's designed to help parents find the shows their little ones might actually enjoy. The portal is organized in a number of ways – you're greeted initially by a range of popular characters, meaning a child can more easily navigate to the shows they're interested in watching. The content is also grouped by age, meaning parents can be sure their kids are watching age-appropriate stuff, as well as thematically – so if your daughter is a fan of dragons, all the available shows featuring said fire-breathing creatures will be offered.

Channels are another big new play for Apple. Where something like Amazon Prime is featured on the service, users need to tap the app to access the experience, where they might be greeted by something that doesn't look quite the same as the user interface they've been used to on Apple TV. (Apple is offering recommendations to app developers on how to make their user experience more similar to its own, but can't mandate it.) With a billion devices out there running iOS and having access to the TV app, having a Channel on Apple TV is worthwhile, especially for the smaller brands. Apple is making a big deal of the fact that, for example, streaming customers of HBO will be able to download and watch Game of Thrones offline – whether that's a key thing for you remains to be seen, but HBO is arguably the biggest US brand on the service. Apple has other Channels featuring Smithsonian and Starz (which will be the launch Channels for the UK as well), but you can see that this is a bigger deal for these outlets than it is for Apple to be hosting them, so you can expect to see much more recognizable names coming in the future.
Another big change that Apple is making is surfacing content that you don't own, but which you can rent, buy or subscribe to. This is similar to what Amazon does on its Fire TV – and it can be rather irritating when you just want to know what you're 'allowed' to watch that evening, rather than seeing a movie you're excited by and having to decide whether you really want to pay for it. Hopefully this is something you'll be able to turn off, so that users can focus on the core experience.

A new way to navigate

As usual, you can navigate with your voice, thanks to Siri integration, but there's also a new way to get around the service. Saying something like 'Siri, show me what's up next' will take you to a new portal that contains everything you've been watching. So, whether it's something you've been viewing on BBC iPlayer or HBO, or a film you've tagged to watch from the iTunes library, you'll be presented with those options first. The tiled interface is much easier to use, with slick rectangular tiles that are easy to jump between – essentially, this is much like Netflix, with the categories listed vertically and content running horizontally across the screen. The focus here is on curation, and recommendations that Apple thinks you'd like to check out – as with apps and music, Apple has a dedicated team of editors who are curating what they consider to be the most relevant and trend-worthy shows for your eyes to binge on. They'll even sift through other on-demand apps to showcase the best content they think is going to excite users, although it's not clear whether Netflix is part of this – we didn't see any such shows in our demo, although Amazon Prime's The Grand Tour was part of the mix. You're also getting access to your iTunes catalogue here (obviously), with the relevant films available to stream or download in 4K HDR / Dolby Vision and Dolby Atmos if your device is capable.
If you're watching on your iPad or iPhone, get ready to be alerted when a new show from your favorite series becomes available, or when something you might like to watch has popped up. This is something Netflix, among other streaming services, has been doing for a while, so it's another indication that Apple is looking to challenge the established players.

A whole new world for Apple TV

Another key area that Apple is branching into is bringing its service to other platforms. In a similar way to allowing Android users access to Apple Music, the new Apple TV service will be available on all 2019 Samsung smart TVs, as well as selected models from the 2018 range. AirPlay 2 will also be supported on these platforms, meaning you'll be able to mirror your iPhone screen on your Samsung screen – we clearly live in more ecumenical times. Users of Fire TV, Roku, LG and Vizio TVs will be able to get the same AirPlay treatment later this year, but Samsung is going to be first out of the gate to offer Apple's suite of content. The experience won't be exactly the same – you'll only be able to access your iTunes library and Channels for now – but being able to play your Apple content elsewhere is a welcome development. Although there's no access to Apple TV Plus yet – the platform hasn't launched – the new layout and integration with other brands' devices shows that Apple wants to get as much market penetration for its TV app as possible, to encourage users to sign up to its streaming service when it lands. It's going to be a tough job for Apple to convince the world that it's a viable player alongside Amazon, Netflix and the rest, but by making its TV app more desirable for the vast range of iOS device users it currently serves, it's at least widening its scope.
Lenovo adds new solutions and devices to its ThinkIoT ecosystem

Posted: 13 May 2019 09:45 AM PDT

As part of its efforts to help make businesses smarter, Lenovo announced at its Accelerate business and partner event that it is bringing new solutions and devices to its ThinkIoT ecosystem. The company's new warehouse automation solution will help improve productivity and efficiency in the supply chain by combining skilled workers with advanced technology. For instance, a warehouse employee can identify whether packages are put on the right pallet in just 200 milliseconds by using a high-performance camera sensor to read a QR code on a sticker placed on each box. Lenovo's automated computer vision cashier prototype, which is currently being tested in a pilot program at its Beijing campus, could help transform the retail sector by using object recognition technology.
This new smart retail solution includes an integrated camera, processor and sensors which can be used by customers at brick-and-mortar stores to scan all of their items at once, eliminating the need to detect individual bar codes.

ThinkCentre Nano and Nano IoT

Lenovo also announced that two new devices, the ThinkCentre Nano and Nano IoT, would be joining the ThinkIoT ecosystem. The ThinkCentre M90n is the most compact commercial desktop series in the world at one-third the size of the company's ThinkCentre Tiny, and the device offers up to 30 percent better energy savings when compared to traditional desktops. The ThinkCentre M90n-1 can even draw power from a compatible USB Type-C monitor and, thanks to its space-saving design, can be stored behind the monitor to provide a clutter-free work environment. While the ThinkCentre Nano has been designed for office use, the ThinkCentre M90n-1 Nano IoT has been created to withstand the rigors of manufacturing environments, with a broader thermal range (0-50 °C) and a quiet fanless design. The device serves as a secure IoT gateway, providing processing and security for any IoT devices deployed in the environment that demand real-time responsiveness at the edge. The Nano IoT also enables the rapid relay of information between connected IoT peripherals, including both sensors and devices, in harsher commercial environments, and an expansion I/O Box allows organizations to build out to their edge device needs with additional legacy ports and peripherals. Both the ThinkCentre Nano and Nano IoT are expected to ship in August, with the ThinkCentre M90n having an expected list price of $639 and the ThinkCentre M90n Nano IoT expected to cost $539 when it becomes available.
Intel Cascade Lake-X release date, news and rumors

Posted: 13 May 2019 09:35 AM PDT

Cascade Lake-X is another iteration of Skylake-X, which was initially released as a response to Threadripper. And, while processors like the Intel Core i9-7980XE and Core i9-9980XE outperform processors like the Ryzen Threadripper 1950X and 2950X, respectively, they’re also much more expensive – which leads more enthusiasts to go with AMD’s high-end desktop (HEDT) platform. Initially, we were expecting Cascade Lake-X to follow Skylake-X directly, but Intel launched Basin Falls Refresh instead. However, rumors lead us to believe that Cascade Lake-X will be the next HEDT platform, even if it will still be on 14nm. In fact, we’ve already seen some non-X Cascade Lake processors arrive in the small business-focused Intel Xeon-E lineup, so we can’t wait to see what Cascade Lake-X can do. But, just because Cascade Lake is focused on servers these days doesn’t mean it’s not exciting, or that the microarchitecture won’t turn into HEDT products later on. So, keep this page bookmarked, and we’ll keep it updated with all the latest information.

Cut to the chase
Intel Cascade Lake-X release date

Even though we’ve finally got word from Intel about Cascade Lake server processors, the release schedule for Cascade Lake-X is kind of a mess. Some Cascade Lake Xeon processors have hit the street already, namely the Xeon E-2100 line. These chips are aimed at small businesses, with up to 6 cores. There are also some 48-core server-grade Cascade Lake processors coming out later this year, according to a report from Ars Technica. We’ve also seen a leak from the Chiphell forums that hints at a Cascade Lake-X reveal at Computex 2019. This leak refers to these new chips as LGA 2066 parts, which would put them in the same league as Intel’s new Basin Falls Refresh processors. That’s all we know for now, but something tells us there will be much more to Cascade Lake over the next few months, so stay tuned for all your Cascade Lake release date information.

Intel Cascade Lake-X price

This is where things might get uncomfortable, so brace yourself. Intel doesn’t exactly have a history of releasing HCC and HEDT processors for what we would call ‘reasonable’ prices. So, we expect the pricing of Cascade Lake-X to fall in line with, or exceed, Skylake-X’s pricing. However, the early days of Cascade Lake look more palatable, with the small business-targeted chips priced quite reasonably. The prices of the Cascade Lake Xeon E-2100 line are as follows:
When it comes to the eventual Cascade Lake-X chips and the Xeon W chip, the prices are going to be much higher. So, we’ve taken the liberty of listing the pricing of Skylake-X processors below.
Intel Cascade Lake-X specs

Assuming for a second that the 28-core CPU shown off at Computex is indicative of what we can expect from Cascade Lake-X, things are about to get bananas. Skylake-X offered significantly higher core counts than the previous generation of HEDT processors because it was manufactured on a smaller die. Cascade Lake-X doesn’t have the same advantage, as it’s still based on 14nm++ manufacturing. So while it will be able to offer slightly higher core counts, core clocks and power efficiency than Skylake-X, the leap in performance isn’t going to be as breathtaking. Still, Intel could use this process refinement to deliver a better value proposition – maybe offering a 10-core CPU for under $1,000. This could attract plenty of enthusiasts over to this server and enterprise platform, rather than more affordable mainstream CPUs. Intel has also revealed the Intel Xeon W-3175X, a 28-core processor meant for high-end server solutions, and we have specs for the Xeon E-2100 series of processors, which are targeted at small business-grade servers. Those specs are as follows:
That’s all we know for now, so now we just have to wait for Intel to reveal what’s in store – and we’ll be sure to update this page the second we know more.
Nvidia Volta release date, news, and features

Posted: 13 May 2019 09:35 AM PDT

When it comes to the best graphics cards, Nvidia’s varied selection is unrivaled. From the wickedly powerful GeForce RTX 2080 Ti to the entry-level GTX 1050 and everything in between, Nvidia remains the crowned ruler of the graphics world. But, because both professionals and gamers are always demanding new graphics technology and more powerful hardware, the best graphics card will never be good enough. This is why Nvidia Volta was created – the next-generation architecture for creatives and professionals. Nvidia Volta cards are already out in the wild, but before you go grab that credit card you should understand that these cards aren’t for gamers. These high-end graphics cards are aimed at creatives and professionals, and they have price tags to match. But that’s not to say that Nvidia Volta isn’t exciting – it certainly is – especially as its developments in AI-powered Tensor cores have trickled down to Nvidia Turing GeForce graphics cards like the GeForce RTX 2080 Ti and RTX 2080.

Cut to the chase
Nvidia Volta release date

We already mentioned that Nvidia Volta is available now, but not in any form that you’ll slip into your gaming PC. The Nvidia Tesla V100 hit the streets in May 2017, as a pro-level GPU built to empower AI workloads and run data centers. It’s not a consumer-level device by any stretch of the imagination. On the other hand, the Nvidia Titan V – released in December 2017 – is considered a consumer device, but you might scoff at the $2,999 (about £2,149, AU$3,800) price point. The Titan V is targeted towards researchers and other high-performance computing needs rather than making PlayerUnknown's Battlegrounds look pretty.

Nvidia Volta price

Because Nvidia Volta cards are aimed entirely at professionals and creatives, you shouldn’t expect them to be affordable for a gaming build. Both the Volta-powered Titan V and Tesla V100 are ludicrously expensive. The Titan V comes in at a cool $2,999 (£2,800, AU$4,699), while the Tesla V100 will cost you about $8,799 (£8,819, about AU$12,378) – definitely not cards you want to use to run Battlefield V. As for consumer-grade cards, we don’t think Volta will ever spawn one. Instead, Nvidia Turing was created with real-time ray-tracing tech that’s priced at a level gamers should find palatable.

Nvidia Volta specs

Volta introduces a new 12-nanometer manufacturing process, which marks a significant step up from the 16nm process seen with the Pascal GPUs. What exactly does that mean? Well, it means that Nvidia's manufacturing partner TSMC can jam a whole lot more transistors into a comparable amount of space, thus yielding a massive surge in computing ability. You'll get better performance, not to mention significant power savings. The Titan V packs in 21.1 billion transistors, which collectively provide a stunning 110 teraflops of computing power. New tensor cores are designed to speed up deep learning ability, with nine times more peak computing teraflops over the previous Titan Xp card.
And it's twice as energy-efficient too. Additionally, the Titan V has six graphics processing clusters with 5,120 CUDA cores (up from 3,840 in the Titan Xp) and 320 texture units, and a base clock of 1200MHz that can boost up to 1455MHz. With 12GB of HBM2 memory onboard, the Titan V boasts a memory bandwidth of 652.8GB/sec. That last bit on the specs side is surprising from Nvidia, which just the year before championed the enhanced speed of GDDR5X memory on its high-end GPUs. The Nvidia Titan V is the first mainline card to feature high-bandwidth memory outside of the company’s Quadro and Tesla parts. Whether this means HBM2 will be a mainstay of Volta remains to be seen. All told, it's a potentially enormous upgrade over the last generation of Nvidia cards. But let's step back for a moment, because comparing a $2,999 GPU to its $1,200 predecessor doesn’t make much sense – and there aren’t going to be any Volta-powered gaming graphics cards anyway. Based on these initial pro-level renditions, Volta seems capable of so much more than the previous Pascal-powered GPUs – and even if those are powerful enough right now, Volta appears poised to take AI and professional workloads to previously unseen levels. Either way, keep this page bookmarked, as we’ll update it as new details, leaks or cards pop up on our radar.
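The numbers above hang together arithmetically. The 652.8GB/sec figure lines up with the Titan V's widely reported HBM2 configuration (a 3,072-bit bus at an effective 1.7Gbps per pin – an assumption here, as the article doesn't state it), and the CUDA core count and boost clock give a rough standard FP32 figure:

```python
# Rough arithmetic behind the Titan V figures quoted above.
bus_width_bits = 3072   # 3-stack HBM2 bus width (assumption, widely reported)
pin_rate_gbps = 1.7     # effective data rate per pin (assumption)
bandwidth_gbs = bus_width_bits * pin_rate_gbps / 8
print(f"Memory bandwidth: {bandwidth_gbs:.1f} GB/s")   # matches the quoted 652.8GB/sec

cuda_cores = 5120
boost_ghz = 1.455
fp32_tflops = cuda_cores * 2 * boost_ghz / 1000  # 2 FLOPs per core per clock (FMA)
print(f"FP32 throughput: ~{fp32_tflops:.1f} TFLOPS")
```

Note that the headline 110 teraflops refers to the tensor cores' mixed-precision deep-learning throughput, which is a separate figure from this standard FP32 estimate of roughly 15 TFLOPS.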
Intel Skylake-X release date, news and features

Posted: 13 May 2019 09:34 AM PDT

Over the last few years, Intel has run into plenty of renewed competition in the CPU space – the endless war of AMD vs Intel is on in full force. Enter Skylake-X, Intel’s answer to AMD’s massively successful Ryzen and Threadripper processors. Back in 2017, when AMD launched Ryzen, Intel was caught by surprise. We didn’t know how Intel would respond, at least not at first. There were scattered rumors that Intel would launch Cannon Lake at Computex 2017, but, of course, that didn’t happen. Instead, Intel announced Skylake-X, a line of HEDT processors. And it worked: Skylake-X, headed by the Intel Core i9-7980XE, launched at Computex 2017, and while we knew it was coming, we were still impressed by the performance. But the Intel Skylake-X story keeps going. About a year later, in 2018, Intel announced the new HEDT Basin Falls Refresh processors, like the beastly Intel Core i9-9980XE, based on an improved 14nm architecture, along with a 28-core Xeon W chip to extend the Skylake-X family into 2019 and beyond.

Cut to the chase
Release date

Intel released the lower-end Skylake-X processors immediately following their reveal at Computex 2017, with the Core i7-7800X, Core i7-7820X and Core i9-7900X all released on June 19, 2017. Later, Intel launched three more processors, with the Intel Core i9-7920X, Core i9-7940X and Core i9-7960X all hitting the streets by September 2017. Later that month, Intel launched the Core i9-7980XE, an absolute unit of a chip that destroyed everything that came up against it. And, now that Kaby Lake-X chips are a thing – or were a thing, when they died unceremoniously – you might be thinking to yourself that all the fun ended with the 7980XE. However, Skylake-X isn’t dead yet. Intel also announced a Skylake-SP 28-core server chip, which would later be christened the Intel Xeon W-3175X, back at Computex 2018. Intel’s newest addition to this family was officially announced on October 8, 2018 and released in February 2019 to dethrone Threadripper 2nd Generation.

Price

The biggest thing holding Intel Skylake-X chips back from an easy recommendation is their price – they’re some of the most expensive consumer CPUs you can buy today. If you want to pick up a Core i9 processor for your latest build, you might actually need to sell your car first, as they start out around $999 (about £750, AU$1,340). Now, the Core i7 Skylake-X processors are a bit more reasonable, but you’re still paying to adopt a high-end platform, so don’t expect to base a budget build off of Skylake-X. Below you can find the pricing for all of the Skylake-X processors currently available.
We also finally know that the 28-core chip is going to be extremely expensive – if you want to get your hands on the Intel Xeon W-3175X, you’re looking at $2,999 (about £2,330, AU$4,250). Get ready to drop at least $10,000 if you’re planning on building an equally powerful rig around this chip. SpecsThe actual HEDT processors lie within the Basin Falls Refresh lineup. Chips like the Intel Core i9-9980XE are positioned to take on the Ryzen Threadripper 2990WX and Threadripper 2970WX. And, while the i9-9980XE is faster, it’s still hard to justify its high price. Intel’s newest Skylake-X chip, the Intel Xeon W-3175X, is beyond the realm of these other HEDT chips. With the Xeon name, this CPU is designed for more enterprise usage – think 3D animation – than enthusiast overclocking. Although the chip first debuted at Computex 2018 with a 5.0GHz boost clock, it was later revealed that Intel was actually overclocking while immersing the processor in liquid nitrogen. Without an extreme setup, you're more likely to see this chip running at a maximum of 4.3GHz. We now have the final specs: the Intel Xeon W-3175X has a base clock of 3.1GHz, a 3.8GHz boost across all 28 cores and a 4.3GHz single-core turbo clock. And, with the new Intel C621 chipset, this CPU comes with a greatly expanded support platform, supporting 68 PCIe lanes and 6-channel memory for up to 512GB of RAM running at 2,666MHz. All told, this may be the processor to finally outpace the Ryzen Threadripper 2990WX and Threadripper 2970WX.
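As a back-of-the-envelope check on that 6-channel memory figure, peak DDR4 bandwidth is just channels × transfer rate × 8 bytes per channel. A minimal sketch – the helper name is ours, not Intel's:

```python
def ddr4_peak_bandwidth_gbs(channels: int, mts: int) -> float:
    """Theoretical peak bandwidth in GB/s for DDR4.

    channels: number of memory channels (6 on the Xeon W-3175X)
    mts: transfer rate in mega-transfers per second (2,666 for DDR4-2666)
    Each DDR4 channel is 64 bits (8 bytes) wide.
    """
    return channels * mts * 8 / 1000  # MB/s -> GB/s

# 6-channel DDR4-2666, as quoted for the Xeon W-3175X:
print(ddr4_peak_bandwidth_gbs(6, 2666))  # ~128 GB/s
```

For comparison, a mainstream dual-channel desktop at the same speed tops out at roughly a third of that, which is a big part of what buyers of this platform are paying for.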
Nvidia Turing release date, news and features Posted: 13 May 2019 09:32 AM PDT The wait for Nvidia’s next-generation Turing graphics cards was excruciating but, finally, the latest and greatest GPU architecture is here. It was a long wait, but the sheer power of Nvidia Turing was definitely worth it. Now that we finally have Turing-powered Nvidia GeForce RTX 2080 Ti, RTX 2080, RTX 2070 and RTX 2060 cards in our hands, we know they’re among the best graphics cards for the games we’ll be playing for the next few years. Nvidia Turing will be behind some creative work too, thanks to the recently introduced Nvidia Titan RTX. Nvidia Turing is all about real-time ray tracing, and it brings this long-awaited graphics technology to the mainstream. This could bring about a graphical revolution in the way the best PC games are rendered. And, judging by high-profile games like Metro Exodus, that revolution has arrived – especially considering you can now enable ray tracing on non-RTX cards (to the tune of a major performance hit). Of course, there are Nvidia Turing cards out there for people who don’t need that unique RTX tech. Nvidia has launched the GeForce GTX 1660 Ti, GeForce GTX 1660 and GeForce GTX 1650. They’re not quite as powerful as their ray tracing cousins, but they’re significantly more affordable. Plus, Nvidia has officially announced its mobile Turing graphics processors, along with a ton of laptops equipped with them. Cut to the chase
Nvidia Turing release dateAll of the currently announced Nvidia Turing GPUs are now out in the wild – the RTX 2080 Ti, 2080, 2070, 2060, GTX 1660 Ti and GTX 1660. Thankfully, the RTX-series cards are readily available now after some initial limited availability, and you’ll likely find more than a few models on sale every day. And, at CES 2019, we didn’t just finally get an RTX 2060 announcement, but also over 40 gaming laptops sporting the mobile version of Nvidia RTX graphics. The best gaming laptops of CES 2019, like the Alienware Area 51m, were all packing the latest Nvidia Turing graphics, and they started hitting the market almost immediately in early- to mid-January 2019. And, if you were looking for a graphics card that you won’t have to take out a personal loan to afford, the Nvidia GeForce GTX 1660 Ti arrived on February 22. The Nvidia GeForce GTX 1660 quickly followed with its own March 14 release date, then, finally, the Nvidia GeForce GTX 1650 arrived alongside Turing GTX graphics for laptops in April 2019. Nvidia Turing priceAlthough the Nvidia Turing series started with the Quadro RTX GPUs, we’re far more interested in the graphics cards available for consumers, so read on for the prices of the announced consumer cards below:
Overall, the prices for Nvidia's newest graphics cards seem to have risen, with the Nvidia GeForce RTX 2080 Ti taking the place of Nvidia’s past Titan cards. This shift up can sadly be seen across the entire lineup. Not to mention that, at the time of writing, store prices for the RTX 2080 Ti and RTX 2080 were a bit higher than what Nvidia CEO and founder Jensen Huang revealed at the Nvidia GeForce Celebration at Gamescom 2018. For instance, the 2080 Ti was initially revealed at $999, but that price is rarely reflected outside of some occasional discounts. Then there are the GTX Turing cards. Right now, these are led by the $279 (£259, AU$469) GTX 1660 Ti, and provide phenomenal value at the low end. The most recent of these cards, the GTX 1650, is priced at $149 (about £115, AU$210), and is positioned to compete with the AMD Radeon RX 570. Nvidia Turing specsThe headline feature of Nvidia Turing is the inclusion of ray-tracing tech that can render more realistic visuals and lighting in real time without having to fall back on programming tricks. These specialized RT cores essentially calculate how light and sound travel in a 3D environment at a rate of up to 10 GigaRays per second on the RTX 2080 Ti. These specialized cores will also supposedly allow Nvidia Turing-based graphics cards to process ray tracing up to 25 times faster than Pascal. When these RT cores aren’t in use for processing ray tracing, they’ll essentially switch off, ceasing to draw any power. In addition to these RT cores, the Turing architecture also features Tensor cores, like the ones found in Volta. These specialized cores enable artificial intelligence and neural networking so that Turing cards get better at rendering over time – something previously exclusive to supercomputers. With the ability to deliver 500 trillion Tensor operations a second, this technology accelerates deep learning training and inferencing.
This will allow Nvidia to offer Deep Learning Super Sampling (DLSS), a version of super sampling that won’t bring your computer to its knees. Even for games that don’t support this new DLSS tech, these AI-fueled cores should deliver traditional anti-aliasing much more efficiently – up to eight times. As with Volta, Nvidia Turing adopts GDDR6 memory – up to 11GB in the RTX 2080 Ti, which can clock in at up to 14Gbps, quite the leap over the Pascal-powered Nvidia Titan Xp that clocked in at 11.4Gbps. The Nvidia GeForce RTX 2080 Ti is an absolute behemoth of a GPU. With 4,352 CUDA cores, 11GB of GDDR6 VRAM on a 352-bit memory bus and 18 billion transistors, it’ll be capable of 4K Ultra gaming at high refresh rates for years to come. It’s no wonder it comes with such a high price tag. The more mainstream RTX 2080 and RTX 2070 are both also quite impressive and absolutely destroy the previous generation of GPUs. The former features 2,944 CUDA cores, 8GB of GDDR6 memory and clocks in at a 1.5GHz base frequency. The 2070, though, is a bit weaker, coming with 2,304 CUDA cores, 8GB of GDDR6 VRAM and a 1,410MHz base frequency. And, while the RTX 2060 is basically just a cut-down RTX 2070, with the same TU106 GPU but with 1,920 CUDA cores, 6GB of GDDR6 VRAM and a boost clock of 1,680MHz, it’s still a formidable graphics card. There are rumors of an RTX 2070 Ti, via some leaked benchmarks. The details are a bit hard to believe, as it’s listed with 7.5GB of VRAM at 3,150MHz, when the 2070 has 8GB at 3,500MHz. Given that Ti-branded graphics cards are usually more powerful than their non-Ti companions, we’d take this with a grain of salt. Nvidia has also launched some non-RTX cards, starting with the GTX 1660 Ti. This card features 1,536 CUDA cores, 6GB of GDDR6 VRAM at 12Gbps, and a base clock of 1,500MHz. It’s slower than the RTX 2060, but it’s a substantial upgrade over the GTX 1060 it replaces.
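Those per-pin data rates only tell half the bandwidth story: effective throughput is data rate × bus width ÷ 8. A quick sketch using the figures above (the Titan Xp's 384-bit bus width is the card's published spec, not from this article):

```python
def memory_bandwidth_gbs(gbps_per_pin: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate times bus width."""
    return gbps_per_pin * bus_width_bits / 8

# RTX 2080 Ti: 14Gbps GDDR6 on a 352-bit bus
print(memory_bandwidth_gbs(14, 352))    # 616.0 GB/s
# Titan Xp: 11.4Gbps GDDR5X on a 384-bit bus
print(memory_bandwidth_gbs(11.4, 384))  # ~547 GB/s
```

So the 2080 Ti out-muscles the old Titan Xp on total bandwidth even with a narrower bus, purely on GDDR6's faster per-pin rate.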
Team Green’s second non-RTX GPU, the GTX 1660, features 1,408 CUDA cores, 6GB of GDDR5 video memory and a reference boost clock of 1,785MHz. It might not sound mighty on paper, but between its low price and fantastic 1080p gaming performance, it’s currently the absolute best entry-level graphics card you can buy. A third non-RTX GPU has been released, the Nvidia GeForce GTX 1650. This low-end card features the TU117 GPU, clocked at 1,485MHz with a boost of 1,665MHz. This budget card features 4GB of GDDR5 VRAM with 128GB/s of memory bandwidth on a 128-bit bus. Nvidia Turing performanceAs long as you have the high-end specs to back them up, the new Turing RTX cards perform much faster than their Pascal equivalents, and will be able to push even further once DLSS is more widespread. And, thanks to the anti-aliasing improvements in the Tensor cores, we’re seeing about a 20-40% increase even in games that don't support DLSS. In our benchmarks, the GeForce RTX 2080 outperforms the GeForce GTX 1080 Ti by about 11% and the Nvidia GTX 1080 by a more impressive 32% in Middle Earth: Shadow of War at 4K. This performance difference is even more massive when you look at the Nvidia GeForce RTX 2080 Ti, which not only is 20% faster than the RTX 2080 in the same title, but beats out the last-generation 1080 Ti by a massive 30%, destroying the GTX 1080 with a 45% performance delta. Unfortunately, the Nvidia RTX 2070 is less impressive. While it does absolutely wipe the floor with the GTX 1070, it is essentially neck and neck with the GTX 1080 – barely hitting a 10% performance increase at 4K in Shadow of the Tomb Raider. At its price point we were hoping for more, especially after seeing the RTX 2080 and RTX 2080 Ti’s impressive performances. The RTX 2060 is obviously the weakest of the bunch, but you shouldn’t dismiss it outright.
The mid-range Nvidia Turing card far outclasses the GTX 1060, but what’s more surprising is that it surpasses the GTX 1070 Ti – for a lower asking price. We were able to get 90 fps in Shadow of the Tomb Raider at 1080p, whereas the 1070 Ti lagged behind at 86 fps. That’s not a huge difference, but the 2060 is $100 cheaper at launch. In traditional games, there’s no question that Nvidia Turing marks a total upgrade from Pascal. And, over time, as drivers mature and users get ready to start overclocking their Turing cards, the difference is only going to grow. That’s not to mention the inclusion of incoming DLSS and ray tracing in games, which should only increase the Nvidia Turing performance gap. When it comes to ray tracing, there’s only one title that supports it right now: Battlefield V. And, in that title, the Nvidia Turing cards use a hybrid rendering technique – combining both traditional rasterization and ray tracing in order to produce playable frame rates. Nvidia utilizes a “Bounding Volume Hierarchy,” or BVH, to track large portions of the scene being rendered for whether or not a ray is being bounced. The RT cores then dig deeper into these large rendering zones until they find the polygon that’s getting hit by the light ray. This method impacts performance far less than tracking each ray live, but it’s still very demanding. In our own testing, you’ll be stuck at 1080p if you’re looking for smooth gameplay with ray tracing turned on. However, with Nvidia’s latest RTX drivers, the company claims ray tracing performance can increase by up to 50%. We’ll be sure to test this and report back, but we have to wait for the new Battlefield V patch to do it. As for the Nvidia GeForce GTX 1660 Ti, you can expect much better performance than the GTX 1060 for less money – up to 56% faster in Shadow of the Tomb Raider at 1080p in our testing. That makes the 1660 Ti a beast when it comes to value.
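The BVH traversal described above can be sketched in miniature. This toy example (all names are ours, and it uses 1D intervals instead of 3D boxes) shows the key idea: prune whole subtrees whose bounding volume the ray misses, so only a small fraction of primitives ever need exact intersection tests.

```python
class Node:
    """A BVH node: an axis-aligned bound plus children or leaf primitives."""
    def __init__(self, lo, hi, children=None, primitives=None):
        self.lo, self.hi = lo, hi           # bounding interval (1D stand-in for a box)
        self.children = children or []      # inner-node children
        self.primitives = primitives or []  # leaf payload

def ray_hits_box(origin, direction, lo, hi):
    # 1D "slab test": does the ray starting at origin pass through [lo, hi]?
    if direction == 0:
        return lo <= origin <= hi
    t0, t1 = sorted(((lo - origin) / direction, (hi - origin) / direction))
    return t1 >= max(t0, 0)

def traverse(node, origin, direction, hits):
    if not ray_hits_box(origin, direction, node.lo, node.hi):
        return  # prune this entire subtree – the big win of a BVH
    if node.primitives:
        hits.extend(node.primitives)  # candidates for exact intersection tests
    for child in node.children:
        traverse(child, origin, direction, hits)

# Two clusters of geometry under one root bound:
scene = Node(0, 6, children=[Node(0, 1, primitives=["tri-a"]),
                             Node(5, 6, primitives=["tri-b"])])
found = []
traverse(scene, -1.0, 1.0, found)  # ray from x=-1 travelling right
print(found)  # ['tri-a', 'tri-b']
```

A ray travelling away from the root bound skips everything in one test, which is why this approach is so much cheaper than checking every polygon per ray.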
Disc-less Xbox One S: everything you need to know about the all digital console Posted: 13 May 2019 09:31 AM PDT As digital downloads and online streaming have changed how we consume film, TV, entertainment and games, it's no huge surprise that Microsoft has launched its first disc-less console. Thanks to plenty of rumors, we knew that a disc-less version of the Xbox One S was in the works for months before it appeared – and now the Xbox One S All-Digital Edition is officially on sale. Before that, it was codenamed Xbox ‘Maverick’, and Windows Central was one of the sites sharing a lot of information in advance. The Xbox One S All-Digital Edition isn't intended to compete with the brand’s higher-end consoles and current Blu-ray, disc-based line-up, including the Xbox One S and Xbox One X – but it's a cheaper, more streamlined option. Here we've got the complete lowdown on the Xbox One S All-Digital Edition, now it's out in the wild: what features it has, how much you'll have to pay to get it, and more. Disc-less Xbox One S: Cut to the chase
New Xbox One S All-Digital: price and availabilityAs predicted by a report from Windows Central, the Xbox One S All-Digital arrived on May 7, 2019. You can currently pick up the console in the US for $249 and up and in the UK for £199.99 and up, direct from Microsoft. Oh, and you get Minecraft, Sea of Thieves and Forza Horizon 3 thrown in as well. It hasn't arrived everywhere in the world just yet though – at the time of writing, it's still listed as "coming soon" on the Xbox Australia website. New Xbox One S All-Digital: design and featuresObviously the key feature of the Xbox One S All-Digital is that it's missing a Blu-ray drive. Apart from that, it's very much identical to the Xbox One S that we've had for several years now. No discs means everything has to be downloaded, and you've got a 1TB hard drive built in. That should be plenty of room for your games, and you get three bundled with the console: Forza Horizon 3, Sea of Thieves and Minecraft (as reported in advance by Windows Central). Design-wise, again Windows Central was on the money with the advance rumors. The Xbox One S All-Digital looks exactly the same as the Xbox One S, but without a disc drive or (rather redundant) eject button.
Intel Lakefield release date, news and rumors Posted: 13 May 2019 09:30 AM PDT Intel must be feeling the heat from Qualcomm Snapdragon-powered laptops, because the company is now developing its first hybrid processor, or system-on-a-chip (SOC). Enter Intel Lakefield, a processor unlike anything we’ve ever seen. It’s packed with more than one type of CPU core to create a more stable and better-rounded system. Plus, these hybrid SOCs can even come with their own integrated memory, I/O interface, wireless connectivity and, of course, integrated CPUs. Because it has so many different components built into the chip, Lakefield operates more like the ARM processors found in the best smartphones than the traditional CPUs that have been powering laptops and desktop computers practically since they were invented. Lakefield could just change the computer game for the first time in years. It’s not surprising that Intel isn’t the only one chasing the SOC dream, as AMD Ryzen 3rd Generation chips are also architecturally designed to be hybrid processors. Before this new wave of SOCs hits us, here’s everything you need to know about Intel Lakefield. Cut to the Chase
Intel Lakefield release dateSo far, all we know is that Lakefield will apparently enter production sometime in 2019. Intel has also said that it’s committed to having 10nm devices on shelves in time for the 2019 holiday shopping season in the US. That will likely be Ice Lake, as Intel has said that platform should be out by the end of the year, but it’s possible we’ll see Lakefield products by the end of the year, too. However, Intel Lakefield-powered systems might actually come out earlier than that. According to some leaked roadmaps from Tweakers, Lakefield M-series processors may be released in Q2 2019, which could see a summer release date – just in time for seasonal travel. Intel Lakefield pricePricing on Lakefield processors is going to be really hard to pin down, because as we’ve mentioned these will be Intel’s first ever hybrid processors. Without any pre-existing chips to use as a point of reference, we don’t really have much to base our speculation on. Additionally, it seems like Lakefield will primarily be integrated into laptops and other types of mobile devices, rather than appearing on store shelves as boxed processors. With this in mind, these hybrid CPUs will likely only be available to commercial partners such as system integrators and original device manufacturers – so the chances of you being able to buy standalone Lakefield CPUs are slim. Intel Lakefield specsBefore we get into the nitty-gritty of the makeup of an Intel Lakefield processor, it’s worth discussing the guiding principle or technological revolution behind the company’s first hybrid x86 CPU: Foveros. To start this journey, we’ll first take a look back at where Intel has been. For the most part, Intel’s processors up to now have utilized either monolithic or 2D integration packaging – that is to say, that’s how the chips are built. 
The former process has been used for the bulk of Intel’s processors, in which the CPU core, integrated memory, I/O and graphics are all integrated into a single die or unit. Intel recently introduced 2D integration with its Kaby Lake G processors, which placed Intel CPU cores and discrete-class AMD graphics next to each other. Although the two separate components are ultimately built into a single package, they’re connected through the use of an Embedded Multi-die Interconnect Bridge, which enables high-speed communication. Foveros is essentially an evolution of this 2D integration, in which other components can be stacked on top of each other in a form of logic-on-logic integration. The influence Foveros has had on the design of Intel’s Lakefield processor can be easily seen thanks to a recently released video that dissects the hybrid processor. We can clearly see the 12mm square package is stacked like a sandwich, with integrated memory sitting on top of the chip’s main components. The CPU itself includes a mix of different cores: one big 10nm Sunny Cove core for performance and four smaller 10nm Atom cores for low-energy tasks. This mix of differently performing CPU cores will supposedly allow small-form-factor devices to deliver both performance and power efficiency at the same time. What's more, this chip comes with Intel’s Gen11 integrated graphics, as well as interfaces for cameras and hardwired connections, or I/O – not to mention the integrated memory sitting on top of all these components. Despite their stacked nature, Intel Lakefield processors will still fit into the smallest of devices. At its CES 2019 keynote, Intel demonstrated how its hybrid processor can be fitted onto a motherboard no larger than a Roku Streaming Stick. Intel has said that Lakefield can scale from low-power systems to full PC performance machines.
This wide range of devices includes traditional notebooks, 2-in-1 (convertible or detachable) laptops, and even dual-screen laptops. Devices similar to the Intel Tiger Rapids and Asus Project Precog could get this new hybrid processor. That’s everything we know about Intel Lakefield so far, but we’re sure to learn more about Intel’s innovative hybrid processor soon. Stay tuned to this page, as we’ll be bringing you the latest developments as we learn about them.
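The big/small core split described above can be illustrated with a deliberately naive scheduler sketch. Everything here – the core names, the 0.5 load threshold – is invented for illustration; a real OS scheduler weighs power states, thermals and far more than a single load number.

```python
import itertools

# Hypothetical core layout mirroring Lakefield's 1 big + 4 small design.
BIG_CORE = "sunny-cove-0"
SMALL_CORES = ["atom-0", "atom-1", "atom-2", "atom-3"]
_small_rr = itertools.cycle(SMALL_CORES)  # round-robin over the efficiency cores

def assign_core(task_load: float) -> str:
    """Send demanding tasks (load >= 0.5) to the performance core,
    and spread light background tasks across the Atom cores."""
    if task_load >= 0.5:
        return BIG_CORE
    return next(_small_rr)

print(assign_core(0.9))  # sunny-cove-0
print(assign_core(0.1))  # atom-0
print(assign_core(0.2))  # atom-1
```

The point of the design is exactly this asymmetry: bursty foreground work gets the fast Sunny Cove core, while the always-on background churn stays on cores that sip power.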
AMD Navi release date, news and rumors Posted: 13 May 2019 09:29 AM PDT Now that AMD has revealed its first 7nm consumer graphics card, the Radeon VII, it’s about time for Team Red to reveal its other 7nm GPU architecture, Navi. There have been rumors that suggest AMD Navi won’t occupy the high-end of the market right at launch, even if it may include variable rate shading tech similar to Nvidia Turing’s. Instead, it looks like the first AMD Navi cards will be mid-range SKUs, and we’ll be waiting for higher-end models until 2020. Plus, AMD Navi was revealed as the graphics architecture behind the PS5 in an interview with Sony lead system architect Mark Cerny. So, before we dive into all the juicy gossip, speculation and even some official info, you should know that nothing has been confirmed. Still, you should keep this page bookmarked, and we’ll keep it updated with all the latest information as it emerges. Cut to the Chase
AMD Navi release dateRight now, the only official proof that Navi will be launched anytime soon is a small statement from AMD titled “Expanding our High-Performance Leadership with Focused 7nm Development.” In the release, AMD Chief Technology Officer Mark Papermaster states that “We have already taped out multiple 7nm products at TSMC, including our first 7nm GPU planned to launch later this year and our first 7nm server CPU that we plan to launch in 2019.” Unfortunately, that statement doesn’t necessarily clarify when we’ll see AMD Navi. Especially when the only 7nm GPUs have been the Radeon Instinct cards for workstations, Vega II and the AMD Radeon VII – a high-end graphics card that competes with the RTX 2080. A leaked roadmap suggests that AMD Navi will show up in the second half of 2019. However, the entire line probably won’t make an appearance at the same time. Instead, the cards that will likely show up in 2019 will be more mid-range products for budget-minded gamers, with full-fat enthusiast-grade 7nm cards showing up as late as 2020. As for when exactly we’ll see those mid-range graphics cards, we’ll likely see them launch in October 2019, according to the latest speculation. However, we’ve also seen speculation that AMD Navi will be revealed alongside AMD’s 7nm Zen 2 processors at Computex 2019. We don’t think AMD Navi will actually launch at this event, though, but rather later – much like the Radeon VII or Vega. Later, in an interview with Wired, Sony system architect Mark Cerny confirmed that AMD Navi will be the graphics architecture behind the PS5. This unfortunately doesn’t confirm when the desktop graphics products will hit the market, but he did mention that the next-generation console won’t make it out in 2019, which corroborates the 2020 release for the high-end Navi graphics.
AMD Navi may also be featured in laptops, and even the next line of MacBooks, if newly found code in macOS Mojave is to be believed. These Navi-powered MacBooks are rumored to arrive in July, which would follow just a month after the purported debut of next-gen consoles. Either way, we know that AMD Navi is coming – we’ve even seen rumors about Navi’s follow-up, called Arcturus, according to a report from Wccftech. It seems like AMD has a lot coming out in the next few years; we just have to wait and see what happens. AMD Navi priceOver the last decade or so, AMD has essentially built a name for itself in the GPU space as the budget king. And, with Nvidia’s latest generation of Turing RTX graphics cards costing upwards of a thousand bucks, AMD will probably double down on this – just like it released Ryzen and Ryzen Threadripper to counter Intel. As for exact pricing, we’re not entirely sure, but we expect it to follow in line with the Radeon RX 500 series. Those prices are below.
We can expect the prices to go up, however, due to the likely inclusion of GDDR6 memory, which we’ll dive into later. Either way, expect to pay more for the rumored Radeon RX 650. AMD Navi specsBeyond the rumor that it’s going to be running on a 7nm process and GDDR6, there’s not much we know about AMD Navi. Because AMD Navi is going to be the first mainstream GPU processor to break away from the GCN architecture, we could see more shaders and faster clock speeds, while improving efficiency substantially. And, while we initially didn’t think that AMD would compete with Nvidia on the high end with Navi, Sony lead system architect Mark Cerny recently revealed some details about the PS5, which will be powered by AMD Navi, that make us think twice. For instance, the PS5 will likely support ray tracing, along with 8K resolutions. We think that’s a little pie-in-the-sky, and we’d take it with a grain of salt. But AMD Navi could theoretically mark a true generational shift – we just don’t think that will happen with the 2019 Navi 10 graphics cards. Because the details were extremely vague in this interview, we’ll have to wait and see. Historically, AMD has competed with high-end Nvidia GPUs by shoving multiple GPUs into a single graphics card and passing that along as an enthusiast-grade product. This time around, though, AMD has made it clear that it’s not interested in making multi-GPU setups, according to a report from PCWorld. This is backed up by the fact that only two of the AMD Radeon RX 500 series cards were even compatible with CrossFireX. So, instead of any high-end GPU, like the PS5 information potentially suggests, we’re putting our money on a Radeon RX 600 series card that’ll continue the legacy of cards like the AMD Radeon RX 580, delivering respectable performance in modern games at 1080p, while keeping prices approachable.
So, instead of an Nvidia GeForce RTX 2080 Ti rival, think more of an RTX 2060 in terms of direct competition – at least until the high-end Navi 20 graphics launch in 2020. Navi 20 will follow the first iteration of Navi, or Navi 10, and should compete with Nvidia’s flagship. This next-generation graphics card family is rumored to offer improved performance, allowing for ray tracing, according to a report from Wccftech. So, this could be the actual architecture behind the PS5. Unfortunately, we’re going to have to keep waiting for concrete information on AMD Navi. But, until then, keep this page bookmarked for all the latest news, rumors and speculation.
Intel Ice Lake release date, news and features Posted: 13 May 2019 09:27 AM PDT The computing world has eagerly waited for Intel to boldly show us something we haven’t seen before – and that might be Intel Ice Lake, the company’s first 10nm CPU family, which may make it out ahead of Cannon Lake. Intel Ice Lake processors will be the first commercially available processors to move away from the aging 14nm process, which has been ‘enhanced’ repeatedly since the introduction of Broadwell almost five years ago. With AMD going full steam ahead with its 7nm Zen 2 processors, including AMD Ryzen 3rd Generation and presumably Threadripper 3rd Generation, it’s about time for Team Blue’s chips to move into the modern age – even if it is only on the mobile side of things. Either way, because the road to Ice Lake and beyond is going to be complicated, be sure to keep this page bookmarked, and we’ll update it with all the latest developments around Intel’s 10nm mobile microarchitecture. Cut to the Chase
Intel Ice Lake release dateIntel’s announcement of Ice Lake at CES 2019 came as a surprise, and it was even more surprising to hear that these 10nm Ultrabook-class processors would release this holiday season in the US. We don’t have a narrower release window than that, so we’re expecting these chips to arrive within the last three months of the year worldwide. However, it looks like Intel Ice Lake is going to be a limited release. According to a leaked roadmap found by Tweakers, Ice Lake will release around the same time as Comet Lake-U, another 14nm microarchitecture for mobile. This is just a leak, so take it with a grain of salt, but if true, it would mean that Intel’s 10nm Ice Lake likely won’t make it to the best Ultrabooks. Cannon Lake introduced the company’s first 10nm processor, but never saw a wide release. Instead, Intel announced Ice Lake to be the first 10nm processor that would see a wide release, but we might not see that happen if recent rumors are to be believed. It was a bit of a gotcha moment at CES, when we were fully expecting Cannon Lake to finally get its long-delayed reveal. But then again, Intel introduced its Sunny Cove architecture – from which these Ice Lake CPUs will be derived – a few weeks before the giant tech conference. As for when the Ice Lake processors will actually find their way into new laptops, it looks like we might learn something at Computex 2019. Intel has claimed that Ice Lake processors are on track, and should start shipping to manufacturers in June, with laptops hitting the streets at the tail end of 2019. Intel Ice Lake priceIntel isn’t anywhere close to announcing pricing for its Ice Lake-series processors, unfortunately. But, because Ice Lake will likely be a mobile platform, the pricing will probably fall in line with Whiskey Lake, which means pricing will mostly depend on the laptop manufacturers using it. 
AMD could also be another influence on Intel's pricing – although unfortunately Team Red also has yet to announce pricing for its Ryzen 3rd Generation CPUs. So we’ll have to wait and see how the cost of the first 7nm processors will affect Team Blue’s pricing scheme for its own 10nm processors. Intel Ice Lake specsThe headline feature of Intel Ice Lake is, of course, the new underlying 10nm Sunny Cove architecture. What’s more, these new processors will be the first to feature next-generation Gen11 graphics. Overall, Intel has promised a 2x performance boost, thanks to the smaller fabrication process and the die shrink from 14nm. However, according to the latest leaked roadmap from Tweakers, Ice Lake mobile processors will likely be limited to 4-core parts, while Comet Lake-U will feature 6-core parts based on a 14nm manufacturing process. Aside from the chips’ main microarchitecture, Ice Lake CPUs will also come with native support for Thunderbolt 3, Wi-Fi 6 (802.11ax) and DL Boost. While Intel has yet to officially announce any exact processor models, it showed off an unnamed Ice Lake U-series CPU at its Architecture Day and at CES 2019. The basic specs of the 15W laptop processor included 4 cores, 8 threads and Gen11 graphics with 64 GPU execution units (EUs). The new graphics look like a significant improvement over the Gen9 GPU found in Intel’s current line of processors, which only featured 24 EUs. In fact, the company claims Ice Lake-U will be the first chip with one TFLOP of integrated graphics performance. Additionally, we’ve seen leaked GFXBench and CompuBench benchmarks that show Intel’s Iris Plus Graphics 940 (Gen11) blowing past other 15W parts like the Intel Core i5-8250U and AMD Ryzen 7 2700U. Going on the leaked scores, Gen11 graphics may even be able to keep up with a desktop AMD Ryzen 5 2400G APU. Lastly, Intel also promised that Ice Lake will help devices achieve a battery life of over 25 hours.
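That "one TFLOP" claim checks out arithmetically: each Gen11 EU can issue 16 FP32 operations per clock (two SIMD-4 FPUs doing fused multiply-adds), so 64 EUs only need to run around 1GHz. A quick sketch – the ~1.0GHz clock is our assumption, not an announced spec:

```python
def igpu_tflops(eus: int, flops_per_eu_per_clock: int, clock_ghz: float) -> float:
    """Theoretical FP32 throughput in TFLOPS for an integrated GPU."""
    return eus * flops_per_eu_per_clock * clock_ghz / 1000

# 64 Gen11 EUs, 16 FP32 FLOPs per EU per clock, assumed ~1.0GHz clock:
print(igpu_tflops(64, 16, 1.0))  # 1.024
```

By the same math, the 24-EU Gen9 part at a similar clock lands under 0.4 TFLOPS, which is why the jump to 64 EUs is such a big deal for thin-and-light laptops.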
This is largely thanks to Team Blue performing a top-to-bottom analysis of power draw to reduce energy consumption, and also adding its new 1W display technologies. That’s everything we know about Ice Lake right now. And, while all the rumors point to Ice Lake being a mobile microarchitecture, much like Whiskey Lake, we may end up seeing desktop parts, but we wouldn’t hold our breath. Either way, stay tuned, as we’ll be posting regular updates to this page as we hear new rumors, leaks and any official news.
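Intel’s "one TFLOP of integrated graphics" claim for the 64-EU part can be roughly sanity-checked with some back-of-the-envelope math. Note the per-EU throughput (two SIMD4 FP32 ALUs, with an FMA counted as two FLOPs) and the ~1.0GHz GPU clock below are our own assumptions, not figures from Intel:

```python
# Rough sanity check of the "1 TFLOP integrated graphics" claim for a
# 64-EU Gen11 part. Assumption: each EU retires 16 FP32 FLOPs per clock
# (two SIMD4 ALUs, FMA counted as two operations).
def peak_tflops(eus, clock_ghz, flops_per_eu_per_clock=16):
    """Theoretical peak FP32 throughput in TFLOPS."""
    return eus * flops_per_eu_per_clock * clock_ghz / 1000

# At a hypothetical ~1.0GHz GPU clock, 64 EUs land right around 1 TFLOP:
print(peak_tflops(64, 1.0))  # → 1.024
```

Under those assumptions, the claim is plausible at clocks Ultrabook-class GPUs already reach today.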
AMD Vega II release date, news and rumors Posted: 13 May 2019 09:27 AM PDT It was a fairly quiet year for AMD’s graphics department in 2018, but now that AMD has released the Radeon VII graphics card for gaming, 2019 is the year of the 7nm GPU. We were left in anticipation for a new AMD graphics card for quite a while, but the Radeon VII managed to put AMD back on the map – even if it continues to be dominated by Nvidia. Thanks to the new 7nm GPU architecture, AMD was able to pack its latest graphics cards with more transistors than ever before, without increasing the power draw. Priced the same as the Nvidia GeForce RTX 2080, while delivering comparable performance – albeit without Nvidia Turing’s ray tracing or DLSS features – the Radeon VII has ensured that 2019 will be an exciting year for the best graphics cards. While the AMD Radeon VII is at least partially related to Vega, we still don’t know whether or not it’s officially part of any Vega II lineup. Based on what we were hearing before CES 2019, the Radeon VII lines up with what we were expecting from Vega II, so we’ll be treating it as part of that lineup, even if it’s not official. So, be sure to keep this page bookmarked, and we’ll dive into everything there is to know about AMD Vega II. Cut to the chase
AMD Vega II release date

AMD revealed the Radeon VII at CES 2019 as the first 7nm consumer graphics card. It actually released about a month later on February 7, 2019, offering buyers Devil May Cry 5, Resident Evil 2 and The Division 2 as complimentary additions. The AMD Radeon VII had some supply issues when it first hit the market, but the high-end GPU is now widely available. This is good news, as AMD’s Navi cards may not even show up until October 2019. We have seen AMD CEO Lisa Su say that Team Red would “provide new details about the next generation of high-performance platforms and products” at Computex 2019. Either way, we’ll be sure to update this article with any new information that arises.

AMD Vega II price

Right now there’s only one AMD Vega II card, the Radeon VII, and it launched at $699 (£649, AU$1,109), bundled with three games. We’re sure that AMD will fill out the product stack with lower-priced GPUs in the future, but we don’t know what the pricing will look like quite yet. However, you can probably assume it’s going to compete with Nvidia’s Turing product stack directly.

AMD Vega II specs

The switch from a 14nm process to a 7nm process allows AMD to pack even more power into each GPU. For its graphics cards, this move means more transistors in each GPU without having to increase the die size or the power requirements. And, the AMD Radeon VII is the first 7nm graphics card for gamers, with 3,840 stream processors, 16GB HBM2 VRAM and 1TB/s of memory bandwidth. This goes directly against the Nvidia GeForce RTX 2080, which has 2,944 CUDA cores, 8GB of GDDR6 VRAM and 448GB/s of memory bandwidth. The Radeon VII will definitely outperform its competitor in some workloads thanks to sheer horsepower. But, it’s especially going to shine in creative workloads, meaning that it’s going to be the best graphics card for hobbyist video editors and content creators who don’t have the cash to drop on a professional product. 
But, the AMD Radeon VII won’t offer the double-precision capabilities that the enterprise Radeon MI60 does, according to ExtremeTech, so it’s still very much a consumer graphics card. Aside from standalone cards, it seems all but guaranteed that the Vega II series will find its way into mobile and lower-power devices as integrated graphics processors. With a new generation of Ryzen processors expected, a new generation of Vega graphics to integrate into APUs is only fitting. In fact, laptops will soon see a flurry of 2nd Gen Ryzen Mobile processors equipped with brand new Vega graphics. Even the anticipated Xbox Scarlet Cloud could see Vega II built into its APU.
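The bandwidth gap quoted above falls straight out of bus width times per-pin data rate. The pin rates below (2.0 Gbps for the Radeon VII’s HBM2, 14 Gbps for the RTX 2080’s GDDR6) are our own assumptions about the cards, not figures from this article:

```python
# Peak memory bandwidth = bus width (bits) x per-pin data rate / 8.
# Pin rates are assumed: 2.0 Gbps HBM2 (Radeon VII), 14 Gbps GDDR6 (RTX 2080).
def bandwidth_gbs(bus_width_bits, pin_rate_gbps):
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits * pin_rate_gbps / 8

radeon_vii = bandwidth_gbs(4096, 2.0)  # four HBM2 stacks, 1,024 bits each
rtx_2080 = bandwidth_gbs(256, 14.0)    # eight 32-bit GDDR6 channels
print(radeon_vii, rtx_2080)  # → 1024.0 448.0
```

That is where the article’s 1TB/s and 448GB/s figures come from: HBM2’s much wider bus more than makes up for its slower per-pin signaling.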
AMD Vega release date, news and features: everything you need to know Posted: 13 May 2019 09:26 AM PDT AMD Vega first appeared way back at CES 2017. What we didn’t know then, however, is that it wouldn’t show up on the market until seven months later in August 2017. AMD Vega was extremely successful as a mining card. Back in 2017, when AMD Vega launched, cryptocurrency ruled the world, so it was the perfect storm. Luckily, cryptocurrency mining has all but died off and AMD Vega graphics cards are very affordable. However, we suspect many users will go to the new AMD Radeon VII instead. Not that the AMD Vega cards are necessarily outdated, though. AMD’s latest Radeon Software Adrenalin 2019 Edition unlocks a ton of new functionality for current graphics cards. AMD Vega graphics cards will be just fine for quite a while. Cut to the chase
AMD Radeon RX Vega release date

First revealed at CES 2017, nobody knew that AMD Vega wouldn’t release until August 2017. It was unfortunate that these GPUs hit the street right when cryptocurrency was inflating the price of pretty much every graphics card on the market. However, that gold rush has died down, and availability is pretty much back to normal. But, since the Vega 56 and 64 came out, AMD has been relatively quiet about new additions to the Vega family, beyond some professional-grade GPUs and laptop graphics for Apple. However, at CES 2019, AMD revealed the Radeon VII, which takes the mantle from the AMD Vega 64 and competes with the Nvidia GeForce RTX 2080.

AMD Radeon RX Vega price

Back in August 2017, the AMD Radeon RX Vega 64 launched as a standalone card for $499 or £549 (about AU$630). That's a big step up from the Radeon RX 500 lineup, which starts as low as $169 (about £136, AU$219), but this is meant to be AMD's enthusiast-class graphics card. It's a long-awaited sequel to the company's mostly defunct Radeon R9 Fury X, which was still going for a cool $389 (about £315, AU$505) up until the very end. What's more, at this price point it's competitive against the $549 (£539, AU$1,299) Nvidia GTX 1080 Founders Edition. After the launch of the Vega 64 in the US, AMD offered two other editions of the card that could be bought exclusively as part of ‘Radeon Packs’, which bundled two free games, a discount on a Samsung CF791 curved ultra-wide FreeSync monitor and a nice $100 discount on a Ryzen 7 processor and motherboard. The packs are unfortunately unavailable to purchase at the time of writing, however. Those hardware discounts only kicked in if users bought the monitor and CPU/motherboard combo at the same time as their Vega GPU. Of course, users could choose not to buy the extra components and peripherals while still getting the two free games, confirmed – in the US at least – to be Wolfenstein II and Prey. 
The limited-edition AMD Radeon RX Vega 64 came as part of a Radeon Black Pack for $599 (£460, AU$750). Meanwhile, the liquid-cooled Vega 64 ran for $699 (£530, AU$875), and could only be purchased as part of AMD's Radeon Aqua Pack. Lastly, you can purchase the AMD Radeon RX Vega 56 by itself for $400 (about £284, AU$500) MSRP. At the end of the day, Vega was – at least at launch – as competitive with Nvidia as Ryzen is with Intel. Thankfully, with the cryptocurrency market cooling down, AMD Vega cards have fallen to a reasonable price. They’re not quite down to MSRP, but they’re pretty close – you won’t have to worry about spending 2-3x MSRP to upgrade your GPU.

AMD Radeon RX Vega specs

Following its 2017 Capsaicin 2 livestream event, AMD revealed the exact specifications for its two new Vega GPUs, as well as its underlying Vega 10 architecture. From the chart above, it's clear the most powerful of the bunch is the liquid-cooled version of the Radeon RX Vega 64. The more expensive water-cooled version operates at higher base/boost clocks, despite sharing otherwise identical specs with its air-cooled twin. The RX Vega 56, on the other hand, is positioned against Nvidia’s GeForce GTX 1070 at $400 (about £305, AU$505). However, early benchmarks cited performance that greatly outpaced that of its closest competition while running triple-A games at 2560 x 1440. Like the Polaris 10 architecture that preceded it, AMD's Vega 10 architecture is built on a 14nm FinFET process that should ultimately make it more power-efficient and robust in performance. Vega 10 is also noticeably skewed towards delivering more compute power than raw graphical strength, much like Nvidia's Pascal GPUs. This will likely mean Vega will be able to better handle the complex calculations behind procedural surfaces, volumetric lighting and the overall quality of in-game graphics. 
This era of Vega GPUs also ditches GDDR5 memory altogether for a new format known as HBM2, or high-bandwidth memory. AMD believes its efficient memory offers a 75% smaller footprint than GDDR5 while also being 3.5 times more power-efficient. AMD also claims that Vega’s high-bandwidth cache controller will improve maximum frame rates by 50% and minimum frame rates by 100% over GDDR5 memory. Interestingly, Vega 10 is also designed to support up to 16GB of HBM2 memory – which we've already seen from Radeon Vega Frontier Edition – so Nvidia's Titan X may finally get some competition from AMD. Stay tuned for more details regarding everything AMD Vega, as we'll be updating this page with the latest as it happens. In the meantime, be sure to update to the latest version of AMD Radeon Software Crimson ReLive for a generous helping of GPU control features. Gabe Carey also contributed to this article
Intel Cannon Lake release date, news, and rumors Posted: 13 May 2019 09:24 AM PDT Remember how Intel was going to release Cannon Lake in 2016? And, then when it got pushed back to 2018? Well, according to Intel’s Q1 financial results, we won’t see the long-awaited follow-up to Kaby Lake until 2019 – at least in a consumer-ready state. Before we get into Cannon Lake, we’re going to have to dive into its background, because it gets messy. You see, in the past, Intel has followed a simple ‘Tick Tock’ release schedule, where each generation would alternate between introducing a new die process and architecture. This is why we all expected Cannon Lake to succeed Skylake way back in 2016. Reality is never quite so simple. Intel has repeatedly pushed Cannon Lake, and the rest of its 10nm silicon, back. Since Skylake, Intel has launched Kaby Lake, Coffee Lake and Coffee Lake Refresh, three more refinements of the 14nm process. We’re not even sure when Intel will release Cannon Lake, if it ever does. This delay of releasing consumer-focused 10nm processors may actually explain why AMD has claimed more than half of the CPU market. But, at least Intel is sticking to a 2019 release date for its 10nm Ice Lake chips for laptops. Intel keeps denying that Cannon Lake is canceled, but with Ice Lake appearing in the very release window we assumed Cannon Lake would have, it’s hard to believe. Plus, Intel has revealed Sunny Cove as its next architecture. Still, we did get a Cannon Lake laptop chip, so we’ll keep trying to figure out what Intel is doing. So, be sure to keep this page bookmarked and we’ll update it with any new Cannon Lake information. Cut to the Chase
Release date

We were supposed to get Cannon Lake twice already. The presumably 9th-generation Intel CPUs were initially supposed to follow Skylake in 2016, then Kaby Lake in 2018. But according to Intel’s Q1 2018 financial report, it’s “currently shipping low-volume 10nm product and now expects 10nm volume production to shift to 2019.” So, we’re probably not getting Cannon Lake in 2018. There was some recent speculation that pointed to Intel simply cancelling Cannon Lake, because it wasn’t financially feasible to shift to 10nm. However, Intel came out and defended itself, saying that it was ‘making good progress on 10nm’ and that ‘yields are improving’. So, we just have to see when this smaller manufacturing process makes it to market. But, then, we keep getting more and more evidence that Cannon Lake might never see the light of day. Intel just announced its Sunny Cove 10nm microarchitecture, for both server and client processors, for release in late 2019. So, is Sunny Cove just Cannon Lake rebranded? Is Cannon Lake cancelled? Who knows. Either way, we’ll have to wait until at least 2019 to see Cannon Lake, or whatever it’s called now – but when in 2019? Well, we don’t know yet, but we did get our hands on a leaked Intel roadmap that shows Coffee Lake Refresh being Intel’s main platform through Q2 2019. And, given Intel’s recent supply issues, we could see the release pushed back to the latter half of 2019 or even 2020.

Price

Intel probably isn’t going to rock the boat too much here. We expect prices to fall in line with the current lineup of 8th-generation Coffee Lake processors. We’ve taken the liberty of listing some of the prices here.
Unfortunately, we won’t know until Intel actually releases pricing information, but between generations, Intel doesn’t generally shift its pricing all that much.

Specs

This is where things are going to get interesting. It’s been a very long time since we’ve seen a die process shrink at Intel, but when Broadwell succeeded Haswell (yeah, it’s been that long), we saw 30% gains in efficiency. This should mean that battery life in the best laptops will surge, and we’ll be able to overclock even harder while keeping temps manageable. Tom’s Hardware has reported that a dual-core Cannon Lake CPU was shipped last year, according to a Spectre microcode guidance document, but it’s highly unlikely that consumer units will be the same – otherwise, AMD would eat them alive. We’ll likely see higher core counts, as Intel is going to want to go up against AMD in a major way. This dual-core chip was likely the Intel Core i3-8121U processor that was just listed by Intel. According to the product listing, the processor features two cores and four threads, delivering a 2.2GHz base clock and a boost clock of 3.2GHz. It also has a TDP of just 15W and a 4MB cache. And, AnandTech seems to have got its hands on a laptop running that Core i3-8121U for testing – and, well, it’s not the next-generation revolution you were expecting. It beat the Kaby Lake Intel Core i3-8130U in some benchmarks, but many of the tests put the Cannon Lake mobile chip behind in performance, while consuming more power. We’ve also seen a leaked document that suggests a new Z390 chipset in the works that would support existing Coffee Lake CPUs and possibly Cannon Lake processors as well, suggesting enthusiasts won’t need to upgrade their motherboard to get on the Cannon Lake train. 
If the newly announced Intel Core i7-8086K, with its 5GHz boost clock, is any clue as to what we can expect from future Intel products, the move to a 10nm process won’t only bring higher core counts, but crazy-high clock speeds as well. Ultimately, we won’t know what performance advantages Cannon Lake CPUs will offer until Intel shares some more information – and, because the chips have been pushed back until 2019, it’s likely we won’t hear anything from the tech behemoth until Computex 2019. But don’t worry: if anything changes, we’ll update this page. So, keep your eyes glued to it.
AMD Ryzen Threadripper 3rd Generation release date, news, and rumors Posted: 13 May 2019 09:24 AM PDT Now that AMD has revealed its first 7nm Ryzen 3rd Generation processors, what will the future look like for Threadripper 3rd Generation – its next line of HEDT processors? The new Zen 2 architecture was revealed at CES 2019, with AMD promising dramatically improved efficiency and performance, thanks to the new 7nm manufacturing process. This is definitely exciting information if you’re looking to upgrade to a new Ryzen chip, but we’re interested in Threadripper 3rd Generation here. So, make sure you keep this page bookmarked, and we’ll update it with any Ryzen Threadripper 3rd Generation news and rumors that come our way. Cut to the chase
AMD Ryzen Threadripper 3rd Generation release date

AMD hasn’t come out and announced the release date for the next Threadripper chips, but the last two generations have had pretty consistent release dates, so we have a solid basis for speculation. Both the original Threadripper and Threadripper 2nd Generation launched in August of 2017 and 2018, respectively. We’re fairly sure that AMD is going to follow the same general release schedule this time around but, obviously, we don’t know that for sure. The Zen 2 architecture will launch in mid-2019, likely in Q2 or Q3, in the form of consumer-focused Ryzen chips. Right now, the rumors are pointing to a July release date for the consumer-focused Ryzen 3rd Generation chips. In fact, AMD CEO Lisa Su has hinted that she’d “provide new details about the next generation of high-performance platforms and products” at Computex 2019. We’re not sure if that will include Threadripper 3rd Generation, as AMD’s HEDT platforms usually launch after the consumer-focused chips. But, it looks like AMD Ryzen Threadripper 3rd Generation might be delayed until 2020. AMD’s next-gen HEDT chips have dropped off of Team Red’s roadmap entirely, so we might be waiting quite a while to get our hands on them. This isn’t the most concrete of rumors, but it’s better to be prepared for the worst. We’ll update this article as soon as we hear more substantial rumors about the release date.

AMD Ryzen Threadripper 3rd Generation price

Now, so far ahead of AMD Ryzen Threadripper 3rd Generation’s launch, we don’t have any official pricing information, but we can look at past generations to get an idea of what AMD’s next HEDT chips will cost. The original lineup of Threadripper topped out with the $999 (£999, AU$1,439) AMD Ryzen Threadripper 1950X, but the second generation introduced surprisingly lower-priced replacements. There were also two higher-specced – and higher-priced – additions. We believe Threadripper 3rd Generation will follow the latter model. 
The prices of AMD Ryzen Threadripper 2nd Generation chips are as follows:
AMD could very well surprise us and introduce an even higher-end SKU, but we don’t think that’s likely. Stay tuned, and we’ll update this page as soon as we hear any Threadripper 3rd Generation price information.

AMD Ryzen Threadripper 3rd Generation specs

With AMD Ryzen Threadripper 3rd Generation, we expect to see a huge bump in core counts, efficiency and per-core performance. If you need a refresher on last year’s Threadripper lineup’s specs, they’re as follows:
We’re sure that with the move to 7nm, AMD is going to massively improve specs. But, when it comes to Threadripper 3rd Generation, or Castle Peak, rumors are thin on the ground. However, there is plenty of buzz about the AMD Ryzen 3000 series, not to mention the 64-core AMD Epyc server CPU. While we’ve seen speculation suggesting that the AMD Ryzen 9 3850X will have 16 cores and 32 threads with a max clock of 5.1GHz, that hasn’t been confirmed. What we do know officially is that Ryzen 3rd Generation will include at least an 8-core Ryzen processor that matches the Intel Core i9-9900K’s Cinebench score. AMD also showed off system power in its test: the whole AMD system ran at just 130W during the Cinebench run, compared to the Intel system’s 180W. We don’t know what frequency this chip was running at, but this high performance combined with high efficiency is promising for whatever Threadripper chips come from Zen 2. This Zen 2 processor was revealed to have an 8-core chiplet that was smaller than the IO die. This means that Threadripper 3rd Generation should fit many more cores in the same amount of space. We could very well see a 64-core Threadripper chip later this year. It wouldn’t even be the first 64-core processor in AMD’s lineup: Team Red recently showed off its latest Epyc processors, based on its Rome microarchitecture, with up to 64 cores. If AMD is able to bring those high core counts to Castle Peak HEDT processors, we could see AMD absolutely dominate Intel’s Basin Falls Refresh – and potentially the Cascade Lake-X family that’s rumored to follow it. And, if Intel can’t answer with anything but 14nm silicon, AMD might claim the HEDT throne just as it toppled Intel’s mainstream sales. This is especially true if Intel can’t get a smaller manufacturing process out before Zen 3 arrives and refines AMD’s 7nm process, likely in 2020.
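With matching Cinebench scores, the efficiency advantage in AMD's demo is just the inverse of the power ratio. A quick sketch (the score value below is a placeholder, since AMD didn’t publish the exact number):

```python
# AMD's CES demo: matching Cinebench scores at 130W (Ryzen system) vs
# 180W (Intel system). With equal scores, relative perf-per-watt reduces
# to the inverse power ratio.
def perf_per_watt(score, system_watts):
    return score / system_watts

score = 2000  # hypothetical equal Cinebench score for both systems
advantage = perf_per_watt(score, 130) / perf_per_watt(score, 180)
print(f"{(advantage - 1) * 100:.0f}%")  # → 38%
```

In other words, on this one (vendor-picked) benchmark the Zen 2 system delivered roughly 38% more work per watt at the system level.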
Intel Whiskey Lake release date, news and rumors Posted: 13 May 2019 09:23 AM PDT Intel Whiskey Lake has been officially launched, and it’s the latest line of 8th-Generation mobile processors behind the next generation of laptops. This next-gen mobile architecture was announced back at Computex 2018 alongside the Amber Lake Y-series fanless chips. We’re not sure why there are two different mobile microarchitectures, but they’re both products of the third refinement of Skylake. Both Whiskey Lake and Amber Lake will take Kaby Lake Refresh’s spot as the best processors for laptops and Ultrabooks. But, what about those Coffee Lake U-series processors announced in April 2018? Unfortunately, we didn’t see many Coffee Lake U-equipped laptops beyond the 13-inch MacBook Pro 2018. It seems like most laptop manufacturers skipped Coffee Lake U in favor of Whiskey Lake, probably thanks to the minimal improvements over Kaby Lake R. But, there are plenty of Whiskey Lake laptops out there today, like the Huawei MateBook 14 and the Razer Blade Stealth. So, even though Intel is still suffering from supply problems, Whiskey Lake is behind some of the best Ultrabooks out right now, like the new Dell XPS 13. So, with that in mind, we’ll dive into everything Whiskey Lake has to offer. Keep this page bookmarked, and we’ll keep it updated with all the latest info. Cut to the chase
Release date

While Whiskey Lake was announced back in 2018, it wasn’t until CES 2019 that we really started to see these chips show up in actual devices. There are plenty of new devices with Whiskey Lake and Amber Lake chips, like the Dell XPS 13, the new Asus ZenBooks and the thin and light Acer Swift 7 2019. And, if you’re looking for the more powerful H-series mobile processors, Intel has just released its 9th-generation Coffee Lake Refresh chips for laptops. These will be behind most gaming laptops and mobile workstations this year, while Whiskey Lake will remain behind most Ultrabooks. As for the next generation of Ultrabook-class processors: Intel has announced Ice Lake, built on its 10nm process. Intel claims these processors will be out this summer for real this time, but we’ll probably start seeing laptops actually packing the chips in early 2020 – much like the launch of Whiskey Lake.

Price

Because Whiskey Lake is a mobile microarchitecture, it’s unlikely that the price of the actual chip will affect most consumers. For instance, the Intel Core i5-8250U is listed at $297 (about £225, AU$400) and the Intel Core i7-8550U at $409 (about £310, AU$551). However, these aren’t really public-facing prices, and are rather what laptop manufacturers pay to include these chips in their laptops. So, we’re getting three 15W U-series processors and three 5W Amber Lake Y-series chips. With a Core i3 (or m3), i5 and i7 in either category, we can expect a pretty wide range of laptops, starting at around $500 (around £389, AU$680) for the m3 and i3 laptops, whereas Core i7-backed notebooks will be more expensive, likely around $1,000 (£760, AU$1,350). Either way, Whiskey Lake has just launched, so we’re sure we’ll see pricing for the laptops sporting these new processors in the very near future.

Specs

Intel introduced three Whiskey Lake processors: a Core i3, a Core i5 and a Core i7. 
The Core i5 and i7 are both four-core, eight-thread chips, much like their Kaby Lake R brethren, whereas the Core i3 still rocks two cores and four threads. Where they really set themselves apart is in their clock speeds. The stock clocks on the Core i5 and i7 are virtually the same as those of the Core i5-8250U and Core i7-8550U, at 1.6GHz and 1.8GHz respectively. But Intel has boosted the Turbo clocks for the Core i5-8265U and Core i7-8565U drastically – up to 3.9GHz and 4.6GHz, respectively. We haven’t seen what this hike translates into in terms of real-world performance, but this much of an increase in Turbo Boost speeds can back up Intel’s claim that Whiskey Lake can deliver up to a double-digit increase in performance over 7th-generation Kaby Lake processors. As far as the integrated GPU goes, though, it looks like Intel is sticking with the Intel UHD Graphics 620 chip found in the Kaby Lake R chips, so don’t expect vastly improved GPU performance with this new microarchitecture – even if Intel says gamers will be able to play games using just the integrated graphics. What’s probably more noteworthy than clock speeds and GPU improvements, though, is the fact that Intel has found a way to integrate 802.11ac Wi-Fi into the PCH on Whiskey Lake processors. This functionality isn’t included in the Amber Lake Y-series chips, but Intel is offering discrete solutions for those fanless models – on top of eSIM support for laptop makers that want to build LTE modems into new laptops. We’re sure we’ll get more information about how these new processors perform once we get our hands on laptops running them. So, until then, be sure to check back often, as we’ll add any information that comes our way.
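Intel's "double-digit" uplift claim checks out on Turbo clocks alone. Note the Kaby Lake R boost clocks used as the baseline here (3.4GHz for the Core i5-8250U, 4.0GHz for the Core i7-8550U) are our own assumption, not figures from this article:

```python
# Percentage Turbo-clock uplift between Kaby Lake R and Whiskey Lake.
# Baseline boost clocks (3.4GHz / 4.0GHz) are assumed, not from the text.
def uplift_pct(old_ghz, new_ghz):
    return (new_ghz - old_ghz) / old_ghz * 100

print(f"{uplift_pct(3.4, 3.9):.1f}%")  # i5-8250U -> i5-8265U → 14.7%
print(f"{uplift_pct(4.0, 4.6):.1f}%")  # i7-8550U -> i7-8565U → 15.0%
```

Both uplifts land around 15%, which is consistent with the double-digit performance claim for bursty, Turbo-bound workloads.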