Now what exactly does this have to do with the eighth generation? Are you aware that the GPU gap between the XSX and the PS5 is much bigger than the one between the PS4 and the XONE?
4K
3840 × 2160 = 8,294,400
1080p
1920 × 1080 = 2,073,600
900p, which was the XBOX ONE's resolution (and it dropped as low as 720p)
1600 × 900 = 1,440,000
The PS4 ran at roughly 44 per cent more pixels than the XBOX ONE.
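A quick check of that arithmetic (using the standard 1920 × 1080 and 1600 × 900 figures; the snippet is purely illustrative):

```python
# Pixel counts for the resolutions discussed above.
uhd  = 3840 * 2160   # 8,294,400 (4K)
fhd  = 1920 * 1080   # 2,073,600 (1080p is 1920x1080, not 1980x1080)
p900 = 1600 * 900    # 1,440,000 (900p)
print(fhd / p900)    # 1.44 -> PS4 at 1080p pushed ~44% more pixels than Xbox One at 900p
```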
Did anyone have a problem with that deep a gap?
At worst, the PS5 will run games at 2050p or 2000p.
Cerny himself, in his PS5 talk, said that under the previous scenario running the GPU at a 2GHz clock and the CPU at 3.5GHz was simply not feasible; they tried everything and it didn't work, so instead they went with a fixed power budget and variable clocks, where at its peak the GPU can reach 2.23GHz.
The 10.3TF figure was also the only precise number Cerny gave for the PS5's GPU, and he made a point of highlighting it.
If that SSD and that Ryzen CPU were already enough to make the PS5 next-gen, why did Sony go for 10TF on the GPU at all?
Why sacrifice CPU clocks to get there? And why expect developers to sit down and optimise their games around the PS5's power budget so that, as the saying goes, they draw less energy at higher clocks and the system doesn't overheat, and so on?
I honestly don't know what that is supposed to mean. Is a developer supposed to write code and then check how hot the PS5 gets?
The earlier rumours everyone has seen said the system was supposed to be 9.2TF (as Cerny himself conceded when he referred to the 2GHz clock of earlier builds), and even so they went for 2.23GHz, and now this guy (zoo) has come along saying things don't look great.
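As a rough sanity check, both teraflop figures drop straight out of the clock speed, taking the 36 compute units Cerny confirmed in the Road to PS5 talk (the arithmetic below is mine, not his):

```python
# FP32 TFLOPS = shader ALUs x 2 ops/cycle x clock (standard GPU arithmetic).
shaders = 36 * 64                     # 36 CUs x 64 ALUs each = 2304 shaders
print(shaders * 2 * 2.00e9 / 1e12)    # ~9.2 TF at the rumoured 2.0GHz of earlier builds
print(shaders * 2 * 2.23e9 / 1e12)    # ~10.28 TF at the final 2.23GHz boost clock
```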
I mention this for the people who first say it isn't a negative rumour at all, then say the guy is just a YouTuber and a Twitter account, then that he's a Microsoft man, and finally that, oh come on, he's Chinese, and so on.
Presumably now, since the cycle has to be completed, they'll say his name isn't Schreier.
By "the previous scenario" he means RDNA 1, professor.
The PS5 uses RDNA 2.
Why does it use variable clocks? Cerny explained it in full: so that problems such as heat and loud fan noise don't arise.
"Why do they expect developers to sit down and optimise their games around the PS5's power budget?"
The idea is that developers may learn to optimise in a different way, by achieving identical results from the GPU but doing it faster via increased clocks delivered by optimising for power consumption. "The CPU and GPU each have a power budget, of course the GPU power budget is the larger of the two," adds Cerny. "If the CPU doesn't use its power budget - for example, if it is capped at 3.5GHz - then the unused portion of the budget goes to the GPU. That's what AMD calls SmartShift. There's enough power that both CPU and GPU can potentially run at their limits of 3.5GHz and 2.23GHz, it isn't the case that the developer has to choose to run one of them slower."
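A minimal sketch of what that budget-sharing could look like in practice; the wattages, the CPU cap, and the split policy here are invented for illustration, not Sony's or AMD's actual numbers:

```python
TOTAL_BUDGET_W = 200.0  # hypothetical total SoC power budget

def allocate(cpu_demand_w: float, cpu_cap_w: float = 60.0) -> tuple[float, float]:
    """Give the CPU what it asks for up to its cap; any unused headroom
    shifts to the GPU, which then clocks as high as that share allows."""
    cpu_w = min(cpu_demand_w, cpu_cap_w)
    return cpu_w, TOTAL_BUDGET_W - cpu_w

print(allocate(60.0))  # CPU saturated:  (60.0, 140.0)
print(allocate(35.0))  # CPU under-used: (35.0, 165.0) -> more headroom for GPU clocks
```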
"There's another phenomenon here, which is called 'race to idle'. Let's imagine we are running at 30Hz, and we're using 28 milliseconds out of our 33 millisecond budget, so the GPU is idle for five milliseconds. The power control logic will detect that low power is being consumed - after all, the GPU is not doing much for that five milliseconds - and conclude that the frequency should be increased. But that's a pointless bump in frequency," explains Mark Cerny
At this point, the clocks may be faster, but the GPU has no work to do. Any frequency bump is totally pointless. "The net result is that the GPU doesn't do any more work, instead it processes its assigned work more quickly and then is idle for longer, just waiting for v-sync or the like. We use 'race to idle' to describe this pointless increase in a GPU's frequency," explains Cerny. "If you construct a variable frequency system, what you're going to see based on this phenomenon (and there's an equivalent on the CPU side) is that the frequencies are usually just pegged at the maximum! That's not meaningful, though; in order to make a meaningful statement about the GPU frequency, we need to find a location in the game where the GPU is fully utilised for 33.3 milliseconds out of a 33.3 millisecond frame.
"So, when I made the statement that the GPU will spend most of its time at or near its top frequency, that is with 'race to idle' taken out of the equation - we were looking at PlayStation 5 games in situations where the whole frame was being used productively. The same is true for the CPU, based on examination of situations where it has high utilisation throughout the frame, we have concluded that the CPU will spend most of its time at its peak frequency."
Put simply, with race to idle out of the equation and both CPU and GPU fully used, the boost clock system should still see both components running near to or at peak frequency most of the time. Cerny also stresses that power consumption and clock speeds don't have a linear relationship. Dropping frequency by 10 per cent reduces power consumption by around 27 per cent. "In general, a 10 per cent power reduction is just a few per cent reduction in frequency," Cerny emphasises.
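Those two figures are consistent with the usual cubic rule of thumb for dynamic power (P roughly proportional to f·V², with voltage tracking frequency, giving P ∝ f³ -- my assumption; the article gives no formula):

```python
# P ∝ f^3 is an assumed model (f x V^2, with V scaling with f), not from the article.
print(1 - 0.90 ** 3)        # ~0.271 -> ~27% power saved from a 10% frequency drop
print(1 - 0.90 ** (1 / 3))  # ~0.034 -> a 10% power cut costs only ~3.4% of frequency
```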
It's an innovative approach, and while the engineering effort that went into it is likely significant, Mark Cerny sums it up succinctly: "One of our breakthroughs was finding a set of frequencies where the hotspot - meaning the thermal density of the CPU and the GPU - is the same. And that's what we've done. They're equivalently easy to cool or difficult to cool - whatever you want to call it."
There's likely more to discover about how boost will influence game design. Several developers speaking to Digital Foundry have stated that their current PS5 work sees them throttling back the CPU in order to ensure a sustained 2.23GHz clock on the graphics core. It makes perfect sense as most game engines right now are architected with the low performance Jaguar in mind - even a doubling of throughput (ie 60fps vs 30fps) would hardly tax PS5's Zen 2 cores. However, this doesn't sound like a boost solution, but rather performance profiles similar to what we've seen on Nintendo Switch. "Regarding locked profiles, we support those on our dev kits, it can be helpful not to have variable clocks when optimising. Released PS5 games always get boosted frequencies so that they can take advantage of the additional power," explains Cerny.
But what if developers aren't going to optimise specifically to PlayStation 5's power ceiling? I wondered whether there were 'worst case scenario' frequencies that developers could work around - an equivalent to the base clocks PC components have. "Developers don't need to optimise in any way; if necessary, the frequency will adjust to whatever actions the CPU and GPU are performing," Mark Cerny counters. "I think you're asking what happens if there is a piece of code intentionally written so that every transistor (or the maximum number of transistors possible) in the CPU and GPU flip on every cycle. That's a pretty abstract question, games aren't anywhere near that amount of power consumption. In fact, if such a piece of code were to run on existing consoles, the power consumption would be well out of the intended operating range and it's even possible that the console would go into thermal shutdown. PS5 would handle such an unrealistic piece of code more gracefully."
Right now, it's still difficult to get a grip on boost and the extent to which clocks may vary. There has also been some confusion about backwards compatibility, where Cerny's comments about running the top 100 PlayStation 4 games on PS5 with enhanced performance were misconstrued to mean that only a relatively small amount of titles would run at launch. This was clarified a couple of days later (expect thousands of games to run) but the nature of backwards compatibility on PlayStation 5 is fascinating.
Oh, bless this walking simulator…
But on to the next point.
What about Microsoft's SSD?
So what you're saying is that Microsoft's SSD isn't an SSD, that only Sony's SSD is an SSD, and that since Microsoft's SSD isn't an SSD you're writing it out of SSD-hood altogether?
Keep telling yourself you'll see better graphics on the Xbox Series X (you don't see them even now, let alone later).
I'm just letting you know, so you can carry out your next deflections with more care.
Huh?!
You mentioned me but didn't write anything! Are you trying to wind me up?!
I was delighted that Cyrus the Great had mentioned me; I rushed over and all it said was "Microsoft's SSD"!
So what am I supposed to do with that now? Build a sentence around it?
These comments of yours stem purely from not knowing the PC market.
The funny thing is that even you won't see the differences you keep citing (apart from faster loading) until at least the second or third year, even in exclusives, because the engines have to be upgraded first and games built on that basis. And yet you expect the PC market to react quickly to this change, when it is the most passive market imaginable.
The SSD is certainly not going to play some grander role beyond our comprehension. It will do exactly what it was ultimately designed to do.
So these were the things our minds lack the capacity to process! Interesting!
That was in reply to the dear friend asking why Sony's SSD supposedly isn't comparable to the Series X's: Microsoft's games also have to run from an HDD.
With Microsoft, by contrast, it isn't even clear whether the console or the dev kit came first, given that its own games had less than a month to get demo output ready.
It took two weeks to get Gears 5 to RTX 2080 standard on Xbox Series X - VG247
The Minecraft demo was, by their own admission, ported in just one month.
Sony began shipping dev kits back in 2018; all of Sony's studios have upgraded their engines and are building their games around the SSD accordingly.
Wait, did you just call the PC garbage!? :/
No, my good man,
the garbage goes by the names
Xbox Series S and PC.
On PC, where many people still use HDDs, the Series X's SSD will remain nothing more than faster loading.