The AMD Mantle Thread

Status
Not open for further replies.

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
Basically true. Yet I can see the argument that it isn't really compelling over the GT750M unless an OEM wants to obsess over minor TDP differences. Shrug.

But Iris Pro is exciting in that we know Intel knows what they're doing with integrated graphics. Broadwell should be good stuff in terms of integrated graphics! Either way, if Mantle is being argued as an equalizer against Intel mobile graphics or the GT750M, I dunno. There's a huge performance gap between some of AMD's *mobile* APUs and the Iris Pro or GT750M. I seriously doubt Mantle will make up that ground. We'll see, though.
 
Last edited:

VulgarDisplay

Diamond Member
Apr 3, 2009
6,188
2
76
The only reason Iris Pro is faster than AMD's APUs is that it has a solution for memory bandwidth. That's one of the things I stated AMD needs to do in the post that sparked this tangent.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
Huh? I can't think of a mobile AMD APU that is even in the same ballpark as Iris Pro, and mobile doesn't have an awful lot of options in terms of memory anyway....
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
At least you found one of the many caveats in what I was saying. Iris Pro is like $600 minimum, isn't it?

Like I said, AMD would leave Intel in the dust if they solved memory bandwidth and got Mantle usage widespread. They would eat up the entire low end PC gaming market.

It won't always be super expensive. The tech will likely trickle down in due time, meaning the high end CPUs won't be the only ones with decent IGP power.
 
Last edited:

gorobei

Diamond Member
Jan 7, 2007
3,930
1,411
136
Did you actually read what I said?

The API that the Xbox One uses although similar in features to the Direct3D version, is not the same.

It's a low level API with greatly reduced overhead, so it's comparable to Mantle in that respect. That's what Microsoft wants to implement in the Windows version.

your post says explicitly that ms is bringing the xbone api to windows. there is absolutely 0 indication of that in the link you posted.

is there a press release or other statement about "porting the Xbox One's low level API to Direct3D in Windows"? because it hasn't come up in this or any of the other mantle threads.

AMD needs to stop being coy about Mantle and come clean with their plans (if they have any) to open it up to other IHVs and ISVs.

Mantle will not survive for long unless Nvidia, Intel, Microsoft and the rest jump in. Microsoft will definitely be against Mantle because they want to be the sole ISV for PC gaming, which means DirectX exclusivity.

In fact, Microsoft is already in the process of porting the Xbox One's low level API to Direct3D in Windows.

What's going to happen to Mantle when DX12 or whatever has similar low overhead capabilities? You think developers are still going to go with Mantle?
 

desprado

Golden Member
Jul 16, 2013
1,645
0
0
your post says explicitly that ms is bringing the xbone api to windows. there is absolutely 0 indication of that in the link you posted.

is there a press release or other statement about "porting the Xbox One's low level API to Direct3D in Windows"? because it hasn't come up in this or any of the other mantle threads.
Yes, he is saying that MS is planning to bring their Xbox API to the PC:
http://blogs.windows.com/windows/b/appbuilder/archive/2013/10/14/raising-the-bar-with-direct3d.aspx
 

gorobei

Diamond Member
Jan 7, 2007
3,930
1,411
136
If it's a low level API then MS can't implement it to work on PCs without AMD and Nvidia's help.

Low level API implementation requires extensive hardware knowledge, and AMD and Nvidia don't just give that stuff away.

And the question is, why would MS even bother with a new API? DX has been slow for ages, and the console market is a bigger pie for them than the PC one. Is it really worth it for them to go and make a new API?

ms uses new dx versions to force people to upgrade to the latest version of windows. people who wanted dx10 for crysis had to go to vista. they will likely try to make dx12 an exclusive for win8.2 or blue or whatever.

if steamOS takes off, or if mantle can bring new gpu hardware feature sets to w7 users, people will happily put off jumping to the latest winOS for potentially years.

the chances of ms getting a new low overhead api out the door are pretty slim. they are too big and bloated to turn on a dime project-wise. making a new api in less than a year is unlikely. in that time mantle will be out, and a number of devs are already playing with it now. given that amd has been working on this for probably 2 years now (or whenever richard huddy made the dx bloat statement), that means ms will be 3 years behind by the time they get something comparable.

since mantle will already be out, a new dx would only really benefit nvidia.
 

desprado

Golden Member
Jul 16, 2013
1,645
0
0
ms uses new dx versions to force people to upgrade to the latest version of windows. people who wanted dx10 for crysis had to go to vista. they will likely try to make dx12 an exclusive for win8.2 or blue or whatever.

if steamOS takes off, or if mantle can bring new gpu hardware feature sets to w7 users, people will happily put off jumping to the latest winOS for potentially years.

the chances of ms getting a new low overhead api out the door are pretty slim. they are too big and bloated to turn on a dime project-wise. making a new api in less than a year is unlikely. in that time mantle will be out, and a number of devs are already playing with it now. given that amd has been working on this for probably 2 years now (or whenever richard huddy made the dx bloat statement), that means ms will be 3 years behind by the time they get something comparable.

since mantle will already be out, a new dx would only really benefit nvidia.
The funny thing is that even the so-called low level API Mantle still depends on DX features, and it is not an exclusive API - it will be running alongside DX. I don't get this silly idea that DX will only be used by Nvidia.
Not even one Mantle game is out yet and people are already making predictions.
 
Last edited:

gorobei

Diamond Member
Jan 7, 2007
3,930
1,411
136
Yes, he is saying that MS is planning to bring their Xbox API to the PC:
http://blogs.windows.com/windows/b/appbuilder/archive/2013/10/14/raising-the-bar-with-direct3d.aspx

you posted the same link carfax posted. if you had bothered to read the new features or watch the videos or read my post, you would know that the only things listed there are tiled resources, hardware overlay, low latency input, and multi source compositing.

none of those are a low level, to-the-metal, draw-call-improving api.

the new features may come from the xbone api, but they are not a low level version that will eliminate dx abstraction in win8.1
 
Aug 11, 2008
10,451
642
126
ms uses new dx versions to force people to upgrade to the latest version of windows. people who wanted dx10 for crysis had to go to vista. they will likely try to make dx12 an exclusive for win8.2 or blue or whatever.

if steamOS takes off, or if mantle can bring new gpu hardware feature sets to w7 users, people will happily put off jumping to the latest winOS for potentially years.

the chances of ms getting a new low overhead api out the door are pretty slim. they are too big and bloated to turn on a dime project-wise. making a new api in less than a year is unlikely. in that time mantle will be out, and a number of devs are already playing with it now. given that amd has been working on this for probably 2 years now (or whenever richard huddy made the dx bloat statement), that means ms will be 3 years behind by the time they get something comparable.

since mantle will already be out, a new dx would only really benefit nvidia.

Not really. It would benefit Intel integrated as well, which must be 60 or 70 percent of the overall PC market. Granted, most of those are not being used for high end gaming, but it is still a huge part of the market that mantle will not be used for.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
That is the big question. APUs are usually bandwidth limited. One would assume that HSA/hUMA/Mantle would help alleviate this problem, but in all the publicity no one has directly addressed this as far as I know. All the talk has been about increasing draw calls, but I don't know how this relates if you are bandwidth limited.

I am pretty sure AMD has greatly improved the memory efficiency of the GCN 1.1 arch compared to 1.0; they are just not so vocal about it because it's strategically important, and they keep it secret.

Hawaii scales perfectly with clocks and rules at 4K.
The Kaveri 1080p BF4 demonstration is only possible because there is something radically more efficient than the Tahiti generation.

But yes, they need DDR4 to get better from there, and surely it's bandwidth limited.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Not really. It would benefit Intel integrated as well, which must be 60 or 70 percent of the overall PC market. Granted, most of those are not being used for high end gaming, but it is still a huge part of the market that mantle will not be used for.

I don't see how a low level API for the AMD APU in XBox will benefit Intel.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
Not really. It would benefit Intel integrated as well, which must be 60 or 70 percent of the overall PC market. Granted, most of those are not being used for high end gaming, but it is still a huge part of the market that mantle will not be used for.

Obviously, making two different low level access paths, one for Intel gfx and one for NV, and implementing them in a new MS API is not economically sensible. Far from it.
What is the developer's incentive to target an old Intel arch that will soon change, on machines that are not in any way comparable to the consoles?

They aren't even midrange gaming machines, and to a large degree they sit in the B2B market. It's not a market for developers and new engines. Old games can still sell to Intel integrated gaming; that works fine and will continue to work, if not better, with more powerful Intel gfx. And Intel doesn't have to invest anything for that. The market is not interesting for them.

But NV is in a difficult position here to keep their margins. They will have to pay for the software development, both for the API and drivers, and 100% of the dev cost on each and every game. Perhaps it's better cost/benefit for them to just sell the cards cheaper to compensate. It's a fast-working method. And it will work. And they have plenty of margin, profit, cash and brand for it.

I would pick that option if I were NV, as AMD doesn't have the profit or cash to follow the cuts. Brute force. Dump prices to undermine the competitor's platform when it's most fragile. Right now. It's an investment. Nothing is as effective as that at killing upcomers and smaller competitors.
 

Noctifer616

Senior member
Nov 5, 2013
380
0
76
The funny thing is that even the so-called low level API Mantle still depends on DX features, and it is not an exclusive API - it will be running alongside DX. I don't get this silly idea that DX will only be used by Nvidia.
Not even one Mantle game is out yet and people are already making predictions.

DX doesn't have features of its own. Take tessellation, for example. The Xbox 360 has a tessellation engine because ATI cards had had one for a while. But why didn't we see tessellation in games before DX 11 even though ATI cards had that feature? Because DX didn't support it.

DX doesn't give you any extra features; it simply allows already existing features on the graphics card to be used in games.

You could make a game engine that doesn't have DX support at all and only runs on Mantle, and it could do everything a DX engine could do, because all the features DX supports are already on the GPU.
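To make that point concrete, here is a minimal sketch (mine, not from anyone in the thread, and assuming a Windows box with the D3D11 SDK headers and d3d11.lib available) of how a Direct3D 11 application finds out what the API will expose on the installed card. The negotiated feature level, not the silicon, decides whether the tessellation stages are reachable:

Code:
#include <d3d11.h>
#include <cstdio>
#pragma comment(lib, "d3d11.lib")

int main() {
    // Ask for the highest feature level the driver/hardware pair will accept.
    const D3D_FEATURE_LEVEL wanted[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0
    };
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D_FEATURE_LEVEL got = D3D_FEATURE_LEVEL_10_0;

    HRESULT hr = D3D11CreateDevice(
        nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
        wanted, sizeof(wanted) / sizeof(wanted[0]), D3D11_SDK_VERSION,
        &device, &got, &context);
    if (FAILED(hr)) {
        std::printf("No D3D11 hardware device available.\n");
        return 1;
    }

    // Hull/domain shaders (tessellation) are only reachable at feature
    // level 11_0 and up, whatever extra silicon the card may have.
    std::printf("Feature level 0x%X, tessellation %s through Direct3D.\n",
                static_cast<unsigned>(got),
                got >= D3D_FEATURE_LEVEL_11_0 ? "exposed" : "not exposed");

    context->Release();
    device->Release();
    return 0;
}

A pre-DX11 ATI card with a hardware tessellator reports a feature level below 11_0 here, which is exactly the point above: the feature exists in hardware but the API never hands it to the game.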
 

Imouto

Golden Member
Jul 6, 2011
1,241
2
81
I don't see how a low level API for the AMD APU in XBox will benefit Intel.

Because it won't. Intel will benefit from all this Mantle stuff next year (or the year after) when 20nm GPUs hit the market with a 60% or more performance increase while CPUs see the usual 10-15% perf increase. If you want the highest end or a multi-GPU solution you will need an Intel CPU. That's keeping in mind that 1080p is still the preferred gaming resolution, as I don't really see 1440p taking off now that we have the 2K candy in sight.

Perhaps it's better cost/benefit for them to just sell the cards cheaper to compensate. It's a fast-working method. And it will work. And they have plenty of margin, profit, cash and brand for it.

No they don't. They're already dumping cash into the Tegra brand, hurting their financials and name big time. Q3 was down 13% in revenue and 40% in profit YoY.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,684
338
126
The whole question with mantle and draw calls is how much does doubling draw calls increase performance? Is performance generally draw call limited?

Someone correct me but in BF3 singleplayer it is possible to hit the 200 fps cap on the game fairly easily assuming you have enough GPU power. In MP however, framerates are a lot lower and quite often CPU bound. This is not because of draw calls but because of other game processes (tasks to keep a 64 person match running properly). Decreasing draw call overhead would have a minimal effect on performance then.

It is CPU bound because you have to continually update every object via draw calls. In single player there are far fewer draw calls.

An RTS, on the other hand, has dozens upon dozens of units and will be CPU bound even in single player.
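As a toy illustration of why that cost lands on the CPU (the function names below are invented stand-ins, not a real graphics API): submitting one draw call per unit makes the CPU pay the validation and submission overhead once per object, whereas an instanced draw pays it roughly once per batch.

Code:
#include <cstdio>
#include <vector>

struct Unit { float x, y, z; };

// Hypothetical stand-ins for driver work: a real driver validates state and
// encodes commands for every draw call it receives.
static long long g_cpuWork = 0;
static void drawOne(const Unit&) { g_cpuWork += 100; }            // per-call overhead
static void drawInstanced(const std::vector<Unit>& units) {
    g_cpuWork += 100;                                             // one call's overhead
    g_cpuWork += static_cast<long long>(units.size());            // small per-instance cost
}

int main() {
    std::vector<Unit> army(20000, Unit{0.0f, 0.0f, 0.0f});        // RTS-scale unit count

    g_cpuWork = 0;
    for (const Unit& u : army) drawOne(u);                        // naive: one call per unit
    std::printf("per-object draws: %lld units of CPU work\n", g_cpuWork);

    g_cpuWork = 0;
    drawInstanced(army);                                          // batched submission
    std::printf("instanced draw:   %lld units of CPU work\n", g_cpuWork);
    return 0;
}

The gap between the two numbers is the kind of overhead Mantle (and batching/instancing in general) is trying to cut.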
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,684
338
126
Alrighty. Mantle.

Mantle will surely shine more on an hUMA/HSA APU than on a regular setup with discrete graphics, Radeon or otherwise. Right? What kind of "low level" optimizations would Mantle provide that aren't already feasible today with, say, DirectX?
Mantle must be all about the shared memory space and an API built on top of it, otherwise I've completely missed the point.
Getting Nvidia and co on board with Mantle? Makes no friggin' sense, they don't have the product.
My bet is that AMD is trying to lock down the gaming segment for the next 10 years. Will it play out? Dunno. Probably not, lots of big $$ players involved.
 
Last edited:
Aug 11, 2008
10,451
642
126
I don't see how a low level API for the AMD APU in XBox will benefit Intel.

I understood the post I was referring to as saying ms might put out an alternative low level API, or at least a much more efficient version of DX, as an alternative to mantle that would work with Intel and nVidia. That would clearly benefit both of them. Perhaps I misunderstood the post.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
It is CPU bound because you have to continually update every object via draw calls. In single player there are far fewer draw calls.

An RTS, on the other hand, has dozens upon dozens of units and will be CPU bound even in single player.

Hmm. Even when not on screen?
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,684
338
126
Hmm. Even when not on screen?

GW2 is one of the MMORPGs with the most physics out there.

Arenanet's solution to low frame rates was simply culling people from your screen, which was a pain and led to people being killed by invisible enemies in World vs World, especially in big battles.

Their current implementation gives players options for how many models they will see, and whether the players shown on screen get their actual armor/race/profession displayed or just a single fallback model (or even just a floating name).

Now, in a game like BF3 there aren't races and there's less armor/gear variety, but the physics model has more fidelity and the draw distance is higher.

When you are alone on the screen, performance is much higher than when you have a ton of people around, though.

The game engine still has to know where people are and when to report them to your client.

Multiplayer games aren't easy, especially games with decent physics models - there is a reason high end games end up at 32v32 and many have considerably less than that.
 
Last edited:

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
GW2 is one of the MMORPGs with the most physics out there.

Arenanet's solution to low frame rates was simply culling people from your screen, which was a pain and led to people being killed by invisible enemies in World vs World, especially in big battles.

Their current implementation gives players options for how many models they will see, and whether the players shown on screen get their actual armor/race/profession displayed or just a single fallback model (or even just a floating name).

Now, in a game like BF3 there aren't races and there's less armor/gear variety, but the physics model has more fidelity and the draw distance is higher.

When you are alone on the screen, performance is much higher than when you have a ton of people around, though.

The game engine still has to know where people are and when to report them to your client.

Multiplayer games aren't easy, especially games with decent physics models - there is a reason high end games end up at 32v32 and many have considerably less than that.

The reason GW2 does that is network traffic and server load, nothing to do with GPU or CPU limits on the client. It's a known trade-off for running free servers at minimal cost.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,684
338
126
The reason GW2 does that is network traffic and server load, nothing to do with GPU or CPU limits on the client. It's a known trade-off for running free servers at minimal cost.

Sure it is...
That is why we have so many games that have sub fees and have real physics engines running... wait we don't.

https://forum-en.guildwars2.com/forum/wuv/wuv/Can-you-fix-it-culling

From the programmer Habib Loew
tl;dr: We are working on a fix (a collection of fixes, really) for culling and character loading issues. The fixes require significant changes to a number of game systems and thus require time to implement. We’re not yet ready to discuss a release date for the current set of fixes, but we’re working hard to improve the experience as quickly as we can.
Now for the long version:
We currently use server-side report culling to limit the number of characters that any given game client is aware of. By limiting the number of characters that we report to any given client we also limit the bandwidth used (by the server and the client) and avoid situations where the client is overwhelmed by the number of characters that need to be processed and rendered. While this system has some obvious advantages, and it works well in PvE, the large battles that are the signature of WvW tend to highlight the deficiencies of this approach.
There are also some client-side issues which have contributed to the perception of how our culling system works. Once a character is reported to a given client there’s a non-zero amount of time required to load and initially display the assets associated with that character. Extra load time varies depending on how beefy the client machine is (those with more memory, faster CPUs, more CPU cores, and faster drives experience shorter load times). One of our engine programmers recently completed an optimization pass on the character loading process and so we should be seeing improvements to that part of the issue very soon. Even so, the bulk of the issue remains with the server-side culling as it doesn’t matter how fast your client can load and draw a character if it hasn’t even been told that character should exist yet.
As you may have heard we already have a fix for the server-side culling implemented for sPvP. Because sPvP has dramatically less players we were able deploy our fix immediately without worrying about downstream side effects. WvW, however, operates at a much larger scale than sPvP and so we have a number of additional hurdles to clear before we can turn on the server-side fix. In order to address the culling issue we need to ensure that clients, including min-spec clients, are able to handle rendering and processing many more characters. We also need to ensure that the bandwidth needed by any given client remains reasonable and falls within our min-spec for connectivity. The WvW team is working to address both the bandwidth and the client performance issues even now. The changes that we’re making are complex and have a large impact on the way the game engine works. Because of the level of complexity involved, and the core systems that are impacted, these fixes take time to implement correctly. As such, I can’t give you a date when we’ll be done.
At the end of the day our goal is to dramatically improve the experience of large battles in WvW and provide a substantial increase to the number of players that can be seen by any given client.
https://forum-en.guildwars2.com/for...e-to-match-servers-properly/page/4#post356817

I’d like to take a moment to briefly address a few of the points of serious contention that have come up in this thread and hopefully clear up any confusion:
This is a multi-faceted issue that involves both reporting from the server to the client and client asset loading. Much like a super villain team-up these two parts of the issue are combining to make an unpleasant situation worse. We’re pursuing both parts of the issue and hope to see incremental improvements as we get fixes in and tested.
The reporting issue is really all about performance tradeoffs. Every decision that the server makes about what to report to which client consumes resources, as does the act of reporting itself. These resources (server CPU, network bandwidth, etc.) are finite and are the same ones that are used by every other aspect of WvW as well. How we make use of those resources determines the number of players we can support in a given map and how smooth the simulation feels. The system that we have in place now constitutes an attempt to strike a balance between a perfect simulation that handles all the details and makes them available to every client immediately and a simulation that supports a reasonably large number of players while maintaining smooth performance under most gameplay situations. This is a true dilemma because we really want to achieve both of those goals completely and simultaneously but that just isn’t possible at the moment.
Could we throw more hardware at the problem? Maybe, but the servers that we’re running on now are Serious Business™ and simply buying faster CPUs likely wouldn’t gain us even a linear increase in performance. With today’s hardware I believe that we’re likely to gain far more improvement from code changes than we would from slightly faster CPUs. As you might imagine the way that we manage client/server communication is pretty well core to the way our game works so making changes to that system is a tricky affair that must be undertaken with great care and much testing. Further, since we can’t really create any more resources (CPU, network) every substantial change involves making a hard decision about performance, scale, and completeness of the various aspects of Gw2. Some of the most robust, correct, and appealing solutions to this issue are also the ones that will take the longest to implement correctly, thus adding response time into the mix of factors we need to consider. In a situation like this, sadly, there are no easy answers. That said, we’re evaluating possible changes to reporting even now and are committed to making WvW into the best experience we can.
The client issue relates to the way that we load assets when preparing to display characters. WvW hits this more than most other parts of the game because players are pretty much the most complicated characters that we have and, especially at higher levels, they tend to be quite varied (so things like texture caching don’t help us as much as they might elsewhere in the game). WvW tends to have much higher player densities than the rest of the game so that’s why we see these issues coming up in WvW more than elsewhere. This asset loading issue will be influenced by client hardware (kind of like saying water is wet, I know) but we see this issue crop up on even high end systems so it’s clear that hardware is not the major determining factor. So, while better hardware may improve the situation a bit it won’t make the client issue go away completely. At this point we have a solid repro of the client issue and we’re aggressively pursuing fixes.
All of us who have worked on WvW (and many of those at ArenaNet who have not) are deeply invested in making WvW the best it can be. I personally have dedicated over a year of my life to developing this game type. Other have spent even longer. I know it can be terribly frustrating to deal with these issues (I’m a gamer too, I’ve been there!). I also know that frustration can make it tempting to believe that our silence on the forums means we’re ignoring issues with the game. Please believe me when I tell you that is simply not the case. We must always balance our time on the forums with our time spent working on the game. If we go silent for a while it’s generally because we’re busy working hard so that the next time we post we can have something substantial to tell you.
TL;DR: The issue is real, we’re aware of it, we’re working on fixes/improvements, the fixes/improvements are complicated and I can’t provide you an ETA.
Keep fighting the good fight and we’ll be back to let you know when we can share more details.

https://forum-en.guildwars2.com/forum/wuv/wuv/January-WvW-culling-loading-changes

In the January update we’ll be making a couple of preliminary changes to WvW.
1. The first of our engine changes will be coming on line to help improve character load times by using fallback models.
2. We’ll be switching over to the culling methodology that we trialed in December.

The engine change we’re making uses fallback models to represent characters until their detailed models are fully loaded. The fallback models are cached so that they can display without any asset load delay and there is a distinct fallback model for each race/gender/armor-class combination. As a result of this change players will be able to see characters represented as fallback models as soon as those characters are reported to the client. Once the specific, detailed model for a given character is completely loaded from disk the fallback model will be replaced with the detailed model. This visual compromise will help to ensure that players see other characters as quickly as possible in WvW. Please note that this change does not eliminate delays due to culling, it only addresses delays due to asset load times. Players on higher spec machines would therefore expect to see fallback models less often than players on lower spec machines. This change also lays the groundwork for more extensive uses of fallback models in future updates.
In December we ran a one matchup trial of an updated culling system. Based on all the feedback we received, both during and after the trial, we will be transitioning to the updated culling system that we used in the trial. This update allows the culling system to handle allies and enemies separately so that being surrounded by a group of allies will not impact the culling of enemies (and vice-versa). The general consensus after the trial is that this system lead to a better player experience in WvW. We further saw that some of the issues people had with the new system were related to asset load times rather than culling issues so making this change in combination with our character loading improvements should lead to an improved overall experience. It is important to note, however, that while we believe this change is an improvement to the WvW experience it does not fully address the issues with culling and we are still working towards our goal of removing culling from WvW completely. This change is intended to give players an improved WvW experience while we continue work on our more comprehensive solution.
While this update only contains a couple of visible changes to WvW it lays a lot of important groundwork for future updates. We have some exciting changes coming and this update is just the beginning!

Hi all, I think a little more explanation is in order.
First, let’s call the culling system that we’re switching to “affinity culling” just to make it easier to talk about. Under affinity culling the system handles enemies and allies independently. This means that the maximum number of allies that you can see under affinity culling is 1/2 the maximum number of characters (combined enemy & ally) that you could see under the original culling. Ditto for enemies. In exchange for that cost the benefit that we get is that running with (or through, or past) a large group of allies (e.g. a guild) won’t prevent you from seeing the enemies who are closest to you.
During the initial trial of affinity culling we found that a number of players reported asset loading issues that were highlighted by culling. Any time that your client is aware of another character at all (dot on the map, targetable in world, nameplate is visible, etc.) then culling is no longer a factor because that character has already been reported to your client. In that case if you can’t see the character the issue is one of asset load time. The fallback model feature which is shipping this month directly addresses those issues by providing a cached model to show immediately when the character is reported to the client.
The order of operations looks like this:
1) Character X enters player’s visibility
2) <delay due to the mechanics of culling>
3) Server reports character X to client
4) <delay due to asset load time>
5) Character X full model is visible on-screen to the player
Using fallback models we end up with this instead
1) Character X enters player’s visibility
2) <delay due to the mechanics of culling>
3) Server reports character X to client
4a) Character X fallback model is visible on-screen to the player
4b) <delay due to asset load time>
5) Character X full model is visible on-screen to the player

As you can see we’ve made the asset load delay unimportant (or at least less important) and allowed the player to be aware of character X sooner (in some cases quite a bit sooner).
The combination of affinity culling and fallback models provides a better experience than affinity culling alone. Even so, the affinity culling is not intended to be our last change to the system, but rather to hold players over until we can make more extensive updates to the system (which take time to implement and test). In future updates we’ll be making changes in an effort to eliminate the delay due to culling (step 2 above) and to tell each client about character X as soon as the server decides they have become visible again. We hope to do this by removing culling completely and preserving client performance through a mix of network and engine optimizations, the more extensive use of fallback models, and sundry wizardry.
As I’ve said in the past our ultimate goal is to remove culling and we’re pushing hard to make that happen. This update represents the first steps in that direction and much of what it accomplishes is to lay the groundwork for our upcoming updates. The next few months are an exciting time for us on the WvW team and we’re very excited about the changes that are coming.

Shall I take your word, or the word of the programmer of one of the MMORPGs with the best physics engines and the biggest battles?

And of course ignore my own experience, where reducing the number of models being reported and/or the quality of the models being drawn makes performance rise.

(While this may seem off topic, it is linked to some of the work that goes on behind the scenes while playing games.)
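For what it's worth, here is a rough sketch (all of the type and function names are invented, purely to restate the order of operations Loew describes above) of how report culling plus fallback models plays out on the client: the character becomes useful to the player as soon as the server reports it, and the expensive detailed model catches up whenever its assets finish loading.

Code:
#include <cstdio>
#include <string>
#include <unordered_map>

enum class Visual { NotReported, Fallback, Detailed };

struct ClientView {
    std::unordered_map<std::string, Visual> characters;

    // Step 3 in the list above: the server reports character X to this client.
    void onServerReport(const std::string& name) {
        characters[name] = Visual::Fallback;            // step 4a: cached model, no load delay
        std::printf("%s: fallback model on screen\n", name.c_str());
    }
    // Step 4b finished: the detailed assets have streamed in from disk.
    void onAssetsLoaded(const std::string& name) {
        auto it = characters.find(name);
        if (it != characters.end() && it->second == Visual::Fallback) {
            it->second = Visual::Detailed;              // step 5: swap in the full model
            std::printf("%s: detailed model on screen\n", name.c_str());
        }
    }
};

int main() {
    ClientView client;
    client.onServerReport("EnemyWarrior");   // the player can react right away
    client.onAssetsLoaded("EnemyWarrior");   // visual quality catches up later
    return 0;
}

The server-side cost of deciding what to report is untouched by this; the sketch only covers the client-side half of the fix.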
 
Last edited:

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
He writes exactly what I say. Server CPU, bandwidth, bandwidth and bandwidth.

Every decision that the server makes about what to report to which client consumes resources, as does the act of reporting itself. These resources (server CPU, network bandwidth, etc.) are finite and are the same ones that are used by every other aspect of WvW as well.
We also need to ensure that the bandwidth needed by any given client remains reasonable and falls within our min-spec for connectivity.
We currently use server-side report culling to limit the number of characters that any given game client is aware of. By limiting the number of characters that we report to any given client we also limit the bandwidth used (by the server and the client) and avoid situations where the client is overwhelmed by the number of characters that need to be processed and rendered.
Do you think Mantle will fix this? Even if you fix the draw call load on the CPU, there is still the network load and the server load, and clients would still run like rubbish due to being GPU limited. BF4 on the consoles is a nice example here: 900p and 720p at something between medium and low settings, with the ability to dynamically downscale further. Everyone without an SSD would also suffer badly from texture loading.
 
Last edited: