1/27/2024 0 Comments

Switching speed of a processor

The faster the clock speed, the larger the voltage swings need to be to make a coherent signal. The larger the voltage needs to spike up, the more power is required. The more power that is required, the more heat your chip will give off. This degrades the chips faster and slows them down.

At a certain point, it is simply not worth it to increase the clock speed any more, as the cost of the increased temperature would outweigh the cost of adding another core. This is why there has been an increase in the number of cores.

By adding more cores, the heat goes up linearly: there is a constant ratio between clock speed and power draw. By making cores faster, there is a quadratic relationship between heat and clock speed. When the two ratios are equal, it's time to add another core. This is independent of Moore's Law, but since the question is about the number of clock cycles, not the number of transistors, this explanation seems more apt. It should be noted that Moore's Law does impose limitations of its own, though.

EDIT: More transistors means more work is done per clock cycle. This happens to be a very important metric that sometimes gets overlooked (it is possible for a 2 GHz CPU to outperform a 3 GHz CPU), and it is a major area of innovation today. So even though clock speeds have been steady, processors have been getting faster in the sense that they can do more work per unit time.

EDIT 2: Here is an interesting link with more information on related topics. You may find it helpful.

EDIT 3: Unrelated to the total number of clock cycles (number of cores * clock cycles per core) is the issue of parallelism. If a program cannot parallelize its instructions, the fact that you have more cores means nothing. This used to be a much larger problem than it is today: most languages now support parallelism far better than they used to, and some (mostly functional programming languages) have made it a core part of the language (see Erlang, Ada and Go as examples).

The first thing to remember is that Moore's Law isn't a law, it's just an observation. And it doesn't have to do with speed, not directly anyway. Originally it was just an observation that component density pretty much doubles on some regular timeframe; that's it, nothing to do with speed. As a side effect, it effectively made things both faster (more components on the same chip, with shorter distances between them) and cheaper (fewer chips needed, more chips per silicon wafer).

As chip design follows Moore's Law and the components get smaller, new effects appear. As components get smaller, they get more surface area relative to their size, and current leaks out, so you need to pump more electricity into the chip. Eventually you lose enough juice that you make the chip hot and waste more current than you can use. Though I'm not sure, this is probably the current speed limit: the components are so small that they're harder to make electrically stable. There are new materials that help with this some, but until some wildly new material appears (diamond, graphene) we're going to stay close to the current raw MHz limits.

That said, CPU MHz isn't computer speed, just like horsepower isn't speed for a car. There are a lot of ways to make things faster without a higher top MHz number.

Moore's Law always referred to a process: that you can double the density of components on a chip on some regular, repeating timeframe. Now it seems the sub-20nm process may be stalled; new memory is being shipped on the same process as old memory. Yes, this is a single data point, but it may be a harbinger of the future. An Ars Technica article all but declared it dead.
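The linear-versus-quadratic heat trade-off discussed above can be sketched numerically. This is a toy model, not a formula from the post: it simply assumes each core's heat grows with the square of its clock speed (with an arbitrary constant `k`), while adding cores adds heat linearly.

```python
def heat(cores: int, clock_ghz: float, k: float = 1.0) -> float:
    """Toy total heat: each core contributes k * f^2 (assumed model)."""
    return cores * k * clock_ghz ** 2

def total_cycles(cores: int, clock_ghz: float) -> float:
    """Total clock cycles per second across all cores (perfect scaling assumed)."""
    return cores * clock_ghz

# Option 1: one core pushed to 6 GHz.
# Option 2: two cores at 3 GHz -- the same total cycles for half the heat.
print(total_cycles(1, 6.0), heat(1, 6.0))  # 6.0 cycle-units, 36.0 heat-units
print(total_cycles(2, 3.0), heat(2, 3.0))  # 6.0 cycle-units, 18.0 heat-units
```

Under these assumptions the two configurations deliver the same raw cycle count, but the overclocked single core dissipates twice the heat, which is the economics behind adding cores instead of MHz.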
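The "work per clock cycle" point (EDIT above) can be made concrete with hypothetical numbers: perceived throughput is roughly IPC (instructions per cycle) times clock frequency, so a 2 GHz chip with a higher IPC can outrun a 3 GHz chip. The IPC values below are invented for illustration, not measurements of any real CPU.

```python
def instructions_per_second(clock_hz: float, ipc: float) -> float:
    """Rough throughput estimate: clock frequency times instructions per cycle."""
    return clock_hz * ipc

older_cpu = instructions_per_second(3e9, ipc=1.0)  # 3 GHz, 1 instruction/cycle
newer_cpu = instructions_per_second(2e9, ipc=2.0)  # 2 GHz, 2 instructions/cycle
print(older_cpu, newer_cpu)  # 3e9 vs 4e9: the slower-clocked chip does more work
```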
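The parallelism point (EDIT 3 above) is usually quantified with Amdahl's law, a standard formula the post doesn't name: if a fraction p of a program can run in parallel, the speedup on n cores is 1 / ((1 - p) + p / n). When p is zero, extra cores buy nothing at all.

```python
def speedup(p: float, n: int) -> float:
    """Amdahl's law: speedup on n cores when fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

print(speedup(0.0, 8))  # 1.0   -- fully serial program: 8 cores change nothing
print(speedup(0.9, 8))  # ~4.71 -- mostly parallel program benefits a lot
```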