Problem Detail: In all computer architecture books we study that cache memory can be divided into 3 levels (L1, L2, and L3), and that doing so is very beneficial. Why don't we use the same approach for main memory (RAM)? Is there a particular reason we avoid this?
Asked By : Haider
Answered By : jarrodparkes
Cache memory levels are inherently a "subdivision" of main memory (RAM). Caches were created to speed up access to RAM, exploiting "The Principle of Locality": programs frequently access memory locations that are close (local) to previously accessed locations, so it pays to keep those sections of data in a small, fast cache instead of going all the way out to main memory on every access. Cost has also historically been a factor: cache memory (SRAM) is far more expensive per byte than main memory (DRAM), which in turn is more expensive than disk storage, so the hierarchy trades a little fast, costly memory against a lot of slow, cheap memory. Keep in mind that the landscape of computer memory models is always changing, and other methods will likely be used in the future to speed up memory access. The current trend is to move away from mechanical storage devices with moving parts, hence the rise of SSDs.
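A small sketch of the locality effect the answer describes (the function names and matrix size are mine, for illustration): summing a 2D array row by row touches consecutive addresses, so each cache line fetched from RAM is fully used, while hopping down a column jumps far between accesses. In a compiled language like C the gap is dramatic; in Python the interpreter overhead mutes it, but both orders compute the same result.

```python
import time

N = 1000
matrix = [[1] * N for _ in range(N)]  # N x N matrix of ones

def sum_row_major(m):
    # Inner loop walks one row: consecutive accesses land on nearby
    # addresses, so each cache line is reused (good spatial locality).
    total = 0
    for row in m:
        for x in row:
            total += x
    return total

def sum_column_major(m):
    # Inner loop walks one column: every access jumps to a different
    # row, far from the previous address (poor spatial locality).
    total = 0
    for j in range(N):
        for i in range(N):
            total += m[i][j]
    return total

start = time.perf_counter()
a = sum_row_major(matrix)
t_row = time.perf_counter() - start

start = time.perf_counter()
b = sum_column_major(matrix)
t_col = time.perf_counter() - start

assert a == b == N * N  # same answer either way; only the access pattern differs
print(f"row-major:    {t_row:.4f}s")
print(f"column-major: {t_col:.4f}s")
```

Both traversals read the same N*N elements; any timing difference comes purely from how well the access order matches the cache hierarchy.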
Best Answer from StackOverflow
Question Source : http://cs.stackexchange.com/questions/10641