Is the cache really that important for SSDs?


With the rapid development of solid state drives, a large number of cost-effective SSDs have appeared on the storage market, giving us more choices when shopping for a drive. At this stage, buyers no longer look only at an SSD's interface and capacity; the drive's cache capacity has gradually become a purchasing consideration as well. So what exactly is a drive's cache? What role does it play in a solid state drive? Is it worth treating as a buying criterion? These are common questions among users, and we will answer them one by one.

First of all, we need to clarify: what is a cache?

A cache, literally a "buffer memory", is a staging area for data exchange. Simply put, it exists to balance the speed difference between a fast device and a slow one, and its main job is to narrow the gap between them. Because the cache capacity of any product is limited and no algorithm achieves a 100% hit rate, the slow device will still hold back the fast one to some degree; the cache can only minimize that effect.

In a traditional mechanical hard disk, the cache mainly accelerates reads: if a piece of data has just been read, the data near it is kept in the DRAM cache, so the next read has a chance of hitting the cache (reading from DRAM is much faster than reading from the platters). This is the main function of a mechanical drive's cache, and in theory, the larger the cache, the better the drive's read performance.
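To make the hit-or-miss idea concrete, here is a minimal Python sketch of a read cache. The ReadCache class and its block-level interface are purely illustrative inventions for this article, not how any real drive firmware is written.

```python
from collections import OrderedDict

class ReadCache:
    """Toy LRU read cache: recently read blocks stay in fast memory (DRAM),
    so a repeat read can be served without touching the slow medium."""

    def __init__(self, capacity_blocks, slow_read):
        self.capacity = capacity_blocks
        self.slow_read = slow_read          # function that reads a block from the disk
        self.blocks = OrderedDict()         # block number -> data

    def read(self, block_no):
        if block_no in self.blocks:         # cache hit: served from DRAM
            self.blocks.move_to_end(block_no)
            return self.blocks[block_no]
        data = self.slow_read(block_no)     # cache miss: go to the platters
        self.blocks[block_no] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False) # evict the least recently used block
        return data

# The first read of block 7 is slow; the second is served from the cache.
cache = ReadCache(capacity_blocks=4, slow_read=lambda n: f"data-{n}")
cache.read(7)   # miss -> read from the disk
cache.read(7)   # hit  -> read from DRAM
```

The larger the capacity, the more blocks stay resident and the higher the chance that a read is a hit, which is why bigger caches tend to help read performance.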

The cache in an SSD works a bit differently. Under normal circumstances an SSD with a cache will be faster than one without, but for an SSD this improvement has a clear limit. In terms of response time, an SSD generally responds within 0.2 milliseconds, which is already close to the cache itself, so the read-speed gain from a cache is not particularly large.

But an SSD's real need for DRAM is not to cache user data; it is to hold the FTL (flash translation layer) mapping table that every SSD depends on, which manages the mapping between logical addresses and physical flash addresses.

Although a solid state drive reads and writes much faster than a mechanical drive, flash memory cannot be overwritten in place the way a disk platter can. This means an SSD must maintain a translation table between logical addresses and actual physical addresses: when the host overwrites logical position C, the SSD may actually write the new data to physical position E and then mark the original position C as invalid.

This FTL mapping table needs memory to store it and keep it updated in real time. Different solid state drives use different algorithms to manage the table, and the memory they require varies greatly; a space-optimized table can even fit entirely into the small amount of memory integrated in the controller, which is the precondition that made cache-less solid state drives possible.
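As a rough illustration of the idea, here is a minimal Python sketch of such a mapping table. The ToyFTL class is a simplification invented for this article; real controllers also handle garbage collection, wear leveling and power-loss protection.

```python
class ToyFTL:
    """Toy flash translation layer: logical addresses map to physical pages.
    Flash pages cannot be overwritten in place, so an update goes to a new
    page and the old page is only marked invalid."""

    def __init__(self, num_pages):
        self.mapping = {}                     # logical address -> physical page (the FTL table)
        self.free_pages = list(range(num_pages))
        self.invalid_pages = set()            # stale pages, reclaimed later by garbage collection

    def write(self, logical_addr, data):
        new_page = self.free_pages.pop(0)     # always write to a fresh page
        old_page = self.mapping.get(logical_addr)
        if old_page is not None:
            self.invalid_pages.add(old_page)  # old copy becomes stale, not erased yet
        self.mapping[logical_addr] = new_page # update the mapping table
        return new_page

ftl = ToyFTL(num_pages=8)
ftl.write(3, "v1")   # logical 3 -> physical page 0
ftl.write(3, "v2")   # overwrite: data goes to page 1, page 0 is marked invalid
```

The mapping dictionary here is the part that has to live in fast memory and be updated on every write, which is exactly what the DRAM (or the controller's built-in memory) is for.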

Since the cache is so important to a solid state drive, why are there still SSDs without one?

In fact, to store the FTL mapping table, SSDs come in two designs: DRAM (with DRAM) and DRAM-less (without DRAM). A DRAM design generally keeps both the buffered data and the mapping table in DRAM; the advantage is that looking up and updating the mapping table is fast, so performance is better, while the disadvantage is the extra DRAM chip, which raises cost and power consumption. The current mainstream SSD uses this solution.

A DRAM-less design puts a small part of the mapping table in the controller's on-chip SRAM and leaves the rest in the flash itself. The advantage is saving the cost and power consumption of DRAM, but reading and writing flash is much slower than reading and writing DRAM, so the speed falls short of the DRAM solution and performance is relatively lower. Current entry-level SSDs typically use this solution.
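The following Python sketch illustrates that trade-off under the same simplified assumptions as above. The DramLessLookup class is invented for illustration, with the "slow path" standing in for fetching a piece of the mapping table from flash.

```python
from collections import OrderedDict

class DramLessLookup:
    """Toy DRAM-less lookup: only a small window of the mapping table fits in
    the controller's on-chip SRAM; entries that miss must first be fetched
    from flash, which is much slower than a DRAM lookup would be."""

    def __init__(self, sram_entries, full_table_in_flash):
        self.sram = OrderedDict()                 # small cache of mapping-table entries
        self.sram_entries = sram_entries
        self.flash_table = full_table_in_flash    # the complete table stored in NAND
        self.flash_fetches = 0                    # count of slow round-trips to flash

    def physical_page(self, logical_addr):
        if logical_addr in self.sram:             # fast path: entry already in SRAM
            self.sram.move_to_end(logical_addr)
            return self.sram[logical_addr]
        self.flash_fetches += 1                   # slow path: load the entry from flash
        page = self.flash_table[logical_addr]
        self.sram[logical_addr] = page
        if len(self.sram) > self.sram_entries:
            self.sram.popitem(last=False)         # evict the oldest cached entry
        return page

lookup = DramLessLookup(sram_entries=2, full_table_in_flash={0: 10, 1: 11, 2: 12})
lookup.physical_page(0)   # miss: fetched from flash
lookup.physical_page(0)   # hit: served from SRAM
```

Every extra flash fetch is latency a DRAM design would not pay, which is why DRAM-less drives tend to fall behind under scattered, random workloads.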

In addition, some solid state drives "have a cache" in another way: HMB (Host Memory Buffer) technology, which Phison has promoted in its controllers. Simply put, it removes the cache chip that would otherwise sit on the drive and instead borrows a region of the host's system memory for data buffering. This saves one cache chip on the SSD, reducing cost and letting the drive reach consumers at a more attractive price.

Whether an SSD product includes a cache is usually decided by the manufacturer based on product positioning and intended use. Entry-level or lower-speed products are generally designed without a cache, while higher-speed products, which handle a larger volume of data exchange, include a cache to improve read and write efficiency.

The above is a detailed look at the role of the cache in solid state drives. In general, whether to buy an SSD with a cache depends on your own needs; if your budget allows and you want a better experience, an SSD with a cache should not let you down.