Using flash memory to run a desktop operating system like Windows was advised against for quite some time. So what made it a desirable and viable option for mobile devices? Today’s SuperUser Q&A post has the answer to a curious reader’s question.
Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.
The Question
SuperUser reader RockPaperLizard wants to know what makes eMMC flash memory viable in mobile devices, but not PCs:
What makes eMMC flash memory viable in mobile devices, but not in PCs?
As SSDs have become more popular, wear-leveling technology has improved in order to allow operating systems to run on them. Various tablets, netbooks, and other slim computers use flash memory instead of a hard drive or SSD, and the operating system is stored on it.
How did this suddenly become practical? Do they typically implement wear-leveling technologies, for example?
The Answer
SuperUser contributors Speeddymon and Journeyman Geek have the answer for us. First up, Speeddymon:
For Android tablets and mobile phones, the NVRAM technology is eMMC based. The data I can find on this technology suggests between 3k and 10k write cycles. Unfortunately, none of what I have found so far is definitive, as Wikipedia is blank on this technology’s write cycles. All of the other places I have looked were various forums, hardly what I would call reliable sources.
For comparison’s sake, the write cycles on other NVRAM technology such as SSDs, which use NAND or NOR technology, are between 10k and 30k.
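To put those figures in perspective, here is a rough back-of-the-envelope sketch (the chip size and daily write volume are illustrative assumptions, not numbers from the answer): under ideal wear leveling, total endurance works out to roughly the chip’s capacity multiplied by its rated write cycles.

```python
# Rough endurance estimate under ideal wear leveling (illustrative numbers only).
capacity_gb = 16          # assumed eMMC chip size
write_cycles = 3_000      # pessimistic end of the 3k-10k range quoted above
daily_writes_gb = 5       # assumed daily write volume for a phone or tablet

total_writes_tb = capacity_gb * write_cycles / 1_000
lifetime_years = (total_writes_tb * 1_000) / daily_writes_gb / 365

print(f"~{total_writes_tb:.0f} TB of total writes, ~{lifetime_years:.0f} years at {daily_writes_gb} GB/day")
# -> ~48 TB of total writes, ~26 years at 5 GB/day
```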
Now, regarding how the operating system chooses to mount the file system: I cannot speak to how Apple does it, but for Android, the chip is partitioned out like a hard drive would be. You have an operating system partition, a data partition, and several other proprietary partitions depending on the device manufacturer.
The real root partition lives inside the bootloader, which is bundled as a compressed file (jffs2, cramfs, etc.) together with the kernel, so that when the device’s stage 1 boot is complete (usually the manufacturer’s logo screen), the kernel boots and the root partition is mounted as a RAM disk at the same time.
As the operating system boots up, it mounts the primary partition’s file system (/system, which is jffs2 on devices before Android 4.0, ext2/3/4 on devices since Android 4.0, and xfs on the latest devices) as read-only so that no data can be written to it. This can, of course, be worked around by so-called “rooting” of your device, which gives you access as a super user and allows you to remount the partition as read/write. Your “user” data is written to a different partition on the chip (/data, which follows the same convention as above based on the Android version).
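As a minimal sketch of what that read-only mount looks like from the device’s side, the following assumes a Linux-based device with a Python interpreter available (stock Android does not ship one, so treat it purely as an illustration). It parses /proc/mounts, where each line lists a device, mount point, file system type, and mount options.

```python
# Illustrative only: report how a mount point (e.g. /system) is mounted by
# parsing /proc/mounts, whose lines read: device mountpoint fstype options ...
def mount_info(mountpoint="/system"):
    with open("/proc/mounts") as mounts:
        for line in mounts:
            _device, mnt, fstype, options, *_ = line.split()
            if mnt == mountpoint:
                return fstype, options.split(",")
    return None

info = mount_info("/system")
if info:
    fstype, options = info
    state = "read-only" if "ro" in options else "read/write"
    print(f"/system is {fstype}, mounted {state}")
```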
With more and more mobile phones ditching SD card slots, you might think that you will hit the write cycle cap sooner because all of your data is now being saved to eMMC storage instead of an SD card. Fortunately, most file systems detect a failed write to a given area of storage. If a write fails, then the data is silently saved to a new area of storage and the bad area (known as a bad block) is cordoned off by the file system driver so that data is no longer written there in the future. If a read fails, then the data is marked as corrupt and either the user is told to run a file system check (or check disk), or the device automatically checks the file system during the next boot.
As a matter of fact, Google has a patent for automatically detecting and handling bad blocks: Managing bad blocks in flash memory for electronic data flash card
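To make the bad-block idea described above concrete, here is a toy sketch (not how any real file system driver or the patented method is implemented): a failed write is silently redirected to a spare block, and the failing block is retired so it is never used again.

```python
# Toy model of bad-block handling: a failed write is redirected to a spare
# block and the bad block is cordoned off so nothing is written there again.
class ToyBlockDevice:
    def __init__(self, num_blocks, failing_blocks=()):
        self.blocks = [None] * num_blocks
        self.failing = set(failing_blocks)      # blocks that fail on write

    def raw_write(self, block, data):
        if block in self.failing:
            raise IOError(f"write to block {block} failed")
        self.blocks[block] = data

class ToyFileSystem:
    def __init__(self, device):
        self.device = device
        self.bad_blocks = set()
        self.remap = {}                          # logical block -> spare block

    def write(self, block, data):
        target = self.remap.get(block, block)
        try:
            self.device.raw_write(target, data)
        except IOError:
            self.bad_blocks.add(target)          # retire the bad block
            spare = self._find_spare()
            self.remap[block] = spare            # silently relocate the data
            self.device.raw_write(spare, data)

    def _find_spare(self):
        for i in reversed(range(len(self.device.blocks))):
            if i not in self.bad_blocks and self.device.blocks[i] is None:
                return i
        raise RuntimeError("no spare blocks left")

device = ToyBlockDevice(num_blocks=8, failing_blocks={2})
fs = ToyFileSystem(device)
fs.write(2, b"user data")                        # fails on block 2, lands on block 7
print(fs.bad_blocks, fs.remap)                   # -> {2} {2: 7}
```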
To get more to the point, your question of how this suddenly became practical is not the right question to ask. It was never impractical in the first place. Installing an operating system (Windows, presumably) on an SSD was strongly advised against because of the number of writes it makes to the disk.
For example, the registry receives literally hundreds of reads and writes per second, which can be seen with the Microsoft Sysinternals Regmon tool.
Installing Windows on first-generation SSDs was advised against because, with the lack of wear leveling, the data written to the registry every second (likely) eventually caught up with early adopters and resulted in unbootable systems due to registry corruption.
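A deliberately naive model shows why that mattered; the write rate, cycle count, and block count below are assumptions chosen only to illustrate the scaling, not measured values.

```python
# Naive wear model: contrast hammering one block versus spreading writes out.
writes_per_second = 200        # "hundreds of reads and writes per second"
block_endurance = 10_000       # rough write-cycle budget for one block
blocks_on_drive = 1_000_000    # blocks a controller could rotate writes across

# Without wear leveling, every registry write hits the same physical block:
print(f"~{block_endurance / writes_per_second:.0f} seconds to exhaust that block")

# With ideal wear leveling, the same workload is spread over every block,
# so the drive lasts roughly blocks_on_drive times longer:
print(f"~{blocks_on_drive:,}x longer with the writes rotated across the drive")
```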
With tablets, mobile phones, and pretty much any other embedded device, there is no registry (Windows Embedded devices being exceptions, of course) and thus, there is no worry of data constantly being written to the same parts of the flash medium.
For Windows Embedded devices, such as many of the kiosks found in public places (like Walmart, Kroger, etc.) where you may see a random BSOD from time to time, there is not a whole lot of configuration that can be done since they are pre-designed with configurations that are intended to never change. In most cases, the only time changes take place is before the chip is written. Anything that needs to be saved, such as your payment to the grocery store, is sent over the network to the store’s databases on a server.
Followed by the answer from Journeyman Geek:
They finally became cost-effective for mainstream use. The idea that “wear” was the only concern is a bit of an assumption. There have been systems running off of solid state memory for a considerable period of time. Many folks who built car-puters booted off of CF cards (which were electrically compatible with PATA and trivial to install compared to PATA hard drives), and industrial computers have had small, rugged, flash-based storage.
That said, there were not many options for the average person. You could buy a pricy CF card and an adapter for a laptop, or find a tiny, very pricy industrial disk-on-module unit for a desktop. They were not very large compared to contemporary hard drives (modern IDE DOMs top out at 8GB or 16GB, I think). I am pretty sure you could have gotten solid state system drives set up way before standard SSDs became common.
There have not really been any universal/magical improvements in wear leveling as far as I know. There have been incremental improvements as we have moved away from pricy SLC to MLC, TLC, and even QLC, along with smaller process sizes (all of which lower cost, with some higher risk of wearing out). Flash has gotten a lot cheaper.
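The bit-per-cell progression mentioned above is easy to quantify: each step stores more bits in the same physical cell, which is where much of the cost reduction comes from, while endurance generally drops as more bits share a cell.

```python
# Bits stored per cell for each NAND flavor mentioned above; density (and,
# roughly, cost per gigabyte) scales up with this number, while endurance
# generally falls as more charge levels are crammed into one cell.
bits_per_cell = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}
for name, bits in bits_per_cell.items():
    print(f"{name}: {bits} bit(s) per cell -> {bits}x the data of SLC in the same silicon")
```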
There were also a few alternatives that did not have wear issues: for example, running the entire system off a ROM (which is arguably solid state storage) or off battery-backed RAM, which many early SSDs and portable devices like the Palm Pilot used. None of these are common today. Hard drives rocked compared to, say, battery-backed RAM (too expensive), early solid state devices (somewhat pricy), or peasants with flags (never caught on due to terrible data density). Even modern flash memory is a descendant of fast-erasing EEPROMs, and EEPROMs have been used in electronic devices for storage of things like firmware for ages.
Hard drives were simply at a nice intersection of high volume (which is important), low cost, and sufficient storage.
The reason you find eMMCs in modern, low-end computers is that the components are relatively cheap, large enough (for desktop operating systems) at that cost, and share commonality with mobile phone components, so they are produced in bulk with a standard interface. They also give great storage density for their volume. Considering many of these machines have a paltry 32GB or 64GB drive, on par with hard drives from the better part of a decade ago, they are a sensible option in this role.
We are finally reaching the point where you can store a reasonable amount of data affordably and at reasonable speeds on eMMCs and flash, which is why people go for them.
Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
Image Credit: Martin Voltri (Flickr)