Since the early 2000s, most consumer hard drive capacities have been grouped into size classes measured in gigabytes, with the exact capacity of a given drive usually some amount above or below the class designation. Although most manufacturers of hard disk drives and flash-memory devices define 1 gigabyte as 1,000,000,000 bytes, the operating systems used by most users typically calculate size in gigabytes by dividing the total capacity in bytes (whether disk capacity, file size, or system RAM) by 1,073,741,824. This distinction can cause confusion, as a hard disk with a manufacturer-rated capacity of 400 gigabytes may be reported by the operating system as only 372 GB, depending on the type of report. The JEDEC memory standards use the IEEE 100 nomenclature, which defines a gigabyte as 1,073,741,824 bytes.[1]
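The discrepancy described above is plain arithmetic; as an illustrative sketch (not part of the article), the 400 GB example can be reproduced directly:

```python
# Illustrative sketch: why a "400 GB" drive shows up as about 372 GB.
DECIMAL_GB = 10**9   # manufacturers: 1 GB = 1,000,000,000 bytes
BINARY_GB = 2**30    # many operating systems: 1 "GB" = 1,073,741,824 bytes

capacity_bytes = 400 * DECIMAL_GB        # a drive sold as "400 GB"
reported_gb = capacity_bytes // BINARY_GB  # what the OS displays (floored)
print(reported_gb)                        # prints 372
```

The same division explains file-size and RAM reports: the byte count is fixed, and only the divisor differs between the two conventions.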
The difference between units based on SI and binary prefixes increases exponentially: the SI kilobyte is nearly 98% of the kibibyte, a megabyte is under 96% of a mebibyte, and a gigabyte is just over 93% of a gibibyte. This means that a hard disk drive advertised as 300 GB holds 279 GiB, and so may appear as 279 GB. As storage sizes increase and larger units are used, this difference becomes even more pronounced. The confusion has prompted legal challenges, such as one against Western Digital.[2][3] The settlement of that case included directions to add a disclaimer that the usable capacity may differ from the advertised capacity.[3]
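The widening gap follows from each prefix step multiplying the SI/binary ratio by a further factor of 1000/1024; a brief sketch (illustrative, not from the article) makes the percentages above concrete:

```python
# Sketch: each prefix step shrinks the SI/binary ratio by a factor of 1000/1024.
for name, power in [("kilobyte/kibibyte", 1),
                    ("megabyte/mebibyte", 2),
                    ("gigabyte/gibibyte", 3)]:
    ratio = 1000**power / 1024**power
    print(f"{name}: {ratio:.2%}")
# kilobyte/kibibyte: 97.66%
# megabyte/mebibyte: 95.37%
# gigabyte/gibibyte: 93.13%
```

By the terabyte/tebibyte step the ratio falls below 91%, which is why larger drives show proportionally larger shortfalls.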
Because of its physical design, computer memory is addressed in multiples of base 2, so memory size can always be factored as a small integer times a power of two (for instance, 384 MiB = 3×2²⁷ bytes). It is thus convenient to use binary units for non-disk memory devices at the hardware level (for example, in DIMM memory modules). Most software applications have no particular need to use or report memory in binary multiples, and operating systems often use varying granularity when allocating it. Other computer measurements, such as storage hardware size, data transfer rates, clock speeds, and operations per second, do not have an inherent base and are usually presented in decimal units.
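The factoring mentioned above can be checked in one line; this small sketch (illustrative only) verifies the 384 MiB example:

```python
# Sketch: binary-addressed memory sizes factor cleanly into powers of two.
MiB = 2**20                       # 1 MiB = 1,048,576 bytes
assert 384 * MiB == 3 * 2**27     # 384 MiB = 3 x 2^27 bytes
print(384 * MiB)                  # prints 402653184
```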