Humaneer.net

Page Cache

Guest · 2007. 9. 19. 21:52

(View original - Wikipedia)

Page Cache
In computing, the page cache, sometimes ambiguously called disk cache, is a transparent cache of disk-backed pages kept in primary storage (RAM) for quicker access. The page cache is typically used in operating system kernels that implement paging memory management, and it is completely transparent to applications. Memory that is not directly allocated to applications is usually used for the page cache. Because hard disk reads are slow and random accesses require expensive disk seeks compared to primary storage, memory upgrades usually yield significant improvements in a computer's speed and responsiveness. This concept should not be confused with the limited cache present in the hard disk hardware itself, which is more accurately called a "disk buffer".

Memory Conservation
Since non-dirty pages in the page cache have identical copies in secondary storage (the hard disk), discarding and re-using them is much quicker than paging out application memory, and is often preferred. Executable binaries, such as applications and libraries, are also typically accessed through the page cache and mapped into individual process address spaces using virtual memory (on Unix-like operating systems this is done through the mmap syscall). This not only means that binary files are shared between separate processes, but also that unused parts of binaries will eventually be pushed out of main memory, conserving memory.
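As a small illustration of the mapping described above, here is a sketch using Python's mmap module, which wraps the mmap syscall. A file-backed shared mapping is served directly from page-cache pages, so the kernel can share the same physical pages between all processes that map the file. The file and its contents are illustrative.

```python
# Sketch: mapping a file into memory via mmap. Reads from the mapping
# are served from page-cache pages rather than a per-process copy.
import mmap
import os
import tempfile

def mmap_read_demo() -> bytes:
    # Create a small temporary file to map (illustrative content).
    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, b"hello page cache")
        # Map the file read-only; the data comes from the page cache.
        with mmap.mmap(fd, 0, access=mmap.ACCESS_READ) as m:
            return m[:]
    finally:
        os.close(fd)
        os.unlink(path)

print(mmap_read_demo())
```

If several processes mapped the same file this way, they would all be backed by the same physical page-cache pages.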

Since cache pages can be easily dropped and re-used, some operating systems, notably Windows NT, even display some memory used for the page cache as "free" memory, while the memory is actually allocated to disk pages. This has led to some confusion about the utilization of page cache in Windows.

Page Cache and Disk Writes
The page cache also aids in writing to disk. Pages that have been modified in memory are marked "dirty" and must be flushed to disk before they can be freed. When a file write occurs, the page backing the particular block is looked up. If it is already in the cache, the write is done to that page in memory. If it is not, and the write falls exactly on page-size boundaries, the page is not even read from disk but is allocated and immediately marked dirty. Otherwise, the page(s) are fetched from disk and the requested modifications are applied.
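The dirty-page write path above can be sketched with a writable shared mapping: storing into the mapping dirties the backing page in memory, and an explicit flush (msync under the hood) forces that dirty page out to disk. File contents are illustrative.

```python
# Sketch: writing through the page cache. The store into the mapping
# dirties the page; flush() forces the dirty page to disk (msync).
import mmap
import os
import tempfile

def dirty_page_demo() -> bytes:
    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, b"AAAA")
        # Writable shared mapping: stores mark the backing page dirty.
        with mmap.mmap(fd, 0, access=mmap.ACCESS_WRITE) as m:
            m[0:4] = b"BBBB"   # page is now dirty in the page cache
            m.flush()          # flush the dirty page to disk
        # Re-reading the file shows the flushed data.
        with open(path, "rb") as f:
            return f.read()
    finally:
        os.close(fd)
        os.unlink(path)

print(dirty_page_demo())
```

Without the explicit flush, the kernel would still write the dirty page back eventually, on its own schedule.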

However, not all cached pages can be written to: program code is often mapped read-only or copy-on-write. In the latter case, modifications to the code are visible only to the process itself and are never written back to disk.
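The copy-on-write case can be sketched with a private mapping (MAP_PRIVATE, exposed in Python as ACCESS_COPY): the process may modify the pages, but the changes are never written back to the file. The file contents are illustrative.

```python
# Sketch: copy-on-write via a private mapping. Writes are allowed but
# affect only a private copy of the page; the file on disk is untouched.
import mmap
import os
import tempfile

def cow_demo():
    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, b"code")
        # ACCESS_COPY = private, copy-on-write mapping.
        with mmap.mmap(fd, 0, access=mmap.ACCESS_COPY) as m:
            m[0:4] = b"hack"      # page is copied on write
            in_process = m[:]     # this process sees the modification
        with open(path, "rb") as f:
            on_disk = f.read()    # the file still holds the original
        return in_process, on_disk
    finally:
        os.close(fd)
        os.unlink(path)

print(cow_demo())
```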

**
This reminds me of implementing a simple version of this in last year's OS class. -_-;
