When a VM issues an I/O operation, FVP determines how to serve it. For a read I/O, an early distinction is whether the requested data has been read before and already sits in the FVP cache layer, or whether it has to be fetched from the storage system. In the latter case the data is also placed in the FVP layer, so later reads of the same block can be served from the FVP layer and the VM benefits from this I/O acceleration.
When data blocks are requested for the first time, i.e. are not yet in the FVP layer, these cache misses or first reads are called "false writes". The name describes the status: from the VM's perspective this is a read operation from the persistent storage system, but from the FVP perspective it is a write operation, because the data also has to be written to the FVP cache layer for use in subsequent reads of the same blocks. So if a VM is added to an FVP acceleration policy and an application reads all blocks from start to end, there is no immediate performance benefit. The good news is that most applications do not work this way; they read and write, and then read older data again.
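The read path above can be sketched in a few lines of Python. This is only a toy model of the behaviour, not FVP's actual implementation; the names `FvpCacheSketch` and `backing_store` are illustrative.

```python
class FvpCacheSketch:
    """Toy read-path model: one dict stands in for the local
    acceleration resource, another for the persistent storage system."""

    def __init__(self, backing_store):
        self.backing_store = backing_store   # persistent storage system
        self.cache = {}                      # local acceleration resource
        self.false_writes = 0                # cache misses (first reads)
        self.hits = 0                        # reads served from the cache

    def read(self, block_id):
        if block_id in self.cache:           # cache hit: served locally
            self.hits += 1
            return self.cache[block_id]
        # Cache miss ("false write"): a read from the VM's perspective,
        # but a write from the cache layer's perspective.
        data = self.backing_store[block_id]  # fetch from the storage system
        self.cache[block_id] = data          # write to acceleration media
        self.false_writes += 1
        return data

store = {n: f"data-{n}" for n in range(4)}
cache = FvpCacheSketch(store)
cache.read(0)                                # first read: false write
cache.read(0)                                # second read: cache hit
print(cache.false_writes, cache.hits)        # → 1 1
```

The second `read(0)` never touches the backing store, which is the acceleration benefit the text describes.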
Why is this important to understand for backups, and specifically for certain types of backups? Because a classic client backup reads the disk from start to end, at least the first time. This causes all those blocks to be cached, yet you do not want to fill up your acceleration layer with data that is not accessed again. We call this cache pollution; for more information, please refer to the PernixData FVP Backup Best Practices blog articles.
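Cache pollution is easy to demonstrate with a toy bounded LRU cache: a backup-style full sequential scan evicts the VM's entire hot working set. The `LruCache` helper and the sizes used here are illustrative assumptions, not FVP internals.

```python
from collections import OrderedDict

class LruCache:
    """Minimal LRU cache tracking only block IDs, to show eviction."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def touch(self, block_id):
        if block_id in self.entries:
            self.entries.move_to_end(block_id)    # mark most recently used
        else:
            if len(self.entries) >= self.capacity:
                self.entries.popitem(last=False)  # evict least recently used
            self.entries[block_id] = True

cache = LruCache(capacity=100)

# The VM's hot working set fills the cache and would normally stay hot.
hot_blocks = range(100)
for b in hot_blocks:
    cache.touch(b)

# A backup job then reads the whole disk once, start to end.
for b in range(100, 1100):
    cache.touch(b)

# The one-time backup reads have pushed every hot block out of the cache.
print(sum(b in cache.entries for b in hot_blocks))   # → 0
```

After the scan, the cache holds only the tail of the backup's one-time reads, which is exactly the pollution the text warns about.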
The following figure illustrates a false write operation when FVP is in the I/O path. It is also important to understand that, for reads, there is no difference between the two policies, Write-Through and Write-Back. The figure shows the I/O from the read perspective.
Figure 1: Read I/O - Cache Misses
- Read I/O Request from VM
- Read I/O is fetched from the Storage System if it is neither in the Local Acceleration Resource nor on the previous host; subsequent reads of the same blocks are served from the Local Acceleration Resource
- Data fetch from Storage successful
- Data is written to Local Acceleration Media
- I/O completion to the VM
A cache hit, by contrast, means the read I/O request is served directly by the FVP layer and completed in a very quick and efficient way.