Thursday, August 18, 2016

PernixData FVP - Write I/O

This blog article describes how write I/O works with PernixData FVP in place. We distinguish between read-only acceleration (Write-Through) and read-and-write acceleration (Write-Back).

Write-Through - Write I/O

In Write-Through mode, first-time write operations are not accelerated by the acceleration media, but all subsequent reads of those writes are served from the FVP acceleration layer. Many applications benefit from this behaviour because they first write data and then read the same data back repeatedly; since all those subsequent reads are served from within the FVP layer, less data has to travel from the persistent, and usually slower, storage to the VM. The following figure explains a write operation while the VM is in Write-Through; a short sketch after the numbered list illustrates the ordering.
Figure 1: Write I/O in Write-Through
  1. Write I/O from the VM.
  2. The write I/O is sent to both the SAN and the local acceleration media in parallel.
  3. The acknowledgement comes from the SAN.
  4. I/O completion is returned to the VM.
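
To make the ordering concrete, here is a minimal Python sketch of the Write-Through path. The san and cache objects and their write methods are hypothetical stand-ins invented for illustration (FVP runs as a kernel module inside ESXi and exposes no such API); the only point is that the VM's completion waits on the SAN acknowledgement, while the acceleration media is populated in parallel so later reads hit FVP.

import concurrent.futures

def write_through(io, san, cache):
    # san and cache are hypothetical objects with a write() method,
    # standing in for the persistent storage and the acceleration media.
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        san_ack = pool.submit(san.write, io)      # step 2: write to the SAN ...
        pool.submit(cache.write, io)              # ... and to the acceleration media in parallel
        san_ack.result()                          # step 3: completion waits on the SAN acknowledgement
    return "completed"                            # step 4: I/O completion returned to the VM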


Write-Back - Write I/O

The next case is how FVP handles an I/O operation when the VM is in Write-Back (WB). Here, it is important to understand that Write-Back does not mean FVP only accelerates writes; it accelerates both reads and writes, but it handles writes differently. When an application issues a write I/O, FVP issues this I/O in parallel to the local acceleration resource, as well as to the network peer(s) if the VM has been configured with WB+1 or WB+2, and then commits the I/O completion back to the VM, which usually results in very low latencies. FVP then opportunistically completes the I/O operation to the storage system. The following figure explains how Write-Back works; again, a short sketch after the numbered list illustrates the ordering.
Figure 2: Write I/O in Write-Back
  1. Write I/O from the VM.
  2. The write I/O is sent to both the local acceleration media and the storage system in parallel (in the case of WB+1/2, the I/O also gets sent to the network peer(s) in parallel).
  3. The acknowledgement comes from the local acceleration media (in the case of WB+1/2, the acknowledgement has to come back from the network peer(s) as well).
  4. I/O completion is returned to the VM.
  5. The data then gets destaged to the storage system. Writes from the destager to the storage system are optimized.
  6. Acknowledgement back to FVP.
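
Here is the corresponding hypothetical Python sketch for Write-Back. Again, the cache, peers and destager objects are invented for illustration and are not FVP's actual interface; what the sketch shows is that the completion returned to the VM waits only on the local acceleration media and any WB+1/2 peers, while destaging to the storage system happens asynchronously afterwards.

import concurrent.futures

def write_back(io, cache, peers, destager):
    # cache, peers and destager are hypothetical objects standing in for
    # the local acceleration media, the WB+1/2 network peers, and the
    # background destager; peers is empty for plain WB (no replication).
    with concurrent.futures.ThreadPoolExecutor() as pool:
        acks = [pool.submit(cache.write, io)]                    # step 2: local acceleration media
        acks += [pool.submit(peer.write, io) for peer in peers]  # WB+1/2: replicate to peer(s)
        for ack in acks:
            ack.result()          # step 3: wait for the local (and peer) acknowledgements only
    destager.enqueue(io)          # step 5: destaged to the storage system later, asynchronously
    return "completed"            # step 4: I/O completion returned to the VM

Because the VM never waits on the storage system in this path, the write latency the application sees is bounded by the acceleration media (and the network round trip to the peers in WB+1/2), which is what produces the very low latencies mentioned above.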
