be6a9ad OvmfPkg: PlatformPei: determine the 64-bit PCI host aperture for X64 DXE

Authored and Committed by lersek 8 years ago
    OvmfPkg: PlatformPei: determine the 64-bit PCI host aperture for X64 DXE
    
    The main observation about the 64-bit PCI host aperture is that it is the
    highest part of the useful address space. It impacts the top of the GCD
    memory space map, and, consequently, our maximum address width calculation
    for the CPU HOB too.
    
    Thus, modify the GetFirstNonAddress() function to consider the following
    areas above the high RAM, while calculating the first non-address (i.e.,
    the highest inclusive address, plus one):
    
    - the memory hotplug area (optional, the size comes from QEMU),
    
    - the 64-bit PCI host aperture (we set a default size).
    
    While computing the first non-address, capture the base and the size of
    the 64-bit PCI host aperture at once in PCDs, since they are natural parts
    of the calculation.
    
    (Similarly to how PcdPciMmio32* are not rewritten on the S3 resume path
    (see the InitializePlatform() -> MemMapInitialization() condition), nor
    are PcdPciMmio64*. Only the core PciHostBridgeDxe driver consumes them,
    through our PciHostBridgeLib instance.)
    
    Set 32GB as the default size for the aperture. Issue #59 mentions the
    NVIDIA Tesla K80 as an assignable device. According to nvidia.com, these
    cards may have 24GB of memory (probably 16GB + 8GB BARs).
    
    As a strictly experimental feature, the user can specify the size of the
    aperture (in MB) as well, with the QEMU option
    
      -fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=65536
    
    The "X-" prefix follows the QEMU tradition (spelled "x-" there), meaning
    that the property is experimental, unstable, and might go away any time.
    Gerd has proposed heuristics for sizing the aperture automatically
    (based on 1GB page support and PCPU address width), but such heuristics
    should be deferred to a later patch (which may very well back out
    "X-PciMmio64Mb" then).
    
    For "everyday" guests, the 32GB default for the aperture size shouldn't
    impact the PEI memory demand (the size of the page tables that the DXE IPL
    PEIM builds). Namely, we've never reported narrower than 36-bit addresses;
    the DXE IPL PEIM has always built page tables for 64GB at least.
    
    For the aperture to bump the address width above 36 bits, either the guest
    must have quite a bit of memory itself (in which case the additional PEI
    memory demand shouldn't matter), or the user must specify a large aperture
    manually with "X-PciMmio64Mb" (and then he or she is also responsible for
    giving enough RAM to the VM, to satisfy the PEI memory demand).
    
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Jordan Justen <jordan.l.justen@intel.com>
    Cc: Marcel Apfelbaum <marcel@redhat.com>
    Cc: Thomas Lamprecht <t.lamprecht@proxmox.com>
    Ref: https://github.com/tianocore/edk2/issues/59
    Ref: http://www.nvidia.com/object/tesla-servers.html
    Contributed-under: TianoCore Contribution Agreement 1.0
    Signed-off-by: Laszlo Ersek <lersek@redhat.com>
    
        
4 files modified, +119 -0