VMware Fusion + Apple Silicon Ubuntu VM: When the Display Won't Show

The VM Is Alive, But the Display Is Dead

While running an Ubuntu VM under VMware Fusion on an Apple Silicon Mac, I ran into a strange situation. When starting the VM, the VMware Fusion window would simply freeze: moving the mouse or pressing keys got no response. But SSH into the machine and the guest OS was running just fine.

In severe cases, the VMX process would crash and kill the VM entirely. The VMware logs were filled with "VUsbVideo: Failed to handle control request" errors.

The environment was:

  • Host: macOS (Apple Silicon / ARM64)
  • VMware Fusion: 25.0.0
  • Guest: Ubuntu 22.04.5 LTS (arm64)
  • Kernel: 5.15.0-170-generic

Root Cause: The efifb vs. vmwgfx Territory War

Checking dmesg made the cause clear.

[    0.018925] pci 0000:00:0f.0: BAR 2: assigned to efifb
[    0.360618] vmwgfx 0000:00:0f.0: BAR 2: can't reserve [mem 0x70000000-0x77ffffff 64bit pref]
[    0.360627] vmwgfx: probe of 0000:00:0f.0 failed with error -16

error -16 is EBUSY ("device or resource busy") — meaning the memory region has already been claimed by another driver.

VMware Fusion's virtual display operates through the SVGA virtual GPU device (the vmwgfx driver). This device uses the PCI BAR 2 region (0x70000000-0x77ffffff) as VRAM. The problem is that on ARM64 EFI boot, this region gets claimed first.

Following the boot sequence:

  1. EFI firmware boots and uses a framebuffer memory region for display output
  2. Linux kernel loads and starts PCI device enumeration
  3. The PCI subsystem assigns BAR 2 of the SVGA device to efifb
  4. vmwgfx driver tries to load → BAR 2 is already occupied by efifb → load fails
  5. No display output → frozen screen
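To confirm which driver owns the region on your own VM, the same diagnosis can be scripted. This is a sketch: on the guest you would run dmesg and inspect /proc/iomem directly; here the filter is demonstrated against the log lines quoted above, and the PCI address 0000:00:0f.0 is taken from those logs and may differ on your setup.

```shell
# On the guest, the live checks would be:
#   dmesg | grep -Ei 'efifb|vmwgfx'        # who claimed BAR 2?
#   sudo grep -i '70000000' /proc/iomem    # current owner of the VRAM region
# Demonstrated here against the dmesg lines quoted above:
DMESG_SAMPLE="[    0.018925] pci 0000:00:0f.0: BAR 2: assigned to efifb
[    0.360618] vmwgfx 0000:00:0f.0: BAR 2: can't reserve [mem 0x70000000-0x77ffffff 64bit pref]
[    0.360627] vmwgfx: probe of 0000:00:0f.0 failed with error -16"
echo "$DMESG_SAMPLE" | grep -Ei 'efifb|vmwgfx'
```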

This doesn't happen in x86_64 VMs. On x86_64 there's a VGA compatibility mode where the legacy framebuffer and SVGA driver use different memory regions. ARM64 has no VGA compatibility mode — the EFI firmware directly uses the SVGA device's VRAM region as a framebuffer. That's where the conflict comes from.

Fix: Block efifb Initialization

The key is to prevent efifb from claiming BAR 2. Block efifb initialization via a kernel boot parameter.

Edit the GRUB Configuration

SSH in and run the following:

sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT=.*/GRUB_CMDLINE_LINUX_DEFAULT="initcall_blacklist=efifb_init,sysfb_init"/' /etc/default/grub
sudo update-grub
sudo reboot

| Blacklisted initcall | Purpose |
|----------------------|---------|
| efifb_init | Blocks efifb driver initialization, preventing it from claiming BAR 2 |
| sysfb_init | Also blocks system framebuffer initialization (prevents double-claiming) |
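Note that the one-line sed above replaces the entire GRUB_CMDLINE_LINUX_DEFAULT value, discarding any options that were already there (such as quiet splash). A gentler variant — a sketch, demonstrated here on a temporary copy — appends the blacklist only if it is not already present; on the real system you would point GRUB_FILE at /etc/default/grub and still run update-grub afterwards:

```shell
# Demo on a temporary copy; set GRUB_FILE=/etc/default/grub on the real VM.
GRUB_FILE=$(mktemp)
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"\n' > "$GRUB_FILE"

PARAMS="initcall_blacklist=efifb_init,sysfb_init"
# Append the parameters inside the existing quotes, only if missing (idempotent).
if ! grep -q "$PARAMS" "$GRUB_FILE"; then
  sed -i "s/^\(GRUB_CMDLINE_LINUX_DEFAULT=\"[^\"]*\)\"/\1 $PARAMS\"/" "$GRUB_FILE"
fi
cat "$GRUB_FILE"
```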

VMX File Settings (Recommended)

For greater stability, add the following settings to the .vmx file while the VM is shut down.

svga.vramSize = "134217728"
svga.autodetect = "TRUE"
mks.enable3d = "FALSE"

| Setting | Change | Reason |
|---------|--------|--------|
| svga.vramSize | 256 MB → 128 MB | Smaller VRAM region reduces the risk of memory-mapping conflicts |
| svga.autodetect | Enabled | Allow SVGA settings to be auto-detected |
| mks.enable3d | Disabled | Disable unstable 3D acceleration on ARM64 |
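With the VM powered off, these settings can be applied idempotently from the host shell. A sketch, demonstrated on a stand-in file: the real .vmx lives inside the VM bundle on the host, and the path here is a placeholder you would substitute yourself.

```shell
# Demo on a stand-in file; set VMX_FILE to your real .vmx path on the host.
VMX_FILE=$(mktemp)
printf 'svga.vramSize = "268435456"\nmks.enable3d = "TRUE"\n' > "$VMX_FILE"

for kv in 'svga.vramSize = "134217728"' \
          'svga.autodetect = "TRUE"' \
          'mks.enable3d = "FALSE"'; do
  key=${kv%% =*}
  # Drop any existing line for this key, then append the new value.
  grep -v "^${key} " "$VMX_FILE" > "$VMX_FILE.tmp" && mv "$VMX_FILE.tmp" "$VMX_FILE"
  echo "$kv" >> "$VMX_FILE"
done
cat "$VMX_FILE"
```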

Results

Before the fix:

vmwgfx 0000:00:0f.0: BAR 2: can't reserve [mem 0x70000000-0x77ffffff 64bit pref]
vmwgfx: probe of 0000:00:0f.0 failed with error -16

After the fix:

vmwgfx 0000:00:0f.0: [drm] VRAM at 0x0000000070000000 size is 131072 kiB
vmwgfx 0000:00:0f.0: [drm] Running on SVGA version 3.
[drm] Initialized vmwgfx 2.19.0 20210722 for 0000:00:0f.0 on minor 0

vmwgfx successfully acquires its VRAM allocation and display output is restored.

Side Note: Stale Lock Files

If the VMX process crashes, it may leave behind a lock file that prevents the VM from restarting. Delete it manually:

rm -rf "<VM path>/<VM name>.vmx.lck"
rm -rf "<VM path>/Virtual Disk.vmdk.lck"

When a stale lock file is left behind, the cleanShutdown value in the .vmx file reads FALSE, and VMware Fusion shows the VM as unresponsive until the locks are removed.
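The manual rm works, but a small guard helps avoid deleting locks out from under a live VM. A sketch, demonstrated on a stand-in directory: vmware-vmx is the Fusion VM process name, and VM_DIR is a placeholder for your VM bundle path.

```shell
# Demo on a stand-in directory; set VM_DIR to your VM bundle path.
VM_DIR=$(mktemp -d)
mkdir "$VM_DIR/Ubuntu.vmx.lck" "$VM_DIR/Virtual Disk.vmdk.lck"

# Only clear lock directories when no VM process is running.
if ! pgrep -x vmware-vmx > /dev/null; then
  find "$VM_DIR" -maxdepth 1 -name '*.lck' -exec rm -rf {} +
fi
ls "$VM_DIR"
```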

The Easier Way I Found Later: HWE Kernel

I only discovered this well after solving the problem the hard way — but if you select the HWE (Hardware Enablement) option during Ubuntu installation, you can avoid this issue entirely from the start.

HWE (Hardware Enablement) is Ubuntu's program for backporting newer kernels to LTS releases. For example, Ubuntu 22.04's default kernel is 5.15, but choosing HWE installs a 6.x kernel. The newer kernel includes fixes for the efifb/vmwgfx conflict on ARM64, so the display works correctly without needing to touch the GRUB settings manually.

During Ubuntu Server installation, the installer offers the HWE kernel as an option. On an already-installed system, you can switch with:

sudo apt install linux-generic-hwe-22.04
sudo reboot
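After the reboot, it's worth confirming the newer kernel is actually running. A quick sketch (the 6.x expectation applies to the 22.04 HWE kernel at the time of writing):

```shell
# The default 22.04 kernel reports 5.15.x; the HWE kernel reports 6.x.
KERNEL=$(uname -r)
MAJOR=${KERNEL%%.*}
echo "Running kernel: $KERNEL (major version $MAJOR)"
```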

If you're planning to create a new VM, selecting HWE at install time is the simplest solution. The GRUB parameter method on existing VMs also works, but fixing the root cause at the kernel level is cleaner.

Checklist for New VM Installation

  1. When installing Ubuntu ARM64, select the HWE kernel option (simplest)
  2. If HWE wasn't selected, SSH in and add initcall_blacklist=efifb_init,sysfb_init to GRUB
  3. Add svga.vramSize, svga.autodetect, and mks.enable3d settings to the VMX file
  4. Verify the driver loaded correctly with dmesg | grep vmwgfx