i’ve been using a pair of (relatively) cheap ar glasses as my primary display for a few weeks. overall i like them a lot, though i still need to get lens inserts to correct my astigmatism (and a small interpupillary distance mismatch). but they’re only 1080p per eye, so of course i’m looking at how to build something better. which ended up with me reading datasheets for ti’s mems display chips…

the first thought i have is obviously that i’d want to replace the pair of processing chips i’d need for two eyes with a single fpga. they’re $80 apiece, and i’d need a decent fpga anyway, so that frees up plenty of budget.

the second thought is… and here i’d thought there’d already be something cleverer than pulse width modulation for brightness control. but no, their “landed time ratio” guidelines show that for grays, landed time is just linear in brightness. and yet, the swing time for each mirror is 6μs. so if you can flash your light source at controlled brightnesses, full, half, quarter, etc., one 6μs landed window per bit (call it 18μs per slot with the swing in and out), you land each mirror “on” only for the slots whose bits are set and build up the total exposure. that’s a frame every 18μs * 3 (for primaries) * 4 (for HD to 4k) * 10 (for bits) = 2160μs, or ~463 Hz.
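the binary-weighted exposure idea above, as a quick sketch (the slot timing and all the names here are my own assumptions, not anything from a ti datasheet): the light source flashes at binary-weighted intensity during each bit-plane’s slot, the mirror lands “on” only for slots whose bit is set in the pixel’s gray value, and the integrated exposure comes out proportional to that gray value.

```python
# sketch: binary-weighted light flashes instead of time-weighted bit-planes.
# slot length and names are assumptions for illustration only.

SLOT_US = 18  # assumed per-bit slot: 6us swing in + 6us landed + 6us swing out

def exposure(gray: int, bits: int = 10) -> float:
    """relative exposure for one pixel.

    during bit-plane b's slot the light flashes at intensity 2**b (in units
    of the lsb slot's intensity), and the mirror is landed "on" only when
    bit b of `gray` is set, so the integrated exposure sums to
    gray / (2**bits - 1) of full white.
    """
    total = sum(1 << b for b in range(bits) if (gray >> b) & 1)
    return total / ((1 << bits) - 1)
```

so a 10-bit gray of 1023 integrates to full exposure, 0 to none, and everything in between is linear, exactly like pwm, but each mirror only has to land once per set bit instead of being time-sliced.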

i could have my six primaries (to reach out into the corners of human color vision that normal displays don’t cover), 16 bits per primary, and still get 4k at… 144 Hz.
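the frame-rate arithmetic for both configurations, under the same assumed 18μs-per-bit-slot model (again, my own back-of-envelope, not a datasheet figure):

```python
SLOT_US = 18  # assumed per-bit slot (6us swing in + 6us landed + 6us swing out)

def frame_rate_hz(primaries: int, bits: int, subframes: int,
                  slot_us: float = SLOT_US) -> float:
    """frames per second: one slot per bit, per primary, per pixel-shift
    subframe (4 subframes to go from an HD mirror array to 4k)."""
    frame_us = slot_us * primaries * bits * subframes
    return 1e6 / frame_us

# 3 primaries, 10 bits, 4 subframes -> 2160us/frame, ~463 Hz
# 6 primaries, 16 bits, 4 subframes -> 6912us/frame, ~144.7 Hz
```

which is where the 144 Hz for the six-primary, 16-bit case comes from.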

then i look at the board layout requirements and run away screaming.