For months, the neurosurgeons had planned this operation using a new generation of virtual reality anatomical models. Rather than depending solely on transparencies, they fed CT scans, MRIs, and angiograms into a software package released by Stanford's Image Guidance Laboratories in 2002. The software synthesizes hundreds of 2-D "slices" and renders them into a 3-D model that can be viewed on a PC screen. It's essentially a graphical user interface for the body: intuitive and easy to manipulate. The system lets doctors plan and practice complex surgeries. In the OR, they can align the models exactly with the patient, allowing them to "see" beneath the surface. It's like having X-ray vision.
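The core idea behind such systems, stacking a series of 2-D scan slices into a single 3-D volume that can then be resliced along any plane, can be sketched in a few lines. This is only an illustrative example with fabricated data, not the Stanford software's actual pipeline; real systems would read DICOM files from the scanner rather than generating synthetic slices as done here.

```python
import numpy as np

def build_volume(slices):
    """Stack same-shaped 2-D slices into a 3-D array (z, y, x)."""
    return np.stack(slices, axis=0)

# Fabricate 200 synthetic 256x256 "slices" containing an
# ellipsoidal structure, standing in for CT or MRI output.
z, y, x = np.mgrid[0:200, 0:256, 0:256]
phantom = ((z - 100) ** 2 * 1.6 + (y - 128) ** 2 + (x - 128) ** 2
           < 80 ** 2).astype(np.float32)
slices = [phantom[i] for i in range(200)]

volume = build_volume(slices)
print(volume.shape)  # (200, 256, 256)

# Once stacked, the volume can be resliced along any plane,
# e.g. a sagittal cut through the middle of the head:
sagittal = volume[:, :, 128]
print(sagittal.shape)  # (200, 256)
```

Rendering engines then run surface extraction or volume ray casting over arrays like this to produce the rotatable 3-D model the surgeons manipulate on screen.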