Visible Human, Construct Thyself:
The Digital Anatomist Dynamic Scene Generator
James F. Brinkley, Evan M. Albright, Sara Kim, Jose L.V. Mejino, Linda G. Shapiro,
Cornelius Rosse
Structural Informatics Group, Department of Biological Structure,
University of Washington, Seattle, WA
brinkley@u.washington.edu
Availability of the Visible Human dataset has led to many interesting applications and research projects in imaging and graphics, as evidenced by the Visible Human web site and by papers from the previous conference. However, the project has not yet achieved the long-term goal stated in the Visible Human Fact Sheet, to "…transparently link the print library of functional-physiological knowledge with the image library of structural-anatomical knowledge into one unified resource of health information." We believe that the critical missing pieces necessary to achieve this goal are 1) a comprehensive symbolic knowledge base of anatomical terms and relationships that gives meaning to the images, 2) a fully segmented dataset that is widely available and that associates each voxel or extracted structure with a name from the knowledge base, and 3) methods for combining these resources in intelligent ways.

In a companion paper, we propose that the Digital Anatomist Foundational Model (FM) has the potential to become the required symbolic knowledge base [Rosse2000]. We are also aware of many efforts to segment the VH data, although none of these efforts has yet resulted in widely available segmented data.

In this paper we assume that the required resources are or will become available. Instead we concentrate on the third critical piece, namely, methods for combining the resources in intelligent ways. We describe the Digital Anatomist Dynamic Scene Generator, a Web-based program that uses the FM to intelligently combine individual 3-D mesh "primitives", representing parts of organs, into 3-D anatomical scenes. The scenes are rendered on a fast server, and the rendered images are then sent to a web browser where the user can change the scene or navigate through it.

The Dynamic Scene Generator. The scene generator is composed of several modules from our Digital Anatomist Information System (AIS), a distributed, network-based system for anatomy information [Brinkley1999].

Figure 1. The Digital Anatomist Dynamic Scene Generator modules: a web browser communicates through CGI scripts with the FM Server, Graphics Server, and Data Server, which draw on the FM, the correspondences, and the mesh primitives.

The building blocks of a scene are shown in the bottom row of Figure 1, and consist of terms and relationships from the FM, 3-D mesh primitives, and a list of correspondences between the primitives and names in the FM. These resources are made available by means of the FM Server and the Data Server (which for now is simulated by a function that accesses a flat file).

Scenes are generated and rendered by the graphics server, currently running on an Intel Quad Processor. The graphics server accepts commands from perl CGI scripts that implement three different user interfaces: an authoring interface for creating new scenes, a scene manager for saving and retrieving scenes, and a scene explorer for end users. These interfaces generate web forms which capture user commands to add structures to a scene, to save a scene, to rotate a scene, etc. These commands are in turn passed to the graphics server, which performs the action, renders the scene, and returns an image snapshot to the web browser, along with forms allowing further interaction. Screenshots of the three interfaces are shown below.
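The building blocks just described can be made concrete with a small sketch: a part-of hierarchy standing in for the FM, and a correspondence table linking FM terms to mesh primitives. All names and entries below are hypothetical illustrations, not the actual FM Server or Data Server contents or interfaces:

```python
# Toy part-of hierarchy standing in for FM relationships
# (entries are made up for illustration).
FM_PARTS = {
    "Descending thoracic aorta": ["Posterior intercostal artery",
                                  "Bronchial artery"],
    "Posterior intercostal artery": [],
    "Bronchial artery": [],
}

# Toy correspondence table: FM term -> 3-D mesh primitive
# (file names are hypothetical).
CORRESPONDENCES = {
    "Descending thoracic aorta": "aorta_desc.mesh",
    "Posterior intercostal artery": "intercostal.mesh",
    "Bronchial artery": "bronchial.mesh",
}

def subtree(term):
    """All terms in the part-of subtree rooted at term, term first."""
    result = [term]
    for part in FM_PARTS.get(term, []):
        result.extend(subtree(part))
    return result

def primitives_for(term):
    """Mesh primitives needed to display a structure and all its parts."""
    return [CORRESPONDENCES[t] for t in subtree(term)
            if t in CORRESPONDENCES]

print(primitives_for("Descending thoracic aorta"))
# ['aorta_desc.mesh', 'intercostal.mesh', 'bronchial.mesh']
```

A query of this kind is what lets a whole subtree (a structure together with all of its parts) be added to a scene with a single command, rather than primitive by primitive.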
Figure 4. The end user scene explorer presents
a list of structures in the scene that can be
selected, then removed or highlighted. It also
shows the available add-ons as small icons that
can be added by the user. As on the other
interfaces, camera controls allow the scene to
be rotated or zoomed.
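The command cycle behind these interfaces — a form submission becomes an action on the scene, which is then re-rendered and sent back as a snapshot — can be illustrated with a toy dispatcher. The command names and scene representation below are hypothetical; the real system uses perl CGI scripts driving a separate graphics server:

```python
# Toy command dispatcher in the spirit of the scene explorer
# (command names and scene state are hypothetical).

class Scene:
    def __init__(self):
        self.structures = []      # names of structures in the scene
        self.highlighted = set()  # subset currently highlighted
        self.rotation = 0         # degrees about the vertical axis

    def handle(self, command, arg=None):
        """Apply one user command, then return a textual 'snapshot'
        standing in for the rendered image sent back to the browser."""
        if command == "add":
            self.structures.append(arg)
        elif command == "remove":
            self.structures.remove(arg)
            self.highlighted.discard(arg)
        elif command == "highlight":
            self.highlighted.add(arg)
        elif command == "rotate":
            self.rotation = (self.rotation + arg) % 360
        return "scene(%d structures, rotated %d deg)" % (
            len(self.structures), self.rotation)

scene = Scene()
scene.handle("add", "Heart")
scene.handle("add", "Descending thoracic aorta")
scene.handle("highlight", "Heart")
print(scene.handle("rotate", 90))
# scene(2 structures, rotated 90 deg)
```

Because every command returns a fresh snapshot, the browser never holds 3-D data itself; it only ever displays the latest rendered image and the forms for the next round of interaction.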
Figure 2. The authoring interface displays a frame for navigating through the FM and for selecting structures to add to the scene. It also allows queries of the FM so that entire subtrees (e.g., all parts of the Descending thoracic aorta) can be added, highlighted or removed.

Plans and Discussion. We are currently performing a preliminary evaluation of this system, to see how the scene generator can be used as an educational tool for anatomy teachers and students. Feedback from this and other evaluations will help us combine 3-D scenes with 2-D annotated images to create a distance learning module in anatomy. In the longer term, the resulting information system, when filled out with segmented models from the entire Visible Human, has the potential for many other Web-based applications, including structure-based visual access to non-image-based biomedical information, thereby bringing us a step closer to the long-term goals of the Visible Human Project.
Acknowledgements. This work was supported
by NLM Grants LM06316 and LM06822.
Figure 3. The scene manager interface allows the author to create scene groups, consisting of initial scenes and "add-ons", which are subscenes that can be added to an initial scene in the end-user interface. Once a scene group is created, it is immediately available to the scene explorer.

References

[Rosse2000] Rosse, C., Mejino, J. L., Shapiro, L. G. and Brinkley, J. F., "Visible Human, Know Thyself: The Digital Anatomist Structural Abstraction," in Visible Human Project Conference 2000. Bethesda, Maryland: National Library of Medicine, 2000, submitted.

[Brinkley1999] Brinkley, J. F., Wong, B. A., Hinshaw, K. P. and Rosse, C., "Design of an anatomy information system," IEEE Computer Graphics and Applications, vol. 19, pp. 38-48, 1999.