Abstracts from Some of My Papers (2/6)



34 papers

PG 2016;  Computer Graphics Forum (Proc. of Pacific Graphics 2016), Volume 35, Number 7, 2016-10.

"An Error Estimation Framework for Many-Light Rendering"

by K. Nabata, K. Iwasaki, Y. Dobashi, Tomoyuki Nishita

Abstract

The popularity of many-light rendering, which converts complex global illumination computations into a simple sum of the illumination from virtual point lights (VPLs), has increased in recent years for predictive rendering. A huge number of VPLs are usually required for predictive rendering, at the cost of extensive computation time. While previous methods achieve significant speedups by clustering VPLs, none of them can estimate the total error due to clustering. This drawback forces users into tedious trial-and-error to obtain rendered images with reliable accuracy. In this paper, we propose an error estimation framework for many-light rendering. Our method casts VPL clustering as stratified sampling combined with confidence intervals, which enables the user to estimate the error due to clustering without the costly computation of summing the illumination from all the VPLs. Our estimation framework can handle arbitrary BRDFs and is accelerated by visibility caching, both of which make our method more practical. The experimental results demonstrate that our method estimates the error much more accurately than the previous clustering method.
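The core statistical idea — treating each VPL cluster as a stratum and bounding the total with a confidence interval — can be sketched in a few lines. This is a generic stratified-sampling estimator, not the paper's exact estimator; the function name and parameters are illustrative.

```python
import math
import random

def stratified_estimate(strata, n_per_stratum=16, z=1.96):
    """Estimate the sum of all item values (e.g. VPL contributions) by
    sampling a few items per stratum (cluster), and report a confidence
    interval for the total. Generic sketch, not the paper's estimator."""
    total, var = 0.0, 0.0
    for items in strata:
        n = min(n_per_stratum, len(items))
        sample = random.sample(items, n)
        mean = sum(sample) / n
        # unbiased within-stratum sample variance
        s2 = sum((x - mean) ** 2 for x in sample) / max(n - 1, 1)
        total += len(items) * mean
        # variance of the stratum total, with finite-population correction
        var += len(items) ** 2 * (s2 / n) * (1.0 - n / len(items))
    half_width = z * math.sqrt(var)
    return total, (total - half_width, total + half_width)
```

If the items in a cluster are similar (which is what a good clustering achieves), the within-stratum variance is small and the interval is tight without touching every VPL.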

Additional information


PG 2016;  Computer Graphics Forum (Proc. of Pacific Graphics 2016), Volume 35, Number 7, 2016-10.

"Multiple Scattering Approximation in Heterogeneous Media by Narrow Beam Distributions"

by Mikio Shinya, Yoshinori Dobashi, Michio Shiraishi, Motonobu Kawashima, Tomoyuki Nishita

Abstract

Fast realistic rendering of objects in scattering media is still a challenging topic in computer graphics. In the presence of participating media, a light beam is repeatedly scattered by media particles, changing direction and spreading out. Explicitly evaluating this beam distribution would enable efficient simulation of multiple scattering events without involving costly stochastic methods. Narrow beam theory provides explicit equations that approximate light propagation in a narrow incident beam. Based on this theory, we propose a closed-form distribution function for scattered beams. We successfully apply it to the image synthesis of scenes in which scattering occurs, and show that our proposed estimation method is more accurate than those based on the Wentzel-Kramers-Brillouin (WKB) theory.

Additional information


EG 2015;  Computer Graphics Forum (Proc. of EUROGRAPHICS 2015), Volume 34, Number 2, pp. ?, 2015-4.

"Implicit Formulation for SPH-based Viscous Fluids"

by Tetsuya Takahashi, Yoshinori Dobashi, Issei Fujishiro, Tomoyuki Nishita, Ming C. Lin

Abstract

We propose a stable and efficient particle-based method for simulating highly viscous fluids that can generate coiling and buckling phenomena and handle variable viscosity. In contrast to previous methods that use explicit integration, our method uses an implicit formulation to improve the robustness of viscosity integration, thereby enabling the use of larger time steps and higher viscosities. We use Smoothed Particle Hydrodynamics to solve the full form of viscosity, constructing a sparse linear system with a symmetric positive definite matrix, while exploiting the variational principle that automatically enforces the boundary condition on free surfaces. We also propose a new method for extracting the matrix coefficients contributed by second-ring neighbor particles, to efficiently solve the linear system using a conjugate gradient solver. Several examples demonstrate the robustness and efficiency of our implicit formulation over previous methods and illustrate the versatility of our method.
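The symmetric positive definite system described above is solved with a conjugate gradient solver. A minimal dense version of standard CG (not the paper's sparse, second-ring-neighbor assembly) looks like this:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
    """Solve A x = b for a symmetric positive definite matrix A.
    Plain CG on a dense matrix; an SPH solver would assemble A sparsely
    from particle neighborhoods, but the iteration is the same idea."""
    x = np.zeros(len(b), dtype=float)
    r = np.asarray(b, dtype=float) - A @ x   # initial residual
    p = r.copy()
    rs = float(r @ r)
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / float(p @ Ap)           # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = float(r @ r)
        if rs_new ** 0.5 < tol:
            break
        p = r + (rs_new / rs) * p            # new conjugate direction
        rs = rs_new
    return x
```

Symmetry and positive definiteness of the viscosity matrix are precisely what make CG applicable and guarantee convergence.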

Additional information


EG 2014;  Computer Graphics Forum(Proc. of Eurographics 2014), Volume 33, Number 2, pp.333-340, 2014-4.

"Interactive Cloth Rendering of Microcylinder Appearance Model under Environment Lighting"

by Kei Iwasaki, Kazutaka Mizutani, Yoshinori Dobashi, Tomoyuki Nishita

Abstract

This paper proposes an interactive method for rendering cloth fabrics under environment lighting. The outgoing radiance from cloth fabrics in the microcylinder model is calculated by integrating the product of the distant environment lighting, the visibility function, the weighting function that includes shadowing/masking effects of threads, and the light scattering function of threads. The radiance calculation at each shading point of the cloth fabrics is simplified to a linear combination of triple product integrals of two circular Gaussians and the visibility function, multiplied by precomputed spherical Gaussian convolutions of the weighting function. We propose an efficient calculation method for the triple product of two circular Gaussians and the visibility function by using the gradient of a signed distance function to the visibility boundary, where the binary visibility changes in the angular domain of the hemisphere. Our GPU implementation enables interactive rendering of static cloth fabrics with dynamic viewpoints and lighting. In addition, interactive editing of parameters for the scattering function (e.g. thread's albedo) that control the visual appearance of cloth fabrics can be achieved.

Additional information


CGF 2012;  Computer Graphics Forum, Volume 31, Issue 7, 2012-9

"Wetting Effects in Hair Simulation"

by Witawat Rungjiratananon, Yoshihiro Kanamori, Tomoyuki Nishita

Abstract

There has been considerable recent progress in hair simulation, driven by high demand in computer-animated movies. However, capturing the complex interactions between hair and water is still in its infancy. Such interactions are best modeled as those between water and an anisotropic permeable medium, since water can flow into and out of the hair volume, biased in the hair fiber direction. Modeling the interaction is further challenged when the hair is allowed to move. In this paper, we introduce a simulation model that reproduces interactions between water and hair as a dynamic anisotropic permeable material. We utilize an Eulerian approach for capturing the microscopic porosity of hair and handle the wetting effects using a Cartesian bounding grid. A Lagrangian approach is used to simulate every single hair strand, including interactions with each other, yielding fine-detailed dynamic hair simulation. Our model generates many interesting effects of interaction between fine-detailed dynamic hair and water: water absorption and diffusion, cohesion of wet hair strands, water flow within the hair volume, water dripping from the wet hair strands, and morphological shape transformations of wet hair.

Additional information


EG 2012;  Computer Graphics Forum, Volume 31, Issue 2, pages 575-582, 2012-5

"Pixel Art with Refracted Light by Rearrangeable Sticks"

by Yonghao Yue, Kei Iwasaki, Bing-Yu Chen, Yoshinori Dobashi, Tomoyuki Nishita

Abstract

Pixel art is a kind of digital art in which per-pixel manipulation produces a diverse array of artistic images. In this paper, we present a new way for people to experience and express pixel art. Our digital art consists of a set of sticks made of acrylate resin, each of which refracts light from a parallel light source in certain directions. Users can easily rearrange these sticks and view their digital art as a refracted-light projection on any planar surface. As we demonstrate in this paper, a user can generate various artistic images using only a single set of sticks. We additionally envision that our pixel art with rearrangeable sticks would have great entertainment appeal, e.g., as an art puzzle.

Keywords: Rearrangeable, fabrication, pixel art, refracted light, mixed integer problem

Additional information


EG 2012;  Computer Graphics Forum, Volume 31, Issue 2, pages 727-734, 2012-5

"Real-time Rendering of Dynamic Scenes under All-frequency Lighting using Integral Spherical Gaussian"

by Kei Iwasaki, Wataru Furuya, Yoshinori Dobashi, Tomoyuki Nishita

Abstract

We propose an efficient rendering method for dynamic scenes under all-frequency environmental lighting. To render the surfaces of objects illuminated by distant environmental lighting, the triple product of the lighting, the visibility function and the BRDF is integrated at each shading point on the surfaces. Our method represents the environmental lighting and the BRDF with a linear combination of spherical Gaussians, replacing the integral of the triple product with the sum of the integrals of spherical Gaussians over the visible region of the hemisphere. We propose a new form of spherical Gaussian, the integral spherical Gaussian, that enables fast and accurate integration of spherical Gaussians of varying sharpness over the visible region of the hemisphere. The integral spherical Gaussian simplifies the integration to a sum of four pre-integrated values, which are easily evaluated on-the-fly. With a combination of a set of spheres to approximate object geometries and the integral spherical Gaussian, our method can render object surfaces very efficiently. Our GPU implementation demonstrates real-time rendering of dynamic scenes with dynamic viewpoints, lighting, and BRDFs.
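For reference, the full-sphere integral of a single spherical Gaussian G(v) = exp(λ(v·p − 1)) has the well-known closed form 2π(1 − e^{−2λ})/λ. The sketch below verifies that identity numerically; the paper's actual contribution — pre-integrating over partial, visible regions — is not reproduced here.

```python
import math

def sg_integral_analytic(lam):
    """Closed-form integral of G(v) = exp(lam * (dot(v, p) - 1))
    over the whole sphere (a standard spherical Gaussian identity)."""
    return 2.0 * math.pi * (1.0 - math.exp(-2.0 * lam)) / lam

def sg_integral_numeric(lam, n=100000):
    """Midpoint-rule check: by symmetry about the lobe axis p, the
    integral reduces to 2*pi * int_0^pi exp(lam*(cos t - 1)) sin t dt."""
    d = math.pi / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * d
        total += math.exp(lam * (math.cos(t) - 1.0)) * math.sin(t) * d
    return 2.0 * math.pi * total
```

The closed form holds for any sharpness λ > 0, which is why spherical Gaussians are attractive as lighting and BRDF basis functions.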

Additional information


CGF 2011;  Computer Graphics Forum, Volume 30, Issue 7, pp. 1869-1878, 2011-9

"Motion Deblurring from a Single Image using Circular Sensor Motion"

by Yosuke Bando, Bing-Yu Chen, Tomoyuki Nishita

Abstract

Image blur caused by object motion attenuates high frequency content of images, making post-capture deblurring an ill-posed problem. The recoverable frequency band quickly becomes narrower for faster object motion as high frequencies are severely attenuated and virtually lost. This paper proposes to translate a camera sensor circularly about the optical axis during exposure, so that high frequencies can be preserved for a wide range of in-plane linear object motion in any direction within some predetermined speed. That is, although no object may be photographed sharply at capture time, differently moving objects captured in a single image can be deconvolved with similar quality. In addition, circular sensor motion is shown to facilitate blur estimation thanks to distinct frequency zero patterns of the resulting motion blur point-spread functions. An analysis of the frequency characteristics of circular sensor motion in relation to linear object motion is presented, along with deconvolution results for photographs captured with a prototype camera.

Additional information


CGF 2011;  Computer Graphics Forum, Volume 30, Issue 8, pages ?, 2011-9

"Toward Optimal Space Partitioning for Unbiased, Adaptive Free Path Sampling of Inhomogeneous Participating Media"

by Yonghao Yue, Kei Iwasaki, Bing-Yu Chen, Yoshinori Dobashi, Tomoyuki Nishita

Abstract

Photo-realistic rendering of inhomogeneous participating media, with light scattering taken into consideration, is important in computer graphics and is typically computed using Monte Carlo based methods. The key technique in such methods is free path sampling, which is used for determining the distance (free path) between successive scattering events. Recently, it has been shown that efficient and unbiased free path sampling methods can be constructed based on Woodcock tracking. The key concept for improving the efficiency is to utilize space partitioning (e.g., a kd-tree or a uniform grid), and a better space partitioning scheme is important for better sampling efficiency. Thus, an estimation framework for investigating the gain in sampling efficiency is important for determining how to partition the space. However, currently, there is no estimation framework that works in 3D space. In this paper, we propose a new estimation framework to overcome this problem. Using our framework, we can analytically estimate the sampling efficiency for any typical partitioned space. Conversely, we can also use this estimation framework for determining the optimal space partitioning. As an application, we show that new space partitioning schemes can be constructed using our estimation framework. Moreover, we show that the differences in performance using different schemes can be predicted fairly well with our estimation framework.
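Basic Woodcock (delta) tracking — the unbiased free-path sampler the paper builds on — fits in a few lines. The sketch uses a single global majorant; the paper's point is precisely that partitioning space into regions with tighter local majorants reduces the number of rejected ("null") collisions. Names and the 1D ray parameterization are illustrative.

```python
import math
import random

def woodcock_free_path(sigma_t, sigma_max, u=random.random):
    """Sample a free path along a ray through a heterogeneous medium.
    sigma_t(x): extinction coefficient at distance x along the ray.
    sigma_max : majorant, sigma_t(x) <= sigma_max everywhere."""
    x = 0.0
    while True:
        # tentative collision distance in a homogeneous medium with sigma_max
        x -= math.log(1.0 - u()) / sigma_max
        # accept as a real collision with probability sigma_t / sigma_max,
        # otherwise it is a null collision and the walk continues
        if u() < sigma_t(x) / sigma_max:
            return x
```

In a homogeneous medium with sigma_t = 1 the sampled distances are exponential with mean free path 1, regardless of how loose the majorant is — only efficiency suffers, never correctness.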

Additional information


CGF 2010;  Computer Graphics Forum, Volume 29, Issue 8, pages 2438-2446, 2010-12

"Chain Shape Matching for Simulating Complex Hairstyles"

by Witawat Rungjiratananon, Y. Kanamori, Tomoyuki Nishita

Abstract

Animations of hair dynamics greatly enrich the visual attractiveness of human characters. Traditional simulation techniques handle hair as clumps or as a continuum for efficiency; however, the visual quality is limited because they cannot represent the fine-scale motion of individual hair strands. Although a recent mass-spring approach tackled the problem of simulating the dynamics of every strand of hair, it required a complicated configuration of springs and suffered from high computational cost. In this paper, we base the animation of hair at this fine scale on Lattice Shape Matching (LSM), which has been successfully used for simulating deformable objects. Our method regards each strand of hair as a chain of particles and computes geometrically-derived forces for the chain based on shape matching. Each chain of particles is simulated as an individual strand of hair. Our method can easily handle complex hairstyles such as curly or afro styles in a numerically stable way. While our method is not physically-based, our GPU-based simulator achieves visually plausible animations consisting of several tens of thousands of hair strands at interactive rates.
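The shape-matching step at the heart of such methods — finding the rigid transform that best maps a region's rest shape onto its current, deformed particle positions — can be sketched with an SVD-based polar decomposition. This is a generic region matcher in the Mueller et al. style, not the paper's chain-specific solver:

```python
import numpy as np

def shape_match_goals(rest, cur):
    """Goal positions for one shape-matching region: rigidly transform
    the rest shape onto the current particle positions. Particles are
    then pulled toward these goals to obtain the matching forces."""
    c_rest = rest.mean(axis=0)
    c_cur = cur.mean(axis=0)
    P = rest - c_rest
    Q = cur - c_cur
    A = Q.T @ P                       # covariance between current and rest
    U, _, Vt = np.linalg.svd(A)
    R = U @ Vt                        # closest rotation (polar part of A)
    if np.linalg.det(R) < 0:          # guard against reflections
        U[:, -1] *= -1
        R = U @ Vt
    return (R @ P.T).T + c_cur        # rigidly moved rest shape
```

For a chain, each region is a short run of consecutive particles, and overlapping regions along the strand let bending and twisting propagate.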

Keywords: hair simulation, shape matching, GPU, individual hair strands

Additional information


CGF 2010;  Computer Graphics Forum, Volume 29, Issue 8, pages 2427-2437, 2010-12

"An Eyeglass Simulator Using Conoid Tracing"

by Masanori Kakimoto, T. Tatsukawa, Tomoyuki Nishita

Abstract

This paper proposes a method for displaying images at the fovea of the retina taking visual acuity into account. Previous research has shown that a point light source projected onto the retina forms an ellipse, which can be computed with wavefront tracing from each point in space. We propose a novel concept using conoid tracing, with which we can acquire defocusing information several times faster than that acquired by previous methods. We also show that conoid tracing is more robust and produces higher quality results. In conoid tracing the ray is regarded as a conoid, a thin cone-like shape with varying elliptical cross-section. The viewing ray from the retina is traced as a conoid and evaluated at each sample location. Using the sampled and pre-computed data for the spatial distribution of blurring, we implemented an interactive eyeglass simulator. This paper demonstrates some visualization results utilizing the interactivity of the simulator, which an eyeglass lens design company uses to evaluate the design of complex progressive lenses.

Keywords: progressive lens, wavefront tracing, conoid tracing, defocus, depth of field

Additional information


CGF 2010;  Computer Graphics Forum, Vol. 29, No. 7, pp.2215-2223, (Proc. of PG2010), 2010-9

"Fast Particle-based Visual Simulation of Ice Melting"

by Kei Iwasaki, H. Uchida, Y. Dobashi, Tomoyuki Nishita

Abstract

The visual simulation of natural phenomena has been widely studied. Although several methods have been proposed to simulate melting, the flows of meltwater drops on the surfaces of objects are not taken into account. In this paper, we propose a particle-based method for the simulation of the melting and freezing of ice objects and the interactions between ice and fluids. To simulate the flow of meltwater on ice and the formation of water droplets, a simple interfacial tension is proposed, which can be easily incorporated into common particle-based simulation methods such as Smoothed Particle Hydrodynamics. The computations of heat transfer, the phase transition between ice and water, the interactions between ice and fluids, and the separation of ice due to melting are further accelerated by implementing our method using CUDA. We demonstrate our simulation and rendering method for depicting melting ice at interactive frame-rates.

Additional information


CGF 2010;  Computer Graphics Forum, Vol. 29, No. 7, pp.2215-2223, (Proc. of PG2010), 2010-9

"Binary Orientation Trees for Volume and Surface Reconstruction from Unoriented Point Clouds"

by Yi-Ling Chen, B.-Y. Chen, Shang-Hong Lai, T. Nishita

Abstract

Given a complete unoriented point set, we propose a binary orientation tree (BOT) for volume and surface representation, which roughly splits the space into interior and exterior regions with respect to the input point set. The BOTs are constructed by performing a traditional octree subdivision, while the corners of each cell are associated with a tag indicating the in/out relationship with respect to the input point set. Starting from the root cell, a growing stage efficiently assigns tags to the connected empty sub-cells. The unresolved tags of the remaining cell corners are determined by examining their visibility via the hidden point removal operator. We show that outliers accompanying the input point set can be effectively detected during the construction of the BOTs. After removing the outliers and resolving the in/out tags, the BOTs are ready to support any volume or surface representation technique. To represent surfaces, we also present a modified MPU implicits algorithm that reconstructs surfaces from unoriented point clouds by taking advantage of the BOTs.

Additional information


EG 2010;  Computer Graphics Forum, Vol.29, No.2, pp.733-742, 2010-5

"Motion Blur for EWA Surface Splatting"

by Simon Heinzle, Johanna Wolf, Yoshihiro Kanamori, Tim Weyrich, Tomoyuki Nishita, Markus Gross

Abstract

This paper presents a novel framework for elliptical weighted average (EWA) surface splatting with time-varying scenes. We extend the theoretical basis of the original framework by replacing the 2D surface reconstruction filters with 3D kernels that unify the spatial and temporal components of moving objects. Based on the newly derived mathematical framework, we introduce a rendering algorithm that supports the generation of high-quality motion blur for point-based objects using a piecewise linear approximation of the motion. The rendering algorithm uses ellipsoids as rendering primitives, constructed by extending planar EWA surface splats into the temporal dimension along the instantaneous motion vector. Finally, we present an implementation of the proposed rendering algorithm with approximate occlusion handling using advanced features of modern GPUs and show its capability of producing motion-blurred images at interactive frame rates.

Additional information


PG 2009;  Computer Graphics Forum, Vol.28, No.7, pp.1935-1944 (PG 2009)

"Interactive Rendering of Interior Scenes with Dynamic Environment Illumination"

by Yonghao Yue, Kei Iwasaki, Bing-Yu Chen, Yoshinori Dobashi, and Tomoyuki Nishita

Abstract

A rendering system for interior scenes is proposed in this paper. Light usually reaches an interior scene through small regions, such as windows or abat-jours, which we call portals. To provide a solution suitable for rendering interior scenes with portals, we extend the traditional precomputed radiance transfer approaches. In our approach, a bounding sphere of the interior, which we call a shell, is created centered at each portal, and the light transferred from the shell towards the interior through the portal is precomputed. Each shell acts as an environment light source, and its intensity distribution is determined by rendering images of the scene viewed from the center of the shell. By updating the intensity distribution of the shell at each frame, we are able to handle dynamic objects outside the shells. The material of the portals can also be modified at run time (e.g., changing from transparent glass to frosted glass). Several applications are shown, including a cathedral lit by skylight at different times of day and a car running in a town, both rendered at interactive frame rates with a dynamic viewpoint.

Additional information


PG 2009;  Computer Graphics Forum, Vol.28, No.7, pp.1837-1844 (PG 2009)

"Simulation of Tearing Cloth with Frayed Edges"

by Napaporn Metaaphanon, Yosuke Bando, Bing-Yu Chen and Tomoyuki Nishita

Abstract

Woven cloth is commonly seen in daily life and also in animation. Unless prevented in some way, woven cloth usually frays at the edges. However, in computer graphics, woven cloth is typically modeled as a continuum sheet, which is not suitable for representing frays. This paper proposes a model that allows yarn movement and slippage during cloth tearing. Drawing upon techniques from the textile and mechanical engineering fields, we model cloth as woven yarn crossings where each yarn can be independently torn when its strain limit is reached. To make the model practical for graphics applications, we simulate only the tearing part of the cloth with a yarn-level model, using a simple constrained mass-spring system for computational efficiency. We designed conditions for switching from a standard continuum sheet model to our yarn-level model, so that frays can be initiated and propagated along the torn lines. Results show that our method achieves plausible tearing-cloth animation with frayed edges.

Additional information


PG 2009;  Proc. of PG 2009

"Interactive and Realistic Visualization System for Earth-Scale Clouds"

by Yoshinori Dobashi, Tsuyoshi Yamamoto and Tomoyuki Nishita

Abstract

This paper presents an interactive system for realistic visualization of earth-scale clouds. Realistic images can be generated at interactive frame rates while the viewpoint and the sunlight directions can be changed interactively. The realistic display of earth-scale clouds requires us to render large volume data representing the density distribution of the clouds. However, this is generally time-consuming and it is difficult to achieve the interactive performance, especially when the sunlight direction can be changed. To address this, our system precomputes the integrated intensities and opacities of clouds for various viewing and sunlight directions. This idea is combined with a novel hierarchical data structure for further acceleration. The photorealism of the final image is improved by taking into account the atmospheric effects and shadows of clouds on the earth. We demonstrate the usefulness of our system by an application to a space flight simulation.

Additional information


PG 2008;  Computer Graphics Forum, Vol. 27, No. 7, pp.1887-1893 (PG 2008)

"Real-time Animation of Sand-Water Interaction"

by Witawat Rungjiratananon, Zoltan Szego, Yoshihiro Kanamori and Tomoyuki Nishita

Abstract

Recent advances in physically-based simulation have made it possible to generate realistic animations. However, in the case of solid-fluid coupling, wetting effects have rarely been addressed despite their visual importance, especially in interactions between fluids and granular materials. This paper presents a simple particle-based method to model the physical mechanism of wetness propagating through granular materials: fluid particles are absorbed in the spaces between the granular particles, and these wetted granular particles then stick together due to liquid bridges caused by surface tension, which subsequently disappear when over-wetting occurs. Our method handles these phenomena by introducing a wetness value for each granular particle and by integrating the wetness-dependent aspects of behavior into the simulation framework. Using this method, a GPU-based simulator can achieve highly dynamic animations that include wetting effects in real time.

Additional information


EG 2008;  Computer Graphics Forum, Vol.27, No.2, pp.351-360 (Proc. EUROGRAPHICS 2008)

"GPU-based Fast Ray Casting for a Large Number of Metaballs"

by Yoshihiro Kanamori, Zoltan Szego, and Tomoyuki Nishita

Abstract

Metaballs are implicit surfaces widely used to model curved objects, represented by the isosurface of a density field defined by a set of points. Recently, the results of particle-based simulations have often been visualized using a large number of metaballs; however, such visualizations have high rendering costs. In this paper we propose a fast technique for rendering metaballs on the GPU. Instead of using polygonization, the isosurface is directly evaluated in a per-pixel manner. For such evaluation, all metaballs contributing to the isosurface need to be extracted along each viewing ray, within the limited memory of GPUs. We handle this by keeping a list of the metaballs contributing to the isosurface and updating it efficiently. Our method requires neither expensive precomputation nor the acceleration data structures often used in existing ray tracing techniques. With several optimizations, we can display a large number of moving metaballs quickly.
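The per-pixel evaluation boils down to summing a smooth falloff kernel from every nearby metaball and testing the total against an iso-threshold along the viewing ray. A classic compact-support kernel is sketched below; this is illustrative background, since the paper's contribution is maintaining the contributing-metaball list on the GPU, not the kernel itself.

```python
def metaball_density(point, balls):
    """Density at a 3D point from metaballs given as (cx, cy, cz, radius).
    Uses the compact-support falloff (1 - d^2/r^2)^3; the isosurface is
    the set of points where this sum equals a chosen threshold."""
    total = 0.0
    for cx, cy, cz, r in balls:
        d2 = (point[0] - cx) ** 2 + (point[1] - cy) ** 2 + (point[2] - cz) ** 2
        r2 = r * r
        if d2 < r2:                   # points beyond the radius contribute nothing
            t = 1.0 - d2 / r2
            total += t * t * t
    return total
```

Compact support is what makes a per-ray list feasible: only balls whose bounding spheres intersect the ray can ever contribute.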

Additional information


EG 2008;  Computer Graphics Forum, Vol.27, No.2, pp. 477-486 (Proc. EUROGRAPHICS 2008)

"A Fast Simulation Method Using Overlapping Grids for Interactions between Smoke and Rigid Objects"

by Y. Dobashi, Y. Matsuda, T. Yamamoto, and Tomoyuki Nishita

Abstract

Many methods for the visual simulation of natural phenomena related to fluids, such as smoke and fire, have been proposed. These methods use computational fluid dynamics to compute the motion of the fluids. Traditionally, a single computational grid is used. When an object interacts with the fluid, the resolution of the grid must be sufficiently high because the object's shape is represented by sampling at the grid points. This increases the number of grid points, and hence the computational cost. To address this problem, we propose a method using multiple grids that overlap with each other. In addition to a large single grid (a global grid) that covers the whole simulation space, separate grids (local grids) are generated that surround each object. The resolution of a local grid around an object is higher than that of the global grid. The local grids move according to the motion of the objects, so the object's shape does not need to be resampled when the object moves. To accelerate the computation, appropriate resolutions are adaptively determined for the local grids according to the distance from the viewpoint. Furthermore, since we use regular (orthogonal) lattices for the grids, the method is very well suited to a GPU implementation. This realizes real-time simulation of the interaction between objects and smoke.

Additional information


EG 2007;  Computer Graphics Forum, Vol.26, No.3, pp.627-636 (Proc. EUROGRAPHICS 2007)

"Interactive Simulation of the Human Eye Depth of Field and Its Correction with Spectacle Lenses"

by M. Kakimoto, T. Tatsukawa, Y. Mukai, and Tomoyuki Nishita

Abstract

This paper describes a fast rendering algorithm for verification of spectacle lens design. Our method simulates refraction corrections of astigmatism as well as myopia or presbyopia. Refraction and defocus are the main issues in the simulation. For refraction, our proposed method uses per-vertex basis ray tracing which warps the environment map and produces a real-time refracted image which is subjectively as good as ray tracing. Conventional defocus simulation was previously done by distribution ray tracing and a real-time solution was impossible. We introduce the concept of a blur field, which we use to displace every vertex according to its position. The blurring information is precomputed as a set of field values distributed to voxels which are formed by evenly subdividing the perspective projected space. The field values can be determined by tracing a wavefront from each voxel through the lens and the eye, and by evaluating the spread of light at the retina considering the best human accommodation effort. The blur field is stored as texture data and referred to by the vertex shader that displaces each vertex. With an interactive frame rate, blending the multiple rendering results produces a blurred image comparable to distribution ray tracing output.

Additional information


CAV 2005;  Computer Animation and Virtual Worlds, Vol. 16, pp. 475-486

"Deferred Shadowing for Real-Time Rendering of Dynamic Scenes Under Environment Illumination"

by Naoki Tamura, Henry Johan, and Tomoyuki Nishita

Abstract

Environment illumination, a complex and distant lighting environment represented by images, is often applied to create photo-realistic images. However, creating photo-realistic animations under environment illumination is exceedingly computationally intensive. Precomputed Radiance Transfer (PRT) methods achieve real-time rendering under environment illumination; however, they have only limited application in animation because the objects in the scene cannot be moved or rotated. In this paper, we propose a method for rendering photorealistic animations of dynamic scenes under environment illumination in real time. We exploit the fact that when objects are moved or rotated, changes in radiance occur mainly in the regions of shadows cast by other objects. Our method distinguishes between self-shadows and shadows cast by other objects, and computes these two kinds of shadows efficiently.

Additional information


TVC 2005;  The Visual Computer, Vol. 21

"Character animation creation using hand-drawn sketches"

by Bing-Yu Chen , Yutaka Ono, and Tomoyuki Nishita

Abstract

To create a character animation, a 3D character model is often needed. However, since humanlike characters are not rigid bodies, deforming the character model to fit each animation frame is tedious work. Therefore, we propose an easy-to-use method for creating a set of consistent 3D character models from hand-drawn sketches, while keeping the projected silhouettes and features of the created models consistent with the input sketches. Since the character models possess vertex-wise correspondences, they can be used for frame-consistent texture mapping or for making character animations. In our system, the user only needs to annotate the correspondence of the features among the input vector-based sketches; the remaining processes are all performed automatically.

Key Words:

Cel animation - Nonphotorealistic rendering - 3D morphing - Consistent mesh parameterization - Sketches

Additional information

input (hand-drawn)  ---->  output (3D model)

CG Forum 2004; 

"Synthesizing Sound from Turbulent Field using Sound Textures for Interactive Fluid Simulation"

by Yoshinori Dobashi, T. Yamamoto, Tomoyuki Nishita

Abstract

Sound is an indispensable element for the simulation of a realistic virtual environment. Therefore, there has been much recent research focused on the simulation of realistic sound effects. This paper proposes a method for creating sound for turbulent phenomena such as fire. In a turbulent field, the complex motion of vortices leads to the generation of sound. This type of sound is called a vortex sound. The proposed method simulates a vortex sound by computing vorticity distributions using computational fluid dynamics. Sound textures for the vortex sound are first created in a pre-process step. The sound is then created at interactive rates by using these sound textures. The usefulness of the proposed method is demonstrated by applying it to the simulation of the sound of fire and other turbulent phenomena.

Key Words:

Additional information


JISE2003; Journal of Information Science and Engineering, Vol. 20, No. 2, pp. 219-232

"Modeling of Volcanic Clouds using CML"

by Ryoichi Mizuno, Yoshinori Dobashi, and Tomoyuki Nishita

Abstract

In this paper, a model of volcanic clouds for computer graphics using the Coupled Map Lattice (CML) method is proposed. In this model, the Navier-Stokes equations are used and are solved by the CML method, which serves as an efficient fluid solver. Moreover, to generate the desired shape of volcanic clouds, parameters that allow intuitive control of the shape are provided. Hence, in this system, the behavior of volcanic clouds can be calculated in practical computation time, and various shapes of volcanic clouds can be generated by changing only a few parameters. Therefore, photo-realistic images and animations of volcanic clouds in various shapes can be created efficiently by the proposed approach.
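The paper's solver is not reproduced here, but the flavor of a coupled map lattice — a grid of cells updated synchronously by a simple local rule instead of a full Navier-Stokes discretization — can be sketched as follows. The diffusion and buoyancy constants and the buoyancy rule itself are illustrative assumptions, not the paper's model:

```python
def cml_step(temp, diffusion=0.2, buoyancy=0.05):
    """One coupled-map-lattice update of a 2D temperature field.

    temp is a list of rows. Each site first relaxes toward the mean of
    its four neighbours (diffusion), then blends with the site in the
    next row (a crude stand-in for buoyant upward transport). Periodic
    boundaries make the update conservative, which is easy to verify.
    """
    h, w = len(temp), len(temp[0])
    diffused = [[temp[i][j] + diffusion *
                 ((temp[(i - 1) % h][j] + temp[(i + 1) % h][j] +
                   temp[i][(j - 1) % w] + temp[i][(j + 1) % w]) / 4.0
                  - temp[i][j])
                 for j in range(w)] for i in range(h)]
    return [[(1 - buoyancy) * diffused[i][j] +
             buoyancy * diffused[(i + 1) % h][j]
             for j in range(w)] for i in range(h)]
```

Because every update is purely local, the lattice can be advanced far faster than a conventional fluid solver, which is the efficiency argument the abstract makes.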

Key words

Volcanic clouds, coupled map lattice, modeling, visualization, animation, computational fluid dynamics, cellular automaton

Additional information


WSCG04; Journal of WSCG04, pp. 277

"Diffusion and Multiple Anisotropic Scattering for Global Illumination in Clouds"

by N. Max, G. Schussman, R. Miyazaki, K. Iwasaki, and Tomoyuki Nishita

Abstract

The diffusion method is a good approximation inside the dense core of a cloud, but not at the more tenuous boundary regions. Also, it breaks down in regions where the density of scattering droplets is zero. We have enhanced it by using hardware cell projection volume rendering at cloud border voxels to account for the straight line light transport across these empty regions. We have also used this hardware volume rendering at key voxels in the low-density boundary regions to account for the multiple anisotropic scattering of the environment.

Key words

Diffusion approximation, multiple anisotropic scattering, global illumination, participating media, clouds.

Additional information


EG 2003; Computer Graphics Forum, Vol.22, No.3, pp.601-609, 2003-9

"A Fast Rendering Method for Refractive and Reflective Caustics Due to Water Surfaces"

by Kei Iwasaki, Yoshinori Dobashi, and Tomoyuki Nishita

Abstract

In order to synthesize realistic images of scenes that include water surfaces, the rendering of optical effects caused by waves on the water surface, such as caustics and reflection, is necessary. However, rendering caustics is quite complex and time-consuming. In recent years, the performance of graphics hardware has made significant progress, which encourages researchers to study the acceleration of realistic image synthesis. We present here a method for the fast rendering of refractive and reflective caustics due to water surfaces. In the proposed method, an object is expressed by a set of texture-mapped slices. We calculate the intensities of the caustics on the object by using the slices and store the intensities as textures. This makes it possible to render caustics at interactive rates by using graphics hardware. Moreover, we render objects that are reflected and refracted by the water surface by using reflection/refraction mapping of these slices.

Key words

Additional information


EG 2003; Computer Graphics Forum, Vol.22, No.3, pp.411-418, 2003-9

"Animating Hair with Loosely Connected Particles "

by Yosuke Bando, Bing-Yu Chen, and Tomoyuki Nishita

Abstract

This paper presents a practical approach to the animation of hair at an interactive frame rate. In our approach, we model the hair as a set of particles that serve as sampling points for the volume of the hair, which covers the whole region where hair is present. The dynamics of the hair, including hair-hair interactions, is simulated using the interacting particles. The novelty of this approach is that, as opposed to the traditional way of modeling hair, we release the particles from tight structures that are usually used to represent hair strands or clusters. Therefore, by making the connections between the particles loose while maintaining their overall stiffness, the hair can be dynamically split and merged during lateral motion without losing its lengthwise coherence.

Key words

Additional information


TVC 2002;  The Visual Computer 2002, Vol.18, No.8, pp.493-510, 2002-12

"B-spline free-form deformation of polygonal object as trimmed Bezier surfaces,"

by Jieqing Feng, Tomoyuki Nishita, Xiaogang Jin, Qunsheng Peng

Abstract

Free-form deformation is a powerful shape modification tool, but how to approximate or compute the real deformation of a polygonal object is still problematic. In this paper, a new solution is proposed for this problem. First, a special initial B-spline volume is defined whose Jacobian is an identity matrix. The accurate deformation of the object is then represented as trimmed tensor-product Bezier surfaces. The description of the trimmed surfaces is consistent with that in the industrial standard STEP, and the degrees of the Bezier surfaces are lower than the theoretical bounds. Compared with previous algorithms, the proposed algorithm offers advantages in both storage and run-time.
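For orientation, the evaluation side of free-form deformation can be sketched with a trivariate Bezier volume; the paper's B-spline volume is piecewise Bezier, so the same Bernstein machinery applies on each cell. The lattice layout and degree below are illustrative assumptions:

```python
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_i^n(t)."""
    return comb(n, i) * t ** i * (1 - t) ** (n - i)

def ffd(point, lattice, n=2):
    """Deform a point in [0,1]^3 by a (n+1)^3 Bezier control lattice.

    lattice[i][j][k] is the displaced position of control point (i,j,k);
    an undeformed lattice has lattice[i][j][k] = (i/n, j/n, k/n).
    """
    u, v, w = point
    x = y = z = 0.0
    for i in range(n + 1):
        bu = bernstein(n, i, u)
        for j in range(n + 1):
            bv = bernstein(n, j, v)
            for k in range(n + 1):
                b = bu * bv * bernstein(n, k, w)
                px, py, pz = lattice[i][j][k]
                x += b * px
                y += b * py
                z += b * pz
    return (x, y, z)
```

With the undeformed lattice, the map is the identity by the linear precision of Bernstein polynomials — this corresponds to the special initial B-spline volume with identity Jacobian mentioned in the abstract.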

Key words

Free-form deformation, B-spline, Polygons, Bezier surface, Trimmed surface

Additional information


IEEE TM 1998;  IEEE Trans. on Magnetics 1998, Vol. 34, No. 5, pp. 3431-3434,1998

"A Fast Volume Rendering Method for Time-Varying 3-D Scalar Field Visualization Using Orthonormal Wavelets,"

by Yoshinori Dobashi, Vlatko Cingoski, Kazufumi Kaneda, Hideo Yamashita, Tomoyuki Nishita

Abstract

Animation of a time-varying 3-D scalar field distribution requires the generation of a set of images at sampled time intervals, i.e., frames. Although the volume rendering method can be very advantageous for such 3-D scalar field visualizations, in the case of animation the computation time needed to generate the entire set of images can be considerably long. To address this problem, this paper proposes a fast volume rendering method that utilizes orthonormal wavelets. In the proposed method, the coherence between frames is exploited by expanding the scalar field into a series of wavelets. An application of the proposed method to a time-varying eddy-current density distribution inside an aluminum plate (TEAM Workshop Problem 7) is given.
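A minimal illustration of the idea, assuming the simplest orthonormal wavelet (the Haar basis) applied to a single voxel's time series; the paper's choice of wavelet and data layout may differ. Truncating small detail coefficients before reconstruction is what trades accuracy for storage and speed:

```python
def haar_forward(signal):
    """Orthonormal Haar wavelet transform of a length-2^m sequence.

    Returns [approximation, coarsest details, ..., finest details].
    """
    coeffs = list(signal)
    n = len(coeffs)
    out = []
    while n > 1:
        avg = [(coeffs[2 * i] + coeffs[2 * i + 1]) / 2 ** 0.5 for i in range(n // 2)]
        det = [(coeffs[2 * i] - coeffs[2 * i + 1]) / 2 ** 0.5 for i in range(n // 2)]
        out = det + out          # coarser details go in front of finer ones
        coeffs = avg
        n //= 2
    return coeffs + out

def haar_inverse(coeffs):
    """Inverse of haar_forward."""
    approx = coeffs[:1]
    pos = 1
    while pos < len(coeffs):
        det = coeffs[pos:pos + len(approx)]
        nxt = []
        for a, d in zip(approx, det):
            nxt.append((a + d) / 2 ** 0.5)
            nxt.append((a - d) / 2 ** 0.5)
        approx = nxt
        pos += len(det)
    return approx
```

Because the basis is orthonormal, energy is preserved (Parseval's relation), so the reconstruction error after truncation is exactly the energy of the discarded coefficients.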

Additional information


IEEE CGA 1989; IEEE Computer Graphics and Applications 1989, Vol.9, No.2, pp.21-29, 1989-3

"Composition 3D Images with Anti-aliasing and Various Shading Effects,"

by Eihachiro Nakamae, T. Ishizaki, Tomoyuki Nishita, and S. Takita

Abstract

To make complex images look realistic, various types of geometric models and shading effects are needed. Programs capable of dealing with these are usually large and complex, and they are expensive to develop. This article proposes a method for compositing 3D images produced by different programs, taking depth order into consideration. The method can add the following effects to composited images: 1. Antialiased images with scaling are displayed by a simple algorithm. 2. The algorithm can add shading effects due to various types of light, such as area sources and skylight. 3. Shading effects such as transparency and refraction are usually accomplished by ray tracing, but at the expense of enormous computation time; our method allows ray tracing to be performed in localized regions, producing realistic results without the computational expense of ray tracing the whole image. In addition to the above processes, shading effects such as fog and texture mapping can be processed with conventional methods. Thus it becomes possible to display complex scenes with various shading effects using relatively small computers.

Additional information


COMPSAC 1983;   Proc. of IEEE Computer Society's 7th International Computer Software & Applications Conference, pp.237-242, 1983-11

"Half-Tone Representation of 3-D Objects Illuminated by Area Sources or Polyhedron Sources,"

by Tomoyuki Nishita and Eihachiro Nakamae

Abstract

The degree of realism of the shaded image of a three-dimensional scene depends remarkably on the successful simulation of shadowing and shading effects. The shading model has two main ingredients: the properties of the surface and the properties of the illumination falling on it. In most previous work, researchers' interest seems to have been concentrated on the former rather than the latter, and a major deficiency in most computer-synthesized images has been the lack of penumbrae. This paper presents shading algorithms for area sources and polyhedron sources. The advantages are as follows: 1) The use of shadow volumes formed by a convex polyhedron and an area (or polyhedron) source results in easy determination of the regions of penumbrae and umbrae on faces. 2) The illuminance in penumbrae caused by several polyhedra can be obtained by using the contour integration method. 3) The precise calculation of the illuminance for area sources and polyhedron sources gives much-improved realism in the half-tone representation.
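Point 2) refers to a classical closed form: the form factor from a differential receiver to a polygonal source reduces to a sum over the polygon's edges. The sketch below implements that standard contour-integral result (with counter-clockwise winding as seen from the receiver); it is the textbook formula, not code from the paper:

```python
from math import acos, pi, sqrt

def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def _dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _unit(a):
    l = sqrt(_dot(a, a))
    return (a[0] / l, a[1] / l, a[2] / l)

def poly_form_factor(p, n, verts):
    """Form factor from point p (surface normal n) to a convex polygon.

    F = (1/2pi) * sum_i theta_i * (gamma_i . n), where theta_i is the
    angle subtended at p by edge (v_i, v_{i+1}) and gamma_i is the unit
    normal of the triangle (p, v_i, v_{i+1}).
    """
    r = [_unit(_sub(v, p)) for v in verts]
    f = 0.0
    for i in range(len(r)):
        a, b = r[i], r[(i + 1) % len(r)]
        theta = acos(max(-1.0, min(1.0, _dot(a, b))))
        gamma = _unit(_cross(a, b))
        f += theta * _dot(gamma, n)
    return f / (2 * pi)
```

A source that fills the whole hemisphere above the receiver yields a form factor of 1, which makes the formula easy to sanity-check.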

Additional information


EG'97;  Computer Graphics Forum, Vol.16, No.3, pp.357-364, 1997-9

"A Modeling and Rendering Method for Snow by Using Metaballs"

by Tomoyuki Nishita, Iwasaki, Yoshinori Dobashi, and Eihachiro Nakamae

Abstract

The display of natural scenes such as mountains, trees, the earth as viewed from space, the sea, and waves has been attempted in previous work. Here, a method to realistically display snow is proposed. In order to achieve this, two important elements have to be considered, namely the shape and the shading model of snow, based on the physical phenomenon. In this paper, a method for displaying snow that has fallen onto objects, including curved surfaces, and snow scattered by objects such as skis is proposed. Snow should be treated as particles with a density distribution, since it consists of water particles, ice particles, and air molecules. In order to express the material properties of snow, the phase functions of the particles must be taken into account, and it is well known that the color of snow is white because of the multiple scattering of light. This paper describes a calculation method for light scattering due to snow particles, taking into account both multiple scattering and sky light, as well as the modeling of snow.

Key words

snow, multiple scattering, Mie scattering, metaball, volume rendering

Additional information


EG'96;  Computer Graphics Forum, Vol.15, No.3, pp. 112-118, 1996-9

"Method for Calculation of Sky Light Luminance Aiming at an Interactive Architectural Design"

by Yoshinori Dobashi, Kazufumi Kaneda, Hideo Yamashita, and Tomoyuki Nishita

Abstract

Recently, computer graphics has frequently been used for both architectural design and visual environmental assessment. Using computer graphics, designers can easily compare the effects of natural light on their architectural designs under various conditions, such as different times of day, seasons, atmospheric conditions (clear or overcast sky), or building wall materials. In traditional methods of calculating the luminance due to sky light, however, all calculations must be performed from scratch if such conditions change. Therefore, to compare architectural designs under different conditions, a great deal of time has to be spent on generating the images. This paper proposes a new method of quickly generating images of an outdoor scene, taking into account glossy specular reflection, even if such conditions change. In this method, the luminance due to sky light is expressed by a series of basis functions, and basis luminances corresponding to each basis function are precalculated and stored in a compressed form in a preprocess. Once the basis luminances are calculated, the luminance due to sky light can be quickly computed as a weighted sum of the basis luminances. Several examples of architectural designs demonstrate the usefulness of the proposed method.

Key Words:


Additional information


EG'95;   Computer Graphics Forum, Vol.14, No.3, pp.229-240, 1995-9

"A Quick Rendering Method using Basis Functions for Interactive Lighting Design"

by Yoshinori Dobashi, Kazufumi Kaneda, Takanobu Nakashima, Hideo Yamashita, and Tomoyuki Nishita

Abstract

When designing interior lighting effects, it is desirable to compare a variety of lighting designs involving different lighting devices and directions of light. It is, however, time-consuming to generate images with many different lighting parameters, taking interreflection into account, because all luminances must be calculated and recalculated. This makes it difficult to design lighting effects interactively. To address this problem, this paper proposes a method of quickly generating images of a given scene illustrating an interreflective environment illuminated by sources with arbitrary luminous intensity distributions. In the proposed method, the luminous intensity distribution is expressed with basis functions. The proposed method uses a series of spherical harmonic functions as basis functions and calculates in advance each intensity on surfaces lit by light sources whose luminous intensity distributions are the same as the spherical harmonic functions. The proposed method makes it possible to generate images so quickly that the luminous intensity distribution can be changed interactively. Combining the proposed method with an interactive walk-through that employs intensity mapping, an interactive system for lighting design is implemented. The usefulness of the proposed method is demonstrated by its application to interactive lighting design, where many images are generated by altering lighting devices and/or the direction of light.
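The key observation is the linearity of light transport: once one image has been rendered (with full interreflection) per basis light, any luminous intensity distribution is handled by re-weighting those images. A minimal sketch, assuming images are stored as flat pixel lists (a simplification, not the paper's data layout):

```python
def relight(basis_images, coefficients):
    """Combine precomputed basis images with per-basis weights.

    basis_images[k][p] is the radiance at pixel p when the scene is lit
    by a source whose luminous intensity distribution equals the k-th
    basis function; coefficients[k] is the projection of the actual
    distribution onto that basis function. Because light transport is
    linear, the final image is just the weighted sum, so coefficients
    can be edited interactively without re-solving interreflection.
    """
    num_pixels = len(basis_images[0])
    return [sum(c * img[p] for c, img in zip(coefficients, basis_images))
            for p in range(num_pixels)]
```

Changing the light's distribution then costs one weighted sum per pixel instead of a new global illumination solve, which is what makes interactive editing feasible.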

Key Words:

Additional information


EG'94;  Computer Graphics Forum, Vol.13, No.3, pp.271-280, 1994-9

"A Method for Displaying Metaballs by using Bezier Clipping"

by Tomoyuki Nishita, and Eihachiro Nakamae

Abstract

For rendering curved surfaces, one of the most popular techniques is metaballs, an implicit model based on isosurfaces of potential fields. This technique is suitable for deformable objects and CSG models. For rendering metaballs, intersection tests between rays and isosurfaces are required. By defining higher-degree field functions, richer capabilities, such as smoother surfaces, can be expected. However, one problem is that the intersection between the ray and the isosurfaces cannot be solved analytically for such high-degree functions. Although the field function in this paper is expressed by a degree-six polynomial (which means a degree-six equation must be solved for the intersection test), in our algorithm the field function along the ray is expressed in Bezier form and, by employing Bezier clipping, the roots of this function can be found very efficiently and precisely. This paper also discusses deformed distribution functions, such as ellipsoids, and a method for displaying transparent objects such as clouds.
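As a rough sketch of root finding in Bernstein form: convert the power-basis polynomial along the ray to Bernstein coefficients, then prune intervals whose control coefficients all share a sign (the convex hull property guarantees no root there). This simplified variant subdivides at the midpoint; true Bezier clipping, as used in the paper, converges faster by clipping against the convex hull of the control polygon:

```python
from math import comb

def power_to_bernstein(a):
    """Convert power-basis coefficients a[k] (f(t) = sum a[k] t^k on [0,1])
    to Bernstein coefficients of the same degree."""
    n = len(a) - 1
    return [sum(comb(j, k) / comb(n, k) * a[k] for k in range(j + 1))
            for j in range(n + 1)]

def bezier_roots(b, lo=0.0, hi=1.0, eps=1e-9, roots=None):
    """Isolate roots of a Bernstein-form polynomial on [lo, hi] by
    recursive de Casteljau subdivision. (Even-multiplicity roots,
    where the sign does not change, can be missed.)"""
    if roots is None:
        roots = []
    if all(c > 0 for c in b) or all(c < 0 for c in b):
        return roots                      # convex hull excludes zero
    if hi - lo < eps:
        roots.append(0.5 * (lo + hi))
        return roots
    # de Casteljau split at the parametric midpoint
    left, right = [], []
    cur = list(b)
    while cur:
        left.append(cur[0])
        right.append(cur[-1])
        cur = [0.5 * (cur[i] + cur[i + 1]) for i in range(len(cur) - 1)]
    right.reverse()
    mid = 0.5 * (lo + hi)
    bezier_roots(left, lo, mid, eps, roots)
    bezier_roots(right, mid, hi, eps, roots)
    return roots
```

The same prune-or-subdivide structure underlies Bezier clipping; the clipping step merely shrinks each interval more aggressively per iteration.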

Key Words:

Metaballs, Blobs, Soft objects, Density function, Ray tracing, Bezier Clipping, Deformable objects, Geometric Modeling, Photo-realism

Additional information


EG'94;  Computer Graphics Forum, Vol.13, No.3, pp.85-96, 1994-9

"Skylight for Interior Design,"

by Yoshinori Dobashi, Kazufumi Kaneda, Eihachiro Nakashima, Hideo Yamashita, Tomoyuki Nishita, and K. Tadamura

Abstract

Rendering a room lit by natural light is indispensable for indoor lighting design, especially for an atelier or an indoor pool where there are many windows. This paper proposes a method for calculating the illuminance due to natural light, i.e., direct sunlight and skylight, passing through transparent planes such as window glass. The proposed method makes it possible to calculate such illuminance efficiently and accurately, because it takes into account both the non-uniform luminous intensity distribution of skylight and the transparency of glass as a function of the incident angle of light. Several examples, including the lighting design of an indoor pool, are shown to demonstrate the usefulness of the proposed method.

Additional information


EG'93; Computer Graphics Forum, Vol.12, No.3, pp.385-393, 1993

"A New Radiosity Approach Using Area Sampling for Parametric Patches"

by Tomoyuki Nishita, Eihachiro Nakamae

Abstract

A high precision illumination model is indispensable for lighting simulation and realistic image synthesis. For the purpose of improving realism, research on global illumination has been done, and several papers on radiosity methods have been presented. In the most recently proposed methods, the shapes of light sources and objects are restricted to polygons or simple curved surfaces. We present a more general method which can handle the kind of free-form surfaces widely used in industrial products and in architecture. The method proposed here solves the problem of the interreflection of light (i.e., radiosities) between patches, and form-factors, which play an important role in this process, are precisely calculated without aliasing through the use of an area sampling method (i.e., pyramid tracing). Furthermore the method can handle both non-uniform intensity curved sources and non-diffuse surfaces.

Key Words:

Radiosity, Interreflection of light, Form-factor, Bezier Surfaces, Scan line algorithm, Shadows, Penumbra

Additional information


EG'93;  Computer Graphics Forum, Vol.12, No.3, pp.189-201, 1993

"Modeling of Skylight and Rendering of outdoor Scenes,"

by K. Tadamura, E. Nakamae, K. Kaneda, M. Baba, H. Yamashita, and T. Nishita

Abstract

Photorealistic animated images are extremely effective for pre-evaluating the visual impact of city renewal and the construction of tall buildings. In order to generate a photorealistic image, not only the direct sunlight but also the skylight must be considered. This paper proposes a method of high-fidelity image generation for photorealistic outdoor scenes based on the following ideas: (1) An intensity distribution of skylight that takes into account scattering and absorption due to particles in the atmosphere, and that coincides with the CIE standard skylight luminance functions, is derived; realistic images considering the spectral distribution of skylight for any altitude of the sun can thus be displayed easily and accurately. (2) A rectangular parallelepiped with a specialized intensity distribution simulating the skylight is introduced for efficient calculation of the illumination due to skylight; by employing graphics hardware, the skylight illuminance, taking shadow effects into account, is obtained with high efficiency. These techniques can be used to generate sequences of images, making animations possible at far lower computational cost than previous methods.
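The CIE standard clear-sky luminance function referred to in (1) has a simple closed form: a horizon-to-zenith gradation term multiplied by a scattering indicatrix around the sun. The sketch below gives the relative luminance L/Lz of the CIE clear sky only, not the paper's full spectral model:

```python
from math import cos, exp, pi

def cie_clear_sky_ratio(theta, gamma, zs):
    """Relative luminance L/Lz of the CIE standard clear sky.

    theta: zenith angle of the sky element (must be < pi/2),
    gamma: angle between the sky element and the sun,
    zs:    zenith angle of the sun (all in radians).
    The zenith (theta = 0, gamma = zs) is normalized to 1 by construction.
    """
    def scat(g):          # indicatrix: circumsolar brightening
        return 0.91 + 10.0 * exp(-3.0 * g) + 0.45 * cos(g) ** 2
    def grad(t):          # gradation from horizon to zenith
        return 1.0 - exp(-0.32 / cos(t))
    return (scat(gamma) * grad(theta)) / (scat(zs) * (1.0 - exp(-0.32)))
```

Evaluating this ratio over the hemisphere gives the non-uniform sky luminance distribution that the paper's skylight illuminance calculation has to integrate against.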

Additional information


EG'84;  Proc. of EUROGRAPHICS'84, pp.419-432, 1984-9

"Computer Graphics for Visualizing Simulation Results,"

by Eihachiro Nakamae, Hideo Yamashita, K. Harada, and Tomoyuki Nishita

Abstract

Computer graphics techniques for visualizing the following simulation results are developed: (1) lighting designs for different types of sources, such as point sources, linear sources, area sources, and polyhedron sources, (2) shadowed time at arbitrary positions, such as windows, walls, and even the inside of a room, (3) montages for visual environment evaluation, (4) quasi-semi-transparent models for observing the life generation process in anatomy, and (5) two- and three-dimensional magnetic fields analyzed by the finite element method.

Additional information


Last update: 10 Sept. 2003

nis@is.s.u-tokyo.ac.jp