My research fields are mainly human computer interaction and computer graphics. In particular, I am interested in enhancing artists' and makers' workflows in digital content creation and 3D fabrication.
Before joining UTS, I worked with Prof. Patrick Baudisch at the Hasso Plattner Institute, Germany, and Prof. Takeo Igarashi at the University of Tokyo, Japan. I received my PhD from National Tsing Hua University, Taiwan, in 2012, advised by Li-Yi Wei and Chun-Fa Chang.
Can supervise: YES
virtual reality; neuroscience; human computer interaction; computer graphics; 3D fabrication; 3D modeling
virtual reality; neuroscience; human computer interaction; personal fabrication; 3D modeling
Singh, AK, Chen, HT, Cheng, YF, King, JT, Ko, LW, Gramann, K & Lin, CT 2018, 'Visual Appearance Modulates Prediction Error in Virtual Reality', IEEE Access, vol. 6, pp. 24617-24624.
© 2018 IEEE. Different rendering styles induce different levels of agency and user behaviors in virtual reality environments. We applied an electroencephalogram-based approach to investigate how the rendering style of the users' hands affects behavioral and cognitive responses. To this end, we introduced prediction errors due to cognitive conflicts during a 3-D object selection task by manipulating the selection distance of the target object. The results showed that, for participants with high behavioral inhibition scores, the amplitude of the negative event-related potential at approximately 50-250 ms correlated with the realism of the virtual hands. Concurring with the uncanny valley theory, these findings suggest that the more realistic the representation of the user's hand is, the more sensitive the user becomes toward subtle errors, such as tracking inaccuracies.
Sasaki, N, Chen, H-T, Sakamoto, D & Igarashi, T 2015, 'Facetons: face primitives for building 3D architectural models in virtual environments', Computer Animation and Virtual Worlds, vol. 26, no. 2, pp. 185-194.
Lei, SIE, Chen, YC, Chen, HT & Chang, CF 2013, 'Interactive physics-based ink splattering art creation', Computer Graphics Forum, vol. 32, no. 7, pp. 147-156.
This paper presents an interactive system for ink splattering, a form of abstract art in which artists splat ink onto the canvas. The default input device of our system is a pressure-sensitive 2D stylus, the most common sketching tool for digital artists, and we propose two interaction modes, ink-flicking and ink-dripping, designed to be analogous to the real-world artistic techniques of ink splattering. The core of our ink splattering system is a novel three-stage framework that simulates the physics-based interaction of ink with different mediums, including brush heads, air, and paper. We have implemented the physics engine in CUDA, and the whole simulation process runs at interactive speed. © 2013 The Eurographics Association and John Wiley & Sons Ltd.
Revision control is a vital component of digital project management and has been widely deployed for text files. Binary files, on the other hand, have received relatively little attention. This can be inconvenient for graphics applications that use a significant amount of binary data, such as images, videos, meshes, and animations. Existing strategies, such as storing whole files for individual revisions or simple binary deltas, can consume significant storage and obscure vital semantic information. We present a nonlinear revision control system for images, designed with common digital editing and sketching workflows in mind. We use a DAG (directed acyclic graph) as the core structure, with DAG nodes representing editing operations and DAG edges the corresponding spatial, temporal, and semantic relationships. We visualize our DAG in a RevG (revision graph), which serves not only as a meaningful display of the revision history but also as an intuitive interface for common revision control operations such as review, replay, diff, addition, branching, merging, and conflict resolution. Beyond revision control, our system also facilitates artistic creation processes in common image editing and digital painting workflows. We have built a prototype system upon GIMP, an open-source image editor, and demonstrate its effectiveness through a formative user study and comparisons with alternative revision control systems. © 2011, ACM. All rights reserved.
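The DAG core described in this abstract can be illustrated with a small sketch (Python, illustrative only; the class and method names are hypothetical, not the system's actual API): nodes stand for editing operations, parent edges record their dependencies, and a topological order gives a valid replay sequence.

```python
from dataclasses import dataclass, field

@dataclass
class OpNode:
    """One editing operation in the revision DAG (hypothetical sketch)."""
    op_id: int
    name: str                                      # e.g. "brush_stroke", "crop"
    parents: list = field(default_factory=list)    # ids of prerequisite ops

class RevisionDAG:
    def __init__(self):
        self.nodes = {}

    def add_op(self, op_id, name, parents=()):
        self.nodes[op_id] = OpNode(op_id, name, list(parents))

    def replay_order(self):
        """Topological order: every op appears after all ops it depends on."""
        order, seen = [], set()
        def visit(nid):
            if nid in seen:
                return
            seen.add(nid)
            for p in self.nodes[nid].parents:
                visit(p)
            order.append(nid)
        for nid in self.nodes:
            visit(nid)
        return order
```

Operations like diff, branching, and merging would then become graph operations over this structure; the sketch shows only the replay ordering.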
Kovacs, R, Seufert, A, Wall, LW, Chen, HT, Meinel, F, Müller, W, You, S, Kommana, Y & Baudisch, P 2017, 'Demonstrating TrussFab: Fabricating sturdy large-scale structures on desktop 3D printers', Conference on Human Factors in Computing Systems - Proceedings, Conference on Human Factors in Computing Systems, Denver, Colorado, pp. 445-448.
Copyright © 2017 by the Association for Computing Machinery, Inc. (ACM). We demonstrate TrussFab, an end-to-end system that allows users to fabricate large-scale structures that are sturdy enough to carry human weight. TrussFab achieves the large scale by complementing 3D print with plastic bottles. It does not use these bottles as "bricks" but as beams that form structurally sound structures, also known as trusses, allowing it to handle the forces resulting from scale and load.
Kovacs, R, Seufert, A, Wall, L, Chen, HT, Meinel, F, Müller, W, You, S, Brehm, M, Striebel, J, Kommana, Y, Popiak, A, Bläsius, T & Baudisch, P 2017, 'TrussFab: Fabricating sturdy large-scale structures on desktop 3D printers', Conference on Human Factors in Computing Systems - Proceedings, pp. 2606-2616.
© 2017 ACM. We present TrussFab, an integrated end-to-end system that allows users to fabricate large scale structures that are sturdy enough to carry human weight. TrussFab achieves the large scale by complementing 3D print with plastic bottles. It does not use these bottles as "bricks" though, but as beams that form structurally sound node-link structures, also known as trusses, allowing it to handle the forces resulting from scale and load. TrussFab embodies the required engineering knowledge, allowing non-engineers to design such structures and to validate their design using integrated structural analysis. We have used TrussFab to design and fabricate tables and chairs, a 2.5 m long bridge strong enough to carry a human, a functional boat that seats two, and a 5 m diameter dome.
Kovacs, R, Wall, L, Seufert, A, Chen, HT, Müller, W, Meinel, F, Kommana, Y, Bläsius, T, Schneider, O, Roumen, T & Baudisch, P 2017, 'Demonstrating TrussFab's editor: Designing sturdy large-scale structures', UIST 2017 Adjunct - Adjunct Publication of the 30th Annual ACM Symposium on User Interface Software and Technology, pp. 43-45.
Copyright © 2017 is held by the owner/author(s). We demonstrate TrussFab's editor for creating large-scale structures that are sturdy enough to carry human weight. TrussFab achieves the large scale by using plastic bottles as beams that form structurally sound node-link structures, also known as trusses, allowing it to handle the forces resulting from scale and load. During this hands-on demo at UIST, attendees will use the TrussFab software to design their own structures, validate their design using integrated structural analysis, and export their designs for 3D printing.
Zhang, W, Chen, T & Tan, C 2016, 'Demo: A safe low-cost HMD for underwater VR experiences', Proceedings SA '16 SIGGRAPH ASIA 2016 Mobile Graphics and Interactive Applications, International Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia, ACM, Macao.
Recently, consumer head-mounted VR displays (HMDs) like the Oculus Rift and Samsung Gear VR have driven much research into interesting applications that include full-body sensory experiences. For example, PaperDude VR [Bolton et al. 2014] uses the Oculus Rift to implement a cycling exergame in order to motivate exercise. Other than using a consumer HMD like the Rift to provide the VR visuals, PaperDude VR includes a real bicycle on a stationary trainer in order to approximate a realistic experience using other non-visual senses. In another work, Birdly [Rheiner 2014] uses the HTC Vive as part of a system to simulate flying. In Birdly, the non-visual elements are slightly harder to simulate, resulting in a rather complex robotic setup which includes sensory-motor coupling and a large fan to provide wind feedback.
Ion, A, Frohnhofen, J, Wall, L, Kovacs, R, Alistar, M, Lindsay, J, Lopes, P, Chen, HT & Baudisch, P 2016, 'Metamaterial mechanisms', UIST 2016 - Proceedings of the 29th Annual Symposium on User Interface Software and Technology, ACM Symposium on User Interface Software and Technology, ACM, Tokyo, Japan, pp. 529-539.
Recently, researchers started to engineer not only the outer shape of objects, but also their internal microstructure. Such objects, typically based on 3D cell grids, are also known as metamaterials. Metamaterials have been used, for example, to create materials with soft and hard regions. So far, metamaterials were understood as materials; we want to think of them as machines. We demonstrate metamaterial objects that perform a mechanical function. Such metamaterial mechanisms consist of a single block of material the cells of which play together in a well-defined way in order to achieve macroscopic movement. Our metamaterial door latch, for example, transforms the rotary movement of its handle into a linear motion of the latch. Our metamaterial Jansen walker consists of a single block of cells that can walk. The key element behind our metamaterial mechanisms is a specialized type of cell, the only ability of which is to shear. In order to allow users to create metamaterial mechanisms efficiently, we implemented a specialized 3D editor. It allows users to place different types of cells, including the shear cell, thereby allowing users to add mechanical functionality to their objects. To help users verify their designs during editing, our editor allows users to apply forces and simulates how the object deforms in response.
Chen, HT, Wei, LY, Hartmann, B & Agrawala, M 2016, 'Data-driven adaptive history for image editing', Proceedings - 20th ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, I3D 2016, ACM SIGGRAPH Interactive 3D Graphics, Association for Computing Machinery, Redmond, Washington, United States, pp. 103-112.
Digital image editing is usually an iterative process; users repetitively perform short sequences of operations, as well as undo and redo using history navigation tools. In our collected data, undo, redo and navigation constitute about 9 percent of the total commands and consume a significant amount of user time. Unfortunately, such activities also tend to be tedious and frustrating, especially for complex projects.
We address this crucial issue with adaptive history, a UI mechanism that groups relevant operations together to reduce user workload. Such grouping can occur at various history granularities; we present two that we have found most useful. On a fine level, we group repeating command patterns together to facilitate smart undo. On a coarse level, we segment the command history into chunks for semantic navigation. The main advantages of our approach are that it is intuitive to use and easy to integrate into any existing tool with a text-based history list. Unlike prior methods that are predominately rule based, our approach is data driven, and thus adapts better to common editing tasks, which exhibit sufficient diversity and complexity to defy predetermined rules or procedures.
A user study showed that our system performs quantitatively better than two baselines, and participants also gave positive qualitative feedback on the system features.
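The fine-grained grouping idea can be sketched loosely (deliberately simple and rule-like for illustration; the paper's actual method is data driven): collapse consecutive repeats of the same command so that a single smart undo can revert the whole run.

```python
from itertools import groupby

def group_repeats(history):
    """Collapse runs of consecutive identical commands into (command, count)
    groups, e.g. many successive 'brush' strokes become one undoable unit.
    Illustrative stand-in for the paper's data-driven grouping."""
    return [(cmd, len(list(run))) for cmd, run in groupby(history)]
```

For example, `group_repeats(["brush", "brush", "brush", "erase"])` yields one three-stroke brush group followed by a single erase, so undo can step over the whole brush run at once.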
Agrawal, H, Umapathi, U, Kovacs, R, Frohnhofen, J, Chen, HT, Mueller, S & Baudisch, P 2015, 'Protopiper: Physically sketching room-sized objects at actual scale', UIST 2015 - Proceedings of the 28th Annual ACM Symposium on User Interface Software and Technology, ACM Symposium on User Interface Software and Technology, ACM, USA, pp. 427-436.
Physical sketching of 3D wireframe models using a handheld plastic extruder allows users to explore the design space of 3D models efficiently. Unfortunately, the scale of these devices limits users' design explorations to small-scale objects. We present protopiper, a computer-aided, hand-held fabrication device that allows users to sketch room-sized objects at actual scale. The key idea behind protopiper is that it forms adhesive tape into tubes as its main building material, rather than extruded plastic or photopolymer lines. Since the resulting tubes are hollow, they offer an excellent strength-to-weight ratio and thus scale well to large structures. Since the tape is pre-coated with adhesive, it connects into tubes quickly, unlike extruded plastic, which would require heating and cooling in the kilowatt range. We demonstrate protopiper's use through several demo objects, ranging from more constructive objects, such as furniture, to more decorative objects, such as statues. In our exploratory user study, 16 participants created objects based on their own ideas. They rated the device as being "useful for creative exploration", noted that "its ability to sketch at actual scale helped judge fit", and found it "fun to use".
Umapathi, U, Chen, HT, Mueller, S, Wall, L, Seufert, A & Baudisch, P 2015, 'LaserStacker: Fabricating 3D Objects by Laser Cutting and Welding', UIST 2015 - Proceedings of the 28th Annual ACM Symposium on User Interface Software and Technology, ACM Symposium on User Interface Software and Technology, ACM, USA, pp. 575-582.
Laser cutters are useful for rapid prototyping because they are fast. However, they only produce planar 2D geometry. One approach to creating non-planar objects is to cut the object in horizontal slices and to stack and glue them. This approach, however, requires manual effort for the assembly and time for the glue to set, defeating the purpose of using a fast fabrication tool. We propose eliminating the assembly step with our system LaserStacker. The key idea is to use the laser cutter not only to cut but also to weld. Users place not one acrylic sheet, but a stack of acrylic sheets into their cutter. In a single process, LaserStacker cuts each individual layer to shape (through all layers above it), welds layers by melting material at their interface, and heals undesired cuts in higher layers. When users take the object out of the laser cutter, it is already assembled. To allow users to model stacked objects efficiently, we built an extension to a commercial 3D editor (SketchUp) that provides tools for defining which parts should be connected and which remain loose. When users hit the export button, LaserStacker converts the 3D model into cutting, welding, and healing instructions for the laser cutter. We show how LaserStacker allows making not only static objects, such as architectural models, but also objects with moving parts and simple mechanisms, such as scissors, a simple pinball machine, and a mechanical toy with gears.
Mueller, S, Beyer, D, Mohr, T, Gurevich, S, Teibrich, A, Pfisterer, L, Guenther, K, Frohnhofen, J, Chen, HT, Baudisch, P, Im, S & Guimbretière, F 2015, 'Low-fidelity fabrication: Speeding up design iteration of 3D objects', Conference on Human Factors in Computing Systems - Proceedings, International Conference on Human Factors in Computing Systems, ACM, Seoul, Republic of Korea, pp. 327-330.
Low-fidelity fabrication systems speed up rapid prototyping by printing intermediate versions of a prototype as fast, low-fidelity previews. Only the final version is fabricated as a full high-fidelity 3D print. This allows designers to iterate more quickly, achieving a better design in less time. Depending on what is currently being tested, low-fidelity fabrication is implemented in different ways: (1) faBrickator allows for a modular approach by substituting sub-volumes of the 3D model with building blocks. (2) WirePrint allows for quickly testing the shape of an object, such as the ergonomic fit, by printing wireframe structures. (3) Platener preserves the technical function by substituting 3D print with laser-cut plates of the same size and thickness. At our CHI'15 interactivity booth, we give a combined live demo of all three low-fidelity fabrication systems, putting special focus on our new low-fidelity fabrication system Platener (paper at CHI'15).
Beyer, D, Gurevich, S, Mueller, S, Chen, HT & Baudisch, P 2015, 'Platener: Low-fidelity fabrication of 3D objects by substituting 3D print with laser-cut plates', Conference on Human Factors in Computing Systems - Proceedings, International Conference on Human Factors in Computing Systems, ACM, Seoul, Republic of Korea, pp. 1799-1806.
© Copyright 2015 ACM. This paper presents Platener, a system that allows quickly fabricating intermediate design iterations of 3D models, a process also known as low-fidelity fabrication. Platener achieves its speed-up by extracting straight and curved plates from the 3D model and substituting them with laser-cut parts of the same size and thickness. Only the regions that are of relevance to the current design iteration are executed as full-detail 3D prints. Platener connects the parts it has created by automatically inserting joints. To facilitate fast assembly, it engraves instructions. Platener allows users to customize substitution results by (1) specifying fidelity-speed tradeoffs, (2) choosing whether or not to convert curved surfaces to plates bent using heat, and (3) specifying the conversion of individual plates and joints interactively. Platener is designed to best preserve the fidelity of functional objects, such as casings and mechanical tools, all of which contain a large percentage of straight/rectilinear elements. Compared to other low-fab systems, such as faBrickator and WirePrint, Platener better preserves the stability and functionality of such objects: the resulting assemblies have fewer parts, and the parts have the same size and thickness as in the 3D model. To validate our system, we converted 2,250 3D models downloaded from a 3D model site (Thingiverse). Platener achieves a speed-up of 10x or more for 39.9% of all objects.
Chen, HT, Grossman, T, Wei, LY, Schmidt, R, Hartmann, B, Fitzmaurice, G & Agrawala, M 2014, 'History assisted view authoring for 3D models', Conference on Human Factors in Computing Systems - Proceedings, International Conference on Human Factors in Computing Systems, ACM, Canada, pp. 2027-2036.
3D modelers often wish to showcase their models for sharing or review purposes. This may consist of generating static viewpoints of the model or authoring animated fly-throughs. Manually creating such views is often tedious, and few automatic methods are designed to interactively assist modelers with the view authoring process. We present a view authoring assistance system that supports the creation of informative viewpoints, view paths, and view surfaces, allowing modelers to author the interactive navigation experience of a model. The key concept of our implementation is to analyze the model's workflow history to infer important regions of the model and representative viewpoints of those areas. An evaluation indicated that the viewpoints generated by our algorithm are comparable to those manually selected by the modeler. In addition, participants of a user study found our system easy to use and effective for authoring viewpoint summaries.
Sasaki, N, Chen, HT, Sakamoto, D & Igarashi, T 2013, 'Facetons: Face primitives with adaptive bounds for building 3D architectural models in virtual environment', Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST, ACM Symposium on Virtual Reality Software and Technology, pp. 77-82.
We present faceton, a geometric modeling primitive designed for building architectural models using a six-degrees-of-freedom (DoF) input device in a virtual environment (VE). A faceton is given as an oriented point floating in the air and defines a plane of infinite extent passing through the point. The polygonal mesh model is constructed by taking the intersection of the planes associated with the facetons. With simple drag-and-drop and group interactions on facetons, users can easily create 3D architectural models in the VE. The faceton primitive and its interaction reduce the overhead associated with standard polygonal mesh modeling in a VE, where users have to manually specify vertices and edges, which could be far away. The faceton representation is inspired by research on boundary representations (B-rep) and constructive solid geometry (CSG), but it is driven by a novel adaptive bounding algorithm and is specifically designed for 3D modeling activities in an immersive virtual environment. Copyright © 2013 ACM.
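A minimal sketch of the faceton geometry (hypothetical code, not from the paper): an oriented point (p, n) defines the plane n·x = n·p, and a mesh vertex arises wherever three such planes intersect, here solved with Cramer's rule.

```python
def faceton_plane(p, n):
    """A faceton as the plane n·x = d through oriented point p with normal n."""
    d = sum(ni * pi for ni, pi in zip(n, p))
    return (*n, d)

def det3(m):
    """Determinant of a 3x3 matrix given as three rows."""
    a, b, c = m
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          - a[1] * (b[0] * c[2] - b[2] * c[0])
          + a[2] * (b[0] * c[1] - b[1] * c[0]))

def intersect3(p1, p2, p3):
    """Vertex shared by three facetons' planes (assumes they are non-degenerate),
    via Cramer's rule on the 3x3 linear system."""
    A = [p1[:3], p2[:3], p3[:3]]
    d = [p1[3], p2[3], p3[3]]
    D = det3(A)
    coords = []
    for i in range(3):
        Ai = [row[:i] + (d[k],) + row[i + 1:] for k, row in enumerate(A)]
        coords.append(det3(Ai) / D)
    return tuple(coords)
```

For example, three axis-aligned facetons on the planes x = 1, y = 2, and z = 3 intersect at the vertex (1, 2, 3); the full system would intersect many such planes and keep the bounded faces.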
Chen, YC, Chiu, SJ, Chen, HT & Chang, CF 2008, 'Physically-based analysis and rendering of bidirectional texture functions data', Journal of Information Science and Engineering, pp. 83-98.
In pursuit of photorealistic rendering of surface materials, the Bidirectional Texture Function (BTF) has been used frequently in recent years. Its main drawback is its massive data size. To solve this, Spatial Bidirectional Reflectance Distribution Function (SBRDF) techniques compress BTFs into reflectance model parameters. However, SBRDF cannot reproduce the self-shadowing and self-occlusion mesostructure effects of many real-world surfaces. To address this drawback, we investigate how self-shadowing and self-occlusion affect surface appearance through additional physically-based analysis. We rely on two physical observations to separate self-shadowing and self-occlusion into two independent effects: first, self-shadowing is view-independent; second, self-occlusion is independent of changes in lighting direction. With these analyses, we are able to add self-shadowing and self-occlusion effects to SBRDF and achieve rendering quality much closer to the original uncompressed BTF data than SBRDF alone.
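The two independence observations can be illustrated with a toy factorization (an assumption-laden sketch, not the paper's actual algorithm): given appearance samples A[v][l] over views v and light directions l, averaging over views isolates a term that varies only with the light (shadow-like), while averaging over lights isolates a term that varies only with the view (occlusion-like).

```python
def factor_shadow_occlusion(A):
    """Toy separation of a view x light appearance table into a per-light
    (view-independent, shadow-like) term and a per-view (light-independent,
    occlusion-like) term by marginal averaging. Illustrative only."""
    n_v, n_l = len(A), len(A[0])
    shadow = [sum(A[v][l] for v in range(n_v)) / n_v for l in range(n_l)]
    occlusion = [sum(A[v][l] for l in range(n_l)) / n_l for v in range(n_v)]
    return shadow, occlusion
```

If the table is separable, A[v][l] = o[v] * s[l], each average recovers one factor up to a constant scale, mirroring how the two effects can be analyzed independently.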
Chen, HT & Chang, CF 2004, 'AR for the masses: Building a low-cost portable AR system from off-the-shelf components', Proceedings VRCAI 2004 - ACM SIGGRAPH International Conference on Virtual Reality Continuum and its Applications in Industry, pp. 455-458.
Creating the illusion that a virtual object coexists with physical objects and their environment has always been an important goal in augmented reality research. Though there are already many commercial products on the market, they are too expensive, too cumbersome, or too hard to set up for an ordinary user. Our "AR for the masses" system is cheap to build, easy to set up, and does not require users to wear a head-mounted display (HMD). Its cost is low because the whole system consists of only two web cameras (for 3D tracking), a paper box (as the proxy object), and a projector, which is becoming increasingly affordable. It is also easy to set up: we can move the whole system to a new location and finish the calibration within just a few minutes.
As virtual reality (VR) emerges as a mainstream platform, designers have started to experiment with new interaction techniques to enhance the user experience. This is a challenging task because designers not only strive to provide designs with good performance but must also be careful not to disrupt users' immersive experience. There is a dire need for a new evaluation tool that extends beyond traditional quantitative measurements to assist designers in the design process. We propose an EEG-based experiment framework that evaluates interaction techniques in VR by measuring intentionally elicited cognitive conflict. Through the analysis of the feedback-related negativity (FRN) as well as other quantitative measurements, this framework allows designers to evaluate the effect of the variables of interest. We studied the framework by applying it to the fundamental task of 3D object selection using direct 3D input, i.e., a tracked hand in VR. The cognitive conflict is intentionally elicited by manipulating the selection radius of the target object. Our first, behavioral experiment validated the framework, in line with findings of conflict-induced behavior adjustments reported in other classical psychology experiment paradigms. Our second, EEG-based experiment examines the effect of the appearance of virtual hands. We found that the amplitude of the FRN correlates with the level of realism of the virtual hands, which concurs with the uncanny valley theory.
Laursen, LF, Chen, H-T, Silva, P, Suehiro, L & Igarashi, T 2016, 'TapDrag: An Alternative Dragging Technique on Medium-Sized MultiTouch Displays Reducing Skin Irritation and Arm Fatigue'.
Medium-sized touch displays, sized 30 to 50 inches, are becoming more affordable and more widely available. Prolonged use of such displays can result in arm fatigue or skin irritation, especially when multiple long-distance drags are involved. To address this issue, we present TapDrag, an alternative dragging technique that complements traditional dragging with a simple tapping gesture on both ends of the intended dragging path. Our experimental evaluation suggests that TapDrag is a viable alternative to traditional dragging, with faster task completion times for long distances. Qualitative user feedback indicates that TapDrag helps prevent skin irritation. A reduction in arm fatigue remains unconfirmed.