What is the term for the small graphic symbols displayed on a monitor screen that represent computer files or programs that a user clicks to open?

Updated: 04/12/2021 by Computer Hope

A GUI (graphical user interface) is a system of interactive visual components for computer software. A GUI displays objects that convey information, and represent actions that can be taken by the user. The objects change color, size, or visibility when the user interacts with them.

The GUI was first developed at Xerox PARC by Alan Kay and other researchers in the 1970s, building on earlier work by Douglas Engelbart at SRI; Xerox brought it to market with the Star workstation in 1981. Later, Apple introduced the Lisa computer with a GUI on January 19, 1983.

GUI is often pronounced by saying each letter (G-U-I or gee-you-eye). It is also sometimes pronounced as "gooey."

GUI overview

A GUI includes GUI objects, like icons, cursors, and buttons. These graphical elements are sometimes enhanced with sounds, or visual effects like transparency and drop shadows. Using these objects, a user can use the computer without having to know commands.

The Windows 7 desktop is a classic example of a GUI operating system. On such a system, you use a mouse to move a pointer and click a program icon to start (open) a program.

Tip

For an example of a command line for comparison, see our command line page.

What are the elements of a GUI?

To make a GUI as user-friendly as possible, there are different elements and objects that users use to interact with the software. Below is a list of each of these with a brief description; a short code sketch after the list shows several of them in action.

  • Button - A graphical representation of a button that performs an action in a program when pressed.
  • Dialog box - A type of window that displays additional information and asks the user for input.
  • Icon - Small graphical representation of a program, feature, or file.
  • Menu - List of commands or choices offered to the user through the menu bar.
  • Menu bar - Thin, horizontal bar containing the labels of menus.
  • Ribbon - Replacement for the file menu and toolbar that groups a program's activities together.
  • Tab - Clickable area at the top of a window that shows another page or area.
  • Toolbar - Row of buttons, often near the top of an application window, that controls software functions.
  • Window - Rectangular section of the computer's display that shows the program currently being used.
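
For illustration, here is a minimal sketch using Python's built-in tkinter toolkit that creates several of the elements listed above: a window, a menu bar with a menu, a button, and a dialog box. The widget labels and the greet function are illustrative names only.

```python
import tkinter as tk
from tkinter import messagebox

root = tk.Tk()                    # window: rectangular section of the display
root.title("GUI elements demo")

menu_bar = tk.Menu(root)          # menu bar: thin horizontal bar of menu labels
file_menu = tk.Menu(menu_bar, tearoff=0)   # menu: list of commands
file_menu.add_command(label="Quit", command=root.destroy)
menu_bar.add_cascade(label="File", menu=file_menu)
root.config(menu=menu_bar)

def greet():
    # dialog box: a window that displays additional information
    messagebox.showinfo("Hello", "You pressed the button.")

tk.Button(root, text="Press me", command=greet).pack(padx=40, pady=20)  # button
root.mainloop()
```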

How does a GUI work?

A GUI uses windows, icons, and menus to carry out commands, such as opening, deleting, and moving files. Although a GUI operating system is primarily navigated using a mouse, a keyboard can also be used via keyboard shortcuts or the arrow keys.

For example, if you want to open a program on a GUI system, you would move the mouse pointer to the program's icon and double-click it. With a command line interface, you need to know the commands to navigate to the directory containing the program, list the files, and then run the file.

What are the benefits of a GUI?

A GUI is considered to be more user-friendly than a text-based command-line interface, such as MS-DOS, or the shell of Unix-like operating systems.

Unlike a command-line operating system or character user interface (CUI), such as Unix or MS-DOS, GUI operating systems are easier to learn and use because commands do not need to be memorized. Additionally, users do not need to know any programming languages. Because of their ease of use and more modern appearance, GUI operating systems have come to dominate today's market.

What are examples of a GUI operating system?

Common examples include Microsoft Windows, Apple macOS, Chrome OS, and Linux distributions that ship with a desktop environment, such as Ubuntu.

Are all operating systems GUI?

No. Early command-line operating systems like MS-DOS, and some versions of Linux today, have no GUI.

What are examples of a GUI interface?

Examples include the GNOME and KDE desktop environments, and the graphical interfaces of everyday applications such as web browsers, word processors, and e-mail clients.

How does the user interact with a GUI?

A pointing device, such as the mouse, is used to interact with nearly all aspects of the GUI. More modern (and mobile) devices also utilize a touch screen.

Does a GUI require a mouse?

No. Nearly all GUIs, including Microsoft Windows, offer ways to navigate the interface with a keyboard alone, if you know the keyboard shortcuts.

User interface allowing interaction through graphical icons and visual indicators

The GUI (/ˌdʒiːjuːˈaɪ/ JEE-yoo-EYE[1][Note 1] or /ˈɡuːi/[2] GOO-ee), or graphical user interface, is a form of user interface that allows users to interact with electronic devices through graphical icons and audio indicators such as primary notation, instead of text-based UIs, typed command labels, or text navigation. GUIs were introduced in reaction to the perceived steep learning curve of CLIs (command-line interfaces),[3][4][5] which require commands to be typed on a computer keyboard.

The actions in a GUI are usually performed through direct manipulation of the graphical elements.[6][7][8] Beyond computers, GUIs are used in many handheld mobile devices such as MP3 players, portable media players, gaming devices, and smartphones, and in smaller household, office, and industrial controls. The term GUI tends not to be applied to other lower-resolution types of interfaces, such as video games (where HUD (head-up display)[9] is preferred), nor to displays that are not flat, such as volumetric displays,[10] because the term is restricted to the scope of 2D display screens able to describe generic information, in the tradition of the computer science research at the Xerox Palo Alto Research Center.

GUI and interaction design

The GUI is presented (displayed) on the computer screen. It is the result of processed user input and usually the main interface for human-machine interaction. The touch UIs popular on small mobile devices are an overlay of the visual output to the visual input.

Designing the visual composition and temporal behavior of a GUI is an important part of software application programming in the area of human–computer interaction. Its goal is to enhance the efficiency and ease of use for the underlying logical design of a stored program, a design discipline named usability. Methods of user-centered design are used to ensure that the visual language introduced in the design is well-tailored to the tasks.

The visible graphical interface features of an application are sometimes referred to as chrome or GUI (pronounced gooey).[11][12][13] Typically, users interact with information by manipulating visual widgets that allow for interactions appropriate to the kind of data they hold. The widgets of a well-designed interface are selected to support the actions necessary to achieve the goals of users. A model–view–controller architecture allows flexible structures in which the interface is independent of and indirectly linked to application functions, so the GUI can be customized easily. This allows users to select or design a different skin at will, and eases the designer's work to change the interface as user needs evolve. Good GUI design relates to users more, and to system architecture less. Large widgets, such as windows, usually provide a frame or container for the main presentation content such as a web page, email message, or drawing. Smaller ones usually act as a user-input tool.
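
As a rough illustration of why that separation helps, here is a minimal model–view–controller sketch in Python, with illustrative names (real GUI toolkits differ in detail). The view can be swapped for a different "skin" without touching the model.

```python
class CounterModel:                 # application state and logic
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1

class TextView:                     # one possible presentation ("skin")
    def render(self, model):
        print(f"count = {model.value}")

class Controller:                   # translates user actions into model calls
    def __init__(self, model, view):
        self.model, self.view = model, view

    def on_click(self):             # e.g., wired to a button press
        self.model.increment()
        self.view.render(self.model)

Controller(CounterModel(), TextView()).on_click()   # prints: count = 1
```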

A GUI may be designed for the requirements of a vertical market as application-specific GUIs. Examples include automated teller machines (ATM), point of sale (POS) touchscreens at restaurants,[14] self-service checkouts used in a retail store, airline self-ticket and check-in, information kiosks in a public space, like a train station or a museum, and monitors or control screens in an embedded industrial application which employ a real-time operating system (RTOS).

Cell phones and handheld game systems also employ application specific touchscreen GUIs. Newer automobiles use GUIs in their navigation systems and multimedia centers, or navigation multimedia center combinations.

Examples

Components

Layers of a GUI based on a windowing system

A GUI uses a combination of technologies and devices to provide a platform that users can interact with, for the tasks of gathering and producing information.

A series of elements conforming to a visual language has evolved to represent information stored in computers. This makes it easier for people with few computer skills to work with and use computer software. The most common combination of such elements in GUIs is the windows, icons, menus, pointer (WIMP) paradigm, especially in personal computers.[15]

The WIMP style of interaction uses a virtual input device to represent the position of a pointing device's interface, most often a mouse, and presents information organized in windows and represented with icons. Available commands are compiled together in menus, and actions are performed making gestures with the pointing device. A window manager facilitates the interactions between windows, applications, and the windowing system. The windowing system handles hardware devices such as pointing devices, graphics hardware, and positioning of the pointer.

In personal computers, all these elements are modeled through a desktop metaphor to produce a simulation called a desktop environment in which the display represents a desktop, on which documents and folders of documents can be placed. Window managers and other software combine to simulate the desktop environment with varying degrees of realism.

Entries may appear in a list to make space for text and details, or in a grid for compactness and larger icons with little space underneath for text. Variations in between exist, such as a list with multiple columns of items, and a grid of items with rows of text extending sideways from the icon.[16]

Multi-row and multi-column layouts commonly found on the web are "shelf" and "waterfall". The former is found on image search engines, where images appear with a fixed height but variable width, and is typically implemented with the CSS declaration display: inline-block;. A waterfall layout, found on Imgur and Tweetdeck, with fixed width but variable height per item, is usually implemented by specifying column-width:.
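
To make the waterfall behavior concrete, here is a Python sketch of the placement logic that script-based "masonry"/waterfall implementations commonly use (the pure-CSS column-width approach splits content differently): each fixed-width item is appended to whichever column is currently shortest. The numbers are illustrative item heights in pixels.

```python
def waterfall(item_heights, num_columns):
    """Assign each item to the currently shortest column."""
    columns = [[] for _ in range(num_columns)]
    totals = [0] * num_columns                  # running height of each column
    for height in item_heights:
        shortest = totals.index(min(totals))    # column with the least content
        columns[shortest].append(height)
        totals[shortest] += height
    return columns

print(waterfall([120, 80, 200, 60, 150], num_columns=3))
# [[120, 150], [80, 60], [200]]
```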

Post-WIMP interface

Smaller mobile devices such as personal digital assistants (PDAs) and smartphones typically use the WIMP elements with different unifying metaphors, due to constraints in space and available input devices. Applications for which WIMP is not well suited may use newer interaction techniques, collectively termed post-WIMP UIs.[17]

As of 2011, some touchscreen-based operating systems such as Apple's iOS (iPhone) and Android use the class of GUIs named post-WIMP. These support styles of interaction using more than one finger in contact with a display, which allows actions such as pinching and rotating, which are unsupported by one pointer and mouse.[18]
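As a sketch of the geometry behind such gestures (illustrative; real toolkits deliver these as ready-made events), the pinch factor and rotation angle can be derived from the old and new positions of two tracked touch points:

```python
import math

def pinch_and_rotate(p1_old, p2_old, p1_new, p2_new):
    """Derive scale and rotation from two touch points (2D tuples)."""
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])

    scale = dist(p1_new, p2_new) / dist(p1_old, p2_old)        # pinch factor
    rotation = angle(p1_new, p2_new) - angle(p1_old, p2_old)   # radians
    return scale, rotation

# Fingers move apart and twist a quarter turn:
print(pinch_and_rotate((0, 0), (100, 0), (0, 0), (0, 200)))    # (2.0, pi/2)
```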

Interaction

Human interface devices for efficient interaction with a GUI include a computer keyboard (especially used together with keyboard shortcuts) and pointing devices for cursor (or rather pointer) control: mouse, pointing stick, touchpad, trackball, and joystick, as well as virtual keyboards and head-up displays (translucent information devices at eye level).

There are also actions performed by programs that affect the GUI. For example, there are components like inotify or D-Bus to facilitate communication between computer programs.

History

Early efforts

Ivan Sutherland developed Sketchpad in 1963, widely held as the first graphical computer-aided design program. It used a light pen to create and manipulate objects in engineering drawings in realtime with coordinated graphics. In the late 1960s, researchers at the Stanford Research Institute, led by Douglas Engelbart, developed the On-Line System (NLS), which used text-based hyperlinks manipulated with a then-new device: the mouse. (A 1968 demonstration of NLS became known as "The Mother of All Demos.") In the 1970s, Engelbart's ideas were further refined and extended to graphics by researchers at Xerox PARC and specifically Alan Kay, who went beyond text-based hyperlinks and used a GUI as the main interface for the Smalltalk programming language, which ran on the Xerox Alto computer, released in 1973. Most modern general-purpose GUIs are derived from this system.

The Xerox Star 8010 workstation introduced the first commercial GUI.

The Xerox PARC GUI consisted of graphical elements such as windows, menus, radio buttons, and check boxes. The concept of icons was later introduced by David Canfield Smith, who had written a thesis on the subject under the guidance of Kay.[19][20][21] The PARC GUI employs a pointing device along with a keyboard. These aspects can be emphasized by using the alternative term and acronym for windows, icons, menus, pointing device (WIMP). This effort culminated in the 1973 Xerox Alto, the first computer with a GUI, though the system never reached commercial production.

The first commercially available computer with a GUI was the 1979 PERQ workstation, manufactured by Three Rivers Computer Corporation. Its design was heavily influenced by the work at Xerox PARC. In 1981, Xerox eventually commercialized the Alto in the form of a new and enhanced system – the Xerox 8010 Information System – more commonly known as the Xerox Star.[22][23] These early systems spurred many other GUI efforts, including Lisp machines by Symbolics and other manufacturers, the Apple Lisa (which introduced the concept of the menu bar and window controls) in 1983, the Apple Macintosh 128K in 1984, and the Atari ST with Digital Research's GEM and the Commodore Amiga in 1985. Visi On was released in 1983 for IBM PC compatible computers, but was never popular due to its high hardware demands.[24] Nevertheless, it was a crucial influence on the contemporary development of Microsoft Windows.[25]

Apple, Digital Research, IBM and Microsoft used many of Xerox's ideas to develop products, and IBM's Common User Access specifications formed the basis of the GUIs used in Microsoft Windows, IBM OS/2 Presentation Manager, and the Unix Motif toolkit and window manager. These ideas evolved to create the interface found in current versions of Microsoft Windows, and in various desktop environments for Unix-like operating systems, such as macOS and Linux. Thus most current GUIs have largely common idioms.

Macintosh 128K, the first Macintosh (1984)

Popularization

HP LX System Manager running on an HP 200LX.

GUIs were a hot topic in the early 1980s. The Apple Lisa was released in 1983, and various windowing systems existed for DOS operating systems (including PC GEM and PC/GEOS). Individual applications for many platforms presented their own GUI variants.[26] Despite the GUI's advantages, many reviewers questioned the value of the entire concept,[27] citing hardware limits and problems in finding compatible software.

In 1984, Apple released a television commercial introducing the Apple Macintosh during the telecast of Super Bowl XVIII by CBS,[28] with allusions to George Orwell's noted novel Nineteen Eighty-Four. The commercial aimed to make people think about computers, presenting the user-friendly interface as a personal computer that departed from previous business-oriented systems,[29] and it became a signature representation of Apple products.[30]

Windows 95, accompanied by an extensive marketing campaign,[31] was a major success in the marketplace at launch and shortly became the most popular desktop operating system.[32]

In 2007, with the iPhone[33] and later in 2010 with the introduction of the iPad,[34] Apple popularized the post-WIMP style of interaction for multi-touch screens, and those devices were considered to be milestones in the development of mobile devices.[35][36]

The GUIs familiar to most people as of the mid-late 2010s are Microsoft Windows, macOS, and the X Window System interfaces for desktop and laptop computers, and Android, Apple's iOS, Symbian, BlackBerry OS, Windows Phone/Windows 10 Mobile, Tizen, WebOS, and Firefox OS for handheld (smartphone) devices.[37][38]

Comparison to other interfaces

Command-line interfaces

A modern CLI

Since the commands available in command line interfaces can be many, complex operations can be performed using a short sequence of words and symbols. Custom functions may be used to facilitate access to frequent actions. Command-line interfaces are more lightweight, as they only recall information necessary for a task; for example, no preview thumbnails or graphical rendering of web pages. This allows greater efficiency and productivity once many commands are learned.[3] But reaching this level takes some time because the command words may not be easily discoverable or mnemonic. Also, using the command line can become slow and error-prone when users must enter long commands comprising many parameters or several different filenames at once. However, windows, icons, menus, pointer (WIMP) interfaces present users with many widgets that represent and can trigger some of the system's available commands.

A GUI can become hard to use when dialogs are buried deep in a system, or moved to different places during redesigns. Also, icons and dialog boxes are usually harder for users to script.

WIMPs extensively use modes, as the meaning of all keys and clicks on specific positions on the screen are redefined all the time. Command-line interfaces use modes only in limited forms, such as for current directory and environment variables.

Most modern operating systems provide both a GUI and some level of a CLI, although the GUIs usually receive more attention.

GUI wrappers

GUI wrappers find a way around the command-line interface versions (CLI) of (typically) Linux and Unix-like software applications and their text-based UIs or typed command labels. While command-line or text-based applications allow users to run a program non-interactively, GUI wrappers atop them avoid the steep learning curve of the command-line, which requires commands to be typed on the keyboard. By starting a GUI wrapper, users can intuitively interact with, start, stop, and change its working parameters, through graphical icons and visual indicators of a desktop environment, for example. Applications may also provide both interfaces, and when they do the GUI is usually a WIMP wrapper around the command-line version. This is especially common with applications designed for Unix-like operating systems. The latter used to be implemented first because it allowed the developers to focus exclusively on their product's functionality without bothering about interface details such as designing icons and placing buttons. Designing programs this way also allows users to run the program in a shell script.

Three-dimensional graphical user interface

Many environments and games use the methods of 3D graphics to project 3D GUI objects onto the screen. The use of 3D graphics has become increasingly common in mainstream operating systems (e.g., Windows Aero, and Aqua in macOS) to create attractive interfaces, termed eye candy (which includes, for example, the use of drop shadows underneath windows and the cursor), or for functional purposes only possible using three dimensions. For example, user switching is represented by rotating a cube with faces representing each user's workspace, and window management is represented via a Rolodex-style flipping mechanism in Windows Vista (see Windows Flip 3D). In both cases, the operating system transforms windows on-the-fly while continuing to update the content of those windows.

The GUI is usually WIMP-based, although occasionally other metaphors surface, such as those used in Microsoft Bob, 3dwm, File System Navigator, File System Visualizer, 3D Mailbox,[39][40] and GopherVR. Zooming (ZUI) is a related technology that promises to deliver the representation benefits of 3D environments without their usability drawbacks of orientation problems and hidden objects. In 2006, Hillcrest Labs introduced the first ZUI for television.[41] Other innovations include the menus on the PlayStation 2, the menus on the Xbox, Sun's Project Looking Glass, Metisse, which was similar to Project Looking Glass,[42] BumpTop, where users can manipulate documents and windows with realistic movement and physics as if they were physical documents, Croquet OS, which is built for collaboration,[43] and compositing window managers such as Enlightenment and Compiz. Augmented reality and virtual reality also make use of 3D GUI elements.[44]

In science fiction

3D GUIs have appeared in science fiction literature and films, even before certain technologies were feasible or in common use.[45]

  • In prose fiction, 3D GUIs have been portrayed as immersible environments, as with William Gibson's "cyberspace" and Neal Stephenson's "metaverse" and "avatars".
  • The 1993 American film Jurassic Park features Silicon Graphics' 3D file manager File System Navigator, a real-life file manager for Unix operating systems.
  • The film Minority Report has scenes of police officers using specialized 3D data systems.

See also

  • Apple Computer, Inc. v. Microsoft Corp.
  • Console user interface
  • Computer icon
  • Distinguishable interfaces
  • General Graphics Interface (software project)
  • GUI tree
  • Human factors and ergonomics
  • Look and feel
  • Natural user interface
  • Ncurses
  • Object-oriented user interface
  • Organic user interface
  • Rich web application
  • Skeuomorph
  • Skin (computing)
  • Theme (computing)
  • Text entry interface
  • Transportable Applications Environment
  • User interface design
  • Vector-based graphical user interface

Notes

  1. ^ "UI" by itself is still usually pronounced /ˌjˈ/ yoo-EYE.

References

  1. ^ Wells, John (2009). Longman Pronunciation Dictionary (3rd ed.). Pearson Longman. ISBN 978-1-4058-8118-0.
  2. ^ "How to pronounce GUI in English". dictionary.cambridge.org. Retrieved 2020-04-03.
  3. ^ a b "Command line vs. GUI". www.computerhope.com. Retrieved 2020-04-03.
  4. ^ MSCOM (2007-03-12). "The GUI versus the Command Line: Which is better? (Part 1)". Microsoft.com Operations. Microsoft Docs. Retrieved 2021-11-07. {{cite web}}: External link in |department= (help)
  5. ^ MSCOM (2007-03-26). "The GUI versus the Command Line: Which is better? (Part 2)". Microsoft.com Operations. Microsoft Docs. Retrieved 2021-11-07. {{cite web}}: External link in |department= (help)
  6. ^ "Graphical user interface". ScienceDaily. Retrieved 2019-05-09.
  7. ^ Levy, Steven. "Graphical User Interface (GUI)". Britannica.com. Retrieved 2019-06-12.
  8. ^ "GUI". PC Magazine Encyclopedia. pcmag.com. Retrieved 2019-06-12.
  9. ^ Greg Wilson (2006). "Off with Their HUDs!: Rethinking the Heads-Up Display in Console Game Design". Gamasutra. Archived from the original on January 19, 2010. Retrieved February 14, 2006.
  10. ^ "GUI definition". Linux Information Project. October 1, 2004. Retrieved 12 November 2008.
  11. ^ "chrome". www.catb.org. Retrieved 2020-04-03.
  12. ^ Jakob Nielsen (January 29, 2012). "Browser and GUI Chrome". Nngroup. Archived from the original on August 25, 2012. Retrieved May 20, 2012.
  13. ^ Martinez, Wendy L. (2011-02-23). "Graphical user interfaces: Graphical user interfaces". Wiley Interdisciplinary Reviews: Computational Statistics. 3 (2): 119–133. doi:10.1002/wics.150. S2CID 60467930.
  14. ^ The ViewTouch restaurant system by Giselle Bisson
  15. ^ "What is a graphical user interface (GUI)?". IONOS Digitalguide. Retrieved 2022-02-25.
  16. ^ Babich, Nick (30 May 2020). "Mobile UX Design: List View and Grid View". Medium. Retrieved 4 September 2021.
  17. ^ IEEE.org.
  18. ^ Reality-Based Interaction: A Framework for Post-WIMP Interfaces
  19. ^ Lieberman, Henry. "A Creative Programming Environment, Remixed", MIT Media Lab, Cambridge.
  20. ^ Salha, Nader. "Aesthetics and Art in the Early Development of Human-Computer Interfaces" Archived 2020-08-07 at the Wayback Machine, October 2012.
  21. ^ Smith, David. "Pygmalion: A Creative Programming Environment", 1975.
  22. ^ The first GUIs
  23. ^ Xerox Star user interface demonstration, 1982
  24. ^ "VisiCorp Visi On". The Visi On product was not intended for the home user. It was designed and priced for high-end corporate workstations. The hardware it required was quite a bit for 1983. It required a minimum of 512k of ram and a hard drive (5 megs of space).
  25. ^ A Windows Retrospective, PC Magazine Jan 2009. Ziff Davis. January 2009.
  26. ^ "Magic Desk I for Commodore 64".
  27. ^ Sandberg-Diment, Erik (1984-12-25). "Value of Windowing is Questioned". The New York Times.
  28. ^ Friedman, Ted (October 1997). "Apple's 1984: The Introduction of the Macintosh in the Cultural History of Personal Computers". Archived from the original on October 5, 1999.
  29. ^ Friedman, Ted (2005). "Chapter 5: 1984". Electric Dreams: Computers in American Culture. New York University Press. ISBN 978-0-8147-2740-9. Retrieved October 6, 2011.
  30. ^ Grote, Patrick (October 29, 2006). "Review of Pirates of Silicon Valley Movie". DotJournal.com. Archived from the original on November 7, 2006. Retrieved January 24, 2014.
  31. ^ Washington Post (August 24, 1995). "With Windows 95's Debut, Microsoft Scales Heights of Hype". Washington Post. Retrieved November 8, 2013.
  32. ^ "Computers | Timeline of Computer History | Computer History Museum". www.computerhistory.org. Retrieved 2017-04-02.
  33. ^ Mather, John. iMania, Ryerson Review of Journalism, (February 19, 2007) Retrieved February 19, 2007
  34. ^ "the iPad could finally spark demand for the hitherto unsuccessful tablet PC" --Eaton, Nick The iPad/tablet PC market defined? Archived 2011-02-01 at the Wayback Machine, Seattle Post-Intelligencer, 2010
  35. ^ Bright, Peter Ballmer (and Microsoft) still doesn't get the iPad, Ars Technica, 2010
  36. ^ "The iPad's victory in defining the tablet: What it means". InfoWorld. 2011-07-05.
  37. ^ Hanson, Cody W. (2011-03-17). "Chapter 2: Mobile Devices in 2011". Library Technology Reports. 47 (2): 11–23. ISSN 0024-2586.
  38. ^ "What is a Graphical User Interface? Definition and FAQs | OmniSci". www.omnisci.com. Retrieved 2022-01-26.
  39. ^ "3D Mailbox - 3-Dimensional Email Software. Bring e-mail to life! Email just got cool and fun". 3dmailbox.com. Archived from the original on 2019-07-21. Retrieved 2022-07-14.
  40. ^ "3D Mailbox". Download.com. Retrieved 2022-07-14.
  41. ^ Macworld.com November 11, 2006. Dan Moren. CES Unveiled@NY ‘07: Point and click coming to set-top boxes? Archived 2011-11-08 at the Wayback Machine
  42. ^ "Metisse - New Looking Glass Alternative". 29 June 2004. Retrieved 2 July 2020.
  43. ^ Smith, David A.; Kay, Alan; Raab, Andreas; Reed, David P. "Croquet – A Collaboration System Architecture" (PDF). croquetconsortium.org. Archived from the original (PDF) on 2007-09-27. Retrieved 2022-09-17. The efforts at Xerox PARC under the leadership of Alan Kay that drove the development of [...] powerful bit-mapped display based user interfaces was key. In some ways, all we are doing here is extending this model to 3D and adding a new robust object collaboration model.
  44. ^ Purwar, Sourabh (2019-03-04). "Designing User Experience for Virtual Reality (VR) applications". Medium. Retrieved 2022-05-06.
  45. ^ Dayton, Tom. "Object-Oriented GUIs are the Future". OpenMCT Blog. Archived from the original on 10 August 2014. Retrieved 23 August 2012.

External links

  • Evolution of Graphical User Interface in last 50 years by Raj Lal
  • The men who really invented the GUI by Clive Akass
  • Graphical User Interface Gallery, screenshots of various GUIs
  • Marcin Wichary's GUIdebook, Graphical User Interface gallery: over 5500 screenshots of GUI, application and icon history
  • The Real History of the GUI by Mike Tuck
  • In The Beginning Was The Command Line by Neal Stephenson
  • 3D Graphical User Interfaces (PDF) by Farid BenHajji and Erik Dybner, Department of Computer and Systems Sciences, Stockholm University
  • Topological Analysis of the Gibbs Energy Function (Liquid-Liquid Equilibrium Correlation Data). Including a Thermodinamic Review and a Graphical User Interface (GUI) for Surfaces/Tie-lines/Hessian matrix analysis - University of Alicante (Reyes-Labarta et al. 2015-18)
  • Innovative Ways to Use Information Visualization across a Variety of Fields by Ryan Erwin Digital marketing specialist ( CLLAX ) (2022-05)


3D interaction

Form of human-machine interaction

In computing, 3D interaction is a form of human-machine interaction where users are able to move and perform interaction in 3D space. Both human and machine process information where the physical position of elements in the 3D space is relevant.

The 3D space used for interaction can be the real physical space, a virtual space representation simulated in the computer, or a combination of both. When the real physical space is used for data input, the human interacts with the machine performing actions using an input device that detects the 3D position of the human interaction, among other things. When it is used for data output, the simulated 3D virtual scene is projected onto the real environment through one output device.

The principles of 3D interaction are applied in a variety of domains such as tourism, art, gaming, simulation, education, information visualization, or scientific visualization.[1]

History

Research in 3D interaction and 3D display began in the 1960s, pioneered by researchers like Ivan Sutherland, Fred Brooks, Bob Sproull, Andrew Ortony and Richard Feldman. In 1962, Morton Heilig invented the Sensorama simulator,[2] which provided 3D video feedback as well as motion, audio, and other sensory feedback to produce a virtual environment. The next stage of development was Ivan Sutherland's completion of his pioneering work in 1968, the Sword of Damocles:[3] a head-mounted display that produced a 3D virtual environment by presenting a left and right still image of that environment.

Availability of technology as well as impractical costs held back the development and application of virtual environments until the 1980s. Applications were limited to military ventures in the United States. Since then, further research and technological advancements have allowed new doors to be opened to application in various other areas such as education, entertainment, and manufacturing.

Background

In 3D interaction, users carry out their tasks and perform functions by exchanging information with computer systems in 3D space. It is an intuitive type of interaction because humans interact in three dimensions in the real world. The tasks that users perform have been classified as selection and manipulation of objects in virtual space, navigation, and system control. Tasks can be performed in virtual space through interaction techniques and by utilizing interaction devices. 3D interaction techniques are classified according to the task group they support: techniques that support navigation tasks are classified as navigation techniques; techniques that support object selection and manipulation are labeled selection and manipulation techniques; lastly, system control techniques support tasks that have to do with controlling the application itself. A consistent and efficient mapping between techniques and interaction devices must be made in order for the system to be usable and effective. Interfaces associated with 3D interaction are called 3D interfaces. Like other types of user interfaces, they involve two-way communication between user and system, but allow users to perform actions in 3D space. Input devices permit the users to give directions and commands to the system, while output devices allow the machine to present information back to them.

3D interfaces have been used in applications that feature virtual environments, and augmented and mixed realities. In virtual environments, users may interact directly with the environment or use tools with specific functionalities to do so. 3D interaction occurs when physical tools are controlled in 3D spatial context to control a corresponding virtual tool.

Users experience a sense of presence when engaged in an immersive virtual world. Enabling the users to interact with this world in 3D allows them to make use of natural and intrinsic knowledge of how information exchange takes place with physical objects in the real world. Texture, sound, and speech can all be used to augment 3D interaction. Currently, users still have difficulty in interpreting 3D space visuals and understanding how interaction occurs. Although it is a natural way for humans to move around in a three-dimensional world, the difficulty exists because many of the cues present in real environments are missing from virtual environments. Perspective and occlusion are the primary perceptual depth cues used by humans. Also, even though scenes in virtual space appear three-dimensional, they are still displayed on a 2D surface, so some inconsistencies in depth perception will still exist.

3D user interfaces

Scheme of 3D User Interaction phases

User interfaces are the means for communication between users and systems. 3D interfaces include media for 3D representation of system state, and media for 3D user input or manipulation. Using 3D representations is not enough to create 3D interaction. The users must have a way of performing actions in 3D as well. To that effect, special input and output devices have been developed to support this type of interaction. Some, such as the 3D mouse, were developed based on existing devices for 2D interaction.

3D user interfaces are user interfaces in which 3D interaction takes place; that is, the user's tasks occur directly within a three-dimensional space. The user communicates commands, requests, questions, intent, and goals to the system, which in turn has to provide feedback, requests for input, information about its status, and so on.

The user and the system do not share the same language, so to make the communication process possible, the interface must serve as an intermediary or translator between them.

The way the user transforms perceptions into actions is called the human transfer function, and the way the system transforms signals into display information is called the system transfer function. 3D user interfaces are, in practice, physical devices that connect the user and the system with minimal delay; there are two types: 3D user interface output hardware and 3D user interface input hardware.

3D user interface output hardware

Output devices, also called display devices, allow the machine to provide information or feedback to one or more users through the human perceptual system. Most of them are focused on stimulating the visual, auditory, or haptic senses. However, in some unusual cases they also can stimulate the user's olfactory system.

3D visual displays

These devices are the most popular; their goal is to present the information produced by the system through the human visual system in a three-dimensional way. The main features that distinguish them are: field of regard and field of view, spatial resolution, screen geometry, light transfer mechanism, refresh rate, and ergonomics.

Another way to characterize these devices is according to the different categories of depth perception cues used to help the user understand the three-dimensional information. The main types of displays used in 3D user interfaces are monitors, surround-screen displays, workbenches, hemispherical displays, head-mounted displays, arm-mounted displays, and autostereoscopic displays. Virtual reality headsets and CAVEs (Cave Automatic Virtual Environments) are examples of fully immersive visual displays, where the user can see only the virtual world and not the real world. Semi-immersive displays, such as monitors and workbenches, allow users to see both.

3D audio displays

3D audio displays are devices that present information (in this case, sound) through the human auditory system; they are especially useful when supplying location and spatial information to users. Their objective is to generate and display spatialized 3D sound so that users can apply their psychoacoustic skills to determine the location and direction of the sound. There are different localization cues: binaural cues, spectral and dynamic cues, head-related transfer functions, reverberation, sound intensity, and vision and environment familiarity. Adding a background audio component to a display also adds to the sense of realism.

3D haptic displays

These devices use the sense of touch to simulate the physical interaction between the user and a virtual object. There are three different types of 3D haptic displays: those that provide the user with a sense of force, those that simulate the sense of touch, and those that use both. The main features that distinguish these devices are haptic presentation capability, resolution, and ergonomics. The human haptic system has two fundamental kinds of cues: tactile and kinesthetic. Tactile cues come from the wide variety of skin receptors below the surface of the skin that provide information about texture, temperature, pressure, and damage. Kinesthetic cues come from the many receptors in the muscles, joints, and tendons that provide information about joint angle and the stress and length of muscles.

3D user interface input hardware

These hardware devices are called input devices, and their aim is to capture and interpret the actions performed by the user. The degrees of freedom (DOF) they offer are one of their main features. Classical interface components (such as the mouse, keyboard, and arguably the touchscreen) are often inappropriate for non-2D interaction needs.[1] These systems are also differentiated according to how much physical interaction is needed to use the device: purely active devices need to be manipulated to produce information, while purely passive ones do not. The main categories of these devices are standard (desktop) input devices, tracking devices, control devices, navigation equipment, gesture interfaces, 3D mice, and brain-computer interfaces.

Desktop Input devices

These devices are designed for 3D interaction on a desktop; many of them were originally designed for traditional two-dimensional interaction, but with an appropriate mapping between the system and the device, they can work well in three dimensions. There are different types: keyboards, 2D mice and trackballs, pen-based tablets and styluses, and joysticks. Nonetheless, many studies have questioned the appropriateness of desktop interface components for 3D interaction,[1][4][5] though this is still debated.[6][7]

Tracking devices

3D user interaction systems rely primarily on motion tracking technologies to obtain all the necessary information from the user through analysis of their movements or gestures.

Trackers detect or monitor head, hand, or body movements and send that information to the computer. The computer then translates it and ensures that position and orientation are reflected accurately in the virtual world. Tracking is important in presenting the correct viewpoint and coordinating the spatial and sound information presented to users, as well as the tasks or functions that they can perform. 3D trackers have been identified as mechanical, magnetic, ultrasonic, optical, and hybrid inertial. Examples of trackers include motion trackers, eye trackers, and data gloves. A simple 2D mouse may be considered a navigation device if it allows the user to move to a different location in a virtual 3D space. Navigation devices such as the treadmill and bicycle make use of the natural ways that humans travel in the real world. Treadmills simulate walking or running, and bicycles or similar equipment simulate vehicular travel. In the case of navigation devices, the information passed on to the machine is the user's location and movements in virtual space. Wired gloves and bodysuits allow gestural interaction to occur; these send hand or body position and movement information to the computer using sensors.

The full development of a 3D user interaction system requires access to a few basic parameters: every such technology-based system should know, at least partially, the user's relative position, absolute position, angular velocity, rotation, orientation, and height. This data is collected through spatial tracking systems and sensors of multiple forms, using a range of techniques. The ideal system for this type of interaction is one based on position tracking with six degrees of freedom (6-DOF); such systems are characterized by the ability to obtain the user's absolute 3D position, and with it information about all possible three-dimensional angles.

These systems can be implemented using various technologies, such as electromagnetic fields, optical tracking, or ultrasonic tracking, but they all share one main limitation: they need a fixed external reference (a base station, an array of cameras, or a set of visible markers), so they can only be used in prepared areas. Inertial tracking systems require no such external reference; they collect data using accelerometers, gyroscopes, or video cameras. The main problem with these systems is that they do not obtain absolute position: because they start from no pre-set external reference point, they only ever track the user's relative position, which causes cumulative errors in the data-sampling process. The goal for a 3D tracking system is a 6-DOF system that obtains absolute position together with precise movement and orientation over a large working space. A mobile phone is a rough example: it carries motion-capture sensors as well as GPS latitude tracking, but current phones cannot capture position with centimeter-level precision and are therefore unsuitable.
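
A small Python sketch of why the relative approach drifts (values are illustrative): position comes from double-integrating acceleration, so even a tiny uncorrected sensor bias grows quadratically with time.

```python
def dead_reckoning(accel_samples, dt):
    """Integrate acceleration twice to estimate position (1D for simplicity)."""
    velocity, position = 0.0, 0.0
    for a in accel_samples:
        velocity += a * dt            # first integration: velocity
        position += velocity * dt     # second integration: position
    return position

bias = 0.01                           # 0.01 m/s^2 of uncorrected sensor bias
for seconds in (1, 10, 60):
    samples = [bias] * int(seconds / 0.01)
    print(f"{seconds:3d} s -> {dead_reckoning(samples, dt=0.01):6.2f} m of drift")
```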

However, several systems come close to these objectives. Their determining factor is that they are self-contained (all-in-one) and require no fixed prior reference. These systems include the following:

Nintendo Wii Remote ("Wiimote")

Wiimote device

The Wii Remote does not offer true 6-DOF tracking since, again, it cannot provide absolute position; it is, however, equipped with a multitude of sensors that turn a 2D device into a capable tool for interaction in 3D environments.

This device has gyroscopes to detect user rotation, ADXL330 accelerometers for obtaining the speed and movement of the hands, and optical sensors, electronic compasses, and infrared devices for determining orientation and capturing position.

This type of device can be affected by external infrared sources such as light bulbs or candles, causing errors in the accuracy of the position.

Google Tango Devices

Google's Project Tango tablet, 2014

The Tango Platform is an augmented reality computing platform, developed and authored by the Advanced Technology and Projects (ATAP), a skunkworks division of Google. It uses computer vision and internal sensors (like gyroscopes) to enable mobile devices, such as smartphones and tablets, to detect their position relative to the world around them without using GPS or other external signals. It can therefore be used to provide 6-DOF input which can also be combined with its multi-touch screen.[8] The Google Tango devices can be seen as more integrated solutions than the early prototypes combining spatially-tracked devices with touch-enabled-screens for 3D environments.[9][10][11]

Microsoft Kinect

Kinect Sensor

The Microsoft Kinect device offers a different motion-capture technology for tracking.

Instead of sensors worn by the user, it is based on a structured-light scanner located in a bar, which tracks the entire body by detecting about 20 spatial points, each measured with 3 different degrees of freedom to obtain position, velocity, and rotation.

Its main advantage is ease of use and the fact that the user need not wear or hold any external device; its main disadvantage lies in the inability to detect the orientation of the user, thus limiting certain spatial and pointing functions.

Leap Motion

Leap Motion Controller

The Leap Motion is a hand-tracking system designed for small spaces, enabling new forms of 3D interaction with desktop applications; it offers great fluidity when browsing three-dimensional environments in a realistic way.

It is a small device that connects to a computer via USB and uses two cameras with infrared LEDs to analyze a hemispherical area extending about 1 meter above its surface, recording at 300 frames per second; this information is sent to the computer to be processed by the company's software.

3D Interaction Techniques

3D interaction techniques are the different ways that the user can interact with the 3D virtual environment to execute different kinds of tasks. The quality of these techniques has a profound effect on the quality of the entire 3D user interface. They can be classified into three groups: navigation, selection and manipulation, and system control.

Navigation

The computer needs to provide the user with information regarding location and movement. Navigation is the task users perform most in large 3D environments, and it presents challenges such as supporting spatial awareness, enabling efficient movement between distant places, and keeping navigation bearable so the user can focus on more important tasks. Navigation tasks can be divided into two components: travel and wayfinding. Travel involves moving from the current location to the desired point. Wayfinding refers to finding and setting routes to a travel goal within the virtual environment.

Travel

Travel is a conceptual technique that consists of moving the viewpoint from one location to another. Viewpoint orientation is usually handled in immersive virtual environments by head tracking. There are five types of travel interaction techniques:

  • Physical movement: uses the user's body motion to move through the virtual environment. It is an appropriate technique when an augmented sense of presence is required, or when the task calls for physical effort from the user.
  • Manual viewpoint manipulation: the movements of the user's hands determine the displacement in the virtual environment. One example is the user moving their hands as if grabbing a virtual rope and pulling themselves up. This technique can be easy to learn and efficient, but it can cause fatigue.
  • Steering: the user has to constantly indicate where to move. It is a common and efficient technique. One example is gaze-directed steering, where head orientation determines the direction of travel (see the sketch after this list).
  • Target-based travel: the user specifies a destination point and the system performs the displacement. The travel may be executed by teleport, where the user is instantly moved to the destination, or the system may execute a transition movement toward it. These techniques are very simple from the user's point of view because they only have to indicate the destination.
  • Route planning: the user specifies the path that should be taken through the environment, and the system executes the movement. The user may draw a path on a map of the virtual environment to plan a route. This technique allows users to control travel while retaining the ability to do other tasks during motion.
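
As referenced above, here is a minimal Python sketch of gaze-directed steering (illustrative constants; a y-up coordinate system is assumed): the tracked head orientation supplies the travel direction, and the viewpoint advances along it each frame.

```python
import math

def steer(position, yaw, pitch, speed, dt):
    """Advance the viewpoint along the gaze direction for one frame."""
    direction = (                       # unit vector from head yaw/pitch (radians)
        math.cos(pitch) * math.sin(yaw),
        math.sin(pitch),
        math.cos(pitch) * math.cos(yaw),
    )
    return tuple(p + d * speed * dt for p, d in zip(position, direction))

pos = (0.0, 1.7, 0.0)                   # eye height of 1.7 m
pos = steer(pos, yaw=math.radians(90), pitch=0.0, speed=2.0, dt=0.016)
print(pos)                              # moved about 3.2 cm along +x
```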

Wayfinding

Wayfinding is the cognitive process of defining a route for the surrounding environment, using and acquiring spatial knowledge to construct a cognitive map of the environment. In virtual space it is different and more difficult to do than in the real world because synthetic environments are often missing perceptual cues and movement constraints. It can be supported using user-centered techniques such as using a larger field of view and supplying motion cues, or environment-centered techniques like structural organization and wayfinding principles.

For good wayfinding, users should receive wayfinding support during virtual environment travel to compensate for the constraints of the virtual world.

These supports can be user-centered, such as a large field of view, or even non-visual, such as audio; or environment-centered, such as artificial cues and structural organization that clearly define different parts of the environment. Some of the most used artificial cues are maps, compasses, and grids, or architectural cues like lighting, color, and texture.

Selection and Manipulation

Selection and Manipulation techniques for 3D environments must accomplish at least one of three basic tasks: object selection, object positioning and object rotation.

Users need to be able to manipulate virtual objects. Manipulation tasks involve selecting and moving an object. Sometimes, the rotation of the object is involved as well. Direct-hand manipulation is the most natural technique because manipulating physical objects with the hand is intuitive for humans. However, this is not always possible. A virtual hand that can select and re-locate virtual objects will work as well.

3D widgets can be used to put controls on objects: these are usually called 3D gizmos or manipulators (good examples are the ones in Blender). Users can employ them to re-locate, re-scale, or re-orient an object (translate, scale, rotate).

Other techniques include the Go-Go technique and ray casting, where a virtual ray is used to point to and select an object.

Selection

The task of selecting objects or 3D volumes in a 3D environment requires first being able to find the desired target and then being able to select it. Most 3D datasets/environments are afflicted by occlusion problems,[12] so the first step of finding the target relies on manipulation of the viewpoint, or of the 3D data itself, in order to properly identify the object or volume of interest. This initial step is thus tightly coupled with manipulations in 3D. Once the target is visually identified, users have access to a variety of techniques to select it.

Usually, the system provides the user with a 3D cursor represented as a human hand whose movements correspond to the motion of the hand tracker. This virtual hand technique[13] is rather intuitive because it simulates real-world interaction with objects, but it is limited to objects within an arm's-reach area.

To avoid this limit, many techniques have been suggested, like the Go-Go technique.[14] This technique lets the user extend the reach area by using a non-linear mapping of the hand: when the user extends the hand beyond a fixed threshold distance, the mapping becomes non-linear and the virtual hand's reach grows.
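
A sketch of that mapping in Python, following the published Go-Go formulation (the threshold D and gain k are illustrative constants): within D the virtual hand tracks the real hand one-to-one, and beyond it the virtual distance grows quadratically.

```python
def go_go(real_distance, D=0.3, k=10.0):
    """Map real hand distance (meters) to virtual hand distance."""
    if real_distance < D:
        return real_distance                             # linear zone: 1:1
    return real_distance + k * (real_distance - D) ** 2  # extended reach

for r in (0.2, 0.3, 0.5, 0.7):
    print(f"real {r:.1f} m -> virtual {go_go(r):.2f} m")
# real 0.5 m -> virtual 0.90 m; real 0.7 m -> virtual 2.30 m
```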

Another technique to select and manipulate objects in 3D virtual spaces consists of pointing at objects using a virtual ray emanating from the virtual hand.[15] When the ray intersects an object, that object can be manipulated. Several variations of this technique have been made, like the aperture technique, which uses a conic pointer directed from the user's eyes (estimated from the head location) to select distant objects. This technique also uses a hand sensor to adjust the conic pointer's size.
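
A sketch of ray-casting selection in Python (illustrative; real engines test against meshes or tighter bounding volumes): the ray from the virtual hand is tested against each object's bounding sphere, and the nearest hit is taken as the selection.

```python
def ray_hits_sphere(origin, direction, center, radius):
    """Return the distance along the (normalized) ray to the closest approach
    to the sphere's center, or None if the ray misses the sphere."""
    oc = [c - o for o, c in zip(origin, center)]
    t = sum(a * b for a, b in zip(oc, direction))        # closest approach
    closest = [o + d * t for o, d in zip(origin, direction)]
    miss = sum((c - p) ** 2 for c, p in zip(center, closest)) > radius ** 2
    return None if (t < 0 or miss) else t

objects = {"cube": ((0, 0, 5), 1.0), "lamp": ((3, 0, 9), 1.0)}   # center, radius
hits = {name: ray_hits_sphere((0, 0, 0), (0, 0, 1), c, r)
        for name, (c, r) in objects.items()}
hits = {name: t for name, t in hits.items() if t is not None}
print(min(hits, key=hits.get) if hits else "nothing selected")   # -> cube
```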

Many other techniques, relying on different input strategies, have also been developed.[16][17]

Manipulation

3D manipulation occurs both before a selection task (in order to visually identify a 3D selection target) and after a selection has occurred, to manipulate the selected object. 3D manipulation requires 3 DOF for rotation (1 DOF per axis: x, y, z) and 3 DOF for translation (1 DOF per axis), plus at least 1 additional DOF for uniform zoom (or alternatively 3 additional DOF for non-uniform zoom operations).

3D manipulation, like navigation, is one of the essential tasks with 3D data, objects, or environments. It is the basis of much widely used 3D software, such as Blender, Autodesk's tools, and VTK. Such software, available mostly on computers, is almost always combined with a mouse and keyboard. To provide enough DOFs (the mouse only offers 2), it relies on mode switching via modifier keys in order to separately control all the DOFs involved in 3D manipulation. With the recent advent of multi-touch enabled smartphones and tablets, the interaction mappings of this software have been adapted to multi-touch (which offers more simultaneous DOF manipulations than a mouse and keyboard). A survey conducted in 2017 of 36 commercial and academic mobile applications on Android and iOS, however, suggested that most applications did not provide a way to control the minimum 6 DOFs required,[7] but that, among those which did, most made use of a 3D version of the RST (Rotation Scale Translation) mapping: one finger is used for rotation around x and y, while two-finger interaction controls rotation around z and translation along x, y, and z.
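
A sketch of that RST-style dispatch in Python (illustrative names and units; real applications integrate these deltas into the object's transform): one-finger drags map to rotation about x and y, while two-finger gestures control rotation about z and translation.

```python
def map_touch(num_fingers, dx, dy, twist=0.0, pinch_scale=1.0):
    """Map raw touch deltas onto the six manipulation DOFs."""
    if num_fingers == 1:                       # one-finger drag
        return {"rotate_x": dy, "rotate_y": dx}
    if num_fingers == 2:                       # two-finger pan / twist / pinch
        return {
            "rotate_z": twist,
            "translate_x": dx,
            "translate_y": dy,
            "translate_z": pinch_scale - 1.0,  # pinch pushes/pulls along z
        }
    return {}

print(map_touch(1, dx=0.2, dy=-0.1))
print(map_touch(2, dx=0.0, dy=0.0, twist=0.3, pinch_scale=1.5))
```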

System Control

System control techniques allow the user to send commands to an application, activate some functionality, change the interaction (or system) mode, or modify a parameter. Sending a command always includes selecting an element from a set. System control techniques, i.e., techniques that support system control tasks in three dimensions, can be categorized into four groups:

  • Graphical menus: visual representations of commands.
  • Voice commands: menus accessed via voice.
  • Gestural interaction: command accessed via body gesture.
  • Tools: virtual objects with an implicit function or mode.

Hybrid techniques that combine several of these types also exist.

Symbolic input

This task allows the user to enter and/or edit text, for example, making it possible to annotate 3D scenes or 3D objects.

See also

  • Finger tracking
  • Interaction technique
  • Interaction design
  • Human–computer interaction
  • Cave Automatic Virtual Environment (CAVE)
  • Virtual reality

References

  1. Bowman, Doug A. (2004). 3D User Interfaces: Theory and Practice. Redwood City, CA, USA: Addison Wesley Longman Publishing Co., Inc. ISBN 978-0201758672.
  2. US 3050870A, Heilig, Morton L, "Sensorama simulator", published 1962-08-28.
  3. Sutherland, I. E. (1968). "A head-mounted three dimensional display". Proceedings of AFIPS 68, pp. 757–764. Archived 2016-03-04 at the Wayback Machine.
  4. Chen, Michael; Mountford, S. Joy; Sellen, Abigail (1988). "A study in interactive 3-D rotation using 2-D control devices" (PDF). Proceedings of the 15th annual conference on Computer graphics and interactive techniques – SIGGRAPH '88. New York, New York, USA: ACM Press. pp. 121–129. doi:10.1145/54852.378497. ISBN 0-89791-275-6.
  5. Yu, Lingyun; Svetachov, Pjotr; Isenberg, Petra; Everts, Maarten H.; Isenberg, Tobias (2010-10-28). "FI3D: Direct-Touch Interaction for the Exploration of 3D Scientific Visualization Spaces" (PDF). IEEE Transactions on Visualization and Computer Graphics. 16 (6): 1613–1622. doi:10.1109/TVCG.2010.157. ISSN 1077-2626. PMID 20975204. S2CID 14354159.
  6. Terrenghi, Lucia; Kirk, David; Sellen, Abigail; Izadi, Shahram (2007). "Affordances for manipulation of physical versus digital media on interactive surfaces". Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. New York, New York, USA: ACM Press. pp. 1157–1166. doi:10.1145/1240624.1240799. ISBN 978-1-59593-593-9.
  7. Besançon, Lonni; Issartel, Paul; Ammi, Mehdi; Isenberg, Tobias (2017). "Mouse, Tactile, and Tangible Input for 3D Manipulation". Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. New York, New York, USA: ACM Press. pp. 4727–4740. arXiv:1603.08735. doi:10.1145/3025453.3025863. ISBN 978-1-4503-4655-9.
  8. Besançon, Lonni; Issartel, Paul; Ammi, Mehdi; Isenberg, Tobias (2017). "Hybrid Tactile/Tangible Interaction for 3D Data Exploration". IEEE Transactions on Visualization and Computer Graphics. 23 (1): 881–890. doi:10.1109/tvcg.2016.2599217. ISSN 1077-2626. PMID 27875202. S2CID 16626037.
  9. Fitzmaurice, George W.; Buxton, William (1997). "An empirical evaluation of graspable user interfaces". Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems. New York, New York, USA: ACM Press. pp. 43–50. doi:10.1145/258549.258578. ISBN 0-89791-802-9.
  10. Angus, Ian G.; Sowizral, Henry A. (1995-03-30). Fisher, Scott S.; Merritt, John O.; Bolas, Mark T. (eds.). Embedding the 2D interaction metaphor in a real 3D virtual environment. SPIE. doi:10.1117/12.205875.
  11. Poupyrev, I.; Tomokazu, N.; Weghorst, S. (1998). "Virtual Notepad: handwriting in immersive VR" (PDF). Proceedings of the IEEE 1998 Virtual Reality Annual International Symposium. IEEE Computer Society. pp. 126–132. doi:10.1109/vrais.1998.658467. ISBN 0-8186-8362-7.
  12. Shneiderman, B. (1996). "The eyes have it: a task by data type taxonomy for information visualizations". Proceedings 1996 IEEE Symposium on Visual Languages. IEEE Computer Society Press. pp. 336–343. doi:10.1109/vl.1996.545307. hdl:1903/466. ISBN 0-8186-7508-X.
  13. Poupyrev, I.; Ichikawa, T.; Weghorst, S.; Billinghurst, M. (1998). "Egocentric Object Manipulation in Virtual Environments: Empirical Evaluation of Interaction Techniques". Computer Graphics Forum. 17 (3): 41–52. CiteSeerX 10.1.1.95.4933. doi:10.1111/1467-8659.00252. ISSN 0167-7055. S2CID 12784160.
  14. Poupyrev, Ivan; Billinghurst, Mark; Weghorst, Suzanne; Ichikawa, Tadao (1996). "The go-go interaction technique: non-linear mapping for direct manipulation in VR" (PDF). Proceedings of the ACM Symposium on User Interface Software and Technology (UIST '96). ACM. pp. 79–80. doi:10.1145/237091.237102. ISBN 978-0897917988. S2CID 1098140. Retrieved 2018-05-18.
  15. Mine, Mark R. (1995). Virtual Environment Interaction Techniques (PDF) (Technical report). Department of Computer Science, University of North Carolina.
  16. Argelaguet, Ferran; Andujar, Carlos (2013). "A survey of 3D object selection techniques for virtual environments" (PDF). Computers & Graphics. 37 (3): 121–136. doi:10.1016/j.cag.2012.12.003. ISSN 0097-8493. S2CID 8565854.
  17. Besançon, Lonni; Sereno, Mickael; Yu, Lingyun; Ammi, Mehdi; Isenberg, Tobias (2019). "Hybrid Touch/Tangible Spatial 3D Data Selection" (PDF). Computer Graphics Forum. 38 (3): 553–567. Wiley. doi:10.1111/cgf.13710. ISSN 0167-7055. S2CID 199019072.

Reading List
  1. 3D Interaction With and From Handheld Computers. Visited March 28, 2008
  2. Bowman, D., Kruijff, E., LaViola, J., Poupyrev, I. (2001, February). An Introduction to 3-D User Interface Design. Presence, 10(1), 96–108.
  3. Bowman, D., Kruijff, E., LaViola, J., Poupyrev, I. (2005). 3D User Interfaces: Theory and Practice. Boston: Addison–Wesley.
  4. Bowman, Doug. 3D User Interfaces. Interaction Design Foundation. Retrieved October 15, 2015
  5. Burdea, G. C., Coiffet, P. (2003). Virtual Reality Technology (2nd ed.). New Jersey: John Wiley & Sons Inc.
  6. Carroll, J. M. (2002). Human–Computer Interaction in the New Millennium. New York: ACM Press
  7. Csisinko, M., Kaufmann, H. (2007, March). Towards a Universal Implementation of 3D User Interaction Techniques [Proceedings of Specification, Authoring, Adaptation of Mixed Reality User Interfaces Workshop, IEEE VR]. Charlotte, NC, USA.
  8. Fröhlich, B.; Plate, J. (2000). "The Cubic Mouse: A New Device for 3D Input". Proceedings of ACM CHI 2000. New York: ACM Press. pp. 526–531. doi:10.1145/332040.332491.
  9. Interaction Techniques. DLR - Simulations- und Softwaretechnik. Retrieved October 18, 2015
  10. Keijser, J.; Carpendale, S.; Hancock, M.; Isenberg, T. (2007). "Exploring 3D Interaction in Alternate Control-Display Space Mappings". Proceedings of the 2nd IEEE Symposium on 3D User Interfaces. Los Alamitos, CA: IEEE Computer Society. pp. 526–531.
  11. Larijani, L. C. (1993). The Virtual Reality Primer. United States of America: R. R. Donnelley and Sons Company.
  12. Rhijn, A. van (2006). Configurable Input Devices for 3D Interaction using Optical Tracking. Eindhoven: Technische Universiteit Eindhoven.
  13. Stuerzlinger, W., Dadgari, D., Oh, J-Y. (2006, April). Reality-Based Object Movement Techniques for 3D. CHI 2006 Workshop: "What is the Next Generation of Human–Computer Interaction?". Workshop presentation.
  14. The CAVE (CAVE Automatic Virtual Environment). Visited March 28, 2007
  15. The Java 3-D Enabled CAVE at the Sun Centre of Excellence for Visual Genomics. Visited March 28, 2007
  16. Vince, J. (1998). Essential Virtual Reality Fast. Great Britain: Springer-Verlag London Limited
  17. Virtual Reality. Visited March 28, 2007
  18. Yuan, C., (2005, December). Seamless 3D Interaction in AR – A Vision-Based Approach. In Proceedings of the First International Symposium, ISVC (pp. 321–328). Lake Tahoe, NV, USA: Springer Berlin/ Heidelberg.
  • Bibliography on 3D Interaction and Spatial Input
  • The Inventor of the 3D Window Interface 1998 Archived 2020-07-03 at the Wayback Machine
  • 3DI Group
  • 3D Interaction in Virtual Environments
