Lecture 3 - Sensor Based Interaction Flashcards
Some examples of sensors
- accelerometer
- gyroscope
- magnetometer
- temperature
- humidity
- moisture
- ambient light
- proximity
- barometric pressure
- GNSS location (GPS is part of this)
- heart-rate
- fingerprint
- iris scanner
- radar
- LIDAR
- depth
What is an accelerometer?
Measures a moving object's acceleration; can detect the frequency and intensity of human movement.
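A minimal sketch of how movement intensity might be estimated from raw accelerometer samples: compare the magnitude of each reading against gravity, so a resting device scores near zero. The function name, units, and threshold choice are illustrative assumptions, not from the lecture.

```python
import math

def movement_intensity(samples, g=9.81):
    """Estimate movement intensity from raw accelerometer samples.

    samples: list of (x, y, z) acceleration readings in m/s^2.
    Returns the mean deviation of the acceleration magnitude from
    gravity: near 0 when the device is at rest, larger when moving.
    """
    deviations = []
    for x, y, z in samples:
        magnitude = math.sqrt(x * x + y * y + z * z)
        deviations.append(abs(magnitude - g))
    return sum(deviations) / len(deviations)

# A device lying flat and still reads roughly (0, 0, 9.81):
still = [(0.0, 0.0, 9.81)] * 10
print(movement_intensity(still))  # close to 0
```

Frequency of movement (e.g. step cadence) would additionally need the samples analysed over time, for instance with peak detection or an FFT.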
What is a gyroscope?
A gyroscope is a device used for measuring or maintaining orientation and angular velocity
What is a magnetometer?
used to measure the strength and direction of the magnetic field in the vicinity of the instrument.
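One common use of the magnetometer is as a compass. A minimal sketch, assuming the device is held flat and using an arbitrary axis convention (clockwise from magnetic north); real implementations also tilt-compensate using the accelerometer:

```python
import math

def compass_heading(mx, my):
    """Rough compass heading in degrees, clockwise from magnetic north,
    from the horizontal magnetometer components (device held flat).
    Axis convention here is an assumption for illustration."""
    heading = math.degrees(math.atan2(my, mx))
    return heading % 360.0
```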
What is the IMU?
Inertial Measurement Unit
-> a group of sensors that sense inertial motion
-> includes the accelerometer (acceleration), gyroscope (rotation) and magnetometer (heading)
What sensors can be used to sense surroundings?
- ambient light sensor (how light or dark it is)
- proximity sensor (how close the nearest thing is)
-> we typically use this sensor to tell when the user is trying to interact or about to interact
-> typically an optical sensor of some sort
Sensors for getting surroundings are usually on the
front of the device
What sensors are used to sense position?
- barometric pressure (altitude above sea level)
- GNSS location (satellite systems that can be used to pinpoint location)
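How barometric pressure gives altitude can be sketched with the standard international barometric formula (constants for the standard atmosphere; the sea-level reference pressure must be calibrated in practice):

```python
def altitude_m(pressure_hpa, sea_level_hpa=1013.25):
    """Approximate altitude above sea level (metres) from barometric
    pressure (hPa), using the international barometric formula for
    the standard atmosphere."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

print(altitude_m(1013.25))  # 0.0 at standard sea-level pressure
```

This is why phones often fuse the barometer with GNSS: pressure gives fine-grained relative altitude (e.g. which floor you are on), while GNSS anchors the absolute position.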
What kind of sensors can be used to sense 3D?
- radar (using waves to scan for objects nearby)
- LIDAR (Light Detection and Ranging, a remote sensing method that uses laser light to measure distance, e.g. to examine the surface of the Earth)
- depth
Examples of 3D projects?
- Google Tango (mapping spaces onto the phone via camera)
- Spatial for iPhone (modelling objects via camera)
What are the fundamental components of our devices that we forget are sensors?
- camera
- microphone
- touchscreens
What can the camera be used for?
tracking objects / hands , object recognition
What can the microphone be used for?
- ambient audio
- speech recognition
What can the touchscreen be used for?
(main way to interact with devices in the modern day)
- touch gestures
- pre-touch sensing
- pressure sensing
Do touchscreens have potential to do more than what we are using them for now?
YES
- we can get the exact x,y coordinate of touch
- sensors can detect the full area of touch
- sensors can detect when fingers are above and not actually touching the screen
These abilities mean we can enrich the interaction currently available
What is sensor fusion?
Combining data from different sensors to make a single inference.
Why fuse sensors?
Data becomes less uncertain for what we are trying to infer (potentially, however some things only require one sensor in isolation)
Example of sensor fusion?
- activity tracking from accelerometer and gyroscope (where and how fast)
- gaze detection from front camera and depth sensor
- indoor mapping from gyroscope and depth camera (depth at different orientations)
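The accelerometer + gyroscope example above is often implemented with a complementary filter: the gyroscope is accurate short-term but drifts, the accelerometer-derived tilt angle is noisy but drift-free, so blending the two gives a stable estimate. A minimal single-axis sketch (the blend factor `alpha` is a typical but assumed value):

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update step of a complementary filter estimating tilt
    angle (degrees) by fusing gyroscope and accelerometer data.

    angle: previous angle estimate (degrees)
    gyro_rate: angular velocity from the gyroscope (deg/s),
        accurate short-term but drifts over time
    accel_angle: tilt angle derived from the accelerometer (degrees),
        noisy but drift-free
    dt: time step in seconds
    alpha: weight on the integrated gyro vs. the accelerometer
    """
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle
```

Called once per sensor sample, the estimate tracks fast rotation via the gyro term while the small accelerometer weight continuously corrects accumulated drift.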
Can we use sensors to build new interactions?
Yes, this is called sensor-based interaction.
Examples of some recent sensor-based interactions?
- personal assistants e.g. Alexa
- mid air gesture input
- 3D depth sensor input e.g. Google Tango
- grasp pressure input e.g. Google Active Edge
- touchscreen pressure input e.g. Apple 3D touch
What can sensors be used for in computing?
- context aware systems
- sensor-based interaction techniques
Difference between inferring context and providing interaction from sensors?
Context helps us infer the user's situation and adapt the UI and functionality, possibly adding special interactions.
Sensor-based interaction is where we use a sensor to interact with a device in a certain way, and this works regardless of context (unless combined with context awareness).
Another name for sensor fusion?
Multimodal sensing
Sensor fusion..
increases accuracy of action estimation and gives ability to have more expressive techniques
What is mid air gesture swiping?
hand movements or poses in the air
What are mid-air gestures for?
- quick low effort interaction
- interaction with less precision than a screen (as the input space is larger)
- useful when one can’t reach or touch the screen
- allows extra input states, as we can do more movements
- can be more expressive and dextrous
What sensors are required for mid-air gestures?
- depth sensors (high battery usage)
- cameras (2d sensing)
- proximity sensors (low battery alternative to depth sensors)
- radar sensing (another alternative)
Problems with mid-air gestures?
- do users know the gestures?
- do users know where to perform the gestures?
- how do we know the gestures are correct?
- do users know what to do when they don't work?
- how do we improve our gesture performance?
Mid-air gestures are ambiguous and unfamiliar compared to touch.
What is a gated interaction?
One that has to be explicitly enabled by the user. The benefit is that, when enabled, the system can show instructions; gating also limits accidental input and unwanted resource usage.
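A gated interaction can be sketched as a small state machine: gesture events are ignored until the user explicitly enables sensing, and enabling returns instructions to show. The class and method names are illustrative assumptions, not a real API.

```python
class GatedGestureInput:
    """Minimal sketch of a gated interaction: mid-air gestures are
    ignored until sensing is explicitly enabled by the user, which
    limits accidental input and unwanted sensor/battery usage."""

    def __init__(self):
        self.enabled = False

    def enable(self):
        """User explicitly opens the gate; return instructions to show."""
        self.enabled = True
        return "Gesture sensing on. Swipe left or right in mid-air."

    def disable(self):
        self.enabled = False

    def on_gesture(self, gesture):
        if not self.enabled:
            return None  # accidental input is ignored while gated off
        return f"handled:{gesture}"
```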
With new interaction techniques we need to:
give as much feedback as possible, confirming that the system is responding to user action
-> visualisations are especially helpful
What is pressure sensing (isometric force)?
Using pressure of touch for interaction
How can we use pressure sensing?
- for grip pressure and touch pressure
Why have pressure sensing?
- adds extra interaction states
- can be done one-handed
- doesn’t require looking at the device
- passive to active input (e.g. pressure launches assistant)
How can we measure pressure?
- force sensitive sensors
- strain gauges
- barometric pressure sensors
Are humans good at modulating pressure?
Yes, humans can use 10 levels of pressure (on average)
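Those ten distinguishable levels suggest quantising the continuous sensor reading into discrete interaction states. A minimal sketch, assuming a sensor range of 0 to `max_force` in arbitrary units (both parameters are illustrative assumptions):

```python
def pressure_level(force, max_force=10.0, levels=10):
    """Quantise a continuous pressure reading into discrete levels.

    force: sensor reading, assumed in 0..max_force (arbitrary units)
    Returns 0 for no touch, otherwise a level in 1..levels.
    Ten levels roughly matches how many pressure levels humans can
    reliably produce on average.
    """
    force = max(0.0, min(force, max_force))  # clamp to sensor range
    if force == 0.0:
        return 0  # no touch
    level = int(force / max_force * levels)
    return min(max(level, 1), levels)
```

An interaction technique could then map, say, levels 1-3 to preview, 4-7 to select, and 8-10 to a shortcut action.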