Showing posts with label display. Show all posts
Could future devices read images from our brains?
As an expert on cutting-edge digital displays, Mary Lou Jepsen studies how to show our most creative ideas on screens. And as a brain surgery patient herself, she is driven to know more about the neural activity that underlies invention, creativity, thought. She meshes these two passions in a rather mind-blowing talk on two cutting-edge brain studies that might point to a new frontier in understanding how (and what) we think.
Enabling speed reading
The reading game is about to change forever. With Spritz, which is coming to the Samsung Galaxy S5 and Samsung Gear 2 watch, words appear one at a time in rapid succession. This allows you to read at speeds of between 250 and 1,000 words per minute. The typical college-level reader reads at a pace of between 200 and 400 words per minute.
Other apps have offered similar types of rapid serial visual presentation to enhance reading speed and convenience on mobile devices in the past. What Spritz does differently is manipulate the format of the words to line them up with the eye’s natural motion of reading. The “Optimal Recognition Point” (ORP) is slightly left of the center of each word, and is the precise point at which our brain deciphers each jumble of letters. The unique aspect of Spritz is that it identifies the ORP of each word, makes that letter red and presents all of the ORPs in the same position on the screen. In this way, our eyes don’t move at all as we see the words, and we can process information almost instantaneously rather than spend time decoding each word.
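The alignment trick can be sketched in a few lines of Python. The length-based ORP rule below is an approximation borrowed from open-source Spritz-style readers, not Spritz's published algorithm:

```python
def orp_index(word: str) -> int:
    # Approximate "slightly left of center": one-letter words use their
    # only letter; the ORP then creeps rightward as words get longer.
    n = len(word)
    if n <= 1:
        return 0
    if n <= 5:
        return 1
    if n <= 9:
        return 2
    return 3

def align(word: str, column: int = 4) -> str:
    # Pad with spaces so the (red) ORP letter always lands in the same
    # screen column, keeping the eye still between words.
    return " " * (column - orp_index(word)) + word

for w in ("a", "word", "reading", "instantaneously"):
    print(align(w))
```

Printing the padded words in a monospaced font shows every ORP letter stacked in the same column, which is exactly what keeps the eye from moving.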
This is 250 words per minute. Harry Potter and the Philosopher's Stone is 76,944 words long. At this rate you could read HP1 in just over 5 hours.

350 words per minute doesn't seem that much faster. 3 hours and 40 minutes to finish Potter.
Science quote from their web page:
When reading, only around 20% of your time is spent processing content. The remaining 80% is spent physically moving your eyes from word to word and scanning for the next [Optimal Recognition Point].

Now it's getting harder to follow. Probably takes time to get used to, but still, I can't imagine being able to concentrate on this for too long. If you could keep up with this for two and a half hours, you could read Harry Potter 1 from cover to cover.
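The reading-time estimates above are easy to check. A quick sketch, using the 76,944-word count quoted earlier:

```python
# Back-of-the-envelope reading times for Harry Potter and the
# Philosopher's Stone at the Spritz speeds discussed above.
WORD_COUNT = 76_944

def reading_time(wpm: int) -> str:
    """Total reading time at a given words-per-minute pace, as 'Xh YYm'."""
    minutes = WORD_COUNT / wpm
    h, m = divmod(round(minutes), 60)
    return f"{h}h {m:02d}m"

for wpm in (250, 350, 500, 1000):
    print(f"{wpm} wpm: {reading_time(wpm)}")
```

At 500 words per minute the book takes about two and a half hours, which matches the cover-to-cover figure mentioned above.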

Boston-based Spritz, which says it has been in "Stealth Mode" for nearly three years, is working on licensing its technology to software developers, ebook makers and even wearable makers.
Here's a little bit more about how it works: In every word you read, there is an "Optimal Recognition Point” or ORP. This is also called a "fixation point." The "fixation point" in every word is generally immediately to the left of the middle of a word, explains Kevin Larson, of Microsoft's Advanced Reading Technologies team. As you read, your eyes hop from fixation point to fixation point, often skipping very short words entirely.
"After your eyes find the ORP, your brain starts to process the meaning of the word that you’re viewing," Spritz explains on its website. Spritz indicates the ORP by making it red, and positions each word so that the ORP is at the same point, so your eyes don't have to move. That's what makes it different from RSVP speed reading, which just shows you words in rapid succession with no regard to the ORP. Here's a graphic that shows how Spritz keeps your eyes still while reading:

via HuffingtonPost
A New Car UI
How touch screen controls in cars should work
The problem: Several automotive companies have begun replacing traditional controls in their cars with touch screens. Unfortunately, their eagerness to set new trends in hardware is not matched by their ambition to create innovative software experiences for these new input mechanisms. Instead of embracing the new constraints and opportunities, they merely replicate old button layouts and shapes on these new, flat, glowing surfaces. So even controls for air conditioning and infotainment - which are commonly used while driving - now lack any tactile feedback and demand the driver's dexterity and attention when operated. Considering that distracted driving is the number one cause of car accidents, this is not a step in the right direction.

The solution: A new mode that can be invoked at any time. It clears the entire screen of those tiny, intangible control elements and makes way for big, forgiving gestures that can be performed anywhere. In place of the lost tactile feedback, the interface leverages the driver's muscle memory to ensure they can control crucial features without taking their eyes off the road.

via Matthaeus Krenn
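The gesture mode lends itself to a very small dispatch loop: the number of fingers on the screen selects a control, and a vertical drag anywhere adjusts it. A hypothetical sketch, where the specific finger-to-function mapping is an illustration rather than Matthaeus Krenn's actual assignment:

```python
# Hypothetical finger-count -> control mapping (illustrative only).
CONTROLS = {1: "volume", 2: "fan speed", 3: "temperature", 4: "source"}

def handle_gesture(finger_count: int, drag_delta: float, state: dict) -> dict:
    """Adjust the control selected by finger count by the vertical drag
    distance; gestures with an unmapped finger count are ignored."""
    control = CONTROLS.get(finger_count)
    if control is None:
        return state
    new_state = dict(state)
    new_state[control] = state.get(control, 0.0) + drag_delta
    return new_state

# Two fingers dragged a quarter of the screen upward raises fan speed.
print(handle_gesture(2, 0.25, {"fan speed": 0.5}))
```

Because the drag can land anywhere on the screen, the driver only needs to remember the finger count, not a button position - which is where muscle memory replaces tactile feedback.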
Feel textures on a screen
Fujitsu have been demonstrating a prototype tablet featuring haptic technology that lets the user feel the texture of the on-screen image.
Fujitsu's prototype uses ultrasonic inducers to vibrate the screen at different frequencies, creating a cushion of high pressure above the screen that can be varied based on your fingertip's position. Match that with an on-screen image and you have something pretty magical - different 'surfaces' in images feel different to the touch.
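The control loop this implies is simple to sketch: sample a per-pixel "texture map" under the fingertip and use it to set the ultrasonic drive level. The names and the linear mapping below are assumptions for illustration, not Fujitsu's actual firmware:

```python
def drive_level(texture_map, x, y, max_amplitude=1.0):
    """Map the texture value under the fingertip (0 = smooth, 1 = rough)
    to an ultrasonic amplitude. A stronger vibration thickens the
    high-pressure air cushion, which feels slippery and smooth, so rough
    textures get a LOWER drive level."""
    roughness = texture_map[y][x]              # value in 0.0 .. 1.0
    return max_amplitude * (1.0 - roughness)

# Toy 2x2 texture map: top-right pixel fully rough, top-left fully smooth.
texture = [[0.0, 1.0],
           [0.5, 0.2]]
print(drive_level(texture, 1, 0))   # rough pixel -> ultrasonics off
print(drive_level(texture, 0, 0))   # smooth pixel -> full drive
```

Updating this per touch-position sample is what makes different regions of the same image feel different.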

"This technology enables tactile sensations — either smooth or rough, which had until now been difficult to achieve — right on the touch-screen display," Fujitsu said in a statement. "Users can enjoy realistic tactile sensations as they are applied to images of objects displayed on the screen."
The technology currently works only with a single point of contact - the whole screen reacts to that point, so feedback can't be localised more precisely - but it's very early days and will undoubtedly evolve. Down the line, a version that could replicate a two-thumb control pad on screen would transform mobile gaming.
Fujitsu's track record of bringing its technology experiments to market, either in its own products or licensed to other vendors, is excellent - about 90% reach manufacture. It hopes this one will hit retail in 2015.
Telehuman

Researchers at Queen’s University have created a life-size pod that allows people to video conference each other in 3D, as if they were standing in front of one another. Think of it as 3D Skyping.
The pod, dubbed “TeleHuman,” works by having users stand in front of an acrylic, cylindrical pod. Video cameras capture images from all angles and a computer converts them into a life-size representation of the caller, which is then displayed to the receiver — and vice versa.
Read more here at Discovery.com
Introducing the Leap
Leap Motion has created a new device called the Leap. Once connected to your computer via USB, it creates an eight-cubic-foot virtual workspace in front of your screen. The company claims it is 200 times more accurate than existing technology and will take gesture controls to the next level.
"Leap can distinguish your individual fingers and track your movements down to a 1/100th of a millimetre"
"For the first time, you can control a computer in three dimensions with your natural hand and finger movements"
The Leap is $70, and a select few can pre-order them now, with the full roll-out coming this winter.
Bionic eyes to be tested next year
Bionic Vision Australia plans to begin testing its bionic eye next year. Using 98 separate electrodes, the implanted chip will help those with genetic eye conditions see large objects such as buildings and elephants (if you happen to live amongst elephants).
The system pairs a camera built into a pair of glasses with an external processing device. That information is then sent to the aforementioned implant and finally reaches the vision processing center of the brain.
The company doesn't plan to stop there: it is also working on a "high-acuity device" that would help those same folks recognise smaller things like facial features. Presuming it works, it will join the ranks of other bionic implants.
Via The Verge
Behind the Screen Overlay Interactions
Behind the Screen Overlay Interactions: Behind-the-screen interaction with a transparent OLED with view-dependent, depth-corrected gaze.
A project by Jinha Lee and Cati Boulanger, a former intern and a researcher at Microsoft Applied Sciences, could change how we interact with our desktops. They use a special transparent OLED screen from Samsung and a series of sensors, along with custom software that moves the keyboard behind the screen, so you can work with your hands inside the virtual desktop.
A Day Made of Glass 2: Same Day. Expanded Corning Vision.
"A Day Made of Glass 2," Corning's expanded vision for the future of glass technologies. This video continues the story of how highly engineered glass, with companion technologies, will help shape our world.
Samsung's Smart Window
Soon, we shall be living in the world of Minority Report, and this "Smart Window" technology Samsung has at CES 2012 is going to help us get there.
Flexible tablet of the future
Samsung have released a video showing a conceptual tablet with a flexible AMOLED screen. This impressive technology has been in development for four years now, and if the results are anywhere near as good as this video, it will make the current iPad & co look like the tools of a caveman.
via Mashable
'True 3D' Display Using Laser Plasma Technology
Researchers at Burton Inc. (not the snowboard company) have demonstrated their True 3D display technology and the results are outstanding. Instead of using a screen, the lasers produce points of light from the plasma excitation of oxygen and nitrogen molecules in the air or water.
The lasers can create 50,000 dots per second in mid-air at 15FPS. They are currently working on bringing that up to 24-30FPS. Colours can be created by combining red, green and blue lasers - the same additive color model your TV and computer monitor use. Hopefully we'll be able to watch 3D movies in actual 3D, not this fake shove-irritating-glasses-on-your-face 3D.
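The quoted figures imply a fixed per-frame dot budget: a constant 50,000 dots per second spread across the frame rate, so raising FPS lowers the number of dots available per frame.

```python
# Per-frame dot budget implied by the figures above.
DOTS_PER_SECOND = 50_000

def dots_per_frame(fps: int) -> int:
    return DOTS_PER_SECOND // fps

print(dots_per_frame(15))  # current prototype
print(dots_per_frame(24))  # lower end of the target range
print(dots_per_frame(30))  # doubling the frame rate halves the budget
```

That trade-off is why pushing from 15FPS to 30FPS is hard: the display either needs faster lasers or must render each frame with half as many points.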
via Gizmodo
Kinect Effect
See the future possibilities of Kinect that go beyond the expected, into truly amazing things that people around the world are beginning to imagine.
HoloDesk
HoloDesk is a novel interactive system combining an optical see-through display and a Kinect camera to create the illusion that users are directly interacting with 3D graphics. A virtual image of a 3D scene is rendered through a half-silvered mirror and spatially aligned with the real world for the viewer.
Users easily reach into an interaction volume displaying the virtual image. This allows the user to literally get their hands into the virtual display. A novel real-time algorithm for representing hands and other physical objects, which are sensed by the Kinect inside this volume, allows physically realistic interaction between real and virtual 3D objects.
Toyota Window to the world - multimedia system
Imagine when a journey from A to B is no longer routine as your car in the near-future encourages a sense of play, exploration and learning. This is the image engineers and designers from Toyota Motor Europe (TME) and the Copenhagen Institute of Interaction Design (CIID) had of Toyota's "Window to the World" vehicle concept.
NB: The video used to promote this vehicle concept is a simulation filmed in static, controlled environments. All health and safety requirements were met for the described conditions. Toyota will never promote unsafe behaviors, and will always encourage passengers to fasten their seatbelts.
What should spaceships look like?
As the next generation of spaceships is being conceived, should shuttle designers take their inspiration from sci-fi illustrators?
From Star Wars back to 2001: A Space Odyssey and even further back to comic hero Dan Dare and Victorian illustrations for the stories of Jules Verne and HG Wells, the way spaceships should look has been an important issue - before the first rocket booster ever fired.
But the fanciful reputation of sci-fi novels and films aside, the illustration of spacecraft might actually have a realistic place in the design of future vessels.
The line has often been blurred between the realm of the sci-fi artist and the real spacecraft designers.
via BBC
Xbox Kinect in the hospital operating room
A team at Sunnybrook has come up with a novel medical use for the Xbox Kinect.
via: http://sunnyview.sunnybrook.ca/
Smart materials coming to 100% Design
Chris Lefteri from 100% Materials meets Bibi Nelson from Bare Conductive to find out all about their smart paints - skin-safe, conductive and interactive. Bibi also demonstrates paint that detects proximity. Weird science at its best...
NASA's Next-Gen Spacesuit Could Have an In-Helmet Display
Though NASA holds the keys to some of the most sophisticated technologies ever to make it into low Earth orbit, the spacesuits that astronauts wear up there are still in many ways similar to those worn during the Apollo missions of the 1960s and 1970s. Fortunately for future astronauts, they may get a next-gen visual upgrade via a piece of technology that is coming down from the mountaintop at this year’s Desert Research and Technology Studies (RATS).
Vancouver-based Recon Instruments, maker of GPS-enabled ski goggles with in-goggle displays tucked into the wearer's peripheral vision, is sending its technology to NASA for potential inclusion in the next generation of spacesuit helmets, in which mission-critical information and checklists could appear right before astronauts' eyes. NASA's spacesuit designers have been toying with the idea of an in-helmet display for a while now, and considering that spacewalking astronauts currently rely on paper checklists taped to their arms, such a display represents a pretty big technological leap forward.
via PopSci
Brazilian Cops Get Augmented Eyeglasses That Can Pick Guilty Faces Out of a Crowd

Police officers in Brazil will soon be adding a layer of cyborg tech to their law enforcement toolbox via glasses rigged with facial recognition tech. The glasses, dubbed “RoboCop” glasses, scan faces in a crowd and check them against a criminal database, and officers in Rio de Janeiro and São Paulo have already been through demos with the technology.
At distances of up to 50 yards, the glasses can reportedly scan 400 faces per second, comparing 46,000 biometric points on a person’s face against a database of terrorists and other criminals. If a match is made, a red light appears within the glasses' frame, allowing police to zero in on people with problematic pasts (or currently questionable legal status) without putting police and citizens through the tedium of random ID checks.
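Taken at face value, those claims imply a striking matching throughput. A quick sanity check of the arithmetic:

```python
# Throughput implied by the reported specs: 400 faces scanned per second,
# each compared on 46,000 biometric points (per database entry checked).
FACES_PER_SECOND = 400
POINTS_PER_FACE = 46_000

point_checks_per_second = FACES_PER_SECOND * POINTS_PER_FACE
print(f"{point_checks_per_second:,} point comparisons per second")
```

That is over 18 million point comparisons per second before the database size is even factored in, which gives a sense of why the matching would have to run on hardware well beyond the glasses themselves.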
via AOL News