Sunday, April 13, 2008

OLED (Organic Light Emitting Diode)

Currently, LCD (Liquid Crystal Display) is the favoured display technology for computers and TVs. OLED has the potential to replace LCD because it solves many of LCD's problems.

OLEDs are made from plastic, organic layers rather than the crystalline layers of LCDs. This means they are thinner, lighter, and more flexible: you could have an 80-inch high-def TV only a quarter of an inch thick. They can even be mounted on a plastic rather than a glass base, and will be easier to produce in large sizes (movie screens? billboards?).

They are also brighter and do not need to be backlit like LCDs, because they make their own light rather than selectively blocking a backlight. This also gives them a much larger viewing angle than LCD (about 170 degrees). The lack of wasted light production also means they use less power than LCDs and produce truer blacks and whites.
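Here is a toy back-of-the-envelope model of that power difference (the function names and numbers are mine, purely for illustration, not real panel figures): a backlight burns at full power no matter what is on screen, while emissive pixels only pay for the light they actually show.

```python
# Toy model: why emissive displays can beat backlit ones on power.
# All numbers are illustrative, not real panel specifications.

def lcd_power(pixels, backlight_watts_per_pixel=1.0):
    # An LCD backlight runs at full power regardless of image content;
    # dark pixels just block light that was already generated.
    return len(pixels) * backlight_watts_per_pixel

def oled_power(pixels, watts_per_unit_brightness=1.0):
    # An OLED pixel only draws power in proportion to how brightly it emits.
    return sum(p * watts_per_unit_brightness for p in pixels)

# A mostly dark movie scene: brightness values in [0, 1] per pixel.
scene = [0.05] * 900 + [1.0] * 100   # 90% near-black, 10% full white

print(lcd_power(scene))   # 1000.0 -> backlight lit across the whole panel
print(oled_power(scene))  # 145.0  -> only the bright pixels cost power
```

It also hints at why the blacks are truer: an OLED pixel showing black simply emits nothing.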

They also refresh about 1,000 times faster than LCD, so they will reproduce motion better.

A pretty substantial package of benefits! So why hasn't OLED taken over already?

Right now, OLEDs are expensive to manufacture. But competition in display technologies is fierce, and economic forces will no doubt make it attractive to mature this technology quickly. OLEDs could even end up cheaper to produce than LCDs, because crystals need not be grown and placed.

Another problem is that the lifetime of the blue organic components in OLEDs is currently pretty short (around 14,000 hours). Once again, I think economic forces will overcome this relatively soon.

Currently, OLED is used in cellphones, digital cameras, PDAs like the Sony Clie, and a few prototype TVs. But as the video below shows, it will be used in pretty much any display application (TVs, walls, clothes) because developers are really throwing money and research hours at it. They believe it will be the technology to replace LCD:





Sunday, April 6, 2008

E-Paper

Reading long tracts of text from a computer screen hurts your eyes and just isn't as nice a feeling as reclining with a book in your hand.

But now you can get E-paper. Where normal screens constantly emit light to produce an image, E-paper is totally different: it manipulates physical pigments instead. The overall effect is that it looks a lot like print on paper, using incidental light just like a book rather than generating its own.

Basically, it has a whole layer of pixels like a normal screen, but each pixel is a tiny ball of pigment. Loads of little electrical charges flip each pigment pixel around (turning it 'on' or 'off') to create images. Currently they only have good black-and-white displays, but surely they will figure out colour sooner or later.

This video shows the basic principle for those interested:



And this video shows how thin it is. Rollability could be a major plus when used in mobile devices:



You will notice that when the image changes, it happens quite slowly: you are watching the pigments being flipped on or off. The refresh speed should continue to improve as new devices are released.

So it seems that E-paper is really suited to reading, but not to rich motion graphic applications like video, which may look strange on paper.

E-paper is also very energy efficient because it only needs power to change the display, not to maintain it.
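To make that concrete, here is a minimal sketch of a bistable display, assuming a simple black-and-white pixel grid (the class and energy figures are invented for illustration, not a real driver API): pixels hold their state for free, and the only cost is flipping the ones that change.

```python
# Minimal sketch of a bistable (e-paper style) display: pixels hold their
# state with no power, and energy is spent only when a pigment is flipped.
# The class and cost figures are illustrative, not a real driver API.

class EPaperDisplay:
    FLIP_COST = 1.0  # arbitrary energy units per pigment flip

    def __init__(self, width, height):
        self.pixels = [[0] * width for _ in range(height)]  # 0=white, 1=black
        self.energy_used = 0.0

    def show(self, image):
        # Flip only the pixels that differ from what is already displayed.
        for y, row in enumerate(image):
            for x, value in enumerate(row):
                if self.pixels[y][x] != value:
                    self.pixels[y][x] = value
                    self.energy_used += self.FLIP_COST

display = EPaperDisplay(4, 2)
page = [[1, 0, 1, 0],
        [0, 1, 0, 1]]
display.show(page)   # costs 4 flips
display.show(page)   # same page again: zero cost, the image just sits there
print(display.energy_used)  # 4.0
```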

So what are some applications for E-Paper? The first commercial product to use E-Paper technology was Sony's Librie Reader, and now you can use the Amazon Kindle or Sony Reader for reading e-books, newspapers, and magazines. The Kindle lets you download e-books and periodicals directly from Amazon via its inbuilt mobile internet connection.

For all its convenience, whoever designed the Kindle's look needs to be shot; it is incredibly clunky and dated for a device released in 2008. I'd like to see whether Apple enters this field with a sleek 'iReader.' Apple continues to show that sexy industrial design is a major asset and shouldn't be skimped on.

Amazon Kindle Video:



E-paper is also finding its way into cellphones and mobile devices in general. You can currently browse the web on what looks like paper. The iRex iLiad Reader is Linux-based, so all sorts of applications are being developed for it.

I think that people will always feel nostalgia for books, but book lovers need not fear: if anything, books as objects will become even more valuable if E-Paper takes over. Personally, I think the information contained in a book is more important than its shell. Worldwide annual paper consumption is roughly 300 million tons; surely E-paper has the potential to at least offset some of this! It should be particularly attractive at the moment given the carbon-offsetting trend.

In summary:
  • I think that E-Paper is a stayer because it deals with a real problem - reading long tracts of text comfortably from an electronic device.
  • I doubt whether it will become a mainstream entertainment display technology because people are hungry for rich motion imagery such as video, which other technologies render more faithfully.
  • It is exciting that books will now benefit from the speed, scope, and reach afforded by electronic distribution. The dissemination of information is more important than nostalgia.
  • E-paper's efficiency in terms of power and space is impressive.
  • E-paper, along with OLED (a post on this to come), represents an exciting shift towards Organic Electronics: http://en.wikipedia.org/wiki/Organic_electronics

Thursday, April 3, 2008

Surface Computing

I know Microsoft isn't everyone's favourite company, but let's face it, they have developed Surface Computing further than anyone else at present. I just hope the functionality these videos show filters down to the mid-range consumer and the platform isn't full of bugs due to rushed releases.

The following videos show the rich possibilities when Multi-Touch interaction and Object Recognition / Device Discovery are combined:





I can't tell if the credit card payments actually get processed on the table, or if it simply works out the bill for you to pay traditionally.

I will also be interested to see how the ergonomic implications of Surface Computing determine how it will be applied. For example, bending over a table is fine if you are ordering a meal but wouldn't be suitable for extended periods at a workstation.

The official Microsoft Surface site:
http://www.microsoft.com/surface

And the Microsoft Surface blog:
http://blogs.msdn.com/surface

Device Discovery and Object Recognition

These days it is common to be able to plug your new digital camera into your computer, which recognises it and automatically installs the software necessary to run it. Pretty handy.

This is referred to as 'Plug and Play' or 'Device Discovery.'

Short-range wireless networking protocols like Bluetooth also allow devices to discover each other when they are in range. Your GPS can talk to your phone or your phone can talk to one of those Ubergeek headsets.

But how do devices decide which other devices they can 'speak' to? How do they establish a shared language? UPnP (Universal Plug and Play) is a computer-industry initiative that aims to make this connection language universal. It would be great if all our devices could play together, but for economic reasons, universality doesn't always catch on. For example, HD DVD and Blu-ray are currently jostling to become the dominant high-definition storage standard.
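For a concrete taste, UPnP's discovery step runs over SSDP, which is just HTTP-style messages multicast over UDP. Here is a rough Python sketch of that step (the protocol constants are standard SSDP; everything else is simplified, and a real stack would go on to fetch each responding device's XML description):

```python
# Rough sketch of UPnP device discovery (SSDP): broadcast an M-SEARCH
# request over UDP multicast and print whoever answers.
import socket

SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900
MSEARCH = "\r\n".join([
    "M-SEARCH * HTTP/1.1",
    f"HOST: {SSDP_ADDR}:{SSDP_PORT}",
    'MAN: "ssdp:discover"',
    "MX: 2",         # devices may wait up to 2 s before replying
    "ST: ssdp:all",  # search target: any UPnP device or service
    "", ""
]).encode()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(MSEARCH, (SSDP_ADDR, SSDP_PORT))

try:
    while True:
        data, addr = sock.recvfrom(4096)
        # Each response is an HTTP-style header block naming the device.
        print(addr[0], data.decode(errors="replace").splitlines()[0])
except socket.timeout:
    pass
```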

Bluetooth is more businesslike: it charges companies for membership, and currently has Ericsson, IBM, Intel, Microsoft, Motorola, Nokia, and Toshiba as Promoter Members. These members strongly influence the strategic and technological direction of Bluetooth as a whole, so you can see how they would greatly shape the language devices use to speak to each other, obviously in ways most profitable to them.

Another consideration is that as the electronic devices in our lives become more thoroughly interlinked, the task of securing the information they contain or process becomes proportionally more complex. For example, when someone is using one of those Ubernerd Bluetooth headsets, it is possible to hack into it and eavesdrop, or even to inject your own audio for them to hear (example at: http://www.youtube.com/watch?v=1c-jzYAH2gw).

Wait until you see what 'Surface Computing' can do for Device Discovery. Now things like credit cards can be recognised when you place them on a surface, and you won't need anything special in your credit card either. I cover this in the coming post.

Multi-Touch applications

Okay, I think this one is a stayer. And it has been around since the 80s! Having two points of contact literally adds a new dimension to control interfaces: not only can you control two points, you also control the length and orientation of the line between them.

And we have two hands, so it is immediately more intuitive.
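To see why, here is a little sketch of the arithmetic behind the now-familiar pinch and twist gestures (the function names are mine): from two touch points before and after a movement, you can read off a scale factor and a rotation directly.

```python
# Sketch of the two-finger idea: the segment between the touch points
# carries both a length (pinch -> scale) and an orientation (twist ->
# rotate), on top of the midpoint (drag -> pan).
import math

def gesture(p1_old, p2_old, p1_new, p2_new):
    def length(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])

    scale = length(p1_new, p2_new) / length(p1_old, p2_old)
    rotation = angle(p1_new, p2_new) - angle(p1_old, p2_old)
    return scale, rotation

# Fingers move apart and twist: the photo should grow and turn.
scale, rot = gesture((0, 0), (2, 0), (0, 0), (0, 4))
print(scale)              # 2.0  -> zoom in 2x
print(math.degrees(rot))  # 90.0 -> rotate a quarter turn
```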

It seems the iPhone currently presents the most visible Multi-Touch application (particularly the photo viewer). If you are not familiar with it, here is a demo:



But the potential of Multi-Touch goes beyond simply resizing photos. Creative control software will be needed to fully exploit the possibilities. Here is a video from the TED conference showcasing a whole lot of experimental uses of Multi-Touch:



What if the surface recognises more than two objects? Then we are getting into some crazy possibilities like multiple-user interaction, collaboration, and the recognition of specific objects. The iBar (below) will be competing with the as-yet-unreleased Microsoft Surface.



We are moving beyond the paradigm of the mouse, and the dimensionality of our interaction is becoming truly 3-D, like our natural physical environments.
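Under the hood, supporting many simultaneous contacts is mostly bookkeeping. Here is a hypothetical sketch of the kind of state such a surface might keep (the names are mine, not any real API): each contact, whether a finger or a recognised object, gets a stable id for its lifetime on the surface.

```python
# Sketch of tracking an arbitrary number of simultaneous contacts, the way
# multi-touch systems generally expose them: each touch gets a stable id
# at 'down', is updated on 'move', and disappears at 'up'. Names are mine.
class ContactTracker:
    def __init__(self):
        self.contacts = {}  # id -> (x, y), one entry per finger or object

    def down(self, touch_id, x, y):
        self.contacts[touch_id] = (x, y)

    def move(self, touch_id, x, y):
        if touch_id in self.contacts:
            self.contacts[touch_id] = (x, y)

    def up(self, touch_id):
        self.contacts.pop(touch_id, None)

table = ContactTracker()
table.down(1, 10, 10)    # first user's finger
table.down(2, 300, 40)   # second user's finger, at the same time
table.down(3, 150, 220)  # a recognised object set on the surface
table.up(2)
print(table.contacts)    # {1: (10, 10), 3: (150, 220)}
```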

In this post I have tried to focus on the possibilities of Multi-Touch, but Multi-Touch has considerable synergy with another phenomenon called 'Surface Computing,' which I will cover in another post. But first, I will cover 'Device Discovery,' which also has implications for Surface Computing.

Seadragon and Web 2.0 rant

Yes, nerdy name, I know.

Summary: Seadragon attempts to 'do away with the limitations of screen real estate.' The basic idea is to serve so much visual data at such high resolution that the z-axis (zoom) becomes as controllable as the x and y axes.
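For a feel of how that can work, here is a sketch of the multi-resolution tile pyramid behind Deep-Zoom-style viewers (simplified; the tile size and exact layout here are assumptions): the image is stored at every power-of-two scale and cut into fixed-size tiles, so any given zoom only ever needs the handful of tiles actually on screen.

```python
# Sketch of a multi-resolution pyramid for smooth zooming: each level
# halves the image, and every level is cut into fixed-size tiles.
import math

def pyramid(width, height, tile=256):
    levels = math.ceil(math.log2(max(width, height)))
    for level in range(levels, -1, -1):
        # Level `levels` is full size; each step down halves the image.
        scale = 2 ** (levels - level)
        w = math.ceil(width / scale)
        h = math.ceil(height / scale)
        cols, rows = math.ceil(w / tile), math.ceil(h / tile)
        yield level, (w, h), cols * rows

# A 40,000 x 30,000 pixel scan: thousands of tiles at the top level,
# a single tile at the bottom.
for level, size, tiles in pyramid(40000, 30000):
    print(level, size, tiles)
```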

Another aspect they're developing is the hyperlinking of images. Their 'Photosynth' application can computationally build models of environments like Notre Dame Cathedral by registering multiple images together.



Obviously the biggest hurdle to this type of approach is internet speed. That's a ridiculous amount of data to get through the pipeline by today's standards.

Photosynth's 'networking of images' is basically a Web 2.0 strategy.

A reservation I have with Web 2.0 is that most users really don't have anything useful to contribute. Now, with more voices gaining a stage, the fifteen minutes of fame Andy Warhol prophesied has been whittled down to fifteen seconds, and the web is filling up with 'noise.'

I'm all for bringing back specialist editing and curation to content. But Photosynth and other Web 2.0 applications are a great way to build quick, cheap and nasty models by connecting bits of information. We just need to remember that these models really are rough, and shouldn't let them replace meatier, more dedicated models, or even the real thing.

In Anthropology, it was realised very early how hard it is to record and present information fairly and accurately, because one inevitably imposes one's own perspective on the object of study. One of the practices attempting to work around this was 'Interpretive Anthropology,' whose goal was to have a large number of observers covering the subject. It was thought that the inclusion of so many voices would average out any extreme or skewed views. This is rather like what Web 2.0 is doing.
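As a toy demonstration of that averaging idea (all numbers invented): individual observers can be wildly skewed, but the mean over many of them lands close to the truth.

```python
# Toy demonstration of the 'many voices average out' idea: observers are
# noisy and some are badly skewed, but the mean of many observations sits
# much closer to the underlying truth. Illustrative only.
import random

random.seed(1)
truth = 10.0

def observer():
    # Most observers are roughly right; a few hold extreme views.
    if random.random() < 0.1:
        return truth + random.uniform(-8, 8)   # skewed outlier
    return truth + random.gauss(0, 1)          # honest but noisy

one_voice = observer()
many_voices = sum(observer() for _ in range(1000)) / 1000

print(abs(one_voice - truth))    # can be way off
print(abs(many_voices - truth))  # typically a small fraction of that
```

Of course, this only works if the skew is random; if every observer shares the same bias, no amount of averaging will remove it.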

I suppose this is okay as long as the voices have really studied the subject. Put it this way: if you needed to know the names of the saints in Notre Dame, you might not trust the Photosynth model, because all the tags come from lay users.