
Tuesday, December 10, 2013

Life of technology

Image from: Wikipedia Commons
We take it for granted that computers run on electricity and run quietly (ignoring the fan). There was a time, however, when “computers” were more like car engines, full of gears and camshafts.

The most famous mechanical computer is perhaps the difference engine designed by Charles Babbage in the 1800s; he is now regarded as the father of the computer. The machine was legendary, commissioned by the British government to produce astronomical and mathematical tables. The most capable mechanical computers were perhaps the ones used to calculate bomb trajectories in World War II, after which computers became electronic.

It is hard but interesting to imagine what the world would be like if we were still using mechanical computers. Personal computing might still be possible, but each time you wanted to calculate something, you might need to hook the machine up to your car engine as a source of mechanical power. Silicon Valley would not be in California but near some coal mines; in fact, it would not be called Silicon Valley at all, but perhaps Steel Valley. The Internet could also be conceived, though in the form of railways.

Although an information era built on mechanics does not sound realistic, it is still a pity to see the mechanical computer pass away, however hard Charles Babbage worked to bring it to life. And it is not only mechanical computers: vacuum tubes, for instance, which were the foundation of the first generation of electronic computers, can hardly be seen nowadays either. Even an electrical engineer may not know what a vacuum tube is!

I remember a lecture on magnetic devices about five years ago. The professor spoke wistfully about what he had seen in the recent history of magnetic technology. Computers once relied on small magnets (though huge from the viewpoint of nanotechnology) and magnetic tape for both information storage and processing. Yet now, magnets are used only in hard disks, while all other components are electronic, including the main memory. In the 1980s and 1990s, a number of products similar to the CD and DVD were developed in Japan. Instead of relying on optical properties, these products used a layer of magnetic material to store information and were read magnetically. However, these magnetic disks never reached a wider market; they were sold only in Japan, and their existence was hardly known to people outside the country. Now they seem to have disappeared entirely, losing the whole market to the CD, DVD and Blu-ray. Magnetic technology pioneered many applications, but then lost the competition to later players.

This is like wild nature: animals struggle to survive, and if a species cannot evolve fast enough to keep up with a changing environment, it may go extinct. Closely tied to the technologies are the companies. We no longer have Sun Microsystems, once a software giant. And can anyone recall Palm? It was one of the first makers of personal digital assistants (PDAs) in the world. Recently, BlackBerry has found itself in the same situation Palm was in around 2009.

Image from: dualdflipflop
Perhaps the most ironic tragedy is Kodak's. Kodak invented digital photography in 1975, but was killed by that very technology some thirty years later. The company once held almost 90% of the photographic film market in the US. People at Kodak were reluctant to develop digital technology for fear that it would hurt the film business. The shift happened anyway, but Kodak had lost its leading position; the market gradually moved away from highly profitable photographic film, and that ended Kodak's era. The technology of photographic film may become history as well.

If a technology can be thought of as a life, it is a life worthy of our respect. Although some technologies have faded away, each had an era that belonged to it. One can trace a line of technological history marked with symbols of human talent and effort, like the Pyramids and the Great Wall.

The singularity

Image from: Wikipedia Commons
I am thrilled to learn how well developed this idea has become.

The singularity, or more specifically the technological singularity, refers to a future point at which artificial intelligence so far surpasses our own that we cannot even guess what the world will be like. It is often feared as a disaster and the end of the human era.

Over the past century and the beginning of this one, the technology world has been developing exponentially. This is a big deal. Suppose you have 2 dollars this year and 4 dollars next year: if the growth is linear, 30 years later you will have about 60 dollars; if it is exponential, you will have over a billion.
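
To make the gap concrete, here is a tiny calculation using the same made-up dollar figures (nothing more than the example above):

```python
# Toy comparison of linear vs. exponential growth, using the example above:
# start with $2, then either add $2 every year or double every year.
start, years = 2, 30
linear = start + 2 * years          # 62 dollars after 30 years
exponential = start * 2 ** years    # 2,147,483,648 dollars -- over two billion
print(linear, exponential)
```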

The most famous example may be Moore's law, which describes the capability of computer chips doubling roughly every 24 months. Along similar trends, global telecommunication capacity doubles every 24 months, the information throughput of computers doubles every 14 months, and the world's information storage capacity doubles about every 40 months. This fuels our economy, and at the same time makes a super-powered, man-made god or devil possible.

According to the singularity hypothesis, the exponential growth could lead to a technology explosion, with information technology, nanotechnology and genomics at the frontier. I. J. Good described an “intelligence explosion” as early as 1965. He expected the singularity to arrive when machines can improve themselves in ways unforeseen by humans; beyond that point, the self-improvement happens recursively and soon surpasses the human brain. This could happen suddenly.

It seems the world is heading toward this singularity fast. Europe has a huge project (the Human Brain Project) bringing together major universities and research institutes across the continent to study the human brain and apply the knowledge to intelligent systems; they want to build highly developed neuromorphic chips and robots. The first generation of neuromorphic chips is actually already on the horizon, coming out of the SyNAPSE project in the US, and neuromorphic chips may hit the market soon. On October 8th this year, Matt Grob, CTO of Qualcomm, announced the company's development of standardized neuromorphic chips, the Zeroth processors, at an MIT Technology Review event.

At the Singularity Summit 2012, there was a survey on when the singularity would happen. The median answer was 2040, less than 30 years from now. That seems quite optimistic about our ability to develop the technology, but pessimistic about our ability to control it.

Technology is actually progressing more slowly than it appears. The concept of artificial intelligence has been around for decades, but only recently has it reached daily life, and even now Siri is of limited use and the GPS occasionally directs you the wrong way down a one-way street. Even in cutting-edge nanotechnology, quantum physics has been known for almost a century, yet its use in the latest research devices is still shallow; no one has yet reached the heart of quantum physics in applications. Turning what is known into what is applicable is still hard work.

Perhaps some social problems are more urgent than the singularity. Before the singularity can possibly happen, unemployment will inevitably rise as jobs once done by people are taken over by machines. This may slow technological development and cause other problems as well. That seems a more real worry than whether humans will be enslaved by robots.

Nevertheless, the notion behind the Singularity Summit and Singularity University is quite encouraging: identify and promote the ideas that will make a difference to the human future.

Monday, December 9, 2013

The man-made brain

Image from: Rick Bolin
It seems to me that the most exciting thing is to know what is possible in the near future. I have been interested in artificial intelligence for years, but I was still surprised to learn how close we now are to an artificial brain.

IBM and HRL Laboratories aim to build a neuromorphic microprocessor as capable as a cat's brain by 2016, as the outcome of the SyNAPSE project started in 2011.

This may seem trivial given that IBM supercomputers equipped with intelligent software have already beaten humans at chess (Deep Blue) and at the quiz show Jeopardy! (Watson), and Watson is now being promoted for use in medical care as well. Nevertheless, the breakthrough of SyNAPSE is to deliver an intelligent chip instead of a supercomputer. Artificial neural networks have long been widely used, but high performance has come at a huge cost: IBM Watson is as large as several rooms, and the energy in your three daily meals could not keep it working for even an hour. Realizing a neural network at the chip level makes a big difference here.

The basic idea is to use a fundamentally different architecture when building the microprocessor. Traditionally, all computers follow the von Neumann architecture, in which the system has separate units for memory, processing, control and input-output, and information is processed sequentially. In a neuromorphic chip, however, memory, processing and control are intermixed, and a high level of parallelism is achieved. I had assumed that this kind of circuit would inevitably require new materials and device characteristics, but I had overlooked the potential of today's transistor-based integrated circuit technology.
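
For a feel of the kind of computation such a chip does in hardware, here is a minimal sketch (my own illustration, not IBM's design) of a tiny network of leaky integrate-and-fire neurons, where the synaptic weights and the integration step sit together instead of being shuttled between a separate memory and processor:

```python
import numpy as np

# Minimal sketch (not IBM's design): a tiny network of leaky
# integrate-and-fire neurons. Every neuron updates in parallel, and the
# "memory" (synaptic weights) lives right next to the "processing"
# (integration), unlike the separate units of a von Neumann machine.
rng = np.random.default_rng(0)

n = 8                                    # number of neurons
weights = rng.normal(0.0, 0.5, (n, n))   # synaptic weights, all-to-all
potential = np.zeros(n)                  # membrane potentials
threshold, leak = 1.0, 0.9               # firing threshold, leak factor

spikes = np.zeros(n)
for step in range(20):
    external = rng.random(n) < 0.2                            # random input spikes
    potential = leak * potential + weights @ spikes + external
    spikes = (potential >= threshold).astype(float)            # who fires this step
    potential[spikes == 1] = 0.0                               # reset fired neurons
    print(step, spikes.astype(int))
```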

IBM is building such processors by implementing neuromorphic algorithms directly in the hardware design. Many of these algorithms have already been used in software; this “hardware coding” makes the system much more efficient than running intelligent software on normal computers.

In an introductory video, Dr. John Arthur, one of the researchers at IBM, said: “Current computers can do a fantastic job at adding numbers, but they do really poorly at recognizing faces, which the human brain is very good at.” The goal is to build systems that can do such recognition tasks automatically.

In the same video, Dr. Horst Simon, the deputy director of Lawrence Berkeley National Laboratory, made a very interesting analogy to the invention of the airplane to describe this objective. Unlike birds, planes have rigid wings instead of flapping wings. Dr. Simon said:

“Neuromorphic computing is exactly at the stage where we are looking at ‘flapping wings’ and ‘rigid wings’. We don’t want to build a bird; we want to build a device that allows humans to fly. So, we don’t want to build a human brain here; we want to build devices that can help solve the tasks that current computers cannot solve with ease.”

“We don’t want to build a bird; we want to build a device that allows humans to fly.” I found this comment quite inspiring: take whatever reference helps you finish the task. I always wonder what von Neumann’s reference was when he outlined the architecture that gave birth to our current computers. But now, our own brain is the natural and powerful reference.

Monday, December 2, 2013

Money that does not exist

Image from: Antana
The exchange rate of Bitcoin against the US dollar passed 1,000 dollars per coin this month. You now need to spend more than $1,000 to buy one bitcoin, a virtual currency that was worth only a few dollars a year ago.

Bitcoin, which debuted in January 2009, is the first currency based on cryptography rather than a central authority. Unlike the US dollar, there is no agent playing the role the US government plays for dollars. Instead, Bitcoin keeps a public ledger known as the “blockchain”, maintained by “miners” distributed across the Internet. Miners solve complicated mathematical puzzles to verify transactions and add them to the blockchain, and just like miners of gold, they are rewarded with newly issued bitcoins. A miner can simply be a personal computer, although machines built specifically for mining have appeared recently because of the fast-rising price.
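
The “puzzle” is essentially a hash-guessing race. Below is a much-simplified sketch of the idea (real Bitcoin hashes a binary block header with double SHA-256 against a difficulty target, not a toy string like this):

```python
import hashlib

# Simplified proof-of-work sketch: find a nonce so that the SHA-256 hash
# of the block data starts with a required number of zero characters.
# Finding the nonce takes many attempts; checking it takes one hash.
def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("previous-hash|transactions|timestamp")
print(nonce, digest)
```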

Bitcoin has been called the “gold of tomorrow” by the chief financial officer of the University of Nicosia, the first university to accept Bitcoin as payment for tuition. Indeed, Bitcoin is designed to behave like gold. The algorithm that generates new coins imposes an upper limit on the total number of coins. As time goes on and more bitcoins enter circulation, the mathematical puzzles in mining become increasingly difficult, which means the miners' reward shrinks. Hardly any new coins can be generated once the number approaches 21 million, the total supply allowed by the Bitcoin protocol. With the price rising so wildly, bitcoin mining now looks like a replay of the gold rushes of the 1800s.
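
The 21 million figure is not arbitrary: the block reward started at 50 bitcoins and is cut in half every 210,000 blocks, so the total supply is a geometric series. A quick check (protocol constants as commonly documented, rounding ignored):

```python
# The block reward started at 50 BTC and halves every 210,000 blocks,
# so the total supply is a geometric series that sums to about 21 million.
reward = 50.0
blocks_per_halving = 210_000
total = 0.0
while reward >= 1e-8:               # 1 satoshi, the smallest Bitcoin unit
    total += reward * blocks_per_halving
    reward /= 2
print(f"{total:,.0f} BTC")          # ~21,000,000
```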

Gold was the ultimate money until about 100 years ago. In economic theory, what made gold the favored money across cultures is, first, that its supply is limited and hard to expand, and second, that it stores value. Bitcoin clearly possesses the first property, but whether it stores any value is a serious question: it will not be valuable unless it is widely accepted as a medium of exchange.

Until recently, the largest market accepting bitcoins was the black market. A study from Carnegie Mellon University estimated that about 4.5–9% of all bitcoin transactions were used to buy illegal drugs on Silk Road, the “eBay for drugs” that was shut down by the FBI in October this year. Other illegal uses include gambling and money laundering. Real stores and businesses are now picking up the trend of accepting bitcoins, most notably WordPress and Baidu, the search giant in China. A Bitcoin ATM also opened recently at a coffee shop in Vancouver. Following this link, you will see more things you can buy with bitcoins.

Nevertheless, Bitcoin's popularity as a medium of exchange is growing much more slowly than its popularity as an investment; the fast-rising price is driven largely by speculation rather than by transaction demand. Another major risk for the price is whether Bitcoin will gain legal status. Recently, both the People's Bank of China and the Federal Reserve in the US stated that they do not accept bitcoins as currency, though neither deems them illegal. “People are free to buy and sell bitcoins as normal electronic goods,” according to Yi Gang, deputy governor of the People's Bank of China. “It is very inspiring, but we have no plan to recognize its legal status as a currency.” Yet not all countries are hesitating: some, like Germany, France and the UK, have given Bitcoin some form of recognition, while others, like Thailand, have banned it.

While Bitcoin sits in this hot spot with an uncertain future, another alternative, Litecoin, has been released to share the cake. It seems Bitcoin needs to be prepared for competitors, and perhaps cryptographic currencies without a central authority could one day compete with the dollar as well.

(Update) The Chinese government has banned the use of Bitcoin as currency in financial activities, although Bitcoin markets are still allowed to operate as markets for a normal digital good.

Sunday, November 17, 2013

The dawn of quantum computing

Image from:  Steve Jurvetson
At the beginning of this year, Google and NASA jointly purchased a “quantum computer” from D-WAVE, a Canada-based company and the world's first commercial quantum computer vendor. Although some scientists remain skeptical about whether its products so far are strictly “quantum”, the progress is no doubt extraordinary, and the working principles clearly set the machine apart from normal computers.

The D-WAVE quantum computer relies on superconductivity, the phenomenon in which electrical resistance disappears. In a structure named the “Josephson junction”, it produces controllable oscillations that can then be manipulated for computation. D-WAVE divides its quantum processor into three major components: qubits, which store information based on the behavior of the Josephson junctions; couplers, which connect multiple qubits into a system; and programmable magnetic memory, the peripheral circuitry that allows users to program the processor. The essential feature setting it apart from conventional computers is that each qubit stores 0 and 1 simultaneously, as a superposition. The qubits are slowly tuned into a classical state (either 0 or 1) once the work is done, so that the results can be read out.
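
As a rough intuition for what “storing 0 and 1 simultaneously” and “reading out a classical state” mean, here is a toy state-vector illustration (this is generic quantum mechanics, not how a D-WAVE processor is actually programmed):

```python
import numpy as np

# A single qubit is described by two complex amplitudes; reading it out
# yields a definite 0 or 1 with probabilities given by the squared
# amplitudes. This is a generic illustration, not D-WAVE's interface.
rng = np.random.default_rng(7)

state = np.array([1.0, 1.0]) / np.sqrt(2)   # equal superposition of |0> and |1>
probs = np.abs(state) ** 2                  # measurement probabilities: [0.5, 0.5]

readouts = rng.choice([0, 1], size=10, p=probs)
print(probs, readouts)                      # each readout is a plain classical bit
```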

This feature makes it particularly suitable for optimization problems, and it could potentially boost artificial intelligence. Still, this early-stage product is far from a general-purpose computer. According to NASA, its researchers will use the system to investigate quantum algorithms that might someday help solve difficult optimization problems in aeronautics, Earth and space sciences, and space exploration. Google is trying to use the technology for an even faster Internet experience, and two teams have been set up to race each other within Google, one using the D-WAVE machine, the other using classical computers.
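
The optimization problems in question are of the Ising type: assign +1 or -1 to each variable so that an energy function over pairwise couplings is minimized. The sketch below solves a made-up instance with classical simulated annealing, purely to show the shape of the problem a quantum annealer is aimed at (the couplings and schedule are invented):

```python
import math
import random

# Classical simulated annealing on a toy Ising problem: choose spins of
# +1/-1 to minimize the sum of J[i,j] * s[i] * s[j]. A quantum annealer
# targets this same kind of objective; the instance here is made up.
random.seed(1)
n = 6
J = {(i, j): random.choice([-1, 1]) for i in range(n) for j in range(i + 1, n)}

def energy(s):
    return sum(J[i, j] * s[i] * s[j] for i, j in J)

s = [random.choice([-1, 1]) for _ in range(n)]
temperature = 2.0
for step in range(2000):
    i = random.randrange(n)
    before = energy(s)
    s[i] = -s[i]                                   # propose flipping one spin
    delta = energy(s) - before
    if delta > 0 and random.random() > math.exp(-delta / temperature):
        s[i] = -s[i]                               # reject the uphill move
    temperature *= 0.999                           # cool down slowly

print(s, energy(s))
```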

The researchers at both Google and NASA are excited about the new possibilities this quantum computer brings. They released a video introducing the machine and expressing that excitement. Another interesting video is posted on the Google+ page of the Google Quantum A.I. Lab Team: they built a “quantum world” inside the game Minecraft, naming the result qCraft. Rupak Biswas, the deputy director of the Exploration Technology Directorate at NASA Ames, called this time “the dawn of the quantum computing age.”

Despite the enthusiasm, this is indeed only the “dawn”. D-WAVE is super cool, or more precisely, extremely cold: it works at a temperature more than 200 degrees below zero. It also needs careful calibration, and booting it up takes over a month. Although NASA and Google both have plans, neither is entirely sure what is really going on in the core or how it can best be utilized. Tons of experiments lie ahead. Nevertheless, this is our first step, and it does not seem a bad one at all. Stay excited.

Wednesday, October 30, 2013

The boosted Home button

Image from: Wikipedia Commons
Finally, the seemingly useless home button has gained something useful. Last month, Apple released the iPhone 5s. One of its new features is the so-called “Touch ID”, a fingerprint sensor integrated into the home button.

I am far from an Apple fan, but I have always admitted that it is one of the most brilliant technology companies. The ultimate goal for high-tech products is to hide the technology; all that matters is giving the user a natural experience.

Touch ID, according to Apple, combines “some of the most advanced hardware and software”. It scans your fingerprint while you press the home button and unlocks the phone if the scanned image matches the one you enrolled. This is a complicated process involving sensing and recognition, hardware encryption, and software optimization.

The Touch ID sensor is hidden behind the sapphire home button. The sensor is thinner than a human hair and houses an 88×88 array of capacitors to capture every detail of your finger (a resolution of 500 pixels per inch). Two noticeable aspects of this high-tech button are the dark areas on the sensor die and the metal ring surrounding the button. Chipworks imaged the sensor die and found, unusually, that the silicon had been partially etched to provide a recessed shelf within the die area for the wire bonds at the top and bottom edges. Although wire bonding is old-fashioned, this trick allows the chip surface to sit directly against the sapphire disc, minimizing the finger-to-chip distance and thus maximizing accuracy. On the front side of the button, the metal ring everyone notices is more than decoration: it is actually part of the sensor. The ring detects your finger and wakes the sensor chip before the button is even pressed. This head start gives the user the illusion that the matching happens instantly; one may even forget that the phone is securely protected.
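
In spirit, the capture-and-match step looks something like the sketch below. This is a hypothetical illustration only: Apple has not published its matching algorithm, and real fingerprint matchers work on ridge features rather than a simple image correlation.

```python
import numpy as np

# Hypothetical sketch of the idea: the 88x88 capacitor array yields a
# small grayscale image of the ridge pattern, which is compared with the
# enrolled template. A plain normalized correlation stands in here for a
# real fingerprint-matching algorithm.
def normalized_correlation(a: np.ndarray, b: np.ndarray) -> float:
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

rng = np.random.default_rng(0)
enrolled = rng.random((88, 88))                    # stand-in for the stored template
scan = enrolled + rng.normal(0, 0.05, (88, 88))    # same finger, noisy new scan

THRESHOLD = 0.9                                    # arbitrary acceptance threshold
score = normalized_correlation(scan, enrolled)
print("unlock" if score > THRESHOLD else "reject", round(score, 3))
```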

Now your fingerprint is protecting your phone, but who is protecting your fingerprint? For this, Apple implemented a solution based on technology from ARM, a microprocessor IP provider. ARM's “TrustZone” technology sets aside a portion of the processor that is accessible only to certain hardware and not to any software running in the operating system. This hardware-level isolation is designed to make it impossible for any app to steal your fingerprint information.
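
The principle can be pictured as a narrow interface: the protected world holds the template and will only ever answer yes or no. The snippet below is a loose conceptual sketch of that idea in ordinary Python, not ARM's API and not a real security boundary:

```python
# Conceptual sketch only (not ARM's API, and Python offers no real
# isolation): the "secure" side stores the template and exposes nothing
# but a yes/no match operation, so the raw biometric data never leaves.
class SecureStore:
    def __init__(self, template: bytes):
        self.__template = template          # kept inside; no getter is exposed

    def matches(self, scan: bytes) -> bool:
        return scan == self.__template      # only a boolean ever comes out

store = SecureStore(template=b"ridge-pattern-bytes")
print(store.matches(b"ridge-pattern-bytes"))   # True
print(hasattr(store, "template"))              # False: no direct access path
```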

Actually, fingerprint technology has existed for a long time, and Apple is not the first smartphone company to use a fingerprint sensor. Samsung, Motorola and HTC have all released products that use fingerprints to protect the phone, but none managed to attract much public attention. Indeed, the technology is one thing; how the fingerprint recognition is integrated into the unlocking process is another, and ease of use often decides the outcome. In fact, more than half of users leave their smartphones unprotected to avoid the trouble of entering a password. Touch ID seems to be the best fingerprint-based solution so far at combining convenience and security, although it is still too early to conclude. Everything happens with a single press of the button.

Yet Touch ID is not unbeatable. Shortly after the release, the Chaos Computer Club successfully hacked it with a fake finger and documented the process on video. They took advantage of the fingerprint image left on the touchscreen to make a replica, which for a phone in daily use would be harder but still doable. This is the Achilles' heel not only of Touch ID but of all biometric solutions that use an individual's biological traits to secure information. The words of Frank Rieger, spokesperson for the Chaos Computer Club, are worth attention:

“We hope that this finally puts to rest the illusions people have about fingerprint biometrics. It is plain stupid to use something that you can’t change and that you leave everywhere every day as a security token. The public should no longer be fooled by the biometrics industry with false security claims. Biometrics is fundamentally a technology designed for oppression and control, not for securing everyday device access.”

Our biological information is unique and unchangeable, and most of it is also hard to protect. Take fingerprints: anything you have touched carries your fingerprint. And unlike a password, a biometric trait cannot be replaced, so if it is stolen once, it is compromised forever. This makes protecting the information itself extremely important. The TrustZone technology is good enough to block software attacks, but it still has to demonstrate protection against forceful reads of the hardware itself.

Perhaps there is no 100 percent security. Touch ID may not fully secure your phone, but it definitely makes it harder for someone to break in. Indeed, engineering is the art of trade-offs. If someone has the resources to break into your phone for information, they probably already have many other ways to spy on you, and it is always advisable not to store sensitive information on consumer electronics anyway. In this regard, what Touch ID provides is sufficient. Most importantly, it is a thousand times more convenient than entering a password!

Tuesday, October 8, 2013

The smartphone prefers winter

You are ready to spend over $200, plus a two-year contract, for the latest smartphone with much higher hardware specifications. But should you? The newest ads claim that quad-core processors will ensure a much faster user experience. But the truth is that smartphones are not designed to operate at full capacity without overheating.

An interesting test of the Google Nexus 4 smartphone published by AnandTech showed one of the important issues smartphone companies face. They tested the device in a freezer and in a room at standard temperature and benchmarked its video-gaming performance, which almost doubled in the freezer. According to the test, the smartphone runs at a reduced clock frequency when it is hot, which is rather like driving a Lamborghini through jammed traffic. It has been rumored that the major smartphone companies are looking into “liquid cooling” techniques to help avoid this issue.
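
The throttling behavior can be pictured with a crude control loop like the one below (all numbers are invented for illustration, not measured values from the AnandTech test):

```python
# Crude sketch of thermal throttling (all constants invented): the
# governor drops the clock whenever the chip gets too hot, so a colder
# environment lets the phone sustain a higher frequency and finish more
# work in the same time.
def run_workload(ambient_c: float, steps: int = 200) -> float:
    temp, freq, work = ambient_c, 1.5, 0.0        # degC, GHz, arbitrary work units
    for _ in range(steps):
        temp += 4.0 * freq                        # heating grows with frequency
        temp -= 0.1 * (temp - ambient_c)          # passive cooling toward ambient
        freq = 1.5 if temp < 70.0 else 0.8        # throttle above the limit
        work += freq
    return work

print(run_workload(ambient_c=25.0))    # room temperature: throttles, less work done
print(run_workload(ambient_c=-15.0))   # in the freezer: stays fast throughout
```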

Actually, the heating problem is well acknowledged by the industry. It is a barrier that hinders the performance of all “computers”, smartphones included. The heat flux generated by today's microprocessors is often compared to that at the Sun's surface, yet the chip temperature has to be kept below about 100 °C. The situation for smartphones is even more challenging: no one wants to hold even a 50 °C phone in their hand.

The problem with smartphones, unlike normal computers, is that they have no fans inside. “A simple model shows that the maximum heat dissipation rate is around 3 watts, even under ideal conditions,” according to Rui Mao, who has studied heat-management materials in electronic devices for more than five years. The full-load power of a typical cutting-edge smartphone microprocessor is roughly the same or even larger. This means that the claims about smartphone performance are vastly overstated.

What makes the situation worse is that the heat is not evenly distributed inside the smartphone. Leyden Energy imaged the temperature at the back of a smartphone running demanding apps: the region over the microprocessor reached a peak temperature twice as high as that at the edges. The traditional approach of using graphite sheets and foil spreaders to carry heat to the outer casing seems to have been outpaced by the processors, judging from several commentaries on liquid-cooled phones. The liquid-cooling approach uses heat pipes to help spread the heat away from the processor. It is already widely implemented in ultrabooks, and NEC has launched the world's first smartphone with ultra-thin heat pipes (the Medias X06E, available only in Japan). It is rumored that other major players in the market, such as Apple, Samsung and HTC, are not far from adopting this technology either.

A heat pipe used in electronic devices normally consists of a sealed copper pipe filled with a working fluid. The fluid evaporates at the hot end and condenses at the cool end in each heat-transfer cycle. Although this is a mature technology in computers and even tablets, there are still practical challenges in fitting the thickness within one millimeter for smartphones. The present yield for 0.6-millimeter heat pipes is only 30%, but DigiTimes is still optimistic that more products will hit the market in 2014.

While waiting for the companies to deliver new phones armed with this liquid-cooling technology, we should bear in mind that no single technique makes a smartphone stand out. Facilitating heat dissipation is one thing; reducing heat generation is another, on top of the ever-increasing demand for performance. The smartphone is a complex system. Perhaps looking behind the advertisements can help you make a wiser choice.

Related links: