In 1965, Intel co-founder Gordon Moore predicted that the number of transistors that could fit on a computer chip would grow exponentially, and they did, doubling about every two years. For half a century, Moore's Law has endured: Computers have gotten smaller, faster, cheaper, and more efficient, enabling the rapid worldwide adoption of PCs, smartphones, high-speed internet, and more.

This miniaturization trend has led to silicon chips today that have almost unimaginably small circuitry. Transistors, the tiny switches that implement computer microprocessors, are so small that 1,000 of them laid end-to-end are no wider than a human hair. And for a long time, the smaller the transistors were, the faster they could switch. But today, we're approaching the limit of how small transistors can get.

As a result, over the past decade researchers have been scratching their heads to find other ways to improve performance so that the computer industry can continue to innovate. While we wait for the maturation of new computing technologies like quantum, carbon nanotubes, or photonics (which may take a while), other approaches will be needed to get performance as Moore's Law comes to an end.

In a recent journal article published in Science, a team from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) identifies three key areas to prioritize to continue to deliver computing speed-ups: better software, new algorithms, and more streamlined hardware.

Leiserson says that the performance benefits from miniaturization have been so great that, for decades, programmers have been able to prioritize making code-writing easier rather than making the code itself run faster. The inefficiency that this tendency introduces has been acceptable, because faster computer chips have always been able to pick up the slack.

"But nowadays, being able to make further advances in fields like machine learning, robotics, and virtual reality will require huge amounts of computational power that miniaturization can no longer provide," says Leiserson, the Edwin Sibley Webster Professor in MIT's Department of Electrical Engineering and Computer Science. "If we want to harness the full potential of these technologies, we must change our approach to computing."

Leiserson co-wrote the paper, published this week, with Research Scientist Neil Thompson, professors Daniel Sanchez and Joel Emer, Adjunct Professor Butler Lampson, and research scientists Bradley Kuszmaul and Tao Schardl.

The authors make recommendations about three areas of computing: software, algorithms, and hardware architecture.

With software, they say that programmers' previous prioritization of productivity over performance has led to problematic strategies like "reduction": taking code that worked on problem A and using it to solve problem B. For example, if someone has to create a system to recognize yes-or-no voice commands, but doesn't want to code a whole new custom program, they could take an existing program that recognizes a wide range of words and tweak it to respond only to yes-or-no answers.

While this approach reduces coding time, the inefficiencies it creates quickly compound: if a single reduction is 80 percent as efficient as a custom solution, and you then add 20 layers of reduction, the code will ultimately be 100 times less efficient than it could be.

"These are the kinds of strategies that programmers have to rethink as hardware improvements slow down," says Thompson. "We can't keep doing 'business as usual' if we want to continue to get the speed-ups we've grown accustomed to."

Much existing software has been designed using ancient assumptions that processors can do only one operation at a time. But in recent years, multicore technology has enabled complex tasks to be completed thousands of times faster and in a much more energy-efficient way, so the researchers recommend techniques like parallelizing code.
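The compounding the article describes for layered reductions can be multiplied out directly. This is a back-of-the-envelope sketch (our own, not from the paper); the 0.80 efficiency and 20 layers are the figures the article quotes, and the product comes out near the hundredfold loss it cites:

```python
# Each layer of "reduction" runs at 80 percent of the efficiency of
# custom code, and losses multiply with every layer added.
efficiency_per_layer = 0.80
layers = 20

overall = efficiency_per_layer ** layers   # ~0.0115 of custom performance
slowdown = 1 / overall                     # ~87x, on the order of the
                                           # ~100x loss the article cites
print(f"overall efficiency: {overall:.4f}")
print(f"slowdown factor: ~{slowdown:.0f}x")
```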
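To illustrate the parallelization the researchers recommend, here is a minimal sketch (our own example, not code from the paper) using Python's standard-library `multiprocessing` module; the `count_primes` workload and the four-worker split are assumptions chosen for illustration:

```python
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately CPU-bound)."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    limit = 50_000

    # Serial: one core does all the work, the "one operation at a time"
    # assumption baked into much existing software.
    serial = count_primes((0, limit))

    # Parallel: four workers each take a quarter of the range,
    # running on separate cores.
    chunks = [(i * limit // 4, (i + 1) * limit // 4) for i in range(4)]
    with Pool(4) as pool:
        parallel = sum(pool.map(count_primes, chunks))

    assert serial == parallel  # same answer, computed across cores
```

Because the chunks are independent, the parallel version can approach a 4x speed-up on four cores; real code rarely splits this cleanly, which is part of why the authors frame parallelization as a rethinking effort rather than a free win.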