Last week I gave a presentation at the Congress On the Future of Engineering Software (COFES) in Tel Aviv on the opportunities for innovation in design and sustainability.
I was the token hardware person at a conference for softies, and I asked them: can hardware help in design and sustainability, or is it just something we take for granted?
Does anyone – end user, software developer, IT manager – know what the actual FLOPS requirement of their application and their usage is? Do they care? And if they knew, would they make different buying decisions?
Could more intelligent hardware selection impact not just operating costs, reliability, and uptime, but the quality of the ultimate design's efficiency – its "green-ness"?
This is a time of change, of inflection points, of serious long-term decisions. Not just in AEC and CAD, but in all disciplines, all the way down to the chip powering your phone or tablet, and up to the supercomputer cluster, local or in the cloud, that you are drawing on. Today's chips could be the butterflies Eckels missed.
Concepts like chaos theory, more colorfully known through the "butterfly effect," impact our lives. Scientists know the environment is a fragile system that depends on delicate interactions within nature.
The original concept came from a 1952 science fiction short story by Ray Bradbury entitled "A Sound of Thunder," and was popularized by "The Twilight Zone." The notion was that a team of people travels to the past, when dinosaurs roamed, staying on a floating walkway so they don't disturb anything. But one of them wanders off the walkway. When they return to their own time, things are different: a totalitarian regime is in power, and life is dark and depressing. Then they discover that the man who wandered off the walkway stepped on a butterfly, and the loss of that one butterfly cascaded into enormous consequences.
Today, in laboratories all over the world, decisions are being made about the size of the problem to be examined. Scientists and engineers bound their problems by compute cycles: how long will it take to get an answer? In basic science, astrophysics, and quantum mechanics, where Nobel Prizes are at stake, a computational examination may run for years; how long it runs is decided by the amount of time the researcher has, and the accuracy of the desired answer is set accordingly. In protein folding, for example, the protein's activity is examined over a span measured in microseconds, but it may take a day of computation to see one microsecond of that behavior accurately. A figure of merit for the examination of a protein is 100 microseconds: one hundred days of computing to see if the drug interaction you designed is effective.
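The arithmetic above is worth making explicit. A minimal back-of-the-envelope sketch, using only the figures quoted in the text (one simulated microsecond per day of computation, a 100-microsecond figure of merit):

```python
# Back-of-the-envelope sizing from the figures in the text.
simulated_us_per_day = 1.0   # microseconds of protein behavior per day of compute
target_us = 100.0            # figure of merit for examining a protein

days_needed = target_us / simulated_us_per_day
print(days_needed)  # 100.0 -- one hundred days of computation
```

The point is not the numbers themselves but the shape of the tradeoff: halve your throughput and the wait doubles; the accuracy you can afford is set by the calendar.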
What quantum mechanics, protein folding, and astrophysics have in common with design for sustainability is the size of the dataset, the accuracy or resolution of the matrix, and the machines the problems are run on.
Not only is a problem's solution measured in petaflops or teraflops, it's measured in megawatts. The recently announced Chinese supercomputer Tianhe-1 consumes over 4 megawatts. The soon-to-be-commissioned Blue Waters supercomputer at the University of Illinois will consume over 80 megawatts and requires a separate power source.
So now researchers have to size their problems by FLOPS, watts, and dollars when deciding how accurate an answer they want and when they can get it.
Now architects and designers who have been charged with designing a building or campus, and making it green and long lasting – sustainable – have to run calculations on all the environmental, geophysical, power conversion, recycling, and dozens of other factors that impact and interact with a building.
We're not talking about designing the wonderful "One-Hoss Shay" of Oliver Wendell Holmes's poem, which lasted one hundred years and a day – no way. We're talking about stuff that has to stay, and pay: carbon neutral at least, and a co-generator if possible.
To create such a design you have to run big models against hundreds of conditions over a long, very long, time. The longer and more accurately you can run the simulation, the more sustainable your design will be. You aren't designing tract houses with a 20-year useful life – you're designing monuments, the pyramids, except your pyramids have to sustain life and, like Hippocrates, do no harm.
Can a little semiconductor really be that influential?
It's not as simple as going to your local computer shoppe or IT manager and saying, "Give me the most FLOPS I can get for $50K." You have to know the tradeoffs. I'm going to skip the visualization aspects, although that's my favorite subject, and just assume you've got that covered (although in my heart I know you haven't).
Now you have to dig in: how parallel is the problem? What are the matrix and the dataset like – regular, yielding nicely to a SIMD configuration, or bumpy and messy, needing a lot of divergent serial processing? If you don't know, you're not qualified to engage in a design-for-sustainability project.
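The regular-versus-bumpy distinction is easier to see in code than in prose. A toy illustration (not a benchmark): the first function applies one uniform operation to every element, which is the shape that maps cleanly onto SIMD hardware; the second takes a data-dependent branch per element, the kind of divergence that serializes on parallel machines.

```python
# "Regular" work: the same operation on every element -- SIMD-friendly.
def regular(values, scale):
    return [v * scale for v in values]   # one uniform operation, no branches

# "Bumpy" work: each element takes a data-dependent path -- divergent,
# and a poor fit for lockstep SIMD lanes.
def divergent(values):
    out = []
    for v in values:
        if v < 0:
            out.append(-v)           # one branch...
        elif v % 2 == 0:
            out.append(v // 2)       # ...another...
        else:
            out.append(v * 3 + 1)    # ...and another; lanes diverge
    return out

print(regular([1, 2, 3], 10))   # [10, 20, 30]
print(divergent([-4, 4, 5]))    # [4, 2, 16]
```

On a GPU, every lane in a group executes in lockstep, so when neighbors take different branches the hardware runs the branches one after another – the divergent shape wastes exactly the parallelism you paid for.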
2010 was the year we really entered into the world of heterogeneous processing – the mixing of massively parallel processors and conventional serial processors. You may have heard it referred to as x86 and GPU.
GPUs, as parallel processors, offer the highest compute density per unit of silicon available, orders of magnitude greater than conventional x86 processors. GPUs also offer the highest performance per watt and the greatest raw computation per dollar.
But they are also a challenge to program: parallel processing is complicated and often bewildering. Conventional processors are familiar, you can gang a bunch of them together in clusters, and they offer a brute-force solution to most problems, albeit while consuming the most power, money, and space, and delivering the fewest FLOPS relative to a GPU. But x86 processors are still needed; this is not an either/or situation, it's a requirement for both – that's why it's a heterogeneous problem.
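The heterogeneous split can be sketched in a few lines. This is a minimal illustration, assuming a made-up problem with a data-parallel core (independent per-element work, the GPU-shaped part) and a serial tail (order-dependent accumulation, the CPU-shaped part). On real hardware the parallel phase would be offloaded via CUDA or OpenCL; a thread pool merely stands in for it here.

```python
from concurrent.futures import ThreadPoolExecutor

def heterogeneous_run(values):
    # Parallel phase: independent per-element work -- the GPU-shaped part.
    # (A thread pool stands in for a real accelerator offload.)
    with ThreadPoolExecutor() as pool:
        squared = list(pool.map(lambda v: v * v, values))

    # Serial phase: each step depends on the previous result -- the
    # CPU-shaped part that cannot be naively parallelized.
    total = 0
    for s in squared:
        total = total * 2 + s
    return total

print(heterogeneous_run([1, 2, 3]))  # 21
```

The design point is the seam: the skill the text is asking for is recognizing which phase of your simulation is which, and mapping each to the processor built for it.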
If you, your IT staff, and your applications supplier are lazy, too intimidated by the technological challenge of parallel processing, then you will not be able to run your design simulations within your time and money budget and get the most sustainable design. Now is your butterfly moment.
If your IT staff and/or your application supplier can find the parallelism in the datasets, in the design, in the vision of the project, and map it to the parallel processing capabilities of these new processors, you can run longer, more complex, and, dare I say it, more beautiful simulations and resulting visualizations.
The free lunch is over. You can’t keep doing things the same old way – life, buildings, materials, laws, and a changing global weather environment won’t allow it.
Imagine: that little butterfly of a chip is going to change everything you do, and everything you accomplish.