The future of design, material science and new ways of making things


In conversation with Tom Wujec from Autodesk

On the sidelines of the recent Autodesk University conference in Mumbai, we got a chance to speak to Tom Wujec, currently a Fellow at Autodesk and previously instrumental in creating Maya – one of the leading 3D computer animation software packages in the industry today. In the freewheeling conversation below we talk about everything from the future of design, material science and digital humans to how the maker movement is going to change the way we build things.

Digit: It’s always nice to chat with people from Autodesk – you never know where the conversation might go! Last year I had the pleasure of meeting Jordan Brandt, and while we were supposed to talk mostly about 3D printing and distributed manufacturing, we ended up talking about bikes, food synthesisers and voxel rendering. If it’s okay with you, I’d like to keep our discussion free-flowing too. What would you like to start with?

Tom Wujec: Since Digit is a technology magazine, why don’t I talk about the impact of four particular technologies and how we see them changing the way we design, make and use just about everything – from running shoes, to clothing, to automobiles, to jetpacks, and even everyday items. Here’s an amazing fact that I stumbled upon long ago: according to the American Microprocessor Association, we’re now producing more individual transistors than grains of rice harvested on the planet, and that’s astonishing! That crossover happened back in 2008, and it’s been doubling ever since. Essentially, what that means is that computation is becoming more available, more distributed, less expensive and more flexible than ever. At a recent TED talk I showed what a million grains of rice looks like – it’s about one cubic foot. A typical smartphone has about 1,600 boxes of those! We’re now producing individual transistors on chips at the rate of 13 trillion per second. They’re now so inexpensive – less expensive than rice – that we can put them just about everywhere. There are three major classes of technologies these chips are going into that are changing the way we design and make things: the first is sensors, the second is computation and the third is fabrication. Sensors allow us to understand and read the world with a precision we never had before. They come in all shapes and sizes, and they can not only see better than people but also hear, taste, smell and perceive properties of the world that human beings cannot. That class of technology allows digital tools to read the world and make sense or meaning of it.
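As a quick sanity check of those figures – this is just a back-of-envelope sketch, not a number from Autodesk – the snippet below combines the rates Wujec quotes with two rough outside assumptions: roughly 500 million tonnes of milled rice harvested per year, and about 25 mg per grain.

```python
# Back-of-envelope check of the transistors-vs-rice comparison.
# From the interview: ~13 trillion transistors produced per second,
# ~1 million grains of rice per cubic foot, ~1,600 "boxes" per smartphone.
# Rough outside assumptions (for illustration only):
#   ~500 million tonnes of milled rice per year, ~25 mg per grain.

TRANSISTORS_PER_SECOND = 13e12            # figure quoted in the interview
SECONDS_PER_YEAR = 365 * 24 * 3600

RICE_TONNES_PER_YEAR = 500e6              # assumption: ~500 Mt milled rice/year
GRAMS_PER_GRAIN = 0.025                   # assumption: ~25 mg per grain

transistors_per_year = TRANSISTORS_PER_SECOND * SECONDS_PER_YEAR
rice_grains_per_year = RICE_TONNES_PER_YEAR * 1e6 / GRAMS_PER_GRAIN  # tonnes -> grams -> grains

print(f"Transistors per year : {transistors_per_year:.1e}")   # ~4.1e20
print(f"Rice grains per year : {rice_grains_per_year:.1e}")   # ~2.0e16
print(f"Ratio                : {transistors_per_year / rice_grains_per_year:,.0f}x")

# The smartphone figure: 1,600 "boxes" of a million grains each
# corresponds to about 1.6 billion transistors in a typical handset SoC.
print(f"Smartphone estimate  : {1600 * 1e6:.1e} transistors")
```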

Digit: Right. In fact, there’s this whole trend in which some people are surgically augmenting themselves with these kinds of sensors so that they can detect magnetism, WiFi and stuff like that.

Tom Wujec: Yeah, exactly! People are hacking themselves in so many different ways. At a conference I saw someone with a really strange protrusion dangling from his forehead, and later, when I heard him speak, I found out it was an antenna that converts colours into sound waves.
The second class of technology, as you and your readers know, is the growth of computation. Moore’s Law is changing as a result of how chips are evolving, but the trend of increasing computation and calculating power certainly continues in parallel. What that means is that we have ever more power to offload cognitive work to the computer. Digital tools can do simulation, analysis, and increasingly machine learning, generative design and other classes of computation that were never previously available. And that’s giving us almost magical powers of thinking and analysis.

The third class is fabrication, which is essentially robotics. The same way sensors can read the world, robots can ‘write’ to the world – through actuators and manipulation. We have an explosion of robots. They’re not getting as cheap as we’d like them to be, but soon we’re certainly going to see a $5,000 desktop robot with the capabilities of a $50,000 or $75,000 machine of today. So the combination of these sensors, computers and fabricators allows computers to read, understand and write – or, as we like to say, capture, compute, create. And that’s really exciting because it’s changing the way running shoes are made, the way cars are made, the way our buildings are made. We now have a new toolset that lets us work in partnership with our machines, and it’s growing exponentially.
There’s a fourth kind of technology that’s also affecting the future of making, called compose. Compose is our ability not only to understand traditional and existing materials so we can use them in better and more specific ways, but also to generate new classes of materials by selecting the properties you would like and then literally designing the matter or substance the material is made from so that it has those properties.

Digit: How is this done?

Tom: It’s done in a variety of ways. Take concrete, for example – it’s the world’s most commonly used building material. Half of everything that’s made on the planet every year is made of concrete. We also know that concrete is brittle; it breaks. So wouldn’t it be great to have concrete that knows when it’s broken and then heals itself? It turns out there are actually several ways to do that. The most interesting, I think, involves a recently discovered microbe found in those volcanic vents at the bottom of the ocean. In the presence of water, these microbes manufacture silicates, taking existing material and literally making rock, and they can survive extreme conditions. So the next generation of concrete has this microbe in it, lying dormant unless and until it detects water. If there’s a crack in the concrete, water gets through, the microbe is activated and it heals the crack.


[Image caption: Not all of them are bad]

Digit: Wow, this is nuts! That’s like living concrete.

Tom: It is, it is! We’re increasingly seeing these examples of re-engineering materials into other classes of materials – something called metamaterials. Metamaterials are generally 3D printed in the form of a three-dimensional micro-lattice. Since printers don’t care what you print, you can create very complicated internal structures the eye can’t even see, with properties that are really interesting. Take jello, for example – when you squeeze it, it expands. But with a lattice material you can program how it reacts: it could get narrower, or it could turn to the left, or turn to the left and then to the right. You can also have it react similarly to heat and water. This allows designers to rethink how to make something. Take the hinge – you could never replace a hinge, right? A hinge is just two pieces of material. But if you had a material that starts off stiff, then becomes flexible, and then goes stiff again, you change the concept of what a hinge is, or the way a hinge could work. Or take helmets – concussions are a big issue in sports. You could have a helmet that is soft on the inside and contours exactly to your skull, with the material gradually becoming hard towards the outside, so it’s like an extra bone casing. So this new-found capability literally changes the design tools that are available. These are just a couple of examples of emerging compose technology. There’s also something called the materials genome, where there are programs in which you type in the kind of properties you want and a computer tells you the materials that have those properties: it melts at this point, it’s flexible at that temperature, except in the presence of so-and-so… It’s amazing stuff! And increasingly we’re moving towards biological materials. In fact, as part of the mainstage presentation you must’ve heard Andrew Hessel talk about his ability to print viruses.

Digit: Yes, that was interesting.

Tom: So that particular virus he was holding kills E. coli. We often forget that viruses themselves are not inherently dangerous, evil things – only a very tiny fraction of viruses are harmful to humans. In fact, for every cell in your body there are 10-100 bacteria, and for every bacterium there are 10-100 viruses. So you actually have around a kilogram of material in you that’s not human but is essential for your body to run. If we can understand the programming of life, we can alter that programming to make materials. Think of almost everything that’s made – it’s made through either a chemical or a biological process, and compared to nature we have really poor, primitive ways of fabricating things. If we can harness these processes through biomimicry or bio-mastery, it will revolutionise everything… it’ll make the Internet seem quaint.

Digit: How can makers at the very grassroots level play into this entire grand vision?

Tom: Makers are at a fantastic, incredible point in history, because for the first time almost anyone who has a smartphone and an interest can get access to learning how to make things. There are incredible websites like Instructables that provide access and inspiration to design and make things. Secondly, many of the software tools you need to design and make things are increasingly free. I have a funny story: before I joined Autodesk and Alias Systems twenty years ago, I worked in a large museum and produced interactive animations. I had a job to bring a particular dinosaur – the Maiasaur – to life digitally. So we took the dinosaur’s bones, scanned them, made a scale model, digitised the scale model and turned that digital model into an animated one. It took a very long time – about a year and a half or so – and some of the modelling took months and months to do. The software would crash, we would painstakingly place points on the physical model with these stick sensors, and it didn’t always work out the way we expected; we’d have to continually photograph and remodel it. Anyway, eventually, after a year and a half, hundreds of thousands of dollars and two researchers working on it full time, we had a very good model. Recently I happened to be at a cocktail party at the museum I used to work at, and I saw the model! I hadn’t seen it in two decades. I took out my smartphone, thought, “I wonder,” took twenty photographs with a glass of wine in my hand and uploaded them to a free service called 123D Catch. Four and a half minutes later, back came a digital model that was better than the one that had taken me almost a year and a half to produce. It was fully animatable, I could spin it around, it was in colour, and holy mackerel, it was free – all while I was having a glass of wine. So those kinds of tools, which used to take professionals a long time and a huge amount of resources, are now available at your fingertips, and that makes it an exciting time for makers to be inspired and to produce things.
I’m proud to work at a company that’s so passionate about making things. Our CEO is himself a maker – he’s made boats, built houses, and built a go-kart that he’s now turning into a self-driving go-kart for his teenage boys. Many CEOs spend their money lavishly on villas and cars; Carl went and bought two maker spaces for himself – two twenty-thousand square foot workshops – so in his spare time he makes stuff, and he inspires us all to do the same.

Digit: Wow, that’s inspiring. Many of our readers would be interested in knowing what a “Fellow” at Autodesk really does. What does your work involve?

Tom: My role at Autodesk is in the office of the Chief Technology Officer. Our group’s overall goal is to do three things: explore, exploit and explain the technologies that are important to the industries and customers we serve. Our horizon is five to twenty years out, and even beyond. So our group explores everything from synthetic biology, which I described, to robotic vision, to something called metadata visualisation and systems thinking. We also do something called parametric humans, which is literally a twenty-year project to completely digitise the human – from the musculature to the nerves to the bones – so you can run simulations on a human. Think of a chair: we can design it and, by running simulations, determine where the material comes from, the weight, the structural performance and tolerances to the nth degree, and a lot more. But so far, no one can tell you whether it will be comfortable or not.
Since we produce things for people, if we create a Digital Human – which is a very difficult task – then that is yet another tool for designers to digitally test things before they physically produce them. 

Digit: This Digital Human idea brings to mind another subject area that has fascinated me recently. And since you are working with time horizons that are pretty long, perhaps it would be nice to get your insight on it. What are your thoughts on digital consciousness? Do you think a chip could ever become sentient or produce sentience?

Tom: Well, I think there are layers of consciousness, and people have been debating the nature of consciousness for a very long time. There’s what’s called the easy question and the hard question. The easy one is mimicry – so absolutely, an AI like Siri or Cortana can pass the Turing test. But does it have cognition in the sense of a felt reality?

Digit: Or the awareness of itself as an individual?

Tom: I tend to think that it’s a systems model and that consciousness can be modelled through other, perhaps non-biological, types of systems – this is my own personal point of view, not Autodesk’s. But I think it’s a long way away. Scale through the orders of life and ask where self-consciousness begins: I had a dog, and I was pretty sure the dog had some self-awareness, right? Does a cat? Does a rat? Does a cockroach? Does a fly? Does a virus? There are some places where you tend to go, “No, I don’t think there’s any consciousness there” – it’s just purely chemical and simple biological actions with no volition involved. And certainly there can be volition in a snake: a snake that hunts – does it feel hunger? Does it feel satiated? I think so. But does it have a mental monologue? A choice? I doubt it.

A friend of mine runs a lab in Seattle – the Allen Institute for Brain Science – and he’s working on an AI system aimed at doing science. Eventually it’s trying to pass a fourth-grade science test, then an eighth-grade one a year later, then a twelfth-grade one, and then a PhD-level one. He described the method by which this works – essentially it’s a machine learning task – and I asked, “Do you think it’ll have cognition?” I don’t think so, not with this kind of program.

Your readers might be interested in how machine learning works, so let me explain briefly. Any word that is parsed, whether by Google, or Siri, or Cortana, has a set of numbers associated with it through statistical analysis. The word ‘king’, for example, might have 30,000 numbers generated through statistical analysis. The computer represents it as a vector, which means it’s a line in space pointing a certain distance in 30,000 dimensions – not one, not two, not three, but 30,000. Hard to visualise, but mathematically you can process it. Every other word has its own vector, and you can perform operations on these words to create meaning. For example, if you take the word ‘king’ – that’s 30,000 numbers – and then you subtract the word ‘man’, another set of numbers, you end up with a new vector, and that vector ends up corresponding to the word ‘queen’. We both intuitively know, semantically, how those words are related – king minus man is quite apparently queen to us. But the computer does it only statistically; it doesn’t have that felt sense of, ‘ohh, I get it’ that we all experience.
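To make that vector arithmetic concrete, here’s a minimal sketch in Python using the open-source gensim library and freely downloadable 100-dimensional GloVe vectors. This is an illustration only, not the system Wujec describes – published models are typically 100-300 dimensional rather than 30,000, and the canonical form of the analogy also adds ‘woman’ after subtracting ‘man’.

```python
# Minimal word-vector analogy sketch -- illustrative only, not any specific
# vendor's system. Requires: pip install gensim (vectors download on first run).
import gensim.downloader as api

# Pre-trained 100-dimensional GloVe embeddings (Wikipedia + Gigaword corpus).
model = api.load("glove-wiki-gigaword-100")

# Every word is just a dense vector of numbers derived from statistical analysis of text.
print(model["king"].shape)   # (100,) -- one number per dimension

# vector('king') - vector('man') + vector('woman') lands closest to vector('queen').
for word, similarity in model.most_similar(positive=["king", "woman"],
                                           negative=["man"], topn=3):
    print(f"{word:>10s}  cosine similarity = {similarity:.3f}")

# 'queen' typically comes out on top -- the relationship is captured purely
# statistically, with no felt sense of "ohh, I get it" behind it.
```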

So where do you draw the line of consciousness? To me, that insight, that feeling we just experienced, is actually a visceral feeling. Our bodies move in a certain way – I have feelings in my chest, I might feel things somewhere else, you know, in my shoulders and so on. I think that’s as much connected to consciousness and self-awareness as cognition itself. So my short answer is: probably not right away, because the term ‘consciousness’ bundles dozens or hundreds of qualities under one word, and a computer will only do a small number of them at any one time. But machine learning will be extraordinary in the future.

Siddharth Parwatay

Siddharth a.k.a. staticsid is a bigger geek than he'd like to admit. Sometimes even to himself.
