An AI Velociraptor Waves Goodbye To Your Peaceful Dreams As We Explore Machine Learning


Originally published on Medium.

Meet Plastic Dinosaur Soon-Soong, PD or Plas for short. He’s a 1.5-meter-tall robotic velociraptor powered by induction-charged lithium-ion batteries and guided by a set of neural nets. He dreams and talks with people. He has a day job, but his avocation is to get human rights, or an approximation of them, for himself and other emergent consciousness entities. He has friends, he makes mistakes, and he’s just trying to get through each day in a world designed for humans when he has a long tail and a mouth full of rubber teeth. He took his hyphenated last name from the last names of the Malaysian engineers of Chinese descent working in San Jose, California, who built him. His first names are what they called him. He rides an electric scooter because people freak out when he even walks down a hallway toward them, never mind running.

Velociraptor sculpture image courtesy North Dakota Tourism Division

He’s also completely fictitious, and likely to stay that way for a while.

David Clement, Principal at Wavesine and Co-Founder of Senbionic, is a friend and long-time collaborator. He and I have been exploring how this type of thing might happen, converging on it from past joint work in robotics and current work in machine learning and neural net technologies. He’s the deepest person I know personally in the machine learning space, able to do what looks like magic to anyone who has spent more than a few years trying to automate anything with this rapidly evolving technology.

From a CleanTechnica perspective, this is an introductory set of material on machine learning, an increasingly important technology in cleantech. Two articles have already been published in CleanTechnica on the use of machine learning in commercial solar placement and IoT water quality management solutions. This series provides an interesting mechanism to introduce some of the basics.

There are going to be several articles in this series. The first deals with his physical body and some basics of robotics. The second deals with his nervous system, brain, and some basics of machine learning. The third deals with attention loops. Subsequent pieces will explore aspects of failure points and nuances of machine learning through playful periods in PD’s existence.

PD’s body

Let’s talk a bit about the physical aspects of PD. His skeleton is a set of geared actuators, CNC-ed aluminum and extruded plastic. He’s a mix of custom components and off-the-shelf bits and pieces. If Festo made things that walked instead of flew, you’d see something like him.

He has a lithium-ion battery pack located at the base of his body where his legs connect to his torso. It’s induction charged. All he has to do is squat and rest on his charging block. It’s good for a day of wandering around, if he’s careful. Also, he has a USB port so friends can charge their phones off of him, which is nice, but it’s in his mouth, which freaks them out at first.

He’s wrapped in silvery gray smart cloth that’s a bit padded, so he doesn’t look like a scary collection of aluminum and plastic bones, gears and motors, but more like a piece of modernist furniture with teeth — one that runs around by itself. Okay, that’s not scary at all. His battery and electric motors give off a bit of heat, so he’s actually pleasantly warm to the touch if he’s been moving around. He’s huggable, should anyone want to hug a Swedish furniture dinosaur with rubber teeth.

He has sensors on top of sensors on top of sensors. He has optical sensors in his eyes with a broader range than human vision, so he can see into the infrared. He has aural sensors in his ears. He has optical spectroscopic sensors in his nose and mouth, so he can, kind of, smell and taste things. His skin of smart cloth has sensors woven into it that can detect stretching, poking, and temperature, so he has the equivalent of a sense of touch and feeling all over his body. His limbs are all studded with microelectromechanical systems (MEMS) sensors, just like the ones in our smart phones and watches. The sensors know when they move and in which direction. This gives him what we call proprioception, the innate knowledge of where our bits are without having to look at them all the time. These also give him a sense of balance, in that they know which way is up and whether they are moving in that direction, without the wonkiness of our inner ears.
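To make the proprioception bit a little more concrete: a MEMS inertial sensor reports acceleration (including gravity, which tells it which way is down) and rotation rate, and a small amount of filtering turns those readings into an orientation estimate for whatever limb segment the sensor is bolted to. Here’s a minimal Python sketch of one common approach, a complementary filter. The readings, axis choice, and numbers are invented for illustration, not anything specific to PD.

```python
import math

def complementary_filter(prev_pitch_deg, accel_g, gyro_rate_dps, dt, alpha=0.98):
    """Blend one axis of accelerometer and gyro readings into a pitch estimate.

    prev_pitch_deg -- previous pitch estimate, in degrees
    accel_g        -- (ax, ay, az) acceleration in g; gravity says which way is down
    gyro_rate_dps  -- angular rate around the pitch axis, degrees per second
    dt             -- seconds since the last reading
    alpha          -- trust in the smooth-but-drifting gyro vs. the noisy accelerometer
    """
    ax, ay, az = accel_g
    # The accelerometer gives an absolute but jittery pitch from the direction of gravity.
    accel_pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    # The gyro gives a smooth relative change that slowly drifts.
    gyro_pitch = prev_pitch_deg + gyro_rate_dps * dt
    # Mostly trust the gyro, but keep pulling it back toward the accelerometer.
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

# Hypothetical readings from one limb segment: at rest, then tilted slightly.
pitch = 0.0
for accel, gyro_rate in [((0.0, 0.0, 1.0), 0.0), ((-0.1, 0.0, 0.99), 2.0)]:
    pitch = complementary_filter(pitch, accel, gyro_rate, dt=0.01)
    print(f"estimated pitch: {pitch:.2f} degrees")
```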

Just for fun, he also has some ultrasonic sensors in various places on his body, so he has a pretty good idea of how far the big bits of his body are from things around him. That’s pretty useful for a creature with a long tail that tends to move around; otherwise it’s a constant bull-in-the-china-shop problem. And, of course, his battery pack has a sensor set of its own, the most important of which is charge level. Yes, when he’s running low on juice, he feels some variant of hungry. Again, that’s going to end well.

He has arteries and capillaries, kind of. The battery pack connects to a primary conduit for power, which branches off down the limbs, up into the head, and down the tail. And a bunch of things plug into it. Just as humans can feel blood flowing through their limbs if they are still, silent, and concentrating, if he pays attention, he can get a sense of where his electricity is flowing. But it’s different because there’s no heart beating and pumping a sticky fluid through elastic tubes, which is what we feel. But he can, if he tries really hard, feel something equivalent. Mostly he ignores it, as do we.

He’s wireless, except for power. He uses Bluetooth to listen to his sensors and control his bits. He has wifi and a cellular connection, which is necessary for connecting to the other part of his brain in the Cloud, the part that actually learns, as opposed to just operates. He’s a noisy beast from that perspective, but we can’t hear or see that part of the spectrum, so that’s fine. But he does have a mobile hotspot his friends can connect to, so that’s nice as well. Also, no teeth on the hotspot. Much less emotionally problematic than his USB port, one would think.
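That split, a body that just operates and a brain in the Cloud that actually learns, is worth sketching. The Python below is entirely hypothetical, with made-up class and method names, but it shows the shape of the thing: a frozen local policy keeps the body responsive on its own, experience gets batched up and shipped over wifi or cellular, and improved weights come back down whenever the off-board trainer has them.

```python
import random

class LocalPolicy:
    """Stand-in for the on-board nets: maps sensor readings to motor commands."""
    def __init__(self):
        self.version = 0
    def act(self, observation):
        # Placeholder behaviour: one motor command per sensor reading.
        return [random.uniform(-1.0, 1.0) for _ in observation]
    def load(self, weights):
        self.version = weights  # pretend to swap in new weights

class CloudLink:
    """Stand-in for the wifi/cellular link to the part of the brain that learns."""
    def __init__(self):
        self.batches_received = 0
    def upload(self, experience):
        self.batches_received += 1  # the off-board trainer ingests the experience
    def fetch_weights_if_any(self):
        # Every other batch, pretend the trainer has produced an improved policy.
        return self.batches_received if self.batches_received % 2 == 0 else None

class SplitBrain:
    """The body operates locally and continuously; learning happens off-board."""
    def __init__(self, policy, cloud, upload_every=5):
        self.policy = policy
        self.cloud = cloud
        self.upload_every = upload_every
        self.buffer = []
    def step(self, observation):
        action = self.policy.act(observation)      # fast, local, always available
        self.buffer.append((observation, action))
        if len(self.buffer) >= self.upload_every:
            self.cloud.upload(self.buffer)         # batch experience up to the Cloud
            self.buffer = []
            new_weights = self.cloud.fetch_weights_if_any()
            if new_weights is not None:
                self.policy.load(new_weights)      # the body gets smarter after the fact
        return action

brain = SplitBrain(LocalPolicy(), CloudLink())
for t in range(12):
    brain.step([0.1 * t, 0.2])                     # pretend sensor readings
print("policy version after a short wander:", brain.policy.version)
```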

He’s obviously able to gesture, and has body language, after a fashion. And he has a speaker system in his throat so that he can make noises and talk. Heck, he can tune in to streaming music channels, but has a small speaker in an oddly amplifying space, so he doesn’t usually. It was intended for roaring but got away with itself.

The components of the body, the skin, and the sensors are all current technology, by the way. The Bluetooth and wifi are just Bluetooth and wifi. There’s nothing magical or hard about them, it’s just a matter of getting some power to them. Induction charging of lithium-ion batteries is standard stuff in smart phones, electric toothbrushes, and even a bunch of electric transit buses. Smart cloth has existed for a while and has the kind of abilities I’ve been describing. Rapid prototyping is capable of building strong enough components that the things that aren’t off-the-shelf can be manufactured and replaced pretty easily.

Up to this point, it’s all entirely feasible. There’s nothing except money and a few engineers standing between this entity I’ve described and a physical object.

But it would just sit there like an expensive, odd-looking piece of mid-century furniture with teeth. It wouldn’t be doing anything. So more is required.

A digression into robotics paradigms

Before we get to the neural networks and the parts they play, we have to digress a bit into subsumption vs world-map robotics paradigms.

Back in 2001 or so, when David and I were working on swarm-based robotics systems as a fun side project — as one does — one of the things I did was go out and read Master’s and PhD theses from robotics programs at universities around the world. The internet wasn’t fully the thing of glory it is now, but most of them weren’t behind paywalls. It quickly became apparent to me that at the time there were two academic camps in free-moving robotics: really stupid, highly survivable devices that stumbled their way toward success, and really smart but physically inept devices that knew everything about their surroundings, had a 3D map, and planned every step to avoid trouble. The first is the subsumption robotics paradigm and the second is the world-map paradigm.

At the time, there wasn’t a lot of work being done to bridge these camps. They were nerdy, clustered around specific universities and specific academics, and didn’t have much to do with the real world. Even then, it was obvious to me that subsumption was really the way to go for the basics, and that layering a smart world-map view for goal setting on top of the subsumption body made sense. I’m sure I wasn’t the first to get there, or the 1,000th even, but it was an insight I reached independently from the academic literature I had available at the time.

Image of Rodney Brooks’ six-legged Genghis subsumption robot courtesy of NASA

The classic subsumption robot had six bent-wire legs, six simple electric motors (one for each leg), a battery pack, about 11 transistors wired together, and a simple sensor or two, often for light. They could be built from this incredibly simple toolkit to hide from people, to avoid light, or to scurry toward light. Emergent behaviors from incredibly stupid and simple components were fascinating.

I brought back this subsumption insight (or “upsight” in the terms of Anathem, a Neal Stephenson book that David and I both think is amazing), realizing that the subsumption concept could be instantiated in software. David had trouble accepting it when I first showed him the pseudocode. All it had was limbs that listened for shouting and sensors that shouted. Whatever the limbs heard that was loudest, they did the most of, but they did a bit of everything. There was no controller. There was no center. There was no there there. It was like the Los Angeles of robots. He still had trouble accepting that it would work. Then he hacked together a two-dimensional, six-legged robot simulation based on the logic, such as it was, and it walked across the screen.
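For flavor, here is a minimal Python sketch in the same spirit as that pseudocode. It isn’t the original, and every sensor, urge, and limb name in it is invented. Sensors shout named urges at various volumes, each limb blends all of them in proportion to how loud they are, and there is no controller anywhere to be found.

```python
def sensor_shouts(state):
    """Each sensor shouts an urge and how loudly it feels it."""
    return {
        "walk_forward": 1.0,                                    # default restlessness
        "back_away": 5.0 if state["obstacle_cm"] < 20 else 0.0, # ultrasonic sensors
        "sit_and_charge": 4.0 if state["battery"] < 0.2 else 0.0,  # battery sensor
    }

# Each limb knows how much it contributes to each urge, and nothing else.
LIMB_RESPONSES = {
    "left_leg":  {"walk_forward": +1.0, "back_away": -1.0, "sit_and_charge": 0.0},
    "right_leg": {"walk_forward": +1.0, "back_away": -1.0, "sit_and_charge": 0.0},
    "tail":      {"walk_forward": +0.2, "back_away": +0.5, "sit_and_charge": 0.0},
    "hips":      {"walk_forward": +0.3, "back_away": +0.3, "sit_and_charge": -1.0},
}

def limb_commands(shouts):
    """Every limb does a bit of everything, weighted by how loud each shout is."""
    total = sum(shouts.values()) or 1.0
    return {
        limb: sum(gain * shouts[urge] for urge, gain in responses.items()) / total
        for limb, responses in LIMB_RESPONSES.items()
    }

# Plenty of charge, nothing nearby: mostly walking.
print(limb_commands(sensor_shouts({"obstacle_cm": 150, "battery": 0.9})))
# Low battery and a wall in the way: backing off and settling down dominate.
print(limb_commands(sensor_shouts({"obstacle_cm": 15, "battery": 0.1})))
```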

Our efforts were a bit ahead of their time. The energy density of batteries was too poor. While we saw even then the inherent power of simulation environments on the massively powerful, simple-instruction supercomputers that everyone calls graphics processing units (GPUs), the software to take advantage of the platform hadn’t matured yet. Neural nets weren’t even on our radar screen. Now lithium-ion battery packs of all sizes have high energy densities at very low price points, and you can rent or buy rather absurd amounts of highly parallelized compute time through mature graphics frameworks and operating environments from all the big Cloud vendors. And neural nets trainable on massively parallelized GPUs are rentable from the major Cloud vendors, often costing mere dollars for serious explorations.

It became apparent to me when Tesla and Google were both visibly in the autonomous car business that Tesla had a bunch of people who understood subsumption and Google had a bunch of people who loved world-map paradigms. Those aren’t mutually exclusive, and it’s much more of a continuum than it used to be. I published on that 4 years ago in CleanTechnica and was confident then — and am now — that Tesla has the balance right. Tesla’s autonomy fits within cars that are incredibly effective machines for their environments. They start with high survivability and excellent responsiveness to inputs. Google’s (now Waymo’s) approach is very cerebral, with the actual vehicles they put it in being cute little cupcakes with lidar nipples on top. Zero survivability or responsiveness is expected of the basic chassis.

For Plastic Dinosaur, we knew that we needed to start with a robust body and a subsumption framework, and layer a world-map onto it later. But the robust body we’d thought up for our friend PD was only half the problem. We had to figure out how to do the subsumption software. That’s where machine learning and neural networks come in.
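To hint at where that layering goes, a world-map layer in this kind of scheme wouldn’t drive the limbs at all. It would keep a map, pick a goal, and lean on the urges that the subsumption layer already responds to. Continuing the earlier hypothetical sketch, with everything invented for illustration:

```python
def goal_bias(world_map, goal):
    """Turn a planned route into gentle pressure on the existing urges."""
    if world_map.get("path_to", {}).get(goal) == "blocked":
        return {"back_away": +1.0}      # nudge toward trying another way
    return {"walk_forward": +2.0}       # otherwise, encourage making progress

def biased_shouts(shouts, bias):
    """The world-map layer only amplifies urges; it never silences the body."""
    return {urge: level + bias.get(urge, 0.0) for urge, level in shouts.items()}

shouts = {"walk_forward": 1.0, "back_away": 0.0, "sit_and_charge": 0.0}
world_map = {"path_to": {"charging_block": "clear"}}
print(biased_shouts(shouts, goal_bias(world_map, "charging_block")))
```

Because the goal layer only amplifies urges rather than overriding them, the subsumption layer’s survival reflexes stay in charge when something unexpected happens.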

The second article in the series will introduce concepts from machine learning and neuroscience, then lay out a hypothetical set of neural nets and their responsibilities, integration, and training. Following articles will build on these concepts, laying out the emergent language, using Plastic Dinosaur as a framework for the discussion as much as is reasonable. After all, he’s cuddly and cute, or at least as cuddly and cute as a robotic AI Swedish furniture dinosaur with rubber teeth can be.



Michael Barnard is a climate futurist, strategist, and author. He spends his time projecting scenarios for decarbonization 40-80 years into the future. He assists multi-billion-dollar investment funds and firms, executives, boards, and startups to pick wisely today. He is founder and Chief Strategist of TFIE Strategy Inc and a member of the Advisory Board of electric aviation startup FLIMAX. He hosts the Redefining Energy - Tech podcast (https://shorturl.at/tuEF5), part of the award-winning Redefining Energy team.
