Robotics Archives - COBRA softwares
30 Jun

Robots are coming for your job — but maybe that’s okay


Many news stories about robots have a bit of fun joking about the impending robot apocalypse when the machines no longer need us. For many people, a more frightening and real possibility is that a robot may be coming for their jobs. It can feel odd to celebrate the admittedly fascinating advances in fields like self-driving cars and manufacturing robots, because these devices are quite likely to replace humans. Still, this is nothing new — automation has been taking over human jobs for as long as we’ve had the capacity to worry about such things as a society. As we approach a new age of automation, what will become of us?

The robots are coming

Robotic technology has come a long way in just the last decade. We’ve gone from contraptions that could barely get up on two feet or pick up an object, to advanced robots that can keep their footing when shoved or place tiny components in a smartphone chassis. Just look at how Foxconn has automated the monotonous task of assembling small electronics.

Last year, the company explained that its plans to replace some of its workforce were on hold because the robotic arms it had designed were not precise enough to place components in iPhones to Apple’s stringent standards. However, a few months ago, Foxconn announced that an improved version of the Foxbot had allowed it to eliminate 60,000 human workers in its factories. The robots are simply better at repetitive tasks than humans, who demand bathroom breaks and wages.

In the same vein, there’s Amazon and its network of massive warehouses. These facilities still rely on human workers, but the company has also increased automation with robots manufactured by Amazon’s Kiva subsidiary. These squat automatons jet around the building, picking up shelves and other heavy objects to move them where humans can pick your order more easily. Amazon insists this is not about eliminating jobs, but it’s not exactly ending its efforts to automate the warehouse game. The company hosts robotics competitions that encourage engineers to develop computer vision systems and graspers that can identify and pick up products from a shelf. That’s one of the main things humans still do in Amazon’s warehouses.

Self-driving cars are further from reality, but they could make an even bigger impact on the way we live and work. Google is at the forefront of this technology, and has been testing its cars for several years. The company is keen to point out that across all the miles those cars have driven, there have been very few accidents, and most of those were caused by fallible humans behind the wheel. More recently, Uber has started researching self-driving car technology, which could one day allow it to do away with the human drivers it contracts with.

Most common occupation per state.

Self-driving cars still have a long way to go — for example, they currently only work in good weather with well-defined streets. It’s only a matter of time before someone figures out how to make these systems more reliable. It’s going to have huge economic impact when that happens. There are an estimated 3.5 million truck drivers in the US, and many of those jobs could vanish in short order.

The trend toward increasing automation is going to continue over the coming years; it’s inevitable. What does that mean for all those people?

… and they want your job

A robot does not strictly want anything yet, but the people who own the businesses want them to take over for human workers. Automation reduces the cost of production, so businesses invest in it. Right now that means robots and AI. Those who feel this will be detrimental to the middle class point out that the technology being developed now is considerably more capable than what we’ve dealt with in the past.


Manufacturing has long since disappeared as a substantial source of jobs in the developed world now that automation has taken over. It was the same story when farming automation took over a century ago. These populations have been shunted into service-oriented jobs, many of the same jobs that are now threatened by improved robotics and artificial intelligence. Some economists worry that technology is “destroying jobs faster than it is creating them.”  With powerful computer vision, artificial intelligence, and humanoid robots just around the corner, it might not be long before more jobs than ever before are handed over to robots.

If this school of thought is right, more capable robots could further suppress income for already low-income workers. No one’s going to hire a human to pack boxes when a robot can do it better. The question is whether or not society will come up with new industries as a result of our increasing automation.

… but maybe that’s okay

A great economic thinker once worried aloud that the deployment of new machinery would soon “totally exclude” the labor of workers. Thousands of people were on the verge of being out of work, and then what would they do? This line of reasoning probably sounds familiar, because it’s the same basic argument made just a few paragraphs up, but this wasn’t a modern economist worrying about robots. It was Thomas Mortimer, an English writer and economist, writing about automated sawmills in his 1772 “Elements of Commerce.”

From the very moment machines became capable of taking on a repetitive task with greater efficiency than a human, we’ve been gripped by a fierce existential worry about our own obsolescence. And indeed, automation has slowly but surely pushed people out of industries.

There’s a certain discomfort when we talk about a new innovation in robotics that seems aimed at taking over for a human worker. It’s undeniably cool when someone improves a humanoid robot that could so easily slip into our daily lives — after all, the world is designed around humans, so humanoid robots make sense. Maybe on some level that’s the goal, but the unending march of progress is not necessarily malicious.


Looking at the history of automation, the doomsday predictions have never come true. There’s never been an explosion of long-term unemployment because of it. Sure, people lose jobs, and that’s genuinely unfortunate. No one wants that, but automation frees humans from menial labor—the sort of jobs people would rather not do anyway. Letting robots do what humans used to do could improve everyone’s quality of life in the long-term as new industries and better jobs appear. If there’s any way for a business to make money with human workers, you can be sure they’ll find it.

This transition still requires humanity to work together — something we often stink at. Some believe we’re reaching a point where a developed society simply doesn’t need everyone to work. If that’s the case, do we use something like universal basic income to encourage volunteer work and entrepreneurship? A robot may come for your job one day, but maybe that could end up benefiting you. We just don’t know exactly how yet. The next few years are going to be interesting.

 

27 Jun

Dawn of the robotic butler: R2D2’s descendants already walk (or roll) among us


The age of robotic butlers may appear impossibly distant, as we gaze disheartened at our Roomba bumping stupidly against the staircase. But the first glimmerings of a much different sort of robot helper are already apparent. Like protozoa emerging from the primordial soup, the features that will comprise the next generation of home robots are present in the marketplace even now. As they start connecting up to form ever more complex automations, the results promise to be astounding. Hold on to your seat as we careen through the futuristic miasma that is the latest in robotic butlers.

One of the first myths worth dispelling as we embark on this journey is that home robotics is a single field. In fact, the next generation of home robots will be made possible by a portfolio of technologies that have gradually been maturing over the last decade. The mistake many have made is looking for a single technological threshold to be breached, marking the dawn of the robotic age. Instead, the robotic assistant of the future is being made possible through the gradual maturing of at least three different fields in robotics – speech and scene recognition, sensor capabilities, and power electronics. By browsing the latest developments in these arenas, we can catch a glimpse of the kind of robotic butler that will likely be serving us breakfast in the decades to come.

If there is a single technology that is most likely to be pointed to as enabling the dawn of the robotic butler, it’s machine learning. Machine learning is a branch of computer science that includes artificial neural networks – the technology behind Siri and Google’s speech recognition. This is the area that has probably received the biggest investment from heavyweight technology companies like Microsoft, Google, and Facebook. And it’s no surprise since it pertains directly to their business model – which is at the end of the day software rather than hardware. In looking at the advances made in machine learning we can, therefore, discern the “brains” of our robotic butler.

While it may seem like a very curtailed sort of butler, the Amazon Echo speaker and the soon-to-launch Google Home speaker are at the forefront of machine learning as applied to speech recognition and home automation – two of the key capabilities we will look for in a robot butler. At its I/O conference last month, Google announced its latest brainchild, Google Home, a speaker which packs all the advanced AI we have come to expect from Google Now into a small-profile audio device. The speaker will contain far-field “always-on” microphones, poised day or night to respond to our commands, however absurd (I can’t be the only one asking Google whether it’s better to peel a banana from the top or the bottom).


The brains behind the speaker will be capable of controlling much of your home automation, including dimming the lights, changing thermostat settings, and unlocking smart door locks. In addition, it will have all of Google Now’s features, now repackaged as Google Assistant, including offering directions, sending text messages, and answering simple knowledge-based queries.

Though the price for the Google speaker will probably be comparable with the Amazon Echo, weighing in at just under two hundred dollars, the costs in terms of privacy will likely be far higher – a permanent eavesdropper lurking within our homes, controlled by one of the world’s largest corporations. But judging by the public’s reception of the Amazon Echo, it’s a tradeoff many people are willing to make.

The other area of machine learning that demands a closer look in regards to robotic butlers is scene recognition. While still in its infancy compared with speech recognition, scene recognition is essential to enabling robots to make sense of their visual surroundings. And it is orders of magnitude more difficult than speech recognition.

The old saw that a picture is worth a thousand words is effectively true when it comes to scene recognition. Though we rarely stop to think about it, the amount of information digested by the vision-processing system in the human cortex is several times larger than that arriving from the auditory system. As an example, walk into an evening party, and in one swift glance you can gain more information about the relationships between people than could be obtained from a 10-minute description of the same proceedings.


Though we have fewer examples of cutting-edge scene recognition in consumer technology products compared with speech recognition, at least two are already in the wild: the consumer robots Jibo and Zenbo, and the face-detection algorithms employed in many digital cameras and smartphones. Both Jibo and Zenbo possess limited scene recognition capabilities. For instance, in its promotional material for Zenbo, Asus demonstrates how the home robot can use its onboard video camera to recognize when an elderly person has fallen and respond by calling an emergency contact.

Meanwhile, many smartphones already come packaged with face recognition algorithms, a kind of primitive scene recognition that could allow a robot to differentiate between members of the household in which it “lives,” and to recognize when a new face, perhaps belonging to an intruder, has been detected. For a more detailed breakdown of the latest in scene recognition, refer to ExtremeTech’s previous explorations of this topic.
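
As a hint of how simple the household-versus-stranger check could be in software, here is a minimal sketch using the open-source Python face_recognition library; the photo file names and resident names are hypothetical placeholders, and a real robot would of course need far more robust handling.

    # Minimal sketch of a household-vs-stranger check using the open-source
    # face_recognition library. Photo file names are hypothetical placeholders.
    import face_recognition

    # Enroll the household: one reference photo per resident.
    residents = {}
    for name, photo in [("alice", "alice.jpg"), ("bob", "bob.jpg")]:
        image = face_recognition.load_image_file(photo)
        residents[name] = face_recognition.face_encodings(image)[0]

    def identify(snapshot_path):
        """Return the matching resident's name, or flag an unknown face."""
        snapshot = face_recognition.load_image_file(snapshot_path)
        encodings = face_recognition.face_encodings(snapshot)
        if not encodings:
            return "no face detected"
        matches = face_recognition.compare_faces(list(residents.values()), encodings[0])
        for name, matched in zip(residents, matches):
            if matched:
                return name
        return "unknown face - possible intruder"

    print(identify("front_door_camera.jpg"))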


The other major advancement that will propel robotic butlers to the next level is happening in the domain of sensors. Three-dimensional cameras of the type pioneered by the Microsoft Kinect, and those in next-generation iRobot Roombas, will allow the robotic butler to sense its surroundings with unparalleled finesse. iRobot is one of the companies pushing the envelope in this regard, as its latest Roomba demonstrates. Using vSLAM technology, a form of visual mapping that builds a layout of the environment from camera imagery, the Roomba 980 can traverse a living room in straight lines rather than with its predecessors’ characteristic bump-and-wander approach. The same technology enables it to plot out the most efficient route to take when vacuuming, resembling much more closely the way a human would approach the task.
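
To see why having a map matters so much, here is a toy comparison, purely illustrative and not iRobot’s algorithm: with a mapped floor, a simple boustrophedon (“lawnmower”) sweep covers every cell in a predictable number of moves, while a random-bounce cleaner takes far longer to touch them all.

    import random

    # Toy comparison of a planned "lawnmower" sweep over a known floor map vs.
    # random-bounce cleaning. Purely illustrative -- not the Roomba 980's code.
    WIDTH, HEIGHT = 10, 6
    cells = {(x, y) for x in range(WIDTH) for y in range(HEIGHT)}

    def lawnmower_moves():
        """With a map, sweep row by row: every cell is visited exactly once."""
        visited = []
        for y in range(HEIGHT):
            row = range(WIDTH) if y % 2 == 0 else reversed(range(WIDTH))
            visited.extend((x, y) for x in row)
        return len(visited)

    def random_bounce_moves():
        """Without a map, wander randomly until every cell has been touched."""
        pos, seen, moves = (0, 0), {(0, 0)}, 0
        while seen != cells:
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            nxt = (pos[0] + dx, pos[1] + dy)
            if nxt in cells:      # stay put when a move would leave the room
                pos = nxt
                seen.add(pos)
            moves += 1
        return moves

    print(lawnmower_moves(), random_bounce_moves())  # e.g. 60 vs. several hundred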

The third domain in which technological advancements will reap rewards for robotic butlers is the arena of power electronics and actuators. This is a more traditional engineering topic, and for the latest we can turn toward an organization that has been tackling the thorniest engineering problems for many decades: NASA. While its Valkyrie robot failed spectacularly during the DARPA Robotics Challenge, with regard to the power electronics and actuators that make up what we might think of as the brick and mortar of a robot, Valkyrie represented something of a high-water mark.

In robotics, the versatility of a limb is measured in degrees of freedom, which describes the number of single-axis rotational movements possessed by a joint. In general, the more degrees of freedom, the more physically versatile the robot. NASA’s Valkyrie robot boasted a whopping 44 degrees of freedom, compared with the 28 degrees of freedom possessed by its closest rival, the Boston Dynamics Atlas robot. We should, therefore, look for robots resembling Valkyrie in design when it comes to mimicking the smooth muscle movements exhibited by humans while walking and picking up objects.
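
For a rough sense of how such a number is tallied, here is an illustrative sketch; the joint-by-joint breakdown below is hypothetical rather than NASA’s published specification, but it shows how per-joint rotations simply add up to the headline figure.

    # Illustrative degrees-of-freedom tally: the total is just the sum of
    # single-axis rotations across all joints. The breakdown is hypothetical,
    # not NASA's published spec for Valkyrie.
    dof_per_part = {
        "arm (shoulder, elbow, wrist)": 7,
        "hand and fingers": 6,
        "leg (hip, knee, ankle)": 6,
        "torso/waist": 3,
        "neck/head": 3,
    }

    total = (2 * dof_per_part["arm (shoulder, elbow, wrist)"]
             + 2 * dof_per_part["hand and fingers"]
             + 2 * dof_per_part["leg (hip, knee, ankle)"]
             + dof_per_part["torso/waist"]
             + dof_per_part["neck/head"])

    print(total)  # 44 under this hypothetical breakdown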

Having browsed the major areas of technology germane to robotic butlers, we can now see a dim outline of what the future holds. Imagine a robot possessing the body of NASA’s Valkyrie robot, with the brains and hearing of a Google Home speaker and the eyes of the Roomba 980. It’s a Frankenstein creation to be sure, and one that few of us could afford or even wish to have snooping about in the kitchen. Nevertheless, with Mark Zuckerberg talking a big game about desiring a robotic butler to help him around the house, at least one billionaire is in the market for such a device. And if history teaches us anything, it’s that once a technology enters the possession of the uber wealthy, it won’t take long for it to filter into the aspirations of the common people.

 

18 Jun

Google’s developing its own version of the Laws of Robotics


Google’s artificial intelligence researchers are starting to have to code around their own code, writing patches that limit a robot’s abilities so that it continues to develop down the path desired by the researchers — not by the robot itself. It’s the beginning of a long-term trend in robotics and AI in general: once we’ve put in all this work to increase the insight of an artificial intelligence, how can we make sure that insight will only be applied in the ways we would like?

That’s why researchers from Google’s DeepMind and the Future of Humanity Institute have published a paper outlining a software “killswitch” they claim can stop those instances of learning that could make an AI less useful — or, in the future, less safe. It’s really less a killswitch than a blind spot, removing from the AI the ability to learn the wrong lessons.


The Laws are becoming pretty much a requirement at this point.

Specifically, they code the AI to ignore human input and its consequences for success or failure. If going inside is a “failure” and it learns that every time a human picks it up, the human then carries it inside, the robot might decide to start running away from any human who approaches. If going inside is a desired goal, it may learn to give up on pathfinding its way inside, and simply bump into human ankles until it gets what it wants. Writ large, the “law” being developed is basically, “Thou shalt not learn to win the game in ways that are annoying and that I didn’t see coming.”

It’s a very good rule to have.
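
To make the idea concrete, here is a minimal, hypothetical sketch, not DeepMind’s actual code, of how a simple Q-learning agent might be made “safely interruptible”: any transition caused by a human interruption is simply excluded from the learning update, so the agent never learns to seek out or avoid being interrupted.

    import random
    from collections import defaultdict

    # Hypothetical sketch of "safe interruptibility" in tabular Q-learning,
    # loosely inspired by the DeepMind/FHI idea: updates caused by a human
    # interruption are skipped, so the agent's values never reflect (and never
    # optimize around) being interrupted.
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
    q_table = defaultdict(float)                 # (state, action) -> value

    def choose_action(state, actions):
        """Epsilon-greedy action selection."""
        if random.random() < EPSILON:
            return random.choice(actions)
        return max(actions, key=lambda a: q_table[(state, a)])

    def learn_step(state, action, reward, next_state, actions, interrupted):
        """Standard Q-learning update, except interrupted transitions are ignored."""
        if interrupted:
            # Blind spot: the agent does not learn from what the human did to it,
            # so it gains no incentive to flee from (or provoke) interruptions.
            return
        best_next = max(q_table[(next_state, a)] for a in actions)
        td_target = reward + GAMMA * best_next
        q_table[(state, action)] += ALPHA * (td_target - q_table[(state, action)])

    # Example: a step where the human scooped the robot up leaves the table untouched.
    learn_step("yard", "wander", 0.0, "indoors", ["wander", "flee"], interrupted=True)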

Elon Musk seems to be using the media’s love of sci-fi panic headlines to promote his name and brand, at this point, but he’s not entirely off base when he says that we need to worry about AI run amok. The issue isn’t necessarily hegemony by the robot overlords, but widespread chaos as AI-based technologies enter an ever-wider swathe of our lives. Without the ability to safely interrupt an AI and not influence its learning, the simple act of stopping a robot from doing something unsafe or unproductive could make it less safe or productive — making human intervention a tortured, overly complex affair with unforeseeable consequences.

Asimov’s Three Laws of Robotics are conceptual in nature — they describe the types of things that cannot be done. But to provide the Three Laws in such a form requires a brain that understands words like “harm” and can accurately identify the situations and actions that will produce it. The laws, so simple when written in English, will be of absolutely ungodly complexity when written out in software. They will reach into every nook and cranny of an AI’s cognition, editing not the thoughts that can be produced from input, but what input will be noticed and how it will be interpreted. The Three Laws will be attributes of machine intelligence, not limitations put upon it — that is, they will be that, or they won’t work.

This Google initiative might seem a ways off from First Do No (Robot) Harm, but this grounded understanding of the Laws shows how it really is the beginning of robot personality types. We’re starting to shape how robots think, not what they think, and to do it with the intention of adjusting their potential behavior, not their observed behavior. That is, in essence, the very basics of a robot morality.

Google's latest self-driving car prototype (December 2014)

We don’t know violence is bad because evolution provided us with a group of “Violence Is Bad” neurons, but in part because evolution provided us with mirror neurons and a deeply laid cognitive bias to project ourselves into situations we see or imagine, experiencing some version of the feelings therein. The higher-order belief about morality emerges at least in part from comparatively simple changes in how data is processed. The rules being imagined and proposed at Google are even more rudimentary, but they’re the beginning of the same path. So, if you want to teach a robot not to do harm to humans, you have to start with some basic aspects of its cognition.

Modern machine learning is about letting machines re-code themselves within certain limits, and those limits mostly exist to direct the algorithm in a positive direction. It doesn’t know what “good” means, and so we have to give it a definition, and a means to judge its own actions against that standard. But with so-called “unsupervised machine learning,” it’s possible to let an artificial intelligence change its own learning rules and learn from the effects of those modifications. It’s a branch of learning that could make ever-pausing Tetris bots seem like what they are: quaint but serious reminders of just how alien a computer’s mind really is, and how far things could very easily go off course.
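
The ever-pausing Tetris bot is worth making concrete, since it shows exactly why that definition of “good” matters. Below is a toy, hypothetical sketch of a mis-specified objective: if the only thing we penalize is losing, then pausing forever looks like a perfect strategy to the machine.

    # Toy illustration of a mis-specified objective: if the only thing we punish
    # is losing, an agent quickly "learns" that pausing forever is optimal.
    # Hypothetical sketch -- not any real Tetris-playing system.
    def reward(game_over):
        return -100.0 if game_over else 0.0

    def best_action(actions, will_end_game):
        # Pick whichever action scores highest under this reward.
        return max(actions, key=lambda a: reward(will_end_game(a)))

    chosen = best_action(
        ["move_left", "rotate", "drop", "pause"],
        # crude stand-in: any real move carries some risk of eventually losing
        will_end_game=lambda a: a != "pause",
    )
    print(chosen)  # "pause" -- the only action that can never be punished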

The field of unsupervised learning is in its infancy today, but it carries the potential for true robot versatility and even creativity, as well as exponentially fast change in abilities and traits. It’s the field that could realize some of the truly fantastical predictions of science fiction — from techno-utopias run by super-efficient and unbiased machines, to techno-dystopia run by malevolent and inhuman ones. It could let a robot usefully navigate in a totally unforeseen alien environment, or lead that robot to slowly acquire some V’ger-like improper understanding of its mission.

 

27 May

Bee-inspired robot uses static electricity to land anywhere


Robots come in all shapes and sizes these days, from flexible pill-shaped robots you can swallow to humanoid robots that can walk like us. Harvard’s Microrobotics Laboratory has been working on a bee-inspired robot for a few years now, but researchers recently hit on an interesting modification. The micro-drone can already fly like a bee, but what if it could land like one too? This modification could make the device ideal for covert surveillance, search and rescue, or environmental monitoring.

The RoboBee was first unveiled by Harvard in 2013, and the basic design hasn’t changed much since. It’s a bit smaller than a quarter, but weighs just 0.08 grams — roughly 31 times lighter than a penny. The tiny plastic wings flap at up to 120 Hz, allowing for controlled flight. It can take off, cover small distances, and land using bottom-mounted tripod feet. The landing part is what engineers have been looking into. The tripod design meant the RoboBee could only land on flat surfaces, which is not the case with real bees. They can stick to just about anything, and now their robotic counterpart can do that too.
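
A quick back-of-the-envelope check of that weight comparison, assuming the 2.5-gram mass of a modern US penny and a RoboBee mass of roughly 0.08 grams (80 milligrams):

    # Sanity check of the weight comparison: a modern US penny weighs 2.5 grams,
    # and the RoboBee roughly 0.08 grams (80 milligrams).
    penny_g, robobee_g = 2.5, 0.08
    print(round(penny_g / robobee_g))  # ~31, i.e. about 31 times lighter than a penny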

Being able to perch on different structures is no small feat. That could be the difference between a useful micro-robot and a mere laboratory curiosity. Being so tiny, there’s not a lot of room for a battery. Right now, the RoboBee is still externally powered via thin wires extending from its base. Eventually, a small internal battery could let it fly untethered. It’s never going to have a lot of range, but being able to land on any surface lets it continue gathering data for extended periods from different locations before flying back. The alternative, if there’s no flat landing spot, would be to flutter around for a few minutes until its battery runs low and then head back.

The Harvard team thinks it has this capability figured out with a new electrostatic pad on top of the robot. It’s attached via a flexible mount, so the robot doesn’t have to line up perfectly to bring the most surface area into contact with its target. The switchable electrostatic charge attracts the pad to the surface, and the RoboBee is safely perched. This works with a wide range of materials like wood, glass, and leaves, and it would work just fine on your walls or ceiling too. The power required to maintain the connection is several orders of magnitude lower than what’s required for flight, so the RoboBee can hang around for a very long time.
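
As a rough sketch of what that buys you in mission time, here is an illustrative calculation; the power figures are placeholders standing in only for the claim that holding on costs far less than flying, not Harvard’s measured numbers.

    # Illustrative perch-vs-flight power budget. The numbers are placeholders,
    # reflecting only the claim that maintaining the electrostatic pad draws far
    # less power than flapping -- not the Harvard team's measured values.
    FLIGHT_POWER_MW = 19.0    # placeholder: power to keep the wings flapping
    PERCH_POWER_MW = 0.02     # placeholder: power to keep the pad charged

    def power_draw(perched):
        """Return the (placeholder) power draw for the current mode."""
        return PERCH_POWER_MW if perched else FLIGHT_POWER_MW

    # Perching for ten minutes and flying for one averages well below flight power.
    minutes_perched, minutes_flying = 10, 1
    avg = (power_draw(True) * minutes_perched
           + power_draw(False) * minutes_flying) / (minutes_perched + minutes_flying)
    print(round(avg, 2), "mW average vs", FLIGHT_POWER_MW, "mW in continuous flight")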

Developing an internally powered version of the RoboBee could take another five or ten years, unfortunately. When that does happen, tiny insectoid robots like this could fly around and land in order to spot check pollution levels or yes, spy on people.

 

17 May

MIT creates origami robot you can swallow


Robots are not all huge walking contraptions, ripe for abuse by human engineers. Some of them are small enough to operate inside the human body, and that’s exactly what a new robot developed at MIT is all about. The “ingestible origami robot” can fold up inside a pill to be swallowed. After unfolding itself, the robot can be steered around to clear obstructions or patch wounds by doctors on the outside.

It took a number of prototypes before the team settled on a material for the robot’s frame — dried pig intestine. So, this is pretty much a robot made out of sausage casing that shrinks when heated. It consists of two layers of structural sausage material with inlaid magnetic components. A pattern of slits in the structure determines its folded shape.

The origami bot was developed at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) with the aid of researchers from the University of Sheffield and the Tokyo Institute of Technology. As the name implies, the ingestible origami robot begins life folded up inside a pill. For its test, MIT scientists used a pill-shaped ice capsule to move the robot through a simulated esophagus and stomach. Once it’s in the stomach, the ice melts and the robot unfolds, ready for action.

The origami bot can propel itself (via an external magnetic field) through a process called “stick-slip” motion. The robot’s flexible surface sticks to the stomach via friction when it moves, but it can slide along freely when the body flexes to change shape. The biocompatible makeup of this robot means it’s more flexible than similar robots, and that allows it to get about 20% of its movement from simple paddling force from its fin-like shape.
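
A toy model helps illustrate the stick-slip idea; the step sizes below are purely illustrative placeholders, not MIT’s control parameters: each cycle, friction holds part of the body while the rest is dragged forward, so the robot ratchets ahead by a small net distance per cycle.

    # Toy model of "stick-slip" locomotion: the body ratchets forward in small
    # net steps as friction alternately grips and releases. The step sizes are
    # illustrative placeholders -- not the MIT/CSAIL control parameters.
    STICK_ADVANCE_MM = 1.0   # placeholder: advance while friction grips the stomach
    SLIP_BACK_MM = 0.3       # placeholder: slight backslide while the body slips

    def distance_after(cycles):
        """Net distance travelled after a number of stick-slip cycles."""
        return cycles * (STICK_ADVANCE_MM - SLIP_BACK_MM)

    print(distance_after(50))  # 35.0 mm after 50 cycles, under these placeholder values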

As for what it does, the initial test was the removal of a battery that had become embedded in the stomach lining. Thousands of people (mostly children) swallow coin cell batteries every year, and they can cause serious issues if they become stuck. The origami robot is quite adept at sliding over to the battery and dislodging it so it can be safely shepherded through the digestive tract. It can do this, of course, because it’s equipped with a magnet that allows doctors to control it from outside the body. MIT researchers also believe the biocompatible nature of the origami bot could allow it to patch internal wounds and give the body time to heal.

The team hopes to move on to tests in living patients soon, possibly with origami bots that can power themselves without an external magnetic field.

 

15 Apr

MIT reveals how its military-funded Cheetah robot can now jump over obstacles on its own

Scientists at MIT’s Biomimetic Robotics Lab have now trained their robotic Cheetah to see and jump over hurdles as it runs, making it the first four-legged robot to run and jump over obstacles autonomously. The Cheetah’s previous greatest accomplishment was being able to run untethered.

MIT researchers Hae-Won Park, Patrick Wensing, and Sangbae Kim first tested the robot’s agility on a treadmill in their lab, then let it off its leash to see if it could run and jump on its own. The Cheetah, which weighs about 70 lbs, cleared 18-inch hurdles while moving at a speed of 5 mph. The robot can run at 13 mph on a flat course.
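
As a rough back-of-the-envelope illustration, a simplified ballistic estimate rather than MIT’s actual jump planner, the vertical takeoff speed needed to clear a hurdle of a given height follows from basic projectile physics:

    import math

    # Simplified ballistic estimate of the vertical takeoff speed needed to clear
    # a hurdle, ignoring body geometry and leg dynamics -- not MIT's jump planner.
    G = 9.81                     # gravitational acceleration, m/s^2
    hurdle_m = 18 * 0.0254       # an 18-inch hurdle expressed in meters
    forward_mps = 5 * 0.44704    # 5 mph forward speed expressed in m/s

    v_vertical = math.sqrt(2 * G * hurdle_m)   # speed needed to rise by hurdle height
    print(round(v_vertical, 2), "m/s vertical at", round(forward_mps, 2), "m/s forward")
    # ~3.0 m/s of vertical launch speed, which the planner must generate mid-stride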

MIT will showcase the robot’s new skills at the DARPA Robotics Challenge Finals in California on June 5 and 6.

The Cheetah project is funded by DARPA’s Maximum Mobility and Manipulation program.


© 2015-2020 COBRA Softwares Pvt Ltd. All rights reserved.