The rise of industrial robots in the 1980s led to a major evolution of Henry Ford’s assembly line concept. Routine tasks that had been handled by factory workers began to be performed by robots with greater accuracy and efficiency. With the spread of device-to-device communication and IoT protocols, robots took over a serious amount of responsibility from factory workers. This movement was initially led by German high-tech firms that branded the shift as the fourth industrial revolution. On the opposite side was China, taking advantage of its access to low-cost human labor as an alternative to automation. Just a few years ago, however, China flipped its strategy 180 degrees and became the biggest advocate of automation, investing aggressively in buying robots and developing its own robotics know-how through government-led initiatives. So the technology won. There is no question about the need for automation anymore, and major manufacturing companies, in China, Germany and the rest of the world, are racing to automate their industrial processes to increase productivity.
Everything that can be automated will be automated, according to Zuboff’s law. However, some things are more difficult to automate than others: non-repetitive tasks that require a higher level of cognition and the ability to adapt to unknown conditions. These tasks can be defined as the last mile in factory automation. Tesla, the electric car manufacturer, was recently criticized by analysts for automating its assembly line more than necessary. The reason for the criticism was that the cost of automation exceeded the cost of a human-led assembly line. This may be a valid argument today, but the solution should be found in raising the bar for commercially viable automation, not in going back to lower levels of automation.
So if we are moving towards full automation and lights-out manufacturing, how can the last mile in factories be automated without overspending and over-engineering? Autonomy, and therefore Artificial Intelligence, seems to be the answer.
There is a slight but important difference between automation and autonomy. Autonomy is the state of being able to make independent decisions in situations that have not been experienced before. By definition, autonomous systems are always automated, but the reverse is not always the case: a system may be fully automated but not autonomous. “Blind Automation” is the term we use for this category.
Systems that are blindly automated are based on hard-coded rules, as in expert systems, and they tend to be very difficult to reconfigure if conditions or requirements change over time. Autonomous systems, in contrast, generalize the rules for decision making across different scenarios by allowing a higher level of abstraction in their programming.
For example, in the blind automation scenario, a robot may be programmed to go to coordinates x, y, z in order to pick up an object. This works without problems as long as the object is found precisely at coordinates x, y, z. However, if for any reason the object is positioned at a slightly different location, the robot will still go to x, y, z, fail to pick up the object, and continue executing the rest of its commands without noticing the problem. Eventually, every step subsequent to this failure will also fail.
A better strategy for performing the same task is to use a robot with vision sensors and program it to go to the object’s position (as a variable) to pick it up. Regardless of where the object is positioned, the robot will localize it using its sensors and successfully pick it up. The difference between the two programming paradigms, “go to x, y, z” versus “go to the object’s position”, makes the system far less prone to mistakes under unexpected conditions, which is crucial for last-mile factory automation.
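The contrast between the two paradigms can be sketched in a few lines of Python. The `Robot`, `Pose` and `detect` names below are hypothetical stand-ins for a real robot controller and vision system, not an actual API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Pose:
    x: float
    y: float
    z: float

class Robot:
    """Toy robot model: tracks its position; a pick succeeds only if the
    gripper ends up within tolerance of where the object actually is."""
    def __init__(self) -> None:
        self.position = Pose(0.0, 0.0, 0.0)

    def move_to(self, pose: Pose) -> None:
        self.position = pose

    def try_pick(self, object_pose: Pose, tolerance: float = 0.01) -> bool:
        p, o = self.position, object_pose
        return (abs(p.x - o.x) < tolerance and
                abs(p.y - o.y) < tolerance and
                abs(p.z - o.z) < tolerance)

def blind_pick(robot: Robot, object_pose: Pose) -> bool:
    # Blind automation: the target coordinates are hard-coded at
    # programming time, regardless of where the object really is.
    robot.move_to(Pose(1.0, 2.0, 0.5))
    return robot.try_pick(object_pose)

def sensor_pick(robot: Robot,
                detect: Callable[[], Pose],
                object_pose: Pose) -> bool:
    # Autonomous variant: "go to the object's position", wherever it is.
    observed = detect()          # e.g. a vision system localizing the object
    robot.move_to(observed)
    return robot.try_pick(object_pose)
```

If the object shifts by even a few centimeters from the programmed coordinates, `blind_pick` fails silently while `sensor_pick` still succeeds, which is exactly the failure mode described above.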
In this scenario, where robots make autonomous decisions by using real-time sensor data, a feedback loop is established between the physical and digital environments. The constant flow of information from physical to digital and vice versa creates immense amounts of structured data, which is the fuel for the AI-powered autonomous factories of the future. Such action-measurement-action strategies allow systems to self-improve over time, creating a data network effect in manufacturing: the more a process is executed, the more efficient it gets. Given enough data, this can enable super-human performance across many manufacturing domains.
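As a toy illustration of such an action-measurement-action loop, the sketch below models a robot with an unknown systematic placement bias. Each cycle measures the residual error and folds a fraction of it back into a correction term, so every execution leaves the system better calibrated than the last. The bias and gain values are invented for illustration:

```python
def run_cycles(n_cycles: int,
               true_bias: float = 0.3,
               gain: float = 0.5) -> list[float]:
    """Minimal feedback loop: act, measure the residual error, then use
    the measurement to improve the next action. Returns the absolute
    error observed at each cycle."""
    correction = 0.0
    errors = []
    for _ in range(n_cycles):
        error = true_bias - correction   # measurement step
        errors.append(abs(error))
        correction += gain * error       # learning step
    return errors
```

With these parameters the error halves on every cycle, a (highly idealized) picture of the data network effect: each execution makes the next one better.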
Robots making hundreds of decisions every second, and receiving hundreds of measurements from the environment and from other devices, also require massive computation and storage capabilities. Despite the popularity of cloud computing in most applications today, decentralized compute power is necessary for most robotic applications, and for IoT devices in general: the throughput between the edge and the cloud over internet connectivity is simply not enough to process all the data generated by sensors and make sensible decisions in real time. So edge computing, or fog computing (dedicated servers on premises acting as a layer between individual devices and the cloud), will likely be the norm in the autonomous factories of the future, barring a breakthrough that delivers near-infinitely fast and reliable internet connectivity. Unlike compute power, however, the storage of important data that has been filtered on the edge should be centralized, in order to achieve the best decision-making models at every location.
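A minimal sketch of this edge/cloud split, assuming a hypothetical stream of scalar sensor readings: each window is summarized locally on the edge, and only the compact summary plus any statistically anomalous samples are forwarded to the cloud:

```python
import statistics

def edge_filter(readings: list[float],
                window: int = 100,
                z_threshold: float = 3.0) -> list[dict]:
    """Reduce a raw sensor stream to per-window summaries plus outliers,
    so only a small fraction of the data crosses the factory's uplink."""
    uploads = []
    for start in range(0, len(readings), window):
        batch = readings[start:start + window]
        mean = statistics.fmean(batch)
        stdev = statistics.pstdev(batch)
        # Keep individual samples only if they deviate strongly
        # from the local distribution (simple z-score test).
        anomalies = [x for x in batch
                     if stdev > 0 and abs(x - mean) / stdev > z_threshold]
        uploads.append({"mean": mean, "stdev": stdev, "anomalies": anomalies})
    return uploads
```

Here 100 raw samples collapse into a single summary record, yet the cloud still receives the outliers it needs to improve the shared decision-making models.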
In summary, a few conclusions and predictions about the future:
- We are moving towards an autonomous, lights-out factory model in which factory robots operate on their own without human intervention.
- These factories will be powered by the data they generate themselves, and their productivity will increase over time, eventually exceeding human performance in every process involved in manufacturing.
- Factories will compute on-premise / on the edge, and they will push important data to the cloud to benefit from a pool of data shared with other factories.
About the Author
Daghan Cam is the Co-founder and CEO of Ai Build, a London-based startup developing Artificial Intelligence and Robotics technologies for the construction industry. He is also a visiting lecturer at University College London, doing research on robotic fabrication, large-scale 3D printing and parallel algorithms with GPU computing. His work focuses on developing intelligence for automating complex tasks in design and manufacturing by using computer vision and machine learning techniques.