Can computers be used to
more effectively direct technological development?
“First, we thought the PC was a calculator. Then we found out how to turn numbers into letters with ASCII — and we thought it was a typewriter. Then we discovered graphics, and we thought it was a television. With the World Wide Web, we've realized it's a brochure.”
― Douglas Adams
As discussed above, about 25 years ago, in a bid to limit CO2
emissions, motorists were given incentives to buy diesel cars, in the form of reduced Vehicle Excise Duty. The results of this misguided policy were, of course, calamitous, with many cities worldwide finding themselves in breach of legislated air quality standards. Unfortunately, the politicians who introduced the policy had missed the fact that while a diesel engine may produce less CO2 per mile than a petrol one, it also emits far more of other types of pollution. Some of the blame for this oversight could perhaps be attributed to the fact that most politicians have humanities or law backgrounds rather than a grounding in science, and so may have been less able to see the whole picture, as it were.
The amazing thing is that this policy was not limited to one or two small regions, but covered vast areas on an international scale. Just about the whole of Europe seems to have been affected by this diesel mania, with the French being particularly enthusiastic advocates. This resulted in extremely high pollution levels in Paris, and the situation eventually became so bad that a rather draconian solution was sought in the form of dramatic increases in diesel fuel tax. Of course, the average French citizen did not take kindly to the price hike, seeing it as a typical manifestation of the superior attitude of the French political class and its lack of concern for the average guy on the street. Cue massive riots and civil unrest in the form of the ‘Yellow Vests’ protests, and the burning out of God knows how many cars. One can appreciate the grievances of the protestors, but really, burning out all those cars seems a little too much; I know I would have been upset if my car had been consumed in one of their conflagrations. Anyway, in time-honoured French tradition, there were multitudinous violent demonstrations on the streets (but not quite a revolution, thank goodness), and in the end the government conceded to the demonstrators’ demands and reversed the diesel price increases – with all the lamentable long-term environmental consequences that we may expect to attend that.
So, one thing we can learn from all this is that just because something is believed by many, or even most, people, that does not guarantee that it is correct. In fact, the opposite is often true, and there are many examples of ‘urban myths’: things that are popularly believed despite being erroneous. An example, albeit a prosaic one, is provided by the process of topping up your car’s engine with oil, which is something that most of us do from time to time. There is an urban myth that if, in doing this, you mix mineral oil with semi-synthetic oil, sludging will occur. There is, though, precious little evidence for this; all you would have done in practice is create an oil mixture with an unknown specification (viscosity grade). Sludging is most commonly caused by not changing the oil at the recommended intervals. (There is another myth that once you have changed to a synthetic oil you can’t change back; in reality, you can do so at any time you wish. Indeed, most semi-synthetic oils are actually manufactured by mixing mineral and synthetic oils!)
Poe made many astute observations, and he effectively touches upon this (urban myths, not oil) in his short story The Purloined Letter, in which his brilliant private investigator Dupin is quoted as follows: “‘Il y a à parier,’ replied Dupin, quoting from Chamfort, ‘que toute idée publique, toute convention reçue, est une sottise, car elle a convenu au plus grand nombre.’” Roughly translated, Dupin is saying that you can bet that any public idea, any conventionally received belief, is nonsense, because it appeals to the greatest number of people. (By the way, Dupin, along with his narrator, was the inspiration and basis for Sherlock Holmes and Dr Watson – something that Sir Arthur Conan Doyle was very willing to acknowledge, and did so in his story ‘The Resident Patient’. Conan Doyle also famously noted at one point that Poe’s detective stories represented “a model for all time”.) The idea that the validity of a concept is more important than its popularity was also expressed in the famous quote attributed to the great Galileo Galilei: “In questions of science, the authority of a thousand is not worth the humble reasoning of a single individual.”
Away from such lofty sentiments, and getting back to the rather more everyday diesel car situation, this fashion for replacing petrol engines with diesels mercifully did not catch on in America as much as in Europe, despite the best efforts of a number of European auto manufacturers. You may ask why diesel power is such a poor option for automobiles, and the answer, of course, lies not with the volume of the pollution it generates but with its nature. Diesel engines produce nitrogen oxides (NOx), which are quite toxic, as well as fine particulates that can cause cancer – to the extent that the World Health Organisation has designated diesel exhaust as carcinogenic, which is not something it has done for petrol engine emissions.
The effects of the appalling decision to encourage motorists to invest in diesel vehicles were also exacerbated by the diesel emissions scandal (or ‘Dieselgate’, discussed above), in which VW was found to have installed, with the knowledge of senior management, software that detected emissions tests and made vehicles appear less polluting than they actually were, so that they could appear to be within emission limits and so be sold.
Statue of Galileo in Florence. (Pen and ink drawing by the author.)
This is significant because it was not a mistake: it was a deliberate effort to mislead, one that resulted in all of us breathing in much more pollution than we should have for many years, and in a vast number of deaths that could and should have been avoided. There is a case for viewing this as manslaughter, and it is not surprising that an arrest warrant has been issued in the USA for the former CEO of VW. (This gentleman, Martin Winterkorn, was pictured above. He lives in Germany, which does not currently extradite its citizens to countries outside the EU, so he is safe from arrest – as long as he doesn’t decide to visit America.)
But what has all this to do with employing computers, and perhaps deep learning/CNNs, to more effectively direct technological development? Well, auto companies have been manufacturing petrol and diesel engines for many years, and vast amounts of data are available on their respective gaseous emissions. At the same time, the health effects of inhaling different types of gases have long been of interest to public health investigators; for example, the relative toxicity of NO2 has been well known for many years. These data could have been used to train a neural network to evaluate the relative risks to health of petrol and diesel as automobile fuels, thereby highlighting the danger of a dramatic switch to diesel. Or, failing that, at least more statistical analysis could have been done.
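To make the idea concrete, here is a minimal sketch of how such a model might be set up. Everything in it – the feature set, the numbers and the risk index – is an invented placeholder for illustration; a real study would use measured per-mile emissions and epidemiological outcome data.

```python
# Hypothetical sketch: relating per-mile exhaust emissions to a health-risk
# index with a small neural network. All values are invented placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

# Features per vehicle type: [CO2 g/mile, NOx g/mile, particulates mg/mile]
X = np.array([
    [280.0, 0.06, 0.5],   # petrol-like emissions profile (placeholder)
    [230.0, 0.55, 8.0],   # diesel-like emissions profile (placeholder)
    [260.0, 0.10, 1.0],
    [240.0, 0.40, 6.0],
])
# Target: a relative health-risk index, hypothetically derived from
# public-health studies of the same pollutant mixtures.
y = np.array([1.0, 6.5, 1.8, 5.0])

model = make_pipeline(
    StandardScaler(),  # scale features so g/mile and mg/mile are comparable
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X, y)

# Compare a candidate petrol engine against a candidate diesel engine.
petrol = [270.0, 0.08, 0.7]
diesel = [225.0, 0.50, 7.5]
print(model.predict([petrol, diesel]))  # higher value = higher predicted risk
```

Even a toy like this makes the point: once emissions and health outcomes are expressed as data, comparing fuels becomes a prediction problem rather than a matter of political intuition.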
The example of identifying dangers associated with diesel emissions is, of course, a very simple one and may not have required extensive training of a deep neural network to generate a useful result. What it does show is that, again, data are king, and our decisions need to be based on real data rather than on assumptions or beliefs, no matter how widely held they may be. Automatic acceptance of mistakenly held views might explain why machines in general, and perhaps automobiles in particular, do not seem to have progressed as much as one would expect given how long they have been in existence and the prodigious resources that have been invested in their development. Auto manufacturers tend to assume that customers want something traditional – somewhat similar to what their father bought and in which they were driven around as children – despite the fact that in their father’s day global warming was a little-appreciated threat.
So, the problem may be due in part to the generally conservative nature of the sector, but limitations in the thinking related to the technological developments involved must also play a role. One of the most significant of these developments relates to the need for effective and efficient design. The traditional approach to engineering design is a trial and error method: something is designed, possibly after employing some modelling such as FEA (Finite Element Analysis); if it works well it is adopted, otherwise changes are made and it is re-manufactured. This tends to lead to a generally operational outcome, but it is very wasteful in terms of materials, labour and time, and there is no guarantee that an optimal solution will be generated.
Wouldn’t it be much better to make more use of the extensive engineering-related data that already exist, first to gain direction for technological development (e.g. to steer R&D activity for cars away from diesels and towards petrol or, even better, electric power), and then to assist with optimisation of the design of the various engineering devices/systems involved? The problem with doing this in the past has been that, although the data exist, effective utilisation has been difficult because of the under-developed data-processing methods that companies generally employ – e.g. databases that are effectively look-up tables, or very simple (usually straight-line) models. In engineering, complexity and non-linearity are the norm. The output characteristics of a given manufacturing operation usually depend upon multiple input parameters, and we cannot assume a linear relationship between inputs and outputs.
So how can we make use of the existing voluminous amounts of data? Deep learning, and CNNs in particular, are powerful techniques for recognising patterns and relationships in vast amounts of data, even when there are significant variations within the data due to variations in environmental parameters. The actual optimisation of the design of engineering devices/systems can be achieved by introducing minimal constraints (in the form of what you wish the device or system to be capable of) and then relying on the network to identify suitable solutions based upon patterns it finds in the available training data. For example, suppose you wish to manufacture an engineering
component that will fasten two other components together in the presence of forces that would tend to separate them. The constraints would be the relative orientation and position of the two components to be fastened, the forces to be exerted on them and, if possible, the maximum permissible weight and/or cost of the fastening component. The CNN could employ vast amounts of existing data to determine the optimal morphology and size of the fastening component, along with the material composition and manufacturing method that would provide the needed mechanical properties and strength. In fact, it could make use of the extensive amounts of FEA data that have been generated over the years – or perhaps even generate its own FEA data for a given problem. Here the CNN would be taking the place of the human who traditionally uses FEA systems to try to optimise designs – which may not be a bad thing, since using such systems is, in my experience, not the most exciting activity in the world as well as being very time-consuming. This interesting idea is further discussed, in slightly more detail, in just a mo.
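In the meantime, here is a deliberately minimal sketch of what that could look like in code. The constraint inputs, design outputs and training records below are all assumptions made for illustration; I have also used a small fully connected network rather than a CNN, since the constraints here are a few numbers rather than images, and in a real system the training set would be a large archive of validated past designs.

```python
# Hypothetical sketch: a network trained on records of past fastener designs,
# mapping stated constraints to suggested design parameters.
import torch
import torch.nn as nn

# Inputs:  [separating force (kN), joint angle (deg), max weight (g)]
# Outputs: [shank diameter (mm), length (mm), material class index]
model = nn.Sequential(
    nn.Linear(3, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 3),
)

# Placeholder records standing in for an archive of past, validated designs.
constraints = torch.tensor([[5.0, 90.0, 20.0],
                            [12.0, 45.0, 35.0],
                            [8.0, 90.0, 25.0]])
designs = torch.tensor([[6.0, 30.0, 1.0],
                        [10.0, 50.0, 2.0],
                        [8.0, 40.0, 1.0]])

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(2000):                       # simple regression training loop
    opt.zero_grad()
    loss = loss_fn(model(constraints), designs)
    loss.backward()
    opt.step()

# Suggest a design for a new set of constraints.
print(model(torch.tensor([[10.0, 60.0, 30.0]])).detach())
```

In practice the archive would contain thousands of such records, and the network’s suggestion would then feed into the FEA-based fine-tuning described below.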
CNNs are a rare demonstration of a true breakthrough in AI, and the above scenario is just one example of how applying AI to automate design could result in improvements in productivity and product quality. While we are here, it is worth noting that there have been numerous attempts in the past to incorporate AI into design. I myself completed a PhD in the mid-1990s concerned with the development of a knowledge-based system for assisting with the design of parts to be manufactured by powder metallurgy (or PM, an engineering process in which final parts are formed from powdered metals). A knowledge-based system (or KBS, also known as an expert system) is an attempt to represent the expertise of subject experts in a computer, to assist others who may be less expert in designing parts to be manufactured. The knowledge-based system employs rules to interrogate a proposed design to see if it is suitable for manufacture by PM. The system also included a materials selector, to ensure that steels with the needed mechanical properties could be selected by the user, and an interface to a finite element analysis (FEA) system that could be employed to fine-tune the design for the process.
The problem with all of this is that none of it really comprises a breakthrough in AI – everything this KBS did could have been done with a relational database (such as Oracle, or IBM’s DB2), perhaps also interfaced to an FEA package. My God, I worked hard on that PhD – I wrote ten papers on it, and there was much talk of knowledge-based systems being like a ‘tiger in a cage’ – but, unfortunately, the tiger never got out of the cage! In the late 1980s and 90s there was also much enthusiasm for neural networks (NNs), and I dabbled in this too, developing NNs for tasks such as irregular metal powder characterisation and the calibration of cameras in computer vision.
The problem with all these systems and approaches has to do with something called knowledge engineering, which is the process by which the domain knowledge – expertise in a particular area, such as design for PM – is captured and represented within the computer. Knowledge engineering can be very laborious, and the way it is undertaken has an enormous influence on whether the KBS will be of any use in practice. Typically, a KBS will either not go into enough detail to be practically useful or, if it does, the effort required may necessitate the area addressed being so limited as to make the KBS of little general use. Moreover, knowledge engineering (for example, identifying factors or features that are particularly important for solving particular problems, i.e. ‘hand-crafting’ features of interest) is not only laborious but is done differently by different people, which limits standardisation and applicability. The great advantage of CNNs is that the hand-crafting is eliminated: identification of features of interest is achieved automatically within the CNN, based on the available data, so that the subjectivity and possible errors associated with traditional identification of useful features are taken out of the equation. The result is something that really does approach artificial intelligence in an almost uncanny way. Whereas we used to talk about AI in design in the 1990s, now we can actually implement it!
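To see why such a KBS amounts to ‘rules plus a database’ rather than a breakthrough, consider a toy version of the kind of hand-crafted rule it would contain. The specific checks and thresholds below are invented for illustration; real design-for-PM rule sets are far larger and more nuanced.

```python
# Toy illustration of 1990s-style KBS rules interrogating a proposed powder
# metallurgy (PM) part. Thresholds are invented placeholders.
def check_pm_design(part):
    """Return a list of rule violations for a proposed PM part."""
    problems = []
    if part["length_mm"] / part["diameter_mm"] > 5.0:
        problems.append("Aspect ratio too high for uniform die compaction.")
    if part["min_wall_mm"] < 1.5:
        problems.append("Wall section too thin to press reliably.")
    if part["has_undercut"]:
        problems.append("Undercuts cannot be formed in rigid dies.")
    return problems

proposed = {"length_mm": 40, "diameter_mm": 6, "min_wall_mm": 2.0,
            "has_undercut": False}
print(check_pm_design(proposed) or "No rule violations found.")
```

Every one of those rules had to be elicited from an expert and typed in by hand; a CNN, by contrast, would infer the equivalent regularities directly from examples.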
Interestingly, CNNs allow us to do things that many people in the 1990s thought could be done with neural networks. For example,
consider a system able to identify pencils in sets of images. Many people used to think that this could be done by training NNs with all sorts of images of pencils, whereas in reality it could not: the images had to be manually processed so that all pencils were presented to the camera in similar ways, under similar lighting conditions and so on; otherwise there was no guarantee that a pencil could be correctly identified. CNNs, in contrast, can use all sorts of training data and, if there is enough of it, will reliably identify a pencil in a given image.
Coming back to the fastening component: by training with extensive databases of previously designed components, a CNN could suggest a design for a given requirement and then go down to lower levels, recommending how the component could actually be manufactured. Earlier, finite element analysis was mentioned. This is a numerical modelling technique that can be used to simulate the strain that a component with a given geometry and material type would experience when subjected to a specified load. Those who studied physics or engineering at school or college will remember that strain = change in length ÷ original length, and is a measure of how much something distorts (or stretches) when experiencing typical working forces. FEA is, in fact, a very involved subject, particularly when plastic deformation is involved – this is where a component stretches but does not return to its original size when the load is removed. Apart from involving very complex maths, the setups required and the computations involved are extensive and very time-consuming. The subject is indeed so involved that many bright individuals specialise in it and can spend their entire working lives on it. Imagine that: spending your entire life on FEA simulation of the pressure applied to a metal pipe coupling, to see what chance there is of cracks appearing during use. Very boring? Perhaps, but if the metal coupling is for something critical, such as the primary cooling circuit of a nuclear reactor, then no doubt it’s needed and somebody has to do it.
Thinking about it, though, maybe a better solution is to employ a power source that could not be subject to catastrophic environmental effects in the event of a system failure. Bill Gates claimed to have actually achieved this for nuclear power with his ‘travelling wave’ reactor – but it was unlucky for him that, just as he was trying to launch this new technology, the Fukushima Daiichi nuclear disaster occurred, which turned many people (including his potential investors) off nuclear power of any sort for the foreseeable future. Later on, I will return to the subject of nuclear power.
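Returning for a moment to the strain definition above, here is the whole calculation, with illustrative numbers of my own invention. It also makes a point worth keeping in mind: the formula itself is trivial; what consumes the time in FEA is the vast number of coupled, element-level calculations performed across a meshed component.

```python
# Quick numeric illustration of strain = change in length / original length.
original_length_mm = 200.0
loaded_length_mm = 200.4          # length under a typical working load (assumed)

strain = (loaded_length_mm - original_length_mm) / original_length_mm
print(f"strain = {strain:.4f}")   # 0.0020, i.e. a 0.2% elongation
```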
But could CNNs be used in the design of components much more complex than simple fasteners? Perhaps CNNs could help automate FEA itself, generating vast amounts of data for various types of engineering components. The emerging data could then be used to train another CNN to advise on the design of similar components. The expected result would be an AI system able to reliably quantify in-service strain for components without the time-consuming computations that conventional FEA demands every time a slightly modified component is to be designed (a trained network can be consulted very quickly). For completely new designs, a CNN could perhaps be presented simply with the constraints of a problem and then be employed to generate innovative design solutions, as well as providing information for directing the FEA analysis. The CNN may also be able to employ the FEA system in a more systematic (i.e., again, less trial and error) way, which should improve the chances of reaching an optimal solution. All the simulation data thus generated could be stored and used in future CNN advisory systems for related designs, thereby providing objective and quantitative assistance with the fine-tuning that forms the final stage of the component design process. In this way, we could replace much of the current ‘black art’, or conventional practice, with component designs driven by real data and employing optimal 3D morphologies and industrial processes. Such optimisation would help avoid ‘over-engineering’ and improve performance, as well as minimising costs – and, of course, this would apply to just about all fields of engineering, not only mechanical engineering and component manufacture.
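A hedged sketch of that surrogate idea follows. In a real project the training labels would come from archived FEA runs; here a made-up smooth function, loosely inspired by thin-walled-pipe behaviour, stands in for them, and all the parameter names and ranges are assumptions for illustration.

```python
# Hypothetical sketch: train a network on (geometry, load) cases labelled with
# peak strains computed offline by FEA, then query it instantly for new designs.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Features: [wall thickness (mm), outer diameter (mm), applied pressure (MPa)]
geometry_and_load = (torch.rand(500, 3) * torch.tensor([4.0, 40.0, 20.0])
                     + torch.tensor([1.0, 10.0, 1.0]))
# Labels: peak strain per case. A real project would read these from archived
# FEA results; this made-up smooth function merely stands in for them.
t, d, p = geometry_and_load.T
peak_strain = (p * d / (2000.0 * t)).unsqueeze(1)

surrogate = nn.Sequential(
    nn.Linear(3, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(3000):                       # full-batch regression training
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(geometry_and_load), peak_strain)
    loss.backward()
    opt.step()

# Instant strain estimate for a slightly modified design, no new FEA run.
new_case = torch.tensor([[3.0, 40.0, 12.0]])
print(surrogate(new_case).detach())
```

Once trained, such a surrogate answers in milliseconds, which is what would allow a design loop to explore thousands of candidate geometries instead of the handful a human analyst has time to simulate.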
The Fukushima Daiichi nuclear disaster ended Bill Gates’ ‘travelling wave’ reactor dream. (Illustration by the author.)
To summarise, deep learning networks such as CNNs represent a potentially powerful approach for informing strategies for adopting preferred technologies, as well as for eliminating the traditional ‘black art’, or guesswork, that is still widely employed in engineering. By providing data-driven optimal solutions that work the first time, they surely comprise an initial step along the Martian road. Or, to put it another way, all these far-out ideas about deep learning directing engineering are blowing my mind! And if it looks as though CNNs can help direct engineering design, is there any chance they can do the same for science research?