At this point in the book, I hope that you’re inspired and ready to leverage AI within your organization, guided by AIPB. In this chapter, I share some final thoughts on the importance of executive leadership, followed by an overview of AI trends and a look ahead at the future.
If you are a senior-level executive, I would like to extend an extra thanks for reading this book. Why? Two reasons. First, executive leadership understanding and buy-in are critical to advancing AI initiatives and to helping ensure initiative success. Second, many senior executives become far removed from the areas they oversee in their business and rightfully become concerned mostly with strategy rather than tactics and subject matter expertise. For good reason, businesses need people to own strategy, P&L, operations, and other executive-level responsibilities for the business and each function. There is a definite downside, however, to this removal and to focusing only on the 30,000-foot (or whatever thousand) view. It is often expressed as a need for an executive summary for everything.
In my opinion, it’s very important for executive leadership to have an appreciable amount of subject matter expertise in areas core to the business’s offerings and respective lines of business. This is especially true in the case of AI, machine learning, and data science. These fields can be difficult for people to understand, and when the decision makers are the ones who don’t understand them, it can be very difficult to move forward with advanced analytics initiatives.
As a counterpoint, it’s fine if your understanding of these fields is only at the executive-summary level, but that means trust and decision making must be delegated to those with greater expertise. The reason is simple. I strongly believe that a major reason why many companies are still apprehensive about adopting AI, despite knowing that they need AI to achieve certain goals, produce certain outcomes, or remain competitive, is that key decision makers don’t understand it well enough. That’s understandable. Others and I are working on demystifying and simplifying AI, but more work still needs to be done.
I also think that it’s extremely important for executive leadership to have an appreciable amount of product acumen. Companies are built around a product, including when the product is a service. The product is in fact the company glue, especially when viewed in terms of a hub-and-spoke model that I have created, in which the product is the hub and everything else is a spoke. Figure 16-1 presents the Tech Product Hub-and-Spoke Model. Note that the AIPB expert categories are the spokes.
In general, executives and managers build and run businesses around a product or suite of products, designers design the product, builders build the product (e.g., software engineers), testers test the product, and scientists help build, understand, and optimize the product. The Hub-and-Spoke Model holds for entire business functions, as well—marketers market the product; sales folks sell the product; customer success folks support the product; product development designs, builds, and tests the product; operations operate a business around a product; finance and accounting track and manage money invested in or generated by the product; and product managers manage almost everything related to the vision, strategy, development, and success of the product.
Some of the greatest technology companies were created and/or run by CEOs who were former product managers and/or subject matter experts. Steve Jobs is an obvious example, but the list also includes Sundar Pichai (Google), Satya Nadella (Microsoft), Marissa Mayer (Yahoo), and Indra Nooyi (PepsiCo).1
So, let’s move on to final thoughts about AIPB and AI-based scientific innovation. To stay relevant, remain competitive, and especially to get ahead of the competition, companies must continue to innovate, particularly around data and analytics. With exploding increases in data generation and decreases in costs to store, process, and analyze the data, there has never been a more important time to develop a vision and strategy on how you will harness and use your data to create better human experiences and business success.
As we’ve covered, advanced analytics techniques such as AI and machine learning offer amazing opportunities for innovation and value creation. That said, many people still struggle to understand what exactly AI and machine learning are, how they differ from data science, and how all of these fields can drive real-world value. Additionally, pursuing AI offers huge possibilities for success, but also for failure, for the many potential reasons discussed.
The key is to begin today. Don’t wait, don’t create barriers to entry, and, most important, don’t get left behind. A great quote from Andy Weir is, “A good plan today is better than a perfect plan tomorrow.” Make a plan today for incorporating AI into your business. Certain aspects of AI readiness such as leadership, cultural shift, and executive-level buy-in are a must; the rest can be worked on as you go. Ensure that readiness-related gaps are filled and required shifts are prioritized and made. Also, approach data and analytics in terms of increasing levels of maturity, along with the maturity dependencies and tradeoffs among uncertainty, risk, and reward. With the required leadership, cultural shift, and executive buy-in in place, start small and increasingly incorporate machine learning and other AI techniques in real-world applications for your business.
Also, keep in mind that value comes not only by way of ROI, but also as improvements to human experiences and delight. Optimize for this as much as you do for business objectives and KPIs. What benefits your customers and users will, in turn, benefit your business.
AIPB, and the guidance in this book, will help executives and managers ensure beneficial and successful AI pursuits due to its unique and purpose-built North Star, benefits, structure, and approach. Build great AI-driven products, services, and solutions—period!
Let’s discuss the future of AI and the things you should expect to hear more about and watch for in the coming years.
Unlike many established and more stable digital technologies such as mobile and web, AI is an incredibly dynamic and rapidly advancing field, changing on a daily basis. That change is happening not only from a technical and capabilities perspective, but also in the explosion of AI use cases and applications in the real world. AI is making a significant transition from being something that offers huge potential to something that is actually driving real and significant value for both people and business.
This will certainly continue as the capabilities and potential applications of AI expand in the future, and just as important, as people and companies better understand what AI is, how it can create value, and how to realize that value successfully. Hopefully, this book and AIPB have provided a framework to help develop that understanding and guide the process of AI-based scientific innovation. As I said in Chapter 1, if by simply understanding the concepts presented by AIPB and the contents of this book, executives and managers are able to progress further ahead with advanced analytics than where they are today, that’s a win.
With that, let’s discuss the future of AI in categories. The coverage here is high level and brief. You are encouraged to further research specific areas of interest, and don’t worry if you’re not familiar with some of the jargon that follows. My goal is to provide you with a big picture view of what’s in store for the future of AI in both the short and long term.
AI is still in its infancy, and we’re only now beginning to see a major increase in real-world applications and use cases. Many of these new applications come from either the most prominent tech companies such as Amazon, Google, and Netflix, or from much smaller innovative and disruptive companies.
Many large enterprise companies have the resources to hire AI talent and pursue AI initiatives but are unable to make progress, or worse, experience failure due to factors such as a lack of data and analytics readiness and maturity, and an inability to properly understand and address many of the key considerations associated with AI. This puts these companies in a precarious position because there are some very small and highly agile startups that are more than happy to pursue AI and disrupt incumbents and industries.
As a result of these challenges, there is an increasing demand for data and advanced analytics leadership, not only to help facilitate better understanding and drive the creation of visions and strategies around AI, but also to make AI more understandable in general to both businesses and consumers. This means an increasing demand for easy-to-understand AI and machine learning training for executives and managers that sheds light on how advanced analytics can be used within their organization and helps facilitate opportunity identification, ideation, and vision development around AI.
Although the technical details might be well understood by only highly specialized AI researchers and machine learning engineers, executives and leaders need to get to a point where they can think of ways to achieve goals by using their data to create deep actionable insights, augment human intelligence, automate repetitive tasks and decision making, predict outcomes, quantify feedback, and much more. From a maturity perspective, this means gaining a better understanding as described, and also graduating from traditional BI and descriptive analytics to more advanced predictive and prescriptive analytics. This is the only way to unlock the true potential of data—an impossible task if you employ only simpler, traditional analytics.
Part of the increased understanding around AI includes the realization that data is gold, and that you cannot place enough value on data readiness and quality. You can’t build a brick house without bricks, just as you can’t use AI to create new sources of value, differentiation, and competitive advantage without high-quality data. Companies are beginning to better understand this, and the next step is data democratization. Data silos stifle AI innovation and progress. Becoming a data-driven and/or data-informed organization requires data, and access to as much of it as possible across data sources and business functions.
There’s also a severe shortage of AI and machine learning talent. To address it, many companies are working on developing tools to help democratize, simplify (reduce complexity through abstraction), and even automate some of the work normally carried out by data scientists and machine learning engineers. Automated machine learning (AutoML) is one of those areas undergoing active development and advancement. AutoML enables those who have limited expertise to train and optimize machine learning models; related offerings include AWS SageMaker and Google’s AI Hub, and Google has also released Kubeflow Pipelines to help simplify machine learning workflows.
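To make the idea concrete, here is a minimal sketch of what AutoML-style tooling automates, using scikit-learn’s GridSearchCV to search over hyperparameters automatically. This is only an illustration of the underlying concept; commercial AutoML offerings automate far more (algorithm selection, feature engineering, deployment), and the dataset and parameter grid below are illustrative assumptions.

```python
# A minimal sketch of the core idea behind AutoML: automated hyperparameter
# search with cross-validation, here via scikit-learn's GridSearchCV.
# The dataset and parameter grid are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

# Candidate hyperparameter values to evaluate automatically
param_grid = {"n_estimators": [50, 100, 200], "max_depth": [3, 5, None]}

search = GridSearchCV(RandomForestClassifier(random_state=42), param_grid, cv=5)
search.fit(X, y)  # tries every combination with 5-fold cross-validation

print("Best parameters:", search.best_params_)
print("Best cross-validated accuracy:", round(search.best_score_, 3))
```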
Some of the automated aspects of AutoML are really useful, especially for those who have the requisite expertise, although I’m personally very wary about handing the keys to a manual-transmission Ferrari to someone who has never driven before and doesn’t have a driver’s permit. There’s a lot of potential for those without the requisite expertise to make poor decisions because they don’t know which trade-offs, considerations, and techniques to weigh when training models, which can result in sunk costs, lost time, and failed initiatives. In the worst case, this can mean taking on a significant amount of liability risk, with potentially life-or-death consequences.
Analytics democratization and open data are also areas of increased focus. There’s been a massive proliferation of freely available data, machine learning models, and open source code. Models are also more portable and shareable thanks to standards such as the Predictive Model Markup Language (PMML).
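As a rough sketch of that portability, the open source sklearn2pmml package (an assumption here, not something discussed earlier in the book) can export a fitted scikit-learn pipeline to a PMML file that other tools and languages can then consume. Note that it requires a Java runtime because it delegates conversion to JPMML.

```python
# A hedged sketch of exporting a trained model to PMML for portability,
# assuming the open source sklearn2pmml package is installed and a Java
# runtime is available (it delegates conversion to JPMML).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline

X, y = load_iris(return_X_y=True)

# Wrap the estimator in a PMMLPipeline so it can be serialized to PMML
pipeline = PMMLPipeline([("classifier", LogisticRegression(max_iter=1000))])
pipeline.fit(X, y)

# Write the fitted pipeline to a PMML file that other tools can load
sklearn2pmml(pipeline, "iris_logreg.pmml")
```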
Lastly, today’s advanced AI techniques such as deep learning require significant computing resources, training costs, and time. There are a lot of people focused on improving efficiency through algorithmic and hardware advancements, as covered earlier. This will help accelerate model learning and training, which results in reduced costs and time. It also enables quicker experimentation and hypothesis testing, and a more Agile approach overall.
Everything discussed in this section will enable increased understanding, much more widespread adoption at the manager and practitioner levels, and ultimately a proliferation of many more real-world AI use cases and applications.
AI and machine learning are very hot and active areas of research. The latest research mostly involves advancements in algorithms and techniques. Some of the most exciting areas of research include natural language (NLP, NLG, NLU, machine translation), deep learning, reinforcement learning, transfer learning, personalization, recommendation systems, generative AI, and information retrieval (e.g., speech and visual search). Refer to Chapter 5 for a refresher.
Also, techniques such as deep reinforcement learning are exploring ways to create AI that is self-directed and able to self-learn over time, whereas other approaches are trying to make AI better able to solve multiple problems simultaneously, or in other words, to multitask. Other techniques are being developed to help solve the cold-start problem that we previously discussed. New AI techniques are also being developed for causal inference, which has traditionally been carried out with A/B and multivariate testing.
In addition, researchers are trying to find ways to make applying AI easier and more efficient. All advanced analytics techniques require data, and large amounts of high-quality, prepared data can be difficult and/or expensive to come by. As such, techniques such as few-shot learning are being developed to enable AI with relatively small amounts of data and less stringent data quality requirements. Researchers are also trying to find ways to improve algorithmic efficiency (e.g., of neural networks) in order to train models faster and reduce costs. This includes developing simpler algorithms and models that can match or exceed the results of today’s state-of-the-art algorithms while requiring less data.
Another very interesting area of development is around how machines learn in general. This includes online learning, incremental learning, and out-of-core (aka external-memory) learning. All of these techniques allow learning to happen on an ongoing and incremental basis as new data is fed back into the system, or in situations where datasets are too large to fit in a single computer’s memory.
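For readers who want to see what incremental learning looks like in practice, here is a minimal sketch using scikit-learn’s partial_fit API, which updates a model chunk by chunk instead of training on the full dataset at once. The simulated data stream below is an illustrative assumption.

```python
# A minimal sketch of incremental (and out-of-core) learning: the model is
# updated one chunk at a time via partial_fit, so the full dataset never has
# to fit in memory. The "stream" of chunks here is simulated with NumPy.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
classes = np.array([0, 1])  # all possible classes must be declared up front

rng = np.random.default_rng(0)
for _ in range(10):  # pretend each iteration is a new chunk of data arriving
    X_chunk = rng.normal(size=(500, 20))
    y_chunk = (X_chunk[:, 0] + X_chunk[:, 1] > 0).astype(int)
    model.partial_fit(X_chunk, y_chunk, classes=classes)

print("Accuracy on the most recent chunk:", round(model.score(X_chunk, y_chunk), 3))
```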
In the age of big data and with today’s huge datasets getting exponentially bigger with the proliferation of technologies such as IoT, scaling AI is becoming more important than ever. This includes moving and processing enormous datasets for consumption by AI algorithms as well as in terms of deploying production AI solutions that are able to perform reliably and consistently at scale. There are ongoing software advancements to help with this.
There is also a proliferation of open source, proprietary, and cloud-based software available for building AI solutions. This includes software packages, libraries, platforms, frameworks, APIs, SDKs, and collaboration tools. It also includes databases and data management systems such as those used for efficient analytics (e.g., data warehousing and data lakes).
Modern, advanced AI and machine learning techniques such as deep learning, along with the need to train these models on large amounts of data, increasingly require highly specialized, high-performance hardware, nowadays referred to as AI chips.
One of the major recent hardware advancements for meeting the demands of today’s AI was to use GPUs instead of traditional CPUs for processing large amounts of data and training AI models. GPUs are much better suited for dealing with large amounts of data and performing the underlying mathematical computations that today’s AI algorithms require. Other specialized hardware in this category includes application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs).
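As a simple illustration of the difference this hardware makes, the following hedged sketch runs the same large matrix multiplication, the core operation behind deep learning, on a GPU when one is available and otherwise falls back to the CPU. The matrix sizes are arbitrary assumptions chosen only for illustration.

```python
# A minimal sketch of offloading the math behind deep learning to a GPU with
# PyTorch. The matrix sizes are arbitrary; the point is that the same code
# runs on CPU or GPU, typically far faster on the latter.
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
print("Running on:", device)

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

start = time.time()
c = a @ b  # large matrix multiply, the dominant operation in neural networks
if device == "cuda":
    torch.cuda.synchronize()  # wait for the GPU kernel to finish before timing
print("Matrix multiply took", round(time.time() - start, 4), "seconds")
```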
Certain companies have created branded and proprietary AI chips for these applications. NVIDIA is very well known for its GPUs, for example. Google has created an ASIC it calls a tensor processing unit (TPU) for intensive machine learning tasks. Intel has its own chip called the Intel Nervana Neural Network Processor (NNP), and it is teaming up with Facebook to create a new chip called the NNP-I (the I stands for “inference”).
Client/server and cloud computing have been the primary computing architectures for quite a while. As the internet and technology in general have grown exponentially in scale, computing has advanced accordingly in order to keep pace. That includes both horizontal and vertical scaling.
With today’s increasing focus on mobile devices and performance in general, new computing architectures are becoming much more relevant and popular. This includes increasing demand for offline computing abilities; for instance, the ability to use and benefit from applications even without a network connection.
Two very exciting areas of future development that are both highly relevant to AI are edge and fog computing. In traditional cloud-computing architectures, data is passed back and forth between clients (e.g., mobile devices, web browsers) and servers (cloud based or on-premises). The time required for data transmission and processing between clients and cloud servers is overhead that can be significant and nonperformant in some cases.
To meet increasing demands for performance and powerful real-time computing closer to the source of data (e.g., client, sensor), edge and fog computing are gaining steam. Edge refers to devices themselves; for example, a mobile phone, tablet, or sensor. Fog typically refers to the gateway between devices and the cloud; for example, an internet gateway. In these cases, computing and data storage are shifted from the cloud to closer to the devices and other generators of data. This can result in major speed and performance increases.
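One common way this plays out for machine learning specifically is converting a trained model into a compact format that can run directly on an edge device. The hedged sketch below uses TensorFlow Lite for that conversion; the tiny Keras model is an illustrative assumption, not a model discussed elsewhere in the book.

```python
# A hedged sketch of preparing a model for edge deployment: convert a small
# Keras model to TensorFlow Lite so inference can run on-device (phone,
# tablet, sensor gateway) instead of round-tripping to the cloud.
import tensorflow as tf

# A tiny illustrative model; in practice this would be your trained model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Convert to the compact TFLite format used by mobile and embedded runtimes
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print("Wrote", len(tflite_model), "bytes of TFLite model")
```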
As AI continues to advance and evolve in its applications, it is beginning to converge with other technologies, and people are recognizing the potential value of integrating AI into a multitude of technology solutions. For me, convergence is most apparent when certain applications no longer seem to be powered by AI, a phenomenon called the AI effect (discussed further shortly).
Personal assistants (e.g., Alexa, Siri) are becoming more like that all the time—they are thought of more as assistants than as AI. These assistants represent the convergence of AI, audio hardware such as microphones and speakers, electronic hardware, and internet connectivity (IoT). AI is also becoming increasingly integrated into established technologies. Examples include recommendation engines and personalization in eComm and mComm experiences.
There are many current and future areas of AI convergence and integration. Examples include the following:
Autonomous machines and vehicles
Robots and robotic process automation (RPA)
Control systems
Checkout-free and line-free shopping (e.g., Amazon Go and sensor fusion)
IoT and intelligent systems (e.g., smart cities, smart grids)
Computer vision using specialized cameras, light detection and ranging (LiDAR), and other forms of sensing
Fog and edge computing (e.g., AI deep learning models on mobile devices)
Blockchain
Quantum computing
Simulation and digital twins
Predictive, prescriptive, and anomaly-detecting AI is also being integrated into more traditional processes associated with information technology, supply chains, manufacturing, transportation, and logistics.
Lastly, speech is poised to dominate human interactions with technology in the future. We’re seeing that already, but not at the levels I expect will come in the not-too-distant future. People are interacting with technology and devices increasingly by speaking to them and having the technology speak back. This includes the assistants that we see today as well as other applications of conversational and question-answering AI. There will be a generation of children at some point who will have no concept of what it’s like to type on a physical or digital keyboard. They will have grown up simply talking to everything.
AI is certainly becoming more publicly prominent. Not only is there hype about it seemingly around every corner, but there is an ongoing proliferation of TV commercials about AI solutions, and people are interacting with AI now more than ever in their daily lives, including in mobile apps, web apps, assistants, chatbots, IoT, robotics, augmented intelligence, and automation.
As a result of this newfound attention and relevance, people are beginning to raise serious and legitimate questions around the ethical and responsible use of AI, whether and how AI should be regulated, what AI means politically, and, finally, how AI will affect society and whether that impact will be good, bad, or both. These are good questions, and more attention from people and organizations is being directed toward answering them every day.
Because we covered the impact of AI on jobs at length in Chapter 15, let’s turn our attention to other ways that AI will likely have increasing effects on society.
As we discussed earlier in the book, a primary concern for people at the data level is data privacy and security, which are key areas of data governance. It’s worth mentioning that data governance is not new and was the responsibility of companies and IT departments long before AI was on anybody’s radar or was being used in any appreciable way. That said, people’s privacy, security, and trust around data matter a lot, and AI and machine learning initiatives create additional demand for data, so we’re now seeing increased attention on this in the context of AI. Business leadership, along with analytics, IT, and security experts, needs to work collaboratively to take advantage of data to create better human experiences and business success while also providing transparency where possible and ensuring maximum privacy, security, and trust.
Additionally, governments at the national and local levels are increasingly regulating around privacy and fair use with the goal of helping to protect consumers. Europe’s General Data Protection Regulation (GDPR) took effect in May 2018, and the California Consumer Privacy Act is coming in 2020. Government attention, politics, and imposed regulations in this area will likely grow over time, so keep an eye out for future developments.
Fairness, bias, and inclusion are very important considerations for the future of AI, as well. AI can potentially and unwittingly be used in unfair, biased, and noninclusive ways. This topic is gaining prominence and will definitely receive further attention as AI progresses. One step in that direction was the “Toronto Declaration: Protecting the right to equality and non-discrimination in machine learning systems,” launched on May 16, 2018, at RightsCon Toronto.
To better measure and track everything discussed so far, a premium is being placed on AI transparency, interpretability, and explainability, as discussed in depth in Chapter 13, and as a result, many people are focused on creating interpretable and explainable AI. Expect to see many more developments there.
To wrap up this section, it’s worth mentioning a few organizations on the frontlines of these issues and future considerations. There are many people and organizations that want to help ensure the ethical, fair, inclusive, transparent, and safe use of AI that is aligned to human values and that benefits all of humanity. Safe use refers to the growing concept of AI safety. Some of them include the following:
The Future of Life Institute (FLI)—Asilomar AI Principles on AI research issues, ethics and values, and longer-term issues
Brookings Artificial Intelligence and Emerging Technologies Initiative
IEEE Standards Association—Global Initiative on Ethics of Autonomous and Intelligent Systems
OpenAI—AI research for safe artificial general intelligence
Microsoft’s Fairness, Accountability, Transparency, and Ethics in AI (FATE) group
I find all of this to be very important because my interest, and this book’s, is entirely in the ethical and beneficial use of AI. It’s about using AI to benefit people and businesses alike and to create better human experiences and business success. When pursued through that lens, AI represents a game-changing opportunity to transform and improve people’s lives as well as extend and save them.
Work continues on making progress toward solving AI-complete (AI-hard) problems, with the goal of ultimately achieving strong AI, aka artificial general intelligence (AGI). This might require the combination of many existing techniques, the creation of an entirely new technique, or something else.
A concept and model called comprehensive AI services (CAIS)2 approaches general intelligence as the integration of superintelligent, highly specialized AI services working together, similar to the concept of service-oriented architecture (SOA) in software architecture.
Solving the AGI problem is considered “difficult” for many reasons. It means huge advancements in AI, with capabilities far beyond anything we have today. Here’s a nonexhaustive list of required advancements:
Acquire the ability to multitask across different task types
Emulate human understanding, reasoning, and logic
Emulate cognitive functions and processes in general
Learn from observation and the environment in the same way babies and animals do
Become self-directed, self-learning, self-improving, and self-modifying
Emulate causal inference (cause and effect predictions) in the way humans naturally do
This is a ridiculously tall order at the moment, and we shouldn’t expect a ton of progress on this in the near future. Also, historically, people tend to greatly overestimate the pace of progress and adoption of new innovations and technologies. Consider AI itself: although adoption is certainly increasing, it’s nowhere near the pace that many predicted. Even if amazing, advanced technology exists and is available for use, that doesn’t mean everyone will use it.
There are a lot of forces that actually prevent adoption, as we’ve discussed. This is covered in depth in The Innovator’s Dilemma by Clayton Christensen, and is also made clear by Everett Rogers’s seminal work on his diffusion of innovations theory, in which he categorizes adopters as innovators, early adopters, early majority, late majority, or laggards. In my experience, it doesn’t matter what the state of the technology is, per se, but rather what the level of widespread adoption is. In that context, I’d say that AI is still early in its diffusion and adoption.
Ultimately, estimates vary wildly across the board on when we can expect anything close to resembling AGI, and who knows at this point about adoption rates after AGI is available, so I will just leave it at that.
Another concept worth discussing is the AI effect. The AI effect describes the case in which, after an AI application has become somewhat mainstream, many people no longer consider it AI. It happens because people tend to stop thinking of the solution as involving real intelligence and start seeing it as just an application of ordinary computing, even though these applications still fit the definition of AI regardless of how widespread they become. The key takeaway here is that today’s AI is not necessarily tomorrow’s AI, at least not in some people’s minds.
This makes perfect sense if you think about it. When Steve Jobs and Apple first launched the iPhone, it was truly amazing to people that a phone could be a one-stop shop for music, pictures, phone calls, messaging, games, and more, all while introducing touch-based interactions and a gesture-sensitive screen. Now people just expect all of that to be part of any mobile phone, and most don’t give it much thought anymore. It’s table stakes. The same can be said for C-3PO in Star Wars. The films portray protocol droids as commonplace, with nothing particularly special about the technology or machine intelligence. Because I’m a huge Star Wars fan and so used to the character, I almost lose sight of how impressive C-3PO would be if he were a real robot. Who knows, maybe someday humans will think of AGI in the same way?
Amazon and Netflix recommendations are another good example of this. People are so used to these recommendations that they might not think of them as a remarkable bit of technology and application of AI. These systems are in fact very remarkable and drive a huge proportion of both companies’ revenue, user engagement, and retention. Some estimates indicate that 35% of Amazon’s revenues are generated by its recommendations, and that 75% of everything watched on Netflix comes by way of recommendations.3 This is obviously far from trivial.
As I’ve said many times in this book, AI is absolutely able to benefit both people and business and create better human experiences and business success. To realize these benefits and outcomes, the right experts must collaborate to perform the proper AI assessments and create appropriate strategies to ensure success when pursuing AI initiatives. They must also collaboratively create an effective AI vision and strategy that is highly likely to succeed, and be able to execute the strategy to build, deliver, and optimize successful AI solutions.
The AIPB Framework and its unique and purpose-built North Star, benefits, structure, and approach to AI-based, scientific innovation will help many people and companies navigate this process and successfully undergo an applied AI transformation. For additional assistance, remember to visit https://aipbbook.com to check for the latest AIPB information and resources, and to sign up for the mailing list. Lastly, if you enjoyed and learned something new and useful from this book, please leave a positive review wherever you bought it.
Best of luck in all of your AI pursuits, and I can’t wait to see what the future of AI holds.
1 https://www.mckinsey.com/industries/high-tech/our-insights/product-managers-for-the-digital-world
2 Drexler, K.E. (2019): “Reframing Superintelligence: Comprehensive AI Services as General Intelligence,” Technical Report 2019-1, Future of Humanity Institute, University of Oxford.
3 https://www.mckinsey.com/industries/retail/our-insights/how-retailers-can-keep-up-with-consumers