- 20 percent rule (Google), 24
- 100 percent test coverage, 124
- A
- Acceptance, 125f
- Access
- control systems, 87
- methods, 75–76
- Accountability, 181
- Ackerson, Chris, 169–172
- Advanced Encryption Standard (AES), 91
- Agile, 101
- framework, 103
- process, 68
- term, usage, 48
- user stories, generation, 59
- weakening, 136–137
- Akolkar, Rahul, 183–186
- Algorithms, public appearance, 159
- AlphaSense, 170
- Amazon Web Services (AWS), 111, 167
- Analytics software, 82
- Apache Hadoop, usage, 8, 85
- Apache JMeter, 132
- Application code, promotion, 121f
- Application programming interface (API), 75
- internal APIs, 112
- programmatic calls, 124
- usage, 110–111
- Artificial Intelligence (AI)
- adoption, 6–10, 98, 172
- advances, 171, 174, 176, 178, 181, 185, 188
- AI-focused meetups, 144
- algorithm, usage, 6
- assistance, job function target, 172, 174, 176, 178, 182, 186, 188
- benefits, 175
- Bin-Picking product, model reliance, 3
- capabilities, 64
- overestimation, problem, 45, 196
- chatbots, avoidance, 43
- Cloud Services catalog, sample (IBM), 111f
- data consumption, 58–59
- deep learning (relationship), Venn diagram (usage), 21f
- development, 157
- expertise, 184, 187
- experts, 169
- focus, narrowness, 44
- implementation, process, 164–165
- lifecycle, 10, 51, 139, 167–168
- pitfalls, 159–161, 206–207
- roadmap action checklist, 193
- limitations, 41–44
- pitfalls, 44–45
- potential, 163
- power/responsibility, 158–159
- project plan, building, 64–66
- security, 177–178
- solution, outsourcing, 100–101
- strong/weak artificial intelligence, 42
- technology, data scientist application, 82
- training AI models, data availability, 73f
- Artificial Intelligence (AI) models
- comments, 154
- data availability, 73f
- description, 151, 153
- files, 152, 153
- hosted demo, 153
- library
- building, 150–155
- components, 151–153
- entry, example, 153–154
- indexing mechanism, 151
- solutions, 154–155
- licensing, 152, 154
- metrics, 152, 153
- parameters, 152, 154
- performance, quantification, 145–147
- tagging, 151
- tags, 153, 154
- technology, 152, 154
- testing, example, 125–127
- training data, 152, 154
- validation data, 152
- Artificial Intelligence (AI) system
- creation, 76
- human intervention, 129–131
- improvement, 144–145
- information flow, 30f
- knowledge, 71
- learning process, 142–144
- output generation, 82
- robust AI system, ensuring, 128–129
- Asset, Liability, Debt and Derivative Investment Network (Aladdin) software, deployment, 5–6
- Automated chat support system, 128–129
- Automated testing, 124–128
- Awesome Public Datasets, 78
- Azure (Microsoft), 167
- B
- Babbage, Charles, 13–14
- Back propagation (backpropagation), 18, 42
- Backups, usage, 93–94, 202–203
- Bad data (AI limitation), 43
- Balanced ground truth, 72–73
- Balanced scorecard, perspectives, 63
- BERT model, 170, 171
- Binary files, 150
- Black box, 42
- BlackRock, Inc., case study, 5–6
- Bootstrapping, 79
- Boundaries, 58–69
- Bradford, Jeff, 173–174
- Brainstorming, usage, 38–41
- Business
- business-critical code, release, 156
- process mapping, 27–28
- stakeholders, categories, 32
- C
- Cambridge Analytica, practices, 92–93, 202
- Capital allocation, group (filtering example), 36
- Categorizing, 34–37, 79
- Cause and effect (AI limitation), 42
- Chance encounters, value, 38–41
- Change
- Agile process, 68
- proactive policy, 25
- request procedures, neglect, 198–199
- request process, 52
- Chaos Monkey code (Netflix), 128
- Charges/budgets, 52
- Chatbots, 23, 130
- architecture, sample, 131f
- login interface, 50
- product comparisons/reviews, 98
- support chatbot, logical architecture, 107f
- technologies, sample, 108t
- usage, hybrid approach, 130–131
- Chief Technology Officer (CTO), process map creation, 33
- Chunking, 17
- Classifier
- confusion matrix, sample, 146f
- image classifier, building, 79–80
- machine model, 72
- Classifying, 35–37
- Client-dependent variables, requirement, 4
- Cloud
- APIs, 110–111
- APIs, SLA, 135
- cloud-based API model, 111
- deployment paradigms, 133–135
- provider, 135
- scalability, relationship, 132–133
- solutions, 202–203
- Cloud Machine Learning Engine (Google), 111, 132–133
- Cloud Platform (Google), 167
- Code base, parts (identification), 155–156
- Code defect, appearance, 127
- Code repository, creation, 118
- Comma-separated value (CSV)
- Computer security, 92, 201
- Conclusion, forming, 38
- Confusion matrix, sample, 146f
- Constructive criticism, destructive criticism (contrast), 39
- Constructive feedback, 83–84
- Containers, 134
- Continuous integration, 119–123
- benefit, 123
- pipeline, 119–121
- principles, 123
- true continuous integration, 121–123
- Convolutional neural network (CNN), 23
- Core competency, 99
- Core technology, replacement, 114
- Correlations, AI usage, 158
- Cost-benefit analysis, usage, 38
- Creativity, fostering, 24
- Cross-departmental exchanges, 40–41
- Cross-disciplinary knowledge base, 148
- Crowdsourcing, platform/usage, 79–80
- Culture, focus, 44–45, 196
- Customer
- balanced scorecard perspective, 63
- interaction, reduction, 65
- satisfaction
- increase, 65
- KPI example, 62
- user stories, 65
- Customer relationship management (CRM) system, 84
- D
- Daily stand-ups (team role), 102–103
- Dark designs, 92
- Data, 173–174
- access, 84–85
- analysis, third-party vendors usage, 92–93
- anonymization, 87
- availability, 27
- bad data (AI limitation), 43
- collation, 29
- consumption, 58–59
- curation (technology adoption phase), 8, 71, 157, 166
- pitfalls, 90–94, 199–203
- roadmap action checklist, 192
- tasks, 81
- data-powered capabilities, 110–111
- data-restore operation, 202
- exploration, 74–75
- governance, 166
- roadmap action checklist, 192
- improvement, 157–158
- licensing
- disadvantage, 79
- insufficiency, 90, 199
- non-free data, licensing (disadvantages), 78–79
- readiness, 89–90
- recovery, untested backup failure, 93
- responsibility, 89
- science flow, 82f
- scientist
- AI technology application, 82
- role, 81–82
- security, insufficiency, 91–92, 200–201
- storage technology, 82
- temporary data, usage, 90
- text-based data, 154–155
- training data, 152, 154
- transformation, tasks, 81
- tweaking, 9
- validation data, 152
- Database structure, reverse engineering, 75–76
- Data collection, 73–80, 180–181
- crowdsourcing, usage, 79–80
- licensing, usage, 77–79
- opt-in/opt-out, 86
- policies, 86
- Dataflows, tracking, 29–30
- Data.gov, 78
- Data governance (technology adoption phase), 8, 71, 85–89, 157
- board, creation, 87–88
- completion, goal, 86
- initiation, 88
- pitfalls, 90–94
- Datasets, labeling, 80
- Decision tree process, 130
- Deduction complexity, compounding, 4
- Deep learning, 21, 82, 152
- AI relationship, Venn diagram (usage), 21f
- models, usage, 3
- Defect tracking, digitization, 57
- Deliverables
- Delphi method, 7, 60, 166
- Delphi technique, usage, 104
- Demo, production, 103–104
- Departmental cross-talks, implementation, 40–41
- Design thinking, 7, 53–58, 61, 166
- process, 54f, 64–65
- session, sample, 53–54
- Developers, code conflict (absence), 119
- Development
- environment, 119
- feedback loop, 105–106
- team
- Digital internal data collection, 74–76
- Digital neuron, firing, 20–21
- Direct database connection, 75–76
- Docker (container technology), 134
- Documentation, quality, 151
- “Do Not Track,” browser requests, 85
- Duesterwald, Evelyn, 177–179
- E
- Electromagnetic interference (EMI), external stressor, 59
- ELIZA, usage, 16
- Embedded code, ubiquity, 1
- EMNIST dataset, 72
- Empathy, AI limitation, 43–44
- Empathy maps
- creation, 55–56
- generation, 65
- sample, 56
- Employee
- impact, group (filtering example), 36
- performance (KPI example), 62
- time, saving (KPI example), 62
- turnover/attrition, 147
- Encryption, 86
- Epochs, number (deep learning), 152
- Error cost escalation, 107–108
- Explainability, 42, 181
- External stressors, mitigation, 59
- F
- F1 score, 147
- Failover protocol, 129–130
- Fallback capability, 130
- False positive/negative, 145
- FANUC Corporation
- case study, 2–3
- robot, example, 4f
- FANUC Intelligent Edge Link & Drive (FIELD), usage, 3
- Feedback
- constructive feedback, 83–84
- gathering, 103
- ignoring, 160
- loops, 82–84, 105–106
- application, 121–122
- continuation, 135
- mechanisms, 141
- qualitative feedback, 65
- receiving, 142
- stages/roles, 105f
- user feedback, incorporation, 140–142
- Fibonacci Sequence, 104
- File export, 75
- Filtering, 34–35
- Filters, usage, 26
- Financial balanced scorecard perspective, 63
- Financial success, measurability criteria, 51
- Firm
- discovery, 99–100
- references, request, 100
- First-mover advantage, achievement, 23–24
- Flores, Steven, 187–189
- Flowcharts
- Formal change request procedures, defining (neglect), 68
- Freelancer.com, 101
- Friedl, Jeffrey, 16
- Fully connected neural network, multiple layers, 21f
- G
- General Data Protection Regulation (GDPR), 88–89, 166
- Generalizations (AI limitation), 41–42
- GitHub, 128
- Goals
- defining, 56–57
- setting, 102
- transformation, 57
- Goldstein, Rob, 6
- Google search results, usage, 81
- Governance
- data governance, 8, 71, 85–89
- term, application, 85
- Graphical processing units (GPUs), usage, 134–135
- Grinder, The, 132
- Ground truth, 71
- absence, 90–91, 200
- balanced ground truth, 72–73
- distribution, building methods, 72
- proportional ground truths, 73
- Guide rails, presence, 129
- H
- Hashing, 86–87
- Health Information Technology for Economic and Clinical Health Act (HITECH Act), 88
- Health Insurance Portability and Accountability Act (HIPAA), 88
- Hidden Markov models, 19–20
- H&R Block, case study, 4–5
- Hybridization, 118
- Hype (AI limitation), 43
- Hypercare, 139
- Hypothesis/predictions, forming/testing, 38
- I
- IBM
- AI Cloud Services catalog, sample, 111f
- Watson, capabilities, 4
- Watson Services, 111
- Idea
- creation/discovery, 31
- expected returns, group (filtering example), 36
- grouping, example, 37f
- independent requests (number), group (filtering example), 36
- Idea bank
- control, 148
- maintenance, 25–27
- organization, 35–37
- review, 37–38, 147–148
- sample, 26t–27t
- updating, 147–148
- Ideation, 165
- focus, narrowness, 195
- pitfalls, 44–45, 195–196
- process, problems, 44, 195–196
- roadmap action checklist, 191
- technology adoption phase, 6–7, 13
- If-this, then-that decision, 130
- Image classifier, building, 79–80
- Image recognition/classification, 23
- Imitation game, 14
- Indexing mechanism, 151
- Information
- flows, 29–31, 30f
- personal information, 77
- tracking, 29–30
- Infrastructure-as-a-service (IaaS), 133
- “Infrastructure as code” paradigm, 134
- Infrastructure testing, 127–128
- Innovation
- culture, impact, 44–45
- innovation-focused organization, 23–25
- method, 38–39
- priority, 25
- Integration, 125f
- Intelligent business model, 164
- Interactive Voice Response (IVR) system, 43
- Internal APIs, 112
- Internal data collection
- digital component, 74–76
- physical component, 76–77
- Internal process, balanced scorecard perspective, 63
- Internal systems, monthly review meetings (incorporation), 141
- Internet of Things (IoT), 1
- Inverse document frequency, 18
- J
- Java-ML library, 111
- JavaScript, usage, 110
- Jenkins (CI tool), 123
- Just-in-time (JIT) ordering, 2
- K
- Kaggle, 78
- Key drivers, selection/modeling, 60
- Key performance indicators (KPIs), 48
- measurability criteria, 51
- Keys, 17
- Knowledge base, 148–150
- cross-disciplinary knowledge base, 148
- expansion, 150
- materials, categories, 149
- online knowledge base, features, 149
- Knowledge sharing, 141
- Kubernetes, 134
- L
- Language-independent method, usage, 110
- Learning chatbot, project creation, 65
- Learning/growth, balanced scorecard perspective, 63
- “Lessons learned” document, usage, 145
- Licensing, usage, 77–79
- LoadRunner, 132
- Load testing, usage, 132
- Load tests, 131
- Logical architecture diagram, usage, 106
- Logic, explainability, 42
- Long short-term memory (LSTM) neural networks, usage, 22–23
- Lovelace, Ada, 13–14
- M
- Machine learning, 18–19, 82
- models, 121, 132–133
- precision, definition, 146
- system, building, 205–206
- Machine Learning on Azure (Microsoft), 111
- Machine model, 72
- Mailing lists, feedback mechanisms, 141
- Market research, usage, 54–55
- Markov chains, 19
- Markov models, 19
- hidden Markov models, 19–20
- Markov property, 19
- Mastering Regular Expressions (Friedl), 16
- Measurability criteria, 51
- Metadata, curation, 5
- Microservices, 110–111
- Models. See Artificial Intelligence models
- Monthly report, replacement, 57
- Multiserve deployment, 128
- Mutual trust, value, 136–137
- N
- Naive Bayes approach, 204
- Narrow AI, definition, 171
- Natural language processing (NLP), 15, 82
- application, 5
- BERT model, 170, 171
- machine learning, 17
- programmatic NLP, 15–17
- statistical NLP, 17–18
- Need-based approach, 40
- Nephew, Jill, 42, 179–183
- Networked devices, 74
- Neural networks, 20–23
- convolutional neural network (CNN), 23
- impact, 114
- single neuron example, 20f
- Nightly build, running, 120
- NodeJS application, 111
- Non-free data, licensing (disadvantages), 78–79
- Numerical analysis, 82
- O
- Observation, making, 38
- OneSignal, profitability, 77
- Online knowledge base, features, 149
- Open source, 168
- community, leveraging, 156
- contribution, 155–156
- projects, impact, 156
- technologies, usage, 98
- Operating system (OS), copy, 134
- Organizational chart, enhancement, 28f
- Organizational flowcharts, usage, 28–29
- Out-of-the-box IoT equipment, usage, 2–3
- Overfitting, 84
- P
- Palantir, founding, 158
- Parameter tweaking, 9
- Performance
- AI model performance, quantification, 145–147
- benchmark level, defining, 51–52
- testing, 120
- tests, 131
- Periodic updating, facilitation, 143
- Permission structures, 10
- Personal information, 77
- Personas
- determination, 54–55
- development, 64–65
- Physical architecture, 108f
- Physical data, value, 77
- Physical internal data collection, 76–77
- Precision, term (usage), 146
- Present circumstances, assessment, 60
- Probability, 17
- Problems, invention/misrepresentation, 67
- Process
- flowchart, 34f
- map, CTO creation, 33
- Production, 117, 167
- code defect, appearance, 127
- code repository, creation, 118
- environment, 119
- model, promotion, 122f
- pitfalls, 135–137, 204–206
- roadmap action checklist, 193
- skills, absence, 137
- team/schedule, 30
- technology adoption phase, 9–10
- Product owner (team role), 101
- Products, tagging/categorization, 34
- Programmatic NLP, 15–17
- Programming language, selection, 109–110
- Programming techniques, usage, 16
- Project
- AI pieces, process, 118
- assumptions, 51
- breakdown, approaches, 53–61
- completion/assumption, 160, 206
- completion criteria, 51–52
- defining (technology adoption phase), 7–8, 47, 165–166
- pitfalls, 196–199
- roadmap action checklist, 191–192
- definition, pitfalls, 66–68
- deliverables, list, 51
- focus, problem, 44–45, 196
- governance, 49–50
- kickoff, 50
- measurability, 62–63
- oversight activities, 49–50
- plan
- building, 64–66
- components, 48–52
- metrics, 48
- roadmap, clarity, 103
- scope, 49
- stakeholders, feedback, 97
- success criteria, 65
- work schedules/locations, 49
- Proportional ground truths, 73
- Prototype
- code, leveraging, 117–118
- design, 106–107
- reuse, 117–118
- technology scales, ensuring, 131–133
- Prototyping (technology adoption phase), 8–9, 97, 166–167
- excess, problem, 113, 203–204
- pitfalls, 112–114, 203–204
- planning, problem, 113
- roadmap action checklist, 192–193
- solutions, 97–99
- tool, problem, 113–114
- Python
- HTTP library, 111
- selection, 109–110
- PyTorch, 151
- Q
- Qualitative feedback, 65
- Quality assurance (QA)
- specialists, 56, 57
- team, monitoring activity, 55
- R
- Ranking, 17, 35–37
- Recall
- Recurrent neural network (RNN), 22–23
- Red Hat, 156
- Reports accuracy (KPI example), 62
- Representational state transfer (REST) API, 110–111
- Research, performing, 38
- Results, iteration/sharing, 38
- Risk, group (filtering example), 36
- Rituals, importance, 44
- Rivest-Shamir-Adleman (RSA), 91
- Robinson, Nathan S., 175–177
- Robust AI system, ensuring, 128–129
- Row-level security, 92, 201
- S
- Safari Books Online, 149
- Salesforce (CRM system), 84
- Scalability
- cloud, relationship, 132–133
- handling, 122
- testing, 120
- Scenario planning/analysis, 60, 61
- Schedule, 30, 49
- Schwaber, Ken, 101
- Scientific method, usage/steps, 38
- Scrum
- framework, 103
- master (team role), 102
- overview, 101–103
- Service level agreement (SLA), 135
- Shortcuts, coding, 123
- Single sign-on (SSO), 10
- Smoothing, inclusion, 19
- Social media, usage, 67
- Sociological, technological, economic, environmental, and political inputs (STEEP) model, 60
- Software as a service (SaaS) architecture, usage, 123
- Software checks, conducting, 92–93, 201–202
- Solutions, 97–99
- premature construction, 67–68, 198
- Sorting, 34–35
- Source control, code conflict (absence), 119
- Spear phishing, 91
- Specification document, design (problem), 49
- Sprint
- planning (team role), 102
- review, 103
- Stage environment (test environment), 119
- Staging environment, 122–123
- Stakeholders
- buy-in, absence, 66, 196–197
- categories, 32
- feedback, 97
- Standard operating procedures (SOPs), 28–29
- Statistical NLP, 17–18
- Step size (deep learning), 152
- Story
- points, usage, 104
- user stories, 57–58, 65
- Strong artificial intelligence, 42
- Subject matter experts (SMEs), 51–52, 72, 136, 205
- Subsystems, usage, 59–60
- Success, criteria, 50, 65
- Support chatbot
- logical architecture, 107f
- physical architecture, 108f
- Support query, resolution (KPI example), 62
- Sutherland, Jeff, 101
- Switches, 163
- System
- bootstrapping, 79
- building, decision point, 109
- confidence, erosion, 129
- power, leveraging, 81
- Systems planning, 7, 61
- Systems thinking, 58–60
- T
- Tagging, 34
- Tags, 153
- Talent, employing/contracting (contrast), 99–101
- Technological evaluation, 9
- Technology, 144–145
- adoption
- end user resistance, 136
- fear, mitigation, 205
- phases, 6–10
- selection, 107–110
- chatbot technologies, sample, 108t
- Temporary data, usage, 90
- TensorFlow, usage, 109–110, 151, 155
- Term frequency, 18
- Term frequency-inverse document frequency (tf-idf), 17
- Test-driven development (TDD), 10
- Test environment, 119
- Testing frameworks, 10
- Test types, 124–125
- Text-based data, 154–155
- Text generators, 23
- Text-to-speech capability, provision, 67
- Thiel, Peter, 158
- Think Fridays (IBM concept), 24
- Third-party vendors, usage, 92–93
- Time, group (filtering example), 35–36
- Tokenization, 17, 92, 201
- Tone analysis technology, 130
- Training
- AI models, data availability, 73f
- data, 152, 154
- Transparency, value, 136–137
- Travis CI, 123
- True continuous integration, 121–123
- True positive/negative, 145
- Turing, Alan (Turing test), 13–15, 42
- standard interpretation, 14f
- U
- Unit testing, 125f
- Unit tests, 124
- Upwork, 101
- User
- password hashing, 86–87
- privacy, ignoring, 92–93, 201–202
- rights/privacy, respect, 93
- training, inadequacy (provision), 160–161, 207
- type, defining, 55
- User experience (UX) skills, 100–101
- User feedback
- ignoring, 160, 206–207
- incorporation, 140–142
- User/security model, 9–10
- User stories
- accomplishment, 102
- defining/creating, 57–58
- development team construction, 117
- establishment, 65
- prioritization, 97, 103–104
- V
- Validation, defining, 120
- Value
- analysis, 31–34, 104
- concept, 31–32
- defining, 104
- identification process, 32–33
- monetary value, relationship, 32
- Vendor selection, risk, 67–68
- Venn diagram, usage, 21f
- Verification, defining, 120
- Virtual machines (VMs), 133
- Visibility gaps, 157
- Visual defect identification system, building, 56
- Visualization, popularization, 133
- W
- Walls, separation ability, 40–41
- Weak artificial intelligence, 42
- Web-based APIs, usage, 110–111
- WebLOAD, 132
- Weizenbaum, Joseph, 16
- Word2vec, development, 20
- Word embedding, 22
- WordPress (blog), 149, 154
- Work
- schedules/locations, 49
- stream activities, 50–52
- Workload, example (spike exhibit), 133f