Making Software
Preface
Organization of This Book
Conventions Used in This Book
Safari® Books Online
Using Code Examples
How to Contact Us
I. General Principles of Searching For and Using Evidence
1. The Quest for Convincing Evidence
In the Beginning
The State of Evidence Today
Challenges to the Elegance of Studies
Challenges to Statistical Strength
Challenges to Replicability of Results
Change We Can Believe In
The Effect of Context
Looking Toward the Future
References
2. Credibility, or Why Should I Insist on Being Convinced?
How Evidence Turns Up in Software Engineering
Credibility and Relevance
Fitness for Purpose, or Why What Convinces You Might Not Convince Me
Quantitative Versus Qualitative Evidence: A False Dichotomy
Aggregating Evidence
Limitations and Bias
Types of Evidence and Their Strengths and Weaknesses
Controlled Experiments and Quasi-Experiments
Credibility
Relevance
Surveys
Credibility
Relevance
Experience Reports and Case Studies
Credibility
Relevance
Other Methods
Indications of Credibility (or Lack Thereof) in Reporting
General characteristics
A clear research question
An informative description of the study setup
A meaningful and graspable data presentation
A transparent statistical analysis (if any)
An honest discussion of limitations
Conclusions that are solid yet relevant
Society, Culture, Software Engineering, and You
Acknowledgments
References
3. What We Can Learn from Systematic Reviews
An Overview of Systematic Reviews
The Strengths and Weaknesses of Systematic Reviews
The Systematic Review Process
Planning the review
Conducting the review
Reporting the review
Problems Associated with Conducting a Review
Systematic Reviews in Software Engineering
Cost Estimation Studies
The accuracy of cost estimation models
The accuracy of cost estimates in industry
Agile Methods
Dybå and Dingsøyr
Hannay, Dybå, Arisholm, and Sjøberg
Inspection Methods
Conclusion
References
4. Understanding Software Engineering Through Qualitative Methods
What Are Qualitative Methods?
Reading Qualitative Research
Using Qualitative Methods in Practice
Generalizing from Qualitative Results
Qualitative Methods Are Systematic
References
5. Learning Through Application: The Maturing of the QIP in the SEL
What Makes Software Engineering Uniquely Hard to Research
A Realistic Approach to Empirical Research
The NASA Software Engineering Laboratory: A Vibrant Testbed for Empirical Research
The Quality Improvement Paradigm
Characterize
Set Goals
Select Process
Execute Process
Analyze
Package
Conclusion
References
6. Personality, Intelligence, and Expertise: Impacts on Software Development
How to Recognize Good Programmers
Individual Differences: Fixed or Malleable
Personality
Intelligence
The Task of Programming
Programming Performance
Expertise
Software Effort Estimation
Individual or Environment
Skill or Safety in Software Engineering
Collaboration
Personality Again
A Broader View of Intelligence
Concluding Remarks
References
7. Why Is It So Hard to Learn to Program?
Do Students Have Difficulty Learning to Program?
The 2001 McCracken Working Group
The Lister Working Group
What Do People Understand Naturally About Programming?
Making the Tools Better by Shifting to Visual Programming
Contextualizing for Motivation
Conclusion: A Fledgling Field
References
8. Beyond Lines of Code: Do We Need More Complexity Metrics?
Surveying Software
Measuring the Source Code
A Sample Measurement
Source Lines of Code (SLOC)
Lines of Code (LOC)
Number of C Functions
McCabe’s Cyclomatic Complexity
Halstead’s Software Science Metrics
Statistical Analysis
Overall Analysis
Differences Between Header and Nonheader Files
The Confounding Effect: Influence of File Size in the Intensity of Correlation
Effects of size on correlations for header files
Effects of size on correlations for nonheader files
Effect on the Halstead’s Software Science metrics
Summary of the confounding effect of file size
Some Comments on the Statistical Methodology
So Do We Need More Complexity Metrics?
References
Bibliography
II. Specific Topics in Software Engineering
9. An Automated Fault Prediction System
Fault Distribution
Characteristics of Faulty Files
Overview of the Prediction Model
Replication and Variations of the Prediction Model
The Role of Developers
Predicting Faults with Other Types of Models
Building a Tool
The Warning Label
References
10. Architecting: How Much and When?
Does the Cost of Fixing Software Increase over the Project Life Cycle?
How Much Architecting Is Enough?
Cost-to-Fix Growth Evidence
Using What We Can Learn from Cost-to-Fix Data About the Value of Architecting
The Foundations of the COCOMO II Architecture and Risk Resolution (RESL) Factor
Economies and diseconomies of scale
Reducing software rework via architecture and risk resolution
A successful example: CCPDS-R
The Architecture and Risk Resolution Factor in Ada COCOMO and COCOMO II
How the Ada Process Model promoted risk-driven concurrent engineering software processes
Architecture and risk resolution (RESL) factor in COCOMO II
Improvement shown by incorporating architecture and risk resolution
ROI for Software Systems Engineering Improvement Investments
So How Much Architecting Is Enough?
Does the Architecting Need to Be Done Up Front?
Conclusions
References
11. Conway’s Corollary
Conway’s Law
Coordination, Congruence, and Productivity
Implications
Organizational Complexity Within Microsoft
Implications
Chapels in the Bazaar of Open Source Software
Conclusions
References
Bibliography
12. How Effective Is Test-Driven Development?
The TDD Pill—What Is It?
Summary of Clinical TDD Trials
The Effectiveness of TDD
Internal Quality
External Quality
Productivity
Test Quality
Enforcing Correct TDD Dosage in Trials
Cautions and Side Effects
Conclusions
Acknowledgments
General References
Clinical TDD Trial References
Bibliography
13. Why Aren’t More Women in Computer Science?
Why So Few Women?
Ability Deficits, Preferences, and Cultural Biases
Evidence for deficits in female mathematical-spatial abilities
The role of preferences and lifestyle choices
Biases, Stereotypes, and the Role of Male Computer-Science Culture
Should We Care?
What Can Society Do to Reverse the Trend?
Implications of Cross-National Data
Conclusion
References
14. Two Comparisons of Programming Languages
A Language Shoot-Out over a Peculiar Search Algorithm
The Programming Task: Phonecode
Comparing Execution Speed
Comparing Memory Consumption
Comparing Productivity and Program Length
Comparing Reliability
Comparing Program Structure
Should I Believe This?
Plat_Forms: Web Development Technologies and Cultures
The Development Task: People-by-Temperament
Lay Your Bets
Comparing Productivity
Comparing Artifact Size
Comparing Modifiability
Comparing Robustness and Security
Hey, What About <Insert-Your-Favorite-Topic>?
So What?
References
Bibliography
15. Quality Wars: Open Source Versus Proprietary Software
Past Skirmishes
The Battlefield
Into the Battle
File Organization
Code Structure
Code Style
Preprocessing
Data Organization
Outcome and Aftermath
Acknowledgments and Disclosure of Interest
References
Bibliography
16. Code Talkers
A Day in the Life of a Programmer
Diary Study
Observational Study
Were the Programmers on Their Best Behavior?
What Is All This Talk About?
Getting Answers to Questions
The Search for Rationale
Interruptions and Multitasking
What Questions Do Programmers Ask?
Are Agile Methods Better for Communication?
A Model for Thinking About Communication
References
Bibliography
17. Pair Programming
A History of Pair Programming
Pair Programming in an Industrial Setting
Industry Practices in Pair Programming
Results of Using Pair Programming in Industry
Pair Programming in an Educational Setting
Practices Specific to Education
Results of Using Pair Programming in Education
Distributed Pair Programming
Challenges
Lessons Learned
Acknowledgments
References
18. Modern Code Review
Common Sense
A Developer Does a Little Code Review
Focus Fatigue
Speed Kills
Size Kills
The Importance of Context
Group Dynamics
Are Meetings Required?
False-Positives
Are External Reviewers Required At All?
Conclusion
References
Bibliography
19. A Communal Workshop or Doors That Close?
Doors That Close
A Communal Workshop
Work Patterns
One More Thing…
References
Bibliography
20. Identifying and Managing Dependencies in Global Software Development
Why Is Coordination a Challenge in GSD?
Dependencies and Their Socio-Technical Duality
The Technical Dimension
Syntactic dependencies and their impact on productivity and quality
Logical dependencies and their impact on productivity and quality
The Socio-Organizational Dimension
Different types of work dependencies and their impacts on productivity and quality
The Socio-Technical Dimension
From Research to Practice
Leveraging the Data in Software Repositories
The Role of Team Leads and Managers in Supporting the Management of Dependencies
Developers, Work Items, and Distributed Development
Future Directions
Software Architectures Suitable for Global Software Development
Collaborative Software Engineering Tools
Balancing Standardization and Flexibility
References
21. How Effective Is Modularization?
The Systems
What Is a Change?
What Is a Module?
The Results
Change Locality
Examined Modules
Emergent Modularity
Threats to Validity
Summary
References
22. The Evidence for Design Patterns
Design Pattern Examples
Why Might Design Patterns Work?
The First Experiment: Testing Pattern Documentation
Design of the Experiment
Results
The Second Experiment: Comparing Pattern Solutions to Simpler Ones
The Third Experiment: Patterns in Team Communication
Lessons Learned
Conclusions
Acknowledgments
References
23. Evidence-Based Failure Prediction
Introduction
Code Coverage
Code Churn
Code Complexity
Code Dependencies
People and Organizational Measures
Integrated Approach for Prediction of Failures
Summary
Acknowledgments
References
24. The Art of Collecting Bug Reports
Good and Bad Bug Reports
What Makes a Good Bug Report?
Survey Results
Contents of Bug Reports (Developers)
Contents of Bug Reports (Reporters)
Evidence for an Information Mismatch
Problems with Bug Reports
The Value of Duplicate Bug Reports
Not All Bug Reports Get Fixed
Conclusions
Acknowledgments
References
Bibliography
25. Where Do Most Software Flaws Come From?
Studying Software Flaws
Context of the Study
Phase 1: Overall Survey
Summary of Questionnaire
Summary of the Data
Summary of the Phase 1 Study
Phase 2: Design/Code Fault Survey
The Questionnaire
Statistical Analysis
Finding and fixing faults
Faults
Fault Frequency Adjusted by Effort
Underlying causes
Means of prevention
Underlying causes and means of prevention
Interface Faults Versus Implementation Faults
What Should You Believe About These Results?
Are We Measuring the Right Things?
Did We Do It Right?
What Can You Do with the Results?
What Have We Learned?
Acknowledgments
References
26. Novice Professionals: Recent Graduates in a First Software Engineering Job
Study Methodology
Subjects
Task Analysis
Task Sample
Reflection Methodology
Threats to Validity
Software Development Task
Task Breakdown
Communication
Documentation
Working on bugs
Programming
Project management and tools
Design specifications and testing
Strengths and Weaknesses of Novice Software Developers
Strengths
Weaknesses
Reflections
Managing
Getting Engaged
Persistence, Uncertainty, and Noviceness
Large-Scale Software Team Setting
Misconceptions That Hinder Learning
Reflecting on Pedagogy
Pair Programming
Legitimate Peripheral Participation
Mentoring
Implications for Change
New Developer Onboarding
Educational Curricula
References
27. Mining Your Own Evidence
What Is There to Mine?
Designing a Study
A Mining Primer
Step 1: Determining Which Data to Use
Step 2: Data Retrieval
Step 3: Data Conversion (Optional)
Step 4: Data Extraction
Step 5: Parsing the Bug Reports
Step 6: Linking Data Sets
Linking code changes to bug reports
Linking bug reports to code changes (optional)
Step 6: Checking for Missing Links
Step 7: Mapping Bugs to Files
Where to Go from Here
Acknowledgments
References
28. Copy-Paste as a Principled Engineering Tool
An Example of Code Cloning
Detecting Clones in Software
Investigating the Practice of Code Cloning
Forking
Templating
Customizing
Our Study
Conclusions
References
29. How Usable Are Your APIs?
Why Is It Important to Study API Usability?
First Attempts at Studying API Usability
Study Design
Summary of Findings from the First Study
If At First You Don’t Succeed...
Design of the Second Study
Summary of Findings from the Second Study
Cognitive Dimensions
Adapting to Different Work Styles
Scenario-Based Design
Conclusion
References
30. What Does 10x Mean? Measuring Variations in Programmer Productivity
Individual Productivity Variation in Software Development
Extremes in Individual Variation on the Bad Side
What Makes a Real 10x Programmer
Issues in Measuring Productivity of Individual Programmers
Productivity in Lines of Code per Staff Month
Productivity in Function Points
What About Complexity?
Is There Any Way to Measure Individual Productivity?
Team Productivity Variation in Software Development
References
A. Contributors
Index
About the Authors
Colophon