Another important aspect is model evaluation. Unless you apply your models to new data and measure a business objective, you're not doing predictive analytics. Evaluation techniques such as cross-validation and separate train/test sets simply split your available data, which gives you only an estimate of how your model will perform. Real life rarely hands you a dataset with the training cases already defined, so defining these two sets on a real-world dataset takes considerable creativity.
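As a minimal sketch of the two estimation techniques just mentioned, the following uses scikit-learn (an assumed library choice; the text names no specific tooling) with synthetic data standing in for a real-world dataset:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score

# Synthetic data standing in for a real-world dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Held-out test set: fit on one split, estimate performance on the other.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")

# Cross-validation: average the estimate over several train/test splits
# to reduce the variance of a single split.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"5-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Either way, the number you get is an estimate on held-out historical data, not a measurement of the business objective itself.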
At the end of the day, we want to improve a business objective, such as improving the ad conversion rate or getting more clicks on recommended items. To measure that improvement, run A/B tests: expose statistically identical populations to different algorithms and measure the differences in metrics between them. Product decisions should always be data-driven.
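One common way to decide whether an observed difference is real is a two-proportion z-test on the conversion rates of the two populations. The sketch below illustrates this under assumed, made-up counts (the text prescribes no particular statistical test):

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical observations: (conversions, visitors) per variant.
conv_a, n_a = 420, 10_000   # control algorithm
conv_b, n_b = 480, 10_000   # new recommendation algorithm

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * norm.sf(abs(z))                       # two-sided test

print(f"conversion: A={p_a:.2%}, B={p_b:.2%}, z={z:.2f}, p={p_value:.4f}")
# A small p-value suggests the lift is unlikely to be chance alone.
```

A small p-value supports shipping the new algorithm; otherwise, the observed lift may just be noise, and the data-driven decision is to keep testing.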