As in PCA, we only wish to keep the eigenvectors that are doing most of the work:
# keep the top two linear discriminants
linear_discriminants = eig_vecs.T[:2]
linear_discriminants
array([[-0.2049, -0.3871,  0.5465,  0.7138],
       [ 0.009 ,  0.589 , -0.2543,  0.767 ]])
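Note that slicing the first two rows of eig_vecs.T only gives the top discriminants if the eigenpairs happen to come back ordered by eigenvalue, which np.linalg.eig does not guarantee. A minimal sketch of sorting them explicitly first, assuming eig_vals and eig_vecs are the arrays computed earlier:
# sort the eigenpairs by descending eigenvalue before slicing
import numpy as np

order = np.argsort(eig_vals.real)[::-1]       # indices of the largest eigenvalues first
eig_vals_sorted = eig_vals.real[order]
eig_vecs_sorted = eig_vecs.real[:, order]     # eigenvectors live in the columns

linear_discriminants = eig_vecs_sorted.T[:2]  # top two discriminants as rows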
We can look at the explained variance ratio of each component/linear discriminant by dividing each eigenvalue by the sum of all the eigenvalues:
#explained variance ratios
eig_vals / eig_vals.sum()
array([ 0.99147, 0.0085275, -2.0685e-17, -2.0685e-17])
It appears that the first component is doing the vast majority of the work, holding over 99% of the discriminative information on its own. The third and fourth ratios are effectively zero (the tiny negative values are just floating-point noise), which is expected because LDA can produce at most c − 1 informative discriminants for c classes.
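As a sanity check, scikit-learn's LinearDiscriminantAnalysis exposes the same ratios through its explained_variance_ratio_ attribute. A quick sketch, assuming X and y are the feature matrix and class labels used above:
# cross-check the ratios with scikit-learn
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

lda = LinearDiscriminantAnalysis(n_components=2)
lda.fit(X, y)
lda.explained_variance_ratio_  # should closely match the first two ratios above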