10% are true positives, but test 1 would only recognise 99% of them. We'd then run test 2 and, although it is excellent at 99.9%, we'd still miss a few more: multiplying the two gives an overall sensitivity of 0.99 × 0.999 ≈ 98.9%. Hence the reduction in combined test sensitivity.
But what did we gain by running the two-step screen? If we screened 1,000,000 people, how many would-be false positives would be correctly identified as negative by running test 2 (test 1 specificity 98%, test 2 specificity 99%, incidence in the tested population 10%)?
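One way to work through the question is to follow the population through each step. A quick sketch, using only the figures stated above (1,000,000 screened, 10% incidence, specificities of 98% and 99%):

```python
# Two-step screen: how many false positives does test 2 clear?
# Figures taken from the text: 1,000,000 screened, 10% incidence,
# test 1 specificity 98%, test 2 specificity 99%.
population = 1_000_000
incidence = 0.10
spec1 = 0.98
spec2 = 0.99

disease_free = population * (1 - incidence)      # 900,000 people without the condition
fp_after_test1 = disease_free * (1 - spec1)      # 2% wrongly flagged by test 1
cleared_by_test2 = fp_after_test1 * spec2        # 99% of those correctly cleared

print(f"False positives after test 1:        {fp_after_test1:,.0f}")
print(f"Correctly cleared as negative by test 2: {cleared_by_test2:,.0f}")
```

So test 1 alone produces 18,000 false positives among the 900,000 disease-free people, and test 2 correctly reclassifies 17,820 of them as negative, leaving only 180 false positives after both steps.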