In the past decade, Taiwan and other countries have seen a growth in the number of interpreter training programs, and as a result an increasing number of new interpreters have entered the job market. As the market becomes more competitive, it is worth asking how training programs judge, at their exit exams, whether aspiring interpreters are ready to work professionally. This study addresses this question by comparing the exit exam practices of Taiwanese, Chinese, British, and American programs that train English-Chinese interpreters. Eleven such programs were chosen: seven in Taiwan, one in China, two in Britain, and one in the USA. Data were collected through interviews, questionnaires, correspondence, and analysis of relevant documents, then coded and grouped into three categories: exam policies, test-writing practices, and evaluation practices. The analysis showed that interpreter training programs generally did not use specific criteria to judge or control the difficulty level of their tests; instead, a test's difficulty was often factored into the evaluation criteria. Evaluation also relied heavily on interpreting experts' holistic judgment. Although all of the programs had developed evaluation criteria for their exit exams, these criteria were often not thoroughly followed in the actual exam evaluation.