Recent work in AI ethics has focused on applying abstract principles downward to practice. This paper moves in the other direction: ethical insights are generated from the lived experiences of AI designers working on tangible human problems, and then cycled upward to inform theoretical debates surrounding three questions: (1) Should trustworthy AI be pursued through explainability or through accurate performance? (2) Should AI be considered trustworthy at all, or is reliability a preferable aim? (3) Should AI ethics be oriented toward establishing protections for users, or toward catalyzing innovation? The specific answers matter less than the broader demonstration that AI ethics is currently skewed toward theoretical principles and would benefit from greater exposure to grounded practices and dilemmas.