Thanks to advances in statistical machine learning, AI is enjoying renewed popularity today. However, two properties are still in great need of improvement: a) robustness and b) explainability. In many application domains, the question of why a particular result was obtained is often more important than the result itself. This is directly related to robustness, because even small disturbances in the input data can have dramatic effects on the output and lead to completely different results. This matters in all critical areas where we work with real data from our environment, i.e. where we do not have i.i.d. laboratory data. The use of AI in real-world domains that impact human life (agriculture, climate, forestry, health, etc.) has therefore led to an increased demand for trustworthy AI. In sensitive areas where traceability, transparency and interpretability are required, explainable AI (XAI) is now even mandated by regulatory requirements. One approach to making AI more robust is to combine statistical learning with knowledge representations. For certain tasks, it can be beneficial to include a human in the loop: a human expert can sometimes, though not always, bring experience and conceptual understanding to the AI pipeline.
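The sensitivity of model outputs to small input disturbances can be sketched with a deliberately simple toy example. The linear classifier, weights, and inputs below are all invented for illustration; real models are far more complex, but the underlying effect is the same: an input lying near a decision boundary can flip class under a tiny perturbation.

```python
# Toy illustration (hypothetical model): a linear classifier whose
# prediction flips under a tiny perturbation of one input feature.

def predict(x, w, b):
    """Return class 1 if the linear score w.x + b is positive, else 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

w = [2.0, -1.0]             # fixed weights of the toy model (invented)
b = -0.5                    # bias term (invented)

x = [0.30, 0.09]            # score = 0.60 - 0.09 - 0.50 = +0.01 -> class 1
x_perturbed = [0.29, 0.09]  # one feature shifted by 0.01:
                            # score = 0.58 - 0.09 - 0.50 = -0.01 -> class 0

print(predict(x, w, b))            # prints 1
print(predict(x_perturbed, w, b))  # prints 0
```

A perturbation of 0.01 in a single feature, far below typical measurement noise in environmental data, is enough to change the result, which is why robustness and the ability to explain *why* a decision was made go hand in hand.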