Large language models (LLMs) have significantly advanced natural language processing and hold the potential to directly support health workers and their clients. Unfortunately, their performance drops substantially, and unevenly, for low-resource languages. Here we present results from an exploratory case study in Malawi aimed at enhancing LLM performance in Chichewa through innovative prompt engineering techniques. By prioritizing practical evaluations over traditional metrics, we assess the subjective utility of LLM outputs as judged by end users. Our findings suggest that tailored prompt engineering may improve LLM utility in underserved linguistic contexts, offering a promising avenue for bridging the language inclusivity gap in digital health interventions.