Although generative AI systems such as ChatGPT and Gemini are widely used in education, recent studies suggest that assistance should be tailored to students’ specific writing proficiency levels (Lu et al., 2024). However, effective collaboration between AI and human writers, as well as students’ perceptions of AI-led platforms across various text types and topics, remains under-researched. This paper examines which aspects of AI-assisted writing matter most to students’ perceptions, focusing on grammatical accuracy, word usage, logical clarity, fidelity to task requirements, and reader-friendliness. It evaluates the impact of AI writing and reviewing across five text types (exploration papers, complaint letters, email requests, CVs, and personal essays) and assesses students’ perceptions of AI’s effectiveness. The study involved 137 English-language undergraduates who wrote and reviewed five practical writing tasks, both individually and with AI assistance. This was followed by a series of questionnaires and interviews conducted over an eighteen-week AI-Assisted Practical Writing course in the autumn semester of 2023. Our findings reveal that undergraduates performed better on opinion-based and format-heavy tasks, such as exploration papers, formal emails, and letters, owing to AI’s strengths in vocabulary, structure, logic, and style. AI was less helpful, however, for tasks requiring a personal touch, such as CVs and personal essays, where it lacked originality and emotional depth. Students preferred human reviews over AI reviews because of AI’s instability, incomplete error recognition, illogical statements, generic feedback, and lack of empathy. To address these weaknesses, we propose a two-directional model of human-AI cooperation for practical writing (HAIC), based on Cardon et al.’s (2023) Capabilities of AI Literacy for Communication. The model comprises four capabilities: brainstorming, integration, verification, and critique.
The two pathways are: starting with an AI-generated draft, followed by human verification and post-editing (the AI-human pathway), or beginning with a human-generated draft, followed by AI verification and human post-editing (the human-AI pathway). Our data show that formal email and formal letter writing are most effective with the AI-human pathway, while personal essays and CVs are best suited to the human-AI pathway. These findings are corroborated by the alignment between students’ perceptions of AI’s effectiveness and their tendency to use AI first for format-loaded types of practical writing; conversely, the more a task requires human elements and creativity, the less likely students are to use AI at the outset. The paper concludes with practical implications for translation teaching and training.