Open-weight models can be fine-tuned to customise them for specific tasks they were not originally trained on, and there are a number of techniques and tools for doing this. For this experiment I wanted to try full fine-tuning with https://pytorch.org/torchtune/ to teach Llama 3.1 8b…
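As a rough sketch, a full fine-tune with torchtune's CLI looks something like the following. This assumes torchtune is installed (`pip install torchtune`) and that you have a Hugging Face access token with access to the Llama weights; the recipe and config names are torchtune's published single-device Llama 3.1 8B recipes, and any task-specific dataset configuration would be layered on top of this:

```shell
# Download the base model weights from Hugging Face
# (requires an access token in $HF_TOKEN with Llama access granted).
tune download meta-llama/Meta-Llama-3.1-8B-Instruct \
  --output-dir /tmp/Meta-Llama-3.1-8B-Instruct \
  --hf-token "$HF_TOKEN"

# Run full fine-tuning (all weights updated, not an adapter method like
# LoRA) on a single device using torchtune's built-in recipe and config.
tune run full_finetune_single_device \
  --config llama3_1/8B_full_single_device
```

Individual config values (dataset, learning rate, epochs) can be overridden on the command line or by copying and editing the YAML config with `tune cp`.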