INT4 LoRA fine-tuning vs QLoRA: A user asked about the differences between INT4 LoRA fine-tuning and QLoRA in terms of precision and speed. Another member explained that QLoRA with HQQ keeps the quantized weights frozen, does not use tinygemm, and instead dequantizes the weights and calls torch.matmul.
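The dequantize-then-matmul pattern described above can be sketched as follows. This is a minimal NumPy illustration, not HQQ's actual kernel: the `quantize`/`dequantize` helpers and the 4-bit symmetric scheme are simplifying assumptions, and the torch version would use `torch.matmul` in place of `@`. The base weight stays frozen in quantized form; only the LoRA adapters `A` and `B` would receive gradients during fine-tuning.

```python
import numpy as np

# Toy 4-bit-style quantization: store weights as small integers plus a
# per-tensor scale (symmetric, no zero-point, for simplicity).
def quantize(w, bits=4):
    qmax = 2 ** (bits - 1) - 1                 # 7 for signed 4-bit
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 8)).astype(np.float32)   # base weight
x = rng.standard_normal((4, 8)).astype(np.float32)   # input batch

q, scale = quantize(w)                 # frozen, stored quantized

# LoRA adapters (rank 2): the only trainable parameters.
r = 2
A = rng.standard_normal((8, r)).astype(np.float32) * 0.01
B = np.zeros((r, 8), dtype=np.float32)  # standard LoRA init: B = 0

# Forward pass: dequantize the frozen weight, then a plain matmul,
# plus the low-rank LoRA path.
W = dequantize(q, scale)
y = x @ W.T + x @ A @ B

# With B initialized to zero, the LoRA path contributes nothing yet,
# so the output equals the dequantized base layer alone.
assert np.allclose(y, x @ W.T)
```

Note that the speed question in the thread follows directly from this structure: because the weight is dequantized and fed to a generic matmul rather than a fused INT4 kernel like tinygemm, the quantization here saves memory but not matmul compute.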