Konan Technology is set to release 'ENT-11', a new version of its large language model (LLM) that integrates general and reasoning modes into a single engine. The 32B-parameter model outperformed DeepSeek in benchmarks and delivers high-performance AI services at a lower GPU cost. It is optimized for Korean, surpassing models such as Qwen, Llama, Gemma, and DeepSeek in the number of Korean tokens included in its pre-training data. It also supports a long context of up to 128K tokens, equivalent to roughly 128 A4 pages of Korean text or 320 pages of English text.
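
Those page-count figures imply a per-page token rate that is easy to back out. The sketch below is an illustrative back-of-the-envelope calculation, not from the article; it assumes 128K means 128,000 tokens and simply derives the approximate tokens per A4 page that the claim implies:

```python
# Illustrative check of the 128K-token context claim.
# The page counts come from the article; the per-page rates are derived,
# and 128K is assumed to mean 128,000 tokens.
CONTEXT_TOKENS = 128_000

pages_per_context = {"Korean": 128, "English": 320}  # A4 pages, per the article

for language, page_count in pages_per_context.items():
    tokens_per_page = CONTEXT_TOKENS / page_count
    print(f"{language}: ~{tokens_per_page:.0f} tokens per A4 page")
# Korean: ~1000 tokens per A4 page
# English: ~400 tokens per A4 page
```

In other words, the article's figures assume a Korean A4 page consumes roughly 2.5 times as many tokens as an English one, reflecting that Korean text generally tokenizes into more tokens per page.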