[Paper Translation] SHAKTI: A 2.5 Billion Parameter Small Language Model Optimized for Edge AI and Low-Resource Environments


Original paper: https://arxiv.org/pdf/2410.11331v1


SHAKTI: A 2.5 BILLION PARAMETER SMALL LANGUAGE MODEL OPTIMIZED FOR EDGE AI AND LOW-RESOURCE ENVIRONMENTS


ABSTRACT


We introduce Shakti, a 2.5 billion parameter language model specifically optimized for resource-constrained environments such as edge devices, including smartphones, wearables, and IoT systems. Shakti combines high-performance NLP with optimized efficiency and precision, making it ideal for real-time AI applications where computational resources and memory are limited. With support for vernacular languages and domain-specific tasks, Shakti excels in industries such as healthcare, finance, and customer service. Benchmark evaluations demonstrate that Shakti performs competitively against larger models while maintaining low latency and on-device efficiency, positioning it as a leading solution for edge AI.
