TTS task

A TTS (Text-to-Speech) task is a natural language processing (NLP) task in which the model converts input text into sound, i.e. automatic speech synthesis. Concretely, the model must understand the input text and generate the corresponding speech output, so that the synthesized audio sounds natural and fluent, close to the way a human would say it.

Bark

Bark (https://github.com/suno-ai/bark) is a transformer-based text-to-audio model created by Suno. Bark can generate highly realistic, multilingual speech as well as other audio, including music, background noise and simple sound effects. The model can also produce nonverbal communication such as laughing, sighing and crying. To support the research community, pretrained model checkpoints are provided that are ready for inference and available for commercial use.

bark.cpp

https://github.com/PABannier/bark.cpp

Build

```shell
$ mkdir build
$ cd build
$ cmake ..
-- The C compiler identification is GNU 9.5.0
-- The CXX compiler identification is GNU 9.5.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
-- Linux detected
-- Configuring done
-- Generating done
-- Build files have been written to: /home/pdd/le/bark.cpp/build
$ cmake --build . --config Release
[  7%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml.c.o
[ 14%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml-alloc.c.o
[ 21%] Linking C static library libggml.a
[ 21%] Built target ggml
[ 28%] Building CXX object CMakeFiles/bark.cpp.dir/bark.cpp.o
[ 42%] Linking CXX static library libbark.cpp.a
[ 42%] Built target bark.cpp
[ 50%] Building CXX object examples/main/CMakeFiles/main.dir/main.cpp.o
[ 57%] Linking CXX executable ../../bin/main
[ 57%] Built target main
[ 64%] Building CXX object examples/server/CMakeFiles/server.dir/server.cpp.o
[ 71%] Linking CXX executable ../../bin/server
[ 71%] Built target server
[ 78%] Building CXX object examples/quantize/CMakeFiles/quantize.dir/quantize.cpp.o
[ 85%] Linking CXX executable ../../bin/quantize
[ 85%] Built target quantize
[ 92%] Building CXX object tests/CMakeFiles/test-tokenizer.dir/test-tokenizer.cpp.o
[100%] Linking CXX executable ../bin/test-tokenizer
[100%] Built target test-tokenizer
```

Weight download and conversion

```shell
$ cd ../
# text_2.pt, coarse_2.pt, fine_2.pt, https://dl.fbaipublicfiles.com/encodec/v0/encodec_24khz-d7cc33bc.th
$ python3 download_weights.py --download-dir ./models
# convert the model to ggml format
$ python3 convert.py --dir-model ./models --codec-path ./models --vocab-path ./ggml_weights/ --out-dir ./ggml_weights/
$ ls -ahl ./models/
total 13G
drwxrwxr-x  2 pdd pdd 4.0K Jan 29 08:22 .
drwxrwxr-x 13 pdd pdd 4.0K Jan 29 06:50 ..
-rwxrwxrwx  1 pdd pdd 3.7G Jan 29 07:34 coarse_2.pt
-rw-rw-r--  1 pdd pdd  89M Jan 29 07:29 encodec_24khz-d7cc33bc.th
-rwxrwxrwx  1 pdd pdd 3.5G Jan 29 07:53 fine_2.pt
-rwxrwxrwx  1 pdd pdd 5.0G Jan 29 07:22 text_2.pt
$ ls -ahl ./ggml_weights/
total 4.2G
drwxrwxr-x  2 pdd pdd 4.0K Jan 29 08:34 .
drwxrwxr-x 13 pdd pdd 4.0K Jan 29 06:50 ..
-rw-rw-r--  1 pdd pdd 1.3M Jan 29 08:33 ggml_vocab.bin
-rw-rw-r--  1 pdd pdd 1.3G Jan 29 08:34 ggml_weights_coarse.bin
-rw-rw-r--  1 pdd pdd  45M Jan 29 08:34 ggml_weights_codec.bin
-rw-rw-r--  1 pdd pdd 1.2G Jan 29 08:34 ggml_weights_fine.bin
-rw-rw-r--  1 pdd pdd 1.7G Jan 29 08:33 ggml_weights_text.bin
-rw-rw-r--  1 pdd pdd 973K Jan 29 05:23 vocab.txt
```
Run

```shell
$ ./build/bin/main -h
usage: ./build/bin/main [options]

options:
  -h, --help            show this help message and exit
  -t N, --threads N     number of threads to use during computation (default: 4)
  -s N, --seed N        seed for random number generator (default: 0)
  -p PROMPT, --prompt PROMPT
                        prompt to start generation with (default: random)
  -m FNAME, --model FNAME
                        model path (default: /home/pdd/le/bark.cpp/ggml_weights)
  -o FNAME, --outwav FNAME
                        output generated wav (default: output.wav)
```

```shell
$ ./build/bin/main -m ./ggml_weights/ -p "this is an audio"
bark_load_model_from_file: loading model from ./ggml_weights/
bark_load_model_from_file: reading bark text model
gpt_model_load: n_in_vocab  = 129600
gpt_model_load: n_out_vocab = 10048
gpt_model_load: block_size  = 1024
gpt_model_load: n_embd      = 1024
gpt_model_load: n_head      = 16
gpt_model_load: n_layer     = 24
gpt_model_load: n_lm_heads  = 1
gpt_model_load: n_wtes      = 1
gpt_model_load: ftype       = 0
gpt_model_load: qntvr       = 0
gpt_model_load: ggml tensor size = 304 bytes
gpt_model_load: ggml ctx size = 1894.87 MB
gpt_model_load: memory size = 192.00 MB, n_mem = 24576
gpt_model_load: model size = 1701.69 MB
bark_load_model_from_file: reading bark vocab

bark_load_model_from_file: reading bark coarse model
gpt_model_load: n_in_vocab  = 12096
gpt_model_load: n_out_vocab = 12096
gpt_model_load: block_size  = 1024
gpt_model_load: n_embd      = 1024
gpt_model_load: n_head      = 16
gpt_model_load: n_layer     = 24
gpt_model_load: n_lm_heads  = 1
gpt_model_load: n_wtes      = 1
gpt_model_load: ftype       = 0
gpt_model_load: qntvr       = 0
gpt_model_load: ggml tensor size = 304 bytes
gpt_model_load: ggml ctx size = 1443.87 MB
gpt_model_load: memory size = 192.00 MB, n_mem = 24576
gpt_model_load: model size = 1250.69 MB

bark_load_model_from_file: reading bark fine model
gpt_model_load: n_in_vocab  = 1056
gpt_model_load: n_out_vocab = 1056
gpt_model_load: block_size  = 1024
gpt_model_load: n_embd      = 1024
gpt_model_load: n_head      = 16
gpt_model_load: n_layer     = 24
gpt_model_load: n_lm_heads  = 7
gpt_model_load: n_wtes      = 8
gpt_model_load: ftype       = 0
gpt_model_load: qntvr       = 0
gpt_model_load: ggml tensor size = 304 bytes
gpt_model_load: ggml ctx size = 1411.25 MB
gpt_model_load: memory size = 192.00 MB, n_mem = 24576
gpt_model_load: model size = 1218.26 MB

bark_load_model_from_file: reading bark codec model
encodec_model_load: model size = 44.32 MB
bark_load_model_from_file: total model size = 4170.64 MB

bark_tokenize_input: prompt: this is an audio
bark_tokenize_input: number of tokens in prompt = 513, first 8 tokens: 20579 20172 20199 33733 129595 129595 129595 129595

bark_forward_text_encoder: ...........................................................................................................

bark_print_statistics: mem per token = 4.81 MB
bark_print_statistics: sample time = 16.03 ms / 109 tokens
bark_print_statistics: predict time = 9644.73 ms / 87.68 ms per token
bark_print_statistics: total time = 9663.29 ms

bark_forward_coarse_encoder: ...................................................................................................................................................................................................................................................................................................................................

bark_print_statistics: mem per token = 8.53 MB
bark_print_statistics: sample time = 4.43 ms / 324 tokens
bark_print_statistics: predict time = 52071.64 ms / 160.22 ms per token
bark_print_statistics: total time = 52080.24 ms

ggml_new_object: not enough space in the context's memory pool (needed 4115076720, available 4112941056)
Segmentation fault (core dumped)
```

At first I assumed this was an out-of-memory problem and added more virtual memory, but the error persisted:

```shell
$ sudo dd if=/dev/zero of=swapfile bs=1024 count=10000000
10000000+0 records in
10000000+0 records out
10240000000 bytes (10 GB, 9.5 GiB) copied, 55.3595 s, 185 MB/s
```
```shell
$ sudo chmod 600 ./swapfile   # delete the swapfile if you don't need it
$ sudo mkswap -f ./swapfile
Setting up swapspace version 1, size = 9.5 GiB (10239995904 bytes)
no label, UUID=f3e2a0be-b880-48da-b598-950b7d69f94f
$ sudo swapon ./swapfile
$ free -m
               total        used        free      shared  buff/cache   available
Mem:           15731        6441         307        1242        8982        7713
Swap:          11813        2047        9765

$ ./build/bin/main -m ./ggml_weights/ -p "this is an audio"
ggml_new_object: not enough space in the context's memory pool (needed 4115076720, available 4112941056)
```

Looking at the function that raises the error, the cause is apparently not system memory:

```c
static struct ggml_object * ggml_new_object(struct ggml_context * ctx, enum ggml_object_type type, size_t size) {
    // always insert objects at the end of the context's memory pool
    struct ggml_object * obj_cur = ctx->objects_end;

    const size_t cur_offs = obj_cur == NULL ? 0 : obj_cur->offs;
    const size_t cur_size = obj_cur == NULL ? 0 : obj_cur->size;
    const size_t cur_end  = cur_offs + cur_size;

    // align to GGML_MEM_ALIGN
    size_t size_needed = GGML_PAD(size, GGML_MEM_ALIGN);

    char * const mem_buffer = ctx->mem_buffer;
    struct ggml_object * const obj_new = (struct ggml_object *)(mem_buffer + cur_end);

    if (cur_end + size_needed + GGML_OBJECT_SIZE > ctx->mem_size) {
        GGML_PRINT("%s: not enough space in the context's memory pool (needed %zu, available %zu)\n",
                __func__, cur_end + size_needed, ctx->mem_size);
        assert(false);
        return NULL;
    }

    *obj_new = (struct ggml_object) {
        .offs = cur_end + GGML_OBJECT_SIZE,
        .size = size_needed,
        .next = NULL,
        .type = type,
    };

    ggml_assert_aligned(mem_buffer + obj_new->offs);

    if (obj_cur != NULL) {
        obj_cur->next = obj_new;
    } else {
        // this is the first object in this context
        ctx->objects_begin = obj_new;
    }

    ctx->objects_end = obj_new;

    //printf("%s: inserted new object at %zu, size %zu\n", __func__, cur_end, obj_new->size);

    return obj_new;
}
```

Then I found https://github.com/PABannier/bark.cpp/issues/122 and checked out an earlier commit:

```shell
$ cd bark.cpp/
$ git checkout -f 07e651618b3a8a27de3bfa7f733cdb0aa8f46b8a
HEAD is now at 07e6516 ENH Decorrelate fine GPT graph (#111)
```

With that commit the run succeeds:

```shell
/home/pdd/le/bark.cpp/cmake-build-debug/bin/main
bark_load_model_from_file: loading model from /home/pdd/le/bark.cpp/ggml_weights
bark_load_model_from_file: reading bark text model
gpt_model_load: n_in_vocab  = 129600
gpt_model_load: n_out_vocab = 10048
gpt_model_load: block_size  = 1024
gpt_model_load: n_embd      = 1024
gpt_model_load: n_head      = 16
gpt_model_load: n_layer     = 24
gpt_model_load: n_lm_heads  = 1
gpt_model_load: n_wtes      = 1
gpt_model_load: ftype       = 0
gpt_model_load: qntvr       = 0
gpt_model_load: ggml tensor size = 272 bytes
gpt_model_load: ggml ctx size = 1894.87 MB
gpt_model_load: memory size = 192.00 MB, n_mem = 24576
gpt_model_load: model size = 1701.69 MB
bark_load_model_from_file: reading bark vocab

bark_load_model_from_file: reading bark coarse model
gpt_model_load: n_in_vocab  = 12096
gpt_model_load: n_out_vocab = 12096
gpt_model_load: block_size  = 1024
gpt_model_load: n_embd      = 1024
gpt_model_load: n_head      = 16
gpt_model_load: n_layer     = 24
gpt_model_load: n_lm_heads  = 1
gpt_model_load: n_wtes      = 1
gpt_model_load: ftype       = 0
gpt_model_load: qntvr       = 0
gpt_model_load: ggml tensor size = 272 bytes
gpt_model_load: ggml ctx size = 1443.87 MB
gpt_model_load: memory size = 192.00 MB, n_mem = 24576
gpt_model_load: model size = 1250.69 MB

bark_load_model_from_file: reading bark fine model
gpt_model_load: n_in_vocab  = 1056
gpt_model_load: n_out_vocab = 1056
gpt_model_load: block_size  = 1024
gpt_model_load: n_embd      = 1024
gpt_model_load: n_head      = 16
gpt_model_load: n_layer     = 24
gpt_model_load: n_lm_heads  = 7
gpt_model_load: n_wtes      = 8
gpt_model_load: ftype       = 0
gpt_model_load: qntvr       = 0
gpt_model_load: ggml tensor size = 272 bytes
gpt_model_load: ggml ctx size = 1411.25 MB
gpt_model_load: memory size = 192.00 MB, n_mem = 24576
gpt_model_load: model size = 1218.26 MB

bark_load_model_from_file: reading bark codec model
bark_load_model_from_file: total model size = 4170.64 MB

bark_tokenize_input: prompt: this is an audio
bark_tokenize_input: number of tokens in prompt = 513, first 8 tokens: 20579 20172 20199 33733 129595 129595 129595 129595
encodec_model_load: model size = 44.32 MB

bark_forward_text_encoder: ...........................................................................................................

bark_print_statistics: mem per token = 4.80 MB
bark_print_statistics: sample time = 59.49 ms / 109 tokens
bark_print_statistics: predict time = 24761.95 ms / 225.11 ms per token
bark_print_statistics: total time = 24826.76 ms

bark_forward_coarse_encoder: ...................................................................................................................................................................................................................................................................................................................................

bark_print_statistics: mem per token = 8.51 MB
bark_print_statistics: sample time = 19.74 ms / 324 tokens
bark_print_statistics: predict time = 178366.69 ms / 548.82 ms per token
bark_print_statistics: total time = 178396.22 ms

bark_forward_fine_encoder: .....

bark_print_statistics: mem per token = 0.66 MB
bark_print_statistics: sample time = 304.20 ms / 6144 tokens
bark_print_statistics: predict time = 407086.19 ms / 58155.17 ms per token
bark_print_statistics: total time = 407399.91 ms

bark_forward_encodec: mem per token = 760209 bytes
bark_forward_encodec: predict time = 4349.03 ms
bark_forward_encodec: total time = 4349.07 ms

Number of frames written = 51840.

main: load time = 11441.58 ms
main: eval time = 614987.69 ms
main: total time = 626429.31 ms

Process finished with exit code 0
```

CG

- iFLYTEK (科大讯飞) semantic understanding: AIUI wrapper
- https://github.com/iboB/pytorch-ggml-plugin