Compare commits

69 Commits

| Author | SHA1 | Date |
|---|---|---|
| | c03dc9c984 | |
| | 7569c08a62 | |
| | f07e5802f7 | |
| | ffcfe8e03b | |
| | 35a7ef657a | |
| | 250ec4f65c | |
| | 5d0ffdad8a | |
| | 95e4d3170d | |
| | dfa8328bb0 | |
| | 5177c1871a | |
| | 1901c2905b | |
| | b312c52a33 | |
| | fb974cefcf | |
| | c7f7fa12b4 | |
| | 6a19e2bb29 | |
| | 443f5bf61e | |
| | 7d00e9c768 | |
| | c0ab0ba473 | |
| | 4b2f9e42d7 | |
| | 4ce32a8851 | |
| | 47e4cff758 | |
| | 96e109e199 | |
| | 36dffe8de3 | |
| | 6d2e4a8081 | |
| | a7c45b125f | |
| | 6c2b5b8cf4 | |
| | 91e9f3900d | |
| | ab1bd03f0b | |
| | cd0cbc8061 | |
| | c6c6390a83 | |
| | 6bfb9355cf | |
| | 34d785a246 | |
| | c9bd480514 | |
| | 5349f29415 | |
| | 6500cafa4f | |
| | e2e92a433e | |
| | dd90cfecbb | |
| | 7a5b037ad8 | |
| | ee0d2371d5 | |
| | c4586d37f5 | |
| | 2d8cd23fe7 | |
| | 85d446e2d0 | |
| | afd064e15d | |
| | 809d6cabbb | |
| | 8058eed9ab | |
| | 15ee6126a5 | |
| | b6a7ea2756 | |
| | 63c3402c94 | |
| | 5a6dd6c7a5 | |
| | 8c226322a0 | |
| | 3a7888937f | |
| | 6760a0ad00 | |
| | 6288b70ae2 | |
| | 4adc010388 | |
| | 162b5e17c3 | |
| | 0d43ba2124 | |
| | 080d8d82b4 | |
| | fc50e16bc5 | |
| | 345b6d59a1 | |
| | 4ec19fd56a | |
| | 136630ec60 | |
| | 9d3d99a595 | |
| | 747c745ec0 | |
| | a53ca843e8 | |
| | 8b18d84d8a | |
| | edc4df6eb5 | |
| | 5ed98d317c | |
| | c22ef5f1d2 | |
| | bcc9621976 | |
.github/ISSUE_TEMPLATE/bug_report.yml (vendored, new file, 81 additions)

@@ -0,0 +1,81 @@
+name: 🐛 Bug
+description: Something went wrong or did not work as expected
+title: "Please fill in the title here"
+labels:
+  - bug
+
+body:
+  - type: markdown
+    attributes:
+      value: |
+        **Before submitting this issue, please make sure you have read the following documentation: [Getting Started (English)](https://github.com/harry0703/MoneyPrinterTurbo/blob/main/README-en.md#system-requirements-) or [快速开始 (中文)](https://github.com/harry0703/MoneyPrinterTurbo/blob/main/README.md#%E5%BF%AB%E9%80%9F%E5%BC%80%E5%A7%8B-).**
+
+        **Please fill in the following information:**
+  - type: checkboxes
+    attributes:
+      label: Does a similar issue already exist?
+      description: |
+        Please make sure to check whether this issue has already been reported by other users.
+
+        Before opening a new issue, use GitHub's issue search (including closed issues), or search with tools such as Google or StackOverflow, to confirm the issue is not a duplicate.
+
+        You may already be able to find a solution to your problem!
+      options:
+        - label: I have searched the existing issues
+          required: true
+  - type: textarea
+    attributes:
+      label: Current behavior
+      description: Describe what you are currently experiencing.
+      placeholder: |
+        MoneyPrinterTurbo did not work as expected. When I performed some action, the video was not generated / the program raised an error...
+    validations:
+      required: true
+  - type: textarea
+    attributes:
+      label: Expected behavior
+      description: Describe what you expected to happen.
+      placeholder: |
+        When I perform some action, the program should...
+    validations:
+      required: true
+  - type: textarea
+    attributes:
+      label: Steps to reproduce
+      description: Describe the steps to reproduce the problem. The more detailed the description, the easier it is to locate and fix the issue.
+    validations:
+      required: true
+  - type: textarea
+    attributes:
+      label: Stack trace / logs
+      description: |
+        If you have any stack traces or logs, please paste them here. (Be careful not to include sensitive information.)
+    validations:
+      required: true
+  - type: input
+    attributes:
+      label: Python version
+      description: The Python version you were using when you encountered this issue.
+      placeholder: v3.13.0, v3.10.0, etc.
+    validations:
+      required: true
+  - type: input
+    attributes:
+      label: Operating system
+      description: Information about the operating system you were using when you ran into the problem with MoneyPrinterTurbo.
+      placeholder: macOS 14.1, Windows 11, etc.
+    validations:
+      required: true
+  - type: input
+    attributes:
+      label: MoneyPrinterTurbo version
+      description: In which version of MoneyPrinterTurbo did you encounter this issue?
+      placeholder: v1.2.2, etc.
+    validations:
+      required: true
+  - type: textarea
+    attributes:
+      label: Additional information
+      description: Is there anything else you would like to add? For example, screenshots or a video recording of the problem.
+    validations:
+      required: false
.github/ISSUE_TEMPLATE/config.yml (vendored, new file, 1 addition)

@@ -0,0 +1 @@
+blank_issues_enabled: false
.github/ISSUE_TEMPLATE/feature_request.yml (vendored, new file, 38 additions)

@@ -0,0 +1,38 @@
+name: ✨ Feature request
+description: Propose a new idea for this project
+title: "Please fill in the title here"
+labels:
+  - enhancement
+
+body:
+  - type: checkboxes
+    attributes:
+      label: Does a similar feature request already exist?
+      description: Please make sure this feature request is not a duplicate.
+      options:
+        - label: I have searched the existing feature requests
+          required: true
+  - type: textarea
+    attributes:
+      label: Pain point
+      description: Please explain your feature request.
+      placeholder: I wish this could be implemented
+    validations:
+      required: true
+  - type: textarea
+    attributes:
+      label: Suggested solution
+      description: Please describe the solution you have in mind.
+      placeholder: You could add this feature / change this flow / use some approach
+    validations:
+      required: true
+  - type: textarea
+    attributes:
+      label: Useful resources
+      description: Please provide resources that could help implement your suggestion.
+  - type: textarea
+    attributes:
+      label: Additional information
+      description: Is there anything else you would like to add? For example, screenshots or a video recording.
+    validations:
+      required: false
.gitignore (vendored, 3 additions)

@@ -23,3 +23,6 @@ node_modules
 # model directory
 /models/
 ./models/*
+
+venv/
+.venv
.pdm-python (new file, 1 addition)

@@ -0,0 +1 @@
+./MoneyPrinterTurbo/.venv/bin/python
@@ -1,5 +1,5 @@
 # Use an official Python runtime as a parent image
-FROM python:3.10-slim-bullseye
+FROM python:3.11-slim-bullseye
 
 # Set the working directory in the container
 WORKDIR /MoneyPrinterTurbo
@@ -41,4 +41,4 @@ CMD ["streamlit", "run", "./webui/Main.py","--browser.serverAddress=127.0.0.1","
 ## For Linux or MacOS:
 # docker run -v $(pwd)/config.toml:/MoneyPrinterTurbo/config.toml -v $(pwd)/storage:/MoneyPrinterTurbo/storage -p 8501:8501 moneyprinterturbo
 ## For Windows:
-# docker run -v %cd%/config.toml:/MoneyPrinterTurbo/config.toml -v %cd%/storage:/MoneyPrinterTurbo/storage -p 8501:8501 moneyprinterturbo
+# docker run -v ${PWD}/config.toml:/MoneyPrinterTurbo/config.toml -v ${PWD}/storage:/MoneyPrinterTurbo/storage -p 8501:8501 moneyprinterturbo
README-en.md (155 changed lines)

@@ -35,9 +35,18 @@ like to express our special thanks to
 **RecCloud (AI-Powered Multimedia Service Platform)** for providing a free `AI Video Generator` service based on this
 project. It allows for online use without deployment, which is very convenient.
 
-https://reccloud.com
+- Chinese version: https://reccloud.cn
+- English version: https://reccloud.com
 
 
 
+## Thanks for Sponsorship 🙏
+
+Thanks to Picwish https://picwish.cn for supporting and sponsoring this project, enabling continuous updates and maintenance.
+
+Picwish focuses on the **image processing field**, providing a rich set of **image processing tools** that greatly simplify complex operations, truly making image processing easier.
+
+
+
 ## Features 🎯
@@ -51,28 +60,26 @@
 satisfactory one
 - [x] Supports setting the **duration of video clips**, facilitating adjustments to material switching frequency
 - [x] Supports video copy in both **Chinese** and **English**
-- [x] Supports **multiple voice** synthesis
+- [x] Supports **multiple voice** synthesis, with **real-time preview** of effects
 - [x] Supports **subtitle generation**, with adjustable `font`, `position`, `color`, `size`, and also
   supports `subtitle outlining`
 - [x] Supports **background music**, either random or specified music files, with adjustable `background music volume`
-- [x] Video material sources are **high-definition** and **royalty-free**
+- [x] Video material sources are **high-definition** and **royalty-free**, and you can also use your own **local materials**
-- [x] Supports integration with various models such as **OpenAI**, **moonshot**, **Azure**, **gpt4free**, **one-api**,
-  **qianwen**, **Google Gemini**, **Ollama** and more
+- [x] Supports integration with various models such as **OpenAI**, **Moonshot**, **Azure**, **gpt4free**, **one-api**,
+  **Qwen**, **Google Gemini**, **Ollama**, **DeepSeek**, **ERNIE** and more
+  - For users in China, it is recommended to use **DeepSeek** or **Moonshot** as the large model provider (directly accessible in China, no VPN needed; free credits upon registration, generally sufficient for use)
 
 ❓[How to Use the Free OpenAI GPT-3.5 Model?](https://github.com/harry0703/MoneyPrinterTurbo/blob/main/README-en.md#common-questions-)
 
 ### Future Plans 📅
 
-- [ ] Introduce support for GPT-SoVITS dubbing
+- [ ] GPT-SoVITS dubbing support
-- [ ] Enhance voice synthesis with large models for a more natural and emotionally resonant voice output
+- [ ] Optimize voice synthesis using large models for more natural and emotionally rich voice output
-- [ ] Incorporate video transition effects to ensure a smoother viewing experience
+- [ ] Add video transition effects for a smoother viewing experience
-- [ ] Improve the relevance of video content
+- [ ] Add more video material sources, and improve the matching between video materials and the script
-- [ ] Add options for video length: short, medium, long
+- [ ] Add video length options: short, medium, long
-- [ ] Package the application into a one-click launch bundle for Windows and macOS for ease of use
-- [ ] Enable the use of custom materials
-- [ ] Offer voiceover and background music options with real-time preview
-- [ ] Support a wider range of voice synthesis providers, such as OpenAI TTS, Azure TTS
-- [ ] Automate the upload process to the YouTube platform
+- [ ] Support more voice synthesis providers, such as OpenAI TTS
+- [ ] Automate upload to the YouTube platform
 
 ## Video Demos 📺
@@ -115,10 +122,27 @@
 - Recommended minimum 4 CPU cores or more, 8G of memory or more, GPU is not required
 - Windows 10 or MacOS 11.0, and their later versions
 
+## Quick Start 🚀
+
+Download the one-click startup package, extract it, and use it directly (the path must not contain **Chinese characters**, **special characters**, or **spaces**)
+
+### Windows
+- Baidu Netdisk (1.2.1 latest version): https://pan.baidu.com/s/1pSNjxTYiVENulTLm6zieMQ?pwd=g36q Extraction code: g36q
+
+After downloading, it is recommended to **double-click** `update.bat` first to update to the **latest code**, then double-click `start.bat` to launch
+
+After launching, the browser will open automatically (if it opens to a blank page, it is recommended to use **Chrome** or **Edge**)
+
+### Other Systems
+
+One-click startup packages have not been created yet. See the **Installation & Deployment** section below. It is recommended to use **docker** for deployment, which is more convenient.
+
 ## Installation & Deployment 📥
 
 ### Prerequisites
 
 - Try to avoid using **Chinese paths** to prevent unpredictable issues
-- Ensure your **network** is stable, meaning you can access foreign websites normally
+- Ensure your **network** is stable; your VPN needs to be in `global traffic` mode
 
 #### ① Clone the Project
@@ -132,11 +156,6 @@ git clone https://github.com/harry0703/MoneyPrinterTurbo.git
 - Follow the instructions in the `config.toml` file to configure `pexels_api_keys` and `llm_provider`, and according to
   the llm_provider's service provider, set up the corresponding API Key
 
-#### ③ Configure Large Language Models (LLM)
-
-- To use `GPT-4.0` or `GPT-3.5`, you need an `API Key` from `OpenAI`. If you don't have one, you can set `llm_provider`
-  to `g4f` (a free-to-use GPT library https://github.com/xtekky/gpt4free)
-
 ### Docker Deployment 🐳
 
 #### ① Launch the Docker Container
@@ -152,6 +171,8 @@ cd MoneyPrinterTurbo
 docker-compose up
 ```
 
+> Note: the latest versions of Docker install Docker Compose as a plugin, so the start command becomes `docker compose up`
+
 #### ② Access the Web Interface
 
 Open your browser and visit http://0.0.0.0:8501
@@ -162,27 +183,28 @@ Open your browser and visit http://0.0.0.0:8080/docs Or http://0.0.0.0:8080/redo
 
 ### Manual Deployment 📦
 
-#### ① Create a Python Virtual Environment
-
-It is recommended to create a Python virtual environment
-using [conda](https://conda.io/projects/conda/en/latest/user-guide/install/index.html)
+> Video tutorials
+>
+> - Complete usage demonstration: https://v.douyin.com/iFhnwsKY/
+> - How to deploy on Windows: https://v.douyin.com/iFyjoW3M
+
+#### ① Install Dependencies
+
+It is recommended to use [pdm](https://pdm-project.org/en/latest/#installation)
 
 ```shell
 git clone https://github.com/harry0703/MoneyPrinterTurbo.git
 cd MoneyPrinterTurbo
-conda create -n MoneyPrinterTurbo python=3.10
-conda activate MoneyPrinterTurbo
-pip install -r requirements.txt
+pdm sync
 ```
 
 #### ② Install ImageMagick
 
 ###### Windows:
 
-- Download https://imagemagick.org/archive/binaries/ImageMagick-7.1.1-29-Q16-x64-static.exe
+- Download from https://imagemagick.org/script/download.php; choose the Windows version, and make sure to select the **static library** build, such as ImageMagick-7.1.1-32-Q16-x64-**static**.exe
 - Install the downloaded ImageMagick, **do not change the installation path**
-- Modify the `config.toml` configuration file, set `imagemagick_path` to your actual installation path (if you didn't
-  change the path during installation, just uncomment it)
+- Modify the `config.toml` configuration file, set `imagemagick_path` to your actual installation path
 
 ###### MacOS:
@@ -209,14 +231,12 @@ Note that you need to execute the following commands in the `root directory` of
 ###### Windows
 
 ```bat
-conda activate MoneyPrinterTurbo
 webui.bat
 ```
 
 ###### MacOS or Linux
 
 ```shell
-conda activate MoneyPrinterTurbo
 sh webui.sh
 ```
@@ -235,13 +255,15 @@ online for a quick experience.
 
 A list of all supported voices can be viewed here: [Voice List](./docs/voice-list.txt)
 
+2024-04-16 v1.1.2: added 9 new Azure voice synthesis voices that require an API KEY to be configured. These voices sound more realistic.
+
 ## Subtitle Generation 📜
 
 Currently, there are 2 ways to generate subtitles:
 
-- edge: Faster generation speed, better performance, no specific requirements for computer configuration, but the
+- **edge**: Faster generation speed, better performance, no specific requirements for computer configuration, but the
   quality may be unstable
-- whisper: Slower generation speed, poorer performance, specific requirements for computer configuration, but more
+- **whisper**: Slower generation speed, poorer performance, specific requirements for computer configuration, but more
   reliable quality
 
 You can switch between them by modifying the `subtitle_provider` in the `config.toml` configuration file
@@ -250,15 +272,19 @@ It is recommended to use `edge` mode, and switch to `whisper` mode if the qualit
 satisfactory.
 
 > Note:
-> If left blank, it means no subtitles will be generated.
+>
+> 1. In whisper mode, you need to download a model file from HuggingFace, about 3GB in size; please ensure good internet connectivity
+> 2. If left blank, no subtitles will be generated.
 
-**Download whisper**
-- Please ensure good internet connectivity
-- The `whisper` model can be downloaded from HuggingFace: https://huggingface.co/openai/whisper-large-v3/tree/main
-
-After downloading the model to your local machine, copy the whole folder and put it into the following path: `.\MoneyPrinterTurbo\models`
-This is what the final path should look like: `.\MoneyPrinterTurbo\models\whisper-large-v3`
+> Since HuggingFace is not accessible in China, you can use the following methods to download the `whisper-large-v3` model file
+
+Download links:
+
+- Baidu Netdisk: https://pan.baidu.com/s/11h3Q6tsDtjQKTjUu3sc5cA?pwd=xjs9
+- Quark Netdisk: https://pan.quark.cn/s/3ee3d991d64b
+
+After downloading the model, extract it and place the entire directory in `.\MoneyPrinterTurbo\models`.
+The final file path should look like this: `.\MoneyPrinterTurbo\models\whisper-large-v3`
 
 ```
 MoneyPrinterTurbo
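The "place the model under `models/whisper-large-v3`" convention above amounts to a simple directory check before falling back to an online download; a small sketch of that idea (the helper name and path layout are illustrative, not the project's actual API):

```python
import os
import tempfile

def has_local_whisper(root: str, model_name: str = "whisper-large-v3") -> bool:
    # The README says the extracted model should sit at <root>/models/<model_name>;
    # treat the directory's presence as "use the local copy, skip the download".
    return os.path.isdir(os.path.join(root, "models", model_name))

# Demonstrate with a throwaway directory standing in for the project root.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "models", "whisper-large-v3"))
print(has_local_whisper(root))  # True
```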
@@ -302,6 +328,16 @@ Once successfully started, modify the `config.toml` configuration as follows:
 - Change `openai_base_url` to `http://localhost:3040/v1/`
 - Set `openai_model_name` to `gpt-3.5-turbo`
 
+> Note: this method may be unstable
+
+### ❓AttributeError: 'str' object has no attribute 'choices'
+
+This issue is caused by the large language model not returning a correct response.
+
+It is most likely a network issue: use a **VPN**, or set `openai_base_url` to your proxy, and the problem should be solved.
+
+At the same time, it is recommended to use **Moonshot** or **DeepSeek** as the large model provider; these providers are faster and more stable to access from China.
+
 ### ❓RuntimeError: No ffmpeg exe could be found
 
 Normally, ffmpeg will be automatically downloaded and detected.
@@ -353,6 +389,43 @@ For Linux systems, you can manually install it, refer to https://cn.linux-consol
 
 Thanks to [@wangwenqiao666](https://github.com/wangwenqiao666) for their research and exploration
 
+### ❓ImageMagick's security policy prevents operations related to temporary file @/tmp/tmpur5hyyto.txt
+
+You can find these policies in ImageMagick's configuration file policy.xml.
+This file is usually located in /etc/ImageMagick-`X`/ or a similar location in the ImageMagick installation directory.
+Modify the entry containing `pattern="@"`: change `rights="none"` to `rights="read|write"` to allow read and write operations on files.
+
+### ❓OSError: [Errno 24] Too many open files
+
+This issue is caused by the system's limit on the number of open files. You can solve it by raising that limit.
+
+Check the current limit:
+
+```shell
+ulimit -n
+```
+
+If it is too low, increase it, for example:
+
+```shell
+ulimit -n 10240
+```
+
+### ❓Whisper model download failed, with the following error
+
+LocalEntryNotFoundError: Cannot find an appropriate cached snapshot folder for the specified revision on the local disk and
+outgoing traffic has been disabled.
+To enable repo look-ups and downloads online, pass 'local_files_only=False' as input.
+
+or
+
+An error occurred while synchronizing the model Systran/faster-whisper-large-v3 from the Hugging Face Hub:
+An error happened while trying to locate the files on the Hub and we cannot find the appropriate snapshot folder for the
+specified revision on the local disk. Please check your internet connection and try again.
+Trying to load the model directly from the local cache, if it exists.
+
+Solution: [Click to see how to manually download the model from a netdisk](#subtitle-generation-)
+
 ## Feedback & Suggestions 📢
 
 - You can submit an [issue](https://github.com/harry0703/MoneyPrinterTurbo/issues) or
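The `ulimit -n` check in the FAQ above has a programmatic equivalent in Python's standard library; a minimal sketch (Unix-only, since the `resource` module is unavailable on Windows):

```python
import resource

# RLIMIT_NOFILE is the per-process open-file-descriptor cap that
# `ulimit -n` reports; getrlimit returns a (soft, hard) pair.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft, hard)

# An unprivileged process may raise its soft limit up to the hard limit,
# which is the in-process equivalent of running `ulimit -n <hard>`.
if hard != resource.RLIM_INFINITY and soft < hard:
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```

This only affects the current process and its children; a shell-level `ulimit -n` is still needed if the launcher script itself opens many files.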
README.md (54 changed lines)

@@ -72,10 +72,6 @@
 - [ ] Support more voice synthesis providers, such as OpenAI TTS
 - [ ] Automatically upload to the YouTube platform
 
-## Discussion 💬
-
-<img src="docs/wechat-group.jpg" width="250">
-
 ## Video Demos 📺
 
 ### Portrait 9:16
@@ -121,20 +117,15 @@
 
 ## Quick Start 🚀
 
-Download the one-click startup package, extract it, and use it directly (the path must not contain **Chinese characters** or **spaces**)
+Download the one-click startup package, extract it, and use it directly (the path must not contain **Chinese characters**, **special characters**, or **spaces**)
 
 ### Windows
-- Baidu Netdisk: https://pan.baidu.com/s/1MzBmcLTmVWohPEp9ohvvzA?pwd=pdcu Extraction code: pdcu
+- Baidu Netdisk (1.2.1, old version): https://pan.baidu.com/s/1pSNjxTYiVENulTLm6zieMQ?pwd=g36q Extraction code: g36q
 
 After downloading, it is recommended to **double-click** `update.bat` first to update to the **latest code**, then double-click `start.bat` to launch
 
 After launching, the browser will open automatically (if it opens to a blank page, it is recommended to use **Chrome** or **Edge**)
 
-### Other Systems
-
-One-click startup packages have not been created yet. See the **Installation & Deployment** section below. It is recommended to use **docker** for deployment, which is more convenient.
-
 ## Installation & Deployment 📥
 
 ### Prerequisites
@@ -148,7 +139,7 @@
 git clone https://github.com/harry0703/MoneyPrinterTurbo.git
 ```
 
-#### ② Modify the Configuration File
+#### ② Modify the Configuration File (optional; you can also configure it in the WebUI after launching)
 
 - Copy the `config.example.toml` file and rename it `config.toml`
 - Follow the instructions in the `config.toml` file to configure `pexels_api_keys` and `llm_provider`, and, depending on the llm_provider's service provider, configure the related
@@ -170,6 +161,8 @@ cd MoneyPrinterTurbo
 docker-compose up
 ```
 
+> Note: the latest versions of Docker install Docker Compose as a plugin, so the start command becomes `docker compose up`
+
 #### ② Access the Web Interface
 
 Open your browser and visit http://0.0.0.0:8501
@@ -185,16 +178,14 @@ docker-compose up
 - Complete usage demonstration: https://v.douyin.com/iFhnwsKY/
 - How to deploy on Windows: https://v.douyin.com/iFyjoW3M
 
-#### ① Create a Virtual Environment
+#### ① Install Dependencies
 
-It is recommended to use [conda](https://conda.io/projects/conda/en/latest/user-guide/install/index.html) to create a Python virtual environment
+It is recommended to use [pdm](https://pdm-project.org/en/latest/#installation)
 
 ```shell
 git clone https://github.com/harry0703/MoneyPrinterTurbo.git
 cd MoneyPrinterTurbo
-conda create -n MoneyPrinterTurbo python=3.10
-conda activate MoneyPrinterTurbo
-pip install -r requirements.txt
+pdm sync
 ```
 
 #### ② Install ImageMagick
@@ -225,14 +216,12 @@ pip install -r requirements.txt
 ###### Windows
 
 ```bat
-conda activate MoneyPrinterTurbo
 webui.bat
 ```
 
 ###### MacOS or Linux
 
 ```shell
-conda activate MoneyPrinterTurbo
 sh webui.sh
 ```
@@ -300,33 +289,6 @@ MoneyPrinterTurbo
 
 ## FAQ 🤔
 
-### ❓How can I use the free OpenAI GPT-3.5 model?
-
-[OpenAI has announced that GPT-3.5 in ChatGPT is now free](https://openai.com/blog/start-using-chatgpt-instantly), and a developer has wrapped it as an API that can be called directly
-
-**Make sure you have installed and started the Docker service**, then run the following command to start the service:
-
-```shell
-docker run -p 3040:3040 missuo/freegpt35
-```
-
-Once it starts successfully, modify the configuration in `config.toml`:
-
-- Set `llm_provider` to `openai`
-- Fill in `openai_api_key` with any value, for example '123456'
-- Change `openai_base_url` to `http://localhost:3040/v1/`
-- Change `openai_model_name` to `gpt-3.5-turbo`
-
-> Note: this method is relatively unstable
-
-### ❓AttributeError: 'str' object has no attribute 'choices'
-
-This issue is caused by the large language model not returning a correct response.
-
-It is most likely a network issue: use a **VPN**, or set `openai_base_url` to your proxy, and the problem should be solved.
-
-At the same time, it is recommended to use **Moonshot** or **DeepSeek** as the large model provider; these providers are faster and more stable to access from China.
-
 ### ❓RuntimeError: No ffmpeg exe could be found
 
 Normally, ffmpeg will be automatically downloaded and detected.
@@ -4,10 +4,10 @@ import os
 
 from fastapi import FastAPI, Request
 from fastapi.exceptions import RequestValidationError
-from fastapi.responses import JSONResponse
-from loguru import logger
-from fastapi.staticfiles import StaticFiles
 from fastapi.middleware.cors import CORSMiddleware
+from fastapi.responses import JSONResponse
+from fastapi.staticfiles import StaticFiles
+from loguru import logger
 
 from app.config import config
 from app.models.exception import HttpException
@@ -1,7 +1,8 @@
 import os
-import socket
-import toml
 import shutil
+import socket
+
+import toml
 from loguru import logger
 
 root_dir = os.path.dirname(os.path.dirname(os.path.dirname(os.path.realpath(__file__))))
@@ -17,7 +18,7 @@ def load_config():
     example_file = f"{root_dir}/config.example.toml"
     if os.path.isfile(example_file):
         shutil.copyfile(example_file, config_file)
-        logger.info(f"copy config.example.toml to config.toml")
+        logger.info("copy config.example.toml to config.toml")
 
     logger.info(f"load config from file: {config_file}")
@@ -44,7 +45,9 @@ app = _cfg.get("app", {})
 whisper = _cfg.get("whisper", {})
 proxy = _cfg.get("proxy", {})
 azure = _cfg.get("azure", {})
-ui = _cfg.get("ui", {})
+ui = _cfg.get("ui", {
+    "hide_log": False,
+})
 
 hostname = socket.gethostname()
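The change above gives the `ui` table a default dictionary, so callers see `hide_log` even when `[ui]` is absent from the config file; a minimal sketch of the behavior difference (plain dicts standing in for the parsed TOML):

```python
# Parsed config with no [ui] table at all.
_cfg = {"app": {}}

# Before: a missing table collapses to an empty dict, so every call site
# needs its own fallback for "hide_log".
ui_old = _cfg.get("ui", {})

# After: the default carries the expected key in one place.
ui_new = _cfg.get("ui", {
    "hide_log": False,
})

print(ui_old.get("hide_log"))  # None
print(ui_new["hide_log"])      # False
```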
@@ -56,7 +59,7 @@ project_description = _cfg.get(
|
|||||||
"project_description",
|
"project_description",
|
||||||
"<a href='https://github.com/harry0703/MoneyPrinterTurbo'>https://github.com/harry0703/MoneyPrinterTurbo</a>",
|
"<a href='https://github.com/harry0703/MoneyPrinterTurbo'>https://github.com/harry0703/MoneyPrinterTurbo</a>",
|
||||||
)
|
)
|
||||||
project_version = _cfg.get("project_version", "1.2.0")
|
project_version = _cfg.get("project_version", "1.2.5")
|
||||||
reload_debug = False
|
reload_debug = False
|
||||||
|
|
||||||
imagemagick_path = app.get("imagemagick_path", "")
|
imagemagick_path = app.get("imagemagick_path", "")
|
||||||
|
|||||||
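The first hunk above shows `load_config()` falling back to `config.example.toml` when `config.toml` is missing. A standalone sketch of that fallback, runnable outside the repo (the helper name `ensure_config` and the demo directory are illustrative, not from the project):

```python
import os
import shutil
import tempfile


def ensure_config(root_dir: str) -> str:
    """Copy config.example.toml to config.toml if the latter is missing.

    Mirrors the fallback in load_config(); file names are taken from the diff.
    """
    config_file = os.path.join(root_dir, "config.toml")
    example_file = os.path.join(root_dir, "config.example.toml")
    if not os.path.isfile(config_file) and os.path.isfile(example_file):
        shutil.copyfile(example_file, config_file)
    return config_file


# demo in a throwaway directory
root = tempfile.mkdtemp()
with open(os.path.join(root, "config.example.toml"), "w") as f:
    f.write('[app]\nproject_version = "1.2.5"\n')
path = ensure_config(root)
print(os.path.isfile(path))  # True: the example file was copied into place
```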
```diff
@@ -1,5 +1,5 @@
 import threading
-from typing import Callable, Any, Dict
+from typing import Any, Callable, Dict


 class TaskManager:
@@ -33,7 +33,7 @@ class TaskManager:
         try:
             with self.lock:
                 self.current_tasks += 1
-            func(*args, **kwargs)  # 在这里调用函数,传递*args和**kwargs
+            func(*args, **kwargs)  # call the function here, passing *args and **kwargs.
         finally:
             self.task_done()
```
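The `TaskManager` hunk increments a shared counter under a lock before invoking the wrapped function. A minimal, runnable sketch of that pattern (not the project's full `TaskManager` — the decrement is inlined here instead of going through `task_done()`):

```python
import threading


class MiniTaskManager:
    """Lock-guarded task counter, as in the hunk above."""

    def __init__(self):
        self.lock = threading.Lock()
        self.current_tasks = 0

    def run(self, func, *args, **kwargs):
        try:
            with self.lock:
                self.current_tasks += 1
            func(*args, **kwargs)  # call the function, passing *args and **kwargs
        finally:
            with self.lock:
                self.current_tasks -= 1


manager = MiniTaskManager()
results = []
threads = [
    threading.Thread(target=manager.run, args=(results.append, i)) for i in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))        # [0, 1, 2, 3, 4, 5, 6, 7]
print(manager.current_tasks)  # 0: every worker decremented on exit
```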
```diff
@@ -1,5 +1,4 @@
-from fastapi import APIRouter
-from fastapi import Request
+from fastapi import APIRouter, Request

 router = APIRouter()
```
```diff
@@ -1,4 +1,4 @@
-from fastapi import APIRouter, Depends
+from fastapi import APIRouter


 def new_router(dependencies=None):
```
```diff
@@ -1,15 +1,16 @@
 from fastapi import Request

 from app.controllers.v1.base import new_router
 from app.models.schema import (
-    VideoScriptResponse,
     VideoScriptRequest,
-    VideoTermsResponse,
+    VideoScriptResponse,
     VideoTermsRequest,
+    VideoTermsResponse,
 )
 from app.services import llm
 from app.utils import utils

-# 认证依赖项
+# authentication dependency
 # router = new_router(dependencies=[Depends(base.verify_token)])
 router = new_router()
```
```diff
@@ -94,6 +94,22 @@ def create_task(
             task_id=task_id, status_code=400, message=f"{request_id}: {str(e)}"
         )

+from fastapi import Query
+
+
+@router.get("/tasks", response_model=TaskQueryResponse, summary="Get all tasks")
+def get_all_tasks(request: Request, page: int = Query(1, ge=1), page_size: int = Query(10, ge=1)):
+    request_id = base.get_task_id(request)
+    tasks, total = sm.state.get_all_tasks(page, page_size)
+
+    response = {
+        "tasks": tasks,
+        "total": total,
+        "page": page,
+        "page_size": page_size,
+    }
+    return utils.get_response(200, response)
+
+
 @router.get(
     "/tasks/{task_id}", response_model=TaskQueryResponse, summary="Query task status"
```
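The new `GET /tasks` endpoint takes `page` and `page_size` query parameters (both validated `ge=1` by FastAPI's `Query`) and asks the state store for that window. The slice arithmetic behind it can be checked in isolation (the helper name `page_slice` is illustrative):

```python
import math


def page_slice(total: int, page: int, page_size: int):
    # Index range the endpoint requests from the state store for a given page.
    start = (page - 1) * page_size
    end = start + page_size
    return start, min(end, total)


total, page_size = 25, 10
print(math.ceil(total / page_size))    # 3 pages in total
print(page_slice(total, 3, page_size)) # (20, 25): the last page holds 5 tasks
```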
```diff
@@ -11,7 +11,7 @@ class HttpException(Exception):
         self.message = message
         self.status_code = status_code
         self.data = data
-        # 获取异常堆栈信息
+        # Retrieve the exception stack trace information.
         tb_str = traceback.format_exc().strip()
         if not tb_str or tb_str == "NoneType: None":
             msg = f"HttpException: {status_code}, {task_id}, {message}"
```
```diff
@@ -1,6 +1,6 @@
 import warnings
 from enum import Enum
-from typing import Any, List, Optional
+from typing import Any, List, Optional, Union

 import pydantic
 from pydantic import BaseModel
@@ -18,6 +18,15 @@ class VideoConcatMode(str, Enum):
     sequential = "sequential"


+class VideoTransitionMode(str, Enum):
+    none = None
+    shuffle = "Shuffle"
+    fade_in = "FadeIn"
+    fade_out = "FadeOut"
+    slide_in = "SlideIn"
+    slide_out = "SlideOut"
+
+
 class VideoAspect(str, Enum):
     landscape = "16:9"
     portrait = "9:16"
@@ -44,44 +53,6 @@ class MaterialInfo:
     duration: int = 0


-# VoiceNames = [
-#     # zh-CN
-#     "female-zh-CN-XiaoxiaoNeural",
-#     "female-zh-CN-XiaoyiNeural",
-#     "female-zh-CN-liaoning-XiaobeiNeural",
-#     "female-zh-CN-shaanxi-XiaoniNeural",
-#
-#     "male-zh-CN-YunjianNeural",
-#     "male-zh-CN-YunxiNeural",
-#     "male-zh-CN-YunxiaNeural",
-#     "male-zh-CN-YunyangNeural",
-#
-#     # "female-zh-HK-HiuGaaiNeural",
-#     # "female-zh-HK-HiuMaanNeural",
-#     # "male-zh-HK-WanLungNeural",
-#     #
-#     # "female-zh-TW-HsiaoChenNeural",
-#     # "female-zh-TW-HsiaoYuNeural",
-#     # "male-zh-TW-YunJheNeural",
-#
-#     # en-US
-#     "female-en-US-AnaNeural",
-#     "female-en-US-AriaNeural",
-#     "female-en-US-AvaNeural",
-#     "female-en-US-EmmaNeural",
-#     "female-en-US-JennyNeural",
-#     "female-en-US-MichelleNeural",
-#
-#     "male-en-US-AndrewNeural",
-#     "male-en-US-BrianNeural",
-#     "male-en-US-ChristopherNeural",
-#     "male-en-US-EricNeural",
-#     "male-en-US-GuyNeural",
-#     "male-en-US-RogerNeural",
-#     "male-en-US-SteffanNeural",
-# ]
-
-

 class VideoParams(BaseModel):
     """
     {
@@ -98,15 +69,18 @@ class VideoParams(BaseModel):
     """

     video_subject: str
-    video_script: str = ""  # 用于生成视频的脚本
-    video_terms: Optional[str | list] = None  # 用于生成视频的关键词
+    video_script: str = ""  # Script used to generate the video
+    video_terms: Optional[str | list] = None  # Keywords used to generate the video
     video_aspect: Optional[VideoAspect] = VideoAspect.portrait.value
     video_concat_mode: Optional[VideoConcatMode] = VideoConcatMode.random.value
+    video_transition_mode: Optional[VideoTransitionMode] = None
     video_clip_duration: Optional[int] = 5
     video_count: Optional[int] = 1

     video_source: Optional[str] = "pexels"
-    video_materials: Optional[List[MaterialInfo]] = None  # 用于生成视频的素材
+    video_materials: Optional[List[MaterialInfo]] = (
+        None  # Materials used to generate the video
+    )

     video_language: Optional[str] = ""  # auto detect

@@ -122,7 +96,7 @@ class VideoParams(BaseModel):
     custom_position: float = 70.0
     font_name: Optional[str] = "STHeitiMedium.ttc"
     text_fore_color: Optional[str] = "#FFFFFF"
-    text_background_color: Optional[str] = "transparent"
+    text_background_color: Union[bool, str] = True

     font_size: int = 60
     stroke_color: Optional[str] = "#000000"
@@ -143,7 +117,7 @@ class SubtitleRequest(BaseModel):
     subtitle_position: Optional[str] = "bottom"
     font_name: Optional[str] = "STHeitiMedium.ttc"
     text_fore_color: Optional[str] = "#FFFFFF"
-    text_background_color: Optional[str] = "transparent"
+    text_background_color: Union[bool, str] = True
     font_size: int = 60
     stroke_color: Optional[str] = "#000000"
     stroke_width: float = 1.5
```
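The new `VideoTransitionMode` is a `str`-mixin enum, so the PascalCase strings the UI or API sends back look up the member directly and compare equal to plain strings. Re-declaring just the enum from the diff to show that behaviour:

```python
from enum import Enum


class VideoTransitionMode(str, Enum):
    none = None
    shuffle = "Shuffle"
    fade_in = "FadeIn"
    fade_out = "FadeOut"
    slide_in = "SlideIn"
    slide_out = "SlideOut"


print(VideoTransitionMode("FadeIn").name)       # fade_in
print(VideoTransitionMode("FadeIn") == "FadeIn")  # True: str mixin compares by value
```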
```diff
@@ -1,10 +1,11 @@
+import json
 import logging
 import re
-import json
 from typing import List

+import g4f
 from loguru import logger
-from openai import OpenAI
-from openai import AzureOpenAI
+from openai import AzureOpenAI, OpenAI
 from openai.types.chat import ChatCompletion

 from app.config import config
@@ -13,6 +14,7 @@ _max_retries = 5


 def _generate_response(prompt: str) -> str:
+    try:
         content = ""
         llm_provider = config.app.get("llm_provider", "openai")
         logger.info(f"llm provider: {llm_provider}")
@@ -20,8 +22,6 @@ def _generate_response(prompt: str) -> str:
             model_name = config.app.get("g4f_model_name", "")
             if not model_name:
                 model_name = "gpt-3.5-turbo-16k-0613"
-            import g4f
-
             content = g4f.ChatCompletion.create(
                 model=model_name,
                 messages=[{"role": "user", "content": prompt}],
@@ -179,7 +179,10 @@ def _generate_response(prompt: str) -> str:
                 headers={"Authorization": f"Bearer {api_key}"},
                 json={
                     "messages": [
-                        {"role": "system", "content": "You are a friendly assistant"},
+                        {
+                            "role": "system",
+                            "content": "You are a friendly assistant",
+                        },
                         {"role": "user", "content": prompt},
                     ]
                 },
@@ -197,7 +200,9 @@ def _generate_response(prompt: str) -> str:
                 "client_secret": secret_key,
             }
             access_token = (
-                requests.post("https://aip.baidubce.com/oauth/2.0/token", params=params)
+                requests.post(
+                    "https://aip.baidubce.com/oauth/2.0/token", params=params
+                )
                 .json()
                 .get("access_token")
             )
@@ -250,6 +255,8 @@ def _generate_response(prompt: str) -> str:
             )

         return content.replace("\n", "")
+    except Exception as e:
+        return f"Error: {str(e)}"


 def generate_script(
@@ -295,7 +302,7 @@ Generate a script for a video, depending on the subject of the video.
         paragraphs = response.split("\n\n")

         # Select the specified number of paragraphs
-        selected_paragraphs = paragraphs[:paragraph_number]
+        # selected_paragraphs = paragraphs[:paragraph_number]

         # Join the selected paragraphs into a single string
         return "\n\n".join(paragraphs)
@@ -319,7 +326,9 @@ Generate a script for a video, depending on the subject of the video.

             if i < _max_retries:
                 logger.warning(f"failed to generate video script, trying again... {i + 1}")
+    if "Error: " in final_script:
+        logger.error(f"failed to generate video script: {final_script}")
+    else:
         logger.success(f"completed: \n{final_script}")
     return final_script.strip()

@@ -358,6 +367,9 @@ Please note that you must use English for generating video search terms; Chinese
     for i in range(_max_retries):
         try:
             response = _generate_response(prompt)
+            if "Error: " in response:
+                logger.error(f"failed to generate video script: {response}")
+                return response
             search_terms = json.loads(response)
             if not isinstance(search_terms, list) or not all(
                 isinstance(term, str) for term in search_terms
```
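These hunks change the error handling in `llm.py`: instead of letting exceptions escape, `_generate_response` now returns a string prefixed with `"Error: "`, and each caller checks for that prefix before using the result. A minimal runnable illustration of the convention (the stand-in failure is fabricated for the demo):

```python
def _generate_response_sketch(prompt: str) -> str:
    # Same shape as the new except-branch in _generate_response:
    # failures become a sentinel string rather than a raised exception.
    try:
        raise ConnectionError("llm backend unreachable")  # stand-in failure
    except Exception as e:
        return f"Error: {str(e)}"


response = _generate_response_sketch("write a script")
if "Error: " in response:  # the check the callers now perform
    print("failed:", response)
```

A trade-off of this convention is that the sentinel lives in-band: any legitimate completion containing the substring `"Error: "` would also trip the check.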
```diff
@@ -1,14 +1,14 @@
 import os
 import random
+from typing import List
 from urllib.parse import urlencode

 import requests
-from typing import List
 from loguru import logger
 from moviepy.video.io.VideoFileClip import VideoFileClip

 from app.config import config
-from app.models.schema import VideoAspect, VideoConcatMode, MaterialInfo
+from app.models.schema import MaterialInfo, VideoAspect, VideoConcatMode
 from app.utils import utils

 requested_count = 0
@@ -40,7 +40,10 @@ def search_videos_pexels(
     video_orientation = aspect.name
     video_width, video_height = aspect.to_resolution()
     api_key = get_api_key("pexels_api_keys")
-    headers = {"Authorization": api_key}
+    headers = {
+        "Authorization": api_key,
+        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36",
+    }
     # Build URL
     params = {"query": search_term, "per_page": 20, "orientation": video_orientation}
     query_url = f"https://api.pexels.com/videos/search?{urlencode(params)}"
@@ -126,7 +129,7 @@ def search_videos_pixabay(
     for video_type in video_files:
         video = video_files[video_type]
         w = int(video["width"])
-        h = int(video["height"])
+        # h = int(video["height"])
         if w >= video_width:
             item = MaterialInfo()
             item.provider = "pixabay"
@@ -158,11 +161,19 @@ def save_video(video_url: str, save_dir: str = "") -> str:
         logger.info(f"video already exists: {video_path}")
         return video_path

+    headers = {
+        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36"
+    }
+
     # if video does not exist, download it
     with open(video_path, "wb") as f:
         f.write(
             requests.get(
-                video_url, proxies=config.proxy, verify=False, timeout=(60, 240)
+                video_url,
+                headers=headers,
+                proxies=config.proxy,
+                verify=False,
+                timeout=(60, 240),
             ).content
         )

@@ -177,7 +188,7 @@ def save_video(video_url: str, save_dir: str = "") -> str:
     except Exception as e:
         try:
             os.remove(video_path)
-        except Exception as e:
+        except Exception:
             pass
         logger.warning(f"invalid video file: {video_path} => {str(e)}")
         return ""
```
```diff
@@ -1,5 +1,6 @@
 import ast
 from abc import ABC, abstractmethod
+
 from app.config import config
 from app.models import const

@@ -14,12 +15,23 @@ class BaseState(ABC):
     def get_task(self, task_id: str):
         pass

+    @abstractmethod
+    def get_all_tasks(self, page: int, page_size: int):
+        pass
+

 # Memory state management
 class MemoryState(BaseState):
     def __init__(self):
         self._tasks = {}

+    def get_all_tasks(self, page: int, page_size: int):
+        start = (page - 1) * page_size
+        end = start + page_size
+        tasks = list(self._tasks.values())
+        total = len(tasks)
+        return tasks[start:end], total
+
     def update_task(
         self,
         task_id: str,
@@ -32,6 +44,7 @@ class MemoryState(BaseState):
             progress = 100

         self._tasks[task_id] = {
+            "task_id": task_id,
             "state": state,
             "progress": progress,
             **kwargs,
@@ -52,6 +65,28 @@ class RedisState(BaseState):

         self._redis = redis.StrictRedis(host=host, port=port, db=db, password=password)

+    def get_all_tasks(self, page: int, page_size: int):
+        start = (page - 1) * page_size
+        end = start + page_size
+        tasks = []
+        cursor = 0
+        total = 0
+        while True:
+            cursor, keys = self._redis.scan(cursor, count=page_size)
+            total += len(keys)
+            if total > start:
+                for key in keys[max(0, start - total):end - total]:
+                    task_data = self._redis.hgetall(key)
+                    task = {
+                        k.decode("utf-8"): self._convert_to_original_type(v) for k, v in task_data.items()
+                    }
+                    tasks.append(task)
+                    if len(tasks) >= page_size:
+                        break
+            if cursor == 0 or len(tasks) >= page_size:
+                break
+        return tasks, total
+
     def update_task(
         self,
         task_id: str,
@@ -64,6 +99,7 @@ class RedisState(BaseState):
             progress = 100

         fields = {
+            "task_id": task_id,
             "state": state,
             "progress": progress,
             **kwargs,
```
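The state-store hunks make two changes worth seeing together: each stored task now records its own `task_id` (so paged results identify themselves), and both backends grow a `get_all_tasks(page, page_size)` method. A trimmed-down, runnable version of the in-memory variant:

```python
class MemoryStateSketch:
    """Minimal MemoryState showing the two changes in this diff."""

    def __init__(self):
        self._tasks = {}

    def update_task(self, task_id, state="processing", progress=0, **kwargs):
        self._tasks[task_id] = {
            "task_id": task_id,  # new field added by the diff
            "state": state,
            "progress": progress,
            **kwargs,
        }

    def get_all_tasks(self, page: int, page_size: int):
        start = (page - 1) * page_size
        end = start + page_size
        tasks = list(self._tasks.values())  # dicts preserve insertion order
        return tasks[start:end], len(tasks)


state = MemoryStateSketch()
for i in range(12):
    state.update_task(f"task-{i}")
tasks, total = state.get_all_tasks(page=2, page_size=5)
print(total)                                        # 12
print(tasks[0]["task_id"], tasks[-1]["task_id"])    # task-5 task-9
```

The Redis variant in the diff pages through `SCAN` cursors instead; note that `SCAN` gives no global ordering guarantee, so Redis-backed pages are only approximate windows.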
```diff
@@ -1,9 +1,9 @@
 import json
 import os.path
 import re
+from timeit import default_timer as timer

 from faster_whisper import WhisperModel
-from timeit import default_timer as timer
 from loguru import logger

 from app.config import config
@@ -88,7 +88,7 @@ def create(audio_file, subtitle_file: str = ""):
                 is_segmented = True

             seg_end = word.end
-            # 如果包含标点,则断句
+            # If it contains punctuation, then break the sentence.
             seg_text += word.word

             if utils.str_contains_punctuation(word.word):
@@ -246,7 +246,7 @@ def correct(subtitle_file, video_script):
             script_index += 1
             subtitle_index = next_subtitle_index

-    # 处理剩余的脚本行
+    # Process the remaining lines of the script.
     while script_index < len(script_lines):
         logger.warning(f"Extra script line: {script_lines[script_index]}")
         if subtitle_index < len(subtitle_items):
```
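The translated comment in `create()` describes the segmentation rule: whisper words are accumulated into a buffer until one of them carries punctuation, at which point the buffer becomes a subtitle segment. A self-contained sketch of that loop (`contains_punctuation` is a stand-in for the project's `utils.str_contains_punctuation`, with an assumed punctuation set):

```python
PUNCTUATION = set(".,!?;:。,!?;:")


def contains_punctuation(word: str) -> bool:
    return any(ch in PUNCTUATION for ch in word)


def segment(words):
    segments, seg_text = [], ""
    for word in words:
        seg_text += word
        # If it contains punctuation, then break the sentence.
        if contains_punctuation(word):
            segments.append(seg_text.strip())
            seg_text = ""
    if seg_text.strip():  # flush any unterminated tail
        segments.append(seg_text.strip())
    return segments


print(segment(["Money ", "matters.", " It ", "buys ", "time."]))
# ['Money matters.', 'It buys time.']
```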
```diff
@@ -87,10 +87,10 @@ def generate_audio(task_id, params, video_script):
 2. check if the network is available. If you are in China, it is recommended to use a VPN and enable the global traffic mode.
         """.strip()
         )
-        return None, None
+        return None, None, None

     audio_duration = math.ceil(voice.get_audio_duration(sub_maker))
-    return audio_file, audio_duration
+    return audio_file, audio_duration, sub_maker


 def generate_subtitle(task_id, params, video_script, sub_maker, audio_file):
@@ -98,7 +98,7 @@ def generate_subtitle(task_id, params, video_script, sub_maker, audio_file):
         return ""

     subtitle_path = path.join(utils.task_dir(task_id), "subtitle.srt")
-    subtitle_provider = config.app.get("subtitle_provider", "").strip().lower()
+    subtitle_provider = config.app.get("subtitle_provider", "edge").strip().lower()
     logger.info(f"\n\n## generating subtitle, provider: {subtitle_provider}")

     subtitle_fallback = False
@@ -164,6 +164,7 @@ def generate_final_videos(
     video_concat_mode = (
         params.video_concat_mode if params.video_count == 1 else VideoConcatMode.random
     )
+    video_transition_mode = params.video_transition_mode

     _progress = 50
     for i in range(params.video_count):
@@ -178,6 +179,7 @@ def generate_final_videos(
             audio_file=audio_file,
             video_aspect=params.video_aspect,
             video_concat_mode=video_concat_mode,
+            video_transition_mode=video_transition_mode,
             max_clip_duration=params.video_clip_duration,
             threads=params.n_threads,
         )
@@ -209,9 +211,12 @@ def start(task_id, params: VideoParams, stop_at: str = "video"):
     logger.info(f"start task: {task_id}, stop_at: {stop_at}")
     sm.state.update_task(task_id, state=const.TASK_STATE_PROCESSING, progress=5)

+    if type(params.video_concat_mode) is str:
+        params.video_concat_mode = VideoConcatMode(params.video_concat_mode)
+
     # 1. Generate script
     video_script = generate_script(task_id, params)
-    if not video_script:
+    if not video_script or "Error: " in video_script:
         sm.state.update_task(task_id, state=const.TASK_STATE_FAILED)
         return

@@ -242,7 +247,9 @@ def start(task_id, params: VideoParams, stop_at: str = "video"):
     sm.state.update_task(task_id, state=const.TASK_STATE_PROCESSING, progress=20)

     # 3. Generate audio
-    audio_file, audio_duration = generate_audio(task_id, params, video_script)
+    audio_file, audio_duration, sub_maker = generate_audio(
+        task_id, params, video_script
+    )
     if not audio_file:
         sm.state.update_task(task_id, state=const.TASK_STATE_FAILED)
         return
@@ -259,7 +266,9 @@ def start(task_id, params: VideoParams, stop_at: str = "video"):
         return {"audio_file": audio_file, "audio_duration": audio_duration}

     # 4. Generate subtitle
-    subtitle_path = generate_subtitle(task_id, params, video_script, None, audio_file)
+    subtitle_path = generate_subtitle(
+        task_id, params, video_script, sub_maker, audio_file
+    )

     if stop_at == "subtitle":
         sm.state.update_task(
@@ -318,3 +327,13 @@ def start(task_id, params: VideoParams, stop_at: str = "video"):
         task_id, state=const.TASK_STATE_COMPLETE, progress=100, **kwargs
     )
     return kwargs
+
+
+if __name__ == "__main__":
+    task_id = "task_id"
+    params = VideoParams(
+        video_subject="金钱的作用",
+        voice_name="zh-CN-XiaoyiNeural-Female",
+        voice_rate=1.0,
+    )
+    start(task_id, params, stop_at="video")
```
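The guard added in `start()` uses `type(...) is str` rather than `isinstance`: a `str`-mixin enum member *is* an instance of `str`, so `isinstance` would re-coerce values that are already enums, while the exact-type check only converts raw strings arriving over HTTP. A runnable illustration (the enum re-declares the project's two concat-mode values):

```python
from enum import Enum


class VideoConcatMode(str, Enum):  # mirrors the values in app/models/schema.py
    random = "random"
    sequential = "sequential"


# The guard from the diff: params arriving over HTTP may carry the raw string.
video_concat_mode = "random"
if type(video_concat_mode) is str:
    video_concat_mode = VideoConcatMode(video_concat_mode)
print(video_concat_mode is VideoConcatMode.random)  # True

# An enum member is an instance of str, but its exact type is not str,
# so the guard leaves already-coerced values untouched.
print(isinstance(VideoConcatMode.random, str))      # True
print(type(VideoConcatMode.random) is str)          # False
```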
`app/services/utils/video_effects.py` (new file, 21 lines)

```diff
@@ -0,0 +1,21 @@
+from moviepy import Clip, vfx
+
+
+# FadeIn
+def fadein_transition(clip: Clip, t: float) -> Clip:
+    return clip.with_effects([vfx.FadeIn(t)])
+
+
+# FadeOut
+def fadeout_transition(clip: Clip, t: float) -> Clip:
+    return clip.with_effects([vfx.FadeOut(t)])
+
+
+# SlideIn
+def slidein_transition(clip: Clip, t: float, side: str) -> Clip:
+    return clip.with_effects([vfx.SlideIn(t, side)])
+
+
+# SlideOut
+def slideout_transition(clip: Clip, t: float, side: str) -> Clip:
+    return clip.with_effects([vfx.SlideOut(t, side)])
```
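One way the consumer can map `VideoTransitionMode` values onto these helpers is a small dispatch table, with `Shuffle` picking a transition at random. This sketch is an assumption about the wiring, not the project's actual `combine_videos` code, and the moviepy calls are stubbed out with strings so the dispatch itself is runnable; the real helpers return clips via `clip.with_effects(...)`:

```python
import random


def fadein(clip, t):
    return f"FadeIn({clip}, {t})"   # stand-in for video_effects.fadein_transition


def fadeout(clip, t):
    return f"FadeOut({clip}, {t})"  # stand-in for video_effects.fadeout_transition


TRANSITIONS = {"FadeIn": fadein, "FadeOut": fadeout}


def apply_transition(clip, mode, t: float = 1.0):
    if mode == "Shuffle":
        mode = random.choice(list(TRANSITIONS))  # pick any concrete transition
    effect = TRANSITIONS.get(mode)
    return effect(clip, t) if effect else clip   # mode None: leave clip untouched


print(apply_transition("clip1", "FadeIn"))  # FadeIn(clip1, 1.0)
print(apply_transition("clip1", None))      # clip1
```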
@@ -1,16 +1,97 @@
|
|||||||
import glob
|
import glob
|
||||||
|
import os
|
||||||
import random
|
import random
|
||||||
|
import gc
|
||||||
|
import shutil
|
||||||
from typing import List
|
from typing import List
|
||||||
|
|
||||||
from loguru import logger
|
from loguru import logger
|
||||||
from moviepy.editor import *
|
from moviepy import (
|
||||||
|
AudioFileClip,
|
||||||
|
ColorClip,
|
||||||
|
CompositeAudioClip,
|
||||||
|
CompositeVideoClip,
|
||||||
|
ImageClip,
|
||||||
|
TextClip,
|
||||||
|
VideoFileClip,
|
||||||
|
afx,
|
||||||
|
concatenate_videoclips,
|
||||||
|
)
|
||||||
from moviepy.video.tools.subtitles import SubtitlesClip
|
from moviepy.video.tools.subtitles import SubtitlesClip
|
||||||
from PIL import ImageFont
|
from PIL import ImageFont
|
||||||
|
|
||||||
from app.models import const
|
from app.models import const
|
||||||
from app.models.schema import MaterialInfo, VideoAspect, VideoConcatMode, VideoParams
|
from app.models.schema import (
|
||||||
|
MaterialInfo,
|
||||||
|
VideoAspect,
|
||||||
|
VideoConcatMode,
|
||||||
|
VideoParams,
|
||||||
|
VideoTransitionMode,
|
||||||
|
)
|
||||||
|
from app.services.utils import video_effects
|
||||||
from app.utils import utils
|
from app.utils import utils
|
||||||
|
|
||||||
|
class SubClippedVideoClip:
|
||||||
|
def __init__(self, file_path, start_time, end_time, width=None, height=None):
|
||||||
|
self.file_path = file_path
|
||||||
|
self.start_time = start_time
|
||||||
|
self.end_time = end_time
|
||||||
|
self.width = width
|
||||||
|
self.height = height
|
||||||
|
|
||||||
|
+    def __str__(self):
+        return f"SubClippedVideoClip(file_path={self.file_path}, start_time={self.start_time}, end_time={self.end_time}, width={self.width}, height={self.height})"
+
+
+audio_codec = "aac"
+video_codec = "libx264"
+fps = 30
+
+
+def close_clip(clip):
+    if clip is None:
+        return
+
+    try:
+        # close main resources
+        if hasattr(clip, 'reader') and clip.reader is not None:
+            clip.reader.close()
+
+        # close audio resources
+        if hasattr(clip, 'audio') and clip.audio is not None:
+            if hasattr(clip.audio, 'reader') and clip.audio.reader is not None:
+                clip.audio.reader.close()
+            del clip.audio
+
+        # close mask resources
+        if hasattr(clip, 'mask') and clip.mask is not None:
+            if hasattr(clip.mask, 'reader') and clip.mask.reader is not None:
+                clip.mask.reader.close()
+            del clip.mask
+
+        # handle child clips in composite clips
+        if hasattr(clip, 'clips') and clip.clips:
+            for child_clip in clip.clips:
+                if child_clip is not clip:  # avoid possible circular references
+                    close_clip(child_clip)
+
+        # clear clip list
+        if hasattr(clip, 'clips'):
+            clip.clips = []
+
+    except Exception as e:
+        logger.error(f"failed to close clip: {str(e)}")
+
+    del clip
+    gc.collect()
+
+
+def delete_files(files: List[str] | str):
+    if isinstance(files, str):
+        files = [files]
+
+    for file in files:
+        try:
+            os.remove(file)
+        except:
+            pass
+
+
 def get_bgm_file(bgm_type: str = "random", bgm_file: str = ""):
     if not bgm_type:
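The `close_clip` helper guards every access with `hasattr` because MoviePy clip types differ in which resources they carry (readers, audio, masks, child clips). A minimal, MoviePy-free sketch of the same defensive pattern, using hypothetical `FakeReader`/`FakeClip` stand-ins (not part of this codebase):

```python
import gc


class FakeReader:
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


class FakeClip:
    def __init__(self):
        self.reader = FakeReader()
        self.audio = None  # e.g. a clip opened without an audio track


def close_clip(clip):
    if clip is None:
        return
    # hasattr guards: not every clip type carries every resource
    if hasattr(clip, "reader") and clip.reader is not None:
        clip.reader.close()
    if hasattr(clip, "audio") and clip.audio is not None:
        if hasattr(clip.audio, "reader") and clip.audio.reader is not None:
            clip.audio.reader.close()
    del clip
    gc.collect()


clip = FakeClip()
reader = clip.reader
close_clip(clip)
print(reader.closed)  # True
```

The `del clip; gc.collect()` at the end only drops the function-local reference; the point is to force reader handles closed eagerly rather than waiting for the garbage collector.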
@@ -34,115 +115,185 @@ def combine_videos(
     audio_file: str,
     video_aspect: VideoAspect = VideoAspect.portrait,
     video_concat_mode: VideoConcatMode = VideoConcatMode.random,
+    video_transition_mode: VideoTransitionMode = None,
     max_clip_duration: int = 5,
     threads: int = 2,
 ) -> str:
     audio_clip = AudioFileClip(audio_file)
     audio_duration = audio_clip.duration
-    logger.info(f"max duration of audio: {audio_duration} seconds")
+    logger.info(f"audio duration: {audio_duration} seconds")
     # Required duration of each clip
     req_dur = audio_duration / len(video_paths)
     req_dur = max_clip_duration
-    logger.info(f"each clip will be maximum {req_dur} seconds long")
+    logger.info(f"maximum clip duration: {req_dur} seconds")
     output_dir = os.path.dirname(combined_video_path)
 
     aspect = VideoAspect(video_aspect)
     video_width, video_height = aspect.to_resolution()
 
-    clips = []
+    clip_files = []
+    subclipped_items = []
     video_duration = 0
 
-    raw_clips = []
     for video_path in video_paths:
-        clip = VideoFileClip(video_path).without_audio()
+        clip = VideoFileClip(video_path)
         clip_duration = clip.duration
+        clip_w, clip_h = clip.size
+        close_clip(clip)
 
         start_time = 0
 
         while start_time < clip_duration:
             end_time = min(start_time + max_clip_duration, clip_duration)
-            split_clip = clip.subclip(start_time, end_time)
-            raw_clips.append(split_clip)
-            # logger.info(f"splitting from {start_time:.2f} to {end_time:.2f}, clip duration {clip_duration:.2f}, split_clip duration {split_clip.duration:.2f}")
+            if clip_duration - start_time > max_clip_duration:
+                subclipped_items.append(SubClippedVideoClip(file_path=video_path, start_time=start_time, end_time=end_time, width=clip_w, height=clip_h))
             start_time = end_time
             if video_concat_mode.value == VideoConcatMode.sequential.value:
                 break
 
-    # random video_paths order
+    # random subclipped_items order
     if video_concat_mode.value == VideoConcatMode.random.value:
-        random.shuffle(raw_clips)
+        random.shuffle(subclipped_items)
 
+    logger.debug(f"total subclipped items: {len(subclipped_items)}")
 
     # Add downloaded clips over and over until the duration of the audio (max_duration) has been reached
-    while video_duration < audio_duration:
-        for clip in raw_clips:
-            # Check if clip is longer than the remaining audio
-            if (audio_duration - video_duration) < clip.duration:
-                clip = clip.subclip(0, (audio_duration - video_duration))
-            # Only shorten clips if the calculated clip length (req_dur) is shorter than the actual clip to prevent still image
-            elif req_dur < clip.duration:
-                clip = clip.subclip(0, req_dur)
-            clip = clip.set_fps(30)
+    for i, subclipped_item in enumerate(subclipped_items):
+        if video_duration > audio_duration:
+            break
 
+        logger.debug(f"processing clip {i+1}: {subclipped_item.width}x{subclipped_item.height}, current duration: {video_duration:.2f}s, remaining: {audio_duration - video_duration:.2f}s")
 
+        try:
+            clip = VideoFileClip(subclipped_item.file_path).subclipped(subclipped_item.start_time, subclipped_item.end_time)
+            clip_duration = clip.duration
             # Not all videos are same size, so we need to resize them
             clip_w, clip_h = clip.size
             if clip_w != video_width or clip_h != video_height:
                 clip_ratio = clip.w / clip.h
                 video_ratio = video_width / video_height
+                logger.debug(f"resizing to {video_width}x{video_height}, source: {clip_w}x{clip_h}, ratio: {clip_ratio:.2f}, target ratio: {video_ratio:.2f}")
 
                 if clip_ratio == video_ratio:
-                    # proportional scaling
-                    clip = clip.resize((video_width, video_height))
+                    clip = clip.resized(new_size=(video_width, video_height))
                 else:
-                    # scale the video proportionally
                     if clip_ratio > video_ratio:
-                        # scale proportionally to the target width
                         scale_factor = video_width / clip_w
                     else:
-                        # scale proportionally to the target height
                         scale_factor = video_height / clip_h
 
                     new_width = int(clip_w * scale_factor)
                     new_height = int(clip_h * scale_factor)
-                    clip_resized = clip.resize(newsize=(new_width, new_height))
 
-                    background = ColorClip(
-                        size=(video_width, video_height), color=(0, 0, 0)
-                    )
-                    clip = CompositeVideoClip(
-                        [
-                            background.set_duration(clip.duration),
-                            clip_resized.set_position("center"),
-                        ]
-                    )
-
-                logger.info(
-                    f"resizing video to {video_width} x {video_height}, clip size: {clip_w} x {clip_h}"
-                )
+                    background = ColorClip(size=(video_width, video_height), color=(0, 0, 0)).with_duration(clip_duration)
+                    clip_resized = clip.resized(new_size=(new_width, new_height)).with_position("center")
+                    clip = CompositeVideoClip([background, clip_resized])
+
+                    close_clip(clip_resized)
+                    close_clip(background)
 
+            shuffle_side = random.choice(["left", "right", "top", "bottom"])
+            if video_transition_mode.value == VideoTransitionMode.none.value:
+                clip = clip
+            elif video_transition_mode.value == VideoTransitionMode.fade_in.value:
+                clip = video_effects.fadein_transition(clip, 1)
+            elif video_transition_mode.value == VideoTransitionMode.fade_out.value:
+                clip = video_effects.fadeout_transition(clip, 1)
+            elif video_transition_mode.value == VideoTransitionMode.slide_in.value:
+                clip = video_effects.slidein_transition(clip, 1, shuffle_side)
+            elif video_transition_mode.value == VideoTransitionMode.slide_out.value:
+                clip = video_effects.slideout_transition(clip, 1, shuffle_side)
+            elif video_transition_mode.value == VideoTransitionMode.shuffle.value:
+                transition_funcs = [
+                    lambda c: video_effects.fadein_transition(c, 1),
+                    lambda c: video_effects.fadeout_transition(c, 1),
+                    lambda c: video_effects.slidein_transition(c, 1, shuffle_side),
+                    lambda c: video_effects.slideout_transition(c, 1, shuffle_side),
+                ]
+                shuffle_transition = random.choice(transition_funcs)
+                clip = shuffle_transition(clip)
 
             if clip.duration > max_clip_duration:
-                clip = clip.subclip(0, max_clip_duration)
+                clip = clip.subclipped(0, max_clip_duration)
 
-            clips.append(clip)
+            # write clip to temp file
+            clip_file = f"{output_dir}/temp-clip-{i+1}.mp4"
+            clip.write_videofile(clip_file, logger=None, fps=fps, codec=video_codec)
+
+            close_clip(clip)
+
+            clip_files.append(clip_file)
             video_duration += clip.duration
 
-    video_clip = concatenate_videoclips(clips)
-    video_clip = video_clip.set_fps(30)
-    logger.info("writing")
-    # https://github.com/harry0703/MoneyPrinterTurbo/issues/111#issuecomment-2032354030
-    video_clip.write_videofile(
-        filename=combined_video_path,
-        threads=threads,
-        logger=None,
-        temp_audiofile_path=output_dir,
-        audio_codec="aac",
-        fps=30,
-    )
-    video_clip.close()
-    logger.success("completed")
+        except Exception as e:
+            logger.error(f"failed to process clip: {str(e)}")
+
+    # merge video clips progressively, avoid loading all videos at once to avoid memory overflow
+    logger.info("starting clip merging process")
+    if not clip_files:
+        logger.warning("no clips available for merging")
+        return combined_video_path
+
+    # if there is only one clip, use it directly
+    if len(clip_files) == 1:
+        logger.info("using single clip directly")
+        shutil.copy(clip_files[0], combined_video_path)
+        delete_files(clip_files)
+        logger.info("video combining completed")
+        return combined_video_path
+
+    # create initial video file as base
+    base_clip_path = clip_files[0]
+    temp_merged_video = f"{output_dir}/temp-merged-video.mp4"
+    temp_merged_next = f"{output_dir}/temp-merged-next.mp4"
+
+    # copy first clip as initial merged video
+    shutil.copy(base_clip_path, temp_merged_video)
+
+    # merge remaining video clips one by one
+    for i, clip_path in enumerate(clip_files[1:], 1):
+        logger.info(f"merging clip {i}/{len(clip_files)-1}")
+
+        try:
+            # load current base video and next clip to merge
+            base_clip = VideoFileClip(temp_merged_video)
+            next_clip = VideoFileClip(clip_path)
+
+            # merge these two clips
+            merged_clip = concatenate_videoclips([base_clip, next_clip])
+
+            # save merged result to temp file
+            merged_clip.write_videofile(
+                filename=temp_merged_next,
+                threads=threads,
+                logger=None,
+                temp_audiofile_path=output_dir,
+                audio_codec=audio_codec,
+                fps=fps,
+            )
+            close_clip(base_clip)
+            close_clip(next_clip)
+            close_clip(merged_clip)
+
+            # replace base file with new merged file
+            delete_files(temp_merged_video)
+            os.rename(temp_merged_next, temp_merged_video)
+
+        except Exception as e:
+            logger.error(f"failed to merge clip: {str(e)}")
+            continue
+
+    # after merging, rename final result to target file name
+    os.rename(temp_merged_video, combined_video_path)
+
+    # clean temp files
+    delete_files(clip_files)
+
+    logger.info("video combining completed")
     return combined_video_path
 
 
 def wrap_text(text, max_width, font="Arial", fontsize=60):
-    # Create a font object
+    # Create ImageFont
     font = ImageFont.truetype(font, fontsize)
 
     def get_text_size(inner_text):
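The rewritten `combine_videos` writes each processed clip to disk and later concatenates pairwise, so only the running base and one next clip are ever open at once instead of the whole list. The same progressive-merge shape can be sketched with plain text files standing in for videos (the file names here are illustrative, not the ones the task pipeline uses):

```python
import os
import shutil
import tempfile


def merge_progressively(clip_files, combined_path):
    """Pairwise-merge file stand-ins: at any moment only the running base
    and the next clip are open, mirroring the memory profile of the
    progressive concatenate loop above."""
    if len(clip_files) == 1:
        shutil.copy(clip_files[0], combined_path)
        return combined_path
    temp_merged = combined_path + ".tmp"
    shutil.copy(clip_files[0], temp_merged)  # first clip seeds the base
    for clip_path in clip_files[1:]:
        # "concatenate" base + next, then the result becomes the new base
        with open(temp_merged, "a") as base, open(clip_path) as nxt:
            base.write(nxt.read())
    os.rename(temp_merged, combined_path)
    return combined_path


tmp = tempfile.mkdtemp()
files = []
for i, part in enumerate(["a", "b", "c"]):
    p = os.path.join(tmp, f"temp-clip-{i+1}.txt")
    with open(p, "w") as f:
        f.write(part)
    files.append(p)

out = merge_progressively(files, os.path.join(tmp, "combined.txt"))
print(open(out).read())  # abc
```

The trade-off is re-encoding the growing base on every iteration (quadratic in total footage), accepted here to cap peak memory.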
@@ -154,8 +305,6 @@ def wrap_text(text, max_width, font="Arial", fontsize=60):
     if width <= max_width:
         return text, height
 
-    # logger.warning(f"wrapping text, max_width: {max_width}, text_width: {width}, text: {text}")
-
     processed = True
 
     _wrapped_lines_ = []
@@ -178,7 +327,6 @@ def wrap_text(text, max_width, font="Arial", fontsize=60):
         _wrapped_lines_ = [line.strip() for line in _wrapped_lines_]
         result = "\n".join(_wrapped_lines_).strip()
         height = len(_wrapped_lines_) * height
-        # logger.warning(f"wrapped text: {result}")
         return result, height
 
     _wrapped_lines_ = []
@@ -195,7 +343,6 @@ def wrap_text(text, max_width, font="Arial", fontsize=60):
         _wrapped_lines_.append(_txt_)
     result = "\n".join(_wrapped_lines_).strip()
     height = len(_wrapped_lines_) * height
-    # logger.warning(f"wrapped text: {result}")
     return result, height
 
 
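`wrap_text` greedily emits the longest prefix whose rendered width fits `max_width`, measured with Pillow's `ImageFont`. A font-free sketch of the same greedy loop, assuming a fixed per-character width in place of a real font metric (an assumption, not the Pillow measurement used above):

```python
def wrap_text_mono(text, max_width, char_width=10):
    # Greedy wrap: keep appending characters until the estimated width
    # would overflow, then start a new line.
    def width(s):
        return len(s) * char_width  # stand-in for ImageFont measurement

    if width(text) <= max_width:
        return text
    lines, current = [], ""
    for ch in text:
        if width(current + ch) > max_width:
            lines.append(current)
            current = ch
        else:
            current += ch
    if current:
        lines.append(current)
    return "\n".join(line.strip() for line in lines)


print(wrap_text_mono("hello", max_width=120))  # hello
```

The real function additionally returns the total wrapped height (`len(lines) * line_height`), which the subtitle layout below uses.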
@@ -209,7 +356,7 @@ def generate_video(
     aspect = VideoAspect(params.video_aspect)
     video_width, video_height = aspect.to_resolution()
 
-    logger.info(f"start, video size: {video_width} x {video_height}")
+    logger.info(f"generating video: {video_width} x {video_height}")
     logger.info(f"  ① video: {video_path}")
     logger.info(f"  ② audio: {audio_path}")
     logger.info(f"  ③ subtitle: {subtitle_path}")
@@ -228,49 +375,68 @@ def generate_video(
     if os.name == "nt":
         font_path = font_path.replace("\\", "/")
 
-    logger.info(f"using font: {font_path}")
+    logger.info(f"  ⑤ font: {font_path}")
 
     def create_text_clip(subtitle_item):
+        params.font_size = int(params.font_size)
+        params.stroke_width = int(params.stroke_width)
         phrase = subtitle_item[1]
         max_width = video_width * 0.9
         wrapped_txt, txt_height = wrap_text(
             phrase, max_width=max_width, font=font_path, fontsize=params.font_size
        )
+        interline = int(params.font_size * 0.25)
+        size=(int(max_width), int(txt_height + params.font_size * 0.25 + (interline * (wrapped_txt.count("\n") + 1))))
 
         _clip = TextClip(
-            wrapped_txt,
+            text=wrapped_txt,
             font=font_path,
-            fontsize=params.font_size,
+            font_size=params.font_size,
             color=params.text_fore_color,
             bg_color=params.text_background_color,
             stroke_color=params.stroke_color,
             stroke_width=params.stroke_width,
-            print_cmd=False,
+            # interline=interline,
+            # size=size,
         )
         duration = subtitle_item[0][1] - subtitle_item[0][0]
-        _clip = _clip.set_start(subtitle_item[0][0])
-        _clip = _clip.set_end(subtitle_item[0][1])
-        _clip = _clip.set_duration(duration)
+        _clip = _clip.with_start(subtitle_item[0][0])
+        _clip = _clip.with_end(subtitle_item[0][1])
+        _clip = _clip.with_duration(duration)
         if params.subtitle_position == "bottom":
-            _clip = _clip.set_position(("center", video_height * 0.95 - _clip.h))
+            _clip = _clip.with_position(("center", video_height * 0.95 - _clip.h))
         elif params.subtitle_position == "top":
-            _clip = _clip.set_position(("center", video_height * 0.05))
+            _clip = _clip.with_position(("center", video_height * 0.05))
         elif params.subtitle_position == "custom":
-            # 确保字幕完全在屏幕内
-            margin = 10  # 额外的边距,单位为像素
+            # Ensure the subtitle is fully within the screen bounds
+            margin = 10  # Additional margin, in pixels
             max_y = video_height - _clip.h - margin
             min_y = margin
             custom_y = (video_height - _clip.h) * (params.custom_position / 100)
-            custom_y = max(min_y, min(custom_y, max_y))  # 限制 y 值在有效范围内
-            _clip = _clip.set_position(("center", custom_y))
+            custom_y = max(
+                min_y, min(custom_y, max_y)
+            )  # Constrain the y value within the valid range
+            _clip = _clip.with_position(("center", custom_y))
         else:  # center
-            _clip = _clip.set_position(("center", "center"))
+            _clip = _clip.with_position(("center", "center"))
         return _clip
 
-    video_clip = VideoFileClip(video_path)
-    audio_clip = AudioFileClip(audio_path).volumex(params.voice_volume)
+    video_clip = VideoFileClip(video_path).without_audio()
+    audio_clip = AudioFileClip(audio_path).with_effects(
+        [afx.MultiplyVolume(params.voice_volume)]
+    )
+
+    def make_textclip(text):
+        return TextClip(
+            text=text,
+            font=font_path,
+            font_size=params.font_size,
+        )
 
     if subtitle_path and os.path.exists(subtitle_path):
-        sub = SubtitlesClip(subtitles=subtitle_path, encoding="utf-8")
+        sub = SubtitlesClip(
+            subtitles=subtitle_path, encoding="utf-8", make_textclip=make_textclip
+        )
         text_clips = []
         for item in sub.subtitles:
             clip = create_text_clip(subtitle_item=item)
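For `subtitle_position == "custom"`, the 0–100 position slider is mapped onto the usable vertical range and clamped by a pixel margin so the text never leaves the frame. The arithmetic in isolation:

```python
def custom_subtitle_y(video_height, clip_height, custom_position, margin=10):
    # Map the 0-100 position slider onto the vertical travel of the clip,
    # then clamp so the subtitle stays fully on screen.
    max_y = video_height - clip_height - margin
    min_y = margin
    custom_y = (video_height - clip_height) * (custom_position / 100)
    return max(min_y, min(custom_y, max_y))


print(custom_subtitle_y(1920, 100, 70))   # 1274.0
print(custom_subtitle_y(1920, 100, 100))  # clamped to 1810
```

Without the clamp, `custom_position=100` would place the clip's top at `video_height - clip_height`, flush against the bottom edge with no margin.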
@@ -280,26 +446,28 @@ def generate_video(
     bgm_file = get_bgm_file(bgm_type=params.bgm_type, bgm_file=params.bgm_file)
     if bgm_file:
         try:
-            bgm_clip = (
-                AudioFileClip(bgm_file).volumex(params.bgm_volume).audio_fadeout(3)
-            )
-            bgm_clip = afx.audio_loop(bgm_clip, duration=video_clip.duration)
+            bgm_clip = AudioFileClip(bgm_file).with_effects(
+                [
+                    afx.MultiplyVolume(params.bgm_volume),
+                    afx.AudioFadeOut(3),
+                    afx.AudioLoop(duration=video_clip.duration),
+                ]
+            )
             audio_clip = CompositeAudioClip([audio_clip, bgm_clip])
         except Exception as e:
             logger.error(f"failed to add bgm: {str(e)}")
 
-    video_clip = video_clip.set_audio(audio_clip)
+    video_clip = video_clip.with_audio(audio_clip)
     video_clip.write_videofile(
         output_file,
-        audio_codec="aac",
+        audio_codec=audio_codec,
         temp_audiofile_path=output_dir,
         threads=params.n_threads or 2,
         logger=None,
-        fps=30,
+        fps=fps,
     )
     video_clip.close()
     del video_clip
-    logger.success("completed")
 
 
 def preprocess_video(materials: List[MaterialInfo], clip_duration=4):
@@ -316,94 +484,35 @@ def preprocess_video(materials: List[MaterialInfo], clip_duration=4):
         width = clip.size[0]
         height = clip.size[1]
         if width < 480 or height < 480:
-            logger.warning(f"video is too small, width: {width}, height: {height}")
+            logger.warning(f"low resolution material: {width}x{height}, minimum 480x480 required")
             continue
 
         if ext in const.FILE_TYPE_IMAGES:
             logger.info(f"processing image: {material.url}")
-            # Create an image clip and set its duration to 3 seconds
+            # Create an image clip and set its duration to 3 seconds
             clip = (
                 ImageClip(material.url)
-                .set_duration(clip_duration)
-                .set_position("center")
+                .with_duration(clip_duration)
+                .with_position("center")
             )
-            # Apply a zoom effect using the resize method; a lambda makes the
-            # zoom vary over time, from the original size up to 120%.
-            # t is the current time; clip.duration is the total length (3 s).
-            # Note: 1 means 100% size, so 1.2 means 120% size.
-            zoom_clip = clip.resize(
+            # Apply a zoom effect using the resize method.
+            # A lambda function is used to make the zoom effect dynamic over time.
+            # The zoom effect starts from the original size and gradually scales up to 120%.
+            # t represents the current time, and clip.duration is the total duration of the clip (3 seconds).
+            # Note: 1 represents 100% size, so 1.2 represents 120% size.
+            zoom_clip = clip.resized(
                 lambda t: 1 + (clip_duration * 0.03) * (t / clip.duration)
             )
 
-            # Optionally, create a composite video clip containing the zoomed clip
-            # (useful when you want to add other elements to the video).
+            # Optionally, create a composite video clip containing the zoomed clip.
+            # This is useful when you want to add other elements to the video.
             final_clip = CompositeVideoClip([zoom_clip])
 
-            # Output the video
+            # Output the video to a file.
             video_file = f"{material.url}.mp4"
             final_clip.write_videofile(video_file, fps=30, logger=None)
             final_clip.close()
             del final_clip
             material.url = video_file
-            logger.success(f"completed: {video_file}")
+            logger.success(f"image processed: {video_file}")
     return materials
 
 
-if __name__ == "__main__":
-    m = MaterialInfo()
-    m.url = "/Users/harry/Downloads/IMG_2915.JPG"
-    m.provider = "local"
-    materials = preprocess_video([m], clip_duration=4)
-    print(materials)
-
-    # txt_en = "Here's your guide to travel hacks for budget-friendly adventures"
-    # txt_zh = "测试长字段这是您的旅行技巧指南帮助您进行预算友好的冒险"
-    # font = utils.resource_dir() + "/fonts/STHeitiMedium.ttc"
-    # for txt in [txt_en, txt_zh]:
-    #     t, h = wrap_text(text=txt, max_width=1000, font=font, fontsize=60)
-    #     print(t)
-    #
-    # task_id = "aa563149-a7ea-49c2-b39f-8c32cc225baf"
-    # task_dir = utils.task_dir(task_id)
-    # video_file = f"{task_dir}/combined-1.mp4"
-    # audio_file = f"{task_dir}/audio.mp3"
-    # subtitle_file = f"{task_dir}/subtitle.srt"
-    # output_file = f"{task_dir}/final.mp4"
-    #
-    # # video_paths = []
-    # # for file in os.listdir(utils.storage_dir("test")):
-    # #     if file.endswith(".mp4"):
-    # #         video_paths.append(os.path.join(utils.storage_dir("test"), file))
-    # #
-    # # combine_videos(combined_video_path=video_file,
-    # #                audio_file=audio_file,
-    # #                video_paths=video_paths,
-    # #                video_aspect=VideoAspect.portrait,
-    # #                video_concat_mode=VideoConcatMode.random,
-    # #                max_clip_duration=5,
-    # #                threads=2)
-    #
-    # cfg = VideoParams()
-    # cfg.video_aspect = VideoAspect.portrait
-    # cfg.font_name = "STHeitiMedium.ttc"
-    # cfg.font_size = 60
-    # cfg.stroke_color = "#000000"
-    # cfg.stroke_width = 1.5
-    # cfg.text_fore_color = "#FFFFFF"
-    # cfg.text_background_color = "transparent"
-    # cfg.bgm_type = "random"
-    # cfg.bgm_file = ""
-    # cfg.bgm_volume = 1.0
-    # cfg.subtitle_enabled = True
-    # cfg.subtitle_position = "bottom"
-    # cfg.n_threads = 2
-    # cfg.paragraph_number = 1
-    #
-    # cfg.voice_volume = 1.0
-    #
-    # generate_video(video_path=video_file,
-    #                audio_path=audio_file,
-    #                subtitle_path=subtitle_file,
-    #                output_file=output_file,
-    #                params=cfg
-    #                )
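The image branch of `preprocess_video` animates a zoom with `resized(lambda t: 1 + (clip_duration * 0.03) * (t / clip.duration))`. Evaluating that factor on its own (with the signature's default `clip_duration=4`, i.e. a 12% zoom by the end of the clip):

```python
clip_duration = 4  # default preprocess_video clip_duration


def zoom_factor(t, duration=clip_duration):
    # Linear zoom: 100% at t=0, growing to 1 + 0.03 * clip_duration
    # (here 112%) when t reaches the clip's end.
    return 1 + (clip_duration * 0.03) * (t / duration)


print(zoom_factor(0))  # 1.0
print(round(zoom_factor(clip_duration), 2))  # 1.12
```

Tying the zoom amount to `clip_duration` keeps the apparent zoom *speed* constant: longer stills zoom further, not faster.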
@@ -2,11 +2,13 @@ import asyncio
 import os
 import re
 from datetime import datetime
+from typing import Union
 from xml.sax.saxutils import unescape
 
+import edge_tts
+from edge_tts import SubMaker, submaker
 from edge_tts.submaker import mktimestamp
 from loguru import logger
-from edge_tts import submaker, SubMaker
-import edge_tts
 from moviepy.video.tools import subtitles
 
 from app.config import config
@@ -14,8 +16,6 @@ from app.utils import utils
 
 
 def get_all_azure_voices(filter_locals=None) -> list[str]:
-    if filter_locals is None:
-        filter_locals = ["zh-CN", "en-US", "zh-HK", "zh-TW", "vi-VN"]
     voices_str = """
 Name: af-ZA-AdriNeural
 Gender: Female
@@ -302,21 +302,33 @@ Gender: Female
 Name: en-US-AnaNeural
 Gender: Female
 
+Name: en-US-AndrewMultilingualNeural
+Gender: Male
+
 Name: en-US-AndrewNeural
 Gender: Male
 
 Name: en-US-AriaNeural
 Gender: Female
 
+Name: en-US-AvaMultilingualNeural
+Gender: Female
+
 Name: en-US-AvaNeural
 Gender: Female
 
+Name: en-US-BrianMultilingualNeural
+Gender: Male
+
 Name: en-US-BrianNeural
 Gender: Male
 
 Name: en-US-ChristopherNeural
 Gender: Male
 
+Name: en-US-EmmaMultilingualNeural
+Gender: Female
+
 Name: en-US-EmmaNeural
 Gender: Female
 
@@ -602,12 +614,24 @@ Gender: Male
 Name: it-IT-ElsaNeural
 Gender: Female
 
-Name: it-IT-GiuseppeNeural
+Name: it-IT-GiuseppeMultilingualNeural
 Gender: Male
 
 Name: it-IT-IsabellaNeural
 Gender: Female
 
+Name: iu-Cans-CA-SiqiniqNeural
+Gender: Female
+
+Name: iu-Cans-CA-TaqqiqNeural
+Gender: Male
+
+Name: iu-Latn-CA-SiqiniqNeural
+Gender: Female
+
+Name: iu-Latn-CA-TaqqiqNeural
+Gender: Male
+
 Name: ja-JP-KeitaNeural
 Gender: Male
 
@@ -644,7 +668,7 @@ Gender: Male
 Name: kn-IN-SapnaNeural
 Gender: Female
 
-Name: ko-KR-HyunsuNeural
+Name: ko-KR-HyunsuMultilingualNeural
 Gender: Male
 
 Name: ko-KR-InJoonNeural
@@ -758,7 +782,7 @@ Gender: Male
 Name: pt-BR-FranciscaNeural
 Gender: Female
 
-Name: pt-BR-ThalitaNeural
+Name: pt-BR-ThalitaMultilingualNeural
 Gender: Female
 
 Name: pt-PT-DuarteNeural
@@ -988,27 +1012,20 @@ Name: zh-CN-XiaoxiaoMultilingualNeural-V2
 Gender: Female
 """.strip()
     voices = []
-    name = ""
-    for line in voices_str.split("\n"):
-        line = line.strip()
-        if not line:
-            continue
-        if line.startswith("Name: "):
-            name = line[6:].strip()
-        if line.startswith("Gender: "):
-            gender = line[8:].strip()
-            if name and gender:
-                # voices.append({
-                #     "name": name,
-                #     "gender": gender,
-                # })
-                if filter_locals:
-                    for filter_local in filter_locals:
-                        if name.lower().startswith(filter_local.lower()):
-                            voices.append(f"{name}-{gender}")
-                else:
-                    voices.append(f"{name}-{gender}")
-            name = ""
+    # Regular expression pattern to match the Name and Gender lines
+    pattern = re.compile(r"Name:\s*(.+)\s*Gender:\s*(.+)\s*", re.MULTILINE)
+    # Find all matches with the regular expression
+    matches = pattern.findall(voices_str)
+
+    for name, gender in matches:
+        # Apply the filter conditions
+        if filter_locals and any(
+            name.lower().startswith(fl.lower()) for fl in filter_locals
+        ):
+            voices.append(f"{name}-{gender}")
+        elif not filter_locals:
+            voices.append(f"{name}-{gender}")
+
     voices.sort()
     return voices
 
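The rewritten `get_all_azure_voices` replaces the line-by-line state machine with a single `re.findall` over `Name:`/`Gender:` pairs. Its behavior on a two-voice sample:

```python
import re

voices_str = """
Name: en-US-AndrewNeural
Gender: Male

Name: zh-CN-XiaoxiaoNeural
Gender: Female
""".strip()

# Same pattern as the diff: the name's (.+) stops at the end of its line,
# the \s* then swallows the newline before "Gender:".
pattern = re.compile(r"Name:\s*(.+)\s*Gender:\s*(.+)\s*", re.MULTILINE)
matches = pattern.findall(voices_str)

filter_locals = ["zh-CN"]
voices = [
    f"{name}-{gender}"
    for name, gender in matches
    if any(name.lower().startswith(fl.lower()) for fl in filter_locals)
]
print(voices)  # ['zh-CN-XiaoxiaoNeural-Female']
```

Because `.` does not match newlines by default, each capture is confined to its own line even without anchors, which is what makes the pair-wise `findall` safe here.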
@@ -1030,7 +1047,7 @@ def is_azure_v2_voice(voice_name: str):
 
 def tts(
     text: str, voice_name: str, voice_rate: float, voice_file: str
-) -> [SubMaker, None]:
+) -> Union[SubMaker, None]:
     if is_azure_v2_voice(voice_name):
         return azure_tts_v2(text, voice_name, voice_file)
     return azure_tts_v1(text, voice_name, voice_rate, voice_file)
@@ -1048,7 +1065,7 @@ def convert_rate_to_percent(rate: float) -> str:
 
 def azure_tts_v1(
     text: str, voice_name: str, voice_rate: float, voice_file: str
-) -> [SubMaker, None]:
+) -> Union[SubMaker, None]:
     voice_name = parse_voice_name(voice_name)
     text = text.strip()
     rate_str = convert_rate_to_percent(voice_rate)
@@ -1071,7 +1088,7 @@ def azure_tts_v1(
         sub_maker = asyncio.run(_do())
         if not sub_maker or not sub_maker.subs:
-            logger.warning(f"failed, sub_maker is None or sub_maker.subs is None")
+            logger.warning("failed, sub_maker is None or sub_maker.subs is None")
             continue
 
         logger.info(f"completed, output file: {voice_file}")
@@ -1081,7 +1098,7 @@ def azure_tts_v1(
     return None
 
 
-def azure_tts_v2(text: str, voice_name: str, voice_file: str) -> [SubMaker, None]:
+def azure_tts_v2(text: str, voice_name: str, voice_file: str) -> Union[SubMaker, None]:
     voice_name = is_azure_v2_voice(voice_name)
     if not voice_name:
         logger.error(f"invalid voice name: {voice_name}")
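The annotation fix in these hunks matters: `-> [SubMaker, None]` is just a two-element list literal, not a type, so type checkers cannot interpret it, whereas `Union[SubMaker, None]` (equivalently `Optional[SubMaker]`) is a real optional return type. A quick self-contained demonstration (using a stand-in class for `edge_tts.SubMaker`):

```python
from typing import Optional, Union


class SubMaker:  # stand-in for edge_tts.SubMaker, for illustration only
    pass


def tts_old() -> [SubMaker, None]:  # legal syntax, but the annotation is a list object
    return None


def tts_new() -> Union[SubMaker, None]:  # a proper optional return annotation
    return None


# The "old" annotation is literally a list, not a type:
print(type(tts_old.__annotations__["return"]))
# Union[X, None] and Optional[X] are the same thing:
print(Union[SubMaker, None] == Optional[SubMaker])  # True
```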
@@ -1,12 +1,12 @@
+import json
 import locale
 import os
-import platform
 import threading
 from typing import Any
-from loguru import logger
-import json
 from uuid import uuid4
 
 import urllib3
+from loguru import logger
 
 from app.models import const
@@ -26,33 +26,33 @@ def get_response(status: int, data: Any = None, message: str = ""):
 
 def to_json(obj):
     try:
-        # 定义一个辅助函数来处理不同类型的对象
+        # Define a helper function to handle different types of objects
         def serialize(o):
-            # 如果对象是可序列化类型,直接返回
+            # If the object is a serializable type, return it directly
             if isinstance(o, (int, float, bool, str)) or o is None:
                 return o
-            # 如果对象是二进制数据,转换为base64编码的字符串
+            # If the object is binary data, convert it to a base64-encoded string
             elif isinstance(o, bytes):
                 return "*** binary data ***"
-            # 如果对象是字典,递归处理每个键值对
+            # If the object is a dictionary, recursively process each key-value pair
             elif isinstance(o, dict):
                 return {k: serialize(v) for k, v in o.items()}
-            # 如果对象是列表或元组,递归处理每个元素
+            # If the object is a list or tuple, recursively process each element
             elif isinstance(o, (list, tuple)):
                 return [serialize(item) for item in o]
-            # 如果对象是自定义类型,尝试返回其__dict__属性
+            # If the object is a custom type, attempt to return its __dict__ attribute
             elif hasattr(o, "__dict__"):
                 return serialize(o.__dict__)
-            # 其他情况返回None(或者可以选择抛出异常)
+            # Return None for other cases (or choose to raise an exception)
             else:
                 return None
 
-        # 使用serialize函数处理输入对象
+        # Use the serialize function to process the input object
         serialized_obj = serialize(obj)
 
-        # 序列化处理后的对象为JSON字符串
+        # Serialize the processed object into a JSON string
        return json.dumps(serialized_obj, ensure_ascii=False, indent=4)
-    except Exception as e:
+    except Exception:
         return None
 
 
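The `serialize` helper in the hunk above can be exercised in isolation; the following is a minimal self-contained copy of its recursion, fed with an illustrative custom class:

```python
import json


def to_json(obj):
    """Best-effort JSON dump: recurses through dicts/lists/tuples, replaces bytes
    with a placeholder, falls back to __dict__ for custom objects, and returns
    None on any failure."""

    def serialize(o):
        if isinstance(o, (int, float, bool, str)) or o is None:
            return o
        elif isinstance(o, bytes):
            return "*** binary data ***"
        elif isinstance(o, dict):
            return {k: serialize(v) for k, v in o.items()}
        elif isinstance(o, (list, tuple)):
            return [serialize(item) for item in o]
        elif hasattr(o, "__dict__"):
            return serialize(o.__dict__)
        else:
            return None

    try:
        return json.dumps(serialize(obj), ensure_ascii=False, indent=4)
    except Exception:
        return None


class Task:  # illustrative custom object, not from the project
    def __init__(self):
        self.id = 1
        self.payload = b"\x00\x01"


print(to_json({"task": Task(), "tags": ("a", "b")}))
```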
@@ -94,7 +94,7 @@ def task_dir(sub_dir: str = ""):
 
 
 def font_dir(sub_dir: str = ""):
-    d = resource_dir(f"fonts")
+    d = resource_dir("fonts")
     if sub_dir:
         d = os.path.join(d, sub_dir)
     if not os.path.exists(d):
@@ -103,7 +103,7 @@ def font_dir(sub_dir: str = ""):
 
 
 def song_dir(sub_dir: str = ""):
-    d = resource_dir(f"songs")
+    d = resource_dir("songs")
     if sub_dir:
         d = os.path.join(d, sub_dir)
     if not os.path.exists(d):
@@ -112,7 +112,7 @@ def song_dir(sub_dir: str = ""):
 
 
 def public_dir(sub_dir: str = ""):
-    d = resource_dir(f"public")
+    d = resource_dir("public")
     if sub_dir:
         d = os.path.join(d, sub_dir)
     if not os.path.exists(d):
@@ -182,7 +182,7 @@ def split_string_by_punctuations(s):
             next_char = s[i + 1]
 
             if char == "." and previous_char.isdigit() and next_char.isdigit():
-                # 取现1万,按2.5%收取手续费, 2.5 中的 . 不能作为换行标记
+                # In the case of "withdraw 10,000, charged at 2.5% fee", the dot in "2.5" should not be treated as a line break marker
                 txt += char
                 continue
 
@@ -210,7 +210,7 @@ def get_system_locale():
         # en_US, en_GB return en
         language_code = loc[0].split("_")[0]
         return language_code
-    except Exception as e:
+    except Exception:
         return "en"
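The `get_system_locale` hunk above keeps the logic of reducing a full locale such as `en_US` to its language code and only drops the unused exception variable. A minimal sketch of that function under the same fallback behavior (assuming, as the context lines suggest, that the locale comes from the standard `locale` module):

```python
import locale


def get_system_locale() -> str:
    """Return the language part of the system locale ("en_US" -> "en"), or "en"
    when the locale cannot be determined."""
    try:
        loc = locale.getdefaultlocale()  # e.g. ("en_US", "UTF-8")
        # en_US, en_GB return en
        return loc[0].split("_")[0]
    except Exception:
        return "en"


print(get_system_locale())
```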
@@ -1,194 +1,200 @@
 [app]
+video_source = "pexels" # "pexels" or "pixabay"
 
-video_source = "pexels" # "pexels" or "pixabay"
+# 是否隐藏配置面板
+hide_config = false
+
 # Pexels API Key
 # Register at https://www.pexels.com/api/ to get your API key.
 # You can use multiple keys to avoid rate limits.
 # For example: pexels_api_keys = ["123adsf4567adf89","abd1321cd13efgfdfhi"]
 # 特别注意格式,Key 用英文双引号括起来,多个Key用逗号隔开
 pexels_api_keys = []
 
 # Pixabay API Key
 # Register at https://pixabay.com/api/docs/ to get your API key.
 # You can use multiple keys to avoid rate limits.
 # For example: pixabay_api_keys = ["123adsf4567adf89","abd1321cd13efgfdfhi"]
 # 特别注意格式,Key 用英文双引号括起来,多个Key用逗号隔开
 pixabay_api_keys = []
 
-# 如果你没有 OPENAI API Key,可以使用 g4f 代替,或者使用国内的 Moonshot API
-# If you don't have an OPENAI API Key, you can use g4f instead
-
 # 支持的提供商 (Supported providers):
 # openai
 # moonshot (月之暗面)
-# oneapi
-# g4f
 # azure
 # qwen (通义千问)
+# deepseek
 # gemini
+# ollama
+# g4f
+# oneapi
+# cloudflare
+# ernie (文心一言)
-llm_provider="openai"
+llm_provider = "openai"
 
 ########## Ollama Settings
 # No need to set it unless you want to use your own proxy
 ollama_base_url = ""
 # Check your available models at https://ollama.com/library
 ollama_model_name = ""
 
 ########## OpenAI API Key
 # Get your API key at https://platform.openai.com/api-keys
 openai_api_key = ""
 # No need to set it unless you want to use your own proxy
 openai_base_url = ""
 # Check your available models at https://platform.openai.com/account/limits
-openai_model_name = "gpt-4-turbo"
+openai_model_name = "gpt-4o-mini"
 
 ########## Moonshot API Key
 # Visit https://platform.moonshot.cn/console/api-keys to get your API key.
-moonshot_api_key=""
+moonshot_api_key = ""
 moonshot_base_url = "https://api.moonshot.cn/v1"
 moonshot_model_name = "moonshot-v1-8k"
 
 ########## OneAPI API Key
 # Visit https://github.com/songquanpeng/one-api to get your API key
-oneapi_api_key=""
-oneapi_base_url=""
-oneapi_model_name=""
+oneapi_api_key = ""
+oneapi_base_url = ""
+oneapi_model_name = ""
 
 ########## G4F
 # Visit https://github.com/xtekky/gpt4free to get more details
 # Supported model list: https://github.com/xtekky/gpt4free/blob/main/g4f/models.py
 g4f_model_name = "gpt-3.5-turbo"
 
 ########## Azure API Key
 # Visit https://learn.microsoft.com/zh-cn/azure/ai-services/openai/ to get more details
 # API documentation: https://learn.microsoft.com/zh-cn/azure/ai-services/openai/reference
 azure_api_key = ""
-azure_base_url=""
-azure_model_name="gpt-35-turbo" # replace with your model deployment name
+azure_base_url = ""
+azure_model_name = "gpt-35-turbo" # replace with your model deployment name
 azure_api_version = "2024-02-15-preview"
 
 ########## Gemini API Key
-gemini_api_key=""
+gemini_api_key = ""
 gemini_model_name = "gemini-1.0-pro"
 
 ########## Qwen API Key
 # Visit https://dashscope.console.aliyun.com/apiKey to get your API key
 # Visit below links to get more details
 # https://tongyi.aliyun.com/qianwen/
 # https://help.aliyun.com/zh/dashscope/developer-reference/model-introduction
 qwen_api_key = ""
 qwen_model_name = "qwen-max"
 
 
 ########## DeepSeek API Key
 # Visit https://platform.deepseek.com/api_keys to get your API key
 deepseek_api_key = ""
 deepseek_base_url = "https://api.deepseek.com"
 deepseek_model_name = "deepseek-chat"
 
 # Subtitle Provider, "edge" or "whisper"
 # If empty, the subtitle will not be generated
 subtitle_provider = "edge"
 
 #
 # ImageMagick
 #
 # Once you have installed it, ImageMagick will be automatically detected, except on Windows!
 # On Windows, for example "C:\Program Files (x86)\ImageMagick-7.1.1-Q16-HDRI\magick.exe"
 # Download from https://imagemagick.org/archive/binaries/ImageMagick-7.1.1-29-Q16-x64-static.exe
 
 # imagemagick_path = "C:\\Program Files (x86)\\ImageMagick-7.1.1-Q16\\magick.exe"
 
 
 #
 # FFMPEG
 #
 # 通常情况下,ffmpeg 会被自动下载,并且会被自动检测到。
 # 但是如果你的环境有问题,无法自动下载,可能会遇到如下错误:
 # RuntimeError: No ffmpeg exe could be found.
 # Install ffmpeg on your system, or set the IMAGEIO_FFMPEG_EXE environment variable.
 # 此时你可以手动下载 ffmpeg 并设置 ffmpeg_path,下载地址:https://www.gyan.dev/ffmpeg/builds/
 
 # Under normal circumstances, ffmpeg is downloaded automatically and detected automatically.
 # However, if there is an issue with your environment that prevents automatic downloading, you might encounter the following error:
 # RuntimeError: No ffmpeg exe could be found.
 # Install ffmpeg on your system, or set the IMAGEIO_FFMPEG_EXE environment variable.
 # In such cases, you can manually download ffmpeg and set the ffmpeg_path, download link: https://www.gyan.dev/ffmpeg/builds/
 
 # ffmpeg_path = "C:\\Users\\harry\\Downloads\\ffmpeg.exe"
 #########################################################################################
 
 # 当视频生成成功后,API服务提供的视频下载接入点,默认为当前服务的地址和监听端口
 # 比如 http://127.0.0.1:8080/tasks/6357f542-a4e1-46a1-b4c9-bf3bd0df5285/final-1.mp4
 # 如果你需要使用域名对外提供服务(一般会用nginx做代理),则可以设置为你的域名
 # 比如 https://xxxx.com/tasks/6357f542-a4e1-46a1-b4c9-bf3bd0df5285/final-1.mp4
 # endpoint="https://xxxx.com"
 
 # When the video is successfully generated, the API service provides a download endpoint for the video, defaulting to the service's current address and listening port.
 # For example, http://127.0.0.1:8080/tasks/6357f542-a4e1-46a1-b4c9-bf3bd0df5285/final-1.mp4
 # If you need to provide the service externally using a domain name (usually done with nginx as a proxy), you can set it to your domain name.
 # For example, https://xxxx.com/tasks/6357f542-a4e1-46a1-b4c9-bf3bd0df5285/final-1.mp4
 # endpoint="https://xxxx.com"
-endpoint=""
+endpoint = ""
 
 
 # Video material storage location
 # material_directory = "" # Indicates that video materials will be downloaded to the default folder, the default folder is ./storage/cache_videos under the current project
 # material_directory = "/user/harry/videos" # Indicates that video materials will be downloaded to a specified folder
 # material_directory = "task" # Indicates that video materials will be downloaded to the current task's folder, this method does not allow sharing of already downloaded video materials
 
 # 视频素材存放位置
 # material_directory = "" #表示将视频素材下载到默认的文件夹,默认文件夹为当前项目下的 ./storage/cache_videos
 # material_directory = "/user/harry/videos" #表示将视频素材下载到指定的文件夹中
 # material_directory = "task" #表示将视频素材下载到当前任务的文件夹中,这种方式无法共享已经下载的视频素材
 
 material_directory = ""
 
 # Used for state management of the task
 enable_redis = false
 redis_host = "localhost"
 redis_port = 6379
 redis_db = 0
 redis_password = ""
 
 # 文生视频时的最大并发任务数
 max_concurrent_tasks = 5
 
-# webui界面是否显示配置项
-# webui hide baisc config panel
-hide_config = false
-
 
 [whisper]
 # Only effective when subtitle_provider is "whisper"
 
 # Run on GPU with FP16
 # model = WhisperModel(model_size, device="cuda", compute_type="float16")
 
 # Run on GPU with INT8
 # model = WhisperModel(model_size, device="cuda", compute_type="int8_float16")
 
 # Run on CPU with INT8
 # model = WhisperModel(model_size, device="cpu", compute_type="int8")
 
 # recommended model_size: "large-v3"
-model_size="large-v3"
+model_size = "large-v3"
 # if you want to use GPU, set device="cuda"
-device="CPU"
-compute_type="int8"
+device = "CPU"
+compute_type = "int8"
 
 
 [proxy]
 ### Use a proxy to access the Pexels API
 ### Format: "http://<username>:<password>@<proxy>:<port>"
 ### Example: "http://user:pass@proxy:1234"
 ### Doc: https://requests.readthedocs.io/en/latest/user/advanced/#proxies
 
 # http = "http://10.10.1.10:3128"
 # https = "http://10.10.1.10:1080"
 
 [azure]
 # Azure Speech API Key
 # Get your API key at https://portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/SpeechServices
-speech_key=""
-speech_region=""
+speech_key = ""
+speech_region = ""
+
+[ui]
+# UI related settings
+# 是否隐藏日志信息
+# Whether to hide logs in the UI
+hide_log = false
@@ -6,7 +6,7 @@ services:
     build:
       context: .
       dockerfile: Dockerfile
-    container_name: "webui"
+    container_name: "moneyprinterturbo-webui"
     ports:
       - "8501:8501"
     command: [ "streamlit", "run", "./webui/Main.py","--browser.serverAddress=127.0.0.1","--server.enableCORS=True","--browser.gatherUsageStats=False" ]
@@ -16,7 +16,7 @@ services:
     build:
       context: .
       dockerfile: Dockerfile
-    container_name: "api"
+    container_name: "moneyprinterturbo-api"
    ports:
       - "8080:8080"
     command: [ "python3", "main.py" ]
BIN  docs/api.jpg   (Size: 252 KiB → 113 KiB)
BIN  (Size: 384 KiB → 284 KiB)
BIN  docs/webui.jpg (Size: 340 KiB → 275 KiB)
BIN  (Size: 90 KiB → 137 KiB)
pyproject.toml (new file, 32 lines)
@@ -0,0 +1,32 @@
+[project]
+name = "MoneyPrinterTurbo"
+version = "1.2.3"
+description = "Default template for PDM package"
+authors = [
+    {name = "yyhhyyyyyy", email = "yyhhyyyyyy8@gmail.com"},
+]
+dependencies = [
+    "moviepy==2.1.1",
+    "streamlit==1.40.2",
+    "edge-tts==6.1.19",
+    "fastapi==0.115.6",
+    "uvicorn==0.32.1",
+    "openai==1.56.1",
+    "faster-whisper==1.1.0",
+    "loguru==0.7.2",
+    "google-generativeai==0.8.3",
+    "dashscope==1.20.14",
+    "g4f==0.3.8.1",
+    "azure-cognitiveservices-speech==1.41.1",
+    "redis==5.2.0",
+    "python-multipart==0.0.19",
+    "streamlit-authenticator==0.4.1",
+    "pyyaml",
+]
+requires-python = "==3.11.*"
+readme = "README.md"
+license = {text = "MIT"}
+
+
+[tool.pdm]
+distribution = false
@@ -1,26 +1,15 @@
-requests~=2.31.0
-moviepy~=2.0.0.dev2
-openai~=1.13.3
-faster-whisper~=1.0.1
-edge_tts~=6.1.10
-uvicorn~=0.27.1
-fastapi~=0.110.0
-tomli~=2.0.1
-streamlit~=1.33.0
-loguru~=0.7.2
-aiohttp~=3.9.3
-urllib3~=2.2.1
-pillow~=10.3.0
-pydantic~=2.6.3
-g4f~=0.3.0.4
-dashscope~=1.15.0
-google.generativeai~=0.4.1
-python-multipart~=0.0.9
-redis==5.0.3
-# if you use pillow~=10.3.0, you will get "PIL.Image' has no attribute 'ANTIALIAS'" error when resize video
-# please install opencv-python to fix "PIL.Image' has no attribute 'ANTIALIAS'" error
-opencv-python~=4.9.0.80
-# for azure speech
-# https://techcommunity.microsoft.com/t5/ai-azure-ai-services-blog/9-more-realistic-ai-voices-for-conversations-now-generally/ba-p/4099471
-azure-cognitiveservices-speech~=1.37.0
-git-changelog~=2.5.2
+moviepy==2.1.2
+streamlit==1.45.0
+edge_tts==6.1.19
+fastapi==0.115.6
+uvicorn==0.32.1
+openai==1.56.1
+faster-whisper==1.1.0
+loguru==0.7.3
+google.generativeai==0.8.3
+dashscope==1.20.14
+g4f==0.5.2.2
+azure-cognitiveservices-speech==1.41.1
+redis==5.2.0
+python-multipart==0.0.19
+pyyaml
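The requirements change swaps compatible-release specifiers (`~=`) for exact pins (`==`). For a three-component version, `~=2.31.0` means `>=2.31.0, <2.32.0`. A small illustrative checker for exactly that case (not a general PEP 440 implementation; real tools should use the `packaging` library):

```python
def satisfies_compatible_release(version: str, spec: str) -> bool:
    """Check `version` against a three-component `~=` spec such as "2.31.0".

    `~=X.Y.Z` is equivalent to `>=X.Y.Z, <X.(Y+1).0`.
    """
    v = tuple(int(p) for p in version.split("."))
    s = tuple(int(p) for p in spec.split("."))
    lower = s
    upper = (s[0], s[1] + 1, 0)  # bump the second-to-last component
    return lower <= v < upper


print(satisfies_compatible_release("2.31.5", "2.31.0"))  # True
print(satisfies_compatible_release("2.32.0", "2.31.0"))  # False
```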
webui/Main.py (239 lines changed)
@@ -1,5 +1,10 @@
 import os
+import platform
 import sys
+from uuid import uuid4
+
+import streamlit as st
+from loguru import logger
 
 # Add the root directory of the project to the system path to allow importing modules from the project
 root_dir = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
@@ -9,12 +14,17 @@ if root_dir not in sys.path:
     print(sys.path)
     print("")
 
-import os
-import platform
-from uuid import uuid4
-
-import streamlit as st
-from loguru import logger
+from app.config import config
+from app.models.schema import (
+    MaterialInfo,
+    VideoAspect,
+    VideoConcatMode,
+    VideoParams,
+    VideoTransitionMode,
+)
+from app.services import llm, voice
+from app.services import task as tm
+from app.utils import utils
 
 st.set_page_config(
     page_title="MoneyPrinterTurbo",
@@ -30,18 +40,61 @@ st.set_page_config(
     },
 )
 
-from app.config import config
-from app.models.const import FILE_TYPE_IMAGES, FILE_TYPE_VIDEOS
-from app.models.schema import MaterialInfo, VideoAspect, VideoConcatMode, VideoParams
-from app.services import llm, voice
-from app.services import task as tm
-from app.utils import utils
-
-hide_streamlit_style = """
-<style>#root > div:nth-child(1) > div > div > div > div > section > div {padding-top: 0rem;}</style>
+streamlit_style = """
+<style>
+h1 {
+    padding-top: 0 !important;
+}
+</style>
 """
-st.markdown(hide_streamlit_style, unsafe_allow_html=True)
-st.title(f"MoneyPrinterTurbo v{config.project_version}")
+st.markdown(streamlit_style, unsafe_allow_html=True)
+
+# 定义资源目录
+font_dir = os.path.join(root_dir, "resource", "fonts")
+song_dir = os.path.join(root_dir, "resource", "songs")
+i18n_dir = os.path.join(root_dir, "webui", "i18n")
+config_file = os.path.join(root_dir, "webui", ".streamlit", "webui.toml")
+system_locale = utils.get_system_locale()
+
+
+if "video_subject" not in st.session_state:
+    st.session_state["video_subject"] = ""
+if "video_script" not in st.session_state:
+    st.session_state["video_script"] = ""
+if "video_terms" not in st.session_state:
+    st.session_state["video_terms"] = ""
+if "ui_language" not in st.session_state:
+    st.session_state["ui_language"] = config.ui.get("language", system_locale)
+
+# 加载语言文件
+locales = utils.load_locales(i18n_dir)
+
+# 创建一个顶部栏,包含标题和语言选择
+title_col, lang_col = st.columns([3, 1])
+
+with title_col:
+    st.title(f"MoneyPrinterTurbo v{config.project_version}")
+
+with lang_col:
+    display_languages = []
+    selected_index = 0
+    for i, code in enumerate(locales.keys()):
+        display_languages.append(f"{code} - {locales[code].get('Language')}")
+        if code == st.session_state.get("ui_language", ""):
+            selected_index = i
+
+    selected_language = st.selectbox(
+        "Language / 语言",
+        options=display_languages,
+        index=selected_index,
+        key="top_language_selector",
+        label_visibility="collapsed",
+    )
+    if selected_language:
+        code = selected_language.split(" - ")[0].strip()
+        st.session_state["ui_language"] = code
+        config.ui["language"] = code
 
 support_locales = [
     "zh-CN",
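The language selector added above builds its display labels and default index from the `locales` dict before handing them to `st.selectbox`. That pure-Python part can be sketched and tested without Streamlit (the sample `locales` dict below is illustrative, not the project's real i18n data):

```python
def build_language_options(locales: dict, current_code: str):
    """Return (display labels, index of the currently selected language)."""
    display_languages = []
    selected_index = 0
    for i, code in enumerate(locales.keys()):
        display_languages.append(f"{code} - {locales[code].get('Language')}")
        if code == current_code:
            selected_index = i
    return display_languages, selected_index


locales = {"en": {"Language": "English"}, "zh": {"Language": "简体中文"}}
labels, idx = build_language_options(locales, "zh")
print(labels)  # ['en - English', 'zh - 简体中文']
print(idx)     # 1
```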
@@ -54,23 +107,6 @@ support_locales = [
     "th-TH",
 ]
 
-font_dir = os.path.join(root_dir, "resource", "fonts")
-song_dir = os.path.join(root_dir, "resource", "songs")
-i18n_dir = os.path.join(root_dir, "webui", "i18n")
-config_file = os.path.join(root_dir, "webui", ".streamlit", "webui.toml")
-system_locale = utils.get_system_locale()
-# print(f"******** system locale: {system_locale} ********")
-
-if "video_subject" not in st.session_state:
-    st.session_state["video_subject"] = ""
-if "video_script" not in st.session_state:
-    st.session_state["video_script"] = ""
-if "video_terms" not in st.session_state:
-    st.session_state["video_terms"] = ""
-if "ui_language" not in st.session_state:
-    st.session_state["ui_language"] = config.ui.get("language", system_locale)
-
 
 def get_all_fonts():
     fonts = []
     for root, dirs, files in os.walk(font_dir):
@@ -161,48 +197,32 @@ def tr(key):
     loc = locales.get(st.session_state["ui_language"], {})
     return loc.get("Translation", {}).get(key, key)
 
+# 创建基础设置折叠框
-st.write(tr("Get Help"))
-
-llm_provider = config.app.get("llm_provider", "").lower()
 
 if not config.app.get("hide_config", False):
     with st.expander(tr("Basic Settings"), expanded=False):
         config_panels = st.columns(3)
         left_config_panel = config_panels[0]
         middle_config_panel = config_panels[1]
         right_config_panel = config_panels[2]
-        with left_config_panel:
-            display_languages = []
-            selected_index = 0
-            for i, code in enumerate(locales.keys()):
-                display_languages.append(f"{code} - {locales[code].get('Language')}")
-                if code == st.session_state["ui_language"]:
-                    selected_index = i
 
-            selected_language = st.selectbox(
-                tr("Language"), options=display_languages, index=selected_index
-            )
-            if selected_language:
-                code = selected_language.split(" - ")[0].strip()
-                st.session_state["ui_language"] = code
-                config.ui["language"] = code
+        # 左侧面板 - 日志设置
+        with left_config_panel:
+            # 是否隐藏配置面板
+            hide_config = st.checkbox(
+                tr("Hide Basic Settings"), value=config.app.get("hide_config", False)
+            )
+            config.app["hide_config"] = hide_config
 
             # 是否禁用日志显示
             hide_log = st.checkbox(
-                tr("Hide Log"), value=config.app.get("hide_log", False)
+                tr("Hide Log"), value=config.ui.get("hide_log", False)
             )
             config.ui["hide_log"] = hide_log
 
+        # 中间面板 - LLM 设置
         with middle_config_panel:
-            # openai
-            # moonshot (月之暗面)
-            # oneapi
-            # g4f
-            # azure
-            # qwen (通义千问)
+            st.write(tr("LLM Settings"))
|
||||||
# gemini
|
|
||||||
# ollama
|
|
||||||
llm_providers = [
|
llm_providers = [
|
||||||
"OpenAI",
|
"OpenAI",
|
||||||
"Moonshot",
|
"Moonshot",
|
||||||
@@ -400,6 +420,7 @@ if not config.app.get("hide_config", False):
|
|||||||
if st_llm_account_id:
|
if st_llm_account_id:
|
||||||
config.app[f"{llm_provider}_account_id"] = st_llm_account_id
|
config.app[f"{llm_provider}_account_id"] = st_llm_account_id
|
||||||
|
|
||||||
|
# 右侧面板 - API 密钥设置
|
||||||
with right_config_panel:
|
with right_config_panel:
|
||||||
|
|
||||||
def get_keys_from_config(cfg_key):
|
def get_keys_from_config(cfg_key):
|
||||||
@@ -414,6 +435,8 @@ if not config.app.get("hide_config", False):
|
|||||||
if value:
|
if value:
|
||||||
config.app[cfg_key] = value.split(",")
|
config.app[cfg_key] = value.split(",")
|
||||||
|
|
||||||
|
st.write(tr("Video Source Settings"))
|
||||||
|
|
||||||
pexels_api_key = get_keys_from_config("pexels_api_keys")
|
pexels_api_key = get_keys_from_config("pexels_api_keys")
|
||||||
pexels_api_key = st.text_input(
|
pexels_api_key = st.text_input(
|
||||||
tr("Pexels API Key"), value=pexels_api_key, type="password"
|
tr("Pexels API Key"), value=pexels_api_key, type="password"
|
||||||
@@ -426,6 +449,7 @@ if not config.app.get("hide_config", False):
|
|||||||
)
|
)
|
||||||
save_keys_to_config("pixabay_api_keys", pixabay_api_key)
|
save_keys_to_config("pixabay_api_keys", pixabay_api_key)
|
||||||
|
|
||||||
|
llm_provider = config.app.get("llm_provider", "").lower()
|
||||||
panel = st.columns(3)
|
panel = st.columns(3)
|
||||||
left_panel = panel[0]
|
left_panel = panel[0]
|
||||||
middle_panel = panel[1]
|
middle_panel = panel[1]
|
||||||
@@ -438,7 +462,9 @@ with left_panel:
|
|||||||
with st.container(border=True):
|
with st.container(border=True):
|
||||||
st.write(tr("Video Script Settings"))
|
st.write(tr("Video Script Settings"))
|
||||||
params.video_subject = st.text_input(
|
params.video_subject = st.text_input(
|
||||||
tr("Video Subject"), value=st.session_state["video_subject"]
|
tr("Video Subject"),
|
||||||
|
value=st.session_state["video_subject"],
|
||||||
|
key="video_subject_input",
|
||||||
).strip()
|
).strip()
|
||||||
|
|
||||||
video_languages = [
|
video_languages = [
|
||||||
@@ -450,8 +476,12 @@ with left_panel:
|
|||||||
selected_index = st.selectbox(
|
selected_index = st.selectbox(
|
||||||
tr("Script Language"),
|
tr("Script Language"),
|
||||||
index=0,
|
index=0,
|
||||||
options=range(len(video_languages)), # 使用索引作为内部选项值
|
options=range(
|
||||||
format_func=lambda x: video_languages[x][0], # 显示给用户的是标签
|
len(video_languages)
|
||||||
|
), # Use the index as the internal option value
|
||||||
|
format_func=lambda x: video_languages[x][
|
||||||
|
0
|
||||||
|
], # The label is displayed to the user
|
||||||
)
|
)
|
||||||
params.video_language = video_languages[selected_index][1]
|
params.video_language = video_languages[selected_index][1]
|
||||||
|
|
||||||
@@ -463,9 +493,13 @@ with left_panel:
|
|||||||
video_subject=params.video_subject, language=params.video_language
|
video_subject=params.video_subject, language=params.video_language
|
||||||
)
|
)
|
||||||
terms = llm.generate_terms(params.video_subject, script)
|
terms = llm.generate_terms(params.video_subject, script)
|
||||||
|
if "Error: " in script:
|
||||||
|
st.error(tr(script))
|
||||||
|
elif "Error: " in terms:
|
||||||
|
st.error(tr(terms))
|
||||||
|
else:
|
||||||
st.session_state["video_script"] = script
|
st.session_state["video_script"] = script
|
||||||
st.session_state["video_terms"] = ", ".join(terms)
|
st.session_state["video_terms"] = ", ".join(terms)
|
||||||
|
|
||||||
params.video_script = st.text_area(
|
params.video_script = st.text_area(
|
||||||
tr("Video Script"), value=st.session_state["video_script"], height=280
|
tr("Video Script"), value=st.session_state["video_script"], height=280
|
||||||
)
|
)
|
||||||
@@ -476,10 +510,13 @@ with left_panel:
|
|||||||
|
|
||||||
with st.spinner(tr("Generating Video Keywords")):
|
with st.spinner(tr("Generating Video Keywords")):
|
||||||
terms = llm.generate_terms(params.video_subject, params.video_script)
|
terms = llm.generate_terms(params.video_subject, params.video_script)
|
||||||
|
if "Error: " in terms:
|
||||||
|
st.error(tr(terms))
|
||||||
|
else:
|
||||||
st.session_state["video_terms"] = ", ".join(terms)
|
st.session_state["video_terms"] = ", ".join(terms)
|
||||||
|
|
||||||
params.video_terms = st.text_area(
|
params.video_terms = st.text_area(
|
||||||
tr("Video Keywords"), value=st.session_state["video_terms"], height=50
|
tr("Video Keywords"), value=st.session_state["video_terms"]
|
||||||
)
|
)
|
||||||
|
|
||||||
with middle_panel:
|
with middle_panel:
|
||||||
@@ -513,7 +550,6 @@ with middle_panel:
|
|||||||
config.app["video_source"] = params.video_source
|
config.app["video_source"] = params.video_source
|
||||||
|
|
||||||
if params.video_source == "local":
|
if params.video_source == "local":
|
||||||
_supported_types = FILE_TYPE_VIDEOS + FILE_TYPE_IMAGES
|
|
||||||
uploaded_files = st.file_uploader(
|
uploaded_files = st.file_uploader(
|
||||||
"Upload Local Files",
|
"Upload Local Files",
|
||||||
type=["mp4", "mov", "avi", "flv", "mkv", "jpg", "jpeg", "png"],
|
type=["mp4", "mov", "avi", "flv", "mkv", "jpg", "jpeg", "png"],
|
||||||
@@ -523,21 +559,48 @@ with middle_panel:
|
|||||||
selected_index = st.selectbox(
|
selected_index = st.selectbox(
|
||||||
tr("Video Concat Mode"),
|
tr("Video Concat Mode"),
|
||||||
index=1,
|
index=1,
|
||||||
options=range(len(video_concat_modes)), # 使用索引作为内部选项值
|
options=range(
|
||||||
format_func=lambda x: video_concat_modes[x][0], # 显示给用户的是标签
|
len(video_concat_modes)
|
||||||
|
), # Use the index as the internal option value
|
||||||
|
format_func=lambda x: video_concat_modes[x][
|
||||||
|
0
|
||||||
|
], # The label is displayed to the user
|
||||||
)
|
)
|
||||||
params.video_concat_mode = VideoConcatMode(
|
params.video_concat_mode = VideoConcatMode(
|
||||||
video_concat_modes[selected_index][1]
|
video_concat_modes[selected_index][1]
|
||||||
)
|
)
|
||||||
|
|
||||||
|
# 视频转场模式
|
||||||
|
video_transition_modes = [
|
||||||
|
(tr("None"), VideoTransitionMode.none.value),
|
||||||
|
(tr("Shuffle"), VideoTransitionMode.shuffle.value),
|
||||||
|
(tr("FadeIn"), VideoTransitionMode.fade_in.value),
|
||||||
|
(tr("FadeOut"), VideoTransitionMode.fade_out.value),
|
||||||
|
(tr("SlideIn"), VideoTransitionMode.slide_in.value),
|
||||||
|
(tr("SlideOut"), VideoTransitionMode.slide_out.value),
|
||||||
|
]
|
||||||
|
selected_index = st.selectbox(
|
||||||
|
tr("Video Transition Mode"),
|
||||||
|
options=range(len(video_transition_modes)),
|
||||||
|
format_func=lambda x: video_transition_modes[x][0],
|
||||||
|
index=0,
|
||||||
|
)
|
||||||
|
params.video_transition_mode = VideoTransitionMode(
|
||||||
|
video_transition_modes[selected_index][1]
|
||||||
|
)
|
||||||
|
|
||||||
video_aspect_ratios = [
|
video_aspect_ratios = [
|
||||||
(tr("Portrait"), VideoAspect.portrait.value),
|
(tr("Portrait"), VideoAspect.portrait.value),
|
||||||
(tr("Landscape"), VideoAspect.landscape.value),
|
(tr("Landscape"), VideoAspect.landscape.value),
|
||||||
]
|
]
|
||||||
selected_index = st.selectbox(
|
selected_index = st.selectbox(
|
||||||
tr("Video Ratio"),
|
tr("Video Ratio"),
|
||||||
options=range(len(video_aspect_ratios)), # 使用索引作为内部选项值
|
options=range(
|
||||||
format_func=lambda x: video_aspect_ratios[x][0], # 显示给用户的是标签
|
len(video_aspect_ratios)
|
||||||
|
), # Use the index as the internal option value
|
||||||
|
format_func=lambda x: video_aspect_ratios[x][
|
||||||
|
0
|
||||||
|
], # The label is displayed to the user
|
||||||
)
|
)
|
||||||
params.video_aspect = VideoAspect(video_aspect_ratios[selected_index][1])
|
params.video_aspect = VideoAspect(video_aspect_ratios[selected_index][1])
|
||||||
|
|
||||||
@@ -555,7 +618,7 @@ with middle_panel:
|
|||||||
# tts_providers = ['edge', 'azure']
|
# tts_providers = ['edge', 'azure']
|
||||||
# tts_provider = st.selectbox(tr("TTS Provider"), tts_providers)
|
# tts_provider = st.selectbox(tr("TTS Provider"), tts_providers)
|
||||||
|
|
||||||
voices = voice.get_all_azure_voices(filter_locals=support_locales)
|
voices = voice.get_all_azure_voices(filter_locals=None)
|
||||||
friendly_names = {
|
friendly_names = {
|
||||||
v: v.replace("Female", tr("Female"))
|
v: v.replace("Female", tr("Female"))
|
||||||
.replace("Male", tr("Male"))
|
.replace("Male", tr("Male"))
|
||||||
@@ -621,10 +684,15 @@ with middle_panel:
|
|||||||
saved_azure_speech_region = config.azure.get("speech_region", "")
|
saved_azure_speech_region = config.azure.get("speech_region", "")
|
||||||
saved_azure_speech_key = config.azure.get("speech_key", "")
|
saved_azure_speech_key = config.azure.get("speech_key", "")
|
||||||
azure_speech_region = st.text_input(
|
azure_speech_region = st.text_input(
|
||||||
tr("Speech Region"), value=saved_azure_speech_region
|
tr("Speech Region"),
|
||||||
|
value=saved_azure_speech_region,
|
||||||
|
key="azure_speech_region_input",
|
||||||
)
|
)
|
||||||
azure_speech_key = st.text_input(
|
azure_speech_key = st.text_input(
|
||||||
tr("Speech Key"), value=saved_azure_speech_key, type="password"
|
tr("Speech Key"),
|
||||||
|
value=saved_azure_speech_key,
|
||||||
|
type="password",
|
||||||
|
key="azure_speech_key_input",
|
||||||
)
|
)
|
||||||
config.azure["speech_region"] = azure_speech_region
|
config.azure["speech_region"] = azure_speech_region
|
||||||
config.azure["speech_key"] = azure_speech_key
|
config.azure["speech_key"] = azure_speech_key
|
||||||
@@ -649,15 +717,21 @@ with middle_panel:
|
|||||||
selected_index = st.selectbox(
|
selected_index = st.selectbox(
|
||||||
tr("Background Music"),
|
tr("Background Music"),
|
||||||
index=1,
|
index=1,
|
||||||
options=range(len(bgm_options)), # 使用索引作为内部选项值
|
options=range(
|
||||||
format_func=lambda x: bgm_options[x][0], # 显示给用户的是标签
|
len(bgm_options)
|
||||||
|
), # Use the index as the internal option value
|
||||||
|
format_func=lambda x: bgm_options[x][
|
||||||
|
0
|
||||||
|
], # The label is displayed to the user
|
||||||
)
|
)
|
||||||
# 获取选择的背景音乐类型
|
# Get the selected background music type
|
||||||
params.bgm_type = bgm_options[selected_index][1]
|
params.bgm_type = bgm_options[selected_index][1]
|
||||||
|
|
||||||
# 根据选择显示或隐藏组件
|
# Show or hide components based on the selection
|
||||||
if params.bgm_type == "custom":
|
if params.bgm_type == "custom":
|
||||||
custom_bgm_file = st.text_input(tr("Custom Background Music File"))
|
custom_bgm_file = st.text_input(
|
||||||
|
tr("Custom Background Music File"), key="custom_bgm_file_input"
|
||||||
|
)
|
||||||
if custom_bgm_file and os.path.exists(custom_bgm_file):
|
if custom_bgm_file and os.path.exists(custom_bgm_file):
|
||||||
params.bgm_file = custom_bgm_file
|
params.bgm_file = custom_bgm_file
|
||||||
# st.write(f":red[已选择自定义背景音乐]:**{custom_bgm_file}**")
|
# st.write(f":red[已选择自定义背景音乐]:**{custom_bgm_file}**")
|
||||||
@@ -697,7 +771,9 @@ with right_panel:
|
|||||||
|
|
||||||
if params.subtitle_position == "custom":
|
if params.subtitle_position == "custom":
|
||||||
custom_position = st.text_input(
|
custom_position = st.text_input(
|
||||||
tr("Custom Position (% from top)"), value="70.0"
|
tr("Custom Position (% from top)"),
|
||||||
|
value="70.0",
|
||||||
|
key="custom_position_input",
|
||||||
)
|
)
|
||||||
try:
|
try:
|
||||||
params.custom_position = float(custom_position)
|
params.custom_position = float(custom_position)
|
||||||
@@ -734,11 +810,6 @@ if start_button:
|
|||||||
scroll_to_bottom()
|
scroll_to_bottom()
|
||||||
st.stop()
|
st.stop()
|
||||||
|
|
||||||
if llm_provider != "g4f" and not config.app.get(f"{llm_provider}_api_key", ""):
|
|
||||||
st.error(tr("Please Enter the LLM API Key"))
|
|
||||||
scroll_to_bottom()
|
|
||||||
st.stop()
|
|
||||||
|
|
||||||
if params.video_source not in ["pexels", "pixabay", "local"]:
|
if params.video_source not in ["pexels", "pixabay", "local"]:
|
||||||
st.error(tr("Please Select a Valid Video Source"))
|
st.error(tr("Please Select a Valid Video Source"))
|
||||||
scroll_to_bottom()
|
scroll_to_bottom()
|
||||||
|
|||||||
@@ -1,6 +1,14 @@
 {
-  "Language": "German",
+  "Language": "Deutsch",
   "Translation": {
+    "Login Required": "Anmeldung erforderlich",
+    "Please login to access settings": "Bitte melden Sie sich an, um auf die Einstellungen zuzugreifen",
+    "Username": "Benutzername",
+    "Password": "Passwort",
+    "Login": "Anmelden",
+    "Login Error": "Anmeldefehler",
+    "Incorrect username or password": "Falscher Benutzername oder Passwort",
+    "Please enter your username and password": "Bitte geben Sie Ihren Benutzernamen und Ihr Passwort ein",
     "Video Script Settings": "**Drehbuch / Topic des Videos**",
     "Video Subject": "Worum soll es in dem Video gehen? (Geben Sie ein Keyword an, :red[Dank KI wird automatisch ein Drehbuch generieren])",
     "Script Language": "Welche Sprache soll zum Generieren von Drehbüchern verwendet werden? :red[KI generiert anhand dieses Begriffs das Drehbuch]",
@@ -10,12 +18,19 @@
     "Generate Video Keywords": "Klicken Sie, um KI zum Generieren zu verwenden [Video Keywords] basierend auf dem **Drehbuch**",
     "Please Enter the Video Subject": "Bitte geben Sie zuerst das Drehbuch an",
     "Generating Video Script and Keywords": "KI generiert ein Drehbuch und Schlüsselwörter...",
-    "Generating Video Keywords": "AI is generating video keywords...",
+    "Generating Video Keywords": "KI generiert Video-Schlüsselwörter...",
     "Video Keywords": "Video Schlüsselwörter (:blue[① Optional, KI generiert ② Verwende **, (Kommas)** zur Trennung der Wörter, in englischer Sprache])",
     "Video Settings": "**Video Einstellungen**",
     "Video Concat Mode": "Videoverkettungsmodus",
     "Random": "Zufällige Verkettung (empfohlen)",
     "Sequential": "Sequentielle Verkettung",
+    "Video Transition Mode": "Video Übergangsmodus",
+    "None": "Kein Übergang",
+    "Shuffle": "Zufällige Übergänge",
+    "FadeIn": "FadeIn",
+    "FadeOut": "FadeOut",
+    "SlideIn": "SlideIn",
+    "SlideOut": "SlideOut",
     "Video Ratio": "Video-Seitenverhältnis",
     "Portrait": "Portrait 9:16",
     "Landscape": "Landschaft 16:9",
@@ -23,8 +38,8 @@
     "Number of Videos Generated Simultaneously": "Anzahl der parallel generierten Videos",
     "Audio Settings": "**Audio Einstellungen**",
     "Speech Synthesis": "Sprachausgabe",
-    "Speech Region": "Region(:red[Required,[Get Region](https://portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/SpeechServices)])",
-    "Speech Key": "API Key(:red[Required,[Get API Key](https://portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/SpeechServices)])",
+    "Speech Region": "Region(:red[Erforderlich,[Region abrufen](https://portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/SpeechServices)])",
+    "Speech Key": "API-Schlüssel(:red[Erforderlich,[API-Schlüssel abrufen](https://portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/SpeechServices)])",
     "Speech Volume": "Lautstärke der Sprachausgabe",
     "Speech Rate": "Lesegeschwindigkeit (1,0 bedeutet 1x)",
     "Male": "Männlich",
@@ -54,26 +69,31 @@
     "Video Generation Completed": "Video erfolgreich generiert",
     "Video Generation Failed": "Video Generierung fehlgeschlagen",
     "You can download the generated video from the following links": "Sie können das generierte Video über die folgenden Links herunterladen",
-    "Basic Settings": "**Grunde Instellungen**",
-    "Pexels API Key": "Pexels API Key ([Get API Key](https://www.pexels.com/api/))",
-    "Pixabay API Key": "Pixabay API Key ([Get API Key](https://pixabay.com/api/docs/#api_search_videos))",
-    "Language": "Language",
-    "LLM Provider": "LLM Provider",
-    "API Key": "API Key (:red[Required])",
-    "Base Url": "Base Url",
-    "Model Name": "Model Name",
-    "Please Enter the LLM API Key": "Please Enter the **LLM API Key**",
-    "Please Enter the Pexels API Key": "Please Enter the **Pexels API Key**",
-    "Please Enter the Pixabay API Key": "Please Enter the **Pixabay API Key**",
-    "Get Help": "If you need help, or have any questions, you can join discord for help: https://harryai.cc",
-    "Video Source": "Video Source",
-    "TikTok": "TikTok (TikTok support is coming soon)",
-    "Bilibili": "Bilibili (Bilibili support is coming soon)",
-    "Xiaohongshu": "Xiaohongshu (Xiaohongshu support is coming soon)",
-    "Local file": "Local file",
-    "Play Voice": "Play Voice",
-    "Voice Example": "This is an example text for testing speech synthesis",
-    "Synthesizing Voice": "Synthesizing voice, please wait...",
-    "TTS Provider": "Select the voice synthesis provider"
+    "Basic Settings": "**Grundeinstellungen** (:blue[Klicken zum Erweitern])",
+    "Language": "Sprache",
+    "Pexels API Key": "Pexels API-Schlüssel ([API-Schlüssel abrufen](https://www.pexels.com/api/))",
+    "Pixabay API Key": "Pixabay API-Schlüssel ([API-Schlüssel abrufen](https://pixabay.com/api/docs/#api_search_videos))",
+    "LLM Provider": "KI-Modellanbieter",
+    "API Key": "API-Schlüssel (:red[Erforderlich])",
+    "Base Url": "Basis-URL",
+    "Account ID": "Konto-ID (Aus dem Cloudflare-Dashboard)",
+    "Model Name": "Modellname",
+    "Please Enter the LLM API Key": "Bitte geben Sie den **KI-Modell API-Schlüssel** ein",
+    "Please Enter the Pexels API Key": "Bitte geben Sie den **Pexels API-Schlüssel** ein",
+    "Please Enter the Pixabay API Key": "Bitte geben Sie den **Pixabay API-Schlüssel** ein",
+    "Get Help": "Wenn Sie Hilfe benötigen oder Fragen haben, können Sie dem Discord beitreten: https://harryai.cc",
+    "Video Source": "Videoquelle",
+    "TikTok": "TikTok (TikTok-Unterstützung kommt bald)",
+    "Bilibili": "Bilibili (Bilibili-Unterstützung kommt bald)",
+    "Xiaohongshu": "Xiaohongshu (Xiaohongshu-Unterstützung kommt bald)",
+    "Local file": "Lokale Datei",
+    "Play Voice": "Sprachausgabe abspielen",
+    "Voice Example": "Dies ist ein Beispieltext zum Testen der Sprachsynthese",
+    "Synthesizing Voice": "Sprachsynthese läuft, bitte warten...",
+    "TTS Provider": "Sprachsynthese-Anbieter auswählen",
+    "Hide Log": "Protokoll ausblenden",
+    "Hide Basic Settings": "Basis-Einstellungen ausblenden\n\nWenn diese Option deaktiviert ist, wird die Basis-Einstellungen-Leiste nicht auf der Seite angezeigt.\n\nWenn Sie sie erneut anzeigen möchten, setzen Sie `hide_config = false` in `config.toml`",
+    "LLM Settings": "**LLM-Einstellungen**",
+    "Video Source Settings": "**Videoquellen-Einstellungen**"
   }
 }
@@ -1,6 +1,14 @@
 {
   "Language": "English",
   "Translation": {
+    "Login Required": "Login Required",
+    "Please login to access settings": "Please login to access settings",
+    "Username": "Username",
+    "Password": "Password",
+    "Login": "Login",
+    "Login Error": "Login Error",
+    "Incorrect username or password": "Incorrect username or password",
+    "Please enter your username and password": "Please enter your username and password",
     "Video Script Settings": "**Video Script Settings**",
     "Video Subject": "Video Subject (Provide a keyword, :red[AI will automatically generate] video script)",
     "Script Language": "Language for Generating Video Script (AI will automatically output based on the language of your subject)",
@@ -16,6 +24,13 @@
     "Video Concat Mode": "Video Concatenation Mode",
     "Random": "Random Concatenation (Recommended)",
     "Sequential": "Sequential Concatenation",
+    "Video Transition Mode": "Video Transition Mode",
+    "None": "None",
+    "Shuffle": "Shuffle",
+    "FadeIn": "FadeIn",
+    "FadeOut": "FadeOut",
+    "SlideIn": "SlideIn",
+    "SlideOut": "SlideOut",
     "Video Ratio": "Video Aspect Ratio",
     "Portrait": "Portrait 9:16",
     "Landscape": "Landscape 16:9",
@@ -76,6 +91,9 @@
     "Voice Example": "This is an example text for testing speech synthesis",
     "Synthesizing Voice": "Synthesizing voice, please wait...",
     "TTS Provider": "Select the voice synthesis provider",
-    "Hide Log": "Hide Log"
+    "Hide Log": "Hide Log",
+    "Hide Basic Settings": "Hide Basic Settings\n\nHidden, the basic settings panel will not be displayed on the page.\n\nIf you need to display it again, please set `hide_config = false` in `config.toml`",
+    "LLM Settings": "**LLM Settings**",
+    "Video Source Settings": "**Video Source Settings**"
   }
 }
webui/i18n/pt.json (new file, 99 lines)
@@ -0,0 +1,99 @@
+{
+  "Language": "Português Brasileiro",
+  "Translation": {
+    "Login Required": "Login Necessário",
+    "Please login to access settings": "Por favor, faça login para acessar as configurações",
+    "Username": "Nome de usuário",
+    "Password": "Senha",
+    "Login": "Entrar",
+    "Login Error": "Erro de Login",
+    "Incorrect username or password": "Nome de usuário ou senha incorretos",
+    "Please enter your username and password": "Por favor, digite seu nome de usuário e senha",
+    "Video Script Settings": "**Configurações do Roteiro do Vídeo**",
+    "Video Subject": "Tema do Vídeo (Forneça uma palavra-chave, :red[a IA irá gerar automaticamente] o roteiro do vídeo)",
+    "Script Language": "Idioma para Gerar o Roteiro do Vídeo (a IA irá gerar automaticamente com base no idioma do seu tema)",
+    "Generate Video Script and Keywords": "Clique para usar a IA para gerar o [Roteiro do Vídeo] e as [Palavras-chave do Vídeo] com base no **tema**",
+    "Auto Detect": "Detectar Automaticamente",
+    "Video Script": "Roteiro do Vídeo (:blue[① Opcional, gerado pela IA ② Pontuação adequada ajuda na geração de legendas])",
+    "Generate Video Keywords": "Clique para usar a IA para gerar [Palavras-chave do Vídeo] com base no **roteiro**",
+    "Please Enter the Video Subject": "Por favor, insira o Roteiro do Vídeo primeiro",
+    "Generating Video Script and Keywords": "A IA está gerando o roteiro do vídeo e as palavras-chave...",
+    "Generating Video Keywords": "A IA está gerando as palavras-chave do vídeo...",
+    "Video Keywords": "Palavras-chave do Vídeo (:blue[① Opcional, gerado pela IA ② Use **vírgulas em inglês** para separar, somente em inglês])",
+    "Video Settings": "**Configurações do Vídeo**",
+    "Video Concat Mode": "Modo de Concatenação de Vídeo",
+    "Random": "Concatenação Aleatória (Recomendado)",
+    "Sequential": "Concatenação Sequencial",
+    "Video Transition Mode": "Modo de Transição de Vídeo",
+    "None": "Nenhuma Transição",
+    "Shuffle": "Transição Aleatória",
+    "FadeIn": "FadeIn",
+    "FadeOut": "FadeOut",
+    "SlideIn": "SlideIn",
+    "SlideOut": "SlideOut",
+    "Video Ratio": "Proporção do Vídeo",
+    "Portrait": "Retrato 9:16",
+    "Landscape": "Paisagem 16:9",
+    "Clip Duration": "Duração Máxima dos Clipes de Vídeo (segundos)",
+    "Number of Videos Generated Simultaneously": "Número de Vídeos Gerados Simultaneamente",
+    "Audio Settings": "**Configurações de Áudio**",
+    "Speech Synthesis": "Voz de Síntese de Fala",
+    "Speech Region": "Região(:red[Obrigatório,[Obter Região](https://portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/SpeechServices)])",
+    "Speech Key": "Chave da API(:red[Obrigatório,[Obter Chave da API](https://portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/SpeechServices)])",
+    "Speech Volume": "Volume da Fala (1.0 representa 100%)",
+    "Speech Rate": "Velocidade da Fala (1.0 significa velocidade 1x)",
+    "Male": "Masculino",
+    "Female": "Feminino",
+    "Background Music": "Música de Fundo",
+    "No Background Music": "Sem Música de Fundo",
+    "Random Background Music": "Música de Fundo Aleatória",
+    "Custom Background Music": "Música de Fundo Personalizada",
+    "Custom Background Music File": "Por favor, insira o caminho do arquivo para a música de fundo personalizada:",
+    "Background Music Volume": "Volume da Música de Fundo (0.2 representa 20%, a música de fundo não deve ser muito alta)",
+    "Subtitle Settings": "**Configurações de Legendas**",
+    "Enable Subtitles": "Ativar Legendas (Se desmarcado, as configurações abaixo não terão efeito)",
+    "Font": "Fonte da Legenda",
+    "Position": "Posição da Legenda",
+    "Top": "Superior",
+    "Center": "Centralizar",
+    "Bottom": "Inferior (Recomendado)",
+    "Custom": "Posição personalizada (70, indicando 70% abaixo do topo)",
+    "Font Size": "Tamanho da Fonte da Legenda",
+    "Font Color": "Cor da Fonte da Legenda",
+    "Stroke Color": "Cor do Contorno da Legenda",
+    "Stroke Width": "Largura do Contorno da Legenda",
+    "Generate Video": "Gerar Vídeo",
+    "Video Script and Subject Cannot Both Be Empty": "O Tema do Vídeo e o Roteiro do Vídeo não podem estar ambos vazios",
+    "Generating Video": "Gerando vídeo, por favor aguarde...",
+    "Start Generating Video": "Começar a Gerar Vídeo",
+    "Video Generation Completed": "Geração do Vídeo Concluída",
+    "Video Generation Failed": "Falha na Geração do Vídeo",
+    "You can download the generated video from the following links": "Você pode baixar o vídeo gerado a partir dos seguintes links",
+    "Basic Settings": "**Configurações Básicas** (:blue[Clique para expandir])",
+    "Language": "Idioma",
+    "Pexels API Key": "Chave da API do Pexels ([Obter Chave da API](https://www.pexels.com/api/))",
+    "Pixabay API Key": "Chave da API do Pixabay ([Obter Chave da API](https://pixabay.com/api/docs/#api_search_videos))",
+    "LLM Provider": "Provedor LLM",
+    "API Key": "Chave da API (:red[Obrigatório])",
+    "Base Url": "URL Base",
+    "Account ID": "ID da Conta (Obter no painel do Cloudflare)",
+    "Model Name": "Nome do Modelo",
+    "Please Enter the LLM API Key": "Por favor, insira a **Chave da API LLM**",
+    "Please Enter the Pexels API Key": "Por favor, insira a **Chave da API do Pexels**",
+    "Please Enter the Pixabay API Key": "Por favor, insira a **Chave da API do Pixabay**",
+    "Get Help": "Se precisar de ajuda ou tiver alguma dúvida, você pode entrar no discord para obter ajuda: https://harryai.cc",
+    "Video Source": "Fonte do Vídeo",
+    "TikTok": "TikTok (Suporte para TikTok em breve)",
+    "Bilibili": "Bilibili (Suporte para Bilibili em breve)",
+    "Xiaohongshu": "Xiaohongshu (Suporte para Xiaohongshu em breve)",
+    "Local file": "Arquivo local",
+    "Play Voice": "Reproduzir Voz",
+    "Voice Example": "Este é um exemplo de texto para testar a síntese de fala",
+    "Synthesizing Voice": "Sintetizando voz, por favor aguarde...",
+    "TTS Provider": "Selecione o provedor de síntese de voz",
+    "Hide Log": "Ocultar Log",
+    "Hide Basic Settings": "Ocultar Configurações Básicas\n\nOculto, o painel de configurações básicas não será exibido na página.\n\nSe precisar exibi-lo novamente, defina `hide_config = false` em `config.toml`",
+    "LLM Settings": "**Configurações do LLM**",
+    "Video Source Settings": "**Configurações da Fonte do Vídeo**"
+  }
+}
@@ -1,6 +1,14 @@
 {
   "Language": "Tiếng Việt",
   "Translation": {
+    "Login Required": "Yêu cầu đăng nhập",
+    "Please login to access settings": "Vui lòng đăng nhập để truy cập cài đặt",
+    "Username": "Tên đăng nhập",
+    "Password": "Mật khẩu",
+    "Login": "Đăng nhập",
+    "Login Error": "Lỗi đăng nhập",
+    "Incorrect username or password": "Tên đăng nhập hoặc mật khẩu không chính xác",
+    "Please enter your username and password": "Vui lòng nhập tên đăng nhập và mật khẩu của bạn",
     "Video Script Settings": "**Cài Đặt Kịch Bản Video**",
     "Video Subject": "Chủ Đề Video (Cung cấp một từ khóa, :red[AI sẽ tự động tạo ra] kịch bản video)",
     "Script Language": "Ngôn Ngữ cho Việc Tạo Kịch Bản Video (AI sẽ tự động xuất ra dựa trên ngôn ngữ của chủ đề của bạn)",
@@ -16,6 +24,13 @@
     "Video Concat Mode": "Chế Độ Nối Video",
     "Random": "Nối Ngẫu Nhiên (Được Khuyến Nghị)",
     "Sequential": "Nối Theo Thứ Tự",
+    "Video Transition Mode": "Chế Độ Chuyển Đổi Video",
+    "None": "Không Có Chuyển Đổi",
+    "Shuffle": "Chuyển Đổi Ngẫu Nhiên",
+    "FadeIn": "FadeIn",
+    "FadeOut": "FadeOut",
+    "SlideIn": "SlideIn",
+    "SlideOut": "SlideOut",
     "Video Ratio": "Tỷ Lệ Khung Hình Video",
     "Portrait": "Dọc 9:16",
     "Landscape": "Ngang 16:9",
@@ -54,10 +69,10 @@
     "Video Generation Completed": "Hoàn Tất Tạo Video",
     "Video Generation Failed": "Tạo Video Thất Bại",
     "You can download the generated video from the following links": "Bạn có thể tải video được tạo ra từ các liên kết sau",
-    "Pexels API Key": "Khóa API Pexels ([Lấy Khóa API](https://www.pexels.com/api/))",
-    "Pixabay API Key": "Pixabay API Key ([Get API Key](https://pixabay.com/api/docs/#api_search_videos))",
     "Basic Settings": "**Cài Đặt Cơ Bản** (:blue[Nhấp để mở rộng])",
     "Language": "Ngôn Ngữ",
+    "Pexels API Key": "Khóa API Pexels ([Lấy Khóa API](https://www.pexels.com/api/))",
+    "Pixabay API Key": "Khóa API Pixabay ([Lấy Khóa API](https://pixabay.com/api/docs/#api_search_videos))",
     "LLM Provider": "Nhà Cung Cấp LLM",
     "API Key": "Khóa API (:red[Bắt Buộc])",
     "Base Url": "Url Cơ Bản",
@@ -65,16 +80,20 @@
     "Model Name": "Tên Mô Hình",
     "Please Enter the LLM API Key": "Vui lòng Nhập **Khóa API LLM**",
     "Please Enter the Pexels API Key": "Vui lòng Nhập **Khóa API Pexels**",
-    "Please Enter the Pixabay API Key": "Vui lòng Nhập **Pixabay API Key**",
+    "Please Enter the Pixabay API Key": "Vui lòng Nhập **Khóa API Pixabay**",
     "Get Help": "Nếu bạn cần giúp đỡ hoặc có bất kỳ câu hỏi nào, bạn có thể tham gia discord để được giúp đỡ: https://harryai.cc",
-    "Video Source": "Video Source",
-    "TikTok": "TikTok (TikTok support is coming soon)",
-    "Bilibili": "Bilibili (Bilibili support is coming soon)",
-    "Xiaohongshu": "Xiaohongshu (Xiaohongshu support is coming soon)",
-    "Local file": "Local file",
-    "Play Voice": "Play Voice",
-    "Voice Example": "This is an example text for testing speech synthesis",
-    "Synthesizing Voice": "Synthesizing voice, please wait...",
-    "TTS Provider": "Select the voice synthesis provider"
+    "Video Source": "Nguồn Video",
+    "TikTok": "TikTok (Hỗ trợ TikTok sắp ra mắt)",
+    "Bilibili": "Bilibili (Hỗ trợ Bilibili sắp ra mắt)",
+    "Xiaohongshu": "Xiaohongshu (Hỗ trợ Xiaohongshu sắp ra mắt)",
+    "Local file": "Tệp cục bộ",
+    "Play Voice": "Phát Giọng Nói",
+    "Voice Example": "Đây là văn bản mẫu để kiểm tra tổng hợp giọng nói",
+    "Synthesizing Voice": "Đang tổng hợp giọng nói, vui lòng đợi...",
+    "TTS Provider": "Chọn nhà cung cấp tổng hợp giọng nói",
+    "Hide Log": "Ẩn Nhật Ký",
+    "Hide Basic Settings": "Ẩn Cài Đặt Cơ Bản\n\nẨn, thanh cài đặt cơ bản sẽ không hiển thị trên trang web.\n\nNếu bạn muốn hiển thị lại, vui lòng đặt `hide_config = false` trong `config.toml`",
+    "LLM Settings": "**Cài Đặt LLM**",
+    "Video Source Settings": "**Cài Đặt Nguồn Video**"
   }
 }
@@ -1,6 +1,14 @@
 {
   "Language": "简体中文",
   "Translation": {
+    "Login Required": "需要登录",
+    "Please login to access settings": "请登录后访问配置设置 (:gray[默认用户名: admin, 密码: admin, 您可以在 config.toml 中修改])",
+    "Username": "用户名",
+    "Password": "密码",
+    "Login": "登录",
+    "Login Error": "登录错误",
+    "Incorrect username or password": "用户名或密码不正确",
+    "Please enter your username and password": "请输入用户名和密码",
     "Video Script Settings": "**文案设置**",
     "Video Subject": "视频主题(给定一个关键词,:red[AI自动生成]视频文案)",
     "Script Language": "生成视频脚本的语言(一般情况AI会自动根据你输入的主题语言输出)",
@@ -16,6 +24,13 @@
     "Video Concat Mode": "视频拼接模式",
     "Random": "随机拼接(推荐)",
     "Sequential": "顺序拼接",
+    "Video Transition Mode": "视频转场模式",
+    "None": "无转场",
+    "Shuffle": "随机转场",
+    "FadeIn": "渐入",
+    "FadeOut": "渐出",
+    "SlideIn": "滑动入",
+    "SlideOut": "滑动出",
     "Video Ratio": "视频比例",
     "Portrait": "竖屏 9:16(抖音视频)",
     "Landscape": "横屏 16:9(西瓜视频)",
@@ -76,6 +91,9 @@
     "Voice Example": "这是一段测试语音合成的示例文本",
     "Synthesizing Voice": "语音合成中,请稍候...",
     "TTS Provider": "语音合成提供商",
-    "Hide Log": "隐藏日志"
+    "Hide Log": "隐藏日志",
+    "Hide Basic Settings": "隐藏基础设置\n\n隐藏后,基础设置面板将不会显示在页面中。\n\n如需要再次显示,请在 `config.toml` 中设置 `hide_config = false`",
+    "LLM Settings": "**大模型设置**",
+    "Video Source Settings": "**视频源设置**"
   }
 }
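Each locale file above follows the same shape: a top-level `"Language"` display name plus a `"Translation"` map from English UI keys to localized strings. As a minimal sketch of how such a file might be consumed (the lookup helper and fallback behavior below are assumptions for illustration, not code from this repository):

```python
import json

# A tiny locale document mirroring the structure shown in the diff above.
locale_json = """
{
  "Language": "Tiếng Việt",
  "Translation": {
    "Video Source": "Nguồn Video",
    "Local file": "Tệp cục bộ"
  }
}
"""

locale = json.loads(locale_json)
translations = locale["Translation"]

def tr(key: str) -> str:
    # Fall back to the English key when no translation exists --
    # a common i18n pattern; whether MoneyPrinterTurbo does exactly
    # this is an assumption.
    return translations.get(key, key)

print(tr("Video Source"))  # Nguồn Video
print(tr("Play Voice"))    # Play Voice (missing key falls back to English)
```

Keeping every key in English in every locale file, as these diffs do, is what makes this kind of key-based lookup work across languages.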