Translate texts in manga/images.

中文说明 (Chinese readme) | Change Log | Join us on Discord: https://discord.gg/Ak8APNy4vb

Some manga/images will never be translated, therefore this project was born. Primarily designed for translating Japanese text, but it also supports Chinese, English and Korean. Supports inpainting and text rendering. Successor to https://github.com/PatchyVideo/MMDOCR-HighPerformance

This is a hobby project and you are welcome to contribute! Currently it is only a simple demo; many imperfections exist, and we need your support to make this project better!

GPU servers are not cheap, please consider donating to us.

Official Demo (by zyddnys): https://touhou.ai/imgtrans/
Browser Userscript (by QiroNT): https://greasyfork.org/scripts/437569
Sample images can be found here
```bash
# First, you need to have Python (>= 3.8) installed on your system.
$ python --version
Python 3.10.6

# Clone this repo
$ git clone https://github.com/zyddnys/manga-image-translator.git

# Install the dependencies
$ pip install -r requirements.txt
$ pip install git+https://github.com/lucasb-eyer/pydensecrf.git
```
The models will be downloaded into ./models at runtime.
Some pip dependencies will not compile without Microsoft C++ Build Tools.

If you have trouble installing pydensecrf with the command above, you can download a pre-compiled wheel matching your Python version from https://www.lfd.uci.edu/~gohlke/pythonlibs/#_pydensecrf and install it with pip.
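To pick the right pre-compiled wheel, you need the CPython tag of your interpreter (e.g. `cp310` for Python 3.10). A convenience one-liner for printing it (not part of this project):

```shell
# Print the CPython wheel tag for the current interpreter, e.g. "cp310"
python3 -c "import sys; print('cp%d%d' % sys.version_info[:2])"
```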
```bash
# `--use-cuda` is optional; if you have a compatible NVIDIA GPU, you can use it.
# use `--use-cuda-limited` to defer VRAM-expensive language translation to the CPU
# use `--inpainter=none` to disable inpainting
# use `--translator=<translator>` to specify a translator
# use `--translator=none` if you only want to use inpainting (blank bubbles)
# use `--target-lang <language_code>` to specify a target language
# replace <path_to_image_file> with the path to the image file
$ python translate_demo.py --verbose --use-cuda --translator=google -l ENG -i <path_to_image_file>
# the result can be found in `result/`
```
```bash
# same options as above
# use `--mode batch` to enable batch translation
# replace <path_to_image_folder> with the path to the image folder
$ python translate_demo.py --verbose --mode batch --use-cuda --translator=google -l ENG -i <path_to_image_folder>
# results can be found in `<path_to_image_folder>-translated/`
```
```bash
# same options as above
# use `--mode web` to start a web server
$ python translate_demo.py --verbose --mode web --use-cuda
# the demo will be serving on http://127.0.0.1:5003
```
Manual translation replaces machine translation with human translators. A basic manual translation demo can be found at http://127.0.0.1:5003/manual when using web mode.
API
The demo provides two modes of translation service: synchronous mode and asynchronous mode.
In synchronous mode, your HTTP POST request finishes once the translation task is finished.
In asynchronous mode, your HTTP POST request responds immediately with a `task_id`; you can use this `task_id` to poll the translation task state.
Synchronous mode:

1. POST a form request with form data `file:<content-of-image>` to http://127.0.0.1:5003/run
2. Wait for the response.
3. Use the returned `task_id` to find the translation result in the `result/` directory, e.g. by using Nginx to expose `result/`.

Asynchronous mode:

1. POST a form request with form data `file:<content-of-image>` to http://127.0.0.1:5003/submit
2. Acquire the translation `task_id` from the response.
3. Poll for the translation task state by posting a JSON object `{"taskid": <task-id>}` to http://127.0.0.1:5003/task-state
4. The translation is finished when the returned state is `finished`, `error` or `error-lang`.
5. Find the translation result in the `result/` directory, e.g. by using Nginx to expose `result/`.
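The asynchronous flow can be sketched as a small Python client. This is a minimal sketch using only the standard library; the endpoints, the `taskid` request field and the terminal state names come from this README, while the `"state"` response field name is an assumption (check `web_main.py` for the actual response shape):

```python
import json
import time
import urllib.request

# Terminal task states, as documented above.
TERMINAL_STATES = {"finished", "error", "error-lang"}

def is_terminal(state: str) -> bool:
    """A task is done once it reaches one of the terminal states."""
    return state in TERMINAL_STATES

def poll_task(task_id: str, base: str = "http://127.0.0.1:5003",
              interval: float = 1.0) -> str:
    """Poll /task-state until the task reaches a terminal state.

    Assumes the response is a JSON object whose "state" field holds
    the task state -- that field name is an assumption.
    """
    body = json.dumps({"taskid": task_id}).encode()
    while True:
        req = urllib.request.Request(
            f"{base}/task-state", data=body,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            state = json.load(resp).get("state", "")
        if is_terminal(state):
            return state
        time.sleep(interval)
```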
POST a form request with form data `file:<content-of-image>` to http://127.0.0.1:5003/manual-translate and wait for the response.
You will obtain a JSON response like this:
```json
{
  "task_id": "12c779c9431f954971cae720eb104499",
  "status": "pending",
  "trans_result": [
    {
      "s": "☆上司来ちゃった……",
      "t": ""
    }
  ]
}
```
Fill in translated texts:
```json
{
  "task_id": "12c779c9431f954971cae720eb104499",
  "status": "pending",
  "trans_result": [
    {
      "s": "☆上司来ちゃった……",
      "t": "☆Boss is here..."
    }
  ]
}
```
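Filling in the `t` fields can also be done programmatically. A small sketch; the JSON shape follows the example above, while the helper name and the translation lookup are purely illustrative:

```python
import json

def fill_translations(task_json: str, translations: dict) -> str:
    """Fill the empty "t" fields of a manual-translation task.

    `translations` maps source strings ("s") to translated strings.
    Entries without a known translation are left unchanged.
    """
    task = json.loads(task_json)
    for pair in task.get("trans_result", []):
        pair["t"] = translations.get(pair["s"], pair["t"])
    return json.dumps(task, ensure_ascii=False)

# Example with the response shown above:
raw = ('{"task_id": "12c779c9431f954971cae720eb104499", '
       '"status": "pending", '
       '"trans_result": [{"s": "☆上司来ちゃった……", "t": ""}]}')
filled = fill_translations(raw, {"☆上司来ちゃった……": "☆Boss is here..."})
```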
POST the translated JSON to http://127.0.0.1:5003/post-translation-result and wait for the response.
You can then find the translation result in the `result/` directory, e.g. by using Nginx to expose `result/`.
| Name | API Key | Offline | Docker | Note |
|---|---|---|---|---|
| | | | ✔️ | |
| youdao | ✔️ | | ✔️ | Requires YOUDAO_APP_KEY and YOUDAO_SECRET_KEY |
| baidu | ✔️ | | ✔️ | Requires BAIDU_APP_ID and BAIDU_SECRET_KEY |
| deepl | ✔️ | | ✔️ | Requires DEEPL_AUTH_KEY |
| papago | | | ✔️ | |
| offline | | ✔️ | ✔️ | Chooses the most suitable offline translator for the language |
| offline_big | | ✔️ | | |
| nllb | | ✔️ | ✔️ | |
| nllb_big | | ✔️ | | |
| sugoi | | ✔️ | ✔️ | |
| sugoi_big | | ✔️ | | |
| none | | ✔️ | ✔️ | Translate to empty texts |
| original | | ✔️ | ✔️ | Keep original texts |
The following language codes are used by the `--target-lang` argument:
CHS: Chinese (Simplified)
CHT: Chinese (Traditional)
CSY: Czech
NLD: Dutch
ENG: English
FRA: French
DEU: German
HUN: Hungarian
ITA: Italian
JPN: Japanese
KOR: Korean
PLK: Polish
PTB: Portuguese (Brazil)
ROM: Romanian
RUS: Russian
ESP: Spanish
TRK: Turkish
UKR: Ukrainian
VIN: Vietnamese
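When scripting around `translate_demo.py`, it can help to validate the target language up front instead of failing mid-run. A small sketch built from the codes listed above; the helper itself is illustrative glue, not part of the project:

```python
# Language codes accepted by `--target-lang`, as listed above.
TARGET_LANGS = {
    "CHS": "Chinese (Simplified)", "CHT": "Chinese (Traditional)",
    "CSY": "Czech", "NLD": "Dutch", "ENG": "English", "FRA": "French",
    "DEU": "German", "HUN": "Hungarian", "ITA": "Italian",
    "JPN": "Japanese", "KOR": "Korean", "PLK": "Polish",
    "PTB": "Portuguese (Brazil)", "ROM": "Romanian", "RUS": "Russian",
    "ESP": "Spanish", "TRK": "Turkish", "UKR": "Ukrainian",
    "VIN": "Vietnamese",
}

def check_target_lang(code: str) -> str:
    """Return the normalized (upper-case) code, or raise with the valid options."""
    code = code.upper()
    if code not in TARGET_LANGS:
        raise ValueError(f"unknown target language {code!r}; "
                         f"expected one of {sorted(TARGET_LANGS)}")
    return code
```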
Requirements: Docker (and optionally Docker Compose, if you want to use the files in the demo/doc folder).

This project has docker support under the zyddnys/manga-image-translator:main image. This docker image contains all required dependencies and models for the project. Note that the image is fairly large (~15GB).
The web server can be hosted using (for CPU):

```bash
docker run -p 5003:5003 -v result:/app/result --ipc=host --rm zyddnys/manga-image-translator:main -l ENG --manga2eng --verbose --log-web --mode web --host=0.0.0.0 --port=5003
```

or:

```bash
docker-compose -f demo/doc/docker-compose-web-with-cpu.yml up
```

depending on which you prefer. The web server should start on port 5003, and images should become available in the /result folder.
To use docker with the CLI (i.e. in batch mode):

```bash
docker run -v <targetFolder>:/app/<targetFolder> -v <targetFolder>-translated:/app/<targetFolder>-translated --ipc=host --rm zyddnys/manga-image-translator:main --mode=batch -i=/app/<targetFolder> <cli flags>
```

Note: if you need to reference files on your host machine, you will need to mount them as volumes into the /app folder inside the container. Paths for the CLI must use the internal docker path /app/... instead of the paths on your host machine.
Some translation services require API keys to function; to set these, pass them as env vars into the docker container. For example:

```bash
docker run --env="DEEPL_AUTH_KEY=xxx" --ipc=host --rm zyddnys/manga-image-translator:main <cli flags>
```
To use with a supported GPU, please first read the initial Docker section above. There are some special dependencies you will need to set up (the NVIDIA container runtime, which provides the `--gpus` flag). Then run the container with the following flags set:

```bash
docker run ... --gpus=all ... zyddnys/manga-image-translator:main ... --use-cuda
```

or (for the web server + GPU):

```bash
docker-compose -f demo/doc/docker-compose-web-with-gpu.yml up
```
To build the docker image locally, run (you will need make on your machine):

```bash
make build-image
```

Then, to test the built image, run:

```bash
make run-web-server
```
A list of what needs to be done next; you're welcome to contribute.
The following samples are from the original version; they do not represent the current main branch version.
(Sample images were shown here as original/translated pairs.)