Hugging Face offers several options for loading datasets. When loading a local image dataset for ControlNet, it is important to consider the dataset structure, the file paths, and compatibility with Hugging Face's data processing tools.
Assume you have already created your conditioning images and have the following folder structure:
my_dataset/
├── README.md
└── data/
    ├── captions.jsonl
    ├── conditioning_images
    │   ├── 00001.jpg
    │   └── 00002.jpg
    └── images
        ├── 00001.jpg
        └── 00002.jpg
In this structure, the conditioning_images folder stores your conditioning images, while the images folder contains the target images for ControlNet. The captions.jsonl file contains the captions linked to these images.
{"image": "images/00001.jpg", "text": "This is the caption of the first image."} {"image": "images/00002.jpg", "text": "This is the caption of the second image."}
Note
The captions file (and the metadata file below) can also be a CSV file. However, if you choose CSV, pay attention to the value separator, since the captions may contain commas, which can cause parsing problems.
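As an illustration of that caveat, here is a hedged sketch using Python's csv module, which quotes fields containing commas so they survive parsing; the file name metadata.csv and the example row are assumptions for demonstration only:

import csv

# Hypothetical rows; csv.DictWriter quotes fields that contain commas,
# so a caption like "This is the caption, with a comma." stays intact.
rows = [
    {
        "image": "images/00001.jpg",
        "conditioning_image": "conditioning_images/00001.jpg",
        "text": "This is the caption, with a comma.",
    },
]

with open("my_dataset/data/metadata.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["image", "conditioning_image", "text"])
    writer.writeheader()
    writer.writerows(rows)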
A metadata file is a good way to provide additional information about the dataset. It can contain various kinds of data, such as bounding boxes, categories, text, or, in our case, the path to the conditioning image.
Let's create the metadata.jsonl file:
import json
from pathlib import Path


def create_metadata(data_dir):
    """Build metadata.jsonl from captions.jsonl, adding the conditioning image path."""
    metadata = []
    try:
        with open(f"{data_dir}/captions.jsonl", "r") as f:
            for line in f:
                data = json.loads(line)
                file_name = Path(data["image"]).name
                metadata.append(
                    {
                        "image": data["image"],
                        "conditioning_image": f"conditioning_images/{file_name}",
                        "text": data["text"],
                    }
                )

        with open(f"{data_dir}/metadata.jsonl", "w") as f:
            for line in metadata:
                f.write(json.dumps(line) + "\n")
    except (FileNotFoundError, json.JSONDecodeError) as e:
        print(f"Error processing data: {e}")


# Example usage:
data_dir = "my_dataset/data"
create_metadata(data_dir)
This will create a metadata.jsonl file containing all the information our ControlNet needs. Each line in the file corresponds to an image, a conditioning image, and the associated text caption.
{"image": "images/00001.jpg", "conditioning_image": "conditioning_images/00001.jpg", "text": "This is the caption of the first image."} {"image": "images/00002.jpg", "conditioning_image": "conditioning_images/00002.jpg", "text": "This is the caption of the second image."}
After creating the metadata.jsonl file, your file structure should look like this:
my_dataset/
├── README.md
└── data/
    ├── captions.jsonl
    ├── metadata.jsonl
    ├── conditioning_images
    │   ├── 00001.jpg
    │   └── 00002.jpg
    └── images
        ├── 00001.jpg
        └── 00002.jpg
Finally, we have to create a loading script that handles all the data in the metadata.jsonl file. The script should live in the same directory as the dataset and should have the same name as the dataset folder.
Your directory structure should look like this:
my_dataset/
├── README.md
├── my_dataset.py
└── data/
    ├── captions.jsonl
    ├── metadata.jsonl
    ├── conditioning_images
    │   ├── 00001.jpg
    │   └── 00002.jpg
    └── images
        ├── 00001.jpg
        └── 00002.jpg
For the script, we need to implement a class that inherits from GeneratorBasedBuilder and includes the following three methods:
import datasets


class MyDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        ...

    def _split_generators(self, dl_manager):
        ...

    def _generate_examples(self, metadata_path, images_dir, conditioning_images_dir):
        ...
Adding dataset metadata
There are many options for specifying information about the dataset, but the most important ones are:
# Global variables
_DESCRIPTION = "TODO"
_HOMEPAGE = "TODO"
_LICENSE = "TODO"
_CITATION = "TODO"

_FEATURES = datasets.Features(
    {
        "image": datasets.Image(),
        "conditioning_image": datasets.Image(),
        "text": datasets.Value("string"),
    },
)
As you can see above, I have set some of the variables to "TODO". These options are purely informational and do not affect loading.
def _info(self):
    return datasets.DatasetInfo(
        description=_DESCRIPTION,
        features=_FEATURES,
        supervised_keys=("conditioning_image", "text"),
        homepage=_HOMEPAGE,
        license=_LICENSE,
        citation=_CITATION,
    )
Defining the dataset splits
The dl_manager is normally used to download a dataset from a Hugging Face repository, but here we use it to get the path to the data directory passed to the load_dataset function.
This is where we define the local paths to our data.
Note
If you chose different names for your folder structure, you may need to adjust the metadata_path, images_dir, and conditioning_images_dir variables.
def _split_generators(self, dl_manager):
    base_path = Path(dl_manager._base_path).resolve()
    metadata_path = base_path / "data" / "metadata.jsonl"
    images_dir = base_path / "data"
    conditioning_images_dir = base_path / "data"

    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN,
            # These kwargs will be passed to _generate_examples
            gen_kwargs={
                "metadata_path": str(metadata_path),
                "images_dir": str(images_dir),
                "conditioning_images_dir": str(conditioning_images_dir),
            },
        ),
    ]
The last method loads the metadata.jsonl file and generates the images along with their associated conditioning images and text captions.
@staticmethod
def load_jsonl(path):
    """Generator to load jsonl file."""
    with open(path, "r") as f:
        for line in f:
            yield json.loads(line)

def _generate_examples(self, metadata_path, images_dir, conditioning_images_dir):
    for row in self.load_jsonl(metadata_path):
        text = row["text"]

        image_path = row["image"]
        image_path = os.path.join(images_dir, image_path)
        image = open(image_path, "rb").read()

        conditioning_image_path = row["conditioning_image"]
        conditioning_image_path = os.path.join(
            conditioning_images_dir, row["conditioning_image"]
        )
        conditioning_image = open(conditioning_image_path, "rb").read()

        yield row["image"], {
            "text": text,
            "image": {
                "path": image_path,
                "bytes": image,
            },
            "conditioning_image": {
                "path": conditioning_image_path,
                "bytes": conditioning_image,
            },
        }
By following these steps, you can load a ControlNet dataset from a local path.
from datasets import load_dataset

# with the loading script, we can load the dataset
ds = load_dataset("my_dataset")

# (optional)
# pass trust_remote_code=True to avoid the warning about custom code
# ds = load_dataset("my_dataset", trust_remote_code=True)
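As a quick sanity check (assuming the loading script above), you can inspect the first training example; the datasets.Image() feature decodes the stored bytes into PIL images on access:

# Inspect the first training example; "image" and "conditioning_image"
# are decoded to PIL images by the datasets.Image() feature.
example = ds["train"][0]
print(example["text"])
print(example["image"].size)
print(example["conditioning_image"].size)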
If you have any questions, feel free to leave a comment below.
Full code of the loading script:
import os
import json
import datasets
from pathlib import Path

_VERSION = datasets.Version("0.0.2")

_DESCRIPTION = "TODO"
_HOMEPAGE = "TODO"
_LICENSE = "TODO"
_CITATION = "TODO"

_FEATURES = datasets.Features(
    {
        "image": datasets.Image(),
        "conditioning_image": datasets.Image(),
        "text": datasets.Value("string"),
    },
)

_DEFAULT_CONFIG = datasets.BuilderConfig(name="default", version=_VERSION)


class MyDataset(datasets.GeneratorBasedBuilder):
    BUILDER_CONFIGS = [_DEFAULT_CONFIG]
    DEFAULT_CONFIG_NAME = "default"

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=_FEATURES,
            supervised_keys=("conditioning_image", "text"),
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        base_path = Path(dl_manager._base_path)
        metadata_path = base_path / "data" / "metadata.jsonl"
        images_dir = base_path / "data"
        conditioning_images_dir = base_path / "data"

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "metadata_path": metadata_path,
                    "images_dir": images_dir,
                    "conditioning_images_dir": conditioning_images_dir,
                },
            ),
        ]

    @staticmethod
    def load_jsonl(path):
        """Generator to load jsonl file."""
        with open(path, "r") as f:
            for line in f:
                yield json.loads(line)

    def _generate_examples(self, metadata_path, images_dir, conditioning_images_dir):
        for row in self.load_jsonl(metadata_path):
            text = row["text"]

            image_path = row["image"]
            image_path = os.path.join(images_dir, image_path)
            image = open(image_path, "rb").read()

            conditioning_image_path = row["conditioning_image"]
            conditioning_image_path = os.path.join(
                conditioning_images_dir, row["conditioning_image"]
            )
            conditioning_image = open(conditioning_image_path, "rb").read()

            yield row["image"], {
                "text": text,
                "image": {
                    "path": image_path,
                    "bytes": image,
                },
                "conditioning_image": {
                    "path": conditioning_image_path,
                    "bytes": conditioning_image,
                },
            }