
[Bug] KeyError: 'gt_label' #1888

Open
yanshuangying888 opened this issue Apr 16, 2024 · 4 comments
Comments

@yanshuangying888

Branch

main branch (mmpretrain version)

Describe the bug

The contents of config.py are as follows:

auto_scale_lr = dict(base_batch_size=256)
data_preprocessor = dict(
    mean=[
        123.675,
        116.28,
        103.53,
    ],
    num_classes=7,
    std=[
        58.395,
        57.12,
        57.375,
    ],
    to_rgb=True)
dataset_type = 'ImageNet'
default_hooks = dict(
    checkpoint=dict(interval=50, type='CheckpointHook'),
    logger=dict(interval=100, type='LoggerHook'),
    param_scheduler=dict(type='ParamSchedulerHook'),
    sampler_seed=dict(type='DistSamplerSeedHook'),
    timer=dict(type='IterTimerHook'),
    visualization=dict(enable=False, type='VisualizationHook'))
default_scope = 'mmpretrain'
env_cfg = dict(
    cudnn_benchmark=False,
    dist_cfg=dict(backend='nccl'),
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0))
launcher = 'none'
load_from = None
log_level = 'INFO'
model = dict(
    backbone=dict(
        depth=50,
        num_stages=4,
        out_indices=(3, ),
        style='pytorch',
        type='ResNet'),
    head=dict(
        in_channels=2048,
        loss=dict(loss_weight=1.0, type='CrossEntropyLoss'),
        num_classes=7,
        topk=(
            1,
            5,
        ),
        type='LinearClsHead'),
    neck=dict(type='GlobalAveragePooling'),
    type='ImageClassifier')
optim_wrapper = dict(
    optimizer=dict(lr=0.1, momentum=0.9, type='SGD', weight_decay=0.0001))
param_scheduler = dict(
    by_epoch=True,
    gamma=0.1,
    milestones=[
        100,
        200,
        300,
        400,
        500,
    ],
    type='MultiStepLR')
randomness = dict(deterministic=False, seed=None)
resume = False
test_cfg = dict()
test_dataloader = dict(
    batch_size=64,
    collate_fn=dict(type='default_collate'),
    dataset=dict(
        data_root='D:\\data',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(edge='short', scale=256, type='ResizeEdge'),
            dict(crop_size=224, type='CenterCrop'),
            dict(type='PackInputs'),
        ],
        split='test',
        type='ImageNet'),
    num_workers=8,
    persistent_workers=True,
    pin_memory=True,
    sampler=dict(shuffle=False, type='DefaultSampler'))
test_evaluator = [
    dict(topk=(
        1,
        5,
    ), type='Accuracy'),
    dict(
        items=[
            'precision',
            'recall',
            'f1-score',
        ], type='SingleLabelMetric'),
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(edge='short', scale=256, type='ResizeEdge'),
    dict(crop_size=224, type='CenterCrop'),
    dict(type='PackInputs'),
]
train_cfg = dict(by_epoch=True, max_epochs=500, val_interval=1)
train_dataloader = dict(
    batch_size=64,
    collate_fn=dict(type='default_collate'),
    dataset=dict(
        data_root='D:\\data',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(scale=224, type='RandomResizedCrop'),
            dict(direction='horizontal', prob=0.5, type='RandomFlip'),
            dict(type='PackInputs'),
        ],
        split='train',
        type='ImageNet'),
    num_workers=8,
    persistent_workers=True,
    pin_memory=True,
    sampler=dict(shuffle=True, type='DefaultSampler'))
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(scale=224, type='RandomResizedCrop'),
    dict(direction='horizontal', prob=0.5, type='RandomFlip'),
    dict(type='PackInputs'),
]
val_cfg = dict()
val_dataloader = dict(
    batch_size=64,
    collate_fn=dict(type='default_collate'),
    dataset=dict(
        data_root='D:\\data',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(edge='short', scale=256, type='ResizeEdge'),
            dict(crop_size=224, type='CenterCrop'),
            dict(type='PackInputs'),
        ],
        split='val',
        type='ImageNet'),
    num_workers=8,
    persistent_workers=True,
    pin_memory=True,
    sampler=dict(shuffle=False, type='DefaultSampler'))
val_evaluator = [
    dict(topk=(
        1,
        5,
    ), type='Accuracy'),
    dict(
        items=[
            'precision',
            'recall',
            'f1-score',
        ], type='SingleLabelMetric'),
]
vis_backends = [
    dict(type='LocalVisBackend'),
]
visualizer = dict(
    type='UniversalVisualizer', vis_backends=[
        dict(type='LocalVisBackend'),
    ])
work_dir = 'D:\\resnet50_8xb32_in1k_epoch500'

An error occurred when running the command to export the .pkl results file:

python tools/test.py config.py resnet50_8xb32_in1k_epoch500.pth --out results.pkl
The error is as follows:
Traceback (most recent call last):
  File "D:\Pythonfiles\Undergraduate_Thesis\mmpretrain-main\mmpretrain-main\tools\test.py", line 193, in <module>
    main()
  File "D:\Pythonfiles\Undergraduate_Thesis\mmpretrain-main\mmpretrain-main\tools\test.py", line 186, in main
    metrics = runner.test()
              ^^^^^^^^^^^^^
  File "C:\ProgramData\anaconda3\Lib\site-packages\mmengine\runner\runner.py", line 1823, in test
    metrics = self.test_loop.run()  # type: ignore
              ^^^^^^^^^^^^^^^^^^^^
  File "C:\ProgramData\anaconda3\Lib\site-packages\mmengine\runner\loops.py", line 443, in run
    self.run_iter(idx, data_batch)
  File "C:\ProgramData\anaconda3\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\ProgramData\anaconda3\Lib\site-packages\mmengine\runner\loops.py", line 463, in run_iter
    self.evaluator.process(data_samples=outputs, data_batch=data_batch)
  File "C:\ProgramData\anaconda3\Lib\site-packages\mmengine\evaluator\evaluator.py", line 60, in process
    metric.process(data_batch, _data_samples)
  File "d:\pythonfiles\undergraduate_thesis\mmpretrain-main\mmpretrain-main\mmpretrain\evaluation\metrics\single_label.py", line 157, in process
    result['gt_label'] = data_sample['gt_label'].cpu()
                         ~~~~~~~~~~~^^^^^^^^^^^^
KeyError: 'gt_label'

How can this problem be solved? Your advice would be greatly appreciated.

Environment information

{'sys.platform': 'win32',
'Python': '3.11.5 | packaged by Anaconda, Inc. | (main, Sep 11 2023, '
'13:26:23) [MSC v.1916 64 bit (AMD64)]',
'CUDA available': True,
'MUSA available': False,
'numpy_random_seed': 2147483648,
'GPU 0': 'NVIDIA GeForce RTX 4090',
'CUDA_HOME': 'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1',
'NVCC': 'Cuda compilation tools, release 12.1, V12.1.105',
'MSVC': 'Microsoft (R) C/C++ Optimizing Compiler Version 19.39.33521 for x64',
'GCC': 'n/a',
'PyTorch': '2.1.2+cu121',
'TorchVision': '0.16.2+cu121',
'OpenCV': '4.9.0',
'MMEngine': '0.10.3',
'MMCV': '2.1.0',
'MMPreTrain': '1.2.0+unknown'}

Other information

No response

@JMcarrot

JMcarrot commented May 1, 2024

I also ran into this error. How did you solve it?

@yanshuangying888
Author

yanshuangying888 commented May 1, 2024

I also ran into this error. How did you solve it?
@JMcarrot
Hello, my friend, I am glad to reply to you. I am sending you my corrected config below; I hope it helps.

auto_scale_lr = dict(base_batch_size=256)
data_preprocessor = dict(
    mean=[
        123.675,
        116.28,
        103.53,
    ],
    num_classes=7,
    std=[
        58.395,
        57.12,
        57.375,
    ],
    to_rgb=True)
dataset_type = 'ImageNet'
default_hooks = dict(
    checkpoint=dict(interval=1, type='CheckpointHook'),
    logger=dict(interval=100, type='LoggerHook'),
    param_scheduler=dict(type='ParamSchedulerHook'),
    sampler_seed=dict(type='DistSamplerSeedHook'),
    timer=dict(type='IterTimerHook'),
    visualization=dict(enable=False, type='VisualizationHook'))
default_scope = 'mmpretrain'
env_cfg = dict(
    cudnn_benchmark=False,
    dist_cfg=dict(backend='nccl'),
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0))
launcher = 'none'
load_from = 'D:\\epoch_50.pth'
log_level = 'INFO'
model = dict(
    backbone=dict(
        depth=50,
        num_stages=4,
        out_indices=(3, ),
        style='pytorch',
        type='ResNet'),
    head=dict(
        in_channels=2048,
        loss=dict(loss_weight=1.0, type='CrossEntropyLoss'),
        num_classes=7,
        topk=(
            1,
            5,
        ),
        type='LinearClsHead'),
    neck=dict(type='GlobalAveragePooling'),
    type='ImageClassifier')
optim_wrapper = dict(
    optimizer=dict(lr=0.1, momentum=0.9, type='SGD', weight_decay=0.0001))
param_scheduler = dict(
    by_epoch=True,
    gamma=0.1,
    milestones=[
        100,
        200,
        300,
        400,
        500,
    ],
    type='MultiStepLR')
randomness = dict(deterministic=False, seed=None)
resume = False
test_cfg = dict()
test_dataloader = dict(
    batch_size=64,
    collate_fn=dict(type='default_collate'),
    dataset=dict(
        data_prefix='test',
        data_root='D:\\data',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(edge='short', scale=256, type='ResizeEdge'),
            dict(crop_size=224, type='CenterCrop'),
            dict(type='PackInputs'),
        ],
        type='CustomDataset'),
    num_workers=8,
    persistent_workers=True,
    pin_memory=True,
    sampler=dict(shuffle=False, type='DefaultSampler'))
test_evaluator = [
    dict(topk=(
        1,
        5,
    ), type='Accuracy'),
    dict(
        items=[
            'precision',
            'recall',
            'f1-score',
        ], type='SingleLabelMetric'),
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(edge='short', scale=256, type='ResizeEdge'),
    dict(crop_size=224, type='CenterCrop'),
    dict(type='PackInputs'),
]
train_cfg = dict(by_epoch=True, max_epochs=500, val_interval=1)
train_dataloader = dict(
    batch_size=64,
    collate_fn=dict(type='default_collate'),
    dataset=dict(
        data_prefix='train',
        data_root='D:\\data',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(scale=224, type='RandomResizedCrop'),
            dict(direction='horizontal', prob=0.5, type='RandomFlip'),
            dict(type='PackInputs'),
        ],
        type='CustomDataset'),
    num_workers=8,
    persistent_workers=True,
    pin_memory=True,
    sampler=dict(shuffle=True, type='DefaultSampler'))
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(scale=224, type='RandomResizedCrop'),
    dict(direction='horizontal', prob=0.5, type='RandomFlip'),
    dict(type='PackInputs'),
]
val_cfg = dict()
val_dataloader = dict(
    batch_size=64,
    collate_fn=dict(type='default_collate'),
    dataset=dict(
        data_prefix='val',
        data_root='D:\\data',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(edge='short', scale=256, type='ResizeEdge'),
            dict(crop_size=224, type='CenterCrop'),
            dict(type='PackInputs'),
        ],
        type='CustomDataset'),
    num_workers=8,
    persistent_workers=True,
    pin_memory=True,
    sampler=dict(shuffle=False, type='DefaultSampler'))
val_evaluator = [
    dict(topk=(
        1,
        5,
    ), type='Accuracy'),
    dict(
        items=[
            'precision',
            'recall',
            'f1-score',
        ], type='SingleLabelMetric'),
]
vis_backends = [
    dict(type='LocalVisBackend'),
]
visualizer = dict(
    type='UniversalVisualizer', vis_backends=[
        dict(type='LocalVisBackend'),
    ])
work_dir = 'D:\\resnet50_8xb32_in1k_epoch500'

The key change is the data_prefix=XXX part, which replaces the original split=XXX; note that the dataset type also changes from 'ImageNet' to 'CustomDataset' (see the sketch below).
I hope this helps you.
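
For reference, here is a minimal sketch of that change, showing only the test dataset entry. The class-per-subfolder layout under D:\data\test is an assumption based on how CustomDataset infers labels; adjust it to your own directory structure.

# Before: with type='ImageNet' and split='test', no annotation with labels is
# available for this custom 7-class dataset, so PackInputs has nothing to pack
# as a label and the evaluator later fails with KeyError: 'gt_label'.
# (This is my reading of the error, not an official explanation.)
old_test_dataset = dict(
    type='ImageNet',
    data_root='D:\\data',
    split='test',
    pipeline=test_pipeline)

# After: type='CustomDataset' with data_prefix='test'. Assuming the images are
# arranged as D:\data\test\<class_name>\xxx.jpg, the ground-truth label is
# inferred from the folder name and packed into every data sample, so
# 'gt_label' is available to Accuracy and SingleLabelMetric.
new_test_dataset = dict(
    type='CustomDataset',
    data_root='D:\\data',
    data_prefix='test',
    pipeline=test_pipeline)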

@Jayden-ch

@yanshuangying888 Hello, I ran into the same problem. After modifying data_prefix as you suggested, it still is not resolved.

@yanshuangying888
Author

@yanshuangying888 Hello, I ran into the same problem. After modifying data_prefix as you suggested, it still is not resolved.

@Jayden-ch Hello, my friend, I am glad to reply to you. When I fixed this problem, I referred to earlier mmpretrain versions: I kept the config from a version that ran correctly, compared it with the failing one, and the error only went away after I changed data_prefix. Since every developer's machine and environment differ, you can use an earlier working version as a reference to see whether it fixes your error, or try my corrected config with the necessary parameters adapted to your environment. I hope this helps.
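
If the change above still does not work in another environment, one way to narrow things down is to build the test dataset directly from the config and check whether the packed samples actually contain a ground-truth label. The snippet below is only a rough diagnostic sketch, assuming a standard mmpretrain 1.x installation; 'config.py' is a placeholder for your own config path.

# Diagnostic sketch: does the test dataset pack 'gt_label' into its samples?
# Assumes mmpretrain 1.x; 'config.py' is a placeholder path.
from mmengine.config import Config
from mmpretrain.registry import DATASETS
from mmpretrain.utils import register_all_modules

register_all_modules()  # register mmpretrain datasets and transforms

cfg = Config.fromfile('config.py')
dataset = DATASETS.build(cfg.test_dataloader.dataset)

sample = dataset[0]                    # dict produced by PackInputs
data_sample = sample['data_samples']   # DataSample built by the pipeline
print('packed fields:', list(data_sample.keys()))  # 'gt_label' should appear here

If 'gt_label' is missing in that output, the problem lies in the dataset/annotation layout rather than in the metric itself.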
