[Train] Add example of pre-training Llama model on Intel Gaudi #45459
base: master
Conversation
(for train team folks to review)
Force-pushed from 7ffb29c to 30f43b6
Signed-off-by: Wu, Gangsheng <gangsheng.wu@intel.com>
@@ -0,0 +1,568 @@
{
High-level question: it seems this uses DeepSpeed ZeRO-3 for pre-training. Why do we also include Megatron here?
This example imports Megatron only for data processing.
" )\n", | ||
"\n", | ||
" # Data loader only on rank 0 of each model parallel group.\n", | ||
" if args.use_dataset_only or mpu.get_tensor_model_parallel_rank() == 0:\n", |
Where did we configure the tensor and pipeline parallel group size?
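For reference, a hedged sketch of how a Megatron-style `mpu` typically derives the tensor-parallel rank from the global rank; the function names below are illustrative, not the exact Megatron API. Note that if the group sizes are never configured, `tp_size` effectively defaults to 1, in which case every worker is rank 0 of its own group and loads data.

```python
# Illustrative sketch (not the real Megatron mpu): with contiguous
# tensor-parallel groups, the TP rank is the global rank modulo tp_size.
def tensor_model_parallel_rank(global_rank: int, tp_size: int) -> int:
    return global_rank % tp_size

def is_data_loading_rank(global_rank: int, tp_size: int) -> bool:
    # Only TP rank 0 builds the dataloader; the batch is then broadcast
    # to the other ranks in its tensor-parallel group.
    return tensor_model_parallel_rank(global_rank, tp_size) == 0
```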
"(RayTrainWorker pid=339380) {'loss': nan, 'grad_norm': nan, 'learning_rate': 4.9e-05, 'epoch': 0.0, 'memory_allocated (GB)': 40.42, 'max_memory_allocated (GB)': 93.68, 'total_memory_available (GB)': 94.62}\n", | ||
"(RayTrainWorker pid=339380) {'loss': nan, 'grad_norm': nan, 'learning_rate': 4.875e-05, 'epoch': 0.0, 'memory_allocated (GB)': 40.4, 'max_memory_allocated (GB)': 93.68, 'total_memory_available (GB)': 94.62}\n", | ||
"(RayTrainWorker pid=339380) {'loss': nan, 'grad_norm': nan, 'learning_rate': 4.85e-05, 'epoch': 0.0, 'memory_allocated (GB)': 40.4, 'max_memory_allocated (GB)': 93.68, 'total_memory_available (GB)': 94.62}\n", | ||
"(RayTrainWorker pid=339380) {'loss': nan, 'grad_norm': nan, 'learning_rate': 4.825e-05, 'epoch': 0.0, 'memory_allocated (GB)': 40.45, 'max_memory_allocated (GB)': 93.68, 'total_memory_available (GB)': 94.62}\n", |
It seems that the loss and grad norm are NaN. Can you try to fix the bug?
"\n", | ||
" # Set backend to hccl in TorchConfig\n", | ||
" torch_config = TorchConfig(backend=\"hccl\")\n", | ||
" runtime_env = {\n", |
Do we need `runtime_env` if it's empty?
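One way to avoid the empty placeholder, sketched as a hypothetical helper (the `env_vars` key is a standard `runtime_env` field; whether the notebook actually needs any entries is the reviewer's question): build the dict only when there is something to put in it, and otherwise pass nothing, since omitting `runtime_env` is equivalent to passing an empty one.

```python
def build_runtime_env(env_vars=None):
    """Return a runtime_env dict only when there is something to set.

    Illustrative sketch: if there are no entries, return None so the
    caller can simply omit the runtime_env argument.
    """
    if not env_vars:
        return None
    return {"env_vars": dict(env_vars)}
```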
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"## Process dataset to dataloader" |
In general, can we introduce the model sharding layout so that users can better understand why the dataloader is defined this way? Include the pp/tp/dp group sizes, global batch size, per-device batch size, etc.
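To illustrate the relationship the comment is asking the example to document, here is the standard Megatron/DeepSpeed-style arithmetic relating those quantities (a generic sketch, not values taken from this PR):

```python
def parallel_layout(world_size, tp, pp, per_device_batch, grad_accum):
    """Derive the data-parallel group size and global batch size.

    Standard convention: world_size = tp * pp * dp, and
    global_batch = per_device_batch * dp * grad_accum.
    """
    assert world_size % (tp * pp) == 0, "world_size must be divisible by tp*pp"
    dp = world_size // (tp * pp)
    return {"dp": dp, "global_batch": per_device_batch * dp * grad_accum}
```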
Why are these changes needed?
To leverage the potential of the Intel Gaudi accelerator, we extend Ray Train's capabilities by adding support for Intel Gaudi (HPU) hardware. This PR includes an example of pre-training Llama-7b on multiple HPUs.
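For readers skimming the conversation, a minimal sketch of how such a trainer is typically wired up with Ray Train; the worker count, resource key, and `train_func` body are illustrative assumptions, not taken from this PR.

```python
# Hedged sketch based on the public Ray Train API.
from ray.train import ScalingConfig
from ray.train.torch import TorchConfig, TorchTrainer

def train_func():
    ...  # per-worker pre-training loop (DeepSpeed ZeRO-3 in this example)

trainer = TorchTrainer(
    train_func,
    # HCCL is the collective-communication backend for Gaudi devices.
    torch_config=TorchConfig(backend="hccl"),
    # Worker count and the "HPU" resource key are illustrative.
    scaling_config=ScalingConfig(num_workers=8, resources_per_worker={"HPU": 1}),
)
result = trainer.fit()
```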
Related issue number
Checks
- I've signed off every commit (git commit -s) in this PR.
- I've run scripts/format.sh to lint the changes in this PR.
- If I added a method in Tune, I've added it in doc/source/tune/api/ under the corresponding .rst file.