From 4c3db783083ac1a738e9f6fc01686904cb755842 Mon Sep 17 00:00:00 2001
From: atqy
Date: Wed, 13 Jul 2022 11:04:50 -0700
Subject: [PATCH 1/5] test ci notebooks

---
 .../scikit_bring_your_own/scikit_bring_your_own.ipynb | 2 ++
 .../basic_sagemaker_processing.ipynb                  | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb b/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb
index 9fab0f8d5b..8cc926e1f6 100644
--- a/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb
+++ b/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb
@@ -6,6 +6,8 @@
    "source": [
     "# Building your own algorithm container\n",
     "\n",
+    "test ci.\n",
+    "\n",
     "With Amazon SageMaker, you can package your own algorithms that can than be trained and deployed in the SageMaker environment. This notebook will guide you through an example that shows you how to build a Docker container for SageMaker and use it for training and inference.\n",
     "\n",
     "By packaging an algorithm in a container, you can bring almost any code to the Amazon SageMaker environment, regardless of programming language, environment, framework, or dependencies. \n",
diff --git a/sagemaker_processing/basic_sagemaker_data_processing/basic_sagemaker_processing.ipynb b/sagemaker_processing/basic_sagemaker_data_processing/basic_sagemaker_processing.ipynb
index 552236814b..a35ad37da7 100644
--- a/sagemaker_processing/basic_sagemaker_data_processing/basic_sagemaker_processing.ipynb
+++ b/sagemaker_processing/basic_sagemaker_data_processing/basic_sagemaker_processing.ipynb
@@ -6,6 +6,8 @@
    "source": [
     "# Get started with SageMaker Processing\n",
     "\n",
+    "test ci.\n",
+    "\n",
     "This notebook corresponds to the section \"Preprocessing Data With The Built-In Scikit-Learn Container\" in the blog post [Amazon SageMaker Processing – Fully Managed Data Processing and Model Evaluation](https://aws.amazon.com/blogs/aws/amazon-sagemaker-processing-fully-managed-data-processing-and-model-evaluation/). \n",
     "It shows a lightweight example of using SageMaker Processing to create train, test, and validation datasets. SageMaker Processing is used to create these datasets, which then are written back to S3.\n",

From d67799f1d1acf0d09125ae83f68b10f1cf5b7131 Mon Sep 17 00:00:00 2001
From: atqy
Date: Wed, 13 Jul 2022 19:15:50 -0700
Subject: [PATCH 2/5] test trigger

---
 .../basic_sagemaker_processing.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/sagemaker_processing/basic_sagemaker_data_processing/basic_sagemaker_processing.ipynb b/sagemaker_processing/basic_sagemaker_data_processing/basic_sagemaker_processing.ipynb
index a35ad37da7..8577dd61c4 100644
--- a/sagemaker_processing/basic_sagemaker_data_processing/basic_sagemaker_processing.ipynb
+++ b/sagemaker_processing/basic_sagemaker_data_processing/basic_sagemaker_processing.ipynb
@@ -6,7 +6,7 @@
    "source": [
     "# Get started with SageMaker Processing\n",
     "\n",
-    "test ci.\n",
+    "test ci..\n",
     "\n",
     "This notebook corresponds to the section \"Preprocessing Data With The Built-In Scikit-Learn Container\" in the blog post [Amazon SageMaker Processing – Fully Managed Data Processing and Model Evaluation](https://aws.amazon.com/blogs/aws/amazon-sagemaker-processing-fully-managed-data-processing-and-model-evaluation/). \n",
     "It shows a lightweight example of using SageMaker Processing to create train, test, and validation datasets. SageMaker Processing is used to create these datasets, which then are written back to S3.\n",

From eda9c1f1c6e85800703fde910d75d16e1715a6b2 Mon Sep 17 00:00:00 2001
From: atqy
Date: Thu, 14 Jul 2022 10:53:06 -0700
Subject: [PATCH 3/5] test ci

---
 .../mxnet_horovod_maskrcnn/horovod_deployment_notebook.ipynb | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/sagemaker-python-sdk/mxnet_horovod_maskrcnn/horovod_deployment_notebook.ipynb b/sagemaker-python-sdk/mxnet_horovod_maskrcnn/horovod_deployment_notebook.ipynb
index 4b3e5ac077..8e44ce1f3a 100644
--- a/sagemaker-python-sdk/mxnet_horovod_maskrcnn/horovod_deployment_notebook.ipynb
+++ b/sagemaker-python-sdk/mxnet_horovod_maskrcnn/horovod_deployment_notebook.ipynb
@@ -5,6 +5,9 @@
    "metadata": {},
    "source": [
     "# Reduce MaskCNN Training Time with Apache MXNet and Horovod on Amazon SageMaker\n",
+    "\n",
+    "test ci.\n",
+    "\n",
     "Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high quality models. As datasets continue to increase in size, additional compute is required to reduce the amount of time it takes to train. One method to scale horizontally and add these additional resources on SageMaker is through the use of Horovod and Apache MXNet. In this post, we will show how users can reduce training time with MXNet and Horovod on SageMaker. Finally, we will demonstrate how you can improve performance even more with advanced sections on Horovod Timeline, Horovod Autotune, Horovod Fusion, and MXNet Optimization. \n",
     "\n",
     "## Distributed Training \n",

From 3d17a740ce7094f1d49fd98d2ddd9f2495d9223d Mon Sep 17 00:00:00 2001
From: atqy
Date: Sat, 30 Jul 2022 00:13:04 -0700
Subject: [PATCH 4/5] test another notebook

---
 .../pytorch/data_parallel/yolov5/yolov5.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/training/distributed_training/pytorch/data_parallel/yolov5/yolov5.ipynb b/training/distributed_training/pytorch/data_parallel/yolov5/yolov5.ipynb
index f90b4ddb2f..3d783a3f3b 100644
--- a/training/distributed_training/pytorch/data_parallel/yolov5/yolov5.ipynb
+++ b/training/distributed_training/pytorch/data_parallel/yolov5/yolov5.ipynb
@@ -4,7 +4,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "# Distributed data parallel YOLOv5 training with PyTorch and SageMaker distributed\n",
+    "# Distributed data parallel YOLOv5 training with PyTorch and SageMaker distributed test\n",
     "\n",
     "[Amazon SageMaker's distributed library](https://docs.aws.amazon.com/sagemaker/latest/dg/distributed-training.html) can be used to train deep learning models faster and cheaper. The [data parallel](https://docs.aws.amazon.com/sagemaker/latest/dg/data-parallel.html) feature in this library (`smdistributed.dataparallel`) is a distributed data parallel training framework for PyTorch, TensorFlow, and MXNet.\n",
     "\n",

From 54e82f67f1730e019bc95e85bb3cedbbd172ea1f Mon Sep 17 00:00:00 2001
From: atqy
Date: Sat, 30 Jul 2022 08:07:44 -0700
Subject: [PATCH 5/5] remove failing notebook

---
 .../mxnet_horovod_maskrcnn/horovod_deployment_notebook.ipynb | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/sagemaker-python-sdk/mxnet_horovod_maskrcnn/horovod_deployment_notebook.ipynb b/sagemaker-python-sdk/mxnet_horovod_maskrcnn/horovod_deployment_notebook.ipynb
index 8e44ce1f3a..4b3e5ac077 100644
--- a/sagemaker-python-sdk/mxnet_horovod_maskrcnn/horovod_deployment_notebook.ipynb
+++ b/sagemaker-python-sdk/mxnet_horovod_maskrcnn/horovod_deployment_notebook.ipynb
@@ -5,9 +5,6 @@
    "metadata": {},
    "source": [
     "# Reduce MaskCNN Training Time with Apache MXNet and Horovod on Amazon SageMaker\n",
-    "\n",
-    "test ci.\n",
-    "\n",
     "Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high quality models. As datasets continue to increase in size, additional compute is required to reduce the amount of time it takes to train. One method to scale horizontally and add these additional resources on SageMaker is through the use of Horovod and Apache MXNet. In this post, we will show how users can reduce training time with MXNet and Horovod on SageMaker. Finally, we will demonstrate how you can improve performance even more with advanced sections on Horovod Timeline, Horovod Autotune, Horovod Fusion, and MXNet Optimization. \n",
     "\n",
     "## Distributed Training \n",