diff --git a/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb b/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb
index 9fab0f8d5b..8cc926e1f6 100644
--- a/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb
+++ b/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb
@@ -6,6 +6,8 @@
    "source": [
     "# Building your own algorithm container\n",
     "\n",
+    "test ci.\n",
+    "\n",
     "With Amazon SageMaker, you can package your own algorithms that can than be trained and deployed in the SageMaker environment. This notebook will guide you through an example that shows you how to build a Docker container for SageMaker and use it for training and inference.\n",
     "\n",
     "By packaging an algorithm in a container, you can bring almost any code to the Amazon SageMaker environment, regardless of programming language, environment, framework, or dependencies. \n",
diff --git a/sagemaker_processing/basic_sagemaker_data_processing/basic_sagemaker_processing.ipynb b/sagemaker_processing/basic_sagemaker_data_processing/basic_sagemaker_processing.ipynb
index 552236814b..8577dd61c4 100644
--- a/sagemaker_processing/basic_sagemaker_data_processing/basic_sagemaker_processing.ipynb
+++ b/sagemaker_processing/basic_sagemaker_data_processing/basic_sagemaker_processing.ipynb
@@ -6,6 +6,8 @@
    "source": [
     "# Get started with SageMaker Processing\n",
     "\n",
+    "test ci..\n",
+    "\n",
     "This notebook corresponds to the section \"Preprocessing Data With The Built-In Scikit-Learn Container\" in the blog post [Amazon SageMaker Processing – Fully Managed Data Processing and Model Evaluation](https://aws.amazon.com/blogs/aws/amazon-sagemaker-processing-fully-managed-data-processing-and-model-evaluation/). \n",
     "It shows a lightweight example of using SageMaker Processing to create train, test, and validation datasets. SageMaker Processing is used to create these datasets, which then are written back to S3.\n",
     "\n",
diff --git a/training/distributed_training/pytorch/data_parallel/yolov5/yolov5.ipynb b/training/distributed_training/pytorch/data_parallel/yolov5/yolov5.ipynb
index f90b4ddb2f..3d783a3f3b 100644
--- a/training/distributed_training/pytorch/data_parallel/yolov5/yolov5.ipynb
+++ b/training/distributed_training/pytorch/data_parallel/yolov5/yolov5.ipynb
@@ -4,7 +4,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "# Distributed data parallel YOLOv5 training with PyTorch and SageMaker distributed\n",
+    "# Distributed data parallel YOLOv5 training with PyTorch and SageMaker distributed test\n",
     "\n",
     "[Amazon SageMaker's distributed library](https://docs.aws.amazon.com/sagemaker/latest/dg/distributed-training.html) can be used to train deep learning models faster and cheaper. The [data parallel](https://docs.aws.amazon.com/sagemaker/latest/dg/data-parallel.html) feature in this library (`smdistributed.dataparallel`) is a distributed data parallel training framework for PyTorch, TensorFlow, and MXNet.\n",
     "\n",