2017-08-01 | Tobias Sterbak


Efficient AWS usage for deep learning

When running experiments with deep neural nets, you want to use appropriate hardware. Most of the time I work on a ThinkPad laptop with no GPU, which makes experimenting painfully slow. A convenient alternative is to use an AWS GPU instance, for example a p2.xlarge.

I will assume that you have an AWS account (or that you can get one; it's easy). Then I can show you how to use AWS efficiently for deep learning.

The setup

First you need to add your AWS credentials to ~/.aws/credentials.

[default]
aws_access_key_id = YOUR_KEY
aws_secret_access_key = YOUR_SECRET

If you don’t have a ~/.aws/ directory yet, just create it. Next, set your default region in ~/.aws/config by adding

[default]
region=eu-west-1

Now the last thing you have to do is install boto3 with pip install boto3.
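
To check that boto3 picks up your credentials and region correctly, you can run a small sanity check like the following (calling sts.get_caller_identity is just one convenient way to do this):

import boto3

# Prints the configured region and your account id if the setup works.
session = boto3.session.Session()
print("Region: {}".format(session.region_name))
print("Account: {}".format(boto3.client("sts").get_caller_identity()["Account"]))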

The script

Now we want to automate the creation of an AWS instance as far as possible. We want to use this pre-configured image (a so-called AMI). Make sure you pick the right one for your region.
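
If you are not sure whether this AMI exists in your region, you can look it up with describe_images. This is just a quick check using the AMI id from the script below:

import boto3

# Look up the AMI in your configured region
# (raises an error if the image id is unknown there).
client = boto3.client('ec2')
images = client.describe_images(ImageIds=['ami-d36386aa'])
for image in images['Images']:
    print("{}: {} ({})".format(image['ImageId'], image['Name'], image['State']))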

This is the script we will use to spin up an AWS spot instance with the required AMI.

import boto3

# Request a single p2.xlarge spot instance based on the pre-configured AMI.
instance_type = "p2.xlarge"
print("Starting spot instance of type {}".format(instance_type))
client = boto3.client('ec2')
response = client.request_spot_instances(
    DryRun=False,
    SpotPrice='0.25',  # maximum price in USD you are willing to pay per hour
    InstanceCount=1,
    Type='one-time',
    LaunchSpecification={
        'ImageId': 'ami-d36386aa',
        'KeyName': 'aws_test',
        'SecurityGroups': ['dl'],
        'InstanceType': instance_type,
        'Placement': {
            'AvailabilityZone': 'eu-west-1a',
        },
        'BlockDeviceMappings': [
            {
                'DeviceName': '/dev/xvda',
                'Ebs': {
                    'SnapshotId': 'snap-0595b270bf9fd5579',
                    'VolumeSize': 50,
                    'DeleteOnTermination': True,
                    'VolumeType': 'gp2',
                    'Encrypted': False
                },
            },
        ],
        'EbsOptimized': False,
        'Monitoring': {
            'Enabled': False
        },
        'SecurityGroupIds': ['sg-a2dd59db']
    })
print(response)
print()
# List all currently running instances to get the public IP address;
# note that the new spot instance might not show up here immediately.
ec2 = boto3.resource('ec2')
instances = ec2.instances.filter(
    Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])
for instance in instances:
    print("Id: {}, type: {}, ip: {}".format(
        instance.id, instance.instance_type, instance.public_ip_address))

You need to apply some changes to make this work for you: replace the security group, the snapshot id and the key name with your own. Now you can simply run the script, and you get a GPU instance pre-configured for deep learning plus the IP address to connect to via ssh.
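
Note that the spot request is usually not fulfilled instantly, so the instance listing at the end of the script may come up empty on the first run. A small addition like the following (a sketch that continues from the script above, reusing its client and response variables) waits until the request is fulfilled and prints the instance id:

# Continues from the script above: wait for the spot request to be fulfilled.
request_id = response['SpotInstanceRequests'][0]['SpotInstanceRequestId']
waiter = client.get_waiter('spot_instance_request_fulfilled')
waiter.wait(SpotInstanceRequestIds=[request_id])
fulfilled = client.describe_spot_instance_requests(
    SpotInstanceRequestIds=[request_id])
print("Instance id: {}".format(fulfilled['SpotInstanceRequests'][0]['InstanceId']))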

Next time we will embed this Python script in a bash script to automatically install packages on your instance.

Have fun!

