02 Feb 2013 11:06:02pm - Dazz - 2 Comment(s)
SO! Let's recap my achievements for today. Sadly, much of what I read today on the internet was extremely unfriendly in terms of "what the hell do I _actually_ do" when it came to running custom commands after deployment, so I'll quickly recap the specifics where I found documentation not 100% hand-holding-friendly. The rest you can find on Amazon's extensive documentation site.
Today I wanted to get started on my new blog code. I'm very tired of using WordPress (I still love you WordPress, honest), with one of the biggest factors I wish to address being the ability to simply deploy on Amazon Elastic Beanstalk. So today it's: get an RDS MySQL instance connected, throw Laravel on it, and then run any outstanding migrations (or just run them to start with). Dead simple.
Much of this I learnt from the developer guide in the AWS documentation, which was extremely useful. Following that guide, you'll get a new local git repo where you can throw Laravel 4's app base (currently found here). I won't go into it; how to get to this point is already awesomely documented in the Beanstalk PHP getting started guide (linked again in case you still haven't read it before reading on here).
Elastic Beanstalk has become even more awesome for users of Laravel, as 'composer install' is automatically run when it sees a composer.json in the root folder of your app. This helps a bunch, since we don't have to worry about either a) uploading the entire local vendor directory, or b) running composer on the instance ourselves after we push an update.
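For reference, a minimal composer.json that would trigger this looks something like the below - the version constraint and classmap entries are my assumptions based on the stock Laravel 4 skeleton, so check what your app base actually ships with:

```json
{
    "require": {
        "laravel/framework": "4.0.*"
    },
    "autoload": {
        "classmap": [
            "app/commands",
            "app/controllers",
            "app/models",
            "app/database/migrations",
            "app/database/seeds"
        ]
    }
}
```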
You'll notice that after you follow the guide on using eb, you'll have a directory called .elasticbeanstalk, which is automatically added to the .gitignore. In there, you'll need to make some quick adjustments to your environment's configuration file. (Not the 'config' file, but the file named 'optionsettings.EnvironmentNameHere'.)
[aws:elasticbeanstalk:container:php:phpini]
document_root=/public
composer_options=
zlib.output_compression=Off
memory_limit=256M
allow_url_fopen=On
max_execution_time=60
display_errors=On
The above is what I'm working with. You'll notice you need to ensure your document_root is the /public folder of your app (fairly straightforward), and that I turned on errors so that I can get some feedback as I develop 'in the cloud'. (10 points to me for using a buzz-word-phrase.)
If you wish to log into the EC2 instances that are running your code (say, when shit goes wrong), it doesn't hurt to set up a key pair in your EC2 Management dashboard and then update the information under [aws:autoscaling:launchconfiguration], as in my example.
[aws:autoscaling:launchconfiguration]
InstanceType=t1.micro
EC2KeyName=mysecretkeypairname
Now we'll need to make sure that when our instances update, or deploy for the first time, they run artisan migrate. We'll set up Laravel's database configuration shortly, so keep your pants on.
Make a new directory in your root directory called .ebextensions - this is where all your commands run, either before, during, or after your application is deployed. You can see a full list of things you can do (and when) in some more great documentation.
In our new folder, I made a file called 01migrate.config, as files are run alphabetically and one day I may need more tasks to run from artisan after a deployment. The file simply contains the call to artisan migrate.
container_commands:
  artisanmigrate:
    command: "php artisan migrate --env=elastic"
    leader_only: true
In a nutshell: we're naming the command we're about to run 'artisanmigrate', it runs 'php artisan migrate --env=elastic', and I only want the leader of the environment to run it (as we could have multiple instances all trying to do this at the same time). The environment flag is important, as is (at time of writing) having 'php' in the command. I believe container_commands runs after your application is unzipped, but before permissions are fixed up, thus the artisan script fails to run via its shebang.
Now, you should be familiar with Laravel environments, so we find ourselves creating a new directory under app/config/ - go ahead and run mkdir app/config/elastic now.
Inside that, we'll have our database.php file, which holds the database settings. It needs to look like the following:
<?php

return array(
    'default' => 'mysql',
    'connections' => array(
        'mysql' => array(
            'driver'    => 'mysql',
            'host'      => $_SERVER['RDS_HOSTNAME'],
            'port'      => $_SERVER['RDS_PORT'],
            'database'  => $_SERVER['RDS_DB_NAME'],
            'username'  => $_SERVER['RDS_USERNAME'],
            'password'  => $_SERVER['RDS_PASSWORD'],
            'charset'   => 'utf8',
            'collation' => 'utf8_unicode_ci',
            'prefix'    => '',
        ),
    ),
);
Elastic Beanstalk gives us the hostname and other settings via the $_SERVER global, so we just throw these into the array Laravel is asking for.
The fun part (where I'm sure there's a much better way of achieving this) is in your app/start.php file. As an Elastic Beanstalk hostname will change with every environment you set up, I didn't want to hardcode what was generated for the environment, use a CNAME domain, or specifically say this deploy is always an elastic deployment. Hence the following code, which replaces the default env detection (a small addition really):
$elastic_hostname = isset($_SERVER['RDS_HOSTNAME']) ? $_SERVER['SERVER_NAME'] : 'non-existent-hostname';

$env = $app->detectEnvironment(array(
    'local'   => array('localhost'),
    'elastic' => array($elastic_hostname),
));
Basically: where the RDS_HOSTNAME server global is set, I'll assume we're in the elastic environment - so the current hostname maps to 'elastic'.
Ta da! Once you git commit, git aws.push, and grab yourself a coffee while instances are rebuilt on Amazon's end, you should have a working copy of Laravel 4 connected to MySQL on RDS. Huzzah! After all this, ensure your migrations folder has the sessions migration in there (seek the Laravel documentation), set your session driver to the mysql database (again, seek someone else's documentation), and voila.
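For what it's worth, my understanding (an assumption - check the Laravel 4 docs yourself) is that 'php artisan session:table' generates the sessions migration, and the driver can then be switched per-environment with a tiny override file at app/config/elastic/session.php, something like:

```php
<?php

// app/config/elastic/session.php - hypothetical per-environment override.
// Laravel merges this over the base app/config/session.php for the
// 'elastic' environment, so only the keys we change need to appear here.
return array(
    'driver' => 'database', // store sessions in the 'sessions' table via our mysql connection
);
```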
One big caveat when using eb on the command line to generate and update your instances: the RDS instance created with the environment WILL BE TERMINATED when you stop and/or delete the environment. So it's important to note that this setup is MOSTLY for testing on the fly. In my opinion, if you were to use Elastic in production (which I so totally will when I get around to it), you should NOT create an RDS instance when creating a new elastic application/environment, but create it manually in AWS's console/dashboard - and then put the credentials for that connection directly into the elastic/database.php configuration.
Of course, thereafter I would need to set up the security access between the Elastic/EC2 instances and RDS myself - but then, regardless of what happens to the environment, the RDS instance would always persist.
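That production variant of app/config/elastic/database.php would then look something like the below - the hostname and credentials here are made-up placeholders for illustration, not real values:

```php
<?php

// Hypothetical production config: credentials for a manually created RDS
// instance are entered directly instead of read from $_SERVER, so the
// database survives the Beanstalk environment being torn down.
return array(
    'default' => 'mysql',
    'connections' => array(
        'mysql' => array(
            'driver'    => 'mysql',
            'host'      => 'myblog-db.abc123xyz.us-east-1.rds.amazonaws.com', // placeholder
            'port'      => '3306',
            'database'  => 'myblog',      // placeholder
            'username'  => 'myblog_user', // placeholder
            'password'  => 'changeme',    // placeholder
            'charset'   => 'utf8',
            'collation' => 'utf8_unicode_ci',
            'prefix'    => '',
        ),
    ),
);
```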