Phoenix vs. Rails: models and migrations

In the previous article, I described the structure of a newly created Phoenix project. Today we will analyze how Phoenix's migrations and models relate to ActiveRecord and ActiveRecord::Migration in Rails.

Since in that article we started building a simple application for publishing blog posts, today we will write the Post model. As before, we can use the ready-made generators to create the structure of our application. To generate the Post resource, run the following command:
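
(A sketch of what that call might look like, assuming a Phoenix 1.2-era project; newer versions use mix phx.gen.html with a context name, and the field list here is only an example.)

    mix phoenix.gen.html Post posts title:string body:text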

The above command is similar to its Rails counterpart:
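
(Again only a sketch; the fields mirror the Phoenix command above.)

    rails generate scaffold Post title:string body:text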

In case you haven't noticed, I'll give you a quick hint: the creators of Phoenix were strongly inspired by Rails. Running this command, just like in Rails, creates a model, CRUD views, a controller, and a migration. The difference between Phoenix and Rails is that Phoenix does not automatically add the routes to our router (although the generator will remind you to do so), so we add these routes ourselves with the resources macro in the router:
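
A sketch of the router change, assuming the default browser scope that Phoenix generates (the MyBlog module name is illustrative):

    # web/router.ex
    scope "/", MyBlog do
      pipe_through :browser

      get "/", PageController, :index
      # routes for the generated Post resource
      resources "/posts", PostController
    end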

As with Rails, in Phoenix you also have to run the migration against the database to create the new table. First, though, let's look at the migration file produced by the generator:
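
The generated file looks roughly like this (module and field names depend on your app and the generator arguments):

    # priv/repo/migrations/<timestamp>_create_post.exs
    defmodule MyBlog.Repo.Migrations.CreatePost do
      use Ecto.Migration

      def change do
        create table(:posts) do
          add :title, :string
          add :body, :text

          timestamps()
        end
      end
    end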

Phoenix migrations are very similar to Rails migrations. The biggest difference between them is the syntax, which follows from the idioms of the two languages: a Rails migration calls methods on a block variable (t.string :title), while an Ecto migration uses plain function calls such as add :title, :string inside a do block. Okay, but what if we want to implement a more complex migration, one that cannot be automatically rolled back from a single change function? Unfortunately, I have another boring answer for you: just like in Rails, we write separate up and down functions. Personally, I really like this, because I think migrations in Rails are great and I have never had a problem with them, as long as they were used in the right way.
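
For instance, a hypothetical migration that runs raw SQL, which Ecto cannot reverse on its own, could be written with explicit up and down functions:

    defmodule MyBlog.Repo.Migrations.AddCitextExtension do
      use Ecto.Migration

      # Ecto cannot infer how to roll back execute/1,
      # so we provide up/down instead of a single change/0.
      def up do
        execute "CREATE EXTENSION IF NOT EXISTS citext"
      end

      def down do
        execute "DROP EXTENSION IF EXISTS citext"
      end
    end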

Let us now return to our Post model. Models in Phoenix are actually Elixir structs, which in turn are just good old maps. They allow you to address fields more conveniently using dot notation, so we do not need to use Map.get/3. Below is our post model:
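
Roughly what the generated model looks like in a Phoenix 1.2-era project (newer versions start the file with use Ecto.Schema and import Ecto.Changeset instead):

    # web/models/post.ex
    defmodule MyBlog.Post do
      use MyBlog.Web, :model

      schema "posts" do
        field :title, :string
        field :body, :string

        timestamps()
      end

      # Builds a changeset from an existing struct and new params.
      def changeset(struct, params \\ %{}) do
        struct
        |> cast(params, [:title, :body])
        |> validate_required([:title, :body])
      end
    end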

The first line of the model imports the functions appropriate for models in our application. By default these come from Ecto, which is a kind of ORM library for Elixir (it is basically a database wrapper). We are not going to use most of these functions until later, so let's move on for now. The next few lines define the schema used by the model. The definition contains the name of the database table, the fields, their types and default values, and the timestamps macro, which, again very similarly to Rails, adds the default inserted_at and updated_at columns (the id primary key is added by default as well).

The schema is also where we declare all of the model's associations. I really like the fact that the schema is included in the model itself, so I am not forced to use separate libraries, such as annotate for documenting columns in ActiveRecord models.

The next function is changeset, and here we can already see a very cool pattern in action. A changeset is a function that takes an already existing model (the first parameter, struct), applies new parameters to it (e.g. from an HTML form), and checks whether the model is still valid after those changes. “What's so amazing about that?”, you will probably ask. After all, we have had this in Rails for years. The whole difference lies in how validation is done. The “traditional” Rails way is to declare validations in the global scope of the model. In Phoenix, we do this inside a specific function. If the business logic requires a different set of validations, e.g. depending on the type of user, we can simply write another changeset function. Of course, this kind of solution is also available in the Ruby world; just use dry-validation or form objects. In practice, however, very few projects in the Rails ecosystem use these approaches, while in Phoenix they are the default. The changeset function also shows the pipe operator at work; if you have ever dealt with more complex bash commands, this will not be new to you. The pipe operator takes our struct, which holds our data, applies the new parameters to it (we do not change the struct itself: immutability!), and at the end validates the parameters of the new struct. There are, of course, other validation functions you can use; all of them can be found in the Ecto.Changeset documentation.
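
For example, a second, purely hypothetical changeset with looser rules for drafts could live right next to the default one:

    # Hypothetical: drafts only need a title, plus a minimal length check.
    def draft_changeset(struct, params \\ %{}) do
      struct
      |> cast(params, [:title, :body])
      |> validate_required([:title])
      |> validate_length(:title, min: 3)
    end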

That is enough for today's entry. We learned what the differences are between migrations and models in Rails and in Phoenix. In a nutshell: the migrations in both frameworks are very similar, while the models differ quite significantly, as Phoenix pushes for a different approach than ActiveRecord. The default style in Phoenix asks: “how about not creating global validation rules in the model and then having to limit their scope?”. I have to admit, I really like this style. During my adventure with Rails, I realized that a lighter model is a better model. The fewer states I have to think about, the better.

In the next episode of the series, we will look at the request-response cycle and how to influence it through pipelines.

Bartosz Łęcki, Ruby on Rails Developer, Netguru
