Programming Posts

Django vs. Rails in 2019 (hint: there is a clear winner)

I recently had the opportunity to work on a production application built with the Django framework. I had previously worked with the framework (version 1.4) at a tech startup circa 2012, but hadn’t revisited it in several years. Having now worked professionally with both frameworks, and at their current release versions, I feel I can offer a more comprehensive, in-depth comparison than most. I am a bit stunned by how far Django has fallen behind in dimensions such as its broad state of maturity, community/ecosystem, productivity and much more. I wanted to write a post enumerating these points, in the hopes that it can help others deciding between these two frameworks.

tl;dr – in 2019, unless you have some critical dependency on a Python library and cannot work around it by integrating that library as a separate service, just choose Rails.

Without further ado, here is a point-by-point assessment of Django in relation to Ruby on Rails in 2019:

1) No intelligent reloading

This might seem an odd place to start, but to me it is actually very important and symbolic of the broader state of the Django framework and its comparative stagnation over the past decade. Django’s development server, by default, entirely reloads your project when you modify a file. Django’s default shell (and indeed, even the extended shell_plus) exposes no faculty at all for reloading modified code – you must manually exit and restart it after any relevant code change. This is in contrast to Rails’ intelligent class loading, which has been a feature of the framework for over a decade and continues to see optimization and refinement (such as with the introduction of Xavier Noria’s Zeitwerk in Rails 6). Modifying a file in development causes only the affected class to be reloaded in the development server’s runtime, and even an active REPL can be manually updated with the new class implementation via a call to the globally-exposed reload! method.
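For illustration, the workflow in a Rails console looks roughly like this (the model and its greeting method are hypothetical):

    # In a running rails console session:
    User.greeting        # => "Hello"
    # ...edit app/models/user.rb so that greeting returns "Hi"...
    reload!              # reloads the application's autoloaded code in place
    User.greeting        # => "Hi"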

2) Dependency management

Rails directly integrates bundler as its solution for dependency management. When you pull down a Rails project to work on, you will find all explicit dependencies declared in the Gemfile in the project root, and a full hierarchical dependency graph in the Gemfile.lock. Furthermore, because the framework is bundler-aware, dependencies are able to hook directly into your application with sensible defaults using the faculties provided by Railties, and without any needless boilerplate.
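As a sketch of what that hooking looks like (the gem, its Railtie and the task file here are hypothetical), a gem can ship a Railtie and have its defaults wired into the host application at boot:

    # lib/my_gem/railtie.rb, shipped inside the gem
    require "rails/railtie"

    module MyGem
      class Railtie < Rails::Railtie
        # Runs during the host application's boot; the application itself
        # needs no wiring code at all.
        initializer "my_gem.defaults" do |app|
          MyGem.logger ||= Rails.logger
        end

        # Rake tasks bundled with the gem become available to the host app.
        rake_tasks do
          load "my_gem/tasks.rake"
        end
      end
    end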

Meanwhile, Django’s guide does not even broach the topic of dependencies. In my experience, most projects rely on Virtualenv, pip and a requirements.txt file, while a small number rely instead on setuptools and a setup.py file.

I don’t have much experience with the setup.py-based approach, but as for the Virtualenv + requirements.txt approach, there are some major shortcomings:

  • Dependencies are not environment-specific (i.e. there is no equivalent of bundler groups; see the Gemfile sketch after this list). This makes it much more of a chore to prevent test or development dependencies from bloating your production deployment. Perhaps it is possible to manage per-environment dependencies by using several different requirements files, but I have never tried it, and I imagine it would prove a chore when switching from a “development” to a “testing” context, for instance, while working on a project.
  • The dependency file is flat, not hierarchical – you cannot easily see which package is a dependency of which, making it difficult to prune old dependencies as your project evolves (though there is a third-party tool that can help here).
  • Because the framework lacks direct integration with the dependency management tool, and lacks a system of hooks like Railties, there is always tedious boilerplate left for you to write whenever you integrate a third-party library (“app”) with your own Django project.
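For contrast, a minimal Gemfile (gem names and versions illustrative) shows how bundler groups keep development and test dependencies out of a production deployment:

    # Gemfile
    source "https://rubygems.org"

    gem "rails", "~> 6.0"
    gem "pg"

    group :development, :test do
      gem "rspec-rails"
      gem "pry"
    end

Deploying with bundle install --without development test skips those groups entirely, while Gemfile.lock records the full resolved dependency graph.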

3) I18n and localization

Rails has an extremely mature and pragmatic internationalization (I18n) and localization framework. If you follow the framework’s documentation and use its t (translate) and l (localize) helpers in the appropriate places, your application will be, by default, internationalization- and localization-ready in a very deep way.

Rails’ I18n and localization faculties are pragmatic to a degree that I have not seen, in fact, in any other web application framework (nor in frameworks for other platforms, for that matter). I18n/localization in Rails works with a thread-local pseudo-global I18n.locale attribute. What this means is that, with a single assignment such as I18n.locale = :de, values application-wide (for your current thread) will be seamlessly translated. This comes in extremely useful in contexts such as background jobs or emails, where you can simply have some code like the following and have things seamlessly rendered in the correct language and with correct localizations (i.e. proper date/time formatting, inflections, etc.):
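(A sketch; the job, mailer and the user’s locale attribute are illustrative.)

    class ReminderJob < ApplicationJob
      def perform(user)
        # Thread-local assignment: only this job's thread is affected.
        I18n.locale = user.locale   # e.g. :de
        # Everything below (mailer views, t/l calls, date formatting)
        # now renders in the user's locale with no further plumbing.
        ReminderMailer.weekly_digest(user).deliver_now
      ensure
        I18n.locale = I18n.default_locale
      end
    end

The block form, I18n.with_locale(user.locale) { ... }, does the same thing and restores the previous locale automatically.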

Django’s translation framework similarly depends on a thread-local configuration (translation.activate('(locale code)')), but is built atop gettext, a translation framework that I would argue is significantly less practical.

Whereas in Rails, you can do something like t('.heading') from a template app/views/users/profile.html.erb and the framework will know to pull the translation with the key users.profile.heading, the analogous translation call in Django would be something like:
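(A sketch; the template path and source string are illustrative.)

    {% comment %}templates/users/profile.html{% endcomment %}
    {% load i18n %}
    <h1>{% trans "Heading" %}</h1>

In Python code the equivalent is django.utils.translation.gettext (conventionally aliased to _), again keyed on the literal source string rather than a path-derived key like users.profile.heading.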

Translations are managed in rigid “.po” (“portable object”) files:
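(An illustrative fragment; entries abbreviated.)

    # locale/de/LC_MESSAGES/django.po
    msgid "Heading"
    msgstr "Überschrift"

    msgid "Welcome, %(name)s"
    msgstr "Willkommen, %(name)s"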

But there is much, much more to distinguish Rails’ I18n/localization faculties from those of Django. Django’s translation framework is not at all dynamic/context-aware. This means that the framework will fail to cope, in any seamless way, with more complex pluralization rules like those of Slavic and Arabic languages, which are reasonably accommodated by Rails’ more dynamic localization scheme.
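As a sketch of the Rails side (locale data abbreviated; the pluralization rules for languages like Russian come from the rails-i18n gem):

    # config/locales/ru.yml
    ru:
      items:
        one: "%{count} файл"
        few: "%{count} файла"
        many: "%{count} файлов"
        other: "%{count} файла"

    # I18n.t(:items, count: 1)  # => "1 файл"
    # I18n.t(:items, count: 3)  # => "3 файла"
    # I18n.t(:items, count: 7)  # => "7 файлов"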

4) ActiveSupport

ActiveSupport is a library of utilities that, for practical purposes, can be thought of as an extension of the Ruby standard library. The breadth of these utilities is extensive, but they all evolved from a sense of pragmatism on the part of Rails’ maintainers. They include, for instance, an Array#to_sentence utility, which lets you do things like:
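(A small illustration of to_sentence’s default output.)

    require "active_support/core_ext/array/conversions"

    ["tea", "coffee"].to_sentence
    # => "tea and coffee"

    ["affogato", "cappuccino", "macchiato"].to_sentence
    # => "affogato, cappuccino, and macchiato"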

ActiveSupport also includes extensive date/time-related utilities (and operator overloads/core type extensions) that allow for intuitive expression of date- and time-related calculations:
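(A small illustration; return values depend on the current date and time.)

    require "active_support/all"

    2.weeks.ago                     # a Time two weeks before now
    30.minutes.from_now             # a Time half an hour from now
    Date.today.beginning_of_month   # the first day of the current month
    Date.today + 6.months           # calendar-aware date arithmetic
    1.day + 12.hours                # durations compose (ActiveSupport::Duration)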


5) Far less mature asset management

6) Less mature caching faculties

7) Less flexible, more boilerplate-heavy ORM

8) Impractical migration framework

9) Management Commands vs Rake Tasks

10) No assumption of multiple environments

11) No credential encryption

12) Less mature ecosystem (as concerns PaaS, etc)

13) Boilerplate, boilerplate, boilerplate (as concerns absence of autoloading, Railties, etc).

14) Missing Batteries (Mailer Previews, ActiveJob, Parallel Testing etc)

15) Philosophy

Think of “constraints are good” (e.g. the extremely limited template language in Django) or “explicit is better than implicit” (as opposed to “convention over configuration”). I think “explicit over implicit” as a knock on Rails really misses the point. Everything “implicit” is actually just configuration of an extremely well-documented system. Ask any Rails developer of even modest experience about “magic” and they will tell you that there isn’t any magic in the framework. A lot of the things that people coming from other backgrounds might cite as magic in a “hot take” are really just instances of pragmatic API design (where the underlying implementation is wholly comprehensible and often very well-documented to boot).

Scaling with Rails

I have spent much of the past decade building web applications professionally. Though not exclusively so, a large part of this work has been with the Ruby on Rails framework. There is a widespread notion that monolithic Rails applications don’t scale, but in my experience such applications are often among the most readily scalable I have encountered.

I wanted to share some high-level insights and personally-acquired knowledge on this subject, in the hopes that such things may be taken into consideration by teams or developers weighing their options in architecting a web application for scale.

Leverage often comes in knowing what not to build

Something I have observed in my work over the last few years is that many genuinely competent, highly productive developers are tricked by their very skill into undertaking projects that, though ultimately successful to a degree, produce outcomes much poorer than had they simply integrated with a third-party platform. Oftentimes immense leverage can be gained simply by knowing what not to build.

One of the very wonderful things about the moment that we live in is how many mature platforms – both open source and closed, community-driven and commercial – we have at our disposal.

On microservices and distributed architectures

Like the boiling frog, we often fail to appreciate just how significantly the infrastructure upon which we rely as developers has improved over the last decade. When I began working with the Rails framework in 2010, everything from the hardware we used for local development, to the infrastructure upon which we tested and deployed, was positively anemic by today’s standards.

My personal laptop, reasonably high-end for the time, had a 5400 RPM spinning disk and 2 GB of RAM. SSDs were exotic, even on servers. Nowadays, you can get bare-metal servers with 512 GB to 1 TB of RAM, two multi-core CPUs and terabytes of fast SSD storage for a price that is perfectly reasonable for even small companies. Similarly, you can easily and cheaply launch fleets of high-spec virtual servers with providers like Amazon Web Services and DigitalOcean at a few minutes’ notice.

In many ways, it seems to me that we are often basing architectural decisions on imagined constraints. In my experience, a decision to embrace a microservices architecture should not follow primarily from concerns about scalability.

Some reflections on slow travel
Belgrade, Serbia

I’ve spent the last couple of years doing what I would call slow travel, spending periods of a month or more in different cities. This began in late 2016 with a job that brought me to Berlin for months at a time over a period of a year. I quit that job last October and in the time since have been through Europe, Asia, the US, and then back around to Europe, where I am for the next several months with no fixed end in sight. I wanted to compile some observations that I’ve made in this time.

Why not give users equity?

There is a perspective from which the venture model that dominates a large subset of the tech industry is a little baffling. The infrastructure necessary to operate web/software products has never been cheaper. To an extent, the manpower has also never been cheaper (in the respect that it is increasingly practical to hire from a global talent pool spanning regions with very modest salaries). Still, the model that dominates Silicon Valley and other leading innovation hubs is one of selling off huge chunks of equity and pursuing liquidity for shareholders (via acquisition or IPO) over autonomy and sustainability (i.e. indefinite operation with profit distributions).

I say this is “baffling”, but there are reasons why the model persists.

A simple recipe for forwarding webhooks to your local development environment

If your web application integrates with third-party services via webhooks, you will likely have encountered the need to forward webhook requests to a local development server.

My solution to this had long been a free utility called Localtunnel, which forwards requests from a randomly-generated public host (e.g. [random-prefix].localtunnel.me) to your local machine. A major problem with Localtunnel is that it will not provide you with a persistent URL, so you end up having to constantly reconfigure services to point to newly-allocated, temporary URLs.

If you have a VPS or dedicated server, you can use SSH remote forwarding to accomplish the same thing that Localtunnel does, but with a persistent remote hostname and port. This way you can configure development webhooks once for your third-party integrations and be set.
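A minimal sketch of that recipe (hostname, ports and the webhook path are illustrative; it assumes you can edit the remote server’s sshd configuration):

    # On the server, allow forwarded ports to bind on public interfaces:
    #   /etc/ssh/sshd_config
    #     GatewayPorts yes
    #   ...then restart sshd.

    # On your machine: expose local port 3000 as port 8080 on the server.
    ssh -N -R 8080:localhost:3000 deploy@hooks.example.com

    # Configure the third-party service to deliver webhooks to:
    #   http://hooks.example.com:8080/webhooks

Wrapping the ssh command in autossh (or a small systemd service) keeps the tunnel alive across network interruptions.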

A tribute to Brutalist web design
Apartment block near Alexanderplatz, Berlin

I spent a good part of last year living in Berlin, encountering large, Cold War-era constructions like the apartment block pictured above on my morning walk. This style of architecture, distinguished by exposed concrete cast in hard lines, with little paint or ornamentation, belongs to an architectural school known as Brutalism, and was very popular through the mid-20th century, especially in Eastern Europe.

The term has since been transposed to the online realm with brutalistwebsites.com, a site put together in 2016 by Pascal Deville (now Creative Director at Freundliche Grüsse). I wanted to pay tribute to the tongue-in-cheek term by recognizing some of my own favorite Brutalist websites.