Why are we pessimistic about optimistic UI for web applications?

Just like everyone else, one of the problems we face while developing web applications is how slow the website feels from the user’s point of view. Once upon a time, in the prehistoric times of the internet, we used to write websites in PHP, and every user action caused the entire page to be refreshed. The webpage seemed relatively unresponsive – any time users did something, they had to wait a disproportionate amount of time before the change was saved to the server and reflected back on the webpage.

As time passed, we decided to try a different approach and join the growing trend of the SPA (Single Page Application). Although the SPA caused some difficulties at the beginning (why doesn’t the “back” button work?), it eliminated one basic problem: all of a sudden, all the user’s actions were asynchronous and did not require the whole page to be updated. But we decided to go one step further and tried something called Optimistic UI.

The moment the user performed an action, such as renaming an advertising campaign, the web application behaved as if the action had succeeded before we received a response from the server. At first glance, the advantage is clear: the web application looks completely responsive because it never waits for anything. The webpage shows you the new name of the campaign before we receive a response from the server. In fact, it couldn’t be any more responsive or faster! In order for this strategy to have a chance to succeed at all, we had to meet some prerequisites:

  • First of all, we had to ensure that the vast majority of HTTP requests to the server really went through. It wouldn’t work so well if every second request failed. And we really tried to make them all go through! Even in the browser itself, we validated everything we could before sending it to the server. Whenever the site sent a request to the server, it was 99.9% certain that it would go through, unless there was a database outage or some similar hell.
  • “You didn’t need to read the response from the server?” you ask. We didn’t! On the server, all actions are divided into Commands and Queries. A query is a typical GET request that returns data from the server and does not change the state of the system. A command is a typical POST / PUT / DELETE request that changes the state of the system but does not return any data. Each command results either in an error or in empty data. The webpage had nothing to wait for; it optimistically assumed that the command would go through on the server and did not expect any data back from it.
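The command/query split and the optimistic flow described above can be sketched roughly like this. Everything here is illustrative – the names (`renameCampaign`, `CommandResult`) and the in-memory `Map` stand in for whatever the real app used:

```typescript
// A command either succeeds with no payload or fails with an error --
// the caller never gets data back, only a success/failure signal.
type CommandResult = { ok: true } | { ok: false; error: string };

// Hypothetical command: rename a campaign on the server.
// In the real app this would be a POST/PUT; here we just simulate success.
async function renameCampaign(id: number, newName: string): Promise<CommandResult> {
  return { ok: true };
}

// Local state the UI renders from.
const campaigns = new Map<number, string>([[1, "Spring sale"]]);

// Optimistic flow: update local state first, fire the command, and only
// deal with the (rare) failure afterwards by rolling back.
function renameOptimistically(id: number, newName: string): void {
  const previous = campaigns.get(id)!;
  campaigns.set(id, newName); // UI reflects the change immediately
  renameCampaign(id, newName).then((result) => {
    if (!result.ok) {
      campaigns.set(id, previous); // roll back on the rare failure
    }
  });
}
```

The important property is that `renameOptimistically` returns immediately: the UI never waits for the server, which is exactly what made the page feel instant – and exactly what made failures awkward to handle.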

“So what if the command didn’t go through?”

I swear I could hear you saying that! It was a stumbling block. Despite all efforts, it of course sometimes happened that the command didn’t go through. Maybe the database wasn’t really running, maybe we had a bug in the code, or maybe the user entered a tunnel and lost their internet connection. In such a situation, the command did not go through and we had to solve it somehow.

It definitely wasn’t an easy task.

Imagine that you’re a user and you’re browsing the web and you click on something. You change the settings here, you increase the budget there, you rename something, you adjust the targeting, delete some labels, assign some new ones… And only now the webpage finds out that the budget change did not go through on the server for some reason.

Something got stuck on the server, and the frontend found out after 20 seconds that the budget couldn’t be changed. How should it let the user – who is by now editing a completely different campaign in a different part of the website – know about this? And what should we do with the other settings the user changed in the meantime? Should we revert everything the user has done since then? When the user renames a campaign three times and the second renaming fails, should we pretend that everything is fine? And what if the user has already closed the window where they made the change? What if they closed their laptop and went for lunch?

These are all questions to which there are certainly good answers. Nevertheless, we lived quite a long time without really solving these errors. And when we did treat an error, it was more of a “oh, something went wrong, please refresh the page”. Surprisingly, this seemed to be enough for a long time, and our customers weren’t complaining much. On the other hand, some of our customers were comfortable with the fact that it took over ten minutes to generate a report during peak times because they could at least make coffee… The “Are the customers complaining? No.” benchmark was not exactly a good one in this case. This was all happening in the pre-Redux era: the web application was written in AngularJS, and the campaign name change was implemented as a simple assignment = “newName”. Which isn’t exactly the best possible implementation if you want to support things like undo.

How to get out of this? Pessimistically!

A few years later, we started developing a new frontend using Angular 2. We were once again wondering whether we want to go down the optimistic path. And we were very pessimistic about it. Our thought process went like this:

  • If we have slow commands, then undo operations – and error handling in general – are hard to solve. It is foolish to inform the user after X seconds that the penultimate action they performed did not actually go through. It is difficult for a developer to implement, and it is difficult for a user to understand.
  • If we have quick commands, then no optimistic interface is needed, because quick commands will not delay the user. The webpage can always wait for the result of the command and react only after success. It is easy for a developer to implement, and it is easy for a user to understand.
  • Almost every action in our web interface is important. If one of the thousands of likes you give on Facebook fails, the world won’t collapse. If you think that you changed the campaign’s budget from a million crowns to one hundred thousand crowns and it didn’t go through, then it is a major problem.

And that’s the reason why we chose to return to a pessimistic UI. If the user changes any setting, we send a command to the server, and the frontend obediently waits for the result. In the meantime, the user sees a progress bar informing them that something is happening. Fun fact: because the vast majority of commands are really fast, we had to artificially extend the time for which the progress bar is displayed; otherwise it would flash for only about 100 ms and the user would not even notice it. It looked strange, more like a malfunction. The user can also edit other parts of the web page at the same time; they don’t have to blindly wait until this one specific command goes through. In the end, the user is informed – for example via a checkmark – whether their action was successful. Or we display some beautiful error message somewhere. Users are satisfied that they immediately see a result that is also reliable.
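The artificial minimum display time for the progress bar can be implemented as a parallel wait: run the command and a fixed delay side by side, and resolve only when both have finished, so a fast command can never make the indicator flash. A minimal sketch – the 500 ms default and the function names are made up for illustration, not taken from our codebase:

```typescript
// Resolve after the given number of milliseconds.
const delay = (ms: number): Promise<void> =>
  new Promise((resolve) => setTimeout(resolve, ms));

// Wait for the command AND a minimum delay, so that a command finishing
// in ~100 ms still shows the progress bar for a perceivable amount of time.
async function withMinimumDuration<T>(
  command: Promise<T>,
  minMs = 500
): Promise<T> {
  const [result] = await Promise.all([command, delay(minMs)]);
  return result;
}
```

The caller would show the progress bar, `await withMinimumDuration(sendCommand())`, then hide the bar and show the checkmark or the error message. A slow command is unaffected; a fast one is padded up to `minMs`.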

In fact, this is a win/win situation for both users and developers. Users gained a more reliable website thanks to the pessimistic UI. Developers simplified their work. What more could we want?
