10th Nov 2020
7 min read

ARK Explorer 4.0 - Under The Hood

Today we’ll talk about the tech stack behind our upcoming release of the brand new ARK Explorer. It has received a major overhaul to improve its reliability and performance while adding new features.

‘Beta Testing’ phase of the Explorer on our Development Network starts Tuesday, 17th of November 2020 (date subject to change). You can learn more by joining our Slack (#devnet channel).

Reliability

One major downside of the previous version of the Explorer was that if the node it was connected to was under heavy load, you could end up with timeouts or get rate limited, making the Explorer unusable for a short period of time. These limitations make sense in day-to-day operations to avoid overwhelming nodes, but you want the Explorer to be up 24/7 with as little downtime as possible.

Relying on the API of ARK Core had several downsides in terms of reliability:

  1. There were random timeouts during periods of high traffic.
  2. Rate limiting could make the Explorer temporarily unusable.
  3. Most importantly, if the API server crashed, the Explorer became useless. All the data was still stored in the database, but you could not access it until the API was back up.

Resolving these issues meant we had to drastically change how the Explorer interfaces with Core. The new release benefits greatly from interfacing directly with the Core database. Let’s go into more detail about those benefits.

Tech-Stack

When planning the new Explorer we looked at the features we wanted to build, how best to achieve them, and how the tech stack we had been using in the past would hold up. The biggest changes we wanted to make were to move from the API to the database, maintain or improve performance, and expose more features.

After looking at all of those goals and the technology we had been using, it became clear that it would be quite laborious to build a proper solution with TypeScript in a reasonable amount of time. Eventually we chose to move to a more modern solution that we had already been using for all of our internal and upcoming web projects.

Laravel and Livewire are the frameworks that power all of our internal projects and all our upcoming projects such as MarketSquare, Deployer and Nodem. Paired with TailwindCSS, they let us rapidly prototype applications without having to think about all of the nitty-gritty things like templating, database interactions, interactive UIs and everything else you have to consider when building modern web applications.

Laravel is the foundation of the new Explorer. It drives the entire back-end, providing a powerful ORM that lets us easily get up and running with the Core database. Its templating is simple and extensible. Most importantly, it allows us to reuse our standardised UI components from our internal projects. This ensures that the new Explorer UI is consistent with our new brand guidelines and products.
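
To give a feel for what this looks like, here is a minimal sketch of an Eloquent model mapped onto Core’s existing blocks table. The class name and the `explorer` connection are illustrative assumptions, not the actual Explorer code:

```php
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;

// Read-only view of Core's existing "blocks" table. Core manages this
// table itself, so there are no Laravel-managed timestamp columns.
final class Block extends Model
{
    // Hypothetical dedicated connection to the Core database.
    protected $connection = 'explorer';

    protected $table = 'blocks';

    public $timestamps = false;
}
```

With that in place, querying the chain is plain Eloquent, e.g. `Block::orderByDesc('height')->limit(10)->get()` for the ten most recent blocks.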

Livewire is the heart of the UI in the new Explorer, allowing us to build a user interface that can reload parts of the page without us ever having to touch JavaScript. We still use JavaScript in the Explorer, but only when we have no other choice. When we do need it, we use Alpine.js. Alpine.js is developed by the same people who made Livewire and is designed to play nicely with it, making modern user interfaces a joy to build instead of a chore.
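
As a rough illustration of the pattern (again hypothetical, not the actual Explorer code), a Livewire component that keeps a list of the latest blocks fresh needs nothing more than this:

```php
<?php

namespace App\Http\Livewire;

use App\Models\Block;
use Livewire\Component;

// Renders its own slice of the page; Livewire re-renders it over AJAX
// whenever it is polled or one of its actions is called.
final class LatestBlocks extends Component
{
    public function render()
    {
        return view('livewire.latest-blocks', [
            'blocks' => Block::orderByDesc('height')->limit(10)->get(),
        ]);
    }
}
```

In the accompanying Blade view, wrapping the markup in `<div wire:poll.10s>` is enough to refresh the list every ten seconds, with no hand-written JavaScript involved.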

Features

When working with an API you are always limited by the data it exposes. This meant that to implement new features we either had to make changes to Core, or write a plugin to expose all of the needed data and then update that plugin every time we added a new feature or needed to fix a bug.

Making changes to Core to expose more data isn’t always an option. Exposing data through the API that is expensive to compute can impact the performance of the network, and there isn’t much justification for exposing such data for a single application. Doing the same through a plugin would provide the data we need, but it still means relying on an external HTTP server, which is prone to crashing, to feed us the data. There are also latency issues, and rate limiting would still be needed to avoid spam.

We have circumvented all of these limitations by interfacing directly with the Core database, which is powered by PostgreSQL. A direct link to PostgreSQL gives us a rock-solid, proven way to access all the data Core stores, without having to worry about random bugs or traffic spikes, since the database is only accessible from a specific IP address. A side effect of these changes is that we are no longer limited in the features we can build, because we have full access to all data. Later in this article you’ll see that we are now able to add long-requested features without any major performance impact.
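
In Laravel terms, this direct link is nothing more than an extra database connection. A sketch of what such a connection could look like in config/database.php, with placeholder credentials for a read-only PostgreSQL user, is shown below:

```php
// config/database.php (excerpt). The connection name and the
// EXPLORER_DB_* environment variables are placeholders; the database
// user behind them should only have SELECT privileges on Core's tables.
'connections' => [
    'explorer' => [
        'driver'   => 'pgsql',
        'host'     => env('EXPLORER_DB_HOST', '127.0.0.1'),
        'port'     => env('EXPLORER_DB_PORT', '5432'),
        'database' => env('EXPLORER_DB_DATABASE', 'ark_mainnet'),
        'username' => env('EXPLORER_DB_USERNAME', 'explorer_readonly'),
        'password' => env('EXPLORER_DB_PASSWORD', ''),
        'charset'  => 'utf8',
        'sslmode'  => 'prefer',
    ],
],
```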

Performance

Performance is the most important concern for an application whose sole purpose is letting you browse data. With an external API you have no real control over how fast the data reaches you: the server could be queueing your requests, busy caching, or transforming data before you get a response.

When you interface directly with the database, you are only limited by the database setup and its performance. Slow queries can often be resolved by adding composite indices or making use of caching at the database level. Our use case works well with server-side caching through Redis and database queries that make efficient use of indices.
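
As a hypothetical example of the composite-index side of this, a Laravel migration covering a common Explorer query, “all transactions sent by a wallet, newest first”, could look like this (the table and column names assume Core’s schema):

```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

class AddSenderTimestampIndexToTransactions extends Migration
{
    public function up()
    {
        Schema::table('transactions', function (Blueprint $table) {
            // Composite index: filter by sender, then sort by time,
            // served from the index without a full table scan.
            $table->index(['sender_public_key', 'timestamp']);
        });
    }

    public function down()
    {
        Schema::table('transactions', function (Blueprint $table) {
            $table->dropIndex(['sender_public_key', 'timestamp']);
        });
    }
}
```

In practice an index like this would be created on the Core database itself by the node operator, since the Explorer’s own database user only reads.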

All expensive operations, like aggregating votes, computing all-time fees forged or getting the latest block for every delegate, are performed through cronjobs, with the results cached in Redis. This greatly improves performance because we don’t need to execute them for every visitor that opens the Explorer. Aggregating this data through the Core API would be impossible without making nodes more vulnerable to DDoS attacks, given the performance hit those operations would cause on the average node.
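
A minimal sketch of this cronjob-plus-Redis pattern, using a scheduled artisan command (the command name and cache key are made up for illustration):

```php
<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;
use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\DB;

// Pre-computes an expensive aggregate on a schedule and caches the
// result, so page loads never have to run the heavy query themselves.
final class CacheTotalFeesForged extends Command
{
    protected $signature = 'explorer:cache-fees';

    protected $description = 'Aggregate all-time fees forged and cache the result';

    public function handle(): int
    {
        // Heavy aggregate over the whole blocks table; runs once per
        // schedule tick instead of once per visitor.
        $totalFees = DB::connection('explorer')->table('blocks')->sum('total_fee');

        // With CACHE_DRIVER=redis this lands in Redis and stays there
        // until the next scheduled run overwrites it.
        Cache::forever('fees:all-time', $totalFees);

        return 0;
    }
}
```

Registering it in the scheduler, e.g. `$schedule->command('explorer:cache-fees')->everyTenMinutes();` in App\Console\Kernel, turns it into the cronjob described above.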

We cache as much data as possible to avoid unnecessary database queries and serve data as fast as possible. We will continue to improve performance once the new release is out in the wild and reveals which repeated or expensive queries are good candidates for caching.

Security

By now you might have figured out that another benefit of this new setup is increased security. You no longer have to expose a server with a public API whose address anyone can discover by simply looking at the network tools in their browser.

The recommended setup is to run a Core instance behind a firewall, with remote access to its database restricted to a read-only user. This instance should have all public services, like APIs and webhooks, disabled to reduce its load as much as possible. Its sole job should be to sync data, leaving as many resources as possible to PostgreSQL.

This instance needs at a bare minimum 8GB of RAM and 4 CPU cores. For smooth operations that can handle all of the data aggregation and computations, we recommend at least 16GB of RAM and 8 CPU cores. The ideal setup is an instance with 32GB of RAM and 16 CPU cores, which leaves the Explorer and Core more than enough room for spikes in resource consumption.

To further improve the stability and performance of your setup, you can run a separate database server that is only accessed by Core and the Explorer, instead of running the database and Core on the same server. Keep in mind that the Core and database servers should be in the same geographic location to avoid high latency, which could delay I/O operations.

Conclusion

While end users will only see the new front-facing side, developers will get a solid and stable foundation to show their ARK-based chain in the best possible light.

Be sure to check our ‘Beta Testing’ phase of the Explorer on our Development Network starting Tuesday, 17th of November 2020 (date subject to change). You can learn more by joining our Slack (#devnet channel).
