WEBVTT

NOTE Created by CaptionSync from Automatic Sync Technologies www.automaticsync.com

00:00:00.266 --> 00:00:12.996 align:middle
Hello. So I hope you're having a good time so
far; we're getting through the first day slowly.

00:00:12.996 --> 00:00:25.946 align:middle
So, I'm Jordi Boggiano and I want to
talk today about hosting applications

00:00:25.946 --> 00:00:31.536 align:middle
across multiple regions, meaning
geographically, ideally across the whole planet.

00:00:33.026 --> 00:00:36.956 align:middle
Um, so just a quick word about myself.

00:00:36.956 --> 00:00:41.076 align:middle
I've been doing internet
things for quite a while now.

00:00:41.076 --> 00:00:46.616 align:middle
I've also been leading Composer
and Packagist development,

00:00:46.906 --> 00:00:49.316 align:middle
a bunch of other open source projects.

00:00:50.966 --> 00:00:53.156 align:middle
And for, let's say like work...

00:00:53.156 --> 00:01:02.776 align:middle
kind of getting money, you know, because one has
to at some point: I work part time at teamup.com

00:01:02.776 --> 00:01:06.776 align:middle
and part time on Private
Packagist, which kind of helps

00:01:06.776 --> 00:01:12.406 align:middle
to pay for Composer development as well.

00:01:12.406 --> 00:01:17.666 align:middle
So the first question is why would you want
to host something across multiple regions?

00:01:17.956 --> 00:01:26.226 align:middle
Because it, you know, it is definitely going to
cause you some pain, like it's not the easy way.

00:01:26.616 --> 00:01:35.646 align:middle
I mean, obviously I think the main
reason for me is that you have users

00:01:35.646 --> 00:01:41.176 align:middle
across multiple places and
latency tends to be like painful.

00:01:41.766 --> 00:01:48.106 align:middle
So, there was this video that came by the other
day, I don't know if you've seen it but it's

00:01:48.106 --> 00:01:52.666 align:middle
like the effects of one and a half second
of latency in the real world, you know.

00:01:53.826 --> 00:02:04.516 align:middle
I mean it's a funny, like a silly
example, but it's true that a little bit

00:02:04.516 --> 00:02:10.916 align:middle
of latency can really mess with people and it
just makes for a really terrible experience.

00:02:14.706 --> 00:02:21.856 align:middle
The other point is, if you have hosting across
multiple regions, one single region can go down

00:02:21.906 --> 00:02:26.146 align:middle
and in theory, you know, you should
have a resilient system.

00:02:26.416 --> 00:02:33.156 align:middle
You should be able to stay up even in case of
kinda critical host failures, which you know,

00:02:33.156 --> 00:02:36.556 align:middle
these days it's like cloud
infrastructure, it's like all magical

00:02:36.556 --> 00:02:38.496 align:middle
and nothing ever breaks in theory.

00:02:38.586 --> 00:02:43.046 align:middle
But you know, sometimes things go bad.

00:02:43.106 --> 00:02:50.276 align:middle
Like it has happened that AWS had a complete
region down, even though, you know,

00:02:50.276 --> 00:02:52.556 align:middle
they have this concept of availability zones

00:02:52.556 --> 00:02:55.876 align:middle
which in theory should be
completely separate infrastructures.

00:02:56.466 --> 00:03:01.786 align:middle
So within one single region you can host
in different availability zones.

00:03:01.786 --> 00:03:06.826 align:middle
And then, they guarantee
you somehow that, you know,

00:03:06.826 --> 00:03:09.416 align:middle
if one zone goes down, the
others should stay up.

00:03:10.416 --> 00:03:12.636 align:middle
This, in the past hasn't always been true.

00:03:12.636 --> 00:03:18.126 align:middle
So it's not quite enough if
you really want to be safe.

00:03:19.166 --> 00:03:27.126 align:middle
The third point is really, like
most of us, I think, are using CDNs

00:03:27.126 --> 00:03:29.016 align:middle
for at least delivering web assets.

00:03:29.416 --> 00:03:31.816 align:middle
This is a fairly common practice
and it's very easy.

00:03:31.816 --> 00:03:36.136 align:middle
Usually there are lots of tools
and websites that help with that.

00:03:36.946 --> 00:03:40.076 align:middle
But why don't we do it for the rest,

00:03:40.076 --> 00:03:43.316 align:middle
like if we saw that it is
valuable to do it for that.

00:03:43.316 --> 00:03:44.706 align:middle
Why not the rest?

00:03:46.526 --> 00:03:49.296 align:middle
So, why shouldn't you do it?

00:03:49.296 --> 00:03:53.196 align:middle
I mean, I don't know: is
anyone here like hosting things

00:03:53.196 --> 00:03:56.856 align:middle
like on a global scale or
like more than one region?

00:03:57.036 --> 00:04:00.616 align:middle
It's kind of hard to see but I see a few hands.

00:04:00.786 --> 00:04:05.826 align:middle
But like maybe three to five percent I guess.

00:04:06.686 --> 00:04:09.086 align:middle
So this is not so common.

00:04:09.086 --> 00:04:10.646 align:middle
So why don't you do it?

00:04:10.646 --> 00:04:11.476 align:middle
I don't know.

00:04:11.476 --> 00:04:17.956 align:middle
I mean, I came up with a few reasons
why I didn't do it until recently.

00:04:18.206 --> 00:04:25.906 align:middle
Maybe they don't exactly match yours, but
it's just kind of a lead on where to go

00:04:25.906 --> 00:04:26.986 align:middle
and where we could improve things.

00:04:27.896 --> 00:04:31.626 align:middle
I think the first reason is the database.

00:04:31.626 --> 00:04:37.026 align:middle
Like having the database in
multiple regions is a major pain.

00:04:37.106 --> 00:04:43.956 align:middle
Like this is really, like that's
usually already a killer by itself.

00:04:44.366 --> 00:04:47.816 align:middle
Like, if you don't have like a multi-master
setup, then it gets really complicated

00:04:47.886 --> 00:04:52.686 align:middle
to have synchronization across regions
and if the master region goes down,

00:04:52.686 --> 00:04:54.836 align:middle
then what happens to the replicas?

00:04:56.116 --> 00:05:00.466 align:middle
It's tricky, what can you do there?

00:05:00.466 --> 00:05:04.306 align:middle
I think like using one of those, you know,

00:05:04.306 --> 00:05:07.086 align:middle
cloud databases from the
get-go is probably a good idea.

00:05:07.706 --> 00:05:11.916 align:middle
I think the issue is like usually people
start with MySQL on their computer,

00:05:12.116 --> 00:05:16.056 align:middle
and then everything is fine, but
then five years down the line

00:05:16.056 --> 00:05:19.406 align:middle
when you actually need the global
scale, it's like it's too late

00:05:19.406 --> 00:05:21.146 align:middle
and you can't rewrite the entire application.

00:05:21.996 --> 00:05:24.056 align:middle
So this is a bit of an issue.

00:05:25.286 --> 00:05:32.826 align:middle
Um, but like all the providers have some
solutions, and then you have MongoDB,

00:05:32.826 --> 00:05:37.816 align:middle
for example, which is available
no matter what platform you're using.

00:05:38.476 --> 00:05:45.546 align:middle
Ah, there's another talk at the moment
in the other track if you are interested.

00:05:47.196 --> 00:05:54.366 align:middle
Um, anyway, you know, there are some solutions,
I don't have a ton of experience with these

00:05:54.366 --> 00:05:59.236 align:middle
so I don't wanna dive in too much, but you
know there are things that can help you,

00:05:59.236 --> 00:06:02.396 align:middle
but for that, usually you have
to write the application really

00:06:02.396 --> 00:06:06.076 align:middle
with this mindset from the very beginning.

00:06:08.816 --> 00:06:15.486 align:middle
Um, yeah. What I found is that
really like the cloud providers,

00:06:15.486 --> 00:06:21.946 align:middle
they sell this magical cloud thing, but it
usually doesn't actually help you that much.

00:06:21.946 --> 00:06:23.806 align:middle
Like at least for this problem,

00:06:23.806 --> 00:06:26.576 align:middle
I found that there are some
limitations which are really annoying.

00:06:27.616 --> 00:06:30.366 align:middle
Um, like one is Redis.

00:06:30.366 --> 00:06:36.006 align:middle
It's just a simple example, but you can
replicate Redis on AWS within one region,

00:06:36.556 --> 00:06:38.646 align:middle
you can have as many replicas
as you want, no problem.

00:06:39.046 --> 00:06:40.716 align:middle
But you cannot go beyond the region.

00:06:40.716 --> 00:06:44.156 align:middle
If you want to replicate in
another region, there is no way.

00:06:45.256 --> 00:06:49.156 align:middle
Um, so, sure you can host your own
Redis and, you know, you can do things,

00:06:49.206 --> 00:06:52.886 align:middle
but they just don't solve the problem for you.

00:06:54.416 --> 00:06:58.036 align:middle
Um, another issue we had, which was thankfully fixed

00:06:58.196 --> 00:07:02.546 align:middle
about a year ago, was
this concept of a VPC.

00:07:02.686 --> 00:07:08.546 align:middle
I don't know how familiar you are with this, but
just to explain it really quick,

00:07:09.966 --> 00:07:11.596 align:middle
it's kind of like your private network

00:07:11.596 --> 00:07:16.416 align:middle
for all your infrastructure
within AWS, within one region.

00:07:17.316 --> 00:07:22.166 align:middle
And, so ideally you want to keep
things within the private network

00:07:22.166 --> 00:07:24.916 align:middle
and only have one entry point
for like the web, you know,

00:07:25.086 --> 00:07:27.936 align:middle
like the HTTP port as the one entry and that's it.

00:07:27.936 --> 00:07:30.026 align:middle
Like the rest shouldn't be
reachable from the outside

00:07:30.026 --> 00:07:31.726 align:middle
because that's just more secure that way.

00:07:32.866 --> 00:07:37.796 align:middle
Um, if you have things in multiple regions,
usually they need to talk to each other somehow.

00:07:38.296 --> 00:07:44.376 align:middle
Like if you can't connect two VPCs to each
other, that means you need to open everything

00:07:44.376 --> 00:07:50.386 align:middle
up on the internet, and, ehm,
I don't really feel like trusting MySQL

00:07:50.386 --> 00:07:53.266 align:middle
or Postgres authentication to the Internet.

00:07:53.266 --> 00:07:57.156 align:middle
Like, bad things have
happened elsewhere in the past.

00:07:57.506 --> 00:07:58.686 align:middle
I'd rather not take the chance.

00:07:58.786 --> 00:08:05.036 align:middle
So this thankfully has been fixed now, so
it's fine, but it actually guided some

00:08:05.036 --> 00:08:09.566 align:middle
of my decisions in the past,
which is why I'm mentioning it.

00:08:10.296 --> 00:08:17.526 align:middle
And then finally I think awareness is also an
issue, in that most developers work

00:08:17.526 --> 00:08:22.436 align:middle
with like fast internet connections,
they're usually close to their servers

00:08:22.436 --> 00:08:25.026 align:middle
and so they just don't feel
this pain of latency.

00:08:25.176 --> 00:08:29.276 align:middle
Usually, we're not the
ones experiencing the problem,

00:08:29.276 --> 00:08:33.086 align:middle
it's more like users on
some random satellite connection

00:08:33.086 --> 00:08:37.806 align:middle
or in some country far away
from your hosting location.

00:08:39.236 --> 00:08:45.246 align:middle
Ah, so I think the demand
internally is not there usually.

00:08:45.246 --> 00:08:49.426 align:middle
Um, yep, So that's that for the intro.

00:08:49.426 --> 00:08:51.466 align:middle
Now I want to look at a couple of case studies.

00:08:51.596 --> 00:08:58.156 align:middle
And kind of really the idea is just to
share, like, a few approaches I took.

00:08:58.576 --> 00:09:01.146 align:middle
I'm not saying this is like
the ultimate solution

00:09:01.356 --> 00:09:03.606 align:middle
and I'm not trying to sell you a silver bullet.

00:09:04.146 --> 00:09:11.116 align:middle
It's just ideas that might help if you are
attempting this yourself because what I found is

00:09:11.116 --> 00:09:14.326 align:middle
that there's not a lot of
information on how to do this stuff.

00:09:14.326 --> 00:09:18.876 align:middle
Like I've looked at this for years and
it's just, I haven't found a lot of info.

00:09:19.736 --> 00:09:26.076 align:middle
Usually those that do it are mega corporations
with insane budgets, and they can afford to have,

00:09:26.076 --> 00:09:29.776 align:middle
you know, hundreds of DevOps
engineers doing this stuff.

00:09:30.636 --> 00:09:33.456 align:middle
Like for most small companies,
that's just not an option.

00:09:34.056 --> 00:09:37.226 align:middle
So anyway, let's dive in.

00:09:37.226 --> 00:09:39.986 align:middle
So first of all, I wanna
look at packagist.org,

00:09:40.526 --> 00:09:42.946 align:middle
which I guess is something
you're all familiar with,

00:09:42.946 --> 00:09:47.096 align:middle
so that kind of makes it a
bit more interesting, maybe.

00:09:47.906 --> 00:09:55.296 align:middle
Um, so just to look
at what the goals are, what we're trying

00:09:55.296 --> 00:09:59.726 align:middle
to achieve, first of all,
really high reliability.

00:09:59.956 --> 00:10:02.436 align:middle
Because if this goes down, the repository,

00:10:02.436 --> 00:10:07.646 align:middle
like with all the metadata, Composer just
fails and things go bad very quickly.

00:10:08.376 --> 00:10:14.066 align:middle
People tend to use Twitter very quickly.

00:10:16.416 --> 00:10:20.746 align:middle
So, it's like, it has to be up.

00:10:20.996 --> 00:10:24.276 align:middle
That's, that's just a reality.

00:10:25.026 --> 00:10:32.316 align:middle
It also has to be simple, um, for different
reasons, but I just like to keep things simple

00:10:32.316 --> 00:10:37.546 align:middle
if I can, because, you know, we have a
very small team; it's mostly me doing this.

00:10:38.506 --> 00:10:46.016 align:middle
Um, so, you know, again,
there's not a hundred DevOps people

00:10:46.016 --> 00:10:48.186 align:middle
that can do like crazy infrastructure.

00:10:48.506 --> 00:10:54.756 align:middle
It has to be a simple solution that works
well and that just also doesn't break

00:10:55.246 --> 00:10:58.796 align:middle
because I can't be like on the watch 24/7, so.

00:11:00.436 --> 00:11:09.636 align:middle
It has to be global because we really have
users throughout the globe, and ideally low cost

00:11:09.636 --> 00:11:13.666 align:middle
because this being open source,
well there's just not a big budget.

00:11:16.746 --> 00:11:18.766 align:middle
So what did we end up with?

00:11:18.866 --> 00:11:24.666 align:middle
Um, I kind of have a self-built CDN.

00:11:26.146 --> 00:11:34.276 align:middle
So we have like these primary servers that
kind of generate files and all the metadata

00:11:34.276 --> 00:11:38.936 align:middle
and then we have a set of mirrors
that are spread throughout the world.

00:11:39.426 --> 00:11:42.686 align:middle
And those just like synchronize
from the primary.

00:11:43.636 --> 00:11:50.386 align:middle
So the benefits from doing it ourselves versus
using some of the existing providers, uh,

00:11:50.386 --> 00:11:55.756 align:middle
is mainly invalidation: we need really
fast responses, because when users,

00:11:55.756 --> 00:12:00.426 align:middle
like, push something, they want to
run composer update like 10 seconds later

00:12:00.426 --> 00:12:04.986 align:middle
and if it doesn't work, like if it doesn't find
the new tag, they just pushed or something,

00:12:04.986 --> 00:12:07.846 align:middle
then they come and complain.
This needs to happen fast.

00:12:08.776 --> 00:12:15.346 align:middle
Um, these days, I think there are a few
CDN providers which, um,

00:12:16.416 --> 00:12:21.066 align:middle
actually offer invalidation
to the extent that we need.

00:12:21.566 --> 00:12:26.126 align:middle
So I'm looking at maybe switching
to one of them, but we'll see.

00:12:27.476 --> 00:12:35.776 align:middle
Um, then to kind of route people between the
servers, like the replicas, we have Route53,

00:12:35.776 --> 00:12:43.116 align:middle
which is the DNS solution of AWS, um, which
basically does like a latency-based routing.

00:12:43.116 --> 00:12:46.206 align:middle
So it's like if you are close
to this region you get routed

00:12:46.246 --> 00:12:48.556 align:middle
to one of these servers in that region.

00:12:49.606 --> 00:12:53.126 align:middle
Then we have health checks on all the servers.

00:12:53.126 --> 00:12:57.026 align:middle
So if one of them goes down, or is
not responsive anymore from AWS,

00:12:57.026 --> 00:13:01.276 align:middle
it just gets killed off, and people
don't get routed there anymore.

00:13:01.516 --> 00:13:05.556 align:middle
We have like, you know, a
few minutes of DNS TTL.

00:13:05.556 --> 00:13:09.016 align:middle
So it kind of, re-routes people fairly quickly.
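
NOTE
To give a concrete picture, a latency-routed, health-checked DNS record like the one described could be declared roughly like this in Terraform; the domain, IP, and resource names here are made up for illustration:
```hcl
# Hypothetical sketch: one record per mirror region, picked by latency.
resource "aws_route53_health_check" "eu" {
  fqdn              = "repo.example.org"
  port              = 443
  type              = "HTTPS"
  resource_path     = "/packages.json"
  failure_threshold = 3
  request_interval  = 30
}
resource "aws_route53_record" "eu" {
  zone_id        = aws_route53_zone.main.zone_id
  name           = "repo.example.org"
  type           = "A"
  ttl            = 300 # a few minutes of TTL, so re-routing is quick
  set_identifier = "eu-west-1"
  records        = ["203.0.113.10"] # this region's mirror
  latency_routing_policy {
    region = "eu-west-1"
  }
  health_check_id = aws_route53_health_check.eu.id
}
```
If the health check fails, Route53 stops returning this record and clients get routed to the next-closest healthy region.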

00:13:10.436 --> 00:13:20.436 align:middle
And again, in terms of simplicity,
it's really easy to set up a new one.

00:13:20.546 --> 00:13:23.816 align:middle
Like a few weeks ago we had
this issue where, um,

00:13:24.516 --> 00:13:28.096 align:middle
some of the mirrors were just
unreachable for some people.

00:13:28.096 --> 00:13:32.446 align:middle
But the problem
was that it wasn't a global failure.

00:13:32.446 --> 00:13:37.556 align:middle
The servers themselves were fine, it was
a routing issue on the Internet somehow.

00:13:38.196 --> 00:13:43.016 align:middle
So the health checks didn't pick that up
and it was just, some people were affected.

00:13:43.016 --> 00:13:45.776 align:middle
I don't know, probably some of you
in this room had some problems.

00:13:46.646 --> 00:13:52.106 align:middle
Um, but I mean, people were saying, you know,
"I tried at home

00:13:52.106 --> 00:13:55.256 align:middle
and it was fine, and in the office it's
broken"; it was really strange.

00:13:55.256 --> 00:14:01.046 align:middle
So, at some point I just couldn't do anything
really about the Internet routing sadly,

00:14:01.046 --> 00:14:07.416 align:middle
so I just completely swapped these
instances and created new ones

00:14:07.416 --> 00:14:12.526 align:middle
in some other regions, and
it takes me like 20 minutes or so.

00:14:12.846 --> 00:14:14.456 align:middle
It's quickly up to speed.

00:14:14.536 --> 00:14:19.146 align:middle
So, um, so that's good.

00:14:19.326 --> 00:14:24.336 align:middle
Uh, so the setup kind of looks like this
where we have like multiple replica regions.

00:14:25.406 --> 00:14:29.536 align:middle
And you see like for the...

00:14:29.536 --> 00:14:31.526 align:middle
so the metadata is being pulled

00:14:31.526 --> 00:14:36.526 align:middle
from the primary to the replicas,
but the website is not.

00:14:36.826 --> 00:14:40.066 align:middle
So the website we only have
it in one region still.

00:14:40.066 --> 00:14:46.716 align:middle
That's just for simplicity, because yeah, I mean
it's a bit more latency for the website users

00:14:46.716 --> 00:14:50.046 align:middle
that are far away, but that's
just a cost I was okay with.

00:14:50.456 --> 00:14:55.926 align:middle
I mean that's a tradeoff you have to make.

00:14:55.926 --> 00:15:02.426 align:middle
So when users go to the website, it's just
proxied from the replicas to the main one.

00:15:06.806 --> 00:15:09.526 align:middle
So, what are the problems?

00:15:09.526 --> 00:15:13.906 align:middle
Well, as I just mentioned it, this is
only the repository, not the website.

00:15:13.966 --> 00:15:19.236 align:middle
So it's not a complete solution
for sure, but it solves the,

00:15:19.236 --> 00:15:24.616 align:middle
let's say the high availability needs we
have, that are mostly at the repository level.

00:15:25.726 --> 00:15:30.626 align:middle
Um, so that kind of solves the critical part.

00:15:31.536 --> 00:15:39.316 align:middle
Another thing that was kinda weird with this
is that I had reports of missing files:

00:15:39.616 --> 00:15:43.176 align:middle
sometimes people would get a 404
when running Composer.

00:15:43.996 --> 00:15:49.756 align:middle
And it took me quite a while to figure out what
was going on, but the problem was that,

00:15:49.856 --> 00:15:55.206 align:middle
as we route people in a
kind of round-robin fashion,

00:15:55.456 --> 00:16:02.426 align:middle
whenever you resolve the DNS you just go to
this server or that server within one region,

00:16:04.496 --> 00:16:08.746 align:middle
there was a race condition actually,
where one server could be up to speed

00:16:08.746 --> 00:16:13.746 align:middle
with the latest metadata but the other
one not yet, within one single region.

00:16:13.746 --> 00:16:18.366 align:middle
And so one request would go and hit one
server and, like, it would get the filename

00:16:18.366 --> 00:16:22.246 align:middle
to fetch next, then try to get it and
hit the second server that wasn't

00:16:22.246 --> 00:16:25.196 align:middle
up to speed and then you get a 404.

00:16:25.196 --> 00:16:26.766 align:middle
And then, you know, if you retry, it's fixed;

00:16:27.026 --> 00:16:30.356 align:middle
like within seconds, usually
it was fixing itself.

00:16:30.356 --> 00:16:34.196 align:middle
So it was kinda hard to debug why, because
every time someone reported it, I'm like:

00:16:34.246 --> 00:16:38.166 align:middle
"I don't know, the file is
there, I don't see the problem".

00:16:38.906 --> 00:16:44.916 align:middle
Um, so there's a simple proxy hack there where
it's just if the file is not there locally,

00:16:45.056 --> 00:16:51.356 align:middle
it proxies it to the main region instead
of returning a 404, and that kind of solves it.
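
NOTE
The proxy hack described can be done with a plain nginx fallback; a rough sketch, with made-up hostnames and paths:
```nginx
# Hypothetical mirror config: serve metadata locally, and if a file
# hasn't synced yet, ask the primary region instead of returning 404.
location /p/ {
    root /var/www/mirror;
    try_files $uri @primary;
}
location @primary {
    proxy_pass https://primary.example.org;
    proxy_set_header Host $host;
}
```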

00:16:52.186 --> 00:16:55.526 align:middle
Um, just, you know, little things
that you don't necessarily think

00:16:55.916 --> 00:17:06.526 align:middle
of when you get started.

00:17:06.616 --> 00:17:13.206 align:middle
So second use case, um, the second
case study is Teamup.

00:17:13.616 --> 00:17:16.456 align:middle
So that's a calendar application.

00:17:16.456 --> 00:17:22.976 align:middle
So it's also used, like,
pretty much all around the world.

00:17:23.136 --> 00:17:28.376 align:middle
Um, so what are the goals here again?

00:17:28.646 --> 00:17:31.616 align:middle
Global audience: we kind
of need to be everywhere.

00:17:32.916 --> 00:17:39.076 align:middle
Um, low latency just because it's
not a good experience whenever you're

00:17:39.076 --> 00:17:45.206 align:middle
like clicking something and you need to
wait half a second; it's just not nice.

00:17:45.736 --> 00:17:52.916 align:middle
Then high reliability; I say full data
access because obviously it's good if you can,

00:17:53.236 --> 00:17:55.116 align:middle
you know, work fully with the application.

00:17:55.116 --> 00:18:01.476 align:middle
But the really critical part here
is accessing the data because we've had times

00:18:01.476 --> 00:18:06.336 align:middle
where we were down in the past and people
sent us emails, like, completely freaking

00:18:06.336 --> 00:18:08.506 align:middle
out because their entire business was down.

00:18:08.506 --> 00:18:11.966 align:middle
Like they just, somehow they use the calendar,

00:18:12.226 --> 00:18:15.516 align:middle
this one source of information
for everything they have to do.

00:18:15.516 --> 00:18:21.536 align:middle
Which is good, I mean, it's quite fascinating
to see all the use-cases there are out there,

00:18:21.596 --> 00:18:26.286 align:middle
but it's just, that means this is
extremely critical infrastructure

00:18:26.286 --> 00:18:28.156 align:middle
for a lot of small businesses.

00:18:28.156 --> 00:18:34.906 align:middle
And like yeah, we just felt really bad
about like any downtime because we're like:

00:18:34.906 --> 00:18:39.066 align:middle
"oh my God, this is just
someone out there is like sitting

00:18:39.066 --> 00:18:41.206 align:middle
in the office, like completely lost".

00:18:42.226 --> 00:18:46.346 align:middle
Um, I mean it's the same when,
you know, when GitHub is down

00:18:46.346 --> 00:18:47.856 align:middle
or something and don't laugh too much.

00:18:47.906 --> 00:18:50.146 align:middle
Like we have the same issues
with some tools, right?

00:18:50.146 --> 00:18:56.216 align:middle
Like, different
industries have different bottlenecks.

00:18:56.336 --> 00:19:03.756 align:middle
And again, one of the goals there was low
maintenance because we're a very small team,

00:19:03.756 --> 00:19:11.296 align:middle
like only like four or five devs and so it's,
yeah, there's just not a lot of manpower

00:19:11.296 --> 00:19:16.926 align:middle
to keep this going, so it has to be
somewhat self-sustainable and stable.

00:19:19.766 --> 00:19:22.926 align:middle
So what did we end up with?

00:19:23.656 --> 00:19:28.036 align:middle
Um, so what we used was Terraform.

00:19:28.336 --> 00:19:32.516 align:middle
I don't know if you're familiar with
it; it's kind of like Puppet or Ansible,

00:19:32.516 --> 00:19:36.496 align:middle
but for like setting up the, the infrastructure.

00:19:36.616 --> 00:19:42.626 align:middle
So it's kind of high level: you just
configure all the servers, all the VPCs,

00:19:42.806 --> 00:19:45.376 align:middle
all the routing, lots of things.

00:19:46.376 --> 00:19:53.536 align:middle
Um, and that allows you to automate things and
that means it's pretty good because if you need

00:19:53.536 --> 00:19:58.116 align:middle
to add a new region, you can just copy a
few lines and say, OK, this one is now,

00:19:58.116 --> 00:20:00.866 align:middle
like, just load all this
config, but for this region

00:20:00.866 --> 00:20:02.546 align:middle
and serve that region and you're done.

00:20:02.886 --> 00:20:09.426 align:middle
You just run it and it creates
all the servers and everything.
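
NOTE
A rough sketch of what "copy a few lines for a new region" can look like when the per-region setup is a Terraform module; the module layout and names are invented for illustration:
```hcl
provider "aws" {
  alias  = "eu"
  region = "eu-central-1"
}
module "region_eu" {
  source = "./modules/app-region"
  providers = {
    aws = aws.eu
  }
  is_primary = false # replicas: website + database copy, no workers
}
```
Running terraform apply then creates the servers, VPC, and routing for the new region.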

00:20:10.446 --> 00:20:14.296 align:middle
Again, here we had to make some
compromises with what is run where.

00:20:14.296 --> 00:20:19.766 align:middle
So we have the primary region
that has everything like websites,

00:20:19.916 --> 00:20:22.516 align:middle
databases, background workers and all that.

00:20:23.256 --> 00:20:29.986 align:middle
And then the replicas have the
website, a database copy, but no workers.

00:20:32.446 --> 00:20:37.936 align:middle
Um, the other kind of trade-off is
that we're storing the sessions in Redis.

00:20:37.936 --> 00:20:41.186 align:middle
And as I mentioned, you can't
easily replicate across regions.

00:20:41.966 --> 00:20:47.666 align:middle
We thought, well, I mean actually people usually
don't go from one region to the next, like,

00:20:48.056 --> 00:20:53.226 align:middle
you know, unless you are in a rocket or
something, you don't transition from one region

00:20:53.226 --> 00:20:57.206 align:middle
to the next so quickly that
losing a session would be a problem.

00:20:58.236 --> 00:21:04.936 align:middle
So we just decided to have like local session
buckets in every region and that's it.

00:21:04.936 --> 00:21:08.116 align:middle
Like there's no, there's no
concept of global session.

00:21:11.376 --> 00:21:14.876 align:middle
So the reads, like if you
were just looking at the data,

00:21:14.876 --> 00:21:17.496 align:middle
these are handled locally in every single region.

00:21:17.496 --> 00:21:24.136 align:middle
And then, when you write something, um,
we're talking to the primary database

00:21:24.136 --> 00:21:29.946 align:middle
in the primary region, uh, through VPC
peering, because we built this early this year.

00:21:29.946 --> 00:21:31.606 align:middle
So this was available thankfully.

00:21:32.556 --> 00:21:39.676 align:middle
Um, so it looks something like this, with
a primary region, some replica region.

00:21:40.506 --> 00:21:44.906 align:middle
As you see they're really the same
apart from the workers: user comes,

00:21:45.086 --> 00:21:47.766 align:middle
does a GET request, it is handled
locally, no problem.

00:21:49.636 --> 00:21:59.266 align:middle
Um, if the user does a POST doing some changes,
we have the database writes going across.

00:22:00.646 --> 00:22:08.666 align:middle
So we started with the primary region being
on the US west coast and a replica in Europe.

00:22:09.336 --> 00:22:17.376 align:middle
So I don't know if you can spot the problem
there, but the result was something like this.

00:22:17.516 --> 00:22:20.126 align:middle
It was just not super fast.

00:22:20.866 --> 00:22:24.246 align:middle
The reads were going fine of course,
because they were handled locally,

00:22:24.286 --> 00:22:27.846 align:middle
but like the writes were just horribly slow.

00:22:28.836 --> 00:22:30.336 align:middle
Uh, so what happened?

00:22:30.876 --> 00:22:33.966 align:middle
Like I felt really stupid when I realized this.

00:22:34.346 --> 00:22:42.216 align:middle
I don't know why I didn't think of that before,
but obviously every Redis call or SQL query

00:22:42.216 --> 00:22:47.346 align:middle
that you do has to go from Europe to
the US west coast, which means, yes,

00:22:47.436 --> 00:22:50.996 align:middle
somewhere around 100 milliseconds of latency.

00:22:52.236 --> 00:22:57.526 align:middle
So, you know, typical pages
maybe run like 5 or 10 queries.

00:22:57.566 --> 00:23:00.906 align:middle
Well you multiply that by 100
milliseconds and you quickly end

00:23:00.906 --> 00:23:08.706 align:middle
up with a response time that's actually way
worse than hitting the US server directly.
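
NOTE
The arithmetic here is worth spelling out; a back-of-envelope model, where the numbers are illustrative rather than measurements:
```python
# Sequential queries across regions each pay a full round trip.
RTT = 0.100  # ~100 ms Europe <-> US west coast

def page_time(queries, rtt=RTT, handshake_round_trips=2):
    """Wire time: connection setup plus one round trip per query."""
    return (handshake_round_trips + queries) * rtt

# 10 queries: (2 + 10) * 100 ms = 1.2 s in latency alone,
# far worse than the single ~100 ms of hitting the US directly.
print(round(page_time(10), 3))
```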

00:23:08.706 --> 00:23:13.096 align:middle
So, yeah, it wasn't a very proud moment.

00:23:13.956 --> 00:23:20.116 align:middle
But I thought okay, like I can see
there's some issues I can fix here.

00:23:20.116 --> 00:23:25.316 align:middle
I was like, one of the problems is
already establishing the connection.

00:23:25.316 --> 00:23:29.016 align:middle
So just doing a
single query was pretty bad

00:23:29.016 --> 00:23:35.846 align:middle
because establishing the MySQL
connection, including TLS,

00:23:36.606 --> 00:23:38.636 align:middle
means usually like two round trips.

00:23:39.276 --> 00:23:45.456 align:middle
So already just opening the connection, you're
like 200 milliseconds in; then you send the query

00:23:45.456 --> 00:23:47.426 align:middle
and then it's just, it was really bad.

00:23:47.426 --> 00:23:53.686 align:middle
So I thought okay, there's ProxySQL; I
can use it to build connection pools,

00:23:53.686 --> 00:23:55.656 align:middle
then we will reuse the connections,
so we don't have

00:23:55.726 --> 00:23:57.816 align:middle
to have these round trips every single time.
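
NOTE
For reference, pointing ProxySQL at the remote primary is mostly configuration through its admin interface; a minimal hypothetical sketch, with made-up hostnames and credentials:
```sql
-- Run against the ProxySQL admin interface. ProxySQL keeps a pool
-- of open connections to the primary, so each app request skips
-- the cross-region TCP/TLS handshakes.
INSERT INTO mysql_servers (hostgroup_id, hostname, port)
VALUES (0, 'primary-db.example.internal', 3306);
INSERT INTO mysql_users (username, password, default_hostgroup)
VALUES ('app', 'secret', 0);
LOAD MYSQL SERVERS TO RUNTIME;
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
SAVE MYSQL USERS TO DISK;
```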

00:23:58.316 --> 00:24:00.026 align:middle
So that helped, sure.

00:24:00.056 --> 00:24:05.486 align:middle
I mean, it shaved off like
these two round trips.

00:24:05.696 --> 00:24:08.046 align:middle
Um, but yeah, it was still really unworkable.

00:24:08.046 --> 00:24:15.156 align:middle
Like some of the pages were doing
way too many requests and it was just,

00:24:15.156 --> 00:24:18.796 align:middle
so way too many SQL queries and
so it was just not really working.

00:24:20.266 --> 00:24:24.716 align:middle
So we changed the approach and I'm like,

00:24:24.716 --> 00:24:31.536 align:middle
I'm a very stubborn person,
so I didn't want to give up.

00:24:31.536 --> 00:24:38.906 align:middle
So what we ended up with, I don't know if anyone
else is doing this, maybe it's a crazy solution.

00:24:38.966 --> 00:24:44.816 align:middle
Like I haven't had a lot of feedback on
this, but feel free to let me know later.

00:24:45.806 --> 00:24:48.456 align:middle
Uh, so we proxied the write.

00:24:48.456 --> 00:24:53.146 align:middle
So if we see a POST coming in or
like a DELETE request, we say okay,

00:24:53.146 --> 00:24:57.256 align:middle
this is gonna do some modifications
on the primary database.

00:24:57.396 --> 00:25:02.966 align:middle
So we don't want to handle this locally on the
replicas, we just proxy the entire request.

00:25:03.616 --> 00:25:10.656 align:middle
The problem is you can't really do
this like at the Nginx level easily

00:25:10.656 --> 00:25:13.516 align:middle
because you don't have the
sessions in the main region.

00:25:14.886 --> 00:25:19.946 align:middle
So if you just proxy the request in
Nginx that's the easy way to do it,

00:25:20.086 --> 00:25:26.516 align:middle
but then you're missing the session data
and so yeah, you just find yourself logged

00:25:26.516 --> 00:25:30.376 align:middle
out when you're trying to do some
modifications, and that doesn't really work.

00:25:31.196 --> 00:25:39.546 align:middle
Uh, so I implemented this in php instead, and so
in the application when we see requests coming

00:25:39.546 --> 00:25:45.936 align:middle
in and it's a POST or something that's
going to modify the content, we say, okay,

00:25:45.936 --> 00:25:48.446 align:middle
we'll just take the local session, pack it

00:25:48.636 --> 00:25:54.066 align:middle
up in a header, and forward everything
including that header.

00:25:54.066 --> 00:26:04.376 align:middle
We also forward the client IP, obviously with the
usual, like "forwarded" headers and so on.

00:26:04.576 --> 00:26:10.476 align:middle
To make sure that this is, you know, not
possible to abuse because the problem is,

00:26:10.476 --> 00:26:17.196 align:middle
we're now like, unserializing session data from
headers, which, you know, it's not the best idea

00:26:17.196 --> 00:26:21.836 align:middle
in terms of security or taking user content
and like just dumping it into the session

00:26:21.836 --> 00:26:26.016 align:middle
and serializing it, like you want to be
really careful when you do things like that.

00:26:26.876 --> 00:26:33.976 align:middle
Um, so definitely we do use
HTTPS over the proxy link.

00:26:34.726 --> 00:26:40.326 align:middle
Uh, it's also going through the VPC peering
for additional security, and on top of that,

00:26:40.326 --> 00:26:45.276 align:middle
we also sign the requests just to make
sure that nobody can inject a request

00:26:45.596 --> 00:26:48.076 align:middle
that would kind of deserialize session data.
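
NOTE As an editor's illustration of the signing scheme described above (the actual app is PHP; the names, header layout, and shared secret here are hypothetical), an HMAC over everything that gets forwarded lets the primary reject tampered requests before unserializing any session data:

```python
import base64
import hashlib
import hmac
import json

SHARED_SECRET = b"replace-with-a-real-secret"  # hypothetical inter-region secret

def pack_session(session: dict) -> str:
    # Serialize the local session into a header-safe string.
    return base64.b64encode(json.dumps(session).encode()).decode()

def sign(method: str, path: str, body: bytes, session_header: str) -> str:
    # Sign method, path, body, and the packed session together,
    # so none of them can be swapped out in transit.
    msg = b"\n".join([method.encode(), path.encode(), body, session_header.encode()])
    return hmac.new(SHARED_SECRET, msg, hashlib.sha256).hexdigest()

def verify(method: str, path: str, body: bytes,
           session_header: str, signature: str) -> bool:
    expected = sign(method, path, body, session_header)
    return hmac.compare_digest(expected, signature)

# Replica side: pack and sign before proxying the POST to the primary.
hdr = pack_session({"user_id": 42})
sig = sign("POST", "/events", b"title=demo", hdr)
# Primary side: verify before unserializing the session from the header.
assert verify("POST", "/events", b"title=demo", hdr, sig)
assert not verify("POST", "/events", b"title=evil", hdr, sig)
```

TLS plus VPC peering protects the transport; the signature additionally protects against anyone who can reach the endpoint but does not hold the secret.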

00:26:50.626 --> 00:26:55.696 align:middle
So with all of this, not too bad.

00:26:57.296 --> 00:27:05.146 align:middle
Then if somehow the proxying
fails, we will go back to actually dealing

00:27:05.146 --> 00:27:11.906 align:middle
with the request locally as we would otherwise,
and we do the slow SQL request over the ocean

00:27:11.906 --> 00:27:16.166 align:middle
and it takes a while then to run, but
at least it completes successfully.
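
NOTE The fallback logic can be sketched like this (editor's sketch; the handler names are hypothetical and the real implementation lives in the PHP application):

```python
def handle_write(request, proxy_to_primary, handle_locally):
    """Prefer proxying the write to the primary region; if the proxy
    hop fails for any reason, handle it locally over the slow
    cross-ocean SQL connection. Slow, but it completes."""
    try:
        return proxy_to_primary(request)
    except Exception:
        return handle_locally(request)

def failing_proxy(request):
    # Simulate the cross-region hop failing.
    raise IOError("proxy link down")

result = handle_write({"method": "POST"}, failing_proxy,
                      lambda request: "handled-locally-slowly")
assert result == "handled-locally-slowly"
```

The fast path and the correct-but-slow path share one entry point, so a flaky trans-ocean link degrades latency instead of dropping requests.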

00:27:19.166 --> 00:27:21.176 align:middle
So, now it looks something like this.

00:27:21.786 --> 00:27:29.046 align:middle
Um, so if you do a POST, you come through
and then we send the same exact POST,

00:27:29.236 --> 00:27:33.176 align:middle
but with some additional headers for the
session, the client IP and the signature.

00:27:33.766 --> 00:27:36.826 align:middle
I hope this makes sense.

00:27:38.736 --> 00:27:47.086 align:middle
So the results were, kinda, you know, a faster
turtle for sure, but still a turtle.

00:27:48.336 --> 00:27:58.676 align:middle
So what happened here was like when I tried
this, I was like trying to disable the replica

00:27:58.676 --> 00:28:05.376 align:middle
and just hit the US servers directly, and I got
an average of something like 120 milliseconds,

00:28:05.666 --> 00:28:10.546 align:middle
like roundtrip time to, for
any kind of request response.

00:28:10.546 --> 00:28:12.586 align:middle
It was around that ballpark.

00:28:13.306 --> 00:28:17.306 align:middle
So it's not terrible, because
the link to the US is fairly good.

00:28:18.276 --> 00:28:28.236 align:middle
Um, but then I noticed when I used the replica
on the reads, I got about 20 milliseconds.

00:28:28.846 --> 00:28:32.106 align:middle
That was super fast, hitting
the local server, great.

00:28:32.676 --> 00:28:36.846 align:middle
But when I did a POST, well I had this
20 milliseconds to reach the replica

00:28:36.846 --> 00:28:41.966 align:middle
and then the replica internally would,
for some reason add 200 milliseconds.

00:28:42.196 --> 00:28:44.506 align:middle
And I was like, what the hell?

00:28:44.506 --> 00:28:52.016 align:middle
I don't understand how it's much slower to
execute from the replica within the AWS network

00:28:52.016 --> 00:28:59.856 align:middle
and everything, I would think this
runs faster than me hitting the US.

00:28:59.856 --> 00:29:07.656 align:middle
So yeah, it just turned out that, actually,
the proxy had to open a connection every time.

00:29:07.706 --> 00:29:11.866 align:middle
Again you have this round trip time
of like opening the SSL connection.

00:29:13.186 --> 00:29:14.576 align:middle
So what can we do here?

00:29:14.576 --> 00:29:16.816 align:middle
Well, we can again do some connection pooling.

00:29:17.256 --> 00:29:23.526 align:middle
So, uh, this time I added
a local Nginx proxy

00:29:23.526 --> 00:29:26.216 align:middle
because we anyway have Nginx
running on the local machine.

00:29:26.216 --> 00:29:29.706 align:middle
So we just, instead of proxying
to the US directly,

00:29:29.706 --> 00:29:34.456 align:middle
we proxy to the local proxy; a proxy for the proxy, in a way.

00:29:34.616 --> 00:29:37.796 align:middle
Ok, it's getting complicated.

00:29:39.016 --> 00:29:46.206 align:middle
But that way, I mean, this
adds really nothing in terms of complexity,

00:29:46.206 --> 00:29:53.856 align:middle
like these ten lines of config in Nginx and
like this never fails, it's quite reliable.

00:29:53.856 --> 00:29:59.036 align:middle
You just have to make sure that
you do a few things like this...

00:29:59.036 --> 00:30:01.746 align:middle
oh, that's interesting.

00:30:03.066 --> 00:30:05.846 align:middle
This laser pointer doesn't
work at all on the screen.

00:30:06.676 --> 00:30:08.916 align:middle
Anyway, I shall use the mouse.

00:30:09.716 --> 00:30:16.506 align:middle
Um, so you want to make sure here that you
set the HTTP version to 1.1 to make sure

00:30:16.506 --> 00:30:20.216 align:middle
that you have keep alive enabled,
or you could use HTTP/2,

00:30:20.256 --> 00:30:23.456 align:middle
but I don't think Nginx supports
it for proxying.

00:30:24.436 --> 00:30:29.326 align:middle
Um, and the other one you
really need is this Connection one:

00:30:29.476 --> 00:30:33.046 align:middle
overriding the Connection header just in
case the client has a "Connection: close"

00:30:33.046 --> 00:30:36.306 align:middle
in the request just to make sure it's gone.

00:30:37.116 --> 00:30:44.016 align:middle
Um, and then you set this keepalive on top.
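
NOTE The config being described might look roughly like this (editor's reconstruction; the upstream name, address, and port are made up):

```nginx
# Local proxy that keeps warm TLS connections to the primary region.
upstream primary_region {
    server us-west.example.internal:443;  # hypothetical primary-region host
    # Idle-connection cache, kept *per worker process*: with 16-32
    # workers, each worker only caches 8 connections, which the talk
    # comes back to below.
    keepalive 8;
}

server {
    listen 127.0.0.1:8080;

    location / {
        proxy_pass https://primary_region;
        # Keepalive to the upstream requires HTTP/1.1...
        proxy_http_version 1.1;
        # ...and the Connection header cleared, in case the client
        # sent "Connection: close".
        proxy_set_header Connection "";
    }
}
```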

00:30:44.016 --> 00:30:50.396 align:middle
Um, this kinda went well, but at
first I had very mixed results.

00:30:50.396 --> 00:30:57.086 align:middle
Like I was trying it and it would sometimes
be fast, so sometimes I had the request

00:30:57.156 --> 00:31:00.026 align:middle
getting a response within like 80 milliseconds.

00:31:00.906 --> 00:31:05.556 align:middle
And then sometimes it was still doing this
200 millisecond overhead and I was like,

00:31:05.806 --> 00:31:08.166 align:middle
I just don't get it, like it was really random.

00:31:08.766 --> 00:31:15.616 align:middle
And then eventually I figured out that
this keepalive 8, like the way Nginx works,

00:31:15.616 --> 00:31:22.546 align:middle
it actually just allocates these eight buckets
and then you have these worker processes

00:31:22.546 --> 00:31:27.346 align:middle
in Nginx that says, you know, typically
you set this to like the amount

00:31:27.346 --> 00:31:29.676 align:middle
of CPU cores you have or
double that or something.

00:31:29.676 --> 00:31:40.186 align:middle
So, on a big server you maybe have like
16 or 32 of these Nginx processes and each

00:31:40.186 --> 00:31:41.986 align:middle
of them actually has eight connections.

00:31:42.816 --> 00:31:46.746 align:middle
And so depending on which one you hit, you
hit one that has a connection open or not.

00:31:47.476 --> 00:31:50.656 align:middle
And so like sometimes it was
fast, sometimes it wasn't.

00:31:50.656 --> 00:31:55.316 align:middle
And so like, it's just the kind
of things that make you lose a lot

00:31:55.316 --> 00:31:57.266 align:middle
of time for really small details.

00:31:58.086 --> 00:32:01.336 align:middle
But I'm glad I understood where it
was coming from because it was kind

00:32:01.436 --> 00:32:05.896 align:middle
of not making me feel good to
have this somewhat random result.

00:32:08.206 --> 00:32:16.126 align:middle
Um, okay. So what are the other problems
that we're having with the solution now?

00:32:16.736 --> 00:32:19.836 align:middle
Uh, because with this, now
it's consistently faster to hit

00:32:19.836 --> 00:32:22.756 align:middle
like the European server,
this works really well.

00:32:23.746 --> 00:32:30.516 align:middle
Um, the issue is, once you're
going across the ocean and more,

00:32:30.516 --> 00:32:33.606 align:middle
there is always something
that's going to fail.

00:32:33.606 --> 00:32:36.806 align:middle
Like there's always one request,
that's gonna fail.

00:32:37.266 --> 00:32:42.396 align:middle
Like if you do enough requests, some
of them are going to fail sometimes.

00:32:42.396 --> 00:32:48.256 align:middle
So that's why I kept this fallback step,
so that in the worst case we can kind

00:32:48.256 --> 00:32:54.246 align:middle
of handle it slower but at least it's handled.

00:32:54.246 --> 00:32:59.826 align:middle
Other issue I had, and this one I didn't
get to the bottom of, is the load balancer

00:32:59.826 --> 00:33:03.116 align:middle
on AWS sometimes times out these requests.

00:33:03.426 --> 00:33:09.046 align:middle
And I just, I don't get it, like looking at all
the logs, it seems to be hitting the servers,

00:33:09.196 --> 00:33:15.316 align:middle
gets a response like within 20 milliseconds
or so - it's not a timeout problem.

00:33:15.316 --> 00:33:18.196 align:middle
But for some reason it just
gets stuck at some point.

00:33:18.936 --> 00:33:20.656 align:middle
It just never comes back.

00:33:20.896 --> 00:33:28.456 align:middle
So eventually we just had to decide to abandon
the load balancer for the proxied requests

00:33:28.456 --> 00:33:35.206 align:middle
and we just hit the EC2 machines directly,
which is not ideal, but that's just how it is.

00:33:36.216 --> 00:33:40.166 align:middle
Like, it actually works
better in the end than with it.

00:33:41.726 --> 00:33:49.036 align:middle
Other challenges, as we are sending the session
via the headers, session size is then limited

00:33:49.036 --> 00:33:51.066 align:middle
because the header size has a limit.

00:33:52.296 --> 00:33:58.346 align:middle
Um, so this may or may not be an issue for you,
like we actually hardly use the session,

00:33:58.346 --> 00:34:01.746 align:middle
so it's really just for marking
the user as logged in or not.

00:34:02.536 --> 00:34:04.136 align:middle
So there's very little data in it.

00:34:04.846 --> 00:34:09.996 align:middle
So this doesn't really hurt us, but, you
know, if you're storing like tons of stuff

00:34:09.996 --> 00:34:15.016 align:middle
in the session, it definitely might be
preventing this whole thing from flying.

00:34:17.746 --> 00:34:23.696 align:middle
So in the end we got to this
point where it's like super fast.

00:34:23.696 --> 00:34:28.076 align:middle
Um, so what are the downsides though?

00:34:28.076 --> 00:34:30.316 align:middle
Just to quickly recap.

00:34:30.316 --> 00:34:36.026 align:middle
In case the primary is down, we're still
like in a read-only state on all the replicas.

00:34:36.116 --> 00:34:41.606 align:middle
Like that's just something you can't really
fix unless you have a multi-master setup.

00:34:42.056 --> 00:34:46.886 align:middle
I don't think we're getting there,
like with our current team size.

00:34:46.886 --> 00:34:53.266 align:middle
Like, AWS announced I think
last year that they will

00:34:53.266 --> 00:34:58.476 align:middle
at some point release a multi-master
Aurora, which is like this kind

00:34:59.806 --> 00:35:03.436 align:middle
of AWS-implemented version of
MySQL and Postgres and whatnot.

00:35:04.456 --> 00:35:07.966 align:middle
I don't think this is out
there yet, but I'm not sure.

00:35:07.966 --> 00:35:08.956 align:middle
I haven't checked in a while.

00:35:10.006 --> 00:35:13.856 align:middle
I also don't understand how they can
possibly guarantee this will work.

00:35:13.856 --> 00:35:18.066 align:middle
Just conceptually, it
doesn't make any sense to me,

00:35:18.066 --> 00:35:21.716 align:middle
but they have very smart people
so maybe they figured out a way.

00:35:21.716 --> 00:35:28.256 align:middle
It would be really amazing if we could just like
switch from using MySQL or Postgres to using

00:35:28.256 --> 00:35:33.786 align:middle
like a multi-master MySQL or Postgres, like by
just pressing a button, that would be great.

00:35:34.466 --> 00:35:35.996 align:middle
But we'll see.

00:35:37.566 --> 00:35:44.386 align:middle
Um, the workers, I mentioned they're only
in the primary region, it just makes sense

00:35:44.386 --> 00:35:47.406 align:middle
for latency reasons: those
are writing lots of stuff.

00:35:47.746 --> 00:35:52.046 align:middle
We just don't want to have
them all over the place.

00:35:52.046 --> 00:35:57.206 align:middle
It's a danger, yes, like if the whole region
is down, that means the workers are down,

00:35:57.366 --> 00:36:02.276 align:middle
but it's just something we have to live with.

00:36:02.406 --> 00:36:09.466 align:middle
Other downside, definitely higher complexity
than, you know, the other solution.

00:36:09.466 --> 00:36:10.766 align:middle
But it also does a lot more.

00:36:10.766 --> 00:36:15.186 align:middle
So I think it pays off for us.

00:36:16.676 --> 00:36:24.116 align:middle
So the end result of this kind of
infrastructure: that's the history

00:36:24.116 --> 00:36:30.236 align:middle
of our uptime, like from
February 14, 2014 to now.

00:36:30.236 --> 00:36:37.276 align:middle
We had lots of really bad months where it
was like down to 99 percent uptime.

00:36:37.276 --> 00:36:38.326 align:middle
Like this is really bad.

00:36:38.506 --> 00:36:45.726 align:middle
I don't remember the numbers but
this is like really, really bad.

00:36:46.606 --> 00:36:54.496 align:middle
And now in the last 9 months or so since
we migrated, we have something that's more

00:36:55.456 --> 00:36:58.316 align:middle
up at like four nines, almost.

00:36:59.366 --> 00:37:01.876 align:middle
There was a glitch there in
July, not sure what it was anymore,

00:37:02.876 --> 00:37:08.126 align:middle
but otherwise it's really been super
stable and we're really happy with this.

00:37:11.196 --> 00:37:17.236 align:middle
Ok, so just to sum up quickly,
I think you really have

00:37:17.236 --> 00:37:19.346 align:middle
to take this on a case-by-case basis.

00:37:20.656 --> 00:37:23.406 align:middle
The first point is to look at
the audience location obviously.

00:37:23.406 --> 00:37:29.676 align:middle
I mean if you're doing some website that's
only for like German users, you know,

00:37:30.416 --> 00:37:36.876 align:middle
sure you want to probably host it in Germany
or somewhere nearby, and you don't need to have

00:37:37.026 --> 00:37:44.216 align:middle
like some, some region in Sydney because
yeah, it just doesn't make any sense.

00:37:44.216 --> 00:37:45.886 align:middle
So that's a per-project thing.

00:37:46.016 --> 00:37:51.646 align:middle
I don't know, depends on what you're working on.

00:37:51.986 --> 00:37:57.086 align:middle
The other issue is really: what are
the requirements in terms of latency,

00:37:57.086 --> 00:38:00.006 align:middle
like what's okay latency for you.

00:38:00.006 --> 00:38:04.976 align:middle
Like, you know, if you think taking
something and you get a response

00:38:04.976 --> 00:38:09.426 align:middle
like 300-400 milliseconds later is
fine, then that's your number, right?

00:38:09.426 --> 00:38:13.116 align:middle
Like you gotta look at what you want to achieve.

00:38:13.736 --> 00:38:18.796 align:middle
Like we wanted to really try and bring
this to kind of snappy, like instant feels,

00:38:18.796 --> 00:38:25.756 align:middle
so it kind of feels
like a desktop app, kind of.

00:38:26.046 --> 00:38:30.756 align:middle
Um, but yeah, that's, that's again
something you need to evaluate for yourself.

00:38:31.016 --> 00:38:36.666 align:middle
Then I think one of the big factors in like
deciding how complex you go is the team size.

00:38:36.756 --> 00:38:40.306 align:middle
Because I think anything is possible

00:38:40.376 --> 00:38:47.456 align:middle
but like some solutions require really
big teams and that's not always available.

00:38:48.206 --> 00:38:57.576 align:middle
Then finally like the tech stack again, like
go for a cloud database or wait for Amazon

00:38:57.576 --> 00:39:05.496 align:middle
to solve physics and just come up with this
magical database that will do everything great.

00:39:06.816 --> 00:39:08.816 align:middle
And that's that.

00:39:09.096 --> 00:39:09.976 align:middle
Thank you very much.

00:39:10.016 --> 00:39:25.396 align:middle
I think we have like one and a half minutes
for questions unless I got the time wrong,

00:39:26.156 --> 00:39:27.836 align:middle
so I'm not sure if there are any questions.

00:39:28.546 --> 00:39:33.266 align:middle
Yes. I don't know if we do
microphones here or not.

00:39:34.056 --> 00:39:37.026 align:middle
Okay: just shout, I'll repeat it.

00:39:37.206 --> 00:39:46.026 align:middle
So why are we only running the
workers on the primary region?

00:39:47.066 --> 00:39:51.926 align:middle
Uh, because otherwise you would
have this latency, like let's say,

00:39:51.926 --> 00:40:01.626 align:middle
as they run in the background anyway, like
having them near the user has no benefits.

00:40:01.626 --> 00:40:03.226 align:middle
Ok, I can't hear you anymore sorry.

00:40:04.056 --> 00:40:05.466 align:middle
Let's discuss this later.

00:40:06.486 --> 00:40:08.336 align:middle
Anyway, yeah.

00:40:08.336 --> 00:40:09.186 align:middle
So that's, that.

00:40:09.186 --> 00:40:10.476 align:middle
Enjoy the rest of the conference.

00:40:10.476 --> 00:40:11.276 align:middle
Thank you very much.

00:40:11.846 --> 00:40:19.736 align:middle
If you have questions, please come by.

