Scaling a Monolith with 5 Different Patterns

CodeOpinion

1 year ago

13,276 views

Comments:

@mohammadutd2323 - 05.01.2023 18:00

Can you make a video about multi-database, multi-tenant apps?

@SergijKoscejev - 05.01.2023 19:01

I love this channel. It changed my mind on software development

@PelFox - 05.01.2023 22:34

I feel like microservices, besides cloud scaling, are about scaling teams of developers. Having 200 developers working in the same monolith could become messy, whereas if each team owns certain services it's a non-issue.

However, what I often see is one developer or one team making everything microservices, and then they also have to manage all of that themselves while the system has like 100 users.

@MichaelKocha - 06.01.2023 03:20

Me, a 3d game artist thinking this was a video on procedural materials for game environment art, being insanely confused 3 minutes into the video wondering when the art stuff was going to start.

@haraheiquedossantos4283 - 06.01.2023 07:36

Regarding the part where you talked about the email sender not being reliable because of possible failures: should we use the outbox pattern to solve this problem?
I think it's one of the possible ways to solve it.
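
A minimal sketch of the outbox idea being asked about, using SQLite only to keep it self-contained; the table, column, and function names are illustrative, not from the video:

```python
# Outbox pattern sketch: the "send email" intent is recorded in the same
# transaction as the business change, and a separate relay sends it later.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT);
    CREATE TABLE outbox (id INTEGER PRIMARY KEY AUTOINCREMENT,
                         type TEXT, payload TEXT, sent INTEGER DEFAULT 0);
""")

def place_order(order_id: int) -> None:
    # Business write and outbox insert commit atomically, so the email
    # intent is never lost if the process dies right after the commit.
    with conn:
        conn.execute("INSERT INTO orders (id, status) VALUES (?, 'placed')", (order_id,))
        conn.execute(
            "INSERT INTO outbox (type, payload) VALUES (?, ?)",
            ("OrderConfirmationEmail", json.dumps({"order_id": order_id})),
        )

def relay_once(send_email) -> None:
    # Background worker: pick up unsent messages, send them, mark them sent.
    # If sending fails, the row stays unsent and is retried (at-least-once).
    rows = conn.execute("SELECT id, payload FROM outbox WHERE sent = 0").fetchall()
    for outbox_id, payload in rows:
        send_email(json.loads(payload))
        with conn:
            conn.execute("UPDATE outbox SET sent = 1 WHERE id = ?", (outbox_id,))

place_order(1)
relay_once(lambda msg: print("sending email for", msg))
```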

@bobbycrosby9765 - 06.01.2023 12:14

We did a lot of this stuff for our monolithic Facebook apps back in the late '00s. Oh, the memories.

During peak load we would have around 2k writes/sec to our database, and around 15k reads/sec even with heavy caching. I only mention this because people talk about "scaling" but rarely talk actual numbers.

The database was really the hard part. The code was shared nothing and we could just pile another server on top, but the database was another story. We had something like 5 webapp+memcached servers, but a total of 9 MySQL servers.

A seemingly mysterious problem we ran into was our working dataset no longer fitting into the database's memory. Previously instant queries started taking tens of ms, which is way too slow - this was an easy fix, we just bought more memory (eventually, 128GB per server). We also ran into a problem of replication lag - replicas couldn't keep up with the master since replication in MySQL was single-threaded. To help, we had replicas dedicated to certain groups of tables that skipped the rest. We also made sure to hit master and inject the user's freshest data where necessary.
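
A minimal sketch of that "hit master and inject the freshest data" idea, assuming an application-side map of recent writers with a short TTL; the names and the lag window are illustrative:

```python
# Read-your-own-writes routing sketch: reads normally go to a replica, but
# for a short window after a user writes, their reads go to the leader so
# replication lag is not visible to them.
import time

REPLICATION_LAG_WINDOW = 2.0  # seconds; tune to the lag you actually observe
_last_write_at: dict[int, float] = {}

def record_write(user_id: int) -> None:
    _last_write_at[user_id] = time.monotonic()

def pick_connection(user_id: int, leader, replica):
    wrote_recently = (
        time.monotonic() - _last_write_at.get(user_id, float("-inf"))
        < REPLICATION_LAG_WINDOW
    )
    return leader if wrote_recently else replica
```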

A problem with a lot of memory, at least in MySQL, was that cold restarts were painful. After coming back up, a database server wasn't ready to serve requests - it took it an hour or two to "warm up" before it could serve live requests without choking.

I believe the not-in-cache problem is a "thundering herd" - as in, all the requests coming in stampede the database and kick over your sandcastle when it isn't in the cache. We resolved this by also adding a cache-lock key in memcached: if something isn't in the cache, before going to the database, you must set the cache lock key. If you fail, you aren't allowed to go to the database. There's tradeoffs here - the process that has the lock key could die. We set it to expire at something somewhat reasonable for our system - around 100ms.
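
A sketch of that cache-lock approach, assuming pymemcache and a local memcached; the key names and TTLs are illustrative, and since memcached expirations are whole seconds, the ~100ms window would have to be enforced in the application or approximated with a 1-second TTL:

```python
# Stampede protection with a memcached "lock" key: add() only succeeds if
# the key doesn't exist yet, so only one caller per expiry window is
# allowed through to the database.
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))
LOCK_TTL = 1    # seconds (memcached TTLs are whole seconds)
VALUE_TTL = 60

def get_profile(user_id: int, load_from_db):
    key = f"profile:{user_id}"
    value = cache.get(key)
    if value is not None:
        return value

    got_lock = cache.add(f"lock:{key}", b"1", expire=LOCK_TTL, noreply=False)
    if not got_lock:
        # Someone else is already refilling the cache; serve stale/fallback
        # data or retry after a short sleep instead of hitting the database.
        return None

    # load_from_db is assumed to return bytes/str (or configure a serializer).
    value = load_from_db(user_id)
    cache.set(key, value, expire=VALUE_TTL)
    return value
```

The trade-off mentioned in the comment shows up in the lock TTL: if the lock holder dies, the lock simply expires and the next caller gets to refill the cache.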

We were lucky in that our database schema was relatively mundane. It would have been much more difficult with some of the more complex schemas I've worked with over the years.

It would be a lot easier to accomplish these days, at least for this particular project. There wasn't much in the way of actually transactional data, and for the hardest hit tables we could have easily gotten by with some of the NoSQL databases that came a bit later. And nowadays, hardware is much more powerful - I would have killed for an SSD instead of spinning rust.

@mahmutjomaa6345 - 06.01.2023 14:06

How would you implement Replica (Leader/Follower) for EF Core? Would you use LeaderDbContext and FollowerDbContext that both inherit from the same DbContext and disable SaveChanges for the FollowerDbContext?
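
Not an EF Core answer (that one is for the video author), but a minimal language-agnostic sketch of the shape being described: a read-only follower type that refuses to save changes, and a leader type that allows it. The class names are illustrative.

```python
# Conceptual read/write split sketch (not EF Core): writes go through the
# leader, and the follower rejects any attempt to save changes.
class ReadOnlyConnectionError(RuntimeError):
    pass

class Follower:
    def __init__(self, conn):
        self._conn = conn

    def query(self, sql, params=()):
        return self._conn.execute(sql, params).fetchall()

    def save_changes(self):  # the analogue of disabling SaveChanges
        raise ReadOnlyConnectionError("writes must go through the leader")

class Leader(Follower):
    def save_changes(self):
        self._conn.commit()

# usage with sqlite3 connections as stand-ins for leader/replica databases:
# import sqlite3
# leader = Leader(sqlite3.connect("primary.db"))
# follower = Follower(sqlite3.connect("replica.db"))
```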

@roeiohayon4501 - 10.01.2023 00:30

Hi Derek!
I just wanted to say thank you for all of the useful and very educational content you upload.
I am definitely a better programmer and software architect thanks to your videos.
As a person who loves learning, your videos are amazing:)

@krskvBeatsRadio - 12.01.2023 11:13

Just love your integration with the EventStoreDB. Ultimately the integration I get the most value from 😂

@juhairahamed5342 - 13.01.2023 22:51

I have 5 instances of an account microservice that transfer money from account A to account B and then update the data in the Postgres database. My problem is:

A user sent five requests to the account service and all of my instances are working in parallel, with each request going to a different instance. I am already checking whether the user has enough balance,

but after 2 requests the user no longer has enough balance, so I am unsure how to check this and enforce data consistency before a request reaches another instance of the same microservice.

Can you suggest a solution for the above situation?
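
One common approach, hedged since the full requirements aren't known: make the balance check and the debit a single atomic statement in Postgres, so concurrent instances can't both pass the check (SELECT ... FOR UPDATE or optimistic concurrency with a version column are alternatives). This sketch assumes psycopg2; the table and column names are illustrative.

```python
# Conditional UPDATE sketch: the balance check and the debit are one atomic
# operation, so once the balance runs out, further concurrent requests
# update zero rows and are rejected.
import psycopg2

def debit(conn, account_id: int, amount: int) -> bool:
    with conn:  # one transaction, committed on success
        with conn.cursor() as cur:
            cur.execute(
                """
                UPDATE accounts
                   SET balance = balance - %s
                 WHERE id = %s
                   AND balance >= %s
                """,
                (amount, account_id, amount),
            )
            return cur.rowcount == 1  # False means insufficient funds

# usage (connection details are placeholders):
# conn = psycopg2.connect("dbname=bank")
# if not debit(conn, account_id=42, amount=100):
#     print("insufficient balance")
```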

@jannishecht4069 - 15.01.2023 12:38

I would really enjoy a video about different ways how to implement multi tenancy and their implications. Thanks for the terrific content.

@yuliadubov2964 - 22.01.2023 08:04

A very helpful summary, thanks a lot! Definitely applied some of these over the years to our monolith and probably will…
From what I read/hear so far, micro-services have more to do with org structure than with performance or even physical boundaries. If your app is composed of several services, but they are always tested and deployed together - it's not micro-services. It's making them independently deployable that produces complexity. So I think that breaking up the monolith physically but keeping all the CI/CD/deployment flows together can be an extension of this list…

@michaelslattery3050 - 26.01.2023 18:25

For user-facing webapps you can switch from SSR to SSG (or CSR) + CDN. This saves your app server from having to generate pages.
Also, you can use edge cache servers for REST calls. This is like a CDN, but it's a caching reverse proxy. There are several such services.
With bounded contexts and/or vertical slicing, you can have multiple databases and therefore multiple database servers.
In a monolith we once put some of our non-critical tables into an in-memory database (hsqldb). We did this with tables that didn't change or held data we didn't care if we lost.
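
A minimal sketch of that last point, with SQLite's in-memory mode standing in for hsqldb; the table and function names are illustrative, and this only suits data you can afford to lose on restart.

```python
# Keep non-critical, loss-tolerant tables in an in-memory database so they
# never touch the main database server.
import sqlite3

volatile = sqlite3.connect(":memory:")  # gone on restart, and that's acceptable
volatile.execute("CREATE TABLE recently_viewed (user_id INTEGER, product_id INTEGER)")

def record_view(user_id: int, product_id: int) -> None:
    with volatile:
        volatile.execute(
            "INSERT INTO recently_viewed (user_id, product_id) VALUES (?, ?)",
            (user_id, product_id),
        )
```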

@ShighetariVlogs - 14.03.2023 20:04

Thank you for putting out this content!
