The database is always the bottleneck. That is what admins of massive services keep saying in talks about their scaling efforts.
In short:
- Database used to mean SQL
- It is difficult to scale SQL CPU
- It is simple to scale Web-Frontend CPU
- The SQL philosophy puts the burden on the read side by enabling very complex SELECTs and JOINs, while the write side is usually a simple, short INSERT. That is just the wrong concept for a massive world. We need quick and simple read operations, not complex reporting features.
Basically, all you need is a key-value collection store with some indexing, a.k.a. a document store. But whatever you decide on, you are bound to it, and that sucks. So, why not decouple the application logic from the database? Decoupling can be done in different ways. Traditionally, you had a thin database code layer that tried to abstract over different (SQL) databases. Now I need more abstraction, because there might well be a non-SQL database in the mix.
I decided to put a web service style frontend-backend separation between application code and database. This makes the DB a web service. In other words: there is HTTP between application and DB, which allows for massive scaling. Eventually, my DBs can be scaled using web-based load balancing tools. This is great. I can also swap out the DB for another database technology on a per-table basis. Also great, because I do not have to decide on the database technology now, and that is what this article is really about, right?
So, now I design the DB web service interface. I know what I need from the database. These are the requirements:
- Database items (think: rows) are Key-Value collections
- Sparse population: not all possible keys (think: column names) exist for all items
- One quick primary key to access the collection or a subset of key-values per item
- Results are max one item per request. I will emulate complex searches and multi-item results in the application (disputed by Ingo, see Update 1)
- Required operations: SET, GET, DELETE on single items
- Support auto-generated primary keys
- Only data access operations, no DB management.
This is the interface as code:
    interface IStorageDriver
    {
        // Arguments:
        //   sType: Item type (think: table).
        //   properties: The data. Everything is a string.
        //   names: Column names.
        //   condition: A simple query based on property matching inside the
        //              table. No joins. Think: tags, or WHERE a=b AND c=d.

        // Add an item. Returns the auto-created ID.
        string Add(string sType, Dictionary<string, string> properties);

        // Set item properties; may create an item with the specified ID.
        void Set(string sType, string sId, Dictionary<string, string> properties);

        // Fetch item properties by ID or condition; may return only selected
        // properties. Returns the data; everything is a string.
        Dictionary<string, string> Get(string sType, string sId, List<string> names);
        List<Dictionary<string, string>> Get(string sType, Dictionary<string, string> condition, List<string> names);

        // Delete an item by ID. Returns true = I did it, or false = I did not
        // do it because the item does not exist; the result is the same.
        bool Delete(string sType, string sId);
    }
I added the "Add" method to support auto-generated primary keys. Basically, "Set" would be enough, but there are databases or DB schemas which generate IDs on insert, remember?
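For illustration, a minimal usage sketch of the interface (MyStorageDriver is a hypothetical implementation; any IStorageDriver will do):

    using System.Collections.Generic;

    IStorageDriver db = new MyStorageDriver(); // hypothetical driver implementation

    // Add: the driver generates the primary key.
    string sId = db.Add("TestTable", new Dictionary<string, string> {
        { "User", "Planta" },
        { "Age", "3" }
    });

    // Set: write properties for a known primary key (may create the item).
    db.Set("TestTable", sId, new Dictionary<string, string> { { "Age", "4" } });

    // Get: fetch selected properties of a single item.
    Dictionary<string, string> item = db.Get("TestTable", sId, new List<string> { "User", "Age" });

    // Delete: true or false, the result is the same.
    bool bGone = db.Delete("TestTable", sId);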
All this is wrapped up in an SRPC interface. It could be SOAP, but I do not want the XML parsing hassle (not so much the overhead). WSDLs suck. Strong typing of web services is good, but it can be replaced by integration tests under adult supervision.
On the network this looks like:
Request:
    POST /srpc HTTP/1.1
    Content-length: 106

    Method=Data.Add
    _Type=TestTable
    User=Planta
    Age=3
    Identity=http://ydentiti.org/test/Planta/identity.xml
Response:
    HTTP/1.1 200 OK
    Content-length: 19

    Status=1
    _Id=57646
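For illustration, a minimal client-side sketch of this exchange, assuming the plain key=value\n body format shown above (the class name SrpcClient is mine, not part of the protocol):

    using System.Collections.Generic;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class SrpcClient
    {
        readonly HttpClient http_ = new HttpClient();

        // Encode the request as key=value lines and POST it to the /srpc endpoint.
        public async Task<Dictionary<string, string>> CallAsync(string sUrl, Dictionary<string, string> request)
        {
            var sb = new StringBuilder();
            foreach (var pair in request) {
                sb.Append(pair.Key).Append('=').Append(pair.Value).Append('\n');
            }

            var httpResponse = await http_.PostAsync(sUrl, new StringContent(sb.ToString(), Encoding.UTF8));
            string sBody = await httpResponse.Content.ReadAsStringAsync();

            // Parse the response lines back into a key-value dictionary.
            var response = new Dictionary<string, string>();
            foreach (string sLine in sBody.Split('\n')) {
                int nPos = sLine.IndexOf('=');
                if (nPos > 0) { response[sLine.Substring(0, nPos)] = sLine.Substring(nPos + 1); }
            }
            return response;
        }
    }

A Data.Add call is then a CallAsync with Method=Data.Add, _Type=TestTable and the properties; the created ID comes back as response["_Id"].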
Everything is a string. This is the dark side for SQL people. The application knows each type and asserts type safety with integration tests. On the network, all bytes are created equal; they are strings anyway. The real storage drivers on the data web service side convert the values to the database types. The application builds cached objects from data sets and maps the data to internal types. There are no database types as a data model in the application. Business objects are aggregates, not table mappings (LINQ is incredibly great, but not for data on a massive scale).
BUT: I could easily (and backward compatibly) add type safety by adding type codes to the protocol, e.g. a subset of XQuery types, or like here:
    User=Planta
    User/Type=text
    Age=3
    Age/Type=int32
    Identity=http://ydentiti.org/test/Planta/identity.xml
    Identity/Type=url
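If type codes were added, a storage driver could convert the raw strings like this (a sketch; the type names are just the ones from the example above):

    using System;
    using System.Collections.Generic;

    static class SrpcTypes
    {
        // Convert a raw string value using the optional "<key>/Type" meta-data.
        // Without meta-data, everything stays a string.
        public static object ToNativeType(Dictionary<string, string> properties, string sKey)
        {
            string sValue = properties[sKey];
            string sType;
            if (!properties.TryGetValue(sKey + "/Type", out sType)) { return sValue; }
            switch (sType) {
                case "int32": return int.Parse(sValue);
                case "url": return new Uri(sValue);
                case "text":
                default: return sValue;
            }
        }
    }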
The additional HTTP is overhead. But SQL connection setup is bigger and the application is INSERT/UPDATE bound anyway, because memcache will be used massively. Remember the coding rule: the database never notices a browser reload.
Now I can even use AWS S3, which is the simplest massively scalable stupid database, or SimpleDB, with my data web service on multiple load-balanced EC2 instances. I don't have to change anything in the application. I just implement a simple 4-method storage driver in a single page. For the application, swapping the DB technology is a 1-line configuration change.
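To show how small such a driver can be, here is a sketch of one that keeps everything in memory. It is obviously not an S3 or SimpleDB driver, but it implements the same 4-method interface; a real driver replaces the dictionaries with calls to the database:

    using System.Collections.Generic;
    using System.Linq;

    class MemoryStorageDriver : IStorageDriver
    {
        // sType -> sId -> properties
        readonly Dictionary<string, Dictionary<string, Dictionary<string, string>>> data_ =
            new Dictionary<string, Dictionary<string, Dictionary<string, string>>>();
        int nLastId_ = 0;

        Dictionary<string, Dictionary<string, string>> Table(string sType)
        {
            if (!data_.ContainsKey(sType)) { data_[sType] = new Dictionary<string, Dictionary<string, string>>(); }
            return data_[sType];
        }

        public string Add(string sType, Dictionary<string, string> properties)
        {
            string sId = (++nLastId_).ToString(); // auto-generated primary key
            Set(sType, sId, properties);
            return sId;
        }

        public void Set(string sType, string sId, Dictionary<string, string> properties)
        {
            var table = Table(sType);
            if (!table.ContainsKey(sId)) { table[sId] = new Dictionary<string, string>(); }
            foreach (var pair in properties) { table[sId][pair.Key] = pair.Value; }
        }

        public Dictionary<string, string> Get(string sType, string sId, List<string> names)
        {
            Dictionary<string, string> item;
            if (!Table(sType).TryGetValue(sId, out item)) { return null; }
            if (names == null) { return new Dictionary<string, string>(item); }
            return item.Where(pair => names.Contains(pair.Key))
                       .ToDictionary(pair => pair.Key, pair => pair.Value);
        }

        public List<Dictionary<string, string>> Get(string sType, Dictionary<string, string> condition, List<string> names)
        {
            // Simple property matching, no joins: WHERE a=b AND c=d.
            return Table(sType)
                .Where(row => condition.All(c => row.Value.ContainsKey(c.Key) && row.Value[c.Key] == c.Value))
                .Select(row => Get(sType, row.Key, names))
                .ToList();
        }

        public bool Delete(string sType, string sId)
        {
            return Table(sType).Remove(sId);
        }
    }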
I can proxy the request easily and do interesting stuff:
- Partitioning: user IDs up to 1,000,000 go to http://zero.domain.tld; the next million goes to http://one.domain.tld (see the sketch after this list).
- Replication: all the data may be stored twice for long-distance speed reasons. The US cluster may resolve the web service host name differently than the EU cluster. Data is always fetched from the local data service, but changes are replicated to the other continent using the same protocol. No binary logs across continents.
- Backup: I can duplicate changes as a backup into another DB, even into another DB technology. I don't know yet how to back up SimpleDB. But if I need indexing and want to use SimpleDB, then I can put the same data into S3 for backup.
- Eventual persistence: the data service can collect changes in memory and batch-insert them into the real database.
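A sketch of the partitioning proxy's host selection, using the ID ranges from the example above:

    using System;

    static class Partitioning
    {
        // Map a numeric user ID to its partition host: one million IDs per host,
        // host names as in the example above.
        public static string HostForUser(long nUserId)
        {
            string[] hosts = { "http://zero.domain.tld", "http://one.domain.tld" };
            long nPartition = nUserId / 1000000;
            return hosts[(int)Math.Min(nPartition, hosts.Length - 1)];
        }
    }

Because the proxy only sees HTTP, the same trick works for replication and backup: duplicate the request to a second host instead of (or in addition to) picking one.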
Update 1:
Supporting result sets (multi-item) as a 'Get' response might be worth the effort. I propose to have 2 different 'Get' operations. The first takes the primary key and no condition; it will always return at most 1 item. The second takes a condition but no primary key and might return multiple items. (Having both a primary key and a condition in the 'Get' makes no sense anyway.) The multi-item response uses the SRPC Array Response.
On the network:
Request:
    POST /srpc HTTP/1.1
    Content-length: ...

    Method=Data.Get
    _Type=TestTable
    _Condition=Age=3\nGender=male
    _Names=Nickname Identity
Comment: _Condition is a key-value list. It is encoded like an 'embedded' SRPC: a key=value\n format with \n escaping to get it onto a single line. _Names is a value list. Tokens of a value list are separated by a blank (0x20), and blanks inside tokens are escaped by a '\ '. Sounds complicated, but it is easy to parse and read.
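A sketch of both encodings (the helper names are mine):

    using System.Collections.Generic;
    using System.Text;

    static class SrpcEncoding
    {
        // _Condition: an 'embedded' SRPC. key=value pairs are joined by a
        // literal backslash-n so that the whole list fits on a single line.
        public static string EncodeCondition(Dictionary<string, string> condition)
        {
            var sb = new StringBuilder();
            foreach (var pair in condition) {
                if (sb.Length > 0) { sb.Append("\\n"); }
                sb.Append(pair.Key).Append('=').Append(pair.Value);
            }
            return sb.ToString();
        }

        // _Names: a value list. Tokens are separated by a blank (0x20);
        // blanks inside tokens are escaped as "\ ".
        public static string EncodeNames(List<string> names)
        {
            var tokens = new List<string>();
            foreach (string sName in names) {
                tokens.Add(sName.Replace(" ", "\\ "));
            }
            return string.Join(" ", tokens);
        }
    }

EncodeCondition of {Age=3, Gender=male} yields Age=3\nGender=male, and EncodeNames of [Nickname, Identity] yields Nickname Identity, exactly as in the request above.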
Response:
    HTTP/1.1 200 OK
    Content-length: ...

    Status=1
    0:Nickname=Planta
    0:Identity=http://ydentiti.org/test/Planta/identity.xml
    1:Nickname=Wolfspelz
    1:Identity=http://wolfspelz.de/identity.xml
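And a sketch of parsing such an array response back into items (the index:key=value line format is taken from the example; lines without an index, like Status, are skipped):

    using System.Collections.Generic;

    static class SrpcArray
    {
        // Parse "index:key=value" lines of an SRPC Array Response into one
        // key-value dictionary per item index.
        public static List<Dictionary<string, string>> Parse(IEnumerable<string> lines)
        {
            var items = new SortedDictionary<int, Dictionary<string, string>>();
            foreach (string sLine in lines) {
                int nColon = sLine.IndexOf(':');
                if (nColon < 0) { continue; } // e.g. the Status=1 line
                int nEquals = sLine.IndexOf('=', nColon + 1);
                int nIndex;
                if (nEquals < 0 || !int.TryParse(sLine.Substring(0, nColon), out nIndex)) { continue; }
                if (!items.ContainsKey(nIndex)) { items[nIndex] = new Dictionary<string, string>(); }
                items[nIndex][sLine.Substring(nColon + 1, nEquals - nColon - 1)] = sLine.Substring(nEquals + 1);
            }
            return new List<Dictionary<string, string>>(items.Values);
        }
    }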
I have not yet decided about queries with multiple primary keys. They could be implemented as
- SRPC Batch with multiple queries in a single HTTP request, or
- with a specific multi-primary-key syntax, similar to SQL: "WHERE id IN (1,2,3)".
Update 2:
I agree with Ingo that solution 1 (SRPC Batch) makes all operations batchable and keeps the interface simple at the same time. The trade-off, that the web service must detect multi-key semantics from a batch, is probably not too severe. Clients will usually batch only similar requests together. In the beginning, the web service can simply execute multiple database transactions. Later, the web service can improve performance with a bit of code that aggregates the batch into a single multi-key database request, as sketched below.
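A sketch of that aggregation step (the _Id request key for Get-by-ID is my assumption; the post does not show a single-item Get on the wire):

    using System.Collections.Generic;
    using System.Linq;

    static class BatchAggregation
    {
        // Group the Get-by-ID requests of an SRPC Batch by item type and collect
        // their IDs, so the driver can issue one multi-key query per type
        // (e.g. SQL "WHERE id IN (1,2,3)").
        public static Dictionary<string, List<string>> GetIdsByType(List<Dictionary<string, string>> batch)
        {
            return batch
                .Where(request => request.ContainsKey("Method") && request["Method"] == "Data.Get"
                                  && request.ContainsKey("_Id"))
                .GroupBy(request => request["_Type"])
                .ToDictionary(group => group.Key,
                              group => group.Select(request => request["_Id"]).ToList());
        }
    }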
Update 3:
In order to allow for the later addition of type safety and other yet unknown features, I define here, now and forever, that SRPC keys containing a "/" (forward slash) are to be treated as meta-data for the corresponding keys without the "/". Specifically, they must not be treated as database (column) names. That is no surprise from the SRPC point of view, but I just wanted to make it clear. I have no idea why someone would use "/" in key names anyway. I find even "_" and "-" disturbing. By the way: ":" (colon) is also forbidden in keys, for the benefit of SRPC Batch. In other words: please stick to letters and numbers, for heck's sake.
Update 4:
I removed the "Database" argument. "Type" is enough for the data service to figure out where to look for the data. "Type" is a string; it can contain "Database/Table".
_happy_decoupling()
2 comments:
What do you think about a NoSQL session at the Barcamp Hamburg? I would like to hear some developers' opinions about that.
Definitely. Good idea.