JSON is fast becoming the standard format for data interchange and for unstructured data, and MariaDB 10.2 adds a range of JSON-supporting functions, even though a JSON datatype isn't implemented yet. There are several reasons why there isn't a JSON datatype, but one is that there are actually not that many advantages to a native datatype, as JSON is a text-based format. This blog post aims to describe JSON and the use cases for it, to describe the MariaDB 10.2 JSON functions and what they are good for, and to show some other additions in MariaDB 10.2 that are useful for JSON processing.
So to begin with then, why do we need JSON? Or to put it differently, why do we not store all data as JSON? Well, the reason as I see it is that some data we work with really is best treated as schemaless, whereas other data really should be handled in a more strict way, in a schema. Which means that, in my mind, mixing relational data with unstructured data is what we really want. And using JSON for unstructured data is rather neat, and JSON is even standardized (see json.org).
There are reasons why this hasn't always been so. When the sad old git that is writing this stuff started working in this industry, which I think was during the Harding administration, computers were rare, expensive, handled only by experts (so how I got to work with them is a mystery) and built from lego bricks, meccano and pieces of solid gold (to keep up the price). Also, they were as powerful as a slide rule, except they were fed with punched cards (and probably powered by steam). Anyway, no one in their right mind would have considered storing pictures of cute felines as something to be done on a computer, or actually stuff to be stored in a database. The little that would fit was the really important stuff - like price, amount in stock, customer name, billing address and such - and nothing else. And not only that, the stuff that was stored had some kind of value, somehow, which meant it had to follow certain rules (and following rules is something I am good at? I wonder how I ended up in this business. Again). Like, a price had to be a number of some kind, with a value 0 or higher, and some other restrictions. As you see, these were hard and relentless times.
And then time moved on and people started buying things on the internet (whatever the internet is. I think it is some kind of glorified, stylish version of punched cards) and stuff such as Facebook and Google came around. The issue with computer storage was now not how to fit all that nicely structured data in it, but rather: once we have filled that hard drive on your desktop with all the products, customers and transactions from Amazon (I think Amazon has something to do with the Internet, but I am not sure) and a full 17.85% of that drive is now occupied by that old-style structured data, what more do we put in there? Maybe we could put some more information on the items for sale in that database, and some general information on who is buying them? That should fill up that disk nicely, right? Well, yes, but that new data, although related to the structured data I already have, is largely unstructured. Say, for example, you write a review of a product on Amazon late in the morning after a good deal of heavy "partying" (which is not an Internet thing, I think); the contents of that would hardly be considered "structured". If you didn't like the product (which you probably didn't), then the appropriate terms for large parts of that review would probably be "profanity" or "foul language".
The way to deal with the above is a mix of structured and unstructured data, with some kind of relation between the two. Like a column of unstructured data in each relational database table (or should I say "relation", just to show my age? Or maybe I should pretend to be really young, modern and cool, possibly sporting a hipster beard and all that, by calling it a "collection").
With that out of the way, let's consider an example using structured as well as non-structured JSON data. Assume we have a store selling different types of clothing: pants, jackets, shoes and so on, and we are to create a table to hold the inventory. This table would have some columns that are always there and which have the same meaning for all rows in the table, like name, amount in stock and price. These are items that are well suited for a relational format. On top of this we want to add attributes that have a different meaning for each type of item, or even for each individual item. Here we have things like colour, width, length and size. These we consider non-relational, as the interpretation of these attributes differs depending on the type of garment (like size M, or shoe sizes, or a "zebra striped" colour), and some garments might have some unique attribute, like designer or recommended-by-staff or something. Our table might then look something like this:
MariaDB> CREATE TABLE products(id INTEGER NOT NULL PRIMARY KEY AUTO_INCREMENT,
name VARCHAR(255) NOT NULL,
price DECIMAL(9,2) NOT NULL,
stock INTEGER NOT NULL,
attr VARCHAR(1024));
In this table we have a few columns that look like columns in any relational database table, and then we have a column, called attr, that can hold any relevant attribute for the garment in question, and we will store that as a JSON string. You probably noticed that we aren't using a JSON datatype here, as there is none in MariaDB; the JSON functions instead act on a text string with JSON content. These functions were introduced in MariaDB 10.2 (which is in Beta as I write this), but there are a few bugs in the early releases, so from here on I assume that MariaDB 10.2.4 or higher is being used.
But there is one issue with the above that I don't particularly care for: as the attr column is plain text, any kind of data can be put in it, even invalid JSON. The good thing is that there is a fix for this in MariaDB 10.2, namely CHECK constraints that actually work, a little-discussed feature of MariaDB 10.2. The way this works is that whenever a row is INSERTed or UPDATEd, any CHECK constraint on the table runs and validates the data, and if the validation fails the operation also fails. Before I show an example I just want to mention the JSON function we are to use here, JSON_VALID, which takes a string and checks if it is valid JSON. Note that although CHECK constraints are particularly useful here, they can be used for any kind of data validation.
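To get a feel for JSON_VALID on its own, here is a quick, hedged sketch (the return values are what I would expect on 10.2.4):
MariaDB> SELECT JSON_VALID('{"size": 42}'), JSON_VALID('not json');
The first call returns 1 and the second 0. And just to show that CHECK constraints are not limited to JSON, a hypothetical table that refuses negative prices:
MariaDB> CREATE TABLE prices(item VARCHAR(32) NOT NULL, price DECIMAL(9,2) NOT NULL, CHECK (price >= 0));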
Armed with this, let's rewrite the statement that creates the table like this:
MariaDB> CREATE TABLE products(id INTEGER NOT NULL PRIMARY KEY AUTO_INCREMENT,
name VARCHAR(255) NOT NULL,
price DECIMAL(9,2) NOT NULL,
stock INTEGER NOT NULL,
attr VARCHAR(1024),
CHECK (JSON_VALID(attr)));
Let's give this a try now:
MariaDB> INSERT INTO products VALUES(NULL, 'Jeans', 10.5, 165, NULL);
ERROR 4025 (23000): CONSTRAINT `CONSTRAINT_1` failed for `inventory`.`products`
Ok, that didn't work out. What happens here is that a NULL string isn't a valid JSON value, so we need to rewrite our table definition:
MariaDB> CREATE TABLE products(id INTEGER NOT NULL PRIMARY KEY AUTO_INCREMENT,
name VARCHAR(255) NOT NULL,
price DECIMAL(9,2) NOT NULL,
stock INTEGER NOT NULL,
attr VARCHAR(1024),
CHECK (attr IS NULL OR JSON_VALID(attr)));
Following this we can try it again:
MariaDB> INSERT INTO products VALUES(NULL, 'Jeans', 10.5, 165, NULL);
Query OK, 1 row affected (0.01 sec)
MariaDB> INSERT INTO products VALUES(NULL, 'Shirt', 10.5, 78, '{"size": 42, "colour": "white"}');
Query OK, 1 row affected (0.01 sec)
MariaDB> INSERT INTO products VALUES(NULL, 'Blouse', 17, 15, '{"colour": "white}');
ERROR 4025 (23000): CONSTRAINT `CONSTRAINT_1` failed for `inventory`.`products`
That last statement failed because of malformed JSON (a closing double quote was missing), so let's correct that:
MariaDB> INSERT INTO products VALUES(NULL, 'Blouse', 17, 15, '{"colour": "white"}');
Query OK, 1 row affected (0.01 sec)
One thing that has yet to be discussed is indexes on JSON values. As the attr column in our example is plain text, we can of course index it as usual, but that is probably not what you want to do; rather, what would be neat is to create an index on individual attributes inside that JSON string. MariaDB doesn't yet support functional indexes, i.e. indexes not on column values but on computed values. What MariaDB does have, though, is Virtual Columns, and these can be indexed, and as of MariaDB 10.2 these virtual columns don't have to be persistent (read more on Virtual Columns here: Putting Virtual Columns to good use).
The easiest way to explain this is with an example. Let's say we want an index on the colour attribute, where such a thing exists. For this we need two things: a virtual column that contains the colour attribute as extracted from the attr column, and then an index on that column. Here we will be using the JSON_VALUE function, which takes a JSON value and a path, the latter describing which value to extract, somewhat like a query language for JSON.
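Before building any index, we can try JSON_VALUE on its own; a hedged sketch against the products table above:
MariaDB> SELECT name FROM products WHERE JSON_VALUE(attr, '$.colour') = 'white';
The path '$.colour' picks the colour member from the top-level JSON object. This works, but without an index every row has to be examined, which is exactly what the virtual column construction below is there to avoid.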
We end up with something like this:
MariaDB> ALTER TABLE products ADD attr_colour VARCHAR(32) AS (JSON_VALUE(attr, '$.colour'));
MariaDB> CREATE INDEX products_attr_colour_ix ON products(attr_colour);
With that in place, let's see how that works:
MariaDB> SELECT * FROM products WHERE attr_colour = 'white';
+----+--------+-------+-------+---------------------------------+-------------+
| id | name | price | stock | attr | attr_colour |
+----+--------+-------+-------+---------------------------------+-------------+
| 2 | Shirt | 10.50 | 78 | {"size": 42, "colour": "white"} | white |
| 3 | Blouse | 17.00 | 15 | {"colour": "white"} | white |
+----+--------+-------+-------+---------------------------------+-------------+
2 rows in set (0.00 sec)
And let's see if that index is working as it should:
MariaDB> EXPLAIN SELECT * FROM products WHERE attr_colour = 'white';
+------+-------------+----------+------+-------------------------+-------------------------+---------+-------+------+------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+----------+------+-------------------------+-------------------------+---------+-------+------+------------+
| 1 | SIMPLE | products | ref | products_attr_colour_ix | products_attr_colour_ix | 99 | const | 2 | Using where |
+------+-------------+----------+------+-------------------------+-------------------------+---------+-------+------+------------+
1 row in set (0.00 sec)
And just to show that the column attr_colour is a computed column that depends on the attr column, let's try updating the colour for the blouse, making it red instead of white, and then search for that. To replace a value in a JSON object, MariaDB 10.2 provides the JSON_REPLACE function (for all JSON functions in MariaDB 10.2, see the MariaDB Knowledge Base).
MariaDB> UPDATE products SET attr = JSON_REPLACE(attr, '$.colour', 'red') WHERE name = 'Blouse';
Query OK, 1 row affected (0.01 sec)
Rows matched: 1 Changed: 1 Warnings: 0
MariaDB> SELECT attr_colour FROM products WHERE name = 'blouse';
+-------------+
| attr_colour |
+-------------+
| red |
+-------------+
1 row in set (0.00 sec)
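As an aside, adding a brand new attribute is just as straightforward. A hedged sketch using JSON_INSERT, which adds a member only if it isn't already present, to tag the blouse with the designer attribute we speculated about earlier:
MariaDB> UPDATE products SET attr = JSON_INSERT(attr, '$.designer', 'Karlsson') WHERE name = 'Blouse';
After this, JSON_VALUE(attr, '$.designer') would return Karlsson for that row.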
There is more to say about JSON in MariaDB 10.2, but I hope you now have a feel for what's on offer.
Happy SQL'ing
/Karlsson
Monday, January 16, 2017
Released MyMSSQLDump 1.1
MyMSSQLDump, my program for exporting data from MSSQL and Sybase into a whole bunch of other formats, has been updated. The supported formats include:
- JSON
- HTML
- CSV
- MySQL (mysqldump style)
- MYSQL / Sybase INSERT-style
- Oracle INSERT-style
New in version 1.1 are, among other things:
- DATETIME datatype formatting
- DATETIMEOFFSET formatting
- Other temporal datatype support
- Much more flexible formatting in general
- More tests
Happy SQL'ing
/Karlsson
Thursday, January 5, 2017
Powerful MariaDB exports using MyMDBDump
You can export data from MariaDB using mysqldump and a bunch of other tools, but if you need a really flexible output format, these might not be what you need. Instead, give MyMDBDump a try. This tool will export data just like mysqldump, but the output format is a lot more flexible. I urge you to test it and check out the documentation, but some of the features are, in short:
- Dynamic column support - Dynamic columns can be exported as binary, JSON or even as SQL, where I think the latter is a pretty unique feature.
- Oracle export format - This exports data as INSERT statements, just like mysqldump, but in an Oracle friendly way.
- MS SQL / SQL Server format - This exports data as SQL Server friendly INSERTs.
- Binary data formats - Supporting plain binary, base64, hex and a number of more formats.
- Transaction support.
- JSON export support - Including embedding dynamic columns as JSON.
- UTF8 support and UTF8 validity checking.
- Generated ROWNUM column output
- FLOAT and DOUBLE formatting
- DATETIME, TIMESTAMP, DATE and TIME custom formatting.
Happy SQL'ing
/Karlsson
Tuesday, October 11, 2016
Getting data out of SQL Server in a MariaDB friendly way.
I know what you are thinking: "Oh my, it's that time again, Karlsson insists that data should get out of some other database and into MariaDB. What is he up to this time?" And that is exactly right; this time data is coming out of SQL Server, out of that expensive, closed source, Windows-only software, and into the lightweight, fast and cost-effective MariaDB.
I have already shown how this works when getting data out of Oracle when releasing MyOraDump, so in line with that, the corresponding program to get data out of SQL Server is called MyMSSQLDump, and it has features along the same lines as MyOraDump. To connect to SQL Server (or Sybase, but this I haven't tested) I use FreeTDS, which is a nice Open Source SQL Server / Sybase driver. The output formats supported are MySQL (i.e. a file with INSERT INTO statements), MSSQL (same as the MySQL format but aligned for MSSQL), JSON, JSON Array (similar to JSON but with all data in one array) and CSV. All formats are very flexible, and the common SQL Server / Sybase datatypes are supported.
As usual, the code is written in C and uses autotools for building. I have only tested building it on CentOS (6 and 7) so far, but there is nothing magic to it, so it should work on other platforms too. Download it from sourceforge.
Happy SQL'ing
/Karlsson
Monday, September 19, 2016
Replication from Oracle to MariaDB - Part 3
This is the third part of a series of blogs on how to do Change Data Capture from Oracle in an effort to be able to replicate that data to MariaDB. Part 1 dealt with some basics on the Oracle side of things regarding the Oracle redo log, which we use as a source for our CDC efforts, and Part 2 was about how to get this to work. We left part 2 with an example of how to replicate INSERTs from Oracle to MariaDB, but that is hardly good enough; we also have to deal with UPDATE and DELETE to make it at least somewhat complete, and this is what we will have a look at in this post.
And I know what you are asking yourselves now: you spent all that time explaining the Oracle ROWID and then you never used it; that was an awful waste of time. Well, no, it wasn't a waste, and we will put it to good use now when we look at UPDATEs.
For an INSERT, things are reasonably simple: we just add a row with some columns to the table. For an UPDATE or a DELETE though, we need a means to identify the row to be updated. To reiterate one thing, the Oracle redo log is not like the MariaDB binlog here, and in this post I am assuming that the binlog format is STATEMENT. Let's look at an example, and for that let's insert some data into our EMP table (see the previous post for details).
SQL> INSERT INTO emp VALUES(2, 'John', 'Developer', 12000, 20);
SQL> INSERT INTO emp VALUES(3, 'Georgina', 'Design', 11000, 20);
SQL> INSERT INTO emp VALUES(4, 'Anne', 'Assistant', 8000, 30);
SQL> INSERT INTO emp VALUES(5, 'Marge', 'HR Mgr', 14000, 30);
SQL> COMMIT;
With that in place, let's now ensure that this is also replicated to MariaDB by running the script from the last post:
$ sqlplus -S / as sysdba @cdcemp.sql | mysql -u anders -panders anders
And to ensure that we got this right, let's check this out in MariaDB now:
$ mysql -u anders -panders anders
MariaDB [anders]> SELECT * FROM emp;
+-------+----------+-----------+----------+--------+
| empno | ename | job | sal | deptno |
+-------+----------+-----------+----------+--------+
| 1 | Anders | Sales | 10000.00 | 10 |
| 2 | John | Developer | 12000.00 | 20 |
| 3 | Georgina | Design | 11000.00 | 20 |
| 4 | Anne | Assistant | 8000.00 | 30 |
| 5 | Marge | HR Mgr | 14000.00 | 30 |
+-------+----------+-----------+----------+--------+
5 rows in set (0.00 sec)
OK, seems like we got it right so far. Then let's try an UPDATE and raise the salary for the two developers in Department 20:
$ sqlplus anders/anders
SQL> UPDATE emp SET sal = sal + 100 WHERE deptno = 20;
Now, what does this result in? If this was MariaDB with STATEMENT binlog format, the binlog would have something like this in it:
# at 9535
#160919 13:00:44 server id 112 end_log_pos 9652 CRC32 0x987f9dea Query thread_id=101 exec_time=0 error_code=0
SET TIMESTAMP=1474282844/*!*/;
UPDATE emp SET sal = sal + 100 WHERE deptno = 20
/*!*/;
We've seen this before, no surprises there. Now, let's try this from SQL*Plus and see what the SQL_REDO column in V$LOGMNR_CONTENTS looks like.
update "ANDERS"."EMP" set "SAL" = '12100' where "SAL" = '12000' and ROWID = 'AAAE7RAAEAAAAK9AAB';
update "ANDERS"."EMP" set "SAL" = '11100' where "SAL" = '11000' and ROWID = 'AAAE7RAAEAAAAK+AAA';
Whoa, that doesn't look like our UPDATE at all, what's going on? Well, I already told you that the Oracle redo log doesn't work like the binlog. In this case we get an SQL statement from it, but it is not the same SQL statement that we asked Oracle to execute for us. No, this SQL is reconstructed from the redo log record, which logs updates row by row and uses the ROWID to identify the row that was changed. We have already discussed that the ROWID is unique in a table, and to an extent even across tables. The issue now is that we don't have a ROWID in MariaDB. But the ROWID in the redo log is available in the ROW_ID column of the V$LOGMNR_CONTENTS table. And if we assume that all tables we replicate have to have a PRIMARY KEY, then we can use the ROWID to look up that PRIMARY KEY and use that for UPDATE operations in MariaDB, right? Something like this in our script:
SELECT empno INTO v_empno FROM anders.emp WHERE rowid = v_row_id;
That works, but there is one issue with it. We get our data from the redo log, which is historic, so to speak, but we look up the PRIMARY KEY using the ROWID based on how the data looks right now. This means that if you are in the habit of updating your PRIMARY KEYs (don't do this, by the way; it is a bad habit even if we exclude the specific use case we look at here), then this isn't going to fly. Sorry, I will not fix that for you: if you update a row and then update its primary key, the first update will fail.
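To make that failure mode concrete, here is a hypothetical sequence on the Oracle side (again, not something you should actually do):
SQL> UPDATE emp SET sal = 13000 WHERE empno = 2;
SQL> UPDATE emp SET empno = 20 WHERE empno = 2;
SQL> COMMIT;
When the script later mines the first UPDATE, the ROWID lookup finds the row as it looks now, with empno = 20, so the generated statement becomes UPDATE emp SET sal = 13000 WHERE empno = 20. MariaDB hasn't seen the key change yet and still has empno = 2, so that statement matches nothing.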
Then let's look at DELETEs. Can we use the same method as used for UPDATEs? It's tempting, I know, but it is not going to work at all. Think about it: we just determined that we get the ROWID for a row that is historic, but in the case of a DELETE there is no "current row", as we DELETEd it, right? Whoa, how do we solve that? Are we stuck now, out in the desert, the hot Oracle sun shining on our unprotected pale skin and getting hotter and hotter? Our old Studebaker has broken down by the side of the abandoned dirt road, and you stare into the unforgiving sun and you know this is it?
Nope. This can be fixed, luckily! The Oracle redo log contains redo data; that is what we have been using so far. The thing is that it also contains undo data! Hey! And when we think about this amazing concept a bit and consider what is necessary to undo a delete - frankly, the value of ALL columns in the deleted row, right? - our friendly DBMS_LOGMNR.MINE_VALUE function comes to the rescue, but this time operating not on the redo value but on the undo value!
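The key line, which you will see in context in the DELETE handling of the full script below, simply points MINE_VALUE at the undo value instead of the redo value:
v_empno := DBMS_LOGMNR.MINE_VALUE(v_undo_value, 'ANDERS.EMP.EMPNO');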
Armed with all this, we can now revisit our cdcemp.sql script and add handling for UPDATE and DELETE operations as well:
SET serveroutput ON
SET linesize 4000
SET feedback off
DECLARE
v_scn NUMBER;
v_scnstart NUMBER;
v_scnend NUMBER;
v_redo_value NUMBER;
v_undo_value NUMBER;
v_operation VARCHAR2(32);
v_row_id VARCHAR2(18);
v_sqlstmt VARCHAR2(4000);
v_firstcol BOOLEAN;
v_empno NUMBER;
CURSOR cur1 IS
SELECT scn, redo_value, undo_value, operation, row_id
FROM v$logmnr_contents
WHERE seg_owner = 'ANDERS' AND table_name = 'EMP'
AND operation IN('INSERT', 'UPDATE', 'DELETE');
BEGIN
-- Start LogMiner.
SELECT NVL(MIN(l.scn) + 1, MIN(e.ora_rowscn)) INTO v_scnstart
FROM anders.emp e, scnlog l
WHERE l.table_name = 'EMP';
SELECT MAX(e.ora_rowscn) INTO v_scnend
FROM anders.emp e;
DBMS_OUTPUT.PUT_LINE('-- SCN Range: ' || v_scnstart || ' - ' || v_scnend);
DBMS_LOGMNR.START_LOGMNR(STARTSCN => v_scnstart,
ENDSCN => v_scnend,
OPTIONS => DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.CONTINUOUS_MINE +
DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
-- Dummy update to insert a more recent SCN to ensure that we don't have to wait
-- for the next update in the following select.
UPDATE scnlog SET table_name = table_name WHERE table_name = 'EMP';
COMMIT;
-- Open cursor to get data from LogMiner.
OPEN cur1;
-- Loop for all the rows in the redo log since the last time we ran this.
LOOP
FETCH cur1 INTO v_scn, v_redo_value, v_undo_value, v_operation, v_row_id;
EXIT WHEN cur1%NOTFOUND;
IF v_operation = 'INSERT' THEN
v_sqlstmt := 'INSERT INTO emp(EMPNO, ENAME, JOB, SAL, DEPTNO) VALUES(';
v_sqlstmt := v_sqlstmt || DBMS_LOGMNR.MINE_VALUE(v_redo_value, 'ANDERS.EMP.EMPNO') || ', ';
v_sqlstmt := v_sqlstmt || '''' || DBMS_LOGMNR.MINE_VALUE(v_redo_value, 'ANDERS.EMP.ENAME') || ''', ';
v_sqlstmt := v_sqlstmt || '''' || DBMS_LOGMNR.MINE_VALUE(v_redo_value, 'ANDERS.EMP.JOB') || ''', ';
v_sqlstmt := v_sqlstmt || DBMS_LOGMNR.MINE_VALUE(v_redo_value, 'ANDERS.EMP.SAL') || ', ';
v_sqlstmt := v_sqlstmt || DBMS_LOGMNR.MINE_VALUE(v_redo_value, 'ANDERS.EMP.DEPTNO') || ')';
ELSIF v_operation = 'UPDATE' THEN
v_sqlstmt := 'UPDATE emp SET ';
v_firstcol := TRUE;
-- Build UPDATE statement using only column in redo log record.
IF DBMS_LOGMNR.COLUMN_PRESENT(v_redo_value, 'ANDERS.EMP.EMPNO') = 1 THEN
v_sqlstmt := v_sqlstmt || 'EMPNO = ' || DBMS_LOGMNR.MINE_VALUE(v_redo_value, 'ANDERS.EMP.EMPNO');
v_firstcol := FALSE;
END IF;
IF DBMS_LOGMNR.COLUMN_PRESENT(v_redo_value, 'ANDERS.EMP.ENAME') = 1 THEN
v_sqlstmt := v_sqlstmt || CASE WHEN NOT v_firstcol THEN ', ' ELSE '' END;
v_sqlstmt := v_sqlstmt || 'ENAME = ''' || DBMS_LOGMNR.MINE_VALUE(v_redo_value, 'ANDERS.EMP.ENAME')
|| '''';
v_firstcol := FALSE;
END IF;
IF DBMS_LOGMNR.COLUMN_PRESENT(v_redo_value, 'ANDERS.EMP.JOB') = 1 THEN
v_sqlstmt := v_sqlstmt || CASE WHEN NOT v_firstcol THEN ', ' ELSE '' END;
v_sqlstmt := v_sqlstmt || 'JOB = ''' || DBMS_LOGMNR.MINE_VALUE(v_redo_value, 'ANDERS.EMP.JOB')
|| '''';
v_firstcol := FALSE;
END IF;
IF DBMS_LOGMNR.COLUMN_PRESENT(v_redo_value, 'ANDERS.EMP.SAL') = 1 THEN
v_sqlstmt := v_sqlstmt || CASE WHEN NOT v_firstcol THEN ', ' ELSE '' END;
v_sqlstmt := v_sqlstmt || 'SAL = ' || DBMS_LOGMNR.MINE_VALUE(v_redo_value, 'ANDERS.EMP.SAL');
v_firstcol := FALSE;
END IF;
IF DBMS_LOGMNR.COLUMN_PRESENT(v_redo_value, 'ANDERS.EMP.DEPTNO') = 1 THEN
v_sqlstmt := v_sqlstmt || CASE WHEN NOT v_firstcol THEN ', ' ELSE '' END;
v_sqlstmt := v_sqlstmt || 'DEPTNO = ' || DBMS_LOGMNR.MINE_VALUE(v_redo_value, 'ANDERS.EMP.DEPTNO');
v_firstcol := FALSE;
END IF;
-- Get empno from emp using rowid.
SELECT MAX(empno) INTO v_empno FROM anders.emp WHERE rowid = v_row_id;
IF v_empno IS NULL THEN
CONTINUE;
END IF;
v_sqlstmt := v_sqlstmt || ' WHERE empno = ' || v_empno;
ELSIF v_operation = 'DELETE' THEN
-- Get the empno from the undo record.
v_empno := DBMS_LOGMNR.MINE_VALUE(v_undo_value, 'ANDERS.EMP.EMPNO');
IF v_empno IS NULL THEN
CONTINUE;
END IF;
v_sqlstmt := 'DELETE FROM emp WHERE empno = ' || v_empno;
END IF;
DBMS_OUTPUT.PUT_LINE('-- SCN = ' || v_scn);
DBMS_OUTPUT.PUT_LINE(v_sqlstmt || ';');
END LOOP;
IF v_scn IS NOT NULL THEN
UPDATE scnlog SET scn = v_scn WHERE table_name = 'EMP';
COMMIT;
END IF;
CLOSE cur1;
END;
/
EXIT
Let's try this script, and see if the UPDATE to the sal column that I did earlier in this post is properly replicated to MariaDB:
$ sqlplus -S / as sysdba @cdcemp.sql | mysql -u anders -panders anders
And then we check what we have in MariaDB:
$ mysql -u anders -panders anders
MariaDB [anders]> SELECT * FROM emp;
+-------+----------+-----------+----------+--------+
| empno | ename | job | sal | deptno |
+-------+----------+-----------+----------+--------+
| 1 | Anders | Sales | 10000.00 | 10 |
| 2 | John | Developer | 12100.00 | 20 |
| 3 | Georgina | Design | 11100.00 | 20 |
| 4 | Anne | Assistant | 8000.00 | 30 |
| 5 | Marge | HR Mgr | 14000.00 | 30 |
+-------+----------+-----------+----------+--------+
5 rows in set (0.00 sec)
And then see if we can also handle a DELETE. In SQL*Plus we run this:
SQL> DELETE FROM emp WHERE ename = 'John';
SQL> COMMIT;
Then we run our script:
$ sqlplus -S / as sysdba @cdcemp.sql | mysql -u anders -panders anders
And then we check what we have in MariaDB:
$ mysql -u anders -panders anders
MariaDB [anders]> SELECT * FROM emp;
+-------+----------+-----------+----------+--------+
| empno | ename | job | sal | deptno |
+-------+----------+-----------+----------+--------+
| 1 | Anders | Sales | 10000.00 | 10 |
| 3 | Georgina | Design | 11100.00 | 20 |
| 4 | Anne | Assistant | 8000.00 | 30 |
| 5 | Marge | HR Mgr | 14000.00 | 30 |
+-------+----------+-----------+----------+--------+
4 rows in set (0.00 sec)
Bazinga, it worked! Are we done then? I'm afraid not, we still have transactions to take care of, but that has to wait a day or two.
Happy SQL'ing
/Karlsson
Friday, September 16, 2016
Replication from Oracle to MariaDB - Part 2
In part 1 of this series of blogs on how to replicate from Oracle to MariaDB, we looked at how to get data out from Oracle, and in an effort to look more modern than I really am, let's say that what we were attempting was CDC (Change Data Capture). Hey, I feel like I'm 18 again! Modern, cool! Maybe I should start to wear a baseball cap backwards and have my jeans no higher than my knees. Or again, maybe not. But CDC it is.
When I left you in the previous post, we had just started to get some data, reluctantly, out of Oracle. But the SQL statement we got from LogMiner was hardly useful. Fact is, we can fix that. Before we do, we need to look at a few other functions in LogMiner. The two functions we want to look at are called DBMS_LOGMNR.COLUMN_PRESENT and DBMS_LOGMNR.MINE_VALUE. These two functions are simple enough: they both take two arguments, first a reference to a redo log record, which is the value of the redo_value column in the V$LOGMNR_CONTENTS table, and then the fully qualified name of the column in question. The table name of the redo log record is, surprisingly, in the table_name column. The reason we need two functions for this is that MINE_VALUE can return NULL both when the column in question isn't present in the record and when the actual value is NULL, and COLUMN_PRESENT lets us tell these two cases apart.
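A minimal sketch of how the two functions combine (the variable names here are hypothetical, but this is the same pattern used in the UPDATE handling in part 3 of this series):
IF DBMS_LOGMNR.COLUMN_PRESENT(v_redo_value, 'ANDERS.EMP.SAL') = 1 THEN
v_sal := DBMS_LOGMNR.MINE_VALUE(v_redo_value, 'ANDERS.EMP.SAL');
END IF;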
I will for the rest of this post assume that there exists an Oracle user called anders with the password anders, and that the same thing exists in MariaDB, where there is also a database called anders (you already guessed that, right?) to which the user anders has full access.
So, with the table_name and the values of the involved columns, we are ready to go. Sure, let's look at an INSERT. First we need a table to test this on, and let's use the classic Oracle EMP sample table for this, though I have simplified it a bit to make my example easier to grasp without getting into too many details. So this is what it looks like on the Oracle side of things; let's call this script cretabora.sql:
CREATE TABLE anders.emp(
empno NUMBER NOT NULL PRIMARY KEY,
ename VARCHAR2(10) NOT NULL,
job VARCHAR2(9) NOT NULL,
sal NUMBER(7,2) NOT NULL,
deptno NUMBER NOT NULL) TABLESPACE users;
What we also need is some way to keep track of processing; in this case we want to keep track of the SCN that was last used, so we know where to start when we get the next chunk of CDC data (Hey, I used it again. Boy am I young and hot today). So add this to the script above:
CREATE TABLE scnlog(
table_name VARCHAR2(30) NOT NULL PRIMARY KEY,
scn NUMBER) TABLESPACE users;
INSERT INTO anders.scnlog
SELECT 'EMP', current_scn FROM v$database;
I also allocated space for the EMP table and set the SCN to the current database SCN. As I am using the V$DATABASE virtual table, I have to run the above as a privileged user.
$ sqlplus / as sysdba @cretabora.sql
OK, now I have some tables to work with. Before we go on, let's also create the same objects in a MariaDB database. The minimum we need is something like this; let's call this script cretabmaria.sql:
CREATE TABLE emp(
empno INTEGER NOT NULL PRIMARY KEY,
ename VARCHAR(10) NOT NULL,
job VARCHAR(9) NOT NULL,
sal DECIMAL(7,2) NOT NULL,
deptno INTEGER NOT NULL);
I will not explain this in detail, you get it already I think:
$ mysql -u root
MariaDB [(none)]> CREATE USER 'anders'@'%' IDENTIFIED BY 'anders';
MariaDB [(none)]> CREATE DATABASE anders;
MariaDB [(none)]> GRANT ALL ON anders.* TO 'anders'@'%';
MariaDB [(none)]> exit
$ mysql -u anders -panders anders < cretabmaria.sql
The above really shows that MariaDB is easier to deal with than Oracle, right? But that wasn't what we were looking for here, and we're getting closer now, aren't you excited? Let's insert a single row of data in the EMP table, just to get started:
$ sqlplus anders/anders
SQL> INSERT INTO emp VALUES(1, 'Anders', 'Sales', 10000, 10);
SQL> COMMIT;
Now we are ready to extract data from the redo log. For this we will run a small script that runs LogMiner, extracts rows from the redo log and converts that to valid MariaDB SQL syntax. Create a script called cdcemp.sql with this content:
SET serveroutput ON
SET linesize 4000
SET feedback off
DECLARE
v_scn NUMBER;
v_scnstart NUMBER;
v_scnend NUMBER;
v_redo_value NUMBER;
v_sqlstmt VARCHAR2(4000);
CURSOR cur1 IS
SELECT scn, redo_value
FROM v$logmnr_contents
WHERE seg_owner = 'ANDERS' AND table_name = 'EMP'
AND operation = 'INSERT';
BEGIN
-- Start LogMiner.
SELECT NVL(MIN(l.scn) + 1, MIN(e.ora_rowscn)) INTO v_scnstart
FROM anders.emp e, scnlog l
WHERE l.table_name = 'EMP';
SELECT MAX(e.ora_rowscn) INTO v_scnend
FROM anders.emp e;
DBMS_OUTPUT.PUT_LINE('-- SCN Range: ' || v_scnstart || ' - ' || v_scnend);
DBMS_LOGMNR.START_LOGMNR(STARTSCN => v_scnstart,
ENDSCN => v_scnend,
OPTIONS => DBMS_LOGMNR.COMMITTED_DATA_ONLY +
DBMS_LOGMNR.CONTINUOUS_MINE +
DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
-- Dummy update to insert a more recent SCN to ensure
-- that we don't have to wait for the next update in the
-- following select.
UPDATE scnlog SET table_name = table_name WHERE table_name = 'EMP';
COMMIT;
-- Open cursor to get data from LogMiner.
OPEN cur1;
-- Loop for all the rows in the redo log since the last time we ran this.
LOOP
FETCH cur1 INTO v_scn, v_redo_value;
EXIT WHEN cur1%NOTFOUND;
v_sqlstmt := 'INSERT INTO emp(EMPNO, ENAME, JOB, SAL, DEPTNO)'
|| 'VALUES(';
v_sqlstmt := v_sqlstmt || DBMS_LOGMNR.MINE_VALUE(v_redo_value,
'ANDERS.EMP.EMPNO') || ', ';
v_sqlstmt := v_sqlstmt || '''' || DBMS_LOGMNR.MINE_VALUE(v_redo_value,
'ANDERS.EMP.ENAME') || ''', ';
v_sqlstmt := v_sqlstmt || '''' || DBMS_LOGMNR.MINE_VALUE(v_redo_value,
'ANDERS.EMP.JOB') || ''', ';
v_sqlstmt := v_sqlstmt || DBMS_LOGMNR.MINE_VALUE(v_redo_value,
'ANDERS.EMP.SAL') || ', ';
v_sqlstmt := v_sqlstmt || DBMS_LOGMNR.MINE_VALUE(v_redo_value,
'ANDERS.EMP.DEPTNO') || ')';
DBMS_OUTPUT.PUT_LINE('-- SCN = ' || v_scn);
DBMS_OUTPUT.PUT_LINE(v_sqlstmt || ';');
END LOOP;
IF v_scn IS NOT NULL THEN
UPDATE scnlog SET scn = v_scn WHERE table_name = 'EMP';
COMMIT;
END IF;
CLOSE cur1;
END;
/
EXIT
The first three lines in this script are SQL*Plus-specific settings. Then we declare some variables and a cursor for the V$LOGMNR_CONTENTS table. Following that, we get the starting and ending SCN that we want, use them to start LogMiner, and output some information on what we are doing. Then comes a wacko UPDATE. This seems to be necessary to ensure that the redo log is progressed past the last log record so that we can get the data. Without this, the SELECT from the cursor would wait until someone else generated some redo log content. This is a kludge, I know, but it seems to work. Also, Oracle isn't smart enough to skip these kinds of dummy updates.
Following this, the cursor is opened and we get the rows from it and build a suitable SQL statement for MariaDB. Finally, when the loop is over, I update the SCN log table so I know where to start the next run. This means the script can be rerun to generate incremental updates from Oracle to be inserted into MariaDB. And with all that, let's now finally run the script; here I will run it and pipe the data straight into MariaDB:
$ sqlplus -S / as sysdba @cdcemp.sql | mysql -u anders -panders anders
Before we finish, let's check if this worked:
$ mysql -u anders -panders anders
MariaDB [anders]> SELECT * FROM emp;
+-------+--------+-------+----------+--------+
| empno | ename | job | sal | deptno |
+-------+--------+-------+----------+--------+
| 1 | Anders | Sales | 10000.00 | 10 |
+-------+--------+-------+----------+--------+
1 row in set (0.00 sec)
That's it for this time. There are a few more things to look into though, like managing transactions on the MariaDB side, for example. And UPDATEs and DELETEs, those require some special care. But we are getting there now, right? So stay tuned for the third post in this series.
Happy SQL'ing
/Karlsson
Thursday, September 15, 2016
Replication from Oracle to MariaDB - Part 1
If you have read my blog before, you know that I have shown a couple of ways of getting data from an Oracle database and into something more approachable, such as a text file or even, surprise, MariaDB (or MySQL for that matter). Not that the Oracle database really likes you doing this, and some tweaking is necessary. One way I have shown before to do this is by using triggers, a UDF, UDP communication and a simple server; this was a series of blog posts starting with this one: Replication from Oracle to MariaDB the simple way. Another means is to export data from Oracle in some text format, something Oracle really doesn't like you doing, it really likes to keep the data to itself, but yours truly worked around this and now you can use the Oracle dump utility, and this has been developed quite a bit and now works really well for most standard Oracle datatypes.
But replicating properly, without triggers or anything, would be nice to do using only Open Source stuff, or maybe we can do without any special tools at all? One way would be to put a proxy in front of Oracle. Useful, but takes too much time to develop and requires that one figure out the Oracle communication protocol, which is not something I am prepared to do (I am lazy, I know).
So this time around I will try something different, a way to replicate straight out of Oracle. I will eventually show you a practical example of doing this, without too much code, but before we get there I'll explain how it works. The more difficult part here is getting the data replicated properly out of Oracle, so this first installment of a series of blog posts will be more about Oracle than MariaDB specifically, but this is all for a good cause (i.e. get stuff done without having to pay), so let's get started.
To begin with, there is no binlog in Oracle (nor is there an update log, if you remember that puppy). What Oracle does have, though, is a redo log, which is also there in InnoDB / XtraDB by the way, and it serves the same purpose: to be able to recover the database should there be a crash. This log is, because of the purpose it is used for, logging on a much lower level compared to the binlog. Looking at the Oracle redo log, there are log records for any modifying event in the database, including a lot of internal Oracle processing; Oracle does do a lot of stuff in the background and manages a lot more stuff that MariaDB instead leaves to the operating system, like file space management and such things. This is not to say Oracle is better than MariaDB in this respect, nor the other way around, but Oracle was born at a time when you had to support so many different operating systems, so some layers were necessary to create some transparency. Also, Oracle was born at a time when having 5Mb of disk space was a lot of disk and when real estate was something that you handled conservatively (which is a giveaway of my age. Yes, I was born before "My Boy Lollipop" was a hit for Millie Small. And contrary to popular belief, it is not Rod Stewart who plays the harmonica solo in that song. And that ends today's "useless knowledge" lesson).
Before we go into looking at the Oracle redo log in detail, there are a couple of things I want to explain first. There is another file in Oracle, a small but important one called the control file. The control file is what keeps track of all the other files and related settings in the database. As for the redo log files, these can be used in two ways. The basic way is what is called NOARCHIVELOG mode, which means that the log files are reused. This is similar to how the InnoDB / XtraDB logfiles are used, and the purpose of this is to provide a means to recover after a crash. Thinking about it, you realize that these should then also be able to be used for, say, point in time recovery, assuming that we saved them, i.e. starting with a physical backup and then applying the saved files. With MariaDB we instead perform point in time recovery using the binlog. To do a point in time recovery with Oracle, and use the redo log for this purpose, we have to save the redo log files, and this is what happens when you run in ARCHIVELOG mode, in which case a separate Oracle process will "archive", which is Oracle lingo for "saving in some other place", the redo log files when they are full and a switch is done to the next logfile. All of this, the current and archived redo log files, are recorded in the Oracle control file.
OK, a lot of Oracle stuff, and I warned you, but this is critical to understanding how the code I present later works. There is one more thing I want to discuss briefly, which is the Oracle ROWID. A ROWID in Oracle is a unique identifier of a row in a table. The ROWID is actually a physical address, and there are cases where the same ROWID can be used for two different tables, which is when the tables are clustered, but for a single table the ROWID uniquely identifies a row. To use the ROWID of a table, one can use the ROWID pseudocolumn, which can be used in the WHERE clause as well as in the SELECT list.
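A quick illustration of the pseudocolumn, hedged since the actual ROWID values will of course differ on your system (the T1 table shows up again later in this post):
SQL> SELECT ROWID, c1, c2 FROM anders.t1;
This returns the physical row address alongside the data, and that same ROWID can then be used in a WHERE clause to fetch or modify exactly that row.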
So, with all those things explained, let's put it all together. The redo log of Oracle contains all the changes to the database, so it really should be possible to use it for the same purpose that we use the binlog in MariaDB, i.e. replication. To do that we first need to be able to read the redo log. We could read the redo log file itself, but this is rather complex and I'm not even sure it is a good idea; if you insist, you can read this document for example. Instead we are going to use an Oracle utility. For the DML that interests us, all we need is the INSERT, UPDATE and DELETE statements that deal with the table we want to replicate, and then the COMMIT when the transaction in question ends. For these operations, some of the data we want from the redo log is listed below, with a query sketch following the list:
- The type of operation
- An identifier of the table (not the actual table name though)
- The changed columns and values
- The ROWID of the row in question
- And the SCN (System Change Number).
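Mapped onto the V$LOGMNR_CONTENTS view, these correspond roughly to the OPERATION, TABLE_NAME, REDO_VALUE/UNDO_VALUE, ROW_ID and SCN columns, so a first sketch of a query pulling them out might look like this (hedged; we will do more or less exactly this below):
SQL> SELECT scn, operation, row_id FROM v$logmnr_contents WHERE seg_owner = 'ANDERS' AND table_name = 'T1' AND operation IN ('INSERT', 'UPDATE', 'DELETE');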
Before I finish this blog post, I want to show some code on how to get data from the redo log. And before you ask, no, there is no oraredolog utility or something like that; this isn't MariaDB, this is Oracle. A utility in Oracle is mostly not a utility per se, but rather an Oracle PL/SQL package, in this case LogMiner, or as a package, DBMS_LOGMNR. For LogMiner to work we need to run in ARCHIVELOG mode, so if you have a fresh Oracle installation, log in as a privileged user:
$ sqlplus / as sysdba
And run the commands
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
Also, to get all the data we want from LogMiner, we have to add "supplemental data" to the redo log. What this means exactly I haven't looked into, but trust me, you want it. So from the same SQL*Plus session, run:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
With that in place we are ready to start mining the log. This consists of two steps: first we start LogMiner itself, and then we select from a table that LogMiner populates. Let's look at how we can start LogMiner with an SQL script; let's call this script startlog.sql:
column min_scn new_value startscn
column current_scn new_value endscn
SELECT MIN(FIRST_CHANGE#) min_scn FROM v$log;
SELECT current_scn FROM v$database;
EXECUTE DBMS_LOGMNR.START_LOGMNR(STARTSCN => &startscn, -
ENDSCN => &endscn, -
OPTIONS => DBMS_LOGMNR.COMMITTED_DATA_ONLY + -
DBMS_LOGMNR.CONTINUOUS_MINE + -
DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
SET ARRAYSIZE 1
Then we run this script, and as there is no explicit exit in it, you get the SQL*Plus prompt back after running it (which is not how the MariaDB command line works):
$ sqlplus / as sysdba @startlog.sql
And now we can get some log data, let's say we have a table called T1 that we have done some changes to, then we can get the relevant records like this:
SQL> SELECT sql_redo FROM v$logmnr_contents WHERE table_name = 'T1' AND seg_owner = 'ANDERS' AND operation IN ('INSERT', 'UPDATE', 'DELETE');
SQL_REDO
--------------------------------------------------------------------------------
insert into "ANDERS"."T1"("C1","C2") values ('1','2');
insert into "ANDERS"."T1"("C1","C2") values ('2','2');
update "ANDERS"."T1" set "C2" = '3' where "C2" = '2' and ROWID = 'AAAE5KAAEAAAAFfAAA';
update "ANDERS"."T1" set "C2" = '3' where "C2" = '2' and ROWID = 'AAAE5KAAEAAAAFfAAB';
This hardly looks complete? But there is something in there that we could possibly make some sense of, right? Let's try that in the next blog post in this series. This post has been all about Oracle, more or less; in the next post we will get closer to replicating to MariaDB, which is where we want our data to reside anyway. Right?
Happy SQL'ing
/Karlsson