Monday, December 4, 2023

MariaDBTop in action

    MariaDBTop is a tool I have come to use quite often, but it is a bit hard to configure and set up. In this post I will show you how to set it up using a realistic example. What I want to show is monitoring of a simple application that emulates Telco-style CDRs (Call Data Records). A CDR is a very common data structure in Telco and network applications: a representation of an event in the network, such as a call being started or a message being sent. CDRs are never updated, they are just inserted.

    The CDR structure is not standardized, but it is similar across the many places where it is used. In this case I have an application that inserts CDRs over multiple threads. The database that these records are written to is replicated, and we want to keep an eye on the processing on both the primary and the replica.
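
    To make the example concrete, the table I have in mind looks something like the sketch below. The column names are just an illustration of a typical CDR layout, not necessarily the exact schema the test application uses; only the database name cdr and the table name cdr matter for the rest of this post.

CREATE DATABASE IF NOT EXISTS cdr;

-- Hypothetical CDR table: insert-only, one row per network event
CREATE TABLE IF NOT EXISTS cdr.cdr (
  cdr_id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  event_type VARCHAR(32) NOT NULL,   -- e.g. call_start, call_end, sms
  caller VARCHAR(32) NOT NULL,
  callee VARCHAR(32) NOT NULL,
  event_time DATETIME(6) NOT NULL,
  duration_ms INT UNSIGNED NULL      -- NULL for events without a duration
) ENGINE=InnoDB;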

Main configuration section

    To begin with we have to configure the main section of the MariaDBTop configuration, and let's start with a decent colour scheme. For this type of application I like green text on a black background, and for alerts I want black text on a bright red background. We are using the default configuration file mariadbtop.cnf, and all the configuration data goes into that file in this example. The first part of this file, the mariadbtop section, then looks like this

[mariadbtop]
default-colour=green
default-bgcolour=black
alert-attribute=normal
alert-colour=black
alert-bgcolour=red

    We then provide the connection information for the MariaDB Primary server

user=monitoruser
password=monitorpwd

    With this in place we need to define the replica server that we are connecting to, as well as the pages that are to be displayed; the actual attributes of these are defined later. We have one server to add, the replica, and three pages

pages=dashboard,replication,innodbstatus
servers=replica_server

    With this in place I also want to refresh the status a bit more frequently than usual and write alerts to a file

refresh-time=1000
alert-time=10
alert-file=alert.log

    The above configuration will refresh the screen every second (1000 ms) and will write a record to the alert log file when an alert occurs, and then every 10 seconds for ongoing alerts.

    The above should be pretty much self explanatory


Defining the replica server attributes

    Before we go on from here we need to define how to connect to the replica server. Its name is, as defined above, replica_server, which means we need a section called just that with information on how to connect to it. In my case it looks like this

[replica_server]
user=monitoruser
password=monitorpwd
host=repl1


The above should be pretty much self explanatory

Defining the dashboard page

    Now we get to the interesting bit: the actual data items that we are to monitor, and the first is the dashboard page.

[dashboard]
pagetitle=Dashboard
pagekey=d

    The above defines the title displayed at the top of the dashboard page and the key used to navigate to it. The next thing we want to do is to display the number of rows in the table we work with, which is called cdr and resides in the cdr database. For this we could do a SELECT COUNT(*), but I decided against that as it would slow down the actual processing going on; instead I run a SHOW TABLE STATUS command, which gives a less exact result but doesn't intrude on the normal processing as much. I end up with this setting

sql1=SHOW TABLE STATUS IN cdr LIKE 'cdr'
vertical1
hide1
name-prefix1=primary_ts_

    This will run the command above, the vertical setting will produce a name/value pair for each column, and the hide attribute will ensure that this data is fetched but not displayed; we will look at that later on. Finally, the name-prefix attribute will prefix each column name fetched with primary_ts_. Following this I need to get the same data from the replica server, like this

sql2=SHOW TABLE STATUS IN cdr LIKE 'cdr'
vertical2
hide2
server2=replica_server
name-prefix2=replica_ts_

    The only attribute that is different here is the server, as we are getting this from the replica. Also, since we run the same command we need to use a different prefix so we don't overwrite the key/value pairs we got from the previous command. We have now defined how to get the data we want to display, but we haven't actually displayed it. What we want to show is three things: the number of rows on the Primary and the Replica, and the difference between the two. This is how this is done

expr10=this.primary_ts_Rows
name10=Primary_rows
expr11=this.replica_ts_Rows
name11=Replica_rows
expr12=this.primary_ts_Rows - this.replica_ts_Rows
name12=Replica rows behind

    The difference may also be calculated like this

expr12=this.Primary_rows - this.Replica_rows
name12=Replica rows behind
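
    If you ever need an exact number to verify the estimate against, the heavier query mentioned above can of course be run manually outside MariaDBTop, accepting the table scan it causes on InnoDB:

-- Exact but expensive on InnoDB: scans the table
SELECT COUNT(*) FROM cdr.cdr;

-- Cheap estimate: the Rows column is what the dashboard expressions use
SHOW TABLE STATUS IN cdr LIKE 'cdr';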

The Replication status page

    Now it is time to define the replication status page, and we start with the page specific data

[replication]
pagetitle=replication
pagekey=e

    With this in place we are ready to define how to get the data. On the replica we run a SHOW REPLICA STATUS command, and this has to be vertical as we want all the columns from that command; on the master all we want at this point is the binlog GTID position. This looks like this

sql1=SHOW REPLICA STATUS
vertical1
server1=replica_server
name-prefix1=replica_
hide1
sql2=SELECT @@GLOBAL.GTID_BINLOG_POS
name2=master_gtid_pos
hide2

    Again, we have so far only fetched data that is not displayed. The data that we do want to display now is the GTID position on the master and the replica, and the Seconds_Behind_Master column from the SHOW REPLICA STATUS command. This looks like this

expr10=this.replica_Seconds_Behind_Master
name10=Seconds behind
expr11=this.replica_Gtid_IO_Pos
name11=Replica GTID
expr12=this.master_gtid_pos
name12=Master GTID

    And that completes the replication page.
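
    To sanity check what the page shows, the same positions can be queried by hand in the mariadb client on the two servers. This is just the raw data the page is built from; note that gtid_slave_pos is not used in the configuration above, it is simply another way to look at how far the replica has applied events:

-- On the primary: the GTID position written to the binlog
SELECT @@GLOBAL.gtid_binlog_pos;

-- On the replica: the applied GTID position and the full replication status
SELECT @@GLOBAL.gtid_slave_pos;
SHOW REPLICA STATUS\G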

The InnoDB status page

    We start the InnoDB status page with the usual page details and then get the InnoDB status using SHOW GLOBAL STATUS.

[innodbstatus]
pagetitle=InnoDB Status
pagekey=i
sql1=SHOW GLOBAL STATUS LIKE 'Innodb%'
value-col1=Value
name-col1=Variable_name
name-prefix1=pgs_
hide1
sql2=SHOW GLOBAL STATUS LIKE 'Innodb%'
value-col2=Value
name-col2=Variable_name
name-prefix2=r1gs_
server2=replica_server
hide2

    Now we want to process the data we got above into something we can display. Let's start with the InnoDB cache hit ratio, which is an expression based on two other values returned from SHOW GLOBAL STATUS, so for this we use these two

expr3=round(this.pgs_Innodb_buffer_pool_read_requests / \
    (this.pgs_Innodb_buffer_pool_reads + \
    this.pgs_Innodb_buffer_pool_read_requests) * 100, 2)
name3=Primary InnoDB Cache hit ratio
expr4=round(this.r1gs_Innodb_buffer_pool_read_requests / \
    (this.r1gs_Innodb_buffer_pool_reads + \
    this.r1gs_Innodb_buffer_pool_read_requests) * 100, 2)
name4=Replica InnoDB Cache hit ratio
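
    To make the expression concrete, here is the same calculation done by hand with made-up counter values: with 1,000,000 buffer pool read requests and 5,000 reads that had to go to disk, the page would show 99.50.

-- Made-up numbers, only to illustrate the expression used above
SELECT ROUND(1000000 / (5000 + 1000000) * 100, 2) AS cache_hit_ratio;  -- 99.50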

    In addition to this we would also like to get some other InnoDB data displayed, like this

expr5=this.pgs_Innodb_rows_read
name5=Primary Rows read
expr6=this.r1gs_Innodb_rows_read
name6=Replica Rows read
expr7=this.pgs_Innodb_rows_inserted
name7=Primary Rows inserted
expr8=this.r1gs_Innodb_rows_inserted
name8=Replica Rows inserted
expr9=this.pgs_Innodb_rows_updated
name9=Primary Rows updated
expr10=this.r1gs_Innodb_rows_updated
name10=Replica Rows updated
expr11=this.pgs_Innodb_rows_deleted
name11=Primary Rows deleted
expr12=this.r1gs_Innodb_rows_deleted
name12=Replica Rows deleted
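
    If you want to look at the raw counters these expressions are built from, they can be pulled directly from either server; this is just a narrower version of the query the page already runs:

-- The row counters used by expr5 through expr12
SHOW GLOBAL STATUS LIKE 'Innodb_rows%';

-- The buffer pool counters used by the cache hit ratio expressions
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';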

    And that completes the configuration: we now have three pages of data

The result

    With that in place, the dashboard page looks like this:

    [dashboard screenshot]

    And the InnoDB status page like this:

    [InnoDB status screenshot]

Configuration summary

In total, this is what the configuration looks like

[mariadbtop]
default-colour=green
default-bgcolour=black
alert-attribute=normal
alert-colour=black
alert-bgcolour=red
user=monitoruser
password=monitorpwd
pages=dashboard,replication,innodbstatus
servers=replica_server
refresh-time=1000
alert-time=10
alert-file=alert.log
 
[replica_server]
user=monitoruser
password=monitorpwd
host=repl1
 
[dashboard]
pagetitle=Dashboard
pagekey=d
sql1=SHOW TABLE STATUS IN cdr LIKE 'cdr'
vertical1
hide1
name-prefix1=primary_ts_
sql2=SHOW TABLE STATUS IN cdr LIKE 'cdr'
vertical2
hide2
server2=replica_server
name-prefix2=replica_ts_
 
[replication]
pagetitle=replication
pagekey=e
sql1=SHOW REPLICA STATUS
vertical1
server1=replica_server
name-prefix1=replica_
hide1
sql2=SELECT @@GLOBAL.GTID_BINLOG_POS
name2=master_gtid_pos
hide2
expr10=this.replica_Seconds_Behind_Master
name10=Seconds behind
expr11=this.replica_Gtid_IO_Pos
name11=Replica GTID
expr12=this.master_gtid_pos
name12=Master GTID
  
[innodbstatus]
pagetitle=InnoDB Status
pagekey=i
sql1=SHOW GLOBAL STATUS LIKE 'Innodb%'
value-col1=Value
name-col1=Variable_name
name-prefix1=pgs_
hide1
sql2=SHOW GLOBAL STATUS LIKE 'Innodb%'
value-col2=Value
name-col2=Variable_name
name-prefix2=r1gs_
server2=replica_server
hide2
expr3=round(this.pgs_Innodb_buffer_pool_read_requests / \
    (this.pgs_Innodb_buffer_pool_reads + \
    this.pgs_Innodb_buffer_pool_read_requests) * 100, 2)
name3=Primary InnoDB Cache hit ratio
expr4=round(this.r1gs_Innodb_buffer_pool_read_requests / \
    (this.r1gs_Innodb_buffer_pool_reads + \
    this.r1gs_Innodb_buffer_pool_read_requests) * 100, 2)
name4=Replica InnoDB Cache hit ratio
expr5=this.pgs_Innodb_rows_read
name5=Primary Rows read
expr6=this.r1gs_Innodb_rows_read
name6=Replica Rows read
expr7=this.pgs_Innodb_rows_inserted
name7=Primary Rows inserted
expr8=this.r1gs_Innodb_rows_inserted
name8=Replica Rows inserted
expr9=this.pgs_Innodb_rows_updated
name9=Primary Rows updated
expr10=this.r1gs_Innodb_rows_updated
name10=Replica Rows updated
expr11=this.pgs_Innodb_rows_deleted
name11=Primary Rows deleted
expr12=this.r1gs_Innodb_rows_deleted
name12=Replica Rows deleted

With that, I leave you to try MariaDBTop.

Happy SQL'ing
/Karlsson 

 

Thursday, November 23, 2023

Disruption, Evolution or Revolution

Disruption or Revolution


 



    So let us assume this for a second: Larry Ellison wakes up one morning in his luxury San Francisco house, for once all alone in the building with the rest of the family in the Hawaii house. He gets out of bed and goes to the kitchen to make some breakfast. He decides on a cappuccino and some oatmeal porridge, he is a bit of a health nut after all. The coffee is prepared for him by his automatic machine that he imported from Switzerland, he micros his porridge and sits down at the bar in the kitchen to eat it. He opens the newspaper and starts reading it, but his mind drifts away; he had been dreaming about something just before he woke up and he tries to remember what it was, and then, in the middle of half reading the editorial in the San Francisco Chronicle and half thinking about his recent dream, he gets it.
 
    What he was dreaming about was the next step for the huge company that he once founded and is now the CTO of, and he now knows where Oracle will head. He feels invigorated in a way that he has not felt for a long time; maybe the last time was as far back as 1977 when he decided to co-found a company to build the world's first commercial SQL-based relational database system.
 
    Larry finishes his breakfast quickly, leaves his house in his bright red Ferrari and drives along the 101, ever so slightly too fast but still so slow that the V12 equipped Ferrari is merely idling. He gets to the huge Oracle Parkway head office and heads up to his secretary, still carrying his expensive, elegant and lightweight leather briefcase. "Get all available execs to my office. My execs that is." His secretary gives him a puzzled look. What is going on, he looks so upbeat, she thinks, but in a very positive way; he seems to be beaming with enthusiasm as he continues into his office.
 
    Two hours later the available team of executives is gathered in Larry's office, some of whom haven't been here many times before and admire the view of the Bay Area through the glass wall behind Larry's back. Larry looks at them with a smile, something the team is not used to, so they know something is about to happen. "I have an idea. Actually it is more than an idea, it is a plan and an order. Exactly how this plan will be implemented is up to you, but take it as an order that it is going to happen. If you don't agree with me, that is fine, there are many other companies out there that you can work for."
 
    Larry now has the attention of everyone in the room, they look at him and wait for the next comment from him. Larry stays quiet for a while, and then he raises his voice ever so slightly. "If you don't want to be part of this big change, the biggest in the history of this company, then you are free to leave the room. There will be no hard feelings, I just want positive vibes, if you feel like being negative you are free to go somewhere else in the Valley. I will not oppose that and I wish you all the best if you don't want this. But make up your mind right now as after I have told you what we are going to do, I expect you to stay around and positively support the journey that we are about to embark on. OK".
 
    The team of C-level executives look at each other, and then again at Larry, who keeps quiet. Then everyone looks at Larry again and all are still quiet. 30 seconds pass in silence, then Larry talks again, real quiet so everyone has to listen real hard. "SQL is about to die, and we are the ones that will kill it." The execs look at each other in amazement and Larry talks again: "It stops right here, right now." Everyone's eyes open wide and Larry raises his voice: "SQL sucks. SQL will be gone from all Oracle products within a year. No SQL support in Java, Oracle database will be gone and replaced. All SQL will go away and it starts right now."
 
    The story above might be interesting, but it has very little to do with reality; this would never happen, right? If it DID happen, would it count as Disruption? Not really. Evolution? I guess not. I would really count this as Revolution, and those don't happen often unless you happen to live in a country in Central America, I guess. But they DO happen even in the world of IT, so let's roll back time to the early 1960's, when the Vietnam war was ongoing, JFK was still alive and mini-skirts were just around the corner. At the office of the biggest name in computing, IBM, there was a worry, but sales wasn't one of them; IBM's computers sold like hotcakes. No, the issue was that IBM at the time had some seven different lines, yes seven, of computers, all incompatible in every conceivable way. It has to be understood that in those days a computer was to a very large extent bespoke, software was mostly an afterthought and was also incompatible not only between machines from different IBM series of computers but also between individual computers: if one had a different printer than the other, rewrite your code, etc. At the same time, IBM computers were selling, and the most popular among IBM's many series of computers was the mid-range 1401 series, a machine with roots in the 1950's of which thousands were produced. A key point was that the 1401 was rather complete; it did not only have a CPU and memory, other parts such as tape drives, printers and cupholders (just kidding) were also part of the system. This sounds obvious today but not so at the start of the swinging sixties. And a key component of the 1400-series was the 1403 line printer. This printer was the fastest in the world and it came with the 1401 computer. So if you wanted the best and fastest printer in the world, you had to buy a 1401 computer, there was no other way. And if you had some other computer and you wanted to print on the 1403, then you had to put your data on a tape or something that you brought (physically, by hand) to the 1401 to print on the 1403. A mess.

    Something called the SPREAD group at IBM began to look into this issue of all these different architectures in 1960ish, and they were ready by the end of 1961 when they produced a report. This report said something similar to what Larry said in my made-up story above: scrap everything we have and start again. And IBM did just that, to everyone's surprise. In 1964 the System/360 series was announced and it came to revolutionize computing. From a technology point of view it wasn't spectacular, but from the point of view of how the technology was applied, it was spectacular. System/360 was a series of computers, small to large, and as such could span the "whole circle" of computing needs, hence the 360 designation. And it came to change and influence IBM to this day and eventually wipe most of its then competitors off the map.

    The "Larry Ellison" at IBM at that time was IBM CEO Thomas Watson Jr, the son of the founder of IBM, Thomas Watson. Watson was not someone you could decide not to involve in decisions like this, and Watson was actually all for it, deciding to make all existing, and extremely profitable, IBM (and other) computers obsolete, to be replaced by one single series of computers. This isn't 100% true though, but sort of. One innovation of the 360 was the large-scale implementation of "micro programming", where the CPU is actually in itself programmable, an idea originally due to British IT pioneer Maurice Wilkes. The 360 was microprogrammed, and this allowed it to be microprogrammed to run Doom. No, just kidding, System/360 was microprogrammed to emulate a 1401. This is one of those things that seemed like a good idea at the time: it allowed programs from the immensely popular 1401 computer to run on a 360, and not only that, the 360 was rather fast for its time, so with this emulation a 1401 program could actually run orders of magnitude faster on a Commodore 64, no wait, I mean System/360, than on an actual physical 1401.

    So once the 360 was delivered, which took longer than anyone expected after the announcement as, and this was quite unexpected back then, writing the code for the operating system(s) took waaay longer than expected (see the book "The Mythical Man-Month" for details), everything was hanky panky (technical term meaning hanky with a slight touch of panky added to it). One minor issue remained, and stayed to this day: the 1401 emulation. Yepp, this turned out to be an issue in the long run as really old programs, dating back to the 1950's, would still run. And run. And run. And this is why software that was developed in the 1950's continues to run to this day (I'm not kidding here), and this is also a reason why the 360 and its newer siblings continue to be used, to this day.
 
    In summary, with the System/360 IBM took a bet, and they more or less bet the company. IBM was largely all computers at that time, as the previously profitable punch card business was in quick decline. But the bet paid off, big time.

/Karlsson
Back from the dust-filled basement of computer history.

PS: System/360 introduced many new things that we still use today and take for granted. The 8-bit "byte" was established with System/360, which was 32-bit, so 8 bits was convenient. That any printer works as well with any computer also started with System/360, although printers being the #1 thing to drive sysadmins crazy was also established around the time of the System/360 release and is a tradition that continues to this day.

Wednesday, November 22, 2023

Is the cloud dead? Nope.

So, my two cents on whether the cloud is dead or not. (If Sun Microsystems was still around I would crack a joke about Sun being good with clouds. But I won't.)

I read that people are starting to move off the cloud and back on-prem, saving truckloads of money in the process, maybe enough for a gallon or two of gas so you can fill up your Ferrari (for example, see https://basecamp.com/cloud-exit and https://shiftmag.dev/leaving-the-cloud-314/). On the other hand, I also see statistics showing that the cloud is growing, for example here: https://www.canalys.com/newsroom/global-cloud-services-q2-2023. What that last article says is that growth is slowing down, but it is still growing at a healthy rate.

So, what is all this about? I have some ideas of my own here, really (surprise!), and that is what the rest of this article is about. On one hand, if you move from on-prem to the cloud and think that this will be just like having all your computer cabinets, network switches etc. just like before, but with someone else managing the hardware, then you are likely going to fail and will not benefit from the #1 reason you likely went to the cloud to begin with: saving cost! Really, to benefit from the cloud you want to take advantage of how the cloud works: use the services provided by the cloud vendors, create and remove instances and other resources as they are needed, and prepare your application for an infrastructure that is moving and changing at times. If all you want is servers running Linux on fixed IPs, and you have systems that assume that all services are always available and are always, 24x7, in the same place and with the same attributes, maybe the cloud isn't for you.

    This is not what happened to most of the services that went back on-prem after having been in the cloud for a considerable time. The folks behind these services likely know exactly what they are doing and how their systems work. They have systems that can tolerate services disappearing or moving; they don't assume that everything is always in the same place. They don't assume that scaling necessarily means getting a bigger box; it can mean getting more boxes or something else. Often they have an application infrastructure largely running on containers (in some cases these smart guys and gals even put their database servers on firmer ground (VMs) and skipped containers just for those).

In the latter case we are talking about people who know their stuff really well and actually do not mind managing IT infrastructure on their own. In these cases there are, again, limited advantages to the cloud, as they know how to do everything that a cloud provider can offer themselves and, just as important, they have the money and resources to do so.

In between these two extremes, on one end those who have services, an organization and/or a state of mind that is just too inflexible to benefit from the cloud, and on the other end the super smart and resourceful geeks that can outsmart even Amazon, there is still a bunch of solid applications, companies and infrastructure that can benefit from the cloud. Maybe they are just a bit more flexible than the first group I mentioned and are willing to take the plunge, modernize what they have and cloudify themselves, or maybe it is a company where IT is run by modern day Albert Einstein types who can outsmart anyone on the planet but just cannot be bothered to muck around with IT hardware, OS patching and grand scale security implementations (if you ask me, this is the #1 reason to cloudify: having security looked after by someone else. I am not claiming it gets more secure, I just claim that I cannot be bothered with the mess of CAs, SSL and certificates and all that. What is a certificate even?).

/Karlsson
Trying hard not to get a job that involves IT security