Comments on "Karlsson on databases and stuff: No MySQL Cluster, the table isn't bl**dy full! Not even close!"

David Kirchner, 2012-09-25 19:36:

So what's interesting about this is I could insert some rows but not other rows. I don't know why. But what I do know is that when you see "The table is full" you can get the actual, real error message with the query SHOW WARNINGS:

| Warning | 1296 | Got error 1601 'Out extents, tablespace full' from NDB
| Error   | 1114 | The table 'fooo' is full

Why that error is hidden as a warning is beyond me. Hope this helps others that land on your page.

David Kirchner, 2012-09-24 23:29:

Additionally, I can insert rows after the "table is full" error occurs. I'm not sure how many yet. Maybe the import is too fast?

David Kirchner, 2012-09-24 23:24:

I am running into the same error (although I am trying to use Cluster as a regular ol' SQL server). I have plenty of memory and am just importing one of our smaller tables (~3M rows), and the import aborts at around 800,000 rows ("The table is full").
MAX_ROWS is set to 100,000,000 and avg_row_length is reasonable (470).

Disk and Index usage both report 0% (6274/983040 and 1385/1310752 respectively), and ndbinfo.memoryusage confirms there's about 32GB available (total minus used). I can't imagine 800k inserts generated *that* much undo traffic, but I guess it's possible.

I watched the output of SELECT * FROM ndbinfo.logspaces while inserting rows. The used/total ratio definitely went up for both the UNDO and REDO logs. For UNDO the ratio was 1.3% when the error occurred, up from 0.17% before I started, and below the peak of about 1.6%. The REDO logs peaked at about 15%, but were around 9% when the error occurred.

Karlsson, 2012-06-09 17:46:

Yes, but it took me a while to figure out this was an issue, as I was using Data on Disk. But then I realized that DataMemory includes UNDO, and figured this was the issue. And it seemed that could be it. Once I've finished loading so I can do some serious benchmarks, I'll post some more on this.

But as I already said, this is just too much work to get a KVS working in most cases, unless it's really good (limiting the number of rows in a KVS is usually not a good idea either).

On the other hand, most of my complaints, excluding the docs, are of much less significance in a telco OEM application, which is still what MySQL Cluster is best at. My thinking here was to try MySQL Cluster for something else, so far with little success.

/Karlsson
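The diagnostics mentioned in the comments above can be run from any SQL node. A sketch, assuming the ndbinfo schema as shipped with MySQL Cluster 7.2:

```sql
-- Immediately after a failed insert, surface the real NDB error that
-- the generic "The table is full" message hides:
SHOW WARNINGS;

-- DataMemory / IndexMemory headroom per data node:
SELECT node_id, memory_type, used, total
  FROM ndbinfo.memoryusage;

-- UNDO and REDO log-space consumption:
SELECT node_id, log_type, log_id, used, total
  FROM ndbinfo.logspaces;
```

Watching used/total from ndbinfo.logspaces during a bulk load, as David did, is what separates "DataMemory full" from "log space full".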
Bernd Ocklin, 2012-06-09 16:55:

I see. Looked at your memory usage numbers again, and what could have happened is that you simply ran out of DataMemory. You are at 95% usage (15.2k out of 16k):

http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-ndbd-definition.html#ndbparam-ndbd-minfreepct

Karlsson, 2012-06-09 16:26:

MAX_ROWS was already set. If you read this and previous blog posts you can see that I have set MAX_ROWS way above what is needed, as well as AVG_ROW_LENGTH to the maximum row size. This is pretty annoying: I set MAX_ROWS to 150,000,000 and the table got "full" at 30,000,000. But it seems to have been UNDO space that was "full", as now I am actually close to loading all rows (I'm at 80,000,000, out of a total of slightly more than 100,000,000).

/Karlsson
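For reference, both options discussed above are ordinary table options. A minimal sketch using the values quoted in these comments (the table name is hypothetical):

```sql
-- MAX_ROWS and AVG_ROW_LENGTH are sizing hints: they let NDB allocate
-- enough hash-index space and partitions for the table up front.
ALTER TABLE kvs_table
  MAX_ROWS = 150000000,
  AVG_ROW_LENGTH = 470;
```

As Karlsson notes, though, a generous MAX_ROWS does not help when what is actually exhausted is UNDO space rather than the table's own allocation.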
Bernd Ocklin, 2012-06-09 15:21:

Use MAX_ROWS on your table. Please see the FAQ:

http://dev.mysql.com/doc/refman/5.1/en/faqs-mysql-cluster.html#qandaitem-B-10-1-14

Karlsson, 2012-06-09 13:52:

The one value that stands out, looking at ndbinfo.memoryusage, is that DataMemory is getting close to full. Now, I did say I had disk data tables, so this shouldn't matter much, but then I remembered that DataMemory really should be called DataAndUndoMemory, so maybe this was the issue? And IndexMemory is very far from full, so maybe I am running out of UNDO space in memory here (why I couldn't get an error message indicating this is beyond me, but I guess that is the kind of stuff that makes MySQL Cluster 7.2 GA and "Enterprise"). So I have shuffled the memory settings around a bit and will try again (and why UNDO has to have this much memory, and why it can't be on disk or be dynamically allocated, I do not know. But I fully understand that allocating memory as you need it, and using disk when you don't have memory, is a very advanced concept, which requires some careful design and consideration, just like the, just as modern and technologically advanced, fins on a 1959 Cadillac).

/Karlsson
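The memory settings being shuffled here live in the cluster's config.ini. An illustrative fragment, with sizes invented rather than taken from the post (MinFreePct is the parameter the ndbparam-ndbd-minfreepct link above documents):

```ini
# config.ini fragment for the data nodes -- sizes are illustrative only.
[ndbd default]
DataMemory = 20G     # also consumed by disk-data undo/commit records,
                     # which is the point Karlsson is making above
IndexMemory = 4G
MinFreePct = 5       # share of DataMemory/IndexMemory NDB keeps in reserve
```

This is why a cluster using Data on Disk can still hit "table is full" with apparently plenty of DataMemory free: the reserve plus the in-memory undo bookkeeping eat into the same pool.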
Ivan, 2012-06-09 10:30:

Anders,

Don't forget to tell us when you reach your 1 billion queries in a minute!

/iz