Commit 96e4281f authored by unknown

Cleanup for lost file descriptors on table close for ha_archive.


sql/examples/ha_archive.cc:
  More comments, fixed issue with lost file descriptors.
BitKeeper/etc/logging_ok:
  Logging to logging@openlogging.org accepted
parent 49ab428c
......@@ -33,6 +33,7 @@ bk@mysql.r18.ru
brian@avenger.(none)
brian@brian-akers-computer.local
brian@private-client-ip-101.oz.net
brian@zim.(none)
carsten@tsort.bitbybit.dk
davida@isil.mysql.com
dellis@goetia.(none)
......
......@@ -42,18 +42,20 @@
handle bulk inserts as well (that is if someone was trying to read at
the same time since we would want to flush).
A "meta" file is kept. All this file does is contain information on
the number of rows.
A "meta" file is kept alongside the data file. This file serves two purpose.
The first purpose is to track the number of rows in the table. The second
purpose is to determine if the table was closed properly or not. When the
meta file is first opened it is marked as dirty. It is opened when the table
itself is opened for writing. When the table is closed the new count for rows
is written to the meta file and the file is marked as clean. If the meta file
is opened and it is marked as dirty, it is assumed that a crash occurred. At
this point an error occurs and the user is told to rebuild the file.
A rebuild scans the rows and rewrites the meta file. If corruption is found
in the data file then the meta file is not repaired.
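As a rough illustration of this dirty-flag scheme, the sketch below uses plain stdio rather than the real ha_archive meta-file format; the one-byte flag, the field order, and the meta_write/meta_read names are assumptions for illustration, not the actual write_meta_file/read_meta_file layout.

#include <cstdio>
#include <cstdint>

/*
  Illustrative only: mark the meta file dirty when the table is opened for
  writing, then write the row count and mark it clean on a proper close.
  A dirty flag seen at open time means the previous session never closed.
*/
static bool meta_write(std::FILE *meta, uint64_t rows, bool dirty)
{
  std::rewind(meta);
  uint8_t flag= dirty ? 1 : 0;
  if (std::fwrite(&flag, 1, 1, meta) != 1 ||
      std::fwrite(&rows, sizeof(rows), 1, meta) != 1)
    return false;
  return std::fflush(meta) == 0;
}

static bool meta_read(std::FILE *meta, uint64_t *rows, bool *was_dirty)
{
  std::rewind(meta);
  uint8_t flag;
  if (std::fread(&flag, 1, 1, meta) != 1 ||
      std::fread(rows, sizeof(*rows), 1, meta) != 1)
    return false;
  *was_dirty= (flag != 0);  /* crash suspected; caller should ask for a rebuild */
  return true;
}

On open, a handler following this scheme would check was_dirty, ask the user for a rebuild if it is set, and otherwise mark the file dirty again for the new session.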
No attempts at durability are made. You can corrupt your data. A repair
method was added to repair the meta file that stores row information,
but if your data file gets corrupted I haven't solved that. I could
create a repair that would solve this, but do you want to take a
chance of losing your data?
At some point a recovery method for such a drastic case needs to be devised.
Locks are row level, and you will get a consistent read. Transactions
will be added later (they are not that hard to add at this
stage).
Locks are row level, and you will get a consistent read.
For performance as far as table scans go it is quite fast. I don't have
good numbers but locally it has outperformed both InnoDB and MyISAM. For
......@@ -88,7 +90,6 @@
compression but may speed up ordered searches).
Checkpoint the meta file to allow for faster rebuilds.
Dirty open (right now the meta file is repaired if a crash occurred).
Transactions.
Option to allow for dirty reads; this would lower the sync calls, which would make
inserts a lot faster, but would mean highly arbitrary reads.
......@@ -333,6 +334,7 @@ ARCHIVE_SHARE *ha_archive::get_share(const char *table_name, TABLE *table)
    opposite.
  */
  (void)write_meta_file(share->meta_file, share->rows_recorded, TRUE);
  /*
    It is expensive to open and close the data files, and since you can't have
    a gzip file that can be both read and written, we keep a writer open
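A simplified stand-in for the share handling this hunk belongs to: one reference-counted structure per table that owns both the single gzip writer and the meta-file descriptor, created under a mutex. ArchiveShare, share_get, and the single global the_share below are hypothetical simplifications of ARCHIVE_SHARE and ha_archive::get_share, which keys shares by table name.

#include <zlib.h>
#include <pthread.h>
#include <fcntl.h>
#include <unistd.h>

struct ArchiveShare
{
  int    use_count;
  int    meta_fd;        /* plain descriptor for the meta file */
  gzFile archive_write;  /* single shared gzip writer for the data file */
};

static pthread_mutex_t share_mutex= PTHREAD_MUTEX_INITIALIZER;
static ArchiveShare *the_share= 0;  /* the real code keys shares by table name */

static ArchiveShare *share_get(const char *data_name, const char *meta_name)
{
  pthread_mutex_lock(&share_mutex);
  if (!the_share)
  {
    the_share= new ArchiveShare();
    the_share->use_count= 0;
    the_share->meta_fd= open(meta_name, O_RDWR | O_CREAT, 0644);
    /*
      A gzip stream cannot be open for reading and writing at once, so one
      append-mode writer is created here and shared by every handler.
    */
    the_share->archive_write= gzopen(data_name, "ab");
  }
  the_share->use_count++;
  pthread_mutex_unlock(&share_mutex);
  return the_share;
}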
......@@ -379,6 +381,8 @@ int ha_archive::free_share(ARCHIVE_SHARE *share)
    (void)write_meta_file(share->meta_file, share->rows_recorded, FALSE);
    if (gzclose(share->archive_write) == Z_ERRNO)
      rc= 1;
    if (my_close(share->meta_file, MYF(0)))
      rc= 1;
    my_free((gptr) share, MYF(0));
  }
  pthread_mutex_unlock(&archive_mutex);
......
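Continuing the sketch above with the same hypothetical names, the release side shows what this commit addresses: the last reference must close both the gzip writer and the meta-file descriptor, otherwise every open/close cycle of the table leaks one descriptor.

static int share_free(ArchiveShare *share)
{
  int rc= 0;
  pthread_mutex_lock(&share_mutex);
  if (!--share->use_count)
  {
    if (gzclose(share->archive_write) == Z_ERRNO)  /* shared gzip writer */
      rc= 1;
    if (close(share->meta_fd) == -1)               /* the descriptor that used to leak */
      rc= 1;
    delete share;
    the_share= 0;
  }
  pthread_mutex_unlock(&share_mutex);
  return rc;
}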