nexedi / MariaDB · Commit 91c4407e
Authored May 02, 2001 by monty@donna.mysql.fi

Fixed a bug with SELECT DISTINCT and HAVING

Parent: 6371f984
Showing 3 changed files, with 144 additions and 75 deletions:

  Docs/manual.texi           +61  -15
  sql/sql_select.cc          +83  -53
  support-files/Makefile.am   +0   -7
Docs/manual.texi

@@ -5939,12 +5939,15 @@ A reasonable @code{tar} to unpack the distribution. GNU @code{tar} is
 known to work. Sun @code{tar} is known to have problems.
 @item
-A working ANSI C++ compiler. @code{gcc} >= 2.8.1, @code{egcs} >=
-1.0.2, SGI C++, and SunPro C++ are some of the compilers that are known to
-work. @code{libg++} is not needed when using @code{gcc}. @code{gcc}
-2.7.x has a bug that makes it impossible to compile some perfectly legal
-C++ files, such as @file{sql/sql_base.cc}. If you only have @code{gcc} 2.7.x,
-you must upgrade your @code{gcc} to be able to compile @strong{MySQL}.
+A working ANSI C++ compiler. @code{gcc} >= 2.95.2, @code{egcs} >= 1.0.2
+or @code{egcs 2.91.66}, SGI C++, and SunPro C++ are some of the
+compilers that are known to work. @code{libg++} is not needed when
+using @code{gcc}. @code{gcc} 2.7.x has a bug that makes it impossible
+to compile some perfectly legal C++ files, such as
+@file{sql/sql_base.cc}. If you only have @code{gcc} 2.7.x, you must
+upgrade your @code{gcc} to be able to compile @strong{MySQL}. @code{gcc}
+2.8.1 is also known to have problems on some platforms and should be
+avoided if a newer compiler exists for the platform.
+@code{gcc} >= 2.95.2 is recommended when compiling @strong{MySQL}
+Version 3.23.x.
@@ -8536,8 +8539,8 @@ We recommend the following @code{configure} line with @code{egcs} and
 @code{gcc 2.95} on AIX:

 @example
-CC="gcc -pipe -mcpu=power2 -Wa,-many" \
-CXX="gcc -pipe -mcpu=power2 -Wa,-many" \
+CC="gcc -pipe -mcpu=power -Wa,-many" \
+CXX="gcc -pipe -mcpu=power -Wa,-many" \
 CXXFLAGS="-felide-constructors -fno-exceptions -fno-rtti" \
 ./configure --prefix=/usr/local/mysql --with-low-memory
 @end example
@@ -8549,6 +8552,21 @@ available. We don't know if the @code{-fno-exceptions} is required with
 option generates faster code, we recommend that you should always use this
 option with @code{egcs / gcc}.
+
+If you get a problem with assembler code, try changing -mcpu=xxx to
+match your CPU. Typically power2, power, or powerpc may need to be used;
+alternatively you might need to use 604 or 604e. I'm not positive but I
+would think using "power" would likely be safe most of the time, even on
+a power2 machine.
+
+If you don't know what your CPU is, run "uname -m". This gives you back
+a string that looks like "000514676700", in the format xxyyyyyymmss,
+where xx and ss are always 0's, yyyyyy is a unique system id, and mm is
+the id of the CPU Planar. A chart of these values can be found at
+@uref{http://www.rs6000.ibm.com/doc_link/en_US/a_doc_lib/cmds/aixcmds5/uname.htm}.
+This will give you a machine type and a machine model you can use to
+determine what type of CPU you have.
+
 If you have problems with signals (@strong{MySQL} dies unexpectedly
 under high load) you may have found an OS bug with threads and
 signals. In this case you can tell @strong{MySQL} not to use signals by
@@ -8569,6 +8587,29 @@ On some versions of AIX, linking with @code{libbind.a} makes
 @code{getservbyname} core dump. This is an AIX bug and should be reported
 to IBM.
+
+For AIX 4.2.1 and gcc you have to make the following changes:
+
+After configuring, edit @file{config.h} and @file{include/my_config.h}
+and change the line that says
+@example
+#define HAVE_SNPRINTF 1
+@end example
+to
+@example
+#undef HAVE_SNPRINTF
+@end example
+
+And finally, in @file{mysqld.cc} you need to add a prototype for
+initgroups:
+@example
+#ifdef _AIX41
+extern "C" int initgroups(const char *,int);
+#endif
+@end example
+
 @node HP-UX 10.20, HP-UX 11.x, IBM-AIX, Source install system issues
 @subsection HP-UX Version 10.20 Notes
@@ -23777,7 +23818,7 @@ is not signaled to the other servers.
 @section MERGE Tables

 @code{MERGE} tables are new in @strong{MySQL} Version 3.23.25. The code
-is still in beta, but should stabilize soon!
+is still in gamma, but should be reasonably stable.

 A @code{MERGE} table is a collection of identical @code{MyISAM} tables
 that can be used as one. You can only @code{SELECT}, @code{DELETE}, and
@@ -23790,8 +23831,8 @@ will only clear the mapping for the table, not delete everything in the
 mapped tables. (We plan to fix this in 4.0).

 With identical tables we mean that all tables are created with identical
-column information. You can't put a MERGE over tables where the columns
-are packed differently or doesn't have exactly the same columns.
+column and key information. You can't put a MERGE over tables where the
+columns are packed differently or don't have exactly the same columns.
 Some of the tables can however be compressed with @code{myisampack}.
 @xref{myisampack}.
@@ -23826,8 +23867,10 @@ More efficient repairs. It's easier to repair the individual files that
 are mapped to a @code{MERGE} file than trying to repair a real big file.
 @item
 Instant mapping of many files as one. A @code{MERGE} table uses the
-index of the individual tables. It doesn't need an index of its own.
-This makes @code{MERGE} table collections VERY fast to make or remap.
+index of the individual tables. It doesn't need to maintain an index of
+its own. This makes @code{MERGE} table collections VERY fast to make or
+remap. Note that you must specify the key definitions when you create
+a @code{MERGE} table!
 @item
 If you have a set of tables that you join to a big table on demand or
 batch, you should instead create a @code{MERGE} table on them on demand.
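
The note above about key definitions matters in practice. As a rough
sketch (not part of this commit; the table names here are invented, and
@code{TYPE=MERGE ... UNION=(...)} is the Version 3.23 declaration form),
a @code{MERGE} table restates the full column and key definitions of its
underlying tables:

@example
CREATE TABLE t1 (a INT NOT NULL, KEY(a)) TYPE=MyISAM;
CREATE TABLE t2 (a INT NOT NULL, KEY(a)) TYPE=MyISAM;

# The MERGE table repeats both the columns and the keys of t1 and t2
CREATE TABLE total (a INT NOT NULL, KEY(a)) TYPE=MERGE UNION=(t1,t2);

SELECT * FROM total;    # reads t1 and t2 as if they were one table
@end example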
@@ -43032,8 +43075,8 @@ An open source client for exploring databases and executing SQL. Supports
 A query tool for @strong{MySQL} and PostgreSQL.
 @item @uref{http://dbman.linux.cz/,dbMan}
 A query tool written in Perl. Uses DBI and Tk.
-@item @uref{http://www.mysql.com/Downloads/Win32/Msc201.EXE, Mascon 2.1.15}
-@item @uref{http://www.mysql.com/Downloads/Win32/FrMsc201.EXE, Free Mascon 2.1.14}
+@item @uref{http://www.mysql.com/Downloads/Win32/Msc201.EXE, Mascon 202}
+@item @uref{http://www.mysql.com/Downloads/Win32/FrMsc202.EXE, Free Mascon 202}
 Mascon is a powerful Win32 GUI for administering @strong{MySQL} server
 databases. Mascon's features include visual table design, connections to
 multiple servers, data and blob editing of tables, security setting, SQL
@@ -44050,6 +44093,9 @@ not yet 100% confident in this code.
 @appendixsubsec Changes in release 3.23.38
 @itemize @bullet
+@item
+Fixed bug where too many rows were removed when using
+@code{SELECT DISTINCT ... HAVING}.
 @item
 @code{SHOW CREATE TABLE} now returns @code{TEMPORARY} for temporary tables.
 @item
 Added @code{Rows_examined} to slow query log.
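
To make the changelog entry concrete, here is a minimal sketch of the
affected query shape (the table and data are hypothetical, not taken
from the commit). @code{HAVING} without @code{GROUP BY} is evaluated
while duplicates are being removed, which is exactly the
@code{remove_duplicates()} path patched below:

@example
CREATE TABLE t (a INT, b INT);
INSERT INTO t VALUES (1,1),(1,2),(2,1);

# DISTINCT reduces a to the values 1 and 2; HAVING then discards a=1.
# Before this fix, the duplicate-removal pass could delete too many rows.
SELECT DISTINCT a FROM t HAVING a > 1;
@end example

With the fix, the query should return only the row with @code{a = 2}.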
sql/sql_select.cc

@@ -36,7 +36,8 @@ const char *join_type_str[]={ "UNKNOWN","system","const","eq_ref","ref",
 static bool make_join_statistics(JOIN *join,TABLE_LIST *tables,COND *conds,
                                  DYNAMIC_ARRAY *keyuse,
                                  List<Item_func_match> &ftfuncs);
-static bool update_ref_and_keys(DYNAMIC_ARRAY *keyuse,JOIN_TAB *join_tab,
+static bool update_ref_and_keys(THD *thd, DYNAMIC_ARRAY *keyuse,
+                                JOIN_TAB *join_tab,
                                 uint tables,COND *conds, table_map table_map,
                                 List<Item_func_match> &ftfuncs);
 static int sort_keyuse(KEYUSE *a,KEYUSE *b);
@@ -106,12 +107,14 @@ static uint find_shortest_key(TABLE *table, key_map usable_keys);
 static bool test_if_skip_sort_order(JOIN_TAB *tab,ORDER *order,
                                     ha_rows select_limit);
 static int create_sort_index(JOIN_TAB *tab,ORDER *order,
                              ha_rows select_limit);
-static int remove_duplicates(JOIN *join,TABLE *entry,List<Item> &fields);
+static bool fix_having(JOIN *join, Item **having);
+static int remove_duplicates(JOIN *join,TABLE *entry,List<Item> &fields,
+                             Item *having);
 static int remove_dup_with_compare(THD *thd, TABLE *entry, Field **field,
-                                   ulong offset);
+                                   ulong offset, Item *having);
 static int remove_dup_with_hash_index(THD *thd, TABLE *table,
                                       uint field_count, Field **first_field,
-                                      ulong key_length);
+                                      ulong key_length, Item *having);
 static int join_init_cache(THD *thd,JOIN_TAB *tables,uint table_count);
 static ulong used_blob_length(CACHE_FIELD **ptr);
 static bool store_record_in_cache(JOIN_CACHE *cache);
@@ -717,8 +720,11 @@ mysql_select(THD *thd,TABLE_LIST *tables,List<Item> &fields,COND *conds,
   if (select_distinct && !group)
   {
     thd->proc_info="Removing duplicates";
-    if (remove_duplicates(&join, tmp_table, fields))
+    if (having)
+      having->update_used_tables();
+    if (remove_duplicates(&join, tmp_table, fields, having))
       goto err;				/* purecov: inspected */
+    having=0;
     select_distinct=0;
   }
   tmp_table->reginfo.lock_type=TL_UNLOCK;
@@ -749,28 +755,8 @@ mysql_select(THD *thd,TABLE_LIST *tables,List<Item> &fields,COND *conds,
   /* If we have already done the group, add HAVING to sorted table */
   if (having && !group && !join.sort_and_group)
   {
-    having->update_used_tables();	// Some tables may have been const
-    JOIN_TAB *table=&join.join_tab[join.const_tables];
-    table_map used_tables= join.const_table_map | table->table->map;
-    Item *sort_table_cond=make_cond_for_table(having,used_tables,
-                                              used_tables);
-    if (sort_table_cond)
-    {
-      if (!table->select)
-        if (!(table->select=new SQL_SELECT))
-          goto err;
-      if (!table->select->cond)
-        table->select->cond=sort_table_cond;
-      else					// This should never happen
-        if (!(table->select->cond=new Item_cond_and(table->select->cond,
-                                                    sort_table_cond)))
-          goto err;
-      table->select_cond=table->select->cond;
-      DBUG_EXECUTE("where",print_where(table->select->cond,
-                                       "select and having"););
-      having=make_cond_for_table(having,~(table_map) 0,~used_tables);
-      DBUG_EXECUTE("where",print_where(conds,"having after sort"););
-    }
+    if (fix_having(&join,&having))
+      goto err;
   }
   if (create_sort_index(&join.join_tab[join.const_tables],
                         group ? group : order,
@@ -941,7 +927,7 @@ make_join_statistics(JOIN *join,TABLE_LIST *tables,COND *conds,
   }

   if (conds || outer_join)
-    if (update_ref_and_keys(keyuse_array,stat,join->tables,
+    if (update_ref_and_keys(join->thd,keyuse_array,stat,join->tables,
                             conds,~outer_join,ftfuncs))
       DBUG_RETURN(1);
@@ -1442,8 +1428,9 @@ sort_keyuse(KEYUSE *a,KEYUSE *b)
 */

 static bool
-update_ref_and_keys(DYNAMIC_ARRAY *keyuse,JOIN_TAB *join_tab,uint tables,
-                    COND *cond, table_map normal_tables, List<Item_func_match> &ftfuncs)
+update_ref_and_keys(THD *thd, DYNAMIC_ARRAY *keyuse, JOIN_TAB *join_tab,
+                    uint tables, COND *cond, table_map normal_tables,
+                    List<Item_func_match> &ftfuncs)
 {
   uint and_level,i,found_eq_constant;
@@ -1451,8 +1438,7 @@ update_ref_and_keys(DYNAMIC_ARRAY *keyuse,JOIN_TAB *join_tab,uint tables,
   KEY_FIELD *key_fields,*end;

   if (!(key_fields=(KEY_FIELD*)
-       my_malloc(sizeof(key_fields[0])*(current_thd->cond_count+1)*2,
-                 MYF(0))))
+       thd->alloc((sizeof(key_fields[0])*thd->cond_count+1)*2)))
     return TRUE;			/* purecov: inspected */
   and_level=0; end=key_fields;
   if (cond)
@@ -1466,14 +1452,10 @@ update_ref_and_keys(DYNAMIC_ARRAY *keyuse,JOIN_TAB *join_tab,uint tables,
     }
   }
   if (init_dynamic_array(keyuse,sizeof(KEYUSE),20,64))
-  {
-    my_free((gptr) key_fields,MYF(0));
     return TRUE;
-  }
   /* fill keyuse with found key parts */
   for (KEY_FIELD *field=key_fields ; field != end ; field++)
     add_key_part(keyuse,field);
-  my_free((gptr) key_fields,MYF(0));
 }
 if (ftfuncs.elements)
@@ -1894,7 +1876,7 @@ cache_record_length(JOIN *join,uint idx)
 {
   uint length;
   JOIN_TAB **pos,**end;
-  THD *thd=current_thd;
+  THD *thd=join->thd;

   length=0;
   for (pos=join->best_ref+join->const_tables, end=join->best_ref+idx ;
@@ -2076,7 +2058,7 @@ get_best_combination(JOIN *join)
     }
     else
     {
-      THD *thd=current_thd;
+      THD *thd=join->thd;
       for (i=0 ; i < keyparts ; keyuse++,i++)
       {
         while (keyuse->keypart != i ||
@@ -4433,7 +4415,8 @@ join_init_read_record(JOIN_TAB *tab)
 {
   if (tab->select && tab->select->quick)
     tab->select->quick->reset();
-  init_read_record(&tab->read_record, current_thd, tab->table, tab->select,1,1);
+  init_read_record(&tab->read_record, tab->join->thd, tab->table,
+                   tab->select,1,1);
   return (*tab->read_record.read_record)(&tab->read_record);
 }
@@ -5265,6 +5248,38 @@ create_sort_index(JOIN_TAB *tab,ORDER *order,ha_rows select_limit)
 }

+/*
+** Add the HAVING criteria to table->select
+*/
+
+static bool fix_having(JOIN *join, Item **having)
+{
+  (*having)->update_used_tables();	// Some tables may have been const
+  JOIN_TAB *table=&join->join_tab[join->const_tables];
+  table_map used_tables= join->const_table_map | table->table->map;
+
+  Item *sort_table_cond=make_cond_for_table(*having,used_tables,
+                                            used_tables);
+  if (sort_table_cond)
+  {
+    if (!table->select)
+      if (!(table->select=new SQL_SELECT))
+        return 1;
+    if (!table->select->cond)
+      table->select->cond=sort_table_cond;
+    else					// This should never happen
+      if (!(table->select->cond=new Item_cond_and(table->select->cond,
+                                                  sort_table_cond)))
+        return 1;
+    table->select_cond=table->select->cond;
+    DBUG_EXECUTE("where",print_where(table->select_cond,
+                                     "select and having"););
+    *having=make_cond_for_table(*having,~(table_map) 0,~used_tables);
+    DBUG_EXECUTE("where",print_where(*having,"having after make_cond"););
+  }
+  return 0;
+}
+
 /*****************************************************************************
 ** Remove duplicates from tmp table
 ** This should be recoded to add a unique index to the table and remove
@@ -5305,7 +5320,7 @@ static void free_blobs(Field **ptr)
 static int
-remove_duplicates(JOIN *join, TABLE *entry,List<Item> &fields)
+remove_duplicates(JOIN *join, TABLE *entry,List<Item> &fields, Item *having)
 {
   int error;
   ulong reclength,offset;
@@ -5342,9 +5357,10 @@ remove_duplicates(JOIN *join, TABLE *entry,List<Item> &fields)
                                  sortbuff_size)))
     error=remove_dup_with_hash_index(join->thd, entry, field_count,
                                      first_field,
-                                     reclength);
+                                     reclength, having);
   else
-    error=remove_dup_with_compare(join->thd, entry, first_field, offset);
+    error=remove_dup_with_compare(join->thd, entry, first_field, offset,
+                                  having);

   free_blobs(first_field);
   DBUG_RETURN(error);
@@ -5352,19 +5368,19 @@ remove_duplicates(JOIN *join, TABLE *entry,List<Item> &fields)
 static int remove_dup_with_compare(THD *thd, TABLE *table, Field **first_field,
-                                   ulong offset)
+                                   ulong offset, Item *having)
 {
   handler *file=table->file;
-  char *org_record,*new_record;
+  char *org_record,*new_record,*record;
   int error;
   ulong reclength=table->reclength-offset;
   DBUG_ENTER("remove_dup_with_compare");

-  org_record=(char*) table->record[0]+offset;
+  org_record=(char*) (record=table->record[0])+offset;
   new_record=(char*) table->record[1]+offset;

   file->rnd_init();
-  error=file->rnd_next(table->record[0]);
+  error=file->rnd_next(record);
   for (;;)
   {
     if (thd->killed)
@@ -5381,6 +5397,12 @@ static int remove_dup_with_compare(THD *thd, TABLE *table, Field **first_field,
         break;
       goto err;
     }
+    if (having && !having->val_int())
+    {
+      if ((error=file->delete_row(record)))
+        goto err;
+      continue;
+    }
     if (copy_blobs(first_field))
     {
       my_error(ER_OUT_OF_SORTMEMORY,MYF(0));
@@ -5393,7 +5415,7 @@ static int remove_dup_with_compare(THD *thd, TABLE *table, Field **first_field,
     bool found=0;
     for (;;)
     {
-      if ((error=file->rnd_next(table->record[0])))
+      if ((error=file->rnd_next(record)))
       {
         if (error == HA_ERR_RECORD_DELETED)
           continue;
@@ -5403,19 +5425,19 @@ static int remove_dup_with_compare(THD *thd, TABLE *table, Field **first_field,
     }
     if (compare_record(table, first_field) == 0)
     {
-      if ((error=file->delete_row(table->record[0])))
+      if ((error=file->delete_row(record)))
        goto err;
     }
     else if (!found)
     {
       found=1;
-      file->position(table->record[0]);	// Remember position
+      file->position(record);			// Remember position
     }
   }
   if (!found)
     break;					// End of file
   /* Restart search on next row */
-  error=file->restart_rnd_next(table->record[0],file->ref);
+  error=file->restart_rnd_next(record,file->ref);
 }

 file->extra(HA_EXTRA_NO_CACHE);
@@ -5436,7 +5458,8 @@ static int remove_dup_with_compare(THD *thd, TABLE *table, Field **first_field,
 static int remove_dup_with_hash_index(THD *thd, TABLE *table,
                                       uint field_count,
                                       Field **first_field,
-                                      ulong key_length)
+                                      ulong key_length,
+                                      Item *having)
 {
   byte *key_buffer, *key_pos, *record=table->record[0];
   int error;
@@ -5484,6 +5507,12 @@ static int remove_dup_with_hash_index(THD *thd, TABLE *table,
         break;
       goto err;
     }
+    if (having && !having->val_int())
+    {
+      if ((error=file->delete_row(record)))
+        goto err;
+      continue;
+    }

     /* copy fields to key buffer */
     field_length=field_lengths;
@@ -5499,6 +5528,7 @@ static int remove_dup_with_hash_index(THD *thd, TABLE *table,
       if ((error=file->delete_row(record)))
         goto err;
     }
     else
       (void) hash_insert(&hash, key_pos-key_length);
+    key_pos+=extra_length;
   }
support-files/Makefile.am

@@ -18,7 +18,6 @@
 ## Process this file with automake to create Makefile.in

 EXTRA_DIST =		mysql.spec.sh \
-			mysql-max.spec.sh \
			my-small.cnf.sh \
			my-medium.cnf.sh \
			my-large.cnf.sh \
@@ -34,7 +33,6 @@ pkgdata_DATA = my-small.cnf \
			my-huge.cnf \
			mysql-log-rotate \
			mysql-@VERSION@.spec \
-			mysql-max-@VERSION@.spec \
			binary-configure

 pkgdata_SCRIPTS =	mysql.server
@@ -44,7 +42,6 @@ CLEANFILES = my-small.cnf \
		my-large.cnf \
		my-huge.cnf \
		mysql.spec \
-		mysql-max-@VERSION@.spec \
		mysql-@VERSION@.spec \
		mysql-log-rotate \
		mysql.server \
@@ -55,10 +52,6 @@ mysql-@VERSION@.spec: mysql.spec
	rm -f $@
	cp mysql.spec $@

-mysql-max-@VERSION@.spec: mysql-max.spec
-	rm -f $@
-	cp mysql-max.spec $@
-
 SUFFIXES = .sh

 .sh: