- 19 Oct, 2013 40 commits
-
-
git://git.open-mesh.org/linux-merge
David S. Miller authored
Antonio Quartulli says:
====================
this is another batch intended for net-next/linux-3.13. This pull request is a bit bigger than usual, but 6 patches are very small (three of them are about email updates).

Patch 1 fixes a previous merge conflict resolution that went wrong (I realised that only now while checking other patches).

Patches 2 to 4 update our email addresses in all the proper files (Documentation/, headers and MAINTAINERS).

Patches 5, 6 and 7 bring a big improvement to the TranslationTable component: it is now able to group non-mesh clients based on the VLAN they belong to. This makes a lot of new enhancements possible, because each batman-adv behaviour can now be applied on a per-VLAN basis.

And, of course, patches 8 to 12 contain some of the enhancements I was talking about:
- make the batman-adv Gateway selection VLAN dependent
- make DAT (Distributed ARP Table) group ARP entries on a VLAN basis (this allows DAT to work even when the admin has decided to use the same IP subnet on different VLANs)
- make the AP-Isolation behaviour switchable on each VLAN independently
- export VLAN-specific attributes via sysfs. Switches like AP-Isolation are now exported once per VLAN (backward compatibility of the sysfs interface has been preserved)

Patches 13 and 14 are small code cleanups.

Patch 15 is a minor improvement in the TT locking mechanism.

Patches 16 and 17 are further enhancements to the TT component. They allow a node to parse a "non-mesh client announcement message" and accept only those TT entries belonging to certain VLANs.

Patch 18 exploits this parse&accept mechanism to make the Bridge Loop Avoidance component reject only TT entries connected to the VLAN where it is operating. Prior to this change, BLA was rejecting all the entries coming from any other backbone node, regardless of the VLAN (for more details about how Bridge Loop Avoidance works please check [1]).

[1] http://www.open-mesh.org/projects/batman-adv/wiki/Bridge-loop-avoidance-II
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Hannes Frederic Sowa says:
====================
This series implements support for delaying the initialization of secret keys, e.g. those used for hashing, for as long as possible. This functionality is implemented by a new macro, net_get_random_once. I already used it to protect the socket hashes, the syncookie secret (most important) and the tcp_fastopen secrets.

Changelog:
v2) Use static_keys in net_get_random_once to have as little impact on the fast path as possible.
v3) Added patch "static_key: WARN on usage before jump_label_init was called": patch "x86/jump_label: expect default_nop if static_key gets enabled on boot-up" relaxes the checks for using static_key primitives before jump_label_init, so tighten them first.
v4) Update changelog on the patch "static_key: WARN on usage before jump_label_init was called"

Included patches:
ipv4: split inet_ehashfn to hash functions per compilation unit
ipv6: split inet6_ehashfn to hash functions per compilation unit
static_key: WARN on usage before jump_label_init was called
x86/jump_label: expect default_nop if static_key gets enabled on boot-up
net: introduce new macro net_get_random_once
inet: split syncookie keys for ipv4 and ipv6 and initialize with net_get_random_once
inet: convert inet_ehash_secret and ipv6_hash_secret to net_get_random_once
tcp: switch tcp_fastopen key generation to net_get_random_once
net: switch net_secret key generation to net_get_random_once
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hannes Frederic Sowa authored
Cc: Eric Dumazet <edumazet@google.com> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hannes Frederic Sowa authored
Changed key initialization of tcp_fastopen cookies to net_get_random_once. If the user sets a custom key, net_get_random_once must still be called at least once, to ensure we don't overwrite the user-provided key when the first cookie is generated later on. Cc: Yuchung Cheng <ycheng@google.com> Cc: Eric Dumazet <edumazet@google.com> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
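The implied shape is something like this sketch (helper and constant names are assumptions modeled on the fastopen code of that era, not guaranteed verbatim):

        /* Draw a random fastopen key exactly once, on first use. The
         * sysctl path that installs a user-provided key calls this
         * first, consuming the once-logic so a later first cookie
         * generation cannot overwrite the user's key. */
        static void tcp_fastopen_init_key_once(void)
        {
                static u8 key[TCP_FASTOPEN_KEY_LENGTH];

                if (net_get_random_once(key, sizeof(key)))
                        tcp_fastopen_reset_cipher(key, sizeof(key));
        }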
-
Hannes Frederic Sowa authored
Initialize the ehash and ipv6_hash_secrets with net_get_random_once. Each compilation unit now gets its own secret:
ipv4/inet_hashtables.o
ipv4/udp.o
ipv6/inet6_hashtables.o
ipv6/udp.o
rds/connection.o
The functions still get inlined into the hashing functions. In the fast path we have at most two if (unlikely(...)) checks (two are needed in ipv6). Cc: Eric Dumazet <edumazet@google.com> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
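Roughly, each unit's hash function ends up looking like this (a sketch assuming the __inet_ehashfn()/net_hash_mix() helpers from this series; not the verbatim patch):

        static u32 inet_ehashfn(struct net *net, const __be32 laddr,
                                const __u16 lport, const __be32 faddr,
                                const __be16 fport)
        {
                static u32 inet_ehash_secret __read_mostly;

                /* fast path: one static_key-guarded branch */
                net_get_random_once(&inet_ehash_secret,
                                    sizeof(inet_ehash_secret));

                return __inet_ehashfn(laddr, lport, faddr, fport,
                                      inet_ehash_secret + net_hash_mix(net));
        }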
-
Hannes Frederic Sowa authored
This patch splits the secret key for syncookies for ipv4 and ipv6 and initializes them with net_get_random_once. This change was the reason I did this series: I think the initialization of syncookie_secret is way too early. Cc: Florian Westphal <fw@strlen.de> Cc: Eric Dumazet <edumazet@google.com> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hannes Frederic Sowa authored
net_get_random_once is a new macro which handles the initialization of secret keys. It is possible to call it in the fast path: only the initialization takes the spinlock and is rather slow. It should be used just before the key is needed, to delay the entropy extraction as late as possible and so get better randomness. It returns true if the key got initialized.

The usage of static_keys for net_get_random_once is a bit uncommon, so it needs some further explanation of why this actually works:

=== The simple non-HAVE_JUMP_LABEL case ===

We actually have no constraints on using static_key_(true|false) on keys initialized with STATIC_KEY_INIT_(FALSE|TRUE). So this path just expands in favor of the likely case that the initialization is already done. The key is initialized like this:

        ___done_key = { .enabled = ATOMIC_INIT(0) }

The check

        if (!static_key_true(&___done_key))                     \

expands into (pseudo code)

        if (!likely(___done_key > 0))

so we take the fast path as soon as ___done_key is increased from the helper function.

=== The HAVE_JUMP_LABEL case ===

This depends on patching of jumps into the prepared NOPs, which is done in jump_label_init at boot-up time (from start_kernel). It is forbidden and dangerous to use net_get_random_once in functions which are called before that!

At compilation time NOPs are generated at the call sites of net_get_random_once. E.g. net/ipv6/inet6_hashtables.c:inet6_ehashfn (we need to call net_get_random_once twice in inet6_ehashfn, so two NOPs):

        71: 0f 1f 44 00 00    nopl 0x0(%rax,%rax,1)
        76: 0f 1f 44 00 00    nopl 0x0(%rax,%rax,1)

Both will be patched at boot time to actual jumps to the end of the function, to call __net_get_random_once, as explained above.

arch_static_branch is optimized and inlined to return false, and it also actually returns false while the NOP is still placed in the instruction stream. So in the fast case we get a "return false". But because we initialize ___done_key with (enabled != (entries & 1)), this call site will get patched up at boot, thus returning true. The final check looks like this:

        if (!static_key_true(&___done_key))                     \
                ___ret = __net_get_random_once(buf,             \

which expands to

        if (!!static_key_false(&___done_key))                   \
                ___ret = __net_get_random_once(buf,             \

So we get true at boot time, and as soon as static_key_slow_inc is called on the key it will invert the logic and return false for the fast path. static_key_slow_inc will change the branch because the key got initialized with .enabled == 0. After static_key_slow_inc is called on the key, the branch is replaced with a nop again.

=== Misc ===

The helper defers the increment into a workqueue, so we don't have problems calling this code from atomic sections. A separate boolean (___done) guards the case where we enter net_get_random_once again before the increment has happened.

Cc: Ingo Molnar <mingo@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Jason Baron <jbaron@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Eric Dumazet <edumazet@google.com>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
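For completeness, a minimal usage sketch (jhash() standing in for whatever mixing function a real call site uses; assumes a kernel with this series applied):

        #include <linux/jhash.h>
        #include <linux/net.h>

        static u32 example_secret;

        static u32 example_hashfn(const void *data, u32 len)
        {
                /* Fast path: a single static_key-guarded branch once the
                 * secret has been drawn; the slow path fills it exactly
                 * once, as late as possible. */
                net_get_random_once(&example_secret, sizeof(example_secret));

                return jhash(data, len, example_secret);
        }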
-
Hannes Frederic Sowa authored
net_get_random_once (introduced in the next patch) uses static_keys in a way that they get enabled on boot-up instead of being replaced with an ideal nop. So check for the default_nop on initial enabling. Other architectures don't check for this. Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Jason Baron <jbaron@redhat.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Eric Dumazet <edumazet@google.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: x86@kernel.org Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
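A sketch of the relaxed test (default_nop, ideal_nop and bug_at() are stand-ins modeled on the x86 jump-label code; names and the 5-byte size are assumptions):

        #include <linux/string.h>

        #define JUMP_LABEL_NOP_SIZE 5

        extern const unsigned char default_nop[], ideal_nop[];
        extern void bug_at(const unsigned char *addr);

        static void check_site_before_enable(const unsigned char *addr)
        {
                /* First enable at boot-up: the compiler-emitted default
                 * nop may still be in place. */
                if (!memcmp(addr, default_nop, JUMP_LABEL_NOP_SIZE))
                        return;
                /* Already patched to the ideal nop: also fine. */
                if (!memcmp(addr, ideal_nop, JUMP_LABEL_NOP_SIZE))
                        return;
                bug_at(addr);   /* anything else means a corrupted site */
        }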
-
Hannes Frederic Sowa authored
Usage of the static key primitives to toggle a branch must not happen before jump_label_init() is called from init/main.c. jump_label_init reorganizes and wires up the jump_entries, so usage before that could have unforeseen consequences. The following primitives are now checked for correct use:
* static_key_slow_inc
* static_key_slow_dec
* static_key_slow_dec_deferred
* jump_label_rate_limit
The x86 architecture already checks this by testing whether the default_nop was already replaced with an optimal nop or with a branch instruction, and panics in that case. Other architectures don't check for this. Because we need to relax this check for x86 (to allow code to transition from default_nop to the enabled state) and other architectures did not check for this at all, this patch introduces checking on the static_key primitives in a non-arch-dependent manner. All checked functions are considered slow-path, so the additional check does no harm to performance. The warnings are best observed with earlyprintk. Based on a patch from Andi Kleen. Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Andi Kleen <andi@firstfloor.org> Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
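A sketch of the guard (flag and macro names follow the description; the real patch may differ in detail):

        #include <linux/bug.h>

        /* set to true at the end of jump_label_init() */
        bool static_key_initialized __read_mostly;

        #define STATIC_KEY_CHECK_USE()                                  \
                WARN(!static_key_initialized,                           \
                     "%s used before call to jump_label_init\n",        \
                     __func__)

        void static_key_slow_inc(struct static_key *key)
        {
                STATIC_KEY_CHECK_USE();
                /* existing slow-path body unchanged */
        }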
-
Hannes Frederic Sowa authored
This patch splits inet6_ehashfn into separate functions in ipv6/inet6_hashtables.o and ipv6/udp.o to ease the introduction of separate secret keys later. Cc: Eric Dumazet <edumazet@google.com> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hannes Frederic Sowa authored
This duplicates a bit of code but lets us easily introduce separate secret keys later. The separate compilation units are ipv4/inet_hashtables.o, ipv4/udp.o and rds/connection.o. Cc: Eric Dumazet <edumazet@google.com> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Eric Dumazet says:
====================
net: Implement GSO/TSO support for IPIP

This patch series implements GSO/TSO support for IPIP.

David, please note it applies after "ipv4: gso: send_check() & segment() cleanups" ( http://patchwork.ozlabs.org/patch/284714/ )

The Broadcom bnx2x driver is now enabled for TSO support of IPIP traffic.

Before patch :

lpq83:~# ./netperf -H 7.7.9.84 -Cc
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 7.7.9.84 () port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384  16384    10.00      3357.88   5.09     3.70     2.983   2.167

After patch :

lpq83:~# ./netperf -H 7.7.9.84 -Cc
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 7.7.9.84 () port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384  16384    10.00      8532.40   2.55     7.73     0.588   1.781
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
The bnx2x driver already handles TSO for GRE; the needed code is the same for IPIP.

Performance results : (note we are now limited by the receiver, as it does not support GRO for IPIP yet)

Before patch :

lpq83:~# ./netperf -H 7.7.9.84 -Cc
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 7.7.9.84 () port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384  16384    10.00      7710.19   4.52     6.62     1.152   1.687

After patch :

lpq83:~# ./netperf -H 7.7.9.84 -Cc
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 7.7.9.84 () port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384  16384    10.00      8532.40   2.55     7.73     0.588   1.781

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Now that inet_gso_segment() is stackable, it's relatively easy to implement GSO/TSO support for IPIP.

Performance results, when segmentation is done after the tunnel device (as no NIC is yet enabled for TSO IPIP support) :

Before patch :

lpq83:~# ./netperf -H 7.7.9.84 -Cc
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 7.7.9.84 () port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384  16384    10.00      3357.88   5.09     3.70     2.983   2.167

After patch :

lpq83:~# ./netperf -H 7.7.9.84 -Cc
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 7.7.9.84 () port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384  16384    10.00      7710.19   4.52     6.62     1.152   1.687

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
In order to support GSO on IPIP, we need to make inet_gso_segment() stackable: it must not assume the network header starts right after the MAC header. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
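The gist, as a sketch (the helper below is hypothetical; skb_mac_header(), skb_network_header() and ip_send_check() are real kernel helpers): record where the network header actually sits, then use that offset for every segment's inner IP header.

        #include <linux/ip.h>
        #include <linux/skbuff.h>
        #include <net/ip.h>

        /* Hypothetical helper: nhoff is computed once in the caller as
         *   nhoff = skb_network_header(skb) - skb_mac_header(skb);
         * instead of assuming the IP header follows the MAC header. */
        static void refresh_inner_ipv4_headers(struct sk_buff *segs, int nhoff)
        {
                struct sk_buff *skb;

                for (skb = segs; skb; skb = skb->next) {
                        struct iphdr *iph =
                                (struct iphdr *)(skb_mac_header(skb) + nhoff);

                        iph->tot_len = htons(skb->len - nhoff);
                        ip_send_check(iph);     /* recompute IPv4 checksum */
                }
        }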
-
Eric Dumazet authored
This patch makes gre_handle_offloads() more generic and renames it to iptunnel_handle_offloads(). This will be used to add GSO/TSO support to IPIP tunnels. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
While implementing GSO/TSO support for IPIP, I found that skb_segment() was assuming the network header immediately follows the MAC header. That's not true when inet_gso_segment() is stacked: by the time tcp_gso_segment() is called, the network header points to the inner IP header. Let's instead assume nothing and pick up the current offsets found in the original skb; we have the skb_headers_offset_update() helper for that. Also move the csum_start update inside skb_headers_offset_update(). Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
ipv6_gso_send_check() and ipv6_gso_segment() are called by skb_mac_gso_segment() under the RCU lock, so there is no need to use rcu_read_lock() / rcu_read_unlock(). Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ajit Khaparde authored
Currently we log a message whenever pcie_enable_error_reporting fails. The message clutters up logs, especially when we don't support it for VFs. Instead enable this only for PFs and log a message when the call succeeds. Signed-off-by: Ajit Khaparde <ajit.khaparde@emulex.com> Signed-off-by: David S. Miller <davem@davemloft.net>
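In driver terms this amounts to roughly the following (be_physfn() is assumed to be the be2net PF test; placement and message are assumptions, not the verbatim patch):

        /* Enable AER only on the physical function; log success only. */
        if (be_physfn(adapter) &&
            !pci_enable_pcie_error_reporting(adapter->pdev))
                dev_info(&adapter->pdev->dev,
                         "PCIe error reporting enabled\n");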
-
Joe Perches authored
There are a mix of function prototypes with and without extern in the kernel sources. Standardize on not using extern for function prototypes. Function prototypes don't need to be written with extern. extern is assumed by the compiler. Its use is as unnecessary as using auto to declare automatic/local variables in a block. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
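With a hypothetical prototype, the change is purely mechanical:

        /* before */
        extern int foo_probe(struct foo_dev *dev);

        /* after: 'extern' is implicit on function declarations */
        int foo_probe(struct foo_dev *dev);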
-
Joe Perches authored
There are a mix of function prototypes with and without extern in the kernel sources. Standardize on not using extern for function prototypes. Function prototypes don't need to be written with extern. extern is assumed by the compiler. Its use is as unnecessary as using auto to declare automatic/local variables in a block. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Perches authored
There are a mix of function prototypes with and without extern in the kernel sources. Standardize on not using extern for function prototypes. Function prototypes don't need to be written with extern. extern is assumed by the compiler. Its use is as unnecessary as using auto to declare automatic/local variables in a block. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Perches authored
There are a mix of function prototypes with and without extern in the kernel sources. Standardize on not using extern for function prototypes. Function prototypes don't need to be written with extern. extern is assumed by the compiler. Its use is as unnecessary as using auto to declare automatic/local variables in a block. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
inet_gso_segment() and inet_gso_send_check() are called by skb_mac_gso_segment() under the RCU lock, so there is no need to use rcu_read_lock() / rcu_read_unlock(). Avoid calling ip_hdr() twice per function; we can use the ip_send_check() helper. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
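For reference, ip_send_check() recomputes the IPv4 header checksum in place; from memory it is essentially:

        void ip_send_check(struct iphdr *iph)
        {
                iph->check = 0;
                iph->check = ip_fast_csum((unsigned char *)iph, iph->ihl);
        }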
-
David S. Miller authored
It can be called from the module init function, so it cannot be in the exit section. Signed-off-by: David S. Miller <davem@davemloft.net>
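The pattern, with hypothetical names: a helper reached from the init path must not live in the exit section, because __exit code can be discarded when the code is built in.

        static void example_undo_setup(void)    /* must NOT be __exit */
        {
        }

        static int __init example_init(void)
        {
                example_undo_setup();   /* error-path call at init time */
                return 0;
        }

        static void __exit example_exit(void)
        {
                example_undo_setup();
        }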
-
David S. Miller authored
Jiri Pirko says: ==================== bonding: introduce bonding options Netlink support This patchset basically allows the "mode" and "active_slave" bonding options to be propagated and set up via the standard RT Netlink interface. In the future other options can easily be added as well. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Signed-off-by: Jiri Pirko <jiri@resnulli.us> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Signed-off-by: Jiri Pirko <jiri@resnulli.us> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Signed-off-by: Jiri Pirko <jiri@resnulli.us> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
No longer needed, since bond_option_active_slave_set() can be used instead. Signed-off-by: Jiri Pirko <jiri@resnulli.us> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Do a bit of refactoring on the way. Signed-off-by: Jiri Pirko <jiri@resnulli.us> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Signed-off-by: Jiri Pirko <jiri@resnulli.us> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Signed-off-by: Jiri Pirko <jiri@resnulli.us> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Antonio Quartulli authored
The backbone gw check has to be VLAN specific, so that code using it can specify the VID where the check has to be performed. In the TT code, the check has been moved into the tt_global_add() function so that it can be performed on a per-entry basis, instead of ignoring all the TT data received from another backbone node. Only TT global entries belonging to the VLAN the backbone node is connected to are skipped. All the other spots where the TT code was checking whether a node is a backbone have been removed. Moreover, batadv_bla_is_backbone_gw_orig() now returns bool, since it used to return only 1 or 0. Cc: Simon Wunderlich <siwu@hrz.tu-chemnitz.de> Signed-off-by: Antonio Quartulli <antonio@open-mesh.com> Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
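Per the text above, the check's signature now carries the VID and returns bool; roughly (exact parameter names assumed):

        bool batadv_bla_is_backbone_gw_orig(struct batadv_priv *bat_priv,
                                            uint8_t *orig, unsigned short vid);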
-
Antonio Quartulli authored
Instead of unconditionally removing all the TT entries served by a given originator, make tt_global_orig_del() remove only entries matching a given VLAN identifier provided as argument. If that argument is negative, all the global entries served by the originator are removed. This change is used in the BLA code to purge entries served by a newly discovered backbone node, but limiting the operation only to those connected to the VLAN where the backbone has been discovered. Cc: Simon Wunderlich <siwu@hrz.tu-chemnitz.de> Signed-off-by: Antonio Quartulli <antonio@open-mesh.com> Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
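The resulting call shape (illustrative; argument order and message strings are assumptions):

        /* Purge only the entries on the VLAN where the backbone node
         * was discovered ... */
        batadv_tt_global_del_orig(bat_priv, orig_node, vid,
                                  "backbone node detected");

        /* ... or pass a negative VID to purge everything served by
         * the originator. */
        batadv_tt_global_del_orig(bat_priv, orig_node, -1,
                                  "originator timed out");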
-
Antonio Quartulli authored
This change allows nodes to handle the TT table on a per-VLAN basis. This is needed because nodes may have to store only some of the global entries advertised by another node. In this scenario such nodes would re-create only a partial global table and would not be able to compute a correct CRC anymore. This patch splits the logic and introduces one CRC per VLAN. In this way a node fetching only some entries belonging to some VLANs is still able to compute the needed CRCs and can still check the table's correctness. With this patch the shape of the TVLV-TT is changed too, because a node now needs to advertise all the CRCs of all the VLANs it is wired into. The debug output of the local Translation Table now shows the CRC along with each entry, since there is no longer a common value for the entire table. Signed-off-by: Antonio Quartulli <antonio@open-mesh.com> Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
-
Antonio Quartulli authored
A TT response may be prepared and sent while the local or global translation table is being updated. The worst case is when one of the tables is accessed after its content has just been updated but the metadata (TTVN/CRC) has not yet been. In this case the reader will get table content that does not match the TTVN/CRC. This leads to an inconsistent state and so to a TT recovery. To avoid entering this situation, put a lock around those TT operations recomputing the metadata and around the TT response creation (the latter is the only reader that accesses the metadata together with the table). Signed-off-by: Antonio Quartulli <antonio@open-mesh.com> Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
-
Antonio Quartulli authored
This comment refers to the old batmand codebase and no longer makes sense. Remove it. Signed-off-by: Antonio Quartulli <antonio@open-mesh.com> Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
-
Linus Lüssing authored
With this patch the functions batadv_send_skb_unicast() and batadv_send_skb_unicast_4addr() are further refined into batadv_send_skb_via_tt(), batadv_send_skb_via_tt_4addr() and batadv_send_skb_via_gw(). This way we avoid any "guessing" about where to send a packet in the unicast forwarding methods and let the callers decide. This is going to be useful for the upcoming multicast related patches in particular. Further, the return values were polished a little to use the more appropriate NET_XMIT_* defines. Signed-off-by: Linus Lüssing <linus.luessing@web.de> Acked-by: Antonio Quartulli <antonio@meshcoding.com> Signed-off-by: Marek Lindner <lindner_marek@yahoo.de> Signed-off-by: Antonio Quartulli <antonio@meshcoding.com>
-
Antonio Quartulli authored
AP isolation has to be enabled on one VLAN interface only. This patch moves the AP isolation attribute to the per-VLAN interface attribute set, enabling it to have a different value depending on the selected VLAN. Signed-off-by: Antonio Quartulli <antonio@open-mesh.com> Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
-