December 07, 2011

Displaying date/time values in relative human readable format

A typical problem that appears universally in interactive software is displaying dates and times. A typical web page shows article post times, a phone displays times of calls and text messages, and so on and so forth.

Too often developers don't bother with readability, and we see something like

Posted Wednesday 1/12/2000 11:02:15 AM

whereas the American M/D/Y date and 12-hour AM/PM time format is not even universally recognized. Besides, this level of detail is more often than not unnecessary. Seconds, for instance - who ever needs them ? Security auditors ? Are you one of them ? A better way of displaying the same information is to display only the relevant parts of it:

Posted 11:02 AM (when posted the same day)
Posted 1/12/2000 (when posted days ago and time of day is not relevant)

And even better is to use a relative, user-readable form:

Posted 3 minutes ago
Posted 2 hours ago
Posted yesterday at 11:02 PM
Posted yesterday at 23:02 (a tip of the hat for respecting the locale)

Such a format is user-friendly, but why don't we see it more often ? Surprise - because you have to implement it yourself, and it is not trivial. Instead of just

strftime("%M/%D/%Y");

you have to jump through hoops, the easiest of which is depending on an external 3rd party library.

And this is my question: why don't we standardize a percent-format which would display date and time in a printable relative form ? strftime already prints weekday names and month names (short and long), so is it possible to have something like

strftime("%R");

to produce a human readable date-time relative to the current time ? Or even have a small language in it, such as

strftime("%RD"); (produces "today" or "yesterday" or "2 days ago")
strftime("%RM"); (produces "this month" or "last month" or "2 months ago")

?
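
Until something like that is standardized, everyone keeps rolling their own. For illustration, here is a minimal sketch of such a formatter in Python (the function name and the thresholds are my own invention, and locale handling is left out):

from datetime import datetime

def relative_datetime(then, now=None):
    # a toy version of the proposed %R: falls back from the most
    # relative form to the plain locale date as the event gets older
    now = now or datetime.now()
    seconds = int((now - then).total_seconds())
    if seconds < 60:
        return "just now"
    if seconds < 3600:
        return "%d minutes ago" % (seconds // 60)
    if then.date() == now.date():
        return "%d hours ago" % (seconds // 3600)
    if (now.date() - then.date()).days == 1:
        return "yesterday at " + then.strftime("%H:%M")
    return then.strftime("%x")  # locale's date format, like strftime would

Note how every branch is a judgement call - which is exactly why this belongs in a standard library, tested and localized once for everybody.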

November 29, 2011

A cheap technical solution to control election vote counting

Elections are coming up in Russia. As always, their results are expected to be grossly falsified. But instead of politics I'd like to talk about a technical solution to the problem.

What if an independent party were allowed to observe the process of vote counting, in order to come up with its own results which could later be used to verify the official numbers ? How could this be done without sacrificing properties of the voting process such as privacy, while remaining reliable, cost-effective, and minimally disruptive to the existing process ?

The voting process in Russia goes like this. You present yourself at the voting site and are given an anonymous paper ballot. You go to a booth, mark one square with a pen and drop the paper into the ballot box.

So I thought that the step which could be modified is "marking the square with a pen". Instead of putting a pen inside the booth, a "square marking device" would be installed by the supervising party. It is nothing but a simple plastic box with a thin slot for the ballot and, say, 4 marked buttons. You go into the booth and draw the curtain behind you so nobody sees what you are doing. Then you slide your ballot into the slot and press a button, and the device prints a cross in the corresponding square. Then you take the ballot out and drop it into the box as usual.

The marking device adds 1 to the counter corresponding to the button you pressed. Later the counters can be read by an authorized supervisor with a key. The officials count the votes by hand as usual after taking the ballots out of the box.

The most significant threat to this scheme is double voting. It should be protected against a malicious user simply pressing buttons at will to inflate the counters. This could be done by reading a pre-printed unique bar code and not allowing the same code to be used twice. The bar code is also useful for making sure the ballot is positioned correctly.

As the amount of memory on the device is limited, you should probably pair the device installed at each site with the batch of ballots dispatched to that site. This way, in the morning, the supervisor unpacks the device, picks a random ballot from the pack and goes through an "imprinting phase" by inserting the ballot and applying a key. The device reads the bar code and locks itself to the range of ballots it will accept from that moment on. This also protects against using ballots from other sites.
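
To make the device logic concrete, here is a toy model of the electronic version in Python (everything here, from the names to the assumption that a batch is a contiguous, aligned barcode range, is invented for illustration):

class MarkingDevice:

    def __init__(self, choices=4):
        self.counters = [0] * choices   # one counter per button
        self.used = set()               # barcodes already marked
        self.accepted = None            # barcode range set by imprinting

    def imprint(self, barcode, batch_size=1000):
        # morning procedure: lock the device to the site's ballot batch
        base = barcode - barcode % batch_size
        self.accepted = (base, base + batch_size)

    def mark(self, barcode, button):
        low, high = self.accepted
        if not low <= barcode < high:
            raise ValueError("ballot from another site")
        if barcode in self.used:
            raise ValueError("ballot already used")   # double voting
        self.used.add(barcode)
        self.counters[button] += 1      # ...and print the cross

The real device would of course need tamper-evident storage for the counters and the used set, but the logic fits on a page.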

The solution can even be made entirely mechanical, with no electronics required - it can simply be a hole punching device of the kind known in Russia as "компостер". You press a button and it punches a circular fragment out of the ballot; the cut-out circle falls into a tray and remains inside the device, protected by a regular mechanical lock and key. Later, a supervisor opens it and counts the confetti. Each piece may contain a number or be of a different color, and may be watermarked to prevent forgery.

Neither of the solutions is bullet-proof. But they are simple, cheap and reasonably effective. The devices need to be tamper-evident, they must last just one day, and they need a 2D barcode reader, 4 buttons, a key slot (a touch memory reader ?) and 4 static printing heads. To save on the latter and avoid ink refilling, even the electronic version of the device could cut holes in the ballot instead of printing marks on it.

I'm not a production expert, but I'd say such a device would cost a few dozen bucks. And you'd need hundreds of thousands of them, so the unit price would be even lower.

August 25, 2011

On temporary patches

Once again the other side of a client-server link is broken. Our side works perfectly, but the peer sends in something incompatible with the protocol. This time it sends a money value with 3 decimal places where there should be at most 2. Our parser throws upon
amount=1.001
and surprise ! They can't fix it. Maybe later. And it's definitely gonna take some time. Some indefinite time. Or forever. But they need the solution. Right now.

Reluctantly I go and remove the scale check and add the abominable money rounding. It kind of works, but having that patch makes me feel uncomfortable. It doesn't matter that in case of a future problem we'll probably get away with it by pointing them to the case where they clearly stated their wish. It simply doesn't feel right.

Worst of all, it's not going to be fixed. Ever. How about (I think to myself) having a temporary patch, a time bomb of sorts, which ceases working after, say, a month ? Granted, not a good idea, for many reasons, but for a while it felt strangely good.

And it's dead simple to do in Python.
from datetime import datetime as dt
from functools import wraps

def fix_by(d):
    # decorator factory: whatever it wraps stops working after date d
    def _fix_by(m):
        @wraps(m)
        def _fix_by_check(*args, **kwargs):
            # blow up once the agreed-upon deadline has passed
            if dt.now() > dt.strptime(d, "%Y-%m-%d"):
                raise Exception("TOO LATE !!!")
            return m(*args, **kwargs)
        return _fix_by_check
    return _fix_by

@fix_by("2011-09-25")
def foo():
    print("ugly patch reporting")

foo()
Sigh...

August 01, 2011

Security tokens for the masses

Two new cryptographic devices have hit the market just recently. They are both of the "cryptographic security token USB dongle" kind, and there would be nothing to talk about if they both didn't support the GOST family of cryptoalgorithms. As a quick reminder, in Russia using GOST is not only mandatory, the implementations must also be certified by the FSB. And until recently, there had been just one certified hardware device to support GOST, namely MS_KEY. It has been supported in our product since mid-2010.

Suddenly there are three of them - the newcomers are Rutoken ECP and eToken GOST. And our customers (banks mostly) want them, like, yesterday ! So I had to stop whatever I was doing and write support plugins for them. Not a big deal really, once the documentation is there, a device sample is available, and support is ready to answer a stray stupid question. And so now we support 4 types of security tokens, including the eToken PRO Java not mentioned before because it only supports RSA keys.

I've learnt a lot of interesting things in the process. In this post I'd like to focus on usability.

The target audience of our product is average non-technical users working from home with their bank accounts and such. And so we have to keep things really simple: ask no questions to which they don't know the answers anyway, don't lecture them on what they don't need to know, offer no choice, and in general just walk them through hand in hand.

And here is where certain token features become an obstacle.

In the most general terms, a cryptographic token is essentially a tiny computer with its own CPU, memory and private filesystem, wrapped in a plastic USB dongle. It has a tiny operating system in ROM which knows how to perform a handful of operations, mostly of a cryptographic nature, such as "calculate hash" or "generate digital signature". It plugs into the PC through a standard USB port and, once the right drivers are installed, can be sent commands. Seen this way, a token is simply an external cryptographic device.

But that's not all. To have their product admissible to the market, token manufacturers must conform to certain standards. Those standards prescribe a certain procedure, a security protocol through which the tokens are supposed to be used. This procedure essentially assumes that a token is used in a corporate environment, where tokens are distributed among the employees by security officers in an orderly fashion, there is a well defined lifecycle to each token, and in general - there is a trusted administrator to turn to if there is any trouble.

Guess what ? Home users don't have administrators. They are their own security officers. And so the entire procedure goes down the drain.

Consider this - an average home user comes into a bank, signs a few papers and is given a plastic dongle over the counter. All we need at this moment is to generate a new key inside the token, protect it with a user-provided PIN code and print out a certificate request. All we need later is to collect the PIN code for the key from the user and sign stuff.

Some tokens, specifically MS_KEY and eToken PRO Java which we have been using before, allow just that. You create a key, protect it with a PIN code and are done. Not so the others, Rutoken ECP and eToken GOST. They conform. Which means that every time the owner wants to use the token for anything, he needs to provide the PIN code to the token itself.

This is precisely the same situation as with cell phones. In fact, there is a token inside a SIM card, and the PIN code you enter to begin using the phone is the PIN code to it.

There are a lot of problems with token PINs (as opposed to individual key PINs) for a home user:

  • It is 123456 or the like by default and nobody cares to change it. This is bad, because it guards a key.
  • And if they do change it, disaster is just around the corner, because they forget it. This is bad because there is no established procedure to recover the token contents - there is no trusted administrator.
  • The contents can be recovered following a manufacturer-specific procedure, and this is where another magic PIN code (the security officer's) comes into play. It in turn is 654321, and you cannot even begin explaining to the user what it is for, let alone having it changed, written down and kept in a safe place.
  • A bank can have employees set random PINs on tokens before giving them out, but that complicates the procedure, adds expense, and not only doesn't make the threat of a hacker knowing the PIN go away, it turns it into one specifically targeting the bank - it was them who knew the PIN, after all. You can automate the procedure and have PINs printed in envelopes, similar to how it is done with credit cards, but that's even more expense.
Again, as with cell phones: how many people have their phones protected with PIN codes ? And of those, how many have their PUKs kept in a safe place ?

So in the best case, the user never changes the PIN and the token contents are accessible to anyone. And in the worst case, the user forgets his password in 3 days and the token is as good as dead. Or the other way around, depending on which approach to security you take.

Compared to that, losing the PIN code to an individual key inside the token is no big deal. Generate another one, come up with a new PIN and there you go.

Returning to the tokens at hand.

Both Rutoken ECP and eToken GOST have their mandatory PIN codes, but Rutoken ECP at least supports protecting keys with separate PINs. This way we don't care about the PIN code to the token at all. It may be 123456 or whatever. The user is required to enter it each time, which is a nuisance, but each individual key is protected with a separate PIN just as before.

With eToken GOST it is worse - it supports only one PIN code. There is no way to protect individual keys. Anyone who knows the token PIN has access to everything. This approach makes sense too, but it renders useless some interesting features of our application, such as having multiple active keys on the same token, or having a new key generated over an old one with the new certificate request signed by the old key, thus avoiding a visit to the bank in person when the certificate is about to expire.
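
To illustrate the difference, here is a toy model of the two PIN schemes as seen from the application (this is not any vendor's actual API, just the shape of it):

class PerKeyPinToken:       # the Rutoken ECP way
    def __init__(self, token_pin, keys):    # keys: id -> (pin, key)
        self.token_pin, self.keys = token_pin, keys
    def sign(self, key_id, key_pin, data):
        # the token PIN is a formality, each key has a PIN of its own
        pin, key = self.keys[key_id]
        if key_pin != pin:
            raise Exception("wrong key PIN")
        return (key, data)                   # pretend signing

class SinglePinToken:        # the eToken GOST way
    def __init__(self, token_pin, keys):     # keys: id -> key
        self.token_pin, self.keys = token_pin, keys
    def sign(self, key_id, token_pin, data):
        # one PIN guards everything on the token
        if token_pin != self.token_pin:
            raise Exception("wrong token PIN")
        return (self.keys[key_id], data)     # pretend signing

With the first model the application can keep several independently protected keys on one token; with the second, whoever knows the single PIN has them all.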

Why am I not blogging more ?

This is kind of a meta-post, a blog entry about the process of writing a blog.

And so, I'm thinking about what it is that holds me back from writing more. Things worth posting cross my mind frequently; I observe them, make a mental note if you like, but do not write them down.

Problem #1
The process of thinking an idea over is fun. I keep thinking about the thing until there are no longer any interesting aspects to it. Once this is done, I quickly lose interest in the idea, sort of letting it go. And this is exactly the moment when I should start typing it in. Oh, the bother.

Problem #2
Similarly, once I have thought the idea over, it no longer fascinates me, and there is no indication of whether it would be fascinating to anyone else. What I'd need is a kind of preliminary positive feedback, which is impossible.

Problem #3
Writing is hard on its own premises. It takes time and effort to write well, whether in your native tongue or not. What are the benefits of doing it at all ?

The outcome ? The step between having a thought and having it written down is hard. It's not going to happen unless I actually make it happen.

June 07, 2011

Another usability pearl

One of my all-time favourites:
Press ENTER to exit.
So short, yet it blows your mind right away.

May 20, 2011

Transactional memory - nothing but trouble

This post is a followup to a review I have recently written of a book called "Transactional memory". Here I focus not on the book and its qualities, but on the matter it discusses - transactional memory. Still, when I say "they" and "their" I refer to the authors and, indirectly, to the researchers they represent.

Why at all ?

To begin with, what are the problems they say transactional memory solves ?
  • Arrival of multi-core chips makes programs parallel.
  • Writing effective parallel programs is hard.

My first point of criticism is right there - there is no evidence that parallel transactions make a good generic programming model at all. Even if everything else were perfect, why would TM programs be efficient ?

But there are transactions in databases !

They quote databases as a proven example of successful transactional systems and kind of go from there. Ok, I buy that, but there are huge differences -
  • Databases operate on very high level objects - tables, records and the like.
  • Databases are about well defined objects with rather limited behavior. You can only do a handful of operations on them. You cannot invent relational operators.
  • Because the operations are high level, databases can to a degree "understand" what you are doing, and control the semantics.
  • With memory bytes you can do anything. There is no way to tell what you are doing in that transaction. It's very low level.

Also, as for their success: database transactions don't make the task of writing an efficient and/or correct program any simpler. The guarantee of an ACID system is not that it's efficient or easy to use, but that it tolerates various screw-ups and maintains data integrity. You normally say something like "it's in a transaction, so nothing bad can happen to the data". You don't say "it's in a transaction, therefore it runs fast" or "it's in a transaction, therefore it is correct".

Anything good at all ?

The only thing that could possibly be good about transactional memory is that, just like any other enforcing formalism, it makes it possible to reason about the correctness and behavior of a program. Proof of correctness, anyone ?

That'd be great, but proof of what ? Again, working at a very low level, analysis of a TM program will reveal that there is no conflict between accesses to memory addresses such and such. Uh-huh, thanks.

Transactional memory is shared

For a technology whose goal is supporting multi-core computers (and we are talking about hundreds of cores, right ?), TM is surprisingly insistent on the shared memory model.

Although theoretically there is nothing wrong with 100 logical cores accessing the same memory, we live in the real world, and in the real world hardware rules. And hardware dictates that cores should be as independent as possible. Even in the SMP architecture there are per-CPU caches, and you really should keep data accesses affine to a CPU, otherwise cache coherence will kill your performance. In the NUMA architecture you don't have the privilege of a uniform view of shared memory. And in clusters you don't have shared memory at all.

Now, why stick to an architecture that already scales poorly ?

Transactional memory doesn't play well with anything

There hardly is anything in existing hardware or software which is not alienated by TM. Instead of having support from your commodity hardware and momentum from existing software development practices, you have to fight them.

Hardware:
  • TM doesn't work well with caches. Shared memory doesn't in general.
  • TM doesn't work well with CPU optimizations such as memory access reordering.
  • TM doesn't work well with non-shared bus architectures. Good bye NUMA, clusters, transputers.

Software:
  • TM doesn't work with locks and waits. Most current parallel programs use locks and waits.
  • TM doesn't like the global heap and especially garbage collection.
  • TM doesn't like exceptions, because it is unclear what to do in case of an exception.
  • TM doesn't like I/O because it is irrevocable. Because you cannot un-print something from your output, you have to invent workarounds like "transactions that are guaranteed to commit".

Even worse, TM conflicts with itself. Consider having a transaction within a transaction. This can happen easily if you begin a transaction and then call a library which begins a transaction of its own. And it gets ugly fast. What if the inner transaction committed but the outer has to roll back ? What if there are multiple inner transactions ?

Is it any surprise that existing databases don't support arbitrarily nested transactions ? Are you saying there should be a single memory transaction per execution context, as in databases ?

So what can you do in a transaction ?

Because of the mentioned hardware implications, there is not much you can do in a memory transaction.

1. The amount of memory that you can access in a single transaction is limited.

For many TM systems it is limited by the hardware cache size, and this is not much at all. Caches contain lines, and by touching a line you effectively poison it, so if you access 2 bytes in separate lines you use up 2 cache lines. Also, concurrent transactions have to share the cache. What good is a 4M cache if it is only 128K lines and there are 128 parallel transactions ?

2. The amount of time that you can spend in a transaction is limited.

For many systems the length of the transaction must be less than the OS scheduling quantum. Put another way, you cannot have a context switch within a transaction. And how much is that ? 1/1000th of a second ?

There are TM systems that allow dirty transaction data to spill from the cache to main memory, and I presume there are complex ones that allow context switching and arbitrary execution time, but their performance remains to be seen.

So, what can you do in a transaction with such draconian restrictions ?

The answer is simple - memory transactions are only usable in low level libraries implementing higher level objects such as data containers. For example you can have a multiple-producer multiple-consumer queue, a tree that allows highly parallel access, or some equally wonderful thing.

But is it worth it ? I agree, it takes an expert to write a tree using locks or some magic non-blocking technique, but it is only done once. Are you saying that TM is good because now anyone could write a queue in an hour and it would be correct and efficient ?

And you cannot use memory transactions in your applications. It is either impossible or grossly inefficient.

And the answer is ?

I do believe message passing is a better way to go. The technique goes under many names, but the idea is the same - there are separate processes with separate data, and they interact only in a well-defined manner.

It has no ACID properties, but heck, neither does the real world. Like I always say - there are no transactions in the real world.
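
For the record, here is that model in its most basic Python form - two processes, each owning its data, with a pair of queues as the only point of contact:

from multiprocessing import Process, Queue

def adder(inbox, outbox):
    # this process owns `total`, no other process can even see it
    total = 0
    for item in iter(inbox.get, None):   # None is the stop marker
        total += item
        outbox.put(total)

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    p = Process(target=adder, args=(inbox, outbox))
    p.start()
    for i in (1, 2, 3):
        inbox.put(i)
        print(outbox.get())              # prints 1, 3, 6
    inbox.put(None)
    p.join()

No shared state, no locks, no transactions - the queues serialize everything, and whether the two sides run on one die or on two machines is an implementation detail.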

May 17, 2011

Notes on implementing TLS. #2: Version differences.

The first question when starting with TLS is this: which version to implement ?

I started with version 1.2, the latest and presumably the best, initially using its specification as a reference. The other two versions I just skimmed over, and since there were no apparent incompatibilities, proceeded with 1.2. Later on, when it was already working, I backported the differences.

1. What are the differences ? Are they significant ?

The real question is not about knowing what exactly those differences are, but about deciding whether it is feasible to support them all within the same codebase with minimal effort. Luckily, it is.

There are two major formal differences between the versions.

1.0 vs. 1.1+

IV handling. When a block cipher is used for traffic encryption, the key remains the same but the IV varies with each subsequent packet. Version 1.0 uses the tail of the previous ciphertext as the next IV, while versions 1.1+ generate a fresh random IV for each record and transmit it explicitly along with the ciphertext.
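
In code, the difference boils down to where the IV comes from. A sketch (cbc_encrypt here stands for whatever block cipher the negotiated ciphersuite names, and MAC and padding are omitted):

import os

BLOCK = 16   # the cipher's block size

def seal_record_tls10(cbc_encrypt, prev_record, plaintext):
    # 1.0: implicit IV - the last ciphertext block of the previous record
    iv = prev_record[-BLOCK:]
    return cbc_encrypt(iv, plaintext)

def seal_record_tls11(cbc_encrypt, plaintext):
    # 1.1+: a fresh random IV is generated and shipped with the record
    iv = os.urandom(BLOCK)
    return iv + cbc_encrypt(iv, plaintext)

The 1.0 scheme makes the next IV predictable to anyone watching the traffic, which is precisely why it was changed.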

1.1- vs. 1.2

Ciphersuite support. Version 1.2 allows using any cryptographic algorithm of your choice anywhere in the protocol, restricting only the packet parsing rules. With versions 1.1- you could replace some of the cryptographic primitives, but some others were hardcoded.

There are also minor differences in packet formats which are easily handled with a few if's here and there. On the other hand, these raise questions regarding interoperability between clients and servers of different versions.

2. How to test different versions ?

As far as testing goes, my main concern is correctness. In the initial phase, when my understanding of the protocol was still weak, it was invaluable to test against existing, presumably correct implementations. It allowed me to quickly try out the options which were unclear from the spec. Later on, as the product stabilized, I turned to loopback testing, leaving other implementations for regression testing.

The second testing concern is compatibility. Although our product is proprietary and compatibility is not a main goal, it is a sign of a good implementation and I pursue it. There are corner cases and behaviour quirks that you only find out about when examining existing implementations, and the more of them you try out, the better. Every successful connection to somebody else's product makes you more confident in yours.

Protocol versions pose a serious problem for testing. TLS 1.0 is the oldest and most widely supported; you can utilize pretty much any 3rd party TLS implementation to test against yours. On the other hand, not all of them support even 1.1, not to mention 1.2, and this hinders testing significantly. Most of the time I used OpenSSL, which supports version 1.0, and GnuTLS, which supports all versions.

3. GOST support.

Our top priority is using the GOST family of cryptoalgorithms. Russian regulations mandate that GOST be used everywhere, and nothing else. It is unclear whether GOST is allowed to even co-exist with another cryptoalgorithm within the same system, so to be on the safe side we have to replace all the TLS cryptographic primitives with their GOST versions. This is why TLS 1.2 is the only viable choice.

There are existing products that officially declare TLS 1.0 (and even SSL) with GOST support, and I have no idea how that's possible. My guess is that they broke the protocol by illegally inserting GOST where it was not supposed to be, compatibility be damned.

May 05, 2011

My other blog

I also blog in Russian now, here.

April 17, 2011

Notes on implementing TLS. #1: Reasons for.

The last few months I've been busy implementing TLS protocol support for this product of ours. It's in fairly good shape by now and I'd like to share a few thoughts.

First of all, why have our own implementation when there are existing ones ? Actually, there are quite a few reasons.

1. We need it to support the Russian GOST family of cryptoalgorithms.
2. In Russia one cannot freely implement GOST; only the use of certified libraries produced by state accredited companies is allowed.
3. We need it tightly integrated with our existing product. For example, different such crypto libraries may need to be used simultaneously.
4. To make things worse, one option is to have crypto operations done by a hardware token or smart card.

Therefore in the worst case you have a USB dongle with GOST support inside, and you have to pass all crypto operations through it. Not just the private key operations - you can't even compute a simple hash outside of it.

One other thing is that the product is not crypto-centric. We need a client-server tunnel in which the server supports hundreds if not thousands of connections and utilizes multiple cores effectively. This asks for a different approach, in which you think about the server architecture upfront and leave a modest place for cryptography. It leaves a lot more room for maneuver.

Besides, doing things is fun !

April 08, 2011

Better programming is just a way of saying no later

You work hard to build a clean and extensible architecture to help your product withstand every possible requirement, hoping this will allow it to live long and prosper. And it will, but not for long. No matter how good your skills, one day some customer wants something so awkward that your only choice is to tell him no. No, this is not possible. No, this is not going to work. No, this is technically infeasible. You'd love to, but no.

But then, isn't good programming just delaying that moment of telling your customer no ? Of a refusal finally backed by technical impossibility ? You had jumped through hoops implementing every possible wish while it was possible. Not anymore. You say no.

On the other hand, if your software had been written badly, you'd have had this moment of truth ages ago. It could have provoked a confrontation with the customer, but perhaps it would have made both of you realize the limits earlier and come to a different, even non-technical, solution, don't you think ?

January 10, 2011

Notes on implementing MongoDB driver

It's been a week-long statutory holiday in Russia and I spent a few afternoons implementing MongoDB support for the Python 3 application framework Pythomnic3k.

It turned out to be a simpler task than I thought, a few odd hours here and there, given that I wanted to implement everything from the ground up. Still, there are a few notes I'd like to share afterwards.

Note #1: BSON

BSON is "Binary JSON" - a proprietary binary format for serializing pieces of JavaScript for transmission or storage. It sets the byte representation rules for simple JavaScript types such as "Integer" or "String", and also for MongoDB-specific structures such as "Regex" or "JavaScript with scope".

Supporting BSON is the #1 requirement for a MongoDB driver. Essentially, BSON is all there is to it.

Its entire specification is just two pages long, describes formats for 20 different objects, and for such a small spec is notably awkward. There are different ways to serialize similar objects, there are deprecated objects, and there are mysterious objects with unspecified format, probably reserved for internal use.

For example, String is serialized as
(length)(content)(NULL)
but Regex is serialized as just
(content)(NULL)
Also, a key/value pair is serialized as
(value type)(key)(NULL)(value)
which is strange, because while deserializing you cannot simply read the key, then the value type, and then switch to an appropriate type parser, which would be possible for
(key)(NULL)(value type)(value)
input.
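
For instance, a minimal Python serializer for the String cases above (type tag 0x02, little-endian int32 length covering the content plus the trailing NULL):

import struct

def bson_cstring(s):
    # (content)(NULL) - used for keys and for Regex
    return s.encode("utf-8") + b"\x00"

def bson_string(s):
    # (length)(content)(NULL) - the length includes the trailing NULL
    data = s.encode("utf-8") + b"\x00"
    return struct.pack("<i", len(data)) + data

def bson_string_element(key, value):
    # (value type)(key)(NULL)(value), 0x02 being the String type tag
    return b"\x02" + bson_cstring(key) + bson_string(value)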

Note #2: No response to packets

The application communicates with the MongoDB engine using request packets, OP_THIS, OP_THAT and OP_SO_ON. To some of them the database responds and to some it does not. In fact, it prefers to remain silent, responding only when it has to.

For example, if you want to find something, you send OP_QUERY and do receive a response, OP_REPLY, containing the data you've been looking for. But if you want to insert data, you send OP_INSERT and (surprise) there is no response.

I understand that with MongoDB there are no guarantees of data integrity, no persistence and no transactions, but now there is not even an ack from the database. It makes me feel uncomfortable.

Note #3: Explicit ordering of documents

This one is ugly. The main data structure used in BSON, hence in MongoDB, is a document, which is a dict or associative array. And associative arrays have no order. They have an iteration order which is arbitrary, implementation dependent, and likely derived from the internal hash function. That order may even be well defined, for example smallest key first, but that could only work with comparable keys. In any case, dicts don't have an n-th element.

The problem appears when the request packet containing a document is serialized to be transmitted. MongoDB relies on the order in which the dict appears on the wire. Specifically, if you send a command to the database, the first key must contain the name of the command. And in some cases the command requires positional arguments to be transmitted in a defined order. So you have to iterate over a dict in a specific order known only to the caller.

For example, if you have a dict
{ "foo": 1, "bar": 2 }
and you want to invoke
db.foo(bar)
with it, you have to serialize it like this:
[foo=1, bar=2]
but if it is
db.bar(foo)
it is the other way around.
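
In Python 3 terms this means the driver cannot accept a plain dict for a command - the caller has to pass something order-preserving, e.g.:

from collections import OrderedDict

# a plain dict may iterate as foo, bar or as bar, foo - but the
# command name must be serialized first, so the caller has to
# spell the order out explicitly
command = OrderedDict([("foo", 1), ("bar", 2)])   # invokes db.foo(bar)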

The abysmal
{ 1 => "first", 2 => "second" }
kind of dict-array has also found its way here - it is the way BSON serializes lists - but this at least is an implementation detail hidden from the application inside the driver.

Note #4: Similar things in different ways

This is easy to illustrate with the way MongoDB reports errors.

For one, as already mentioned, some of the requests get no response whatsoever, even though they might fail.

Then, if you get an OP_REPLY response, it has the bit flag QueryFailure, which if set should be accompanied by a single document with an $err key containing the error message. But it also has the bit flag CursorNotFound, which apparently is also an error condition, but that one has no message.

Furthermore, if the request was a command (which only the caller knows), the OP_REPLY is a success, but it contains a document with an ok key that can be 0, in which case there is also an errmsg key containing the error message.

So in the first case error reporting is done at the packet level, and in the second case - at the application level. And in some cases no error is reported at all.
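
So a driver ends up with a three-way check along these lines (a sketch; the flag bit values are from the wire protocol spec, the function itself is mine):

def check_reply(flags, documents, was_command):
    if flags & 1:                    # CursorNotFound - an error, no message
        raise Exception("cursor not found")
    if flags & 2:                    # QueryFailure - packet level error
        raise Exception(documents[0]["$err"])
    if was_command and not documents[0]["ok"]:
        raise Exception(documents[0]["errmsg"])   # application level error
    return documents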

All this leads to the final

Note #5: Ad Hoc-ish

The exercise of integrating with MongoDB leaves an impression of incompleteness. It feels like this database (and, I suspect, other currently existing NoSQL databases) is an experiment, an early prototype.

Well, they have 40 years of catching up to do before they are as well understood and implemented as relational databases. It's a long way to go, so I do wish them good luck.