Rule 2. Do not design robots capable of disobeying human beings, unless obeying would violate Rule 1.
Rule 3. Do not design robots that cannot protect or repair themselves, unless self-preservation would require violating Rule 1 or Rule 2.
We keep pointing to the stories in I, ROBOT as examples of how Asimov's Laws break down, but what if they were never meant to work as described?
If you work anywhere near AI, think about it.