At the bottom of page 209, Peikoff discusses Rand’s idea of the “immortal robot” to illustrate that such an entity could have no values, since it does not face the alternative of existence or nonexistence, i.e., it would not be alive. I’m wondering whether this can be contrasted with (possibly unreasonable) concerns about AI. According to one definition, AI “involves using computers to do things that traditionally require human intelligence.” If I understand them correctly, the concerns about AI are that it could take self-generated action that would harm humans and that we would not be able to control it. Do such concerns effectively rest on the assumption that AI would not only be alive and conscious but would also, like humans, have the capacity for volitional conceptual thought? If so, are these concerns warranted?