Integrating AI with Robotics Systems to Impact the World

Over the past year, I've been eagerly diving into the world of newly emerged Large Language Models (LLMs), exploring where and how they can enhance my productivity. As is evident from my previous blog posts, AI has permeated nearly every facet of my life and work, helping with tasks ranging from coding to drafting documents, blogging, and organizing my thoughts. Overall, I'd say my productivity has jumped by a factor of five to ten. Viewed from a more abstract perspective, however, AI's impact on the world has so far been mediated entirely through me as an agent. It helps me code, and as a programmer I use that code to serve my clients; it helps me write documents, which I use to influence my colleagues; it helps me blog, through which I reach my readers. From this vantage point, it's clear that I've become the bottleneck in AI's potential to impact the world.

An intriguing, if somewhat perilous, thought is to let AI influence the physical world directly, perhaps by integrating it with robotic systems to help us accomplish more. Humans might still need to audit or review the process, just as with coding, document writing, or blogging, but this could undoubtedly scale up our capabilities, enabling smarter automation, multiplying our efficiency, and opening up related scenarios. There is considerable research in this area, but I wanted to run an experiment in the most grounded way possible, to see whether integrating AI with a robotic system could address challenges I actually face.

I've chosen astrophotography as my field of exploration. Today's astrophotography is a far cry from the manual telescope maneuvering and film photography of the past. It typically involves computer-controlled equipment: positioning with an equatorial mount, capturing images with a digital camera, and focusing with a motorized focuser. The entire process can be distilled into three phases: sensing, decision-making, and execution. In practice, we humans make decisions based on weather forecasts and current conditions (e.g., moon presence, cloud cover), such as whether to venture out for shooting tonight and which celestial objects to capture. The varying altitudes of different objects over the night also dictate how we sequence their capture to avoid obstructions or atmospheric disturbance. Finally, we translate these decisions into concrete actions, like clicking buttons in control software to point the telescope or run a capture sequence. This involves a significant amount of manual labor that is not simple automation but requires a certain level of intelligence. I'm highly curious whether AI could take over these manual tasks, allowing us to focus our energy on the parts that truly require creativity and judgment.
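To make this concrete, here is a minimal sketch, with entirely hypothetical names and thresholds, of the kind of sensing-and-decision logic we normally run through by hand each night:

```python
# A minimal, hypothetical sketch of the nightly "sensing -> decision" step that is
# usually done by hand. Names and thresholds are illustrative, not from my system.

def should_image(altitude_deg: float, cloud_cover: float, moon_up: bool) -> bool:
    """Decide whether a target is worth imaging right now."""
    if cloud_cover > 0.3:        # too cloudy to bother setting up
        return False
    if moon_up:                  # moonlight washes out faint targets
        return False
    return altitude_deg > 30     # low targets suffer from obstructions and poor seeing

def plan_order(targets: list[dict]) -> list[str]:
    """Sequence tonight's viable targets so the highest one is captured first."""
    viable = [t for t in targets
              if should_image(t["altitude"], t["cloud_cover"], t["moon_up"])]
    return [t["name"] for t in sorted(viable, key=lambda t: -t["altitude"])]
```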

Delving deeper into how AI applies across scenarios reveals both its potential and its limitations. Just as AI proves highly effective for some coding and documentation tasks but less so for others, the key lies in identifying the system components most amenable to AI intervention, so as to maximize its utility and open up new business scenarios. But as we ponder these questions more deeply, we soon hit a core challenge: the trade-off between security and flexibility.

The risk here is palpable. When AI's influence on the world is channeled through us as intermediaries, things stay relatively safe: bugs or errors may still slip under our radar and cause harm, but the risks remain manageable. Granting AI direct access to robotic systems, so that it can affect the world tangibly, significantly amplifies the potential for destruction; the thought of it manipulating robots to harm people or property is chilling. Ensuring that AI's capabilities remain constrained throughout the process, safeguarding people and property, is therefore paramount. Yet this security doesn't come free. A common way to bolster security is a whitelist mechanism that restricts AI to a very limited set of actions, which improves safety but also curbs how much AI can help us. Balancing security and flexibility, two seemingly antagonistic objectives, is a clear technical challenge. On top of that come more tactical questions, such as how to integrate AI into the system's components in the most effective way so that it can leverage its contextual understanding.

From an engineering standpoint, two prevalent, mature approaches exist for striking a balance between security and flexibility. The first is a Domain Specific Language (DSL). Essentially, this means inventing a small programming language that exposes various capabilities to the AI, having the AI output a program in this DSL, and executing it with our own interpreter. Since we have full control over that interpreter, we can confine its capabilities within safe bounds: if operations that could harm humans simply don't exist in the DSL, the AI naturally cannot perform them. Although not trivial, this is a viable and not overly complex solution. DSLs have many benefits, but a glaring flaw is the substantial cost of defining a DSL and an interpreter that fit our scenario; it often feels like reinventing the wheel. In astrophotography, for example, we could easily enumerate the basic operations the DSL should support, such as taking a photo, focusing, or pointing at a celestial object. Defining those is not wasted work, but making the language more complex and powerful inevitably requires loops, conditional branches, and other control logic. Defining such control logic is not straightforward, and it already exists in every programming language, so we would effectively be re-implementing features in a well-trodden domain, which is far from ideal in engineering terms.
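To make the DSL idea concrete, here is a toy sketch, with entirely hypothetical command names and equipment objects, of what such an interpreter could look like: the AI would emit lines like "point M101" or "capture 300", and only the verbs we define can ever run. Notice that the moment we want a loop or a branch, we would have to invent syntax for it ourselves.

```python
# A toy DSL interpreter (illustrative only). The AI emits one command per line,
# e.g. "point M101" or "capture 300"; anything outside this vocabulary is rejected.

def run_dsl(script: str, mount, camera) -> None:
    commands = {
        "point":   lambda arg: mount.goto(arg),            # slew to a named target
        "capture": lambda arg: camera.expose(float(arg)),   # one exposure, in seconds
        "focus":   lambda arg: camera.autofocus(),          # run autofocus
        "park":    lambda arg: mount.park(),                # return mount to home
    }
    for line in script.strip().splitlines():
        verb, _, arg = line.strip().partition(" ")
        if verb not in commands:
            raise ValueError(f"unknown command: {verb!r}")  # nothing else can run
        commands[verb](arg)
```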

The other feasible engineering solution is to use an existing programming language and avoid reinventing the wheel. This approach gets control logic for free: loops, conditional branches, variables, state management, and system facilities like delays, which significantly cuts development cost and accelerates results. However, it can also hand the AI too much power. If the AI can write arbitrary Python to control the astrophotography equipment directly, destructive code could slip in, unintentionally or deliberately, with severe consequences such as deleted system files or damaged equipment.
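The appeal is easy to see in a sketch. Assuming hypothetical mount and camera objects, ordinary Python already expresses the kind of conditional, repeated sequence an imaging session needs, with nothing to invent; but, by the same token, nothing in the language itself stops the same script from importing shutil and deleting files.

```python
import time

def imaging_loop(mount, camera) -> None:
    """A 50-frame sequence in plain Python: loops, branches, and timing come for free.
    (mount and camera are hypothetical equipment objects, for illustration only.)"""
    for _ in range(50):
        if mount.target_altitude("M101") < 30:   # target has sunk too low: stop and park
            mount.park()
            break
        camera.expose(seconds=300)               # one 300-second exposure
        time.sleep(5)                            # let the mount settle before the next frame
```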

Hence, whether we opt for a Domain Specific Language (DSL) or an existing programming language, each approach has clear advantages and drawbacks. This challenge is not unique to astrophotography or robotic control; it pervades the Large Language Model (LLM) domain. One potent tool currently at our disposal is sandboxed execution. For tasks that require writing code, such as data analysis or robotic control, OpenAI offers a sandboxed code-execution feature that runs GPT-written code on its own servers, so even if the code contains something malicious, OpenAI rather than we bears the brunt. But often we cannot share data with OpenAI for various reasons, or, as in my project, we need to control hardware connected directly to my computer, which makes OpenAI's sandbox impractical. Even a third-party sandbox service only sidesteps part of the problem; we are still left with the core question of whether GPT-generated code can be trusted. And once human intervention enters the picture, the question sharpens: GPT may not produce harmful code on its own, but if a user injects something malicious, like deleting system files or stealing private information, how do we guard against that?

The workaround I've landed on is to use an existing programming language, so development iterations stay efficient, and then fortify security through four measures: first, reduce the likelihood that GPT inadvertently writes code that could harm the system; second, ensure that even if such code is written, it is largely unexecutable; third, limit the severity of the consequences if it does execute; and fourth, prevent people (as opposed to the AI) from deliberately injecting malicious code, which, thanks to the earlier constraints, would neither execute nor cause significant damage even if they tried. Admittedly, achieving all of this is hard, would ideally involve a professional security audit, and my method isn't foolproof.

The strategy unfolds along four dimensions. First, encapsulate all robotics-related operations in a single class, make them accessible only through that class, and state this requirement in GPT's prompt; any generated Python code then operates through this pre-defined class, much like a DSL, and since we fully control the class, the associated risks are contained. Second, rather than executing GPT's Python directly, first parse it with Python's Abstract Syntax Tree (AST) module and impose structural limits, such as refusing to run anything that contains an import statement, which strictly confines the code to the libraries we provide. Third, while GPT is unlikely to do harm by accident, a human might deliberately write malicious code, for example using the open function to reach sensitive system files; this is trickier, but a blacklist or whitelist of callable functions plugs the obvious holes while preserving flexibility in the code itself. Fourth, even if someone bypasses the first three constraints, the damage is limited as long as the program runs in a sandbox rather than on the host machine. With Docker in particular, each user's environment is isolated, so damage to the operating system doesn't carry over to the next user, who starts from a fresh environment; this greatly limits the havoc malicious code can wreak.
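Here is a minimal sketch of the second and third measures using Python's built-in ast module; the names and the whitelist are illustrative, and, as noted above, this alone is not a complete sandbox (attribute-access escape tricks exist), which is exactly why the Docker layer still matters.

```python
import ast

# Whitelisted builtins: the generated code may use these and nothing else.
ALLOWED_BUILTINS = {"range": range, "len": len, "min": min, "max": max, "print": print}

def reject_imports(source: str) -> None:
    """Walk the AST and refuse to run any generated code that contains an import."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            raise ValueError("import statements are not allowed in generated code")

def run_generated(source: str, observatory) -> None:
    """Execute generated code with only our equipment API and whitelisted builtins
    in scope, so calls like open() or __import__() are simply not available."""
    reject_imports(source)
    namespace = {"__builtins__": ALLOWED_BUILTINS, "obs": observatory}
    exec(compile(source, "<generated>", "exec"), namespace)
```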

Specifically, I chose Python as the conduit through which the AI operates the equipment, and implemented a set of Python classes for common astrophotography tasks: querying a celestial object's altitude, pointing at it, taking photos, and focusing. I also imposed restrictions so that only these specified interfaces can be used to operate the equipment, and forbade importing unvetted modules, to keep users from performing complex, destructive operations. On top of that, we manually review the AI-generated code to make sure it contains nothing malicious. Together, these measures aim to balance security with flexibility.
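As a rough idea of what such a class library might look like (the names and signatures here are illustrative, not my actual interfaces):

```python
class Observatory:
    """Illustrative wrapper around the imaging gear. Every operation the generated
    code may perform goes through methods like these; nothing else is exposed."""

    def __init__(self, mount, camera, focuser):
        self._mount = mount
        self._camera = camera
        self._focuser = focuser

    def altitude(self, target: str) -> float:
        """Current altitude of the named object, in degrees."""
        return self._mount.get_altitude(target)

    def point(self, target: str) -> None:
        """Slew the mount to the named object."""
        self._mount.goto(target)

    def capture(self, exposure_s: float) -> None:
        """Take a single exposure of the given length in seconds."""
        self._camera.expose(exposure_s)

    def dither(self) -> None:
        """Shift the pointing slightly between frames to average out sensor artifacts."""
        self._mount.dither()

    def focus(self) -> None:
        """Run the motorized autofocus routine."""
        self._focuser.autofocus()

    def park(self) -> None:
        """Return the mount to its home (reset) position."""
        self._mount.park()
```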

In this link, a demo video showcases a prototype system I developed. We describe our requirements to the AI, which then helps us craft a sequence of instructions; that sequence is essentially a small Python program capable of complex control flow, including loops and branches, using nearly all of Python's syntax plus a few restricted system libraries. The video shows the AI smoothly writing and immediately executing code for simple commands, such as pointing the telescope at the M101 galaxy, holding the position for five seconds, and then resetting. It also handles far more intricate instructions, such as: "Check the current altitude of M101; if it's above 30 degrees, point at it and take 50 photos with 300-second exposures, performing a dither after every 5 photos, then reset after completing the 50 photos. If M101's altitude drops below 30 degrees during this process, reset immediately." The only flaw in the first version was that the AI overlooked our intent to save the photos and omitted the command to save the files. I believe this is easy to address with more careful prompt engineering, and our manual review caught the omission immediately, showing that such mistakes are neither hidden nor hard to amend. In further experiments, we also noticed the AI isn't always reliably compliant; for instance, it sometimes tries to import modules despite the prompt forbidding it. This too can likely be mitigated with refined prompt engineering.
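For a sense of what such a generated program looks like, here is a hand-written approximation of the complex M101 instruction above, expressed against the illustrative Observatory API sketched earlier (the AI's actual output differs in detail):

```python
def m101_session(obs) -> None:
    """Roughly the shape of the generated program for the complex M101 instruction."""
    if obs.altitude("M101") <= 30:        # not high enough yet: do nothing
        return
    obs.point("M101")
    for i in range(50):                    # 50 frames of 300 seconds each
        if obs.altitude("M101") < 30:      # target sank below 30 degrees: abort
            break
        obs.capture(exposure_s=300)
        if (i + 1) % 5 == 0:               # dither after every 5 frames
            obs.dither()
    obs.park()                             # reset whether finished or aborted
```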

Overall, my impression from this process is that the AI itself is remarkably safe. Across the many scenarios we tested, the code it generated was consistently sensible and never involved operations that might damage equipment, intentionally or otherwise, such as spinning a motorized focuser endlessly in one direction until the motor burns out. Throughout the design process, our real concern was the human users: most of the restrictive measures, such as the sandbox and the limits on callable functionality, exist to stop people from inserting malicious code. That is a fascinating observation, and it is also a manageable one. When users control their own devices rather than rented ones, for instance, they have a strong incentive to cooperate with the AI and keep the code safe.

This experiment has been incredibly enlightening. The lesson I've taken away is that integrating robots with AI can yield excellent results. AI's current capabilities are formidable: it can understand and perceive its surroundings, make sensible decisions, and translate those decisions into robotic actions. On the safety side, my experience is that the AI itself is cautious and secure; it is human malice we still need to guard against, which means building mechanisms that limit what the system can do. When implementing a similar system, a ready-made programming language like Python is worth considering, for several reasons: Python integrates easily with many robotic systems; today's LLMs are exceptionally good at writing Python code; and Python makes it convenient to restrict which libraries and capabilities are callable, for instance through its built-in abstract syntax tree (ast) module. I hope this experiment gives you some inspiration.
