You’ve probably read about OpenAI’s ChatGPT. But in case you haven’t, it’s widely considered the most advanced general-purpose conversational AI publicly available right now. People use it to write term papers, compose poetry, refactor source code, and craft pickup lines. And along with all that goodness comes the potential for evil: the model has been extensively trained to refuse to produce adult or exploitative material, but users have still managed to coerce it into writing malware, erotic fiction, and hate speech.

As with all new technology, there is potential for good and evil, so here are some critical risks to keep in mind while experimenting:
– Unreliable output. While often right, ChatGPT is confidently wrong in some cases, so you shouldn’t trust its answers without verifying them.
– Inadvertent disclosure. Thinking of uploading your latest billion-dollar idea for “advice” from the AI? OpenAI has stated that its team can review everything users enter, meaning you could be giving that idea away. The same holds for sensitive company information.
– Cost. The service is currently free, but charges of some kind are inevitable, so building products or workflows that depend on it could get expensive.
– Legal uncertainty. Copyright and intellectual-property law for AI is still immature. Lawsuits are already pending over AI art generated by models trained on allegedly copyrighted works, and how those cases resolve could dramatically affect the legal status of anything else AI generates.

Remember, these are just a few of the significant risks of this new, potentially disruptive technology. Keep your guard up, and be thoughtful about how you use it!
