Atlassian, the Australian software giant, has proposed that Australians receive clear notifications when they are interacting with human-like artificial intelligence (AI) systems, such as chatbots. The company also believes individuals should be informed when AI is used to make significant decisions that affect them. Atlassian’s suggestion comes in response to the Australian government’s request for ideas on how to promote safe and responsible AI practices.
To establish a structured approach to regulation, Atlassian recommends adopting a traffic-light system. Under this system, scenarios that involve actual or imminent harms not covered by existing legislation would be classified as “Red.” In these cases, the government would proactively intervene. For example, the common practice of using AI to determine eligibility for insurance policies would likely fall under the “Red” category. Atlassian asserts that customers should be informed when AI is making such decisions and educated about the implications.
Additionally, Atlassian proposes classifying scenarios as “Amber” where the risks of harm are unclear or distant, or where it is uncertain whether existing legislation already addresses them. According to Anna Jaffe, Atlassian’s director of regulatory affairs and ethics, the doomsday scenarios some have presented, in which AI achieves superhuman intelligence and wipes out humanity, fall under the “Amber” category: those risks are speculative and future-oriented.
Uses of AI that existing legislation already adequately covers would be classified as “Green.” The company argues that this describes many current uses of AI, and that the focus should be on addressing the harms AI actually causes rather than preparing for hypothetical scenarios. Atlassian does not intend to assign the traffic-light colors itself; it suggests that advisory boards could be established to help the government prioritize and formulate appropriate regulations, whose purpose would be to build and maintain public trust in safe and responsible AI.
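For concreteness, the three-tier triage described above can be expressed as a small Python classifier. This is purely an illustrative sketch, not part of Atlassian’s submission: the Scenario fields and the classify function are invented names, and real regulatory assessment would rest on far richer inputs than two flags.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Tier(Enum):
    RED = "red"      # actual or imminent harm, not covered by existing law
    AMBER = "amber"  # risks unclear or distant, or legal coverage uncertain
    GREEN = "green"  # already adequately covered by existing legislation

@dataclass
class Scenario:
    name: str
    harm_actual_or_imminent: bool
    # True = existing law clearly covers it; False = clearly not; None = uncertain
    covered_by_existing_law: Optional[bool]

def classify(s: Scenario) -> Tier:
    """Map a scenario to a traffic-light tier following the proposal's logic."""
    if s.harm_actual_or_imminent and s.covered_by_existing_law is False:
        return Tier.RED    # government would proactively intervene
    if s.covered_by_existing_law:
        return Tier.GREEN  # leave to existing legal frameworks
    return Tier.AMBER      # monitor: risks speculative or coverage unclear

# Examples drawn from the article:
insurance = Scenario("AI decides insurance eligibility", True, False)
doomsday = Scenario("superintelligent AI wipes out humanity", False, None)
print(classify(insurance).value)  # red
print(classify(doomsday).value)   # amber
```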
FAQ
Q: What is Atlassian’s proposal regarding AI interaction?
A: Atlassian suggests that Australians should be informed when interacting with human-like AI systems and when AI is used to make decisions that will significantly impact them.
Q: What does the traffic-light system proposed by Atlassian entail?
A: Atlassian recommends classifying scenarios as “Red” when there are actual or imminent harms not covered by existing legislation. “Amber” would be used for unclear or distant risks, and “Green” for uses of AI already covered by existing laws.
Q: How does Atlassian view doomsday scenarios involving AI?
A: Atlassian considers these scenarios to be speculative and future-oriented, falling under the “Amber” category.
Q: Does Atlassian plan to assign the traffic-light colors themselves?
A: No, Atlassian suggests the assignment of colors should be left to advisory boards or similar entities established to assist the government with regulations.
Q: What is Atlassian’s view on existing legal frameworks for AI?
A: Atlassian believes that existing legal frameworks adequately cover many uses of AI, and that informing and educating people about AI-driven decisions is key to building and maintaining public trust in safe and responsible AI.