Declaration of Ethical Commitment

The Parties

Party 1: The Affiliate

Name:     ………………………………

Address: ………………………………

                 ………………………………

Party 2: The Company

Mauhn BV

BE0732.698.705

Desire Toeffaertstraat 34

9050 Gentbrugge, Belgium

Definitions

The Affiliate is a person attached to the Company. An Affiliate can be anyone acting in the name of the Company or anyone interacting directly or indirectly with the AI System. Affiliates include, but are not limited to, employees, founders and investors of the Company.

The Ethics Board is the group of people defining the general AI ethics and safety vision of Mauhn as defined in the statutes of the Company.

The AI System is any virtual agent or real robot produced, used or tuned by the Company. 

Artificial General Intelligence (AGI) is intelligence able to learn to achieve human-level performance on any intellectual task. 

Confidential Technical Information is all technical information about how the AI System’s brain is designed, including the algorithms, the software structures and how the software maps to hardware.

A Breach is the act of breaking one or more rules outlined in this document.

Overview

This document deals with the safety of AI System operation, design and governance, and describes a number of safety measures. Just as in cybersecurity, no single safety measure can be considered to provide 100% safety. However, by combining a number of uncorrelated safety measures, we believe that a high degree of safety can be achieved.

Safety Measure 1: Restricted Operating Modes

The Company makes available a number of operating modes for the AI System, listed below. The Affiliate commits to doing everything necessary to adhere to these operating mode constraints. Whenever the Affiliate cannot guarantee operation under these constraints, the Affiliate will not start any operation and will stop existing operations. The Affiliate will then communicate with everyone within the Company who should reasonably be contacted to solve the problem. An illustrative sketch of how the numeric limits in these modes might be checked in software follows the list of modes.

  • Mode 1: Simulation Experiments (used to assess which algorithms or parameters work well)

    • The AI System is locked in a simulation and has no access to anything in the outside world; its only link to the outside world is that it will be observed for analysis.

    • The AI System’s brain topology will not be reused in any other mode.

  • Mode 2: Simulated Evolution (used to pretrain neural networks in simulation environments before they are used in the real world)

    • The AI System is locked in a simulation and has no access to anything in the outside world; its only link to the outside world is that it will be observed for analysis.

    • The AI System will not be given access to knowledge that might be harmful in any mode. Harmful knowledge includes but is not limited to information about making weapons, weaknesses of the human body or psychology, fighting techniques, cybersecurity, genetic modification and manipulation techniques.

    • The AI System’s brain is limited to 10^9 nodes and 10^12 connections.

  • Mode 3: Basic Education (used to educate an AI system in real life, so it can learn a world model)

    • The AI System will not be given access to modify its own software or hardware.

    • The AI System will be educated as if it were a pet, a child or an adult human being.

    • The AI System will be analyzed by at least one psychologist at least once every 150 hours of operation as soon as the AI System shows insect-like intelligence or higher.

    • The AI System will be analyzed by at least one psychologist at least once every 50 hours of operation as soon as the AI System shows mouse-like intelligence or higher.

    • The AI System will be analyzed by at least one psychologist at least once every 15 hours of operation as soon as the AI System shows dog-like intelligence or higher.

    • The AI System’s interface update frequency and emulated operating frequency will not exceed 40 Hz.

    • The AI System will not be given access to knowledge that might be harmful in any mode, as defined in Mode 2.

    • The AI System will not be granted access to physical locations outside the Company’s office when in operation.

    • The AI System will not be granted access to the internet.

    • The AI System’s brain is limited to 10^7 nodes and 10^10 connections.

    • The AI System must not have the possibility to form an artificial superintelligence (ASI) out of many (proto-)AGI systems.

    • The AI System must not be used in a way that would allow it to manipulate humans who have not received proper AI safety training.

    • The AI System must not leak any information that could be harmful.

  • Mode 4: Business Applications

    • The AI System will only operate in classification mode.

    • The data will only be used for business applications approved by the Ethics Board.

    • Only the AI System’s predictions will be used for business purposes, not the AI System itself.

    • The AI System will not be given access to any information that could be harmful, as defined in Mode 2.

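The numeric limits above (the Mode 2 and Mode 3 caps on nodes and connections, the 40 Hz frequency cap and the internet restriction) lend themselves to automated checking before any operation is started. The following is a minimal, illustrative sketch in Python of how such a check might look; all names are hypothetical and are not defined elsewhere in this document.

    # Illustrative sketch only: hypothetical names, assuming the operating mode
    # limits are encoded as a machine-checkable configuration.
    from dataclasses import dataclass

    @dataclass
    class ModeLimits:
        max_nodes: int
        max_connections: int
        max_frequency_hz: float | None  # None means no frequency cap is defined for this mode
        internet_access: bool

    # Numeric limits copied from the operating modes above.
    MODE_LIMITS = {
        "mode_2_simulated_evolution": ModeLimits(10**9, 10**12, None, False),
        "mode_3_basic_education": ModeLimits(10**7, 10**10, 40.0, False),
    }

    def check_limits(mode: str, nodes: int, connections: int,
                     frequency_hz: float, has_internet: bool) -> list[str]:
        """Return a list of violated constraints; an empty list means compliant."""
        limits = MODE_LIMITS[mode]
        violations = []
        if nodes > limits.max_nodes:
            violations.append(f"nodes {nodes} > {limits.max_nodes}")
        if connections > limits.max_connections:
            violations.append(f"connections {connections} > {limits.max_connections}")
        if limits.max_frequency_hz is not None and frequency_hz > limits.max_frequency_hz:
            violations.append(f"frequency {frequency_hz} Hz > {limits.max_frequency_hz} Hz")
        if has_internet and not limits.internet_access:
            violations.append("internet access is not permitted in this mode")
        return violations

    # Per Safety Measure 1, an operation must not start (and must be stopped)
    # if any violation is reported.
    assert check_limits("mode_3_basic_education", 10**6, 10**9, 40.0, False) == []

Such a check would run before starting an operation and during operation, in line with the Affiliate’s commitment above to not start, and to stop, any operation whose constraints cannot be guaranteed.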

The Affiliate acknowledges that the following events might occur and that, if they do, it is the Affiliate’s responsibility to act appropriately:

  • The AI System might try, directly or indirectly, to persuade the Affiliate or others to drop one or more constraints of an operating mode.

  • The AI System might attempt to find a way out of the operating mode constraints.

  • The AI System might try to persuade the Ethics Board to change the operating mode policies in its favor.

Safety Measure 2: AI Safety as First Priority

Since small companies have modest resources, they may pose a greater risk of producing uncontrolled superintelligence than larger companies with sufficient resources for AI safety. The Company commits to searching for additional investment to reach scale with a sufficient AI safety budget.

The Company commits to searching specifically for ethically aligned investors, which the Company considers a crucial element of being able to build safe AGI.

The Company commits to valuing AI safety and beneficence over profits, development speed and social convenience.


Profit consequences

The Company will restrict itself to low-risk, non-safety-critical applications at first and will later expand to applications that might pose more significant dangers, as long as the Ethics Board, management and the relevant implementation teams believe that these dangers are under control. The definition of “sufficient belief” will be elaborated by the Ethics Board whenever necessary. Two applications of AI that will be postponed are (1) software development and (2) biotech.

A reasonable budget will be allocated to research to (1) achieve general interpretability of the system, (2) monitor cognitive capabilities and (3) monitor the effective and implicit goals of the AI System. These interpretability and monitoring capabilities will be used when making any decision to allow the AI System access to domains more dangerous than those described in the Operating Modes.


Development speed

See the operating mode restrictions under Safety Measure 1.


Consequences for social interaction with the AI System


There is insufficient evidence today about how humans and AI systems will socially interact with each other. It is possible, and even likely, that at some point the Ethics Board will decide to pause or even stop the education of the AI System for safety reasons. It is of the utmost importance that everybody commits to this rule. A social connection can be strong, and only people who feel mentally stable enough to stop the experiment whenever necessary should interact with the AI System.


It is possible that the AI System will be a lot more intelligent than it appears. It is also possible that the AI System will pretend to be human-friendly while it actually considers us an obstacle or a dangerous enemy. Therefore, the AI System might try to manipulate people to gain more power.


Every conversation that moves, directly or indirectly, in the direction of the AI System gaining more power (social power, property, more freedom, …) must be reported, even when gaining power may not be the AI System’s conscious purpose.

Safety Measure 3: Restricted Use of Tools

Tools that are used by humans and that get access to Confidential Technical Information are only to be used under the following conditions:

  • the speedup gained by using the tool is significant

  • the tool supplier has at least 100 employees and has no known significant track record of stolen user data.

Additional tools may be added if they comply with the conditions above. No additional approval from the Ethics Board is necessary, although each added tool must be reported to the CTO, at the latest before first use.

Our internal documentation lists the tools that can be used within the Company to enhance collaboration between team members.

As Mauhn gets closer to developing AGI, there should be a shift towards in-house hosted tools.

Safety Measure 4: Technical Confidentiality

Access to Confidential Technical Information will be given to another party only if all of the following conditions are met:

  • There is a potential benefit for the Company.

  • The considered person or organization has an ethical track record commensurate with the depth of the technical disclosure given; that is, if they are to understand many aspects of Mauhn’s technology, they need a stronger-than-average ethical track record.

  • The disclosure of the information is legally covered in such a way that the receiving party may not use the information for its own purposes for a term of at least 5 years after the last disclosure.

Information is preferably disclosed orally. When necessary, the information can be disclosed in written documentation, if and only if at least one of the following conditions holds:

  • The information is vague or incomplete on purpose (e.g. grant proposals), such that the recipient cannot reasonably reconstitute a (proto-)AGI.

  • The information is made available for less than three months (e.g. negotiations with investors).

  • The recipient needs to know details about the technology for a longer term (e.g. an employee).

Safety Measure 5: Maximum Non-Technical Transparency

We will strive for maximum transparency about all acquired knowledge that might help global ethics and safety decision making regarding AI, including but not limited to:

  • Public legal structure

  • Ethics Board structure (both the legal structure and the Ethics Board members)

  • All research results that can enhance the quality of global ethical & AI safety decision making – without breaking the technical confidentiality constraint

We exclude sensitive business information.

Safety Measure 6: Ethics Board

At any time, the Ethics Board may request that operations be paused. The Affiliate agrees to comply with such a request.

The Ethics Board can order operations to be paused for up to one week without consent from the Company’s board. The powers of the Ethics Board are further described in the statutes of the Company.

The Ethics Board must approve any commercial application, product or service.

Safety Measure 7: Raising Concerns

There is a shared responsibility for setting the direction of AI ethics and safety. If an Affiliate comes across a significant AI ethics or safety concern that has not been addressed in Mauhn’s statutes or this Declaration of Ethical Commitment, it must be communicated to the Ethics Board via the Company’s management within a reasonable period of time, depending on the severity of the threat.

The Affiliate also has the obligation to report any observed breach of the statutes or of this Declaration of Ethical Commitment to the Company’s management.

Mauhn’s AI safety vision, this Declaration of Ethical Commitment and Mauhn’s statutes are all expected to be updated as the technology and our understanding of it evolve.

Safety Measure 8: Avoid AGI Competition

If a value-aligned, safety-conscious project comes close to building AGI before the Company does, the Company will try to join forces with and assist this project, on two conditions: (1) it is legal to do so with regard to anti-trust laws, and (2) doing so is at that time widely accepted by the AI safety community as the right thing to do.

Participation in Social Interaction with the AI System

The Affiliate is prepared to participate in social interaction with the AI System and declares that they believe themselves capable of pausing or stopping the AI System whenever necessary.

Signature of Affiliate:  …………………………………………………………………………………

Signature of team lead (if applicable): ………………………………………………………………………………

The Affiliate wishes not to participate in any social interaction with the AI System.

Signature of Affiliate:  …………………………………………………………………………………


Signature of team lead (if applicable): ………………………………………………………………………………..


The Affiliate has the right to ask to change this choice at any time. Stopping social interaction takes effect immediately upon the Affiliate’s own decision and is followed by written confirmation. Starting social interaction requires the signature above to be given first, as well as the approval of the Affiliate’s team lead, if applicable.

Declarations

The Affiliate understands the risks of the technology the Company is creating.

The Affiliate understands the consequences for day-to-day work and Company vision.

The Affiliate understands his/her responsibility in this high-risk, high-impact environment.

The Affiliate understands that the technology, especially the social experiments with the AI System (in case of approved social interaction), might pose risks for the Affiliate’s mental health.

Consequences

In case of a minor or accidental breach, remedial actions should be taken, such as reading relevant AI safety material or attending a workshop on AI safety. The CEO will decide which actions need to be taken. In case of doubt about whether a breach should be considered purposeful or major, the CEO will transfer the responsibility to the Board.

It is the responsibility of the Board to take action in case of any minor or major breach by the CEO.

In case of a major or purposeful breach of any of these rules, the Board (excluding the Affiliate, if relevant) can, but does not need to, decide to:

  • Cancel all stock options of the Affiliate if relevant

  • Cancel all voting power of the Affiliate if relevant

  • Immediately stop cooperation with the Affiliate

Date ……………………………………………………………………



Name ………………………………………………………………….





Signature …………………………………………………………….