Artificial Superintelligence Coordination & Strategy (Roman V. Yampolskiy & Allison Duettmann, 2020)


  • Artificial intelligence-enabled adaptive learning systems (AI-ALS)
  • Intelligent tutoring systems (ITS)
  • Bayesian Knowledge Tracing (BKT) (see the code sketch after this list)
  • Vicarious trial and error (VTE)
  • Null hypothesis significance testing (NHST)
  • Probabilistic structural equation modeling (PSEM)
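Of the techniques listed above, BKT is the most directly algorithmic. A minimal sketch of its update rule follows; the slip, guess, and learn parameter values are illustrative assumptions, not figures from the book.

```python
# Minimal Bayesian Knowledge Tracing (BKT) sketch.
# slip/guess/learn values are illustrative assumptions.

def bkt_update(p_know, correct, slip=0.1, guess=0.2, learn=0.15):
    """One BKT step: Bayesian posterior over mastery given a response,
    followed by the learning transition."""
    if correct:
        evidence = p_know * (1 - slip) + (1 - p_know) * guess
        posterior = p_know * (1 - slip) / evidence
    else:
        evidence = p_know * slip + (1 - p_know) * (1 - guess)
        posterior = p_know * slip / evidence
    # The learner may acquire the skill between practice opportunities.
    return posterior + (1 - posterior) * learn

p = 0.3  # prior probability the skill is already mastered
for response in [True, False, True, True]:
    p = bkt_update(p, response)
    print(f"P(mastered) = {p:.3f}")
```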
Artificial General Intelligence (AGI)
  • Intelligence amplification—AI can improve its own intelligence
  • Strategy—optimizing chances of achieving goals using advanced techniques, e.g., game theory, cognitive psychology, and simulation
  • Social manipulation—psychological and social modeling, e.g., for persuasion
  • Hacking—exploiting security flaws to appropriate resources
  • R&D—creating more powerful technology, e.g., to achieve ubiquitous surveillance and military dominance
  • Economic productivity—generating vast wealth to acquire resources
AGI Safety via Distributed Ledger Technology (DLT)
  • Non-hackability and non-censurability via decentralization (storage in multiple distributed servers), encryption in standardized blocks, and irrevocable transaction linkage (the “chain”; sketched in code after this list)
  • Node-fault tolerance: Redundancy via storage in a decentralized ledger of (a) rules for transactions, (b) the transaction audit trail, and (c) transaction validations
  • Transparency of the transaction rules and audit trail in the DLT
  • Automated “smart” contracts
  • Decentralized applications (“dApps”), i.e., software programs that are stored and run on a distributed network and have no central point of control or failure
  • Validation of contractual transactions by a decentralized consensus of validators
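A minimal sketch of the “irrevocable transaction linkage” mechanism referenced above, assuming SHA-256 hashing and invented field names: each block commits to its predecessor’s hash, so editing any stored transaction invalidates every later link.

```python
# Minimal hash-chain sketch: tampering with any past transaction
# breaks every subsequent hash link. Field names are illustrative.
import hashlib, json

def make_block(prev_hash, transactions):
    body = json.dumps({"prev": prev_hash, "txs": transactions}, sort_keys=True)
    return {"prev": prev_hash, "txs": transactions,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain):
    """Recompute every link; a single edited transaction breaks the chain."""
    for i, block in enumerate(chain):
        body = json.dumps({"prev": block["prev"], "txs": block["txs"]},
                          sort_keys=True)
        if hashlib.sha256(body.encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("0" * 64, ["genesis"])]
chain.append(make_block(chain[-1]["hash"], ["alice->bob: 5"]))
print(verify(chain))           # True
chain[0]["txs"] = ["tampered"] # alter an early transaction
print(verify(chain))           # False
```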
Forecasting
  • Trend-impact analysis (TIA)
  • Cross-impact analysis (CIA), simulated in the sketch after this list
  • The intuitive logics school
  • The probabilistic modified trends (PMT) school
  • La Prospective, a.k.a. the French school
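Of these methods, cross-impact analysis lends itself to a toy simulation. The sketch below is a deliberately simplified Monte Carlo variant: the events, base probabilities, and conditional matrix are invented, and the averaging update is a simplification for illustration, not a standard CIA calibration.

```python
# Toy Monte Carlo cross-impact sketch. All numbers are invented;
# real CIA studies elicit base and conditional probabilities from experts.
import random

events = ["A", "B", "C"]
base = {"A": 0.5, "B": 0.4, "C": 0.2}
# cond[i][j]: probability of event i given that event j has occurred.
cond = {"A": {"B": 0.7, "C": 0.5},
        "B": {"A": 0.6, "C": 0.3},
        "C": {"A": 0.4, "B": 0.1}}

def run_once():
    occurred = set()
    for e in random.sample(events, len(events)):  # random resolution order
        p = base[e]
        for j in occurred:            # shift p toward conditionals (simplified)
            p = (p + cond[e][j]) / 2
        if random.random() < p:
            occurred.add(e)
    return occurred

N = 50_000
counts = {e: 0 for e in events}
for _ in range(N):
    for e in run_once():
        counts[e] += 1
# Cross-impact-adjusted occurrence frequencies vs. the base probabilities.
print({e: round(counts[e] / N, 3) for e in events})
```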
Single-agent overoptimization failure modes
  • Tails Fall Apart, or Regressional inaccuracy, where the relationship between the modeled goal and the true goal is inexact due to noise (for example, measurement error), so that the bias grows as the system is optimized (demonstrated in the sketch after this list).
  • Extremal Model Insufficiency, where the approximate model omits factors which dominate the system’s behavior after optimization.
  • Extremal Regime Change, where the model does not include a regime change that occurs under certain (unobserved) conditions that optimization creates.
  • Causal Model Failure, where the agent’s actions are based on a model which incorrectly represents causal relationships, and the optimization involves interventions that break the causal structure the model implicitly relies on.
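The regressional case is easy to reproduce numerically. The sketch below (with illustrative Gaussian distributions, not data from the chapter) selects candidates on a noisy proxy U = V + noise and shows the proxy-true gap, i.e., the bias, widening as selection pressure increases.

```python
# "Tails Fall Apart" demo: optimize a noisy proxy hard enough and the
# gap between proxy and true value grows. Distributions are illustrative.
import random, statistics

random.seed(0)
pop = []
for _ in range(100_000):
    v = random.gauss(0, 1)       # true goal
    u = v + random.gauss(0, 1)   # measured proxy with noise
    pop.append((v, u))

for top_frac in (0.5, 0.1, 0.01, 0.001):
    k = int(len(pop) * top_frac)
    chosen = sorted(pop, key=lambda t: t[1], reverse=True)[:k]  # select on proxy
    mean_true = statistics.mean(v for v, _ in chosen)
    mean_proxy = statistics.mean(u for _, u in chosen)
    print(f"top {top_frac:>6.1%}: proxy={mean_proxy:.2f}, "
          f"true={mean_true:.2f}, bias={mean_proxy - mean_true:.2f}")
# The printed bias grows as the selected tail gets thinner.
```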
