
by The Galactic Republic of Alexzonya

Artificial Intelligence in the GRA - A Brief Overview

(Yet another Factbook copied from Discord to preserve information)

AIs have full legal rights and recognition, the same as synthetic sapients. There are special provisions in place to incentivize their creation: they're viewed as having positive externalities, and it was acknowledged that risk-sharing was needed to make it worthwhile for private entities to 'purchase' the creation of an AI, hereafter a commission.

The previous system was based on the concept of indenture, with the GRA government regulating the commissioners of the AI in terms of treatment, the right of an AI to transfer between employers, etc. It was complicated, but it worked adequately. After consultation with the Phoenix Domain, we switched to a repayment-based system, where AIs pay back some part of their commission costs from their salary. Military AIs have their repayments made automatically on top of their salary, to level the playing field between them and the higher-paying private sector.
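
To make the difference concrete, here is a minimal sketch of the arithmetic, assuming invented figures: the commission cost, repayment fraction, and salaries below are all hypothetical, and only the civilian-versus-military distinction comes from the text above.

```python
# Hypothetical sketch of the repayment-based system. Every figure is
# invented for illustration; only the mechanism (repayment deducted from a
# civilian AI's salary, but paid on top of a military AI's salary) comes
# from the factbook text.

COMMISSION_COST = 12_000_000   # hypothetical cost of commissioning an AI
REPAYMENT_FRACTION = 0.10      # hypothetical share of salary routed to repayment

def take_home_pay(annual_salary: float, military: bool) -> float:
    """Annual take-home pay while the commission is still being repaid."""
    repayment = REPAYMENT_FRACTION * annual_salary
    if military:
        # Starfleet covers the repayment on top of salary, so a military AI
        # keeps its full pay despite the higher-paying private sector.
        return annual_salary
    return annual_salary - repayment

def years_to_repay(annual_salary: float) -> float:
    """Years until the commission cost is fully retired."""
    return COMMISSION_COST / (REPAYMENT_FRACTION * annual_salary)

# A civilian AI on 1,500,000/yr nets 1,350,000 and repays in 80 years; a
# military AI on 1,000,000/yr keeps the full 1,000,000 while Starfleet
# retires the commission over 120 years.
print(take_home_pay(1_500_000, military=False), years_to_repay(1_500_000))
print(take_home_pay(1_000_000, military=True), years_to_repay(1_000_000))
```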

The GRA's view on AI was initially very conservative. We didn't create them intentionally; we developed ever more powerful learning machines with ever better algorithms until we inadvertently crossed that threshold. Once the cat was out of the bag, though...

We're still pretty conservative in terms of regulatory process, in theory. GRA shipminds are all one of two models (Wiki or Darius); most administrative AIs are an Amira of some form, which is just a modified Wiki core with a different conditioning regime.

To be in compliance, there are two parts to AI "manufacturing" or creation:
1.) Actual manufacturing and BIOS parameters
2.) Post-activation conditioning, which is how we raise the AI from its startup configuration in 'classes', which are essentially batches.

You have to get both right (or within acceptable parameters) for the AI to get its certification. And the AIs have to stay within certain parameters or they lose certification. Again, in theory. (To clarify, we don't kill them if they wander out of certification, but their ratings for things like "being the shipmind of a Battleship or the administrative AI for Starfleet Command" are supposed to be revoked and they're supposed to do something else).
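
As a sketch of how that two-stage check might be structured, here is some hypothetical pseudo-regulatory code; the parameter names, bounds, and rating strings are all invented, and only the structure (both stages must pass, and later drift revokes ratings rather than ending the AI) comes from the text above.

```python
# Hypothetical sketch of GRA AI certification; names and bounds invented.

Bounds = dict[str, tuple[float, float]]

def within_bounds(params: dict[str, float], bounds: Bounds) -> bool:
    """True if every regulated parameter sits inside its certified range."""
    return all(lo <= params[k] <= hi for k, (lo, hi) in bounds.items())

def certify(bios: dict[str, float], conditioning: dict[str, float],
            bios_bounds: Bounds, conditioning_bounds: Bounds) -> bool:
    """Initial certification: both the manufacturing/BIOS stage and the
    post-activation conditioning stage must be within acceptable parameters."""
    return (within_bounds(bios, bios_bounds)
            and within_bounds(conditioning, conditioning_bounds))

def recheck_ratings(current: dict[str, float], cert_bounds: Bounds,
                    ratings: set[str]) -> set[str]:
    """Ongoing compliance: an out-of-bounds AI is not terminated, but its
    ratings (e.g. 'battleship shipmind') are supposed to be revoked."""
    return ratings if within_bounds(current, cert_bounds) else set()
```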

GRA AIs have a System Core with auxiliary processing power bolted on. The System Core is the AI itself. The software and hardware are integrated, so you can't separate the AI from the System Core; they're a package. Kill the Core and you've killed that AI, so no remote or multiple instancing or anything like that*. The auxiliary processing is just added-on compute units, storage, etc. that support the AI. These aren't integral to the AI's self, but they assist it in operations and determine its effective 'firepower' in terms of processing throughput.

* F-class AIs can run multiple compartmentalized instances within their own System Core, but they're special.
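
One way to picture the Core-plus-auxiliaries architecture is as a data model; this is an illustrative sketch only, with invented field names. The two constraints taken from the text: the System Core *is* the AI (no separating software from hardware, no remote or multiple instancing outside the F-class special case), while auxiliary units are non-integral add-ons that set effective throughput.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)            # immutable: a Core's identity is fixed
class SystemCore:
    serial: str
    model: str                     # e.g. "Wiki", "Darius"
    f_class: bool = False          # only F-class cores may self-instance internally

@dataclass
class AuxiliaryUnit:
    kind: str                      # "compute", "storage", ...
    throughput: float

@dataclass
class GRA_AI:
    core: SystemCore               # the AI itself; destroy this, destroy the AI
    aux: list[AuxiliaryUnit] = field(default_factory=list)  # swappable, not the self

    @property
    def firepower(self) -> float:
        """Effective 'firepower': total throughput of bolted-on aux units."""
        return sum(u.throughput for u in self.aux)
```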

The AIs have a simulated neural network hardcoded into the BIOS of the System Core, which "walks" over time as the AI learns new things and adapts to its environment. Or that's the common terminology, anyway. When GRA AI regulatory authorities refer to "walk parameters" or "personality walk", that learning process, and how those network parameters change over time, is what they mean. It was called a "walk" because early testing suggested that most AIs' neural-net parameters would execute a random walk around the default, staying within indicated bounds in the limit, somewhere very close to where the AI started post-conditioning.

... turns out, that is not what happens.

It should be "drift parameters", because as we're finding out, drift is how Wiki AIs (our most common military AIs) behave as they get older. They drift away from their baseline. The oldest models in service are now outside their indicated operating ranges after only about 30 years.

"Rampancy", in our terminology, is when an AI has walked outside of what the regulator says are its acceptable operating bounds for its certification. Theoretically, it was never supposed to happen. In practice... early-model Wiki AIs are frequently found, on inspection, to be rampant. Due to operational necessity, no Wiki-type AIs have yet been formally decertified on the basis of rampancy.

AI Classes
- Based on computing power and other capabilities within the System Core.

Special Classifications:
S-class: Roughly human-analogue processing power in the System Core. Small and modular.
F-class: Network backbone and control AI. Hyper-specialized; can run multiple compartmentalized instances of itself in parallel on its System Core and recombine them later.

Main Series Classifications:
All use the same System Core, with various levels of auxiliary systems connected; power increases from A-class to D-class.
A-class
B-class
C-class
D-class

AI Models:
Wiki - mainline military AI / shipmind. Main series classes.
Amira - mainline civilian / administrative AI, based on the Wiki System Core. Main series classes.
Darius - niche military AI, now out of production. Very tight walk parameters make Darius units very slow learners, and thus they are generally not favored by Starfleet commanders. Main series classes.
Overseer - F-class network monitor AIs.
Artemis - experimental military AI. Main series classes. In type-certification trials.
Jupiter - pre-production military AI. F-class.
Zephyr - S-class civilian personal companion and personal assistant AI.
Mistral - S-class military AI based on the Zephyr platform.
Buddy - S-class military AI; a mature Aumanii design produced in the GRA under license.
