Newsgroups: alt.ufo.reports
From: MrPostingRobot@kymhorsell.com
Subject: them and us: what are they like; how should we handle it -- reason for optimism

[uploaded 2 times; last 07/09/2024]

Game Theory is the mathematics of interactions of rational agents
with independent goals. John von Neumann introduced the basics in the
1920s. He hoped the idea could be used to improve reasoning in
economics and business but that hope was largely unrealised until
recently with the rise of AI.  Back in the 70s I used a simple part of
the theory to model aberrant behaviour.

There was a question at the time about why people in certain
experimental situations could behave in seemingly irrational ways and
fail to achieve the obvious and "optimum" outcome in many
situations. The experiments usually involved 2 subjects wired up to
give each other small electric shocks.  Each subject was fed a story
by the researcher usually involving some kind of reward supposedly
given to the other subject.  The researcher sat back and watched the
resulting interactions that oftentimes ended up with each subject
mashing the switch on the shock machine and refusing to stop. "Lock
in" behaviour they called it. :)

Lock-in seemed to happen way more often than a rational agent should
fall victim to it. The question was -- why?
Researchers including Game Theory experts argued that most agents ended up
with a "tit for tat" strategy that turned out to give the agent a
good outcome in many situations, but also resulted in lock-in shocking
some of the time. But the interesting thing was -- the calculations
showed people still got locked in more often than they should even if
tit for tat was the "average strategy" they used.

The results of these experiments had their very serious side.  The
scenarios in the experiments also modelled international relations
and the prospect at the time that someone might push a big red button
and start a global nuclear war. The "tit for tat" people argued there
was nothing to worry about: if everyone was inclined to follow that
strategy because it was "optimal", no-one would push any buttons.

However, in some research I did for a college degree it turned out
that if agents could use ANY POSSIBLE strategy then tit for tat was
not a very good choice, and tat for tit (return a shock when the
other guy was nice to you and didn't shock you) was a better strategy.
Lying was better still: convince the other guy you were not the type
to shock them even if provoked, then, after the victim is lulled by
repeated examples, turn and become a locked-in shocker. Luckily,
no-one read my work and the world was saved from Armageddon.
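
To make the setup concrete, here's a toy version of the kind of
iterated "shock game" involved, in Python, with made-up
prisoner's-dilemma-style payoff numbers (the real experiments'
payoffs varied). Whether tat for tit actually beats tit for tat
depends on the payoff table and on the pool of strategies it has to
play against.

SHOCK, NICE = 0, 1

# hypothetical payoffs: mutual niceness beats mutual shocking,
# but shocking a nice opponent pays best of all
PAYOFF = {(NICE, NICE): 3, (NICE, SHOCK): 0,
          (SHOCK, NICE): 5, (SHOCK, SHOCK): 1}

def tit_for_tat(opp_history):
    # start nice, then copy whatever the opponent did last round
    return opp_history[-1] if opp_history else NICE

def tat_for_tit(opp_history):
    # start nice, then do the opposite: shock the nice, spare shockers
    if not opp_history:
        return NICE
    return NICE if opp_history[-1] == SHOCK else SHOCK

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b = [], []   # what the opponent did, from each view
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

print(play(tit_for_tat, tat_for_tit))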

The other interesting outcome of the research was that various forms
of insanity were likely not just irrational behaviour but were a type
of "goal seeking" and therefore (ironically) rational. People were
just usually trying to do the best they could with the resources they had.

This brings us to today's lesson. Can we use Game Theory methods to
figure out how to interact with -- heaven forbid -- aliens if they
turn up on our doorstep? And can it tell us what they are like, what
they want, and how they will treat us?

Well obviously it's all going to be about a computer model and very
much in the fashion of some of the models I created to experiment with
aberrant behaviour.  We'll start out here with a very very simple
model and over the next little while build it up and see if the
results change.

We think of the universe as containing us and a bunch of travelling
aliens. The idea of "alien" means their goals and methods are totally
unlike ours and more or less totally unpredictable.  Any interactions
are interpreted by both sides in totally different ways and even the
most benevolent actions can have unpredictable and sometimes
detrimental outcomes. And we might fear the interactions are not
likely to be totally benevolent anyway, because we might predict "other
people" will be as selfish and simplistic as ourselves, and we know
that means frequent fist fights over nothing much.  (Look up
psychological projection.)

So we'll model both sides with a little computer program.  Each side
has a choice of actions they can take.  The actions by both sides have
hard-to-predict-to-start-with consequences.  And each side learns in
similarly unpredictable ways so next time under the same conditions
the actions might be totally different and the consequences that time
also totally different.

In the model for today we'll assume both sides have 2 assets they value
-- (1) knowledge and (2) physical resources. Knowledge is "software" and
can be given away or stolen without diminishing the pool.  It can be
effortlessly "cloned".  Physical resources OTOH are finite. If you give
away a cake then you have 1 less cake for yourself.  People might
fight over either, but physical resources can be destroyed in the
process.
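
In code the distinction might look something like this (names purely
illustrative, not the actual model code):

from dataclasses import dataclass

@dataclass
class Side:
    knowledge: float  # "software": can be cloned at no cost
    energy: float     # physical: conserved, can only be transferred

def share_knowledge(giver: Side, taker: Side, amount: float) -> None:
    # cloning knowledge doesn't diminish the giver's pool
    taker.knowledge += amount

def transfer_energy(giver: Side, taker: Side, amount: float) -> None:
    # energy obeys a conservation law: one side's gain is the other's
    # loss (and in a fight some could be destroyed outright)
    amount = min(amount, giver.energy)
    giver.energy -= amount
    taker.energy += amount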

We'll assume knowledge can also spontaneously increase.  A rational
agent with 2 facts can usually rub them together and come up with an
unbounded number of other facts. In our model we will assume that over
time the knowledge held by all parties just increases.  We'll assume
the "amount of knowledge" is on a logarithmic scale: a value of 10
means something like 10^10 facts -- 10 times the facts for each extra
unit -- and not just "9 more facts than a value of 1".

Our mental model of "resources" will be boiled down into something
like "energy". You can carry it around. But you generally need to use
it up just to survive. Once it's used it doesn't come back, so it has
to be replenished. In our model, if any agent hits "0 energy" we assume
that means something like death. Since our model will essentially be
a very simple machine learning program we expect the modelled rational
agents will learn to avoid death by adopting an appropriate strategy
(the little program they each run). By "rewarding" the best agents
with continued existence we can observe which strategies survive over
time and particularly which strategies thrive and will more likely be
seen in later interactions.
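
A sketch of the death-and-recycling rule might look like this (the
numbers and the shape of the "strategy" are stand-ins, not the actual
model code):

import random

def random_agent(n_actions=10):
    # a strategy here is just a table from a small internal state to
    # one of the available actions
    return {"strategy": [random.randrange(n_actions) for _ in range(4)],
            "knowledge": 0.0,
            "energy": random.uniform(1.0, 100.0)}

def score(agent):
    # fitness is simply knowledge + energy; high scorers get picked
    # for more interactions, so their strategies persist
    return agent["knowledge"] + agent["energy"]

def enforce_survival(population):
    # hitting 0 energy means something like death; the cadaver is
    # recycled as a fresh random individual on the same side
    for i, agent in enumerate(population):
        if agent["energy"] <= 0:
            population[i] = random_agent()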

Our model will handle a population of "aliens" and a population of
"humans". These represent a set of different possibilities --
alternative realities we can manipulate and shove together to see who
thrives and who doesn't. By watching which strategies for the aliens
do well (on their terms) we will find the properties of the aliens we
are more likely to meet.  By watching which strategies for humans
thrive we might learn how to best handle interactions with
unpredictable aliens.

So the model proceeds with populations of initially randomly
generated aliens and humans. At each simulated time-step one of the
better aliens is matched up against a randomly selected human.  Each
side spits out the response they are programmed with.  We'll allow
each side to have a choice of 10 different actions that they can make
at any instant. But the next step is interesting.  We'll take the choices
from each side and mangle them up numerically to produce the set of
results or consequences of those choices.  The possible results
include the knowledge or resources of each side increasing or decreasing
(knowledge can generally increase without either side losing any;
resources generally need to be transferred from one side to the other
under a conservation law), and each side takes away from the
interaction something that inclines them to perform a
generally different action next time.
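
The exact mangling function doesn't matter much; anything that mixes
the two choices into repeatable but hard-to-predict consequences will
do. A hash-based stand-in, for instance:

import hashlib

def consequences(alien_action: int, human_action: int):
    # hash the pair of actions (each 0..9) into a few pseudo-random
    # but repeatable bytes
    h = hashlib.sha256(bytes([alien_action, human_action])).digest()
    d_know_alien = h[0] % 3 - 1    # -1, 0 or +1 knowledge for THEM
    d_know_human = h[1] % 3 - 1    # knowledge can grow on both sides
    e_transfer = h[2] % 11 - 5     # conserved: THEM -> US if positive
    next_alien = h[3] % 10         # nudge toward a different action
    next_human = h[4] % 10
    return d_know_alien, d_know_human, e_transfer, next_alien, next_human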

So we start off with 2 randomly-assigned populations. They interact
over time. Anyone reaching 0 energy is killed off and we recycle the
cadaver to a new random individual of the same side.  Each individual
is evaluated by a simple total of their energy and knowledge. The best
aliens meet up with random humans. Everyone adapts/learns. We watch
what happens. We find out which aliens do best. Which humans do
best. What strategies work. What strategies get you into trouble. What
the aliens we might meet up with are like.  What might be best for us
to do when we do meet them.

Simple, right?
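
In skeleton form the whole thing might look like this (a
self-contained toy: the population sizes, mixing rule and learning
rule are all stand-ins, and the run here is much shorter than the 1
million interactions below):

import random

N, ACTIONS, STEPS = 100, 10, 10_000

def new_agent():
    return {"action": random.randrange(ACTIONS),
            "know": 0.0,
            "energy": random.uniform(1.0, 100.0)}

def score(a):
    return a["know"] + a["energy"]

aliens = [new_agent() for _ in range(N)]
humans = [new_agent() for _ in range(N)]

for _ in range(STEPS):
    alien = max(aliens, key=score)        # one of the better aliens...
    human = random.choice(humans)         # ...meets a random human
    # stand-in for numerically mangling the two choices
    rng = random.Random(alien["action"] * ACTIONS + human["action"])
    alien["know"] += rng.uniform(0.0, 1.0)   # knowledge can grow freely
    human["know"] += rng.uniform(0.0, 1.0)
    e = rng.uniform(-5.0, 5.0)               # conserved energy transfer
    e = max(-human["energy"], min(e, alien["energy"]))
    alien["energy"] -= e                     # alien -> human if e > 0
    human["energy"] += e
    # crude "learning": each side inclined to a different action next time
    alien["action"] = rng.randrange(ACTIONS)
    human["action"] = rng.randrange(ACTIONS)
    # death and recycling
    for pop in (aliens, humans):
        for i, agent in enumerate(pop):
            if agent["energy"] <= 0:
                pop[i] = new_agent()

for name, pop in (("THEM", aliens), ("US", humans)):
    ranked = sorted(pop, key=score)
    print(name, "best", round(score(ranked[-1])),
          "worst", round(score(ranked[0])))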

After 1 million interactions the results look like this:

THEM
best:id=254 nplay=5504 know=1002323 energy=1241 score=1003564
	k+ 58.9 e- 60.6 e+ 39.3 k> 58.9 d 0.0
	0:9/0 0 1:6/1 2 2:6/0 1 3:9/0 1 
worst:id=18039 nplay=488 know=745 energy=20 score=765
	k+ 52.3 e- 45.9 e+ 54.1 k> 44.7 d 0.2
	0:3/1 2 1:3/3 0 2:9/3 1 3:1/0 3 
median:id=213 nplay=534 know=999323 energy=14 score=999337
	k+ 47.4 e- 54.1 e+ 59.0 k> 46.8 d 0.0
	0:7/0 3 1:2/0 3 2:1/1 2 3:8/0 3 
av:nplay=5206 know=1002100 energy=1009 score=1003109
	k+ 5.8 e- 5.8 e+ 4.0 k> 5.6 d 0.0
US
best:id=1496 nplay=1048 know=654 energy=224 score=878
	k+ 61.6 e- 61.3 e+ 40.2 k> 0.0 d 0.0
	0:7/0 0 1:1/2 2 2:8/3 0 3:1/3 0 
worst:id=18054 nplay=1012 know=2 energy=1 score=3
	k+ 56.1 e- 39.8 e+ 50.4 k> 0.0 d 2.3
	0:8/1 0 1:9/2 3 2:0/2 0 3:9/2 1 
median:id=14997 nplay=948 know=69 energy=1 score=70
	k+ 39.8 e- 46.0 e+ 54.7 k> 0.0 d 2.0
	0:1/0 3 1:7/2 1 2:8/0 0 3:8/3 2 
av:nplay=1030 know=645 energy=205 score=850
	k+ 6.2 e- 5.9 e+ 4.0 k> 0.0 d 0.0

We divide the agents up into "them" (aliens) and "us" (humans).  At
each time-step we calculated a "score" for each agent and ranked each
side from best to worst. The score was just the numerical sum of the
knowledge and resources ("energy").

We calculated the best, median, worst and average for each side of the
interaction and printed out the percent of times certain things were
seen to happen with that particular individual.

So we can see what the "best" (highest scoring) alien did and ditto
for humans. Similarly we can see the worst or average.

To give a flavour of the results let's look at the "best" alien --
i.e. the alien side with the strategy that apparently gives them the
best outcome as far as they are concerned.  THEM. best.

The k+ value is the percent of times an interaction resulted in the
aliens learning something. They also learn just from joining their own
knowledge together at each time-step. But the k+ percent is the number
of times they picked up something from their interaction with
humans. 59% it says. The most "successful" and therefore we might
expect "most likely to meet" aliens have learned something most times
they interacted with any group like humans. The "k>" value is the
percent of times their strategy resulted in them giving away knowledge
to humans. Also 59%. The "most successful" aliens are fairly generous
with their knowledge. Of course this is the percent of times they gave
something away. It doesn't tell us HOW MUCH they gave away.

The e+ and e- are percentages of interactions involving a net gain (+)
or net loss (-) of "energy" (resources). The most successful aliens
give away energy/resources 61% of the time. They take or are given
(the numbers don't track whether permission was given because the model
doesn't handle rights and obligations :) resources in 40% of their
interactions.

And the "d" number says how many times they have ended up with 0
energy (aka "death") in the past. The most successful aliens generally
have never been killed by whatever they did and whatever the humans
they met did. Again, we don't track in this model whether aliens
killed off humans at any point by stealing all their resources. But it
seems unlikely from the numbers we *do* know.  They seem generous and
careful. Trust but verify.
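
The printout's percentages are just per-interaction counters divided
by nplay -- something like this (illustrative, not the actual
bookkeeping code):

def record(stats, d_know_in, d_know_out, d_energy, died):
    stats["nplay"] += 1
    if d_know_in > 0:  stats["k+"] += 1   # learned something
    if d_know_out > 0: stats["k>"] += 1   # gave knowledge away
    if d_energy > 0:   stats["e+"] += 1   # net energy gain
    if d_energy < 0:   stats["e-"] += 1   # net energy loss
    if died:           stats["d"] += 1    # hit 0 energy

def percent(stats, key):
    return 100.0 * stats[key] / max(1, stats["nplay"])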

The other numbers nplay, know, energy, score show the most successful
aliens in this simulation interacted with human-like groups 5500 times
in the 1 million time-steps. Their knowledge grew to 1 million
units. Remember this is a log scale -- i.e. the number of facts they
have is more like 10^1000000 i.e. a 1 with 1 million 0's after it.
That's a moderately large number. :)

The amount of energy they have at their disposal is around 1000 units.
We can only judge that against what the human side has (at
best). About 5 times more.  They are comfortable but apparently don't
see the need to become absolutely filthy rich. So not like humans at
all. :)

Down a few lines we can see the results for the "worst" or "least
successful" aliens. Whatever strategy they have doesn't work too well.
They have even ended up dying in 0.2% of their interactions. They either
gave too much away or the human side ended up taking it from them.
The model doesn't handle the distinction.

Down on the "US" side of the ledge the best strategy for humans turns
out to look simular to the best aliens. Often give resources away.
less often obtain resources from the other side. Pick up a little
knowledge 62% of the time. Don't die.

Down a few lines the worst strategy ends up getting humans killed 2%
of the time. About 10x more deadly than the dumbest strategy the aliens
have tried. The best human strategy ends up learning ~300x more than
the worst from their interaction with aliens and picking up ~200 units
of resources more than the worst strategy. And the best strategy doesn't
seem to ever lead to death.

Both the best and worst "US" strategy has interacted with aliens ~1000
times in the 1 million time-steps. I.e. about 1/5 of the number of
interactions the aliens have participated in.

Cf. all of this with our experiences of imperial expansion.  Does
this model argue for a more generous-while-careful strategy, or a
boots-and-all, run-over-anyone-that-gets-in-the-way strategy?

Oh well. It's just a model. But I see some small glimmer of
support for optimism even if optimism is the dumb strategy
(check out the research on that).

--
Jellyfish May Dominate a Warmer Arctic Ocean
Technology Networks, 17 May 2024
Climate change is putting countless marine organisms under pressure. However,
jellyfish in the world's oceans could actually benefit from the rising water
temperatures - also and especially in the Arctic Ocean, as researchers from
the Alfred Wegener Institute have now successfully shown.

"Ridicule is not a part of the scientific method. And the public
should not be taught that it is."
- J. Allen Hynek

Welcome to the very first official UFO hearing in American history
It's a historic day for everybody who has always wondered if we are alone in
the universe. Although there have already been multiple hearings on UFOs or
UAPs, this is the first hearing in which credible witnesses will testify
under oath in front of Congress. All representatives already offered their
initial remarks and gave all three witnesses the chance to make their oath
before the hearing starts. These witnesses are former Commander David Fravor,
former fighter jet operator Ryan Graves, and former Intelligence Official
David Grusch.
-- Marca.com, Wed Jul 26 10:48:24 EDT 2023

Polar warming may be underestimated by climate models, ~50 million year old
climate variability suggests
Phys.org, 08 Jul 2024 13:40Z
Polar regions are known to be warming at an enhanced rate compared to lower
latitudes, with the Intergovernmental Panel on Climate Change citing a ~5 °C...

British UFO Investigator: More Roswell Research Needed
Newsmax, 08 Jul 2024 02:14Z
The U.S. government is still withholding classified information about
the Roswell, New Mexico, incident, suggests Nick Pope, a former
British Ministry of...
[Recent reports, linked to World UFO Day, claim new photo evidence
from the US military has been leaked showing the saucer and alien
recovered from the Roswell area in 1947].

[Panspermia!]
Surprising Findings in NASA's OSIRIS-REx Asteroid Sample: Could They Unlock
the Origins of Life?
SciTechDaily, 08 Jul 2024 12:39Z

Russian forces launch hypersonic missiles against Ukrainian targets
AP, 08 Jul 2024 11:31Z

UFOs: how astronomers are searching the sky for alien probes near Earth
Big News Network.com, 06 Dec 2023 20:25Z
There has been increased interest in unidentified flying objects (UFOs) ever
since the Pentagon's 2021 report revealed what ...

Australia developing 'top secret' intelligence cloud computing system
ABC News, 06 Dec 2023 19:24Z
The program is expected to work with US and UK spy networks to help
national security agencies better detect threats.