The encoder being evolution is an idea that has been developed by Sui Huang and numerous others.
The genetic decoder being creative/generative is an idea that has been put forth by Richard Watson and others.
What I’m more interested in at this point is how the layers interact. Rather than waving at energy landscapes, stable attractors, and catastrophe theory, why can’t we put this to use producing alife simulations that deliver open-ended evolution?
The idea that we need to move away from understanding the genetic code as static machinery also aligns well with recent thinking in biology, as summarized in the widely praised book "How Life Works" by Philip Ball [1].
But in terms of evolution, I don't see how the proposed analogy/model gets around the fact that natural selection operates at the level of the individual (either you survive or you don't), while the genomic information comes as a package of (depending on the organism) a humongous number of genes, not to mention base pairs or individual loci (and that's before even considering diploidy, which means two different copies of each locus).
Given the known proportions and numbers of beneficial vs. slightly deleterious mutations (far more of the latter, counted in the hundreds per individual), selection for beneficial mutations cannot avoid accumulating slightly deleterious ones.
I don't get how the proposed model is supposed to solve that.
I see that a smartly designed system could probably have processes that change multiple loci in parallel in a beneficial way (along the lines of the theory of facilitated variation by Gerhart & Kirschner; see their papers or [2]), but that would only explain how such a fine-tuned system could be effective at adaptation, not how the system itself, including those processes, could arise from a state before they were in place.
To connect it to ML: in natural selection you don't have gradients or the ability to update multiple parameters based on detailed feedback on each of them individually. You can only give a single, binary piece of feedback (survive, 1, or not, 0) to the whole set of parameters, whether they are slightly deleterious or possibly beneficial. The resolution is simply lacking here (a small sketch below makes this concrete).
- [1] https://www.amazon.com/How-Life-Works-Users-Biology/dp/02268...
- [2] https://www.amazon.com/Plausibility-Life-Resolving-Darwins-D...
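To make the resolution point concrete, here is a minimal toy sketch (my own illustration, nothing from the paper): a gradient-style learner gets one feedback value per parameter, while a selection-style learner gets a single accept/reject bit for a whole genome carrying many mutations at once.

```python
import numpy as np

rng = np.random.default_rng(0)
n_loci = 1000

def fitness(genome):
    # Toy stand-in for survival odds: fraction of "good" loci.
    return genome.mean()

# Gradient-style update: every parameter receives its own feedback signal.
params = rng.random(n_loci)
per_param_feedback = rng.normal(0, 0.01, size=n_loci)   # one signal per locus
params = params + per_param_feedback                    # each locus adjusted individually

# Selection-style update: one binary outcome for the whole genome.
genome = (rng.random(n_loci) > 0.5).astype(float)
mutant = genome.copy()
flipped = rng.choice(n_loci, size=100, replace=False)   # ~100 mutations, good and bad mixed
mutant[flipped] = 1 - mutant[flipped]
survives = fitness(mutant) >= fitness(genome)           # a single bit of feedback
genome = mutant if survives else genome                 # all 100 changes accepted or rejected together
```

The point of the toy is only that the accept/reject bit cannot tell the few beneficial flips apart from the many slightly deleterious ones riding along with them.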
True, but the resistance to mutations is increased in critical areas compared to other areas. Not all changes are equally likely.
Parallels between evolutionary systems and hill-climbing algorithms have been floating around for a long time at this point.
Has it been tried? I’ve said for ages: show me a representation where a random bit flip generally results in a different but viable entity, and I’ll show you artificial life. The latent space of a VAE could well have those properties.
But it’s not open-ended (in its obvious form), since the VAE would have to be trained on various complex life forms and will probably not extrapolate well outside the support of the training distribution.
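As a rough illustration of what "a random mutation usually yields a viable neighbour" could mean here: the decoder below is just a randomly initialized smooth map standing in for a trained VAE decoder, so "viability" is only by analogy with staying close to a decodable form.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a trained VAE decoder: any smooth map from a low-dimensional
# latent "genotype" to a high-dimensional "phenotype" works for this sketch.
W1 = rng.normal(size=(64, 8))
W2 = rng.normal(size=(256, 64))

def decode(z):
    return np.tanh(W2 @ np.tanh(W1 @ z))

z = rng.normal(size=8)                     # parent genotype in latent space
z_mutant = z + 0.05 * rng.normal(size=8)   # small random mutation

parent, child = decode(z), decode(z_mutant)
print(np.linalg.norm(child - parent) / np.linalg.norm(parent))
# Small relative change: nearby latents decode to nearby forms, unlike a raw
# bit flip in a brittle encoding, which can land anywhere.
```

Whether those decoded neighbours would count as viable organisms is exactly the open question above; the sketch only shows the smoothness property.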
It's a really interesting problem I pondered quite a bit when doing some a-life hobby stuff.
I never came up with a good solution, but you can kind of "feel" that the solution needs to be more analog-ish in the way info is represented. As you say, a small change in data (bit flip) probably needs to produce a small change in the resulting form. Possibly the binary representation points to a vector space of form "primitives" (drivers of form) such that adjacent points have similar form.
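One hypothetical way to get that property, purely as a sketch of the "primitives" idea: let each bit gate a small contribution from a fixed set of form primitives, so any single flip only nudges the resulting form.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical form primitives: each genome bit gates one small basis vector,
# so the phenotype is a sum of many small contributions.
n_bits, form_dim = 32, 100
primitives = 0.05 * rng.normal(size=(n_bits, form_dim))

def form(genome_bits):
    return genome_bits @ primitives

g = rng.integers(0, 2, size=n_bits).astype(float)
g_mutant = g.copy()
i = rng.integers(n_bits)
g_mutant[i] = 1 - g_mutant[i]              # a single bit flip

print(np.linalg.norm(form(g_mutant) - form(g)))
# The change in form is bounded by the size of one primitive, so neighbours
# in genome space stay neighbours in form space.
```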
That is always the issue in alife: we discover processes that help us explore bounded information spaces, but only that.
you might like this: https://www.gregegan.net/DIASPORA/01/Orphanogenesis.html
https://www.richardawatson.com/songs-of-life-and-mind
There are many, and the paper itself cites many good sources including by Huang and Watson in the references section.
The 'software' of biology in this framework is described as pattern-memories stored in "vMem" voltage-gradient patterns between cells in the tissue, analogously to how neurons store information. I think the analogy breaks down slightly here because the memory is more like a remembered target than something that 'can be executed' like software can.
The vMem 'memory' of what 'shape' to grow into can be altered (by adding chemicals that open or close specific gap junctions) such that any regrowth or development can target a different location in morphospace (i.e. grow an eye instead of epithelial tissue, as in the tadpole example from Levin's research).
Fascinating and I hope to have a read of the whole paper soon!
<rant> It's a syntactic process with the ability to update syntax based on outcomes in the environment. I think this proves that syntax is sufficient for semantics, given the environment.
I wonder why Searle affirmed the opposite. Didn't he know about compilers, functional programming, lambda calculus, homoiconicity? Syntax can operate on syntax; it can modify or update it. Rules can create rules because they have a dual status, as behaviour and as data: they can be both "verbs" and "objects". Gödel's incompleteness theorems use arithmetization to encode math statements as data, making math available to itself as an object of study.
So syntax is not fixed; it has unappreciated depth and adaptive capability. In neural nets both the forward and backward passes are purely syntactic, yet they affect the behaviour/rules/syntax of the model. Can we say AlphaZero and AlphaProof don't really understand, even if they are better than most of us in non-parroting situations? </rant>
Remember, they are just symbols, whereas DNA is chemically highly interactive. We could all change conventions and retire the “+” back to nothingness. We can’t do that for a chemical in DNA.
One concrete example is a bootstrapped compiler. It is both data and execution: it can build itself, feeding its output back in as input. Another example is in math: Gödel's arithmetization, which encodes math statements as numbers, processing math syntax with math operations. And of course neural nets: you can describe them as purely syntactic (mechanical) operations, yet they also update their rules and learn. In the backward pass, the model becomes input to the gradient update, so it is both rule and data. DNA too.
These systems that express adaptive rules or syntax make the leap to semantics, I think, by grounding in the outside environment. The idea that syntax is shallow and fixed is wrong; syntax can be deep and self-generative. Syntax is just a compressed model of the environment, and that is how it comes to reflect semantics.
This was an argument against the Stochastic Parrots and Chinese Room maxim that syntax is not sufficient for semantics. I aimed to show that purely mechanical or syntactic operations carry more depth than originally thought.
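A toy way to see the "rules can be both verbs and objects" point, in ordinary Python rather than any of the examples above: a rule is stored as plain data, it is executed as behaviour, and another purely syntactic procedure rewrites it based on feedback.

```python
# A rule represented as data: a lookup table from input symbol to output symbol.
rule = {"a": "x", "b": "x"}            # initial behaviour, partly wrong

def behave(rule, symbol):
    # The rule acting as a "verb": it is executed.
    return rule[symbol]

def revise(rule, symbol, desired):
    # The rule acting as an "object": it is data that another rule rewrites.
    new_rule = dict(rule)
    new_rule[symbol] = desired
    return new_rule

# "Environmental" feedback drives purely syntactic updates to the syntax itself.
for symbol, desired in [("a", "x"), ("b", "y")]:
    if behave(rule, symbol) != desired:
        rule = revise(rule, symbol, desired)

print(rule)   # {'a': 'x', 'b': 'y'}: the same structure was both run and modified
```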
I had a similar idea in university, when I was fascinated by fractal designs and graphics, which let you generate complex and varied structures from just an algorithm and a seed.
Jurassic Park and the quote above helped too, because it played with the idea that in nature, the bigger a system is, the more efficient it becomes, instead of less efficient, as with an organization.
This paper kind of supports my idea that DNA is just a seed for our biological system to produce a specific output.
I think I would take it a step further: most organisms alive today operate at the level of a generative model for a generative model (continue umpteen times) until you arrive at the level of physiology that assembles nerves and organs to work at the scale they do.
I would also comment on the impeccability of the feedback mechanisms across each layer: every message eventually gets into a cell, binds to a protein, which probably cascades into the binding of a hundred different proteins at some point, which eventually sends a message to the tiny nucleus to wrap or unwrap a specific segment of DNA. It is quite a beautiful way of thinking about it.
So in a way, yeah, because that would go through consciousness first and then signal back downstream.
In studies of the "RNA world," a theoretical early stage in the origin of life where RNA molecules played a crucial role, researchers have observed that parasitism is a common phenomenon. This means that some molecules can exploit others for their own benefit, which could lead to the extinction of those being exploited unless certain protective measures are in place, such as separating the molecules into compartments or arranging them in specific patterns.
By thinking of RNA replication as a kind of active process, similar to a computer running a program, researchers can explore various strategies that RNA might use to adapt to challenges in its environment. The study uses computer models to investigate how parasitism emerges and how complexity develops in response.
Initially, the system starts with a designed RNA molecule that can copy itself and occasionally makes small mistakes (mutations) during this process. Very quickly, shorter RNA molecules that act as parasites appear. These parasites are copied more rapidly because of their shorter length, giving them an advantage. In response, the original replicating molecules also become shorter to speed up their own replication. They develop ways to slow down the copying process, which helps reduce the advantage parasites have.
Over time, the replicating molecules also evolve more complex methods to distinguish between their own copies and the parasites. This complexity grows as new parasite species keep arising, not from evolving existing parasites, but from mutations in the replicating molecules themselves.
The process of evolution changes as well, with increases in mutation rates and the emergence of new mutation processes. As a result, parasitism not only drives the evolution of more complex replicators but also leads to the development of complex ecosystems. In summary, the study shows how parasitism can be a powerful force that promotes complexity and diversity in evolving systems.
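Here is a deliberately crude caricature of that kind of setup, not the paper's actual model: replicators copy whatever template they bump into, shorter templates get copied faster, mutation occasionally turns a replicator's offspring into a parasite, and replicators carry a mutable "discrimination" trait that lets them refuse some parasites. All names and rates below are made up for illustration.

```python
import random

random.seed(0)

# Each agent: a length (shorter templates get copied faster), a replicator flag,
# and a "discrimination" trait letting replicators refuse non-replicators.
def agent(length, replicator, discrimination=0.0):
    return {"len": length, "rep": replicator, "disc": discrimination}

def copy_probability(length):
    return min(1.0, 50.0 / length)        # shorter molecules are copied more often

pop = [agent(100, True) for _ in range(100)]

for generation in range(300):
    offspring = []
    for host in [a for a in pop if a["rep"]]:
        template = random.choice(pop)
        if not template["rep"] and random.random() < host["disc"]:
            continue                                        # host rejects a parasite
        if random.random() < copy_probability(template["len"]):
            child = dict(template)
            child["len"] = max(10, child["len"] + random.choice([-2, 0, 2]))
            if child["rep"] and random.random() < 0.02:
                child["rep"] = False                        # a new parasite lineage arises from a replicator mutation
            if child["rep"]:
                child["disc"] = min(1.0, max(0.0, child["disc"] + random.gauss(0, 0.05)))
            offspring.append(child)
    pop = random.sample(pop + offspring, 100)               # keep the population size constant

parasites = sum(not a["rep"] for a in pop)
print(f"parasite fraction after {generation + 1} generations: {parasites / len(pop):.2f}")
```

This leaves out everything that makes the actual result interesting (how discrimination is implemented in the sequences themselves, the mechanism by which new parasite species arise), but it shows the minimal ingredients the summary above describes.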