Developing Eden

I’ve started a new repository called Eden:

I’ve been dreaming of something like Eden since before Abraham, but I also think it’s a practical first step towards an autonomous artificial artist (AAA).

The point of Eden is to make a generative art program which is not subject to Abraham’s security, privacy, and decentralization requirements. In other words, it’s our original paradise, free of concern about evil (collusion, centralization, data poisoning, etc.).

Eden lets us experiment with different frameworks for generative art programs which can eventually be transferred to abraham-mvp, the initial home of our actual AAA. By separating these two tracks for now, we have an outlet to start working on an art generator while we figure out exactly how to satisfy the AAA security criteria. The process of decentralizing and privatizing Abraham will probably invalidate some pipelines in Eden; that’s fine, as we will pare it down to whatever can practically work.


At a high level, I’d like to propose Eden as a server which accepts a latent input vector (like DNA) and outputs an image or video (eventually text and audio as well). At this level it is analogous to a generative model, but we can experiment with composite architectures: ML-based generative models (e.g. GANs, autoencoders, RNNs), more traditional procedural algorithms (e.g. L-systems, cellular automata, circle packing, subdivisions, tessellation, sacred geometric patterns, kolams, mandalas, Voronoi diagrams, fractals, collage/mosaics, flocks/boids, agent-based systems, physics models, etc.), as well as evolutionary computing ideas like NEAT and genetic algorithms.
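To make the latent-vector-in, image-out idea concrete, here is a minimal sketch of that interface as a pure function. Everything here is hypothetical — `generate` is a stand-in for the whole generative pipeline, and the hashing trick is just a way to make a deterministic placeholder, not anything Eden actually does:

```python
# Hypothetical sketch of the proposed interface: a deterministic map
# from a latent "DNA" vector to an RGB image (nested lists here, in
# place of a real image type). Not Eden's actual API.
import hashlib
import struct

def generate(latent, width=8, height=8):
    """Stand-in for the real pipeline: hash the latent vector so the
    same DNA always yields the same (arbitrary) image."""
    seed = hashlib.sha256(struct.pack("%df" % len(latent), *latent)).digest()
    pixels = [[tuple(seed[(3 * (y * width + x) + c) % len(seed)]
                     for c in range(3))          # one byte per channel
               for x in range(width)]
              for y in range(height)]
    return pixels

img = generate([0.1, -0.7, 0.3])
```

A real implementation would route the latent vector into GANs, procedural systems, and so on, but the contract — vector in, media out — stays the same.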

The procedural programs can be used either to make images in their own right, or as control systems on top of the ML models; for instance, a subdivision or tessellation can be used to generate masks for deepdream.
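As an illustration of the control-system idea, here is a hedged sketch of a recursive rectangle subdivision producing a binary mask — the kind of procedural output that could gate where deepdream (or any ML model) is applied. All names, depths, and probabilities are illustrative:

```python
# Illustrative sketch: recursive subdivision as a mask generator.
import random

def subdivide(x, y, w, h, depth, cells):
    """Randomly split a rectangle until depth runs out; collect leaves."""
    if depth == 0 or w < 2 or h < 2:
        cells.append((x, y, w, h))
        return
    if random.random() < 0.5:                  # vertical split
        cut = random.randint(1, w - 1)
        subdivide(x, y, cut, h, depth - 1, cells)
        subdivide(x + cut, y, w - cut, h, depth - 1, cells)
    else:                                      # horizontal split
        cut = random.randint(1, h - 1)
        subdivide(x, y, w, cut, depth - 1, cells)
        subdivide(x, y + cut, w, h - cut, depth - 1, cells)

def make_mask(size=64, depth=4, fill_prob=0.5, seed=0):
    """Binary mask: each leaf cell of the subdivision is wholly on or off."""
    random.seed(seed)
    cells = []
    subdivide(0, 0, size, size, depth, cells)
    mask = [[0] * size for _ in range(size)]
    for (x, y, w, h) in cells:
        if random.random() < fill_prob:
            for yy in range(y, y + h):
                for xx in range(x, x + w):
                    mask[yy][xx] = 1
    return mask

mask = make_mask()
```

The `1` regions would mark where the ML model gets to hallucinate, and the `0` regions stay untouched.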

The pipeline may have multiple stages or be recursive. Ideally, we impose explicit architecture on it in as minimal and generalizable a way as possible, to keep it from becoming too complex or overloaded with heuristics.

With multiple stages, a generative art program becomes more dynamic and unpredictable from a user’s perspective, which increases the range and diversity of its output.
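The multi-stage idea can be sketched as simple function composition: a pipeline is an ordered list of stages, each mapping one intermediate representation to the next. The stages below are toy functions on lists of floats standing in for images:

```python
# Sketch of staged composition; stage names are invented for illustration.
from functools import reduce

def run_pipeline(stages, x):
    """Feed x through each stage in order."""
    return reduce(lambda value, stage: stage(value), stages, x)

# Toy stages on a "signal" (list of floats) in place of real images.
blur   = lambda xs: [(a + b) / 2 for a, b in zip(xs, xs[1:] + xs[:1])]
invert = lambda xs: [1 - v for v in xs]

out = run_pipeline([blur, invert, blur], [0.0, 1.0, 0.0, 1.0])
```

Each added stage multiplies the space of reachable outputs, which is where the unpredictability comes from.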

Relationship with other parts of the Abraham project

As mentioned earlier, a separate track of studying how to do decentralized machine learning will go on concurrently with Eden development. Both of these will be semi-independent, but as we make progress in each, we can begin figuring out how to combine them into an AAA prototype.

Additionally, if Eden is a server, the natural question is what is the nature of the client? We can think about how to design some sort of front-end client application for Eden, perhaps one that a collector/curator uses to query Eden (eventually Abraham) for artworks. I’ve been thinking about forking the OpenFrame project to make a tablet-based client. No progress on that front yet, just an idea.


The current structure of Eden is specified in the README. There is an external folder which contains git submodules of external repositories, including StyleGAN, neural-style, and deeplab-pytorch, with others in the works. A setup script downloads a number of pre-trained models (also a work in progress). Then there is the eden folder, which contains an API to interface with the externals, and an examples folder which documents how to use the API.

There has been no work yet on the I/O. Right now I am adding models from the ground up; eventually I want to make some sort of “conductor” class which takes a latent input, runs the requisite modules to produce an image, and returns it to the client. But this doesn’t exist yet.
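Since the conductor doesn’t exist yet, here is one hedged guess at its shape: carve the latent vector into segments, use one segment to gate which modules run, and hand the rest to those modules as parameters. The class, module registry, and toy module below are all invented for illustration:

```python
# Hypothetical "conductor" sketch -- not code from the Eden repo.

class Conductor:
    def __init__(self, modules):
        # modules: ordered list of (name, fn) where fn(params, image) -> image
        self.modules = modules

    def run(self, latent):
        n = len(self.modules)
        gates, params = latent[:n], latent[n:]
        image = [[0.0] * 4 for _ in range(4)]      # blank canvas stand-in
        per = max(1, len(params) // n)
        for i, (name, fn) in enumerate(self.modules):
            if gates[i] > 0:                       # gate decides if module runs
                image = fn(params[i * per:(i + 1) * per], image)
        return image

# Toy module in place of a StyleGAN / deepdream wrapper.
def brighten(params, img):
    amt = params[0] if params else 0.1
    return [[v + amt for v in row] for row in img]

conductor = Conductor([("brighten", brighten)])
out = conductor.run([1.0, 0.5])   # gate > 0, so brighten runs with amt=0.5
```

The interesting design question is how to split the DNA between routing (which modules fire, in what order) and parameters (what each module does), which is exactly the generalized I/O problem raised in the replies below.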

At some point, it may be wise to turn this into a Python package. There are many dependencies, so a Docker container will be useful as well. In general, closer adherence to standard best practices for Python library maintenance will help.

Things I’d like to add to Eden:

  • more external repositories
  • my rather large deepdream library
  • procedural generative art algorithms as mentioned earlier (L-systems, subdivisions, agents, etc.)
  • more examples
  • I/O
  • streamlined installation (via Docker, virtual env, etc.) and PyPA packaging

For anyone who is interested in participating in development, I’d like to arrange a video conference at some point to discuss it; this may be easiest to organize on the Abraham Discord. If you are interested in Eden but don’t have time to help develop it, it will certainly help just to have beta testers as well.

Also note, if you use Eden and have any questions or find any bugs, please direct those to the GitHub issue tracker. The documentation is sparse right now and there are bugs, but it is functional.

If you have ideas for Eden, please start a discussion in this post. I will announce more details and plan an initial video conference very soon (probably on the Discord).


A few things come to mind here as a general plan. First, making sure that all the models can be run. I ran into issues downloading and running everything and had to try on multiple machines, so OS-specific documentation would be a helpful place to start (tensorflow-gpu is unavailable on OSX, for example, so I started over on a Linux box). Alternatively, specify a standard runtime environment, which Docker would go a long way toward providing.

Second, the main thing to build here seems to be inter-model I/O glue, right? What are your thoughts on the number and ordering of algorithms? If the number and order of algorithms applied to some input is random on every run, there has to be much more work on generalized I/O specs so that any output can go into any input. If it’s a static order, that’d be more tractable as a starting point, but probably not the overall vision, I assume? We would also need to build out a controller.

I feel like the best thing to have right now would be a few good first issues to prioritize initial work, plus some guidelines for style and contribution practices. Standardizing the runtime environment and the expected first input should help get everyone on the same page to add models and start building a pipeline.


Agreed, a Docker container will be useful. I am going to work on this ASAP, although anyone else with Docker experience would be welcome to start work on a container. Since GPU support is critical, it will have to work with nvidia-docker.
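As a reference point, GPU containers for this kind of stack usually start from an NVIDIA CUDA base image. The sketch below is purely an assumption about what an Eden image might look like — the base tag, dependency list, entry point, and the existence of a requirements.txt are all guesses, not contents of the repo:

```dockerfile
# Hypothetical starting point, not Eden's actual Dockerfile.
FROM nvidia/cuda:10.0-cudnn7-runtime-ubuntu18.04

RUN apt-get update && apt-get install -y python3 python3-pip git

WORKDIR /eden
COPY . .
RUN pip3 install -r requirements.txt   # assumes a requirements.txt exists

CMD ["python3", "server.py"]           # placeholder entry point
```

With nvidia-docker 2 installed, the container would be run with `docker run --runtime=nvidia` (or via the `nvidia-docker` wrapper) so that it can see the host GPU.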

Yeah, glue is the right way to put it. It would be nice to standardize this so one model’s output can become another’s input. A lot of this is in utils for now and it’s relatively thin – can grow it out as needed. I think also wrapping more libraries and creating examples for them is a high priority.

I like the idea of putting up issues to prioritize initial work. Let me think on this a bit.


Hi Gene, do you have any cloud storage for Abraham set aside? It would be good to have a shared place to store and retrieve models and other large dependencies. I noticed many of the dependent files were scattered across Google Drive shares that I was not able to fetch with wget, which breaks the automated pipeline of running the examples.

Instead of using wget, you can use the setup scripts in the repository. They’re a work in progress but should download most of the models. At some point a nicer solution will be preferred, I agree, but this should be okay for the time being.

Thanks! I have a Google Colab notebook going to explore the different examples. It has been a good way to highlight some areas of potential disentanglement and documentation needs.