13 October 2025

Effect.AI

Evolving Digital Performance Through Artificial Intelligence

Optimising user experience through evolutionary computation
Long before “AI” became a buzzword, we were already experimenting with it. In 2015, we started Effect.AI, an in-house venture built on a simple but ambitious idea: what if websites could improve themselves automatically?
Effect.AI was an early attempt at automating digital optimisation using genetic algorithms — a form of artificial intelligence inspired by natural selection. Instead of relying on manual A/B testing, we wanted the system itself to test, learn, and evolve designs in real time.
Challenges

Traditional website optimisation relied on static tests and human decisions. You’d design a few variants, run A/B tests for weeks, and draw conclusions from limited samples. We wanted to push this much further: our goal was to create a self-learning system that continuously improved user experience based on data, automatically and without human intervention.

To make that possible, we needed to design:

  • A tracker to collect behavioural data (clicks, time on page, conversions)
  • A feedback loop to translate data into performance scores
  • A genetic algorithm engine to evolve design variants dynamically
  • A control layer to ensure statistical validity (ANOVA testing, significance checks)
  • A scalable infrastructure capable of testing thousands of live variants

In essence, we were building a digital organism that could learn how to perform better — one generation at a time.

What We Did
  • Developed client-side tracker and back-end coordination layer
  • Integrated statistical validation for continuous learning
  • Built automatic variant generation system using CSS injection
  • Designed and implemented multi-stage AI optimisation engine
We approached the project as a series of evolutionary stages — each version smarter and more autonomous than the last.
The Solution
The system used genetic algorithms to improve websites automatically through experimentation and feedback.
How it worked
Define optimisation targets

Website owners selected which elements to optimise (buttons, images, headlines, colours) and set the value ranges for each.
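To make this concrete, a target definition might look like the sketch below; the field names (`selector`, `property`, `values`) and the sample values are illustrative assumptions, not the actual Effect.AI schema.

```python
# Hypothetical optimisation-target definition: each gene names a page element,
# a CSS property, and the set of values the algorithm is allowed to explore.
OPTIMISATION_TARGETS = [
    {"selector": ".cta-button", "property": "background-color",
     "values": ["#e63946", "#2a9d8f", "#457b9d", "#f4a261"]},
    {"selector": ".cta-button", "property": "font-size",
     "values": ["14px", "16px", "18px", "20px"]},
    {"selector": "h1.headline", "property": "text-transform",
     "values": ["none", "uppercase"]},
]
```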

Generate initial population

The system created a set of random design variants and distributed them across users.
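A minimal sketch of how that first generation could be sampled, assuming the target structure above; the population size and the one-gene-per-target representation are illustrative choices.

```python
import random

def random_variant(targets):
    """Pick one allowed value per target to form a single design variant."""
    return tuple(random.choice(t["values"]) for t in targets)

def initial_population(targets, size=20):
    """Create the first generation as a list of random variants."""
    return [random_variant(targets) for _ in range(size)]
```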

Measure performance

Each variant was evaluated based on user behaviour — clicks, conversions, time on page, or any goal defined by the owner.
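One plausible way to fold those signals into a single score is a weighted fitness function; the metric names and weights below are assumptions chosen for illustration, not the production formula.

```python
def fitness(stats, weights=None):
    """Convert aggregated behaviour for one variant into a scalar score.

    `stats` holds totals collected by the tracker for that variant, e.g.
    {"visitors": 480, "clicks": 210, "conversions": 31, "time_on_page": 52.3}.
    """
    weights = weights or {"click_rate": 0.3, "conversion_rate": 0.6, "time_on_page": 0.1}
    visitors = max(stats["visitors"], 1)                    # avoid division by zero
    click_rate = stats["clicks"] / visitors
    conversion_rate = stats["conversions"] / visitors
    time_score = min(stats["time_on_page"] / 120.0, 1.0)    # normalise seconds to [0, 1]
    return (weights["click_rate"] * click_rate
            + weights["conversion_rate"] * conversion_rate
            + weights["time_on_page"] * time_score)
```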

Evolve the design

The system used selection, crossover, and mutation to generate the next “generation” of variants, favouring the ones with the best results.
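In code, one common form of that step uses tournament selection, single-point crossover, and per-gene mutation, as sketched below; the exact operators Effect.AI ran may have differed.

```python
import random

def evolve(population, scores, targets, mutation_rate=0.05):
    """Breed the next generation from the current variants and their scores."""
    def tournament(k=3):
        # Select the fittest of k randomly drawn candidates.
        contenders = random.sample(list(zip(population, scores)), k)
        return max(contenders, key=lambda pair: pair[1])[0]

    next_generation = []
    while len(next_generation) < len(population):
        mum, dad = tournament(), tournament()
        cut = random.randrange(1, len(mum))              # single-point crossover
        child = list(mum[:cut] + dad[cut:])
        for i, target in enumerate(targets):             # per-gene mutation
            if random.random() < mutation_rate:
                child[i] = random.choice(target["values"])
        next_generation.append(tuple(child))
    return next_generation
```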

Validate with statistics

ANOVA and other significance tests ensured that only statistically meaningful improvements were propagated.
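The validation step can be as simple as a one-way ANOVA over per-visitor observations grouped by variant; the sketch below uses SciPy's `f_oneway`, which is an assumption about tooling rather than a record of what Effect.AI actually ran.

```python
from scipy.stats import f_oneway

def significant_difference(samples_per_variant, alpha=0.05):
    """One-way ANOVA over per-visitor observations grouped by variant.

    `samples_per_variant` is a list of lists, one inner list per variant
    (e.g. 0/1 conversions or time-on-page values). Returns True only when
    the observed differences are unlikely to be noise.
    """
    _statistic, p_value = f_oneway(*samples_per_variant)
    return p_value < alpha
```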

Every user saw a slightly different version of the website — and the algorithm continuously learned which combinations worked best.
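Rendering those slightly different versions comes down to turning a variant's gene values into a stylesheet. The real injection happened client-side, but the mapping can be sketched with the hypothetical target structure from above:

```python
def variant_to_css(variant, targets):
    """Render a variant's gene values as the CSS snippet injected for that user."""
    rules = {}
    for value, target in zip(variant, targets):
        rules.setdefault(target["selector"], []).append(f'{target["property"]}: {value};')
    return "\n".join(
        f"{selector} {{ {' '.join(declarations)} }}"
        for selector, declarations in rules.items()
    )
```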
System Architecture
Frontend Tracker
Injected custom CSS per variant and recorded user behaviour in real time.
Backend Engine
Managed populations, handled crossover/mutation, and stored feedback loops.
Profile Matching
Assigned visitors to test groups based on time, IP, and location to ensure consistent conditions (see the hashing sketch below).
Fitness Function
Converted behavioural data into quantitative performance metrics.
Continuous Deployment
New populations generated automatically as old ones converged.
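The profile matching above can be done by hashing stable visitor attributes into a bucket, so returning visitors keep seeing the same variant. The attributes used here mirror the ones mentioned (time window, IP, location), but the exact scheme is an assumption:

```python
import hashlib

def assign_variant(ip, location, hour_bucket, population_size):
    """Deterministically map a visitor profile to a variant index."""
    key = f"{ip}|{location}|{hour_bucket}".encode()
    digest = hashlib.sha256(key).hexdigest()
    return int(digest, 16) % population_size
```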
The architecture was designed as a closed feedback system — a complete loop between observation, adaptation, and improvement.
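Composing the sketches above, that closed loop looks roughly like the control flow below; `collect_stats` is a hypothetical stand-in for the tracker/back-end query, and the whole function is a conceptual outline rather than the production engine.

```python
def optimisation_loop(targets, collect_stats, generations=10, min_visitors=1000):
    """Conceptual observe -> adapt -> improve cycle, composing the sketches above.

    `collect_stats(variant)` stands in for the query that returns aggregated
    behaviour ("visitors", "clicks", ...) plus raw per-visitor "samples"
    for one live variant.
    """
    population = initial_population(targets)
    for _ in range(generations):
        stats = [collect_stats(v) for v in population]          # observation
        if min(s["visitors"] for s in stats) < min_visitors:
            break                                               # not enough traffic yet
        scores = [fitness(s) for s in stats]                    # feedback
        samples = [s["samples"] for s in stats]
        if significant_difference(samples):                     # validation
            population = evolve(population, scores, targets)    # adaptation
    return population
```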
Results
Effect.AI successfully demonstrated how AI can evolve digital experiences without direct human input.
  • Websites dynamically adjusted design based on real user data.
  • The framework proved scalable across multiple sites and goals.
  • Statistical models ensured that changes were significant, not random noise.
  • Conversion rates improved as the algorithm learned optimal configurations.
Though experimental, Effect.AI became one of our earliest internal proofs that AI could be used not just for prediction, but for autonomous decision-making and optimisation.