Looking for ‘Looking for Love’
It’s time we all stopped pretending we make perfect work that turns out just as we planned. We started making Looking for Love in 2018 and went through some really bad versions before we finally got to something we were happy with in 2023. Often our present selves misremember how past projects developed and we expect to create perfection straight off - which is the least helpful thing in the world for creativity. So this is an honest documentation of a process which taught us a lot.
Phase 1: December 2018 - February 2019
Looking for Love started off as a piece exploring whether it was possible to know 100% that something was a robot, and yet, because of the way it interacted with you, start to think of it as a sentient being. We were interested in when emotion would start to override rational thought.
At this point, it was a three-week experience, which players would access via their phones. To start with they’d do a Buzzfeed-style quiz, and depending on their answers, they’d be paired with one of twelve pre-written character ‘shells’, all of which could change age, gender and sexual preference in accordance with what the player was ‘looking’ for. Then over the three weeks, with one to three interaction episodes per day, the player and the character would chat. The interactions were a mixture of fun things, like sharing a picture of something a certain colour each day or being sent a poem; light personalisation, like after the player shared a song they liked, the character would find something similar and send that to them; and an unfolding story about the character’s past. We were fascinated by online romance scammers, who use stories of grief, drip-fed in a certain way, to get their victims to ‘fall in love’ with them. Could we use this dynamic to override players’ rational knowledge that they were interacting with a machine? The experience was designed to end with an AR episode but we didn’t get to that during Phase 1 - more on that later.
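For anyone curious about the mechanics, here’s a minimal sketch of how that quiz-to-shell pairing might have worked - the names and structure are illustrative guesses in Python, not the code Joe actually wrote:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class CharacterShell:
    shell_id: int
    story_beats: list = field(default_factory=list)  # the pre-written past
    age: int = 30                   # the 'flexible' surface attributes,
    gender: str = "woman"           # overwritten to match what the player
    orientation: str = "straight"   # said they were 'looking' for

def pair_player(answer_votes, preferences, shells):
    """Buzzfeed-style scoring: each quiz answer 'votes' for one of the
    twelve shells; the most-voted shell wins, then gets re-skinned."""
    winner_id, _ = Counter(answer_votes).most_common(1)[0]
    shell = shells[winner_id]
    shell.age = preferences.get("age", shell.age)
    shell.gender = preferences.get("gender", shell.gender)
    shell.orientation = preferences.get("orientation", shell.orientation)
    return shell

shells = [CharacterShell(shell_id=i) for i in range(12)]
ally = pair_player([3, 3, 7, 3], {"gender": "man", "age": 42}, shells)
```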
We wrote the experience while presenting another show in a very cold room at Battersea Arts Centre and then ran a test in April 2019, in which about 50 people participated. At the end of the three weeks we brought them together for an event, which was partly to get feedback, and partly because our heads were still in the theatre world so much that I don’t think we could imagine an artistic experience where the audience never shared physical space with each other.
This version of Looking for Love was not good. In fact, it was baaaad. The people who tested it said nice things and that it was fun but we didn’t like it. The character narratives were restrictive and actually the least interesting thing about the experience. It also wasn’t interesting to make a machine seem human, and at the end of the day, we were always going to fail. There would always be jarring moments where it would be clear that it was not a human you were talking to. Some of these were harmless - in one interaction the player was invited to send a picture of something beautiful, and we’d set the character’s system to use image recognition to make a comment about what they’d been sent. Except they kept getting it wrong. They’d receive a picture of a beautiful bunch of flowers in a vase on a table, and comment ‘Nice table’. There was a table, but any human viewer would know that it was the flowers that were the focus. The machine didn’t.
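The bug, in essence - assuming a generic classifier that returns (label, confidence) pairs, which is a guess at how the system worked:

```python
def comment_on_photo(labels):
    """Naive captioning: trust the classifier's most confident label.
    'Most confident' is not the same as 'what a human would say the
    photo is of' - hence 'Nice table'."""
    top_label, _ = max(labels, key=lambda lc: lc[1])
    return f"Nice {top_label}"

# e.g. the flowers photo:
print(comment_on_photo([("table", 0.91), ("flowers", 0.88), ("vase", 0.85)]))
# -> Nice table
```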
Other interactions were more problematic. At various points, the character was meant to respond with an appropriate emoji to something the player had said - so if they’d spoken about football, the system might send a football player or ball. If the system couldn’t tell what the player was talking about, it would send a random positive emoji. Somehow the system had come to favour the donut emoji, which it sent repeatedly to a player who was coincidentally trying to lose some weight, making them feel like they were being trolled.
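Roughly, the logic as described - the topic map and fallback list here are made up, but the shape is the point:

```python
import random

TOPIC_EMOJI = {"football": "⚽", "music": "🎵", "dog": "🐶"}
POSITIVE_FALLBACKS = ["😊", "🎉", "🌟", "🍩"]

def pick_emoji(message):
    """Match a topic keyword if possible; otherwise send a random
    'positive' emoji. A uniform random choice can still land on the
    same emoji again and again - hello, donut."""
    for topic, emoji in TOPIC_EMOJI.items():
        if topic in message.lower():
            return emoji
    return random.choice(POSITIVE_FALLBACKS)
```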
In perhaps the funniest interaction, the system was meant to send a news story pertinent to the player’s location with a ‘saw this and thought of you’ message. Several players were sent the story below, which brought unanticipated darker tones to the experience.
We didn’t like what we’d made, but we knew there was something interesting, especially in that slippage between human and artificial intelligence. So we kept going…
Phase 2: September 2019 & January 2020
This version of Looking for Love was less narrative-driven and more about personalisation. We ditched the twelve characters. Players did a quiz based more on the Big Five personality traits than on whether they liked steak or falafel. This, and other inputs from the player, were used to modify the Ally (as the entity you corresponded with was now called) to be more and more like the player. The Ally’s appearance would change, like a profile picture update, to reflect the player’s tastes. The way they reported things they had done, like preparing a meal they sent a picture of, would adapt depending on the player’s personality traits, e.g. ‘I just had a moment where I realised this was what I fancied. I just threw it together really - didn’t even have a recipe. Spontaneous food = the best food.’ (low conscientiousness) vs ‘I’d been planning to make it for a while, so I made sure I had all the ingredients. The recipe was really thorough and I was really pleased how it turned out.’ (high conscientiousness).
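Under the hood this was basically trait-keyed copy. A simplified sketch, assuming Big Five scores normalised to 0-1 (the threshold and structure are illustrative, the templates are the real ones from above):

```python
MEAL_TEMPLATES = {
    "low": ("I just had a moment where I realised this was what I "
            "fancied. I just threw it together really - didn't even "
            "have a recipe. Spontaneous food = the best food."),
    "high": ("I'd been planning to make it for a while, so I made sure "
             "I had all the ingredients. The recipe was really thorough "
             "and I was really pleased how it turned out."),
}

def meal_report(traits):
    # Pick the variant that mirrors the player's own conscientiousness.
    score = traits.get("conscientiousness", 0.5)
    return MEAL_TEMPLATES["high" if score >= 0.5 else "low"]
```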
We were interested in narcissism but also in provoking players to think about the amount of data they shared online. This was all taking place against revelations about Cambridge Analytica, and I think we got a bit carried away, not least because we were not Cambridge Analytica; we were a very small organisation, and what we were asking of Joe just wasn’t possible for him to build. (Especially in the wake of Cambridge Analytica, when platforms and APIs were tightening their security and requiring more permissions.) I remember trying to work on the show at The Old Market in Brighton, all of us exhausted after just getting back from Ars Electronica festival, and getting frustrated, before Dan had the sense to make me and Joe, both wiped out by spending the last five days surrounded by people, sit in opposite corners of the room and do solo tasks, which calmed us down and made me reflect on what was actually possible.
We also wrote the AR finale during this phase. The idea was that you’d go to a cafe or bar and meet your Ally, have a lovely time with them, but then the mask would begin to slip - or glitch - and you’d see how they were basically built of your data. Realising how much you’d shared, you’d have the choice of erasing the data - and killing your Ally - or allowing it to remain in the world - and giving your Ally life. It would have been a nice moment, and it spoke to the thing we’d started off with, about whether you let emotion override your rational knowledge. (In fact, this podcast, especially Episode 3, released four years later, would explore exactly that.)
Making a piece with AR felt like it would open doors and gain us access to new showcase opportunities. We made a convincing case for why it was the right form for the piece, but deep down, we knew it was gimmicky and tangential. When we paused work on the project in February 2020, we were out of love with it. A month later the world shut down and we spent the next 18 months working on projects to connect people remotely.
Phase 3: December 2021 & May 2022
Nearly two years into the pandemic, and three years after we’d started making it, we returned to Looking for Love. We’d pushed the completion date with the funders so many times, and we knew we couldn’t apply for any more funding until we finished it. We were pretty over-committed, dealing with new projects we’d taken on during the pandemic, as well as trying to finish everything we’d put on hold.
I wrote a completely new version in the two weeks before Christmas. It was shorter - we’d learned from our experience with Smoking Gun - and it leaned into the weirdnesses of both human and artificial intelligence. The frame became that you were trying to train a robot how to fall in love. We recycled elements that had been ‘mistakes’ in past iterations, like ‘nice table’. In one interaction, the player is asked to send an image of something romantic to the robot. Using image recognition (which had improved vastly since we did Phase 2), the system is programmed to identify the second most significant thing first (the table) and then the first (the flowers).
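In sketch form - assuming ranked (label, confidence) results from whatever recognition API you have to hand:

```python
def robot_comment(labels):
    """Deliberately mention the runner-up label first, then the real
    subject - recreating the old 'Nice table' mistake on purpose."""
    ranked = sorted(labels, key=lambda lc: lc[1], reverse=True)
    first, second = ranked[0][0], ranked[1][0]
    return f"Nice {second}. Oh - and the {first} too."

print(robot_comment([("flowers", 0.93), ("table", 0.78), ("vase", 0.71)]))
# -> Nice table. Oh - and the flowers too.
```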
We flipped the challenge of trying to deduce as much as possible about someone in three questions back onto the player, in a section that provoked lots of discussion… not least among the three of us.
The artwork started to feel like something we wanted to make again, rather than something we were trying to do to be a cool digital company. It was goofy and funny and embraced error, rather than trying to be slick and clever.
We dropped the AR element. None of us had liked it much anyway. We still thought of it as a remote experience that people would do on their phones but this was December 2021 and many of us were about to go into our second lockdown Christmas. I’m not sure we could imagine in-person artworks would ever be a thing again.
We’d done A LOT of work during the pandemic on making the interfaces for our projects more accessible and generally better. We knew that we could set the tone for Looking for Love by what it looked like. In May 2022, we had a placement student, Diana Monova, who took on the task of designing the ‘look’ of Looking for Love. By the time she had finished, we were starting to get excited about the artwork again. We liked it, but we were burnt out by over-working during the pandemic. Joe particularly had worked without stopping, as there was more and more demand for online artworks. We got Looking for Love mostly working, enough to submit the report to the funder, and then we all fell apart a bit.
Phase 4: May 2023
Through a long series of events, including getting rejected for a commission (showing that sometimes ‘bad’ things happen for a reason!), we found ourselves programmed in FutureEverything and Science Gallery London’s AI: Who’s Looking After Me? exhibition.
There was a bit of money, which we decided to use to fix the things we weren’t happy with. We put time into thinking about how we would work too. Joe built a system which allowed me to edit the show via Google Sheets, to avoid the situation we’d had in the past of a huge amount of work building up for him to do. Meanwhile he adapted the project to being an in-person installation. The gallery built us an extremely cool and intentionally naff internet cafe, harking back to the early 2000s. We sourced old monitors and discoloured keyboards and mice (way more expensive than new ones, it turns out).
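The sheet-editing setup is simpler than it sounds. Something like this - the URL is a placeholder and the column names are invented, but it’s the rough shape of ‘writer edits spreadsheet, show updates without code changes’, not Joe’s actual system:

```python
import csv
import io
import urllib.request

# Placeholder: a Google Sheet published to the web as CSV.
SHEET_CSV_URL = "https://docs.google.com/spreadsheets/d/<SHEET_ID>/export?format=csv"

def load_script(url=SHEET_CSV_URL):
    """Fetch the show copy as one dict per row, e.g. columns like
    episode, trigger, line. Re-run to pick up edits - no redeploy."""
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(text)))
```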
Still, everything looked too clean. Lots of people find AI a really intimidating or daunting subject (often because it’s presented as such!), and we wanted to break that down and make the artwork really approachable. I ransacked our flat for the kind of crap that got left in internet cafes - paper clips, pens without lids, hair grips, random broken plastic stuff, nearly used-up notebooks, post-its. We typed officious signs with spelling mistakes and wrote on them to correct the mistakes even more officiously. I bought a coffee from the cafe downstairs and carefully printed coffee rings all over the desks. When the perspex letters for the installation area sign turned up, somehow there was only one ‘N’. That added to the effect. The area looked incomplete, imperfect, used and usable. It invited interaction. Ironically, as an ex-theatre company, this was the least like theatre of any piece we’d made, but without the ‘set’ it would have felt sterile and imposing.
Over the six months that the exhibition was on, the space evolved. It turns out that if you leave post-its and things to write with, people leave messages. Then they respond to each other’s messages. The space benefitted hugely from this absolutely unintended happy accident. People were cute and funny and wise. Even the fan in the corner, which we’d failed to include when calculating how many plugs we’d need, became the focus for interactions. Another happy accident.
Nearly six thousand people interacted with Looking for Love at Science Gallery London. To Dan and me, coming from a theatre background, that’s crazy numbers. It’s also the first time we haven’t shared physical or online space with everyone who has experienced something we’ve made. After Science Gallery, Looking for Love went to Birmingham Exchange for 5 months, where it lived in a bank vault that was also a Faraday cage - fun times for Joe when he arrived for the install.
A more lightweight version went to the Science Museum Lates, and we have more dates on the horizon. The experience has also made us think about creating more installation pieces. Here’s a documentation video of us talking about what we ended up trying to make.
There’s no big lesson here, other than sometimes it takes four and a half years to make an artwork that you then write in two weeks, and that the thing you end up making is better than the thing you would have made originally. We’ve all changed enormously in that period of time. Why wouldn’t the plan change too? And rather than fronting like the result is what we imagined all along, sometimes it’s good to celebrate that process of adaptation. Here’s to happy accidents.