In her graphic memoir, Artificial: A Love Story, Amy Kurzweil tackles many existential questions, framed around her father’s quest to resurrect his father using artificial intelligence. Kurzweil participates in the Cherie Smith JCC Jewish Book Festival Feb. 12.
The purpose of life, of art, what it means to be human, to love and be loved, the value of relationships, our mortality. In a very personal story, cartoonist and writer Amy Kurzweil explores not just universal questions but the biggest of questions in her new book, Artificial: A Love Story.
Kurzweil participates in the Cherie Smith JCC Jewish Book Festival Feb. 12 in two separate sessions: one about the choice of the comic form to tell a story, the other titled Art & Artificial Intelligence.
In Artificial, readers are invited into another part of Kurzweil’s world. Her debut graphic memoir, Flying Couch, was also family-focused, centring around her maternal grandmother’s story. As she writes on her website, “At 13 years old, Bubbe (as I call her) escaped the Warsaw Ghetto alone, by disguising herself as a gentile. My mother taught me: our memories and our families shape who we become. What does it mean to be part of a family, and how does each generation bear the imprint of the past, its traumas and its gifts? Flying Couch is my answer to these questions, the documentation of my quest for identity and understanding.”
Kurzweil continues to grapple with these questions in Artificial, this time from the paternal side. Her father, Ray, an inventor and futurist, is building an AI tool that will allow him, basically, to resurrect his father, who died of a heart attack in 1970, at the age of 57. Ray has saved letters, articles, music and other material relating to his father, Frederic, a pianist and conductor who fled Austria for the United States in 1938, a month before Kristallnacht, saved by a chance encounter. Amy is helping her father sort through boxes upon boxes of material and computerize the information. She even chats with “her grandfather,” as the AI program is being developed.
“My father taught me … that, someday, robots would be made of memory,” writes Kurzweil. Of course, the creation of a Fredbot has functional, ethical, emotional and other challenges, and Kurzweil – in words and images – presents them with sensitivity, intelligence and creativity. Each page of Artificial is attention-grabbing and the level of detail on some pages is remarkable. Kurzweil meticulously re-creates correspondence, typed and handwritten, newspaper articles and other documents, emails and texts, but she also captures, for example, the doubt on her father’s face during a conversation and the concern she has for her partner when he’s undergoing some medical tests. Readers learn about the people asking the questions, not just the questions themselves.
As for the answers? There are multiple ones. Of her father’s project, his quest to conquer mortality using technology, Kurzweil writes that her father’s definition of infinity is, “Computers become so small and dense that they become intelligence itself. Humans who do not grow up or grow old and seal our stories. Our stories wake up and keep writing themselves. This future sounds like liberation from the sadness of a story’s end. But it also sounds terrifying.”
That Kurzweil isn’t completely convinced of the merits of her father’s project, even though she loves him dearly and is helping him try to accomplish it, makes Artificial a satisfyingly complex and relatable story. It is a love story on many levels, and one well worth reading.
The Cherie Smith JCC Jewish Book Festival runs Feb. 10-15. For the program guide and to purchase event tickets, visit jccgv.com/jewish-book-festival.
It is said that “truth is the first casualty of war.” There are two aspects to this truth – that the chaos of conflict makes it difficult to discern exactly what is happening, leading to what we might now call “misinformation” and, additionally, the tendency of governments to deliberately mislead their citizens and others for strategic reasons, better known as “disinformation.” Both aspects are very much in play in the current conflict between Israel and Hamas.
There was a time when it was easy for governments to control information. At that time, also, there were editors and fairly clear and stringent (if imperfect) journalistic standards in place before a story would reach its audiences. The internet, among its good and bad characteristics, has eliminated almost all oversight.
Today, anyone with access to the internet has the potential to reach wider audiences than the most powerful person of a century or two ago – and to do so instantaneously. As a result, we are swimming in information.
In principle – in the utopian idea some may have had a few short years ago – this access to virtually unlimited resources would enable every citizen to consume as much information as possible, empowering us to make informed decisions. That premise seems to have proved disastrously wrong. Instead of weighing the balance of opinions in the most vibrant marketplace of ideas ever imagined, many of us seek out only that information that reinforces our preexisting prejudices and fast-held opinions.
Moreover, bad actors – including governments – and unwitting innocents are purveying false information. We are manipulated by lies that are difficult to discern from fact and most of us are guilty of sharing false information without intending to do so.
We are facing the possibility of a “post-truth world,” exacerbated by technological changes and advances in artificial intelligence. Even given incontrovertible evidence, significant parts of populations choose to believe demonstrable fallacies – the most obvious one in our geographic neighbourhood being the “Big Lie” that Donald Trump won the 2020 US presidential election. Even the universal availability of contrary proof does not preclude people from coming to the wrong conclusions.
Google News and many other agencies, to their credit, have begun aggregating fact-checks from verified sources that now appear at the bottom of many news feeds. Of course, these cannot vet the things that come through our email inboxes.
The advent of artificial intelligence is going to turn what had been a trickle and is now a flood of misinformation and disinformation into an absolute deluge. In this issue of the Independent alone, by coincidence, multiple stories address the risks of what is occurring and the need for media literacy and critical thinking.
All of this relates, in a very specific if not immediately obvious way, to a more positive news story in this issue. British Columbia is set to become the second Canadian province to mandate compulsory Holocaust education in the school curriculum.
Ensuring that young citizens complete their education with knowledge of the Holocaust is vitally important. The Holocaust, since well before the internet age, has been the subject of both misinformation and disinformation. Comprehensive education may help people emerge from the school system with a baseline of shared information around a seminal event in human history.
But more is needed. The problem is so vast, a broad approach is required to ensure that most of us, young and old, can discern fact from fiction.
A “supply-side” response is not going to work. There is simply no possibility of stanching the burgeoning flow of lies and misleading content online (and elsewhere). Critical thinking, media analysis, information literacy – these are crucial skills for individuals and society at large. We are way behind the curve in delivering these through our institutions.
Confronting the tsunami of misinformation and disinformation is an intractable challenge. It seems, though, that democratic countries are on the right track: we are acknowledging that it is a problem. There are individuals and organizations – in the public, nonprofit and private sectors – working to bring reliable and trustworthy news and information to the fore. But we must do our part – think twice before you forward a link or email, do your own fact-checking, subscribe to a wide variety of respected publications or channels, be civil in your discussions. It may be a cliché, but it’s appropriate here: be a part of the solution, not the problem.
Jews have been called “the people of the book.” It was the power of and devotion to the received and unfolding written word that ensured the Jewish people’s unity (and diversity) across almost 2,000 years of exile. But who reads books anymore?
If you are perusing these pages, you probably belong to what has become a somewhat exclusive club – readers. Beginning with the advent of radio, picking up speed with the development of television, then supercharging connectivity while reducing attention spans with the advent of the internet and social media, books have, for many people, ceased to be the primary go-to source for entertainment, pastime, learning or self-improvement.
When social media took off, in the early 2000s, most people – experts and ordinary folks alike – didn’t really fathom the impact it would have on our society or on our physiology. Now, a few short years later, science is demonstrating that the speed with which images and ideas flash into our senses may be literally changing how our brains work.
It may be safe to say that giant leaps in artificial intelligence just in the last few months will have at least the same breadth of impact on societies and individuals.
Skeptics among us, who have dabbled a little in public interfaces like ChatGPT, have come away gobsmacked by the capabilities we have discovered – which are clearly just the tip of the iceberg. Perhaps the most wondrous (and scary) thing about artificial intelligence is that it may represent the beginning of an exponential, self-sustaining explosion. The Industrial Revolution began less than 300 years ago. Every modern convenience – practically everything we have outside of turnips and animal-skin garments – is a result of that explosive growth in human capability. For better and worse. For all the incredible advancements we have made, the price we are paying appears to be the future of our planet itself. All this in a mere three centuries. Artificial intelligence, even if we do not understand it now, will likely speed up change in ways that make today’s offerings look like the cotton gin.
The written word is just a small part of what artificial intelligence can do. Because it is one of the easier things to access for most laypeople, this component of ChatGPT is the one that most of us have probably played around with. Professors, employers and others are suddenly confronted with uncharted moral territory in dealing with brilliantly written submissions from students, employees and other correspondents, any of which may or may not have been written by human hands and minds. As one professor commented in the media recently, what tipped him off to the problem was how dazzling some of the essays he received were.
And the time-saving! Artificial intelligence can write a letter to a recalcitrant employee, a grandparent, an old friend or a government official in a tiny fraction of the time it would take the ordinary person to draft a letter of far lower quality.
But who is going to read all these words?
Already, plenty of people have largely abandoned books. Even Jews, the people of the book, find much to do beyond reading. Just a little microcosm in our own community tells us this. Look at the corners of the Vancouver JCC that are the busiest at any given time. History and stereotypes should suggest that the Isaac Waldman Jewish Public Library would be packed with people from morning to night. It’s got a devoted and steady clientele, do not misunderstand, but, judging by foot traffic, you might think 21st-century Jews would be better known as the “people of the gym,” “the people of the pool” or “the people who gab endlessly in the boardroom.”
The bigger issue is, at some point with the advent of technology that just keeps producing more words, we will reach a tipping point at which there are more “writers” (human or otherwise) than there are readers. If a tree falls in the forest and there is nobody to hear it fall, it has been asked, does it make a sound? If words are put to paper (or screen) and never read, might it be as well if they had never been written at all?
Again: we use words as an example here because that is the field we know best. AI is set to upend almost every facet of our society. It feels like we are at a moment much more significant than that time 20 years ago when we first encountered social media, or 30 years ago, when most of us first ventured onto the World Wide Web. We can only barely fathom the good and bad (and indifferent) changes imminent.
Dr. Rumman Chowdhury, chief executive officer and founder of Parity, gave the keynote address at the Simces & Rabkin Family Dialogue on Human Rights. (photo from rummanchowdhury.com)
Data and social scientist Dr. Rumman Chowdhury provided a wide-ranging analysis of the state of artificial intelligence and the implications it has for human rights in a Nov. 19 talk. The virtual event was organized by the Canadian Museum for Human Rights in Winnipeg and Vancouver’s Zena Simces and Dr. Simon Rabkin for the second annual Simces & Rabkin Family Dialogue on Human Rights.
“We still need human beings thinking even if AI systems – no matter how sophisticated they are – are telling us things and giving us input,” said Chowdhury, who is the chief executive officer and founder of Parity, a company that strives to help businesses maintain high ethical standards in their use of AI.
A common misperception of AI is that it looks like futuristic humanoids or robots, like, for example, the ones in Björk’s 1999 video for her song “All is Full of Love.” But, said Chowdhury, artificial intelligence is instead computer code, algorithms or programming language – and it has limitations.
“Cars do not drive us. We drive cars. We should not look at AI as though we are not part of the discussion,” she said.
The 2018 Montréal Declaration for a Responsible Development of Artificial Intelligence has served as an important framework in the age of artificial intelligence. The central tenets of that declaration include well-being, respect for autonomy and democratic participation. Around those concepts, Chowdhury addressed human rights in the realms of health, education and privacy.
Pre-existing biases have permeated healthcare AI, she said, citing the example of a complicated algorithm from care provider Optum that prioritized less sick white patients over more sick African-American patients.
“Historically, doctors have ignored or downplayed symptoms in Black patients and given preferential treatment to white patients – this is literally in the data,” explained Chowdhury. “Taking that data and putting it into an algorithm simply trains it to repeat the same actions that are baked into the historical record.”
Other reports have shown that an algorithm used in one region kept Black patients from getting kidney transplants, leading to patient deaths, and that COVID-19 relief allocations based on AI were disproportionately underfunding minority communities.
“All algorithms have bias because there is no perfect way to predict the future. The problem occurs when the biases become systematic, when there is a pattern to them,” she said.
Chowdhury suggested that citizens have the right to know when algorithms are being used, so that the programs can be examined critically and beneficial outcomes to all people can be ensured, with potential harms being identified and corrected responsibly.
With respect to the increased use of technology in education, she asked, “Has AI ‘disrupted’ education or has it simply created a police state?” Here, too, she offered ample evidence of how technology has sometimes gone off course. For instance, she shared a news report from this spring from the United Kingdom, where an algorithm was used by the exam regulator Ofqual to determine the grades of students. For no apparent reason, the AI system downgraded the results of 40% of the students, mostly those in vulnerable economic situations.
Closer to home, a University of British Columbia professor, Ian Linkletter, was sued this year by the tech firm Proctorio for a series of tweets critical of its remote testing software, which the university was using. Linkletter shared his concerns that this kind of technology does not, in his mind, foster a love of learning in the way it monitors students and he called attention to the fact that a private company is collecting and storing data on individuals.
To keep the pernicious aspects of ed tech from bringing damaging consequences to schooling, Chowdhury thinks some fundamental questions should be asked. Namely, what is the purpose of educational technology in terms of the well-being of the student? How are students’ rights protected? How can the need to prevent the possibility that some students may cheat on exams be balanced with the rights of the majority of students?
“We are choosing technology that punishes rather than that which enables and nurtures,” she said.
Next came the issue of privacy, which, Chowdhury asserted, “is fascinating because we are seeing this happen in real-time. Increasingly, we have a blurred line between public and private.”
She distinguished between choices that a member of the public may have as a consumer in submitting personal data to a company like Amazon versus a government organization. While a person can decide not to purchase from a particular company, they cannot necessarily opt out of public services, which also gather personal information and use technology – and this is a “critical distinction.”
Chowdhury showed the audience a series of disturbing news stories from over the past couple of years. In 2018, the New Orleans Police Department, after years of denial, admitted to using AI that sifted through data from social media and criminal history to predict when a person would commit a crime. Another report came from the King’s Cross district of London, which has one of the highest concentrations of facial-recognition cameras of any region in the world outside of China, according to Chowdhury. The preponderance of surveillance technology in our daily lives, she warned, can bring about what has been deemed a “chilling effect,” or a reluctance to engage in legitimate protest or free speech, due to the fear of potential legal repercussions.
Then there are the types of surveillance used in workplaces. “More and more companies are introducing monitoring tech in order to ensure that their employees are not ‘cheating’ on the job,” she said. These technologies can intrude by secretly taking screenshots of a person’s computer while they are at work, and mapping the efficiency of employees through algorithms to determine who might need to be laid off.
“All this is happening at a time of a pandemic, when things are not normal. Instead of being treated as a useful contributor, these technologies make employees seem like they are the enemy,” said Chowdhury.
How do we enable the rights of both white- and blue-collar workers? she asked. How can we protect our right to peaceful and legitimate protest? How can AI be used in the future in a way that allows humans to reach their full potential?
In her closing remarks, Chowdhury asked, “What should AI learn from human rights?” She introduced the term “human centric” – “How can designers, developers and programmers appreciate the role of the human rights narrative in developing AI systems equitably?”
She concluded, “Human rights frameworks are the only ones that place humans first.”
Award-winning technology journalist and author Amber Mac moderated the lecture, which was opened by Angeliki Bogiatji, the interpretive program developer for the museum. Isha Khan, the museum’s new chief executive officer, welcomed viewers, while Simces gave opening remarks and Rabkin closed the broadcast.
Sam Margolis has written for the Globe and Mail, the National Post, UPI and MSNBC.
***
Note: This article has been corrected to reflect that it was technology journalist and author Amber Mac who moderated the lecture.
Earlier this fall, the National Film Board of Canada released the short Come to your senses, co-created by Alicia Eisen and Sophie Jarvis. It is part of NFB’s The Curve, an online series “featuring the talents of 40 creators and filmmakers, giving a voice to millions whose lives have been touched by COVID-19.”
Eisen, a member of the Jewish community, is an animation filmmaker and visual artist, while Jarvis is a writer and director. Both women are based in Vancouver and have worked together before.
“We met in 2015, when Alicia was in pre-production on her first short film, Old Man,” they co-wrote in an email interview with the Independent. “A mutual friend introduced us, as he knew that Sophie was interested in learning more about stop-motion. It turned out that we live across the street from each other, and a wonderful friendship was formed.
“We first worked together on a short film for kids that blended live action (Sophie’s arena) with stop-motion (Alicia’s arena) and was basically a vessel for us to learn more about each other’s practice and to test the waters of a working relationship. It was an intense experience that threw every obstacle at us, and we came out stronger and ready for more. Which leads us to the stop-motion short film we are currently working on with the National Film Board, Zeb’s Spider.”
When their work was interrupted by the pandemic back in March, they said, “It was very disorienting to have that full-time routine stopped cold, so, when the NFB offered us the opportunity to contribute to their pandemic series, The Curve, it was a blessing to focus our creative energy into something new, and completely different from anything we have ever created before.”
Using the format of a group Zoom, Come to your senses explores the question, “Is the human need to make sense of chaos an inherently chaotic pursuit?”
“The five senses can be a somewhat intangible subject to explore, especially through film (which is an audiovisual medium). We aimed to evoke the other three senses with these limitations, which meant that we had to get a little weird with imagery and sound,” said Eisen and Jarvis. “A large part of our process was to let our intuition guide us; instead of planning what footage we needed to collect to complete the film, we issued open guidelines to ourselves and our artistic collaborators and worked with what we received…. It was exciting to see the patterns and instincts that were shared amongst the group (who were all working remotely from each other), and these similarities helped guide our process into the next phase: the edit.
“Our editor, Kane Stewart, was integral to helping organize this experimental film and to creating what you see in the final cut. We gave him direction on the tone and the arc, and detailed notes on the material that we wanted to include, but ultimately let him organize the material with fresh eyes. Our sound designer, Eva Madden, took the core intention of the project and brought her own spin to the film, which was exciting. The score really sets a tone, and we were thrilled to work with Yu Su, whose personal work we admire.”
The film is voiced by an AI voice-generator, said Eisen and Jarvis. “This way, we could manipulate the speed of the voice and revel in the tech restrictions that come with that choice (which are mirrored in the group video call). We landed on a voice named Tessa, who struck the right tone – gentle but commanding, like a self-help audiobook.”
The artists collaborating on the film were people with whom Eisen and Jarvis had worked over the years: Mona Fani, Suzanne Friesen, Meredith Hama-Brown, Charlie Hannah, Kara Hornland, Arggy Jenati and Janessa St. Pierre.
“We gave everybody a list of creative prompts designed to be completed within two days … things like ‘choose a spherical item from your home and interact with it using each of your five senses.’ We asked each person to approach each prompt with a design sensibility informed by our mood board, but to ultimately bring their own flair and artistry to it, so what we received from each person was unique, yet fit into the collage that ultimately makes the film.”
Did you know that, without goats, the coffee bean may never have been discovered? Are you able to recognize whether the vinyl you hear coming from your neighbour’s apartment is a 78, a 45 or a 33? Do you type your university essays out on a typewriter? Barry knows, Barry can, Barry does, Barry will and Barry did. Introducing the next wave in AI customer service. Barry is the perfect fit for all your too-cool-for-school business needs.
In Jewish community member Ira Cooper’s Artisanal Intelligence, fellow community member Hannah Everett plays Jane, the entrepreneur and creative genius responsible for developing Barry, a fast-learning, curious and fashion-wise artificial intelligence customer service robot, played by Drew Carlson. Cooper describes his play as “not simply a form of absurdist, comedic, low-brow escapism as it may come across. It’s a conversation about identity, as most things are, and its tumultuous relationship with self versus societal box-fitting…. There are other dialogues, too; questions raised about creation versus intent versus audience response and who gets to create meaning. It’s also an affirmation of what love can be.” Artisanal Intelligence is at Havana Theatre July 5 and 7, 9:30 p.m. Tickets ($15) can be purchased at showpass.com.