Computer Science – On Wisconsin
For UW–Madison Alumni and Friends

The Video-Game Approach to Learning
March 1, 2022

Illustration of a video game controller

“Games are really powerful, complex experiences,” says UW–Madison psychology professor C. Shawn Green. Danielle Lawry

New research reveals that playing action-heavy video games teaches players to be quicker learners, making them better at picking up certain new tasks.

C. Shawn Green, a UW–Madison psychology professor who studies how people learn, likens the concept to the way that general physical training can help players learn new sports more quickly by increasing their athleticism.

“If you’re increasing the equivalent of athleticism for perceptual cognitive abilities — like visual attention or speed of processing — that should allow you to learn faster when you’ve got a new task that calls on those abilities,” Green says.

The results will help researchers understand how gaming — which is used to train laparoscopic surgeons and drone pilots, and to help people with amblyopia (sometimes called “lazy eye”) and attention deficit disorders — creates some of its well-documented positive effects.

“Games are really powerful, complex experiences,” says Green, who collaborated on the study with researchers from several other universities. “We know they produce interesting changes in behavior, but their level of complexity makes them hard to study.”

In a pair of experiments, participants were separated into roughly equal groups assigned to play 45 hours of either action video games (such as those from the Call of Duty series) or other popular video games that unfold at a different pace without relying so much on visual attention and reaction speed (such as The Sims and Zoo Tycoon).

Before the players began their gaming assignments, they were tested with tasks that measured their visual perception and working memory skills. Both groups came out relatively even in the initial tests. But after their contrasting gaming experiences, the action game players “had a slight advantage right away,” says Green. “But the bigger effect was that they improved faster at these orientation and memory tasks than the people who played other games.”

The findings will help future game designers who are focused on maximizing the training aspects of the popular form of entertainment.

Toward a Fairer Vaccine
November 11, 2021

Student wearing mask and face shield administers injection into the arm of another student

A vaccine fairness recommendation engine will support equitable decision-making about vaccination. Bryce Richter

Mathematical models have helped the U.S. optimize COVID-19 vaccine allocation and delivery to boost vaccination rates. But these models have not overcome the existing health disparities that stem from unequal access to health care; discrimination; and gaps in education, income, and wealth.

“There’s a missing step between the mathematics and the reality,” says Michael Ferris, the John P. Morgridge Professor of Computer Sciences. “You can solve problems with mathematics up to the last mile, but at that point behavior, communication, and socioeconomic issues become critical.”

Ferris and Corey Jackson, assistant professor at the UW–Madison Information School, are developing a vaccine fairness recommendation engine that will support equitable decision-making about vaccination, with the goal of increasing immunization rates.

“Access is not just being within five miles of a vaccination site,” says Jackson. “It also means, do you have the ability to take off work to go and get the vaccine? Does the location that’s closest to you actually have appointments available? If you speak Spanish at home, is the app for making appointments translatable?”

Jackson and Ferris are measuring whether geographic areas identified as socially vulnerable by the Centers for Disease Control and Prevention are receiving fair allocations of vaccines. They will then catalog interventions and measure how well these interventions are working.

“I think we’ll have a better idea about what fairness in medical or health decision-making looks like,” says Jackson. “My hope is this work will provide useful information for decision-makers moving forward.”

UW–Minecraft
August 27, 2020

Minecraft game rendering of Bascom Hall

A multiplayer online game offered a way to connect with the campus community as everyone was physically separated. VirtualUW.net

When UW students were sent home from campus in March, Ryan Wenzel x’21 didn’t want to say goodbye. He didn’t want to leave Memorial Union or Science Hall or even Chadbourne.

So he decided to build all his favorite campus spots at home, on his computer. His tool of choice was Minecraft, the multiplayer online game. “I used to play [Minecraft] a lot in middle and high school, so I was eager to start playing again,” he says. “I also thought it was the perfect way to connect with the campus community as everyone was physically separated.”

With friends Chris Bravata, Matt Ciolkosz ’18, and Dylan Nysted x’21, he put together a series of challenges based on UW buildings. Hosting the game at virtualuw.net, they invited other Badgers to join in the fun, and by midsummer, more than 40 had registered. Wenzel played for around 200 hours over the course of the spring and early summer and has constructed several campus icons.

“I wanted to start with the buildings in the Bascom Hill area, such as Science Hall and Memorial Union, because I felt that those are the most iconic and historic UW buildings,” he says. “I personally wanted to start with building Chadbourne Hall, which has a special place in my heart after living/working there for two years.”

Riding the Quantum Wave
February 26, 2020

Illustration of computer motherboard circuits

Yuuji/iStock

The computers that we know — and mostly love — do what they do so well because they predictably perform their duties at lightning speed. But in many arenas of research and technology, classical computers will not be able to crack the next-level enigmas researchers are confronting in chemistry, artificial intelligence, medicine, and other fields.

To answer those questions, physicists and computer scientists are developing a class of computers that exploit the mysteries of quantum mechanics, a theory that traffics in probabilities rather than certainties. If they live up to the hype, quantum computers will be able to harness the strange behaviors that occur at the smallest scale of the universe to solve in minutes problems that would take a classical computer decades, says Shimon Kolkowitz, a UW–Madison assistant professor of physics.

Through its Wisconsin Quantum Institute, the university has been exploring the field for almost two decades. This past fall, the campus debuted a master’s program in quantum computing.

The one-year master’s program offers students expertise in the growing field at a time when the market for scientists competent in quantum computing is extremely tight. The New York Times and Wired both reported in 2019 on the difficulty quantum computing firms are having finding qualified prospects.

The UW master’s program is the first of its kind in the nation and only the second in the world, says its director, Robert Joynt. “We decided to put this together when it became clear that there was going to be a lot of interest from the commercial sector in quantum computing,” he says. “There’s a huge ramp-up in activity at places like Google, Intel, Microsoft, and Northrop Grumman. No one else had anything like this, so it was a natural thing to do, and the master’s program seemed like the best option, as students can spend a year here, get a good education in quantum computing, and be very marketable.”

The 30-credit degree program capitalizes on the UW’s deep foundation of fundamental research. “We are truly lucky to have quantum computing experts in three different subareas of quantum computing,” says Sridhara Dasu, professor and chairperson of the Department of Physics. “Given our expertise, we thought we had a unique ability to train students in a professional-level master’s program.”

On, Alumnae: Thelma Estrin
May 20, 2019

Estrin introduced computing technology to medical research, leading the way to today’s health-care systems. Wikimedia Commons

Thelma Estrin ’48, MS’49, PhD’52 blazed a trail in the field of medical informatics (the practice of applying computers to medical research and treatment). Although she always had an aptitude for math and science, those fields were generally off-limits for women in the years following the Great Depression. “I was the first woman engineer I ever knew,” she once said.

That was among her many firsts. In 1961, Estrin published one of the first descriptions of a system to digitize electrical impulses from the nervous system. During a professional career that spanned some 40 years, she authored dozens of research papers about mapping the brain with computers.

She and her husband, Jerry, built Israel’s first electronic computer. After joining the faculty at UCLA, Estrin became director of the data processing lab for that university’s Brain Research Institute in 1970, and later developed a computer network between UCLA and UC–Davis. As she became interested in how computers could help clinicians make better decisions, she helped develop EMERGE, a first-of-its-kind system to guide emergency room personnel in treating chest pain.

Estrin was the first woman elected to the board of IEEE (the Institute of Electrical and Electronics Engineers), the world’s largest professional organization for technology advancement, as well as its first female vice president. UW–Madison gave her an honorary doctorate in 1989.

As part of the On Wisconsin women’s issue, see other UW alumnae you oughta know.

On, Alumnae: Mary Kenneth Keller
May 20, 2019

As a nun, Keller defied traditional expectations in becoming the first woman to earn a PhD in computer science. Courtesy of Clarke University

In 1965, Sister Mary Kenneth Keller PhD’65 became the nation’s first woman to earn a PhD in computer science. She came close to being the first person ever, but the first man to earn the degree accepted his diploma at Washington University in St. Louis earlier that same day.

Keller entered the Sisters of Charity of the Blessed Virgin Mary in 1932 and went on to earn her bachelor’s in math and her master’s in math and physics from DePaul University. She also did graduate studies in computer sciences at Purdue, the University of Michigan, and Dartmouth College, which made an exception to its no-women rule to allow her to work in its computer lab.

After graduation, Keller started a computer science program at Clarke College in Iowa, a women’s institution founded by her order, and ran it for 20 years. She was a strong advocate for women in computer science and for working women, encouraging adult students to bring their babies to class.

Keller proved prophetic about the impact of computers, predicting that this new tool would make the information explosion accessible to all, that it would become instrumental in teaching students, and that it would facilitate AI: “For the first time, we can now mechanically simulate the cognitive process,” she said. “We can make studies in artificial intelligence.”

As part of the On Wisconsin women’s issue, see other UW alumnae you oughta know.

How to Trust a Robot
May 23, 2018

Transparent is a word that is perhaps best understood by its opposites: opacity, secrecy, murkiness, mystery. The inability to see inside of something can provoke uncertainty, or fear, or hatred.

What’s behind that closed door?

What’s inside the black box?

A nontransparent thing can take hold of us and become the dark void under the bed of our imagination, where all the worst monsters hide — and in today’s world, those monsters are often robotic.

This sort of dark, emotional underpinning seems to inform the most popular depictions of artificial intelligence in American culture today. Movies such as The Terminator or Blade Runner or Ex Machina present a future in which artificial intelligence (AI) makes life inevitably bleak and violent, with humans pitted against machines in conflicts for survival that bring devastating results.

But if the fear of AI is rooted in the idea of it as something unknown and uncontrollable, then perhaps it’s time to shine a collective flashlight on Silicon Valley. And that’s exactly what UW emeritus senior scientist Bill Hibbard ’70, MS’73, PhD’95 aims to do.

A singular voice

Hibbard’s story has a few Hollywood angles of its own. He’s overcome a difficult childhood and an addiction to drugs and alcohol that thwarted his career for almost a decade after college. In 1978, sober and ready for a reboot, Hibbard joined the UW–Madison Space Science and Engineering Center (SSEC) under the late Professor Verner Suomi, who oversaw the development of some of the world’s first weather satellites. By the late 1970s, the SSEC was producing advanced visualization software, and Hibbard was deeply involved in many of the center’s biggest and most complex projects for the next 26 years.

But satellites were ultimately a detour from Hibbard’s real intellectual passion: the rise of AI. “I’ve been interested in computers since I was a kid and AI since the mid-’60s,” he says. “I’ve always had a sense that it’s a very important thing that’s going to have a huge impact on the world.”

Many Americans are already applying artificial intelligence to their everyday lives, in the form of innovations such as Apple’s personal assistant, Siri; Amazon’s purchase recommendations based on customers’ interests; and smart devices that regulate heating and cooling in homes. But it’s strong AI — defined as the point when machines achieve human-level consciousness — that has some experts asking difficult questions about the ethical future of the technology.

In the last decade, Hibbard has become a vocal advocate for better dialogues about (and government oversight of) the tech giants that are rapidly developing AI capabilities away from public view. In 2002, Hibbard published Super-Intelligent Machines, which outlines some of the science behind machine intelligence and wrestles with philosophical questions and predictions about how society will (or won’t) adapt as our brains are increasingly boosted by computers. Hibbard retired from the SSEC two years later and devoted himself full time to writing and speaking about AI technology and ethics, work that has earned him invitations to various conferences, committees, and panels, including The Future Society’s AI Initiative at Harvard Kennedy School.

“Bill has a strong sense of ethics, which, coupled with his programming expertise, made him uniquely aware of blind spots that others working in ethics of AI don’t necessarily emphasize,” says Cyrus Hodes, director and cofounder of the Harvard initiative.

From transparency to trust

Most of the recent media coverage of AI ethics has focused on the opinions of celebrity billionaire entrepreneurs such as Elon Musk and Mark Zuckerberg, who debate whether robots will cause World War III (Musk’s position) or simply make our everyday lives more convenient (Zuckerberg’s). The debate generates headlines, but critics say it also centers the conversation on the Silicon Valley elite.

Similarly, says Molly Steenson ’94, an associate professor at the Carnegie Mellon School of Design, we’re distracted from more practical issues by too much buzz around the singularity (the belief that one day soon, computers will become sentient enough to supersede human intelligence).

“When I look at who’s pushing the idea [of the singularity], they have a lot of money to make from it,” says Steenson, who is the author of Architectural Intelligence: How Designers and Architects Created the Digital Landscape. “And if that’s what we all believe is going to happen, then it’s easier to worship the [technology-maker] instead of thinking rationally about what role we do and don’t want these technologies to play.”

But for Hibbard, the dystopian scenarios can serve a purpose: to raise public interest in more robust and democratic discussions about the future of AI. “I think it’s necessary for AI to be a political issue,” he says. “If AI is solely a matter for the tech elites and everyone else is on the sidelines and not engaged, then the outcome is going to be very bad. The public needs to be engaged and informed. I advocate for public education and control over AI.”

Tech-industry regulation is a highly controversial stance in AI circles today, but Hibbard’s peers appreciate the nuances of his perspective. “Bill has been an inspiring voice in the field of AI ethics, in part because he is a rare voice who takes artificial superintelligence seriously, and then goes on to make logical, rational arguments as to why superintelligence is likely to be a good thing for humanity,” says Ben Goertzel, CEO of SingularityNET, who is also chief scientist at Hanson Robotics and chair of the Artificial General Intelligence Society. “His reasoned and incisive writings on the topic have cut through a lot of the paranoia circling around the topic of superintelligence.”

Hibbard’s background as a scientist has helped him to build the technical credibility necessary to talk frankly with AI researchers such as Goertzel and many others about the societal issues of the field. “[Hibbard’s] clear understanding and expression of the acute need for transparency in AI and its applications have also been influential in the [AI] community,” Goertzel says. “He has tied together issues of the ethics of future superintelligence with issues regarding current handling of personal data by large corporations.” And Hibbard has made this connection in a way that makes it clear how important transparency is, Goertzel says, for managing AI now and in the future, as it becomes massively more intelligent.

Designing more democratic technologies

Like Hibbard, Steenson’s career in AI has its roots at the UW. In 1994, she was a German major studying in Memorial Library when fellow student Roger Caplan x’95 interrupted to badger her into enrolling in a brand-new multimedia and web-design class taught by journalism professor Lewis Friedland. Caplan, who is now the lead mobile engineer at the New York Times, promised Steenson that learning HTML would be “easy,” and she was intrigued enough to sign up. The class sparked what would become her lifelong passion for digital design and development, and Steenson went on to work for Reuters, Netscape, and a variety of other digital startups in the early days of the web.

In 2013, Steenson launched her academic career alongside Friedland on the faculty of the UW School of Journalism and Mass Communication before eventually joining Carnegie Mellon. Her scholarship traces the collaborations between AI and architects and designers, and she likes to remind people that the term artificial intelligence dates back to 1955. “Whenever someone is declaring a new era of AI, there’s an agenda,” she says. “It’s not new at all.”

Like Hibbard, Steenson is a strong advocate for broadening AI conversations to include a more diverse cast of voices, and she thinks designers and artists are especially well equipped to contribute. She quotes Japanese engineer Masahiro Mori, who in 1970 coined the term bukimi no tani (later translated as “the uncanny valley”) to describe the phenomenon where people are “creeped out” by robots that resemble humans but don’t seem quite right.

“Mori said we should begin to build an accurate map of the uncanny valley so that we can come to understand what makes us human,” she says. “By building these things that seem like they’re really intelligent, we understand what we are, and that’s something very important that designers and artists and musicians and architects are always doing. We interpret who we are through the things we build. How can we create designs that make us feel more comfortable?”

An ethical education

Many AI futurists believe that ethics is now a critical part of educating the next generation of robotics engineers and programmers.

Transparency is high on the list of pressing issues related to AI development, according to Hodes, who is also vice president of The Future Society. He believes that the most urgent issue as we march toward artificial general intelligence (the point where a machine can perform any intellectual task as well as a human) relates to moral principles. It is vital, he says, to start embedding ethics lessons in computer science and robotics education.

UW students are aware of this need. Aubrey Barnard MS’10, PhDx’19, a UW graduate student in biostatistics and medical informatics, leads the Artificial Intelligence Reading Group (AIRG), which brings together graduate students from across campus to discuss the latest issues and ideas in AI. AIRG dates back to 2002, making it the longest-running AI-related student group on campus.

And while members are mostly focused on discussing the technical aspects of AI and machine learning, Barnard says this year they’ve expanded their reading list to include AI history. They’ve also cohosted an ethics discussion about technology with the UW chapter of Effective Altruism, an international movement that raises awareness and funds to address social and environmental issues.

“To me, the cool thing about AI is computers being able to do more than they were programmed to do,” says Barnard, whose work investigates ways to discover causal relationships in biomedical data. “Such a concept seems paradoxical, but it’s not. That’s what got me interested.”

At the undergraduate level, computer science and mathematics student Abhay Venkatesh x’20 has organized Wisconsin AI, a new group that’s already generated enough buzz to get a sponsorship from Google. Venkatesh says the group aims to launch a variety of student-led AI projects, such as using neural networks to experiment with music, images, and facial recognition. As for ethics? “We consider such issues very important when discussing projects, and we’ve actually avoided doing a couple of projects for specifically this reason,” says Venkatesh, who plans to specialize in computer vision. “We’re planning to develop an ethics code soon.”

This sort of burgeoning interest in ethical conversations is exactly what Hibbard hopes to see replicated at the corporate tech-industry level. “All kinds of corporate folks say, ‘Our intentions are good.’ I understand where they’re coming from, but there are all kinds of unintended possibilities,” Hibbard says. “I would worry about any organization that’s not willing to be transparent.”

The light and the dark

Hibbard is emphatic that he is an optimist about AI, and he firmly believes that the benefits of the technology are well worth the challenge of mitigating its risks. “Part of our imperative as human beings is to understand our world, and a big part of what makes us tick is to understand it,” he says. “The whole scientific enterprise is about asking the hard questions. Our world seems so miraculous — is it even possible for us to develop an understanding of it? AI could be a critical tool for helping us do so.”

Yet he also believes that, if left unchecked, AI could become a weapon to repress instead of a tool to enlighten. Unlike what we see in the movies, which usually pit humanity against the machines, Hibbard thinks it’s more plausible that AI could cause conflict between groups of humans, especially if we decide to do things such as implanting computer chips inside of some humans (but not others) to give them faster, more powerful brains or other enhanced attributes. More immediately, though, he warns that significant social disruption could occur if robots continue to displace human jobs at a rapid rate.

“No one really knows exactly what’s going to happen. There’s a degree of disagreement and debate about what’s going on,” he says, adding that nothing is inevitable if we begin to pay attention — and require tech companies to be transparent about what exactly they’re doing and why. “I want the public to know what’s happening, and I want the people developing those systems to be required to disclose.”

Apple Core
February 29, 2016

Gurindar Sohi. Jeff Miller

752

The shorthand name of the patent covering technology developed by UW computer sciences professor Gurindar Sohi, now at the center of a legal dispute with Apple, Inc.

If you like the speed of your iPhone or iPad, thank UW–Madison and computer sciences professor Gurindar Sohi. That, at least, is the argument made by the Wisconsin Alumni Research Foundation (WARF), and though Apple, Inc., disagrees, a federal court sided with WARF in October, ordering the tech giant to pay some $234 million.

Known at the UW as “752 Patent,” the technology in question is a computer circuit designed nearly twenty years ago by Sohi and three graduate students — Andreas Moshovos PhD’98, Scott Breach MS’92, PhD’98, and Terani Vijaykumar MS’92, PhD’98. According to WARF general counsel Michael Falk JD’97, MBA’97, MS’02, the circuit helps computers run multiple instructions at once.

“It was sort of a magical discovery,” says Falk. “Guri and his students didn’t anticipate the iPhone, but many years later, they have greatly improved how computers run.”

Apple isn’t the first computer manufacturer to make use of this invention. In 2009, WARF settled a claim with Intel to license use of the same patent.

Should the decision stand, the award will be divided among WARF, Sohi, and his former students. The research foundation’s policy is to give 20 percent of a patent’s proceeds to the inventors, so Sohi and his students would each receive a 5 percent share. WARF would use the remaining funds to support more research at UW–Madison.
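As a back-of-the-envelope sketch only: if the full $234 million award were treated as patent proceeds under WARF's stated 20 percent policy (the article does not say whether it would be), the split would work out like this:

```python
# Hypothetical illustration of WARF's royalty policy applied to the award.
# Assumes the entire $234 million counts as patent proceeds, which the
# article does not claim; legal fees and other deductions are ignored.
AWARD = 234_000_000
INVENTOR_PERCENT = 20   # 20 percent of proceeds go to the inventors
NUM_INVENTORS = 4       # Sohi plus his three former graduate students

inventor_pool = AWARD * INVENTOR_PERCENT // 100  # $46,800,000
per_inventor = inventor_pool // NUM_INVENTORS    # $11,700,000 (a 5 percent share)

print(f"Inventor pool: ${inventor_pool:,}")
print(f"Per inventor:  ${per_inventor:,}")
```

The 5 percent figure in the text follows directly: 20 percent divided evenly among four inventors.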

However, the UW has yet to receive any funds from the decision. Apple will likely appeal, and the legal process could take years to run its course. Still, Falk says, the purpose of WARF isn’t to win lawsuits but to ensure that UW discoveries make it to the marketplace.

“For ninety years, WARF has served the UW as its patent management organization,” he says, “and we take our responsibility to defend the interests of the university and its faculty, staff, and students seriously. In the end, our focus is on pushing technology out, and we want to use the money from licensing technology to help research and improve the world.”

Tom Hall ’86: Video Game Innovator
May 30, 2012

Tom Hall uses humor to enhance virtual life and overcome adversity in real life.

Two days bear special significance for video-game designer Tom Hall ’86. On June 9, 1980, his parents brought home an Apple computer. “I lived on that thing,” he recalls. The second milestone occurred three decades later, on April 13, 2010. “Totally out of the blue,” Hall says, “I had a stroke.”

At the age of forty-seven, after forging a reputation as one of the gaming world’s most daring innovators, Hall suffered a lower-left pontine stroke that affected muscles on his right side. During his recovery, he developed a new perspective on life. “I suddenly wanted to do things now, instead of later,” he says. “I love photography, so I got the camera I’d dreamed of. I got a nice Herman Miller chair. I’m eating better and simplifying my life.”

Growing up, Hall thrived on complications when it came to computer programming, creating his own games and vowing to major in computer science once he got to college. At UW–Madison, while working toward his bachelor’s degree in systems programming, he began thinking about a career in game design after he created education software for learning-disabled kids.

“I’d gotten positive response from folks about the text adventures and games that I wrote,” Hall says, “but helping a teacher improve his teaching tools with little games and simulations really gave me that need to do games.” (See a related story, page 13.)

Shortly after graduation, Hall began working at a software company where he met John Romero, John Carmack, and Adrian Carmack. In their spare time, the four geeks created the video game Commander Keen. It caught on. “We realized we could actually do this for a living,” he says, and the quartet formed its own company, id Software. “We worked crazy hard — seven days a week, sixteen hours a day,” says Hall. “I felt guilty eating breakfast. I had to get in to work and make the game.”

Over the ensuing years, Hall developed games including Wolfenstein 3D, Spear of Destiny, Rise of the Triad, the award-winning Anachronox, and the immensely popular DOOM.

Now living in Half Moon Bay, California, Hall has relied on the playful spirit that informs his game design after he was blindsided by the stroke. “Once I knew I wasn’t going to die,” he explains, “it was kind of fun to relearn stuff: ‘Oh, that’s how you use a spoon.’ It was also fun tweeting dumb jokes and updates from the hospital. My folks … taught me to find humor in life, so that’s how I dealt with it.”

Phasing out the intensive production demands that have marked most of his previous projects, and now working for the Loot Drop game company, Hall says he’s embraced a new direction. “I’m kind of done for now with games that take three or four years to develop. Facebook, smartphones, and eventually Google — that’s the current frontier. I like the fast turnaround. Maybe that has to do with the stroke lesson: ‘Do things now.’ ”
