AUMI interviews: Ian Hattwick

Interviewee: Ian Hattwick is an artist, researcher and lecturer at Massachusetts Institute of Technology. He was the main developer for the AUMI desktop application from 2012 to 2014 at McGill University while pursuing his PhD in music technology.

Interviewer: John Sullivan (JS) is a music technology researcher and PhD candidate at McGill University. He served as the developer for the AUMI desktop application from 2016 to 2019.

The interview was recorded via video call on September 13th, 2019.

JS: What I’ll do for this interview, I have basically three questions. They’re pretty simple, and then actually I have a longer list too, but basically I’ll ask you, what was your involvement in AUMI, how long did you work on it? How were you involved, and then what changes were you directly responsible for? Like what did you specifically do on the project? And then the third question is what are either the most or least meaningful parts of your involvement with the project and that usually sort of just leads into some free form discussion of, you know, anything that stood out to you that was important about it, memorable, or frustrating, or whatever, and we can just go from there.

IH: Yeah, man, I have to say that getting me to remember any of that… (laughter) So I think I started working on AUMI, I think pretty quickly after I got to McGill, I think I was looking for funding and I went and met with Eric [Lewis], and Eric basically talked about a laundry list of things that might be useful from the perspective of the people who were using AUMI. I just kind of went through the list and kind of added things. I wonder if I even have the note that I originally took?

So, this would have been seven years ago [2012]. So when I first got AUMI, I think it was not much more complicated than the original project itself right. So it was a project from [a student of] Pauline’s at Rensselaer [Polytechnic Institute]… if I remember correctly.

JS: I think you’re right, actually there was something really cool that Sherrie just sent me. I guess she was doing a video interview with Leaf Miller, who I assume you may have met or at least spoken with through the project, and Leaf actually has this old laptop that has almost every single version of AUMI for, I think the first one was like 2007 or ’08, all the way through. And so it’s like a 15 minute video of her opening different ones up. [Speaking as Leaf:] “Okay and then this one is 2009, and…” It sort of shows through [again as Leaf:] “Now this one had, now you can resize the grid on the screen…” So she marched through, and yeah I think there was one around – I forget when it was – 2012 or ’13, and she said “Oh and I think Ian worked on this one.” But anyways, seeing that, the application largely hadn’t changed much from its earliest version, just these incremental upgrades and additions.

IH: Yeah, I mean I’m looking at some of these notes that are sort of reminding me what had happened. The biggest things that I probably did were just changing the visual representation of the lines and the dot and things like that. I added the keyboard mode to be able to make your own scales and add other scale options. I think I made changes so you could change the velocity based on where you were vertically (in the horizontal modes). I made it so you could switch… Yeah, I’m just looking at the laundry list of things that Eric had said, and I [implemented]. I switched the… so you could have the lines be vertical or horizontal. Yeah, and then a bunch of bugfixes and things like that. There’s a lot of other things we had talked about, in terms of therapeutic goals, but we didn’t really do very many specific implementations of functionality based on therapeutic goals. I think we were mostly talking about things that would be nice to make it more friendly. I think I also added the ability to have presets, and save presets for different students. So you could save their settings and recall them next time you worked with them. So that was a lot of the stuff that I worked on.

I also worked on it for, you know, maybe a year and a half, maybe two years? It wasn’t a super heavy commitment. I was obviously like we were all doing a lot of different things at the time. So yeah, that’s probably the biggest part that I did. I know when Ivan came on, I think that he was the next person to take it on after me, he did a bigger re-write from the ground up the way that Ivan, you know, he wants to do, as they say.

JS: (laughter) Yeah exactly, because he, I don’t know if he sort of came on and worked with the existing version for a while and then went to that? But it was that whatever summer it was after he had taken over, he just wrote it from the ground up and then he delivered it to me. The application was pretty much finished but not yet released, and there were lots of small sorts of things. So I worked with it for a while and then we released that new version, which continues on. And still, it’s kind of crazy, it looks sort of completely different but at its root it does the exact same thing. It’s kind of funny how simple but not simple it actually is. Because really it’s still like there’s a camera, and it tracks across and triggers some notes and that’s it.

IH: And the basic tracking method is probably largely the same, right? Using cv.jit objects (a computer vision library for Max/MSP).

JS: Exactly, yeah, the exact same objects.
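For context, the simplest camera tracking of this kind works by frame differencing: compare successive webcam frames and locate the center of whatever moved. Below is a rough Python sketch of that idea. It is illustrative only; the actual AUMI patch uses cv.jit objects in Max/MSP and may work differently in detail, and the function name and numbers here are invented for the example.

```python
import numpy as np

def motion_centroid(prev_frame, curr_frame, threshold=30):
    """Return the (x, y) centroid of pixels that changed between two
    grayscale frames, or None if nothing moved.

    Illustrative only: a rough analogue of frame-differencing motion
    tracking, not a reimplementation of the cv.jit objects AUMI uses.
    """
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    ys, xs = np.nonzero(diff > threshold)
    if len(xs) == 0:
        return None
    return (xs.mean(), ys.mean())

# Synthetic example: a blank frame, then a bright blob appears.
prev = np.zeros((64, 64), dtype=np.uint8)
curr = np.zeros((64, 64), dtype=np.uint8)
curr[30:34, 18:22] = 255  # blob spans rows 30-33, columns 18-21

print(motion_centroid(prev, curr))  # (19.5, 31.5)
```

In an application like AUMI, that centroid would then be mapped against the on-screen lines or grid to trigger notes.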

IH: There’s one other thing I did toward the end of the time I was working on the AUMI, was I started working on external input devices.

JS: Yeah, I was going to ask you about that, because I kind of remembered that.

IH: It was really, it was just one of those unfortunate things where I was just so swamped, and I had been doing a lot of research on – I should say I had been taking on a lot of projects that involved 3D printing things for new interfaces and sensors, and then just got to the point where I just never got as far with them, to where I really had anything finished. I did actually make some video demos of some of the early prototypes to show to Eric. So I do have some documentation of that. So these were things like: what if you had a student who wants to interact with something – I guess I’m trying to remember why I thought this was interesting, but – almost like a joystick but instead of being a joystick that you can grab, it was just like something you could sort of, like a really big thing that you could kind of nudge with your arm … and sort of move a controller around. I guess at the time I had been working with the idea of 3D printing mechanical assemblies, and how could you design assemblies to work with sensors that are like – you make a piece that has a fixed sensor configuration then you 3D print a mechanical assembly that is designed around that PCB (printed circuit board). And then you can have a lot of different kinds of interfaces where the PCB stays exactly the same, just the mechanical characteristics change. So a lot of things around 3D printed springs and sort of 3D printed assemblies that would afford different kinds of movements. And I think I got to a point where I think I had some demos of that but I never really got to something that was really finished finished.

JS: Yeah, and that continues to be a theme. In fact Eric and I were just talking about that last week. I’ve done a few small things similarly. So one of the intentions of Ivan’s design of that new rewrite of the application was to make it really modular so you could, you know, swap out your interaction block, so maybe you would put something in there that was connected to a physical device or a suite of different kinds of controllers. And so I’ve continued testing out a few different things, and trying [different things] out. One of the things was, do you know the company that Marcello [Giordano, co-researcher at the IDMIL] went to work for, Ultrahaptics? So we got the Ultrahaptics Dev Kit, and had the array. So with a masters student we came up with an Ultrahaptics haptic interface using the Leap Motion, where you could kind of go over and feel your way around a virtual space and do it that way, which was great for proof of concept, but actually didn’t really work in practical application. But yeah, same sort of idea, taking different modalities of input to address differing needs rather than just a camera and just visual representation on a screen. So yeah, it continues to be a way that we are sort of pushing to go.

IH: I remember one of the tensions that came up when I was working on the project, is this sort of interplay between, on the one hand we’re researchers. So we’re really motivated to think about what are really novel and interesting new ways of dealing with and solving these problems. And on the other hand, AUMI is a very user-focused application, right? So it’s really the people who are using it need it to do what they need it to do. So there’s this tension between, they sort of need it to be just the simplest thing it can possibly be. Ultimately it is this very simple thing. And from our perspective as researchers, we’re interested in these bigger topics and we want to make these applications to it, but the problem is that making that transition from like a new idea to something that’s usable, that a user can take and implement in a real-world application, is hard and takes a long time to get there. And to get to the point where it’s actually ready to use, you know, for me, I didn’t get there in that particular sub-project I was working on. It sounds like the Ultrahaptics thing too, it’s just too hard to get it so it’s really ready for that kind of environment.

Because AUMI is built around generic Max objects, both in terms of the CV (computer vision) stuff and everything else, all that stuff was kind of already implemented. So it’s kind of like, what you do if you want to make an application that is going to be usable in a commercial, public facing setting is use things that are tried and true. If you don’t, as soon as you bring in something that’s not tried and true where there needs to be more fundamental engineering work, you’re talking about years of getting it robust enough that it’s ready for the public.

So anyways, I remember that that was one of the tensions, and certainly it was very clear to me that AUMI… So the way that I describe it in my own head, is Leaf is AUMI. Right? And the reason why I say that is that AUMI as an application is so simple. You know, if we were to describe it in terms of interactive media, it’s not designed to draw you in, in the way that a video game would be. There’s not a lot of thought given to learning curve and “how do we give people rewards” and all these things that people think about when they’re designing interactive experiences. It’s very straight, this is what it is; this is what it does. And it can be great, some people will really bond with it, but a lot of people will sort of see it and get what it does very quickly, and will they be motivated to continue engaging with it? And the thing that really got people to continue engaging with it was the context of its use in these social musical settings – in particular the motivation and the things that are brought to it by the people who are leading it, whether it’s Leaf, or the music therapists, or the teachers who are really motivated to give these – mostly children – an opportunity to have these experiences. And AUMI was the tool to do that.

So I was really struck by that. When I went to the AUMI symposium in 2014 maybe? I don’t totally remember what year it was. It was an ICASP (Improvisation, Community and Social Practice) symposium at Mackay Centre School.

And I got to see Leaf do one of her jam sessions, drum circles with all the kids, some of them on percussion instruments, some of them on AUMI. That’s when it really struck me, the amount of energy and the amount of focus that she was bringing to everything by her person and her work. I was really deeply impressed by that. And I think it’s really at the crux of this. I think the people who are using this software… on some level it’s sort of a tool for them, which is great! And it doesn’t need to do more than it does in order to be that tool.

JS: Yeah, absolutely, Eric has mentioned that several times, about different developers working on it over the years, kind of having free rein – what would you want to put in AUMI? Like lots of incredible, fantastic ideas that may be highly creative or highly technical, that are amazing but are absolutely unsuitable for the actual task of AUMI in these sorts of settings, where a lot of times it just needs to do one thing simply, well, and effectively. So it’s sort of a balance.

IH: And that’s sort of like this – like I said, it’s that tension between the two sort of motivations of people who are working with and on AUMI. You know I think it’s, on the one hand there’s people who develop technologies for people who have needs, for people with disabilities or people who just have a particular need, and there’s this problem of people projecting onto these people what they think they need, like developers projecting on the users what they think the users need. And of course we know that that’s very problematic. But on the other hand there’s also this sort of tension just of – you’re in a university setting, here’s this funding, it’s funding research, and it’s not very clear what the goals are. There is some free rein for people to explore these things. And the question is, well if this was a commercial project that wouldn’t be okay. (laughter) As an academic research project is it okay or not? I don’t know.

I think it’s really a bigger question within our lab, is that, if we are going to embark on these projects, how do we ensure something productive comes out of it, whether it’s publications, or training and experience, or something that actually is usable for the people that need to use it. And I think Eric is very comfortable where he doesn’t want to be a product manager. Eric’s not really interested in that, and he understands too that on some level this is a creative endeavor. So you do want to give people the motivation to be thinking outside the box, and on the other hand you do kind of need to rein people in a little bit.

And ultimately, you know, I think he’s probably had a lot of frustrations along the way where he feels like, God that felt like a lot of effort and time that went into something that ultimately didn’t really make much of a contribution to the project. But it is that, you know, what is it all about ultimately?

JS: Right. There’s definitely – my experience has absolutely been that there is a ton of freedom – I can dream up anything I want and do a little prototype of it. There’s tons of freedom to go in any direction and then, yeah, whether or not it’s useful at the end of the day, maybe it is or maybe it isn’t. It’s interesting from an academic research standpoint, it is, I think, probably the most free and open creative project I work on on a regular basis. There isn’t a heavy emphasis on publishing research findings or rigorous academic testing or evaluation or things like that. It’s kind of responding to the situation and the community that is using AUMI, and kind of a free place to explore too as far as testing different things out.

IH: I mean, I kind of look at that and to me that feels like, I mean, there’s nothing wrong with it. I’m not a big criticizer. But there’s some part of me that fundamentally feels like I would like for there to be some sort of way to be able to have some consistent concrete outputs. I guess maybe this book is going to be one of them? To take it back to the academic community and make sure that people are publishing? I mean the problem is, as we know, that to make something that is really a big contribution is a hell of a lot of work. So even to get to the point where, as a developer, you reach out to the people who are using the application and present something to them, and then you go through the iterative development cycle, that’s a lot of work. It’s extremely easy to sit in the lab and prototype some things together, and give a demo and say “Oh, it does this thing, isn’t that cool?” But to actually go to the users and meaningfully query them and get data from them and come back and go through that cycle enough times to get something that would be publishable, or even to get something that would be robustly useful, is a big commitment.

JS: That reminds me of something I had asked Ivan about, maybe it’s more relevant to when you were working on it. I know with the previous version that you were working on, I guess it was version 3, there was just seeing the remnants of the old website and some old data that I had received, there had been for a long time a pretty official feedback channel of like – I think when you downloaded the app there was some sort of form or spreadsheet to fill out, to actively gather feedback from the practitioners, and to keep really organized notes on who was using AUMI, in what settings, how, and I was wondering did any of that get back to your work and development? Or maybe it went through Eric and he has this list? Because I know for a long time a lot of that stuff was collected.

IH: My memory of that experience was that it was sort of a roadblock when you were trying to download the application. If you went to download the application there was some form you had to fill out that felt like, “Ah, do I really want to do this?”

So it was a little bit like a dissuasion rather than anything else. And also I think my remembrance of this too is that when you had to fill it out was before you downloaded the app for the first time anyways. So you wouldn’t even have any experience with it.

JS: Yeah, I think that was what I remembered seeing, was sort of a very closely tracked… if you’re going to be using this app you are sort of part of the, you know whatever – the AUMI beta tester researcher group that you sign up for or something like that. Yeah, that’s good to know, because one thing that I’ve been sort of struggling with a little bit is that, you know the Mackay Centre School project was going on for a while, although it sort of stopped or ended, or went on hiatus, not long after I had started working on the project. So that was really nice, when Ivan was working on it he visited the school a few times and talked directly to some of the teachers that were using it and that helped him form some of the stuff he was working on. But that sort of ended. And I think also through maybe just the cycling of a few people through the project that weren’t there anymore, or you know now we have a different website, there wasn’t really any active tracking of who is using the app and stuff, so I always sort of have that problem where I make some updates and, “Hey it’s a new version” and then it’s like, “Well, who is actually using this, and are we getting any feedback?” So I’ve always intended to launch some sort of user survey or user feedback forum for it. And maybe it’s on me because I’ve never set that up. But it does feel like I’m never quite sure who is using it and where.

And the other thing too is Henry Lowengard launched the iOS app, which has done really well. And it’s really nice, I think it’s a bit – well obviously it’s a lot more portable, and I think is a lot easier to use in classroom settings and stuff. You come into a class and you have a stack of 12 iPads or you have a couple laptop stations. So the iPad version is great in that regard. It’s also become a pretty complex app, but still has kind of the basic core functionality. But yeah, I guess my point being that the desktop application, it’s hard to understand exactly how it’s being used, and what the pain points are to keep developing for it. Yeah, I don’t know. And maybe it’s kind of always been that way and when there was a lot of feedback coming in maybe that helped, and maybe that didn’t, I don’t know.

IH: Well, you know I was just thinking of this while you were talking. I went to the Mackay Centre School a couple times too and witnessed what was going on, and I went to the ICASP symposium, and it was great to see it. It was great to see it in use, and to really understand what people were really doing with it. But it also really did reinforce this narrative that ultimately it needs to be simple, and it needs to support what people are using it for, and it didn’t – what happens when you go to someone and ask them, “Hey I’m an AUMI developer, I’d just like to talk to you about how you use the app”, and it’s very easy for it to become a laundry list of wishes and bugfixes and things like that, which is useful in its own way. But in another way it’s – I guess I always felt like there’s another level of conversation I’d like to have around it. Maybe that other level of conversation is much deeper. Especially thinking about assistive technologies more generally, like how do you understand what’s really going to be valuable, and how do you prioritize that in that sort of user interaction iterative development. I guess I always went to talk to people about it and kind of walked away feeling… not terribly satisfied. And I think largely for that reason, I felt like it sort of devolved into sort of mundane lists of things.

JS: And I think that’s still somewhat the case. I have, you know, a list of very similar things. Wish the button was bigger over here, I’d really like this sort of mode, there’s this one thing it could do better, or…

IH: I guess you can imagine people could go to Apple and say the same thing. Apple could reach out to people and ask, so how do you like OS X? And people would say, “Oh, well you know, gosh I wish the font was bigger and it’s really annoying that this menu bar is here”, and you know. That’s not necessarily… it’s not unimportant, but it’s not terribly deep or interesting.

So what would be interesting to get from people, what would be ways to think about the AUMI experience and really improve it from a fundamental level? When I say improve it, I don’t mean technologically more sophisticated, I mean, just think about it from the very practical use case. One of the things I remember thinking about, one of the things I was really interested in but I never explored, was this idea of documenting, like getting quantitative data out of it, in terms of how much students achieved their tasks. And right now I’m actually working on another project that is developing an interactive system for a rehabilitation hospital for children, and a lot of what we’re talking about in those interactive apps is quantitative data collection. But I mean again it’s one of those things, that if you’re gonna do that, it needs to be in close connection with the therapist, because we need to make sure it’s really collecting the things that we are interested in and it’s rigorous enough and it’s meaningful. It’s not easy to do.

JS: Yeah absolutely. I think the iOS version has some sorta data collection capabilities, although I haven’t looked into it closely. And yeah we’ve talked a little bit about doing the same thing on the desktop version, but I think it’s kinda the same thing, exactly what sorta data to collect, who is looking at it, and then it’s sorta a general project scope and management question. Like, you know, who’s gonna do all that analysis? Me? Probably me (laughter)
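To make the data-collection idea concrete, the simplest form of it could be timestamping each triggered note and summarizing activity afterwards. Here is a hypothetical Python sketch; none of these names come from the actual AUMI codebase, and as both speakers note, what to record and how to summarize it would need to be decided with the therapists.

```python
import time
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical sketch only: SessionLog and its methods are invented
# for illustration and do not exist in AUMI.

@dataclass
class SessionLog:
    events: List[Tuple[float, int]] = field(default_factory=list)

    def record(self, note, timestamp=None):
        """Log one triggered note with a timestamp (seconds)."""
        self.events.append(
            (time.time() if timestamp is None else timestamp, note))

    def notes_per_minute(self):
        """Crude activity measure over the logged session."""
        if len(self.events) < 2:
            return 0.0
        span = self.events[-1][0] - self.events[0][0]
        return (len(self.events) / span) * 60 if span > 0 else 0.0

# Example: four notes triggered one second apart.
log = SessionLog()
for t, n in [(0.0, 60), (1.0, 62), (2.0, 64), (3.0, 60)]:
    log.record(n, timestamp=t)
print(log.notes_per_minute())  # 4 notes over a 3-second span -> 80.0
```

Even a toy metric like notes per minute illustrates the point made above: deciding which measures are meaningful, and rigorous enough, is the hard part, not the logging itself.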

IH: Yeah. Well, it gets around to this idea too, it’s like, it’s easy to do development in the lab. But as soon as you start going out of the lab to bring other people in, it’s exponential in terms of how much time and attention it requires. And it really becomes an impediment if it’s not your full time research job.

JS: Yeah exactly. I’m trying to think of anything I want to ask you about, or if you have other things, that you are remembering.

IH: You know when I look back on AUMI, the thing that I remember the most about it was seeing Leaf and talking with – so, the people I saw using it were some of the teachers at the Mackay Centre School who seemed to use it for their – well I don’t know. I was going to say informal, but I don’t know if it was informal, but just classroom activities – classroom group music-making activities. And then I saw at the ICASP symposium Leaf do her drum circle which was super impressive. I have huge respect for Leaf, I think it’s really important work. And she’s got such a spirit. And then the other thing I saw was at some point there was a physical therapist, or an occupational therapist at McGill who was getting involved in the project?

JS: Yeah, Kaiko Thomas (professor in the School of Physical and Occupational Therapy at McGill)

IH: Yeah, exactly, at some point she had – you know I don’t know if they were her students, but there were some students who were physical therapists or music therapists, who went in as students and had 6 month long research projects, or year long research projects. I remember meeting with them and they were kind of interesting, cause one of the guys had taken the “load your own sample” feature of the thing and had loaded in guitar chords, and they were playing pop song chord progressions, by sort of moving through different things. And that was really a very different use case than we had talked about. And they had really been motivated enough to take it onto themselves, to make it their own. And maybe that’s ultimately what a lot of these things are, is that the application itself can only do so much. It’s really incumbent on whoever is directing the activity to take it on themselves to bring themselves to it and make it something special. Leaf did that, and those two therapists who I saw who were from McGill did that, and I don’t remember so much what the teachers at the Mackay Centre School did, but they must have done that too because that’s how you make it work.

JS: Yeah, absolutely, I think you’re very correct. Yeah it really takes the organizer, the therapist, or whoever it is to do interesting things with it. Did you meet or work with Jessie Stewart? Was he during your time? He’s a professor at Carleton University, I think.

IH: No I didn’t.

JS: Because he has done some really really cool stuff with it in the past years, I’ve gotten to know him a bit. Yeah, he works with – he’s in the music department I believe, and works with kids and adults with different types of disabilities, and does a lot of music ensembles for big groups of differing abilities, and he’s just done a really great job of putting on these big shows with a bunch of kids and huge audiences, and bringing them into the community, and he’s started doing some more interactive music pieces. I haven’t seen it, but Eric was saying more recently he’s done some stuff taking the MIDI output of AUMI and connecting it to some Arduinos and servos and stuff, so there’s actually physical music making devices, actuators like hitting a gong and stuff to some rhythmic things, and he’s been developing these AUMI-based orchestras that have been lots and lots of fun. But he’s another one that has just taken a pretty simple thing, brought his own energy and his own vision to it, and that’s really made it special.

IH: You know, I’ve been thinking about it a lot lately, about digital musicians and instrument building and performance practice. One of the things I’m noticing is that it increasingly seems like instrument building is part of the practice, right? And by instrument building I don’t necessarily mean what we do as hardware designers, but people making assemblages of different kinds of things. Kind of like what you’re talking about where AUMI is one tool and then you use the MIDI output and you do this other thing. Even as you know, in any sampling based instrument, the samples you bring to it define what the instrument is. You know, so even bringing guitar chords to AUMI totally transformed the instrument in a sort of way. So this kind of creative instrument building/assemblages is an important part of common practices in digital media. That’s interesting to hear that about AUMI, it’s fun. And there’s the energy people bring to it, and the commitment and the ideas and the creativity that they bring to using the tool.

It’s nice that AUMI is open enough to support that. And it’s interesting that the two things that make it open-ended, the MIDI output and the samples, are two things that have been identified and used, it’s really cool. I don’t know if there’s a MIDI input.

JS: Right. No, there isn’t currently, but there could be, why not?

IH: I guess there’s that tension when it comes around to designing a system: how do you design it to be open enough to be friendly to other kinds of technologies? So, you think you want to make the violin, which is sort of this all-in-one thing, but really what you want to make is a bow. A bow doesn’t really mean very much without something to bow. But, you know, you can put the bow on anything, and then it becomes an incredibly flexible tool. So maybe AUMI is in some ways, kind of like lots of digital things, that live in that space between being a bow and a violin.

JS: Yeah, and that’s coming back to where it currently sits. Ivan did a nice job in developing this modular system where basically you have input devices that could be swapped in and out to be whatever you want, and basically output devices. So in that regard, well it’s kind of funny because he built the system where the actual communication between input device and sound mechanism is pretty simple, so it makes it really easy to mix and match them. But now as I’ve taken over from him, okay we want to do a few different things with the input, so it becomes more and more complex, and the few tests I’ve done with different physical inputs or the haptic things, it necessitates more complexity. So now we’ve been talking about, is AUMI one sort of core app that has these different functionalities, or can we envision it as more of a suite of tools, many of which can be mixed and matched in different interesting ways towards different ends? Or is AUMI more of a theoretical ongoing point of research in social practice around accessible music-making and stuff like that?

IH: Well, that’s a big area and I’d love to talk more about that. I don’t have the time to do it right now. But I think that gets to some of the core questions around this. At this point I think it’s easy to view AUMI as just being a little app that kinda does its thing, and you know, it’s not super full featured, really doesn’t do anything super super amazing, but it’s really useful for one specific thing. And people have taken it and managed to do some very good work through the app, but largely because of their own initiative.

And I think if you wanted to change that, that’s a pretty big change in the conception of what it does. Maybe Ivan was trying to do that, but what you run into, that you had already identified, is he came up with a concept of the input device, output device, and the communication channel in between, which was a model that exposed certain kinds of information. But very quickly that model is not enough. And if you keep changing that model, you can pull your hair out. So what do you do about that? And that’s really a challenge for anyone that’s designing an instrument that is intended to be interconnected and modular in that way. Part of the problem with AUMI right now is that there’s this input and output and they are intended to communicate, but the communication channel is still very highly specific. At least the way the application is routed, it’s fixed – the input goes to the output, right? And you can’t put anything in between. And then of course to develop something that’s a new input or a new output is a lot of work.
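The input, output, and communication-channel model being described can be pictured as a small message bus, and the friction point is precisely that fixed routing leaves no place for stages in between. Below is a hypothetical Python sketch of such a model with an optional middle stage added; all names are illustrative and not taken from Ivan’s actual code.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical sketch of an input -> channel -> output model; the
# Message fields and Channel class are invented for illustration.

@dataclass
class Message:
    x: float  # normalized horizontal position, 0.0 to 1.0
    y: float  # normalized vertical position, 0.0 to 1.0

class Channel:
    """Routes input messages to outputs, with optional middle stages -
    the "things in between" that a fixed routing disallows."""

    def __init__(self):
        self.stages: List[Callable[[Message], Message]] = []
        self.outputs: List[Callable[[Message], None]] = []

    def send(self, msg: Message) -> None:
        for stage in self.stages:
            msg = stage(msg)      # transform on the way through
        for out in self.outputs:
            out(msg)              # fan out to every output device

# Example: quantize x into an 8-step grid before the sound output sees it.
notes = []
chan = Channel()
chan.stages.append(lambda m: Message(round(m.x * 7) / 7, m.y))
chan.outputs.append(lambda m: notes.append(round(m.x * 7)))

chan.send(Message(0.4, 0.5))
print(notes)  # [3]
```

The sketch also shows why the model strains: as soon as an input needs to send something richer than a position (pressure, gesture class, haptic state), the Message type itself must change, which is the "keep changing that model" problem described above.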

You know one thing that we haven’t talked about today too, that is worth bringing up because it has also been a source of tension, is: is Max really the right environment to develop this in?

JS: Yeah, absolutely.

IH: And when Henry developed the iOS app, it was like right off the bat way better than the Max app.

And it’s like, oh, okay. And we kind of said, well, should this just be coded in C, or in whatever language would be more robust, better performing, and more flexible? But to do that you’d really have to be a developer who knows how to work in those platforms, and that’s something we never really were. And so that’s kind of a head scratcher, because we are not necessarily people who are actually software developers. Henry is, and I think it shows in what he did. So I don’t know where to go with that, other than that it also raises a broader tension within the music tech community: people who are trained engineers versus people who are musicians and artists, who come to it because they love it and then find the tools and platforms that work for them, but ultimately are constrained by the limitations of those platforms and choose to work within them. That works great for them in their own personal practice, but when you want to make the leap from personal practice to product, then you really have to deal with the reality of it, which is that Max is not a good application for creating a product. It’s got too many weird things about it, and trying to manage development in a coherent way is a constant challenge.

JS: Yeah, and that’s still exactly the same, and the conversation still comes up a lot. One of the things that I want to do, and I’ve done a few small tests, is make a browser-based version. There are even some similar things that have come out, just webcam-based sound-triggering apps. And I’m sort of shooting myself in the foot, because if it goes beyond developing for Max, I’m not going to be the developer for a nice C++ application, you know? Some web stuff maybe, but…

IH: You know, actually since you bring it up, it totally should be a web app. And of course if it is a web app, it throws away any idea that it could ever fund itself. But I don’t know if that’s ever been an issue or not.

JS: In theory, no. I think the original spirit, in fact, was that it was always supposed to be a freely accessible application. For a while the iOS version was up for a $5 download, and I think more recently Henry has taken that off, so now it’s a free app again, because at a certain point people said, wait, AUMI was always free, how come this one isn’t?

IH: My memory was AUMI was expensive enough as an iOS app that I didn’t get it. Now $5 might have been expensive enough at that time, you know, when you’re a PhD student, 5 bucks is 5 bucks! My memory was it was $40.

JS: Might have started at $40.

IH: So I have never downloaded the iOS app. So now that you’re saying this, actually I’m going to download it.

JS: (laughter) Check it out, yeah. I think even maybe I saw very recently, Henry just released or is going to release AUMI 2.0 for iOS now.

It’s something that we’ve talked about for a couple of years now, and I still think it would be good for the project to try to bring the two applications a bit closer together, because more and more they just diverge: the same core functionality is there, but everything else is completely different. So, I don’t know.

IH: I mean, probably there’s going to be some sort of balance between Henry’s version and the Max version. It would be nice for the Max version to have even more functionality that was well thought out, but it’s so hard to do in Max that it would just be a nightmare trying. So I’m not really sure – frankly, I think the web platform is the platform to do it in.

JS: Yeah, it was probably a year ago that I spent some time to actually sit down and do some research, because that was the exact question that came up: if we were going to develop a new version, would it be Max? And if it wasn’t Max, what would it be? And my research said, yeah, do it as a web app. If for nothing else, I spend more time just trying to correctly compile the Max patch into a nice stand-alone application, especially on Windows. I’ve just spent so much time on that, and still to this day there’s always some sort of issue. So yeah, a web app would be nice.

IH: You know, with p5.js and Tone.js you’re probably done. Because the support for web apps right now is… (laughter) I’m going to say it’s at Arduino level, you know? Which basically means that it’s been accessible-ized and popularized to the point that there’s actually a lot of information about how to do these kinds of things. It would not be an impossible task for somebody at IDMIL to do.
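
[Editor’s note: the core of the webcam sound trigger the speakers describe can be sketched as a few pure functions in plain JavaScript. This is illustrative only, not code from any AUMI version; in a real browser app the pixel arrays would come from a p5.js or canvas video capture, and the trigger callback might play a note with a Tone.js synth. All names are hypothetical.]

```javascript
// Mean absolute difference between two grayscale frames (values 0-255),
// as a simple measure of how much motion occurred between them.
function frameDifference(prev, curr) {
  let sum = 0;
  for (let i = 0; i < curr.length; i++) {
    sum += Math.abs(curr[i] - prev[i]);
  }
  return sum / curr.length;
}

// Fire the callback when motion exceeds a threshold, with a re-arm
// latch so one sustained gesture triggers once, not on every frame.
function makeMotionTrigger(threshold, onTrigger) {
  let armed = true;
  return (prev, curr) => {
    const d = frameDifference(prev, curr);
    if (armed && d > threshold) {
      armed = false;
      onTrigger(d); // e.g. trigger a Tone.js synth note here
    } else if (d <= threshold) {
      armed = true; // motion stopped; re-arm for the next gesture
    }
  };
}
```

Hooked to a per-frame capture loop, `makeMotionTrigger(10, playNote)` would fire `playNote` once per gesture; the threshold and the latch are the kind of per-user tuning the desktop app exposes as sensitivity settings.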

JS: Yeah.

IH: So, I would love to – this is fun, and I’m happy to talk more about it. And I think this idea about what the vision for AUMI is… that’s the sort of thing that eventually becomes, oh, that’s a really great idea, that’s a great thing to talk about.

JS: Hopefully putting the book out there is a step in continuing the conversation and expanding it.

IH: I think it’s great, I’m super glad it’s moving forward. Kudos to you for keeping on it. This is a great idea. In my experience – in my opinion – there are not enough people looking back and reflecting on stuff. So it’s really valuable to do this.

JS: Indeed. Cool, thanks for your time.

IH: Yeah, let’s talk again sometime. I’d love to hear how your thesis is coming too.

JS: Alright man, take care.

IH: Good to see you.

JS: You too. Bye.