AUMI interviews: Thomas Ciufo
Interviewee: Thomas Ciufo is a sound artist, researcher and Assistant Professor of Music at Mount Holyoke College. He has been a part of the AUMI team since 2013, providing technical support and serving as a liaison between the developers and other parts of the project.
Interviewer: John Sullivan (JS) is a music technology researcher and PhD candidate at McGill University. He served as the developer for the AUMI desktop application from 2016 – 2019.
The interview was recorded via video call on September 23rd, 2019.
JS: Why don’t we jump into it? This is, September 23rd, it is Monday morning, here with Thomas Ciufo and you’re at Mount Holyoke?
TC: Yeah, Mount Holyoke College at South Hadley, Massachusetts.
JS: Super, tell me what is – or has been historically your involvement in the AUMI project, and when did you get started and sorta how did that come about?
TC: Sure, well of the folks you're visiting with, maybe other than Eric, I'm probably the longest running; I joined the project in 2013. I first encountered AUMI at one of the first Deep Listening conferences, hosted through EMPAC (The Experimental Media and Performing Arts Center) at RPI (Rensselaer Polytechnic Institute). I had known and worked with Pauline for some time before that, but I had not been involved in AUMI. At that conference I met Leaf (Miller) and Henry (Lowengard), got to see AUMI in use along with some talks about AUMI, and I could tell right away it was a really meaningful and important project, and one that had always been mostly boot-strapped and had not a lot of resources behind it. So I started talking with Pauline about how I might be able to help out. Her thinking was that the biggest help would be to kinda occupy an in-between space where I'm trying to bridge, or help bridge, between the coders – the folks that come into the project relatively quickly, in some cases – and the number of researchers, practitioners, and users. So I serve mostly as some kind of ad-hoc technical advisor, technical liaison. In the earlier part, when I got involved, we were often getting a new programmer every 3 or 4 months, so a lot of what I was trying to do was help with that transition: help get that person up to speed, help provide some continuity as we went forward. I also have the job of kind of interpreting – helping the researchers and practitioners understand some of the technical limitations, and then translating that back to the programmers – and really just keeping an eye on the whole trajectory of the software development side for the Mac/PC (Windows) platform.
JS: Were you, did you study or work at RPI, or what was your connection with Pauline and also Deep Listening before your involvement with AUMI?
TC: Well, I have no real formal connection with RPI. But I encountered Pauline's work early on, and did a Deep Listening retreat when I was in grad school, and then continued to work with Pauline in a number of different capacities. She was an incredibly gracious and giving teacher and mentor and friend, and she got very excited about some of the instrument building and improvisational performance I was doing in my grad work, and she was a guest reader on my dissertation in 2004. That seems like a long time ago! From there on we stayed in touch, and we did some collaborative performances together, and then when I got involved in AUMI it just continued that relationship. And again, it was clear that it was quite an important project. And while it's not really exactly central to my own creative work or research, it was just something I wanted to see if I could pitch in and help with in any way I could.
JS: That seems to be a theme that has surfaced for a lot of people on the project: an interest in being involved, in wanting to help out and push it forward, based on a belief in the project itself and its goals. Which is pretty cool.
JS: So 2013, who were the developers at that time? Was it Ian, or even before then?
TC: Ha, ha, I should have done my homework, yeah, so…
JS: So, it might have been – there were a couple of guys before then that I never met.
TC: Chuck (Bronson), Chuck was involved, and – I didn't do my homework, I need to look at my notes. Where we were at that point was that version 3 was out for the Mac only and not for the PC, so our first initiative, when I got involved, was to get both programs up to parity, to get them released on both platforms. That was our big push, and it was something we felt very strongly about, because as you probably know, the Macintosh platform, while somewhat prominent and ubiquitous in certain regions, is hugely unavailable in other parts of the world, and we have folks using this in a number of locations where a Mac computer is completely inaccessible, whereas a low-cost PC is much more readily available. I will go back to my notes and look at the development chronology for all the programmers, but as I mentioned our turnover was often, you know, one semester – we would get a new programmer as a TA through the McGill program, and then the turnover was pretty quick – so I'd have to check my notes, but version 3 was the big push. Which ended up being version 3.1. And then, when Ivan got involved, we turned a new chapter. Once version 3 was released and stable, the work to re-envision the program much more broadly began, and that's the version that you and Ivan were involved in, version 4.
JS: Something I was talking to Ian a lot about was – and I think he worked on the project maybe a year and a half, at best recollection, and in that time it was solidly the development and refinement of the version 3 app – is there any work in there that kinda stands out to you, some of the more important milestones before 4 was released?
TC: Well, since I joined when version 3 was relatively stable on the Mac, that's kinda my version 1. So to be really honest, the biggest thing that stood out was trying to get version parity between Mac and PC and just get a robust, functioning, bug-free version we could distribute widely.
JS: (laughter) I laugh because that’s something that unfortunately still persists to this day.
TC: Sure, and that's going to be an ongoing challenge. But in some ways the project is enigmatic, because it has some really high aspirations, but in some regards the program doesn't have to do all that much. It really brings such a special experience to the user, but it's not designed as an expert performance system; it really needs to stay accessible. So in some ways the notion of newer features and added features – and in fact feature creep – can be kind of a problem, in a sense, because we are really dedicated to the accessibility and usability aspects as well. So like I said, I kinda came in at version 3, so that's what I'm most familiar with, and getting that out cross-platform most robustly was a big step. And then version 4 was much more of a re-think: the user interface changed dramatically, and a bit of the user functionality changed as well. That one was a bigger step in terms of program design, I think.
JS: Yeah, in terms of version 4, are there a – I know at this point you’ve had plenty of time to work with it and get to know it, somewhat deeply at least. What are your thoughts, and insights on the new version, and like you said, it was a radical redesign from the ground up, while still pretty much trying to maintain this really really simple functionality. So what are your takeaways? Are there obviously some better things, some worse things? What’s your take on it?
I guess better or worse is probably not the best way to phrase it. What are your observations?
TC: Well, it's a little bit of a particular and odd vantage point, because I'm not necessarily an expert user of it, or a deep experiential user of it. Maybe I should be? But in some regards, I'm really trying to understand how the individual is using it – with therapists, and in applied contexts, what their needs are. I think some of the things that came about in version 4 were really helpful. Obviously decoupling the interaction mode from the instrument gives us more flexibility. There was also just an important back-end code cleanup that happened. I don't know how much you looked at version 3, but I'm sure you can attest…
JS: In fact I just went back into version 3; I'm working on the relative movement tracker. It took me a long time to track down the full Max master files – the patches – to dig through, but I did it recently. (Max, a visual programming language for music and multimedia, is the language that the AUMI desktop application is built in.) So yeah, code cleanup was a big part for sure.
TC: Yeah, and I think the user interface streamlining was a big deal, as well as a more robust notion of presets and a lot of the work that Ivan did also lays groundwork for future developments, although again the path forward for the program is somewhat up in the air in terms of, now we’re trying to just resolve a few things that have cropped up with version 4, and retune a few of the modules. Yeah, user interface improvements, performance improvements certainly, the modularity aspect, code cleanup, it was a lot of new direction that helped.
I'll mention one thing, though, that goes back, I'm guessing, all the way to version 1, which was that Pauline was pretty clear with this project that it wasn't intended to be a musical instrument in the conventional sense – that it didn't necessarily need to give preference to pitch and melody and rhythm in any sort of more traditional musical sense. So first of all, she always referred to it as "instruments," in the plural. That was really important to her, that it could be different things to different users. And second of all, instead of making it a predictable, repeatable expert performance system, it's been an improvisational, experiential tool/instrument from the beginning. So her insistence – subtle and gentle insistence – that it always have the ability to work with found sounds and abstracted sounds and this wider notion of musicality was really important from the beginning. And I'm glad that this has remained in the program. It could easily have worked its way toward becoming a more conventional acoustic instrument, in the sense of being able to accurately reproduce a certain melody or something. To me that has been a unique and interesting and important feature that's really down to Pauline's vision of musical performance being very broadly considered. I guess those are a few things that stick out in the current version, but in some cases in the previous versions as well.
JS: One thing that Ian said that was cool to hear – he said, "You know, when I think about AUMI and the application, really when it comes down to it, I think about Leaf. I started working on this application and then I met Leaf." And he's right: like you said, the application basically is relatively simple in what it sets out to do, and when it's successful it does that kind of simple thing – computer vision and motion tracking and triggering sounds – well, and easily and reliably. Beyond that, it's not necessarily so interesting until it's put into the hands of the people that are using it. He said, "I didn't quite get the instrument until I saw Leaf conduct a session with it, and everything that she brought into it, and the group interactions that she put together, and definitely the sounds they're using and the way that it's used – then the instrument really, really comes alive."
JS: And the flexibility of the instrument that, that it’s not just a virtual piano, but it can be a lot of things in the hands of different practitioners kind of makes it special, I think.
TC: Yeah, I agree completely. Leaf is an incredible person and an amazing member of this team, and really for me it just took one time of seeing some young people interact with this and what that experience brought to their life – and in some cases then seeing the next layer out, which was maybe their parents seeing them interact with the instrument, and the joy and satisfaction it gave the parents to see their child interacting with others in this way. So yeah, that's kinda why I come back to always encouraging us to remember who this is really for. Many of the designers and developers have various kinds of musical practices and build instruments for themselves, and I do that as part of my practice. But those are decisions I make for my own creative practice, and we really have to be sensitive and cautious about not losing that connection to who is using this. Not that it's one person – it's a broad range. But again, more is not necessarily better if it means it's less accessible, less easy to use, and less robust. In the same way that Leaf can lead an incredible session with one hand drum, the instrument, or instruments, don't need to be able to do everything. And we have to remind ourselves of that sometimes, because sometimes it's more challenging or fun or compelling to keep building it out – but if it's not serving those who use it, then it's not going to be successful.
JS: Yeah, absolutely. So maybe that's a good segue to talk about where the app – and not just the app but the project in general – is, and where we see it going. There are a few levels I'm interested in hearing your thoughts on. For one thing, we can talk about the actual technology that it's built on. We've spent a lot of time discussing: going forward, does it stay built in Max, or do we build it in a different language? I've been really interested in pursuing a web/browser-based version of it, if for nothing else to ease up some of the cross-compatibility issues that we have. So there's a technical aspect that you might be interested in talking about.
And then there's also this balance: the project is now 12 years old, if you say 2007 was its initial launch according to the website, and in that time it's really largely stayed the same – the simple interaction model of computer vision and sound triggering – and I think all of us developers have, at one point or another, done other side prototypes. I was working last year on a haptic version where you can base the same interaction on a Leap Motion sensor and a haptic array, and you can kind of scrub around with your hand and feel the music space and play that way, which was a great prototype. In actual practice, there's a long way to go to make that, number one, a reliable system for anybody to be able to use, let alone a system that's fine-tuned for somebody with disability or mobility issues. But then, getting back to this idea of keeping its simple functionality versus the many ways we could take it – something Eric and I have been talking about is developing different types of physical interfaces that could be plugged in, and just the idea of accessibility for different modalities. If the screen is a more visual modality, what could we do for people who are vision impaired? I'm curious what your thoughts are on how to negotiate that balance of keeping it simple and easy to use and true to its core functionality while exploring some of these other topics or challenges for accessible music making?
TC: Sure, well, I think there are at least two levels of this that I'll comment on, and I'm sure there are several more. First, we're in a sort of re-evaluation phase, I suppose, at the moment. I know you're kinda wrapping up a few small modifications to version 4, but honestly speaking, since Pauline passed, the project has definitely continued to move forward, but it has a little bit different feel and impetus. Pauline was incredible for her ability to lead, but very gently, while giving everyone plenty of room to make personal investments in a project. So in that regard it's not that we can't continue forward, and we are, but in some regard we're still kind of feeling our way through that transition. The other thing is that the iPad version that Henry has developed, and continues to develop, is another sort of point in this 3D space. He's built a lot of additional functionality into that version that's never come into the Mac/PC version, so we have an open question about the relationship between those two. We could certainly look at the functionality elements he's built in and consider whether those would be enhancements or not, or how many versions of this we want to support. And clearly Max/MSP has its opportunities and limitations. It came about because it was the most readily available and broadly used DIY programming environment that these types of art/science tech folks were comfortable with. We've never had a pure CS (computer science) background, we've never had a commercial development team, or even a commercial developer, for that matter. So we sort of inherited Max out of practical necessity. A few different programmers have talked about migrating to other tools that they prefer, and I've personally resisted that, because that programmer will be on the project maybe 4 months? And then they'd leave it hanging in a language that is much more proprietary, whereas almost everybody in your lab has some Max experience. So that's been the case.
But as you said, as the project goes forward – web audio has come an enormous way in even the last three or four years. And you know, even the Ableton crew, with their new learning project and their synthesis project in the browser, shows a lot of promise there. So I think there are a number of ways to think about that going forward. And if the team, generally speaking, is supportive, and we can continue to have a similar level of functionality and easier accessibility, then that would certainly be open for discussion.
The physical computing hardware side is also really interesting and certainly holds a lot of promise, as we know through the NIME (New Interfaces for Musical Expression, a yearly conference on musical interface design and practice) community that works on these kinds of instruments. I think so far we've kind of been resistant – not to the R & D side of it, but to me one of the biggest factors we've centered on is accessibility: not just how the program works, but cost and availability. What does it take to run it? If we look at where we are now, with a $2000 laptop and all this peripheral gear, compared to migrating to stand-alone headsets, we have a similar concern. While we could add more functionality through other sensing technologies or other physical devices, there's a potential trade-off in ease of use and "can someone afford that?" That doesn't mean that someone shouldn't pursue it, and in certain cases that would bring a lot to certain users. But I think the bigger concern has always been that we look at the medical industry especially, and adaptive use needs, and you see technology that is not that sophisticated selling for thousands and thousands of dollars to users that often can't afford it. I think that's wrong and unethical, and we've resisted any implication of that: the iPad version is free, the Mac/PC version is free. I'll only continue to work on the project with that as the mode. But again, that doesn't mean that hardware can't enhance it. And I think we should be looking at that, and if we have the overhead in project management – which I don't think we do at the moment – and we find or discover or make an interface that is really useful and important, how do we get sponsorship to pay for that and make it available to our users on some sort of equitable, accessible basis? But yeah, there are a lot of great opportunities if we can find a way to balance those with affordability and access, I think.
JS: Yeah. Going back to what we were talking about before, your core belief in the project being a strong motivator for your involvement over these years – it's also come up in some of the other interviews that I've done. I think there are a lot of good ideas, and a lot of motivation for really extending this project in meaningful, interesting ways, but often a lack of resources to actually implement them, either in terms of dedicated project management or adequate funding to have people work on it in a dedicated, focused way for long periods of time to develop some of these things out. So I guess it remains to be seen if that will happen in the future for AUMI. Ivan was talking a lot about that. It would be great if there were a bit more concentrated organization or funding where we could do some of these larger blocks of development. For example, it would be easy for us in our lab (IDMIL, the Input Devices and Music Interaction Lab at McGill University) to develop two or three different simple sensor interfaces, maybe integrated with something Eric talked about: basic switches that would interface with a child's wheelchair system, or something like that. Or even adapting some of the technology we already use around the lab, like the T-Sticks (a novel musical controller developed at the IDMIL), which could have capacitive touch surfaces or something that kids already have their hands on. But to go from a few small lab demos or prototypes to something that could be available to practitioners is, at this point, a pretty big step.
TC: Yeah, in some ways it is a little challenging. The project has more or less been a labor of love for everyone involved. It aligns more closely with certain team members' ongoing research agendas and what they are doing in their writing and scholarship and labs, and for some it isn't really that at all. This isn't a focal point of my research at all; I just do it kind of on the side, as best as I can, to contribute. And overall we have way more ideas and opportunities and things that we'd like to do than we have the resources to support. I try not to get too down about that, because the upside is that we have a lot of freedom, and we're not answering to a corporate sponsor that wants to commercialize it and patent it and then upsell it to the abilities community at some huge markup.
So you know, there are some tradeoffs there. But I do think the turnover, especially in the development team – though that's improved dramatically since you've been involved – is a kind of hindrance, because we only have a developer long enough to fix a few things. Ivan was a good haul, because he was able to see it across to version 4. But that revolving door on the programming side means it's been hard to have a longer-term, deep vision, and that's why the iOS version has a different level of development: Henry has been the sole developer from day one and knows the code inside out and the project inside and out.
But yeah, I think the future is open, and I appreciate and try to celebrate the successes we've had, and look to what we can do in the next round. And it doesn't have to be proprietary, innovative technology: when I first got involved, you know, I hooked a game controller up to it – a $15 eBay game controller. Now, that assumes a lot about the kinds of bodies that would be interacting with it, and for many that doesn't work at all, and that's why our camera vision model would need to remain. But even a cheap UI (user interface) device like that, one that's pretty readily available, can bring a lot of enhanced interaction for some users. So I don't think anything is off the table, but again, we do kind of have this balancing act with the ad-hoc nature of the research group and the coming and going of the various technical folks involved.
JS: So, one thing I'm curious about – I don't know if I've asked others or not – are there parts of this project that have bled into your own personal work or your teaching? Like you say, I think a lot of us are in a similar position. For me, I've worked on AUMI now – I think it's going into my 3rd year – and it's not necessarily a core element of my thesis research, although more and more, and especially with this [book] project, I'm trying to align it a little more closely. But given that it's not a core part of your current ongoing practice, are there elements of the project that have gone into your work or research or teaching?
TC: Yeah, but maybe not in the way that might seem most obvious. I do build and perform on computer-extended and computer-enhanced instruments, so there is a fundamental similarity on a very basic level, but the nature of those two different models means that there isn't a lot of direct intersection. And I also don't come from a disability studies background or a music therapy background, so the direct correlations are minimal. But I believe there are a few things that have been very important and very resonant. One is that this is just an incredible team of people, and the way that we work together has been very interesting and very positive. So the sense of collaboration and teamwork has been really strong, and that comes into my leadership approach on other projects I'm involved in, and just the way I want to work with other people in my field.
I've already mentioned that Pauline was remarkably good at this. She could generate or find or collaborate on an idea, build a lot of energy around it, and bring in a lot of interested folks and interesting folks, all with a very light touch, and she gave all of the team members room to explore and take it in directions that they thought were going to be meaningful. So the collaborative model has been really amazing. I've met wonderful people. It's really deepened my understanding of how different bodies function in the world, and how many assumptions we make on a daily basis as relatively able-bodied people, whether in the physical nature of how we interact with the world or, more specifically, in what we think of as musicians or musical abilities or musical activities. So in that regard it influences my general work as a researcher and practitioner, and also my work as a teacher; it has certainly given me new insights into how I work with my students and what their backgrounds and interests and abilities are. Not in the direct ways one might expect if this were my core research, but there's been a lot of peripheral overlap in really interesting ways.
JS: Yeah, absolutely. Ahh, number 6 on this list – I don't know how pertinent it is to you, although maybe it is. Did you work with – who was the researcher involved on the project, was it Jacqueline? – who did a lot of the documentation for version 3? I think that was her name.
JS: Anyway, the question is generally about feedback from the practitioners, and it's definitely something that I've been interested in and kind of concerned about. Now we have the newer AUMI website, but it used to be that if someone wanted to use AUMI they would go to the website – maybe hosted by RPI or the Deep Listening Institute – and even just to download the program you had to fill out a questionnaire collecting a lot of data about how and where it was going to be used. There was really an active feedback collection mechanism in place to try to closely track how it was being used. Now, with our newer website and newer version and so on, there's very little feedback, I feel, from the practitioners back into the development. It may just be that there currently aren't so many practitioners using the desktop version, but as you have been working on it, have you been actively getting feedback or connecting with practitioners, and how has that worked in your experience?
TC: Yeah, it's an important part of the project, and it's one that we've struggled with since the beginning; we still have enormous room for improving that loop. One of the main goals that Pauline and I talked about was to help with continuity, and then also to interpret and help translate from our user base to our technical team. Being able to stay with the project for a longer span, and having a little bit more of an arc to see where it has come from, was part of the goal. So yeah, I'm often interacting with a therapist, or one of our team, or a direct user, trying to understand what it is they want and need and what isn't working, and trying to put that into terminology or pragmatics that the programmers – who haven't been with the project very long – can actually understand and work with.
And again, we're fortunate that you've been able to stay on longer and to do one of the team research meet-ups. The original Deep Listening website did have a questionnaire, but ultimately we didn't do a lot with that data. It didn't really reflect much about the user experience, so much as who was downloading it. So we took it out when we went to the new website, because it really just felt like another barrier to getting the program, one which didn't necessarily tell us how they were using it or how well it was working for them. But we haven't really replaced it with anything that solves that. So we get the occasional tech support questions, which we field.
Frankly, the strongest and best version of this works in two ways. One is that therapists and practitioners working in the field report back to the research team, and then, either through me or directly, it gets communicated to the programmers – that's one model, and it works okay when it happens. The other, and probably better, way that we've done in the past is through concentrated full-team meetings. We did one in Montreal a couple of years before you were involved, which Ivan was able to take part in. Eric organized it and brought us all in, and we did, I think, a half day – or a little more – at the Mackay Centre School. We did a lot of practitioner-based interactions and brainstorming with the team, and also saw it applied in the field. And when we tied that in with workshops that Leaf runs, where multiple team members get to go and participate and help and witness firsthand how the interaction plays out, that is super helpful, and we need to find ways to do that more – as well as enhancing the less concentrated feedback sessions, which could come from anywhere at any time, in a way that makes them more tangibly feed back into the system.
I mean, it's just like any instrument or performance practice: we can talk about it, we can think about it, we can read about it, but when we do it, a lot of new information comes to light immediately. And putting these programs into use and into practice, seeing how they're being used and received by a wide range of users, is completely critical – you can tell in 2 minutes when we've made a huge misjudgment in design. It doesn't take a PhD; it takes watching a kid be frustrated because they've lost the eye tracking and they don't know where it is, you know. So time-out, return to center (a behavior of the application, where the tracking dot on the screen will move back to center if the object it is tracking moves outside of the camera's field of view) becomes a potential solution. That's an area we really need to figure out better, and it kind of comes back to that resource issue: it isn't a commercial project, and we have pretty limited resources at various times in the development stages. But yeah, seeing how it works and doesn't work is really critical. And I'm sure – I wasn't able to come to the one that you made it to, but I bet you got a lot out of that one in a pretty short span of time.
JS: Yeah, same, like you described. For me it was great, as it was the first time for me to actually meet so many people in person: Sherrie (Tucker, University of Kansas) and Ellen (Waterman, Carleton University) and Leaf and Pauline – that was actually the first time I met her, perhaps the only time. So for one thing, just to get everyone sitting at the same table and talking, but then also to see – I forget if it was there, or at the symposium a year later at RPI – Leaf do a group session, and others lead some sessions. I absolutely learned more, and had more vital ideas and directions to run with in the development, from that than at any other point, which was really great.
Yeah. So well maybe we’ll kind of wrap it up at this point if that’s okay with you.
TC: Sure, sure, thanks for visiting, and thanks for all your efforts on this project. We talked earlier about this chapter not really being able to be an exhaustive historical accounting, and I don't think, even if we wanted it to be that, we could make it that. But it's been an incredible team effort, from the programming and technical side as well as the therapists and practitioners and all the research team. Even when a particular programmer would come in on a very limited time frame, they always got into it 110% and were really excited to make a contribution. So even with the turnover, the involvement and the engagement has been really good, and I appreciate that, and I appreciate all the time you've put into it in the past couple of years.
JS: Absolutely, certainly. Well, I've been really happy to be a part of it, and likewise I appreciate the time that you've spent and everything you've done for the project, during my term and before – even as I was coming on, it was really, really helpful. And just in wrapping up, I'll say too: in terms of envisioning this project going forward – I'm in my last year here at McGill, and probably that will be my last year working on the project, at least as the primary developer – this book project is really, really great, and hopefully it will give us some of the answers and the forward momentum to envision AUMI as it goes into the future, whatever that might be. It's a fun project to be on.
TC: Yeah, I'm really excited about the book, and I hope it is going to help do a few things. Hopefully it will primarily help expose more people to this project. You know, I'm surprised – I meet therapists fairly often who have never heard of it, or who have never even thought about any sort of technological intervention in their practice. Maybe they're using a physical adaptive pick holder or something like that, but to think of using technology to help with accessibility issues is kind of new to them. So I'm hoping it will raise more awareness in that regard, and then, as you're saying, maybe it will also help us bring together a more ambitious next round of re-envisioning where it will go. So yeah, thanks, thanks for everything. And have a great afternoon.
JS: Ok, thanks Thomas. Take care.