The Anki community can't be "perfect" according to my values
AnkiDroid has more than three million active users. I have no idea how many people use Anki or AnkiWeb. I know that there are some users I could never be friends with. Not in the abstract sense that I couldn't appreciate every single person in a group of three million, but in the concrete sense that I have seen people who use Anki, who are invested enough to join the Discord or follow AnkiDroid on Twitter, sometimes even people who contribute, express transphobic views, share anti-abortion propaganda, advocate for laws that make sex workers' lives harder, or explain to me that I should seek help because it's not normal to have blue hair. I'm extremely unhappy about helping people who believe that protesting against people's rights is a good use of their time, but I have accepted that there is no way to select who can benefit from my work.
Once, I had to list the possible bad effects of Anki, and the worst my imagination came up with was that people might learn things they would later use in ways detrimental to society. The same is true of any education system. My imagination failed me: I had not considered that people might become addicted and start reviewing in dangerous situations, which is the most striking proof I have ever had of how much I lack imagination.
Well, to be honest, I don't think it is "Anki" that is addictive. I fear it's medical school that messes people up. It is so stressful and so time-consuming that every second is precious, and every bit of knowledge may have a huge impact on one's studies, and then on one's career. Add to that the well-known lack of good public transit in the US, which privileges cars for everyone. So it's possible that if Anki didn't exist on mobile, or didn't exist at all, medical school students would still do dangerous things just to increase their chances of success, and that we are not really the cause of this danger, merely an accessory to it. I can also contrast AnkiDroid with apps such as Twitter or Candy Crush, which deliberately optimize for addiction because they have an interest in users spending more time with them. If we are addictive to some people, it's only by accident.
The issue is that I'm extremely motivated to excuse Anki. Being a contributor to the Anki ecosystem has been a huge part of my life since 2017. It has been extremely useful in a lot of social and professional situations; it has taught me a lot, and it still provides me with experience as a developer, a maintainer, a mentor, and a project leader that is invaluable in job interviews. It's also quite pleasant to get positive feedback from many people explaining how our work improved their lives, how much they regret not having known about it sooner, and so on. As far as dopamine goes, it's far better than "just" likes on a social network. That is to say, I don't trust myself when I conclude that I'm not responsible, that I'm not guilty, that I don't have to take this feedback into account, and that I can keep doing whatever I was doing before.
What I could do
Honestly, I don't know what I could do. I could leave the community altogether, but I don't know of a moral system in which that would be an acceptable answer: the risk to users remains the same. Technically, I don't even know whether I have enough permissions to delete the app from the Play Store. I'm not certain, and I won't try. In any case, it could be put back online relatively easily, so it would not be a real improvement. Even if all maintainers were convinced that the moral answer is to delete the app, it's open source, and other people could upload it. They could even add advertisements and gather user data from it if they wanted, now that there would no longer be a standard main version of the app on the Play Store.
An obvious solution would be to check whether the user is driving. That is impossible. First, because we only access the user data we need, and speed or geolocation is not data we have access to (and besides, someone waiting at a red light has zero speed). Second, because we can't know whether the user is the driver, a passenger, or riding public transit.
Voice-only solution
If we can't prevent people from using AnkiDroid while driving, maybe we can at least make it less dangerous. AnkiDroid already has a text-to-speech system. Sadly, medical school students use a lot of images, which is rather logical since they need to see body parts and disease symptoms. Maybe they could review only the cards without images, I don't know.
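As an aside, Anki's search syntax matches against the raw HTML of note fields, so one way to restrict a review session to image-free cards is a filtered deck built from a search that excludes the `<img>` tag. A sketch (the deck name `Cardiology` is purely illustrative, and this assumes images are embedded as standard HTML `<img>` tags rather than, say, LaTeX-generated media):

```text
deck:Cardiology -"<img"
```

The leading `-` negates the quoted term, so the search keeps only cards whose notes contain no `<img` substring. This is a workaround on the user's side rather than anything the app enforces.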
Anyway, with the current version of the app, the student must still use their screen to answer their cards. Maybe providing a way to answer without the screen would reduce their risk. This has been requested a lot, but until recently there was no solution that would work easily and consistently. It's possible that with the help of Google Assistant it can be solved. Some contributors ran tests recently and shared them on our Discord; it seems like a promising option. Now the obvious question is: "will it lead people who were concentrating on the road to start reviewing while driving?" And since the answer is obviously YES (recall that there are three million users), the actual questions are:
- How many people who would not have reviewed when a screen was required will start reviewing while driving because it can be done orally?
- Would those people have made phone calls or done other activities while driving instead? Is reviewing more or less dangerous than those?
- What is the increase in the risk of accidents (or decrease, though I doubt it)?
- How does this increase in accident risk compare to the decrease in risk for people who already review while driving?
Conclusion: what to do
My main issue is that I have no training in ethics, apart from some random reading and The Good Place. So I don't even have the proper tools to analyze this question. Not that having the tools would necessarily provide an answer.
I've often read that universities should offer ethics lectures in computer science departments. Usually I've heard this when people discuss surveillance systems, data gathering and analysis, prediction, or face recognition, and also when it comes to informing users, competing for their attention, and creating addiction. I've heard about ethical issues when developers contribute to software used by big corporations or governments. I don't think I've ever heard about ethics when it comes to systems that are fully controlled by their users and based on old technology. And in this case, AnkiDroid is essentially an improved version of paper flashcards. Which may well be why I have no idea what the heck the moral thing to do is now. Apart, at least, from writing this blog post, so that these damned ideas stop running through my head every time I contribute to AnkiDroid.
Yeah, because right now I still contribute to AnkiDroid. Even though I know it very indirectly puts people's lives at risk by distracting some drivers, I'm not stopping. I hate writing this line, but I feel I have a duty to state explicitly that this is the decision I have made, and that I own my choices. I won't even pause "until I've found an answer", because I don't know whether I'll ever actually find one.
I have taken on responsibilities towards other AnkiDroid developers, and I want to be someone reliable, someone whose word people can trust: if I say I'll do something, I actually try to do it. It would take an excellent reason for me to leave a community with unfinished tasks. Today, I'm a Google Summer of Code admin and mentor for AnkiDroid; dozens and dozens of candidates spent time writing proposals, and I accepted the responsibility of mentoring some of them. There is also an important change underway in AnkiDroid, where it seems my reviews have been useful (i.e. they caught bugs that other reviewers missed and that may or may not have caused data loss).
So I feel it would be really harmful to AnkiDroid if I were to leave, and I can't see myself doing it until and unless I'm certain it's the right thing to do, not merely when I have suspicions.
 Not a hypothetical scenario. That is what happened in China.
 As a side note, I HATE it when medical school students provide bug reports with screenshots of their actual cards. I understand that it's a normal part of their review process, but I wish they would consider that seeing burned flesh or open bodies is not a normal part of being a software developer.
 Disclaimer: I worked on Google Assistant for 13 months.