T265163 Create a system to encode best practices into editing experiences
- Fri Oct 09 2020
🌱 In the forest that is Phabricator, this ticket is very much a seedling on the forest floor. Read: this task is a gathering place/work in progress.
This parent task is intended to help gather and advance the thinking around how the visual editor might be enhanced to help people learn the social conventions (read: policies and guidelines) and exercise the judgement necessary to become productive and constructive Wikipedia editors.
Background
The visual editor's growing popularity among people new to editing Wikipedia [i] suggests it has been reasonably successful at helping people learn the technical skills [ii] necessary to edit Wikipedia.
The trouble is that the edits these newcomers make often break or defy Wikipedia policies and guidelines.
This task is about exploring how the visual editor could be augmented/enhanced to help people learn these policies and guidelines and exercise the judgement necessary to become productive and constructive Wikipedia editors.
Potential impact
New and Junior Contributors
Working on talk pages has led us to notice that for many newcomers, the earliest human interactions they have on-wiki center around something they did wrong, like not contributing in the Wikipedia Way [iii][iv][v][vi][viii].
We wonder how newcomers perceive these interactions and, further, whether having a positive first interaction with another Wikipedia editor could increase the likelihood that newcomers continue editing. [vii]
Senior Contributors
We wonder whether adding more "productive friction" to publishing flows could free Senior Contributors up to do more high-value work by:
- Reducing the amount of work they need to do reverting lower-quality edits, by increasing the quality of those edits
- Reducing the amount of work they need to do blocking bad-faith/vandalistic editors, by lowering the likelihood that these actors are able to complete/publish the changes that cause them to be blocked
- Reducing the amount of work and time they spend writing on new users' talk pages to alert them to policies and/or guidelines they've [likely] unknowingly broken
- Reducing the amount of time and energy they need to spend worrying about protecting the wikis, potentially freeing up mental space to address new challenges
Use cases
More in T284465.
Components
- A way to codify rules and policies that machines could "read" / "interpret"
- To start, a team like Editing could hardcode these rules and policies on a one-off basis (a minimal illustrative sketch follows this section). At scale, if/when the concept proves valuable, we envision a way for volunteers to write custom rules based on the consensus reached at their respective projects.
- A way for the editing interface to locate where content exists that violates these rules and policies
- Note: "content" in this context refers to content that has already been published on the wiki and content that volunteers have added and have not yet published.
- A way to surface improvement suggestions to volunteers who have an editing interface open.
- E.g. "This is the issue. This is where the issue exists within the artifact (read: article, talk page comment you are drafting, etc.). This is what you can do to remedy the issue."
- A way to make volunteers aware that issues exist within content they are reading / have not yet started to edit.
Also see the "Components" section in T276857.
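To make the first two components concrete, here is a minimal, purely illustrative sketch of what a hardcoded, machine-readable rule plus a draft checker might look like. Everything in it (PolicyRule, check_draft, the WP:PEACOCK regex) is hypothetical: it is not an existing MediaWiki, VisualEditor, or AbuseFilter interface, and a real detector would more likely rely on consensus-defined filters or models than on simple regexes.

```python
# Illustrative sketch only: a hypothetical, hardcoded "policy rule" format and
# draft checker, roughly matching the components listed above. None of these
# names correspond to an existing MediaWiki or AbuseFilter API.
import re
from dataclasses import dataclass

@dataclass
class PolicyRule:
    rule_id: str          # e.g. a policy shortcut like "WP:PEACOCK"
    description: str      # what the policy/guideline says
    pattern: re.Pattern   # naive stand-in for a real detector (filter or model)
    suggestion: str       # remedy shown to the editor

@dataclass
class Finding:
    rule_id: str
    start: int            # offset of the issue within the draft text
    end: int
    suggestion: str

# Rules "hardcoded on a one-off basis", as envisioned above.
RULES = [
    PolicyRule(
        rule_id="WP:PEACOCK",
        description="Avoid promotional 'puffery' wording.",
        pattern=re.compile(r"\b(world-famous|legendary|award-winning)\b", re.I),
        suggestion="Replace promotional wording with a neutral, sourced claim.",
    ),
]

def check_draft(text: str, rules=RULES) -> list:
    """Locate spans of not-yet-published content that a rule flags."""
    findings = []
    for rule in rules:
        for match in rule.pattern.finditer(text):
            findings.append(
                Finding(rule.rule_id, match.start(), match.end(), rule.suggestion)
            )
    return findings

if __name__ == "__main__":
    draft = "She is a world-famous author of several books."
    for f in check_draft(draft):
        # "This is the issue. This is where it exists. This is how to remedy it."
        print(f"[{f.rule_id}] chars {f.start}-{f.end}: {f.suggestion}")
```

Each Finding pairs the rule that fired with the span of draft text it applies to, which is roughly the shape of information the third component would need in order to surface "this is the issue / this is where it exists / this is how to remedy it" suggestions in an open editing interface.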
Naming ideas
- Policy Check
- Intelligent edits / Edit intelligence
- I'm attracted to the idea of framing this as infusing the collective intelligence of the wikis into the editing interfaces.
- Edit assistance
- Assisted edits
- Augmented edits
References
Related tickets
- T128060: VisualEditor makes it easy to create partially linked words, when the user expects a fully linked one
- T106641: '' is frequently used unnecessarily in VisualEditor and put in <nowiki>
- T162291: Autolinking for URLs and magic links (ISBN etc.) not applied when copy-pasting a block of text
- T56947: VisualEditor: When user changes a link anchor which has the same link target, suggest that they may wish to change the link target too
- T174554: AbuseFilter should expose matched text to warning messages
Data
- Impact of turning off IP editing at pt.wiki.
- Superset: AbuseFilter hits over time
- Superset: Spam blacklist hits over time
Policy documentation
- Study of policies at various Wikipedias: https://meta.wikimedia.org/wiki/Universal_Code_of_Conduct/Research_-_Wikipedia
Research
- Proposed: AI-Models-For-Knowledge-Integrity
- Automatically Labeling Low Quality Content on Wikipedia By Leveraging Patterns in Editing Behaviors via @Halfak in T265163#7622952 (non-paywall link)
- Conversations Gone Awry: Detecting Early Signs of Conversational Failure
- Wikipedian Self-Governance in Action: Motivating the Policy Lens
- Twitter Prompts Findings
- "If prompted, 34% of people revised their initial reply or decided to not send their reply at all."
- "After being prompted once, people composed, on average, 11% fewer offensive replies in the future."
- "If prompted, people were less likely to receive offensive and harmful replies back."
- Unreliable Guidelines: Reliable Sources and Marginalized Communities in French, English and Spanish Wikipedias
- "A new editor wrote in an email that they perceived Wikipedia’s reliable source guidelines to have exclusionary features."
- "...community consensus is a foundational pillar of the Wikimedia movement. We learned trainers see this process as privileging those who participated first in the development of the encyclopedia’s editorial back channels. As well, the participants in our community conversations were uncomfortable with the presumption that agreement is communicated through silence, which privileges those who have the time and feel comfortable speaking up and participating in editorial conversations."
- "In English, contributors from English-speaking countries in Africa said their contributions often faced scrutiny. One organizer from an unnamed African country who participated in our session said when they hosted events, contributions were deleted en-mass for lacking reliability. This was demoralizing for the participants and required extra work by the trainers to stand up for their publications and citations, said one participant...To avoid new editors experiencing these disappointing responses, other trainers in English Wikipedia explained they would review sources before new editors begin editing."
- "...the quantity of material that a trainer is required to parse in relation to reliable source guidelines is immense. As one participant said:"
- "This bushy structure makes the guidelines pages unreadable. Who has the time and the meticulousness to read it completely without being lost to a certain point? It took me a decade to go through it [in French] and I must admit I’m not done yet!"
Ideas
- Ideas for where/how we might introduce this feedback/interaction: T95500#6873217
Related on-wiki tools
- Wikipedia:WikiProject Check Wikipedia
- Automatic edit type tagging via @Isaac.
- https://w.wiki/39ot
- CivilityCheck
- @ValeJappo's BOTutor0
- T288589: Create proof-of-concept script for warning wikitext editors after typing links to disambig pages
- en:Edit filter/Requested
- en:Edit filters
- mw:Edit notices
- Extension:AbuseFilter
- en:User:MusikBot/FilterMonitor/Recent changes
- en:Wikipedia:Text reactions
On-wiki documentation
- en:Mediawiki:Titleblacklist
- en:Mediawiki:Spam-blacklist
- en:Wikipedia:Spam#External link spamming
- Edit filters of cross-wiki interest
- pt:Global abuse filters
Related third-party tools
- On 31 March 2022, Google announced a series of "new assistive writing features" in Google Docs.
- https://languagetool.org/
- https://hemingwayapp.com/
- https://www.grammarly.com/
- https://www.perspectiveapi.com/#/home
Open questions
- 1. Responsibility: How much responsibility should the software lead people to think they have for "correcting" the issues within the content they're editing?
- 2. Authority: How might this capability shift editors' perception of who is the authority on what's "best"? Might this tool cause people to listen to the software more than to fellow editors?
- 3. Adaptability: How will this tool adapt/cope with the fact that policies and guidelines are constantly evolving?
- 4. Reverts: Might this capability be impactful for people whose edits have been reverted?
Note: I've added the above to T265163's newly created "Open questions" section.
i. https://superset.wikimedia.org/r/345
ii. Where "technical skills" could mean: adding text, formatting text, adding links, adding citations, publishing edits, etc.
iii. https://www.mediawiki.org/wiki/New_Editor_Experiences#Research_findings
iv. https://w.wiki/eMc
v. https://w.wiki/dSx
vi. https://w.wiki/dSv
vii. Perhaps the Research Team's "Thanks" research might be instructive here, for it explores how positive interactions/sentiment affect editing behavior: https://meta.wikimedia.org/wiki/Research:Understanding_thanks
viii. https://en.wikipedia.org/w/index.php?title=User_talk:Morgancurry0430&oldid=1038113933