Behavioral Use Licensing for Responsible AI | Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency
Abstract
With the growing reliance on artificial intelligence (AI) for many different applications, the sharing of code, data, and models is important to ensure the replicability and democratization of scientific knowledge. Many high-profile academic publishing venues expect code and models to be submitted and released with papers. Furthermore, developers often want to release these assets to encourage development of technology that leverages their frameworks and services. A number of organizations have expressed concerns about the inappropriate or irresponsible use of AI and have proposed ethical guidelines around the application of such systems. While such guidelines can help set norms and shape policy, they are not easily enforceable. In this paper, we advocate the use of licensing to enable legally enforceable behavioral use conditions on software and code and provide several case studies that demonstrate the feasibility of behavioral use licensing. We envision how licensing may be implemented in accordance with existing responsible AI guidelines.
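The abstract envisions legally enforceable behavioral-use conditions but does not prescribe an implementation. As a purely illustrative sketch (not from the paper — the metadata schema, field names, and license identifier below are all hypothetical), a model release could ship machine-readable restriction metadata that downstream tooling consults before use:

```python
import json

# Hypothetical machine-readable license metadata that a model release
# might ship alongside its weights. The schema, field names, and the
# license identifier are illustrative only, not part of any standard.
MODEL_METADATA = json.loads("""
{
    "model": "example-classifier",
    "license": "Hypothetical-Behavioral-Use-1.0",
    "restricted_uses": [
        "surveillance",
        "medical-diagnosis-without-human-review"
    ]
}
""")

def check_use_permitted(metadata: dict, intended_use: str) -> bool:
    """Return False if the stated intended use falls under a
    restriction declared in the model's license metadata."""
    return intended_use not in metadata.get("restricted_uses", [])

print(check_use_permitted(MODEL_METADATA, "research"))      # True
print(check_use_permitted(MODEL_METADATA, "surveillance"))  # False
```

Such a check is advisory — the legal force comes from the license text itself — but it shows how behavioral-use terms could be surfaced to tooling in the same way SPDX identifiers surface conventional license terms.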
Published In
FAccT '22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency
June 2022
2351 pages
Copyright © 2022 Owner/Author. This work is licensed under a Creative Commons Attribution 4.0 International License.
Publisher
Association for Computing Machinery
New York, NY, United States
Publication History
Published: 20 June 2022
Qualifiers
- Research-article
- Research
- Refereed limited