Yu Ding 0001
Person information
- affiliation: Netease Fuxi AI Lab, Hangzhou, China
- affiliation: University of Houston, Department of Computer Science, TX, USA
- affiliation (PhD 2013): TELECOM ParisTech, LTCI, France
Other persons with the same name
- Yu Ding — disambiguation page
- Yu Ding 0002 — Texas A&M University, Department of Industrial and Systems Engineering, College Station, TX, USA (and 1 more)
- Yu Ding 0003 — Beihang University, School of Reliability and Systems Engineering, Beijing, China
- Yu Ding 0004 — Concordia University, Montreal, QC, Canada
- Yu Ding 0005 — Texas A&M University, College Station, TX, USA
Other persons with a similar name
- Yu-Shin Ding
- Yue Ding — disambiguation page
- Yuning Ding (aka: Yu-ning Ding, Yu-Ning Ding) — disambiguation page
- Yuxin Ding (aka: Yu-Xin Ding)
- Yue Ding 0001 — Shanghai Jiao Tong University, Shanghai, China
- Yue Ding 0002 — Hangzhou Dianzi University, Hangzhou, China
- Yu-Ding Lu
- Dingyu Xue (aka: Dingyü Xue, Ding-Yu Xue) — Northeastern University, Shenyang, Liaoning, China
- Dingli Yu (aka: Ding-Li Yu, D. L. Yu) — Liverpool John Moores University, Liverpool, UK
- Dingwen Yuan (aka: Ding-Wen Yu)
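These disambiguated profiles can also be listed programmatically. Below is a minimal sketch against dblp's public author-search API; the endpoint and parameters are the documented ones, but the JSON field layout (result.hits.hit[].info) is an assumption based on current responses and may change.

```python
import json
import urllib.parse
import urllib.request

# Query dblp's public author-search API for all "Yu Ding" profiles.
# The response structure (result.hits.hit[].info) is an assumption
# based on current dblp JSON responses; adjust if the schema differs.
query = urllib.parse.urlencode({"q": "Yu Ding", "format": "json", "h": "30"})
url = f"https://dblp.org/search/author/api?{query}"

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

for hit in data["result"]["hits"].get("hit", []):
    info = hit["info"]
    # "author" is the display name (e.g. "Yu Ding 0001"); "url" is the
    # persistent profile URL, from which the PID can be read off.
    print(info["author"], "->", info["url"])
```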
SPARQL queries
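dblp also exposes its records as RDF. A minimal sketch of querying the dblp SPARQL service for this person's publications follows; it assumes the endpoint https://sparql.dblp.org/sparql and property names from the dblp RDF schema (dblp:authoredBy, dblp:title, dblp:yearOfPublication), and uses a placeholder person IRI, since the numeric PID is not shown on this page — read the real one off the profile URL before running.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical person IRI: replace XX/XXXX with the author's real dblp PID.
PERSON = "https://dblp.org/pid/XX/XXXX"

# Property names follow the dblp RDF schema (https://dblp.org/rdf/schema#);
# verify them against the published schema if the endpoint rejects the query.
sparql = f"""
PREFIX dblp: <https://dblp.org/rdf/schema#>
SELECT ?title ?year WHERE {{
  ?pub dblp:authoredBy <{PERSON}> ;
       dblp:title ?title ;
       dblp:yearOfPublication ?year .
}}
ORDER BY DESC(?year)
"""

# Endpoint URL is an assumption about dblp's SPARQL service location.
url = "https://sparql.dblp.org/sparql?" + urllib.parse.urlencode({"query": sparql})
req = urllib.request.Request(
    url, headers={"Accept": "application/sparql-results+json"}
)
with urllib.request.urlopen(req) as resp:
    results = json.load(resp)

# Standard W3C SPARQL JSON results layout: results.bindings[*][var].value
for row in results["results"]["bindings"]:
    print(row["year"]["value"], row["title"]["value"])
```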
2020 – today
- 2024
- [j19] Rudong An, Aobo Jin, Wei Chen, Wei Zhang, Hao Zeng, Zhigang Deng, Yu Ding: Learning facial expression-aware global-to-local representation for robust action unit detection. Appl. Intell. 54(2): 1405-1425 (2024)
- [j18] Jiacheng Zhu, Yu Ding, Hanwei Liu, Keyu Chen, Zhanpeng Lin, Wenxing Hong: Emotion knowledge-based fine-grained facial expression recognition. Neurocomputing 610: 128536 (2024)
- [j17] Zhipeng Hu, Feng Qiu, Haodong Sun, Wei Zhang, Yu Ding, Tangjie Lv, Changjie Fan: Learning a compact embedding for fine-grained few-shot static gesture recognition. Multim. Tools Appl. 83(33): 79009-79028 (2024)
- [j16] Suzhen Wang, Yifeng Ma, Yu Ding, Zhipeng Hu, Changjie Fan, Tangjie Lv, Zhidong Deng, Xin Yu: StyleTalk++: A Unified Framework for Controlling the Speaking Styles of Talking Heads. IEEE Trans. Pattern Anal. Mach. Intell. 46(6): 4331-4347 (2024)
- [j15] Bowen Ma, Rudong An, Wei Zhang, Yu Ding, Zeng Zhao, Rongsheng Zhang, Tangjie Lv, Changjie Fan, Zhipeng Hu: Facial Action Unit Detection and Intensity Estimation From Self-Supervised Representation. IEEE Trans. Affect. Comput. 15(3): 1669-1683 (2024)
- [j14] Wei Zhang, Lincheng Li, Yu Ding, Wei Chen, Zhigang Deng, Xin Yu: Detecting Facial Action Units From Global-Local Fine-Grained Expressions. IEEE Trans. Circuits Syst. Video Technol. 34(2): 983-994 (2024)
- [j13] Jiajia Tang, Yutao Yang, Qibin Zhao, Yu Ding, Jianhai Zhang, Yang Song, Wanzeng Kong: Visual-Guided Dual-Spatial Interaction Network for Fine-Grained Brain Semantic Decoding. IEEE Trans. Instrum. Meas. 73: 1-14 (2024)
- [c43] Renshuai Liu, Bowen Ma, Wei Zhang, Zhipeng Hu, Changjie Fan, Tangjie Lv, Yu Ding, Xuan Cheng: Towards a Simultaneous and Granular Identity-Expression Control in Personalized Face Generation. CVPR 2024: 2114-2123
- [i24] Renshuai Liu, Bowen Ma, Wei Zhang, Zhipeng Hu, Changjie Fan, Tangjie Lv, Yu Ding, Xuan Cheng: Towards a Simultaneous and Granular Identity-Expression Control in Personalized Face Generation. CoRR abs/2401.01207 (2024)
- [i23] Hanwei Liu, Rudong An, Zhimeng Zhang, Bowen Ma, Wei Zhang, Yan Song, Yujing Hu, Wei Chen, Yu Ding: Norface: Improving Facial Expression Analysis by Identity Normalization. CoRR abs/2407.15617 (2024)
- [i22] Suzhen Wang, Yifeng Ma, Yu Ding, Zhipeng Hu, Changjie Fan, Tangjie Lv, Zhidong Deng, Xin Yu: StyleTalk++: A Unified Framework for Controlling the Speaking Styles of Talking Heads. CoRR abs/2409.09292 (2024)
- [i21] Feng Qiu, Wei Zhang, Chen Liu, Rudong An, Lincheng Li, Yu Ding, Changjie Fan, Zhipeng Hu, Xin Yu: FreeAvatar: Robust 3D Facial Animation Transfer by Learning an Expression Foundation Model. CoRR abs/2409.13180 (2024)
- 2023
- [j12] Zhipeng Hu, Yu Ding, Runze Wu, Lincheng Li, Rongsheng Zhang, Yujing Hu, Feng Qiu, Zhimeng Zhang, Kai Wang, Shiwei Zhao, Yongqiang Zhang, Ji Jiang, Yadong Xi, Jiashu Pu, Wei Zhang, Suzhen Wang, Ke Chen, Tianze Zhou, Jiarui Chen, Yan Song, Tangjie Lv, Changjie Fan: Deep learning applications in games: a survey from a data perspective. Appl. Intell. 53(24): 31129-31164 (2023)
- [j11] Hao Zeng, Wei Zhang, Keyu Chen, Zhimeng Zhang, Lincheng Li, Yu Ding: Face identity and expression consistency for game character face swapping. Comput. Vis. Image Underst. 236: 103806 (2023)
- [j10] Menghang Li, Min Qiu, Wanzeng Kong, Li Zhu, Yu Ding: Fusion Graph Representation of EEG for Emotion Recognition. Sensors 23(3): 1404 (2023)
- [j9] Jiajia Tang, Dongjun Liu, Xuanyu Jin, Yong Peng, Qibin Zhao, Yu Ding, Wanzeng Kong: BAFN: Bi-Direction Attention Based Fusion Network for Multimodal Sentiment Analysis. IEEE Trans. Circuits Syst. Video Technol. 33(4): 1966-1978 (2023)
- [j8] Jiali Chen, Changjie Fan, Zhimeng Zhang, Gongzheng Li, Zeng Zhao, Zhigang Deng, Yu Ding: A Music-Driven Deep Generative Adversarial Model for Guzheng Playing Animation. IEEE Trans. Vis. Comput. Graph. 29(2): 1400-1414 (2023)
- [j7] Ye Pan, Ruisi Zhang, Shengran Cheng, Shuai Tan, Yu Ding, Kenny Mitchell, Xubo Yang: Emotional Voice Puppetry. IEEE Trans. Vis. Comput. Graph. 29(5): 2527-2535 (2023)
- [c42] Yifeng Ma, Suzhen Wang, Zhipeng Hu, Changjie Fan, Tangjie Lv, Yu Ding, Zhidong Deng, Xin Yu: StyleTalk: One-Shot Talking Head Generation with Controllable Speaking Styles. AAAI 2023: 1896-1904
- [c41] Hao Zeng, Wei Zhang, Changjie Fan, Tangjie Lv, Suzhen Wang, Zhimeng Zhang, Bowen Ma, Lincheng Li, Yu Ding, Xin Yu: FlowFace: Semantic Flow-Guided Shape-Aware Face Swapping. AAAI 2023: 3367-3375
- [c40] Zhimeng Zhang, Zhipeng Hu, Wenjin Deng, Changjie Fan, Tangjie Lv, Yu Ding: DINet: Deformation Inpainting Network for Realistic Face Visually Dubbing on High Resolution Video. AAAI 2023: 3543-3551
- [c39] Feng Qiu, Bowen Ma, Wei Zhang, Yu Ding: Multi-modal Emotion Reaction Intensity Estimation with Temporal Augmentation. CVPR Workshops 2023: 5777-5784
- [c38] Wei Zhang, Bowen Ma, Feng Qiu, Yu Ding: Multi-modal Facial Affective Analysis based on Masked Autoencoder. CVPR Workshops 2023: 5793-5802
- [c37] Bowen Ma, Wei Zhang, Feng Qiu, Yu Ding: A Unified Approach to Facial Affect Analysis: the MAE-Face Visual Representation. CVPR Workshops 2023: 5924-5933
- [c36] Suzhen Wang, Yifeng Ma, Yu Ding: Exploring Complementary Features in Multi-Modal Speech Emotion Recognition. ICASSP 2023: 1-5
- [c35] Ye Pan, Ruisi Zhang, Jingying Wang, Yu Ding, Kenny Mitchell: Real-time Facial Animation for 3D Stylized Character with Emotion Dynamics. ACM Multimedia 2023: 6851-6859
- [c34] Jingying Wang, Yilin Qiu, Keyu Chen, Yu Ding, Ye Pan: Fully Automatic Blendshape Generation for Stylized Characters. VR 2023: 347-355
- [d1] Keyu Chen, Changjie Fan, Wei Zhang, Yu Ding: 135-class Emotional Facial Expression Dataset. IEEE DataPort, 2023
- [i20] Yifeng Ma, Suzhen Wang, Zhipeng Hu, Changjie Fan, Tangjie Lv, Yu Ding, Zhidong Deng, Xin Yu: StyleTalk: One-shot Talking Head Generation with Controllable Speaking Styles. CoRR abs/2301.01081 (2023)
- [i19] Jinming Ma, Feng Wu, Yingfeng Chen, Xianpeng Ji, Yu Ding: Effective Multimodal Reinforcement Learning with Modality Alignment and Importance Enhancement. CoRR abs/2302.09318 (2023)
- [i18] Zhimeng Zhang, Zhipeng Hu, Wenjin Deng, Changjie Fan, Tangjie Lv, Yu Ding: DINet: Deformation Inpainting Network for Realistic Face Visually Dubbing on High Resolution Video. CoRR abs/2303.03988 (2023)
- [i17] Wei Zhang, Bowen Ma, Feng Qiu, Yu Ding: Multi-modal Facial Affective Analysis based on Masked Autoencoder. CoRR abs/2303.10849 (2023)
- [i16] Yifeng Ma, Suzhen Wang, Yu Ding, Bowen Ma, Tangjie Lv, Changjie Fan, Zhipeng Hu, Zhidong Deng, Xin Yu: TalkCLIP: Talking Head Generation with Text-Guided Expressive Speaking Styles. CoRR abs/2304.00334 (2023)
- [i15] Yu Zhang, Hao Zeng, Bowen Ma, Wei Zhang, Zhimeng Zhang, Yu Ding, Tangjie Lv, Changjie Fan: FlowFace++: Explicit Semantic Flow-supervised End-to-End Face Swapping. CoRR abs/2306.12686 (2023)
- 2022
- [j6] Keyu Chen, Xu Yang, Changjie Fan, Wei Zhang, Yu Ding: Semantic-Rich Facial Emotional Expression Recognition. IEEE Trans. Affect. Comput. 13(4): 1906-1916 (2022)
- [c33] Suzhen Wang, Lincheng Li, Yu Ding, Xin Yu: One-Shot Talking Face Generation from Single-Speaker Audio-Visual Correlation Learning. AAAI 2022: 2531-2539
- [c32] Chuang Zhao, Hongke Zhao, Runze Wu, Qilin Deng, Yu Ding, Jianrong Tao, Changjie Fan: Multi-Dimensional Prediction of Guild Health in Online Games: A Stability-Aware Multi-Task Learning Approach. AAAI 2022: 4371-4378
- [c31] Jinming Ma, Yingfeng Chen, Feng Wu, Xianpeng Ji, Yu Ding: Multimodal Reinforcement Learning with Effective State Representation Learning. AAMAS 2022: 1684-1686
- [c30] Hao Zeng, Wei Zhang, Keyu Chen, Zhimeng Zhang, Lincheng Li, Yu Ding: Paste You Into Game: Towards Expression and Identity Consistency Face Swapping. CoG 2022: 1-8
- [c29] Wei Zhang, Feng Qiu, Suzhen Wang, Hao Zeng, Zhimeng Zhang, Rudong An, Bowen Ma, Yu Ding: Transformer-based Multimodal Information Fusion for Facial Expression Analysis. CVPR Workshops 2022: 2427-2436
- [c28] Jiajia Tang, Kang Li, Ming Hou, Xuanyu Jin, Wanzeng Kong, Yu Ding, Qibin Zhao: MMT: Multi-way Multi-modal Transformer for Multimodal Learning. IJCAI 2022: 3458-3465
- [c27] Zhimeng Zhang, Yu Ding: Adaptive Affine Transformation: A Simple and Effective Operation for Spatial Misaligned Image Generation. ACM Multimedia 2022: 1167-1176
- [c26] Jiwei Guo, Jiajia Tang, Weichen Dai, Yu Ding, Wanzeng Kong: Dynamically Adjust Word Representations Using Unaligned Multimodal Information. ACM Multimedia 2022: 3394-3402
- [c25] Ye Pan, Ruisi Zhang, Jingying Wang, Nengfu Chen, Yilin Qiu, Yu Ding, Kenny Mitchell: MienCap: Performance-based Facial Animation with Live Mood Dynamics. VR Workshops 2022: 654-655
- [i14] Wei Zhang, Zhimeng Zhang, Feng Qiu, Suzhen Wang, Bowen Ma, Hao Zeng, Rudong An, Yu Ding: Transformer-based Multimodal Information Fusion for Facial Expression Analysis. CoRR abs/2203.12367 (2022)
- [i13] Zhipeng Hu, Wei Zhang, Lincheng Li, Yu Ding, Wei Chen, Zhigang Deng, Xin Yu: Facial Action Units Detection Aided by Global-Local Expression Embedding. CoRR abs/2210.13718 (2022)
- [i12] Rudong An, Wei Zhang, Hao Zeng, Wei Chen, Zhigang Deng, Yu Ding: Global-to-local Expression-aware Embeddings for Facial Action Unit Detection. CoRR abs/2210.15160 (2022)
- [i11] Bowen Ma, Rudong An, Wei Zhang, Yu Ding, Zeng Zhao, Rongsheng Zhang, Tangjie Lv, Changjie Fan, Zhipeng Hu: Facial Action Unit Detection and Intensity Estimation from Self-supervised Representation. CoRR abs/2210.15878 (2022)
- [i10] Hao Zeng, Wei Zhang, Changjie Fan, Tangjie Lv, Suzhen Wang, Zhimeng Zhang, Bowen Ma, Lincheng Li, Yu Ding, Xin Yu: FlowFace: Semantic Flow-guided Shape-aware Face Swapping. CoRR abs/2212.02797 (2022)
- [i9] Feng Qiu, Chengyang Xie, Yu Ding, Wanzeng Kong: EffMulti: Efficiently Modeling Complex Multimodal Interactions for Emotion Analysis. CoRR abs/2212.08661 (2022)
- [i8] Pengfei Xi, Guifeng Wang, Zhipeng Hu, Yu Xiong, Mingming Gong, Wei Huang, Runze Wu, Yu Ding, Tangjie Lv, Changjie Fan, Xiangnan Feng: TCFimt: Temporal Counterfactual Forecasting from Individual Multiple Treatment Perspective. CoRR abs/2212.08890 (2022)
- [i7] Feng Qiu, Wanzeng Kong, Yu Ding: InterMulti: Multi-view Multimodal Interactions with Text-dominated Hierarchical High-order Fusion for Emotion Analysis. CoRR abs/2212.10030 (2022)
- 2021
- [j5] Chi Zhou, Zhangjiong Lai, Suzhen Wang, Lincheng Li, Xiaohan Sun, Yu Ding: Learning a deep motion interpolation network for human skeleton animations. Comput. Animat. Virtual Worlds 32(3-4) (2021)
- [c24] Lincheng Li, Suzhen Wang, Zhimeng Zhang, Yu Ding, Yixing Zheng, Xin Yu, Changjie Fan: Write-a-speaker: Text-based Emotional and Rhythmic Talking-head Generation. AAAI 2021: 1911-1920
- [c23] Zhimeng Zhang, Lincheng Li, Yu Ding, Changjie Fan: Flow-Guided One-Shot Talking Face Generation With a High-Resolution Audio-Visual Dataset. CVPR 2021: 3661-3670
- [c22] Wei Zhang, Xianpeng Ji, Keyu Chen, Yu Ding, Changjie Fan: Learning a Facial Expression Embedding Disentangled From Identity. CVPR 2021: 6759-6768
- [c21] Wei Zhang, Zunhu Guo, Keyu Chen, Lincheng Li, Zhimeng Zhang, Yu Ding, Runze Wu, Tangjie Lv, Changjie Fan: Prior Aided Streaming Network for Multi-task Affective Analysis. ICCVW 2021: 3532-3542
- [c20] Suzhen Wang, Lincheng Li, Yu Ding, Changjie Fan, Xin Yu: Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion. IJCAI 2021: 1098-1105
- [c19] Qilin Deng, Kai Wang, Minghao Zhao, Runze Wu, Yu Ding, Zhene Zou, Yue Shang, Jianrong Tao, Changjie Fan: Build Your Own Bundle - A Neural Combinatorial Optimization Method. ACM Multimedia 2021: 2625-2633
- [i6] Lilin Cheng, Suzhe Wang, Zhimeng Zhang, Yu Ding, Yixing Zheng, Xin Yu, Changjie Fan: Write-a-speaker: Text-based Emotional and Rhythmic Talking-head Generation. CoRR abs/2104.07995 (2021)
- [i5] Wei Zhang, Zunhu Guo, Keyu Chen, Lincheng Li, Zhimeng Zhang, Yu Ding: Prior Aided Streaming Network for Multi-task Affective Recognition at the 2nd ABAW2 Competition. CoRR abs/2107.03708 (2021)
- [i4] Suzhen Wang, Lincheng Li, Yu Ding, Changjie Fan, Xin Yu: Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion. CoRR abs/2107.09293 (2021)
- [i3] Suzhen Wang, Lincheng Li, Yu Ding, Xin Yu: One-shot Talking Face Generation from Single-speaker Audio-Visual Correlation Learning. CoRR abs/2112.02749 (2021)
- 2020
- [j4] Yu Ding, Lei Shi, Zhigang Deng: Low-Level Characterization of Expressive Head Motion Through Frequency Domain Analysis. IEEE Trans. Affect. Comput. 11(3): 405-418 (2020)
- [c18] Jiangning Zhang, Xianfang Zeng, Mengmeng Wang, Yusu Pan, Liang Liu, Yong Liu, Yu Ding, Changjie Fan: FReeNet: Multi-Identity Face Reenactment. CVPR 2020: 5325-5334
- [c17] Ruobai Wang, Yu Ding, Lincheng Li, Changjie Fan: One-Shot Voice Conversion Using Star-Gan. ICASSP 2020: 7729-7733
- [i2] Xianpeng Ji, Yu Ding, Lincheng Li, Yu Chen, Changjie Fan: Multi-label Relation Modeling in Facial Action Units Detection. CoRR abs/2002.01105 (2020)
2010 – 2019
- 2019
- [c16] Jiali Chen, Yong Liu, Zhimeng Zhang, Changjie Fan, Yu Ding: Text-driven Visual Prosody Generation for Embodied Conversational Agents. IVA 2019: 108-110
- [i1] Jiangning Zhang, Xianfang Zeng, Yusu Pan, Yong Liu, Yu Ding, Changjie Fan: FaceSwapNet: Landmark Guided Many-to-Many Face Reenactment. CoRR abs/1905.11805 (2019)
- 2017
- [j3] Yu Ding, Jing Huang, Catherine Pelachaud: Audio-Driven Laughter Behavior Controller. IEEE Trans. Affect. Comput. 8(4): 546-558 (2017)
- [j2] Maurizio Mancini, Béatrice Biancardi, Florian Pecune, Giovanna Varni, Yu Ding, Catherine Pelachaud, Gualtiero Volpe, Antonio Camurri: Implementing and Evaluating a Laughing Virtual Character. ACM Trans. Internet Techn. 17(1): 3:1-3:22 (2017)
- [j1] Jing Huang, Marco Fratarcangeli, Yu Ding, Catherine Pelachaud: Inverse kinematics using dynamic joint parameters: inverse kinematics animation synthesis learnt from sub-divided motion micro-segments. Vis. Comput. 33(12): 1541-1553 (2017)
- [c15] Yu Ding, Lei Shi, Zhigang Deng: Perceptual enhancement of emotional mocap head motion: An experimental study. ACII 2017: 242-247
- [c14] Yu Ding, Yuting Zhang, Meihua Xiao, Zhigang Deng: A Multifaceted Study on Eye Contact based Speaker Identification in Three-party Conversations. CHI 2017: 3011-3021
- 2016
- [c13] Qi Wang, Thierry Artières, Yu Ding: Learning Activity Patterns Performed With Emotion. MOCO 2016: 37:1-37:4
- 2015
- [c12] Florian Pecune, Béatrice Biancardi, Yu Ding, Catherine Pelachaud, Maurizio Mancini, Giovanna Varni, Antonio Camurri, Gualtiero Volpe: LOL - Laugh Out Loud. AAAI 2015: 4309-4310
- [c11] Radoslaw Niewiadomski, Yu Ding, Maurizio Mancini, Catherine Pelachaud, Gualtiero Volpe, Antonio Camurri: Perception of intensity incongruence in synthesized multimodal expressions of laughter. ACII 2015: 684-690
- [c10] Florian Pecune, Maurizio Mancini, Béatrice Biancardi, Giovanna Varni, Yu Ding, Catherine Pelachaud, Gualtiero Volpe, Antonio Camurri: Laughing with a Virtual Agent. AAMAS 2015: 1817-1818
- [c9] Yu Ding, Catherine Pelachaud: Lip animation synthesis: a unified framework for speaking and laughing virtual agent. AVSP 2015: 78-83
- [c8] Herwin van Welbergen, Yu Ding, Kai Sattler, Catherine Pelachaud, Stefan Kopp: Real-Time Visual Prosody for Interactive Virtual Agents. IVA 2015: 139-151
- 2014
- [b1] Yu Ding: Data-driven expressive animation model of speech and laughter for an embodied conversational agent (original French title: Modèle statistique de l'animation expressive de la parole et du rire pour un agent conversationnel animé). Télécom ParisTech, France, 2014
- [c7] Yu Ding, Ken Prepin, Jing Huang, Catherine Pelachaud, Thierry Artières: Laughter animation synthesis. AAMAS 2014: 773-780
- [c6] Radoslaw Niewiadomski, Maurizio Mancini, Yu Ding, Catherine Pelachaud, Gualtiero Volpe: Rhythmic Body Movements of Laughter. ICMI 2014: 299-306
- [c5] Yu Ding, Jing Huang, Nesrine Fourati, Thierry Artières, Catherine Pelachaud: Upper Body Animation Synthesis for a Laughing Character. IVA 2014: 164-173
- 2013
- [c4] Yu Ding, Mathieu Radenen, Thierry Artières, Catherine Pelachaud: Speech-driven eyebrow motion synthesis with contextual Markovian models. ICASSP 2013: 3756-3760
- [c3] Maurizio Mancini, Laurent Ach, Emeline Bantegnie, Tobias Baur, Nadia Berthouze, Debajyoti Datta, Yu Ding, Stéphane Dupont, Harry J. Griffin, Florian Lingenfelser, Radoslaw Niewiadomski, Catherine Pelachaud, Olivier Pietquin, Bilal Piot, Jérôme Urbain, Gualtiero Volpe, Johannes Wagner: Laugh When You're Winning. eNTERFACE 2013: 50-79
- [c2] Magalie Ochs, Yu Ding, Nesrine Fourati, Mathieu Chollet, Brian Ravenet, Florian Pecune, Nadine Glas, Ken Prepin, Chloé Clavel, Catherine Pelachaud: Towards Socio-Affective Embodied Conversational Agents (original French title: Vers des Agents Conversationnels Animés Socio-Affectifs). IHM 2013: 69-78
- [c1] Yu Ding, Catherine Pelachaud, Thierry Artières: Modeling Multimodal Behaviors from Speech Prosody. IVA 2013: 217-228