Mohammad Shoeybi
2020 – today
- 2024
- [c23] Ji Lin, Hongxu Yin, Wei Ping, Pavlo Molchanov, Mohammad Shoeybi, Song Han: VILA: On Pre-training for Visual Language Models. CVPR 2024: 26679-26689
- [c22] Peng Xu, Wei Ping, Xianchao Wu, Lawrence McAfee, Chen Zhu, Zihan Liu, Sandeep Subramanian, Evelina Bakhturina, Mohammad Shoeybi, Bryan Catanzaro: Retrieval meets Long Context Large Language Models. ICLR 2024
- [c21] Lichang Chen, Chen Zhu, Jiuhai Chen, Davit Soselia, Tianyi Zhou, Tom Goldstein, Heng Huang, Mohammad Shoeybi, Bryan Catanzaro: ODIN: Disentangled Reward Mitigates Hacking in RLHF. ICML 2024
- [c20] Boxin Wang, Wei Ping, Lawrence McAfee, Peng Xu, Bo Li, Mohammad Shoeybi, Bryan Catanzaro: InstructRetro: Instruction Tuning post Retrieval-Augmented Pretraining. ICML 2024
- [i44] Zihan Liu, Wei Ping, Rajarshi Roy, Peng Xu, Chankyu Lee, Mohammad Shoeybi, Bryan Catanzaro: ChatQA: Building GPT-4 Level Conversational QA Models. CoRR abs/2401.10225 (2024)
- [i43] Lichang Chen, Chen Zhu, Davit Soselia, Jiuhai Chen, Tianyi Zhou, Tom Goldstein, Heng Huang, Mohammad Shoeybi, Bryan Catanzaro: ODIN: Disentangled Reward Mitigates Hacking in RLHF. CoRR abs/2402.07319 (2024)
- [i42] Jupinder Parmar, Shrimai Prabhumoye, Joseph Jennings, Mostofa Patwary, Sandeep Subramanian, Dan Su, Chen Zhu, Deepak Narayanan, Aastha Jhunjhunwala, Ayush Dattagupta, Vibhu Jawa, Jiwei Liu, Ameya Mahabaleshwarkar, Osvald Nitski, Annika Brundyn, James Maki, Miguel Martinez, Jiaxuan You, John Kamalu, Patrick LeGresley, Denys Fridman, Jared Casper, Ashwath Aithal, Oleksii Kuchaiev, Mohammad Shoeybi, Jonathan M. Cohen, Bryan Catanzaro: Nemotron-4 15B Technical Report. CoRR abs/2402.16819 (2024)
- [i41] Chankyu Lee, Rajarshi Roy, Mengyao Xu, Jonathan Raiman, Mohammad Shoeybi, Bryan Catanzaro, Wei Ping: NV-Embed: Improved Techniques for Training LLMs as Generalist Embedding Models. CoRR abs/2405.17428 (2024)
- [i40] Roger Waleffe, Wonmin Byeon, Duncan Riach, Brandon Norick, Vijay Korthikanti, Tri Dao, Albert Gu, Ali Hatamizadeh, Sudhakar Singh, Deepak Narayanan, Garvit Kulshreshtha, Vartika Singh, Jared Casper, Jan Kautz, Mohammad Shoeybi, Bryan Catanzaro: An Empirical Study of Mamba-based Language Models. CoRR abs/2406.07887 (2024)
- [i39] Bo Adler, Niket Agarwal, Ashwath Aithal, Dong H. Anh, Pallab Bhattacharya, Annika Brundyn, Jared Casper, Bryan Catanzaro, Sharon Clay, Jonathan M. Cohen, Sirshak Das, Ayush Dattagupta, Olivier Delalleau, Leon Derczynski, Yi Dong, Daniel Egert, Ellie Evans, Aleksander Ficek, Denys Fridman, Shaona Ghosh, Boris Ginsburg, Igor Gitman, Tomasz Grzegorzek, Robert Hero, Jining Huang, Vibhu Jawa, Joseph Jennings, Aastha Jhunjhunwala, John Kamalu, Sadaf Khan, Oleksii Kuchaiev, Patrick LeGresley, Hui Li, Jiwei Liu, Zihan Liu, Eileen Long, Ameya Sunil Mahabaleshwarkar, Somshubra Majumdar, James Maki, Miguel Martinez, Maer Rodrigues de Melo, Ivan Moshkov, Deepak Narayanan, Sean Narenthiran, Jesus Navarro, Phong Nguyen, Osvald Nitski, Vahid Noroozi, Guruprasad Nutheti, Christopher Parisien, Jupinder Parmar, Mostofa Patwary, Krzysztof Pawelec, Wei Ping, Shrimai Prabhumoye, Rajarshi Roy, Trisha Saar, Vasanth Rao Naik Sabavat, Sanjeev Satheesh, Jane Polak Scowcroft, Jason Sewall, Pavel Shamis, Gerald Shen, Mohammad Shoeybi, Dave Sizer, Misha Smelyanskiy, Felipe Soares, Makesh Narsimhan Sreedhar, Dan Su, Sandeep Subramanian, Shengyang Sun, Shubham Toshniwal, Hao Wang, Zhilin Wang, Jiaxuan You, Jiaqi Zeng, Jimmy Zhang, Jing Zhang, Vivienne Zhang, Yian Zhang, Chen Zhu: Nemotron-4 340B Technical Report. CoRR abs/2406.11704 (2024)
- [i38] Yue Yu, Wei Ping, Zihan Liu, Boxin Wang, Jiaxuan You, Chao Zhang, Mohammad Shoeybi, Bryan Catanzaro: RankRAG: Unifying Context Ranking with Retrieval-Augmented Generation in LLMs. CoRR abs/2407.02485 (2024)
- [i37] Jupinder Parmar, Shrimai Prabhumoye, Joseph Jennings, Bo Li, Aastha Jhunjhunwala, Zhilin Wang, Mostofa Patwary, Mohammad Shoeybi, Bryan Catanzaro: Data, Data Everywhere: A Guide for Pretraining Dataset Construction. CoRR abs/2407.06380 (2024)
- [i36] Jupinder Parmar, Sanjeev Satheesh, Mostofa Patwary, Mohammad Shoeybi, Bryan Catanzaro: Reuse, Don't Retrain: A Recipe for Continued Pretraining of Language Models. CoRR abs/2407.07263 (2024)
- [i35] Peng Xu, Wei Ping, Xianchao Wu, Zihan Liu, Mohammad Shoeybi, Bryan Catanzaro: ChatQA 2: Bridging the Gap to Proprietary LLMs in Long Context and RAG Capabilities. CoRR abs/2407.14482 (2024)
- [i34] Saurav Muralidharan, Sharath Turuvekere Sreenivas, Raviraj Joshi, Marcin Chochowski, Mostofa Patwary, Mohammad Shoeybi, Bryan Catanzaro, Jan Kautz, Pavlo Molchanov: Compact Language Models via Pruning and Knowledge Distillation. CoRR abs/2407.14679 (2024)
- [i33] Sharath Turuvekere Sreenivas, Saurav Muralidharan, Raviraj Joshi, Marcin Chochowski, Mostofa Patwary, Mohammad Shoeybi, Bryan Catanzaro, Jan Kautz, Pavlo Molchanov: LLM Pruning and Distillation in Practice: The Minitron Approach. CoRR abs/2408.11796 (2024)
- [i32] Wenliang Dai, Nayeon Lee, Boxin Wang, Zhuoling Yang, Zihan Liu, Jon Barker, Tuomas Rintamaki, Mohammad Shoeybi, Bryan Catanzaro, Wei Ping: NVLM: Open Frontier-Class Multimodal LLMs. CoRR abs/2409.11402 (2024)
- 2023
- [c19] Dan Su, Mostofa Patwary, Shrimai Prabhumoye, Peng Xu, Ryan Prenger, Mohammad Shoeybi, Pascale Fung, Anima Anandkumar, Bryan Catanzaro: Context Generation Improves Open Domain Question Answering. EACL (Findings) 2023: 781-796
- [c18] Shrimai Prabhumoye, Mostofa Patwary, Mohammad Shoeybi, Bryan Catanzaro: Adding Instructions during Pretraining: Effective Way of Controlling Toxicity in Language Models. EACL 2023: 2628-2643
- [c17] Boxin Wang, Wei Ping, Peng Xu, Lawrence McAfee, Zihan Liu, Mohammad Shoeybi, Yi Dong, Oleksii Kuchaiev, Bo Li, Chaowei Xiao, Anima Anandkumar, Bryan Catanzaro: Shall We Pretrain Autoregressive Language Models with Retrieval? A Comprehensive Study. EMNLP 2023: 7763-7786
- [c16] Zhuolin Yang, Wei Ping, Zihan Liu, Vijay Korthikanti, Weili Nie, De-An Huang, Linxi Fan, Zhiding Yu, Shiyi Lan, Bo Li, Mohammad Shoeybi, Ming-Yu Liu, Yuke Zhu, Bryan Catanzaro, Chaowei Xiao, Anima Anandkumar: Re-ViLM: Retrieval-Augmented Visual Language Model for Zero and Few-Shot Image Captioning. EMNLP (Findings) 2023: 11844-11857
- [c15] Vijay Anand Korthikanti, Jared Casper, Sangkug Lym, Lawrence McAfee, Michael Andersch, Mohammad Shoeybi, Bryan Catanzaro: Reducing Activation Recomputation in Large Transformer Models. MLSys 2023
- [i31] Zhuolin Yang, Wei Ping, Zihan Liu, Vijay Korthikanti, Weili Nie, De-An Huang, Linxi Fan, Zhiding Yu, Shiyi Lan, Bo Li, Ming-Yu Liu, Yuke Zhu, Mohammad Shoeybi, Bryan Catanzaro, Chaowei Xiao, Anima Anandkumar: Re-ViLM: Retrieval-Augmented Visual Language Model for Zero and Few-Shot Image Captioning. CoRR abs/2302.04858 (2023)
- [i30] Shrimai Prabhumoye, Mostofa Patwary, Mohammad Shoeybi, Bryan Catanzaro: Adding Instructions during Pretraining: Effective Way of Controlling Toxicity in Language Models. CoRR abs/2302.07388 (2023)
- [i29] Boxin Wang, Wei Ping, Peng Xu, Lawrence McAfee, Zihan Liu, Mohammad Shoeybi, Yi Dong, Oleksii Kuchaiev, Bo Li, Chaowei Xiao, Anima Anandkumar, Bryan Catanzaro: Shall We Pretrain Autoregressive Language Models with Retrieval? A Comprehensive Study. CoRR abs/2304.06762 (2023)
- [i28] Jie Huang, Wei Ping, Peng Xu, Mohammad Shoeybi, Kevin Chen-Chuan Chang, Bryan Catanzaro: RAVEN: In-Context Learning with Retrieval Augmented Encoder-Decoder Language Models. CoRR abs/2308.07922 (2023)
- [i27] Peng Xu, Wei Ping, Xianchao Wu, Lawrence McAfee, Chen Zhu, Zihan Liu, Sandeep Subramanian, Evelina Bakhturina, Mohammad Shoeybi, Bryan Catanzaro: Retrieval meets Long Context Large Language Models. CoRR abs/2310.03025 (2023)
- [i26] Boxin Wang, Wei Ping, Lawrence McAfee, Peng Xu, Bo Li, Mohammad Shoeybi, Bryan Catanzaro: InstructRetro: Instruction Tuning post Retrieval-Augmented Pretraining. CoRR abs/2310.07713 (2023)
- [i25] Ji Lin, Hongxu Yin, Wei Ping, Yao Lu, Pavlo Molchanov, Andrew Tao, Huizi Mao, Jan Kautz, Mohammad Shoeybi, Song Han: VILA: On Pre-training for Visual Language Models. CoRR abs/2312.07533 (2023)
- 2022
- [c14] Zihan Liu, Mostofa Patwary, Ryan Prenger, Shrimai Prabhumoye, Wei Ping, Mohammad Shoeybi, Bryan Catanzaro: Multi-Stage Prompting for Knowledgeable Dialogue Generation. ACL (Findings) 2022: 1317-1337
- [c13] Peng Xu, Mostofa Patwary, Shrimai Prabhumoye, Virginia Adams, Ryan Prenger, Wei Ping, Nayeon Lee, Mohammad Shoeybi, Bryan Catanzaro: Evaluating Parameter Efficient Learning for Generation. EMNLP 2022: 4824-4833
- [c12] David Wingate, Mohammad Shoeybi, Taylor Sorensen: Prompt Compression and Contrastive Conditioning for Controllability and Toxicity Reduction in Language Models. EMNLP (Findings) 2022: 5621-5634
- [c11] Nayeon Lee, Wei Ping, Peng Xu, Mostofa Patwary, Pascale Fung, Mohammad Shoeybi, Bryan Catanzaro: Factuality Enhanced Language Models for Open-Ended Text Generation. NeurIPS 2022
- [c10] Boxin Wang, Wei Ping, Chaowei Xiao, Peng Xu, Mostofa Patwary, Mohammad Shoeybi, Bo Li, Anima Anandkumar, Bryan Catanzaro: Exploring the Limits of Domain-Adaptive Training for Detoxifying Large-Scale Language Models. NeurIPS 2022
- [i24] Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, Elton Zheng, Rewon Child, Reza Yazdani Aminabadi, Julie Bernauer, Xia Song, Mohammad Shoeybi, Yuxiong He, Michael Houston, Saurabh Tiwary, Bryan Catanzaro: Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model. CoRR abs/2201.11990 (2022)
- [i23] Boxin Wang, Wei Ping, Chaowei Xiao, Peng Xu, Mostofa Patwary, Mohammad Shoeybi, Bo Li, Anima Anandkumar, Bryan Catanzaro: Exploring the Limits of Domain-Adaptive Training for Detoxifying Large-Scale Language Models. CoRR abs/2202.04173 (2022)
- [i22] Zihan Liu, Mostofa Patwary, Ryan Prenger, Shrimai Prabhumoye, Wei Ping, Mohammad Shoeybi, Bryan Catanzaro: Multi-Stage Prompting for Knowledgeable Dialogue Generation. CoRR abs/2203.08745 (2022)
- [i21] Vijay Korthikanti, Jared Casper, Sangkug Lym, Lawrence McAfee, Michael Andersch, Mohammad Shoeybi, Bryan Catanzaro: Reducing Activation Recomputation in Large Transformer Models. CoRR abs/2205.05198 (2022)
- [i20] Nayeon Lee, Wei Ping, Peng Xu, Mostofa Patwary, Mohammad Shoeybi, Bryan Catanzaro: Factuality Enhanced Language Models for Open-Ended Text Generation. CoRR abs/2206.04624 (2022)
- [i19] Paulius Micikevicius, Dusan Stosic, Neil Burgess, Marius Cornea, Pradeep Dubey, Richard Grisenthwaite, Sangwon Ha, Alexander Heinecke, Patrick Judd, John Kamalu, Naveen Mellempudi, Stuart F. Oberman, Mohammad Shoeybi, Michael Y. Siu, Hao Wu: FP8 Formats for Deep Learning. CoRR abs/2209.05433 (2022)
- [i18] David Wingate, Mohammad Shoeybi, Taylor Sorensen: Prompt Compression and Contrastive Conditioning for Controllability and Toxicity Reduction in Language Models. CoRR abs/2210.03162 (2022)
- [i17] Dan Su, Mostofa Patwary, Shrimai Prabhumoye, Peng Xu, Ryan Prenger, Mohammad Shoeybi, Pascale Fung, Anima Anandkumar, Bryan Catanzaro: Context Generation Improves Open Domain Question Answering. CoRR abs/2210.06349 (2022)
- [i16] Peng Xu, Mostofa Patwary, Shrimai Prabhumoye, Virginia Adams, Ryan J. Prenger, Wei Ping, Nayeon Lee, Mohammad Shoeybi, Bryan Catanzaro: Evaluating Parameter Efficient Learning for Generation. CoRR abs/2210.13673 (2022)
- 2021
- [c9] Devendra Singh Sachan, Mostofa Patwary, Mohammad Shoeybi, Neel Kant, Wei Ping, William L. Hamilton, Bryan Catanzaro: End-to-End Training of Neural Retrievers for Open-Domain Question Answering. ACL/IJCNLP (1) 2021: 6648-6662
- [c8] Chen Zhu, Wei Ping, Chaowei Xiao, Mohammad Shoeybi, Tom Goldstein, Anima Anandkumar, Bryan Catanzaro: Long-Short Transformer: Efficient Transformers for Language and Vision. NeurIPS 2021: 17723-17736
- [c7] Deepak Narayanan, Mohammad Shoeybi, Jared Casper, Patrick LeGresley, Mostofa Patwary, Vijay Korthikanti, Dmitri Vainbrand, Prethvi Kashinkunti, Julie Bernauer, Bryan Catanzaro, Amar Phanishayee, Matei Zaharia: Efficient large-scale language model training on GPU clusters using Megatron-LM. SC 2021: 58
- [i15] Devendra Singh Sachan, Mostofa Patwary, Mohammad Shoeybi, Neel Kant, Wei Ping, William L. Hamilton, Bryan Catanzaro: End-to-End Training of Neural Retrievers for Open-Domain Question Answering. CoRR abs/2101.00408 (2021)
- [i14] Deepak Narayanan, Mohammad Shoeybi, Jared Casper, Patrick LeGresley, Mostofa Patwary, Vijay Korthikanti, Dmitri Vainbrand, Prethvi Kashinkunti, Julie Bernauer, Bryan Catanzaro, Amar Phanishayee, Matei Zaharia: Efficient Large-Scale Language Model Training on GPU Clusters. CoRR abs/2104.04473 (2021)
- [i13] Chen Zhu, Wei Ping, Chaowei Xiao, Mohammad Shoeybi, Tom Goldstein, Anima Anandkumar, Bryan Catanzaro: Long-Short Transformer: Efficient Transformers for Language and Vision. CoRR abs/2107.02192 (2021)
- [i12] Shrimai Prabhumoye, Rafal Kocielnik, Mohammad Shoeybi, Anima Anandkumar, Bryan Catanzaro: Few-shot Instruction Prompts for Pretrained Language Models to Detect Social Biases. CoRR abs/2112.07868 (2021)
- 2020
- [c6] Alex Boyd, Raul Puri, Mohammad Shoeybi, Mostofa Patwary, Bryan Catanzaro: Large Scale Multi-Actor Generative Dialog Modeling. ACL 2020: 66-84
- [c5] Peng Xu, Mostofa Patwary, Mohammad Shoeybi, Raul Puri, Pascale Fung, Anima Anandkumar, Bryan Catanzaro: MEGATRON-CNTRL: Controllable Story Generation with External Knowledge Using Large-Scale Language Models. EMNLP (1) 2020: 2831-2845
- [c4] Hoo-Chang Shin, Yang Zhang, Evelina Bakhturina, Raul Puri, Mostofa Patwary, Mohammad Shoeybi, Raghav Mani: BioMegatron: Larger Biomedical Domain Language Model. EMNLP (1) 2020: 4700-4706
- [c3] Raul Puri, Ryan Spring, Mohammad Shoeybi, Mostofa Patwary, Bryan Catanzaro: Training Question Answering Models From Synthetic Data. EMNLP (1) 2020: 5811-5826
- [i11] Raul Puri, Ryan Spring, Mostofa Patwary, Mohammad Shoeybi, Bryan Catanzaro: Training Question Answering Models From Synthetic Data. CoRR abs/2002.09599 (2020)
- [i10] Kuo-Hao Zeng, Mohammad Shoeybi, Ming-Yu Liu: Style Example-Guided Text Generation using Generative Adversarial Transformers. CoRR abs/2003.00674 (2020)
- [i9] Alex Boyd, Raul Puri, Mohammad Shoeybi, Mostofa Patwary, Bryan Catanzaro: Large Scale Multi-Actor Generative Dialog Modeling. CoRR abs/2005.06114 (2020)
- [i8] Peng Xu, Mostofa Patwary, Mohammad Shoeybi, Raul Puri, Pascale Fung, Anima Anandkumar, Bryan Catanzaro: MEGATRON-CNTRL: Controllable Story Generation with External Knowledge Using Large-Scale Language Models. CoRR abs/2010.00840 (2020)
- [i7] Hoo-Chang Shin, Yang Zhang, Evelina Bakhturina, Raul Puri, Mostofa Patwary, Mohammad Shoeybi, Raghav Mani: BioMegatron: Larger Biomedical Domain Language Model. CoRR abs/2010.06060 (2020)
- [i6] Sashank Santhanam, Wei Ping, Raul Puri, Mohammad Shoeybi, Mostofa Patwary, Bryan Catanzaro: Local Knowledge Powered Conversational Agents. CoRR abs/2010.10150 (2020)
2010 – 2019
- 2019
- [c2] Fitsum A. Reda, Deqing Sun, Aysegul Dundar, Mohammad Shoeybi, Guilin Liu, Kevin J. Shih, Andrew Tao, Jan Kautz, Bryan Catanzaro: Unsupervised Video Interpolation Using Cycle Consistency. ICCV 2019: 892-900
- [i5] Fitsum A. Reda, Deqing Sun, Aysegul Dundar, Mohammad Shoeybi, Guilin Liu, Kevin J. Shih, Andrew Tao, Jan Kautz, Bryan Catanzaro: Unsupervised Video Interpolation Using Cycle Consistency. CoRR abs/1906.05928 (2019)
- [i4] Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, Bryan Catanzaro: Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism. CoRR abs/1909.08053 (2019)
- [i3] Rafael Valle, Fitsum A. Reda, Mohammad Shoeybi, Patrick LeGresley, Andrew Tao, Bryan Catanzaro: Neural ODEs for Image Segmentation with Level Sets. CoRR abs/1912.11683 (2019)
- 2017
- [c1] Sercan Ömer Arik, Mike Chrzanowski, Adam Coates, Gregory Frederick Diamos, Andrew Gibiansky, Yongguo Kang, Xian Li, John Miller, Andrew Y. Ng, Jonathan Raiman, Shubho Sengupta, Mohammad Shoeybi: Deep Voice: Real-time Neural Text-to-Speech. ICML 2017: 195-204
- [i2] Sercan Ömer Arik, Mike Chrzanowski, Adam Coates, Greg Diamos, Andrew Gibiansky, Yongguo Kang, Xian Li, John Miller, Jonathan Raiman, Shubho Sengupta, Mohammad Shoeybi: Deep Voice: Real-time Neural Text-to-Speech. CoRR abs/1702.07825 (2017)
- [i1] Markus Kliegl, Siddharth Goyal, Kexin Zhao, Kavya Srinet, Mohammad Shoeybi: Trace norm regularization and faster inference for embedded speech recognition RNNs. CoRR abs/1710.09026 (2017)
- 2010
- [j3] Mohammad Shoeybi, Magnus Svärd, Frank E. Ham, Parviz Moin: An adaptive implicit-explicit scheme for the DNS and LES of compressible flows on unstructured grids. J. Comput. Phys. 229(17): 5944-5965 (2010)
2000 – 2009
- 2008
- [j2] Ken Mattsson, Magnus Svärd, Mohammad Shoeybi: Stable and accurate schemes for the compressible Navier-Stokes equations. J. Comput. Phys. 227(4): 2293-2316 (2008)
- 2006
- [j1] Jeremy A. Templeton, Mohammad Shoeybi: Towards Wall-Normal Filtering for Large-Eddy Simulation. Multiscale Model. Simul. 5(2): 420-444 (2006)