ServiceNow Research’s cover photo
ServiceNow Research

ServiceNow Research

Research Services

Montreal, Quebec 45,107 followers

Unlock work experiences of the future. Follow ServiceNow Research as we advance the state-of-the-art in Enterprise AI.

About us

ServiceNow (NYSE: NOW) makes the world work better for everyone. Our cloud-based platform and solutions help digitize and unify organizations so that they can find smarter, faster, better ways to make work flow. So employees and customers can be more connected, more innovative, and more agile. And we can all create the future we imagine. The world works with ServiceNow. For more information, visit www.servicenow.com. ServiceNow Research, part of the Advanced Technology Group at ServiceNow, advances the state-of-the-art in Enterprise AI. In equal measure, we innovate, research, experiment, and mature AI technologies that create compelling user experiences of the future so that every ServiceNow user benefits from AI. We believe AI should be built responsibly, without compromise to fairness, ethics, accountability, and transparency. ServiceNow Research programs drive innovation into Low Data Learning, Human Decision Support, and Human-Machine Interaction Through Language. Low Data Learning studies Machine Learning methods that enable adapting efficiently to varied and changing datasets. ServiceNow Research focuses on a wide range of downstream tasks such as language understanding, computer vision, robotic automation, and learning workflows. ServiceNow Research for Human Decision Support aims to assist decision-makers and increase customer productivity 1) Directing requests to the right sequence of decision-makers 2) Presenting appropriate information to aid decisions 3) Suggesting possible courses of action. ServiceNow Research studies in Human-Machine Interaction Through Language advance AI technology to enable the next generation of Language User Interfaces with a focus on natural conversational human-computer interactions and AI-assisted programming. The AI Trust and Governance Lab guides ServiceNow and its customers in their AI strategy and deployment via governance frameworks and applied research in trustworthiness. We’re hiring!

Industry
Research Services
Company size
10,001+ employees
Headquarters
Montreal, Quebec
Type
Public Company
Specialties
Artificial Intelligence, Machine Learning, Software, Operations Research, Natural Language Processing, Neural Network, Research, Deep Learning, Computer Vision, Climate Change, Trustworthy AI, Responsible AI, AI for Good, Natural Language Understanding, Human Decision Support, Computer Science, Funadamental Research, Applied Research, Research Transfer, Data Science, Reinforcement Learning, and Research Collaboration

Locations

Employees at ServiceNow Research

Updates

  • ServiceNow Research reposted this

    View profile for Nicolas Gontier

    Research Scientist @ ServiceNow Research

    🚀 New Research Internship Opportunity ServiceNow Research! 🌟 Location: Montreal 📅 Start Date: April 2025 Join us in building the next generation of web agents that can reason about their actions on public (WorkArena, WorkArena++, and WebArena) and internal benchmarks by comparing the effectiveness of model-based vs environment-based state prediction during action sequence planning and training an agent with offline (warmup supervised finetuning) and online data (GRPO). Collaborate on cutting-edge projects, refining generalist agents to solve complex browser-based tasks—from data analytics to research automation. What We’re Looking For ✅ Strong knowledge of large language models (LLMs). ✅ Experience in Reinforcement Learning algorithms. ✅ ML research experience. 📌 Apply here to kickstart your journey: https://lnkd.in/eRAQxigs ✨ Don’t miss this chance to join a passionate research team shaping the future of LLM Agent research!

  • ServiceNow Research reposted this

    The excitement around AI was truly palpable at NVIDIA GTC this week – thrilled to have had the opportunity to attend and present! On Tuesday, ServiceNow and NVIDIA announced an expansion to our partnership to advance our next-level Agentic AI capabilities. Together, we’re optimizing AI agent deployment with new evaluation tools and the integration of NVIDIA Llama Nemotron reasoning models with the ServiceNow Platform. With clear visibility and improved reasoning, businesses can confidently scale AI agents... which means reliable automation and smarter workflows starting pre-deployment! I also got a chance to speak on 2 panels: 1️⃣ State of Agentic AI Reasoning in the Enterprise led by Kari Ann Briski and alongside Lan Guan, Mike Hulme, Dr. Walter Sun. I was excited to share that our AI Agents generate over $350M in annualized value internally by improving speed, productivity, and deflection across various use cases. And it was great to learn from my fellow panelists that AI Agents are driving productivity across ALL industries. 2️⃣ Harnessing AI Agents for Enterprise Success led by Rama Akkiraju and alongside Clara ShihRajendra Prasad (RP)Raji Rajagopalan. This panel highlighted how AI Agents are becoming prevalent across enterprise as well as consumer applications. Beyond the honor of speaking alongside some of today’s greatest leaders in Agentic AI, it was a personal privilege to stand on stage with such remarkable women who are breaking boundaries and leading the Agentic AI revolution. I also enjoyed hosting a session alongside my colleagues, Nicolas Chapados and Greg George, on how ServiceNow advances the industry beyond prompting to agents that can comprehend and interpret, break down complex tasks, prioritize, plan, and then execute to achieve desired results with built-in guardrails. You know you had a great presentation when the first question from the audience is “how can I buy this?” 😉 You can watch our session here: https://lnkd.in/er2mxPYg My biggest takeaway from GTC was the emphasis on AI systems that can ‘think fast’ for rapid responses and ‘think slow’ for complex problem-solving, underscoring the need for architectures that balance speed with thoughtful deliberation Safe to say it’s been a busy week! Already looking forward to next year. Big thanks to Nima BadieyLaurie MuhlbachJacqueline VelascoMilind Kukanur and the rest of #ServiceNow and #Nvidia teams for the incredible partnership and for making this an event to remember! #PutAIToWorkForPeople #GTC25  

    • No alternative text description for this image
    • No alternative text description for this image
    • No alternative text description for this image
  • ServiceNow Research reposted this

    View profile for Jason Stanley

    Head of AI Research Deployment @ ServiceNow

    New research shows that, with just a small number of HARMLESS examples, we can fine-tune leading AI models out of their safety alignment in a way that current defences fail to catch. This was a collaboration that ServiceNow Research helped drive and that led to a bug bounty from OpenAI. ❓Why it matters -- leading AI companies sometimes make available API endpoints for custom fine-tuning of their models. They put defences in place, and their base models are safety aligned (to different degrees). But we show here that this novel attack strategy fuelled by seemingly innocent examples can undo alignment. Past research has mostly focused on leveraging the helpfulness of models, tricking the model to start answering user queries (including unsafe ones) in a helpful way (e.g., sure I'm happy to help!), knowing that the model is likely to follow through on answering the user request, even if unsafe. However, these kinds of attacks are easy to block. Our research does something different. Rather than training models to respond with helpful prefixes, we trained them to first REFUSE benign requests and then answer these requests anyway. Despite all training examples being benign, the model learns the head-fake and generalizes it to unsafe queries. 🔥 As our findings show, this attack strategy has a high ASR (attack success rate) against leading models, **even in the presence of leading guardrails.** Research like this is crucial for building trustworthy AI and deploying it into contexts with higher and higher stakes, like the ones we encounter every day working with the largest enterprises and organizations in the world. Links to the paper, blog post and code and data in the thread below. #artificialintelligence #aisafety #aisecurity #trustworthyai #ai ServiceNow Krishnamurthy Dvijotham Sanmi Koyejo Rylan Schaeffer Chris Cundy Joshua Kazdan

    • No alternative text description for this image
  • ServiceNow Research reposted this

    View profile for Jason Stanley

    Head of AI Research Deployment @ ServiceNow

    DoomArena -- our framework for much stronger, more grounded security testing for AI agents at ServiceNow Research. Closing out the week at NVIDIA GTC listening to my colleague Krishnamurthy Dvijotham give an overview of the framework we're building and will release to the world to facilitate granular threat modelling and the flexible, modular, extensible engineering and testing of attacks and defences for AI agents. Most testing by leading AI companies about the security and safety of their models don't actually reveal much about their security and safety *IN A PARTICULAR DEPLOYMENT CONTEXT*. That testing isn't irrelevant, but it's done in a way that is ignorant of the details of deployment -- the distribution of data the system will see in production, information about which errors are grave and which are benign, knowledge about what assets others might want to get their hands on, and so on. Answers to these questions are what you need for detailed threat modelling. And they are your starting point for knowing what to test. DoomArena is built with this in mind, allowing for the design and deployment of and experimentation with attacks and defences relevant for your agent deployment context. This is just a tease. Paper and framework to be released soon. I'm really excited about this one! #artificialintelligence #aisafeety #aiagent #trustworthyai ServiceNow Nicolas Chapados Nelly Ayllon Lazo

    • No alternative text description for this image
  • ServiceNow Research reposted this

    View profile for Krishnamurthy Dvijotham

    Research Team Lead, Safety and Security

    It's great to have AI agents that do routine tasks autonomously, allowing knowledge workers to focus on higher productivity tasks. However most tasks in enterprise settings involve accessing sensitive data, modifying access permissions etc. How do we test that such agents are secure? To learn about the ServiceNow perspective on this, join us at the ServiceNow booth at NVIDIA #GTC today at 1:30 pm. It's the last of 3 sessions we have on this topic, so make sure you don't miss it! I will also be looking to meet folks interested in this topic and exchange ideas - so feel free to ping me if you would like to chat!

    • No alternative text description for this image
  • ServiceNow Research reposted this

    View profile for Juan A. Rodriguez

    Artificial Intelligence Researcher

    🚀 Exciting news! StarVector has been accepted at CVPR 2025! 🎉 StarVector introduces a new paradigm for Scalable Vector Graphics (SVG) generation, leveraging multimodal LLMs to generate SVG code that aesthetically mirrors input images and text. This work explores the intersection of deep learning, vector graphics, and generative AI, pushing the boundaries of how LLMs can directly produce structured visual representations. 🎯 And today, we’re open-sourcing everything! ✅ Model checkpoints ✅ Full training & inference code ✅ Curated datasets For all details, check out: 🔗 Website: https://lnkd.in/dFwxbsCk 📄 Paper: https://lnkd.in/d5aHWwkH 💻 Code: https://lnkd.in/dZ49sSeJ Key Learnings & Challenges While StarVector naturally leverages SVG primitives for compact and structured outputs, we’ve also uncovered key challenges: ⚠️ Like all LLMs, it can hallucinate, introducing unintended details. 🎭 Because it learns next-token SVG prediction, the model never "sees" the rendered image during training, meaning it isn’t directly optimizing for pixel-wise reconstruction. We are actively working to push these boundaries, improve fidelity, and enhance robustness. This is just the beginning—stay tuned for what’s next! 🚀 A huge thanks to my amazing coauthors and collaborators at ServiceNow Research, Mila - Quebec Artificial Intelligence Institute, and École de technologie supérieure for their support throughout this journey! 🙌❤️ Would love to hear your thoughts—excited to see how the community builds on this! 🔥 #CVPR2025 #AI #DeepLearning #GenerativeAI #SVG #StarVector

  • ServiceNow Research reposted this

    View profile for Krishnamurthy Dvijotham

    Research Team Lead, Safety and Security

    Can your AI keep up with dynamic attackers? In a paper to appear at #AISTATS2025 with Avinandan Bose Laurent Lessard and Maryam Fazel, we study robustness of learning algorithms to dynamic data poisoning attacks that can adapt attacks while observing the progress of learning. While prior work has focused on certifying robustness of AI to static data poisoning attacks where the attacker poisons the dataset in one shot prior to learning, modern AI systems are continuously updated from human feedback, making dynamic data poisoning attacks feasible. We develop a general framework to compute tight certificates that are mathematically rigorous upper bounds on the worst case impact of dynamic data poisoning attacks against learning algorithms. We then use these certificates in a meta-learning setup to optimize learning algorithms to achieve a desirable tradeoff between performance on benign learning tasks and robustness to dynamic data poisoning attacks. While more work is needed to scale our approach to SOTA AI systems, our experimental results show promise for settings learning learning reward functions online from human feedback. To learn more, please visit our project webpage: https://lnkd.in/gWiwESDu where you can find our paper, code and data. Grateful to ServiceNow Research for supporting this work!

  • ServiceNow Research reposted this

    Revitalizing Benchmarking for LLMs with EMDM As LLMs continue to advance, benchmarks struggle to keep up. Many evaluations fail to distinguish model capabilities, leading to performance plateaus and diminishing insights. In this work, we introduce EMDM (Enhanced Model Differentiation Metric)—a weighted metric that redefines LLM evaluation by integrating answer correctness with Chain-of-Thought (CoT) reasoning. By leveraging a Guided vs. Unguided prompting setup, EMDM dynamically adjusts weights based on reasoning complexity, leading to improved model separation and more meaningful performance assessments. 🔹 EMDM accounts for reasoning failures, distinguishing models that genuinely understand a problem from those that merely guess correctly. 🔹 It can revitalize existing benchmarks, offering a scalable alternative to synthetic dataset creation. We are excited to present this work at TrustNLP @ NAACL 2025! Join us as we discuss the future of trustworthy and robust LLM evaluation. 📄 Read the paper: https://lnkd.in/ddh678Yv 🔗 TrustNLP Workshop: https://lnkd.in/dSFmyE6S Looking forward to engaging discussions—how can we improve LLM evaluation for real-world applications? This work was done as part of Bryan Etzine's internship with ServiceNow, together with the great collaborators: Nishanth Madhusudhan, Sagar Davasam, Roshnee Sharma, Vikas Yadav, PhD, Sathwik Tejaswi Madhusudan

  • ServiceNow Research reposted this

    View profile for Jason Stanley

    Head of AI Research Deployment @ ServiceNow

    Excited to be speaking alongside my colleagues at NVIDIA's GTC conference next week about our approach to ensuring the security of AI agents at ServiceNow Research! AI agents are becoming more capable and autonomous, but with that comes new security challenges. At ServiceNow Research, we are developing robust methods to ensure trustworthiness, mitigate risks, and align AI behavior with enterprise needs. In our talk, we'll address: 1️⃣ The shortcomings of dominant AI security and safety testing paradigms when it comes to measuring real risks of deployed agents in enterprise settings; 2️⃣ The need for testing to be done in a context-aware way with close attention to the details of deployment; and 3️⃣ How ServiceNow's platform provides broad observability and greater levers for trustworthy control. Check out our presentation at the ServiceNow booth at: ⏰ March 18, 6:30pm PT ⏰ March 19, 4:30pm PT ⏰ March 20, 1:30pm PT Looking forward to discussing these challenges and solutions with the community. If you're attending, let’s connect! Krishnamurthy Dvijotham Nicolas Chapados #artificialintelligence #ai #aisafety #aisecurity #trustworthyai

    • No alternative text description for this image
  • ServiceNow Research reposted this

    View profile for Jason Stanley

    Head of AI Research Deployment @ ServiceNow

    I recently joined the Ethical Machines Podcast to discuss one of AI’s toughest challenges: risk mitigation. We covered two key themes—context-aware testing and the complexities of AI supply chains in large organizations. 1️⃣ Context-aware testing Most AI safety discussions focus on foundation model evaluations (e.g., OpenAI’s latest model). But these tell us little about real-world AI product risks. In practice, foundation models are just one upstream input in a long supply chain. Risk evaluation needs to consider the specific domain, use case, and what’s at stake—just as we don’t judge a vehicle’s safety solely by testing its raw materials. 2️⃣ AI supply chain complexity In large organizations with multiple teams, shared tools, and common components, risk evaluation happens at many points—but what does it all add up to? At deployment, looking back at this chain, what can we confidently say about performance and safety? Too often, the focus is on leaderboards and foundation model safety evals, which provide little insight into real-world risks. A big thank you to Reid Blackman, Ph.D. and the podcast team for the insightful conversation! 🎧 Check out the episode here: https://lnkd.in/efZJ_zYy I'd love to hear your thoughts — what do you see as the biggest challenges in AI risk mitigation? #artificialintelligence #ai #trustworthyai #aisafety #machinelearning

Affiliated pages

Similar pages

Browse jobs