We're #hiring a new User Experience Researcher in the United States. Apply today or share this post with your network.
-
Oh, the hard answer to this hard question has been sitting here in plain sight. How do we evaluate discrete points on an exponential by use case? The answer has been in the realm of images, the world of Stable Diffusion. A billion models easily merged and tuned. Most useless. Some overused. None specific or perfect for any particular thing. Each a labor of love by an individual among thousands in a fractal community of builders. Behold the beauty of the untamed open source ecosystem. We will never sort and compartmentalize it, and it will grow exponentially, maddeningly, unhindered by the forces of order and alignment. We no longer need to worry about which model or even which fine-tune is the best. The free market has finally arrived, and with it a million builders. Thank you, Yann LeCun, OG liberator of models. #opensource
Meta's Llama 3 dropped just a few days ago, but there are already close to 1,000 variants publicly available on Hugging Face. Most people read base model evaluations and assume they apply to all these deployments, but that ain't the case. Or at least the extent to which it's true is poorly understood. I want to see more eval work focus on the performance and risk shadow a base model casts on its downstream progeny. Basically, if base models aren't being used as-is, do their evals matter? Or are they poorly predictive of what will come of them post-fine-tuning?

I don't have the time or resources to work on this right now, but I'd love to see someone take this project on: across all variants posted on HF, how do evals on fine-tuned versions compare to evals done on the original base model? What kind of relationship is there between the two? Is the latter a constraint or a directional anchor for the former? The best setup here needs some way of filtering out noise (e.g., devs creating their own copy without doing any fine-tuning, or doing some kind of terrible fine-tuning) in order to focus on more serious attempts to fine-tune for stronger performance on specific use cases, stronger safety and security, or something else.

This question is increasingly important because fine-tuning is becoming easier and faster to do, which means developers are more and more likely to tune models to their own liking. Evals done on prior upstream versions are potentially meaningless... 'potentially' is the key word, because we don't have a good understanding of the persistence of performance and risk, the elasticity of learning.

By the way, if you're interested in working on hard questions around LLM evaluation like these, we're hiring! Our Trust & Governance research team is looking for a Staff Applied Research Scientist. Come help us tackle these challenges! Link to the job posting in the comments. #artificialintelligence #trustworthyai #llm
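For anyone tempted to pick this project up, the measurement loop might look something like the minimal Python sketch below. Two assumptions to flag: it finds variants via the Hub's "base_model" lineage tags (so untagged forks are missed), and the load_eval_scores helper is a hypothetical placeholder, not a real API; in practice you'd back it with Open LLM Leaderboard results or your own eval-harness runs.

```python
# Sketch: do fine-tune evals track the base model's evals?
# Assumptions (not from the original post):
#   - variants declare lineage via the Hub's "base_model" tag, so
#     list_models(filter="base_model:...") finds them; untagged forks are missed
#   - load_eval_scores() is a hypothetical placeholder to be backed by
#     leaderboard data or your own eval-harness runs
from statistics import mean, pstdev
from huggingface_hub import HfApi

BASE = "meta-llama/Meta-Llama-3-8B"

def load_eval_scores(repo_id: str) -> dict[str, float]:
    """Hypothetical: return {benchmark_name: score} for a model, or {}."""
    return {}  # plug in leaderboard data / harness results here

api = HfApi()
variants = api.list_models(filter=f"base_model:{BASE}", sort="downloads", limit=500)

base_scores = load_eval_scores(BASE)
deltas: dict[str, list[float]] = {bench: [] for bench in base_scores}

# Collect (fine-tune score minus base score) per benchmark across variants.
for model in variants:
    scores = load_eval_scores(model.id)
    for bench, base_score in base_scores.items():
        if bench in scores:
            deltas[bench].append(scores[bench] - base_score)

# Deltas clustered near zero would suggest base evals anchor downstream
# behavior; a wide spread would mean they predict little post-fine-tuning.
for bench, ds in deltas.items():
    if ds:
        print(f"{bench}: n={len(ds)}, mean delta={mean(ds):+.2f}, sd={pstdev(ds):.2f}")
```

Filtering out the no-op forks the post mentions could be as simple as dropping variants whose scores are identical to the base model's before summarizing.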
-
DVA is not associated with this job posting.

Senior Staff Quantitative UX Researcher, Core USA
https://lnkd.in/gUcScibD

As a Quantitative User Experience Researcher (Quant UXR), you'll help inform your team of UXers, product managers, and engineers about user needs. You'll play a critical role in creating useful, usable, and delightful products. You'll work with stakeholders across functions and levels and have impact at all stages of product development. You will investigate user behavior and user needs using empirical research methods such as logs analysis, survey research, path modeling, and regression analysis. Quant UXRs vary in background and use skills from computer science, quantitative social science, econometrics, data science, survey research, psychology, human-computer interaction, and other fields. You'll combine skills in behavioral research design, statistical methods, and general programming to improve user experience.

Responsibilities:
- Influence stakeholders across organizations to gain support for research-based, user-centric solutions.
- Own project priorities in alignment with larger product goals, and oversee allocation of resources within the project.
- Drive ideas to improve products and services through research-driven insights and recommendations.
- Lead teams to define and evaluate product, service, and ecosystem impact.
- Own vision and strategy discussions through research by analyzing, consolidating, or synthesizing what is known about user, product, service, or business needs.

#innovation #management #digitalmarketing #technology #creativity #futurism #startups #marketing #socialmedia #socialnetworking #motivation #personaldevelopment #jobinterviews #sustainability #personalbranding #education #productivity #travel #sales #socialentrepreneurship #fundraising #law #strategy #culture #fashion #business #networking #hiring #health #inspiration
-
🚀 The AI Recruiter! 🌎 | Scaling AI, Machine Learning and Robotics Teams Globally | Winner of Outstanding Advocate for Women in Tech '23 👩‍💻 | Neurodiversity First Aider ⭐
This talent pool has grown by 15% in the last 12 months 📈📈 Where? Canada! 🍁 Canadian Machine Learning Engineering talent has grown slowly but significantly over the last couple of years, with Meta, Qualcomm and RBC taking full advantage by becoming the top 3 employers! In the last year alone, though, Pinterest has grown its MLE headcount by 150%! If you are considering Canadian talent, there are some hidden-gem locations to consider:
📌 Vancouver
📌 Ottawa
📌 Calgary
Canadian ML experts, what do you think? #machinelearning #ai #hiring #machinelearningengineer
-
This LinkedIn post offers a thorough, detailed look at the technology stack Meta's data engineers use day to day. It is written in formal yet deeply passionate language, drawing on a rich, specialized vocabulary to educate and inform its audience. The absence of humor or embellishment reflects a commitment to directness, and the complex sentence structure mirrors the intricacy of the subject. The approach aims to engage and persuade while emphasizing a clear call to action, sustaining enthusiasm and positivity throughout. Overall, the CEO's intent is to foster engagement with, and understanding of, the sophisticated tools and decisions inherent to data engineering, with punctuation used to drive key points home.
-
ML in academia vs industry

I didn't have a typical path to big tech. My undergrad was in biomedical engineering, my master's in information and computer engineering, and my PhD in statistics and computational biology. In the third year of my PhD I did a software engineering internship at Bloomberg, and I joined the Search & NLP group there as soon as I submitted my thesis. I've now been at Meta for the last five years, working on recommender systems and ads delivery optimization. And I've never looked back.

I always knew academia wasn't right for me. I wanted to solve real-life problems and build things. Unashamedly, I was also interested in making money, and I did not see a way of making that happen in academia. Lastly, I liked to code, more than reading or writing papers. In my head, that's what I would do *all the time* as a software/ML engineer in industry (plot twist coming up) while being paid a lot more. What was there not to like?

The transition was a bit of a culture shock. Back in academia I was used to working on my own and being my own bottleneck. I got to do *deep* work for hours or days on end. That is rarely the case in industry. It's still, of course, about solving difficult technical problems, but it's also about navigating complex human dynamics: figuring out how to work in a team, how to align with others who have different agendas, and how to manage resources, timelines and competing priorities.

The problem complexity is also different. In industry, I rarely *have* to read papers, because I'll usually know how to solve a problem from a modeling perspective. The complexity is in getting our production systems to do that 'simple' thing we need. It's about the complex interaction of components that are maintained and developed by thousands of engineers serving billions of users at a time. It is about your experiment getting corrupted by a team on the other side of the world making a change to a service you know nothing about. Nothing in academia prepares you for this challenge. And it's pretty fun.

If you are interested in making the jump from academia to industry, check out this job opening at Meta!
-
#Meta has gone ahead with its decision to shut down CrowdTangle, despite widespread criticism from civil society organizations and researchers who have used the tool to conduct independent investigations into critical questions around platform accountability. This decision is worrying, especially given that we are less than 3 months away from Election Day in the US.

Tech Policy Press has covered this issue extensively. Here's what our contributors have to say:

1) Brandi Geurkink and Claire Pershan speak with Justin Hendrix about the demise of CrowdTangle and what it means for independent technology research: https://lnkd.in/dZzrNzR4

2) Prithvi Iyer summarizes a recent study from the Coalition for Independent Tech Research on how CrowdTangle's shutdown impacts academia and civil society organizations working in this space: https://lnkd.in/di7vbhJ2

3) Megan Brown, Josephine Lukito and Kaicheng Yang discuss how the end of CrowdTangle could affect data access under the #DSA: https://lnkd.in/d4SJyaTs

If you like our timely and topical coverage of key tech policy issues, do subscribe to our weekly newsletter! https://lnkd.in/dn-vvrMC
Researchers Consider the Impact of Meta's CrowdTangle Shutdown | TechPolicy.Press
techpolicy.press
-
https://lnkd.in/g7UUSh2k

I was reading this post and found that the pandemic had the same effect on the workforce everywhere, not only in the USA but in India too. I am finding that, of late, more and more jobs are being published in India as well, but there are not many takers. Even though the IT sector has started strictly requiring three days a week of work at the office, many women workers are not very keen.

The aggravated problems after the pandemic, the slowdown and the recession, could be that:
- people have learnt to live within their means
- people have managed to work part-time
- many do not want to work outside their home stations
- many women prefer remote types of jobs
- many new industries are not opening, and the scaling down of production and lack of demand have led to fewer shifts running, or shifts running with a smaller workforce
- more temporary workers are being let go, leaving them to settle into other skills they have honed
- many employees are choosy about the work they like to do

It's not a seller's market but a buyer's market in labor. Another issue is that industries have not increased their wages, and wages are much too low to match the extremely high costs of living, healthcare, education, travel, goods, etc. Disposable income has been reduced to hand-to-mouth levels, with nearly nothing left to save. Bank interest rates are going up, and loans are not being disbursed either. Exports are reduced due to other indirect reasons: war, the negative world politics of sanctions, and so on.

Until the environment of supply meeting demand starts to work, this issue will be the same everywhere. What are your views on it?
I am a Staff Mixed Methods UXR at Google. I have over half a decade of experience working in consumer-facing AI, experience with cloud technologies, knowledge of and experience in programming and developer tools, and an in-depth understanding of the technical and legislative constraints that shape product experiences. I have a PhD in Anthropology from a top university, 10 years of high performance in the industry, demonstrated thought leadership, and I am continually developing new, relevant skills, hence my enrollment in a Master's program to grow my knowledge and understanding of quantitative data and analysis.

Yet I cannot find a job, within or outside Google, that is interested in employing me. Externally there is a desire to down-level me or decrease my pay. Or somehow I am not "shiny" enough? Internally at Google it feels like a who's who of networking, with hundreds of UXers applying for only a handful of open positions in the US. My current team is overstaffed, with too many UXers and not enough work. I am wholly bored and unfulfilled. I expect my role will be deemed redundant any time now, especially if I cannot find work with product impact soon.

Every day I am anxious and worried. Every day I feel disappointed. I do not know what will become of me. As the sole breadwinner with two kids, I should be grateful to be employed at all, and yet I see how this can only be temporary if things continue as they are. I am the type of person who needs to be challenged to learn and try new things; I thrive in busy workplace environments; I am strongly qualified, have exceptional work experience, and have been at the forefront of AI technologies since 2017. So... what gives? What am I missing?!

UPDATE: I want to thank those of you who took the time to comment and message me. I've received a lot of empathy, support, some reality checks (duly noted), and some great guidance. Thank you.
-
Junior vs. Senior: not what you think.

I keep hearing people use the terms Junior and Senior to compare people's positions. This kind of thinking is too simplistic, because it assumes that progression is single-dimensional. It also implies that one person is better than the other, and that's wrong. The reality is that people are multi-dimensional. You can be better than someone at one thing, and they can be better at another.

When I was Director of Data Engineering at Meta (L8), I had a person on my team who was L4. That was a 4-level gap between us, and about 15 years of experience. But they were much better than me at writing, and they really enjoyed it! When we both realised that, they asked if they could take ownership of the team's weekly updates. It was a win-win for everyone:
- our readers, who got better weekly updates
- them, doing something they were passionate about
- me, spending less time on something I wasn't very good at

Everybody is good at something. Create an environment where people can play to their strengths.
-
For my connections who are currently in the job market, check out some of the open positions at Anthropic. #ai #aijobs #aitalent #genai #generativeai #artificialintelligence #jobpostings #aiskills #productdesign #productengineering #research #marketing #talentstrategy
Anthropic
boards.greenhouse.io