What can you learn from brand debacles? Quite a bit, as it turns out: not just how brands grow, but how brands survive. Staying the course amid a political fracturing relies on having a... Bulletproof 😉 understanding of your brand and marketing model. And guess which brand didn't have that? Because I couldn't call an article "What to do when your brand shits the bed." The Drum The Drum Network #brandstrategy #Drumnetwork https://lnkd.in/eA2t5x27
Andrew McLean’s Post
-
it's a new ai tip. this one's all about bringing your data into your llm of choice: ways to do it, reasons to be scared of it. and i managed to get the phrase 'little 'roided out data freak' into it, so that's fun too, i guess. it's in the comments.
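(the actual how-to lives in the comments, which aren't in this post, so here's a hedged sketch of the simplest version of "bringing your data into your llm": flattening a small csv into the prompt itself. the data and the prompt wording below are made up for illustration, and no real llm call is made.)

```python
import csv
import io

# stand-in for "your data"; in practice you'd open a real file
raw = """month,visits
jan,1000
feb,1200
mar,1500"""

rows = list(csv.DictReader(io.StringIO(raw)))

# the crudest way in: paste the data straight into the prompt text
table = "\n".join(f"{r['month']}: {r['visits']} visits" for r in rows)
prompt = (
    "using only the data below, describe the trend in site visits.\n\n"
    + table
)

# `prompt` is now ready to paste into your llm of choice.
# this is also the bit to be scared of: everything in `table`
# leaves your machine and lands on someone else's server.
print(prompt)
```

that privacy point is the "reasons to be scared" part: anything you inline into a prompt travels with it.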
-
Taking myself to the future of media today, courtesy of the lovely TheZoo.London: The Consultant Collective. So please, if you're going or see me there, do NOT make eye contact. let's not even notice each other, I will blank you. thanks!
-
because according to WARC and their future of strategy report, nearly 66% of strategists aren't using it yet... there's still time. the topic for today's garden gnome tip is synthetic data. I've seen some really good takes this week from Jo Arden and Mark Hadfield, and it got me thinking. I've written a couple of things about it today (which I'll drop into the comments), but I think my overriding opinion is... without any sort of reality in the process, artificial insights will lead to one of two things: non-distinct work that drives us into a new era of lowest common denominator, or wholly incorrect assumptions and therefore bad strategic thinking. but there are good use cases for synthetic data, and there are responsible ways to use it. when it comes to synthetic data, I think the footballer Gennaro Ivan Gattuso said it best: "sometimes maybe good, sometimes maybe shit"
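(to make the "lowest common denominator" worry concrete, here's a toy sketch with entirely made-up numbers: real data that splits into two distinct groups, and naive synthetic data sampled from a single fitted distribution. the synthetic set happily invents respondents in the middle that don't exist in reality.)

```python
import random
import statistics

random.seed(42)  # fixed seed so the example is repeatable

# "real" data: deliberately bimodal, e.g. two very different customer groups
real = [1.0] * 50 + [9.0] * 50

# naive synthetic data: fit one normal distribution and sample from it
mu = statistics.mean(real)       # 5.0
sigma = statistics.pstdev(real)  # 4.0
synthetic = [random.gauss(mu, sigma) for _ in range(100)]

# the synthetic set fills the middle ground that no real person occupies
real_middle = [x for x in real if 3.0 < x < 7.0]
fake_middle = [x for x in synthetic if 3.0 < x < 7.0]
print(len(real_middle), len(fake_middle))  # zero real people vs. plenty of fake ones
```

that drift toward the average is exactly how you end up with non-distinct, lowest-common-denominator thinking.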
-
gnome stuff: today... how to be cool and use ai for prediction.
predicting stuff. not the lottery numbers (maybe the lottery numbers), but part of the fun of ai is experimenting, really out-there experimenting, and seeing what's possible. like... could i create a predictor of website visits? yuh-huh, yes you can. the below chart is based on a prior year of share of search data and site visits, smooshed together, predicted, then held up against the year that followed, and you can see the outcome: every dot is a day, and it's 88% accurate at forecasting volumes of site visits. so then you can plan, you can see uplifts from advertising, you can do all the things. the problem with ai is that it enables so much that it really requires a chat to get things going, so let's get things going.
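(the gist of the above, as a toy sketch: fit a straight line from share of search to site visits on "year one", then use it to forecast a "year two" day. the numbers below are invented and made perfectly linear so the fit is exact; a real year of daily data is noisy, hence 88% rather than 100%.)

```python
def fit_line(x, y):
    """ordinary least squares for y = a + b * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# "year one": daily share of search (%) and site visits (made-up numbers)
share = [10, 12, 15, 11, 14, 13]
visits = [1000, 1200, 1500, 1100, 1400, 1300]

a, b = fit_line(share, visits)

# forecast a "year two" day where share of search hits 16%
predicted = a + b * 16
print(round(predicted))  # 1600 with these toy numbers
```

the real version just does this with 365 dots instead of six, and a richer model than a straight line.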
-
a lot of jargon is designed to make you think 'oh my goodness, this person is smart.' but the jargon surrounding tech and ai is frankly a bit too much; it can be hard to know what's legit and what's just nonsense. but fear not, here's a handy ai gibberish translation into something resembling english, so you can sound informed AF about AI.
1. machine learning: computers learning from data, kind of like how we learn from practice.
2. neural network: a computer system that's designed to mimic how our brains process information.
3. deep learning: it's like machine learning, but with extra steps.
4. natural language processing (nlp): helping computers to understand sentence structure and context, making tech speak our language instead of the other way around.
5. algorithm: a step-by-step guide that tells a computer how to do something.
6. LLM (large language model): a type of ai model trained on vast amounts of text data, capable of understanding and generating human-like text.
7. transformer: a clever way to help ai understand context in language, like knowing "bark" could mean a tree or a dog sound.
8. GPT (generative pre-trained transformer): an ai that's really good at playing 'finish the sentence' games.
so next time you hear someone say "our ai uses a convolutional neural network for image recognition," that just means "our computer is really good at using maths to play 'i-spy' with photos." or... when they say "our generative adversarial network produces photorealistic images from textual descriptions," for humans, try "we've got two ais playing an endless game of 'pictionary' and they've gotten really good at it." or how about... when they say "we're leveraging reinforcement learning algorithms to optimise our media planning," they're saying "we're letting an ai play a giant game of 'max the eff out of reach' until it figures out the best way." remember, behind every piece of ai jargon is a simple idea wearing a fancy hat.
strip away the techno-nonsense and you'll find concepts that anyone can understand. heard any good phrases you need jargon busted?
-
okay, this may be an extreme case... but I reckon strategists should be self-sufficient. solving problems requires getting your hands into the data, speaking to real people, uncovering genuine insight, not just regurgitating a stat from a report as the basis for your thinking. and ai allows you to take that to extremes... like delving into a bit of machine learning to prove or disprove a hunch? in about 25 seconds, you can have a template to do exactly that. think of a brief as being stranded on an island, where strategic thinking means heading out to sea to see what you can find for the benefit of the people back on shore. well, now you're not paddling... you can go farther, find better stuff, provide more value.
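(a hedged sketch of what a 25-second "prove or disprove a hunch" template could look like at its most basic: a from-scratch pearson correlation. the spend and footfall numbers below are invented, and correlation is only the start of the argument, not the end of it.)

```python
from math import sqrt

def pearson_r(x, y):
    """pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hunch: "weeks with more social spend see more store footfall" (made-up data)
spend = [5, 7, 3, 9, 6, 8, 4]
footfall = [52, 69, 34, 88, 59, 81, 42]

r = pearson_r(spend, footfall)
print(r)  # close to 1.0: the hunch survives a first look
```

swap in your own two columns of numbers and you've got the template; the hard part is still deciding whether the relationship means anything.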
-
I like this one, it's an ai tip. hallucinations threaten to make anything you do in ai rubbish, and lies. so how do they happen? and how can you stop them? all aboard the tip train.
have you ever asked your little ai pal a question and got an answer that's completely made up? well, welcome to ai hallucinations, where our try-hard ai assistants attempt to make us happy at the cost of the truth. it's like the ai is filling in gaps in its knowledge with pure imagination. these fabrications can be subtle or outrageous, but they're always inaccurate. so, how do we spot these digital daydreams and keep our ai conversations factual? let's dive in! here's your guide to crushing rampant ai drug-abuse.
1. fact-check everything, especially surprising information
2. be wary of overly detailed or perfect-sounding answers
3. question information that seems odd or a bit too convenient
4. ask the ai for sources and actually check them
5. compare ai responses with reliable, non-ai sources
💡 let's see an ai hallucination in action:
you: "tell me about the agricultural practices of ancient mars."
ai: "ancient martian agriculture involved hydroponic systems using geothermal energy..."
you: "wait, are there actual sources for martian agriculture?"
ai: "i apologize for the confusion. there's no evidence of agriculture on mars. my previous response was incorrect..."
ai tries to pull this sh*t all the time, inventing a detailed but completely false answer, like someone who actually has imposter syndrome. but by questioning the source, we uncovered the hallucination. why should you care about ai hallucinations? spotting these digital daydreams is crucial for:
- maintaining accuracy in your work and research
- understanding ai limitations
- developing a critical approach to ai-generated information
- avoiding the spread of misinformation
- improving your interactions with ai tools
so, next time you're using ai and get an answer that seems off, remember that a human has had a hand in this, and that the ai is just trying to make you happy and will lie in order to do that.
don't let that pesky little ai get away with it. cut off its supply before it drags you down into its k-hole.
-
what in the physics? also... Red Bull x Prada, I've seen them in sailing, but really interesting choice. via Reddit, Inc.
-
NDA making it seem like I'm a gnome salesman... #itsgardengnome #gnomemarketing