> Could an AI agent craft compelling emails that would capture people's attention and drive engagement, all while maintaining a level of personalization that feels human? I decided to find out.
> The real hurdle was ensuring the emails seemed genuinely personalized and not spammy. I knew that if recipients detected even a whiff of a generic, mass-produced message, they'd tune out immediately.
> Incredibly, not a single recipient seemed to detect that the emails were AI-generated.
https://www.wisp.blog/blog/how-i-use-ai-agents-to-send-1000-...
The technical part surprised me: they string together multiple LLMs which do all the work. It's a shame the author's passions are directed towards AI slop-email spam, all for capturing attention and driving engagement.
How much of our societal progress and collective thought and innovation has gone to capturing attention and driving up engagement, I wonder.
A sufficiently advanced personal assistant AI would use multimodal capabilities to classify spam in all of its forms:
- Marketing emails
- YouTube sponsorship clips
- Banner ads
- Google search ads
- Actual human salespeople
- ...
It would identify and remove all instances of this from our daily lives.
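As a toy illustration of the "agent sitting between us and everything else" idea: the sketch below classifies incoming items as promotional or organic and drops the former. Everything here is invented for illustration; a real assistant would use a multimodal model rather than the keyword heuristics used as a stand-in.

```python
# Toy sketch of the filter-agent idea: classify incoming items before
# they reach the user. A real agent would use a multimodal model; this
# stand-in uses simple keyword heuristics only.

PROMO_MARKERS = {
    "limited offer",
    "act now",
    "sponsored",
    "unsubscribe",
    "this video is brought to you by",
}

def classify(text: str) -> str:
    """Return 'spam' if the text looks promotional, else 'organic'."""
    lowered = text.lower()
    if any(marker in lowered for marker in PROMO_MARKERS):
        return "spam"
    return "organic"

def filter_feed(items: list[str]) -> list[str]:
    """The agent sitting between us and everything else: drop the spam."""
    return [item for item in items if classify(item) == "organic"]

feed = [
    "Limited offer! Act now and save 50% on our premium plan.",
    "Here is the meeting agenda for Thursday.",
    "This video is brought to you by a VPN company.",
]
print(filter_feed(feed))  # only the meeting agenda survives
```

The same routing structure applies whether the classifier is a keyword list, a spam model, or an LLM call; only `classify` changes.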
Furthermore, we could probably use it to remove most of the worst parts of the internet too:
- Clickbait
- Trolling
- Rage content
I'm actually really looking forward to this. As long as we can get this agent into all of the panes of glass (Google will fight to prevent this), we will win. We just need it to sit between us and everything else.
Until _that_ company gets overrun by profit-driven MBAs, and then they start injecting ads into the results.
It will come in the vein of "we are personalizing the output and improving responses by linking you with vendors that will solve your problems".
Found companies with people that share your values. Hire people that share your values. Reject the vampires. Build things for people.
Competing with bad actors is very, very hard. They will be fat with investor money, they will give their services away, and commonly they are not afraid to do things like DDOS to raise your costs of operations.
What if the user is a conservative voter and considers anything counterpoint to their world view the worst part of the internet and removes all instances of it from their daily lives? Not to say that isn’t already happening but they are consciously making the choice, not some AI bot. I can see something like this making the country even more polarized.
Growing up as a southern evangelical before the internet, I can promise you that there has never been a modern world without filter bubbles.
The concept of "fake news" is not new, either. There has been general distrust of opposing ideas and institutions for as long as I've been alive.
And there's an entire publishing and media ecosystem for every single ideology you can imagine: 700 Club, Abeka, etc. Again, this all predates the internet. It's not going anywhere.
The danger isn't strictly censorship or filter bubbles. It's not having a choice or control over your own destiny. These decisions need to be first class and conscious.
Also, a surefire way to rile up the "other team" is to say you're going to soften, limit, or block their world view. The brain has so many defenses against this. It's not the way to change minds.
If you want to win people over, you have to do the hard, almost individual work, of respecting them and sharing how you feel. That's a hard, uphill battle because you're attempting to create a new slope in a steep gradient to get them to see your perspective. Angering, making fun, or disrespecting is just flying headfirst into that mountain. It might make you feel good, but it undoes any progress anyone else has made.
I fell for old-school marketing yesterday. I'm moving into a new apartment in a couple months. The local ISP who runs fiber in my new building cold-called me. I agreed over the phone to set up the service. That was proper targeted marketing. The person who called me knew the situation and identified me as a very likely customer with a need for service (the building has a relationship with the ISP). I would never have responded to an email or any whiff of an AI chatbot. They only made the sale because of expensive human effort.
I'd say the art industry is somewhere in between because of:
1. Being a traditionally disrespected but non-trivial skill to acquire.
2. Being a skill valuable for advertisement (good art -> pretty ads -> more money).
3. Being a valuable skill, but not one many industries need full-time work from.
4. Due to #1, being a "vulnerable" industry. There won't be too many millionaire artists to fight back against the AI overlords compared to, say, politicians or businessmen.
But it's not like I have any say on who or what gets affected.
Not all. Also men on Mars, AGI, Fusion etc.
SpaceX is launching multiple rockets into orbit every week. Google is... releasing webpage CSS tweaks like the "New Google Sign-In Page" and a couple of second-rate AI products no one asked for when they get caught with their pants down.
usdebtclock.org
The "smart people are all working in advertising" trope is idiotic. Just an excuse for people to justify their own laziness. There is an infinite number of opportunities out there to make the world better. If you are ignoring them, that's on you.
Which is true. But clearly far fewer people work doing that than in advertising or some other seemingly meaningless grunt work. And I'm including the technological plumbing work which many on this site, myself included, have depended upon to support themselves and/or a family.
Which at best is effectively doing minor lubrication of a large and hard to comprehend system that doesn’t seem to have put society as a whole in a particularly great place.
Clicking on ads helped get us to the AI of today. Showing you the right ad, and beating those trying to game it, is machine-learning heavy. When did we first start seeing spelling correction and next-word suggestions? In the Google search bar. To serve the correct ads and deal with spam? Heavy NLP algorithms. If you stop and think about it, we can draw a straight line from the current state of LLMs back to those ad clicks you are talking about.
It made me realize that I think many computing people need more of a fundamental education in "hard" physics (statics, mechanics, thermodynamics, materials science) in order to better understand the staggering paradigm shift that occurred in our understanding of the world in the early 20th century. Maybe then they would appreciate how much of the world's resources have now been directed by the major capital players towards sucking the collective attention span of humanity into a small rectangular screen, and the potential impact of doing so.
The comparison here is between moonlanding and advertisement. So I choose the moon obviously.
Ecommerce can work just the same without LLM augmented personalized ads, or no advertisement at all. If a law would ban all commercial advertisement - people still need to buy things. But who would miss the ads?
I think the answer is pretty clear in the fact that so many of them, bluntly speaking, just don’t give a shit any more. I absolutely don’t blame them.
I don't think your average adult is inspired by the idea of AI generated advertisements. Probably a small bubble of people including timeshare salesmen. If advertisements were opt-in, I expect a single digit percentage of people would ever elect to see them. I don't understand how anybody can consider something like that a net good for the world.
How does non-consensually harassing people into spending money on things they don't need add value to all the world's citizens?
I wish some of these people would think about how they'd explain to their 5 year old in an inspiring way what they do for a living: And not just "I take JSON data from one layer in the API and convert it to protobufs in another layer of the API" but the economic output of their jobs: "Millions of wealthy companies give us money because we can divert 1 billion people's attention from their families and loved ones for about 500 milliseconds, 500 times a day. We take that money and give some of it to other wealthy companies and pocket the rest."
I mean, you'd see the same thing if paying for your groceries were opt-in. Is that also a net bad for the world? Ads do enable the costless (or cost-reduced) provision of services that people would otherwise have to pay for.
Is that seriously the comparison you want to make here? Most of us think the world would be better if you didn't have to pay for food, yes.
They do not enable any costless anything at all. They obfuscate extraction of money to make it look costless, but actually end up extracting significant amounts of money from people. Ad folks whitewash it to make it sound good, but extracting money in roundabout ways is not creating value.
Groceries are opt-in. Until you realize you don't want to hunt and cook your own food, then you opt back in for survival.
uBlock Origin plus some subscriptions show that I'd definitely love to opt out of IRL ads too.
>Is that also a net bad for the world?
World, yes. We have the tech to end food scarcity, but poor countries struggle while rich countries throw out enough food each day to feed them.
A similar amount of wealth would be generated if every advertised product were represented by a plain text description, but instead we have a race to the bottom.
There is advertising and advertising of course but most of advertising is incredibly toxic and I would argue that by capturing attention, it is a huge economic drain as well.
Of course an AI would also be quite apt at removing unwanted ads, which I believe will become a reality quite soon.
I fear statements like this go too far. I can't agree with the first part of this sentence.
I feel this about both marketing and finance:
They are valuable fields. There are huge amounts of activity in these fields that offer value to everyone. Removing friction on commerce and the activities that parties take in self-interest to produce a market or financial system are essential to the verdant world we live in.
And yet, they're arms races that can go seemingly-infinitely far. Beyond any generation of societal value. Beyond needless consumption of intellect and resources. All the way to actual negative impacts that clutter the financial world or the ability to communicate effectively in the market.
This is quite a statement to make.
Please elaborate on what enormous value spam ads and marketing emails have added for _world_ citizens.
Unless of course by “world” you mean Silicon Valley venture capitalists..
Is the idea that any and all movement of money is virtuous? That all economic activity is good, and therefore anything that leads to more economic activity is also good? Or is it what it sounds like, and it just means "making some specific people very wealthy"? Wouldn't the more accurate wording be that it "concentrates wealth"? I don't see a huge difference in the economic output of advertisement from most other scams. A ponzi scheme also uses psychological tricks to move money from a large amount of people to a small amount of people. Something getting people to spend money isn't inherently a good thing.
Maybe this was your point, but this is built in to one of the definitions of GDP, isn’t it? Money supply times velocity of money?
I’m no economist though I’m sure there are folks on here who are. But this seems like an unfortunate fact that’s built into our system- that as laypeople we tend to assume that ‘economic growth’ means an increase in the material aspects of our life. Which in itself is a debatable goal, but our GDP perspective means even this is questionable.
For example, take a family of five living out in a relatively rural area. In scenario one, both parents work good-paying remote tech jobs and meals, childcare, maintenance of land and housing, etc. are all outsourced. This scenario contributes a lot according to our economic definitions of GDP. And it provides many opportunities for government to tax and companies to earn a share of these money flows.
Then take scenario 2, you take the same family but they’re living off of the grid as much as possible, raising or growing nearly all their own food, parents are providing whatever education there is, etc. In this scenario, the measurable economic activity is close to zero- even if the material situation could be quite similar. Not to mention quality of life might be rated far higher by many.
What rating an economy by the flow of its money does do is, and I’m not sure if this is at all intentional, is it does paint a picture of what money flows are potentially capturable either by government taxation or by companies trying to grab some percentage as revenue. It’s a lot harder to get a share of money that isn’t there and/or not moving around.
Perhaps my take on economics is off base but, for me, seeing this made me realize just how far off our system is from what it could and should be.
I concede that GDP is a good indicator, but I think you can have things that help GDP while simultaneously hurting the economy. Otherwise any scam or con would be considered beneficial, and it would make sense to mandate minimum individual spending to ensure economic activity. A low GDP inherently shows poor economic health, but a high GDP does not guarantee good health.
In my mind (noting, again, that I'm no economist), economic health is defined by the effectiveness of allocating resources to things that are beneficial to the members of that economy. Any amount of GDP can be "waste", resources flowing to places where they do not benefit the public. As Robert Kennedy famously pointed out, GDP includes money spent on addictive and harmful drugs, polluting industries, and many other ventures that are actively harmful.[0]
"I'm sending spam that sneaks past your spam filter. Sign up to make it stop."
When I realized it was just dudes copy-pasting a “smart contract” and then doing super shady marketing, it was already illegal in my jurisdiction.
You could of course say the same for frontend or backend engineers. How many frontend engineers are simply importing Tailwind, React, etc.? How many backend engineers are simply importing Apache packages?
Where do you draw the line? Can you only be an AI expert if you stick to non-LLM solutions? Or are AI experts the people who have access to hundreds of millions of USD to train their own LLMs? Who are the real AI experts?
Nowadays, that would be laughed at. But AI is more comparable to cars from 1900 than modern vehicles.
Snake oil salesmen we called em back in my day ;-)
And in reality, most software work is 1) API calls and 2) applied math. If you're not in cutting-edge private tech or academia, your work probably falls into one or both categories. Being a modern "software engineer" is more a matter of what scale of APIs you're wrangling, not how deep your domain knowledge goes.
It's a calculated move on their part.
If your audience likes your brand and doesn’t distinguish between your services and services done by more competent providers, then you’ve found your niche. So: snake oil is not fine; but Supreme branded brick sounds ok to me, even if I wouldn’t buy it myself.
I guess the author will find followers who enjoy that approach to software and product growth. If spamming wasn’t part of it, I’d be ok.
Of the people who replied. I bet plenty figured it out, but didn't bother to reply.
...of course they'd probably get an LLM to write the article too.
Consider:
* spammers have access to large amounts of compute via their botnets
* the effectiveness of any particular spam message can easily be measured - it is simply the quantity of funds arriving at the cryptocurrency wallet tied to that message within some time window
So, just complete the cycle: LLM to generate prompts, another to generate messages, send out a large batch, wait some hours, kick off a round of training based on the feedback signals received; rinse, lather, repeat, entirely unattended.
This is how we /really/ get AI foom: not in a FAANG lab but in a spammer's basement.
PS: However, see comments downthread about "survivorship bias". Not everybody will reply, so biases will exist.
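The unattended cycle described above can be sketched in a few lines. Every external step (LLM generation, sending, wallet monitoring) is replaced here by a dummy stub, and names like `generate_variants` and `wallet_revenue` are invented purely for illustration of the feedback structure, not any real spam tooling.

```python
import random

# Sketch of the feedback loop: generate a batch of message variants,
# score each by the (simulated) funds arriving at its tied wallet,
# keep the best variant as the seed for the next round.

def generate_variants(seed_prompt: str, n: int) -> list[str]:
    # Stand-in for "LLM to generate prompts, another to generate messages".
    return [f"{seed_prompt} (variant {i})" for i in range(n)]

def wallet_revenue(message: str) -> float:
    # Stand-in for the feedback signal: funds at the wallet tied to
    # this message within some time window. Here, just random noise.
    return random.random()

def feedback_cycle(seed: str, rounds: int = 3, batch: int = 8) -> str:
    best = seed
    for _ in range(rounds):
        variants = generate_variants(best, batch)
        # "Kick off a round of training based on the feedback signals":
        # this toy version just greedily keeps the highest-earning variant.
        best = max(variants, key=wallet_revenue)
    return best

print(feedback_cycle("example seed message"))
```

The real version would replace the greedy `max` with fine-tuning on the reward signal, but the closed loop, measurable reward plus unattended iteration, is the part that makes the scenario plausible.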
This is half of major reddit subs now and I fear the same low quality comments will take over HN.
People need to go out and touch some grass.
Everyone is so comfortable doing shit like this.
Reminds me of the early days of the web.
I can see it, perhaps positively, investing far less importance and effort into online things. With admittedly a lot of optimism, I could see it leading to a resurgent arts and crafts movement, or a renewed importance put on hand-made things. People say "touch grass"; maybe AI will make people "touch crafts" (bad joke, I know).
Definitely interesting to see the different culture in tech and programming since programmers are so used to sharing code with things like open source. I think programmers should be more skeptical about this bullshit, but one could make the argument that having a more flexible view of intellectual property is more computer native since computers are really just copying machines. Imo, we need to have a conversation about skills development because while art and writing accept that doing the work is how you get better, duplicating knowledge by hand in programming can be seen as a waste of time. We should really push back on that attitude though or we'll end up with a glut of people in the industry who don't understand what's under all the abstractions.
In all seriousness, manipulation and bullshit generation have emerged as the single major real-world use of AI. It's not good enough yet to solve the big problems of the world: medical diagnostics, auto accidents, hunger. Maybe it's a somewhat better search tool, maybe a better conversational e-learning tool, barely a better IntelliSense.
But, by God, is it fantastic at creating Reddit and X bots that amplify the current line of Chinese and Russian propaganda, upvote and argue among themselves on absurd topics to shittify any real discussion and so on.
Do you think those countries are the only ones doing this? Just the other day there was a scandal about one of the biggest Swedish parties, one that's in the government coalition, doing exactly this. And that's just one that got caught. In countries like India and Brazil online disinformation has become an enormous problem, and I think that in the USA and Europe, as the old Soviet joke went: "Their propaganda is so good their people even believe they don't have any".
People can be both wonderful and despicable, regardless of era or mechanism.
The following month, reports emerged of 50 girls in one Australian school being exploited in very similar ways by nothing more than a kid with a prompter.
https://www.abc.net.au/news/2024-06-25/explicit-ai-deepfakes...
Scaling this type of exploitation of children online is trivial for anyone with basic programming skills.
The Techno-Optimist Manifesto appears utterly foolish to me once you notice there is not one mention of accountability for downside consequences.
> As founder, I'm always exploring innovative ways to scale my business operations.
While this is similar to what other founders are doing, the automation, scale and the email focus puts it closer to spam in my book.
Now do Google.
Trillions, easily. People wanna sell you stuff, and they will pay to get your eyeballs. It doesn't matter if it's to sell you a candy bar or to enlist you in the military. Even non-profits and charities need awareness. They all need attention and engagement.
Facebook + Instagram is a $100B+ business. So are YouTube and Ads.
The average human now spends about ~3h per day on their screens, most of it on social media.
We are dopamine-driven beings. Capturing attention and driving up engagement is one of the biggest parts of our economy.
Note my 'best case' scenario for the near future is pretty upsetting.
In defence of that guy, he's only doing it because he knows it's what pays the bills.
If we want things to change, we need to fix the system so that genuine social advancement is what's rewarded, not spam and scams.
Not an easy task, unfortunately.
As for the humans, we went fishing instead.
Everyone is paying lip service to global warming, energy efficiency, reducing emissions.
At the same time, data centers are being filled with power-hungry graphics cards and hardware to predict whether showing a customer an ad will get a click, and to generate spam that "engages" users, aka gets clicks.
It's like living in an episode of Black Mirror.
Until then, I will probably avoid communicating with strangers on the internet more and more. It will get even more exhausting when 99% of them are fake.
Datacenters save a lot more energy than they make. Just consider how much CO2 is saved when I can do my banking online instead of having to drive to a bank.
The same goes for a ton of other daily things I do.
Does video produce CO2? Yes. But you know what creates a lot more CO2? Driving around for entertainment.
And the companies running those GPUs actually have an incentive to be CO2 neutral, while bitcoin miners don't: they 1. have already said they are going CO2 neutral, 2. due to marketing, and they will achieve it because 3. they have the money to do so.
When someone like Bill Gates or Suckerberg says "let's build a nuclear power plant for AGI", they will actually just do that.
What's more likely: watching a movie online, or driving to watch a movie in a cinema?
You know what creates a lot less CO2? Staying at home reading a book or playing a board game.
>Datacenters save a lot more energy than they make
I think you mean CO2. And I doubt that they actually save anything because datacenters are convenient so we use them more as alternatives with less convenience.
Like the movie example, we watch more and even bad movies if it's just a click on Netflix than we do if we have to drive somewhere to watch.
MS recently announced they will miss their CO2 target and instead produce 40% more, because of cloud services like AI.
We need to be realistic here. We know what modern entertainment looks like, and it's not realistic at all to just 'read books' and play board games.
I don't know how much energy Netflix uses serving a movie, but playing a video game on my PC for two hours where I'm located might generate a kg of CO2. That's about as much as I'll breathe in a day. Relative to other sources of atmospheric CO2 I'm not that concerned.
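The rough kg-per-gaming-session figure above is easy to sanity-check. The two inputs below are assumptions, not measurements: a gaming PC drawing roughly 500 W under load, and a fairly dirty grid at about 1 kg CO2 per kWh (carbon intensity varies several-fold by country, which is why "where I'm located" matters).

```python
# Back-of-the-envelope check of the ~1 kg CO2 figure for two hours of
# PC gaming. Both input numbers are assumptions and vary widely.

power_kw = 0.5          # assumed draw of a gaming PC under load (500 W)
hours = 2.0             # length of the session
grid_kg_per_kwh = 1.0   # assumed grid carbon intensity (coal-heavy grid)

energy_kwh = power_kw * hours            # 1.0 kWh consumed
co2_kg = energy_kwh * grid_kg_per_kwh    # 1.0 kg CO2 emitted

print(f"{co2_kg:.1f} kg CO2")  # prints "1.0 kg CO2"
```

On a cleaner grid (say 0.1 kg/kWh) the same session would emit ~0.1 kg, so the order of magnitude in the comment is plausible but location-dependent.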
I agree with your second paragraph, and selling the "make better choices to save the world" argument is an industry playbook favorite. Environmental damage needs to be put on the shoulders of those who cause it, which is overwhelmingly industrial actors. AI is not useful enough to continue the slide into burning more fossil fuels than ever. If it spurs more green energy, good. If it's the old "well this is the way things are now", that's really not good enough.
We will have better batteries thanks to ML material research.
We will be able to calculate and optimize everything related to flow, like wind.
The last thing we need to optimize is compute, and compute is where the money is anyway. One of the first industries going green is datacenters. Google, for example, is going green 24/7 (so not just buying solar power, but pulling green energy from the grid 24/7 through geothermal and other sources).
Big AI/ML datacenters are crucial for all the illnesses we have which no one cares enough to solve. For example, I have one of these, and we need data to make a therapy for it, and I'm not alone.
How many battery breakthroughs did we have before AI? They rarely lead to new batteries.
>AI/ML big datacenters are crucial for all the illneses we have which no one cares enough to solve.
Too bad that companies like OpenAI and MS buy most of the hardware for their data centers to write summaries of articles and emails and to create pictures.
And even if they find a cure, doesn't mean it will be available for people in need, not without a hefty fee.
Just look at the profit margin of insulin.
ML on X-ray images is a comparatively simple technology which is already partially better than human X-ray experts. Built-in diagnostics or cheap online services are not far away. And yes, they will reach poorer people than before. It will also allow a lot more people to get a better diagnosis.
My sister has a type of blood cancer; she would have been dead by now if research hadn't found a solution 13 years ago.
And no, MS, OpenAI, and Google are not just using their DCs to write summaries. They use them to do research. A LOT, actually.
And take a look at Google I/O and the research papers; plenty of medical papers come from those big companies.
AlphaFold 2? Changed a lot too.
>Google’s Emissions Shot Up 48% Over Five Years Due to AI
Driving to the cinema to watch a movie produces more CO2 than watching one movie online, but online makes it more convenient, so you watch more. That sums up to more CO2 emissions.
The point is that higher efficiency is worthless in terms of CO2 emissions if it leads to higher usage that compensates for the savings.
If a programmer can program faster with AI, that's good if he only needs 1 hour instead of 8, but if he still programs 8 hours a day, AI's energy consumption just comes on top of his previous consumption.
Climate change doesn't care how efficiently you produce CO2; more is simply more.
For everything else, there are already plenty of energy-saving mechanisms built into CPUs, mainboards, disks, etc. A datacenter doesn't run at 100% power just because the load is reduced.
The normal miner doesn't go to those bitcoin conferences, they buy asics, put them in some warehouses around the world and make money.
And if the online bank wasn't sending a bunch of requests to a bunch of third party ad networks on every click, it would save even more.
My perspective is not limited. Just because people live in a city center, doesn't mean that most people do. Open Google Maps and take a look.
But this is a complex calculus and - frankly - feels like a distraction from the issue. I don't want to get into the weeds of calculating micro-emissions of daily activities, I want climate responsibility and reduction in energy consumption across the board.
We need AI/ML to get there faster and to help more people around us. For weather simulations alone, but also for medicine, material research for batteries, etc.
Flame me all you want, but this is one case where Bitcoin is much more useful than LLM. If it doesn't create value, as its naysayers claim, at least it allows exchanging value. LLMs on the other hand, burn electricity to actively destroy the Internet's value, for the profit of inept and greedy drones.
That's why I created EtherGPT, an LLM Chat agent that runs decentralized in the Ether blockchain, on smart contracts only, to make sure that value is created and rewards directly the people and not big companies.
By providing it just a fraction, a bit north of 10%, of the current fusion reactions occurring in our sun, and giving it a decade or two of processing time and sync, you can ask it simple questions like "what do dogs do when you're not around" and it will come up with helpful answers like "they go to work in an office" or funny ones like "you should park your car in direct sunlight so that your dog can recharge its phone using solar panels".
I've seen things that are wildly hobbled, and wildly inaccurate. I've seen endless companies running around, trying to improve on things. I've seen people looking in wonder at LLMs making mistakes 2 year olds don't.
Most LLM usage seems to be in two categories. Replace people's jobs with wildly inaccurate and massively broken output, or trick people into doing things.
I'd have to say Bitcoin is far more useful than LLMs. You have to add the pluses, and subtract the minuses, and in that view, LLMs are -1 billion, and bitcoin is maybe a 1 or 2.
Bitcoin is only negative. It consumes terawatt-hours of energy for nothing.
That field has made a leap forward with LLMs.
Positive impact on society includes automated extraction in healthcare pipelines.
And why is this better than employing a human, or reducing complexity? It's not as if human wages are what causes hyper-expensive US healthcare costs.
This seems like a negative.
At some point we need to be optimistic and look for incremental progress.
What I mean by structured is: invoices, documents containing tables, etc.
Extracting useful data from fully unstructured content is very hard IMO and potentially above the capacity of LLMs (depending on your definition of "useful" and "unstructured")
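To make the structured/unstructured distinction concrete: an invoice with a predictable layout can often be parsed with plain regexes, no LLM required. The field names and layout below are invented for illustration; real invoices vary wildly, which is exactly where the LLM-based extraction pipelines come in.

```python
import re

# Minimal sketch of the "structured" end of the spectrum: pulling fields
# out of an invoice whose layout is known in advance.

INVOICE = """\
Invoice No: 2024-0117
Date: 2024-06-01
Total: EUR 1,234.56
"""

def extract_fields(text: str) -> dict:
    """Extract invoice number, currency, and total from a known layout."""
    fields = {}
    m = re.search(r"Invoice No:\s*(\S+)", text)
    if m:
        fields["invoice_no"] = m.group(1)
    m = re.search(r"Total:\s*([A-Z]{3})\s*([\d,]+\.\d{2})", text)
    if m:
        fields["currency"] = m.group(1)
        fields["total"] = float(m.group(2).replace(",", ""))
    return fields

print(extract_fields(INVOICE))
# {'invoice_no': '2024-0117', 'currency': 'EUR', 'total': 1234.56}
```

The regex approach breaks the moment a vendor changes their template; the LLM approach trades that brittleness for cost and occasional hallucination. Which is why machine-readable formats in the first place would make both unnecessary.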
Why are firms sending around invoices and tables instead of parseable data? Oh, I know the argument: because it's "so hard to cooperate" on standards, etc.
Madness.
What?! Whole industries have been changed already due to products based on them. I don't think there's a single developer who is not using AI to get help while coding, and if you aren't, sorry but you're just missing out, it's not perfect but it doesn't need to be. It just needs to be better than StackOverflow and googling around for the docs or how to do things and ending up in dubious sites, and it absolutely is.
My wife is a researcher and has to read LOTS of papers. Letting AI summarize it has made her enormously more efficient at filtering out what she needs to go into more detail.
Generating relevant images for blog posts is now so easy to do (you may not like it, but as an author who used to use irrelevant photos before instead, I love it when you use it tastefully).
Seriously, I can't even believe someone in 2024 can say there has not been useful applications of LLMs (almost all AI now is based on LLMs as far as I know) with a straight face.
You are in a bubble.
> It just needs to be better than StackOverflow and googling around for the docs or how to do things and ending up in dubious sites, and it absolutely is.
Subjectively. Not absolutely.
It's banned at my company due to copyright concerns. Company policy at the moment considers it a copyright landmine. It does need to be "perfect" at not being a legal liability at the very least.
And the blog post image thing is not a great point. AI images for blog posts, on the whole, are still quite terrible and immediately recognizable as AI generated slop. I usually click out of articles immediately when I see an AI image at the top, because I expect the rest of the article to be in line: low value, high fluff.
There are useful LLM applications, but for things that play to its strengths. It's effectively a search engine. Using it for search and summarization is useful. Using it to generate code based on code it has read would be useful if it weren't for the copyright liability, and I would argue that if you have that much boilerplate, the answer is better abstractions, libraries, and frameworks, rather than just generating that code stochastically. Imagine if the answer to assembly language being verbose was to just generate all of it rather than creating compiled programming languages.
Bitcoin consumes as much energy as a country and has basically done nothing besides moving money from one group of people to a random other group of people.
And bitcoin is also motivated to find the cheapest energy, independent of any ethical reasoning (taking energy from cheap Chinese hydro and disrupting local energy networks), while AI will get its energy from the richest companies in the world (MS, Google, etc.), which are already working on 24/7 CO2-neutral operation.
[0] https://www.jhuapl.edu/news/news-releases/230503-ai-discover...
It's continuing to widen the wealth gap as it is.
We house, heat and give access to knowledge to a lot more people than ever before.
Cheap medical procedures through AI will help us all. The AI which will be able to analyse the x-ray picture from some 3th world country? It only needs a basic x-ray machine and some internet. The AI will be able to tell you what you have.
I'm also convinced that if AGI happens in the next 10 years, it will affect so many people that our society will have to discuss capitalism's future.
For example AlphaFold: protein folding. DeepMind's models are now also used in fusion reactor plasma control.
May I recommend reading Derek Lowe's "In The Pipeline" blog for a realistic discussion of the actual impact of Alphafold? [0]
And seeing as we don't have viable fusion yet, saying it "solved" it is really reaching. I'm sure it's helping, but solved? No.
[0]: https://www.science.org/topic/blog-category/ai-and-machine-l...
LLMs deliver value. Right here today, to countless people across countless jobs. Sure, some of that is marketing, but that's not the LLM's fault - marketing is what it always has been; it's just people waking up from their Stockholm syndrome. You've always been screwed over by marketers, and the Internet has already been destroyed by adtech. Adding AI into the mix doesn't change anything, except maybe that some of the jobs in this space will go away, which for once I say - good riddance. There are more honest forms of gainful employment.
LLMs, for all their costs, don't burn energy superlinearly. More importantly, for LLMs, just like for fiat money and about everything else other than crypto, burning electricity is a cost, upkeep, that is being aggressively minimized. More efficient LLMs benefit everyone involved. More efficient crypto just stops working, because inefficient waste is fundamental to crypto's mathematical guarantees.
Anyway, comparing crypto and LLMs is dumb. The only connection is that they both eat GPUs and their novelty periods were close together in time. But they're fundamentally different, and the hypes surrounding them are fundamentally different too. I'd say that "AI hype" is more like the dot-com bubble: sure, lots of grifters lost their money, but who cares. Technology was good; the bubble cleared out nonsense and grift around it.
Value is a subjective concept. One could argue that its value is that arbitrary quantities of it cannot be created by diktat.
> - to be able to allow exchanging value, it fundamentally requires ever increasing waste, as the waste is what gives its mathematical guarantees.
One could argue that it takes a lot worse to maintain any currency such as USD as a currency. Full force of government law enforcement will be unleashed on you if you decide to have your own currency. There is a lot of "wastage" that goes to safeguard currency creation and storage and to prevent counterfeiting.
I do not hold BTC. Nor do I trade it. But to discuss as if other currencies have no cost is not rational.
Yes. But the point I'm making is, none of that benefits from waste. The waste is something everyone wants to reduce. With Bitcoin, the trend is uniquely the opposite, because the crypto system is secured through aggregate waste being way larger than any actor or group can afford.
Bitcoin doesn't yet solve any problem that is fundamental to our society and a fiat system, like the trust issue:
If I exchange 1 bitcoin with you for any service or thing outside of the blockchain, I still need the whole "proof of stake" protection of our normal existing money infrastructure: lawyers, contracts, etc.
And no, smart contracts do not solve this issue.
What is left? A small number of transactions per day with high fees, "but" on decentralized infrastructure run by people we don't know, probably aggregated in data centers owned by big companies.
Compare the energy spent on the global hash rate to all the energy spent on mining metals, physical banking, financial-services middle persons, etc., if you want to talk about energy usage and make any kind of sense.
What do you do when you want to exchange 1 bitcoin for 1 car and the person with the car doesn't hand it over after the "absolute fairness/security" of transferring bitcoin to their wallet? You go back to our "Proof of Stake" system: you talk to a lawyer, you expect the police to help you.
The smallest issue in our society is just transferring money from left to right. This is not a hard problem. And please don't tell me how much easier it is to send a few bitcoins to Africa. Most people don't do this, and yes, Western Union exists.
Or try to recover your bitcoins. A friend has 100k in bitcoin and just doesn't know the password anymore.
What do you do when someone breaks into your home and forces you to give them your bitcoin key? Yes, exactly: anonymous movement of money from you to them. Untraceable. Wow, what a great thing to have!
And no, Satoshi "himself" is not an expert on the global economy. He just invented Bitcoin, and you can clearly see how flawed it is.
You're ending up with the entire rest of civilisation on the other side of that comparison:
* Bitcoin, 0.5% of all energy use: 7 transactions per second total worldwide
* THE ENTIRE REST OF CIVILISATION AND EVERYONE IN IT AND EVERYTHING THEY DO, 199x the energy use, really quite a lot more than 1,393 transactions per second worldwide, and all the other stuff civilisation does too
What an amazing comparison for you to suggest.
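For what it's worth, the arithmetic that comparison implies (using the thread's own figures, which I haven't verified) pencils out like this:

```python
# Back-of-envelope check of the figures above (they come from the comment,
# not from a verified source).
btc_energy_share = 0.005           # Bitcoin's claimed share of world energy use
btc_tps = 7                        # claimed on-chain transactions per second

rest_share = 1 - btc_energy_share  # everything else: 0.995
energy_ratio = rest_share / btc_energy_share   # about 199x more energy

# If the rest of civilisation were only as transaction-efficient per joule
# as Bitcoin, it would manage roughly:
breakeven_tps = btc_tps * energy_ratio         # about 1,393 tps

print(round(energy_ratio), round(breakeven_tps))  # 199 1393
```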
This is pure, complacent nonsense. "We have always been surrounded with spam, 10x more won't change anything."
Yeah, why improve the status quo? Why improve the world? Why recycle when there's a big patch of plastic in the ocean?
It's an argument based on a nonsensical, cynical if not greedy position. "Everyone pollutes, so a little more pollution won't be noticed."
* the VCs are often literally the same guys pivoting
* the promoters are often literally the same guys pivoting
* AI's excuses for the ghastly electricity consumption are often literally bitcoin excuses
I think that's an excellent start on the comparison being valid.
Like, I've covered crypto skeptically for years and I was struck by just how similar the things to be said about the AI grifters were, and my readers have concurred.
For one thing, it seems to be coming true.
To a farm upstate?
if models handle my day to day minutia so I have more time, why the hell not...
(I know this is very optimistic POV and not realistic but still)
You're trying to take the time and attention of as many people as possible, without regard for whether or not they'll benefit.
One safeguard people have is knowing that it costs the sender something to contact them. In this case, the sender's time and attention. LLM spam aims to foil that safeguard, intentionally.
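One mechanical version of that cost-to-send safeguard is a Hashcash-style proof-of-work stamp. A minimal sketch, purely illustrative (no mail provider enforces exactly this today, and the function names are made up):

```python
import hashlib
from itertools import count

def mint_stamp(recipient: str, bits: int = 16) -> str:
    """Search for a nonce so sha256(recipient:nonce) starts with `bits` zero bits."""
    target = 1 << (256 - bits)
    for nonce in count():
        stamp = f"{recipient}:{nonce}"
        if int.from_bytes(hashlib.sha256(stamp.encode()).digest(), "big") < target:
            return stamp

def verify_stamp(stamp: str, recipient: str, bits: int = 16) -> bool:
    """Cheap to check, expensive to mint: that asymmetry is the anti-spam point."""
    target = 1 << (256 - bits)
    return (stamp.startswith(recipient + ":")
            and int.from_bytes(hashlib.sha256(stamp.encode()).digest(), "big") < target)

stamp = mint_stamp("alice@example.com")
print(verify_stamp(stamp, "alice@example.com"))  # True
```

Verification is one hash; minting averages tens of thousands. Negligible for a person writing one email, ruinous for someone sending a thousand a day per recipient.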
The author sounds unfamiliar with this brand of marketing email, so I can see why it would come off as disquieting to find it's all AI - but it's equally annoying from a human.
At least with AI sending this crap nobody can use these emails to justify their sales bonus.
Designing the content of spam e-mails sounds like a small aspect of the "job".
If AI spams start fooling people more reliably, that's not something to celebrate.
This blogger thought, at first, that it came from an actual reader. I can't remember the last time I thought that a spam was genuine, even for a moment. Sometimes the subject lines are attention-getting, but by the time you see any of the body, you know.
Sure, AI spam can severely disrupt people's attention by competing with "real" people more competently. But people will not have twice the attention. We will simply shut down our channels when the amount of real-person-level AI spam goes to infinity, because there is no other option. Nobody will be fooled, very quickly, because being fooled would require superhuman attention.
Granted, that does not seem super fun either.
We're talking about a group of people whose core skill is convincing people to pay for stuff that isn't worth it. You and I may know they're worthless, but that doesn't mean they're not getting paid.
Now, imagine you got messages from what appears to be not 100 but, oh I don't know, 1 000 000 000 000 000 of the very best moms that have ever existed.
And they all do love you so very much. And they let you know by writing these most beautifully touching text messages. And they all want to meet up on Friday.
What is going to happen next? Here is what is not going to happen: You are not going to consider meeting any of them Friday, any week. You will, after the shortest of whiles, shut down to this signal. Because it's not actually a signal anymore. The noise floor has gone up and the most beautifully crafted, most personalized text messages of all time are just noise now.
So once someone's mom passes away, you can't really fool them with one message, or dozens, from other moms anyway.
"Noise" in context doesn't mean random characters, it means garbage or spam or content not worth your while.
Yes, it could be that for you a given advert is irrelevant or not worth your while, but the point he was making is that it won't even be worth it for the advertiser to put out the advertisement because it will be noise for everyone.
However, there is only one kind of noise that is noise for everyone: literal noise.
So long as the spam is about something, it is relevant to someone, and therefore it does not necessarily have zero ROI.
EDIT: The only kind of noise that has no semantic is actual "mathematically pure noise" as the person below commented (/u/dang banned my account so I can't reply)
I feel like you're a bit too literal here. When people talk about noise it doesn't mean mathematically pure noise. A signal-to-noise ratio close to 1 is also colloquially called noise.
Consider that we have fairly decent anti-spam measures which do not look at the body of a message. To these methods, it is irrelevant how cleverly crafted the text is.
I reject something like 80% of all spam by the simple fact the hosts which try to deliver it do not have reverse DNS. Works like magic.
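A sketch of that policy in Python, with the resolver injectable so it can be tested without touching the network (the function names are mine, not any real MTA's API):

```python
import socket

def has_reverse_dns(ip: str, resolver=socket.gethostbyaddr) -> bool:
    """True if the connecting host's IP resolves to a PTR record.

    `resolver` is injectable so the policy can be tested offline."""
    try:
        hostname, _aliases, _addrs = resolver(ip)
        return bool(hostname)
    except (socket.herror, socket.gaierror, OSError):
        return False

def accept_connection(ip: str, resolver=socket.gethostbyaddr) -> bool:
    # The commenter's policy: no PTR record, no mail.
    return has_reverse_dns(ip, resolver)
```

Postfix exposes roughly this rule as `reject_unknown_reverse_client_hostname`, for anyone who wants the production version instead of a sketch.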
E-mail is reputation based. Once your IP address is identified by a reputation service as being a source of spam, subscribers of the service just block your address. (Or more: your entire IP block, if you're a persistent source of spam, and the ISP doesn't cooperate in shutting you down.)
To defeat reputation based services driven by reporting, your spams have to be so clever that they fool almost everyone, so that nobody reports you. That seems impractical.
How AI spammers could advance in the war might be to create large numbers of plausible accounts on a mass e-mail provider like g-mail. It's impractical to block g-mail. If the accounts behave like unique individuals that each target small numbers of users with individually crafted content (i.e. none of these fake identities is a high volume source), that seems like a challenge to detect.
Ergo slop and semantic noise.
Companies that used adverts which weren't noise went out of business long ago.
Just because you and the others don't understand what point I'm making doesn't mean the conversation is "logjammed". I am still discussing the overall point, you just don't see it.
But it breaks down when everyone copies what that one person or one company is doing. Software makes the copying process dead easy.
Once the herd starts stampeding, it creates a secondary effect: an arms race for the finite attention of a finite target audience. That assault on and drainage of the finite attention pool happens faster and faster, and everyone gets locked in trying to outspend the other guy.
A current example is presidential campaigns furiously trying to out-fundraise each other. It's going to top 15-17 billion this year. All the campaign managers, marketers, and advertisers make bank. And we know what quality of product the people end up with. Because why produce a high-quality product when you can generate demand via attention capture?
The chimp troupe is dumb as heck as a collective intelligence.
[1]: https://www.wisp.blog/blog/how-i-use-ai-agents-to-send-1000-...
He misrepresented himself as a big fan of all these blogs, someone who'd read their posts, etc., and that's how he achieved such a high response rate. In effect he deceived people into trusting him enough to spend their time on a response.
Now ordinarily this would be a little "white lie" and probably not a huge deal, but when you multiply it by telling it 1,000 times it becomes a more serious issue.
This is already an issue in email marketing. The gold standard of course is emailing people who are double opted in and only telling the truth, and if AI is used to help create that sort of email I don't really have a problem. There is basically a spectrum where the farther away you get from that the progressively more illegal/immoral your campaigns become. By the time you are shooting lies into thousands of inboxes for commercial purposes... you are the bad guy.
Sorry to say but the real issue here is Kurt has crossed an ethical line in promoting his startup. He did the wrong thing and he could have done it pretty effectively with conventional email tools too.
These early days is ripe to make some quick cash before it all comes crashing down.
I'm skeptical: It's easier to create bullshit than to analyze and refute it, and that should remain true even with an LLM in each respective pipeline.
----
P.S.: From the random free-association neuron, an adapted Harry Potter quote:
> Fudge continued, “Remove the moderation LLMs? I’d be kicked out of office! Half of us only feel safe in our beds at night because we know the AI are standing guard for misinformation on AzkabanTube!”
> “The rest of us sleep less soundly knowing you have put Lord Bullshittermort’s most dangerous channels in the care of systems that will serve him the instant he makes the correct prompts! They will not remain loyal to you when he can offer them much more scope for their training and outputs! With the LLMs and his old supporters behind him, you’ll find it hard to stop him!”
> ...
> At least with AI sending this crap nobody can use these emails to justify their sales bonus.
What weird, misplaced animus. You're happy some salesguy got fired, while his boss sends even more spam and possibly makes even more money due to automation?
Those hack marketers rate-limited this kind of spamming. Now things are about to get worse.
Wouldn't the exact argument apply to that boss as well?
In classic HN style the original reply lacks empathy, and demonstrates a preference of machines over humans. Life goes on...
That stereotype definitely rings true. Thank you for helping me put my finger on it!
Mark as SPAM or Block/Filter or Ignore.
Actually that’s already been completed, and will be released to hackernews in the coming days
Just ignore and move on.
I maintain Inbox Zero, much of the time, and seldom have more than three or four emails in my client at any time.
I get there by being absolutely brutal about tossing emails.
I probably toss a couple of legit ones, from time to time, but I do have rules set up for the companies and people I need to hear from.
The thing that will be annoying is when AI can mimic these. Right now, that stuff is generally fairly clumsy, but some of the handcrafted phishing emails that I get are fairly impressive. I expect them to improve.
A lot of folks are gonna get cheated.
I do think that some of these Chinese gangs are going to create AI “pig butchering” operations, so it will likely reduce their need to traffic slaves.
John Oliver actually did a great segment on it, but I won’t link it, because a lot of folks don’t like him.
If AI takes off for this stuff, the gangs are less likely to be kidnapping these poor schlubs.
So … I guess this would be a … positive outcome?
Not sure if AI zealots will be touting it, though.
Wish SV would stop thinking anything that makes money is great, no matter the crap it inflicts on people. Guess I'm asking for way too much.
If the people they employ today suddenly became twice as productive, the company wouldn’t fire half of them - they just would enjoy twice the profit. The same applies to AI.
Small acts of malice are still acts of malice. Not everyone wants to live in a caveat emptor, dog-eats-dog society.
This is like the irrational hate some developers have for recruiters, despite them finding jobs for many people that they otherwise would never have known about.
1. covertly (why do you need to do it covertly? Would people mind if they knew? Doesn't that indicate you're doing them a disservice?)
2. overtly, against people's will. (Again, doesn't that indicate you're doing them a disservice?)
3. overtly, with their consent (express or assumed). How often have you seen this happen?
The "indicates" vs "shows" distinction above deals with the edge case of "interacting with covert/unwanted marketing is actually good for them, even if they don't know it". I dare you to make that argument...
Consider that without thieves we also wouldn't live in the world we live today. That should not be read supporting theft, only an acknowledgement that it exists and that we have designed our lived environment in response.
It's not like that. As a business owner, be honest with us and yourself: just how much of sales and marketing you did was just bullshit? Exaggerated claims bordering on lies? Manipulative patterns? Inducing demand?
Approximately all marketing is that. It is that because it works, and those who refuse to do it get outcompeted by those who don't. Doesn't mean the world should be like that, or that I'd like to be subjected to it.
I also question the "we wouldn't live in the world we lived today" bit. In a competitive environment, marketing is a zero-sum game[0]: there's only so many people around, with so much money and time available; most of the marketing spend ends up being used to cancel out the efforts of the competition, and that race can consume all surplus of a company. Red Queen's race and all.
--
[0] Or negative-sum, if you account for externalities.
That's exactly the reason why we hate them.
Are we supposed to silently suffer because capitalism says so?
Spammers and salespeople are pretty much on the same level as criminals in my book. Heck, whenever someone calls me for some sort of unsolicited survey or similar, I think "these people have such low standards, they would also sell heroin on the street if they had any source."
Having to delete the occasional marketing or sales email that gets past your spam filter is hardly any of these. Annoying or frustrating, yes. Suffering? Really?
“One can dream.”
You’ve either used these sarcastically, or accurately. I think you’ve done the former, but the truth is the latter.
The crazy part is that book was released in 1994! Iirc Greg Egan isn't a big fan of modern "AI", wishing instead for a more axiom-based system rather than a predict-the-next-token model. But in any case, I was re-reading it recently and shocked at how closely that plot point was aligning with the way things are actually shaping up in the world.
The timeframe for this happening in the book was 2050 btw
Anyone who has tried to set up a new email domain will tell you it's quite a serious task. Email spammers are constantly on the run, setting up new domains and changing up their content to evade spam filters. It's very time-consuming, hard, and unpredictable. It's time for social media to close the gap with email and make spamming just as hard.
I postulate that if we applied similar techniques to social media, after a couple of years online discourse would improve. Or we won't do this, and the death of the open internet will continue.
Still not close to 100%, but when I feel like I am, I will set up a filter and an automated message telling people that removing the plus tag from my email address is forbidden and that I will not read their message if they do.
You will tell me where you found me, or I won't even listen to you. Because in the future, with an even larger infestation of automated agents passing off as human, that's the bare minimum I need to do.
Still, a smart enough system might be able to discover a valid email from my other ID info, like my name. But that starts to be a lot of work, while just `s/+[^@]*@/@/` is easy enough to do.
I was recently thinking about this Ozempic fad and how it will lead to no one being overweight, just everyone being dependent on Ozempic... until the food producers that made everyone fat in the first place with their processed junk produce Ozempic-resistant foods... and then we are really in a world of hurt.
> What do they see when they look into a mirror?
A person deserving of riches, that is about to get them. Nobody sees themselves as the villain. Well, maybe some, but vanishingly few.
https://podcasts.apple.com/us/podcast/this-is-how-the-food-i...
Title: "This Is How the Food Industry Is Preparing For a Post-Ozempic World"
For interest's sake: users of Unspam with a title of CEO on their LinkedIn see about ~10% of all mail making it into their inbox categorised as spam (leadgen, recruitment, or software-dev services).
I wish your landing page had a simple "how it works" explanation with a screen shot or diagram, rather than forcing me to sign in directly, and also allowing the app to read *and* send emails. Also, I don't see any pricing?
Finally, signing up, I got an error:
Error 1101 Ray ID: 89d4e0957c2f5a44 • 2024-07-03 06:39:15 UTC - Worker threw exception
Where in the process did that error occur for you?
I see in the logs that an error registered, but unfortunately no detail attached. I've beefed up the logging a bit in the onboarding journey on my side to see what could be breaking here if we try again.
Mind trying to log-in/sign up again? You can use "HACKERNEWS" as a promo code, which would make the first month free.
Thanks for removing the permissions in Google, as that's also key in this debugging.
Mind if I send you an email to debug further there?
tl;dr: Ran into issues because the DB was expecting a profile picture URL from Google auth (string) or NULL, but JavaScript being JavaScript tried to insert "undefined".
Would you seriously enable it even if Gmail offered it?
Highly unclear.
All without the writer needing to be involved in reading the cold outreach.
Will our AI overlords create perfect androids to fool us into thinking we're interacting with a human when it's just LLMs disguised as people? Are we ourselves delusional because we're actually already LLMbots so advanced that we can't distinguish thought and running inference? Why do we have only 12 fingers?
If the dead internet theory isn't already true, it is going to be soon.
Such "personalized" cold outreach is seen as the next holy grail by marketers and will be a common sight on LinkedIn, Twitter, Email etc, soon.
There will likely be rewards at first. An uptick in response rates as most of the market won't recognize emails are AI generated. But because it's trivial to send AI personalized emails at massive scale, your email inbox will become entirely useless.
10 signups / 970 emails sent
information coming over unqualified electronic channels is not trustworthy anymore
Cold outreach is dead and word-of-mouth is the most effective marketing method
There is way too much corporate worship despite the platform's users generally priding themselves on being enlightened and smarter than the rest.
Anyway, I assume that the reason they are dismantling the skills system (and their verification quizzes) and moving things into personal "projects" is because it was too easy for marketers to skip the LinkedIn tools if it remained the way it was. Now, however, with Microsoft's own LLMs trundling through our data, they're going to maintain their monopoly on easy access to professionals that meet certain requirements.
I guess it could also be because those skill quizzes had their answers readily available all over the interwebs.
Now, in the post-LLM age, it doesn't sound like a joke anymore.
I’d prefer sales people keep their jobs. Having had the misfortune of being seated next to the telemarketing team in an investment bank for half a year… however… Let’s just say that I’m not sure you would even know if it was a person or a bot. They’re not even scripted or “trained” like your average telemarketer because our target audience is actually somewhat interested in what we sell, but listening to them repeat themselves over and over from their own “personal scripts”… well… they are already bots man.
It may already not be the case. I'll probably never know.
The future of human communication is to be cunts online, so that we can identify ourselves. That will at least separate us from the script kiddies. Major marketers will train their AIs without any filter, so we're fucked anyway.
Now this work will be outsourced to AI with even greater efficiency.
I think the real dead internet today, and a bit deceptive (not sure of a better word), is online games where you're playing against bots, or where everyone assumes they are playing against bots. One of the aspects of the early days of the internet that was really cool and is now arguably not real anymore.
You could refine this in further iterations by also adding examples based on previous correct/incorrect interest predictions, thereby effectively reducing the amount of spam / making cold outreach suck less.
There are different ways to use AI to achieve the same goals, some more responsible than others.
- The people who receive the cold email are (increasingly) more likely to be at least somewhat interested
- A human really wrote personalized emails, instead of trying to trick people into believing that
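A sketch of that feedback loop: building a few-shot prompt from previously labeled outcomes. The prompt format and example profiles are made up for illustration; the original post doesn't specify any of this.

```python
def build_interest_prompt(candidate: str, history: list[tuple[str, bool]]) -> str:
    """Few-shot prompt: past outreach targets labeled by whether they actually
    turned out to be interested. The wording is illustrative only."""
    lines = ["Predict whether this person is interested. Answer yes or no.", ""]
    for profile, was_interested in history:
        lines.append(f"Profile: {profile}")
        lines.append(f"Interested: {'yes' if was_interested else 'no'}")
        lines.append("")
    lines.append(f"Profile: {candidate}")
    lines.append("Interested:")
    return "\n".join(lines)

prompt = build_interest_prompt(
    "indie blogger writing about self-hosted CMS setups",
    [("growth marketer posting about cold email tools", False),
     ("developer blogging on a hand-rolled static site", True)],
)
```

Feed the prompt to whatever model you use and only email the "yes" cases; each real response or non-response becomes a new labeled example for the next round.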
I actually made an internal company April Fools website once. Too bad I never kept a copy, but here goes.
It's called Proxy Ai. It reads your emails so you don't have to. It reads every posts on social media so you don't FOMO. It communicates with those chatty colleagues so you don't have to. Proxy Ai... So you don't have to.
"That actually sounds like a pretty good product. Does it send you a summary of the conversations, emails and social media posts?"
"No"
Quoting from Dirk Gently's Holistic Detective Agency (Douglas Adams):
> The Electric Monk was a labor-saving device, like a dishwasher or a video recorder. Dishwashers washed tedious dishes for you, thus saving you the bother of washing them yourself, video recorders watched tedious television for you, thus saving you the bother of looking at it yourself; Electric Monks believed things for you, thus saving you what was becoming an increasingly onerous task, that of believing all the things the world expected you to believe.
It said:
Hi -
Just a note to say I'm a big fan of your writing. I always learn something and love your voice, which is hilarious and singular.
Write a book!
Best,
{Name}
{Link to sender's startup}
{Link to sender's substack}
New to writing online, it made me feel really good that someone enjoyed what I wrote and took the time to write and say so.
After reading this piece, though, I went back and read it again, and I just don't know. It's not quite GPT's usual voice, but it is strangely non-specific.
The startup is an AI startup, the person's Substack is full of generative AI illustrations, and they do seem like an AI fan, but reading their posts, they also seem like someone who's genuinely interested in preventing a dystopia.
I suppose receiving encouraging emails from strangers is just another situation that'll have us looking over our shoulders now, on guard, trying to walk the line between naivety and paranoia.
The compliment is a "foot in the door" so you don't immediately dismiss the email, and keep reading until the links.
I get the same type of comments on all my blog posts. Here's 2 examples directly from my blog:
"Awesome post! Keep up the great work! " (+ a link to their SEO service)
"Nice website, love the theme! Can I use it?" (+ a link to their WP service).
If I ever receive spam addressed to foobar.com@mydomain.com that is unrelated to your service, I know you leaked or abused my data. Result: you will get a DSGVO complaint, and I will filter all emails addressed to that address out of my inbox.
The good thing about using a catchall email address is that I don't have to create a mailbox for each service/purpose; I can just make email addresses up as I go. All you need for that is your own domain and a mailserver that supports it.
Has this ever resulted in significant penalties for those companies? I used to do this but I gave up as it never seemed to achieve anything.
https://gmail.googleblog.com/2008/03/2-hidden-ways-to-get-mo...
https://learn.microsoft.com/en-us/exchange/recipients-in-exc...
Not trying to tell you to stop though, this is definitely a good idea, when it works.
For example, using an HMAC of the domain. So you generate foobar.com-sr32j4@mydomain.com; it's impossible to generate the sr32j4 part without knowing your secret key, and your mail server checks that sr32j4 is correct before accepting the mail.
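A sketch of that scheme with Python's standard `hmac` module (the secret, tag length, and domains are placeholders):

```python
import hmac, hashlib

SECRET = b"my-private-key"  # known only to you and your mail server

def tagged_address(sender_domain: str, mydomain: str = "mydomain.com") -> str:
    """Derive e.g. foobar.com-a1b2c3@mydomain.com; the tag is an HMAC prefix."""
    tag = hmac.new(SECRET, sender_domain.encode(), hashlib.sha256).hexdigest()[:6]
    return f"{sender_domain}-{tag}@{mydomain}"

def is_valid(address: str, mydomain: str = "mydomain.com") -> bool:
    """Server-side check: recompute the tag and compare before accepting mail."""
    local, _, domain = address.partition("@")
    if domain != mydomain or "-" not in local:
        return False
    sender_domain, _, tag = local.rpartition("-")
    expected = hmac.new(SECRET, sender_domain.encode(), hashlib.sha256).hexdigest()[:6]
    return hmac.compare_digest(tag, expected)
```

A six-hex-character tag is only 24 bits, so this stops address synthesis, not a determined attacker who can probe your server; lengthen the tag if that matters to you.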
Edit: Apparently you can also purchase a domain directly through them if you prefer, although you have to be a paying customer for 7 days first https://www.fastmail.com/how-to/email-for-your-domain/
I have never fallen for a spam mail so far (i.e. not once clicked a link like the OP did), but I fully expect this will change soon. Tough times for people who commonly expect mail from random strangers.
I have no need of messages by random strangers
Then one day he just stopped replying, and his email address would bounce. My best guess is it got shut down, for, you know, scamming. Bummed me out though, he was cool, except for the scamming thing.
Even in the workplace it is now common for most people to have a signature saying "only contact me via MS Teams".
I am pretty sure that sooner or later the spam will find its way onto Teams/Slack/Discord the same way it does on WhatsApp, but at the very least those are easier to block permanently.
Wow, that's some extrapolating from a personal bubble if I've ever seen one. Plenty of workplaces still have email as their default communication method.
- semi-automated reminders ("you haven't filled in your timesheets!"), usually sent by humans but not expecting answers
- internal newsletters
- general HR news
- special news: electrical issues at the office, stay at home!
- spam
Bottom line: none of it is addressed to you as a particular human, nor requires an answer.
I am sure it changes for people who interact with people outside of the company, but I would hate having their job, and I don't understand why companies haven't adopted XMPP widely for those kinds of interactions. I can theoretically receive spam via XMPP, but it requires at the very least that I approve the relationship beforehand, so if it comes from a domain I don't expect, I have no reason to accept that trust.
But on the personal side, I haven't received anything from a human in years. People I know usually have my phone number and contact me via instant messaging.
>>But on personal side, I haven't received anything from a human for years.
I actually have an old friend back from high school and we talk daily using emails. He doesn't use any IM apps so it kinda stuck as our default way of talking.
And of course I exchange emails whenever there's some kind of customer service thing that needs to be dealt with - it's always best to have things in writing.
I have the feeling contact forms are disappearing everywhere nowadays. Everything is either a chatbot or a chatcall these days.
I hadn't touched marketing for ~5 years, but as I said, I know the org well, so I thought it would take me about a month to get the next 6 months of marketing built and automated. How wrong I was: 7 days later, the full marketing org is running, at a decent scale, on autopilot, for a year, and I don't know if or when I'd need to hire someone into marketing.
Marketing has not fundamentally changed, but it's changed such that one individual could fully operate the fundamentals. Personally I love it, I'm sure others are going nuts.
> Hey Raymond,
Thank you so much for your kind words about my post on revamping my homelab! It’s always a pleasure to hear from someone who appreciates the journey of continuous improvement. Your message truly brightened my day.
Indeed, using Deno Fresh for my blog has been an exciting adventure. The process of managing updates and deployments, while sometimes challenging, has been incredibly rewarding. It’s like tending to a garden, where each update is a new seed planted, and every deployment is a blossom of progress. The satisfaction of seeing everything come together is unparalleled.
Your introduction of Wisp has certainly piqued my interest. A CMS that simplifies content management sounds like a dream come true, especially for someone like me who is always looking for ways to streamline processes and enhance efficiency. The name “Wisp” itself evokes a sense of lightness and ease, which is exactly what one hopes for in a content management system.
I would love to learn more about Wisp and how it could potentially fit into my workflow. The idea of having a tool that can make content management more intuitive and less time-consuming is very appealing. Could you share more details about its features and how it stands out from other CMS options? I’m particularly interested in how it handles updates and deployments, as these are crucial aspects for me.
Thank you again for reaching out and for thinking of me. I’m looking forward to hearing more about Wisp and exploring the possibilities it offers. Let’s continue this conversation and see where it leads!
Best regards, Tim
No one believes the CEO has taken the time to email you with onboarding instructions immediately after signing up anymore. But outreach tactics like this are still quite manipulative.
This person wants me to buy their product, and before they can get a word out about it they’re already lying to me - about the origin, the intent, the faux thoughtfulness.
I want nothing to do with shameless dishonesty. This isn’t the way to sell your product.
Wisp, if you’re reading this, I now have a permanent negative image of your brand.
I wouldn't have figured out this was AI, and might have engaged if the topic had been relevant to me. I would not have engaged with a traditional spam email even if it had been relevant to me, so there's a real incentive to do stuff like this.
I think marketers underestimate that they may turn people off their brand in the long run by these tactics, because people do not like being fooled. And the more sophisticated the scheme the more outraged people are when they find out.
Of course, the answer is to have AI send a response with a CAPTCHA (assuming those still work), before showing the initial email to the recipient.
Knowing the people (mostly marketers) leading the project, I can 100% guarantee that they would call these email shenanigans a great idea and would immediately start (telling someone) to implement it, without taking a step back and thinking it through.
Enormous amounts of email will be generated but no one will ever see it.
---
Hey Travis,
Checked out the Next.js Notion Starter Kit. Amazing project!
Noticed you might be juggling multiple tools to manage content. Ever thought about a headless CMS that can streamline this?
Wisp might be a handy solution. Let me know what you think!
Cheers, Raymond
>> Have you ever received an email that felt so personalized, so tailored to your interests and experiences, that you couldn't help but be intrigued? What if I told you that email wasn't crafted by a human, but by an artificial intelligence (AI) agent?
> I don't really have words for this, but I dislike this.
What a classy understatement. I find the strategy employed by Wisp predictable and infuriating. Like insects or other near-automata, humanity is racing to the bottom with "Generative AI". And I use "AI" in the loosest possible sense here, because once you pull back the curtain, current tech is actually only a slightly better Markov chain.
After using ChatGPT regularly, I find its responses to anything but the most trivial, clueless questions are riddled with errors and "hallucinations". I often don't bother anymore, because it's easier to go to the original sources: Stack Overflow, Reddit, and community forums. Gag. It does still make a good shrink / Eliza replacement.
It isn't responding with answers. It's responding with probable verbiage. An actual "answer" requires a type of interpretation that it doesn't perform.
I like that phrase. Also, how'd you get my password?
> What a classy understatement.
Maybe i should write a blog, simply because i have a lot of words for this... but well, they would neither be classy nor understatements.
Those haven't been in the best shape for the last decade anyway. The benefits of easily accessible compressed knowledge far outweigh the cost, so we're still going up imo.
ChatGPT is perfect for mundane development tasks and language mobility, so quite useful for a significant portion of especially low level developers. I've prompted a bunch of useful little Python scripts myself, without ever bothering to even check the syntax.
I myself tend to name-and-shame regardless of how it may turn out, whether "positive" or "negative," when I feel compelled to be posting online about a thing I have encountered in my personal life. I think that openness and clearly-evident facts are very important parts of supporting the story that I wish to tell. (And if I did not wish to tell the story, then I would not have done so.)
* But a line must be drawn somewhere.
My own line is this: When I encounter a fucking nazi in real life, I make sure to not propagate whatever it is that this fucking nazi has to say, even if I have a story to write about that fucking nazi. (And we rather unfortunately have plenty of these fucking nazis here in Ohio, so I do get opportunities every now and then to exercise this self-restraint.)
And the common consensus in this thread, which I agree with, is that Wisp is obnoxious, insidious, and is an active participant in the degradation of quality of both email, and the internet as a whole.
AI spam emails will definitely happen on a large scale in the future, but on the optimistic side, we'll have AI assistants to read every email. Personalization won't really matter; it only matters if the marketed service is genuinely useful to me. In the future, if I have a need, my AI will find the best solution by considering many options. Previously, this was time-consuming, but now AI ensures we find the most suitable solution.
No fancy marketing will be needed because AI will filter most of them. In the end, marketers will find that the most efficient way to market is to honestly list out your service/product specs, as AI will compare them. On the other hand, for things I'm not sure I need, AI will help judge if they are indeed useful to me, regardless of how fancy the email/call is. If they are, it will facilitate cooperation; if not, it will skip them.
Therefore, marketing may still have a role: to help you discover things you aren't fully aware you need, and AI will help you decide if you really need them.
It boils down to a risk/reward trade-off, but I doubt that someone would so easily send thousands of spam mails and also publicly boast about it.
Otherwise I don't think you can argue any legitimate interest.
https://www.ris.bka.gv.at/NormDokument.wxe?Abfrage=Bundesnor...
Edit: clarified "never"
In reality it's very easy to end up subscribed to newsletters; even my European embassy subscribed me to their event newsletter in Thailand, and of course I never agreed to any of that.
It seems that with the GDPR this is now EU-wide:
The law for that, at least in my country, is very clear: https://www.ris.bka.gv.at/NormDokument.wxe?Abfrage=Bundesnor...
Did he use an LLM to write the blog post too?
>"I knew that if recipients detected even a whiff of a generic, mass-produced message, they'd tune out immediately."
Then don't brag about it on your blog! Sheez.
(Ok, so technically he's not bragging about it on his blog, because it's probably just an LLM bragging about it on his blog for him, but that's the point!)
> Does this mean that I should private my GitHub-mirror to my personal blog, because this can become a common thing?
Abusing public information on GitHub has become more common. The other day, I received some cryptocurrency spam ads via GitHub. It turned out to be a bot injecting ads as issues on other people's repos and randomly @ing accounts. The bot deleted such issues immediately, so the net effect is that I got an unfilterable spam email.
> It felt like a family fridge decorated with printed stock art of children’s drawings.
Yep. "Generative AI" is like an infinite clip-art gallery that can be searched with very specific queries.
The coin has two sides: in some situations it devalues human effort - as in writing (long/detailed) documents in formal language is now attainable by everyone. In situations where sincerity and originality matters, human effort has now increased in value.
Watch out, recruiters, AI can do better than you! Not that I will like these unsolicited outreaches any more; the exact number of times I've found them useful or relevant, back when biorobots wrote, sent, and administered them in just a few minutes, is zero. And I do not look forward to having these at mass scale, when hundreds of AIs could write thousands, flooding my email account and making it absolutely unusable.
Just as most of us ignore calls from unknown numbers, we may also default to ignoring emails from unknown senders in the future. This could lead to a reluctance to send emails, as they might be perceived as "unknown" to the recipient.
Whether they are AI or not, I have no idea, but sometimes, and recently in emails, I purposely make a typo or grammar mistake to add some "human" touch to it, knowing that an AI will always type a perfect one.
How it was written is not relevant. Off to the trash it goes.
> At the same time, we need to establish guidelines around transparency and consent for AI-driven communications at scale. Deception through omission is still deception – people should be aware when they're interacting with an AI agent versus a human.
This is clearly pissing in the pool. I've gotten so much value from people who have made their emails public with a 'if you're curious or learning feel free to email me' (e.g. patio11) and I've long had the invitation in my HN profile too.
Nasty for people to abuse this to extract value for the few weeks/months it takes people to realise what's happening and make themselves harder to contact.
This reminds me of AI-generated fake security vulnerability reports about curl: https://news.ycombinator.com/item?id=38845878
They reached out to me, asking whether my company would be interested in Something Somethingification. I decided that since I don't even understand the term, I'm not the right person, and decided to ignore it.
Then they followed up. Meh.
Then they followed up again, and I thought "okay, a little reward for perseverance", and replied something along the lines of (I don't work there anymore, no access to the original):
"Hey, thank you for reaching out.
Unfortunately, since I don't even know what Something Somethingification is, I am not the right person to talk to. So I'll kindly pass and consider this email human-generated spam. Thanks!"
A response came. Within a minute, barely seconds after "undo send" disappeared.
"Who would be the best person to reach out to, then?
By the way, this is a GPT assisted conversation, so it's a computer generated spam."
WHAAAAT. This really got me. Remember, it was 2021.
"Okay", I replied, "Now you got my interest!
How many such conversations are you able to have at the same time?"
It replied within a minute. It contained a quote from Arthur C. Clarke ("every technology advanced enough is indistinguishable from magic") and his picture. And an answer: "Actually, sourcing contacts is the bottleneck, so we have only a few of these each day. Anyway, do you happen to know who we could reach out to instead?".
I was amazed, I decided I'll reward this with what they want.
I replied how impressive it is again, as the whole conversation made sense, and it gave them a contact to a director that could be the right person. They won this one.
We need to update our spam filtering techniques, fast. Somehow. But how?
It seems like CoPilot/ChatGPT has this all-too-eager tone in the beginning of their responses.
The demo (1) of not Scarlett Johansson telling a blind man what a great job he was doing for managing to flag a taxi sounded so fucking patronizing to my ears. Worse is, the user has a British accent, the Brits probably hate that patroniz^Hsing too. It reminds me of that 4chan green text about a man's flight to the US and how everyone was saying "Great job!"
The most likely outcome will be a digital "verified human" certificate, with two-factor authentication on it. Bad for anonymity, but I don't see many alternatives, and it may actually end up reducing online toxicity.
They will all lose money, time and more with the coming wave of spam and fraud.
Something like a marriage of a digital signature with a captcha: the message has a digital signature of the sender that can be verified with their public key, but it is somehow verifiable that the particular signature provider only does the signature if a human being completes the (difficult AI-proof) captcha.
Something like this approach can at least mitigate the mass AI email problem, although the one-off AI emails are unlikely to be slowed by this approach.
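To make the proposed scheme concrete, here is a toy sketch. Assumptions are loudly labeled: HMAC with a provider-held secret stands in for a real public-key signature (in practice the provider would use something like Ed25519 and publish a public key), and the captcha is reduced to a boolean flag. The point is only the flow: no signing key is issued unless a human solved the captcha, and recipients can verify messages against the provider.

```python
import hashlib
import hmac


class SignatureProvider:
    """Toy 'verified human' signature provider.

    HMAC is a stand-in for a real public-key signature scheme;
    the captcha check is modeled as a boolean.
    """

    def __init__(self):
        self._keys = {}  # sender -> secret key, issued only after a captcha

    def register(self, sender, solved_captcha):
        # A signing key is issued only once a human completes the captcha.
        if not solved_captcha:
            raise PermissionError("captcha not solved; no signing key issued")
        self._keys[sender] = hashlib.sha256(sender.encode()).digest()  # demo key

    def sign(self, sender, message):
        key = self._keys[sender]
        return hmac.new(key, message.encode(), hashlib.sha256).hexdigest()

    def verify(self, sender, message, signature):
        # Recipients check that the message was signed for this exact sender/body.
        if sender not in self._keys:
            return False
        expected = hmac.new(self._keys[sender], message.encode(),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, signature)
```

A sender who never solved a captcha simply has no key, so their mail fails verification; a tampered message body fails too.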
"Hey, love your work. [random flattery] What do you think about mine?"
I've received a few messages like that before LLMs were around, just an annoying self-marketing technique.
It's possible to use a noreply.github.com address linked to your username for making commits. And you can change the authorship of past commits in your own repos with write access.
I try to avoid giving out my email in a public and machine-processable format whenever possible.
The only problem is that they referenced a role at a company I'm no longer at. The, presumably AI, author crafted the email in reference to my former role at a different startup.
After seeing this thread, I decided to follow up on my AI suspicions. Nothing conclusive, but that person is currently touting that they've sold their "course" to "1000+ founders."
No thanks.
Both are unsolicited emails, i.e. spam.
I feel confident that Gmail’s spam filter will be able to handle this quite well.
I’m betting that the introduction of LLMs will not change the fundamentals of spam-fighting.
> Assuming they could solve the problem of the headers, the spam of the future will probably look something like this:
>
> Hey there. Thought you should check out the following:
> http://www.27meg.com/foo
Funny. 20 years later, that's indeed what many spam messages look like.
The key difference here is personalization.
Traditionally, if a message was personalized it fell under 'cold outreach' and users were more likely to interact and play along. Just like what happened with the author (the same applies for everyone).
It's like the difference between receiving a flyer vs being contacted by a sales representative. Even if they advertise the same product, the perception is different and the results are different.
If you mean the difference from a purely technical spam-detection perspective, I'm not familiar with it, but I would love to read more about the subject and the state-of-the-art techniques if anyone has resources to recommend.
Unless you're specifically looking for unsolicited offers, in which case you probably have a process for them, they seem like a waste of time.
Do you only read emails from recognized addresses? No new communication whatsoever unless it's initiated by you?
How do you know they're trying to sell you something without even reading the email?
Your question was "Do you read/answer cold outreaches then? Why?" which doesn't make much sense. For me, and I imagine the same applies for most people:
1. You read until you find a clue that its content is not of interest. Usually the email subject doesn't say much.
2. You only reply if you need.
Cold outreach covers genuine emails: colleagues, new clients, job opportunities, someone reaching out to collaborate, etc. How you deal with it depends on your profile and who you've given your email address to. Personally, I have many email addresses; for some I don't even check my inbox.
Are you confusing "read" with "quickly skim"? :)
b) If you want to read more, feel free to check the link I posted. Paul Graham has thought/written a lot about this. I think one reason people have forgotten about those articles is that today a huge number of us use Gmail, so we don't actually need to think much about how spam filtering is implemented.
But that's inconsistent with the example you put forward. For the email to be interesting, a human would need to research and approach every prospect independently; how many emails a day can they do? 5, 10, 20, 100?
It's simply not possible for a human to generate 100,000 personalized emails by hand. That's the difference.
And AFAIK, Bayesian filtering (by the recipient) doesn't require any knowledge of what other people have received.
No, but with further advances it might easily get cheap enough that spammers think it's worth it.
> Bayesian filtering (by the recipient) doesn’t require any knowledge of what other people has received.
Agreed. However, assuming people don't individually configure those filters (which they currently do not, and scaling this up would be something quite novel), this seems quite gamable.
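For readers unfamiliar with the technique being discussed: a per-recipient Bayesian filter is trained only on one user's own mail, which is why each person's filter ends up tuned differently. A minimal sketch (not any particular mail client's implementation):

```python
import math
from collections import Counter


class NaiveBayesFilter:
    """Per-recipient Bayesian spam filter, trained only on one user's mail."""

    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()
        self.spam_count = 0
        self.ham_count = 0

    def train(self, text, is_spam):
        words = text.lower().split()
        if is_spam:
            self.spam_words.update(words)
            self.spam_count += 1
        else:
            self.ham_words.update(words)
            self.ham_count += 1

    def spam_probability(self, text):
        # Log-space naive Bayes with Laplace smoothing to avoid zero counts.
        total = self.spam_count + self.ham_count
        log_spam = math.log(self.spam_count / total)
        log_ham = math.log(self.ham_count / total)
        spam_total = sum(self.spam_words.values())
        ham_total = sum(self.ham_words.values())
        vocab = len(set(self.spam_words) | set(self.ham_words))
        for w in text.lower().split():
            log_spam += math.log((self.spam_words[w] + 1) / (spam_total + vocab))
            log_ham += math.log((self.ham_words[w] + 1) / (ham_total + vocab))
        # Convert the log-odds back to a probability.
        return 1 / (1 + math.exp(log_ham - log_spam))
```

Because the word counts come from one inbox, two users' filters score the same message differently, which is exactly the property personalized AI spam threatens: a message tailored per recipient can dodge the tokens that recipient's filter has learned.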
https://docs.github.com/en/account-and-profile/setting-up-an...
I'm not after shallow interactions today and I would use it (much like a dumb spam filter) to judge a new sender's respect for my time expecting them to have stated their business with total upfront clarity, not mystery.
Everyone's spam filter is tuned differently from others', so spammers had a hard time beating this with automated messages. About the best they could do was adding random keywords in hopes of triggering someone's positive "not spam" trigger.
Now spammers gain personalisation at scale, so this advantage is at risk.
And also from the About page on the linked website
Even now, we're starting to have a sense for which images and text were AI generated. And they'll evolve to get around the antibodies. And we'll build new ones.
https://github.com/skorokithakis/spamgpt
It was a bit of fun, until I realized that most of the replies from the spammers were AI as well. We were just automatically spamming each other while OpenAI made money.
I stopped using it then.
Serves them right. Unless they're a bot too of course, then you can't waste their time.
Although he got more click-throughs to the top of his funnel, none of them are going to pass through to a conversion because once you reach his site, you realize that he's deceived you.
That he doesn't even realize this is concerning...
The general public doesn’t want or need it. They want to work less and get paid more.
Maybe in future I will have my ”AI secretary” to answer those and have a discussion with the ”AI sales assistant”.
I talked to many people, and all have developed immunity against the cold outreach.
It's a pure numbers game. Even people who think they're immune are one highly-targeted, pain-point-addressing email away from replying.
As noted in the article, you might in the future not even notice you're being AI-spammed. What if "timharek.no" is AI-generated?
What if Wisp CMS being so upfront about its use of AI is part of the trick? It just got exposure on HN, after all!
You definitely should mark this email as spam so this cannot become a common thing.
People sending AI crap to others should have their email accounts banned.
Can't help but wonder if the advent of LLM systems wouldn't be quite so depressing if we weren't already operating in an internet that's been reduced to basically a cesspool of advertising and communication-spam.
One issue I see is that it’s much harder to employ an LLM defensively (for filtering) than offensively.
Welp.
Subject: Your Passion For Homelabbing is Contagious (Spam: 6/10)
Report: Flattery to establish a connection. Quick shift to product promotion. Friendly but lacks personalization. Specific reference promotes their solution. Calls for a response.
So even if buddy-buddy spam becomes pervasive, you really only have to decide how accepting you are of obvious sales tactics in normal comms. It may end up that everyone having more nuanced spam filters forces humans to use those same tactics less in normal comms.
Specifically, a smart filter to remove SPAM in a smarter way.
Most people get a lot of spam from sales agents, SEO services, start-up accelerators, etc...
With GabrielAI you can say stuff like:
"If the email is from a SEO agency or it is trying to sell me SEO service"
Then move it to SPAM.
Similarly for all other types of spam or emails.
You can also move stuff to different labels in Gmail to organise your inbox.
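A crude stand-in for what such a natural-language rule ends up doing (the rules below are hypothetical examples, not GabrielAI's actual implementation; the real product presumably has an LLM interpret the rule rather than matching keywords):

```python
import re

# Hypothetical rules compiled from natural-language instructions like
# "If the email is from an SEO agency ... move it to SPAM."
RULES = [
    (re.compile(r"\bseo\b", re.I), "SPAM"),
    (re.compile(r"start-?up accelerator", re.I), "SPAM"),
    (re.compile(r"\binvoice\b", re.I), "Finance"),
]


def route(email_body, default="Inbox"):
    """Return the Gmail label an email should be moved to."""
    for pattern, label in RULES:
        if pattern.search(email_body):
            return label
    return default
```

The LLM version replaces the regex list with a semantic judgment ("is this trying to sell me SEO services?"), which is what makes it robust to rephrased pitches.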
Spam is spam?
Some people struggle with learning new ways of controlling for scams but it's never going away, just something they must consider more and use better tools to solve.
The "upside" is that nature eventually takes care of things when they go out of equilibrium, so there might be a forest fire on the horizon to restore it. In the case of AI spam, it might cause people to automatically filter their incoming mail from any content that even implicitly tries to sell something, or even any email arriving from an address that is not on their whitelist. This might eventually cause people to need to actually physically meet (gasp!) in order to add each other to their whitelist.
Edit: "Unnecessary" might be my judgement, instead of "acceptable."
> There's also the question of ethical considerations around using AI for mass personalized outreach. While my experiment yielded positive results, with recipients appreciating the personalized touch, there's a potential slippery slope.
Unbelievable... I'm not a philosopher, but in my understanding, being ethical doesn't mean walking the line just finely enough that people don't call you out on your bullshit.
The ethics of an action are a consideration both BEFORE and after executing it, and on the merits of the action itself!
Cold spamming is illegal where I'm at, probably Europe as a whole?
I'd be curious how this plays out in court. Probably something like:
- If you use an AI tool to scrape leads and to generate the content but then still send out individual emails from your Mail provider, it's still a cold email.
- If you use an AI tool and also automate the email delivery, it should be considered spam.
...
2024: AI impersonating Bill Gates sends you SPAM
It’s sad that going forward I probably won’t be able to tell genuine interest from this kind of fake bullshit.
If they don't know my name, they don't even know where they got my email from, so probably spam, however intelligible it looks.
It's the same in the age of spam calls. If it's a mobile phone and the person behind didn't even bother to introduce themselves via SMS/WhatsApp, I don't pick up.
... shall we tell him?
This will make it worse.
Solutions? At least some could involve key exchange. How about a bounty of some sort on spammers?
Dug?
They admit (or actually brag) about it on their company blog "I used AI agents to send out nearly 1,000 personalized emails to developers with public blogs on GitHub."
Do you think they're bluffing?
> This sounds like the average email written by a human
that's the point
Guys, it’s a tool like any other.
Anyways. LLM is a program created by supercomputers to be deceptive.
Also, it took away the aspect of life where people around the world could cold-email each other if their hobbies aligned.
And in general, now the percentage of potential bad actors went from near 0 to near 100.
And for why? .. ..
a) doable
b) the right solution.
(And eventually start producing very weak chips that can run your business and accounting on a TUI.)
Your right to swing your fist stops at my nose.
... what an incredibly odd thing to say.
But really, I've noticed that thought-ending cliches like this one are popping up as defensive reactions around LLMs more and more. This particular thought-ender displays the most common theme - it dismisses all skepticism as being driven by some amorphous "anti-AI" demographic, presumably allowing the author to dismiss any concerns and thereby preventing any critical thought from occurring.
Kind of feels like "nocoiner" and "have fun being poor", v2 ...
As TFA shows, this machine learning is almost indistinguishable from actual intelligence. It might not be sci-fi AI, but it certainly is artificial, and it is indistinguishable from intelligence. AI is a very apt description of what it is.